
Teens & Tech: The Growth of GenAI and Cybersecurity Risks

The Harm AI Can Cause

  • Writer: Mike Klase
  • May 13
  • 3 min read

Updated: May 16

There are many ways AI can be used maliciously. Here are a few examples to be aware of!



ALGORITHMS

GenAI and social media algorithms can sometimes be harmful. A study from the University of Washington found that about 30% of chatbot responses mentioned topics like violence, drugs, or mental health struggles, partly because AI models learn from negative news coverage about teens. That means the responses can feel untrue to what most teens actually experience.


Social media platforms also use AI to track what you click, watch, or even say in private messages. Then they push content—like extreme diets, deepfake videos, or intense mental health topics—that keeps you hooked. This kind of targeted content can increase stress, anxiety, and depression. Since these platforms focus more on profit than well-being, it's important to stay aware and protect your mental health online.



CSAM

AI-generated child sexual abuse material (CSAM) is when AI is used to make fake but realistic-looking explicit images of kids. Even though the pictures aren’t real, they look real enough to cause serious harm.

This kind of content can be used in sextortion, where someone threatens to share the fake images unless you do what they say—like sending money or doing things you don’t want to do.


The scariest part is that predators don’t even need real explicit images anymore. They can generate fake ones from ordinary photos found on social media or school websites.


It's a serious and growing danger, and it shows why protecting your online photos is more important than ever.



GROOMING

AI-driven grooming is when predators use tech to learn about your online activity—like what you post, who you talk to, and what you like. This helps them figure out how to target and get close to you.

They might create fake profiles that match your interests, or send messages that feel personal and trustworthy, all to trick and manipulate you. It’s far more targeted than traditional grooming, which makes it even more important to stay careful online.



LONELINESS

There’s a growing loneliness problem, especially since the Covid-19 pandemic. Mental health issues like depression have risen by almost 30% among young people. Because of this, many teens are turning to AI companions for support. These virtual friends use smart technology to talk like real people and respond to emotions, which can help you feel less alone. They might even boost your mood and confidence, and help with social skills.


These AI companions can be helpful, but they’re not perfect. If you use the ones built into apps like Snapchat or Facebook, they might show you things that aren’t safe or accurate, because AI isn’t heavily regulated yet. It’s super important to protect your privacy and remember that AI is meant to add to your real-life friendships, not replace them. Nothing beats real connection with friends, family, and people who truly understand you.



MISINFORMATION

GenAI has made it super easy to create fake photos, videos, and other content, and more teens are saying they’ve been tricked by it online. Seeing so much false info can make people stop trusting tech companies.


For teens already dealing with mental health struggles, this kind of content can be even more harmful. When people use GenAI to spread lies or propaganda, it can warp how we think, turn people against each other, and even hurt democracy by spreading mistrust and dividing groups that actually have a lot in common.
