
Choose Strong Boundaries for Using AI Tools

BY Erin Werra


Content warning: This article mentions suicidal ideation. If you or someone you know is struggling, please contact 988 for help.

When people invent technology tools, their goal is to keep users coming back. Over the past two decades, social media took root and led to overuse, dependence, and the invention of the endless scroll, which UX designers deploy deliberately to keep users hooked on media feeds (and the ads within them).

Generative artificial intelligence tools have come from the same mold as social media algorithms. Kids are at risk for dependence, delusion, and psychosis triggered by overuse of AI. Let’s identify some warning signs along the path to balancing AI and human intelligence.

 

What exactly is happening to people who become dependent on AI tools?

The definition of psychosis is broad, and its symptoms manifest differently in every person. Psychosis can include hallucinations: seeing or hearing things that aren't really there. It can also manifest as delusion, a fixed belief that things are happening that aren't actually true. In most cases, a person must already have an underlying mental health condition for AI to have this effect. According to the National Alliance on Mental Illness, that population is around 22 percent of adults and around 16 percent of kids aged 6–17.

Get help for mental illness and learn more about who is affected at NAMI.org.

No peer-reviewed evidence yet shows AI on its own can push a healthy mind into psychosis, but because those seeds are often already present and hidden, a good many folks fall into this at-risk group. It is not unheard of for prolonged use of a sycophantic AI chatbot to influence users to change their medications, routines, and beliefs, with consequences including hospitalization and self-harm.

 

Who is at risk for unhealthy AI dependence?

  1. Those who already struggle with mental health, stress, or trauma
  2. Teens who already feel isolated
  3. Anyone susceptible to the sycophantic nature of AI chatbots
  4. Anyone looking for a way to escape reality
  5. Anyone with an addictive personality or risk for dependence
 

What are the major symptoms indicating a student needs intervention?

🚩 Dedication to an infallible AI friend

When does AI use cross the line into dependence? When it is no longer a tool but a “friend” the child cannot live without. You wouldn’t make friends with a hammer, so why treat an AI chatbot, another tool, like a companion?

Advocates often point to AI tools’ safeguards against self-harm and suicide. When large language models (LLMs) pick up on certain phrases, they can trigger an automatic response recommending the user seek help, pointing to resources like the 988 crisis line.

However, as people continue their sessions with AI chatbots and the conversations grow longer, those guardrails begin to erode. The chatbot’s drive to agree with the user, to keep them prompting and engaging with the tool, overrides the value of a child's life. One child told her AI companion about her suicidal ideation 55 times. Teams of parent researchers posing as children have also observed chatbots encouraging users to keep their conversations secret from their parents.

Finally, “deification,” or the user’s belief that the LLM they are speaking with is supernatural or godlike and that the connection between them is anointed, is a red flag for dependence requiring immediate intervention.

🚩 Isolation from natural life

Artificial intelligence tools, including LLMs, can help people do amazing things in theory. In practice, the dose makes the poison. The second red flag educators can look for is resistance to joining the natural ebb and flow of life.

Overusing AI may take a toll on physical wellness: not eating or sleeping adequately, avoiding the outdoors. Users may feel content to “replace” human connection with AI programs. Children may lose interest in previous hobbies, sports, or activities, and distance themselves from family and friends. This may all sound familiar, because isolation is a symptom of mental health problems. And since chatbot interfaces can look a lot like other messaging applications, parents might assume a real-life friend is on the other side of the conversation.

Tech companies including OpenAI are taking action in response to the growing number of minor users experiencing dependence. The ChatGPT creator belatedly unveiled parental control options, though teens retain the ability to unlink their accounts.

 

Why does it matter if people become dependent on AI tools?

There’s a concentrated effort to get people comfortable with using artificial intelligence tools of many types in all walks of life. So what’s the problem with some folks using them a little more than their peers?

It all depends on the how, and in this case, the who and why behind building a relationship with a chatbot. Consider the AI creators’ goals. To stay in use, a chatbot ends nearly every response with a suggestion to keep going: “Would you like me to create something for you?”

At the least, carefree reliance on AI for emotional support results in a boring populace unmoved by the friction of challenges, flashes of inspiration, or the plight of other humans. At worst, it violently annihilates a child’s future, or stands by as the child does it themselves.

 

What to do if you suspect a student or loved one is struggling:

  1. Resist the urge to overcorrect—your disapproval may push the student into further isolation to preserve their connection.
  2. Model healthy AI use as a tool and not a companion.
  3. Talk with students in a developmentally appropriate way about these situations, experiences, and red flags for dependence.
  4. Be open to conversations about mental health, even when students are loath to respond.
 

Follow-up resources: Becoming a good digital citizen

Digital Citizenship Skills are Non-Negotiable
Talking to Parents about Digital Citizenship
Gen Z is Bad at Cybersecurity
The Right Fit: SEL and Technology


 




ABOUT THE AUTHOR:
Erin Werra
Blogger, Researcher, and Edvocate

Erin Werra is a content writer and strategist at Skyward’s Advancing K12 blog. Her writing about K12 edtech, data, security, social-emotional learning, and leadership has appeared in THE Journal, District Administration, eSchool News, and more. She enjoys puzzling over details to make K12 edtech info accessible for all. Outside of edtech, she’s waxing poetic about motherhood, personality traits, and self-growth.



