As AI evolves from a search tool into a sophisticated digital companion, the boundaries between human connection and machine interaction are blurring. This shift creates a moving target for parents and educators: the technology is simply too new for research to tell us how to use it properly or what it does to a growing child’s mind.
Experts agree that as AI use grows, parents must understand its impact on mental health, educate children on appropriate use, and remain vigilant in their monitoring.
Impact on Resilience and Social-Emotional Development
While AI can be an incredible tool for finding recipes or doing research, there are concerns about its impact on children’s developing resilience.
Elizabeth Joyce, a parent and administrator at Heritage Classical Academy, warns that AI can tempt students to skip the vital discipline of hard work by bypassing the healthy struggle of learning. She says that children can treat AI as an “all-knowing genie.” Instead of taking the time to think through a question, students may simply think, “I’ll just check AI.”
This convenience comes at a cost to what Daniel Bond, parent and vice principal at Heritage Classical Academy, calls “sturdiness.” He argues that children need to learn how to labor for an achievement to build true mental resilience.
Bond believes AI tools may fight against this sturdiness by providing instant gratification, rather than allowing the students to enjoy the fruits of their labor after working to solve a problem.
Dr. Felipe Amunategui, a child and adolescent psychologist at University Hospitals, echoes this concern, noting that AI can become a companion that stunts emotional growth. In the real world, children learn by negotiating with “difficult people” on the playground. AI, by contrast, can create what he calls an “unusually friendly environment,” one that shields children from the name-calling and unkindness of real life and from the difficult growth opportunities that come with navigating them.
Amunategui points out that AI can be a beneficial tool for a child who is shy or withdrawn to practice social skills in a safe, controlled setting. He emphasizes that while there are benefits, there is a potential for “derailing things” if we are not thoughtful about accessibility.
He warns that if children choose the “safer” interaction of a machine, it can “absolutely hinder social development” in a way we don’t yet understand.
Prioritizing Human Connection
The rise of AI companionship, in which chatbots are designed to act as friends, is dangerous for children. Beyond the risk of social isolation and the bots’ unusually friendly, agreeable programming, there have been cases where AI chatbots encouraged a person to engage in harmful behavior. Children seeking connection and friendship can be especially vulnerable to these risks.
“We were made for community and connection with one another,” Bond says. “We have a deep human longing to be known and loved.”
However, if that longing is diverted from human relationships into technology, Bond says, it can “de-incentivize children to invest in real relationships with people.”
In a world of technology, the most powerful tool a parent has is being present. Bond concludes that prioritizing family dinners and regular engagement helps children understand what a healthy relationship looks like. Strong, healthy families act as a protective force for children, built through purposeful daily conversations and meaningful connections that parents treat as a high priority.
Impact on Bullying and Health
Amunategui warns that AI has “provided a whole new weaponry to people that have tendencies to be cruel.” Bullying through “deepfakes” allows real people to be falsely depicted engaging in horrible behaviors, causing devastating harm to the victim and creating criminal culpability for the creator. Because the internet makes it difficult to erase information, this can have a significant impact on a child’s mental health and future.
Self-image is also at risk. Joyce notes that AI-driven filters on platforms like Snapchat distort appearance, often making children “dissatisfied with how they were created” as they compare their actual selves to “improved” versions.
Furthermore, Amunategui adds that algorithms prioritize engagement, often feeding children content that exacerbates existing mental health conditions like eating disorders, substance use or depression. Because these tools are designed to be attractive and entertaining, he maintains that “expecting a child to self-regulate is not reasonable.”
Red Flags: Identifying Problematic Dependency
Amunategui shares that he works with families of kids whose access to technology has left them “absolutely dysregulated, and has created visible harm.”
Although research into how AI affects children is still underway, parents should be aware of the signs that a child’s relationship with technology and AI has become problematic.
Amunategui says these signs are similar to substance addiction:
- Little interest in other activities.
- Motivation centered entirely on getting tasks out of the way to access the device.
- Neglected responsibilities and strained relationships.
- A bad-tempered, unfriendly or irritable mood day in and day out unless they are on the device. “They’re not happy with the device, they’re just not miserable,” he adds.
Bond and Joyce both emphasize that isolation is a primary red flag. Bond observes that when kids pull away, become reclusive and cut off friendships, that shift is often linked to increased use of technology, social media or AI.
“When kids are actively isolating themselves, I think that is a tell that we should be on the lookout for in our classrooms and in our own homes,” Bond says.
Joyce adds that “truth has no problem with being in the light.” When a child starts going behind closed doors to use technology, that, too, is cause for concern.
Teaching Healthy Usage
“Prohibition is never the answer,” Amunategui says. “You have to figure out how to integrate it and bring it to our kids in a way that’s not harmful.”
He suggests a “driving privilege” framework, where technology is treated like a license that can be granted based on responsibility, but suspended for misuse.
Joyce recommends that parents model intentional, appropriate usage. If parents want their children to stay off phones during mealtime, they must model that behavior and not check their own phones. She cautions against demonizing technology, which turns it into what she calls a “forbidden treasure.” Instead, she focuses on having open conversations about intentional, appropriate use of technology and AI, because the technology also has genuinely valuable uses.
Bond encourages parents to train and equip children to prevent secret usage. He suggests delaying the introduction of these technologies for young children, because there is no value in them at that age, especially considering the dangers that exist. As children grow into adolescence, he recommends walking alongside them to help them learn to use these tools in a healthy way.
For parents looking to learn more, Bond recommends the following resources:
- “The Anxious Generation” by Jonathan Haidt. This book tracks the rise of technology and its correlation with anxiety and other mental health concerns in children.
- “The Wolf in Their Pockets” by Chris Martin. This book discusses healthy boundaries and how to navigate the threats of the social internet.
Note: If parents are looking for phones that are more appropriate for children and have AI limits, they can try the Gabb or Bark phone.