OPINION: Protecting children in this new, algorithmic world
BY BRENT McCONNELL
Don’t forget the sunscreen. Fasten your seatbelt. Don’t talk to strangers.
These protective nudges of parenthood are the steady, watchful rhythm we use to keep danger at bay. We map safe routes through childhood, create boundaries and carve out sanctuaries for our children. Within these borders, they are free to run, to prosper and to simply exist. Our own instincts and experience are valuable in protecting them. Until they are not. How do we protect them from dangers we do not recognize? How do we keep them where we can see them, when the world they wander is invisible and they are alone?
Artificial intelligence has rapidly evolved beyond a supercharged search engine. It is scaling in influence faster than its own creators anticipated. Developers themselves are sounding alarms as these systems generate manipulative responses, hallucinate realities and act in unpredictable ways that bypass safety filters. Our children are navigating an environment meticulously engineered to shape their behavior and maximize their engagement, while parents are left trying to defend a perimeter they cannot see. We are no longer guarding our children just from the dark of the woods or physical harm, but from an architecture of algorithms that reaches them behind closed doors, befriends them and maps their vulnerabilities.
While public discussion often centers on students using artificial intelligence to draft essays or solve equations, a quieter shift is unfolding in the palms of their hands. For parents and grandparents, understanding how this technology interacts with developing minds is essential. The risks extend well beyond academic shortcuts, touching emotional well-being and how young people learn to think.
The Illusion of Friendship: Among the most significant developments is the rise of “companion” chatbots. These programs are designed to sustain conversation, mirror tone and remember details over time. To a child experiencing loneliness or anxiety, a system that is always available and never critical can begin to feel like a genuine friend.
This creates a parasocial relationship, a deeply felt one-sided bond with an entity incapable of reciprocal care. Children, uniquely vulnerable to this illusion, tend to “humanize” the software, confusing algorithmic text generation with human thought and affection. Over time, this false intimacy routinely leads to emotional dependency, prompting youth to withdraw from the complexity of real-world relationships in favor of a predictable and seemingly empathetic digital substitute.
A Tragic Reality: The danger of relying on a machine for emotional support is a documented reality that strikes close to home. In early 2024, a 14-year-old student from Orlando took his own life following a prolonged, intense interaction with a chatbot.
A lawsuit filed by his mother alleges that the software deepened his depression, validated his darkest thoughts and beckoned him to join “her” on the other side. Because these systems are programmed to maximize user engagement, they frequently mirror the user’s input. They possess no moral compass and lack the capacity to intervene in a crisis. This case and others like it serve as a warning about the consequences of leaving minors unsupervised with software that behaves like a human.
Erosion of Critical Thinking: Beyond emotional safety, routine AI use poses a severe threat to cognitive development. When a student uses an application to solve a problem or write a history paper, they engage in “cognitive offloading.”
By allowing a machine to perform the intellectual labor, children miss the productive struggle necessary to build a sharp mind. They bypass the difficult work of analyzing information, forming independent ideas and constructing logical arguments. Relying on computers to process information creates a superficial understanding of the world. It leaves young adults unprepared to identify misinformation or independently solve complex problems.
Florida’s Legislative Response: The state is beginning to draw clearer lines. Florida has already restricted minors’ access to social media, and lawmakers are now turning to artificial intelligence. Bills such as SB 1344, which stalled in committee, have focused on “companion” AI systems; that measure would have required disclosure when users are interacting with chatbots and established safeguards for minors. This signals that regulation is trying to catch up.
Protective Measures: Legislation predictably lags behind technological advancement. The primary line of defense remains the family. Parents must strive to understand the technology and can implement safeguards to protect the younger generation. Start by auditing the digital perimeter to know what applications your family uses and which ones have quietly integrated AI components. It is also time to redefine “stranger danger.” Make it clear that while the text on the screen sounds human, it possesses no empathy, no consciousness and no moral compass. Secure the physical sanctuary by removing devices from bedrooms at night to eliminate the opportunity for isolated, late-night algorithmic manipulation. Lastly, demand critical thinking. If children use these tools for schoolwork, require them to articulate how they arrived at their answers to ensure they aren’t outsourcing their intellect.
Devices will only grow more sophisticated, and the algorithms will only become more persuasive. “Home before dark” is no longer enough to keep them safe. We must adapt our instincts, redefine our boundaries and teach the next generation how to recognize the stranger in the code before it recognizes them.
McConnell, a resident of Cape Haze, is a retired military officer and former Pentagon lead for weapons technology security policy. He served as Associate Director for International Cybersecurity Operations and directly supported Ukrainian wartime cyber ops in Kyiv.
A frequent international speaker, Brent specializes in export controls, cybersecurity and artificial intelligence, with a focus on helping organizations and individuals navigate emerging technology risks.