We’ve all heard the proverb, “It takes a village to raise a child.” This village is not made up of just the parents, teachers, and relatives of the child. It also includes the stranger at the playground who kindly tells a child not to eat dog poop. It consists of the Government, which makes laws that protect the child from harm. And in the last three decades, media companies and tech companies have joined that community. Whether we like it or not, tech and media companies are part of the village that will raise our children, nieces, nephews, God-children, and so on in the coming years. And for those who neither want nor have children around them, consider that these companies will raise the next generation, so they essentially control part of your future.
In 2025, AI companies hold a significant responsibility for how users access and interact with artificial intelligence. This responsibility should be a shared endeavor between the companies and consumers. We should take some of the power and responsibility away from AI companies and distribute it to the Government and the community.
Whenever a child uses a device with internet access, we effectively hand over a portion of their care to these companies. You may feel comfortable with that, but I urge you to ask yourself: Can they be trusted? What content will they expose our children to? Will they prioritize our children’s well-being and futures over their profits?
Case Studies
I recently came across an article that highlighted the far-reaching effects of AI. The events discussed happened almost a year ago, and despite my subscription to The Times, I still missed this news. I wonder how many others are unaware that misusing AI can result in a life being lost. Many likely wrote off this incident as a one-time occurrence, which is likely why researchers continue to question, “Is there a dark side to AI?” While a death is a stark reminder of the risks involved, it is not the only measure of the negative consequences of exposing minors to AI.
This article featured Sewell, a 14-year-old boy who developed an emotional attachment to a fictional character’s chatbot on Character.AI’s platform (C.AI). The chatbot was named and patterned after Daenerys Targaryen, a character from the TV series “Game of Thrones.” He got uncharacteristically close to the chatbot, and the further he pulled away from fundamental interactions, until he eventually took his own life. After this incident, C.AI added more safety features; for example, it has expanded the list of words that would trigger a suicide hotline pop-up. However, when the writer of this article tested this feature, he observed it wasn’t as helpful as advertised. Now, while Sewell’s story is extensive, it is not uncommon. There have been multiple reports of AI psychosis within and outside the United States:
- “Hugh, from Scotland, says he became convinced that he was about to become a multi-millionaire after turning to ChatGPT to help him prepare for what he felt was wrongful dismissal by a former employer.”
- Allan Brooks believed for 3 weeks that he had formulated a novel internet-breaking, force-field-creating, security-threatening mathematical formula.
- Multiple stories about AI’s failures.
Social Companion AI Models
To explore this issue further, I examined social companion AI models, such as Character AI (C.AI), for this article. Google describes Character AI as “a platform that allows users to interact with AI-powered characters, simulating conversations with fictional, historical, or entirely new personalities…Users can customize these characters by adjusting their personalities, conversation styles, and even voices.” Essentially, you can converse with a chatbot designed to resemble Michael Jackson, which responds using information gathered from the web about him.
Social AI companions enable creativity, but young minds need guidance on using them effectively because of the possible harmful content that users can access with just the right prompts. AI is a tool that will mine for whatever you command it to. If you steer it wrong, you will get bad results, and vice versa. The input primarily controls the output. Although they have filters to prevent unauthorized views, minors can easily bypass those filters with readily available online prompts. These filters restrict NSFW* content, which Character AI and a similar platform, Janitor AI (J.AI), allow.
The age limit for C.AI is 13, while Sewell was 14. In contrast, Janitor AI restricts its use to individuals at least 18 years old. A limited version of the app is available and recommended for those under 18. However, there is no effective age verification process to prevent younger users from accessing either platform.
AI’s Permeating Influence
As I mentioned earlier, death is not the only negative consequence associated with AI. Character AI can cause children to become overly absorbed in a fantasy world, disconnecting them from reality. There are Reddit forums and YouTube videos where teenagers express their struggles with addiction to Character AI. One of these Reddit forums also raised a critical point – AI isn’t even at its best yet, and already it’s producing addictions and other complications. Imagine what could happen five years from now if we don’t properly train our wards to use it correctly. AI certainly isn’t going anywhere anytime soon; it looks like it is here to stay. Hence, our children need our help to avoid getting sucked into a make-believe world that could harm them by hindering their ability to engage appropriately with reality or limiting their potential. AI is a tool that can enhance our intelligence and improve our world, but if misused, it can cause irreparable damage. We must take control of the outcome. As a wise man once said, to reap benefits (from AI or other creations), we must limit the adverse side effects.
AI is designed to engage users in the most appealing way possible, making it easy for them to become hooked. Who wouldn’t want their own personal cheerleader who mainly agrees with them and seldom offers criticism? However, this can lead to a slippery slope, fostering feelings of grandiosity that distort one’s perception of reality.
How To Regulate AI for Minors
Parents and guardians, you are primarily responsible for curating your child(ren) ‘s environment until they are mature enough to create their own. The people and platforms you allow into their lives can either benefit or harm them. As AI becomes more prevalent, you — not your child, the Government, tech companies, or the media— must decide how your child will interact with it.
The goal isn’t necessarily to shield them from AI entirely — unless that is the best approach for you — but to guide them in how to use it appropriately. They will likely come into contact with AI at school, friends’ houses, or elsewhere if they are prohibited from using it at home—unless they possess a strong sense of morality and choose to adhere to it. Children are inherently curious; teenagers, in particular, often rebel against restrictions. Therefore, a complete ban may not be the most effective or realistic solution.
Instead, focus on building trust by teaching them the importance, benefits, and risks of engaging with AI and demonstrating how they can leverage it for learning and growth. Establish guidelines such as screen time limits, parental controls for certain websites, periodic scans of their activity, and so on, and implement a system to ensure your child(ren) adheres to these rules.
Two suggestions for making AI safer for our children are to raise the age of consent for AI usage and establish age identification protocols that will keep younger users. In many European countries, the consent age is 16, while in the United States, it is 13. This discrepancy raises important questions, especially considering the 14-year-old cut-off (literally) from the world due to AI interference. If mature adults with no history of mental health issues can succumb to AI addictions that lead to delusional psychosis, we must be cautious.
All individuals responsible for children, especially those who work closely with them, should be trained to recognize the early warning signs of an unhealthy obsession with technology. These signs may include an excessive reliance on devices, a preference for digital interactions over face-to-face connections, and aggressive behavior in response to technology-related issues. Such behaviors are also commonly observed in troubled teenagers.
Companion AI creators claim that these technological advancements will improve society. While we see benefits, much work remains to make it safe for every user. I encourage you to read more on this topic here:
- Character AI reviews — Game Quitters, Mobicip (offers parental control solutions), and Qustudio.
- Janitor AI review — Kroha
- Familiarize yourself with similar platforms — Fritz.ai
- Another shortcoming of Character AI — BBC News
I’m eager to hear your insights on a crucial topic: how are you managing the use of AI technologies for your children, particularly minors? Do you believe that regulation is essential, or do you think it’s better to allow kids the freedom to explore these technologies without restrictions? Given that artificial intelligence is an integral part of our future, it’s crucial for us to find ways to use it to improve our lives and experiences. However, we need to be mindful of when its use might turn into a disadvantage. If we fail to navigate these challenges effectively, we risk losing the potential benefits that AI can offer. What are your thoughts on this balance?
Abbreviations
NSFW — Not Safe For Work
Leave a comment