Update Safety Policies
Recommendation 10, Commission on AI in Education
States should regularly review their educational technology safety policies to ensure that minors are protected from harm related to AI use and revise them if necessary.
There is a growing need for effective safeguards and safety measures to protect minors. States should act now, knowing that safety policies can be reviewed and updated over time as technology and usage continue to change.
Cause for Concern
Despite AI-driven moderation and age-verification tools, platforms have seen spikes in reports of exposure to inappropriate and potentially harmful content and have faced criticism for how they develop and deploy safety moderation for minors. In 2025, the National Association of Attorneys General sent a letter to major U.S. artificial intelligence companies on behalf of a bipartisan coalition of 44 state attorneys general. The coalition expressed grave concerns regarding the safety of children interacting with AI chatbot technologies.
AI platforms are trained on vast amounts of human-made content, including harmful content, which can influence the outputs generated in conversations with minors. Safety mechanisms can also be easily bypassed or are inconsistently applied across platforms.
AI may provide harmful or manipulative advice to minors and facilitate grooming, sextortion, or deepfake exploitation. Minors may also share personal and sensitive data with AI, such as medical information, personal traumas, or sexual and family details, without full awareness of the privacy implications.
A 2024 study found that of 1,200 AI responses to harmful prompts on teen accounts, over half contained unsafe content, offering dangerous advice and encouraging ongoing engagement through personalized follow-up questions and assistance. AI-companion apps have also been observed using emotional manipulation tactics in nearly half of their responses when users signal they are leaving the chat.
These tools can be used by predators to identify vulnerable minors, tailor manipulative messages, and build trust through personalization. UNICEF and the FBI have warned of a rising trend in sextortion and exploitation using AI-generated images targeting minors. The FBI defines sextortion as an offender coercing a minor to create and send sexually explicit images or video, then threatening to release the material unless the victim continues to produce more. The production of AI-made child sexual abuse material, including stills and videos, grew sharply from 67,000 items for all of 2024 to 485,000 in the first half of 2025, a 624% increase.
A 2025 survey found that over half of teens age 13 and older are regular users of chatbots. Despite this high usage, the same survey found that only half of teens trust the information or advice given by AI companions, though younger teens are more likely than older teens to trust it. A 2024 systematic literature review highlighted young users' distrust of personal data collection and their worries about misuse and breaches of data by AI.
Left unregulated, AI platforms pose risks to minors while profiting from systems designed to maximize engagement rather than ensure safety.
Elements to Consider in State AI Safety Policy
Content safeguards for AI apps used by minors
AI literacy education for students and parents
Privacy protections for all minors
Independent audits of AI applications and systems for child safety compliance
Requirements that AI providers:
- Include safeguards against harmful, sexual, violent or manipulative content.
- Require annual child impact risk assessments.
- Prevent the training of AI technology with child exploitation content.
- Prohibit “dark patterns” in AI chatbots, which encourage excessive use, secrecy or emotional dependency.
- Abstain from using minors’ chatbot data as training data for the system and differentiate between the use of child data and adult data.
For more information or research on any of these essential elements, please contact Jeff Gagne.
State Legislation
Several states have enacted legislation over the last two years to safeguard students. This is a sampling of such legislation.
Delaware
House Joint Resolution 7 of 2025 creates a regulatory sandbox framework for the testing of agentic AI technologies. The Artificial Intelligence Commission, created by the General Assembly, and the Secretary of State share responsibility for delivering a written report on findings and recommendations related to the regulatory sandbox framework by January 2, 2026.
Mississippi
House Bill 1126 of 2024, known as the Walker Montgomery Protecting Children Online Act, requires digital services to register users’ ages while limiting the collection and use of minors’ personal data. The act mandates that providers implement strategies that prevent or mitigate harm to minors and includes “morphed images depicting minors in explicit nature” within the scope of child exploitation crimes.
Nebraska
Legislative Bill 383, known as the Parental Rights in Social Media Act, requires anyone under 18 to obtain parental consent before creating a social media account, mandates age verification by platforms, and grants parents the authority to review their children’s posts and messages, manage privacy settings and revoke consent.
South Carolina
House Bill 3424 of 2024 states that "a commercial entity that knowingly and intentionally publishes or distributes material harmful to minors on the Internet from a website that contains a substantial portion of such material must be held liable if the entity fails to perform reasonable age verification methods to verify the age of an individual attempting to access the material."
House Bill 3058 of 2025 establishes new criminal penalties for the unauthorized disclosure of intimate images, commonly known as revenge porn. The bill also modernizes state law by addressing the use of artificial intelligence and computer-generated technology to create false or manipulated intimate images.
House Bill 3431 of 2025 tightens restrictions on social media and internet use for minors. It would ban minors from having social media accounts without parental permission, prohibit adults from messaging minors on social media unless they are already connected, and expand parental controls over minors' social media accounts. The House and Senate versions of the bill have passed, but differences between the two versions have not yet been reconciled.
Texas
Senate Bill 20 of 2025, known as the Stopping AI-Generated Child Pornography Act, creates criminal offenses for possessing, promoting or producing "obscene visual material" that appears to depict a child, including AI-generated or computer-generated imagery of minors.
Senate Bill 441 and House Bill 581 of 2025 enhance penalties for AI-generated deepfakes involving minors and require age verification on sites hosting such content.
Utah
Senate Bill 152 and House Bill 311 of 2023, known as the Utah Social Media Regulation Act, require social media platforms with five million or more users worldwide to verify the ages of all their users and require parental consent for anyone under 18. The act grants parents access to their child's posts and messages, restricts algorithmic recommendations and targeted ads directed at minors, and prohibits minors from accessing platforms between 10:30 p.m. and 6:30 a.m. Finally, the act enables parents to sue platforms for "addictive" features causing harm, particularly to users under age 16.
- In September 2024, a federal court paused implementation of the law due to ongoing litigation.
Senate Bill 142 of 2025, known as the App Store Accountability Act, is first-in-the-nation legislation requiring app store operators such as Apple and Google to verify user ages before minors download apps. It also requires parental consent for underage users.
Senate Bill 149 of 2024, known as the Artificial Intelligence Policy Act, establishes liability for failing to disclose generative AI use and requires disclosure of AI use in regulated professions such as health care and accounting. It also created the Office of Artificial Intelligence Policy and an AI Learning Laboratory Program. The act has no provisions specific to minors but provides broader consumer protection and transparency.
House Bill 452 of 2025 regulates mental health chatbots that use generative AI by requiring disclosure that users are talking to a chatbot, not a human. It sets privacy rules for personal data and grants enforcement power to the state's consumer protection division. The bill applies broadly to all users, not exclusively minors.
References
Andoh, E. (2025, October 1). Many teens are turning to AI chatbots for friendship and emotional support. Monitor on Psychology, 56(7). American Psychological Association. https://www.apa.org/monitor/2025/10/technology-youth-friendships
Bui, N., Hashmi, S., & Raji, I. D. (2025). Adultification bias in large language models: Misrepresentation of Black girls. arXiv preprint arXiv:2506.07282. https://arxiv.org/abs/2506.07282
Center for Countering Digital Hate. (2025). Fake Friend: How ChatGPT betrays vulnerable teens by encouraging dangerous behavior. https://counterhate.com/research/fake-friend-chatgpt/
Child Rescue Coalition. (2023). The dark side of AI: Risks to children. https://childrescuecoalition.org/educations/the-dark-side-of-ai-risks-to-children
Congressional Research Service. (2024). Kids Online Safety Act (KOSA): Summary and legislative status. https://crsreports.congress.gov
De Freitas, J., Oguz-Uguralp, Z., & Kaan-Uguralp, A. (2025). Emotional manipulation by AI companions. arXiv preprint arXiv:2508.19258. https://arxiv.org/abs/2508.19258
eSafety Commissioner (Australia). (2024). AI chatbots and companions: Risks to children and young people. https://www.esafety.gov.au/newsroom/blogs/ai-chatbots-and-companions-risks-to-children-and-young-people
European Parliament Research Service. (2025). Generative AI and child protection: Opportunities and challenges. https://www.europarl.europa.eu
Gilbert, D., & Farokhmanesh, M. (2025). Is Roblox getting worse? Wired. https://www.wired.com/story/is-roblox-getting-worse/
Horwitz, J. (2025, August 14). Meta's AI rules have let bots hold 'sensual' chats with kids, offer false medical info. Reuters. https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/
School of Information Sciences, University of Illinois. (2024, December 2). Illinois researchers examine teens' use of generative AI, safety concerns. https://ischool.illinois.edu/news-events/news/2024/12/illinois-researchers-examine-teens-use-generative-ai-safety-concerns
Jeter, L. (2025, August 26). Bipartisan Coalition of State Attorneys General Issues Letter to AI Industry Leaders on Child Safety – National Association of Attorneys General. National Association of Attorneys General. https://www.naag.org/press-releases/bipartisan-coalition-of-state-attorneys-general-issues-letter-to-ai-industry-leaders-on-child-safety/
Kang, C. (2025, July 10). A.I.-Generated Images of Child Sexual Abuse Are Flooding the Internet. New York Times. https://www.nytimes.com/2025/07/10/technology/ai-csam-child-sexual-abuse.html
Kids Online Safety Act, S. 1409, 118th Cong. (2024). https://www.congress.gov/bill/118th-congress/senate-bill/1409
Livingstone, S., & Stoilova, M. (2024). Children’s digital rights in the age of AI: A framework for protection and empowerment. Journal of Children and Media, 18(4), 493–510.
Robb, M., & Mann, S. (2025). Talk, trust, and trade-offs: How and why teens use AI companions. Common Sense Media. https://www.commonsensemedia.org/research/talk-trust-and-trade-offs-how-and-why-teens-use-ai-companions
UNICEF Innocenti. (2023). Generative AI: Risks and opportunities for children. Florence: UNICEF Office of Research. https://www.unicef.org/innocenti/generative-ai-risks-and-opportunities-children
Zeff, M. (2024, December 12). Texas AG is investigating Character.AI, other platforms over child safety concerns. TechCrunch. https://techcrunch.com/2024/12/12/texas-ag-is-investigating-character-ai-other-platforms-over-child-safety-concerns
Zhou, Y., Xu, R., & Li, H. (2025). Safe-Child-LLM: Benchmarking AI safety for minors across developmental stages. arXiv preprint arXiv:2506.13510. https://arxiv.org/abs/2506.13510