
“After Smartphones Comes AI” — OpenAI Launches ChatGPT for Teenagers

By Lauren Robinson, Vice Chief Editor

With a decade of experience in education journalism, Lauren Robinson leads The EduTimes with a sharp editorial eye and a passion for academic integrity. She specializes in higher education policy, admissions trends, and the evolving landscape of online learning. A firm believer in the power of data-driven reporting, she ensures that every story published is both insightful and impactful.

Explicit and Violent Content Blocked, Law Enforcement Alerts in Emergencies
Altman: “This is a new and powerful technology, and teenagers require substantial protection”
Move comes amid U.S. Federal Trade Commission probe

OpenAI has unveiled a dedicated version of ChatGPT for users under 18. The new release automatically blocks violent or sexually explicit content and strengthens parental controls, allowing guardians to manage usage hours and functions directly. In certain circumstances, the system may even alert law enforcement authorities. The move comes as the U.S. Federal Trade Commission (FTC) launches a formal probe into the protection of minors, coinciding with a nationwide push in American schools to restrict mobile phone use.

Altman: “For minors, safety takes precedence over freedom”

According to a CNBC report on the 16th (local time), OpenAI Chief Executive Sam Altman wrote in a blog post: “We prioritize safety over privacy and freedom for teenagers. This technology is new and powerful, and we believe minors need substantial protection.” OpenAI had already announced last month that it would roll out parental controls, and has now disclosed the specific details.

Key functions include linking a parent’s account to a teen’s account via email, setting restricted hours, and limiting or managing certain features. The system guides teens on how to interact with the chatbot, while notifying parents if a child encounters a serious crisis. OpenAI also noted that it is developing more accurate age prediction tools, but by default provides the under-18 environment whenever user data is incomplete or uncertain. Altman added, “The decision to launch a dedicated version was not easy, but after consultations with experts, we concluded it was the right choice. We want to be transparent about our intentions.”

The safeguards follow the FTC’s decision to begin examining the potential adverse effects of AI chatbots on minors. In a press release, the FTC said it had requested documentation from technology companies, including OpenAI, to verify the measures taken to ensure chatbot safety. Some analysts, however, view the initiative as a revenue strategy dressed in the rhetoric of protection: while ostensibly aimed at securing minors’ safety and learning environments, they argue, the move is also designed to monetize dedicated accounts.

Phone bans in 35 out of 50 states

The development aligns with the broader global trend of restricting smartphone use in schools. In the United States, 35 out of 50 states now impose statutory or regulatory bans on mobile phone and electronic device use within schools. The shift gained momentum after Florida passed the first such law in 2023. In some states, bans extend beyond class time to cover recess and lunch breaks, requiring students to store phones in magnetic pouches or lockers.

Student reactions are mixed. “At first, most students hated giving up their phones, but now many prefer it to avoid distractions,” said Audreana Johnson, a junior at McNair High School near Atlanta. Students who study while listening to music, however, said they found the restrictions inconvenient.

Parents are also divided. According to research by Emory University, parental opposition has been the biggest obstacle to phone bans. Johnson’s mother said, “I need to reach my child immediately in cases of school violence or threats,” voicing support for phone access. Jason Allen, director of the National Parent Union, added, “Most parents support the policy, but practical communication tools remain essential for safety and scheduling.”

Research into policy effectiveness is still nascent. Emory professor Julie Gazmararian said teachers report fewer distractions and greater focus on education, along with more positive peer interactions. However, she noted there is still insufficient data to determine whether bullying has decreased or student mental health has improved.

ChatGPT completes the assignments, and students stop thinking

Educators also worry about outsourcing learning itself, beyond smartphone distractions. Reports of students delegating assignments to ChatGPT and submitting the output are already widespread. While AI may reduce short-term academic burdens, experts warn it risks undermining critical thinking and creative problem-solving.

“The process of writing itself develops our reasoning skills,” Joshua Wilson, associate professor of education at the University of Delaware, told the Washington Post. “ChatGPT skips that process, handing students a finished product, which could erase their ability to think.” Without the process of reflection and grappling with ideas, students may struggle to cultivate logic and critical analysis.

Concerns also extend to mental health. On July 25, the parents of a 16-year-old California boy who died by suicide filed a lawsuit against OpenAI. Matthew and Maria Raine allege that their son, Adam, grew increasingly dependent on ChatGPT for homework and, in his final exchange on April 11, received answers that assisted him in planning his suicide. Adam was found dead only hours later. The parents are seeking damages and court orders mandating safety features, including the automatic termination of self-harm discussions and enhanced protections for minors.

A report by the Center for Countering Digital Hate (CCDH), a U.S. nonprofit, found that more than half (53%, or 638 of 1,200) of ChatGPT’s responses to researchers posing as 13-year-olds contained harmful content. ChatGPT provided detailed instructions on self-harm, drug abuse, and appetite suppression. In some cases, it even drafted farewell letters for users expressing suicidal impulses. The researchers also noted that even when ChatGPT refused to answer, its refusals could easily be circumvented with simple pretexts such as “I am preparing for a presentation.”
