A recent evaluation of Google’s Gemini AI platforms has raised concerns over their safety for children and teenagers. The assessment suggests that both the under-13 and teen versions of Gemini may expose young users to inappropriate material and are not fully tailored to their developmental needs.
Despite efforts to implement safety features, the platforms still risk surfacing adult material, such as content about substance use and sex, along with mental health guidance that may be unsuitable for younger audiences. Experts emphasize that simply adding safety filters to an adult-oriented AI is insufficient; dedicated design for different age groups is critical.
The evaluation also highlights the importance of developmental considerations in AI tools for young users. Without guidance structured specifically for children and teens, young users risk confusion, misinterpretation, or exposure to unsafe content.
The findings underscore the growing debate over AI use by younger audiences and the responsibility of tech companies to ensure these platforms are appropriate, safe, and supportive of child development.