Character AI, the startup that lets users create and chat with AI-generated characters, is introducing new parental insights features aimed at improving user safety. The move comes in response to recent lawsuits and criticism over the company's ability to protect children from potential harm.
The new features will provide parents or guardians with a weekly email summary of their teens' activity on the app. The email will include the average daily time teens spent on the app and the web, the time spent interacting with each character, and a list of the top characters they engaged with during the week. The company said this data is intended to give parents a better understanding of their teens' engagement habits on the platform, and it emphasized that parents will not have direct access to users' chats.

In a press release, Character AI said it implemented these features more swiftly than other platforms, positioning itself as proactive in addressing safety concerns.
Last year, the company introduced additional safety measures, including a dedicated model for users under 18, notifications about time spent on the platform, and disclaimers reminding users that they are chatting with AI-generated characters. It has also rolled out new classifiers that block sensitive content in both input and output for teenage users. Earlier this year, Character AI filed a motion to dismiss a lawsuit claiming the company contributed to a teen's suicide, arguing that the speech at issue is protected under the First Amendment.