Roblox has created an open-source AI model designed to analyze audio clips for inappropriate content such as profanity, racism, bullying, and sexting. Released to the public last July on GitHub and the AI platform Hugging Face, the model has been downloaded more than 20,000 times (a brief sketch of loading it appears at the end of this section). The company has since developed an updated version that adds support for additional languages, including Spanish, French, German, and Japanese, along with improved infrastructure for customizing the model to specific requirements. This new version is expected to be open-sourced by the end of Q1 2025, and Roblox plans to release more open-source content classification tools later in the year.

By participating in ROOST and serving as co-chair of a technical advisory committee, Roblox aims to bolster these open-source efforts, focusing on AI models that organizations of any size can use for content moderation, especially around child safety. "While large corporations like us can allocate resources for systems like this," says Koneru, "it's nearly impossible for smaller game developers to establish effective safety systems on their own."

Some of the AI systems may be hosted by ROOST itself, so that outside companies can integrate them through simple API calls rather than running the underlying infrastructure themselves, according to Koneru. "They might not only open-source models but could also offer hosted services that allow you to make API calls without having to deal with the intricate details of operating the models efficiently," he notes.

ROOST may also provide open-source tools for labeling training data samples, distinguishing allowed from disallowed content, and tracking how that data is used to train and refine AI systems. That includes technology for managing large-scale human moderation work while keeping rule enforcement consistent, so that models are trained on dependable samples.

ROOST is backed not only by for-profit companies but also by philanthropic organizations, including the Future of Online Trust and Safety Fund, the Knight Foundation, AI Collaborative, and the McGovern Foundation. It has raised more than $27 million to fund its first four years of operations.
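The article doesn't say how the open-source classifier is packaged, so the sketch below is an illustration only: it assumes the model is published as a standard Hugging Face audio-classification checkpoint under an identifier like Roblox/voice-safety-classifier, with per-category labels along the lines of profanity or bullying. The model ID, label names, and the voice_clip.wav input are all assumptions, not details from the article.

```python
# A minimal sketch, not an official integration: the model ID, labels,
# and input file below are assumptions for illustration.
from transformers import pipeline

# Assumed Hugging Face identifier for the open-source voice safety classifier.
MODEL_ID = "Roblox/voice-safety-classifier"

# The audio-classification pipeline downloads the checkpoint and handles
# audio decoding and feature extraction (ffmpeg is needed for file inputs).
classifier = pipeline("audio-classification", model=MODEL_ID)

# Score a short voice clip; each result is a dict with a category label
# (e.g. profanity or bullying, per the article) and a confidence score.
results = classifier("voice_clip.wav")
for result in results:
    print(f"{result['label']}: {result['score']:.3f}")
```

Whether the classifier reports independent scores per category or a single mutually exclusive label depends on how it was trained, which the article doesn't specify; the hosted option Koneru describes would replace this local setup with a remote API call.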