Sam Altman, the CEO of OpenAI, has expressed growing concern that the proliferation of AI-generated content is fundamentally changing the nature of social media, making it increasingly difficult to distinguish human posts from bot-generated ones. In a recent interview, Altman said that platforms like Twitter and Reddit are starting to feel “fake” because of the volume of AI-generated content, an irony given his own role at the forefront of AI development.
Altman offered a multi-layered analysis of the problem, pointing to three key factors. First, he noted that the writing styles of real people have begun to mimic the predictable patterns of large language models, a phenomenon he referred to as “LLM-speak.” Second, he argued that the engagement and monetization models of social media platforms inherently reward extreme or sensational content, which AI can now manufacture cheaply and at scale. Finally, he speculated that some of the deceptive content may be part of larger, coordinated disinformation campaigns.
His observations are supported by recent industry data. A report from a data security firm found that in 2024, more than half of all internet traffic was non-human, a significant share of it driven by bots and AI-powered automation. Altman’s comments highlight a critical challenge of the generative AI era: a flood of synthetic content threatens to erode trust both in online information and in the platforms that host it.