Yoel Roth, formerly Twitter’s Trust & Safety lead and now at Match Group, has raised serious concerns about decentralized social networks like Mastodon, Threads, Bluesky, and Pixelfed. According to Roth, these platforms struggle to tackle misinformation, spam, and illegal content such as child sexual abuse imagery because they lack the moderation tools once available on centralized networks.
Roth explained that many decentralized services promote community-driven governance but offer moderators little in the way of technical support. During his tenure, Twitter was transparent about its content decisions, even controversial ones such as the Trump ban, and it retained forensic data like IP logs and device identifiers for threat analysis. Decentralized platforms, by contrast, often remove harmful posts silently, leaving users with no notice of the action or the violation behind it.
Funding remains a major barrier. Roth pointed to IFTAS, a nonprofit that provided trust and safety infrastructure to federated networks, which folded earlier in 2025 under budget constraints. He emphasized that volunteer-driven moderation is not sustainable over the long term, especially given the cost of the machine-learning tools needed to detect harmful content.
Bluesky, which hired its own moderation staff and built user-customizable trust and safety tools, is not immune to these problems either. Roth acknowledged its efforts but noted that decentralization creates new governance dilemmas, such as who takes responsibility for doxxing or hate speech when moderation settings are highly personalized.
Privacy considerations further complicate the picture. Roth pointed out that decentralized administrators often avoid collecting logs in order to preserve user anonymity, but that choice makes it extremely difficult to distinguish coordinated bot campaigns from legitimate users. Roth recalled the problem firsthand from Twitter, where even founder Jack Dorsey once reshared content from a Russian troll account masquerading as an American.
The rise of AI-generated content adds another layer of difficulty. Roth cited research suggesting that advanced language models can produce political commentary more persuasive than what humans write. As a result, he recommended building moderation systems that move beyond content analysis and incorporate behavioral signals such as automation patterns, posting frequency, and unusual posting times.
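To make that idea concrete, the sketch below shows what a purely behavioral heuristic could look like: it scores an account on the regularity of its posting cadence, its overall volume, and its off-hours activity, without inspecting the text of any post. This is a hypothetical illustration only; the function name, signals, weights, and thresholds are assumptions and are not drawn from Roth's remarks or any specific platform.

```python
# Illustrative sketch only: a toy heuristic that scores an account on behavioral
# signals (cadence regularity, posting volume, off-hours activity) rather than
# on content. All names, weights, and thresholds are hypothetical placeholders.
from dataclasses import dataclass
from datetime import datetime
from statistics import pstdev


@dataclass
class Post:
    created_at: datetime  # UTC timestamp of the post


def behavioral_risk_score(posts: list[Post]) -> float:
    """Return a 0..1 score; higher means more bot-like posting behavior."""
    if len(posts) < 5:
        return 0.0  # too little history to judge

    times = sorted(p.created_at for p in posts)
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]

    # Signal 1: automated accounts often post at highly regular intervals.
    regularity = 1.0 if pstdev(gaps) < 60 else 0.0

    # Signal 2: unusually high volume (averaging more than one post per minute).
    avg_gap = sum(gaps) / len(gaps)
    high_volume = 1.0 if avg_gap < 60 else 0.0

    # Signal 3: share of posts made during 02:00-05:00 UTC ("unusual hours").
    odd_hours = sum(1 for t in times if 2 <= t.hour < 5) / len(times)

    # Weighted combination; the weights are arbitrary for illustration.
    return min(1.0, 0.4 * regularity + 0.4 * high_volume + 0.2 * odd_hours)
```

In practice, signals like these would feed into, rather than replace, the machine-learning classifiers Roth described, and the thresholds would need to be tuned against real traffic rather than fixed by hand.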
In Roth’s view, decentralized social platforms began as an idealistic vision of more democratic online interaction. Yet without the infrastructure to support transparency, accountability, and safety, they may fall short of that promise. The struggle to balance user autonomy with effective moderation underscores how fragile the open social web remains.