Sam Altman, CEO of OpenAI and a Reddit shareholder, recently stated that social media platforms increasingly feel "fake" because of the proliferation of bots and of humans who have begun to write like large language models (LLMs). The observation, posted on X on Monday, stemmed from his experience reading discussions on platforms like Reddit.
Altman reported difficulty telling whether posts were written by humans, particularly in subreddits devoted to AI coding tools such as OpenAI's Codex and Anthropic's Claude Code. He noted a wave of posts celebrating a switch to Codex that led him to question their authenticity, even while acknowledging Codex's reportedly strong growth.
Altman attributed the phenomenon to several factors: real people adopting the "quirks of LLM-speak," the correlated behavior of "Extremely Online" communities, the "so over/so back" extremism of hype cycles, engagement-driven optimization pressure from social platforms, and past experiences with "astroturfing" by competitors. OpenAI's LLMs were notably trained on Reddit data, and Altman served on Reddit's board until 2022; he was disclosed as a significant shareholder when the company went public.
The growing difficulty of distinguishing authentic human interaction from AI-generated or bot-driven content has implications beyond social media. Reports indicate that LLM-generated text is undermining confidence in authenticity across other digital domains, including educational assessments, journalism, and legal documentation, where the veracity of information is paramount.
Data security firm Imperva reported that more than half of all internet traffic in 2024 was non-human, attributing much of it to LLM-driven activity. X's own AI chatbot, Grok, has estimated that "hundreds of millions of bots" operate on the platform. The pervasiveness of non-human traffic underscores a growing concern about the reliability of digital information, with potential knock-on effects for sectors that depend on accurate online signals for market analysis, supply chain monitoring, and competitor assessment.
Some commentators have suggested that Altman's public lament doubles as early marketing for a rumored OpenAI social media platform, a project reportedly in its early stages and aimed at competing with networks such as X and Facebook. Whatever the motive, the observations point to a broader industry challenge around content authenticity in the age of advanced AI.