
OpenAI Faces Scrutiny Over Sora Launch, Regulatory Tactics, and Internal Concerns

OpenAI, the artificial intelligence research company, is facing intensifying scrutiny over its operational practices and policy approach amid the launch of its Sora video generation tool, ongoing copyright disputes, and fresh allegations of regulatory intimidation. Chris Lehane, OpenAI's Vice President of Global Policy, addressed some of these challenges at the Elevate conference in Toronto, even as the company drew new criticism over its conduct.

The launch of Sora has drawn particular attention because of concerns over its training data. The tool reportedly drew on copyrighted material, and OpenAI's approach to content usage has shifted: the company initially allowed rights holders to opt out of having their work used, but after observing how users engaged with the tool, it "evolved" toward an opt-in model for certain content, according to reports. This comes amid existing lawsuits against OpenAI from major publishers, including The New York Times and The Toronto Star, alleging unauthorized use of their content to train AI models. When questioned on the matter, Lehane invoked "fair use" as a critical component of U.S. tech industry practice.

In parallel, OpenAI is undertaking significant infrastructure expansion, including a data center campus in Abilene, Texas, and a new facility in Lordstown, Ohio, in partnership with Oracle and SoftBank. These operations demand substantial energy and water resources. Lehane acknowledged the significant energy requirements, noting OpenAI's need for approximately a gigawatt of energy per week, and framed the scale of AI development within a geopolitical context, suggesting that democratic nations must compete in AI infrastructure. He expressed optimism that this expansion could modernize energy systems.

Ethical considerations surrounding AI generation have also surfaced. Zelda Williams, daughter of the late actor Robin Williams, publicly appealed against the creation and distribution of AI-generated videos depicting her father. When asked about reconciling such intimate harms with OpenAI's mission, Lehane cited the company's commitment to responsible design, testing frameworks, and government partnerships, acknowledging the lack of established precedents for these issues.

Further intensifying the scrutiny, Nathan Calvin, a lawyer at the nonprofit advocacy organization Encode AI, reported that a sheriff's deputy served him a subpoena at his Washington, D.C., residence on behalf of OpenAI. The subpoena sought Calvin's private messages with California legislators, college students, and former OpenAI employees. Calvin characterized the action as an intimidation tactic related to his opposition to California's SB 53, an AI safety bill, alleging OpenAI weaponized its legal dispute with Elon Musk as a pretext. Calvin publicly referred to Lehane as a "master of the political dark arts" in this context.

Internally, some OpenAI personnel have expressed reservations. Boaz Barak, an OpenAI researcher and Harvard professor, wrote that Sora 2 is "technically amazing but it's premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes." Josh Achiam, OpenAI's head of mission alignment, publicly questioned the company's trajectory, writing, "We can't be doing things that make us into a frightening power instead of a virtuous one. We have a duty to and a mission for all of humanity. The bar to pursue that duty is remarkably high."
