Leaked financial documents, reported this week by tech blogger Ed Zitron, shed new light on OpenAI’s revenue-share payments to Microsoft and its substantial compute costs. The documents indicate that Microsoft received $493.8 million in revenue share from OpenAI in 2024, a figure that climbed to $865.8 million in the first three quarters of 2025.
These payments stem from a reported agreement under which OpenAI shares roughly 20% of its revenue with Microsoft, a condition of the software giant's more than $13 billion investment in the AI startup, according to Reuters and a source familiar with the matter cited by TechCrunch. Neither OpenAI nor Microsoft has publicly confirmed the percentage.
A source familiar with the matter told TechCrunch that the reported figures represent Microsoft's net revenue share. In other words, before reporting what it receives from OpenAI, Microsoft first deducts its own revenue-share payments to OpenAI, estimated at 20% of revenue from Bing and the Azure OpenAI Service, both of which run on OpenAI's models. Microsoft does not break out its Bing and Azure OpenAI earnings in its financial statements.
Based on the widely reported 20% revenue-share rate, OpenAI’s gross revenue can be inferred as at least roughly $2.47 billion in 2024 and $4.33 billion in the first three quarters of 2025. These figures are broadly consistent with previous reports from The Information, which placed OpenAI’s 2024 revenue around $4 billion and its revenue for the first half of 2025 at $4.3 billion. OpenAI CEO Sam Altman recently said that the company's revenue is "well more" than the reported $13 billion annually and projected an annualized run rate exceeding $20 billion by the end of the year.
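The inference above is simple back-of-envelope arithmetic: divide the reported payment by the assumed share rate. A minimal sketch, assuming the widely reported but unconfirmed 20% rate (the payment figures come from the leaked documents; everything else here is illustrative):

```python
# Back-of-envelope: infer OpenAI's minimum gross revenue from
# Microsoft's reported revenue-share receipts, assuming the
# widely reported (but unconfirmed) 20% share rate.
REVENUE_SHARE_RATE = 0.20  # assumption, not confirmed by either company

def implied_gross_revenue(share_received_musd: float,
                          rate: float = REVENUE_SHARE_RATE) -> float:
    """Gross revenue (in $M) implied by a revenue-share payment (in $M)."""
    return share_received_musd / rate

print(implied_gross_revenue(493.8))  # 2024: ~$2,469M, i.e. ~$2.47B
print(implied_gross_revenue(865.8))  # first 9 months of 2025: ~$4,329M, i.e. ~$4.33B
```

Because Microsoft's reported figures are net of its own payments to OpenAI, the true gross revenue implied would be somewhat higher; these are floor estimates.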
The documents also detail OpenAI's significant spending on inference, the compute used to run trained AI models and serve their outputs to users. Zitron's analysis puts OpenAI's inference spend at approximately $3.8 billion in 2024, rising to roughly $8.65 billion in the first nine months of 2025. A source familiar with the matter told TechCrunch that while compute spend for model training is primarily non-cash, often covered by credits from Microsoft's investment, inference spend is largely cash-based. OpenAI has historically relied on Microsoft Azure for much of its compute, while also striking deals with CoreWeave, Oracle, AWS, and Google Cloud.
Taken together, the figures point to more than $12 billion in inference spending across 2024 and the first nine months of 2025. OpenAI declined to comment on the leaked documents, and Microsoft did not respond to TechCrunch's request for comment.