
Nvidia has reported record revenue of $46.7 billion for its second fiscal quarter, a 56% year-over-year increase driven primarily by accelerating demand for AI infrastructure. However, nearly 40% of that revenue came from just two undisclosed “direct” customers, signaling both the immense scale of the AI boom and a heavily concentrated revenue stream.
These two customers, identified only as “Customer A” and “Customer B,” accounted for 23% and 16% of total Q2 revenue, respectively, or roughly $10.7 billion and $7.5 billion of the $46.7 billion total. Nvidia defines “direct” customers as original equipment manufacturers (OEMs), system integrators, and distributors; OEMs build components or finished products that other companies sell under their own brands. This structure implies that hyperscale cloud providers, among the largest consumers of AI chips, likely acquire Nvidia hardware indirectly through these direct partners, with large cloud providers reportedly accounting for about half of Nvidia's data center revenue.
For industrial leaders, this concentration underscores the high-stakes nature of the AI hardware supply chain. Nvidia's performance reflects the scale of investment flowing into AI capabilities that underpin advanced automation, predictive maintenance, and operational optimization in manufacturing and logistics, but it also ties the health of this critical component supplier to a handful of major buyers. That dependence could ultimately affect component availability and pricing for businesses deploying AI solutions across their operations.
Despite the risk inherent in such revenue concentration, analysts note that these large customers have substantial financial resources and are projected to continue heavy spending on data centers. This suggests sustained momentum in AI infrastructure development, even as the industry watches for diversification in demand channels to ensure long-term stability in the burgeoning AI hardware market.