Article reprint source: AIGC
Source: Hard AI
Author | Chang Jiashuai
This year, generative AI has undoubtedly entered a stage of "rapid development".
"Consumer-grade products" such as ChatGPT, Midjourney, and Wenxin Yiyan have brought AI into thousands of households; veteran technology giants such as Adobe and Microsoft have been "reborn" with the help of AI; and Nvidia, the "AI shovel seller" with its explosive performance and soaring market value, has become an absolute star in this year's capital market.
However, from the leading Microsoft and OpenAI to the fast-moving Google and Meta, most technology companies' AI products are still being run at a loss to build awareness, and it remains unclear whether consumers will actually pay for them.
The unclear downstream prospects have raised a series of questions:
Why are companies stockpiling so many GPUs? How much revenue is needed to recoup the investment? And who will ultimately pay for it?
On September 20, David Cahn, a partner at venture capital firm Sequoia, published an article summarizing these questions as “the $200 billion problem for the AI industry.”
David Cahn believes that in order to make a profit, the AI industry needs to achieve $200 billion in revenue, but it is currently $125 billion short...
Therefore, David Cahn believes that while it may be a good thing for companies to hoard large amounts of GPU computing power in the long run, it may cause chaos in the short term.
The following is a condensed version of David Cahn’s original text, please enjoy~ ✌️
Since last summer, the generative AI wave has entered hyperspeed mode. The catalyst for this acceleration was Nvidia’s Q2 earnings guidance and its subsequent performance that exceeded expectations. This showed the market that the demand for GPUs and AI model training is “insatiable”.
Prior to Nvidia’s announcement, consumer launches like ChatGPT, Midjourney, and Stable Diffusion had already pushed AI into the public eye. With Nvidia’s impressive results, founders and investors had empirical evidence that AI could create billions of dollars in new net revenue, prompting the field to move forward at full speed.
While investors have inferred a lot from Nvidia’s performance, with AI investments now occurring at a breakneck pace and valuations hitting records, an important question remains: What are all these GPUs being used for? Who are the ultimate customers? And how much value needs to be created to justify this rapid investment?
Consider the following scenario:
Every $1 spent on GPUs corresponds to roughly $1 in data center energy costs, which means that if Nvidia sells $50 billion worth of GPUs by the end of the year (a conservative analyst estimate), total data center spending will reach about $100 billion.
Assuming further that the GPU end customers, i.e. the companies building applications on those GPUs, operate at a 50% margin, at least $200 billion in revenue is needed to recoup that upfront investment. And this excludes the cloud providers' own profits; if they are to earn anything, the total revenue requirement climbs higher still.
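The back-of-envelope arithmetic above can be written out as a short calculation (the 1:1 GPU-to-energy cost ratio, the $50 billion GPU sales estimate, and the 50% margin are all assumptions taken from the article, not reported figures):

```python
# Back-of-envelope figures from the article, in billions of USD.
gpu_spend = 50                   # conservative analyst estimate of Nvidia GPU sales
energy_cost = gpu_spend * 1.0    # ~$1 of data center energy cost per $1 of GPUs
data_center_spend = gpu_spend + energy_cost  # total build-out cost

margin = 0.50                    # assumed margin for companies running AI on these GPUs
revenue_needed = data_center_spend / margin  # revenue required to recoup the spend

print(f"Data center spend: ${data_center_spend:.0f}B")  # $100B
print(f"Revenue needed:    ${revenue_needed:.0f}B")     # $200B
```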
According to public filings, most of the increase in data center construction comes from large technology companies: Google, Microsoft, and Meta have all reported significant growth in data center capital expenditures. Media reports indicate that companies such as ByteDance, Tencent, and Alibaba are also major Nvidia customers. Looking ahead, companies such as Amazon, Oracle, Apple, Tesla, and CoreWeave may also spend heavily on data center construction.
The important question to ask is: How much of this CapEx construction is related to real end-customer demand, and how much is being built based on “anticipated demand”? That’s the $200 billion question.
According to The Information, OpenAI's annual revenue is about $1 billion. Microsoft has said it expects products such as Copilot to bring in $10 billion in annual revenue. Adding in other companies, and assuming Meta and Apple can each earn $10 billion a year from AI while Oracle, ByteDance, Alibaba, Tencent, X, Tesla and others each earn $5 billion, the total still comes to only about $75 billion.
These are all hypotheticals, but the point stands: even granting AI a generous revenue boost, at today's spending levels the industry is still at least $125 billion short of the revenue needed to break even.
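Tallying the article's hypothetical revenue figures against the $200 billion requirement makes the gap explicit (every number here is an assumption from the text, not a reported result; the unnamed "other companies" account for the difference between the named sum and the article's ~$75 billion total):

```python
# Hypothetical annual AI revenue per the article, in billions of USD.
revenue_needed = 200   # from ~$100B of data center spend at a 50% margin
assumed_ai_revenue = {
    "OpenAI": 1,
    "Microsoft (Copilot etc.)": 10,
    "Meta": 10,
    "Apple": 10,
    "Oracle": 5, "ByteDance": 5, "Alibaba": 5,
    "Tencent": 5, "X": 5, "Tesla": 5,
}
named_total = sum(assumed_ai_revenue.values())  # unnamed "others" bring
stated_total = 75                               # the article's total to ~$75B
gap = revenue_needed - stated_total

print(f"Named companies: ${named_total}B")
print(f"Stated total:    ${stated_total}B")
print(f"Revenue gap:     ${gap}B")  # $125B
```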
There is a huge opportunity for startups to fill this gap, and our goal is to “follow the GPU” and find the next generation of startups that are using AI to create real end-customer value – and we want to invest in those companies.
The goal of this analysis is to highlight the gaps we see today.
The AI hype has finally caught up to the deep learning technology breakthroughs that have been developed since 2017. This is good news. A major capex build is happening. This should significantly reduce AI development costs in the long term. Before, you had to buy a rack of servers to build any application. Now, you can use the public cloud at a much lower cost.
Likewise, many AI companies today are investing the majority of their venture capital in GPUs. As today’s supply constraints give way to oversupply, the cost of running AI workloads will fall. This should spur more product development. It should also attract more founders to start businesses in this space.
In historical technology cycles, overbuilding of infrastructure has tended to burn capital but also unlock future innovation by reducing the marginal cost of developing new products. We expect this pattern to repeat itself in AI.
For startups, the lesson is clear: as a community, we need to shift our thinking from infrastructure to end-customer value. Happy customers are the fundamental requirement for every great business. In order for AI to have an impact, we need to find ways to use this new technology to improve people’s lives. How do we turn these amazing innovations into products that customers use, love, and are willing to pay for every day?
The construction of AI infrastructure is underway; infrastructure is no longer the bottleneck. Many foundation models are being developed, so that is no longer the bottleneck either. And today's AI tooling is already quite good.
So the $200 billion question is:
How are you going to use this infrastructure? How are you going to use it to change people’s lives?
This article is compiled from:
https://www.sequoiacap.com/article/follow-the-gpus-perspective/?utm_source=bensbites&utm_medium=referral&utm_campaign=dall-e-3-image-generation-in-chatgpt