Business Description
Nvidia designs graphics processing units (GPUs), which were originally used for video games and professional visualization. Their parallel computing performance has since been repurposed for Artificial Intelligence (AI) workloads, mostly in data centers and partly in "edge" devices, e.g. automotive chips for autonomous driving. Nvidia has since coined the term "accelerated computing" for this high-performance computing approach.
The units, also called chips, are only as useful as the software that can take advantage of their parallel computing power, so Nvidia also owns CUDA (originally Compute Unified Device Architecture), a proprietary parallel computing platform and application programming interface (API) that allows software to use its chips for accelerated general-purpose processing. On top of CUDA, Nvidia develops libraries and tools that are used by major AI training frameworks like TensorFlow and PyTorch. These frameworks and libraries are used by the top developers in the industry.
The high performance of its hardware, the widespread use of CUDA, and the ecosystem around the libraries create a strong moat for its business. To compete with Nvidia, a competitor would need to build a separate ecosystem: talent trained on other frameworks, running on other brands of hardware with equal if not higher performance per cost. That is not trivial. That is why Nvidia's earnings rose so fast from 2021 and then exploded in 2023-2024, riding first the autonomous-driving boom and then the AI boom.
Recently (2023-2024), it has also expanded into cloud-based AI services for businesses and developers, and into platforms for robotics, medical imaging, diagnostics, and scientific research and simulations.
Its main revenue source in 2024 is Data Center, at about 84% of total revenue. The second is Gaming, at 13%. The rest is Professional Visualization (2%) and Automotive (1%).
SWOT analysis
Strengths
Strong moat in parallel computing, created by the ecosystem consisting of the hardware, the platform (CUDA) that takes advantage of the hardware, and the libraries and frameworks built on the platform. Switching costs are so high that Nvidia can charge a premium for its chips. Its moat and the potential threats are best explained in Nvidia’s CUDA Monopoly.
Nvidia sells shovels for the hottest technology trends like AI, and its customers are rich (e.g. Microsoft, Amazon, Alphabet, Meta). These customers are also competing intensely with each other, so Nvidia's growth is limited only by how much they can pay, which is a lot and keeps growing.
Weaknesses
Relies on customers being able to make use of its technology, so it needs to make sure customers keep investing in current and future talent that knows its platform.
Relies on only a handful of manufacturers for its chips, primarily TSMC.
Opportunities
The AI boom ignited by large language models is real. It can boost productivity in so many areas that Nvidia should have no problem selling more and more shovels for the expanding use cases.
Threats
Its clients are intensely looking for replacement solutions because of the high cost of Nvidia chips, and these clients employ top-notch talent from the top technology companies. By building an abstraction layer on top of the actual hardware and spreading computing needs across different hardware, they may one day arrive at good replacements, as long as they can find hardware with comparable performance per cost, which would bypass the moat Nvidia has developed.
The applications of the chips may shrink if new algorithms are found that do not need parallel processing power.
The applications may become good enough at some point that clients no longer need to keep buying the best hardware in high quantity to improve performance for their use cases, e.g. AI models.
References
2024/02/26 Q4 FY24 Investor Presentation
2023/08/07 Nvidia’s CUDA Monopoly
Updates
2024/11/21 Updated valuation after 2024 Q3 earnings
It's close to year-end, so I am using 2025 expected EPS in my valuation. My valuation assumes a certain fast growth rate for the next 10 years (20% in this case), so usually I avoid recalculating the valuation with the fast-growth assumption extended one more year (i.e. 11 years from the original assumption). However, I am cutting Nvidia some slack given its credible growth prospects:
The AI industry is still booming and big tech companies are not slowing down capex, yet
Jensen Huang is still at the helm driving Nvidia's innovation engine. This ensures the ecosystem around Nvidia's chips will keep getting stronger.
I may not extend the fast growth one more year next year, though...
2024/09/04 2024 Q2 (fiscal 2025 Q2) earnings call notes
Revenue y/y up 122%, operating income up 156%, net income up 152%
Expect Q3 revenue up ~80%. Revenue is growing faster than expenses, so I expect 100+% growth in earnings (see the sketch below).
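To see why earnings can grow faster than revenue, here is a minimal sketch of the operating leverage at work. The ~80% revenue growth comes from the guidance above; the base figures and the 40% expense growth are my own illustrative assumptions, not Nvidia's actual numbers:

    # Hypothetical operating-leverage sketch. Only the ~80% revenue growth
    # comes from guidance; base figures and expense growth are assumptions.
    rev0, exp0 = 100.0, 40.0   # made-up base-quarter revenue and total costs
    rev1 = rev0 * 1.80         # revenue guided up ~80% y/y
    exp1 = exp0 * 1.40         # costs assumed to grow more slowly, ~40%
    earnings_growth = (rev1 - exp1) / (rev0 - exp0) - 1
    print(f"earnings growth: {earnings_growth:.0%}")   # ~107%

As long as expenses grow more slowly than revenue, earnings growth mechanically outpaces revenue growth.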
Fiscal 2025 Q2 earnings call transcript
One of the most important questions regarding the sustainability of Nvidia's growth, which depends on capex from the big tech companies, was asked by Toshiya Hari:
"...As you may know, there's a pretty heated debate in the market on your customers and customer's customers return on investment and what that means for the sustainability of CapEx going forward. Internally at NVIDIA, like what are you guys watching? What's on your dashboard as you try to gauge customer return and how that impacts CapEx?..."
Jensen Huang replied, "we're going through two simultaneous platform transitions at the same time. The first one is transitioning from accelerated computing to -- from general-purpose computing to accelerated computing. And the reason for that is because CPU scaling has been known to be slowing for some time. And it is slow to a crawl. And yet the amount of computing demand continues to grow quite significantly. You could maybe even estimate it to be doubling every single year.
And so if we don't have a new approach, computing inflation would be driving up the cost for every company, and it would be driving up the energy consumption of data centers around the world. In fact, you're seeing that. And so the answer is accelerated computing. We know that accelerated computing, of course, speeds up applications. It also enables you to do computing at a much larger scale, for example, scientific simulations or database processing. But what that translates directly to is lower cost and lower energy consumed.
And in fact, this week, there's a blog that came out that talked about a whole bunch of new libraries that we offer. And that's really the core of the first platform transition going from general purpose computing to accelerated computing. And it's not unusual to see someone save 90% of their computing cost. And the reason for that is, of course, you just sped up an application 50x, you would expect the computing cost to decline quite significantly.
The second was enabled by accelerated computing because we drove down the cost of training large language models or training deep learning so incredibly, that it is now possible to have gigantic scale models, multi-trillion parameter models, and train it on -- pre-train it on just about the world's knowledge corpus and let the model go figure out how to understand a human represent -- human language representation and how to codify knowledge into its neural networks and how to learn reasoning, and so -- which caused the generative AI revolution.
Now generative AI, taking a step back about why it is that we went so deeply into it is because it's not just a feature, it's not just a capability, it's a fundamental new way of doing software. Instead of human-engineered algorithms, we now have data. We tell the AI, we tell the model, we tell the computer what's the -- what are the expected answers, What are our previous observations. And then for it to figure out what the algorithm is, what's the function. It learns a universal -- AI is a bit of a universal function approximator and it learns the function.
And so you could learn the function of almost anything, you know. And anything that you have that's predictable, anything that has structure, anything that you have previous examples of. And so now here we are with generative AI. It's a fundamental new form of computer science. It's affecting how every layer of computing is done from CPU to GPU, from human-engineered algorithms to machine-learned algorithms. And the type of applications you could now develop and produce is fundamentally remarkable.
And there are several things that are happening in generative AI. So the first thing that's happening is the frontier models are growing in quite substantial scale. And they're still seeing -- we're still all seeing the benefits of scaling. And whenever you double the size of a model, you also have to more than double the size of the dataset to go train it. And so the amount of flops necessary in order to create that model goes up quadratically. And so it's not unusual -- it's not unexpected to see that the next-generation models could take 20 -- 10, 20, 40 times more compute than last generation.
So we have to continue to drive the generational performance up quite significantly, so we can drive down the energy consumed and drive down the cost necessary to do it. So the first one is, there are larger frontier models trained on more modalities and surprisingly, there are more frontier model makers than last year. And so you have more on more on more. That's one of the dynamics going on in generative AI. The second is although it's below the tip of the iceberg. What we see are ChatGPT, image generators, we see coding. We use a generative AI for coding quite extensively here at NVIDIA now.
We, of course, have a lot of digital designers and things like that. But those are kind of the tip of the iceberg. What's below the iceberg are the largest systems -- largest computing systems in the world today, which are -- and you've heard me talk about this in the past, which are recommender systems moving from CPUs, it's now moving from CPUs to generative AI. So recommended systems, ad generation, custom ad generation targeting ads at very large scale and quite hyper targeting search and user-generated content. These are all very large-scale applications have now evolved to generative AI.
Of course, the number of generative AI startups is generating tens of billions of dollars of cloud renting opportunities for our cloud partners and sovereign AI. Countries that are now realizing that their data is their natural and national resource and they have to use -- they have to use AI, build their own AI infrastructure so that they could have their own digital intelligence.
Enterprise AI, as Colette mentioned earlier, is starting and you might have seen our announcement that the world's leading IT companies are joining us to take the NVIDIA AI enterprise platform to the world's enterprises. The companies that we're talking to. So many of them are just so incredibly excited to drive more productivity out of their company.
And then I -- and then General Robotics. The big transformation last year as we are able to now learn physical AI from watching video and human demonstration and synthetic data generation from reinforcement learning from systems like Omniverse. We are now able to work with just about every robotics companies now to start thinking about start building on general robotics. And so you can see that there are just so many different directions that generative AI is going. And so we're actually seeing the momentum of generative AI accelerating."
Ultimately, Jensen's belief in the sustainability of growth boils down to 1) the transition from CPUs to GPUs to keep speeding up application computation, and 2) the vast applications of generative AI.
Nvidia's economic moat is still solid. Buying at this level (~$110) is just a matter of whether it beats the S&P 500 by a wide or a narrow margin long-term, not a matter of right or wrong.
2024/08/02 Valuation
Its earnings went up so rapidly, by hundreds of percent, that it is hard to project how much growth is left for the company. It looks "too expensive" once you realize its market cap has already reached $2.6 trillion. The estimate for calendar year 2025 (fiscal year 2026) EPS is $3.72.
If we use that as a starting point, believe the company can grow 20% annually for 10 years, and assume a P/E of 20 ten years later, we get the following buy-below price with a discount rate of 15%:
$3.72 × 1.2^10 × 20 / 1.15^10 ≈ $113 (implied current P/E ≈ 30.4)
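For reproducibility, here is the same arithmetic as a short Python sketch. All inputs are the assumptions stated above, not forecasts from Nvidia; the small differences from the rounded figures above are just rounding:

    # Buy-below price: grow EPS for 10 years, apply an exit P/E, discount back.
    eps = 3.72                 # estimated calendar-2025 (fiscal-2026) EPS
    growth, years = 0.20, 10   # assumed annual EPS growth and horizon
    exit_pe = 20               # assumed P/E at the end of year 10
    discount = 0.15            # required annual return (discount rate)

    future_price = eps * (1 + growth) ** years * exit_pe   # ~$460.7
    buy_below = future_price / (1 + discount) ** years     # ~$113.9
    print(f"buy below: ${buy_below:.2f}, implied P/E: {buy_below / eps:.1f}")

The implied P/E of about 30 is simply the buy-below price divided by the starting EPS, i.e. the multiple you would be paying today under these assumptions.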
Analysts are projecting about 10% annual growth for the 8 years after fiscal year 2026, so my assumption may be too optimistic. However, given my optimistic view on AI, I don't think the 20% annual growth assumption carries much downside risk.