Broadcom or Nvidia: who will dominate the new landscape of AI chips?
pipipen00007
Posted 6 days ago
Last Friday, Broadcom's US-listed shares surged 24.43%, pushing its market value past $1 trillion. On Monday this week, the stock rose another 11.21% to a market value of $1.17 trillion. After the company released its latest earnings report, which beat market expectations, market interest in customized AI chips has remained high. Even though multiple US-listed chip stocks fell on Tuesday and Broadcom's shares slipped 3.91%, its closing market value stayed above $1.1 trillion.
In the AI field, Broadcom's business covers customized, application-specific integrated circuits (ASICs) and Ethernet networking components. Broadcom works with three large cloud vendors to develop customized AI chips. As the more specialized chip, the ASIC stands in contrast to the more general-purpose GPU (graphics processing unit): the former camp includes Google, Meta, Amazon, and many startups, while the latter is led mainly by Nvidia and AMD.
The takeoff of Broadcom's stock price is only a prelude to the ASIC camp's counterattack on the GPU camp. Beyond cloud vendors replacing Nvidia GPUs with self-developed ASICs, there is also a wave of entrepreneurship in the ASIC field, with startups seeking customers worldwide. In the eyes of industry insiders, the battle between GPUs and ASICs is really a contest between the general-purpose and specialized camps. Until AI architectures settle, neither type of chip will completely replace the other, and the contest may not end with a clear winner and loser.
Who is driving Broadcom's results?
GPU giant Nvidia has been in the spotlight for so long that people can easily overlook the chip-making efforts of the cloud vendors behind it. Their ASIC deployments may run deeper than many imagine.
ASICs span multiple chip types, such as the TPU (tensor processing unit), LPU (language processing unit), and NPU (neural processing unit). Among cloud vendors, Google has been developing TPUs for many years, and its sixth-generation TPU, Trillium, officially became available to customers this month; Meta launched MTIA v2, a custom chip designed specifically for AI training and inference, this year; Amazon has Trainium2 and plans to release Trainium3 next year; Microsoft has developed its own AI chip, Azure Maia.
Perhaps because they do not sell these chips externally, cloud vendors' AI chips receive less market attention. In reality, these vendors have already deployed ASICs in their data centers and are focused on expanding their use.
Take Google: TechInsights data shows that last year Google quietly became the world's third-largest designer of data center processors, behind only CPU giant Intel and GPU giant Nvidia. Google does not sell chips externally; its TPUs run internal workloads.
Amazon has invested repeatedly in OpenAI competitor Anthropic, deepening its ties with the company, and Anthropic uses Amazon's Trainium chips. Amazon recently revealed that Project Rainier, the supercomputing cluster being built for Anthropic, will soon be completed, and that it is also adding capacity to meet other customers' demand for Trainium.
Orders for custom chip makers Broadcom and Marvell come from these cloud vendors. Google and Meta work with Broadcom on their custom ASICs; beyond Google, JPMorgan analysts predict that Meta will become the next ASIC customer to bring Broadcom $1 billion in revenue. Amazon, for its part, has partnered with Marvell: at the beginning of this month, Amazon AWS reached a five-year agreement with Marvell to expand cooperation on AI and data center connectivity products, helping Amazon deploy Marvell's semiconductor portfolio and dedicated networking hardware.
This shows up in the numbers: in fiscal 2024, Broadcom's revenue rose 44% year-on-year to a record $51.6 billion. AI revenue for the fiscal year grew 220% year-on-year to $12.2 billion, driving the company's semiconductor revenue to a record $30.1 billion. Broadcom also expects revenue to grow 22% year-on-year in the first quarter of fiscal 2025.
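A quick back-of-the-envelope check, using only the fiscal-2024 figures reported above, shows how central AI has already become to Broadcom's chip business:

```python
# Figures from Broadcom's fiscal-2024 report, in billions of USD.
ai_revenue = 12.2
semiconductor_revenue = 30.1
total_revenue = 51.6

ai_share_of_semis = ai_revenue / semiconductor_revenue
ai_share_of_total = ai_revenue / total_revenue

print(f"AI as a share of semiconductor revenue: {ai_share_of_semis:.0%}")
print(f"AI as a share of total revenue: {ai_share_of_total:.0%}")
```

In other words, AI already accounts for roughly two-fifths of Broadcom's semiconductor revenue and about a quarter of its total revenue.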
According to Marvell's fiscal Q3 2025 report released earlier this month, quarterly revenue was $1.516 billion, up 7% year-on-year and 19% sequentially. The company said the sequential growth rate exceeded the midpoint of its previous guidance, and it forecast revenue to grow another 19% sequentially next quarter. Marvell attributed the third-quarter results and the strong fourth-quarter outlook mainly to custom AI chip projects, which have entered mass production and are expected to see strong demand through fiscal 2026.
Beyond cloud providers such as Google, Meta, and Amazon, OpenAI and Apple have also repeatedly been reported to be working with custom ASIC makers. Apple is rumored to be developing an AI server chip and collaborating with Broadcom on the chip's networking technology. OpenAI previously disclosed that it has been working with Broadcom for several months to build an AI inference chip.
ASIC startups court customers
Cloud vendors have developed their own large models and invested in large-model startups; the chips they co-design with ASIC customization vendors are used to train and run inference for those models, with no need for external sales. ASIC startups are different: they choose their own foundries and must find customers themselves.
Among them, Cerebras Systems, maker of wafer-scale chips, has TSMC manufacture them, while Etched's Sohu chip uses TSMC's 4nm process. Groq's LPU, which adopts a near-memory compute architecture, has less demanding process requirements and uses GlobalFoundries' 14nm process.
These ASIC startups are courting customers worldwide, and seeking customers in Middle Eastern countries that are ramping up AI investment has become a common choice for some of them. According to Cerebras Systems' public filings, its net sales reached nearly $79 million in 2023 and $136.4 million in the first half of this year. In 2023, revenue from G42, based in Abu Dhabi, UAE, accounted for 83% of the total, and G42 has pledged to purchase $1.43 billion worth of Cerebras products and services next year.
Reporters also saw Cerebras Systems, Groq, and another AI chip startup, SambaNova Systems, at an AI summit in Saudi Arabia in September. Cerebras signed a memorandum of understanding with Saudi Aramco there; Aramco plans to train and deploy large models using Cerebras products.
Groq has partnered with Saudi Aramco's digital and technology subsidiary to build what it calls the world's largest inference data center in Saudi Arabia. The facility is due to come online by the end of this year, initially with 19,000 Groq LPUs and a planned expansion to 200,000. According to SambaNova Systems' website, the company has also partnered with Dubai-based Solidus AI Tech to provide SambaNova Cloud for high-performance computing data centers in Europe, and with Canvass AI, which operates across the Middle East, South Asia, Europe, and Africa, to provide AI solutions to enterprises.
In addition, according to its website, SambaNova Systems has partnered with Argonne National Laboratory in the United States. Groq works with Carahsoft, a vendor providing IT solutions to government agencies in the United States and Canada, and plans to build an AI compute center in Norway with Earth Wind & Power.
The debate between specialization and generality
The trade-offs between GPUs and ASICs are by now well understood. GPUs excel at general-purpose workloads, can run a wide variety of algorithms, and benefit from Nvidia's mature, easy-to-use CUDA ecosystem. The downside is that general-purpose GPUs can waste compute and energy. ASICs are more specialized; designs tailored to specific algorithms can deliver better compute and power efficiency. Take Groq's LPU: the company claims it is ten times faster than Nvidia GPUs at one-tenth the price and power consumption. But the more specialized an ASIC is, the fewer algorithms it can accommodate. Migrating large models that originally ran on GPUs onto ASICs may not be easy, and overall usability remains lower than that of GPUs.
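Taking Groq's claims above at face value (they are vendor figures, not independent benchmarks), the implied efficiency gap compounds: ten times the speed at one-tenth the power works out to roughly a hundredfold advantage in performance per watt:

```python
# Groq's vendor claims, expressed relative to an Nvidia GPU baseline of 1.0.
speedup = 10.0        # claimed: 10x faster
relative_power = 0.1  # claimed: 1/10 the power draw

# Performance per watt scales with speed divided by power.
perf_per_watt_ratio = speedup / relative_power
print(f"Implied performance-per-watt advantage: {perf_per_watt_ratio:.0f}x")
```

This is exactly the kind of multiplicative gain that makes specialized silicon attractive for inference, where power budgets dominate operating costs.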
With ASICs mounting an increasingly fierce challenge, are the two types of chips about to decide a winner? Or has the capital market's optimism about Broadcom dented its expectations for Nvidia? As Broadcom's market value crossed $1 trillion, Nvidia's stock fell for three consecutive sessions from last Friday through this Tuesday. "You need Nvidia, but I think the market is also saying that there are other beneficiaries besides that," commented Keith Lerner, co-chief investment officer at Truist. However, some chip industry insiders see the GPU-versus-ASIC battle as a contest between general-purpose and specialized chips. From that perspective, both have room to maneuver for some time, rather than one simply replacing the other.
In terms of use cases, one industry insider told reporters that GPUs are still needed for heavily parallelized general-purpose workloads, while other requirements can be served by lower-cost ASICs, such as low-power ASICs on the inference side. McKinsey research likewise suggests that AI workloads will shift mainly toward inference, and that by 2030 AI accelerators built on ASICs will handle the majority of AI workloads.
However, how much share ASICs can capture in the future AI chip market remains uncertain, because GPUs are absorbing the advantages of ASICs. Bao Minqi, product director at Anmou Technology, told reporters that GPUs will not necessarily be displaced by other chips. GPUs are mainly used in cloud AI, and they integrate more easily into software programming ecosystems such as OpenCL, CUDA, or SYCL, which is a convenience. From an energy-efficiency standpoint, however, GPUs incur significant multi-threaded context-switching overhead that cannot be ignored. Seen this way, in device-side scenarios GPUs and other chips will gradually converge rather than replace one another. Like the Tensor Cores in Nvidia's H100, which introduce more tensor-specific technology, chips are borrowing each other's strengths to shore up their own weaknesses.
Chen Wei, chairman of Qianxin Technology, likewise believes GPUs can still improve within their own paradigm to address shortcomings such as high energy consumption, and that this improvement amounts to absorbing the strengths of specialized chips.
"There is a contest between GPUs and other AI chip architectures, the old and the new competing. Microsoft, Tesla, Google, and others have already set out to build more specialized chips. Although Nvidia still centers on GPUs, its path has also shifted from traditional GPUs toward more specialized compute structures; its Tensor Core portion has clearly outgrown the original CUDA Core portion," Chen Wei told reporters.
ASICs designed specifically for large models are now multiplying, improving chip efficiency through ever more extreme specialization. Etched, for example, hard-wired the Transformer architecture underlying mainstream large models into its Sohu chip, claiming that a server with eight Sohu modules can match the performance of 160 Nvidia H100 GPUs. Chen Wei speculated that dedicated GPUs for large-model applications may also emerge, with GPU makers likely to further enhance the Tensor Core structure at the cost of some graphics support.
However, such extreme specialization is a double-edged sword. Another industry insider told reporters that the mainstream AI architecture today is the Transformer, but as AI evolves, the Transformer may not be the end state. Throughout that evolution, general-purpose GPUs can always be used, whereas once the mainstream architecture changes, specialized ASICs will be unable to adapt.
From this perspective, ASICs must weigh the risk of sacrificing generality. "That is where the generality of GPUs really matters," Bao Minqi told reporters; when the Transformer changes, GPUs will have the advantage. Take the NPU as an example: on one hand, the original DSA (domain-specific architecture) may be unable to cope with changes in algorithm flow, so more general capability must be considered for some vector computations; on the other hand, with general-purpose compute, a chip may not be optimized for specific computation types, creating performance bottlenecks. Designers therefore need to introduce more general compute to adapt to algorithmic change while balancing it against performance on specific tasks.
CandyLake.com is an information publishing platform and provides only information storage services.
Disclaimer: The views in this article are solely the author's own; they do not represent the position of CandyLake.com and do not constitute advice. Please treat them with caution.