
Nvidia (NVDA) has gone on a tear.
In just the week since Nvidia released its earnings report, its share price has risen 15.85%, adding $371.8 billion in market value, an increase roughly equal to ten HPs, or more than one and a half Qualcomms (QCOM, with a market value of $228.6 billion as of May 30).
As of press time, Nvidia's total market value stood at $2.72 trillion, the third highest in the US stock market. That is only about $200 billion short of Apple's $2.93 trillion, and $360 billion short of Microsoft's $3.08 trillion, the top spot in the US market.
According to the report, Nvidia's revenue for the first quarter of fiscal 2025 reached $26.04 billion, up 262% year on year; net profit was $14.88 billion, up 628% year on year; and second-quarter revenue is expected to be around $28 billion. The data center segment, which accounts for the largest share of revenue, posted a record $22.6 billion in the first quarter, up 23% from the previous quarter and 427% year on year. Nvidia's overall gross margin for the first quarter was as high as 78.9%, and the full-year gross margin is expected to be around 70%.
At the same time, Nvidia announced that its ten-for-one stock split would take effect on June 7. In after-hours trading, Nvidia's share price jumped 6%, breaking through $1,000 per share for the first time and hitting a record high.
In the Chinese market, however, Nvidia's position is not as strong as its earnings suggest. The day after the report was released, foreign media reported that, in response to intensifying competition, Nvidia had cut the price of the H20, a chip series designed specifically for the Chinese market; servers equipped with eight of the chips are priced at roughly 1.1 million to 1.3 million yuan each.
Time Weekly reporters sought confirmation of the report from Nvidia; as of press time, the company had not given a specific response.
Yet according to what Time Weekly reporters learned from multiple artificial intelligence and chip companies, what makes Nvidia "terrifyingly strong" is not only its top-end GPUs but also its ecosystem. At present, most companies are still running the H100 and A800 chips they stockpiled, and many have not bought the H20 because of its performance. Some companies admit, however, that the chip ecosystem Nvidia has built up over many years may be a path that cannot be avoided for now.
"If one chip lacks computing power, it's better to spend more money to buy a few more chips." Industry insiders have pointed out that although Nvidia chips supplied to the Chinese market have greatly reduced performance, some companies are still willing to purchase them for the sake of the ecosystem.
At present, domestic chip manufacturers are trying to break through from multiple directions, such as optimizing algorithms, upgrading services, and exploring open-source architectures.
"Do your best" in the Chinese market
"We'll do our best."
During the first-quarter earnings call, Huang Renxun (Jensen Huang) talked about the Chinese market. He said that because of restrictions on technology exports and increasingly fierce competition in China, Nvidia's business there has declined compared with the past, but that the company remains committed to doing everything it can to serve customers and the market.
Since Nvidia launched the H20 for the Chinese market at the beginning of the year, the controversy around the chip has never died down.
The H20 is a cut-down version of the H100 that Nvidia sells overseas, with less than 15% of the H100's computing power. Reuters previously reported that in certain scenarios the H20 performs worse than the domestic AI chip Huawei Ascend 910B, yet it is priced at nearly the same level as the 910B.
This has cooled H20 sales in China. The Wall Street Journal, citing people familiar with the matter, reported that the number of chips that major Chinese customers such as Alibaba, Tencent, Baidu, and ByteDance have ordered from Nvidia this year will be far fewer than the high-performance Nvidia chips they had originally planned to buy.
Nvidia also appears to be seeking a change. Reuters recently reported exclusively that in some cases the H20 is being priced more than 10% below Huawei's Ascend 910B.
Although Nvidia did not confirm the report to Time Weekly, the message the company has conveyed publicly is that it still intends to fight actively for its share of the Chinese market.
In fiscal 2022 and 2023, revenue from the Chinese market (including the Chinese mainland, Hong Kong, and Taiwan) accounted for 58% and 47% of Nvidia's global revenue, respectively. In fiscal 2024, the breakout year for generative AI, the Chinese market's share actually fell to 39%.
Image source: Screenshot from NVIDIA financial report
Huang Renxun has emphasized the importance of the Chinese market more than once in public interviews. In 2024, he also visited Nvidia's offices in Shenzhen, Shanghai, and Beijing, and celebrated the New Year with employees.
On May 26, media in Taiwan, China reported that Huang Renxun and his wife had arrived in Taiwan to attend COMPUTEX 2024 and would visit supply chain companies including Hon Hai, Quanta, and TSMC. He has previously said in an interview that a large number of the components in Nvidia's chips are produced in China and that the globalized supply chain is difficult to break apart.
Nvidia also places great emphasis on cultivating its "circle of friends" in China's ecosystem. At the recent BEYOND Expo 2024 in Macau, Nvidia held a dedicated session to promote its startup acceleration program.
It is understood that OpenAI joined Nvidia's startup acceleration program six years ago; today, the AI dividends that OpenAI set off are feeding back into Nvidia in force.
According to Lou Ming, director of Nvidia's startup ecosystem in China, more than 19,000 companies worldwide had joined the program as of the end of 2023, over 2,000 of them in China, the second-largest share globally.
Image source: Photographed by a reporter from Time Weekly
In this presentation, Nvidia also repeatedly mentioned Blackwell, its new-generation AI chip architecture. Unveiled at the GTC conference in March this year, it delivers a striking improvement over the previous-generation H100, with Nvidia claiming performance up to 30 times that of Hopper on some inference workloads.
However, no release plan for Blackwell in the Chinese market has been disclosed. A person close to Nvidia told Time Weekly that, because of export control regulations, Blackwell is currently not available domestically.
The Nvidia that cannot be bypassed?
Nvidia once held a very high share of the Chinese market. At the beginning of this year, Frank Kung, an analyst at research firm TrendForce, said that roughly 80% of the high-end AI chips used by Chinese cloud computing companies currently come from Nvidia.
As US chip regulations tighten, that share will fall, but Kung estimates it will still reach 50% to 60% over the next five years.
But why are there still companies willing to pay for the Nvidia H20, which has seen a significant decline in performance?
After interviewing multiple domestic AI-related enterprises and chip manufacturers, Time Weekly reporters received almost the same answer: the ecosystem.
The ecosystem here refers to CUDA (Compute Unified Device Architecture), the parallel computing platform and programming model that Nvidia provides to developers.
In 2006, Nvidia released the first version of CUDA, marking the GPU's formal entry into general-purpose computing. From the start, CUDA offered a new programming model that let developers use Nvidia GPUs for high-performance computing, a revolutionary step at the time because it extended the GPU beyond graphics rendering to a much broader range of computing tasks.
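As a rough illustration of what that programming model looks like (a minimal sketch written for this article, not code taken from Nvidia's documentation), the snippet below adds two arrays on the GPU: the developer marks a function with __global__, and the CUDA runtime copies the data to the device and launches thousands of threads to run it in parallel.

```cuda
#include <cuda_runtime.h>
#include <cstdio>
#include <cstdlib>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host-side buffers.
    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes), *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers and host-to-device copies.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256, blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and check one element (expect 3.0).
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

The <<<blocks, threads>>> launch syntax and the cudaMalloc/cudaMemcpy calls are CUDA-specific constructs, and it is exactly this kind of code that ties a project to Nvidia hardware.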
At the time, however, the industry did not yet see the value of CUDA. Huang Renxun pushed ahead despite the skepticism, investing in, developing, maintaining, and promoting CUDA, and only saw the dawn six years later.
In 2012, Alex Krizhevsky of the University of Toronto won the ImageNet image recognition competition using Nvidia GPUs, which made the GPUs famous overnight.
Since then, the CUDA ecosystem has iterated and grown rapidly. According to research from Huatai Securities, the number of CUDA developers worldwide reached 2 million in 2020 and 4 million by 2023, and the user base includes large corporate customers such as Adobe.
Image source: Screenshot from NVIDIA's official website
This has become Nvidia's strongest moat. The head of a technology company working on AIGC visual research told Time Weekly that users have little incentive to migrate away from CUDA: migration means rewriting code, which costs a great deal of time and money.
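To make that migration cost concrete, here is a hedged sketch (our own illustration, not code from any company interviewed) of a typical dependency: host code calling cuBLAS, Nvidia's GPU linear-algebra library, to multiply two matrices. The handle type, the library call, and the memory-management functions are all Nvidia-specific, so porting to another vendor's stack means replacing each of them with a different API and then re-validating correctness and performance.

```cuda
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <vector>

// Compute C = A * B for two n x n matrices with cuBLAS.
// Every cublas*/cuda* call below exists only on Nvidia's stack;
// moving to another vendor means rewriting all of them.
int main() {
    const int n = 512;
    std::vector<float> hA(n * n, 1.0f), hB(n * n, 2.0f), hC(n * n);

    float *dA, *dB, *dC;
    cudaMalloc(&dA, n * n * sizeof(float));
    cudaMalloc(&dB, n * n * sizeof(float));
    cudaMalloc(&dC, n * n * sizeof(float));
    cudaMemcpy(dA, hA.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB.data(), n * n * sizeof(float), cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);

    // Single-precision matrix multiply (cuBLAS uses column-major layout).
    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                n, n, n, &alpha, dA, n, dB, n, &beta, dC, n);

    cudaMemcpy(hC.data(), dC, n * n * sizeof(float), cudaMemcpyDeviceToHost);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```

Multiply this by the thousands of kernels and library calls in a real training or inference codebase, and the time and money the executive describes add up quickly.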
The head of a robotics company also acknowledged that, thanks to years of adjustment and optimization by Nvidia and extensive trial and error by developers, CUDA is currently the most stable ecosystem.
A chip industry insider told Time Weekly that in recent years some in the industry have tried to translate CUDA code so that it can run on their own chips. For example, overseas developers built ZLUDA, a software project for running CUDA applications on Intel and AMD hardware. The project attempts to break down the barriers of Nvidia's CUDA ecosystem by letting CUDA applications run on third-party hardware without modifying the source code, and at one point it received support from both Intel and AMD.
But this approach easily runs into copyright and intellectual-property issues. As early as 2021, Nvidia prohibited other hardware platforms from using translation layers to run CUDA software. In February of this year, an engineer in Germany noticed that the End User License Agreement (EULA) installed with CUDA 11.6 had added a clause to its list of restrictions: "You may not reverse engineer, decompile or disassemble any portion of the output generated using SDK elements for the purpose of translating such output artifacts to target a non-NVIDIA platform." By then the ZLUDA project was already struggling to keep going, and it was open-sourced at the beginning of this year.
The chip industry insider quoted above added that some companies instead choose open alternatives such as Intel's oneAPI or AMD's ROCm, but these stacks often cannot match CUDA's performance.
Today, Nvidia is eager to push its ambitions further into the software layer. Huang Renxun has repeatedly stressed in interviews that Nvidia is a platform company, which sets it apart from any manufacturer that only supplies chips.
"Nvidia is not just about hardware, we are a full stack platform supplier," said Shi Chengqiu, Senior Technical Marketing Manager of Nvidia China, at the BEYOND Expo 2024 event. In his description, Nvidia built a computing power system or solution from chips, transmission to software, based on which entrepreneurs only need to focus on how to train their algorithms well. In addition, Nvidia also provides algorithmic support as much as possible, such as offering various pre trained models for different partners in different segmented vertical application markets.
Beyond CUDA, Nvidia is also cultivating another new ecosystem. In 2020 it announced a new class of processor, the DPU, along with DOCA, a software framework tailored for it. DOCA, short for Data Center Infrastructure on a Chip Architecture, is designed to deliver breakthrough networking, storage, and security performance on the DPU.
In AI applications, the value of the GPU and CUDA has already been validated by the market, and the DPU and DOCA may become another new growth point for Nvidia. Liu Nianning, Nvidia global vice president and head of enterprise marketing for China, has said publicly that in the era of generative AI, the DPU is key to the accelerated computing platforms, or "AI factories," that enterprises are building.
According to data released by Nvidia, nearly half of DOCA developers worldwide in 2022 came from China, and the number of CUDA and DOCA developers in China has now exceeded one million.
"DOCA is still a little baby now, but today's DOCA is just like CUDA 20 years ago." The relevant person in charge of Nvidia's network marketing department said that if companies purchase CPUs GPU and other metaphors refer to buying a racing car, so CUDA DOCA is like the wheels of a racing car. Only when the wheels of a racing car are strong and strong enough can they support the car to run on different roads and over longer distances.
Domestic chip acceleration
"The biggest gap between China and the United States (developing AI) lies in computing power, which is basically about 10 times the gap," said Xu Bing, co-founder of Shangtang Technology, at the BEYOND Expo 2024.
In his view, the United States holds the vast majority of cutting-edge, high-performance Nvidia GPUs, which countries and companies around the world are rushing to buy. Over the past decade, artificial intelligence has created more than $2 trillion in value, roughly corresponding to Nvidia's current market value.
But Xu Bing believes the computing power gap between China and the United States can be closed with heavy investment. On one hand, domestic chips are developing rapidly; on the other, computing power is essentially a commodity with strong financial attributes: it has taken on real-estate-like investment characteristics and can be scaled up through capital turnover, leverage, and other means.
Meanwhile, China has fertile soil for AI. "China is the only country in the world with the potential to become the second to achieve emergent intelligence," said Liu Qingfeng, chairman of iFlytek.
And wherever there is demand, there is a market. Domestic chips have in fact already made some progress in breaking out of the encirclement. In Time Weekly's interviews, some companies described meeting their computing power needs through indirect routes.
For example, the self-developed chips of Yuntian Lifei (Intellifusion, 688343.SH) follow an "algorithms into silicon" path focused on specific scenarios: algorithms are defined by the scenario, chips are then defined by the algorithms, and custom instruction sets, processor architectures, and toolchains are co-designed so that the algorithms are baked into the chip.
Other companies, such as iFlytek (002230.SZ), squeeze out maximum efficiency through algorithmic optimization. iFlytek recently announced a price cut for its large models, and in an interview with Time Weekly and other media, iFlytek vice president Wang Wei discussed how the company optimizes costs.
Wang Wei said that iFlytek's large model is now built entirely on a domestically developed, independent computing power base. In January of this year, iFlytek brought up a 10,000-card cluster built on Huawei Ascend chips.
"When using domestically produced computing power, the initial efficiency can only reach 30% to 40% of Nvidia's. Later, continuous training was conducted on algorithms, computing power, and operator libraries, and energy efficiency has improved significantly. Of course, (cost optimization) also includes the electricity discount provided by the government for the AI industry. Make good use of these, and then optimize the entire model," said Wang Wei.
Chip design service provider Zhuhai Lingyange Chip Technology Co., Ltd. ("Lingyange") is taking a different path. A member of Lingyange's business staff told Time Weekly: "To solve the computing power problem, we take the AI chips we develop for customers and build them into a complete AI PC box." By combining the complete machine, the AI board, and software algorithms, the company launches industry-specific applications and delivers a full set of solutions, so customers can plug and play and the barrier to use is lowered. One example is Xuanming No. 1, the high-precision AI PC server Lingyange has developed.
A person in charge at Guangdong Saifang Technology Co., Ltd. ("Saifang Technology") offered yet another approach: the company hopes to break the existing market pattern with AI chips built on the RISC-V architecture.
RISC-V is an open, royalty-free instruction set architecture and a new option for chip design beyond x86 and Arm. Saifang Technology believes that for domestic companies, RISC-V is an excellent opportunity to realize "China chips" and strengthen independent innovation capabilities. In terms of development trends and market prospects, it sees RISC-V as having more vitality and potential than other processor IP.
Of course, the development of general-purpose chips is just as urgent, but for now there are not many domestic manufacturers capable of cracking that hard nut.