
On April 9th local time, Intel announced its latest AI chip, the Gaudi 3 accelerator, at its Vision 2024 customer and partner conference. Intel claims that, compared with Nvidia's H100 GPU, Gaudi 3 improves model training speed by 40% and inference speed by 50%.
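As a quick aside on how to read those percentages: a speed gain and a time reduction are not the same number. A minimal sketch of the conversion, using only the figures quoted above:

```python
# "40% faster training" and "50% faster inference" are throughput claims;
# the corresponding wall-clock reduction is 1 / (1 + gain).
# The figures below simply restate the percentages quoted above.
train_speedup = 1.40   # claimed training-speed ratio vs. Nvidia H100
infer_speedup = 1.50   # claimed inference-speed ratio vs. Nvidia H100

print(f"Training time vs. H100:  {1 / train_speedup:.0%}")   # ~71%
print(f"Inference time vs. H100: {1 / infer_speedup:.0%}")   # ~67%
```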
In February this year, Intel launched its first systems foundry for the AI era, saying it is willing to manufacture chips for any customer, including Nvidia, Qualcomm, Google, Microsoft, and AMD, with the goal of becoming the world's second-largest foundry by 2030.
In recent years Intel, the industry's one-time leader, has pushed hard into both AI chip design and foundry services, taking aim at the incumbents in those two fields: Nvidia and TSMC.
Releasing a new generation of AI chips
According to Intel, the Gaudi 3 AI accelerator is aimed at helping enterprises move generative AI from experimentation into production. Intel positions Gaudi 3 as high-performance, cost-effective, and energy-efficient, and says it addresses enterprise requirements around complexity, cost, fragmentation, data reliability, and compliance.
Intel claims Gaudi 3 will significantly shorten training times for the 7-billion- and 13-billion-parameter Llama 2 models and the 175-billion-parameter GPT-3 model. Gaudi 3 is scheduled to ship to OEMs (original equipment manufacturers) in the second quarter of 2024.
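Why does parameter count dominate training time? Here is a rough back-of-the-envelope sketch, assuming the widely used rule of thumb that training compute is roughly 6 × parameters × training tokens; the token counts and the accelerator throughput below are illustrative assumptions, not figures from Intel or the article.

```python
# Rough training-compute estimate for the model sizes named above, using
# the common approximation:
#     training FLOPs ~= 6 * parameters * training tokens
# Token counts and the 1 PFLOP/s sustained throughput are illustrative
# assumptions, not figures from Intel or the article.

models = {
    "Llama 2 7B":  (7e9,   2.0e12),   # (parameters, assumed training tokens)
    "Llama 2 13B": (13e9,  2.0e12),
    "GPT-3 175B":  (175e9, 3.0e11),
}

SUSTAINED_FLOPS = 1e15  # hypothetical accelerator sustaining 1 PFLOP/s

for name, (params, tokens) in models.items():
    total_flops = 6 * params * tokens
    days = total_flops / SUSTAINED_FLOPS / 86_400  # seconds per day
    print(f"{name}: ~{total_flops:.1e} FLOPs, ~{days:,.0f} days at 1 PFLOP/s")
```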
The Gaudi series is Intel's chip line built specifically for AI workloads, positioned directly against Nvidia's AI chips.
AMD also launched its MI300 series of AI chips in early December 2023, claiming that the MI300X offers 2.4 times the memory density and 1.6 times the memory bandwidth of the Nvidia H100, delivering better inference performance.
The H100 GPU is the mainstream compute chip used to train large AI models. At its GTC conference in March 2024, Nvidia launched the next-generation Blackwell GPU series, with significantly higher performance than the H100: Blackwell delivers 20 petaflops of AI compute (1 petaflop is one quadrillion, or 10^15, floating-point operations per second), versus 4 petaflops for the H100.
A first push into AI chip foundry this year
In February this year, Intel also launched its first systems foundry for the AI era: Intel Foundry.
Intel CEO Pat Gelsinger said that AI is profoundly changing the world and the way we think about technology and the silicon that powers it, creating unprecedented opportunities for chip designers and for Intel Foundry. Intel, he said, is willing to manufacture chips for any company, including Nvidia, Qualcomm, Google, Microsoft, and AMD.
Intel customers including Microsoft have voiced support for Intel's systems foundry. Microsoft announced plans to use the Intel 18A process node to produce a chip of its own design. Across wafer fabrication and advanced packaging, Intel Foundry's expected deal value exceeds $15 billion.
Intel's expanded process technology roadmap (source: Intel)

Intel's goal is to become the world's second-largest foundry by 2030 and to have 50% of the world's semiconductors produced in the United States and Europe within a decade; today that share is about 20%, with most production concentrated in Asia.
Challenging two giants simultaneously
From launching AI chips to offering AI chip foundry services, Intel is effectively challenging both giants at once, and it is not shying away from manufacturing chips for long-time competitors.
According to research firm IoT Analytics, Nvidia holds a 92% share of the market for GPUs used in data centers. TSMC is Nvidia's main contract manufacturer.
According to TrendForce data, Intel entered the global top ten wafer foundries in the third quarter of 2023 but dropped out of the top ten in the fourth quarter. In the fourth quarter, TSMC's share of the wafer foundry market rose further from the previous quarter, exceeding 60%.
Why is Intel one of the few companies able to take on both giants single-handedly?
The root cause is that Intel is one of the few chipmakers that still uses the IDM (integrated device manufacturer) model, handling everything from design, manufacturing, packaging, and testing to selling chips under its own brand. Fabless companies such as Nvidia and AMD only design chips and outsource manufacturing entirely to foundries, while pure-play foundries such as TSMC and GlobalFoundries handle only manufacturing and packaging.
However, Intel, which lags a step behind in both AI chip design and manufacturing, clearly still has a significant gap to close.
Amid the semiconductor downturn, Intel's total revenue in 2023 was $54.2 billion, down 14% year on year. Its Data Center and AI group (DCAI) posted revenue of $15.5 billion in 2023, down 20% year on year, while the foundry business recorded an operating loss of $7 billion, roughly $1.8 billion more than in 2022. Intel expects the foundry business to break even around 2027.