
After presenting an annual iteration roadmap in June this year, AMD held its Advancing AI 2024 event on October 10th local time and showed what it has delivered so far.
"We believe that high-performance computing is a fundamental component of the modern world, and we are using technology to solve the world's most important challenges, whether in cloud, healthcare, industrial, automotive, PC, or gaming," AMD CEO Lisa Su stated.
AMD is a major manufacturer of CPUs (central processing units) and GPUs (graphics processing units), competing with Intel and Nvidia in those two fields respectively. On the CPU side, AMD launched the fifth-generation EPYC server CPU, which scales up to 192 cores and 384 threads. AMD also emphasized that the EPYC 9965 holds multiple performance advantages over Intel's Xeon Platinum 8592+ CPU.
On the GPU side, AMD showcased the MI325X, the latest GPU in the Instinct MI300 series, which carries 256GB of HBM3e memory with up to 6TB/s of memory bandwidth. The GPU will enter production in the fourth quarter of this year and ship to AMD's partners in the first quarter of next year.
According to AMD, the MI325X offers 1.8 times the memory capacity of the Nvidia H200, and its peak theoretical compute at FP16 and FP8 precision is 1.3 times that of the H200. "When running the Llama 3.1 405B large model, the MI325 platform can deliver 40% higher performance than the Nvidia H200," Lisa Su said.
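As a rough check on the 1.8x memory figure, the sketch below compares the announced 256GB against the H200's publicly listed 141GB of HBM3e; the H200 number is an assumption drawn from Nvidia's published specifications rather than from this article.

```python
# Rough sanity check of the stated 1.8x memory-capacity ratio.
# Assumes the Nvidia H200's publicly listed 141 GB of HBM3e (not stated in this article).
mi325x_memory_gb = 256   # per AMD's announcement
h200_memory_gb = 141     # assumed H200 spec

ratio = mi325x_memory_gb / h200_memory_gb
print(f"MI325X / H200 memory capacity: {ratio:.2f}x")  # ~1.82x, consistent with the ~1.8x claim
```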
As Nvidia's main rival in the GPU field, AMD is racing to catch up. After AMD launched the MI300 for AI and high-performance computing, the market responded: the product's revenue exceeded $1 billion in the second quarter of this year. Following its Q2 report, AMD said it expects data center GPU sales to reach $4.5 billion in 2024. Even so, AMD's data center GPU revenue remains far smaller than Nvidia's: in the second quarter of fiscal year 2025, which ended on July 28th, Nvidia's data center revenue was $26.3 billion.
Nvidia and AMD are also racing on chip iteration. In June of this year, Nvidia released an iteration roadmap, planning to launch the Blackwell Ultra AI chip in 2025 and setting a cadence of one generation per year. Unwilling to fall behind, AMD subsequently announced its own annual cadence. Under that plan, AMD will follow the MI300X and MI325X with the MI350 series next year, built on the CDNA4 architecture, while the MI400 series due in 2026 will adopt an even more advanced CDNA architecture.
AMD reiterated that roadmap and disclosed some new details. For example, the company will launch the MI355X GPU platform in the second half of next year. The MI355X's FP8 and FP16 performance is 80% higher than the MI325X's, and its FP6 and FP4 peak performance reaches 9.2 PFLOPS. Compared with the CDNA3 architecture used in the MI325X, CDNA4 promises a 35-fold improvement in AI performance and 1.5 times the memory capacity. The CDNA4-based MI350 series arriving next year will carry 288GB of HBM3e memory.
AMD is broadening its AI footprint and showing ambitions beyond CPU and GPU hardware. In August of this year, AMD announced plans to spend $4.9 billion to acquire server maker ZT Systems. Speaking about the deal, Lisa Su said she hopes to build out the artificial intelligence software stack; with the acquisition, AMD will combine the various pieces to offer a true roadmap for artificial intelligence solutions. Riding the wave of AI deployment, Su said the share of AMD EPYC CPUs in the data center CPU market has climbed to 34%, up from just 2% in 2018.
On the financial side, AMD's data center segment, which includes CPUs and GPUs, generated revenue of $2.834 billion in the second quarter of this year, up 115% year-on-year and 21% quarter-on-quarter, a record high. The growth was driven mainly by a sharp increase in Instinct GPU shipments and rising sales of fourth-generation EPYC CPUs. Lisa Su previously predicted that the market for data center AI accelerators would grow from $45 billion in 2023 to over $400 billion in 2027. This time, she said demand for artificial intelligence keeps growing beyond expectations: over the next four years, the AI accelerator market is expected to grow at roughly 60% per year and reach $500 billion by 2028.
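A quick compound-growth check, taking the $45 billion 2023 figure as the base, is shown below; the base year is an assumption, since the article does not state exactly which year the 60% annual growth rate starts from.

```python
# Compound-growth sanity check for the AI accelerator market forecast.
# Assumes the $45B 2023 figure as the starting point; the article does not
# state the exact base year for the 60% annual growth rate.
base_2023_billion = 45
annual_growth = 0.60
years = 2028 - 2023  # five years of compounding

projected_2028 = base_2023_billion * (1 + annual_growth) ** years
print(f"Projected 2028 market: ~${projected_2028:.0f}B")  # ~$472B, in the ballpark of the $500B claim
```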
AMD also emphasized its partnerships with vendors such as Oracle, Google, Microsoft, and Meta. Lisa Su said that generative AI platforms from vendors including Microsoft, OpenAI, Meta, and Cohere already run on MI300 series accelerators. Meta revealed that it has deployed more than 1.5 million EPYC CPUs.
Beyond data center CPUs and GPUs, AMD also launched the new Ryzen AI PRO 300 series for mobile devices. The processor uses the Zen 5 architecture and integrates an NPU (neural processing unit) delivering 50-55 TOPS of AI compute.
The stock market's reaction on the day of the event, however, was less enthusiastic: on October 10th, AMD's shares fell 4% to close at $164.18.