AMD launches new AI high-performance computing solution
因醉鞭名马幌
Published on 2024-10-12 10:17:11
On October 11, at its Advancing AI 2024 event, AMD launched a new set of AI high-performance computing solutions, including the fifth-generation AMD EPYC server CPUs, the AMD Instinct MI325X accelerator, the AMD Pensando Salina DPU, the AMD Pensando Pollara 400 NIC, and the AMD Ryzen AI PRO 300 series processors for enterprise AI PCs.
Dr. Lisa Su, Chair and CEO of AMD, stated, "With our new EPYC CPUs, AMD Instinct GPUs, and Pensando DPUs, we will provide leading compute to power our customers' most important and most demanding workloads. Looking ahead, we expect the data center AI accelerator market to grow to $500 billion by 2028."
In 2018, AMD EPYC server CPUs held just a 2% market share; in under seven years that share has grown to 34%. Data centers and AI have brought enormous growth opportunities for AMD.
EPYC CPU fully upgraded
As one of AMD's core products, AMD EPYC server CPUs have undergone a comprehensive upgrade.
The fifth-generation AMD EPYC server CPUs (the AMD EPYC 9005 series), codenamed "Turin," use the "Zen 5" core architecture, are compatible with the SP5 platform, offer up to 192 cores, and reach a maximum frequency of 5GHz. The AVX-512 instruction set is supported with a full 512-bit-wide data path.
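That full 512-bit data path means a single vector instruction can operate on sixteen 32-bit floats at once. As a rough illustration of what that width looks like in code (a generic sketch, not AMD sample code, assuming a compiler with AVX-512 support, e.g. gcc -O2 -mavx512f):

    /* Minimal AVX-512 sketch: add two arrays sixteen floats at a time. */
    #include <immintrin.h>
    #include <stdio.h>

    int main(void) {
        float a[16], b[16], c[16];
        for (int i = 0; i < 16; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

        __m512 va = _mm512_loadu_ps(a);    /* load 16 floats = 512 bits */
        __m512 vb = _mm512_loadu_ps(b);
        __m512 vc = _mm512_add_ps(va, vb); /* one instruction, 16 additions */
        _mm512_storeu_ps(c, vc);

        for (int i = 0; i < 16; i++) printf("%.1f ", c[i]);
        printf("\n");
        return 0;
    }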
AMD describes the AMD EPYC 9005 as an advanced CPU designed for AI. Compared with legacy hardware, the EPYC 9005 can deliver equivalent integer compute performance with significantly fewer racks, greatly reducing physical space, power consumption, and the number of software licenses required, freeing up capacity for new or growing AI workloads.
AMD also said the AMD EPYC 9005 delivers outstanding AI inference performance: compared with the previous generation, a server running two fifth-generation AMD EPYC 9965 CPUs can provide up to twice the inference throughput.
"AMD has proven that it can meet the needs of the data center market and has set the benchmark for data center performance, efficiency, solutions, and capabilities that customers demand for cloud, enterprise, and AI workloads," said Dan McNamara, Senior Vice President and General Manager of AMD's Server Division.
Instinct GPU steadily advances
As an important vehicle for AI computing power, the AMD Instinct GPU line has also been updated and iterated, and AMD announced its GPU product roadmap for 2025 and 2026.
The AMD Instinct MI325X is built on the third-generation AMD CDNA architecture and features 256GB of HBM3E memory with 6TB/s of memory bandwidth, delivering strong training and inference performance and efficiency and setting a new standard for AI performance. According to data released by AMD, the Instinct MI325X outperforms the Nvidia H200 in inference across multiple models.
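To put those memory figures in perspective, a quick back-of-envelope calculation (our own illustration based on the quoted specs, not AMD benchmark data; the 70-billion-parameter FP16 model is an assumed example) shows that a full sweep of 256GB at 6TB/s takes roughly 43 ms, and that 70 billion FP16 weights at 2 bytes each occupy about 140GB of the available capacity:

    /* Back-of-envelope arithmetic using the MI325X figures quoted above.
       Illustrative only; the 70B-parameter FP16 model is an assumption. */
    #include <stdio.h>

    int main(void) {
        const double capacity_gb  = 256.0;  /* HBM3E capacity (GB)        */
        const double bandwidth_tb = 6.0;    /* memory bandwidth (TB/s)    */

        /* Time to read the full memory once, a bound per memory-bound pass. */
        double sweep_ms = capacity_gb / (bandwidth_tb * 1000.0) * 1000.0;

        /* Footprint of a hypothetical 70B-parameter model in FP16 (2 bytes). */
        double model_gb = 70e9 * 2.0 / 1e9;

        printf("Full-memory sweep: %.1f ms\n", sweep_ms);   /* ~42.7 ms */
        printf("70B FP16 weights:  %.0f GB of %.0f GB\n", model_gb, capacity_gb);
        return 0;
    }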
AMD stated that the Instinct MI325X is expected to enter production and begin shipping in the fourth quarter of 2024, with complete systems and infrastructure solutions from partners such as Dell, Gigabyte, HP, and Lenovo arriving starting in the first quarter of 2025.
In terms of future product plans, the AMD Instinct MI350, based on the AMD CDNA 4 architecture, will deliver up to a 35x improvement in inference performance over accelerators built on the AMD CDNA 3 architecture. The Instinct MI350 can be equipped with up to 288GB of HBM3E memory and is expected to launch in the second half of 2025.
AMD also announced significant progress in the development of the AMD Instinct MI400 based on the AMD CDNA Next architecture, with plans to launch it in 2026.
Improving AI network performance
Currently, AI networks are crucial for ensuring effective utilization of CPUs and accelerators in AI infrastructure.
To support the next generation of AI networks, AMD is drawing on widely deployed programmable DPUs to power hyperscale computing. An AI network can be divided into two parts: the front end, which delivers data and information to the AI cluster, and the back end, which manages data transfer between accelerators and the cluster. To address both, AMD has launched the AMD Pensando Salina DPU for the front end and the AMD Pensando Pollara 400 for the back end.
The AMD Pensando Salina DPU is one of the world's highest-performing and most programmable third-generation DPUs, with twice the performance, bandwidth, and scale of its predecessor. It supports 400G throughput for high-speed data transfer and is a key component of AI front-end networks, optimizing performance, efficiency, security, and scalability for data-driven AI applications.
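For a sense of scale, a 400G line rate corresponds to roughly 50GB/s of raw throughput. The short sketch below is purely illustrative: it ignores protocol and encoding overhead, and the 4KB average message size is an assumption, not an AMD figure:

    /* Rough conversion of a 400G link rate into bytes/s and messages/s.
       Illustrative only; 4 KB message size is assumed, overhead ignored. */
    #include <stdio.h>

    int main(void) {
        const double link_gbps   = 400.0;                /* line rate, Gb/s   */
        const double bytes_per_s = link_gbps * 1e9 / 8;  /* = 50 GB/s         */
        const double msg_bytes   = 4096.0;               /* assumed msg size  */

        printf("Raw throughput: %.0f GB/s\n", bytes_per_s / 1e9);
        printf("Messages/s at 4 KB: %.1f million\n", bytes_per_s / msg_bytes / 1e6);
        return 0;
    }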
The Pensando Pollara 400 is the industry's first UEC-ready AI NIC (an AI network card that complies with the Ultra Ethernet Consortium specifications). It supports next-generation RDMA software and an open networking ecosystem, delivering performance-leading, scalable, and efficient communication between accelerators in the back-end network.
In terms of availability, both the AMD Pensando Salina DPU and the AMD Pensando Pollara 400 will be sampled to customers in the fourth quarter of 2024 and are expected to launch in the first half of 2025.
Forrest Norrod, Executive Vice President and General Manager of AMD's Data Center Solutions Division, said, "With the new AMD Instinct accelerators, EPYC processors, AMD Pensando networking engines, an open software ecosystem, and the ability to integrate them all into AI infrastructure, AMD has the key expertise needed to build and deploy world-class AI solutions."