NVIDIA Announces "Super AI Chip" H200, Supply Expected to Begin in the Second Quarter of Next Year
六月清晨搅
Published on 2023-11-14 18:36:59
Jensen Huang has upgraded his "equipment" again. On November 14, a reporter from Daily Economic News learned from NVIDIA that on November 13 local time, the company announced the NVIDIA HGX H200 (an AI chip, hereinafter "H200"). The H200 is reportedly the first GPU to use HBM3e memory, which is faster and larger than its predecessor, further accelerating generative AI and large language models while advancing scientific computing for HPC (high-performance computing) workloads. It provides 141 GB of memory at a transfer speed of 4.8 TB/s, nearly doubling the capacity and bandwidth of the previous-generation NVIDIA A100 architecture.
In the view of Ian Buck, NVIDIA's vice president of hyperscale and high-performance computing, creating intelligence with generative AI and HPC applications requires large, fast GPU memory to process massive amounts of data quickly and efficiently. When the H200 is paired with NVIDIA Grace CPUs over the ultra-fast NVLink-C2C interconnect, it forms the GH200 Grace Hopper Superchip with HBM3e, a computing module designed for large-scale HPC and AI applications.
In terms of specifications, the H200 will be offered on four-way and eight-way H200 server boards that are compatible with both the hardware and software of HGX H100 systems. It can also be used in the NVIDIA GH200 Grace Hopper Superchip with HBM3e, released in August this year. These configurations allow the H200 to be deployed in every type of data center, including on-premises, cloud, hybrid-cloud, and edge, and to deliver the highest performance across application workloads, including LLM training and inference for extremely large models with more than 175 billion parameters.
Architecturally, the NVIDIA Hopper architecture delivers a performance leap over the previous generation; for example, the H200 nearly doubles inference speed on Llama 2, a 70-billion-parameter LLM, compared with the H100 (NVIDIA's earlier AI chip).
According to NVIDIA, the H200 will be available from global system manufacturers and cloud service providers starting in the second quarter of 2024; server makers and cloud providers are also expected to begin offering H200-equipped systems at that time.
CandyLake.com is an information publishing platform and provides only information storage services.
Disclaimer: The views in this article are solely the author's, do not represent the position of CandyLake.com, and do not constitute advice; please treat them with caution.