Baidu's Shen Dou: Traditional cloud computing is no longer the protagonist; intelligent computing calls for a new generation of "operating systems"
守遍丝 · Posted on 2024-04-16 16:25:24
On April 16th, the Create 2024 Baidu AI Developer Conference was held in Shenzhen.
During the conference, Shen Dou, Executive Vice President of Baidu Group and President of Baidu Intelligent Cloud Business Group, officially released Wanyuan, a new generation of intelligent computing operating system. By abstracting and encapsulating the intelligent computing platform for the AI-native era, Wanyuan shields users from the complexity of cloud-native systems and heterogeneous computing power, improving the efficiency and experience of AI-native application development.
Shen Dou said that as large model technology continues to evolve, programming through natural language is becoming a reality. Programming will no longer be process-oriented or object-oriented, but requirement-oriented; the act of programming will become the act of developers expressing their intent, and this brings revolutionary changes to the operating system. In the kernel of this operating system, the underlying hardware shifts from CPU computing power to GPU computing power, joined by the world knowledge compressed into large models. What the operating system manages has changed fundamentally, evolving from processes and microservices to intelligence itself.
"Traditional cloud computing systems are still important, but they are no longer the protagonist. We need a brand new operating system that abstracts and encapsulates new computing platforms, namely intelligent computing, redefines human-computer interaction, and provides developers with a simpler and smoother development experience," said Shen Dou.
At this conference, Baidu AI Cloud launched the Wanyuan intelligent computing operating system, which aims to bridge computing efficiency and application innovation. Wanyuan consists of three layers: Kernel, Shell, and Toolkit. The lower layer shields the complexity of cloud-native systems and heterogeneous computing power, while the upper layers support and safeguard agile development of AI-native applications.
First, at the Kernel layer, for computing resource management, the Baidu Baige AI heterogeneous computing platform has specially optimized the design, scheduling, and fault tolerance of intelligent computing clusters for workloads such as large model training and inference. On clusters at the ten-thousand-card scale, Baige now keeps the effective training time of models above 98.8%, with a linear speedup ratio and bandwidth effectiveness of 95%, leading the industry in computing power efficiency.
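The article does not give Baidu's exact formulas for these metrics; the short Python sketch below uses generic, commonly used definitions purely to illustrate what numbers like "98.8% effective training time" and "95% linear speedup" measure.

```python
# Illustrative definitions only; these are generic cluster-utilization metrics,
# not Baige's published formulas.

def effective_training_ratio(productive_hours: float, wall_clock_hours: float) -> float:
    """Share of wall-clock time actually spent training (not lost to faults or restarts)."""
    return productive_hours / wall_clock_hours

def linear_speedup_ratio(cluster_throughput: float,
                         single_card_throughput: float,
                         num_cards: int) -> float:
    """Achieved throughput relative to perfect linear scaling across num_cards."""
    return cluster_throughput / (single_card_throughput * num_cards)

# Example: a 100-day (2,400-hour) run that loses about 29 hours to failures and restarts.
print(effective_training_ratio(2400 - 29, 2400))   # ~0.988, i.e. 98.8%
# Example: 10,000 cards delivering 9,500x a single card's throughput.
print(linear_speedup_ratio(9_500.0, 1.0, 10_000))  # 0.95, i.e. 95%
```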
In addition, Baige is compatible with mainstream domestic and international AI chips such as Kunlun, Ascend, Hygon DCU, NVIDIA, and Intel, allowing users to complete computing power adaptation at minimal cost. Compared with model inference, "one cloud, multiple chips" is far harder to achieve in model training scenarios, which mainly involve two sub-scenarios: first, an intelligent computing cluster runs multiple training tasks, with chips from a single vendor serving a single task; second, chips from different vendors are used simultaneously within a single model training task. This requires solving problems such as dividing work evenly across chips from different manufacturers and optimizing the communication efficiency between them, which is extremely difficult.
According to Baidu, Baige can now mix chips from different vendors within a single training task, with a performance loss of no more than 3% at the hundred-card scale and no more than 5% at the thousand-card scale, leading the industry. This shields hardware differences as much as possible, helps users break free from dependence on a single chip, achieves better costs, and builds a more resilient supply chain.
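The article does not disclose how Baige actually divides work between vendors. One textbook approach, shown here only as a hypothetical sketch, is to split each training step's workload in proportion to each chip pool's measured throughput so that no pool becomes a straggler; all names and numbers below are invented for illustration.

```python
# Hypothetical sketch of throughput-proportional work splitting across heterogeneous
# chip pools; this is NOT Baige's actual scheduler, just one common approach.

from dataclasses import dataclass

@dataclass
class ChipPool:
    name: str
    num_cards: int
    per_card_throughput: float  # e.g. samples/sec measured in a calibration run

def split_global_batch(pools: list[ChipPool], global_batch: int) -> dict[str, int]:
    """Assign each pool a share of the global batch proportional to its total throughput."""
    total = sum(p.num_cards * p.per_card_throughput for p in pools)
    shares = {p.name: round(global_batch * p.num_cards * p.per_card_throughput / total)
              for p in pools}
    # Fix rounding drift so the shares still sum to the global batch size.
    drift = global_batch - sum(shares.values())
    shares[pools[0].name] += drift
    return shares

pools = [ChipPool("vendor_a", 512, 1.00), ChipPool("vendor_b", 512, 0.85)]
print(split_global_batch(pools, 4096))  # {'vendor_a': 2214, 'vendor_b': 1882}
```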
The other key component of the Wanyuan kernel is the large model. Large models efficiently compress vast amounts of world knowledge and encapsulate natural-language understanding, generation, logic, and memory capabilities. At present, the Wanyuan kernel includes the industry-leading ERNIE 4.0 and ERNIE 3.5 language models, lightweight models such as ERNIE Speed/Lite/Tiny, text and vision models, and a range of distinctive third-party models, fully meeting users' diverse needs across business scenarios.
Above the Kernel layer sits the Shell layer. Baidu AI Cloud Qianfan ModelBuilder handles the management, scheduling, and secondary development of the models in the kernel, masking the complexity of model development and helping more people quickly fine-tune models suited to their own business with only a small amount of data, resources, and effort. In practical applications, the model routing service provided by ModelBuilder can automatically select models of an appropriate parameter scale for tasks of different difficulty, offering the model combination that best balances effectiveness and cost. By Baidu's estimates, with essentially equivalent model performance, model routing can cut average inference cost by up to 30%.
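The exact routing policy behind ModelBuilder is not described in the article. The sketch below, with made-up model names, prices, and a toy difficulty heuristic, only illustrates the general idea of sending easy requests to a cheaper, smaller model and hard ones to a larger model.

```python
# Illustrative model-routing sketch with invented model tiers and prices;
# not the actual Qianfan ModelBuilder routing service.

MODEL_TIERS = [
    # (name, cost per 1k tokens, maximum difficulty this tier is trusted with)
    ("small-fast-model", 0.001, 0.3),
    ("medium-model",     0.004, 0.7),
    ("large-model",      0.012, 1.0),
]

def estimate_difficulty(prompt: str) -> float:
    """Toy heuristic: longer, multi-step prompts score higher (0..1)."""
    score = min(len(prompt) / 2000, 1.0)
    if any(k in prompt.lower() for k in ("prove", "step by step", "analyze")):
        score = max(score, 0.8)
    return score

def route(prompt: str) -> str:
    """Pick the cheapest tier whose difficulty ceiling covers the estimated difficulty."""
    difficulty = estimate_difficulty(prompt)
    for name, _cost, max_difficulty in MODEL_TIERS:
        if difficulty <= max_difficulty:
            return name
    return MODEL_TIERS[-1][0]

print(route("Translate 'hello' into French."))          # small-fast-model
print(route("Analyze this contract step by step ..."))  # large-model
```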
Above the Shell layer, Qianfan AppBuilder and AgentBuilder together form the Toolkit layer, giving developers powerful AI-native application development capabilities. With the workflow orchestration function in AppBuilder in particular, developers can customize their business processes using preset templates and components, integrate and extend their own components, select suitable models at different nodes, and implement business logic through flexible orchestration.
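AppBuilder's own orchestration format is not reproduced in the article. The minimal sketch below only illustrates the idea of a node-based workflow in which each node is a component and could bind a different model; the class and handler names are hypothetical, not the AppBuilder SDK.

```python
# Minimal, hypothetical workflow-orchestration sketch (not the AppBuilder SDK):
# each node names a handler; handlers could wrap search components or model calls.

from typing import Any, Callable

class Workflow:
    def __init__(self) -> None:
        self.nodes: list[tuple[str, Callable[[Any], Any]]] = []

    def add_node(self, name: str, handler: Callable[[Any], Any]) -> "Workflow":
        self.nodes.append((name, handler))
        return self

    def run(self, payload: Any) -> Any:
        # Execute nodes sequentially, feeding each node's output to the next.
        for _name, handler in self.nodes:
            payload = handler(payload)
        return payload

# Hypothetical components: a retrieval step, then a generation step.
def retrieve_docs(query: str) -> dict:
    return {"query": query, "docs": ["doc-1", "doc-2"]}   # stand-in for a search component

def generate_answer(ctx: dict) -> str:
    return f"answer to '{ctx['query']}' using {len(ctx['docs'])} docs"  # stand-in for an LLM call

flow = Workflow().add_node("retrieve", retrieve_docs).add_node("generate", generate_answer)
print(flow.run("What is Wanyuan?"))
```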
Within AppBuilder, models fine-tuned through ModelBuilder can be called directly while developing AI-native applications, making the whole development process smooth and convenient. Once an application is built, it can be published to Baidu Search, WeChat Official Accounts, and other platforms with one click, or integrated directly into the user's own systems through an API or SDK, enabling rapid development and easy launch.
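The concrete API endpoint, authentication scheme, and payload format are not given in the article; the call below uses a clearly placeholder URL, token, and response field just to show what integrating a published application into one's own system over HTTP typically looks like.

```python
# Hypothetical integration example; the endpoint, headers, and payload schema are
# placeholders, not documented Qianfan AppBuilder API details.
import requests

API_URL = "https://example.invalid/your-app/endpoint"   # placeholder URL
API_TOKEN = "YOUR_TOKEN_HERE"                           # placeholder credential

def ask_app(question: str) -> str:
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        json={"query": question},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("answer", "")

if __name__ == "__main__":
    print(ask_app("Summarize today's announcement."))
```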
Shen Dou said that, as an open operating system, Wanyuan will further open up ecosystem cooperation going forward: providing application developers with more capabilities and interfaces; helping enterprises build their own vertical-industry operating systems; deploying Wanyuan in customers' own intelligent computing centers to provide stable, secure, and efficient intelligent computing platform services; and adapting to heterogeneous chips from more manufacturers while maximizing their performance.
Shen Dou believes that today's large model technology and AI-native applications are pushing cloud services toward a new generation of intelligent computing operating systems with AI at the core. This trend reflects both the internal logic of technological development and the strong pull of market demand, opening a new era of AI-driven intelligent cloud.