Musk Strikes Back at OpenAI by Open-Sourcing a Top Model
六月清晨搅
Posted on 2024-3-18 13:13:44
Musk has made a choice completely different from Altman's to demonstrate his unwavering commitment to open-source AI models. On March 17, Musk announced the open-sourcing of Grok-1, which, at 314 billion parameters, becomes the open-source large language model with the most parameters currently available, far exceeding the 175 billion of OpenAI's GPT-3.5.
Interestingly, the cover image for the Grok-1 open-source announcement was generated by Midjourney, making it a case of "AI helping AI".
Musk, who has long mocked OpenAI for not being open, naturally took a jab on the social platform: "We want to know more about the 'open' part of OpenAI."
Grok-1 is released under the Apache 2.0 license, which opens both the model weights and the architecture. This means users are free to use, modify, and distribute the software, whether for personal or commercial purposes. Such openness encourages broader research and application development. Since its release, the project has earned 6.5k stars on GitHub, and its popularity is still growing.
The project description explicitly notes that because Grok-1 is a large-scale (314B-parameter) model, a machine with sufficient GPU memory is needed to run the model with the example code. Netizens estimate this may require a machine with 628 GB of GPU memory.
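The 628 GB figure follows from simple arithmetic, assuming the weights are stored in a 16-bit format (2 bytes per parameter) — the storage format is an assumption here, not stated in the article:

```python
# Back-of-the-envelope estimate of memory needed just to hold Grok-1's weights.
# Assumes 2 bytes per parameter (bf16/fp16); activations and KV cache add more.
params = 314e9          # 314 billion parameters, per the article
bytes_per_param = 2     # assumed 16-bit weights
weights_gb = params * bytes_per_param / 1e9
print(round(weights_gb))  # → 628
```

Note this counts weights only; actual inference needs additional memory for activations, so the estimate is a lower bound.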
In addition, the MoE layer in this repository is not implemented efficiently; this implementation was chosen to avoid the need for custom kernels when verifying the model's correctness.
Popular open-source models currently include Meta's Llama 2 and France's Mistral. Generally speaking, releasing a model as open source lets the community test it at scale and provide feedback, which in turn can accelerate the model's own iteration.
Grok-1 is a Mixture-of-Experts (MoE) large model developed over the past four months by xAI, Musk's AI startup. A review of the model's development:
After announcing the founding of xAI, the researchers first trained a prototype language model (Grok-0) with 33 billion parameters. On standard language-model benchmarks, this model approached the capabilities of LLaMA 2 (70B) while using fewer training resources;
The researchers then made significant improvements to the model's reasoning and coding capabilities, ultimately developing Grok-1 and releasing it in November 2023. This more powerful SOTA language model achieved 63.2% on the HumanEval coding task and 73% on MMLU, surpassing all other models in its compute class, including ChatGPT-3.5 and Inflection-1.
What are the advantages of Grok-1 compared to other large models?
xAI emphasizes that Grok-1 is its own large model trained from scratch: starting in October 2023, it was trained on a custom training stack built on JAX and Rust, without fine-tuning for any specific task (such as conversation);
A unique and fundamental advantage of Grok-1 is that it can understand the world in real time through the X platform, which allows it to answer spicy questions that most other AI systems reject. The training data used in the released version of Grok-1 comes from Internet data up to the third quarter of 2023 and from data provided by xAI's AI trainers;
As a Mixture-of-Experts model with 314 billion parameters, roughly 25% of the weights are active for each token, giving it powerful language understanding and generation capabilities thanks to its large parameter count.
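The "25% active weights per token" figure reflects how MoE routing works: for each token, a small router selects only a few of the expert feed-forward networks to run. The sketch below is a minimal, hypothetical illustration of top-k routing — the 8-expert / top-2 configuration is an assumption chosen so that the active fraction matches the article's ~25%, not a detail confirmed by the article:

```python
import numpy as np

rng = np.random.default_rng(0)

n_experts, k = 8, 2     # assumed configuration: 8 experts, top-2 routing
d_model = 16            # toy hidden size for illustration
x = rng.standard_normal(d_model)                 # one token's hidden state
router_w = rng.standard_normal((d_model, n_experts))

logits = x @ router_w                            # router score per expert
top_k = np.argsort(logits)[-k:]                  # pick the k best-scoring experts
gates = np.exp(logits[top_k]) / np.exp(logits[top_k]).sum()  # softmax over chosen experts

# Only k of the n_experts expert FFNs run for this token, so the dense
# compute per token is roughly k / n_experts of the full parameter count.
print(f"active fraction ≈ {k / n_experts:.0%}")  # → active fraction ≈ 25%
```

The design point: total parameter count (and hence capacity) grows with the number of experts, while per-token compute stays proportional to only the k experts actually selected.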
xAI previously said that Grok-1 will serve as the engine behind Grok for natural-language-processing tasks, including question answering, information retrieval, creative writing, and coding assistance. Going forward, long-context understanding and retrieval, as well as multimodal capabilities, are among the directions this model will explore.
CandyLake.com is an information publishing platform and only provides information storage space services.
Disclaimer: The views in this article represent only those of the author. This article does not represent the position of CandyLake.com, does not constitute advice, and should be treated with caution.