Tongyi Qianwen open-sources a 32-billion-parameter model, completing the open-sourcing of 7 large language models
芊芊551
Posted on 2024-4-7 17:04:49
Alibaba Cloud's newly open-sourced Tongyi Qianwen model, Qwen1.5-32B, balances performance, inference efficiency, and memory usage, giving enterprises and developers a more cost-effective model option. Tongyi Qianwen has now open-sourced 7 large language models, with cumulative downloads exceeding 3 million across open-source communities in China and abroad.
Tongyi Qianwen had previously open-sourced six large language models, with 500 million, 1.8 billion, 4 billion, 7 billion, 14 billion, and 72 billion parameters, all of which have been upgraded to version 1.5. The smaller models can be easily deployed on edge devices, while the 72-billion-parameter model delivers industry-leading performance and has repeatedly appeared on leaderboards on platforms such as HuggingFace. The newly open-sourced 32-billion-parameter model strikes a more favorable balance among performance, efficiency, and memory usage: compared with the 14B model, the 32B model is stronger in agent scenarios; compared with the 72B model, it has lower inference costs. The Tongyi Qianwen team hopes the 32B open-source model can offer a better solution for downstream applications.
In terms of base-model capabilities, Tongyi Qianwen's 32-billion-parameter model performs well on multiple benchmarks, including MMLU, GSM8K, HumanEval, and BBH. Its performance approaches that of Tongyi Qianwen's 72-billion-parameter model and far exceeds that of other models in the 30-billion-parameter class.
As for the chat model, Qwen1.5-32B-Chat scored above 8 on the MT-Bench evaluation, with only a relatively small gap to Qwen1.5-72B-Chat.
For multilingual ability, the Tongyi Qianwen team selected 12 languages, including Arabic, Spanish, French, Japanese, and Korean, and evaluated them across areas such as exams, comprehension, mathematics, and translation. The multilingual ability of Qwen1.5-32B trails Tongyi Qianwen's 72-billion-parameter model only slightly.
CandyLake.com is an information publishing platform and provides only information storage services.
Disclaimer: The views in this article are solely those of the author and do not represent the position of CandyLake.com, nor do they constitute advice; please treat them with caution.