Chinese developers on Google Gemini: mired in a "faking" controversy, it has still found a path beyond OpenAI
六月清晨搅
Published 2023-12-13 11:05:11
It has been almost a week since Google launched Gemini, its most powerful model to date, and many Chinese AI companies are probing what this large model can do.
Unlike many large models launched earlier in the industry, Google Gemini does not route everything through text: it understands the world directly from vision and sound, even though its launch demo has been accused of being staged and of exaggerating the model's capabilities.
Gemini's demonstration videos led many users to mistakenly believe that Gemini can read video in real time, understand it, and answer users' questions. In reality, Google employees elicited these responses through prompts. Image source: Google
To understand what Gemini's arrival means for OpenAI and other AI companies, Interface News recently interviewed business leads and developers at several top generative AI companies. In their view, Gemini's biggest distinguishing feature is that it is a "natively" multimodal large model.
"In theory, native multimodal models are more effective than 'concatenated' multimodal models because the latter is prone to encountering bottlenecks during the training phase." Chen Yujun, the AI manager of Recurrent Intelligence, told Interface News reporters that as Gemini has not been deeply used yet, its actual advantages need to be further understood.
Several developers at large-model startups said that even though Ultra, the largest model in the Gemini series, has not yet been officially released, Gemini has already demonstrated text capabilities on par with GPT-4.
In the benchmark results released by Google, Gemini Ultra outperforms GPT-4 on most text tests and outperforms GPT-4V on almost all multimodal tests. Under GPT-4's own test conditions, Gemini Ultra scores lower than GPT-4 on MMLU, but still beats other mainstream large models. Image sources: Gemini technical report; CITIC Construction Investment research report
In Gemini's demonstration video, the model appears to watch human actions in real time and respond: it can describe a duck drawing as it goes from sketch to coloring, track a paper ball through a cup-shuffling game, help solve math and physics problems, recognize gestures, play along with hands-on classroom games, and rearrange sketches of the planets.
Developers generally believe that, however much of the demo was staged, Gemini has demonstrated strong understanding, reasoning, creation, and real-time interaction, comprehensively surpassing OpenAI's multimodal model GPT-4V. Google's response has also been broadly accepted by the industry: "All user prompts and outputs are genuine, only shortened for brevity."
GPT-4V, which OpenAI quietly released three months ago, can perform multimodal tasks such as image understanding and generation, but the results are unimpressive, and its key reasoning steps rely on cooperating with other models. Abstract reasoning is itself the most critical capability of a large model.
Image source: CITIC Construction Investment
Yin Bohao explained to Interface News that GPT-4V and Gemini are built on two entirely different training logics. "GPT-4V is like a nearsighted person who can't see clearly, so it doesn't perform well; it is a typical stopgap. Gemini trains multiple modalities together."
But in the view of an algorithm lead at a multimodal large-model company, Gemini has not entirely surpassed GPT-4: "In the evaluations, GPT-4 and Gemini were not compared under fully equal conditions for text generation."
Many netizens who tested it also reported that Gemini Pro's accuracy at finding objects and images surpasses GPT-4. Liu Yunfeng of Zhuiyi Technology attributes this to Google's search business, which naturally holds data aligning text with other modalities and is indeed more conducive to training native multimodal large models.
Gemini can correctly read students' handwritten answers and check the reasoning in physics problems. Image source: Gemini technical report
Any major move by Google in artificial intelligence opens up new directions for the market to explore, but even before Gemini's release, the trend toward fully multimodal AI models was already increasingly clear.
As early as GPT-4's release in March, OpenAI said this iteration would add multimodal capabilities. Since September, star companies such as Runway, Midjourney, Adobe, and Stability AI have successively launched multimodal products.
On the domestic side, Baidu's Wenxin large model 4.0 has made notable progress in cross-modal text-to-image generation, and Zhipu AI, the Chinese large-model startup with the most publicly disclosed funding, offers a generative AI assistant, Zhipu Qingyan, with clear strengths in the visual domain.
Multiple developers told Interface News that multimodal large models are already recognized as a clear direction in the industry, and it did not take a big move from Google to wake anyone up; still, Gemini's arrival will push Chinese companies to accelerate their research and development. The algorithm lead at the aforementioned multimodal large-model company also pointed to Gemini's limitations: "Its image-generation ability, and its reference value for video and image generation, are limited."
For now it is hard to conclude that Gemini has completely surpassed GPT-4, but it is undeniable that Google has become OpenAI's strongest rival. With Gemini, Google has also demonstrated one truth: a multimodal large model must build on the training process of a large language model to achieve truly multimodal AI.