
It has been almost a week since Google launched Gemini, its most powerful model, and many domestic AI companies are working to gauge just how capable this large model is.
Unlike many large models previously released in the industry, Google Gemini does not route everything through text; it relies directly on vision and sound to understand the world, even though its launch demo has been accused of being staged and of exaggerating the model's capabilities.
Gemini's demonstration video led many users to mistakenly believe that Gemini can read video in real time and answer questions based on what it sees; in reality, Google employees elicited these responses through prompts. Image source: Google
To understand what Gemini's arrival means for OpenAI and other AI companies, Interface News recently interviewed business leaders and developers at several leading generative AI companies. In their view, Gemini's defining feature is that it is a "natively" multimodal large model.
"In theory, native multimodal models are more effective than 'concatenated' multimodal models because the latter is prone to encountering bottlenecks during the training phase." Chen Yujun, the AI manager of Recurrent Intelligence, told Interface News reporters that as Gemini has not been deeply used yet, its actual advantages need to be further understood.
Several developers at large-model startups said that even though Ultra, the largest model in the Gemini series, has not yet officially launched, Gemini has already shown text capabilities on par with GPT-4.
In the benchmark results released by Google, Gemini Ultra outperforms GPT-4 on most text tests and GPT-4V on nearly all multimodal tasks. When GPT-4's test conditions are used as the baseline, Gemini Ultra scores lower than GPT-4 on MMLU but still outperforms other mainstream large models. Image source: Gemini Technical Report, CITIC Construction Investment research report
In Gemini's demonstration video, the model appears to watch human actions in real time and respond: it describes a drawing of a duck as it progresses from sketch to coloring, tracks a paper ball through a cup-shuffling game, helps solve math and physics problems, recognizes gestures, plays hands-on classroom games, and rearranges sketches of the planets.
Developers generally agree that, whatever portion of the demo was staged, Gemini has shown strong understanding, reasoning, creation, and real-time interaction, comprehensively surpassing OpenAI's multimodal model GPT-4V. Google's response has also been broadly accepted by the industry: "All the user prompts and outputs are real, shortened only for brevity."
GPT-4V, which OpenAI quietly released three months ago, can handle multimodal tasks such as understanding and image generation, but the results are unimpressive, and its key reasoning steps depend on cooperating with other models. Abstract reasoning is itself the most critical capability of a large model.
Image source: CITIC Construction Investment
Yin Bohao explained to Interface News that GPT-4V and Gemini follow two completely different training logics. "GPT-4V is like a nearsighted person who can't see clearly, so its performance is poor; it is a typical 'cheat' of a solution. Gemini trains multiple modalities together."
But in the view of an algorithm lead at a multimodal large-model company, Gemini has not completely surpassed GPT-4: "In the evaluations, the comparison between GPT-4 and Gemini on text generation was not entirely fair."
Many netizens who have tested it also report that Gemini Pro locates objects and images more accurately than GPT-4. Liu Yunfeng of Zhuiyi Technology attributes this to Google's search business, which naturally yields data in which text is aligned with other modalities and which is indeed more conducive to training native multimodal large models.
Gemini is able to correctly recognize handwritten answers from students and verify the reasoning process of physics problems. Image source: Gemini Technical Report
Any major move by Google in artificial intelligence opens up new directions for the market to explore, but even before Gemini's release, the trend toward fully multimodal AI models had become increasingly clear.
As early as the release of GPT-4 in March, OpenAI said it would add multimodal capabilities in that iteration. Since September, star companies such as Runway, Midjourney, Adobe, and Stability AI have successively launched multimodal products.
On the domestic side, Baidu's Wenxin 4.0 large model has made notable progress in cross-modal text-to-image generation, while Zhipu AI, the Chinese large-model startup with the highest publicly disclosed funding, offers a generative AI assistant, Zhipu Qingyan, with clear strengths in the visual domain.
Multiple developers told Interface News that multimodal large models are a direction the industry already recognizes, not something awakened by Google's big move; even so, Gemini's arrival will push domestic companies to accelerate their R&D. The algorithm lead at the aforementioned multimodal large-model company also pointed out Gemini's limitations: "its image-generation ability, and its reference value for video and image generation, are limited."
For now it is hard to conclude that Gemini has completely surpassed GPT-4, but it is undeniable that Google has become OpenAI's strongest rival. With Gemini, Google has also demonstrated a point: a multimodal large model must build on the training process of a large language model to achieve truly multimodal AI.