Are Large Language Models Not the Endgame of AI? Meta's Chief Scientist: They Still Cannot Reach Human Intelligence
日微牧
Posted on 2024-5-23 18:00:53
Yann LeCun, Meta's chief artificial intelligence (AI) scientist, believes that today's large language models (LLMs) will never achieve human-like reasoning and planning abilities.
LeCun stated that large language models have only a very limited grasp of logic: they do not understand the physical world, have no persistent memory, cannot reason in any reasonable definition of the term, and cannot plan hierarchically.
In a recent interview, he argued that today's most advanced large language models cannot be relied upon to create artificial general intelligence (AGI) comparable to human intelligence, because these models can only answer prompts accurately when they have seen the right training data, which makes them inherently unsafe.
Specifically, LeCun believes that although current large language models perform impressively in natural language processing, dialogue understanding and interaction, and text generation, they remain a "statistical modeling" technique: they complete tasks by learning statistical regularities in data, and fundamentally lack genuine understanding and reasoning.
LeCun himself is working to develop a new generation of AI systems that he hopes will power machines with human-level intelligence and eventually give rise to "superintelligence". He cautioned, however, that this vision may take ten years to achieve.
The "World Model" Approach
LeCun manages a team of roughly 500 people in Meta's Fundamental AI Research (FAIR) laboratory. They are working to build AI that can form "common sense" and learn how the world works by observing and experiencing it much as humans do, ultimately achieving artificial general intelligence (AGI) — an approach known as "world modeling".
LeCun first published a paper outlining the "world model" vision in 2022, and Meta has since released two research models based on the approach.
LeCun recently noted that the FAIR laboratory is testing a variety of ideas in the hope of ultimately reaching human-level intelligence, but that there is considerable uncertainty and exploration involved, and no one can yet say which idea will succeed or ultimately be chosen.
He nonetheless firmly believes that "we are at the forefront of the next generation of artificial intelligence systems."
Internal Tensions
This experimental vision, however, is a costly gamble for Meta at a time when investors want to see quick returns on AI investment.
Accordingly, disagreement has emerged within Meta between "short-term returns" and "long-term value" — a split visible in the creation of the GenAI team last year.
Meta founded the FAIR laboratory in 2013 to explore AI research and hired top scholars in the field. In 2023, however, Meta set up a separate new GenAI team, led by Chief Product Officer Chris Cox. That team has recruited many AI researchers and engineers from FAIR, and it led the work on the Llama 3 model, integrating it into Meta's new AI assistants and image-generation tools.
Some insiders believe the creation of the GenAI team reflects an ideological rift between LeCun and Meta CEO Mark Zuckerberg. Under pressure from investors and for profits, Zuckerberg has been pushing for more commercial applications of AI, while FAIR's academic culture has left Meta somewhat behind in the generative AI wave.
LeCun voiced these views just as Meta and its competitors were pushing out upgraded large language models: OpenAI released the faster GPT-4o model last week; Google launched a new "multimodal" AI assistant, Project Astra; and Meta released its latest Llama 3 model last month.
LeCun is dismissive of these latest large language models. In his view, their evolution is "superficial and limited": the models learn only when human engineers intervene and train them on curated information, rather than drawing conclusions naturally the way humans do. This also amounts to a rebuke of Meta's own Llama model.
Despite their differences, insiders say LeCun remains one of Zuckerberg's core advisors because of his stellar reputation in the AI field.
CandyLake.com is an information publishing platform and provides information storage services only.
Disclaimer: The views in this article are solely the author's own. They do not represent the position of CandyLake.com and do not constitute advice; please treat them with caution.