
Yann LeCun, chief artificial intelligence (AI) scientist at Meta, believes that existing large language models (LLMs) will never achieve human-like reasoning and planning abilities.
LeCun says LLMs have only a very limited grasp of logic: they do not understand the physical world, have no persistent memory, cannot reason by any reasonable definition of the term, and cannot plan hierarchically.
In a recent interview, he argued that today's most advanced LLMs cannot be relied upon to produce artificial general intelligence (AGI) comparable to human intelligence, because such models can only answer a prompt accurately when they have been trained on the right data, which makes them inherently unsafe.
Specifically, LeCun believes that although current LLMs perform impressively at natural language processing, dialogue understanding and interaction, and text generation, they remain a form of "statistical modeling": they complete tasks by learning statistical regularities in data, and fundamentally lack genuine understanding and reasoning.
LeCun himself is working to develop a new generation of AI systems that he hopes will power machines with human-level intelligence and ultimately "superintelligence". He cautioned, however, that this vision may take ten years to realize.
The "World Modeling" Method
LeCun manages a team of roughly 500 people at Meta's Fundamental AI Research (FAIR) lab. They are working to build an AI that can form "common sense" and learn how the world works by observing and experiencing it much as humans do, ultimately achieving artificial general intelligence (AGI), an approach known as "world modeling".
LeCun first published a paper laying out the "world modeling" vision in 2022, and Meta has since released two research models based on the approach.
LeCun recently noted that FAIR is testing a variety of ideas in the hope of eventually reaching human-level intelligence, but that the work involves great uncertainty and exploration, and no one can yet say which idea will succeed or be chosen in the end.
Even so, he firmly believes that "we are at the forefront of the next generation of artificial intelligence systems."
Internal Tensions
This experimental vision is a costly gamble for Meta, however, at a time when investors want to see quick returns on AI investment.
There has accordingly been disagreement within Meta between "short-term returns" and "long-term value", a split visible in the creation of the GenAI team last year.
Meta founded FAIR in 2013 to explore AI research and hired top scholars in the field. In 2023, however, the company set up a separate new GenAI team, led by Chief Product Officer Chris Cox. That team has recruited many AI researchers and engineers from FAIR and led work on the Llama 3 model, integrating it into Meta's new AI assistants and image-generation tools.
Some insiders believe the GenAI team was established in part because of an ideological rift between LeCun and Meta CEO Mark Zuckerberg. Under pressure from investors and profit targets, Zuckerberg has pushed for more commercial applications of AI, while FAIR's academic culture has left Meta somewhat behind in the generative AI wave.
LeCun's comments come as Meta and its competitors push out ever more capable LLMs: OpenAI released the faster GPT-4o model last week; Google unveiled Project Astra, a new "multimodal" AI assistant; and Meta itself launched the latest Llama 3 model last month.
LeCun is dismissive of these latest LLMs. In his view, "the evolution of large language models is superficial and limited; the model learns only when human engineers intervene and train it on that information, rather than drawing conclusions naturally the way humans do." The remark also reads as a jab at Meta's own Llama models.
Despite the clash of ideas, insiders say LeCun remains one of Zuckerberg's core advisors because of his towering reputation in the AI field.