
On June 20, 2024, the Dartmouth School of Engineering in the United States released an interview video with Mira Murati, Chief Technology Officer of OpenAI. In the interview, Mira revealed that GPT-5 will be released in about a year and a half and will reach doctoral-level intelligence in certain fields. Meanwhile, Claude 3.5 Sonnet has become the first model to score higher than the smartest human PhDs.
Recently, OpenAI CTO Mira Murati delivered a speech at the Dartmouth School of Engineering on the transformative potential of AI across industries and its impact on work.
Mira said that in about a year and a half, OpenAI will release "doctoral-level" artificial intelligence. By comparison, GPT-3 was roughly at the intelligence level of a young child, while GPT-4 is more like a high school student. She emphasized, however, that doctoral-level AI applies only to certain specific tasks: "These systems have already reached human level on certain specific tasks, but of course, on many other tasks they still fall short."
Meanwhile, on June 20, Anthropic officially released a new large model, Claude 3.5 Sonnet, which it claims is its smartest model to date.

Claude 3.5 Sonnet not only pushed Life Architect's AGI countdown to 75%, but also became the first model to score higher on benchmarks than the smartest human PhDs.

According to Life Architect's data, Claude 3.5 Sonnet set new state-of-the-art (SOTA) results in graduate-level reasoning (GPQA), undergraduate-level knowledge (MMLU), and coding ability (HumanEval). It scored 90.4 on MMLU and 67.2 on GPQA. This is the first time an LLM has surpassed 65% on GPQA, the level of the smartest human PhDs: ordinary PhDs score about 34% on GPQA, while PhDs specializing in the relevant field score about 65%, and Claude 3.5 Sonnet clearly surpasses both.

In the Dartmouth interview, the host posed a hypothetical question: "Suppose GPT becomes extremely intelligent three years from now. Could it connect to the Internet and take action on its own?"
Mira replied: "Indeed, we have thought a lot about this. Systems with AI-agent capabilities do exist. They will connect to the Internet, communicate with one another, complete tasks together, or cooperate seamlessly with humans. Our future collaboration with AI may therefore look much like our collaboration with each other today."

The pace of progress in artificial intelligence is astonishing.
In 2022, experts gave a 50% chance that human-level artificial intelligence would arrive around the 2060s, while the Metaculus forecaster community predicted earlier, around the 2040s. With the release of GPT-4, the community's prediction has moved forward again: we may achieve AGI by 2032, or even as early as 2027.

In the early morning of June 20, Ilya Sutskever, co-founder and former chief scientist of OpenAI, announced on social media the founding of a new company, SSI, focused on safe superintelligence.
A netizen commented: "Wow! This goes straight for superintelligence, bypassing AGI."

In the opening statement on the official SSI account, Ilya wrote: "Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time."
He added: "We will pursue safe superintelligence directly, with one focus, one goal, and one product. We will achieve it through revolutionary breakthroughs produced by a small, highly efficient team."

In short, netizens have summed up what the Ilya team intends to do in a classic emoji-style meme.

However, the development of artificial superintelligence has also raised some concerns.
Recently, a preprint paper argued that with the arrival of artificial superintelligence (ASI), AI's unparalleled capabilities may lead people to revere it as a god, producing a cognitive bias toward accepting its decisions without question.

The author, Tevfik Uyar, warns that this phenomenon may lead us to conflate technological progress with moral and ethical superiority: "We cannot assume that ASI is equally superior in morality and ethics just because it is highly capable."
Even more concerning, this dynamic could produce a "technocratic theocracy" in which decision-making power is ceded to ASI. "If we hand over decision-making power to ASI," the author emphasizes, "human agency and critical thinking may be undermined."