
According to media reports on Friday, following the departure of two key leaders, OpenAI's team focused on the existential risks of artificial intelligence (AI) has either resigned or been absorbed into other groups.
In an interview, OpenAI confirmed that the company no longer maintains the so-called "superalignment" team as a standalone entity, but is instead integrating its members more deeply into its broader research work to help the company achieve its safety goals.
In July of last year, OpenAI announced the creation of a new research team called "Superalignment", which aimed to use AI to supervise AI and to solve the problem of aligning superintelligent systems. "AI alignment" refers to ensuring that the goals of AI systems remain aligned with human values and interests. Company co-founder Ilya Sutskever was appointed co-leader of the team.
The dissolution of the AI risk team is further evidence of recent internal turmoil at the company, and it has once again raised questions about how OpenAI balances speed against safety in developing AI products. The OpenAI charter stipulates that the company must safely develop artificial general intelligence (AGI), a technology that could match or surpass human capabilities.
Last month, two researchers on the Superalignment team were fired for leaking company secrets, and another member left in February. On Tuesday of this week, Sutskever announced on the social platform X that he would leave the company after nearly a decade, saying he believed that under the leadership of CEO Sam Altman and others, OpenAI could build artificial general intelligence that is both safe and beneficial. Sutskever had previously clashed with Altman over the pace of AI development.
Subsequently, the Superalignment team's other leader, Jan Leike, also announced his resignation. Insiders said that, for Leike, Sutskever's departure was the breaking point after serious disagreements that could not be reconciled.
Leike released a statement on Friday saying the Superalignment team had been struggling to secure resources: "Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute, and it was getting harder and harder to get this crucial research done."
He also said that his resignation stemmed from a series of disagreements with OpenAI over the company's "core priorities", and that he believed the company was not paying enough attention to AI-related safety measures.
OpenAI announced on Tuesday that co-founder John Schulman will lead future alignment research as its scientific head. In addition, OpenAI said in a blog post that it has appointed research director Jakub Pachocki to succeed Sutskever as chief scientist.