
A Meta spokesperson recently confirmed that the company has disbanded its Responsible AI (RAI) team, which was charged with overseeing the safety of AI projects during development and deployment.
The spokesperson said that most members of the RAI team have been reassigned to the company's generative AI product division, while the rest will now work on the AI infrastructure team.
Meta's generative AI team, created in February of this year, focuses on building products that generate human-like language and images.
Since the start of the AI boom, Meta and the other major technology companies have been pouring money and people into machine learning development to avoid falling behind in the AI race.
The restructuring of the RAI department is consistent with what CEO Mark Zuckerberg said on the February earnings call: he noted at the time that a restructuring of RAI would come toward the end of Meta's "Year of Efficiency," which has so far brought a series of layoffs, team mergers, and reassignments.
Is safety oversight still a priority?
Although the RAI department has been disbanded, AI safety remains a top priority for the technology giants, especially as regulators and government officials grow increasingly concerned about the potential harms of the emerging technology.
In July of this year, Anthropic, Google, Microsoft, and OpenAI established an industry body dedicated to setting safety standards for the development of advanced AI.
A Meta spokesperson noted that RAI employees are now dispersed throughout the organization and will continue to support "safe and responsible AI development and use."
The spokesperson added, "We will continue to prioritize and invest in safe and responsible AI development."
It is also worth noting that the "coup" that swept through OpenAI this past weekend appears to point to the same tension.
The root cause of that internal conflict was the contradiction between AI safety oversight and commercialization: OpenAI chief scientist Ilya Sutskever could not accept CEO Sam Altman's aggressive commercialization strategy, so he joined other board members in ousting Altman and company president Greg Brockman.
It is clear that the commercialization of AI keeps accelerating, while AI safety has been pushed into a secondary position.