
On Wednesday local time, Ilya Sutskever, the OpenAI co-founder and former chief scientist who resigned from the company last month, an authoritative figure in deep learning who cast a decisive vote on the OpenAI board last year to oust Sam Altman before reversing his stance, officially announced a new AI venture after striking out on his own.
(Source: X)
Simply put, Sutskever has founded a new company called Safe Superintelligence Inc. (SSI), with the goal of building a safe superintelligence in a single step.
Safety first, straight to the ultimate goal
Sutskever plans to build a safe and powerful AI system directly within a pure research organization, without launching any commercial products or services in the near term. He told the media: "This company is special because its first product will be a safe superintelligence, and it will do nothing else until that day arrives. It will be fully insulated from external pressure, with no need to manage a large, complex product or get caught up in cutthroat competition."
Safety first, no commercialization, and no bowing to outside pressure. Throughout the entire passage, Sutskever never mentioned OpenAI by name, but the subtext is self-evident. Although OpenAI's boardroom drama ended in a swift and decisive victory for Altman, the underlying struggle between the accelerationist and safety camps is far from over.
Despite their differing philosophies, the two sides have maintained a cordial relationship in private. On May 15 this year, when Sutskever announced his departure from OpenAI after nearly a decade there, he also posted a group photo with the leadership and said he believed that, under the leadership of Altman and others, OpenAI would build AGI (artificial general intelligence) that is both safe and beneficial.
(Source: X)
Altman responded that he was "deeply saddened" by Sutskever's departure, adding that without Sutskever there would be no OpenAI as it exists today.
Since the boardroom drama ended late last year, Sutskever has stayed silent on the whole affair, and that remains the case today. Asked about his relationship with Altman, he answered only "very good"; asked about his experience over the past few months, he said only "very strange."
"Safety like nuclear safety"
In a sense, Sutskever cannot yet precisely define where the boundary of AI system safety lies; he can only say that he has some ideas of his own.
Sutskever hinted that his new company will try to achieve safety through "engineering breakthroughs embedded in the AI system," rather than "guardrails" bolted on as stopgap technical measures. He emphasized: "When we say safety, we mean safe like nuclear safety, not safe as in 'trust and safety.'"
He said he has spent many years thinking about AI safety and already has several approaches in mind. "At the most basic level, a safe superintelligence should have the property of not harming humanity at scale," he explained. "Beyond that, we would want it to be a force for good, one built on key values."
Besides the famous Sutskever, SSI has two other co-founders: Daniel Gross, a former Apple machine learning executive and well-known tech venture capitalist, and Daniel Levy, an engineer who trained large models alongside Sutskever at OpenAI.
Levy said his vision is fully aligned with Sutskever's: a small, highly capable team, with everyone focused on the single goal of a safe superintelligence.
Although it is not yet clear what gives SSI the confidence to promise superintelligence "in a single step" (how many investors it has, or how much money has been raised), Gross made it clear that while the company will certainly face many problems, raising money will not be one of them.
Returning to OpenAI's original vision
From this series of statements, it is not hard to see that the so-called "safe superintelligence" is essentially what OpenAI set out to build in its early days. But as the cost of training large models soared, OpenAI had to partner with Microsoft in exchange for the funding and computing power needed to keep its business going.
The same question will arise on SSI's path forward: are its investors really willing to pour in large sums of money and watch the company produce nothing until the ultimate goal of "superintelligence" is reached?
Incidentally, "superintelligence" is itself a theoretical concept, referring to an AI system that exceeds human-level intelligence, more advanced even than what most of the world's biggest tech companies are pursuing. There is no industry consensus on whether such intelligence is achievable, or on how such a system would be built.
It is worth noting, however, that the very first sentence of SSI's inaugural announcement reads: "Superintelligence is within reach."
Attachment: SSI Announcement
Safe Superintelligence Inc
Superintelligence is within reach.
Building safe superintelligence is the most important technical problem of our time.
We have started the world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It is called Safe Superintelligence Inc.
SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.
We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure safety always stays ahead.
This way, we can scale in peace.
Our singular focus means no distraction by management overhead or product cycles, and our business model means safety and progress are insulated from short-term commercial pressures.
We are an American company with offices in Palo Alto and Tel Aviv, where we have deep roots and can recruit top technical talent.
We are assembling a lean, world-class team of engineers and researchers focused solely on SSI.
If that is you, we offer an opportunity to do your life's work and help solve the most important technical challenge of our time.
Now is the time. Join us.
Ilya Sutskever, Daniel Gross, Daniel Levy
June 19, 2024