Global leaders warn at the AI summit: Beware of the disasters predicted by science fiction
Katlyn30590
Posted on 2023-11-04 03:29:08
At Bletchley Park, the country estate in rural England where World War II code breakers once cracked the Nazi Enigma cipher machine, world leaders have now committed to working together to reduce the risks posed by a technology they believe also constitutes a serious threat.
Leaders say that the most advanced artificial intelligence (AI) may pose catastrophic risks in areas such as cybersecurity and biotechnology, and may even be beyond human control.
Michelle Donelan, the UK Secretary of State for Science, Innovation and Technology, quoted a code breaker who had worked at the estate as saying: "Sometimes science fiction is worth taking seriously."
The two-day summit kicked off on Wednesday about 40 miles outside London. In a statement released at the opening ceremony, the United States, China, and more than 20 other countries pledged to strengthen cooperation, jointly evaluate the risks posed by future AI systems (known as models) that will be more powerful than today's, and consider developing a legal framework to govern the deployment of those systems.
The joint statement warns of the "potential for serious, even catastrophic, harm, either deliberate or unintentional, stemming from the most significant capabilities of these AI models." The reasons it cites are that powerful AI systems could be abused by malicious actors, and that humans could lose control of them because they are not fully understood and their behavior is hard to predict.
The commitment is the international community's first significant statement on AI risks, responding to warnings that powerful new AI models could pose existential risks to humanity.
The debate over how seriously to take these warnings, and how to respond to them, is the main agenda of the meeting, whose attendees include a US delegation led by Vice President Kamala Harris and executives from leading AI companies, among them Elon Musk and OpenAI CEO Sam Altman.
Both technology executives have issued warnings about what they say is AI's potentially destructive power, even as the companies they lead race to build their own AI systems.
Others, including Nick Clegg, Meta Platforms' top policy executive and a former British Deputy Prime Minister, said publicly on Tuesday that these speculative risks are distracting attention from immediate problems, such as protecting elections from AI-generated fake videos.
Clegg said: "If you spend too much time worrying about things that are far off in the future, it will reduce your focus on immediate risks."
At the two-day UK AI Safety Summit, which began on Wednesday, world leaders pledged to work together to reduce the risks of artificial intelligence.
The United States announced on Wednesday that it will establish an AI safety institute under the Department of Commerce, and pledged to work with its British counterpart to develop basic standards for measuring the capabilities of AI systems and to collect data reported by AI companies under the Biden administration's new executive order.
The UK also announced plans for follow-up AI safety summits to be hosted in South Korea and France next year.
Western officials and industry executives regard China's participation in the summit as noteworthy, given concerns that AI could become part of an arms race between world superpowers.
British Prime Minister Rishi Sunak said in a speech last week: "If we don't at least try to involve all the world's leading AI powers, we can't have a serious AI strategy."
At the opening ceremony, China's Vice Minister of Science and Technology, Wu Zhaohui, said that China supports the international community's efforts to establish an AI testing and evaluation system.
The apocalyptic threat of intelligent machines has long been a staple of popular culture. But since the launch of OpenAI's ChatGPT, which showed the new technology's striking ability to produce seemingly credible and fluent answers to questions on almost any topic, it has also become a subject of debate over what policymakers should prioritize.
Harris said in a speech on Wednesday that experts should broaden their definition of AI safety and focus on a wider set of risks, including what she called existing social harms such as bias, discrimination, and the spread of misinformation. A US executive order signed earlier this week addresses several of these issues, which the summit's organizers also raised on Wednesday.
Some digital rights groups and trade unions echoed these concerns, issuing an open letter on Monday that criticized the summit's priorities.
"This is about whether you will be fired by an algorithm, or unfairly profiled for a loan based on your identity or postal code," the letter said. "As a handful of large technology companies seize greater power and influence, small businesses and artists are being squeezed out of the market, and innovation is being stifled."
Meanwhile, some AI researchers who worry about the catastrophic risks they believe AI poses are urging policymakers to rein in the development of the most advanced AI systems.
The non-profit Future of Life Institute pushed at the summit for binding rules that would force companies to prove their systems are safe and do not pose potential existential risks, rather than requiring government regulators to prove that a system is dangerous. The institute organized an open letter in the spring calling for a six-month pause on the development of the most advanced AI systems.
Still, Max Tegmark, a professor at the Massachusetts Institute of Technology and president of the Future of Life Institute, said before the summit that he regards the statement itself as progress.
Even a joint statement that merely acknowledges the real risks of AI systems, he said, is significant progress.