
World War II codebreakers once cracked the Nazi Enigma cipher machine at an estate in rural England, and now world leaders have committed there to working together to reduce the risks posed by a technology they believe also presents a serious threat.
Leaders say the most advanced artificial intelligence (AI) systems may pose catastrophic risks in areas such as cybersecurity and biotechnology, and may even slip beyond human control.
Michelle Donelan, the UK Secretary of State for Science, Innovation and Technology, quoted a codebreaker who had worked at the estate as saying: "Sometimes science fiction is worth taking seriously."
The two-day summit kicked off on Wednesday, about 40 miles outside London. In a statement released at the opening ceremony, the United States, China, and more than 20 other countries pledged to strengthen cooperation, jointly evaluate the risks posed by future AI systems (known as models) that are more powerful than today's, and consider developing a legal framework to govern the deployment of those systems.
The joint statement says the most significant capabilities of these AI models carry "the potential for serious, even catastrophic, harm, whether deliberate or unintentional." The reasons cited are that powerful AI systems could be misused by malicious actors, or that humans could lose control of them because their behavior is incompletely understood and hard to predict.
The commitment is the first major statement by the international community on AI risks, responding to warnings that powerful new AI models may pose existential risks to humanity.
How seriously to take those warnings, and how to respond to them, is the main agenda of the meeting, whose attendees include a US delegation led by Vice President Kamala Harris and executives from leading AI companies, among them Elon Musk and OpenAI CEO Sam Altman.
Both technology executives have warned of what they describe as AI's potentially destructive power, even as the companies they lead race to build their own AI systems.
Others, including Nick Clegg, Meta Platforms' top policy executive and a former British deputy prime minister, said publicly on Tuesday that these speculative risks have distracted attention from current problems, such as protecting elections from AI-generated fake videos.
"If you spend too much time worrying about things that are far off in the future, it reduces your focus on immediate risks," Clegg said.
The United States announced on Wednesday that it will establish an AI safety institute within the Commerce Department, and pledged to work with its British counterparts to develop basic standards for measuring AI system capabilities and to collect data reported by AI companies under the Biden administration's new executive order.
The UK also announced plans for follow-up AI safety summits to be hosted in South Korea and France next year.
Western officials and industry executives view China's participation in the summit as notable, given concerns that AI could become part of an arms race between world superpowers.
"We can't have a serious AI strategy if we don't at least try to involve all of the world's leading AI powers," British Prime Minister Rishi Sunak said in a speech last week.
At the opening ceremony, China's Vice Minister of Science and Technology, Wu Zhaohui, said China supports the international community's efforts to establish an AI testing and evaluation system.
The apocalyptic threat of intelligent machines has long been a staple of popular culture. But since the launch of OpenAI's ChatGPT, it has also become a subject of debate over what priorities policymakers should set. ChatGPT demonstrated the new technology's uncanny ability to produce seemingly credible, fluent answers to questions on almost any topic.
In a speech on Wednesday, Harris said experts should broaden their definition of AI safety and focus on a wider set of risks, including what she called existing societal harms such as bias, discrimination, and the ability to spread misinformation. The US executive order signed earlier this week addressed several of these issues, which the summit's organizers also raised on Wednesday.
Some digital rights and trade union organizations echoed these concerns, issuing an open letter on Monday criticizing the summit's priorities.
"This is about whether you'll be fired by an algorithm, or unfairly profiled for a loan based on your identity or postcode," the letter said, adding that as a handful of large technology companies seize ever greater power and influence, small businesses and artists are being squeezed out of the market and innovation is being stifled.
At the same time, some AI researchers are concerned about the catastrophic risks they believe AI will bring, urging decision-makers to control the development of state-of-the-art AI systems.
The non-profit Future of Life Institute is pushing at the summit for binding rules that would force companies to prove their systems are safe and pose no potential existential risk, rather than requiring government regulators to prove that a system is dangerous. The institute organized an open letter in the spring calling for a six-month pause on the development of state-of-the-art AI systems.
Still, Max Tegmark, a professor at the Massachusetts Institute of Technology and president of the Future of Life Institute, said before the summit that he considers issuing a statement to be progress.
Even a joint statement that merely acknowledges the real risks of AI systems, he said, would be a significant step forward.