
On October 30th local time, US President Biden signed an executive order on artificial intelligence, the White House's first set of regulations covering generative AI. The order requires new safety assessments of AI systems, guidelines on equity and civil rights, and research into AI's impact on the labor market.
Biden's executive order requires large companies to share safety test results with the US government before officially releasing AI systems. It also elevates the AI "red-teaming" standards developed by the US National Institute of Standards and Technology, which involve stress-testing a system's defenses and probing for potential problems. The US Department of Commerce will establish standards for watermarking AI-generated content.
The order also addresses the training data used by large AI systems and calls for evaluating how agencies collect and use commercially available data, including data purchased from data brokers, particularly when that data contains personal identifiers.
The Biden administration is also taking steps to bring more AI talent into government. Starting Monday, workers with AI expertise can find relevant federal job openings at AI.gov.
White House Deputy Chief of Staff Bruce Reed said in a statement that the order represents "the strongest set of actions any government in the world has taken on AI safety, security, and trust." He added that it builds on the voluntary commitments the White House secured from leading AI companies and is the government's first major binding action on the technology.
Previously, 15 major US technology companies had agreed to voluntary AI safety commitments, but Reed said that was "not enough," and that Monday's executive order was a step toward concrete regulation of the technology's development.
Reed said, "President Biden has been working for months to harness the power of the federal government across a wide range of areas to manage the risks of AI and seize its benefits."
Although law enforcement agencies have warned that they are prepared to apply existing laws to abuses of AI, and Congress is working to better understand the technology in order to write new laws, the executive order may have a more immediate impact. Like all executive orders, it carries the force of law.
The White House executive order covers eight main areas:
1. Establish new AI safety and security standards, including requiring some AI companies to share safety test results with the federal government, directing the Department of Commerce to develop guidance for AI watermarking, and creating a cybersecurity program that uses AI tools to help find flaws in critical software.
2. Protect consumer privacy, including by developing guidelines that agencies can use to evaluate privacy-preserving techniques used in AI.
3. Advance equity and civil rights by giving federal contractors guidance to prevent AI algorithms from discriminating, and by creating best practices on the appropriate role of AI in the justice system, including its use in sentencing, risk assessment, and crime forecasting.
4. Direct the Department of Health and Human Services to develop a plan for assessing potentially harmful uses of AI.
5. Produce a report on AI's potential impact on the labor market and study how the federal government can support workers affected by AI.
6. Promote innovation and competition by expanding funding for AI research in areas such as climate change and by setting criteria for highly skilled immigrants with key expertise to remain in the United States.
7. Work with international partners to implement AI standards around the world.
8. Develop guidance for federal agencies' use and procurement of AI, and accelerate the government's hiring of workers skilled in the field.
In August of this year, the White House reached an agreement with seven top AI companies: Google, Microsoft, Amazon, Meta, OpenAI, Anthropic, and Inflection. Each company agreed to a series of voluntary commitments on AI development, including allowing independent experts to evaluate their tools, researching AI-related societal risks, and allowing third parties to test their systems for vulnerabilities before public release. Eight more companies subsequently joined the White House initiative.
A global AI summit is about to convene, and the G7 is close to agreeing on AI development guidelines
The first AI Safety Summit, hosted by the UK, will be held from November 1st to 2nd. The UK government says the summit is intended to discuss the risks posed by artificial intelligence and how to mitigate them through internationally coordinated action.
Ahead of the summit, a newly released Group of Seven (G7) document indicated that G7 representatives were close to agreeing on a code of conduct governing how major companies develop and build advanced AI systems. The move comes as governments around the world seek ways to reduce the risks and potential misuse of AI technology. The report adds that the voluntary code of conduct will mark an important milestone in how major countries govern the new technology.
The report notes that the G7, comprising the United States, the United Kingdom, Canada, France, Germany, Italy, and Japan, launched this process at a ministerial forum in May this year, an effort widely known as the "Hiroshima AI Process."
The document also shows that the new guidelines aim to promote safe, secure, and trustworthy AI worldwide, and will provide voluntary guidance for organizations developing the most advanced AI systems, including cutting-edge foundation models and generative AI systems. The guidelines are also intended to help countries and businesses seize the opportunities of the new technology while addressing its risks. According to the report, the code calls on major companies to take measures to identify, evaluate, and mitigate risks across the AI lifecycle, and to address misuse incidents after AI products reach the market.
The document also urges major companies to publish public reports on the capabilities, limitations, uses, and any misuse of their AI products, and to invest in cybersecurity measures.
On AI governance, the European Union has pushed for a tougher stance, while Japan has sought a lighter-touch approach, closer to the US model, to support economic growth. Southeast Asian countries have likewise taken a more business-friendly approach to AI.