
According to a recent Nikkei report, the Japanese government is considering establishing an organization next year to study issues related to artificial intelligence (AI) safety, including the risk of civilian AI technology being diverted to military use, with the aim of improving AI safety without hindering research and development.
The plan is reportedly expected to be officially announced at a meeting of the Artificial Intelligence Strategy Committee as early as this month. The committee is responsible for discussing national AI strategy, and its members include experienced private-sector AI professionals.
The focus will be on the most advanced products, such as OpenAI's ChatGPT. To avoid interfering with the private sector, ordinary AI technology that companies use to improve operational efficiency will not be targeted by the Japanese government.
AI products would need to undergo a series of tests before entering the market, and there is also a proposal requiring any AI products purchased by the Japanese government to be certified by professional organizations before use.
The new organization plans to study potential national-security risks, such as the possibility of using AI to obtain information on the manufacture of biological, chemical, or other weapons, and will also examine security vulnerabilities, including cyberattacks.
It will also investigate the risk of AI becoming uncontrollable, as well as problems of misinformation and bias. AI-controlled social media accounts can automatically generate content online and interact with human users, manipulating public opinion or inciting emotions.
With the development of information technology, the widespread use of AI in the military field has already become a reality. AI-driven tasks such as collecting and analyzing battlefield data are now widely used in combat systems, making the diversion of civilian AI technology to military purposes all but inevitable.
Although a brand-new institution may be created, the most likely outcome is that a new department will be merged into an existing organization. Candidate institutions include the National Institute of Information and Communications Technology, which researches AI under the Ministry of Internal Affairs and Communications, and the Information Technology Promotion Agency under the Ministry of Economy, Trade and Industry.
Earlier this month, the Group of Seven (G7) reached a final agreement on an international framework of AI rules. It will be the first comprehensive set of international rules covering both developers and users, stipulating that "all relevant personnel" involved in AI should fulfill their responsibilities.
By taking the lead in demonstrating a cooperative stance against AI abuse, the G7 aims to win the support of countries and companies outside the group. The Japanese government will use this agreement to develop domestic guidelines, which would require AI developers to undergo third-party risk assessments before bringing products to market.
The UK and the US are also leading the way in creating AI institutions. In November, the UK established the world's first AI safety research institute, which will verify cutting-edge AI products before and after launch and is considering disclosing products found to pose security risks.
Under the executive order issued by President Biden in October this year, the United States is developing AI safety evaluation methods under the leadership of the National Institute of Standards and Technology. The aim is to create an AI safety alliance with private companies to develop methods for evaluating AI capabilities and risks, with the hope that companies will carry out risk verification on their own.