Prime Minister Shigeru Ishiba, left, receives an explanation about a next-generation fighter jet at DSEI Japan 2025, an international defense and security equipment exhibition held in Chiba Prefecture in May.
12:59 JST, June 7, 2025
The Ministry of Defense has issued guidelines for managing risks related to defense equipment that incorporates artificial intelligence.
The guidelines clearly state that the government will not allow research and development of defense equipment if it is found to constitute a lethal autonomous weapons system (LAWS).
According to the guidelines, risk management for research and development is to be carried out in three stages: classification of AI equipment, a legal review and a technical review.
Equipment covered by the guidelines is divided into high- and low-risk categories based on how the AI system's judgments affect the equipment's destructive capabilities and on the goals of the research and development.
Equipment deemed high risk, such as missiles that use AI to identify targets, will be assessed by the government for compliance with international and domestic law before research and development begins.
If the equipment is judged to constitute a LAWS, research and development of the system will be cancelled.
Once the legal review is complete, the process moves to a technical review. This stage verifies that the design allows for human control and ensures safety through mechanisms that reduce AI malfunctions.
To ensure effective reviews, the ministry will seek the cooperation of defense contractors that design AI-equipped defense equipment, requiring them to disclose AI algorithms and other relevant information.
The ministry plans to determine specific ways to ensure cooperation through future discussions with businesses.