Artificial Intelligence has proven to be a powerful tool for analyzing large volumes of data, making it an important asset for future arms control verification regimes. This chapter explores potential consequences of AI for verification. We find that the availability of suitable training data for verification purposes can pose a challenge that needs to be addressed by AI engineers building models for verification. The explainability of AI results – i.e. the reasoning behind outputs – must be addressed by developers and practitioners to ensure sound verification assessments and to provide inspectors with insights into the potential and limits of decision-supporting AI models. To create trust in and acceptance of AI-aided verification and monitoring, AI verification systems should be developed and tested jointly with all treaty parties. Training data and models should be shared among parties, where possible, to foster transparency and enable independent validation.
Bibliographic record
Göttsche, M. & Unruh, F. (2025). AI in Verification. In: Göttsche, M.; Reis, K. & Daase, C. (Eds.). New Realities of AI in Global Security. CNTR Monitor – Technology and Arms Control 2025. PRIF – Peace Research Institute Frankfurt.
Authors
Fabian Unruh & Prof. Dr. Malte Göttsche