Emerging Disruptive Technologies
The Emerging Disruptive Technologies research group addresses three key questions:
- How dangerous can new technological developments become from a security, ethical and legal perspective when they find their way into military use?
- How must verification measures be tailored to enable the effective control of modern military technologies?
- How can new technologies help develop more reliable arms control and verification measures?
To obtain robust answers, the group pursues an interdisciplinary research approach that combines political science with the natural sciences. Only by combining these different perspectives is it possible to determine what can be achieved politically (and with which actors), where technological pitfalls lie, and how they can be overcome, potentially through technology itself. This interdisciplinary approach therefore promises effective ways to strengthen arms control, which is currently in a severe crisis.
The group focuses on the future and looks primarily at technologies considered emerging disruptive technologies: technologies capable of overturning existing power structures that might allow weaker challengers to overtake the militaries of previously stronger players through innovation. These technologies include hypersonic missiles, military robotics, remotely piloted as well as autonomous and semi-autonomous weapon systems, nanotechnology, various forms of human enhancement, cyber operations, military applications of Artificial Intelligence (AI) and Machine Learning, and even the military use of quantum computers.
Some of these technologies, such as hypersonic missiles, have already been deployed by at least some militaries. Other technologies, such as quantum computers, are still years or even decades away from being ready for deployment. For all of these technologies, traditional quantitative arms control efforts such as ceilings and limits are difficult or virtually impossible to implement.
For example, while the use of AI in weapon systems can push humans out of critical decision-making processes such as target selection and engagement, and autonomy in crucial functions can give states a significant military advantage, how AI in weapon systems can be controlled remains an open question. At the same time, AI can also make arms control more effective and objective under certain conditions, for example in evaluating imagery from inspections or in distinguishing a natural seismic event from an underground nuclear weapons test. Exploring precisely this area of tension is the core task of the new research group at PRIF.
Team
Prof. Dr. Dr. Christian Reuter
Head of Research Group
Dr. Niklas Schörnig
Head of Research Group
Dr. Thomas Reinhold
Researcher
Liska Suckau
Researcher