Annex: CNTR Fact Sheet AI
Sources and references for the fact sheet "AI and Its Military Applications"
Dr. Thomas Reinhold (CNTR/PEASEC), July 2024
The following contributions and reports examine artificial intelligence and its current and field-tested areas of military application in more detail:
- AI in the life sciences:
- Carter, S. R., Wheeler, N. E., Chwalek, S., Isaac, C. R., & Yassif, J. (2023). The Convergence of Artificial Intelligence and the Life Sciences: Safeguarding Technology, Rethinking Governance, and Preventing Catastrophe. NTI:bio.
- Hunter, P. (2024). Security challenges by AI-assisted protein design: The ability to design proteins in silico could pose a new threat for biosecurity and biosafety. EMBO Reports, 25(5), 2168–2171.
- AI in cyber defense and cyber offense:
- Bundesamt für Sicherheit in der Informationstechnik (BSI). (2024). Einfluss von KI auf die Cyberbedrohungslandschaft.
- How Will AI Change Cyber Operations? (2024). War on the Rocks.
- Autonomous Cyber Defense: A Roadmap from Lab to Ops. (2024). CSET.
- AI in battlefield management and military decision-support systems:
- Shehabi, O. Y., & Lubin, A. (2024). Algorithms of War: Military AI and the War in Gaza. Israel–Hamas 2024 Symposium.
- Goecks, V. G., & Waytowich, N. (2024). COA-GPT: Generative Pre-trained Transformers for Accelerated Course of Action Development in Military Operations (arXiv:2402.01786). arXiv.
- Skove, S., et al. (2024). Targeting time shrinks from minutes to seconds in Army experiment. DefenseOne.
- Gady, F.-S. (2020). What does AI mean for the future of manoeuvre warfare? IISS.
- AI in (lethal) autonomous weapon systems:
- Copp, T. (2024). An AI-controlled fighter jet took the Air Force leader for a historic ride. What that means for war. Politico.
- Cotovio, V., Sebastian, C., & Goodwin, A. (2024). Ukraine’s AI-enabled drones are trying to disrupt Russia’s energy industry. So far, it’s working. CNN.
- Robertson, N. (2023). Pentagon unveils ‘Replicator’ drone program to compete with China. DefenseNews.
- European Aviation Artificial Intelligence High Level Group. (2020). The FLY AI Report: Demystifying and Accelerating AI in Aviation/ATM (March 2020).
- AI in nuclear command, control, and communications (NC3) systems:
- Anand, A. A., Arias, L., Bianco, B., Hoffmann, F., Honich, A., Karner, N., Renssen, N., Suh, E., Wachs, L., & Alexa, W. (2021). Preemptive Discussions: The Potential Implications of Integrating Deep Learning into Early Warning Systems. BASIC UK.
- Nakamitsu, I. (2020). Emerging technology and nuclear risks; sustaining and developing expertise in the next generation. Keynote speech at the Virtual UK Project on Nuclear Issues 2020 Annual Conference, Royal United Services Institute for Defence and Security Studies.
- Topychkanov, P. (2020). The Impact of Artificial Intelligence on Strategic Stability and Nuclear Risk. In South Asian Perspectives. SIPRI.
- Ethical and human rights considerations and challenges of military AI applications:
- McFarland, T., & Assaad, Z. (2023). Legal reviews of in situ learning in autonomous weapons. Ethics and Information Technology, 25(1), 9.
- Rivera, J.-P., Mukobi, G., Reuel, A., Lamparth, M., Smith, C., & Schneider, J. (2024). Escalation Risks from Language Models in Military and Diplomatic Decision-Making (arXiv:2401.03408). arXiv.
- Deutscher Ethikrat. (2023). Mensch und Maschine – Herausforderungen durch Künstliche Intelligenz. Stellungnahme des Deutschen Ethikrates.
- Boulanin, V. (2020). Responsible Military Use of Artificial Intelligence: Can the European Union Lead the Way in Developing Best Practice? SIPRI.
- United Nations Institute for Disarmament Research, & Holland Michel, A. (2020). The Black Box, Unlocked: Predictability and Understandability in Military AI. United Nations Institute for Disarmament Research.
Further information on AI and its security:
- Anyoha, R. (2017). The History of Artificial Intelligence. Harvard University.
- Ji, Z., Lee, N., Frieske, R., Yu, T., Su, D., Xu, Y., Ishii, E., Bang, Y., Chen, D., Chan, H. S., Dai, W., Madotto, A., & Fung, P. (2023). Survey of Hallucination in Natural Language Generation. ACM Computing Surveys, 55(12), 1–38.
- Vassilev, A. (2024). Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST AI 100-2e2023). National Institute of Standards and Technology.
- Herpig, S. (2020). Understanding the Security Implications of the Machine-Learning Supply Chain.
- Large language models can do jaw-dropping things. But nobody knows exactly why. (2024). MIT Technology Review.
- Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115.