Governance of Disruptive Technologies for International Peace and Security: Maturing Yet Nascent Military AI-cyber Policy Has Far to Go

Caitriona Heinl Commentary

There is not yet a comprehensive understanding of the future implications of Emerging and Disruptive Technologies (EDTs) like AI in civilian, military and intelligence applications. The precise nature of the opportunities and risks relating to EDTs continues to be unpacked, and relevant national and international policies for international peace and security are still maturing. Like the first cyber strategies introduced globally a decade ago, new EDT and AI strategies are likely to undergo several revisions. Adapting existing solutions will not always suit the peculiarities of EDTs, and tailored solutions will continue to be needed. While global AI, autonomy and related cyber governance questions for international peace and security still have some way to go, a number of positive developments are underway. In February 2023 alone, the first global Summit on Responsible AI in the Military Domain was held, alongside the release of a Joint Call to Action and a Political Declaration, and the announcement of a proposed Global Commission. In the context of international security, ongoing questions surrounding normative disconnection (especially for novel cyber weapons) can and should be addressed within the UN Open-Ended Working Group on ICTs, the discussions on Lethal Autonomous Weapons Systems under the UN Convention on Certain Conventional Weapons and other international initiatives, such as the intergovernmental process on the Global Digital Compact established by the President of the UN General Assembly in late 2022.

The first global Summit on Responsible Artificial Intelligence in the Military Domain was co-hosted by the governments of the Netherlands and the Republic of Korea last month. It was envisaged that this would create a high-level platform for governmental and non-governmental stakeholders, thus placing the responsible development, deployment and use of artificial intelligence (AI) in the military domain higher on the international agenda. As a result, a Joint Call to Action was agreed upon, and a Global Commission on AI will be established to raise awareness, clarify how to define AI in the military domain and determine how this technology can be developed, manufactured and deployed responsibly. The Commission will also set out the conditions for the effective governance of AI.

The Summit, Call to Action and proposed Global Commission are positive and much-needed developments given the fast pace of technological development, which is being driven quite extensively by the civilian sector. In particular, industry leaders are observing that AI has already reached an inflection point this year. Despite these initiatives and other global or EU efforts to introduce new frameworks, many long-standing and serious questions remain.

Understanding the link between EDTs, AI and cyber

The European Commission’s Action Plan on Synergies between Civil, Defence and Space Industries defines disruptive technology, including AI, as a technology inducing a disruption or a paradigm shift – in other words, a radical rather than an incremental change. It explains that there is an associated ‘high risk, high potential impact’, and the concept applies equally to the civil, defence and space sectors. Such emerging and disruptive technologies (EDTs) are broadly understood to include especially disruptive technologies such as AI; big data analytics; quantum-based technologies; robotics and autonomous systems; space technologies; new advanced materials; bio- and nanotechnology; hypersonic weapons systems; and directed energy weapons.

Some such EDTs are inherently dual-use given their potential for civil and military use, and dual-purpose where they can support both legitimate and nefarious activity. They can offer benefits to both the private and public sectors, while also creating serious risks relating to the cybersecurity of the EDTs themselves or to nefarious criminal and state use. By now, it is well encapsulated within key strategies such as the EU’s Cybersecurity Strategy for the Digital Decade (2020) that the cyber threat landscape is compounded by geopolitical tensions over control of technologies across the whole supply chain. And the European Union Agency for Cybersecurity (ENISA) is monitoring cybersecurity risks relating to emerging technologies, including AI. In late 2022, the agency concluded that two of the top cybersecurity threats likely to emerge by 2030 are a lack of analysis and control of space-based infrastructure and objects, and the abuse of AI. The EU’s inaugural March 2022 ‘Strategic Compass for Security and Defence’ further found that some state actors are using EDTs for strategic advantage and to increase the effectiveness of hybrid campaigns, as well as increasingly using strategic technologies and data without respecting existing international norms and regulation.

An EDT like AI can, for example, assist with enhancing cybersecurity or automated decision-making, and it is becoming a ‘prerequisite’ for the deployment of the Internet of Things (IoT). Law enforcement authorities are exploring training and capacity building in AI and data analytics. And in the EU, the 2020 Cybersecurity Strategy proposed a network of security operation centres for threat intelligence, powered by AI to improve incident detection, analysis and response speeds. In military terms, the role AI can play, including in cyber defence, is fast becoming a reality.

Militaries worldwide, including in the EU, are focusing increasingly on capability development gaps related to AI, including through monitoring defence industrial capacities. Broadly speaking, military areas of interest include anomaly detection, intrusion detection and prevention of cyberattacks; vulnerability management; threat intelligence; and automated incident response. For example, the European Commission’s Joint Communication on ‘Defence investment gaps analysis and way forward’ of May 2022 recommends augmentation of existing capabilities in domains such as cyberspace, noting that ‘AI solutions could provide significant advantages in the field of cybersecurity. … Militarily, this will mean improved logistics and operational efficiency, real-time monitoring of assets, predictive assessments of campaign plans, and quicker decision making’. In late 2022, a Joint Communication was also presented on an EU Policy on Cyber Defence, noting that sustaining state-of-the-art cyber defence capabilities will mean staying on top of technological developments and their applications in defence-related systems, in particular those of EDTs such as AI, with the support of industry.

On the other hand, there are ongoing concerns that AI and its application to automated decision-making might open new avenues for manipulation and novel attack methods. Moreover, the use of AI to support cybersecurity means measures must be developed to ensure that it is itself secure and trustworthy. Examples of malicious cyber-related uses of AI include AI-powered malware, AI-enhanced DDoS attacks and AI-enabled advanced disinformation campaigns. In addition, legal, security-related and ethical concerns in areas like transparency, reliability, accountability and bias are amplified in the high-risk military context, as well as in relation to the nuclear domain.

The precise nature of these EDT opportunities and risks continues to be unpacked, and relevant policy is still maturing. As with the first cyber strategies introduced across the globe around 2012, it is highly likely that new EDT and AI strategies will undergo several revisions in the years to come. This is especially the case where there is not yet a comprehensive understanding of the future implications of EDTs like AI in civilian, military and intelligence applications.

EU governance responses: Version 1.0 but more to come      

The EU’s Strategic Compass rightly identifies a clear, present need to establish a ‘better analytical hold’ on EDT trends and dependencies and how they are being increasingly used by states. In particular, it notes that in relation to the cyber domain, there is a need to develop and make intensive use of new technologies such as quantum computing, AI and Big Data to achieve comparative advantages, including in terms of cyber responsive operations and information superiority (language that is later reiterated in the May 2022 Council conclusions on the development of the EU’s cyber posture).

In this regard, the EU’s Cybersecurity Strategy for the Digital Decade (2020) previously specified that cybersecurity should be integrated into all future digital investments – and key technologies like AI, encryption and quantum computing in particular – through the use of incentives, obligations and benchmarks. More recent European Parliament documents show the EU and its Member States acknowledging the importance of EDTs by launching initiatives and dedicating funds to EDT R&D. EU leaders are committing to related defence expenditure; investing in critical and emerging technologies and innovation for security and defence; and fostering synergies between space, civilian and defence innovation and research. To this end, the EU will use the Commission’s Observatory on Critical Technologies to coordinate and gain a full understanding of critical dependencies. Examples of other initiatives include an allocation of the European Defence Fund annual budget to EDTs and the promotion of synergies between civilian and defence R&I, as innovation in EDTs is perceived to be mostly civilian sector-driven. In addition, ENISA runs ad hoc Working Groups on ‘AI Cybersecurity’ and ‘Foresight for Emerging and Future Cybersecurity Challenges’ and regularly publishes reports, such as its AI Threat Landscape report, to better understand emerging risks.

In relation to AI specifically, there has been substantial recent EU policy activity. In 2018, the European Commission established a High-Level Expert Group on AI and issued an AI Strategy. The ‘White Paper on Artificial Intelligence – A European Approach to AI’ was released in 2020, noting that the main risks related to AI concern the application of rules designed to protect fundamental rights as well as safety (such as cybersecurity issues, issues associated with AI applications in critical infrastructures, or malicious use of AI) and liability-related issues. The 2021 Coordinated Plan on AI highlighted the need to focus on AI cybersecurity, and the Action Plan on synergies found that AI developments must be conducted openly across the EU; ensure the safety and societal soundness of AI-based applications; consider ethical aspects; and assess the risks and mitigate the potential for malicious use. There are numerous relevant regulatory responses that include, among others, the proposed AI Act; the NIS 2 Directive; the Dual-Use Regulation; and the new Product Liability Directive to modernise the EU’s product liability framework for modern technology, including AI. For its part, ENISA is mapping the AI cybersecurity ecosystem and threat landscape as well as providing security recommendations for foreseeable challenges. Notably, the agency’s AI cybersecurity challenges report finds that the ‘EU secure AI ecosystem should place cybersecurity and data protection at the forefront and foster relevant innovation, capacity building, awareness raising and R&D initiatives’.

In short, adapting existing solutions will not always suit the peculiarities of EDTs and specific solutions will continue to be needed. In terms of global governance and international cooperation for peace and security, the EU’s Strategic Compass does specify the EU’s intention to work with all partners to promote relevant ethical and legal standards, including cooperation in the UN framework. In particular, it finds that if the EU and UN are to meet the challenges of the future, a more dynamic approach to early warning, conflict prevention and mediation is required. To this end, it identifies a need for structured information exchange, joint horizon scanning and strategic foresight, noting that this is important for responding to new and emerging challenges such as EDTs and hybrid threats, including cyber attacks and disinformation. Notably, the European Parliament calls for particular attention to be paid to the impact of emerging AI technologies, including their potential malicious use, on security and defence. It also calls for the EU to take the lead in global efforts to establish a comprehensive regulatory framework for AI-enabled weapons.

Global AI and related cyber governance for international peace and security have far to go

In terms of military AI, high-level questions remain, such as the nature of the risk if the deployment of AI leads to a loss of meaningful human control or to an escalation of violence. Moreover, there has been insufficient progress on issues such as reliability, accountability and explainability. Similarly, it is recognised that more certainty is needed surrounding which international agreements already apply and where gaps must continue to be addressed, such as managing unintended consequences and risks of unintended escalation. Notably, the February 2023 U.S. State Department ‘Political Declaration on Responsible Military Use of AI and Autonomy’ lists a best practice that states should maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment. It also recommends that states design and engineer military AI capabilities so that they possess the ability to detect and avoid unintended consequences and the ability to disengage or deactivate deployed systems that demonstrate unintended behaviour.

The August 2022 report of the current UN Open-Ended Working Group on developments in the field of information and telecommunications in the context of international security does acknowledge the double-sided nature of emerging technologies. The report recognises that emerging technologies are expanding development opportunities, but that their ever-evolving properties also expand the attack surface, creating new vectors and vulnerabilities that can be exploited for malicious ICT activity. The same finding appears within the prior 2021 consensus report of the UN Group of Governmental Experts on cyber. Acknowledging the double-sided nature of emerging technologies, however, is far from sufficient for these UN First Committee processes.

First, even though these cyber acquis for responsible state behaviour were crafted from inception as ‘technology neutral’ in order to take into account future technological manifestations, it is highly probable that each EDT will raise additional unique questions requiring tailored policy responses. Second, there is an ongoing risk of norms disconnect between the normative frameworks being developed for novel cyber weapons under the First Committee negotiations on cyber and those developed under the rubric of the UN Convention on Certain Conventional Weapons (CCW) on Lethal Autonomous Weapons Systems (and possibly with other international initiatives too). For instance, how should the principle of ‘Meaningful Human Control’ under discussion in the LAWS GGE in the CCW apply to AI-enabled or increasingly autonomous cyber capabilities, cyber operations and autonomous cyber weapons (whether for strategic reconnaissance, defensive purposes, or offensive for defensive purposes)? And if it does not apply, why not? This question of disconnection and meaningful human control/involvement has been outstanding for nearly ten years, and yet the problem persists – similar safeguards and human control or oversight of AI-enabled cyber systems should surely be ensured.

While there are numerous interpretations of the concepts of AI and autonomy, they can refer to machines’ ability to perform tasks, whether digitally or as the smart software behind autonomous physical systems. By deduction, autonomous cyber weapons could be Lethal Autonomous Weapons Systems too. Other unresolved questions at this level include how cyber confidence building measures should evolve to deal with unintended escalation and consequences deriving from such AI-enabled activity.

For these reasons, the Joint Call to Action at the Global Summit in February is a positive step with well-meant intentions. Notably, it is endorsed by four of the five permanent members of the UN Security Council (excluding the Russian Federation), which is a highly positive indicator. On the downside, it is not endorsed by all states, or indeed by all EU Member States (such as Ireland and Austria). This means that there continues to be disagreement over suitable governance approaches. For instance, while the EU is active in the CCW discussions on LAWS, some EU Member States would prefer a legally binding instrument while others propose a behavioural approach. In addition, there is a possible norms disconnect and contradiction where those EU Member States which advocate for a legally binding instrument on LAWS in the CCW seem instead to advocate a behavioural rather than treaty approach for responsible state behaviour in cyberspace within the UN First Committee.

In any case, the proposed Global Commission and other initiatives – such as South Korea’s World Emerging Security Forum, which examines AI implications for international peace and security; the ‘AI Partnership for Defense’; or the U.S. State Department’s ‘Political Declaration on Responsible Military Use of AI and Autonomy’ with its list of best practices that endorsing states believe should be implemented – can each be useful international mechanisms to address outstanding problem sets. So can other means, such as baking these questions into international partnerships like the EU’s future Trade and Technology Councils or the cyber defence partnerships that are expected to be established. The Republic of Korea and the Netherlands also have recent, relevant experience in hosting the third and fourth international cyberspace multi-stakeholder conferences respectively, namely the Seoul Conference on Cyberspace in 2013 and the Global Conference on Cyberspace in 2015. These multi-stakeholder conferences were held at a time when the field required high-level political attention to nudge along international cooperation. In governance terms, AI as it pertains to international peace and security is now at a somewhat similar stage. The multi-stakeholder approach is also vital insofar as the civilian sector can be a key driver, end user and problem-solver in this field. Moreover, structured dialogue with defence-relevant industries will certainly be required, as already highlighted within the EU Global Strategy.

Lastly, although many national AI strategies have recently been published, they do not always pay extensive attention to either cybersecurity or security and defence. This means that states need to undertake far more work at the national level to clarify their own national defence/security AI strategies in order to then guide and support their international efforts, thus driving more successful international cooperation. Some countries, such as the United Kingdom, have dedicated national defence AI strategies, while others, such as South Korea and Singapore, have developed frameworks, roadmaps and guidelines on AI for defence; France seems to be the only EU Member State with such a strategy. Such a strategy can provide the clarity of thought and terms of reference for a nation’s international cooperation. And this will likely only become more important as militaries integrate EDTs into their ecosystems.

The Joint Call to Action includes an ambition to promote the exchange of good practices and lessons learnt among states to increase the mutual comprehension of states’ national frameworks on AI use in the military domain. But it will be difficult in practice to conduct such exchanges if many countries do not even have such official frameworks and policies. Moreover, there is no cohesive EU-level defence AI strategy; governance in this space across Europe is currently fragmented.[1] The Call to Action rightly invites states to develop national frameworks, strategies and principles on responsible AI in the military domain. To conclude, this is an important first step which should not be underestimated. One key lesson from the first tranche of states’ cyber exchanges was the realisation that national frameworks and nominated key points of contact must first be in place as underpinning enablers for successful information exchange, trust building and international cooperation.


[1] Author observations, AI Defence Innovation Panel – Presentation of Simona Soare, REAIM, February 2023.

Thumbnail image credit: Harshana on Unsplash


About the Author

Caitríona Heinl

Caitríona Heinl is Executive Director at The Azure Forum for Contemporary Security Strategy, Ireland and Adjunct Research Fellow at the School of Politics and International Relations at University College Dublin (UCD). Caitríona has over a decade of experience in international and Irish research and academic environments working on transnational crime, international security and defence questions with particular focus on cybersecurity policy, emerging technologies, and regional security.
