In 2015, UN member states committed themselves to fostering software supply chain security. But the issue has since been neglected in international forums, even as software supply chain compromises have severely impacted individuals, companies and societies. To begin to close this implementation gap, diplomatic action should focus on global promotion of processes of coordinated vulnerability disclosure (CVD). This would both strengthen domestic cybersecurity and demonstrate states’ commitments to the UN normative framework.
During the most recent meeting of the United Nations Open-ended Working Group on security of and in the use of information and communications technologies (OEWG) in March 2023, the German and Czech delegations drew attention to the issue of software supply chain security and called on the body to address it through ‘policies and cooperation at national, regional and most importantly global level’. Switzerland, for its part, opted to dedicate the latest iteration of the Geneva Dialogue to addressing software vulnerabilities and improving the security of digital products.
Rise in software supply chain compromises left unaddressed
In recent years, the world has experienced a series of software supply chain compromises with significant real-world effects. The NotPetya malware (2017) initially compromised Ukrainian tax software; as it spread, it disrupted corporate operations around the world for weeks, costing individual companies hundreds of millions of US dollars. The Kaseya incident (2021), in which criminals took advantage of a vulnerability in IT monitoring software to ransom organisations across continents, caused ripple effects far down supply chains. A vulnerability in the Java library Log4j (2021), which is ubiquitous in software applications, led to a wave of incidents involving ransomware, crypto-mining and botnet creation.
While the global spread of ransomware rallied the international community – in the form of a global taskforce – threats to the integrity of global software supply chains have not prompted such common determination. This is quite disturbing; in 2015, all UN member states endorsed a set of eleven cyber norms, including the provision that:
States should take reasonable steps to ensure the integrity of the supply chain so that end users can have confidence in the security of ICT products. States should seek to prevent the proliferation of malicious ICT tools and techniques and the use of harmful hidden functions[.]
Even though this agreed-upon norm is a legally non-binding and voluntary political commitment, remarkably few practical policy initiatives have flowed from it. The 2021 report of the UN Group of Governmental Experts went no further than recommending that states adopt good practices put forward by suppliers and vendors of ICT equipment and systems. Nor has any further guidance been developed on the responsibilities individual states are expected to uphold to demonstrate adherence to the norm.
Coordinated vulnerability disclosure – A starting point for norm implementation
Now that a couple of UN member states are attempting to place the issue of software supply chain integrity on the cyber diplomacy agenda, it is time to more clearly define expectations of responsible state conduct. In other words: what measures can governments reasonably be expected to take to demonstrate seriousness about software supply chain security?
It is imperative that governments pick up this gauntlet. Many promote their country as a safe and secure place to connect, conduct online business, trade in digital products and invest in cyber-enabled emerging technologies. At the same time, the borderless nature of the software market means that regulators cannot but acknowledge the limits of their ability to control and audit it.
A recent report from Stiftung Neue Verantwortung identifies a variety of policy and regulatory options in the works. These include: establishing quality assurance frameworks or secure software development principles; using software bills of materials; and imposing product liability schemes to strengthen software supply chain security.
Yet some of these practices may set the bar too high for many governments, in particular those in emerging economies: such schemes require advanced skillsets to set up properly and substantial resources to ensure compliance. There is, however, a quicker win to consider. Coordinated vulnerability disclosure (CVD) policies require less of governments, in terms of both investment and technical capability; in fact, CVD practices leverage the broader community of industry, academia and IT security researchers.
CVD: Due process to enable and encourage reporting and remediation
The idea behind CVD is relatively simple. While governments are increasingly pushing for better ‘security by design’, software products coming to market will most likely continue to contain undiscovered vulnerabilities, and legacy applications will remain in operation too. With a CVD policy, an organisation commits to receiving reports of software vulnerabilities from security researchers or anyone else who discovers them. Discoverers agree to be bound by certain rules, and the notified organisation agrees to take remedial action and potentially offer remuneration. The organisation, typically the software developing entity, is then able to develop and issue a patch or other remedy before malicious actors can exploit the undisclosed vulnerability.
CVD is mainly a means to guide a process of reporting and remediation, but it also encourages a collaborative, community-driven culture of resolving public interest ICT security issues. Originally established at the grassroots level by the cybersecurity community and now applied in both the public and private sectors, this practice protects researchers from criminal charges or other forms of punishment – as long as they adhere to the mutually agreed and respected principles enshrined in the CVD policy.
Some governments – including those of the United States, the Netherlands and Japan – and major tech companies have already embraced this practice and are promoting broader acceptance globally, including through the Global Forum on Cyber Expertise.
Not a guarantee but an important step for mitigating risks
Alternatives to an established coordinated disclosure process take states to a much worse place. A culture of non-disclosure, in which discoverers keep vulnerabilities to themselves or sell the information illegally to third parties, would make countries, end users and vendors vulnerable to exploitation by malicious actors – both state-backed and criminal. The same goes for situations in which discovered vulnerabilities are instantly and fully disclosed before patches are available.
While CVD offers a means to ensure information about vulnerabilities reaches software developing entities legitimately and promptly, it is by no means a guarantee against malicious exploitation. Information may still leak out before a patch is adequately deployed, or a patch might be reverse-engineered to craft an exploit. But having a CVD process decreases the chances of widespread exploitation of vulnerabilities.
This is why CVD should rank high on the list of priorities for any government that takes the safety and security of software entering its market seriously and that wants to ensure trust and confidence in online products and services, in particular for critical infrastructure and critical information infrastructure. In addition, at the international level, encouraging a practice of CVD within their jurisdictions is a way for governments to demonstrate commitment to the agreed norm of ensuring software supply chain security.
What measures, then, might governments take to ensure the integrity of the ICT supply chain and end-user confidence in the security of ICT products? We suggest three initial steps.
Introduce CVD policies for government departments that develop software products
First, governments should lead by example and set up organisational CVD policies for public sector agencies that develop software. While not all states have entities that develop software, this is often the case for national cybersecurity agencies as well as for digital transformation and e-government service agencies. Organisational CVD policies clarify expectations about appropriate behaviour from all stakeholders involved in the CVD process. They include information on legal considerations as well as practical information such as contact details and response timelines. By introducing such policies, governmental agencies facilitate vulnerability discovery and reporting for their own software products and set an example for others to follow.
Encourage software developing entities to introduce their own CVD practices
Second, policymakers should issue ready-to-use guidance to software developing entities in their jurisdictions to incentivise them to set up organisational CVD policies. Such guidance should be tailored to the needs of the stakeholders in their national ecosystem: for example, small- and medium-size enterprises with limited resources and capabilities; or newcomers to the software supply chain market, such as producers of IoT devices.
Actionable guidance can include standard language on the legal situation of security researchers or templates for vulnerability disclosure documents and CVD policies. Fortunately, states do not have to start from scratch: they can refer to a broad range of good practices from non–governmental actors or international organisations.
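One lightweight building block such guidance can point to is the `security.txt` file standardised in RFC 9116, which lets any software developing entity publish machine-readable disclosure contact details at a well-known location on its website. A minimal sketch follows; the domain, addresses and policy URL are placeholders, not real endpoints:

```text
# Served at https://example.org/.well-known/security.txt (per RFC 9116)
# Contact and Expires are the two mandatory fields.
Contact: mailto:security@example.org
Expires: 2026-06-30T23:59:59.000Z
# Optional fields pointing researchers to the organisation's CVD policy.
Policy: https://example.org/cvd-policy
Preferred-Languages: en
```

A researcher who discovers a vulnerability can retrieve this file to learn where, and under which policy, to report it – exactly the kind of ready-to-use template governments can distribute to organisations in their jurisdictions.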
Update national legal frameworks to offer adequate protection for security researchers
Third, governments need to foster an enabling legal environment for these steps to take place. This can be done through a national legal framework for CVD. In many jurisdictions, the legal status of security researchers is currently unclear at best and criminalised at worst. At minimum, regulation should include safe-harbour provisions that specify the scope and conditions for vulnerability research.
By putting these three steps into practice, governments will achieve two things at once: they will improve cybersecurity at home; and they will credibly and tangibly demonstrate their commitment to essential components of the UN framework of responsible state behaviour in cyberspace. Furthermore, they will contribute to further definition of state responsibilities and accountability.
The authors of this blog post are, respectively, a co-author of and a contributing expert to the Stiftung Neue Verantwortung report cited above.
About the Author
Alexandra Paulus & Bart Hogeveen
Alexandra Paulus is Project Director for International Cybersecurity Policy at Stiftung Neue Verantwortung, the Berlin-based tech policy think tank. Her work focuses on cyber diplomacy, the development and implementation of cyber norms and non-traditional actors in international cybersecurity policy. Bart Hogeveen is Head of Cyber Capacity Building at ASPI’s International Cyber Policy Centre. Across his career he has focused on international peace and security, international aid and national security aspects of cyber affairs.