No Safety Without Cybersecure AI

Sven Herpig | Opinions

Artificial Intelligence (AI) is one of the most promising emerging technologies today, and its applications will be widely implemented in the years to come. This isn’t only true for sectors like autonomous driving or surveillance but also for the military and the judiciary – all areas where security is paramount. It’s imperative that we start considering the cybersecurity dimensions of AI now and begin developing an appropriate policy framework to better defend sectors critical to public safety against adversarial interference.

Looking back at previous technology-related discourses, like the one surrounding the Internet of Things, it’s clear that security concerns are often considered quite late: neither privacy nor security were baked into the IoT’s design from the get-go, and governments around the world have been playing catch-up ever since. Failing to address these issues early on isn’t only expensive in the long run, it can also seriously undermine privacy and security, as we have witnessed over the years with other technologies.

This wasn’t the first time that security concerns were taken into account far too late – and it unfortunately won’t be the last. But we can’t afford such a blunder with AI. We can’t ignore people’s – justified – security concerns. The stakes are simply too high.

If the EU wants to be a leader in emerging technologies like machine learning, the driving force behind today’s AI, it must come up with an appropriate policy framework to better safeguard safety-critical areas.

Cybersecurity and the Future of Machine Learning

Even though there are already many applications that claim to leverage AI – just look at Oral-B’s latest toothbrush – it is important to first concentrate on applications that will likely be run in safety-critical, high-risk areas. For example, researchers have managed to trick Tesla’s lane recognition system into steering a car into oncoming traffic and to fool an image classifier into labelling a 3D-printed turtle as a rifle.
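Both incidents rest on the same idea: an adversarial example, a small and carefully crafted change to the input that flips a model’s prediction. As a rough illustration only, and not a reconstruction of the cited attacks, the sketch below shows the classic fast gradient sign method (FGSM) in PyTorch; `model`, `image` and `label` are placeholder names for a trained classifier, an input tensor and its true class.

```python
# Illustrative FGSM-style adversarial perturbation (PyTorch).
# `model`, `image` and `label` are placeholders for a trained classifier,
# an input tensor and its true class; nothing here reproduces the cited attacks.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Nudge `image` by epsilon in the direction that most increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()   # keep pixel values in a valid range
```

The unsettling part is that such a perturbation can be small enough to be invisible to a human while still reliably changing the model’s output.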

Machine learning may offer numerous advantages – not least of all enormous growth in productivity and automation – but it also poses new risks, such as vulnerabilities to adversarial interference that can be difficult to spot. There are three distinct points where cybersecurity and machine learning intersect.

First, machine learning can be used to make cyberattacks more potent, such as by improving bug detection, exploitation, phishing and user deception. These tools are malicious actors’ bread and butter, and they become more effective when leveraged with machine learning.

Luckily, the opposite is also true for improving our cyberdefences: Machine learning can be leveraged to strengthen IT security, for instance by automating intrusion detection and immediately suggesting triage options to staff.
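To make that defensive use a bit more concrete, here is a deliberately simplified sketch – not any real product – of anomaly-based intrusion detection: an IsolationForest is fitted on synthetic “normal” traffic features, and unusual new events are flagged for human triage. The feature meanings and thresholds are invented for illustration.

```python
# Toy sketch of ML-assisted intrusion detection: fit an IsolationForest on
# synthetic "normal" traffic features, then flag unusual new events for triage.
# Feature meanings (bytes sent, duration, failed logins, ...) are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

normal_traffic = rng.normal(size=(10_000, 3))        # baseline behaviour
new_events = np.vstack([
    rng.normal(size=(50, 3)),                        # mostly ordinary events
    rng.normal(loc=6.0, size=(5, 3)),                # a few clear outliers
])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

suspicious = detector.predict(new_events) == -1      # -1 marks anomalies
print(f"{suspicious.sum()} of {len(new_events)} new events flagged for triage")
```

In practice, such a detector would only ever be one component of a layered defence and would still rely on analysts to confirm what it flags.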

But it’s the third intersection that is probably the most important for the EU. It has to do with the security of machine learning applications: What is the current state and what steps could be taken to improve it? Or as Martin Ford, the author of Architects of Intelligence, put it, “[Security is] maybe the single most important challenge with regard to Artificial Intelligence […]”.

Much Ado About Nothing?

There’s little doubt that machine learning will continue to evolve as more people use it. According to forecasts by the International Data Corporation (IDC), global industrial expenditures on AI will reach $79.2 billion in 2022, up from an estimated $35.8 billion in 2019. Today, 72 percent of business executives believe AI will be a huge advantage for their companies. With that in mind, we can reasonably expect an investment boom in machine learning across various sectors. In turn, this will cause many machine learning applications to be further integrated into diverse safety-critical areas, such as autonomous driving, surveillance systems, the courts and the military.

Yet without proper security systems in place, our vulnerability to malicious cyber operations is staggering. Even without the deployment of machine learning, conventional IT systems are exposed to cyber operations. Almost every other week, another successful cyberattack is reported in the news. The risks aren’t limited to our outdated smartphones or laptops, either, but more worryingly concern critical infrastructure like government systems or military assets and networks.

What Are the Cybersecurity Risks?

An analysis by European and American experts in machine learning and cybersecurity concluded that we face a greater risk when deploying machine learning in sensitive areas of national security. This is due to an extended attack surface, an often low awareness that crucial machine learning systems have been compromised, and potential domino effects, like when several systems are interconnected without human oversight. (Think of image recognition for identifying enemy military vehicles or automated weapons systems.)

Another facet of this increased attack surface is that the data used to train a machine learning classifier can be manipulated by an attacker – something that isn’t a problem in traditional software development. An attacker could, for example, cause a facial recognition system used in surveillance never to flag people with brown eyes as criminals. All they would need to do is delete a specific subset of the training data while the application is being developed, as sketched below.
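Here is a minimal sketch of such poisoning by omission, on entirely synthetic data with a k-nearest-neighbour classifier; the subgroup, features and thresholds are all invented. Once every positive example from one subgroup is quietly removed from the training set, the resulting model barely ever flags that subgroup, while a model trained on the untouched data still does.

```python
# Toy illustration of "poisoning by omission" with synthetic 2-D data and a
# k-nearest-neighbour classifier. Group labels, features and thresholds are
# invented; the point is only that silently deleting one subgroup's positive
# examples leaves a model that rarely flags that subgroup.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)

def make_group(center_x, n=1000):
    """Synthetic 'biometric' features clustered around a group-specific centre."""
    X = rng.normal(loc=[center_x, 0.0], scale=1.0, size=(n, 2))
    y = (X[:, 0] > center_x).astype(int)       # roughly half should be flagged
    return X, y

X_a, y_a = make_group(0.0)                     # everyone else
X_b, y_b = make_group(6.0)                     # the targeted subgroup

X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])
group_b = np.arange(len(y)) >= len(y_a)

# --- Attacker's manipulation: drop every positive training example of group B ---
keep = ~(group_b & (y == 1))

clean = KNeighborsClassifier(n_neighbors=5).fit(X, y)
poisoned = KNeighborsClassifier(n_neighbors=5).fit(X[keep], y[keep])

target = group_b & (y == 1)                    # people the system *should* flag
print("clean model flag rate on group B positives   :", clean.predict(X[target]).mean())
print("poisoned model flag rate on group B positives:", poisoned.predict(X[target]).mean())
```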

For those training and deploying machine learning applications, it’s crucial to recognize whether the result of a machine learning prediction is objectively correct or not. Wrong output could stem either from a faulty setup or from someone interfering with the system. (The latter is harder to confirm.)

It’s hard enough as it is to attribute attacks that target traditional software; with machine learning, there are several stages over which those responsible for development and deployment have no control. For example, the collection and labelling of outsourced training data, as well as the pre-training of models, are often carried out by third parties (the “supply chain” of machine learning).

Moreover, adversaries could conceivably conduct physical manipulations – of, say, traffic signs. An attack could occur at any one of these stages, making it extremely difficult to spot, let alone track.

Unfortunately, an increased attack surface and challenges in tracking anomalies aren’t the only downsides of implementing machine learning in safety-critical areas. Another risk is the interconnectedness of these systems. Individual applications don’t exist in isolation, after all. As the level of automation of (vulnerable) systems within networks grows, and as machine learning facilitates the autonomy, scaling, and speed of decisionmaking, the danger of a (potentially lethal) chain reaction increases in kind.

For example, take the manipulation of image recognition and detection systems for traffic lanes in automated vehicles – these could be steered into oncoming traffic. The IT community has been working on countermeasures for years, but policymakers need to also be aware of these issues and discuss whether existing security frameworks for safety-critical areas are also applicable and effective for machine learning applications.

A Call for Better Joint Engagement on AI Security Questions

Patching the vulnerabilities in machine learning before someone can find ways to exploit them is a cat-and-mouse game, but there is much promising technical research being carried out to this end. People are exploring, for instance, secure multi-party computation, differential privacy, built-in forensic capabilities, and the explainability/interpretability of machine learning.
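To give a flavour of one of these research directions, the toy snippet below applies the Laplace mechanism, a basic building block of differential privacy: a count over training records is released with calibrated noise so that the presence or absence of any single record is hard to infer. The query, sensitivity and epsilon values are illustrative only.

```python
# Toy Laplace mechanism, a basic building block of differential privacy.
# The query (a simple count), sensitivity and epsilon are illustrative only.
import numpy as np

def noisy_count(records, epsilon=0.5, sensitivity=1.0):
    """Release a count with Laplace noise calibrated to sensitivity/epsilon."""
    rng = np.random.default_rng()
    return len(records) + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# e.g. how many training records match some property, without revealing
# whether any particular record is in the set
matching_records = list(range(1_000))          # stand-in for real records
print(noisy_count(matching_records))
```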

So far, most of the work in this area has been conducted within research departments and academic communities in the public or private sector. It’s not part of the policy discourse yet, but the security aspects of machine learning – particularly safety and robustness – have indeed trickled down into the standardization community.

In October 2019, the European Telecommunications Standards Institute (ETSI) announced the launch of its specification group on Securing AI. Later that month, the German Institute for Standardization (DIN) launched its own working group on the IT security of AI. Other organizations, including the International Organization for Standardization (ISO) and the European Committee for Standardization/European Committee for Electrotechnical Standardization (CEN/CENELEC) have also launched similar initiatives. Likewise, cybersecurity agencies like the European Union Agency for Cybersecurity (ENISA) and its German counterpart, the Federal Office for Information Security (BSI), have also begun to work more on the issue.

Standardization efforts, academic research, and open communication between technical experts and policymakers are all a must if we’re going to form domain-specific information security guidelines and best practices. These could include penetration testing guidelines, data validation methods, and guidelines for handling classified training data.
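As one concrete, and purely hypothetical, example of what a data validation method could look like in practice, the sketch below checks training files against a manifest of SHA-256 hashes before they are used, so that tampered or swapped data is caught early. The paths and the manifest format are invented for illustration.

```python
# Hedged sketch of one possible data-validation step: verify training files
# against a manifest of known SHA-256 hashes before training begins.
# Paths and the manifest format are made up for illustration.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_training_data(data_dir: str, manifest_file: str) -> list[str]:
    """Return the names of files whose hashes do not match the manifest."""
    manifest = json.loads(Path(manifest_file).read_text())   # {"file.csv": "<hash>", ...}
    return [
        name for name, expected in manifest.items()
        if sha256_of(Path(data_dir) / name) != expected
    ]

# Example: refuse to train if anything in the dataset has been altered.
# tampered = verify_training_data("training_data/", "manifest.json")
# assert not tampered, f"Tampered files: {tampered}"
```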

It’s true that some of these aspects might already be covered by existing processes and regulations (e.g. the General Data Protection Regulation or the Network and Information Security Directive), but the EU white paper on AI points to the safety of AI as an area where “the legislative framework could be improved”. Safety does not *necessarily* extend to cybersecurity, but the white paper specifies that safety “includes issues of cybersecurity, issues associated with AI applications in critical infrastructures, or malicious use of AI”.

The paper goes even further to state that a mandatory legal requirement for high-risk AI applications should be “ensuring that AI systems are resilient against both overt attacks and more subtle attempts to manipulate data or algorithms themselves, and that mitigating measures are taken in such cases”.

We’ve taken the first step. There’s acknowledgement that cybersecurity is vital for the safe development and deployment of AI applications. Now the EU needs to follow through and address the associated vulnerabilities. This is especially true for safety-critical sectors – and not only from a regulatory, but also a technical and an organizational perspective.

About the Author

Sven Herpig

Sven Herpig is head of international cybersecurity at the Berlin-based tech policy think tank Stiftung Neue Verantwortung. Sven worked for the German government on IT security issues in various positions and served as an expert for the German parliament.
