The EU’s data and AI strategies assume that increasing trust will increase data generation, bolstering the Union’s competitiveness in the AI field. But an inherent conflict exists between the goals of data-driven innovation and data protection.
Like any significant technological shift, the growth of the Internet of Things (IoT) raises many issues of geopolitical and strategic significance. These issues stem not only from the fact that the IoT comprises many interconnected systems; they have at least as much to do with differing visions of and expectations for the future of the sociotechnical systems that will shape our world. As the IoT develops, every powerful political entity will take the position it believes will best promote its interests, preferred norms and values.
As with other geopolitical drivers, several distinct approaches are emerging and evolving. The “Californian” model emerged from the US under the Clinton/Gore administration of the 1990s. This model was built on the principle of “move fast, break things, and apologise later”, and it deserves credit for much of the innovation of the last few decades. It has also been responsible for many of the problems we now confront. These problems have been baked into the digital economy through business models that trade free apps and services for free access to valuable data (often of a highly private nature) that can be commercially exploited.
A second approach is emerging from China, where considerable state investment in technological development is now paying dividends. The Chinese model has much in common with the Californian model in that it too is built on massive data collection, subject to only light regulation, if any. Many believe that the Chinese approach is driven more by a desire for political stability or dominance than for economic gain (the primary driver of the Californian approach). But with the influence of the middle class in China rising, one could argue that those two goals increasingly reinforce one another.
The third major approach is that of the EU, where we see an effort to square the circle. The EU approach attempts to balance the benefits of the digital economy and the IoT’s potential to improve the human condition with continued reinforcement of the centrality of democratic institutions and human rights. This third approach is the most complex because it engages head-on with the dilemmas and contradictions that plague discussions about the IoT and data generally. The EU has recently made some significant steps towards resolving these contradictions.
EU Steps Forward
The European Commission recently released two documents laying out its plans to shape Europe’s digital future: a Communication that sets out the European “Data Strategy” – specifically seeking to increase the availability of data for businesses and the public sector – and a White Paper containing a number of policy options aimed at promoting the adoption of Artificial Intelligence (AI). These two documents are intrinsically related, as achieving the aims of the former is a precondition for the development of the AI ecosystem contemplated in the latter: the refinement of AI depends on access to large volumes of data to train the underlying algorithms.
This is an issue of fundamental importance to the future of the Internet of Things, which will generate exponentially more data as it grows. For the EU, establishing legitimate mechanisms for gathering, sharing and re-using data will be essential for (re)building trust in civil society and, therefore, ensuring that the Union is in a position to maximise the potential of the IoT in a way that upholds human rights. This, of course, was the premise upon which the General Data Protection Regulation (GDPR) was implemented.
Paradoxically, despite the dramatic growth in data generation caused by technological progress, data is still not widely accessible. Accordingly, the Data Strategy seeks to create a single market where data can be readily accessed, traded and shared. The resulting increase in data-sharing between private projects and the public sector is expected to lead to European data pools where big data analytics, machine learning and AI can thrive in a manner compliant with EU data protection, consumer protection and competition law.
There is a straightforward line of reasoning behind the Commission’s strategy to make quality data more available for use and re-use in a competitive environment that is undistorted by unfair and anti-competitive practices and where participants respect individuals’ data protection rights. The EU’s strong data protection framework, the argument goes, brings about legal certainty and promotes trust, thereby leading to greater engagement with data-driven platforms and technologies, the creation of more data, and consequently greater AI improvement.
Moreover, the argument continues, consumers naturally prefer honest service providers, so actors that engage in unfair commercial practices will organically lose market share. Crucially, if properly enforced, the purpose limitation principle enshrined in the GDPR could reduce the market power held by tech giants in data-driven markets, while at the same time making it easier for startups to enter the market and compete on their own merits.
Data Protection and Innovation in Conflict
Regrettably, the logical argument suggesting that greater data protection increases both consumer protection and competition, leading to greater trust, generation of more data and eventually AI improvements, is both overly simplistic and flawed. It fails to account for a number of regulatory failures that prevent the achievement of synergies between data protection, consumer protection and competition policy.
First, there is already weak competition in data-driven markets, which are typically characterised by the presence of a super dominant firm and fringe competitors. These super dominant firms have an incentive to withhold their data troves rather than share them. Second, consumers are largely in the dark about the scope and magnitude of their service providers’ data processing operations, and remain so thanks to behavioural biases, consent fatigue and other impediments to informed decision-making. To data-driven firms, of course, these users’ data is invaluable. In this setting, where information asymmetries and imbalances of power are rife, little trust can be developed and gained.
Crucially, data protection and competition policy pull in opposite directions. Data protection laws aim to put a brake on the collection and re-use of personal data, thereby protecting individuals’ privacy and autonomy. However, limiting data collection makes it more difficult for data-driven firms to succeed and slows innovation, ultimately potentially harming consumer welfare, which is arguably the supreme goal in modern competition policy. Resolving this tension will require a collective dialogue about the goals of both fields, quite possibly leading to legal uncertainty and potentially creating more problems than solutions, at least in the short term.
Essentially, despite a latent consumer demand for privacy-friendly services, lack of trust has had minimal impact on consumer engagement with data-driven services. Market forces have failed to satisfy this latent demand due to the information asymmetries and imbalances of power referenced above. Of course, should greater competition lead to the introduction of privacy-friendly services despite these hurdles, those new alternatives’ reduced data collection could make the Commission’s data and AI strategies more difficult to achieve. It is exactly these mounting tensions, reflected in a fragmented regulatory framework, that require continued effort if the EU’s vision is to flourish between the two poles of California and Beijing.
While this is a global issue that requires a multi-stakeholder approach, it also draws upon state-centric levers of regulation, governance and power. Consequently, the EU’s approach will have implications for the promotion of norms and values that underpin the future of the Internet of Things. Those actors that display initiative, vision and thought leadership now will have the opportunity to establish frameworks within which others operate. The effort and investment that the EU is putting into this now bodes well for the future.
About the Author
Madeline Carr is the Director of the UK-wide Research Institute in Sociotechnical Cyber Security (RISCS) and the Director of the Digital Technologies Policy Lab. Her research focuses on the implications of emerging technology for national and global security, international order and global governance. She is now the lead on the Economics and Law lens of the new PETRAS National Centre of Excellence in Cybersecurity of the IoT. Professor Carr is a member of the World Economic Forum Global Council on the IoT. She is also the Deputy Director of a new Centre for Doctoral Training in Cybersecurity at UCL.