Three Pivots Towards Digital Inclusion

Natalie Pang | Opinions

While the pandemic has created a new awareness of the importance of digital inclusion, we need to expand the conversation beyond accessibility and literacy towards human-centric strategies that combat newer forms of bias and exclusion. Current discussions about digital inclusion need to take into account three elements in particular: smart technology bias, data citizenship and platform cooperativism.

If there’s one thing the coronavirus pandemic has brought about in the last two years, it’s a new level of awareness that access to digital technologies has become essential to the daily functioning of many people. Digital access means both being able to get online and having an adequate, reliable device to do so. While this used to be thought of as simply something ‘nice to have’, digital access has become essential for people to function at work, sustain their social relationships and maintain family bonds.

Over the course of the pandemic, many stories have emerged about people on the margins – those who aren’t the typical users new technologies are designed for – who have fallen through the cracks at work, at home and at school. Thankfully, in both academic and policy circles, conversations about digital equity and inclusion have gained momentum. The resulting recommendations centre on providing devices and internet connectivity, as well as empowering marginalised populations with digital literacy training and programmes. These solutions are important and essential: the digital gaps that have surfaced reflect class divides, with the more affluent and educated much further ahead than others.

But it’s also time to expand the conversation on digital inclusion beyond accessibility and literacy. With advances in artificial intelligence (AI) and rapid digitalisation across societies, new vulnerabilities have emerged that are even less visible than gaps in device access or broadband connectivity. Ongoing conversations and debates are often population-centric, addressing gaps faced by particular groups of people. Yet these vulnerabilities can affect large new groups who are not typically the focus of policy interventions associated with digital inclusion.

Bias in algorithms and smart technologies

As smart technologies and algorithms become more prevalent, governments, institutions and individuals are becoming more dependent on smart machines to help them filter and screen large volumes of data and make informed decisions. Algorithms now drive many applications, from recruitment to facial recognition to credit-risk analysis. While algorithms’ effects are often visible and widely observed, it is more important, but harder, to understand how they are created. Algorithms rely heavily on training data, which often fails to include marginalised populations. Algorithmic biases arise when algorithms make decisions or recommendations based on flawed data, and such flaws can result from incomplete data or from human biases.

Incomplete data is not simply a matter of fringe groups being left out of the training data; it can also arise from other groups being over-represented in it. These problems produce different issues. When certain groups are missing from the data, their needs, decisions and behaviours are simply not considered, resulting in incorrect or irrelevant recommendations for individuals from these groups. Over-representation in the training data also produces biases: think of the distortions created by the over-representation of African Americans in criminal databases, for example.

Researchers have discussed many strategies for mitigating algorithmic biases. For instance, organisations and governments could pay greater attention to how sensitive information about populations is handled and used to drive algorithmic recommendations. Information like race should not form the core basis of outcomes; that is, recommendations should not be made simply because someone belongs to a certain race or gender. The context of such data must be considered so that algorithms are not used against certain groups, deepening existing gaps. Another mitigation strategy is to diversify technical development, or to add feedback mechanisms to it, so that algorithms are built by teams able to incorporate greater diversity into the underlying logic.
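To make the first strategy concrete, here is a minimal sketch in Python of how a team might keep sensitive attributes out of a model’s inputs while retaining them purely for post-hoc auditing. It is an illustration rather than a prescribed method: the column names (‘race’, ‘gender’, ‘hired’) and the logistic regression classifier are assumptions made for the example.

```python
# A minimal sketch: train only on non-sensitive features, and hold sensitive
# attributes aside so they can be used to audit outcomes, never as model input.
# Column names below are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression

SENSITIVE_COLUMNS = ["race", "gender"]

def fit_without_sensitive(df: pd.DataFrame, label_col: str = "hired"):
    """Fit a classifier on non-sensitive features; return it with the sensitive columns held out."""
    features = df.drop(columns=SENSITIVE_COLUMNS + [label_col])
    audit_columns = df[SENSITIVE_COLUMNS]  # retained only for fairness audits
    model = LogisticRegression(max_iter=1000).fit(features, df[label_col])
    return model, audit_columns
```

Simply dropping sensitive columns does not remove proxies for them in the remaining features, which is why the contextual safeguards described above, and the checks listed below, remain necessary.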

Regardless of the strategy adopted, researchers have pointed to the following types of questions as ongoing checks on whether algorithms are inclusive (a minimal code sketch of these checks follows the list):

  1. Does the training data under-represent or over-represent particular groups? If so, which, and what kind of data is missing?
  2. How is demographic and sensitive information (e.g. race, income, gender) being used?
  3. Have the algorithms been tested for their usefulness and usability with individuals who come from the most marginalised groups?
  4. Do the algorithms produce unequal outcomes for people from different population groups?
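As a concrete illustration of questions 1 and 4, the sketch below compares group representation in the training data with population shares, and measures the gap in favourable outcomes between groups. It is my own illustration, not a method from the article, and the ‘group’ and ‘favourable’ column names and the population shares are hypothetical.

```python
# A minimal sketch of two inclusiveness checks on tabular data:
#   (1) does the training data under- or over-represent particular groups?
#   (4) do outcomes differ across groups?
import pandas as pd

def representation_check(train: pd.DataFrame, population_shares: dict, group_col: str = "group"):
    """Compare each group's share of the training data with its share of the population."""
    data_shares = train[group_col].value_counts(normalize=True)
    report = {}
    for group, pop_share in population_shares.items():
        data_share = float(data_shares.get(group, 0.0))
        report[group] = {
            "population_share": pop_share,
            "training_share": data_share,
            "ratio": data_share / pop_share if pop_share else float("nan"),
        }
    return report

def outcome_disparity(results: pd.DataFrame, group_col: str = "group", outcome_col: str = "favourable"):
    """Rate of favourable outcomes per group, and the gap between best- and worst-served groups."""
    rates = results.groupby(group_col)[outcome_col].mean()
    return rates, rates.max() - rates.min()

# Toy example: group B is under-represented in training and receives fewer favourable outcomes.
train = pd.DataFrame({"group": ["A"] * 80 + ["B"] * 20})
results = pd.DataFrame({
    "group": ["A"] * 50 + ["B"] * 50,
    "favourable": [1] * 40 + [0] * 10 + [1] * 20 + [0] * 30,
})
print(representation_check(train, {"A": 0.6, "B": 0.4}))
print(outcome_disparity(results))
```

Checks like these are only a starting point; questions 2 and 3 still require qualitative judgement about how sensitive information is used and whether the most marginalised users were involved in testing.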

Data citizenship for inclusion

As societies have become more digitalised and surveillance technologies have expanded over the course of the pandemic, a parallel debate has been developing around data security and privacy. Much of the discussion has centred on legal provisions and frameworks to guide an era of development that treats the collection, use and exchange of personal data as a key driver of digital innovation and surveillance. Rising concerns about privacy have contributed to important regulations prioritising the security and privacy of personal data – with the EU’s General Data Protection Regulation (GDPR) perhaps being the most well-known and influential. The GDPR is also arguably one of the most protective of individuals’ personal data rights, including those of the most vulnerable, as it places responsibility on organisations as well as governments.

But more needs to be done, and data citizenship is an important mechanism for addressing the gaps that remain. Consider the extent to which informed consent is meaningful: for many who rely heavily on technological platforms for their livelihoods, not agreeing to the terms and conditions associated with the use of personal data can be very costly. Making sense of lengthy notification letters full of legal clauses can also feel like a difficult and futile pursuit, even when organisations have done their best to present such statements in a ‘concise, transparent, intelligible and easily accessible form, using clear and plain language’ (GDPR, Article 12). What is missing for many individuals is a culture of talking about data collection and use that provides sufficient context for them to make sense of such clauses. ‘Data citizenship’, referring to citizens’ engagement with understanding and making decisions about data, can be a useful intervention in empowering citizens, including those who are most vulnerable.

Platform capitalism and online harms

Organisations as well as individuals have thrived on technological platforms over the years. From the days of women setting up blog shops on the World Wide Web in the 1990s and early 2000s to individuals using YouTube, Spotify and NFTs to generate revenue and grow their own influence networks today, platforms have afforded many opportunities to those who might otherwise be excluded from participating in the global economy. The market logics governing the growth and operations of technological platforms form the basis of platform capitalism, in which people use these platforms to define and develop their own work and livelihoods.

While platforms offer many benefits, platform capitalism is driven by for-profit motives and premised on individuals connecting and sharing data aggressively. Because of this market logic, concerns about the security and privacy of one’s data can end up taking a backseat. The same platforms that give individuals opportunities to work for themselves also give malicious actors and covert networks opportunities to engage in disinformation, manipulation, scams and the dissemination of harmful content. Youths are among the populations most vulnerable to these activities, as they are highly dependent on digital platforms in their daily lives. When the use of platforms is dominated by market logic, youths are often left alone to deal with the harms they experience online.

Platform cooperativism is often discussed as a solution to the problems associated with platform capitalism. It involves organisations and individuals working together to sell goods and services based on goals and principles developed as a community. Businesses and entities operating under such guidelines are known as platform co-ops, and they have been floated as a way of addressing inequalities by involving the most vulnerable in a more equitable manner. Platform co-ops have emerged across a variety of industries and taken a variety of forms – which is hardly surprising, given the participatory principles governing their development. But platform cooperativism is a work in progress, and these organisations are not without challenges. Governance of platform co-ops is usually complex and involves managing divergent interests, and the needs of the most vulnerable groups may still be excluded when those interests are being balanced. Nevertheless, platform co-ops give individuals greater opportunities to have input into and control over their privacy and to define what is fair to them. They also create communities individuals can turn to when things go wrong.

To conclude, while platform cooperativism is not without its challenges, it offers a community-centric approach to the development and use of platforms. It is not about trying to change platforms or introduce new policies to govern platforms. It is about making communities the enterprise – through collaboration and deliberation.

Thumbnail image credits: Kabiur Rahman Riyad on Unsplash.


About the Author

Natalie Pang

Natalie Pang is Senior Lecturer and Deputy Head at the Communications and New Media Department, and Principal Investigator at the Centre for Trusted Internet and Community, both at the National University of Singapore. Her research and teaching centre on the internet and society, and digital humanities.
