We need to be wary of disruptive tech in development.

A white man in a suit stands on a stage, pointing at a big screen in a dark room.

Unveiling a new technology. Credit: Gorodenkoff on Shutterstock.

Disruptive commercial technologies seem to be exploding at an ever-faster rate and are omnipresent. New buzzwords like Web3, the metaverse, ChatGPT, etc. are popping up left, right, and center. There is often a tendency in the sustainable development sector to over-hype new disruptive commercial tech and to make exaggerated claims about its potential to transform the lives of the world’s poorest people.

Disruptive technologies are primarily driven by commercial innovation, which is, in essence, experimentation. That is no bad thing in and of itself, and in commercial market-based environments, ‘disruption’ is a key driver of the immensely rapid digital transformation our societies have experienced over the last few decades. OpenAI’s ChatGPT and other large language models (LLMs) are excellent examples of disruptive technologies that will undoubtedly transform how we access, generate, and consume information, media, and knowledge in the years to come.

Yet, in the rush to apply disruptive tech to problems within the sustainable development sector, we often exaggerate the benefits and underestimate the risks. For instance, who remembers blockchain? Wasn’t blockchain meant to revolutionize life for the world’s poorest by being the “key that unlocks the SDGs”? Instead, blockchain-based technologies are consuming inordinate amounts of energy and fueling illegal financial transactions. Today, we can replace ‘blockchain’ with artificial intelligence and machine learning (AI/ML). In a few years, it will undoubtedly be something else, like quantum computing. It is likely that the distortive effects of highly disruptive technologies will remain the same.

Thanks for reading The Data Values Digest! Subscribe for free to support our work.

The problem with disruptive tech in the sustainable development space is that the world’s poorest places are not an appropriate testing ground for this experimentation. Focusing attention and resources on testing disruptive technology means fewer resources allocated to the time-tested drivers of sustainable, equitable development.

Misaligned incentives: Does disruptive tech contribute to, or distract from, sustainable development?

There’s a fundamental disconnect between the motives of commercially developed technologies and the aims of those working on digital transformation and the data revolution within the sustainable development community. Much commercial digital tech is primarily designed to monetize data in various ways. Conversely, public policy professionals working in the data revolution for sustainable development aim to create positive impacts for the world’s poorest and the planet. We need to stop pretending that these disparate incentives align—they usually don’t. This means that we must be extra careful when we think about how extremely powerful commercial tech products might apply in low-income and humanitarian contexts. We have a duty to those we serve to be conservative and risk-averse, the exact opposite of what often happens with disruptive tech, which is to dive head-first into a process of reckless experimentation.


Let’s look at some of the issues. First, commercial tech developed for a high-income market is often simply not suited for application in a development context. The private tech world has always operated on the “move fast and break things” mantra. That approach works efficiently in the dog-eat-dog world of venture capital-backed company building but does not align with the duty to do no harm that we have as development and humanitarian professionals.

Secondly, commercial tech developed in a particular place doesn’t travel well. This is borne out in what are now well-established technologies; biometric ID systems come to mind here. Facial recognition systems trained on predominantly white faces in the U.S. (unsurprisingly) don’t work well for non-white faces. We should not assume that the broad application of a technology means it is well-suited to all contexts. Such problems stem not only from the characteristics of new tech itself: supposedly harmless technology has been used by governments for nefarious ends in all corners of the world.

Disruptive tech, therefore, is often problematic at a fundamental level. Commercial solutions that are deployed at scale are often technologies or data analytics processes that have been designed with profit in mind, not socioeconomic impact. Moreover, the dominance of a few firms in the tech world means that when new technologies do emerge, they are increasingly ensnared in geopolitical suspicion, competition, and rivalry (China’s Great Firewall and U.S. TikTok bans, for instance).

What does all this mean for the sustainable development community, and how do we deal with it?

The obvious criticism of everything I have written so far is that new tech is a fact of life and we cannot ignore it. I absolutely agree. The point, however, is to start thinking about how disruptive tech might apply to our work from a position of caution, critical reflection, and interrogation of motives and business models, with careful consideration of where there might be opportunities to apply new tech pragmatically. We need to apply new tech in ways that minimize disruption to fledgling and fragile, or underfunded and understaffed, institutions and contexts rather than maximize it, as the private sector does.


Based on my ten or so years working at the intersection of sustainable development and humanitarian action, digital and data policy, and human rights law, there are a few ‘mental check’ questions we can ask ourselves before jumping headfirst into a new tech experiment. As a starting point, assessing how appropriate a new technology might be for inclusion in a sustainable development project could involve thinking through the following factors:

  1. Who are the key stakeholders involved in developing the new technology/application/tool? Are they commercial entities or are they actors within the sustainable development sector? What is their reputation in this space? Are they local actors likely to develop a technology that is genuinely useful for their context (for example, like M-Pesa)?

  2. What are the incentives behind the new technology/tool? Is it designed to generate profit or to achieve positive socio-economic impact? If it is primarily profit-driven, what are the implications of applying it in the sustainable development space?

  3. Is the new technology/tool controversial in any way? Politically, ethically, culturally?

  4. What data does the new technology rely on to operate? Who controls that data? Who is contributing data, and what are they getting in return (if anything)?

  5. What are the risks of using this new technology/tool in a development context? What are the requisite legal and ethical safeguards that are needed to make it safe? What are the digital and data skill sets and capacities that are needed to make it effective? What datasets and digital infrastructure are needed to make it reliable? Do any of these identified requisites exist in the context you want to implement the tech?

The goal here is to think through these questions to assess the costs and benefits, risks and rewards of engaging with emerging disruptive technologies. What this boils down to is the need to learn to moderate our response to disruptive commercial technology. We need to establish peer learning communities to discuss and unearth how new technologies and tools are likely to disrupt our sector, and what we can do to mitigate negative impacts. In this regard, self-organizing communities such as MERL Tech’s Natural Language Processing Community of Practice, managed by Linda Raftree, and the Data Values Project are paving the way in ensuring that there are processes in place to mitigate the harms that accompany digital tools.


Tom Orrell is founder and managing director of DataReady, an organization that helps the sustainable development and humanitarian sectors navigate the complexities of data governance.
