By Maleeha Lodhi
The global debate about the disruptive impact of new technology has intensified especially with advances in artificial intelligence (AI). There is no doubt modern technology has been a force for good and responsible for multiple positive developments – empowering people, advancing medical and scientific knowledge, increasing productivity, improving lives and transforming societies. Technological developments have helped to drive unprecedented social and economic progress.
But the fourth industrial revolution has also involved the evolution of advanced technologies that are creating disruption, new vulnerabilities and harmful repercussions, which are not yet fully understood, much less managed or regulated. A digitized world faces the challenge of cybersecurity as threats mount across the globe. Data theft and fraud, cyberattacks, and breaches of critical systems such as electricity networks and financial markets are all among the rising risks.
Technology is also fueling a wave of disinformation around the world. Disinformation and fake news have of course long been around, but technology has created new risks with its expanded capabilities to disseminate and amplify false information. Several recent reports by international organizations and global consultancy firms identify misinformation and disinformation as top risks for the year ahead. The World Economic Forum’s Global Risks Report 2024 finds, on the basis of its extensive annual survey, that the most ‘severe global risk’ over the next two years is the use of disinformation and misinformation by domestic and foreign actors, which will further sow social and political divisions and deepen polarization. According to the report, AI-driven disinformation could impact the record number of elections due this year across the world. This in turn could undermine the legitimacy of newly elected governments.
In a similar vein, the Eurasia Group’s Top Risks Report for 2024 says AI-created and algorithm-driven disinformation poses a threat in a year that will see elections in many countries. This could “influence electoral campaigns, stoke division, undermine trust in democracy, and sow political chaos on an unprecedented scale.”
Both reports emphasize how the largely unregulated AI landscape heightens the risks and that advances in artificial intelligence are outpacing governance efforts. According to the Eurasia Group’s assessment, “the result is an AI Wild West resembling the largely ungoverned social media landscape, but with greater potential for harm.” Similarly, the WEF report says that “a globally fragmented approach to regulating frontier technologies” is doing little to “prevent the spread of its most dangerous capabilities and, in fact, may encourage proliferation.”
Arguably the most worrying aspect of AI is its military application. The threat posed by autonomous weapons systems is a case in point, as they can take decisions and even strategy out of human hands. They can independently target and neutralize adversaries and operate without the benefit of human judgment or thoughtful calculation of risks. AI is already fueling an arms race in lethal autonomous weapons in a new arena of superpower competition.
One of the most insightful books to warn of the dangers ahead was published two years ago, titled ‘The Age of AI: And Our Human Future,’ co-authored by Henry Kissinger, Eric Schmidt and Daniel Huttenlocher. The authors (Schmidt is Google’s former CEO) argued that AI had ushered in a new period of human consciousness, which “augurs a revolution in human affairs.” But this, they posit, can lead to human beings losing the ability to reason, reflect and conceptualize. It could in fact “permanently change our relationship with reality.”
Their discussion of the military uses of AI and how it is used to fight wars is especially instructive. AI would enhance conventional, nuclear and cyber capabilities in ways that would make security relations between rival powers more problematic and conflicts harder to limit. The authors say that in the nuclear era, the goal of a national security strategy was deterrence. This depended on a set of key assumptions – the adversary’s known capabilities, recognized doctrines and predictable responses. Their core argument about the destabilizing nature of AI weapons and cyber capabilities is that their value and efficacy stems from their “opacity and deniability and in some cases their operation at the ambiguous borders of disinformation, intelligence collection and sabotage … creating strategies without acknowledged doctrines.” They see this as leading to calamitous outcomes. They note the race for AI dominance between China and the US, which other countries are likely to join. AI capabilities are challenging the traditional notion of security and this intelligent book emphasizes that the injection of “nonhuman logic to military systems” can result in disaster.
Despite the risks and dangers of such new technologies, there is no international effort aimed at managing them, much less regulating their use. There have been national efforts to regulate AI, but these seriously lag behind fast-paced developments. There is certainly no move by big powers for any dialogue on cyber and AI arms control. If the global Internet can’t be regulated and giant, unaccountable social media companies continue to rake in excessive profits, there is even less prospect of mitigating the destabilizing effects of AI-enabled military capabilities.
(Maleeha Lodhi is a former Pakistani ambassador to the US, UK & UN. Twitter @LodhiMaleeha)