Three reasons we should be worried about AI right now

Publicly released:
Australia; International
Photo by Google DeepMind on Unsplash

A group of doctors and public health experts, including an Australian, have joined calls to suspend development of artificial intelligence until sufficient regulations are in place. The researchers cite three major ways AI currently poses a risk to public health and safety. The first is its ability to ramp up surveillance capacity, which can be used to manipulate consumer choices, spread misinformation, deepen social division and even enable government oppression. The second is the current and potential development of military weapons that can kill entirely without human supervision, and the third is the loss of jobs that will come as AI allows the automation of more types of work.

Media release

From: The BMJ

BMJ GLOBAL HEALTH

Externally peer reviewed? Yes
Evidence type: Analysis; Opinion
Subjects: People

Doctors and public health experts join calls for halt to AI R&D until it’s regulated

Certain types and applications pose “existential threat to humanity,” they warn

An international group of doctors and public health experts have joined the clamour for a moratorium on AI research until the development and use of the technology are properly regulated.

Despite its transformative potential for society, including in medicine and public health, certain types and applications of AI, including self-improving general purpose AI (AGI), pose an “existential threat to humanity,” they warn in the open access journal BMJ Global Health.

They highlight three sets of threats associated with the misuse of AI and the ongoing failure to anticipate, adapt to, and regulate the transformational impacts of the technology on society.

The first of these comes from the ability of AI to rapidly clean, organise, and analyse massive data sets consisting of personal data, including images.

This can be used to manipulate behaviour and subvert democracy, they explain, citing its role in the 2013 and 2017 Kenyan elections, the 2016 US presidential election, and the 2017 French presidential election.

“When combined with the rapidly improving ability to distort or misrepresent reality with deep fakes, AI-driven information systems may further undermine democracy by causing a general breakdown in trust or by driving social division and conflict, with ensuing public health impacts,” they contend.

AI-driven surveillance may also be used by governments and other powerful actors to control and oppress people more directly, an example of which is China’s Social Credit System, they point out. 

This system combines facial recognition software and analysis of ‘big data’ repositories of people’s financial transactions, movements, police records and social relationships.

But China isn’t the only country developing AI surveillance: at least 75 others, “ranging from liberal democracies to military regimes, have been expanding such systems,” they highlight.

The second set of threats concerns the development of Lethal Autonomous Weapon Systems (LAWS), which are capable of locating, selecting, and engaging human targets without the need for human supervision.

LAWS can be attached to small mobile devices, such as drones, and could be cheaply mass produced and easily set up to kill “at an industrial scale,” warn the authors. 

The third set of threats arises from the loss of jobs that will accompany the widespread deployment of AI technology, with estimates of job losses ranging from tens of millions to hundreds of millions over the coming decade.

“While there would be many benefits from ending work that is repetitive, dangerous and unpleasant, we already know that unemployment is strongly associated with adverse health outcomes and behaviour,” they point out.

To date, increasing automation has tended only to shift income and wealth from labour to the owners of capital, contributing to inequitable wealth distribution across the globe, they note.

“Furthermore, we do not know how society will respond psychologically and emotionally to a world where work is unavailable or unnecessary, nor are we thinking much about the policies and strategies that would be needed to break the association between unemployment and ill health,” they highlight.

But the threat posed by self-improving AGI, which, theoretically, could learn and perform the full range of human tasks, is all-encompassing, they suggest.

“We are now seeking to create machines that are vastly more intelligent and powerful than ourselves. The potential for such machines to apply this intelligence and power, whether deliberately or not, in ways that could harm or subjugate humans is real and has to be considered.

“If realised, the connection of AGI to the internet and the real world, including via vehicles, robots, weapons and all the digital systems that increasingly run our societies, could well represent the ‘biggest event in human history’,” they write.

“With exponential growth in AI research and development, the window of opportunity to avoid serious and potentially existential harms is closing. The future outcomes of the development of AI and AGI will depend on policy decisions taken now and on the effectiveness of regulatory institutions that we design to minimise risk and harm and maximise benefit,” they emphasise. 

International agreement and cooperation will be needed, as well as the avoidance of a mutually destructive AI ‘arms race’, they insist. And healthcare professionals have a key role in raising awareness and sounding the alarm on the risks and threats posed by AI.

“If AI is to ever fulfil its promise to benefit humanity and society, we must protect democracy, strengthen our public-interest institutions, and dilute power so that there are effective checks and balances. 

“This includes ensuring transparency and accountability of the parts of the military–corporate industrial complex driving AI developments and the social media companies that are enabling AI-driven, targeted misinformation to undermine our democratic institutions and rights to privacy,” they conclude.

Attachments


Research: The BMJ, web page (URL will go live after the embargo ends)
Journal/conference: BMJ Global Health
Research: Paper
Organisation/s: London School of Hygiene & Tropical Medicine, UK
Funder: The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.