How do we manage the 'extreme risks' posed by advanced AIs?

Publicly released: International
Image credit: CC-0. https://creazilla.com/nodes/1643906-byte-bits-computer-illustration

We've heard warnings that the rapid development of artificial intelligence (AI), particularly of generalist AI systems that match or exceed human abilities, poses extreme risks to humanity. But how do we manage those risks? International researchers say there is currently no consensus, so they recommend directions for proactive and adaptive governance, call on big tech and public funders to invest more in risk assessment and mitigation, and encourage governments and global legal institutions to enforce standards that prevent AI misuse. Advanced AI systems pose grave risks to society, they say, including amplifying social injustice, eroding social stability, enabling large-scale cybercrime, and facilitating automated warfare, customised mass manipulation, and pervasive surveillance. Perhaps the biggest risk of all is losing control of autonomous AIs altogether. “There is a responsible path – if we have the wisdom to take it,” the authors say.

Media release

From: AAAS

Managing extreme AI risks amidst rapid technological development

Although researchers have warned of the extreme risks posed by rapidly developing artificial intelligence (AI) technologies, there is a lack of consensus about how to manage them. In a Policy Forum, Yoshua Bengio and colleagues examine the risks of advancing AI technologies – from social and economic impacts and malicious uses to the possible loss of human control over autonomous AI systems – and recommend directions for proactive and adaptive governance to mitigate them. They call on major technology companies and public funders to invest more – at least one-third of their budgets – in assessing and mitigating risks, and on global legal institutions and governments to enforce standards that prevent AI misuse. “To steer AI toward positive outcomes and away from catastrophe, we need to reorient. There is a responsible path – if we have the wisdom to take it,” write the authors.

Technology companies worldwide are racing to develop generalist AI systems that match or exceed human abilities across many critical domains. However, alongside advanced AI capabilities come societal-scale risks that threaten to amplify social injustice, erode social stability, enable large-scale cybercriminal activity, and facilitate automated warfare, customized mass manipulation, and pervasive surveillance. Among these harms, researchers have also warned of the potential loss of control over autonomous AI systems, which would render human intervention ineffective. Bengio et al. argue that humanity is not on track to handle these risks and that, compared with the effort devoted to making AI systems more powerful, very few resources are invested in ensuring their safe and ethical development and deployment. To address this, the authors outline urgent priorities for AI research and development and for governance.

Attachments

Note: Not all attachments are visible to the general public. Research URLs will go live after the embargo ends.

Research: AAAS, Web page
Journal/conference: Science
Research: Paper
Organisation/s: University of Oxford, UK; Quebec AI Institute, Canada; Université de Montréal, Canada
Funder: No information provided.