Expert Reaction
These comments have been collated by the Science Media Centre to provide a variety of expert perspectives on this issue. Feel free to use these quotes in your stories. Views expressed are the personal opinions of the experts named. They do not represent the views of the SMC or any other organisation unless specifically stated.
Associate Professor Vitomir Kovanović is the Associate Director (Research Excellence) of the Centre for Change and Complexity in Learning (C3L), UniSA Education Futures
The Doomsday Clock is a constant reminder of the potential consequences that inappropriate use of technology can have on humanity and Planet Earth.
While the discussion focuses on whether AI can turn evil and destroy humanity, there are other, less discussed dangers. Chief among them is the danger of over-relying on technology and the resulting lack of competence, especially among those making important decisions.
While AI can supplement and enhance human capabilities, it also provides opportunities for unprecedented cost-cutting by using a less-skilled workforce armed with AI tools. For example, rather than investing in high-quality education, we may instead rely on future professionals to get their insights from AI systems. While such an approach will work in most cases, the potential consequences can be catastrophic when AI makes a (rare) mistake. The more AI tools resemble humans and human decision-making, the more prone they will be to such errors.
Forty years ago, Stanislav Petrov was a Soviet duty officer when the Oko (eye) early warning system showed a nuclear missile heading from the US toward the USSR, followed by four more. Petrov dismissed this as a false alarm; his common sense told him that a real nuclear attack from the US would involve hundreds of missiles. Had he reported the launches as genuine, the world as we know it would likely not exist. While initially reprimanded for not following protocol, he later received numerous accolades for saving the world.
The question we need to ask is: what will happen if a similar incident occurs in a few years, when AI is ubiquitous and embedded in all our critical systems? Will we have enough common sense to stand by our own knowledge and understanding against AI recommendations and suggestions?
Associate Professor Sven Teske is Research Director at the Institute for Sustainable Futures, University of Technology Sydney
Climate change is the world’s largest threat – it changes the living conditions of all living creatures and plants on this planet. Historically, science and engineering have contributed to this crisis, but in the past decade, scientists and engineers have worked hard to avert it. Our One Earth Climate Model summarises the technologies and measures required to limit mean temperature rise to 1.5°C with a likelihood of over 60%. We know what to do to maintain and improve living conditions for our children, yet a handful of irresponsible businesses and politicians block its implementation.
Tilman A Ruff AO is an Associate Professor in the Nossal Institute for Global Health, School of Population and Global Health at the University of Melbourne. He is an immediate past Co-President of the International Physicians for the Prevention of Nuclear War
In 2024 the Doomsday Clock is still at 90 seconds to midnight, 'teetering near the edge' of unprecedented danger.
Treaties that have constrained nuclear weapon numbers and types are dead or dying. Nuclear build-up, massive and growing investments in new, more dangerous weapons, increasing threats to use nuclear weapons by irresponsible leaders, and wars in Ukraine and the Middle East involving one or more nuclear-armed states put all humanity – along with most living things – in mortal danger.
Nuclear-armed states are building nuclear weapons and delivery systems designed to last until the end of this century, belying their obligation to negotiate a world free of nuclear weapons. They are barely even talking about talks. We face a three-way arms race between China, Russia and the US, exacerbated by accelerating military uses of artificial intelligence. The only thing the figleaf of nuclear deterrence is reliably deterring is nuclear disarmament.
This bleak place we find ourselves in is not hopeless, but it requires action by governments and people that comes much closer to matching the immensity and urgency of the danger.
Our best hope lies in the 2017 Treaty on the Prohibition of Nuclear Weapons (TPNW), with Indonesia and Brazil poised to join the almost half of the world's nations already party to this landmark treaty.
The TPNW for the first time makes the worst weapon of mass destruction illegal in international law. It has changed nuclear debates and is stigmatising nuclear weapons and threats. It has already stimulated financial institutions with over a trillion dollars in funds to divest from companies profiting from building nuclear weapons. It is bringing to life long-overdue assistance for the victims of nuclear weapons use. And it provides the only internationally agreed framework for verifiably and irreversibly eliminating nuclear weapons. As Labor has committed to do, joining this treaty is likely the best immediate step responsible nations can take to wind back the Doomsday Clock.
Dr Kristin Alford is a Futurist and Director of In Situ Foresight and also an Adjunct Industry Professor at University of South Australia (UniSA)
The accelerated countdown of the Doomsday Clock is a potent reminder that our current systems are no longer serving us. Yet it also acts as a useful reminder that the systems we operate within have changed in the past and that these will change in the future. We have the ability to imagine alternatives and create change.
While the Doomsday Clock is a measurement of failure, we know from our work engaging with young people that dystopic visions are ultimately unsustainable. It’s important to recognise reality, but this needs to be the start of a longer conversation: one that inspires people to identify different futures, create alternative pathways and collectively enable large-scale change.
Our new exhibition at MOD., called BROKEN, is about exactly this - recognising that our current ways feel riddled with ethical and environmental failures, but also understanding how, in response, to harness the power of imagination, agency, and active hope.
Professor Paul Salmon is co-director of the Centre for Human Factors and Sociotechnical Systems at the University of the Sunshine Coast
While it hasn’t shifted, the latest Doomsday Clock unveiling provides us with another timely reminder that society faces a growing set of complex global risks.
These risks are well known; however, the present global response remains inadequate.
Though artificial intelligence may dominate the discourse (as a potential existential threat, and perhaps rightly so), we should not be distracted from the many other issues impacting humanity on a global scale: climate action failure, extreme weather, the cost-of-living crisis, infectious diseases, geopolitical conflict, and human environmental damage, to name only a few.
The big question, of course, is what can be done? The issues are complex and interrelated, and it is clear that we need to do more, both in understanding their causes and in implementing appropriate, long-term solutions. The ongoing collective failure of governments, policy makers, corporations and other key stakeholders to respond effectively can continue no longer.
Enhancing our capacity to manage global risks is therefore society’s most critical challenge. I hope the fact the clock remains so close to midnight will provide some impetus for immediate and necessary action.
Dr Erica Mealy is a Lecturer and Program Coordinator in Computer Science at the University of the Sunshine Coast
We can expect technology to continue to have a visible impact on future Doomsday Clock settings.
Not only have we seen the rise of ChatGPT, which has been described as a potential existential threat, but we've also seen an enormous uptick in the resources these technologies consume. One such issue is that training these LLMs takes an enormous amount of electricity and water for cooling.
One example is the training of Microsoft Bing's GPT model, which used tremendous volumes of water for cooling.
With the data centres involved set amongst cornfields outside Des Moines, Iowa, that training diverted water that might otherwise have been used in agriculture; in a world already struggling to feed its population, sacrificing such water for the sake of advancing technology represents a further threat.
Furthermore, issues of grid capacity related to coping with temperature extremes across the world are further compounded by adding AI training and increased electric vehicle ownership. It is clear the need for sustainably sourced power and carbon offsets has never been greater.
Dr Dyann Ross is a Senior Lecturer in Social Work at the University of the Sunshine Coast
Our hearts know what we often can’t admit to ourselves or accept from the evidence of eminent scientists and world leaders. Our planet is under extreme pressure as evidence mounts for the interlinked impacts of ecological fragility due to climate change.
We know time is running out, so this Doomsday Clock report is both unsurprising and terribly shocking. The idea of broken-heartedness explains how we experience the pain of eco-anxiety for the planet, and all other harms and losses, in a cumulative and profound way.
Broken-heartedness is caused by lovelessness, in this instance for the planet. If you love the planet, you don’t cause environmental degradation, mass species extinction, or large-scale unsustainable farming of animals for human consumption, nor do you fail to act to redress the harms occurring.
My theory of love provides ways for people to understand how the Doomsday Clock remains so perilously close to midnight. It explains how broken-heartedness for people, animals and the planet embodies the travesties against safety, wellbeing and sustainability, and the failure to uphold the equal moral worth of all beings.
We need to urgently and collectively listen to what our broken hearts know.