Image by Gerd Altmann from Pixabay

EXPERT REACTION: Australia signs international AI declaration - what next?

Not peer-reviewed: This work has not been scrutinised by independent experts, or the story does not contain research data to review (for example, an opinion piece). If you are reporting on research that has yet to go through peer review (e.g. conference abstracts and preprints), be aware that the findings can change during the peer review process.

Australia signed up to the Bletchley Declaration, an international agreement on the regulation of AI, at a summit in London late last week. A release from the Department of Industry on Friday said that signing the declaration "signals our commitment to work with the international community to ensure AI is developed with the right guardrails in place". Aussie experts comment below on the declaration and what needs to happen next.

Organisation/s: Australian Science Media Centre

Funder: N/A

Expert Reaction

These comments have been collated by the Science Media Centre to provide a variety of expert perspectives on this issue. Feel free to use these quotes in your stories. Views expressed are the personal opinions of the experts named. They do not represent the views of the SMC or any other organisation unless specifically stated.

Professor Karin Verspoor is the Dean of the School of Computing Technologies, RMIT University

The areas where AI may pose the greatest risks are also areas where it may have the greatest benefits, including in health, where AI has the potential to support highly personalised medical care, enable deeper understanding of disease, and identify new potential treatments. It is apparent that a scientific, evidence-based approach to the use of AI in such highly sensitive contexts is necessary for us to have confidence in the efficacy and safety of particular tools in specific use cases. As such, I support the Bletchley Declaration; it represents an important step towards establishing standards for the use of AI. Coupled with this, it is important to establish mechanisms for reporting and surveillance of the harm that AI systems might do when deployed in real-world settings, analogous to what we already do for medical interventions such as drugs.

Last updated: 07 Nov 2023 9:54am
Declared conflicts of interest:
Karin is the Director of BioGrid Australia
Dr Erica Mealy is a Lecturer in computer science in the School of Science, Technology and Engineering at the University of the Sunshine Coast

While the Bletchley Declaration is a step in the right direction, it stops short of actually naming some of the potential harms. When talking about the safety of AI, it's important that we discuss all of the meaningful ways that AI can change our lives; for instance, the declaration talks about the need for different cultural lenses but stops short of calling out the inherent cultural biases that are evident in our large language and image-generation models.

The democratisation of voices online is taking away the meaningful weight of expertise, and as these models continue to proliferate unmoderated, we will continue to see biases reinforced, and the loudest voices, rather than the most expert, become the dominant truths.

It is concerning that the agreement advocates codes of conduct, which have been inadequate for other technological problems, e.g. data privacy in the European Union. The EU's data privacy protections are strongly founded on legislation and regulation, which are far better levers on technology innovation. The mere fact that these tools were made available without ethical and safety oversight, and then called potentially ‘extinction-level threats’ by the very companies that released them, shows that perhaps we need a stronger response.

Last updated: 06 Nov 2023 1:57pm
Declared conflicts of interest:
None declared.
Rebecca L. Johnson is a PhD Researcher in the Ethics of Generative AI at The University of Sydney

AI helped lift John Lennon’s voice from the piano backing of a 1970s recording to create a new Beatles song, “Now and Then”; the same week brought us two important AI governance documents. In the US, the Biden administration released an “Executive Order on the safe, secure, and trustworthy development of AI”. The UK's "Bletchley Declaration", under Sunak's direction, took a different route. Which of these Australia decides to follow will shape our future AI governance.

Whilst both documents seek to ensure a better AI-enabled world, their approaches couldn’t be more different. The US approach is comprehensive and nuanced, reflecting the perspectives of diverse AI experts: it exhibits a strong focus on human-centric principles, recognising AI as a deeply human issue with wide-reaching socio-political implications. Meanwhile, the UK's stance is steeped in existential-risk rhetoric, seemingly echoing the concerns of a particular faction frequently labelled the AI-safety community, which tends to concentrate on the long-term implications of potential artificial general intelligence.

These documents echo the AI research community's polarities: the immediate effects ("the Now") and the potential future risks ("the Then").

Australia's choice in AI governance mirrors the task of isolating Lennon's voice from its piano backdrop—distinguishing immediate human-centric concerns from the distant hum of existential risks. We stand at a juncture: to tune into the 'Now' with the US's inclusive approach or to anticipate the 'Then' through the UK's speculative lens.

Scorecard:
The US Exec Order – 4 out of 5 stars
The Bletchley Declaration – 2 out of 5 stars
Now and Then – 4.5 out of 5 stars

Last updated: 07 Nov 2023 5:42pm
Declared conflicts of interest:
None declared.
Professor Simon Lucey is Director of the Australian Institute for Machine Learning (AIML) at the University of Adelaide

It is vital that Australia is an active player in shaping global AI policy, and the Bletchley Declaration is a promising step in that direction.
 
Some of the most challenging fundamental problems in AI research are directly related to how it can be deployed in a safe and trusted manner. The Bletchley Declaration reinforces the need for our country to legislate where we need to, and invest where we must, to ensure the benefits of the AI revolution for all Australians.
 
Australia’s university-led AI research sector is truly world-class, and a real asset for the country.

Last updated: 06 Nov 2023 1:55pm
Declared conflicts of interest:
None declared.
Professor Johanna Weaver is Director of the Tech Policy Design Centre at the Australian National University

The Bletchley Park Declaration is an important international milestone. It brought together the world's largest artificial intelligence developers and extracted a commitment to building and using AI safely. But it is lacking in urgency.
 
The Declaration recognised the need to better understand the risks and implement safeguards. But it stops short of articulating the specific safeguards, or a timeline for action.
 
The Bletchley Declaration was the outcome of Day 1 of the UK AI Safety Summit. It was signed by 29 countries, including the US and China.
 
However, two of the most significant developments resulted from the second day (which had a much smaller group of 11 like-minded countries and several major AI companies):

The first was the Statement on Testing, committing 11 countries (including Australia) to independent government testing of frontier models before and after they are deployed. This applies only to frontier models (the most powerful AI systems). It is significant in that it signals an end to tech companies marking their own homework, but it lacks specifics on implementation.

The second was the announcement by Prime Minister Rishi Sunak, at the close of Day 2, that Professor Yoshua Bengio, a leading Canadian AI researcher, will lead the first-ever frontier AI 'State of the Science' report.

This mirrors the approach of the Intergovernmental Panel on Climate Change (IPCC). It aims to develop an independent scientific evidence base upon which politicians and policymakers can make decisions. It is unclear whether this was supported by participants from Day 1 (including China) or only Day 2 (the smaller like-minded group). Timelines for the first report were not disclosed.
 
By design, the UK AI Safety Summit focused on frontier models (the most powerful AI systems), which, if misused, can potentially cause existential harm. This focus is warranted. But it must not come at the expense of addressing the urgent need for safety, fairness, and accountability in artificial intelligence systems that may be less powerful but are already being rolled out in our societies and economies.

Last updated: 06 Nov 2023 1:40pm
Declared conflicts of interest:
None declared.
Dr Nataliya Ilyushina is a Research Fellow at the Blockchain Innovation Hub and ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at RMIT University.

Australia's endorsement of the Bletchley Declaration breaks new ground in the global AI dialogue, championing inclusivity as a key to unlocking technology's true potential.
 
The Declaration calls early in the text for AI to be embraced “in an inclusive manner in our countries and globally”, a progressive acknowledgment that stringent AI policies can exacerbate social inequalities and constrict the diversity of opportunities.
 
This pivotal stance comes at a time when the focus has been intensely on the cybersecurity risks of AI, often sidelining the critical risks associated with its slow adoption.
 
Over-regulation, intended to safeguard against digital threats, can inadvertently create hurdles that marginalise vulnerable groups, hinder less developed countries, stifle small businesses and threaten competition.
 
AI adoption by small businesses, and the productivity growth it can deliver, are especially critical in the Australian economic context.

Given the considerable research and growing recognition of digital inclusion's importance in Australia, the logical progression is to strive for AI inclusion, ensuring equal opportunities for access and usage of AI across various communities.
 
The evolving consensus is that the next strategic move should be to advance AI inclusion, promoting equal access and the ability to leverage AI across all sectors of society.

Last updated: 06 Nov 2023 1:37pm
Declared conflicts of interest:
None declared.
Dr Dennis Desmond is a lecturer for the University of the Sunshine Coast in Cyberintelligence and Cybersecurity

The Bletchley Declaration provides a good starting foundation for what will be a contentious and difficult issue to address. While difficult, the inclusion of recognised cyber adversaries such as China will be important from both a transparency perspective and a regulatory perspective. Though fraught with the potential for creating a divisive and antagonistic environment in the future (read: the Balfour Declaration), it could also result in a longer-term agreement amongst high-technology countries to control access, development and integration.

But, as we’ve seen with the Non-Proliferation Treaty and other nuclear treaties, it also has the potential to suppress technological development in some countries and restrict its use to a select number of countries. Through its relationships with other AI leaders, and the innovative development occurring in its academic sector, Australia must now evaluate how its integration of AI into the medical, financial, cyber, and other sectors will be addressed through ethical controls and oversight.

Treating AI as presenting the same potential threat to the world’s citizens as nuclear energy, and considering how the technology could be misused or positively integrated into society without causing harm, will be vital to the success of Bletchley and to the signatories’ ability to improve all facets of life through AI integration.

Last updated: 06 Nov 2023 1:36pm
Declared conflicts of interest:
None declared.
Professor Paul Salmon is co-director of the Centre for Human Factors and Sociotechnical Systems at the University of the Sunshine Coast

There is no doubt that the signing of the Bletchley declaration is a critical and watershed moment in the evolution of AI, and for AI safety generally. The risks of untrammelled AI development are well known, so the declaration represents a useful and positive first step in aligning nations around AI and AI safety.

Within the declaration, there are critical points made around human centricity, governance and regulation, and risk classification, all of which represent areas where we currently lack appropriate knowledge, methods, and tools. The notion of a shared responsibility for AI safety also comes through strongly – this has been lacking in previous AI safety efforts. This all bodes well for future AI safety efforts.

It should be noted, however, that there is a formidable amount of work to do to ensure that the declaration has the desired impact. There is a lack of detail regarding key aspects such as regulation and the metrics and tools to be used: how we can ensure that different nations apply consistent (and valid) methods when identifying and addressing risks is not yet clear. Exactly how nations will coordinate their efforts is another critical question. Finally, there are questions around how the declaration will be enforced. So whilst it is a positive step, it is clear that this is only the beginning and that there is much work to do.

Last updated: 06 Nov 2023 1:35pm
Declared conflicts of interest:
None declared.
Professor Toby Walsh is Chief Scientist of the AI Institute and Scientia Professor of AI at The University of New South Wales (UNSW), and Adjunct Fellow at Data 61

The Bletchley Declaration will have modest impact. It has helped to build some international consensus around AI risks but has achieved little else. Biden’s presidential executive order on AI risks is likely to have much greater impact. Indeed, PM Sunak can feel some annoyance that President Biden stole his thunder by releasing the executive order just before the UK AI Safety Summit.
 
While Sunak has a talking shop that not all international leaders attend, Biden gets down to business with some concrete initiatives to address AI risks. An executive order will affect how government operates more than how business does, and can always be overturned by an incoming President. Nevertheless, Biden’s executive order catapulted the US from behind the UK and Europe to leading the pack in terms of addressing the potential risks of AI.
 
Australia sadly remains at the back of the pack in responding to the opportunities and risks AI poses. We have, for example, yet to see the federal government’s response to the unprecedented number of submissions to the recent consultation on supporting responsible AI. Over 500 groups and individuals submitted evidence to this inquiry. There is no shortage of ideas for action. There is, however, a distinct lack of action so far from the government, especially in the inadequate level of funding for AI. While other countries are investing billions of dollars in AI, the Australian government has invested just tens of millions of dollars. We have even seen investment announced and then quietly shelved without any dollars being spent. The government seems to understand quantum, and has invested appropriately there. But AI, which is arguably an order of magnitude greater opportunity than quantum, seems to have eluded Australian politicians.

Last updated: 06 Nov 2023 1:34pm
Declared conflicts of interest:
Toby receives funding from the Australian Research Council and google.org (the philanthropic arm of Google)

News for:

Australia
NSW
VIC
QLD
SA
ACT
