EXPERT REACTION: Government proposes mandatory rules for high-risk AI

Publicly released:
Australia; NSW; VIC; QLD; ACT

Today, the federal government announced it is proposing mandatory rules for high-risk AI. The proposal includes a definition of high-risk AI, ten mandatory guardrails, and three regulatory options for mandating those guardrails. Alongside this proposal, the government has also released a Voluntary AI Safety Standard, which gives businesses whose use of AI is high risk practical guidance so they can start implementing best practice immediately. Below, Australian experts comment on these announcements.

Media release

From: Australian Government - Dept of Industry, Science and Resources

The Albanese Government acts to make AI safer

The Albanese Government is making AI in Australia safer with two important announcements today.

Last year the government consulted with the public and industry about AI, and Australians told us they wanted to see stronger regulation.

Businesses asked for clarity on AI regulation so they can confidently seize the opportunities that AI presents.

The Tech Council estimates Generative AI alone could contribute $45 billion to $115 billion per year to the Australian economy by 2030.

That’s why earlier this year the government appointed an AI expert group to guide our next steps.

Their work informed the Government’s Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings which includes the following key elements:

  • A proposed definition of high-risk AI.
  • Ten proposed mandatory guardrails.
  • Three regulatory options to mandate these guardrails.

The three regulatory approaches could be:

  • Adopting the guardrails within existing regulatory frameworks as needed.
  • Introducing new framework legislation to adapt existing regulatory frameworks across the economy.
  • Introducing a new cross-economy, AI-specific law (for example, an Australian AI Act).

Today we’re also releasing a new Voluntary AI Safety Standard with immediate effect.

It provides practical guidance for businesses whose use of AI is high risk, so they can start implementing best practice immediately.

The Standard gives businesses certainty ahead of the implementation of mandatory guardrails.

In step with similar actions in other jurisdictions – including the EU, Japan, Singapore and the US – the Standard will be updated over time to reflect changes in best practice.

This new guidance will help domestic businesses grow, attract investment and ensure Australians enjoy the rewards of AI while managing the risks.

Consultation on the Proposals Paper for Introducing Mandatory Guardrails for AI in High-Risk Settings is open for four weeks, closing 5pm AEST on Friday 4 October 2024.

For more information on the Proposals Paper, including how to have your say, go to consult.industry.gov.au/ai-regulatory-guardrails.

More information on the Voluntary AI Safety Standard is available at industry.gov.au/VAISS.

Quotes attributable to the Minister for Industry and Science Ed Husic

“Australians want stronger protections on AI. We’ve heard that, and we’ve listened.

“Australians know AI can do great things, but people want to know there are protections in place if things go off the rails.

“From today, we’re starting to put those protections in place.

“Business has called for greater clarity around using AI safely and today we’re delivering.

“We need more people to use AI and to do that we need to build trust.”

Expert Reaction

These comments have been collated by the Science Media Centre to provide a variety of expert perspectives on this issue. Feel free to use these quotes in your stories. Views expressed are the personal opinions of the experts named. They do not represent the views of the SMC or any other organisation unless specifically stated.

Dr Aaron Snoswell is a Senior Research Fellow in AI Accountability at the QUT Generative AI Lab, an Associate Investigator at the ARC Centre of Excellence for Automated Decision-Making and Society, and co-lead of the Responsible Data Science and AI Program at the QUT Centre for Data Science

It's fantastic to see the Australian Government taking the next step in AI regulation with the release of the Voluntary AI Safety Standard.

It's so important that AI systems are tested, evaluated, and subject to ongoing monitoring (guardrail #4). We know from the research so far that evaluation can't just be a 'one and done' checkbox before releasing a system: because the AI ecosystem today is developing and changing so rapidly, and the 'technology stack' is so convoluted, best-practice evaluation looks like an ongoing conversation between model developers and other downstream and upstream stakeholders. It's also critical that testing and evaluation processes engage deeply with subject-matter experts. For instance, in my own research I work with experts in gendered hate speech to go beyond generic 'toxicity' benchmarks and quantify misogyny in Large Language Models more precisely and authentically.

Informing end-users (guardrail #6) is critical - especially because the 'end-users' of today's AI systems might be other AIs, not just humans! Watermarking, labelling, or disclosure of AI generated content is something I'm a big advocate of, because it safeguards the digital public good of human data on the Internet, and also ensures we can continue to build high-quality AI systems in the future. Transparency and embracing open-source development can go a long way to solving many issues, but need to be balanced with the genuine intellectual property considerations from model developers.

Last updated:  06 Sep 2024 12:48pm
Declared conflicts of interest Aaron Snoswell receives grant funding from OpenAI in 2024.

Professor Daniel G. MacArthur is Director of the Centre for Population Genomics, a joint initiative of the Garvan Institute of Medical Research and the Murdoch Children's Research Institute

Breakthroughs in AI are transforming healthcare at an unprecedented rate and driving remarkable progress in disease diagnosis and prediction. The Centre for Population Genomics (a joint initiative of the Murdoch Children’s Research Institute and Garvan Institute of Medical Research) is leading the development of a national program to accelerate the safe adoption of AI in genomic diagnosis. Our top priority is to ensure that these powerful tools are reliable, ethical, and help improve health equity for all Australians. 

Australia’s unique position, with its robust healthcare infrastructure and research capabilities, offers immense opportunities to harness AI technological advancements for the benefit of our population. While the Government’s proposals represent a crucial first step in building public trust, robust measures should be carefully considered and implemented to not only safeguard the responsible use of AI but also position Australia as a leader in AI development. The Centre looks forward to contributing to the consultation on the proposals paper.

Last updated:  06 Sep 2024 10:02am

Associate Professor Vitomir Kovanović is the Associate Director (Research Excellence) of the Centre for Change and Complexity in Learning (C3L), UniSA Education Futures

AI will have a profound impact on every aspect of Australian society, and it is important to ensure that AI does not cause harm but is instead used in safe and productive ways. It is great to see that the Australian Government is taking the task seriously and looking to learn from regulators in other parts of the world (mainly Canada and the EU), and this paper is a great start.

I am particularly pleased that Education (and Training) is included in the suggested list of high-risk contexts. However, while the task of ensuring guardrails are in place will fall on the developers (and deployers) of AI technologies, there is little clarity on what that would include and how it would be mandated.

Finally, there is a need to invest in applied AI research so that the benefits and risks of its use can be better understood, and I wish this were more prominent in the draft paper. This is, after all, an intriguing new technology, unlike any we have had in the past, so we need to understand it better before we can have clear policy around it.

Last updated:  05 Sep 2024 5:44pm
Declared conflicts of interest None declared.

Dr Melissa McCradden is a THRF Clinical Research Fellow at AIML and AI Director at the Women's and Children's Health Network at the University of Adelaide

Guardrails are so important to ensuring Australians can expect consistent standards for quality and safety whenever an AI system is used. We are all hearing wild stories about AI these days, and it's understandable that there are questions about reliability. No AI tool is perfect - for the foreseeable future, there will undoubtedly be gaps in the performance of AI tools, and these can have serious consequences for Australians. If a chatbot gives bad advice, if an algorithmic decision is wrong, or if someone can't access a digital tool, we need to have systems in place to protect people's rights.
 
Some key principles behind these guardrails include knowledge of the system's performance and its limitations or boundaries (when does it work well and when does it not?); appeals - making sure there is a back-up option when something isn't working well; and choice - ensuring that people who have concerns or are not well represented have other options for accessing essential services.
 
As we navigate the near future of using AI tools, the government plays a key role in setting the floor - what are the standards that need to be set in order to protect rights, ensure consistent and fair processes, and enable participation without exclusion? Bringing in consumers and Aboriginal knowledge holders will be vital for ensuring that this floor is set in a transparent, fair, and socially licenced manner. This way, we can work together to leverage what's good about AI tools with those guardrails in place to deliver better services to all people.

Last updated:  05 Sep 2024 5:43pm

Dr Carolyn Semmler is the Lead Researcher of the Applied Cognition and Experimental Psychology group at the University of Adelaide

The responsible adoption of AI by industry is too important to Australia’s future economic prosperity to get wrong – and so the release of mandatory guardrails for AI in high-risk settings is welcome. However, there are aspects to these guardrails that mean there is still a large chasm between our understanding of how models work in ideal settings – and how they work when implemented in complex sociotechnical environments such as hospitals and law practices.

What is most concerning is the lack of evidence regarding the types of risks that become evident when AI systems are implemented – risks which cannot be predicted by looking at model performance alone, but which need to be assessed as humans interact with the models while completing a task or making a consequential decision.

Our team is working on methods to understand how human cognition and decision making is changed via interaction with AI – for good and bad. It is this knowledge that will allow industry to meet the regulatory requirements imposed by these mandatory guardrails and ensure that Australians are not harmed by poor practice around implementation.

Last updated:  05 Sep 2024 5:41pm
Declared conflicts of interest None declared.

Dr Steve Lockey is a Postdoctoral Research Fellow in Organizational Trust at The University of Queensland Business School

My collaborators and I are pleased to see the government take proactive action in developing guardrails around AI, particularly in high-risk contexts.

Our research on public attitudes towards AI in Australia and around the world (Trust in Artificial Intelligence: A Global Study; https://ai.uq.edu.au/project/trust-artificial-intelligence-global-study) shows that people want regulation, that they are more comfortable with independent, external regulation, and that they are more likely to trust AI systems that adhere to trustworthy principles and practices, and when organizations provide assurances of trustworthiness.

For example, in Australia, 70% of respondents believed AI regulation was necessary, over 90% felt trustworthy AI principles and practices were important, and a majority agreed that they would be more willing to trust AI when organizations monitored system accuracy and reliability over time (77% agree), had AI ethics certifications (61%), and had an AI code of conduct in place (69%). Our modelling also shows the belief that there are appropriate safeguards around AI to be the strongest predictor of trust, ahead of perceived benefits, risks, and personal knowledge of AI.

The Australian Government has cited this research, and I am pleased to see that it has clearly taken public sentiment and expectations around AI regulation and governance into account in the development of the ten guardrails. I applaud the government’s decision to take a risk-based, preventative approach to AI governance.

Last updated:  05 Sep 2024 5:40pm

Professor Nicole Gillespie is a Professor of Management & Chair in Organizational Trust at the University of Queensland, and a Research Fellow at the Centre for Corporate Reputation at the University of Oxford

Our 2023 survey showed that 70% of Australians believe AI regulation is required. Australians have a clear preference for AI to be regulated by government and existing regulators, or by a dedicated independent AI regulator, rather than by industry. 

Despite the strong expectation of AI regulation, only a third of Australians believe current regulations, laws and safeguards are sufficient to make AI use safe.

The government’s proposed mandatory guardrails for high-risk AI respond to this public demand and expectation for AI regulation and are likely to enhance trust in, and adoption of, AI technologies.

The proposed guardrails around testing, transparency and accountability are centrally important for building and sustaining trust in the use of AI technologies. Our research shows that over 90% of Australians view these practices as important for their trust in AI systems. 

Our survey further shows that three out of four people would be more willing to trust an AI system when the organization deploying it has assurance mechanisms in place that signal ethical and responsible use - for example, assurances that the accuracy and reliability of the system are being monitored and that standards for explainable and transparent AI are adhered to.

Last updated:  05 Sep 2024 5:39pm

Professor Lisa Given is the Director of the Centre for Human-AI Information Environments and Professor of Information Sciences at RMIT University

The government’s proposal for mandatory guardrails is an important step in Australia’s strategy for balancing the potential risks and benefits of AI-enabled technologies. The proposals for guardrails that the government sets out are intended to “set clear expectations…on how to use AI safely and responsibly.”

Identifying the need for mandatory guardrails aligns Australia with strategies set out by other jurisdictions, globally, such as the European Union.

This approach is welcome and timely, particularly given survey results of Australian business leaders (reported by the ABC) showing that one-third of businesses using AI are doing so without informing employees or customers. The government is seeking feedback on the definition of what constitutes “high-risk” AI, which is a critical issue that must be addressed to put appropriate guardrails in place.

They are also proposing 10 guardrails for AI systems in high-risk settings, related to testing, transparency, and accountability, as well as regulatory options for mandating the guardrails. Given the rise in generative AI challenges (such as the creation of misinformation and deepfake videos and images), and the lack of transparency in the design, application, and use of AI systems, these proposals outline the key next steps that Australian organisations and regulatory bodies must take to ensure that AI technologies benefit society, by managing risks and setting out clear expectations for safe and responsible implementation.

The release of voluntary AI safety standards is welcome as an interim measure, to guide businesses and other organisations that are already adopting AI technologies; however, mandatory guardrails are also needed to ensure appropriate protections are in place for consumers, employees, and others who may be affected by the introduction of these tools across sectors.

Please also see the Conversation article “Australia plans to regulate ‘high-risk’ AI. Here’s how to do that successfully” for additional commentary related to this topic.

Last updated:  05 Sep 2024 5:37pm
Declared conflicts of interest None declared.

Professor Chennupati Jagadish is the President of the Australian Academy of Science

Artificial intelligence proposals find the middle ground
The Australian Academy of Science welcomes the release of the Australian Government’s proposals paper for introducing mandatory guardrails for AI in high-risk settings, and a voluntary AI safety standard.

The documents are an important next step in the Australian Government’s Safe and responsible AI in Australia consultation and are consistent with our international commitments through the Bletchley and Seoul declarations.

AI is transforming science and our society, which is why the Academy has advocated for a national strategy and guidelines for the responsible use of AI.

Regulation of AI should not limit innovation but rather create a safe and ethical framework for science and society to prosper.

A solid regulatory framework is essential to ensure Australia is prepared for the transformation AI is bringing, and this can be provided through the urgent introduction of the proposed mandatory guardrails.

The proposals, including the option of introducing an Australian AI Act, are a major step in the right direction to develop laws and regulations that appropriately address the opportunities, challenges and risks of AI.

It is important that any regulatory environment is adaptable and can anticipate the adoption of AI and guide its safe and responsible use, and that progress to introduce mandatory guardrails is made swiftly.

Time is of the essence. Australia needs to progress the development of anticipatory regulation in AI and other areas of emerging science.

The Academy will publish a full response to the consultation on proposed guardrails in the coming weeks.
We will continue to convene expertise to assist the Australian Government in guiding the responsible adoption of AI for the benefit of all Australians. 

The Academy is optimistic that Australia can lead in AI and related sciences if the Australian Government:

  • develops a national strategy for the uptake of AI in the science sector, including scaling up investment in fundamental AI science
  • ensures that Australia’s AI capability doesn’t rely on other nations by uplifting our sovereign high-performance computing facilities
  • implements the UNESCO Recommendation on Open Science (since AI is trained on available data, keeping scientific data and peer-reviewed publications behind paywalls impacts the ability of these systems to leverage the most reliable information).

Read the Academy’s submission to the consultation process and our statement on the government’s interim response.

Last updated:  05 Sep 2024 5:33pm

Professor Adam Dunn is a Professor of Biomedical Informatics and Head of Discipline for Biomedical Informatics and Digital Health at The University of Sydney

Generative AI tools are different from traditional machine learning because a single model can be used for many different tasks and their behaviour can be erratic rather than strictly reproducible. This is a problem in settings like healthcare, where consistency is needed to maintain standards of safety and fairness.

Other big challenges include ensuring data privacy for users and making sure there is human oversight over generative AI tools even when their inner workings and data sources are often opaque. The 10 guardrails in the Voluntary AI Safety Standard are smart, and appear to be built from an understanding that these models can change and behave in unusual ways.

This means that beyond careful testing before deployment, the guardrails are built around post-deployment monitoring for unintended consequences and accountability.

Last updated:  05 Sep 2024 5:13pm

Dr Michael Noetel is a Senior Lecturer in the School of Psychology at The University of Queensland

The Government has proposed a promising direction for mandatory AI guardrails in Australia. The proposal addresses many risks, such as discrimination and privacy concerns, and the need for human oversight in automated systems. However, our research shows that both researchers and the public have serious concerns about the safety of the next generation of AI systems. Both fear that we might lose control of these systems, or that they might be used for biological or cyber attacks.
 
To address the most serious risks, we should be thinking about how the European Union targets obligations at general-purpose AI systems—systems that might pose systemic risks. The paper is right to consider the whole AI lifecycle. Effective regulation targets those best able to manage risks, which are usually the frontier model developers themselves. For 'black box' AI systems where Australians and Australian businesses have little practical control, regulation has to encourage AI developers to make their products safe by design.
 
The Government has only given the public a month to respond to this consultation. Our research shows that the public has grave concerns about AI. The discussion paper cites our research and acknowledges public concerns. But such a short consultation window disadvantages civil society and favours well-funded tech companies with incentives to race ahead into a dangerous future.
 
[Cited research: https://aigovernance.org.au/survey/sara_technical_report and https://airisk.mit.edu/]

Last updated:  05 Sep 2024 3:17pm

Dr Tony Carden is an Adjunct Fellow within the Centre for Human Factors and Sociotechnical Systems at The University of the Sunshine Coast

I applaud the Government's introduction of mandatory guardrails for high-risk AI. In particular, the adoption of a prevention-based approach is encouraging. Nothing less is adequate to mitigate the potential speed and scope of foreseeable risks that may emerge from rapidly developing AI systems.
 
The rapidly evolving nature of AI requires a robust regulatory approach that is agile and responsive. It should include the full range of available regulatory tools from education and empowerment to influence and enforcement. It must be integrated with existing regulatory and legal frameworks to ensure efficiency and effectiveness.
 
To ensure integration, efficiency, and effectiveness of AI regulation for Australia, a national regulatory agency will be required for coordination and to ensure adaptation to continuous change.

Last updated:  05 Sep 2024 3:15pm

Dr Tapani Rinta-Kahila is an ARC DECRA Fellow at the University of Queensland Business School

The 10 guardrails align with my research on the responsible implementation and use of AI systems. Many recent AI failures in Australia and globally (e.g. Robodebt, the UK university admissions algorithm) have stemmed from issues such as a lack of human intervention, insufficient data quality, a failure to understand and control for potential risks, and the ignorance or exclusion of third-party advice in system development.

Moreover, with Robodebt (technically not AI, but comparable) and in a similar case in the Netherlands (which was AI), we saw an absence of processes for people impacted by automated systems to challenge their outcomes. In both countries, this led to the formation of grassroots social movements that set out to help the affected people. The guardrails speak to all of these issues very specifically and are thus a step in the right direction.

Last updated:  05 Sep 2024 3:13pm

Professor Dian Tjondronegoro is a Professor of Business Strategy and Innovation at Griffith University

The Australian Government's plan to implement mandatory AI guardrails in high-risk settings is a crucial step toward ensuring the safe and responsible use of AI technologies. These guardrails can effectively address AI's risks and potential harms by aligning with established responsible AI and AI risk management frameworks.

Focusing on transparency, ethical governance, and technical robustness will help build public trust and provide regulatory certainty for businesses. Encouraging adaptability and innovation within these guidelines will foster a culture of continuous improvement while maintaining high safety standards. A multi-disciplinary approach is essential for integrating AI into critical sectors like healthcare and infrastructure, ensuring that AI systems are effective, ethically aligned, and socially responsible.

Last updated:  05 Sep 2024 3:11pm

Dr Shaanan Cohney is a Senior Lecturer in the School of Computing and Information Systems, Faculty of Engineering and Information Technology at the University of Melbourne

The Department of Industry, Science and Resources' (DISR) proposed guidelines do not add anything new. The report largely recapitulates the same discussion points that have been circulating internationally for many months. The proposed guidelines are very high-level and as such are likely to create a compliance-driven culture rather than meaningfully improving practices. Much like corporate Australia's response to Australia's privacy principles, it is all too easy to hire an auditor to tick off requirements against a compliance spreadsheet. This comes at the cost of the industry-specific work needed to develop AI regulations that would meaningfully protect consumers.

Australia should be more careful when following the EU’s lead—their risk-based approach to regulation has yet to improve safety while imposing substantial extra costs. Regulating AI is necessary. However, our regulators would do well to act as intelligently as the products they are seeking to regulate.

Last updated:  06 Sep 2024 11:17am

Alex Jenkins is Director of the WA Data Science Innovation Hub and Chair of the Curtin AI in Research Group (CAIR) at Curtin University

The Australian government’s voluntary AI guidelines are a step toward ensuring responsible use of artificial intelligence, but they fall short in providing robust protections for consumers. While the guidelines encourage ethical AI practices, their non-mandatory nature leaves significant gaps in accountability. This is particularly concerning in areas such as recruitment, where AI could be used to filter CVs or interview candidates, political spheres where AI-generated disinformation could disrupt election cycles, and healthcare, where a lack of transparency could impact the uptake of AI by the medical community. Without enforceable safeguards, these risks remain unchecked.

On the upside, the guidelines may foster innovation, giving Australian AI startups the freedom to experiment and grow without excessive regulation. They also provide businesses with confidence when dealing with international markets that have stricter AI laws, helping to ensure compliance and smooth interactions. However, for Australian consumers, the lack of mandatory oversight leaves open the possibility of harm from AI systems, making it clear that stronger regulations may still be needed to protect public interests while supporting innovation.

Last updated:  05 Sep 2024 3:19pm

Associate Professor Abhinav Dhall is an Associate Professor of Computer Science from the College of Science and Engineering at Flinders University

The proposal on mandatory guardrails for AI in high-risk settings is a welcome step as AI-enabled systems will become more ubiquitous in the near future. The report defines and categorises the high-risk use cases of AI and suggests guardrails to protect the users.

AI-based software may be considered high-risk depending on the targeted user group. Specifically, applications and platforms used by children and young audiences must implement stricter guardrails to ensure safety. For example, platforms that enable the creation and sharing of AI-generated media, such as deepfakes, pose significant risks to younger users. 

Many apps with AI-enabled features allow users to create synthetic content through filters and other tools, potentially exposing them to harmful or deceptive media. These risks underscore the need for clear guidelines and protective measures for AI-driven technologies in these contexts.

Establishing a clear distinction between synthetically generated media and deepfakes is crucial. While synthetic media can serve creative purposes along with use cases in various industries, deepfakes are often linked to malicious uses like spreading misinformation. Clear definitions are essential for creating effective regulations and preventing misuse. Moreover, we should learn from our experience with cybersecurity guidelines, laws, and standards, which took many years to develop and are still not fully enforced. The infrastructure and solutions implemented without security in mind have had significant implications that we are facing today.

Last updated:  05 Sep 2024 3:18pm

Professor Toby Walsh is Chief Scientist of the AI Institute and Scientia Professor of AI at The University of New South Wales (UNSW), and an Adjunct Fellow at Data61

The public is rightly concerned about how artificial intelligence is being deployed. It is a powerful technology with both positive and negative uses. I therefore welcome the government’s proposal to develop mandatory guardrails. However, the biggest risk is that Australia misses out on the opportunity: compared to other nations of a similar size, such as Canada, we are not making the same scale of investment in the fundamental science. Another concern is the speed with which this regulation is being developed. Good regulation takes time. The EU started on its journey to regulate AI over five years ago, and the EU’s AI Act is only now coming into force. So, while guardrails are to be welcomed, I am concerned about how long it will take to get them in place.

Last updated:  05 Sep 2024 1:13pm
Declared conflicts of interest None declared.

Professor Paul Salmon is co-director of the Centre for Human Factors and Sociotechnical Systems at the University of the Sunshine Coast

The voluntary AI safety standard is a step in the right direction. Current regulatory frameworks are not fit for purpose and hence such standards and guardrails are critical.

The 10 guardrails touch on key AI safety issues such as governance, regulatory compliance, risk management, data quality, testing, human oversight and intervention, and transparency. As with most AI safety guidelines, the devil will be in the detail in terms of exactly how developers and deployers can adhere to them and what criteria will be used to assess adherence.

For example, for the risk management process guardrail, most contemporary risk assessment methods are ill-suited to identify the risks associated with AI technologies, and so new methods are required.

The higher education sector’s preparedness for generative AI is a great example of where critical risks were not considered until they were actually encountered - a similar reactive approach for AI technologies in all sectors would be catastrophic.

From a compliance point of view, it is also unclear how we can determine whether a deployer of AI has considered and implemented controls for the full spectrum of risks.

So whilst the standard provides a useful framework, what we now need are consistent, valid, and reliable methods that will allow it to have the desired influence on the development and deployment of AI technologies.

Last updated:  05 Sep 2024 1:12pm
Declared conflicts of interest None declared.

Dr Nataliya Ilyushina is a Research Fellow at the Blockchain Innovation Hub and ARC Centre of Excellence for Automated Decision-Making and Society (ADM+S) at RMIT University.

This document heralds a major breakthrough in Australian AI regulation. It provides a much-needed summary of the existing laws that already apply to AI and, most importantly, how to apply them. For example, AI hallucination can breach consumer law - the well-known 'fit for purpose' guarantee. Further, it adopts a risk-based approach. And while some might say this approach was already adopted by the EU in November and is dated, the document released today has more detail - for example, the human-centred perspective on AI harms, namely harms to people, to groups and communities, and to societal structures.

Last updated:  05 Sep 2024 1:11pm
Declared conflicts of interest None declared.

Associate Professor Niusha Shafiabady is from the Department of Information Technology, Peter Faber Business School at Australian Catholic University

The Australian Government has proposed ten mandatory guardrails to mitigate the risks of AI, especially in high-risk applications. This is a great step towards managing the risks of the technology.

There are some very good points in the proposed guidelines, such as guardrail number 5 ('Enable human control or intervention in an AI system to achieve meaningful human oversight'). This would add a new layer of checking of an AI's outcomes before finalising decisions that could potentially impact people.

Although this is a great initiative, there are some concerns about the guidelines. As an example, guardrails 3 and 4 ('Protect AI systems and implement data governance measures to manage data quality and provenance' and 'Test AI models and systems to evaluate model performance and monitor the system once deployed') are basically measures that all software developers, irrespective of the domain of their work, must already comply with. Meeting them requires knowledge and expertise in addressing and solving those issues. If all AI experts knew how to deal with issues that are resolvable, such as bias, we probably wouldn't have them as big problems in the first place. So, if we want safer AI, we should start training better experts in our higher education sector.

Guardrails such as number 8 ('Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks') are quite difficult to implement, since companies are protective of their data, algorithms and methods. As an example, there are many pre-trained models that, if released, would be helpful in different applications, but the corporations that own them are protecting their ownership of the models.

Last updated:  06 Sep 2024 2:43pm
Declared conflicts of interest None declared.

Kylie Walker is CEO of The Australian Academy of Technological Sciences and Engineering (ATSE)

Greater adoption of AI could see Australia’s economy increase by $200 billion annually, but it is critical that robust measures are rapidly implemented to safeguard high-risk settings and position Australia at the forefront of AI development.

 This is Australia's AI moment. Ultimately, these proposals will help Australia lead in both technological and regulatory innovation in AI, setting a global standard for responsible and effective AI development and deployment.  

Investing further in local AI innovations will simultaneously create new AI industries and jobs here in Australia and reduce our reliance on internationally developed and maintained systems.  

Local AI industries will also give the Australian Government greater ability to regulate AI development in line with Australian community values and expectations.

Last updated:  05 Sep 2024 1:09pm

Dr Daswin De Silva is Deputy Director of the Centre for Data Analytics and Cognition (CDAC) at La Trobe University

The Australian Government’s mandatory and voluntary AI safety standards are much needed in the highly dynamic and fast-paced AI landscape. They appear to follow the EU AI Act and the ISO and NIST frameworks proposed in recent years. However, the presentation of these standards is somewhat confusing. The guardrails proposed as mandatory and as voluntary are largely the same, with subtle differences that need to be unpacked by a domain expert.

The first guardrail is likely to put off many SMEs looking to adopt AI, as it speaks to accountability processes and regulatory compliance instead of encouraging innovation and responsible adoption. Large organisations would already have an AI strategy and roadmap in place that accounts for the risks and challenges of AI. The extensive discussion of AI risks on pages 12-14, and the classification of high-risk AI without reference to low- or medium-risk AI, is a further cause for confusion, especially for those new to AI and those in the SME space.

Overall, these guardrails need to be simplified for the average Australian organisation; if not, we will end up in a further round of consultant-speak that does not capitalise on the opportunities of AI for all Australians.

Last updated:  05 Sep 2024 1:08pm
Declared conflicts of interest None declared.

Dr Erica Mealy is a Lecturer and Program Coordinator in Computer Science at the University of the Sunshine Coast

Australians are rightly distrustful of AI. Phrases like “we need more people to use AI” are particularly dangerous against the backdrop of significant potential for harm in areas such as cybersecurity (https://link.springer.com/article/10.1007/s43681-024-00443-4), fraud, automation bias and discrimination. While I welcome the intent to protect Australians and our businesses, telling people to use the technology without educating them on when (and when not) and how (and how not) to use it puts Australians at further risk.

One of the biggest challenges of AI is that its “decisions” and “recommendations” cannot be explained beyond “it was statistically likely given a certain set of inputs that the model was trained on” - inputs that we don’t have access to. The proposal’s repeated promise that the government plans to “strengthen privacy protections, transparency and accountability” is particularly problematic:

  • there’s no such thing as “explainable AI”, 
  • most, if not all of the major AI players, are international technology companies with no interest in keeping Australia’s data and intellectual property sovereign, and 
  • there is no way to make the training sets and decision algorithms both private and transparent or accountable to the Australian public. Transparency and accountability need visibility, while privacy needs confidentiality – these are competing interests at best.
Last updated:  05 Sep 2024 1:07pm

Professor Mary-Anne Williams is the Michael J Crouch Chair in Innovation and founder of the UNSW Business AI Lab at The University of New South Wales

Back in 2018, I said 'AI is poised to disrupt humanity, society, industries, local, national and global economies and politics by fundamentally transforming how people perceive, feel, reason and interact with the physical and digital worlds, shaping human experiences, beliefs and choices. The extraordinary potential of AI has created a fiercely competitive race to lead. As Vladimir Putin put it, the prize of leadership is to shape and control the future for huge benefits and rewards.'

Australia must invest in its own future, and that means it must participate in and lead the AI game, learn to think like an AI leader, and become an AI leader. The guidelines are an important step because to reap the benefits of AI, it must be safe to use. How can we ensure that the new generation of autonomous AI agents make decisions and behave in ways that align with human preferences and expectations? What methods and tools can businesses use to adhere to the guidelines? Who is responsible for AI reliability across the complex AI innovation value chain? How can we avoid accidental and deliberate harm when AI knows it knows more than the people supposed to be controlling it?

What are we waiting for?

Last updated:  05 Sep 2024 1:43pm

Associate Professor Stan Karanasios, Business Information Systems at the University of Queensland

The framework sets a solid foundation for ethical AI usage. Organisations will welcome guidance, especially in terms of complying with any potential future regulatory requirements in Australia and emerging international practices; this is particularly relevant for those working in the international space. Society will be reassured that organisations have a framework for using AI safely and responsibly.

Yet, its success will ultimately depend on its adaptability and the genuine commitment of all stakeholders involved in the AI lifecycle. Like all guardrails, real effectiveness hinges on their practical implementation and ongoing commitment from organisations. The integration of stakeholder engagement and transparency across the AI supply chain significantly contributes to mitigating risks and enhancing trust.

Organisations will look for more explicit directions on how to implement these guardrails. For instance, while the guardrails specify the need for testing and human oversight, they could also provide concrete examples or case studies demonstrating successful applications of these practices. It would be useful to include specific strategies for small and medium-sized enterprises that might lack the resources of larger corporations. The connection to international standards is necessary, but the adaptability of these guardrails across different regulatory environments will be challenging in practice.

Last updated:  05 Sep 2024 1:04pm

Attachments

Media Release: Australian Government - Dept of Industry, Science and Resources (web page)
Organisation/s: Australian Science Media Centre
Funder: N/A