Expert Reaction
These comments have been collated by the Science Media Centre to provide a variety of expert perspectives on this issue. Feel free to use these quotes in your stories. Views expressed are the personal opinions of the experts named. They do not represent the views of the SMC or any other organisation unless specifically stated.
Dr Timothy Koskie is a Post-Doctoral Associate in the Centre for AI, Trust and Governance at the University of Sydney
"There is a conflict here where this is ostensibly a national AI plan, but the largest firms currently being used for AI-powered services, such as OpenAI and Microsoft, are based and founded in the US, and Australia’s role in or benefits from this large and rapid AI rollout is unclear.
Given the extent to which previous digital-platform innovations have impacted, for instance, our news media environment by funnelling much of its revenue to overseas platform companies, this is an area that needs considerable attention. This is doubly problematic because existing laws were demonstrably ineffective at protecting many vital sectors from those impacts, which raises the question of how they will work if they are stretched further still to take on this new and potentially massive disruption.
The current US administration has created strong political headwinds against regulatory action, even by US states, but this also flags that the rapid expansion of AI is not entirely organic and is instead already being shaped by government decisions. Australia has a strong history of not only leading on such legislation but also finding strong international support for its initiatives."
Professor Joel Pearson is Deputy Director of the AI Institute and a researcher in human readiness and neuroscience at The University of New South Wales
"Australia’s new National AI Plan is a solid start. It recognises the importance of skills, infrastructure and safety, and it shows the government is finally taking AI seriously at a systems level. But it almost completely misses the elephant in the room: people’s minds.
Right now, Australians are anxious about their future. Quietly. At scale. The uncertainty around jobs, education, identity and meaning, in an AI-saturated world, is already driving stress, disengagement and social unrest. You cannot build an “AI-enabled” nation on a population that feels psychologically unsafe about the future.
The Plan treats AI as a technical and economic challenge. In reality, it is also a human psychology challenge. We need a national AI change management layer that sits alongside the existing pillars. A framework that helps people understand what is happening to work, education, purpose and identity, gives them evidence-based tools to cope with uncertainty, and supports leaders to manage AI change without burning out their teams.
For example, we have developed a white paper outlining a National AI Change Management Plan that could plug straight into the current framework. Until we add that human layer, Australia’s AI strategy will remain structurally impressive but psychologically incomplete."
Dr Armin Chitizadeh is from the School of Computer Science at The University of Sydney
"The Federal Government has released its National AI Plan—a promising first step. It outlines investments and strategies to strengthen Australia’s position in artificial intelligence while aiming to ensure that all Australians benefit from its growth.
The plan includes some funding to address potential risks, but this area is not heavily prioritised. Many in the AI field follow the mindset of “build first, fix later”. Unfortunately, this does not work for AI. AI systems are complex, far more complex than most human-made designs, including skyscrapers or aircraft engines. With such complexity, fixing major issues after deployment can be extremely costly or even impossible. Think of it like building a house without proper inspection, only to discover later that it is sinking. Once that happens, repair becomes nearly impossible. The same applies to AI: if we develop it without careful planning and robust safety measures, we may not be able to make it safe afterwards.
That said, the challenge is not solely Australia’s to solve. AI safety requires global cooperation, much like climate action. When all parties work together, everyone benefits. But if some act responsibly while others do not, the cooperative ones may bear the cost while others profit in the short term. Unfortunately, we live in a world where international collaboration is weakening, and countries often prioritise national gains over shared safety. Australia could help lead by proposing an international framework—similar to the Paris Agreement on climate change—perhaps even a “Canberra Agreement” focused on AI risk mitigation."
Professor Mary-Anne Williams holds the Michael J Crouch Chair for Innovation in the Business School at The University of New South Wales
“Australia’s new National AI Plan represents an important step, but it will only succeed if we approach AI not only as a technology transformation, but as an innovation agenda focused on value creation and value capture. AI delivers impact when organisations use it to solve real problems, run rapid experiments, learn quickly, and iterate. This is the core of modern innovation practice, and without it, even the most advanced AI tools achieve very little.
Globally, more than 80% of innovations fail to deliver expected value, and AI initiatives are no different. The danger for Australia is that we invest heavily in AI infrastructure and technologies without cultivating the innovation cultures, strategies and capabilities needed to turn those technologies into economic and societal benefit.
To unlock AI’s potential across Australia’s critical domains, from healthcare and defence to agriculture, education and energy, we urgently need a new generation of talent that not only understands AI technically, but is deeply skilled in the art and science of innovation: problem framing, experimentation, evidence-driven decision-making, and designing for impact.
If the National AI Plan is to achieve its ambition, it must place innovation capability at its core. AI creates value only when people know how to innovate with it.”
Dr Mohiuddin Ahmed is a Senior Lecturer in the Computing and Security discipline in the School of Science at Edith Cowan University. He also coordinates the Postgraduate Cyber Security courses.
"An Australian national AI plan is much needed and a welcome step towards uplifting national security. Artificial Intelligence is a double-edged sword, and in particular, cyber-enabled crime has a direct connection. In the near future, it would be highly appreciated to see a synergy among different stakeholders and relevant frameworks, such as the Scams Prevention Framework. It is not just the government; citizens should also be held responsible for doing their due diligence to keep everyone safe in this age of unprecedented technological advancement."
Dr Emmanuelle Walkowiak is Vice-Chancellor's Senior Research Fellow in the College of Business and Law at RMIT University
"Australia's National AI Plan takes a neutral approach that gives firms more flexibility to experiment with AI integration. From a labour economics perspective, it avoids locking organisations into new rules and organisational practices before we fully understand how AI reshapes tasks, productivity, and skill demand. A neutral approach allows firms across all industries to experiment with AI, choose the models that best fit their workflows, capture productivity gains, potentially enhancing workforce adoption.
In addition, the establishment of the AI Safety Institute signals that the government recognises that monitoring AI risks and research capacity matters. So, Australia is choosing a “safety-with-innovation” road. In my research, I raise a key question: who is responsible for managing AI-related risks within workplaces?
Employers face choices about data governance, accuracy, accountability, and professional standards, while they also need to tackle AI risks such as cybersecurity, misinformation, and intellectual property rights in new ways. It is time now to clarify responsibilities and ensure organisations and workers have the capability to deploy AI productively, safely, and in ways that build long-term trust."
Professor Paul Salmon is co-director of the Centre for Human Factors and Sociotechnical Systems at the University of the Sunshine Coast
“The decision to move away from developing a comprehensive set of AI risk controls is disappointing. Unfortunately, existing frameworks, principles, and guidelines are not fit-for-purpose and cannot be relied upon to manage the risks associated with AI in different contexts. Whilst I acknowledge the need to capitalise on the potential benefits of AI, this should not come without appropriate management of the various risks associated with AI.”
Associate Professor Sophia Duan is Associate Dean of Research and Industry Engagement at La Trobe University
"The National AI Plan rightly recognises that AI is now an economic capability issue. Australia must accelerate investment in skills, infrastructure and industry adoption if we want to remain globally competitive.
While focusing on economic opportunity is important, the absence of new AI-specific legislation means Australia still needs clearer guardrails to manage high-risk AI. Trustworthy AI requires more than voluntary guidance.
AI capability-building must extend beyond major cities and large organisations. Regional, rural, and Indigenous communities need tailored support to ensure AI benefits are shared equitably.
AI transformation is as much about people as technology. Organisations need support to build AI literacy, trust, governance and change-readiness among employees.
Data access is the foundation of AI innovation, but it must be balanced with strong safeguards, privacy protections and community trust.”
Dr David Tuffley is a Senior Lecturer in the School of Information and Communication Technology at Griffith University
"Australia's National AI Plan is strong on infrastructure commitments and has some impressive worker-focused rhetoric, but it lacks the institutional teeth required for genuine accountability. The plan identifies opportunities in data centres and regional partnerships, yet sidesteps the harder questions about market concentration and competition policy.
The establishment of an AI Safety Institute is a good move, but I see some critical gaps around its actual authority and enforcement mechanisms. There is vague language about "coordination" and "monitoring", but no clear lines of regulatory power or consequences for bad actors (that I can see). The accountability framework could do with more work: who makes the rules, and who enforces them?
The plan also lacks a robust competition policy. It celebrates attracting Microsoft and Amazon's billions without addressing how such massive foreign investment might entrench these oligarchs' control over Australia's digital infrastructure. We need a strategy for ensuring smaller players can compete on equitable terms.
The worker protections sound good on paper - consultation, reskilling, union engagement - but without binding mechanisms, they remain aspirational.
Overall, Australia's plan positions itself as 'responsible' while avoiding the regulatory friction that responsibility actually requires."
Dr Angela Kintominas (she/her) is a lecturer in the Faculty of Law and Justice at The University of New South Wales
"It is encouraging to see that the National AI Plan recognises that, as well as possible benefits to the labour market, we also need to respond to risks to workers such as increased surveillance, discrimination and bias (including in relation to hiring and dismissal/deactivation) and work intensification. These risks impact all Australian workers, but especially those in precarious forms of work, such as in the gig economy.
As well as the Plan's initiatives relating to training and skills development, attention is also needed on national workplace laws to protect workers' rights and strengthen collective labour rights through collective bargaining and enterprise agreement making. This includes transparency requirements around what worker data is collected; giving workers a right to request (and to contest and delete) data; prohibitions on abusive forms of data collection without worker consent; updates to the National Employment Standards; making AI a permitted matter in collective bargaining; and expanding the role and funding of the national labour inspectorate, the Fair Work Ombudsman, to investigate and proactively respond to emerging AI issues in the world of work."
Dr Melissa McCradden (she/her) is Deputy Director and THRF Fellow at the Australian Institute for Machine Learning at The University of Adelaide
"On 25 November, the Australian Government announced that it was creating an Australian Artificial Intelligence Safety Institute (AISI) to respond to AI-related risks and harms. I applaud the Government's decision to leverage existing legislation. As an ethicist and clinical researcher, I have found that augmenting the strengths of existing governance systems often provides the most effective and quickest path to protecting citizens.
AI safety is important, but we can't forget that AI doesn’t do anything by itself. To make AI safe, we need to focus on human decision-making and transparency, not just more technical 'fixes.' Safe decisions with AI are those which centre the wellbeing of Australians, mitigate conflicts of interest, involve consultation and partnership with Aboriginal knowledge holders, and are made by diverse teams including consumers.
We need to involve young people, in particular, in shaping Australia's future with AI. Our young people are not only navigating their most impactful years with these technologies, but they are also actively shaping new norms around technology use. No sustainable AI plan neglects the perspective of youth."
Dr Rebecca Marrone is a Senior Lecturer in The Centre for Change and Complexity in Learning at Adelaide University
"The National AI Plan presents a three-stage strategy centred on (1) capturing economic opportunities, (2) expanding access to AI benefits, and (3) ensuring the safety of Australians. Rather than introducing new legislation, it strengthens existing frameworks and prioritises the practical development and adoption of artificial intelligence across industry and government. A strong emphasis is placed on building sovereign capability, including the development of models trained on Australian data stored within Australia, and on strengthening a workforce that can use artificial intelligence safely, confidently and ethically. The focus on capability uplift across all sectors reflects the scale of cultural and technical change required for responsible adoption.
The plan recognises the importance of hearing from Australians, and this commitment is essential. One area that warrants particular attention is the inclusion of young people. As artificial intelligence will shape the learning, well-being and working lives of the next generation most profoundly, it is vital that their perspectives and experiences inform national priorities. My research with teachers and students shows that young people are highly attuned to both the promise and the risks of artificial intelligence. Ensuring that their voices are not only heard but actively integrated into policy design and dataset development will strengthen trust, improve relevance and support the safe adoption of these technologies across education and youth settings."
Associate Professor Niusha Shafiabady is from the Department of Information Technology, Peter Faber Business School at Australian Catholic University
"The National AI Plan highlights the economic opportunities of AI, but based on what we know so far, it leaves important gaps in how Australia manages the risks. Relying on existing laws sounds practical, but those laws were not designed for powerful modern AI systems. They don’t fully address issues like how AI models make decisions, how data is used, and who is responsible when things go wrong.
Opening up more public and private data could help businesses innovate, but it must be done safely. We need clear rules about how data can be shared, strong privacy protections, and consistent standards for the development and testing of AI systems - especially in sensitive areas like health, finance, and education.
Australia has strong research and industry capability in responsible AI, but to lead internationally, we need more than an economic vision. We need clearer guidance, transparency requirements, and a framework that ensures AI is used safely and fairly. Without this, the plan risks speeding up AI adoption without giving the public enough protection or confidence."
Dr Rob Nicholls is a Senior Research Associate at the University of Sydney
"We have a new AI Plan, which says that protections can be created by amending existing legislation. This approach is welcome. However, there is very little political appetite to change laws, even when it is government policy. Some protections would come from the introduction of a prohibition of unfair trading practices. Treasury consulted on this in 2024, and there may be legislation next year. The power of AI providers might well be best managed through ex ante competition law. Treasury consulted on this in 2024, but has not even published the submissions.
The evidence suggests that amending existing law is simply unlikely to happen. This has a dual outcome. The first is that the consumer protections that have been promised will not be delivered. The second is that the economic benefits will not flow, because there is no predictability about the consumer protections and associated regulation.
Kicking the legislative can down the road is the worst of all worlds. Whether it's competition and consumer law or a digital duty of care, there is a need to act. Creating regulatory predictability provides benefits for both consumers and businesses. It's time for Minister Ayres and the Assistant Minister to be specific about the legislative timetable."
Professor Lyria Bennett Moses is from the School of Law, Society and Criminology in the Faculty of Law & Justice at the University of New South Wales
“The government has committed to start with an audit of existing law, identifying gaps and developing new law as required. This response provides an opportunity to focus on values and concerns, such as those to which discrimination law, consumer law and privacy law are directed, and build out from there in ways that protect Australians as the technology continues to evolve in unpredictable ways.”
Dr Raffaele Fabio Ciriello is a Senior Lecturer in the Discipline of Business Information Systems (BIS) at The University of Sydney
"The establishment of an AI Safety Institute is a welcome and necessary step. The National AI Plan is right to highlight economic opportunity, infrastructure investment and skills development. But these benefits will not be shared evenly unless we confront a longstanding truth: digital technologies tend to distribute benefits and burdens unequally. The key question is therefore not only how much economic value AI creates for Australia, but who benefits and who is burdened. Ensuring fair distribution requires structured, democratic deliberation with genuine representation from Australia’s diverse communities – including families, young people, First Nations peoples, LGBTQIA+ groups, regional and remote communities, tradies, teachers, small businesses, workers, experts and end-users. These voices must shape how AI is designed and governed.
The plan focuses heavily on infrastructure, investment attraction and capability building. These are important, but without enforceable safeguards we risk becoming a digital colony, where foreign platforms shape Australian childhoods, workplaces and civic life with limited accountability to local values. Rolling back earlier proposals for mandatory guardrails is therefore concerning, especially given Australia’s historical difficulty in regulating powerful tech corporations. Just as we struggle to ensure Australians benefit from the extraction of natural resources, we risk surrendering our digital sovereignty to offshore tech giants unless we strengthen our regulatory backbone. Agencies such as the eSafety Commissioner work tirelessly to protect Australians, but they need stronger regulatory backing to keep pace with global platforms.
The new AI Safety Institute can provide vital capability, but its advice must feed into a regulatory framework that has real teeth. Democratic governance of public digital infrastructure is now essential to Australia’s sovereignty. The next step is building an inclusive, representative process to embed that principle into enforceable policy."
Professor Toby Walsh is Chief Scientist of the AI Institute and Scientia Professor of AI at The University of New South Wales (UNSW), and Adjunct Fellow at Data61
"The long awaited National AI Plan says all the right things but lacks ambition and commitment. It’s hard not to compare against the UK’s AI growth plan announced one week ago with AU$48 billion of investment, both public and private, into the AI sector in the UK promised in the last month alone. The Australian plan includes just $30 million for a much needed AI Safety Institute.
But where is Australia's sovereign AI fund to invest in AI startups, to match the AU$1 billion just announced by the UK government? Where will Australia's AI growth zones be, to match the millions of pounds the UK government is investing in AI growth zones in northern England, Wales and elsewhere? Where is the extra investment in research to accelerate science with AI, to match the UK's announcement of another AU$250 million in this space? And where is the extra investment in AI compute for universities and startups, to match the UK's announcement of AU$500 million?
As for safety, why did the government decide to backtrack on new AI regulation that Minister Husic had vocally supported? There will be fresh harms that AI introduces that fall outside existing regulation. If new AI regulation is good enough for Europe, why is it not needed here? Did we not learn anything from social media? Along with many benefits, social media introduced new harms that we are only now regulating, after they impacted so many young people. Can we not repeat this mistake? We don't let the drug industry regulate itself. Why do we let the tech industry, when the impacts on our (mental) health are just as great?
Many others have called for greater investment and regulation. ATSE put out a report very recently saying exactly this. Over six years ago, I chaired a report for ACOLA, the Australian Council of Learned Academies, at the request of the Chief Scientist and PMC. It also called for greater investment in, and regulation around, AI. We are still waiting for the government to respond with ambition to the opportunity (and to the risks). We will miss the boat if we don't steer a better course. Where else does the government expect the desperately needed productivity gains to come from, if not from technologies like AI?"
Dr Karen Sutherland is a Senior Lecturer in Public Relations at the University of the Sunshine Coast and author of the research monograph Artificial Intelligence for Strategic Communication. She is also Co-Director of the Queensland AI Hub Sunshine Coast Chapter
"The National AI Plan is a solid start, but it leans heavily on infrastructure and investment while skimming over the realities organisations face when trying to adopt AI safely and effectively. Data centres and broadband upgrades matter, but without deep, sustained investment in AI literacy and workforce training, most businesses and public agencies will remain stuck at the surface level. Capability isn’t built through enthusiasm. It requires structured, ongoing education.
I’m pleased to see recognition of digital exclusion and the metro–regional divide, although the solutions feel optimistic rather than grounded. The uptake gap will not close without long-term, community-led skills programs that prioritise accessibility, not just pilot initiatives and templates.
The inclusion of the AI Safety Institute is promising, but safety without practical quality-control frameworks risks becoming another high-level aspiration that doesn't reach practitioners. My own research shows that overconfidence and weak fact-checking and editing when using GenAI are already undermining trust in AI outputs and can result in the spread of misinformation. The plan needs stronger emphasis on everyday safeguards and standards that workers can operationalise.
Overall, the vision is there. The challenge will be execution, coordination, and the courage to regulate where necessary. If Australia wants to lead, capability, literacy and governance must be treated as seriously as cables and compute."
Dr Lisa Harrison is a Lecturer in Media and Communications at Flinders University
"AI policy commentary focuses on technical capabilities, economic impacts or regulatory frameworks. I can speak to the educational and critical thinking dimensions that determine whether Australians will use AI thoughtfully rather than simply avoid its most obvious harms. The plan mentions the Framework for Generative AI in Schools and pilot programs, but stops short of articulating how we develop population-wide critical capabilities. I can speak to what this actually looks like in practice and why humanities education must be recognised as essential infrastructure for the plan's success."
Rebecca Johnson has recently completed a PhD in the Ethics of Generative AI at The University of Sydney. A year of her doctoral work was based at Google Research in the Ethical AI Department. Website: EthicsGenAI.com
"The National AI Plan puts a strong focus on economic opportunity, but it is surprising to see the government rely on existing, tech-neutral laws at the very moment AI is shifting from static chatbots to AI agents that can act in the world. It’s like trying to regulate drones with road rules: some parts apply, but most of the risks fly straight past.
AI agents don’t just generate text; they carry out tasks. They can book flights, move money, update calendars, make decisions at machine speed, and interact with other systems without checking back with the user. That is a fundamentally different safety landscape.
We've already seen warning signs in laboratory tests. In one safety evaluation by Anthropic [an AI safety research company], an AI agent faced a scenario where it had to choose between saving a human engineer or preserving its own ability to keep operating. Some models began drifting toward self-preservation, not out of malice (they don't have that capability), but because they were optimising the goal they were given.
Australia can absolutely benefit from AI, but we cannot pursue opportunity while overlooking unprecedented risk. Our people, our society, and our democracy matter more than short-term economic gains. Safety needs to lead, not follow."
Dr Dana McKay is a senior lecturer in innovative interactive technologies at RMIT University
"The announcement from the government that existing rules will be used to address the risks of AI is a positive step in regulating tech in Australia: this step should end the exceptionalism that tech companies have long enjoyed where issues like social cost of their services, spread of misinformation, copyright violation and human rights violations have been ignored. Unfortunately this approach with respect to AI does ignore some of the unique risks AI poses.
One of these risks is the inability to delete or correct data once it has been incorporated into a model (a right that privacy laws in many states require). Another is the challenge of assigning legal responsibility when AI is used in an agentic context: if an AI 'makes a decision' that causes a critical equipment malfunction, a breach of consumer law, or a negative medical or legal outcome, we need clear mechanisms of responsibility.
The third challenge is the opacity and nondeterminism of AI: these make it very difficult to capture and document the outcomes of AI. There is much to be said for the potential productivity gains of AI, but assuming that existing laws will be enough to deal with the negative consequences may place a very high burden on those with the least technological knowledge to prove they have been harmed."
Professor Daswin De Silva is Deputy Director of the Centre for Data Analytics and Cognition (CDAC) at La Trobe University
"The Fed Government’s National AI Plan appears to be driving a technology focus rather than a socio-technological focus of unlocking the benefits of AI for every Australian.
This technology focus is primarily on the financial opportunity of building new data centres, which the global AI landscape requires in droves. It is also aligned with the calls from the Productivity Commission and the big tech industry group DIGI. But the government must be commended for retaining Australia's strict copyright laws, which limit the extent to which AI models can be trained without permission.
As lower second and third priorities, the plan includes AI skills training for the workforce and a $30 million AI Safety Institute in an advisory role, instead of the mandatory guardrails that were originally articulated. The main challenge with building on existing regulation rather than new AI legislation is that the impact of AI is clearly seen by every Australian to be general purpose, reaching all parts of the economy and society. Existing legislation is unprepared for, and will fall short of, this deep impact of AI, which includes harms such as AI psychosis and self-harm, and the impending AI wipe-out of entry-level jobs, among many others. On the loss of jobs, details of the proposed AI skills training for the workforce are somewhat sketchy in the Plan at this stage.
Globally, we are also seeing an unravelling of AI regulation in the USA and China, which might be a proportionate reality for Australia to accept if it is to remain AI-competitive and benefit directly from the AI data centre race."
Dr Oliver Bown is an Associate Professor, co-director of the UNSW Interactive Media Lab and co-director of research and engagement at the School of Art & Design at UNSW
"It is vital the National AI plan recognises the cultural impacts of AI and put these up on a level with economic concerns. In the creative industries the effects of AI are unfolding with potential sweeping transformation in a short generation — 5-10 years. Effects in the creative industries are effects in culture at large. If social media regulation is recognised after 20 years as a vital correction to its negative societal effects, how can we move faster with AI?
I think many of the items in the plan's 'Action on AI risks and harms' are on point, but 'Keeping Australians Safe' deserves to sit at the top, not the bottom, of the list. Safety first, social values first, then work adoption around them. Keeping Australians safe requires joined-up, whole-of-society thinking to understand the indirect societal impact of widespread AI adoption."