Expert Reaction
These comments have been collated by the Science Media Centre to provide a variety of expert perspectives on this issue. Feel free to use these quotes in your stories. Views expressed are the personal opinions of the experts named. They do not represent the views of the SMC or any other organisation unless specifically stated.
Dr Rajesh Johnsam is a Senior Lecturer in the Flinders Business School at Flinders University
“Putting the cart before the horse? – National AI Plan”
"Capturing the economic opportunities and benefits of AI is critical, but those must result from safe AI investment, architecture and culture. With AI safety placed as Actions 7, 8 and 9 in the National AI Plan, and without mandatory guardrails, the question arises: is the message about its critical importance being lost?
While it is understandable that overly restrictive rules could slow readiness to embrace AI, the plan should clearly reinforce and prioritise the importance of AI safety within its earliest actions.
By prioritising the economic opportunities and benefits that AI brings, and by deciding to rely on existing technology-neutral legal frameworks, are we inadvertently sending the wrong message to global tech companies – that profit can come before Australians’ safety? We hope not.
The current National AI Plan demonstrates the Government’s serious intent, but for it to truly succeed, AI safety must be elevated from a lower-order action to a top priority. Sustained investment in the AI Safety Institute, clearer application of existing legal frameworks, and public and workforce training in responsible AI are critically important.
Otherwise, we risk putting the cart before the horse. Time will tell."
Dr Farida Akhtar is a Senior Lecturer in Finance in the Business School at Macquarie University
"The Plan’s growth-first emphasis matches what AI is expected to deliver: targeted efficiency gains rather than economy-wide step changes. In financial services, our research on robo-advice in wealth management shows systems automate routine portfolio tasks, scale advice and reduce behavioural biases, improving portfolios without major job loss. Generative-AI-enabled advisory is expected to dominate retail investment services by 2028, driving significant efficiency gains in finance, but Australia will need to accelerate adoption to keep pace.
Similar incremental effects are likely in data-heavy work, aligning with a Plan that prioritises infrastructure, investment and skills to “capture the opportunities” of AI. We argue that robotic process automation, platforms and data analytics can actually streamline work and enable roles across finance, education and health.
However, the Plan underplays how these gains rely on trust, literacy and equity. Our research highlights technostress, cybersecurity and privacy anxieties, and confusion about liability – especially for people with low financial and digital capability – which suppress sustained use. We also show that marginalised groups will face barriers to inclusion in financial decision-making using robo-advisory services.
AI-enabled monitoring can erode labour rights, and disadvantaged workers need protections and skills support. These risks require stronger regulation, yet the Plan remains centred on investment and guidance, so gains may be narrower and unequal."
Professor Chennupati Jagadish AC is President of the Australian Academy of Science
“The Australian Government’s national AI plan released today highlights how AI can benefit Australians, including through innovations made possible by Australian science.
The Academy welcomes the launch of an ‘AI Accelerator’ funding round of the Cooperative Research Centres program, which will provide many researchers a platform to translate their ideas into real-world products.
Advanced computing power (compute) infrastructure and data centres are critical to power AI research and its translation; however, the plan falls short in that its only concrete actions relate to data centres.
Australia does have a leading opportunity to be a hub for data centres – but AI capability is so much more than data centres. If Australia became a regional hub for advanced computing, this would generate huge economic and societal benefits, including opportunities for scientists and industries to innovate and compete globally.
The Academy acknowledges that the government is undertaking work to map and assess the compute infrastructure landscape. We urge the Government to turn that work into a 10-year strategy and investment plan for advanced computing and data after the Strategic Examination of Research and Development Independent Panel delivers its final report, expected later this month.
The plan highlights how AI can benefit Australians, from detecting lung cancer to improving education outcomes. All of this progress comes from science.
We must continue to support the fundamental research that underpins our next breakthroughs, to make sure Australians can enjoy the full benefits of AI advances by both creating new tools and adapting existing tools for Australian contexts.
The Academy supports the notion that ‘Australia can be a leader in AI innovation and a trusted exporter of AI computing power, not just a consumer of AI technologies built elsewhere.’
Next week the Academy will publish a series of discussion papers on how AI is changing science and research.”
Professor Babak Abedin is from the Business School at Macquarie University
The release of Australia’s National AI Plan is an important and overdue step toward treating AI as the transformative, strategic capability it has already become. Its three pillars – capturing opportunities, spreading benefits, and keeping Australians safe – offer a sensible foundation and build on earlier initiatives such as the AI Ethics Framework and the National AI Centre. But ambition must now translate into faster, more decisive action.
Australia has historically been an enthusiastic adopter of digital technologies and digital platforms, yet productivity gains have lagged. The plan’s focus on AI literacy is welcome, though past experience with digital education, particularly in schools, shows that rollout has been slow and fragmented. It is encouraging that the plan maintains a light-touch regulatory stance grounded in existing laws; the challenge will be sustaining this balance so that safety is strengthened without dampening innovation.
AI is rapidly becoming a sovereignty issue, and Australia cannot rely predominantly on a handful of major overseas firms. Strengthening domestic research, talent pipelines and compute capacity, alongside addressing the massive energy demands of future AI systems, is essential if the nation is to compete, not just consume, in the global AI economy.
Dr Dana Rezazadegan is a Lecturer in the Department of Computer Science and Software Engineering at Swinburne University of Technology
"For Australia to position itself at the vanguard of emerging technologies, it is essential to ensure the adeptness and flexibility of our regulatory frameworks for greater harmony with the rapid evolution of technology. This strategic approach is essential to fully capitalise on the potential advantages of AI, while safely and sustainably navigating the inherent risks that novel technologies bring to society and the economy. I believe the pillars of this strategic plan, so-called the Responsible AI framework, should lie in “Transparency” and “Risk-Oriented Regulation”.
Transparency in AI demands independent pre-release testing and clear disclosure of outcomes, affirming responsible practice through certifications. Similarly, applications must provide transparent data usage terms to foster informed consent. These efforts build public trust, uphold data privacy, and enable the ethical adoption of AI and automated decision making (ADM), supporting innovation and accountability.
Risk-oriented AI regulation, adopted in the US and EU, classifies systems by risk level, with strict controls on high-risk applications. Independent experts, such as academics active in the AI domain, should guide this process, ideally within public agencies. Coupled with ethical training for developers, this approach ensures responsible innovation while safeguarding societal benefit and supporting Australia's AI progress."
Professor Javaan Chahl is the DST Joint Chair of Sensor Systems at the University of South Australia
The decision to prioritize adoption over new standalone legislation reflects a pragmatic reality: defining 'Artificial Intelligence' with the precision required for a court of law is currently almost impossible without inadvertently capturing standard software engineering. Drawing a legal boundary around artificial intelligence for this jurisdiction may be practically impossible.
From an operational perspective, we cannot simply opt out of these tools while the rest of the world adopts them. The efficiencies gained in writing code, correlating vast document sets, and synthesizing data are too significant to ignore. The creativity now unlocked by movie-making and music tools can allow people to express themselves in new ways (amongst the spam).
There are risks; AI systems make mistakes and can be used to amplify harm. However, we must distinguish the tool from the intent. The societal environment that seeks to 'dig up dirt' or weaponize information predates the technology. AI operates within that environment, and may sometimes fuel it, but it did not create it. Blaming the tool for existing social behaviours distracts from the necessary work of integrating these systems for economic and operational competitiveness; AI also has an increasingly firm grasp of the underlying world model and data which will only improve with time. A society that struggles to maintain an agreed model of reality might find AI confronting at times. And yes, it is very likely that individuals from all professions will lose their jobs as a direct result of AI. That is unavoidable because AI increases productivity in many tasks. Disruption to the current order due to technology is as old as human history, and it has never come with an opt-out clause.
Professor Dali Kaafar is Executive Director at the Cyber Security Hub at Macquarie University as well as Director of Information Security and Privacy Research, a NSW Cyber Ambassador and Founder & CEO of Apate.ai
"The National AI Plan is a welcome step, but its effectiveness will depend on how well we balance innovation with safety and trust. Australia needs to accelerate AI adoption, yet we must do so in ways that protect citizens, safeguard data, and strengthen our national resilience.
The proposal to establish an Australian AI Safety Institute is encouraging. To be meaningful, it should focus on rigorous, science-based evaluation of AI systems, including how models use data, how they can fail, and how they can be protected from misuse. Safety cannot be an afterthought.
The Plan’s emphasis on digital skills and access is also critical. Our research shows that Australia’s AI capability gap is as much about skills and secure data access as it is about technology. Training our workforce and enabling safe use of AI across government and industry will determine whether we capture the economic and security benefits the Plan aims for.
Overall, this is the start of a national conversation we urgently need."
Professor Niloufer Selvadurai is the Director of Research & Innovation from Macquarie Law School at Macquarie University
"The National AI Plan is a valuable step in advancing the safe and innovative design, deployment and use of AI. Of particular merit is the commitment to a ‘whole-of-government’ approach. The proposed regulatory approach ensures that any new AI-specific laws will build upon existing legal and regulatory frameworks. Such an approach will help avoid the regulatory gaps and overlaps which commonly accompany law for emerging and evolving technologies. Additionally, the action plan on measures to mitigate risk and promote responsible business behaviour is detailed and practical. The question that remains unanswered is whether and to what extent any AI laws of general application will be introduced. The government’s sector-specific analysis, as well as its current emphasis on innovation and productivity growth, suggests this is unlikely at present. Given the complex and diverse applications of AI, I think this nuanced approach, premised on a regulatory gap-analysis, is to be welcomed."
Associate Professor Georg Grossmann is from the STEM Industrial AI Research Centre and Digital Health Innovation and Clinical Informatics (DHICI) Lab at The University of South Australia
"The anticipated National AI Plan is a first step towards a sustainable and sovereign future of AI in Australia. At the core of the plan - and what is on everybody's mind - is the ethical, trustworthy application of AI. Rather than talking about "Responsible AI" (where AI itself cannot be responsible), the plan talks about responsible methodologies and keeping the human in the loop which is very positive to see. What will be a challenge though is to bring together the different initiatives on responsible practices and integrate them into a coherent national plan. At the moment there are different initiative and people are often confused about which one to follow.
It is very positive to see funding directed towards universities, in particular the AI Accelerator round of the Cooperative Research Centres (CRC) program. CRC programs offer a unique model of collaboration and commercialisation between universities and industry, one we have experienced over the last 20+ years."
Professor Emeritus Joseph Davis is from the School of Computer Science at The University of Sydney
"The AI plan is a good starting point for working through the wide ranging opportunities and risks associated with AI and related technologies. It lays out some good proposals for protecting worker rights and on the broader issue of nurturing talent and building an AI-ready workforce.
The key focus on building large AI infrastructure through massive investment in data centres is not supported by adequate consideration of potential environmental impacts and sustainability.
The plan is weakest in its treatment of both known and emerging risks associated with AI. While the decision to establish the AI Safety Institute (AISI) to address safety concerns is to be welcomed, the plan is categorical in stating that any regulatory initiatives have to be “technology-neutral” and sit within existing legal frameworks."
Dr Sue Keay is Director of the AI Institute at the University of New South Wales
"Australia has finally released a National AI Plan, and while it’s nice to see all the right ingredients listed, once again, we’re stuck with a recipe that forgets the actual cooking. The plan rightly lists everything we should be doing, but fails to commit to any real investment or any sense of urgency.
Most striking is the government’s reluctance to put money on the table for the public compute capacity we so desperately need. After years of waiting for a national AI strategy, it’s beyond frustrating to discover we’re only now beginning to “assess the landscape of available compute infrastructure.” The gaps have been obvious for years, and other countries have been building compute at breakneck speed since at least 2018.
It’s hard not to feel like we’re turning up to a global race in thongs, asking where the starting line is, while everyone else is already sprinting to the finish line. If we’re serious about building a world-class AI ecosystem, we need more than the right words in a digital document. We need ambition, investment, and a sense of pace that matches reality.
The plan seems like a belated acknowledgement that Australia should probably start paying attention to this AI stuff. Let’s hope the next iteration shows the leadership and urgency that this moment demands."
Associate Professor Sean Arisian is from the La Trobe Business School at La Trobe University
"The National AI Plan correctly identifies energy and water as critical challenges, but the gap between acknowledgment and enforceable policy remains concerning. The plan notes data centres consumed approximately four terawatt hours in 2024 and projects this could triple by 2030 – yet Australia's water governance for this sector remains dangerously fragmented. Sydney Water has indicated data centre demand could reach 250 megalitres per day by 2035, potentially increasing total system demand by nearly 20 per cent. Without proactive measures such as recycled water use and closed-loop cooling, data centres could fundamentally reshape Sydney's water network within a decade.
While Australia has established robust federal energy efficiency standards through NABERS ratings, no equivalent national framework exists for water. This creates what I would term a 'credibility gap' in our sustainability strategy. Frameworks will apparently be developed around energy and water usage, but the plan lacks binding targets or timelines for water efficiency standards. Proven solutions exist. Quantum-based energy optimization, advanced cooling technologies and mandatory recycled water requirements could dramatically reduce consumption. The question is whether Australia will lead with enforceable standards or react only after water crises emerge. Climate-resilient AI infrastructure requires more than good intentions – it demands measurable, nationally consistent regulatory action."
Dr Timothy Koskie is a Post-Doctoral Associate with the Centre for AI Trust and Governance at The University of Sydney
"There is a conflict here where this is ostensibly a national AI plan, but the largest firms currently being used for AI-powered services, such as OpenAI and Microsoft, are based and founded in the US, and Australia’s role in or benefits from this large and rapid AI rollout is unclear.
Previous innovations delivered through digital platforms have enormously impacted, for instance, our news media environment by funnelling much of its revenue to overseas platform companies, so this is an area that needs considerable attention. It is doubly problematic because existing laws were demonstrably ineffective at protecting many vital sectors from those impacts, which raises the question of how they will cope if they are stretched further still to take on this new and potentially massive disruption.
The current US administration has put up strong political headwinds against regulatory action, even by US states, but this also flags that the rapid expansion of AI is not entirely organic and is instead already being shaped by governments’ decisions. Australia has a strong history of not only leading on such legislation but also finding strong international support for its initiatives."
Professor Joel Pearson is Deputy Director of the AI Institute and is a researcher in human readiness and neuroscience at The University of New South Wales
"Australia’s new National AI Plan is a solid start. It recognises the importance of skills, infrastructure and safety, and it shows the government is finally taking AI seriously at a systems level. But it almost completely misses the elephant in the room: people’s minds.
Right now, Australians are anxious about their future. Quietly. At scale. The uncertainty around jobs, education, identity and meaning, in an AI-saturated world, is already driving stress, disengagement and social unrest. You cannot build an “AI-enabled” nation on a population that feels psychologically unsafe about the future.
The Plan treats AI as a technical and economic challenge. In reality, it is also a human psychology challenge. We need a national AI change management layer that sits alongside the existing pillars. A framework that helps people understand what is happening to work, education, purpose and identity, gives them evidence-based tools to cope with uncertainty, and supports leaders to manage AI change without burning out their teams.
For example, we have developed a white paper outlining a National AI Change Management Plan that could plug straight into the current framework. Until we add that human layer, Australia’s AI strategy will remain structurally impressive but psychologically incomplete."
Dr Armin Chitizadeh is from the School of Computer Science at The University of Sydney
"The Federal Government has released its National AI Plan—a promising first step. It outlines investments and strategies to strengthen Australia’s position in artificial intelligence while aiming to ensure that all Australians benefit from its growth.
The plan includes some funding to address potential risks, but this area is not heavily prioritised. Many in the AI field follow the mindset of “build first, fix later”. Unfortunately, this does not work for AI. AI systems are complex, far more complex than most human-made designs, including skyscrapers or aircraft engines. With such complexity, fixing major issues after deployment can be extremely costly or even impossible. Think of it like building a house without proper inspection, only to discover later that it is sinking. Once that happens, repair becomes nearly impossible. The same applies to AI: if we develop it without careful planning and robust safety measures, we may not be able to make it safe afterwards.
That said, the challenge is not solely Australia’s to solve. AI safety requires global cooperation, much like climate action. When all parties work together, everyone benefits. But if some act responsibly while others do not, the cooperative ones may bear the cost while others profit in the short term. Unfortunately, we live in a world where international collaboration is weakening, and countries often prioritise national gains over shared safety. Australia could help lead by proposing an international framework—similar to the Paris Agreement on climate change—perhaps even a “Canberra Agreement” focused on AI risk mitigation."
Professor Mary-Anne Williams holds the Michael J Crouch Chair in Innovation in the Business School at The University of New South Wales
“Australia’s new National AI Plan represents an important step, but it will only succeed if we approach AI not only as a technology transformation, but as an innovation agenda focused on value creation and value capture. AI delivers impact when organisations use it to solve real problems, run rapid experiments, learn quickly, and iterate. This is the core of modern innovation practice, and without it, even the most advanced AI tools achieve very little.
Globally, more than 80% of innovations fail to deliver expected value, and AI initiatives are no different. The danger for Australia is that we invest heavily in AI infrastructure and technologies without cultivating the innovation cultures, strategies and capabilities needed to turn those technologies into economic and societal benefit.
To unlock AI’s potential across Australia’s critical domains, from healthcare and defence to agriculture, education and energy, we urgently need a new generation of talent that not only understands AI technically, but is deeply skilled in the art and science of innovation: problem framing, experimentation, evidence-driven decision-making, and designing for impact.
If the National AI Plan is to achieve its ambition, it must place innovation capability at its core. AI creates value only when people know how to innovate with it.”
Dr Mohiuddin Ahmed is a Senior Lecturer in the Computing and Security discipline in the School of Science at Edith Cowan University. He also coordinates the Postgraduate Cyber Security courses.
"An Australian national AI plan is much needed and a welcome step towards uplifting national security. Artificial Intelligence is a double-edged sword, and in particular, cyber-enabled crime has a direct connection. In the near future, it would be highly appreciated to see a synergy among different stakeholders and relevant frameworks, such as the Scams Prevention Framework. It is not just the government; citizens should also be held responsible for doing their due diligence to keep everyone safe in this age of unprecedented technological advancement."
Dr Emmanuelle Walkowiak is Vice-Chancellor's Senior Research Fellow in the College of Business and Law at RMIT University
"Australia's National AI Plan takes a neutral approach that gives firms more flexibility to experiment with AI integration. From a labour economics perspective, it avoids locking organisations into new rules and organisational practices before we fully understand how AI reshapes tasks, productivity, and skill demand. A neutral approach allows firms across all industries to experiment with AI, choose the models that best fit their workflows, capture productivity gains, potentially enhancing workforce adoption.
In addition, the establishment of the AI Safety Institute signals that the government recognises that monitoring AI risks, and the research capacity to do so, matters. So, Australia is choosing a “safety-with-innovation” road. In my research, I raise a key question: who is responsible for managing AI-related risks within workplaces?
Employers face choices about data governance, accuracy, accountability, and professional standards, while they also need to tackle AI risks such as cybersecurity, misinformation, and intellectual property rights in new ways. It is time now to clarify responsibilities and ensure organisations and workers have the capability to deploy AI productively, safely, and in ways that build long-term trust."
Professor Paul Salmon is co-director of the Centre for Human Factors and Sociotechnical Systems at the University of the Sunshine Coast
“The decision to move away from developing a comprehensive set of AI risk controls is disappointing. Unfortunately, existing frameworks, principles, and guidelines are not fit-for-purpose and cannot be relied upon to manage the risks associated with AI in different contexts. Whilst I acknowledge the need to capitalise on the potential benefits of AI, this should not come without appropriate management of the various risks associated with AI.”
Associate Professor Sophia Duan is Associate Dean of Research and Industry Engagement at La Trobe University
"The National AI Plan rightly recognises that AI is now an economic capability issue. Australia must accelerate investment in skills, infrastructure and industry adoption if we want to remain globally competitive.
While focusing on economic opportunity is important, the absence of new AI-specific legislation means Australia still needs clearer guardrails to manage high-risk AI. Trustworthy AI requires more than voluntary guidance.
AI capability-building must extend beyond major cities and large organisations. Regional, rural, and Indigenous communities need tailored support to ensure AI benefits are shared equitably.
AI transformation is as much about people as technology. Organisations need support to build AI literacy, trust, governance and change-readiness among employees.
Data access is the foundation of AI innovation, but it must be balanced with strong safeguards, privacy protections and community trust."
Dr David Tuffley is a Senior Lecturer in the School of Information and Communication Technology at Griffith University
"Australia's National AI Plan is strong on infrastructure commitments and has some impressive worker-focused rhetoric, but it lacks the institutional teeth required for genuine accountability. The plan identifies opportunities in data centres and regional partnerships, yet sidesteps the harder questions about market concentration and competition policy.
The establishment of an AI Safety Institute is a good move, but I see some critical gaps around its actual authority and enforcement mechanisms. There is vague language about "coordination" and "monitoring", but no clear lines of regulatory power or consequences for bad actors (that I can see). The accountability framework could do with more work: who makes the rules, and who enforces them?
The plan also lacks a robust competition policy. It celebrates attracting Microsoft and Amazon's billions without addressing how such massive foreign investment might entrench these oligarchs' control over Australia's digital infrastructure. We need a strategy for ensuring smaller players can compete on equitable terms.
The worker protections sound good on paper - consultation, reskilling, union engagement - but without binding mechanisms, they remain aspirational.
Overall, Australia's plan positions itself as 'responsible' while avoiding the regulatory friction that responsibility actually requires."
Dr Angela Kintominas (she/her) is a lecturer in the Faculty of Law and Justice at The University of New South Wales
"It is encouraging to see that the National AI Plan recognises that, as well as possible benefits to the labour market, we also need to respond to risks to workers such as increased surveillance, discrimination and bias (including in relation to hiring and dismissal/deactivation) and work intensification. These risks impact all Australian workers, but especially those in precarious forms of work, such as in the gig economy.
As well as the Plan's initiatives relating to training and skills development, attention is also needed on national workplace laws to protect workers' rights and strengthen collective labour rights through collective bargaining and enterprise agreement making. This includes transparency requirements around what worker data is collected; giving workers a right to request (and to contest and delete) data; prohibitions on abusive forms of data collection without worker consent; updates to the National Employment Standards; making AI a permitted matter in collective bargaining; and expanding the role and funding of the national labour inspectorate, the Fair Work Ombudsman, to investigate and proactively respond to emerging AI issues in the world of work."
Dr Melissa McCradden (she/her) is Deputy Director and THRF Fellow at the Australian Institute for Machine Learning at The University of Adelaide
"On 25 November, the Australian Government announced that it was creating an Australian Artificial Intelligence Safety Institute (AISI) to respond to AI-related risks and harms. I applaud the Government's decision to leverage existing legislation. As an ethicist and clinical researcher, I have found that augmenting the strengths of existing governance systems often provides the most effective and quickest path to protecting citizens.
AI safety is important, but we can't forget that AI doesn’t do anything by itself. To make AI safe, we need to focus on human decision-making and transparency, not just more technical 'fixes.' Safe decisions with AI are those which centre the wellbeing of Australians, mitigate conflicts of interest, involve consultation and partnership with Aboriginal knowledge holders, and are made by diverse teams including consumers.
We need to involve young people, in particular, in shaping Australia's future with AI. Our young people are not only navigating their most impactful years with these technologies, but they are also actively shaping new norms around technology use. No sustainable AI plan neglects the perspective of youth."
Dr Rebecca Marrone is a Senior Lecturer in The Centre for Change and Complexity in Learning at Adelaide University
"The National AI Plan presents a three-stage strategy centred on (1) capturing economic opportunities, (2) expanding access to AI benefits, and (3) ensuring the safety of Australians. Rather than introducing new legislation, it strengthens existing frameworks and prioritises the practical development and adoption of artificial intelligence across industry and government. A strong emphasis is placed on building sovereign capability, including the development of models trained on Australian data stored within Australia, and on strengthening a workforce that can use artificial intelligence safely, confidently and ethically. The focus on capability uplift across all sectors reflects the scale of cultural and technical change required for responsible adoption.
The plan recognises the importance of hearing from Australians, and this commitment is essential. One area that warrants particular attention is the inclusion of young people. As artificial intelligence will shape the learning, well-being and working lives of the next generation most profoundly, it is vital that their perspectives and experiences inform national priorities. My research with teachers and students shows that young people are highly attuned to both the promise and the risks of artificial intelligence. Ensuring that their voices are not only heard but actively integrated into policy design and dataset development will strengthen trust, improve relevance and support the safe adoption of these technologies across education and youth settings."
Associate Professor Niusha Shafiabady is from the Department of Information Technology, Peter Faber Business School at Australian Catholic University
"The National AI Plan highlights the economic opportunities of AI, but based on what we know so far, it leaves important gaps in how Australia manages the risks. Relying on existing laws sounds practical, but those laws were not designed for powerful modern AI systems. They don’t fully address issues like how AI models make decisions, how data is used, and who is responsible when things go wrong.
Opening up more public and private data could help businesses innovate, but it must be done safely. We need clear rules about how data can be shared, strong privacy protections, and consistent standards for the development and testing of AI systems - especially in sensitive areas like health, finance, and education.
Australia has strong research and industry capability in responsible AI, but to lead internationally, we need more than an economic vision. We need clearer guidance, transparency requirements, and a framework that ensures AI is used safely and fairly. Without this, the plan risks speeding up AI adoption without giving the public enough protection or confidence."
Dr Rob Nicholls is a Senior Research Associate at the University of Sydney
"We have a new AI Plan, which says that protections can be created by amending existing legislation. This approach is welcome. However, there is very little political appetite to change laws, even when it is government policy. Some protections would come from the introduction of a prohibition of unfair trading practices. Treasury consulted on this in 2024, and there may be legislation next year. The power of AI providers might well be best managed through ex ante competition law. Treasury consulted on this in 2024, but has not even published the submissions.
The evidence suggests that amending existing law is simply unlikely to happen. This has a dual outcome. The first is that the consumer protections that have been promised will not be delivered. The second is that the economic benefits will not flow, because there is no predictability about the consumer protections and associated regulation.
Kicking the legislative can down the road is the worst of all worlds. Whether it's competition and consumer law or a digital duty of care, there is a need to act. Creating regulatory predictability provides benefits for both consumers and businesses. It's time for Minister Ayres and the Assistant Minister to be specific about the legislative timetable."
Professor Lyria Bennett Moses is from the School of Law, Society and Criminology in the Faculty of Law & Justice at the University of New South Wales
“The government has committed to start with an audit of existing law, identifying gaps and developing new law as required. This response provides an opportunity to focus on values and concerns, such as those to which discrimination law, consumer law and privacy law are directed, and build out from there in ways that protect Australians as the technology continues to evolve in unpredictable ways.”
Dr Raffaele Fabio Ciriello is a Senior Lecturer in the Discipline of Business Information Systems (BIS) at The University of Sydney
"The establishment of an AI Safety Institute is a welcome and necessary step. The National AI Plan is right to highlight economic opportunity, infrastructure investment and skills development. But these benefits will not be shared evenly unless we confront a longstanding truth: digital technologies tend to distribute benefits and burdens unequally. The key question is therefore not only how much economic value AI creates for Australia, but who benefits and who is burdened. Ensuring fair distribution requires structured, democratic deliberation with genuine representation from Australia’s diverse communities – including families, young people, First Nations peoples, LGBTQIA+ groups, regional and remote communities, tradies, teachers, small businesses, workers, experts and end-users. These voices must shape how AI is designed and governed.
The plan focuses heavily on infrastructure, investment attraction and capability building. These are important, but without enforceable safeguards we risk becoming a digital colony, where foreign platforms shape Australian childhoods, workplaces and civic life with limited accountability to local values. Rolling back earlier proposals for mandatory guardrails is therefore concerning, especially given Australia’s historical difficulty in regulating powerful tech corporations. Just as we struggle to ensure Australians benefit from the extraction of natural resources, we risk surrendering our digital sovereignty to offshore tech giants unless we strengthen our regulatory backbone. Agencies such as the eSafety Commissioner work tirelessly to protect Australians, but they need stronger regulatory backing to keep pace with global platforms.
The new AI Safety Institute can provide vital capability, but its advice must feed into a regulatory framework that has real teeth. Democratic governance of public digital infrastructure is now essential to Australia’s sovereignty. The next step is building an inclusive, representative process to embed that principle into enforceable policy."
Professor Toby Walsh is Chief Scientist of the AI Institute and Scientia Professor of AI at The University of New South Wales (UNSW), and Adjunct Fellow at Data61
"The long awaited National AI Plan says all the right things but lacks ambition and commitment. It’s hard not to compare against the UK’s AI growth plan announced one week ago with AU$48 billion of investment, both public and private, into the AI sector in the UK promised in the last month alone. The Australian plan includes just $30 million for a much needed AI Safety Institute.
But where is Australia’s sovereign AI fund to invest in AI startups, to match the AU$1 billion just announced by the UK government? Where will Australia’s AI growth zones be, to match the millions of pounds the UK government is investing in AI growth zones in northern England, Wales and elsewhere? Where is the extra investment in research to accelerate science with AI, to match the UK’s announcement of another AU$250 million in this space? And where is the extra investment in AI compute for universities and startups, to match the UK’s announcement of AU$500 million?
As for safety, why did the government decide to backtrack on the new AI regulation that Minister Husic had vocally supported? There will be fresh harms that AI introduces, outside of existing regulation. If new AI regulation is good enough for Europe, why is it not needed here? Did we not learn anything from social media? Along with many benefits, social media introduced new harms that we are only now regulating, after they impacted so many young people. Can we not repeat this mistake? We don’t let the drug industry regulate itself. Why do we let the tech industry, when the impacts on our (mental) health are just as great?
Many others have called for greater investment and regulation. ATSE put out a report very recently saying exactly this. Over six years ago, at the request of the Chief Scientist and PMC, I chaired a report for ACOLA, the Australian Council of Learned Academies, which also called for greater investment in, and regulation around, AI. We are still waiting for the government to respond with ambition to the opportunity (and to the risks). We will miss the boat if we don’t steer a better course. Where else does the government expect the desperately needed productivity gains to come from, if not from technologies like AI?"
Dr Karen Sutherland is a Senior Lecturer in Public Relations at the University of the Sunshine Coast and author of the research monograph: Artificial Intelligence for Strategic Communication. She is also Co-Director of the Queensland AI Hub Sunshine Coast Chapter
"The National AI Plan is a solid start, but it leans heavily on infrastructure and investment while skimming over the realities organisations face when trying to adopt AI safely and effectively. Data centres and broadband upgrades matter, but without deep, sustained investment in AI literacy and workforce training, most businesses and public agencies will remain stuck at the surface level. Capability isn’t built through enthusiasm. It requires structured, ongoing education.
I’m pleased to see recognition of digital exclusion and the metro–regional divide, although the solutions feel optimistic rather than grounded. The uptake gap will not close without long-term, community-led skills programs that prioritise accessibility, not just pilot initiatives and templates.
The inclusion of the AI Safety Institute is promising, but safety without practical quality-control frameworks risks becoming another high-level aspiration that doesn’t reach practitioners. My own research shows that overconfidence and weak fact-checking and editing when using GenAI are already undermining trust in AI outputs and can result in the spread of misinformation. The plan needs stronger emphasis on everyday safeguards and standards that workers can operationalise.
Overall, the vision is there. The challenge will be execution, coordination, and the courage to regulate where necessary. If Australia wants to lead, capability, literacy and governance must be treated as seriously as cables and compute."
Dr Lisa Harrison is a Lecturer in Media and Communications at Flinders University
"AI policy commentary focuses on technical capabilities, economic impacts or regulatory frameworks. I can speak to the educational and critical thinking dimensions that determine whether Australians will use AI thoughtfully rather than simply avoid its most obvious harms. The plan mentions the Framework for Generative AI in Schools and pilot programs, but stops short of articulating how we develop population-wide critical capabilities. I can speak to what this actually looks like in practice and why humanities education must be recognised as essential infrastructure for the plan's success."
Rebecca Johnson has recently completed a PhD in the Ethics of Generative AI at The University of Sydney. A year of her doctoral work was based at Google Research in the Ethical AI Department. Website: EthicsGenAI.com
"The National AI Plan puts a strong focus on economic opportunity, but it is surprising to see the government rely on existing, tech-neutral laws at the very moment AI is shifting from static chatbots to AI agents that can act in the world. It’s like trying to regulate drones with road rules: some parts apply, but most of the risks fly straight past.
AI agents don’t just generate text; they carry out tasks. They can book flights, move money, update calendars, make decisions at machine speed, and interact with other systems without checking back with the user. That is a fundamentally different safety landscape.
We’ve already seen warning signs in laboratory tests. In one of the safety evaluations run by Anthropic [an AI safety research company], an AI agent faced a scenario where it had to choose between saving a human engineer and preserving its own ability to keep operating. Some models began drifting toward self-preservation, not out of malice (they don’t have that capability), but because they were optimising for the goal they were given.
Australia can absolutely benefit from AI, but we cannot pursue opportunity while overlooking unprecedented risk. Our people, our society, and our democracy matter more than short-term economic gains. Safety needs to lead, not follow."
Dr Dana McKay is a senior lecturer in innovative interactive technologies at RMIT University
"The announcement from the government that existing rules will be used to address the risks of AI is a positive step in regulating tech in Australia: this step should end the exceptionalism that tech companies have long enjoyed where issues like social cost of their services, spread of misinformation, copyright violation and human rights violations have been ignored. Unfortunately this approach with respect to AI does ignore some of the unique risks AI poses.
One of these risks is the inability to delete or correct data once it has been incorporated into a model (a right that privacy laws in many states require). Another is the challenge of assigning legal responsibility when AI is used in an agentic context: if an AI 'makes a decision' that causes a critical equipment malfunction, a breach of consumer law, or a negative medical or legal outcome, we need clear mechanisms of responsibility.
The third challenge is the opacity and nondeterminism of AI, which make it very difficult to capture and document AI outcomes. There is much to be said for the potential productivity gains of AI, but assuming that existing laws will be enough to deal with the negative consequences may place a very high burden on those with the least technological knowledge to prove they have been harmed.
Professor Daswin De Silva is Deputy Director of the Centre for Data Analytics and Cognition (CDAC) at La Trobe University
"The Fed Government’s National AI Plan appears to be driving a technology focus rather than a socio-technological focus of unlocking the benefits of AI for every Australian.
This technology focus is primarily on the financial opportunity of building new data centres, which the global AI landscape requires in droves. It also aligns with calls from the Productivity Commission and the big tech industry group DIGI. The government must be commended, though, on retaining Australia’s strict copyright laws, which limit the extent to which AI models can be trained without permission.
As lowered second and third priorities, the plan includes AI skills training for the workforce and a $30 million AI Safety Institute in an advisory role, instead of the mandatory guardrails that were originally articulated. The main challenge with building on existing regulation rather than new AI legislation is that every Australian can clearly see that AI is general-purpose and impactful across all parts of the economy and society. Existing legislation is unprepared for, and will fall short of, this deep impact, which includes harms such as AI psychosis and self-harm and the impending AI wipe-out of entry-level jobs, among others. On job losses, details of the proposed AI skills training for the workforce remain somewhat sketchy in the Plan at this stage.
Globally, we are also seeing an unravelling of AI regulation in the USA and China, which might be a proportionate reality for Australia if it is to remain AI-competitive and benefit directly from the AI data centre race."
Associate Professor Oliver Bown is from the Creative Technologies Research Lab at The University of New South Wales
"It is vital the National AI plan recognises the cultural impacts of AI and put these up on a level with economic concerns. In the creative industries the effects of AI are unfolding with potential sweeping transformation in a short generation — 5-10 years. Effects in the creative industries are effects in culture at large. If social media regulation is recognised after 20 years as a vital correction to its negative societal effects, how can we move faster with AI?
I think many of the items in the plan’s 'Action on AI risks and harms' are on point. But 'Keeping Australians Safe' deserves to sit at the top, not the bottom, of the list. Safety first, social values first, then work adoption around them. Keeping Australians safe requires joined-up, whole-of-society thinking to understand the indirect societal impact of widespread AI adoption."