EXPERT REACTION: Australian federal government bans Chinese AI DeepSeek on devices

Publicly released:
Australia; NSW; VIC; QLD; SA; WA
CC-0. https://unsplash.com/photos/a-close-up-of-a-cell-phone-on-a-table-zQvPAtGxQh0

It's been reported that Australia's federal government has banned the new Chinese artificial intelligence (AI) app DeepSeek from all government-issued devices, and the NSW Government has imposed a similar state-level ban. Other states are reportedly considering bans. Home Affairs Minister Tony Burke has said the decision follows advice from intelligence agencies but is not impacted by DeepSeek’s country of origin, China. Below, Australian AI experts comment on the ban and the risks chatbots such as DeepSeek and ChatGPT can pose for the public.

Expert Reaction

These comments have been collated by the Science Media Centre to provide a variety of expert perspectives on this issue. Feel free to use these quotes in your stories. Views expressed are the personal opinions of the experts named. They do not represent the views of the SMC or any other organisation unless specifically stated.

Dr Erica Mealy is a Senior Lecturer in Computer Science at the University of the Sunshine Coast

The key issue is whose laws protect us and our data. In tech, we talk about data sovereignty: keeping Australians' data in Australia so that the companies using it are subject to Australian privacy and other laws.

DeepSeek’s own terms of use declare that they fall under the laws of the People's Republic of China. Not only does that mean that the Chinese government can request and be supplied any data we input, but the permissions we grant when installing the application are much broader: granting access to the file system means that, in theory, all files can be accessed or transmitted, and similarly for the camera or microphone.

The further concern is data bias – what is considered moral and ethical is very culturally based. In some cultures around the world, I would not, as a woman, be granted the voice that I am in Australia and many Western cultures. We have already seen censorship in obvious cases in DeepSeek, but more concerning are the insidious cases, or cases in which (as the late Queen famously said) “recollections may vary”.

We give these applications and technologies a place of influence in our lives and it is important we understand the context.

Last updated:  05 Feb 2025 2:38pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Dr Brendan Walker-Munro is a Senior Lecturer (Law) at Southern Cross University

The Australian government ban on DeepSeek on governmental devices follows similar moves around the world.

Not only is it unclear what training data and methodologies DeepSeek uses to generate its results, or how secure the AI model actually is, but the inputs to the platform will be retained and accessible by agencies of the Chinese security state.

Further, the universities in China that contributed to the DeepSeek project (such as Zhejiang University) are deeply embedded with Chinese military and security forces, raising further questions as to what uses the DeepSeek platform might have for state-sponsored programs of intelligence gathering or espionage.

China's civil-military fusion program has raised independence and autonomy concerns in academic and scientific research over the past twenty years - these concerns should be very much part of the DeepSeek discussion.

Last updated:  28 Apr 2025 12:14pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Professor Anton Van den Hengel is Director of the Centre for Augmented Reasoning, Australian Institute for Machine Learning at the University of Adelaide

It's important to distinguish between the DeepSeek app and the AI model behind it. The app has been banned by some governments, but the model hasn't.

The app has been banned because it comes from China, not because of anything particular that it does. TikTok was banned from government devices for the same reason.  

The DeepSeek AI model is open source, and this means that anyone can download it, and have a look at the code. If there was code in there doing unethical things then people would have found it by now. The fact that it is open source means that people can run their own copy, and they don't have to send their data to a multinational. DeepSeek has thus given everyone free access to some of the most powerful AI on the planet.

What DeepSeek means for Australia is that it's possible for a small company, or country, to compete with the best AI in the world through good science and engineering. It makes it even clearer than before that there's a place for us in the AI race.

Last updated:  05 Feb 2025 2:27pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Associate Professor Marina Zhang is an Associate Professor, Research (technology and innovation) in the Australia-China Relations Institute (ACRI) at University of Technology Sydney

Australia’s recent ban on DeepSeek highlights the growing complexities of digital sovereignty and the geopolitical implications of generative AI.

While AI presents immense potential, it also poses risks - from misinformation and bias to security vulnerabilities.

This raises unprecedented challenges: how to define digital sovereignty, Australia’s strategic position between the U.S. and China, and the necessity of global collaboration in AI research and governance.

As the value of data surges, with AI models like DeepSeek heavily reliant on vast datasets, the question is - to whom does data belong? Is it the citizens who generate it, the nation where the AI platform is headquartered, or the country that physically stores the servers? This question has become increasingly urgent.

In a borderless and intangible cyberspace, enforcing national security and digital sovereignty is difficult. As data becomes an increasingly valuable strategic asset, defining sovereignty over digital information is no longer straightforward.

AI development cannot thrive in isolation. While governments recognise the importance of regulatory frameworks to protect data flows as a form of sovereignty, maintaining global collaboration in research and governance is equally critical.

Australia’s ban reflects concerns that foreign AI models, particularly those linked to China, could pose risks such as data leaks, influence operations, or reliance on opaque systems that are difficult to audit.

However, Australia’s position between the U.S. and China adds another layer of complexity - balancing security concerns with the need to remain open to AI advancements.

My recent research on AI-related publications shows that China remains Australia’s largest AI research partner, underscoring the challenge of securing national interests while maintaining scientific collaboration in an increasingly fragmented digital landscape.

Last updated:  05 Feb 2025 1:19pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Mr Cory Alpert is a PhD Student studying the impact of AI on democracy from the Faculty of Arts at the University of Melbourne

Banning models like this from government devices is a useful idea - the Australian government is right to be protective of the data that's on any government device, and such actions are in the best interests of the public. I was in the White House when President Biden banned TikTok from government devices, and that was the right move then and still is. 

It's important that this ban not affect researchers across Australia who are testing these models to help advance Australia's own AI capacity.

Actions like this should be part of a broader strategy for Australia to be able to compete on the global AI landscape. The government should explain, as much as possible, the risks that DeepSeek poses, and help to build an alternative.

The government should also be wary of any steps to ban programs like this for the public. Merely banning Chinese-made platforms isn't the solution to advancing Australia's interests - Australia (and the West more broadly) needs models and technologies that can compete in the markets the Chinese are identifying, rather than simply banning outright things that people are clearly engaging with.

Last updated:  05 Feb 2025 1:17pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Dr Dana McKay is a senior lecturer in innovative interactive technologies at RMIT University

The federal government has banned DeepSeek on federal government devices in much the same way as they banned TikTok, and for similar reasons.

Data about users collected by Chinese companies is required by Chinese law to be stored within China, and is subject to inspection and use by the Chinese government not just in the event of a crime, but also for social and economic reasons.

DeepSeek's privacy policy explicitly says that they collect details of interactions with the system, including what people are looking for, and also their keystroke patterns. This information can pose a national security threat: what we look for signals a lot about the work we're doing, for example. Keystroke patterns are as individual as fingerprints, and would allow people to be identified across accounts and devices, potentially providing leverage over them if they were looking for information out of work that they didn't want disclosed at work.

If people are using installed versions of DeepSeek, this also potentially allows access to e.g. location information or other information on their devices.

It is likely that OpenAI, the owner of ChatGPT, is collecting similar information; and it is worth considering whether commercial entities having this information about our government activities is also problematic.

The key difference is that commercial entities based in the US can have their data practices legislated in Australia, whereas China-based companies will require diplomatic solutions to address data ownership and management.

Last updated:  05 Feb 2025 1:16pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Dr Mohiuddin Ahmed is a Senior Lecturer in the Computing and Security discipline in the School of Science at Edith Cowan University. He also coordinates the Postgraduate Cyber Security courses.

Like it or not, we live in the age of information warfare more than traditional warfare! The State with the maximum amount of information will likely have the upper hand.

One way to collect information and create a digital persona of an individual is through apps like DeepSeek.

State-based threat actors can exploit these apps to conduct cyber espionage and inject unconscious bias to establish their agenda. Australia's decision to ban DeepSeek is necessary for national security and sovereignty.

Last updated:  05 Feb 2025 1:14pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Professor Toby Walsh is Chief Scientist of the AI Institute and Scientia Professor of AI at The University of New South Wales (UNSW), and Adjunct Fellow at Data 61

You shouldn’t be typing sensitive information into any chatbot, whether it be ChatGPT in the US or DeepSeek in China. It’s good to see government on the front foot about these concerns.

I only wish they’d be more on the front foot about supporting AI here in Australia, not just restricting it. To put this in context, both the EU and India have just also announced plans to spend tens of millions of dollars making their own sovereign foundation models.

What you can take away from DeepSeek’s success in building a state-of-the-art model for just $6 million is that anyone can do it. We could be profiting from this wave too.

Last updated:  05 Feb 2025 12:38pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Dr Lisa Harrison is a Lecturer in Media and Communications at Flinders University

The Australian government's decision to ban DeepSeek from government devices highlights crucial issues around data privacy and security in the age of AI.

While DeepSeek and other generative AI tools offer impressive capabilities, they also collect vast amounts of user data - including keystroke patterns, queries, and potentially sensitive information. For government devices, this creates an unacceptable security risk.

What makes this particularly concerning is that companies providing AI services may be required to share collected data with their home governments, as highlighted by cybersecurity experts. In DeepSeek's case, data being stored on servers in mainland China raises legitimate security questions that deserve serious consideration.

The ban serves as an important reminder that while AI technology offers tremendous opportunities, we must remain vigilant about data protection and sovereignty. Organisations and individuals should carefully evaluate the privacy implications and data handling practices of AI tools before adoption, particularly when sensitive information is involved.

This isn't about restricting innovation, but rather ensuring proper safeguards are in place as these technologies become more prevalent in our society.

Last updated:  05 Feb 2025 12:37pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Professor George Buchanan is from the School of Computing Technology at RMIT University

There is a certainty that government data would leak to the Chinese government without a ban on DeepSeek.

Artificial intelligence tools such as DeepSeek copy everything they are exposed to. DeepSeek also tracks almost all activity on the computer including keystrokes, and bypasses many basic security measures.

DeepSeek is highly intrusive, and the data it gathers could readily be used to infer further sensitive data, such as where an employee works, even when that is intended to be secret.

Where government data is spread unconditionally there is an uncontrolled risk to national security and individual privacy: in the UK, a Welsh minister’s personal health data was identified in minutes when anonymised data was shared publicly.

However, this ban is weak as it focuses on a single tool. The rapid development of DeepSeek demonstrates the relative ease with which new rivals can now be established.

The threats to government security and personal privacy are general, and it is quite likely further competitors will emerge, each requiring a targeted ban.

A principled strategy is needed to secure the data that Australian citizens have paid for, and on which all government services depend. The security and control of all government data should be a matter of policy, not one-off actions.

Last updated:  05 Feb 2025 12:37pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Professor Shazia Sadiq is a researcher and educator in Computer Science at The University of Queensland

The move by the government to ban DeepSeek from government-issued devices is understandable and caution is warranted.

However, DeepSeek is not the first such technology to be launched and will not be the last. There is a bigger question at play here - that of AI literacy. Whether it is for public service, students, business or citizens, without adequate literacy on the risks and opportunities that these technologies present, we are pushing ourselves under a rock.

There is a potential for eroding confidence in Australian society towards one of the most powerful technologies of our time. The lack of engagement could lead to a new form of socio-economic divide that we term ‘AI privilege’, where the benefits could flow inequitably to different countries, sectors and communities.

Last updated:  05 Feb 2025 12:35pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Professor Uri Gal is a Professor of Business Information Systems at The University of Sydney Business School

Australia’s decision to ban DeepSeek for government agencies likely reflects concerns over data security and privacy risks.

Government agencies typically manage highly sensitive information, and there are worries that DeepSeek’s extensive collection of data - such as device details, usage metrics, and personal identifiers - could expose confidential information to vulnerabilities if accessed or stored outside Australian borders.

Although the open-source nature of the model offers transparency regarding its code, it does not guarantee that user data is handled solely within Australia or according to local privacy standards. This risk of cross-border data access is a key factor behind the ban.

Beyond government applications, generative AIs like DeepSeek pose additional risks to the public.

These include the potential spread of misinformation, unintentional biases in outputs, and the risk of privacy breaches if personal data is inadvertently exposed or misused.

Moreover, the scale and automation of such systems can lead to accountability challenges, which could complicate efforts to trace and rectify erroneous or harmful content.

The ban can thus be seen as a pre-emptive measure aimed at protecting national security and public trust until robust data protection safeguards are established.

Last updated:  05 Feb 2025 12:34pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Dr Jonathan Kummerfeld is a senior lecturer in the School of Computer Science in the Faculty of Engineering, University of Sydney

This is a prudent move because we have no control over how data given to the chatbot is used. At the same time, it is important to note that Australia can benefit from the scientific discoveries underpinning DeepSeek. We can use those innovations to build our own systems, without the risks of using the chatbot service.

Last updated:  05 Feb 2025 12:33pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Professor Suranga Seneviratne is a Senior Lecturer in Security at the School of Computer Science, The University of Sydney

Similar to the case with TikTok, this decision is not surprising given the potential risks. Large Language Models (LLMs) introduce well-known concerns, including data privacy, confidentiality, and the possibility of containing backdoors. Also, despite significant recent advancements, LLMs can still hallucinate, meaning their outputs must be verified in critical settings.

Beyond the AI itself, the app’s access to user data - such as the clipboard - can pose additional risks. A unique challenge arises from DeepSeek being open-source; while the original company controls the official web and app versions, anyone can host their own instance. This makes a complete ban challenging to enforce, though in this case, the risk may be considered low.

Last updated:  05 Feb 2025 12:32pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Professor Kai Riemer is a Professor of Information Technology and Organisation at the University of Sydney Business School and Director of Sydney Executive Plus - an executive education course on AI fluency

This generates a lot of interest because it’s AI and China, but it’s just prudent data security. It doesn’t matter whether it’s China or any other country: government data should not be housed offshore and outside of secure Australian systems. The real story here is how these open-source platforms appear to be reverse engineering the breakthroughs made by pioneers such as OpenAI and impacting the AI industry business model.

Last updated:  05 Feb 2025 12:30pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Dr Armin Chitizadeh is a Lecturer in the School of Computer Science at the University of Sydney

DeepSeek's introduction shook the world as a worthy opponent, raising important privacy and security concerns. The emergence of GenAI [generative artificial intelligence] tools has introduced many issues, and I'm glad that DeepSeek's arrival has created a wave of concern. People need to be more cautious when dealing with any GenAI tools. 
  
The first concern is that in the race to create the fastest and best AI, companies might cut corners on safety. There may be risks in how customer data is stored, with insufficient time spent on protecting it from malicious actors. 
  
The second concern is that people now tend to blindly trust AI-generated content. Users should not rely on AI-generated content without verifying its accuracy. AI can intentionally or unintentionally hallucinate or provide false answers, yet people trust it as if it comes from reliable sources. DeepSeek's introduction has rightly made people question how much they can trust AI-generated content. 
  
The third, and possibly most crucial concern, is that AI can easily reason and draw important conclusions from seemingly insignificant user data. Users might provide data to GenAI tools, assuming it's not valuable. However, AI can connect the dots and reach important conclusions. This newly inferred information is then in the hands of the GenAI tool owner, who has full control over its use and sale. Interestingly, what AI does is similar to a magic trick: it uses seemingly unimportant information to derive significant insights that astonish viewers.
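This "connecting the dots" risk is well documented in privacy research: attributes that look harmless on their own can jointly single a person out. The sketch below is an editorial illustration with invented data (the names and attributes are hypothetical), not a description of any DeepSeek mechanism; it shows how combining a few disclosed details collapses a crowd of candidates to one individual.

```python
# Toy linkage/inference illustration with invented data: each attribute
# alone is ambiguous, but combined attributes identify one person.

people = [
    {"name": "A", "postcode": "4000", "birth_year": 1980, "gender": "F"},
    {"name": "B", "postcode": "4000", "birth_year": 1980, "gender": "M"},
    {"name": "C", "postcode": "4000", "birth_year": 1991, "gender": "F"},
    {"name": "D", "postcode": "4067", "birth_year": 1980, "gender": "F"},
]

def matches(dataset, **attrs):
    """Return everyone consistent with the disclosed attributes."""
    return [p for p in dataset
            if all(p[k] == v for k, v in attrs.items())]

# Each attribute alone leaves several candidates...
print(len(matches(people, postcode="4000")))   # 3 candidates
print(len(matches(people, birth_year=1980)))   # 3 candidates

# ...but combined they narrow the list to exactly one person.
combined = matches(people, postcode="4000", birth_year=1980, gender="F")
print([p["name"] for p in combined])  # -> ['A']
```

The same logic is what makes seemingly trivial chatbot inputs valuable in aggregate: each prompt adds another attribute to the profile.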

Last updated:  05 Feb 2025 12:29pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Associate Professor Vitomir Kovanović is the Associate Director (Research Excellence) of the Centre for Change and Complexity in Learning (C3L), UniSA Education Futures

The move by the federal government is a reasonable and expected move to protect the safety of Australia and its citizens.

Just like the TikTok ban, the key issue is that use of such apps provides the company with a massive amount of data, some of which can be highly important and have significant political or economic implications.

Imagine what could happen if, for example, a minister used DeepSeek to draft a sensitive government document. Such data could easily be provided to the Chinese Government, which could use the information for its own benefit. While DeepSeek is banned from government-issued devices, I think the ban should be far more extensive and also cover use of such apps from any device and for any purpose.
 
I believe that we will see more and more actions like this, especially with AI apps becoming more comprehensive and accessing more and more data from our devices.

Imagine an AI assistant that can help manage your calendar or email; such an app would have to have access to your data, and thus, be a significant security hole. The only way to really protect the data would be for each government to develop their own AI system which would be then safe to use as no other foreign power would have access to this data.

Last updated:  05 Feb 2025 12:28pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Professor Haiqing Yu is a Professor of Media and Communication at RMIT University

The DeepSeek ban by the Australian government from all government-issued devices does not come as a surprise, particularly to long-time observers of global technology interdependence and the geopolitical tensions surrounding data governance.

It reflects Australia’s continued commitment to the US alliance and the broader efforts by Western governments to mitigate potential risks associated with foreign technology.

While the security concern is valid, it is important to note the positive response by the global AI industry, especially from less-resourced countries in the Global South, to the Chinese AI model and its innovative methods, which they are using to develop bespoke large language models (LLMs) at a more affordable cost.

The ban, and the large-scale cyber attack on DeepSeek just days ago, may impact the Chinese LLM’s reputation and market presence in the Global North, but the messaging may also slow global cooperation and hinder the development of a universalist ethos of technological progress.

Similar to the TikTok ban, the DeepSeek ban will not impact how the Australian public and industry engage with Chinese technologies and technological platforms. Its significance lies in virtue signalling.

Last updated:  05 Feb 2025 12:26pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Professor Ryan Ko is Chair and Director of UQ Cyber Security, an interdisciplinary research centre based at the Faculty of Engineering, Architecture, and Information Technology (EAIT), University of Queensland.

Until there is a deep analysis on the information leakage risks of this service, it is a prudent approach to ban the usage of this on devices accessing sensitive government information.

When you sign up for the service, you are agreeing to DeepSeek collecting information about your device and network connection during your access to their service. The information includes your device model, operating system, IP address, system language and even your keystroke patterns or rhythms. This provides a way for the company to profile you as a user. For example, keystroke patterns (the way you type) can identify the age group and right- or left-handedness of the user.
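To illustrate why keystroke timing can act as a behavioural fingerprint, here is a minimal editorial sketch with made-up timings (the typists and numbers are hypothetical, and real keystroke-dynamics systems use far richer features than this); it shows that even crude per-key-pair latencies can match an unlabelled typing sample to the right person.

```python
# Toy keystroke-dynamics sketch: profile typists by the average delay
# between consecutive keypresses (digraph latency), then match an
# unlabelled sample to the nearest profile.

from statistics import mean

def digraph_latencies(events):
    """events: list of (key, press_time_ms) pairs, in typing order.
    Returns mean latency for each consecutive key pair (digraph)."""
    pairs = {}
    for (k1, t1), (k2, t2) in zip(events, events[1:]):
        pairs.setdefault((k1, k2), []).append(t2 - t1)
    return {p: mean(ts) for p, ts in pairs.items()}

def distance(profile_a, profile_b):
    """Mean absolute latency difference over shared digraphs."""
    shared = profile_a.keys() & profile_b.keys()
    if not shared:
        return float("inf")
    return mean(abs(profile_a[d] - profile_b[d]) for d in shared)

# Two hypothetical typists typing "the" twice, with distinct rhythms.
alice = digraph_latencies([("t", 0), ("h", 90), ("e", 180),
                           ("t", 400), ("h", 495), ("e", 580)])
bob = digraph_latencies([("t", 0), ("h", 210), ("e", 430),
                         ("t", 900), ("h", 1115), ("e", 1330)])

# An unlabelled sample whose rhythm resembles Alice's.
sample = digraph_latencies([("t", 0), ("h", 92), ("e", 178)])
closest = min([("alice", alice), ("bob", bob)],
              key=lambda kv: distance(sample, kv[1]))[0]
print(closest)  # -> alice
```

Production systems add dwell times, error-correction habits and statistical models, but the core idea is the same: typing rhythm persists across accounts and devices.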

Last updated:  05 Feb 2025 12:22pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Dr Daswin De Silva is Deputy Director of the Centre for Data Analytics and Cognition (CDAC) at La Trobe University

Just as the social media app TikTok was banned from government devices in 2023, now DeepSeek, the hugely popular chatbot to rival ChatGPT, has been banned from federal agencies due to unacceptable risks to national security and foreign interference.

Although the government claims its country of origin was not a factor, it is implied in both bans. The risk can be summed up as all data associated with using the chatbot (prompts, responses, user profiles, access logs, access location, etc) is stored in China and can be used for training or other purposes in alignment with Chinese government laws.

Where DeepSeek is installed as an app on phones or computers, the chatbot can, in a natural conversational style, ask for elevated access to the host device or advise users to download potentially malicious software through links that appear within its responses to prompts. Besides risks to data and device security, the democratic values instilled in chatbot conversations could also be removed or manipulated to align with the interests of state actors.

By knocking out the low-cost competitor in light of potential security risks to federal agencies, the government is also sending a much broader indirect message to the public to shift back to US-based AI platforms such as ChatGPT and Copilot. It is timely to acknowledge the sovereign risk of AI: being dependent on non-Australian third parties for high-quality AI models and downstream applications.

At this stage, a reality check for the government is to cut through passive commentary and work towards expediting investment in Australian AI infrastructure. DeepSeek operates at a fraction of the cost of OpenAI's models, and most of the DeepSeek model is open source, so there's not much getting in the way of building and providing an Australian version of GenAI that is located on our lands and governed and protected by our laws.

Last updated:  05 Feb 2025 12:22pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Associate Professor Niusha Shafiabady is from the Department of Information Technology, Peter Faber Business School at Australian Catholic University

The biggest risk that GenAI tools, including DeepSeek, pose to governments and societies is the threat to privacy and security.

A software tool gets access to the computer it is installed on, and could potentially make unsolicited changes to the system. Using illicit tools could open a gateway to being hacked or having sensitive information stolen from a computer.

The other big risk is data privacy. When someone asks questions and enters information into these GenAI tools, the tool may record and use their information in unsolicited ways.

The implications of this are huge. As an example, beyond the privacy issue, these tools could unobtrusively influence the way people think by tailoring the information they provide to particular viewpoints. This would have potential implications for sensitive matters such as election outcomes and many more.

Last updated:  05 Feb 2025 12:20pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Professor Cori Stewart is Founder and CEO of the Advanced Robotics for Manufacturing Hub, a University of Queensland not-for-profit spin-out

DeepSeek will continue to be deployed in Australia. Without it, and the next LLM on the horizon, Australia will be even further behind in the global AI race. Banning is not a sustainable option. 

The Chinese Government’s ultimate ownership of the data does not mean we need to forgo the benefits of cutting-edge AI.

DeepSeek can still be used securely if hosted locally on personal computers in smaller versions or deployed within trusted cloud providers operating on Australian soil. Data does not have to be shared with China. This approach allows us to leverage new advancements without exposing sensitive data to foreign entities.

Looking ahead, the rapid and intense global competition in AI development highlights that Australia can build its own sovereign LLM. The cost and energy barriers are lowering. Developing an Australian-owned model, designed for our unique regulatory, security, and linguistic needs, will ensure that we remain competitive while safeguarding national interests.

By taking this step, we can secure a measure of AI leadership without compromising data integrity, reinforcing both our digital sovereignty and innovation potential.

Last updated:  05 Feb 2025 12:17pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.

Professor Dian Tjondronegoro is a Professor of Business Strategy and Innovation at Griffith University

This ban is a measured, precautionary step to ensure that emerging AI technologies meet rigorous security and trust standards before deployment on sensitive government systems.

Applications like DeepSeek, which rapidly exchange large volumes of data and employ complex generative algorithms, necessitate a high degree of confidence in their technical safeguards and the accountability of the organisations behind them.

Trust is built not only on how these apps function and protect data but also on their developers' and data custodians' reliability and transparency.

Government agencies must avoid inadvertent data leaks and mitigate security risks, just as they exercise caution when integrating new operating systems into mission-critical sectors such as defence and healthcare.

While this ban removes the app from government-issued devices, it does not preclude everyday Australians from exploring and experimenting with this innovative technology. This dual approach maintains robust security for sensitive operations while fostering public engagement with cutting-edge AI.

I encourage all users to adopt responsible practices and implement voluntary guardrails when interacting with AI systems. Our collective commitment to safeguarding personal data and maintaining privacy should remain paramount, ensuring that convenience does not come at the expense of security.

Last updated:  05 Feb 2025 12:12pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None declared.
Organisation/s: Australian Science Media Centre
Funder: N/A
Media Contact/s
Contact details are only visible to registered journalists.