EXPERT REACTION: New AI Chatbot DeepSeek shakes up the artificial intelligence industry

Publicly released:
Australia; NSW; VIC; QLD; SA
Photo by Steve Johnson on Unsplash

Chinese company DeepSeek has developed an AI chatbot to rival ChatGPT, reportedly using fewer and less-advanced chips than its American rival. Technology stocks fell dramatically overnight as a result of DeepSeek offering a competitor with performance comparable to the world’s best chatbots at seemingly a fraction of the cost. What does this new chatbot mean for the future of AI?

Expert Reaction

These comments have been collated by the Science Media Centre to provide a variety of expert perspectives on this issue. Feel free to use these quotes in your stories. Views expressed are the personal opinions of the experts named. They do not represent the views of the SMC or any other organisation unless specifically stated.

Dr Shaanan Cohney is a Senior Lecturer in the Faculty of Engineering and IT at the University of Melbourne

China’s new AI model, DeepSeek R1, is the latest move in the fierce competition among AI providers. What has caught investors’ attention isn’t so much China’s dominance in AI but rather its ability to develop such technology despite sanctions on the advanced chips typically required. While this has triggered a stock market decline in companies like NVIDIA, the leading producer of AI chips, the company valuations remain highly inflated above typical price fundamentals; DeepSeek hasn’t fundamentally changed the AI landscape.

Yes, the model approaches state-of-the-art performance, but its development aligns with what researchers expect—incremental yet significant advances. The real challenges in building competitive AI remain the same: access to high-quality data and sufficient computing power. The efficiency gains that make DeepSeek cheaper and faster mirror the progress U.S.-based firms like OpenAI have achieved in dramatically reducing AI costs over time.

Last updated:  03 Feb 2025 10:47am
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None

Dr Aaron Snoswell is a Senior Research Fellow in AI Accountability at the QUT Generative AI Lab, an associate Investigator in the ARC Centre of Excellence for Automated Decision Making and Society (ADM+S), and program co-lead for the QUT Centre For Data Science Responsible Data Science and AI Program

DeepSeek's open-source release this week finally sheds light on a secret Anthropic, OpenAI, and other AI behemoths have been keeping quiet about: how the new 'reasoning' class of language models operates under the hood. The breakthrough combines three key components: specialized fine-tuning for abstract reasoning tasks, sophisticated chain-of-thought prompting that enables models to reflect on and critique their own responses, and perhaps most significantly, Monte Carlo tree search algorithms that systematically explore multiple potential responses to user queries.

This last element proves particularly transformative (pun intended): models like DeepSeek effectively generate and evaluate numerous possible answers before presenting the optimal response to users. While this approach was pioneered by Google DeepMind in their groundbreaking AlphaGo system, its application to advanced language models represents a significant evolution. Though technical AI researchers have long hypothesized this underlying mechanism for 'reasoning' language models, DeepSeek's release provides the first concrete confirmation of these theories.

Last updated:  29 Jan 2025 1:39pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest Aaron Snoswell receives research grant funding from OpenAI in 2025.

Associate Professor Stan Karanasios is a researcher of Information Systems at the University of Queensland

The recent unveiling of DeepSeek has taken the technology sector by storm, promising to accelerate the pace of innovation in artificial intelligence. President Trump has described the launch as a "wake-up call" for U.S. firms, highlighting the need for increased competitiveness in the AI industry.

Remarkably, DeepSeek reported that it was constructed at a significantly lower cost than that of leading models such as those from OpenAI, attributing the savings to its minimal use of advanced chips.

This development not only demonstrates the accessibility of AI technology but also indicates a shift towards a more politically charged arena, as governments around the world are keen to secure leadership in this sector.

Last updated:  29 Jan 2025 1:36pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest

Uri Gal is Professor of Business Information Systems at The University of Sydney Business School

The widespread adoption of DeepSeek raises significant privacy concerns, given its operation under Chinese jurisdiction, which may obligate it to share sensitive user data with the CCP. This risk is especially serious considering DeepSeek’s extensive data collection practices, which include users’ IP addresses, keystroke patterns, device information, and text or audio input.

Additionally, DeepSeek’s Chinese origins could significantly influence how it presents controversial topics to a global audience. The AI’s responses may reflect state-approved narratives on sensitive issues like democracy, human rights, and territorial disputes. For example, when asked to describe the events that took place in Tiananmen Square in 1989, or who the Uyghurs are, or who the President of Taiwan is, the AI currently responds with “Sorry, that’s beyond my current scope. Let’s talk about something else”.

Last updated:  29 Jan 2025 1:35pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None

Professor Geoff Webb is from the Department of Data Science and Artificial Intelligence at Monash University

The emergence of DeepSeek is a significant moment in the AI revolution. Until now it has seemed that billion-dollar investments and access to the latest generation of specialised NVIDIA processors were prerequisites for developing state-of-the-art systems.

This effectively limited control to a small number of leading US-based tech corporations. Due to US embargoes on exporting the latest generation of NVIDIA processors, it also locked out China. 

DeepSeek claims to have developed a new Large Language Model, similar to ChatGPT or Llama, that rivals the state-of-the-art for a fraction of the cost using the less advanced NVIDIA processors that are currently available to China. If this is true, it means that the US tech sector no longer has exclusive control of the AI technologies, opening them to wider competition and reducing the prices they can charge for access to and use of their systems.

Looking beyond the implications for the stock market, current AI technologies are US-centric and embody US values and culture. This new development has the potential to create more diversity through the development of new AI systems. It also has the potential to make AI more accessible for researchers around the world both for developing new technologies and for applying them in diverse areas including healthcare.

Last updated:  28 Jan 2025 6:00pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest

Associate Professor Chang Xu is ARC Future Fellow and Associate Professor in Machine Learning and Computer Vision at the University of Sydney

The success of the DeepSeek model marks the start of the "Android era" for large models. Its open-source framework, unlike closed systems, is expected to inspire more companies to create innovative models for diverse tasks, breaking monopolies and fostering a more open and accessible AI ecosystem.

Looking ahead, the pursuit of AI will no longer focus solely on scaling up; scaling down will become equally critical. The challenge lies in successfully developing smaller yet more effective models.

Last updated:  28 Jan 2025 5:59pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None

Dr Samantha Newell is a Lecturer in Psychology at The University of Adelaide

It is important to recognise that no Generative AI Model is completely neutral. Our main Generative AI Models are trained on a body of work that largely reflects a Western (cis/white/male) perspective. This perspective mirrors and ultimately perpetuates biases about women and some minority groups. The selection of training data for these Models are extremely consequential, and provide the foundation for the narratives that users of these Models will consume.

So, if any models (intentionally, or not) embed within themselves biases (or push narratives), then these will be reflected in the Model’s output. Because Generative AI Models sound so convincing, users have been less likely to critically analyse the output (and consume this output as 'fact'). It is possible for future ‘bad actors’ to exploit users’ ‘trust’ in AI-generated output and push false narratives, or exploit Models to increase Soft Power. This is an issue that we should all be paying attention to, and we need to urgently prioritise the critical analysis of AI-generated output in Schools and Universities.

Last updated:  28 Jan 2025 5:57pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None

Professor Simon Lucey is Director of the Australian Institute for Machine Learning (AIML) at The University of Adelaide

The recent results of DeepSeek are truly disruptive for the field of AI. They have shaken the very foundation of the assumption that only the companies/countries with the fastest chips can dominate in AI. This also opens the door for middle-power countries like Australia to invest more in AI developed here at home, as the gap between the superpowers and the rest of the world has just got a lot narrower.

Last updated:  28 Jan 2025 5:55pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None

Dr Raffaele Ciriello is a Senior Lecturer in Business Information Systems at the University of Sydney

DeepSeek challenges the prevailing narrative in AI development, which prioritises scaling massive models at unsustainable environmental and financial costs. Unlike OpenAI’s approach, which relies on resource-heavy systems to pursue speculative artificial general intelligence (AGI), DeepSeek demonstrates that meaningful advancements can be achieved with far fewer resources. This has profound implications for global AI development, exposing the need for sustainability, transparency, and accountability in the industry. 
 
However, these efficiencies come with risks, particularly the ‘rebound effect’ (Jevons paradox), where reduced resource use can drive greater overall consumption. Policymakers must address this through proactive regulations that incentivise renewable energy use and limit resource-intensive applications.
 
Additionally, privacy and sovereignty must be central to any AI regulatory framework. Open-source models like DeepSeek offer a path to reducing reliance on monopolistic providers by enabling user control and leveraging the ‘many eyes principle’ (Linus’ Law) to address security vulnerabilities quickly.  
 
As a Chinese startup, DeepSeek has geopolitical implications, challenging US tech dominance amid ongoing disruptions presented by the Trump administration. Given the US approach of removing safety standards, reliance on voluntary compliance may be insufficient in high-stakes scenarios. Mandatory guardrails, focused on privacy protections and equitable access, are essential to ensuring AI serves societal and environmental values. Australia has an opportunity to lead by establishing sustainable and ethical AI governance.

Last updated:  28 Jan 2025 5:54pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None

Dr Armin Chitizadeh is a researcher in AI ethics at the University of Sydney

The DeepSeek model has sparked some fascinating developments in the AI landscape.
 
Firstly, it has introduced healthy competition, which benefits everyone. This increases choice for consumers and helps companies grow and innovate.
 
Secondly, it has demonstrated that generative AI can be achieved with fewer resources and lower energy consumption. In a world with limited resources, this is excellent news. However, people and investors may be overestimating its efficiency.
 
Thirdly, DeepSeek is showing that generative AI need not be hidden behind closed walls; open-sourcing can benefit the world. A prime example from the past is the WebKit engine project, to which large companies like Apple, Google, and Sony contributed. It was later used in Safari and Android browsers, helping shape the mobile world we see today.
 
As an AI researcher and enthusiast, I am thrilled by the introduction of DeepSeek. I hope the focus extends beyond creating the fastest or best AI to also prioritising the safest and most inclusive AI systems, because, by its nature, AI is optimised for the majority and can overlook minorities.

Last updated:  28 Jan 2025 5:53pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None

Dr Mike Seymour is a Senior Lecturer in the Business School at the University of Sydney. He has expertise in the application of AI in the entertainment industry, in R&D and film production.

The DeepSeek R1 open-source model has caused such a storm not simply because it is a better AI model, but more importantly because it achieves this with dramatically less technology, which naturally disrupts the companies selling and building AI tech.

Last updated:  28 Jan 2025 5:52pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None

Professor Albert Zomaya is the Peter Nicol Russell Chair of Computer Science in the Faculty of Engineering at the University of Sydney

Will DeepSeek be a DeepHit to other platforms? One thing is for sure, DeepSeek will lower the barriers to accessing expert-level information. It could level the playing field for smaller companies and independent researchers who don’t have the resources to hire teams of specialists. It will be interesting to watch how this reshapes the competitive landscape in the tech and research industries.

Last updated:  28 Jan 2025 5:51pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None

Dr Jason Pallant is a Senior Lecturer of Marketing Technology at RMIT University

Just this week, with Trump’s ‘Stargate Project’ announcement, we saw how much the US is investing in existing AI leaders such as OpenAI, and the strategy to create and own massive data centres for training and operating AI models.

The speed of DeepSeek’s emergence and the associated stock crashes for firms like Nvidia shows how fast-moving and competitive the AI space is. It also shows that the methods behind AI are still evolving, and they aren’t owned by any one company or country.

Where existing models use extensive amounts of data centres and computer chips to train and operate their model, DeepSeek have reportedly innovated the process of training through self-improvement and trial and error. The result is reportedly models on par with the best on the market, yet at significantly lower investment. These innovations, and the fact DeepSeek have made their models open source, present a broad implication: an opportunity for individual companies or brands to develop their own AI in ways not seen before. DeepSeek is significantly cheaper to interact with, and some parts are even open source, presenting a challenge to existing commercial models: why pay millions for someone else’s model when one is freely available?

This could have major implications for enterprise models like OpenAI which seek to license their platform for the development of AI products and services. At the same time, expect these operators to respond quickly and at scale, which ultimately will only further push the AI space forward.

Last updated:  28 Jan 2025 5:50pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None

Wolfgang Meyer is an Associate Professor in STEM at the University of South Australia

DeepSeek has demonstrated that large language models can learn to solve complex tasks with fewer computing resources and less data for training. As a result, future chatbots may respond faster and be trained with less but higher quality data, consume less energy, and may be less costly to develop.

While we don't yet fully understand the training process and capabilities of DeepSeek, its results may shape how future chatbots are developed. By making these language models publicly available, DeepSeek may enable other researchers to create AI applications and deliver innovations previously only possible in labs run by large companies such as Anthropic, Google, Meta, and OpenAI.

Last updated:  28 Jan 2025 5:49pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None

Professor Michael Blumenstein is Pro Vice Chancellor (Business Creation and Major Facilities) at the University of Technology Sydney

DeepSeek’s newest offering provides an opportunity for researchers and industry to reflect upon what areas to prioritise in the field of AI and how that will impact on the global AI race. DeepSeek purports to have similar (or in some cases better) accuracy than ChatGPT, but its creators say it’s also more efficient, cheaper and requires less resources.

This has arisen partly because of the export controls the Biden administration implemented on advanced computer chips and hardware. The DeepSeek developers have found a way to optimise the algorithms so that less computation is required, and therefore the model doesn’t need the same expansive amount of hardware.

This in turn has reduced its cost whilst maintaining quality of output (although critics are concerned that some of DeepSeek’s responses are being censored, particularly in reference to controversial topics in China). Once the open source code is properly scrutinised, and this claim can be validated, then there will be a massive shift towards looking at revolutionising the software of AI systems rather than primarily looking to extend the hardware investment and capabilities to meet the resource-intensive demands of AI.

This in turn could shift the entire market for AI and would be a groundbreaking game-changer for future AI developments globally.

Last updated:  28 Jan 2025 5:49pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None

Dr Daswin De Silva is Deputy Director of the Centre for Data Analytics and Cognition (CDAC) at La Trobe University

China has effectively created a 'shortcut' in the global Gen AI race by developing and releasing DeepSeek without any access to Nvidia AI hardware (due to the US export ban) and allegedly at a fraction of the cost and time. Compared to US-based models like ChatGPT, DeepSeek performs equally well on chat-based queries, reasoning, coding and math, although it underperforms on multimodal (image-based, text-to-image) tasks. The trifecta of success factors of the current AI boom are 1) algorithms, 2) processors (hardware) and 3) datasets. The algorithms (Transformer, RLHF etc.) are already open source, so it is quite likely DeepSeek trained on the same datasets (sold by third-party data vendors, and largely in English language) as those used by OpenAI, Google etc, to achieve equivalent performance.

In the global AI race, in addition to the hardware ban, we might see a dataset export ban from the Trump administration. For organisations, the Chinese origins of the model will be a hefty challenge in terms of data privacy and integration with pre-existing technology architectures, so we will not see Microsoft Copilot being switched off any time soon. However, for consumers, DeepSeek opens up the market for more equitable and free access to high-quality AI, which has been a major roadblock since 2023.

It is also encouraging for other AI developers: as this model is open source, it indicates a decoupling of good AI from overpriced Nvidia hardware. It is also a wake-up call for governments to invest in alternative computing infrastructure, such as neuromorphic, quantum and in-memory computing. The human brain operates on far less, at 60 watts of energy, to generate and sustain human intelligence, and this success story further confirms size is not all that matters in the global AI race.

Last updated:  28 Jan 2025 5:46pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest

Associate Professor Niusha Shafiabady is from the Department of Information Technology, Peter Faber Business School at Australian Catholic University

ChatGPT is a generative AI tool. These models are built on complex neural network architectures. As we have all seen, it produces content for us and responds to the questions we ask it. Basically, a generative AI tool like ChatGPT is created based on something we call the ‘Transformer’ architecture.

DeepSeek works on a different architecture, called ‘Mixture-of-Experts’. When you ask ChatGPT a question, it is like opening the water tap to irrigate a whole field through all the waterways you have dug. Even if someone has watered some parts of your field before, a large amount of water still flows there, and it is wasted on watering the parts that were irrigated before. The water in this analogy equals the energy used to answer our questions.

On the other hand, using DeepSeek is like knowing which waterways lead to the parts of the field that need irrigation. Those are the parts relevant to answering the questions you have asked. It works on a pruned architecture and doesn’t waste its resources on irrelevant work, traversing irrelevant (already irrigated) paths and neurons in the neural networks used for processing the information. That is why it is more energy-efficient.

One thing we should remember is that when using these tools, we allow the data and content we enter as questions to be accessed and potentially used for different purposes. That is when things get tricky, if the technology falls into the wrong hands.

Last updated:  28 Jan 2025 5:46pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest

Dr Fernando Marmolejo-Ramos is a Lecturer in statistics and research methods at Flinders University

Imagine a world where AI chatbots like ChatGPT, Gemini, and Claude are the rockstars of tech, dazzling us with their ability to solve problems, write code, and even chat like humans. But here’s the catch: these large language models (LLMs) are incredibly power-hungry, expensive to run, and constantly chasing the ultimate goal—artificial general intelligence (AGI), where machines think like humans.

To measure their progress, AI companies test these models on everything from math puzzles to reasoning challenges. Most LLMs ace these tests, but a new, ultra-tough benchmark called Humanity’s Last Exam (HLE)—a grueling set of 3,000 questions across multiple subjects—has revealed their limitations.

Enter DeepSeek, a new LLM from China that not only outperformed giants like ChatGPT and Gemini on the HLE but also does so at a fraction of the cost. This has sent shockwaves through the AI world. For everyday folks, this is great news: it means more affordable, smarter AI tools to help with daily tasks, pressure on big tech to innovate, and a push to redefine what truly makes AI “intelligent”.

DeepSeek isn’t just a new player—it’s a game-changer.

Last updated:  29 Jan 2025 4:57pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest

Dr Saeed Rehman is Senior Lecturer in Cybersecurity and Networking, College of Science and Engineering, at Flinders University 

DeepSeek has produced a cost-effective and less power-hungry large language model since its funding was announced in March 2023, at a fraction of what OpenAI has spent over the past few years. The buzz is created by DeepSeek's V3 model, which is open source, available to the community to run offline, and able to be independently verified. However, from a privacy and security perspective, DeepSeek's terms and conditions are similar to those of other AI providers.
 
The input data will be accessible to humans for training purposes. The computing engine is based in China, operated by Hangzhou DeepSeek Artificial Intelligence Co., Ltd., and Beijing DeepSeek Artificial Intelligence Co., Ltd. The company collects extensive information, including device IDs, locations, and user inputs. This information is used not only to improve the model but also for "legal obligations, or as necessary to perform tasks in the public interest, or to protect the vital interests of our users and other people." Additionally, the stored information is kept on "secure servers located in the People's Republic of China".
 
DeepSeek's cost efficiency is praiseworthy, but the privacy implications of its data collection would raise significant concerns. The fact that user data is stored on servers in China, a country known for its stringent data control policies, could be troubling for users (and governments) wary of their data privacy. This situation may evoke similar concerns to those raised for TikTok, where data privacy and security have been hotly debated and led to bans in some Western countries.

Last updated:  28 Jan 2025 5:44pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest None

Associate Professor Vitomir Kovanović is the Associate Director (Research Excellence) of the Centre for Change and Complexity in Learning (C3L), UniSA Education Futures

DeepSeek showed that high-performing AI does not necessarily require computing power as high as previously thought, especially if more focus is put on improving the AI model design.

This is fantastic and I believe that this would shift the focus from “the bigger the better” race to new advancements in the AI model design. It also opens up new possibilities for broader adoption of AI without the need for expensive subscriptions to third-party services and without compromising the privacy and security of sensitive user data.

In the education space, for example, it would allow using AI engines locally, without sending student data to external AI providers such as OpenAI. Frank Lloyd Wright famously said, “The human race built most nobly when limitations were greatest.” I think this perfectly illustrates the situation with DeepSeek, and how they were able to make significant advances in the AI race, despite the limitations caused by the US government.

Last updated:  28 Jan 2025 5:43pm
Contact information
Contact details are only visible to registered journalists.
Declared conflicts of interest
Organisation/s: Australian Science Media Centre
Funder: N/A
Media Contact/s
Contact details are only visible to registered journalists.