Expert Reaction
These comments have been collated by the Science Media Centre to provide a variety of expert perspectives on this issue. Feel free to use these quotes in your stories. Views expressed are the personal opinions of the experts named. They do not represent the views of the SMC or any other organisation unless specifically stated.
Prof Daniel Angus FQA is Professor of Digital Communication in the QUT School of Communication, Director of QUT’s Digital Media Research Centre, and Chief Investigator in the ARC Centre of Excellence for Automated Decision-Making and Society
"The Age Assurance Technology Trial report insists that “age assurance can be done in Australia – privately, efficiently and effectively” with “no substantial technological limitations”. Yet beneath this overly upbeat framing lie glaring problems that the report quietly acknowledges but never fully confronts. It admits that 'unnecessary data retention may occur in apparent anticipation of future regulatory needs', a direct contradiction of data minimisation principles, and an open door to scope creep and privacy risks for all Australians. Error rates are likewise buried, with the best-performing systems showing 3.07% false negative and 2.95% false positive rates. In real terms, that means tens of thousands of legitimate Australian users would be wrongly locked out of digital services. The report never grapples with these consequences, reducing them instead to abstract accuracy figures. Weaknesses are spun into positives, failures become 'opportunities for technological improvement', and a lack of universal effectiveness is reframed as evidence of a 'dynamic, innovative and evolving' sector. The result is a biased document that nauseatingly reassures rather than deeply scrutinises unresolved issues in accuracy, proportionality, and privacy. Far from proving that age assurance is ready, the detail shows it remains deeply problematic.
Dr Dana McKay is a senior lecturer in innovative interactive technologies at RMIT University
" A recent report indicates that privacy-sensitive age verification is achievable and feasible within Australia, in advance of the social media ban for children. The three main approaches to age verification are estimation, which uses interactions, facial features or gestures to estimate age, inference and verification, which match social media profiles with 'known facts', and parental control and consent. Each of these approaches has major problems: estimation has an average error of just over a year, meaning half of all mistakes made are out by more than a year. There is also no indication in the report that mistakes aren't made more for certain groups, e.g. young people of colour—indeed, the report indicates it is impossible to eliminate bias.
Linking a social media profile to known facts or government records relies on that social media profile having the same details as those facts, including name and birthdate. There are many good reasons people, including children, may wish to hide their real identity from social media companies and those they interact with online, including concerns about the data stewardship practices of social media companies, or the need to flee domestic and family violence.
Social media companies insisting on a real name is so problematic that the German government banned it well over 10 years ago. Parental control and consent work well where parents and children have an open and honest relationship (which is not facilitated by any of the proposed solutions), but can be actively dangerous for the up to 15% of children who experience domestic and family violence in the home and may not be able to trust their parents. The report notes that these solutions can be stacked, but even this stacking puts extremely vulnerable children and young people - those experiencing family violence, or who may be LGBTQIA+, for example - at greater risk of being unable to access the benefits that social media can offer. Many young people will do fine with the mechanisms outlined in this report, but even with the best will in the world, those who most need the external support and connection offered by social media are also those most likely to be denied it by these mechanisms."
Dr Julia Coffey is an Associate Professor of Sociology in the School of Humanities, Creative Industries and Social Sciences at the University of Newcastle
"The Age Assurance Technology Trial report states it is possible to implement age restriction technology in Australia, as planned in the government’s social media ban. However, the report’s content shows just how complex, difficult, and problematic this ban will be to implement. The report is clear that age estimation carries a large margin for error, commonly under- or overestimating age by 18 months. It is even less precise for girls, First Nations people, and lower socioeconomic groups. This means other methods need to be used in tandem with facial verification tools: using data provided by third parties such as banks, schools, or healthcare providers, which may not be kept private by a platform; or guessing a person's age based on their online activity, which is also imprecise. When young people don’t have a government ID like a passport, which most won’t, they will be forced to use face scanning if they want to use social media.
Australia's social media ban for teens was always impractical, and missed the point – which is the need to find ways to make platforms responsible for the harms that occur through using them. This report only provides more evidence that the ban will be imprecise, problematic, and ultimately untenable."
Dr Jake Renzella is the Head of the Computing and Education Research Group, Director of Studies (Computer Science) and Director of Digital Infrastructure Strategy at the University of New South Wales
"It's encouraging to see the trial's thorough, ethics-first approach. A layered ‘successive validation’ model is particularly sensible. In this approach, a simple, low-friction check first (like facial age estimation) is used, and only escalates to a full ID verification when absolutely necessary, say, to ensure that host content is not safe for children.
However, the fundamental challenge here isn't the technology, but the new risks we introduce by outsourcing this critical function. The report proposes adding dozens of third-party providers into the process, each becoming a potential point of failure for data security. The report itself flags a significant concern, finding that some providers evaluated in the trial are already retaining 'full biometric or document data for all users' unnecessarily, perhaps to pre-empt future regulatory needs. This is seriously concerning.
While the best practice suggested by the commission is to not store this data at all, compliance is left to the whims of whichever tech partner a service chooses. Assurances that Australians’ identity documents aren't being stockpiled in new databases could drift over time, creating serious risks for identity theft and privacy breaches."
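For readers wanting to see the shape of that model, here is a minimal sketch in Python of the 'successive validation' cascade Dr Renzella describes; the provider calls, confidence threshold, and two-year buffer are hypothetical placeholders, not systems or parameters evaluated in the trial:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical parameters for illustration; not values from the trial.
ESTIMATE_CONFIDENCE_THRESHOLD = 0.90  # minimum model confidence to act on an estimate
MINIMUM_AGE = 16                      # the legislated age threshold
BUFFER_YEARS = 2                      # margin reflecting typical estimation error

@dataclass
class AgeEstimate:
    age: float         # estimated age in years
    confidence: float  # model confidence in [0, 1]

def estimate_age_from_face(user_id: str) -> AgeEstimate:
    """Placeholder for a low-friction facial age-estimation provider call."""
    raise NotImplementedError

def verify_age_with_id(user_id: str) -> Optional[int]:
    """Placeholder for a document-based verification provider call.
    Returns the verified age, or None if the user cannot or will not provide ID."""
    raise NotImplementedError

def successive_validation(user_id: str) -> bool:
    """Run the low-friction check first; escalate to ID verification
    only when the estimate is ambiguous or low-confidence."""
    estimate = estimate_age_from_face(user_id)

    if estimate.confidence >= ESTIMATE_CONFIDENCE_THRESHOLD:
        # Clear pass: confidently well above the threshold, no escalation needed.
        if estimate.age >= MINIMUM_AGE + BUFFER_YEARS:
            return True
        # Clear fail: confidently well below the threshold.
        if estimate.age < MINIMUM_AGE - BUFFER_YEARS:
            return False

    # Ambiguous band or low confidence: escalate to document-based verification.
    verified_age = verify_age_with_id(user_id)
    return verified_age is not None and verified_age >= MINIMUM_AGE
```

The buffer around the age threshold reflects the trial's finding that estimation errors commonly exceed a year; where those cut-offs sit would be a policy choice rather than a technical given.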
Dr Justine Humphry is a Senior Lecturer in Digital Cultures at The University of Sydney
"The final report of the Age Assurance Technology Trial reports that age assurance can be done in Australia privately, efficiently and effectively. However, there are some significant areas of concern raised, particularly related to the variations in the results of the three main methods used in the trial (age-verification, age-estimation and age-inference).
These variations suggest that estimation and inference technologies can be inaccurate for sections of the population, especially older adults, people who are non-Caucasian, and those who are female-presenting. The results also showed potential risks to users’ data privacy, especially with regard to over-retention of data, digital tracking and cross-service data reuse.
While the trial results provide a first-of-its-kind evidence base in an Australian context and are helpful for highlighting these areas of concern, if platform companies introduce age-restriction technologies based on these methods without addressing these problems, there is a strong likelihood that they will compound existing barriers and inequalities of access and use."
Professor Tama Leaver (he/him) is a Professor of Internet Studies at Curtin University and a Chief Investigator in the ARC Centre of Excellence for the Digital Child
"Despite the seemingly reassuring conclusions, the Age Assurance Technology Trial clearly demonstrates that the only way age can be clearly determined by technology companies is to use government issued ID. Inevitably, this leads to privacy risks.
Even the best age estimation tools that don't require ID regularly get age wrong by more than a year, and show more inconsistencies for anyone who isn't white-skinned and male. Bias is baked into how these tools work, which is even more concerning when these tools are positioned as the only remaining option for people who don't already have government IDs, as is the case for many children and young people.
The technical thresholds the trial used to determine whether a tool was viable seem completely at odds with the expectations of ordinary Australians online. Australians want to know whether these tools work properly, and properly means working every time. The evidence in this report shows that these tools simply aren't reliable.
The amount of inaccuracy, the need for multiple overlapping tools, and the seeming inevitability of falling back on government-issued ID for any case where identity verification must be 100% accurate mean that this trial simply concluded what was already known: age assurance and age inference tools are too immature to be used reliably."
Dr David Tuffley is a senior lecturer in the School of Information and Communication Technology at Griffith University
"The devil is in the detail.
Australia’s age assurance technology trial shows age verification for social media is feasible, but complex. The report highlights that there is no single solution, with more than 48 providers and 60 technologies assessed, from AI age estimation to ID checks. Each comes with privacy and accuracy challenges, especially for diverse communities and younger teens.
The systems tested can work reliably, but no method is flawless. Technologies like facial analysis can misjudge ages by years, and some require invasive data collection. As an ethicist, I would be concerned about excessive retention of personal data. Some providers over-collect data, anticipating future regulation. Coordination among major tech firms is crucial, as many solutions depend on their willingness to cooperate.
Challenges include possible circumvention via VPNs, and the exclusion of games or AI platforms from regulation. Any system must evolve constantly. Real safety depends on tackling harmful content at the source."
Dr Alexia Maddox is a Senior Lecturer in Pedagogy and Education Futures at La Trobe University
"The Core Policy Contradiction
The government's own age assurance technology trial reveals a fundamental contradiction at the heart of Australia's social media ban. While document-based age verification achieves very high accuracy and is the most reliable method available, the legislation explicitly prohibits platforms from using this technology. Instead, platforms must rely on age estimation systems that provide only 'probability-based classification', with average errors of 1.3-1.5 years, which is particularly problematic for teenagers whose faces change dramatically during puberty. Australia has essentially banned the most accurate technology while demanding accurate outcomes.
The Parental Choice Elimination
The trial found that both parental control and consent systems 'can be done and can be effective,' providing sophisticated tools for graduated risk management and family-centred digital literacy education. However, Australia's legislation eliminates parental choice entirely. Parents have no override options regardless of their judgment about their child's readiness. This removes the educational pathway where families gradually introduce social media with supervision and safety conversations, forcing a cliff-edge transition from total prohibition to unrestricted access at 16.
The Policy-Evidence Inversion
Australia implemented this backwards, legislating first and evaluating feasibility later. The government passed the world's first social media ban in November 2024, then ran the technology trial, with results delivered months after the law was enacted. The trial explicitly states that it is 'not within scope' to make policy recommendations about whether these technologies should be implemented. This procedural reversal helps explain the disconnect between policy assumptions and the technical reality revealed in the trial data.
Migration to Less Safe Spaces
The ban risks creating the opposite of its intended effect. Children blocked from mainstream platforms, which have sophisticated content moderation, reporting systems, and AI safety measures, will likely migrate to messaging apps, gaming platforms, or international services with weaker safety infrastructure. We're potentially pushing children toward spaces where algorithmic manipulation may be more sophisticated and harmful, while eliminating the parental oversight tools the trial found most effective.
Technical Implementation Crisis
The trial's technology stack analysis reveals that many solutions for age estimation remain at prototype stages despite vendor claims. The legislation creates technical contradictions: banning the most reliable age assurance methods while demanding reliable outcomes, prohibiting data retention while requiring data processing for compliance, and eliminating proven parental tools while expecting families to manage online safety. These contradictions may force the system toward failure, potentially making children less safe rather than more protected.
Inclusion of Diverse Populations
The trial found that systems generally performed well across diverse users but, critically, noted that gaps persist in remote and very remote communities, where digital exclusion and a lack of foundational credentials continue to limit access."
Dr Shaanan Cohney is a Senior Lecturer in the Faculty of Engineering and IT at the University of Melbourne
"Although the government’s Age Assurance report is expansive, its many volumes do not substantiate the conclusions they draw. The work exhibits material, at points, critical flaws that likely stem from the exclusion of security and privacy researchers, the very field trained to probe such systems. The report was produced by an organisation with commercial interests in age-assurance technologies, which raises legitimate concerns about independence.
The problems in the report range from inconsistent claims (calling age estimation deployable and ready for prime time, even while documenting serious flaws in the relevant age bands) to far more serious omissions, such as failing to model a realistic spectrum of ways young people would circumvent the technology. These are exactly the weaknesses adversarial testing - a core cybersecurity methodology - would have surfaced. Instead, key sections skate over methodology and lean on assurances that fixes are 'under development'. Meanwhile, real-world reports already show easy bypasses (for example, children pointing phones at video-game characters to fool systems), and my own testing suggests such tricks succeed far more often than the report would have Australians believe. In short, the report understates risk, overstates effectiveness, and falls well short of the standard security and privacy researchers expect for a high-stakes, society-wide intervention."
Professor Daswin De Silva is Deputy Director of the Centre for Data Analytics and Cognition (CDAC) at La Trobe University
"A few months prior to the implementation of its social media ban for under-16s, the full findings of the age assurance technology trial have been released by the Federal Government. The report does not offer a precise and deterministic solution; instead, it presents several methods that can be used with 'reasonable levels of confidence' alongside limitations of reliability and privacy.
The methods are grouped into:
- age verification (using official identity documents to verify date of birth)
- age estimation (using Artificial Intelligence to estimate age based on face, voice, and motion data)
- age inference (using third-party sources of contextual, behavioural and transactional data, such as electoral enrolment or school year)
- a combination of these methods (described as the waterfall method).
Age verification and inference offer low privacy protection: given the sensitive nature of any information related to children, the data they collect can be exposed to subsequent harms and risks.
Although age estimation affords high privacy protection, the use of AI can be unreliable, biased and prejudiced, leading to other types of risks and harms. The combination of methods is likely to be the most effective at age assurance, but it is equally risky, as an entire digital profile can be created, tracked long term, exposed or breached.
By reporting on a broad range of options across multiple and diverse metrics, rather than an exact technology or provider or even a list of priorities, this report serves as an environmental scan; a further study will be warranted to determine actionable next steps. The report also lacks comparison with international practices, given the global scale of the challenge of children's social media use. The report recognises all providers of these age assurance technologies as 'dynamic and innovative', which raises separate questions about where else this technology is being used and to what extent our private and personal data is being extracted."
Dr Belinda Barnet is a Senior Lecturer in Media at Swinburne University of Technology
"As expected, the report found that there were some privacy and security concerns with several of the methods, but that there are third-party verification providers who could deliver age assurance without unnecessarily storing our data. I would personally like us to adopt the reliable third party method rather than giving Facebook our passports."
Associate Professor Faith Gordon is an Associate Professor in Law at the Australian National University (ANU)
“Age Assurance Technology is clearly not the 'silver bullet' to make the digital world safer for children. The Age Assurance Technology Trial demonstrates that while some biometric methods show promise, they remain prone to errors and demographic biases, particularly for women and darker-skinned users. The findings confirm that no single solution works universally, raising serious concerns about privacy, data collection, and the potential for exclusion. More broadly, these developments mark a significant shift in how Australians will access and experience online platforms, with implications that extend well beyond the proposed social media restrictions for young people.”