Credit: Oscar Wong via Getty Images.


Australians support AI innovation, but safety must come first.

A new national survey shows many Australians believe the risks of artificial intelligence outweigh its benefits.

The findings highlight growing concern about safety, misuse and trust, as well as the importance of clear rules as AI becomes more embedded in everyday life.

Six months on from our first national survey on artificial intelligence (AI), the message from Australians is clear: they are not opposed to innovation. They simply want confidence that guardrails are in place to keep people safe.

About the SafeAI “Australian attitudes toward AI regulation” survey:

  • Conducted: 20–28 January 2026
  • Sample size: 2,010 Australians aged 18+
  • Sampling method: Representative national sample using interlocked age, gender and regional quotas
  • Weighting: Applied to ensure results closely reflect the Australian population

This image is shot from directly behind a boy sitting at a computer with a headset on. Set up on the desk in front of him are a computer tower with light-up fans, a desktop monitor and a laptop.
Credit: Nazar Abbas Photography via Getty Images.

Almost two-thirds of Australians continue to believe AI is moving too fast and most say the risks outweigh the benefits.

The top concerns driving this unease are unchanged: job losses, privacy breaches, and criminal misuse that can cause harm at speed and scale. These issues are not fading from people's minds.

A chart that shows what Australians think of the risks versus the benefits of AI, and whether they believe the risks outweigh the benefits. Some demographics, including women and older Australians, are more likely to agree that the risks outweigh the benefits.
A chart that shows the relative importance of various tech-related issues to the Australian public, in order: digital privacy, the future of employment, misinformation and fake news, companies collecting personal data, governments collecting personal data, the role of big tech companies, online radicalisation, artificial intelligence, social media and genetic engineering.

Deepfakes have also become increasingly common online.

Nearly six in ten Australians say they have seen an image or video in the past six months where they could not tell whether it was real or AI-generated.

This speaks to a broader concern about digital manipulation and our ability to determine what is real. When people cannot trust what they see, trust in the system begins to erode.

Despite these concerns, Australians are not turning their backs on AI.

Are we using AI more now than we were six months ago?

Yes. More people are using it every day and across a growing range of tools.

A chart that shows Australians' use of various AI tools, compared with the previous research period of August 2025. The top three tools are ChatGPT, Gemini and Copilot. The n size for this question was 1,142.

The data also reveals how Australians are putting AI to use.

For most users, it’s a practical tool for learning and information-seeking, whether that’s asking “how-to” questions or quickly looking up information.

Work-related tasks such as writing and editing are also common. Other uses, including travel planning, recipes and health advice, appear less frequently but highlight how AI is steadily becoming integrated into everyday life.

A chart that shows what Australians consider their main use of AI to be. The top three uses are learning or asking "how-to-do" things; looking up information like you might on Wikipedia; and work tasks like writing or editing. The results also note that people aged 60 or over were more likely to use it for looking up information, while those under 30 were more likely to say learning.

Many see the potential for AI to improve healthcare outcomes and lift productivity, from better diagnoses and treatment to making daily life more convenient.

Australians don’t want to pull the handbrake on innovation, nor do they want an AI free-for-all.

Support for a balanced approach to regulation, one that keeps people safe and enables innovation, remains the most popular option.

However, there are signs patience may be wearing thin, with a noticeable increase in support for strict regulation in our latest survey.

When the balanced option is removed and Australians are asked to choose between only minimal rules and strict rules, 69 per cent now favour the stricter approach – up from our last survey.

The signal to policymakers and industry is clear. Without trust, the public will default to caution, even if it slows progress.

The research also highlights a gap in awareness about the government’s AI plans.

Only 9 per cent of Australians have heard of the AI Safety Institute or the National AI Plan.

Yet when given some basic information about the intention of these initiatives, most believe they could be effective steps.

The appetite for clearer, more frequent communication and visible action is there. And with the public continuing to look to government to manage the risks, there is a real opportunity to lead in building trust in AI.

A screenshot of a conversation with ChatGPT that says “How concerned should we be about the lack of regulation about AI tools like yourself, Chat?” and the reply “It's a very reasonable concern - and you're not alone in asking it. Right now the global conversation around AI regulation is essentially a race between capability and governance, and capability is currently moving faster.”
Caption: ChatGPT response. Credit: OpenAI.

Getting this right is not about holding innovation back. It is about setting clear, enforceable rules that protect people, enable innovation and give government the public's permission to unlock the full productivity potential of AI.

The window to get ahead of public concern, rather than chasing it, is still open. But it is narrowing fast.
