A new national survey shows many Australians believe the risks of artificial intelligence outweigh its benefits.
The findings highlight growing concern about safety, misuse and trust, as well as the importance of clear rules as AI becomes more embedded in everyday life.
Six months on from our first national survey on artificial intelligence (AI), the message from Australians is clear: they are not opposed to innovation. They simply want confidence that guardrails are in place to keep people safe.
About the SafeAI “Australian attitudes toward AI regulation” survey:
- Conducted: 20–28 January 2026
- Sample size: 2,010 Australians aged 18+
- Sampling method: Representative national sample using interlocked age, gender and regional quotas
- Weighting: Applied to ensure results closely reflect the Australian population
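The survey's exact weighting methodology is not disclosed; as a rough illustration of what "weighting to reflect the population" typically involves, the sketch below shows simple cell-based post-stratification, where each demographic cell's responses are weighted by the ratio of its population share to its sample share. The cells and shares here are hypothetical.

```python
# Illustrative sketch of cell-based post-stratification weighting.
# Cells and shares are made up; the survey's actual method is not published.

def cell_weights(population_share, sample_share):
    """Weight per demographic cell = population share / sample share."""
    return {cell: population_share[cell] / sample_share[cell]
            for cell in population_share}

# Hypothetical interlocked age x region cells; each dict sums to 1.0.
population_share = {"18-34_metro": 0.30, "18-34_regional": 0.10,
                    "35+_metro": 0.40, "35+_regional": 0.20}
sample_share = {"18-34_metro": 0.25, "18-34_regional": 0.15,
               "35+_metro": 0.45, "35+_regional": 0.15}

weights = cell_weights(population_share, sample_share)
# Over-represented cells get weights below 1; under-represented cells above 1.
```

Applying these weights when averaging responses makes the weighted sample mirror the population mix across the quota cells.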

Almost two-thirds of Australians continue to believe AI is moving too fast, and most say the risks outweigh the benefits.
The top concerns driving this unease are unchanged: job losses, privacy breaches, and criminal misuse that can cause harm at speed and scale. These issues are not fading from people's minds.
Deepfakes have also become increasingly common online.
Nearly six in ten Australians say they have seen an image or video in the past six months where they could not tell whether it was real or AI-generated.
This speaks to a broader concern about digital manipulation and our ability to determine what is real. When people cannot trust what they see, trust in the system begins to erode.
Despite these concerns, Australians are not turning their backs on AI.
Are we using AI more now than we were six months ago?
Yes. More people are using it every day and across a growing range of tools.
The data also reveals how Australians are putting AI to use.
For most users, it’s a practical tool for learning and information-seeking, whether that’s asking “how-to” questions or quickly looking up information.
Work-related tasks such as writing and editing are also common. Other uses, including travel planning, recipes and health advice, appear less frequently but highlight how AI is steadily becoming integrated into everyday life.
Many see the potential for AI to improve healthcare outcomes and lift productivity, from better diagnoses and treatment to making daily life more convenient.
Australians don’t want to pull the handbrake on innovation, nor do they want an AI free-for-all.
A balanced approach to regulation, one that keeps people safe while enabling innovation, remains the most popular option.
However, there are signs patience may be wearing thin, with a noticeable increase in support for strict regulation in our latest survey.
When the balanced option is removed and Australians are asked to choose between only minimal rules and strict rules, 69 per cent now favour the stricter approach, up from our last survey.
The signal to policymakers and industry is clear. Without trust, the public will default to caution, even if it slows progress.
The research also highlights a gap in awareness about the government’s AI plans.
Only 9 per cent of Australians have heard of the AI Safety Institute or the National AI Plan.
Yet when given some basic information about the intention of these initiatives, most believe they could be effective steps.
The appetite for clearer, more frequent communication and visible action is there. And with the public continuing to look to government to manage the risks, there is a real opportunity to lead in building trust in AI.

Getting this right is not about holding innovation back. It is about setting clear, enforceable rules that protect people, enable innovation and give government the public permission it needs to unlock the full productivity potential of AI.