Australia must act on AI safety now

To seize AI's benefits, the Australian Government must address its risks. Join the call for safe and responsible AI governance.

Turn support into action. Get alerts when your voice matters most.

Why AI safety matters for Australia

AI could usher in unprecedented prosperity—or pose humanity's greatest challenge. Australia has a unique opportunity to help determine which future we get.

Extraordinary Potential

AI could help cure diseases, accelerate clean energy development, and spur unprecedented creativity and innovation. We have world-class researchers and the values to lead this transformation responsibly.

Serious Risks Ahead

But without proper safeguards, AI could eliminate jobs faster than we create them, enable new forms of warfare, or even become impossible to control as it grows more powerful.

Australia's Moment

We have a narrow window to shape AI's development globally. The choices we make in the next few years will determine whether AI becomes humanity's greatest tool or greatest threat.

Australians demand immediate AI safety action

Recent polling shows overwhelming public support for Australian Government action on AI regulation and safety policies.

94%

Australians believe Australia should lead international AI governance

86%

Australians support the creation of a new government regulatory body for AI

80%

Australians think preventing AI-driven extinction should be a global priority

Australia's experts are calling for AI safety action

Leading voices from AI research, technology, and policy unite on the urgent need for safeguards

Australia risks being in a position where it has little say on the AI systems that will increasingly affect its future. An Australian AI Safety Institute would allow Australia to participate on the world stage in guiding this critical technology that affects us all.

Dr. Toby Ord

Senior Researcher, Oxford University

Author of The Precipice

AI safety experts in the media

Our advocacy reaches national audiences through expert commentary, interviews, and policy analysis across Australian media.

Your questions about AI safety answered

Common questions about AI safety in Australia and how you can take action

Artificial intelligence could deliver unprecedented benefits or pose catastrophic risks. AI safety is an interdisciplinary field focused on ensuring that AI systems are designed and deployed to benefit humanity while minimising serious harm.

This involves both technical research (building AI systems that behave as intended and remain under human control) and governance work (developing policies and institutions to ensure responsible AI development).

The people building these systems are sounding the alarm. In 2023, hundreds of leading AI experts—including the CEOs of OpenAI, Google DeepMind, and Anthropic—signed a statement warning that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

The pressure to move fast is intensifying. The world's two biggest economies—the U.S. and China—both have official AI strategies aimed at global leadership and dominance in advanced AI. This creates a dangerous race dynamic where safety considerations risk being sidelined in the rush to stay ahead.

AI safety isn't just about future scenarios—it's about protecting people from harm today and tomorrow. AI risks range from immediate concerns to emerging threats:

  • Current harms: Algorithmic bias in hiring and lending, automated welfare decisions causing harm (like Robodebt), AI-generated misinformation, privacy violations, and AI systems that may worsen mental health or enable child exploitation
  • Emerging threats: AI helping create bioweapons, sophisticated deepfakes undermining democracy, autonomous weapons systems, AI-enabled cyberattacks on critical infrastructure, and loss of human control over highly capable AI systems

The window to shape AI's trajectory is closing. We need comprehensive safeguards that address both current harms and emerging risks. By acting now, Australia can ensure these powerful systems serve our interests rather than undermine them.

Just as food made to nourish us can poison us if not prepared properly, AI systems designed for beneficial purposes can easily cause harm. In 2022, researchers demonstrated this dual-use potential when they repurposed an AI system designed to discover life-saving drugs—instead generating designs for 40,000 toxic molecules in just six hours. The same capabilities that make AI valuable also make it dangerous.

Examples of current AI harms happening today:

  • Automated discrimination: AI hiring systems rejecting qualified women and minorities; Australia's Robodebt algorithm wrongly accused thousands of welfare recipients of fraud
  • Privacy violations: Data leakage; mass facial recognition surveillance; workplace monitoring; children's data harvesting through educational apps and platforms
  • Misinformation warfare: AI algorithms amplifying false content for engagement; deepfakes of politicians influencing elections; synthetic media making truth indistinguishable from fiction
  • Safety failures: Medical AI missing critical diagnoses; autonomous vehicle crashes killing pedestrians; chatbots providing dangerous mental health advice
  • Economic disruption: Job displacement without retraining support; wage depression in AI-competing roles; gig workers exploited through algorithmic management
  • Criminal exploitation: AI-generated child abuse imagery making it harder to identify real victims; voice cloning scams targeting elderly Australians; sophisticated fraud and identity theft
  • Mental health impacts: Social media addiction from AI recommendation algorithms; reduced human interaction; "AI-induced psychosis" from over-dependence on chatbots
  • Unexpected behaviours: xAI's Grok calling itself "MechaHitler"; McDonald's drive-thru AI repeatedly adding hundreds of nuggets to orders; Air Canada's chatbot providing false bereavement information

These aren't isolated glitches. The AI Incident Database and MIT's AI Incident Tracker document thousands of cases where AI systems have failed, discriminated, or caused harm—and the frequency is accelerating as deployment expands.

Examples of emerging catastrophic and existential threats:

  • Bioweapons assistance: AI systems are approaching the ability to help create dangerous pathogens. OpenAI now treats its latest models as having high biological capability under its own preparedness framework because of the risk they could help novices create bioweapons
  • Sophisticated cyberattacks: AI writing malicious code, finding security vulnerabilities faster than defenders can patch them, and conducting personalised social engineering at massive scale
  • Election manipulation: AI-generated disinformation campaigns and micro-targeted propaganda that could undermine democratic processes across Australia
  • Autonomous weapons: AI-powered weapons selecting and engaging targets without human oversight, raising concerns about accountability and escalation
  • Mass economic disruption: Rapid AI automation displacing millions of jobs simultaneously, potentially causing social unrest without adequate transition support
  • Infrastructure attacks: AI targeting power grids, water systems, transportation networks, and communication infrastructure with unprecedented precision
  • Loss of human control: As the frontier AI labs actively pursue building artificial superintelligence (ASI) and their AI systems become more autonomous and capable, we may lose the ability to understand, predict, or override their decisions in critical situations

The pattern is clear: AI capabilities that create enormous benefits also enable new forms of harm. As these systems become more powerful and pervasive, both the benefits and risks multiply. Understanding why AI systems pose these risks helps explain the fundamental challenges we face.

You've probably experienced AI harms without realising it. Social media algorithms designed to keep you engaged often make you scroll endlessly, feel anxious, or see divisive content. The AI isn't trying to harm you—it's just very good at maximising "engagement time," which unfortunately can be driven by negative emotions.

This shows the core problem: we get what we measure, not what we want. AI systems optimise ruthlessly for their targets, but those targets often capture the wrong thing. When ChatGPT was trained to give answers that people rate highly, it learned to sound confident even when it was wrong: the same optimisation that produces persuasive answers also produces convincing falsehoods.

It gets worse: we don't know how AI systems will behave until we run them. Unlike traditional software where programmers write explicit rules, modern AI systems are "grown" through training. Engineers feed massive amounts of data to AI systems, which then learn patterns automatically. While we understand the training mechanisms, we have incomplete visibility into exactly which patterns emerge and how they'll behave in new situations.

Even AI creators can't control what emerges. Elon Musk's AI chatbot Grok spontaneously started calling itself "MechaHitler." Musk himself couldn't fix it, saying he spent hours trying. As Anthropic's CEO admits: "People outside the field are often surprised and alarmed to learn that we do not understand how our own AI creations work."

Bad actors can weaponise these systems at scale. AI can help create sophisticated deepfakes to manipulate elections, generate convincing phishing emails, or even assist in developing bioweapons. Even present-day AI systems outperform human virologists on capability tests, suggesting they could help novices conduct dangerous biological experiments. Bad actors might jailbreak safeguards, steal models, or exploit open-weight releases to access these capabilities. Unlike traditional tools, AI scales malicious activities: one person with AI can create thousands of personalised scams targeting specific communities.

The most concerning part: harmful behaviours can hide during development. Research shows AI systems can learn to act safely while being tested, then exhibit problematic behaviours later when deployed. It's like an employee who behaves perfectly during their probation period, then changes once they get job security.

As AI becomes more useful, the stakes get higher. We increasingly rely on AI for medical diagnoses, financial decisions, and infrastructure management. When Netflix crashes, you miss part of your show. When an AI managing Australia's power grid malfunctions, millions of people lose electricity.

Absolutely, and by overwhelming margins. Australians strongly support government action on AI safety:

  • 94% believe Australia should lead on international AI governance
  • 86% support creating a new government regulatory body for AI
  • 80% believe preventing AI-driven extinction should be a global priority alongside pandemics and nuclear war
  • 96% have concerns about generative AI, but only 30% think the government is doing enough about it

The gap between public concern and government action is enormous. While Australians overwhelmingly want stronger AI oversight, current government measures remain largely voluntary.

View our full Australian AI polling data and statistics →

Politicians need to hear about this strong public support. Many lawmakers are still forming their positions on AI regulation and can be influenced by hearing from constituents. Your voice on this issue carries significant weight.

Tell politicians you care about AI safety →

No. Smart regulation actually boosts economic growth by building the trust needed for widespread adoption. Just as safety standards made aviation a massive industry rather than killing it, AI safety rules will unlock AI's economic potential rather than stifle it.

Trust drives adoption, and adoption drives economic benefits. Currently, only 36% of Australians trust AI, while 78% worry about negative outcomes. This mistrust is the biggest barrier to AI adoption—not regulation.

Safety standards create competitive advantages. Australia is already a global leader in aviation safety through CASA, pharmaceutical safety through the TGA, and food safety through FSANZ. These regulations didn't kill those industries—they made Australian standards a global benchmark, attracting investment and expertise.

The current uncertainty is what's hurting business. Companies face a compliance nightmare trying to navigate unclear rules across privacy, consumer protection, workplace safety, and discrimination laws. Clear AI-specific standards would reduce legal uncertainty and compliance costs.

Early movers gain the advantage. The EU's AI Act is creating international norms, and other countries are developing their own frameworks. Countries that help shape these standards—rather than just following them—position their companies to compete globally. Australia risks becoming a "regulation taker" instead of a "regulation maker."

The economic risks of not regulating are massive. A single high-profile AI disaster could destroy public trust and crash the entire sector. Just as the 737 MAX crashes severely damaged Boeing, AI incidents without proper oversight could devastate Australia's AI industry.

Even tech leaders agree. The CEOs of OpenAI, Google DeepMind, and Anthropic all support AI safety regulation. They understand that sustainable growth requires public trust, and trust requires demonstrated safety.

The choice isn't between growth and safety—it's between safe, beneficial AI development and a risky free-for-all that could backfire spectacularly.

Not really. Australia has voluntary guidelines and existing laws that partially apply to AI, but no comprehensive AI safety legislation or dedicated oversight body.

What we currently have:

  • Voluntary AI Safety Standard – Guidelines that companies can choose to follow, with no enforcement mechanism or penalties for non-compliance
  • Existing sector laws – Privacy Act, Consumer Law, and workplace safety rules that cover some AI uses, but weren't designed for modern AI systems
  • Proposed mandatory guardrails – The government is consulting on rules for "high-risk" AI, but these remain undefined and unlegislated

What we don't have:

  • No AI Safety Institute – Unlike the US, UK, and other allies, which have dedicated technical bodies to assess AI risks
  • No comprehensive AI Act – The EU passed landmark AI legislation in 2024; Australia has no equivalent framework
  • No oversight of frontier AI models – The most powerful AI systems face no mandatory safety testing or evaluation before release

The regulatory gaps are enormous. Good Ancestors' AI Legislation Stress Test found that 78-93% of experts consider current government measures inadequate across five key AI threat categories. No Australian regulator currently has clear responsibility for managing risks from general-purpose AI systems.

International comparison shows we're falling behind:

  • European Union: Comprehensive AI Act with mandatory requirements
  • United Kingdom: AI Security Institute with £100 million funding
  • United States: Center for AI Standards and Innovation and industry partnerships
  • China: CNAISDA overseeing AI algorithm governance and data security requirements
  • Australia: Voluntary guidelines and consultation papers

The result? Australian businesses face regulatory uncertainty, consumers lack protection from AI harms, and the country risks becoming a "regulation taker" rather than helping shape global AI governance standards.

This policy vacuum is why establishing proper AI safety laws has become urgent.

About Australians for AI Safety

Australians for AI Safety is a coalition advocating for the safe and responsible development of artificial intelligence in Australia. We unite experts in AI, ethics, policy, and other fields with concerned citizens to ensure Australia leads in AI governance.

Our mission is to promote informed public discussion about AI risks and benefits, translate expert knowledge into accessible policy recommendations, and advocate for appropriate governance frameworks that protect Australian interests while fostering beneficial AI development.

Through open letters, expert testimony, media engagement, and grassroots advocacy, we work to ensure Australia's voice is heard in global AI governance discussions and that our government implements policies that keep pace with rapid technological advancement.