
AI Safety Institute

A well-resourced, independent technical body to assess AI risks and advise on safety standards.

What we asked the candidates

We put the same question, with the same detail, to every candidate to ensure the scorecard is fair and accurate. Below is the exact question we asked and the exact detail we provided; the detail sets out more policy specifics about what delivering this commitment could look like.

Our question

Do you support the creation of an Australian AI Safety Institute?

The detail

Create and adequately fund an AI Safety Institute dedicated to:

  • Evaluating advanced AI systems
  • Driving safety research
  • Partnering with international AISI counterparts

Comparable commitments include the UK's £100 million over 2 years and Canada's CAD$50 million over 5 years. This initiative would help Australia fulfil its commitments under the Seoul Declaration.


Why we need an Australian AI Safety Institute (AISI)

The global race to develop more powerful AI has created a concerning imbalance: for every $250 spent making AI more powerful, only $1 is spent on safety. While frontier AI companies push the boundaries of what's technically possible, governments worldwide are recognising the need for independent assessment capabilities. An AISI could help close this critical gap and support Australia's commitments under the Seoul Declaration and Bletchley Declaration.

Currently, Australia lacks the technical capacity to independently evaluate frontier AI systems or test for risks before deployment. This leaves us largely dependent on companies' self-assessments and with limited influence over how these powerful technologies develop. By establishing an AISI, Australia would contribute meaningfully to addressing significant safety and security risks, build relevant technical expertise within government, potentially secure early access to frontier models for evaluation, and gain a seat at the table in global AI governance alongside our international partners.

The urgency of this need is underscored by the 2025 International AI Safety Report, where 96 international experts identified serious risks ranging from cyber-attacks to biological threats and disinformation campaigns. The MIT AI Risk Repository has documented over 1,000 potential AI harms from peer-reviewed and industry research, spanning issues from misinformation to unsafe autonomous systems.

Leading AI developers acknowledge these concerns. The CEOs of OpenAI, Anthropic, and Google DeepMind have publicly stated that AI poses an existential threat, a position that 80% of Australians agree with. These same labs are actively calling for government oversight (e.g. OpenAI and DeepMind).

The rapid release of increasingly capable AI models presents escalating risks. OpenAI's latest model was assessed to have a 'medium' risk of helping develop chemical, biological, radiological, and nuclear weapons, while DeepSeek's R1 model was found to have critical security vulnerabilities. Without an AISI, Australia lacks a systematic way to conduct such risk assessments independently or take action if dangers become unacceptable.

What an AISI would provide

The institute would deliver:

  • Independent testing of frontier AI systems
  • Research on safety, robustness, and understanding AI
  • Expert assessment of AI risks like cyber capabilities
  • Advice to government and industry
  • Partnership with international counterparts

The UK AI Security Institute (formerly UK AI Safety Institute) provides a good template for its priorities:

  • Evaluating dual-use capabilities
  • Assessing societal impacts
  • Improving system safety and security
  • Preventing loss of control

Why an AISI is a good investment

Other leading nations have already established well-funded AI safety institutes.

AI is estimated to add between $45 billion and $600 billion to Australia's economy by 2030. However, public trust is essential: Australians are more concerned about AI than people in any other surveyed country. Estimates show a 156% difference in value between slow and fast adoption scenarios. To unlock this economic and social value, trust must be earned.

An investment on the scale of the UK's or Canada's, as proposed in the Good Ancestors pre-budget submission, would cost just 0.001–0.1% of the potential yearly economic benefit from faster AI adoption.

Would an AI Security Institute meet the commitment?

Yes. The UK recently renamed its AI Safety Institute to an AI Security Institute, but it remains focused on the kinds of challenges outlined above.

Take Action

Support this initiative by signing our open letter and letting politicians know that AI safety matters to you.