
Australia Must Act on AI Risks Now

To realise AI’s immense benefits, we must first confront its escalating risks—starting now.

Published 24 March 2025

The next government will shape whether AI becomes a powerful force for good or causes catastrophic harm. Australia needs swift and globally coordinated action to address the risks of AI so Australians can fully seize its opportunities.

We can fly with confidence because we know airlines are subject to robust safety standards—the same should be true for AI.

Therefore, we call on the Australian Government to:

  • Deliver on its commitments by creating an Australian AI Safety Institute. While massive investments are accelerating AI capabilities, there is minimal funding dedicated to understanding and addressing the risks. We need independent technical expertise within government to join global AI risk research and to help ensure regulation and policy meet Australia's needs.
  • Introduce an Australian AI Act that imposes mandatory guardrails on AI developers and deployers. The Act should ensure that powerful AI systems meet robust safety standards and clarify liability for developers and deployers.

AI is developing rapidly, often in unexpected ways, and will have significant consequences for Australians. AI stands to fundamentally transform our society, economy, and democracy – for better or worse. Australians expect our government to take the widespread implications of AI seriously, to work with the global community to ensure AI is well governed, and to be adaptable in protecting us from AI risks while enabling us to realise its benefits.

We, the undersigned, call on the Australian Government to make AI safe before it's too late.

Signatories

Note: Signatories endorse only the core letter text. Footnotes and additional content may not represent their views.

Prof. Huw Price

University of Cambridge

Emeritus Bertrand Russell Professor & Emeritus Fellow, Trinity College, Cambridge

Co-founder of the Centre for the Study of Existential Risk and former Academic Director of the Leverhulme Centre for the Future of Intelligence, Cambridge

Dr. Toby Ord

Oxford University

Senior Researcher

Author of The Precipice

Bill Simpson-Young

Gradient Institute

Chief Executive

Member of Australia's AI Expert Group and NSW's AI Review Committee

Prof. Robert Sparrow, PhD

Monash University

Professor of Philosophy

Author of more than 50 refereed papers on AI and robot ethics

Prof. Michael A. Osborne

University of Oxford

Professor of Machine Learning

Co-author, with Carl Benedikt Frey, of the widely cited 2013 paper "The Future of Employment: How Susceptible Are Jobs to Computerisation?"

Dr. Ryan Kidd, PhD

MATS Research

Co-Executive Director

Co-Founder of the London Initiative for Safe AI

Dr. Paul Lessard, PhD

Symbolica

Principal Scientist

Author of Categorical Deep Learning

Prof. Richard Dazeley, PhD

Deakin University

Professor of Artificial Intelligence and Machine Learning

Researcher in AI Safety and Explainability

Prof. Paul Salmon, PhD

Centre for Human Factors and Sociotechnical Systems, University of the Sunshine Coast

Professor

Australia's discipline leader, Quality and Reliability, 2020–2024

Mr. Justin Olive

Arcadia Impact

Head of AI Safety

Dr. Tiberio Caetano

Gradient Institute

Chief Scientist

Dr. Alexander Saeri, PhD

The University of Queensland | MIT FutureTech

AI Governance Researcher

Director, MIT AI Risk Index

Dr. Tom Everitt

Google DeepMind

Staff Research Scientist

Dr. Peter Slattery, PhD

MIT FutureTech

Researcher

Lead at the MIT AI Risk Repository

Dan Braun

Apollo Research

Lead Engineer/Head of Security

Dr. Marcel Scharth

The University of Sydney

Lecturer in Business Analytics (Machine Learning)

Dr. Daniel Max McIntosh

La Trobe University

AI governance researcher

Bryce Robertson

Alignment Ecosystem Development

Project Director

Simon Goldstein

The University of Hong Kong

Associate Professor, AI & Humanity Lab

Prof. Peter Vamplew, PhD

Federation University Australia

Professor, IT

Karl Berzins

FAR.AI

Co-founder & COO

Dr. Simon O'Callaghan

Gradient Institute

Head of Technical AI Governance

Co-author of Implementing Australia’s AI Ethics Principles: A selection of Responsible AI practices and resources

Liam Carroll

Gradient Institute

Researcher

Jess Graham

The University of Queensland

Senior Research Coordinator

Assoc. Prof. Michael Noetel, PhD

The University of Queensland

Associate Professor

Jarrah Bloomfield

Security Engineer

Dr. Aaron Snoswell, PhD

Queensland University of Technology GenAI Lab

Senior Research Fellow

Greg Sadler

Good Ancestors

CEO

Pooja Khatri

University of Sydney

Lawyer and AI Governance Researcher

Soroush Pour

Harmony Intelligence

CEO

Harriet Farlow

Mileva Security Labs

CEO and Founder

Dr. Daniel Murfet

University of Melbourne

Mathematician, Deep Learning Researcher

Matthew Farrugia-Roberts

Department of Computer Science, University of Oxford

Clarendon Scholar

Tobin Smit

Vow

Systems Engineer

Campbell Border

Software Engineer

Andrew Taylor

Rockland Legal

Technology Lawyer

Matt Fisher

Software engineer

Maintainer of inspect_evals on behalf of UK AISI

Hunter Jay

Software Engineer

Previously: CEO of Ripe Robotics

Yanni Kyriacos

AI Safety - Australia & New Zealand

Co-Founder & Director

Arush Tagade

Leap Labs

Research Scientist

David Quarel

Australian National University; formerly University of Cambridge

PhD student; formerly Research Assistant

Pip Foweraker

AI Governance researcher

Michael Kerrison

Independent AI safety researcher

Dr. Huaming Chen, PhD, FHEA

The University of Sydney

Senior Lecturer

Evan Hockings

The University of Sydney

PhD student

James Dao

Harmony Intelligence

Research Engineer

Dr. Ryan Carey

Causal Incentives Working Group

Former AI Safety Lead, Future of Humanity Institute, University of Oxford

Joseph Bloom

UK AI Security Institute

Head of White Box Evaluations

Dr. Cassidy Nelson, MBBS, MPH, PhD

Centre for Long-Term Resilience

Head of Biosecurity Policy

Dane Sherburn

AI Safety Researcher

Sign the Open Letter

156 signatures and counting (31% of the 500-signature goal)

Join experts, researchers, and concerned citizens in calling for the Australian Government to take decisive action on AI safety.
