
Senate Inquiry on Adopting Artificial Intelligence

Published 10 May 2024

This letter is now closed for new signatures.

Dear Senators,

In July 2023, Australians for AI Safety wrote to the Minister for Industry, Science and Technology, highlighting the importance of AI safety.1 The letter was written in the context of the Safe and Responsible AI (SRAI) Consultation and built on similar letters from global AI leaders.2

The letter made four specific requests of the Australian Government:

  • Recognise that catastrophic consequences from increasingly advanced AI systems are possible
  • Take steps to mitigate these extreme risks, alongside other important work
  • Work internationally to shape the global approach to these risks, and
  • Increase support for research into AI safety.

Australia’s endorsement of the Bletchley Declaration on AI Safety in November 2023 was a critical step towards delivering on these requests.3 By signing the Bletchley Declaration, Australia acknowledged that highly capable general-purpose AI models have the potential for serious, even catastrophic, harm. Australia resolved to mitigate these risks through global cooperation and by supporting scientific research on the safety of the most capable AI models at the frontier of research and development.

Despite our endorsement of the Bletchley Declaration, Australia has yet to position itself to learn from and contribute to growing global efforts. To achieve the economic and social benefits that AI promises, we need to be active in global action to ensure the safety of AI systems that approach or surpass human-level capabilities.

We recommend:

  • The urgent creation of an Australian AI Safety Institute. The National AI Centre’s mission is driving the responsible adoption of AI by Australian industry. A separate institution is now needed to ensure the safety and security4 of the small number of models at the frontier of AI research and development. Countries – including the US, UK, Canada, Japan and the Republic of Korea – are establishing AI Safety Institutes to address this small class of models. An Australian AI Safety Institute would focus on evaluating these models – assessing them for dangerous capabilities, considering risks that could emerge from their deployment in complex systems, “red-teaming” the adequacy of safeguards, and providing overall advice on their implications. It would also drive AI safety research and deliver on the Government’s commitment to collaborate internationally on scientific research on frontier AI safety.5

  • Ensuring that new safeguards keep pace with new AI capabilities and new risks. AI is progressing rapidly, and new capabilities are hard to predict. The Government’s interim response to the SRAI Consultation agreed that “Governments must respond with agility when known risks change and new risks emerge”.6 Unfortunately, this is yet to happen. For instance, the UK and the US adopted new regulations for “mail-order DNA” in response to the possibility that next-generation AI models will be able to help terrorists make biological weapons.7 Six months after the US took action, Australia has yet to update its equivalent biosafety regulations. This was a test of our agility, and we have failed. Australia needs a streamlined approach to risk identification and mitigation so that new safeguards keep pace with new risks in this fast-paced environment.

  • Ensuring that the regulation of high-risk AI systems includes those that could have catastrophic consequences. The Government, through the Safe and Responsible AI Consultation, identified the pressing need to regulate high-risk AI systems.8 In addition to the examples given in the Interim Response, “high risk” should include systems where the consequence of something going wrong could be catastrophic. This should include highly capable agents that can interact with other systems; autonomous goal-directed systems; and frontier models with capabilities relating to cyber offence, biotechnology, or risks to democratic institutions.9

Too often, lessons are learned only after something goes wrong. With AI systems that might approach or surpass human-level capabilities, we cannot afford for that to be the case.

Yours faithfully,

Signatories

Note: Signatories endorse only the core letter text. Footnotes and additional content may not represent their views.

Dr. Toby Ord

Oxford Martin AI Governance Initiative, University of Oxford

Senior Research Fellow, Author of The Precipice

Prof. Peter Singer

University Center for Human Values, Princeton University

Ira W. DeCamp Professor of Bioethics

Prof. Michael A Osborne

Oxford Martin AI Governance Initiative, University of Oxford

Professor of Machine Learning

Assoc. Prof. Simon Goldstein

Dianoia Institute of Philosophy (Australian Catholic University), Center for AI Safety

Associate Professor, Prev. Research Fellow

Prof. Paul Salmon

Centre for Human Factors and Sociotechnical Systems, University of the Sunshine Coast

Professor of Human Factors, AI Safety Researcher

Dr. Marcel Scharth

The University of Sydney

Lecturer in Business Analytics (statistics and machine learning)

Dr. Daniel D'Hotman

Brasenose College, University of Oxford

Rhodes Scholar, DPhil Candidate (AI Ethics)

Dr. Cassidy Nelson

The Centre for Long-Term Resilience

Head of Biosecurity Policy

Pooja Khatri

University of Sydney

AI Governance Researcher, Lawyer

Dr. Ryan Kidd

ML Alignment & Theory Scholars, London Initiative for Safe AI

Co-Director, Co-Founder

Dr. Peter Slattery

MIT FutureTech, Massachusetts Institute of Technology

Affiliate Researcher

William Zheng

George Weston Foods

Lead Data Scientist

Nik Samoylov

Campaign for AI Safety, Existential Risk Observatory, Conjointly

Coordinator, Volunteer, Director

Dan Braun

Apollo Research

Lead Engineer

Dr. Ramana Kumar

Google DeepMind

Prev. Senior Research Scientist on AGI Safety

Chris Leong

AI Safety Australia and New Zealand

Convenor

Matthew Newman

TechInnocens

CEO

Dr. David Johnston

EleutherAI

AI Interpretability Researcher

Dr. Ryan Carey

University of Oxford

PhD Candidate, Statistics: Causal Models + Safe AI

Dr. Michael Dello-Iacovo

Social science of AI researcher

Ben Cottier

Epoch AI

Staff Researcher

Dr. Craig Bellamy

Consultant

Harriet Farlow

Mileva Security Labs, UNSW Canberra

CEO, PhD Candidate (Adversarial Machine Learning)

Arush Tagade

Leap Labs

Machine Learning Researcher

James Dao

Harmony Intelligence

Research Engineer

Soroush Pour

Harmony Intelligence

CEO, ML Researcher & Technology Entrepreneur

Justin Olive

Arcadia Impact

Head of AI Safety

Jonathan Kurniawan

Prodago

Chief Product Officer

Casey Clifton

Alive AI

CEO

Hunter Jay

Ripe Robotics

Co-founder

Yanni Kyriacos

AI Safety Australia and New Zealand

Convenor

Michael Clark

Woodside Energy, Three Springs Technology, Cytophenix

Machine Learning Engineer, Director, Director

Oscar Delaney

Institute for AI Policy and Strategy

Research Assistant

Assistant Prof. Pamela Robinson

University of British Columbia Okanagan

Assistant Professor

Jordan Taylor

The University of Queensland

PhD Candidate (Tensor Networks)

David Quarel

School of Computing, Australian National University

PhD Candidate (AI)

Lucia Quirke

EleutherAI

Member of Technical Staff

Matthew Farrugia-Roberts

Timaeus

Research Lead

Jeremy Gillen

AI Safety Researcher

Joseph Bloom

AI Safety Researcher

Joe Brailsford

Centre for AI and Digital Ethics (CAIDE), University of Melbourne

PhD candidate (Human Computer Interaction)

Dane Sherburn

AI Safety Researcher