Australians for AI Safety
Dear Senators,
In July 2023, Australians for AI Safety wrote to the Minister for Industry, Science and Technology, highlighting the importance of AI safety.1 The letter was written in the context of the Safe and Responsible AI (SRAI) consultation and built on similar letters from global AI leaders.2
The letter made four specific requests of the Australian Government:
- Recognise that catastrophic consequences from increasingly advanced AI systems are possible
- Take steps to mitigate these extreme risks, alongside other important work
- Work internationally to shape the global approach to these risks, and
- Increase support for research into AI safety.
Australia’s endorsement of the Bletchley Declaration on AI Safety in November 2023 was a critical step towards delivering on these requests.3 By signing the Bletchley Declaration, Australia acknowledged that highly capable general-purpose AI models have the potential for serious, even catastrophic, harm. Australia resolved to mitigate these risks through global cooperation and by supporting scientific research on the safety of the most capable AI models at the frontier of research and development.
Despite endorsing the Bletchley Declaration, Australia has yet to position itself to learn from and contribute to these growing global efforts. To achieve the economic and social benefits that AI promises, we need to be active participants in global action to ensure the safety of AI systems that approach or surpass human-level capabilities.
We recommend:
The urgent creation of an Australian AI Safety Institute. The National AI Centre’s mission is to drive the responsible adoption of AI by Australian industry. A separate institution is now needed to ensure the safety and security4 of the small number of models at the frontier of AI research and development. Countries – including the US, UK, Canada, Japan and the Republic of Korea – are establishing AI Safety Institutes to address this small class of models. An Australian AI Safety Institute would focus on evaluating these models – assessing them for dangerous capabilities, considering risks that could emerge from their deployment in complex systems, “red-teaming” the adequacy of safeguards, and providing overall advice on their implications. It would also drive AI safety research and deliver on the Government's commitment to collaborate internationally on scientific research into frontier AI safety.5
New safeguards must keep pace with new AI capabilities and new risks. AI is progressing rapidly, and new capabilities are hard to predict. The Government’s interim response to the SRAI consultation agreed that “Governments must respond with agility when known risks change and new risks emerge”.6 Unfortunately, this is yet to happen. For instance, the UK and US adopted new regulations for “mail-order DNA” in response to the possibility that next-generation AI models will be able to help terrorists make biological weapons.7 Six months after the US took action, Australia has yet to update its equivalent biosafety regulations. This was a test of our agility, and we have failed. Australia needs a streamlined approach to risk identification and mitigation so that new safeguards keep pace with new risks in this fast-paced environment.
Ensure the regulation of high-risk AI systems includes those that could have catastrophic consequences. The Government, through the Safe and Responsible AI consultation, identified the pressing need to regulate high-risk AI systems.8 In addition to the examples given in the Interim Response, “high risk” should also include systems where the consequences of something going wrong could be catastrophic. This should include highly capable agents that can interact with other systems; autonomous goal-directed systems; and frontier models with capabilities relating to cyber offence, biotechnology, or risks to democratic institutions.9
Too often, lessons are learned only after something goes wrong. With AI systems that might approach or surpass human-level capabilities, we cannot afford for that to be the case.
Yours faithfully,
Dr Toby Ord
Senior Research Fellow
Author of The Precipice
Oxford Martin AI Governance Initiative
Oxford University
Dr Ramana Kumar
Prev. Senior Research Scientist on AGI Safety
Google DeepMind
Assoc. Prof Simon Goldstein
Associate Professor
Dianoia Institute of Philosophy,
Australian Catholic University
Prev. Research Fellow
Center for AI Safety
Assistant Prof Pamela Robinson
Assistant Professor
University of British Columbia Okanagan
Prof Paul Salmon
Professor of Human Factors and AI Safety researcher
Centre for Human Factors and Sociotechnical Systems
University of the Sunshine Coast
Dr Ryan Kidd
Co-Director
ML Alignment & Theory Scholars
Co-Founder
London Initiative for Safe AI
Harriet Farlow
CEO
Mileva Security Labs
PhD Candidate, Adversarial Machine Learning
UNSW Canberra
Dr Ryan Carey
PhD Candidate, Statistics: Causal Models + Safe AI
University of Oxford
Matthew Newman
CEO
TechInnocens
Lucia Quirke
Member of Technical Staff
EleutherAI
Dane Sherburn
AI Safety Researcher
Joseph Bloom
AI Safety Researcher
Dan Braun
Lead Engineer
Apollo Research
Dr Craig Bellamy
Consultant
Oscar Delaney
Research Assistant
Institute for AI Policy and Strategy
Casey Clifton
CEO
Alive AI
Ben Cottier
Staff Researcher
Epoch AI
Jonathan Kurniawan
Chief Product Officer
Prodago
Michael Clark
Machine Learning Engineer
Woodside Energy
Director
Three Springs Technology
Director
Cytophenix
Matthew Farrugia-Roberts
Research Lead
Timaeus
Jeremy Gillen
AI Safety Researcher
Prof Peter Singer
Ira W. DeCamp Professor of Bioethics
University Center for Human Values
Princeton University
Prof Michael A Osborne
Professor of Machine Learning
Oxford Martin AI Governance Initiative
University of Oxford
Dr Daniel D’Hotman
Rhodes Scholar
DPhil Candidate, AI Ethics
Brasenose College, University of Oxford
Pooja Khatri
AI Governance Researcher
University of Sydney
Lawyer
Dr Peter Slattery
Affiliate Researcher
MIT FutureTech
Massachusetts Institute of Technology
Soroush Pour
CEO
Harmony Intelligence
ML Researcher & Technology Entrepreneur
Joe Brailsford
PhD candidate, Human Computer Interaction
Centre for AI and Digital Ethics (CAIDE)
University of Melbourne
David Quarel
PhD Candidate, AI
School of Computing
Australian National University
Jordan Taylor
PhD Candidate, Tensor Networks
The University of Queensland
Dr David Johnston
Interpretability researcher
EleutherAI
Dr Marcel Scharth
Lecturer in Business Analytics (statistics and machine learning)
The University of Sydney
Arush Tagade
Machine Learning Researcher
Leap Labs
Dr Michael Dello-Iacovo
Social science of AI researcher
Hunter Jay
Co-founder
Ripe Robotics
Nik Samoylov
Coordinator
Campaign for AI Safety
Volunteer
Existential Risk Observatory
Director
Conjointly
Justin Olive
Head of AI Safety
Arcadia Impact
James Dao
Research Engineer
Harmony Intelligence
William Zheng
Lead Data Scientist
George Weston Foods
Yanni Kyriacos
Convenor
AI Safety Australia and New Zealand
Chris Leong
Convenor
AI Safety Australia and New Zealand
Dr Cassidy Nelson
Head of Biosecurity Policy
The Centre for Long-Term Resilience
If you are an AI expert in the Australian community and would like to sign this letter please get in touch with us at contact@australiansforaisafety.com.au
Endnotes
[1] Australians for AI Safety. Aug 2023. www.australiansforaisafety.com.au
[2] “Statement on AI Risk.” May 2023. Center for AI Safety. safe.ai/statement-on-ai-risk
[3] “AI Safety Summit 2023: The Bletchley Declaration”. 1 Nov 2023. www.gov.uk/government/publications/ai-safety-summit-2023-the-bletchley-declaration
[4] “Artificial Intelligence.” Cybersecurity and Infrastructure Security Agency (CISA).
[5] “U.S. and UK Announce Partnership on Science of AI Safety”. April 2024. U.S. Department of Commerce. commerce.gov/news/press-releases/2024/04/us-and-uk-announce-partnership-science-ai-safety
[6] “Safe and responsible AI in Australia consultation: Australian Government’s interim response”. January 2024. Australian Government Department of Industry, Science and Resources. consult.industry.gov.au/supporting-responsible-ai
[7a] "Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”. 30 Oct 2023. www.govinfo.gov/app/details/FR-2023-11-01/2023-24283.
[7b] "A pro-innovation approach to AI regulation: government response." 6 Feb 2024. Department for Science Innovation and Technology, www.gov.uk/government/consultations/ai-regulation-a-pro-innovation-approach-policy-proposals/outcome/a-pro-innovation-approach-to-ai-regulation-government-response
[7c] “Safety and security risks of generative artificial intelligence to 2025.” 25 Oct 2023. Department for Science, Innovation and Technology.
[8] “Safe and responsible AI in Australia consultation: Australian Government’s interim response”, pages 14 and 18.
[9] See “Managing AI Risks in an Era of Rapid Progress”, a consensus paper signed by three Turing Award winners, for a description of risks associated with advanced AI systems.