
Letter to Minister of Industry, Science and Technology

Published 25 July 2023

This letter is now closed for new signatures.

Dear Minister,

We write to call on the Australian Government to take the risks of AI seriously.

The economic potential of advanced AI systems will only be realised if we make them ethical and safe. The ethical challenges posed by today’s systems are already causing serious harm, and as systems become more powerful, the risks of misuse, accident, and catastrophe become more acute.

The safe use of AI requires acknowledging that, in the future, AI could pose a catastrophic or even existential risk to all of humanity.1 While we must be frank about the uncertainty, many experts have raised the alarm, and governments must listen.2 Mitigating catastrophic risk should never be left to chance.

As part of a due-diligence-based approach, the Australian Government should:

  • Recognise the risk. The Australian Government’s AI strategy must recognise that catastrophic and existential consequences are possible. It should also make clear that mature risk-management processes treat uncertainty as a cause for concern, not comfort.

  • Take a portfolio approach to mitigating risk. The Australian Government’s risk-based approach to AI should be holistic, including preparing for problems that may only arise as AI systems become more advanced.3 Mitigations need to be in place for the risks we face today, such as targeted harassment, the misuse of dual-use technologies, and deepfakes and other forms of misinformation and disinformation.4 Risks with uncertain likelihood, but catastrophic consequences, must also be mitigated within the portfolio.5

  • Work with the global community. The rest of the world is moving quickly, and we should contribute. Australia has led on the risks of nuclear and biological weapons – we can and should lead on mitigating similar risks from AI.6 Specifically, our contributions to international agreements and standards development should be mindful of managing longer-term and catastrophic risks while also addressing ethical concerns and ensuring economic benefits.

  • Support research into AI safety. In addition to policy leadership, we need technical solutions that greatly improve AI interpretability and alignment with human values. We must not leave the risks of AI to chance – or to private interests and foreign companies. This requires governments to support research into AI safety, and urgently train the AI safety auditors that industry will soon need.7 Technical problems take time to solve, so we need to start now.

A powerful first step from this consultation would be the creation of an AI Commission, or similar body, tasked with ensuring Australia takes a world-leading approach to understanding and mitigating the full range of risks posed by AI.

An AI Commission should approach its work on AI safety in a similar way to how we approach aviation safety. Government doesn’t need all the technical answers – but it does need to set the expectation of a culture of safety and transparency, create an independent investigator and a strong regulator, back them with a robust legal regime, and connect them with a global peak body that our Government helps to shape. This is how the Australian Transport Safety Bureau, the Civil Aviation Safety Authority and the International Civil Aviation Organization give Australians the confidence to fly. An AI Commission must ensure AI has the same kind of governance so that Australians can give it the same kind of trust.

An AI Commission should be set up urgently to ensure the law is clear about who is liable for harms caused by AI. This should include joint liability for AI labs and AI providers, as well as for anyone who uses AI to cause harm. We wouldn’t allow aircraft manufacturers to sell planes in Australia without knowing the product is safe, and we wouldn’t excuse a business for being ignorant of the potential harms of its products, so the law should similarly ensure adequate legal responsibility for the harms of AI.8

Importantly, this globally coordinated regulatory approach to aviation hasn’t stifled innovation. Indeed, certainty and structure of this kind help new entrants by providing a clear framework through which to participate.

Signatories

Note: Signatories endorse only the core letter text. Footnotes and additional content may not represent their views.

Prof. Michael A Osborne

University of Oxford

Professor of Machine Learning

Richard Dazeley

Professor of Artificial Intelligence and Machine Learning, Deakin University; Leader, Machine Intelligence Lab; Senior Member, Future of Life Institute AI Existential Safety Community; Co-founder, Australian Responsible Autonomous Agents Collective (ARAAC.au)

Dr. Toby Ord

Future of Humanity Institute, Oxford University

Senior Research Fellow

Harriet Farlow

CEO, Mileva Security Labs; PhD Candidate (Adversarial Machine Learning), UNSW Canberra

Assoc. Prof. Simon Goldstein

Associate Professor, Dianoia Institute of Philosophy (Australian Catholic University); Research Fellow, Center for AI Safety

Prof. Peter Vamplew

Professor of Information Technology, Federation University; Senior Member, Future of Life Institute AI Existential Safety Community

Chris Leong

AI Safety Australia and New Zealand

Convenor

Dr. Michael Dello-Iacovo

Sentience Institute

Strategy Lead and Researcher

Soroush Pour

Harmony Intelligence

CEO, Engineer & Technology Entrepreneur

Michael Aird

Rethink Priorities

Senior Research Manager (AI Governance and Strategy)

Dr. Daniel Murfet

University of Melbourne

Lecturer in Mathematics

Hadassah Harland

PhD Candidate (Artificial Intelligence and Human Alignment), Deakin University; Member, Australian Responsible Autonomous Agents Collective (ARAAC.au)

Hunter Jay

Ripe Robotics

CEO

JJ Hepburn

AI Safety Support

CEO

Dan Braun

Apollo Research

Lead Engineer

Yanni Kyriacos

AI Safety Australia and New Zealand

Convenor

Joseph Bloom

AI Safety Researcher

Dane Sherburn

AI Safety Researcher

Matthew Farrugia-Roberts

AI Safety Researcher