Experts call for Australian AI Safety Institute ahead of federal election
24 March 2025
Today, Australia’s AI safety leaders united in a call for action by the next federal government. The public statement, which is open for support from other Australian AI experts as well as members of the public, argues that action on safety is necessary for Australia to fully seize AI’s opportunities.
The experts are calling for the Australian Government to deliver on its commitments by creating an Australian AI Safety Institute (AISI). In May 2024, Australia and other participants at the Seoul AI Summit committed to “create or expand AI safety institutes”. Australia is the only signatory of the Seoul commitments yet to establish an AISI.
Australian philosopher Dr Toby Ord, Senior Researcher at Oxford University and author of The Precipice: Existential Risks and the Future of Humanity, said, “Australia risks being in a position where it has little say on the AI systems that will increasingly affect its future. An Australian AI Safety Institute would allow Australia to participate on the world stage in guiding this critical technology that affects us all.”
During last year’s Senate Select Committee Inquiry into AI Adoption, Senator David Pocock recommended that such an institute be created, but neither major party took a position.
Greg Sadler, coordinator of Australians for AI Safety, said, “It sets a dangerous precedent for Australia to formally commit to specific actions but fail to follow through. Australia is the only signatory that is yet to meet its obligations.”
The Seoul commitments aren’t the only ones an AISI would help deliver on. Australia’s AI Ethics Principles call for “transparency and explainability” as well as “reliability and safety”. Currently, frontier AI systems are not transparent and can be hard to predict. An AISI could lead the technical work needed to tackle these challenges and give effect to the AI Ethics Principles.
The AI experts are also calling for an Australian AI Act. Minister Ed Husic ran a series of consultations on safe and responsible AI, culminating in a paper about imposing mandatory guardrails on high-risk AI systems. The experts argue that the next Parliament needs to turn talk into action.
Australian Professor Paul Salmon, Centre for Human Factors and Sociotechnical Systems, said, “I support the creation of an Australian AI safety institute and the implementation of an AI Act. Both are urgently required to ensure that the risks associated with AI are effectively managed. We are fast losing the opportunity to ensure that all AI technologies are safe, ethical, and beneficial to humanity.”
This open letter compares AI safety to aviation safety, arguing that Australians are hesitant to adopt AI because Australia has yet to build the frameworks to give confidence that AI is safe and secure.
Yanni Kyriacos, Director of AI Safety Australia & New Zealand, said, "Robust assurance justifies trust. We’re all excited about the potential opportunities of AI, but not enough work is currently happening to address genuine safety concerns. It’s easy to understand why Australians are hesitant to adopt AI while these big issues are outstanding.”
A 2024 survey shows that Australians overwhelmingly want strong AI oversight: 8 in 10 believe Australia should lead international AI governance, 9 in 10 support creating a new government regulatory body for AI, and respondents ranked preventing dangerous and catastrophic AI outcomes as the top priority. A 2023 Ipsos survey found Australians to be the most nervous about AI of any country surveyed.
In conjunction with the letter, Australians for AI Safety is also launching a national scorecard where voters can compare the positions of parties and candidates on AI safety policies as they are announced ahead of the election.
The letter will remain open for support by Australian experts and members of the public until election day.
Contact: Greg Sadler
Email: greg@australiansforaisafety.com.au
Phone: 0401 534 879
Additional Quotes
- Soroush Pour (CEO of Harmony Intelligence): “AI is as transformative as electricity and as powerful as nuclear technology. We wouldn’t handle those without clear mandatory safeguards, and AI should be no different. To support good policy, Australia's government also needs a dedicated AI Safety Institute to bring deep technical AI expertise into government.”
- Richard Dazeley (Professor of Artificial Intelligence and Machine Learning, Researcher in AI Safety and Explainability, Deakin University): “We need to ensure AI systems align with our society’s needs and values, while ensuring a populace with a healthy educated skepticism of these systems. This is best achieved through an Australian AI Safety Institute and AI Act.”
- Hunter Jay (former CEO of Ripe Robotics): “We are building increasingly intelligent systems, but our methods of aligning their goals with ours are rudimentary and may not scale. Public funding to research this technical problem is essential if we wish future AI systems to remain safe.”
- Oscar Delaney (Institute for AI Policy and Strategy): “AI safety and security is a global challenge. But middle powers like Australia have an important role in shaping the global discourse and strengthening safety measures.”
- Peter Slattery (Lead at the MIT AI Risk Repository, MIT FutureTech): “I support the establishment of an Australian AI Safety Institute and the introduction of an AI Act. We don’t want advanced AI that is unsafe, untrustworthy, or unreliable—no one is better off in that scenario. Unfortunately, that may be what we are racing toward.”