Australian AI leaders join global call for safe AI 

Released 21 July 2023

Today, Australia’s AI Safety leaders have united to call on the Minister for Industry and Science, Ed Husic MP, to take AI Safety seriously as he moves to regulate the industry.

Greg Sadler, spokesperson for Australians For AI Safety, said that despite growing calls around the world and mounting examples of real-world harms, the Minister continues to dismiss the possibility of catastrophic or existential harms from AI as “darkly negative”.

The letter, signed by a range of individuals and organisations representing a cross-section of Australian AI expertise, calls for Australia to ensure global agreements are mindful of longer-term risks and to increase its support for AI Safety research in Australian universities.

The full letter is available at AustraliansForAISafety.com.au

On 1 June, the Albanese Government took its first steps towards modernising laws relating to AI, including publishing a report by Australia’s Chief Scientist and inviting Australians to share their views on the best way to govern AI.

Greg Sadler said that the signatories to the letter aren’t being alarmist. They’re calling for a measured approach. They want AI Safety to be part of the mix, not ignored. 

“What’s alarming is that even deliberate and methodical bodies like the United Nations have recognised the potential for catastrophic or existential risks from AI, but the Australian Government won’t. We’re falling behind.”

On 12 June, the Secretary-General of the United Nations, António Guterres, said:

“Alarm bells over the latest form of artificial intelligence (AI) — generative AI — are deafening, and they are loudest from the developers who designed it.

“These scientists and experts have called on the world to act, declaring AI an existential threat to humanity on a par with the risk of nuclear war.

“We must take those warnings seriously.”

Richard Dazeley, Professor of Artificial Intelligence and Machine Learning at Deakin University and signatory to the letter, said there’s a range of practical actions Government could take to make AI safer.

“One of the main issues is transparency. AI labs need a culture of transparency. Our regulation needs to require transparency. And we need to do more work to ensure we understand how emerging AI models work and what their capabilities are.

“It’s simply not safe or ethical to release increasingly advanced AIs before we have tools and frameworks to analyse and monitor them.”

Singapore has recently established the AI Verify Foundation, which is developing AI testing tools to enable safe and responsible AI. AI Verify is similar to the European Centre for Algorithmic Transparency, which performs technical tests to “decode algorithmic black boxes”. Similar safety-focused government labs have been proposed in other countries, including the UK. Australia has yet to propose a national lab of this kind.

Jenna Ong, a Canberra community organiser, is glad that experts are calling attention to this issue.

“Community groups around the country are getting together to discuss their responses to Ed Husic’s consultation. A lot of us think that governments should be listening to AI Safety experts and taking a longer-term perspective which includes existential or catastrophic risks to humanity.

“I don’t know for sure if AI is an existential risk or not, but surely it’s worth putting in place common-sense mitigations, just in case.”

Bridget Loughhead, an attendee at a recent forum in Melbourne, said: “It’s honestly disappointing that the Government can release a 42-page AI discussion paper without mentioning the main thing people are talking about. Australia has such a strong history of leadership on nuclear safety and biosecurity. I want to live in an Australia that can lead on AI safety as well.”

AustraliansForAISafety.com.au 

Contact: Greg Sadler
Email: greg@australiansforaisafety.com.au
Phone: 0401 534 879 

Photo: A Canberra community group spends their evening at The Food Co-op working on their calls for Government to think about the long-term future as it moves to regulate AI.

Photo: A Melbourne community group meets to draft submissions to the “Supporting Responsible AI” process, calling for government to address catastrophic and existential risks.
