FAQs

Are you saying that all AI systems are dangerous?

No. Australia’s Chief Scientist is right that today’s LLMs, and their personalisation to particular business use cases, are unlikely to pose catastrophic or existential risks. While there are valid ethical concerns that the Government is right to engage with, the AI systems that will benefit Australian industry today and in the near term, such as automated manufacturing and solutions that increase industry efficiency, are unlikely to pose extreme safety risks.

However, some future “foundation models”, which are trained using increasingly large amounts of data and compute, could pose such a risk. We have already seen large jumps in capability between successive models, and those jumps could get larger. Each time a new foundation model is developed, researchers can only test its capabilities and risks after the fact; they cannot be reliably predicted in advance. This makes it difficult to forecast when serious risks might emerge in the coming years or decades. That uncertainty is why we want safety for these kinds of systems to be prioritised urgently.

What do you mean by catastrophic or existential risk?

We do not offer a technical definition, but broadly we mean that AI could pose risks on a similar scale to pandemics or nuclear war. This could include societal collapse or large numbers of deaths. More information about categories of extreme risk is available from the Legal Priorities Project.

Are you catastrophising the risks of AI? What do you mean by ‘portfolio approach’?

No. In the same way that most people think it’s wise to have car insurance, we think it’s wise for the Australian Government to mitigate the “worst case scenarios” posed by future AI systems.

This is what we mean by a “portfolio” approach to risk management. An investment fund manager will select stocks they like, but will also hedge against other scenarios occurring. In the same way, we think methods to manage catastrophic and existential risks should be included in the Government’s portfolio of risk mitigations. It’s proper for the Government to anticipate a range of possible futures: working to get the best out of optimistic ones, while also reducing the chance that things go wrong and reducing the harm if they do.

Who is behind Australians for AI Safety?

Australians for AI Safety is volunteer-run, has received no funding from any source, and doesn’t ask for or accept funding from its supporters. Our supporters share common values; most importantly, we want AI to be safe, in Australia and globally.

Our thanks go to the Good Ancestors Project, which has volunteered ICT support and website development.

Please note: this FAQ was written by volunteers for Australians for AI Safety. It does not necessarily represent the views of signatories to the statement.