Mandatory Guardrails
A dedicated AI Act with mandatory guardrails for high-risk AI systems will both protect Australians and create the certainty businesses need to innovate.
What we asked the candidates
We put the same question, with the same supporting detail, to every candidate to ensure the scorecard is fair and accurate. Below is the exact question and detail we provided, followed by more policy specifics about what delivering this commitment could look like.
Our question
Do you support an Australian AI Act that applies mandatory guardrails to AI developers and deployers?
The detail
Legislation that mandates guardrails for high-risk and general-purpose AI systems, ensuring their safe design, deployment, and oversight in line with emerging global best practices. This would include mandatory guardrails similar to those at Attachment E of DISR's Proposals paper for introducing mandatory guardrails for AI in high-risk settings.
Why we need an AI Act with mandatory guardrails
As artificial intelligence transforms our economy and society, both businesses and individuals need clarity about how these powerful systems will be governed. An Australian AI Act could establish a framework where innovation can flourish while protecting against serious harms, creating the regulatory certainty that allows companies to confidently invest in AI development and implementation.
Effective regulation generally doesn't hinder progress—it often enables it. By setting clear standards and liability frameworks, Australia can create a stable environment where developers understand their responsibilities and users can trust the technology they're adopting. This approach mirrors successful regulation in other domains, like aviation, where safety standards have enabled rather than constrained industry growth.
Australian Government bodies have already recognised the need for effective guardrails to manage the significant risks that powerful AI systems could pose. The Department of Home Affairs has identified AI's potential to enable sophisticated cyber attacks, accelerate bioweapon development, and amplify disinformation campaigns. Similarly, the Senate Select Committee on Adopting AI warns about frontier AI systems' capabilities to design chemical weapons, exploit vulnerabilities, and potentially evade human control.
Importantly, this regulatory approach would focus specifically on high-risk AI while allowing innovation to flourish through a lighter-touch approach in lower-risk areas. Rather than creating unnecessary new agencies, it could leverage existing regulators where possible, adding specialised oversight only where gaps exist.
We acknowledge that the legislative process is complicated, that there are many valid paths to achieving these goals, and that no individual candidate has the power to create legislation on their own.
How an Australian AI Act could work
Good Ancestors' Mandatory Guardrails submission outlines a practical approach that:
- Separates regular AI from "AI with systemic risk" that could cause widespread harm
- Tailors rules to different AI types rather than using "one-size-fits-all" rules
- Places more responsibility on developers than users, especially for risky systems
- Proposes banning only those systems that could cause critical harm
- Creates clear liability frameworks so businesses have certainty about responsibilities
A comprehensive Australian AI Act could include:
- Mandatory risk testing for high-risk systems
- Clear liability frameworks
- Transparency requirements
- Oversight structures
- Safety measures matched to each system's risk level
This would align with the approaches outlined in DISR's Proposals paper for introducing mandatory guardrails for AI in high-risk settings.
Why mandatory guardrails are a good investment
Aviation safety rules didn't slow down the industry—they helped it grow by building trust. Similarly, good AI rules can create the trust needed for Australian businesses and consumers to embrace AI's benefits safely.
Studies suggest AI could add between $45 billion and $600 billion to Australia's economy by 2030.
However, achieving these gains requires building public trust through appropriate safeguards. Estimates suggest a 156% difference in value between slow and fast adoption scenarios: trust is the key to unlocking this potential.
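As a rough illustration of what a 156% gap between scenarios implies (the baseline figure below is hypothetical, not an estimate from the studies cited above):

```python
# Illustrative only: what a 156% difference between adoption scenarios means.
# The slow-scenario baseline is a hypothetical figure, not a cited estimate.
slow_adoption_value = 100.0  # $bn captured under slow adoption (hypothetical)
gap = 1.56                   # a 156% difference between scenarios

fast_adoption_value = slow_adoption_value * (1 + gap)
print(fast_adoption_value)  # 256.0
```

In other words, under this reading of the figure, a fast-adoption scenario delivers roughly two and a half times the economic value of a slow one, which is why the trust that enables faster, safer adoption matters so much.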
The 2024 SARA survey shows that 94% of Australians want Australia to play a leading role in the international governance and regulation of AI. This overwhelming public support reflects both concern about AI risks and desire for Australia to help shape responsible standards for development and deployment.
Without an Australian AI Act with mandatory guardrails, we lack a comprehensive framework for evaluating high-risk AI systems before deployment. This creates regulatory uncertainty for businesses while leaving Australians vulnerable to potentially catastrophic harms. An AI Act would establish clear standards for safety assessment, transparent verification mechanisms, and specific protections against severe risks—creating both the safety and certainty needed for successful AI adoption.
How an Australian AI Act would align globally
The Australian AI Act could align with global frameworks like the EU AI Act, potentially creating consistent international standards while addressing Australia's specific needs and helping to protect our national interests.
This would help fulfil our commitments under the Seoul Declaration and Bletchley Declaration, which acknowledge the potential for "serious, even catastrophic, harm" from frontier AI systems and commit nations to developing appropriate governance frameworks.
By taking action now, Australia can help shape global standards while ensuring our regulatory approach both protects Australians from AI's most significant risks and creates the foundation of credible trust necessary for responsible AI development and adoption. With the right guardrails in place, we can safely unlock AI's substantial economic and social benefits.
Could an Australian AI Act antagonise the US or cause a trade war?
Australia already imposes common sense safety obligations on imported products, regardless of their country of origin. For instance, cars imported to Australia have to meet our safety standards regardless of whether they are made in Japan, China, the US, Europe or elsewhere. An Australian AI Act could regulate models coming out of US labs, like OpenAI, the same as models coming out of European labs, like Mistral AI, or Chinese labs, like DeepSeek. A universal safety standard helps level the playing field. It doesn't target a country or company.
Take Action
Support this initiative by signing our open letter and letting politicians know that AI safety matters to you.