
Does David Shoebridge support AI safety?

Somewhat. As a member of The Greens, David Shoebridge is expected to support some of the expert-recommended AI safety policies.

Senator David Shoebridge (Greens) represents NSW in the Senate.

In public statements he has warned that unchecked AI, especially deepfakes, poses serious risks to democratic processes and national security. He has called for essential guardrails and robust law reform along the lines of European models, though he has not explicitly detailed measures for mitigating catastrophic or existential AI risks.

His score on expert-recommended AI safety policies

Over ... experts, public figures, and concerned citizens endorsed these policies in their open letter.

AI Safety Institute

A well-resourced, independent technical body to assess AI risks and advise on safety standards.


David Shoebridge partially supports (from party policy)

Party Notes: While the Greens strongly advocate for AI oversight, they don't specifically mention an AI Safety Institute by name. They do call for "an independent and expert regulator that will assess and respond to these new technologies including concerns around data sovereignty" and emphasise that "Australia needs a National Strategy on AI." Their focus appears to be more on comprehensive regulatory frameworks rather than specifically creating a dedicated safety institute as outlined in the policy.


Mandatory Guardrails

A dedicated AI Act with mandatory guardrails for high-risk AI systems will both protect Australians and create the certainty businesses need to innovate.


David Shoebridge supports (from party policy)

Party Notes: The Greens "fully support an Australian AI Act and an independent AI regulator." They explicitly acknowledge that "AI technology is rapidly outpacing government regulation" and highlight "the need for guardrails when it comes to AI." Their plan includes requiring "Digital Rights Impact Assessments for machine learning and other AI technologies that can negatively impact the public" and ensuring "a chain of accountability for when AI goes wrong or causes harm."
