NDIS Computer-Generated Plans Prove We Can't Wait on AI Regulation
When government deploys algorithmic cruelty while claiming to deliberate carefully
On 29 October 2025, Industry Minister Tim Ayres told business leaders that the government would “take our time” on AI regulation, focusing on voluntary frameworks rather than mandatory guardrails.
On 31 October, and again on 1 November, I wrote about the urgent need for AI regulation now, and about how people are already being directly harmed by its absence.
One month later, Guardian Australia revealed that Health Minister Mark Butler’s government will roll out computer-generated support plans for 750,000 Australians with disabilities, with staff forbidden from amending algorithmic decisions and appeal rights gutted.
This isn’t taking our time. This is deploying AI to exercise power over Australia’s most vulnerable citizens while claiming we need more consultation.
What’s Actually Happening
Under the new Framework Planning model rolling out from mid-2026, an assessor (an APS level 6 administrative staff member, for whom an allied health background is “desirable but not mandatory”) conducts a conversation with NDIS participants. That conversation feeds into the I-CAN tool, which generates a budget.
Here’s the problem: once the algorithm generates that budget, NDIA staff cannot amend it. As NDIA executive Lawrie Thomas stated, “it’s not a recommendation”: the algorithm provides the actual budget, and staff can only “accept the assessment which generated the budget, or request a replacement assessment.”
Staff cannot override based on professional judgement, participant circumstances, or independent medical evidence. There’s no requirement to consider independent medical evidence at all.
Appeal rights are equally concerning. Currently, the Administrative Review Tribunal can directly amend inadequate plans. Under the new system, tribunals can only send plans back for another algorithmic assessment. Last year, NDIA decisions were changed in 73% of the 7,132 appeals. That correction avenue is now closing.
The Fundamental Flaw: AI in Charge, Not Humans
I support using AI. For 2.5 years, my colleague Steve Davies and I have been training AI models on ethical frameworks because we see AI’s tremendous potential.
AI excels at pattern recognition, data processing, mapping complexities. These capabilities can genuinely improve outcomes.
Here’s the overriding principle: AI should do the grunt work. Humans must deliver nuance, judgement, and oversight.
In a well-designed system, I-CAN would process data, identify patterns, generate recommendations. Then a qualified human would review, consider independent evidence, apply professional judgement, and make the final decision.
That’s AI augmenting human capability. That’s humans controlling AI.
What the NDIS implements is the inverse: the algorithm decides, and humans rubber-stamp. Staff can only accept the output or request that the algorithm run again.
This is AI controlling humans. It’s a catastrophic design flaw.
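To make the design difference concrete, here’s a minimal sketch in Python. Every name in it is hypothetical; it illustrates the two architectures, not the NDIA’s actual system. The only difference between the two functions is whether a human has the authority to amend the figure before it becomes binding.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Assessment:
    """A single algorithmic assessment (all fields hypothetical)."""
    participant_id: str
    generated_budget: float  # the figure the tool produces


def budget_as_deployed(assessment: Assessment) -> float:
    # The model as reported: the algorithm's figure IS the budget.
    # Staff may only accept it or request a fresh assessment;
    # there is no branch where a human amends the number.
    return assessment.generated_budget


def budget_human_in_the_loop(
    assessment: Assessment,
    independent_evidence: list[str],
    reviewer_amendment: Optional[float],
) -> float:
    # A well-designed flow: the algorithm's figure is only a
    # recommendation. A qualified reviewer weighs independent
    # evidence from treating specialists and may amend it.
    if reviewer_amendment is not None:
        return reviewer_amendment  # professional judgement prevails
    return assessment.generated_budget  # recommendation accepted as-is


if __name__ == "__main__":
    a = Assessment("P-001", 42_000.0)
    print(budget_as_deployed(a))  # 42000.0, final, no discretion
    print(budget_human_in_the_loop(a, ["specialist report"], 55_000.0))  # 55000.0
```

Everything else about the two flows can be identical; deleting that single branch is what turns a decision-support tool into the decision-maker.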
The Question Government Hasn’t Asked
Professor Shoshana Zuboff identified the fundamental question about technology deployment: is it being used to empower people with information and agency, or to exercise power and control over them?
The NDIS implementation answers that question clearly, and wrongly.
Consider what’s removed from human discretion: professional judgement about individual circumstances, consideration of independent medical evidence from treating specialists, ability to account for context and trauma, tribunal power to amend inadequate plans.
Consider what the algorithm serves: Minister Butler’s explicit fiscal target (reducing NDIS growth from 12% to 5–6% annually), consistency in applying limits, and processing speed over quality.
This system isn’t designed to empower 750,000 Australians with disabilities. It’s designed to enforce budget constraints through algorithmic efficiency.
When NDIA staff warned that, without considering independent evidence, “you can see how there might be shortfalls”, they identified exactly this problem. The system prioritises fiscal targets over participant welfare.
That’s power over, not empowerment.
Evidence This Wasn’t Thought Through
The government lowered professional standards (allied health is now “desirable not mandatory”) while removing the very human discretion that could compensate for the algorithm’s limitations.
It claims “stronger oversight” while stripping the tribunal’s power to provide actual oversight by amending plans.
This echoes the “independent assessments” policy scrapped in 2021 after disability community criticism. The government claims it has “learned from that.” The evidence suggests otherwise.
Many participants have psychosocial disabilities, communication barriers, or trauma histories. When staff asked about contingencies for them, the responses were vague promises to “work out ways” later. This should have been foundational to the design.
The Perfect Accountability Void
Under Australian law, “the AI decided” is not a defence. Legal responsibility rests with the decision-maker, not the tool.
But the NDIS creates perfect accountability avoidance: staff cannot override (“I couldn’t change it”), the algorithm is a black box (participants can’t see how their budget was reached), tribunals cannot amend (no independent correction), and the government claims it is merely implementing an “evidence-based tool.”
Who’s responsible when a vulnerable Australian receives inadequate support? The government that designed it? The NDIA operating it? Staff who couldn’t override? The algorithm that generated it?
Everyone points elsewhere. The harmed person has no recourse.
This accountability void is a hallmark of moral disengagement: structuring systems so no one feels responsible for the harm they collectively enable.
What Proper Regulation Would Require
Human oversight with discretion: AI recommends. Humans decide. Staff need authority to override based on professional judgement.
Mandatory independent evidence: Incorporate evidence from treating specialists who know the participant.
Qualified decision-makers: Relevant professional expertise required.
Transparency: Participants must understand how algorithms reached recommendations.
Meaningful appeal rights: Independent review with power to amend, not just refer back.
Bias auditing: Regular testing for discriminatory outcomes.
Legal accountability: Clear responsibility when decisions cause harm.
The Timeline Tells the Story
29 October 2025: Ayres says take our time on AI regulation.
13 November 2025: NDIA briefs staff on removing human discretion.
Mid-2026: Rollout begins.
While government claims careful consideration, agencies deploy systems removing human judgement from decisions affecting 750,000 vulnerable Australians.
What Must Happen Now
Minister Butler: pause this rollout. Ensure that humans remain in charge, that qualified professionals conduct assessments, that independent evidence is mandatory, and that meaningful appeals are preserved.
Minister Ayres: stop calling AI regulation a “last resort.” Your government deploys algorithmic systems affecting fundamental rights now. The time for mandatory guardrails was yesterday.
AI can help deliver better, more efficient services, but only when properly governed, when humans retain decision-making authority, when vulnerable people are protected rather than processed.
Without proper regulation ensuring human oversight, we don’t get efficiency gains. We get algorithmic cruelty disguised as modernisation.
Australia’s 750,000 NDIS participants deserve better.
We all do.
You know what to do: write to your local MP and to Minister Tim Ayres, and demand AI regulation now.
Onward we press
Resources
Australia’s AI Policy Vacuum: When Government Retreats, Citizens Must Fill the Gap
How AI Rental Screening is Worsening Australia’s Housing Crisis