
Dr. Farrell Cahill, PhD
Mar 26, 2026
Healthcare workers are using unauthorized AI tools to write clinical notes. The real problem isn't the workers; it's the systems that left them no better option.

Healthcare workers in New Zealand were recently caught using ChatGPT and other free AI tools to write clinical notes. Health NZ responded with a memo threatening formal disciplinary action, calling the practice "strictly prohibited due to data security, privacy and accountability concerns."
The reaction is understandable. Patient data entered into consumer AI tools creates real compliance exposure. But the response misses the more important question: why are trained clinicians, people who understand confidentiality requirements, turning to unauthorized tools in the first place?
The answer is straightforward. They are drowning in documentation, and the systems they have been given are not keeping up.
The Health NZ incident is not an isolated case. It reflects a pattern playing out across healthcare systems worldwide. Clinicians are spending more time on paperwork than on patients, and they are improvising with whatever tools they can find.
A Salesforce survey of more than 14,000 workers across 14 countries found that more than half of workplace generative AI users do so without formal approval from their employers. In healthcare specifically, 87% of workers reported that their company lacks clear policies around generative AI use (Salesforce, "The Promises and Pitfalls of AI at Work," 2024). That is not a workforce discipline problem. That is a leadership vacuum.
The New Zealand Public Service Association, which represents health and addiction service workers, made this point directly. National secretary Fleur Fitzsimons noted that clinical staff were turning to AI tools because of the "enormous pressure" they were under. She called the disciplinary memo "a warning shot that will make staff afraid to ask questions or seek help."
Meanwhile, Health NZ has been cutting the very teams responsible for digital systems and IT support. When organizations remove infrastructure but keep the workload, workers will find their own solutions. That is not defiance. That is survival.
The compliance concern is legitimate. When a clinician pastes patient information into ChatGPT, that data leaves the organization's control. Free AI tools are not designed for healthcare data. They lack tenant isolation, audit trails, and the regulatory compliance frameworks that clinical environments require.
Under frameworks like HIPAA in the United States, PIPEDA and PHIPA in Canada, and New Zealand's Health Information Privacy Code, organizations are responsible for how patient data is handled. Using consumer AI tools for clinical documentation creates exposure that no disciplinary memo can retroactively fix.
The problem compounds when organizations respond by banning AI entirely rather than providing compliant alternatives. A blanket prohibition does not eliminate the demand. It pushes the behavior underground, making it harder to detect and impossible to audit.
The same Salesforce survey found that 64% of generative AI users have passed off AI-generated work as their own (Salesforce, 2024). In a clinical setting, that means AI-drafted notes entering the medical record without any indication that a machine was involved. Physicians sign off on documentation they did not fully write, creating accountability gaps that surface during litigation, insurance disputes, and regulatory audits.
The instinct to prohibit unauthorized AI use is understandable. But prohibition without alternatives has a poor track record in healthcare technology adoption.
When electronic health records were introduced, clinicians initially resisted the shift from paper. The organizations that succeeded were the ones that provided proper training, adequate support, and tools that actually reduced the documentation burden rather than adding to it. The organizations that simply mandated compliance without addressing workflow reality saw the slowest adoption and the highest rates of workarounds.
The same dynamic applies to AI. Clinicians are not using ChatGPT because they enjoy breaking rules. They are using it because they are spending hours on documentation that an AI tool can compress into minutes. The task is real. The need is legitimate. The tool they chose is the wrong one.
The correct response is not to eliminate AI from clinical workflows. It is to provide AI tools that are purpose-built for healthcare: tools that process documents within compliant infrastructure, maintain audit trails, preserve source attribution, and never expose patient data to consumer platforms.
There is a meaningful difference between a clinician pasting patient notes into ChatGPT and an organization deploying an AI platform designed for regulated industries.
Purpose-built healthcare AI operates within the organization's own infrastructure. Data stays tenant-isolated. Processing happens within environments that meet SOC 2 Type II, HIPAA, and PIPEDA requirements. Every AI-generated output links back to its source material, maintaining the chain of evidence that clinical and legal work demands.
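To make the audit-trail and attribution requirements concrete, here is a minimal sketch of what governed AI drafting can look like in code. It is illustrative only, not any vendor's actual API; every name in it (draft_clinical_note, AuditEntry, call_model, the model ID) is hypothetical.

```python
# Minimal sketch: every AI-generated draft is paired with an audit record
# that names the requesting clinician, the source documents, the model,
# and a tamper-evident hash of the output. All identifiers are hypothetical.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditEntry:
    user_id: str            # clinician who requested the draft
    source_doc_ids: list    # records the draft was generated from
    model_id: str           # which model produced the output
    output_hash: str        # fingerprint of the draft for later verification
    timestamp: str

def call_model(prompt: str) -> str:
    """Placeholder for a call to a model hosted inside compliant,
    tenant-isolated infrastructure (never a consumer endpoint)."""
    return f"[draft generated from: {prompt[:40]}...]"

def draft_clinical_note(user_id: str, source_docs: dict) -> tuple:
    """Generate a draft note plus an audit entry linking it to its sources."""
    prompt = "\n".join(source_docs.values())
    draft = call_model(prompt)
    entry = AuditEntry(
        user_id=user_id,
        source_doc_ids=sorted(source_docs),
        model_id="internal-clinical-model-v1",
        output_hash=hashlib.sha256(draft.encode()).hexdigest(),
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return draft, entry

if __name__ == "__main__":
    draft, entry = draft_clinical_note(
        user_id="clinician-042",
        source_docs={"lab-2024-118": "CBC results...",
                     "visit-2024-291": "Progress note..."},
    )
    print(draft)
    # In practice the entry would go to an append-only audit log.
    print(json.dumps(asdict(entry), indent=2))
```

The point of the sketch is the pairing: the draft never exists without a record of who asked for it, what it was built from, and what it looked like when it was produced. That is exactly what a clinician pasting text into a consumer chatbot cannot provide.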
Platforms like Sky AI were built specifically for this use case: processing unstructured medical documents, generating chronological timelines, categorizing records, and enabling conversational queries against case files, all within a compliance framework designed for healthcare, insurance, and legal environments.
The distinction matters because the alternative is already here. Clinicians have demonstrated that they will use AI for documentation. The only question is whether that AI operates within a governed, auditable system or through consumer tools that create the exact risks Health NZ's memo was trying to prevent.
The Health NZ incident should serve as a signal, not just for New Zealand, but for every healthcare organization grappling with AI adoption. The workforce is ahead of the policy. Clinicians are already using AI. They are doing it because the documentation burden is real and the tools they have are insufficient.
Organizations that respond with discipline alone will find themselves playing an endless game of enforcement. Those that respond by deploying compliant AI infrastructure will solve two problems simultaneously: they will reduce the documentation burden that drives unauthorized tool use, and they will bring AI adoption under governance where it can be monitored, audited, and improved.
The question is not whether AI belongs in clinical documentation. That debate ended the moment clinicians started using ChatGPT on their own. The question now is whether organizations will provide the right tools, or keep punishing workers for solving the problem themselves.
Healthcare professionals exploring compliant AI solutions for document processing can evaluate platforms purpose-built for regulated industries, where data governance, source attribution, and tenant isolation are foundational rather than aftermarket additions.