
AI for Australian healthcare clinics: where it actually saves hours

The phones never stop. A multi-site GP practice fields a call every two minutes between 8am and 6pm AEDT. A solo dental clinic loses a third of its inbound calls between 5pm Friday and 8am Monday. A physio with five rooms watches its receptionist juggle a triple-booked morning while the after-hours voicemail fills with people who will book somewhere else by lunch. The shape of the problem is consistent across the country. The phone is the front door, and the front door is the constraint.

Most of the AI conversation aimed at Australian clinics still arrives as a slide deck. A vision for clinical decision support. A roadmap for predictive analytics. A pitch for ambient scribing that needs a six-month change-management programme to deploy. The pitches are not wrong, but they are far from where the saved hours actually live for a clinic owner reading this in 2026. The hours live in the reception workflow — the calls picked up, the bookings made, the referral letters typed into the practice management system, the after-hours triage that currently routes to voicemail.

This piece walks through three patterns that pay back inside a quarter for an Australian clinic. The receptionist voice agent for after-hours phone calls. The integration layer that fits HotDoc, Cliniko, Halaxy, Coreplus, and Best Practice. The document pipeline that turns referral letters and intake forms into structured records. Each pattern meets the four properties of practical AI from the longer 2026 starter guide — runs without a supervisor, produces an artefact a human downstream uses, carries an audit trail, admits when it is unsure.

After-hours phone calls — the receptionist voice agent pattern

The receptionist voice agent picks up the calls a human cannot. Not the calls during the day — the front desk is for those. The calls at 7pm on a Tuesday. The calls at 9am Saturday. The calls during a triple-booked morning when the receptionist has two patients at the counter and three lines flashing.

The stack is mature enough to be boring. Twilio handles the inbound number and the call control. Vapi sits between the model and the audio stream, handling the voice activity detection, the turn-taking, the silence cues. ElevenLabs renders the outbound voice — a calm, neutral Australian timbre that does not pretend to be a human and does not announce itself as a robot either. Claude API runs the conversation logic, including the booking flow, the message-taking, the urgency triage that routes a chest-pain caller to a duty contact rather than a Wednesday afternoon slot.

The agent does five things, and only five. It identifies the caller, looks them up in the practice management system, offers the next available appointments with the doctor or clinician they ask for, books the slot if they confirm, and emails the receptionist a transcript and summary the next morning. It does not give clinical advice. It does not interpret symptoms beyond a hard-coded urgency triage. It does not improvise. The constraint is the design — a narrow scope is the reason it works.
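The scope constraint can be enforced in code, not just in the prompt. A minimal sketch of the whitelist, with hypothetical action names: anything outside the five becomes a hand-off rather than an improvisation.

```python
# Illustrative sketch only. The action names and payload shapes are
# hypothetical, not the actual deployment's interface.

ALLOWED_ACTIONS = {
    "identify_caller",
    "lookup_patient",
    "offer_slots",
    "book_slot",
    "queue_summary_email",
}

def dispatch(action: str, payload: dict) -> dict:
    """Route an intent to a handler, or refuse and signal a hand-off."""
    if action not in ALLOWED_ACTIONS:
        # Out of scope is always a hand-off, never an improvisation.
        return {"status": "handoff", "reason": f"out_of_scope:{action}"}
    return {"status": "ok", "action": action, "payload": payload}
```

The whitelist lives in the orchestration layer, so even a misbehaving model turn cannot reach an action the clinic never approved.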

Escalation discipline matters more than the conversation. The agent is told to hand off if the caller asks for something it has not been scoped to do, if the practice management system returns an error, if the caller’s tone signals distress, if a confidence threshold on the intent classifier dips below 80 on a 0-100 scale. The hand-off is to a duty receptionist’s mobile during business hours and a clinical on-call number out of hours. The threshold for hand-off is tuned weekly during the first eight weeks; after that it stabilises.
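The hand-off policy reduces to a small predicate. A sketch, assuming the 80-point threshold above; the distress markers and flag names are illustrative, not a real trigger list.

```python
# Illustrative escalation predicate. The marker list and flags are
# hypothetical examples; a real deployment tunes both weekly.

DISTRESS_MARKERS = {"chest pain", "can't breathe", "bleeding heavily"}

def should_hand_off(intent_confidence: int,
                    pms_error: bool,
                    transcript: str,
                    in_scope: bool,
                    threshold: int = 80) -> bool:
    """True if any escalation trigger fires; the call then routes to
    the duty receptionist or the clinical on-call number."""
    text = transcript.lower()
    return (
        not in_scope
        or pms_error
        or intent_confidence < threshold
        or any(marker in text for marker in DISTRESS_MARKERS)
    )
```

The point of writing it this way is that every trigger is auditable: the weekly tuning session reads the transcripts against one function, not a scattered set of prompt clauses.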

The numbers from a clinic running this pattern are public in the case study. A 12-clinic GP network in NSW recovered 38% of missed bookings in 90 days using a Twilio + Vapi + Claude voice agent integrated to the practice management system. Ninety-two per cent of after-hours calls were answered by the agent without waking a duty receptionist. The hand-off rate stabilised at 7% by week eight. The Practice Manager verified the numbers; the case study is anonymised.

For a clinic owner thinking about whether the pattern fits their phones, the voice AI service page walks through what a deployment looks like end to end. The shape is consistent across general practice, allied health, and dental.

Booking platform integrations — what fits HotDoc, Cliniko, Halaxy, Coreplus, Best Practice

The voice agent only works if it can write into the booking system the clinic already uses. Australian clinical software is fragmented in a way that surprises overseas vendors. Five major players cover most of the market, and each one exposes a different integration surface.

HotDoc sits in front of bookings for thousands of Australian clinics. It is patient-facing — the booking widget on a clinic’s website is usually HotDoc — and it pushes bookings into the underlying practice management system. For a voice agent, HotDoc’s API is the cleanest entry point if the clinic already runs HotDoc as the booking layer; the agent calls HotDoc, HotDoc syncs to the PMS. This works well for general practice and allied health.

Cliniko is the practice management system underneath a large slice of allied health and physio clinics. The API is well documented, the developer portal is responsive, and the patient + appointment + practitioner objects map cleanly to what a voice agent needs. A direct Cliniko integration is the standard pattern for a clinic that uses Cliniko as both the PMS and the booking layer.

Halaxy is the practice management system for a substantial share of psychology and counselling practices. The API surface is functional but narrower than Cliniko’s; integrations tend to handle bookings well and patient records less directly. For a voice agent on a Halaxy clinic, the pattern is to read availability from Halaxy and write the booking back through the API, with patient identification handled by phone-number lookup rather than full record sync.

Coreplus is heavily represented in allied health, physio, and rehabilitation. The integration surface includes a REST API and an export pipeline for reporting. Voice agents on Coreplus clinics tend to use the REST API for bookings and a separate audit trail in the orchestration layer rather than relying on Coreplus’s audit log alone.

Best Practice is one of the two dominant general practice clinical packages, alongside Medical Director. The integration surface is more conservative than the cloud-native players — it traditionally exposes data through a local SQL database rather than a hosted API. A voice agent on a Best Practice clinic usually sits behind an integration broker rather than calling Best Practice directly. The pattern adds a step but works reliably.
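Because each platform exposes a different surface, the agent logic is usually written against a single adapter interface, with one implementation per platform or broker. A sketch, with illustrative method names and an in-memory stand-in for testing; no real Cliniko, Halaxy, or Best Practice client is shown.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Slot:
    practitioner: str
    start: datetime

class BookingAdapter(ABC):
    """The surface the voice agent sees. One subclass per platform;
    method names here are illustrative, each real integration differs."""

    @abstractmethod
    def available_slots(self, practitioner: str) -> list[Slot]: ...

    @abstractmethod
    def book(self, patient_id: str, slot: Slot) -> str: ...

class InMemoryAdapter(BookingAdapter):
    """Stand-in for a real platform client, used in tests and shadow mode."""
    def __init__(self, slots: list[Slot]):
        self._slots = slots
        self.booked: list[tuple[str, Slot]] = []

    def available_slots(self, practitioner: str) -> list[Slot]:
        return [s for s in self._slots if s.practitioner == practitioner]

    def book(self, patient_id: str, slot: Slot) -> str:
        self.booked.append((patient_id, slot))
        return f"booking-{len(self.booked)}"
```

The adapter is why the extra broker step on Best Practice adds latency but not complexity for the agent: the conversation logic calls the same two methods either way.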

The orchestration layer that ties the voice stack to the booking layer is n8n in most of our deployments. The clinic owns the n8n instance, owns the credentials, owns the workflows. If the relationship with the integrator ends, the workflows do not. This is a deliberate choice — clinical software is too long-lived to lock into a black-box integration that lives inside a vendor’s tenancy. The business process automation service page explains the orchestration pattern in more depth.

Document processing for referral letters and intake

The second place hours leak in a clinic is the referral letter and the intake form. A referral letter from a GP to a specialist arrives as a PDF, a faxed scan, a Word document attached to an email, or — depressingly often — a phone call followed by a posted hard copy. An intake form arrives as a paper sheet, a HotDoc-collected form, an email attachment, or a handwritten note from the patient at the counter. Each document is then typed by a clinical or admin staff member into the practice management system.

The pipeline pattern is the same one described in the pillar’s section on document processing. The pipeline reads the document, classifies what it is, extracts the structured fields, validates each field with a confidence score, and either writes the record into the PMS or routes the document to a human review queue. For a referral letter, the structured fields are the referring practitioner, the patient identifiers, the requested specialty, the clinical question, the urgency, and the dates. For an intake form, the fields are the patient details, the medical history, the medications, the allergies, the consent declarations.

Claude API handles the heavy reading. The clinical vocabulary in a referral letter — anatomical terms, drug names, abbreviations — is variable enough that rule-based extractors break frequently; Claude’s tolerance for the long tail is the reason it sits at the centre of the pipeline. Validators score each field on a 0-100 scale. Below threshold, the document routes to a human queue rather than into the production database. The reviewer’s job shifts from typing to checking.
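The route-or-review decision is mechanical once each field carries a score. A sketch, assuming a hypothetical 85-point threshold and invented field shapes: any single low-confidence field sends the whole document to the review queue rather than the PMS.

```python
# Illustrative routing gate. The threshold and field names are examples;
# real pipelines tune the threshold per document type.

THRESHOLD = 85  # hypothetical cut-off on the 0-100 validator scale

def route_document(fields: dict[str, tuple[str, int]]) -> dict:
    """Each field maps to (extracted_value, confidence 0-100).
    One weak field is enough to route the document to a human."""
    low = [name for name, (_, score) in fields.items() if score < THRESHOLD]
    if low:
        return {"route": "review_queue", "flagged_fields": low}
    return {"route": "pms_write",
            "record": {name: value for name, (value, _) in fields.items()}}
```

The conservative rule, whole document to review on any weak field, is deliberate: it keeps the production database clean and gives the reviewer the full context rather than a single orphaned field.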

The output goes into the same Australian clinical packages as the booking flow — Cliniko, Halaxy, Coreplus, Best Practice. The mapping is bespoke per package because each PMS structures patient records differently. A generic connector does not survive a Privacy Act review at any scale; a dedicated mapping function does.
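In practice that means one small, dedicated mapping function per package. A sketch with invented field names; the real schemas differ per PMS and per deployment, and nothing here reflects an actual Cliniko or Halaxy payload.

```python
# Hypothetical per-package mappers. Field names are invented for
# illustration; the point is one audited mapper per package, not a
# generic connector.

def to_cliniko(record: dict) -> dict:
    return {"first_name": record["given_name"],
            "last_name": record["family_name"],
            "phone": record["phone"]}

def to_halaxy(record: dict) -> dict:
    return {"name": {"given": record["given_name"],
                     "family": record["family_name"]},
            "phone": record["phone"]}

MAPPERS = {"cliniko": to_cliniko, "halaxy": to_halaxy}

def map_record(package: str, record: dict) -> dict:
    mapper = MAPPERS.get(package)
    if mapper is None:
        raise ValueError(f"no mapper for package: {package}")
    return mapper(record)
```

A named mapper per package is also what survives review: each function is a short, auditable statement of exactly which patient fields cross the boundary into which system.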

A clinic running this pattern alongside a voice agent gets two compounding wins. The voice agent reduces the inbound phone load. The intake pipeline reduces the typing load. The receptionist’s day shifts from triage to judgment — the calls that need a human, the documents flagged for review, the patient at the counter who needs a face. Neither pipeline replaces a person. Each one removes a queue.

What a 90-day path looks like for a clinic

The path from “we should look at this” to “the agent is taking calls and the pipeline is reading referrals” is roughly 90 days for a single-site clinic. Multi-site clinics take 90 to 120 days depending on how many practice management instances need to be wired in.

Days 1 to 14: a paid audit. A half-day workshop with the practice manager and the clinical lead, a workflow inventory across reception and intake, a feasibility ranking on the candidate workflows, and a written recommendation. Cost: 2,000 to 5,000 AUD plus GST. The deliverable is a 90-day plan with a named first project, a price, and a definition of done. If the workflows are not ready — the booking system does not expose an API, the call volume is too low to pay back, the clinical mix is too complex for an off-the-shelf urgency triage — the recommendation says so. The healthcare audit page walks through what the workshop covers.

Days 15 to 60: pilot build. The first project is usually the receptionist voice agent against after-hours calls only. The agent is built, integrated to the booking layer, tuned against the clinic’s real call patterns, and deployed in shadow mode for the first fortnight — taking calls, drafting bookings, but routing every booking to a human before it commits. Shadow mode is the calibration phase. Cost: 5,000 to 15,000 AUD plus GST depending on scope and the integration surface.
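Shadow mode is a one-line gate in the orchestration layer. A sketch with illustrative names: during calibration, every drafted booking routes to a human instead of committing.

```python
# Illustrative shadow-mode gate. Queue names are hypothetical; in a real
# deployment the "queues" are an n8n workflow and the PMS write step.

def commit_or_queue(draft_booking: dict, shadow: bool,
                    human_queue: list, pms_writes: list) -> str:
    if shadow:
        human_queue.append(draft_booking)   # human confirms before commit
        return "queued_for_human"
    pms_writes.append(draft_booking)        # live: write straight through
    return "committed"
```

Flipping the `shadow` flag is the entire cut-over at day 60, which is why the calibration fortnight carries no booking risk.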

Days 60 to 90: live deployment and tuning. The agent moves from shadow to live. The hand-off threshold is tuned weekly against the transcripts. The escalation paths get pressure-tested with real distress calls and real after-hours urgency. By day 90, the agent is handling the after-hours volume in full and the practice manager is reviewing the next workflow on the audit’s list — usually the intake pipeline.

Day 91 onwards: the second project. Most clinics ship the intake pipeline next, then the knowledge management surface across both. The audit trail and the validators built in the voice project become the foundation for the document pipeline. Each project compounds the last. The studio’s role thins as the in-house team takes over the operating rhythm.

The 90-day shape is not a marketing line. It is the median across the clinic builds the team has run. Some ship in 60 days. Some take 120. The variance lives in the booking platform integration depth and the clinical mix, not the model side of the build.


If your clinic wants to see what fits, we run a free 45-minute audit — no slides, just a walk through your current workflows.