raava

Practical AI · Built in Melbourne

AI Chatbots that stay in your brand voice

Custom assistants grounded in your knowledge base. Citations on every answer. Human hand-off when the question warrants it.

Why most chatbots stay generic

Off-the-shelf chatbots answer the easy questions and invent the rest. The Australian SMEs we talk to see the same three failures repeat, no matter which platform they bought.

01

Hallucinated answers your team has to clean up.

Generic models confabulate when the question lands outside their training. Customers see a confident wrong answer; your team finds out in the support inbox three days later.

02

No source — the user can't verify, the auditor can't trace.

When the chatbot quotes a number with no citation, you have no way to prove the policy it referenced or audit where the figure came from. That's a compliance problem before it's a UX one.

03

Brand voice drift after the first three messages.

Out-of-the-box assistants slip into 'as an AI assistant' phrasing the moment a question strays into edge-case territory. The voice was meant to be the differentiator; the chatbot erodes it.

How we build it

Reliable chatbots come from grounding, not from cleverer prompts. We build retrieval-augmented generation (RAG) pipelines using LangGraph to orchestrate the read → retrieve → compose → review steps as discrete nodes. Your knowledge base — help docs, internal wikis, product manuals, ticket history — is ingested through a normaliser that handles inconsistent formatting, embedded into pgvector or Pinecone, and tagged with metadata your retrieval logic can filter on.

The Claude API handles composition, but only over passages we've explicitly retrieved — the model can't answer from training memory alone. Every response carries citations the user can click through to verify. Confidence-routed hand-off catches the questions where retrieval came up short and pushes them to a human reviewer instead of letting the bot guess. Brand voice is tuned through a small set of style rules and a handful of few-shot examples that hold across long conversations. The whole stack runs on Vercel infrastructure, or in your own cloud tenancy if data residency demands it.

Tools we lean on: LangGraph · Claude API · pgvector · FastAPI · LangChain · Pinecone

Pipeline shape · AI Chatbots

  1. Ingest
  2. Embed
  3. Retrieve
  4. Compose
  5. Hand-off
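The five steps above can be sketched in miniature. This is an illustration only: the function names are ours, and the keyword-overlap "embedding" is a toy stand-in for the real pipeline (LangGraph nodes, pgvector/Pinecone embeddings, Claude composition).

```python
# Toy sketch of the five pipeline steps. The bag-of-words "embedding"
# and overlap scoring are placeholders for real embeddings + retrieval.

def ingest(docs: list[str]) -> list[dict]:
    # Normalise formatting and attach an id for later citation.
    return [{"id": i, "text": d.strip().lower()} for i, d in enumerate(docs)]

def embed(chunks: list[dict]) -> list[dict]:
    # Stand-in for an embedding model: a set of words per chunk.
    for c in chunks:
        c["vec"] = set(c["text"].split())
    return chunks

def retrieve(index: list[dict], question: str, k: int = 2) -> list[dict]:
    # Rank chunks by word overlap with the question, keep the top k.
    q = set(question.lower().split())
    return sorted(index, key=lambda c: len(q & c["vec"]), reverse=True)[:k]

def compose(passages: list[dict]) -> dict:
    # Compose only over retrieved passages, citing each source id.
    return {"answer": " ".join(p["text"] for p in passages),
            "citations": [p["id"] for p in passages]}

def handoff_needed(passages: list[dict], question: str, threshold: int = 1) -> bool:
    # Route to a human when retrieval overlap is too weak to trust.
    q = set(question.lower().split())
    return max((len(q & p["vec"]) for p in passages), default=0) < threshold

index = embed(ingest(["Refunds are processed within 5 days.",
                      "Shipping to NZ takes 7 days."]))
question = "How long do refunds take?"
hits = retrieve(index, question)
result = None if handoff_needed(hits, question) else compose(hits)
```

The point of the shape, not the toy scoring: composition never runs on an empty or low-overlap retrieval, so the hand-off branch exists before the model ever gets to answer.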

What the chatbot does, end to end

Six capabilities every chatbot we ship includes by default.

Brand-voice grounding.

We tune the assistant against a small set of your real conversations and content samples. It writes the way your team writes — not 'as an AI assistant'.

Source-cited answers.

Every response links back to the source document the model retrieved. Click-through verification for the user; full audit trail for you.

Confidence-routed hand-off.

Below threshold, the conversation routes to a human reviewer with the user's full context attached. The bot never guesses past its knowledge.

Retrieval scoring and re-ranking.

Every retrieved passage gets a relevance score before composition. Low-relevance results trigger a fallback prompt or hand-off, not a fabricated answer.
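In rough outline, that scoring-and-fallback logic looks like the sketch below. The passage ids, scores, and threshold value are invented for the example; production scores come from the retriever and a re-ranker, and the threshold is tuned per deployment.

```python
# Sketch of relevance filtering with a fallback path.
RELEVANCE_THRESHOLD = 0.6  # example value, tuned per deployment

def rerank(passages: list[dict]) -> list[dict]:
    # Re-order candidates by relevance score, best first.
    return sorted(passages, key=lambda p: p["score"], reverse=True)

def select_for_composition(passages: list[dict]) -> dict:
    ranked = rerank(passages)
    if not ranked or ranked[0]["score"] < RELEVANCE_THRESHOLD:
        # Nothing trustworthy retrieved: hand off, don't fabricate.
        return {"action": "handoff", "passages": []}
    kept = [p for p in ranked if p["score"] >= RELEVANCE_THRESHOLD]
    return {"action": "compose", "passages": kept}

decision = select_for_composition([
    {"id": "faq-12", "score": 0.82},
    {"id": "wiki-3", "score": 0.41},
])
```

Here only the high-scoring passage survives into composition; if the best candidate had scored under the threshold, the whole turn would route to hand-off instead.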

Channel integration.

Web embed, Slack, Microsoft Teams, WhatsApp Business — pick the channels your customers already use. We integrate with your existing helpdesk and CRM.

Tenant-isolated data.

Your knowledge base stays in your tenancy. Anthropic doesn't train on the conversations. We document the data flow so your privacy officer signs off without surprises.

Packages come in three sizes

Most clients land on Scale. We re-quote against your knowledge base size and channel mix after the audit.

Automate

From $2,000 AUD

Single-source FAQ chatbot. Web embed. Email hand-off when retrieval comes up short.

  • Single source ingest
  • Web chat embed
  • Source citations on answers
  • Email hand-off path
Most popular

Scale

From $5,000 AUD

Multi-source RAG with confidence-routed hand-off, brand-voice tuning, and a primary channel integration.

  • Up to 5 source types
  • Confidence-routed hand-off
  • Brand-voice tuning
  • Slack, Teams, or web embed
  • Quarterly knowledge tuning

Transform

From $10,000 AUD

Multi-channel deployment with analytics dashboard, voice channel option, and ongoing retrieval refinement.

  • Unlimited sources
  • Multi-channel deployment
  • Conversation analytics dashboard
  • Voice channel option
  • Ongoing prompt + retrieval tuning

Real-world scenario · 2025

85% of support tickets answered in under a minute

An Australian SaaS company's support team was drowning. Two reps were handling 600+ tickets a week, with first-response time creeping past 14 hours and resolution time past 36. The product had grown faster than the team, and the help docs were three-quarters complete. They asked us to take the load off the inbox without sacrificing the voice their customers had come to expect.

We built a RAG-backed assistant grounded in their help docs, ticket history, and changelog. LangGraph orchestrated retrieval across pgvector for the docs and a separate index for resolved tickets. Claude composed the responses with citations to the source. Anything below 0.7 retrieval confidence routed to a human — with the conversation transcript pre-loaded — instead of letting the bot guess. Brand voice was tuned against 200 of their best-rated past replies.

Within six weeks, the assistant was answering 85% of inbound questions without human escalation, citing the source on every answer. The team stopped firefighting the inbox and started writing the docs the chatbot needed for the harder cases — which fed back into retrieval and pushed the resolution rate up further.

Read the full case study
85%

First-touch resolution rate

90%+

Answer accuracy with citation match

<60s

Average response time

Questions clients ask before they book the call

How do you stop the chatbot from hallucinating?

Grounding does most of the work. Every answer comes from passages we explicitly retrieved from your knowledge base — the model can't answer from training memory alone. Below the retrieval-confidence threshold, the conversation routes to a human with the full context attached. We measure citation accuracy on a held-out evaluation set during build and re-measure quarterly. If a question category can't reach the threshold reliably, we route it to hand-off by default rather than letting the bot guess.
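A minimal sketch of that grounding step: the prompt is built strictly from retrieved passages and instructs the model to answer from them or decline. The prompt wording and passage records here are illustrative, not our production prompt.

```python
# Build a composition prompt from retrieved passages only, with
# numbered citation markers. Wording is illustrative.

def build_grounded_prompt(question: str, passages: list[dict]) -> str:
    sources = "\n".join(
        f"[{i + 1}] ({p['doc']}) {p['text']}" for i, p in enumerate(passages)
    )
    return (
        "Answer ONLY from the numbered sources below. Cite sources as [n]. "
        "If the sources do not contain the answer, say so and stop.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )

prompt = build_grounded_prompt(
    "What is the refund window?",
    [{"doc": "refund-policy.md", "text": "Refunds are available for 30 days."}],
)
```

Because the model only ever sees numbered sources, every claim in the answer can be traced back to a passage — which is what makes the citation click-through and the quarterly accuracy re-measurement possible.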

What if my knowledge base is a mess?

We expect it to be. The first two weeks of every project are ingestion plus cleanup — deduplication, normalising inconsistent formatting, flagging out-of-date pages, and tagging metadata your retrieval logic can filter on. We also tell you upfront which sections aren't ready to index. Cleaning the knowledge base is a side benefit — the team writes better docs once they see what the assistant got wrong.

How accurate are the answers really?

Answer accuracy with citation match lands at 90%+ on clean source material; we measure it directly against a domain-specific evaluation set. Where a question requires multi-document reasoning or covers a fast-moving policy area, accuracy can drop into the 80s — the assistant flags low-confidence cases for human review before sending. We tell you the per-category accuracy at the end of week two so you can adjust the hand-off threshold, not discover it in production.

Where does my data live? Australia, or overseas?

Your choice. The default deployment runs on Vercel infrastructure in Sydney — your knowledge base and conversations stay in Australia. For stricter sovereignty, we deploy the pipeline to your own AWS or Azure tenancy in your region. Anthropic offers Claude through AWS Bedrock in the Sydney region, which we use for clients with explicit data residency requirements. Anthropic doesn't train on your data either way.

How long does a typical project take?

AI Chatbot projects on the Scale tier ship in 4-6 weeks from kickoff. Automate-tier projects ship in 2-3 weeks. Transform-tier projects (multi-channel, voice option, analytics dashboard) typically run 8-12 weeks. The first two weeks are always discovery + knowledge-base ingestion + brand-voice calibration — nobody writes pipeline code until the source corpus is understood.

Free 30-min audit · No prep required

See where a chatbot earns its keep.

Book a free 30-minute audit. We'll walk through your support volume, sketch the high-value automation candidates, and tell you whether a chatbot is the right fit — or not.