We Replaced 2,000 Daily Calls With AI. Nobody Lost Their Job.
A 2,000-call-a-day phone operation disappeared from a London law firm last quarter. No redundancies. No internal revolt. No all-hands meeting full of nervous body language. The reason is simple, and worth dwelling on: the people answering those calls were solicitors, paralegals and receptionists — doing work that was never supposed to land on their desks.
This is the part of the AI-and-jobs conversation that keeps getting skipped. Before you can have an honest argument about whether AI is taking people's jobs, you have to look at what those people were actually doing. In many cases the answer is: work they were overqualified for, underpaid against, and quietly resentful of. Those are the workloads that move first.
What 2,000 calls a day actually looks like
Most people hear "2,000 calls a day" and picture a contact centre. In this firm, there wasn't one. The calls landed on the desks of fee-earners, their assistants, and a rotating front-of-house team. A quick sample of a Tuesday morning:
- "Any update on my case?" — about 40% of volume
- "I need to move my appointment." — 15%
- "I've had a letter asking for documents — where do I send them?" — 10%
- "I want to speak to [name] — is she in today?" — 10%
- New enquiries and referrals — 15%
- Everything else — 10%
Roughly 75% of the calls had no legal content at all. They were status updates, calendar moves, and signposting. The average handling time was 90 seconds. The average person answering them billed at £180 an hour.
Do the sum. A paralegal taking 20 of these calls a day loses about 30 minutes of direct handling time — and closer to 90 minutes once you price in the cost of context switching back into a drafting task. Across a firm of 80 fee-earners, that isn't a productivity problem. It's a category error.
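The sum above is easy to restate explicitly. The figures below are the article's own, except the 3× context-switch multiplier, which is the article's estimate rather than a measured number:

```python
# Back-of-envelope cost of admin calls landing on fee-earners.
# All inputs come from the article; the 3x context-switch
# multiplier is the article's estimate, not a measurement.

CALLS_PER_DAY = 20        # admin calls one paralegal fields per day
HANDLE_SECONDS = 90       # average handling time per call
SWITCH_MULTIPLIER = 3     # handling time -> true cost incl. refocusing
BILL_RATE_GBP = 180       # hourly billing rate of the person answering
FEE_EARNERS = 80

direct_minutes = CALLS_PER_DAY * HANDLE_SECONDS / 60   # 30.0
true_minutes = direct_minutes * SWITCH_MULTIPLIER      # 90.0
daily_cost_each = true_minutes / 60 * BILL_RATE_GBP    # 270.0 (GBP)
firm_daily_cost = daily_cost_each * FEE_EARNERS        # 21600.0 (GBP)

print(f"{direct_minutes:.0f} min direct, {true_minutes:.0f} min true cost")
print(f"£{daily_cost_each:,.0f}/person/day, £{firm_daily_cost:,.0f}/firm/day")
```

At £180 an hour, those "free" 90-second calls price out at roughly £270 per person per day before any legal work gets done.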
The real job the AI took
The job the AI took isn't "solicitor". It's "human telephone exchange". It's the work of routing, acknowledging, checking a case management system, and reading back the status in plain English. The work is cognitively light, emotionally draining, and endlessly interrupting. In any other industry it would have been carved out decades ago. In legal services it wasn't — because the "support function" kept getting promoted upwards until it was being done by people qualified to draft a shareholders' agreement.
Here's what the AI voice agent now handles end-to-end:
- Status updates. It reads the case management system, returns the latest update in natural language, and logs the interaction.
- Appointment management. It reschedules, cancels, and confirms — writing back to the calendar and the CRM.
- Intake. It qualifies new enquiries against practice-area rules and books the right kind of consultation with the right fee-earner.
- Document chasing. It confirms what's been received, what's still outstanding, and where to send missing items.
- Routing. For anything outside its scope, it routes to a warm human with a full context summary already written.
Here's what it does not do: give legal advice, make judgement calls on a case, handle a distressed caller without escalation, or commit the firm to anything. That boundary is not a limitation of the technology. It's a design choice that maps to the SRA's Standards and Regulations — specifically the requirement that clients receive a proper standard of service and that advice comes from a qualified person.
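The scope boundary described above can be sketched as a routing table. The intent names and the escalation rule here are illustrative, not the firm's actual configuration:

```python
# Illustrative routing table for the voice agent's scope boundary.
# Intent names are hypothetical. The design point: anything outside
# the table, or any sign of distress, escalates to a human with a
# context summary, rather than being handled by the agent.

IN_SCOPE = {
    "status_update":  "read case system, report latest update, log call",
    "appointment":    "reschedule/cancel/confirm, write back to calendar",
    "intake":         "qualify against practice-area rules, book consult",
    "document_chase": "confirm received/outstanding, give return address",
}

DISTRESS_MARKERS = {"distressed", "complaint", "urgent_legal"}

def route(intent: str, caller_flags: set) -> str:
    """Return the agent's action, or escalate with a context summary."""
    if caller_flags & DISTRESS_MARKERS:
        return "escalate: warm human handover with context summary"
    if intent in IN_SCOPE:
        return f"agent: {IN_SCOPE[intent]}"
    # Legal advice, judgement calls, commitments: never the agent's job.
    return "escalate: warm human handover with context summary"

print(route("status_update", set()))
print(route("legal_advice", set()))
```

The important property is that the default path is escalation: the agent only acts when an intent is explicitly in scope, which is how the SRA boundary gets enforced in practice.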
Where the humans went
Here is the part most coverage of AI deployments misses. The firm did not make anyone redundant. They also did not magically conjure new roles. What they did was boring and specific:
- Receptionists went back to receiving. The front-of-house team stopped being a phone-answering function and became the first point of contact for clients walking in — which, in a firm doing a lot of immigration and family work, is emotional, sensitive work that needs a human face.
- Paralegals got their afternoons back. The ninety minutes a day that used to disappear into the phone came back as drafting time. On the quieter days, they got time to learn.
- Fee-earners stopped being interrupted. The partners and senior associates had been losing 20–30 minutes a day to "quick questions" that weren't. That isn't a productivity gain — it's a sanity gain.
- The overnight problem solved itself. The firm had been losing enquiries after 6pm because no one was picking up. Out-of-hours capture tripled in the first full month.
Nobody's headcount shrank. Hiring slowed, yes — a planned intake of two additional paralegals became one — but nobody was pushed out. The AI closed the gap between what the firm needed and what the firm had already paid for, and the humans moved into the work they were hired to do.
The economics nobody wants to talk about
The common objection is that this will cost a fortune. It doesn't — and this is where the pricing model matters more than the technology itself.
If you rent AI voice capacity by the minute, a 2,000-call-a-day operation at 90 seconds a call is 3,000 minutes a day, or roughly 750,000 minutes a year. At the per-minute rates charged for fully managed voice AI, that is a high six-figure to seven-figure annual bill. That's what kills most of these projects at the CFO stage.
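The minute arithmetic is worth checking explicitly. The working-day count and the per-minute rate below are assumptions, since vendor pricing varies widely; the call volume and handling time come from the article:

```python
# Annual minute volume and metered cost at an assumed per-minute rate.
# ASSUMPTIONS: 250 weekday-only working days/year, and a £/minute
# rate typical of a fully managed premium vendor. Call volume and
# handling time come from the article.

CALLS_PER_DAY = 2000
MINUTES_PER_CALL = 1.5
WORKING_DAYS = 250            # assumption
RATE_GBP_PER_MIN = 1.40       # assumption

daily_minutes = CALLS_PER_DAY * MINUTES_PER_CALL    # 3,000
annual_minutes = daily_minutes * WORKING_DAYS       # 750,000
annual_bill = annual_minutes * RATE_GBP_PER_MIN     # ~£1.05m

print(f"{annual_minutes:,.0f} min/year -> £{annual_bill:,.0f} metered")
```

Even at a fraction of that assumed rate the bill stays in six figures, every year, and grows in step with success.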
What we deployed instead is an on-premises, fixed-licence voice platform. The firm owns the infrastructure. The cost is a one-off build and a predictable annual licence, not a per-minute meter. Call volume can triple — which it will, because AI capture drives more volume — without the cost curve bending with it.
The data angle matters here too. Law firms in England and Wales operate under strict confidentiality rules and the ICO's guidance on special category data. For most firms, sending client voice audio through a third-party cloud vendor's LLM every time a person rings in is not a risk they want on the register. On-prem settles the commercial and the regulatory argument in the same stroke.
This is the CapEx versus OpEx question the industry isn't having openly. A ten-year software contract at £30,000 a month is £3.6 million, and at the end of it you own nothing. The same money deployed once buys an asset you control, in an infrastructure you audit, with a cost profile that doesn't scale linearly with how successful you get.
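The ten-year sum above is a single multiplication, but it is worth laying next to the alternative. The one-off build figure below is a purely illustrative placeholder, since the article gives no number for it:

```python
# Ten-year cost of the metered contract described above.
# The £30,000/month figure comes from the article; the one-off
# build cost is a PLACEHOLDER ASSUMPTION for illustration only.

MONTHLY_SAAS_GBP = 30_000
YEARS = 10
ONE_OFF_BUILD_GBP = 1_200_000   # placeholder, not the article's figure

saas_total = MONTHLY_SAAS_GBP * 12 * YEARS   # 3,600,000 and no asset at the end

print(f"SaaS over {YEARS} years: £{saas_total:,}")
print(f"One-off owned build (placeholder): £{ONE_OFF_BUILD_GBP:,}")
```

The structural point survives any particular placeholder: the metered line keeps running forever, while the owned line stops.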
Why this is the model, not the exception
The legal example is vivid because the overqualification gap is so obvious. According to the Law Society's annual statistics, there are more than 160,000 practising solicitors in England and Wales, and a significant share of their working time is consumed by administrative contact that nobody planned to put there. But the underlying pattern is not legal-specific.
In hospitality, receptionists handle booking admin that should never have left the system. In healthcare, GPs burn clinic time on appointment rescheduling. At ISPs, Tier 2 engineers get paged for password resets. In every one of those cases, the humans being "protected" by a blanket refusal to automate are doing work their own employers already didn't want to pay them for.
The framing of AI-as-redundancy-machine works because it is simple. The framing that matches what we're seeing in production is different: AI absorbs the work that nobody wanted to assign in the first place, and the humans move back towards the work they were hired to do. That is not a utopian claim. It is what happens when the rollout is designed properly — with clear boundaries between the AI's scope and the human's scope, and a genuine redeployment plan rather than a headcount target.
The firms that get this wrong treat AI as a cost-cutting exercise and build the business case around a 30% reduction in support staff. The business case usually delivers the cut and a noticeable drop in service quality six months later, because it turns out the support staff were doing more than the ticketing system showed. The firms that get it right treat AI as a capacity exercise — how much more client work can we handle without adding overhead? — and redeploy rather than reduce.
Both roads use the same technology. Only one of them produces the outcome people claim to want.
Takeaways
- Audit what your experts actually do all day. Most of the early AI wins are hiding in the gap between job title and real task mix — and that gap is usually embarrassing.
- Treat AI voice as capacity, not headcount reduction. The business cases that survive contact with reality measure new throughput, not cut salaries.
- Own the infrastructure when the volume is real. Per-minute SaaS pricing breaks at the volumes that justify the project in the first place; a fixed-licence, on-prem model doesn't.
- Draw the line where the regulator draws it. In regulated industries, the AI should handle everything up to the point where professional judgement is required — and hand over cleanly. That boundary is a feature, not a limitation.