Proof from teams that already live inside high-volume voice workflows.
What the customers actually say.
"For everyone there is a lack of engineers that understand ASR. Implementing Whisper is good but not enough for corporate or government. Wordcab One changes the game with its private cloud voice stack."
David Keane — CEO, Bigtincan & SCX.ai
"During my years in medical ASR, I've seen a common unaddressed need for secure private cloud and on-prem voice solutions, only partially fulfilled by today's piecemeal solutions. Wordcab One goes beyond an on-device API, bringing an easily deployable voice stack and admin panel into your environment in hours."
Christopher Foley — former CEO, iMedX
"Wordcab's speed of implementation is excellent. We needed m3u8 support for transcription, and they deployed it within several days. Within a week we had a custom solution for a niche use case, callable as a parameter on their endpoint. Rapid implementation and a nimble team."
Jason Shore — CEO, Sardius Media
What shipped
An on-prem, scalable call summarization API packaged for secure enterprise deployment.
What we did
Worked directly with their engineering team to turn the summarization system into a production-ready deployment package — hardened containers, secure runtime configuration, vulnerability monitoring, update workflows, quality testing, and API review. Then supported rollout across hundreds of downstream customer environments.
Why it mattered
This was not a demo running on clean audio. It had to survive enterprise customer environments, strict security review, and the operational reality of shipping AI behind another company's product.
What shipped
Multiple voice AI projects across 2024 and 2025, including a meeting-recording browser extension, voice AI R&D, and a fully self-hosted custom voice stack.
What we did
Worked across product prototyping and infrastructure: browser capture, meeting audio flows, private processing, model experimentation, and self-hosted deployment.
Why it mattered
Sales enablement teams do not just need transcripts. They need meeting capture, summaries, coaching signals, and workflow outputs that live close to commercial data. The work proved Wordcab can ship both the product experience and the private voice infrastructure underneath.
Original engagement (2021–2023)
Hundreds of thousands of production call summaries delivered by API across Jiminny's revenue intelligence workflow. High-volume, low-error summarization baked into a customer-facing product.
2026 re-engagement
UK and EU enterprise customers pushed Jiminny toward private deployment for data-residency commitments. Wordcab Voice and Think deployed into a UK-region VPC, serving enterprise accounts end-to-end inside the GDPR boundary their customers already defend.
Why it mattered
Revenue intelligence only works if customers trust where the conversations live. Bringing the runtime back inside Jiminny-controlled infrastructure removed the vendor-path objection — without rebuilding the product.
What shipped
Tens of thousands of call summaries delivered by API between 2021 and 2022.
What we did
Built call summaries that worked as operational records for field service teams — not generic transcripts.
Why it mattered
Field service businesses run on phone calls. Summaries cut manual note-taking and turn messy customer conversations into records the business can actually use.
What shipped
Custom voice pipelines for the Sardius media platform, plus custom LLM-based containers.
What we did
Built voice-processing components and deployable LLM containers around Sardius's platform — aligned to their media workflow, runtime constraints, and integration points.
Why it mattered
Media platforms need AI components to run as reliable product infrastructure — not demos. Sardius got packaged voice and LLM capabilities they could integrate into controlled broadcast workflows.
What shipped
A proof-of-concept for a fully on-prem custom voice stack.
What we did
Mapped the private-deployment architecture and evaluated how the stack would run inside customer-controlled infrastructure.
Why it mattered
Enterprise communications buyers cannot always send voice data through hosted AI APIs. This engagement proved out the private-deployment pattern that later repeated in the Bigtincan and enterprise contact-center work.
What shipped
A proof-of-concept for a complex fully on-prem voice stack, plus a custom call transcript redaction model.
What we did
Built the private voice stack path and a redaction model for call transcripts — focused on the data-protection problem that shows up immediately in real contact center environments.
Why it mattered
Contact center operators need QA, analytics, summaries, redaction, and compliance controls running close to sensitive customer conversations — not across an external API. The engagement covered both: a full on-prem voice stack pattern, plus a redaction model tuned for the operator's real call audio.
What shipped (2026)
Wordcab Voice deployed on SCX.ai's SambaNova RDU clusters in roughly 8 weeks, replacing a longer internal build timeline. Full voice products, including sam.scx.ai, plus custom models tuned for Australian accents and financial-services vocabulary.
What the joint deployment delivers
Sub-100 ms ASR latency on telephony audio, running inside Australian sovereign infrastructure, with the power-per-inference economics of purpose-built accelerators (SambaNova reports 60–80% lower power vs. equivalent GPU on comparable workloads).
Why it matters
This connects Wordcab's productized private voice stack with the sovereign-inference future: customer-controlled environments, specialized accelerators, and regional data locality — shipped as a joint reference architecture, not a one-off integration.
More teams that trusted the work.
Need proof tied to your deployment model?
Comparing Wordcab against an internal build or a hosted speech stack? The right case study is the one that matches your workflow and environment.
Talk to an Engineer
We usually respond within one business day.