Transparency
A straight description of where models help us deliver — and where they do not replace judgement, access to your data, or accountability to you and your customers.
AI assists drafting, classification and tooling — decisions that affect your brand, legal position or customers stay with people you can name.
We use client and platform data only in ways we have agreed — minimisation, UK-first infrastructure where promised, and clear subprocessors in our privacy materials.
Branding, welcome flows and knowledge sources are configured per tenant. We tell you when retrieval is thin so answers do not pretend to be authoritative.
If a feature is a thin wrapper around a model with no real gain, we say so. We would rather ship something useful than repackage hype.
Internally we may use assistants for brainstorming, boilerplate, or to speed up research — always reviewed before anything client-facing ships. Generated code passes the same review, tests and security mindset as hand-written code.
For WaveAI products, retrieval and prompts are designed so the model can say "I do not have that in your knowledge base" instead of confabulating. That is a feature, not an embarrassment.
Questions about subprocessors, retention or DPIAs belong in Privacy and in direct conversation — we prefer specifics over generic AI manifestos.
Hosted assistant, knowledge grounding and UK support — scoped to what you actually need.
Talk to us