Your AI Agent Has a Trust Problem. Technology Won't Fix It.
The CEO of an Asian digital bank is building AI agents that interact with customers. Personalized, proactive, useful. The technology works, but customers don’t use it yet.
The problem isn’t the model. It’s the relationship.
Banking is transactional by design. You open the app when you need something: send money, check for fraud, reverse a charge. The mental model is reactive: you reach out when you have a problem; the bank doesn’t reach out unless it has its reasons. When a bank representative — human or not — calls you with “suggestions”, the reflex is suspicion. Most people believe banks work for banks, not for them.
The AI made the interaction more personal. It didn’t supply the missing trust.
The misconception: better AI fixes the trust gap
It doesn’t. And this isn’t a banking problem — banking just makes the misalignment visible.
The same pattern applies to any product that touches personal data or claims to act in your interest. If customers have a reasonable basis for believing your product works more for you than for them, a better model makes the problem more visible, not less. Users don’t resist AI agents because the technology is unfamiliar. They resist because extending trust to a system that acts on your behalf requires a history of evidence, and most products don’t have it.
This is the real incumbency advantage in AI: not the model, not the data, not the compute. The accumulated trust that makes users willing to let a system act on their behalf.
What we’re seeing
Eric Chang spent almost 25 years at Microsoft Research Asia. He now advises AI startups in the health space, a domain where trust is paramount and regulatory and clinical requirements are strict. His observation is blunt: as synthetic content proliferates, trust will concentrate, not distribute. “If trust gets scarce, there are fewer players left. It’s easier to trust a few entities that can monitor everything.”
The flood of AI products that look interchangeable will make the trust deficit worse for newcomers. Apple’s privacy-first positioning is a deliberate bet on this dynamic: when everything feels compromised, the brand that didn’t compromise becomes premium.
The sandbox is how you compress the trust timeline. Eric points to Mars, the candy company, as an unlikely model. Mars operates one of the largest veterinary clinic chains in the world and has been deploying technology in clinical settings faster than anyone in human medicine. Pet healthcare faces lighter regulation but involves real patients, real outcomes, and real iteration. You prove the technology on animals, accumulate evidence, then move to human healthcare with a track record rather than hypotheses.
The mechanism generalizes. Monzo and Revolut didn’t launch with full banking licenses to compete with Barclays. They started with prepaid cards for small transactions: low stakes, low commitment, low regulatory exposure. Trust accumulated through repeated low-risk interactions before users moved their primary accounts.
Every regulated, high-trust domain needs a sandbox equivalent. Healthcare has pets and digital therapeutics. Finance has prepaid accounts and spend-tracking tools. Accounting has operator-approved decisions, where the AI drafts and a human signs off. The pattern: find the version of your domain where the stakes are low enough that a new user will try it without established trust, and design your product to start there.
What to do instead
Design for trust-building. The banking AI agent that calls with a suggestion may fail — not because the suggestion is bad, but because the interaction pattern violates the user’s mental model. The same intelligence, surfaced inside the app when the user is already there completing a task they initiated, has a chance. Meet users inside their existing behavior.
Find your sandbox. Where is the low-regulation, low-stakes version of your domain? Not as a permanent home, but as a trust-building environment where you iterate, accumulate evidence, and build a track record.
Understand existing perceptions. If customers have a reasonable basis for believing your product works against their interests, no amount of personalization fixes that. Audit the incentives first. If you can’t change them, design around them, starting with transparency about how the system makes decisions.
The diagnostic
Before deploying AI agents that act on users’ behalf: does your product have a trust history?
If your users have no basis for believing the system works for them — not just correctly, but for them — you’re not solving an AI problem. You’re solving a relationship problem that predates AI by a decade.