Case Studies
Strengthening Trust in an Autonomous Financial‑Planning Tool
The Challenge
A leading German financial‑services provider launched an AI‑powered planning tool designed to help customers automate savings, optimize monthly cash flow, and make more confident long‑term decisions. Early engagement looked promising—registrations were high, and users explored the interface—but very few activated the tool’s autonomous features. Most kept it in a passive “view‑only” mode, reviewing recommendations manually but refusing to delegate any meaningful actions.
The organization initially assumed the issue was functional: customers needed more education, clearer instructions, or a more detailed onboarding flow. But the behavioral data told a different story. Users weren’t confused—they were cautious. They didn’t fully trust the system to act on their behalf, and even small uncertainties created disproportionate resistance.
The company needed to understand the psychological barriers behind this hesitation and identify the specific moments where trust broke down.
What We Did
We applied behavioral science to uncover the cognitive and emotional dynamics shaping how German customers interpreted the tool’s decisions and assessed the risks of delegation.
Our work included:
A Trust‑to‑Delegation Diagnostic to identify where users felt uncertain, overwhelmed, or unsure how the AI made decisions
Journey mapping to pinpoint the exact moments where confidence dipped and hesitation increased
Message and interface testing to evaluate how different explanations, cues, and framings influenced perceived safety
Behavioral analysis of usage patterns to understand which features users explored, avoided, or abandoned
Rapid experiments to isolate the psychological drivers behind low delegation: fear of errors, perceived loss of control, unclear logic, and concerns about reversibility (one such analysis is sketched below)
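To make the experimental approach concrete, here is a minimal sketch, assuming a simple two-variant test of explanation framings. The variant names, the figures, and the twoProportionZ helper are hypothetical illustrations, not the client's actual data or tooling.

```ts
// Hypothetical sketch: comparing delegation rates between two explanation
// framings with a two-proportion z-test. All names and figures are
// illustrative, not the client's actual data.

interface Variant {
  name: string;
  users: number;      // users exposed to this framing
  delegated: number;  // users who went on to enable an autonomous feature
}

// z-statistic for the difference between two delegation rates,
// pooled under the null hypothesis that framing makes no difference.
function twoProportionZ(a: Variant, b: Variant): number {
  const p1 = a.delegated / a.users;
  const p2 = b.delegated / b.users;
  const pooled = (a.delegated + b.delegated) / (a.users + b.users);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / a.users + 1 / b.users));
  return (p1 - p2) / se;
}

const control: Variant = { name: "technical explanation", users: 1200, delegated: 96 };
const treatment: Variant = { name: "plain-language explanation", users: 1180, delegated: 142 };

const z = twoProportionZ(treatment, control);
// |z| > 1.96 corresponds to p < 0.05, two-sided.
console.log(`z = ${z.toFixed(2)}, significant at 5%: ${Math.abs(z) > 1.96}`);
```

A test of this shape isolates one candidate driver at a time: if a plain-language framing lifts delegation, unclear decision logic was likely part of the problem.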
Across all evidence sources, one insight stood out: customers weren’t rejecting the tool—they were protecting themselves. The system felt competent, but not predictable. Helpful, but not fully transparent. Smart, but not yet safe to rely on.
What We Found
Three behavioral barriers were quietly suppressing adoption:
1. Perceived Loss of Control
Users feared the AI would make irreversible decisions without their explicit approval. Even though safeguards existed, they weren’t visible or emotionally reassuring.
2. Unclear Decision Logic
Explanations were technically correct but psychologically ineffective. Customers couldn’t form a mental model of why the AI recommended certain actions, so they defaulted to caution.
3. Error Sensitivity
A single minor recommendation that felt “off” disproportionately damaged trust. Users interpreted small inconsistencies as signs the system might misjudge more important decisions.
These barriers weren’t functional—they were psychological. And they were entirely solvable.
What Happened Next
We designed a set of targeted behavioral interventions to strengthen trust and increase delegation:
A new “control frame” that made safeguards explicit and emotionally salient
Simplified, human‑centered explanations that helped users understand the AI’s reasoning at a glance
Micro‑interactions that reinforced predictability, such as previewing actions before execution
A staged delegation pathway, allowing users to start with low‑stakes tasks and build confidence over time (a simplified sketch of this pathway, together with action previews, follows this list)
Reframed error messaging that normalized small discrepancies and reduced overreaction to minor issues
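To show how two of these interventions fit together, here is a minimal sketch, assuming a tiered permission model. The stage names, euro limits, and the Action type are illustrative assumptions, not the production design.

```ts
// Hypothetical sketch of a staged delegation pathway with
// preview-before-execution. Stages, limits, and types are illustrative.

type Stage = "view-only" | "low-stakes" | "standard" | "full";

interface Action {
  description: string;
  amountEur: number;
  reversible: boolean;
}

// Each stage caps what the AI may do autonomously; everything else
// requires an explicit preview-and-confirm step.
const autoLimitEur: Record<Stage, number> = {
  "view-only": 0,     // the AI only recommends
  "low-stakes": 50,   // e.g. small round-ups into savings
  "standard": 500,
  "full": Infinity,
};

function requiresConfirmation(stage: Stage, action: Action): boolean {
  // Irreversible actions always require explicit approval, regardless
  // of stage: the safeguard stays visible to the user.
  if (!action.reversible) return true;
  return action.amountEur > autoLimitEur[stage];
}

// Preview before execution: the user sees what would happen
// before anything is delegated.
function execute(stage: Stage, action: Action, confirm: () => boolean): void {
  if (requiresConfirmation(stage, action)) {
    console.log(`Preview: ${action.description} (€${action.amountEur})`);
    if (!confirm()) {
      console.log("Action cancelled: nothing was changed.");
      return;
    }
  }
  console.log(`Executed: ${action.description}`);
}

// Usage: a user in the "low-stakes" stage. A €30 round-up runs
// automatically; a €300 transfer is previewed and confirmed first.
execute("low-stakes", { description: "Round-up to savings", amountEur: 30, reversible: true }, () => true);
execute("low-stakes", { description: "Move surplus to ETF plan", amountEur: 300, reversible: true }, () => true);
```

The design point is that the safeguard lives in the interaction itself: irreversible or above-limit actions are always previewed, which makes control and reversibility visible rather than buried in settings.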
Within eight weeks of implementation:
Delegation rates increased by 38%
Drop‑off during onboarding decreased by 22%
Users reported significantly higher confidence in the tool’s decision‑making
Customer‑support inquiries about “how it works” fell sharply
Most importantly, customers began using the tool for the high‑value actions it was designed for—not just passive monitoring.