Case Studies

When Adoption Stalls: Diagnosing AI Trust Failure in a Mid-Market Insurance Brokerage

The Challenge

A Brisbane-based commercial insurance brokerage with roughly 200 employees had invested significantly in an AI-powered underwriting assistance tool designed to help brokers assess risk profiles, flag coverage gaps, and generate client-ready recommendations faster. The vendor's implementation had gone smoothly. The data integrations worked. The interface was clean. By every technical measure, the rollout was a success.

Six months in, usage told a different story.

Senior brokers were barely touching the system. Junior brokers used it inconsistently — heavily for straightforward accounts, rarely for anything complex. Several team leads had quietly created their own Excel-based checklists that duplicated what the AI was supposed to do. Client-facing output quality remained uneven, and the efficiency gains leadership had projected at launch had not materialized.

The brokerage's COO initially attributed the problem to change resistance — a generational divide between senior brokers who preferred their own judgment and younger staff who hadn't yet developed the confidence to deviate from the tool. A second round of vendor training was scheduled.

Before committing to it, the COO wanted an independent assessment of whether training was actually the right intervention. The suspicion was that the problem was more behavioral than technical, and that another training session would produce another temporary bump followed by the same plateau.

What We Did

We conducted a focused behavioral assessment of how brokers at different experience levels were interpreting, using, and working around the AI tool in their day-to-day client work.

Our work included:

  • One-on-one interviews with brokers across seniority levels, team leads, and two members of the senior leadership team to surface how the tool was perceived and where confidence broke down

  • Workflow observation sessions with six brokers across two office locations, watching real account work in progress to identify the specific moments where the tool was consulted, ignored, or bypassed

  • A review of override and non-use patterns in the system's usage logs, examined through a behavioral lens rather than a purely operational one

  • Message testing of the tool's existing explanation formats to assess whether brokers could accurately interpret why the system flagged or recommended what it did

  • A brief structured survey deployed to the full brokerage team measuring perceived control, fairness, effort, and confidence in the tool's consistency

The picture that emerged was not one of resistance. It was one of reasonable people responding rationally to a system they couldn't fully predict.

What We Found

Three behavioral dynamics were driving the adoption gap.

1. A Mental Model Mismatch

Experienced brokers had spent years developing finely calibrated risk intuitions. The AI's recommendations frequently aligned with their judgment — but the reasoning the tool surfaced didn't map onto how they thought about risk. It cited variables they considered secondary and underweighted factors they considered primary. The tool wasn't wrong, but it felt like it was thinking differently. For senior brokers, that gap was reason enough to stop treating it as a useful check and to rely instead on their own assessment.

2. Control Without Clarity

Brokers technically could override the system at any point, but there was no visible acknowledgment that overriding was a legitimate and expected part of the workflow. The interface treated overrides as exceptions rather than normal professional judgment. Several brokers described a vague sense that overriding too often would be noticed and questioned, even though no such policy existed. That perception alone was enough to create discomfort and avoidance.

3. Inconsistency Sensitivity

On three occasions in the preceding six months, the tool had produced recommendations that experienced brokers considered clearly suboptimal for accounts they knew well. Each incident had been minor and quickly corrected. But in the absence of any organizational explanation for why those errors had occurred or what had been done about them, the incidents had circulated informally and disproportionately shaped how the broader team assessed the tool's reliability. A small number of visible failures had done significant trust damage that a large number of quiet successes had not repaired.

What Happened Next

We worked with the brokerage's operations lead and the AI vendor's customer success team to design a targeted set of behavioral interventions that could be implemented without a system rebuild or a significant retraining investment.

Interventions included:

  • A revised explanation format that surfaced the tool's reasoning in the language brokers actually used — mapping its outputs onto the risk variables they already prioritized in their own assessments

  • An explicit override acknowledgment built into the workflow, framing broker judgment as a valued input rather than a deviation, and removing the implicit social cost of disagreeing with the system

  • A brief monthly "system notes" communication from leadership — two paragraphs, distributed via email — that acknowledged recent edge cases, explained what the tool handled well and where broker judgment remained essential, and reinforced that the tool was designed to support professional decision-making rather than replace it

  • A structured re-engagement conversation between team leads and the three senior brokers who had most visibly disengaged, focused on rebuilding familiarity through low-stakes account reviews before reintroducing the tool in complex cases

Ten weeks after implementation:

  • Consistent tool engagement among senior brokers increased from 18% of eligible accounts to 54%

  • Two of the three team leads who had created parallel Excel checklists abandoned them

  • Junior broker usage in complex accounts increased by 31%, suggesting growing confidence in how to interpret and apply the tool's outputs

  • The COO reported that the efficiency gains originally projected at launch were beginning to materialize in pipeline turnaround times

  • The vendor's customer success team requested a debrief on the intervention approach, citing two other brokerage clients experiencing similar adoption patterns

The brokerage did not need more training. It needed its people to understand the system well enough to trust it — and to feel trusted enough to push back on it when their judgment said otherwise.