I listened to a podcast conversation this week with Dr. John Hanna, an infectious disease physician who also works in clinical informatics. He described a simple way of thinking about AI adoption using a three-layer pyramid.
What stood out to me wasn’t that the model was new. It wasn’t.
It was that he said it clearly, in plain language, with good visuals. And it reminded me of something I’ve been re-learning as ThreeWill spends more time in the Human Services space: a lot of “obvious” technology truths aren’t obvious at all when technology isn’t the point.
In a lot of industries, people are paid to obsess over workflows, metrics, and systems. In Human Services, people are paid to care. Technology is often just something you tolerate to get the job done. Even when someone is tech-savvy, it’s rarely the reason they got into the work.
So when someone like Dr. Hanna lays out a clean model for how enabling technology actually gets adopted successfully, it’s worth sharing — not because it’s groundbreaking, but because it’s foundational.
The Three-Layer Pyramid
Dr. Hanna described adoption and governance as three layers happening at the same time.
The problem layer
At the base is the problem itself: the outcome you’re trying to move, the KPI that matters, the risk you’re trying to reduce. This layer exists before any tool exists. It’s what leadership cares about, and it’s what makes the entire effort worth doing in the first place.
If you aren’t clear about the problem you’re solving, the rest of the pyramid becomes noise. You can build something impressive that doesn’t matter.
The model layer
The middle is the model. In his world, this is literally an AI model — accuracy, validation, bias, benchmarking. But zoomed out, it’s the “tool layer” in general: the technology you’ve chosen and the way it’s been configured.
This is where most organizations spend their time because it’s measurable and concrete. It feels like progress. But it can become a trap if you start believing the tool is the work.
The workflow layer
At the top is the workflow — what people actually experience. The prompts, the alerts, the interruptions, the handoffs, the friction, the workarounds. Dr. Hanna used the “tip of the iceberg” visual here, and it’s perfect: the workflow is the only part most users ever see, but it’s sitting on top of a lot of structure underneath.
This is also where adoption lives or dies. If the workflow doesn’t fit reality, people won’t follow it. They’ll route around it. They’ll compensate. They’ll create shadow processes. Not because they’re difficult, but because they’re trying to get the job done.
That’s the pyramid.
And again, it’s not just an AI pyramid. It’s a technology adoption pyramid.
Monitoring the Workflow vs. Monitoring the Tool
This is the part of the conversation that connected most directly to how I think about measurement.
When most organizations say they’re monitoring technology success, they’re usually watching the outcome KPI. Did we hit the number? Did the rate go down? Did we reduce the risk? Did productivity improve?
Those are lag measures. They tell you what already happened.
Dr. Hanna’s point (at least the way I heard it) is that you can’t govern technology using only lag measures, because by the time the KPI tells you there’s a problem, the problem has already been happening for a while.
Monitoring the workflow is different. It’s monitoring the lead measures.
If you know the outcome you want, you can reverse engineer the steps required to get there. Those steps should be measurable too — not because you want to micromanage people, but because you want to detect drift early, while it’s still reversible.
So instead of asking only “Did we improve the outcome?” you also ask:
Are people using the workflow the way we intended?
Where are they getting stuck?
Where are they creating workarounds?
Which handoffs are failing?
Which steps are being skipped because they’re too hard, too unclear, or too interruptive?
That’s what you monitor week to week, day to day. It’s how you identify issues before they become permanent.
In Human Services, I think this matters even more because teams are already stretched thin. If a workflow adds friction, people will still push through it for a while. They’ll carry the burden themselves. They’ll absorb the chaos. And the dashboard won’t reveal it until it surfaces as burnout, turnover, missed documentation, delayed billing, compliance risk, or a quality issue that should have been preventable.
Builders Close to the Work
Dr. Hanna also argued that the people closest to the work should be involved in shaping and configuring the technology. He even runs workshops teaching clinicians to build low-code/no-code AI tools themselves.
I agree with the core idea: the people doing the work daily should have a real voice in how the system works. They understand the nuance. They see the edge cases. They know what “this will never work in real life” looks like.
Where I differ slightly is in how far we should push the “they should build it” part — at least in Human Services.
In this space, there’s a willingness to sacrifice that I deeply respect, but it comes with a cost. Leaders and caregivers are predisposed to taking on burdens. Staff shortages and low funding don’t leave much room. People wear multiple hats because they have to.
So when something new needs doing, someone volunteers. They learn the tool. They become the builder. They become the admin. They become the “AI person.”
And when you split your time across multiple roles, something gives. Not because people aren’t capable, but because focus is finite. The work that matters most gets diluted.
That’s where partners come into play.
The people closest to the work should absolutely shape the requirements and define what success looks like. But they don’t have to carry the technical build alone. A good partner can speak your language, understand your needs, and then do the heavy lifting of translating those needs into workable systems — so your team can keep its attention on care.
Then, as time goes on, as the workflows get easier, as the burden comes down, you actually create room for the kind of learning Dr. Hanna is advocating for. You can look under the hood. You can build more fluency. You can become more confident in how the technology works — because you’re no longer learning it from a place of exhaustion.
Time Is the Real KPI
When those three layers are aligned, technology stops feeling like another responsibility you have to manage.
It starts giving time back.
And in Human Services especially, time is the one resource no one has enough of.
That’s why we care about getting this right.
Our mission is simple: helping you find more time for yours.
If you’d like to think through how those layers are lining up in your own organization, we’d love to have that conversation.