“Our tracker shows how many AI Assistant Chatbots every division has created” is NOT what you want.
When adoption itself is the metric, you’ve already created the wrong incentive structure. Teams will adopt something… not because it’s right for the workflow… but because “not adopted” looks like THEY FAILED.
A bare LLM… ChatGPT, Claude, take your pick… is not the best solution to most real problems. Most workflows need external data, orchestration, guardrails. Integrations. None of that exists in the vacuum of a chatbot.
The gap between “we can do chatbots” and “we have the right setup for this workflow” is enormous. And the gap between the state of the art and the tools your organisation actually has access to? We need to be honest here.
Best case, you get “performative AI adoption”. Worst case? Actual decisions get made by confident-sounding AI bots working from no data, or outdated data.
The reward structure needs to flip.
Right now, a team that says “we evaluated this carefully, and for our workflow, it’s not ready” looks like they’re behind. They should look like the most mature team in the room.
Careful assessment, including honest “not yet” and “here’s what we need” conclusions, SHOULD be the thing we celebrate. Chatbots are a good start. Yay. BUT stalling at chatbots or bare-minimum coding agents is a losing game.