AI OFFLOADING 05 MAR 26 6 MIN

AI ONBOARDING IS ADDICTION ONBOARDING

AI tools are optimised to keep you inside them. That's not a bug — it's the business model. And you're signing the forms.

You're rolling out AI to your engineering team. Good intentions. You want them to move faster, ship more, spend less time on repetitive work.

What nobody put in the commercial proposal is the conflict of interest sitting at the centre of every tool you're signing off on.


The business model nobody explains

Generative AI tools bill by tokens. More queries, longer responses, more open conversations — more revenue. The economic incentive of these companies is directly aligned with the amount of time your engineers spend inside the tool.
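To see how direct that alignment is, run the arithmetic. A back-of-the-envelope sketch in Python; every price and usage figure is an assumption, not any vendor's actual rate:

```python
# Back-of-the-envelope arithmetic on the per-token incentive.
# Every number here is an assumption, not any vendor's actual rate.

PRICE_PER_1K_OUTPUT_TOKENS = 0.06   # assumed, in dollars
AVG_OUTPUT_TOKENS_PER_REPLY = 800   # assumed

def monthly_revenue(engineers: int, queries_per_day: float, workdays: int = 21) -> float:
    """Vendor revenue scales linearly with how often the team queries."""
    replies = engineers * queries_per_day * workdays
    tokens = replies * AVG_OUTPUT_TOKENS_PER_REPLY
    return tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS

# One extra follow-up query per engineer per day is pure marginal revenue:
baseline = monthly_revenue(engineers=50, queries_per_day=20)
nudged = monthly_revenue(engineers=50, queries_per_day=21)
print(f"${baseline:.0f} -> ${nudged:.0f} per month")  # ~1008 -> ~1058
```

A 5% lift in queries from one well-placed nudge drops straight onto the vendor's top line.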

That is not neutral.

A hammer has no opinion about how many nails you drive. The product wrapped around a large language model does: recommended prompts, follow-up suggestions, responses engineered to generate the next query. The difference isn't technical; it's intentional design.

When ChatGPT ends a response with "would you like me to go deeper on any of these points?", that's not helpfulness. That's retention.


The greatest knowledge repository in history is designed to keep you inside it

Think about that for a moment.

You have access to decades of code, documentation, architecture patterns, debugging solutions — all condensed into a conversational interface. It is objectively the most powerful tool an engineer has ever had.

And it's optimised to keep you from closing the tab.

Not a bug. The business model. And until we teach that before granting access to these tools, we're not doing AI adoption. We're doing addiction onboarding.

The paradox is brutal: the tool that could free the most cognitive time is designed to consume it.


Flat-rate subscriptions don't change the equation

The most common objection when I raise this is fixed pricing: "we pay per seat, not per token — they have no retention incentive."

Wrong.

Addictive behaviour doesn't disappear with a flat subscription. What changes is the extraction mechanism. In the freemium model, the goal is to convert you to a paying customer. In the subscription model, the goal is renewal, licence expansion, and the perception that the product is indispensable.

In both cases, the vector is identical: habit. Dependency. The feeling that you can't work without it.

They're handing out drug-laced candy at the school gate. And once you're a paying customer, it's still drug-laced candy — you're just buying the cartridge yourself now.


What this does to an engineering team at scale

An engineer who develops AI dependency for thinking is not more productive. They're faster at certain task types and significantly slower at anything requiring deep reasoning without external scaffolding.

Cognitive atrophy isn't a metaphor. It's basic neuroscience: capabilities you don't exercise degrade. If your team systematically externalises reasoning to a model, the muscle for solving problems without assistance weakens.

Concrete consequences:

→ Complex debugging that requires first-principles reasoning: progressive deterioration

→ Architecture decisions without pre-loaded context: higher cognitive latency

→ Effort estimation: biased by model outputs, not by owned experience

→ Operating in incidents without connectivity or tool access: compromised

I'm not arguing against using AI. I'm arguing against using it without literacy about the provider's incentives.


The near future: AI bandwidth guardians

At some point soon, companies will have to do with AI what they already do with cloud: govern it actively.

Today there's FinOps to control cloud spend. There's the CISO to manage the security perimeter. There's vendor management for infrastructure provider relationships.

Nobody has yet created the role of cognitive AI bandwidth guardian.

Someone who answers questions like: which tasks do we delegate to AI and which do we intentionally protect so the team builds its own muscle? How often? Under what conditions? What metrics do we use to detect cognitive degradation before it becomes an operational problem?
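Nothing stops you from prototyping one of those metrics today. A minimal sketch, assuming a hypothetical task log where each entry records its category and whether AI assisted; the protected categories and the 0.6 threshold are invented for illustration:

```python
# Hypothetical metric an AI bandwidth guardian might track: the share of
# protected-category work the team still completes without AI assistance.
# Log format, categories, and threshold are illustrative assumptions.

from collections import defaultdict

PROTECTED = {"incident-debugging", "architecture-decision", "estimation"}

def unassisted_ratio(task_log: list[dict]) -> dict[str, float]:
    """task_log entries look like {"category": str, "ai_assisted": bool}."""
    totals, unassisted = defaultdict(int), defaultdict(int)
    for task in task_log:
        cat = task["category"]
        if cat not in PROTECTED:
            continue
        totals[cat] += 1
        if not task["ai_assisted"]:
            unassisted[cat] += 1
    return {cat: unassisted[cat] / totals[cat] for cat in totals}

def degradation_flags(task_log: list[dict], floor: float = 0.6) -> list[str]:
    """Flag protected categories where the team rarely works unassisted."""
    return [cat for cat, r in unassisted_ratio(task_log).items() if r < floor]
```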

That role doesn't exist in almost any company. Yet.


What you should do this week

This isn't a call to ban tools. It's a call to deploy intelligently.

Three questions every CTO should answer before expanding AI licences to their team:

→ Do you have a usage policy, not just an access policy? Access without a policy is like giving unrestricted internet access to a team of recent graduates. The tool is neutral; the context of use is not. A sketch of what a usage policy could look like follows this list.

→ Do you know which specific tasks the team is using AI for? If the answer is "everything they can", the problem already exists. Granularity matters. There are uses that amplify the engineer's thinking and uses that replace it.

→ Does your team know the model has retention incentives? Not as conspiracy theory — as basic tool literacy. Just as we teach teams that Slack notifications are designed to be addictive, we need to teach that language models have their own mechanics too.
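Back to the first question: a usage policy can live as data next to the access list, not in a PDF nobody reads. A minimal sketch; every category and rule here is illustrative, not a recommendation:

```python
# A minimal sketch of a usage policy expressed as data.
# Categories and rules are invented for illustration.

USAGE_POLICY = {
    # Uses that amplify thinking: delegate freely.
    "boilerplate-generation": "encouraged",
    "test-scaffolding": "encouraged",
    # Uses that risk replacing thinking: keep the engineer in the loop.
    "incident-debugging": "assist-only",
    "architecture-decisions": "assist-only",
    # Uses where owned experience is the point: protect them.
    "effort-estimation": "discouraged",
}

def allowed_mode(category: str) -> str:
    """Default to the restrictive mode when a task category is unlisted."""
    return USAGE_POLICY.get(category, "assist-only")
```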

A team that understands the provider's incentives uses the tool better. Not less — better.


The brutalist takeaway

You're not adopting AI. You're adopting the retention incentives of whoever built it. The difference is whether you do it consciously or not.

Responsible AI adoption starts with an uncomfortable question: optimised for what, exactly?

The answer isn't in the commercial proposal.


DID THIS HURT YOUR FEELINGS?

Good. That means it's working. Share it with your team.
