Someone once asked me what it takes to be a Velocity writer.
And while the shortest answer is a blend of people-pleasing, masochism and whatever mental illness makes you interested in supply chains, it’s a question we’ve tried to tackle more seriously in a few posts.
There’s this one, which lists all the kinds of writers we don’t want to attract. (Self-described wordsmiths are always a giant red flag.)
But I think my favourite is Emma’s 10 (or so) principles of writing at Velocity.
The whole list is great, but I’ve parroted the first point to every new writer who’s walked through our doors in the last five years: “This isn’t a writing job. It’s a thinking job.”
To be clear, every writing job is a thinking job (and if it isn’t, then it’s a typing job). But I think great writing is what happens when words get out of the way of great ideas.
And the more I use generative AI to write, the more I’m noticing how it’s changing the way I think. And not necessarily for the better.
This isn’t going to be an anti-AI post
Clickbait title aside, I use AI all the time. I’m sure you do as well.
But the more it creeps into my workflow, the more important it feels to figure out not just what it’s good or bad at, but also how it’s changing my habits and what effect that has on my brain over time.
Whenever the thing I’m writing (or reviewing) hits a snag or feels a bit janky, the root cause is usually insufficient thinking. It could be a logical leap, an unearned conclusion or a gap in the argument. Either way, the culprit is uncertainty kicked down the road in the hope that you’ll figure it out in execution.
(Sidebar: Harry (whose book I will shill any chance I get) once made a great point that the trigger for procrastination is often discomfort caused by a perceived spike in difficulty that happens when an idea needs more thought or research. It’s great framing — try using your procrastination habits as a heat-map for quality control.)
The reason I’m so conflicted about AI creep into certain kinds of work is that it superficially trivialises the retrieval and synthesis of information, in ways that profoundly impact how we think and the conclusions we come to.
And there’s an emerging body of evidence that validates this feeling.
What happens when AI replaces thinking: a consequence, a cost, and a solution
Quite often, using ChatGPT feels icky, like I’m cheating. In the past, I’ve chalked that up to a professional defence mechanism: a specialised skill is being commodified in front of my eyes, and I need to get over myself and build a new moat.
But I’ve read a few things recently that have helped disentangle that generalised ickiness into a set of specific dynamics, and I think naming and understanding them is essential to becoming a more responsible and more effective user of AI.
Here are three of them: a consequence, a cost, and a solution.
A consequence: Cognitive offloading
First, a research paper published in January this year: “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking” (Gerlich, 2025).
The study is based on surveys and in-depth interviews with 666 participants across diverse age groups and educational backgrounds.
Its central premise is that over-reliance on the quick solutions and ready-made information offered by AI tools can lead to “cognitive offloading” — and that while cognitive offloading “can free up cognitive resources, it may also lead to a decline in cognitive engagement.”
The study “found a significant negative correlation between the frequent use of AI tools and critical thinking abilities, mediated by the phenomenon of cognitive offloading.”
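If “mediated by” is doing a lot of work in that sentence, here’s a minimal sketch of the statistical idea. Everything in it (the variable names, the toy data, the effect sizes) is invented for illustration; it is not the study’s actual method or data.

```python
# A toy illustration of mediation (not the study's analysis).
# The claim has the shape: AI use -> cognitive offloading -> weaker
# critical thinking. All numbers below are made up for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 666  # matches the study's sample size, purely for flavour

ai_use = rng.normal(size=n)
offloading = 0.7 * ai_use + rng.normal(scale=0.7, size=n)
critical_thinking = -0.6 * offloading + rng.normal(scale=0.8, size=n)

# The "total effect": AI use correlates negatively with critical thinking.
r, p = stats.pearsonr(ai_use, critical_thinking)
print(f"AI use vs critical thinking: r = {r:.2f}, p = {p:.1e}")

# Controlling for offloading, the direct effect of AI use collapses
# towards zero, because (in this toy data) the entire effect flows
# through the mediator. That pattern is the signature of mediation.
X = np.column_stack([np.ones(n), ai_use, offloading])
beta, *_ = np.linalg.lstsq(X, critical_thinking, rcond=None)
print(f"Direct effect after controlling for offloading: {beta[1]:.2f}")
```

The study’s own analysis is, of course, more careful than this; the point is the shape of the claim: the link between AI use and weaker critical thinking runs through offloading.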
What to do about it
Unchecked cognitive offloading can make you worse at critical thinking. That alone should inspire you to make more conscious decisions about when, how, why and how frequently you turn to AI tools in your daily work.
A cost: Cognitive debt
Second, a concept called “cognitive debt” (analogous to technical debt), coined by strategic designer John V. Willshire.
Technical debt describes “the implied cost of additional work in the future resulting from choosing an expedient solution over a more robust one”. It’s what happens when dev teams hastily write code to launch a feature without fully integrating it into the existing codebase.
Cognitive debt describes the result of arriving at conclusions without understanding the reasoning required to get there. It’s what happens when people take shortcuts to avoid “the thinking in the present that [they] will need to demonstrate in the future”.
A crucial difference is that technical debt is “accrued locally, with high specificity and direct accountability”, while cognitive debt is “accrued globally, with low-specificity and unclear accountability.”
Put simply, teams incurring technical debt generally know the costs they are incurring, how to rectify them, and who is responsible for doing so. But rapid, distributed AI adoption in the name of more productive and efficient knowledge work creates gaps that are harder to see and fix.
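To make the analogy concrete, here’s a hypothetical example (mine, not Willshire’s) of the kind of expedient shortcut that accrues technical debt:

```python
# A hypothetical expedient fix of the kind that accrues technical debt.
# The feature ships on time, but the special case lives outside the
# normal pricing logic, and someone must remember to integrate it later.
def apply_discount(price: float, customer_tier: str) -> float:
    """Return the discounted price for a given customer tier."""
    tiers = {"standard": 0.00, "silver": 0.05, "gold": 0.10}
    # Quick hack for the Black Friday launch.
    # TODO: fold this into `tiers` and the admin UI. (Nobody ever did.)
    if customer_tier == "black_friday_vip":
        return price * 0.5
    return price * (1 - tiers.get(customer_tier, 0.0))
```

The debt here is at least visible and attributable: there’s a TODO, a file, a name in the commit history. As Willshire points out, the cognitive version is neither.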
What to do about it
This post from Andrew Mitchell describes a critical question to ask before turning to AI tools for help: “How large is the cognitive debt that taking this shortcut will accrue?”
A solution: Pathological caring
Third, a lovely articulation of the importance and power of caring about your subject matter from 28 slightly rude notes on writing by psychology researcher Adam Mastroianni.
The whole post is phenomenal. But point 23 rang out like a bell when I was outlining this post:
“Most students…thought the problem with their essay was located somewhere between their forehead and the paper in front of them… They assumed their thinking was fine, but they were stuck on this last, annoying, arbitrary step where they have to find the right words for the contents of their minds.
But the problem was actually located between their ears. Their thoughts were not clear enough yet, and that’s why they refused to be shoehorned into words.
Which is to say: lots of people think they need to get better at writing, but nobody thinks they need to get better at thinking, and this is why they don’t get better at writing.
Writing is a costly signal of caring about something. Good writing, in fact, might be a sign of pathological caring.”
What to do about it
All good writing comes from conviction: a strongly held belief; a burning question; an axe to grind; an insistence that some things matter more than others. Generative AI can confect infinite hypothetical perspectives to serve any narrative. But it cannot care for you. That has to come from you.
Productive inefficiency
Whenever I fall into a habit of over-reliance on AI, I get something like a hangover.
The next time I sit down to write, the discomfort of not knowing feels immediate and frustrating, and the urge to just start moulding something is irresistible.
But every time I do, it costs me something. And now I have the evidence.
It’s not just the organic avenues I never explored, the happy accidents that might have led me down fascinating rabbit holes, the bumpy weirdness I sacrificed in exchange for a perfectly smooth first draft.
It’s the mental atrophy: the repetitions of critical thought I didn’t force myself through, each one making the next a little harder, and then harder still.
Our bodies are meant to move and our brains are meant to think.
And I think it’s incumbent on any company charging toward broad AI adoption to pay really close attention to the costs as well as the benefits — and to have clear answers to uncomfortable questions like:
- What is the long-term impact of routine cognitive offloading on your workforce?
- What is the cognitive debt your teams accrue with each AI-enabled shortcut they take?
- How does AI-assisted thinking flatten your ability to express the things you care about the most, in the way only you can?