Comfort Before Competence
How artificial intelligence, misaligned incentives and the seduction of comfort may be eroding judgement long before correction becomes inevitable.
Relief is underrated. Especially the kind that follows genuine orientation.
Not comfort. Not sedation. Not the quiet glow of having something done for you.
I mean the kind of relief that comes when a nagging misalignment settles. When something that didn’t quite fit finally does. When unease gives way — not because it has been silenced, but because it has been understood.
Unease, in my experience, is rarely noise. It is usually signal. Something is slightly off. Half a degree perhaps. And half a degree, given distance and time, is the difference between making harbour and missing it entirely.
So I do not try to eliminate unease. I try to map it.
Unease → mapping → pathway → relief.
That rhythm has served me reasonably well — in business, at sea, and in life more generally.
It also explains why I am not a particularly good passenger.
I relax as a passenger only when two things are clear: I trust the Captain, and I can see that the vessel is being handled within its limits. I don’t need to be at the helm. I don’t need to instruct. But I need to sense that judgement is present — that competence and restraint are aligned.
Without that, I remain alert.
Which brings me — inevitably — to AI.
I am not especially worried about AI’s ability. It is impressive. It is also bounded. It operates within patterns, probabilities, and parameters. When used within those limits, it performs remarkably well.
What unsettles me is not the engine.
It is the incentive structure around it.
I felt something similar when social media first emerged. The promise was connection. The reality, over time, was optimisation for attention. The system did not malfunction. It did exactly what it was incentivised to do.
Dopamine became currency.
Outrage became engagement.
Clickbait became strategy.
The vulnerable were sometimes targeted deliberately, sometimes harmed incidentally. Either way, distortion scaled faster than correction.
The tool was not evil.
It was efficient.
AI is not merely an extension of that dynamic; it is an amplification of it.
Social media amplified distribution.
AI amplifies synthesis.
It produces coherence at speed. And coherence, when well-formed, carries authority. Structure feels like understanding. Fluency feels like depth.
Add to that the vast volume of behavioural observation now routinely gathered — patterns of interest, timing, preference, reaction — and persuasion becomes less about shouting at crowds and more about aligning quietly with individuals.
Old propaganda was loud.
Modern influence is adaptive.
That is not dystopian fiction. It is commercial logic.
But here is where the deeper concern lies.
Civilisations rarely collapse in a dramatic moment of technological failure. They drift. Convenience reduces friction. Reduced friction lowers cognitive effort. Lower effort weakens judgement.
Correction, when it arrives, tends to arrive after erosion.
The real risk is not that AI exceeds its ability. It is that humans begin to defer too readily to outputs that feel authoritative. Not maliciously. Not lazily. Simply because it is easier.
The erosion feels like relief.
Answers arrive quickly. Complexity is summarised neatly. Ambiguity is reduced. The discomfort of not knowing dissolves.
But if the mapping process is skipped often enough, the internal compass weakens.
Judgement, like muscle, atrophies when unused.
This is not an argument for resisting technology. It is an argument for maintaining cognitive sovereignty — the discipline of remaining an active interpreter rather than a passive recipient.
That requires friction. Not bureaucratic friction — paperwork rarely sharpens thinking. Constructive friction does. The kind that forces refinement of questions. The kind that exposes limits. The kind that makes one pause before accepting coherence as proof.
AI can be augmentation.
It can also become quiet authority.
The difference does not lie primarily in the code. It lies in whether we preserve the capacity to question, to map, to remain slightly uncomfortable until understanding settles.
I trust a powerful vessel when it is handled with judgement and restraint.
What I distrust is complacency — particularly when power increases and vigilance decreases.
We are building faster engines than ever before.
The sea looks calm.
But calm water can conceal current.
Relief should come from regained orientation, not from surrendering the effort to orient.
Correction, when it comes, rarely waits for comfort.
The question is not whether AI will exceed its capability.
The question is whether we will relax ours.