Pseudogap in Electron-Doped Cuprates: Strong Correlation Leading to Band Splitting
From Observation to Performance in Everyday Systems
This essay begins with a discovery in condensed-matter physics. What it uncovers applies far beyond physics.
When reading the original paper on this subject, I recognised a familiar structural pattern, worked to understand it, realised it was more than observational, and followed its implications through quantum computing to the most immediate and performative domain we live with today: AI.
I wasn’t initially drawn to this discovery because of superconductivity, or even because of quantum mechanics. What stopped me, at that first moment, was recognition.
The work describes a pseudogap in electron-doped cuprates that arises not from antiferromagnetic band folding or symmetry breaking, but from strong electronic correlations leading to band splitting. In technical terms, it identifies a regime of correlation-driven, non-informational quantum coherence — a state in which quantum order is energetically stable precisely because it is not required to encode symbolic information locally.
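The mechanism can be glossed with the standard textbook picture of correlation-driven splitting (my illustration, not the paper's own calculation): the one-band Hubbard model, in which a strong on-site repulsion $U$ splits a single band with no symmetry breaking at all.

```latex
% Schematic only: the one-band Hubbard Hamiltonian, the textbook setting
% in which strong on-site repulsion U splits a band without folding or
% symmetry breaking.
\[
  H \;=\; -t \sum_{\langle i,j\rangle,\sigma}
              c^{\dagger}_{i\sigma} c_{j\sigma}
      \;+\; U \sum_{i} n_{i\uparrow}\, n_{i\downarrow}
\]
% When U greatly exceeds the bandwidth, the single tight-binding band
% splits into lower and upper Hubbard bands separated by roughly U:
% a gap opened by correlation alone.
```

Nothing in this sketch is specific to the cuprate result; it only shows the general shape of the claim, that a gap can be an energetic consequence of interaction rather than of any imposed order.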
At first glance, this appears purely descriptive: a clarification of how quantum matter behaves under strong correlation. But as I read further, a familiar structural problem began to surface.
In the 1980s, we dealt with vast volumes of global strategic market intelligence that existed almost entirely as narrative. Reports were written, published, consumed, and then effectively discarded. Insight was real but perishable. To maintain relevance, new narratives had to be produced continuously. The system worked, but inefficiently.
Our first instinct was rational: solve the problem at the input level. We tried to impose categorisation at the moment of writing, asking those producing the narratives to structure and formalise them as they went. Technically, it succeeded. Practically, it failed.
Narratives thinned. Writers were forced to think like databases. Categorisation increased, but meaning collapsed. We had imposed symbolic order too early, at the generative level, and paid for it in lost signal. We were solving the problem at the wrong layer.
That same tension appears sharply in quantum computing.
Quantum computing occupies an intermediate position between the physics of quantum coherence and the symbolic demands of computation. Superposition and entanglement are not merely observed, as in superconductivity, but actively manipulated to carry information. Here, the question of where information should be introduced becomes unavoidable.
In superconductivity, coherence is stable because it is collective, anonymous, and non-informational. In quantum computing, coherence is asked to carry symbolic meaning at the most fragile level of the system. Enormous energy is then required to maintain isolation, suppress decoherence, and correct errors. The physics is sound, but the cost is high.
Seen this way, quantum computing is not failing — it is exposing a structural constraint. Quantum mechanics supports stable coherence most readily when it is not burdened with representation. When information is introduced too early, energy input rises sharply and efficiency degrades.
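That rising cost can be made concrete with a back-of-the-envelope sketch. Assuming the widely quoted surface-code scaling heuristic, with illustrative constants of my own choosing (none of the numbers below come from the paper):

```python
# Illustrative sketch only: rough physical-qubit overhead of quantum error
# correction, using the common surface-code heuristic
#   p_logical ~ A * (p_physical / p_threshold) ** ((d + 1) / 2)
# and roughly 2*d^2 - 1 physical qubits per logical qubit. The constants
# (A, the threshold, the qubit count) are assumptions for illustration.

def code_distance(p_phys: float, p_target: float,
                  p_th: float = 1e-2, A: float = 0.1) -> int:
    """Smallest odd code distance d whose estimated logical error rate
    falls at or below p_target."""
    d = 3
    while A * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2  # surface-code distances are odd
    return d

def physical_qubits_per_logical(d: int) -> int:
    """Rough qubit count for one distance-d surface-code patch."""
    return 2 * d * d - 1

# Each thousandfold improvement demanded of the symbolic layer forces a
# larger code distance, and hence many more physical qubits per logical one.
for p_target in (1e-3, 1e-6, 1e-9, 1e-12):
    d = code_distance(p_phys=1e-3, p_target=p_target)
    print(f"target {p_target:.0e}: distance {d}, "
          f"{physical_qubits_per_logical(d)} physical qubits per logical qubit")
```

The point of the sketch is the direction of the curve, not its exact values: every additional order of reliability asked of the information-bearing layer is purchased with a growing apparatus of qubits devoted purely to protecting it.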
This is where the unexpected connection to artificial intelligence emerges.
AI systems operate by clearly separating levels: generation, representation, optimisation, and output. They are extraordinarily effective because of this separation. Quantity is not a problem. But meaning is stabilised only by abstraction and scale, not by integration. AI does not suffer from the qualitative compromises that affect human intelligence — but it also does not integrate levels simultaneously.
Human intelligence appears to do the opposite. The brain operates across generative, interpretive, and symbolic layers at once. This simultaneity is powerful but costly. Output is compromised for qualitative reasons: ambiguity, bias, compression, and judgement. Meaning is preserved at the expense of precision.
In this context, the superconductivity discovery is no longer merely informative. It becomes performative. It shows, at the most fundamental physical level, that systems operate efficiently when generative coherence is allowed to exist without premature categorisation, and that energy costs rise when symbolic meaning is imposed too early.
Quantum computing reveals this constraint directly. Artificial intelligence reflects it indirectly, in familiar, everyday form. Together, they frame the discovery not as an isolated result in condensed-matter physics, but as a structural insight into how complex systems handle information.
That, I suspect, is why this discovery held my attention. It names a pattern I had encountered before, failed to describe at the time, and now recognise across domains: efficiency depends less on control than on knowing where not to impose it.