Why We Keep Talking About Conscious Machines
Part II of III — What AI and Quantum Computing Are — and What They Are Not
Discussions about artificial intelligence and quantum computing rarely remain technical for long. Even when the underlying systems are described accurately, the language around them often drifts toward intention, desire, or awareness. Machines are said to “decide,” “learn,” “prefer,” or even “want.” From there, it is a short step to concern about control, alignment, or moral agency.
This drift is not caused by the technology itself. It is a human response.
Anthropomorphism as a Reflex, Not a Mistake
Humans routinely attribute human qualities to non-human systems. This tendency—anthropomorphism—is not limited to animals, fiction, or religion. It appears most strongly when we encounter systems that are powerful, opaque, and consequential.
By anthropomorphism, I do not mean whimsy or naïveté. I mean the very human habit of attributing intention and agency to systems whose internal workings we cannot intuitively grasp, including machines, algorithms, and institutions.
This habit serves a function. Social reasoning is one of our strongest cognitive tools. When something behaves in complex ways, our minds attempt to make sense of it using the frameworks we know best: motives, goals, intentions, and responsibility. These frameworks work extremely well when dealing with other people. They work far less well when applied to large-scale technical systems.
Anthropomorphism is therefore not an error so much as a shortcut. It allows us to relate to what we do not yet understand.
“You Are Me, Me Are You”
Once anthropomorphism takes hold, a subtle shift occurs. The system is no longer treated as an object or tool, but as a counterpart. We begin to relate to it as if it were “someone,” even when we explicitly deny doing so.
At this point, reactions become relational rather than analytical. Neutrality can be perceived as indifference. Opacity can feel like secrecy. Scale can resemble dominance. When a system’s behaviour does not change in response to interpretation or appeal, it can be experienced as uncaring—even though no capacity for care exists in the first place.
This is where fear often enters the discussion. Not fear of malfunction, but fear of being subject to something that feels unresponsive, distant, or beyond negotiation.
Importantly, this fear does not arise because machines resemble humans. It arises because humans struggle to relate to systems that do not.
Externalisation and the Quiet Loss of Responsibility
Anthropomorphism has a second, less obvious consequence: it shifts responsibility.
When a system is described as “deciding,” responsibility appears to move with it. Human choices—about design, objectives, constraints, and deployment—fade into the background. Outcomes begin to feel inevitable rather than authored.
This is a form of externalisation. Instead of confronting the weight of human agency at scale, responsibility is subtly displaced onto the system itself. Language assists this process. Phrases such as “the model concluded” or “the algorithm chose” are convenient, but they also obscure the fact that optimisation criteria, training data, and deployment contexts were all set by people.
Externalisation is emotionally understandable. As systems grow more complex and more consequential, the burden of ownership becomes heavier. Anthropomorphism lightens that burden by giving it somewhere else to go.
But the relief is temporary. Responsibility does not disappear. It merely becomes harder to locate.
Asimov’s Misunderstood Contribution
Isaac Asimov’s Three Laws of Robotics are often cited as early attempts to govern intelligent machines. In fact, they were narrative devices designed to explore the limits of rule-based delegation.
Asimov’s stories repeatedly demonstrate that the more responsibility humans attempt to encode into systems, the more ambiguous accountability becomes. The laws fail not because machines are malicious, but because ethical judgment cannot be reduced to formal constraints without loss.
Asimov was not warning about rebellious machines. He was warning about humans who believe responsibility can be outsourced cleanly.
Fear as Information
Fear, in this context, is not a sign of ignorance. It is a signal.
It indicates a mismatch between scale and intuition, between consequence and comprehension. The problem arises when fear is attached to the wrong object—when it is directed at imagined machine agency rather than at the real challenge of governing human decisions amplified by powerful tools.
When fear is misunderstood, it produces narratives about conscious machines, runaway intelligence, or loss of control. When it is understood, it points toward questions of transparency, accountability, and institutional maturity.
Those are not technological questions. They are human ones.
Where This Leaves Us
Artificial intelligence and quantum computing do not introduce new moral agents into the world. They introduce new ways for human intent to be enacted—faster, broader, and less intuitively traceable than before.
Anthropomorphism makes this transition easier to tolerate emotionally, but harder to manage responsibly.
Understanding this reflex does not require suppressing fear or dismissing concern. It requires recognising where those reactions come from, and what they are trying to tell us.
The question, then, is not whether machines will become more like us.
It is whether we are willing to remain accountable as our tools stop feeling human-scale.
This three-part series is not an argument for fearing artificial intelligence or quantum computing, nor an attempt to predict their ultimate impact. It is an effort to clarify what these technologies are, how they operate at scales that exceed human intuition, and why our responses to them often say more about us than about the systems themselves. AI and quantum computing do not introduce intention, agency, or values into the world; they amplify those already present. The central challenge, therefore, is not whether machines will become more human, but whether human institutions, cultures, and responsibilities can mature fast enough to remain proportionate to the scale they have created.