Turing test 2.0

The Turing test is a measure of a machine’s ability to exhibit intelligent behavior that is indistinguishable from that of a human. It was first proposed by the mathematician and computer scientist Alan Turing in 1950 and is sometimes referred to as the “imitation game.” The test involves a human evaluator who engages in a natural language conversation with two entities, one of which is a human and the other of which is a machine. The evaluator does not know which entity is which and must determine which one is the human based on their responses to the evaluator’s questions.

(ChatGPT 2023)

The Turing test still stands as a standard for evaluating how close artificial intelligence (AI) comes to humans. While many philosophers and scientists hold the position that AI could, in principle (sooner or later), imitate humans so well as to be indistinguishable from real ones, I do not. A fairly simple thought experiment should convince you as well.


The cult movie Blade Runner depicts AI that behaves like a human but does not feel like one. Nevertheless, this is quite a poor version of AI, since it is constantly detected as a zombie.

I long thought AI could mimic a human almost perfectly but could never suffer or fear, because it cannot die. However, the heuristically programmed computer HAL 9000 in the even more cult movie 2001: A Space Odyssey explicitly fears dying. He (what misogyny!) understands that he will die if he is unplugged. It is thus conceivable for AI to have fears similar to humans'.


To make the story of Turing test 2.0 shorter, I will jump directly to an unbeatable proof. Even if it were conceivable, for instance, for AI to experience the complexity of wine or French cheese using sophisticated detectors, it is inconceivable that such an AI would excrete. The metabolism of AI is powered by electricity, not by wine and cheese.

Is it inconceivable for AI to develop an internal machine that would transform food into electricity? Of course not. Such machines already exist as biomass power plants. However, here comes the catch: the AI would then have to have a body, and such a body would be internally different from a human body, even if its shape resembled one.

Brains in a Vat

The Turing test asks us to distinguish the behavior of AI from that of humans. If the behavior we have in mind is purely memetic, and if memes are understood only as bits of information (Dennett 2017, 173) without a semantic, intersubjective dimension, then AI could be indistinguishable from a human mind, but only if we take the thesis of Hilary Putnam's article "Brains in a Vat" (Putnam 1999) as physically possible and not merely as a thought experiment.

Brains co-evolved with their whole phenotypes: limbs, hearts, and hair. Brains' minds co-depend on the whole phenotype, not only on the brain as one part of it. One could say that the computer chip is AI's phenotype: a top-down human creation, unlike the biological brain, which evolved bottom up. As such a phenotype, the computer chip differs ontologically from any biological brain. Its evolution can only make that difference larger, not smaller, as the laws of evolution teach.


The Turing test thus fails from the very beginning. Behavior is always the behavior of the brain as a phenotype within a more extensive phenotype. Even if an AI phenotype were eventually to excrete, and even if its excreta resembled humans', the behavior of the AI phenotype, its operations, would differ from humans' to such a degree that even a child could not be misled.


ChatGPT. 2023.

Dennett, Daniel C. 2017. From Bacteria to Bach and Back: The Evolution of Minds. Reprint edition. W. W. Norton & Company.

Putnam, Hilary. 1999. “Brains in a Vat.” In Knowledge: Readings in Contemporary Epistemology, 1–21. Oxford University Press. https://philpapers.org/edit/PUTBIA.

Andrej Drapal