The Tufts Daily
Where you read it first | Thursday, April 25, 2024

Around the Corner: Sentient computers? Never.

Asher
Column

A common trope in science fiction is “what if the computer comes to life?” The plot of “Free Guy” (2021), a mediocre film, revolves around a non-player character in a video game who gains sentience and struggles to preserve it. The problem of what to do with a sentient computer is enormously complex, too complex, indeed, to address in this column. There is, however, another, more fundamental question: How do we know when a computer has gained sentience?

One answer is to divine the inner capacities of a computer by examining its external behavior. The Turing test, devised by the eponymous Alan Turing, consists of an interrogator posing a series of questions to a human and a computer in two different rooms. The interrogator, unaware of the identity of either, must determine which room contains the machine. If they guess incorrectly, the machine has passed the Turing test and is said to be able to think. Whatever the test’s merits, a critical problem remains: A machine that acts as if it can think does not necessarily think.
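To make the setup concrete, here is a minimal sketch in Python. Turing’s paper specifies a protocol, not a program, so everything here, the respondent functions, the interrogator, the room labels, is a hypothetical stand-in for the real thing.

```python
import random

# Hypothetical stand-ins: any function mapping a question to a reply.
def human_respond(question: str) -> str:
    return "Let me think about that for a moment."

def machine_respond(question: str) -> str:
    # A machine that mimics the human's style word for word.
    return "Let me think about that for a moment."

def turing_test(questions, interrogator_guess) -> bool:
    """Return True if the machine passes, i.e. the interrogator guesses wrong."""
    respondents = [human_respond, machine_respond]
    random.shuffle(respondents)                        # hide who is in which room
    rooms = {"A": respondents[0], "B": respondents[1]}

    # The interrogator sees only the transcripts, never the rooms themselves.
    transcripts = {label: [(q, reply(q)) for q in questions]
                   for label, reply in rooms.items()}

    guess = interrogator_guess(transcripts)            # returns "A" or "B"
    return rooms[guess] is not machine_respond         # wrong guess: machine passes

# A toy interrogator that can only guess at random, since the transcripts match.
passed = turing_test(["Are you conscious?"], lambda t: random.choice(["A", "B"]))
print("Machine passed:", passed)
```

Notice that the function judges only transcripts; it never looks inside a room. That limitation is exactly where the trouble lies.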

A thought experiment known as the Chinese room, proposed by the philosopher John Searle, illustrates this. Imagine a woman sitting in a room with a window. Watching her from outside, you see her arrange squares of paper bearing Chinese characters into sentences, even whole texts, in Chinese. From the outside, it appears that she understands Chinese. When you walk into the room, however, you discover that she is following instructions that tell her which squares to put where. A computer functions in a similar way: It receives instructions and executes them. Like the woman in the Chinese room, it may appear to understand, even to think critically, but it simply executes instructions.
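A few lines of Python make the same point. The rule book below is hypothetical and tiny, but the structure is Searle’s: the room’s replies are produced by symbol matching alone, and no step in the process involves understanding.

```python
# A hypothetical rule book pairing input symbols with output symbols.
# The woman in the room (or the program) consults it mechanically.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",           # "How are you?" -> "I'm well, thanks."
    "今天天气怎么样？": "今天天气很好。",    # "How's the weather today?" -> "It's nice."
}

def chinese_room(symbols: str) -> str:
    """Reply by rule-following alone; no step here involves meaning."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

# From outside the room, the output looks like fluent comprehension.
print(chinese_room("你好吗？"))  # prints: 我很好，谢谢。
```

Scale the rule book up far enough and the replies become indistinguishable from a speaker’s, yet the function itself has not changed: It still only matches symbols.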

A skeptic may contend that this dismal assessment of computational vitality is rooted only in the computers of today. Computers of tomorrow, after all, might be sufficiently complex to be considered alive. Three principles lead me to respond that even if a computer does come to life, it is not logical to treat it as alive. First, computers are not currently self-aware (self-awareness being this column’s definition of life). Second, it is not possible to determine whether a computer is self-aware by examining its external behavior. Third, it is not possible to examine a computer’s inner experience directly, just as it is impossible to have someone else’s first-person experience.

If we know now that computers are not self-aware, then we must continue to believe so unless contrary evidence arises. Since it is impossible for such evidence to exist, as my second and third principles hold, we must continue to treat computers as unaware, irrespective of whether they ever become aware. I am, however, very open to the possibility that there are fatal flaws in this reasoning. If you, the reader, perceive a problem, I invite you to share it with me, and I will respond in my next column in two weeks.