A Google Software Engineer Believes An AI Has Become Sentient. If He’s Right, How Would We Know?

Google’s LaMDA (Language Model for Dialogue Applications) is a sophisticated AI chatbot that responds to human input by producing text. According to software engineer Blake Lemoine, LaMDA has realized a long-held ambition of AI researchers: it has become sentient. Google’s management disagrees, and Lemoine was fired after making his conversations with the system public.

Other AI scientists believe Lemoine is exaggerating, arguing that systems like LaMDA are little more than pattern-matching machines that regurgitate variations on the material they were trained on. Regardless of the technical facts, LaMDA raises a question that will only grow in importance as AI research progresses: how will we recognize it if a computer becomes sentient?
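To make the skeptics’ point concrete, here is a minimal sketch: a toy bigram text generator in Python. It is nothing like LaMDA’s actual architecture, but it shows how a system can produce “new” sentences purely by recombining fragments of its training text. The training sentences and the `generate` helper are invented for illustration.

```python
# Toy illustration (not LaMDA): a bigram "language model" that only
# recombines fragments of its training text -- the kind of pattern
# matching skeptics say large chatbots do at vastly greater scale.
import random
from collections import defaultdict

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)

# Map each word to the words that followed it in the training text.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start: str, length: int = 8) -> str:
    """Emit text by repeatedly sampling a word seen after the current one."""
    out = [start]
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat"
```

Every sentence this toy emits is stitched together from word pairs it has already seen; whether scaling that idea up to billions of parameters produces anything more than sophisticated mimicry is exactly what is in dispute.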

What does it mean to be conscious? Before we can recognize sentience, consciousness, or intelligence, we will have to work out what they are, questions philosophers have debated for centuries. The fundamental challenge is understanding the relationship between physical phenomena and our mental experience of them. This is what Australian philosopher David Chalmers called the “hard problem” of consciousness.

There is no agreement on how consciousness might emerge from physical systems, if it does at all. One popular viewpoint is physicalism, which holds that consciousness is entirely a physical process. If that is the case, there is no reason why a correctly programmed computer could not have a human-like mind.

Mary’s room

In 1982, Australian philosopher Frank Jackson challenged the physicalist view with a famous thought experiment known as the knowledge argument. The experiment concerns Mary, a color scientist who has never actually experienced color. She lives in a specially constructed black-and-white room and watches the outside world on a black-and-white television.

Through lectures and textbooks, Mary learns everything there is to know about color. She knows that sunsets are produced by different wavelengths of light scattered by particles in the atmosphere, that tomatoes are red and peas are green because of the wavelengths of light they reflect, and so on. Jackson then asked: what happens if Mary is released from the black-and-white room? Does she learn anything new when she sees color for the first time? Jackson believed she would.

This thought experiment separates our knowledge of color’s physical properties from our experience of color itself. Crucially, the conditions of the thought experiment stipulate that Mary knows everything there is to know about color but has never actually experienced it. So what does this mean for LaMDA and other AI systems?

The experiment suggests that even complete knowledge of the world’s physical properties leaves out further facts about how those properties are experienced, facts that have no place in the physicalist account. By this reasoning, a purely physical machine may never be able to truly replicate a mind. In that case, LaMDA would merely seem to be sentient.