Can AI See the Value in Confusion?
(posted 2024/11/24; originally appeared 2024/08/18 in the Albany, New York, USA Times Union, https://www.timesunion.com/opinion/article/commentary-ai-see-value-confusion-19659622.php)
This month in Scientific American, Hartmut Neven, the founder and lead of Google Quantum AI, co-authored an opinion piece containing the following remarkable assertion: “You … don’t ever consciously experience a superposition of states. Any one experience has a definitive quality; it is one thing and not the other. I see a particular shade of red. … I don’t simultaneously experience red and not-red.”
Have you ever simultaneously experienced red and not-red? I have. In fact, there are common words for this kind of experience, though I confess that it took me a while to come up with one.
And it matters. Why? Because it matters how the people shaping artificial intelligence appear to understand human nature, which emerges from our conscious experience.
Recently, I was walking down Lark Street on an overcast day when I saw what looked like an American flag ahead. I’m a middle-aged physicist with slight vision loss owing to a neurological condition. My color vision is somewhat degraded. I’m aware of variants of the American flag in which red is replaced with dark grey or black. Until I got closer, I wasn’t sure whether the dark stripes I was seeing were red or dark grey or some other color.
In other words: I was experiencing red and not-red simultaneously.
Have you figured out a word for the kind of experience I was having? One word is “confusion.” The founder and lead of Google Quantum AI, writing an essay based on ideas central to his work, failed to recognize the concept of confusion when he expressed it.
Why should this bother you? The technologists creating AI are trying to align its outputs with what they see as “human values.” But people have a wide variety of values.
So which ones are the AI models trained on? What are AI systems taught about the value of life, the nature of gender, the relationship of a people to their government? How about difference vs. conformity, community vs. self, or when it’s OK to lie?
Built on a limited view of human experience, artificial intelligence could become a hard-to-gainsay Wizard of Oz, imposing a virulent homogeneity on humanity.
Humans are not always logical, constant, absolute or clearheaded. And that’s fine. In some ways, it’s an advantage. Confusion is the first step to understanding. Neven and his colleagues should give it a try.