
Just Don’t Call it A.I.

You buy a book on Amazon. Its algorithms recommend some other books.  As you continue making purchases, the algorithms automatically teach themselves to improve their recommendations based on your choices.

This is machine learning. It’s smart, it’s useful, and it has many broader marketing applications—especially when embedded and working behind the scenes—from predicting customer behavior to powering chatbots. What it’s not is Artificial Intelligence.
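To see how modest that machinery really is, here is a minimal sketch (in Python) of the kind of self-improving recommender described above. The class name and the co-purchase counting heuristic are my own illustrative assumptions, not Amazon's actual system; the point is simply that the "learning" is statistical bookkeeping, nothing more.

```python
from collections import defaultdict
from itertools import combinations

class CoPurchaseRecommender:
    """Toy recommender: counts how often pairs of books are bought together.

    Purely illustrative. Real recommendation systems are far more elaborate,
    but the principle is the same: update counts from observed behavior,
    then rank items by those counts.
    """

    def __init__(self):
        # co_counts[a][b] = number of orders that contained both a and b
        self.co_counts = defaultdict(lambda: defaultdict(int))

    def record_order(self, books):
        """Learn from one customer's order (a list of book titles)."""
        for a, b in combinations(set(books), 2):
            self.co_counts[a][b] += 1
            self.co_counts[b][a] += 1

    def recommend(self, book, k=3):
        """Return up to k books most often bought alongside the given one."""
        neighbors = self.co_counts[book]
        return sorted(neighbors, key=neighbors.get, reverse=True)[:k]


# The more orders it sees, the better its suggestions get.
rec = CoPurchaseRecommender()
rec.record_order(["Computing Machinery and Intelligence", "The Annotated Turing"])
rec.record_order(["The Annotated Turing", "Godel, Escher, Bach"])
print(rec.recommend("The Annotated Turing"))
```

Every "improvement" in its suggestions is just an update to a table of counts.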

AI—which has its chief intellectual roots in a paper by Alan Turing published in the philosophy journal Mind in 1950 (“Computing Machinery and Intelligence”)—originally meant the dream of designing machines which can “think.”

Obviously, everything hangs on what one means by “think.” In the paper, Turing famously described an experiment he called “the imitation game” (now also widely known as the “Turing test”). In short, if a machine can fool a human questioner into believing he or she is in conversation with another human, then the machine can be said to be “thinking”—or “intelligent.” (Computers are occasionally reported to have passed the test, although I wouldn’t bet on the average chatbot).

Turing was a genius of computer science and mathematics. He’s probably the single individual most responsible for the digital world in which we live (and you may have seen his life portrayed in the 2014 movie The Imitation Game). He wasn’t such a great philosopher.

In his 1950 paper, he offers some weak responses to the objection that computer circuits trained (even automatically) to mimic intelligent behavior aren’t necessarily themselves intelligent. To paraphrase one critic, the machine must not only respond intelligently, but know that it’s doing so. Turing’s reply is that one could raise the same doubt about the responses of another human being, and that way lies solipsism. But that just describes the problem; it doesn’t solve it.

Back in 1936, Turing had already described a computing machine which would come to form the conceptual basis for stored-program computers. Writing eighty years ago, he didn’t much care how the machine would be built (he talks about it running “tapes”). Doubtless, with enough ingenuity, time, and patience, one could build a computer—even a very powerful one—using tapes, strings, tin cans, or whatever’s lying around.
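For the curious, here is a minimal sketch (in Python) of a machine in that 1936 spirit. The dictionary rule table and the bit-flipping example are my own illustrative choices, not Turing's notation, but they show how little is needed to "compute."

```python
def run_turing_machine(tape, rules, state="start", head=0, max_steps=1000):
    """Minimal Turing-style machine simulator.

    tape:  string of symbols
    rules: maps (state, symbol) -> (new_symbol, move, new_state),
           where move is -1 (left), 0 (stay), or +1 (right)
    Halts on the state "halt", a missing rule, or the step limit.
    """
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt" or (state, tape[head]) not in rules:
            break
        symbol, move, state = rules[(state, tape[head])]
        tape[head] = symbol
        head += move
        if head < 0 or head >= len(tape):
            break  # keep this toy example on a finite tape
    return "".join(tape)


# Hypothetical rule table: walk right, flipping 0s and 1s, halt at the blank "_".
flip_rules = {
    ("start", "0"): ("1", +1, "start"),
    ("start", "1"): ("0", +1, "start"),
    ("start", "_"): ("_", 0, "halt"),
}

print(run_turing_machine("01101_", flip_rules))  # prints "10010_"
```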

One might believe—as Turing probably did—that this jumble of junk could be considered “intelligent” if it passed the Turing test. But one might also believe that it’s just a very useful jumble of junk. In particular—and here’s what Turing passed over—one might be reluctant to  confer the same ethical status on the junk that we confer on another human being.

That’s the nub of the problem. If IBM Watson, or some competing machine, became capable of flawlessly imitating human intelligence, would unplugging the machine count as murder? Or is it really the case that human beings, much as we value them, are actually just incredibly complex jumbles of (flesh and blood) junk?

These are really hard questions, and they’re at the heart of the AI debate.  Getting algorithms to train themselves to make better customer touchpoint suggestions—however useful—isn’t much of a contribution to the discussion.
