Are the robots coming to get us?

How often do I hear these words from a vendor? “And of course there’s AI or machine learning in the back end.” AI (artificial intelligence) or machine learning: in other words, whatever the solution is doing–segmenting audiences, personalizing content, optimizing campaigns–every time it does it, it gets a little smarter.

The robot teaches itself–a never-ending process.

Because robots, after all, are what we’re talking about. No, not R2D2 or Marvin the Paranoid Android. We’re talking about virtual artificial agents (as Wikipedia neatly puts it), steered by software. We’re talking about cybernetics (“cyber” from the Greek κυβερνάω–to steer or guide); but we’re talking about cybernetic systems in which feedback from actions changes–and hopefully improves–the system’s functionality.

Okay, why are we talking about that? Because machine learning is so pervasive in new technology, including marketing technology, that it’s making some smart people nervous. See the absorbing, if tendentiously titled, article in the latest New Yorker by Raffi Khatchadourian: “The Doomsday Invention.” In summary, there are academics who seriously contemplate a dystopian future where the robots (or, if you prefer, highly sophisticated computer programs) supersede the human race as the smartest and most powerful species (if that’s the word) on the planet.

But is this increasingly common element in martech systems really so dangerous? Over the last few months, here at The Hub, we’ve heard about the multiple uses of machine learning to enhance our understanding of what prospects and customers say and do:

  • Stacey Bishop of Scale Venture Partners said: “Machine learning partnered with human insights will be most successful in creating a holistic customer journey.”
  • Textual analytics vendors like Lexalytics and Kanoya are using machine learning to improve semantic understanding of language–and sentiment expressed within language.
  • Yasha Spong of Zample described the “incredible things” machine learning can do in the context of analyzing and understanding images.
  • Gordon Evans of Salesforce explained how machine learning underlies predictive analytics: “We are putting the power of machine learning and data science into the hands of marketers.”
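What “machine learning underlies predictive analytics” usually means in practice is simple enough: train a model on historical outcomes, then score new prospects with it. Here is a minimal sketch of that idea–plain logistic regression fitted by gradient descent–with invented lead features and toy numbers; it stands in for no particular vendor’s system.

```python
import math

def train_logreg(rows, labels, epochs=200, lr=0.1):
    """Fit logistic-regression weights with plain stochastic gradient descent."""
    n = len(rows[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted conversion probability
            err = p - y                     # gradient of the log-loss w.r.t. z
            b -= lr * err
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def score(w, b, x):
    """Probability that a lead with features x converts, under the fitted model."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

# Toy history: [emails_opened, pages_visited] -> converted (1) or not (0)
history = [[0, 1], [1, 0], [5, 7], [6, 9], [1, 1], [7, 6]]
outcomes = [0, 0, 1, 1, 0, 1]
w, b = train_logreg(history, outcomes)
hot = score(w, b, [6, 8])   # behaves like past converters: high score
cold = score(w, b, [0, 1])  # behaves like past non-converters: low score
```

The model never “decides” anything; the scores simply summarize patterns in the historical data it was fitted to.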

The list goes on and on. So what’s so scary? Let’s look briefly at some basics.

One conceptual problem we’ve created for ourselves arises from using the terms “machine learning” and “AI” interchangeably. It sometimes seems that machine learning is just the smart, buzzy way to refer to AI. But not all machine learning is really AI. Perhaps the most familiar manifestation of machine learning in the current digital environment is the use of algorithms to improve search results or purchase recommendations. Google and Amazon, to take obvious examples, use algorithms which update automatically as a result of feedback users can’t help giving as they choose to click or purchase. That’s machine learning, and nobody supposes there’s anything thoughtful, reflective or deliberate about what the algorithms are doing. It’s automatic in the way that knocking over a line of dominoes is automatic.
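That kind of feedback loop can be sketched in a few lines. This is a deliberately crude stand-in for what Google or Amazon actually run (their systems are vastly more sophisticated, and the item names here are invented), but it shows the dominoes falling: clicks come in, a ratio updates, the ranking changes–no thought involved.

```python
from collections import defaultdict

class ClickRanker:
    """Re-rank items purely from accumulated click feedback."""

    def __init__(self, items):
        self.items = list(items)
        self.impressions = defaultdict(int)
        self.clicks = defaultdict(int)

    def record(self, item, clicked):
        """Log one showing of an item and whether the user clicked it."""
        self.impressions[item] += 1
        if clicked:
            self.clicks[item] += 1

    def ranked(self):
        # Smoothed click-through rate; the +1/+2 prior keeps unseen items in play.
        def ctr(item):
            return (self.clicks[item] + 1) / (self.impressions[item] + 2)
        return sorted(self.items, key=ctr, reverse=True)

ranker = ClickRanker(["widget", "gadget", "gizmo"])
for _ in range(20):
    ranker.record("gizmo", clicked=True)    # users keep clicking "gizmo"
    ranker.record("widget", clicked=False)  # and keep ignoring "widget"
top = ranker.ranked()[0]  # "gizmo" rises to the top automatically
```

The “learning” is nothing more than an arithmetic update triggered by user behavior–exactly the automatic, undeliberate process described above.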

AI, in its origins anyway, was about designing intelligent machines (in contemporary terms, intelligent software). As for what counted as an intelligent machine, the foundational description was provided by the computer scientist Alan Turing, who said that a machine could be described as intelligent if its behavior was indistinguishable from that of a human being (you can study that in more detail here). Now Turing was a genius, but it’s evident that his test–which served for many years as an admirable goal for software designers–overlooks an important distinction. It can’t distinguish a machine which is genuinely intelligent from a machine which can fool people into believing it’s intelligent.

But that doesn’t matter here. What’s important is to recognize that the concept of “intelligence” goes far beyond the automatic revision of algorithms. A truly intelligent machine should be able to act independently, spontaneously and purposively. And those are the machines philosophers like Nick Bostrom, the focus of the New Yorker article, are worried about: “Bostrom’s fears, in their simplest form, are evolutionary: that humanity will unexpectedly become outmatched by a smarter competitor.”

Bostrom, of course, is capable of explaining his concerns in much greater detail than that, but the term “evolutionary” affords an important clue to one practical problem with the doomsday argument. Machines, to put it bluntly, don’t evolve–except in a purely metaphorical sense. To paraphrase the biologist Richard Dawkins, life evolves through natural selection among replicating entities. While it’s obviously true that software can self-replicate, and we know from the simple Amazon example that it can optimize its performance automatically, it relies–unlike the biological replicators Dawkins has in mind–on the grid. And while one can draw a fanciful analogy between the ecosystems which sustain living creatures and the power grid which sustains software and machine systems generally, the truth is that the grid is easy to turn off.

When I was a kid, the joke about the Daleks–the malicious robots who threatened Dr Who–was that you could evade them simply by running upstairs. For the doomsday scenario to come about, it’s necessary to suppose that the robots can’t just be unplugged. Of course one can imagine means by which malicious software could override attempts to cut off power–but designing an adequate fail-safe mechanism to disable a power supply is an engineering problem, not a philosophical problem, and surely not an insurmountable one.

The philosophical problem–which we won’t solve here–is whether there could be a moral objection to disabling an intelligent machine. Marketers, sales reps, data scientists, creative designers–all the mere human beings riding the martech tiger–are objects of moral concern. Unplugging them–whatever that means in practice–would be wrong (and that’s why it’s illegal).

But what could possibly be wrong with shutting down HAL?
