First, fair warning: I’ve been involved one way or another with artificial and augmented intelligence since 1968. So I’m a little stuck, in that I actually know a bit about how this stuff works. I know less about how human (and animal) intelligence works, but I try to keep up. (For example, did you see the recent breakthrough in which scientists transplanted eyes into the tails of blind tadpoles, added a little migraine pharma, and — voila! — the tadpoles got the gift of sight!)
You might argue that seeing isn’t “intelligence,” but that’s just not true. The way we see, hear, feel, fear, talk, listen, strike a pose, make a face, go, pause, accept, reject, try, give up — the list goes on and on — are all “intelligence” in action. Turn the intelligence off, and you are left with reflexes: effectively, a coma.
We don’t understand very much about how most of this stuff that we do works. But just try selling without exercising these still-mysterious gifts. Think about the Turing Test, where the “sales robot” and the test human interact merely by passing text messages back and forth until the human cries “uncle,” accepting the robot as a fellow human. Add one requirement: that the robot needs to talk, and listen, and understand a bit of humor — and be sensitive to how you feel about your job, and how scary this potential purchase could be for you, and whether that little pause before you replied was an expression of hesitation, concern, or preparation for a thoughtful response (or an expression of distraction as the home-office pooch suddenly needed a bit of attention).
The way modern AI works, for the most part, is by taking advantage of the opportunity afforded by very fast computers, which don’t get bored or forget facts, to make correlations between inputs, outputs, and feedback.
AI is great for uncovering hidden patterns in data; and it’s equally great at fooling its masters into believing that correlation is causation and that false negatives (the stuff you missed that sometimes turns out to be more important than the stuff you noticed) don’t matter. When combined with visualization, this kind of data mining and pattern discovery is very powerful, like radar or night vision goggles, letting you see what you couldn’t see before by emphasizing some features and muting a bunch of noise.
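To make the correlation-is-not-causation trap concrete, here’s a minimal sketch in plain Python. The two data series are made up for illustration: both simply trend upward over the same weeks, which is enough to produce a near-perfect correlation with no causal link whatsoever.

```python
# Two unrelated quantities that both happen to trend upward over the same
# eight weeks (hypothetical numbers, purely for illustration).
ice_cream_sales = [10, 12, 15, 18, 22, 25, 30, 33]
webinar_signups = [100, 110, 130, 150, 170, 195, 220, 240]

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

r = pearson(ice_cream_sales, webinar_signups)
print(f"correlation: {r:.3f}")  # close to 1.0, yet neither causes the other
```

A pattern-discovery tool pointed at these two columns would flag them as strongly related; only a human who knows what the columns mean can tell that the relationship is coincidence, not cause.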
Lessons of great gravity
AI is also great for trial-and-error learning. Flying a drone? No problem — crash that sucker 11,500 times, like these folks at Carnegie Mellon University did.
When pundits predict that sales robots will make human sales reps obsolete, they are extrapolating from these “deep learning” successes. All you have to do is crash 11,500 potential deals, and your sales robot might learn to avoid failure and navigate its way to success. Simple, right? Of course, there might be the small problem that 11,500 blown deals is a pretty expensive experiment — and you won’t be flying your “sales drone” in the same environment repeatedly either.
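The trial-and-error loop behind those 11,500 crashes can be sketched as a toy multi-armed bandit. Everything here is hypothetical: the two “sales approaches” and their success probabilities are invented for illustration, and the epsilon-greedy strategy is one common way (not the CMU team’s method) to balance exploring and exploiting.

```python
import random

random.seed(0)

# Two hypothetical sales approaches with unknown (to the agent) success rates.
true_success = {"hard_sell": 0.2, "consultative": 0.6}

estimates = {a: 0.0 for a in true_success}  # learned success-rate estimates
counts = {a: 0 for a in true_success}       # how often each was tried
epsilon = 0.1                               # fraction of trials spent exploring

for trial in range(11_500):  # yes, 11,500 "crashed deals"
    if random.random() < epsilon:
        action = random.choice(list(true_success))   # explore at random
    else:
        action = max(estimates, key=estimates.get)   # exploit the best so far
    reward = 1 if random.random() < true_success[action] else 0
    counts[action] += 1
    # Incremental running-mean update of the estimate for this action.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(max(estimates, key=estimates.get))  # the agent settles on "consultative"
```

The catch the article points out applies exactly here: the toy agent gets 11,500 cheap, identical, instantly-scored trials. A real sales rep gets none of those luxuries.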
By the way, note that you need to first solve for the hard version of the Turing Test: natural, voice-based conversation with real give-and-take, before this kind of “sales drone” can get airborne and ready for its 11,500 deal crashes.
There’s another flavor of AI, full of old-fashioned language processing, with lots of hand-coded rules piled on rules piled on rules. It’s possible to make simple sales robots this way. After all, a program called ELIZA was written by AI pioneer Joseph Weizenbaum, running on an IBM 7094 computer, in 1964. (Read that date again: 53 years ago, we had a robot that consistently fooled people into believing it cared about them, something only the most skilled and sincere salespeople manage to do today!)
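The “rules piled on rules” approach is easy to demonstrate. Here’s a minimal ELIZA-style responder in the spirit of Weizenbaum’s program; the patterns and templates below are my own toy examples, not the original 1964 script.

```python
import re

# Hand-coded pattern -> response-template rules, tried in order.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*) price (.*)", "What would the right price mean for you?"),
]

def respond(message):
    """Match the message against each rule; reflect its words back."""
    text = message.lower().strip(".!?")
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(*m.groups())
    return "Tell me more."  # default reflection when no rule matches

print(respond("I need a better CRM."))  # -> "Why do you need a better crm?"
```

That’s the whole trick: no understanding, just pattern-matching and reflection — and yet, as Weizenbaum discovered to his dismay, people open up to it.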
There’s actually a lot to learn from ELIZA, who is a better empathizer than almost every self-absorbed sales rep I’ve heard. ELIZA’s descendants are with us today in the proliferation of chatbots. And well-programmed chatbots really can sell, or at least carry on the first part of a sales conversation. People project their own humanity onto the chatbot because they can’t help it: we just don’t have another model inside our brains for something that “talks” to us. And if the sale, or part of the sale, is simple enough, then the chatbot gets part of the job done, and cheaply.
But what about the prediction that sales robots will wipe out millions of sales jobs? Sounds scary, doesn’t it? I’ll take a look at that in Part 2. Stay tuned!