What do we call the collection of technologies that make up what we used to call "artificial intelligence"? This conundrum reminds me of a Raymond Carver short story (and book) called What We Talk About When We Talk About Love. Artificial Intelligence (AI) isn't quite as ambiguous a concept as love, but it's moving in that direction.
I was prompted to discuss this issue by a conversation with Jeremy Achin, the young CEO of DataRobot. We were preparing for a collaborative presentation at the Open Data Science Conference in Boston a few weeks ago, and I told him I could present on "The Cognitive Company." Achin, who doesn't mince words, wrinkled his nose and said he really didn't like the use of the word "cognitive." He suggested that IBM had forced the term on the world and that to use it was to promote that company. He also suggested that the field of AI is not actually very close to replicating or surpassing the capabilities of human cognition.
I'm not sure I agree with his IBM claim, although the company does seem to pay top prices for ad space on searches involving "cognitive technology" or "cognitive computing." Achin is certainly correct that current technologies are not yet worthy of comparison to the human brain's capabilities, but then "artificial intelligence" implicitly makes that comparison too.
There is another competitor in this race, and that’s the one that best characterizes DataRobot’s offerings: machine learning. It’s a bit grandiose as well in comparing computer-based learning to human learning, but it’s a bit more specific than “AI” or “cognitive.” The trouble with it is that several technologies that have often been included in the AI category—rule-based expert systems and “robotic process automation” tools—don’t actually learn or improve their performance over time without human intervention. So I don’t think it’s a good fit for an umbrella term that describes all intelligent technologies.
There are also the more generic terms like “machine intelligence” or “smart machines.” For whatever reason—perhaps their high level of generality—these haven’t caught on. Some people also use the term “robotics” as the general term for intelligent machines, but to me anything involving “robot” will always suggest a machine with the ability to manipulate the physical world. That’s why I don’t like the term “robotic process automation”—it has nothing to do with physical robots. For that matter, I am also not a fan of “automation”—we’ve been talking about it for decades, and many of us still seem to have jobs. I prefer “augmentation” in almost every case for the impact of technology on human labor.
What terms have caught on? If we turn to Google Trends, the arbiter of how we use terminology to find out about the world, “artificial intelligence” and “machine learning” are quite dominant compared to any of the alternatives—see this comparison over the last five years, for example. It suggests that “machine learning” is now the popularity winner, followed by “artificial intelligence.” “Cognitive computing,” “cognitive technology” and “machine intelligence” are hardly visible on the graph at all. Not surprisingly, “automation” is substantially more popular than “augmentation.”
One could suggest that this is all a matter of personal preference, but terminology does have its consequences. “Machine learning” might well mislead amateurs to expect that all smart technologies can learn about their environment and improve their performance within it over time. “Cognitive” does imply that we won’t have to rely on human brains for too much longer. “Automation” tends to instill fear in the hearts of human workers.
It’s also undeniably true that some terms roll off the tongue more easily than others. My current research is on how companies build capabilities with these technologies. I’d argue that “The Cognitive Company” sounds better than “The Artificially Intelligent Company” or “The Company That Makes Excellent Use of Machine Learning.” It helps that “cognitive” is an adjective, whereas all the other terms are nouns.
I also have a suspicion that there is a secret reason why people and organizations are using newer terms than "artificial intelligence." That concept has risen and fallen multiple times in the 60 years we've been employing the term (and this Wikipedia entry suggests a much longer timeline for the concept). So current mentions of AI might cause a listener or reader with a historical bent to think, "Here we go again." Calling it something else makes it seem new and different.
The other factor to consider here is the influence of vendors on terms. Yes, IBM is pushing "cognitive," and in particular "cognitive computing." But vendors and market research firms were also quite influential in establishing terms like "analytics" (for better or worse, I had a hand in that one too), "cloud computing," "client/server," and the like. Other than IBM's clear preference for "cognitive," it's unclear what direction the rest of the IT industry will take on this terminological issue, but it will clearly be influential at some point.
I told Jeremy Achin of DataRobot that I admire his passion on the issue of terminology, but it may not be a fight worth fighting. There is typically a period of uncertainty about the names for technologies—and we are going through that now—but eventually the world jumps on a particular terminological bandwagon, and there is little we can do to change it. It will be fun to observe which term gains the most adherents over the next couple of years.
Tom Davenport, the author of several best-selling management books on analytics and big data, is the President’s Distinguished Professor of Information Technology and Management at Babson College, a Fellow of the MIT Initiative on the Digital Economy, co-founder of the International Institute for Analytics, and an independent senior adviser to Deloitte Analytics. He also is a member of the Data Informed Board of Advisers.