
It seems to be inevitable. Any popular technology or approach to business change apparently has to involve a large amount of breathlessly positive media-driven hype, and then must be followed by potshots and disparagement. The positive press usually lasts for a couple of years or so, and then authors, journalists, and speakers who seek attention realize that they can't get much of it by jumping on the optimistic bandwagon. They then begin to criticize the idea. Both the positive and negative hype are typically equally unrealistic.
Since there has been enormous positive hype about AI (it will transform organizations virtually overnight, eliminate human drudgery, cure cancer, etc.), it's not surprising that the negative accounts have started to emerge. In my inbox today, for example, I noticed stories like these:
"A new white paper is about to release that shows 85% of Artificial Intelligence projects FAIL." (I have provided the link, but I would advise you not to click on it for reasons I describe below)
"What if AI in health care is the next asbestos?" (complete with photo of a piece of asbestos)
Why AI is inherently biased. (But wait, there's more: "And it's not the first example of AI, a technology developed by an industry that is overwhelmingly populated by white men, producing results that reflect a racial bias in particular.") The story refers to this report.
Like the positive hype, these scary accounts, which usually emerge from PR firms, usually have at least a grain of truth to them. Yes, it's true that some AI projects fail; it's still an emerging technology, and many companies have undertaken pilots or proofs of concept to learn if it will successfully address particular use cases. It doesn't work for all of them. But no one (repeat, no one in this world) has done a study of all AI projects, or even a systematic sample of them, to see what percentage of them fail.
The particular PR message and white paper saying that "85% of Artificial Intelligence projects FAIL" actually has no data at all on what percentage of projects fail. It simply refers to a 2018 Gartner report predicting, not documenting, that "through 2022, 85 percent of AI projects will deliver erroneous outcomes due to bias in data, algorithms or the teams responsible for managing them." As with many Gartner predictions, no data sources are cited. Nor is it clear that all, or even many, of the "outcomes" delivered by AI will be "erroneous." I guess Gartner, known for its "hype cycle," now feels obliged to help nudge technologies along the path to the "trough of disillusionment."
What happens around such predictions, of course, is that vendors or consultants say, "Use our solution and avoid the problem." In the case of the "85% FAIL" white paper, the authors are advocating their particular brand of localization for natural language processing. Vendor hype is not behind the "AI as asbestos" metaphor; that comes from Jonathan Zittrain, a smart Harvard Law School professor whose work I normally respect. I don't know what makes him think that AI will cause cancer or something similarly dire, but it's not because he has a product to hawk.
The key message here, however, is to deflate both positive and negative hype. If you have an analytics background you know, as Bill Franks and I have both argued on this site, that AI is mostly just an extension of what we've been doing for a while in predictive analytics. It's a powerful but familiar tool that isn't going away and won't fail 85% of the time. You should be adding AI capabilities to your organization's analytics portfolio, but you shouldn't be promising miracles for them.
Similarly, you may have to puncture some of the negative hype balloons as well. It's important to point out that while AI is great at churning out models, producing business change from those models is as difficult as it has ever been. And yes, there can be algorithmic bias in AI models, but the decisions they make are usually much less biased than those made by humans, and there are often ways to detect any bias in algorithms. In general, try to communicate that organizations that don't try to change their processes with AI and analytics will be at a substantial competitive disadvantage to those that embrace these tools aggressively.
Analytics and AI professionals didn't generally create the positive hype, so they shouldn't be blamed for it. And they certainly aren't behind the negative hype either. It may be unfair to ask them to play a role in disseminating the truth about their chosen profession, but I'm not sure who else will play that role.