A wonderful Amazon.com review by A.J. Sutter of the now almost five-year-old Competing on Analytics, of little relevance to African economic development except for the mobile phone companies and Western Union/MoneyGram!
A. FALLACY (and related sins): The most obvious ones in the book are: (i) confusing causation with correlation, (ii) attempting to lead the reader into such confusion, and (iii) “post hoc, ergo propter hoc” (if Y comes after X, Y must have been caused by X).
(i): At page 178, the authors discuss “direct discovery technologies” that mine data and would “let managers go directly to the cause of variances in results or performance. This would be a form of predictive analytics, since it would employ a model of how the business is supposed to perform, and would pinpoint factors that are out of range in the causal model of business performance.”
First, we need to deal with a textual ambiguity: the meaning of “supposed” in this context. If “supposed to” is normative — i.e., meaning “is desired to” — then calling technology “predictive” when it uses such a model is quite a stretch. Or does “supposed to” have a more neutral meaning, like “is anticipated to”? I’ll assume the latter, since it fits the context better.
Now let’s get to the real problem: The model is looking at results and performance — i.e., the past. As statistical programs are wont to do, the model can identify correlations; and let’s assume that it will make predictions based on the observed correlations (there are some commercial software packages that promise this). That is quite different from divining causes, which nonetheless is what the authors have twice asserted in this passage. I leave aside the question of predictive value based on past results; read Taleb or your mutual fund prospectus (“Past results are no guarantee of future performance”).
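The gap between correlation and causation is easy to demonstrate with a toy simulation (my own entirely hypothetical numbers, nothing from the book): a single hidden factor drives two metrics, which then correlate strongly even though neither causes the other — exactly the trap a data-mining tool that “pinpoints causes” from past results can fall into.

```python
import random

random.seed(0)

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical confounder: overall "company health" drives BOTH
# reported analytics use and reported financial results.
health = [random.gauss(0, 1) for _ in range(5000)]
analytics_score = [h + random.gauss(0, 0.5) for h in health]
financial_score = [h + random.gauss(0, 0.5) for h in health]

r = pearson(analytics_score, financial_score)
print(round(r, 2))  # strong correlation, zero direct causal link
```

The two scores never touch each other in the simulation, yet they come out correlated at roughly 0.8; a program that reads that correlation as one metric “causing” the other is doing exactly what the authors describe approvingly.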
(ii) At pp. 46-47, the authors describe correlations between “low performance” in using analytics and financial underperformance, and “high performance” in using analytics and financial overperformance. The ratings of analytics and financial performance are based on self-evaluations, not objective measures. This is the “halo effect” in spades, as most recently described in Rosenzweig’s book — happy (profitable) companies are happy about everything, and unhappy (less profitable) companies blame themselves about everything. More to the point, though: the companies in these two groups make up an aggregate of only 29% of their sample. They say nothing about the middle 71%. For all we know, “high performance” in analytics also correlates well with mediocre financial performance.
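To see why the missing 71% matters, here is a hypothetical survey (again my own toy numbers, not the authors’ data) in which analytics use and financial performance are only weakly related across the whole sample. Reporting only the top and bottom tails — about 29% combined, mirroring the book’s split — still produces a dramatic-looking contrast.

```python
import random

random.seed(1)

def pearson(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def mean(it):
    vals = list(it)
    return sum(vals) / len(vals)

# Self-rated analytics use, only WEAKLY related to financial
# performance across the whole population (true r ~ 0.2).
n = 10000
analytics = [random.gauss(0, 1) for _ in range(n)]
financial = [0.2 * a + random.gauss(0, 1) for a in analytics]

pairs = sorted(zip(analytics, financial))  # sorted by analytics score
k = int(0.145 * n)  # bottom 14.5% + top 14.5% ~= 29% of the sample
low_tail, high_tail = pairs[:k], pairs[-k:]

gap = mean(f for _, f in high_tail) - mean(f for _, f in low_tail)
overall_r = pearson(analytics, financial)
print(f"tail gap: {gap:.2f} sd, overall r: {overall_r:.2f}")
```

The two tail groups sit far apart financially, so a report quoting only them looks impressive — while the full-sample correlation stays weak and the unreported middle 71% tells a much duller story.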
(iii) At pp. 18-19, the authors tell a cautionary tale about the Red Sox manager who defied the quants in the 2003 American League Championship Series against the Yankees: Red Sox analysts “had demonstrated conclusively” that pitcher Pedro Martinez became much easier to hit after about 7 innings or 105 pitches, and warned the manager that “by no means should Martinez be left in the game after that point.” However, “in the fifth [sic] and deciding game of the series,” the manager allowed Martinez to continue pitching into the 8th inning. The result? “[T]he Yankees shelled Martinez. The Yanks won the ALCS, but [the manager] lost his job. It’s a powerful story of what can happen if frontline managers and employees don’t go along with the analytical program.” Sounds like a sportscaster channeling the Borg.
Even if we take this story at face value, one has to wonder, was that all there was to it? Does the Red Sox’ losing the series after Martinez pitched into the 8th inning mean that his pitching was the cause? Was there bad fielding involved, for example? Or did the Yankees’ adrenalin have anything to do with it? And what was the score when Martinez was removed?
Thoughts like these moved me to look up the box score of the game. First of all, Martinez didn’t pitch in the fifth game — probably what the authors were referring to was the 7th game. In that game, it’s true, Martinez gave up 3 runs in the 8th inning. But what was the result? The Yankees only TIED the game, 5-5, up to that point. They didn’t win until the bottom of the 11th inning, when they scored one more run (off the third Red Sox pitcher brought in after Martinez). By the way, the game was in New York, so do you think the home crowd’s energy might have been a factor? “Post hoc, ergo propter hoc”: it don’t come any better than this.
B. CIRCULARITY: E.g.: At pp. 48-49, one of the 5 characteristics of analytic capabilities possessed by companies “that compete successfully on analytics” is that such capabilities are “better than the competition [sic].” I guess that’s why they “compete successfully.” BTW, two others in the list of five are that such capabilities are “hard to duplicate” and “unique” (@48). The same cannot be said of the items in this list.
The discussion of the ideal characteristics of executives in “analytic competitors” (@135-136) hints at a more substantive circularity. One such characteristic is that the exec should be a “passionate believer in analytical and fact-based decision making”. However, when describing how “analytical leadership emerge[s]” (@136-137), the authors can only adduce cases in which the leaders (i) founded a company on the principle of using analytics from the get-go, (ii) came in as new senior execs bringing the idea of using analytics with them, or (iii) are a younger generation in a family-owned business. The authors don’t mention anyone who “saw the light” and became a convert. So companies whose leaders are passionate about analytics will use analytics.
C. INCONSISTENCY: E.g.: The “most analytically sophisticated and successful” companies use analytics, inter alia, to support “a distinctive strategic capability” (@23). “Having a distinctive capability means that *the organization* views this aspect of its business as what sets it apart from competitors” (@24; emphasis added). However, “not all businesses have a distinctive capability” — e.g., Kmart, USAirways and GM don’t, because “to *an outside observer* they don’t do anything substantially better than their competitors” (id., next paragraph; emphasis added).
D. BANALITY: Parts of the book (esp. Chapter 6, a five-step “road map to enhanced analytical capabilities”) sound like a Mad Libs that could just as easily have been filled in with strategic planning, Six Sigma, or dozens of other management fads through the decades. E.g., a “Stage 4” company is defined as one in which “analytics are respected and widely practiced but are not driving the company’s strategy” (@125); “It is important to specify the financial outcomes desired from an analytical initiative to help measure its success” (@127); “Assuming that an organization already has sufficient management support and an understanding of its desired outcomes, analytical orientation, and decision-making processes, its next step is to begin defining priorities” (@id.).
Finally, the whole enterprise of “analytics” has a certain banality too, through no fault of the authors of this book: it’s one more in a string of dreary revivals of Taylorism on steroids, albeit this time with 21st-century pharmaceutical know-how — and with far greater potential to invade personal privacy. Some of its practitioners think it would be a good idea to, say, deny jobs to people simply on the basis of low credit scores, since people with low credit scores can be assumed to have lots of other problems too (reported without any explicit endorsement or disapproval by the authors @26). That such an “analytical” criterion might compound those folks’ problems and low credit scores is not worth a mention. Here is the point at which the authors’ omissions and gaffes stop being silly, and where banality stops being benign. It is more than a disappointment that you won’t find ethics discussed in this book.