Competitive Intelligence

Tactical, Operational & Strategic Analysis of Markets, Competitors & Industries

All Predictions are Mistaken; the Role of Intelligence is to Lessen the Gap Between Estimates and Outcomes

Tom Hawes had a blog post recently about the trustworthiness of predictions, which got me thinking it would be a useful exercise here to clear up a major misconception that has long plagued intelligence people and their value proposition: the real role of intelligence in crafting predictions. Here's the thesis:

All Predictions Are Mistaken; the Role of Intelligence is to Lessen the Gap Between Estimates and Outcomes

Now, I'm certainly not alone in this position - I've heard versions of the idea from many colleagues over the years - yet there remains a widespread belief that predictions are something solid decision makers can bank on.

Quite the contrary: as decision makers grow ever more intoxicated by the assumptions that accompany the responsibilities invested in them - some true, others false - intelligence must expose those assumptions in a way that makes skepticism palatable and, indeed, valued as intelligence's foremost contribution to deliberation.

Let's talk this through. I still think too many practitioners, particularly new ones, remain convinced they were hired to predict outcomes, when in fact their job is simply to lessen the gap between estimates and outcomes.

Nothing short of the sustainability of our institutions is on the line.



Replies to This Discussion

Arik: As usual, I'm in agreement with you. As someone who has been studying, doing, and teaching analysis for a few decades now, I'd humbly suggest to you that the intelligence analysis function has always been about "reducing the gap." My favorite working definition, which sits in front of me on my work desk and has been a part of most everything I have written about intelligence analysts through the years, states that the intelligence professional defines reality for decision makers whose actions could alter it. Predicting outcomes is not a fruitful pursuit for intelligence professionals; altering outcomes positively because our decision-making clients are better informed about differing possibilities is. Craig
Those pesky predictions - they may not be right, but they can be fun:

Love how the wife goes shopping and the husband has to pay; very 1969.
Intelligence defining reality is core, yes, but I think there's some misunderstanding among us about extending our attention into defining implications, options, and recommendations. That boils down to each practitioner's individual charter, of course, and there's very little consistency between circumstances. I think there's a project here for us to start thinking anew about the discrete activities in the intelligence apparatus.

I recall a discussion a few years ago with a CxO at a Fortune 50 company who implied that, at his organization, intelligence people taking on the latter three roles would be incredibly condescending to the decision-makers' own intelligence.

One of the major critiques of the "Ambiguous Sciences" as a career (our broader kin in the analytical family) is that, in some organizational cultures, the value proposition boils down to asking leadership to outsource their brains. I believe redefining that value proposition in ways that make a more realistic, concrete case without all the implied hostility is a real opportunity for us to improve the value of the field.
Yes Arik,

This is precisely the objective of Streaming Intelligence.

Benchmarking Analysis with Gap Analysis and Streaming Information and the Ability to Recalibrate Intelligence.

"Keep your eye on the ball".
Arik,

All predictions are mistaken? Hardly - and any time anyone says ALL, there is ample reason to dismiss the claim. That said, I would respectfully disagree with your central thesis.

In my 12-plus years of progressive experience moving from Analyst to Sr. Strategic Analyst to Strategic CI Manager to Director of CI, it has ALWAYS been my role to provide Strategic Early Warning (SEW) to C-level management, R&D, etc., so my company could proactively plan with the benefit of foreknowledge. I was always on the "strategic" side of the CI equation. Now, admittedly, at the firms where I have been employed, there were also lower-level, current-focused analysts (tactical emphasis) who supported current product marketing, field sales, etc., dealing with existing products and "today's issues" - hence, no, they were NOT expected to be predictive. These tactical folks were referred to as "Competitive Response," and they were part of the overall CI apparatus, as were Technical CI/Benchmarking.

So, stepping back: yes, IT IS the role of SOME CI folks to be predictive - usually the more seasoned, senior folks with HUMINT-gathering capability. Most large firms employ BOTH current-focused and strategic/future-focused CI folks, and the roles tend to be separate, because for the most part, if you are down in the trenches every day answering calls from salespeople and operating in a reactive mode, no, you aren't going to be predictive. Hence why my team and I were removed from dealing with field sales, and executive management got real testy when they ended up getting pulled in to deal with current-focused problems - yeah, they wanted us focusing on what was COMING. Call it predictions, call it early warning (SEW); in my opinion and in my training, that's precisely what CI has always REALLY been about - and my team and I have been very effective producing just that, and it was precisely what we were tasked with.

Now, to your other point about assumptions: I do agree with you that CI has a role in validating or negating conventional wisdom (which so rarely is valid) with fact-driven data and analysis.

Regards,
M
So, no disrespect intended here at all, Monica, but this is precisely the worldview we need to correct; for one thing, I find it hard to believe your predictions were entirely flawless. Indeed, if I understand your position correctly, that's your objection to the word ALL.

To clarify my argument a bit and take this further: while forecasts of any kind might look very, very close to accurate in hindsight, the assumption that high degrees of certainty are possible at all where uncontrollables predominate is itself a critical part of the problem intelligence needs to solve, if we are to keep our promises to those who rely on us.
Arik: Foresight is rarely 20/20 - it reminds me of the seer-sucker theory we talk about in intel circles, which suggests that for every seer there is a correlated sucker. There is also a popular view that hindsight is 20/20. Indeed, in hindsight, most people will conclude that their predictions or forecasts were more accurate than those predictions actually were. After a big tournament, it is remarkable how many people claim they predicted it all along (e.g., everybody saw a Duke victory over Butler in the NCAA basketball championship coming, or saw all the clues that an individual was a serial cheater - read: Tiger Woods or Jesse James). We seek order in our thinking, and will find any number of ways to put it in place even when the entropy that normally occurs in nature suggests otherwise.

The reality is that humans have a remarkable cognitive ability to rationalize, justify, twist (etc.) the facts in retrospect. It is a coping mechanism that helps us deal with buyer's regret, cognitive dissonance, intellectual-emotional disconnects, and so on. We do not have the capacity, or ordinarily get the necessary practice, to do the same with our foresight capabilities.

We also rarely identify the "controllables" or the "uncontrollables," although this is a necessary facet of most every analytical exercise; nor do many of our CI analysts place, or stand by, determinate probabilities on their predictions (despite our teaching analysts that this is necessary and proper).

Good prediction, like good foresight, good scenarios, good insights, etc., requires good analytical inputs (from all sources), rigorous and practiced analytical processes, and appropriate (conducive) organizational contexts. Most "real-world" businesses fail on one or more of these dimensions.
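Craig's point about analysts placing, and standing by, probabilities on their predictions can be made concrete. One standard way forecasters do this is the Brier score: the mean squared gap between stated probabilities and realized outcomes, which is literally a measure of "the gap between estimates and outcomes." The sketch below is only an illustration - the forecasts and outcomes are invented numbers, not data from this discussion:

```python
# Hypothetical illustration: grading probabilistic predictions after the fact.
# The Brier score is the mean squared gap between probability estimates (0..1)
# and realized outcomes (0 or 1). Lower is better; a forecaster who says
# "50/50" about everything earns 0.25 regardless of what happens.

def brier_score(estimates, outcomes):
    """Mean squared difference between forecast probabilities and outcomes."""
    assert len(estimates) == len(outcomes) and len(outcomes) > 0
    return sum((p - o) ** 2 for p, o in zip(estimates, outcomes)) / len(outcomes)

# Invented example: four predictions with stated confidence, then what happened.
estimates = [0.9, 0.7, 0.2, 0.6]   # stated probability each event would occur
outcomes  = [1,   1,   0,   0]     # 1 = it happened, 0 = it didn't

print(round(brier_score(estimates, outcomes), 3))  # prints 0.125
```

Scoring forecasts this way rewards exactly what the thesis asks for - not being "right," but keeping stated confidence honestly close to how reality turns out.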
Arik,

With all due respect, I never said my predictions were flawless, but they were extremely accurate the vast majority of the time. And no, you misunderstand if you inferred that when I used the word ALL I meant my predictions were spot-on every time. No - I was referring to your usage of the word ALL, which I took to mean that each and every prediction is wrong and mistaken, as specifically stipulated in your thesis.

Anyway, as to my worldview: well, I know it aligns with many of the big names in the old intelligence vanguard, who I notice have little to do with SCIP any longer, largely due to this difference of opinion. It's hardly a worldview that needs correcting; it's CI's true heritage.

Kind Regards,
M
The real issue isn't the brute-force ability of intelligence people to make predictions - it's the understanding that predictions are only ever partially correct, which I think you acknowledge, if I read your reply correctly.

The implication is that if and when predictions are mistaken - as indeed they ALL are, at least to some infinitesimal degree - the capital put at risk is risked wisely, with the steward(s) of that capital fully aware of the limited effectiveness of prediction as an art.

To be clear, I wouldn't suggest that intelligence people should never make predictions about probable outcomes - quite the contrary, the future is our canvas; I believe simply that the expectations of decision or policy makers must be aligned with the degree of confidence accompanying the recommendation.

As Craig suggests, Monday-morning quarterbacking is pretty easy and all hindsight seems 20/20; or, to stretch the cliché, success has many fathers but failure is always an orphan. My thesis, however, stands firm as intelligence philosophy - though our debate suggests perhaps a corollary. How's this for another absolute? ;-)

Predictions are never flawless; the goal of intelligence should be to minimize the mistakes that accompany any given decision's inevitable errors, great or small.

By the way, the whole point of starting this conversation is that orthodoxy is failing us in an era significantly different from the good old days, when hard-core HUMINT and command-line computers nobody could use defined the frontier of our work. We can't push the field forward unless we take risks together in dialogue about intelligence theory and doctrine as it evolves over time.

In other words, though change is uncomfortable in any endeavor, we can adapt if we stick together and debate the fundamental values our profession espouses. Indeed, nobody else is going to.
Arik,

I like the second thesis.

mn
Thanks! I tried to elbow that into shape for consensus, but I'm still not terribly satisfied with the diligence of the debate in testing a thesis statement such as this.

What else is wrong with this position? Where do we go from here?

If we accept the thesis, then I would suggest it's framework time: heuristics for understanding how to process predictions as estimates, assessments, or probable scenarios from an intelligence point of view - which is very different from seeing things from a pure strategy perspective. Strategy is only the deterministic will of the agent or actor; the operating environment may or may not care what an organization's strategy looks like.

It seems to me (though I'm not privy) that this is perhaps why the national intelligence apparatus (at least here in the U.S.) adopted nomenclature like that above - estimates instead of predictions - precisely because of the inherent conundrum posed by non-deterministic drivers of, and inhibitors on, strategy.

Thanks for fencing this out everybody!


© 2019   Created by Arik Johnson.