Competitive Intelligence

Tactical, Operational & Strategic Analysis of Markets, Competitors & Industries

I am attaching below an April 2006 blog entry by Michael Mace, who claims to be a "technology guy" working in Silicon Valley but actually has a terrific background in top positions at (one-time) tech leaders Palm, Apple, and PalmSource, among others. He posted it on his blog at http://www.mikemace.com/stopflyingblind/archives/17 for those interested in reading the before-and-after context as well. I think he makes some interesting points, and certainly ones worthy of discussion in our forum. What do others think? Has he gotten a handle on the five factors that caused the "fall of CI"? Are there some he missed? Did he overstate one or more of these causes?

3. The Fall of Competitive Intelligence

In the next few weeks, we’ll go into depth on competitive analysis – what it is today, and what it should become.

Once upon a time, back in the 1990s, competitive intelligence was a hot area at many companies. They invested heavily in creating competitive intelligence teams. A professional group called the Society of Competitive Intelligence Professionals claimed that CI was the fastest-growing corporate discipline. SCIP had more than 3,000 members in 1996, and was growing by more than 100 new people a month.[1] Competitive intelligence consulting firms did big business, and if you visit any good research library you’ll find whole shelves of books about Competitive Intelligence, most of them written in the 1990s.

But when the Internet bubble burst and companies started cutting costs, many of those competitive intelligence groups were wiped out.

“CI units are being eliminated…. At least half of the CI functions in place today have suffered significant cutbacks, or will face them within the next six months.”

That was from SCIP’s own newsletter in 2003.[2] One of the leading promoters of competitive intelligence in its heyday is now writing about how to apply the Talmud to business decisions.

Why the retreat? Conditions vary from company to company, but I think there are five main reasons:

1. The case for CI was grounded in fear. Much of the urgency behind creating the function was driven by fear of foreign companies, which were said to practice Competitive Intelligence aggressively. Japan in particular was described as a hotbed of competitive spying, and it was implied that this played a key role in Japan’s economic rise. The rhetoric was frightening, and it seemed likely that any company failing to create a competitive intelligence unit was doomed to fall to foreign conquerors.

But as the perceived Japanese “threat” to American business receded, so did interest in Japanese business practices (when’s the last time you heard someone quote from the Book of Five Rings?).

2. The role was never completely defined. CI was a very new discipline, and there hadn’t been time for a consensus to develop on exactly what the role was and how to organize it. As a result, different books and consultants gave conflicting advice. In the absence of clear expectations, I think many competitive intelligence groups were never given well-defined charters. When a company’s under financial stress, a poorly defined function is an obvious thing to cut.

Inevitably, some of the advice was also damaging. For example, one prominent book said the CI role is like being a court jester for your company. The idea was that the CEO resembles Shakespeare’s King Lear, surrounded by liars and flatterers. The jester is the guy in tights who tells the king the truth, by mocking the egotists and exposing the liars.

It’s true that someone in a competitive role must be unafraid to say exactly what the data indicates, even if it’ll upset people. But beyond that I’m uncomfortable with the jester analogy because it implies a completely negative role, and one that focuses only on influencing the CEO. In most of the companies I’ve known, to make change work you need to influence the whole management team, not to mention the rank-and-file employees. You can’t do that if you speak only to the CEO, and besides, you won’t win much respect from the organization if all you do is point out the flaws in other people’s work. When the CEO is replaced (which happens a lot more often than the death of a king), it’s likely that the one thing all the remaining executives will agree on is that they want to strangle the jester.

Which, I believe, is what happened at the end of King Lear.

If you need an analogy for the competitive role, it’s better to think of the scout for a wagon train, forging ahead in the wilderness to identify dangers and find the easiest path for everyone. The scout’s not the manager of the wagon train, but he’s a leader with a unique and valued role. And he never wears tights.

3. The focus was on intelligence, not analysis. Much of the CI literature focused on how to gather and verify facts about the competition’s activities. It’s right there in the name — the function collects intelligence on what the other guys are up to. You can find entire books just listing various intelligence-gathering techniques, down to obscure things like taking the competition’s factory tour with two-sided tape on your shoes, so you can collect microscopic samples of the materials they’re using.

The problem is that basic intelligence collection is becoming less important as the Internet grows and people change jobs more often. The Web is awash with competitive rumors, and chances are that if you can’t find the information you need online, one of your former coworkers is now working for the competitor and will sing like a canary if you buy them lunch. There’s simply less need in most companies for full-time employees who ferret out tidbits of intelligence.

What companies do need is insight on what the flood of information means — how it adds up, and what it says about the competition’s thinking and future behavior. This is why I prefer the term “competitive analysis” rather than competitive intelligence. But that sort of predictive analysis is a very different discipline from collecting data, and it works best when competitive analysis is teamed with market research and advanced technology research. So competitive analysis isn’t very valuable as a standalone function.

4. The wrong people were hired for the function. This probably relates back to the lack of a clear charter for the CI role. When you’re not sure what a function will do, it’s easy to imagine that anyone can do it. Many of the people I’ve seen working in the field were marketing or sales people who had been dropped into the competitive role without much preparation, or much inclination for the work. They floundered around trying to figure out what to do, and produced very superficial reports.

Because of the flood of how-to books on Competitive Intelligence, I think some people formed the impression that anyone could do CI if they followed a few simple practices. That’s a little odd; I don’t know of any other field in business where the expectation is that anyone can be good at it. You don’t try to turn randomly selected employees into engineers, or PR specialists, or salespeople. You look for people who have talent in that area. The same is true for competitive analysts. It’s a specialized field, and not everyone can do it well.

In a future chapter I’ll give some guidelines on how to identify a good competitive analyst.

5. Competitive Intelligence is not mission-critical in the short term. A company has to have salespeople or no one eats. You have to have engineers or products just don’t get built. But if you don’t have a competitive team…well, the company keeps going just fine, thank you. At least for a while.

This means that, in practice, a competitive team has to be more than competent in order to survive. It has to be superb, delivering great value to the company in a visible way, so no one would think of living without it. Just being a service group, delivering good information to clients in the company, is not enough. The group has to solve serious business problems and help close sales. Rather than being a source of competitive information, the group needs to be a source of competitive leadership.

Late one evening in 1989, I entered one of Apple Computer’s office buildings in Silicon Valley. Although Apple called its headquarters a “campus,” it was actually a series of buildings sandwiched between homes and stores over several square miles. The company had rented them haphazardly as it grew.

The building I went to was inconspicuous, two stories tall and tucked behind a screen of trees. It wasn’t the usual place for executive meetings, but an important meeting had been held there earlier in the day.

It was after sunset when I entered the building, and the place was very quiet. The building didn’t house a lot of engineers, so most of the employees had gone home. I went to a darkened conference room, where an IBM personal computer stood in one corner. It was a PS/2 Model 80, a hulking floor-based tower that was the leading edge of PCs at the time. After checking to make sure no one was nearby, I turned on the computer and watched it start up.

It launched a pre-release copy of Microsoft Windows version 3.0. I saw the software come up on the screen, played with it for a couple of minutes, and immediately knew Apple was in deep trouble.

To understand why, you had to know the history of Microsoft up to that time. This was back in the days when PC companies like Apple, Microsoft, Lotus, and WordPerfect viewed one another as peers. The dominant behemoth was IBM, and we were all dancing around them. Microsoft was the clever operating system company that had ridden the IBM standard to prominence, but no one really respected its ability to innovate in applications. Its efforts there were a joke — Microsoft Word was something like the #6 word processor on the PC, and even Microsoft’s software for the Macintosh had numerous competitors, many of which were viewed as technically superior to Microsoft’s products.

Microsoft Windows was the biggest joke of all. Its first two versions had been crude, extremely hard to use, and didn’t excite anyone. In some ways, they probably helped Apple by validating the idea of a graphical interface for a computer, without providing one that was good enough to steal away many customers.

Windows 3.0 changed that. It looked nice. The graphics were pleasant, the icons were reasonably well laid out on the screen, and it worked fairly well. There were still some rough edges, but it was good enough that I could picture a PC user installing it and not being embarrassed a week later. Windows was, for the first time, usable.

For reasons I still don’t know, Microsoft had decided to come down and give a demo of the unreleased software to Apple’s executives. I was managing the company’s competitive analysis department at the time, and as the only people in the company who had IBM PCs, we were asked to provide one for the meeting.

I wasn’t invited to the meeting, for obvious reasons, but I stayed late that night until I was sure it was over. As it turned out, when the Microsoft people left the meeting, they hadn’t erased the software from the PC. Now it was mine. I lifted the very heavy PS/2 tower onto a wheeled chair and rolled it out to my car. We started testing the software the next morning, trying to learn as quickly as we could just in case Microsoft came back and asked us to wipe the hard drive.

They never did.

With a pre-release version of Microsoft’s new product in hand, we were in a good position to prepare Apple for the upcoming competition. And in many ways we did — we documented our competitive advantages, educated the engineers about the improving competition, created marketing collateral, and generally tried to prepare the company for a fight. But the preparation turned out to be harder than I expected, in part because of resistance from above.

Spreading bad news about a competitor can be very disruptive to a company. It distracts employees, causes people to question their current plans, and generally hurts efficiency. The news is especially hard to deliver when a competitor has a history of screwing up, and most of the people in the company don’t use the competitor’s products. It’s seductively easy to rationalize that the competition is going to blow it one more time.

Sure enough, soon after we started raising a red flag about the software, my boss called me into his office. He said we were upsetting too many people, and told me to tone down the message. “After all,” he said, “it’s just another version of Windows.”

Maybe Apple was destined to lose anyway. Apple’s refusal to license its software to other companies meant it couldn’t establish a competing software standard, and its failure to produce new innovations that would make Windows obsolete meant it couldn’t hold onto many of the customers it had. But I think another cause of Apple’s fate was its inability to picture how the world would change. Apple didn’t really understand the minds of PC customers, and couldn’t see how Microsoft’s new software would act on them. And so despite a free preview from Microsoft, Apple never fully rose to the challenge of Windows 3.0, and Microsoft went on to cement its dominance of the PC industry.

By the traditional rules of competitive intelligence, I ought to feel at peace with my role in this. I did everything I could do legally to get advance information, my team and I turned out the best analysis we could, and we reported it as aggressively as we were allowed to. But I think that’s a cop-out. My company screwed up on a competitive issue. Therefore I’m partly to blame.

My experience with Windows taught me the most important rule of competitive analysis — your role is to make sure your company wins competitively. It’s not enough to deliver a great report and then wash your hands of the situation. If the company doesn’t act on your information, you failed.

You need to drive this principle into everything a competitive group does.
__________

Next week: How to organize a competitive analysis team.

[1] For a complete discussion, see the book Competitive Intelligence by Larry Kahaner.
[2] Written by Bill Fiora, principal of Outward Insights, a CI consulting firm.


Replies to This Discussion

This is one of the most useful things I've ever read on the subject. Thanks for posting.

The Apple situation the original author describes is a great case example of how intelligence fails. One of the most common ways intelligence fails is when it is not sufficiently communicated and/or acted upon. The example I often use is Xerox, which is ironic in that Xerox invented many of the technologies that Apple later commercialized (the GUI and the mouse, for example) – and that Microsoft even later “adapted”, as the author describes.

There is no proven business value or ROI for CI in many companies – and no process in place to even try to measure it. As a result, it is expendable. The primary failure of CI to create value is that it does not connect with what the business itself actually does to create value.

In the business world (and government too, I’d hope) there are no points awarded for good intelligence—only for superior results. Whether you have lousy intelligence, or—as in this Apple case—good intelligence without using it, the net result is the same. Non-linkage to business results and value-creation = irrelevance = extinction. The law of the corporate jungle.

And absent tangible, demonstrable results, why should it be otherwise?

But business threats have not withered away, and if anything are potentially more dangerous, faster-moving, and harder to detect than in the 1990s (when I wrote Analyzing Your Competition, one of those “how-to” books the author refers to).

Let’s take Apple (AAPL), since the author uses that example. The iPhone was released in June 2007 with a proprietary SIM chip that in effect would direct part of the phone service revenue stream back to Apple. Within a matter of weeks, the chip was counterfeited by an Eastern European firm—a clear threat to Apple’s revenue model.

Apple is a smart company, and responded quickly by opening up the SDK (developers’ tool kit) to third-party developers. Their counter-strategy seems in part to be: more developers = more applications = more iPhone users = more handset sales and phone service revenues = market pre-emption against rival offerings.

Could they have detected this without some kind of business environment monitoring capability? Probably not as quickly, and in their markets—in all markets, really—speed plays a huge strategic role. Would they have missed it if their intelligence process was just focused on direct rivals like Microsoft, Dell, and Lenovo? Yup.

FULL DISCLOSURE: I bought Apple stock when I learned about this. Not because the iPhone is a “cool” product (which it is), not because I love my iPod and 8-core Mac Pro (which I do)—but because I think that Apple as a company reacts quickly and effectively to its business environment.

I like that in a company.

The more I ponder it, the more I believe that this “strategic adaptability” is the only thing that comes even close to being deemed “sustainable competitive advantage.” That ability defines the essence of what “intelligence” is.

There is an even greater need than ever for an intelligence process to support this—whether or not we call it “CI”. So, please, let’s not throw out the baby with the bathwater. CI 1.0 may be over, but CI 2.0 needs to be created—and it needs to work.

Incidentally, it’s worth going back to the referenced blog “Stop Flying Blind” by Michael Mace. His comments excerpted here make more sense in context. Still, I think he rests his argument too much on conflating “competitive” intelligence with “competitor” intelligence (one, but only one, of the former’s components)—another of the common ways intelligence fails.

Now please stop it—these questions are too interesting, and I can’t get any work done!

Have a great Labor Day weekend!

Dear Craig,

Correct me if I am wrong. I read this article, and in it Michael Mace says: "3. The focus was on intelligence, not analysis."

I think Michael Mace needs to understand that Analysis comes before Intelligence, not the other way around as his point No. 3 header suggests! I think the Competitive Intelligence brotherhood needs to educate him to interpret this correctly.

Tell Michael Mace that Analysis comes before Actionable Intelligence:

1. Collection
2. Analysis
3. Wargaming
4. Scenario Planning
5. Generating Early Warning, i.e., Actionable Intelligence.

Let's not discuss the Holy Grail of Apple.

Their competence is reflected in the fact that what they dismissed as a misfit went on to become "Pixar".

Remember Steve Jobs!

Thanks, Craig, for pasting this article.

Some constructive self-criticism is always good. If I may dare to offer some quick comments on the five points, I would say:

1. Yes, the CI business probably got some extra business from the fear factor after 9-11, but I would not exaggerate this point.
2. I think the CI function has been well defined – responding to the Intelligence Cycle – but there is of course always a problem of overlap with other functions in a company, e.g. the library, strategy/top management, and the marketing department. In military intelligence this is all much simpler. To a certain extent it is a question of which function came first. CI could never really compete with Marketing. Thus we also see a return to “Market Intelligence” and “Marketing Intelligence” as more integrated fields.
3. This point is probably the best; analysis has been severely underestimated, also in research. In my own experience this is what customers want the most from you. The rest they can figure out or understand pretty well themselves. This is a high-knowledge input: it requires training in logic, scientific methodology, and math.
4. “The wrong people”. Well, that is a tricky one, and an argument that can be used in just about every case.
5. That CI is not critical in the short run is of course a major and well-documented disadvantage.

If I could attempt to make some suggestions of my own about what threatens CI, I would say:

6. The development of BI is more and more about software and technical solutions. The technical engineers are to a large extent taking over the problem, and there is little we can do about it except cooperate. Thus you will find more BI contributions at ECIS 2009 than ever before:
http://www.atelis.org/Version_ang/docs/program_ECIS_2009.pdf
7. The inability of SCIP to preserve and develop its theoretical/academic member base. In other words, the separation between professionals and academics became clearer during the last few years, to the disadvantage of both groups.

All the best from

Klaus

I have to correct Mr. Mace here: CI did not fail, but the operative who ought to see through things failed to notice the subtle actions IN THE ENVIRONMENT and AROUND THE ENVIRONMENT.

He himself has accepted this fact:
"By the traditional rules of competitive intelligence, I ought to feel at peace with my role in this. I did everything I could do legally to get advance information, my team and I turned out the best analysis we could, and we reported it as aggressively as we were allowed to. But I think that’s a cop-out. My company screwed up on a competitive issue. Therefore I’m partly to blame."


And also, if the analysed and disseminated information does not undergo Wargaming and Scenario Planning, it will never be possible to arrive at a worst-case analysis, and hence actionable intelligence is out of the question for corporations!
Thinking is man’s only basic virtue, from which all the others proceed. And his basic vice, the source of all his evils, is that nameless act which all of you practice, but struggle never to admit: the act of blanking out, the willful suspension of one’s consciousness, the refusal to think—not blindness, but the refusal to see; not ignorance, but the refusal to know. It is the act of unfocusing your mind and inducing an inner fog to escape the responsibility of judgment—on the unstated premise that a thing will not exist if only you refuse to identify it, that A will not be A so long as you do not pronounce the verdict “It is." - AYN RAND
