Competitive Intelligence

Tactical, Operational & Strategic Analysis of Markets, Competitors & Industries

I've been turning this one over and can't find a satisfactory explanation.

Why don't we see more 'big data' players doing CI consulting? It seems to me that companies like Google, with absurdly comprehensive environmental data, should be doing more to move into the CI world.

Is it a lack of demand, i.e., not enough companies recognize the value of CI?

Are companies hesitant to use external resources?

Do the big data companies not realize the potential of their position?

Or I guess the real question might be:

Is an internal CI function with limited data more effective than externally produced CI with a more holistic understanding of the market?  

This noob questions how it could be.


Replies to This Discussion

Oh, they are doing CI, lots of it, but they call it other things, like data mining, for example. They solve many of the same problems, but from an engineering perspective.


I can answer this one. OK, first, "big data" players usually means HP, Oracle, SAP, and IBM. These firms are extraordinarily good at handling structured data (i.e., numbers) in relational databases, but they don't deal well with unstructured data, i.e., text. Let's just say they are making investments to be able to derive "intelligence" from unstructured data in the areas of NLP, semantic indexing, demographic profiling, indexing/clustering, parsing, etc.
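To make the structured-vs-unstructured distinction concrete, here is a minimal, hypothetical sketch (plain Python, no real vendor API) of the first step those investments target: turning free text into structured term-frequency vectors that can then be compared, clustered, or loaded into the relational tools these firms already excel at. The example documents are invented for illustration.

```python
import math
import re
from collections import Counter

def bow(text):
    """Turn unstructured text into a structured bag-of-words vector."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "quarterly revenue and profit numbers in the database",
    "revenue figures stored in relational database tables",
    "flu outbreak reported by travelers in the region",
]
vecs = [bow(d) for d in docs]

# The two finance-flavored documents score closer to each other
# than either does to the outbreak report.
print(cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]))  # True
```

Real systems layer stemming, TF-IDF weighting, and entity extraction on top, but the core move is the same: unstructured text in, comparable structured features out.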

Now, as far as Google and their efforts: they are indeed involved in providing Strategic Early Warning Systems (SEWS) by leveraging their various technologies. I'll give you one example: InSTEDD, which stands for International Networked System for Total Early Disease Detection. Obviously, the idea here is to predict epidemics, outbreaks, etc., so health organizations can mobilize, business travelers can be alerted to avoid certain locales, and so on. InSTEDD leverages multiple web crawlers, GeoChat SMS, infobots, historical databases, satellite photography, and a system entitled Evolve. Evolve consists of several high-level modules, including: 1) data aggregation and gathering, 2) automatic feature extraction, data classification and tagging, 3) human input, hypothesis generation and review, 4) predictions and alerts output, and 5) field confirmation and feedback.

InSTEDD also pulls from Veratect's system, Veratect being a CI firm of sorts that offers pandemic-type early warning and region-specific surveillance to the corporate, insurance, financial services, NGO, and government sectors in the form of their Foreshadow/PEER View analytical product lines. Their claim to fame is that they were recently credited with first detecting the "swine flu" outbreak, beating the WHO and the CDC. So, yes, Google is in the game.
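The five Evolve modules read naturally as stages in a pipeline. Here is a toy sketch in that spirit; every function name, tag list, and threshold below is a hypothetical illustration, not the real InSTEDD implementation.

```python
def aggregate(sources):
    """1) Data aggregation and gathering: pool raw reports from all feeds."""
    return [report for feed in sources for report in feed]

def extract_features(reports):
    """2) Automatic feature extraction, classification, and tagging."""
    keywords = ("fever", "flu", "outbreak")  # hypothetical tag vocabulary
    return [{"text": r, "tags": [w for w in keywords if w in r]} for r in reports]

def human_review(tagged):
    """3) Human input and review: analysts keep only plausible signals."""
    return [t for t in tagged if t["tags"]]

def predict(signals, threshold=2):
    """4) Predictions and alerts: alert when enough signals cluster."""
    return "ALERT" if len(signals) >= threshold else "monitor"

def field_confirm(status):
    """5) Field confirmation and feedback (stubbed out here)."""
    return {"status": status, "needs_field_team": status == "ALERT"}

feeds = [
    ["fever cases rising in district", "market prices stable"],
    ["school closures after flu reports"],
]
result = field_confirm(predict(human_review(extract_features(aggregate(feeds)))))
print(result["status"])  # ALERT
```

The interesting design point is stage 3: the pipeline deliberately routes machine-tagged output through human analysts before any alert fires, which matches the "human input, hypothesis generation and review" module described above.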

Kind Regards,


PS: Yes, my background is providing intelligence to certain firms' R&D in these arenas.
Sorry to check back in so late!

Thanks for both your comments; I am particularly interested in the Google example.

Can anyone think of other recent examples where a firm that 'owns' the data is shown to have a better understanding of what is actually going on in its business environment, and to successfully guide the 'experts'?
If you have not tried Google Squared, try it -

This is from Google Labs and tries to put structure into the unstructured, in some sense. It doesn't work many times, especially if you use it like a search. Where it does, its tabular arrangement of information seems to have some potential. Of course, at the end of the day it may be only a tool for secondary research in CI. Similarly, there is in the public domain - these sorts of explorations aim to consume a huge amount of information (call it publicly available big data) and find useful content, not just by relevance, but also by organizing it intelligently, to draw out some sort of pattern on the fly. The juxtaposition of a variety of information sources placed into a configurable, familiar structure seems to have an inherent value here. If you have time you could explore - especially their visualizations. Worth a try, IMO.

The big data players I mentioned above deal with secondary research and already-published material. As for 'big data', give it a size: if it is measured in terabytes, then many enterprises are indeed growing their data (internally) at the rate of many terabytes per day. If you take a scenario such as a 'single customer view' for a telecom company, it can be a serious big data challenge. Bring together information from billing, call center, provisioning, etc., and add to that emails, info from scanning blogs, user activities in the public domain, and so on, and it can be huge in data size terms. However, the same big data players above find it difficult to crawl these so-called structured sources, or even the unstructured ones, due to security concerns, governance issues, one hand not talking to the other, and so on. There is the Google Search Appliance, there is Microsoft FAST, and there are specialized players such as Autonomy and Endeca, among others, who try to deal with this challenge of finding relevant information from internal sources. There is also all the data mining activity. None of it is real-time reporting. So drawing insights and finding actionable information that hopefully paints a future scenario, connecting the dots, seems predominantly a task for the human brain. Only, in the case of dealing with big data, a set of good tools can perhaps assist the CI role.
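The 'single customer view' merge at the heart of that scenario can be sketched in a few lines. This is a deliberately naive illustration; the source names, record fields, and customer ID are all invented, and the hard parts (identity resolution across systems, per-field survivorship rules, the governance issues mentioned above) are exactly what the toy version glosses over.

```python
from collections import defaultdict

# Hypothetical per-source extracts, each keyed on a shared customer ID.
billing      = [{"cust_id": 42, "balance": 17.50}]
call_center  = [{"cust_id": 42, "last_call": "2010-06-01", "complaints": 1}]
provisioning = [{"cust_id": 42, "plan": "prepaid"}]

def single_customer_view(*sources):
    """Fold many per-source record lists into one profile per customer."""
    view = defaultdict(dict)
    for source in sources:
        for record in source:
            # Later sources win on field collisions; a real system needs
            # explicit survivorship rules and fuzzy ID matching instead.
            view[record["cust_id"]].update(record)
    return dict(view)

scv = single_customer_view(billing, call_center, provisioning)
print(scv[42]["plan"], scv[42]["complaints"])  # prepaid 1
```

Even this toy shows why it is an integration problem more than a tooling problem: the merge itself is trivial once the sources agree on an identifier, and agreeing on that identifier is where the "one hand not talking to the other" pain lives.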

Caveat: I do not have a CI background and am unable to address your specific question on internal vs. external modes of CI functioning, or to compare their value. Instead, I prefer to view it as a primary vs. secondary research issue on one dimension, with dealing with big data as another; combining these can lead to a more holistic understanding.



© 2024   Created by Arik Johnson.   Powered by
