Six Ways Data Virtualization Can Boost Business Intelligence Agility

Wayne Kernochan

Updated · May 30, 2012

Those who have seen many of my writings know that (a) I believe firmly, based on existing evidence, that increased agility is a net bottom-line benefit for just about any organization, and (b) I have high standards for what constitutes an improvement in agility. So when I say I think that using data virtualization in business intelligence in new ways can deliver increased agility, I am talking about a significant bottom-line impact.

Rather than go over, yet again, my definition of agility, let’s start with my definition of data virtualization. Data virtualization (DV) is the ability to see disparate data stores as one, and to perform real-time data processing, reporting and analytics on the “global” data store. Data virtualization has three key, time-tested capabilities that can help increase business agility in general, and business intelligence agility in particular. They are:

  • Auto-discovery. DV will crawl your existing data stores – or, if instructed, the Web – find new data sources and data types, fit them into an overall spectrum of data types (from unstructured to structured) and represent them abstractly in a metadata repository. In other words, data virtualization is a good place to start proactively searching out key new information as it sprouts outside “organizational boundaries,” because DV tools have the longest experience and the most mature best practices at doing just that.
  • Global contextualization.  Data virtualization’s global metadata repository is not a cure-all for understanding all the relationships between an arriving piece of data and the data already in your data stores, but it does provide the most feasible way of pre-contextualizing the data arriving today.
  • “Database veneer.” This means data virtualization allows end users, applications, and (although this part has not been used much) administrators to act as if all enterprise data stores were one gigantic data store, with near-real-time access to any piece of data; a minimal sketch of this idea follows the list.
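
To make the “database veneer” concrete, here is a minimal sketch in Python, assuming only that a DV layer keeps a catalog of registered sources and answers queries across all of them. The class and method names are illustrative, not any vendor’s actual API:

    class DataVirtualizationLayer:
        def __init__(self):
            self.catalog = {}   # source name -> metadata and (toy) data

        def register_source(self, name, schema, rows):
            # In a real DV product, auto-discovery would populate this;
            # here we register a source by hand, with its schema as metadata.
            self.catalog[name] = {"schema": schema, "rows": rows}

        def query(self, predicate):
            # The "database veneer": one call scans every registered
            # store as if all enterprise data were one global store.
            for name, source in self.catalog.items():
                for row in source["rows"]:
                    if predicate(row):
                        yield name, row

    dv = DataVirtualizationLayer()
    dv.register_source("crm", ["customer", "country"],
                       [{"customer": "Acme", "country": "DE"}])
    dv.register_source("erp", ["customer", "order_total"],
                       [{"customer": "Acme", "order_total": 1200}])

    # One query, answered across both stores.
    for store, row in dv.query(lambda r: r.get("customer") == "Acme"):
        print(store, row)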

Boosting Business Intelligence Agility

Think of your organization’s business intelligence process as having six or seven steps/tasks:

1. Inhale the data into the organization (Input)

2. Relate it to other data, including historical data, so the information in it is better understood (Contextualize)

3. Send it into data stores so it is potentially available across the organization if needed (Globalize)

4. Identify the appropriate people to whom this pre-contextualized information is potentially valuable, now and in the future, and send it as appropriate (Target)

5. Display the information to the user in the proper context, including both corporate imperatives and his/her particular needs (Customize)

6. Support ad-hoc additional end-user analysis, including considerations not in your information system (Plan)

Agile business intelligence, at the least, needs to add a seventh task, Evolve: Constantly seek out new types of data outside the organization and use those new data types to drive changes in the business intelligence information-handling process – and, of course, in the BI “products” and agile BI “targets of opportunity.”
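
As a rough illustration of what adding Evolve means, the sketch below models the process as a pipeline of handlers, where the seventh task can change the pipeline’s own knowledge of data types rather than just processing data. Every name and step body here is an illustrative assumption:

    KNOWN_TYPES = {"structured"}

    def inhale(item):         # 1. Input
        item["received"] = True
        return item

    def contextualize(item):  # 2. Contextualize
        item["context"] = "related to historical data"
        return item

    def globalize(item):      # 3. Globalize
        item["stored_globally"] = True
        return item

    def target(item):         # 4. Target
        item["recipients"] = ["analyst"]
        return item

    def customize(item):      # 5. Customize
        item["view"] = "per-user dashboard"
        return item

    def plan(item):           # 6. Plan
        item["ad_hoc_notes"] = []
        return item

    def evolve(item):         # 7. Evolve: change the process itself
        if item["type"] not in KNOWN_TYPES:
            KNOWN_TYPES.add(item["type"])
            print("evolving pipeline to handle new type:", item["type"])
        return item

    item = {"type": "social-media-post"}
    for step in (inhale, contextualize, globalize, target, customize, plan, evolve):
        item = step(item)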

As you can guess, the most important of those steps in making your organization more agile is step seven, Evolve. But any improvement in the speed and effectiveness of any step/task in this process is an improvement in business intelligence agility, and therefore in the agility of the business dependent on that BI for competitive advantage.

Let’s break down where our three DV capabilities can be applied. Global contextualization is valuable for steps two (contextualize), three (globalize) and four (target). The database veneer is useful for steps three (globalize) and five (customize). Auto-discovery, useful for step one (input), is uniquely invaluable for step seven (evolve).

Six Data Virtualization Action Items

So let’s get specific. Here are six things you can do – or do more of, if you are already doing them:

1) Inventory the places where data is entering your organization. Use DV to set up an “auto-discoverer” to search out new sources, direct them to the appropriate “entry data stores” and automatically add them to the global repository, complete with initial context. Set up alerting mechanisms to send information about “new” data to the appropriate analysts/tools.
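
Here is a minimal sketch of such an auto-discoverer, assuming sources arrive as files in known landing directories and letting a print statement stand in for the real alerting mechanism; the paths and classification rule are assumptions, not prescriptions:

    import os

    ENTRY_POINTS = ["/data/landing"]   # assumed entry points; adjust to yours
    repository = {}                    # stand-in for the global metadata repository

    def classify(name):
        # Crude initial context: place the source on the
        # unstructured-to-structured spectrum by file extension.
        if name.endswith((".csv", ".parquet")):
            return "structured"
        if name.endswith((".json", ".xml")):
            return "semi-structured"
        return "unstructured"

    def alert(source, context):
        # Stand-in for a real alerting mechanism (email, queue, BI tool).
        print(f"new data source {source!r}, initial context: {context}")

    def auto_discover():
        for entry in ENTRY_POINTS:
            if not os.path.isdir(entry):
                continue
            for name in os.listdir(entry):
                if name not in repository:
                    repository[name] = {"entry": entry, "context": classify(name)}
                    alert(name, repository[name]["context"])

    auto_discover()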

2) Use DV to drive master data management that reconciles and keeps consistent different copies of the same data, or data that should be related – e.g., customer transactions in different subsidiaries in different countries – even as new data types arrive.
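
As a toy illustration, the sketch below reconciles two subsidiaries’ copies of the same customer into a single master record. Matching on a shared customer ID is a simplifying assumption; real MDM matching is far fuzzier:

    records = [
        {"store": "subsidiary_us", "customer_id": "C42",
         "name": "Acme Corp.", "country": "US"},
        {"store": "subsidiary_de", "customer_id": "C42",
         "name": "Acme GmbH", "country": "DE"},
    ]

    def reconcile(records):
        masters = {}
        for rec in records:
            master = masters.setdefault(rec["customer_id"],
                                        {"names": set(), "countries": set()})
            master["names"].add(rec["name"])
            master["countries"].add(rec["country"])
        return masters

    # One master record now represents both subsidiaries' copies.
    print(reconcile(records))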

3) Use embedded business intelligence to detect patterns in how decision-makers use data, and use those patterns to define and evolve decision-maker alerting and the “database veneer” representing the subset of company data relevant to each decision-maker’s needs.
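
One rough way to do this is to mine the DV layer’s query logs for usage patterns, as in this sketch; the log format and the “most-used columns” heuristic are illustrative assumptions:

    from collections import Counter

    # Assumed log format: (user, columns touched by one query).
    query_log = [
        ("cfo", ["revenue", "region"]),
        ("cfo", ["revenue", "quarter"]),
        ("cfo", ["headcount"]),
    ]

    def veneer_for(user, log, top_n=2):
        counts = Counter()
        for who, columns in log:
            if who == user:
                counts.update(columns)
        # The most-used columns become this user's slice of the veneer,
        # and candidates for alerting when their values change.
        return [column for column, _ in counts.most_common(top_n)]

    print(veneer_for("cfo", query_log))   # e.g. ['revenue', 'region']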

4) Use DV to prioritize delivery of new Web data, which represents the data that the decision-maker is least likely to know and therefore the data most crucial to finding insights unknown to competitors.
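
A sketch of one possible prioritization rule, assuming arriving items carry a source and a topic, and that novelty can be checked against topics already in the repository; the scoring weights are invented for illustration:

    seen_topics = {"quarterly sales", "inventory"}   # topics already in-house

    arriving = [
        {"topic": "quarterly sales", "source": "erp"},
        {"topic": "competitor product recall", "source": "web"},
    ]

    def priority(item):
        score = 0
        if item["source"] == "web":
            score += 1   # external data the decision-maker is least likely to know
        if item["topic"] not in seen_topics:
            score += 2   # novelty is where competitor-unknown insight hides
        return score

    for item in sorted(arriving, key=priority, reverse=True):
        print(priority(item), item["topic"])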

5) Use the DV’s “database veneer” to allow faster development of agile business intelligence products without the need to deal with the bottleneck of siloed IT data-store administration.

6) Use the DV’s ability to identify connections between data to enhance ad-hoc “exploratory” analysis that chases these connections through multiple links across data stores and even across organizational boundaries (e.g., combining spreadsheet analysis with Google search for competitors’ information).
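
Such connection-chasing can be sketched as a breadth-first traversal over relationships recorded in the DV metadata repository; the link data below is invented for illustration, and a real implementation would discover these links rather than hard-code them:

    from collections import deque

    # Relationships the DV layer has discovered, keyed by (entity, store).
    links = {
        ("Acme", "crm"): [("Acme", "erp"), ("Acme recall story", "web")],
        ("Acme", "erp"): [("order 188", "erp")],
    }

    def chase(start, max_hops=3):
        # Breadth-first traversal of cross-store connections.
        seen, queue = {start}, deque([(start, 0)])
        while queue:
            node, hops = queue.popleft()
            yield node, hops
            if hops < max_hops:
                for nxt in links.get(node, []):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, hops + 1))

    for (entity, store), hops in chase(("Acme", "crm")):
        print(hops, store, entity)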

One final point: what makes all these tactics more agile is that they evolve semi-automatically, instead of requiring constant manual updates. As I said, I have high standards for agility. If you don’t take advantage of DV’s ability to help you evolve your data, your improvements to your business’s agility will be far smaller – and far fewer of agility’s benefits will materialize.

Wayne Kernochan is the president of Infostructure Associates, an affiliate of Valley View Ventures that aims to identify ways for businesses to leverage information for innovation and competitive advantage. Wayne has been an IT industry analyst for 22 years. During that time, he has focused on analytics, databases, development tools and middleware, and ways to measure their effectiveness, such as TCO, ROI, and agility measures. He has worked for firms such as Yankee Group, Aberdeen Group and Illuminata, and has helped craft marketing strategies based on competitive intelligence for vendors ranging from Progress Software to IBM.
