From Business Intelligence to Intelligent Data Exploration by Sylvain Pavlowski - BeyeNETWORK Netherlands

Channel: Return-On-Intelligence - Jorgen Heizenberg


From Business Intelligence to Intelligent Data Exploration

Originally published 4 May 2009

Phase 1: Uncompromising Control

Most business intelligence (BI) tools have led companies to focus only on the “business” component of their solutions, and as a consequence many of their customers have missed out on realizing the full benefit of the “intelligence” part of their potential.

In fact, the deployment of BI 1.0 infrastructures entailed the creation of centralized data warehouses to allow information collected from various business applications to be organized and aggregated. The differences and complexity of the underlying infrastructure involved in achieving this made it necessary to create a semantic application layer. The purpose of this layer was to break down the complex structure of the data warehouse in order to make data available to certain users in a simplified format. This costly process required huge investments and remained the exclusive domain of IT departments and not the front-line business users.

In addition, creating data warehouses and storing this information in relational databases could only be achieved by aggregating the data for predefined reporting purposes. Formatting the information involved structuring the relational database schemas in order to define the dimensional relationships among the data.

This strategic vision of “everything centralized” was realized through application convergence (and the implementation of ERP systems) or the deployment of SOA architectures (a software layer for inter-application dialogue), with the result being the same – all data had to be consolidated in a single enterprise-wide data warehouse. Information about a customer, for example, was compiled from several applications to supply the data warehouse via extract, transform and load (ETL) interfaces. BI tools then extracted the data via SQL queries in order to format it in static reports. The reports were then sent to users in “push mode” since this type of infrastructure does not give users the freedom to initiate hundreds or even thousands of simultaneous queries.
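The ETL-to-static-report pipeline described above can be sketched in a few lines. This is a minimal illustration of the pattern, not any vendor's implementation; the source systems, table names and figures are hypothetical.

```python
import sqlite3

# Hypothetical source records from two business applications
# (a CRM and an order system; the data is made up).
crm_rows = [("C001", "Acme Corp", "Amsterdam")]
orders_rows = [("C001", 120000.0)]

# "Data warehouse": a single consolidated store fed from several applications.
wh = sqlite3.connect(":memory:")
wh.execute(
    "CREATE TABLE dw_customer (id TEXT PRIMARY KEY, name TEXT, city TEXT, revenue REAL)"
)

# ETL step: extract from each source, transform (join on customer id), load.
revenue_by_customer = dict(orders_rows)
for cid, name, city in crm_rows:
    wh.execute(
        "INSERT INTO dw_customer VALUES (?, ?, ?, ?)",
        (cid, name, city, revenue_by_customer.get(cid, 0.0)),
    )

# BI 1.0 reporting: a predefined SQL query produces a static report
# that is then pushed to users, rather than queried ad hoc by them.
report = wh.execute(
    "SELECT name, city, revenue FROM dw_customer ORDER BY revenue DESC"
).fetchall()
print(report)
```

The defining constraint of this model is that the report query is fixed in advance: users consume its output but cannot change it.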

The “Everything Centralized” Approach Sparked a User Rebellion

What at first seemed like a good idea became a nightmare for users, who were dependent on the IT department for any new reports, unable to add their own data to the analysis and unable to perform ad hoc queries. Only a handful of “super users” could retrieve data by using specific parameters provided in advance. In this type of arrangement, users had no freedom to perform predictive analytics since the data could only be used for retrospective analysis.
Therefore, deploying BI 1.0 tools actually resulted in a loss of knowledge, since users were limited to working on aggregated data available only in push mode via the enterprise-wide integration of a centralized reporting application. It is easy to understand why users, in response to this situation, sought tools that enabled them to work more efficiently and independently.

Phase 2: From Decentralization to Decision-Making Chaos

While IT departments built centralized infrastructures, limiting business users to what IT could deliver, freedom-seeking users found a way to liberate themselves. Prompted by changes in industrial processes, shorter time-to-market, globalization of offerings, regulatory pressures and even reductions in management levels, users were sorely lacking in decision-support tools.

As a natural response to the lack of flexibility on the part of BI 1.0 tools, and without the necessary infrastructure to support them, users turned to client tools. At many organizations, managers rely on spreadsheet-type client tools to consolidate their operating data and make day-to-day decisions.
This created a serious problem because it meant that decision-making processes were based on the exchange of spreadsheets in which some data was entered manually; and, more importantly, users were relying on data that differed from one user to another and did not come from the central data warehouse. Moreover, it created a siloed view within the company, where each department made its own decisions without involving the business’s other departments or checking its choices with them.

Of course, these tools have made some of the promise of BI available to everyone, but at the expense of a total loss of control of the information by IT departments, both in terms of data quality and the decision-making process itself. This is a recipe for decision-making chaos.

The Third Option

With the advent of “enterprise analytics” solutions, all those involved in the decision-making process are able to work together.
  • Returning centralized data control to the IT department is crucial to promoting and ensuring the quality of decision-making processes as well as data consistency.

    Users of solutions from various vendors can extract data via a user-friendly interface accessible to everyone. The creation of aggregates calculated on the fly makes it possible to eliminate the semantic data abstraction layer in favor of business data views.

    These views allow users to extract any data from the data warehouse without the need for predefined dimensions by leveraging “free dimensional” analysis.

  • Because of the solution’s flexibility, users with any degree of sophistication can create the analyses they need to do their job, using central data yet without the IT department’s involvement. This process guarantees the quality and reliability of the result. In the same way, users can include external data in their analyses (geophysical information, external studies, satellite data, and information from pending operations not yet included in the data warehouse), thereby completing their analyses quickly and effectively.

  • When data is imported into one of these solutions, all the columns of the tables are converted into filters (the equivalent of dimensions in traditional BI tools). As a result, there are as many dimensions as there are columns. For example, the “quantity ordered” column will be displayed as a filter, and the associated values in the rows will automatically be converted into upper and lower limits. All of this happens without the intervention of a specialist.

  • In addition to the static reporting application, users can look for the data they need, at any time, and import it to their client workstation (in memory). This vector database technology offers users a “pull solution” in the decision-making process without the need for predefined dimensions. By having the capability to include any other data source on the fly, users go from mere reporting to predictive analytics.

  • Since the solutions offer a set of APIs, data from multiple sources, including the Web, can now be integrated into analyses. This means that a graphic resulting from an analysis can be integrated into data from Web 2.0 applications on the same page. Thanks to mashup technology, it is possible to make a Web page and a graphic interdependent and to have them interact.
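The column-to-filter conversion described in the list above can be sketched as follows. This is a toy illustration of the idea, assuming hypothetical data and column names; real products implement it in far more sophisticated ways.

```python
# Hypothetical imported table: every column will become a filter
# (the equivalent of a dimension in traditional BI tools).
rows = [
    {"product": "A", "region": "North", "quantity_ordered": 10},
    {"product": "B", "region": "South", "quantity_ordered": 40},
    {"product": "A", "region": "South", "quantity_ordered": 25},
]

def build_filters(rows):
    """Turn each column into a filter: numeric columns get automatic
    lower/upper limits, text columns become a list of selectable values."""
    filters = {}
    for col in rows[0]:
        values = [r[col] for r in rows]
        if all(isinstance(v, (int, float)) for v in values):
            filters[col] = {"type": "range", "lower": min(values), "upper": max(values)}
        else:
            filters[col] = {"type": "category", "values": sorted(set(values))}
    return filters

filters = build_filters(rows)
print(filters["quantity_ordered"])  # {'type': 'range', 'lower': 10, 'upper': 40}
```

The point is that no specialist has to predefine dimensions: the structure of the imported table alone determines which filters exist and what their limits are.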

Phase 3: Re-Take Control of Data While Giving Business Users the Freedom to Handle Information

As businesses are forced to shorten their decision-making processes, it is critical to re-think infrastructures in order to give users the flexibility and tools needed for greater adaptability and faster analysis.

BI 1.0 tools have allowed institutional reporting and data consolidation applications to be deployed, but their lack of flexibility has made them cumbersome at best and inaccessible to most users at worst.

IT departments have numerous responsibilities and setting up a service dedicated to creating or modifying reports is not considered a strategic mission. Moreover, users now have enough knowledge of their data and computer skills to be able to act independently.

Business users already have incredible processing power right on their desktops. Since all that horsepower is available, it should be put to work! The use of “in-memory” databases gives users the flexibility to manage data locally without affecting the enterprise data warehouse’s response time.
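The local, in-memory style of working described above can be illustrated with a small sketch: a subset of data lives in an in-memory database on the user's workstation, and ad hoc aggregations run there instead of against the central warehouse. The figures and table name are hypothetical.

```python
import sqlite3

# A local in-memory database on the user's workstation: ad hoc queries
# run here and add no load to the enterprise data warehouse.
local = sqlite3.connect(":memory:")
local.execute("CREATE TABLE sales (region TEXT, amount REAL)")
local.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("North", 100.0), ("South", 250.0), ("North", 50.0)],
)

# The user can slice and aggregate the data freely, on the fly.
totals = dict(
    local.execute("SELECT region, SUM(amount) FROM sales GROUP BY region")
)
print(totals)  # {'North': 150.0, 'South': 250.0}
```

Because the data sits in local memory, each new question costs the user a query on their own machine rather than a request to the IT department or a hit on the shared warehouse.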

Today, new predictive analytics and data exploration tools are the only alternative that enables IT departments to re-take control of enterprise-wide data while at the same time giving users the flexibility they demand.

Allowing both worlds – traditional reporting applications and predictive analytics and intelligent data exploration tools – to coexist is without a doubt the best solution to address these issues today. When the people with firsthand knowledge of the data are the ones performing the analysis, it leads to better outcomes. Quicker and more effective analysis of information means better decisions, more quickly – which is a winning proposition.

SOURCE: From Business Intelligence to Intelligent Data Exploration

  • Sylvain Pavlowski
Sylvain is an IT veteran with more than twenty years of industry experience, which is vital to his role as VP of EMEA Sales at TIBCO Spotfire, a position he has held since 2007.

Prior to this, Sylvain was VP EMEA at UNICA, where he was responsible for building and launching new territories and building the partner network across EMEA. Before that, he was CEO of Digital Peach SAS until September 2005.


