SAP Products and the DW Quadrant

In a recent blog, I introduced the Data Warehousing Quadrant, a problem description for a data platform that is used for analytic purposes. Traditionally, such a platform is called a data warehouse (DW), but labels such as data mart, big data platform, data hub etc. are also used. In this blog, I will map some of the SAP products into that quadrant, which will hopefully yield a more consistent picture of the SAP strategy.

To recap: the DW quadrant has two dimensions. One indicates the challenges regarding data volume, performance, query and loading throughput and the like. The other one shows the complexity of the modeling on top of the data layer(s). A good proxy for that complexity is the number of tables, views, data sources, load processes, transformations etc. Big numbers indicate many dependencies between all those objects and, thus, high effort when things get changed, removed or added. But it is not only the effort: there is also a higher risk of accidentally changing, for example, the semantics of a KPI. Figure 1 shows the space outlined by the two dimensions. That space is then divided into four subcategories: the data marts, the very large data warehouses (VLDWs), the enterprise data warehouses (EDWs) and the big data warehouses (BDWs).

Figure 1: The DW quadrant.

Now, there are several SAP products that are relevant to the problem space outlined by the DW quadrant. Some observers (customers, analysts, partners, colleagues) would like SAP to provide a single answer or a single product for that problem space. Fundamentally, that answer is HANA. However, HANA is a modern RDBMS; a DW requires tooling on top. So, something more is required than just HANA. Figure 2 assigns SAP products / bundles to the respective subquadrants. The assignment is meant as a “flexible rule of thumb” rather than a hard mapping. For example, BW/4HANA can play a role in more than just the EDW subquadrant. We will discuss this below. Still, it becomes clear where the sweet spots or focus areas of the respective products lie.

Figure 2: SAP products assigned to subquadrants.

From a technical and architectural perspective, there are many relationships between those SAP products. For example, operational analytics in S/4 heavily leverages the BW embedded inside S/4. Another example is BW/4HANA’s ability to combine with any SQL object, such as SQL-accessible tables, views, procedures / scripts. This allows smooth transitions or extensions of an existing system into one or the other direction of the quadrant. Figure 3 indicates such transitions and extension options:

  1. Data Mart → VLDW: This is probably the most straightforward path as HANA has all the capabilities for scale-up and scale-out to move along the performance dimension. All products listed in the data mart subquadrant can be extended using SQL based modeling.

  2. Data Mart → EDW: S/4 uses BW’s analytic engine to report on CDS objects. Similarly, BW/4HANA can consume CDS views either via the query or in many cases also for extraction purposes. Native HANA data marts combine with BW/4HANA similarly to the HANA SQL DW (see 3.).

  3. VLDW ⇆ EDW: Here again, I refer you to the blog describing how BW/4HANA can combine with native SQL. This allows BW/4HANA to be complemented with native SQL modeling and vice versa!

  4. VLDW or EDW → BDW: Modern data warehouses incorporate unstructured and semi-structured data that gets preprocessed in distributed file or NoSQL systems that are connected to a traditional (structured), RDBMS based data warehouse. The HANA platform and BW/4HANA will address such scenarios. Watch out for announcements around SAPPHIRE NOW 😀

Figure 3: Transition and extension options.

The possibility to evolve an existing system – located somewhere in the space of the DW quadrant – to address new and/or additional scenarios, i.e. to move along one or both dimensions, is an extremely important and valuable asset. Data warehouses do not remain static; they are permanently evolving. This means that investments are secure and so is the ROI.

This blog has also been published here. You can follow me on Twitter via @tfxz.

The Data Warehousing Quadrant

A good understanding or a good description of a problem is a prerequisite to finding a solution. This blog presents such a problem description, namely for a data platform that is used for analytic purposes. Traditionally, this is called a data warehouse (DW), but labels such as data mart, big data platform, data hub etc. are also used in this context. I’ve named this problem description the Data Warehousing Quadrant. An initial version has been shown in this blog. Since then, I’ve used it in many meetings with customers, partners, analysts, colleagues and students. It has the nice effect that it makes people think about their own data platform (problem) as they try to locate where they are and where they want to go. This is extremely helpful as it triggers the right dialog. Only if you work on the right questions will you find the right answers. Or put the other way around: if you start with the wrong questions – a situation that occurs far more often than you’d expect – then you are unlikely to find the right answers.

The Data Warehousing Quadrant (Fig. 1) has two problem dimensions that are independent from each other:

  1. Data Volume: This is a technical dimension which comprises all sorts of challenges caused by data volume and/or significant performance requirements such as: query performance, ETL or ELT performance, throughput, high number of users, huge data volumes, load balancing etc. This dimension is reflected on the vertical axis in fig. 1.

  2. Model Complexity: This reflects the challenges triggered by the semantics, the data models, and the transformation and load processes in the system. The more data sources are connected to the DW, the more data models, tables and processes exist. So, the number of tables, views and connected sources is probably a good proxy for the complexity of modeling inside the DW. Why is this complexity relevant? The lower it is, the less governance is required in the system. The more tables, models and processes there are, the more dependencies exist between all those objects and the more difficult it becomes to manage those dependencies whenever something (like a column of a table) needs to be added, changed or removed. This is the day-to-day management of the “life” of a DW system. This dimension is reflected on the horizontal axis in fig. 1.

Figure 1: The DW quadrant.

Now, these two dimensions create a space that can be divided into four (sub-) quadrants which we discuss in the following:

Bottom-Left: Data Marts

Here, the typical scenarios are, for example,

  • a departmental data mart, e.g. a marketing department sets up a small, maybe even open-source-based RDBMS system and creates a few tables that help to track a marketing campaign. Those tables hold data on customers that were approached, their reactions or answers to questionnaires, addresses etc. SQL or other views allow some basic evaluations (a minimal sketch of such a mart follows after this list). After a few weeks, the marketing campaign ends, hardly any or no data gets added, and the data, the underlying tables and views slowly “die” as they are not used anymore. Probably, one or two colleagues are sufficient to handle the system, both setting it up and creating the tables and views. They know the data model intimately, data volume is manageable and change management is hardly relevant as the data model is either simple (thus changes are simple) or has a limited lifespan (≈ the duration of the marketing campaign).

  • an operational data mart, i.e. the data that is managed by a certain operational application as found, e.g., in an ERP, CRM or SRM system. Here, tables and data are given, and data consistency is managed by the related application. There is no requirement to involve additional data from other sources as the nature of the analyses is limited to the data sitting in that system. Typically, data volumes and the number of relevant tables are limited and do not constitute a real challenge.
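
To make the departmental scenario a bit more tangible, here is a minimal sketch of what such a campaign mart could look like. All table, column and view names are made up for illustration; any open-source RDBMS would do.

```sql
-- Hypothetical campaign data mart: two tables plus a view for basic evaluations.
CREATE TABLE CAMPAIGN_CONTACTS (
    CONTACT_ID   INTEGER PRIMARY KEY,
    NAME         VARCHAR(100),
    CITY         VARCHAR(100),
    CONTACTED_ON DATE
);

CREATE TABLE CAMPAIGN_RESPONSES (
    CONTACT_ID   INTEGER,
    RESPONDED_ON DATE,
    ANSWER       VARCHAR(20)       -- e.g. 'interested', 'not interested'
);

-- A simple view for the basic evaluations mentioned above,
-- e.g. the response rate per city.
CREATE VIEW V_RESPONSE_RATE AS
    SELECT c.CITY,
           COUNT(DISTINCT r.CONTACT_ID) AS RESPONSES,
           COUNT(DISTINCT c.CONTACT_ID) AS CONTACTS
    FROM CAMPAIGN_CONTACTS c
    LEFT OUTER JOIN CAMPAIGN_RESPONSES r
           ON r.CONTACT_ID = c.CONTACT_ID
    GROUP BY c.CITY;
```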

Top-Left: Very Large Data Warehouses (VLDWs)

Here, a typical situation is that there is a small number of business processes – each one supported via an operational RDBMS – with at least one of them producing huge amounts of data. Imagine the sales orders submitted via Amazon’s website: this article cites 426 items ordered per second on Cyber Monday in 2013. Now, the model complexity is comparatively low as only a few business processes, and thus only a few tables (describing those processes), are involved. However, the major challenges originate in the sheer volume of data produced by at least one of those processes. Consequently, topics such as DB partitioning, indexing, other tuning, scale-out and parallel processing are dominant, while managing the data models or their lifecycles is fairly straightforward.
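
As an illustration of the volume-driven techniques just mentioned, a very large fact table is typically partitioned so that data and query workload are spread across several partitions or hosts. The following is only a sketch in HANA-style SQL with made-up names, not a recommendation for a concrete layout.

```sql
-- Illustrative only: hash-partitioning a very large order table so that
-- data and query workload are distributed across partitions / hosts.
CREATE COLUMN TABLE SALES_ORDERS (
    ORDER_ID    BIGINT,
    ORDER_DATE  DATE,
    CUSTOMER_ID INTEGER,
    AMOUNT      DECIMAL(17,2),
    PRIMARY KEY (ORDER_ID)
)
PARTITION BY HASH (ORDER_ID) PARTITIONS 4;
```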

Bottom-Right: Enterprise Data Warehouses (EDWs)

When we talk about enterprises, we look at a whole set of underlying business processes: financial, HR, CRM, supply chain, orders, deliveries, billing etc. Each of these processes is typically supported by some operational system which has a related DB in which it stores the data describing the ongoing activities within the respective process. There are natural dependencies and relationships between those processes – e.g. there has to be an order before something is delivered or billed – so it makes sense for business analysts to explore and analyse those business processes not only in isolation but also to look at those dependencies and overlaps. Everyone understands that orders might be hampered if the supply chain is not running well. In order to underline this with facts, the data from the supply chain and the order systems needs to be related and combined to see the mutual impacts.

Data warehouses that cover a large set of business processes within an enterprise are therefore called enterprise data warehouses (EDWs). Their characteristic is the large set of data sources (reflecting the business processes) which, in turn, translates into a large number of (relational) tables. A lot of work is required to cleanse and harmonise the data in those tables. In addition, the dependencies between the business processes and their underlying data are reflected in the semantic modeling on top of those tables. Overall, a lot of knowledge and IP goes into building up an EDW. This makes it sometimes expensive but also extremely valuable.

An EDW does not remain static. It gets changed and adjusted, new sources get added, some models get refined. Changes in the day-to-day business – e.g. changes in a company’s org structure – translate into changes in the EDW. This, by the way, applies to the other DW types mentioned above, too. However, the lifecycle is more prominent with EDWs than in the other cases. In other words: here, the challenges posed by the model complexity dimension dominate the life of an EDW.

Top-Right: Big Data Warehouses (BDWs)

Finally, there is the top-right quadrant which starts to become relevant with the advent of big data. Please be aware that “big data” refers not only to data volumes but also to incorporating types of data that have not been used much so far. Examples are

  • videos + images,
  • free text from email or social networks,
  • complex log and sensor data.

This requires additional technologies, which are currently surging in the wider environment of Hadoop, Spark and the like. Those infrastructures are used to complement traditional DWs to form BDWs, aka modern data warehouses, aka big data hubs (BDHs). Basically, those BDWs see challenges from both dimensions, data volume and modeling complexity. The latter is augmented by the fact that models might span various processing and data layers, e.g. Hadoop + RDBMS.

How To Use The DW Quadrant?

Now, how can the DW quadrant help? I have introduced it to various customers and analysts and it made them think. They always start mapping their respective problems or perspectives to the space outlined by the quadrant. It is useful for explaining a situation and potential plans for how to evolve a system. Here are two examples:

SAP addresses those two dimensions, or the forces that push along them, via various products: SAP HANA and VORA for the data volume and performance challenges, while BW/4HANA and the tooling for BDH help along the complexity dimension. Obviously, the combination of those products is then well suited to address the cases of big data warehouses.

An additional aspect is that no system is static but evolves over time. In terms of the DW quadrant, this means that you might start bottom-left as a data mart and then grow along one or the other or both dimensions. These dynamics can force you to change tooling and technologies. E.g. you might start as a data mart using an open source RDBMS (MySQL et al.) and Emacs (for editing SQL). Over time, data volumes grow – which might require switching to a more scalable and advanced commercial RDBMS product – and/or sources and models are added, which requires a development environment for models with a repository, SQL-generating graphical editors etc. Power Designer or BW/4HANA are examples of the latter.

This blog can also be found on SAPHANA.com and on Linkedin. You can follow me on Twitter via @tfxz.

#BW4HANA and a SQL-Based DW Hand-in-Hand

This blog looks at one of BW/4HANA’s biggest strengths, namely its ability to embrace both (1) a guided or managed approach – using the highly integrated BW or BW/4 based tools and editors – and (2) a freestyle or SQL-oriented one – as prevalent in many handcrafted data warehouses (DWs) based on some relational database (RDBMS). And it is not restricted to running those approaches side-by-side! They can also be combined in many ways, which allows you to tap into the best of both worlds. For instance, data can be loaded into an arbitrary table using basic SQL capabilities; that table can then be exposed to BW/4HANA as if it were an infoprovider and secured via BW/4HANA’s rich set of security features.

In fact, many SAP customers have one or more BW systems for (1) and one or more DW systems for (2). Those systems depend on each other as data is copied from one to the other so that each system can provide a coherent view on the data. Keeping such a system landscape in sync is not only a technical challenge. Often, separate IT teams own the respective systems. There exists a natural rivalry; they compete for resources, ownership, who has the better SLAs, whose requirements get precedence in situations that affect both teams or systems, and so on. Fig. 1 shows that situation.


Fig. 1: Typical customer landscape with a Business Warehouse (BW) and a SQL-based data warehouse side-by-side.

The reason for the organisational and technical separation shown in fig. 1 typically lies in the perception that approaches (1) and (2) are mutually exclusive and, thus, ought to be separated. This has become a common perception and practice. Now, as mentioned above, BW/4HANA offers not only the coexistence of (1) and (2) in one single system but also synergetic combinations of the two – see figure 2.

Fig. 2: BW/4HANA combines the best of both worlds in one and the same system.

Examples for synergies between (1) and (2) – the frequently cited mixed scenarios – have been documented in various presentations, webinars, blogs and the like, sometimes still in the context of BW-on-HANA but all of that is even more applicable now to BW/4HANA as the latter has seen a number of enhancements. Here is a non-exhaustive list of material:

In a simplified way or as a summary, there are the following options:

  1. SQL → BW/4HANA: Any SQL-consumable table or view can be incorporated into BW/4HANA, e.g. augmented with BW/4HANA-based semantics (like currency logic) or infrastructure (like BW/4HANA-defined security); see the sketch after this list.
  2. BW/4HANA → SQL: Most of the BW/4HANA based data objects (i.e. infoproviders but also BW queries) can be exposed as SQL-consumable views, potentially with a loss of some semantics.
  3. BW/4HANA ⇄ SQL: There are a number of “exit options” that allow you to add SQL, SQLScript, R or any other HANA-supported code to BW/4HANA processing. The most popular place is the HANA Analysis Process (HAP) in BW/4HANA.
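
As a minimal sketch of option 1, the listing below creates and fills a plain table with standard SQL; such a table (or a view on top of it) can then be registered in BW/4HANA and enriched with BW/4HANA semantics and security. All names are made up, and the BW/4HANA-side configuration itself happens in the modeling tools, not in SQL.

```sql
-- Option 1 (SQL -> BW/4HANA), sketched with made-up names:
-- a table is created and loaded with plain SQL ...
CREATE COLUMN TABLE SALES_EXT_ORDERS_RAW (
    ORDER_ID   NVARCHAR(10),
    ORDER_DATE DATE,
    CURRENCY   NVARCHAR(5),
    AMOUNT     DECIMAL(17,2)
);

INSERT INTO SALES_EXT_ORDERS_RAW VALUES ('4711', '2017-05-02', 'USD', 199.00);

-- ... and a SQL view exposes it in the shape that BW/4HANA should see.
-- Registering this view in BW/4HANA (and defining security on top) is then
-- done in the BW/4HANA modeling tools.
CREATE VIEW V_SALES_EXT_ORDERS AS
    SELECT ORDER_ID, ORDER_DATE, CURRENCY, AMOUNT
    FROM SALES_EXT_ORDERS_RAW;
```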

This blog can also be found on SCN and on SAP HANA. You can follow me on Twitter via @tfxz.

Native DSO in #HANADW

There is an excellent series of short videos that introduce the native data store object (NDSO) for HANA. The NDSO can be considered a more intelligent table that, in particular, allows deltas to be captured. This is especially useful when data is regularly loaded and transformed or cleansed afterwards: rather than going through the complete data set in the table, one can focus on the changes since the last transformation or cleansing happened. This reduces the amount of data that needs to be processed and, thus, increases the throughput / performance of the process. Frequently, the effect is significant. The DSO concept originated in SAP’s Business Warehouse (BW) and has evolved into the more versatile and powerful advanced DSO (ADSO) in BW/4HANA.
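
The listing below is not the NDSO implementation itself but a minimal sketch of the underlying delta idea, with made-up table names: changes are collected in a separate change log, and downstream processing only touches that change log rather than the full active table.

```sql
-- Minimal sketch of the delta principle (made-up names; not the actual NDSO):
-- the active data and a change log are kept apart.
CREATE COLUMN TABLE CUSTOMER_ACTIVE (
    CUSTOMER_ID NVARCHAR(10) PRIMARY KEY,
    SEGMENT     NVARCHAR(20),
    REVENUE     DECIMAL(17,2)
);

CREATE COLUMN TABLE CUSTOMER_CHANGELOG (
    CUSTOMER_ID NVARCHAR(10),
    SEGMENT     NVARCHAR(20),
    REVENUE     DECIMAL(17,2),
    CHANGE_MODE NVARCHAR(1)   -- e.g. 'U' = insert/update, 'D' = delete
);

-- A downstream transformation only has to process the change log,
-- not the complete active table:
UPSERT CUSTOMER_ACTIVE (CUSTOMER_ID, SEGMENT, REVENUE)
    SELECT CUSTOMER_ID, SEGMENT, REVENUE
    FROM CUSTOMER_CHANGELOG
    WHERE CHANGE_MODE = 'U';

DELETE FROM CUSTOMER_ACTIVE
    WHERE CUSTOMER_ID IN (SELECT CUSTOMER_ID
                          FROM CUSTOMER_CHANGELOG
                          WHERE CHANGE_MODE = 'D');
```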

Here are 4 videos as an introduction to the NDSO:

There are more videos on the HANA DWF features in this list.

You can follow me on Twitter via @tfxz.

Upcoming #BW4HANA Webcasts

Here is a list of ASUG webcasts covering topics around BW/4HANA; click on title for registration:

For a complete list of ASUG BI webcasts look here.

Technical Summary of #BW4HANA

[This can be considered as an extended version of the introductory blog What is #BW4HANA?]

Overview

BW/4HANA is a data warehousing application sitting on top of HANA as the underlying DBMS. A data warehouse (DW) is designed specifically to be a central repository for all data in a company. This well-structured data traditionally originates from transactional systems, ERP, CRM, and LOB applications. Each individual system is consistent, whereas the union of the systems and the underlying data is not. This is why disparate data from those systems has to be harmonized – that is, extracted, transformed, loaded (ETL) or logically exposed (federated) – into the warehouse within a single relational schema. The predictable data structure (of such a schema) optimizes processing and reporting.

BW/4HANA allows a DW architecture to be defined via high-level building blocks, almost like Lego bricks. Out of this model, a set of tables, views and other relational objects is generated. BW/4HANA manages the lifecycle of those tables and views, e.g. when columns are added or removed. It also manages the relationships between the tables. For example, it asserts referential integrity, which is extremely beneficial for query processing as it avoids the use of outer joins, whose performance is, in general, far inferior to that of inner joins. BW/4HANA not only manages the lifecycle of tables, views etc. but also the lifecycle of the data sitting in those tables or being exposed by the views. Data typically enters a DW in its original format but then gets harmonized with data from other systems. For legal compliance and other reasons, it is usually important to track the data in the DW, i.e. how it “travels” from its entry into the DW to its exposure to the end users. In many cases, data is retained for a certain period in an active (hot data) layer in the DW until it is moved to less expensive media outside of HANA, e.g. nearline storage (NLS) in IQ or Hadoop. Still, BW/4HANA provides online access to that data, albeit at a small performance penalty.
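
A small, purely illustrative example (made-up tables) of why asserted referential integrity pays off at query time:

```sql
-- If the DW layer guarantees that every fact row has a matching dimension row,
-- the generated query can use an inner join ...
SELECT d.REGION, SUM(f.AMOUNT) AS AMOUNT
FROM FACT_ORDERS f
INNER JOIN DIM_CUSTOMER d ON d.CUSTOMER_ID = f.CUSTOMER_ID
GROUP BY d.REGION;

-- ... whereas without that guarantee a left outer join would be needed to avoid
-- losing fact rows, which is generally more expensive to process:
SELECT d.REGION, SUM(f.AMOUNT) AS AMOUNT
FROM FACT_ORDERS f
LEFT OUTER JOIN DIM_CUSTOMER d ON d.CUSTOMER_ID = f.CUSTOMER_ID
GROUP BY d.REGION;
```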

On top of its data management layer, BW/4HANA also provides an analytic layer with an analytic manager at its core. The latter is a unique asset as it differs from traditional OLAP engines in that it refrains from processing data itself (i.e. in ABAP) and instead compiles query execution graphs that are sent to HANA’s execution engines, mainly the calculation engine but also the SQL, OLAP, planning and other engines and libraries. Those engines return (partial) results which are then assembled within BW/4HANA’s analytic manager to form an overall query result. It is important to understand that typical analytic queries consist of a sequence of operations whose order cannot be arbitrarily changed for optimization due to mathematical constraints. For example, currency values have to be converted to a single currency before the values are aggregated; swapping aggregation and currency conversion would yield incorrect results. In order to leverage HANA’s extremely fast aggregation power, currency conversion (and similarly unit conversion) logic has been brought into HANA.
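
To make that ordering constraint concrete, here is a tiny made-up example: two order items of 100 USD and 100 EUR, with an assumed rate of 1 EUR = 1.10 USD.

```sql
-- Illustrative only (made-up table and a fixed rate): conversion must happen
-- per row, before the aggregation.
SELECT SUM(CASE CURRENCY
             WHEN 'EUR' THEN AMOUNT * 1.10   -- assumed EUR -> USD rate
             ELSE AMOUNT                     -- already in USD
           END) AS TOTAL_USD
FROM ORDER_ITEMS;
-- With the rows (100.00, 'USD') and (100.00, 'EUR') this yields 210.00 USD.
-- Summing the raw amounts first (200.00) and converting the sum afterwards
-- would be wrong, because the intermediate sum mixes currencies.
```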

Many of the approaches mentioned above have started within BW-on-HANA but are now extended within BW/4HANA, mainly using the advantage that HANA is the only supported DBMS underneath. In the remainder, we will elaborate this further.

Openness

Probably among the most popular and widely recognized strengths of BW/4HANA are the many features that allow BW/4HANA …

  • to expose its data and a subset of the semantics on top (e.g. hierarchies, currency logic, fiscal logic, formulas) via HANA’s calculation views to a SQL tool or programmer,
  • to incorporate SQL tables, views, SQL script procedures seamlessly into a BW/4HANA-based DW architecture,
  • to leverage any specialized library (e.g. AFLs, PAL) in batch or online processing.

So, it is possible to easily interact with any SQL environment, tool and approach. This is so popular that many BW-on-HANA customers – and this should be even more the case for BW/4HANA – have started to discard their SQL-oriented data warehouses in favor of using native SQL within BW-on-HANA or BW/4HANA. Bell Helicopter presented an example at the ASUG 2016 conference; see the figure below. They will deprecate their 4 Oracle-based data warehouses and move them into HANA. For more details see their slides or here.


Bell Helicopter’s plans as presented at ASUG 2016

Simplicity

Depending on how one counts, BW offers 10 to 15 different object types / building blocks – these are the “Lego bricks” mentioned above – for building a data warehouse. In BW/4HANA, there are only 4, which are at least as expressive and powerful as the previous 15 – see the figure below. BW/4HANA’s building blocks are more versatile. Data models can now be built with fewer building blocks without compromising expressiveness. They will, therefore, be easier to maintain, thus more flexible and less error-prone. Existing models can be enhanced, adjusted and, thus, kept alive over a longer period that goes beyond their initial scope.

Another great asset of BW/4HANA is that it knows what type of data sits in which table. The usage and access pattern of each table is very well known to BW/4HANA. From that information, it can automatically derive which data needs to sit in the hot store (memory) and which data can be put into the warm store (disk or non-volatile RAM) to yield a more economic usage of the underlying hardware. This is unique to BW/4HANA compared to handcrafted data warehouses, which also require a handcrafted, i.e. specifically (manually) implemented, data lifecycle management.


Object types in classic BW vs BW/4HANA

Modern UIs

With the switch from BW or BW-on-HANA to BW/4HANA comes a shift away from legacy SAPGUI-based UIs for administrators, expert users and DW architects to modern UIs based on HANA Studio or Fiori-like, browser-based UIs. Currently, this shift has been accomplished for the main modeling UIs; SAPGUI will still be necessary in the short term. But it is not only about using modern technology and changing the visualization of existing UIs: there have also been significant changes in how a DW architecture is defined and managed. Most prominently, there is a new data flow modeler (figure below; left-hand side) which visualizes the DW architecture in a very intuitive and user-friendly way, thereby moving away from the classic tree-based BW workbench (figure below; right-hand side).


Advances of UIs in BW/4HANA vs classic BW


Big Data

BW/4HANA will be tightly integrated with SAP’s planned Big Data Hub tooling. This caters for the fact that traditional data warehouses are gradually being complemented with big data environments, which leads to an architecture of modern data warehouses; see the figure below. Typically, “data pipelines” (data movement processes that refine, combine, harmonize, transform, convert unstructured → structured etc.) span the various storage layers of such an environment. It will be possible to incorporate BW/4HANA’s process chains into such data pipelines to allow for an end-to-end view, scheduling, monitoring and overall management. BW/4HANA will leverage VORA as a bridge between HANA and HDFS, e.g. for accessing NLS data that might have been moved to HDFS, or for machine learning or transformation processes that involve (e.g. high-volume) data in HDFS.


An EDW in the context of a big data system landscape

This article has also been published on Linkedin. You can follow me on Twitter via @tfxz.

Quality of Sensor Data: A Study Of Webcams


Fig 1: Noisy GPS data: allegedly running across a lake.

For a while, I’ve been wondering what the data quality of sensor data is. Naively – and many conversations that I had on this went along that route – it can be assumed that sensors always send correct data unless they fail completely. A first counter-example that many of us can relate to is GPS, e.g. integrated into a smartphone. See the figure to the right which visualises part of a running route and shows me allegedly running across a lake.

Now, sensor does not equal sensor, i.e. it is not appropriate to generalise about “sensors”. The quality of measurements and data varies a lot with the actual measure (e.g. temperature), the environment, the connectivity of the sensor, the assumed precision and many more factors.

In this blog, I analyse a fairly simple, yet real-world setup, namely that of 3 webcams that take images every 30 minutes and send them via the FTP protocol to an FTP server. The setup is documented in the following figure that you can read from right to left in the following way:

  1. There are 3 webcams, each connected to a router via WLAN.
  2. The router is linked to an IP provider via a long-range WIFI connection based on microwave technology.
  3. Then there is a standard link via the internet from IP provider to IP provider.
  4. A router connects to the second IP provider.
  5. The FTP server is connected to that router.

Fig 2.: The connection between the webcams on the right to the FTP server on the left.

So, once an image is captured, it travels from 1. to 5. I have been running this setup for a number of years now. During that time, I’ve incorporated a number of reliability options like rebooting the cameras and the (right-hand) router once per day. From experience, steps 1. and 2. are the most vulnerable in this setup: both long-range WIFI and WLAN are liable to a number of failure modes. In my specific setup, there is no physical obstacle or frequency-polluted environment. However, weather conditions, like widely varying humidity and temperature, are the most likely source of distortion.

So, what is the experiment and what are the results? I’ve been looking at the image data sent over the course of approx. 3 months. In total, around 8000 images were transmitted. I counted the successful (fig 3) vs the unsuccessful (fig 4) transmissions. I did not track the images that completely failed to be transmitted, i.e. that did not reach the FTP server at all and therefore did not leave any trace. 5.3% of the images were distorted (as in fig 4), i.e. roughly every 19th image failed to be transmitted correctly. In addition, that rate was not constant (e.g. per week): there were times of heavy failures and times of no failures.
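
For reference, the “every 19th image” figure follows directly from the failure rate: 1 / 0.053 ≈ 18.9, and with roughly 8000 transmitted images, 8000 × 0.053 ≈ 420 of them arrived distorted.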


Fig 3: Successfully transmitted image.


Fig 4: Distorted image.

This is an initial and simple analysis, but one that matches real-world conditions and setups pretty well and is therefore not an artificial simulation. In the future, I might refine the analysis, e.g. by also counting non-transmissions or by correlating the quality with temperature, humidity or other potential influencing factors.

You can follow me on Twitter via @tfxz.