SAP Products and the DW Quadrant

In a recent blog, I introduced the Data Warehousing Quadrant, a problem description for a data platform that is used for analytic purposes. The latter is usually called a data warehouse (DW), but labels such as data mart, big data platform, data hub etc. are also used. In this blog, I will map some of the SAP products onto that quadrant, which will hopefully yield a more consistent picture of the SAP strategy.

To recap: the DW quadrant has two dimensions. One indicates the challenges regarding data volume, performance, query and loading throughput and the like. The other one shows the complexity of the modeling on top of the data layer(s). A good proxy for the complexity is the number of tables, views, data sources, load processes, transformations etc. Big numbers indicate many dependencies between all those objects and, thus, high effort when things get changed, removed or added. But it is not only the effort: there is also a higher risk of accidentally changing, for example, the semantics of a KPI. Figure 1 shows the space outlined by the two dimensions. The space is then divided into four subcategories: the data marts, the very large data warehouses (VLDWs), the enterprise data warehouses (EDWs) and the big data warehouses (BDWs).

Figure 1: The DW quadrant.

Now, there are several SAP products that are relevant to the problem space outlined by the DW quadrant. Some observers (customers, analysts, partners, colleagues) would like SAP to provide a single answer or a single product for that problem space. Fundamentally, that answer is HANA. However, HANA is a modern RDBMS; a DW requires tooling on top. So, more is required than just HANA. Figure 2 assigns SAP products / bundles to the respective subquadrants. The idea behind this is a “flexible rule of thumb” rather than a hard assignment. For example, BW/4HANA can play a role in more than just the EDW subquadrant; we will discuss this below. Still, it becomes clear where the sweet spots, or focus areas, of the respective products lie.

Figure 2: SAP products assigned to subquadrants.

From a technical and architectural perspective, there are many relationships between those SAP products. For example, operational analytics in S/4 heavily leverages the BW embedded inside S/4. Another example is BW/4HANA’s ability to combine with any SQL object, like SQL-accessible tables, views and procedures / scripts. This allows smooth transitions or extensions of an existing system into one or the other direction of the quadrant. Figure 3 indicates such transition and extension options:

  1. Data Mart → VLDW: This is probably the most straightforward path as HANA has all the capabilities for scale-up and scale-out to move along the performance dimension. All products listed in the data mart subquadrant can be extended using SQL based modeling.

  2. Data Mart → EDW: S/4 uses BW’s analytic engine to report on CDS objects. Similarly, BW/4HANA can consume CDS views either via the query or in many cases also for extraction purposes. Native HANA data marts combine with BW/4HANA similarly to the HANA SQL DW (see 3.).

  3. VLDW ⇆ EDW: Here again, I refer you to the blog describing how BW/4HANA can combine with native SQL. This allows BW/4HANA to be complemented with native SQL modeling and vice versa!

  4. VLDW or EDW → BDW: Modern data warehouses incorporate unstructured and semi-structured data that gets preprocessed in distributed file or NoSQL systems that are connected to a traditional, structured, RDBMS-based data warehouse. The HANA platform and BW/4HANA will address such scenarios. Watch out for announcements around SAPPHIRE NOW 😀

Figure 3: Transition and extension options.

The possibility to evolve an existing system – located somewhere in the space of the DW quadrant – to address new and/or additional scenarios, i.e. to move along one or both dimensions, is an extremely important and valuable asset. Data warehouses do not remain static; they are permanently evolving. This means that investments are secure and so is the ROI.

This blog has also been published here. You can follow me on Twitter via @tfxz.

The Data Warehousing Quadrant

A good understanding or a good description of a problem is a prerequisite to finding a solution. This blog presents such a problem description, namely for a data platform that is used for analytic purposes. Traditionally, this is called a data warehouse (DW), but labels such as data mart, big data platform, data hub etc. are also used in this context. I’ve named this problem description the Data Warehousing Quadrant. An initial version has been shown in this blog. Since then, I’ve used it in many meetings with customers, partners, analysts, colleagues and students. It has the nice effect that it makes people think about their own data platform (problem) as they try to locate where they are and where they want to go. This is extremely helpful as it triggers the right dialog. Only if you work on the right questions will you find the right answers. Or, put the other way: if you start with the wrong questions – a situation that occurs far more often than you’d expect – then you are unlikely to find the right answers.

The Data Warehousing Quadrant (Fig. 1) has two problem dimensions that are independent from each other:

  1. Data Volume: This is a technical dimension which comprises all sorts of challenges caused by data volume and/or significant performance requirements such as: query performance, ETL or ELT performance, throughput, high number of users, huge data volumes, load balancing etc. This dimension is reflected on the vertical axis in fig. 1.

  2. Model Complexity: This reflects the challenges triggered by the semantics, the data models, and the transformation and load processes in the system. The more data sources are connected to the DW, the more data models, tables and processes exist. So, the number of tables, views and connected sources is probably a good proxy for the complexity of modeling inside the DW. Why is this complexity relevant? The lower it is, the less governance is required in the system. The more tables, models and processes there are, the more dependencies exist between all those objects and the more difficult it becomes to manage all those dependencies whenever something (like a column of a table) needs to be added, changed or removed. This is the day-to-day management of the “life” of a DW system. This dimension is reflected on the horizontal axis in fig. 1.
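As a toy illustration of how the two dimensions span the quadrant, the sketch below classifies a system from two simple proxies. The function name, the proxies and the thresholds are invented for this example; they are not SAP sizing guidance.

```python
def dw_quadrant(data_volume_tb: float, num_tables: int) -> str:
    """Map a system's volume and complexity proxies to a DW subquadrant.

    Thresholds are arbitrary illustration values, not sizing guidance.
    """
    big = data_volume_tb > 10        # challenging data volume / performance
    complex_model = num_tables > 500  # challenging model complexity
    if big and complex_model:
        return "Big Data Warehouse (BDW)"
    if big:
        return "Very Large Data Warehouse (VLDW)"
    if complex_model:
        return "Enterprise Data Warehouse (EDW)"
    return "Data Mart"

print(dw_quadrant(50, 100))   # → Very Large Data Warehouse (VLDW)
print(dw_quadrant(1, 2000))   # → Enterprise Data Warehouse (EDW)
```

The point of the toy is that the two tests are independent: a system can score high on either axis alone or on both at once, which is exactly what the four subquadrants express.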

Figure 1: The DW quadrant.

Now, these two dimensions create a space that can be divided into four (sub-) quadrants which we discuss in the following:

Bottom-Left: Data Marts

Here, the typical scenarios are, for example,

  • a departmental data mart, e.g. a marketing department sets up a small, maybe even open-source-based RDBMS system and creates a few tables that help to track a marketing campaign. Those tables hold data on customers who were approached, their reactions or answers to questionnaires, addresses etc. SQL or other views allow some basic evaluations. After a few weeks, the marketing campaign ends, hardly any or no data gets added, and the data, the underlying tables and views slowly “die” as they are not used anymore. Probably, one or two colleagues are sufficient to handle the system, both setting it up and creating the tables and views. They know the data model intimately, data volume is manageable and change management is hardly relevant as the data model is either simple (thus changes are simple) or has a limited lifespan (≈ the duration of the marketing campaign).

  • An operational data mart. This can also be the data that is managed by a certain operational application, as found e.g. in an ERP, CRM or SRM system. Here, tables and data are given, and data consistency is managed by the related application. There is no requirement to involve additional data from other sources as the nature of the analyses is limited to the data sitting in that system. Typically, data volumes and the number of relevant tables are limited and do not constitute a real challenge.
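The departmental data mart described above can be sketched in a few lines of SQL. The schema below (table and column names, campaign details) is invented for illustration; Python's built-in sqlite3 stands in for whatever small RDBMS the department might pick.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE contacted_customer (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
    CREATE TABLE questionnaire_answer (
        customer_id INTEGER REFERENCES contacted_customer(id),
        question TEXT, answer TEXT);
    -- a simple "basic evaluation" view: responders per city
    CREATE VIEW responses_per_city AS
        SELECT c.city, COUNT(DISTINCT a.customer_id) AS responders
        FROM contacted_customer c
        JOIN questionnaire_answer a ON a.customer_id = c.id
        GROUP BY c.city;
""")
con.executemany("INSERT INTO contacted_customer VALUES (?, ?, ?)",
                [(1, "Ada", "Berlin"), (2, "Bob", "Berlin"), (3, "Eve", "Paris")])
con.executemany("INSERT INTO questionnaire_answer VALUES (?, ?, ?)",
                [(1, "liked it?", "yes"), (2, "liked it?", "no")])
print(con.execute("SELECT * FROM responses_per_city").fetchall())  # → [('Berlin', 2)]
```

Two tables and one view; one or two people can own the whole thing, which is precisely why change management is hardly an issue in this subquadrant.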

Top-Left: Very Large Data Warehouses (VLDWs)

Here, a typical situation is that there is a small number of business processes – each one supported via an operational RDBMS – with at least one of them producing huge amounts of data. Imagine the sales orders submitted via Amazon’s website: this article cites 426 items ordered per second on Cyber Monday in 2013. Now, the model complexity is comparatively low as only a few business processes, and thus tables (describing those processes), are involved. However, the major challenges originate in the sheer volume of data produced by at least one of those processes. Consequently, topics such as DB partitioning, indexing, other tuning, scale-out and parallel processing are dominant, while managing the data models or their lifecycles is fairly straightforward.

Bottom-Right: Enterprise Data Warehouses (EDWs)

When we talk about enterprises, we look at a whole bunch of underlying business processes: finance, HR, CRM, supply chain, orders, deliveries, billing etc. Each of these processes is typically supported by some operational system with a related DB in which it stores the data describing the ongoing activities within the respective process. There are natural dependencies and relationships between those processes – e.g. there has to be an order before something is delivered or billed – so it makes sense for business analysts to explore and analyse those business processes not only in an isolated way but also to look at those dependencies and overlaps. Everyone understands that orders might be hampered if the supply chain is not running well. In order to underline this with facts, the data from the supply chain and the order systems need to be related and combined to see the mutual impacts.

Data warehouses that cover a large set of business processes within an enterprise are therefore called enterprise data warehouses (EDWs). Their characteristic is the large set of data sources (reflecting the business processes) which, in turn, translates into a large number of (relational) tables. A lot of work is required to cleanse and harmonise the data in those tables. In addition, the dependencies between the business processes and their underlying data are reflected in the semantic modeling on top of those tables. Overall, a lot of knowledge and IP goes into building up an EDW. This makes it sometimes expensive but, also, extremely valuable.

An EDW does not remain static. It gets changed and adjusted, new sources get added, some models get refined. Changes in the day-to-day business – e.g. changes in a company’s org structure – translate into changes in the EDW. This, by the way, applies to the other DWs mentioned above, too. However, the lifecycle is more prominent with EDWs than in the other cases. In other words: here, the challenges from the model complexity dimension dominate the life of an EDW.

Top-Right: Big Data Warehouses (BDWs)

Finally, there is the top-right quadrant, which starts to become relevant with the advent of big data. Please be aware that “big data” refers not only to data volumes but also to incorporating types of data that have not been used much so far. Examples are

  • videos + images,
  • free text from email or social networks,
  • complex log and sensor data.

This requires additional technologies, which are currently surging in the wider environment of Hadoop, Spark and the like. Those infrastructures are used to complement traditional DWs to form BDWs, aka modern data warehouses, aka big data hubs (BDHs). Basically, those BDWs see challenges from both dimensions, data volume and modeling complexity. The latter is augmented by the fact that models might span various processing and data layers, e.g. Hadoop + RDBMS.

How To Use The DW Quadrant?

Now, how can the DW quadrant help? I have introduced it to various customers and analysts, and it made them think. They always start mapping their respective problems or perspectives to the space outlined by the quadrant. It is useful for explaining and expressing a situation and potential plans of how to evolve a system. Here are two examples:

SAP addresses those two dimensions, or the forces that push along those dimensions, via various products: SAP HANA and VORA for the data volume and performance challenges, while BW/4HANA and tooling for BDH will help along the complexity dimension. Obviously, the combination of those products is then well suited to address the case of big data warehouses.

An additional aspect is that no system is static; it evolves over time. In terms of the DW quadrant, this means that you might start bottom-left as a data mart and then grow into one or the other or both dimensions. These dynamics can force you to change tooling and technologies. E.g. you might start as a data mart using an open source RDBMS (MySQL et al.) and Emacs (for editing SQL). Over time, data volumes grow – which might require switching to a more scalable and advanced commercial RDBMS product – and/or sources and models are added, which requires a development environment for models that has a repository, SQL-generating graphical editors etc. Power Designer or BW/4HANA are examples of the latter.

This blog can also be found on LinkedIn. You can follow me on Twitter via @tfxz.

Technical Summary of #BW4HANA

[This can be considered as an extended version of the introductory blog What is #BW4HANA?]


BW/4HANA is a data warehousing application sitting on top of HANA as the underlying DBMS. A data warehouse (DW) is designed specifically to be a central repository for all data in a company. This well-structured data traditionally originates from transactional systems, ERP, CRM, and LOB applications. Each individual system is consistent, whereas the union of the systems and the underlying data is not. This is why disparate data from those systems has to be harmonized – that is, extracted, transformed, loaded (ETL) or logically exposed (federated) – into the warehouse within a single relational schema. The predictable data structure (of such a schema) optimizes processing and reporting.

BW/4HANA makes it possible to define a DW architecture via high-level building blocks, almost like Lego bricks. Out of this model, a set of tables, views and other relational objects is generated. BW/4HANA manages the lifecycle of those tables and views, e.g. when columns are added or removed. It also manages the relationships between the tables. For example, it ensures referential integrity, which is extremely beneficial for query processing as it avoids the use of outer joins, whose performance is, in general, far inferior to inner joins. BW/4HANA not only manages the lifecycle of tables, views etc. but also the lifecycle of the data sitting in those tables or being exposed by the views. Data typically enters a DW in its original format but then gets harmonized with data from other systems. For legal compliance and other reasons, it is usually important to track the data in the DW, i.e. how it “travels” from its entry into the DW to its exposure to the end users. In many cases, data is retained for a certain period in an active (hot data) layer in the DW until it is moved to less expensive media outside of HANA, e.g. nearline storage (NLS) in IQ or Hadoop. Still, BW/4HANA provides online access to that data, albeit at a small performance penalty.
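The benefit of guaranteed referential integrity can be shown with a toy star schema (table and column names are invented, sqlite3 stands in for the DBMS): when every fact row is known to reference an existing dimension row, the cheaper inner join produces exactly the same result as the defensive outer join.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE dim_product (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE fact_sales (product_id INTEGER, amount REAL);
""")
con.executemany("INSERT INTO dim_product VALUES (?, ?)", [(1, "A"), (2, "B")])
# Referential integrity holds: every fact row points to an existing product.
con.executemany("INSERT INTO fact_sales VALUES (?, ?)",
                [(1, 10.0), (1, 5.0), (2, 7.0)])

q = """SELECT p.name, SUM(f.amount) FROM fact_sales f
       {join} dim_product p ON p.id = f.product_id
       GROUP BY p.name ORDER BY p.name"""
inner = con.execute(q.format(join="JOIN")).fetchall()
outer = con.execute(q.format(join="LEFT OUTER JOIN")).fetchall()
assert inner == outer  # with referential integrity, the inner join is safe
print(inner)  # → [('A', 15.0), ('B', 7.0)]
```

A warehouse tool that asserts this property up front can therefore always generate the faster join, instead of paying the outer-join penalty "just in case".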

On top of its data management layer, BW/4HANA also provides an analytic layer with an analytic manager at its core. The latter is a unique asset as it differs from traditional OLAP engines in the sense that it refrains from processing data itself (i.e. in ABAP) but simply compiles query execution graphs that are sent to HANA’s execution engines, mainly the calculation engine but also SQL, OLAP, planning and other engines and libraries. Those engines return (partial) results which are then assembled within BW/4HANA’s analytic manager to form an overall query result. It is important to understand that typical analytic queries consist of a sequence of operations which cannot be arbitrarily reordered for optimization due to mathematical constraints. For example, currency values have to be converted to a single currency before the values are aggregated; swapping aggregation and currency conversion would yield incorrect results. In order to leverage HANA’s extremely fast aggregation power, currency conversion (and similarly unit conversion) logic has been brought into HANA.
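The ordering constraint is easy to demonstrate with a self-contained sketch (the exchange rates and line items are made-up illustration data): aggregating first throws away each item's currency, so applying any single rate afterwards gives a different, wrong total.

```python
# Why operation order matters: convert currencies *before* aggregating.
rates_to_eur = {"EUR": 1.0, "USD": 0.9}  # illustrative rates
line_items = [(100.0, "USD"), (200.0, "EUR"), (50.0, "USD")]

# Correct: convert each line item to EUR, then aggregate.
correct = sum(amount * rates_to_eur[cur] for amount, cur in line_items)

# Wrong: aggregate the raw amounts first, then apply one rate --
# the per-item currency information is already lost at that point.
naive = sum(amount for amount, _cur in line_items) * rates_to_eur["USD"]

print(correct, naive)  # 335.0 vs 315.0 -- swapping the steps changes the result
```

This is why pushing the conversion logic down into the engine that does the aggregation matters: the two steps must run together, in the right order, close to the data.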

Many of the approaches mentioned above started within BW-on-HANA but are now extended within BW/4HANA, mainly exploiting the advantage that HANA is the only supported DBMS underneath. In the remainder, we will elaborate on this further.


Probably among the most popular and widely recognized strengths of BW/4HANA are the many features that allow BW/4HANA …

  • to expose its data and a subset of the semantics on top (e.g. hierarchies, currency logic, fiscal logic, formulas) via HANA’s calculation views to a SQL tool or programmer,
  • to incorporate SQL tables, views, SQL script procedures seamlessly into a BW/4HANA-based DW architecture,
  • to leverage any specialized library (e.g. AFLs, PAL) in batch or online processing.

So, it is possible to interact easily with any SQL environment, tool and approach. This is so popular that many BW-on-HANA customers (and this should be even more the case for BW/4HANA) have started to discard their SQL-oriented data warehouses in favor of using native SQL within BW-on-HANA or BW/4HANA. Bell Helicopters presented an example at the ASUG 2016 conference; see the figure below. They will deprecate their 4 Oracle-based data warehouses and move them into HANA. For more details see their slides or here.

Bell Helicopter's plans as presented at ASUG 2016



Depending on how one counts, BW offers 10 to 15 different object types / building blocks – these are the “Lego bricks” mentioned above – for building a data warehouse. In BW/4HANA, there are only 4, which are at least as expressive and powerful as the previous 15 – see the figure below. BW/4HANA’s building blocks are more versatile. Data models can now be built with fewer building blocks without compromising expressiveness. They will, therefore, be easier to maintain, thus more flexible and less error-prone. Existing models can be enhanced, adjusted and, thus, kept alive over a longer period that goes beyond an initial scope.

Another great asset of BW/4HANA is that it knows what type of data sits in which table. The usage and access pattern of each table is very well known to BW/4HANA. From that information, it can automatically derive which data needs to sit in the hot store (memory) and which data can be put into the warm store (disk or non-volatile RAM) to yield a more economic usage of the underlying hardware. This is unique to BW/4HANA compared to handcrafted data warehouses, which also require a handcrafted, i.e. specifically (manually) implemented, data lifecycle management.
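As a rough sketch of the idea only: the heuristic, the threshold and the table metadata below are invented for illustration; BW/4HANA's actual placement rules are internal to the product.

```python
from datetime import date

def temperature(last_accessed: date, today: date, warm_after_days: int = 90) -> str:
    """Toy rule: recently accessed data stays hot, the rest moves to warm."""
    return "hot" if (today - last_accessed).days <= warm_after_days else "warm"

# Hypothetical access metadata the warehouse tool would know per table.
today = date(2017, 5, 1)
last_access = {"sales_2017": date(2017, 4, 28), "sales_2014": date(2016, 1, 2)}

placement = {table: temperature(d, today) for table, d in last_access.items()}
print(placement)  # → {'sales_2017': 'hot', 'sales_2014': 'warm'}
```

The point is not the specific rule but that the rule can be derived automatically from metadata the tool already has, instead of being hand-implemented per warehouse.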

Object types in classic BW vs BW/4HANA


Modern UIs

With the switch from BW or BW-on-HANA to BW/4HANA comes a shift away from legacy SAPGUI-based UIs for administrators, expert users and DW architects towards modern UIs based on HANA Studio or Fiori-like, browser-based UIs. Currently, this shift has been accomplished for the main modeling UIs, and SAPGUI will still be necessary in the short term. But it is not only about using modern technology and changing the visualization of existing UIs: there have been significant changes in how a DW architecture is defined and managed. Most prominently, there is a new data flow modeler (figure below; left-hand side) which visualizes the DW architecture in a very intuitive and user-friendly way, thereby moving away from the classic tree-based BW workbench (figure below; right-hand side).

Advances of UIs in BW/4HANA vs classic BW


Big Data

BW/4HANA will be tightly integrated with SAP’s planned Big Data Hub tooling. This caters for the fact that traditional data warehouses are gradually being complemented with big data environments, which leads to an architecture of modern data warehouses; see the figure below. Typically, “data pipelines” (data movement processes that refine, combine, harmonize, transform, convert unstructured → structured etc.) span the various storage layers of such an environment. It will be possible to incorporate BW/4HANA’s process chains into such data pipelines to allow for an end-to-end view, scheduling, monitoring and overall management. BW/4HANA will leverage VORA as a bridge between HANA and HDFS, e.g. for accessing NLS data that might have been moved to HDFS, or for machine learning or transformation processes that involve (e.g. high-volume) data in HDFS.

An EDW in the context of a big data system landscape


This article has also been published on Linkedin. You can follow me on Twitter via @tfxz.

Data Modeling with #BW4HANA


One of the most striking differences between BW and BW/4HANA is data modeling. On one hand, there are fewer but more versatile objects to choose from (see figure 1). On the other hand, there is a new, more intuitive alternative to BW’s long-standing admin workbench (RSA1), namely the Data Flow Modeler (see figure 2). It shows physical and virtual containers (like DSOs or composite providers) as boxes. Data transformations, combinations, transitions etc. are indicated as directed lines between those boxes. From those boxes and lines it is possible to access the respective editors for those objects. In that way, a DW architect can navigate along the paths that the data takes from entering the system to the multidimensional views that serve as sources for pivot tables, charts and other suitable visualisations. This is not only great for the DW architect but also allows for rapid prototyping scenarios, e.g. when a DW architect sits down with a business user to quickly create a new model or modify a given one. Figure 3 shows an example.

Figure 1: Fewer, but more versatile, objects when architecting a DW with BW/4HANA.

Figure 2: The new Data Flow Modeler in BW/4HANA.

Figure 3: The same scenario, once in the traditional admin workbench (left) and BW/4HANA’s Data Flow Modeler (right).

This blog has also been published here. You can follow me on Twitter via @tfxz.

Why #BW4HANA ?

With the recent announcement of BW/4HANA, some questions arise on the motivation for a new product rather than evolving an existing one, namely BW-on-HANA. With this blog, we want to shed some light on the discussions we have had and why we think that this is the best way forward. Here are the 3 fundamental reasons:

1. Classic DBs vs HANA Platform

Nowadays, HANA has become much more than a pure, classic RDBMS that offers standard SQL processing on a new (in-memory) architecture. There are a number of specialized engines and libraries that make it possible to bring all sorts of processing capabilities close to where the data sits, rather than bringing the data to a processing layer such as SAP’s classic application server. Predictive, geo-spatial, time-series, planning, statistical and other engines and libraries all combine with SQL but go well beyond the traditional Open SQL scope that has been prevalent in SAP applications for almost 3 decades. Please recall that Open SQL constitutes the (least) common denominator between the classic RDBMSs that have been supported in SAP applications. A long time ago, BW broke with that approach a bit by introducing RDBMS-specific classes and function groups that allowed it to leverage specific SQL and optimizer capabilities of the underlying RDBMS. Still, the mandate has to be to push BW’s data processing more and more to where the data sits. Accommodating a “common denominator” notion (i.e. complying with “standard-ish SQL”) impedes innovation at times as it prevents adopting highly DW-relevant and effective capabilities from HANA.

2. Legacy Objects / Backward Compatibility

BW was originally architected around the properties and the cost models imposed by classic RDBMSs. Over the past decade, cautious re-architecting has allowed BW to innovate continuously while safeguarding the investments of BW customers. There has been a strong emphasis on keeping newer versions of BW as compatible with past versions as possible. Similar to sticking to “standard-ish SQL”, this impedes innovation in some areas. BW/4HANA breaks with this strict notion of backward compatibility and replaces it with tooling for conversions that might require user interaction here and there, thus some effort. However, this allows for removing some legacy, not only inside a software product but also in existing DW instances that move from BW to BW/4HANA.

Now, with some “baggage” removed, it has become easier to focus on new, innovative things without being squeezed into considerations about backward compatibility in order to keep older scenarios going that you would build differently (e.g. with BW/4HANA‘s new object types) nowadays. In that sense, BW/4HANA is a much better breeding ground for innovations than BW-on-HANA can ever be. This is not because BW-on-HANA is a bad product but because it comes with a guarantee of supporting older scenarios too, which BW/4HANA does not.

3. Guidance

Finally, and this is basically the result of 1. and 2., many of our partners and customers have asked us for guidance about which of the many options BW provides they should use for their implementation. Some of those options are there simply because they were introduced some time ago but would actually be obsolete in a new product. So, SAP has decided to reduce the complexity of choices and created a product, namely BW/4HANA, that offers only those building blocks that customers and partners should use now and in the future. The product has become simpler, and that will translate into simplified DW architectures.

I hope this blog helps you understand why SAP has moved from BW to BW/4HANA. In simple terms, it’s similar to choosing between renovating and rearchitecting your existing house, or building and moving to a new house, with the latter fitted to your furniture and all the other stuff that you cherish. We all hope that you will feel comfortable in the new home.


This blog has also been published here. You can follow me on Twitter via @tfxz.

PS: More details are revealed on Sep 7’s SAP and Amazon Web Services Special Event.

PPS: In this 4 min video, Lothar Henkes and myself describe the motivation and plans for BW/4HANA. It was recorded at the BW/4HANA launch event in San Francisco on 7 Sep 2016.

What is #BW4HANA?

BW/4HANA is an evolution of BW that is completely optimised and tailored to HANA. The BW/4HANA code can only run on HANA as it is interwoven with HANA engines and libraries. The ABAP part is several million lines of code smaller compared to BW-on-HANA. It is free of any burden to stay, e.g., within a certain “common denominator scope” of SQL, like SQL92 or Open SQL, and can instead go for any optimal combination with what the HANA platform offers. The latter is especially important as it extends into the world of big data via HANA VORA, an asset that will be heavily used by BW/4HANA.

So, what are BW/4HANA’s major selling points? What are the “themes” or “goals” that will drive the evolution of BW/4HANA? Here they are:

1. Simplicity


Depending on how one counts, BW offers 10 to 15 different object types (building blocks like infocubes, multiproviders) to build a data warehouse. In BW/4HANA, there will be only 4, which are at least as expressive and powerful as the previous 15. BW/4HANA’s building blocks are more versatile. Data models can now be built with fewer building blocks without compromising expressiveness. They will, therefore, be easier to maintain, thus more flexible and less error-prone. Existing models can be enhanced, adjusted and, thus, kept alive over a longer period that goes beyond an initial scope.

Another great asset of BW/4HANA is that it knows what type of data sits in which table. From that information it can automatically derive which data needs to sit in the hot store (memory) and which data can be put into the warm store (disk or non-volatile RAM) to yield a more economic usage of the underlying hardware. This is unique to BW/4HANA compared to handcrafted data warehouses, which also require a handcrafted, i.e. specifically implemented, data lifecycle management.

2. Openness


BW/4HANA – like BW – offers a managed approach to data warehousing. This means that prefabricated templates (building blocks) are offered for building a data warehouse in a standardised way. The latter provides huge opportunities to optimise the resulting models for HANA regarding performance, footprint and data lifecycle. In contrast to classic BW, it is possible to deviate from this standard approach wherever needed and appropriate. On one hand, BW/4HANA models and data can be exposed as HANA views that can be accessed via standard SQL. BW/4HANA’s security is thereby not compromised but becomes part of those HANA views. On the other hand, any type of HANA table or view can be easily and directly incorporated into BW/4HANA. It is thereby not necessary to replicate data. Both capabilities mean that BW/4HANA combines with and complements any native SQL data warehousing approach. It can be regarded as a powerful suite of tools for architecting a data warehouse on HANA, with all the options to combine with other SQL-based tools.

3. Modern UIs


BW/4HANA will offer modern UIs for data modeling, administration and monitoring that run in HANA Studio or a browser. In the midterm, SAPGUI will become obsolete in that respect. Similarly, SAP’s Digital Boardroom, Business Objects Cloud, Lumira, Analysis for Office and Design Studio will be the perfect match as analytic clients on top of BW/4HANA.

4. High Performance


Excellent performance has been at the heart of BW since the advent of HANA. As elaborated above, BW/4HANA will be free of any burdens and will leverage any optimal access to HANA, which will be especially interesting in the context of big data scenarios as HANA VORA offers a highly optimised “bridge” between the worlds of HANA (RDBMS) and Hadoop/Spark (distributed processing on a file system). Most customers need to enhance and complement existing data warehouses with scenarios that address categories of data that go beyond traditional, business-process-triggered (OLTP) data, namely machine-generated data (IoT) and human-sourced information (social networks).

The figure below summarises the most important selling points. It is also available as a slide.

Major BW/4HANA selling points.

This blog has been cross published here and here. You can follow me on Twitter via @tfxz.

PS: In the meantime …




On May 17-19, SAPPHIRE NOW and ASUG 2016 took place in Orlando. SAP Business Warehouse (BW) typically doesn’t receive that much attention at such events as it has been in the market for some time. Still, this time it received quite some attention with (1) a large number of customer presentations in the context of ASUG 2016 and (2) a surprisingly prominent role in Hasso’s keynote. While I cannot provide exhaustive coverage, here are a few selected highlights that I managed to capture.

Dolby Laboratories Inc. (Session DE34260)

Dolby presented their move to BW-on-HANA in a 30 min session on Tuesday (17 May). What was interesting to me was that they evaluated BW-on-HANA against a number of competitive alternatives, as can be seen in figures 1 and 2.


Fig. 1: Dolby’s evaluation of competitive alternatives.


Fig 2: Conclusions from the evaluation.

Johns Hopkins (Session A4690)

Johns Hopkins talked about their experience migrating SAP Business Suite and BW from Oracle to HANA. Interestingly, the go-live for their BW-on-HANA system was scheduled for the very Friday (20 May) of that week. Still, they had the time to attend SAPPHIRE. Great. The go-live was successful. They offered some statistics on their migration. They did not use DMO but a standard (export – import) migration. Still, the result looks great. See figure 3. Johns Hopkins’ session material can be found here.


Fig 3: Johns Hopkins’ migration stats.

Bell Helicopters (Session A4128)

Bell’s initial motivation in their Oracle-based EDW landscape (1 BW + 4 native Oracle DWs) was – amongst other things – to leverage BW-on-HANA’s openness regarding SQL access in order to better support non-SAP analytics frontends like Qlik as well. They learned about HANA’s calc view capabilities, liked the Eclipse-based modeling environment for BW and HANA, and also used BO universes to tap into the newly created SQL-consumable models (figure 4). That led to the plan to consider HANA (and BW-on-HANA) as the consolidation environment for their entire EDW landscape (figure 5). Bell’s presentation material can be found here.


Fig 4: BW-on-HANA experience by Bell Helicopters.


Fig 5: Plan to consolidate Bell’s EDW landscape via BW-on-HANA.

Hasso’s Keynote

Around 10’20” into his keynote, Hasso showed the slide displayed in figure 6. It describes his vision of the future of SAP systems, with BW featuring prominently. A few minutes later, he elaborated on the future of BW and his ideas for it, and indicated initiatives in that direction.


Fig 6: Vision of the future of SAP systems.

This blog is also available on SCN. You can follow me on Twitter via @tfxz.