## Graph Middleware Overview

Further details of the IBM System G Graph Middleware tools:

(1) IBMPPL Graph Library

(2) Optimizer for IBM POWER

(3) Optimizer for the Software Defined Environment Cloud

## Graph Middleware Insights

### Mission

**High Productivity + High Performance** Graph Computing

The System G middleware layer is critical for delivering high performance in graph and graphical-model analytics by leveraging advanced computing platforms. Meanwhile, this layer also provides a convenient representation (an in-memory generic graph interface) for graphs, so that users can create graphs, including dynamic graphs, in memory in a straightforward manner. The middleware includes runtime libraries that automatically parallelize graphs and accelerate computations; as a result, high-level analytics remain highly portable across various advanced platforms. The middleware also provides schedulers and hardware-specific optimizations.

* The middleware provides a set of primitives for graph computation, which work as building blocks for constructing various high-level analytics.

* The abstraction is carefully designed to capture analytics developers' concerns.

* The middleware has a set of runtime libraries, which implement the primitives and are optimized for various platforms.

* The runtime gives system developers opportunities to improve high-level analytics without knowing any algorithmic details of the analytic models.

* The interface joins the efforts of high-level users and low-level users.

The challenges of graph middleware design are obvious. Thanks to our rich experience in both analytics and systems, our middleware can overcome them.

* The challenges include finding the appropriate abstraction of analytic models, so that a small set of primitives can describe many different algorithms.

* The challenges also include optimizing the runtime for various platform architectures: the implementation must be re-designed to deliver high performance on each new architecture. The middleware takes on this effort to free the high-level users from it.

### Structure

The structure of the System G middleware is illustrated in the following chart. It provides a generic graph interface that can represent graphs with any class of vertices and edges -- hence "generic." A Java wrapper for the interface is also under development to support Java users. Below the interface, we have runtime libraries for the shared-memory computation model and for the distributed-memory model. These systems provide high-performance support to the graph components. Advanced techniques such as Graph RDMA and HMCs (Hybrid Memory Cubes) are considered for further improving performance, upon the availability of this hardware.

A closer look at the middleware components is given in the following diagram. For high-level users, the interface must naturally abstract the graph representation and must be generic; that is, the data structures of graph properties should be transparent to the interface. For low-level users, the interface should expose primitives of graph computation that are loosely coupled. Any optimization for a particular platform architecture should not affect the design above the interface. The components must be plug-and-play to enhance portability.

### Evaluation

We conducted preliminary experiments to evaluate the components we developed. The relevant results are shown in the following figure.

* On the upper left of the figure, we show that the middleware includes several components, among them our novel techniques for graph and graphical-model computations.

* On the upper right, we can see that it is straightforward to use the interface and access more than 70 basic graph algorithms for analytics development. Graph parallelization and scheduling are handled automatically.

* On the bottom left, we illustrate the implementation of a parallel graphical model for multidimensional forecasting, which scales well.

* On the bottom middle, we demonstrate that the RDMA-based scheduler works well on a task graph with 10,000 nodes, where each node carries a small task of 1~10 microseconds (the smaller the task, the greater the scheduling challenge). The upper plot uses node degree 3; the lower one uses node degree 25.

* On the bottom right, we analyze the computation and communication costs on parallel machines. We will further profile them on POWER machines to help future POWER architecture design.

### Use Cases

### Inference in a Multidimensional Graphical Model for Forecasting

Modern business data usually contain hierarchical structures across multiple dimensions at multiple resolutions. Disaggregated data can be used to obtain an aggregate forecast by summing the disaggregate forecasts (a bottom-up strategy), or to obtain a disaggregate forecast directly, instead of distributing an aggregate forecast downward.

Such data can be organized into a multidimensional hierarchy. An example of such a hierarchy, in the dimensions of region and product category for cameras, is shown in the following figure, where SS stands for Samsung camera.

Note that information can propagate in any dimension at each level. So a more generic representation of 2D and 3D hierarchies for online forecasting is shown below, where l1g1p1 denotes the node of PIN 1 in group 1 at level 1.

The model selection depends on the input data structure: exponential smoothing and ARIMA for stationary time series, and dynamic linear models for general time series. The model in each node comprises an observation equation (1) together with state evolution equations (2)-(5) for the level, slope, regression, and seasonal components, respectively. The evaluation results must be propagated to each node's neighbors as messages, representing influence estimates at various granularities.

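For reference, one common dynamic linear model with exactly these components (observation, level, slope, regression, seasonal) has the following form. The exact parameterization used here is not shown in the text, so this is an illustrative reconstruction of a standard formulation, not the system's own equations:

```latex
% Standard dynamic linear model (illustrative reconstruction):
\begin{align*}
y_t      &= \mu_t + x_t^{\top}\beta_t + \gamma_t + \varepsilon_t
          && \text{(1) observation}\\
\mu_t    &= \mu_{t-1} + \delta_{t-1} + w_{\mu,t}
          && \text{(2) level}\\
\delta_t &= \delta_{t-1} + w_{\delta,t}
          && \text{(3) slope}\\
\beta_t  &= \beta_{t-1} + w_{\beta,t}
          && \text{(4) regression}\\
\gamma_t &= -\sum\nolimits_{j=1}^{s-1}\gamma_{t-j} + w_{\gamma,t}
          && \text{(5) seasonal, period } s
\end{align*}
```

Here $\varepsilon_t$ and the $w_{\cdot,t}$ are independent Gaussian noise terms; the messages passed between nodes carry the local state estimates and their errors.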

Our solution is to use **our defined generic graph interface** in System G to represent the multidimensional (task) graph and to parallelize the evaluation using MPI; this gives us a C++-based scheduler. The evaluation model was implemented in R by statisticians. The C++ scheduler invokes multiple R instances and supports transferring results between them. The input to an R function is a string associated with a data file or a set of data files; the outputs are also strings, representing the local estimate (with error). The scheduling continues until the evaluation on all nodes converges. Here is the code on IBM Blade: