In-Memory Computing and Data Analytics
One of the recurring themes I'm hearing at tech conferences this year is in-memory computing. At the core of these discussions is the need to make data-informed business decisions more quickly. Instead of taking a snapshot from your data sources and loading it into a data warehouse for analysis, which can take hours or days, a number of solutions now promise analytics side by side with your transactional workloads in the form of in-memory computing.
What exactly is in-memory computing?
Instead of working with data stored on disk, in-memory computing centers on data held in RAM or the CPU cache, where direct access over the system bus delivers an efficiency you simply can't get from reading from storage.
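To make the gap concrete, here's a toy illustration (not a rigorous benchmark) comparing an aggregation over data already resident in RAM with the same aggregation after re-reading the data from a file. Real results will vary with the OS page cache and parsing overhead, but the shape of the difference is the point:

```python
import os
import tempfile
import time

# Toy illustration only: sum a million integers held in RAM vs.
# re-reading the same values from disk before aggregating.
values = list(range(1_000_000))

# Disk path: persist the data, then read it back on the hot path.
with tempfile.NamedTemporaryFile(mode="w", delete=False, suffix=".txt") as f:
    f.write("\n".join(map(str, values)))
    path = f.name

start = time.perf_counter()
with open(path) as f:
    disk_total = sum(int(line) for line in f)
disk_time = time.perf_counter() - start

# In-memory path: the data is already in RAM; no I/O at all.
start = time.perf_counter()
mem_total = sum(values)
mem_time = time.perf_counter() - start

os.unlink(path)
print(f"disk: {disk_time:.4f}s  memory: {mem_time:.4f}s")
```

Both paths compute the same total; only where the data lives on the hot path changes.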
The speed of access isn't without its challenges. Memory is notoriously volatile, though purpose-built systems designed for in-memory computing help mitigate some of the risk. The other potential challenge is the tendency for data sets to grow over time: some of the in-memory solutions currently on the market require memory to grow as the database grows.
One of the more interesting approaches to in-memory computing is IBM BLU Acceleration, which combines columnar data storage with a unique approach to data compression, meaning you don't need the entire database in memory to get the full benefit of real-time analysis.
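To show why compression and analytics pair so well, here's a simplified dictionary-encoding sketch. This is not BLU's actual compression scheme (IBM describes a frequency-based encoding), but it illustrates the underlying idea: once values are encoded as small codes, a predicate can be evaluated on the compressed representation without decompressing each row:

```python
# Simplified dictionary-encoding sketch (illustrative, not BLU's
# actual frequency-based compression): each distinct column value
# maps to a small integer code.
def dictionary_encode(column):
    """Return (codes, dictionary) for a list of column values."""
    dictionary = {}
    codes = []
    for value in column:
        code = dictionary.setdefault(value, len(dictionary))
        codes.append(code)
    return codes, dictionary

regions = ["EMEA", "APAC", "EMEA", "AMER", "APAC", "EMEA"]
codes, dictionary = dictionary_encode(regions)

# Predicate evaluation on compressed data: translate the literal
# once, then compare small integers instead of strings.
target = dictionary["EMEA"]
matching_rows = [i for i, code in enumerate(codes) if code == target]
print(matching_rows)  # → [0, 2, 5]
```

Because the codes are much smaller than the original values, more of the working set fits in RAM, which is one way a database can deliver in-memory speeds without holding every byte of raw data in memory.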
What is IBM BLU Acceleration?
BLU Acceleration is a database technology for DB2 that converts row-based data to a columnar format for faster processing. BLU combines a mechanism to prefetch just the data you need with CPU acceleration to make working with the data faster than traditional methods, and it performs operations directly on compressed data to further speed analytics. Using a metadata management layer, BLU Acceleration skips any data not needed for a given query. All of this comes without the need for indexing or tuning.
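The data-skipping idea can be sketched in a few lines. This is a loose model of BLU's synopsis metadata, not its implementation: each page of a column keeps its min/max, and a range predicate consults that summary to rule out whole pages before touching them:

```python
# Minimal data-skipping sketch, loosely modeled on metadata-based
# page skipping: each page of a column records its (min, max).
PAGE_SIZE = 4

def build_synopsis(column):
    """Split a column into pages and record (min, max) per page."""
    pages = [column[i:i + PAGE_SIZE] for i in range(0, len(column), PAGE_SIZE)]
    return pages, [(min(p), max(p)) for p in pages]

def scan_greater_than(pages, synopsis, threshold):
    """Scan only pages whose recorded max exceeds the threshold."""
    hits, pages_read = [], 0
    for page, (lo, hi) in zip(pages, synopsis):
        if hi <= threshold:       # whole page ruled out by metadata alone
            continue
        pages_read += 1
        hits.extend(v for v in page if v > threshold)
    return hits, pages_read

sales = [10, 12, 11, 9, 55, 60, 58, 57, 14, 13, 12, 15]
pages, synopsis = build_synopsis(sales)
hits, pages_read = scan_greater_than(pages, synopsis, 50)
print(hits, pages_read)  # → [55, 60, 58, 57] 1
```

Of the three pages, only the middle one is actually scanned; the other two are eliminated by their metadata, which is why this kind of skipping can stand in for a traditional index.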
How can you take advantage of BLU Acceleration?
If you already use DB2, BLU Acceleration is available in version 10.5. If you're already extracting data in order to gather business intelligence or analytics, this seems like a no-brainer. Moving from the ETL world of providing analytics far slower than real time to making business decisions on data that's close to real time means it's easier to spot trends as they happen rather than finding out after the fact. The case studies provided on the IBM BLU website suggest that many different industries are benefiting from a move to in-memory computing. Beyond working with DB2, there are also integration points for your SAP environments.
For some additional information on how DB2 BLU Acceleration uses column-organized data, check out this video: