ISACA Journal Author Blog


Using In-memory Computing to Manage Big Data

Published: 9/23/2013 8:18 AM | Category: Risk Management
William Emmanuel Yu, Ph.D., CISM, CRISC, CISSP, CSSLP
In the telecommunications industry, for example, workloads are measured in hundreds or thousands of transactions per second. Consider a mobile prepaid database that must be checked every time a transaction is made: Does the subscriber belong to my network? Is the subscriber entitled to the service? Does the subscriber have enough air time? Over time, this easily translates to millions or billions of transactions on that one database. Consider also the emergence of real-time user profiling for contextual advertising, which requires a large store of easily retrievable transactional information. As the industry grows, the volume of transactions grows quickly as well. These use cases also apply in the world of big data, and they create the need to build systems that are not just bigger but also faster.
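The three prepaid checks above can be sketched as a single authorization function. This is a minimal illustration only; the data model, subscriber numbers and function names are assumptions for the sake of the example, not any real telecom system's API.

```python
# Illustrative in-memory subscriber table; the placeholder MSISDN and
# fields are assumptions for this sketch.
subscribers = {
    "subscriber-001": {"on_network": True, "services": {"voice", "sms"}, "airtime_sec": 120},
}

def authorize(msisdn: str, service: str, cost_sec: int) -> bool:
    sub = subscribers.get(msisdn)
    if sub is None or not sub["on_network"]:  # Does the subscriber belong to my network?
        return False
    if service not in sub["services"]:        # Is the subscriber entitled to the service?
        return False
    if sub["airtime_sec"] < cost_sec:         # Does the subscriber have enough air time?
        return False
    sub["airtime_sec"] -= cost_sec            # Deduct air time on success.
    return True
```

Every call in this flow is a read-modify-write against one subscriber record, which is why per-record throughput, not just aggregate throughput, matters here.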
The most common solution nowadays is to scale out, i.e., horizontal scalability. However, there are use cases in which more and more transactions must be executed against a single record in a single database. In other cases, the data cannot easily be partitioned, which is a requirement for good horizontally scalable systems. In these cases, supplementary approaches are clearly necessary; hence, the reemergence of vertical scalability and in-memory computing.
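Hash partitioning is the usual basis for horizontal scaling, and it also shows the limitation described above. The shard count and keys below are illustrative assumptions:

```python
import hashlib

NUM_SHARDS = 4

def shard_for(key: str) -> int:
    """Map a record key deterministically to one of NUM_SHARDS nodes."""
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_SHARDS

# Because the mapping is deterministic, every transaction against one
# subscriber's record always lands on the same shard. Adding more shards
# therefore does nothing for a single "hot" record.
```

This is why partitionable workloads scale out well, while a hot single record pushes you toward vertical approaches instead.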
In-memory computing aims to provide vertical performance gains by decreasing I/O latency, which it does by keeping all the data in main memory. This reduction in latency, compounded across many transactions, provides the performance needed to address increasingly high-throughput workloads.
However, a number of risk factors must be considered when choosing in-memory technologies. If writes are lazy, what happens to the data during an outage? How durable are the data? From open-source solutions to commercial providers, industry players vary in their approaches, and those variances make particular solutions' risk profiles better suited to certain use cases.
Read William Emmanuel Yu’s recent Journal article:
“In-memory Computing—Evolution, Opportunity and Risk,” ISACA Journal, volume 5, 2013.