Using In-Memory Computing to convert Big Data into Fast Data
Deploying Big Compute applications can require a wide range of tools and approaches to run large-scale workloads for business, science, and engineering, using large amounts of CPU and memory in a coordinated way. Big Compute typically implies one of two things: ordinary computing scaled out across a massively parallel cluster, or High-Performance Computing (HPC). The problem with the former is that it can only scale so far, and for it to really succeed, the data itself must be scaled just as wide. The more immediate problem is that in many cases, the speed at which you get results matters just as much as the results themselves. For example, if you are using analytics to improve e-commerce, the best time to apply them is while the customer is still engaged in the transaction.
For a business to really get the value it needs from Big Compute, the paradigm and toolset must shift to a message-based, stateful data architecture with distributed computations.
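To make that shift concrete, here is a minimal sketch of the idea in plain Java rather than any particular IMDG product: state is partitioned, each partition is owned by a single worker, and computations are sent as messages to the partition that owns the data instead of moving the data to the computation. All names (PartitionedComputeSketch, ComputeMessage, the demo key) are illustrative assumptions, not a reference implementation.

import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiFunction;

// Minimal sketch: message-based, stateful, partitioned computation.
// Each partition owns a shard of the state; compute travels to the data.
public class PartitionedComputeSketch {

    // A computation routed to the partition that owns the key.
    static final class ComputeMessage {
        final String key;
        final BiFunction<String, Long, Long> update; // (key, currentValue) -> newValue
        ComputeMessage(String key, BiFunction<String, Long, Long> update) {
            this.key = key;
            this.update = update;
        }
    }

    // A partition: a single thread that owns its slice of state.
    static final class Partition implements Runnable {
        final BlockingQueue<ComputeMessage> inbox = new ArrayBlockingQueue<>(1024);
        final Map<String, Long> state = new ConcurrentHashMap<>();

        @Override
        public void run() {
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    ComputeMessage msg = inbox.take();
                    long current = state.getOrDefault(msg.key, 0L);
                    state.put(msg.key, msg.update.apply(msg.key, current));
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // shut down quietly
            }
        }
    }

    public static void main(String[] args) throws Exception {
        int partitionCount = 4;
        Partition[] partitions = new Partition[partitionCount];
        Thread[] threads = new Thread[partitionCount];
        for (int i = 0; i < partitionCount; i++) {
            partitions[i] = new Partition();
            threads[i] = new Thread(partitions[i], "partition-" + i);
            threads[i].start();
        }

        // Route the computation to the data: hash the key to find its owner,
        // then send a message instead of pulling the value out of the partition.
        String key = "customer-42";
        int owner = Math.floorMod(key.hashCode(), partitionCount);
        partitions[owner].inbox.put(
            new ComputeMessage(key, (k, total) -> total + 100)); // e.g. add to a running total

        Thread.sleep(200); // demo only: allow the message to be processed
        System.out.println(key + " = " + partitions[owner].state.get(key));

        for (Thread t : threads) t.interrupt();
    }
}

Because each key has exactly one owning partition, updates need no distributed locks, and the same pattern scales from threads on one node to processes across a cluster once the inbox becomes a network message channel.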
In this session, we will walk through how this paradigm shift happened, where IMDGs, NoSQL, and in-memory databases stopped working, and what else was needed.