In-Memory Distributed Compute - Architectures for Low-Latency Applications in Java
Are you a developer, software engineer, or architect looking to apply in-memory technologies to your current architecture to deliver ultra-fast response times and better performance, scalability, and availability? Are you looking for new tools and techniques to manage and scale data and processing for microservices and cloud architectures?
This talk will provide an introduction to in-memory technologies, caching, and distributed compute. It will then review common pain points and show how they can be addressed with in-memory data grids that provide distributed caching and compute.
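To ground the caching side of the discussion, here is a minimal sketch of the cache-aside pattern in Java. It uses a plain ConcurrentHashMap as a stand-in for a distributed cache, and the Customer type and database methods are hypothetical placeholders; a real in-memory data grid would partition and replicate the entries across a cluster.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Cache-aside: check the cache first, fall back to the system of record on a miss,
// then populate the cache so subsequent reads are served from memory.
public class CustomerCache {

    // Stand-in for a distributed cache; a data grid would spread these
    // entries across the cluster instead of one JVM.
    private final Map<String, Customer> cache = new ConcurrentHashMap<>();

    public Customer getCustomer(String id) {
        // computeIfAbsent loads and caches the value atomically on a miss
        return cache.computeIfAbsent(id, this::loadFromDatabase);
    }

    public void updateCustomer(Customer customer) {
        saveToDatabase(customer);      // write to the system of record
        cache.remove(customer.id());   // invalidate so the next read refreshes the entry
    }

    // Hypothetical placeholders for the backing store
    private Customer loadFromDatabase(String id) { return new Customer(id, "name-" + id); }
    private void saveToDatabase(Customer customer) { /* persist to the database */ }

    record Customer(String id, String name) {}
}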
Key things you will learn:
• Caching concepts and strategies
• Common use cases such as payments and securities processing, real-time fraud detection, online personalization, and IoT
• Where an in-memory computing platform fits within a typical enterprise application architecture
• A deeper dive into low-latency distributed computing techniques for fast batch, distributed RPC, and stream processing (see the sketch after this list)
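As a flavour of the distributed-compute topics in the last bullet, the following scatter-gather sketch fans a computation out over partitions and aggregates the results. A local ExecutorService stands in for the cluster; on a real data grid the tasks would be shipped to the nodes that own each partition so the computation runs next to the data. The FraudScoreAggregator class, scorePartition method, and partition count are illustrative assumptions, not any specific product's API.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Scatter-gather: submit one task per partition, then gather and reduce the results.
public class FraudScoreAggregator {

    // Hypothetical per-partition computation, e.g. scoring recent transactions
    static double scorePartition(int partitionId) {
        return partitionId * 0.01; // placeholder for real scoring logic
    }

    public static void main(String[] args) throws Exception {
        int partitions = 8;
        ExecutorService pool = Executors.newFixedThreadPool(partitions);
        try {
            // Scatter: one task per partition
            List<Future<Double>> futures = new ArrayList<>();
            for (int p = 0; p < partitions; p++) {
                final int partition = p;
                futures.add(pool.submit(() -> scorePartition(partition)));
            }
            // Gather: reduce the partial results into a single value
            double total = 0;
            for (Future<Double> f : futures) {
                total += f.get();
            }
            System.out.println("Aggregated score: " + total);
        } finally {
            pool.shutdown();
        }
    }
}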
Johnson has 30 years of software industry experience spanning pre-sales, architecture, engineering, design, business analysis, implementation, and operations. Most of his time has been focused on database-related technologies.