Architecture

Scale Out and Conquer: Architectural Decisions Behind Distributed In-Memory Systems

Distributed platforms like Apache Ignite rely heavily on horizontal scalability: the more machines in the cluster, the greater the performance of the application. But do we always get twice the speed after adding a second machine to the farm? Ten times the speed after adding ten machines? Is that always true? What is the responsibility of the platform, and where does the engineers' responsibility begin?
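
As a back-of-the-envelope illustration (not material from the talk), Amdahl's law already hints at the answer: if some fraction of the work is inherently serial (coordination, hotspots), speedup flattens out no matter how many machines join the cluster. The 5% serial fraction below is an assumed example value, not a measurement.

```java
// Back-of-the-envelope sketch: Amdahl's law for cluster speedup.
// speedup(n) = 1 / (serial + (1 - serial) / n), where `serial` is the
// fraction of work that cannot be distributed across machines.
public class SpeedupSketch {
    static double speedup(double serialFraction, int machines) {
        return 1.0 / (serialFraction + (1.0 - serialFraction) / machines);
    }

    public static void main(String[] args) {
        double serial = 0.05; // assume 5% of the workload is inherently serial
        for (int n : new int[] {1, 2, 10, 100}) {
            System.out.printf("%3d machines -> %.2fx speedup%n", n, speedup(serial, n));
        }
        // 2 machines give ~1.90x, 10 give ~6.90x, 100 only ~16.8x:
        // scaling turns sublinear long before the platform is at fault.
    }
}
```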

Everything We Learned About In-Memory Data Layout While Building VoltDB

The team behind the H-Store academic database and the VoltDB commercial database has been building and refining in-memory data storage for nearly ten years now, trying many different ways to organize data in memory. We've had successes and failures, as well as plenty of fascinating experiments. While we still have unanswered questions and ongoing experiments, this talk will share the highlights of what we have learned so far.

Using Hazelcast as the Serving Layer in Kappa Architecture

Many industries have a combined need to view and process big and fast data.

Previously, tools such as Hadoop allowed the processing of large data sets, but at high latency, while stream processing systems handled small amounts of data very quickly.

Recently, new architectures have been suggested that combine both approaches to provide a single solution for big and fast data.
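
A minimal sketch of the idea (not code from the talk; it assumes Hazelcast Jet 4.x with the hazelcast-jet-kafka connector, and the topic and map names are made up): a single streaming pipeline consumes a Kafka log and continuously materializes it into an IMap, which doubles as the Kappa serving layer. Replaying the log from the start rebuilds the view; tailing it keeps the view fresh.

```java
// Illustrative Kappa-style sketch (assumes Hazelcast Jet 4.x plus the
// hazelcast-jet-kafka connector; topic and map names are made up).
import com.hazelcast.jet.Jet;
import com.hazelcast.jet.JetInstance;
import com.hazelcast.jet.kafka.KafkaSources;
import com.hazelcast.jet.pipeline.Pipeline;
import com.hazelcast.jet.pipeline.Sinks;

import java.util.Properties;

public class KappaServingLayer {
    public static void main(String[] args) {
        Properties kafkaProps = new Properties();
        kafkaProps.setProperty("bootstrap.servers", "localhost:9092");
        kafkaProps.setProperty("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        kafkaProps.setProperty("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        // One stream pipeline covers both "big" and "fast" data: replaying
        // the topic rebuilds the serving view, tailing it keeps it fresh.
        Pipeline pipeline = Pipeline.create();
        pipeline.readFrom(KafkaSources.<String, String>kafka(kafkaProps, "events"))
                .withoutTimestamps()
                .writeTo(Sinks.map("serving-view")); // the IMap is the serving layer

        JetInstance jet = Jet.newJetInstance();
        jet.newJob(pipeline);
        // Queries then read the latest state directly:
        // jet.getMap("serving-view").get(someKey);
    }
}
```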

RAM Disk in Distributed Computer Networks

  1. Technologies that will be covered

Related technologies: overview of different storage systems, RAM speeds (DDR3, DDR4, DDR5), Linux networking, I/O


  2. Purpose of the talk (problem, solution, etc.)

The purpose of the talk is data storage for fast computation and super-fast I/O in distributed systems.

Problem: slow access to data during frequent store and retrieval operations.

Solution: integrating RAM as a storage layer to address this.
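
To make the problem and solution concrete, here is an illustrative sketch (not from the talk) that times sequential writes against a RAM-backed tmpfs mount versus an ordinary directory. /dev/shm is a standard tmpfs mount on most Linux systems; /tmp is assumed here to be disk-backed, which varies by distribution.

```java
// Minimal sketch: timing sequential writes to a tmpfs (RAM-backed) path
// versus a presumed disk-backed path. Both paths are assumptions about
// the test machine; adjust them to real mount points before running.
import java.io.IOException;
import java.nio.file.*;

public class RamDiskBench {
    static long timeWrite(Path path, byte[] block, int blocks) throws IOException {
        long start = System.nanoTime();
        try (var out = Files.newOutputStream(path, StandardOpenOption.CREATE,
                                             StandardOpenOption.TRUNCATE_EXISTING)) {
            for (int i = 0; i < blocks; i++) out.write(block); // sequential writes
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) throws IOException {
        byte[] block = new byte[1 << 20]; // 1 MiB per write, 256 MiB total
        long ram  = timeWrite(Path.of("/dev/shm/bench.bin"), block, 256); // tmpfs
        long disk = timeWrite(Path.of("/tmp/bench.bin"), block, 256);     // often disk
        System.out.printf("tmpfs: %d ms, disk: %d ms%n",
                ram / 1_000_000, disk / 1_000_000);
    }
}
```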


Making Stream Processing Stateless

Stream processing architectures are applied in operational reporting, IoT, real-time bidding, and monitoring. Still, processing in all of these areas relies on data aggregation and the associated state management.
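
As a minimal illustration of the state in question (my sketch, not the speaker's code), even a per-key counter forces a stream processor to hold state that must survive restarts; it is exactly this burden that making stream processing stateless aims to move out of the worker.

```java
// Minimal sketch of the state burden in stream processing: a per-key
// running count. If the worker crashes, `counts` must be recovered or
// rebuilt; pushing this state out of the worker is what "stateless"
// stream processing is after.
import java.util.HashMap;
import java.util.Map;

public class PerKeyCounter {
    private final Map<String, Long> counts = new HashMap<>(); // in-worker state

    public long onEvent(String key) {
        return counts.merge(key, 1L, Long::sum); // aggregate as events arrive
    }

    public static void main(String[] args) {
        PerKeyCounter counter = new PerKeyCounter();
        for (String k : new String[] {"bid", "bid", "click"}) {
            System.out.println(k + " -> " + counter.onEvent(k));
        }
    }
}
```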

Consistency vs Availability in the Cloud: Large-Scale Distributed Data Trade-Offs and Design Implications

Mission-critical software must be built for failure. However, keeping large-scale distributed stateful systems highly available imposes important trade-offs.

The adoption of cloud architectures automatically imposes those design implications on anyone architecting modern systems, and overlooking them will eventually lead to data loss or a production outage.
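
One concrete form these trade-offs take (an assumed illustration, not from the talk) is quorum sizing in replicated stores: with N replicas, write quorum W, and read quorum R, the condition R + W > N guarantees that reads observe the latest write, but raising R or W reduces how many node failures the system tolerates while staying available.

```java
// Assumed illustration of a classic consistency/availability knob:
// quorum replication. R + W > N forces read and write sets to overlap,
// giving consistent reads, but each operation then survives fewer
// node failures.
public class QuorumTradeoff {
    static void describe(int n, int w, int r) {
        boolean consistent = r + w > n;  // read and write quorums must overlap
        int writeTolerance = n - w;      // nodes that may be down for writes
        int readTolerance = n - r;       // nodes that may be down for reads
        System.out.printf("N=%d W=%d R=%d -> consistent reads: %b, " +
                "writes tolerate %d failures, reads tolerate %d%n",
                n, w, r, consistent, writeTolerance, readTolerance);
    }

    public static void main(String[] args) {
        describe(3, 2, 2); // consistent, tolerates 1 failure either way
        describe(3, 1, 1); // available through 2 failures, but reads can be stale
    }
}
```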

Making a Case for In-Memory Database

Dynamic random access memory is getting cheaper, and an ever-growing set of applications is becoming fully RAM-resident.

In this talk I'll state the case for in-memory technology in purely engineering terms: how memory-focused algorithms and data structures create a performance and efficiency edge over traditional systems, significant enough to justify a product family of its own:

- taking performance to the next level: why a purpose-built in-memory database memory manager can reach levels of speed and efficiency that are unattainable with general-purpose memory allocators (a flavor of this is sketched below)
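
A hedged sketch of that claim (illustrative only, not any product's actual allocator): a bump-pointer arena preallocates one large region, makes each allocation a single addition plus a bounds check, and frees everything with one pointer reset, avoiding the per-object bookkeeping of a general-purpose malloc.

```java
// Illustrative arena (bump-pointer) allocator over one preallocated
// region. Allocation is an addition and a bounds check; freeing the
// whole arena is a single pointer reset, with no per-object metadata.
// A real in-memory database memory manager adds slabs, compaction, etc.
import java.nio.ByteBuffer;

public class ArenaAllocator {
    private final ByteBuffer arena;
    private int top = 0; // offset of the next free byte

    ArenaAllocator(int capacityBytes) {
        arena = ByteBuffer.allocateDirect(capacityBytes); // one big up-front region
    }

    /** Returns the offset of {@code size} fresh bytes, or -1 if the arena is full. */
    int allocate(int size) {
        int aligned = (size + 7) & ~7; // keep 8-byte alignment
        if (top + aligned > arena.capacity()) return -1;
        int offset = top;
        top += aligned; // "allocating" is just bumping the pointer
        return offset;
    }

    /** Frees every allocation at once, in O(1). */
    void reset() {
        top = 0;
    }

    public static void main(String[] args) {
        ArenaAllocator a = new ArenaAllocator(1 << 20); // 1 MiB arena
        int row1 = a.allocate(64);
        int row2 = a.allocate(100);
        System.out.println("row1 at offset " + row1 + ", row2 at offset " + row2);
        a.reset(); // drop all rows in one step
    }
}
```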