Debunking the Myths of Scale-Up Architectures

Schedule: June 20, 2:35pm
Room: Matterhorn 3
As data centers grow in capacity and power, the architectural trade-offs between server scale-up and scale-out continue to be debated. Both approaches are valid: scale-out adds multiple, smaller servers running in a distributed computing model, while scale-up adds fewer, more powerful servers capable of running larger workloads.

It's worth noting that scale-up architectures offer additional, unique advantages. One big advantage is large memory and compute capacity, which makes In-Memory Computing possible: large databases can now reside entirely in memory, boosting analytics performance as well as speeding up transaction processing. Because disk accesses are virtually eliminated, database query times can be shortened by orders of magnitude, enabling real-time analytics for greater business productivity and converting wait time into work time.
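
To see why eliminating disk accesses matters so much, consider a back-of-envelope comparison of storage-tier latencies. The sketch below uses commonly cited ballpark figures (assumptions, not measurements from any particular system); exact numbers vary by hardware, but the ratios convey the orders-of-magnitude gap.

```python
# Back-of-envelope latency arithmetic. The figures are commonly cited
# ballpark numbers, not measurements from any particular system.

DRAM_ACCESS_S = 100e-9   # ~100 ns: main-memory access
SSD_READ_S = 100e-6      # ~100 us: SSD random read
HDD_SEEK_S = 10e-3       # ~10 ms: spinning-disk seek

def speedup(slow: float, fast: float) -> float:
    """How many times faster the fast medium is than the slow one."""
    return slow / fast

print(f"DRAM vs. SSD: ~{speedup(SSD_READ_S, DRAM_ACCESS_S):,.0f}x faster")
print(f"DRAM vs. HDD: ~{speedup(HDD_SEEK_S, DRAM_ACCESS_S):,.0f}x faster")
# -> DRAM vs. SSD: ~1,000x faster
# -> DRAM vs. HDD: ~100,000x faster (several orders of magnitude)
```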

Scale-up servers that move data between processors and memory over an internal interconnect, rather than an external network, offer accelerated processing due to reduced software overhead and lower latency across the entire system.
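
The software overhead shows up even in a toy experiment. The Python sketch below is illustrative only: it models a cross-node transfer as a serialization round-trip (pickle) and ignores actual network latency, which would only widen the gap against in-place access.

```python
import pickle
import time

# On a scale-up machine, a thread on another processor socket can traverse
# this structure directly over the cache-coherent interconnect: no copies,
# no syscalls, no serialization.
rows = [(i, f"value-{i}") for i in range(1_000_000)]

t0 = time.perf_counter()
total_direct = sum(key for key, _ in rows)       # in-place traversal
t_direct = time.perf_counter() - t0

# On a scale-out cluster, handing the same structure to a peer node means
# serializing it, crossing the network stack, and deserializing it on the
# other side. We time only the serialization round-trip here; real network
# transfer and protocol latency would add to this.
t0 = time.perf_counter()
blob = pickle.dumps(rows)
total_shipped = sum(key for key, _ in pickle.loads(blob))
t_shipped = time.perf_counter() - t0

print(f"direct traversal:        {t_direct:.3f} s")
print(f"serialize + deserialize: {t_shipped:.3f} s")
```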

Is it feasible and economical to support both scale-out and scale-up workloads on the same system or class of systems? At the end of the day, it's a question of how many nodes you deploy (scale-out) and how large each node is (scale-up).
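
Framed as a sizing exercise, that question becomes concrete. The sketch below is a minimal capacity calculation under stated assumptions: a hypothetical 12 TB in-memory working set and a 25% per-node RAM reservation for OS and runtime overhead, both illustrative numbers rather than figures from the talk.

```python
from math import ceil

def nodes_needed(dataset_gb: float, node_ram_gb: float,
                 overhead: float = 0.25) -> int:
    """Nodes required to hold a dataset fully in memory, reserving a
    fraction of each node's RAM for the OS and runtime overhead."""
    usable_gb = node_ram_gb * (1 - overhead)
    return ceil(dataset_gb / usable_gb)

DATASET_GB = 12_000  # a hypothetical 12 TB in-memory working set

# The same capacity question answered at three node sizes: many small
# scale-out nodes versus a single large scale-up node.
for ram_gb in (256, 1_024, 16_384):
    print(f"{ram_gb:>6} GB/node -> {nodes_needed(DATASET_GB, ram_gb):>3} node(s)")
# ->    256 GB/node ->  63 node(s)
# ->  1,024 GB/node ->  16 node(s)
# -> 16,384 GB/node ->   1 node(s)
```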

For newer workloads like Big Data or Deep Analytics, the scale-up model is a compelling option that should be considered. Given the significant innovations in server design over the past few years, longstanding concerns about cost and scalability in the scale-up model no longer hold. With the unique advantages that newer scale-up systems offer, businesses today are realizing that a single scale-up server can process Big Data and other large workloads as well as, or better than, a collection of small scale-out servers in terms of performance, cost, power, and server density.

Speakers
Ferhat Hatay, Director of Strategy and Innovation at Fujitsu

Dr. Ferhat Hatay is Director of Strategy and Innovation at Fujitsu, driving new solution development in the areas of Cloud, Big Data, and the Internet of Things (IoT).

His experience includes serving in key roles at Sun Microsystems, Oracle, and HAL (a Fujitsu company, not the HAL 9000!), driving innovative, open, large-scale infrastructure solutions for high-performance and enterprise computing.

Ferhat started his career at NASA Ames Research Center, building infrastructures for large-scale computer simulations and Big Data analysis. He forever remains a rocket scientist. Follow him on Twitter: @FerhatSF.