Gary Hutchins
Director of Solutions Architecture

Happy New Year, everyone. While I am a little tardy on this one, it's something that is really important and has been on my mind since December, when HPE made some important announcements around Project Synergy. Until now, Synergy had just been a concept (initially introduced last June). This is the first actual product announcement based on it, laying the foundation for HPE's goal of 'Composable Infrastructure', and is something we are very excited about… for a number of reasons. Keep in mind that many of the details remain under NDA.

I am sure the first question is "What's Composable Infrastructure?" Fundamentally, it's the concept of having all infrastructure resources pooled, disaggregated, and completely controlled by a unified API. This includes compute, memory, disk devices AND the fabric.

So Synergy is FAR from being ‘just’ a next-generation integrated blade and fabric interconnect platform. It is the next step in the evolution of converged systems design and operations.

Synergy is designed from the ground up to deliver 'Infrastructure as Code' out of the box, with full support for mixed physical, virtual, and containerized workloads.

 

[Image: HPESynergy_Blog_1]

 

The hardware foundation is the HPE Synergy 12000 frame. These frames are highly flexible and can be configured dynamically with storage, fabric and compute, and multiple racks (each consisting of multiple frames) can be linked and viewed as a common pool of resources.

Synergy Composer then manages BIOS settings, templates, drivers, firmware, and more across the environment and all pooled resources. Composer also provides API access and control through the HPE OneView API. Lastly, Synergy Image Streamer gives Composer access to a repository of boot images that can be streamed to compute nodes on the fly, allowing for a stateless bare-metal environment.
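To give a feel for that API surface, here is a minimal sketch (Python, using the requests library) of authenticating to an appliance and listing the compute modules it manages over the OneView REST API. The hostname, credentials, and API version header are all placeholders; check your appliance documentation for the values your release expects.

```python
import requests

APPLIANCE = "https://oneview.example.com"  # placeholder appliance address
HEADERS = {"X-Api-Version": "200", "Content-Type": "application/json"}

# Authenticate: OneView issues a session token via /rest/login-sessions.
resp = requests.post(
    f"{APPLIANCE}/rest/login-sessions",
    json={"userName": "administrator", "password": "secret"},  # placeholders
    headers=HEADERS,
    verify=False,  # lab-only: skip TLS verification for a self-signed cert
)
resp.raise_for_status()
session_token = resp.json()["sessionID"]

# List the server hardware (compute modules) the appliance manages.
resp = requests.get(
    f"{APPLIANCE}/rest/server-hardware",
    headers={**HEADERS, "Auth": session_token},
    verify=False,
)
resp.raise_for_status()
for server in resp.json().get("members", []):
    print(server["name"], server.get("model"))
```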

Storage is provided via frame-resident HPE Synergy D3940 Storage Modules, connected through a non-blocking SAS fabric and presented as local/DAS or remote file, block, or object storage… or via external SAN-attached HPE 3PAR StoreServ systems. Both deployment options are fully pooled and API-enabled.

While most of the changes center on software, operations, and frame design/interconnect, the compute modules are enhanced as well, starting with the initial Synergy 480, 620, 660, and 680 Gen9 Compute Modules. With the ability to put up to 6TB of memory on a single module (the 680), these are designed with the new memory-intensive applications in mind.

So you may be thinking that this sounds like simply an enhancement of the current C7000. This is definitely not your traditional blade chassis (starting with its 16Tbps, photonics-upgradeable midplane), and HPE will even say it is not intended as an immediate replacement for your current traditional workloads or your C7000 frames. In fact, existing HPE C7000 (and storage, networking, and fabric) systems can already leverage much of the API integration available with OneView. Synergy goes far beyond that: its purpose in life is to provide integrated, 'drop-in-turn-on' deployment of compute, memory, storage, and fabric with the same level of API toolsets that you would get from a public cloud provider.
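To make 'drop-in-turn-on' a little more concrete: in OneView terms, composing a compute module means applying a server profile (BIOS settings, firmware baseline, fabric connections, storage) to bare hardware. Here is a rough sketch using the open-source python-hpOneView SDK; the template name, profile name, and connection details are all invented for illustration.

```python
from hpOneView.oneview_client import OneViewClient

# Placeholder connection details for a OneView/Composer appliance.
config = {
    "ip": "composer.example.com",
    "credentials": {"userName": "administrator", "password": "secret"},
}
client = OneViewClient(config)

# Fetch a server profile template ("web-tier" is a hypothetical name) and
# derive a new profile from it: BIOS, firmware baseline, fabric connections,
# and storage all come from the template.
template = client.server_profile_templates.get_by_name("web-tier")
profile = client.server_profile_templates.get_new_profile(template["uri"])

# Bind the profile to an available compute module and create it. Once the
# profile is applied, the hardware boots as a fully configured node.
# (A real script would handle the case where no hardware is free.)
hardware = client.server_hardware.get_all(filter="state='NoProfileApplied'")[0]
profile["name"] = "web-01"
profile["serverHardwareUri"] = hardware["uri"]
client.server_profiles.create(profile)
```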

Something else I find really exciting is the DevOps-focused partner ecosystem already in place (and growing), with support and integrations available for Docker/Swarm/Kubernetes, Chef, Puppet, and Ansible… to name a few.
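Those integrations all ride on the same OneView API underneath; an Ansible module or Chef resource essentially reduces to an idempotent check-then-apply call. A simplified illustration of that pattern (hypothetical helper, same SDK as above):

```python
def ensure_profile(client, name, template_name):
    """Idempotent 'ensure' semantics, the pattern config-management tools
    wrap: create the server profile only if it does not already exist."""
    existing = client.server_profiles.get_by_name(name)
    if existing:
        return existing  # already composed; nothing to do
    template = client.server_profile_templates.get_by_name(template_name)
    profile = client.server_profile_templates.get_new_profile(template["uri"])
    profile["name"] = name
    return client.server_profiles.create(profile)
```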

This is a massive step toward the "Software-Defined Datacenter" and the "Infrastructure as Code" capabilities that DevOps teams now demand.

(And with the photonics-ready backplane, there is already talk about memory/CPU decoupling as well as other potential future upgrades based upon the design criteria for "The Machine". Could get interesting.)

 

[Image: HPESynergy_Blog_3]

 

Look for much more information from VeriStor on this new platform in the near future.  Contact us for a full briefing or demo.

https://www.hpe.com/us/en/integrated-systems/synergy.html