Time to go all-in on Hyper-Convergence

May 19, 2017 | Categories: News | Tags: storage

As solid-state storage has supplanted the venerable hard drive as primary storage, we’ve come to realize a number of deficiencies in the RAID array model.

Compared with local storage, RAID arrays add milliseconds of delay to every I/O operation. That was acceptable when HDD access times ran to tens of milliseconds, but it becomes wholly inadequate when, for example, a local nonvolatile memory express (NVMe) SSD can deliver data within 100 microseconds. Meanwhile, rebuild times for failed HDDs grew so long that the odds of a second drive in the array failing during the rebuild became significant, eventually leading to data loss. RAID 6, with a second parity drive, tempered the issue for a while, but capacity increases beyond 4 TB made even dual-parity arrays loss-prone.
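
To put the rebuild problem in perspective, here is a back-of-the-envelope estimate of the rebuild window for a single drive. The 150 MB/s sustained rate is an assumption for an array still serving production I/O; real rebuilds are often slower.

```python
# Back-of-the-envelope RAID rebuild window (illustrative, assumed figures).

def rebuild_hours(capacity_tb: float, rate_mb_per_s: float = 150.0) -> float:
    """Hours needed to rewrite one drive's worth of data at a sustained rate."""
    capacity_mb = capacity_tb * 1_000_000   # decimal TB -> MB
    return capacity_mb / rate_mb_per_s / 3600

for tb in (4, 8, 10):
    print(f"{tb:>2} TB drive: ~{rebuild_hours(tb):.0f} hours of exposure per rebuild")
```

Every one of those hours is a window in which a second failure or an unrecoverable read error can take the array down.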

These inadequacies, among others, forced the storage industry to change its course, steering us toward hyper-convergence technology.

THE ROAD TO HYPER-CONVERGENCE

The logical response to the performance and reliability issues revealed by the deployment of flash was to shift from arrays, with their massive numbers of slow drives, to compact storage appliances holding just eight or 10 drives behind a single, nonredundant controller. Data integrity is ensured by replicating data across appliances rather than internally with RAID.

One benefit of the small appliance approach is that network performance can more easily be matched to the raw performance of the drives. This becomes very important as single NVMe drives push streaming rates into the multiple-gigabytes-per-second range, as they began to do in 2016.

As this was going on, storage software vendors started exploring new virtualization concepts, collectively known as software-defined storage (SDS), that unbundle storage services from actual storage platforms and run them in a general virtual instance pool. This development, which seeks to make storage an agile, scalable resource at the service level, corresponds strongly to server virtualization and orchestration already in place in the cloud.

Running virtual storage services in a storage appliance makes a lot of sense, as they all use some form of commercial off-the-shelf (COTS) platform as their controller. The realization that there was a good deal of spare compute capacity in the storage controller is what directly triggered the concept of hyper-convergence technology, along with the recognition that a compact storage appliance and a typical rack server are essentially identical in form and configuration. It was clearly time to merge compact storage appliances with rack servers, reducing hardware complexity and allowing a broader range of scaling on the storage side.

HYPER-CONVERGENCE SYSTEMS TODAY

Vendors typically build hyper-converged platforms as 2U rack units with an x64 server motherboard and a set of SSDs. These appliances are networked together so that all of the storage forms a common virtual pool. Creating the pool requires some secret sauce, usually a storage management suite that runs on every appliance and presents the storage as a virtual SAN.
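
A toy model makes the pooling idea concrete. The sketch below is purely illustrative (real virtual SAN suites add fault domains, caching tiers and quorum logic); it simply shows per-appliance drives being aggregated into one cluster-wide capacity view with copy redundancy.

```python
# Minimal, illustrative model of pooling appliance-local drives into one
# cluster-wide virtual pool. Names and structure are assumptions, not any
# vendor's actual API.
from dataclasses import dataclass, field

@dataclass
class Appliance:
    name: str
    drive_tb: float          # capacity of each local SSD, in TB
    drive_count: int = 8

    @property
    def raw_tb(self) -> float:
        return self.drive_tb * self.drive_count

@dataclass
class VirtualPool:
    replicas: int = 2                               # copy-redundancy factor
    appliances: list = field(default_factory=list)

    def add(self, appliance: Appliance) -> None:
        """The 'discovery' step: a new appliance simply joins the pool."""
        self.appliances.append(appliance)

    @property
    def usable_tb(self) -> float:
        raw = sum(a.raw_tb for a in self.appliances)
        return raw / self.replicas

pool = VirtualPool(replicas=2)
for i in range(4):
    pool.add(Appliance(name=f"node{i}", drive_tb=3.2))
print(f"{len(pool.appliances)} nodes, {pool.usable_tb:.1f} TB usable in the virtual SAN")
```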

Storage management automates discovery of new drives, making the scaling process simple. When a drive fails, the software recovers the cluster by copying data from other drives until the redundancy structure is rebuilt. These tools support both copy redundancy and erasure codes, the latter with some trade-offs worth noting.
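
The capacity trade-off between the two redundancy schemes is easy to quantify. The 8+2 erasure-code layout below is just an assumed example; products differ in the codes they use.

```python
# Raw capacity consumed per usable terabyte under each redundancy scheme
# (illustrative ratios only).

def replication_ratio(copies: int) -> float:
    """N-way copy redundancy stores N full copies of every byte."""
    return float(copies)

def erasure_ratio(data_shards: int, parity_shards: int) -> float:
    """A data+parity erasure code stores (data+parity)/data bytes per byte."""
    return (data_shards + parity_shards) / data_shards

print(f"3-way replication: {replication_ratio(3):.2f}x raw per usable TB")
print(f"8+2 erasure code : {erasure_ratio(8, 2):.2f}x raw per usable TB")
```

The catch with erasure codes is the rebuild path: reconstructing a lost 8+2 shard means reading from eight surviving nodes instead of copying from a single replica, which puts far more traffic on the cluster network during recovery.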

From a performance viewpoint, the drives local to any appliance deliver very high I/O rates and bandwidth internally. A set of eight NVMe drives can deliver tens of gigabytes per second and millions of IOPS in aggregate, for example. That is far more than a typical server can use, leaving plenty of bandwidth to be shared.
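
To see the gap between what an appliance holds internally and what its network can expose, here is a quick sanity check using assumed per-drive figures (roughly 3 GBps and 800,000 IOPS per NVMe drive) and a pair of 25 Gigabit Ethernet uplinks:

```python
# Rough per-appliance aggregate versus network-facing bandwidth
# (all per-drive and link figures are assumptions for illustration).

DRIVES_PER_NODE = 8
GBPS_PER_DRIVE = 3.0         # sustained streaming bandwidth per NVMe drive
IOPS_PER_DRIVE = 800_000     # small-block read IOPS per drive

internal_gbps = DRIVES_PER_NODE * GBPS_PER_DRIVE     # ~24 GB/s inside the node
internal_iops = DRIVES_PER_NODE * IOPS_PER_DRIVE     # ~6.4M IOPS inside the node

links, link_gbits = 2, 25                            # dual 25 GbE uplinks
network_gbps = links * link_gbits / 8                # ~6.25 GB/s on the wire

print(f"Internal : {internal_gbps:.0f} GB/s, {internal_iops / 1e6:.1f}M IOPS")
print(f"Network  : {network_gbps:.2f} GB/s exposed to the rest of the cluster")
```

Only a fraction of an appliance's internal performance can be pushed out over the wire, which makes the network design choices discussed next all the more important.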

Network access poses a challenge, however. An ideal network would add little additional latency, but, of course, we are nowhere close to this. A good plan for a hyper-convergence cluster is to use a fast network, such as multiple 10 or 25 Gigabit Ethernet links or even faster connections. This admittedly adds some cost, but it reduces data management complexity, since there is less need to carefully localize key data, and it lets all of the servers run faster.

Many hyper-convergence products now use Remote Direct Memory Access (RDMA) over Ethernet or InfiniBand networks. This adds considerable throughput, cuts host CPU overhead by as much as 90% and significantly reduces latency.
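
A rough latency budget shows why the fabric choice matters. The figures below are order-of-magnitude assumptions, not measurements:

```python
# Order-of-magnitude latency budget for a small remote read (assumed figures).
NVME_READ_US = 100      # local NVMe read, as cited earlier in the article
TCP_STACK_US = 80       # assumed kernel TCP/IP stack plus switch transit
RDMA_FABRIC_US = 10     # assumed RDMA fabric round trip

paths = {
    "local NVMe read":         NVME_READ_US,
    "remote read over TCP/IP": NVME_READ_US + TCP_STACK_US,
    "remote read over RDMA":   NVME_READ_US + RDMA_FABRIC_US,
}
for name, us in paths.items():
    print(f"{name:28s} ~{us:4d} microseconds")
```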

Why the traditional servers-plus-RAID architecture falls down

  • Can’t keep up with SSD
  • Fibre Channel-centric—not a cluster interface
  • SAN needs own admin and support
  • Can’t scale out to large clusters
  • RAID rebuild time too long
  • RAID doesn’t provide appliance-level data integrity/availability

THE CASE FOR HYPER-CONVERGENCE

There are two classes of compact storage appliances. One takes a “Lego” block approach for object storage. It is also used for point products such as virtual desktop infrastructure (VDI), where storage capacity is relatively small, or as a complete storage platform for remote offices. The second, more recent, form is more interesting within the general context of hyper-convergence technology.

It’s built to tackle the tasks of virtualizing application instances, storage services and software-defined network (SDN) services. The CPU is usually more powerful—possibly a dual-CPU configuration—and the amount of dynamic RAM (DRAM) is quite large. This allows each appliance to run many more instances, especially if Docker containers are fully supported by all three service classes.

Overall, the upside of hyper-convergence technology lies in ease of use. There is a single hardware box to purchase for both storage and servers, and network switching is much more cost-efficient thanks to the barebones nature of hyperconverged infrastructure (HCI) switching gear combined with SDN. This saves on sparing costs and simplifies installation, while HCI software usually comes preloaded to save even more startup time. Support involves a single supplier, which removes risk and reduces internal staffing needs.

HCI offerings come from all the major IT platform vendors, and most use a third-party virtual SAN tool (from Nexenta or SimpliVity, for example) for clustering. They may also include other features, such as management and provisioning tools, to bolster the offering. Most hyper-convergence technology vendors today limit the number of configurations they offer, with the aim of guaranteeing out-of-the-box operation and first-rate support.

The bottom line: There is little risk in moving down the hyper-convergence technology path today. Costs should prove lower than a la carte configurations, especially those built around traditional RAID, and there are many suppliers, ranging from the major vendors to startups. The evolution of software-defined everything will add many service options and take functionality to new levels over the next few years.

GETTING HCI RIGHT

There are some issues to be cognizant of when deploying hyper-convergence technology, however. We’ve touched on networking, definitely not a place to scrimp and save. Caching data, probably using NVDIMM as a DRAM extender, will likely become important in the near future.

Over time, you will need to extend any HCI cluster. With both servers and storage evolving at their fastest pace in decades, it’s certain that any upgrades will be heterogeneous relative to your existing appliances: faster CPUs, bigger memory, larger drives and so on. The software making the cluster work has to cope well with these expansions and handle the differences in resources properly.
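
One common way cluster software absorbs that heterogeneity is to weight data placement by each node's capacity. The sketch below is a simplified illustration of the idea, not any vendor's actual algorithm; real placement also considers fault domains, performance and current utilization.

```python
# Simplified capacity-weighted replica placement across a mixed-generation
# cluster (illustrative only).
import random

nodes = {                         # node name -> usable TB
    "gen1-a": 20, "gen1-b": 20,   # older, smaller appliances
    "gen2-a": 50, "gen2-b": 50,   # newer, larger appliances
}

def place_replicas(replicas: int = 2) -> list:
    """Pick distinct nodes with probability proportional to capacity."""
    chosen, candidates = [], dict(nodes)
    for _ in range(replicas):
        names = list(candidates)
        weights = [candidates[n] for n in names]
        pick = random.choices(names, weights=weights, k=1)[0]
        chosen.append(pick)
        del candidates[pick]      # never put two copies on the same node
    return chosen

# Over many placements, the larger gen2 nodes absorb proportionally more data.
tally = {n: 0 for n in nodes}
for _ in range(10_000):
    for node in place_replicas():
        tally[node] += 1
print(tally)
```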

The core cluster software is vendor-agnostic, requiring only COTS hardware to run. However, the major suppliers of HCI gear, such as Dell and Hewlett Packard Enterprise, dominate the market, so there is a real possibility of long-term lock-in, especially as new appliances arrive. To head this off, ask your vendor whether it allows products from multiple suppliers to pool together, just as the typical cloud does today.

Hyper-convergence technology is deficient for some use cases, of course. Products with GPUs are still outside HCI-approved configurations, which impacts big data and HPC needs. The HCI configurations described here are also overkill for many remote offices, where the added complexity of storage pooling may be unnecessary.

Using part of an HCI cluster for VDI makes sense, though, since this unifies hardware purchases and allows you to apply much the same resources as would be needed for a decent-sized traditional VDI setup while getting the benefits of a common architecture.

SECONDARY STORAGE

One question with hyper-convergence technology is what to do with older data, which is usually moved to secondary storage. You could add some bulk hard drives to each appliance and stretch their capacity considerably with compression and deduplication; a pair of 10 TB HDDs could then effectively add around 100 TB of secondary storage to each node, assuming roughly 5:1 data reduction. Alternatively, you could move data out to a networked secondary storage system (today, usually an object store).
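
The arithmetic behind that figure, with the 5:1 reduction ratio as the stated assumption:

```python
# Effective secondary-storage capacity from bulk HDDs plus data reduction.
# The 5:1 ratio is an assumption; real ratios depend heavily on the data.
raw_tb = 2 * 10              # a pair of 10 TB HDDs per node
reduction_ratio = 5          # combined compression + deduplication
print(f"{raw_tb} TB raw -> ~{raw_tb * reduction_ratio} TB effective per node")
```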

The choice between the two options in terms of performance is a wash, though simply adding some drives to empty slots in HCI appliances will probably be cheaper.

ALTERNATIVES TO HCI

There are, in fact, only two other viable approaches to a modern IT strategy besides HCI. One is to move storage completely to a public cloud. It’s possible to obtain a dedicated, private cloud-like space (using Virtual Private Clouds in Amazon Web Services, for example) that matches your data center in terms of security and data integrity. Most IT shops aren’t ready for a full transition to the cloud yet, however, and in-house facilities based on HCI may in fact have a lower total cost of ownership (TCO).

Benefits of hyper-convergence systems

  • Local storage: both instance store and persistent data storage are low-latency
  • Matches SSD performance needs
  • Can scale to very large clusters
  • Fewer platforms and fewer vendors to manage
  • Lego-like simplicity of integration
  • Inexpensive platform for storage compared with RAID arrays
  • Fits in with software-defined infrastructure
  • Extensible with future CPU and SSD architectures

The other alternative is to build a traditional server cluster with networked storage. Used as a cloud, however, these clusters run into I/O performance issues, even when all-flash arrays are used to boost networked storage performance. Latencies are always higher than those of local NVMe drives, which is pushing all-flash array vendors to deliver NVMe over Fabrics interfaces on their boxes, but even these will still be slower than local drives.

Networked storage complicates vendor management and generally increases TCO over hyper-convergence technology. Using networked storage for secondary storage is also more expensive.

THE EVOLUTION OF HCI

Two big trends in IT right now are massive performance improvements and a move toward compact packaging.

SSD performance continues to improve at a rapid pace. This means servers can do more with less, requiring fewer units for a given workload. The advent of storage-class memory (SCM) in the form of NVDIMMs is another game-changer. SCM acts both as a DRAM expander, allowing more instances per server, and as persistent memory, which will speed up applications by large factors as operating systems and compilers evolve to support it over the next 18 months. This will make HCI appliances much more powerful, especially when coupled with the multiplying effect of Docker containers on instance count.

These are relatively near-term improvements. Within a couple of years, variations of the Hybrid Memory Cube (HMC) architecture will bring DRAM and CPU much closer together. We’ll see CPUs with what amounts to a 16 GB or 32 GB L4 cache in 2017, while the remaining DRAM will be coupled over a much higher bandwidth serial connection scheme. There are initiatives under way to make all of this memory shareable across an HCI cluster, taking performance to a new plateau.

Meanwhile, SSDs are getting smaller and denser. 3D NAND technology will take over the market in 2017. The result is that tiny SSDs will have huge capacities, so expect 10 TB SSDs in the tiny M.2 form factor. A stack of 10 of these would fit in a 3.5-inch drive bay. At the bulk end of the spectrum, 100 TB 2.5-inch SSDs are already announced, though delivery dates are still up in the air.

The server engine using the HMC approach is also much more compact, since the CPU is delivered on a tiny module with its power system on board. Small drives and smaller server engines mean smaller systems. Most likely, 2018’s sweet spot will be either a 1/2U rack server or a simple high-density blade-chassis approach.

SO SHOULD YOU GO ALL-IN ON HYPER-CONVERGENCE SYSTEMS?

The answer, in short, is yes. As a way to get the lowest TCO, simplest (and fastest) installs, and a platform poised for software-defined infrastructure and fully orchestrated hybrid clouds, hyper-convergence technology looks good today and absolutely compelling over the next year or two.
