NVMe: The New Face of Super-Speed Storage?
Storage has hit a bottleneck. Experts knew it was coming — PCWorld recently noted that while solid state drives (SSDs) offer substantial input/output performance boosts over their spin-dependent counterparts, SATA and SAS solutions can’t keep up. It makes sense: SSDs resemble system RAM far more than traditional hard disks, and the growing gap in speed demanded a new interface. The answer: nonvolatile memory express (NVMe). Here’s what it means for storage, servers and the future of enterprise networking.
Hard and Soft
Over the past few years, software-defined networking (SDN) has become a tech-media buzzword as companies seek to improve the link between servers and hyperconverged solutions or storage appliances. But with SSDs now replacing traditional spinning drives for many companies and the market heating up, Network Computing notes that several manufacturers now offer a big jump in SSD capacity, up to 60 or even 100 TB in ever-smaller devices. The SATA and SAS protocols, meanwhile, have maxed out, making them the limiting factor in storage writing and retrieval.
To address the issue, companies turned to PCI Express (PCIe) hardware, which offers roughly 4 Gbps of throughput per lane, enough for most SSD solutions. But even these solutions are now showing strain, prompting companies to develop alternatives that offer “a new level of performance,” according to Network Computing.
Why? There are several key reasons. First, NVMe needs only a single message for a 4-KB transfer instead of the typical two, which significantly reduces protocol overhead. More important, however, is the ability to process multiple queues instead of just one. Multiple doesn’t mean just three or four, either: NVMe supports up to 65,536 queues simultaneously, PCWorld noted.
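To put that queue count in perspective, here is a back-of-envelope comparison of command-queue capacity, using the published spec limits (AHCI, the protocol behind SATA, allows one queue of 32 commands; NVMe allows up to 65,536 queues, each 65,536 commands deep). This is a toy illustration of the arithmetic, not driver code:

```python
# Toy comparison of command-queue capacity: AHCI (SATA) vs. NVMe.
# Figures are the published spec limits; this is an illustration, not driver code.

ahci_queues, ahci_depth = 1, 32            # AHCI: one queue, 32 commands deep
nvme_queues, nvme_depth = 65_536, 65_536   # NVMe: up to 64K queues, each 64K deep

ahci_outstanding = ahci_queues * ahci_depth
nvme_outstanding = nvme_queues * nvme_depth

print(f"AHCI outstanding commands: {ahci_outstanding}")    # 32
print(f"NVMe outstanding commands: {nvme_outstanding:,}")  # 4,294,967,296
print(f"Ratio: {nvme_outstanding // ahci_outstanding:,}x")
```

The point of all that parallelism: each CPU core can get its own queue pair, so cores never contend for a single command channel the way they do over SATA.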
NVMe: The New Network
While the introduction of super-fast storage-to-server communication solves one problem, it also shifts the focus to another: corporate networks. According to Datanami, networks are the new storage bottlenecks, thanks to widespread adoption of SSDs. They’re much faster than HDDs by every measure — latency, bandwidth and IOPS (input/output operations per second) — making it easy for multiple SSDs to overwhelm a network and cap out a company’s bandwidth, which in turn limits the positive impact that cutting-edge storage can have. NVMe devices can actually make the problem worse: While it takes 250 SATA HDDs to max out a 100-Gbps Ethernet connection, it takes only 24 SATA SSDs, 10 SAS SSDs or a mere 4 NVMe SSDs to completely fill the pipeline.
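Those drive counts are easy to sanity-check. Inverting the Datanami figures gives the per-drive throughput each count implies on a 100-Gbps link — and the results line up with typical real-world drive speeds:

```python
# Back-of-envelope check: invert the Datanami drive counts to find the
# per-drive throughput each implies on a 100-Gbps Ethernet link.
# The counts are from the article; the per-drive rates fall out of the math.

link_gbps = 100
counts = {"SATA HDD": 250, "SATA SSD": 24, "SAS SSD": 10, "NVMe SSD": 4}

for name, n in counts.items():
    gbps = link_gbps / n
    mb_s = gbps * 1000 / 8  # decimal units: 1 Gbps = 125 MB/s
    print(f"{name}: ~{gbps:.1f} Gbps (~{mb_s:.0f} MB/s) per drive")
```

The implied rates — about 50 MB/s for a hard disk, roughly 520 MB/s for a SATA SSD (near the SATA III ceiling) and around 3 GB/s for an NVMe drive over PCIe — match the ballpark figures vendors publish, which suggests the article’s saturation math holds up.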
One solution to network capping is the development of nonvolatile memory express over fabric (NVMeF). This separates NVMe from its hardware roots, moving it out of the PC and into other channels. Some experts argue that PCIe is its own fabric solution thanks to its low overhead and low latency, but network capping remains an issue.
Another option taps existing Ethernet fabrics using remote direct memory access (RDMA), which allows the transfer of in-memory data with virtually no CPU intervention; while performance depends on the type of SSD used, as well as server and storage device architectures, there’s huge potential here to increase overall throughput without increasing latency.
So, what’s the verdict? With companies looking to shift from HDDs to SSDs and empower the move to SDN, NVMe offers an incredible jump in performance — so good, in fact, that current SATA, SAS and network bandwidths simply aren’t up to the challenge. With the development of NVMeF and the rise of 50-plus-TB SSDs, this technology may quickly become the new face of super-speed storage.