Why We Built the World’s First Ethernet-Connected SSDs: Part 1

We are all familiar with ubiquitous Ethernet, the backbone of networked infrastructure. But why put an Ethernet interface on flash drives?

Why an Ethernet Interface on Flash Drives?
At KIOXIA, we run the fabs that manufacture a third of all the flash consumed on the planet1. Think about that for a second. That flash goes into consumer devices like laptops and smartphones, edge devices, cellular towers, IoT devices, cars, large and small enterprises, server and storage systems, network-connected devices, and large and small data centers around the world. Such a large and diverse customer base allows us to understand current and future data storage needs across some of the largest businesses in the world. We are able to see broad needs and trends, and the key capabilities required to enable, sustain, and grow infrastructure to meet the ever-increasing demand for data storage and analytics, a demand that only seems to accelerate with each passing day. We follow these trends and try to look 5-10 years ahead to project how these requirements may morph down the road, so that we can enable our customers to be ready for these future changes.

One such change has been the explosion of large-scale data manipulation: storing and retrieving large amounts of data at high speed, and moving those large chunks of data back and forth across the network with minimal latency and maximum scalability. In an enterprise or data center, this scalability translates directly into additional data storage and the need to access it faster, and to do so predictably with low latency.

The Current Problem with Data Storage Scaling
If you own an older version of one of the most popular Android phones on the market, it likely has an SD card slot for memory expansion. The lowest-capacity model (usually 32GB) is the cheapest. As storage fills up (usually with high-resolution photos and videos), you can add a 128GB SD card to scale data storage cost-effectively. However, if you own a newer Samsung Galaxy Note10, this simple $20 upgrade is not possible. You need to buy a whole new 128GB phone for an additional $200 (or, alternatively, use cloud storage, which we will temporarily ignore as a possibility). The customer is therefore forced to pay $200 for a storage upgrade that previously cost only $20.

This is exactly what is happening in the enterprise and data center world. Many of our customers are forced to buy the proverbial ‘new phone’ every time they want to scale their storage infrastructure. They must buy additional CPU cores, additional memory, and an entire peripheral infrastructure that they may not really need. Options for cost-effectively scaling storage alone are very limited, and even those options carry unanswered questions about their future scalability and cost-effectiveness.

We built Ethernet SSDs to solve this exact problem. Our goal is to unleash the full performance of flash storage without the burden of redundant peripheral infrastructure costs. By leveraging the tremendous advances in Ethernet technology, we come significantly closer to future-proofing against the interface transition hurdles the flash storage industry is likely to face as it attempts to scale storage.

Efficient Scaling with Ethernet SSDs
As mentioned above, if you need to scale data storage capacity only (disaggregated storage scaling), Ethernet SSDs offer a cleaner path to an efficient storage scaling architecture. As long as the host system or head node can centrally manage the added storage, this architecture is an efficient option for scaling data storage almost indefinitely. See the pictorial representations below.

Current Approach with Conventional SSDs

[Figure: conventional storage scaling]

Ethernet SSD-Enabled Approach

[Figure: storage scaling using Ethernet SSDs]

The latter approach in the illustration above enables an efficient storage scaling architecture by adding just what is needed for additional data storage (SSDs and switches). It does not force customers to add CPU cores, DRAM, or other expensive peripherals when all they need is more flash storage. While it may not suit every storage architecture, systems optimized to manage added storage EBOFs (Ethernet Bunch of Flash enclosures) through an Ethernet switch will see significant architectural efficiencies, improved throughput, the elimination of system bottlenecks, and reduced system costs.
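To make the head-node side of this concrete, here is a minimal sketch of how a host might attach the storage exposed by Ethernet SSDs, assuming the drives speak NVMe-oF over TCP and the host runs Linux with the standard nvme-cli tool installed. The IP addresses and enclosure layout below are hypothetical, purely for illustration; this is not KIOXIA’s actual tooling.

    # A minimal sketch: attach Ethernet SSDs that expose NVMe-oF over TCP,
    # using the standard Linux nvme-cli tool (run as root).
    import subprocess

    # Hypothetical addresses of Ethernet SSDs (or EBOF slots) on the storage network
    ETHERNET_SSD_ADDRS = ["192.168.10.21", "192.168.10.22"]
    NVME_TCP_PORT = "4420"  # IANA-assigned default port for NVMe/TCP

    def attach(addr: str) -> None:
        # Ask the drive which NVMe subsystems it exposes on the network
        subprocess.run(["nvme", "discover", "-t", "tcp", "-a", addr,
                        "-s", NVME_TCP_PORT], check=True)
        # Connect to everything discovered at that address; each namespace
        # then appears to the head node as an ordinary local block device
        subprocess.run(["nvme", "connect-all", "-t", "tcp", "-a", addr,
                        "-s", NVME_TCP_PORT], check=True)

    for addr in ETHERNET_SSD_ADDRS:
        attach(addr)

Under these assumptions, scaling out is just a matter of cabling in more drives or enclosures and adding their addresses to the list; no new CPU cores or DRAM are required on the head node.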

But what about the future, and connector compatibility going forward? Stay tuned: the second part of this blog will address the transition to SFF-TA-1002, among other things. Be sure to check back later this month!


Notes:

World’s first claim as of 9/22/20

1 As of date of blog posting

PCI Express and PCIe are registered trademarks of PCI-SIG.

NVM Express and NVMe are registered trademarks of NVM Express, Inc.

NVMe-oF and NVMe-MI are trademarks of NVM Express, Inc.

Product image may represent a design model.

Disclaimer
The views and opinions expressed in this blog are those of the author(s) and do not necessarily reflect those of KIOXIA America, Inc.
