Intel Ethernet 800 Series To Support NVMe over TCP, PCIe 4.0
by Billy Tallis on September 24, 2019 3:30 PM EST
Today at the SNIA Storage Developer Conference, Intel is sharing more information about their 100Gb Ethernet chips, first announced in April and due to hit the market next month. The upcoming 800 Series Ethernet controllers and adapters will be Intel's first 100Gb Ethernet solutions, and also feature expanded capabilities for hardware accelerated packet processing. Intel is now announcing that they have implemented support for the TCP transport of NVMe over Fabrics using the Application Device Queues (ADQ) technology that the 800 Series is introducing.
NVMe over Fabrics has become the SAN protocol of choice for new systems, allowing for remote access to storage with just a few microseconds of extra latency compared to local NVMe SSD access. NVMeoF was initially defined to support two transport protocols: Fibre Channel and RDMA, which can be provided by Infiniband, iWARP and RoCE capable NICs. Intel already provides iWARP support on their X722 NICs, and RoCEv2 support was previously announced for the 800 Series. However, in the past year much of the interest in NVMeoF has shifted to the new NVMe over TCP transport specification, which makes NVMeoF usable over any IP network without requiring high-end RDMA-capable NICs or other niche network hardware. The NVMe over TCP spec was finalized in November 2018 and opened the doors to much wider use of NVMe over Fabrics.
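Because the transport rides on ordinary TCP sockets, a software initiator needs nothing beyond the mainline `nvme-tcp` driver (merged in Linux 5.0) and the `nvme-cli` tool. A minimal sketch of connecting to a target, where the address, port, and subsystem NQN are placeholders:

```sh
# Load the NVMe/TCP initiator driver
modprobe nvme-tcp

# Discover subsystems exported by a remote target (address/port are examples)
nvme discover -t tcp -a 192.0.2.10 -s 4420

# Connect; the remote namespace then appears as an ordinary /dev/nvmeXnY device
nvme connect -t tcp -a 192.0.2.10 -s 4420 -n nqn.2019-09.example:subsys0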
Software-based NVMe over TCP implementations can use any network hardware, but for the high-performance applications that were the original focus of NVMe over Fabrics, hardware acceleration is still required. Intel's ADQ functionality can be used to provide some acceleration of NVMe over TCP, and they are contributing code to support this in the Linux kernel. This makes the 800 Series Ethernet adapters capable of using NVMe over TCP with latency almost as low as RDMA-based NVMe over Fabrics. Intel has also announced that Lightbits Labs, one of the major commercial proponents of NVMe over TCP, will be adding ADQ support to their disaggregated storage solutions.
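The queue groups and traffic classes behind ADQ are configured on the NIC side (via `tc` and driver tooling), but the application half of the picture uses plain Linux socket options: a socket priority to map a connection onto a traffic class, and busy polling to trade CPU time for lower receive latency. A hedged Python sketch of that application side; the priority value and busy-poll budget below are illustrative, not Intel's documented ADQ settings:

```python
import socket

# SO_PRIORITY tags the socket's traffic with a class that queueing disciplines
# (and, on ADQ-configured NICs, dedicated hardware queue sets) can key on.
SO_PRIORITY = getattr(socket, "SO_PRIORITY", 12)  # 12 is the Linux value
SO_BUSY_POLL = 46  # Linux-only; not exposed by the socket module

def make_low_latency_socket(priority=3, busy_poll_us=50):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, SO_PRIORITY, priority)
    try:
        # Busy polling spins on the receive path instead of sleeping on
        # interrupts; setting it typically requires CAP_NET_ADMIN.
        s.setsockopt(socket.SOL_SOCKET, SO_BUSY_POLL, busy_poll_us)
    except (PermissionError, OSError):
        pass  # fall back to interrupt-driven receive
    return s
```

On hardware without ADQ these options are harmless hints; on an 800 Series NIC with a matching `tc` configuration, the priority-tagged flows would land on the reserved queue set.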
Unrelated to NVMe over Fabrics, Intel has also announced that Aerospike 4.7 will be the first commercial database to make use of ADQ acceleration, and Aerospike will be publishing their own performance measurements showing improvements to throughput and QoS.
The Intel Ethernet Controller E810 and four 800 Series Ethernet adapters will be available from numerous distributors and OEMs over the next several weeks. The product brief for the E810 controller has been posted, and indicates that it supports up to a PCIe 4.0 x16 host interface—to be expected from a 100Gb NIC, but not something Intel PR is keen to highlight while their CPUs are still on PCIe 3.0.
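The back-of-envelope math shows why: a 100GbE port moves 12.5 GB/s per direction, while a PCIe 3.0 x16 link tops out around 15.75 GB/s before transaction-layer overhead, leaving little headroom; PCIe 4.0 doubles that. A quick check:

```python
def pcie_gbs(gt_per_s: float, lanes: int) -> float:
    """Per-direction PCIe bandwidth in GB/s, after 128b/130b line encoding
    but before transaction-layer packet overhead."""
    return gt_per_s * lanes * (128 / 130) / 8

gen3_x16 = pcie_gbs(8.0, 16)   # ~15.75 GB/s
gen4_x16 = pcie_gbs(16.0, 16)  # ~31.51 GB/s
eth_100g = 100 / 8             # 12.5 GB/s per direction for 100GbE
```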
Comments
edzieba - Wednesday, September 25, 2019
"Why is the industry afraid of investing in 2.5/5G devices?"
Because the consumer market for them is utterly minuscule. There are vanishingly few home use cases where more than 1Gbps is actually necessary, and the prices for even 'cheap' 2.5/5G kit are still many times (10x or more) the price of the commodity network gear people purchase in volume, and that companies like ISPs deploy at volume. That leaves only the slice of the home enthusiast market that wants more than 1Gbps but isn't willing to go to 10Gbps.
mark625 - Wednesday, September 25, 2019
You have to attack the chicken-and-egg problem from one end or the other. How many years was Intel churning out PC chipsets with gigabit support before the first home gear appeared to take advantage of it? I seem to recall it was a good 4 or 5 years. Back then everyone said that gigabit switches were too expensive, too hot, too noisy, only for business, yada yada. Sound familiar? Now anything limited to 100M would be laughed out of the room.
If Intel has rights to (or owns) multi-gig IP, they should damn well start putting it in every chipset they put out. Same for AMD, Broadcom, or anyone else who puts Ethernet into their chips. A 250% to 500% link speed improvement, that works over existing cabling, for minimal cost, should be an obvious feature to support.
edzieba - Thursday, September 26, 2019
Intel was putting gigabit into their chipsets for the enterprise market, where gigabit core switches and routers were available (and provided a definite increase in performance for remote storage and remote sessions). The same is absolutely not the case for 2.5G/5G: no business is swapping out its network kit for that capability; all the capacity increases are happening from the switch 'upwards', where 10G and up are the new norm.
Putting 2.5G/5G into chipsets inflates the cost, but there's no necessity for it and little to no uptake.
Icehawk - Wednesday, October 2, 2019
Yes, but... we could actually use the bandwidth increase that came from 100M to 1G. Other than direct file transfers, when would you use more than 1G of bandwidth? Hard drives can't keep up with it, and most people have 50M or less internet connections. I'm all for overkill, but we seem to be a long way from "needing" more than 1G for normal home use, and as mentioned the average user is WiFi all around. Enterprise... different story of course.
Kevin G - Wednesday, September 25, 2019
A bit of an oddity, but Intel has leveraged 2.5 Gbit Ethernet on various Atom chips for years now. The catch is that the 2.5 Gbit interface adheres to the backplane spec, not Cat cabling. This was mainly for industrial/embedded usage.
Phynaz - Tuesday, September 24, 2019
You get dumber every day.
Samus - Wednesday, September 25, 2019
Getting Lightbits onboard is good... that pretty much guarantees full investment at EMC. Love the cute heatsinks on the HBA ports, too. I've noticed a lot of those media converters run hot af, and I can only imagine that'll be more of an issue as things go beyond 10Gb, so that's a creative solution.
jabber - Wednesday, September 25, 2019
Does it still grind to Kbps speeds when it hits tens of thousands of micro files? Oh sure, you can transfer 8K video super fast, but back in the real world...
This is the issue we need to address in storage: not raw total bandwidth, but how we deal with modern software's need to populate itself with millions of micro files.
Billy Tallis - Wednesday, September 25, 2019
Files only exist at a higher level of abstraction than NVMe, so doing NVMe over Fabrics doesn't change much about that issue.
Dug - Wednesday, September 25, 2019
What you are describing is an application-layer issue due to start/stop operation, probably because you used Windows Explorer to try to copy files.
In the real world, this is for servers running VMs with hundreds of connections, and for storage connections. Not for transferring a lot of tiny files.