Intel Ethernet 800 Series To Support NVMe over TCP, PCIe 4.0
by Billy Tallis on September 24, 2019 3:30 PM EST
Today at the SNIA Storage Developer Conference, Intel is sharing more information about their 100Gb Ethernet chips, first announced in April and due to hit the market next month. The upcoming 800 Series Ethernet controllers and adapters will be Intel's first 100Gb Ethernet solutions, and also feature expanded capabilities for hardware accelerated packet processing. Intel is now announcing that they have implemented support for the TCP transport of NVMe over Fabrics using the Application Device Queues (ADQ) technology that the 800 Series is introducing.
NVMe over Fabrics has become the SAN protocol of choice for new systems, allowing remote access to storage with just a few microseconds of extra latency compared to local NVMe SSD access. NVMeoF was initially defined to support two transport protocols: Fibre Channel and RDMA, which can be provided by InfiniBand, iWARP, and RoCE capable NICs. Intel already provides iWARP support on their X722 NICs, and RoCEv2 support was previously announced for the 800 Series. However, in the past year much of the interest in NVMeoF has shifted to the new NVMe over TCP transport specification, which makes NVMeoF usable over any IP network without requiring high-end RDMA-capable NICs or other niche network hardware. The NVMe over TCP spec was finalized in November 2018 and opened the door to much wider use of NVMe over Fabrics.
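At the wire level, an NVMe/TCP connection starts with the host sending an Initialize Connection Request (ICReq) PDU over an ordinary TCP socket, which is what lets it run on any IP network. As a rough illustration of how simple the framing is, here is a sketch of building that 128-byte PDU in Python; the field offsets follow my reading of the NVMe/TCP 1.0 layout and are not taken from the article, so treat it as illustrative rather than authoritative:

```python
import struct

def build_icreq(maxr2t: int = 0, header_digest: bool = False,
                data_digest: bool = False) -> bytes:
    """Sketch of an NVMe/TCP Initialize Connection Request (ICReq) PDU.

    Layout assumed from the NVMe/TCP 1.0 spec: an 8-byte common header
    followed by a PDU-specific header, padded to 128 bytes total.
    """
    # Common header: type, flags, header length, data offset, total length
    pdu_type = 0x00          # ICReq PDU type
    flags = 0x00
    hlen = 128               # header length in bytes
    pdo = 0                  # no data portion, so no data offset
    plen = 128               # total PDU length (header only)
    common = struct.pack('<BBBBI', pdu_type, flags, hlen, pdo, plen)

    # PDU-specific header: format version, data alignment, digests, MAXR2T
    pfv = 0                  # PDU format version 0
    hpda = 0                 # host PDU data alignment
    dgst = (0x01 if header_digest else 0) | (0x02 if data_digest else 0)
    psh = struct.pack('<HBBI', pfv, hpda, dgst, maxr2t)

    pdu = common + psh
    return pdu + bytes(128 - len(pdu))   # pad reserved bytes to 128

icreq = build_icreq(maxr2t=3)
print(len(icreq), hex(icreq[0]))
```

In practice none of this is hand-rolled: on Linux, nvme-cli performs the handshake, e.g. `nvme connect -t tcp -a <target-ip> -s 4420 -n <subsystem-nqn>`.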
Software-based NVMe over TCP implementations can use any network hardware, but for the high-performance applications that were originally the focus of NVMe over Fabrics, hardware acceleration is still required. Intel's ADQ functionality can be used to provide some acceleration of NVMe over TCP, and they are contributing code to support this in the Linux kernel. This makes the 800 Series Ethernet adapters capable of using NVMe over TCP with latency almost as low as RDMA-based NVMe over Fabrics. Intel has also announced that Lightbits Labs, one of the major commercial proponents of NVMe over TCP, will be adding ADQ support to their disaggregated storage solutions.
Unrelated to NVMe over Fabrics, Intel has also announced that Aerospike 4.7 will be the first commercial database to make use of ADQ acceleration, and Aerospike will be publishing their own performance measurements showing improvements to throughput and QoS.
The Intel Ethernet Controller E810 and four 800 Series Ethernet adapters will be available from numerous distributors and OEMs over the next several weeks. The product brief for the E810 controller has been posted, and indicates that it supports up to a PCIe 4.0 x16 host interface—to be expected from a 100Gb NIC, but not something Intel PR is keen to highlight while their CPUs are still on PCIe 3.0.
- Intel Columbiaville: 800 Series Ethernet at 100G, with ADQ and DDP
- Intel’s Enterprise Extravaganza 2019: Launching Cascade Lake, Optane DCPMM, Agilex FPGAs, 100G Ethernet, and Xeon D-1600
- NVIDIA To Acquire Datacenter Networking Firm Mellanox for $6.9 Billion
- Western Digital to Exit Storage Systems: Sells Off IntelliFlash Division
- Marvell at FMS 2019: NVMe Over Fabrics Controllers, AI On SSD
thomasg - Tuesday, October 1, 2019 - link
The first generations of 10GBASE-T chips were actually produced on the 130 nm and later 90 nm nodes, the same nodes that had been used for actual CPUs two process generations earlier.
AFAIK Intel has moved to 65 nm (also a node known from CPU production), where they still had some issues.
Aquantia however uses the now-available and ubiquitous 28 nm facilities (I assume from former GPU production lines) and does seem to have gotten the power consumption to acceptable levels.
I'm fairly certain that Intel will be able to spare some 14 nm capacity in the near future and will likely move the NICs there.
Aquantia certainly made 10GBASE-T reasonably priced, and it will only get better.
The move of 10 Gbit/s into the mainstream isn't far away anymore.
Mid-range server boards already include it quite often, and I expect that we will see it becoming common in HEDT in 2020.
bcronce - Wednesday, September 25, 2019 - link
I went from a $300 Netgear router+AP to a $100 Ubiquiti AP and my latency was cut in half, signal strength went up, and packet loss and dropouts disappeared. I thought WiFi just sucked. My Roku Express, located several walls and a floor away, kept losing signal and buffering. The ISPs have been pushing gateways with built-in WiFi, which has increased the noise. It was getting to be too much to deal with. I gave Ubiquiti a try because I heard they were "enterprise" grade instead of "lemon" grade. Hoooolllllyyyy crap. It's a difference. Or at least it has been for me.
Bp_968 - Tuesday, September 24, 2019 - link
Why should they invest in 2.5/5G at all? 10Gb gear is available cheap on eBay and has been in the enterprise market for a decade or more now. They should be able to very cheaply put a few 10Gb ports on a switch or high-end router. I set up a 10Gb *fiber* network at home nearly 10 years ago for cheap (InfiniBand is even cheaper). It's just crazy we haven't seen 10Gb at least for SOHO stuff like Drobo and similar storage devices.
It's so far behind now that with gigabit internet I can upload to cloud storage like Amazon Drive almost as quickly as I could upload to local NAS storage!
Assimilator87 - Tuesday, September 24, 2019 - link
After reading the wikis for these standards, it seems to be due to the cabling requirements of 10GBASE-T. Clients with large infrastructures already wired with Cat 5e can still use the intermediate speeds without rewiring everything.
Samus - Wednesday, September 25, 2019 - link
This. A lot of cabling obstacles leave 5Gb as a good middle ground. 10Gb requires ideal cable quality and short distances. I often see a lot of installations handshake back down to 5Gb, and that's often the best that can be done within a budget without pulling new cable throughout an office.
In the end that's still 5x the throughput, so if you are just looking to speed up loading a 400MB QuickBooks file or something, there will be virtually no noticeable difference between 5Gb and 10Gb.
mooninite - Wednesday, September 25, 2019 - link
I wish people would stop saying "10Gb is cheap! Just buy used!" I don't want a 48-port switch that will sound like a 747 in my office. That is *not* a solution.
brunis.dk - Wednesday, September 25, 2019 - link
I doubt you need 48 ports in your office? ...Unless your office is in the server room? Mellanox ConnectX-3 adapters can be had on eBay now from $22, so yeah, even 40/56 Gbit is cheap. Good for a point-to-point link with your local SAN/NAS/whatever.
Namisecond - Wednesday, September 25, 2019 - link
Client adapters may be cheap, but what about the switches? I mean real switches with configurable ports, 16-port minimum.
name99 - Wednesday, September 25, 2019 - link
Don't be an idiot. His entire point is that he CANNOT BUY a home-appropriate 10G (or hell, 2.5G) switch.
They only come in industrial models -- LOTS of ports, big, strong fans, rack mounted.
If you know different, point us to a home-appropriate 10G switch, say 5 ports, no fan, basically size of a paperback book -- just like a 1G switch...
The point is not the inability to buy NICs (or, if you're living in the 21st century, TB or USB-C adaptors), the point is the SWITCH!
mark625 - Wednesday, September 25, 2019 - link
The closest I have found is the Netgear MS510TX ($200) and MS510TX-PP ($350) switches.
Each has one SFP+ 10G/1G port, plus RJ45 ports: one 10G/5G/2.5G/1G/100M, two 5G/2.5G/1G/100M, two 2.5G/1G/100M, and four 1G/100M/10M.
The MS510TX-PP version has PoE+ available on all eight of the RJ45 ports.
I don't have one yet, but I've been looking at a purchase in the near future. Neither one is exactly quiet, with the non-PoE version rated at 21 dB and the PoE version at 28.8 dB. But they do fit in the small/home business price range.
Here's the link: https://www.netgear.com/business/products/switches...