If there is one thing I like about ASRock, it is their willingness to do something different in a market where differentiation is increasingly difficult. One of those elements is the Extreme11 series, which uses an LSI RAID controller to provide more SAS/SATA ports on the highest end model. Today we have the X99 Extreme11 in for review.

ASRock X99 Extreme11 Overview

Our last review of an Extreme11 model was back in the X79 era, featuring the six SATA ports from the PCH and eight from the bundled LSI 3008 onboard controller. Our sample back then used eight PCIe lanes for the controller and achieved 4 GBps maximum sequential read and write speeds with an eight-drive RAID-0 array of SF-2281 SSDs. Between the X79 and X99 models came the Z87 Extreme11/ac, which used the same LSI controller but bundled it with a port multiplier, giving sixteen SAS/SATA ports plus the six from the chipset for 22 in total. With the X99 Extreme11 in this review, we get the same LSI 3008 controller without the multiplier, which adds eight ports to the ten from the PCH for eighteen in total.
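
As a quick sanity check on that number, sequential throughput in a RAID-0 array scales roughly with drive count until the controller or its host link becomes the limit. A minimal sketch, assuming roughly 500 MB/s per SF-2281-class SATA SSD (our assumption for illustration, not a measured figure):

```python
# Rough RAID-0 sequential scaling, ignoring controller and filesystem overhead.
# The ~500 MB/s per-drive figure is an assumption for SF-2281-class SATA SSDs.
drive_seq_mbps = 500
drives = 8

array_seq_mbps = drive_seq_mbps * drives   # ~4000 MB/s, in line with the ~4 GBps measured
print(f"Estimated eight-drive RAID-0 sequential: ~{array_seq_mbps} MB/s")
```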

One of the criticisms of the range is the lack of useful hardware RAID modes on the LSI 3008. It only offers RAID 0 and 1 (plus 1E and 10), with no scope for RAID 5/6. This is partly because the controller has no cache (or at best a very small one) to help manage such an array. ASRock's line is that this comes down to controller cost and implementation complexity, and that users who require these modes should use a software RAID solution. Users who want a hardware solution will have to buy a controller card that supports it, and ASRock is keen to point out that the Extreme11 range has plenty of PCIe bandwidth to handle one.
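
To illustrate why RAID 5 asks more of a controller (or of the host CPU under software RAID) than plain striping or mirroring, here is a minimal sketch of the per-stripe parity work involved. This is purely illustrative Python with made-up data, not anything ASRock or LSI ships:

```python
from functools import reduce

def raid5_parity(blocks):
    """XOR the data blocks of a stripe to produce its parity block.
    A software RAID layer has to do this for every stripe written,
    which is the work a cache-less controller leaves to the host CPU."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

# Hypothetical 4 KB blocks from three data drives in a four-drive array
data = [bytes([i]) * 4096 for i in (0x11, 0x22, 0x33)]
parity = raid5_parity(data)

# If one data block is lost, XOR of the survivors plus the parity rebuilds it
rebuilt = raid5_parity([data[0], data[2], parity])
assert rebuilt == data[1]
```

RAID 6 layers a second, more expensive parity calculation on top of this, which is where dedicated cache and silicon start to earn their keep.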

The amount of PCIe bandwidth brings up another interesting element of the Extreme11 range. ASRock feels that its high end motherboards must support four-way GPU configurations, preferably in an x16/x16/x16/x16 lane allocation. In order to do this, while also feeding the LSI 3008 controller that needs eight lanes, the X99 Extreme11 carries two PLX8747 PCIe switches. We covered the PLX8747 during its prominent use in the Z77 era, but the short summary is that, due in part to its FIFO buffer, it can multiplex 8 or 16 PCIe lanes into 32. Thus for the X99 Extreme11 and its dual PLX8747 arrangement, each PLX switch takes 16 lanes from the CPU and provides two PCIe 3.0 x16 slots, for four PCIe 3.0 x16 slots overall. The final eight lanes from the CPU go to the LSI controller, accounting for all 40 lanes from the processor. (28-lane CPUs behave a little differently; see the review below.)

As you might imagine, two PLX8747 switches and an LSI controller onboard do not come cheap, which is why the Extreme11 is one of the most expensive X99 motherboards on the market at $630+, bested in this competition only by the ASRock X99 WS-E/10G, which comes with a dual port 10GBase-T controller for $670. Aside from the four PCIe 3.0 x16 slots and 18 SATA ports, the Extreme11 also comes with support for 128GB of RDIMMs, LGA2011-3 Xeon compatibility, dual Intel network ports, upgraded audio and dual PCIe 3.0 x4 M.2 slots. The market ASRock is aiming for with this board has high storage and compute requirements for a workstation - typically with these builds the motherboard cost is not that important, but the feature set is. That makes the X99 Extreme11 an entertaining product in an interesting market segment.

Visual Inspection

With the extra SATA ports and controller chips onboard, the Extreme11 expands into the E-ATX form factor, which means an extra inch or so in the horizontal dimension. Aside from the big block of SATA ports, nothing looks untoward on the board: an extended heatsink runs from the power delivery down to the chipset heatsink, which gets an added fan to deal with the two PLX8747 chips and the LSI 3008 controller.

The socket area is fairly cramped, keeping to Intel’s specifications, with ASRock’s Super Alloy based power delivery packing in twelve phases in an example of over-engineering. The DRAM slots are color coded, with the black slots to be occupied first. Within the socket area there are four fan headers to use – two CPU headers in the top right (4-pin and 3-pin), a 3-pin header just below the bottom left of the socket (above the PCIe slot) and another 3-pin near the top of the SATA ports. The other two fan headers at the bottom of the board are one 4-pin and one 3-pin, with the final fan header reserved for the chipset fan. This can be disabled if required by removing the cable.

The bottom right of the motherboard, next to the SATA ports and under the chipset heatsink, hides the important and costly controller chips. Between the two PLX8747 switches, the LSI RAID controller and the chipset, total power draw comes to north of 30W, hence the extra fan on the chipset heatsink.

Each PLX8747 PCIe switch can take in eight or sixteen PCIe 2.0 or PCIe 3.0 lanes and, using a combination of a FIFO buffer and multiplexing, output 32 PCIe 3.0 lanes. Sometimes this sounds like magic, but it is best to think of it as a switching FPGA – between the PCIe slots we have full PCIe 3.0 x16 bandwidth, but traffic going up the pipe back to the CPU is still limited by that 8/16 lane input. The benefit of the FIFO buffer is a fill twice/pour once scenario, coalescing commands and sending them up the data path together rather than one in/one out. In our previous testing the PLX8747 gave a sub-1% performance deficit in gaming, but it aids compute users that need inter-GPU bandwidth. It also satisfies SLI's fixed requirement of eight PCIe lanes per card, ensuring that NVIDIA configurations are happy.
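
As a rough mental model of that bottleneck (a sketch with assumed link widths and idealized numbers, not a measurement), traffic between two GPUs behind the same switch gets the full slot width, while traffic to the CPU shares the switch's upstream link:

```python
# Approximate per-direction PCIe 3.0 throughput is ~985 MB/s per lane.
GBPS_PER_LANE = 0.985

def link_gbps(lanes):
    """Idealized per-direction bandwidth of a PCIe 3.0 link, ignoring protocol overhead."""
    return lanes * GBPS_PER_LANE

peer_to_peer = link_gbps(16)           # GPU to GPU through the switch: full x16
to_cpu_x16_uplink = link_gbps(16) / 2  # two GPUs sharing an x16 uplink (40-lane CPU)
to_cpu_x8_uplink = link_gbps(8) / 2    # two GPUs sharing an x8 uplink (28-lane CPU)

print(f"Peer-to-peer behind one PLX8747: ~{peer_to_peer:.1f} GB/s per direction")
print(f"To CPU, x16 uplink shared by two GPUs: ~{to_cpu_x16_uplink:.1f} GB/s each")
print(f"To CPU, x8 uplink shared by two GPUs: ~{to_cpu_x8_uplink:.1f} GB/s each")
```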

The LSI 3008 is a little long in the tooth, having appeared on the X79 and Z87 Extreme11 products, but it does what ASRock wants it to do – provide extra storage ports for those that need them. Finding a case that can support 18 drives is another matter – we often see companies like Lian Li show them at Computex, and some cost as much as the motherboard. The next cost is all the drives, but I probably would not say no to an 18 x 6 TB system. The lack of RAID 5/6 redundancy is still a limitation, as is the lack of a cache. Moving up the LSI stack to a controller that does offer RAID 5/6 would add further cost to the product, and at this point ASRock has little competition in this space.

On the back of the motherboard is an interesting IC from Everspin, which turns out to be 1 MB of cache for the LSI controller. There is scope for ASRock to put extra cache on the motherboard, allowing for higher-end RAID controllers, but the cost/competition scenario comes into play again.

The final piece of the RAID arrangement is an MXIC chip, which looks to be a 128 Mbit flash memory IC with 110 ns latency.

Aside from the fancier features, the motherboard has two USB 3.0 headers above the SATA ports (both from the PCH), power/reset buttons, a two-digit debug display, two BIOS chips with a selector switch, two USB 2.0 headers, a COM header, and the usual front panel/audio headers. Bang in the middle of the board, between the PCIe slots and the DRAM slots, is a 4-pin molex to provide extra power to the PCIe slots when multiple hungry GPUs are in play. There is also another power connector below the PCIe slots, but ASRock has told us that only one needs to be occupied at any time. I have mentioned to ASRock that the molex connector is falling out of favor with PSU manufacturers and that very few users actually need one in 2015, as well as the fact that both of these connectors are in fairly awkward places. The response was that molex is the easiest to apply (compared to SATA power or 6-pin PCIe power), and the one in the middle of the board is for users with smaller cases. I have a feeling that ASRock won’t shift much on this design philosophy unless they develop a custom connector.

The PCIe slots give x16/x16/x16/x16, with the middle slot taking eight PCIe 3.0 lanes when in use and causing the slot beneath it to split into an x8/x8 arrangement. With appropriately sized cards this allows five cards in total. Normally we see the potential for a seven-card setup, but ASRock has decided to implement two PCIe 3.0 x4 M.2 slots in between a couple of the PCIe slots. The bandwidth for these slots comes from the CPU's PCIe lanes, and thus they do not get hardware RAID capabilities. However, given that the PM951 is about to be released, two of them in a software RAID for 2800 MBps+ sequentials, along with an 18 x 6 TB array, would make a super storage platform.
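
A quick back-of-the-envelope for that storage platform, assuming roughly 1400 MB/s sequential reads per PM951-class drive (our assumption for illustration, not a spec sheet figure):

```python
# Rough sequential and capacity figures for the configuration described above.
# The per-drive M.2 number is an assumption for illustration, not a measurement.
m2_drive_seq_mbps = 1400      # assumed PM951-class NVMe sequential read
m2_raid0_drives = 2
hdd_capacity_tb = 6
hdd_count = 18

m2_raid0_seq = m2_drive_seq_mbps * m2_raid0_drives   # ~2800 MB/s striped
bulk_capacity_tb = hdd_capacity_tb * hdd_count       # 108 TB raw across the SATA/SAS ports

print(f"M.2 RAID-0 sequential estimate: ~{m2_raid0_seq} MB/s")
print(f"Bulk drive capacity: {bulk_capacity_tb} TB raw")
```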

For users wanting to pair the 28-lane i7-5820K with this motherboard, the PCIe allocation is a little different. The CPU gives eight lanes to each of the PLX switches, so the full x16/x16/x16/x16 arrangement still applies, and another eight lanes go to the LSI controller. The first M.2 x4 slot gets the last four lanes and the second M.2 slot is disabled.
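
To keep the lane accounting straight, here is a minimal tally of the allocations described above (a sketch of the budget only; how the M.2 slots share lanes on a 40-lane CPU is not broken out here):

```python
# CPU PCIe 3.0 lane budgets as described in the text above.
forty_lane = {"PLX8747 #1": 16, "PLX8747 #2": 16, "LSI 3008": 8}
twenty_eight_lane = {"PLX8747 #1": 8, "PLX8747 #2": 8, "LSI 3008": 8, "M.2 #1": 4}

for cpu, budget in (("40-lane CPU", forty_lane),
                    ("28-lane CPU (i7-5820K)", twenty_eight_lane)):
    detail = ", ".join(f"{device}: x{lanes}" for device, lanes in budget.items())
    print(f"{cpu}: {detail} -> {sum(budget.values())} lanes used")
```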

The rear panel gives four USB 2.0 ports, a combination PS/2 port, a Clear CMOS button, two eSATA ports, two USB 3.0 from the PCH, two USB 3.0 from an ASMedia controller, an Intel I211-AT network port, an Intel I218-V network port and audio jacks from the Realtek ALC1150 audio codec.

Board Features

ASRock X99 Extreme11
Price US $630+
Size E-ATX
CPU Interface LGA2011-3
Chipset Intel X99
Memory Slots Eight DDR4 DIMM slots, up to Quad Channel 1600-3200 MHz
Supporting up to 64 GB UDIMM
Supporting up to 128 GB RDIMM
Video Outputs None
Network Connectivity Intel I211-AT
Intel I218-V
Onboard Audio Realtek ALC1150 (via Purity Sound 2)
Expansion Slots 4 x PCIe 3.0 x16
1 x PCIe 3.0 x8
Onboard Storage 6 x SATA 6 Gbps, RAID 0/1/5/10
4 x S_SATA 6 Gbps, no RAID
8 x SAS 12 Gbps/SATA 6 Gbps via LSI 3008
2 x PCIe 3.0 x4 M.2 up to 22110
USB 3.0 6 x USB 3.0 via PCH (2 headers, 2 rear ports)
2 x USB 3.0 via ASMedia ASM1042 (2 rear ports)
Onboard 18 x SATA 6 Gbps Ports
2 x USB 3.0 Headers
2 x USB 2.0 Headers
7 x Fan Headers
HDD Saver Header
Front Panel Audio Header
Front Panel Header
Power/Reset Buttons
Two-Digit Debug LED
BIOS Selection Switch
COM Header
Power Connectors 1 x 24-pin ATX
1 x 8-pin CPU
2 x Molex for PCIe
Fan Headers 2 x CPU (4-pin, 3-pin)
3 x CHA (4-pin, 2 x 3-pin)
1 x PWR (3-pin)
1 x SB (3-pin)
IO Panel 1 x PS/2 Combination Port
2 x eSATA Ports
4 x USB 2.0
2 x USB 3.0 via PCH
2 x USB 3.0 via ASMedia
1 x Intel I211-AT Network Port
1 x Intel I218-V Network Port
Clear CMOS Button
Audio Jacks
Warranty Period 3 Years
Product Page Link
Comments

  • Vorl - Wednesday, March 11, 2015

    ahh, like I said, I might have missed something. Thanks!

    I was just looking at the Haswell family and know it does support an IGP. I didn't know that 2011/-E doesn't.
  • yuhong - Saturday, March 14, 2015

    Yea, servers are where 2D graphics on a separate chip on the motherboard is still common.
  • Kevin G - Wednesday, March 11, 2015

    Native PCIe SSDs or 10G Ethernet controllers would make good use of the PCIe slots.

    A PCIe slot will be necessary for graphics, at least during first-time setup. Socket 2011-3 chips don't have integrated graphics so it is necessary. (It is possible to set everything up headless, but you'll be glad you have a GPU if anything goes wrong.)

    As for why use the LSI controller, it is a decent HBA for software RAID setups like those used under ZFS. For FreeNAS/NAS4Free users, the sheer number of ports enables some rather large arrays, or features like hot sparing or SSD caching.
  • Vorl - Wednesday, March 11, 2015

    For 10G Ethernet controllers/fiber HBAs you only need x8 slots (need is such a strong word too, considering 10G Ethernet and 8 Gb fiber only need 3 and 2 lanes respectively for PCIe 2.0). For super fast PCIe storage like SSDs you only need x4 slots, which is still 2 GB/s for PCIe 2.0. They would have been better served adding more PCIe x8 slots, but then again, what would be the point of 18 SATA ports if you were going to add storage controllers in the PCIe x16 slots?

    The four PCIe 3.0 x16 slots make me think compute server, but that doesn't mesh with 18 SATA ports. If database engines were able to use graphics cards (which I know is being worked on) this system might make more sense.

    It still makes me think they just tried to slap a bunch of stuff together without any real thought about what the system would really be used for. I am all for going fishing and seeing what people would use a board like this for, except that the $600 price tag puts it out of reach for all but the most specialized use cases.

    As for the LSI controller, like someone mentioned above, you can get a cheaper board with 8-port SATA PCIe cards to give you the same number of ports. More ports even, since most boards these days come with six SATA 6 Gbps connections. The 1 MB of cache is so silly for the LSI chip that it's laughable.

    The 128mb of cache for the RAID controller is a little better, but again, with just 6 RAID ports, what's the point?

    The whole board is just a mess of confusion.
  • 3DoubleD - Wednesday, March 11, 2015

    Similar to my thinking in my post above.

    If you are going for a software RAID setup with a ludicrous number of SATA ports, you can get a Z97 board with 3 full PCIe slots (x8, x8, x4) and 8 SATA ports. With three Supermicro cards (two 8x SATA III and one 8x SATA II because of the x4 PCIe slot) you would have 32 SATA ports and it would cost you $650. The software RAID I use "only" accepts up to 25 drives, so that last card is only necessary if you need that 1 extra drive; for $500 you could run a 24-drive array with an M.2 or SATA Express SSD as a cache/system drive. And as you pointed out, since it is Z97, it would have onboard video.

    Basically, given the price of these non-RAID add-in SATA cards, I'd say that any manufacturer making a marketing play on SATA ports needs to keep the cost of each additional SATA port to <$20/port over the price of a board with similar PCIe slot configurations.

    As you said, if this board had 18 SATA ports that could support hardware RAID, then it would be worth the additional price tag. This is probably not possible though, since 10 SATA ports are from the chipset and the rest from an additional controller. For massive hardware RAID setups you're better off getting a PCIe 2.0 x16 card (for 16 SATA III drives) or a PCIe 3.0 x16 card (if such a thing even exists; it could theoretically handle 32 SATA III drives). I'm sure such large hardware RAID arrays become overwhelming for the controller and would cost a fortune.

    Anyway, this must be some niche prosumer application that requires ludicrous amounts of non-RAID storage and 4 co-processor slots. I can't imagine what it is though.
  • Runiteshark - Wednesday, March 11, 2015

    No clue why they didn't use an LSI 3108 and include the port for the add-on BBU and cache unit like Supermicro does on some of their boards. Also not sure why these companies can't put 10G copper connectors at minimum on these boards. Again, Supermicro does it without issue.
  • DanNeely - Wednesday, March 11, 2015

    There are people who think combining their gaming godbox and Blu-ray rip mega storage box into a single computer is a good idea. They're the potential market for a monstrosity like this.

    You know what they say, "A fool and his money will probably make someone else rich."
  • Murloc - Wednesday, March 11, 2015

    I guess this is aimed at the rather unlikely situation of someone wanting both storage and computation/gaming in the same place.

    You know, there are people out there who just want the best and don't care about wasting money on features they don't need.
  • Zak - Thursday, March 12, 2015

    I agree. For reasons Vorl mentioned this is a pointless board. I can't imagine a target market for this. My first reaction was also, wow, beastly storage server. But then yeah, different controllers. What is the point?
  • eanazag - Thursday, March 12, 2015

    It is not a server board. Haswell-E desktop board. I have no use for that many SATA ports but someone might.

    2 x DVD or BD drives
    2 x SSDs on RAID 1 for boot

    Use Windows to mirror the two RAID 0 volumes below.
    7 x SSDs in RAID 0
    7 x SSDs in RAID 0

    The mirrored RAID 0 volumes could get you about 3-6 GBps sequential read rates from 400 MBps SSDs. Maybe a little less in write speeds. All done with mediocre SSDs.

    This machine would cost over $2000.
