The Crucial P1 1TB SSD Review: The Other Consumer QLC SSD
by Billy Tallis on November 8, 2018 9:00 AM EST
Power Management Features
Real-world client storage workloads leave SSDs idle most of the time, so the active power measurements presented earlier in this review only account for a small part of what determines a drive's suitability for battery-powered use. Especially under light use, the power efficiency of an SSD is determined mostly by how well it can save power when idle.
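A quick back-of-the-envelope calculation shows why idle power dominates under light use. The numbers below are illustrative, not measurements from this review:

```python
# Sketch: duty-cycle-weighted average power. The 5% active fraction and
# the wattages are illustrative assumptions, not measured values.
def average_power(active_w, idle_w, active_fraction):
    """Average power draw (W) for a drive that is active a given fraction of the time."""
    return active_w * active_fraction + idle_w * (1 - active_fraction)

# A drive that is active only 5% of the time:
good_idle = average_power(active_w=4.0, idle_w=0.050, active_fraction=0.05)
poor_idle = average_power(active_w=4.0, idle_w=0.600, active_fraction=0.05)
print(f"{good_idle:.3f} W vs {poor_idle:.3f} W")
```

With the same 4 W active draw, the drive that idles at 50 mW averages roughly a third of the power of one that idles at 600 mW, which is why the idle measurements later in this section matter so much for notebooooks' battery life.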
For many NVMe SSDs, the closely related matter of thermal management can also be important. M.2 SSDs can concentrate a lot of power in a very small space. They may also be used in locations with high ambient temperatures and poor cooling, such as tucked under a GPU on a desktop motherboard, or in a poorly-ventilated notebook.
Crucial P1 NVMe Power and Thermal Management Features
Controller: Silicon Motion SM2263
Firmware: P3CR010

| NVMe Version | Feature | Status |
|---|---|---|
| 1.0 | Number of operational (active) power states | 3 |
| 1.1 | Number of non-operational (idle) power states | 2 |
| 1.1 | Autonomous Power State Transition (APST) | Supported |
| 1.2 | Warning Temperature | 70 °C |
| 1.2 | Critical Temperature | 80 °C |
| 1.3 | Host Controlled Thermal Management | Supported |
| 1.3 | Non-Operational Power State Permissive Mode | Not Supported |
The Crucial P1 includes a fairly typical feature set for a consumer NVMe SSD, with two idle states that should both be quick to get in and out of. The three different active power states probably make little difference in practice, because even in our synthetic benchmarks the P1 seldom draws more than 3-4W.
Crucial P1 NVMe Power States
Controller: Silicon Motion SM2263
Firmware: P3CR010

| Power State | Maximum Power | Active/Idle | Entry Latency | Exit Latency |
|---|---|---|---|---|
| PS 0 | 9 W | Active | - | - |
| PS 1 | 4.6 W | Active | - | - |
| PS 2 | 3.8 W | Active | - | - |
| PS 3 | 50 mW | Idle | 1 ms | 1 ms |
| PS 4 | 4 mW | Idle | 6 ms | 8 ms |
Note that the above tables reflect only the information provided by the drive to the OS. The power and latency numbers are often very conservative estimates, but they are what the OS uses to determine which idle states to use and how long to wait before dropping to a deeper idle state.
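The way an OS turns these reported latencies into an APST policy can be sketched roughly as follows. The 50x multiplier and the 100 ms latency cutoff are illustrative assumptions (Linux uses a comparable latency-based heuristic, but the exact constants differ):

```python
# Sketch of an APST-style policy: choose an idle timeout for each
# non-operational power state from the latencies the drive reports.
# The multiplier and latency cutoff are illustrative, not spec-mandated.
def apst_timeouts(idle_states, max_latency_ms=100, multiplier=50):
    """Return {state: idle timeout in ms} for states that wake quickly enough."""
    timeouts = {}
    for name, (entry_ms, exit_ms) in idle_states.items():
        total = entry_ms + exit_ms
        if total <= max_latency_ms:      # skip states that would wake too slowly
            timeouts[name] = total * multiplier
    return timeouts

# Latencies the Crucial P1 reports for its two idle states:
p1_idle = {"PS3": (1, 1), "PS4": (6, 8)}
print(apst_timeouts(p1_idle))  # {'PS3': 100, 'PS4': 700}
```

Under this kind of policy, a drive that understates its exit latency (as the P1's wake-up measurement below suggests) ends up being put to sleep far more aggressively than its real behavior warrants.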
Idle Power Measurement
SATA SSDs are tested with SATA link power management disabled to measure their active idle power draw, and with it enabled for the deeper idle power consumption score and the idle wake-up latency test. Our testbed, like any ordinary desktop system, cannot trigger the deepest DevSleep idle state.
Idle power management for NVMe SSDs is far more complicated than for SATA SSDs. NVMe SSDs can support several different idle power states, and through the Autonomous Power State Transition (APST) feature the operating system can set a drive's policy for when to drop down to a lower power state. There is typically a tradeoff in that lower-power states take longer to enter and wake up from, so the choice of which power states to use may differ between desktops and notebooks.
We report two idle power measurements. Active idle is representative of a typical desktop, where none of the advanced PCIe link or NVMe power saving features are enabled and the drive is immediately ready to process new commands. The idle power consumption metric is measured with PCIe Active State Power Management L1.2 state enabled and NVMe APST enabled if supported.
The idle power consumption numbers from the Crucial P1 match the pattern seen with other recent Silicon Motion platforms. The active idle draw is a bit higher for the P1 than the 660p due to the latter having less DRAM, but both do very well when put to sleep.
The wake-up latency of over 73ms for the Crucial P1 is fairly high, and definitely much worse than what the drive advertises to the operating system. This could lead to some responsiveness problems if the OS is misled into choosing an overly-aggressive power management strategy.
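The basic shape of a wake-up latency test is simple: let the drive sit idle long enough for APST to engage, then time the first command. A minimal sketch, with a placeholder device path:

```python
# Sketch: estimate idle wake-up latency by timing the first read issued
# after a period of inactivity. /dev/nvme0n1 is a placeholder path; reading
# a raw block device needs root, and the page cache can mask the true
# latency (a rigorous test would use O_DIRECT with aligned buffers).
import os
import time

def wake_latency_ms(path, idle_s=5.0):
    """Stay idle long enough for APST to reach deep idle, then time one 4 KiB read."""
    fd = os.open(path, os.O_RDONLY)
    try:
        time.sleep(idle_s)            # give the drive time to drop into its deepest state
        t0 = time.perf_counter()
        os.pread(fd, 4096, 0)         # first command after idle pays the wake-up cost
        return (time.perf_counter() - t0) * 1000
    finally:
        os.close(fd)
```

A drive advertising an 8 ms exit latency should return times in that ballpark; the 73 ms measured here is nearly an order of magnitude worse than what the P1 reports.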
Comments
Mikewind Dale - Thursday, November 8, 2018 - link
Sic:"A reduction in quantity and an increase in price will increase net revenue only if demand is elastic."
That should be "inelastic."
limitedaccess - Thursday, November 8, 2018 - link
The transition to TLC drives was also shortly followed by the transition to 3D NAND, which moved from a smaller planar lithography to a larger process node. While smaller litho allowed more density, it also came with the trade-off of worse endurance and faster decay. So the transition to 3D NAND effectively offset the issues of MLC->TLC, which is where we are today. What's the equivalent for TLC->QLC?
Low-litho planar TLC drives were the ones that were poorly received and performed worse in reality than they did in reviews, due to decay. And decay is the real issue here with QLC, since no reviewer tests for it (it isn't the same as poor write endurance). Is that file I don't regularly access going to maintain the same read speeds, or have massively higher latency to access due to the need for ECC to kick in?
0ldman79 - Monday, November 12, 2018 - link
I may not be correct on the exact numbers, but I think NAND lithography has stopped at 22nm, as they were having issues retaining data at 14nm; there's just no real benefit to going to a smaller lithography. They may tune that in a couple of years, but the only way I can see that working, with my rudimentary understanding of the system, is to keep everything the same size as on 22nm (gates, gaps, fences, chains, roads, whatever, it's too late/early for me to remember the correct terms), same gaps only on a smaller process. They'd have no reduction in cost, as they'd be using the same amount of each wafer, though they might get a reduction in power consumption.
I'm eager to see how they address the problem but it really looks like QLC may be a dead end. Eventually we're going to hit walls where lithography can't improve and we're going to have to come at the problem (cpu speed, memory speeds, NAND speeds, etc) from an entirely different angle than what we've been doing. For what, 40 years, we've been doing major design changes every 5 years or so and just relying on lithography to improve clock speeds.
I think that is about to cease entirely. They can probably go farther than what we're seeing but not economically.
Lolimaster - Friday, November 9, 2018 - link
You're not expecting a drive limited to 500MB/s to be as fast as a PCIe 4x SSD with full support for it... TLC vs MLC all comes down to endurance and degraded performance when the drive is full or the cache is exhausted.
Lolimaster - Friday, November 9, 2018 - link
Random performance seems the land of Optane and similar. Even the 16GB Optane M10 absolutely murders even the top-of-the-line NVMe Samsung MLC SSD.
PaoDeTech - Thursday, November 8, 2018 - link
Yes, price is still too high. But it will come down. I think the conclusions fail to highlight the main strength of this SSD: top performance per watt. For portable devices, this is the key metric to consider. In this regard it is far ahead of any SATA SSD and almost all PCIe drives out there.
Lolimaster - Friday, November 9, 2018 - link
Exactly. QLC should stick to big multi-terabyte drives for the average user or HEDT. Like 4TB+.
0ldman79 - Monday, November 12, 2018 - link
I think that's where they need to place QLC: massive "read mostly" storage. xx-layer TLC for a performance drive, QLC for massive data storage, i.e. all of my Steam games installed on a 10-cent-per-gig "read mostly" drive while the OS and my general use sit on a 22-cent-per-gig TLC.
That's what they're trying to do with the SLC cache, but I think they need to push it a lot farther: throw a 500GB TLC cache on a 4-terabyte QLC drive. That might let it fit into the mainstream NVMe lineup.
Flunk - Thursday, November 8, 2018 - link
MSRP seems a little high. I recently picked up an HP EX920 1TB for $255, and that's a much faster drive. Perhaps the street price will be lower.
B3an - Thursday, November 8, 2018 - link
That latency is APPALLING and the performance is below par. If this was dirt cheap it might be worth it to some people, but at that price it's a joke.