Mixed Random Performance

Our test of mixed random reads and writes covers mixes varying from pure reads to pure writes at 10% increments. Each mix is tested for up to 1 minute or 32GB of data transferred. The test is conducted with a queue depth of 4, and is limited to a 64GB span of the drive. In between each mix, the drive is given idle time of up to one minute so that the overall duty cycle is 50%.
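
For readers who want to approximate this workload on their own hardware, the sketch below shows how a similar sweep could be scripted around fio. It is an illustration under stated assumptions, not the harness actually used for these results: the device path, job names, and idle handling are placeholders, and running it against a raw device destroys its contents.

```python
#!/usr/bin/env python3
"""Rough approximation of the mixed 4kB random read/write sweep described
above, built around fio. Illustrative only: the device path, job naming,
and idle handling are assumptions, not the harness used for this review."""

import json
import subprocess
import time

DEVICE = "/dev/nvme0n1"  # hypothetical target; writing to a raw device destroys its contents

def run_mix(read_pct: int) -> dict:
    """Run one 4kB random mix at queue depth 4 over a 64GB span, stopping
    after 60 seconds or 32GB of data transferred, whichever comes first."""
    cmd = [
        "fio",
        f"--name=mix{read_pct}",
        f"--filename={DEVICE}",
        "--ioengine=libaio", "--direct=1",
        "--rw=randrw", f"--rwmixread={read_pct}",
        "--bs=4k", "--iodepth=4",
        "--size=64G",       # confine the test to a 64GB span of the drive
        "--io_size=32G",    # cap the data transferred per mix
        "--runtime=60",     # cap each mix at one minute (no time_based, so either limit ends the job)
        "--output-format=json",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

if __name__ == "__main__":
    for read_pct in range(100, -1, -10):   # pure reads down to pure writes in 10% steps
        started = time.time()
        data = run_mix(read_pct)
        busy = time.time() - started
        time.sleep(min(busy, 60))          # idle roughly as long as the mix ran, for a ~50% duty cycle
        job = data["jobs"][0]
        mb_per_s = (job["read"]["bw"] + job["write"]["bw"]) / 1024  # fio reports bandwidth in KiB/s
        print(f"{read_pct:3d}% reads: {mb_per_s:.1f} MiB/s")
```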

Mixed 4kB Random Read/Write

The Crucial P1 delivers reasonable entry-level NVMe performance on the mixed random I/O test. It is clearly faster than the MX500 SATA SSD and comes close to some high-end NVMe SSDs. But when the drive is full and the SLC cache is at its minimum size, the P1 slows to about 40% of the speed it achieves when the drive holds only the test data. When full, the P1 is about 22% slower than the Intel 660p, though their empty-drive performance is similar.

Sustained 4kB Mixed Random Read/Write (Power Efficiency)
Power Efficiency in MB/s/W | Average Power in W

The Crucial P1 has worse power efficiency than the Intel 660p on this test, whether the drive is full or mostly empty. Efficiency is still reasonable for the mostly-empty test run, but when the drive is full the P1's power consumption increases slightly and its efficiency falls well behind that of other low-end NVMe SSDs.

When the mixed random I/O test is run on a full Crucial P1, the benefits of the SLC cache almost completely disappear, leaving the drive with a mostly flat (if somewhat inconsistent) performance curve rather than the usual sharp upswing once the proportion of writes grows beyond 70%. The Intel 660p behaves very similarly, save for slightly lower write performance to the SLC cache and slightly better full-drive performance.

Mixed Sequential Performance

Our test of mixed sequential reads and writes differs from the mixed random I/O test by performing 128kB sequential accesses rather than 4kB accesses at random locations, and the sequential test is conducted at queue depth 1. The range of mixes tested is the same, and the timing and limits on data transfers are also the same as above.
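
Under the same assumptions as the random-mix sketch above, the only changes needed for a sequential version are the access pattern, block size, and queue depth. A hypothetical command builder might look like this:

```python
def sequential_mix_cmd(read_pct: int, device: str = "/dev/nvme0n1") -> list:
    """Hypothetical fio invocation for one step of the mixed sequential test:
    128kB sequential transfers at queue depth 1, keeping the same 64GB span,
    32GB/60-second limits, and 10%-increment mix sweep as the random test."""
    return [
        "fio",
        f"--name=seqmix{read_pct}",
        f"--filename={device}",
        "--ioengine=libaio", "--direct=1",
        "--rw=rw", f"--rwmixread={read_pct}",  # "rw" = mixed sequential I/O (vs "randrw")
        "--bs=128k", "--iodepth=1",            # 128kB transfers at queue depth 1
        "--size=64G", "--io_size=32G", "--runtime=60",
        "--output-format=json",
    ]
```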

Mixed 128kB Sequential Read/Write

The Crucial P1's performance on the mixed sequential I/O test is better than that of most entry-level NVMe SSDs and comes close to some of the slower high-end drives. Even when the test is run on a full drive, the P1 remains faster than SATA SSDs, and its full-drive performance is slightly better than the Intel 660p's.

Sustained 128kB Mixed Sequential Read/Write (Power Efficiency)
Power Efficiency in MB/s/W | Average Power in W

The power efficiency of the Crucial P1 on this test is about average for an entry-level NVMe drive. When the test is run on a full drive, the reduced performance causes efficiency to take a big hit, but the P1 still ends up only slightly less efficient than the Crucial MX500 SATA SSD.

The Crucial P1 has decent performance at either end of the test, when the workload is either very read-heavy or very write-heavy. Compared to other entry-level NVMe drives, the P1 starts out with better read performance and recovers more of its performance toward the end of the test. The minimum, reached at around a 60/40 read/write split, is faster than a SATA drive can manage but is unremarkable among NVMe drives. When the test is run on a full drive, performance during the more read-heavy half of the test is only slightly reduced, but things get worse throughout the write-heavy half instead of improving as write caching comes more into play.

Comments

  • Lolimaster - Friday, November 9, 2018 - link

    With worse of everything, how is it going to be "faster"? Do any TLC SSDs beat the Samsung MLC ones? No.
  • Valantar - Thursday, November 8, 2018 - link

    What's the point of increasing performance when current top-level performance is already so high as to be nigh unnoticeable? The real-world difference between a good mid-range NVMe drive and a high-end one is barely measurable in actual workloads, let alone noticeable. Sure, improving random perf would be worthwhile, but that's not happening with flash any time soon. Increasing capacity per dollar while maintaining satisfactory performance is clearly a worthy goal. The only issue is that this drive, as with most drives at launch, is overpriced. It'll come down, though.
  • JoeyJoJo123 - Thursday, November 8, 2018 - link

    ^ This.

    For typical end users, even moving from a SATA3 SSD to an NVMe SSD doesn't provide a noticeable difference in overall system performance. Moving from an HDD to an SSD for your OS install was a different story and a noticeable upgrade, but that kind of leap just isn't going to happen again.

    Typical end users aren't writing/reading so much off the drive that QLC presents a noticeable downgrade over TLC, or even MLC storage. Yes, right now QLC isn't cheap enough compared to existing TLC products, but we already did this dance when TLC first arrived on the scene and people were stalwart about sticking to MLC drives only. Today? We've got high-end NVMe TLC drives with better read/write and random IOPS performance than the best MLC SATA3 drives from back when MLC was the superior technology.

    Yeah, it's going to take time for QLC to come down in price, the tech is newer and yields are lower, and companies are trying to fine tune the characteristics of their product stacks to make them both appealing in price and performance. Give it some time.
  • romrunning - Thursday, November 8, 2018 - link

    Sure, we lost endurance and speed with the switch from MLC to TLC. But the change from TLC to QLC is much worse in terms of latency, endurance, and just overall performance. Frankly, the sad part is that the drive needs the pseudo-SLC area to just barely meet the lowered expectations for QLC. Some of those QLC drives barely beat good SATA drives.

    We now have a new tech (3D Xpoint/Optane) that is demonstrably better for latency, consistency, endurance, and performance. I'd rather Micron continue to put the $ into it to get higher yields for both increased density/capacity & lower costs. That's what I want on the NVMe side, not another race to the bottom.
  • JoeyJoJo123 - Thursday, November 8, 2018 - link

    Sorry, you're not the end consumer that dictates how products get taped out, and honestly, if you were in charge of product management, you'd run the company into the ground focusing on making only premium priced storage drives in a market that's saturated with performance drives.

    The bulk of all SSD sales are for lower-cost, lower-capacity options. There is no "race to the bottom"; it's just some jank you made up in your head to justify why companies are focusing on making products for the common man. Being able to move from an affordable 500GB SSD on TLC to a similarly priced 1TB SSD in a few years is a GOOD THING.

    If you want preemium(tm) quality products, SSDs with only the HIGHEST of endurance ratings for the massive Read/Write workloads you perform on your personal desktop on a day-to-day basis, SSDs with only the LOWEST of latencies so that you can load into Forknight(tm) faster than the other childerm, then how about you go buy enterprise storage products instead of whining in the comments section of a free news article. The products you want with the technology you need are out there. They're expensive because it's a niche market catered towards enterprise workloads where they can justify the buckets of money.

    You keep whining, I'll keep enjoying the larger storage capacities at cheaper prices so that I can eventually migrate my Home NAS to a completely solid state solution. Right now, getting even a cheap 1TB SSD for caching is super-slick.
  • romrunning - Friday, November 9, 2018 - link

    "...how about you go buy enterprise storage products instead of whining in the comments section of a free news article."

    You are taking this way too personally.

    I'm actually thinking more about the business side. I want 3D-Xpoint/Optane to get cheaper & get more capacity so that I can justify it for more than just some specific servers/use-cases. So I'd like Micron to focus more on developing that side than chasing the price train with QLC, which is inferior to what preceded it. With Micron buying out Intel's stake in IMFT for 3D-Xpoint, I just hope the product line diversification doesn't lessen the work on making 3D-Xpoint cheaper & pushing it to even greater capacities.
  • JoeyJoJo123 - Friday, November 9, 2018 - link

    >You are taking this way too personally.

    Talk about projecting. Micron is taping out dozens of products across different product segments for all kinds of users. They're working on 3D-Xpoint and QLC stuff simultaneously and independently from each other. What's happening here is that Micron is producing QLC NAND for this Crucial M.2 SSD, and you're here taking it personally (and therefore whining in a free news article comments section) that Micron isn't focusing enough on 3D-Xpoint and that supposedly their QLC is bad for some reason. Thing is, this news article isn't for you. This technology isn't for you. You decided your tech needs are above what this product is aimed at: affordable, large-capacity SSDs at lower prices.

    Seriously, calm down. This wasn't an assault orchestrated by Micron against people that need/want higher-performance storage options. More 3D-Xpoint stuff will come your way if that's the technology you're looking forward to. Again, back to my main point: it's going to take some time for these newer technologies to roll out. Until then, don't whine in comments sections that X isn't the Y you were waiting for. If the article is about technology X, make a half-decent effort to keep to the topic of technology X.
  • mathew7 - Tuesday, November 13, 2018 - link

    "I'll keep enjoying the larger storage capacities at cheaper prices so that I can eventually migrate my Home NAS to a completely solid state solution."
    Wwwwwhhhhhhhaaaaaaaaaaattttt?? NEVER. You don't understand the SSD limits. I would not do that with SLC (assuming current quality at QLC price).
    Enterprises with SSD NASes only use them for short-term performance storage with hourly/daily backup. Anyone who uses them differently is asking for a disaster.
    Look up "linuxconf Intel SSD". There is a presentation where they explain how reading a cell damages nearby cells and manufacturers need to monitor this and relocate data that is only read.
    I have 2 servers with only 1 SSD each for OS and 8-10TB HDDs for my actual long-term data.
    All my desktops/laptops have SSDs (Intel 320, Samsung 830-860 evo+pro, Crucial BX100/MX300 etc). But anything important on SSDs will be backed-up to HDDs.
  • Oxford Guy - Thursday, November 8, 2018 - link

    "That's what I want ... not another race to the bottom."

    That's what consumers want: value.

    That's not what companies want. They want the opposite. Their wish is to sell the least for the most.
  • Mikewind Dale - Thursday, November 8, 2018 - link

    "[Companies] want the opposite. Their wish is to sell the least for the most."

    Not true. Companies want to maximize net revenue, i.e. total revenue minus cost.

    Depending on the elasticity of demand (i.e. price sensitivity), that might mean increasing quantity and decreasing price.

    A reduction in quantity and an increase in price will increase revenue only if demand is inelastic.

    But given the existence of HDDs, it makes sense that demand for SSDs is elastic, i.e. price-sensitive. These aren't captive consumers with zero choice.

    Of course, nothing stops a company from catering to BOTH markets, i.e. high performance AND low cost markets.
