Kingston SSDNow V+100 Review
by Anand Lal Shimpi on November 11, 2010 3:05 AM EST
Random Read/Write Speed
The four corners of SSD performance are as follows: random read, random write, sequential read and sequential write speed. Random accesses are generally small in size, while sequential accesses tend to be larger, hence the four Iometer tests we use in all of our reviews.
Our first test writes 4KB in a completely random pattern over an 8GB space of the drive to simulate the sort of random access that you'd see on an OS drive (even this is more stressful than a normal desktop user would see). I perform three concurrent IOs and run the test for 3 minutes. The results reported are in average MB/s over the entire time. We use both standard pseudo-randomly generated data (data is random within a write, but duplicated between writes) as well as fully random data (data is random within a write and random across most writes) to show you both the maximum and minimum performance offered by SandForce-based drives in these tests. The average performance of SF drives will likely be somewhere in between the two values for each drive you see in the graphs. For an understanding of why the type of data you're writing matters, read our original SandForce article.
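To make the methodology concrete, here is a minimal Python sketch of what a test like this does. This is not our actual Iometer configuration: the file path is hypothetical, and a real run would target the raw device with the OS cache bypassed (e.g. O_DIRECT) rather than a filesystem file that goes through the page cache.

```python
# Minimal sketch of a 4KB random-write test (not the actual Iometer config).
# Assumptions: a filesystem test file stands in for the raw drive, and the
# OS page cache is NOT bypassed, so real measurements need O_DIRECT/raw I/O.
import os
import random
import threading
import time

PATH = "testfile.bin"     # hypothetical target file
BLOCK = 4 * 1024          # 4KB writes
SPAN = 8 * 1024**3        # writes restricted to an 8GB LBA space
QUEUE_DEPTH = 3           # 3 concurrent IOs; the high-QD test below uses 32
DURATION = 180            # 3 minutes
FULLY_RANDOM = True       # False = data duplicated between writes (SandForce best case)

def worker(fd, deadline, totals):
    buf = os.urandom(BLOCK)                 # data is random within a write
    written = 0
    while time.monotonic() < deadline:
        if FULLY_RANDOM:
            buf = os.urandom(BLOCK)         # ...and random across writes too
        offset = random.randrange(SPAN // BLOCK) * BLOCK   # 4KB-aligned offset
        os.pwrite(fd, buf, offset)          # POSIX positional write
        written += BLOCK
    totals.append(written)

fd = os.open(PATH, os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, SPAN)
totals, deadline = [], time.monotonic() + DURATION
threads = [threading.Thread(target=worker, args=(fd, deadline, totals))
           for _ in range(QUEUE_DEPTH)]
for t in threads:
    t.start()
for t in threads:
    t.join()
os.close(fd)
print(f"average: {sum(totals) / DURATION / 1e6:.1f} MB/s")
```

Setting FULLY_RANDOM to False approximates the duplicated-data case that gives SandForce drives their best numbers; the same loop with QUEUE_DEPTH = 32 corresponds to the high queue depth test further down.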
Random write performance has always been the weak spot of Toshiba's controllers, and this latest combination of controller and firmware is no different. Compared to all other SSDs, the Toshiba-based SSDNow V+100 doesn't look very good; it's even slower than the old Indilinx-based Corsair Nova. It's still over 2x the speed of the fastest desktop 3.5" hard drive, however, and enough to give you the feel of an SSD for the most part.
Crucial loses a decent amount of performance going from 128GB to 64GB: the 64GB RealSSD C300 drops below even the worst-case performance of the Corsair Force F40.
Note that not all SandForce drives are created equal. If a manufacturer doesn't meet SandForce's sales requirements, its drives are capped at a maximum of 50MB/s in this test. That's the case with the Patriot Inferno, while OCZ's Agility 2 enforces the same limit voluntarily.
Many of you have asked for random write performance at higher queue depths. What I have below is our 4KB random write test performed at a queue depth of 32 instead of 3. While the vast majority of desktop usage models experience queue depths of 0 to 5, higher depths are possible in heavy I/O (and multi-user) workloads:
Kingston's performance doesn't change with queue depth. The SandForce drives either stay the same or get a little faster as they have more requests to parallelize and write sequentially.
Our random read test is similar to the random write test, except that we lift the 8GB LBA space restriction:
Random read speed is much better, but still not as good as the competition's. At 19.7MB/s the SSDNow V+100 is well over an order of magnitude faster than the 600GB VelociRaptor. The Indilinx-based Corsair Nova is faster still, however, and nothing here can top the RealSSD C300.
96 Comments
Taft12 - Thursday, November 11, 2010
Can you comment on any penalty for 3Gbps SATA? I'm not convinced any SSD can exhibit any performance impact from the older standard except in the most contrived of benchmarks.
Sufo - Thursday, November 11, 2010
Well, I've seen speeds spike above 375MB/s, tho ofc this could well be erroneous reporting on Windows' side. I haven't actually hooked the drive up to my 3Gbps ports so in all honesty, I can't compare the two - perhaps I should run a couple of benches...
Hacp - Thursday, November 11, 2010
It seems that you recommend drives despite the results of your own storage bench. It shows that the Kingston is the premier SSD to have if you want a drive that handles multi-tasking well. SandForce is nice if you do light tasks, but who the hell buys an SSD that only does well handling light tasks? No one!
JNo - Thursday, November 11, 2010
"SandForce is nice if you do light tasks, but who the hell buys an SSD that only does well handling light tasks? No one!"
Er... I do. Well, obviously I would want a drive that handles heavy task loads well too, but there are limits to how much I can pay, and the cost per gig of some of the better performers is significantly higher. Maybe money is no object for you but if I'm *absolutely honest* with myself, I only *very rarely* perform the type of very heavy loads that Anand uses in his heavy load bench (it has almost ridiculous levels of multi-tasking). So the premium for something that benefits me only 2-3% of the time is unjustified.
Anand Lal Shimpi - Thursday, November 11, 2010
That's why I renamed our light benchmark a "typical" benchmark: it's not really a light usage case but rather more of what you'd commonly do on a system. The Kingston drive does very well there and in a few other tests, which is why I'd recommend it - however, concerns about price and write amplification keep it from being a knock out of the park.
Take care,
Anand
OneArmedScissorB - Thursday, November 11, 2010
"SandForce is nice if you do light tasks, but who the hell buys an SSD that only does well handling light tasks? No one!"
Uh... pretty much every single person who buys one for a laptop?
cjcoats - Thursday, November 11, 2010
I have what may be an unusual access pattern -- seeks within a file -- that I haven't seen any "standard" benchmarks for, and I'm curious how drives do under it, particularly the SandForce drives that depend upon (inherently sequential?) compression. Quite possibly, heavy database use has the same problem, but I haven't seen benchmarks on that, either.
I do meteorology and other environmental modeling, and frequently we want to "strip mine" the data in various selective ways. A typical data file might look like:
* Header stuff -- file description, etc.
* Sequence of time steps, each of which is an
> array of variables, each of which is a
+ 2-D or 3-D grid of values
For example, you might have a year's worth of hourly meteorology (about 9000 time steps), for ten variables (of which temperature is the 2nd), on a 250-row by 500-column grid. So for this file, that's 0.5 MB per variable, 5 MB per time step, and a total size of 45 GB, with one file per year.
Now you might want to know, "What's the temperature for Christmas Eve?" The logical sequence of operations to be performed is:
1. Read the header
2. Compute timestep-record descriptions
3. Seek to { headersize + 8592*5MB + 500KB }
4. Read 0.5 MB
Now with a "conventional" disk, that's two seeks and two reads (assuming the header is not already cached by either the OS or the application), returning a result almost instantaneously.
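In code, that lookup is just an offset computation plus one positioned read, something like the following sketch. The header size, file name, and 4-byte float cells are assumptions for illustration only:

```python
# Sketch of the seek-and-read pattern described above. The header size, file
# name, and 4-byte float cells are hypothetical, not from a real format.
import os

HEADER = 4096                      # hypothetical header size
NVARS, ROWS, COLS = 10, 250, 500   # ten variables on a 250x500 grid
CELL = 4                           # 4-byte floats -> 0.5 MB per variable
VAR_SIZE = ROWS * COLS * CELL      # 0.5 MB per variable per time step
STEP_SIZE = NVARS * VAR_SIZE       # 5 MB per time step

def read_field(fd, timestep, var_index):
    """One 2-D field: compute the offset, then a single positioned read."""
    offset = HEADER + timestep * STEP_SIZE + var_index * VAR_SIZE
    return os.pread(fd, VAR_SIZE, offset)

fd = os.open("met_hourly.dat", os.O_RDONLY)  # hypothetical 45GB data file
temperature = read_field(fd, 8592, 1)        # 2nd variable at time step 8592
os.close(fd)
```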
But what does that mean for a SandForce-style drive that relies on compression, and implicitly on reading the whole thing in sequence? Does it mean I need to issue the data request and then go take a coffee break? I remember too well when this sort of data was stored in sequential ASCII files, and such a request would mean "Go take a 3-martini lunch." ;-(
FunBunny2 - Thursday, November 11, 2010
I've been asking for similar for a while. What I want to know from a test is how an SSD behaves as a data drive for a real database, DB2/Oracle/PostgreSQL with tens of gigs of data doing realistic random transactions. The compression used by SandForce becomes germane, in that engine writers are incorporating compression/security in storage. Whether one should use consumer/prosumer drives for real databases is not pertinent; people do.
Shadowmaster625 - Thursday, November 11, 2010
Yes, I have been wondering about exactly this sort of thing too. I propose a seeking and logging benchmark (a rough sketch follows this description). It should go something like this:
Create a set of 100 log files, some only a few bytes, some with a few MB of random data.
Create one very large file for seek testing. Just make an uncompressed zip file filled with 1/3 videos and 1/3 temporary internet files and 1/3 documents.
The actual test should be two steps:
1 - Open one log file and write a few bytes onto the end of it. Then close the file.
2 - Open the seek test file and seek to random location and read a few bytes. Close the file.
Then I guess you just count the number of loops this can run in a minute. Maybe run two threads, each working on 50 files.
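A single-threaded sketch of that loop might look like the following; the 1GB sparse file standing in for the mixed-content zip, the log file sizes, and the 16-byte reads are all assumptions:

```python
# Rough single-threaded sketch of the proposed seek-and-log benchmark.
# Assumptions: a 1GB sparse file stands in for the mixed-content zip, and
# log sizes / read sizes are picked arbitrarily within the description above.
import os
import random
import time

LOGS = [f"log_{i:03d}.txt" for i in range(100)]   # 100 log files
SEEK_FILE = "seektest.bin"                        # one large file for seeking
SEEK_SIZE = 1024**3                               # assumed 1GB

# Setup: logs from a few bytes to a few MB, plus the big seek-test file.
for name in LOGS:
    with open(name, "wb") as f:
        f.write(os.urandom(random.choice([16, 4096, 2 * 1024**2])))
with open(SEEK_FILE, "wb") as f:
    f.truncate(SEEK_SIZE)

loops, deadline = 0, time.monotonic() + 60        # count loops in one minute
while time.monotonic() < deadline:
    # Step 1: append a few bytes to the end of one log file, then close it.
    with open(random.choice(LOGS), "ab") as f:
        f.write(b"entry\n")
    # Step 2: seek to a random location in the big file and read a few bytes.
    with open(SEEK_FILE, "rb") as f:
        f.seek(random.randrange(SEEK_SIZE - 16))
        f.read(16)
    loops += 1

print(f"{loops} loops in one minute")
```

The two-thread variant would simply split LOGS in half and run one copy of the loop per thread.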
Shadowmaster625 - Thursday, November 11, 2010
Intel charging too much? Surely you must be joking! Do you know what the Dow Jones Industrial Average would be trading at if every Dow component (such as Intel) were to cut their margins down to the level of companies like Kingston? My guess would be about 3000. Something to keep in mind as we witness Bernanke's helicopter-induced meltup...