Solid State Drive (SSD) Benchmark Performance Testing
Articles - Featured Guides
Written by Olin Coles
Monday, 22 December 2008
SSD Testing: JMicron vs ICH10R

EDITOR'S NOTE 27 August 2010: This article was originally published in December 2008, before NAND Garbage Collection (GC) technologies such as TRIM existed and when very few Solid State Drive controllers were available. Sections of this article have been updated to reflect improvements in benchmark software and driver refinements, and the conclusion has been re-written.

I'm not happy to be writing this article. In fact, I'm completely sickened by the thought of it. If SSD technology is close to you, or you are also a reviewer, perhaps by the end you will be too. You see, I have been working very hard to cover Solid State Drive technology. It's literally become an addiction, and I have spent countless hours and untold amounts of money to support my habit. But I've had a moment of clarity, and it's time that I come clean and admit that I've got a problem. A legitimate problem. This particular problem isn't a love for all things SSD, which certainly exists; this is a problem that affects you as well. So to put things more plainly: everyone using SSDs has a problem, and there's a problem with the way we're all testing SSDs.

After nearly two years of testing Hard Disk Drive (HDD) and Solid State Drive (SSD) products, you might presume that most of the bugs have been ironed out in regard to test tools and SSD firmware. This is not the case, not even by a long shot, as any early JMicron SSD owner will tell you. Problematic firmware aside, testing SSD technology has become a real challenge... at least if you're trying to do it 'right'. That's the point of this article, and to get you to better understand the problem, I must take you through some of my own experiences. Regardless of your position relative to SSD products, whether you test them or use them, every page of this guide will detail the problem behind our favorite technology. Most of it isn't pretty, I'll warn you now; but every single word is worth reading.
In this article, Benchmark Reviews sets out to detail the performance differences between SSD products attached to the JMicron JMB322 host-based controller and the Intel ICH10R, but instead discovered a larger problem. While read and write performance can sway unpredictably between SATA controllers, the most disturbing subject is the inconsistency between test tools. One version of a benchmark tool may record different results from another, while some other tools report different results even when the same version and driver are used. Please join me as I confess my frustrations in detail, and educate you on my test miseries before you are forced to experience them yourselves.

Intel ICH10R SATA Controller

EDITOR'S NOTE: ATTO Disk Benchmark offers basic bandwidth speed results at a low queue depth, and more recent versions now offer broader test settings. ATTO Disk Benchmark uses buffered spot samples, therefore NAND wear conditions on the segments tested may impact performance results.

The Intel I/O Controller Hub (ICH) Southbridge has been around for nearly a decade. Early in 2003 the ICH5 chip added support for SATA drives, which became native on all 8xx-series motherboards going forward. To this day, the Intel ICH is still the leading SATA drive controller among desktop computer systems. In late 2007, when Solid State Drive (SSD) products were first reaching the retail market, it was discovered that Intel ICH9 chips exhibited strange behavior when connected to SSD products. As a result, many large review websites, Benchmark Reviews included (but not so large, I add with a frown), used one of the other SATA controller chips for testing. Because my test platform at the time was a Gigabyte X48 motherboard, Benchmark Reviews tested with the JMicron JMB363 chip.

Since the launch of Intel's Core i7 CPU and X58 platform, which continues to use the ICH10 and ICH10R Southbridge introduced with the P45-Express chipset, Benchmark Reviews has adopted this platform for SSD testing. But the big question remains: can the Intel ICH perform well when a Solid State Drive is attached? The answer is yes, but the details still say no. As a small example, illustrated below are two tests of the same SSD on two different SATA controllers. The first example is the Mtron MOBI 3500 on the Intel ICH10R, which comes on our Gigabyte GA-EX58-EXTREME test platform. The Intel ICH10R offers a final transfer rate of 102.4 MBps write and 103.6 MBps read for the MSD-SATA3535 SSD.
Connecting the same Mtron MOBI 3500 to the JMicron JMB322 controller yields a different result, however. While the write-to bandwidth ever-so-slightly improves from 102.4 to 103.1 MBps, the JMicron controller loses almost 4 MBps in read performance. Not very big differences, I'll admit, but evident nonetheless.
So is this a trend we'll see more of? You'll soon see, as Benchmark Reviews has taken several samples to compare. But by the end of this article the decision still may not be so clear, and picking a 'winner' may be like choosing between presidential candidates. Before we reach a conclusion though, let's peek at how JMicron chips compare.

JMicron JMB322 SATA HBC

EDITOR'S NOTE: ATTO Disk Benchmark offers basic bandwidth speed results at a low queue depth, and more recent versions now offer broader test settings. ATTO Disk Benchmark uses buffered spot samples, therefore NAND wear conditions on the segments tested may impact performance results.

Oh my, where do I begin. The JMicron host-based controller (HBC) has been loathed, loved, and hated by me, all inside of only two short years. As a result of Intel's decision to no longer support PATA connections in their ICH (which ended with ICH7), I experienced my first taste of disdain for the JMicron add-in controller. The early JMicron chips were first used to drive the PATA IDE channels on Intel-based motherboards beginning with the i965P (ICH8) series back in early 2007. Complaints of stuttering and failure filled support forums, and were enough to convince ASUS and MSI to switch their loyalty and use Marvell chips. In retrospect, this might have been the smartest decision they've ever made, as manufacturers such as Abit, ECS, and Foxconn would lose major market share as a result.

So how did JMicron survive long enough to see the P35, X38, P45, X48, and now X58 platforms? Improving the drivers was a major factor, as were the refinements in follow-up chips. But in the end, it ultimately came down to affordable licensing and a low implementation cost for manufacturers. It also helped that some manufacturers disguised the chip as their own, such as the JMB363 controller on the Gigabyte P35 through X48 series, for example. This HBC was concealed and marketed as 'Gigabyte SATA 2', which was also stamped on the controller chip itself beside a small JM symbol. The lack of clear specifications makes tracking down drivers a little more difficult, which matters considering how outdated the support portion of their website is. Ultimately JMicron decided to publish a public FTP with their latest JMB3xx Windows drivers (ftp://driver.jmicron.com.tw/jmb36x/), which now receives frequent updates.

But as most gamers know, newer doesn't always mean better, as the JMicron controller has been plagued with integration issues from start to finish. Those early JMicron controllers were painfully problematic, but even the most recent chips come with their own 'baggage'. Take for example the Gigabyte GA-EX58-EXTREME X58 motherboard which we now use for testing. This product comes with six SATA ports driven by the Intel ICH10R Southbridge, and four more 'Gigabyte SATA 2' ports which are revealed to be a JMicron JMB322 chip (and to further confuse consumers, the JMB322 controller uses the JMB36x driver package). Problems get worse if you don't specifically designate this as a JMicron controller, because otherwise it receives a generic Microsoft driver from 2001.

So how much difference could drivers really make? Well, the benchmark results below were taken with the current and previous versions of JMicron's JMB36x driver after a fresh install of Windows XP Professional SP3. Testing with version 1.17.43.05 dated 11/03/2008 produced the first set of results below, yielding a final bandwidth of 88.9 MBps write-to and 136.8 MBps read-from.
On another freshly installed O/S, driver version 1.17.45.01 dated 12/16/2008 produced the results seen below only a few minutes later. Same hardware all around, same SSD used for testing, but roughly one month's difference between drivers, and the results become incomparable. Viewed in isolation, most testers would dismiss this as an erroneous reading. From my perspective, however, the JMicron drivers show nearly no perceivable write-to changes, but the newer driver loses nearly 26 MBps of read bandwidth.
My conclusion on JMicron products, at least in regard to drivers, is that they've got a long way to go before the bugs are ironed out. They're getting better, no doubt, but by the end of this article you'll be scratching your head and wondering which way to go. But wait, it gets better. JMicron is a relative newcomer to the industry, which leaves us wondering how Intel can still get it wrong. In the next few sections, Benchmark Reviews will compare test performance between chips and drives.

SSD Testing Methodology

Solid State Drives have traveled a long, winding course to finally get where they are today. Up to this point in the technology, there have been several key differences separating Solid State Drives from magnetic rotational Hard Disk Drives. While the DRAM-based buffer size on desktop HDDs has recently reached 32 MB and is ever-increasing, there is still a hefty delay in the initial response time. This is one key area in which flash-based Solid State Drives continually dominate, because they lack moving parts that must "get up to speed". However, the benefits inherent to SSDs have traditionally fallen off once the throughput begins: the SSD reads or writes at a high constant rate, whereas the HDD starts fast and tapers off in performance. This makes the average transaction speed of an SSD comparable to the data burst rate mentioned in HDD tests, albeit usually lower than the HDD's burst speed.

Test System
Disk Hardware
Test Tools
Comparing a Solid State Drive to a standard Hard Disk Drive is always relative, even if you're comparing the fastest rotational spindle speeds. One is going to be many times faster in response (SSDs), while the other is usually going to have higher throughput bandwidth (HDDs). Additionally, there are certain factors which can affect the results of a test, which we do our best to avoid. Nevertheless, some tests will not be completely representative of the individual product tested, but rather a combination of the product and supporting system hardware. An excellent example of such a test is the write-to bandwidth benchmark, which relies on other system components to force data onto the drive, along with the bus bandwidth to support it. This dependency on system hardware is why you will see Benchmark Reviews place an emphasis on read bandwidth over write tests.

HD Tach: HDD Performance

HD Tach has been used for many years now, and has earned its name as a reliable tool for testing Hard Disk Drive (HDD) performance. Of course, SSDs aren't like HDDs in any way, except for purpose. This is why we feel it's important to compare apples to apples, even when they're all carried in the same basket. It's critical to understand that Solid State Drives use an internal disk controller that manages the read and write algorithm for wear-leveling. Test tools like HD Tach, HD Tune, and Iometer all use static spot samples for measuring bandwidth performance, but SSDs don't have static positions on the drive: each request is sent to a less-used location in the bank of NAND flash modules, whereas an HDD saves to a fixed sector on the disk.

So we begin the way Simpli Software would want their product used: with the Western Digital Raptor HDD. Using the exact same hardware across both tests, with the only difference being the SATA drive controller, the following illustration begins to tell the story. While connected to the JMicron JMB322 controller, the Raptor HDD scores a 112.3 MBps burst with 75.2 read and 102.0 write. Things get confusing though, because once the WD740ADFD drive is connected to the Intel ICH10 Southbridge, performance is much different. On the ICH10 our test HDD reports 120.8 MBps burst (an increase of 8.5 MBps), while read performance is identical and write performance drops from 102.0 down to 75.8 MBps (a decrease of 26.2 MBps). What happened?
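Before speculating, it helps to check the arithmetic behind those deltas. Here is a minimal Python sketch, using only the HD Tach figures quoted above, that derives the percentage margins discussed next; the helper function is purely illustrative and not part of any benchmark tool:

# Percentage margins derived from the HD Tach figures quoted above.
def percent_advantage(faster_mbps: float, slower_mbps: float) -> float:
    """How much faster (in percent) the first figure is versus the second."""
    return (faster_mbps - slower_mbps) / slower_mbps * 100.0

# WD Raptor write bandwidth: JMicron JMB322 (102.0) vs Intel ICH10R (75.8)
print(f"{percent_advantage(102.0, 75.8):.1f}%")    # ~34.6% -> the 'almost 35%'
# WD Raptor burst speed: Intel ICH10R (120.8) vs JMicron JMB322 (112.3)
print(f"{percent_advantage(120.8, 112.3):.1f}%")   # ~7.6% -> the 'nearly 8%'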
I'm not entirely sure why the JMicron JMB322 would outperform the Intel ICH10R chip by almost 35% in write bandwidth, despite the Intel chip's nearly 8% burst speed advantage. Let's see if testing a Seagate 7200.11 HDD will reveal any clues.
Hmm... Judging from the burst speeds, I'm beginning to wonder about HD Tach's ability to accurately measure this performance; or maybe it's just the Intel ICH10 SATA controller? More questions, and still no solid answers. Using the ST3500320AS 7,200 RPM SATA-II Hard Disk Drive didn't really prove very much to us. The sequential read speeds were nearly identical, again. The difference between the write speeds on each controller seems to have been reduced to only 8%, or 6.9 MBps. This is a long way off from the Raptor's results, but it still indicates an advantage for the JMicron controller.

All of this is good and fine if we're planning to test HDDs for the foreseeable future, but this is hardly the case. Solid State Drives are quickly gaining ground, and with massive price drops even the latest SSD technology can be purchased by mainstream consumers. In our next section, HD Tach is used to test and compare SSD technology. We will soon see if HD Tach is reliable, or if the Intel chipset is second-best (or worse).

HD Tach: SSD Performance

EDITOR'S NOTE: HD Tach offers very basic bandwidth speed results at a single queue depth. HD Tach uses buffered spot samples, therefore NAND wear conditions on the segments tested may impact performance results.

Okay, before we begin this section, I must give the legal disclaimer: Simpli Software has not designed their HD Tach software to be used for SSD technology. Just look at the name; HD stands for Hard Drive. Cross your fingers and hope for an SSD Tach sometime soon, but in the meantime we use what we've got. In our last section, we witnessed HD Tach report a burst speed advantage to the Intel ICH10 chip, while the write bandwidth heavily favored the JMicron chip. Read performance was virtually (and sometimes literally) identical. In this section, it's time to test SSD products and see if HD Tach offers consistent results.

For anyone who didn't fully understand how data is written to a Solid State Drive, perhaps one of my famous analogies will help explain. Think of an ice cube tray. If the tray were an HDD, water (data) would fill the tray from beginning to end. But when the tray is an SSD product, water goes to the least-filled areas first, so that the tray receives a level amount all around. It is this wear-leveling feature that causes so many problems for benchmark software, primarily because most tools were designed to spot-test sectors of a hard disk, and with an SSD these spots could be anywhere depending on the amount of wear to the modules. (A toy code sketch at the end of this section makes the idea concrete.)

Beginning with the Silicon Power 32GB SLC SATA-II SSD, we see that burst speed still favors the Intel ICH10 controller, even though SSD burst speed is a virtually useless figure because of internal cache mechanisms. Sequential read speed using the JMB322 is 97.1 MBps, while the ICH10 produces 103.0 MBps, a roughly 6% improvement. This shatters our previous findings, which demonstrated comparable read-from results between chips. The next major difference is the write-to performance, which still favors the JMicron controller with 76.6 MBps over Intel's 66.8. That's expected, but look at the read pattern: JMicron has a somewhat steady waveform, while the Intel controller makes me a little concerned.
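To make the ice cube tray analogy concrete, here is a toy Python sketch of least-worn-first block allocation. This is a deliberate simplification under assumed behavior; real SSD controllers use proprietary and far more sophisticated wear-leveling schemes, but it shows why a tool that spot-tests the same logical location can land on a different physical block every pass:

# Toy wear-leveling allocator: every write goes to the least-worn physical
# block, like water filling the least-full wells of an ice cube tray.
class ToySSD:
    def __init__(self, num_blocks: int):
        self.wear = [0] * num_blocks   # write count per physical block
        self.mapping = {}              # logical block -> physical block

    def write(self, logical_block: int) -> int:
        # Remap the logical address to the least-worn physical block.
        physical = min(range(len(self.wear)), key=lambda b: self.wear[b])
        self.wear[physical] += 1
        self.mapping[logical_block] = physical
        return physical

ssd = ToySSD(num_blocks=8)
# A benchmark repeatedly 'spot testing' logical block 0 actually touches a
# different physical block each time:
print([ssd.write(0) for _ in range(5)])   # -> [0, 1, 2, 3, 4]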
Moving on to our recent 2008 Editor's Choice Award winner, the G.Skill FM-25S2S-64GB MLC SSD, we realize that anything can happen with HD Tach. Sure, the burst speeds (which should really be ignored for SSD testing) very slightly favor the Intel ICH10 chip, which has been consistent throughout our testing, but the rest is a little different. The G.Skill SSD performs read-from bandwidth at 123.2 MBps on the JMicron, while the ICH10 chip reaches only 113.2 MBps. Going from identical performance between HDD tests to an almost 10% difference swaying back and forth between controllers is not something a tester wants to see.
At this point, my technically-minded brain wonders if the internal controllers present inside SSD products are tuned specifically for different SATA controllers, or if HD Tach has become suspect. All good questions, and so far there are no good answers. By this point in my Solid State Drive benchmark performance testing, you might have noticed that we're using 6 GB of triple-channel DDR3 on a 32-bit Windows XP platform. Benchmark Reviews is well aware of the memory mapping limitations of 32-bit software, so please don't send us mail to remind our staff. But does it matter? Does it make a difference to performance or test results? Benchmark Reviews has taken the extra steps necessary to answer this question, and perhaps save the good name of HD Tach, all in our next section.

Do Memory Amounts Matter?

EDITOR'S NOTE: ATTO Disk Benchmark offers basic bandwidth speed results at a low queue depth, and more recent versions now offer broader test settings. ATTO Disk Benchmark uses buffered spot samples, therefore NAND wear conditions on the segments tested may impact performance results.

I've seen several reliable sources mention in their analysis of Solid State Drive testing that system memory amounts will impact the test results. Whenever I hear or read something that doesn't seem logical to me, I don't just question it, I question and answer it. This has been the driving force behind the information below, which details the differences between testing with a 3 GB triple-channel CL6 system memory kit and a 6 GB triple-channel CL8 kit. Beginning with the Mtron MOBI 3500 on the Intel ICH10 SATA controller, the results indicate that there is no perceivable difference between memory sizes. The results are practically identical, despite the difference in memory timings and module sizes.
Continuing on with our test of the MSD-SATA3535 SSD while attached to the JMicron JMB322 SATA controller, we again see that performance is identical between latencies and amounts.
One surprising by-product of this test is the difference between controller chips. The Mtron 3500 performed at 107.2 MBps write and 99.0 read on the ICH10, while the JMicron recorded an identical 107.2 MBps write and 98.0 read. So in ATTO Disk Benchmark v2.34 it appears that there is very little disparity in SSD performance between controllers and RAM amounts. Does this mean that HD Tach was broken all along? Let's run a comparison to be sure...
Well, there you go. All these years I've been testing SSD products with HD Tach, and now I'm beginning to see that maybe I should have paid closer attention to the name of the software. It appears that HD Tach is prone to reporting test anomalies: SSD performance is not only affected by the SATA controller, but also by the internal cache of the SSD, while the amount of system memory impacts the storage driver's buffer. Lesson learned. Hopefully other technical writers and editors will not be too proud to read my article, or will discover this for themselves.

Proving The Results

EDITOR'S NOTE: CrystalDiskMark is not suggested for testing SSDs with versions prior to 3.0, as the software had not been written for or optimized toward SSDs; NAND wear conditions on the segments tested may therefore impact performance results.

That last section was a bit of a bomb, dropped dead-center on the belief system that many testers have held for years. So can I prove the results? Sure, of course we can. Let's go over the points we need to cover:
Alright, now that we're all on the same train of thought, it's time to leave the station. Using the free CrystalDiskMark software (version 2.2), we tested that same Mtron MOBI 3500 and analyzed our results after three test loops. In the first set, the JMicron controller was used. The sequential read performance was 98.38 MBps, while write speed was 109.2 MBps. Dropping down to 512 KB chunks, the read speed is 96.31 MBps while write performance is 26.08 MBps. Finally, with the use of only 4 KB chunks, we see read performance at 23.17 MBps and write bandwidth at 0.326 MBps.
Now let's see how the Intel ICH10 controller reacts. The ICH10 controller yields a nearly identical 97.07 MBps sequential read performance, and a slightly lower 105.1 MBps write bandwidth. A 1.3% read performance reduction and a 3.9% drop in write performance will probably not raise any concerns. Or are these results just the beginning?
Upon closer examination, the 512 KB chunk tests perform essentially the same: a 1.6% read speed decline is widened by a nearly 6% decline in write performance versus the JMicron controller. This is beginning to get confusing again, and the 4 KB chunk test only makes matters worse. The Intel ICH10 actually records a 46.9% improvement over the JMicron JMB322 controller in 4 KB reads, recording 34.03 MBps over 23.17. The opposite is true when we look at 4 KB writes, which drop to 0.26 MBps on the ICH10 for a 25.4% difference in favor of the JMicron. So what does this all mean? How does this affect performance testing for reviewers, and results for consumers? Read on for my final conclusion on the topic.

PCMark05 Performance

EDITOR'S NOTE: According to Futuremark's PCMark Vantage White Paper document regarding HDD tests (see page 27), the workload for each subsystem depends heavily on the overall system configuration. Different components can be severe bottlenecks and cause variance in the results of the actual component performance measurement. Fragmented HDDs, old drivers, a cluttered OS, 3rd-party applications, etc. are all factors which will affect the benchmark results. The PCMark HDD test consists of benchmarks using software applications that depend on processor, memory, and graphics hardware. Additionally, NAND wear condition may impact performance results.

PCMark05 (and PCMark Vantage for Windows Vista) are tools intended to test real-world performance for computer hardware. With specific regard to SSD products, Benchmark Reviews configures PCMark to conduct only the 'HDD' tests. With a focus on storage media, PCMark05 tests in five different areas and then generates a composite PCMark score. In the example below, the averages of three tests on the G.Skill FM-25S2S-64GB SSD were charted and compared. During our testing the first test was discarded, as were the highest and lowest tests. This essentially leaves the most average representation of performance.
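As an aside, the score-filtering methodology described above (discard the first run, then drop the highest and lowest of the remaining scores before averaging) is simple to formalize. A minimal Python sketch follows; the six run scores are hypothetical values for illustration, chosen so that three results survive the filtering as in our charts:

# Discard the first (warm-up) run, then trim the highest and lowest of the
# remaining scores before averaging, as described in the methodology above.
def filtered_average(scores: list[float]) -> float:
    remaining = sorted(scores[1:])     # drop the first run, sort the rest
    trimmed = remaining[1:-1]          # drop the lowest and highest outliers
    return sum(trimmed) / len(trimmed)

runs = [8450.0, 9120.0, 9185.0, 9150.0, 10260.0, 9090.0]   # hypothetical scores
print(filtered_average(runs))   # averages the three mid-range results: ~9151.7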
The JMicron SATA controller appeared to run very consistent tests from start to finish, but there were rare occasions where the highest or lowest test was more than 10% out of the normal range. To the same extent, and possibly more frequently, was the occasional irregularity in a test performed on the Intel ICH10 controller. Despite these test irregularities (which were discarded as noted above), the averages indicated a performance benefit for the Intel ICH10 SATA controller over the JMicron JMB322. While I would normally not consider the 3.5-6.4% margin to be anything worth mentioning, the 25.1% improvement in Virus Scan results was enough to make me take notice of Intel's ICH10 controller. Additionally, the File Write test showed a 13.2% improvement for the ICH10, even though all of our previous tests with other software have claimed differently. This is all good for broadening the differences between Intel and JMicron SATA controllers, even though the results are extremely confusing between software tools. But the really interesting news is how PCMark05 compares the performance of one drive to another.
So if we take PCMark05 at face value, the Mtron MOBI 3000 seems to easily out-perform the G.Skill SSD, and doubles the performance of the best Hard Disk Drive products. Sound believable? Not to me, it doesn't. Since I personally use all of these products each and every day on several different test systems, I can assure you that PCMark05 is not properly comparing HDD products to their SSD counterparts. I suppose the disclaimer must rear its ugly head again, since Futuremark was really quite specific in naming this test 'HDD' and not 'storage media'. Which brings me to my final thoughts and conclusion...

EVEREST Controller Analysis

EDITOR'S NOTE: Lavalys EVEREST continues to be an excellent tool for benchmark testing maximum sequential linear read and write bandwidth speeds; however, NAND wear condition may impact performance results.

After completing the initial version of this article, I received several messages asking why there wasn't a chart with all of the results compiled to show the difference in SATA controllers. The truth is, I actually wanted visitors to read through this article in its entirety, as the information was embedded on each page. But alas, I know that visitors like their information hand-fed to them, so I've returned to add this section. Instead of hand-picking data from the collection of software programs we've used for benchmark testing, which would be repetitive considering that the results have already been revealed, I have decided to use Lavalys EVEREST to demonstrate the trend that's been repeated.
This first chart illustrates the linear read performance of the G.Skill 64GB 2.5" SATA-2 SSD FM-25S2S-64GB on the Intel ICH10R Southbridge SATA controller. As you can see, there is very little movement in the waveform, and the average sustains a very tight range. With a minimum of 136.1 MBps and maximum of 137.6, the average is a very consistent 137.5 MBps. So let's see how the JMicron controller handles this same SSD...
Attached to the JMicron JMB322 SATA controller, the G.Skill SSD performs considerably worse. The minimum sustained read bandwidth occasionally dipped to 108.3 MBps, while the maximum and average were both 111.5 MBps. So if we compare results for the G.Skill FM-25S2S-64GB, the Intel ICH10R controller offers a full 26.0 MBps sustained read performance advantage. Now let's see about write performance on the same drive...
Back on the ICH10R, the G.Skill SSD gives some movement between minimum and maximum linear write performance, with an average of 89.9 MBps. That's not bad, so let's see how different the performance is for JMicron's chip...
On the JMicron JMB322 controller, our G.Skill SSD performs 7.4 MBps worse than on the Intel ICH10R. The G.Skill SSD indicates a heavy preference towards the Intel chip, with a 26.0 MBps linear read performance advantage and a 7.4 MBps linear write bandwidth gain. I tested several other SSDs using the two SATA controllers, and the results were much closer for some drives than others. The OCZ SATA-II SSD (OCZSSD2-1S32G) gained only 4.5 MBps read bandwidth on the ICH10, and a mere 2.3 MBps write improvement over the JMicron. Conversely, the Western Digital Raptor 74GB HDD actually performed better on the JMicron controller, even though the JMB322 only improved linear read speed by 0.8 MBps for 75.2 total, while write speed improved by only 0.4 MBps for 74.2. While this matches the trend we received with PCMark05 and ATTO Disk Benchmark, it conflicts with HD Tach and CrystalDiskMark. So who do we believe, and can any of these tools be 'right'?

SSD Testing Final Thoughts

EDITOR'S NOTE: This section was written prior to basic industry standardization and mass product production. While firmware updates still pose an issue for benchmark testing, many of the problems no longer exist in modern SSD product lines.

Nobody is perfect. However, when you're testing high-dollar hardware you had better make sure your test results are. This is what makes me so sick about this article: the data itself is carefully collected, but the software and controller technology used to collect it are flawed. Essentially, I have worked hard to produce nearly meaningless results. Testing SSDs is a dangerous endeavor, because the internal disk controller firmware present in every Solid State Drive changes from the time units are released to reviewers and media partners to the point they reach consumers, with several unannounced revisions thereafter resting invisibly on store shelves. I've already seen it a few times in my two years of testing these products, just as others have also reported. This is why the documentation of specific test variables and product part numbers is so critical. The item tested at Benchmark Reviews may have the same retail name, but the part number may change several times in small (undetectable) revisions. Taking things one step further, some manufacturers have started to replace costly high-performance NAND flash in their SSDs with slower, more cost-effective modules. In some cases, the Solid State Drive may even switch from SLC to MLC without a change in product name or part number. This makes it very difficult to keep results consistent, even among the same SSD product line.

Hard Disk Drives come with firmware to interface with the disk controller attached to each unit. But Solid State Drives have a disk controller built into the device for wear-level management, which then interfaces with the motherboard's own disk controller. Since no manufacturer is ever going to agree on a mutual SSD internal-controller technology, we the consumers are stuck with a myriad of different SSD algorithm technologies that mate to a handful of controllers which are instructed by countless driver revisions. It's going to be a tough job to test SSD products, no matter which software tool you decide to test with, and regardless of the platform controller used.
It comes down to one very simple problem: Solid State Drive technology uses wear-level algorithms to ensure each NAND module receives equal usage, but HDD test tools are designed to sample disk 'sectors' for performance, and SSDs never consistently read or write to the same physical sector. All of this explains why we haven't seen a test tool specifically targeted towards SSD technology. But if we take a moment to analyze the test tools available, we find some lesser-known tools becoming more useful for testing SSD products when compared to the best-known benchmarks of the past few years. You'll get my real feel for the situation in the SSD Testing Conclusion, so please read on.

SSD Testing Conclusion

EDITOR'S NOTE 27-August-2010: This section has been updated to provide relevant testing advice for modern SSDs using current benchmark tools.

As we've explained in our SSD Benchmark Tests: SATA IDE vs AHCI Mode guide, Solid State Drive performance revolves around two dynamics: bandwidth speed (MB/s) and operational performance (IOPS). These two metrics work together, but one is more important than the other. Consider this analogy: operational IOPS performance determines how much cargo a ship can transport in one voyage, and bandwidth speed is how fast the ship moves. By understanding this and applying it to SSD storage, there is a clear importance set on each variable depending on the task at hand. For casual users, especially those with laptop or desktop computers that have been upgraded to use an SSD, the naturally quick response time is enough to automatically improve the user experience. Bandwidth speed is important, but only to the extent that operational performance meets the minimum needs of the system. If an SSD has a very high bandwidth speed but low operational performance, it will take longer to load applications and boot the computer into Windows than if the SSD offered higher IOPS performance.

If you've read through the details of this article, you've probably reached this section wondering which benchmark software is good for testing SSDs. When this article was originally written in late 2008, there were very few options available. As of August 2010 the SSD market has dramatically expanded, yet only a few more useful SSD test tools exist. The DOS-based System Speed Test v4.78 by Vladimir Afanasiev is an excellent precision tool for measuring SSD response time, and its results have always been extremely accurate and consistent. System Speed Test also offers sequential read benchmarks, but these results are not suited for testing SSDs because a proper device driver is not present. Response times reported by HD Tach, HD Tune, or similar software are often rounded to the nearest tenth of a millisecond; System Speed Test reports exact figures to the nearest thousandth.

Bandwidth speed tests on Solid State Drive technology are best done with linear tools, in my opinion. Linear test tools reveal buffer limitations and NAND IC separation. There are two software suites which provide linear testing that I am aware of: Lavalys EVEREST offers an excellent 'Disk Benchmark' component as part of the test suite. Equipped with both Linear Read and Linear Write bandwidth tests with selectable block size options, I feel that this benchmark tool offers the best bandwidth snapshot of SSD performance. SiSoftware Sandra also offers linear bandwidth tests, although I personally feel the options and ergonomics of this software leave a lot to be desired.
As we discovered a few sections back, our benchmark tests showed favoritism towards the Intel SATA controller on one SSD, while performance between the ICH10 and JMB322 was nearly equal using another Solid State Drive. Using linear tests, both EVEREST and Sandra gave the same results per drive and controller. So even with linear testing removing doubt from NAND wear-level algorithm inconsistencies, you'll still have to work around the motherboard controller and its limitations. The Microsoft Windows Operating System is another consideration, as only Windows 7 presently offers TRIM garbage collection support.

Sequential test tools such as ATTO Disk Benchmark, HD Tach, HD Tune, PassMark PerformanceTest, CrystalDiskMark, and AS-SSD are all usable benchmarks, but they occasionally report performance inconsistencies because of buffered spot sampling and NAND condition. Additionally, CrystalDiskMark and AS-SSD often report much lower sequential read and write bandwidth speeds compared to HD Tach and HD Tune, while ATTO Disk Benchmark relies on file-size chunks to report bandwidth. The sequential bandwidth speeds reported by CrystalDiskMark and AS-SSD are so low they become questionable, while PassMark PerformanceTest, HD Tach, and HD Tune reveal very little information about buffer saturation and are prone to NAND condition impacting performance results. Of the sequential tools, ATTO Disk Benchmark is the most preferred because it illustrates bandwidth speed results at varying file-size transfers. From my testing for this article and in other projects, along with the results I've seen from the software tools used, I can conclude that ATTO Disk Benchmark has proven itself consistent in recording SSD bandwidth results, and it doesn't seem to have a preference for the faster DRAM cache mechanisms found in some SSDs. The variety of file-size chunks it tests gives a broad picture of bandwidth performance at each level. Although ATTO is not perfect and still uses spot-testing, it's less imperfect for SSD testing than many of the alternatives.

Other test tools discard the need for measuring raw bandwidth speeds, and attempt a 'real world' approach towards comparing product performance. PCMark05 and PCMark Vantage are two products that fit this description, and they work questionably well for comparing SSD performance. Our own test results section in this article demonstrated that SSDs cannot be benchmarked accurately with PCMark05 or Vantage, and further examination of their White Paper document (pg 27) indicates that the computer system's video card, memory, processor, and Operating System all play a factor in the final score.

One of the most preferred methods of illustrating SSD performance levels is IOPS operational performance measured at a deep queue depth. Unfortunately, there are so many different ways to produce these results that it becomes difficult to determine which tools are most valuable. Most review websites have adopted Iometer as their preferred test tool, because it offers the greatest queue depth and can be configured for sector offset. Since the SandForce SF-1200 SSD Processor was introduced, Benchmark Reviews has used Iometer to express IOPS performance using this configuration file. AS-SSD and CrystalDiskMark (3.0 and later) both offer IOPS performance at deep queue depth, and express their results in MB/s rather than as an Input/Output operations Per Second figure. These are also very helpful at illustrating SSD performance, and useful in comparison.
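Because AS-SSD and CrystalDiskMark report their deep-queue results in MB/s while Iometer reports IOPS, it helps to remember that the two are linked by the transfer size: bandwidth equals IOPS multiplied by block size. A minimal Python sketch of the conversion follows; note that whether a tool counts a megabyte as 10^6 or 2^20 bytes varies, so the decimal constant here is an assumption:

BYTES_PER_MB = 1_000_000   # decimal convention; some tools use 1_048_576 instead

def mbps_to_iops(mbps: float, block_size_bytes: int) -> float:
    # Bandwidth (MB/s) -> operations per second at the given transfer size.
    return mbps * BYTES_PER_MB / block_size_bytes

def iops_to_mbps(iops: float, block_size_bytes: int) -> float:
    return iops * block_size_bytes / BYTES_PER_MB

# The 4 KB read result from our CrystalDiskMark test earlier (JMicron, 23.17 MB/s)
print(f"{mbps_to_iops(23.17, 4096):.0f} IOPS")   # roughly 5,657 IOPS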
While HD Tune does offer IOPS performance, this tool is limited to a single queue depth, which renders the result meaningless for SSDs. In conclusion, Solid State Drives are excellent products with plenty of performance gains to offer even the most casual computer user. Even though the SSD industry has grown, benchmark tools are still limited in their ability to convert performance results into a useful number. Ultimately, I warn readers to regard SSD reviews with a high degree of caution, and to lean towards articles that compare results against a wide variety of well-known HDD products as a baseline. Other useful articles on this topic include SSD Benchmark Tests: SATA IDE vs AHCI Mode, and Marvell SATA-6G SSD Performance vs Intel ICH10. Benchmark Reviews offers dozens of SSD comparisons in our Featured Reviews: Storage section, and we hope you'll give them a read.
Comments
From all of the recent SSD testing I've done, I recommend the Intel ICH for all SATA 3Gb/s SSDs, and the Marvell SATA 6Gb/s controller only for compliant SSDs (presently only the Crucial C300). SATA 6Gb/s HDDs are somewhat pointless, and work just as fast on the Intel ICH10.
I suggest that you also read my AHCI vs IDE article: /index.php?option=com_content&task=view&id=505&Itemid=38