An Argument Against Expensive Solid State Drives
Articles - Opinion & Editorials
Written by Olin Coles
Friday, 31 August 2012
An industry that began with ultra-durable storage devices transformed overnight into NAND flash speed shops scratching for every I/O... and dollar.

I've been reporting on solid state drive technology since it made its retail debut back in February 2008, nearly five years ago. From the very start I've been a huge proponent of all things SSD, as the technology closes the gap between the fastest computer system components (processor, followed by memory) and the slowest (the data storage drive). But as speeds grew faster with each new generation, one giant gap remained: price. The cost of components has dropped year after year, yet today the vast majority of low-capacity entry-level SSDs still fetch about $1/GB, compared to less than $0.20/GB for high-performance hard disk drive technology. In this editorial, I chronicle the current state of retail solid state storage devices.

While the technology has been around for decades, there wasn't much progress in the solid state storage industry until 2008. As the system memory market sputtered (or collapsed, depending on your viewpoint), a market-wide oversupply of memory components forced prices so low that DDR2 and DDR3 became dirt cheap. In the pursuit of profit, many 'memory' companies reduced their mainstay product line and branched out into new territory. Some found margin in power supply units while others discovered it in computer cases, but a few would take a gamble on a little-known technology and invest in a consumer storage product called the Solid State Drive.

Around this same time, SSDs were being used in experimental aerospace and military applications, as they were among the only products durable enough to withstand constant vibration, high-gravity impact shock, and extreme temperature ranges. The first SSDs utilized SLC (single-level cell) NAND, a component that was expensive to produce and offered unimpressive storage capacity.
These early devices were very expensive as well, but were usually funded by taxpayer dollars for government projects. As SSDs reached store shelves, sticker shock drew deep criticism from consumers, who argued the new storage devices failed to deliver enough value to justify the extremely high purchase price. In response, manufacturers determined that faster speeds would add to that value.

By 2009, solid state devices were fitted with more-affordable MLC (multi-level cell) NAND flash components, allowing for impressive sales margin with every unit sold. The market soon exploded with SSDs, and brands fought to occupy shelf space beside competing hard disk drives. Finished-goods companies (those who merely brand a product with their label) grabbed attention by designing bright, colorful packaging, a move that set their product apart from the familiar silver finish of hard drives. While companies late to the market worked desperately to lure consumers with edgy graphics or unique color schemes, the early players were beginning to tout performance specifications.

For the next year, transfer speed was the primary talking point in SSD marketing material. Combined with the release of SATA 6Gb/s on enthusiast motherboards, 2010 saw leading brand names clamor for attention by producing the fastest SSDs possible. Thanks to testing done by websites such as Benchmark Reviews and others, the emphasis on raw speed would soon yield to IOPS (input/output operations per second) performance. Of course, this only paved the way for manufacturers to brag about their new I/O performance specifications, keeping them ahead of other brands who still sold product based on speed alone. IOPS performance had always been a part of solid state storage, but it wasn't tapped as a marketing tool until media zeroed in on its relevance to real-world performance.
Unfortunately, this new marketing maneuver gave manufacturers license to continue offering SSDs at an ultra-premium price that failed to close the gap with HDDs. By the time 2011 arrived, some finished-goods companies had invested in technology that would enable them to compete outside of SATA storage. For example, PCI-Express SSDs were introduced to further extend the run on speed and IOPS performance. By this point, though, consumers were beginning to notice that one generation of SSD performed nearly identically to another in real-world computing tasks. Benchmarks be damned.

So for the first time since their debut, there was a real concerted effort to finally bridge that price gap. Enter the solid state hybrid drive (SSHD). Hybrid storage looked great on paper, and truly delivered much of the same speedy application performance users had enjoyed with its solid state counterpart, but it wasn't enough to write off SSDs. For the remainder of 2011 and well into 2012, manufacturers would introduce a myriad of hybrid storage solutions in an effort to reduce overall costs while still delivering the quick application response time that had earned SSDs their reputation. This effort continues today, and in many respects it has been succeeding, but the cost of true solid state drives has barely changed despite the introduction of new ultra-efficient NAND construction processes and much less expensive flash components.

Other factors have also influenced SSD pricing, such as portable electronics like smartphones and tablets. These devices have become extremely popular, forcing NAND flash suppliers to give them first consideration for access to supply. So will the Solid State Drive ever achieve affordable pricing like its hard disk counterpart? Not likely, and certainly not in the foreseeable future.
NAND flash pricing has taken a nosedive over the past twelve months, but supply is quickly consumed by portable and mobile devices produced by multi-billion-dollar corporations. For the first time since they hit the market, manufacturers are finding it difficult to sustain profits through SSD sales because of fierce competition and service/support costs. This has forced some to bow out, and others to seek a buy-out of their brand. If there were ever a time when hybrid storage had a fighting chance to succeed, that time is right now. Have you been holding out for SSD pricing to drop to an affordable level before you buy?
Comments
The title of this article should have been "An Inaccurate Depiction of the Progression of Solid-State Drive Technology." Clearly, there was no understanding of the SSD market before 2007, and surely the author can't be serious with this statement: "but the cost of true solid state drives has barely changed despite the introduction of new ultra-efficient NAND construction processes and much less expensive flash components". I am not sure if he is familiar with SSD prices in 2006-2009, but they clearly weren't below $1/GB as they are today... in fact, NAND prices weren't below $1/GB even during the collapse of the industry in late 2008/2009, and they were selling components 50% below cost!
He is right that hybrid storage has its best shot right now; too bad there are barely any credible second sources in the market, and there is a real possibility that there will be FEWER hybrid drives shipping this year than last year; it's been that poor thus far! Not sure how you came to the conclusion that hybrid drives are as good as SSDs (did you compare them with the best SSD, and what will happen when PCIe PC SSDs hit the market?)... you certainly won't find any major PC OEM telling you hybrids rule... oh wait, pretty much no PC OEMs use them.
BTW, which SSD vendor has bowed out? Last I checked everyone and then some is still in the game because they can get flash at near cost and use SandForce...next year is when we see companies bow out when prices are stable and flash availability is scarce.
As for your out of context quote, I state that the price has barely changed in the past TWELVE MONTHS. Of course, I can tell by your email address that you work in the storage industry... so way to troll.
Joe also made a legitimate point about your misleading title. I thought this would be a discussion about why we should not spend a lot of money on expensive brands of SSDs as opposed to bargain brands. Instead you seem to be more interested in talking about hybrid drives.
Hybrid drives seemed like a wonderful idea when it would cost you 200 dollars for 120GB of space, but now you can get that sort of space for less than 80 dollars. This is more than enough for an operating system and most of your applications. You can then buy a 1TB hard drive to store data, and all this can be done at a lower cost per GB than you would have spent buying your hybrid. If 120GB is not large enough, then you could move up to the 240GB-256GB range, where the dollar-per-GB cost is even lower.
There will always be some people who want to store all of their data on one drive and forget it. These people could go for a hybrid drive and get better performance than anyone who had a regular hard drive. But for everyone else, hybrids are not popular because they don't make a lot of sense when it comes to performance or price. Great idea a year ago; does not make much sense now.
As for the title, I thought he was just saying all SSDs are expensive (per GB) and that we just shouldn't purchase them. I thought maybe he had found the SSD Achilles' heel or something that would convince me not to purchase SSDs any more.
Sure, maybe they are not really competing with hard drives on capacity, but they are kicking them all over hell's half acre on performance. I'm perfectly happy having a 120GB SSD in my system as a boot drive, with my browsers and a few games I play a lot installed on it, and using my cheap-ass 1TB hard drive for storing other stuff, some apps, and games that I don't consider a priority for SSD-like speeds.
Amen-Ra - I'm curious to know the specs of your drives? I've purchased a few SSDs the past couple of years and the only point at which I've noticed a price drop is when lowering the specs and getting into "budget drives" versus "performance drives" (purposely leaving out counterfeit/knockoffs in my statement as I wouldn't take the chance with them).
All in all, an okay article covering what most folks who come to BMR are already aware of, while those just discovering the technology will have a bit more history to supplement their research with.
I now use a Samsung 830 256GB that cost me 189 dollars on sale. I have seen the drive under 200 dollars on several occasions, and it currently retails for 219 on Amazon. The specs are all over the web and well known.
I also own a SanDisk Extreme that I paid 154 dollars for at 240GB, and an OCZ Vertex 4. The Vertex 4 is currently under 200 dollars, and is considered one of the fastest drives on the market today.
The only drive I have that might be considered a budget drive is the SanDisk, but it is also a very good drive; it utilizes the SandForce controller and Toggle NAND flash. On ATTO, my SanDisk drive, filled to 45% capacity, scores as high as 541,685 reads and 497,102 writes. I will not go over the scores of the other two drives, but the OCZ Vertex 4 was reviewed on this site and scored a 9.05 out of 10. The SanDisk did outperform it on ATTO, if that means anything to you.
...more expensive per GB than a 1TB rotating HDD @ ~$90. A LOT more reliable? I beg to differ: when we buy WDC's RAID Edition HDDs with 5-year warranties, they run and run and run and require no special tweaking. Of course, we cool them properly and feed them UPS power. How many MLC SSDs have 5-year warranties? Just calculate cost per warranty year, for a good comparative metric.

People are reporting all kinds of serious problems with SSDs, e.g. firmware updates that are needed but don't work, lack of TRIM support, and serious degradation after many WRITEs. If one lives under a rock, these problems don't exist (of course).
MRFS
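The "cost per warranty year" metric suggested above is trivial to compute. A minimal sketch; the prices and warranty terms below are illustrative assumptions (roughly 2012-era figures), not quotes from any vendor:

```python
# Cost per warranty year: a simple comparative metric for storage drives.
# Prices and warranty lengths below are illustrative assumptions.

def cost_per_warranty_year(price_usd, warranty_years):
    """Dollars of purchase price per year of factory warranty coverage."""
    return price_usd / warranty_years

drives = {
    "WDC RAID Edition HDD (1TB, 5-yr warranty)": cost_per_warranty_year(90.0, 5),
    "Typical MLC SSD (240GB, 3-yr warranty)":    cost_per_warranty_year(190.0, 3),
}

for name, cpy in drives.items():
    print(f"{name}: ${cpy:.2f} per warranty year")
```

By this metric the HDD above works out to $18.00 per warranty year versus about $63.33 for the SSD, which is the comparison the commenter is driving at.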
The only reason I bought 256GB was because I could hold a lot of games on the drive. My solution costs less per GB than a hybrid drive or a high-speed hard drive, and crushes both when it comes to performance. The SSD cost 190 dollars; the HDD cost 150 dollars. The combined capacity is 3,256GB. That's about 10 cents a GB. What hybrid solution, or high-speed HDD solution, can touch that? Sure, you could RAID a few HDDs and get a lower cost per GB, but performance would suffer and you would increase your failure rate; not to mention the SSD would still outperform it.
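The blended dollars-per-gigabyte arithmetic here checks out. A quick sketch using the figures quoted above ($190 for the 256GB SSD, $150 for the hard drive, 3,256GB combined):

```python
# Blended cost per GB for an SSD boot drive plus HDD bulk storage,
# using the figures quoted in the comment above.

ssd_price, ssd_gb = 190.0, 256
hdd_price, hdd_gb = 150.0, 3000   # 3TB drive, per the 3,256GB combined total

total_price = ssd_price + hdd_price
total_gb = ssd_gb + hdd_gb

cost_per_gb = total_price / total_gb
print(f"${cost_per_gb:.3f} per GB")  # about 10 cents per GB
```

$340 over 3,256GB is roughly $0.104/GB, matching the "about 10 cents a GB" claim.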
I tried to reply to "renosablast", but that comment is now gone.
You seem to have stepped into a hotbed here, and I don't envy you. I also don't necessarily agree, because (in my opinion) hard drives are being replaced at the consumer level by SSDs as we speak. A quick look at projected SSD or Ultrabook sales is a great indicator. Also, technically, there is nowhere near enough evidence to compare failure rates between hard drives and SSDs, although anyone with the years behind them (as we have) knows that they would trust the SSD much more than the hard drive.
I don't know how many hard drives I have watched go corrupt and then crash and burn in comparison to the SSDs I use on a regular basis. IMHO SSDs are MUCH more reliable than any HDD and, moving over to caching, I don't think it really has a chance with prices as we are seeing them. To clarify for all, the typical 120/128GB SSD is now under a hundred bucks and the typical 200/240GB is under 200 bucks.
Sandisk and Toshiba are about to give an adrenaline pump to the thought of SSD cached hard drives once again and, who knows, it may work as the consumer wants performance and capacity. I think it will be much the same as we have seen it with the Momentus XT though... a great toy as long as you don't need it for any real work.
Hope to meet up again soon!
But as for price, I think SSDs have a long way to go before they'll replace HDDs in the consumer market. Just look at the OEMs, and what they're choosing to build with (Dell in particular). If SSDs were only 100% more expensive than a HDD it would be an option on every system, but we're dealing with a component that adds 800%+ over the cost of a hard drive.
Since I began building systems, somewhere back in '93-'94, I've had exactly 22 HDDs fail while the warranty still covered them and 110 fail within 2 years of the warranty expiring, and I've had no trouble getting the manufacturers to replace a hard drive that was under warranty when it died. I've got no clue as to how many have/would have failed past 2 years outside warranty, since I'm usually asked to put in all new hard drives for more disk space at that point, and usually do so for my own systems as well.
Since I started using SSDs, back around 2009, I've had roughly 1 in every 10 fail within a year of installation, and 2 in every 10 or so fail within 2 years. I've specifically bought the longest warranties I could, which has been the 2-year warranties for most, in case they failed and I had to fight tooth and baseball bat with manufacturers for replacements. I have not had 1 SSD survive past its warranty period. I've purchased roughly 200 SSDs so far.
If those numbers don't conclusively indicate reliability differences, then I don't know what would.
...there was a glitch or two in their database maintenance this morning.
So, I'm repeating the sincere compliments which I posted earlier: I do value Olin Coles in ways that are just too numerous to itemize here. In fact, I made a serious mistake by accusing him of CENSORSHIP, when that was NOT the case and he did NOT censor any of my Comments.

Olin has had the courage to ask the TOUGH QUESTIONS, particularly in the face of an IT industry that will ONLY provide review samples of hardware if and when the Reviewers publish POSITIVE REVIEWS. I think it's OK for me to repeat what Olin told me privately this morning: many manufacturers not only expect POSITIVE reviews, THEY EXPECT HIM TO PAY THE RETURN SHIPPING because they are too cheap to donate the hardware to his fine website.

(I'll try to re-construct one other Comment that did not get posted, because Olin has asked me to do so.)
MRFS
OK, here's the other Comment which Olin graciously requested that I re-construct from memory...

I don't know if I am several standard deviations from the "norm"; maybe I have spent an unusually large amount of time making sure that our disk drives are properly cooled, environmentally controlled, and fed very clean power from dedicated UPS units. Nevertheless, I have found that WDC's RAID Edition HDDs have been very reliable, and worth the premium; e.g., just compute cost per warranty year.
The only issues that matter, imho, are price / performance / capacity.
A review at xbitlabs.com measured 7 SSDs on 4 different metrics: (1) fresh, (2) after 2x data fill, (3) after 30 minutes idle, and (4) after TRIM:

xbitlabs.com/articles/storage/display/marvell-ssd_7.html#sect0

Only 2 had decent BGC (background garbage collection): the others all degraded significantly with 4K random WRITEs and only recovered with TRIM.
You want TRIM? OK, then, you need Windows 7; but you can't have TRIM with a RAID array unless you install Intel's latest RST on an Intel Series 7 chipset ONLY! You want TRIM to work with a RAID 0 array and 4 x 6G SSDs? You're out of luck with Intel's latest chipsets. You want TRIM with AMD's native RAID? You're out of luck. You want TRIM with any third-party add-on RAID controller? You're out of luck.
I'd like to hear others react to the situation described above: SATA-III has been around far too long for SSD vendors to be dragging their feet as much as they have.
Clarifying my view just a bit: I don't believe hard drives at the enterprise and business level are going anywhere. It will be millennia before we see complete data centers running on SSD technology only; actually, given the pace of technological advance, it may never occur, as there would be nowhere near adequate flash supply worldwide for this. The footprint for the next few years, though, seems to show consumer systems adopting SSDs right from the initial point of sale, with enterprise environments retaining hard drives for capacity while getting the adrenaline boost of SSDs and caching.
Will we finally see the Intel-cached Ultrabook? Inevitably and unfortunately, yes...
Ok back to work for me and hope I could contribute just a bit!
Booting to Windows 7-64 off the SSD takes 15.6 seconds
Booting to Windows 7-64 off the HD takes 21.2 seconds
Booting to Windows 7-64 off the SSD/HD hybrid takes 17.0 seconds
After a year, I have saved 102 minutes with the SSD over the HD (7 boots per week), so the SSD costs me about $117 per hour saved. Other than that, I don't really see any observable difference in performance outside of benchmarks. Running AutoCAD 13, office apps, etc., I can't tell which boot I used. The BMR file transfer test on the hybrid uses a 1GB compressed file. I have 6 files out of 213,000 that are 1GB or larger in size, almost all of which are install programs. My biggest AutoCAD files run 1-2MB. A file on the SSD might open twice as fast as on the HD, but at 1/500th of the file size in that test, I can't notice the difference between 0.003 seconds and 0.0015 seconds.
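Back-of-envelope math like this is easy to sanity-check. A sketch using the quoted boot times; note that the 102-minute annual figure works out at roughly 21 boots per week (about three per day), whereas at the stated 7 boots per week the savings would be closer to 34 minutes:

```python
# Annual boot-time savings from a faster-booting drive,
# using the boot times quoted in the comment above.

ssd_boot_s = 15.6
hdd_boot_s = 21.2
boots_per_week = 21   # ~3/day; this is what reproduces the 102-minute figure

saved_per_boot_s = hdd_boot_s - ssd_boot_s               # 5.6 seconds per boot
saved_per_year_min = saved_per_boot_s * boots_per_week * 52 / 60

print(f"{saved_per_year_min:.0f} minutes saved per year")  # prints "102"
```

Either way, the commenter's larger point stands: a few dozen minutes of boot time per year is a thin return on the price premium if boot speed is the only observable gain.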
Playing games, my fav MMO takes the exact same 45 seconds to load off the SSD or off the HD (and yes, as with the programs above, test were done with program / game on the same partition with the OS in each instance).
My biggest beef with SSDs when I do a build for folks is the inevitable call 3-6 months down the road telling me the SSD is "full" and asking if I can come over and clean it out. Count all the setup, tweaking, moving back and forth, Windows customizations, etc., and I bet I'm using up those 102 minutes.
So, partly because of the review here on the Momentus 750GB, I just spec'd out my first machine (a custom-built Clevo lappie) in a long time with no separate SSD/HD pairing. I will just have to live with the fact that I only have a 5.9 WEI score instead of a 7.8 and I can't post impressive benchmark scores, but I am totally content with the machine's performance in every other respect.
My take, having been at the plate w/ all 3: if ya have the cashola and ya like the tinkering, by all means grab that speedy SSD. If ya wanna go to the boss w/ a proposal that adding SSDs to all the office machines (not talking video production or other "special" apps) will increase productivity and pay a return on the hardware investment, you don't have the proverbial snowball's chance in "H-E-Doublesticks" of proving ya case.
Allow me to play devil's advocate for today ...
If all this IT marketing is deliberately misrepresenting a relatively new technology, and if excellent websites like Benchmark Reviews are being crudely pressured to publish ONLY positive measurements, it is conceivable to me that the Internet will then become a thinly disguised TV advertisement, complete with LOUD audio volumes and rapidly changing imagery. Did you want to view an OEM's latest promotional video? Then BETTER TURN DOWN THE VOLUME ON YOUR HEADPHONES FIRST!!

The word "crass" comes to my mind, honestly. We're all human, and we all make mistakes; nevertheless, this IT "oligopoly" is starting to show signs of tight industry-wide collaboration, made to appear like legitimate market competition when it's NOT.
p.s. From a legal point of view, one would not get very far in the real estate business, e.g., if 5 out of 7 new homes were sold withOUT disclosing major cracks in their concrete foundations. I've already shared an evidently sincere review of 7 modern SSDs, and only 2 of those could recover their former performance after 2 full writes + idling 30 minutes. And, no, the ONLY solution is NOT a new Series 7 motherboard -AND- an entirely new copy of RST + Windows 7. Where I come from, withholding such negative performance characteristics is FRAUD (read "a failure to disclose what should have been disclosed").
MRFS
> People who buy ssd's are paying a premium for the performance, just like a person would spend more for a high end graphics card.
(Forgive me for spending too much time on this topic; if Olin or his colleagues would prefer that I sit back and do more reading, I will do so.)

I think the comment above is overlooking some of the factual points that have already been made. An SSD does not need to fail completely to be defective, or to render the vendor liable for what amounts to false and/or misleading advertising. Failing to disclose is defined as FRAUD in legal dictionaries.

If a User is "paying a premium for the performance" BUT that "performance" is advertised as the speed of WRITEs that can be expected ONLY from a brand new SSD device, -AND- if that "performance" deteriorates predictably to something like one-fourth or one-fifth of that "new speed" after only "2x Data Fills", such a User is paying a price premium for inferior performance and NOT for the advertised performance.
Again, see: xbitlabs.com/articles/storage/display/marvell-ssd_7.html#sect0
I'm seeing ~150 MB/second drop to about 25-30 MB/second for the Intel SSD 520 and the Corsair Force GT. That is a very serious deterioration in performance. And, going back to the title of Olin's article, the relatively high price is NOT WARRANTED by the measured performance after 2x Data Fills.
MRFS
Also, the typical consumer workload simply doesn't perform multi-gigabyte writes very often. Read speed is what makes users sit up and notice.
I think you're somewhat begging the question with an unfortunate choice of words, imho. When you say "the drives aren't pounded on", I think you are overlooking the test which only filled each SSD twice. I would hardly call that "pounding"; but read on, where I try to show why this is not just a "semantic" issue...

The pagefile.sys on our short-stroked C: partition gets "pounded on", and by that I mean that it experiences thousands of WRITEs, to the exact same sectors, during the full warranty period, typically 5 years.
Let me illustrate with the Newegg ad for Intel's SSD 520:
newegg.com/Product/Product.aspx?Item=N82E16820167086
In one place, that ad says:
"sequential write speeds of up to 520 MB/s"
and in another place that ad says:
"sequential writes of 520 MB/s to unleash your applications"
Now, under "Details" that ad says:
"Max Sequential Write Up to 520 MB/s (SATAIII)"
I don't see any honest and open disclosure of anything
like the results reported by xbitlabs.com, namely, a measured
degradation of random 4K WRITEs from 150 to ~30 MB/second.
Instead, that ad's "Details" say this:
"4KB Random Write Up to 60,000 IOPS"
and
"Out-of-the-Box Write Performance (IOPS): up to 80,000 IOPS"
80/60 is NOT 5-to-1.
Have I made my point, and is this NOT exactly what Olin is trying to address in his article?
MRFS
You have a problem with SSD advertising not disclosing that TRIM is required for best performance, and that TRIM isn't available in the case of RAIDs or other limiting factors. Fine. That accounts for a minuscule percentage of SSD buyers, who, by and large, are running non-RAID systems on Win 7.
Paul, your submitted links are interesting, but not indicative of a typical user scenario; then again, even that is not really the subject of the article as I see it. We can debate whether we like or don't like the way that performance is advertised, but it is not fraud. It is not misrepresentation, and if we are going to make that claim, we would have to make it of just about every industry out there. It is simply clever marketing. For instance, EVERY car is sold with an mpg figure on the sticker, BUT THAT IS FOR IDEAL CONDITIONS WHEN THE CAR IS NEW. It is exactly the same as we see here. Sorry Olin, needed to rant...
"IOPS performance had always been a part of solid state storage, but
it wasn't tapped as a marketing tool until media zeroed-in
on its relevance to real-world performance."
NOTE: "marketing tool"
-AND-
"Unfortunately, this new marketing maneuver gave manufacturers license to continue offering SSDs at an ultra-premium price that failed to close the gap with HDDs."
NOTE: "marketing maneuver"
NOTE: "ultra-premium prices"
But vendors routinely fail to disclose when, where and how
such premium prices might not be justified.
Go back to the Newegg ad now:
NOTE: "New level of performance"
But, there is no mention of TRIM, and no mention of the fact
that TRIM currently requires Windows 7 and that
TRIM will only work with RAID arrays under circumstances
that are presently very limited -- and favorable to Intel.
MRFS
...and a licensed copy of Windows 7 Ultimate 64-bit. After trying to get most of my third-party utility software working on it, I went back to XP/Pro Service Pack 3, 32-bit version: I know it well by now. The latter does EVERYTHING I need for all of my daily computing tasks. In fact, I REMOVED THE VIDEO CARD I had installed in that Windows 7 machine, because I just wasn't using that machine AT ALL after giving it the "good ol' college try".
Fast forward: in order to exploit the "bleeding edge" of storage, I just happen to use lots of RAID 0 arrays, and I'm very comfortable with them; I have a lot of experience with them; I know a lot about them; I help lots of Users who request help at Internet forums.
For me to have the kind of steady-state performance I would expect from all of this exaggerated advertising, I would need to purchase and be satisfied with an entirely new Windows 7 machine built on an Intel Series 7 chipset, plus I would also need to install the latest RST software from Intel's website. Even then, I can only have 2 x SSDs in a native RAID 0 array, because Intel's Series 7 chipsets only support 2 x 6G SSDs.
I can't have TRIM with 4 x SSDs (which is what I would prefer); I can't have TRIM with any third-party add-on controllers, e.g. LSI; I can't have TRIM with any integrated third-party controllers; I can't have TRIM with any of AMD's chipsets, EVEN THOUGH the 990FX supports 6 x native 6G SSD ports.
But... BUT... I can have TRIM with 2 x 6G SSDs for the small "premium" of an entirely new and expensive Intel-based Personal Computer. Talk about PREMIUMS -- HOLY TOLEDO!!
MRFS
I am using 2 x 60GB SSDs in RAID 0 in my Win 7 system, and sure, I can't use TRIM, but it's not a huge deal. I leave my system on overnight once a month for garbage collection, and that seems to keep it working fine.
Whether or not you believe the specs advertised by the manufacturers, SSDs still beat hard drives hands down. Over the 25+ years I have been working with and building computers, the single biggest improvement in the performance of my system from any one upgrade was when I installed an SSD. This improved everything, from booting, to loading software, to wait times during games that load from the HD a lot.
SSDs are well worth the money, and they're still coming down in price. I really don't understand what everyone is complaining about.
The arguments about pricing are patently false; my original SSD purchase was a 120GB Vertex for roughly $400, just over $3/GB, in 2009. Since then, prices have fallen consistently and gradually, and *many* options are now readily available below the $1/GB point; I see 60-70 cents a gigabyte for decent drives.
This puts SSDs at roughly an 8x-10x position relative to hard drives in terms of $/GB - *if* that was the metric of primary importance here (which it isn't). SSDs provide the greatest benefit when used as OS and common program drives, so size is only important insofar as it allows you to fit those programs without the drive costing too significant a portion of the total system.
With 120GB drives available for $70-$100, you're paying roughly the same amount for that SSD as you would for your hard drive - a pretty reasonable chunk of your overall system cost for the performance improvement it provides.
Put another way, you can examine an SSD by a much more useful metric for its use as an OS drive: $/IOPS (as this will principally affect how fast the drive feels as a boot drive).
You would be very lucky to reach 200 IOPS with a 7200rpm hard drive; for a small hard drive of 500GB (where this ratio is most favorable) this puts a hard drive somewhere around $0.25/IOPS (or 4 IOPS per dollar). For a large 240GB SSD (where this ratio is *least* favorable) the ratio is more like $0.005/IOPS, or 200 IOPS per dollar: a 50x improvement over the hard drive, with the comparison stacked *against* it.
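The dollars-per-IOPS arithmetic above can be sketched directly. The drive prices ($50 for the 500GB HDD, $200 for the 240GB SSD) and the 40,000-IOPS figure are assumptions chosen to match the stated ratios, not quoted specs:

```python
# Dollars per IOPS: the metric that matters for an OS/boot drive.
# Drive prices and IOPS figures are assumptions matching the comment's ratios.

def dollars_per_iops(price_usd, iops):
    return price_usd / iops

hdd = dollars_per_iops(50.0, 200)       # 7200rpm 500GB HDD: ~200 random IOPS
ssd = dollars_per_iops(200.0, 40_000)   # 240GB SATA SSD: ~40k random IOPS

print(f"HDD: ${hdd:.2f}/IOPS, SSD: ${ssd:.4f}/IOPS")
print(f"SSD advantage: {hdd / ssd:.0f}x")  # prints "SSD advantage: 50x"
```

With these inputs the HDD lands at $0.25/IOPS and the SSD at $0.005/IOPS, the 50x gap the commenter describes.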
So we've established that an SSD that's large enough for use as a boot drive is affordable (roughly the same price as a hard drive you'd use as a boot drive), and provides massive improvements in the important metric for a boot drive (its random read/write performance). How exactly are SSDs a bad idea, again?
Hybrid drives attempt to bridge some sort of gap between hard drives and SSDs, but the plummeting price of the latter gives them no real margin to work in for their only moderately impressive performance compared to a full SSD.
Claims that SSDs are unsuitable for single-drive systems carry slightly more weight, but mSATA SSDs are available in boot drive sizes for just about any notebook (letting you leave your platter spinner in your drive bay for bulk storage) and when I can get a 1TB HD and 120GB SSD together for maybe $150 or so in a desktop it's pretty tough to argue against the inclusion of an SSD at all but the most rock bottom of price points.
Arguing against a piece of hardware by misrepresenting its usage model and what metrics make it valuable is incompetent at best and downright deceptive at worst. Do your research before posting any further articles.
REPEATING (a third time now)...

7 models were measured fairly and objectively above. 5 models deteriorated significantly after only "2x Data Fill", and their "out-of-box" performance did not recover very well "After 30 minutes idle" and before TRIM (see link above). 2 models did recover very well even without TRIM, "After 30 minutes idle" (again, see link above).
Just tilt your head back :)
If you want to contest those measurements,
then take it up with the Reviewer!
p.s. Maybe our workstations allocate 30-50 GB for C: -- by design --
and the remainder is formatted as a dedicated data partition:
if tasks like drive image creation slow down by a factor of
~5 to 1, that much degradation is unacceptable -- to me anyway --
and CERTAINLY not worth the premium prices.
If that much degradation does not bother you, TO EACH HIS OWN.
At that rate of degradation, I can confidently expect
superior reliability over a 5-year factory warranty period,
from a RAID 0 array with 2 x WD2503ABYX rotating HDDs.
What are the factory warranties on the 5 "losers" above?
Maybe retail buyers just need to see these results,
so they can STOP BUYING the bad performers.
Survival of the fittest!
MRFS
2. The results from that test have *nothing to do* with drive cloning, because they're 4k random writes, not sequentials. Sequential performance should degrade less in even that extreme scenario.
3. This "degradation" (as mentioned above) is entirely temporary and has nothing to do with flash cell wear-out, which modern controllers mitigate *extremely* well (if you think the drive will somehow be less reliable compared to a RAID0 array of spinning platters, you're sorely mistaken). As an aside, exactly *how* reliable a storage medium is (assuming it's not failing every week) is not really that relevant, because you should have regular backups regardless of how reliable it's advertised to be. Warranties matter more, and warranty periods for hard drives are generally no better than for SSDs; in many cases they've gotten worse recently (lots of consumer hard drives are down to 1-2 years now).
4. You're not attempting to make any comparison to your beloved hard drives here, all while parroting about degradation you barely understand.
##storagereview.com/images/wd_velociraptor_1tb_4k_randomtransfer_mb.png
That's an IOPS slice of some pretty decent rotational hard drives doing a similar IOmeter 4k random write test. Notice the listed speeds in MB/s - ranging from 0.5MB/s to 1.5MB/s. Your *very worst* SSD listed in your comparison at its *very worst* time (right after an excessive 2x drive write) still achieves over 20MB/s; 20x faster than the hard drives. Even when the deck has been seriously stacked against the SSDs by an unrealistic scenario, they're *still* wiping the floor with the hard drives.
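For reference, the MB/s numbers in those 4K random charts convert directly to IOPS; a small sketch, assuming decimal megabytes and 4 KiB blocks (the usual convention for such charts):

```python
def mbps_to_iops(mb_per_s: float, block_bytes: int = 4096) -> float:
    """Convert 4K random throughput (decimal MB/s) into IOPS."""
    return mb_per_s * 1_000_000 / block_bytes

# Illustrative figures from the discussion above:
print(round(mbps_to_iops(0.5)))   # HDD worst case -> 122 IOPS
print(round(mbps_to_iops(1.5)))   # HDD best case  -> 366 IOPS
print(round(mbps_to_iops(20.0)))  # "worst SSD at its worst" -> 4883 IOPS
```

Even the degraded SSD number lands well above anything a rotating drive can post on this test.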
Do you have any actual evidence or reasoning why these drives are "bad performers" vis-a-vis hard drives?
#newegg.com/Product/Product.aspx?Item=N82E16820148443
Intel SSD 520 : Limited Warranty period (parts): 5 years
#newegg.com/Product/Product.aspx?Item=N82E16820167086
Corsair Force GT 256GB : Limited Warranty period (parts): 3 years
#newegg.com/Product/Product.aspx?Item=N82E16820233208
OCZ Vertex 3 MI 240GB : Limited Warranty period (parts): 3 years
#newegg.com/Product/Product.aspx?Item=N82E16820227715
Corsair Force 3 240GB : Limited Warranty period (parts): 3 years
#newegg.com/Product/Product.aspx?Item=N82E16820233268
Yes it is: I already explained that above.
I'm very happy with Microsoft XP/Pro,
and TRIM does NOT work with any versions of XP.
If I'm required to buy a new motherboard
-AND- a new compatible Intel processor
-AND- Windows 7
-AND- install the latest RST from Intel
THEN
I will want 4 x 6G SSDs with TRIM support using
the native Series 7 chipset.
THAT'S THE LEAST I WILL REASONABLY EXPECT AND REQUIRE
FROM SUCH PREMIUM PRICES -- PRICES I WOULD HAVE TO PAY
IN ORDER TO "UPGRADE" TO TRULY HIGH-PERFORMANCE STORAGE.
BUT ... I can only wire TWO (2) 6G SSDs to Intel's Series 7 chipsets.
And, no, I'm not going to pay the even HIGHER PREMIUMS
for SLC -- not as long as MLC vendors are claiming
there is little if any degradation to worry about.
YOU'RE DREAMING!! And, you just shot yourself in the foot
with that ridiculous SLC recommendation, and perhaps
without intending to, YOU HAVE AGREED WITH Olin.
> the drive will somehow be less reliable compared to a RAID0 array of spinning platters
Olin already told you 25% of his SSDs have failed now:
he told me privately that he has sixty (60) in his
spare parts inventory.
He took one off the shelf the other day,
and it was D-E-A-D. So much for NON-volatility ;(
I know the man: he's NOT the kind of builder
who pours cold water on his computer parts.
Seems like these problems just won't go away,
even after all this time e.g.:
##theregister.co.uk/2011/01/14/ocz_and_ddrdrive_performance_row/
Who pays your salary?
MRFS
Vis-a-vis SSDs, I've personally worked with roughly 10 units, and have seen or dealt with many more. Of those 10, 2 have failed; both were earlier controllers (1st-gen Indilinx) and both were replaced, no questions asked, well within warranty. Not a single SandForce (or more recent) based SSD has given the slightest indication of failure. I guess if you scrape the bottom of the barrel with janky old JMicron controller drives, you'd probably see higher failure rates. That's honestly not surprising in a product category that's only a few years old (compared to the 50 years of hard drives), but today any differences between hard drive and SSD reliability are small enough not to be a significant issue. Compared to the giant stack of failed hard drives I have at home, I really don't think the percentages work out all that different.
The fact that you're "very happy" with an operating system that's now 11 years old doesn't change the fact that it's *way* past its lifecycle and was introduced 7+ years before SSDs came close to the consumer market (and another year or two before TRIM support was common). The same point applies to the hardware; generally speaking, people *actually do* upgrade their computers from time to time for reasons other than TRIM support, and if you'd done this any time in the last 3 years you'd be in very good shape for having TRIM on your system. If you're telling me that installing the latest drivers for your hardware is too onerous a way to get a feature working right, then I really shouldn't be wasting my time with this discussion. My point still stands: anybody using a system today that *isn't* way past its lifecycle will have TRIM support.
You mention you want to hook 4x 6G SSDs up to your system; what is your usage model for this layout? Do you actually have a reason for needing this configuration, or are you simply postulating it to push an irrelevant point as if it were some dramatic limitation of the platform? The extra bandwidth and IOPS afforded by 6G SSDs over 3G ones (or 6G drives running on 3G ports, so you could have 4 SSDs with TRIM if you really had to) won't have much of a visible effect on system performance (owing to the excellent performance I detailed and you haven't addressed), so what are 4 SSDs going to do for you that 1 or 2 aren't?
If you're telling me you need the bandwidth, I still don't see how this helps your case at all when your only alternative is hard drives that are going to be soundly outperformed either way, TRIM/6G or not.
Christopher George, try this:
Google site:#theregister.co.uk "Flashy fists fly as OCZ and DDRdrive row over SSD performance"
##xbitlabs.com/articles/storage/display/ocz-agility-4_4.html#sect0
They all appear to have the exact same problem i.e.
no recovery to anywhere near their "Fresh" performance
after 30 minutes of idle time.
This looks like a very strong pattern to me. ;(
Also, just scroll down for his Sequential WRITE tests.
Some time ago we purchased an LGA775 motherboard,
Intel Q9550, and 16GB of Corsair DDR2 in order to host
our research database in a ramdisk -- using RamDisk Plus
from SuperSpeed LLC.
We enabled automatic SAVE and RESTORE so that
the ramdisk's contents will be effectively
non-volatile.
This means the ramdisk must be saved to
permanent storage at every shutdown
and restored to RAM at every startup.
We installed a Highpoint RocketRAID 2720
and 4 x 15,000 rpm Hitachi 2.5" SAS HDDs:
the best they can do is around 600 MB/second
in RAID 0 mode.
So, all startups and shutdowns must move
12 GB at a rate of 600 MB/second.
START | RUN | CALCULATOR
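Running that calculator exercise explicitly (the 600 MB/s figure is the RAID 0 rate quoted above; the 1000 MB/s line is a hypothetical faster array, added purely for comparison):

```python
def transfer_seconds(gigabytes: float, mb_per_s: float) -> float:
    """Time to save/restore a ramdisk image, in decimal GB/MB as quoted."""
    return gigabytes * 1000 / mb_per_s

print(transfer_seconds(12, 600))   # 4x 15k SAS in RAID 0 -> 20.0 s
print(transfer_seconds(12, 1000))  # hypothetical ~1000 MB/s array -> 12.0 s
```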
That system is running just fine with XP/Pro,
but TRIM does not work at all with XP/Pro,
and it is not currently supported by the RR2720.
I decline to purchase all new high-performance
hardware, IN ADDITION to buying only 2 x 6G SSDs
simply because Intel's Series 7 chipsets + RST
only support TRIM on a native RAID 0 array
with only 2 x 6G SSDs.
To me THAT'S RIDICULOUS, particularly after 6 Gb/s SATA-III
has been the SATA standard for a long time now.
I infer that you want me to purchase all new hardware
that will support Windows 7 and TRIM; however, EVEN THEN
I can't have 4 x 6G SSDs in RAID 0 without losing TRIM
and without suffering serious performance degradation
with many modern SSD models.
As the title of this article says: "EXPENSIVE Solid State Drives"!
They're triply expensive because of all the ADDITIONAL
infrastructure they require in order to operate properly
and without serious degradation after normal use
during normal factory warranty periods.
p.s. If you don't think ramdisks are "normal"
contact SuperSpeed LLC and they'll tell you.
##supremelaw.org/systems/superspeed/RamDiskPlus.Review.htm
MRFS
Just to note for you, using RAID 0 in Windows 7 stops Trim from working too. So, even if you did set your system up in Windows 7, using RAID 0 you still wouldn't be able to use Trim.
This is why I let my system sit overnight once a month, idling, so that the SSDs can use their garbage collection routines, which do not require Windows 7.
Just a note for the Benchmark Reviews people: I have not gotten a single spam since you lengthened your security code. Good job.
So a new build for a similar role (in your case) would probably deploy 2 quick 6G-based SSDs in RAID 0 to replace your 4x 15k rpm SAS drives. You can generally get sequential reads/writes above 500MB/s from a good 6G-based SSD, meaning your array would read/write at 1000MB/s or so, well above your current performance. The drives would consume far less power (2 drives instead of 4, and SSDs are generally more power efficient than even slow 5400rpm HDDs, let alone 15k drives) and the array would have a higher aggregate MTBF (since you have to divide the per-drive MTBF by the number of drives in a RAID 0 array, and there are only 2 here, not 4).
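The array math here can be made explicit; a minimal sketch, where the per-drive throughput and MTBF numbers are round-number assumptions for illustration, not vendor specs:

```python
def raid0_throughput(per_drive_mbps: float, n: int) -> float:
    """RAID 0 roughly sums the members' sequential throughput."""
    return per_drive_mbps * n

def raid0_mtbf(per_drive_mtbf_hours: float, n: int) -> float:
    """Any single member failure loses a RAID 0 array, so the
    aggregate MTBF is roughly the per-drive MTBF divided by n."""
    return per_drive_mtbf_hours / n

print(raid0_throughput(500, 2))   # 2x 6G SSDs  -> ~1000 MB/s
print(raid0_throughput(150, 4))   # 4x 15k SAS  -> ~600 MB/s
print(raid0_mtbf(1_000_000, 2))   # 2-drive array -> 500000.0 hours
print(raid0_mtbf(1_000_000, 4))   # 4-drive array -> 250000.0 hours
```

Fewer members in the stripe means fewer single points of failure, which is why the 2-drive SSD array comes out ahead on both axes.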
In addition, you describe your usage model as writing about 12GB every shutdown; unless you're rebooting the system on an hourly basis (in which case you may want to analyse your workflow to avoid having to do so so often) this amount of writing should be well handled by the overprovision and garbage collection routines of any good SSD, even without TRIM support.
*Even if* you decided to throw some SSDs at your current setup (which sinks all of your high cost arguments), 3 or more SSDs in RAID-0 would beat your current performance at reduced power consumption. Since your write profile is pretty manageable, lack of TRIM support in your case should be effectively a non-issue.
The way you mention SSD "degradation" makes me think you may not have a good understanding of exactly what is meant by that, or of its temporary nature. AnandTech has an excellent set of SSD primers that go into great detail about exactly what the issues with SSDs are, and what mitigation strategies are deployed to make them non-issues for most use cases. I highly recommend giving them a look if you want to further your understanding of SSD tech.
My suspicion is that some particular spot issues are contributing to those high numbers; jmicron controllers were never reliable, and I remember a couple earlier intel controllers that were bricking on users due to a firmware bug. There were reports of BSODs due to either a driver or firmware bug with I think SF-2000 series based SSDs; this has since been resolved, but I suspect at least a decent chunk of these were returned as "failed" anyways.
Further analysis would either confirm or refute a bias towards high failure rates with particular problem models; my suspicion would be the former. There's nothing inherently unreliable about SSD technology, and it's pretty expected that an entirely new product category requiring entirely new firmware/hardware to control it will have some bugs and growing pains. Early adopters know and accept this, but at this point most any decent SSD on the market is going to be in pretty good shape reliability-wise.
Addendum: My comment about nothing "inherently unreliable" in SSDs takes the technology as a whole. I'm well aware that flash cells themselves have write endurance limits, but the leveling and write amplification reduction algorithms used in the controllers intelligently manage these issues such that it's not a problem for the SSD as a whole in all but the most pathological scenarios. Unless a drive was very early-gen (not sold today or in the last couple years) and had poor wear-leveling algorithms as a result, very nearly all of the abnormal premature issues/failures are a result of firmware/controller immaturity as the products develop. Today, even these issues are largely resolved.
That's not an overall condemnation of SSDs. This is also not a scientific survey, as many people who have had no problems with their SSD may not have taken the survey. As we know, the most vocal population on the 'net regarding hardware are those who have had problems.
As the article states: "However I think it's very hard to get good meaningful results in this kind of survey because the irritated, the disappointed and the dissatisfied customer will be more likely to fill in this kind of survey to show their displeasure, that's human nature."
Everyone who knows SSDs knows there was a batch a year or so ago that sucked (sandforce controller I believe). This is not proof of anything except one controller sucked and a lot of people were unhappy about it.
The same article states that less than 1% of Intel SSDs were returned as defective. Basically, the article and survey prove nothing.
Besides, it's not like we all have not had our share of bad hard drives.
Nope. That's MARKETING HYPE aka "Shilling for Shekels" :)
The Q9550 and 16GB of Corsair DDR2 are running just fine, thank you --
very very reliable. That RAM has a lifetime warranty too.
If anything, the philosophy of PCI Express "open architectures"
was always intended to permit new add-on cards to be installed
withOUT trashing the motherboard too.
But, under the circumstances, I can't have 4 or 8 x 6G SSDs
because add-on cards don't support TRIM with RAID arrays either
(I think you missed that point above, even though it was
repeated several times already).
No, I don't buy new hardware every 3 years,
just because Intel likes it that way,
and wants it that way.
"Oh, Big Bro Intel, please TRICKLE DOWN some more nice
h/w cuz I have unlimited amounts of CASH to upgrade
every 18 months (or sooner)." NOT!!
MRFS
in addition to startups and shutdowns :)
I just laugh when others try to tell me
what I want, or should want.
MRFS
##ddrdrive.com/zil_rw_revelation.pdf
If you have problems with it, please take it up with him.
This is relevant in any way to the discussion or article how? None of the conditions or assumptions important to that application relate in any way to normal hard drive or SSD use.
I'm sure you do many things with your Q9550, but if it involves extra write workloads against your RAID-0 array you haven't mentioned them. I can't exactly comment accurately on a workload you haven't described, so I'm not sure what you're saying here.
About upgrades, I am absolutely not advocating upgrading unless necessary; the Q9550 is still a pretty quick processor depending on your workload and application. There are faster products available today though for reasonable prices, and they'll also give you the benefit of 6G ports with TRIM support if you were to upgrade. Whether that's worth it or not depends entirely on how much value the system has and how much increased value you'd get by the extra performance. This is, of course, a question only you or your company's accountant/economist could answer (seeing as how we have no idea what the cost analysis is on your system).
The point still stands though, that with your workload as described you would see increased performance by replacing your 15k rpm drives with SSDs, and you would see *even more* increased performance by upgrading the core system to support faster speeds. Considering the workload as described is not particularly TRIM-dependent, you could even try to track down a 4-port 6G SATA/SAS pci-express raid card and pop that into your system to get most of the I/O benefit of a new system with a fair bit less investment.
Clarification: I would like to reiterate that your workload as described does not appear to be one that would degrade heavily without TRIM capability. If you feel this is untrue or that you have not adequately described your workload, feel free to clarify or provide further reasoning as to why your workload is so dependent on TRIM support for adequate performance.
"Please put your money where your mouth is!" LOL!
No
No
No
"Did I hear 'no' 4 times?" asked the church mouse.
"No," replied the Choir Master.