
Question SSD endurance... Has it turned out to be a non-issue or not?

Hulk

Diamond Member
I purchased my first SSD in 2011, 15 years ago: an Intel 320 with 120GB. Of course it was a revelation in terms of performance. Once I learned how it worked and how the number of bits stored per cell relates to endurance, like many here I became what I now look back on as overly concerned with endurance.

So now here we are 15 years later and that old Intel drive, having "gone through" about 3 or 4 computers, is still working, as is every SSD I've ever purchased except for one $80-after-rebate PNY drive that got flaky, and I don't think it was an endurance issue because it had very few "miles" on it when it started acting up.

I even have a Samsung 870 QVO that I use as NAS backup, running daily; it's 5 years old with no problems.

Anyway, for me endurance turned out to be a non-issue, just as the manufacturers told me it would be.

I'm curious about others' experiences here with long-term SSD use.
 
Pretty much the same as yours. I have a 256GB Samsung 860 EVO used as a transaction log drive, meaning much more writing than reading, that's been working every day since summer 2018. I suspect that most drives will continue to work far past the rated TBW, which makes sense since the TBW spec needs to be reasonably conservative. It may be that some of the QLC drives aren't quite as robust; I don't have any QLC drives in write-intensive situations, so I'm unlikely to find out.
 
I forgot to mention the SSD on my old Surface Laptop 2 has been telling me it's "tapped out" for about 2 years and is still working perfectly fine. Either it's very conservatively rated, CrystalDiskInfo is reading it incorrectly regarding "life," or more than likely a combination of both.

It's just funny how I thought this endurance thing was going to be just horrible and so far it's been a non-factor. I still haven't quite gotten the guts to put a quad-level cell (QLC) drive in as a boot drive, but I probably will if the price and performance are good.
 
Check the health with CrystalDiskInfo on all old SSDs.

My 5+ year old 240GB WD Green SSD is now at 21% health due to excessive paging.
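For drives where CrystalDiskInfo isn't convenient (Linux boxes, drives behind certain controllers), a rough alternative is reading the SMART data with smartmontools and converting the write counter yourself. Here's a minimal Python sketch, assuming the drive reports a Total_LBAs_Written attribute in 512-byte units; attribute names and units vary by vendor, so treat the parsing as illustrative rather than universal:

```python
import subprocess

def total_tb_written(device: str = "/dev/sda") -> float | None:
    """Estimate total TB written from SMART data (vendor-dependent)."""
    # Requires smartmontools and root; output format differs between drives.
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=True).stdout
    for line in out.splitlines():
        # Many SATA SSDs expose attribute 241 as Total_LBAs_Written.
        if "Total_LBAs_Written" in line:
            raw_value = int(line.split()[-1])   # raw value is the last column here
            return raw_value * 512 / 1e12       # assume 512-byte LBAs, convert to TB
    return None  # attribute absent (NVMe drives and some vendors name it differently)

if __name__ == "__main__":
    tb = total_tb_written()
    print(f"~{tb:.1f} TB written" if tb is not None else "Attribute not found")
```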

There was an old article (probably still available on archive.org) where The Tech Report hammered SSDs continuously with writes. Some SSDs failed before the SMART life-remaining counter reached zero. Others kept accepting writes even after 0 percent remaining life, meaning the endurance calculation for those SSDs was very conservative. A Samsung 830 256GB SSD was able to survive several petabytes.

Also, their testing revealed that not all SSD controllers handle end of life gracefully. A good controller is supposed to lock the drive into read-only mode once it determines that NAND health is too poor to sustain further writes, but some controllers outright bricked the SSDs and the data became inaccessible.
 
It's just funny how I thought this endurance thing was going to be just horrible and so far it's been a non-factor.
Normal consumer workloads are unlikely to exhaust writes on most TLC SSDs.

But QLC is wayyyy too crappy to trust with anything important. There are sixteen fairly narrow voltage ranges to distinguish between the sixteen possible states of four bits per cell, so there are more chances of having data integrity issues, at least in theory. Of course, SSD controllers implement a lot of redundancy tricks to keep things going, so a good QLC controller could fix a lot of the ugly stuff in the background before it impacts data integrity.

One thing that can kill a consumer SSD relatively quickly is frequent hard power shutdowns. Sooner or later, the controller will be interrupted in the middle of a critical maintenance task of updating internal tables, and then bye bye SSD. Have seen it happen with at least one Crucial BX500. I think the data sheet mentioned that it is only to be used in a laptop or a PC with a UPS.
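To put rough numbers on those "narrow voltage ranges": each extra bit per cell doubles the number of charge states that have to fit into roughly the same voltage window, so the margin between adjacent states shrinks quickly. A toy calculation (the 1.0 V window is a made-up round number purely for illustration):

```python
# Toy illustration of how per-state voltage margin shrinks with bits per cell.
VOLTAGE_WINDOW_V = 1.0  # made-up round figure; real NAND windows differ by design

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    states = 2 ** bits                          # distinct charge levels per cell
    margin_mv = VOLTAGE_WINDOW_V / states * 1000
    print(f"{name}: {states:2d} states, ~{margin_mv:.0f} mV per state")
```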
 
TBH, unless you're very cheap with upgrades and PC parts, I think endurance anxiety for SSDs was overinflated, like range anxiety with electric cars.

I think I honestly upgraded my SSDs more for storage space reasons than because they got worn out. However, I think there are some usages where endurance will still play an important role, like a swap file drive or a scratch drive, where you're doing a lot of writes and reads. I think, though, those drives are not even being made anymore (Optane, for example) and people are just setting up a RAM drive and enlarging the RAM capacity on their PC. For example, I am running 192GB of RAM, with 64GB of it partitioned off as a RAM drive for swap file usage and other things which require temp storage with super fast access.
 
For example, I am running 192GB of RAM, with 64GB of it partitioned off as a RAM drive for swap file usage
Is any special software required to do that, or will ImDisk, for example, work fine for that purpose? I'm not sure if the swap file is initialized before the RAM disk driver is loaded, which could cause Windows to not create pagefile.sys at all.
 
Is any special software required to do that, or will ImDisk, for example, work fine for that purpose? I'm not sure if the swap file is initialized before the RAM disk driver is loaded, which could cause Windows to not create pagefile.sys at all.
Yeah, the Windows swap file won't work there.
But all other temp files, like for Plex server transcoding, do.

You bring up a good point there.
 
Here's the thing I think that needs to be taken into account when deciding between QLC and TLC. It's not just the number of writes that is important but also total size of the disk. For example, a TLC drive might do 3000 writes and a QLC 1000, but you can buy a larger QLC for the same money and that tends to equalize endurance somewhat.

But yes, I know there is something that just feels more reassuring about 3 bits/cell vs. 4. That's the difference between distinguishing 8 voltage levels in a cell as opposed to 16.
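A back-of-the-envelope way to see the "bigger drive equalizes endurance" point: rated endurance scales roughly with capacity × P/E cycles ÷ write amplification. The cycle counts and write-amplification factor below are assumptions for illustration, not specs for any particular drive:

```python
def rough_tbw(capacity_gb: int, pe_cycles: int, write_amplification: float = 2.0) -> float:
    """Very rough endurance estimate in TB written (illustrative assumptions only)."""
    return capacity_gb * pe_cycles / write_amplification / 1000  # GB -> TB

# Hypothetical drives at a similar price point
print(f"1TB TLC @ 3,000 P/E: ~{rough_tbw(1000, 3000):.0f} TBW")
print(f"2TB QLC @ 1,000 P/E: ~{rough_tbw(2000, 1000):.0f} TBW")
```

The bigger QLC drive doesn't fully close the gap in this example, but it ends up much closer than the raw cycle counts alone would suggest.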
 
but you can buy a larger QLC for the same money and that tends to equalize endurance somewhat.
Unfortunately, not many manufacturers passed on the savings to consumers. Samsung particularly overcharges for their QLC SSDs. So did Intel when they paired their QLC NAND chips with Optane caches to prolong the life of the SSD for basic workloads without making the drives cheaper than TLC ones from their competitors.
 
My first SSD (Samsung 840 PRO 256GB) still resides in my old Haswell build that I've sold to a customer. The last time I checked its stats, it had over 20TB writes and its health % was between 89-91.

I've seen a couple of endurance problems with SSDs, but one or both were with janky OEM SSDs that came with the laptop; one certainly was and wheezed its way to about 10TB writes, with plenty of bad sectors and was affecting system stability before the laptop was replaced (the whole thing was falling to pieces).

I've also got a couple of 980 PROs in the field that look like they were victim to that Samsung firmware bug. I've firmware updated them since, but at the time that I saw the issue, their health rating was around 80%, having I think only written about 10GB apiece.
 
Did I post about how I got screwed by Samsung (I probably did, but I'll do it again because I was wronged and this needs to send a cautionary chill down anyone's spine!)?

Well, I had an ASUS Haswell laptop (the most expensive laptop I've ever splurged on; I probably won't ever pay this much for a laptop again) with a Hitachi HDD. After several years of Win10 thrashing it (12GB RAM, and it took at least 10 minutes for disk activity to cease after it booted), the HDD developed some bad sectors. I was able to "fix" those with HDD Regenerator but they kept coming back, forcing me to run that software for hours for a full surface scan. So I had to get it replaced. I bought a Samsung 860 EVO SSD and cloned the HDD onto it without issue. But then I looked underneath the laptop and I didn't have a screwdriver for the stupid uncommon screws they used. I put the SSD away and got busy with life.

In the meantime, I kept using the laptop like that, fixing the bad sectors, until six months later a bad sector developed near the boot volume that would prevent Windows from booting. It would still work after fixing, but now things were wayyyyyyyyyyyyyy too serious for me to ignore. So I order the screwdriver and get the HDD replaced with the 860 EVO. Everything seems to be working fine on first boot. Then suddenly I notice some sluggishness. Check Task Manager and disk activity is really high. Open My Computer and notice that the free space on Drive C is increasing fast! Within a minute or two, I receive some fatal Windows error (some critical file not found) and then probably a BSOD. Reboot, and Windows won't boot anymore. The HDD had gotten into such a sensitive state that I didn't want to risk killing it with the burden of another clone attempt, and a lot of my data on Drive D was still intact on the Samsung SSD. So I left it like that and had to switch to an Ivy Bridge laptop, which I'm using to this day.

Maybe that incident happened for a good reason (that laptop could be upgraded to a max of 24GB RAM while my Ivy Bridge ThinkPad now has 32GB RAM), but I still cannot trust a Samsung SSD serving as a critical boot drive. None of the personal PCs I've built since contain a Samsung SSD, and they probably never will.
 
Did I post about how I got screwed by Samsung (I probably did, but I'll do it again because I was wronged and this needs to send a cautionary chill down anyone's spine!)?

Well, I had an ASUS Haswell laptop (the most expensive laptop I've ever splurged on; I probably won't ever pay this much for a laptop again) with a Hitachi HDD. After several years of Win10 thrashing it (12GB RAM, and it took at least 10 minutes for disk activity to cease after it booted), the HDD developed some bad sectors. I was able to "fix" those with HDD Regenerator but they kept coming back, forcing me to run that software for hours for a full surface scan. So I had to get it replaced. I bought a Samsung 860 EVO SSD and cloned the HDD onto it without issue. But then I looked underneath the laptop and I didn't have a screwdriver for the stupid uncommon screws they used. I put the SSD away and got busy with life.

In the meantime, I kept using the laptop like that, fixing the bad sectors, until six months later a bad sector developed near the boot volume that would prevent Windows from booting. It would still work after fixing, but now things were wayyyyyyyyyyyyyy too serious for me to ignore. So I order the screwdriver and get the HDD replaced with the 860 EVO. Everything seems to be working fine on first boot. Then suddenly I notice some sluggishness. Check Task Manager and disk activity is really high. Open My Computer and notice that the free space on Drive C is increasing fast! Within a minute or two, I receive some fatal Windows error (some critical file not found) and then probably a BSOD. Reboot, and Windows won't boot anymore. The HDD had gotten into such a sensitive state that I didn't want to risk killing it with the burden of another clone attempt, and a lot of my data on Drive D was still intact on the Samsung SSD. So I left it like that and had to switch to an Ivy Bridge laptop, which I'm using to this day.

Maybe that incident happened for a good reason (that laptop could be upgraded to a max of 24GB RAM while my Ivy Bridge ThinkPad now has 32GB RAM), but I still cannot trust a Samsung SSD serving as a critical boot drive. None of the personal PCs I've built since contain a Samsung SSD, and they probably never will.
Your story reads very much like, "I had a drive fail once, so I'm never going to use that manufacturer again". You'll be lucky if you don't run out of drive manufacturers to choose from eventually, and every manufacturer you choose will have a percentage of failed drives each year. It's always raining somewhere, eventually you're going to get wet.
 
SSD failures I've seen with my own eyes:

SandForce-based Corsair F60 (it recovered with data intact after I kept it plugged in for a few hours; obviously never used it again)

Samsung 860 EVO (not failed per se but did lose an entire partition while drive D is still intact to this day)

BX500 240GB (used by my dumb IT guy in a desktop PC when the data sheet says it is recommended to only use it in laptops)
 
My first SSD (Samsung 840 PRO 256GB) still resides in my old Haswell build that I've sold to a customer. The last time I checked its stats, it had over 20GB writes and its health % was between 89-91.

I've seen a couple of endurance problems with SSDs, but one or both were with janky OEM SSDs that came with the laptop; one certainly was and wheezed its way to about 10GB writes, with plenty of bad sectors and was affecting system stability before the laptop was replaced (the whole thing was falling to pieces).

I've also got a couple of 980 PROs in the field that look like they were victim to that Samsung firmware bug. I've firmware updated them since, but at the time that I saw the issue, their health rating was around 80%, having I think only written about 10GB apiece.
Do you "TB" and not "GB" for those write numbers? If GB then that seems really low.
 
Normal consumer workloads are unlikely to exhaust writes on most TLC SSDs.

But QLC is wayyyy too crappy to trust with anything important. There are sixteen fairly narrow voltage ranges to distinguish between the sixteen possible states of four bits per cell, so there are more chances of having data integrity issues, at least in theory. Of course, SSD controllers implement a lot of redundancy tricks to keep things going, so a good QLC controller could fix a lot of the ugly stuff in the background before it impacts data integrity.

One thing that can kill a consumer SSD relatively quickly is frequent hard power shutdowns. Sooner or later, the controller will be interrupted in the middle of a critical maintenance task of updating internal tables, and then bye bye SSD. Have seen it happen with at least one Crucial BX500. I think the data sheet mentioned that it is only to be used in a laptop or a PC with a UPS.
I have a stack of PNY CS900 240 GB SATA SSDs that suffered failure because of power outages, I would guess. The controller would come up with some odd ID. Really the only SSDs that I had any repeat problems with. Had to replace a lot (probably 10) for customers. None were endurance related.

They were inexpensive and I learned my lesson.
 
I have a stack of PNY CS900 240 GB SATA SSDs that suffered failure because of power outages,
Yeah, meant to be used in laptops only or in a desktop with a UPS.

By the way, I just remembered. Never buy TwinMOS SSD. Had a 128GB one at my workplace that stopped working after writing a few gigabytes. Got the replacement and that one also died, quicker than the first one.
 
I have a stack of PNY CS900 240 GB SATA SSDs that suffered failure because of power outages, I would guess. The controller would come up with some odd ID. Really the only SSDs that I had any repeat problems with. Had to replace a lot (probably 10) for customers. None were endurance related.

They were inexpensive and I learned my lesson.
One big issue with cheap SSDs today is that they use different types of NAND and even different controllers under the same product name. If we take a look at TechPowerUp's SSD database, we can see the CS900 uses two different controllers even for the 240GB version, which complicates any attempt to purchase a more reliable product based on the spec sheet alone.
 
One big issue with cheap SSDs today is that they use different types of NAND and even different controllers under the same product name. If we take a look at TechPowerUp's SSD database, we can see the CS900 uses two different controllers even for the 240GB version, which complicates any attempt to purchase a more reliable product based on the spec sheet alone.
I just quit buying PNY SSDs.
 
One big issue with cheap SSDs today is that they use different types of NAND and even different controllers under the same product name. If we take a look at TechPowerUp's SSD database, we can see the CS900 uses two different controllers even for the 240GB version, which complicates any attempt to purchase a more reliable product based on the spec sheet alone.
SanDisk/WD and Crucial are bigger players who are really bad about this, as are most manufacturers that aren't vertically integrated for at least the NAND portion of their drives. Samsung SSDs and NVMe drives are a pretty good bet, but even they've had some hiccups along the way. My favorite PCIe 4.0 drives are the Western Digital SN850 or SN850X, and only pre-SanDisk acquisition. I've used dozens of those drives in builds for myself, family, and friends, and not one single failure. On the SSD side of things, all the Crucial drives I have are from before they shrank the DRAM caches and generally cheapened the BOM for the MX500 line of drives.
 
Western Digital Blue series are pretty reliable SSDs.
Yeah, meant to be used in laptops only or in a desktop with a UPS.

By the way, I just remembered. Never buy TwinMOS SSD. Had a 128GB one at my workplace that stopped working after writing a few gigabytes. Got the replacement and that one also died, quicker than the first one.
I had serious performance issues with Silicon Power DRAMless SSDs. The performance loss was so noticeable that after a week of basic browsing-type usage, running TRIM would make a huge performance difference. A week later it would stutter again, like being in a Ferrari in New York doing 100 mph in between lights but having to stop at every traffic light. I never bought DRAMless again after that.

Also had an Adata SSD fail in my brother's laptop. It would be so slow that opening a browser would take 15+ seconds on a Broadwell i7. Briefly switching to my SSD confirmed the issue.

I had another SSD fail on the used Thinkpad X390 Yoga I bought and had to return. It kept having issues on some updates.

It seems at least based on my experience you have to be far more careful with SSDs in the used market, whereas with HDDs not so much.
 
Your story reads very much like, "I had a drive fail once, so I'm never going to use that manufacturer again". You'll be lucky if you don't run out of drive manufacturers to choose from eventually, and every manufacturer you choose will have a percentage of failed drives each year. It's always raining somewhere, eventually you're going to get wet.
It's up to the manufacturer to reduce that, not the consumer to "take a chance" on a manufacturer. I had two Samsung monitors fail myself, both lasting ~5 years, whereas I have an NEC that was bought used almost 10 years ago and was manufactured in 2008.

Also, some of the failures are due to intentional decisions made by the manufacturer. While looking up how to replace backlights on monitors, I came across an article about why Samsung TVs fail more often compared to LG, and while it may apply to certain lineups and not everything, it made perfect sense. It was decisions they made that made them more susceptible to failures.

For example, talking about Toyota vs GM, the reason Toyota has fewer issues is the way they run their factories. The Asianomics channel on YouTube talked about how on most assembly lines the assembler has to keep up with the line, even if they are not ready to go at that speed.

On Toyota's assembly lines, any assembler can press a button to stop the line so they can get ready, and the stops are frequent. In Toyota's mind, those pauses save time and money in the long term because not everyone works at the same speed, and they reduce failures down the road. When a company is struggling, most decide to fire people (calling it layoffs, softening the blow to make it sound less bad), while during downturns Toyota sends them to more training, and even after hiring they train their employees for a longer period.
 
I only had a crappy Zalman 32GB SATA SSD fail, and it did so at least 8 years ago. It was so crappy that it maxed at 42MB/s sustained writes in CrystalDiskMark. That number is not missing a zero at the end! Wasn't sad to see it go, but it was really cheap at the time, after a rebate.

My most write-enduring SSDs are 8x lowish-end Silicon Power S55s, 240GB SATA, the version with the Marvell controller instead of Silicon Motion. They have been in a couple of RAID arrays for about 10 years, the whole time with one array used for video editing and the other hammered with P2P activity, and both are still working fine. Knock on wood.

I can't pull up a lot of info on them, like TBW or estimated remaining lifespan, because they sit behind the RAID controllers. CrystalDiskInfo can't even see the drive volumes they make. They appear to have house-(re)marked Micron 25nm planar MLC NAND. I got a bit lucky buying them at well below average pricing compared to the major brands at the time. I got luckier still that back then they were already using half-length PCBs, so one of them fit into an old laptop with only an IDE interface, in its 2.5" HDD bay along with an IDE-to-SATA adapter board in series.

On the other hand, I've got a Crucial MX500 1TB with about 86 TBW in 26 months (roughly 19,500 hours) of use that CrystalDiskInfo is already showing as down to 64% health.
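Assuming the health percentage drops roughly linearly with writes (a big assumption, since wear leveling and workload shifts can change the slope), a quick projection for a drive like that MX500 looks like this:

```python
# Back-of-the-envelope projection, assuming health % decreases linearly over time.
months_elapsed = 26
health_remaining = 64
health_used = 100 - health_remaining            # percentage points consumed so far
months_per_point = months_elapsed / health_used
months_left = health_remaining * months_per_point
print(f"~{months_left:.0f} months (~{months_left / 12:.1f} years) left at this pace")
```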
 
Here's the thing I think that needs to be taken into account when deciding between QLC and TLC. It's not just the number of writes that is important but also total size of the disk. For example, a TLC drive might do 3000 writes and a QLC 1000, but you can buy a larger QLC for the same money and that tends to equalize endurance somewhat.
The two problems with that are, first, that I did not historically see the capacity-to-price ratio rise as much as the endurance ratio dropped, and second, the state of HQ-flash-chip-shortage-induced SSD pricing today, where you're getting a 240GB TLC or even QLC SSD at a higher cost than I paid for my 240GB MLC SSDs 10 years ago. Granted, that price comparison is in non-adjusted (for inflation) dollars; after adjustment, a 240GB QLC SSD is slightly (20%?) less expensive today.

NAND Type                Bits per Cell    Typical P/E Cycles
Planar MLC (2016 era)    2                3,000–10,000
Modern 3D QLC            4                ~200–1,000
 