
2 * Samsung 960 EVO 250 GB in RAID 0

2 * 960 EVO 250 GB vs. 960 EVO 500 GB

  • 2 * Samsung 960 EVO 250 GB

    Votes: 0 0.0%
  • 1 * Samsung 960 EVO 500 GB

    Votes: 11 100.0%

  • Total voters
    11

CyberCat3

Junior Member
Hi, I would like to get a really nice SSD setup, but I was wondering: would 2 * Samsung 960 EVO 250 GB in RAID 0 be better or worse than a single 960 EVO 500 GB?

I know RAID 0 doubles the chance of your storage setup dying, but does that apply to SSDs too?
Since they wear out in a different way than hard drives, maybe it wouldn't affect them?
 
The increased chance of failure in RAID 0 applies to both mechanical drives and SSDs. While the two drive types may fail in different ways, you still run the risk that if either drive fails, the entire array is lost.
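To put a rough number on that risk: if each drive fails independently with some probability over a given period, a two-drive RAID 0 array is lost if either member fails. A minimal sketch in Python (the 2% per-drive figure is purely an illustrative assumption, not a spec for these drives):

```python
# Probability a RAID 0 array is lost, assuming each member drive
# fails independently with the same probability p over some period.
def raid0_failure_probability(p_drive: float, n_drives: int) -> float:
    # The array survives only if *every* drive survives.
    return 1 - (1 - p_drive) ** n_drives

p = 0.02  # hypothetical per-drive failure probability
print(f"single drive:   {raid0_failure_probability(p, 1):.4f}")  # 0.0200
print(f"2-drive RAID 0: {raid0_failure_probability(p, 2):.4f}")  # 0.0396
```

So the two-drive array's risk is just under double the single drive's, which matches the "doubles the chance" rule of thumb for small p.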

To (partially) answer the first question: the 250 GB model gets the same speeds as the 500 GB drive in read tests, but it is a bit slower writing. Theoretically, RAID 0 would give you better read performance, as long as your motherboard has two full-speed NVMe slots and doesn't cripple them by making them share the same bus...
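That caveat can be modeled very roughly: striped sequential throughput scales with drive count only until the slots' shared bandwidth becomes the cap. A toy calculation (all the MB/s figures here are illustrative assumptions, not measured specs):

```python
# Toy estimate of RAID 0 sequential read throughput: striping scales
# with drive count until the interconnect becomes the bottleneck.
def raid0_read_throughput(per_drive_mbps: float, n_drives: int,
                          link_limit_mbps: float) -> float:
    return min(per_drive_mbps * n_drives, link_limit_mbps)

# Two drives on full-speed links vs. two drives forced to share one link.
print(raid0_read_throughput(3200, 2, link_limit_mbps=8000))  # 6400.0: striping helps
print(raid0_read_throughput(3200, 2, link_limit_mbps=4000))  # 4000.0: capped by the shared bus
```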
 
I would personally choose the 500 GB 960 EVO, or even the 512 GB 960 PRO over two of the smaller drives. The SLC buffer is too small on the 250 GB versions, plus the warranty for those is only 100 TBW compared to 200 TBW for the 960 EVO 500 GB version, and 400 TBW on the 960 PRO.

Why do you want to RAID two 250 GB 960 EVOs? Is it for synthetic benchmark bragging rights, or is there some specific need or program where it would help you?
 
I'd pick the 500 for half the failure rate compared to two drives, no extra chance of unreadable data because of a RAID problem, and because SSDs work better when they have plenty of unused space.

SSD speeds are high enough that for my workloads (gaming and Visual Studio) going to RAID-0 would only really matter for benchmarks.
 
Something like this is going to compete with other options for your x4-or-wider PCIe slots, make you second-guess the layout if you were thinking about 2x SLI, or push people toward boards that offer more PCIe lanes and more slots -- spending more money.

Personally, I'm trying to figure out if I should put my "leftover" 960 EVO 250 into PCIE_16_2 to be an SSD cache for maybe three SATA devices in combination of SSD and HDD.

I was even thinking to increase my RAM from 16 to 32GB, so I could create large RAM-caches.

My boot-system 960 Pro gets about 2GB of RAM-cache, benching in sequential read speeds of around 18,000 MB/s. Even without the EVO in the mix, my RAM-cached HDD benches at around 12,000 for 3,072 MB cache.

So you could maybe imagine two Samsung 1TB 960 EVOs in PCIe x4 sockets, or possibly through U.2 or M.2 on the motherboard. You could RAID them, but I'm not sure it's so cost effective. Ultimately, you'll be limited by the bandwidth of the PCIe slots or the lanes provided through the chipset, equivalent to x4.
 
Hi everybody, thank you for all of your answers!
I am now set on getting 1 * Samsung 960 EVO 500 GB instead of the RAID setup, thank you for helping me 😀
 
My boot drive will be two EVO 500s in RAID 0. Hardware failure is the main risk, but you still have one drive while the other is under warranty or waiting to get replaced out of warranty. It's really simple to back it up from time to time, especially as my C: drive is nowhere near that size, so I can use a 500 GB USB pen drive for backups and a WD Black 6 TB for storage. So isn't it better to have RAID for the C: drive? If one drive fails, your computer isn't down: you just install the image on the drive that's still good, and when you get the replacement you take a fresh image and re-create the RAID 0. Nothing is lost, and you weren't without a computer while you waited for the warranty exchange.
 
So you could maybe imagine two Samsung 1TB 960 EVOs in PCIe x4 sockets, or possibly through U.2 or M.2 on the motherboard. You could RAID them, but I'm not sure it's so cost effective. Ultimately, you'll be limited by the bandwidth of the PCIe slots or the lanes provided through the chipset, equivalent to x4.

Better to use the PCIe x4 slots (M.2), which most people have plenty of, than the video slots (U.2), since there are just 16 video-card lanes in Coffee Lake and your video card uses most to all of them -- all of them in SLI at 8/8. There's no room for U.2 there; stealing lanes from the GPU is a serious bottleneck, which is why it's out of favor unless you have an X299 system with a Skylake chip that's got 24 lanes and you only want to SLI 2-3 cards.
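The lane arithmetic behind that argument can be sketched like this (the 16-lane figure matches mainstream Coffee Lake CPUs; the per-device lane costs are illustrative assumptions):

```python
# Sketch of a CPU PCIe lane budget: mainstream Coffee Lake exposes
# 16 CPU lanes, so GPU(s) and any CPU-attached NVMe/U.2 compete for them.
CPU_LANES = 16

def remaining_lanes(devices: dict) -> int:
    """Lanes left on the CPU after the listed devices take theirs."""
    return CPU_LANES - sum(devices.values())

print(remaining_lanes({"GPU": 16}))              # 0: the GPU takes everything
print(remaining_lanes({"GPU": 8, "U.2 SSD": 4})) # 4: lanes freed, but the GPU dropped to x8
```

Chipset-attached M.2 slots sidestep this budget entirely, which is the poster's point: the DMI uplink is its own bottleneck, but it doesn't steal lanes from the video card.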
 
Single PCIe NVMe... because each of them will deactivate two SATA ports or an x8 PCIe link.

And there really is no point in RAID 0 NVMe... so you're doing it mostly for giggles/bragging rights if you're asking for advice. To be honest, you'll really get laughed at for doing so, because it's so pointless.

Meaning if you really needed RAID 0 NVMe in IT, you'd probably get fired for being a blockhead, because you don't RAID 0 anything -- unless you're doing it purely for research purposes like LinusTech.
 