Danoff's NAS and PVR Build

  • Thread starter Danoff
  • 67 comments
  • 7,407 views
@Danoff out of curiosity, what would warrant the need to switch completely over to SSDs and a 10gig home network?

10gig networks are usually reserved for high-bandwidth applications such as editing high-end video files over a NAS, or high-speed data transfer/backup.

For Plex use, HDDs are fine; they're not usually a source of bottleneck, assuming you have your drives in a RAID setup. If you want to increase the read speed, you can set up an SSD cache so your drives don't have to constantly spin up from idle to retrieve a file.
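
To illustrate the caching idea, here's a toy sketch in Python (purely illustrative; the cache size and access pattern are invented) of how a small, fast cache absorbs repeat reads so the platters can stay spun down:

```python
from collections import OrderedDict

# Toy LRU cache: repeated reads of popular files get served from the
# SSD tier instead of waking an idle platter drive.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()

    def read(self, name):
        if name in self.entries:
            self.entries.move_to_end(name)    # cache hit: no spin-up
            return "cache"
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict least recently used
        self.entries[name] = True
        return "disk"                         # cache miss: disk spins up

cache = LRUCache(capacity=3)
# Skewed access pattern: a few popular files dominate, as with media.
accesses = ["a", "b", "a", "a", "c", "b", "a", "d", "a", "b"]
hits = sum(cache.read(f) == "cache" for f in accesses)
print(f"{hits}/{len(accesses)} reads served without touching the HDDs")
```

Real cache layers (ZFS L2ARC, LVM cache, and the like) are far more sophisticated, but the win comes from the same skew in access patterns.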
 
There's no specific need. It's just a matter of not wanting to buy platter drives anymore. They're fragile, they chew up electricity, and they're slow. They're not too slow to stream a 30GB Blu-ray rip from the NAS, but they're not super quick about it either. Skipping commercials in an HD OTA recording takes a tick too (buffering during a commercial skip is not as fast as one might hope). It's not a big deal, but I'm tired of re-buying platters to get slow transfers.
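
For what it's worth, sequential streaming is the easy case for a platter drive; here's a back-of-envelope check (the 2-hour runtime and ~150 MB/s sequential read rate are my assumptions):

```python
# Rough check: average bitrate of a 30GB Blu-ray rip vs. HDD throughput.
file_gb = 30
runtime_s = 2 * 3600                        # assumed 2-hour runtime
avg_mb_per_s = file_gb * 1000 / runtime_s   # average streaming rate
print(f"average stream rate: {avg_mb_per_s:.1f} MB/s")  # ~4.2 MB/s
print(f"HDD headroom: {150 / avg_mb_per_s:.0f}x")       # ~36x at 150 MB/s
# It's the seek-heavy cases (commercial skips) where latency shows up.
```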

SATA SSDs would be more than fast enough. I just hesitate to buy antiquated technology when NVMe SSDs are the same price per TB. This led me to consider future-proofing by moving to NVMe drives instead of replacing platters. But the future I'd be proofing against would be pretty far away, and the infrastructure (motherboard) needed to make such a thing work is a significant investment.
 
Gotcha!!

What kind of HDDs are you using? Would it be worth it, in your case, to upgrade to NAS HDDs from, say, WD? Or even Seagate IronWolf drives? I mention those two specifically because I don't know what kind of drives your refurbished ones were.

and the infrastructure (motherboard) needed to make such a thing work is a significant investment.

Honestly, going the U.2 route as you pointed out previously would allow for more expandability - in this case I'm talking about U.2 drives, not a U.2 to M.2 adapter.
 

The last two drives I bought were:
Seagate - ST2000NM0033
Hitachi - Ultrastar A7K3000

I'd stopped buying Seagate after a number of Seagate failures, but I was just going for the lowest price on this latest round.

Honestly, going the U.2 route as you pointed out previously would allow for more expandability - in this case I'm talking about U.2 drives, not a U.2 to M.2 adapter.

U.2 drives are very expensive.
 
What's your total/usable drive capacity @Danoff? Still 18/9TB? I'm interested in your desire to move away from spinning rust on environmental grounds. I think that SSDs (of any form factor) are probably poor value for money for your use case. If you're worried about total output bandwidth, could you use a PCIe SATA RAID card? They don't look that expensive (I found an 8-port PCIe gen 3 card for £129 in a brief search) and would probably allow you to continue using drives which - even when factoring in the increased power cost - operate at a lower TCO than SSDs. That card I found claims 64Gb/s of disk bandwidth. I'm actually not sure how they can claim that, given that 8x 6Gb/s is only 48Gb/s, but it's still enough to flood your network.
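
One guess at the 64Gb/s figure (speculation on my part): it may be the PCIe 3.0 x8 host-link rate rather than the sum of the SATA ports:

```python
# Where a "64Gb/s" claim could come from: the PCIe 3.0 x8 host link,
# not the aggregate of the eight SATA ports.
lanes = 8
gts_per_lane = 8.0            # PCIe 3.0 signals at 8 GT/s per lane
encoding = 128 / 130          # 128b/130b encoding overhead
pcie_gbps = lanes * gts_per_lane * encoding
sata_gbps = 8 * 6             # eight SATA III ports at 6 Gb/s each
print(f"PCIe 3.0 x8 link: {pcie_gbps:.1f} Gb/s")  # ~63 Gb/s, i.e. '64Gb'
print(f"8x SATA III:      {sata_gbps} Gb/s")
```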

Western Digital Red HDDs run to around $0.025 per GB, compared with $0.11 per GB for an equivalent SSD. 18TB of SSDs in this model is 9 drives, for a total cost of $2,088, whereas you could provision 20TB of storage in 5 HDDs for a cost of $465. If you're going RAID 1+0, then you'd want 6 HDDs ($558) or 10 SSDs ($2,320). Each of these configs will saturate a 10GbE port.
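
Spelling out that arithmetic (the per-drive prices are backed out from the totals above):

```python
# Drive counts and totals for 18TB usable, using assumed street prices:
# WD Red 4TB HDD at $93, WD Red 2TB SSD at $232 (per PC Part Picker).
hdd_price, hdd_tb = 93, 4
ssd_price, ssd_tb = 232, 2

ssd_count = -(-18 // ssd_tb)    # ceiling division: 9 drives
hdd_count = -(-18 // hdd_tb)    # ceiling division: 5 drives (20TB)
print(f"SSD: {ssd_count} x ${ssd_price} = ${ssd_count * ssd_price:,}")
print(f"HDD: {hdd_count} x ${hdd_price} = ${hdd_count * hdd_price} for {hdd_count * hdd_tb}TB")

# RAID 1+0 mirrors everything, roughly doubling the drive count:
print(f"RAID 1+0: 6 HDDs = ${6 * hdd_price}, 10 SSDs = ${10 * ssd_price:,}")
```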

Power consumption of an active WD Red HDD is 5.7W, so six of them is 34.2W. I didn't do exhaustive research on this, but I did see someone claiming their SSD consumed 3W when active (27W for the nine-drive config), which seems high to me. The same poster claimed 0.05W for an SSD when idle (WD states 1.7W for their HDD). I think what we're seeing is that even if we assume the SSDs are idle constantly (0.5W) and the HDDs are active constantly (34.2W), you are not going to make the $1,750 cost difference back in power savings. (Prices for the above sourced from PC Part Picker, using the WD Red 4TB HDD and WD Red 2TB SSD.)
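
Putting numbers on that payback (the electricity rate is an assumption):

```python
# Worst case for the HDDs: always active (34.2W) vs SSDs always idle (0.5W).
watts_saved = 34.2 - 0.5
kwh_per_year = watts_saved * 24 * 365 / 1000   # ~295 kWh/yr
rate = 0.12                                    # assumed $/kWh
dollars_per_year = kwh_per_year * rate         # ~$35/yr
payback_years = 1750 / dollars_per_year
print(f"~${dollars_per_year:.0f}/yr saved; ~{payback_years:.0f} years to recoup $1,750")
```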

You could sacrifice interface bandwidth for power costs by buying larger drives, but the 4TB HDD and 2TB SSD seem like the cost/GB sweet spots right now.

I was reading your comments about network speed, in which you said it starts at the client, and I wanted to offer an alternative perspective. Since your PVR/NAS is acting as a server, it should really have port bandwidth approaching the total client draw. In this model, you'd connect the server to the network via a 10GbE interface to a switch which has GbE ports for distribution. Something like this: https://store.ui.com/collections/unifi-network-routing-switching/products/usw-pro-48. How many clients are you serving, and what is the distribution between wired and wireless? Are you using a mesh-type WLAN?
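
Rough sizing for that topology (the client count just uses the switch's port count, and the per-stream bitrate is a guess):

```python
# A 10GbE server uplink feeding GbE client ports on a 48-port switch.
clients = 48
uplink_gbps = 10
worst_case_gbps = clients * 1                # every GbE port flat-out
print(f"oversubscription: {worst_case_gbps / uplink_gbps:.1f}:1")   # 4.8:1
streams = uplink_gbps * 1000 / 35            # ~35 Mb/s per Blu-ray stream
print(f"~{streams:.0f} simultaneous 35 Mb/s streams fit in the uplink")
```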

It seems your NAS design is more interesting to me than configuring a Splunk interface for ingesting Node logs!
 
Well, I'm glad to pique your interest. I'm aware that SSDs are going to cost more. But they'll also be faster and need less replacement, so I'm willing to spend a little extra. Some savings come back in the form of power consumption, as you note above. 30W (converting at the rate I pay for electricity) is something like $30/year. Not a ton, but not zero either. The replacement cost is also a factor, so if platters need to be replaced three times as often, for example, the numbers come much closer together.
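
That $30/year figure is consistent with a fairly ordinary electricity rate (the exact $/kWh is my assumption):

```python
# Sanity check: 30W of constant draw over a year.
kwh_per_year = 30 * 24 * 365 / 1000          # 262.8 kWh
rate = 0.115                                 # assumed $/kWh
print(f"{kwh_per_year:.0f} kWh/yr -> ${kwh_per_year * rate:.0f}/yr at ${rate}/kWh")
```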

I don't have my drives idle ever, because I get annoyed with the spinup response. SSDs, OTOH, could idle. Actually after typing this I just went and changed my least-used pools to use standby.

I'm not actually worried about total output bandwidth. I did have two users (my kids) hitting it simultaneously yesterday, but they were hitting the same drives, so adding SATA bandwidth across drives wouldn't really help. Data striping would help, but it drastically increases the consumption of usable life for the disks and increases the chances of losing data. I do actually have a PCIe SATA card to handle the number of drives; right now I have 10 drives, which is more than the motherboard could handle on its own. I top out at about 100MB/s steady state reading from those drives. 500MB/s would be much nicer. Not necessary, just nicer.

Occasionally I transcode, but mostly the bandwidth is annoying when I'm making changes to the files, replacing or changing drives, or when I'm trying to jump around, for example skipping commercials.
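
To put numbers on the bulk-operation pain (using the ~3.3TB pool size mentioned below):

```python
# Time to move a pool's worth of data at the current vs. wished-for rate.
data_tb = 3.3
for mb_s in (100, 500):
    hours = data_tb * 1e6 / mb_s / 3600
    print(f"{mb_s} MB/s: {hours:.1f} hours to move {data_tb}TB")
# 100 MB/s: ~9.2 hours; 500 MB/s: ~1.8 hours.
```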

One thing I've started to do is break my data up into categories that get accessed at different rates. I've long held documents on one pool (mirrored 600GB drives, yes that's right, GB not TB). But I've started splitting out more pools, so data I rarely (or almost never) need to access sits on a second pool, and regular-access media sits on a third. Doing that has helped break up the disk space. That third pool sucks the most bandwidth, and while it doesn't actually take up much space, it's also the one that ends up with all the failures, because it sees the most read/write activity. So that's the pool I'd probably move to SSD (just through attrition), and I've also made it smaller than it used to be. I think I could probably make it smaller still.

That third pool has about 3.3TB of data on it. At the moment I have that mirrored across 2x2TB and 2x3TB drives for a total of just under 5TB of actual usable space. I have spare 3TB drives (used in pools I don't need), so if I lose a 3TB in that third pool, I'll replace it with a 3TB from elsewhere. If I lose a 2TB in that third pool, I'll replace it with a 3TB from elsewhere. If I lose a second 2TB drive, I'll buy a 2TB SSD. Right now a 2TB HDD costs about $30 (refurbished cheapo). Right now a 2TB SATA SSD is edging down toward like $180. I'd guess that this comes to pass in about a year or two. :)
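
The usable-space math for that pool (assuming the four drives form two mirrored pairs):

```python
# Mirrored pairs contribute the size of one drive each.
pairs_tb = [2, 3]            # a 2TB mirror and a 3TB mirror
usable = sum(pairs_tb)       # 5TB raw; "just under 5TB" after overhead
used = 3.3
print(f"usable ~{usable}TB, {used}TB used, {usable - used:.1f}TB headroom")
```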
 
@Danoff

[Attached screenshot: HBA adapter]

Source: [embedded video]
The one time Linus Tech Tips actually teaches you something. U.3 is now here and slowly trickling down.
 
Bump

Seagate may have just unveiled the world's fastest HDD


Speed is 524 MB/s, pulling 13.5W under heavy load.


Seagate's Exos 2X14 HDD has a capacity of 14TB, but the drive is essentially two 7TB HDDs fused together in a hermetically sealed, helium-filled 3.5-inch chassis. It features a spindle speed of 7200 RPM, a 256MB multi-segmented cache, and a single-port 12Gb/s SAS interface.


When plugged into a server in a data center, the host system will view the Exos 2X14 as two logical drives that can be addressed independently. The sequential read/write speeds of Seagate's new HDD are so fast that the drive can even rival some inexpensive SATA/SAS SSDs at a far lower cost-per-TB.
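
Rough per-actuator math from the article's figures (the SATA SSD number is a typical-value assumption):

```python
# Dual-actuator drive: the 524 MB/s headline is split across two actuators.
combined_mb_s = 524
per_actuator = combined_mb_s / 2   # ~262 MB/s per 7TB logical drive
sata_ssd = 550                     # typical SATA SSD sequential read
print(f"~{per_actuator:.0f} MB/s per logical drive vs ~{sata_ssd} MB/s for a SATA SSD")
```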



It's promising to see some life still left in HDD development.
 