What's your total/usable drive capacity @Danoff? Still 18/9TB? I'm interested in your desire to move away from spinning rust on environmental grounds. I think that SSDs (of any form factor) are probably poor value for money for your use case. If you're worried about total output bandwidth, could you use a PCIe SATA RAID card? They don't look that expensive (I found an 8-port PCIe gen 3 card for £129 in a brief search) and would probably let you continue using drives which - even when factoring in the increased power cost - operate at a lower TCO than SSDs. That card is advertised as pushing 64Gb/s of disk bandwidth. I'm not sure how they can claim that on the SATA side, given that 8x 6Gb/s is only 48Gb/s - my guess is the 64Gb/s figure refers to the PCIe 3.0 x8 host link (8 lanes at ~8Gb/s each) - but either way it's more than enough to flood your network.
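As a sanity check on those figures, here's a quick back-of-envelope in Python. The x8 lane count and the 128b/130b encoding overhead are my assumptions, not from the card's listing:

```python
# Back-of-envelope bandwidth check for an 8-port SATA3 RAID card
# in a PCIe 3.0 x8 slot. Lane count and encoding overhead are
# assumptions; the listing only advertised "64Gb".

SATA3_GBPS = 6.0            # per-port SATA III line rate
PCIE3_GTPS = 8.0            # PCIe 3.0 gigatransfers/s per lane
PCIE3_ENCODING = 128 / 130  # 128b/130b encoding efficiency

ports, lanes = 8, 8

sata_aggregate = ports * SATA3_GBPS                   # 48.0 Gb/s
pcie_host_link = lanes * PCIE3_GTPS * PCIE3_ENCODING  # ~63.0 Gb/s

print(f"SATA side: {sata_aggregate:.1f} Gb/s")
print(f"PCIe side: {pcie_host_link:.1f} Gb/s")
# Either number comfortably exceeds a 10GbE (10 Gb/s) uplink.
```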
Western Digital Red HDDs run to around $0.025 per GB, compared with $0.11 per GB for an equivalent SSD. 18TB of SSDs in this model is 9 drives, for a total cost of $2,088, whereas you could provision 20TB of storage in 5 HDDs for $465. If you're going RAID1+0, then you'd want 6 HDDs ($558) or 10 SSDs ($2,320). Either of these configs will overwhelm a 10GbE port.
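If you want to play with the numbers yourself, here's the arithmetic as a small Python sketch. Per-drive prices are the PC Part Picker figures cited below; the even-drive-count rule is just how RAID1+0 pairing works:

```python
from math import ceil

# Per-drive figures (PC Part Picker: WD Red 4TB HDD, WD Red 2TB SSD)
DRIVES = {"HDD": (4, 93.0), "SSD": (2, 232.0)}  # (TB, price in $)
RAW_TB = 18  # matching the current 18TB-total array

def drives_for(raw_tb, drive_tb, even=False):
    """Drives needed for raw_tb; RAID1+0 wants an even drive count."""
    n = ceil(raw_tb / drive_tb)
    return n + 1 if even and n % 2 else n

for kind, (tb, price) in DRIVES.items():
    flat = drives_for(RAW_TB, tb)
    r10 = drives_for(RAW_TB, tb, even=True)
    print(f"{kind}: {flat} drives (${flat * price:,.0f}), "
          f"RAID1+0: {r10} drives (${r10 * price:,.0f})")
```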
Power consumption of an active WD Red HDD is 5.7W, so six of them is 34.2W. I didn't do exhaustive research on this, but I did see someone claiming their SSD consumed 3W when active (27W for the nine-drive config), which seems high to me. The same poster claimed 0.05W for an SSD when idle (WD states 1.7W idle for their HDD). I think what we're seeing is that even if we assume the SSDs are idle constantly (0.5W for ten of them) and the HDDs are active constantly (34.2W for six), you are not going to make the ~$1,760 cost difference back in power savings. (Prices for the above sourced from PC Part Picker, using the WD Red 4TB HDD and WD Red 2TB SSD.)
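To put a number on that, here's a rough payback calculation; the $0.15/kWh electricity price is my assumption, so scale it to your own tariff:

```python
# Rough payback period for the SSD option on power savings alone.
# Worst case for HDDs (always active), best case for SSDs (always idle).
HDD_WATTS = 6 * 5.7    # six WD Red HDDs, active
SSD_WATTS = 10 * 0.05  # ten WD Red SSDs, idle
PRICE_DIFF = 2320 - 558  # SSD vs HDD RAID1+0 cost, $
KWH_PRICE = 0.15         # assumed electricity price, $/kWh

saved_kwh_per_year = (HDD_WATTS - SSD_WATTS) * 24 * 365 / 1000
saved_dollars = saved_kwh_per_year * KWH_PRICE
print(f"Power saved:  {saved_kwh_per_year:.0f} kWh/yr (${saved_dollars:.0f}/yr)")
print(f"Payback time: {PRICE_DIFF / saved_dollars:.0f} years")
```

On those assumptions the break-even point is roughly forty years out, which is long past the service life of any of the drives.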
You could trade some interface bandwidth for lower power costs by buying fewer, larger drives, but the 4TB HDD and 2TB SSD seem like the cost-per-GB sweet spots right now.
I was reading your comments about network speed, in which you said it starts at the client, and I wanted to offer an alternative perspective. Since your PVR/NAS is acting as a server, it should really have port bandwidth approaching the total client draw. In that model, you'd connect the server to the network via a 10GbE interface into a switch with GbE ports for distribution. Something like this: https://store.ui.com/collections/unifi-network-routing-switching/products/usw-pro-48. How many clients are you serving, and what's the split between wired and wireless? Are you using a mesh-type WLAN?
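For a sense of scale, here's a quick oversubscription check on that topology. The 48-client figure is just a placeholder based on that switch's port count, pending your answer above:

```python
# Oversubscription on a 10GbE server uplink feeding GbE clients.
# 48 clients is an assumption based on the USW-Pro-48's port count.
UPLINK_GBPS = 10.0
CLIENTS, CLIENT_GBPS = 48, 1.0

demand = CLIENTS * CLIENT_GBPS
print(f"Worst-case client demand: {demand:.0f} Gb/s")
print(f"Oversubscription ratio:   {demand / UPLINK_GBPS:.1f}:1")
# 4.8:1 is fine in practice: video streams are tens of Mb/s each,
# so dozens of simultaneous streams still fit inside 10 Gb/s.
```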
Apparently your NAS design is more interesting to me than configuring a Splunk interface to ingest Node logs!