My Jellyfin just quivered…
😏
I’ve been looking to buy a couple 24TB drives. Hopefully, this pushes their price down.
Peertube instance owners rejoice!
Or just people who download porn.
That’s… a lot of porn.
Who doesn’t have multiple TB of videos just laying around?
*Raises hand confidently
*pisses pants nervously before turning into a wolf
Sir, this is a Wendy’s
I don’t have porn just lying around, thank you very much
It’s all seeding for the other degenerates, doing hard work
Seeding… Porn… Heh
I prefer 1980s porn jpgs around 90kB each thankyouverymuch.
These are crazy sizes if you think about it. I have 2 and 4 TB drives and they're far from full.
When will it be commercially available though? Supposedly Seagate has had 30TB drives out for the better part of a year, but I can’t find anything larger than 24TB actually available for purchase.
I’ve been waiting for a 32TB to become available as well, Seagate announced that drive last year and it’s still not available outside data centers. I suspect the WD one will be the same.
I’d guess that they’re commercially available but only for hyperscalers - large companies like Google, Amazon (AWS), etc that need a huge amount of storage.
Obligatory hint that SMR isn’t suited for RAID systems.
A better way to word it is: SMR is only suited for archival usage. Large writes, little-to-no random writes.
I wonder how the read performance would be.
If you know how SMR works, you can trivially see that read performance is not impacted. Writing is impacted, because updating one track means rewriting the overlapping tracks that follow it (the overlap is what buys the extra density).
Impacted write performance, coupled with HDDs being generally slow at random writes anyway, plus the extra potential for data loss from less-atomic writes, makes them terrible drives for everything except archival usage.
Tape on a platter, basically.
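A toy model of why writes suffer but reads don't, with made-up zone sizes (real SMR firmware is far more sophisticated, with persistent caches and zone remapping):

```python
# Toy model of SMR write amplification (illustrative numbers, not real firmware).
# In a shingled zone, tracks overlap like roof shingles: rewriting track i
# clobbers tracks i+1..end of the zone, so those must be read back and rewritten.

def smr_random_write_cost(zone_tracks: int, target_track: int) -> int:
    """Tracks physically written to update one track inside a shingled zone."""
    return zone_tracks - target_track  # the target plus everything shingled over it

def cmr_random_write_cost(zone_tracks: int, target_track: int) -> int:
    """Conventional recording: tracks don't overlap, so one write suffices."""
    return 1

ZONE = 100  # tracks per zone (assumed)
print(smr_random_write_cost(ZONE, 0))   # worst case: rewrite the entire zone
print(smr_random_write_cost(ZONE, 99))  # appending at the end is cheap
print(cmr_random_write_cost(ZONE, 0))   # CMR: always one track
```

Reads are unaffected in this model, which is why the archival (write-rarely, read-whenever) pattern suits SMR.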
Wonder what happens if you throw them in an unraid BTRFS/jbod configuration with a CMR parity drive.
Slowdown and data corruption?
Assuming these have a fairly impressive 100 MB/s sustained write speed, it's going to take about 72 hours to write the full 26 TB (closer to 89 hours for the 32 TB model), basically three to four days. That's a long time to replace a failed drive in a RAID array; you'd need to consider multiple disks of redundancy just in case another one fails while you're resilvering the first.
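Working the estimate through (decimal TB as marketed, with assumed sustained speeds):

```python
def full_write_hours(capacity_tb: float, speed_mb_s: float) -> float:
    """Hours to sequentially write a whole drive at a sustained speed."""
    bytes_total = capacity_tb * 1e12          # decimal TB, as drives are marketed
    seconds = bytes_total / (speed_mb_s * 1e6)
    return seconds / 3600

print(round(full_write_hours(26, 100), 1))  # ~72.2 h, about three days
print(round(full_write_hours(32, 100), 1))  # ~88.9 h, nearly four days
print(round(full_write_hours(26, 180), 1))  # ~40.1 h at ultrastar-class speeds
```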
This is one of the reasons I use unRAID with two parity disks. If one fails, I’ll still have access to my data while I rebuild the data on the replacement drive.
Although, parity checks with these would take forever, of course…
That’s a pretty common failure scenario in SANs. If you buy a bunch of drives, they’re almost guaranteed to come from the same batch, meaning they’re likely to fail around the same time. The extra load of a rebuild can kill drives that are already close to failure.
Which is why SANs have hot spares that can be allocated instantly on failure. And you should use a RAID level with enough redundancy to meet your reliability needs. And RAID is not backup, you should have backups too.
Also why you need to schedule periodical parity scrubs, then the “extra load of a rebuild” is exercised regularly so weak drives will be found long before a rebuild is needed.
2 parity is standard and should still be adequate. Likelihood of two failures within four days on the same array is small.
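That "small likelihood" can be put in rough numbers, assuming independent failures at some annualized failure rate (the 2% AFR and array size below are made-up for illustration):

```python
# Rough odds of a second drive failing during a ~4-day rebuild window,
# assuming independent failures at a given annualized failure rate (AFR).
# Drives from one batch fail in a correlated way, so treat this as a floor.

def second_failure_prob(n_remaining: int, afr: float, rebuild_days: float) -> float:
    p_one = afr * rebuild_days / 365           # per-drive failure chance in the window
    return 1 - (1 - p_one) ** n_remaining      # at least one surviving drive fails

# e.g. 7 surviving drives, 2% AFR, 4-day rebuild:
print(f"{second_failure_prob(7, 0.02, 4):.4%}")
```

Small in absolute terms, which is the argument for two parity drives rather than three in a home setting.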
It’s more likely if you bought all the drives from the same store (since that increases the likelihood that they’re from the same batch), so you should make sure that you buy them from different stores.
My 16TB ultrastars get upwards of 180MB/s sustained read and write, these will presumably be faster than that as the density is higher.
I’m guessing that only works if the file is smaller than the RAM cache of the drives. Transfer a file that’s bigger than that, and it will go fast at first, but then fill the cache and the rate starts to drop closer to 100 MB/s.
My data hoarder drives are a pair of WD ultrastar 18TB SAS drives on RAID1, and that’s how they tend to behave.
This is for very long sustained writes, like 40TiB at a time. I can’t say I’ve ever noticed any slowdown, but I’ll keep a closer eye on it next time I do another huge copy. I’ve also never seen any kind of noticeable slowdown on my 4 8TB SATA WD golds, although they only get to about 150MB/s each.
EDIT: The effect would be obvious pretty fast at even moderate write speeds, I’ve never seen a drive with more than a GB of cache. My 16TB drives have 256MB, and the 8TB drives only 64MB of cache.
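The "obvious pretty fast" claim checks out arithmetically; a sketch using the cache sizes mentioned above and assumed transfer rates:

```python
# How long a drive's DRAM cache can absorb a write burst before sustained
# platter speed takes over. Cache sizes from the comment above; rates assumed.

def cache_absorb_seconds(cache_mb: int, incoming_mb_s: float, platter_mb_s: float) -> float:
    """Seconds until the cache fills, when the host outpaces the platters."""
    surplus = incoming_mb_s - platter_mb_s   # net rate the cache fills at
    if surplus <= 0:
        return float("inf")                  # platters keep up; cache never fills
    return cache_mb / surplus

# 256 MB cache, host pushing 180 MB/s, platters sustaining only 100 MB/s:
print(cache_absorb_seconds(256, 180, 100))   # ~3.2 s before any slowdown shows
```

So on a 40 TiB copy, a cache-only burst would vanish within seconds, not minutes.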
Except these drives are SMR - not something you’d want in a RAID.
Title literally says SMR for one size and CMR for another. Not that I should expect much from a .ml account.
If you're eyeing these, keep in mind that these babies tend to be LOUD AS FUCK, so they might not be suitable for home server use.
Are they any louder than any HDD from the last 30 years?
If so, I'm actually curious why that is
Edit: fixed to say HDD not SSD
Well, I have no experience with these particular drives, but they do seem to have 11 platters. Which is beyond insane as far as I'm concerned. More platters means more moving parts, more friction, more noise (all other things being equal).
Oops, yes. I definitely would expect these to be much louder than your 6 GB 1998 model HDD wrangling under stress of copying files at 30 MB/s.
Tell that to my IBM 10GB 10,000 RPM U2W SCSI from back then. To this day I have never witnessed a noisier hard drive… But that PC was pretty epic, including the biggest mf of a mainboard I ever had (the SCSI controller was onboard).
Ah, the sound of turning on the SCSI storage tower.
KA-TSCHONK. WeeeeeeeeEEEEEIIIIIII… skrrrt, skrrrt, clack.
Either that or KA-TSCHONK, silence, if there were already too many boxes on that circuit at a lan party 😁
Your everyday modern HDD doesn't do much more than 60 MB/s after the on-disk cache (a few GB) is full.
Not sure what you're on about; I have some cheap 500GB USB 3 drives from like 2016 lying around and even those can happily deal with sustained writes over 130 MB/s.
When the cache isn’t full, yes, that’s true. Copy a file that’s significantly bigger than cache and performance will drop part way through.
You've made me uncertain if I've somehow never noticed this before, so I gave it a shot. I've been dd-ing /dev/random onto one of those drives for the last 20 minutes and the transfer rate has only dropped by about 4 MB/s since I started, which is about the kind of slowdown I would expect as the drive head gets closer to the center of the platter.
EDIT: I've now been doing 1.2 GB/s onto an 8-drive RAID0 (8x 600GB 15k SAS Seagates) for over 10 minutes with no noticeable slowdown. That comes out to 150 MB/s per drive, and these drives are from 2014 or 2015. If you're only getting 60 MB/s on a modern non-SMR HDD, especially something as dense as an 18 TB drive, you've either configured something wrong or your hardware is broken.
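A rough Python stand-in for that dd test, if you want to reproduce it without dd. The file path and sizes here are placeholders; point TARGET at the drive under test and use a much larger total for a meaningful result:

```python
# Stream zeros to a file on the target drive and report per-chunk throughput.
# TARGET is an assumed placeholder path, not a real mount point.
import os, time

TARGET = "testfile.bin"          # put this on the drive under test
CHUNK = 64 * 1024 * 1024         # 64 MiB per write
TOTAL = 256 * 1024 * 1024        # small total for a quick demo; use tens of GiB for real tests

buf = bytes(CHUNK)
written = 0
with open(TARGET, "wb") as f:
    while written < TOTAL:
        t0 = time.monotonic()
        f.write(buf)
        f.flush()
        os.fsync(f.fileno())     # force it past the OS page cache each chunk
        dt = time.monotonic() - t0
        written += CHUNK
        print(f"{written / 1024**2:.0f} MiB done, {CHUNK / 1024**2 / dt:.0f} MiB/s")
os.remove(TARGET)
```

A sustained run like this (well past the drive's DRAM cache) is what distinguishes cache-burst speed from real platter speed.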
deleted by creator
My NAS uses a pair of SAS drives, and they make noises at boot up that would be concerning in a desktop. They’re quite obnoxious. But I keep them in part of the house where they don’t bother me.
Just don’t put it in your bedroom. All those dead skin cells wouldn’t do good to it anyway.
Since when is dust a concern for hard drives??
I was talking about the server in general
Drives this dense are hermetically sealed, typically helium-filled; even ordinary air inside would compromise the drive. If dust is getting to the moving parts of your hard drive, it's toast no matter where it's installed.
I’ve found that the only thing you can hear through a closed basement door are noisy high speed fans, e.g. from used 19" servers, disks produce much less noise.
Comparatively, yes - that’s auditory masking for you. On a relatively quiet place like a home, these will sound like rats running wild in your pipes.
Nah, I’m living outside the US, my home is made from proper bricks and concrete. A bit slower to build but rather good when it comes to sound insulation. I could imagine with those strand board walls that might be a problem though.
Parity rebuild will only take a week…
A week before next month
My 6TB drive just died. So I’m in the market for a new one.
sorry but these aren’t 6TB
Mebbe the 26 one is just 3-4 smaller drives inside it?
You joke but that’s sorta how it works for some HDDs lol
I hope you think of having several platters, not real drives :-D
This is great news! I'm running low on space on my 20 TB now.
Archive link: https://archive.ph/CAxE9
deleted by creator
Damn, how are you so confident?
Nobody will remember or care if he’s wrong.
Think of the parity!
There is already a Samsung 8 TB SSD being sold on Amazon. Buying 4 of those will be far cheaper than this monstrosity. And it will be silent, actually useful in a home server, and much faster too.
No shot 4 SSDs will be the same price as a HDD of the same capacity yet. HDD is still the king of GB/$.
If I’m wrong… Can you send me some links? I could use some cheap 8TB SSDs.
Aliexpress/cheap-fake-ssd-16TB 80€
Jk
I trust ali with a lot, but not drives :-)
Nah I don’t believe you at all.
SAMSUNG 870 QVO SATA 8TB = $683.38 x 4 = $2,733.52
8TB x 4 = 32TB
$2,733.52 / 32TB = $85.4225/TB
Yeah one of these disks does not cost more than $25/TB.
26TB x $25 = $650
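The thread's arithmetic, spelled out (the $25/TB HDD figure is the assumed ceiling from above, not a quoted price):

```python
# Cost-per-TB comparison from the thread.
ssd_unit_price, ssd_tb = 683.38, 8   # Samsung 870 QVO 8TB price quoted above
hdd_per_tb, hdd_tb = 25, 26          # assumed HDD street-price ceiling

ssd_total = ssd_unit_price * 4       # four SSDs to reach 32 TB
print(f"SSD: ${ssd_total:.2f} for {ssd_tb * 4} TB = ${ssd_total / (ssd_tb * 4):.2f}/TB")
print(f"HDD: ${hdd_per_tb * hdd_tb:.2f} for {hdd_tb} TB = ${hdd_per_tb:.2f}/TB")
```

Roughly a 3.4x premium for flash at these capacities, before counting the speed and noise advantages.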
FWIW, in July last year Amazon was selling these as low as $320. My biggest fear with a 26 TB HDD is getting all 26 TB of data off of it, if I needed to, without the drive dying.
As long as you follow the 3-2-1 rule, you don’t need to worry about putting your eggs in one basket.
That’s true but more concerned with rebuilding the raid than necessarily losing the data. I have to admit that I’m lazy with backups and I’ve had my ass saved by RAID 6.
It’s really difficult/expensive for a home user to do a 3-2-1 backup properly. Especially if you’re pushing beyond a few TB.
QVO drives are trash though. Would not recommend. Very slow and they don’t last as long as Samsung’s EVO and PRO drives.