• jarfil@beehaw.org

    Can we agree that brand new drives aren’t supposed to fail?

    No.

    The typical failure rates for pretty much all electronics, even mechanical parts, form a “bathtub curve”: relatively many early failures, very few failures for a long time, then a final rise in failures tending toward 100%.

    That’s why you’re supposed to have a “burn-in” period for everything before you can trust it with some probability (still make backups), and to watch out for it reaching end of life (make sure the backups actually work).
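
    For a rough picture of that bathtub shape, here’s a toy model (every number below is made up, purely for illustration): the overall failure rate is the sum of an early-failure term, a constant random-failure term, and a wear-out term, each modeled as a Weibull hazard rate.

    ```python
    # Toy bathtub-curve model -- all parameters are invented for illustration,
    # not taken from any real drive's reliability data.

    def weibull_hazard(t, shape, scale):
        """Instantaneous failure rate h(t) = (shape/scale) * (t/scale)**(shape - 1)."""
        return (shape / scale) * (t / scale) ** (shape - 1)

    def bathtub_hazard(t_hours):
        infant  = weibull_hazard(t_hours, shape=0.5, scale=5_000)    # early failures (decreasing)
        random_ = weibull_hazard(t_hours, shape=1.0, scale=200_000)  # constant background rate
        wearout = weibull_hazard(t_hours, shape=5.0, scale=60_000)   # end-of-life (increasing)
        return infant + random_ + wearout

    if __name__ == "__main__":
        for hours in (24, 24 * 14, 24 * 365, 24 * 365 * 5, 24 * 365 * 10):
            print(f"{hours:>6} h: hazard ~ {bathtub_hazard(hours):.2e} failures/hour")
    ```

    Printed out, the rate starts high, drops and stays low for years, then climbs again as wear-out kicks in, which is exactly the “burn in early, always back up, retire before end of life” advice above.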

    • u_tamtam@programming.dev

      That’s absolutely true in the physical sense, but in the “commercial”/practical sense, most respectable companies’ QA process would shave off a large part of that first bathtub slope through testing and good quality practices. Not everything coming off the assembly line is meant to make it into a boxed-up product.

      • jarfil@beehaw.org

        Apparently even respectable companies are finding out that it’s cheaper to skimp on QA and just ship a replacement item when a customer complains. Particularly with small items that aren’t too expensive to ship, but some are doing it even with full-blown HDDs.

        • u_tamtam@programming.dev

          In this case, I think we can remove what’s left of the benefit of the doubt from Western Digital (who owns SanDisk). They are as scammy/shady as I know a company to be.

          Personally I’ve been boycotting them since 2016, after I couldn’t recover the data from an external drive that WD had encrypted without warning or consent. A faulty component on the PCB (unrelated to the drive itself), combined with WD’s non-standard practices (non-SATA pins plus mandated proprietary encryption), meant that I had to lose the drive and the data it contained so they could make a quick buck. I can’t trust a company with such ethics to store anything for me.

          In 2020, they got themselves into another scandal. WD Reds, which were advertised as pro/NAS storage and sold at a premium, were found to be shingled (SMR) drives (a technique that trades away some reliability and availability in exchange for extra storage density), exposing many users to a heightened risk of critical failure (especially during disk swaps). WD of course denied it, then denied it again when confronted with evidence, until the internet burst into flames. Again, consumer-hostile practices.

          Here we have SSDs that have been reported for months, by several reputable sources, to be having problems, which SanDisk even attempted to patch, without success. And now, wouldn’t you expect them to recall them all in order to protect consumers from likely data loss (like any responsible data storage provider would do)? Nope. They are currently trying to sell them off at a significant discount, as quickly as they can; hurting plenty of consumers in the process apparently matters less than their short-term financials.

          As far as I’m concerned, they can go to hell; bankruptcy is all they deserve, for the greater good.

    • agentsquirrel@beehaw.org

      Indeed. An old EE mentor once told me that most component aging takes place in the first two weeks of operation. If it operates for two weeks, it will probably operate for a long, long time after that. When you’re burning in a piece of gear, it helps the testing process if you put it in a high-temperature environment as well (within reason) to place more stress on the components.

      • jarfil@beehaw.org

        The high-temperature part is kind of a trap with SSDs: flash memory is easier to write (less likely to error out) at temperatures above 50C, so if you run a write-heavy application at a higher temperature, it’s less likely to fail than if the drive were kept colder.

        Properly stress-testing an SSD would mean writing to it while cold (below 20C) and checking for read errors while hot (above 60C).

        For normal use you’d want the opposite: write hot, read cold.
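
        If anyone wants to play with that idea, here’s a minimal sketch of the data-integrity half of such a test (the path, block size, and block count are placeholders; the temperature conditioning itself has to be done externally, e.g. by chilling the drive for the write phase and heating it for the verify phase):

        ```python
        # Write-cold / verify-hot sketch -- a minimal integrity check, not a full burn-in tool.
        import hashlib
        import os

        TEST_FILE = "/mnt/ssd_under_test/burnin.bin"  # placeholder path on the drive under test
        BLOCK_SIZE = 4 * 1024 * 1024                  # 4 MiB per block
        BLOCK_COUNT = 256                             # 1 GiB total; scale up for a real run

        def write_phase():
            """Write random blocks (ideally with the drive cold) and keep a digest of each."""
            digests = []
            with open(TEST_FILE, "wb") as f:
                for _ in range(BLOCK_COUNT):
                    block = os.urandom(BLOCK_SIZE)
                    digests.append(hashlib.sha256(block).digest())
                    f.write(block)
                f.flush()
                os.fsync(f.fileno())  # make sure the data actually reaches the device
            return digests

        def verify_phase(digests):
            """Read everything back (ideally with the drive hot) and count mismatches."""
            # Note: drop the OS page cache (or use O_DIRECT) before this phase,
            # otherwise reads may be served from RAM and never touch the flash.
            errors = 0
            with open(TEST_FILE, "rb") as f:
                for i, expected in enumerate(digests):
                    if hashlib.sha256(f.read(BLOCK_SIZE)).digest() != expected:
                        errors += 1
                        print(f"mismatch in block {i}")
            return errors

        if __name__ == "__main__":
            digests = write_phase()
            errors = verify_phase(digests)
            print(f"{errors} corrupted block(s) out of {BLOCK_COUNT}")
        ```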