For those of you preparing for the External Drive feature, you may wish to read this article when deciding where to throw your hard-earned dollars. Here is an excerpt and the link:
Experts: No cure in sight for unpredictable hard drive loss

Earlier this month, Google researchers released a fascinating paper called "Failure Trends in a Large Disk Drive Population" that examined hard drive failure rates in Google's infrastructure. Two conclusions stood out: self-monitoring data isn't useful for predicting individual drive failures, and temperature and activity levels don't correlate well with drive failure. This throws conventional wisdom about predicting drive failures into question, so we sought out independent experts to weigh in on the findings. But first, a brief recap of the Google study.
The Google study
The Google researchers examined data from more than 100,000 drives deployed in Google's servers, all of them consumer-grade serial and parallel ATA units with spindle speeds of 5,400 and 7,200 RPM. Drives were considered "failed" if they were replaced as part of a repair procedure. Self-Monitoring, Analysis and Reporting Technology (SMART) data was recorded from all drives, and spurious readings were filtered out of the resulting database.
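For readers curious what the SMART data mentioned above actually looks like, here is a minimal sketch of pulling it on a Linux machine, assuming the smartmontools package is installed. The device path /dev/sda and the column layout of the attribute table are assumptions about typical smartctl output, not details from the study.

import subprocess

def read_smart_attributes(device="/dev/sda"):
    """Parse the SMART attribute table printed by smartctl -A.

    Returns a dict mapping attribute name -> first token of its raw value,
    e.g. {"Reallocated_Sector_Ct": "0", "Power_On_Hours": "18234"}.
    Usually requires root privileges to talk to the drive.
    """
    out = subprocess.run(
        ["smartctl", "-A", device],
        capture_output=True, text=True, check=False,
    ).stdout

    attrs = {}
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows start with a numeric ID, e.g.:
        #   5 Reallocated_Sector_Ct 0x0033 100 100 036 Pre-fail Always - 0
        if len(fields) >= 10 and fields[0].isdigit():
            attrs[fields[1]] = fields[9]  # raw value (first token only)
    return attrs

if __name__ == "__main__":
    for name, raw in sorted(read_smart_attributes().items()):
        print(f"{name}: {raw}")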
When they looked at annualized failure rates, they saw the expected "infant mortality" effect, in which drives die more often very early in their life cycle. The usual thinking is that poorly made drives fail quickly, while well-made ones enjoy a few trouble-free years before reaching their end-of-life stage at around five years; the resulting failure-rate curve is sometimes called the "bathtub curve" for its shape. Google's researchers, however, found that the failure rate ticked up much sooner, starting at two years, and remained steady for the next several years.
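To make the "annualized failure rate" metric concrete, here is the back-of-the-envelope arithmetic behind such curves; the counts below are invented for illustration and are not figures from the paper.

def annualized_failure_rate(failures: int, drive_years: float) -> float:
    """AFR = drives that failed / total drive-years of operation."""
    return failures / drive_years

# Hypothetical example: a fleet of 1,000 drives, all in their third year of
# service, logs 80 replacements over 12 months -> 1,000 drive-years of exposure.
print(f"AFR: {annualized_failure_rate(80, 1000.0):.1%}")  # prints: AFR: 8.0%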