Just the other day, my occasional colleague Paul Gross passed this very recent gem of a paper along to me, for which I am grateful.
Mai Zheng, Joseph Tucek, Feng Qin, Mark Lillibridge, "Understanding the Robustness of SSDs under Power Fault", 11th USENIX Conference on File and Storage Technologies (FAST '13), San Jose, CA, USA, February 12-15, 2013.

It's worth a read. Here's just a brief section from the abstract.
Applying our testing framework, we test fifteen commodity SSDs from five different vendors using more than three thousand fault injection cycles in total. Our experimental results reveal that thirteen out of the fifteen tested SSD devices exhibit surprising failure behaviors under power faults, including bit corruption, shorn writes, unserializable writes, meta-data corruption, and total device failure.

These researchers from the Ohio State University and H-P Labs take the same approach as unpublished (and kinda clever) work done by my occasional colleague Julie Remington, a hardware engineer who hooked up a system we were troubleshooting (one with a surface-mount SSD) to a computer-controlled power supply and proceeded to cycle power on it in a controlled, scripted fashion. Each power cycle waited until the system under test was up and stable, and logged all of the output from the system's serial console. Her results: after a few iterations, the system had to use fsck to repair the EXT3 file system on the SSD (file system corruption, not unexpected under the circumstances); after a few more, the SSD began reporting a bogus device type and serial number to hdparm (internal meta-data corruption); after just a few more, the device quit responding to I/O commands entirely. It could only be recovered by removing the tiny flash chips from the top of the SSD chip itself and replacing them with uncorrupted chips from an identical SSD. Which one of Julie's colleagues did. Which is kinda, you know, hard core.
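Just to make the flavor of that kind of test concrete, here's a hypothetical Python sketch of a scripted power-fault loop. The pyserial calls are real, but the port names, the SCPI-style supply commands, the "login:" boot marker, and all the timing are assumptions on my part, not Julie's actual rig.

    # Hypothetical power-fault loop: the supply commands, ports, marker,
    # and timings below are assumptions, not the actual test setup.
    import time
    import serial  # pyserial

    CYCLES = 100
    console = serial.Serial("/dev/ttyUSB0", 115200, timeout=1)  # target's serial console (assumed port)
    supply = serial.Serial("/dev/ttyUSB1", 9600, timeout=1)     # programmable power supply (assumed port)

    def power(on):
        # SCPI-style output control; the exact command set depends on the supply.
        supply.write(b"OUTP ON\n" if on else b"OUTP OFF\n")

    with open("powerfault.log", "ab") as log:
        for cycle in range(CYCLES):
            power(True)
            # Wait until the system under test looks up and stable, logging
            # the serial console the whole time; "login:" is just a stand-in
            # for whatever marker says the boot finished.
            deadline = time.monotonic() + 120
            while time.monotonic() < deadline:
                line = console.readline()
                if line:
                    log.write(line)
                if b"login:" in line:
                    break
            time.sleep(10)  # let it settle, then yank the power
            power(False)
            time.sleep(5)
            log.write(("--- cycle %d done ---\n" % cycle).encode())

The important bits are the ones she described: wait for the system under test to come up, capture every byte off the serial console, then cut the power without asking permission.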
But it doesn't have to be an SSD. I've seen similar failure modes in embedded systems using JFFS2 (Journaling Flash File System version 2) under Linux. JFFS2 does all the same kinds of things behind the scenes that an SSD does, except its controller is implemented in software instead of hardware, on top of commodity NAND flash. Just as the controller inside an SSD is rewriting its flash behind the scenes, the JFFS2 garbage collector kernel thread (which will show up as something like [jffs2_gc_mtd2], interpreted as "JFFS2 Garbage Collector for Memory Technology Device partition 2", when you do a ps command) is rolling merrily along, erasing and rewriting flash blocks with little or no regard for the fact that your finger is on the power switch.
But with JFFS2, at least you have some prayer of coming to an orderly stop if you do something like a shutdown -h now before turning off the power. Not that systems in the field that get powered off hard will have any opportunity to do so, of course. But at least you might save a system or two in the development lab.
The problem with SSDs is that they are not only asynchronous -- doing stuff behind the scenes -- but also autonomous -- doing stuff that you don't even know about and have no control over. The SSD controller continues to erase and rewrite flash blocks even if you unmount the file system. Even if you shut down the operating system. Even if you hold the damn processor in reset.
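To put that in concrete terms, here's a minimal Python sketch of the deepest flush the host side can ask for (the file path is made up). Every call here is real, and none of it reaches the controller's own background housekeeping.

    import os

    # Write something you care about to a file on the SSD (the path is
    # made up), then do every host-side flush available.
    fd = os.open("/mnt/ssd/critical.dat", os.O_WRONLY | os.O_CREAT, 0o644)
    os.write(fd, b"data you would rather not lose")
    os.fsync(fd)   # flush this file's data and ask the drive to commit it
    os.close(fd)
    os.sync()      # flush everything else the kernel is still holding

    # The operating system has now done all it can. The SSD controller is
    # still free to erase, rewrite, garbage collect, and wear level on its
    # own schedule; nothing above (or anywhere else) pauses it.

fsync() and sync() get your data out of the kernel and ask the drive to commit it, but they say nothing about what the controller does to its flash afterwards.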
It's like the honey badger. Honey badger SSD don't care.