Tuesday, April 29, 2014

Why do SSDs Come in Unusual Sizes?

















SSDs seem to come in quite a variety of ‘new’ sizes these days, but why is that? Today’s SuperUser Q&A post has the answers to one curious reader’s question.








Today’s Question & Answer session comes to us courtesy of SuperUser—a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.








Photo courtesy of Jung-nam Nam (Flickr).








The Question








SuperUser reader Dudemanword wants to know why SSDs seem to come in weird GB sizes:












Why do SSDs come in sizes like 240 GB or 120 GB rather than the normal 256 GB or 512 GB? Those numbers make much more sense than the 240 GB or 120 GB size.












Why do companies manufacture SSDs in what seems to be “non-standard” sizes?








The Answer








SuperUser contributors Patrick R. and Adam Davis have the answer for us. First up, Patrick R.:












While a lot of modern SSDs, like the 840 EVO series, do provide the sizes you are used to, such as the 256 GB you mention, manufacturers used to reserve a bit of storage for mechanisms that fight performance drops and defects.








If you bought a 120 GB drive, for example, you could be pretty sure that it was really 128 GB internally. The reserved space simply gives the controller/firmware room for things like TRIM, garbage collection, and wear leveling. When SSDs first hit the market, it was also common practice to leave a bit of space unpartitioned, on top of the space that had already been made invisible by the controller, but the algorithms have gotten significantly better since then, so you should not need to do that anymore.
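As a rough illustration of the numbers in that example, here is a minimal Python sketch of how much raw flash such a drive sets aside. The 120/128 GB split is simply the figure quoted above; real drives vary by model and manufacturer.

```python
# Rough sketch: how much raw flash a "120 GB" SSD built from 128 GB of
# NAND holds back for the controller (TRIM, garbage collection, wear
# leveling, spare blocks). Figures follow the example in the answer;
# actual drives differ.

raw_gb = 128          # physical NAND on the drive (power of two)
advertised_gb = 120   # capacity exposed to the operating system

reserved_gb = raw_gb - advertised_gb
over_provisioning = reserved_gb / advertised_gb

print(f"Reserved for the controller: {reserved_gb} GB")
print(f"Over-provisioning ratio:     {over_provisioning:.1%}")
# Reserved for the controller: 8 GB
# Over-provisioning ratio:     6.7%
```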








EDIT: There have been some comments suggesting that this phenomenon is instead explained by the discrepancy between the advertised capacity, stated in gigabytes (i.e. 128 x 10^9 bytes), and the gibibyte value the operating system shows, which is based on powers of two (2^30 bytes) and works out to about 119.2 gibibytes in this example.








As far as I know, this comes on top of the things already explained above. While I certainly cannot say which exact algorithms need most of that extra space, the calculation stays the same. The manufacturer assembles an SSD that indeed uses a power-of-two amount of flash cells (or a combination of such), but the controller does not make all of that space visible to the operating system. The space that is left visible is advertised in gigabytes, netting you about 111.8 gibibytes (typically displayed as 111 GB) in this example.
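To make the unit discrepancy concrete, here is a small Python sketch of the gigabyte-versus-gibibyte arithmetic described above, using the 128 GB and 120 GB figures from the example:

```python
# Gigabytes (decimal, 10^9 bytes) vs. gibibytes (binary, 2^30 bytes).
# These are the two capacities mentioned in the answer above.

GIB = 2**30  # bytes in one gibibyte

for advertised_gb in (128, 120):
    size_bytes = advertised_gb * 10**9
    print(f"{advertised_gb} GB advertised = "
          f"{size_bytes / GIB:.1f} GiB shown by the OS")

# 128 GB advertised = 119.2 GiB shown by the OS
# 120 GB advertised = 111.8 GiB shown by the OS
```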












Followed by the answer from Adam Davis:












Both mechanical and solid state drives have a raw capacity greater than their rated capacity. The “extra” capacity is held aside to replace bad sectors, so the drives do not have to be perfect off the assembly line, and so that bad sectors can be re-mapped to spare sectors later during use. During initial testing at the factory, any bad sectors are mapped to the spare sectors. As the drive is used, it monitors the sectors (using error correction routines to detect bit-level errors), and when a sector starts going bad, it copies the sector to a spare and then re-maps it. Whenever that sector is requested, the drive goes to the new sector rather than the original one.
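The re-mapping described here happens inside the drive’s firmware, but a toy Python sketch can illustrate the idea. The class and method names below are purely hypothetical and are not how any real controller is implemented:

```python
# Toy illustration of sector re-mapping (not a real firmware design):
# the drive keeps a pool of spare sectors and a remap table, and every
# read/write of a logical sector first checks that table.

class RemappingDrive:
    def __init__(self, rated_sectors, spare_sectors):
        self.storage = {}                      # sector number -> data
        self.remap = {}                        # bad sector -> spare sector
        self.spares = list(range(rated_sectors,
                                 rated_sectors + spare_sectors))

    def _resolve(self, sector):
        # Follow the remap table if this sector was retired earlier.
        return self.remap.get(sector, sector)

    def write(self, sector, data):
        self.storage[self._resolve(sector)] = data

    def read(self, sector):
        return self.storage.get(self._resolve(sector))

    def retire(self, sector):
        # Called when error correction says the sector is going bad:
        # copy its data to a spare and point future accesses there.
        spare = self.spares.pop(0)
        self.storage[spare] = self.read(sector)
        self.remap[sector] = spare


drive = RemappingDrive(rated_sectors=1000, spare_sectors=16)
drive.write(42, b"important data")
drive.retire(42)                   # sector 42 starts failing
print(drive.read(42))              # still b"important data", now from a spare
```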








With mechanical drives, manufacturers can add arbitrary amounts of spare storage since they control the servo, head, and platter encoding, so a drive can have a rated capacity of 1 terabyte with an additional 1 gigabyte of spare space for sector re-mapping.








However, SSDs use flash memory, which is always manufactured in powers of two. The silicon required to decode an address is the same for an 8-bit address accessing 200 bytes as for an 8-bit address accessing 256 bytes. Since that part of the silicon does not change in size, the most efficient use of the silicon real estate is to use a power of two for the actual flash capacity.
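The address-decoding point boils down to simple arithmetic: the number of address bits needed only grows at powers of two, so any capacity that is not a power of two leaves part of the address space unused. A quick, purely illustrative sketch of that calculation:

```python
import math

# Address bits needed to reach every byte of a given capacity.
def address_bits(capacity_bytes):
    return math.ceil(math.log2(capacity_bytes))

for capacity in (200, 256):
    bits = address_bits(capacity)
    print(f"{capacity} bytes needs {bits} address bits "
          f"(which could address up to {2**bits} bytes)")

# 200 bytes needs 8 address bits (which could address up to 256 bytes)
# 256 bytes needs 8 address bits (which could address up to 256 bytes)
```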








So the drive manufacturers are stuck with a total raw capacity in powers of two, but they still need to set aside a portion of the raw capacity for sector re-mapping. This leads to 256 GB of raw capacity providing only 240 GB of usable capacity, for instance.
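Putting both answers together, here is a brief sketch of the 256 GB-raw / 240 GB-usable example, including what the operating system would typically display. The split is the one given above; actual reserved percentages differ between drives.

```python
# 256 GB of raw NAND, 240 GB advertised: how much is held back, and
# what the operating system ends up displaying in gibibytes.
raw_gb = 256
usable_gb = 240

reserved_gb = raw_gb - usable_gb
print(f"Held back for re-mapping/over-provisioning: {reserved_gb} GB "
      f"({reserved_gb / raw_gb:.1%} of the raw flash)")
print(f"Shown by the OS: {usable_gb * 10**9 / 2**30:.1f} GiB")

# Held back for re-mapping/over-provisioning: 16 GB (6.2% of the raw flash)
# Shown by the OS: 223.5 GiB
```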




















Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.




















Akemi Iwaya (Asian Angel) is our very own Firefox Fangirl who enjoys working with multiple browsers and loves 'old school' role-playing games. Visit her on Twitter.


















