Saturday, December 24, 2011

Big Drives in 2020

Previously I've written about Mark Kryder's 7TB/platter (2.5 inch) prediction for 2020.
This is more speculation around that topic.

1. What if we don't hit 7TB/platter, maybe only 4TB?

There have been any number of "unanticipated problems" encountered in scaling Silicon and Computing technologies; will more be encountered with HDDs before 2020?

We already have 1TB platters in 3.5 inch announced in Dec-2011, with at least one new technique announced to increase recording density (Sodium Chloride doping), so it's not unreasonable to expect another 2 doublings in capacity, just in taking what's in the Labs and figuring out how to put it into production.

Which means we can expect 2-4TB/platter (2.5 inch) to be delivered in 2020.
At $40 per single-platter disk?
That depends on a) the two major vendors and the oligopoly pricing and b) the yields and costs of the new fabrication plants.

Seems to me that Price/GB will drop, but maybe not to the levels expected.
Especially if the rapid decline in SSD/Flash Memory Price/GB plateaus and removes price competition.


2. Do we need to offer The Full Enchilada to everyone?

Do laptop and ultrabook users really need 4TB of HDD when they are constantly on-line?
1-2TB will store a huge amount of video, many virtual machine images and a lifetime's worth of audio.
There might be a market for smaller capacity disks, either through smaller platters, smaller form-factors or underusing a full-width platter.

Each option has merits.
The final determinant will be perceived consumer Value Proposition, the Price/Performance in the end-user equipment.


3. What will the 1.8 inch market be doing?

If these very small form-factor drives in mobile equipment get to 0.5-2TB, that will seem effectively infinite.

There is no point in adopting old/different platter coatings and head-manufacturing techniques for these smaller form-factors unless other engineering or usability factors come into play: such as sensitivity to electronic noise, contamination, heat, vibration, ...


4. The fifth-power of diameter and cube-of-RPM: impact of size and speed?

2.5 inch drives are set to completely displace 3.5 inch in new Enterprise Storage systems within a year. This is primarily driven by Watts/GB and GB/cubic-space.

The aerodynamic drag of disk platters, hence the power consumed by a drive, varies with the fifth-power of platter diameter and the cube of the rotational velocity (RPM).

If you halve the platter size (5.25 inch to 2.5 inch), drive power reduces 32-fold.
If you then double the RPM of the drive (3600 to 7200), power increases 8-fold,
a nett reduction in power demand of 4 times.

Changing platter diameter by the square root of 2 (halving the recordable area) reduces drive power ~5.7-fold. This is roughly the proportion for 5.25::3.5 inch, 3.5::2.5 inch and 2.5::1.8 inch.
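As a sanity check, here's that scaling law replayed in a few lines of Python. The ratios are the ones quoted above; only relative power matters, so the inputs are dimensionless:

    # Relative aerodynamic drag power of a drive: P ∝ diameter^5 * RPM^3.
    def power_ratio(diameter_ratio, rpm_ratio):
        return diameter_ratio ** 5 * rpm_ratio ** 3

    print(1 / power_ratio(0.5, 1.0))        # halve the diameter: 32x less power
    print(power_ratio(1.0, 2.0))            # double the RPM: 8x more power
    print(1 / power_ratio(0.5, 2.0))        # both together: net 4x reduction
    print(1 / power_ratio(2 ** -0.5, 1.0))  # diameter / sqrt(2): ~5.7x reduction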

Reducing a 2.5 inch platter to 1.92 inches allows a drive to be spun up from 5400 RPM to 7200 RPM within the original drive power budget, with 60% of the original surface area.

Whilst not in the class of Enterprise Storage "performance optimised" drives (10K and 15K RPM), it would be a noticeable improvement for Desktop PCs, given they will also be using large SSDs/Flash Memory in 2020 and the HDD will be solely for "Seek and Stream" tasks.

There is very little reason to "de-stroke" drives and limit them to less than full-platter access if they are not "performance-optimised". It's a waste of resources for exactly the same input cost.


5. Will 3.5 inch "capacity-optimised" disks survive?
Will everything be 2.5 or 1.8 inch form-factor?

There are 3 markets that are interested in "capacity-optimised" disks:
  • Storage Appliances [SOHO, SME, Enterprise and Cloud]
  • Desktop PC
  • Consumer Electronics: PVR's etc.
When 1TB 2.5 inch drives are affordable, they will make new, smaller and lighter Desktop PC designs possible. Dell and HP might even offer modules that attach via the 100mm x 100mm "VESA" mount standard to the back of LCD screens. A smaller variant of the Apple Mac Mini is possible, especially if a single power-supply is available.

Consumer PVR's are interested in Price/GB, not Watts/GB. They will be driven by HDD price.
The manufacturers don't pay for power consumed, customers don't evaluate/compare TCOs and there is no legislative requirement for low-power devices. Government regulation could be the wild-card driving this market.

There's a saying, something like this, that I thought was made by Dennis Ritchie:
 "Memory is Cheap, until you need to buy enough for 10,000 PCs".
[A comment on MS-Windows' lack of parsimony with real memory.]

Corporations will look to trimming costs of their PC (laptop and Desktop) fleets, and the PC vendors will respond to this demand.

Storage Appliances:
Already Enterprise and Cloud providers are moving to 2.5 inch form-factor to reduce power demand (Watts/GB) and floor-space footprint (GB/cubic-space).

Consumer and entry-level servers and storage appliances (NAS and iSCSI) are currently mostly 3.5 inch because that has always been the "capacity-optimised" sweet spot.

Besides power-use, the slam-dunk reasons for SOHO and SME users to move to 2.5 inch are:
  • lighter
  • smaller
    • smaller footprint and higher drive count per Rack Unit.
    • more aggregate bandwidth from higher actuator count
    • mirrored drives or better, are possible in a small portable and low-power case.
  • more robust, better able to cope with knocks and movement.
2.5 inch drives may be much better suited to "canister" or (sealed) "drive-pack" designs, such as used by Copan in their MAID systems. This is due to their lighter weight and lower power dissipation.
The 14-drive Copan 3.5 inch "Canister" of 4RU could be replaced by a 20-24 drive 2.5 inch Canister of 3RU, putting 3-4 times the number of drives in the same space.

6. What if there are some unforeseen "drop-deads", like low data-retention rates or hyper-sensitivity to heat, that limit useful capacities to the current 300-600GB/platter (2.5 inch)?

We can't know the future perfectly, so can't say just what surprises lie ahead.
If there is some technical reason why current drive densities are an engineering maximum, we cannot rely on technology advances to automatically reduce the Price/GB each year.

Even if the technology is frozen, useful price reductions, albeit minor in comparison to "50% per year", will be achievable in the production process. It might take a decade for prices to drop 50% per GB.

I'm not sure exactly how designs might be made to scale if drive sizes/densities are pegged at current levels.
What is apparent, and universal: "Free Goods" with apparently Infinite Supply will engender Infinite Demand.

If we do hit a "capacity wall", then the best Social Engineering response is to limit demand, which requires a "Cost" on capacity. This could be charging, as Google does with its Gmail service, or by other means, such as publicly ranking "capacity hogs".

Thursday, December 22, 2011

IDC on Hard Disk Drive market: Transformational Times

One of the problems, as an "industry outsider", of researching the field is lack of access to hard data/research. It's there, it's high-quality and timely. Just expensive and behind pay-walls.

A little information leaks via Press Releases and press articles promoting the research companies.

When one of these professional analyst firms makes a public statement alerting us to a radical restructuring of the industry, that's big news. [Though you'd expect "insiders" to have been aware of this for quite some time.]

What's not spelled out publicly is: how will this impact Enterprise Storage vendors/manufacturers?
There seems an implication that the two major HDD vendors will start to compete 'up' the value-chain with RAID and Enterprise Storage vendors, and across storage segments with Flash memory/SSD vendors.

IDC's Worldwide Hard Disk Drive 2011-2015 Forecast: Transformational Times was published in May 2011. The 62-page report is priced from US$4,500.

Headline: Transformation to just 3 major vendors. (really 2 major + 1 minor @ 10%)
 "The hard disk drive industry has navigated many technology and product transitions over the past 50 years, but not a transformation. [emphasis added]

 The HDD industry is poised to consolidate from five to three HDD vendors by 2012, and
 HDD unit shipment growth over the next five years will slow.

 HDD revenue will grow faster than unit shipments after 2012, in part because HDD vendors will offer higher-performance hybrid HDD solutions that will command a price premium.

 But for the remaining three HDD vendors to achieve faster revenue growth,
 it will be necessary by the middle of the decade for HDD vendors to transform into [bullets added]
  •  storage device and
  •  storage solution suppliers,
  •  with a much broader range of products for a wider variety of markets
  •  but at the same time a larger set of competitors."

Platters per Disk.

Headline:
  • 2.5 inch:
    • 9.5mm = 1 or 2 platters
    • 12.5mm = 2 or 3 platters
    • 15mm = ? platters. Guess at least 3. 4 unlikely, compare to 3.5 inch density
  • 3.5 inch
    • 25.4mm = commonly 4. Max. 5 platters.
Why is this useful, interesting or important?
To compare capacity across form-factors and for future configuration/design possibilities.

Disk form-factors are related by an approximate halving of platter area between sizes:
8::5.25 inch, 5.25::3.5 inch, 3.5::2.5 inch, 2.5::1.8 inch, 1.8::1.3 inch, 1.3::1 inch...
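A quick Python check of that "approximate halving", using the nominal form-factor sizes (true platter diameters are a little smaller, but the ratios are similar):

    # Platter area scales with diameter squared, so each form-factor
    # step of roughly sqrt(2) in size roughly halves the area.
    sizes = [8, 5.25, 3.5, 2.5, 1.8, 1.3, 1.0]   # nominal inches
    for big, small in zip(sizes, sizes[1:]):
        print(f"{big}::{small} inch -> area ratio {(big / small) ** 2:.2f}")
    # prints 2.32, 2.25, 1.96, 1.93, 1.92 and 1.69 respectively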
What we (as outsiders) know, but only approximately, is the recording area per platter for the platter sizes.  We know there are at least 3 regions of disk platter, but not their ratios/sizes, and these will vary per form-factor/platter-size:
  • motor/hub. The area of the inside 'torus' is small, not much is lost.
  • recorded area
  • outer 'ring' for landing and idling or "unloading" heads. Coated differently (plastic?) to not damage heads if they "skid" or come into contact with a surface (vs 'flying' on the aerodynamic air-cushion).
Research:

Chris Mellor, 12th September 2011 12:02 GMT, The Register, "Five Platters, 4TB".
Seagate has a 4TB GoFlex Desk external drive but this is a 5-platter
disk with 800GB platters.
IDC, 2009, report sponsored by Hewlett-Packard:
By 2010, the HDD industry is expected to increase the maximum number of platters per 2.5inch performance-optimized HDD from two to three,
enabling them to accelerate delivering a doubling of capacity per drive, and subsequently achieving 50% capacity increases per drive over a shorter time frame.
7th September 2011 06:00 GMT, The Register.
Oddly Hitachi GST is only shipping single-platter versions of these new drives, although it is saying they are the first ones in a new family, with their 569Gbit/in2 areal density. The announced but not yet shipping terabyte platter Barracuda had a 635Gbit/in2 areal density.
Sebastian Anthony, December 12, 2011, ExtremeTech:
Hitachi, seemingly in defiance of the weather gods, has launched the world’s largest 3.5-inch hard drive: the monstrous 4TB Deskstar 5K. With a rotational speed of 5,900RPM, a 6Gbps SATA 3 interface, and the same 32MB of cache as its 2 and 3TB siblings, the 4TB model is basically the same beast — just with four platters instead of two or three. The list price is around $345.
Silverton Consulting, 13-Sep-2011:
shipping over 1TB per disk platter, using 3.5″ platters with 569Gb/sq in technology

Monday, December 19, 2011

"Missed by _that_ much": Disk Form Factor vs Rack Units

Apologies to 1965 TV series "Get Smart" and the catch-phrase "Missed by that much" (with a visual indication of a near-miss).

This is a lament, not a call to action or grumble. Standards are necessary and good.
We have two standards that we just have to live with now: too many devices depend on them for a change to be feasible. Unlike the "imperial" to metric conversion, there would be few discernible benefits.

There's a fundamental mismatch between the Rack Unit (1.75 inches), the vertical space allowed for equipment in 19 inch Racks (standard EIA-310), and the disk form factors of 5.25, 3.5 and 2.5 inches defined by the Small Form Factor Committee.

There is no way to mount a standard disk drive (3.5 or 2.5 inch) exactly in a Rack. There are various amounts of wasted space.
Originally, "full-height" 5.25 inch drives could be mounted horizontally exactly in 2 Rack Units (3.5 inches), three abreast.

The "headline" size of the form-factor is the notional size of the platters or removable media.
The envelope allows for the enclosure.

So whilst "3.5 inch" looks like a perfect multiple of the 1.75 inch Rack Unit, a "3.5 inch" drive is  around 0.5 inch larger.
Manufacturers of vertical-mount "hot-swap" drives allow around 1mm on the thinnest dimension, 9 mm on the "height" and 1.5 inches (42-43 mm) on the longest dimension (depth).

A guess at the dimensions of hot-swap housings:
1/32 in (0.8mm) or 1mm sheet metal could be used between drives (upright)
and 1.5-2mm sheet metal would be needed to support the load (with an upturned edge?).

In total, around 0.5 inch (12.5mm) might need to be allowed vertically for supporting structures.
An ideal Rack Unit size for the "3.5 inch" drive form-factor would be 4.5 inches.

Or, "3.5 inch" drives could be 3.00 - 3.25 inches wide to fit exactly in 2 Rack Units.

Different manufacturers approach this problem differently:
  • Copan/SGI and Backblaze mount 3.5 inch drives vertically in 4 Rack Units (7 inches).
    Both of these solutions aim for high-density packing: 28 and 11.25 drives per Rack Unit respectively.
    • Copan, via US Patent # 7145770, uses 4U hot-swap "canisters" that store 14 drives in 2 rows, with 8 canisters per "shelf" (112 drives/shelf). In a 42 U rack, they can house 8 shelves, for 896 drives per Rack. Their RAID system is 3+1, with max 5 spares per shelf, yielding 79 data drives per shelf, and 632 drives per Rack.
      These systems are designed specifically to hold archival data, with up to 25% or 50% of drives active at any one time, as "MAID": Massive Array of Idle Disks.
    • Backblaze are not a storage vendor, but have made their design public with a hardware vendor able to supply cases and pre-built (but not populated) systems.
      Their solution, fixed-disks not hot-swap, is 3 rows of 15 disks mounted vertically, sitting on their connectors. The Backblaze systems include a CPU and network card and are targeted at providing affordable and reliable on-line Cloud Backup services [and are specifically "low performance"]. Individual "storage pods" do not supply "High Availability", there is little per-unit redundancy. Like Google, Backblaze rely on whole-system replication and software to achieve redundancy and resilience.
  • Most server and storage appliance vendors use "shelves" of 3 Rack Units (5.25 inches), but fit 13-16 drives across the rack (~17.75 inches or 450mm) depending on their hot-swap carriers.
  • "2.5 inch" drives fitted vertically (2.75 inch) need 2 Rack Units (3.5 inches). Most vendors fit 24 drives across a shelf. "Enterprise class" 2.5 inch drives are typically 12.5 or 15 mm thick.
Another possibility, not widely pursued, is to build disk housings or shelves that don't exactly fit the EIA-310 standard Rack Units. Unfortunately, the available internal width of 450mm cannot be varied.



The form factors:
"5.25 inch": (5.75 in x 8 in x 1.63 in =  146.1 mm x 203 mm x 41.4 mm)
"3.5 inch"  : (4 in x 5.75 in x 1 in =  101.6 mm x 146.05 mm x 25.4 mm)
"2.5" inch  : (2.75 in x 3.945 in x 0.25-0.75 in = 69.85 mm x 100.2 mm x [7, 9.5, 12.5, 15, 19] mm)
Old disk height form factors, originating in 5.25 inch disks circa mid-1980's.
low-profile = 1 inch.
Half-height = 1.63 inch.
Full-height = 3.25 inch. [Fitting well into 2 Rack Units]

Wednesday, December 14, 2011

"Disk is the new Tape" - Not Quite Right. Disks are CD's

Jim Gray, in recognising that Flash Memory was redefining the world of Storage, famously developed between 2002 and 2006 the view that:
Tape is Dead
Disk is Tape
Flash is Disk
RAM Locality is King
My view is that: Disk is the new CD.

Jim Gray was obviously intending that Disk had replaced Tape as the new backup storage medium, with Flash Memory being used for "high performance" tasks. In this he was completely correct. Seeing this clearly and enunciating it a decade ago was remarkably insightful.

Disks do both the Sequential Access of Tapes and Random I/O.
In the New World Order of Storage, they can be considered functionally identical to Read-Write Optical disks or WORM (Write Once, Read Many) media.

As the ratios between access time (seek or latency) and sequential transfer rate, or throughput, continue to change in favour of capacity and throughput, managing disks becomes more about running them in "Seek and Stream" mode than doing Random I/O.

With current 1TB disks, the sequential scan time (capacity ÷ sustained transfer rate) [1,000GB / 1Gbps] is 2-3 hours. However, reading a disk with 4KB random I/Os at ~250/sec (4msec avg. seek), the type of workload a filesystem causes, gives an effective throughput of around 1MB/sec, roughly 125 times slower than a sequential read.
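Here's that arithmetic in Python; the 1Gbps (125MB/sec) sustained rate and 4msec average access are the assumed figures from above:

    # Sequential scan vs random I/O on a notional 1TB drive.
    capacity = 1e12              # bytes (1TB)
    stream_rate = 125e6          # bytes/sec sustained (1Gbps)
    io_size = 4096               # bytes per random read (4KB)
    iops = 1 / 4e-3              # ~250 I/Os per sec at 4msec each

    scan_hours = capacity / stream_rate / 3600   # ~2.2 hours
    random_rate = io_size * iops                 # ~1MB/sec effective
    slowdown = stream_rate / random_rate         # ~122x slower
    print(f"{scan_hours:.1f} h scan, random I/O {slowdown:.0f}x slower")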

It behoves system designers to treat disks as fast RW Optical Disks, not as primary Random I/O media, and as Jim Gray observed, "Flash is the New Disk".

The 35TB drive (of 2020) and using it.

What's the maximum capacity possible in a disk drive?

Kryder, 2009, projects that 7TB/platter (2.5 inch) will be commercially available in 2020.
[10Tbit/in² demo by 2015 and $3/TB for drives]

Given that prices of drive components are driven by production volumes, in the next decade we're likely to see the end of 3.5 inch platters in commercial disks, with 2.5 inch platters taking over.
The fifth-power relationship between platter size and drag/power consumed also suggests "Less is More". A 3.5 inch platter needs 5+ times more power to twirl it around than a 2.5 inch platter, which is why 10K and 15K RPM drives already run small platters: vendors use the same media/platters for both 3.5 inch and 2.5 inch drives.

Sankar, Gurumurthi, and Stan in "Intra-Disk Parallelism: An Idea Whose Time Has Come" ISCA, 2008, discuss both the fifth-power relationship and that multiple actuators (2 or 4) make a significant difference in seek times.

How many platters are fitted in the 25.4 mm (1 inch) thickness of a 3.5 inch drive's form-factor?

This report on the Hitachi 4TB drive (Dec, 2011) says they use 4 * 1TB platters in a 3.5 inch drive, with 5 possible.

It seems we're on-track to at least the Kryder 2020 projection, with 6TB per 3.5 inch platter already demonstrated using 10nm grains enhanced with Sodium Chloride.

How might those maximum capacity drives be lashed together?

If you want big chunks of data, then even in a world of 2.5 inch componentry, it still makes sense to use the thickest form-factor around to squeeze in more platters. All the other power-saving tricks of variable-RPM and idling drives are still available.
The 101.6mm [4 inch] width of the 3.5 inch form-factor allows 4 to sit comfortably side-by-side in the usual 17.75 inch wide "19 inch rack", using just more than half the 1.75 inch height available.

It makes more sense to make a half-rack-width storage blade, with 4 * 3.5 inch disks (2 across, 2 deep), a small/low-power CPU, a reasonable amount of RAM and "SCM" (Flash Memory or similar) as working-memory and cache, and dual high-speed Ethernet, InfiniBand or similar ports (10Gbps) as redundant uplinks.
SATA controllers with 4 drives per motherboard are already common.
Such "storage bricks", to borrow Jim Gray's term, would store a protected 3 * 35TB, or 100TB per unit, or 200TB per Rack Unit (RU). A standard 42RU rack, allowing for a controller (3RU), switch (2RU), patch-panel (1RU) and common power-supplies (4RU), would have a capacity of 6.5PB.

Kryder projected a unit cost of $40 per drive, with the article suggesting 2 platters/drive.
Scaled up, ~$125 per 35TB drive, or ~$1,000 for 100TB protected ($10/TB) [$65,000-100,000 per rack]

The "scan time" or time-to-populate a disk is the rate-limiting factor for many tasks, especially RAID parity rebuilds.
For a single-actuator drive using 7TB platters and streaming at 1GB/sec, "scan time" is a daunting 2 hours per platter: at best 10 hours just to read a 35TB drive.

Putting 4 actuators in the drive cuts scan time to 2-2.5 hours, with some small optimisations.

While not exceptional, it compares favourably with the 3-5 hours minimum currently reported for 1TB drives.
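The scan-time claim, replayed in Python (1GB/sec streaming per actuator and ideal parallelism are my assumptions):

    # Time to read a hypothetical 35TB drive end-to-end.
    capacity_gb = 35_000
    gb_per_sec = 1.0                  # streaming rate per actuator
    for actuators in (1, 2, 4):
        hours = capacity_gb / (gb_per_sec * actuators) / 3600
        print(f"{actuators} actuator(s): {hours:.1f} hours")
    # 9.7, 4.9 and 2.4 hours: the "at best 10" and "2-2.5" figures above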

But a single-parity drive won't work for such large RAID volumes!

Leventhal, 2009, in "Triple Parity and Beyond", suggested that the UER (Unrecoverable Error Rate) of large drives would force parity-group RAID implementations to use a minimum of 3 parity drives to achieve a 99.2% probability of a successful (Nil Data Loss) RAID rebuild following a single-drive failure. Obviously, triple parity is not possible with only 4 drives.

The extra parity drives are NOT to cover additional drive failures (this scenario is not calculated), but to cover read errors, with the assumption that a single error invalidates all data on a drive.

Leventhal uses in his equations:
  • 512 byte sectors,
  • a 1 in 10^16 probability of UER,
  • hence one unreadable sector per ~20 billion sectors (10TB) read, or
  • ~10 sectors per 200 billion sectors (100TB) read.
Already, drives are using 4KB sectors (with mapping to the 'standard' 0.5KB sectors) to achieve better UERs. The calculation should be done with the native disk sector size.

If platter storage densities are increased 32-fold, it makes sense to similarly scale up the native sector size to decrease the UER. There is a strong case for 64-128KB sectors on 7TB platters.

Recasting Leventhal's equations with:
  • 100TB to be read,
  • 64KB native sectors,
  • i.e. 1.5625 * 10^9 native sectors to be read.
What UER would enable a better than 99.2% probability of reading 1.5 billion native sectors?
First approximation is 1 in 10^18 [confirm].
Zeta claims a UER better than 1 in 10^58. It is possible to do much better.
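To check that first approximation, here's the calculation in Python (assuming the UER is per bit read, and decimal-KB sectors as in the sector count above):

    # Probability of reading 100TB (1.5625e9 * 64KB sectors) cleanly,
    # given an assumed UER of 1 error in 10^18 bits read.
    sectors = 1.5625e9
    bits_per_sector = 64 * 1000 * 8      # 64KB native sector (decimal KB)
    p_sector_error = bits_per_sector * 1e-18
    p_clean_read = (1 - p_sector_error) ** sectors
    print(f"P(no UER over 100TB) = {p_clean_read:.4f}")   # ~0.9992

That comfortably beats the 99.2% target, so 1 in 10^18 looks about right.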

Inserting Gibson's "horizontal" error detection/correction (extra redundancy on the one disk) costs around the same overhead, or less. [do exact calculation]


Rotating parity or single-disk parity RAID?

The reasons to rotate parity around the disks are simple: avoid "hot-spots", where the full parallel IO bandwidth possible over all disks is reduced to just that of the parity disk. NetApp neatly solves this problem with its WAFL (Write Anywhere File Layout).

In order to force disks into mainly sequential access, "seek then stream", writes shouldn't simply be cached and passed through to HDD, but held in SCM/Flash until writes have quiesced.

The single parity-disk problem only occurs on writes. Reading, in normal or degraded mode, occurs at equal speed.

If writes across all disks are stored then written in large blocks, there is no IO performance difference between single-parity disk and rotating parity.

Tuesday, December 13, 2011

Revolutions End: Computing in 2020

We haven't reached the end of the Silicon Revolution yet, but "we can see it from here".

Why should anyone care? Discussed at the end.

There are two expert commentaries that point the way:
  • David Patterson's 2004 HPEC Keynote, "Latency vs Bandwidth", and
  • Mark Kryder's 2009 paper in IEEE Magnetics, "After Hard Drives—What Comes Next?"
    [no link]
Kryder projected the currently expected limits of magnetic recording technology in 2020 (2.5 inch: 7TB/platter) and how another dozen technologies will compare, but there's no guarantee. Some unanticipated problem might, as happened with CPUs, derail Kryder's Law (disk capacity doubles every year) before then.
We will get an early "heads-up": by 2015 Kryder expects 7TB/platter to be demonstrated.

This "failure to fulfil the roadmap" has happened before: In 2005 Herb Sutter pointed out that 2003 marked the end of Moore's Law for single-core CPU's in "The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software". Whilst Silicon fabrication kept improving, CPU's hit a "Heat Wall" limiting the clock-frequency, spawning a new generation of "multi-core" CPUs.

IBM with its 5.2GHz Z-series processors and gamers "over-clocking" standard x86 CPUs showed part of the problem was a "Cooling Wall". This is still to play out fully with servers and blades.
Back to water-cooling, anyone?
We can't "do a Cray" anymore and dunk the whole machine in a vat of Freon (a CFC refrigerant, now banned).

Patterson examines the evolution of four computing technologies over 25 years from ~1980 and the increasing disparity between "latency" (like disk access time) and "bandwidth" (throughput):
  • Disks
  • Memory (RAM)
  • LANs (local Networking)
  • CPUs
He neglects "backplanes" (PCI etc.), graphics sub-systems/video interfaces and non-LAN peripheral interconnects.

He argues there are 3 ways to cope with "Latency lagging Bandwidth":
  • Caching (substitute different types of capacity)
  • Replication (leverage capacity)
  • Prediction (leverage bandwidth)
Whilst Patterson doesn't attempt to forecast the limits of technologies as Kryder does, he provides an extremely important and useful insight:
If everything improves at the same rate, then nothing really changes.
When rates vary, real innovation is required.
In this new milieu, Software and System designers will have to step up to build systems that are effective and efficient, and any speed improvements will only come from better software.

There is an effect that will dominate bandwidth improvement, especially in networking and interconnections (backplanes, video, CPU/GPU and peripheral interconnects):
the bandwidth-distance product
This affects both copper and fibre-optic links. Using a single technology, a 10-times speed-up shortens the effective distance 10-times, as is well known in transmission line theory.

For LANs to go from 10Mbps to 100Mbps to 1Gbps, higher-spec cable (Cat 4, Cat 5, Cat 5e/6) had to be used. Although 40Gbps and 100Gbps Ethernet have been agreed and ratified, I expect these speeds will only ever be Fibre Optic. Copper versions will either be very limited in length (1-3m) or use very bulky, heavy and expensive cables: worse in every dimension than fibre.

See the "International Technology Roadmap for Semiconductors" for the expert forecasts of the underlying Silicon Fabrication technologies, currently out to 2024. There is a lot of detail in there.

The one solid prediction I have is Kryder's 7TB/platter: a 32-times increase in bit-areal density, or 5 doublings of capacity.
This implies the transfer rate of disks will increase 5-6 times, given there's no point in increasing rotational speed, to roughly 8Gbps. Faster than "SATA 3.0" (6Gbps) but within the current cable limits. Maintaining the current "headroom" would require a 24Gbps spec, needing a new generation of cable. The SATA Express standard/proposal of 16Gbps might work.
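A back-of-envelope check in Python (the ~1.5Gbps baseline media rate for a current drive is my assumption):

    # Transfer rate grows with linear bit density, i.e. with the
    # square root of areal density, if RPM is held constant.
    density_gain = 32                    # 5 doublings of areal density
    rate_gain = density_gain ** 0.5      # ~5.7x
    baseline_gbps = 1.5                  # assumed current media rate
    print(f"~{rate_gain:.1f}x, ~{baseline_gbps * rate_gain:.0f} Gbps")
    # ~5.7x, ~8 Gbps: the "5-6 times" and "roughly 8Gbps" above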

There are three ways disk connectors could evolve:
  • SATA/SAS (copper) at 10-20Gbps
  • Fibre Optic
  • Thunderbolt (already 2 * 10Gbps)
Which type to dominate will be determined by the Industry, particularly the major Vendors.

The disk "scan time" (to fully populate a drive) at 1GB/sec, will be about 2hours/platter. Or 6 hours for a 20Tb laptop drive, or 9 hours for a 30Tb server class drive. [16 hours if 50TB drives are packaged in 3.5" (25.4mm thick) enclosures].  Versus the ~65 minutes for a 500Gb drive now.

There is one unequivocal outcome:
Populating a drive using random I/O, as we now do via filesystems, is not an option: random I/O is 10-100 times slower than streaming/sequential I/O. It's not good enough to take a month or two to restore a single drive when 1-24 hours is the real business requirement.

Also, for laptops and workstations with large drives (SSD or HDD), they will require 10Gbps networking as a minimum. This may be Ethernet or the much smaller and available Thunderbolt.

A caveat: this piece isn't "Evolution's End", but "(Silicon) Revolution's End". Hardware Engineers are really smart folk; they will keep innovating and providing Bigger, Faster, Better hardware. Just don't expect the rates of increase to be nearly as fast. Moore's Law didn't get repealed in 2003, the rate-of-doubling changed...


Why should anyone care? is really: Who should care?

If you're a consumer of technology or a mid-tier integrator, very little of this will matter. In the same way that now when buying a motor vehicle, you don't care about the particular technologies under the hood, just what it can do versus your needs and budget.

People designing software and systems, the businesses selling those technology/services and Vendors supplying parts/components hardware or software that others build upon, will be intimately concerned with the changes wrought by Revolutions End.

One example is provided above:
 backing up and restoring disks can no longer be a usual filesystem copy. New techniques are required.

Wednesday, December 07, 2011

RAID: Something funny happened on the way to the Future...

With apologies to Stephen Sondheim et al, "A Funny Thing Happened on the Way to the Forum", the book, 1962 musical and 1966 film.

Summary:
Robin Harris of "StorageMojo", in "Google File System Eval", June 13th, 2006, neatly summarises my thoughts/feelings:
As regular readers know, I believe that the current model of enterprise storage is badly broken.
Not discussed in this document is The Elephant in the Room, the new Disruptive Technology: Enterprise Flash Memory or SSD (Solid State Disk). It offers (near) "zero latency" access and random I/O performance 20-50 times cheaper than "Tier 1" Enterprise Storage arrays.

Excellent presentations by Jim Gray about the fundamental changes in Storage are available on-line:
  • 2006 "Flash is good": "Flash is Disk, Disk is Tape, Tape is dead".
  • 2002 "Storage Bricks". Don't ship tapes or even disks. Courier whole fileservers, it's cheaper, faster and more reliable.