About This Case

Closed

28 Jul 2008, 11:59PM PT

Bonus Detail

  • Qualifying Insights Split a $12,000 Bonus

Posted

12 Jun 2008, 11:02AM PT

Industries

  • Enterprise Software & Services
  • Hardware
  • IT / IT Security
  • Internet / Online Services / Consumer Software
  • Start-Ups / Small Businesses / Franchises
  • Telecom / Broadband / Wireless

Continuing Insights Into The Rapidly Evolving Storage Area Network Market

 


In April, we began a conversation around The Future of Storage with support from Dell. Top insights from the community were posted to http://thefutureofstorage.com/, with some also appearing on Ars Technica and in ad promotions. We're looking to continue the conversation with good insight into any of the following topics: virtualization, iSCSI, FCoE, deduplication, thin provisioning, encryption, etc.

This conversation centers on the SAN market and its current and future directions, including the areas listed above. To get an idea of the type of insights we're looking for, look over http://thefutureofstorage.com/ to see some earlier posts. Please note that while that site is sponsored by Dell, the topic for discussion is not vendor specific.

In continuing this conversation, feel free to write about any topic in this space that you think would fit on the site, such as where you think the SAN market is heading. Alternatively, feel free to write thoughtful insights that address some of the points raised in earlier posts on The Future of Storage site.

For the purpose of encouraging discussion, here are a few topics that might be interesting to explore:

  • What type of storage architectures (SAN, NAS, iSCSI, FC, DAS) work well for virtualized servers and why?
  • What methods are best for accessing/provisioning storage with virtual systems (a virtual group service, like VMware? a software function in the virtual system? a raw device interface?)
  • What backup methodologies make sense for virtual server environments?

If you have personal experience with any of these, or even just want to walk through your thought process in evaluating options, those tend to make for very interesting posts.

Also, as we look to continue this ongoing conversation, please let us know what other topics you think would be good to discuss in future months.

The insights selected to be on this site will each get a "share" of the bonus pool below. You can write multiple insights to get multiple shares.

PLEASE NOTE: We are looking for unique insights that delve into a single subject concerning this topic. Don't try to cover too many things in a single insight submission. Again, look over the existing Future of Storage site to get an idea of what's appropriate.

Like last time, please get your insights in early. We will be closing the case once we feel there are enough insights (somewhere between 10 and 20 insights).

17 Insights

 



I really want to see a focus on seamless, offsite, incremental rolling (SOIR) backups.  At this stage of the industry, it is ridiculous to suppose there needs to be any kind of manual backup process.  There is plenty of bandwidth on any network and network monitoring tools to determine the best time to take scheduled backups.  Furthermore, storage is incredibly cheap now, to where there shouldn't be any problem storing a few days' worth of bit-for-bit backups as necessary.  Virtual server environments seem poised to benefit the most from SOIR backups, since a lot of virtual environments may be used for volatile test beds.  To be able to resurrect a particular instance of a virtual environment without having to worry about having taken the backup in the first place would be a huge benefit, and could be integrated into the idea of the virtual environment to begin with.  For example, having 5 virtual servers should imply that if 2 crash, I can just go grab their backup from yesterday and restore instantly.  Moreover, couldn't the virtual server manager service be configured to do this automatically if one of the servers did die? 
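A minimal sketch of the kind of automated restore policy described above, assuming a hypothetical backup catalog and hypervisor interface (none of these objects or calls name a real vendor API):

    # Illustrative sketch only: bring a crashed VM back from its most
    # recent rolling backup. "catalog" and "hypervisor" are hypothetical
    # interfaces, not a real product's SDK.
    def auto_restore(vm_name, catalog, hypervisor):
        if hypervisor.is_running(vm_name):
            return  # nothing to do
        backups = sorted(catalog.list_backups(vm_name),
                         key=lambda b: b.timestamp, reverse=True)
        if not backups:
            raise RuntimeError("no rolling backup found for " + vm_name)
        newest = backups[0]
        hypervisor.restore(vm_name, newest.image_path)
        hypervisor.start(vm_name)

A monitoring loop could call auto_restore() for every VM on a schedule, so a dead test server comes back from yesterday's image without anyone having to remember that a backup was ever taken.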
Devin Moore
Thu Jun 26 12:53pm
As an example of what I mean by super-cheap storage, you can currently put 212 DVDs' worth of material on a hard drive for 80 dollars. So for the cost of an old-school SAN, say into the thousands, that's easily thousands of DVDs' worth of storage (tens of terabytes). Is there any chance any single person would ever use the amount of space you could currently buy with a thousand dollars, assuming you had incremental rolling storage usage to help contend with deduplication issues?
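A rough back-of-the-envelope check of those numbers, taking 4.7 GB per single-layer DVD and the prices quoted above as the assumptions:

    # Back-of-the-envelope check of the cheap-storage claim (2008 prices assumed).
    dvd_gb = 4.7                      # single-layer DVD capacity, GB
    drive_gb = 212 * dvd_gb           # "212 DVDs' worth" on one drive
    drive_cost = 80.0                 # dollars
    budget = 1000.0                   # an "old-school SAN" budget, dollars
    gb_per_dollar = drive_gb / drive_cost
    print(f"one $80 drive: about {drive_gb:.0f} GB (~1 TB)")
    print(f"$1,000 buys  : about {budget * gb_per_dollar / 1000:.1f} TB, "
          f"or roughly {budget * gb_per_dollar / dvd_gb:.0f} DVDs' worth")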

Virtualization and Storage

I posted a rather lengthy entry about this topic on my blog recently http://joergsstorageblog.blogspot.com, but I thought that I would summarize it here as well.

Virtualization Changes Everything

I keep hearing this, and to some extent it's true. On the other hand, if you have been around storage for a while, you also see a lot of similarities to the issues that we always had to deal with as storage admins/managers. The main difference with VMware is that it becomes even more important to address the following three issues:

  1. Application performance is dependent on storage performance. This isn't news for most storage administrators. What's different is that VMware can combine a number of different workloads all talking through the same HBA(s), so the workload, as seen by the storage array, turns into a highly random, usually small-block I/O workload. These kinds of workloads are far more sensitive to latency than they are to bandwidth. The storage design in a VMware environment therefore needs to provide for this type of workload across multiple servers. Again, this is something storage administrators have done in the past for Exchange servers, for example, but now on a much larger scale.
  2. End-to-end visibility from VM to physical disk is very difficult for storage admins to obtain with current SRM software tools. These tools were typically designed with the assumption that there was a one-to-one correspondence between a server and the application that ran on it. Obviously this isn't the case with VMware, so reporting for things like chargeback becomes a challenge. This also affects troubleshooting and change management, since the clear lines of demarcation between server administration and storage administration are now blurred by things like VMFS, VMotion, etc.
  3. Storage utilization can be significantly decreased. This is due to a couple of factors. The first is that VMware requires more storage overhead to hold all of the memory, etc., so that it can perform things like VMotion. The second is that VMware admins tend to want very large LUNs assigned to them to hold their VMFS file systems and to have a pool of storage they can use to rapidly deploy a new VM. This means there is a large pool of unused storage sitting around on the VMware servers waiting to be allocated to a new VM. Finally, there is a ton of redundancy in the VMs. Think about how many copies of Windows are sitting around in all those VMs. This isn't new, but VMware sure shows it to be an issue.

So, the question is, what can we do about these things? Well, some new technology that's just coming into the market will help: things like thin provisioning, block storage virtualization, and new SRM tools that can do correlation between servers, networks, and storage. Another thing we are starting to see is virtual HBAs, which will also help on the reporting end of things.

So, as I see it, server virtualization (VMware) will drive storage virtualization. I would even go so far as to say that you won't be able to get the full promise of server virtualization unless you implement some storage virtualization. If I'm right about that, then we should see a significant uptick in the sales of storage virtualization very soon...

 

The storage industry is not going away, and it's not a domestic automaker.

Our industry has made long strides in recent years. There is no need for it to make a drastic change in its designs or how it does business, like some of our troubled domestic automakers. Companies are buying all of the various offerings from the big storage vendors, and even some of the small ones. It's a growing market, and "if it ain't broke, don't fix it".

At least that's the attitude most companies will have when they're asked whether they will implement a new-fangled technology for their existing SANs. Most firms have their critical systems on a SAN and, unless there is a real need, will not forklift-upgrade their existing SANs. That leaves the new technologies to attract new clients, such as small businesses and small-to-medium enterprises (SMEs). iSCSI is catching their eye now, and maybe FCoE will catch it later. The key is still having devices that support more than one protocol and interface, such as those that use both FC and iSCSI, in order to attract new customers while retaining those with existing investments. Later perhaps, the mix will shift to FCoE and iSCSI, possibly InfiniBand. As a side note, I see the most bandwidth potential with fiber and InfiniBand rather than copper. Plus, the cost of fiber continues to go down while copper costs are going up. Fiber is smaller, but then again so was Betamax.

Evolution vs. Revolution
No one should expect the storage industry to make a drastic change unless a start-up company comes along with revolutionary technology that really takes off and others take notice. I believe that is unlikely, since storage networks also rely on network technology, which isn't going to change overnight just for storage. After all, we're moving back towards commonly used network technologies like IP and Ethernet for our storage networks!

I also feel it's not accurate to think the storage industry is not improving daily. We are not merely recreating what we started with; we are improving its reliability, performance, resilience, flexibility, and utilization. We are not mimicking evolution; there is no such thing. To ignore all of that progress in hopes of a revolutionary shift seems to me about as useful as questioning why we still use processors. Granted, quantum computing would be great, but how do you write code for something you don't understand, let alone something that changes only once it's observed? Do you want your storage to be defragmented only when you observe it?

Thoughts on recent topics

Cloud computing will not take off in the financial industry or many other conservative industries. They simply will not rely on it to get critical data or services. If your Internet connection is gone, it's ALL GONE. They just won't tolerate that. I've seen the Internet go down in all three of the biggest cities for an extended period, and so did a lot of people who will never be convinced to invest too much in "cloud computing".

Deduplication is not simply compression. It is inherently different. It does so much more than just looking for repeated strings or bits and replacing them with a shortcut for "1000 spaces", for example. It doesn't end with each file or with each archive or backup. Deduplication looks at every single block of data and determines whether it has ever seen that block before, and if so, replaces it with a reference to the original block. After that you could even compress the deduped data.
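To illustrate the block-level idea, here is a toy sketch using fixed-size blocks and SHA-1 fingerprints; real products use far smarter chunking and indexing, so treat this as a thought experiment rather than a design:

    # Toy fixed-block deduplication: store each unique block once and keep
    # cheap references for every repeat.
    import hashlib

    BLOCK_SIZE = 4096
    store = {}      # fingerprint -> block data (stored once)
    refs = []       # the "file" as an ordered list of fingerprints

    def write(data):
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha1(block).hexdigest()
            if fp not in store:          # only never-seen blocks consume space
                store[fp] = block
            refs.append(fp)              # duplicates become references

    write(b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE)
    print(len(refs), "blocks referenced,", len(store), "blocks stored")   # 4, 2

Three identical blocks and one unique block produce four references but only two stored blocks; that gap is where deduplication saves space, and the deduped store can still be compressed afterwards.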

As far as synchronization of files at home goes, a cloud with a silver lining is called a NAS with sync features enabled. Have everything do a two-way sync with the giant NAS. If simplicity is desired, having one PC per person and a one-way upload from a "control device" such as an iPod would be the way to go.

We Are The Industry
And for those of you who haven't noticed -- we ARE the industry. The big guys are asking us experts what we see coming around the corner, and they want to invest in developing and marketing those products. No company wants to make the next Betamax, despite it being a better product than the others. Sites like this are catching vendors' attention and will change their minds. Remember: first come expert opinions and recommendations, then vendor investment and production, and last comes recognition from customers.

 


Hey Brocade, do you think I'm an FCoE sucker?

At the Brocade Tech Day briefing to analysts, Brocade presented a set of slides. One of them (Page 62) said (and I quote):

"The FC incumbent has a huge advantage in being the FCoE vendor of choice. Like today's FC networks, we do not expect mixed vendor FCoE-FC networks."

Storage is a serious business, and everyone is very concerned about compatibility and interoperability. So much so, that they will buy more of the vendor they already have and ignore interoperability. Brocade knows that and is planning for it. Now, to me, that sounds like an opportunity to charge more, and make more profit, for very little effort.

Those storage folks must love being bunnies.

A deeply cynical observer would conclude that Brocade feels that FCoE locks customers in to their product and can't wait to get their hands on the money. In fact, they believe it enough to suggest it to their investors.

How is this going to work?

The "interoperability lock-in" means they can charge extra. Because you aren't going to mix your "standardised" FCoE and FC vendors, are you? Why not? Because support would be a nightmare! It's not a truly open market, competition is low, so Brocade can have a go at gouging you.

Bingo: Brocade's profit is assured, good for bonuses and the share price!

What is the bet that Brocade makes sure of this by how it promotes and markets "compatibility and interoperability"? (Cisco is already making a lot of loud noise about its upcoming interop sessions.) Let me also ask: how many IT people are really going to deploy multiple vendors? Anyone, anyone? Bueller?

Give it some thought, people: when you take FCoE on board you just might be getting stitched up by some clever marketing.

You can download the Brocade Tech Day briefing http://media.corporate-ir.net/media_files/irol/90/90440/TechDayJune2008.pdf and read it for yourself. It is quite educational.

Storage Convergence on Ethernet - works for iSCSI, also FCoE - Part 1


A key issue around the development of Converged Enhanced Ethernet (CEE), also known as Data Centre Ethernet in Cisco marketing because of their proprietary extensions, is that CEE will enable iSCSI to reach its full potential. For nearly everyone, iSCSI will become the default technology once the CEE standards are finalised and products come to market. What? You haven't heard this before? Let's have a quick intro to CEE.

What are we trying to solve?

Existing applications are tolerant of packet loss in the network, and QoS is managed (but not solved) by classifying traffic and managing queues in the network (where queues exist). Storage, however, is very sensitive to loss. Both FC and iSCSI can retransmit lost data, but this causes significant bottlenecks and a performance hit to the application.

In a modern data centre, attempting to apply QoS to storage AND voice AND other high-value traffic is a practical impossibility. The configuration and maintenance of a converged backbone with competing requirements would fail in enough cases that the market would reject any such attempt.

The obvious thing is to stop packet loss. The only practical way to achieve this is to create some signalling that notifies the sender to stop sending data before the loss or congestion occurs. Congestion is always possible in ANY NETWORK no matter how much bandwidth you provision.

The IEEE are working on 802.1Qau Congestion Notification, which is an end-to-end message. When combined with 802.3x Pause control (which operates at link level), we can guarantee zero loss in the Ethernet backbone, because the source can be signalled to slow down its data transmission.

Note that 802.3x needs to be modified from a start/stop for the entire link to being able to pause certain traffic priorities; combined with 802.1Qaz, this limitation is overcome. 802.1Qaz describes Enhanced Transmission Selection, which supports allocation of bandwidth amongst traffic classes.

Another standard  under development is Discovery and Capability Exchange where servers and network equipment are able to signal their capabilities to each other, so that strategies for traffic management can be agreed between all elements.

So now we have the ability to configure our network with at least three major types of buffering or congestion strategies: 1) zero loss, suitable for voice traffic; 2) buffer overflow causes a signal to be sent back to the source or upstream network device, so traffic is controlled rather than dropped; 3) discard traffic if needed and let the application protocols retransmit. (Of course, you should be able to have fewer, none, or variations, in the same way that we do today.)
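Strategy 2 is the one that matters for storage, so here is a deliberately simplified model of it: a queue that signals "pause" upstream before it can overflow, so nothing is ever dropped. The queue depth and watermarks are illustrative assumptions, not values from the 802.1 drafts:

    # Simplified model of per-priority pause: tell the sender to stop
    # before the queue overflows, so no frame is ever dropped.
    class LosslessQueue:
        def __init__(self, depth=100, pause_at=80, resume_at=40):
            self.depth, self.pause_at, self.resume_at = depth, pause_at, resume_at
            self.frames = 0
            self.paused = False

        def enqueue(self):
            assert self.frames < self.depth, "overflow should never happen"
            self.frames += 1
            if self.frames >= self.pause_at and not self.paused:
                self.paused = True        # send PAUSE upstream for this priority

        def dequeue(self):
            if self.frames:
                self.frames -= 1
            if self.frames <= self.resume_at and self.paused:
                self.paused = False       # send RESUME; sender may transmit again

    q = LosslessQueue()
    for _ in range(80):
        q.enqueue()
    print("paused?", q.paused)            # True: sender is throttled before any loss

The high and low watermarks are what keep this stable: pause early enough that frames already in flight still fit, and resume only once real headroom is back.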

Unlike using TCP/IP to perform QoS, where the QoS mechanisms are not well suited to storage traffic, CEE will deliver a zero- to low-loss switching backplane to iSCSI and yet offer support to a number of other applications. The option of a single, unified Ethernet fabric in the data centre is very real.

Further, we will still have native IP support over the WAN for inter-data-centre connections when needed, without extra actions or work. (Unlike Fibrechannel, which will still need FCIP plus FCoE / FC to perform inter-data-centre connections.)

Conclusion


The interesting outcome of this is the impact on iSCSI. Storage network technologies are very sensitive to packet loss, and if the network can assert mechanisms for assuring low latency and very low (or no) loss, then our Converged Enhanced Ethernet Networks will take over the data centre for our storage.

Most importantly, iSCSI will now be able to reach its full potential. Most people find that iSCSI works fine in small to medium server farms, but in large-scale IT the impact of packet loss is too great a risk. By comparison, Fibrechannel networks deliver a lossless medium, and that is a key enabler for the technology.

Look for my next post which discusses more aspects of Converged Enhanced Ethernet and its impact on Storage networks.

References


"Storage convergence over Ethernet: iSCSI for new SAN installations FC tunneled over Ethernet (FCoE) for expanding FC SAN installation" Data Center Bridging, IEEE 802 Tutorial - Page 11 - 12th November 2007.

http://www.ieee802.org/1/files/public/docs2008/az-wadekar-dcbcxp-overview-rev0.2.pdf

Storage Convergence on Ethernet - combining data and storage on single fabric - Part 2


Continuing from my previous article, which introduced Converged Enhanced Ethernet, let's look at how we can converge our storage and data networks and have a single fabric in the data centre.

The value of HBAs

In this plan, your network adapters and drivers (HBAs) must respond to these flow control messages to stop or slow data transmission so as to avoid congestion in the network and subsequent packet loss. Thus HBAs will become critical to the success of iSCSI or FCoE, because they will need to have large and smart buffering schemes to be able to handle the data pumping. I have talked in previous articles about the value of HBAs in your servers, and this is another example of their value.

The value of a single Network fabric


The current storage network strategy means that there are two networks in our data centre, Fibrechannel and Ethernet. Many people are co-opting this strategy for their iSCSI backbones by creating a second network dedicated to IP Storage.

Unlike using TCP/IP to perform QoS, where the QoS mechanisms are not well suited to storage traffic, CEE will deliver a zero- to low-loss switching backplane to iSCSI and yet offer support to a number of other applications. The option of a single, unified Ethernet fabric in the data centre is very real.

Further, we will still have native IP support over the WAN for inter-data-centre connections when needed, without extra actions or work. (Unlike Fibrechannel, which will still need FCIP plus FCoE / FC to perform inter-data-centre connections.)

Let's merge these together


Not so long ago, telephony was migrated to IP networks. It was widely agreed that voice could never travel across a packet network because delivery was not guaranteed, delay and jitter were a problem, the sky would fall down, etc.

The argument against merging storage and data networks is a reasonable statement with the Ethernet technology we have today. However, Converged Enhanced Ethernet is expected to deliver (in time) a single Ethernet fabric for our entire data centre.

Consider the Brocade approach: "FCoE is Fibre Channel, not legacy Ethernet" (Page 62 of the Brocade Tech Day briefing). This means buying super-special-sauce switches from Brocade that support FCoE / FC. It is hard to see how we can build a unified fabric using these products. At best, they are a "transition", or a short-term fix, for the Fibrechannel problem.

Building a Unified Fabric


So there are several things that have to happen. Let's be real, and think about what to look for.

First, the CEE standard needs to be completed by the IEEE. Second, the vendors need to announce Ethernet switches that support CEE. Look for Cisco and Brocade to be early to market (because they want to force customers to FCoE), but keep an eye on Juniper, Force 10 and Woven, who can all use this disruption to take a new position in the market and attack Cisco's dominance.

Third, look for quality HBAs and their software drivers. Fourth, look at the integration required between the network and server teams to seamlessly deploy this new network strategy.

When do you think it will be ready?


I would _guess_ that CEE will be done by 2010 (OK, so there will be pre-standard stuff around like the Cisco Nexus 7000 / 5000, but not many people will buy that). Because this is a market disruption point, I would expect the vendors to have products out quickly, particularly Juniper. So you should be thinking about deployment in 2011. That's only _three_ years away, or a single investment cycle.

Conclusion


So this sounds like a long way off, but we are already getting the marketing hype from Cisco, and I guess Brocade will not be far behind. We haven't heard much from NetApp / EMC / HP / IBM, so it's not clear what might happen if they decide to move down a different path.

In the meantime, the focus should be to stick with iSCSI, which will be a viable, long-term technology. When you need to scale to multigigabit performance, new technologies will be there to make it happen. Reliable, scalable, and easy to use. That's what we want, right?
Joerg Hallbauer
Wed Jul 16 2:26pm
I agree that FCoE is a great idea/technology; however, I'm concerned about some of the organizational issues that it will create. Specifically, what you call the "integration between the Network and the Server Teams".

First, in most large organizations, storage is handled by a separate Storage Team, not the Server Team. Right now, the Storage Team has control over its own network (i.e., Fibre Channel). FCoE's use of Ethernet will, I believe, cause a power struggle between the Network Team, who will want to control all things Ethernet, and the Storage Team.

So I'm wondering if you have thought about that and how it might be addressed? One obvious answer would be to build a separate Ethernet network just for storage, controlled by the Storage Team. But that would somewhat blunt the advantages of going to FCoE in the first place, wouldn't it? The other answer would be to allow the Storage Team some administrative access to the Ethernet switches in a way that's limited to just FCoE activities. That is going to require that Cisco, Brocade, etc. develop some enhancements to their management interfaces.

This leaves the issue of physical connectivity. Who will be responsible for connecting the "HBA"s to the Ethernet network? Again, I suspect that there will be a power struggle here as well.

So, I'm going to guess that a lot of large IT shops that currently run Fibre Channel networks are going to decide that they really don't want to deal with these issues, and will simply choose not to adopt FCoE. At least not very soon. Even if it is a better mousetrap.
Greg Ferro
Thu Jul 17 12:19am
You are bang on the money. The big shops will all have their storage people circling the wagons around their sacred cows and howling about 'uptime, performance, blah blah... hot air, smelly fart sounds'.

But the day of truth will come when the CIO says, "You are buying Ethernet switches from Brocade... for how much?" Then he is going to turn to the network guy, and the networks will merge.

The fun part is that it's going to be brutal on the storage folks. They act just like the telephony people did before they got eviscerated by IP telephony... and I love stepping over their graves as I walk through the door every day.
Greg Ferro
Thu Jul 17 12:23am
Oh yeah, I forgot to mention that in the long term a storage unit will look like just another server to the network. It will have a different traffic profile and specific design considerations, but otherwise be no different from a database server or an Exchange server.

Virtualization Does Not Hide The Physical World We Live In

Don't misunderstand me, I love virtualization.  With any technology, we weigh the benefits of it against the cost and viability of implementation.  Sometimes we focus too much on virtualization or the protocols and the upper layers of proposed ideas.  We might completely overlook the lower layer benefits that technologies such as FCoE could provide, such as the ease of wiring and running patch panels.  This gives data centers massive flexibility.  It would eliminate the need to worry about what type of wiring will go where and having to "homerun" every type of new cable that comes out. 

Convenience Killed The Cat? 

As I've stressed before, I feel this convergence should be focused on the physical medium with the most potential like fiber cable rather than twisted pair copper.  Regardless, we must not forget the importance of convenience.  Several markets such as cell phones and even websites forego performance in favor of convenience.  I would like to think that we will continue to stave off that rationale in the IT storage industry, but it does pull a lot of weight.  Convenience is good, but an even bigger factor is the reality that consumers want all of their information right away.  We all need to be a bit more curious, look at the big picture, and question the status quo.

Our Fiber Diet

I could see cabling convenience being huge in the data center. I was recently rerouting core switch cabling in our data center and fighting to protect the precious fiber cable while moving the huge, bulky and ubiquitous UTP copper. Though I favor fiber tremendously, I was looking at our patch panels and contemplating the rapid deployment of our FC SAN. I started to waver when I thought about how nice it would be to have fiber patch panels, ideally used for everything, but then I thought about what a risk that would be with everything favoring copper these days. Sure, the cost of copper is rising and fiber is dropping. Indeed, fiber has so much more performance potential, can be pulled at higher tension than copper, and is smaller than copper. However, we cannot overlook the convenience factor, which has caused us to restrict fiber from our collective diets.

More Questions Than Answers 

Why is it that copper seems to be winning the race?  Is it because more people know how to terminate UTP than polish fiber ends?  Will they be just as good with STP when they have a shield to deal with and even thicker cable?  When will copper become cost-prohibitive and reach the end of its roadmap?

Often I feel the IT industry and the big-vendors out there are focused too much on the road right in front of them without looking at the map first.

Turning the Page on RAID

It has been the core technology behind the storage industry since day one, but the sun is setting on traditional RAID technology. After two decades of refinement and fragmentation, we are abandoning the core concepts of disk-centric data protection as storage and servers go virtual. Next-generation storage products will feature refined and integrated capabilities based on pools of storage rather than combinations of disk drives, and we will all benefit from improved reliability and performance.

RAID Classic

Early storage systems were revolutionary, in physically removing storage from the CPU, in enabling sharing of storage between multiple CPUs, and especially in virtualizing disk drives using RAID. When Patterson, Gibson, and Katz proposed the creation of a redundant array of inexpensive disks (RAID) in 1987, they specified five numbered "levels". Each level had its own features and benefits, but all centered on the idea that a static set of disk drives would be grouped together and presented to higher-level systems as a single drive. Storage devices, as a rule, mapped host data back to these integral disk sets, sometimes sharing a single RAID group among multiple "LUNs", but never spreading data more broadly. Storage has remained stuck with small sets of drives ever since.

The core insight of the 1980s remains true: More spindles means better performance. Although additional overhead dulls the impact somewhat, the benefit of spreading data across multiple drives can be tremendous. A typical RAID set offers much better performance than the drives alone, and can handle a mechanical failure as a bonus.

Cracks are appearing in the RAID veneer, though. Double drive failures are much more common than one would expect, leading to the development of hot spare drives and dual-parity RAID-6. If four drives perform well, then forty drives perform much better, leading to the common practice of "stacking" one RAID set on others. Caches and specialized processors were introduced to overcome the performance issues related to parity calculation.

But traditional RAID cannot overcome today's most critical storage issues. As drives have become larger, the tiny chance of an unrecoverable media error compounds, becoming a certainty. Even dual-parity will not be able to guarantee data protection on the massive disks predicted for the near future – statistics cannot be denied. The latest disks contain so much data, without commensurate improvements in throughput, that rebuild times have skyrocketed, resulting in hours or days of reduced data protection.
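To see how quickly that compounds, here is the arithmetic for a single-parity rebuild, assuming the commonly quoted unrecoverable-read-error rate of one bit in 10^14 (a typical consumer-drive specification; enterprise drives are rated better):

    # Chance of completing a RAID-5 rebuild without hitting an unrecoverable
    # read error, assuming a URE rate of 1 in 1e14 bits.
    ure_per_bit = 1e-14
    for drive_tb, drives in [(0.5, 4), (1.0, 7), (2.0, 12)]:
        bits_to_read = drive_tb * 1e12 * 8 * (drives - 1)   # surviving drives read in full
        p_clean = (1 - ure_per_bit) ** bits_to_read
        print(f"{drives} x {drive_tb:.1f} TB: {p_clean:.1%} chance of a clean rebuild")

With today's largest drives, the odds of finishing a rebuild without a single unreadable sector are already uncomfortably low, which is exactly why hot spares and dual parity became necessary.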

RAID is also ill-suited to the demands of virtualized systems, where predictable I/O patterns become fragmented. It cannot provide tiered storage or account for changing requirements over time. It cannot take advantage of the latest high-performance solid state storage technology. It cannot be used in cloud architectures, with massive numbers of small devices clustered together. It interferes with power-saving spin-down ideas. Most RAID implementations cannot even grow or shrink with the addition or removal of a disk. In short, traditional RAID cannot do what we now need storage to do.

RAID is Dead

Although most vendors still use the name, nearly every one has abandoned much of the classic RAID technology. EMC's Symmetrix pioneered the idea of sub-disk RAID, pairing just a portion of each disk with others to reduce the impact of "hot spots". HP's AutoRAID added the ability to dynamically move data from one RAID type to another to balance performance. And NetApp paired disk management so closely with their filesystem that they were able to use RAID-4 and the flexibility it brings.

Today, a new generation of devices has even evolved beyond RAID's concept of coherent disk sets. Compellent, Dell EqualLogic, and others focus on blocks of data, moving portions of a LUN between RAID sets, disk drive types, and even inner or outer tracks based on access patterns. With these devices, a single LUN could encompass data on every drive in the storage array. And the latest clustered arrays can spread data across multiple storage nodes to scale performance and protection.

These innovative devices point the way to a future in which virtual storage is serviced and protected very differently than in the past. Perhaps software like Sun's ZFS serves to illustrate this future best: It unifies storage as a single pool, intelligently protecting it and presenting flexible storage volumes to the operating system. Although Sun calls its data protection scheme "RAID-Z", it has little in common with its namesake. Like NetApp's WAFL, the copy-on-write ZFS filesystem is totally integrated with the layout of data on disk, allowing mobility and efficient use of storage. A single pool can include striping, single- or dual-parity, and mirroring, and disks can be added as needed. Importantly, ZFS also checksums all reads, detecting disk errors.

Long Live RAID

The post-RAID future will see these concepts spread across all enterprise storage devices. Disks will be pooled rather than segregated into RAID sets. Tight integration between layout and data protection will allow for much greater flexibility, integrating tiering and differing data protection strategies in a unified whole. Storage virtualization will allow mobility of data within these future storage arrays, and clustering will enable massive scalability.

Two things will likely remain to remind us of Patterson, Gibson, and Katz, however. First, the core principle that multiple drives working as one yields dividends in terms of performance and data protection. And second, that whatever we use should be called RAID, even though the definition of that term has changed beyond recognition in the last two decades.

Data Centers More Abstract Than A Picasso

Virtualization is a big buzzword right now, and for good reason.  The more we virtualize, the more flexibility we build into our infrastructure.  What we are really talking about is providing a layer of abstraction between our resources.  The entire IT industry is moving in this direction, and it's fantastic.  Swapping server hardware has never been easier.  Network traffic can be grouped and segmented without extra hardware.  Storage space can be moved, increased, and even migrated to higher performance arrays without downtime.  These added abstractions need to continue, and I see every aspect of the data center benefitting from virtualization.

You virtualized what?

Anything is possible.  We can abstract the power from the hardware and circuits, which might allow us to redistribute power as needed to various parts of the data center and ease the use of products with heterogeneous power requirements, including the power-hungry 10 GigE coming our way.  We would only trip virtual circuits.  Virtual power, virtual cooling, virtual phones, virtually anything is possible.  Virtual IT staff?  Well, that's called outsourcing the staff and is about as useful as virtual money, right?  In the world of IT storage, providing a layer of abstraction between the file system and the operating system should be a focus.  We also need to improve the underlying resilience technology.  With SSDs being fragmented by nature and, most of the time, slower on writes than spinning disk, a RAID 1+0 looks pretty good, especially since it's cheaper per gigabyte too.  A RAID 1+0 can lose up to half of its disks and still run without issue, as long as no mirrored pair loses both of its members.
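To put a number on that caveat, here is the probability that a RAID 1+0 set survives the loss of k randomly chosen disks; it survives only if no mirrored pair loses both of its members (the 8-disk array size is just an example):

    # Probability that RAID 1+0 survives k random disk losses.
    from math import comb

    def survival_probability(pairs, k):
        total = comb(2 * pairs, k)                            # ways to lose k of the disks
        good = comb(pairs, k) * 2 ** k if k <= pairs else 0   # no pair loses both members
        return good / total

    pairs = 4                                                 # an 8-disk RAID 1+0, for example
    for k in range(1, pairs + 1):
        print(f"lose {k} disk(s): {survival_probability(pairs, k):.0%} chance the array survives")

So surviving the loss of half the disks is possible, but far from guaranteed; it depends entirely on which disks happen to fail.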

Back to Reality

I understand there may not be a perceived need for some of these technologies yet.  There still are several opportunities for more abstraction.  Too much abstraction can make management cumbersome.  The key is abstraction with less complication.  Let's take a look at the needs of SMB/SMEs, the Small-to-Medium Enterprises.  Many of these companies have enterprise-class needs for storage, network security, and messaging solutions.  These needs don't necessarily mean they have the money or space to fill these roles and house this technology.  How do you get network monitoring around the clock, and a warehouse full of information in a small office?  Bring in the appliances and the outsource companies.

Where Are The Abstract Appliances?

Appliances serve many purposes, and do a great job fulfilling various needs.  The problem with appliances is that the vendors that make them seem to be removing layers of abstraction instead of adding them.  Specifically I'm thinking of deduplication appliances.  There are too few deduplication appliances available for SMEs that do not have their own proprietary and locally attached storage.  EMC bought Avamar and rather than leaving the option of it being available as software or offering an appliance gateway, they now force the consumer to buy it with locally attached storage.  There is a virtual edition, but it comes with many restrictions and Avamar in general requires giving up on existing backup software investments and using the Avamar interface.  Data Domain does have a gateway product, but it is their flagship model and its price point punts it out of reach from the SME market.   Maybe an abstract appliance is an oxymoron, but I know I'm still looking for one to buy for our datacenter.

Proprietary file system you say?  Profit margin prohibitive?  Give us abstraction, and we'll reward you with our business.

 

We are running out of places to put things.

Data continues to grow at a frightening rate. According to an IDC study, there were about 281 exabytes of data stored on disk worldwide in 2007. This data is growing at a CAGR of about 70%. At that rate, in 3 years there will be about 1,400 exabytes of data sitting on disk.
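The arithmetic behind that projection, using the IDC base figure and the cited growth rate as the only inputs:

    # Projecting the IDC figure forward at the cited ~70% CAGR.
    base_eb = 281          # exabytes on disk worldwide in 2007 (IDC)
    cagr = 0.70
    for years in (1, 2, 3):
        print(f"{2007 + years}: ~{base_eb * (1 + cagr) ** years:,.0f} EB")

Three years of 70% compound growth lands at roughly 1,380 EB, which is where the ~1,400 exabyte figure comes from.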

Now, a lot of this data is sitting on people's desktops, laptops, iPods, phones, digital cameras, etc. right now. However, things like cloud storage will change all of that. Heck, we are seeing some of the change right now with things like social networking sites, photo sharing sites, etc. IDC says that for 85% of that data, a corporate entity will be responsible for its protection and security.

So, in the future, we are going to have to store a lot more data than we do today, a LOT more data. How are we going to do that? Just the physical aspect of getting exabytes of data on the floor is going to be a challenge. I don't even want to talk about protecting and managing that much data. But for now, I want to talk about the density of the hard disk drive since that's going to soon become the physical limit of what we can store on the floor of our data centers.

The bits are getting too small!

Enterprise disk drive capacity has obeyed Moore's Law and doubled every 18 months for quite a few years.  However, this growth appears to have been slowing down over the last 5 years, and it is now taking approximately 29-30 months to double the capacity of an enterprise disk drive.

This shows that we are nearing the maximum areal density (maximum capacity) of current disk drive technology, called the superparamagnetic limit. Areal density, as it refers to disk drives, is measured as the number of bits per inch (bpi) times the number of tracks per inch (tpi).

The areal density of disk storage devices has increased dramatically since IBM introduced the RAMAC in 1956. RAMAC had an areal density of two thousand bits per square inch, while current-day disks have reached 100 billion bits (100 gigabits) per square inch. Perpendicular recording is expected to increase storage capacity even more over time, but we do appear to be approaching the limit.
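Using the bpi x tpi definition above, the change in scale looks roughly like this; the RAMAC figures are the commonly cited ones, and the modern pair is just one illustrative combination that lands near 100 gigabits per square inch:

    # Areal density = bits per inch (bpi) x tracks per inch (tpi).
    ramac = 100 * 20                    # ~100 bpi x ~20 tpi = 2,000 bits per sq. inch
    modern = 1_000_000 * 100_000        # illustrative: 1 Mbpi x 100 ktpi = 1e11 bits per sq. inch
    print(f"RAMAC (1956): {ramac:,} bits/in^2")
    print(f"circa 2008  : {modern:,} bits/in^2  ({modern // ramac:,}x denser)")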

As the magnetic bits get smaller, at some point they no longer hold their magnetic state. Thermal fluctuations reduce the signal strength and render the bits unstable. However, this ultimate areal density keeps changing as researchers find new techniques for recording and sensing the bits. Years ago the limit was thought to be 20 gigabits per square inch. Today, the limit is several hundred gigabits per square inch, and more than a terabit is expected soon. But that's about all you can get out of the technology.

Denser is faster.

Increasing the density of hard disk drives has a side benefit: it makes the drives faster as well. This is quite logical when you think about it. The closer together the bits are packed, the more data passes under a read/write head in the same period of time, making the drive faster.
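A rough way to see the throughput side of this: sustained transfer rate scales with the linear bit density along the track times the length of track passing the head each second. The geometry and densities below are simplified illustrations, not the specifications of any real drive:

    # Sustained transfer rate ~ (bits per inch of track) x (inches of track per second).
    from math import pi

    def mb_per_s(bpi, rpm, track_diameter_in=2.5):
        inches_per_sec = pi * track_diameter_in * rpm / 60.0
        return bpi * inches_per_sec / 8 / 1e6      # bits/s -> MB/s

    for bpi in (500_000, 1_000_000, 1_500_000):
        print(f"{bpi:>9,} bpi at 7200 rpm: ~{mb_per_s(bpi, 7200):.0f} MB/s")

Doubling the linear density at the same spindle speed roughly doubles sequential throughput, which is why capacity growth has quietly dragged transfer rates up with it; seek times, unfortunately, get no such free ride.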

Shorter term solution.

So, if the disk drive is not going to be able to continue to provide us with the kinds of capacities we are going to need in the future, what will? Well, there are a number of things being looked at by a lot of folks who are a lot smarter than me! In the short term, things like SSDs look promising once we work out some of the kinks, specifically the write speed issue. Until we can get that up, I'm not sure how much general acceptance SSD technology is going to get. Price, I am convinced, will take care of itself as economies of scale kick in. As for holographic storage, some people have been working on it for a very long time and it seems like such a promising technology, but it has yet to come to fruition. There is one company out there trying to ship a product, but they recently pushed their release date back to the end of this year. Still, if they can work out the kinks, it definitely has promise, especially for media applications. But what about beyond that? What technologies are the researchers looking at that sound really cool? I look at some of those next.

 

Sci-Fi data storage.

So, this is where it gets fun. Some of the technologies that researchers are currently looking into really do sound like something out of a sci-fi movie. Here are some examples of the stuff I'm talking about:

  • Nanodots - A nanodot has north and south poles like a tiny bar magnet and switches back and forth (or between 0 and 1) in response to a strong magnetic field. Generally, the smaller the dot, the stronger the field required to induce the switch. Until now, researchers have been unable to understand and control a wide variation in nanodot switching response. A NIST team significantly reduced the variation to less than 5 percent of the average switching field and also identified what is believed to be the key cause of the variability. Nanodots as small as 50 nanometers (nm) wide could be used to store data.
  • Arrays of magnetic snakes - According to a weekly digest from the American Physical Society (APS), physicists at Argonne National Laboratory (ANL) have found that under certain conditions, magnetic particles can form magnetic 'snakes' able to control fluids. According to the researchers, this magnetic self-assembly phenomenon may be used to make the next generation of magnetic recording media, or transparent conductors based on self-assembled conducting networks of magnetic micro-particles.
  • Nanowires - Switchable fluorescent proteins, able to move reversibly between two optical states, have been known for some years, and German researchers have now discovered the mechanism behind this optical switch in a protein found on the tentacles of a sea anemone. Meanwhile, according to researchers from the University of Pennsylvania, Drexel University and Harvard University, barium titanium oxide nanowires suspended in water could hold 12.8 million GB per square centimeter. If that memory density can be realized commercially, "a device the size of an iPod Nano could hold enough MP3 music to play for 300,000 years without repeating a song or enough DVD-quality video to play movies for 10,000 years without repetition," the University of Pennsylvania researchers said.

Is the disk drive dead?

So, does this mean that the disk drive is dead? I don't think so. I believe that the disk drive we know and love will simply move from one tier of storage to another. We are already seeing some of this movement with the implementation of backup to disk. Technologies such as data deduplication will continue to accelerate this process, and the addition of new primary data storage technologies will simply finish the job by pushing hard disk drives from online primary storage to what will be considered near-line storage in the future. Long live the disk drive!

 

Storage decisions for virtual servers
 
While these concepts and decisions should be fairly basic, I thought I would throw them out there and see if I either get some feedback from others who have gone through some of the same decision making processes, or possibly help out someone who is going through it right now.
 
As I worked on the design of my upcoming server virtualization infrastructure, obviously storage was one of my primary decisions. I already had my primary storage platform in place with various SAN arrays sitting behind a storage virtualization appliance, and I pretty much knew that I would just be expanding on that to support the new virtual machines. Plus, that fit in perfectly with the HA design that was already decided for the VM host machines.  I rolled out two quad proc quad core hosts with 64 GB of RAM each, and then 1 smaller 2 proc quad core machine with some DASD and a decent amount of RAM for a staging box for my P to V migrations.
 
Next I had to decide on the arrays. I pretty much knew that any physical box that was currently RAID 10 would be RAID 10 as a VM. What I had to decide was whether I would just go RAID 10 across the board. Many physical machines were RAID 5, but virtualization adds its own bit of overhead, and there’s never anything wrong with boosting performance. Buying the physical disks really wasn’t cost prohibitive, but it did present a problem in maxing out the physical disk count per controller a little quicker than I liked. Also, my IOPS per dollar looked better going all RAID 10, but again, that was going to mean bringing in additional controllers to support the number of physical disks required. So in the end, I did a server-by-server worksheet, and ended up about half and half. I simply have TIER 1 and TIER 2 disk groups in my storage arrays to support the two groups of servers I decided on. Everything is very manageable and scalable, and I felt like I got the best bang for my buck.
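A sketch of that kind of server-by-server worksheet, with made-up server names, IOPS figures, and threshold; the real criteria will differ for every environment:

    # Server-by-server tiering worksheet: I/O-hungry guests go to the
    # RAID 10 tier, everything else to the shared RAID 5 tier.
    servers = {                      # peak IOPS observed per server (example data)
        "sql-prod-01": 2400,
        "exchange-01": 1800,
        "web-01": 300,
        "fileserver-01": 650,
        "build-01": 150,
    }
    IOPS_THRESHOLD = 1000            # example cut-off between the two tiers

    tier1 = [n for n, iops in servers.items() if iops >= IOPS_THRESHOLD]   # RAID 10
    tier2 = [n for n, iops in servers.items() if iops < IOPS_THRESHOLD]    # RAID 5
    print("TIER 1 (RAID 10):", ", ".join(tier1))
    print("TIER 2 (RAID 5) :", ", ".join(tier2))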
 
We started our P to V migrations, and quickly had our first dozen or so servers up and going. We monitored performance, tested failover, and felt comfortable with what we had put in place. We were ready to proceed aggressively at this point. We knew we couldn’t virtualize everything we had in the data center (about 130 servers), as we needed to stay under 50% utilization on the cluster to maintain high availability, but we wanted to get a good number of physical boxes out of there. Soon we were near 40 VM’s, and all was running great. Disk performance was more than satisfactory, including the VM’s on the large shared RAID 5 arrays.
 
Next came the part that I’m sure many admins of virtualization run in to. Besides virtualizing production boxes, our developers (around 100 or so) had little test servers all over the place. Some were retired production boxes, some were just PC’s running Server, etc. Time to clean house, get all these things virtualized, and give them the benefit of snap shots and so on. It all went great as we piled up the carcasses. But all of a sudden, our utilization was spiking up near 60% on CPU and memory. While no single one of those was a hog, the group of them took a significant amount of disk, processor, and memory. Now I was stuck in the position of not being able to virtualize anything the rest of this year.
 
Then I thought, while it was great to virtualize all those little test machines, why am I chewing up the resources on my HA cluster, and throwing high end SAN disk at them. What to do. Then I thought, now that the bulk of the P to V migrations are done, and we are totally comfortable doing them going forward, that staging server sure does just sit idle now. With all I push for centralized SAN disk, moving those low end test machines on to that staging server and its DASD array, sure looked attractive. And that’s exactly what I did. 8 cores, 24 GB of RAM, and over ½ TB of 15K DASD. A perfect home for those machines, and now my HA cluster has breathing room. I worried a little about losing HA on those, but then again, they’re test (no SLA for test machines!). So, test machines, snapshots, clones etc., are all happily existing on that box. A small risk, and it would be minimal pain if we lose it.
 
So, in the end, my virtual infrastructure ended up a combination of large RAID 5 arrays, pooled up smaller RAID 10 arrays for IO intensive machines, and some good old DASD for Dev/Test. I nearly went into this with the “what’s good for one is good for all” mentality, but am very happy with the mixed environment I ended up with.
 

Best iSCSI SAN?   InfoWorld says it is the EqualLogic PS3800XV.

What is the best iSCSI SAN on the market?  According to InfoWorld the answer is the EqualLogic PS3800XV.

Each year InfoWorld designates products in several IT categories as the "Technology of the Year". For 2008 storage, EqualLogic took the prize for best iSCSI SAN. Reviewer Paul Venezia extolled both the simplicity of installation and the power of the PS3800XV, and InfoWorld's stellar ratings of 9 or 10 in all categories were also very impressive. The device scored 10 of 10 in the "performance" rating, and 9 of 10 points in all the following categories: management, interoperability, scalability, reliability, and value.

The magazine summed up their choice for the best iSCSI SAN of 2008 (though it was reviewed in 2007) as follows: 

EqualLogic’s PS3800XV iSCSI array represents the highly evolved state of iSCSI SAN arrays with panache. Fully featured and blazingly fast, this SAS-based array will easily find a home in infrastructures of any size, although the cost will keep it out of smaller shops. For virtualization implementations, you can’t do better than a PS3800XV.    Source

Joseph Hunkins
Mon Jul 28 11:55pm
This insight was pretty short - OK to combine it with another to avoid me being a post hog.

Virtualization Best Practices

Thanks to the overwhelming benefits, most IT departments have already deployed virtualization across some of their infrastructure, with plans to increase the level of virtualization in the near future. According to the EMA study cited below, the adoption of virtualization is growing at about 25% per year, with over 95% of all enterprises already using some form of virtualization.

The cost benefits of virtualization can be exceptional, but the process also provides for superior availability, disaster recovery, and load balancing. It also facilitates faster software deployment and development, as those processes are disconnected from physical hardware limitations. Virtualization also reduces downtime and the complications associated with hardware failure.

In a paper for BMC software, IT consultancy Enterprise Management Associates took a look at best practices in the virtualization space and identified areas where virtualization brings unique challenges to the enterprise infrastructure.

EMA recommends adoption of the best practices defined in the IT Infrastructure Library (ITIL), an IT community approach to bringing workable, quality standards to IT practices.

The study suggested that despite its many advantages, virtualization does bring a host of complications to the table, relating to the need to manage, allocate, and scale the virtual server environment effectively.

EMA:

Application performance monitoring and availability management become much more complex, as virtualization makes it harder to map virtual resources to physical systems...

It is harder to accurately detect, measure, and plan capacity, because virtual services are so rapidly deployed, consumed, and destroyed....

Cost accounting and financial management becomes much more intricate and time-consuming, as tools must measure rapidly changing virtual environments, and license management and compliance are much harder to measure and enforce ...

Ensuring service levels becomes harder, because the new virtualization layer increases the potential for errors and chokepoints...

The solution, says EMA, is to implement a quality virtualization management software system, consistent with the IT Infrastructure Library guidelines, such as the solutions provided by BMC. EMA suggests BMC is a superior environment and more consistent with best practices and the Infrastructure Library than much of the competition, though we should note that this white paper was written for BMC and thus is probably not to be taken as an unbiased evaluation of all the options - rather, as good basic insight into the issue of best practices with respect to virtualization of the enterprise.

EMA White Paper (pdf)

The Storage Economy

Who are the biggest companies in the storage space, and how are they faring in terms of earnings and relative market capitalization? I'd suggest that we are seeing early signs of commoditization in the storage market as broader standards develop and cheaper iSCSI implementations take over from fiber. This will tend to put pressure on struggling companies as margins shrink and customers migrate to cheaper solutions.

Who are the key companies in the networked storage sector and how are they faring financially?

Sources:  Yahoo Finance, Google Finance, CBS MarketWatch:

Name                                     Symbol   Last Trade   Mkt Cap
Quantum Corporation                      QTM      1.44         298.04M
Overland Storage, Inc.                   OVRL     1.12         14.30M
Xyratex Ltd.                             XRTX     14.55        425.43M
EMC Corporation                          EMC      13.91        28.80B
Ciprico Inc.                             CPCI     0.650        3.32M
Qualstar Corporation                     QBAK     3.00         36.76M
Adaptec, Inc.                            ADPT     3.60         445.04M
Brocade Communications Systems, Inc.     BRCD     6.68         2.47B
Sun Microsystems, Inc.                   JAVA     9.94         7.77B
NetApp Inc.                              NTAP     24.57        8.03B

After the acquisition of EqualLogic, the big kahuna in storage is clearly now Dell, which enjoys a market capitalization of 47 billion and a healthy P/E of 17.14. I'm not clear why Dell did not appear in the chart above - a Google storage market comparison. Dell Profile

EMC, Brocade, and Sun all remain major players in network storage, and all three appear to be in healthy shape with P/Es of 18, 17, and 13 respectively, indicating strong earnings relative to the stock price. EMC Profile Brocade Profile Sun Profile

Quantum, Ciprico, Overland, and Adaptec appear to be struggling financially, at least in the short term; all showed losses in the last quarter. This is not necessarily a sign of great problems, but it indicates that the market is competitive and that without improved performance these companies could be in for troubled times. Quantum Profile Overland Profile Ciprico Profile Adaptec Profile

Qualstar lost a few cents per share last quarter, but at the same time declared a $0.06 dividend, which normally suggests financial health, so generalizing about their prospects is difficult without a lot more information. Qualstar Profile

NetApp and Xyratex both maintain P/Es of about 28, which should be no cause for financial alarm.
Xyratex Profile    NetApp Profile

Granularity: The Hidden Challenge of Storage Management

Many storage challenges focus on correlating the higher-level use of data with the nuts and bolts of storage. These discussions often revolve around the conflict between data management, which demands an ever-smaller unit of management, and storage management, which benefits most from consolidation. Developing storage management capability that is both granular and scalable is one key to the future of storage.

Storage Management: Scaling Up

As I discussed in my last piece, Turning the Page on RAID, the data storage industry has traditionally focused on reducing granularity. Disk capacity has expanded, and RAID technology has multiplied this by combining multiple physical drive mechanisms into a single virtual one. Storage virtualization technologies, from the SAN to the server, have also often been touted primarily as a mechanism to reduce heterogeneity. From a technical perspective, therefore, granularity has been an obstacle to overcome.

The core organizational best practice for storage management is reducing complexity and increasing standardization. Consolidation of storage arrays and file servers is a common goal, as IT seeks to benefit from economies of scale. The goal of both initiatives is the creation of a storage utility or managed storage service.

Although both technological and organizational factors have traditionally driven granularity out of storage, this does not have to be the case. Virtual pools of storage are ideal for providing storage on demand, as disk-focused RAID groups give way to more flexible sub-disk storage arrangements. And an operational focus on standardized storage service offerings has the potential to enable scalable management of these smaller units.

Filing Service

File-based protocols would seem to have more potential for granular storage management, but they have been undermined by the hierarchical nature of modern file storage. Whether the connection to a file server uses NFS, CIFS, or AFP, the key unit of management is actually the shared directory, not the file. All files in a share such as \\firefly\backups would be located on the same server and would be managed as a unit.

NAS virtualization can change this somewhat, as can more specialized NAS servers. Although Microsoft DFS enables consolidation and virtualization of NAS shares, it does not allow subdivision of shares below the directory level - all files in a directory must be placed on the same server. Tricks like stubbing and links allow for some movement, but they do not solve the core issue. Specialized virtual NAS devices from F5 Acopia, NetApp, BlueArc, and ONStor can move files individually, providing as fully virtualized a storage environment as any block-focused enterprise array.

But even an ideal virtualized file server lacks the kind of granularity demanded by users. They care about data, not files, and most applications consolidate their data storage into a few files. Consider a database, for example, where users want each record treated uniquely but storage devices see just a few much larger files.  As I pointed out in my piece, We Need a Storage Revolution, the ideal storage platform for data would be one in which each individual record or object included custom metadata and was managed independently. This would truly be a massive change, however, and it is not clear that all applications will follow the object storage model of Google and Amazon.
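
To illustrate the object model described above, here is a rough sketch (my own toy example, not Google's or Amazon's API) of a store in which every object carries its own user-defined metadata, so management operations can target individual records:

    # Toy object store: each object carries custom metadata, so policy
    # (retention, tiering, protection) can be applied per record rather than
    # per file or per LUN. Field names are illustrative only.
    import time
    import uuid

    class ObjectStore:
        def __init__(self):
            self._objects = {}

        def put(self, data: bytes, **metadata):
            oid = str(uuid.uuid4())
            self._objects[oid] = {
                "data": data,
                "metadata": {"created": time.time(), **metadata},
            }
            return oid

        def select(self, **criteria):
            # Management operations can target objects by their metadata.
            return [oid for oid, obj in self._objects.items()
                    if all(obj["metadata"].get(k) == v for k, v in criteria.items())]

    store = ObjectStore()
    store.put(b"customer record", retention="7y", tier="archive", owner="billing")
    print(store.select(tier="archive"))  # manage each record independently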

Small is Beautiful

Barring a revolution in data management, our best hope is to allow greater granularity in storage management. As mentioned above, virtualization technology has the potential to enable management and protection of any unit of storage, right down to the individual block or record. But the reality of storage virtualization has not matched its promise.

What is needed is greater integration. Each layer of virtualization (file system, volume manager, hypervisor, network, array, and RAID) hides necessary details from the layers below it. Consider the case of a virtual server snapshot: the application and filesystem must be quiesced before a snapshot can be taken at the storage level, but the storage array has no intrinsic information about how its capacity is used. A given LUN might contain dozens of servers on a shared VMFS volume, so all of them must be snapped together.

Integration can be enabled by sharing more information through APIs. VMware recently announced that Update 2 of Virtual Infrastructure will enable Microsoft Volume Shadow Copy Service (VSS) integration for shared storage. So a VMFS snapshot can call the operating system and even applications (Windows Server 2003 only, for now) to prepare the data. Similarly, VSS can communicate directly with supported iSCSI and Fibre Channel arrays, calling a snapshot at the right moment.
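
To see why this integration matters, here is a purely conceptual sketch of the ordering it enables; the function names are placeholders, not real VMware or VSS APIs:

    # Conceptual sketch only - placeholder functions, not vendor APIs. It shows
    # the ordering that API integration makes possible: quiesce at the
    # application/filesystem layer, snap at the array, then resume.

    def quiesce_guest(vm):
        """Ask the guest (e.g. via a VSS requestor) to flush and freeze writers."""
        print(f"quiescing applications and filesystem in {vm}")

    def array_snapshot(lun):
        """Trigger a hardware snapshot of the LUN backing the datastore."""
        print(f"snapshotting LUN {lun}")

    def resume_guest(vm):
        print(f"thawing {vm}")

    def consistent_snapshot(vms_on_lun, lun):
        # Because the array sees only the LUN, every VM sharing it must be
        # quiesced together - the granularity problem described above.
        for vm in vms_on_lun:
            quiesce_guest(vm)
        try:
            array_snapshot(lun)
        finally:
            for vm in vms_on_lun:
                resume_guest(vm)

    consistent_snapshot(["exchange-01", "sql-02", "file-03"], lun="vmfs_datastore_7")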

As virtualization technology matures, expect this type of integration to improve. More and more APIs will be exposed, allowing communication up and down the stack to break through the information barrier. Imagine a future where a standard API like VSS can pass a message through VMware, Xen, and Hyper-V to the underlying storage array to initiate a snap. I predict that this kind of integration-enabled granularity is not too far off.

Do you need a storage network consultant?

Storage area networking is a rapidly evolving and complex field, and unless your company has staff with a fair level of experience and expertise in SAN applications, you may want to consider using a consultant who is well versed in the issues surrounding SAN deployments.

When budgeting for your new storage system or changes to an existing SAN, you'll want to factor in not only hardware costs but also the likely consulting costs to fully deploy the new systems.   Even for a fully in-house solution there will likely be costs outside of just the new hardware, and these will vary depending on vendors, solutions, and sales negotiating ability.

One new option is Dell's SAN consulting services, a new offering from the company that now provides more iSCSI installs than any other vendor thanks to its acquisition of SAN market leader EqualLogic. A Dell consultant will obviously steer you toward Dell|EqualLogic products, but unless you have a highly specialized application or prefer other hardware, you are probably considering Dell solutions already. You'll want to weigh the likely savings of a bundled consulting-and-hardware package against independent consulting, which offers a wider choice of hardware vendors, to see whether the bundle makes up for any differences in hardware costs.
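
A quick back-of-the-envelope comparison can help frame that decision; every number below is invented for illustration, so substitute your own quotes:

    # Hypothetical totals only - plug in real quotes. The point is to compare
    # the combined hardware + consulting cost of a bundled vendor package
    # against an independent consultant with a wider choice of hardware.

    bundled = {"hardware": 85_000, "consulting": 15_000}      # hypothetical vendor bundle
    independent = {"hardware": 78_000, "consulting": 30_000}  # hypothetical mixed-vendor quote

    total_bundled = sum(bundled.values())
    total_independent = sum(independent.values())

    print(f"bundled package:        ${total_bundled:,}")
    print(f"independent consultant: ${total_independent:,}")
    print(f"difference:             ${total_independent - total_bundled:,}")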

Over at the Ars Technica blog, an experienced SAN IT manager recently suggested:

... get someone that can safely guide you through this minefield and get you a solution that you can understand. We buy all our hardware under the Dell Gold Support contract. It's a life-saver to be able to call someone that can understand all those systems and get you back on track ASAP.

More important than the initial cost and choice of hardware is making sure your consultant understands your storage needs very clearly and can deliver a solution consistent with those needs, especially if you need to integrate legacy storage systems - Fibre Channel based, for example - with newer deployments, which are often iSCSI. Before choosing a consultant, take some time to inventory your existing network and your current and future storage needs, and prepare a brief interview for prospective consultants to determine whether they are highly proficient with solutions that address all of your storage requirements.

So, do you need a network storage consultant? If you are not sure, the answer is probably yes, and choosing one wisely should be a top priority.

How Green is your SAN?

Optimizing IT power consumption has become a critical cost factor as well as an environmental concern. The good news is that key storage innovations such as deduplication, virtualization, and solid state storage are keeping both the costs and the environmental impact far more manageable than they would otherwise be.

The Storage Forum has summarized an excellent IDC report, which notes:

IT spending required to cool and power spinning disk drives will reach $1.8 billion by the end of this year and over $2 billion in 2009

David Reinsel of IDC:

“As companies continue to add storage capacity at an aggregate rate of over 50 percent per year, the number of spinning disks continues to be a larger part of the overall power and cooling costs within a datacenter ... Vendors must do more to promote and enable a well-rounded green storage strategy that includes datacenter redesign, data consolidation, and data reduction.”

Reinsel also noted in the report that storage needs are growing faster than hard drive capacity, meaning we'll need more and more drives, or continued SAN innovation, to keep up:

“Data center storage requirements are growing 50% to 55% per year, but hard drive capacities are only growing 30% to 35% per year. In order to keep up with this growth, you either have to put in more and more drives, or you look for alternatives to stave off buying new drives,”

IDC put the number of disk drives in storage arrays around the world at 49 million and growing fast, with power consumption as the key environmental challenge in the sector.
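
Putting those two growth rates together gives a rough sense of how fast the drive count could climb; the sketch below uses the midpoints of IDC's ranges and its 49 million drive estimate as starting assumptions:

    # Rough projection using midpoints of the growth rates quoted above:
    # demand growing ~52.5%/yr vs. per-drive capacity growing ~32.5%/yr.
    # The 49 million drive figure is IDC's estimate cited above.

    drives = 49_000_000
    demand_growth = 1.525      # midpoint of 50-55% per year
    capacity_growth = 1.325    # midpoint of 30-35% per year

    for year in range(1, 4):
        # Drive count must grow by the ratio of demand growth to capacity growth.
        drives *= demand_growth / capacity_growth
        print(f"year {year}: roughly {drives / 1e6:.0f} million drives")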

So the bad news is that your power needs and power costs are going to continue to rise, but the good news is that keeping up with the latest SAN innovations in deduplication, virtualization, and solid state storage will help offset those costs and help your enterprise ... grow green.