28 Jul 2008, 11:59PM PT
12 Jun 2008, 11:02AM PT
Closed: 28 Jul 2008, 11:59PM PT
Qualifying Insights Split a $12,000 Bonus.
In April, we began a conversation around The Future of Storage with support from Dell. Top insights from the community were posted to http://thefutureofstorage.com/, with some making it to Ars Technica and into ad promotions. We're looking to continue the conversation with good insights into any of the following topics: virtualization, iSCSI, FCoE, deduplication, thin provisioning, encryption, etc.
The focus of this conversation is centered on the SAN market and its current and future directions, including the aforementioned areas. To get an idea of the type of insights we're looking for, just look over http://thefutureofstorage.com/ to see some earlier posts. Please note that while that site is sponsored by Dell, the topic for discussion is not vendor specific.
In continuing this conversation, feel free to write about any topic in this space that you think would fit on the site, such as where you think the SAN market is heading. Alternatively, feel free to write thoughtful insights that address some of the points raised in earlier posts on The Future of Storage site.
For the purpose of encouraging discussion, here are a few topics that might be interesting to explore:
If you have personal experience with any of these, or even just want to walk through your thought process in evaluating options, those tend to make for very interesting posts.
Also, as we look to continue this ongoing conversation, please let us know what other topics you think would be good to discuss in future months.
The insights selected to be on this site will each get a "share" of the bonus pool below. You can write multiple insights to get multiple shares.
PLEASE NOTE: We are looking for unique insights that delve into a single subject concerning this topic. Don't try to cover too many things in a single insight submission. Again, look over the existing Future of Storage site to get an idea of what's appropriate.
Like last time, please get your insights in early. We will be closing the case once we feel there are enough insights (somewhere between 10 and 20 insights).
17 Insights
Continuing Insights Into The Rapidly Evolving Storage Area Network Market by Devin Moore
Friday, June 13th, 2008 @ 7:34AM
Continuing Insights Into The Rapidly Evolving Storage Area Network Market by Joerg Hallbauer
Friday, June 13th, 2008 @ 4:01PM
Virtualization and Storage
I posted a rather lengthy entry about this topic on my blog (http://joergsstorageblog.blogspot.com) recently, but I thought that I would summarize it here as well.
Virtualization Changes Everything
I keep hearing this, and to some extent it's true. On the other hand, if you have been around storage, you also see a lot of similarities to the issues that we have always had to deal with as storage admins/managers. The main difference with VMware is that you might find it even more important to address the following three issues:
So, the question is, what can we do about these things? Well, some new technology that's just coming into the market will help: things like thin provisioning, block storage virtualization, and new SRM tools that can do correlation between servers, networks, and storage. Another thing that we are starting to see is virtual HBAs, which will help on the reporting end of things as well.
So, as I see it, server virtualization (VMware) will drive storage virtualization. I would even go so far as to say that you won't be able to get the full promise of server virtualization unless you implement some storage virtualization. If I'm right about that, then we should see a significant uptick in the sales of storage virtualization very soon...
Continuing Insights Into The Rapidly Evolving Storage Area Network Market by Michael Kramer
Friday, July 4th, 2008 @ 10:44PM
Our industry has made long strides in recent years. There is no need for it to make a drastic change in its designs or how it does business, like some of our troubled domestic automakers. Companies are buying all of the various offerings from the big storage vendors, and even some of the small ones. It's a growing market, and "if it ain't broke, don't fix it".
At least that's the attitude most companies will have when they're asked whether they will implement a new-fangled technology for their existing SANs. Most firms have their critical systems on a SAN, and unless there is a real need to, will not forklift-upgrade their existing SANs. That leaves the new technologies to attract new clients, such as small businesses and small-to-medium enterprises (SMEs). iSCSI is catching their eye now, and sure, maybe FCoE will catch it later. The key is still having devices that support more than one protocol and interface, such as those that use both FC and iSCSI, in order to attract new customers while retaining those with existing investments. Later, perhaps, it will shift to FCoE and iSCSI, possibly InfiniBand. As a side note, I see the most bandwidth potential with fiber and InfiniBand rather than copper. Plus, the cost of fiber continues to go down while copper costs are going up. Fiber is smaller, but then again so was "Betamax".
Evolution vs. Revolution
No one should expect the storage industry to make a drastic change unless a start-up company comes along with revolutionary technology that really takes off and others take notice. I believe that is unlikely, since storage networks also rely on network technology, which isn't going to change overnight just for storage. After all, we're moving back towards commonly used network technologies like IP and Ethernet for our storage networks!
I also feel it's not accurate to think the storage industry is not improving daily. We are not merely recreating what we started with; we are improving its reliability, performance, resilience, flexibility, and utilization. We are not mimicking evolution; there is no such thing. To ignore all of this progress in hopes of a revolutionary shift seems to me about as useful as questioning why we still use processors. Granted, quantum computing would be great, but how do you write code for something you don't understand, let alone something that changes once it's observed? Do you want your storage to be defragmented only when you observe it?
Cloud computing will not take off in the financial industry or many other conservative industries. They simply will not rely on it for critical data or services. If your Internet connection is gone, it's ALL GONE. They just won't tolerate that. I've seen the Internet go down in all three of the biggest cities for an extended period, and so did a lot of people who will never be convinced to invest too much in "cloud computing".
Deduplication is not simply compression; it is inherently different. It does much more than just looking for repeated strings or bits and replacing them with a shortcut for "1000 spaces", for example. It doesn't end with each file or with each archive or backup. Deduplication looks at every single block of data and determines whether it has ever seen that block before, and if so, replaces it with a reference to the original block. After that, you could even compress the deduped data.
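The block-and-reference mechanism described above can be sketched in a few lines. This is only a toy illustration (fixed-size 4 KB blocks, SHA-256 hashes, an in-memory dictionary standing in for the block store), not any vendor's actual implementation:

```python
import hashlib

def dedupe(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks; store each unique block once,
    keyed by its hash, and represent the stream as a list of references."""
    store = {}   # hash -> the single stored copy of that block
    refs = []    # the stream, expressed as references into the store
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:      # first time this block has been seen
            store[digest] = block
        refs.append(digest)          # duplicates become references only
    return store, refs

def rehydrate(store, refs):
    """Rebuild the original stream from references."""
    return b"".join(store[d] for d in refs)

# Data with repeated blocks: four blocks referenced, only two stored.
data = b"A" * 8192 + b"B" * 4096 + b"A" * 4096
store, refs = dedupe(data)
assert rehydrate(store, refs) == data
print(len(refs), "blocks referenced,", len(store), "blocks stored")
```

Real products add considerations this sketch ignores, such as variable-length chunking and protection against hash collisions, but the store-once-and-reference idea is the same.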
As far as synchronization of files at home goes, a cloud with a silver lining is a NAS with sync features enabled. Have everything do a two-way sync with the giant NAS. If simplicity is desired, having one PC per person and a one-way upload from a "control device" such as an iPod would be the way to go.
We Are The Industry
And for those of you who haven't noticed -- we ARE the industry. The big guys are asking us experts what we see coming around the corner, and they want to invest in developing and marketing that product. No company wants to make the next "Betamax", despite it being a better product than the others. Sites like this are catching vendors' attention and will change their minds. Remember: first come expert opinions and recommendations, then vendor investment and production, and last comes recognition from customers.
Continuing Insights Into The Rapidly Evolving Storage Area Network Market by Greg Ferro
Wednesday, July 9th, 2008 @ 6:49AM
At the Brocade Tech Day briefing to analysts, Brocade presented a set of slides. One of them (Page 62) said (and I quote):
"The FC incumbent has a huge advantage in being the FCoE vendor of choice. Like today's FC networks, we do not expect mixed vendor FCoE-FC networks."
Storage is a serious business, and everyone is very concerned about compatibility and interoperability. So much so that they will buy more from the vendor they already have and ignore interoperability. Brocade knows that and is planning for it. Now, to me, that sounds like an opportunity to charge more, and make more profit, for very little effort.
Those storage folks must love being bunnies.
A deeply cynical observer would conclude that Brocade feels that FCoE locks customers in to their product and can't wait to get their hands on the money. In fact, they believe it enough to suggest it to their investors.
How is this going to work?
The "interoperability lock-in" means they can charge extra. Because you aren't going to mix your "standardised" FCoE and FC vendors, are you? Why? Because support would be a nightmare! It's not a truly open market; competition is low, so Brocade can have a go at gouging you.
Bingo: Brocade's profit is assured, good for bonuses and the share price!
What's the bet that Brocade makes sure of this by promoting and marketing compatibility and interoperability? (Cisco is already making a lot of loud noises about their upcoming interop sessions.) Let me also ask how many IT people are really going to deploy multiple vendors? Anyone, anyone? Bueller?
Give it some thought, people: when you take FCoE onboard, you just might be getting stitched up by some clever marketing.
You can download the Brocade Tech Day briefing http://media.corporate-ir.net/media_files/irol/90/90440/TechDayJune2008.pdf and read it for yourself. It is quite educational.
Continuing Insights Into The Rapidly Evolving Storage Area Network Market by Greg Ferro
Tuesday, July 15th, 2008 @ 2:29PM
Continuing Insights Into The Rapidly Evolving Storage Area Network Market by Greg Ferro
Tuesday, July 15th, 2008 @ 2:32PM
Joerg Hallbauer Wed Jul 16 2:26pm
I agree that FCoE is a great idea/technology; however, I'm concerned about some of the organizational issues it will create. Specifically, what you call the "integration between the Network and the Server Teams". First, in most large organizations, storage is handled by a separate Storage Team, not the Server Team. Right now, the Storage Team has control over its own network (i.e., Fibre Channel). FCoE's use of Ethernet will, I believe, cause a power struggle between the Network Team, who will want to control all things Ethernet, and the Storage Team. So I'm wondering if you have thought about that and how it might be addressed. One obvious answer would be to build a separate Ethernet network just for storage, controlled by the Storage Team. But that would somewhat blunt the advantages of going to FCoE in the first place, wouldn't it? The other answer would be to give the Storage Team some administrative access to the Ethernet switches in a way that's limited to just FCoE activities. This is going to require that Cisco, Brocade, etc. develop some enhancements to their management interfaces. That leaves the issue of physical connectivity: who will be responsible for connecting the "HBA"s to the Ethernet network? Again, I suspect there will be a power struggle here as well. So I'm going to guess that a lot of large IT shops that currently run Fibre Channel networks are going to decide that they really don't want to address these issues, and will simply choose not to adopt FCoE. At least not very soon. Even if it is a better mousetrap.
Greg Ferro Thu Jul 17 12:19am
You are bang on the money. The big shops will all have their storage people circling the wagons around their sacred cows and howling about 'uptime, performance, blah blah...' and other hot air. But the day of truth will come when the CIO says, "you are buying Ethernet switches from Brocade... for how much?" Then he is going to turn to the network guy, and the networks will merge. The fun part is that it's going to be brutal on the storage folks. They act just like the telephony people did before they got eviscerated by IP telephony... and I love stepping over their graves as I walk through the door every day.
Greg Ferro Thu Jul 17 12:23am
Oh yeah, I forgot to mention that in the long term a storage unit will look like just another server to the network. It will have a different traffic profile and specific design considerations, but otherwise be no different from a database server or an Exchange server.
Continuing Insights Into The Rapidly Evolving Storage Area Network Market by Michael Kramer
Sunday, July 20th, 2008 @ 7:36PM
Don't misunderstand me, I love virtualization. With any technology, we weigh the benefits of it against the cost and viability of implementation. Sometimes we focus too much on virtualization or the protocols and the upper layers of proposed ideas. We might completely overlook the lower layer benefits that technologies such as FCoE could provide, such as the ease of wiring and running patch panels. This gives data centers massive flexibility. It would eliminate the need to worry about what type of wiring will go where and having to "homerun" every type of new cable that comes out.
As I've stressed before, I feel this convergence should be focused on the physical medium with the most potential, like fiber cable, rather than twisted-pair copper. Regardless, we must not forget the importance of convenience. Several markets, such as cell phones and even websites, forgo performance in favor of convenience. I would like to think that we will continue to stave off that rationale in the IT storage industry, but it does pull a lot of weight. Convenience is good, but an even bigger factor is the reality that consumers want all of their information right away. We all need to be a bit more curious, look at the big picture, and question the status quo.
I could see cabling convenience being huge in the data center. I was recently rerouting core switch cabling in our data center, fighting to protect the precious fiber cable while moving the huge, bulky, and ubiquitous UTP copper. Though I favor the fiber tremendously, I was looking at our patch panels and contemplating the rapid deployment of our FC SAN. I started to fumble when I thought about how nice it would be to have fiber patch panels, ideally used for everything, but then I thought about what a risk that was with everything favoring copper these days. Sure, the cost of copper is rising and fiber is dropping. Indeed, fiber has much more performance potential, can be pulled at higher tension than copper, and is smaller; however, we cannot overlook the convenience factor, which has caused us to restrict fiber from our collective diets.
Why is it that copper seems to be winning the race? Is it because more people know how to terminate UTP than polish fiber ends? Will they be just as good with STP when they have a shield to deal with and even thicker cable? When will copper become cost-prohibitive and reach the end of its roadmap?
Often I feel the IT industry and the big-vendors out there are focused too much on the road right in front of them without looking at the map first.
Continuing Insights Into The Rapidly Evolving Storage Area Network Market by Stephen Foskett
Wednesday, July 23rd, 2008 @ 9:26PM
Turning the Page on RAID
It has been the core technology behind the storage industry since day one, but the sun is setting on traditional RAID technology. After two decades of refinement and fragmentation, we are abandoning the core concepts of disk-centric data protection as storage and servers go virtual. Next-generation storage products will feature refined and integrated capabilities based on pools of storage rather than combinations of disk drives, and we will all benefit from improved reliability and performance.
RAID Classic
Early storage systems were revolutionary, in physically removing storage from the CPU, in enabling sharing of storage between multiple CPUs, and especially in virtualizing disk drives using RAID. When Patterson, Gibson, and Katz proposed the creation of a redundant array of inexpensive disks (RAID) in 1987, they specified five numbered "levels". Each level had its own features and benefits, but all centered on the idea that a static set of disk drives would be grouped together and presented to higher-level systems as a single drive. Storage devices, as a rule, mapped host data back to these integral disk sets, sometimes sharing a single RAID group among multiple "LUNs", but never spreading data more broadly. Storage has remained stuck with small sets of drives ever since.
The core insight of the 1980s remains true: More spindles means better performance. Although additional overhead dulls the impact somewhat, the benefit of spreading data across multiple drives can be tremendous. A typical RAID set offers much better performance than the drives alone, and can handle a mechanical failure as a bonus.
Cracks are appearing in the RAID veneer, though. Double drive failures are much more common than one would expect, leading to the development of hot spare drives and dual-parity RAID-6. If four drives perform well, then forty drives perform much better, leading to the common practice of "stacking" one RAID set on others. Caches and specialized processors were introduced to overcome the performance issues related to parity calculation.
But traditional RAID cannot overcome today's most critical storage issues. As drives have become larger, the tiny chance of an unrecoverable media error compounds, becoming a certainty. Even dual-parity will not be able to guarantee data protection on the massive disks predicted for the near future – statistics cannot be denied. The latest disks contain so much data, without commensurate improvements in throughput, that rebuild times have skyrocketed, resulting in hours or days of reduced data protection.
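The "statistics cannot be denied" point is easy to make concrete. As a back-of-the-envelope sketch (assuming the commonly quoted unrecoverable-read-error rate of one per 10^14 bits for desktop-class drives; the drive sizes are illustrative), the chance of a RAID-5 rebuild hitting at least one URE grows quickly with capacity:

```python
import math

def rebuild_failure_prob(drive_tb: float, surviving_drives: int,
                         ure_per_bit: float = 1e-14) -> float:
    """Probability of at least one unrecoverable read error (URE) while
    reading every surviving drive in full during a RAID-5 rebuild."""
    bits_read = surviving_drives * drive_tb * 1e12 * 8  # decimal TB -> bits
    # P(at least one URE) = 1 - (1 - p)^bits, well approximated by
    # 1 - exp(-p * bits) for tiny p and huge bit counts.
    return 1 - math.exp(-ure_per_bit * bits_read)

# Six 1 TB drives in RAID-5: a rebuild must read the five survivors.
print(f"{rebuild_failure_prob(1.0, 5):.0%}")  # roughly a one-in-three chance
```

Double the drive size and the exposure doubles again, which is exactly why dual parity became necessary and why it too runs out of road as capacities keep climbing.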
RAID is also ill-suited to the demands of virtualized systems, where predictable I/O patterns become fragmented. It cannot provide tiered storage or account for changing requirements over time. It cannot take advantage of the latest high-performance solid state storage technology. It cannot be used in cloud architectures, with massive numbers of small devices clustered together. It interferes with power-saving spin-down ideas. Most RAID implementations cannot even grow or shrink with the addition or removal of a disk. In short, traditional RAID cannot do what we now need storage to do.
RAID is Dead
Although most vendors still use the name, nearly every one has abandoned much of the classic RAID technology. EMC's Symmetrix pioneered the idea of sub-disk RAID, pairing just a portion of each disk with others to reduce the impact of "hot spots". HP's AutoRAID added the ability to dynamically move data from one RAID type to another to balance performance. And NetApp paired disk management so closely with their filesystem that they were able to use RAID-4 and the flexibility it brings.
Today, a new generation of devices has even evolved beyond RAID's concept of coherent disk sets. Compellent, Dell EqualLogic, and others focus on blocks of data, moving portions of a LUN between RAID sets, disk drive types, and even inner or outer tracks based on access patterns. With these devices, a single LUN could encompass data on every drive in the storage array. And the latest clustered arrays can spread data across multiple storage nodes to scale performance and protection.
These innovative devices point the way to a future in which virtual storage is serviced and protected very differently than in the past. Perhaps software like Sun's ZFS serves to illustrate this future best: It unifies storage as a single pool, intelligently protecting it and presenting flexible storage volumes to the operating system. Although Sun calls its data protection scheme "RAID-Z", it has little in common with its namesake. Like NetApp's WAFL, the copy-on-write ZFS filesystem is totally integrated with the layout of data on disk, allowing mobility and efficient use of storage. A single pool can include striping, single- or dual-parity, and mirroring, and disks can be added as needed. Importantly, ZFS also checksums all reads, detecting disk errors.
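The checksum-on-read idea is worth sketching, since it is one of the clearest breaks from classic RAID, which trusts whatever the disk returns. This is a toy model (the class and the in-memory "disk" are invented for illustration; ZFS actually keeps checksums in block pointers and can self-heal from a redundant copy):

```python
import hashlib

class ChecksummedPool:
    """Toy end-to-end checksumming: each block's checksum lives with its
    pointer, not with the block, so silent corruption on the 'disk' is
    caught at read time rather than returned to the application."""
    def __init__(self):
        self.disk = {}   # block id -> raw bytes (may rot silently)
        self.ptrs = {}   # block id -> checksum recorded at write time

    def write(self, bid: str, data: bytes) -> None:
        self.disk[bid] = data
        self.ptrs[bid] = hashlib.sha256(data).digest()

    def read(self, bid: str) -> bytes:
        data = self.disk[bid]
        if hashlib.sha256(data).digest() != self.ptrs[bid]:
            raise IOError(f"checksum mismatch on block {bid}")
        return data

pool = ChecksummedPool()
pool.write("b0", b"important data")
assert pool.read("b0") == b"important data"
pool.disk["b0"] = b"important dat\x00"   # simulate silent bit rot
try:
    pool.read("b0")
except IOError as e:
    print("detected:", e)
```

A RAID controller doing parity alone would happily return the rotted block; keeping the checksum apart from the data is what makes the error detectable.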
Long Live RAID
The post-RAID future will see these concepts spread across all enterprise storage devices. Disks will be pooled rather than segregated into RAID sets. Tight integration between layout and data protection will allow for much greater flexibility, integrating tiering and differing data protection strategies in a unified whole. Storage virtualization will allow mobility of data within these future storage arrays, and clustering will enable massive scalability.
Two things will likely remain to remind us of Patterson, Gibson, and Katz, however. First, the core principle that multiple drives working as one yields dividends in terms of performance and data protection. And second, that whatever we use should be called RAID, even though the definition of that term has changed beyond recognition in the last two decades.
Continuing Insights Into The Rapidly Evolving Storage Area Network Market by Michael Kramer
Thursday, July 24th, 2008 @ 8:03AM
Virtualization is a big buzzword right now, and for good reason. The more we virtualize, the more flexibility we build into our infrastructure. What we are really talking about is providing a layer of abstraction between our resources. The entire IT industry is moving in this direction, and it's fantastic. Swapping server hardware has never been easier. Network traffic can be grouped and segmented without extra hardware. Storage space can be moved, increased, and even migrated to higher-performance arrays without downtime. These added abstractions need to continue, and I see every aspect of the data center benefiting from virtualization.
Anything is possible. We can abstract the power from the hardware and circuits, which might allow us to redistribute power as needed to various parts of the data center and ease the use of products with heterogeneous power requirements, including the power-hungry 10GbE coming our way. We would only trip virtual circuits. Virtual power, virtual cooling, virtual phones, virtually anything is possible. Virtual IT staff? Well, that's called outsourcing, and it's about as useful as virtual money, right? In the world of IT storage, we should focus on providing a layer of abstraction between the file system and the operating system. We also need to improve the underlying resilience technology. With SSDs being fragmented by nature and, most of the time, slower on writes than spinning disk, a RAID 1+0 of spinning disks looks pretty good, especially since it's cheaper per gigabyte too. A RAID 1+0 can lose up to half of its disks (one from each mirrored pair) and still run without issue.
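One caveat to the "half of its disks" figure: RAID 1+0 survives only if the failures land on different mirrored pairs. A small sketch makes the distinction explicit (the disk numbering and pairing are hypothetical):

```python
from itertools import combinations

# Four disks arranged as two mirrored pairs, striped together (RAID 1+0).
pairs = [(0, 1), (2, 3)]

def raid10_survives(failed) -> bool:
    """The array survives as long as no mirror pair loses BOTH members."""
    return all(not set(pair) <= set(failed) for pair in pairs)

# Losing one disk from each pair (half the disks) is fine...
assert raid10_survives([0, 2])
assert raid10_survives([1, 3])
# ...but losing the wrong half (a whole pair) kills the array.
assert not raid10_survives([0, 1])

# Of all possible two-disk failures, count how many are survivable.
two_disk = list(combinations(range(4), 2))
ok = sum(raid10_survives(f) for f in two_disk)
print(ok, "of", len(two_disk), "two-disk failures survivable")
```

So "can lose half its disks" means "can lose up to half, if luck cooperates"; on this four-disk layout, four of the six possible two-disk failures are survivable.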
I understand there may not be a perceived need for some of these technologies yet. There still are several opportunities for more abstraction. Too much abstraction can make management cumbersome. The key is abstraction with less complication. Let's take a look at the needs of SMB/SMEs, the Small-to-Medium Enterprises. Many of these companies have enterprise-class needs for storage, network security, and messaging solutions. These needs don't necessarily mean they have the money or space to fill these roles and house this technology. How do you get network monitoring around the clock, and a warehouse full of information in a small office? Bring in the appliances and the outsource companies.
Appliances serve many purposes, and do a great job fulfilling various needs. The problem with appliances is that the vendors that make them seem to be removing layers of abstraction instead of adding them. Specifically, I'm thinking of deduplication appliances. There are too few deduplication appliances available for SMEs that do not come with their own proprietary, locally attached storage. EMC bought Avamar, and rather than leaving it available as software or offering an appliance gateway, they now force the consumer to buy it with locally attached storage. There is a virtual edition, but it comes with many restrictions, and Avamar in general requires giving up on existing backup software investments and using the Avamar interface. Data Domain does have a gateway product, but it is their flagship model, and its price point puts it out of reach of the SME market. Maybe an abstract appliance is an oxymoron, but I know I'm still looking for one to buy for our datacenter.
Proprietary file system you say? Profit margin prohibitive? Give us abstraction, and we'll reward you with our business.
Continuing Insights Into The Rapidly Evolving Storage Area Network Market by Joerg Hallbauer
Friday, July 25th, 2008 @ 11:04AM
Data continues to grow at a frightening rate. According to an IDC study, there were about 281 exabytes of data stored on disk worldwide in 2007. This data is growing at a CAGR of about 70%. At that rate, in three years there will be about 1,400 exabytes of data sitting on disk.
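Those numbers hold up as simple compounding. A quick sketch of the arithmetic:

```python
def projected_exabytes(start_eb: float, cagr: float, years: int) -> float:
    """Compound an installed base forward at a constant annual growth rate."""
    return start_eb * (1 + cagr) ** years

# IDC's 2007 figure, grown at a ~70% CAGR for three years:
print(f"{projected_exabytes(281, 0.70, 3):.0f} EB")  # ~1,380 EB, i.e. roughly 1,400
```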
Now, a lot of this data is sitting on people's desktops, laptops, iPods, phones, digital cameras, etc. right now. However, things like cloud storage will change all of that. Heck, we are seeing some of the change right now with social networking sites, photo sharing sites, etc. IDC says that for 85% of that data, a corporate entity will be responsible for its protection and security.
So, in the future, we are going to have to store a lot more data than we do today, a LOT more data. How are we going to do that? Just the physical aspect of getting exabytes of data on the floor is going to be a challenge. I don't even want to talk about protecting and managing that much data. But for now, I want to talk about the density of the hard disk drive since that's going to soon become the physical limit of what we can store on the floor of our data centers.
Enterprise disk drive capacity followed a Moore's Law-like curve for quite a few years, doubling every 18 months. However, this growth appears to have been slowing down over the last 5 years, and it is now taking approximately 29-30 months to double the capacity of an enterprise disk drive.
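Doubling time and annual growth rate are two views of the same number, so the slowdown is easy to quantify. A small sketch of the conversion:

```python
def annual_growth_from_doubling(months: float) -> float:
    """Annual capacity growth rate implied by a given doubling time."""
    return 2 ** (12 / months) - 1

print(f"{annual_growth_from_doubling(18):.0%}")  # doubling every 18 months
print(f"{annual_growth_from_doubling(30):.0%}")  # doubling every ~30 months
```

Stretching the doubling time from 18 to roughly 30 months is the difference between about 59% and about 32% capacity growth per year.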
This shows that we are nearing the maximum areal density (max capacity) of current disk drive technology called the superparamagnetic limit. Areal density as it refers to disk drives is measured by the number of bits per inch (bpi) times the number of tracks per inch (tpi).
The areal density of disk storage devices has increased dramatically since IBM introduced the RAMAC in 1956. RAMAC had an areal density of two thousand bits per square inch, while current-day disks have reached 100 billion bits (100 gigabits per square inch). Perpendicular recording is expected to increase storage capacity even more over time, but we do appear to be approaching the limit.
As the magnetic bits get smaller, at some point they no longer hold their magnetic orientation. Thermal fluctuations reduce the signal strength and render the bits unstable. However, this ultimate areal density keeps changing as researchers find new techniques for recording and sensing the bit. Years ago the limit was thought to be 20 gigabits per square inch. Today, the limit is several hundred gigabits per square inch, and more than a terabit per square inch is expected soon. But that's about all you can get out of the technology.
Increasing the density of hard disk drives has a side benefit: it makes the drives faster as well. This is quite logical when you think about it. The closer together things are on the drive, the more data passes under a read/write head in the same period of time, making the drive faster.
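There is a catch worth quantifying, though. Areal density has two components (bits per inch times tracks per inch), and only the bits-per-inch half makes data pass under the head faster. Under that rough model, assuming density gains split evenly between the two:

```python
import math

def relative_throughput(density_gain: float) -> float:
    """If an areal density gain is spread equally over bits-per-inch and
    tracks-per-inch, only the bpi half speeds up the data under the head,
    so sustained transfer rate grows with the square root of density."""
    return math.sqrt(density_gain)

# A 100x density gain yields only ~10x sustained throughput:
print(relative_throughput(100))
```

Capacity outrunning speed in this way is exactly why full-drive operations, rebuilds and backups included, keep taking longer even as drives get "faster".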
So, if the disk drive is not going to be able to continue to provide the kinds of capacities we are going to need in the future, what will? Well, there are a number of things being looked at by a lot of folks who are a lot smarter than me! In the short term, things like SSDs look promising once we work out some of the kinks, specifically the write speed issue. Until we can get that up, I'm not sure how much general acceptance SSD technology is going to get. Price, I am convinced, will take care of itself as economies of scale kick in. Holographic storage is another: some people have been working on it for a very long time, and it seems like such a promising technology, but it has yet to come to fruition. There is one company out there trying to ship a product, but they recently pushed their release date back to the end of this year. Still, if they can work out the kinks, it definitely has promise, especially for media applications. But what about beyond that? What technologies are the researchers looking at that sound really cool? I look at some of those next.
So, this is where it gets fun. Some of the technologies that researchers are currently looking into really do sound like something out of a sci-fi movie. Here are some examples of the stuff I'm talking about:
So, does this mean that the disk drive is dead? I don't think so. I believe that the disk drive we know and love will simply move from one tier of storage to another. We are already seeing some of this movement with the implementation of backup to disk. Technologies such as data deduplication will continue to accelerate this process, and the addition of new primary storage technologies will simply complete the process by pushing hard disk drives from on-line primary storage to what will be considered near-line storage in the future. Long live the disk drive!
Continuing Insights Into The Rapidly Evolving Storage Area Network Market by Todd DiGirolamo
Monday, July 28th, 2008 @ 1:00AM
Continuing Insights Into The Rapidly Evolving Storage Area Network Market by Joseph Hunkins
Monday, July 28th, 2008 @ 1:25PM
Best iSCSI SAN? InfoWorld says it is the EqualLogic PS3800XV.
What is the best iSCSI SAN on the market? According to InfoWorld the answer is the EqualLogic PS3800XV.
Each year InfoWorld designates products in several IT categories as the "Technology of the Year". For 2008, EqualLogic took the prize for best iSCSI SAN in the storage category. Reviewer Paul Venezia extolled both the simplicity of installation and the power of the PS3800XV, and InfoWorld's stellar ratings of 9 or 10 in all categories were also very impressive. The device scored 10 of 10 in the "performance" rating, and 9 of 10 points in each of the following categories: management, interoperability, scalability, reliability, and value.
The magazine summed up their choice for the best iSCSI SAN of 2008 (though it was reviewed in 2007) as follows:
EqualLogic’s PS3800XV iSCSI array represents the highly evolved state of iSCSI SAN arrays with panache. Fully featured and blazingly fast, this SAS-based array will easily find a home in infrastructures of any size, although the cost will keep it out of smaller shops. For virtualization implementations, you can’t do better than a PS3800XV.
Joseph Hunkins Mon Jul 28 11:55pm
This insight was pretty short - OK to combine it with another to avoid me being a post hog.
Continuing Insights Into The Rapidly Evolving Storage Area Network Market by Joseph Hunkins
Monday, July 28th, 2008 @ 3:39PM
Virtualization Best Practices
Thanks to the overwhelming benefits, most IT departments have already deployed virtualization across some of their infrastructure, with plans to increase the level of virtualization in the near future. According to the EMA study cited below, adoption of virtualization is growing at about 25% per year, with over 95% of all enterprises already using some form of virtualization.
The cost benefits of virtualization can be exceptional but the process also provides for superior availability, disaster recovery, and load balancing. It also facilitates faster software deployment and development as those processes are disconnected from physical hardware limitations. Virtualization also reduces downtime and thecomplications associated with hardware failure.
In a paper for BMC software, IT consultancy Enterprise Management Associates took a look at best practices in the virtualization space and identified areas where virtualization brings unique challenges to the enterprise infrastructure.
EMA recommends adopting the best practices defined in the IT Infrastructure Library (ITIL), an IT-community approach to bringing workable, quality standards to IT practices.
The study suggested that despite its many advantages, virtualization brings a host of complications relating to the need to manage, allocate, and scale the virtual server environment effectively.
EMA:
Application performance monitoring and availability management become much more complex, as virtualization makes it harder to map virtual resources to physical systems...
It is harder to accurately detect, measure, and plan capacity, because virtual services are so rapidly deployed, consumed, and destroyed....
Cost accounting and financial management becomes much more intricate and time-consuming, as tools must measure rapidly changing virtual environments, and license management and compliance are much harder to measure and enforce ...
Ensuring service levels becomes harder, because the new virtualization layer increases the potential for errors and chokepoints...
The solution, says EMA, is to implement a quality virtualization management software system consistent with the IT Infrastructure Library guidelines, such as the solutions provided by BMC. EMA suggests BMC is a superior environment, more consistent with best practices and the Infrastructure Library than much of the competition. We should note that this white paper was written for BMC and is therefore probably not an unbiased evaluation of all the options - rather, it offers good basic insight into best practices for virtualizing the enterprise.
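As a toy illustration of the mapping and chargeback problems EMA describes, consider tracking which virtual machines live on which physical hosts so that usage can be billed per VM. This is a hypothetical sketch - the names and the flat per-CPU-hour rate are illustrative, not any vendor's API:

```python
# Hypothetical sketch: map VMs to physical hosts and compute simple
# per-VM chargeback from metered CPU-hours. All names are illustrative.
from collections import defaultdict

def build_host_map(placements):
    """placements: iterable of (vm_name, host_name) pairs."""
    host_map = defaultdict(list)
    for vm, host in placements:
        host_map[host].append(vm)
    return dict(host_map)

def chargeback(cpu_hours, rate_per_hour):
    """cpu_hours: dict of vm -> metered CPU-hours consumed this period."""
    return {vm: round(hours * rate_per_hour, 2) for vm, hours in cpu_hours.items()}

placements = [("web01", "esx-a"), ("db01", "esx-a"), ("build01", "esx-b")]
hosts = build_host_map(placements)
bills = chargeback({"web01": 120, "db01": 300, "build01": 45}, rate_per_hour=0.04)
print(hosts["esx-a"])   # ['web01', 'db01']
print(bills["db01"])    # 12.0
```

In a real environment the placement data changes constantly as VMs migrate, which is exactly why EMA flags capacity planning and cost accounting as harder under virtualization.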
Continuing Insights Into The Rapidly Evolving Storage Area Network Market by Joseph Hunkins
Monday, July 28th, 2008 @ 4:30PM
The Storage Economy
Who are the biggest companies in the storage space, and how are they faring in terms of earnings and relative market capitalization? I'd suggest we are seeing early signs of commoditization in the storage market as broader standards develop and cheaper iSCSI implementations take over from Fibre Channel. This will tend to put pressure on any struggling companies as margins shrink and customers migrate to cheaper solutions.
Who are the key companies in the networked storage sector and how are they faring financially?
Sources: Yahoo Finance, Google Finance, CBS MarketWatch:
Name | Symbol | Last Trade | Mkt Cap
Quantum Corporation | QTM | 1.44 | 298.04M
Overland Storage, Inc. | OVRL | 1.12 | 14.30M
Xyratex Ltd. | XRTX | 14.55 | 425.43M
EMC Corporation | EMC | 13.91 | 28.80B
Ciprico Inc. | CPCI | 0.650 | 3.32M
Qualstar Corporation | QBAK | 3.00 | 36.76M
Adaptec, Inc. | ADPT | 3.60 | 445.04M
Brocade Communications Systems, Inc. | BRCD | 6.68 | 2.47B
Sun Microsystems, Inc. | JAVA | 9.94 | 7.77B
NetApp Inc. | NTAP | 24.57 | 8.03B
After the acquisition of EqualLogic, the big kahuna in storage is clearly Dell, which enjoys a market capitalization of $47 billion and a healthy P/E of 17.14. I'm not clear why Dell did not appear in the chart above - a Google storage market comparison. Dell Profile
EMC, Brocade, and Sun all remain major players in network storage, and all appear to be in healthy shape, with P/Es of 18, 17, and 13 respectively, indicating strong earnings relative to the stock price. EMC Profile Brocade Profile Sun Profile
Quantum, Ciprico, Overland, and Adaptec appear to be struggling financially, at least in the short term; all showed losses in the last quarter. This is not necessarily a sign of great problems, but it indicates that the market is competitive, and without improved performance these companies could be in for troubled times. Quantum Profile Overland Profile Ciprico Profile Adaptec Profile
Qualstar lost a few cents per share last quarter, but at the same time declared a $0.06 dividend, which normally suggests financial health, so generalizing about their prospects is difficult without a lot more information. Qualstar Profile
NetApp and Xyratex both maintain P/Es of about 28, which should be no cause for financial alarm.
Xyratex Profile NetApp Profile
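For readers unfamiliar with the ratios used above: P/E is simply share price divided by trailing earnings per share, and it is undefined for loss-making companies like several in the table. A quick sketch (the EPS and share-count figures below are illustrative assumptions, not the actual 2008 filings):

```python
# Illustrative sketch of the valuation arithmetic behind the table.
# EPS and share counts here are assumed for demonstration only.
def pe_ratio(price, eps):
    """Price-to-earnings ratio; meaningless for non-positive earnings."""
    if eps <= 0:
        return None  # loss-making companies have no meaningful P/E
    return round(price / eps, 2)

def market_cap(price, shares_outstanding):
    """Share price times shares outstanding."""
    return price * shares_outstanding

print(pe_ratio(13.91, 0.77))       # roughly 18, in line with EMC's cited P/E
print(pe_ratio(1.44, -0.12))       # None: a quarterly loss, as with Quantum
print(market_cap(13.91, 2.07e9))   # on the order of the table's 28.80B for EMC
```

This is why the loss-making names in the table (Quantum, Ciprico, Overland, Adaptec) are compared on quarterly results rather than P/E.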
Continuing Insights Into The Rapidly Evolving Storage Area Network Market by Stephen Foskett
Monday, July 28th, 2008 @ 8:01PM
Many storage challenges focus on correlating the higher-level use of data with the nuts and bolts of storage. These discussions often revolve around the conflict between data management, which demands an ever-smaller unit of management, and storage management, which benefits most from consolidation. Developing storage management capability that is both granular and scalable is one key to the future of storage.
Storage Management: Scaling Up
As I discussed in my last piece, Turning the Page on RAID, the data storage industry has traditionally focused on reducing granularity. Disk capacity has expanded, and RAID technology has multiplied this by combining multiple physical drive mechanisms into a single virtual one. Storage virtualization technologies, from the SAN to the server, have also often been touted primarily as a mechanism to reduce heterogeneity. From a technical perspective, therefore, granularity has been an obstacle to overcome.
The core organizational best practice for storage management is reducing complexity and increasing standardization. Consolidation of storage arrays and file servers is a common goal, as IT seeks to benefit from economies of scale. The goal of both initiatives is the creation of a storage utility or managed storage service.
Although both technological and organizational factors have traditionally driven granularity out of storage, this does not have to be the case. Virtual pools of storage are ideal for providing storage on demand, as disk-focused RAID groups give way to more flexible sub-disk storage arrangements. And an operational focus on standardized storage service offerings has the potential to enable scalable management of these smaller units.
Filing Service
File-based protocols would seem to have more potential for granular storage management, but they have been undermined by the hierarchical nature of modern file storage. Whether the connection to a file server uses NFS, CIFS, or AFP, the key unit of management is actually the shared directory, not the file. All files in the share \\firefly\backups would be located on the same server and would be managed as a unit.
NAS virtualization can change this somewhat, as can more specialized NAS servers. Although Microsoft DFS enables consolidation and virtualization of NAS shares, it does not allow subdivision of shares below the directory level - all files in a directory must be placed on the same server. Tricks like stubbing and links allow for some movement, but these do not solve the core issue. Specialized virtual NAS devices from F5 Acopia, NetApp, BlueArc, and ONStor have the ability to move files individually, providing as much a virtualized storage environment as any block-focused enterprise array.
But even an ideal virtualized file server lacks the kind of granularity demanded by users. They care about data, not files, and most applications consolidate their data storage into a few files. Consider a database, for example, where users want each record treated uniquely but storage devices see just a few much larger files. As I pointed out in my piece, We Need a Storage Revolution, the ideal storage platform for data would be one in which each individual record or object included custom metadata and was managed independently. This would truly be a massive change, however, and it is not clear that all applications will follow the object storage model of Google and Amazon.
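The record-level model described above can be sketched as a trivial object store in which each object carries its own metadata and can be selected and managed independently. This is a conceptual illustration only - the class and field names are hypothetical, not the Google or Amazon APIs:

```python
# Conceptual sketch: an object store where every object carries custom
# metadata and is managed individually, not as part of a file or share.
class ObjectStore:
    def __init__(self):
        self._objects = {}

    def put(self, key, data, **metadata):
        """Store data along with arbitrary per-object metadata."""
        self._objects[key] = {"data": data, "meta": metadata}

    def get_meta(self, key):
        return self._objects[key]["meta"]

    def select(self, **criteria):
        """Manage objects individually: find all keys whose metadata
        matches every given field=value criterion."""
        return [k for k, obj in self._objects.items()
                if all(obj["meta"].get(f) == v for f, v in criteria.items())]

store = ObjectStore()
store.put("rec-1", b"...", owner="alice", retention="7y", tier="archive")
store.put("rec-2", b"...", owner="bob", retention="30d", tier="fast")
print(store.select(tier="archive"))  # ['rec-1']
```

Contrast this with the database case above: here policy (retention, tiering) attaches to the record itself rather than to the large opaque file that contains it.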
Small is Beautiful
Barring a revolution in data management, our best hope is to allow greater granularity in storage management. As mentioned above, virtualization technology has the potential to enable management and protection of any unit of storage, right down to the individual block or record. But the reality of storage virtualization has not matched its promise.
What is needed is greater integration. Each layer of virtualization (file system, volume manager, hypervisor, network, array, and RAID) also hides necessary details from lower layers. Consider the case of a virtual server snapshot: The application and filesystem must be in a quiesced state to allow a snapshot to be taken at the storage level, but the storage array has no intrinsic information about how its capacity is used. A given LUN might contain dozens of servers on a shared VMFS volume, so all must be snapped together.
Integration can be enabled by sharing more information through APIs. VMware recently announced that Update 2 of Virtual Infrastructure will enable Microsoft Volume Shadow Copy Service (VSS) integration for shared storage. So a VMFS snapshot can call the operating system and even applications (Windows Server 2003 only, for now) to prepare the data. Similarly, VSS can communicate directly with supported iSCSI and Fibre Channel arrays, calling a snapshot at the right moment.
As virtualization technology matures, expect this type of integration to improve. More and more APIs will be exposed, allowing communication up and down the stack to break through the information barrier. Imagine a future where a standard API like VSS can pass a message through VMware, Xen, and Hyper-V to the underlying storage array to initiate a snap. I predict that this kind of integration-enabled granularity is not too far off.
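The quiesce-then-snapshot handshake described above can be sketched as a simple orchestration sequence: freeze application I/O, ask the array for a snapshot, then resume. The function and class names here are hypothetical stand-ins for whatever a given hypervisor and array actually expose:

```python
# Hypothetical sketch of VSS-style snapshot orchestration: the application
# and filesystem are quiesced before the array snapshot, then thawed.
import contextlib

@contextlib.contextmanager
def quiesced(app):
    """Flush and freeze application I/O for the duration of the snapshot."""
    app.flush()
    app.freeze()
    try:
        yield
    finally:
        app.thaw()  # always resume I/O, even if the snapshot fails

def snapshot_vm(app, array, lun):
    with quiesced(app):
        # Only while I/O is frozen is the on-disk state crash-consistent.
        return array.snapshot(lun)

# Minimal fakes standing in for real hypervisor/array integrations:
class FakeApp:
    def __init__(self): self.events = []
    def flush(self): self.events.append("flush")
    def freeze(self): self.events.append("freeze")
    def thaw(self): self.events.append("thaw")

class FakeArray:
    def snapshot(self, lun): return f"snap-of-{lun}"

app, array = FakeApp(), FakeArray()
print(snapshot_vm(app, array, "lun7"))  # snap-of-lun7
print(app.events)                       # ['flush', 'freeze', 'thaw']
```

The point of the API integration discussed above is precisely to let each layer (application, hypervisor, array) play its part in this sequence instead of snapshotting blindly at the LUN level.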
Continuing Insights Into The Rapidly Evolving Storage Area Network Market by Joseph Hunkins
Monday, July 28th, 2008 @ 11:32PM
Do you need a storage network consultant?
Storage area networking is a rapidly evolving and complex topic, and unless your company has staff with a fair level of experience and expertise in storage area networking applications you may want to consider using a consultant that is well versed in issues relating to SAN deployments.
When budgeting for your new storage system or changes to an existing SAN, you'll want to factor in not only hardware costs but also the likely consulting costs to fully deploy the new systems. Even for a fully in-house solution there will likely be costs outside of just the new hardware, and these will vary depending on vendors, solutions, and sales negotiating ability.
One new option is Dell's SAN consulting services, a new offering from the company that now provides more iSCSI installs than any other thanks to its acquisition of major SAN market leader EqualLogic. Obviously a Dell consultant will steer you toward Dell|EqualLogic products, but unless you have a highly specialized application or prefer other hardware, you are likely considering Dell solutions already. You'll want to estimate the savings of going this route over a private consultant with more hardware choices, to see whether a combined consulting-and-hardware package - likely the cheaper option - makes up for any difference in hardware costs.
Over at the Ars Technica blog, an experienced SAN IT manager recently suggested:
... get someone that can safely guide you through this minefield and get you a solution that you can understand. We buy all our hardware under the Dell Gold Support contract. It's a life-saver to be able to call someone that can understand all those systems and get you back on track ASAP.
More important than the initial cost and choice of hardware, it is critical to make sure your consultant understands your storage needs very clearly and can deliver a solution consistent with those needs, especially if you need to integrate legacy storage systems - for example, Fibre Channel-based - with newer deployments, which are often iSCSI. Before choosing a consultant, take some time to inventory your existing network and your current and future storage needs, and create a brief interview for prospective consultants to determine whether they appear highly proficient with solutions that address all of your storage needs.
So, do you need a network storage consultant? If you are not sure about it the answer is probably yes, and choosing one wisely should be a top priority.
Continuing Insights Into The Rapidly Evolving Storage Area Network Market by Joseph Hunkins
Monday, July 28th, 2008 @ 11:53PM
How Green is your SAN?
Optimizing IT power consumption has become a critical cost factor as well as an environmental concern, and the good news is that key innovations in storage such as deduplication, virtualization, and solid state storage are keeping the costs and environmental problems far more manageable than they would be without the innovations.
The Storage Forum has summarized an excellent report from IDC which notes:
IT spending required to cool and power spinning disk drives will reach $1.8 billion by the end of this year and over $2 billion in 2009
David Reinsel of IDC:
“As companies continue to add storage capacity at an aggregate rate of over 50 percent per year, the number of spinning disks continues to be a larger part of the overall power and cooling costs within a datacenter ... Vendors must do more to promote and enable a well-rounded green storage strategy that includes datacentre redesign, data consolidation, and data reduction.”
Reinsel also noted in the report that storage needs are growing faster than HD capacity, meaning we'll need more and more drives or continued SAN innovations if this trend continues:
“Data center storage requirements are growing 50% to 55% per year, but hard drive capacities are only growing 30% to 35% per year. In order to keep up with this growth, you either have to put in more and more drives, or you look for alternatives to stave off buying new drives,”
IDC put the number of disk drives in storage arrays around the world at 49 million and growing fast, with power consumption as the key environmental challenge in the sector.
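Reinsel's arithmetic is easy to check: if demand compounds at ~50% per year while per-drive capacity compounds at only ~30%, the required drive count must grow each year by roughly the ratio of the two growth factors. A quick back-of-the-envelope calculation, starting from IDC's ~49 million drives (the growth rates are the rounded midpoints of the ranges quoted above):

```python
# Back-of-the-envelope check of the demand-vs-capacity growth gap.
def drives_needed(initial_drives, demand_growth, capacity_growth, years):
    """Drive count required when stored data and per-drive capacity
    compound at different annual rates (fractions, e.g. 0.5 for 50%)."""
    ratio = (1 + demand_growth) / (1 + capacity_growth)
    return initial_drives * ratio ** years

# IDC's ~49 million installed drives, 50% demand growth, 30% capacity growth:
for year in (1, 3, 5):
    millions = drives_needed(49e6, 0.50, 0.30, year) / 1e6
    print(f"year {year}: about {millions:.0f} million drives")
```

Even with these rough assumptions the count roughly doubles within five years, which is the pressure driving the deduplication and consolidation alternatives discussed in this post.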
So the bad news is that your power needs and power costs are going to continue to rise, but the good news is that keeping up with the latest SAN innovations in deduplication, virtualization, and solid state storage will help offset those costs and keep your enterprise growing green.