About This Case

Closed

3 Dec 2008, 11:59PM PT

Bonus Detail

  • Top 8 Qualifying Insights Earn $500 Bonus

Posted

24 Nov 2008, 12:00AM PT

Industries

  • Hardware
  • IT / IT Security
  • Internet / Online Services / Consumer Software
  • Logistics / Supply Chain
  • Start-Ups / Small Businesses / Franchises
  • Telecom / Broadband / Wireless

What Does Virtualization Mean To You?

 

Closed: 3 Dec 2008, 11:59PM PT

Earn up to $500 for Insights on this case.

Intel and IBM would like to get the Insight Community's thoughts on what virtualization (in the IT context) means to you.  

They will be hosting the best thoughts on this subject on their new site, Virtualization Conversation.

Pick ONE of the following topics and expand on it to discuss your views on the subject in approximately 750 to 1,500 words.

  • Maximizing the business value of data centers
  • Virtualization benchmarks
  • The benefits of virtualization
  • Improving efficiencies in the work environment

We're looking for views from folks in the IT world, giving some insight into their real-world experiences on these topics. Eight responses will be chosen and placed on the Virtualization Conversation site.

Update: Intel and IBM were so pleased with the quality of responses that they have increased the number of insights they would like to use from three to eight. Thanks, everyone, for your excellent insights!

17 Insights

Today is most assuredly the day of virtualization.  As the worldwide economic markets face trying times, technology managers increasingly search for cost-savings without compromising capacity.  Virtualization is the single best solution to accomplish this goal.

As an independent contractor, I work with businesses of every size, from sole proprietorships to multi-million-dollar corporations.  With each contract, I invariably find that virtualization comes into play: for my smaller clients, in the form of moving them away from private hosted solutions (self-hosting, colocation, or private server rental) and towards virtual private hosting; for my larger clients, virtualization provides the least painful path to server consolidation and capacity planning, resulting in immediate cost savings on power, staffing, and maintenance.

In both these cases, the single most important feature that virtualization provides is the flexibility to abstract away the hardware from the equation.  Planning and projection become simplified because the ultimate solution becomes mostly hardware agnostic.  Simplification leads to savings, pure and simple, and while layoffs occur across the country and around the world, you will find that virtualization is one sector that suffers no such fate.

While I'm fairly new to the world of virtualization, it did not take long for me to see the incredible benefits. I've been working with VMs from an administrative perspective for about 3 months. Before that my knowledge of VMs was pretty basic, and I was mainly interested in VMware Fusion, running Windows on a Mac.

Our company is a large group of GIS software developers. Most of our virtualization revolves around providing environments for development, with the ability to store snapshots in case development goes wrong.

We have almost 150 active VMs with another 100 in cold storage. The 150 active VMs are running on 3 servers while the 100 in storage are on one machine. That is about a 62-to-1 VM-to-physical-machine ratio. We are saving a ton of space and money by virtualizing. Less heat output = less cooling cost, less space = less server room space, fewer power supplies = less energy consumed, fewer machines = less management.
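
To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python that reproduces the consolidation ratio from the figures quoted above and estimates an annual power saving; the per-server wattage and electricity price are illustrative assumptions, not numbers from this post:

    # consolidation_math.py -- back-of-the-envelope consolidation arithmetic.
    # VM and host counts come from the post above; wattage and price are assumed examples.
    ACTIVE_VMS, COLD_VMS = 150, 100
    ACTIVE_HOSTS, COLD_HOSTS = 3, 1

    WATTS_PER_SERVER = 400        # assumed average draw per physical box
    PRICE_PER_KWH = 0.10          # assumed electricity price, in dollars
    HOURS_PER_YEAR = 24 * 365

    total_vms = ACTIVE_VMS + COLD_VMS
    total_hosts = ACTIVE_HOSTS + COLD_HOSTS
    ratio = total_vms / total_hosts              # ~62.5 VMs per physical machine

    servers_avoided = total_vms - total_hosts    # boxes that never had to be bought or powered
    kwh_saved = servers_avoided * WATTS_PER_SERVER * HOURS_PER_YEAR / 1000
    print(f"Consolidation ratio: {ratio:.1f} VMs per physical host")
    print(f"Estimated annual power saving: {kwh_saved:,.0f} kWh (~${kwh_saved * PRICE_PER_KWH:,.0f})")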

To me, virtualization is no longer just running two different operating systems simultaneously on one machine. It is conserving resources in an IT sense and in a "green" sense.

Virtualization Background

I have a few years of experience using virtual servers.  Early on, virtualization was so complicated to implement and maintain that it only made sense for organizations with the most dire need for software segmentation or other complex architectural needs.  Virtualization is so easy now that implementation is almost an afterthought.  I am currently implementing several virtual servers on a massive grid as part of a major virtualization effort at my organization.  Virtualization has definitely come a long way since my earliest experiences, and the value it can provide to me and my coworkers is now much more obvious.

Direct Benefits of Virtualization

Virtualization is expected to directly provide me and my coworkers with the following benefits:

1.  Easy-to-manage future hardware upgrades.  The virtual machine operations folks can add processor power, disk space or RAM to our configuration instantly at any time, at near zero cost compared to the complexity involved in rolling out a new physical server in our very large server environment (many thousands of servers).  Conversely, if a piece of physical hardware connected to the grid dies, it can be swapped out without shutting down any of the virtual servers, yielding a far better "hardware uptime" experience for end users.  The hardware still dies just like it always does, but it doesn't affect the performance of the software as significantly as it would if a single system died and took an application down with it.  For those who have never had a direct hardware failure, remember virtualization when it happens to you (and it will happen one day!).

2.  Easy-to-redeploy software configurations.  If one of these virtual machines were to suffer a data loss or software misconfiguration that required a rebuild, we could rebuild and redeploy the entire server in minutes instead of hours or days (see the sketch after this list).  If we need other "firmware" changes, those are equally easy for the hardware folks to implement.

3.  Other logistics are faster and more easily controlled by IT staff.  For example, relocating the virtual machines to a different physical location can be achieved easily and without physically moving the old hardware to the new location.  Finally, software upgrades and patches are easier for IT to roll out.
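
As a rough illustration of point 2 above, the sketch below shows what a rebuild-in-minutes workflow could look like with libvirt/KVM tooling, cloning a fresh guest from a pre-approved template image. The template and guest names are hypothetical, and a shop running VMware, Xen or Hyper-V would use its own equivalent commands:

    # redeploy_from_template.py -- hedged sketch: rebuild a broken guest from a template.
    # Assumes a libvirt/KVM host with virt-clone installed; "app-template" and
    # "app-server-07" are hypothetical names used only for illustration.
    import subprocess

    def redeploy(template: str, guest: str) -> None:
        # Tear down the broken guest (ignore errors if it is already gone)...
        subprocess.run(["virsh", "destroy", guest], check=False)
        subprocess.run(["virsh", "undefine", guest, "--remove-all-storage"], check=False)
        # ...then clone a fresh copy from the approved template and start it.
        subprocess.run(["virt-clone", "--original", template,
                        "--name", guest, "--auto-clone"], check=True)
        subprocess.run(["virsh", "start", guest], check=True)

    if __name__ == "__main__":
        redeploy("app-template", "app-server-07")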

Management and Indirect Benefits

Those are some of the benefits I will realize that will improve my ability to deploy custom software solutions to those servers vs. our old physical server configuration.  There are far more benefits than that to the operators and upper management, such as the cost savings of having only the minimum necessary virtual hardware added to each application or project.  The larger your virtual grid becomes, the more this cost savings will outweigh the overhead costs, and ultimately, with a multi-thousand-server grid, I can't imagine anyone not being impressed with the new bottom line.

The Performance Cost of Virtualization

Virtualization has a performance cost on the hardware due to the overhead of the virtualization management software.  This overhead would have a huge effect if you had a large application and only a few pieces of hardware, and you were hammering the hardware to max capacity before virtualization.  However, if, like our organization, you can get a big grid of hardware going, that overhead cost will be vastly outweighed by the virtual-hardware-minimizing savings noted above.  If you have a lot of smaller apps that need their own distinct servers, virtualization is definitely worth considering.  I would even prefer hosting my own apps in a third-party virtual environment to managing my own hardware, because virtual servers are so much easier to manage that I would get better and cheaper service that way than by paying for my own hardware.
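
One way to see why the overhead argument cuts both ways is a toy calculation like the one below; the 10% hypervisor overhead, the 15% average standalone utilization and the 65% consolidation target are assumptions chosen purely for illustration:

    # overhead_breakeven.py -- toy model: does consolidation outweigh hypervisor overhead?
    # All percentages are illustrative assumptions, not measured figures.
    import math

    HYPERVISOR_OVERHEAD = 0.10   # assume the hypervisor consumes ~10% of raw capacity
    AVG_UTILIZATION = 0.15       # assume standalone servers idle at ~15% utilization

    def hosts_needed(apps: int, target_utilization: float = 0.65) -> int:
        """Physical hosts needed to carry `apps` lightly loaded workloads after consolidation."""
        useful_per_host = (1 - HYPERVISOR_OVERHEAD) * target_utilization  # usable share of one host
        demand = apps * AVG_UTILIZATION                                   # total work, in whole servers
        return math.ceil(demand / useful_per_host)

    if __name__ == "__main__":
        apps = 40
        print(f"{apps} lightly loaded standalone servers -> {hosts_needed(apps)} virtualized hosts")
        # For a single app already hammering its box at ~95% utilization the picture reverses:
        # there is no idle headroom left to pay the overhead, so virtualizing it buys nothing.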

Systems Architecture Considerations

Virtualization is the most technically correct way to implement hardware and software from a systems architecture standpoint.  Consider the following scenario: if I purchase hardware, no matter how I choose to upgrade the software, I am stuck with that hardware configuration's limitations because the software is tied directly to it.  Contrast this with a virtual environment, where the hardware can be upgraded and configured independently of the software.  This allows legacy systems to be emulated or otherwise virtualized and kept functional far beyond the original hardware's lifespan.  Furthermore, the latest software and hardware can be implemented almost immediately, at little cost, to at least test the software, versus trying to purchase an entirely new physical hardware configuration.

New Ways to Solve Problems

Finally, virtual servers allow for the consideration of possibilities far outside the restrictions imposed by traditional physical server environments.  One of the future enhancements I may propose for our virtual grid is the ability to have tons of client systems automatically test various components for us, so that we can know right away if a specific type of configuration will be incompatible with our software.  Organizations can realize substantial savings and time-to-market improvement from a grid of virtual QA tester machines in different configurations, and that's just one random idea out of a whole new industry made possible by a virtual server grid.

Virtualization, to me, is simply a continuation of specialization. When I was a kid (a very long time ago; we still rode in buggies part of the time) we did a lot of things that are now confined to specialists - we fixed appliances ourselves, worked on our own cars, etc.

VERY inefficient; and when the socio-economic structure allowed, we all started handing that "stuff" over to specialists.

I see the same thing happening, sooner or later, in the online world. It makes absolutely no sense whatsoever for, say, a patent attorney like me to attempt to be my own system administrator; but at the moment, to some extent, that is what I have to do.

Virtualization will free me of that, and allow me more time to learn to be a better patent attorney.

At my last job each developer had their own personal development system. Since it was a military-affiliated command, there were 3 separate networks that required development efforts. There was one server where code was stored in a repository for community access. The command did have thin-client terminal systems for most users, but the developers couldn't use them because of the work that they performed.

Because each developer had 3 personal computer systems, space was at a premium. It also meant that special privileges had to be set up for everyone so they could have Administrator access to the computers to install software for testing and compatibility research.

One problem that we ran into was making sure everyone had the necessary version of software for testing and development. There was no central file server that had the software the developers required so everyone was on their own to ensure their systems were up to date.

Additionally, since each developer programmed on his local systems, there was no data redundancy in case of a system failure and there were no policies or procedures to back up data on a regular basis. Network profiles enabled each person to have a network “hard drive” for personal data storage; these profiles were backed up with the normal server backups. However, no one trusted them for important file storage because network problems could prevent access when needed and there was no way to synchronize the files between the local hard drive and the profile.

To try and reduce the number of desktop systems we were responsible for (and the associated licensing issues) and to resolve many of the problems mentioned above, a virtualization scheme was tested for the development process. Three servers that were recycled from a previous project were converted to virtual machine servers, each with 4 virtual systems created. A default image was created for initial testing purposes to see whether a viable development system could be virtualized.

The test was conducted for a year with one developer using the virtual system for most of his work, and several developers using the VMs on an occasional basis. There were no problems programming in the virtual environment. The developers who didn’t use it simply felt they didn’t have as much control as they would like (even though each person had complete admin access to their VM) and some programming tasks, such as compiling, seemed to take longer than normal over the network.

Some of the highlights of the test were:

1.  We controlled the servers, so control of the VMs wasn't up to a third party. We could administer them as needed without having to wait for approval or having to notify anyone.

2.  Custom environments could be created, depending on the programming tasks required. For example, a separate VM image could be created for Java programming, web development, image processing, or MS Windows development.

3.  Once a development environment was created and approved, an image snapshot was taken and stored both on the server and offline. Any new VMs that needed to be created could simply use the established snapshots as-is or as a template.

4.  Memory and disk sizes could be dynamically altered on the servers depending on the needs of the developers. This also enabled more resources to be devoted to the tasks that needed them, such as compiling, when necessary (see the sketch after this list).

5.  When a developer no longer needed the VM, it could be saved and closed, freeing up resources for other uses.

6.  Software could be updated and monitored centrally rather than having to keep track of each person's systems.

7.  The virtual environment allowed use of the thin-client workstations, bypassing the limitations of the normal network. This meant users could simply use one small workstation instead of 3 towers to do the same work. It also meant that security was enhanced because the unclassified and classified workstations were no longer physical entities within the workspace.
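
As a rough illustration of point 4, here is a minimal sketch, assuming a libvirt/KVM-style hypervisor and a hypothetical domain named dev-java-01, of how memory could be adjusted from a small Python wrapper around the virsh CLI; the exact commands, units and limits depend on the hypervisor actually in use:

    # adjust_vm_memory.py -- hedged sketch: change a VM's memory allocation via virsh.
    # Assumes a libvirt-based hypervisor; the domain name and sizes are hypothetical.
    import subprocess

    def set_max_memory(domain: str, mem_mib: int) -> None:
        """Raise the domain's memory ceiling in its persistent config (takes effect at next boot)."""
        subprocess.run(["virsh", "setmaxmem", domain, str(mem_mib * 1024), "--config"], check=True)

    def balloon_memory(domain: str, mem_mib: int) -> None:
        """Change the running domain's memory now (must stay within its current ceiling)."""
        subprocess.run(["virsh", "setmem", domain, str(mem_mib * 1024), "--live"], check=True)

    if __name__ == "__main__":
        # Give a hypothetical build VM more headroom for a large compile job.
        set_max_memory("dev-java-01", 8192)   # ceiling for future boots
        balloon_memory("dev-java-01", 4096)   # balloon up now, within the existing ceiling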

The test was concluded to be a success. Even accounting for the complaints from the part-time users, the virtual development environment was proven to be usable. The problems with the arrangement were generally issues that were out of the division’s control.

1.  Purchasing VMware or other virtualization software requires the headaches and hassles of regular software purchases. Open-source software, e.g. Xen, doesn't have the corporate support "required" for approval. Free software, e.g. MS Virtual Server, often doesn't have the enterprise capabilities necessary for a full-blown rollout.

2.  Servers need hardware upgrades to support a reasonable number of VMs, and the minimum requirements are higher than for "normal" uses. Servers that weren't under the control of the division required approval and support from the Enterprise Support Dept.

3.  A small learning curve is required for users to learn how to "remote in" to the virtual systems and for administrators to learn how to properly administer a virtual environment.

Overall, virtualization can make life easier for users and administrators. The learning curve is relatively small, especially for users. Money can be saved by reducing the physical systems required for work and the associated overhead of those systems (energy costs, cabling, hardware repairs, etc.). Fewer administrators are required for the same number of users and administration is easier to centralize. The initial costs may be higher than normal but, in the long run, the benefits justify going virtual.

Virtualization of IT systems decouples physical infrastructure from logical resources, hiding complexity and enabling new capabilities. However, not all potential benefits of virtualization have meaningful value outside IT circles: Too many of our discussions revolve around the very complexity that virtualization technology should be hiding! True business value is derived from transformed virtual resources in the next-generation data center, not the incremental capacity gains of virtual servers. But how will we get there, and what will this future look like?

The Problem with Virtual Servers

Implementation of virtualization technology to date has merely delivered condensation of physical resources: 250 physical servers are condensed onto 20 physical servers, but 250 virtual server images remain. True, this does result in the reduction of data center footprint, from rack space to power and cooling, enabling moderate cost savings. But these are not examples of real consolidation, let alone business transformation.

Many have lamented this "virtual server sprawl" and suggested alternative methods of consolidating low-utilization applications into larger, more flexible "resource servers". For example, numerous SQL servers can be combined on a single central server with more focused management. But these larger resource servers are not normally virtualized since their concentrated I/O demands can overtax current server virtualization platforms. Therefore, consolidation and virtualization remain separate.

This is the problem with conventional server virtualization. It enables us to condense data center demands for some systems, but delivers very little else apart from new backup and management headaches. Certainly we can provision servers more quickly, and we might be able to recover from a disaster more easily, but these are IT-facing benefits that other business entities care little about.

Storage and Network Virtualization

Virtualization of storage and network resources faces even higher barriers. Where server virtualization has quickly delivered incremental "green" savings, these benefits are harder to come by in other areas.

Storage virtualization primarily delivers flexibility. SAN or NAS systems can be combined into larger pools, allowing existing resources to be better utilized or provisioned more quickly. But there is only a little cost avoidance to be gleaned from more efficient use of storage capacity. Real cost savings would require reduction of infrastructure, and constant data growth makes this extremely difficult to achieve. Other benefits, like enhanced data migration or heterogeneous replication, ought to be invisible to the business anyway.

Network virtualization lags even further behind. Only a few shops have attempted to use technology like InfiniBand to enable flexible virtual connectivity, though the future Converged Enhanced Ethernet concept is beginning to spark some interest. Here again, financial benefits from network virtualization technology are limited to a moderate reduction in future equipment cost.

Transforming the Data Center

In all three instances (server, storage, and network), the financial benefits are merely the sideshow. The underlying benefit from virtualization of IT infrastructure comes from the extension of IT systems outside the data center, a change on the order of the advent of minicomputers or the spread of open systems.

VMware recently laid out a serious and compelling vision of this future Virtual Data Center as VDC-OS. Their concept is evolutionary and radical at once, with the simple virtual server infrastructure of today augmented with increasingly uniform and flexible storage and network layers. This culminates in a truly virtual data center, where running server images can move from device to device, location to location, and even out to the cloud.

VMware's brilliance is in leveraging what works today (virtual server images on ESX) to build a foundation for complete virtualization of physical resources. But virtual servers running on VDC-OS remain tied to the present: They run the same operating systems and will likely remain bound to the same "one (virtual) server per application" world view that pervades open systems today. This leads to exactly the same situation of server sprawl that has proven a management nightmare.

Others are extending the web hosting concept to enable custom applications to be run on the scalable, flexible, multi-homed servers that run the world's biggest Internet applications. Google and Amazon's visions are decidedly post-data center, with applications, rather than server images, being the primary unit, and database-style storage replacing conventional blocks and files. Use of these web-oriented application platforms has so far been limited to entirely new systems built from scratch to take advantage of them, limiting their appeal to current IT environments.

Where Is the Business Value?

Yet, most discussions of these virtualization strategies (mine included) fail when it comes to demonstrating real business value. We must move away from quickly-forgotten cost savings and focus instead on profoundly transforming how IT serves business goals. Virtualized infrastructure allows flexibility and scalability, changing how everything in IT works.

Whether it uses conventional operating systems and applications (as in VDC-OS) or re-engineered web-enabled solutions, virtualized infrastructure would fundamentally change our world. Organizations would be free to physically move their systems, even outsourcing or offshoring the infrastructure component entirely. They could move to an on-demand purchasing model for logical capacity, not just bits and bytes.

In the process, they would render current server platforms, operating systems, and storage devices irrelevant. Undoubtedly, attaining this future remains a while off, but IT professionals should consider its implications. Much of what we do is focused on making the "plumbing" work efficiently rather than serving the needs of the business. Where do we stand once the perennial issues of performance, availability, and scalability are solved?

Stephen Foskett is a professional information technology consultant, providing vendor-independent strategic advice to assist Fortune 500 companies in aligning their storage and computing infrastructures with their business objectives. He has been recognized as a thought leader in the industry, authoring numerous articles for industry publications, and is a popular presenter at seminars and events. In 2008, he was awarded Microsoft's Most Valuable Professional (MVP) status in the area of File System Storage. He holds a bachelor of science in Society/Technology Studies, from Worcester Polytechnic Institute.

Michael Kramer
Thu Dec 4 3:40pm
I'd like to understand what you mean by "true business value being derived from... not the incremental capacity gains...". I see true business value in nearly any abstraction of resources, not just large scale virtual data centers. The virtualization doesn't need to be "big picture" just for the business to realize the benefits of abstraction.

You also mentioned that quicker deployment and disaster recovery are not business concerns, but are merely IT-facing benefits. I feel that the business units need their applications to do their jobs and they need them to be resilient in a disaster. These are requests from the business, not from within IT.

Lastly, your description of a "truly virtual data center" intrigues me. It sounds to me like a description of three actual data centers that simply replicate to each other. Perhaps I'm missing something, or perhaps you're talking about application virtualization?
Stephen Foskett
Fri Dec 5 7:40am
Thanks for the comments, Michael! I mean to define true business value as something completely removed from IT - the business doesn't care about IT at all, or shouldn't except at IT companies. They just ASSUME quick provisioning and DR capability - that's IT's job. So we need to be able to meet their expectations, and a truly virtual data center can do that!

After thinking about this for some time, I researched the meaning on Wikipedia. As expected, virtualization means pretty much whatever you want it to mean!

Even so, regardless of the definition one chooses, and independent of the context (yes, I know we are addressing IT, but the concepts are broader than "just IT") virtualization is an inevitable progression.

Even as recently as when I was a child (late Stone Age), we often did general-purpose tasks ourselves: fixing things, making things when we had a need and no product, building stuff; and it had a negative impact on the primary purpose we were attempting to fulfill. How could I be effective in my chores when I often had to improvise, say, fly swatters, or some sort of bucket? Each time I spent time, sometimes hours, filling some need that was only incidentally related to my chores, and the chores suffered!

So, inevitably, for societies to progress, the "incidental stuff" has to be taken over by experts (or, at least, dedicated entities). The same has to be true of IT! It makes no sense for me, a patent attorney, to spend time managing system resources! With virtualization (assuming it is done correctly) those things simply "happen", and I am free to pore over the latest case law, or the latest USPTO rules (or even to philosophize about how we can remake the present, less than optimal, system!).

I have found the USPTO (and my congressional representatives) to be quite receptive to ideas; but when your "computer" (a term that encompasses software, IT, etc.) is not working, who has time for such things, no matter how vital they may be in the long term?

Clearly, when we have progressed sufficiently, virtualization will convert the peripheral tasks that now "soak up" much of our time to transparent activities that are simply there.

I think the primary place where this MUST happen is in Linux. Windows, as popular as it is, and will be for some time, is based on the "weakest link" philosophy. By its very nature, Windows is the venue of "morons in a hurry", people who either won't, or can't, participate in the ongoing process. As such, Windows is inherently prone to attacks by criminals, because the system, in order to accommodate the "moron in a hurry", must be simplified to the point that security is a desirable, but unattainable, goal.

Linux, or any Unix derivative, on the other hand, is designed for security and safety. The "moron in a hurry" is not accommodated (or even encouraged). Yet, as the world of IT moves toward reasonable security with less personal involvement, virtualization is the only answer. I do not see, in our lifetime, Windows being able to "smarten up" to a position of secure operation without abandoning the "moron in a hurry" (which won't happen). Linux, on the other hand, MIGHT be able to redefine itself as a virtual helper rather than an "ivory tower" for the elite (in the "computer world").

I suspect that, like it or not, it will happen! It may be traumatic to the "computer geek" (people I greatly admire, by the way) and the "moron in a hurry" (people I sympathize with very much), but it is a natural evolution of society, and no matter who gets hurt, the "train is on the track"!

I have a belief that, one day, I will be able to go home at a reasonable time nearly every day. The process of working late into the night performing upgrades or migrations makes my family life difficult and a personal life almost impossible. Don't get me wrong, I don't mind working hard and putting in the hours when needed, but when an upgrade goes wrong and you work all night, missing a personal or family event, well, that is hard.

Virtualisation is a solution to that problem. I can take a "snapshot" of my server before the upgrade, make a copy of it, then start work. Then at 10PM, when the team is tired, we can make a decision to "roll back", migrate back to the copy of the server disk image or snapshot and..... go home.
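
A minimal sketch of that snapshot-then-roll-back routine, assuming a libvirt/KVM environment and a hypothetical domain named billing-app; teams on VMware or Hyper-V would use their platform's equivalent snapshot commands:

    # upgrade_with_rollback.py -- hedged sketch of the snapshot/roll-back routine described above.
    # Assumes libvirt/KVM; the domain and snapshot names are hypothetical.
    import subprocess
    import sys

    DOMAIN = "billing-app"
    SNAPSHOT = "pre-upgrade"

    def take_snapshot() -> None:
        subprocess.run(["virsh", "snapshot-create-as", DOMAIN, SNAPSHOT,
                        "Taken before tonight's upgrade"], check=True)

    def roll_back() -> None:
        # 10PM, the team is tired: revert the guest to its pre-upgrade state and go home.
        subprocess.run(["virsh", "snapshot-revert", DOMAIN, SNAPSHOT], check=True)

    if __name__ == "__main__":
        if len(sys.argv) > 1 and sys.argv[1] == "rollback":
            roll_back()
        else:
            take_snapshot()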

More recently, for certain types of server software, we have been performing the upgrades on a copy of the virtual server in lab environments. We take a snapshot of the Virtual Hard Disk, move it into a lab virtual server, and then run the upgrade procedure. As we find problems (there is always a problem isn’t there?) we read the manuals, contact tech support, communicate with the developers, get external resources or any of other myriad things that can fix the problems.

Previously, we were restoring from tape, or from a backup on a SAN, and the rollback could take hours. The temptation to keep pushing through the problem rather than roll back was always strong.

But now, we have often performed the upgrade a number of times and know exactly what is going to happen. Importantly, because we never need to load a tape or backup media, or put the boot CD into the drive, or press F12 at boot time, we are able to do all this work from home. The hypervisor console means that we have total control over the machine environment from a remote console.

Ah, home. Now that is where work should be done from.

We are also saving money. It’s surprising and it’s not obvious, but we save money by:

- no longer needing to have external consultants on standby or on site for the upgrade "just in case" (and having to pay penalty rates for night work)
- no travel and accommodation at the data centre
- a reduction in time used for low-risk projects, because we can prove the upgrade in the lab
- fewer overall resources; many operational projects use fewer resources to achieve an outcome
- a reduced training budget, because people can learn on test systems and copies of the live systems
- not purchasing expensive KVM systems to have remote control of servers

You might also guess that there is a lot less stress going on. While it is true that designing and creating the virtual servers and then learning the tools that manage and control those systems took some time and some money, it is not as bad as you might think. Why? Because it is the same as what we already do! You see, a virtual server is still just a CPU, memory and hard disk drives.

Virtualisation means that I can usually go home at a good time, be less stressed and spend more time at home, since I can do server work remotely. That's good enough for me.

-- Dennis Byron

 

The excitement that built around the VMware IPO in 2007, Citrix's subsequent acquisition of XenSource the same year, and related Red Hat and Microsoft announcements in 2008 made virtualization a hot buzzword. But it is a buzzword that is thrown into the information technology (IT) mix in ways that make little sense either technically or in terms of expected benefits, based on user feedback to IT Investment Research. Savvy IT professionals and users believe the various types of virtualization (for example, memory, storage, server, desktop, appliance, application) should not be treated as if they were one (and among the un-savvy there is neither demand nor expectation of benefits).

 

Memory/Storage/Server Virtualization

Savvy users also know that the techniques for virtualizing IT resources are almost as old as the IT industry itself. Virtual memory concepts introduced as part of the MIT/Bell-Labs/GE Multics project in the 1960s represent the beginnings of virtualization. Storage virtualization is basically an extension of memory virtualization. These concepts have been totally built into all major memory management systems since the early 1970s and even into Windows for more than 15 years. They have been built into storage management since the late 1980s.  There is no new demand and no benefit to IT professionals and users from memory/storage virtualization that they are not already realizing.

 

The concept of virtualizing the server via better management of its operating system resources dates to IBM developments in the 1970s, when IBM virtualized its popular 1960s-era System/360 machines with VM/370.  Descendants of that software are still available from IBM and are in wide use. Server virtualization (the major VMware value proposition as outlined in its IPO) is an important concept, but like memory/storage virtualization, it is not a differentiator. Server virtualization is kind of like rack-and-pinion steering and electronic ignition in the auto market; IT users will have no other choice at some point. The actual demand for separate server virtualization software will dry up as it is added to operating software such as Microsoft Windows and Red Hat Enterprise Linux as a no-cost option.

 

In terms of benefits, users do see server virtualization in particular being used in the 2009-2012 timeframe as a new way to inculcate some much needed high availability (HA)/failover capabilities into IT infrastructures. IT Investment Research has heard more from users about HA than better resource utilization as an expected benefit of virtualization.

 

Desktop/Appliance/Application Virtualization

VMware recognized before its IPO that server virtualization faced some market-demand and "user appreciation" limitations, and in its SEC IPO documentation it notes that, according to IDC, less than 1% of non-server personal computer (PC) devices are virtualized. This indicates VMware feels there is demand for a second kind of virtualization, "desktop virtualization." Just as memory, storage and server virtualization are 1960s/1970s developments, desktop, appliance (or non-desktop/non-server) and application virtualization are 1980s and 1990s IT developments. But unlike memory/storage/server virtualization, desktop/appliance/application virtualization is not about duplicating or mirroring multiple logical resources on a single physical device. Instead, the idea is to move administration, software distribution and other complex tasks associated with running PCs in large enterprises back to a central server.

 

Desktop or appliance virtualization lets the desktop or device simply deliver the ease-of-use and ergonomic advantages of, for example, a Windows-based user interface without all the overhead of having the applications resident on the desktop or device. Application virtualization is a subset of desktop/appliance virtualization that is primarily related to one of the major issues in managing thousands of desktops or appliances: distribution of the application software to the devices running it.  Application virtualization has applicability where an enterprise decides not to fully virtualize desktop/appliance management. Citrix has built a good business doing a sort of desktop and application virtualization. It acquired the server virtualization technology of XenSource during 2007, hoping to marry the functionality of server and desktop virtualization from the desktop side in the manner that VMware is hoping to marry the two from the server side.

 

On the desktop/appliance side, virtualization has the potential to help very large enterprises running thousands of PCs. But users tell us that this version of the virtualization idea still has some IT staffs worrying, as in the 1990s when the relevant technology was MIT's X Window System, that desktop virtualization will move all kinds of systems management problems back into the data center. But everyone agrees there will be demand for virtualization technology in the management of appliances, the devices "out on the edge" of the network that are not on the desktop (and the number of which is bounded only by the world population times two, one for each hand).  But various service providers will be using such software, not the actual IT professional or user.

 

Summary

Users believe the benefits of the different types of virtualization are similar even where the concepts and technologies differ. Virtualization helps enterprises control data center costs, such as power consumption, real estate, underutilization of hardware resources, and so forth, as well as control the personnel costs relating to administering thousands of servers, desktop devices and "edge" appliances.  Based on user feedback, however, IT Investment Research believes that cost control is actually less of an issue for users than virtualization software suppliers suggest, because these "hard" costs of IT still represent a relatively small proportion of IT budgets.

 

According to users, lack of standardized operating systems, shared-services issues, and a lack of applications that take advantage of virtualization are retarding adoption. None of these is solved by the technology itself.  For example, the differences between Windows and Linux do not go away with virtualization; users still need employees with different skill sets if they have both operating environments, irrespective of where the software physically resides. Similarly, it is not trivial to get the same middleware service (an Apache web server, for example) working the same way on top of different operating environments, irrespective of where the operating environments physically reside.

 

And as with all technologies over the history of the IT industry, there will be no substantial uptake of virtualization until software such as ERP, CRM, enterprise content management, business intelligence and similar applications are rewritten to take advantage of it.  The lag between the underlying functionality and the applications catching up has often been measured in decades, not just years (for example, the full use of n-tier client/server technology).

The benefits of virtualization come directly from the flexibility that it allows.

There are three scenarios discussed here that show what the flexibility of some form of VM solution can provide.

1.  Disaster Recovery Opportunities.

    Traditional DR strategies around failover, system and whole-site replication and third-site options rely on similar server and operating systems to make them viable. Data recovery is a problem solved long ago, and IT operations are more than familiar with incremental backups, snapshots and recovery.  As more businesses begin to realise that systems and applications are increasingly mission critical, the applications themselves need to be more resilient.
   
    Having to maintain replicated server environments for Windows, Unix and/or Linux means that many data centres are over-designed and carrying unused capacity just in case one architecture-dependent application fails.
   
    This is where the power of virtualization becomes a tangible benefit. Owning or leasing some short-term capacity in its cheapest form (either Windows or Linux) could be a solution for many enterprises.  In essence, all you need to buy is CPU time and memory to host a guest VM solution of your application.  The data may be in the cloud or on portable storage that can be easily accessed by the newly stood-up virtualized platform.
   
    Needless to say, the focus then shifts to well-trained administrators who are comfortable working with hybridized solutions and not just a single flavour of architecture.
   
2.  Pre Sales Demonstrations.

    Virtualized platforms are a must for the average pre-sales or training consultant.  No lengthy installs are required, and you are no longer reliant on the right hardware being in the right place at the right time.  Now you can take a VM solution of your application either as part of an image on your laptop or on a portable USB drive.
   
    The benefit is the ability to pre-configure scenarios in advance of the session but still have the flexibility to model new scenarios as the demo evolves.  Most VM solutions allow for persistent or non-persistent storage, so you can recover quickly if you have deviated from a configuration that you need in the next session.
   
    The only limitations are processing power and disk, but with high-end laptops and portable USB drives with up to 1TB of storage this should not be a limiting factor for most demo systems.

Server in a Suitcase

    It's akin to having your test server in your laptop bag or suitcase.
   
   
   
3.  Massively Parallel Processing.


    Having worked with a Unix-based billing system that, like most billing solutions, has to deal with peak loads for rating and billing processing, I can see how virtualization could offer a solution to a common problem.
   
    Typically, disk is a cheap and well-used resource that can be added to and expanded.  There will always be a need to manage such a resource effectively, but CPU and memory in enterprise servers are expensive and less easy to add in a 24x7 production server.  Even planned downtime can severely impact the SLAs of the billing department.
   
    Virtualization can work in two ways in such a situation.
   
    At lunchtime and out of hours there is an untapped resource of idle CPU and memory in distinct units: the common desktop on each employee's desk.  If these are Windows PCs (which most are), then a VM solution that allows you to run a unit of rating or billing in a hosted Unix or Linux virtual machine lets you distribute the processing across the available computing power of the enterprise.  Whether as a stop-gap solution to handle abnormal peak loads (SMS messages sent on New Year's Eve, phone calls made in the aftermath of a terrorist event) or as a longer-term transition to more CPU and memory on the production server, virtualization allows this to happen.
   
    New Linux versions from SUSE, Ubuntu and Red Hat are becoming more and more usable for the average end user.  The interface looks familiar, and in a time of web clients a browser run on XP has the same functionality as one run on SUSE 11.0.
   
    So what prevents enterprises from mass adoption of Linux clients? The inertia of trying to replace Microsoft Office is usually cited as a reason. Despite their efforts, OpenOffice and Google Docs don't have the same draw. The answer is a VM solution that allows users to run Office applications on their Linux clients.
   
    Alternatively, adoption of application virtualization options like the Mono Project or Wine allows Windows applications to be run directly in the Linux environment without the need for guest virtual machines.
   
With the release of freeware options like VirtualBox, virtualization is set to become a growth area.  With the focus on green data centres, and with new domains (e.g. healthcare) starting to understand the need for cost-effective DR strategies, 2009 should be a good year for the proponents of virtualization.

Business does not care about virtualization.  Executives do not care about VMware, IBM, Sun, and Microsoft.  The business cares about its business: maximizing profits for the short and long term.  This is accomplished by focusing on and improving various aspects of the business and how it operates.  Each one of these can be improved upon with virtualization technology.

Ways to maximize profits:

  • Reduce costs
  • Increase productivity and output
  • Be resilient to risk
  • Attract and retain customers

Reduce costs

Many costs are incurred by IT departments.  These include money spent on electricity, hardware, staff support time, deployment time, and disaster costs, to name a few.  Virtualization can reduce all of these expenses.  Virtual storage on SANs allocates only the storage actually used, making it easier to manage and increasing utilization: 100% of allocated space is used, instead of a storage manager having to juggle space between LUNs and decide whether to overallocate or manage storage more frequently.  Virtualization also allows several server operating systems to be hosted on one piece of physical server hardware, which increases its utilization.  All of those unused clock cycles and memory are now being utilized to the fullest, saving real dollars by not having to buy a physical server for each system that demands a dedicated server OS.
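
As a small illustration of the thin-provisioning idea (capacity is promised to the guest but physical space is consumed only as data is actually written), here is a sketch using the qcow2 format from the QEMU/KVM toolchain; the file name and sizes are arbitrary examples, and SAN arrays implement the same idea with their own tooling:

    # thin_provisioning_demo.py -- hedged sketch: a 100 GiB virtual disk that occupies
    # almost no physical space until the guest writes data to it.
    # Assumes qemu-img (QEMU/KVM toolchain) is installed; name and size are examples.
    import os
    import subprocess

    DISK = "demo-thin.qcow2"

    subprocess.run(["qemu-img", "create", "-f", "qcow2", DISK, "100G"], check=True)

    physical_kib = os.path.getsize(DISK) / 1024   # bytes actually consumed on the host
    print(f"Guest sees: 100 GiB   Host stores: ~{physical_kib:.0f} KiB until data is written")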

Time of deployment for a new server drops from two or three hours down to fifteen minutes or less when cloning server or workstation templates in the virtual world.  With technologies such as high-availability and disaster recovery managers for virtual machines, there are unheard-of uptimes and true business continuity, not just disaster recovery.  Less equipment means less energy used for redundant power supplies on each server.  It also means more servers per square foot in a data center, so the data center doesn't need to waste valuable real estate.

Increase productivity and output

With much shorter deployment times and decreased maintenance and recovery times, IT staff have much more time for projects and for improving support of business units.  To the business this means less staff is required to maintain more servers, or it could mean much more is accomplished in the same amount of time.  There is also a better perception of the IT department when the business isn't upset about a server being down too long or taking too long to recover or deploy.

Be resilient to risk

Risk to the business can come in all forms.  One of the most obvious risks discussed with IT staff is the concept of "disaster recovery".  Some firms have evolved this concept into one of business resumption or continuity, as if there were no pause or hiatus and business simply resumed.  This advanced concept is certainly more of a reality now with HA or "high availability" technology.  Virtualization takes advantage of its resource abstraction to offer impressive HA features.  For example, a physical server can go down, yet the application server marches on with little or no downtime and no impact to the users.

Couple the HA technology with snapshot or backup technology and the risk of data, system, or registry corruption is mitigated as well.  With various management front-end interfaces now available, an entire site can go down physically, yet its data and features remain available virtually, either in the "cloud" or with replicated HA virtual machines becoming active at a remote site.
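
Planned maintenance is the simpler cousin of the failover scenario above: instead of waiting for a host to die, a running guest can be moved off it first. A minimal live-migration sketch, assuming libvirt/KVM with shared storage and hypothetical host and domain names:

    # evacuate_host.py -- hedged sketch: live-migrate a running guest off a host
    # before taking the hardware down for maintenance.
    # Assumes libvirt/KVM with shared storage; names are hypothetical examples.
    import subprocess

    def live_migrate(domain: str, target_host: str) -> None:
        # Move the running guest to the target host without shutting it down.
        subprocess.run(["virsh", "migrate", "--live", domain,
                        f"qemu+ssh://{target_host}/system"], check=True)

    if __name__ == "__main__":
        live_migrate("erp-app-01", "kvm-host-02")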

Attract and retain customers

This may be the hardest part, but it is one that every business strives for.  Virtualization can help here as well: by offering more product for less money, with strong resilience to risk, your business becomes much more attractive to both customers and shareholders.

Virtualization Defined

Let's not forget, virtualization is not just separating the operating system from the hardware; rather, it is the abstraction of all resources.  This means adding a layer of abstraction between two resources.  There are several current examples, such as abstracting the storage system from the storage actually used, as in the case of virtual storage.  The most obvious is abstracting the hardware of a server from the OS and software of the server.  There are also VLANs and VSANs, allowing for complicated network configurations without the complicated wiring.  Let's not get boxed in to how "it's always been".  We need to move forward and expand the exciting world of virtualization and abstraction of resources.

 

The benefits of virtualization are very straightforward to understand.  Improvements in recent years have taken away the complexity that used to be inherent in running virtualized servers, and left behind the benefits: permitting the agnostic use of hardware, enabling greater flexibility and robustness, and more easily matching server capacity to actual needs.  Think of it like packing everything when you move house.  You either put your strange-shaped objects (like a guitar or a lampstand) in special cases or leave them rattling around in boxes with other things.  Big bags and big boxes are needed for big possessions, but even so, whilst packing, you keep finding things that never seem to fit with anything else.  No matter how efficiently you packed, you can be sure that there will be lots of empty space in most bags and boxes because there was no better way to fit everything in, whilst you can also be sure that you will struggle to zip your cases shut.  Repacking to fit things in is a chore.  Finally, you can think of a hundred things you would rather be doing with your time.  Then imagine there were no fixed-size containers for you to fit your possessions into.  Instead, your things could be magically disassembled and reassembled at the other end, without leaving any trace.  All you would need is a vehicle large enough to carry them all.  Virtualization does away with the physical constraints of fitting things on to specific servers.  If you could virtualize your packing, you could stop trying to fit specific objects into specific containers, and instead just ensure your boxes add up to the right cubic capacity to cover all your possessions.
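
The packing analogy maps quite directly onto how a placement algorithm assigns workloads to hosts. Here is a toy first-fit-decreasing sketch, with made-up VM sizes expressed as fractions of one host's capacity:

    # vm_packing.py -- toy first-fit-decreasing placement of VM demands onto hosts.
    # The VM sizes are made-up fractions of a single host's capacity.
    def pack(vm_sizes, host_capacity=1.0):
        hosts = []                                   # each entry is a host's remaining free capacity
        for size in sorted(vm_sizes, reverse=True):  # place the biggest "boxes" first
            for i, free in enumerate(hosts):
                if size <= free:
                    hosts[i] -= size                 # fits in an existing host
                    break
            else:
                hosts.append(host_capacity - size)   # open a new host
        return len(hosts)

    vms = [0.6, 0.5, 0.5, 0.4, 0.3, 0.3, 0.2, 0.1, 0.1]
    print(f"{len(vms)} workloads fit on {pack(vms)} hosts instead of {len(vms)} dedicated servers")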

Better matching of physical resources to logical requirements generates a saving.  Manifold benefits go straight to the bottom line, including lower hardware costs, savings of space, and reduced energy bills. Greater flexibility and robustness also result as by-products of virtualization.  Because there is no connection to any particular machine, adding processor power or memory is greatly simplified, as is the job of updating and replacing old hardware without causing downtime.  Your VMs can exist anywhere and everywhere, meaning the logistics of relocating hardware or rebuilding a server are greatly simplified.  Big firms that utilize thousands of servers can most obviously realize all of these benefits, but the advantages of virtualization can be enjoyed by the smallest businesses too.  Virtualization means small companies can more readily pool their needs via a supplier of virtualized servers, so they also receive the knock-on benefit of reduced costs by taking advantage of virtual private hosting.

A consequence of virtualization is that we are likely to see fewer and fewer businesses running data centers.  If you took two large independent data centers and pooled them, there would be a financial benefit because of two factors.  First, virtualization means the greater the resource being pooled, the more efficiently it can be managed and matched to requirements.  This lowers costs for all the reasons given above.  Second, customers of hardware who buy in greater bulk will enjoy the best prices when negotiating with suppliers.  However, an individual business can only go so far in rationalizing the physical infrastructure demanded by its own needs.  Economic efficiencies will make it increasingly attractive for businesses to effectively pool resources by turning to third-party hosts for all their needs.  As outsourcing becomes more popular, there will be an acceleration in its take-up, not least because large outsourced suppliers will be better able to recruit, retain and provide the specialist staff needed to manage the variety of operating systems and middleware options that will continue to be in use.  This will be another facet of the economies of scale they will offer, and will become a form of differentiation based on quality of service.

Telecommunications costs continue to come down due to the proliferation of next-generation IP networks and the continuing deployment of submarine cables across ocean floors.  This makes outsourcing more tenable and permits third-party data center providers to run and provide a service that is ever more independent of the location of their hardware.  Off-shoring will be a natural consequence of this trend, which will steadily gain momentum for all customers except those where politics and national security create an insurmountable obstacle.  The nations best placed to take advantage will be those with good international telecommunications links, local manufacturing of servers, cheap and reliable sources of energy, an education system that delivers high-calibre IT staff in the right numbers, and a cost of living that permits wages to be kept low.

Of course, an analysis of technological and economic factors makes the path towards global virtualization sound easy and inevitable.  In reality, there are obstacles in the way.  The biggest real obstacles will be job protection from staff and middle management and lack of imagination from senior management.  Nobody should be surprised if IT professionals are as reluctant to give up their jobs as automotive workers are.  It is very likely that people will be more creative at finding reasons to set limits on these trends than they are at finding justifications for them.  Even so, businesses will find it hard to argue with a simple more bang-for-your-buck argument that will drive the move towards consolidation, outsourcing and off-shoring.

Demand for server resources keeps growing, and will continue to grow as developing nations catch up with the developed world.  The BRIC nations of Brazil, Russia, India and China are in the vanguard of this trend.  However, rationalization of servers has an ultimate limit in terms of the economic savings it can deliver.  If the world all used one massive data center, no more efficiencies would be possible, though this would never be attractive because competition is necessary to ensure the benefits of economies of scale are passed on to customers.  As the limit of outsourced supply consolidation is reached, a concurrent force will be increasingly important to virtualization.  This is the virtualization of all the PCs that are not servers.  Google and others are effectively working towards this goal by trying to create applications whose base unit is the internet.  That will deliver some gains towards virtualization, but there will likely be a balance struck between the freedom of the web and the freedom of choosing to use applications that could only appeal to a tiny fraction of the global market being pursued by Google.  One possible outcome is that these separate technological trends will find a harmony somewhere down the line.  For example, key corporate applications like Customer Relationship Management or Business Intelligence could be redesigned with virtualization in mind.  The idea here would be to permit a more finely-tuned matching of the resources purchased with those needed.  This would enable corporate customers to acquire virtualized services like BI and pay for them based on the number of users and the complexity and amount of data they work with.  This would supplant servers (virtual or otherwise) as the base unit of their capacity-related IT costs.  The virtualization of applications is still primitive in relation to that possible outcome, but progress would be more rapid if the virtualized data center market had already consolidated, causing suppliers to aggressively explore refinements of their business model.

Predicting the future of technological and economic change is notoriously difficult.  We must still plan for the future even so.  One aspect of virtualization seems certain.  Cost savings will drive a synergistic trend towards outsourcing and consolidation of data centers.  That will leave everybody else with more time and money to focus on the rest of their business.

The common concept of server virtualization has grown through ground-breaking new technologies over the years to a point today where server virtualization vendors tend to implement similar features with only minor innovation. There are two virtualization features in my mind which could be the next move in IT virtualization:

Application-Centric Virtualization

Virtualization systems today work with operating systems. They support deploying virtual machines, failing them over to other physical hardware when disaster strikes; they optimize physical memory utilization and handle many more OS-related tasks. What is more, the virtualization platform can deploy new operating systems automagically, according to preset scenarios. Well, where's the problem?

 

It's the application, not the operating system, that end users really need to work with. They need the application interface to be available on demand, to respond fast, and to stay available should one of the company's facilities fail. And there are many services, like databases, file servers, mail stores and others, on which that application interface depends. The operating system sits at the end of this application dependency chain.

 

This means we have a mature technology available to cover the last link of the chain. Now it's time to move the operating system's role to the background, where it will provide a simple runtime environment for applications. Let's move the focus to applications and let them be managed through the same interfaces we use to manage virtual servers.

 

That's not enough, however. We need to keep those applications highly available and their data consistent. There's one problem with current OS virtualization systems' high availability solutions: if one physical host fails, the application stops until the OS is auto-restarted on another. Not to mention that consistency is usually compromised in such events.

 

To move closer to 100% available applications, there are two options:

  1. To redesign each (suitable) application separately to cluster its transactions among multiple running instances, and thus be ready to transparently fail over if a single piece of hardware fails.

  2. To create a generic API layer in the operating system to take care of applications' HA features, similar to what Volume Shadow Copy did for applications' consistent snapshots on the Windows platform.

Server-bound storage

Most current server virtualization features depend on central storage connected to many physical servers. Few companies realize this is a single point of failure: if that array fails, every connected virtual machine and the applications it runs are disrupted. Mirroring the array – or better yet, clustering it using technologies like those from LeftHand or EqualLogic – can eliminate this SPOF.


There's another option, however – use the same x86 box for both the virtualization system and storage. Just imagine a common server used for virtualization: it boots from an internal flash drive and still has some six drive slots available to accommodate SAS or SATA drives.


By implementing a synchronous block replication feature – or adopting an existing one – to replicate data between boxes, we could start building an autonomous virtualization cluster. Should one box fail, the other will transparently take over both its applications and its storage.
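
To illustrate the idea, here is a small conceptual sketch of synchronous block replication between two boxes. The ReplicatedBlockDevice class is made up for illustration, not any vendor's replication engine, and a real implementation would replicate over the network rather than in-process.

    # Conceptual sketch of synchronous block replication between two boxes.
    # The class below is made up for illustration; a real system would
    # replicate over the network. The key idea: a local write is acknowledged
    # only after the peer confirms the same block, so a surviving node can
    # take over storage transparently.

    import hashlib

    class ReplicatedBlockDevice:
        def __init__(self, peer=None):
            self.blocks = {}      # local block store: block_number -> bytes
            self.peer = peer      # the partner node's device (or a network stub)

        def write(self, block_no, data):
            self.blocks[block_no] = data                # 1. write locally
            ok = self.peer.replicate(block_no, data)    # 2. push synchronously to the peer
            return ok                                   # 3. acknowledge only if both succeeded

        def replicate(self, block_no, data):
            self.blocks[block_no] = data
            return True

        def checksum(self):
            joined = b"".join(self.blocks[k] for k in sorted(self.blocks))
            return hashlib.sha1(joined).hexdigest()

    # Two boxes mirroring each other: after a write, both hold identical data.
    node_b = ReplicatedBlockDevice()
    node_a = ReplicatedBlockDevice(peer=node_b)
    node_a.write(0, b"virtual machine image block")
    assert node_a.checksum() == node_b.checksum()

The design point worth noting is step 3: because the write is not acknowledged until the peer holds the block, the two boxes never silently diverge, which is exactly what a transparent takeover requires.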

Worried about the performance of such storage? You might be right: you need tens of SAS drives to satisfy some applications' IOPS needs. Soon, however, a single SSD drive will match that performance in both read and write operations.


Putting the right virtualization platform, a block replication system, two SSD drives, and some large-capacity SATA drives into a single box is the dream solution of most server administrators. We have all the technologies needed to build such a system; the time when some server vendor designs it is coming. I believe.


Michael Kramer
Thu Dec 4 4:51am
I'm not clear on how using a single SSD drive would eliminate what you mentioned was a "single point of failure" that exists on centralized storage. That centralized storage is, more often than not, extremely resilient to drive failure and much more scalable than a few slots that could be available on the shared DAS (direct-attached storage) setup that you suggest. I feel you would still have the problem of using all the available storage while still having unused CPU and memory within those boxes.
Also, SSD still needs to come a long way to be competitive and truly better than spinning disks.
Lukas Kubin
Fri Dec 5 7:45am
Michael,
what I meant was true clustering across boxes - both applications and storage. Those technologies already exist in, e.g., LeftHand storage clusters, which replicate blocks synchronously among a practically unlimited number of boxes. There is no single point of failure in such a design.

The usage of SSD drives (or any other kind of memory-based devices) I suggested was mainly to tier the most frequently used or randomly accessed blocks onto the fastest storage. The purpose is to replace the performance of multiple SAS drives with an in-box mirrored pair of equally fast SSD drives. There are existing technologies that can move data between fast and slow drive types online - e.g. FalconStor IPStor.

Virtualization as a technology has existed since the mainframe era in large glasshouse data centers.

The transfer of virtualization technology onto commodity x86-based hardware has increased its adoption.

The first generation of x86-based virtualization technologies saw limited use in testing, development, and re-hosting legacy applications.

We are seeing the rapid evolution of virtualization technologies by the vendor community into what we know as Virtualization 2.0.

What is Virtualization 2.0?

In my view, Virtualization 2.0 is about taking virtualization from development, staging, and production servers to the desktop (Virtual Desktop Infrastructure, or VDI), to mobile devices, and to affordable cloud-based disaster recovery/business continuity applications.

Virtualization 2.0 signifies free or inexpensive access to advanced hypervisor-based virtualization technologies from various vendors.

This is enabling a shift in deployment models for virtualization from the typical scale-up approach (SMP servers with large memory) to a scale-out model on low-cost, off-the-shelf hardware.

There is a lot of emphasis on hardware assistance (Intel VT, etc.) and on provisioning, metering, and management tools.

Trends and Technologies propelling Virtualization 2.0

Green IT - The proponents of Green IT have raised consciousness of energy efficiency and resource utilization.

Cloud Computing - Creating large, uniform compute clusters to enable cloud computing is now common practice. Amazon has taken the lead in providing on-demand compute and storage resources that let you scale up your operation. The portability of server state provided by virtualization is a fundamental requirement for the success of cloud computing.

Blade Servers - Blade servers are improving manageability and creating opportunities for labor and material savings in the data center. Their elegant design, easy operation, and reliable performance have helped the adoption of virtualization.

Networked Storage - Networked storage, both SAN and NAS, has helped consolidate servers and services into virtual machines by removing the scalability constraints imposed by direct-attached storage. Today, VMs are mounted directly from networked storage.

Networking - Average internet connectivity speeds are growing, and this growth is fueling the need for better-utilized, cloud-based virtualization.


Impact of Virtualization 2.0 on Storage and Networking

Virtualization 2.0 will not only feed on networked storage but also generate a need for larger network capacity.

  • In the case of VDI, the virtual machines of non-logged-on users remain dormant and, when accessed, consume a lot of bandwidth.

  • Mobile devices are the next target for virtualization, due to the falling cost of flash storage and the ease of application downloads enabled by better connectivity.

  • In disaster recovery scenarios, storage-network-based replication helps virtualization create easy recovery options.

What are the benefits of Virtualization 2.0...

Virtualization demands a better-managed data center environment, because higher compute and storage utilization increases power and cooling density.

The better utilization of compute, network, and storage resources translates into hard-dollar savings on IT infrastructure while promoting the Green IT agenda.

Cloud computing, or "servers in the cloud," is relevant to internet-centric businesses, academia, and one-off projects.

A more compelling story is emerging from virtualization in enterprise cloud computing.

There is rapid development, evolution, and availability of provisioning, metering, and management tools for virtual servers and cloud computing.

This trend is creating opportunities for inexpensive capacity building in enterprise cloud computing farms using low-cost servers, network elements, and storage (iSCSI).

Virtualization allows typical stateless workloads, like internet security gateways, web servers, and application servers, to be moved into enterprise clouds to reduce the overall cost of the infrastructure.

The savings are realized by redeploying expensive servers and network elements to new data-oriented, stateful applications.

Disaster recovery and business continuity have become compelling in terms of cost and reduced deployment complexity, thanks to advances in the synergy of storage and virtualization technologies.

The availability of ample bandwidth and on-demand cloud computing is enabling even small and medium enterprises to implement working disaster recovery environments using virtualization.

Challenges of Virtualization 2.0...

There is a significant challenge in managing large-scale virtual infrastructures: there are no clear boundaries or responsibilities among the network, storage, and data center management teams.

The impact of faults and incidents is felt across the environment. It is hard to troubleshoot performance and stability issues because there is only a small body of knowledge on virtual-server implementations of software products.

The support teams of most enterprise software vendors are not particularly geared to support virtual environments.

Open-source product deployment is a little more complicated, as there is a lack of community participation on deployment issues.

There is a significant learning curve in realizing the synergies of virtualization and networked storage. Most virtualization is still limited to development, test, and isolated production environments.

Security of virtual server environments is a hard nut to crack. The wider availability of virtualization technology leads to more discovery of vulnerabilities and exploits, which creates resistance to deploying mission-critical workloads on virtualization.

Virtualization 2.0 is here to stay...

In spite of the learning curve, cost, and security issues, virtualization is here to stay, due to the flexibility, energy savings, and cost savings it delivers.

Virtualization helps us by improving the time-to-innovate, time-to-test and time-to-deploy.

It improves the efficiency of IT staff and allows them to focus on innovation in servicing customer requirements.


In the IT sector, virtualization can be defined as enabling a physical resource to perform in a way that differs from its physical attributes or functionality.  This can mean making a single server behave as though it were many servers, making many servers behave as though they were one, or making a server or servers perform the functions of servers with different capabilities.  Virtualization allows for a much more efficient utilization of infrastructure.

There are two kinds of server virtualization: hosted virtualization (or OS-based virtualization) and virtualization through a hypervisor.  With hosted virtualization, an operating system on a single physical server runs multiple instances of the operating system; each instance performs as though it were an independent server.  These virtual servers all have to run the same operating system.  With hypervisor-based virtualization, a layer of software (the hypervisor) sits on top of the physical server.  The hypervisor allocates portions of the physical server's hardware resources to create multiple independent and secure virtual servers from the single physical machine.  Unlike with hosted virtualization, hypervisor-managed virtual servers can all be running different operating systems.
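
As a small illustration of the hypervisor model, the sketch below lists the guests a single physical host is running. The use of the libvirt Python bindings and the qemu:///system connection URI are assumptions made for this example; any hypervisor management API could play the same role.

    # Minimal sketch: ask a hypervisor which virtual servers it is running.
    # libvirt and the qemu:///system URI are assumptions for illustration.

    import libvirt

    conn = libvirt.open("qemu:///system")       # connect to the local hypervisor

    # Running guests are reported by numeric ID; look each one up for details.
    for dom_id in conn.listDomainsID():
        dom = conn.lookupByID(dom_id)
        state, max_mem_kb, mem_kb, vcpus, cpu_time = dom.info()
        print(f"{dom.name()}: {vcpus} vCPUs, {max_mem_kb // 1024} MB max memory")

    conn.close()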

Server virtualization allows unprecedented optimization of an IT environment through several unique capabilities.  

  1. Each virtual server runs independently from the others on the same physical hardware, preventing application and OS incompatibilities.
  2. A running virtual server can be moved from one physical machine to another to manage workloads (see the migration sketch after this list).
  3. Virtual servers can be copied from one physical machine to another without modifications or hardware alterations, creating high availability similar to that of more expensive redundant physical servers.
  4. Virtualization allows for pooling server resources rather than managing individual servers, increasing server utilization rates.
  5. Virtualization can reduce provisioning and re-provisioning times.  Individual machines can be removed for maintenance without interrupting application use.
  6. Virtualization reduces environmental impact and costs.  Pooling resources decreases power, cooling, and floor-space requirements.
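
As a hedged example of capability 2, the sketch below live-migrates a running guest between two hosts using the libvirt bindings. The host and guest names are made up for illustration, and a production setup would also need shared or replicated storage visible to both hosts.

    # Hedged sketch of capability 2 above: moving a running virtual server
    # between physical machines. Host and guest names are hypothetical.

    import libvirt

    src = libvirt.open("qemu+ssh://host-a/system")    # source physical machine
    dst = libvirt.open("qemu+ssh://host-b/system")    # destination physical machine

    dom = src.lookupByName("web-frontend-01")         # a running virtual server

    # Live migration: the guest keeps serving requests while its memory is copied.
    migrated = dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
    print("now running on:", migrated.connect().getHostname())

    src.close()
    dst.close()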

Who Benefits from Virtualization?

Anyone with an IT infrastructure that includes servers can benefit from virtualization's increased server utilization.  Also, anyone supporting applications running on multiple operating systems will see a decided cost advantage in the ability to isolate operations and run multiple OSes on a single physical server.  The benefits include:

  • Hardware consolidation: which saves capital and energy.
  • Standardization of operating system: which reduces errors.
  • Rapid provisioning: which improves time-to-market.
  • Automation: which reduces staffing of facilities.
  • Abstraction: which makes individual servers interchangeable.
  • Portability: which improves availability, recovery, and flexibility.
  • An overall decrease in operational costs.

How Can Virtualized Server Environments Be Used?

Virtualization provides a cost-effective means of creating and managing distinct server environments, which is ideal for resolving key IT challenges, including:

  • Business recovery
  • Maintenance windows and load balancing
  • Abstraction layers
  • Virtual labs for development and testing
  • Patching
  • Security

How Much Does Virtualization Improve Server Performance?

In most cases, processors are extremely underutilized due to application compatibility and availability issues.  It is estimated that servers are currently utilized at a disappointing rate of 5%.  With virtualization, however, applications that are incompatible or require different OS parameters can be isolated with their own allocated hardware and OS resources, allowing a single physical server to run more applications and use more of its performance potential.  This dramatically increases server efficiency and utilization, reducing the number of physical machines required and thereby reducing overall server infrastructure costs.
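
To make the utilization argument concrete, here is a back-of-the-envelope sketch of the consolidation it implies. The 100-server fleet size and the 60% target utilization are illustrative assumptions; only the roughly 5% starting figure comes from the estimate above.

    # Back-of-the-envelope consolidation math for the ~5% utilization figure.
    # The fleet size and target utilization below are illustrative assumptions.

    physical_servers = 100          # an example fleet of underutilized machines
    current_utilization = 0.05      # ~5% average utilization, as cited above
    target_utilization = 0.60       # a conservative post-consolidation target

    # Total useful work currently done, expressed in "fully busy servers".
    useful_capacity = physical_servers * current_utilization            # 5.0

    # Hosts needed if that same work runs as VMs packed to the target level.
    hosts_needed = max(1, round(useful_capacity / target_utilization))  # 8
    print(f"{physical_servers} servers -> {hosts_needed} virtualization hosts "
          f"(~{physical_servers // hosts_needed}:1 consolidation)")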

Is Virtualization Safe and Secure?

There is a misperception that virtualization decreases security.  Virtualization actually can increase the safety and privacy of information by separating various applications and services, creating an additional level of security.

Is Virtualization Right for Your Business?

On balance I think it's very clear that virtualization offers huge cost advantages for small to moderately sized businesses, which can allocate IT budgets more effectively using virtualized hosting and other services.   There are also different but major benefits for enterprise-level businesses, where server needs can be reduced significantly using various virtualization techniques.

As internet travel publishers we've had many experiences with both the challenges and the pitfalls of partial virtualization, especially for DNS and hosting services, and indirectly as participants in cloud-based storage of photos, blogs, and email.   Our small infrastructure needs meant that virtualization issues with our own small number of servers were not of much consequence.

Virtualization + Cloud Services = Big Savings

For hosting and DNS there are generally large cost benefits in virtualized server environments.  This type of virtualization can be combined with cloud services like email and documents to offer a business exceptional savings.  The approach is especially suited to small businesses that do not need, or can't afford, a local server and network but require 24/7 uptime for their website and online services.  Even a tiny business can easily manage a website, blog, email, and documents for trivial to *zero costs* thanks to environments like WordPress or Google free hosting and blogs, Google or other free document services, and Microsoft, Yahoo, or Google free mail services.

Over the past several years the ability to customize and combine or "mashup" complex and free services and website applications has grown dramatically, and these very inexpensive or free services now offer exceptional value to even the smallest business.   Even a huge enterprise-level encrypted website with robust shopping cart functionality, security certificates, and more can be run remotely and effectively at very low cost via any of dozens of remote hosting services, all of which use partial virtualization to keep costs low.

Yes Virginia, Santa Uses Virtualization, too.

There are also clear virtualization benefits for enterprise deployments with high security needs, although in this case they come not so much from remote hosting and DNS, which may not suit larger businesses.  Instead, large-scale deployments that utilize 24/7 data centers and many staffers will be able to use virtualization to manage extensive resources more flexibly and effectively; those resources are likely to include a storage area network or other centralized storage as well as several networked servers that must function together in a "no downtime" environment.  

The earlier insight by Lukas Kubin above sheds some light on how virtualization can be used with respect to the implementation of storage area networks.

Without virtualization, many routine maintenance tasks can become problematic in that they create downtime for workers and for the websites, lowering productivity and confusing or losing potential customers.

So, why isn't everybody reaping these benefits?

For the many small businesses that are struggling with server management issues, I think the most common impediment to increased virtualization is simply the difficulty of, or confusion about, switching from a managed local server and network to a remotely managed data center environment.  Although I'm now fairly confident that our largest travel websites would have been (and would now be) both easier and cheaper to manage if we were set up at a data center with virtual dedicated servers, the cost to move and reconfigure the whole show would now be prohibitive, since it uses several servers and is also integrated with the office network, email, spam filtering, and more.

In addition to the advantages mentioned above, many business websites will experience greater uptime and excellent load balancing in remotely hosted virtual server environments, which have tumbled to a fraction of their former prices even as they have dramatically improved in quality, uptime, and support response time.  As an example, only a few years ago I was spending $800 per month for a single dedicated server hosting several websites that I can now serve via a virtual dedicated setup at under $50 per month.  Unfortunately our main projects can't be moved easily to this type of environment at this time.

Although we have an excellent server administrator, the tasks our staff perform only a few times a year are a daily routine at the hosting center.  So even assuming *equal employee quality*, regular IT folks are likely to need more time to perform routine but rarely done tasks (such as IP provisioning, zone files, domain troubleshooting, and so on).  At a data center these are handled either automatically or routinely by staff, often lowering the net cost for both the data center and the customer.

Summary:

It was not very long ago that virtualization was poorly understood by many server administrators and not commonly used for most applications; now almost all enterprise applications and most remotely hosted websites and cloud computing services use partial virtualization to manage resources more effectively and reduce the cost of creating effectively independent websites and networks.  Regardless of the size or focus of your business, it would be advisable to weigh the many benefits of virtualization against the cost of revising your enterprise configuration and/or making better use of remote virtualized hosting and cloud-based services.


I first realized the possibilities of virtualization back in 1981 -- back before the first Macintosh, back when it wasn't even possible to major in computer science. But Stanford University had implemented a primitive network of dumb terminals which all tapped into a central mainframe. It had been around since 1975, and was dubbed the Low-Overhead Time-Sharing System (or LOTS).

It saved money -- mainly because each workstation was just a display. But we also realized its biggest drawback: resources were shared. When the load was heavy, games like Rogue and Wumpus were disabled. During finals week, user accounts were limited to one hour a day -- except between the hours of midnight and 6 a.m. And when the system crashed under the load, you'd hear every student at every terminal groaning in unison. Eventually a second mainframe was added, which one clever student dubbed "LESS" -- LOTS' Even Slower Sister.

That experience left me with fond memories -- and some real-world benchmarks. It suggests the first obvious test for a virtualized system's performance is: do users notice? Ten years later I was working at a biotechnology startup, and our sys-admin began offering some applications that were hosted on the server. We learned quickly not to use them -- because they were dramatically slower than the applications which were already installed on our workstations. That system failed to meet a benchmark set by college students in 1981: response times shouldn't be prohibitively slow. In the end, that system was always down -- not because it was unreliable, but because the IT department was desperately trying to find ways to improve its speed.

We take for granted the virtualization capabilities of today. Now users at a workstation may not even realize that they're working on a virtual server -- and it has the opposite effect of the systems from the 80s. It allows companies to purchase more "servers" with less hardware -- which creates more capacity for less money. Now our network structure has other benefits -- the user's desktop reappears wherever they log into the network, and they can tap into shared directories that can be accessed from anywhere in the country.

But beyond the obvious miracles are the miracles they never see in the underlying hardware.

Back at Stanford, my roommate pasted his favorite saying to our wall: "Any science, sufficiently advanced, will be indistinguishable from magic." And there are exciting yet real possibilities that come from virtualization. It can centralize backups and security upgrades. It can create dedicated "compartments" that vastly simplify the allocation of resources. It's cheaper and easier, and -- most importantly -- it turns one computer into many servers.

Virtualization is magic.

My college days are long gone, and even the old computer science center was finally replaced. (I heard the system's predecessor was an IBM 2741 which dated back to 1966.) But it's got me wondering what the next 40 years will bring. The dumb clients of today are surprisingly smart -- from laptops to iPhones and even cell phones that combine the processing power of them all. To make the internet truly ubiquitous, we'll need servers scattered across the planet -- at the library, at the bus station, at Starbucks. And we'll only achieve a radical build-out on this scale if we can keep server costs low. To put it simply: we can't achieve the next generation of computing without a corresponding reduction in server costs. Virtualization is the key to our future.

It's being adopted by major corporations because of its obvious financial benefits -- but it's interesting to consider how else it might change our society. Barack Obama is now talking about "modernizing the infrastructure" of our schools -- and it makes me smile to think the grade school students of today will have better computers than we had at Stanford in the 1980s. But cost-conscious schools will almost certainly have to adopt the cheapest server technology they can find. Whether they know it or not, I'm certain that tomorrow's grade school students will be working on virtual servers.

I don't know what devices they'll be using. Maybe they'll have tablet internet devices which display "See Dick Run" in grey nanotech pixels. But I'm absolutely convinced that they'll be downloading their lessons and uploading their homework onto virtual servers.

Ramkaran Rudravaram
Thu Dec 4 1:49am
The magic show has just started...Cloud computing...