About This Case

Closed

24 Jan 2008, 11:59PM PT

Bonus Detail

  • Top 3 Qualifying Insights Earn $350 Bonus

Posted

2 Jan 2008, 12:00AM PT

Industries

  • Advertising / Marketing / Sales
  • Enterprise Software & Services
  • Hardware
  • IT / IT Security
  • Internet / Online Services / Consumer Software
  • Legal / Intellectual Property
  • Logistics / Supply Chain
  • Media / Entertainment
  • Start-Ups / Small Businesses / Franchises
  • Telecom / Broadband / Wireless

The Shift To Computing As A Utility

 

Closed: 24 Jan 2008, 11:59PM PT

Earn up to $350 for Insights on this case.

While there's been plenty of talk about the move to software-as-a-service, an equally interesting one may be hardware-as-a-service. Certainly, Sun and IBM have pushed for utility computing offerings -- and Amazon has done quite well with its EC2 offering. There's been talk for years that Google could get into the space as well.

However, even with all the ROI support that marketing folks from Sun and IBM throw around, it still seems risky. We're trying to understand whether it makes sense for large IT organizations to look seriously at moving over to "on-demand" computing systems, or whether it pays to wait. Under what conditions would it make sense, and what are the biggest risks involved? If you were a consultant in charge of making the case for or against a utility computing move to a Fortune 500 company (recognizing that there are different issues involved with every individual company), what key points would you focus on?

Clarification: This is about making the case for the company *using* utility computing, rather than offering it as a service.

15 Insights

 



I don't think very many people understand what is going on with utility computing, so I will try to explain. Google is already doing on-demand computing, and any Fortune 500 company that offers a public web service is also providing on-demand computing... to itself.

First, in terms of generic hardware support for someone's applications, on-demand computing is a home run, provided that you can get the contract for the software services to sell. If it's a question of generating demand for on-demand computing services, though, I think the market is fairly saturated.

Now, to address the larger issue. Hardware-based, generic on-demand computing is not nearly as important as on-demand software services, which imply on-demand hardware availability internally. Any company providing software services should bill itself as a hardware service provider. This way, it can manage expansion the way Google continually expands its email capacity, and at the same time it can accurately bill out generic hardware service management to other companies.
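As a hedged illustration of that "bill yourself as a hardware provider" idea, here is a minimal internal chargeback sketch. The rates, metric names and department names are hypothetical; real consumption figures would come from whatever metering the company already runs.

```python
# Minimal sketch of internal hardware chargeback. All rates are assumed,
# purely for illustration; usage figures would come from monitoring.

RATES = {
    "cpu_hours": 0.12,          # $ per CPU-hour (assumed)
    "storage_gb_month": 0.18,   # $ per GB-month of storage (assumed)
    "bandwidth_gb": 0.10,       # $ per GB transferred (assumed)
}

def monthly_bill(usage):
    """usage: dict mapping the metric names above to consumed quantities."""
    return sum(RATES[metric] * qty for metric, qty in usage.items())

departments = {
    "email":     {"cpu_hours": 4_000,  "storage_gb_month": 9_000, "bandwidth_gb": 1_200},
    "web_store": {"cpu_hours": 12_000, "storage_gb_month": 2_500, "bandwidth_gb": 30_000},
}

for name, usage in departments.items():
    print(f"{name}: ${monthly_bill(usage):,.2f} per month")
```

The same rate card that prices internal consumption could, in principle, later price the spare capacity sold to outside customers.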

Devin Moore
Fri Jan 4 8:44am
P.S. to the clarification: If you are using utility computing, you are accepting it from yourself as a service, so my insight still applies. The biggest reason for a Fortune 500 company to move to a utility computing system would be the maintenance cost savings. Suppose that your company is located somewhere that makes maintaining your own compute grid prohibitively expensive, e.g. downtown Manhattan. Using utility computing would provide your company with the availability and support of a massive warehouse full of systems, without the insane cost of housing all of that physically in Manhattan. Modern networks are so fast that you're unlikely to notice significant latency, so you save twice: on managing services locally and on the physical space needed to maintain a compute grid of any kind.

Conventional wisdom is that we plan and prepare for the highest possible level of network usage. Economic realities dictate that, at best, we prepare for the highest probable level of network usage. Even at that, we leave most of our daily capacity unused, simply wasted, because we need it as a buffer between daily usage and peak usage. We buy more than we need, because someday we might need more.

Further aggravating this is the so-called "Digg Effect" (formerly the "Slashdot Effect"), or as I like to think of it -- the problem we want to have. More and more we focus on social media, blogs and news sites such as Digg, Slashdot, Reddit, Engadget, etc. as marketing tools. Social media marketing is in reality a kind of word-of-mouth on steroids. Thousands, tens of thousands, even hundreds of thousands of online users pointing each other to your latest product release or other major company event announcement. Any marketer's dream scenario. But when it's successful, really successful, an unprepared company can find that its web server and attendant web applications have ground to a halt. All those thousands of users vying for the same limited bandwidth simultaneously. Paralysis by success.

So let me ask you. Would you buy an extra warehouse and let it sit empty most of the year because for a few random days you need extra capacity? No, you'd probably lease a facility for those few extra-capacity days or make some other "as needed" arrangement.

Amazon, Google, and others have the same issue with peak and daily usage. They have invested millions in load-balanced, multiple-redundant servers that spend most of their time running at a fraction of capacity. Now some of these companies have decided to sell that spare capacity to others based on a simple metered business model. Like your electric company, you pay for what you use, and you are prepared (to the extent that they are prepared) for almost any level of peak usage.
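To put rough numbers behind the warehouse analogy, here is a back-of-the-envelope comparison (all figures are illustrative assumptions, not vendor pricing) between owning enough servers for peak load year-round and renting the spike capacity on a metered basis.

```python
# Toy comparison: owning peak capacity vs. metering the spikes.
# Every number below is an assumption for illustration only.

baseline_servers = 20               # servers needed for normal daily load
peak_servers = 100                  # servers needed on a handful of spike days
spike_days = 10                     # days per year at peak load
cost_per_owned_server_year = 3_000  # amortized purchase + power + admin (assumed)
cost_per_rented_server_day = 15     # metered rate (assumed)

own_for_peak = peak_servers * cost_per_owned_server_year
own_baseline_rent_spikes = (baseline_servers * cost_per_owned_server_year
                            + (peak_servers - baseline_servers)
                            * cost_per_rented_server_day * spike_days)

print(f"Own peak capacity all year:    ${own_for_peak:,}")
print(f"Own baseline, rent the spikes: ${own_baseline_rent_spikes:,}")
```

Under these assumed figures the metered approach wins by a wide margin, which is exactly the point of the "pay for what you use" model; the gap narrows as spike days become more frequent.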

Yes, the company looking to integrate these tools needs to have proper management of the off-loaded content, proper security controls and internal governance, and a realistic view of the long-term prospects of the company making the metered offering. Frankly, many of these same concerns should already be addressed for content on your internal network as well. Certainly, at some level, almost any company of any size can profit by making use of these offerings as part of an intelligent, integrated information management plan.

Use what you need, pay for what you use, be fortified against the paralysis by success scenario. Win, win, and win again.

Timothy Lee
Mon Jan 7 5:50pm
I'm going to respectfully disagree. I think that for most Fortune 500 companies, planning for sudden usage spikes on their websites is a relatively small fraction of the overall IT budget. Procter & Gamble or ExxonMobil almost certainly already have beefy enough servers to withstand a "Slashdot effect" level of traffic. And they're not likely to see explosive traffic growth of the sort YouTube or Facebook have experienced lately.

For the kind of traffic most Fortune 500 companies can reasonably expect to receive, spending a few tens of thousands of dollars on extra web hardware will be more than enough to withstand any traffic spikes they're likely to see. They're just not going to get the kind of traffic a Google or an Amazon has to deal with, and the responsiveness of their website isn't as crucial to their corporate success anyway.

On the other hand, if we're talking about a Fortune 500 company like Google or Microsoft, they're not likely to be willing to relinquish control of what is, for them, a core competitive asset.

I think the analogy to electricity is misleading. Every company in the country uses electricity that's identical to the electricity used by every other company. Computing isn't like that at all. Every company has slightly different hardware, software, and support needs. It therefore requires quite a bit more work to farm out IT resources than it does to farm out power generation. And that, in turn, reduces the potential savings and increases the potential for screwups.

If you lease laptops for your staff according to the specification "uses a certain operating system, has enough memory and processing power, and is not older than one year at the time of installation", does that constitute "hardware as a service"? With blades and virtualization, the issue of server hardware is more or less irrelevant, and with SANs the storage hardware is irrelevant too.

That said, there are a number of issues with "hardware as a service". One is that you lose control. What do your users actually have in their computers, and what guarantee do you actually have that it will work as ordered? If you farm out your server park, what kind of guarantee is offered? This is no different from outsourcing your web hosting - only the security requirements are far more stringent, since you do not want anyone outside your control to be able to access your data center. If you farm out the operations, there is also the issue that while the cost may be under control, the quality may suffer as more and more operations are moved to India or China (remote monitoring, for instance). And it is not certain that you have control over the costs - as quality degrades, costs may rise as users start demanding better service.

User requirements for support (how do you explain what the problem is to someone who cannot see the screen, in time to make that crucial presentation?) are likely to trip up outsourcing. But operational requirements are also a potential trap. How does the outsourcer ensure that the system is working 100 % around the end of the month and the end of the quarter, and 99.99 % at other times? How sensitive are the systems really? Knowing yourself is essential to setting appropriate requirements, and before you do that there is no way you can purchase the service. Having a couple of dry runs before the outsourcing, so you know what you are ordering, is essential.

The positive side is that if you have got the requirements right, and have metrics for how they are fulfilled and an outsourcer who fulfills them, the sailing is likely to be extremely smooth. If operations work as ordered, then there is a big headache gone, and it is no harder than making sure you have electricity (for which you have to have two separate operators, with different transmission links, and assured voltage and current guarantees, plus ample backup power just in case).

Either way, an outsourcing deal has to be managed. Whether the advantages outweigh the disadvantages depends on the business - probably, it is more important if you can use your own system as a test bench, in which case it makes a lot of sense to have your own people run operations, since you get more and better feedback that way.

You have to be careful what you farm out, and how it is monitored and managed. Backup, as mentioned, is a good service to outsource, as is virus control (most companies already outsource this without thinking about it, as they subscribe to antivirus file updates).

Hope this helps.

//Johan 

I think I'm in the "against" camp, at least if we're talking about a Fortune 500 company. I can see two good rationales for the utility computing model. One is economies of scale. When small and medium-sized companies build their data centers in-house, they often can't justify the expense of a full-blown data center, with features such as nightly off-site backups, redundant power supplies, failover server hardware, regular server patching, 24/7 tech support, etc. Signing up for a computing utility plan allows a small company to effectively share those costs with several other small companies, enjoying most of the benefits at a fraction of the cost.

The second benefit of computing as a utility is the ability to gracefully handle rapid growth or highly erratic usage patterns. This applies most clearly to technology startups, which have been known to grow from a few hundred users to several million in a matter of months. Building an application around a service like EC2 gives a company expecting rapid growth the peace of mind of knowing they won't run out of hardware capacity due to too much growth.

I don't think either of these rationales really applies to a Fortune 500 company. There's enough IT work in a Fortune 500 company to keep several full-time systems administrators busy running backups, applying patches, swapping out hardware, etc. So there's not a lot of room for cost sharing. And unless the Fortune 500 company in question is a technology company like Microsoft or Google (which wouldn't be interested in buying utility computing services for obvious reasons), it's not likely to experience large, unpredictable spikes in demand for its computing resources. Most of the IT infrastructure will be focused on internal operations, and demand for those will be fairly steady from week to week.

On the other side of the ledger, there are important downsides to relinquishing control of key IT functions. The people who manage outsourced IT services are never going to be as responsive or as concerned with your company's specific requirements as a full-time, in-house IT staff. They'll have multiple clients to worry about, and they won't have spent as much time on the corporate campus interacting with the employees and becoming acquainted with the company's culture. If the hardware is hosted offsite, it will inevitably suffer somewhat in performance and reliability. Perhaps most importantly, an outside IT company will be reluctant to do very much customization of its setup to accommodate a particular company's needs. An in-house IT shop can perform a variety of customizations to ensure that the IT infrastructure is well-honed for the particular needs of the company.

In short, IT is the nerve center of a Fortune 500 company, and it's generally not a good idea to relinquish too much control over it to outside firms. If the company is large enough to justify the expense of a full-blown, dedicated data center, it makes sense to purchase the hardware outright and hire staff to manage it. That will save the overhead associated with outsourcing, and in the long run it will ensure that IT infrastructure is better integrated into the company's operations.

Finally, I should note that while I wouldn't farm out the IT infrastructure of an entire Fortune 500 company, there might be individual divisions that have idiosyncratic needs that are best met by utility computing. For example, if a research and development division needs access to a high-performance computing cluster, but wouldn't use it enough to justify buying it outright, that would obviously be a case where outsourcing would be justified. This might apply, for example, to a car company that needed to perform a series of compute-intensive car crash simulations once every few months. In that case, renting access to a high-performance computing cluster might be the way to go. 

 

 

The question of the value vs. risk of computing as a utility is right on. There are huge potential benefits to be gained for both the bottom line and risk reduction, but there could also be significant problems. I think the time is right to examine this move, but it will require a lot of forethought and planning.

Commodity and Specialty

There are definitely margins to be gained by moving some of your company's IT needs out-of-house. But you need to be careful about which ones. I would sit down with the IT staff and make a list of all the different services that your company needs to get from its IT infrastructure: things like email, teleconferencing, data backup, etc. Then divide them into two categories. The first is what I will call commodity services - those services that every company needs and that differ very little except for some minor details. Email is a good example of commodity IT: everyone uses basically the same system, with minor differences like storage limits or special address books.

IT services that your company relies on that are uncommon and perhaps even unique to your company fall into the second category of specialty services. For example, let's say that your company makes widgets and you have developed a special CAD suite for designing those widgets. It wouldn't make sense to try to replace that software with something off the shelf, as an outside company is unlikely to have the same expertise and experience with your widget design. Another case may be where your company has developed an IT system in-house that gives you a competitive edge. Replacing that with software-as-a-service would result in a loss of efficiency. And even if you did work with an outside company to develop your process, there is now the possibility that your competitors could use the same service.

Risk vs Gain

Once you have your services divided into commodity and specialty, I would then go through and give each one a number that signifies how important that service is to your revenue, i.e., if that service were to go down, what would the short-term, mid-term, and long-term effects be on your bottom line? This will allow you to gauge the risk of what happens if you moved that service out of house and it crashed.

There may be cases where your risk is actually reduced because you outsourced an IT service. Data backups for small to medium-sized companies are a good example. Having a redundant, reliable backup system for all of your company's data is a must. But it can be expensive to do from scratch, and your company might not have the internal expertise. However, you could use something like Amazon's S3 service instead: encrypt and back up your company's data to them and let Amazon handle it.
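As a sketch of that "encrypt and back up to S3" idea only: the snippet below assumes the AWS SDK for Python (boto3) and the cryptography package, and uses placeholder bucket and file names; it is not a complete backup system.

```python
# Sketch only: encrypt a backup archive locally, then push it to S3.
# Assumes boto3 and the 'cryptography' package are installed and AWS
# credentials are configured. Bucket and file names are placeholders.

import boto3
from cryptography.fernet import Fernet

def backup_to_s3(local_path, bucket, object_key, secret_key):
    cipher = Fernet(secret_key)                # symmetric key stays in-house
    with open(local_path, "rb") as f:
        ciphertext = cipher.encrypt(f.read())  # encrypt before it leaves the building
    boto3.client("s3").put_object(Bucket=bucket, Key=object_key, Body=ciphertext)

if __name__ == "__main__":
    key = Fernet.generate_key()                # store this key safely on-site
    backup_to_s3("nightly_dump.tar.gz", "example-backup-bucket",
                 "backups/nightly_dump.tar.gz.enc", key)
```

The point of encrypting before upload is that the provider only ever holds ciphertext, which keeps the data-custody risk on your side of the fence.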

Once you have the various IT services your company uses laid out, you will be in a much better position to make effective decisions. Weighing commodity vs. specialty and potential risk vs. potential gain against each other gives you a sense of the trade-offs involved.

Google Apps

As an example of one software-as-a-service you might not be familiar with let me mention Google Apps for your Domain.  This is a service where Google will offer Gmail, Google Talk, Google Apps and other services for your company.  They will customize it to work with your company's domain name, so if your company's website is BlueWidgets.com all of your email will be from <user>@BlueWidgets.com but still use the Gmail interface.  Google will take care of all the email and data storage and management.  

There's no doubt that computing on demand is fascinating. Amazon's EC2 service has attracted a lot of enthusiastic attention from the geek community, and an impressive number of AMIs are proliferating. The photo sharing site Smugmug has written extensively about their success using Amazon's Web Services — primarily S3, but also EC2 for image processing. The New York Times has also blogged about how they've used EC2 for the computationally-intensive task of assembling their archives into a cache of downloadable PDFs. Speaking as a web developer, both are inspiring.

But consider the nature of these stories. In both cases the services were deployed for on-demand use in applications where the premium on reliability was low: Smugmug can always fall back to its own hardware, slowing the processing of the queue of uploaded images. The Times' processing task was offline in nature. There was a deadline, but one measured in weeks rather than the timeout period of an HTTP request.

It may be tempting to view these organizations' reluctance to place mission-critical functions on EC2 as simple reticence about a new service. But there are good reasons for that reticence. Simply put, utilizing an external service inherently introduces more potential failure points into your business process. This is inevitable: in most configurations you'll want to maintain a core of servers that will more than likely rely on a connection to Amazon. If something goes wrong, you'll be competing with Amazon's other customers for attention. And although Amazon has recently announced a service level agreement for EC2, it's not a particularly stringent one, nor is the quantity of EC2 credits that it specifies you'll receive in the event of an outage likely to leave you feeling satisfied.

And of course you'll inevitably be paying a premium: although Amazon enjoys economies of scale and can be counted on to employ top-notch talent, they still have to mark up the bandwidth and storage they're selling you. This case posits a Fortune 500 company; an organization of that size can almost certainly provide itself with computing power more cost-effectively than Amazon can.

The real advantage of EC2 can be found in its name: it's an elastic service — one that confers benefits via its ability to expand and contract in scale as the user requires. It will inevitably be more expensive to use Amazon's services for dedicated, baseline computing functions, both in terms of price and risk. However, it's likely to be significantly less expensive to use Amazon's services for computing power that is only needed on a contingency basis.

For early-stage startups and rapidly growing services like Smugmug, virtually all computing requirements arise in a difficult-to-predict manner. But established organizations should generally be able to anticipate their needs and develop an IT plan accordingly. In cases where that isn't possible, utility computing may be a great option.

Ultimately it boils down to an economic question — unfortunately, the kind that can't be answered in general terms. How cheaply can a unit of computing power be supplied by your firm internally versus from external vendors? What is the maximum, realistically finite level of reliability that you can supply internally versus the one guaranteed by Amazon and their competitors? What level or levels of reliability does your firm require for its computing applications? And, perhaps most importantly, does the scope of those applications remain constant or vary wildly over time?
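Those questions can at least be framed with rough numbers. As a hedged sketch (every figure below is an illustrative assumption, not a benchmark), the deciding variable is usually utilization: an internal data center's fixed costs are spread over however many hours you actually use, while a utility's metered price is flat.

```python
# Toy break-even model: internal cost per compute-hour depends heavily on
# utilization, while an external utility price does not. All inputs are
# illustrative assumptions, not real prices.

def internal_cost_per_hour(annual_fixed_cost_per_server, hours_per_year, utilization):
    """Fixed in-house cost per server spread over the hours actually used."""
    return annual_fixed_cost_per_server / (hours_per_year * utilization)

external_price_per_hour = 0.40        # assumed metered rate
annual_fixed_cost_per_server = 3_000  # amortization, staff share, power (assumed)
hours_per_year = 8_760

for utilization in (0.10, 0.30, 0.60, 0.90):
    internal = internal_cost_per_hour(annual_fixed_cost_per_server,
                                      hours_per_year, utilization)
    cheaper = "internal" if internal < external_price_per_hour else "external"
    print(f"utilization {utilization:.0%}: internal ${internal:.2f}/hr -> {cheaper} wins")
```

Under these assumed figures the utility wins at low utilization and loses at high utilization, which mirrors the baseline-versus-burst distinction above.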


rpm2007

The Shift to Computing As A Utility - On Demand Computing 

Context : 

“On Demand”, Utility or Grid Computing is a complex and often ‘scary’ area for most business leaders to consider, not least within a $350 Techdirt Insight, especially in the absence of a quality, user-friendly Wiki.

A $350 Techdirt Insight does not facilitate a detailed or exhaustive response to the complex, interlocking and causal issues associated with “On Demand”, Utility or Grid Computing, which I personally believe will form the basis of Web 3.0. 

However, I trust the response presented offers a different perspective and value-adding insights. Web 2.0 is primarily focused on the production of, and access to, user-generated content.

Web 3.0 will address the shift from the age of ownership to the age of access and connectivity: of information technology assets, business processes, and advanced software functionality, complemented with intelligently structured and aggregated information-centric assets.

“Don’t own nothin’ if you can help it. If you can, rent your shoes.”

-- Forrest Gump

Value Adding Insights 

The IBM ‘On Demand’ Strategy : 

Is to expand technology's borders by pushing users, and entire industries, towards radically different business models. The payoff for IBM would be access to an ocean of revenue: Sam Palmisano estimates it at $500Bn a year that technology companies have never been able to touch.

 

The key ‘On Demand’ power influencers, for example IBM, wish to (again) exert : 

  • Control
  • Ownership
  • [Their] Standards

 

Why ‘On Demand’ Services ?

To understand Why ‘On Demand’ Services, both the characteristics, and the causal effects of ‘On Demand’ services must be understood (in some detail). 

 

Characteristics of ‘On Demand’ Services : 

  • Highly scalable, centralised, low-cost operating model; the marginal cost of distribution tends asymptotically to zero
  • Subscription-based and/or usage-based (utility) pricing model
  • Potentially versioned / personalised to the individual
  • Potentially a mass market model
  • Potentially geo-centric

 

Causal Effects of ‘On Demand’ Services :

  • Control
plus
  • Ownership
plus
  • [Their] Standards

Delivers :

  • Client ‘Lock-in’

 

Which precipitates :

  • Economic impacts; cost base and revenue stream effects
 
  • Value chain impacts; industry structure
 
  • Relationship impacts; power-balance effects

 

Economic Impacts : 

  • Reduced IT infrastructure asset ownership
  • Reduced (Client) capital investments in IT Infrastructure, Application Development and Support & Maintenance
  • IT asset investment reduction (non-current assets) = reduced balance sheet drag, and therefore potentially stronger balance sheets
  • Reduced capital investment complemented with reduced depreciation charges = potentially higher profitability
  • Reduced (c. 18%) annual software maintenance charges
  • Reduced implementation consultancy charges
  • ‘On Demand’ = ‘OUTSOURCED’ + CENTRALISED + PROVEN BUSINESS PROCESSES = VASTLY REDUCED IT STAFFING LEVELS

Therefore : 

  • ‘On Demand’ = Reduced Client Costs (or does it ?)

Which presents :

  • Reduced barriers to entry

Which precipitates :

  • Increased competition

 

Revenue Stream Effects :     

  • ‘On Demand’ changes the nature and  potentially the scale of revenue flows, from discrete to annuity, from uncertain to certain
 
  • To traditional, non-‘On Demand’ technology service providers, the effect of competing in a world without annual software maintenance charges, i.e. the competitive effect of potentially losing c. 18% annual software maintenance charges, could be devastating

Value Chain Impacts : 

The ultimate objective of ‘On Demand’ services is to apply technologies over the entire value chain, without bound.

Compare this strategy with traditional technology deployments, which are generally executed across a single organisation, and where the applied technology is bounded by that organisation, perhaps at most with API interfaces to vendors and customers.

Examples of ‘On Demand services’, which are specifically usage based, technology facilitated services, applied within value chains  : 

1.     ‘On Demand’ access to computer processing power 

Facilitating low cost access to advanced multiple processor computing power. 

2.     ‘On Demand’ Digital Photographic Processing, c.f. ‘analogue processing’ service providers 

Precipitating the decimation of the  ‘analogue processing’ service providers 

3.     ‘On Demand’ PAYG car insurance 

What will be the value chain impact in respect of ‘On Demand’ PAYG car insurance ?

  

A “What-If” Scenario : 

What if, you could integrate value chains via ‘On Demand’ technology-centric services ?

What if, you could integrate value chains across industries via ‘On Demand’ technology-centric services ?

What if, you could integrate the value chains of the automotive industry to the automotive and life insurance industries to emergency services and to health care providers ? 

 

Relationship Impacts : 

  • Metcalfe’s Law values the utility of a network as the square of the number of its users
  • Standardised, centralised, outsourced, (industry-wide) business processes and databases
  • Aggregated and disaggregated informational value
  • Subscription-based revenue model

Creates :

  • A potentially highly-leverageable ‘lock-in’: a client to ‘On Demand’ Service Provider vendor dependency

 

Conclusion : 

1.     ‘On Demand’ is a complex strategic ‘play’ that changes Business Models ! 

2.     Have we seen this business model before ?  Yes. ‘On Demand’ is a more sophisticated and powerful variant of the IBM mainframe business / sales model, complemented with a proven and highly leveraged Professional Service Firm consultancy ‘wrapper’.  

3.     ‘On Demand’ is a transformational S-curve change, which has the potential to disrupt entire industries.

HaaS is essentially what we used to call hosting; however, it is supposed to give more flexibility now that it is easier to connect to such environments from an external source. I would always focus on the individual requirements of a project, but in general these would need to be aligned with the SLAs of the HaaS provider.

The key considerations of the project would then come down to:

1. Security - does the provider guarantee the level of security required for the project inside their data centre?

2. Uptime - does the provider guarantee five 9s uptime and immediate response in case of failure?

3. Disaster recovery - is there a sensible backup plan involved, including the requisite knowledge of the systems to rebuild?

There are obviously many advantages to being able to physically touch kit once it is purchased; physical security is key to this, as is the element of trust that is otherwise placed in the hands of the HaaS provider. For high-security projects it is unlikely to be attractive, just because it adds an extra element of the unknown.

For lower security projects I would be looking to ensure that the uptime was sufficient to run my business profitably, and get the access I required to fulfil customer requirements. This should not affect the cost, but be a standard that is adhered to. This should include a full and detailed DR plan, which I would expect the provider to be involved with on creation. 
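As a reference point when pinning down that uptime requirement, the arithmetic behind SLA levels such as "five 9s" is simple; the short sketch below just converts availability percentages into an allowed-downtime budget per year.

```python
# Downtime allowed per year at common SLA availability levels.

MINUTES_PER_YEAR = 365 * 24 * 60

for availability, label in [(0.99, "two 9s"), (0.999, "three 9s"),
                            (0.9999, "four 9s"), (0.99999, "five 9s")]:
    downtime_minutes = (1 - availability) * MINUTES_PER_YEAR
    print(f"{label} ({availability:.5f}): ~{downtime_minutes:,.1f} minutes of downtime per year")
```

Five 9s works out to roughly five minutes of downtime a year, which is why it is worth checking whether a provider's guarantee and its compensation terms actually reflect that.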

The knowledge of the systems would have to be such that if rapid expansion of the project were required, they could effect this without any downtime or impact on the project, again requiring a time investment from the provider.

In the case of project termination I would want to know that none of my IP was left on the hardware provider's servers. I would accept the word of an independent auditor on this, but it would need to be written into the contract before the service was started and proven on completion.

What are the long-term costs associated with using this service vs. buying the hardware yourself? Component prices always go down over time, so it's relatively cheap to add more hardware. On the other hand, beyond the initial purchase price you have to contend with administration, cooling, electricity, etc., when you could just pay someone else to do it for you. Like any other outsourcing endeavor, you need to decide where the break-even point is and how much effort your company is willing to put into it.

You also need to realize that you're relying on someone else for your data processing and storage. How valuable is the data, and how much will it hurt you if it is compromised or lost? This is similar to the problems facing consumers right now with data theft of credit card accounts, SSNs, or other items. Once the information has left your hands, you don't know how well it will be taken care of nor who will have access to it. All the laws in the world won't stop an unscrupulous person from fiddling with the data.

What happens if the company goes bankrupt or gets bought out? Will you still have access to your information? This is one of the issues people had when Google brought out its online office software; you don't control the data so if something goes wrong you can't get to it. Additionally, this applies to network connectivity. If the network is down, you can't access your information.

In my opinion, one of the easiest ways to look at it is: how damaging will it be if you can't get the information or it becomes compromised? If you're looking for just occasional data processing, like borrowing a clustered server for a project, then it's probably not a bad idea. But true off-site data storage would make me hesitant.

If you don't need to use these "rent-a-supercomputers" very often, then the cost to use them most likely outweighs the cost of building your own. Again, the cost-to-benefit ratio needs to be looked at: how often will you require something like this?

One alternative is the datacenter-in-a-box prototype that Sun has. Essentially you can buy/rent a container that holds all the computer, HVAC, and power equipment, and you simply hook into it. You don't have to install anything or worry about how another company deals with your data; all you do is have the shipping container dropped off and then hook it up. It's like an instant datacenter that you can upgrade as necessary and then give back when you no longer need it. All the maintenance is done by someone else, but you still control how it works.

This is also a good way to see whether building your own data center is a better proposition than using a hardware-as-a-service provider. Or you can just continue to use the box for your needs. Granted, Sun's Project Blackbox is still a prototype but the potential is there and definitely something to look into.

Sun Project Blackbox:  http://www.sun.com/emrkt/blackbox/index.jsp

I think you're wise to question this. Utility processing is a case where it's likely that "if you don't know that you need it, you don't." I remember a few years ago being at a conference on High Performance Computing and, as I was talking about risks and so forth, being cut off by a guy who said: "You don't get it. In my industry, pharmaceuticals [he could have said bioinformatics, finance, and a few others], we have infinite computational needs. Anything we can get, that gives us an edge, we will pay for." Ah. For those guys, utility processing (which is really what we're talking about with this new push) makes perfect sense. But I think that if you were in that situation, you would have phrased the question differently.

For most IT organizations, processing power (calculations) is not the bottleneck. Far from it. If a performance problem exists, it is generally memory paging (disk access) or, less commonly, bandwidth consumption (bandwidth consumption is a big issue if you're YouTube, not so much if you're calculating payroll). The current wave of utility computing offerings can address those situations, but with two big exceptions, these new offerings don't seem to offer anything significantly beyond what you could achieve with redundant hosting services (i.e.: if we can't handle it, have a cage full of blades at X kick in. If they can't handle it, have a cage full of blades at Y kick in). Is it easier to contract with a single vendor? Of course. Is it more expensive? Almost certainly. Realistically, most IT organizations don't have systems so stable and consumptive that they simultaneously require 99.9+% uptime (<8 hours per year) AND on-demand processing. (Of course, some systems have to be up 99.9+% of the time and, as discussed, some systems may require on-demand resources. However, for most IT organizations, those won't be the same systems.)

Now, the two big exceptions:
  • Systems where rapid, very large "surges" in demand are expected (e.g., ticket-vendors, a site dedicated to the Olympics, etc.)
  • Many IT organizations have some "crown jewel" piece of data or algorithm that's very expensive to compute. For instance, I have a client that has some data that takes them about 200 hours to calculate and distribute. But they only have to do that once per year. An increase in data volumes is a major trend in IT; if that number went from 200 hours to 2,000, utility processing would obviously make sense to knock it back down. Or if there was a competitive advantage to be had by that "crown jewel" calculation taking place continuously, again, utility processing would be logical (a rough cost sketch follows this list).
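To put rough, purely illustrative numbers on that crown-jewel scenario (instance count, hourly rate and the assumption of near-perfect parallelism are mine, not the client's): if the job parallelizes well, renting capacity turns a calendar problem into a modest bill.

```python
# Rough sketch: compressing a long batch job by renting many instances.
# All rates and the ideal-speedup assumption are illustrative only.

job_cpu_hours = 2_000            # total compute the "crown jewel" run requires
rented_instances = 100           # instances rented for the run (assumed)
price_per_instance_hour = 0.10   # assumed utility rate

wall_clock_hours = job_cpu_hours / rented_instances   # ideal, embarrassingly parallel case
total_cost = job_cpu_hours * price_per_instance_hour

print(f"Wall-clock time: ~{wall_clock_hours:.0f} hours instead of {job_cpu_hours}")
print(f"Rental cost for the run: ~${total_cost:,.0f}")
```

In practice the speedup is rarely ideal, but even a fraction of it is usually enough to justify renting rather than owning hardware that would sit idle the rest of the year.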

--

     

Now, to the question of biggest risks:
  1. Excessive cost relative to benefits : We invest a great deal of money and effort making the move in anticipation of a set of circumstances that will provide us a good ROI. Those circumstances do not materialize.
  2. Difficulty in deployment : Our systems require an environment that is not provided, or not fully supported, by the utility vendor.
  3. Vendor lockin : Deployment is painful enough that once deployed, there is a significant barrier to exit.
  4. Slow communication : The involvement of another company necessarily increases "friction" in communication. This can lead to slow response times in crises.
  5. Lackluster commitment : IT work is often done at inconvenient times. The vendor does not "go the extra mile" on weekends, holidays, and nights.
  6. Time-consuming remote management : Administering hosted services takes place over network connections. File and data tasks take significantly longer than they do locally (and even everyday tasks, if a graphical environment like Microsoft Remote Desktop is used).
  7. Catastrophic location failure : Asteroid strikes the one-and-only datacenter supporting our system. Offsite backups take days to restore. (Obviously low risk and easily manageable with a big provider.)
  8. Vendor strategy conflicts with our technology choices: Despite initial support, the utility vendor decides to break away from some aspect of our technology choice (CPU manufacturer, OS, database, programming language, etc.). This is a slower moving risk, but it happens all the time.

     

     

Larry O'Brien
Tue Jan 22 7:06pm
Reading other insights, it strikes me that there are (at least) two very different things that could be considered "hardware-as-a-service": one is the more-or-less "traditional" concept of datacenter hosting with some provision for scaling up in the face of demand, and the other is the idea of having a very "fluid" view of the amount of resources (processing power, bandwidth, storage, etc.) allocated to the customer.

My skeptical response was largely about this second concept, which I took to be the core of the question. I think Timothy Lee's (binarybits) insight speaks well to the first concept.

Computing assets are under constant change and pressure to be faster, cheaper, and offer more storage. Traditional purchases and insourced rentals are costly and can have long lead times, which can leave equipment out of date before it even arrives for use. This is a key strength for utility computing and where the ROI will be for companies that move to utility, on-demand, computing.

The benefits available include easier upgrade options as your needs grow. A company can buy increased storage or CPU time, as can be seen in many corporate datacentres around the world. The majority of vendors use clustered machines that aid with failover, backup and recovery, options that could be too costly to implement yourself.

The risks for most companies will be the loss of control over hardware. Many small businesses and bloggers rely on their ISP for access and support; magnify this reliance to an enterprise and the resistance to a move could be too large to overcome.

For those vendors that distribute their resources around global centres, one of the key benefits could be faster access to VPN-like systems. With the supplier taking the risk and the cost of connectivity, a company could get tangible benefit from distributed computing with a multinational/virtual workforce requiring access to the "corporate" network. If the vendor was also able to bundle SaaS utilities such as groupware and collaboration tools as part of the hardware package, this could be a key driver for adoption.

A good example would be GDrive, a small utility that allows users to utilize their Gmail storage (6GB plus) as a virtual network drive for their documents. With the fast connectivity that people expect from a search engine or blog tool, plus the raft of services that Google now provides, the advantages of distributed storage such as GDrive provides could prove to be a model approach for utility computing.

The benefits of distribution extend beyond speed. Resilience of the storage could be beneficial to companies that today use a centrally located datacentre (even with a DR site). As floods, hurricanes, earthquakes and civil unrest in countries become more frequent, the benefits of a multi-site storage and computing solution could prove to be a key selling point for utility computing vendors.

The final decision will rest on the type of usage a company envisages, its theatre of operations, the complexity and cost of current hardware rollouts/updates/upgrades, and the connectivity required for a VPN. As the market develops and the cost of sale can be better quantified to help establish ROI, large enterprises can start to make allowance for utility computing in their decision-making processes as they continue with expansion and M&A plans. The utility computing market is most likely due for some consolidation as take-up increases and the revenue opportunities become more obvious. I would wait and see what the space looks like first, but would look to balance any new hardware purchases against the cost/benefit of on-demand computing.

Citing Moore's Law: the exponential advancement of computing technology, combined with a trend toward greater cost efficiency of that technology, will render utility computing obsolete within ten years. In other words, by 2015-2020 there will be no shortage of computing power even for consumers -- today's supercomputers are tomorrow's affordable home computers! Therefore any hardware-as-a-service contracts should be focused on the coming decade.

A la carte solutions will help to eliminate risk for large IT organizations, but the cost of such on-demand service will inevitably limit the much-hyped ROI. The ROI figures projected to lure in business are typically based on long-term value (i.e., 15-20 years); however, the value of utility computing will have drastically diminished long before then.

In sum, if utility computing is a necessity for an organization, a contract of 5-10 years is likely to yield the highest ROI. Shorter agreements will be cost prohibitive while longer-term service agreements will see a premature and sharp decrease in value.
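This argument rests on an assumed exponential decline in the cost of compute. As a toy illustration of that premise only (the fixed halving period is an assumption, and the reply below disputes whether whole-system cost follows such a curve), here is how the relative cost of a unit of compute would fall over a 20-year contract horizon:

```python
# Toy projection of cost-per-unit-of-compute under an assumed halving
# period. This illustrates the poster's premise; whether whole-system
# performance and cost actually follow this curve is disputed below.

halving_period_years = 2.0   # assumed
cost_today = 1.0             # normalized cost per unit of compute

for year in range(0, 21, 5):
    relative_cost = cost_today * 0.5 ** (year / halving_period_years)
    print(f"year {year:2d}: relative cost per compute unit = {relative_cost:.3f}")
```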

Larry O'Brien
Thu Jan 24 8:54am
Moore's Law does not apply to whole-system performance. Bus transport speeds do not exponentially increase, bandwidth capacity does not exponentially increase, memory latency does not exponentially decrease.

You seem to be saying that datacenters will diminish in importance over the next decade. I'd be willing to bet a tall latte that the opposite is true.
Rick Frauton
Thu Jan 24 10:15am
"Almost every measure of the capabilities of digital electronic devices is linked to Moore's Law: processing speed, memory capacity, even the resolution of digital cameras. All of these are improving at (roughly) exponential rates as well."
Rick Frauton
Thu Jan 24 10:29am
My point is that datacenters will be in everyone's homes in time, as commonplace, affordable technology that does not require much physical space.
The challenge can be split up into the following issues: Is there a trend towards utility computing, or is it vendor-driven hype? Should a Fortune 500 company join the trend? What are the opportunities and risks of joining?

1- If we believe the hype surrounding utility computing, we can assume that in several years the data center in the enterprise will be history. The question, however, is whether this hype is really a trend.

The Analogy Argument: The first hypothesis is that the trend is built on the analogy between web-applications/software-on-demand and hardware-on-demand. At this point in time, this is probable, but unproven.

The Cyclical Argument: The second hypothesis is that in the cyclical movement of computing the power moved back and forth between center (mainframes, servers) and periphery (dumb terminals, PCs, web-devices) and we are again moving to the center, just that this time the center is not inside, but outside the boundaries of the firm.

The Transformational Argument: The third hypothesis is that we are not only confronted with a technological development, but with new rules for the new economy. This transformation means that the center of gravity in our economy is moving from the well-defined firm that responds to stable demand and derives profits from scale economics to organically linked ad hoc networks that are integrated through professional ethics and branding.

All three hypotheses point towards a trend, not a hype, so it is to be taken seriously.

2. However, the question remains, should a Fortune 500 company join the trend? Fortune 500 companies are characterized by well-defined processes, incumbency (lobbying capacity and market knowledge), access to capital, and branding. They are big enough to weather most economic transformations, even if the IBM of 2007 is not the IBM of 1907.

Processes are defined by high fixed costs and low marginal costs. In a networked economy, where individual firms are confronted with high fluctuations in demand, it makes sense to outsource as many processes as possible. We are moving from ownership of the processes to control of the processes, from rowing to steering. Clearly, we have seen this in manufacturing, but more of a firm's products and services are now computational, and outsourcing is moving from manufacturing to administration, and on to strategy. So any Fortune 500 company should move to utility computing as soon as it becomes viable and concentrate on its competitive advantages in steering processes, incumbency, access to capital, and branding.


3- The last question remains, what are the opportunities and risks of joining? The opportunity in moving fast is to free capacities to improve steering and strategy in market development, capital acquisition, and branding. It is to be expected that utility computing providers will behave like the Guild in Frank Herbert's Dune, as neutral platform providers. However, the risks are:

Theoretical impossibility: that the technology cannot deliver, because it is based on the problematic analogy between software and hardware.

Practical Impossibility: that the Fortune 500 company will join too early and become a beta-tester for the utility.

Rent-Seeking: that the Fortune 500 company is exploited by its utility computing provider, because it is dependent on it for core business tasks.

Divulging Business Secrets: That the utility computing provider can use the information gained from the partnership to gain an advantage over the Fortune 500 company in its core market.


The potential benefits of utility computing are fairly well understood. The cost benefits can be significant, as the utility "pay as you go" approach addresses capacity utilization challenges experienced in many data centers. Another primary benefit is professional management of the hardware running your key applications, including the management of software updates and security patches.  There are also large potential benefits in scalability for applications with fluctuating demand.

The larger question is whether utility computing can deliver on its promise, and whether it's a good fit with your corporate culture and risk management philosophy. The best way to assess whether utility computing can meet your requirements for performance, reliability and security is to put a toe in the water. Most enterprise companies that are interested in utility computing begin by implementing it on small projects or applications to get first-hand experience.

Utility hosting providers like Savvis Inc. (SVVS) and Terremark (TMRK) report that this is the approach taken by many of their enterprise customers, who typically like to start small and establish a comfort level with the concept, the technology and the provider. Industry buzz, marketing claims and third-party experiences are useful in identifying whether utility computing might be a good fit for your company. But first-hand experience is essential in proving the concept and determining whether it suits your technology requirements and your comfort level for reliability and regulatory compliance.

A "test drive" allows you the opportunity to track how well the provider meets its uptime promises, as well as the quality of support and timeliness of updates to key operating systems and applications. It will take some time to establish a  meaningful track record. Keep a sharp eye on uptime. Amazon's S3 and EC2 services have generated a lot of buzz and worked well for startups. But Amazon's platform has also suffered performance problems and only recently adopted a service-level agreement. An SLA is essential for enterprise applications.    

It may also be worth considering separate small trials with more than one provider. Utility computing relationships tend to be very "sticky," and switching providers down the line is likely to be messier than you imagine. Investing a small amount of time and money up front can help define the benefits and challenges of utility computing, and identify providers that you can trust with your mission-critical applications and data. It's more cost effective to identify problems and shortcomings on the front end rather than the back end.

Some key questions to ask potential providers:

  • What kind of data backup systems are in place?
  • Does it have more than one data center? Some SaaS providers operate without a backup facility, so make no assumptions about this.
  • If the provider has a secondary data center, where is the secondary site located? Backup sites in the same city or region present a problem in the event of a large disaster. This is particularly important for companies handling financial data, as regulators are urging companies to have at least one backup site outside New York. 
  • How often does a provider test its backup systems? Ask specifically about diesel generators, which are often a problem when not exercised monthly. If they're vague, ask to see maintenance records.
  • How does the provider protect data integrity and ensure controlled access?
  • Can the provider offer dynamic billing so you can assess your costs on an ongoing basis?
  • How do they screen staff to work with secure applications? Establish a comfort level with their hiring practices and background checks.

These are a start, and will suggest additional questions to ask a provider being considered for a test installation. The bottom line is that the promise of utility computing is just a promise until it proves it can meet *your* requirements and standards.

Hardware as a service represents a powerful, significant, and positive change in the overall computing paradigm.   I would suggest the main forces preventing faster adoption are reluctance on the part of IT managers to reduce their own "turf" and the increasingly irrational tendency to fear moving critical services to distant places.

As a general rule I would argue that the online environments of small and moderate-sized business computing will gradually move, and should be moved, to hosting companies unless they have very complex or specific needs. These exceptions are becoming far fewer as external hosting companies increasingly offer managed server support with robust connection and server infrastructures and 24/7 support.

The savings for small and medium-sized companies would come partly from lower average hardware costs, but mostly from a reduction in expensive IT staffing needs. Many job-critical functions of managing a few servers, e.g. rebooting, only take a few minutes per month, yet IT must be on call 24/7 to maintain the integrity of the online environment. Services like Amazon's allow the huge server companies to average most of the hardware and IT management costs over tens of thousands of machines, optimizing their IT staffing and hardware spending, which in turn allows them to offer these savings to the customer companies.

For larger companies the issues are more complex for several reasons, which may mean that "waiting" or even keeping services inside the company may be the best course.    Without having company specific information it is not possible to give a specific recommendation, but here are several factors that a large company would want to consider:

Size of the overall computing infrastructure. If a company is maintaining thousands of servers and large bandwidth and can keep a large IT staff busy, it may already be leveraging the kinds of efficiencies that Amazon EC2 can bring to the table, making potential cost savings low.

How much of the computing can leave the building?    Although many large companies may eventually move away from legacy enterprise systems this won't happen for some time, so if a company *must* keep a lot of servers and infrastructure busy that cannot be outsourced, it may actually complicate things to outsource only a portion of the computing infrastructure.    I would suggest that generally if most of the infrastructure is serving online needs there will be more advantages to outsourcing hardware and IT management.     However if much of the computing is still with legacy systems that require specialized IT by staff you have already trained a move could be very destabilizing, and if staff is on hand to manage the (often simpler) online environments then you might as well keep things as they are until legacy pressures are reduced.  

Cost to transition. Even if there are significant long-term savings, there may be significant short-term costs to move operations outside of the existing infrastructure. Given the time value of money, exercise caution with ROI calculations that trade large initial expenses for later gains.
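That caution is really just discounting. A small net-present-value sketch (the discount rate, transition cost and annual savings are all illustrative assumptions) shows how deferred savings shrink once discounted:

```python
# Net present value of an outsourcing move: a large cost now, savings later.
# Discount rate and cash flows are illustrative assumptions only.

discount_rate = 0.10
transition_cost_now = 500_000
annual_savings = 120_000
years = 6

npv = -transition_cost_now + sum(
    annual_savings / (1 + discount_rate) ** t for t in range(1, years + 1)
)
print(f"NPV over {years} years at {discount_rate:.0%}: ${npv:,.0f}")
```

With these assumed figures the move barely clears break-even, even though the undiscounted savings look comfortably larger than the transition cost.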

How much are the potential savings? Consider waiting if the ROI calculations only show modest savings from outsourcing, since a company's internal operations are presumably stable as they are and the uncertainties of change could lead to unforeseen costs from a move.

Is this even a significant change at all in the way computing is distributed at your company? With the internet increasingly acting in the role of "the" corporate network, to some extent the idea of external vs internal computing is not all that relevant.   For international companies, where machines are connected to each other and the internet, computing is already globally distributed.   Externalizing for some companies would actually create more computing centralization.    If ROI calculations show a move will save big money but there is internal resistance, consider analyzing the extent to which the current operations are distributed across many countries and servers already.  

Is this consolidating IT functions effectively? For many companies the largest cost savings may come from more effective use of money for IT management and hardware procurement rather than simply from lower hardware and connectivity costs. Also, the ROI analysis should include the potential reduction in human resources staff needed to serve a smaller IT department.

Manage IT costs more effectively.   Outsourcing is likely to create highly predictable values for the costs of operations and the costs of scaling infrastructure up and down.   This could be a potentially huge advantage to companies with IT needs that vary season to season or year to year.   For example a large retailer might want to allocate much more to IT needs over the Christmas sales times and then scale down dramatically over summer.    Outsourcing is likely to make this much more realistic than hiring temporary or seasonal employees for IT, which is problematic anyway.

How much existing IT is this going to destroy or abandon?   If legacy systems are failing and entirely new infrastructure is needed outsourcing is likely to be far more appealing than if new systems have recently been implemented which would require the company to effectively throw away big existing IT value.   In some cases a gradual transition to external computing rather than a full, fast switch may be in order, especially if there are many separate infrastructures in place. 

Morale. Difficult to factor in, but relevant to the outsourcing move could be the effect of the outsourcing on IT morale. Downsizing issues should be explained carefully to remaining staff, and IT staff should be involved in the decisions at some level, though obviously vested interests will get in the way of clear thinking.

Experiments are good.   Before embracing any large change, experiment with outsourcing of a small portion of the operations and analyze the results.   This move could help uncover unanticipated costs and benefits of the move.