Internet Server Farms as Bandwidth Markets [1]

Nemo Semret [2]

Background

As Internet content becomes larger, more complex, more sensitive to quality, and attracts ever larger numbers of clients, it is increasingly hosted by service providers rather than in the content owners'/distributors' own facilities. Some ISPs, such as MCI Worldcom's Uunet, Qwest, AT&T or Sprint, provide this service in addition to dedicated access and other services. Others specialize in hosting in its various forms. On AT&T's Internet backbone, “supporting Web hosting centers has become the single biggest application requiring new bandwidth”. [3]

Forrester Research has forecast a market for Web hosting of $14.6 billion by 2003 [4]. In addition to content hosting, this market has evolved to include Managed Service Providers (MSPs), which additionally provide services such as complex server administration, maintenance, and content management. Since the late 1990s, it has extended to Application Service Providers (ASPs) [5], who offer one-to-many service, delivering a packaged software product (in areas such as payroll, electronic commerce, or supply chain) as a service running on powerful computers hosted on a server farm, under a “rental” or “pay per use” software model.

The core value provided by MSPs and ASPs goes beyond hosting and delivering data; in fact, they are often value-added resellers of hosting/collocation service. Although a single company will sometimes provide the full spectrum, in this paper we focus on the network aspects common to all of these “server farms”.

A typical server farm is located in a data center and consists of racks of servers on a high-speed Local Area Network, which is connected to one or more Internet backbone networks. Such a LAN (typically Gigabit Ethernet switches and cross-connections) can be deployed at relatively low cost. The transit fees paid to a backbone ISP providing the connectivity to the Internet constitute the dominant cost. For illustration, consider a server farm consisting of 1 to 5 standard racks of servers (which can hold 20 or more high-end servers each), generating a total of 1 Gbps of traffic. As shown in the following table (where costs are order-of-magnitude approximations), the dominant cost of goods sold in hosting or collocation of content on the Internet is that of backbone bandwidth.

Rack space (incl. power, on-site maintenance)    $1,500-$5,000/month
Cross-connect/port fees                          $2,000/month
LAN (GigE switch, cabling)                       $10,000 one-time
Transit service (to backbone ISP)                $100,000/month
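
To see why transit dominates, here is a minimal sketch of the monthly cost shares, assuming mid-range figures from the table, a hypothetical 3-rack farm at $3,000/month per rack, and the one-time LAN cost amortized over 36 months (all of these specific assumptions are illustrative, not from the text):

    # Rough monthly cost shares for the 1 Gbps example, assuming
    # mid-range figures from the table, 3 racks at $3,000/month each,
    # and the one-time LAN cost amortized over 36 months.
    monthly_costs = {
        "rack space": 3 * 3000,
        "cross-connect/port fees": 2000,
        "LAN (amortized)": 10000 / 36,
        "transit": 100000,
    }
    total = sum(monthly_costs.values())
    for item, cost in monthly_costs.items():
        print(f"{item}: {cost / total:.0%}")  # transit comes out to roughly 90%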

Current Pricing Models

There are four basic pricing models in use today by hosting and collocation service providers.

  1. Committed rate: Buyers commit at a fixed price for a fixed quantity, independent of usage. Data throughput equal to the committed rate is “guaranteed” (in addition, non-guaranteed bursts of traffic above the committed rate, up to a maximum, are sometimes allowed). This model is unattractive to buyers with bursty traffic: to avoid deteriorating quality for its end users, the buyer has to commit and pay for the peak traffic level (or close to it), even though traffic may be much lower on average.

  2. Usage percentile pricing: The service provider charges for bandwidth on a 95th (or 90th) percentile usage basis. Data throughput is measured in 5-minute intervals and, at the end of each month, the top 5% (or 10%) of samples are discarded; the customer is charged for the highest remaining sample at a fixed unit price (see the sketch after this list). This also penalizes buyers with bursty traffic, as they pay high rates set by their traffic peaks even though their overall traffic levels may be low (e.g. a content provider that generates 50 Mbps for 3 days and then 5 Mbps for the rest of the month will be billed 50 Mbps × $200/Mbps/month = $10,000/month).

  3. Volume pricing: In this model, content providers are charged according to the maximum monthly volume of data transferred (e.g. for $950 per month, the site can transfer up to 50 GBytes). This is usually offered in hosting services, where the content-serving application (e.g. an HTTP server or a streaming server) is included as part of the service. The seller here is often a reseller of bandwidth bought under one of the two previous models.

  4. Flat pricing: At the lowest end of the market, web sites can be hosted for a flat price (e.g. $24.95/month), independent of bandwidth. This is typically offered by providers who host hundreds or thousands of low-traffic web sites with no performance guarantees.
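
To make percentile billing concrete, here is a minimal sketch, assuming 5-minute samples over a 30-day month and the $200/Mbps/month unit price from the example above; the function name is illustrative, not any provider's actual billing code:

    # Illustrative 95th-percentile ("burstable") billing, assuming
    # 5-minute throughput samples (Mbps) over a 30-day month.
    def percentile_bill(samples_mbps, unit_price=200.0, discard=0.05):
        # Discard the top 5% of samples; bill the highest remaining
        # sample at the fixed unit price ($/Mbps/month).
        ranked = sorted(samples_mbps)
        cutoff = int(len(ranked) * (1.0 - discard)) - 1
        return ranked[cutoff] * unit_price

    # The example from model 2: 50 Mbps for 3 days, 5 Mbps for 27 days.
    # Since 3 days is 10% of the samples, the 95th percentile is still 50 Mbps.
    samples = [50.0] * (3 * 24 * 12) + [5.0] * (27 * 24 * 12)
    print(percentile_bill(samples))  # 10000.0, i.e. $10,000/month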

Under the last three pricing models, the throughput (actual transmission rate) achieved by a content server during aggregate peak periods is essentially random, since each packet of data is treated on a first-come, first-served basis. Thus, even sites generating large amounts of traffic have no guarantee that they will have the bandwidth they need when they need it.

Rather, buyers rely on the seller to upgrade capacity to keep up with demand. For current traffic patterns, service providers typically follow a rule of thumb of upgrading lines as soon as average utilization reaches 40-50% of capacity:

“what we have to do is over provision the backbone. In fact we don’t use more than 50 percent of the links, but this is not because we don’t have customers. It’s because we want the network to be like this.” [6]

Cheaper providers let utilization rise to 70-80% before upgrading. Since Internet traffic typically has peaks that are more than double the average, this means that congestion is unavoidable. Moreover, all traffic is equally likely to suffer from congestion, regardless of the buyer’s willingness to pay.

As a consequence, the average end user will find a site too slow during peak periods, which often correspond to the most valuable traffic, and hence to lost business. For example, almost 40% of online consumers say poor Web site performance caused them to leave certain sites during the 98/99 holiday season [7]. The usage-based pricing models are particularly ill suited to streaming audio and video content, traffic that is especially affected by the lack of bandwidth guarantees through a bottleneck.

In the committed rate model, the provider can in theory honor the guarantee, since the commitment is known ahead of time. In practice, most providers oversubscribe the network, peering links, and downstream lines, knowing that statistically, at any given time, many buyers will be generating traffic below their committed rates. Thus the degree of oversubscription plays the same role as the aggregate utilization rules of thumb in usage pricing (see Appendix). Since the buyer is rate-limited to a maximum throughput but traffic is variable, buyers such as on-line brokerages, who cannot afford poor performance, respond by over-buying capacity, by as much as 10 times their average traffic.

Market Inefficiencies

Lack of price transparency

The clearest sign of inefficiency is the lack of price transparency. Most providers do not publish price lists for their services, and even after one obtains quotes by negotiating with providers, there is little quality or other a priori information available for the buyer to evaluate whether price differences are justified.

According to surveys in late 2000, streaming service bandwidth prices ranged from less than 1c/Mbyte to more than 10c/Mbyte. Sometimes two buyers from the same seller pay different prices, at the same time, for the same service. [8]

As of September 2001, Level 3 charges $230/Mbps/month on a 95th percentile basis, and most other Tier 1 providers charge more than $300/Mbps/month, while Cogent sells 100 Mbps segments for $3,000/month, less than one tenth the price. Beyond the generally agreed but vague notion that Tier 1 providers offer better quality because they have “better peering” and more destinations directly on their networks than upstart backbones, regardless of raw capacity, there is little concrete information available for the buyer to make the best choice.

Lack of flexibility

With the usage percentile and commitment models, there is no flexibility in the duration and price of the contract. Either explicitly by commitment or implicitly through their traffic peaks, buyers with bursty traffic are locked into paying for capacity that they do not use. With volume pricing, on the other hand, there is neither control of nor information about traffic levels; rather, the cost of an assumed burstiness is passed through indiscriminately to all buyers, with a significant markup. As of September 2001, the most aggressive volume pricing is 0.5c/Mbyte. For very bursty traffic, with peaks at 5 times the average, this is equivalent, in committed rate terms billed at the peak, to about $300/Mbps/month; for smooth traffic, with a lower peak of 1.5 times the average, it is equivalent to about $1000/Mbps/month (see the sketch below). Some resellers exist simply to take advantage of many buyers' inability to compare volume pricing with usage pricing.
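
This equivalence can be checked with a minimal sketch, assuming a 30-day month (the function name is illustrative; the exact figures depend on the assumed month length, hence the "about" in the text):

    # Convert a volume price ($/MByte) into the equivalent committed-rate
    # price ($/Mbps/month) billed at the peak, assuming a 30-day month.
    def volume_as_committed(price_per_mbyte, peak_to_mean):
        mbytes_per_avg_mbps = 30 * 24 * 3600 / 8.0   # ~324,000 MB per Mbps-month
        cost_per_avg_mbps = price_per_mbyte * mbytes_per_avg_mbps
        return cost_per_avg_mbps / peak_to_mean      # billing is at the peak

    print(volume_as_committed(0.005, 5.0))   # ~324, roughly $300/Mbps/month
    print(volume_as_committed(0.005, 1.5))   # ~1080, roughly $1000/Mbps/month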

Unused Capacity

Conversely, for sellers, “the complexity of serving a wide array of customers, from small content streamers to corporate enterprise networks, makes it challenging to devise a pricing policy that is suitable for the entire range of customers.” [9]

In order to maintain the desired quality, sellers have to keep utilization at or below a certain level. But since they can only do so on a long-term average basis, there are times when network utilization is far below optimal. The balance is unsold bandwidth, which “perishes” instantly.

Consequences

As a result of these limitations of the pricing models, the market for content bandwidth is inefficient. This hurts the business of content that is high-bandwidth and has bursty demand, such as much of streaming media. [10] But this is precisely the traffic that sellers need to grow to make their networks viable.

To make the market efficient, the seller must be able to offer very short term allocations and let prices float to attract just enough buyers to absorb the supply at all times, evening out traffic without compromising quality. This generates extra revenue from otherwise unused bandwidth. Further, the network infrastructure cost is fixed, and interconnection fees to other providers are largely sunk to the extent that they are already committed to, either explicitly or through traffic peaks. Thus this additional traffic adds little or no cost.

As shown in the Appendix, tremendous efficiency gains can be achieved if, rather than relying on the law of averages to manage demand within quality constraints, the seller uses the market to let buyers make the decision, since buyers have direct information on their own demand. Depending on the competitive landscape, this gain translates into improved profits for the sellers or savings for the buyers, or both.

But such market efficiency requires two components that are missing in the existing models: dynamic pricing and on-demand allocation.


Market size

For total Internet traffic, high-end estimates are around 10 Tbps [11] in 2001, reaching 60 Tbps by 2003; low-end estimates are around 2 Tbps in 2001, rising to 9 Tbps by 2003 and 35 Tbps by 2005. [12] Server farm traffic accounts for about 40-50% of this total, the rest being traffic between end users and traffic originating from servers directly on corporate dedicated access lines.

Internet bandwidth prices are of course opaque, but the consensus is that they are declining by 30 to 50% per year, with, unsurprisingly, the steepest declines predicted by streaming software vendors and the slowest by ISP equity analysts [13]. Based on the author's experience purchasing collocation service, prices were around $300/Mbps/month in 2000 and in the low $200s in 2001.

Taking the median value for total IP traffic, server farms at 40% of total IP traffic, and prices declining at 30% per year, we get a server farm bandwidth market of roughly $5B per year in 2001, growing to $40-50B by 2005:

Year                          2001     2002     2003     2004     2005
Total traffic (Tbps)             6       15       34       80      190
Server farm traffic (Tbps)     2.4      6.0     13.6     32.0     76.0
Price ($/Mbps/month)           200      140       98       69       48
Value (M$/year)              5,760   10,080   15,994   26,342   43,794
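
The table follows directly from its stated assumptions, as this minimal sketch shows (traffic figures are the median estimates above; the price and value rows print with the table's rounding):

    # Reconstruct the market-size table from its assumptions: median
    # traffic estimates, a 40% server farm share, and a price starting
    # at $200/Mbps/month and declining 30% per year.
    traffic_tbps = [(2001, 6), (2002, 15), (2003, 34), (2004, 80), (2005, 190)]
    for i, (year, tbps) in enumerate(traffic_tbps):
        farm_mbps = tbps * 1e6 * 0.40            # 40% of total, in Mbps
        price = 200 * 0.70 ** i                  # $/Mbps/month
        value_m = farm_mbps * price * 12 / 1e6   # M$/year
        print(year, farm_mbps / 1e6, round(price), round(value_m))
        # 2001: 2.4 Tbps, $200, 5760 ... 2005: 76.0 Tbps, $48, 43794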

Appendix

Pricing, Utilization and Over-subscription

The rules of thumb used by service providers to plan capacity are roughly equivalent in the utilization and the commitment models. Say, for example, that aggregate traffic peaks are 2.5 times the average [14] levels.

Consider a high-quality network, where customers buy on a usage basis. To maintain high quality (i.e. avoid congestion), the provider upgrades capacity when average usage reaches 40%. In committed rate terms, this means the network is not oversubscribed (i.e. an oversubscription factor of 1), since it has bandwidth to accommodate the peaks.

A cheaper provider would upgrade capacity at 60% utilization, or equivalently oversubscribe by a factor of 1.5. This provider experiences some congestion when the largest peaks occur.

An even cheaper one would let utilization rise to 80%, or equivalently oversubscribe by a factor of 2. This provider's network is often congested, so it will command lower prices, but it also has the lowest capacity costs.
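
The equivalence between the two rules of thumb follows directly from the peak-to-mean ratio; a minimal sketch, assuming the 2.5x figure above:

    # With peaks at 2.5x the average, a utilization upgrade threshold
    # maps directly to an oversubscription factor: commitments are sized
    # to peaks, so factor = threshold * peak_to_mean.
    PEAK_TO_MEAN = 2.5
    for threshold in (0.40, 0.60, 0.80):
        factor = threshold * PEAK_TO_MEAN
        print(f"upgrade at {threshold:.0%} -> oversubscription {factor:.1f}")
        # 40% -> 1.0 (none), 60% -> 1.5, 80% -> 2.0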

Efficiency Gain Example

Suppose a provider has a server farm in a data center connected to its backbone. The backbone network is made of lines with 10 Gbps of capacity, and the provider has negotiated peering and transit agreements consistent with the same capacity (e.g. if 10% of its traffic goes to a given peer, it has a peering link which can sustain 1 Gbps). Suppose traffic from other data centers, peer networks, and customers accounts for at most 5 Gbps.

To maintain “perfect” quality (i.e. a congestion-free network), the server farm can generate no more than 5 Gbps. Thus the provider must set the price of bandwidth at a point where it can get 5 Gbps of billable demand (commitments or 95th percentile usage levels). Say that price point is $300/Mbps/month (which in 2001 would be a realistic price for a top-quality backbone), and call the buyers at this price point Type A. Then revenues are $1.5M per month when the pipe is filled with as many Type A buyers as can be supported.

However, due to the bursty nature of the traffic (again assuming a peak-to-mean ratio of 2.5), the average traffic out of the server farm will be only 2 Gbps.

If the seller could sell the remaining 3 Gbps on short term allocations to other traffic (say with a willingness to pay of $100-$300/Mbps/month), the provider would generate an extra $300,000-$900,000 of revenue per month, a 20%-60% increase on top of “legacy revenues”.

Going further, suppose all buyers are in the dynamic market. The initial Type A buyers would be willing to pay a higher unit price in exchange for more flexible commitments (bandwidth on demand), since they would no longer pay for bandwidth they don't use, with no reduction in quality. In other words, the total value of $1.5M could still be extracted from them, but with explicit allocations close to their actual traffic level of 2 Gbps, which equates to spot prices of $750/Mbps/month. Now, in the best-case scenario for the seller, if there is much more Type A demand (which previously could not be accommodated without risking congestion), then the remaining 3 Gbps of allocations can also be sold at $750/Mbps/month. Thus, a complete transformation from the old commitment pricing models to a dynamic market could generate a revenue increase of 150%.
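
The arithmetic of this example can be traced with a short sketch (all figures are the example's own assumptions):

    # Walk through the efficiency-gain example's arithmetic.
    capacity = 5000               # Mbps available to the server farm
    legacy = capacity * 300       # $1.5M/month from Type A commitments
    avg_traffic = capacity / 2.5  # 2000 Mbps actually flowing on average
    spare = capacity - avg_traffic            # 3000 Mbps unsold on average

    # Step 1: sell spare capacity short-term at $100-$300/Mbps/month.
    print(spare * 100, spare * 300)           # +$0.3M to +$0.9M (+20% to +60%)

    # Step 2: fully dynamic market; Type A value concentrates on actual usage.
    spot = legacy / avg_traffic               # $750/Mbps/month
    print(spot, capacity * spot / legacy - 1) # 750.0, 1.5 (a 150% increase)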



[1] Revision: 2.1 – October 2001.

[2] nemo@invisiblehand.net

[3] “E-commerce drives bandwidth needs”, Inter@ctive Week, November 8, 1999.

[4] “UUNet To Spend $100 Million To Host The Hosts”, Inter@ctive Week, August 9, 1999.

[5] “Microsoft Back Office Products to be Included in ebaseOne's Hosted Solutions”, PRNewswire, November 17, 1999.

[6] “Sprint research project turns up surprising results about backbone's strength”, Broadband Week, July 23, 2001.

[7] “Well E-quipped?”, Information Week, January 25, 1999.

[8] “The costs of streaming: how about them apples?”, Streaming Media Magazine, March 2001.

[9] “The costs of streaming: how about them apples?”, Streaming Media Magazine, March 2001.

[10] “Streaming Bleeds Cash”, Industry Standard, Sept 25, 2000.

[11] 1Tbps = 1,000 Gbps = 1,000,000 Mbps.

[12] Probe Research report, July 2000 and “Bandwidth Explosion”, Lehman Brothers, February 2001.

[13] “Shrinking streams grow bigger”, Wired News, November 2000, and “Bandwidth Explosion”, Lehman Brothers, February 2001.

[14] Of course individual content providers may have much more bursty traffic, but aggregation tends to smooth out traffic, since it is less likely that all sources will peak at the same time.