Every day we see indications of a sea change in the way video is consumed. The number of Internet content storefronts is exploding, and the array of consumer electronics devices that allow this content to be easily viewed on flat-panel HDTVs in the family living room is growing apace. This change in video consumption habits has two dimensions: it is entirely on-demand and it is a streaming experience. The streaming aspect in particular separates today's increasingly common on-demand Internet video from the “order, download, and watch” model prevalent in earlier Internet video experiments.
For broadband service providers (BSPs) this change represents both a threat and an opportunity. On the one hand, in addition to swamping their networks with a dramatic increase in the sheer volume of traffic they must handle, video upsets the capacity engineering principles they have long employed. But at the same time, with the right technology, it affords them the opportunity to leverage their core competence, network engineering, to help fulfill the vision of an Internet video viewing experience indistinguishable from that of a locally attached DVD, or even Blu-ray, player.
The End-to-End Thing
At the core of the current dilemma is the fact that the Internet, broadly defined, employs what is known as an “end-to-end” design principle. This principle, which has served the Internet quite well, holds that all application intelligence resides outside the network and that devices within the network, namely routers, should concern themselves solely with forwarding IP packets in a best-effort manner. This architectural philosophy works quite well for data applications such as SMTP, HTTP, and FTP. As the network becomes congested these applications automatically back off and help alleviate the congestion. The user is, for the most part, unaware of and undisturbed by this behavior, since the difference between a Web page loading in half a second and in a full second is barely perceptible, yet it represents a significant change in network load.
As a result, access networks could be safely designed with oversubscription factors as high as 50:1. (Oversubscription occurs when the aggregate downstream bit rate for all subscribers attached to an access node exceeds the uplink capacity of that node.) For example, a DSLAM with 400 connected subscribers receiving a 5 Mbps broadband service might have only a 45 Mbps DS3 uplink, a bandwidth oversubscription factor of more than 44:1. This was appropriate and acceptable given the bursty nature of the aforementioned data applications, and it resulted in a service delivery network where all subscribers were satisfied the vast majority of the time.
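To make the arithmetic concrete, here is a minimal Python sketch of the oversubscription calculation, using the figures from the example above (the 45 Mbps figure is the commonly quoted rounded DS3 line rate):

```python
# Oversubscription factor for an access node: aggregate subscriber
# bandwidth divided by uplink capacity. Figures from the DSLAM example.
subscribers = 400
service_rate_mbps = 5          # per-subscriber broadband service
uplink_mbps = 45               # DS3 uplink (rounded line rate)

aggregate_mbps = subscribers * service_rate_mbps   # 2000 Mbps
oversubscription = aggregate_mbps / uplink_mbps    # ~44.4

print(f"Oversubscription factor: {oversubscription:.1f}:1")
```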
Streaming applications in general, and video in particular, completely upset this model and potentially result in a service delivery network where all subscribers are dissatisfied most of the time.
The Nature of Video
Streaming video differs in almost every respect from the data applications on which the Web was built. Whereas conventional data applications are bursty (i.e., they have a very high peak-to-average bandwidth ratio), streaming applications are virtually constant-rate (a peak-to-average ratio of nearly one). More importantly, whereas network congestion typically produces no user-observable change in data applications, it degrades video immediately and in a highly visible manner. Congestion that would not hurt customer satisfaction in a data world has a dramatic, negative impact in a video world.
Take the example cited above: 400 subscribers receiving a 5 Mbps service, connected to a DSLAM with 45 Mbps of uplink bandwidth. A DVD-quality, standard-definition video stream can require as much as 2 Mbps to be delivered and displayed with acceptable fidelity. From an access-bandwidth standpoint that would not appear to be a problem, and all 400 subscribers should be able to watch movies concurrently. The problem, however, is on the uplink side. Four hundred video streams would require about 800 Mbps of uplink bandwidth, almost twenty times what is available in this example. The result is that all 400 subscribers would be dissatisfied with their service. And while they might be buying the movie from a content storefront such as Apple or Netflix, they will blame the BSP for the poor performance. In fact, in this example, anything more than about 22 active video subscribers will result in universal dissatisfaction. To make matters worse, when the 23rd subscriber comes online, all subscribers will see their service degrade and become dissatisfied customers, not just the 23rd.
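The breaking point can be worked out in a few lines. The following sketch assumes the same 2 Mbps per stream and 45 Mbps DS3 uplink as above:

```python
# How many concurrent 2 Mbps video streams fit in a 45 Mbps uplink,
# and how far full demand overshoots it. Figures from the example.
uplink_mbps = 45
stream_mbps = 2
subscribers = 400

max_streams = uplink_mbps // stream_mbps       # 22 streams fit cleanly
demand_mbps = subscribers * stream_mbps        # 800 Mbps if all 400 watch

print(f"Streams before congestion: {max_streams}")                   # 22
print(f"Full demand vs. uplink: {demand_mbps / uplink_mbps:.1f}x")   # ~17.8x
```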
A Better Approach
In the example we have been working with, the BSP must provision capacity sufficient to handle peak video viewing times plus enough extra capacity for the data applications it already supports. Assuming a 25 percent take rate for video services during peak viewing times, that means 100 concurrent 2 Mbps streams plus roughly 50 Mbps of existing data traffic, or about 250 Mbps of uplink capacity, more than a five-fold increase. But even this cannot guarantee customer satisfaction. With best-effort services it will always be the case that once the (n+1)st subscriber comes online in a network with capacity for n subscribers, all n+1 subscribers will be dissatisfied. The only way to truly guarantee customer satisfaction, even for a subset of customers (in fact, even for a single customer), is to provision enough capacity to stream video to all subscribers: 850 Mbps in this case (800 Mbps for video plus 50 Mbps for best-effort data traffic), and even that supports only one stream per household. What happens when multiple devices within a single household begin viewing streaming video simultaneously? It could easily be the case that the DS3 on our DSLAM would need to be replaced with an OC-48 or multiple Gigabit Ethernet uplinks. Obviously, such an upgrade would be difficult to cost-justify.
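The provisioning arithmetic can be captured in a small sketch; the per-stream rate and the roughly 50 Mbps of existing data traffic are assumptions carried over from the running example:

```python
# Uplink capacity needed to guarantee video quality, as a function of
# how many subscribers must be supported concurrently. Assumptions
# carried over from the running example.
stream_mbps = 2
best_effort_mbps = 50   # existing data traffic

def required_uplink(concurrent_viewers: int) -> int:
    """Capacity (Mbps) to carry the given streams plus data traffic."""
    return concurrent_viewers * stream_mbps + best_effort_mbps

print(required_uplink(100))  # 25% take rate -> 250 Mbps
print(required_uplink(400))  # every subscriber -> 850 Mbps
```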
A better approach is to modify traffic engineering principles in such a way that an investment in incremental bandwidth can be dedicated to video traffic and partitioned from the best-effort portion of the bandwidth. Suppose, for example, that we upgrade our DS3 to a 155 Mbps OC-3 uplink and reserve the incremental 110 Mbps of bandwidth for video (there are ways to make idle video capacity available to best-effort sessions, but we will address that another time). We now have enough capacity, as well as the bandwidth management mechanisms, to guarantee customer satisfaction for roughly the first 55 video sessions.
Further, when the request for a 56th video session arrives, the network can allocate that session to the best-effort partition, since the video partition is at capacity. Whereas pure best-effort traffic handling would leave 56 unhappy customers in this situation, more robust traffic management principles leave us with 55 happy customers and one unhappy customer, obviously a vastly superior outcome. This matters all the more because the video subscribers are presumably paying the BSP for a “premium” Internet video service; the ability to guarantee the quality of fee-based services is critical.
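One way to picture the mechanism is as a simple admission-control routine. The sketch below is illustrative only, not any particular vendor's implementation; it assumes the 110 Mbps video partition and 2 Mbps streams from the example:

```python
# Minimal admission-control sketch: new video sessions are admitted to
# the reserved video partition while capacity remains; once the
# partition is full, further sessions fall back to best effort.
VIDEO_PARTITION_MBPS = 110   # incremental OC-3 capacity reserved for video
STREAM_MBPS = 2

class Uplink:
    def __init__(self):
        self.video_in_use = 0

    def admit(self) -> str:
        if self.video_in_use + STREAM_MBPS <= VIDEO_PARTITION_MBPS:
            self.video_in_use += STREAM_MBPS
            return "guaranteed"    # carried in the video partition
        return "best-effort"       # partition full; no quality guarantee

link = Uplink()
results = [link.admit() for _ in range(56)]
print(results.count("guaranteed"), "guaranteed,",
      results.count("best-effort"), "best effort")
# -> 55 guaranteed, 1 best effort
```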
From an economic standpoint, it also becomes much easier to cost-justify network capacity upgrades. In a best-effort world it is not possible to state quantitatively what improvement in customer satisfaction will result from any given capacity upgrade, which makes the business case difficult. With better traffic engineering, however, it can be stated precisely that an x percent improvement in customer satisfaction will result from an investment of y in network capacity.
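In the terms of our running example, that statement can be written down directly: each 2 Mbps added to the video partition guarantees one more concurrent session. The sketch below assumes 100 peak-hour viewers, as before, and is illustrative only:

```python
# Percent of peak-hour viewers whose sessions can be guaranteed, as a
# function of the video partition size. Illustrative assumptions only.
STREAM_MBPS = 2
PEAK_VIEWERS = 100   # 25% take rate on 400 subscribers

def satisfaction_pct(video_partition_mbps: int) -> float:
    guaranteed = min(video_partition_mbps // STREAM_MBPS, PEAK_VIEWERS)
    return 100 * guaranteed / PEAK_VIEWERS

for mbps in (110, 155, 200):
    print(f"{mbps} Mbps partition -> {satisfaction_pct(mbps):.0f}% guaranteed")
```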
Siegfried Luft is the founder and CTO of Zeugma Systems. To read more of his columns, please visit his columnist page.
Edited by Greg Galitzine