In the communications business, rationing is a fact of network life. Virtually every part of a communications network uses shared resources, and in a market where users do not want to pay too much for access to those resources, rationing them is necessary.
Shared finite resources always pose a usage problem. Known as the "tragedy of the commons," the economic problem is that multiple individuals, each acting independently and rationally in their own interest when using a common resource, can ultimately destroy that shared, limited resource.
Some people argue that this problem cannot exist on the Internet, which is virtually infinitely expandable. But that misses the point. In looking at shared resources, the "commons" is primarily the access network's resources. In other words, the "choke point" is the homeowner's garden hose, not the reservoir.
Some might argue that IP technology, optics, Moore's Law and competition upend the traditional "scarcity" value of access bandwidth. Certainly that helps. Currently, most consumers have access to two terrestrial broadband providers, two satellite networks, and three or possibly four mobile networks. Then there are broadband pipes where people work, at school and at many retail locations.
Still, there are some physical and capital investment limits, at least at retail prices consumers seem willing to pay. If consumers are willing to pay much more, they can get almost any arbitrarily large amount of access bandwidth. That, after all, is what businesses do.
If consumers resist paying business prices, network investment has to be shared more robustly than it otherwise might.
All network resources are shared, and those resources are finite. To support retail prices that require such sharing, networks are designed to "underprovision" resources ranging from radio ports to multiplexers to backhaul bandwidth. Based on experience, network designers engineer networks to work without blocking or degradation most of the time, but not necessarily always. Unusual events that place unexpected load on any part of the access network will cause blocking.
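The tradeoff designers make here can be illustrated with the classic Erlang B formula, which telephone engineers have long used to relate offered load, provisioned capacity and blocking probability. The sketch below is illustrative only; the specific load and circuit numbers are hypothetical, not drawn from any real network.

```python
def erlang_b(offered_load, circuits):
    """Erlang B blocking probability: the chance that a new arrival
    finds all servers busy when `circuits` shared servers carry
    `offered_load` erlangs. Uses the standard recursion, which is
    numerically stable even for large circuit counts."""
    b = 1.0  # with zero circuits, every arrival is blocked
    for m in range(1, circuits + 1):
        b = (offered_load * b) / (m + offered_load * b)
    return b

# Hypothetical underprovisioned link: 100 users each offering an
# average of 0.1 erlang (10 erlangs total), sharing 15 circuits
# rather than 100. Blocking is rare but not zero -- roughly 3.6%.
p_block = erlang_b(10.0, 15)
```

The point of the recursion is exactly the column's point: capacity well below the theoretical peak still serves users acceptably "most of the time," and adding a few more circuits drives blocking down sharply without ever reaching zero.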
Blocking, in other words, is a network management technique. And that is the problem the Federal Communications Commission will face as it considers additional "network freedoms" rules, commonly known as "network neutrality." The term itself is imprecise, and the behavior it targets is in fact already covered by existing FCC rules. One might argue the real issue is that the definitions and application of those existing rules require clarification.
The ostensible purpose of the new rules is to prevent access providers from blocking or slowing any lawful application, but a rule already exists for that. Instead, it appears a primary effect of the rules will be to extend wired network rules to wireless providers.
Beyond that, policymakers will have to contend with tragedy of the commons effects. By forbidding all traffic shaping (a network management technique) in the name of "permitting the free flow of bits," rulemakers might set the stage for dramatic changes in how the industry packages and prices Internet access and other applications and services.
U.S. consumers prefer "flat rate billing" in large part because its cost is predictable. But highly differentiated usage, in a scenario where networks cannot be technically managed by any traffic prioritization rules, will lead to some form of metered billing.
If metered billing is not instituted, and if service providers cannot shape traffic at peak hours to preserve network access for all users, then heavy users will either have to pay more for their usage patterns, change those patterns, or experience some equivalent of "busy hour blocking."
Application providers and "public policy advocates" seem to be happy that new network neutrality rules might be adopted. They might not be so happy with the consequences if ISPs lose the ability to deny or slow access to network resources. On voice networks, some actual call blocking is allowed at times of peak usage. Forcing users to redial might be considered a form of traffic shaping: access is still allowed, but at the cost of additional time, or of time-shifted connections.
To the extent that such blocking already is impermissible, other network management techniques must be used. And one way to manage demand is to raise its price, whether by increasing flat-rate package prices, instituting usage-based billing or adopting some other functionally similar policy.
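A usage-based plan of the kind described here is straightforward to model. The sketch below is a minimal illustration; the base price, included allowance and per-gigabyte rate are invented for the example, not any provider's actual tariff.

```python
def metered_bill(gb_used, base_price=30.0, included_gb=50, per_gb=0.50):
    """Hypothetical usage-based bill: a flat base price covers an
    included monthly allowance, and usage beyond the allowance is
    charged per gigabyte."""
    overage_gb = max(0.0, gb_used - included_gb)
    return base_price + overage_gb * per_gb

# A light user pays only the predictable flat rate; a heavy user
# sees the cost of the shared resource directly.
light_user = metered_bill(20)    # 30.0
heavy_user = metered_bill(250)   # 30.0 + 200 * 0.50 = 130.0
```

The design preserves the flat-rate predictability most consumers want for typical usage, while making the heaviest users "more aware of the cost" of the shared resource, which is exactly the demand-management effect described above.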
Avoiding the tragedy of the commons problem, in other words, requires making end users more aware of the cost of using the shared resource.
Prioritized traffic handling, which assigns a user's traffic lower priority in the network once that user has reached a fair-use level, might be a preferable traffic management technique to outright slowing a single user's connection once an individual usage cap has been reached.
When that is done, heavy users experience degradation in service only when competing for resources in a congested situation. For peer-to-peer users, the reduction in throughput, averaged over time, will be limited.
Only in heavily loaded cells or areas will a peer-to-peer user experience serious issues. Prioritized traffic handling enables operators to focus on dimensioning their networks for normal usage, while still permitting unlimited or "all you can eat" traffic.
Perhaps there are other ways of handling the "rationing." But on a shared, congestion-prone network, available to users paying a relatively modest amount of money, with a small number of users placing a highly differentiated load on that network, some form of rationing is going to happen.
Perhaps flat rate packaging might still be possible if rationing is applied to end-user credentials rather than to bits, applications or protocols. In other words, instead of "throttling" a user's bandwidth when a preset usage cap is exceeded, what is throttled is access to the network itself.
Gary Kim is a contributing editor for TMCnet. To read more of Gary's articles, please visit his columnist page.
Edited by Amy Tierney