April 29, 2009

How Workarounds Drive Telecom and Networking

By Fred Goldstein, Ionary Consulting


We like to think that the course of progress is dictated by scientific discovery, and by the work done by technologists and engineers to put it to practical use. But the dirty little secret of the telecommunications and networking industry is that it’s driven less by technical reality than by the need to work around regulatory absurdities. We have become so accustomed to this way of doing business that we don’t even notice what should be obvious! This is particularly true for the United States, but as the world’s largest market, it sets much of the direction for everyone else.

 
Pretty much everything in the Internet sector owes its existence to some foolish rule or another, either directly or indirectly. The most telling case is the status of the Internet itself, as well as the status of “IP-enabled services” such as Voice over IP (VoIP). Put the Internet next to the telecom industry, the historically regulated common carriers, and watch how the two deal with each other; the ironies become clear.
 
Telecom Was A Fantasy World Of Monopoly
For most of its life, the telecom industry was simply the telephone industry, primarily offering voice service across a ubiquitous, regulated network. When Bell’s patents expired in 1893, the industry became very competitive. Eventually, AT&T managed to create a system of territorial exclusivity, and by the time the Communications Act of 1934 was passed, there was no competition left. So the regulations of the day locked in the monopoly.
 
Once competition is prohibited, prices can more easily deviate wildly from costs. Regulators liked the monopoly because it let them constrain the price of basic residential local service. “Residual pricing” was one such regulatory scheme: maximize monopoly profits wherever possible so that the basic residential rate only had to cover the residue, not all of its own costs. State regulators were even adopting residual pricing plans into the 1990s. No wonder so many opposed every move to open up competition, from Carterfone (which allowed customers to attach their own equipment to telephone lines, rather than just rent from the phone company) to long distance and then local competition. And many tradition-minded regulators were uncomfortable with the Internet itself, which lived in a free-market cocoon of its own.
 
Vertical Integration Bound Facilities To Services
Another part of telecom policy was the coupling of physical facilities to the services that used them. Telephone wires were owned by the service provider, and were only designed, or upgraded, as that service provider saw fit. This vertical integration was not an issue when the one service that really mattered was Plain Old Telephone Service. In that era, the main issue was balancing the price of calls vs. the price of the line itself. Only a small portion of the actual cost of telephone service is usage sensitive. Long distance calls, and in some places local calls, were priced far above cost, in order to subsidize the fixed price of local service and pay for the expensive wires up on the poles and under the streets.
 
This service-driven model quietly started to fall apart in the 1980s when fiber optics became practical. Fiber changed the key economics. Up until then, high-capacity services were very expensive to provision, so they had to be priced accordingly. Even a measly copper T1 circuit, at 1.5 megabits per second, leased for thousands of dollars per month. A strand of fiber was far more versatile, and had enough capacity to “bypass” many billable phone calls. Fiber could have been seen as infrastructure, but that was not compatible with a service-driven business model.
 
Crippling The Promise Of Fiber
One group of telco people, the engineers, got together in the 1980s to figure out how to light up the fiber that they assumed would be brought to most homes by the end of the century. The ITU standards committees came up with a concept called Broadband ISDN, and developed a core technology for it called ATM (Asynchronous Transfer Mode). But the engineers had no say in how to charge for it. That was someone else’s problem, and those suits turned out not to have a good answer at all. If high-bandwidth services could be priced at a level that would stimulate demand, then existing low-bandwidth cash cow services might lose revenues to them, a process known as cannibalization.
 
I know the irony well: I was on a B-ISDN standards committee, representing Digital Equipment Corp., at the time a computer industry giant. They too worried about cannibalization. In the 1980s, their VAX minicomputer line was very profitable, but then cheap commodity PCs became available. Digital didn’t want to cannibalize high-margin VAX sales with low-margin PCs, so they avoided the PC business. Of course this didn’t stop others from killing their high-margin business. If you don’t cannibalize yourself, somebody else will. And Digital’s estate is now owned (via Compaq) by one of its former competitors, Hewlett-Packard, which learned to deal with the new realities rather than deny them.
 
Telephone companies had one up on Digital. They had a near-monopoly on the wires to every home, especially before most cable systems were upgraded in the 1990s. If they didn’t pull the fiber, nobody else was likely to do so either, and if they didn’t cannibalize their own sales, nobody else would be able to. At least that was the strategy, and it almost worked. By the mid-1990s, B-ISDN and ATM were nearly dead; calling rates remained far above cost, and increasing demand for high-priced digital leased lines (Special Access) was boosting margins. The Bell companies got the FCC and most states to take them off rate-of-return regulation (basically a set of profit caps) and move them to an “alternative form of regulation”, price caps. And then they got prices for many of their services fully deregulated.
 
The one fly in their ointment was competition. The Telecommunications Act of 1996 had authorized local telephone competition across the U.S., so they no longer had a de jure monopoly on local telephone service. And the Act required them to lease their facilities to competitors at cost-based rates. Or at least it seemed to. A burst of competition in the late 1990s came to a screeching halt in the early 2000s, once a new Republican-led FCC was in place and the rules were changed. Straightforward competition in regulated telecom then became much more difficult.
 
Competition From A Different Business Model
But the big threat to the Bells wasn’t telecom competition per se. It wasn’t even “intermodal” competition from wireless and cable companies, though they certainly did take away a large share of the low-profit residential telephone business – the lines that were usually subsidized by residual pricing. It was “intermodel” competition from the Internet: competition from a different business model, not merely a different mode.
 
Now there are two ways to look at this. The standard approach is to pontificate about how smart the Internet people are and how superior its technology is. But that’s self-congratulatory hogwash. The Internet’s business model evolved absent the regulatory decisions that crippled telecom. It takes away telecom business precisely because it is a workaround for the broken regulatory and business model used by the traditional telecom industry. Yet it’s a convoluted, inefficient one at that. The Internet was only able to pull it off because regulators made provisions for it.
 
Remember that fiber that was never pulled to every home? In 1989, who could have expected that in 2009, we’d still be nursing so much ancient copper wire, let alone VAX-era telephone switches? It turns out that there is only one easy way to get access to high-bandwidth circuits without paying a ridiculous fee based on 1980s toll call rates. And that’s to take advantage of a little definitional quirk in the federal rules.
 
“Telecommunications” is fundamentally the unchanged carriage of information. “Information service” is a category based on computer systems and networks, dating back to the original Computer Inquiries that began in 1968. In order to allow remote access to computers (remember time-sharing?), the rules had to distinguish between the monopoly telephone carrier’s “basic” service and the “enhanced” services that made use of it. Computer time-sharing companies like CompuServe were classified as “enhanced service providers” and allowed to use telephone lines without becoming carriers themselves. And in the 1990s, they evolved into Internet Service Providers, acquiring the “enhanced” status of “information service” along with it.
 
It was evolution, not a sudden shift. The “online service providers” first added Internet mail and news (Usenet) gateways, then Web browsing, but by then thousands of new ISPs had popped up. Most ISPs do little of the actual work of processing data, other than routing packets. In the famous words of former Senator Ted Stevens, who helped write the Telecom Act, the Internet is now just a series of “tubes”. At high enough speeds, IP networks are often (not always) capable of standing in for traditional telecommunications circuits. So by using a protocol intended for computer access as if it were a basic telecom service, while still being treated as time-sharing companies, IP-enabled service providers work around a telecom regulatory scheme that still thinks it’s 1934.
 
To some extent even this result is merely a regulatory convenience. Federal regulators, looking to break the logjam caused by the states’ focus on the cheap 1FR (the flat-rate residential line), have allowed almost anyone waving the IP flag to gain certain privileges by invoking the “ESP exemption”. This very reasonable policy was meant to ensure that access to enhanced services, including the Internet, was available at reasonable rates. The phone companies had long wanted to treat all calls to ESPs (and later ISPs) as toll calls, yet the FCC didn’t want to do the obvious and say that a local call is a local call even if it is answered by an ISP. (That would have conceded a debatable claim of federal jurisdiction.) The FCC still insisted on classifying calls based on where the payload goes, mainly as a workaround to a loophole in the long-distance subsidy system that had briefly existed in the mid-1970s. So the FCC carved out an “exemption” as another workaround for its misguided system of classification.
 
Hence we have a regulatory never-land, and the “exemption” has taken on a life far beyond the ability to call up an ISP. Regulators aren’t even fighting the last war; they’re several wars late to the battlefield. The results are thus strange, if you think about it. If you convert your phone call to IP using an adapter at your own premises, then your Internet telephony service provider can deliver long distance calls without paying per-minute access charges at each end. (These can run to over 20 cents/minute on some intrastate calls, though Bell-company interstate access rates are now more like 0.6 cents at either end.) If the call is converted to VoIP anywhere else, though, then it’s treated as a normal toll call. And the local carrier delivering a call to its customer in Peoria is supposed to accept that an arriving call was converted to VoIP at the caller’s house in Santa Monica, not (heaven forbid) at a service provider’s location in Los Angeles.
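 
To make the economics concrete, here is a rough back-of-the-envelope sketch in Python using only the per-minute figures quoted above. The ten-minute call length, the access_cost helper, and the assumption that the quoted rates apply per minute at each end are illustrative simplifications, not actual tariffs.

```python
# Back-of-the-envelope comparison of switched-access charges for one
# long-distance call, depending on where (if anywhere) it becomes VoIP.
# All rates are the rough figures cited in the text, not current tariffs.

MINUTES = 10  # a hypothetical 10-minute call

# Assumed per-minute access rate charged by the local carrier at each end
INTRASTATE_PER_END = 0.20    # "over 20 cents/minute on some intrastate calls"
INTERSTATE_PER_END = 0.006   # "more like 0.6 cents at either end"
PREMISES_VOIP_PER_END = 0.0  # converted to IP at the caller's own premises

def access_cost(rate_per_end: float, minutes: int, ends: int = 2) -> float:
    """Total access charges for a call that touches `ends` local carriers."""
    return rate_per_end * minutes * ends

for label, rate in [
    ("Intrastate toll call", INTRASTATE_PER_END),
    ("Interstate toll call", INTERSTATE_PER_END),
    ("Premises VoIP (ESP exemption)", PREMISES_VOIP_PER_END),
]:
    print(f"{label:32s} ${access_cost(rate, MINUTES):.2f}")
```

Under those assumptions, the same ten-minute call carries roughly $4.00 in intrastate access charges, about $0.12 interstate, and nothing at all if regulators accept that it became IP at the customer’s premises.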
 
Neutrality Reverses The Definitions
The final indignity is that the Internet so closely resembles a telecom service — information carried without material change — that well-intentioned activists are campaigning to regulate its content in the name of “network neutrality”. This would essentially require that “information service” providers not engage in the core functions of providing information, but instead act as common carriers. But the common carriage obligations of the telephone companies themselves were revoked in 2005, so the carriers themselves may be subject to even less regulation! Network neutrality itself, then, is a plea for a regulatory workaround to the lack of common carriage, itself a very serious and misunderstood problem that would be amenable to a very simple and obvious regulatory fix. The workaround simply compounds the confusion between the information and telecommunications roles.
 
Peer Applications Vs. The Copyright Police
But those aren’t the only cases where the technical solution is really a big workaround. Let’s take instead so-called “peer to peer” applications, from the original Napster to Gnutella to BitTorrent. What are they really about (other than a redundant doubling of the word “peer”, since a peer can, by definition, only talk to another peer)? They too are a workaround, but of a different legal problem.
 
A typical peer application has three fundamental components. One is a file transfer client, which exchanges files with other users. One is a file transfer server, which makes files available for others to copy. (Since the client and server are peers, the usual difference is that the server is the side that runs unattended, potentially using the network while its owner sleeps.) The third is a search function, which enables users to find the files they’re looking for on someone else’s computer. None of these functions is novel. Every common computer has a file transfer client of some sort: A Web browser can download files using either the Web’s native HTTP (hypertext transfer protocol) or the much older FTP (file transfer protocol). Windows even includes an old text-style FTP client that can be used from a command shell to upload or download files. And search engines are by now old hat on the Web.
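 
None of this takes exotic software. As a reminder of how mundane the “file transfer client” piece is, here is a minimal sketch using nothing but Python’s standard library; the fetch helper, URL and filename are hypothetical placeholders, not part of any particular peer application.

```python
# Minimal file-transfer client: an ordinary HTTP download using only the
# Python standard library. The URL below is a hypothetical placeholder.
import shutil
import urllib.request

def fetch(url: str, dest: str) -> None:
    """Download the resource at `url` and save it to the local path `dest`."""
    with urllib.request.urlopen(url) as response, open(dest, "wb") as out:
        shutil.copyfileobj(response, out)

if __name__ == "__main__":
    fetch("http://example.com/some-file.iso", "some-file.iso")
```

The server half is just as mundane: Python’s standard library can serve a directory with “python -m http.server”, and every Web server on the planet does the same job at scale. What raises the issues discussed next is the file-server piece running unattended on millions of consumer machines, combined with search.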
 
What makes peer applications controversial is the file server portion. Most consumer ISP services have terms of service that prohibit running a public file transfer or Web server, mainly because it is more efficient to run servers at a more central point like the ISP’s data center, rather than tie up the consumer’s access line and the associated backhaul (which is often costly to provision). And most ISPs provide free use of a Web server. So why do people need “peer” applications?
 
The main reason, it seems, is that many of the files being shared are subject to copyright, and cannot be shared legally. Federal law (the DMCA’s safe-harbor provisions) protects ISPs from responsibility for copyright violations by their subscribers on their servers if, when given proper notice, they promptly “take down” the offending material, but ISPs also need to be wary of being accused of “abetting” illegal activities. This is a bit of a gray area, with the recording and movie industries taking an aggressive view of alleged copyright violations. So ISPs understandably do not want their users hosting too much on their servers. It should be cheaper for the ISPs to give out big storage quotas on their servers than to have the data served up from customer PCs, since storage costs so little nowadays, but their legal bills might go out of control.
 
Thus peer applications are essentially a workaround for the problems caused by copyright and takedown. Not that everything transferred over them is illegal, but some is, and that is what created the demand, and the critical mass of users, that made these user-built networks large enough to be effective.
 
There are, of course, legitimate, non-copyright-infringing uses of BitTorrent and related technologies. Linux distributions, for instance, are often made available that way. And there are licensed video content distributors that use peer software. But these too are largely a workaround for the cost of supporting file servers, or of using content distribution networks.
 
Since peer applications have caught on and are being used ever more widely for very-high-volume video and movie transfer, the cost to the ISPs of supporting these consumer-site file servers is rising. While these applications usually violate terms of service, the companies behind them have taken advantage of the “network neutrality” plea to pressure the ISPs to leave them alone. They want to take advantage of the pricing aspect of the Internet’s business model, which usually doesn’t charge for usage, and which almost never charges for distance. But they don’t want ISPs to have their traditional freedom to manage or ration the “information” that they carry. And when it comes to creating regulatory friction, the resulting fight makes the old arguments over the 1FR rate look pale by comparison.
 
That’s pretty much the driving force behind today’s industry. Little work is being done to improve the fundamentals. Instead, investment is focused on creating new workarounds to the problems created by old workarounds to obsolete structures. Users, service providers and regulators are engaged in a game of one-upmanship and pretzel logic. Imagine how much more productive we could all be if public policy were designed to reduce regulatory friction and improve productivity by encouraging straightforward solutions.
 

Fred Goldstein, principal of Ionary Consulting, writes the Telecom Policy column for TMCnet. To read more of Fred’s articles, please visit his columnist page.

Edited by Greg Galitzine






