November 18, 2010

Living in the Past

By Fred Goldstein, Ionary Consulting



Oh, we won't give in,
we'll keep living in the past.

            -Ian Anderson, Jethro Tull

It’s one thing to maintain old things that work. Even a good antique can still be useful. The problem is when we confuse our antique relics with obsolete ways of doing business, and preserve the latter more lovingly than the former.

Welcome to the strange world of American telecommunications. As all Americans of a certain age knew, we once had the world’s greatest telephone system. It took years to get a telephone installed in France, and then it barely worked. Just ask any Bell System retiree. Whether the rest of the world’s telephone systems really were as bad as they were made out to be is a different story. Okay, before the 1970s, France was a particularly bad case… but that was way back in the last century.

In 1970, most computing was still done using punch cards, and a really big mainframe had a megabyte of memory, probably magnetic core. A big city had maybe seven television channels. Ma Bell had a near-absolute monopoly, just being cracked open by the FCC’s recent Carterfone (1968) and MCI (1969) decisions. Most Americans drove American cars with big V8 engines that got 10-12 miles per gallon, with a gallon of gas costing about 30 cents. There wasn’t much talk about “telecommunications” then – it was mostly just phones – and the word “network” usually referred to CBS, ABC, or NBC. That was then.

Operator, get me long distance!

A lot of today’s problems are the systemic result of policy decisions made in, and for, that era. The phone system then was heavily regulated. The goal of most regulators was to minimize the “1FR” rate – the basic monthly rate for residential service with flat-rate (unmetered) local calling. To do this, the regulated monopoly prices for everything else were often set high – as high as the customer could bear – in order to subsidize 1FR. And to maintain this flow of above-cost revenue, the basic local calling area had to be kept small, while toll rates were kept high.

The toll-to-local subsidy was locked into place by a 1930 Supreme Court decision, Smith vs. Illinois Bell, which held that since the local telephone plant was used for interstate calls as well as local ones, part of its cost should be borne by AT&T’s long distance operation. Over the following decades, the interstate share of the cost of local lines grew; it was finally fixed at 25% in the 1980s. And the states themselves set even higher rates for in-state toll calls, as a way to further cross-subsidize the 1FR rate. 

As a result, the network is divided into rate centers, with each rate center requiring separate NPA-NXX (prefix) codes. A rate center is the point from which mileage is measured. If you’re old enough, you may remember mileage-based toll rates. Nowadays rate centers are mainly used to build tables of what is and isn’t “local”. In either case, it’s far removed from the Internet era’s distance-free world. Not that the telephone network’s technology is so distance-sensitive any more. In 1930, all long distance calls were placed through special operators; longer distances usually required more of them. Even local dial service was far from universal; the Bell System was all manual until the 1920s, even though dial switching had been introduced in the 1890s.

It’s the large number of rate centers that led to many of the area code splits of the past decade and a half. Every carrier needed its own prefix code, and later its own thousand-number block, for each rate center in which it did business. And rate centers can be small indeed. The City of Boston, Massachusetts, for instance, has only 48 square miles of land area, but Verizon has kept it divided among 11 rate centers. This allows it to charge a premium for calls to non-contiguous ones, both at wholesale (to other carriers) and retail.
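For the technically inclined, here is a rough back-of-the-envelope sketch in Python of how fast this arrangement eats an area code. The 11 Boston rate centers and the 10,000 numbers per NPA-NXX prefix reflect the numbering plan described above; the count of 20 competing carriers is purely a hypothetical figure for illustration.

```python
# Rough sketch of why per-rate-center number assignment exhausts area codes.
# Boston's 11 rate centers come from the column; the carrier count is a
# hypothetical assumption used only for illustration.

NUMBERS_PER_PREFIX = 10_000    # each NPA-NXX prefix covers 10,000 line numbers
USABLE_PREFIXES_PER_NPA = 792  # 800 NXX combinations minus the eight N11 codes

rate_centers = 11              # Boston (from the column)
carriers = 20                  # hypothetical number of competing carriers

prefixes_needed = rate_centers * carriers
numbers_tied_up = prefixes_needed * NUMBERS_PER_PREFIX

print(f"Prefixes consumed in one city: {prefixes_needed}")
print(f"Numbers tied up regardless of demand: {numbers_tied_up:,}")
print(f"Share of the area code used: {prefixes_needed / USABLE_PREFIXES_PER_NPA:.0%}")
```

With those assumptions, one mid-sized city ties up more than two million numbers and over a quarter of an area code, whether or not anyone actually orders service.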

But not all calls are subject to the same rules. At wholesale, calls to or from mobile phones are rated as local if they are made within the same Major Trading Area; there are about 50 of these across the United States, as defined by a 1990s Rand McNally Commercial Atlas. With almost half of phone calls in some markets being wireless, who’s left to pay the higher rates that the rate center system was designed to extract? VoIP calls, too, are in a never-land where distance or locality usually doesn’t count, except when it does – which the FCC still hasn’t clarified after 14 years of cogitation. But ILECs and regulators in most states insist that the tiny rate centers be left in place, the better to use as a weapon against competitors.

Take California, for instance. About half a century ago, someone figured out that it cost Pacific Telephone more than a dime to bill for a 10-cent toll call, which was the rate for the 0-8 mile band. So the California Public Utilities Commission decided that calls between rate centers within 8 miles of each other would be local. That was stretched a few years later to 12 miles. But it also ruled that calls between rate centers more than 12 miles apart would remain toll calls, short of a papal dispensation.
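Under the hood, this is nothing more than a distance lookup between rate centers. Here is a minimal sketch of that check; only the 12-mile threshold comes from the rule described above, while the rate-center names and coordinates are made up (real rate centers are located by V&H coordinates, not the toy plane coordinates used here).

```python
import math

# Toy local-vs-toll check based on distance between rate centers, per the
# 12-mile rule described above. Coordinates are invented plane points in miles.

RATE_CENTERS = {            # hypothetical rate centers and coordinates (miles)
    "DOWNTOWN": (0.0, 0.0),
    "SUBURB-A": (5.0, 7.0),
    "SUBURB-B": (15.0, 9.0),
}

LOCAL_RADIUS_MILES = 12.0   # the California threshold described in the column

def is_local(a, b):
    """Return True if a call between two rate centers rates as local."""
    (x1, y1), (x2, y2) = RATE_CENTERS[a], RATE_CENTERS[b]
    return math.hypot(x2 - x1, y2 - y1) <= LOCAL_RADIUS_MILES

print(is_local("DOWNTOWN", "SUBURB-A"))  # True  (about 8.6 miles apart)
print(is_local("DOWNTOWN", "SUBURB-B"))  # False (about 17.5 miles apart)
```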

So high toll rates still apply to wireline calls made across the sprawling Los Angeles, San Diego, and San Francisco Bay metropolitan areas. Does this still create huge subsidies to hold down granny’s basic 1FR rate? Probably not – most callers, including granny, just pick up their cell phones or VoIP lines, which don’t charge tolls, or subscribe to a national flat-rate plan. But the hundreds of tiny rate centers still exist, and your next-door neighbor may still be a toll call if, as is common in rural areas, you’re served out of adjacent rate centers that are more than 12 miles apart.

Contrast this with Colorado. Years ago, regulators and Mountain Bell agreed that it made more sense to make all calls within metropolitan Denver local to one another. Rate centers were kept around in order to maintain the precise distance billing of toll calls. But when distance-based toll went the way of vacuum tube radios, they were merged into a big Denver rate center. It is thus possible to change with the times. Just rare.

The FCC even hinted at a fix for this mess in a 2001 docket entitled “Developing a Unified Intercarrier Compensation Regime”. But after almost a decade and several rounds of Comments, the Commission has not acted. Its most recent proposals, instead, focus on maintaining the existing system for at least ten more years, but with gradual reductions in various rates. Hence the FCC seems wedded to the Hoover era; it’s just arguing over price.

Living in the imaginary past of a neutral end-to-end network

This leads to the favorite debating topic in the industry today, so-called “network neutrality”. It is largely based on the fiction that ISPs are really just like the phone company, that the “web site” you’re connecting to is a single entity, and that there’s a clear boundary between content, provided at one computer addressed by its URL or DNS name, and the network. This is of course totally wrong. A large web site is usually distributed among many locations.

The servers may be run by a content distribution network (CDN) such as Akamai or Limelight. These CDNs work by running DNS servers for their client domains, and matching the requester’s IP address to the nearest server. So a Comcast user and a Verizon user might be sent to different IP addresses when requesting the same web site. There may also be huge back-end networks supporting the web servers. There is thus no bright line between Internet content and services! So trying to regulate ISPs as if they were 1960s telephone companies just leads to failure.
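To make the DNS trick concrete, here is a toy sketch of the kind of steering a CDN’s authoritative name server performs. Every prefix, region name, and server address below is invented for illustration; real CDNs use far more elaborate measurement and mapping.

```python
import ipaddress

# Toy model of CDN request routing: the authoritative DNS server for a client
# domain answers with a different server IP depending on where the query comes
# from. All prefixes and addresses below are invented for illustration.

REGION_BY_PREFIX = {
    ipaddress.ip_network("198.51.100.0/24"): "boston-pop",
    ipaddress.ip_network("203.0.113.0/24"):  "seattle-pop",
}

SERVER_BY_REGION = {
    "boston-pop":  "192.0.2.10",
    "seattle-pop": "192.0.2.20",
}

def resolve(hostname, requester_ip):
    """Pick the 'nearest' server for the requester; fall back to a default."""
    addr = ipaddress.ip_address(requester_ip)
    for prefix, region in REGION_BY_PREFIX.items():
        if addr in prefix:
            return SERVER_BY_REGION[region]
    return SERVER_BY_REGION["boston-pop"]   # arbitrary default

# Two users asking for the same site get different answers:
print(resolve("www.example.com", "198.51.100.25"))  # 192.0.2.10
print(resolve("www.example.com", "203.0.113.77"))   # 192.0.2.20
```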

Advocates for various positions in Internet policy discussions such as network neutrality will often cite historical precedent. Neutrality advocates cite the “end to end argument” that described the way the ARPANET and TCP/IP in particular worked in the early 1980s, when its connectionless mode of operation was being pitted in a battle royal against the very rigid form of connection-oriented packet-switched network known as X.25. Talk about living in the past – the X.25 camp, dominated by European then-state-owned telephone companies, had a solution designed for Europe’s very noisy analog telephone lines of the 1970s, optimized for low-speed remote teleprinter connections to big time-shared mainframes. 

The connectionless camp was interested in connecting computers – mostly time-shared minicomputers – to each other. X.25’s design corrected transmission errors at every hop along the way. TCP/IP only corrected errors at the edges of the network – “end to end”. In the real world, the better approach depends on how much loss each link has. (Wireless links, for instance, often have their own error-correction mechanisms, since they are inherently pretty unreliable.) But this became a religious war, and the fact that TCP/IP worked its “end to end” magic to good effect for many applications in many places was promoted to the status of an Eleventh Commandment.
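A little arithmetic shows why the answer depends on link quality. The sketch below compares the expected number of link transmissions per delivered packet under per-hop versus end-to-end recovery; the five-link path and the loss rates are hypothetical, and a failed end-to-end attempt is charged the full path as a simplification.

```python
# Expected transmissions per delivered packet over a path of n links, each
# losing a packet with independent probability p. Per-hop recovery retransmits
# on the lossy link only; end-to-end recovery repeats the whole path, and a
# failed attempt is charged all n links (a slight overcount, fine here).
# Link count and loss rates are hypothetical.

def per_hop(n, p):
    # each link needs 1/(1-p) attempts on average
    return n / (1 - p)

def end_to_end(n, p):
    # the whole n-link path succeeds with probability (1-p)**n,
    # so it takes 1/(1-p)**n attempts, each costing n link transmissions
    return n / (1 - p) ** n

for p in (0.001, 0.05, 0.20):   # clean fiber, a noisy 1970s line, a bad radio link
    print(f"loss={p:.3f}  per-hop={per_hop(5, p):.2f}  end-to-end={end_to_end(5, p):.2f}")
```

On clean links the two approaches cost about the same, and end-to-end’s simplicity wins; on very lossy links, hop-by-hop recovery pulls ahead. Neither is a commandment.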

But in today’s real Internet, hardly anything really operates on an end-to-end basis. Every corporate network and any careful home user will use a firewall. That is a box whose explicit purpose is to break end-to-end connectivity, in order to prevent bad guys from initiating unwanted connections to the user’s computers. 

Firewalls also usually perform network address translation (NAT), a function that is anathema to old-timey IP fundamentalists, since it means that IP packets are not delivered unchanged end to end, but is seen as beneficial by most users. In fact, NAT has something in common with X.25. An X.25 connection identifier is a local value – it may not be the same at each end of the connection – and it is mapped to the two endpoints at the time the connection is established. NAT turns TCP/IP’s connection identifiers – the IP addresses and port numbers – into local values too.  
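For the curious, here is a minimal sketch of the bookkeeping a NAT box does, with invented addresses and ports; it illustrates the idea, not any particular product.

```python
import itertools

# Minimal model of the mapping a NAT box keeps: each outbound flow from a
# private (address, port) pair is rewritten to the router's single public
# address and a locally chosen public-side port. Addresses and ports are
# invented for illustration.

PUBLIC_IP = "203.0.113.5"
_port_pool = itertools.count(40000)   # public-side ports handed out in order

nat_table = {}   # (private_ip, private_port) -> public_port

def outbound(private_ip, private_port):
    """Rewrite an outgoing flow's source; remember the mapping for replies."""
    key = (private_ip, private_port)
    if key not in nat_table:
        nat_table[key] = next(_port_pool)
    return PUBLIC_IP, nat_table[key]

def inbound(public_port):
    """Map a reply back to the private host, or drop it if no flow exists."""
    for (priv_ip, priv_port), pub_port in nat_table.items():
        if pub_port == public_port:
            return priv_ip, priv_port
    return None   # unsolicited inbound traffic is dropped, as a firewall would

print(outbound("192.168.1.10", 51515))   # ('203.0.113.5', 40000)
print(inbound(40000))                    # ('192.168.1.10', 51515)
print(inbound(40001))                    # None
```

Like an X.25 connection identifier, the public port is a local value: it means nothing outside the box that assigned it, and it exists only for the life of the flow.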

An analogy from the phone world is PBX extensions: Many business phones cannot be dialed directly from the public network; they simply have an extension number behind a listed main number. “End to end” is like a direct inward dialing number – it’s often, but not always, worth having.

Why do some act as if the end-to-end concept were vital, even if it has proven impractical? Again, the answer may lie in the past. The Internet and the telephone network evolved from very different roots, especially with regard to their business models. Telephone monopolies distinguished between different types of calls, and based much of their income on recording billable events. In the early days of TCP/IP, routers simply did not have the horsepower to do much besides pass along IP packets. 

It was impossible to create billable events any more complex than a gross count of packets or bytes. The absence of premium pricing such as “long distance” was very attractive. And in the United States, retail ISPs by 1996 had largely adopted a flat-rate pricing model, selling unmetered service by the month rather than by the hour or byte. “End to end” came with the assumption, however obsolete, that the network in between the end points has no way to bill for, let alone muck with, the user’s data.

Thanks to Moore’s Law (which is really just an observation), the price of computing has fallen rapidly, even more rapidly than the cost of bandwidth. Hence it is now possible, and even practical, for IP traffic to be monitored for billable events. This is what the IP Multimedia Subsystem (IMS) is about. This doesn’t make IMS a good idea – it is a Rube Goldberg contraption of weirdly-applied protocol machinery, for a purpose that customers don’t like – but it is a way to use IP in the context of a telephone, rather than Internet, business model.

So end-to-end’s theoretical benefits are themselves ephemeral, but its costs aren’t. Take, for instance, IP version 6. This is 2010’s version of “Y2K”, a fear campaign to get companies to buy new network equipment based on some imaginary deadline, in this case the exhaustion of virginal IP version 4 address blocks. The main work-around for the shortage of IP addresses is NAT, which allows a small number of public addresses to support many users. The main objection to NAT is that it breaks “end to end”. Since that’s already history, there’s less reason to hurriedly abandon IPv4.

Perhaps IPv4’s existing four billion addresses should be used a bit more efficiently. Many existing address blocks, assigned long ago when there was no shortage, are largely empty, and the rules allow resale of addresses. So a secondary market in IP addresses will spring up, providing enough addresses for new functions that can’t go behind a NAT. After all, agriculture didn’t stop when the federal land offices stopped handing out new homesteads. The regional address registries will just shift from being land offices to becoming title offices, keeping track of blocks. And other new technologies will no doubt arrive which do not even depend on the use of traditional IP addresses. A 32-year-old protocol suite need not be the only solution for the future.
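For concreteness, here is the back-of-the-envelope arithmetic behind that “four billion” figure, and the size of the legacy “Class A” (/8) blocks handed out in the land-office days:

```python
# The arithmetic behind the "four billion addresses" figure and the legacy
# allocations mentioned above. An IPv4 address is 32 bits; an old "Class A"
# (/8) assignment holds 2**24 addresses.

total_ipv4 = 2 ** 32
class_a_block = 2 ** 24

print(f"Total IPv4 addresses:      {total_ipv4:,}")     # 4,294,967,296
print(f"Addresses in one /8 block: {class_a_block:,}")  # 16,777,216
print(f"Share of the space per /8: {class_a_block / total_ipv4:.2%}")  # 0.39%
```

Each of those early /8 grants is roughly 16.8 million addresses – enough, if reclaimed or resold, to keep new public-facing services going for quite a while.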

The FCC keeps the radio spectrum locked up in ancient history

Another case where living in the past has serious consequences is in radio spectrum policy. The FCC, like most other national regulatory agencies, maintains a policy of licensing that is firmly grounded in the way radio systems worked in the mid-20th century. This has a few consequences. Of course new applications are held back from the market. But with the addition of spectrum auctions in the 1990s, a small oligopoly of incumbent carriers has been able to buy up spectrum to maintain scarcity and keep competitors off the air.

Radio reception is, of course, prone to interference. How a receiver behaves when multiple signals are on the same or nearby frequencies is very much a property of the type of modulation being used and the quality of the receiver. Old-fashioned 1920s AM radio, for instance, is very sensitive to interference, which causes background noise or whistles. Newer (1940s) FM radio has a wonderful property, the “capture effect”, which means that if an unwanted signal is a certain amount weaker than the desired one, the receiver won’t hear the unwanted one at all. This is one reason why FM can handle high fidelity audio. (So are the wider channels assigned to FM stations, which operate at a higher frequency than AM broadcasting.)

The problem with spectrum policy is that it never really got past that era. The mobile phone industry has big profits protected by licensing policy – the Powell-Martin FCC tried to reduce the industry to three competitors per market, few enough to allow serious collusion. In practice most markets got down to four or five players. And with its high profits, the industry developed much more advanced technology than hoary old FM. But it was still designed around, and for, exclusive licensing. Low-budget wireless ISPs, on the other hand, struggle to share the slivers of spectrum that are made available for unlicensed use. These are generally shared with microwave ovens, Bluetooth, water-meter telemetry, wireless LANs, baby monitors, and various other uses.

Yet if one were to use a spectrum analyzer to see how busy the microwave radio spectrum is, especially in the valuable range from 1 to 6 GHz, it would look surprisingly empty. Huge swaths are reserved for government use. Bands for applications like coastal radar are protected from other uses – even hundreds of miles inland. Big chunks of civilian spectrum lie unused because they are exclusively licensed to companies that can’t afford to use them. Yet the FCC won’t let them be reused, because the current licensees still hope that the licenses will become valuable or can be resold, or because the FCC hopes to raise more money from a re-auction. Radio spectrum is being treated as a trading commodity, like gold in a vault, rather than being utilized.

New technology like ultra wideband (UWB) is designed to allow greater sharing of non-exclusive spectrum. But when the FCC dipped its toe in the UWB waters a few years ago, it came out with rules so restrictive that the technology found little use. The new rules for TV White Space will allow a little bit of spectrum sharing, especially in rural areas, but even those are extremely protective of existing broadcast interests, to prevent even a slight chance of interference with over-the-air reception. And even that met with huge political resistance from the broadcasters.

WiFi came about because the FCC smartly allowed a modest bit of “junque” spectrum to be used without a license. It led to billions of dollars of sales. But that may have been a fluke. So long as entrenched interests are more important than potential new applications, progress will be meted out slowly, the way the old Bell System introduced technology when it had a monopoly and wanted every investment to last 40 years. Spectrum policy should be revisited to take advantage of newer technologies, such as software-defined and cognitive radios, and to give less protection to obsolete receivers. Ever-weaker technical justifications are no longer adequate cover for a policy whose real purpose is to protect the value of oligopolies via scarcity. The past isn’t always the right place to be living.


Fred Goldstein, principal of Ionary Consulting, writes the Telecom Policy column for TMCnet. To read more of Fred’s articles, please visit his columnist page.

Edited by Stefanie Mosca






