Broadcasting and the Internet

I am going to go quite a bit into telephone switching here because there are some interesting parallels between the development of telephone and broadcast technology, and I think this provides some insight into the future of broadcasting.

I worked for Pacific Northwest Bell, then US West when the baby bells got sucked into various regional companies after the AT&T divestiture, and then Qwest when some brilliant marketing folks thought that Qwest sounded better than US West. I guess they just wanted to get “US” out of the company and put some “Q” in. I worked for PNB/US West/Qwest from 1978 to 1995 at which point I left to devote full time to Eskimo North.

Early on, almost everything was analog. In 1978 many of the central offices were computerized (the term used was “stored program controlled”), but even those weren’t fully electronic: the computers controlled mechanical switches. The remaining offices were completely mechanical.

In the 1990’s most of these were replaced with “digital” switches. At the core of a digital switch is a time-slot interchange, a multiplexer that takes the data sample arriving in one time slot, buffers it, and sends it back out in a different slot. The 5ESS still had a layer of physical switching acting as a concentrator.
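To make the time-slot interchange idea concrete, here is a minimal sketch in Python; the frame size and connection map are purely illustrative and not taken from any particular switch:

```python
# Minimal sketch of a time-slot interchange: one frame of 8-bit samples comes in,
# each sample is buffered, and samples are read back out in a different slot order.
# Frame size and connection map here are purely illustrative.

NUM_SLOTS = 24  # time slots per frame (illustrative)

# connection_map[output_slot] = input_slot whose sample goes out in that slot
connection_map = {0: 7, 7: 0, 3: 12, 12: 3}

def switch_frame(incoming_frame, connections):
    """Buffer the incoming frame, then emit each sample in its assigned output slot."""
    buffered = list(incoming_frame)       # samples stored in arrival order
    outgoing = [0] * len(buffered)        # idle pattern where no connection exists
    for out_slot, in_slot in connections.items():
        outgoing[out_slot] = buffered[in_slot]
    return outgoing

frame_in = list(range(NUM_SLOTS))         # fake samples: slot number as the sample value
print(switch_frame(frame_in, connection_map))
```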

When I started working for Pacific Northwest Bell, most trunking (the circuits that carry conversations between central offices) ran over twisted wire pairs, with an individual trunk circuit on each pair. Very primitive regenerative bidirectional repeaters were used on these circuits to overcome wire losses on longer runs. Those repeaters were a pain to adjust and maintain.

Over time these individual trunk circuits were moved to “T1” carrier systems, a time division multiplexing scheme that carried 24 individual conversations over two pairs of wire. These worked by sampling the analog voltage of each channel 8,000 times per second and encoding each sample with 8-bit u-law encoding; the 24 channels plus a framing bit produce a data rate of 1.544 Mb/s which, with the aid of repeaters, could be forced down copper lines to a remote central office.
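The arithmetic behind that 1.544 Mb/s figure is easy to check:

```python
# Where the T1 line rate comes from.
channels = 24              # voice channels per T1
bits_per_sample = 8        # 8-bit u-law code word per sample
samples_per_second = 8000  # one sample per channel per frame, 8,000 frames per second

payload_rate = channels * bits_per_sample * samples_per_second  # 1,536,000 b/s
framing_rate = 1 * samples_per_second                           # one framing bit per frame
print(payload_rate + framing_rate)                               # 1,544,000 b/s = 1.544 Mb/s
```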

Later, these T1 circuits were multiplexed into a 45 Mb/s stream and sent down coaxial cable. Over time these streams were multiplexed into still higher bit-rate schemes and sent over optical fiber.

There was never sufficient capacity demand between two central offices to justify the entire bandwidth an optical fiber could carry, so it became desirable to use a form of multiplexing that could add or drop a portion of that bandwidth at multiple locations. ATM (Asynchronous Transfer Mode) running over SONET (Synchronous Optical Network) links provided this functionality.

What should be noted about this entire architecture is that each conversation creates a 64 Kb/s data stream that runs continuously for the entire conversation, and each trunk circuit represents 64 Kb/s of data being transferred continuously whether or not someone is actually speaking on it. ATM adds considerable overhead because it operates with 53-byte cells: 48 bytes of data payload and 5 bytes of header information.

As you can see, this means almost 10% of the transmission medium’s capacity is eaten by cell headers. The reason they used such small cells has to do with latency: at the speeds common at the time, the concern was that larger cells would take too long to fill and would introduce enough delay to interfere with natural conversation. Small cells were chosen to minimize latency at the expense of efficiency.
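Both numbers fall out of the cell dimensions; a quick back-of-the-envelope check:

```python
# ATM cell overhead and the voice latency trade-off.
cell_size = 53        # bytes per ATM cell
payload_size = 48     # bytes of payload
header_size = cell_size - payload_size   # 5 bytes of header

print(f"header overhead: {header_size / cell_size:.1%}")   # ~9.4%

# How long a single 64 kb/s voice channel takes to fill one cell's payload:
voice_rate = 64_000                                  # bits per second
fill_delay_ms = payload_size * 8 / voice_rate * 1000
print(f"cell fill delay: {fill_delay_ms:.0f} ms")    # 6 ms; bigger cells mean more delay
```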

I made the conjecture, way back before VOIP (Voice over IP) was commercially available, that IP transmission would eventually come to dominate voice transmission. The reason is efficiency. With VOIP you don’t have static paths or connections; you route data as the need arises. Most VOIP software has silence detection and sends little or no data during silent intervals. Additionally, modern encoding techniques are more efficient, so less data is required for a voice conversation than the 64 Kb/s used by the telephone companies.
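As a rough illustration of the difference, here is a sketch; the codec rate and talk fraction are assumptions for the sake of example, not measurements of any particular VOIP product:

```python
# Rough comparison of circuit-switched voice vs. VOIP bandwidth per direction.
# The codec rate and talk fraction below are illustrative assumptions only.

pcm_rate = 64_000         # b/s: a circuit-switched channel runs continuously
voip_codec_rate = 8_000   # b/s: assumed low-rate codec (8 kb/s is in the G.729 class)
talk_fraction = 0.4       # assumed: each party speaks well under half the time

# With silence suppression, little or nothing is sent during pauses.
voip_average = voip_codec_rate * talk_fraction

print(f"circuit switched: {pcm_rate / 1000:.0f} kb/s, all the time")
print(f"VOIP (assumed):   {voip_average / 1000:.1f} kb/s average")
# IP/UDP/RTP headers add some of this back, but the gap remains large.
```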

The main thing that prevented the implementation of VOIP back then was that routers were not sufficiently robust to handle large amounts of voice traffic. High-end routers had 25 MHz CPUs and limited memory at that time.

Still, back when I was working for the telephone company and they were educating me with respect to ATM, I believed that the fixed data rate encoding of voice circuits would eventually be replaced with voice over IP, and that at best ATM would carry IP traffic between routers. The economics of doing so make sense.

Now there are many VOIP long distance and local telephone carriers competing with traditional carriers and British Telecom has committed to converting their entire network to VOIP. But I don’t think VOIP as it exists today is the end point.

As it exists today, you get a box you plug your phone into, and it connects to a broadband internet connection. It creates a connection between you and a telephone company switch (which might be a software switch consisting of nothing more than a PC loaded with the proper software). That switch then takes the information and creates a connection to somewhere else: it might go over a conventional trunk circuit to a conventional telephone company central office, to another VOIP switch, or to a customer. Eventually I see all of that going away, with connections going directly from one end user to another over a broadband connection. There are programs that do this now, but with most people still using conventional telephones you still need access to the circuit-switched telephone network.

So how does this impact the broadcast industry? Broadcasting today is extremely inefficient in terms of the way it uses spectrum and energy. In addition, it offers the end user a very limited choice of programming, particularly with the recent change in station ownership rules allowing a few corporate entities to own and control the programming of the bulk of radio and television stations and prevent the entry of independent competitors.

Broadcasting today involves a high-power transmitter at some central location radiating an electromagnetic signal to a limited area surrounding the transmitter. 100 kilowatts of effective radiated power might provide a roughly circular coverage area with a commercially useful radius of perhaps 30 miles (give or take; there are many variables such as antenna height, local terrain, etc). A high quality receiver and antenna might be able to pull in a signal at up to about ten times that distance, but that is not what the average person has.

Since the FCC has eliminated clear channels, even at night the geographical coverage area of any given station is very limited. If you are driving and listening to a program, you cannot drive very long before that station is no longer receivable.

Net Radio, by contrast, has a global coverage area and doesn’t waste hundreds of kilowatts of power on each originating source of programming. Potentially millions of stations are available, which provides much greater program diversity. The barriers to entry are much lower than with conventional broadcasting, where it costs several tens of millions of dollars to buy or build a broadcast station. With Net Radio, someone with a PC and a broadband connection has everything they need to get started.

Net radio is presently limited mostly to fixed reception, because a good infrastructure for continuous IP connectivity on the move doesn’t yet widely exist. However, the recent introduction of the WiMAX protocol will go a long way toward changing this, as will even newer ultra-wideband wireless data transmission standards.

Already there are companies installing national high-speed networks based on this protocol. I believe it’s only a matter of time until portable internet radios and automotive internet radios become widely available. Presently there are some portable internet radios that rely on WiFi hotspots; it’s just a matter of time until WiMAX versions appear, and until smart cell hand-off that allows you to retain the same IP address as you move from cell site to cell site becomes available.

When this happens I believe it will completely displace conventional broadcasting and many other mobile radio services. Instead of having a gazillion different radio services, technologies, and modulation schemes, you’ll have one ultra-wideband data transmission scheme and all of these various services carried over that wireless extension of the Internet.

When this happens, satellite broadcasting and conventional terrestrial broadcasting will become largely obsolete. Satellite may still enjoy some audience in areas where population density is too low to justify data cell sites, whether WiMAX or whatever future protocol they might use. They will become obsolete because IP broadcasting is so much more cost effective and at the same time offers much more consumer choice.

That’s my prediction for where broadcasting is ultimately headed: from a situation in which fixed terrestrial stations use tremendous amounts of energy to offer programming to a limited geographical coverage area, and the people in that area have limited choices, to one where “Net Broadcasting” is broadcasting and wireless internet fills the gap for portable and mobile applications.

One artificial roadblock that was thrown at Net broadcasters is the recent increase in royalty rates, which, if left unchecked, will pretty much kill Net broadcasters in the United States. Save Net Radio is an organization fighting this, and a bill has recently been introduced, the Internet Radio Equality Act, which would set the royalty fees internet broadcasters pay to the same rates that satellite radio broadcasters pay, putting them on an equal footing. I suggest writing to your Senators and asking them to support S. 1353 (the Senate version of the bill) and writing your Representatives and asking them to support H.R. 2060 (the House version). Also visit Save Net Radio’s website and consider contributing to their effort.

The potential for a great broadcasting future exists if power can be wrested from the megacorporate interests that now control the industry.

2 thoughts on “Broadcasting and the Internet”

  1. Thanks for sharing your thoughts about communications development. I agree with you on most points, however there is a potentially huge reliability problem in the “one connection for all” approach: it is a so-called Single Point of Failure. If it fails, every kind of communication fails with it. Currently, if one radio station fails, you still have many of them available. If one receiver fails, you can get a cheap spare and still hear something. But if your only WiMAX link fails… it’s The End.

    (I found your blog accidentally via Google; nice that it gave me this link.)

  2. I think the error in your assumption is that there need be only one WiMAX link. Like WiFi before it, WiMAX can support multiple transmitters sharing the same spectrum, and no doubt there will be many.

    For most end users, I suspect the criticality of the application won’t be sufficient to justify paying for more than one carrier, but as with cell phones, in most places there will be more than one available, and no doubt a number of community networks will spring up based upon the protocol.

    So for those for whom having more than one link available is critical, I doubt it will be an issue down the road.
