
The Lies of Net Neutrality

(Update: taxes, too; see the update below.) I was taking a few minutes to check my newsfeed a few mornings ago, enjoying the final days of autumn before the polar blast that hit Colorado this week, when I started reading them: the polarized, ignorant perspectives on the misnomer of "Net Neutrality."

There are apparently ignorant politicians making comments both pro and con. They oversimplify the issue and make it seem obvious to their own constituents that their view is right. The fact is, however, that neither "side" is right about this issue. As is typical when politicians attempt to stick their controlling efforts into science and technology, they damage the good with the bad and crush the possible benefits to the United States.

So, if you really want to understand the issues and the way that this should play out, read on. If you'd rather just pick a side and go into pitched battle, feel free. Just leave me out of it.

How the Internet Works

If you're an IP engineer, you can skip this part. If not, take a moment to understand some of the basics of how the Internet actually moves all that data around. It'll help you understand the rest of what matters here: how the network gets managed, and what should be allowed and what shouldn't.

Although it may seem like the web page, video, or music you're downloading arrives as one continuous stream of ones and zeros, nothing could be further from the truth. In fact, every last bit of data flowing across the Internet is broken into chunks called packets. These packets are typically a bit less than 1,500 bytes long, and it takes many of them to constitute a single uni-directional communication. These packets do not necessarily take the same path from source to destination; each packet is routed individually by the hardware and software that makes up the Internet. As a result, packets may arrive out of order, some may be lost, and others may be corrupted. The receiving system works with the sender to reconstitute the original data, and you often see this as buffering: the receiver collects enough of the reconstituted data ahead of playback that you don't notice any interruptions.
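To make that concrete, here is a minimal sketch in Python (my own illustration, not any real protocol's wire format): data is chopped into numbered chunks, the network may deliver those chunks in any order, and the receiver sorts them back together.

```python
import random

MTU_PAYLOAD = 1400  # a bit under the ~1500-byte Ethernet limit, leaving room for headers

def packetize(data: bytes) -> list[tuple[int, bytes]]:
    """Split a byte stream into (sequence_number, payload) packets."""
    return [
        (seq, data[offset:offset + MTU_PAYLOAD])
        for seq, offset in enumerate(range(0, len(data), MTU_PAYLOAD))
    ]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Put packets back in order by sequence number, as the receiver must."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"x" * 10_000   # stands in for a web page, song, or video segment
packets = packetize(message)
random.shuffle(packets)   # the network may deliver packets in any order
assert reassemble(packets) == message
```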

It is critically important to understand this aspect of the Internet before considering how you want to govern it and what rules you insist on creating. Let me explain why next:

How Different Data Needs Different Internets

There are a number of types of data used commonly on the Internet, and hundreds more that most people never experience. Focusing on just the common ones, consider these:

  1. Interactive voice and video. These data require near real-time delivery and controlled streaming. Large gaps between packets received will cause freezing and other issues with the interaction, effectively making the communication unusable. We have all had voice and video over the Internet freeze or fail, and this is why.
  2. Streaming voice and video. These data need the same controlled streaming as interactive voice and video, but they can be buffered to make up for some of the vagaries of an unreliable network. As long as the stream of packets continues to arrive at a predictable rate, the results are good, since the content isn't interactive. However, if the packets are throttled, arrive with errors, or get dropped, the experience is poor. Most of us have seen Netflix or iTunes drop to a lower quality due to poor network performance.
  3. Bulk data. These data have no real-time delivery constraints, and include most web data, email, downloads, and the like. Individual packets can have issues; the ultimate goal is simply to get all of them to the destination within a reasonable timeframe so the file is available for use, whether it's rendered on a browser screen, played on an MP3 player, synced to a Dropbox folder, or read on the screen.
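To illustrate how a device might tell these classes apart, here is a hypothetical port-based classifier in Python. The port-to-class table is purely illustrative; real networks rely on DSCP markings, flow heuristics, or deeper inspection rather than a fixed table like this.

```python
# Traffic classes from the list above.
INTERACTIVE, STREAMING, BULK = 1, 2, 3

# Hypothetical mapping; real classifiers are far more sophisticated.
PORT_CLASSES = {
    5060: INTERACTIVE,  # SIP signaling for voice/video calls
    3478: INTERACTIVE,  # STUN, common in interactive call setup
    1935: STREAMING,    # RTMP streaming
    443:  BULK,         # web pages, downloads
    25:   BULK,         # email
}

def classify(dst_port: int) -> int:
    """Map a packet's destination port to one of the three classes."""
    return PORT_CLASSES.get(dst_port, BULK)  # treat unknown traffic as bulk
```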

You should now see that these three types of data place different requirements on the network, and should be treated differently when bandwidth is at a premium. And therein lies the issue with so-called Network Neutrality. Data isn't neutral, so a neutral network will actually create a worse experience for its users than a network that is well engineered to prefer the right kinds of data.

This means that networks should be engineered to prefer data packets in the order listed above, interleaving lower-class packets with higher-class packets when bandwidth allows. So, for example, if I am on an HD video call and it's using 90% of my available bandwidth, my network should use only the remaining 10% to deliver type 2 and type 3 traffic. If it uses any more, my interactive video experience will suffer. In other words, the network should prefer (there's that word that is so vilified in these discussions) the interactive video packets over the bulk and non-interactive video packets.
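A toy version of that preference, sketched in Python, is a strict-priority scheduler: whenever the link is free, it drains class 1 before class 2 before class 3. Real routers typically use weighted variants so bulk traffic is never starved entirely; this shows only the core idea.

```python
from collections import deque

class PriorityScheduler:
    """Strict-priority dequeue: interactive before streaming before bulk."""

    def __init__(self) -> None:
        self.queues = {1: deque(), 2: deque(), 3: deque()}

    def enqueue(self, traffic_class: int, packet: bytes) -> None:
        self.queues[traffic_class].append(packet)

    def dequeue(self) -> bytes | None:
        # Always serve the highest class that has packets waiting.
        for traffic_class in (1, 2, 3):
            if self.queues[traffic_class]:
                return self.queues[traffic_class].popleft()
        return None  # the link is idle
```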

Impact on Net Neutrality Planning

Please note that nothing here indicates a desire to see "pay to play" arrangements in the industry. However, it is common for providers to charge for access to their bandwidth. When I want greater bandwidth, I have to pay more. If I want guaranteed bandwidth, I'll pay more than I would for best-effort bandwidth of the same amount. What I mean is that a 50Mbps consumer download speed is usually best effort: you get it when the overall network is relatively uncongested. If I want 50Mbps regardless of the state of the rest of the network, I need to buy dedicated bandwidth, which costs considerably more (and is typically sold only to businesses).

If I sell data delivery to my customers, and that delivery requires a certain bandwidth, I typically buy that bandwidth from two or more Internet Service Providers (ISPs). And I have to pay for the bandwidth as either best effort or dedicated. This is the way packet delivery has worked over the Internet and between content providers and their ISPs since the Internet went commercial in the early 1990s. This arrangement is appropriate, it seems to me.

Furthermore, ISPs should not be restricted from shaping data in order to deliver better service to customers, as I outlined in the story of the three data types. They should be able to prefer interactive packets over streaming packets, and both of those over bulk packets.
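One classic shaping mechanism, sketched here purely as an illustration (not a claim about what any particular ISP deploys), is the token bucket: it enforces a sustained rate while permitting bounded bursts.

```python
import time

class TokenBucket:
    """Token-bucket shaper: a sustained byte rate with a bounded burst."""

    def __init__(self, rate_bps: float, burst_bytes: float) -> None:
        self.rate = rate_bps / 8        # refill rate, in bytes per second
        self.capacity = burst_bytes     # maximum burst size, in bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True     # send the packet now
        return False        # queue or drop it until tokens accumulate
```

A shaper like this can be run per traffic class, which is exactly the engineering flexibility at stake in these debates.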

This is not to say that content providers should be held hostage based on the type of data they deliver. That should be up to the consumer: content providers should simply purchase dedicated bandwidth and be able to use all of it, filling it with any of the types of traffic their provider will deliver. Consumers should receive the service to which they subscribe from any provider of that service, delivered at the best quality possible given appropriate packet preferences. But providers need to be able to shape traffic, or they will be forced to over-provision, passing the bill along to consumers.

The United States Compared with The Rest of the World

All of this said, do not buy into the myth that the US has the best Internet access in the world. In fact, it's abysmal. Wikipedia has an article summarizing a damning Akamai survey of Internet capabilities worldwide. South Korea (the leader) has services more than 100x faster than the average speed in the US, for $20/month. So, providers in the US need to do a much better job of delivering bandwidth for the fees that consumers pay.

What does this mean for so-called Net Neutrality? You decide. Now you understand some of the engineering complexities underneath the typical political bluster. At least you can decide if any of the politicians and pundits have a clue what they're talking about.

Update: Taxes, Too

Today, FCC Commissioner Michael O'Rielly said, "Consumers of these services would face an immediate increase in their Internet bills," during a seminar held by the non-partisan Free State Foundation, according to this article. This is an example of the unforeseen repercussions of choices that drag the Internet into a government maze of regulations, fees, taxes, and legalities. Such is the case with the simplistic idea of "net neutrality," which doesn't take into account the implications of regulating the Internet as a telecommunications service.

Why "Net Neutrality" is Wrong

In the late 1990s, I worked with an amazing group of brilliant network engineers building the InteropNet for the Interop trade shows around the world. We were always pushing the envelope, introducing next-generation technology before it was really ready. During a number of those years, we delivered real-time video traffic over the network, often using multicast methods that are still not widely used. We were always a little ahead of our time.

Before I explain the details, allow me to mention one concept that is critical to understanding everything about the Internet: all transmissions across the Internet are made of packets. This means that every file or stream crossing the Internet is chopped into little 1,400-byte chunks, each of which traverses the network independently of all the others. There is literally no relationship between the packets on the network. They are only reunited at the receiving end, after they are off the network and in the device that will interpret them and deliver the result (like a video playback, email, file transfer, or any other end-to-end application).

But over the network, those packets are 100% independent of each other.

Because they are independent, they are subject to all kinds of issues. Sometimes, packets are dropped because a device is overloaded. Since packets can take different paths, they can arrive out of order or with varying time between them (called "jitter"). For many types of data transfer (like email, files, and even instant messaging), most of these things don't matter at all.
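For a concrete sense of how jitter is measured, here is the interarrival jitter estimator defined in RFC 3550 (the RTP specification), sketched in Python:

```python
def update_jitter(jitter: float, transit_prev: float, transit_now: float) -> float:
    """One step of the RFC 3550 interarrival jitter estimator.

    Each transit value is (arrival_time - send_timestamp) for a packet;
    the estimate is smoothed with a gain of 1/16, per the RFC.
    """
    d = abs(transit_now - transit_prev)
    return jitter + (d - jitter) / 16.0
```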

However, some traffic is very sensitive to these issues, especially time-sensitive audio and video (used for applications like video calls, voice calls, and live broadcasts).

Back to Interop and the InteropNet... Delivering video, even over the high-speed networks we were using, meant recognizing the different requirements of the traffic types and using network resources in ways that accommodated those requirements. During those years, the IETF (Internet Engineering Task Force, the volunteer organization responsible for the standards that allow the Internet to function) defined the Differentiated Services (DiffServ) standards to provide network performance appropriate to the type of service required.

This is an essential concept! Networks must be able to differentiate among all of those independent packets flying around the network.
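DiffServ works by marking each packet with a code point that routers along the path can use to apply the appropriate treatment. As a small illustration (the address and port below are placeholders, and IP_TOS is only exposed on some platforms), here is how an application might mark outbound voice packets with the Expedited Forwarding code point in Python:

```python
import socket

DSCP_EF = 46               # Expedited Forwarding: low-latency traffic like voice
TOS_VALUE = DSCP_EF << 2   # DSCP occupies the top six bits of the old TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Datagrams on this socket now carry the EF marking; whether routers honor
# it is entirely up to each network's DiffServ policy.
sock.sendto(b"voice payload", ("198.51.100.7", 5004))  # placeholder endpoint
```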

The New York Times has been reporting on both the FCC's comments about so-called "Net Neutrality" and the rumored Google/Verizon agreement on network usage. The typical idiotic political conversation has ensued, of course.

The entire idea of "an open Internet" is foolish at best and dishonest political posturing at worst. In this situation, it's actually both. Besides, "Net Neutrality" is not possible! Not only that, it's not even desirable.

Bandwidth costs money. Equipment costs money. More bandwidth costs more. Differentiated services also cost more. We all want providers to offer them so that we can have live video, reliable voice-over-IP, and additional services that we haven't even imagined yet.

The conversation, then, isn't about "neutrality," but rather about universal access to differentiated services... at an appropriate cost that will be determined by the market if we just allow it to do so. After all, nobody wins by denying access, and in a free market, those who deny it will lose business.

There is one group who benefits: the idiot politicians who want control.

The entire focus is wrong. Typical of the politicians playing at being engineers. It just doesn't work.

Update: The Wall Street Journal ran a bit more detail on the Google/Verizon agreement today. The comments from the so-called "Free Internet" speakers are very telling: they don't understand how the Internet actually works.