by Riley Flaherty
Net Neutrality, Title II, “free and open” internet. These political buzzwords have been consuming the public narrative in the wake of the FCC’s decision to reclassify the internet under new regulatory guidelines. But what exactly is Title II? And what is Net Neutrality for that matter? From a technical standpoint, it’s quite simple:
Net Neutrality is the concept that all carriers should be “neutral” as to what types of traffic they carry, invoking a technical rule known as the “end-to-end principle” – in short, an ISP should not need to treat one type of “packet” of data differently from any other type of packet in order to deliver it from one end (the content provider) to the other end (the customer).
Some members of Congress had previously argued for legislation to accomplish the goals of Net Neutrality but failed to pass any proposed bills. The FCC is now attempting to use a pre-existing legal provision known as Title II to accomplish those goals without legislation. So what is Title II?
Title II is a part of the Communications Act of 1934, the same set of rules that traditional phone carriers are regulated under. It deems them “common carriers”, theoretically prohibiting them from giving preferential treatment to any specific content that passes over their networks; it also includes a number of other provisions relating to eminent domain rights. The FCC has voted to regulate internet service providers under this set of rules, giving itself the (alleged) authority to make various decisions about how internet content is delivered.
All of this sounds fine and dandy in theory – why should ISPs need to treat certain types of content differently than others? Well, as usual, the situation is more complex than regulators would have you believe.
Title II was implemented in 1934 under Franklin Roosevelt as part of the act chartering the FCC, mostly to regulate telephone carriers. Telephone providers are likewise prohibited under Title II from giving preferential treatment to certain content over other content, but it turns out there are fundamental differences between traditional analog telephone technology and that of modern high-speed internet. The most obvious difference is that telephone networks carry only one type of data (voice), while the internet carries a wide variety of kinds (voice, web browsing, email, streaming video and many others). Although supporters of Net Neutrality and FCC Chairman Tom Wheeler are correct that carriers are capable of transporting this data without regard to what type of content it is, there are often more effective ways to deliver content to the customer that take special notice of what type of data is being transported.
We call this concept Quality-of-Service, or QoS. In short, QoS attempts to improve the user experience by violating the “end-to-end” principle of Net Neutrality and prioritizing some data over other data. Yes, that’s right; data is discriminated against in this widely accepted model. Most supporters of Net Neutrality have not actually looked past this juncture at the technical aspects of the issue, and simply assume that this discriminatory practice exists because [arbitrary large corporate ISPs] are a gang of rich, evil, conniving monsters who want to count stacks of money at the expense of some poor man trying to watch Arrested Development on Netflix.
As is often the case with scenarios such as this, supporters of a new regulation have been duped both by the government and corporate lobbyists – the “packet discrimination” is largely there in order to give the end-user a more effective service.
In modern networking, voice, email, web browsing, and every other type of content are “encapsulated” into uniform packets that travel across the network together using a “pipe” of shared capacity, known as bandwidth. Only a given amount of data can travel across the network at a given time because there is a fixed amount of bandwidth; any packets beyond that capacity must be dropped. However, some types of content are far more sensitive to dropped packets than others. Without any QoS, all services generally try to use as much bandwidth as is available.
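The idea of a fixed-capacity pipe can be sketched in a few lines. This is a minimal illustration (the class name and numbers are made up, not any real router's implementation) of a "drop-tail" buffer: once the link's buffer is full, arriving packets are simply discarded.

```python
from collections import deque

# Illustrative drop-tail queue: a link can only buffer so many packets;
# anything arriving while the buffer is full is dropped on the floor.
class DropTailQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = deque()
        self.dropped = 0

    def enqueue(self, packet):
        if len(self.buffer) >= self.capacity:
            self.dropped += 1   # link saturated: this packet is lost
            return False
        self.buffer.append(packet)
        return True

    def dequeue(self):
        # Deliver the oldest buffered packet, if any.
        return self.buffer.popleft() if self.buffer else None

# Ten packets arrive at a link that can only hold six at once.
link = DropTailQueue(capacity=6)
for i in range(10):
    link.enqueue(f"pkt-{i}")
print(link.dropped)  # 4 packets never make it through
```

Every service sharing the pipe competes for those six slots; which packets get dropped is what QoS policies decide.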
Activities such as viewing a webpage or reading your email are barely affected by minor packet loss; the dropped packets are resent at high speed until the content successfully loads, usually without the end-user noticing in the slightest, and these services are thus very resistant to insufficient bandwidth. Voice-over-IP, a technology that transfers telephone calls over the internet (think Vonage, Skype, etc.), however, is very sensitive to dropped packets. Since it is a live telephone call going over the internet, audio needs to be transported in nearly real time; even small amounts of packet loss make the call unintelligible due to choppiness. Because of this, ISPs use Quality-of-Service technology to limit the amount of bandwidth consumed by things like web, email and yes, even Netflix – in order to provide clarity for phone calls and other sensitive services.
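The prioritization described above can be sketched as a simple priority scheduler. This is a toy model, not any ISP's actual QoS configuration; the traffic classes and priority numbers are invented for illustration. Latency-sensitive VoIP packets always go out before bulk web and video traffic.

```python
import heapq

# Illustrative priority map: lower number = transmitted first.
PRIORITY = {"voip": 0, "web": 1, "video": 2}

class QosScheduler:
    def __init__(self):
        self.queue = []
        self.seq = 0  # tie-breaker preserves arrival order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self.queue, (PRIORITY[traffic_class], self.seq, packet))
        self.seq += 1

    def transmit_next(self):
        # Pop the highest-priority packet waiting in the queue.
        if not self.queue:
            return None
        _, _, packet = heapq.heappop(self.queue)
        return packet

sched = QosScheduler()
sched.enqueue("video", "netflix-frame-1")
sched.enqueue("web", "http-get")
sched.enqueue("voip", "call-audio-1")
print(sched.transmit_next())  # "call-audio-1" goes out first despite arriving last
```

Under congestion, the video frame waits (a moment of buffering) so the phone call stays intelligible – exactly the trade-off a strict neutrality rule would forbid.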
But it gets worse. Even more concerning than these fundamental issues with the concept behind Net Neutrality is the question of how compliance with the law is going to be measured by the government. Many have theorized that the FCC would use packet loss (i.e., the number of dropped packets) as a way to measure network performance in order to determine whether content was or was not being throttled.
As Professor Steven Bellovin wrote on circleid.com, “Using [the rate of packet loss] as a net neutrality metric is not just a bad idea, it’s truly horrific”. Websites that can send data faster than the end-user can receive it make up for the difference in speed by “dropping” packets of data, which are then resent at a slower rate until the speeds match, quickly “equalizing” the two sides of the connection. All of this happens in a matter of milliseconds. Because the content provider uses packet loss as the signal for matching speed with the internet service provider, dropping packets is not only helpful, it is necessary for the internet to function as we know it. In the early days of the internet (actually its predecessor, ARPAnet), before dropped packets were used to detect capacity, the network nearly faded out of existence due to congestion issues.
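The speed-matching Bellovin describes can be sketched with the additive-increase/multiplicative-decrease rule behind classic TCP congestion control: the sender speeds up steadily until a drop occurs, then halves its rate and climbs again. The constants and the loss threshold below are illustrative, not taken from any real TCP stack.

```python
# Additive-increase/multiplicative-decrease (AIMD), a rough sketch of how
# a sender reads packet loss as a "slow down" signal.
def adjust_rate(window, loss_detected):
    if loss_detected:
        return max(1.0, window / 2)  # a drop means back off: halve the window
    return window + 1.0              # no loss: probe for more bandwidth

window = 1.0
history = []
# Pretend the link starts dropping packets whenever the window reaches 8.
for _ in range(12):
    window = adjust_rate(window, loss_detected=(window >= 8))
    history.append(window)
print(history)  # rises to 8, halves to 4, rises again: the TCP "sawtooth"
```

A regulator counting dropped packets would be penalizing the very mechanism that keeps senders and receivers in sync.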
The list of negative side effects goes on and on, but the last one I will mention is the adverse effect on small internet service providers. Small ISPs, especially ones that operate in very rural areas where limited internet options are available, use QoS to slow down Netflix and torrent downloads. This may sound like a negative, but much like other forms of “price gouging”, it serves to conserve scarce resources.
Rural ISPs can often only get very small amounts of bandwidth to distribute amongst many customers, meaning that if the customers want a usable internet connection, Netflix and other high-bandwidth connections will be “throttled”, or slowed down. As “restrictive” as this may sound, it is not the ability to watch Game of Thrones on Netflix or pirate Kim Kardashian’s sex tapes that connects otherwise uninformed people to a vast expanse of information and opinions. Users who otherwise have very slow or nonexistent internet are more than happy to have a connection at all, even if it means a few extra minutes of buffer time on Netflix. Instead of ensuring that people’s data isn’t “messed with”, these rules will ensure that multitudes of people in rural areas of the US no longer have access to the internet, and that the small ISPs which served them are put out of business, leaving those customers to wait for years with no internet in the hope that a large corporate carrier starts servicing the area.
Despite the multitude of technical issues that would ensue from Net Neutrality, the most frightening part of the entire debate is that the problem which is being “solved” does not exist. No major ISP has ever actively sought to block content of certain providers on any sort of large scale. The Government, large corporate lobbies, and poorly written internet news sites like Gawker have somehow managed to dupe a significant group of people into allowing the government control over their internet.
So who are these people that now claim the power to regulate the internet? The United States Government. The same United States Government that millions were outraged to learn had been collecting vast stores of e-mail and phone records without due process of law, claiming that e-mail is not protected by the Fourth Amendment. Proponents of Net Neutrality say that the scope of these regulations would not allow the government to see any sensitive information about your web browsing. Oh, the naivety…
How, then, can the government confirm that your ISP is complying with the law by not throttling access to certain websites? Would this not require them to know what websites you had visited? Furthermore, how will an ISP be able to prove to the government that it is not blocking or throttling access to content without giving the government access to its network? And what becomes of these records, which contain a clear history of what websites you visited, what servers you accessed, what times of day you use the internet, how much bandwidth you use, and what type of content you view (legal or not)? It’s also important to note that many types of network quality testing run “captures” of data, such as a recording of an internet phone call, in order to prove that the data is being properly delivered.
In an ironic twist, some of the same people who were horrified by the concept of the NSA collecting their data didn’t think to ask questions when the FCC stepped in to save the day.
So, what noticeable effects will these regulations likely have on the end-user?
Well, from a technical perspective, the average internet user might experience:
- Unusable and Unintelligible Voice over IP (Goodbye Vonage)
- Slow video services and long buffer times
- Congestion and random packet loss
- Increased prices for bandwidth
- The need to purchase more bandwidth
From a legal standpoint:
- Taxes – Title II services often come with a number of associated taxes and fees levied by state governments, along with charges like the Universal Service Fund.
- Eminent domain – large ISPs would now be granted the right to dig up your lawn to bury cables without your permission.
- Increased government surveillance
- Increased government spending (They’ll need some network technicians to analyze all that data!)
- Fewer options for internet service
- Chilling Effect on small internet service providers
- Reduced expansion of small ISPs into rural areas
- Less access to the internet in remote areas
At the end of the day, the negative outcomes that will likely surround Net Neutrality are a combination of unintended consequences resulting from poorly crafted regulations and a classic case of the Government stepping in to solve an imaginary problem in order to garner more control over, and surveillance of, the nation. The internet in the United States is some of the best in the world, with 92% of homes having quality broadband connections, nearly twice the rate in Europe. By allowing the government to intervene in the inner workings of the internet, we endanger the functionality and the privacy of the last truly free medium of exchange that exists in the surveillance era.