Hard-to-comprehend forecasts from Cisco’s Visual Networking Index:
- By 2015, there will be nearly three billion Internet users
- By 2015, there will be nearly 15 billion global fixed, mobile personal device, and machine-to-machine network connections
- By 2015, the world will reach three trillion Internet video minutes per month–one million Internet video minutes every second
CNNMoney put Cisco’s forecasts in perspective:
- The Internet’s “incremental, one-year growth between 2014 and 2015 will be equal to all the Internet traffic recorded worldwide last year”
- “Four years from now, the Internet’s traffic volume will be so large that every five minutes it will be the equivalent of downloading every movie ever made”
- “In 2015, monthly Internet traffic will reach the equivalent of 20 billion DVDs, 19 trillion MP3s or 500 quadrillion text messages”
- 1 million Internet video minutes per second is “the equivalent of 674 consecutive days of viewing”
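The monthly and per-second figures are consistent with each other; here is a quick sanity check in Python, using the post's rounded numbers (so the results are approximate, not Cisco's own calculations):

```python
# Sanity check of the Cisco/CNNMoney video figures quoted above,
# using the rounded numbers from the post.

video_minutes_per_month = 3e12           # three trillion minutes per month
seconds_per_month = 30 * 24 * 3600       # 2,592,000 seconds in a 30-day month

minutes_per_second = video_minutes_per_month / seconds_per_month
print(f"{minutes_per_second:,.0f}")      # ~1,157,407 -> "one million minutes every second"

# One second's worth of video, watched end to end:
days_of_viewing = 1_000_000 / (60 * 24)
print(f"{days_of_viewing:.0f}")          # ~694 days; CNNMoney's "674 consecutive days"
                                         # suggests it started from an unrounded
                                         # per-second figure just under one million
```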
CNNMoney notes that Cisco’s annual forecast “has historically been accurate to within a 5% to 10% deviation — usually on the conservative side.”
My forecast: the data flood will not make us smarter.
*Or, Why an Information Technology Degree Is a Smart Investment
For 24 hours beginning at 0:00 UTC on Wednesday June 8–7 pm today Boston time–there will be a test in which IPv6 will run alongside IPv4. IPv6 is the forthcoming Internet address protocol that will replace IPv4, which is running out of IP addresses. (The last batch of available addresses was auctioned earlier this year.) IPv6 uses 128-bit addresses, supporting "2^128 (approximately 340 undecillion, or 3.4×10^38) addresses." (According to Wikipedia's Names of Large Numbers, an undecillion is more than a decillion but less than a duodecillion, two other concepts that also mean nothing to me. An undecillion is what this lawyer would describe as 24 orders of magnitude larger than a trillion. A Cisco spokesperson said IPv6 will allow for "50 thousand trillion trillion addresses per person" which, if accurate, is about 49.99 thousand trillion trillion more addresses than I expect I'll need.) It is estimated that the test will cause problems for about 0.05% of Internet users, which this article notes "works out to something like 150,000 people in North America alone, and more than a million worldwide." The two IP address systems can run in parallel but, as the linked article also notes, they are not otherwise compatible. A look at the linked article's examples of the respective protocols' addresses shows why:
- IPv4 address: 192.168.5.255
- IPv6 address: 2001:db8:1f70::999:de8:7648:6e8
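The difference is more than cosmetic. A short sketch using Python's standard-library ipaddress module shows the two address sizes and the resulting address spaces (the IPv6 address is the article's example written with "::" so that it forms a complete eight-group address; the 7-billion world population is my assumption for the per-person arithmetic):

```python
import ipaddress

# Parse the two example addresses from the article.
v4 = ipaddress.ip_address("192.168.5.255")
v6 = ipaddress.ip_address("2001:db8:1f70::999:de8:7648:6e8")

print(v4.version, v4.max_prefixlen)   # 4 32  -> IPv4 addresses are 32 bits
print(v6.version, v6.max_prefixlen)   # 6 128 -> IPv6 addresses are 128 bits

# The resulting address spaces:
print(2**32)                          # 4,294,967,296 IPv4 addresses
print(f"{2**128:.2e}")                # ~3.40e+38 IPv6 addresses ("340 undecillion")

# Cisco's "50 thousand trillion trillion addresses per person,"
# assuming a world population of roughly 7 billion:
print(f"{2**128 / 7e9:.2e}")          # ~4.86e+28, i.e. about 5 x 10^28
```

The incompatible address sizes are why the two protocols can run side by side but cannot talk to each other directly.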
You can run the test at this site if you’d like to know how your computer will deal with IPv6. It told me:
The more complete test is more disconcerting. My computer failed all ten tests “for your IPv6 stability and readiness, when publishers are forced to go IPv6 only . . . Your DNS server (possibly run by your ISP) appears to have no access to the IPv6 Internet, or is not configured to use it. This may in the future restrict your ability to reach IPv6-only sites.” I expect Verizon FIOS will catch the IPv6 wave when it rolls to shore.
but not Eastern European women scavenging with shovels. The Wall Street Journal reported that recently a 75-year-old woman was digging for scrap metal in the Georgian village of Ksani, about 37 miles from Tbilisi, when she struck the "international fiber-optic backbone cable that connects much of the southern Caucasus to Europe," shutting down Armenian Internet access for about 12 hours and disrupting service in Azerbaijan and Georgia. The cable runs underground along a railroad line. The woman was arrested and faces up to a year in prison, although the Journal does not report what police charged her with. Scavenging without a license?
How deeply buried is this critical cable? Six inches? A foot? Either this 75-year-old woman wields a vigorous spade or the cable is laid alarmingly close to the surface. I hope terrorists don't have access to shovels. They could bring the Caucasus to its knees.
Security in 2020 is a fascinating, provocative post from security expert Bruce Schneier's latest newsletter. He briefly looks at the current focus of IT security (each concept he discusses is captured in what he acknowledges are invented "ugly" words): deperimeterization, the "dissolution of the strict boundaries between the internal and external network"; consumerization, where "consumers get the cool new gadgets first, and demand to do their work on them"; and decentralization, cloud computing. Then he projects developing trends: deconcentration, in which the "general-purpose computer is dying and being replaced by special-purpose devices"; decustomerization, where "we get more of our IT functionality without any business relationship"; and depersonization, "computing that removes the user, either partially or entirely." Get past the IT-professional jargon. Each term nails a distinct trend.
Discussing the delivery of IT services without fee-based relationships he says
We’re not Google’s customers; we’re Google’s product that they sell to their customers. It’s a three-way relationship: us, the IT service provider, and the advertiser or data buyer. And as these noncustomer IT relationships proliferate, we’ll see more IT companies treating us as products. If I buy a Dell computer, then I’m obviously a Dell customer; but if I get a Dell computer for free in exchange for access to my life, it’s much less obvious whom I’m entering a business relationship with. Facebook’s continual ratcheting down of user privacy in order to satisfy its actual customers — the advertisers — and enhance its revenue is just a hint of what’s to come.
With respect to “computing that removes the user”–things talking to things–he says
The “Internet of things” won’t need you to communicate. The smart appliances in your smart home will talk directly to the power company. Your smart car will talk to road sensors and, eventually, other cars . . . The ramifications of this are hard to imagine . . . But certainly smart objects will be talking about you, and you probably won’t have much control over what they’re saying.
One old trend: deperimeterization. Two current trends: consumerization and decentralization. Three future trends: deconcentration, decustomerization, and depersonization. That’s IT in 2020 — it’s not under your control, it’s doing things without your knowledge and consent, and it’s not necessarily acting in your best interests.
Worth reading for anyone interested in how technology shapes our lives. Especially Internet law students.
Tim Berners-Lee–the guy who invented the World Wide Web–wrote the best explanation of why net neutrality and open source are important and closed systems like Facebook and iTunes are bad for the future of the Internet: Long Live the Web: A Call for Continued Open Standards and Neutrality, Scientific American Magazine, December 2010. These two paragraphs from the article's introduction summarize Berners-Lee's thesis:
The Web evolved into a powerful, ubiquitous tool because it was built on egalitarian principles and because thousands of individuals, universities and companies have worked, both independently and together as part of the World Wide Web Consortium, to expand its capabilities based on those principles.
The Web as we know it, however, is being threatened in different ways. Some of its most successful inhabitants have begun to chip away at its principles. Large social-networking sites are walling off information posted by their users from the rest of the Web. Wireless Internet providers are being tempted to slow traffic to sites with which they have not made deals. Governments—totalitarian and democratic alike—are monitoring people’s online habits, endangering important human rights.
It will be required reading in Internet law, it addresses important topics, and it's short. Why not read it now?
An op-ed piece in today’s New York Times notes the birth of the Internet’s first Request for Comments, the then-informal process for proposing ideas, big and small, about the Internet’s workings. We talk of how the original Internet’s open architecture eventually enabled and propelled its explosive growth; Stephen D. Crocker, the op-ed’s author, wrote R.F.C. 1. He explains what was meant by “rough consensus and running code”: “[e]veryone was welcome to propose ideas, and if enough people liked it and used it, the design became a standard.” After noting that they avoided patents and the desire for control that comes with financial incentives, he says “we always tried to design each new protocol to be both useful in its own right and a building block available to others . . . we deliberately exposed the internal architecture to make it easy for others to gain a foothold.” (emphasis original)
They couldn’t know how successful they would be.
Inspired in part by concerns raised by the Conficker worm the New York Times posed a question in Sunday’s Week in Review: Do We Need a New Internet? The issues are not new to anyone who has read Larry Lessig’s Code (either the original or Code 2.0) or Jonathan Zittrain’s The Future of the Internet–And How to Stop It, or anyone who has taken my Internet law course. The Internet was built to facilitate sharing research among scientists, academics, and defense researchers. It valued openness, decentralization, and ease of use over security. Then the world discovered this wonderful communications network and brought to it all of the best and all of the worst humans can offer. John Markoff wrote in the Times that
there is a growing belief among engineers and security experts that Internet security and privacy have become so maddeningly elusive that the only way to fix the problem is to start over. What a new Internet might look like is still widely debated, but one alternative would, in effect, create a “gated community” where users would give up their anonymity and certain freedoms in return for safety. Today that is already the case for many corporate and government Internet users. As a new and more secure network becomes widely adopted, the current Internet might end up as the bad neighborhood of cyberspace. You would enter at your own risk and keep an eye over your shoulder while you were there.
We just cannot stop ourselves from screwing up a good thing.
This article from cnet–Net neutrality: An American problem?–presents the views of three executives from Australian ISPs who argue that net neutrality is a problem of the typical U.S. ISP unlimited-use business model, not bandwidth. (The article defines net neutrality as opposition to the practice of ISPs to tier or establish priorities for content.) Their thesis is born of the ISP business model dictated by Australia’s “unique geography”: “[A]ll ISP’s in Australia . . . have got used to pay-as-you-go and have handed those pay-as-you-go principles on to their customers.” In other words, the more bandwidth an Australian Internet user consumes, the more she pays. It’s an interesting take, both for what it says and what it omits. The goal of those who advocate net neutrality in the U.S. as a matter of policy is not unlimited bandwidth for a fixed price. The goal is the perpetuation of an open Internet architecture–not for the entire Internet but somewhere, somehow–that continues the original Internet’s non-hierarchical, no-permission-required, everyone-is-a-publisher ethos.
Congressman Ed Markey, chairman of the House subcommittee on telecommunications and the Internet, this week introduced a bill titled The Internet Freedom Preservation Act that seeks to maintain the open architecture of the Internet. Net neutrality is a buzzword that means different things according to who wields it. My use is consistent with Markey’s. Net neutrality means keeping some portion of the architecture of the Internet–or some portion of the architecture of each layer of the Internet–open and free from discriminatory treatment of access and data. In other words, maintain that original architecture that allows anyone to get online, establish an Internet presence, establish connections with other networks, and publish and receive information without interference. To others, such as the US Telecom Association, net neutrality means government regulation of Internet architecture. They want the ability to treat some data–such as movies streamed from their servers to their paying customers–preferentially, which in turn means giving lower priority to other data. They argue that Internet architecture has always been market-driven and government should stay out of its design. One irony is that some early-Internet pioneers, for whom government regulation of anything network-related was anathema, support the goals of Markey’s bill. Another is that the original design of the Internet was research-driven, not market-driven, certainly not in any commercial sense of “market.” The Internet was created by the U.S. government to enable collaboration among military and academic researchers. Unrestricted data flow is in its DNA. It is disingenuous to argue that the market should continue to govern its design, as if the market was always the invisible hand shaping its development. It wasn’t.
The U.S. Department of Justice yesterday issued a press release describing its position on “net neutrality”–it’s against it–in response to an FCC Notice of Inquiry into broadband practices. The money quote:
[P]recluding broadband providers from charging content and application providers directly for faster or more reliable service “could shift the entire burden of implementing costly network expansions and improvements onto consumers.” If the average consumer is unwilling or unable to pay more for broadband Internet access, the result could be to reduce or delay critical network expansion and improvement.
The DOJ cited the “common and often efficient” practice of “differentiating service levels and pricing,” pointing to the U.S. Postal Service’s range of package delivery services and prices.
The problem for advocates of net neutrality is explaining why it is important, against a backdrop of pricing and service differentials in air travel, cable television access, HOV lanes, etc., that all Internet traffic be treated the same. I’ve discussed net neutrality many times in class and students often don’t understand the fuss. They say “I can take the Acela or the regular Amtrak train from Boston to New York; the Acela is faster and costs more. What’s the big deal about paying more to deliver or receive content more quickly/reliably?” Compounding their lack of comprehension is that after decades of taking the position that government should leave the Internet alone, net neutrality advocates want Congress to mandate that the Internet’s content- and price-neutral processing of data be fixed by law. They don’t understand why laissez-faire is now undesirable.
What gets lost is that the Internet became what it is precisely because its original architecture treats all information the same, whether it is the cure for cancer, Paris Hilton’s shopping list, or pictures of my sore toe. The Internet exploded into public consciousness and practical importance because anyone can connect to it without permission, publish content with little or no barrier (let’s leave China and Saudi Arabia out of this for the moment), and access everything that is available online on the same footing as everyone else. Differential service and pricing threaten to change the ground from which the Internet grew. When it costs $.41 to send a one-ounce letter by first-class mail, $4.60 to send it by priority mail, and $14.15 to send it by express mail, it’s a losing argument to oppose a tiered Internet because one might have to pay more to acquire downloads of Heroes from NBC. Net neutrality advocates must provide succinct and compelling policy reasons for its preservation.