Turns Out, The Internet Has Its Limits

By Wesley Fenlon

Network usage is outgrowing the expansion of our infrastructure, and one expert says that smarter data management is the solution.

The Internet, as depicted by South Park, runs off a gigantic old Linksys WRT54G router. When it breaks down, fixing it couldn't be simpler--you just unplug it and plug it back in. Fixing the real Internet won't be as easy. No, it's not broken--not yet. But Markus Hofmann, the head of Bell Labs Research, says we're facing a problem. In about five years, our network usage will exceed what our wires and over-the-air frequencies can support. The Internet will hit its limit.

Scientific American interviewed Hofmann about the limitations of the current network infrastructure and what we can do to improve upon it. "We know there are certain limits that Mother Nature gives us—only so much information you can transmit over certain communications channels," Hofmann said. "That phenomenon is called the nonlinear Shannon limit [named after former Bell Telephone Laboratories mathematician Claude Shannon], and it tells us how far we can push with today’s technologies. We are already very, very close to this limit, within a factor of two roughly. Put another way, based on our experiments in the lab, when we double the amount of network traffic we have today—something that could happen within the next four or five years—we will exceed the Shannon limit. That tells us there’s a fundamental roadblock here. There is no way we can stretch this limit, just as we cannot increase the speed of light. So we need to work with these limits and still find ways to continue the needed growth."
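For context, the limit Hofmann refers to builds on Claude Shannon's classical channel-capacity theorem, which bounds how much information any channel can carry given its bandwidth and noise. A rough sketch (the "nonlinear" version for fiber optics adds further penalties for signal distortion at high power, but the basic form is):

```latex
C = B \log_2\!\left(1 + \frac{S}{N}\right)
```

Here \(C\) is the maximum error-free data rate in bits per second, \(B\) is the channel bandwidth in hertz, and \(S/N\) is the signal-to-noise ratio. The key point for Hofmann's argument: once you're within a factor of two of \(C\), no amount of clever encoding buys much more headroom.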

Hofmann lays out a few options. There's the brute-force approach: laying more fiber optic cable. That would cost a fortune--fiber networks will continue to expand gradually, but Hofmann is talking about major additions, like new transatlantic cables. Alternatively, SDMA, or space-division multiple access, can make more efficient use of our cellular frequencies.

But neither offers a permanent solution; the way to go, he says, is to make the network smarter. Today virtually all data is treated equally--random bits and bytes transmitted as packets. Hofmann suggests tagging data that moves across the web as "video" or "text," for example, so routers can prioritize it accordingly. Of course, this idea raises privacy concerns, which Hofmann only partially addresses--it seems like tagged data would make it easier for hackers to target and peek into sensitive traffic.
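To make the tagging idea concrete, here's a toy sketch of a router that services tagged packets by priority. The tag names, priority values, and class names are hypothetical illustrations, not anything Hofmann or any real router vendor specifies:

```python
import heapq

# Hypothetical content tags mapped to priorities (lower = sent sooner).
# Real networks do something loosely similar with DiffServ code points.
PRIORITY = {"video": 0, "voice": 0, "text": 1, "bulk": 2}

class TaggedRouter:
    """Toy priority queue: delay-sensitive traffic jumps the line."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # tie-breaker preserves arrival order within a tag

    def enqueue(self, tag, payload):
        prio = PRIORITY.get(tag, 3)  # untagged traffic goes last
        heapq.heappush(self._queue, (prio, self._seq, payload))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._queue)[2]

router = TaggedRouter()
router.enqueue("bulk", "backup-chunk")
router.enqueue("video", "frame-001")
router.enqueue("text", "chat-msg")
print(router.dequeue())  # prints "frame-001" -- video goes out first
```

The trade-off the article hints at is visible even in this sketch: to prioritize, the router has to know (or guess) what each packet is, which is exactly the information a snooper would love to have.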

Interestingly, Hofmann also suggests more localization of data to cut down on network traffic: "we might move to a model where decisions are made about data before it is placed on the network. For example, if you have a security camera at an airport, you would program the camera or a small computer server controlling multiple cameras to perform facial recognition locally, based on a database stored in a camera or server."
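The camera example boils down to filtering at the edge: analyze locally, transmit only what matters. A minimal sketch, where the watchlist, frame format, and `recognize` function are placeholders for a real on-device recognition system:

```python
# Hypothetical local watchlist, stored at the camera or its local server
# rather than in a central data center.
WATCHLIST = {"face-42", "face-97"}

def recognize(frame):
    # Placeholder for an on-device facial-recognition model; here each
    # frame is just a dict carrying a precomputed face ID (or None).
    return frame.get("face_id")

def frames_to_upload(frames):
    """Return only the frames worth sending over the network."""
    return [f for f in frames if recognize(f) in WATCHLIST]

frames = [
    {"face_id": "face-13"},  # no match: stays local
    {"face_id": "face-42"},  # match: sent upstream
    {"face_id": None},       # empty frame: stays local
]
print(len(frames_to_upload(frames)))  # prints 1 -- one of three frames leaves the camera
```

In this sketch two-thirds of the traffic never touches the network, which is the whole appeal of Hofmann's suggestion.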

He says this doesn't mean an end to the cloud, but perhaps it foretells a necessary reversal of the trend toward centralized data. Or maybe we'll just brute-force our way to that everything-in-the-cloud future we've been promised. We've got five years to see what happens, one way or the other.