Blame Your Internet Latency on the Speed of Light

By Wesley Fenlon

The Internet provides near-instant communication--except when the speed of light or poor network infrastructure gets in the way.

We've become so accustomed to the Internet as an instantaneous form of communication that it's often interesting--and, of course, frustrating--when that instant speed breaks down. When it does, packets are often at fault. Because networks rely on sending a packet of data and then receiving a confirmation that the data reached its destination, delays can happen on both the sending and receiving ends. Maybe an important packet never arrives, or maybe a server hangs while it waits for our computer to confirm that the last packet arrived. When an instant messaging chat becomes oddly delayed or a video call lags and freezes, we blame bandwidth or packet loss, but sometimes a different issue is hampering our communication: the speed of light.
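To get a feel for what that round-trip delay actually looks like, here's a rough Python sketch that times a TCP handshake to a server. The host and port are just placeholders (a real tool like ping measures this more carefully), but the number it prints is the same send-and-wait-for-a-reply latency we're talking about.

    import socket
    import time

    def measure_rtt(host="example.com", port=80):
        """Time a TCP handshake to a host -- a rough stand-in for one
        packet's trip out and the acknowledgment's trip back."""
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # we only care about how long the connection took to establish
        return (time.perf_counter() - start) * 1000  # milliseconds

    if __name__ == "__main__":
        print(f"Round trip to the server: {measure_rtt():.1f} ms")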

Ars Technica posted a networking primer that puts a heavy emphasis on latency--the delay between sending a signal from your computer and getting a response back. And one of the big issues facing Internet-based communication is this: we can't make our data travel faster than the speed of light. The farther your voice, bundled up in a packet, has to travel, the longer it's going to take to get there. And when we use satellites for communication, that's a long, long trip.


"Communications satellites are in geostationary orbits, putting them about 35,786 kilometers above the equator," writes Ars Technica's Peter Bright. "Even if the satellite is directly overhead, a signal is going to have to travel 71,572 km—35,786 km up, 35,786 km down. If you're not on the equator, directly under the satellite, the distance is even greater. Even at light speed that's going to take 0.24 seconds; every message you send over the satellite link will arrive a quarter of a second later. The reply to the message will take another quarter of a second, for a total round trip time of half a second."

Bright points out that undersea cables, by comparison, are far shorter. The round trip for the US-Europe cable is only about 15,000 kilometers. Even though data doesn't travel quite as fast through a cable--light in optical fiber moves at roughly two-thirds of its vacuum speed--it's still fast enough to bring the latency down to under 100 ms.
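The same back-of-the-envelope check works for the cable, if we assume the common rule of thumb that light in fiber travels at roughly two-thirds of its vacuum speed:

    SPEED_OF_LIGHT_KM_S = 299_792
    FIBER_FRACTION = 2 / 3          # rough rule of thumb for light in glass
    ROUND_TRIP_KM = 15_000          # approximate US-Europe round trip from the article

    latency_s = ROUND_TRIP_KM / (SPEED_OF_LIGHT_KM_S * FIBER_FRACTION)
    print(f"Undersea cable round trip: {latency_s * 1000:.0f} ms")  # ~75 ms, comfortably under 100 ms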

Memory buffers can also be a problem, according to Bright. He writes that cheap RAM has caused the buffers in networking equipment like modems and routers to grow tremendously. That sounds like a good thing--we like having more RAM in our home PCs, after all--but oversized buffers in networking gear can cause problems of their own.

"[Your] DSL modem/router that joins the two networks might have several megabytes of buffer in it. Even a megabyte of buffer is a problem. Imagine you're uploading a 20MB video to YouTube, for example. A megabyte of buffer will fill in about eight milliseconds, because it's on the fast gigabit connection. But a megabyte of buffer will take eight seconds to actually upload to YouTube.

"If the only traffic you cared about was your YouTube connection, this wouldn't be a big deal. But it normally isn't. Normally you'll leave that tediously slow upload to churn away in one tab while continuing to look at cat pictures in another tab. Here's where the problem bites you: each request you send for a new cat picture will get in the same buffer, at the back. It will have to wait for the megabyte of traffic in front of it to be uploaded before it can finally get onto the Internet and retrieve the latest Maru. Which means it has to wait eight seconds."
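Bright's numbers check out with some simple arithmetic. Here's a small Python sketch of the same calculation; the 1 Mbps upstream rate isn't stated in the quote, but it's what the eight-second figure implies:

    BUFFER_BYTES = 1_000_000        # the one-megabyte buffer from the quote
    LAN_BITS_PER_S = 1_000_000_000  # gigabit link from the PC to the modem
    UPLOAD_BITS_PER_S = 1_000_000   # assumed 1 Mbps upstream, implied by the 8-second figure

    buffer_bits = BUFFER_BYTES * 8
    fill_time_s = buffer_bits / LAN_BITS_PER_S      # how fast the PC can stuff the buffer
    drain_time_s = buffer_bits / UPLOAD_BITS_PER_S  # how fast it actually leaves for the Internet

    print(f"Buffer fills in {fill_time_s * 1000:.0f} ms")  # ~8 ms
    print(f"Buffer drains in {drain_time_s:.0f} s")        # ~8 s
    # Any new request queued behind that full buffer waits the full drain time.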

This is a problem that should be solvable with smart traffic management, but it's interesting because it runs counter to how we'd normally think about hardware: more isn't always better. If you do want to learn more about how the Internet carts data around the world, read the rest of Ars Technica's post--it digs into TCP, traffic shaping algorithms, and how smarter protocols and new networking technologies could eventually ease the congestion of backed-up networks.