Dark Clouds on the Horizon: Cloud Service Latency

Cloud. Cheap, fast, dynamic, secure, and … slow? More and more commentators are raising the issue of latency and its impact on performance as cloud service growth surges across the world. Latency is complex and expensive: Amazon famously estimated that every 100 milliseconds of latency costs it 1% in sales. For the enterprise, the performance of software as a service in particular is paramount. So what do you need to consider when you are choosing a cloud service provider (CSP)?

One of the primary causes of latency is the multi-layered and frighteningly complex network path. It starts inside the enterprise itself, passes through gateway services to the Internet, then travels potentially thousands of miles through multiple providers, exchanges, and telecommunications companies before arriving at the CSP data centre, where traffic is ingested and pushed through a massive network and a series of load balancers to finally reach your virtual machine or piece of storage.
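To make that layering concrete, here is a toy latency budget that sums the segments a request crosses on its way to a CSP. The figures are illustrative assumptions for the sake of the sketch, not measurements from any real network:

```python
# Toy latency budget for a request travelling from an enterprise to a CSP.
# Every figure below is an illustrative assumption, not a measurement.
budget_ms = {
    "enterprise LAN and gateway": 2.0,
    "ISP access network": 8.0,
    "internet transit (multiple carriers)": 45.0,
    "CSP border and load balancers": 3.0,
    "virtual machine / storage service": 5.0,
}

one_way_ms = sum(budget_ms.values())
round_trip_ms = 2 * one_way_ms

print(f"Estimated one-way latency: {one_way_ms:.1f} ms")
print(f"Estimated round trip:      {round_trip_ms:.1f} ms")
```

The point of the exercise: the internet transit segment usually dominates, which is why the mitigations later in this article (closer regions, direct connections, smarter routing) all attack that middle segment.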

Add to that intricacy the data load. The growth in data is almost exponential, with one major network provider reporting an eightfold year-on-year increase in cloud data traffic. General traffic is expected to double year on year, though with the growth of new home entertainment options and the Internet of Things, that could be a very conservative figure.

Latency is also bounded by the laws of physics. The speed of light in fiber is about two thirds of its speed in a vacuum, which puts a full circuit of the globe over fiber at roughly 200 ms. You can reduce that latency by building direct, point-to-point fiber connections. In fact, latency is such an issue in the trading world that firms do exactly that, spending hundreds of millions to interconnect trading floors across entire oceans, just to shave off a few dozen milliseconds.
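The physics works out in a few lines. This sketch computes one-way fiber propagation delay from distance alone, ignoring routing, queuing, and equipment delays (which only add to it):

```python
# Propagation delay over fiber: light in glass travels at roughly 2/3 of
# its vacuum speed. This ignores routing, queuing, and equipment delays.
C_VACUUM_KM_PER_MS = 299_792.458 / 1000  # ~299.8 km per millisecond
FIBER_FACTOR = 2 / 3

def fiber_delay_ms(distance_km: float) -> float:
    """One-way propagation delay in milliseconds over the given fiber distance."""
    return distance_km / (C_VACUUM_KM_PER_MS * FIBER_FACTOR)

# Earth's circumference is about 40,000 km, hence the ~200 ms global circuit.
print(f"Around the world: {fiber_delay_ms(40_000):.0f} ms")
# London to New York is roughly 5,600 km great-circle distance.
print(f"London to New York, one way: {fiber_delay_ms(5_600):.1f} ms")
```

Note that real routes are never great circles, so measured figures run higher than this floor; the calculation gives you the best case physics allows.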

Now, in terms of latency, we can put up with a reasonable amount. As you read this, you are living about 80 ms in the past: that is roughly the time it takes your brain to process all the input it receives in real time into something useful.

There are dozens of other factors involved in latency, not least the performance of the application itself. Clearly, if an application in remote cloud land is performing poorly, it adds to the overall latency issue.

There are ways to decrease latency.

The most obvious is buying cloud services that are near to you: the nearer you are, the less time traffic takes to travel back and forth. Of course, that's not always possible, given that a) cloud workloads may move geographic location (further away from you) based on complex rules, and b) if you are on the outer edge of the world's fiber network (here's looking at you, New Zealand), then close could still be far away.

The other is a direct connection to your CSP; even Amazon offers this. A direct connection removes the unpredictability associated with general internet usage. In my work with large organisations and government agencies, it is almost always part of the design. As well as lowering latency, it smooths network performance and allows tighter security controls to be put in place.

You can look at performance metrics to weed out CSPs with poor performance between their border and their service platform. A number of companies publish this information, which will help you in your decision process.
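You can also gather a rough comparison yourself from your own network by timing a TCP handshake to each candidate endpoint. A minimal sketch using only the Python standard library; the hostnames are placeholders you would swap for your providers' real endpoints:

```python
# Rough latency comparison: time a TCP handshake to each candidate endpoint.
# The handshake time approximates one network round trip from your site.
import socket
import time

def connect_time_ms(host: str, port: int = 443, timeout: float = 3.0) -> float:
    """Return the TCP connect (handshake) time in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only wanted the timing
    return (time.perf_counter() - start) * 1000

# Placeholder hostnames; substitute your CSPs' actual regional endpoints.
endpoints = ["csp-region-a.example.com", "csp-region-b.example.com"]
for host in endpoints:
    try:
        print(f"{host}: {connect_time_ms(host):.1f} ms")
    except OSError as exc:
        print(f"{host}: unreachable ({exc})")
```

Run it a few dozen times at different hours and compare medians rather than single samples, since internet paths vary through the day.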

As we talked about, one of the primary causes of latency is the complex nature of the public network and, most importantly, the number of “hops” (or intersections, if you like) that traffic must flow through. Each one of those hops introduces latency.

Internationally, companies like Internap have figured out a way to reduce this. Internap connects itself to multiple telecommunications carriers and continuously looks for the fastest routes for traffic to take. This improves on more traditional methods (BGP's path selection) because it finds the fastest path, not necessarily the path with the fewest hops. Think GPS modes: shortest route versus fastest route.

The other piece of the puzzle, which is often missed, is optimising your own network and gateway. You're best to talk to whoever is providing that service to get a design. Remember, if you are looking at WAN optimisation, you're not only going to have to do it at your end; you're also going to need something at the CSP end. Remember as well that as your workforce becomes more mobile, you're going to have additional chatty traffic to deal with from a multitude of external devices. Phones, tablets, laptops, desktops, and workers' own home PCs are all constantly talking and synchronising data.

Latency is something that has to be figured out up front. Trying to retrofit afterwards can be expensive, and if you choose the wrong CSP you may be stuck with what you get, the only option being to migrate workloads to a CSP with better performance characteristics.

