Earlier this month I spent a few days at the Open Networking Summit in Santa Clara, Calif., and walked away certain I had watched history being made in the networking industry. The emergence of the OpenFlow standard and software-defined networking has been on my radar for a while, but at this event, the future coalesced.

The secret is out on SDN.

I’ve been following SDN and OpenFlow almost since their earliest days. I’ve been lucky enough to know Martin Casado since before Nicira knew what it was going to build, and Guido Appenzeller of Big Switch since his days at Voltage Security. I attended the first Open Networking Summit back in October, but was floored by the scale of the April event. Attendance was up over 3x, and people from all corners of the ecosystem were there. Clearly the secret is out and it’s evident that the networking…



Equinix’s White Paper: Optimizing Internet Application Performance

Enterprises today require high performance for their cloud applications.

Equinix wrote a great white paper (and an easy read) on Internet application performance, covering both the business benefits and the technical details.  I’ll briefly summarize their paper and then extend that discussion to corporate and cloud application performance.

What’s great about the Equinix discussion is that it brings to light the negative business impact of high network latency, especially relative to bandwidth.  Many people perceive adding bandwidth as the solution to the s-l-u-g-gishness of their applications.  However, latency (a function of geographic distance) can have a much bigger impact.  (Riverbed calls latency the “silent killer.”)

Here are some statistics presented by Equinix:

  • Amazon — “Every 100ms delay costs 1% of sales” — for 2009 that translates into $245 million
  • Mozilla shaved 2.2 seconds of load time off its landing pages and increased download conversions by 15.4%, translating into an additional 60 million downloads each year
  • Microsoft found that an additional 500ms of delay on its page loads resulted in losing 1.2% of revenue per user
  • When Shopzilla reduced its page load time by 5 seconds, it saw a 25% increase in page views and a 7–12% increase in revenue
  • 10ms of latency could result in 10% less revenue for U.S. brokerages

Equinix does a great job presenting a background on the technical components of “speed”–i.e. what makes the network fast or slow.  It really does come down to these things:

“The speed of your site is judged on responsiveness to actions on the page (script requests, image renders, etc.) and on how quickly users can transition from page to page (loading a new page). The elements of speed can be further broken down: [1] the network latency and [2] bandwidth between your end users and your site, [3]the performance of your server infrastructure in responding to a request, and [4] how quickly the user’s browser can render your site based on how the web page is coded. While there is a tremendous body of knowledge on how to increase bandwidth, optimize servers and code web pages, network latency is generally considered immutable. But new studies show that if you reduce latency, it will have a tremendous effect on page load times, even more so than bandwidth, with every 20ms of reduced network latency resulting in a 7-15% decrease in page load times.”

Furthermore, Equinix discusses the traditional solutions to speed problems very well:

“conventional advice on reducing latency recommends using a third-party provider such as a Content-Delivery Network (CDN) to distribute content and leveraging their infrastructure to get geographically closer to the end user. While a CDN can help accelerate static content and effectively distribute video, the increasingly dynamic nature of the web (social media, real-time API access, etc.) reduces CDN effectiveness, and in a real-time cloud application may not be able to help at all.”

In short, the traditional solution of using CDNs (content delivery networks) is no longer sufficient for the end-to-end speed of all applications.  The new era of social media, real-time API access, and real-time cloud applications requires a comprehensive network strategy to achieve maximum application performance.

Here are some strategies on how to improve your performance:

1)   Locate your infrastructure as close as possible to the main POPs around the world.

Your customers will typically traverse several different ISPs before reaching your ISP and accessing your servers.  Your internal employees accessing your datacenter or a cloud application will typically do the same.  Traversing multiple ISPs not only increases the number of hops, but also increases latency significantly, because different ISPs are not optimally interconnected.  As Equinix describes it, “a route between two computers connected to different carriers might not be the shortest possible route: data sent from a computer in Boston to a computer in Maine may go through a peering point in New York. The implications become even greater in Asia, where a [customer] in Singapore trying to reach Sydney may be routed through Los Angeles, transiting the Pacific Ocean twice. These types of routing inefficiencies can have devastating consequences on performance.”  It would be ideal if your customers and you used the same ISP, but that will rarely be the case.  Thus, it’s important to locate your servers as close to the main POPs as possible.

If you’re using the cloud, pick a cloud provider that resides extremely close to the main POPs.  The short geographic distance not only decreases latency, but also makes it likely that the two datacenters will have a multi-gigabit link between them.  Locating your cloud close to the POPs is typically not too difficult, as cloud providers have their own incentive to reside close to the POPs.  For example, AWS resides in a datacenter in Ashburn, VA, very close to the Equinix datacenter, which terminates a majority of international backbones.  To confirm the performance, you may want to test the bandwidth and latency between your cloud’s datacenter and the POP.
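As a minimal sketch of such a test, the snippet below times full TCP handshakes to approximate network round-trip latency.  The hostnames are placeholders, and dedicated tools (ping, traceroute, iperf) would give more thorough numbers; this just shows the idea.

```python
# Minimal sketch: estimate network latency to a host by timing TCP handshakes.
# A TCP connect completes after one round trip, so the median connect time
# approximates the RTT between you (or your cloud VM) and the target.
import socket
import statistics
import time

def tcp_connect_latency_ms(host, port, samples=5, timeout=3.0):
    """Return the median time (ms) to complete a TCP connect to host:port."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=timeout):
            pass  # handshake done; close immediately
        times.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(times)

# Example (hypothetical host): tcp_connect_latency_ms("pop.example.com", 443)
```

Run it from your cloud instance against a server near the POP in question; if the median is in the single-digit milliseconds, you are effectively co-located.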

2)   Use cloud acceleration to decrease the amount of data that needs to be transferred.  

Cloud acceleration is the next technology to take advantage of in order to maximize the performance of your network.  Cloud acceleration essentially lowers the amount of data that needs to be sent across the network, and the reduction can be substantial, on the order of magnitudes (we’ve seen 10x).  One of the main problems with applications is that they’re built inefficiently, without the network in mind.  Specifically, while serving end-user functionality, applications typically transfer duplicative data across the network.  This not only degrades application performance, but also doubles bandwidth usage and imposes a processing tax on the network and the end server.

Cloud acceleration reduces the negative impact on networks and servers, while significantly enhancing the performance of end-user applications.  Instead of duplicative response data traversing multiple ISPs to and from the end server, the duplicative data is “served up” to the user locally from the user’s hard drive.  These days most users access the same websites or applications repeatedly throughout the day.  Most websites and applications contain more static information than dynamic information, and even the dynamic information consists mostly of incremental changes to a prior GET or GET-like request.
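The mechanics can be illustrated with a toy model.  This is my own sketch, not any vendor’s actual protocol: the client keeps a local store keyed by content hash, and whenever a payload repeats, only the short hash crosses the network instead of the full response.

```python
# Toy sketch of the dedup idea behind cloud acceleration (illustrative only).
import hashlib

class DedupClient:
    def __init__(self):
        self.store = {}          # content hash -> payload (the "warm" datastore)
        self.bytes_on_wire = 0   # track how much data actually crossed the link

    def receive(self, message):
        kind, body = message
        if kind == "data":                  # full payload: cache it locally
            self.bytes_on_wire += len(body)
            self.store[hashlib.sha256(body).hexdigest()] = body
            return body
        else:                               # "ref": serve from the local store
            self.bytes_on_wire += len(body)  # only the 64-byte hash traveled
            return self.store[body]

def send(payload, seen_hashes):
    """Server side: send a short reference if the client has seen this payload."""
    digest = hashlib.sha256(payload).hexdigest()
    if digest in seen_hashes:
        return ("ref", digest)
    seen_hashes.add(digest)
    return ("data", payload)
```

Fetching the same 10 KB page twice puts roughly 10 KB on the wire the first time and only the 64-byte hash the second time, which is where the order-of-magnitude savings come from on repeat visits.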

Cloud is About Cost Transfer by Robert Faletra

I recently read an article entitled “Cloud is about cost transfer” by Mr. Faletra.  (See CRN, March 2012.)  He makes a good point–namely, most cloud companies are incurring a capex cost that’s capitalized onto their balance sheets, which means higher long-term risk and a long-term need to retain customers.  I’d like to add some remedies to his discussion, since the cloud industry continues to grow.

Here are some ways to achieve a faster and higher ROI:

1)  Build a unique product and establish barriers to entry to maintain a cost premium

It’s well-known that the number one price-control mechanism is owning the supply curve while the demand curve shifts to the right, driving the price north.  Not only does an increase in demand drive the price up, but a premium can also be garnered on top of the market price.  This is why cloud companies must establish barriers to entry–i.e. network effects, strategic mix, patented engineering, etc.–in order to protect their prices and make a little premium on top of the market price.  I’ll go over barriers to entry in another post, but this is the number one mechanism for ROI.  Be a monopolist.

2)  Be first to market and win customers.

The theory of “first to market” is nebulous–sometimes it’s better to be first and sometimes it’s better to be a laggard.  For cloud services, it’s better to be first to market.  If you’re first to market, you have the first shot at winning customers.  In general, cloud services have gone beyond the innovators’ phase and onto the early adopters’ phase of the diffusion curve.  Because of this, there is a substantial market–about 16%–to be picked up by the first-to-marketers.  This is where marketing must accelerate.  Once a product is launched, it’s the marketing department’s job to reach that 16% of the market and win customers over.

3)  Convert your temporary customers into lifelong customers.  Do everything you can to keep your customers happy.

Customer service will again be a number one priority–a la Southwest Airlines.  With all these monthly no-commitment plans, defecting and going elsewhere is as easy as grabbing a cup of coffee.  This is what the churn rate measures for cloud services companies.  Once you’ve won over a customer, you should do EVERYTHING you can to keep them.  This can be challenging, because cloud companies already don’t make a big margin on each customer, so justifying the cost of customer service may be difficult.  This is where the premium achieved in strategy #1 needs to be sustained.

In short, cloud services are here to stay.  But, as Mr. Faletra alluded to, cloud services companies will be financially challenged.  The market will be a huge global one, with low barriers to entry, high capex, fickle customers, and ugly balance sheets.  However, the upside is huge, with the world as a potential customer.  Making money and keeping those customers happy requires a winning “horse,” great marketing and selling, and outstanding customer service.

Internet on the plane is a Godsend.

Better Internet and Application Speeds on Airlines is Essential.

Over the past 12 months, I have logged about 100,000 miles on planes.  It sounds brutal, but to be quite frank, having Internet access on the plane has been a godsend.  After a recent 6-hour flight, I did not want to get off the plane.  I couldn’t believe I was wishing the flight were a lil’ longer so I could finish studying, working, facebooking, watching videos, etc.–just doing what I would normally be doing on my couch at home.  It made no difference to me.  That is the power of having Internet on the plane.  It not only makes time fly by, but makes a 6-hour flight seem like a couch-surfing session at home.

As the world “flattens” and global travel becomes more widespread for fun, work, and school, the next big efficiency enabler is Internet technology on planes.  Many could do exactly what they would be doing on the Internet in their office, home, or library, and the traveling time that would have been lost in the past would be regained.  Ironically, without many distractions on the plane, I found that my efficiency actually improved.

However, there is a lot of room for technological improvement before planes become moving offices.  Planes need two main things: bigger bandwidth pipes to serve at least 200 passengers, and lower latency as they fly across different geographic regions.

On a recent trip from D.C. to L.A., I had an opportunity to conduct some speedtests from the air on Virgin America’s Gogo Internet service, with and without Cloud Acceleration.  The results were seemingly better with Cloud Acceleration on.  I’ve attached the results of the comprehensive test below.  Here’s a summary of my methodology, results, and experience.

1)  The night before, I ran some speedtests from my house using Cloudharmony’s cloud speed test.  (Cloudharmony has some awesome tests–you can run upload, download, and DNS tests against a global list of cloud providers.)  I ran speed tests without acceleration (cold), with acceleration on for the “1st pass” (warm), and with acceleration on for the “2nd pass” (hot).  The results are posted below.

2)  On the plane, I ran two tests: cold and hot.  I did not want to empty my warm datastore, so I skipped the warm test.

3)  Results:  The raw results indicate that the cold tests failed consistently.  In fact, I did not complete the full run because I could see the tests failing.  The hot results were great.  While most round-trip times and downloads were slower than my warm test the night before at home, they were far better than the cold tests.

The results support my belief that Cloud Acceleration is going to deliver much better performance for airlines!  Although this was just a first experiment and the results are mixed, I experienced improved performance.  I wasn’t expecting anything extraordinary, because more cloud acceleration infrastructure still needs to be built out across the country.

What still needs to be done?

As I alluded to earlier, airlines need to do two things to improve Internet performance: provide bigger bandwidth pipes for at least 200 passengers, and achieve lower latency as planes fly across different geographic regions.  Here are some thoughts/ideas on how that can be accomplished:

1)  It’s likely that the Internet from the plane is received from satellites.

2)  Increasing the bandwidth from the plane to the satellite should not be a problem.  The airlines just need to purchase higher aggregate bandwidth for their users.  According to Wikipedia, “a shared download carrier may have a bit rate of 1 to 40 Mbit/s and be shared by up to 100 to 4,000 end users.”  For every 50 passengers, I would estimate that airlines need at least 20 Mbit/s.  Most people like to stream YouTube, Hulu, or movies while they’re on the plane, so there could be significant bandwidth usage at any one point in time.
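My 20 Mbit/s-per-50-passengers estimate scales linearly, so a quick back-of-envelope check (the function name is just illustrative) shows what a full plane would need:

```python
# Back-of-envelope check of the estimate above: 20 Mbit/s per 50 passengers.
def plane_bandwidth_mbps(passengers, mbps_per_50=20):
    """Aggregate bandwidth an aircraft would need at the estimated rate."""
    return passengers / 50 * mbps_per_50

print(plane_bandwidth_mbps(200))  # a 200-seat plane -> 80.0 Mbit/s minimum
```

That 80 Mbit/s floor sits at the top end of Wikipedia’s 1–40 Mbit/s shared-carrier range, which is exactly why airlines would need to purchase multiple or higher-rate carriers.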

3)  Latency is a much bigger theoretical problem when using satellites.  According to Wikipedia, “factoring in other normal delays from network sources gives a typical one-way connection latency of 500–700 ms from the user to the ISP, or about 1,000–1,400 ms latency for the total round-trip time (RTT) back to the user. This is much more than most dial-up users experience at typically 150–200 ms total latency.”
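To see where those numbers come from: a geostationary satellite orbits at about 35,786 km, so even at the speed of light, one user-to-ISP hop (up to the satellite and back down) costs roughly 240 ms of pure propagation delay before any modem, queuing, or routing overhead is added.

```python
# Why satellite latency is so high: physics sets a hard floor on delay.
C_KM_PER_S = 299_792        # speed of light in vacuum, km/s
GEO_ALTITUDE_KM = 35_786    # geostationary orbit altitude above the equator

# One user->ISP hop traverses the satellite link twice (up, then down).
one_way_ms = 2 * GEO_ALTITUDE_KM / C_KM_PER_S * 1000
# A full request/response crosses that hop twice more on the way back.
round_trip_ms = 2 * one_way_ms

print(f"propagation floor: {one_way_ms:.0f} ms one-way, {round_trip_ms:.0f} ms RTT")
```

Propagation alone is about 239 ms one-way and 477 ms round-trip; the 500–700 ms and 1,000–1,400 ms figures quoted above are that floor plus normal network delays.  No amount of added bandwidth can remove it, which is why reducing the data that must cross the link matters so much.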

Cloud Acceleration has the potential to help both the bandwidth and latency problems.  With cloud acceleration, there will be less bandwidth usage as less data will be transmitted up and down the satellite connection.  Most laptop users access the same Internet sites and online applications, so most of the data will be stored locally and not need to be requested from the satellite.  More needs to be done to figure out how to best save bandwidth for multimedia access since that could be a large percentage of the usage on airplanes.

In terms of latency, since less data needs to be transmitted, the latency will improve for those bits of data–they will not need to take the round-trip.  For Internet access and web application users, this could result in a significant performance increase, as most of those applications constantly send duplicative data.

Overall, more technical work remains to be researched and completed for acceleration on airplanes.  However, from this cursory examination, I think Cloud Acceleration is on its way to delivering exceptional Internet and application performance for consumers on planes as they continue to traverse the globe.