Contributed by Pejman Roshan, VP Product, Teridion
The SaaS delivery model is now trusted by companies ranging from small startups to large global enterprises for their business-critical functions. By 2020, three out of four organizations will be running nearly all their applications on a SaaS platform, according to recent studies.
For purveyors of SaaS applications, great opportunity comes with great responsibility. As more customers prefer SaaS for their most important business functions, providers must ensure their applications meet users’ expectations for good performance—and that means optimizing the application and its delivery however possible.
One method for performance optimization is to locate the application as close to users as possible to reduce latency. Oftentimes, a SaaS provider with a global customer base will build out a series of private regional points of presence (PoPs) across the Internet and host its applications at those PoPs.
The reasoning behind this approach is sound. The public Internet has inherent performance issues that are magnified the further traffic must travel. The more "hops" traffic takes as it is routed from one autonomous system to the next, the slower the SaaS application will feel to end users.
So, for example, if the application is hosted in a datacenter in Seattle but a significant portion of users reside in Singapore, Moscow and São Paulo, those remote users will wait too long, perhaps multiple seconds, for page loads and data updates coming from and going to the application. Users grow impatient if response time regularly exceeds a few seconds, and impatient users are unhappy users who are unlikely to renew their subscription. However, if the SaaS provider puts a PoP in (or close to) Singapore, Moscow and São Paulo and hosts its application in each of those PoPs, users should see a noticeable improvement in application performance.
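To see why distance alone produces multi-second waits, it helps to do the arithmetic. The sketch below is a back-of-the-envelope estimate, not a measurement: the Seattle-to-Singapore distance and the number of round trips per page are illustrative assumptions.

```python
# Back-of-the-envelope latency estimate: why distance hurts SaaS performance.
# All distances and round-trip counts below are illustrative assumptions.

SPEED_IN_FIBER_KM_S = 200_000  # light travels at roughly 2/3 c in optical fiber

def min_rtt_ms(distance_km: float) -> float:
    """Theoretical minimum round-trip time from propagation delay alone."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

def page_load_ms(distance_km: float, round_trips: int) -> float:
    """A chatty page that needs several sequential round trips multiplies the RTT."""
    return round_trips * min_rtt_ms(distance_km)

# Seattle to Singapore is roughly 12,900 km along a great-circle path.
rtt = min_rtt_ms(12_900)          # ~129 ms floor, before any per-hop delays
load = page_load_ms(12_900, 20)   # 20 sequential round trips -> ~2.6 s
print(f"min RTT: {rtt:.0f} ms, chatty page load: {load / 1000:.1f} s")
```

Real pages are worse than this floor: queueing and processing delay at every hop, TLS handshakes and retransmissions all add to the propagation delay, which is exactly the overhead a nearby PoP removes.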
That’s fine if only a few PoPs are needed, but what happens when the application is in high global demand and customers are located all over the world? While it’s possible to put a PoP in most major geographies, it is a very expensive, time-consuming approach. The DDoS protection company Imperva wrote a good blog post on what it takes to implement just one PoP. In its experience, it took many months to select an appropriate datacenter provider, negotiate a contract, provision and test the equipment, and get everything working to go live with the application. This process, which can take upwards of six months, is anathema to customer responsiveness and business agility.
In addition, once that series of private PoPs is fully deployed, it must be maintained. Someone has to make sure the devices are all in service and operating effectively; they must be monitored, secured and periodically upgraded. DevOps resources can be severely stretched managing multiple PoPs, sometimes requiring third-party maintenance that adds further cost and complexity.
Another requirement when using a series of private PoPs is application sharding: splitting an application into many instances that behave as one. This is necessary because multiple PoPs logically imply multiple instances of the application. Purely in engineering cost and time, it is a very big undertaking.
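Sharding strategies vary widely, and the article does not describe a specific one. As a minimal sketch of one common approach, each user can be deterministically mapped to a regional instance by hashing a user identifier; the shard names below are invented for illustration.

```python
import hashlib

# Hypothetical regional application instances; names are illustrative only.
SHARDS = ["us-west", "ap-southeast", "eu-east", "sa-east"]

def shard_for(user_id: str) -> str:
    """Deterministically map a user to one application instance.

    Hashing keeps the mapping stable across requests, so each user's data
    consistently lives in, and is served from, the same regional shard.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

# The same user always lands on the same shard.
assert shard_for("alice@example.com") == shard_for("alice@example.com")
```

The mapping is the easy part; the hard engineering is everything around it, such as replicating shared state between instances, handling users who travel, and rebalancing when a shard is added or removed, which is why the article calls sharding a very big undertaking.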
Traditionally, implementing private PoPs was the only way a SaaS provider could assure itself that its users in different geographies were going to get adequate performance. Content delivery networks, or CDNs, aren’t viable for SaaS providers because a CDN caches static content within its own network. This works well for, say, an eCommerce vendor that wants to assure prompt page display of a product catalog. It doesn’t work for bi-directional traffic in enterprise applications where users are uploading and downloading content, there’s dynamic or personalized content for each user, and collaboration is going on. CDN vendors are trying to make their caching smarter, but it’s still just caching, and it doesn’t work for most SaaS providers.
Ditch the expensive and complicated regional PoPs
SaaS providers don’t have to build their own PoPs to improve throughput over the public Internet. In fact, it’s possible to achieve a 10X (or greater) improvement in application performance with little more effort than a CNAME change.
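In practice, "a CNAME change" means repointing the application's public DNS name at the acceleration network's edge instead of the origin datacenter. The zone entries below are a hedged illustration with invented hostnames; real record names and TTLs would come from the provider in question.

```
; Hypothetical zone entries; all hostnames are illustrative.
; Before: the app's public name points at the origin datacenter.
app.example-saas.com.   300  IN  CNAME  origin-seattle.example-saas.com.

; After: it points at the acceleration network's edge name instead,
; which steers each user's traffic onto the optimized overlay.
app.example-saas.com.   300  IN  CNAME  edge.acceleration-network.example.
```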
For example, one prominent application acceleration solution deploys a global overlay network on top of some of the largest public cloud providers on the Internet, such as AWS, Google Cloud and Alibaba Cloud. Sensors within these providers’ network fabrics collect real-time data about the performance of the various traffic routes available to them. A cloud-based orchestrator can then decide how to use this overlay to route traffic most efficiently between a particular SaaS provider and its customers, regardless of where those customers are located.
The orchestrator also controls virtualized routing engines that get deployed across the fabric of those public cloud providers. This routing infrastructure dynamically establishes the fastest path, at any given time, between an end user and a SaaS provider; for example, to enhance data upload performance for a cloud-based storage application. Route adjustments are made in real time if performance on a different route is better. The goal is to always get the best throughput, the lowest latency, and the tightest control over packet loss between user and provider.
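The article doesn't describe the orchestrator's internals, but choosing the fastest path from live measurements can be modeled as a shortest-path computation over the overlay's measured link latencies, re-run whenever the measurements change. The sketch below uses Dijkstra's algorithm; every node name and latency figure is invented for illustration.

```python
import heapq

# Measured one-way latencies (ms) between overlay routers; values are invented.
LINKS = {
    "user-sg":    {"alibaba-sg": 5, "aws-tokyo": 70},
    "alibaba-sg": {"aws-tokyo": 35, "gcp-mumbai": 40},
    "aws-tokyo":  {"aws-oregon": 90},
    "gcp-mumbai": {"aws-oregon": 210},
    "aws-oregon": {"app-seattle": 8},
    "app-seattle": {},
}

def fastest_path(src: str, dst: str) -> tuple[float, list[str]]:
    """Dijkstra over current latency measurements; re-run as they change."""
    queue = [(0.0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, ms in LINKS[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + ms, nxt, path + [nxt]))
    return float("inf"), []

cost, path = fastest_path("user-sg", "app-seattle")
print(f"{cost:.0f} ms via {' -> '.join(path)}")
```

Note that the best route here hops through an intermediate cloud region rather than taking the "direct" link, which is precisely the kind of improvement an overlay can find that default Internet routing misses.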
This kind of solution has close to infinite scalability. When traffic goes up, more virtual cloud routers can be spun up; when traffic goes down, those router containers are discarded until they are needed again. It’s an elegant way of handling capacity demands. Overall, this kind of solution gives SaaS providers a level of performance control they could never get from deploying their own PoPs on the public Internet.
And unlike a CDN, the SaaS traffic isn’t decrypted at the edge, preserving privacy and data security. The solution is single tenant by design so that every SaaS provider gets its own network, which further enhances security and provides protection against DDoS attacks.
SaaS customers expect to have a good experience. If they don’t, it’s easy enough to move to the next provider waiting in the wings. A vast improvement in application performance can go a long way to improve the user experience and reduce customer churn.
The opinions expressed within this article are the personal opinions of the author. The facts and opinions appearing in the article do not reflect the views of CISO MAG and CISO MAG does not assume any responsibility or liability for the same.