Application performance and cloud connectivity issues are common challenges faced by CIOs and IT departments. This document examines the various solutions on offer and explains why a fully managed Cloud-First WAN, delivered as-a-Service, is the ideal answer to your application performance issues.
Technology is disrupting every sector of the economy. While innovators reap the benefits of digital transformation, laggards, including organizations that once dominated their markets, are dropping off the Dow Jones. An essential part of digital transformation is the agility that the cloud enables. Applications such as email, CRM systems, video and voice, which were once hosted in in-house data centers, have moved or are in the process of moving to the cloud. This has a dramatic impact on the enterprise WAN architecture.
Legacy WAN infrastructures predicated on MPLS technology are simply not capable of delivering the required speed and agility at a reasonable cost. The challenge for CIOs and other IT leaders is to find a solution that is reliable, secure and agile while remaining cost effective. Without such a solution, employee productivity and corporate profitability suffer, and the WAN becomes a barrier to effective digital transformation efforts.
Selecting the right connectivity solution is only part of the challenge. Once the network is operational, the challenges shift to centralized configuration and monitoring. Monitoring of WAN links is certainly not new to IT departments. Engineers have long relied on protocols and tools like ping, traceroute and the Simple Network Management Protocol (SNMP) to monitor various aspects of their enterprise WAN, including jitter, latency, packet loss and throughput.
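To illustrate the kind of link-level visibility these tools provide, the sketch below, written for a Linux host with the standard ping utility on the PATH, derives latency, jitter and packet loss for a single WAN path. The target address and probe count are illustrative assumptions, not part of any particular monitoring product.

```python
# Minimal sketch of link-level monitoring, assuming a Linux host with the
# standard iputils "ping" utility available. Target and count are illustrative.
import re
import statistics
import subprocess

def probe_link(target: str, count: int = 10) -> dict:
    """Ping a target and derive average latency, jitter and packet loss."""
    out = subprocess.run(
        ["ping", "-c", str(count), target],
        capture_output=True, text=True, check=False,
    ).stdout
    rtts = [float(m) for m in re.findall(r"time=([\d.]+) ms", out)]
    loss_pct = 100.0 * (count - len(rtts)) / count
    return {
        "target": target,
        "latency_ms": statistics.mean(rtts) if rtts else None,
        "jitter_ms": statistics.stdev(rtts) if len(rtts) > 1 else 0.0,
        "loss_pct": loss_pct,
    }

if __name__ == "__main__":
    print(probe_link("8.8.8.8"))
```

This kind of probe only sees the link in aggregate, which is exactly the limitation discussed next.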
However, with the advent of cloud-based SaaS applications, it is no longer enough to monitor WAN performance at an aggregate link level. Administrators must be aware of the application-specific data flowing through their network and be able to detect and fix any performance degradation in end-user applications.
By necessity, organizations with international branches often sign up with multiple WAN service providers to build out their enterprise network. Even in the pre-cloud era, monitoring this mishmash of networks from different service providers was a huge challenge. Now, with cloud applications, monitoring such networks is a herculean task.
With the increasing adoption of SaaS and IaaS, some applications like Office 365, Teams, Zoom, and CRM solutions like Salesforce or Dynamics 365 moved out of in-house datacenters into the cloud, while others, including backup and legacy enterprise applications, remained in the datacenter. Enterprises that continued with MPLS connectivity therefore faced new challenges and performance issues with cloud-based applications.
In this architecture, private MPLS links continued to be the mainstay of the enterprise WAN. Branch traffic destined for cloud applications was backhauled across the entire MPLS network, along with traffic bound for the in-house datacenter and the internet. Application-bound traffic broke out at the central DC to the SaaS / IaaS provider.
The ‘trombone effect’ was common in such architectures. The long path across the network resulted in high latency, thereby adversely affecting application performance. Thus, from an application performance perspective, MPLS was off the table.
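As a back-of-the-envelope illustration of the trombone effect, the following sketch compares the round-trip time of a backhauled path with that of local internet breakout. All figures are assumed values for a hypothetical branch, not measurements.

```python
# Back-of-the-envelope illustration of the trombone effect. All round-trip
# times below are assumed figures for a hypothetical branch, not measurements.
branch_to_dc_ms = 60            # branch -> central DC over MPLS (round trip)
dc_to_saas_ms = 40              # central DC -> SaaS provider via breakout
branch_direct_to_saas_ms = 25   # branch -> SaaS via local internet breakout

backhauled_rtt = branch_to_dc_ms + dc_to_saas_ms   # 100 ms
direct_rtt = branch_direct_to_saas_ms              # 25 ms

print(f"Backhauled RTT: {backhauled_rtt} ms")
print(f"Direct breakout RTT: {direct_rtt} ms")
print(f"Added latency from tromboning: {backhauled_rtt - direct_rtt} ms")
```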
In the pre-cloud era, carriers were the chief providers of WAN connectivity services to enterprises. With SD-WAN establishing itself as a technology of choice for cloud connectivity, many of these same carriers are jumping onto the SD-WAN bandwagon.
Carrier SD-WAN networks are built using equipment sourced from multiple vendors, with each vendor providing a proprietary configuration and monitoring solution, making a unified view of the network very hard to achieve. The situation is further complicated by the fact that carriers tend to operate within their national boundaries, thus requiring complex inter-carrier agreements for international connectivity.
The simplest route to SD-WAN is to deploy it as an edge overlay solution. In this configuration, the
overlay solution does provide some benefits over the legacy MPLS network, as it leverages local internet connectivity at branch locations. The SD-WAN CPE provides the necessary functionality to route and distribute traffic between the MPLS network, the internet, and any other available connectivity. Depending on the network quality, application traffic can be routed via the MPLS network or the public internet; neither of which is a perfect solution for application performance in the cloud-era.
Large enterprises have traditionally taken the ‘build it yourself’ approach with much of their IT infrastructure. But with the adoption of SaaS, PaaS, IaaS and UCaaS, that paradigm is changing. CIOs are beginning to see the merits of ‘as-a-Service’ models. Building the entire SD-WAN infrastructure in-house, while the rest of the IT infrastructure is moving to the cloud, runs counter to the overall ‘as-a-Service’ ethos, and the DIY SD-WAN approach faces several challenges of its own.
SaaS application performance is not just a matter of adding SD-WAN equipment to the existing network. Ensuring good application performance requires a holistic approach that takes into account foundational aspects of the technology such as capacity, availability and security. Superior availability results when SLAs, built-in redundancy, and other resiliency options are combined. Security, a critical component, must be part and parcel of the solution conceptualization process, including third-party integrations. Finally, the solution must provide optimal capacity, which directly relates to agility and scaling.
Though important, these foundational aspects are not sufficient. Building on this foundation, an effective solution must consider QoS, topology, application routing, and application acceleration and optimization. For example, with MPLS, QoS is enforced only after packet loss has occurred, with many user TCP connections competing with each other and causing unnecessary loss.
The choice of topology also has a big impact on application performance. The user ought to be connected to the SaaS application through a full-mesh architecture regardless of where it resides. The alternative is to make U-turns and slingshots through hubs and datacenters to get to the destination, further increasing the latency and unpredictability of the packets. Another important area to consider is the deployment model, namely DIY vs a managed service. With such a fast-changing technology, is it more cost effective to constantly recruit, train and upskill employees, or is it easier to leave the complexity to specialist players and simply consume connectivity as a service? Bringing it all together is the process, which should be simple yet still allow technology to move at the pace of the business.
Aryaka’s Cloud-First WAN delivered as-a-Service delivers SaaS acceleration through a private, software-defined Layer 2 network. Through the strategic distribution of over 30 Service PoPs, our private network is within 1-8 milliseconds of SaaS application data centers, such as Office 365, around the world.
Aryaka’s proprietary and patented optimization stack is baked into our fully meshed private global network, freeing businesses from the hassles of maintaining and managing appliances while providing optimized performance to cloud-hosted instances. Aryaka thus maximizes Office 365 and other SaaS application performance by tailoring the solution for each customer, selecting the geographies that minimize the average distance to users and therefore the latency.
Aryaka further solidifies this holistic approach with its SD-Edge Aryaka Network Access Point (ANAP). The ANAP is a cloud-managed and cloud-provisioned device that, when deployed at a customer site, provides significant advantages such as bandwidth scaling and improved last-mile optimization.
WAN optimization is another critical area to ensure application performance. Aryaka’s WAN optimization is built on two patented innovations, multi-segment optimization and data deduplication, along with other standard techniques like compression, bandwidth management (QoS, prioritization) and SSL acceleration.
A simplified network diagram shows datacenters, headquarters, XaaS and branch offices all connected to Aryaka’s Services PoP Network. The path to the cloud-hosted application comprises the first-mile, middle-mile and last-mile connectivity.
Aryaka uses its patented multi-segment optimization to achieve optimal application performance. In this approach, each segment (first-mile, middle-mile and last-mile) has an independent proxy. This allows for optimal data flow by reducing the time taken for the first byte transfer, using bigger payload sizes per packet and providing recovery from up to 5% packet loss.
In a typical MPLS environment, recovering from packet loss involves the entire round trip from the end user to the server. But using Aryaka’s patented algorithms and optimization techniques, packet loss is localized to an individual segment, typically the first or last mile, as the middle mile is a private Layer 2 backbone and is fully redundant.
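The sketch below is a simple illustrative model of why localizing loss recovery to a segment matters: a packet lost end-to-end costs roughly one full path RTT to retransmit, while a segment-local retransmission only crosses the affected mile. The per-segment round-trip times are assumptions, not Aryaka measurements.

```python
# Illustrative model of segment-local loss recovery vs end-to-end recovery.
# Per-segment round-trip times are assumed figures, not measurements.
first_mile_rtt_ms = 10
middle_mile_rtt_ms = 120   # long-haul core
last_mile_rtt_ms = 10

end_to_end_rtt_ms = first_mile_rtt_ms + middle_mile_rtt_ms + last_mile_rtt_ms

# A single end-to-end TCP connection needs roughly one full path RTT to
# detect and retransmit a packet lost in the last mile.
end_to_end_recovery_ms = end_to_end_rtt_ms

# With an independent proxy per segment, the retransmission only crosses
# the segment where the loss occurred.
segmented_recovery_ms = last_mile_rtt_ms

print(f"End-to-end recovery: ~{end_to_end_recovery_ms} ms per lost packet")
print(f"Segment-local recovery: ~{segmented_recovery_ms} ms per lost packet")
```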
Data de-duplication is another area of innovation. It is a WAN optimization technique that eliminates the transfer of redundant data across the LAN/WAN by sending references/checksums instead of the actual data. Aryaka has built a patented data de-duplication engine called ‘Advanced Redundancy Removal’ that spans protocols and applications, thus providing benefits across the organization at the network layer.
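To make the idea concrete, here is a minimal, hypothetical sketch of reference-based de-duplication using fixed-size chunks and SHA-256 checksums. Aryaka’s Advanced Redundancy Removal is proprietary and differs in detail; this only illustrates the general principle of sending short references instead of previously seen data.

```python
# Minimal sketch of reference-based de-duplication, assuming fixed-size chunks
# and a shared chunk cache on both ends. Illustrative only, not Aryaka's engine.
import hashlib

CHUNK = 4096  # bytes; chunk size is an assumption for illustration

def encode(stream: bytes, seen: dict) -> list:
    """Replace chunks already in the cache with their SHA-256 references."""
    out = []
    for i in range(0, len(stream), CHUNK):
        chunk = stream[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest in seen:
            out.append(("ref", digest))     # send a short reference
        else:
            seen[digest] = chunk
            out.append(("data", chunk))     # send the actual bytes
    return out

def decode(tokens: list, seen: dict) -> bytes:
    """Rebuild the stream from literal chunks and cached references."""
    parts = []
    for kind, value in tokens:
        if kind == "ref":
            parts.append(seen[value])
        else:
            seen[hashlib.sha256(value).hexdigest()] = value
            parts.append(value)
    return b"".join(parts)

if __name__ == "__main__":
    cache_tx, cache_rx = {}, {}
    payload = b"A" * 8192 + b"B" * 4096
    wire = encode(payload, cache_tx)
    assert decode(wire, cache_rx) == payload
    # A repeated transfer is now mostly references instead of data.
    print([kind for kind, _ in encode(payload, cache_tx)])
```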
Compression is an important technique used in WAN bandwidth optimization, reducing the size of data transmitted over the network. Dictionary compression is one of the commonly used compression types, the Lempel-Ziv algorithm being one example. It builds a dictionary dynamically as it encodes, substituting recurring sequences of characters with short codes. Many other popular compression programs, including ZIP, GZIP, Stac (LZS), and the UNIX compress utility, employ variants of the Lempel-Ziv algorithm. Compression adds value by addressing throughput concerns. Together with traffic management techniques, compression can help in WAN latency management. It is often used in conjunction with byte-level pattern matching (byte caching) or de-duplication. Typically, low-bandwidth links with packet loss and latency benefit the most from this feature.
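As a quick illustration of the savings dictionary compression can deliver, the snippet below compresses a repetitive synthetic payload with Python’s zlib (DEFLATE, an LZ77 variant). Real-world savings depend heavily on how repetitive the traffic actually is.

```python
# Quick illustration of dictionary-style compression using Python's zlib
# (DEFLATE, an LZ77 variant). The sample payload is synthetic and repetitive.
import zlib

sample = (b"GET /api/v1/orders HTTP/1.1\r\nHost: example.com\r\n"
          b"Accept: application/json\r\n\r\n") * 50

compressed = zlib.compress(sample, level=6)
ratio = len(sample) / len(compressed)
print(f"original: {len(sample)} bytes, compressed: {len(compressed)} bytes, "
      f"ratio: {ratio:.1f}x")
```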
Aryaka’s solution includes built-in quality of service support that provides customers with a portal dashboard to prioritize their applications on the network and to monitor network performance and traffic flows. Using the MyAryaka portal, customers can flag each type of traffic or application on the network to indicate its performance priority level. The classifications are transactional, real-time, productivity, critical and best effort. For instance, database transactions may be classified as transactional, voice-over-IP and streaming video as real-time, file transfers as best effort and e-mail as productivity.
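The mapping below is a hypothetical sketch of how such traffic classes might translate to DSCP markings. The class names mirror those described above, but the specific DSCP values and application assignments are illustrative conventions, not necessarily what the MyAryaka portal applies.

```python
# Hypothetical mapping of traffic classes to DSCP markings, mirroring the
# five classes described above. DSCP values shown are common conventions.
DSCP_BY_CLASS = {
    "real-time":     46,  # EF   - voice / streaming video
    "critical":      34,  # AF41 - business-critical apps
    "transactional": 26,  # AF31 - database transactions
    "productivity":  18,  # AF21 - e-mail and similar
    "best-effort":    0,  # default - bulk file transfers
}

APP_CLASS = {
    "voip": "real-time",
    "salesforce": "transactional",
    "outlook": "productivity",
    "ftp": "best-effort",
}

def dscp_for(app: str) -> int:
    """Return the DSCP value to mark for a given application."""
    return DSCP_BY_CLASS[APP_CLASS.get(app, "best-effort")]

print(dscp_for("voip"))  # 46
```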
Aryaka Connectivity to IaaS
Infrastructure-as-a-Service and Software-as-a-Service are sometimes used interchangeably. This synonymous usage perhaps stems from applications like Office 365 that are SaaS applications but hosted on Microsoft’s Azure IaaS. However, it is important to realize that IaaS and SaaS are distinct from the perspective of connectivity.
In the spirit of flexibility and agility that is so synonymous with cloud offerings, Aryaka provides two different ways to connect to IaaS providers. The first is a direct connection using AWS’s Direct Connect, Microsoft’s ExpressRoute, Google’s Dedicated Interconnect, Oracle’s FastConnect or Alibaba Cloud’s Express Connect; the second is an IPsec tunnel from the nearest PoP router.
Connectivity to SaaS applications like Office 365, Salesforce, WebEx or Fuze is a challenge. Traditional connectivity solutions for accessing SaaS applications depend on the public internet, which can be unreliable or slow in places.
A ‘Public Virtual Office’ (VO) is Aryaka’s solution for providing connectivity and improving the performance of cloud-based office applications that are accessed over the internet. A VO is an Aryaka virtual router with Layer 4 stateful firewall capability and a public IP address. It also provides an optimization container and a multi-segment TCP architecture to reduce the RTT. Note that Fuze leverages both Aryaka Virtual Office (VO) capabilities and direct Layer 2 peering, making it one of the first SaaS applications so enabled.
MyAryaka is included as part of every Aryaka service and solution. It is a powerful, web-based management and analytics portal that provides real-time, contextual insight into your network and applications. With MyAryaka, you can perform complete configurations in real-time across edge access as well as the core private network.
The path selection feature selects the optimal link for customers’ business-critical traffic. Path Selection actively monitors each path for packet loss and latency and selects the link with the best performance. This helps to ensure that traffic is not sent through a path that is experiencing heavy packet loss or high latency.
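A minimal sketch of this idea is shown below: each candidate path is scored from its measured latency and packet loss, and the best-scoring link is chosen. The scoring weights and the sample measurements are assumptions for illustration only, not Aryaka’s actual algorithm.

```python
# Minimal sketch of per-path selection based on measured loss and latency.
# Scoring weights and sample measurements are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class PathStats:
    name: str
    latency_ms: float
    loss_pct: float

def score(path: PathStats) -> float:
    """Lower is better: weight loss heavily relative to latency."""
    return path.latency_ms + 50.0 * path.loss_pct

def best_path(paths: list) -> PathStats:
    return min(paths, key=score)

measurements = [
    PathStats("mpls", latency_ms=40.0, loss_pct=0.1),
    PathStats("broadband", latency_ms=25.0, loss_pct=1.5),
    PathStats("lte", latency_ms=60.0, loss_pct=0.5),
]
print(best_path(measurements).name)  # "mpls" under these sample numbers
```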
Aryaka’s global private network provides the world’s business users with fast and reliable cloud and SaaS access from any location. Our worldwide Services PoPs span all six inhabited continents and are strategically located so that end-users can access SaaS applications and data centers as if they resided on their own desktops. In many of these locations, our PoPs are in close proximity to the leading IaaS providers: AWS, Azure, Alibaba Cloud, Google Cloud and Oracle.
Aryaka’s innovative approach to cloud connectivity accelerates a variety of SaaS applications.
A number of solutions, ranging from MPLS and DIY SD-WAN to carrier SD-WAN, are available for connectivity to cloud applications. While DIY SD-WAN and carrier SD-WAN have some advantages over MPLS, they fall short on some key aspects. The table below compares the various options.
Aryaka’s fully managed Cloud-First WAN combines a software-defined, global private network, application optimization, multi-cloud connectivity, security, and visibility in a single unified service. Aryaka’s Azure global ExpressRoute footprint and Office 365 / Teams acceleration and optimization increase productivity and profitability. In addition, direct connections to IaaS providers using AWS’s Direct Connect, Microsoft’s ExpressRoute, Oracle’s FastConnect, Google’s Dedicated Interconnect or Alibaba Cloud’s Express Connect ensures optimal performance of applications hosted on IaaS environments.
Aryaka, the Cloud-First WAN and SASE company, and a Gartner “Voice of the Customer” leader, makes it easy for enterprises to consume network and network security solutions delivered as-a-service for a variety of modern deployments. Aryaka uniquely combines innovative SD-WAN and security technology with a global network and a managed service approach to offer the industry’s best customer and application experience. The company’s customers include hundreds of global enterprises including several in the Fortune 100.