
Case Study: WhiteSource simplifies deployments using Azure Kubernetes Service


March 28, 2019

WhiteSource simplifies open-source usage management for security and compliance professionals worldwide. Now the WhiteSource solution can meet the needs of even more companies, thanks to a re-engineering effort that incorporated Azure Kubernetes Service (AKS).

WhiteSource was created by software developers on a mission to make it easier to consume open-source code. Founded in 2008, the company is headquartered in Israel, with offices in Boston and New York City. Today, WhiteSource serves customers around the world, including Fortune 100 companies. As much as 60 to 70 percent of a modern codebase consists of open-source components. WhiteSource simplifies the process of consuming these components and helps to minimize the cost and effort of securing and managing them so that developers can freely and fearlessly use open-source code.

WhiteSource is a user-friendly, cloud-based, open-source management solution that automates the process for monitoring and documenting open-source dependencies. The WhiteSource platform continuously detects all the open-source components used in a customer’s software using a patent-pending Contextual Pattern Matching (CPM) Engine that supports more than 200 programming languages. It then compares these components against the extensive WhiteSource database. Unparalleled in its coverage and accuracy, this database is built by collecting up-to-date information about open-source components from numerous sources, including various vulnerability feeds and hundreds of community and developer resources. New sources are added on a daily basis and are validated and ranked by credibility.

You can read more about WhiteSource in this Azure customer story.

Simplifying deployments, monitoring, availability, and scalability

WhiteSource was looking for a way to deliver new services faster to provide more value for its customers. The solution required more agility and the ability to quickly and dynamically scale up and down, while maintaining the lowest costs possible.

Because WhiteSource is a security DevOps–oriented company, its solution required the ability to deploy fast and to roll back even faster. Focusing on an immutable approach, WhiteSource was looking for a built-in solution to refresh the environment upon deployment or restart, keeping no data on the app nodes.

This was the impetus to investigate containers. Containers make it possible to run multiple instances of an application on a single instance of an operating system, thereby using resources more efficiently. Containers also enable continuous deployment (CD), because an application can be developed on a desktop, tested in a virtual machine (VM), and then deployed for production in the cloud.

Finding the right container solution

The WhiteSource development team explored many vendors and technologies in its quest to find the right container orchestrator. The team knew that it wanted to use Kubernetes because it was the best established container solution in the open-source community. The WhiteSource team was already using other managed alternatives, but it hoped to find an even better way to manage the building process of Kubernetes clusters in the cloud. The solution needed to quickly scale per work queue and to keep the application environment clean post-execution. However, the Kubernetes management solutions that the team tried were too cumbersome to deploy, maintain, and get proper support for.

Fortunately, WhiteSource has a long-standing relationship with Microsoft. A few years ago, the team responsible for Microsoft Visual Studio Team Services (now Azure DevOps) reached out to WhiteSource after hearing customers request a better way to manage the open-source components in their software. Now WhiteSource Bolt is available as a free extension to Azure DevOps in the Azure Marketplace. In addition, Microsoft signed a global agreement with WhiteSource to use the WhiteSource solution to track open-source components in Microsoft software and in the open-source projects that Microsoft supports.

A Microsoft solution specialist demonstrated Azure Kubernetes Service to the WhiteSource development team, and the team knew immediately that it had found the right easy-to-use solution. AKS manages a hosted Kubernetes environment, making it quick and easy to deploy and manage containerized applications—without container orchestration expertise. It also eliminates the burden of ongoing operations and maintenance by provisioning, upgrading, and scaling resources on demand, without taking the application offline. AKS also supported a cloud-agnostic solution that could run the WhiteSource application on multiple clouds.

Architecture

The WhiteSource solution was redesigned as a multicontainer application. The application is written mostly in Java and runs under a WildFly (JBoss) application server. It is deployed to an AKS cluster that pulls images from Azure Container Registry and runs in 60 to 70 Kubernetes pods.

The main WhiteSource app runs on Azure Virtual Machines and is exposed from behind a web application firewall (WAF) in Azure Application Gateway. The WhiteSource developers were early adopters of Application Gateway, which provides an application delivery controller (ADC) as a service with Layer 7 load-balancing capabilities—in effect, handling the solution’s front end.

The services that run on AKS communicate with the front end through Application Gateway to receive requests from clients, process them, and return the answers to the application servers. When the WhiteSource application starts, it polls an Azure Database for MySQL database for an open-source component to process. After finding the data, it starts processing, sends the results to the database, and then expires. The process that ran in the container can be scrubbed entirely from the environment; the container starts fresh, with no data saved, and the cycle begins again.
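The run-once, exit, start-fresh lifecycle described above can be sketched as a minimal worker loop. This is an illustrative sketch, not WhiteSource's actual code: the function names are assumptions, and an in-memory list stands in for the work table in Azure Database for MySQL.

```python
# Minimal sketch of the ephemeral worker lifecycle (illustrative only).
# In the real system, the backlog lives in Azure Database for MySQL and
# the worker runs inside an AKS-managed container.

PENDING_WORK = [
    {"component": "log4j-core", "version": "2.11.1"},
    {"component": "jackson-databind", "version": "2.9.8"},
]

def fetch_next_component(work_table):
    """Poll for one open-source component to process (stand-in for a DB query)."""
    return work_table.pop(0) if work_table else None

def process(item):
    """Stand-in for matching a component against the WhiteSource database."""
    return {"component": item["component"], "processed": True}

def run_once(work_table):
    """Process exactly one item, return the result, then exit.

    Because no state is kept on the node, restarting the container
    yields a completely fresh environment."""
    item = fetch_next_component(work_table)
    if item is None:
        return None          # nothing to do; container exits and is scrubbed
    return process(item)

result = run_once(PENDING_WORK)
print(result)
```

The key design point the article describes is that each container processes its work and then disappears, so no cleanup logic is ever needed on the application nodes.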

Containers also make it easy to continuously build and deploy applications. The containerized workflow is integrated into the WhiteSource continuous integration (CI) and continuous deployment in Jenkins. The developers update the application by pushing commits to GitHub. Jenkins automatically runs a new container build, pushes container images to Azure Container Registry, and then runs the app in AKS. By setting up a continuous build to produce the WhiteSource container images and orchestration, the team has increased the speed and reliability of its deployments. In addition, the new CI/CD pipeline serves environments hosted on multiple clouds.

We write our AKS manifests and implement CI/CD so we can build it once and deploy it on multiple clouds. That is the coolest thing!

Uzi Yassef, senior DevOps engineer, WhiteSource


Azure services in the WhiteSource solution

The WhiteSource solution is built on a stack of Azure services that includes the following primary components, in addition to AKS:

  • Azure Virtual Machine scale sets are used to run the AKS containers. They make it easy to create and manage a group of identical, load-balanced, and autoscaling VMs and are designed to support scale-out workloads, like the WhiteSource container orchestration based on AKS.
  • Application Gateway is the web traffic load balancer that manages traffic to the WhiteSource application. A favorite feature is connection draining, which enables the developers to change members within a back-end pool without disruption to the service. Existing connections to the WhiteSource application continue to be sent to their previous destinations until either the connections are closed or a configurable timeout expires.
  • Azure Database for MySQL is a relational database service based on the open-source MySQL Server engine that stores information about a customer’s detected open-source components.
  • Azure Blob storage is optimized for storing massive amounts of unstructured data. The WhiteSource application uses blob storage to serve reports directly to a customer’s browser.
  • Azure Queue storage is used to store large numbers of messages that can be accessed from anywhere in the world via authenticated calls using HTTP or HTTPS. The WhiteSource solution uses queues to create a backlog of work to process asynchronously.
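The backlog pattern in the last bullet can be sketched with Python's standard-library queue as a local stand-in for Azure Queue storage (which, in the real solution, is accessed over HTTPS through the Azure SDK). The message shapes and component names here are assumptions for illustration.

```python
import queue

# Local stand-in for Azure Queue storage; in production the queue is a
# durable cloud service reachable via authenticated HTTP/HTTPS calls.
backlog = queue.Queue()

# A front-end request enqueues work instead of processing it inline...
for component in ["spring-core", "guava", "commons-io"]:
    backlog.put({"scan": component})

# ...and AKS-hosted workers drain the backlog asynchronously, so the
# front end never blocks on long-running scans.
processed = []
while not backlog.empty():
    message = backlog.get()
    processed.append(message["scan"])   # stand-in for the actual scan work
    backlog.task_done()

print(processed)
```

Decoupling submission from processing this way is what lets AKS scale the worker pods against queue depth rather than against front-end traffic.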

Using AKS, we get all of the advantages of Kubernetes as a service without the overhead of building and maintaining our own managed cluster. And I get my support in one place for everything. As a Microsoft customer, for me, that’s very important.

Uzi Yassef, senior DevOps engineer, WhiteSource

Benefits of AKS

The WhiteSource developers couldn’t help comparing AKS to their experience with Amazon Elastic Container Service for Kubernetes (Amazon EKS). They felt that the learning curve for AKS was considerably shorter. Using the AKS documentation, walk-throughs, and example scenarios, they ramped up quickly and created two clusters in less time than it took to get started with EKS. The integration with other Azure components provided a great operational and development experience, too.

Other benefits included:

  • Automated scaling. With AKS, WhiteSource can scale its container usage according to demand. The workloads can change dynamically, enabling some background processes to run when the cluster is idle, and then return to running the customer-facing services when needed. In addition, more instances can run for a much lower cost than they could with the previous methods the company used.
  • Faster updates. Security is a top priority for WhiteSource. The company needs to update its databases of open-source components as quickly as possible with the latest information. The team was accustomed to a more manual deployment process, so the ease of AKS came as a surprise. Its integration with Azure DevOps and the CD pipeline makes it simple to push updates as often as needed.
  • Safer deployments. The WhiteSource deployment pipeline includes rolling updates. An update can be deployed with zero downtime by incrementally replacing pod instances with new ones. Even if an update includes an error and the application crashes, the other pods keep running and performance is not affected.
  • Control over configurations. The entire WhiteSource environment is set up to use a Kubernetes manifest file that defines the desired state for the cluster and the container images to run. The manifest file makes it easy to configure and create all the objects needed to run the application on Azure.
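The desired-state manifest and rolling-update behavior described in these bullets might look something like the following. This is an illustrative sketch only: the names, image path, and replica counts are assumptions, not WhiteSource's actual manifest.

```yaml
# Hypothetical Deployment manifest sketch -- names and values are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ws-worker
spec:
  replicas: 60                  # the solution runs in 60 to 70 pods
  selector:
    matchLabels:
      app: ws-worker
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1         # replace pods incrementally, for zero downtime
      maxSurge: 1
  template:
    metadata:
      labels:
        app: ws-worker
    spec:
      containers:
      - name: ws-worker
        # Image is pulled from Azure Container Registry (hypothetical path).
        image: whitesource.azurecr.io/ws-worker:1.0
```

Because the manifest declares the desired end state, Kubernetes handles the mechanics of converging to it, which is what makes the rollouts both repeatable and safe to automate from the CI/CD pipeline.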

Summary

The developers at WhiteSource understand that every improvement they make to the infrastructure of their solution helps enhance the product offering. They are moving forward with containers and beginning to containerize other utilities and small tool sets. In addition, all new development is now being done using AKS.

The simplicity of the AKS deployments has more than made up for any inconvenience related to the move to containers. The entire managed Kubernetes experience—from the first demo of AKS to the most recent application deployment—far exceeded the company’s expectations.

In the past, we were setting up our own Kubernetes cluster, maintaining it, and updating it. It was cumbersome and it took time and specialized knowledge. Now we can just focus on building, deploying, and maintaining our application. AKS shortens the time and helps us to focus more on innovation.

Uzi Yassef, senior DevOps engineer, WhiteSource
