LAST UPDATED August 20, 2021
Microservices and containers are a hot topic right now – and for various reasons. They are making the news for their ease of use, but also for the challenges associated with protecting them. We'll start with the positives. In the realm of organizational growth and flexibility, microservices have various advantages. One such advantage is their ability to enable continuous refactoring of small parts of an application, which spares developers from large product releases that tend to be more prone to bugs, backward-compatibility issues, and the like. Other key advantages of microservices include:
Independent scaling. Scaling can be limited to the parts of the application that require more resources
Services can be implemented using different technology stacks
More fine-grained testing of individual components
Easier handling of and recovery from workflow failures
Easier distribution of development tasks across different developers or teams
In addition to these, containers also enable easier orchestration and migration of services between different cloud providers and data centers. For example, let's say we have containers deployed in AWS, but Azure lowers its price for the compute instances that meet our requirements. Multi-cloud deployments like this can be easily managed by container orchestration tools such as Kubernetes.
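To make the portability idea concrete, here is a minimal Kubernetes Deployment sketch. The image name and port are placeholders, not values from this article; the point is that the same manifest can be applied unchanged to a managed cluster on AWS (EKS), Azure (AKS), or any other conformant Kubernetes environment:

```yaml
# Minimal sketch; "registry.example.com/webapp:1.0" is a placeholder image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3            # independent scaling: adjust only this service
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: registry.example.com/webapp:1.0
          ports:
            - containerPort: 8080
```

Because nothing in the manifest is provider-specific, moving between clouds is largely a matter of pointing `kubectl` at a different cluster.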
We have prebuilt container images for all of our microservices ready to go, so as long as a cloud provider supports the container technology we use (such as Docker), our deployment path is very similar no matter which provider we choose. The only required actions are to instantiate Azure instances and switch DNS to the new environment. These tasks can be accomplished within minutes, compared to hours or days using other multi-cloud management techniques. But what about security? Most standard tools don't lend themselves well to container-based deployment. The primary issue is licensing: most standard endpoint security providers will charge for an individual license for each container. As you can imagine, this becomes very expensive, very fast.
And how do we protect internet-facing applications? One option is to use a SaaS Web Application Firewall (WAF). This option may not be ideal, or even possible, however, if our applications are latency-sensitive or if additional configuration is required every time we migrate clouds. Not to mention the impact this would have on the scalability of the solution.
So what’s the solution?
Web applications deployed in cloud environments will use either a load balancer or a DNS load-balancing technique. Most solutions will use a load balancer provided by the cloud vendor, such as Elastic Load Balancing in AWS.
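In its simplest form, DNS load balancing rotates through the pool of A records registered for one hostname. A minimal Python sketch of that effect (the IP addresses are placeholders, not from this article):

```python
import itertools

# Minimal sketch of round-robin DNS behavior: addresses from the record
# pool are handed out in rotating order. The IPs below are placeholders.
BACKENDS = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

def round_robin(addresses):
    """Yield backend addresses in rotating order, as round-robin DNS does."""
    yield from itertools.cycle(addresses)

picker = round_robin(BACKENDS)
first_four = [next(picker) for _ in range(4)]
# first_four -> ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']
```

The trade-off is that plain DNS rotation knows nothing about backend health or load, which is one reason most deployments layer a real load balancer behind it.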
However, using any cloud-vendor-specific technology will make migrating to a different cloud provider all the more complex. Instead, we need a load-balancer microservice that migrates easily along with the other services. This is also the ideal point at which to enforce consistent Layer 3 and Layer 7 security policies across all cloud deployments.
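As an illustration of the pattern (not the ThreatX product itself), a containerized reverse proxy such as nginx can serve as that portable load-balancer tier. The service names below are assumptions, resolved by the container platform's internal DNS:

```nginx
# Illustrative fragment; "webapp-1" and "webapp-2" are assumed service
# names that the container network resolves to backend containers.
upstream app_backends {
    server webapp-1:8080;
    server webapp-2:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_backends;
    }
}
```

Because the proxy itself ships as a container image, the same configuration travels with the rest of the stack to any cloud provider.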
To protect microservices from external threats, we must prevent any of the containers from being exposed directly to the Internet. The only ingress and egress should be through a load balancer and/or firewall deployed in front of them. This separation can be accomplished using overlay networks, which are natively supported by container technologies such as Docker.
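A hedged Docker Compose sketch of that separation follows; the service and image names are assumptions. Only the front-end proxy publishes a port, while the application containers sit on an internal network with no route to the outside:

```yaml
# Sketch only; image names are placeholders. "backend" is marked
# internal, so "webapp" has no direct path to or from the Internet.
services:
  waf:
    image: registry.example.com/waf:1.0
    ports:
      - "443:443"        # the only published ingress point
    networks:
      - edge
      - backend
  webapp:
    image: registry.example.com/webapp:1.0
    networks:
      - backend          # reachable only via the waf service

networks:
  edge:
  backend:
    internal: true       # no external connectivity for this network
```

The same topology carries over to Docker Swarm overlay networks or Kubernetes network policies; the principle is that all traffic must transit the front-end container.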
The ThreatX WAF solution is a container that provides a consistent, decentralized Web Application Firewall across all microservice deployments, and it can be managed through a single dashboard or API. It also provides load balancing and edge caching as part of the solution.
Unlike legacy WAF solutions, the ThreatX WAF runs as just another microservice, is cloud-provider agnostic, and is an ideal way to enforce your security policies across multi-cloud web application deployments.