Everything You Need to Know About Container Security
Everything you need to know in 5 minutes!
Containers are quickly becoming the standard method for building, delivering, and running software. This page helps you understand how to secure your containerized code against various security threats.
Containers: What & Why
Containers are a modular way of packaging code so that all dependencies are encapsulated in the container. This makes each container independent of the underlying operating system, so it can run anywhere: any cloud or on-premises. It also makes each container independent of the other containers, so you can mix and match them. Think of containers like Lego blocks that you can assemble into anything.
This modularity helps explain the why of containers. Applications built with containers can be assembled, maintained, and enhanced by changing each container independently, without impacting the others. Software developers have little to no dependence on one another; they build and deliver their containers separately. This is what makes the continuous integration and continuous delivery of CI/CD tools possible. As a result, every aspect of development moves at the speed of the fastest developer, not the slowest.
Because containers are self-contained and can be assembled in an ad hoc fashion, they are commonly used to deliver microservices: you can add a discrete service by adding the container that provides it. This modular assembly model is why the Lego analogy is so apt. It also hints at a future where software vendors don't provide monolithic applications, but instead sell microservices or features that end users assemble to fit their unique needs.
What Is Container Orchestration (Kubernetes)?
Container orchestrators make it easy to run containers by seamlessly handling the typical operational challenges. Orchestrators handle the following functions (a brief sketch after this list shows a few of them expressed in a deployment spec):
- Admission Controllers: Check incoming containers to make sure they conform to the rules.
- Service Discovery: Provide a way for containers to find each other and communicate in a standard way.
- Load Balancing: Spread containers across servers to balance the load evenly and to group containers that communicate with each other for efficiency.
- Storage Management: Handle how containers access data storage resources.
- Container Management: The orchestrator can be configured for a variety of functions like scaling containers, killing and replacing containers, and more.
- Server Management: You can define which servers are available to the cluster and then constrain their available resources, e.g. CPU, memory.
- Self-Healing: If containers aren't responding, the orchestrator tries to restart or replace them, and it won't advertise their services until they are functioning properly again.
- Secrets & Authentication Management: Secrets and authentication details are kept separate from the containers, enabling better security practices and allowing authentication configurations to change without modifying the containers.
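To make a few of these functions concrete, here is a minimal sketch, assuming the official Kubernetes Python client and a cluster reachable through your local kubeconfig; the image, names, and values are hypothetical. It expresses self-healing (a liveness probe), resource constraints, and secrets kept outside the container in a single deployment.

```python
# Minimal sketch of a Deployment that exercises several of the orchestrator
# functions above: self-healing (a liveness probe), resource constraints, and
# secrets kept outside the image. All names and values here are hypothetical.
from kubernetes import client, config

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "web-frontend"},
    "spec": {
        "replicas": 3,  # container management: keep three copies running
        "selector": {"matchLabels": {"app": "web-frontend"}},
        "template": {
            "metadata": {"labels": {"app": "web-frontend"}},
            "spec": {
                "containers": [{
                    "name": "web",
                    "image": "registry.example.com/web-frontend:1.0",
                    # Self-healing: restart the container if this probe fails.
                    "livenessProbe": {"httpGet": {"path": "/healthz", "port": 8080}},
                    # Server/resource management: cap CPU and memory per container.
                    "resources": {"limits": {"cpu": "500m", "memory": "256Mi"}},
                    # Secrets management: credentials come from a Secret object,
                    # not from the container image itself.
                    "env": [{
                        "name": "DB_PASSWORD",
                        "valueFrom": {"secretKeyRef": {"name": "db-credentials",
                                                       "key": "password"}},
                    }],
                }],
            },
        },
    },
}

config.load_kube_config()  # connect using your local kubeconfig
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```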
There are various container orchestration systems, such as Docker Swarm, Apache Mesos, Amazon Elastic Container Service, Kubernetes, and others. Kubernetes, also known as Kube and K8s, is widely accepted as the industry-standard orchestrator, so we'll use Kubernetes to refer to orchestrators in this article. There is also a lightweight derivative from Rancher known as K3s that is used for embedded/IoT deployments.
VMs vs. Containers/Kubernetes vs. Both
Virtual Machines (VMs) are one way of sharing the resources (CPU, storage, memory) of a single server across multiple applications. Each application includes its own operating system, so it acts as an independent unit, as shown in the diagram below. This isolation provides certain security advantages.

Fig 1. Virtual Machines: Share a single server and each VM contains the code and the operating system
Containers orchestrated by Kubernetes don't each have their own operating system; they share one. Operating systems consume a lot of resources, so a server running multiple copies of the same operating system under VMs is inefficient. VMs still have their advantages, for example in cloud environments where a single server is shared by multiple clients who each need isolation. But that isolation comes at the cost of supporting multiple operating systems.

Fig 2. Containers: Kubernetes can span one or multiple servers. There is one container per pod.
Kubernetes requires an underlying operating system. So in a situation where Kubernetes is not utilizing the full resources of one or more of the servers in the cluster, those servers can leverage VMs to efficiently share their resources. In situations where Kubernetes is using the full resources of the underlying servers, a single operating system is more efficient.
Public clouds started by offering Kubernetes images for you to run on your own instances, on top of VMs sharing physical hardware. Now those public clouds offer Kubernetes as a managed service that runs containers for multiple clients while handling client isolation. Examples are AWS Fargate, Azure Container Instances, and Google Cloud Run.
Because containers, especially managed Kubernetes services, provide a more efficient means of sharing server resources with a single shared operating system, they are considered a strategic threat to virtual machines. Whether they are competitive or complementary will be determined by the market.
Container Security: Pros & Cons
Running containers in Kubernetes provides some inherent security advantages, but it does not mean that this model is inherently secure. Kubernetes leverages the underlying operating system to provide security to containers. For example, on Linux, containers are isolated by namespaces and cgroups, and seccomp provides an additional layer of isolation and control. Beyond this, certain Linux distros provide further security measures, such as Ubuntu's AppArmor and Red Hat's Security-Enhanced Linux (SELinux). Then there are operating systems tailored specifically to run Kubernetes and containers, such as CoreOS, Project Atomic, Ubuntu Snappy Core, and VMware Photon. In addition, cloud providers apply additional security practices and ensure that the components they run are up to date with the latest security patches. All of these reduce the threat surface available to attackers, but none of them means that using containers endows your solution with inherent security.
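To see those Linux primitives for yourself, here is a minimal sketch (Linux only) that inspects the namespaces and cgroups of a process via /proc. It is an illustration, not a security tool; run it against the PID of a containerized process as seen from the host to see the isolation boundaries the kernel enforces.

```python
# Minimal sketch (Linux only): the kernel primitives behind container isolation
# are visible under /proc. Each namespace link and cgroup entry below is what
# namespaces and cgroups actually look like for a running process.
import os

pid = os.getpid()  # substitute the PID of a containerized process as seen from the host

# Namespaces (pid, net, mnt, uts, ipc, user, cgroup): one link per isolation domain.
ns_dir = f"/proc/{pid}/ns"
for entry in sorted(os.listdir(ns_dir)):
    print(entry, "->", os.readlink(os.path.join(ns_dir, entry)))

# cgroups: the hierarchy that bounds how much CPU and memory the process can use.
with open(f"/proc/{pid}/cgroup") as f:
    print(f.read())
```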
Containers are immutable and ephemeral, meaning they aren't changed in place and can easily be replaced. This makes securing them easier, but it doesn't make them secure. Unless a container is configured properly, it can be modified; for example, it can suffer privilege escalation, increasing its ability to do damage in a run-time environment. And the fact that you can swap out problematic containers only provides value if you actually identify and swap out the problematic containers.
Another challenge is that Kubernetes enables a promiscuous environment in which any number of containers can communicate with each other. This traffic is called East/West communication (between containers), as opposed to North/South communication (in and out of the Kubernetes cluster). It means that any threat that enters a Kubernetes cluster can spread and do damage.
Containers also fuel the shift-left movement of pushing more of the operational concerns (DevOps) and security concerns (DevSecOps) onto the developer. This distributed development model is further amplified by the work-from-home shift fueled by COVID-19 and social distancing. It breaks the traditional command-and-control model of centralized security, a chokepoint where software is approved prior to deployment. With developers working from home, working independently on their containers, and then building and deploying in an automated fashion with CI/CD tools, security is cut out of the process. The only way to deliver the compliance and governance that security requires is to leverage tools that are equally distributed and automated.
Container Security: Full Container Lifecycle
Container security, like all IT security, is a process, not a tool. It requires a combination of tools, policies, and processes to contain your security threats. It must also be applied across the full lifecycle of the container. This includes the build process, the run-time environment, and the platform (Kubernetes and host operating system).
Container Security: The Development or Build-Side
The process of developing software and building container images is the starting point for container security. Securing your container images involves three aspects: (1) making sure you don’t introduce threats in the form of content (vulnerabilities, malware, license risk); (2) making sure you don’t expose sensitive information (secrets); (3) configuring your containers to avoid risks during operation (configuration). Because these three aspects involve large amounts of data and varying threat levels, the only way to manage them at scale and in a distributed development environment is to combine processes with automated tools that use policy to enforce compliance and enable governance. These need to be built into your build or CI/CD process.
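As one illustration of a policy gate wired into a build pipeline, the sketch below shells out to an image scanner (Trivy, one of the tools listed later in this article) and fails the build when high-severity findings appear. This is a sketch under assumptions: it assumes the Trivy CLI is installed, the image name is hypothetical, the severity threshold is a policy choice, and any comparable scanner could be substituted.

```python
# Minimal CI policy-gate sketch: scan a freshly built image and fail the pipeline
# if HIGH or CRITICAL vulnerabilities are found. Assumes the Trivy CLI is installed
# and on PATH; the image name is hypothetical and the threshold is a policy choice.
import subprocess
import sys

IMAGE = "registry.example.com/web-frontend:1.0"  # hypothetical image under test

result = subprocess.run(
    ["trivy", "image", "--exit-code", "1", "--severity", "HIGH,CRITICAL", IMAGE],
)

if result.returncode != 0:
    print(f"Policy gate failed: {IMAGE} has HIGH/CRITICAL findings (or the scan errored).")
    sys.exit(1)

print(f"Policy gate passed: no HIGH/CRITICAL findings in {IMAGE}.")
```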
More detail about build-side container security:
- Third-Party Threats: These are threats you inherit by including third-party software, assets, or libraries. There are three main concerns:
- Third-Party Vulnerabilities: If you use a third-party tool that has vulnerabilities, and your code exercises the vulnerable functions (its dependencies), then your container is at risk. Evaluating third-party code for these vulnerabilities and dependencies is called Software Composition Analysis (SCA).
- License Risk: Some of these third-party tools are licensed under terms that can put your own software at risk. For example, by using such a tool in conjunction with your own code, it may render your code open source. You need to make sure you only use tools with licenses acceptable to your company.
- Malware: Some individuals may infect public images with malware to create havoc, gain control, steal information, or even mine cryptocurrencies on your infrastructure. You’ll need to make sure you are not embedding malware into your containers.
- Your Native Code: There are two classes of threats you’ll need to address in your own code:
- Vulnerabilities: Your code may also have vulnerabilities that enable a third-party to exfiltrate information, gain control, or create havoc. You’ll need to evaluate your code for potential vulnerabilities before it goes into production.
- Secrets: Kubernetes provides secure mechanisms for separating secrets such as usernames, passwords, AWS keys, etc. and placing them in a secure repository. However, developers may not use this mechanism in practice, so you'll need to check that secrets are not exposed by being placed in the container (a naive example of such a check appears after this list).
- Container Configuration: You need to configure your container to limit access and rights and to limit resource utilization; a brief sketch after this list illustrates both.
- Limit Access & Rights: You want to minimize risks such as root access and excess privileges. You also want to configure your container to prevent privilege escalation at run-time.
- Limit Resource Utilization: You want to set scaling constraints as well as constraints on how much CPU and memory each container can use so a rogue process does not become a very costly mistake.
- Security Must Be Policy-Driven: With distributed teams and automated builds, the checks above only scale if they are expressed as policies that your tooling evaluates automatically on every build, rather than relying on manual review.
- The Hand-Off to Run-Time: Admission controllers provide the bridge from build to run-time. They can require that containers are signed and identified, and that they have passed analysis and policy approval, before admitting them into the cluster.
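As a naive example of the secrets check mentioned above, the sketch below scans the working directory for strings that look like AWS access key IDs and fails the build if any are found. Treat it as an illustration only; real secret scanners cover far more credential formats and sources.

```python
# Naive illustration of a build-time secrets check: scan source and config files
# for patterns that look like credentials (here, AWS access key IDs) before the
# image is built. Real secret scanners cover many more patterns and formats.
import re
import sys
from pathlib import Path

# AWS access key IDs start with "AKIA" followed by 16 uppercase letters/digits.
AWS_KEY_PATTERN = re.compile(r"AKIA[0-9A-Z]{16}")

findings = []
for path in Path(".").rglob("*"):
    if not path.is_file():
        continue
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        continue
    for match in AWS_KEY_PATTERN.finditer(text):
        findings.append((path, match.group()))

if findings:
    for path, key in findings:
        print(f"Possible AWS key in {path}: {key[:8]}...")  # don't print the full key
    sys.exit(1)  # fail the build so the secret never lands in a container image

print("No obvious secrets found.")
```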
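And as a minimal sketch of the configuration points above, here is a pod spec, expressed as a Python dictionary of standard Kubernetes fields, that blocks root access and privilege escalation and caps CPU and memory. The names, image, and numbers are hypothetical starting points, not recommended settings.

```python
# Minimal sketch of a hardened pod spec using standard Kubernetes fields:
# no root user, no privilege escalation, a read-only root filesystem, and
# capped CPU/memory. The names, image, and values are hypothetical.
import json

hardened_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "web-frontend-hardened"},
    "spec": {
        "containers": [{
            "name": "web",
            "image": "registry.example.com/web-frontend:1.0",
            # Limit access & rights: block root and run-time privilege escalation.
            "securityContext": {
                "runAsNonRoot": True,
                "allowPrivilegeEscalation": False,
                "readOnlyRootFilesystem": True,
                "capabilities": {"drop": ["ALL"]},
            },
            # Limit resource utilization: a rogue process cannot exceed these caps.
            "resources": {
                "requests": {"cpu": "100m", "memory": "128Mi"},
                "limits": {"cpu": "500m", "memory": "256Mi"},
            },
        }],
    },
}

print(json.dumps(hardened_pod, indent=2))  # review, or apply with kubectl or the API client
```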
Container Security: The Production or Run-Time Side
As you move your container images into production, you encounter another set of challenges. There are three primary ways to secure your run-time environment: (1) Make sure the containers don’t change over time; (2) Monitor network traffic for signs of threats; (3) Continuously respond, adapt, and improve your security position based on what you learn in production. As with securing the build process, securing the run-time environment involves a massive amount of ever-evolving data and metadata, so you’ll need a combination of tools and processes to assess and automate this process.
More detail about run-time container security:
- Secure Containers During Run-Time: You've done your best to ensure that you aren't introducing threats into your containers; now you need to make sure they stay that way. Since security is an evolving art, you also need to monitor the activity of the container.
- Container Drift: While containers are immutable, they can drift from their original state. Yes, sort of an oxymoron, but containers can escalate privilege or begin accessing additional data that increases their risk profile. They need to be monitored to prevent this.
- Evolving Threats: You analyzed your containers against known threats before putting them into production. But now there are new threats, and attackers will scan for these weaknesses and exploit them. So you need to continuously analyze containers in production against new threats and patch them before attackers can exploit them.
- Monitor Behavior: Changes in container behavior can provide an early warning that something is amiss. Some run-time security tools will establish a pattern for container behavior and then watch for behavior that deviates from this pattern.
- Network Security: Containers communicate with each other, with internal resources and with the outside world via the network. As with all security, you need a combination of prevention and remediation.
- Internal Communications: Communication between containers inside a Kubernetes cluster is called East/West traffic. It can be observed, secured, and managed using a variety of tools, including Istio. Visualization tools on top of Istio can simplify monitoring, while other tools model these communications in order to identify changes that might weaken your security position or indicate a breach. Network segmentation is one way of enforcing that containers only communicate with the containers appropriate to them; it also limits the impact of any threat by limiting its reach, or blast radius (a brief sketch after this list shows one such segmentation policy).
- External Communications: Communications in and out of the Kubernetes cluster are known as North/South traffic. This can involve system calls, accessing files, mounting storage, and traffic both inside and outside of the corporate network. There are a variety of ways to secure North/South traffic. For example, you can lock down any unnecessary and unused ports while keeping open ports in the appropriate range; Docker containers, for instance, should only use ports in the 49153–65535 range. You can also block certain types of traffic, as well as traffic to IP addresses that don't have a good reputation.
- Incident Management: The old truism applies, ‘An ounce of prevention is worth a pound of cure’. However, you need to be ready to respond to any security incident. Early detection and mitigation are key. Some run-time security tools use machine learning for early detection and isolation of any anomaly. These can learn over time so they improve their effectiveness. They can also rank the potential risks and bring those to your attention so that you can monitor or mitigate them. And in worst-case scenarios, they run forensics on incidents so you can figure out what happened and apply those learnings to other containers or clusters.
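As a concrete example of the network segmentation described above, the sketch below builds a NetworkPolicy that only admits traffic to pods labeled as the backend from pods labeled as the frontend; all other East/West traffic to those pods is denied. It assumes the official Kubernetes Python client and a reachable cluster, and the label names are illustrative.

```python
# Minimal network-segmentation sketch: a NetworkPolicy that only allows pods
# labeled app=web-frontend to reach pods labeled app=backend in one namespace.
# All other ingress to the backend pods is denied. Label names are hypothetical.
from kubernetes import client, config

policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "backend-allow-frontend-only"},
    "spec": {
        # The pods this policy protects.
        "podSelector": {"matchLabels": {"app": "backend"}},
        "policyTypes": ["Ingress"],
        # The only East/West traffic admitted to those pods.
        "ingress": [{
            "from": [{"podSelector": {"matchLabels": {"app": "web-frontend"}}}],
        }],
    },
}

config.load_kube_config()  # connect using your local kubeconfig
client.NetworkingV1Api().create_namespaced_network_policy(namespace="default", body=policy)
```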
Container Security: Securing the Host System
Maintaining a secure posture for your Kubernetes cluster not only involves securing the build and run-time processes, but you must also secure the host system itself. Kubernetes relies on an underlying operating system that can expose threats. Standard best practices require that you maintain your operating system by using the latest version and applying security patches in a timely manner. But there are additional steps you can take.
- Secure Linux Distributions. Certain Linux distros provide additional security. These include AppArmor access control (Ubuntu) and Security-Enhanced Linux, or SELinux (Red Hat). These solutions provide generalized security enhancements; the next level of security is described below.
- Container-Specific Operating Systems. These operating systems reduce their threat profile by removing capabilities in Linux that are not required by container orchestration. They include CoreOS, Project Atomic, Ubuntu Snappy Core, and VMware Photon.
- Managed Kubernetes. Operating Kubernetes at an optimal security posture requires a combination of patches, tools, oversight, and processes. Public clouds package all of this up into very popular services, including AWS Fargate, Azure Container Instances, and Google Cloud Run.
Why Traditional Application Security Doesn’t Work for Containers
Containers have unique attributes relative to traditional application security (AppSec). The good news is that pressure to handle containers is growing from customers and analysts, so traditional AppSec companies are looking for ways to handle container security. Here are a few reasons why traditional AppSec cannot address containers:
- Blind to Containers. Many AppSec tools authenticate over SSH to Linux systems in order to scan them. Containers, however, don't run an SSH daemon, so these scanners cannot reach them.
- Host Operating System Only. Traditional AppSec tools will scan the operating system, but they only see the host operating system; they don't see the OS components inside the container, so they are blind to those as well.
- Configuration Management. Containers carry configuration that is unique to containers, and most AppSec tools cannot make sense of it, assuming they can scan the container at all (see above).
- Secrets. Containers leverage a separate store for secrets, so the tool needs to understand this and scan for secrets inside the container.
Application security vendors will certainly extend their product portfolios to handle containers, but you shouldn't assume that their legacy tools handle containers today; be sure to ask specifically about container support.
Container Security: Conclusion
By adopting containers, you benefit from certain inherent security advantages, but you also encounter a whole new set of threats unique to containers that traditional tools cannot address. Securing containers means covering the full lifecycle, from build through run-time to the host system, with a combination of processes, policies, and automated, container-aware tools.
Container Security Information & Resources
- Container security best practices (Stackrox)
- 5 Keys to DevSecOps Success (eBook)
Container Security Tools by Category
If we are missing any tools here, please let us know and we’ll update it.
Development – Build-side
- Vulnerability Scanning Only: Trivy, Clair, Anchore, McAfee MVision, SaltStack SecOps, Sysdig
- SCA Only: Whitesource, FlexNet Code Insight, FOSSA, JFrog, Synopsys
- Malware Only: Symantec Cloud Workload Protection
- Vulnerability + SCA: TrendMicro Cloud One, Snyk, Tenable Container Security, Tripwire IP360
- Comprehensive Container Analysis (Vulnerability, SCA, License Analysis, Malware, Secrets, Configuration, & Policy Evaluation): Carbonetes
Run-Time
- Container Identity Verification: Google Binary Authorization, Aporeto, Portshift
- Monitoring/Visibility: Threatstack, Sysdig, Qualys/Layered Insight
- Mesh/Network Management: Istio
- Zero-Day Threat Detection: Capsule8, K2 Cyber Security
- Intrusion Detection: Alert Logic
- Network Threat Detection: Illumio, NeuVector
- Intrusion Protection: Deepfence.io
- Abnormal Behavior Detection: Falco Operator
- Compliance Monitoring: Aptible Comply, Tigera Secured, Tufin Orca
- Comprehensive Container Security: Stackrox, Aqua, Palo Alto Networks/Prisma (Twistlock)

Fig 4. Container-Specific Operating Systems: CoreOS, Project Atomic, Ubuntu Snappy Core, VMware Photon

Fig 5. Kubernetes-as-a-Service (Managed): Azure Container Instances, AWS Fargate, Google Cloud Run, Oracle Container Engine for Kubernetes, Red Hat OpenShift Dedicated.