Container Security: Comprehensive Analysis vs. Specialty Analyzers

Written by Mike Hogan

July 13, 2021

Securing a container involves analyzing for a variety of potential risks over a variety of components inside, or associated with, the container. There are two approaches: comprehensive analysis—analyzing for all risks—or assembling a collection of specialty analyzers. Another way of phrasing this decision is whether the whole (comprehensive) is greater than the sum of the parts (specialty tools). This article attempts to answer that question.

What Are We Securing?

Containers are composed of code, third-party libraries/tools, operating system components, and artifacts. They are combined with infrastructure-as-code (IaC) when deployed as pods in Kubernetes. All these components must be secured under the premise that you are only as secure as your weakest link.

To fully secure your containers you need to analyze for the following:

  • Vulnerabilities: detected via techniques such as SAST, DAST, and IAST
  • Software Composition Analysis (SCA): Analyzing open source tools/apps for vulnerabilities, licenses and dependencies
  • Infrastructure-as-Code (IaC): e.g. YAML, Terraform, and Helm Charts
  • Malware
  • Secrets: e.g. passwords, PII, AWS keys
  • Bill of Materials (BoM): A listing of everything in each container
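
As a concrete illustration of one item in this list, secrets detection is often implemented as pattern matching over file contents in the container image. The sketch below is a minimal, assumed example; real scanners use far larger rulesets plus entropy analysis, and these two regexes are simplified stand-ins:

```python
import re

# Simplified patterns; production scanners carry hundreds of rules.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "password_assignment": re.compile(r"(?i)password\s*=\s*\S+"),
}

def scan_text(path, text):
    """Return (path, rule, matched_string) for every secret-like string found."""
    findings = []
    for rule, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((path, rule, match.group(0)))
    return findings
```

Running this over every file extracted from an image layer yields the raw findings that a policy engine would then score.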

Comprehensive vs. Specialty Tools

Companies can choose between a single tool that addresses all of the above, or they can implement a collection of specialty tools. Comparing comprehensive vs. specialty analysis reminds me of the old joke about doctors: generalists vs. specialists. They say that specialists learn more and more about less and less until they know everything about nothing, while generalists learn less and less about more and more until they know nothing about everything. While amusing, this joke reflects the fact that humans have a scarcity of time, so they are forced to choose between generalization and specialization.

In technology, time is measured in compute power, which isn't the constraining factor it is for humans. In the realm of container security, both specialty and comprehensive security tools leverage the same data feeds (e.g. feeds of CVEs, malware, IaC risks, etc.). This levels the playing field: data quality does not favor one approach over the other; it is a push. Let's dig a bit deeper into this question.

Data Synergy – Mo’ Data Mo’ Better

Viewing security risks in a vacuum results in poor risk assessment and prioritization, because risk factors can compound or reduce the overall risk. For example, if Container A has 3 critical vulnerabilities while Container B has 10, which should be remediated first? In a vacuum, you would say Container B. However, if I told you that Container A is running in a customer-facing app, running privileged, exposing a public port, and labeled UI, while Container B is running in a little-used internal-facing app with no ports exposed, you would clearly prioritize Container A. Proper risk assessment and prioritization requires blending a variety of risk factors, which can only be done by a comprehensive security analysis tool.
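
The compounding effect of context can be sketched as a simple weighted score. The multipliers below are illustrative assumptions, not a standard formula; real tools tune such weights empirically:

```python
def risk_score(critical_vulns, customer_facing, privileged, public_port):
    """Blend raw vulnerability count with deployment context into one priority score."""
    score = critical_vulns
    if customer_facing:
        score *= 3  # public exposure compounds every vulnerability
    if privileged:
        score *= 2  # privileged containers amplify blast radius
    if public_port:
        score *= 2  # an open port is a direct attack path
    return score

container_a = risk_score(3, customer_facing=True, privileged=True, public_port=True)    # 36
container_b = risk_score(10, customer_facing=False, privileged=False, public_port=False)  # 10
```

Despite having fewer critical vulnerabilities, Container A outranks Container B once context is factored in, which is exactly the blending argument above.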

In fact, correlating various potential risks is so critical that companies are taking a holistic approach to tracking and correlating everything about containers in the software development lifecycle (SDLC); this is called asset management. It can be used to define risk prioritization, instantly identify “at risk” containers, explore versioning risks, and much more. Since asset management correlates all risks, you need a way of aggregating all of that risk data in one tool, which is considerably easier with a comprehensive security tool.

Overlapping & Conflicting Data

While the above section addresses data gaps, you can also have too much data that overlaps and conflicts. Because some tools analyze more than one of the issues above, multiple tools can find the same issue but assign it different threat levels, creating confusion that slows the development process.
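
One common mitigation is deduplicating overlapping findings under a single policy, for example by keeping the highest severity reported for a given issue. This is a simplified sketch under that assumed policy; real policy engines make the resolution rule configurable:

```python
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def dedupe(findings):
    """Keep one record per (container, issue), preferring the highest severity."""
    merged = {}
    for f in findings:
        key = (f["container"], f["issue"])
        if key not in merged or SEVERITY_RANK[f["severity"]] > SEVERITY_RANK[merged[key]["severity"]]:
            merged[key] = f
    return list(merged.values())
```

With one resolution rule, developers see a single severity per issue instead of arguing over which tool to believe.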

Managing multiple policies is also problematic. Security teams shouldn't have to learn multiple policy engines and maintain multiple policies across those tools, especially when these policies, which may rely on conflicting threat data, produce conflicting recommendations. It makes compliance a nightmare.

Impact on Speed & Agility

In an age when corporate competitiveness is often driven by rapid evolution of the software used by employees and customers, speed and agility are critical. Security must operate at development speed. If developers are juggling too many security tools, they slow down. If developers rely on a bug tracking system like Jira and multiple security tools each create Jira bugs, a developer might have to deal with 10 different Jira bugs for the same container, when a single consolidated bug would have accelerated the repair and reporting processes.
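
The consolidation idea can be sketched as grouping findings by container before filing anything, so each container yields one ticket body instead of one ticket per tool. The finding format here is a hypothetical assumption, and actually filing the ticket (e.g. via the Jira API) is left out:

```python
from collections import defaultdict

def consolidate(findings):
    """Group per-tool findings by container; one ticket body per container."""
    grouped = defaultdict(list)
    for finding in findings:
        grouped[finding["container"]].append(finding)
    return {
        container: "\n".join(f"[{f['tool']}] {f['issue']}" for f in items)
        for container, items in grouped.items()
    }
```

Three tools reporting against two containers produce two tickets rather than three, and the developer sees every issue for a container in one place.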

Conclusions

Comprehensive container security, with its one-stop-shop approach, is much easier to adopt and operate than a collection of specialty tools. The convenience, efficiency, and ease of use are far superior. We find analogues to this in many other areas: Walmart/Target superstores, Amazon's world's-largest inventory, multi-function gyms, Microsoft Office, GSuite, and many more. In the world of security, we see Prisma Cloud, Snyk, and of course Carbonetes taking a one-stop-shop approach. We believe that consolidation is the future of container security. Let us know your thoughts.
