Look around and you will see that almost everyone is packaging their software in Docker containers or at the very least thinking about it.
Stating that containers changed the way we approach software development might be an understatement. The days when developers had to package an application separately for every platform are, thankfully, long gone, and that happened because of the lightweight, portable and easily scalable nature of containers.
THE PROBLEM WITH CONTAINERS
Even though containers made a lot of things easier and simply better, nothing is ever perfect, is it? To see the problem, we need to get our basics down. A common definition of a container online is this: “an instance of an executable package that includes everything needed to run an application: code, configuration files, runtime, libraries and packages, environment variables, etc.”
A container is launched based on something called an image, which consists of a series of layers. For Docker, each layer represents an instruction in a text file called a Dockerfile. A parent image is a base on which your custom image is built. Most Dockerfiles start from a parent image.
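To make the layering concrete, here is a minimal, hypothetical Dockerfile (the parent image and paths are purely illustrative). Each instruction after `FROM` adds one layer on top of the parent image:

```dockerfile
# Parent image: every layer below is stacked on top of it
FROM python:3.11-slim

# Each instruction produces a new layer in the resulting image
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
COPY . /app
```

Note that the parent image itself is built from its own stack of layers, which is exactly why the full contents of an image are harder to account for than the few lines of your own Dockerfile suggest.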
When talking about container images, we often focus on one particular piece of software that we are interested in. An AWS blog post notes, though, that an image includes a whole collection of software playing a supporting role to the featured component. Even a developer who regularly works with a particular image may have only a superficial understanding of everything in it.
It’s time-consuming to track all the libraries and packages included in an image once it’s built. Moreover, developers casually pull images from public repositories where it is impossible to know who built an image, what they used to build it and what exactly is included in it. But when you ship your application along with everything that is in the container, you are responsible for security. If there is a security breach, it is your reputation that could be destroyed.
In short, it is difficult to track what is going on under the hood of a container image. Image scanners have emerged to address this issue, giving users varying degrees of insight into Docker container images. Most of these tools execute the same set of actions:
- Perform a binary scan of the Docker image, deconstruct it into layers and put together a detailed bill of materials of its contents.
- Take a snapshot (index) of the OS and packages.
- Compare this bill of materials against a database of known vulnerabilities and report any matches.
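Stripped of the details, the matching step in the list above boils down to a lookup. Here is a rough sketch, not any particular scanner's implementation, with a made-up bill of materials and a tiny database containing two real CVEs:

```python
# Hypothetical sketch of a scanner's matching step: compare an image's
# bill of materials against a database of known vulnerabilities.

# Bill of materials extracted from the image layers: package -> version
bill_of_materials = {
    "openssl": "1.0.1f",
    "bash": "4.3",
    "curl": "7.58.0",
}

# Simplified vulnerability database: package -> (vulnerable version, CVE id)
vulnerability_db = {
    "openssl": [("1.0.1f", "CVE-2014-0160")],  # Heartbleed
    "bash": [("4.3", "CVE-2014-6271")],        # Shellshock
}

def scan(bom, db):
    """Report every (package, version, CVE) match found in the database."""
    findings = []
    for package, version in bom.items():
        for vulnerable_version, cve in db.get(package, []):
            if version == vulnerable_version:
                findings.append((package, version, cve))
    return findings

# curl has no entry in the database, so only two findings come back
print(scan(bill_of_materials, vulnerability_db))
# → [('openssl', '1.0.1f', 'CVE-2014-0160'), ('bash', '4.3', 'CVE-2014-6271')]
```

Real scanners, of course, match version *ranges* rather than exact strings and pull their databases from live CVE feeds, which is where the differences between tools start to matter.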
Even though these tools are similar, they are not the same, and when choosing one you need to consider how effective it is:
- How deep can the scan go? In other words, the scanner’s ability to see inside the image layers and their contents (packages and files).
- How up-to-date the vulnerability lists are.
- How the scan results are presented, and in what form or format.
- Capabilities to reduce noisy data (duplication).
5 TOOLS TO CONSIDER
Clair – a tool from the well-known and loved CoreOS. It is a scanning engine for static analysis of vulnerabilities in containers or clusters of containers (such as Kubernetes). Static means that the actual container image doesn’t have to be executed, so you can catch security threats before they enter your system.
Clair maintains a comprehensive vulnerability database built from configured CVE resources. It exposes APIs for clients to invoke and perform scans of images. A scan indexes the features present in an image and stores them in the database. Clients can then use the Clair API to query the database for vulnerabilities in a particular image.
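Assuming a Clair instance listening on localhost:6060 (its default API port) and a layer already submitted for analysis, a client query against Clair's v1 API might look like the sketch below; the layer name is a placeholder, and the code only builds the request rather than sending it, since that requires a running server:

```python
import urllib.request

# Hypothetical sketch: querying Clair's v1 API for a layer's indexed
# features and matching vulnerabilities. The host, port and layer name
# are illustrative assumptions, not values from a real deployment.
CLAIR_API = "http://localhost:6060/v1"
layer_name = "example-layer-name"  # placeholder for a submitted layer

# Setting both query flags asks Clair to include the layer's features
# and any vulnerabilities matched against them.
request = urllib.request.Request(
    f"{CLAIR_API}/layers/{layer_name}?features&vulnerabilities",
    method="GET",
)

# urllib.request.urlopen(request) would perform the call against a live Clair
print(request.full_url)
# → http://localhost:6060/v1/layers/example-layer-name?features&vulnerabilities
```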
Anchore – a well-maintained and powerful automated scanning and policy-enforcement engine that can be integrated into CI/CD pipelines to scan Docker images. Users can create whitelists and blacklists and enforce rules.
It is available as a free online SaaS navigator for scanning public repositories, and as an open-source engine for on-prem scans. The on-prem engine can be wired into your CI/CD through the CLI or a REST API to automatically fail builds that don’t pass defined policies.
Anchore Engine ultimately provides a policy evaluation result for each image: pass/fail against policies defined by the user. Even though it comes with some predefined security and compliance policies, functions and decision gates, you can also write your own analysis modules and reports.
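The pass/fail idea can be sketched in a few lines. This is a toy model of severity-threshold gating, not Anchore's actual policy language, which is far richer (gates, triggers, whitelists, actions):

```python
# Toy sketch of pass/fail policy evaluation over scan findings.
# Severity names and the threshold rule are illustrative assumptions.

# Findings from an image scan: (package, severity)
findings = [
    ("openssl", "Critical"),
    ("curl", "Low"),
]

# User-defined policy: fail the build on any finding at or above the threshold
SEVERITY_ORDER = ["Negligible", "Low", "Medium", "High", "Critical"]
FAIL_THRESHOLD = "High"

def evaluate(findings, threshold):
    """Return 'fail' if any finding meets or exceeds the severity threshold."""
    limit = SEVERITY_ORDER.index(threshold)
    for _package, severity in findings:
        if SEVERITY_ORDER.index(severity) >= limit:
            return "fail"
    return "pass"

print(evaluate(findings, FAIL_THRESHOLD))  # the Critical finding → "fail"
```

In a CI/CD pipeline, a "fail" result is what stops the build before the image ever reaches production.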
Dagda – a tool for static analysis of known vulnerabilities in Docker images and containers. Dagda retrieves information about the software installed in your Docker images, such as OS packages and programming-language dependencies, and checks whether each product and version is free of vulnerabilities against information previously stored in a MongoDB instance. This database includes known vulnerabilities as CVEs (Common Vulnerabilities and Exposures), BIDs (Bugtraq IDs), RHSAs (Red Hat Security Advisories) and RHBAs (Red Hat Bug Advisories), as well as known exploits from the Offensive Security database.
On top of that, it uses ClamAV to detect viruses and malware. I also want to note that all reports from scanning an image or container are stored in MongoDB, where the user can access them.
Docker Bench for Security – the Center for Internet Security came up with a solid step-by-step guide on how to secure Docker. As a result, the Docker team released a tool (a shell script) that runs as a small container and checks for these best practices around deploying Docker containers in production.
OpenSCAP – a full ecosystem of tools that assist with measuring and enforcing a security baseline. It has a specific container-oriented tool, oscap-docker, which performs CVE scans of containers and checks them against predefined policies.
OSCAP Base is the NIST-certified base command-line scanner. OSCAP Workbench is a graphical user interface that presents the scanner’s results and aims to be intuitive and user-friendly.
These tools appeared because Docker’s popularity grew so fast. Only two years ago, it would have been hard to trust them, as they were only starting to pop up. Today, they are more mature, shaped by Docker containers and the challenges that came with the rise of this technology.
Next week, I will go through other tools and scanners that are more OSS compliance-oriented.
That’s it for today. Stay safe and let’s chat on the forum.