Kubernetes and Microservices Security

To understand the current and future state of Kubernetes (K8s) in the enterprise, we gathered insights from IT executives at 22 companies. We asked, "How does K8s help you secure containers?" Here’s what we learned.

Microservices, Security, and Kubernetes (K8s)

RBAC

  • K8s helps with authorization and authentication via workload identity. Role-based access control (RBAC) is provided out of the box, and a service mesh secures communication between microservices.
  • K8s solves more problems than it creates. RBAC enforces relationships between resources, and constructs like pod security policies control the level of access pods have to each other and how many resources they can consume. It is complicated, but K8s provides the tools to build a more stable infrastructure.
  • RBAC mechanisms give us a good security profile and posture. K8s provides the access controls and the mechanisms to plug in other tools that secure containers.
  • K8s provides a pluggable infrastructure so you can customize it to the security demands of your organization. The API was purpose-built for extensibility to ensure, for example, that you can scan workloads before they go into production clusters. You can apply RBAC rules for who can access your environments, and you can use webhooks for all kinds of sophisticated desired-state security and compliance policies that mitigate operational, security, and compliance risk (see the sketches after this list).
  • The advantage of K8s is in its open-source ecosystem, which offers several security tools like CIS Benchmarks, Network Policy, Istio, Grafeas, Clair, etc. that can help you enforce security. K8s also has comprehensive support for RBAC on System and Custom resources. Container firewalls help enforce network security in K8s. Due to the increased autonomy of microservices deployed as pods in K8s, having a thorough vulnerability assessment on each service, change control enforcement on the security architecture, and strict security enforcement is critical to fending off security threats. Things like automated monitoring-auditing-alerting, OS hardening, and continuous system patching must be done.
  • The great part about the industry adopting K8s as the de facto standard for orchestration is that many talented people have collaboratively focused on building secure best practices into the core system, such as RBAC, namespaces, and admission controllers. We take the security of our own architecture and that of our customers very seriously, and because the project and supporting open-source projects release patches and new versions quickly when common vulnerabilities and exposures (CVEs) are disclosed, we can keep systems fully up to date. We have built-in support for automated K8s upgrades, and we are always rolling out new versions and patches and providing our users with the notifications and tooling to keep their systems up to date and secure.
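
To make the RBAC point above concrete, here is a minimal sketch of a Role and RoleBinding that grant read-only access to pods in a single namespace. The namespace, group name, and resource list are hypothetical placeholders; real policies should be scoped to your own teams and workloads.

```yaml
# Sketch: read-only access to pods within the "payments" namespace (hypothetical names throughout).
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: payments
  name: pod-reader
rules:
  - apiGroups: [""]               # "" = the core API group
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
# Grant the Role to a (hypothetical) developer group from your identity provider.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: payments
  name: pod-reader-binding
subjects:
  - kind: Group
    name: payments-developers     # assumed group name; map it to your SSO/OIDC groups
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Applied with kubectl apply -f, this limits that group to viewing pods and their logs in one namespace; anything broader, such as creating deployments, reading Secrets, or touching other namespaces, stays denied by default.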
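
The mentions of admission controllers and webhooks above can likewise be sketched with a ValidatingWebhookConfiguration that routes every pod creation to an in-cluster policy-checking service, for example one that rejects unscanned images. The service name, namespace, and path are hypothetical, and the caBundle is elided.

```yaml
# Sketch: send pod CREATE requests to a (hypothetical) policy-checking service before admission.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: image-policy.example.com
webhooks:
  - name: image-policy.example.com          # webhook names must be fully qualified
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail                     # block the pod if the checker is unreachable
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        namespace: security                 # assumed namespace for the webhook service
        name: image-policy-webhook          # assumed service name
        path: /validate
      caBundle: "<base64-encoded CA cert>"  # elided; required so the API server trusts the webhook's TLS cert
```

The webhook service itself would implement whatever check the organization wants, such as calling an image scanner and denying admission when critical CVEs are found.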

Reduced Exposure

  • You have services deployed individually in separate containers. Even if someone gets into a container, they’re only getting into one microservice, rather than being able to get into an entire server infrastructure. Integration with Istio provides virtualization and security on the access side.
  • Since the beginning, containers have been perceived as a potential security threat because you are running these entities on the same machine with low isolation, so there is a perceived risk of data leaking or of an attacker moving from one container to another. I see the security benefits of containers and K8s outweighing the risks because containers tend to be much smaller than a VM: a VM running NGINX runs a full OS with many processes and services, while a container has far less exposure and attack surface. You can keep containers clean and small for a minimal attack surface. K8s has a lot of security functionality embedded in the platform and turned on by default, with more that you can enable. The microservices and container model is based on immutable infrastructure: you offer limited access to the container running the application, so you are able to lock down containers in a better way and be in charge of what the application does. We are now using ML to understand what a service or set of containers is doing so we can lock down the service.
  • You need to look at security differently with containers to be more secure. Containers allow you to reduce the attack surface by running with fewer privileges and by limiting access to the production environment. K8s has a number of security mechanisms built in, like authentication and authorization, to control access to resources, but you need to learn to configure them and be careful about setting the right privileges (see the pod spec sketch after this list).
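
As a minimal illustration of the "fewer privileges" and "locked-down container" points above, the following pod spec drops Linux capabilities, runs as a non-root user, and uses a read-only root filesystem. The image name, pod name, and user ID are placeholders, not recommendations from the respondents.

```yaml
# Sketch: a deliberately locked-down pod. Names, image, and UID are hypothetical.
apiVersion: v1
kind: Pod
metadata:
  name: payments-api
spec:
  automountServiceAccountToken: false       # the app does not need to call the K8s API
  containers:
    - name: app
      image: registry.example.com/payments-api:1.4.2   # assumed image
      securityContext:
        runAsNonRoot: true
        runAsUser: 10001                     # arbitrary non-root UID
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true         # supports the "immutable infrastructure" model
        capabilities:
          drop: ["ALL"]                      # drop every Linux capability the app does not need
      ports:
        - containerPort: 8080
```

Even if the application inside this container is compromised, the attacker lands in a non-root, capability-stripped, read-only environment rather than on a full server.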

K8s Security Policies

  • My team helps with full, automatic discovery of applications spread across multiple data centers and clouds, and with creating clean infrastructure policies and runtime discovery. Dynamic policies help lock down containers and the apps built on top of them.
  • You’re only as secure as you design yourself to be in the first place. Automating where things go using rules, and stabilizing the environment, eliminates a lot of the human errors that occur in a manual configuration process. K8s standardizes the way we deploy containers.
  • Namespaces, pod security policies, and network-layer firewalling, where we just define a policy using Calico, plus kernel-level security that’s easy for us since we’re running on top of K8s. Kaniko, which came out of Google, lets us run container image builds inside a container without needing a privileged Docker daemon.
  • 1) K8s helps us with features like namespaces to segregate networks from a team perspective. 2) Network policies: declaring that this pod cannot talk to that database helps prevent misuse of software (a sample policy follows this list). 3) There are benefits from K8s protecting individual containers, which mitigates the problem of escaping from containers and helps you stay isolated.
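
To illustrate the "this pod cannot talk to that database" idea above, here is a minimal NetworkPolicy sketch that lets only pods labeled app=billing reach the database pods on their PostgreSQL port; every label, namespace, and port is a hypothetical placeholder, and a network plugin that enforces NetworkPolicy (such as Calico, mentioned earlier) must be installed for it to take effect.

```yaml
# Sketch: only "billing" pods may reach the database pods; all other ingress is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-billing-only
  namespace: payments               # hypothetical namespace
spec:
  podSelector:
    matchLabels:
      app: postgres                 # the database pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: billing          # only this workload may connect
      ports:
        - protocol: TCP
          port: 5432
```

Because selecting the database pods with any ingress NetworkPolicy makes all other inbound traffic to them default-deny, an unrelated compromised pod in the namespace can no longer open a connection to the database.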

Other

  • It’s not built in per se; that’s why a number of security tools are coming up, things like scanning Docker images, since K8s does not scan them itself. A number of security companies are coming out with continuous scanning of Docker images before they are deployed, looking for security vulnerabilities during the SDLC. DevSecOps moves security checking and scanning earlier in the development process, and tools are popping up to do that.
  • If you enable the security capabilities provided, it’s an advantage. There are capabilities in K8s that control whether you have the ability to spin up a container, but they have to be set up correctly. You need to learn to use the capabilities and think about the security of every container.
  • Security is a very important topic. One area is open source: the level of involvement and the number of developers involved can help drive greater security in the environment, with cloud-native security spanning both the code and the cluster. For customers leveraging K8s in the cloud, it changes the investment you have to make, because you inherit the security capabilities of the cloud provider and dramatically lower costs. K8s has API-level automation built in.
  • Our container images are created using the Linux package security update mechanism to ensure the images include the latest security patches. Further, our container image is published to the Red Hat Container Catalog, which requires these security measures to be applied as part of the publishing process. In addition, domain and database administrative commands are authenticated using TLS certificate authentication and LDAP, and domain metadata, application SQL commands, and user data communications are all protected using the AES-256-CTR encryption cipher.
  • K8s provides only minimal structures for security, and it is largely the responsibility of implementers to provide security. You can build quite a lot on top of the Secrets API, such as implementing TLS in your applications or using it to store password objects (a small example follows this list).
  • K8s-orchestrated containerized environments and microservices present a large attack surface. The highly dynamic container-to-container communications internal to these environments offer an opportune space for attacks to grow and escalate if they aren’t detected and thwarted. At the same time, K8s itself is drawing attackers’ attention: just last year a critical vulnerability exposing the K8s API server presented the first major known weakness in K8s security, but certainly not the last. To secure K8s environments, enterprises must introduce effective container network security and host security, which must include the visibility to closely monitor container traffic. Enterprise environments must be protected along each of the many vectors through which attackers may attempt entry. To do so, developers should implement security strategies featuring layer-7 inspection (capable of identifying any possible application-layer issues). At the same time, the rise of production container environments that handle personally identifiable information (PII) has made data loss prevention a key security concern. This is especially true considering industry and governmental regulatory compliance requirements dictating how sensitive data must be handled and protected.
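
Building on the Secrets API point above, here is a minimal sketch of storing a database password as a Secret and exposing it to a pod as an environment variable; the secret name, key, image, and value are made-up placeholders. Keep in mind that Secrets are only base64-encoded by default, so encryption at rest and RBAC restrictions on who can read them still matter.

```yaml
# Sketch: a password stored via the Secrets API and consumed by a container (hypothetical names).
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  password: "change-me"            # placeholder value; never commit real secrets to Git
---
apiVersion: v1
kind: Pod
metadata:
  name: billing-worker
spec:
  containers:
    - name: worker
      image: registry.example.com/billing-worker:2.0.1   # assumed image
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: password
```

The same mechanism is commonly used for TLS key pairs via the kubernetes.io/tls Secret type, which is one way teams "build quite a lot on top of the Secrets API."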

Here’s who shared their insights: