Five Security Best Practices for Kubernetes Deployments

The use of containers continues to rise in popularity in enterprise environments, increasing the need for a means to manage and orchestrate them. There’s no dispute that Kubernetes (K8s) has emerged as the market leader in container orchestration for cloud-native environments. 

Since Kubernetes plays a critical role in controlling who can do what with containerized workloads, its security model should be well understood and actively managed. It is therefore essential to use the right deployment architecture and security best practices for all deployments. 

Kubernetes and Microservices Security

To understand the current and future state of Kubernetes (K8s) in the enterprise, we gathered insights from IT executives at 22 companies. We asked, "How does K8s help you secure containers?" Here’s what we learned.


RBAC

  • K8s helps with authorization and authentication via workload identity. Role-based access control (RBAC) is provided out of the box. A service mesh secures communication between microservices. 
  • K8s solves more problems than it creates. RBAC enforces relationships between resources, such as pod security policies, to control the level of access pods have to each other and how many resources pods can consume. It’s complicated, but K8s provides the tools to build a more stable infrastructure.
  • RBAC mechanisms give you a good security profile and posture. K8s provides the access controls and mechanisms to plug in other tools that secure containers.
  • K8s provides a pluggable infrastructure so you can customize it to the security demands of your organization. The API was purpose-built for extensibility to ensure, for example, that you can scan workloads before they go into production clusters. You can apply RBAC rules for who can access your environments, and you can use webhooks for all kinds of sophisticated desired-state security and compliance policies that mitigate operational, security, and compliance risk.
  • The advantage of K8s is in its open-source ecosystem, which offers several security tools like CIS Benchmarks, Network Policy, Istio, Grafeas, Clair, etc. that can help you enforce security. K8s also has comprehensive support for RBAC on System and Custom resources. Container firewalls help enforce network security to K8s. Due to the increased autonomy of microservices deployed as pods in K8s, having a thorough vulnerability assessment on each service, change control enforcement on the security architecture, and strict security enforcement is critical to fending off security threats. Things like automated monitoring-auditing-alerting, OS hardening, and continuous system patching must be done. 
  • The great part about the industry adopting K8s as the de facto standard for orchestration means that many talented people have collaboratively focused on building secure best practices into the core system, such as RBAC, namespaces, and admission controllers. We take the security of our own architecture and that of our customers very seriously, and the project and other open-source supporting projects releasing patches and new versions quickly in the event of common vulnerabilities and exposures (CVE) makes it possible for us to always ensure we are keeping systems fully up to date. We have built-in support for automated K8s upgrades. We are always rolling out new versions and patches and providing our users with the notifications and tooling to keep their systems up to date and secure.
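As a minimal sketch of the out-of-the-box RBAC support mentioned above, a Role and RoleBinding can grant a service account read-only access to pods in a single namespace. All names here (the `dev` namespace, the `ci-runner` account) are illustrative assumptions:

```yaml
# Illustrative Role granting read-only access to pods in the "dev" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
- apiGroups: [""]            # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# Bind the Role to a hypothetical service account
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: dev
subjects:
- kind: ServiceAccount
  name: ci-runner            # assumed account name
  namespace: dev
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

Because the binding is namespaced, the account gets no visibility into any other namespace, which is the least-privilege posture the quotes above describe.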

Reduced Exposure

  • You have services deployed individually in separated containers. Even if someone gets into a container, they’re only getting into one microservice, rather than being able to get into an entire server infrastructure. Integration with Istio provides virtualization and security on the access side.
  • Since the beginning, containers have been perceived as a potential security threat because you are running these entities on the same machine with low isolation. There's a perceived risk of data leakage, moving from one container to another. I see the security benefits of containers and K8s outweighing the risks because containers tend to be much smaller than a VM: a VM running NGINX runs a full OS with many processes and servers, while containers have far less exposure and attack surface. You can keep containers clean and small for a minimal attack surface. K8s has a lot of security functionality embedded in the platform, and much of it is turned on by default. The microservices and container model is based on immutable infrastructure. You offer limited access to the container running the application. You're able to lock down containers in a better way and be in charge of what your application does. We are now using ML to understand what a service or set of containers is doing so we can lock down the service. 
  • You need to look at security differently with containers to be more secure. Containers allow you to reduce the attack surface with fewer privileges. Limit access to the production environment. K8s has a number of security mechanisms built in, like authentication and authorization, to control access to resources. You need to learn to configure them, and be careful about setting the right privileges.
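The "fewer privileges" idea above can be sketched in a pod's securityContext. The pod name and image are assumptions for illustration:

```yaml
# Illustrative least-privilege pod: no root, no privilege escalation,
# read-only filesystem, all Linux capabilities dropped
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app               # illustrative name
spec:
  containers:
  - name: app
    image: example.com/app:1.0     # assumed image
    securityContext:
      runAsNonRoot: true               # refuse to start as root
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true     # immutable-infrastructure style
      capabilities:
        drop: ["ALL"]                  # drop every Linux capability
```

A container configured this way has a far smaller blast radius if it is compromised, which is the point the quotes above make about reduced exposure.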

K8s Security Policies

  • My team helps with full and automatic application discovery spread across multiple data centers and cloud, and creating clean infrastructure policies and runtime discovery. Dynamic policies help lock down containers and apps built on top of containers.
  • You’re only as secure as you design yourself to be in the first place. By automating concepts and constructs of where things go using rules and stabilizing the environment, it eliminates a lot of human errors that occur in a manual configuration process. K8s standardizes the way we deploy containers. 
  • Namespaces, pod security policies, and network-layer firewalling, where we just define a policy using Calico, plus kernel-level security that’s easy for us since we’re running on top of K8s. Kaniko, which came out of Google, runs Docker builds inside a container. 
  • 1) K8s helps us with features like namespaces to segregate networks from a team perspective. 2) Network policies: this pod cannot talk to that database, which helps prevent misuse of software. 3) There are benefits from K8s protecting individual containers. This mitigates the problem of escaping from containers and helps you stay isolated.
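The network-policy point above ("this pod cannot talk to that database") can be sketched with a NetworkPolicy. The namespace, labels, and port here are illustrative assumptions:

```yaml
# Illustrative NetworkPolicy: only pods labeled app=api may reach the database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-api-only
  namespace: prod                # assumed namespace
spec:
  podSelector:
    matchLabels:
      app: database              # the policy applies to database pods
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: api               # only the API tier is allowed in
    ports:
    - protocol: TCP
      port: 5432                 # e.g., PostgreSQL
```

Note that NetworkPolicy is only enforced when the cluster runs a network plugin that supports it, such as Calico, mentioned above.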

Other

  • It’s not built-in per se; that’s why a number of security tools are coming up, such as scanning Docker images, which K8s does not do. A number of security companies are coming out with continuous scanning of Docker images before they are deployed, looking for security vulnerabilities during the SDLC. DevSecOps moves security checking and scanning earlier in the development process, and tools are popping up to do that.
  • If you enable the security capabilities provided, it’s an advantage. There are capabilities in K8s that control whether you have the ability to pull up a container. It has to be set up correctly. You need to learn to use the capabilities. You need to think about the security of every container.
  • Security is a very important topic. One area is open source, where the level of involvement and the number of developers involved can help drive greater security in the environment. Cloud-native security spans the code and the cluster. For customers leveraging K8s in the cloud, it changes the investment you have to make because you are inheriting the security capabilities of the cloud provider and dramatically lowering costs. K8s has API-level automation built in.
  • Our container images are created using the Linux package security update mechanism to ensure the images include the latest security patches. Further, our container image is published to the Red Hat Container Catalog which requires these security measures to be applied as part of the publishing process. In addition, domain and database administrative commands are authenticated using TLS secure certificate authentication and LDAP, as well, domain meta-data, application SQL commands, and user data communications are all protected using the AES-256-CTR encryption cipher.
  • K8s provides only minimal structures for security, and it is largely the responsibility of implementers to provide security. You can build quite a lot on top of the Secrets API, such as implementing TLS in your applications or using it to store password objects.
  • K8s-orchestrated containerized environments and microservices present a large attack surface. The highly-dynamic container-to-container communications internal to these environments offer an opportune space for attacks to grow and escalate if they aren’t detected and thwarted. At the same time, K8s itself is drawing attackers’ attention: just last year a critical vulnerability exposing the K8s API server presented the first major known weakness in K8s security, but certainly not the last. To secure K8s environments, enterprises must introduce effective container network security and host security, which must include the visibility to closely monitor container traffic. Enterprise environments must be protected along each of the many vectors through which attackers may attempt entry. To do so, developers should implement security strategies featuring layer-7 inspection (capable of identifying any possible application layer issues). At the same time, the rise of production container environments that handle personally identifiable information (PII) has made data loss prevention a key security concern. This is especially true considering industry and governmental regulatory compliance requirements dictating how sensitive data must be handled and protected.
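As a minimal sketch of the Secrets API usage mentioned above (for example, backing TLS in your applications), a `kubernetes.io/tls` Secret holds a certificate and private key. The name is illustrative and the values are truncated base64 placeholders, not real credentials:

```yaml
# Illustrative TLS Secret; values are placeholder fragments, not real material
apiVersion: v1
kind: Secret
metadata:
  name: app-tls                  # assumed name
type: kubernetes.io/tls
data:
  tls.crt: LS0tLS1CRUdJTi4uLg==   # base64-encoded PEM certificate (placeholder)
  tls.key: LS0tLS1CRUdJTi4uLg==   # base64-encoded PEM private key (placeholder)
```

Applications can then mount this Secret as a volume or reference it from an Ingress, rather than baking credentials into images.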


How to Implement Kubernetes

To understand the current and future state of Kubernetes (K8s) in the enterprise, we gathered insights from IT executives at 22 companies. We asked, "What are the most important elements of implementing K8s for orchestrating containers?" Here’s what we learned:

Security

  • Four things: 1) security; 2) you don’t have to go “all-in” on K8s (don’t use it for databases); 3) capacity planning for CPU; 4) your K8s structure will mimic your team structure. 
  • Networking, storage, security and monitoring, and management capabilities are all essential elements for implementing Kubernetes container orchestration. Businesses stand to realize tremendous benefits due to the fast pace at which both Kubernetes and container ecosystems are currently advancing. However, this pace also increases the challenge of keeping up with new advancements and functionality that are critical to success – especially in the area of security.
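The capacity-planning point above can be sketched with per-container CPU and memory requests and limits; the names and numbers are illustrative assumptions:

```yaml
# Illustrative resource settings used for capacity planning
apiVersion: v1
kind: Pod
metadata:
  name: capacity-demo            # illustrative name
spec:
  containers:
  - name: app
    image: example.com/app:1.0   # assumed image
    resources:
      requests:
        cpu: "250m"              # scheduler reserves a quarter of a core
        memory: "256Mi"
      limits:
        cpu: "500m"              # CPU is throttled above half a core
        memory: "512Mi"          # container is OOM-killed above this
```

Requests drive scheduling decisions, while limits cap runtime consumption; setting both is what makes cluster capacity planning tractable.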

Planning

  • A lot is around planning. K8s brings a lot of power and complexity, so plan ahead to prevent surprises. Learn how to set up the right environments. 
  • Future-proof your architecture and strategy for multi-cloud, platform-agnostic deployment, and whatever else comes your way. It’s easier to build decoupled and distributed applications with K8s that can run anywhere. Adopting different programming languages and frameworks also becomes easier, which allows you to use the best tool for the job when building your applications. There are new challenges as you enable more and more API communication over the network across your applications. Critical performance and security issues increase as you accelerate the flow of information across your services, as does being able to observe the traffic and collect metrics at a much bigger scale in order to debug errors down the road. While Kubernetes allows you to deploy your workloads across multiple clouds and platforms, it's vital to adopt platform-agnostic technology that can be deployed within your Kubernetes cluster instead of relying too much on cloud vendor solutions to secure and monitor traffic. Increasing team productivity and business value is very dependent on speed. Consolidating how you manage and operate your services across every environment and every cloud makes it faster both to create new services and to consume those services from your client applications.

Experience

  • K8s is still new, though it’s been around for five years. The lack of expertise and talent is the number one challenge. What you want from an enterprise standpoint is a standardized, shared platform to use on multiple clouds and on-prem. Containers are portable, and because K8s has a standard open-source API, you can build a platform that can run anywhere. The challenge is having the right people with the skills, and then day-two operations. Once in production, you have to deal with day-two operations like upgrades, with a new version out every three months: how to keep patched, back up, handle disaster recovery, and scale. K8s abstracts the infrastructure from the developers with a declarative approach: you just declare the end state of what you want K8s to do. Tell K8s what you want, and it makes it happen. If something fails, it will automatically recreate it. The downside is that if something goes wrong with the system, you now have to search through multiple levels of abstraction to figure out where the problem is. To have a successful implementation, you need a team that knows what’s happening across the landscape. If it fails, you need to know the nitty-gritty details of all of the services that are running. Troubleshooting, debugging, upgrading the cluster, SLA management, and day-two operations are challenging today. 
  • There’s a learning curve in learning the technology. Most application developers probably know it, but on the data engineering side it’s quite new. The first step is making it easy for developers to understand what the pieces are and how they should be used. Then come important aspects of data locality within the K8s cluster: making workloads stateful or stateless as needed. These are important concepts to explain to end-users, along with how they fit with K8s.
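The declarative approach described above ("tell K8s what you want, and it makes it happen") can be sketched with a minimal Deployment. K8s keeps the declared replica count running and recreates any pod that fails; the names and image are illustrative:

```yaml
# Declare the desired state; the control loop maintains three replicas
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                      # illustrative name
spec:
  replicas: 3                    # desired end state, not a one-time command
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: example.com/web:1.0   # assumed image
```

Applying this manifest repeatedly is idempotent: K8s continuously reconciles the cluster toward the declared state rather than replaying imperative steps.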

Data Locality

  • It depends on what they are trying to achieve with containers. A lot of customers want portability between on-prem and public cloud, or to deploy a scalable container platform. One of the key aspects is the differentiation between stateless and stateful applications. Think about how to claim and reclaim storage resources, and how to deal with security, performance, reliability, and availability: all of the traditional data center operations topics. Containers support that through persistent volume claims and persistent volume storage. There is a shift in how developers need to think in order to take advantage of persistent storage as they move from stateless to stateful. 
  • How you divide your application into smaller services is a critical decision to get right. But for K8s specifically, it’s really important to think about how you are going to handle state: whether it’s using StatefulSets, leveraging your provider’s block storage devices, or moving to a completely managed storage solution, implementing stateful services correctly the first time around is going to save you huge headaches. The other important question is what your application looked like before you began implementing K8s. Already had hundreds of services? Your biggest concern should be how to migrate those services in an incremental but seamless way, even if you’re just breaking the first few services off of your monolith. Making sure you have the infrastructure to support lots of services is going to be critical to a successful implementation.
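The StatefulSet point above can be sketched with volumeClaimTemplates, which give each replica its own persistent volume claim that survives pod restarts. The names, image, and sizes are illustrative assumptions:

```yaml
# Illustrative StatefulSet: each replica gets its own persistent volume
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db                       # illustrative name
spec:
  serviceName: db                # headless service for stable pod identities
  replicas: 2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: example.com/db:1.0    # assumed image
        volumeMounts:
        - name: data
          mountPath: /var/lib/db
  volumeClaimTemplates:          # one PVC per replica, retained across restarts
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

Unlike a Deployment, each pod here keeps a stable name (db-0, db-1) and reattaches to its own volume when rescheduled, which is what makes stateful services workable.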

Other

  • 1) Labels are your friends, label everything. They are the road map to be able to figure out where things are going. 2) Keep in mind you don’t know where anything is. Build your environment to be specific to a purpose, not to a location. In K8s it’s not as small as possible, it’s as small as necessary. Don’t over-engineer your environment to create a thousand tiny little things – deliver the information needed from each component.
  • We are seeing more people adopt K8s — different types of deployments, different flavors, different approaches to use. Some customers use a “build your own” approach. We are seeing people using on-prem vendors offering pre-packaged K8s distributions (e.g., Mesosphere, Docker, VMware). A lot is available on public cloud vendors. We see people adopting a consulting-based approach. The exact mix of what you pick depends on what kind of apps you are running on K8s and what kind of users you are servicing, and how advanced you are with your K8s deployments. We see a lot of reliance on cloud and on-prem (Red Hat and IBM are the most prominent). We recommend making sure you understand where you are in your journey, and who your users are to figure out the right mix. Make sure when deploying these technologies you start with people. People need to work well together when services are split between teams in terms of technology, culture, and people in engineering and ops.
  • Declarative APIs. The customer says, “Here’s what I want,” and knows it will happen. Applications will be better if they are stateless and able to get their state from somewhere else, like a database. Observability is a huge issue across a broad number of microservices.
  • The overall strategy of automating testing is critical. We see clients trying to find the right way to test. There is a huge variety of techniques and approaches: what needs to be tested, how you are set up, what your maturity is, and what the right level of automation is. Test the right things in the right way: which tests can be run in parallel, how to deal with data management, how to leverage orchestration capability, and which devices you want to include. It depends on the maturity of the team and the software. Integrations: what else is your testing touching on? What are the dependencies? Problems arise when environments cannot take the scale and you fall short of your expectations of what’s possible.
  • K8s alone won’t solve your problem. It’s not an enterprise-grade orchestration stack. You should have the same concerns for K8s as when you put software into production – security, monitoring and debugging. There are 500+ open source products for cloud-native networking. It's impossible to keep up with and maintain. K8s comes out with new releases all of the time. 
  • Think about how to manage configuration, how to use managed services, resource management, how to apply AI to K8s infrastructure. Managed services and credential management.
  • We have a consulting package where we do a lot of training around developing and managing K8s clusters. Look for micro-improvements along with the massive ecosystem with 500 different open source tools. Each is a new area of discovery for people getting into cloud-native computing. We help customers consume open-source with little to no friction with security updates.
  • The most important element of implementing K8s to orchestrate containers is its ability to declaratively define application policies that are enforced at application runtime to maintain the desired state (e.g., the number of application pods, their types, and their attributes) to ensure critical applications always remain available. Most recently, auto-scaling pods have also become a very important element to ensure predefined application SLAs are always met. The ease of deploying containers is an important element as well. Companies require the ability to develop, test, and deploy container-based applications quickly and seamlessly using their CI/CD pipelines.
  • I think the main thing to keep in mind is how important core infrastructure is to be successful with Kubernetes.  What I mean by this is that you need to have your ducks in a row with storage and networking especially.
  • 1) Have a plan first, driven by your goals for moving to K8s. Moving from monolithic apps to microservices running on Kubernetes has many benefits, but trying to solve every problem at the same time is a recipe for delayed migration, and frustration. Know what you’re trying to achieve (or better yet, the sequence of goals you’re trying to achieve) and design a plan to accomplish those. The roadmap is key. Think about how you stage the adoption of K8s and the migration from monolith to microservice and how that will get rolled out across the organization. There’s a tremendous amount of new technology in the cloud-native ecosystem; fold that technology into the roadmap, too. Realize that the roadmap can and will change as you gain experience with each piece of that new technology stack. 2) Don’t forget that a new implementation doesn’t eliminate the need to address all the old requirements around Operations, Security, and Compliance. Factors to consider: What kind of app are you creating? Internal, or external? Will it have customer data? How often will it be updated?  Questions to answer: who has access, and how will you enforce that access? Kubernetes to the rescue: Kubernetes provides a revolutionary way of implementing custom guardrails so that you can prevent problems before they happen. Kubernetes lets you inject custom rules and regulations right into the API server (via Admission Control) that enforce an unprecedented level of control. And because Kubernetes provides a uniform way of representing resources that used to be contained in silos (e.g. compute, storage, network), you can impose cross-silo controls. 3) Take your policy out of PDFs and put it into the code. When your infrastructure is code, and your apps are code, so too should your policy be code. 
The business needs developers to push code rapidly — to improve the business’s software faster, ideally, than competitors — but the business also needs that software to follow the same age-old operations, security, and compliance rules and regulations. The only way to succeed at both is to automate the enforcement of those rules and regulations by pulling them out of PDFs and wikis and moving them into the software. That’s what policy-as-code is all about.
  • Ensure the application is built as a set of independent microservices that are loosely coupled to serve the business. This helps get the most out of Kubernetes. Ensure microservices have built-in resilience (to handle failures), observability (to monitor application), and administrative features (to allow for elastic scaling, data backup, access control, and security, etc.). Essentially, having the application architected the right way is critical to reaping the benefits of Kubernetes.
  • One of the most important elements is ensuring K8s remains simple enough for developers to use. Developers are growing more committed to Kubernetes: in 2016, just under half said they were committed to the technology, but by 2017, 77 percent said the same. Despite Kubernetes’ growing popularity, it is still often challenging for developers to manage manually. Our approach focuses on ensuring that clusters are configured for high availability, stability, and best practices. Kubernetes has many knobs that can be turned to limit resources, segregate components, and configure the way the system performs. It can be challenging to do this on your own, so we have worked hard to provide users with a platform that has best practices baked in from the start.
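The auto-scaling element mentioned above can be sketched with a HorizontalPodAutoscaler that scales a Deployment on CPU utilization; the target name and thresholds are illustrative assumptions:

```yaml
# Illustrative HPA keeping average CPU utilization near 70%
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa                  # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                    # assumed Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out above, scale in below
```

The HPA compares observed utilization (from the metrics pipeline) against the declared target and adjusts the replica count within the min/max bounds, which is how predefined SLAs can be met automatically.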


Exposing Services to External Applications in Kubernetes (Part 1)

Kubernetes (K8s) has now become the most popular production-grade container management and orchestration system. Recently, I was involved in a project where an on-premises application needed to be containerized, managed by a Kubernetes cluster, and able to connect to other apps outside the cluster.

Here, I share my experience of running a container in a local K8s cluster and the various options for networking externally to the cluster. There are many posts related to or touching on this topic, but most of them follow the minikube installation (a single-node K8s cluster), whereas my cluster was installed using kubeadm. There are certain differences between K8s install options, as noted in the references below.
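As one of the simplest options for exposing a service externally, a NodePort Service makes the application reachable on a fixed port of every cluster node; the labels and ports below are illustrative assumptions:

```yaml
# Illustrative NodePort Service: reachable at <any-node-ip>:30080
apiVersion: v1
kind: Service
metadata:
  name: web-external             # illustrative name
spec:
  type: NodePort
  selector:
    app: web                     # assumed pod label
  ports:
  - port: 80                     # cluster-internal service port
    targetPort: 8080             # container port
    nodePort: 30080              # must fall in the node-port range (default 30000-32767)
```

NodePort is the most basic external-access mechanism; LoadBalancer Services and Ingress build on the same selector-and-port model for production use.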