TechTalks With Tom Smith: Kubernetes’ Additional Considerations

Research analyst Tom Smith

To understand the current and future state of Kubernetes (K8s) in the enterprise, we gathered insights from IT executives at 22 companies. We asked, "What have I failed to ask that you think we need to consider with regard to K8s?" Here’s what we learned.


Cloud

  • Think about which cloud to use, what your target is, and where you want to run it. There are pros and cons to every choice. Consider a hybrid, multi-cloud approach and a platform that helps with that. It isn’t easy, but it’s good to stay flexible, and a platform will help.
  • Is the cloud changing certain aspects of market dynamics where you have one popular player? There isn’t really a competitor to K8s. Is that good or not? Will there be vendors supporting it beyond the mega-clouds? Who can provide first-class support for the Fortune 500 beyond the large public cloud providers?
  • What deployment options do developers have to run their containerized app on K8s? You can deploy K8s on your laptop, or in a data center you own or lease, and then deploy your app; but then you manage the app, the administration of the K8s infrastructure itself, and the hardware. Alternatively, deploy K8s on a cloud provider (for example, on a set of AWS EC2 instances) so you don’t have to manage the hardware, though you still manage the K8s infrastructure. An even better option is to forget the complexities of managing K8s infrastructure or hardware altogether and simply deploy your containerized applications on a managed K8s service like Amazon EKS or Azure Kubernetes Service (AKS). Ultimately, the choice depends on the requirements of your app and business.

Evolution

  • What’s next after K8s? Serverless. 
  • Focus on how all of these movements tie in with ML and cloud adoption. ML will change how people think about DevOps and writing code; the question is how to apply deep learning and ML to revolutionize the way DevOps is approached.
  • What’s going to happen next? What does the world look like after K8s? It has raised the bar. Platforms like Heroku were hard to get right; now you can use K8s to accomplish the same goals more quickly. The new wave of platforms solves very specific problems. K8s provides tools to iterate with less effort and more confidence, making teams more productive, and it enables greater collaboration.

Details

  • Before jumping into K8s, evaluate whether it’s the right solution and whether the organization is ready for it. This involves testing and architectural design decisions. Careful investigation is needed to determine if it’s the right fit. K8s lets you write apps once and deploy them anywhere; figure out how that fits into your use cases.
  • While K8s is capable, make sure you don’t limit yourself to looking at one level of the problem. Look at multiple levels.
  • Should you use a managed K8s solution like GKE? It depends on your needs and the kinds of applications you are building. GKE has a fairly infrequent release cycle, so if you want the latest and greatest features, you cannot get them on GKE. K8s 1.15 has been out for a while and GKE is still on 1.13. The cycles are getting longer, moving from weeks to several months. GKE is better than AWS, but it still takes a significant investment of time to manage.
  • Power comes at the cost of complexity. It might not be a fit for every project. The only way to manage it is to automate; look for ways to automate yourself out of a job. Stay agile. K8s grows fast in an organization, so be ready to govern that. Don’t forget about best practices for day-two operations.
  • Think about the operator of the cluster and how you design to take advantage of what K8s offers. DevOps and CI/CD are important to stand up both locally and on K8s.
  • Due to how complicated it is to set up K8s compared to other container orchestration tools, I recommend deploying K8s using something like AKS. It can be time-consuming to figure out all of the plumbing to make K8s work properly on your machine. Setting it up from scratch is a worthwhile exercise, but if you plan on using it in production, it’s better to rely on your cloud provider. If you are just interested in making sure your containers work, K8s is overkill; something like Docker Swarm will suit better.
  • There are several benefits to deploying K8s Operators: 1) Accessibility (vendors publish them to the common K8s marketplaces). 2) Automation of complex tasks. 3) Operators are a K8s feature and run across different K8s distributions with little or no change required. 4) They can also diagnose and monitor K8s applications to assist with troubleshooting and root-cause analysis.
  • When it comes to deploying K8s in production, look into Admission Controllers. By implementing policy via webhooks, you can minimize outages, accelerate development, and prevent security and compliance risk. The intent-based API in K8s offers a truly transformative way to manage your environment and should not be overlooked!
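At the heart of the Operator pattern mentioned above is a reconcile loop: compare the desired state declared in a custom resource with the state observed in the cluster, and act to close the gap. Here is a minimal, framework-free sketch of that idea; the field names and scaling policy are illustrative assumptions, and a real operator would use client-go or a framework such as Operator SDK or Kopf rather than hand-rolled dicts.

```python
# Illustrative sketch of an Operator's reconcile loop. The "spec" and
# "status" dicts stand in for a custom resource's desired and observed
# state; all field names here are invented for the example.

def reconcile(desired: dict, observed: dict) -> list[str]:
    """Compare desired spec with observed state and return the actions
    needed to converge (scale replicas, roll out a new image, etc.)."""
    actions = []
    if desired["replicas"] != observed.get("replicas"):
        actions.append(f"scale to {desired['replicas']} replicas")
    if desired["image"] != observed.get("image"):
        actions.append(f"roll out image {desired['image']}")
    # An empty list means the cluster already matches the spec,
    # which is the steady state the loop converges toward.
    return actions


if __name__ == "__main__":
    spec = {"replicas": 3, "image": "registry.example/app:1.2"}
    status = {"replicas": 2, "image": "registry.example/app:1.1"}
    for action in reconcile(spec, status):
        print(action)
```

The point of the sketch is that an operator is level-triggered, not event-triggered: it repeatedly re-derives the needed actions from current state, so it self-heals after missed events or restarts.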
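The admission-webhook approach described above boils down to a function that receives an AdmissionReview object and answers allow or deny. Below is a sketch of that core logic in Python; the `uid`, `request`, and `response` fields follow the `admission.k8s.io/v1` API shape, but the policy itself (requiring a `team` label) is an invented example, and a real webhook would also need to be served over TLS and registered via a ValidatingWebhookConfiguration.

```python
# Sketch of the decision logic of a validating admission webhook.
# The AdmissionReview request/response field names follow the
# admission.k8s.io/v1 API; the "team" label policy is a made-up example.

def review(admission_review: dict) -> dict:
    """Allow the object only if it carries a 'team' label."""
    request = admission_review["request"]
    labels = request["object"].get("metadata", {}).get("labels", {})
    allowed = "team" in labels
    response = {"uid": request["uid"], "allowed": allowed}
    if not allowed:
        # The message surfaces in kubectl output when the API server
        # rejects the object, which is how the policy reaches users.
        response["status"] = {"message": "every object needs a 'team' label"}
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    }
```

Because the API server calls the webhook before persisting the object, a policy like this blocks non-compliant resources at admission time instead of detecting them after the fact.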

Other

  • There’s another opportunity to make K8s more approachable for people who come from different backgrounds. Explain containers in simple terms to people from a virtualized environment, or traditional IT who still need to learn what containers are. There’s an opportunity for the K8s community to reach a broader range of IT professionals.
  • People other than DevOps shouldn’t have to care or worry about K8s.
  • I think the K8s community is a resource that really can’t be overvalued. K8s is certainly not the first open-source orchestration tool, but it has a vibrant and quickly growing community. That community is really what’s powering the continued development of K8s as new features keep arriving. It’s also a great way for platform engineers to give back, and to build their own open-source reputations, and can be a great way to attract new team members.
  • Teams want to use something that they can control. As our systems get more decoupled, teams are moving away from monolithic, closed source "black-box" solutions because OSS technologies provide a self-service ecosystem that developers and operators can easily use and put in production.
  • Here's a webinar on data locality with Spark and Presto workloads for faster performance and better data access in K8s.

Here’s who shared their insights: