Cilium: The De Facto Kubernetes Networking Layer and Its Exciting Future

Cilium is an eBPF-based project originally created by Isovalent, open-sourced in 2015, and now the center of gravity for cloud-native networking and security. With 700 active contributors and more than 18,000 GitHub stars, Cilium is the second most active project in the CNCF (behind only Kubernetes), and in Q4 2023 it became the foundation's first graduated project in the cloud-native networking category. A week ahead of KubeCon EU, where Cilium and the recent 1.15 release are expected to be among the most popular topics with attendees, I caught up with Nico Vibert, Senior Staff Technical Engineer at Isovalent, to learn why this is just the beginning for the Cilium project.

Q: Cilium recently became the first CNCF project to graduate in the “cloud native networking” category — why do you think Cilium was the right project at the right time in terms of the next-generation networking requirements of cloud native?

4 Big GitOps Moments of 2021

The growing complexity of application development and demand for more frequent deployments bolstered the rise of GitOps. GitOps, in simple terms, is all about using Git for container-based continuous integration and deployment. GitOps enables a seamless developer experience and greater control for Ops teams. It is often considered an extension of DevOps.

The central idea of GitOps is to use Git as the single source of truth. With Git repositories storing the declarative state of the system, code management, reconciliation, and audits become fairly easy to control and implement at scale. GitOps offers productivity, reliability, and security for cloud-native applications, which has accelerated its adoption.
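
To make the reconciliation idea concrete, here is a deliberately minimal, toy sketch in Go of the kind of loop that GitOps tools such as Argo CD or Flux run against a real Git repository and the Kubernetes API. It uses in-memory maps for the desired (Git) and actual (cluster) state, and every name and value in it is hypothetical.

```go
package main

import "fmt"

// replicas maps a deployment name to its replica count.
type replicas map[string]int

// reconcile converges the actual state toward the desired state declared in Git:
// it applies anything that drifted and prunes anything no longer declared.
func reconcile(desired, actual replicas) {
	for name, want := range desired {
		if have, ok := actual[name]; !ok || have != want {
			fmt.Printf("apply %s: %d -> %d replicas\n", name, have, want)
			actual[name] = want
		}
	}
	for name := range actual {
		if _, ok := desired[name]; !ok {
			fmt.Printf("delete %s (no longer declared in Git)\n", name)
			delete(actual, name)
		}
	}
}

func main() {
	desired := replicas{"web": 3, "worker": 2}    // declarative state stored in Git
	actual := replicas{"web": 1, "legacy-job": 1} // state observed in the cluster
	reconcile(desired, actual)
	fmt.Println("converged state:", actual)
}
```

In a real GitOps setup this loop runs continuously, so any manual drift in the cluster is detected and reverted to whatever the repository declares.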

Service Mesh 101: The Role of Envoy

If you’ve done any reading about service meshes, you’ve probably come across mentions of an open-source project named Envoy. And if you’ve done any reading about Envoy, you’ve probably seen references to service meshes. How are these two technologies related? How are they different? Do they work together? I’ll attempt to answer all of those questions, and possibly a few more, in the first and second parts of this blog post.

What Is a Service Mesh?

As companies increasingly re-architect their applications and embrace a microservices-based approach, the need for traffic management, observability, security, and reliability features grows. A service mesh is one approach to adding these features to the underlying platform rather than to individual applications or services.
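
To give a feel for what “adding these features to the platform” means, here is a toy Go sketch of the sidecar idea that service meshes build on with proxies like Envoy: a small reverse proxy sits in front of an application, counts and tags requests, and the application itself never changes. The upstream address, listen port, and header name are assumptions for illustration; a real mesh proxy does far more (mTLS, retries, routing, telemetry).

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

var requestCount int64

func main() {
	// Hypothetical upstream: the application container listening on localhost:8080.
	target, err := url.Parse("http://localhost:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(target)

	handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Observability: count and log every request passing through the proxy.
		n := atomic.AddInt64(&requestCount, 1)
		log.Printf("request #%d: %s %s", n, r.Method, r.URL.Path)
		// Traffic management (toy version): tag the request for the next hop.
		r.Header.Set("X-Toy-Sidecar", "hop-1")
		proxy.ServeHTTP(w, r)
	})

	// The "sidecar" listens on its own port in front of the application.
	log.Fatal(http.ListenAndServe(":15001", handler))
}
```

In an actual mesh, a proxy such as Envoy is injected next to every workload and configured centrally by the control plane, which is exactly the “platform, not application” trade-off described above.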

Success Story: From AWS EMR to Kubernetes

Motivation

This article is an overview of the path we followed to migrate our Spark workloads to Kubernetes and remove the dependency on EMR. EMR was an important support tool at Empathy.co for orchestrating Spark workloads, but as the workloads became more complex, using EMR became more complicated too. So, back in December 2020, the Step Functions flow orchestrating the different EMR clusters looked like this:

In January 2021, an initial Spark on Kubernetes RFC was proposed by the Platform Engineering team. The aim was to find a better solution with as little friction as possible between teams, especially the Data and Ether teams, the main users of the Spark workloads.

Developer Tooling for Kubernetes in 2021 – Docker, BuildKit, Buildpacks, Jib and Kaniko (Part 4)

Over the last few blog posts, I've covered critical elements of developer tooling for Kubernetes and how things are looking in 2021. As we continue to dive into that discussion, we must not forget the process of building container images.

Of course, most of us create our images by writing Dockerfiles and building them with the Docker engine. And yet, more and more teams are adopting newer alternatives. After all, the Docker image format was standardized as part of the OCI (Open Container Initiative) a long while ago.

3 Serverless Strategies to Look for in 2021

If you have had a chance to attend business and technology conferences recently, you probably saw lots of DevOps strategists, Agilists, and DevSecOps engineers around the digital transformation track. No matter what business you work in, it’s no secret that DevOps is a big catalyst for building new companies. It is also used to optimize existing resources, from IT infrastructure to workflow processes and cultural change.

While I was moderating several tracks (including serverless) at KubeCon + CloudNativeCon North America 2020, I noticed a very interesting trend around DevOps: more interest in cloud-native application development than in deployment and provisioning on particular cloud platforms running Kubernetes. Kubernetes was unleashed upon the world back in 2014. Since then, IT ops organizations have benefited from orchestrating immutable application runtimes with service invocation and discovery, autoscaling, and resilience. In the meantime, application workloads also need to move forward to serverless functions and event-driven reactive services, along with a DevOps platform, which is key to managing multi- and hybrid-cloud infrastructure.

Exploring Kubernetes With Gigi Sayfan

You have probably read about Kubernetes, and maybe even dipped your toes in and used it in a side project or even at work. But understanding what Kubernetes is all about, how to use it effectively, and what the best practices are requires much more effort. Kubernetes is a big open-source project and ecosystem with a lot of code and a lot of functionality. Kubernetes came out of Google, but joined the Cloud Native Computing Foundation (CNCF) and became the clear leader in the space of container-based applications.

Let's hear from Gigi Sayfan, author of the bestseller Mastering Kubernetes, Third Edition, about his methodologies and the approach he followed to create a powerful resource to acquaint learners all over the globe with the fundamentals and more advanced concepts of Kubernetes.

Kafka on Kubernetes, the Strimzi Way! (Part 4)

Welcome to part four of this blog series! So far, we have a single-node Kafka cluster with TLS encryption, on top of which we configured different authentication modes (TLS and SASL SCRAM-SHA-512), defined users with the User Operator, connected to the cluster using CLI and Go clients, and saw how easy it is to manage Kafka topics with the Topic Operator. Until now, our cluster has used ephemeral storage, which, in the case of a single-node cluster, means that we will lose data if the Kafka or ZooKeeper nodes (Pods) are restarted for any reason.

Let's march on! In this part we will cover:

Kafka on Kubernetes, the Strimzi Way! (Part 1)

Some of my previous blog posts (such as Kafka Connect on Kubernetes, the Easy Way!) demonstrate how to use Kafka Connect in a Kubernetes-native way. This is the first in a series of blog posts covering Apache Kafka on Kubernetes using the Strimzi Operator. In this post, we will start off with the simplest possible setup, i.e., a single-node Kafka (and ZooKeeper) cluster, and learn:

  • Strimzi overview and setup
  • Kafka cluster installation
  • Kubernetes resources used/created behind the scenes
  • Testing the Kafka setup using clients within the Kubernetes cluster (see the client sketch below)

The code is available on GitHub - https://github.com/abhirockzz/kafka-kubernetes-strimzi
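
For the last item in the list above, here is a minimal sketch of what an in-cluster Go producer could look like, using the segmentio/kafka-go client. The bootstrap address assumes the <cluster-name>-kafka-bootstrap Service that Strimzi creates, with the cluster named my-cluster as in the Strimzi examples; the topic name is hypothetical.

```go
package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	// Strimzi exposes the brokers through a <cluster-name>-kafka-bootstrap Service;
	// adjust the address and topic to match your cluster.
	w := &kafka.Writer{
		Addr:  kafka.TCP("my-cluster-kafka-bootstrap:9092"),
		Topic: "my-topic",
	}
	defer w.Close()

	msg := kafka.Message{
		Key:   []byte("greeting"),
		Value: []byte("hello from inside the Kubernetes cluster"),
	}
	if err := w.WriteMessages(context.Background(), msg); err != nil {
		log.Fatalf("failed to write message: %v", err)
	}
	log.Println("message written")
}
```

Run from a Pod inside the Kubernetes cluster, this reaches the plain listener on port 9092; the TLS and authentication variants are covered in later parts of the series.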

Kafka Connect on Kubernetes, the Easy Way!

This is a tutorial that shows how to set up and use Kafka Connect on Kubernetes using Strimzi, with the help of an example.

Kafka Connect is a tool for scalably and reliably streaming data between Apache Kafka and other systems using source and sink connectors. Although it's not too hard to deploy a Kafka Connect cluster on Kubernetes (just "DIY"!), I love the fact that Strimzi enables a Kubernetes-native way of doing this using the Operator pattern with the help of Custom Resource Definitions.

Using Buildpacks to Provision OCI-Compliant Container Images

It never fails that the CNCF seems to be cooking up something interesting in its ecosystem. In my free time, I find myself in the habit of playing in the Sandbox to see what new cutting-edge tools I can add to my collection. My goal today is to introduce you to a project at the Sandbox stage known as "Buildpacks".

What Are Buildpacks?

Buildpacks are an OCI-compliant tool for building application images that serves as a higher-level abstraction compared to writing Dockerfiles. The project was spawned by Pivotal and Heroku in 2011 and joined the Cloud Native Sandbox in October 2018. Since then, Buildpacks has been adopted by Cloud Foundry and other PaaS offerings such as GitLab, Knative, Deis, Dokku, and Drie.

The project seeks to unify the buildpack ecosystem with a well-defined platform-to-buildpack contract that incorporates years of learning from maintaining production-grade buildpacks at both Pivotal and Heroku.

Master Shifu and His Cloud-Native Mentoring Sessions

Let’s start with a quick introduction to InfraCloud Technologies, shall we? (Master Shifu here; yep, we’re going the Kung Fu Panda route!) In case you’re wondering, this story also has the warrior panda and the furious… many!

Now, Shall We Begin?

Master Shifu has been working with cloud-native technologies since before Kubernetes reached its 1.0 milestone, back when Kubernetes was just one of the contenders in the orchestration game (Master Shifu has grey cells too). But the tide turned quite quickly.

Cloud-Native DevOps: Your World to New Possibilities

In DevOps, everyone needs to trust that everyone else is doing their best for the business. This can happen only when there is trust between the teams, shared goals, and standard practices. For example, Devs need to talk to Ops about the impact of their code, the risks involved, and the challenges, so that Ops can be well aware and prepared to maintain the stability of the system if any unexpected incidents occur.

When first embracing DevOps, failures are inevitable, but that doesn't mean you should stop innovating.

The Gorilla Guide to Kubernetes in the Enterprise – Chapter 1: The Changing Development Landscape

This is an excerpt from The Gorilla Guide to Kubernetes in the Enterprise, written by Joep Piscaer. You can download the full guide here.

Software Delivery Has Evolved

The way we build and run applications has changed dramatically over the years. Traditionally, apps ran on top of physical machines. Those machines eventually became virtual. In both cases, the application and all its dependencies were installed on top of an OS.

This relationship between OS and applications created a tightly coupled bundle of everything needed to run that application. Each virtual machine (VM) ran a complete OS, no matter how big or small the VM was, or how demanding the application on top of it.

Kubernetes — The Fast and Furious?

Alexis Richardson recently spoke at Container Stack, a two-day event in Zurich, describing the role of the CNCF and what it means to be cloud-native. After a brief introduction to the CNCF and its role, Alexis explained the types of projects it hosts and how they get accepted into the foundation, before diving into how enterprises can benefit from adopting cloud-native technology.

CNCF — A Brief History

The Cloud Native Computing Foundation (CNCF) was created three years ago as the home of Kubernetes. Kubernetes builds upon 15 years of experience of running production workloads at Google combined with best-of-breed ideas and practices from the community.

The Top 3 Things Holding Back Your Kubernetes Strategy and How to Fix Them

A reported 69% of organizations surveyed by the CNCF (the Cloud Native Computing Foundation) use Kubernetes to manage containers. As Kubernetes becomes the new standard for container orchestration, a new set of challenges results, and enterprises often spend significant time managing their Kubernetes deployments rather than innovating. The most common barriers are security vulnerabilities and lack of trust, scarcity of skills and expertise, and navigating storage needs.

Why Kubernetes?

We can all agree that our industry is prone to hype, and sometimes we feel pressure to adopt a new technology simply because our peers and competitors do. Before diving into the challenges of adopting Kubernetes (K8s), let’s remind ourselves why someone should (or shouldn’t) bother.