Why DevOps Should Be Responsible for Development Environments

Let’s discuss an extremely common anti-pattern I’ve noticed with teams that are relatively new to containers/cloud-native/Kubernetes, etc. More so than when building traditional monoliths, cloud-native applications can be incredibly complex and, as a result, need a relatively sophisticated development environment. Unfortunately, this need often isn’t evident at the beginning of the cloud-native journey. Development environments are an afterthought – a cumbersome, heavy, brittle drag on productivity.

The best teams treat development environments as a priority and devote significant DevOps/SRE time to perfecting them. In doing so, they end up with development environments that “just work” for every developer, not just those experienced with containers and Kubernetes. For these teams, every developer has a fast, easy-to-use development environment that works every time.

How to Develop Your Node.js Docker Applications Faster

Docker has revolutionized how Node.js developers create and deploy applications. But developing a Node.js Docker application can be slow and clunky. The main culprit: the process for testing your code in development.

In this article, we’ll walk through a tutorial and example showing how you can use Docker’s host volumes and nodemon to code faster and radically reduce the time you spend testing.
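To make the idea concrete, here is a minimal sketch of the workflow: a host volume mounts your local source tree into the container, and nodemon restarts the server whenever a file changes, so you never rebuild the image just to test an edit. The service name, image tag, paths, and entry file below are illustrative assumptions, not the article’s exact setup:

```yaml
# docker-compose.yml (illustrative sketch; names and paths are assumptions)
version: "3.8"
services:
  web:
    image: node:18-alpine
    working_dir: /app
    # Host volume: mounts the local source tree into the container,
    # so edits on the host are visible inside it immediately.
    volumes:
      - ./:/app
    # nodemon watches the mounted files and restarts the process on
    # change -- no image rebuild needed between test runs.
    command: sh -c "npm install && npx nodemon server.js"
    ports:
      - "3000:3000"
```

With this in place, the edit-test loop shrinks to: save the file, wait for nodemon to restart, refresh.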

How to Use Docker Volumes to Code Faster

If you are a developer who uses Docker, odds are you’ve heard that you can use volumes to maintain persistent state for your containers in production. But what many developers don’t realize is that volumes can also be an excellent tool for speeding up your development workflow.

In this post, I’ll give you a brief overview of what a Docker volume is and how Docker host volumes work, and then show you a tutorial and example of how you can use volumes and nodemon to make coding with Docker easier and faster.
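The same host-volume workflow can be sketched with a plain `docker run` invocation; the image tag, mount paths, port, and entry file here are illustrative assumptions:

```shell
# Mount the current directory into the container as a host volume (-v),
# make it the working directory (-w), and run the app under nodemon so
# the server restarts on every file change -- no image rebuilds needed.
docker run --rm -it \
  -v "$(pwd)":/app \
  -w /app \
  -p 3000:3000 \
  node:18-alpine \
  sh -c "npm install && npx nodemon server.js"
```

Because the code lives on the host and is only mounted into the container, your editor, git, and other local tools keep working exactly as they always have.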

Kubernetes Explained: Part 2 — Containers

In my previous post, Kube Explained: Part 1, I described how the introduction of the cloud resulted in CI/CD, microservices, and massive pressure to standardize backend infrastructure tooling.

In this post, we’ll cover the first and most important domino to fall in this wave of standardization: the container. For the first time, containers standardized the packaging, distribution, and lifecycle of backend services and, in doing so, paved the way for container orchestration, Kubernetes, and the explosion of innovation surrounding that ecosystem.
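To make “standardized packaging” concrete, here is a minimal sketch of a container image definition. The base image, file names, and port are illustrative assumptions; the point is that runtime, dependencies, and code are bundled into one artifact that runs identically on any host with a container runtime:

```dockerfile
# One self-contained artifact: runtime + dependencies + application code.
FROM node:18-alpine
WORKDIR /app
# Copy the dependency manifest first so this layer is cached
# between code-only changes.
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

Before containers, each team shipped services with its own mix of tarballs, packages, and config scripts; a standard image format is what made generic orchestration tooling possible.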

The Dark Side of Microservices

There is an endless supply of blog posts, white papers, and slide decks evangelizing the virtues of microservices. They talk about how microservices “increase agility” and are “more scalable,” and promise that once you make the switch, engineers will be pounding on your office door looking for a job.

Let’s be clear: though occasionally exaggerated, the benefits of microservices can be real in some cases. Particularly for large organizations with many teams, microservices can make a lot of sense. However, microservices aren’t magic — for all of their benefits, they come with significant drawbacks. In this article, I’ll describe how the distributed nature of microservices makes them inherently more complex.

How Microservices Enable Multi-Cloud at the Expense of Developers

This article was originally posted on the Kelda.io blog by CEO and founder Ethan J. Jackson. Kelda is Docker Compose for Kubernetes: it allows you to quickly test your code changes in a remote environment that matches production, without the complexity of interacting with Kubernetes directly.


I recently had the pleasure of speaking at Escape/19, the multi-cloud conference, in New York City. It was a fantastic event packed full of sharp folks with interesting perspectives.