Rapidly Develop Java Microservices on Kubernetes With Telepresence

Many organizations adopt cloud-native development practices with the dream of shipping features faster. Kubernetes has become the de facto container orchestration platform for building cloud-native applications. Although it provides great opportunities for development teams to move faster, the learning curve can be steep, and the burden often falls on application developers, who must learn new processes and tools.

Challenges of Developing Locally With Kubernetes

For larger enterprises, Kubernetes and cloud architectures present a critical challenge compared to monolithic legacy applications. As Kubernetes applications evolve into complex microservice architectures, the development environments grow more complex too, because each microservice brings additional dependencies. These services quickly come to need more resources than a typical local development environment can provide.

Seeing 5XXs When Configuring a Kubernetes API Gateway for the First Time?

Kubernetes is a fantastic foundation for an application platform, but it is just that: a foundational component. For K8s to be useful to application developers, several components must be added on top: ingress, an API gateway, and observability. You need to get user traffic into your applications, and you need to be able to understand what is going on.

Getting K8s Ingress up and running for the first time can be challenging due to the differing cloud vendor load balancer implementations. I've seen my fair share of 5XX HTTP errors and have often struggled to identify where the problem lies...

Why Cloud Native? Speed, Stability, and Full Cycle Development

The emergence of “cloud-native” technologies and practices, such as microservices, cloud computing, and DevOps, has enabled innovative organizations to respond and adapt to market changes more rapidly than their competitors. Just look at the success of the initial web “unicorns”, Spotify, Netflix, and Google. Not every company can be a unicorn, but there is much to learn from the early adopters of the cloud.

The Benefits of Being Cloud-Native

Spotify’s now-famous “squad, chapters, and guilds” organizational model ultimately led to the creation of their applications as independent microservices, which in turn supported the rapid rate of change they desired. Through a combination of a compelling vision and the whole-scale adoption of cloud services, Netflix was able to out-innovate existing market incumbents in the video streaming space. And Google’s approach to collaboration, automation, and solving ops problems using techniques inspired by software development enabled them to scale to a global phenomenon over the past two decades.

The Two Most Important Challenges With an API Gateway When Adopting Kubernetes

Building applications using the microservices pattern and deploying these services onto Kubernetes has become the de facto approach for running cloud-native applications today. In a microservice architecture, a single application is decomposed into multiple microservices. Each microservice is owned by a small team that is empowered to make, and responsible for making, the right decisions for that specific microservice.

Routing in a Multi-Platform Data Center: From VMs to Kubernetes, via Ambassador

At Datawire, we are seeing more organizations migrating to their “next-generation” cloud-native platform built around Docker and Kubernetes. However, this migration doesn’t happen overnight. Instead, we see the proliferation of multi-platform data centers and cloud environments where applications span both VMs and containers. In these data centers, the Ambassador API gateway is being used as a central point of ingress, consolidating authentication, rate limiting, and other cross-cutting operational concerns.

This article is the first in a series on how to use Ambassador as a multi-platform ingress solution when incrementally migrating applications to Kubernetes. We’ve added sample Terraform code to the Ambassador Pro Reference Architecture GitHub repo which enables the creation of a multi-platform “sandbox” infrastructure on Google Cloud Platform. This will allow you to spin up a Kubernetes cluster and several VMs, and practice routing traffic from Ambassador to the existing applications.

Server Name Indication (SNI) and Ingress TLS in Kubernetes with Ambassador

The open-source Ambassador 0.50 API gateway adds support for Server Name Indication (SNI), a much-requested feature from the community that allows multiple TLS certificates to be served from a single ingress IP address. In this tutorial, we explore how multiple secure domains (e.g., https://www.datawire.io and https://www.getambassador.io) can be served by a single (or load-balanced) Ambassador running within a Kubernetes cluster.

SNI Use Cases

In a nutshell (and with thanks to Wikipedia), SNI is an extension to the TLS protocol that allows a client to indicate which hostname it is attempting to connect to at the start of the TLS handshake. This allows the server to present multiple certificates on the same IP address and TCP port number, which in turn enables the serving of multiple secure websites or API services without requiring all those sites to use the same certificate.
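The mechanism above can be sketched with Python's standard `ssl` module, which exposes the same server-side SNI hook that gateways like Ambassador rely on. This is a minimal illustration, not Ambassador's implementation: the certificate paths are hypothetical (a real deployment would call `load_cert_chain()` on each context), and the two domain names are simply reused from the article's example.

```python
import ssl

# One SSLContext per secure domain. In a real deployment each context would
# load its own certificate, e.g. ctx.load_cert_chain("cert.pem", "key.pem")
# (paths hypothetical).
contexts = {
    "www.datawire.io": ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER),
    "www.getambassador.io": ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER),
}

# The context initially attached to the listening socket.
default_context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)

def sni_callback(ssl_socket, server_name, initial_context):
    # Called during the TLS handshake with the hostname the client sent in
    # its ClientHello. Swapping ssl_socket.context selects which certificate
    # (and therefore which secure site) is presented on this one IP:port.
    if server_name in contexts:
        ssl_socket.context = contexts[server_name]
    # Returning None lets the handshake continue with the chosen context.

default_context.sni_callback = sni_callback
```

Wrapping a listening socket with `default_context.wrap_socket(sock, server_side=True)` would then serve both domains from a single address, with the callback choosing the right certificate per connection.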