Monitoring Serverless Functions Using OpenTracing and LightStep

The adoption of serverless functions is reaching record levels within enterprise organizations. Yet despite this growing adoption and interest, many monitoring solutions silo the performance of code executed in these environments, or provide only basic metrics about each execution. To understand the performance of an application, I want to know where bottlenecks exist, where time is being spent, and the current state of each system involved in fulfilling a request. While metrics, logs, and segmented stack traces are helpful, a cohesive performance story is still the most useful way to understand an application, and it should be achievable with existing technologies.

In this post, I’ll explore how to build that cohesive performance story by using OpenTracing to instrument an Express app running in a local container and a function running on AWS Lambda. For visualization and analysis, I’ll use LightStep [x]PM, although you could choose another OpenTracing-compatible backend such as Jaeger.
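The core of this approach is OpenTracing's inject/extract pattern: the Express service starts a parent span and injects its context into the outgoing HTTP headers, and the Lambda function extracts that context to start a child span, so both ends show up in one trace. The sketch below illustrates the pattern with a minimal stand-in tracer (the real code would use the `opentracing` and `lightstep-tracer` packages; the `Tracer`/`Span` classes and the `x-trace-span-id` header here are simplified assumptions, not the actual wire format):

```javascript
// Minimal sketch of OpenTracing-style context propagation between an
// Express service and a Lambda function. The tiny Tracer below is a
// stand-in for a real OpenTracing tracer (e.g. lightstep-tracer); the
// API shape and header name are simplified for illustration.

class Span {
  constructor(name, parentId, tracer) {
    this.name = name;
    this.spanId = Math.random().toString(16).slice(2, 10);
    this.parentId = parentId || null;
    this.tracer = tracer;
    this.tags = {};
  }
  setTag(key, value) { this.tags[key] = value; return this; }
  finish() { this.tracer.finished.push(this); }
}

class Tracer {
  constructor() { this.finished = []; }
  startSpan(name, options = {}) {
    const parentId = options.childOf ? options.childOf.spanId : null;
    return new Span(name, parentId, this);
  }
  // inject: write the span context into a carrier (outgoing HTTP headers)
  inject(span, headers) { headers['x-trace-span-id'] = span.spanId; }
  // extract: rebuild a span context from an incoming carrier
  extract(headers) {
    const id = headers['x-trace-span-id'];
    return id ? { spanId: id } : null;
  }
}

const tracer = new Tracer();

// "Express" side: handle a request, then call the Lambda with injected headers.
const parent = tracer.startSpan('GET /checkout');
const outgoingHeaders = {};
tracer.inject(parent, outgoingHeaders);

// "Lambda" side: extract the context and continue the same trace.
const ctx = tracer.extract(outgoingHeaders);
const child = tracer.startSpan('lambda: process-order', { childOf: ctx });
child.setTag('faas.provider', 'aws-lambda');
child.finish();
parent.finish();

console.log(child.parentId === parent.spanId); // → true: one trace, two spans
```

Because the context rides along in ordinary HTTP headers, the same pattern works whether the downstream hop is a container, a Lambda invocation behind API Gateway, or another service entirely.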

Integrating LightStep [x]PM With Istio

Service mesh technologies decouple application logic from service infrastructure concerns. This separation enables organizations to converge on standard patterns of observability, making it easier for them to consume metrics, logs, and, most excitingly (OK, we're biased), distributed tracing across all of their services. In this post, we'll introduce a LightStep [x]PM integration we built for Istio and show how it works with an example application deployed on Istio. The integration makes it faster and easier to get started with distributed tracing at scale. If this is your first time hearing about Istio, Envoy, or service meshes, check out the Istio website.
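For context, Istio's 1.x Helm chart exposed tracer configuration as installation values, so pointing the mesh at a LightStep Satellite was a matter of a few `--set` flags at install time. The value names below are from the Istio 1.x Helm chart as best I recall, and the address/token are placeholders; check the values file for your Istio release before relying on them:

```shell
# Render the Istio install with the LightStep tracer enabled.
# Placeholders: replace the Satellite address and access token with your own.
helm template install/kubernetes/helm/istio \
  --name istio --namespace istio-system \
  --set global.proxy.tracer=lightstep \
  --set global.tracer.lightstep.address="lightstep-satellite.example.com:8080" \
  --set global.tracer.lightstep.accessToken="YOUR_ACCESS_TOKEN" \
  > istio-lightstep.yaml
```

Once applied, every Envoy sidecar in the mesh reports spans to the configured Satellite, with no changes to application code beyond forwarding trace headers.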

Distributed Tracing With Istio

Istio 1.x supports distributed tracing via two mechanisms: