Setting Up Request Rate Limiting with NGINX Ingress

In today's highly interconnected digital landscape, web applications face the constant challenge of handling a high volume of incoming requests. Not all of that traffic is equal, however, and excessive traffic can strain resources, causing service disruptions or exposing you to security risks. Implementing request rate limiting is a crucial safeguard for the stability and security of your environment.

Request rate limiting lets you control how many requests a client can send to a server or application within a given period of time. By setting limits, you can prevent abuse, manage resource allocation, and mitigate malicious activity such as DDoS or brute-force attacks. In this article, we will explore how to set up request rate limiting using the NGINX Ingress Controller, a popular ingress controller for Kubernetes. We will also demonstrate how to test the rate-limiting configuration with Locust, an open source load-testing tool.
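With the community ingress-nginx controller, rate limits are applied declaratively through annotations on an Ingress resource. The sketch below shows the general shape of such a manifest; the Ingress name, the host demo.example.com, and the backend service demo-service are placeholders for illustration.

```yaml
# Ingress manifest with rate limiting enabled through ingress-nginx
# annotations. Names and host are illustrative placeholders.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    # Allow at most 5 requests per second from each client IP.
    nginx.ingress.kubernetes.io/limit-rps: "5"
    # Tolerate short bursts of up to 5x the limit before rejecting.
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "5"
spec:
  ingressClassName: nginx
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-service
                port:
                  number: 80
```

With this in place, requests from a single client IP beyond the configured rate (plus the allowed burst) are rejected by the controller instead of being forwarded to the backend.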
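To verify that the limit behaves as intended, Locust can drive a controlled stream of requests at the ingress. Below is a minimal sketch of a locustfile; it assumes the default behavior of ingress-nginx, which answers rate-limited requests with HTTP 503, and treats that status as the expected "limit reached" signal.

```python
# locustfile.py -- minimal sketch of a load test against the rate-limited
# ingress. The target host is supplied on the command line; the path "/"
# is an illustrative placeholder.
from locust import HttpUser, constant, task


class RateLimitUser(HttpUser):
    # Each simulated user sends roughly one request per second.
    wait_time = constant(1)

    @task
    def hit_endpoint(self):
        # ingress-nginx rejects rate-limited requests with HTTP 503 by
        # default, so mark 503 responses as the expected outcome once
        # the limit kicks in.
        with self.client.get("/", catch_response=True) as response:
            if response.status_code == 503:
                response.success()
```

Run it with `locust -f locustfile.py --host http://demo.example.com` and ramp the user count past the configured limit to watch the rejections appear in the Locust statistics.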
