3 Ways to Reduce Latency in Multi-Region Deployments

Globally deployed applications don't have to be slow. In this post, we'll show you three ways to reduce latency in multi-region deployments, using a sample app we built called Wikifeedia. Wikifeedia is built on top of Wikipedia's public APIs. It shows users a globally sorted index of the most reviewed content in each language over the previous day. Since its target audience is global, it needs to be accessible from anywhere in the world with low latency. But, like many news aggregators, its content isn't changing from second to second, so it can tolerate slightly stale data.
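Tolerating slightly stale data is what makes low-latency reads possible from every region: a query that accepts bounded staleness can be served by a nearby replica instead of crossing the globe to the leaseholder. The following Go sketch shows the general idea using CockroachDB's AS OF SYSTEM TIME follower_read_timestamp() clause; the table name, columns, and connection string are illustrative assumptions, not Wikifeedia's actual schema.

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // PostgreSQL-compatible driver works with CockroachDB
)

func main() {
	// Hypothetical connection string; adjust for your own cluster.
	db, err := sql.Open("postgres", "postgresql://user@localhost:26257/wikifeedia?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// AS OF SYSTEM TIME follower_read_timestamp() tells CockroachDB that the
	// query may read data that is a few seconds old, which lets any replica
	// (not just the leaseholder) serve it from the local region.
	rows, err := db.Query(`
		SELECT title, views
		FROM top_articles AS OF SYSTEM TIME follower_read_timestamp()
		WHERE lang = $1
		ORDER BY views DESC
		LIMIT 10`, "en")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	for rows.Next() {
		var title string
		var views int
		if err := rows.Scan(&title, &views); err != nil {
			log.Fatal(err)
		}
		fmt.Println(title, views)
	}
}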

The application is hosted on Google Kubernetes Engine, while the underlying database is hosted using our own Cockroach Cloud managed service offering. As a side note, this application has become a personal favorite of ours, as we find ourselves eagerly checking Wikifeedia to see what's trending on Wikipedia on any given day.

Atomic Replication Changes in etcd/Raft

At Cockroach Labs, we write quite a bit about consensus algorithms. They are a critical component of CockroachDB and we rely on them in the lower layers of our transactional, scalable, distributed key-value store. In fact, large clusters can contain tens of thousands of consensus groups because in CockroachDB, every Range (similar to a shard) is an independent consensus group. Under the hood, we run a large number of instances of Raft (a consensus algorithm), which has come with interesting engineering challenges. This post dives into one that we’ve tackled recently: adding support for atomic replication changes (“Joint Quorums”) to etcd/raft and using them in CockroachDB to improve resilience against region failures.

A replication change is a configuration change of a Range: a change in where the consistent copies of that Range should be stored. Let's use a standard deployment topology to illustrate this, starting with a rough sketch of what such a change looks like at the etcd/raft level.
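The sketch below is a minimal, hypothetical example of proposing an atomic replication change through etcd/raft's ConfChangeV2 API; the node IDs and the surrounding raft setup are assumptions for illustration, and CockroachDB wraps this machinery inside its own replication layer rather than calling it this directly.

package replication

import (
	"context"

	"go.etcd.io/etcd/raft/v3"
	"go.etcd.io/etcd/raft/v3/raftpb"
)

// proposeAtomicSwap adds node 4 as a voter and removes node 1 in a single
// configuration change. Because more than one change is bundled into the
// ConfChangeV2, raft passes through a joint configuration in which both the
// old and the new voter sets must agree before the change commits.
func proposeAtomicSwap(ctx context.Context, n raft.Node) error {
	cc := raftpb.ConfChangeV2{
		// Auto lets raft enter and leave the joint configuration on its own.
		Transition: raftpb.ConfChangeTransitionAuto,
		Changes: []raftpb.ConfChangeSingle{
			{Type: raftpb.ConfChangeAddNode, NodeID: 4},
			{Type: raftpb.ConfChangeRemoveNode, NodeID: 1},
		},
	}
	return n.ProposeConfChange(ctx, cc)
}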

The Effect of Isolation Levels on Distributed SQL Performance Benchmarking

[Image caption: This cottage has a high isolation level.]

The general perception is that benchmarks published by vendors can never be trusted; however, well-run benchmarks absolutely have their place, even if performed by a vendor. Benchmarks are well run when the input parameters to the benchmarked systems match the needs of the target workloads included in the benchmark. The target workloads for the benchmark in question are simple inserts and secondary indexes, served with high performance, at massive scale, and with ACID guarantees.
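To make that workload concrete, here is a minimal Go sketch of the pattern being described: simple inserts into a table that also maintains a secondary index, so every write must atomically update both the primary and the secondary index. The schema, row counts, and connection string are illustrative assumptions, not the benchmark's actual configuration.

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // PostgreSQL-compatible driver works with CockroachDB
)

func main() {
	db, err := sql.Open("postgres", "postgresql://user@localhost:26257/bench?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Primary key on id plus a secondary index on user_id: each insert
	// transactionally maintains both, which is what this kind of benchmark stresses.
	if _, err := db.Exec(`
		CREATE TABLE IF NOT EXISTS events (
			id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
			user_id INT NOT NULL,
			payload STRING,
			INDEX events_user_id_idx (user_id)
		)`); err != nil {
		log.Fatal(err)
	}

	// Simple single-row inserts, the basic unit of the workload.
	for i := 0; i < 1000; i++ {
		if _, err := db.Exec(
			`INSERT INTO events (user_id, payload) VALUES ($1, $2)`,
			i%100, fmt.Sprintf("event-%d", i),
		); err != nil {
			log.Fatal(err)
		}
	}
}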

Lessons Learned From 2+ Years of Nightly Jepsen Tests

Since the pre-1.0 betas of CockroachDB, we've been using Jepsen to test the correctness of the database in the presence of failures. We have re-run these tests every night as a part of our nightly test suite. Last fall, these tests found their first post-release bug. This blog post is a more digestible walkthrough of that discovery (many of the links here point to specific comments in that issue's thread to highlight the most important moments).

Two Years of Jepsen Testing

Running Jepsen tests in an automated fashion has been somewhat challenging. The tests run in a complex environment with network dependencies, spawn multiple cloud VMs, and have many potential points of failure, so they turn out to be rather flaky. If you search our issue tracker for Jepsen failures, you'll see a lot of issues, but before the bug we're discussing here, they were all benign: failures of our automation setup, not an inconsistency or bug in CockroachDB itself. By now, though, we've worked out the kinks and have the tests running reliably.