How a Distributed File System in Go Reduced Memory Usage by 90%

JuiceFS, written in Go, can manage tens of billions of files in a single namespace. Its metadata engine uses an all-in-memory approach and achieves remarkable memory optimization, handling 300 million files with 30 GiB of memory and a response time of 100 microseconds. Techniques such as memory pools, manual memory management, directory compression, and compact file formats reduced metadata memory usage by 90%.

JuiceFS Enterprise Edition, a cloud-native distributed file system written in Go, can manage tens of billions of files in a single namespace. After years of iteration, a single metadata service process can manage about 300 million files using 30 GiB of memory, while keeping the average metadata request processing time at 100 microseconds. In production, 10 metadata nodes, each with 512 GB of memory, collectively manage over 20 billion files.
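To illustrate one of the techniques named above, here is a minimal sketch of object pooling in Go with `sync.Pool`, which recycles allocations so that heavy metadata-request churn does not translate into garbage-collector pressure. The `node` struct and helper functions are illustrative assumptions, not JuiceFS's actual types.

```go
package main

import (
	"fmt"
	"sync"
)

// node is a hypothetical in-memory metadata entry.
type node struct {
	inode uint64
	name  []byte
}

// nodePool hands out recycled node structs instead of allocating a
// fresh one per request, reducing GC work under high request rates.
var nodePool = sync.Pool{
	New: func() any { return new(node) },
}

// getNode fetches a node from the pool and initializes it,
// reusing the name slice's backing array when possible.
func getNode(inode uint64, name string) *node {
	n := nodePool.Get().(*node)
	n.inode = inode
	n.name = append(n.name[:0], name...)
	return n
}

// putNode clears the node and returns it to the pool.
func putNode(n *node) {
	n.inode = 0
	nodePool.Put(n)
}

func main() {
	n := getNode(42, "file.txt")
	fmt.Println(n.inode, string(n.name))
	putNode(n)
}
```

The same idea generalizes: pooling fixed-size metadata structs, plus compact encodings for names and directory entries, is what keeps per-file memory overhead low at the scale described.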

A Deep Dive Into the Directory Quotas Design of a Distributed File System

As an open-source distributed file system, JuiceFS introduced the directory quota feature in version 1.1. The commands released with this feature support setting quotas on directories, retrieving directory quota information, and listing all directory quotas. For details, see the Command Reference document.

In designing this feature, our team went through numerous discussions and trade-offs regarding its accuracy, effectiveness, and performance implications.

GlusterFS vs. JuiceFS

GlusterFS is an open-source, software-defined distributed storage solution. A single cluster can store data at the PiB scale.

JuiceFS is an open-source, high-performance distributed file system designed for the cloud. It delivers massive, elastic, and high-performance storage at a low cost.