Redis Is Not Just a Cache

Do You Always Need a Classic Database?

For some time now, the sheer amount of data that applications need to process has forced most of them to put a caching strategy in front of their databases. Even with heavy underlying optimizations, databases can't always provide enough speed and availability. The main reason is that the farther away the data is, the harder it is to access. Another reason is that a database usually persists data on disk rather than in RAM. Databases do keep embedded in-memory caches to speed things up, but even so, a dedicated, separate cache is a well-established strategy.

The usual solution to database performance issues is caching, and caching isn't new. It simply means keeping a smaller amount of frequently accessed data closer to you. Processors have caches, databases have embedded caches, and you can even code your own cache inside your app.
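As a rough illustration, here is a minimal cache-aside sketch using the redis-py client; the load_user_from_db function, the key naming, and the connection settings are hypothetical placeholders for this example, not a prescribed implementation:

import json
import redis

# Assumed local Redis instance; adjust host/port for your setup.
r = redis.Redis(host="localhost", port=6379, db=0)

def load_user_from_db(user_id):
    # Hypothetical slow database lookup, stubbed out for the example.
    return {"id": user_id, "name": "Jane Doe"}

def get_user(user_id):
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        # Cache hit: serve the data straight from RAM.
        return json.loads(cached)
    # Cache miss: read from the database, then cache the result
    # with a TTL so stale entries eventually expire.
    user = load_user_from_db(user_id)
    r.set(key, json.dumps(user), ex=300)
    return user

The idea is simply that the first lookup pays the full database cost, while subsequent reads within the TTL are served from Redis in memory.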

Microservices, Event-Driven Architecture and Kafka

Imagine a huge monolithic application with many complex functionalities tightly tied together. Scalability is a big challenge, the deployment process can become very cumbersome, and, since the internal components are highly coupled, changing the functional flow isn't going to be easy.

Many people are probably familiar with this setup, since it was the standard way to build applications until a few years ago, and there are still plenty of monoliths in production today.