I have heard the following question many times:
'How do I scale microservices?'
To understand the current state of migrating legacy apps to microservices, we spoke to IT executives from 18 different companies. We asked, "What are some real-world problems you, or your clients, are solving by migrating legacy apps to microservices?" Here’s what we learned:
Caching has been around for decades, because accessing data quickly and efficiently is critical when building application services. Caching is a core requirement for building scalable microservice applications, so we will review three approaches to caching in modern cloud-native applications.
In many use cases, the cache is a relatively small data store built on fast, expensive technology. The primary benefit of a cache is reducing the time it takes to access frequently queried data that lives on slower, cheaper, larger data stores. Modern applications offer many ways to store cached data, but let's briefly cover the two most popular approaches:
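To make the "fast cache in front of a slow store" idea concrete, here is a minimal sketch of the cache-aside pattern with a time-to-live, in Python. The names (`TTLCache`, `fetch_user`, `slow_db_lookup`) are illustrative assumptions, not a real library; in practice a shared cache such as Redis or Memcached typically fills this role.

```python
import time

class TTLCache:
    """Minimal in-memory cache with a per-entry time-to-live (a sketch,
    not production code)."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None  # miss: never cached
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]  # expired: evict and report a miss
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)


def fetch_user(user_id, cache, slow_db_lookup):
    """Cache-aside: consult the fast cache first, fall back to the slow store."""
    user = cache.get(user_id)
    if user is None:
        user = slow_db_lookup(user_id)  # expensive call to the backing store
        cache.set(user_id, user)       # populate the cache for future readers
    return user
```

The first call for a given key pays the full cost of the backing store; subsequent calls within the TTL are served from memory, which is the latency reduction the paragraph above describes.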
Microservices architectures are very popular today. In this article, we discuss the three main advantages of having a microservices architecture.
This is the third article in a series of five articles on cloud and microservices. Here are the first two:
Deciding when and how to approach scaling your service can be a daunting task, but it doesn't have to be. To scale effectively, engineers must have a solid understanding of the system's behavior, data to support their hypotheses, and a method to test the results. We'll talk more about measuring performance later in this series, but for now it's important to understand that the most effective scaling strategies involve "Just In Time" optimizations.
It's critical to have an environment and culture that allow for production-level testing, whether via canary-style deployments or synthetic testing infrastructure, so that test engineers can develop load tests which closely emulate production workloads. If neither is available in your organization, you're effectively shooting in the dark when it comes to scaling and optimization.
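As a rough sketch of the synthetic-testing idea, the following drives a handler with concurrent requests and tallies outcomes. Everything here is an illustrative assumption: a real load test would hit a network endpoint (ideally a canary deployment) with a dedicated load-generation tool, not an in-process function.

```python
from concurrent.futures import ThreadPoolExecutor

def run_load_test(handler, total_requests=200, concurrency=10):
    """Drive a handler with concurrent synthetic requests and count
    successes vs. failures. A toy stand-in for a real load generator."""
    results = {"ok": 0, "error": 0}

    def one_request(i):
        try:
            handler(i)
            return "ok"
        except Exception:
            return "error"

    # Fan requests out across a worker pool to emulate concurrent clients.
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for outcome in pool.map(one_request, range(total_requests)):
            results[outcome] += 1
    return results
```

The point of emulating production concurrency, rather than issuing requests serially, is that contention effects (lock waits, connection-pool exhaustion, queue buildup) only appear under parallel load.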
When designing distributed systems, it's important to understand that explicit design decisions must be made to enable scalability within components. These applications must be engineered from the beginning to meet anticipated needs, with options that facilitate future growth. We design for scale because we expect the platform to grow, which means more users, features, or data.
This is the first article in a series of posts where we will discuss topics which include: