This article continues from my previous post. There, I explored how a microservice-based solution can address several limitations of serverless computing; for instance, AWS Lambda’s execution time limit for long-running tasks can be mitigated by moving to containerized solutions. In this article, I’ll dive deeper into the microservice concept and explain why it’s crucial for enabling true serverless architecture with better reliability, scalability, and flexibility.
How Microservices Can Help Serverless Architectures Handle Long-Running Tasks
Before diving into microservices, let’s consider a simple real-world example.
Imagine you’re running a company. You have departments—like IT, Sales, and Marketing—each with its own responsibilities, but all working together toward the same big goals: growing the business, keeping customers happy, and increasing revenue.
Each department focuses on what it does best. Sales connects with customers and gathers information. IT processes and analyzes that information. Marketing uses the insights to plan smart campaigns. No department works in isolation—they all depend on each other to succeed.
To work more efficiently, you set up offices in different parts of the world. You place Sales teams in regions where your products are popular, IT teams in areas with strong technical talent, and Marketing teams where they can engage the right audience. Each office has the tools and resources it needs—nothing more, nothing less.
Everything is distributed and organized so each part of the company can be more focused and effective. You have granular control over resources, and you can execute your plan knowing that every team has what it needs.
Now imagine doing the opposite. What if all your teams—Sales, Marketing, IT—had to work in one big office, sharing desks, meeting rooms, and tools? No matter how different their work is, they’re stuck using the same resources.
Here’s what would happen:
- The Sales team wants to meet clients, but the meeting room is already booked by Marketing.
- IT needs to test a new idea, but the equipment is being used for something else.
- If the office Wi-Fi goes down, everyone is affected.
This is similar to what we call a monolithic system in software. Everything is bundled into one large block. If you want to change even a small part, you often end up needing to adjust everything else, and if one part fails, the whole system can go down. This is closely related to what’s known in computer science as the “noisy neighbor” problem: one greedy application disrupts others by consuming shared resources like CPU, memory, or storage. The term comes up often in multi-tenant cloud and shared-storage contexts, but it applies to any system where workloads compete for the same underlying resources.
On the other hand, the first setup—with separate, independent teams—is like a microservices approach. Each team (or microservice) handles its own tasks and doesn’t interfere with others. They still work together, but they don’t block each other’s progress. If one team needs to change something, it can do so without affecting everyone else. And if one part of the system goes down, the rest can keep running.
The Appeal of Microservices
Microservices bring a lot of flexibility, but they also come with their own challenges. Let’s not get too caught up in the trade-offs for now; that’s a whole other discussion. The key is the use case: microservices aren’t always the right solution, and it’s important to think through your system’s needs before jumping to conclusions. For today, though, let’s focus on why microservices are so appealing.
First off, control. With microservices, you’re not stuck with a giant, always-on system that consumes resources even when it’s not doing anything. Instead, you can say, “This service is done—let’s shut it off.” This is especially cost-effective in serverless environments, where you pay for usage, not idle time. Why keep everything running when only a few parts of the system are active?
The Power of Loose Coupling
One of the key benefits of microservices is loose coupling. Let’s use our company analogy again. Sales, IT, and Marketing are each focused on their tasks without constantly relying on one another. Sales doesn’t need IT to try a new pitch, and Marketing can run campaigns independently. Each department has its own responsibilities, but they coordinate just enough to stay aligned.
In software terms, loose coupling means that each microservice handles its own tasks without needing to know how the others work internally. They communicate via APIs—simple, predictable messengers passing requests and responses without worrying about each other’s inner workings.
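To make that concrete, here’s a minimal sketch in Python. It uses Flask purely for illustration; the service name, route, and JSON shape are hypothetical. The point is simply that callers only ever see the API contract, never the internals:

```python
# A minimal sketch of loose coupling over HTTP (assumes Flask is installed).
# The "sales" service publishes a contract: a URL plus a JSON shape. Callers
# depend on that contract alone, never on how leads are stored internally.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/leads", methods=["POST"])
def create_lead():
    lead = request.get_json()  # the only thing callers must agree on
    # ...internal storage details stay hidden behind this endpoint...
    return jsonify({"status": "accepted", "lead": lead}), 201

if __name__ == "__main__":
    app.run(port=5001)

# Any other service talks to it purely through the API, for example:
#   import requests
#   requests.post("http://localhost:5001/leads", json={"name": "Acme Corp"})
```

As long as the contract stays stable, the Sales service can change its internals freely, and no other service has to know.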
Take ChatGPT as an example. When you interact with ChatGPT, you ask a question and get an answer. But behind the scenes, ChatGPT is calling different services—like language understanding or context management—without you needing to know how they work. You only care about the final answer, and you interact with the system through a simple API call.
Microservices in Action: A Hypothetical Case Study
Let’s bring it back to software. Imagine you have an on-premise server running both your Database Management System (DBMS) and RabbitMQ for messaging between different applications. There’s no virtualization in this setup, so both share the same physical resources. You’d have unit tests to check that the DBMS and RabbitMQ work in isolation, and integration tests to ensure RabbitMQ can communicate with the DBMS for data persistence. But the real question is: can you guarantee that these two systems won’t cause issues down the line?
The answer is NO. A sudden spike in RabbitMQ queues can increase CPU usage, which will affect DBMS performance, slowing down queries and impacting users. Even in virtualized environments, services are still bound by the limitations of the underlying physical hardware. For instance, if your MySQL VM is assigned 4 vCPUs, but the host only has one physical CPU, your performance is limited by that single core.
But with microservices, you can isolate RabbitMQ and the DBMS into separate environments, such as different physical servers or containers. This ensures that the two services don’t interfere with each other. If a message needs to reach the DBMS, a simple network call, such as an HTTP request to a small service that owns the database, can handle it. Microservices make such interactions seamless and efficient.
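As a rough illustration, here’s what the decoupled consumer side might look like in Python, assuming the pika client library, a RabbitMQ broker running in its own container at the hypothetical hostname rabbitmq, and a made-up orders queue:

```python
# A sketch of the decoupled setup: RabbitMQ lives in its own container, and
# this small consumer service is the only bridge between queue and database.
import sqlite3

import pika

db = sqlite3.connect("orders.db")
db.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, payload TEXT)")

def persist(ch, method, properties, body):
    # Broker and database never share a host; each message is persisted here.
    db.execute("INSERT INTO orders (payload) VALUES (?)", (body.decode(),))
    db.commit()

connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbitmq"))
channel = connection.channel()
channel.queue_declare(queue="orders")
channel.basic_consume(queue="orders", on_message_callback=persist, auto_ack=True)
channel.start_consuming()
```

Now a queue spike stresses the broker’s container and a heavy query stresses the database host, but neither steals resources from the other.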
How Microservices Help with Long-Running Tasks in Serverless Environments
Here’s where serverless architecture and long-running tasks come into play. Serverless environments like AWS Lambda are perfect for short tasks, but what happens when you have a long-running job—say, a machine learning model training task?
Lambda functions have a time limit, and you can’t always fit large, complex tasks into a 15-minute window. This is where microservices come in handy. You can break down long tasks into smaller, manageable chunks and chain them together. Each microservice handles one part of the task and passes the baton to the next. It’s not about cramming everything into one large function—it’s about breaking it down into smaller, independent tasks that work together efficiently.
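Here’s a hedged sketch of that baton-passing pattern in Python with boto3; the downstream function name, the event shape, and the do_partial_work helper are all hypothetical:

```python
# A sketch of chaining Lambda stages: each invocation processes one chunk
# well inside the 15-minute cap, then asynchronously triggers the next stage.
import json

import boto3

lambda_client = boto3.client("lambda")

def do_partial_work(chunk):
    # Placeholder for the real per-chunk computation.
    return sum(chunk)

def handler(event, context):
    result = do_partial_work(event["chunk"])

    if event["next_chunks"]:  # pass the baton to the next stage
        lambda_client.invoke(
            FunctionName="process-next-chunk",  # hypothetical downstream Lambda
            InvocationType="Event",             # async: fire and return
            Payload=json.dumps({
                "chunk": event["next_chunks"][0],
                "next_chunks": event["next_chunks"][1:],
                "partial_result": result,
            }),
        )
    return {"partial_result": result}
```

In practice, AWS Step Functions is the managed way to express this kind of chain, but the idea is the same: small, independent steps instead of one oversized function.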
Microservices are well suited to managing these long-running, complex tasks in a serverless setup. Each service can scale independently, keeping your system flexible and efficient, and if one service fails, it doesn’t bring the whole system down. It’s like having a team of independent workers, each doing their part without causing disruptions.
Containers and Orchestration with Kubernetes
This is why containers are a great fit for microservices. A container bundles your app with only what it needs to run, and it’s easy to deploy anywhere. Containers make microservices portable and lightweight. But managing multiple microservices requires more than just containers—you need an orchestration system to coordinate the scaling and management of these services.
That’s where Kubernetes comes in. Kubernetes automates the deployment, scaling, and management of containerized applications. It ensures that each microservice runs in its own container, scaling based on demand without manual intervention. It’s the key to making microservices work reliably at scale.
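As a small illustration, here’s a sketch using the official Kubernetes Python client to resize a hypothetical worker Deployment; in a real cluster you’d usually let a Horizontal Pod Autoscaler make this decision based on demand:

```python
# A minimal sketch with the Kubernetes Python client (assumes a configured
# kubeconfig). It scales a hypothetical "worker" Deployment, the kind of
# knob Kubernetes normally turns automatically via autoscaling.
from kubernetes import client, config

config.load_kube_config()  # reads ~/.kube/config
apps = client.AppsV1Api()

apps.patch_namespaced_deployment_scale(
    name="worker",                      # hypothetical Deployment name
    namespace="default",
    body={"spec": {"replicas": 5}},     # run five pods of this microservice
)
```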
Machine Learning Training and Inference with Microservices
Microservices are also a great fit for machine learning tasks. Neural networks are trained to make predictions on text, images, audio, or video. Before the network can process data, the data must be transformed into a numerical format, like converting an image into pixel values or audio into spectrograms.
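For instance, here’s a minimal sketch of that data-to-numbers step for an image, assuming NumPy and Pillow are available (the file name is hypothetical):

```python
# A sketch of turning an image into the numeric input a network expects.
import numpy as np
from PIL import Image

img = Image.open("sample.png").convert("L")  # hypothetical input, grayscale
pixels = np.array(img, dtype=np.float32)     # 2-D array of pixel intensities
pixels /= 255.0                              # normalize to [0, 1]
features = pixels.flatten()                  # 1-D feature vector for the model
```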
Once the data is in the right format, the model uses backpropagation to adjust its internal weights and learn from the data. The process involves tweaking various hyperparameters like learning rate and batch size. Once the model is trained, it’s ready for inference—making predictions on new data.
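Here’s a compact sketch of that training loop, using PyTorch purely for illustration, with random stand-in data so it runs on its own:

```python
# A sketch of one training loop: backpropagation adjusts the weights, and
# the learning rate and batch size are the hyperparameters mentioned above.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # learning rate
loss_fn = nn.CrossEntropyLoss()
batch_size = 32                                           # batch size

for step in range(100):
    x = torch.randn(batch_size, 784)         # stand-in for real feature vectors
    y = torch.randint(0, 10, (batch_size,))  # stand-in labels
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                          # backpropagation
    optimizer.step()                         # weight update
```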
By splitting these tasks into microservices, you can break down the model training process into smaller, more manageable chunks. Each microservice can handle a specific part of the training or inference process, running in parallel or sequentially, depending on the system’s needs. This is ideal for serverless environments like AWS Lambda, where long-running tasks can be broken into smaller, independent steps that work together.
Okay, the post is getting a bit lengthy. In the next post, I’ll dive deeper into how the microservice concept can be used to run neural network-based ML model training tasks in a serverless way. This approach offers a great balance between reliability, scalability, cost-efficiency, and sustainability. Stay tuned!