Serverless computing, continued.

When reading my articles, you might notice that some ideas are simplified — and that’s intentional. I know that not everyone comes from a technical background, and I want these posts to be as approachable as possible. That’s also why I welcome conversation in the comments — to dive deeper, clarify, and learn together. My audience comes from all kinds of fields — not just IT. Whether you’re in biotech, e-commerce, business, or something entirely different, my goal is to share the exciting, sometimes complex world of computer systems in a way that feels clear and engaging.

This post continues from here: https://joyantablog.wordpress.com/2025/03/26/serverless-computing-what-really-happens-inside-contd/.

Alright, time to wrap up this serverless series. If you’ve been following along, you’ve probably got a good idea by now of what serverless actually means and where it fits in the cloud world. But just to make it clear — let’s bring all the pieces together.

Serverless computing is basically a way to run your code in the cloud without worrying about the servers underneath. You don’t have to spin up machines, set up infrastructure, or babysit anything. You just write the logic, deploy it, and the cloud provider handles everything else — scaling, networking, patching, the whole deal.
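To make that concrete, here's roughly what a complete serverless function can look like. This is a minimal Python sketch of an AWS Lambda handler; the event fields and the greeting logic are placeholders, but the shape is the real thing: you write one function, deploy it, and the platform calls it for you.

```python
import json

def lambda_handler(event, context):
    # AWS invokes this function for you. There's no server code,
    # no web framework, no process management anywhere in sight.
    name = event.get("name", "world")   # placeholder input field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```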

The beauty of it? You only pay when your code runs. That’s it. No idle time costs. If nothing happens, you pay nothing. It’s cost-efficient, especially for apps that don’t run all the time. And it’s pretty good for the environment too. Since the infrastructure only spins up when needed, there’s less energy waste and fewer machines running around the clock.
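To see how that pricing model plays out, here's some back-of-the-envelope math. The rates below are illustrative assumptions, not current AWS prices (check the official pricing page), but the structure, a per-request fee plus a per-GB-second compute fee, is how pay-per-run billing works:

```python
# Illustrative pay-per-use math. Rates are assumptions for the sketch:
# say ~$0.20 per 1M requests and ~$0.0000167 per GB-second of compute.
requests_per_month = 100_000
avg_duration_s = 0.2      # 200 ms per invocation
memory_gb = 0.125         # a small 128 MB function

request_cost = requests_per_month / 1_000_000 * 0.20
compute_cost = requests_per_month * avg_duration_s * memory_gb * 0.0000167

print(f"~${request_cost + compute_cost:.2f}/month; $0.00 when idle")
```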

Now, let’s talk about containers — because they’re a big part of how serverless systems stay lightweight and fast. Containers are like portable little boxes that bundle your code with just the stuff it needs to run — nothing more. Unlike full-blown virtual machines, they don’t drag a whole operating system along; they share the host’s kernel instead. That keeps containers clean, lean, and ready to go in seconds.

Because they’re small and self-contained, containers help reduce what’s called the cold start time — that little delay when something boots up after being idle. The smaller and more efficient the setup, the faster it responds. That’s huge when you’re trying to build apps that feel fast and seamless.
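One common trick for softening cold starts is to do the heavy setup outside the handler. Here's a sketch in Python; the bucket and key names are made up, but the pattern itself is standard Lambda practice: anything at module level runs once per cold start and gets reused while the environment stays warm.

```python
import json
import boto3

# This part runs once, at cold start. The client and config are then
# cached by the execution environment and reused on warm invocations.
s3 = boto3.client("s3")
BUCKET = "my-app-config"   # assumed bucket name, purely illustrative

def lambda_handler(event, context):
    # Warm invocations skip straight to here: no client re-creation,
    # so the function responds in milliseconds instead of seconds.
    obj = s3.get_object(Bucket=BUCKET, Key="settings.json")
    return json.loads(obj["Body"].read())
```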

Alright, now let’s move into something real. Say you’ve got a search app. A user types in a question, and your system goes off to query a few databases, maybe even hits a vector store, pulls in some results, ranks them, and sends the best match back. Pretty standard stuff. If you wanted to build that with a serverless mindset, your first thought might be: “Hey, let me just throw this in a Lambda function.”
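In Lambda terms, that search flow might look something like the sketch below. The helper functions are hypothetical stand-ins (real code would call your actual databases and vector store), but they show how naturally the whole pipeline fits into a single function:

```python
def query_databases(query):
    # Stand-in for real database lookups.
    return [{"text": "result from SQL", "score": 0.4}]

def query_vector_store(query):
    # Stand-in for a semantic search against a vector store.
    return [{"text": "semantically similar doc", "score": 0.9}]

def rank(query, candidates):
    # Simplest possible ranker: highest score first.
    return sorted(candidates, key=lambda c: c["score"], reverse=True)

def lambda_handler(event, context):
    query = event["query"]
    candidates = query_databases(query) + query_vector_store(query)
    ranked = rank(query, candidates)
    return {"best_match": ranked[0] if ranked else None}
```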

Makes sense at first. Lambda is easy to set up and doesn’t cost you anything while it’s idle. But then the reality hits — how long is that search operation going to take? If it’s a simple query, sure. But what if the data is huge? What if you’re calling out to multiple external services or crunching some embeddings on the fly?

Now imagine you’re training a machine learning model on a massive dataset. No top-tier GPU. Just you, some CPU time, and a giant pile of training data. That job could take hours. Maybe days.

And here’s the thing: AWS Lambda has a hard timeout — 15 minutes max. If your function is still running at 15:01, boom — it gets shut down.
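If you'd rather fail gracefully than get killed mid-job, the Lambda context object can tell you how much time is left. This sketch (the work items and the process step are hypothetical) checkpoints before the deadline hits:

```python
def process(item):
    # Hypothetical unit of work; imagine a query or a batch of records.
    pass

def lambda_handler(event, context):
    for item in event["work_items"]:
        # get_remaining_time_in_millis() reports how long we have
        # before the 15-minute hard timeout shuts us down.
        if context.get_remaining_time_in_millis() < 10_000:
            # Under 10 seconds left: save our place and exit cleanly
            # so another invocation can pick up where we stopped.
            return {"status": "partial", "resume_from": item}
        process(item)
    return {"status": "done"}
```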

This is where the idea of “serverless everything” starts to fall apart for bigger or long-running jobs. Serverless is great for lightweight, short-lived tasks — like resizing an image, sending a notification, or responding to a quick API call. But for heavier workloads? No. It just isn’t built for jobs that run longer than the clock allows.

Enter containers.

Containers change the whole game. You’re not locked into a 15-minute window anymore. You can run a container for as long as you want — seconds, hours, whatever. There’s no cap unless you set one yourself. And you still get that nice stateless feeling if you design it that way. You can package your whole ML model, your embeddings, your vector DB connector, all your dependencies — wrap them up in one container and boom, you’re ready to deploy. And it runs exactly the same way on your laptop as it does in the cloud. That’s the magic of containers. No “it worked on my machine” drama.
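For contrast, here's the kind of long-running loop you could never fit inside Lambda but that's perfectly at home in a container. The training step is a stub, but the point stands: nothing here has a clock ticking against it.

```python
import time

def train_one_epoch(epoch):
    # Stub standing in for a real training step over a big dataset.
    time.sleep(1)
    print(f"epoch {epoch} done")

if __name__ == "__main__":
    # No 15-minute ceiling: this runs until the job finishes, whether
    # that takes minutes, hours, or days. Package it with its
    # dependencies in a container image and it behaves the same on
    # your laptop as it does in the cloud.
    for epoch in range(100):
        train_one_epoch(epoch)
```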

Now here’s the cool part — you don’t even need to manage the servers behind those containers. AWS gives you tools to stay serverless-ish even when running full-blown containerized apps. That’s where AWS Fargate and Amazon EKS come in.

With Fargate, you run containers directly in the cloud without touching a single EC2 instance. You just define how much CPU and memory you want, point to your container image, and you’re good to go. No servers to patch. No scaling logic to write. It just works. Fargate is great for microservices, APIs, background jobs, and one-off batch processes.
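Launching one of those containers on Fargate can come down to a single API call. Here's a sketch using boto3, the AWS SDK for Python; the cluster name, task definition, and subnet ID are placeholders you'd swap for your own:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Ask Fargate to run one copy of a pre-registered task definition.
# No EC2 instances involved: AWS finds the compute for you.
response = ecs.run_task(
    cluster="my-cluster",              # placeholder cluster name
    launchType="FARGATE",
    taskDefinition="batch-job:1",      # placeholder task definition
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # placeholder
            "assignPublicIp": "ENABLED",
        }
    },
)
print("started:", response["tasks"][0]["taskArn"])
```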

And if you’re into Kubernetes — or you need more control over how your containers run, scale, and talk to each other — that’s where Amazon EKS comes into play. It’s fully managed Kubernetes on AWS. You can even pair EKS with Fargate to keep the serverless vibe going. You get the power and flexibility of Kubernetes without the headache of managing the cluster infrastructure yourself.
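And if you go the EKS route, you talk to it like any other Kubernetes cluster. Here's a minimal deployment sketch using the official Kubernetes Python client; the app name and image are made up, and in practice you'd often write this as YAML instead:

```python
from kubernetes import client, config

# Assumes your kubeconfig already points at the EKS cluster.
config.load_kube_config()
apps = client.AppsV1Api()

container = client.V1Container(
    name="search-api",                     # placeholder name
    image="myregistry/search-api:latest",  # placeholder image
    ports=[client.V1ContainerPort(container_port=8080)],
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="search-api"),
    spec=client.V1DeploymentSpec(
        replicas=2,   # Kubernetes keeps two copies running for us
        selector=client.V1LabelSelector(match_labels={"app": "search-api"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "search-api"}),
            spec=client.V1PodSpec(containers=[container]),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```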

So yeah — when your app starts pushing past the limits of Lambda, you don’t need to go back to bare metal or mess with virtual machines. Just move up to containers. They scale better, last longer, and give you full control without losing the simplicity you liked in serverless to begin with.

Serverless is awesome when used right. It’s light, fast, and cheap for short tasks. But when your workloads get heavier — like training models, running APIs 24/7, or doing long queries — containers on Fargate or EKS are the next step. You get flexibility, power, and still no server babysitting.

Once again: vendor-specific solutions like AWS EKS and Fargate aren’t applicable to applications hosted on on-premises servers or in a private data center. For those setups, you might turn to open-source solutions (Knative and OpenFaaS are well-known examples) that can be adapted to bring serverless to your own on-premises infrastructure. And for that, you need to understand microservices and container orchestration tools.

Thanks for reading the series. I hope it helped clear things up. But I’m not done with containers yet. In the next post, I’ll start talking about microservices and Kubernetes.

Stay safe.
