We set up Kubernetes clusters for a living. We've migrated 15+ microservices to EKS. We run K8s with Istio service mesh in production. And we're about to talk you out of using any of it.
You went to a conference. Every other talk was about Kubernetes. Your LinkedIn feed is drowning in K8s success stories. Maybe a recruiter hit you up asking if you "do Kubernetes." Now you're thinking about migrating your perfectly working app to K8s.
Stop. Please.
The Conference FOMO Problem
Here's the thing those conference talks leave out. The companies up on stage showing off their Kubernetes setups? They're running hundreds of microservices across thousands of containers. They have platform teams of 5-10 engineers who do nothing but babysit the cluster. Their infra budget is in the millions.
That's not you. And honestly? It doesn't need to be.
90% of startups don't need Kubernetes. We know that sounds wild coming from a team that makes money setting it up. But most of the time, we're on a call with a founder or CTO and our actual job is convincing them they don't need what they're asking us to build.
What Kubernetes Actually Costs (In Real Money)
Let us break this down with real numbers. Running Kubernetes on AWS (EKS) for even a small production workload:
- EKS control plane: $73/month per cluster
- Worker nodes (minimum 3 for HA): $400-$1,200/month
- Load balancers: $22-$65/month
- Monitoring (and yes, you absolutely need it): $130-$400/month
- Learning curve: 2-6 months before your team is comfortable
- Ongoing maintenance: 10-20 hours/month of engineering time
That's $650-$2,000/month minimum. Plus a massive chunk of your team's attention that should be going toward your actual product.
Meanwhile? A straightforward ECS Fargate setup or even plain EC2 with autoscaling handles the same workload for $200-$500/month with almost zero maintenance. We've seen this math play out dozens of times. It's not even close.
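The dollar figures above are our estimates, not official AWS pricing, but the arithmetic is easy to sanity-check. A quick sketch using the low-end numbers from the breakdown:

```python
# Rough monthly cost comparison: small EKS setup vs. a simple ECS Fargate setup.
# All figures are the illustrative estimates from the breakdown above,
# not official AWS pricing.

eks_monthly = {
    "control_plane": 73,   # EKS control plane, per cluster
    "worker_nodes": 400,   # low end: 3 small nodes for HA
    "load_balancer": 22,   # low end
    "monitoring": 130,     # low end
}

fargate_monthly = 200      # low end of a simple ECS Fargate setup

eks_total = sum(eks_monthly.values())
print(f"EKS, low end:     ${eks_total}/month")
print(f"Fargate, low end: ${fargate_monthly}/month")
print(f"Yearly savings:   ${(eks_total - fargate_monthly) * 12}")
```

And that's before you price in the 10-20 hours/month of engineering time.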
The Over-Engineering Tax
We call this the over-engineering tax. You pay it every single day.
- Slower deploys. A Docker push-to-ECS takes about 2 minutes. A Kubernetes deploy with Helm charts, service meshes, and config maps? That's 10-15 minutes of setup to get right, and 5 minutes for every run after that. Multiply that across your team.
- Harder debugging. Something broke at 2am? With a simple setup, you check the logs, find the problem, fix it, go back to sleep. With Kubernetes, you're digging through pod logs, events, ingress controllers, service selectors, persistent volume claims, and node health. Then you're Googling the error because it turns out to be some K8s-specific weirdness that has nothing to do with your code.
- More stuff that can break. Every abstraction layer is another thing that can fail. Kubernetes stacks a lot of layers on top of each other.
- Pricier engineers. Kubernetes people cost more. And if you require K8s experience in your job posts, you just eliminated about 60% of your candidate pool. Good luck hiring.
The Boring Stack That Actually Works
Here's what we actually recommend to most startups we work with. It's boring. It works. It's cheap. Nobody will write a blog post about it, and that's exactly the point.
Not sure what you actually need? We'll tell you for free →
For Web Applications
- AWS ECS with Fargate or, honestly, a single EC2 instance with Docker Compose. We're serious. If you have fewer than 10 containers, Docker Compose on a beefy EC2 instance is completely fine. We've seen startups with real revenue running exactly this.
- An Application Load Balancer for routing and SSL termination.
- RDS for your database. Not self-managed PostgreSQL on Kubernetes. Just use RDS. Let Amazon worry about backups and failovers.
- S3 + CloudFront for static assets.
Total infrastructure cost: $250-$650/month. Total maintenance time: 2-4 hours/month. The rest of your week goes toward building your product.
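For the single-EC2 flavor of this stack, the whole deployment can literally be one Docker Compose file. A minimal sketch (the image names, ports, and env var are placeholders for your own app; the database lives in RDS, not in this file):

```yaml
# docker-compose.yml for a hypothetical web app on one EC2 instance.
# The ALB terminates SSL and forwards to port 8080; the database is RDS,
# referenced via DATABASE_URL, so nothing stateful runs here.
services:
  web:
    image: your-registry/your-app:latest   # placeholder image
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: ${DATABASE_URL}        # points at your RDS instance
    restart: unless-stopped
  worker:
    image: your-registry/your-app:latest
    command: ["python", "worker.py"]       # placeholder background worker
    environment:
      DATABASE_URL: ${DATABASE_URL}
    restart: unless-stopped
```

That's the whole thing. `docker compose up -d` on deploy, and the `restart` policy handles crashes.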
For APIs and Microservices
You probably have 3-5 services, and that's likely all you need. Use:
- ECS with service discovery for container orchestration.
- API Gateway or ALB for routing between services.
- SQS or SNS for async communication.
That gives you 80% of what Kubernetes offers at 20% of the cost and complexity. The other 20%? You almost certainly don't need it yet.
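The async-communication piece is the part teams most often assume needs heavy infrastructure, and it doesn't. The pattern is just a producer, a queue, and a worker. Here's a local sketch using Python's stdlib queue as a stand-in for SQS (in production you'd swap the queue calls for boto3's `send_message`/`receive_message`; the service names here are made up for illustration):

```python
import json
import queue

# Local stand-in for an SQS queue. In production this would be
# boto3.client("sqs") with send_message / receive_message calls.
order_events = queue.Queue()

def place_order(order_id: str, amount: float) -> None:
    """The 'orders' service publishes an event instead of calling the
    email service directly. That indirection is the whole decoupling."""
    order_events.put(json.dumps({"order_id": order_id, "amount": amount}))

def email_worker() -> list:
    """The 'email' service drains the queue on its own schedule."""
    sent = []
    while not order_events.empty():
        event = json.loads(order_events.get())
        sent.append(f"receipt for {event['order_id']}")
    return sent

place_order("ord-1", 19.99)
place_order("ord-2", 5.00)
print(email_worker())  # ['receipt for ord-1', 'receipt for ord-2']
```

Two services, one queue, no service mesh.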
For Background Jobs
- AWS Lambda for anything under 15 minutes.
- ECS tasks for longer-running jobs.
- Step Functions for workflows.
Forget Kubernetes CronJobs. These managed services are simpler, cheaper, and they just work. We can't remember the last time a Lambda failed because of infrastructure problems.
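A background job on Lambda is just a function with the standard handler signature: no cluster, no CronJob manifest. A sketch (the `cleanup_expired_sessions` logic and its example data are invented for illustration; in practice an EventBridge schedule would invoke this):

```python
import json
from datetime import datetime, timezone

def cleanup_expired_sessions(cutoff: datetime) -> int:
    """Placeholder for your real job logic (e.g. a DELETE against RDS).
    Returns the number of sessions it would have removed."""
    # Invented example data standing in for a database query.
    sessions = [
        {"id": "s1", "expires": "2024-01-01T00:00:00+00:00"},
        {"id": "s2", "expires": "2030-01-01T00:00:00+00:00"},
    ]
    return sum(1 for s in sessions
               if datetime.fromisoformat(s["expires"]) < cutoff)

def handler(event, context):
    """Standard AWS Lambda entry point: (event, context) in, dict out."""
    removed = cleanup_expired_sessions(datetime.now(timezone.utc))
    return {"statusCode": 200, "body": json.dumps({"removed": removed})}

print(handler({}, None))
```

Deploy the function, point a schedule at it, and you're done.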
When You Actually Need Kubernetes
Look, we're not anti-Kubernetes. We literally do this for a living. There are real situations where it's the right call.
You need Kubernetes when:
- You're running 50+ microservices and the orchestration complexity is real, not imagined
- You have a dedicated platform team (at least 3 engineers) whose whole job is managing the cluster
- You genuinely need multi-cloud portability — and we mean actually need it right now, not "might need it someday"
- You're running complex stateful workloads with specific scheduling requirements
- Your team already knows Kubernetes well and is actually more productive with it, not just more impressed with themselves
You don't need Kubernetes when:
- You have fewer than 10 services
- Your team has never run Kubernetes in production
- You're picking it because it looks good on a resume
- You're picking it because "everyone else is doing it" (they're not, by the way — it just seems that way on Twitter)
- Your main reason is future-proofing — you can migrate later, and it's easier than you think
The Migration Fantasy
We hear this one all the time: "But what if we need Kubernetes later? Won't it be way harder to migrate then?"
No. It won't. If you containerize your apps now (and you should, regardless), migrating to Kubernetes later is a 2-3 week project. We've done it. Multiple times. Docker containers run the same way whether they're on ECS, Fargate, or Kubernetes.
The months you save right now by keeping things simple are worth way more than some hypothetical migration cost down the road.
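The reason the migration is cheap is that the container image is the portable unit. The same Dockerfile produces the same image whether ECS, Fargate, or a Kubernetes pod ends up running it; only the orchestration config around it changes. A generic sketch (base image and entry point are placeholders):

```dockerfile
# This Dockerfile is identical whether the image runs on ECS, Fargate, or K8s.
# The base image and entry point below are placeholders for your own app.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8080
CMD ["gunicorn", "-b", "0.0.0.0:8080", "app:app"]
```

Migrating later means writing new manifests, not rebuilding your application.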
What to Do Right Now
If you're considering Kubernetes:
- Write down the specific problems you're trying to solve. Actual problems, not vibes.
- For each one, search "[problem] without Kubernetes."
- You'll find a simpler answer for every single one. Every time.
If you're already running Kubernetes and it feels like a part-time job:
- List every service in your cluster.
- Be honest with yourself: does each one actually benefit from being in K8s?
- Move the simple stuff to managed services. Keep Kubernetes only for workloads that genuinely need it.
The Bottom Line
The best infrastructure is the simplest infrastructure that actually handles your workload. Not the flashiest. Not the most "scalable." Not the one with the most GitHub stars.
Simple. Reliable. Cheap. That's the boring stack. It wins every time. We've watched it beat overbuilt Kubernetes setups for 9 years now, and the pattern never changes.
Wondering if your startup actually needs Kubernetes — or if you're about to waste six figures on something you'll regret? We'll look at your setup, tell you what you really need, and give you a plan. No pitch, no strings.
We audit startup infrastructure for free.
Our team will look at your AWS setup, your CI/CD, your security posture, and tell you exactly what to fix first. No charge, no obligation.
Book My Free Audit