If you’re reading this, you’re probably trying to figure out which AWS compute service to use for your project. There are a lot of options, and like most things in the cloud, what’s best depends on your use case.
This isn’t going to be a super deep dive or an exhaustive breakdown of every feature – it’s meant as a practical overview of the major compute offerings so you can make an informed choice, or at least have a starting point to jump off from. This post is aimed at beginners, or anyone trying to figure out how what they’re looking for maps onto AWS’s offerings. I highly recommend you do your own research to make sure your specific use case fits the model you plan to use, as each one of these options comes with caveats, additional tradeoffs, and potential “gotchas”.
The Big Three
While there are many more than three options for running your workloads on AWS, I am going to make a bold statement that for 90%+ of general workloads, you will want to pick one of the following options. All of these are mature, robust offerings that cover three extremely common use case types: Traditional VMs, Containers, and Serverless. There are other “press a button” style services that can get you up and running quicker in certain cases, but they all tend to be more opinionated and a bit less flexible.
EC2 – The original workhorse VM-as-a-service offering
EC2 gives you a virtual machine in the cloud. You choose an OS, CPU/RAM, and storage, and AWS spins up a server for you—just like renting hardware, but without the physical maintenance. It’s the most flexible compute option AWS offers, and if you’re comfortable with traditional servers, this is the natural entry point.
You manage everything—scaling, patching, networking, and security—which is both the power and the drawback. Great for legacy migrations, custom runtimes, or full-control workloads. Easy to get started, but you’re on the hook for keeping things running and cost-optimized.
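To make that concrete, here’s a minimal boto3 sketch of launching a single instance. The AMI ID, key pair, and security group are placeholders you’d replace with values from your own account and region:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one small instance. The AMI ID, key pair, and security group
# below are placeholders; substitute real values from your account.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # e.g. an Amazon Linux AMI in your region
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",
    SecurityGroupIds=["sg-0123456789abcdef0"],
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "Name", "Value": "demo-web-server"}],
    }],
)

print(response["Instances"][0]["InstanceId"])
```

From there, patching, monitoring, and tearing the instance down when you’re done are all on you, which is exactly the tradeoff described above.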
ECS/Fargate – Containers in the cloud
ECS is AWS’s managed container service. It comes in two modes: ECS on EC2, where you run containers on your own fleet of instances, and Fargate, where AWS handles the compute layer for you. You hand over a container image and a task definition, and AWS handles the scheduling and scaling.
Fargate in particular removes a ton of operational overhead. If your workload is containerized and you don’t need deep customization (like tight host access or complex multi-container coordination), it’s often a cleaner choice than rolling your own container platform. ECS/Fargate is ideal for APIs, background workers, or any modern service-based app architecture.
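For a rough idea of the moving parts, here’s a boto3 sketch that registers a small Fargate-compatible task definition and runs one copy of it. The image URI, execution role ARN, subnet, and cluster name are all placeholders, and the cluster is assumed to already exist:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Register a minimal Fargate-compatible task definition.
# The container image and execution role ARN are placeholders.
ecs.register_task_definition(
    family="demo-api",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",      # 0.25 vCPU
    memory="512",   # 512 MiB
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "api",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/demo-api:latest",
        "essential": True,
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
    }],
)

# Run one copy of the task on Fargate in an existing cluster and subnet.
ecs.run_task(
    cluster="demo-cluster",
    launchType="FARGATE",
    taskDefinition="demo-api",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
```

In practice you’d usually wrap this in an ECS service so AWS keeps the desired number of tasks running and replaces any that fail, but the task definition plus run/serve split is the core of the model.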
Lambda – Functions as a service (FaaS)
Lambda is AWS’s serverless, event-driven compute offering. You write a bit of code, upload it, and AWS runs it in response to events—API calls, file uploads, queue messages, scheduled jobs, and more. It integrates easily with services like API Gateway, S3, DynamoDB, and EventBridge, and supports a wide range of languages including Node.js, Python, Go, Java, and C#.
Lambda is best for lightweight workloads, glue logic, and APIs that don’t need to run 24/7. Common examples include processing image uploads, cleaning up data on a schedule, or powering turn-based games where each action is a discrete API call.
Lambda functions can run for up to 15 minutes, which covers most things you would need it for – but if you need more time, Step Functions let you chain functions together for longer or more complex workflows. You give up some control over the runtime and may hit cold starts or config limits, but for the right use case, Lambda is fast, cheap, and easy to use.
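To give a feel for the programming model, here’s a minimal handler sketch that reacts to an S3 “object created” event. The S3 trigger itself would be configured separately (console, CLI, or IaC), and the actual processing logic is left as a stub:

```python
import json
import urllib.parse


def lambda_handler(event, context):
    """Invoked by an S3 "object created" notification; logs each uploaded object.

    The real work (thumbnailing, validation, etc.) would replace the print
    statement; this is just the skeleton of the pattern.
    """
    records = event.get("Records", [])
    for record in records:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        print(f"New object: s3://{bucket}/{key}")

    return {"statusCode": 200, "body": json.dumps({"processed": len(records)})}
```

The same shape applies to most triggers: AWS hands you an event payload, you do a small amount of work, and you return (or raise) so the platform can record success or retry.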
Other Options
Lightsail – Quick and simple VPS hosting
Lightsail is AWS’s simplified VPS offering. It bundles compute, storage, networking, and a static IP into a single package and supports preconfigured app stacks like WordPress, LAMP, and Node.js. It’s designed for users who want to spin up small web apps or test environments without dealing with the complexity of full-blown AWS services.
You don’t get much flexibility or deep integration with other AWS offerings, but it’s fast to deploy and predictable in cost. Best suited for small-scale workloads, demos, or projects that need minimal customization.
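If you prefer scripting it, creating an instance is a single call. Here’s a boto3 sketch; the blueprint and bundle IDs shown are just examples, and the currently available options can be listed with get_blueprints() and get_bundles():

```python
import boto3

lightsail = boto3.client("lightsail", region_name="us-east-1")

# Create one small WordPress instance. The blueprint and bundle IDs are
# examples; check get_blueprints()/get_bundles() for the current catalog.
lightsail.create_instances(
    instanceNames=["demo-blog"],
    availabilityZone="us-east-1a",
    blueprintId="wordpress",
    bundleId="micro_2_0",
)
```

That one call gives you the VM, storage, and networking bundled together, which is the whole appeal: one knob instead of a dozen.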
App Runner – Easy way to deploy a containerized web app
App Runner is a fully managed service that lets you deploy web apps and APIs from source code or container images with almost zero infrastructure knowledge. You give it a repo or container, and AWS handles the build, deployment, load balancing, HTTPS, scaling, and health checks—no ECS, Fargate, or networking setup required.
Compared to ECS + Fargate, App Runner is simpler but more opinionated. You don’t manage tasks, clusters, or service definitions as it abstracts all of that away. It’s great for developer speed, but limits customization. You get fewer knobs to tune, and you can’t control the runtime environment as deeply (e.g., no sidecars, custom networking, or non-HTTP workloads).
If you need something web-facing with low ops overhead, App Runner is often faster and cleaner than setting up ECS or Fargate. But if you need more control, more complex networking, private connectivity, or anything non-HTTP, ECS/Fargate is most likely still the better choice.
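As a rough sketch of how little setup it needs, here’s what deploying a public container image with boto3 might look like. The service name, image URI, and port are all placeholders for your own image and listener:

```python
import boto3

apprunner = boto3.client("apprunner", region_name="us-east-1")

# Deploy a public container image as a web service. App Runner provisions
# the load balancing, TLS, scaling, and health checks behind the scenes.
apprunner.create_service(
    ServiceName="demo-web",
    SourceConfiguration={
        "ImageRepository": {
            "ImageIdentifier": "public.ecr.aws/my-namespace/demo-web:latest",
            "ImageRepositoryType": "ECR_PUBLIC",
            "ImageConfiguration": {"Port": "8080"},
        },
        "AutoDeploymentsEnabled": False,
    },
)
```

Compare that with the ECS sketch earlier: there’s no task definition, cluster, or subnet wiring, which is exactly the “fewer knobs” tradeoff described above.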
Elastic Beanstalk – Old-School PaaS
Elastic Beanstalk is a legacy platform-as-a-service (PaaS) style offering that automates infrastructure provisioning for web apps. You upload your code, choose a runtime (like Node.js, Java, Python, etc.), and AWS handles the EC2 provisioning, load balancing, scaling, and monitoring. It supports rolling and blue/green deployments out of the box and can be a good fit for traditional apps that aren’t containerized but still need a managed option to handle scaling.
Beanstalk is built on top of core services like EC2, so you still pay for and manage those resources indirectly. While it’s not as modern or flexible as some of the newer offerings, it remains a viable option for certain legacy use cases or teams looking for a familiar PaaS-style deployment model.
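For the curious, here’s a boto3 sketch of that flow done programmatically: create an application, point a version at a zipped source bundle that’s already in S3, and spin up an environment. The bucket, key, and names are placeholders, and the solution stack is picked at runtime since the exact stack names change over time:

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Solution stack names change over time, so select a current one at
# runtime (here: the first available Python stack) rather than hard-coding.
stacks = eb.list_available_solution_stacks()["SolutionStacks"]
python_stack = next(s for s in stacks if "Python" in s)

eb.create_application(ApplicationName="demo-app")

# Assumes a zipped source bundle has already been uploaded to S3.
eb.create_application_version(
    ApplicationName="demo-app",
    VersionLabel="v1",
    SourceBundle={"S3Bucket": "my-deploy-bucket", "S3Key": "demo-app-v1.zip"},
)

eb.create_environment(
    ApplicationName="demo-app",
    EnvironmentName="demo-app-env",
    SolutionStackName=python_stack,
    VersionLabel="v1",
)
```

Most teams drive this through the EB CLI or CI/CD rather than raw API calls, but the application/version/environment split is the same either way.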
EKS (Elastic Kubernetes Service) – Managed K8s for AWS
EKS is AWS’s managed Kubernetes service. It takes care of running and scaling the control plane (API server, etcd, etc.) so you can focus on deploying workloads. You can run your worker nodes on EC2 or use Fargate for serverless pods. It’s a solid fit for teams already working with Kubernetes who want to run clusters in AWS without managing every piece from scratch.
EKS integrates well with IAM, VPC, and other AWS services. It has a steeper learning curve than ECS and requires more upfront configuration, but you get full Kubernetes flexibility without having to build and maintain the core components yourself. The cost is higher as well: there’s a per-cluster control plane charge on top of whatever your worker nodes cost, and the operational model demands more of your team. The tradeoff is significantly reduced management overhead relative to deploying and scaling Kubernetes yourself (K8s will have overhead no matter how you slice it).
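As a sketch of that division of labor (AWS runs the control plane, you attach workers), here’s roughly what creating a cluster and a managed node group looks like with boto3. The role ARNs and subnet IDs are placeholders, and most teams would do this through eksctl, Terraform, or CloudFormation rather than raw API calls:

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Create the managed control plane. The role ARN and subnet IDs are
# placeholders; the cluster role needs the AmazonEKSClusterPolicy attached.
eks.create_cluster(
    name="demo-cluster",
    roleArn="arn:aws:iam::123456789012:role/eksClusterRole",
    resourcesVpcConfig={
        "subnetIds": ["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    },
)

# Control plane creation takes several minutes; wait before adding workers.
eks.get_waiter("cluster_active").wait(name="demo-cluster")

# Add a managed node group of EC2 workers for pods to land on.
eks.create_nodegroup(
    clusterName="demo-cluster",
    nodegroupName="default-workers",
    nodeRole="arn:aws:iam::123456789012:role/eksNodeRole",
    subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
    scalingConfig={"minSize": 1, "maxSize": 3, "desiredSize": 2},
    instanceTypes=["t3.medium"],
)
```

Everything after that (deployments, services, ingress) is plain Kubernetes via kubectl or your tooling of choice.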
Batch – Batch computing at scale
AWS Batch is used to run batch computing jobs at scale. You submit jobs (like simulations, ML model training, media rendering, or big data ETL tasks), and AWS handles scheduling, queuing, and provisioning of the required compute resources.
It integrates with EC2 and Fargate and is a good choice for workloads that don’t require real-time responses but need to scale across hundreds or thousands of jobs. You only pay for the compute used during job execution.
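Here’s a boto3 sketch of submitting work. The job queue and job definition are placeholders assumed to have been set up ahead of time, and the array job fans a single submission out into many parallel child jobs:

```python
import boto3

batch = boto3.client("batch", region_name="us-east-1")

# Submit 500 renders as an array job against an existing queue and job
# definition (both names/revisions are placeholders configured in advance).
response = batch.submit_job(
    jobName="nightly-render",
    jobQueue="render-queue",
    jobDefinition="render-frame:3",
    arrayProperties={"size": 500},   # fan out into 500 child jobs
    containerOverrides={
        "environment": [{"name": "QUALITY", "value": "high"}],
    },
)

print(response["jobId"])
```

Batch then provisions capacity, works through the queue, and scales back down when the jobs finish, so you only pay while the work is actually running.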
Outposts – Bring AWS to on-prem (literally)
Outposts lets you run AWS infrastructure and services on-premises. It’s essentially a rack of AWS hardware delivered to your data center, giving you the ability to run EC2, ECS, EKS, RDS, and other services locally while staying integrated with your AWS account.
It’s intended for use cases that require ultra-low latency, local data processing, or regulatory compliance that prevents using AWS regions directly. It is *highly* unlikely you’ll need this unless you’re working in a specialized enterprise or hybrid-cloud environment.
Conclusion
I hope this overview of AWS compute options helped clarify which services might be the best fit for your needs. It’s possible I missed a niche use case or edge service, but for most workloads, the tools covered here will get you where you need to go. Until next time – happy building!