3 ways to run Kubernetes on AWS: EKS, self-managed, and Fargate



Key points
- EKS with EC2 remains the standard: It provides the host-level access that DaemonSets and the custom network configurations of enterprise security tools require.
- Fargate has strict limits: Understand the lack of DaemonSet support before migrating your entire observability and logging stack to a serverless model.
- Self-managed is a trap: Building clusters from scratch on raw EC2 instances wastes engineering time that should be spent on product development.
Container orchestration on Amazon Web Services is heavily commoditized, yet organizations continue to paralyze their platform teams by choosing the wrong compute primitives. The decision between managed control planes, serverless containers, and rolling your own infrastructure dictates your FinOps reality and scaling trajectory.
At a fleet scale of thousands of clusters, manual interventions fail. To survive Day-2 operations, infrastructure teams must select an architecture that supports agentic automation, strict resource governance, and centralized visibility. This guide evaluates the three primary deployment methods for AWS Kubernetes and their impact on enterprise operations.
The 1,000-cluster reality: the operational tax of self-managed infrastructure
Platform Architects often overestimate their capacity to manage infrastructure. Running a self-managed Kubernetes cluster using kops or kubeadm works for a single isolated environment. At a fleet scale of thousands of clusters, manual control plane patching and etcd backups become an operational nightmare.
Managing this scale requires Amazon EKS to handle the control plane, paired with an Agentic Kubernetes Management Platform to enforce global configurations and prevent configuration drift.
Option 1: Amazon EKS with EC2 (the enterprise standard)
Running Amazon EKS with managed EC2 node groups is the default choice for enterprise workloads. It offloads control plane management to AWS while giving you full root access to the worker nodes.
This access is non-negotiable for running service meshes like Istio, complex CSI drivers, and DaemonSets required for Datadog or Fluent Bit.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
spec:
  selector:
    matchLabels:
      app: fluent-bit
  template:
    metadata:
      labels:
        app: fluent-bit
    spec:
      containers:
        - name: fluent-bit
          image: fluent/fluent-bit:latest
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
If you plan to implement Karpenter for node-level autoscaling, you must use EC2 instances. EKS with EC2 provides the most flexibility for Day-2 operations and aggressive cost optimization.
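As a concrete reference point, here is a minimal sketch of a Karpenter NodePool that favors Spot capacity. It assumes Karpenter's v1 CRDs are installed and that an EC2NodeClass named default already exists; the pool name and limits are illustrative, not prescriptive.
# Sketch: a Karpenter NodePool that prefers Spot for cost savings.
# Assumes Karpenter v1 and an EC2NodeClass named "default" (illustrative).
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: general-purpose
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
  limits:
    cpu: "1000"            # hard cap on total CPU this pool may provision
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m   # bin-pack and retire idle nodes aggressively
Karpenter satisfies pending pods by launching and terminating raw EC2 instances, which is exactly the capacity lifecycle Fargate takes out of your hands.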
Option 2: Amazon EKS with Fargate (the serverless compromise)
Fargate removes the need to manage underlying EC2 instances by running each pod in its own isolated compute environment. It is a strong fit for batch jobs and workloads that need hard tenant isolation.
The architectural limitations are severe, however. Fargate does not support DaemonSets, so you must inject sidecar containers into every pod for logging and monitoring. This inflates pod resource requests and complicates CI/CD pipelines.
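To see what that means in practice, here is a minimal sketch of the sidecar pattern. The application image and log path are hypothetical, and the two containers share an emptyDir volume because Fargate forbids hostPath:
apiVersion: v1
kind: Pod
metadata:
  name: api-with-logging
  namespace: batch
spec:
  containers:
    - name: app
      image: registry.example.com/api:1.0   # hypothetical application image
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app           # app writes its log files here
    - name: log-forwarder                   # duplicated in every pod: no DaemonSets on Fargate
      image: fluent/fluent-bit:latest
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: app-logs
      emptyDir: {}                          # hostPath is unavailable, so share logs via emptyDir
Multiply the forwarder's CPU and memory requests across every replica in the fleet and the "serverless" bill grows quickly.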
# creating a Fargate profile requires specifying namespaces
aws eks create-fargate-profile \
  --fargate-profile-name batch-workloads \
  --cluster-name my-eks-cluster \
  --pod-execution-role-arn arn:aws:iam::111122223333:role/AmazonEKSFargatePodExecutionRole \
  --selectors namespace=batch
Only pods whose namespace (and optional labels) match a profile's selectors are scheduled onto Fargate; everything else still needs EC2 capacity.
Option 3: self-managed Kubernetes on EC2 (the DIY trap)
Building your own Kubernetes architecture on raw EC2 instances gives you absolute control over API server flags and etcd topology. It also guarantees your SREs will spend their weekends fixing quorum failures.
If the etcd database corrupts at 3 AM, your team must recover it. Compute is a commodity. Your engineering talent should focus on application delivery, not maintaining infrastructure plumbing.
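To illustrate the plumbing you inherit, here is a minimal sketch of a self-hosted etcd backup CronJob, assuming a kubeadm-style layout with certificates under /etc/kubernetes/pki/etcd; the container image, schedule, and backup path are illustrative.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: etcd-backup
  namespace: kube-system
spec:
  schedule: "0 */6 * * *"               # snapshot every six hours
  jobTemplate:
    spec:
      template:
        spec:
          hostNetwork: true             # reach etcd on the node's loopback
          nodeSelector:
            node-role.kubernetes.io/control-plane: ""
          tolerations:
            - key: node-role.kubernetes.io/control-plane
              operator: Exists
              effect: NoSchedule
          restartPolicy: OnFailure
          containers:
            - name: etcdctl
              image: bitnami/etcd:3.5   # illustrative; any image shipping etcdctl works
              command:
                - /bin/sh
                - -c
                - >
                  etcdctl --endpoints=https://127.0.0.1:2379
                  --cacert=/etc/kubernetes/pki/etcd/ca.crt
                  --cert=/etc/kubernetes/pki/etcd/server.crt
                  --key=/etc/kubernetes/pki/etcd/server.key
                  snapshot save /backup/etcd-snapshot.db
              volumeMounts:
                - name: etcd-certs
                  mountPath: /etc/kubernetes/pki/etcd
                  readOnly: true
                - name: backup
                  mountPath: /backup
          volumes:
            - name: etcd-certs
              hostPath:
                path: /etc/kubernetes/pki/etcd
            - name: backup
              hostPath:
                path: /var/backups/etcd
On Amazon EKS this entire manifest disappears from your repos because AWS snapshots etcd for you, which is precisely the trade the managed control plane buys.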
🚀 Real-world proof
Alan hit scaling limits with AWS Elastic Beanstalk and needed to move to Kubernetes without hiring a massive platform team to manage the control plane.
⭐ The result: Reduced deployment time from over 1 hour to 8 minutes while eliminating the need for a dedicated infrastructure engineer. Read the Alan case study.
Standardizing fleet management with Qovery
Whether you run EC2 or Fargate, managing deployments across multiple Amazon EKS clusters creates YAML fatigue. Qovery acts as an intent-based abstraction layer over AWS: you connect your AWS account, and Qovery provisions the EKS clusters, configures the VPCs, and manages the deployment pipelines globally.
# .qovery.yml
application:
  backend-api:
    build_mode: docker
    auto_scaling:
      min_instances: 3
      max_instances: 50
      cpu_threshold: 75
This transforms EKS from a raw compute primitive into an Agentic Kubernetes Management Platform, allowing CTOs to enforce FinOps policies without slowing down developers.
FAQs
Can I run DaemonSets on AWS Fargate?
No. AWS Fargate does not support DaemonSets, privileged containers, or hostPath volumes. If you need logging or monitoring agents, you must run them as sidecar containers within each individual pod.
What is the main benefit of using Amazon EKS over self-managed Kubernetes?
Amazon EKS offloads the operational burden of managing the Kubernetes control plane and etcd database. AWS handles high availability, backups, and control plane patching so your team can focus on workload delivery.
How does Qovery help manage Amazon EKS clusters?
Qovery acts as an Agentic Kubernetes Management Platform. It abstracts the complex Terraform and YAML configurations required to manage EKS. It automates cluster provisioning, environment deployments, and FinOps controls natively within your AWS account.
