Our migration from Kubernetes Built-in NLB to ALB Controller

Stop fighting legacy Kubernetes NLB issues. Learn why Qovery migrated to the AWS Load Balancer Controller to fix deletion bugs and unlock advanced PROXY protocol features.
March 6, 2026
Pierre Mavro
CTO & Co-founder

Key Points:

  • The Maintenance Dead-End: Kubernetes’ built-in NLB is considered a legacy codebase that is no longer actively maintained by AWS, leading to persistent bugs where resources aren't properly cleaned up after deletion.
  • Operational Workarounds: Before migrating, the Qovery team had to manually instrument the Qovery Engine to deal with orphaned load balancers that the native Kubernetes implementation failed to remove.
  • Feature Gaps: The transition was necessitated by the need for advanced networking features not supported by the built-in NLB, specifically the PROXY protocol for IP preservation and fine-grained target group attributes.
  • Migration Complexity: AWS does not provide a transparent path to move from the built-in NLB to the ALB Controller; it requires managing new NLB availability times and potentially disruptive CNAME changes.

Working with Kubernetes Services is convenient, especially when you can deploy Load Balancers via cloud providers like AWS. At Qovery, we initially started with Kubernetes’ built-in Network Load Balancer (NLB). However, we decided to move to the AWS Load Balancer Controller (ALB Controller).

In this article, I explain why we made this switch and how it benefits our infrastructure. We will discuss the reasons for the transition, the features of the ALB Controller, and provide a guide for deploying it. This shift has helped us simplify management, reduce costs, and enhance performance. By understanding these points, you can decide if the ALB Controller is right for your Kubernetes setup.

Important consideration: we should have been using the ALB Controller from the beginning, as migrating later can be a pain depending on your configuration. Moving to it from day one, or as early as possible, would have been the best choice!

Why did we start with the built-in NLB

For our customers, and for several technical people we discussed this with, the built-in NLB is the default choice because:

  1. Ease of Use: Simple to configure and use with built-in Kubernetes service annotations (see the sketch after this list).
  2. Kubernetes Native: Uses Kubernetes-native objects, reducing the need for AWS-specific knowledge.
  3. Cloud-Agnostic: It is easier to migrate to other cloud providers or on-premises environments without deep AWS integration. As we support multiple cloud providers in the managed offering, we must maintain maximum transparency for our customers and be able to port functionalities to every supported cloud provider. Without some NLB features, this compatibility is not possible.
  4. Maintenance: Minimal maintenance overhead compared to managing additional AWS services and controllers.
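
To illustrate point 1, here is a minimal sketch of a Service provisioning an NLB through the built-in integration. The name, selector, and ports are placeholders, not our actual configuration:

apiVersion: v1
kind: Service
metadata:
  name: my-app   # placeholder name
  annotations:
    # "nlb" selects the legacy in-tree NLB integration, with no extra controller to install;
    # the ALB Controller instead uses the value "external", as shown later in this article.
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - name: http
      port: 80
      targetPort: 8080
      protocol: TCP

A single annotation on a standard Service object is enough, which is why it is such a common starting point.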

Why did we move to the ALB controller

Migration to the ALB Controller came late (four years after we started using the built-in NLB), and we were able to live without it for a long time.

However, during this time, we faced issues like NLBs not being cleaned up correctly after deletion (on the Kubernetes side).

NLB deletion issue with Kubernetes

We contacted AWS support and looked at the related GitHub issues on Kubernetes: this is a legacy part of the Kubernetes code base that is no longer maintained. AWS support told us they wouldn't put effort into fixes since they now develop the ALB Controller.

When using the Kubernetes built-in NLB, be prepared to manage issues manually 😅.

This is what we did! We manually instrumented our Qovery Engine to manage this kind of issue.

// fix for NLB not properly removed https://discuss.qovery.com/t/why-provision-nlbs-for-container-databases/1114/10?u=pierre_mavro
pub fn clean_up_deleted_k8s_nlb(
    event_details: EventDetails,
    target: &DeploymentTarget,
) -> Result<(), Box<EngineError>> {
    // DO SOME NASTY STUFF TO DEAL WITH NLB DELETION ISSUE -_-'
}
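
If you face the same problem, orphaned NLBs can also be hunted down from the CLI. The following is only a sketch of the idea, not what our engine does internally: CLUSTER_NAME is a placeholder, and the tag key is the one the in-tree provider puts on the load balancers it creates.

#!/usr/bin/env bash
# List NLBs tagged as belonging to the cluster, so that orphans (whose Kubernetes
# Service no longer exists) can be reviewed and deleted manually.
CLUSTER_NAME="my-cluster"   # placeholder: your EKS cluster name

for arn in $(aws elbv2 describe-load-balancers \
    --query 'LoadBalancers[?Type==`network`].LoadBalancerArn' --output text); do
  if aws elbv2 describe-tags --resource-arns "$arn" \
       --query "TagDescriptions[0].Tags[?Key=='kubernetes.io/cluster/${CLUSTER_NAME}']" \
       --output text | grep -q .; then
    echo "candidate: $arn"
  fi
done
# Cross-check each candidate against `kubectl get svc -A` before removing it with
# `aws elbv2 delete-load-balancer --load-balancer-arn <arn>`.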

But recently, we wanted to leverage some NLB features not supported by Kubernetes' built-in NLB, so we had to move to the ALB Controller.

The move to the ALB Controller brings useful features such as:

  • PROXY protocol support on the NLB, to preserve the client IP all the way to the backend.
  • Fine-grained target group attributes (for example, controlling connection termination for unhealthy targets).
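
A practical note on the PROXY protocol: once it is enabled on the NLB (see the Service annotation later in this article), the backend must be configured to parse it too, otherwise requests will be rejected. For an ingress-nginx deployment like the one in our example, that is a single ConfigMap key; this sketch assumes the default naming produced by the ingress-nginx Helm chart for a release called nginx-ingress:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-ingress-nginx-controller
  namespace: nginx-ingress
data:
  # Tell nginx to parse the PROXY protocol header added by the NLB.
  use-proxy-protocol: "true"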

Moving to the ALB Controller is an old topic for Qovery; we had already raised it a few years back when we encountered the issues discussed above.


Why didn’t we move to the ALB Controller before

The main reason is that it was one more thing to maintain on our end. But this time, we had no choice since some features requested by our customers required specific configurations that the native Kubernetes NLB implementation couldn’t handle.

Even if you're not changing the load balancer type (it stays an NLB), you unfortunately can't move in place from the Kubernetes "built-in NLB" to the "ALB Controller NLB"; cf. the documentation:

You can't easily migrate from "built-in NLB" to "ALB Controller NLB"

And obviously, this was also confirmed when we contacted AWS support 😭.

So if you have a DNS CNAME pointing directly to the NLB DNS name (xxx.elb.eu-west-3.amazonaws.com), TLS/SSL certificates associated with it, or anything else directly connected to it, you will have to manage the switch carefully to avoid or reduce downtime as much as possible.

Anticipating this, at Qovery we use our own domain on top of the NLB domain name, so we're not concerned about TLS issues, only about the CNAME change and the time it takes for the new NLB to become available 🤩.
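
To keep the cut-over short, one reasonable approach (sketched below with the AWS CLI, using the NLB name from the Service example later in this article) is to wait for the new NLB to be active before updating the CNAME:

# Wait until the new NLB, named via the aws-load-balancer-name annotation
# ("nginx-ingress" in our example), is fully provisioned.
aws elbv2 wait load-balancer-available --names nginx-ingress

# Then fetch its DNS name to update the CNAME record:
aws elbv2 describe-load-balancers --names nginx-ingress \
  --query 'LoadBalancers[0].DNSName' --output text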

However, it's a shame that the AWS ALB Controller does not handle this migration transparently. The consequences in terms of reliability and time investment are high, and for most companies that's a problem; it worries me that AWS didn't consider it. Luckily for our customers, the migration is fully transparent.

Yes, at Qovery, we manage thousands of EKS clusters for our customers. If you are interested in this, read this article.

Deployment

I won’t go into details because several tutorials are already available on the internet. To summarize, here is the Terraform configuration to prepare ALB Controller permissions:

resource "aws_iam_policy" "aws_load_balancer_controller_policy" {
name = "qovery-alb-controller-${var.kubernetes_cluster_id}"
description = "Policy for AWS Load Balancer Controller"

// https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/deploy/installation/#option-b-attach-iam-policies-to-nodes
policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
...REDACTED...
})
}

resource "aws_iam_role" "aws_load_balancer_controller" {
name = "qovery-eks-alb-controller-${var.kubernetes_cluster_id}"
description = "ALB controller role for EKS cluster ${var.kubernetes_cluster_id}"
tags = local.tags_eks

assume_role_policy = <{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "${aws_iam_openid_connect_provider.oidc.arn}"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"${replace(aws_iam_openid_connect_provider.oidc.url, "https://", "")}:sub": "system:serviceaccount:kube-system:aws-load-balancer-controller"
}
}
}
]
}
POLICY
}

resource "aws_iam_instance_profile" "aws_load_balancer_controller" {
name = "qovery-eks-alb-controller-${var.kubernetes_cluster_id}"
role = aws_iam_role.eks_cluster.name
tags = local.tags_eks
}

resource "aws_iam_role_policy_attachment" "aws_load_balancer_controller" {
policy_arn = aws_iam_policy.aws_load_balancer_controller_policy.arn
role = aws_iam_role.aws_load_balancer_controller.name
}

Then deploying the Helm ALB controller chart is not complicated:

helm repo add eks https://aws.github.io/eks-charts
helm upgrade --install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --namespace kube-system \
  --set clusterName=qovery-clusterid \
  --set "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn=arn:aws:iam::xxx:role/qovery-eks-alb-controller-clusterid"
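
Before creating Services, it is worth checking that the controller is actually running; assuming the chart was installed in kube-system (which the IAM trust policy above expects), something like this should do:

# The chart creates a Deployment named "aws-load-balancer-controller" when the
# release is named that way; adjust if you used another release name.
kubectl -n kube-system rollout status deployment/aws-load-balancer-controller
kubectl -n kube-system logs deployment/aws-load-balancer-controller --tail=20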

Now you're ready to deploy an NLB managed by the ALB Controller:

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-ingress-nginx-controller
  namespace: nginx-ingress
  annotations:
    external-dns.alpha.kubernetes.io/hostname: 'xxx.yourdomain.com'
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-interval: "10"
    service.beta.kubernetes.io/aws-load-balancer-name: nginx-ingress
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: '*'
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: target_health_state.unhealthy.connection_termination.enabled=false
    service.beta.kubernetes.io/aws-load-balancer-type: external
spec:
  externalTrafficPolicy: Local
  internalTrafficPolicy: Cluster
  ports:
    - name: http
      nodePort: xxxxx
      port: 80
      protocol: TCP
      targetPort: http
    - name: https
      nodePort: xxxxx
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: nginx-ingress
    app.kubernetes.io/name: ingress-nginx
  type: LoadBalancer

If you look into the EC2 console, you should see your named load balancer (here: nginx-ingress).
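
The same check can be done from the CLI; here is a sketch using the names from the example above:

# DNS name the ALB Controller published back on the Service:
kubectl -n nginx-ingress get svc nginx-ingress-ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

# The same NLB, looked up by the name set through the aws-load-balancer-name annotation:
aws elbv2 describe-load-balancers --names nginx-ingress \
  --query 'LoadBalancers[0].{Name:LoadBalancerName,State:State.Code,DNS:DNSName}'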

Conclusion

The ALB Controller offers many features compared to the built-in NLB and allows you to extend usage to ALB if desired.

We hope AWS will include it in the EKS add-ons to simplify lifecycle management and shorten the setup and deployment phase.

We strongly encourage companies to move to the ALB Controller sooner rather than later to avoid a lengthy migration process.

