Kubernetes network isolation and NetworkPolicy at fleet scale

Kubernetes NetworkPolicy acts as a native firewall to control pod-to-pod communication. Because Kubernetes allows all lateral traffic by default, configuring network isolation manually creates massive configuration drift. Agentic control planes automate zero-trust compliance, deploying and reconciling network policies simultaneously across thousands of global clusters.
April 9, 2026
Pierre Mavro
CTO & Co-founder
Key points:

  • Default open networks expose clusters: Kubernetes permits all lateral pod-to-pod communication by default. You must explicitly define rules to block unauthorized access.
  • Manual YAML scaling fails: Writing manifests for a single cluster is trivial. Enforcing those same policies across thousands of clusters causes configuration drift and audit failures.
  • Agentic enforcement guarantees compliance: Centralized control planes automatically deploy and reconcile network isolation policies across AWS, GCP, and Azure fleets.

Kubernetes provides a native resource called NetworkPolicy. It lets platform teams explicitly allow or deny network traffic, much like a traditional firewall. However, deploying Kubernetes with default settings leaves your architecture entirely open: NetworkPolicy objects have no effect until you install a network plugin that enforces them (such as Calico), and you must then define rules to block unauthorized requests.

🚀 Real-world proof

RxVantage needed a compliant, secure infrastructure to manage healthcare data across isolated microservices without dedicating internal teams to manual YAML maintenance.

⭐ The result: They achieved a secure, automated deployment environment while entirely removing DevOps bottlenecks. Read the full case study here.

The 1,000-cluster reality

Implementing a zero-trust model on a single cluster requires basic YAML definition. Enforcing that same zero-trust model across 1,000 clusters distributed across AWS, GCP, and Azure is an operational liability. Platform architects cannot rely on individual engineers to manually verify that every new namespace and deployment correctly implements ingress and egress rules.

Manual YAML configuration at this scale guarantees configuration drift. A single missed NetworkPolicy on a staging cluster creates a compliance failure during a security audit. Managing an enterprise fleet requires an agentic control plane that actively deploys and enforces network isolation across the entire global footprint, eliminating human error from day-2 operations.

Configuring network isolation

The NetworkPolicy resource is declarative only: the Kubernetes API server stores it, but nothing enforces it until the cluster runs a networking (CNI) plugin that implements NetworkPolicy. Managed services ship their own enforcement options (GKE and AKS both offer built-in network policy enforcement), and Calico is a common choice that AWS has historically recommended for EKS.

The standard security protocol for Kubernetes networking requires blocking all inbound requests by default and strictly defining allowed traffic.

Blocking all incoming traffic

In the following example, we configure the production environment to isolate itself from all other namespaces while allowing pods deployed within the production namespace to communicate with each other.

First, create the namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    role: production

Next, define the NetworkPolicy to block incoming traffic for this namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: no-inbound-traffic
  namespace: production
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress

The policyTypes: Ingress directive makes the policy apply to incoming traffic only. An empty podSelector (matchLabels: {}) matches every pod in the namespace. Because the policy lists no ingress rules, Kubernetes blocks all incoming traffic to those pods.
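As a side note beyond the article's example, the empty selector is more commonly written as podSelector: {}, and the same pattern extends to a full default-deny that covers egress as well (the policy name below is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production
spec:
  podSelector: {}   # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
  - Egress          # no ingress or egress rules listed, so both directions are denied
```

Many teams apply a policy like this to every namespace as a baseline, then layer allow rules on top.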

Allowing traffic between pods within the same namespace

To permit any pods within the production namespace to communicate with one another, define the following ingress rule:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-traffic
  namespace: production
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          role: production

The ingress rule instructs Kubernetes to allow all traffic originating from namespaces that carry the label role: production. Note that this matches any namespace with that label, not only the production namespace itself, so label hygiene matters.
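If you want to restrict traffic strictly to the policy's own namespace, regardless of how other namespaces are labeled, a bare podSelector entry achieves that (a variant sketch, not part of the article's example):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {}   # a bare podSelector is evaluated in the policy's own namespace
```

This removes the dependency on namespace labels entirely.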

Allowing incoming traffic from outside

If a web application listens on port 8000, you must add an explicit rule to allow external traffic to reach it:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-8000
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web-server
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - port: 8000

Instead of selecting all pods, this configuration scopes the rule to pods carrying the app: web-server label. The ingress rule then permits connections from any source, but only on port 8000 (TCP by default).
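To tighten this further, say to TCP traffic from a known load-balancer range only, you can combine the port with an explicit source. The CIDR below is a hypothetical subnet for illustration:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-8000-from-lb
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: web-server
  policyTypes:
  - Ingress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/16   # hypothetical load-balancer subnet
    ports:
    - protocol: TCP
      port: 8000
```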

Blocking outgoing traffic

NetworkPolicy configurations also govern egress traffic. For example, to prevent an application from querying the AWS instance metadata service, you allow all outbound destinations except that specific IP:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: disable-aws-metadata
  namespace: production
spec:
  podSelector:
    matchLabels: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 169.254.169.254/32
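A stricter zero-trust variant (a sketch beyond the article's example) denies all egress and re-allows only DNS, which most workloads need in order to resolve names; ports 53/UDP and 53/TCP are the standard DNS ports:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-only
  namespace: production
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - ports:            # no "to" clause: DNS is allowed to any destination
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
```

Any other outbound connection then requires its own explicit egress rule.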

Standardizing day-2 security operations

NetworkPolicy is useful for network traffic filtering, but relying on manual configuration leaves critical enterprise infrastructure vulnerable to human error. Rules select traffic only by pod labels, namespace labels, and IP blocks; if a port is exposed unintentionally and no policy covers it, unauthorized traffic still gets through.

Zero-trust architecture demands centralized governance. An agentic Kubernetes management platform abstracts these configuration requirements, applying compliance standards, enforcing network isolation, and managing service meshes (like Istio) automatically across your entire fleet.

FAQs

What does a Kubernetes NetworkPolicy do?

A Kubernetes NetworkPolicy dictates how groups of pods communicate with each other and other network endpoints. It acts as a firewall, enforcing ingress and egress rules to secure cluster traffic.

Why is manual NetworkPolicy management risky at scale?

Applying YAML configurations manually across thousands of clusters causes configuration drift. A single missed policy leaves environments exposed, violating zero-trust architecture and risking security audit failures.

How does agentic Kubernetes improve network security?

Agentic platforms automate the deployment and reconciliation of network policies centrally. This eliminates manual configuration errors and guarantees that all clusters in a fleet adhere to the same security standards.
