Kubernetes - Network isolation with NetworkPolicy

As the number of applications deployed in your Kubernetes cluster grows, you may want to isolate them from a network point of view. By default, Kubernetes offers no network isolation: every pod in every namespace can talk to every other pod, even on network ports you have not exposed. Yes, that's scary! There are different approaches and tools for network isolation; let's take a look at NetworkPolicy.
September 26, 2025
Pierre Mavro
CTO & Co-founder

Kubernetes Networking plugin

Kubernetes provides a resource called NetworkPolicy that defines rules to allow or deny network traffic, much like a network firewall. By default, creating this resource does nothing: you first need to install a Kubernetes networking plugin that implements it.

Some Kubernetes cluster providers ship their own implementation, such as GKE and AKS. Alternatively, you can use Calico, as recommended by AWS for EKS.

This page assumes you have installed a Kubernetes networking plugin (see below).

Installation

Here are the links to install the Kubernetes networking plugin for your cloud provider.
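Whichever plugin you install, it is worth confirming it is running before applying policies. The namespaces and labels below are typical for a Calico install and may differ in your cluster:

```shell
# Check that the network plugin pods are healthy (Calico shown as an example).
# Operator-based installs usually run in the calico-system namespace;
# manifest-based installs run in kube-system with the k8s-app=calico-node label.
kubectl get pods -n calico-system
kubectl get pods -n kube-system -l k8s-app=calico-node
```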

Configuration

Implementing network isolation follows the same rule of thumb as configuring a firewall: block all inbound traffic, then allow only what you need.

Block all incoming traffic

In the example below, we will isolate the production namespace from all other namespaces while still allowing any pods deployed within production to talk to each other.

First, let's create a namespace:

apiVersion: v1
kind: Namespace
metadata:
  name: production
  labels:
    role: production
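Assuming the manifest above is saved as namespace.yaml (the filename is illustrative), you can create the namespace and double-check its label. This matters because the namespaceSelector rules later in this post match on the role=production label, not on the namespace name:

```shell
kubectl apply -f namespace.yaml
# The role=production label must be present for later
# namespaceSelector rules to match this namespace.
kubectl get namespace production --show-labels
```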

Then, blocking incoming traffic for this namespace looks like this:

#...
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: no-inbound-traffic
  namespace: production
spec:
  policyTypes:
    - Ingress
  podSelector:
    matchLabels: {}

The rule works as follows:

  • policyTypes: Ingress selects only incoming traffic.
  • The empty podSelector/matchLabels set applies the rule to all pods in the namespace.
  • No ingress rules are defined, so all incoming traffic is blocked.
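To confirm the policy works, try to reach a production pod from another namespace; the pod IP and port below are placeholders for one of your own pods. With the deny-all policy in place, the request should time out:

```shell
# Launch a throwaway busybox pod in the default namespace and try to
# connect to a pod in production; expect a timeout.
kubectl run np-test --rm -it --image=busybox --restart=Never -- \
  wget -qO- -T 5 http://<production-pod-ip>:8000
```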

Allow traffic between pods within the same namespace

To allow pods within the production namespace to communicate with each other, add a NetworkPolicy:

#...
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-same-namespace-traffic
  namespace: production
spec:
  policyTypes:
    - Ingress
  podSelector:
    matchLabels: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              role: production

The ingress rule indicates that we want to allow all traffic coming from any namespace carrying the label role=production.
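Running the same connectivity check from inside the production namespace should now succeed, since the namespaceSelector matches the namespace's own role=production label (the pod IP and port are again placeholders):

```shell
# From inside production, the connection is allowed; expect a response.
kubectl run np-test -n production --rm -it --image=busybox --restart=Never -- \
  wget -qO- -T 5 http://<production-pod-ip>:8000
```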

Allow incoming traffic from outside

Let's now imagine that you have a web application listening on port 8000. To make it publicly accessible, we need to add one more rule:

#...
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-port-8000
  namespace: production
spec:
  policyTypes:
    - Ingress
  podSelector:
    matchLabels:
      app: web-server
  ingress:
    - ports:
        - port: 8000

Instead of selecting all pods, this policy picks only those with the label app=web-server in the production namespace. The ingress rule then allows anybody to connect to port 8000 of the web server.
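At this point you can list and inspect the policies attached to the namespace to review their combined effect:

```shell
# Summarize all policies in production, then show the details of one of them.
kubectl get networkpolicy -n production
kubectl describe networkpolicy allow-port-8000 -n production
```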

Block outgoing traffic

NetworkPolicy can also be used to prevent traffic from going out. For instance, we may not want an application to be able to query the AWS instance metadata server.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: disable-aws-metadata
  namespace: production
spec:
  policyTypes:
    - Egress
  podSelector:
    matchLabels: {}
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 169.254.169.254/32
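From a pod in the production namespace, a request to the metadata IP should now time out while other destinations remain reachable (the busybox test pod is illustrative):

```shell
# Expect a timeout: 169.254.169.254/32 is excluded from the allowed CIDR.
kubectl run np-test -n production --rm -it --image=busybox --restart=Never -- \
  wget -qO- -T 5 http://169.254.169.254/latest/meta-data/
```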

Going further

NetworkPolicy is useful for simple network traffic filtering but not enough for complete control over pod communication. Filtering rules can only match on pod and namespace selectors, IP blocks, and ports. Once a network port is open, a person with bad intentions can still connect directly to the application port (here, 8000) and bypass your Ingress resources and load balancer setup.

In a forthcoming post, we will see how to get fine-grained filtering with Istio, a service mesh that deploys sidecar proxies alongside your pods.

