
Stop tool sprawl - Welcome to Terraform/OpenTofu support

Provisioning cloud resources shouldn’t require a second stack of tools. With Qovery’s new Terraform and OpenTofu support, you can now define and deploy your infrastructure right alongside your applications. Declaratively, securely, and in one place. No external runners. No glue code. No tool sprawl.
December 22, 2025
Alessandro Carrano
Head of Product

Too Many Tools, Too Much Overhead

Provisioning cloud infrastructure and deploying applications have traditionally lived in separate silos. Teams use tools like Atlantis, Spacelift, or custom runners to manage Terraform or OpenTofu. Then, they turn to ArgoCD, Flux, or Qovery to deploy their applications.

The result?

Fragmented workflows, inconsistent deployment timing, fragile CI scripts, and a constant back-and-forth between tools just to get a working environment up and running.

If your infra isn’t ready, your app deployment fails. If your app needs outputs from Terraform, someone has to wire them together manually. It works, but it’s painful and hard to scale.

The Qovery Platform: One Environment, One Control Plane

Qovery was built to simplify the application lifecycle by unifying it inside your own Kubernetes cluster, on your infrastructure, with your security, under your control.

Now, with Terraform & OpenTofu native support, Qovery extends that same control to infrastructure provisioning. You can deploy everything from a single environment: no CI glue, no handoffs, no tool sprawl.

This feature isn’t a side add-on. It’s a natural extension of how Qovery environments work.

You can specify deployment order between infrastructure resources and applications, pass outputs as environment variables for workloads, and manage the full stack lifecycle directly in Qovery, all running securely inside your Kubernetes cluster.
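As a hedged sketch of the output-to-environment-variable flow (the resource and output names here are illustrative, not taken from the original post), a manifest can expose a value that Qovery then injects into downstream services:

```hcl
# Illustrative manifest (hypothetical names). The output below is the
# kind of value Qovery injects as an environment variable for other
# services in the same environment.
resource "aws_db_instance" "main" {
  identifier        = "app-db"
  engine            = "postgres"
  instance_class    = "db.t3.micro"
  allocated_storage = 20
}

output "db_endpoint" {
  description = "Connection endpoint for the application database"
  value       = aws_db_instance.main.endpoint
}
```

Your backend service can then read the endpoint from its environment instead of relying on a hand-maintained CI script to wire the two together.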

Outcomes for Your Team

With Terraform & OpenTofu support, you’ll get:

  • Fewer scripts: no more custom CI jobs to glue Terraform to app deployments
  • Consistent deployments: define the full stack once, deploy it the same way every time
  • Less waiting on DevOps: developers can self-serve infra with guardrails
  • No tool sprawl: one platform to manage infra and apps together

A Realistic Example: From Three Tools to One Platform

One of our users was running Terraform through Atlantis, applications through ArgoCD, and using CI scripts to pass values between them. The process worked but was fragile and hard to scale. Any change required coordination across repos, tooling, and teams.

They moved to Qovery’s native Terraform support, defined their infrastructure and applications in the same environment, set the proper deployment order (RDS → seed job → backend), and removed dozens of lines of CI logic. Now, it’s all handled by Qovery: in one flow, with full visibility.

Read more: Cut Tool Sprawl: Automate Your Tech Stack with a Unified Platform

Deploy Your Manifest in 3 Simple Steps

  1. Add a new service of type “Terraform” inside your existing Qovery environment
  2. Connect the Git repository containing your Terraform or OpenTofu manifest, configure the inputs Qovery automatically detects from the manifest, then define the state location and the deployment order
  3. Execute plan or apply: Qovery will manage the lifecycle, handle remote state, and inject outputs as environment variables for your other services to consume
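The steps above can be sketched in a minimal manifest. The variable, backend, and resource names here are assumptions for illustration: Qovery surfaces `variable` blocks as configurable inputs, and a backend block is one way to declare where state lives:

```hcl
# Hypothetical manifest layout for step 2 (all names are examples).
variable "bucket_name" {
  type        = string
  description = "Surfaced by Qovery as a configurable input"
}

terraform {
  # Example state location (assumed bucket/key/region values).
  backend "s3" {
    bucket = "my-terraform-state"
    key    = "envs/staging/terraform.tfstate"
    region = "eu-west-1"
  }
}

resource "aws_s3_bucket" "assets" {
  bucket = var.bucket_name
}
```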

Want to see it in action? Check the demo below:

Try It Today

Ready to simplify your infra and app deployments?

Try it out today by adding a new Terraform service directly to an existing environment.

Need help migrating from Atlantis or custom scripts? We’re here to help.

