Terraform OSS effectively ended in July 2025 when IBM completed the HashiCorp acquisition and consolidated everything under the BSL-licensed Terraform product line. If you were using Terraform with OpenStack, you now have a choice: accept the Business Source License, or switch to OpenTofu.
For most OpenStack operators, the switch is straightforward. OpenTofu 1.11.5 (the current stable release as of February 2026) maintains full compatibility with the OpenStack provider, supports the same HCL configuration language, and adds features that Terraform never shipped – including native state encryption.
This guide walks through a complete OpenTofu setup for OpenStack, starting from authentication and ending with a production-ready environment including networks, compute instances, block storage, DNS records, and a load balancer. Every example in this guide runs against real OpenStack APIs.
Why OpenTofu for OpenStack
Three reasons matter for OpenStack users specifically.
License clarity. OpenTofu is MPL 2.0. You can use it commercially, modify it, embed it in CI/CD pipelines, and distribute it without concern about license restrictions. For enterprises running private cloud infrastructure, license ambiguity in core tooling creates procurement and legal friction that nobody needs.
Full provider compatibility. The OpenStack provider (v2.1.0) works identically with OpenTofu. Your existing .tf files, state files, and modules carry over without modification. The provider covers Nova, Neutron, Cinder, Glance, Octavia, Barbican, Designate, Keystone, and Swift/S3.
Native state encryption. Terraform state files contain secrets – passwords, IPs, resource IDs – stored in plaintext JSON. Terraform’s answer was “use a remote backend with server-side encryption.” OpenTofu added client-side state encryption as a first-class feature. Your state is encrypted before it ever leaves the machine. For organizations following SOC 2/ISO 27001 control frameworks, this matters.
Prerequisites
You will need three things before starting.
1. Install OpenTofu. The official install methods cover every platform. On Linux:
curl -fsSL https://get.opentofu.org/install-opentofu.sh | sh -s -- --install-method deb
tofu --version
# OpenTofu v1.11.5
2. Create an application credential. Do not use your username and password in automation. OpenStack application credentials are scoped, revocable, and safe for CI/CD. Create one from the Open Edge dashboard under Identity > Application Credentials, or via CLI:
openstack application credential create tofu-deploy \
  --description "OpenTofu automation" \
  --unrestricted
Save the ID and secret – you will not see the secret again. Drop --unrestricted unless your automation genuinely needs to create trusts or other application credentials; restricted credentials are the safer default.
3. Configure authentication. Create a clouds.yaml file in your working directory or ~/.config/openstack/:
clouds:
  open-edge:
    auth_type: v3applicationcredential
    auth:
      auth_url: https://api.us-east-1.open-edge.io:5000
      application_credential_id: "YOUR_CREDENTIAL_ID"
      application_credential_secret: "YOUR_CREDENTIAL_SECRET"
    region_name: us-east-1
    interface: public
    identity_api_version: 3
Provider Configuration
Note: All code examples in this guide have been tested against Open Edge Cloud APIs. Create an application credential in your Open Edge dashboard to get started.
Create a file called main.tf. Start with the provider block:
terraform {
  required_version = ">= 1.6.0"

  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 2.1.0"
    }
  }
}

provider "openstack" {
  cloud = "open-edge"
}
Then initialize:
tofu init
OpenTofu downloads the OpenStack provider and you are ready to go. Note that terraform {} blocks work identically in OpenTofu – no syntax changes required.
Tutorial: Build a Production-Ready Environment
The following sections build a complete environment step by step. Each resource block goes in your main.tf file (or split across files as you prefer – OpenTofu loads all .tf files in a directory).
We will use variables for values that change between environments:
variable "env_name" {
  description = "Environment name prefix"
  type        = string
  default     = "production"
}

variable "external_network" {
  description = "Name of the external/provider network"
  type        = string
  default     = "public"
}

variable "image_name" {
  description = "Name of the compute image"
  type        = string
  default     = "Ubuntu Server 24.04 LTS (Noble Numbat)"
}

variable "flavor_name" {
  description = "Name of the compute flavor"
  type        = string
  default     = "m1.medium"
}

variable "ssh_public_key" {
  description = "Path to SSH public key"
  type        = string
  default     = "~/.ssh/id_ed25519.pub"
}

variable "dns_zone" {
  description = "DNS zone name (must end with a dot)"
  type        = string
  default     = "example.com."
}
Step 1: Create a Network and Subnet
resource "openstack_networking_network_v2" "main" {
  name           = "${var.env_name}-network"
  admin_state_up = true
}

resource "openstack_networking_subnet_v2" "main" {
  name            = "${var.env_name}-subnet"
  network_id      = openstack_networking_network_v2.main.id
  cidr            = "10.0.1.0/24"
  ip_version      = 4
  dns_nameservers = ["1.1.1.1", "8.8.8.8"]
}
This creates a private network with a /24 subnet. Nothing can reach the internet yet – that comes next.
Step 2: Create a Router with External Gateway
data "openstack_networking_network_v2" "external" {
  name = var.external_network
}

resource "openstack_networking_router_v2" "main" {
  name                = "${var.env_name}-router"
  external_network_id = data.openstack_networking_network_v2.external.id
}

resource "openstack_networking_router_interface_v2" "main" {
  router_id = openstack_networking_router_v2.main.id
  subnet_id = openstack_networking_subnet_v2.main.id
}
The router connects your private subnet to the external network, enabling outbound internet access and floating IP assignment.
Step 3: Create Security Groups
resource "openstack_networking_secgroup_v2" "ssh" {
  name        = "${var.env_name}-ssh"
  description = "Allow SSH access"
}

resource "openstack_networking_secgroup_rule_v2" "ssh_ingress" {
  security_group_id = openstack_networking_secgroup_v2.ssh.id
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 22
  port_range_max    = 22
  remote_ip_prefix  = "0.0.0.0/0"
}

resource "openstack_networking_secgroup_v2" "web" {
  name        = "${var.env_name}-web"
  description = "Allow HTTP and HTTPS"
}

resource "openstack_networking_secgroup_rule_v2" "http_ingress" {
  security_group_id = openstack_networking_secgroup_v2.web.id
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 80
  port_range_max    = 80
  remote_ip_prefix  = "0.0.0.0/0"
}

resource "openstack_networking_secgroup_rule_v2" "https_ingress" {
  security_group_id = openstack_networking_secgroup_v2.web.id
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 443
  port_range_max    = 443
  remote_ip_prefix  = "0.0.0.0/0"
}

resource "openstack_networking_secgroup_v2" "database" {
  name        = "${var.env_name}-database"
  description = "Allow database access from internal network only"
}

resource "openstack_networking_secgroup_rule_v2" "postgres_ingress" {
  security_group_id = openstack_networking_secgroup_v2.database.id
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 5432
  port_range_max    = 5432
  remote_ip_prefix  = "10.0.1.0/24"
}
Notice that the database security group restricts access to the private subnet CIDR. Never expose database ports to the public internet.
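One way to tighten this further – shown here as a hedged alternative, not part of the tutorial's build – is to reference the web security group instead of a CIDR, so the rule tracks group membership and survives subnet renumbering:

```hcl
# Alternative: allow Postgres only from instances that carry the web
# security group, rather than from an entire CIDR range.
resource "openstack_networking_secgroup_rule_v2" "postgres_from_web" {
  security_group_id = openstack_networking_secgroup_v2.database.id
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "tcp"
  port_range_min    = 5432
  port_range_max    = 5432
  remote_group_id   = openstack_networking_secgroup_v2.web.id
}
```

Use either remote_ip_prefix or remote_group_id on a rule, not both.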
Step 4: Create a Keypair
resource "openstack_compute_keypair_v2" "deploy" {
  name       = "${var.env_name}-deploy-key"
  # pathexpand() resolves the leading "~" -- file() alone does not
  public_key = file(pathexpand(var.ssh_public_key))
}
Step 5: Launch an Instance
resource "openstack_compute_instance_v2" "app" {
  name        = "${var.env_name}-app-01"
  image_name  = var.image_name
  flavor_name = var.flavor_name
  key_pair    = openstack_compute_keypair_v2.deploy.name

  security_groups = [
    openstack_networking_secgroup_v2.ssh.name,
    openstack_networking_secgroup_v2.web.name,
  ]

  network {
    uuid = openstack_networking_network_v2.main.id
  }

  user_data = <<-EOF
    #!/bin/bash
    apt-get update && apt-get install -y nginx
    systemctl enable --now nginx
  EOF
}
A note on flavor_name vs flavor_id: use names for readability, but be aware that flavor names are not guaranteed unique across OpenStack deployments. If you are writing reusable modules, consider using flavor_id instead.
Step 6: Attach a Floating IP
resource "openstack_networking_floatingip_v2" "app" {
  pool = var.external_network
}

resource "openstack_networking_floatingip_associate_v2" "app" {
  floating_ip = openstack_networking_floatingip_v2.app.address
  port_id     = openstack_compute_instance_v2.app.network[0].port
}
The openstack_networking_floatingip_associate_v2 resource associates the floating IP with the instance’s port. After tofu apply, your instance will be reachable at the floating IP address. Note: the older openstack_compute_floatingip_associate_v2 resource is deprecated – always use the networking variant.
Step 7: Create a Block Volume and Attach It
resource "openstack_blockstorage_volume_v3" "data" {
  name        = "${var.env_name}-data-vol"
  size        = 100
  description = "Persistent data volume"
}

resource "openstack_compute_volume_attach_v2" "data" {
  instance_id = openstack_compute_instance_v2.app.id
  volume_id   = openstack_blockstorage_volume_v3.data.id
}
The volume appears as a block device on the instance. You will still need to partition, format, and mount it – OpenTofu handles provisioning, not OS-level configuration. Use user_data or a configuration management tool for that.
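If you want cloud-init to handle that step, one sketch is to swap the instance's user_data for a cloud-config document like the following. The /dev/vdb device name is an assumption – it depends on the hypervisor and attach order, so verify it on your deployment first:

```hcl
# Sketch: format and mount the data volume at first boot via cloud-init.
# Replaces the user_data in the instance resource above.
# ASSUMPTION: the volume shows up as /dev/vdb.
user_data = <<-EOF
  #cloud-config
  fs_setup:
    - device: /dev/vdb
      filesystem: ext4
  mounts:
    - [/dev/vdb, /mnt/data, ext4, "defaults,nofail", "0", "2"]
EOF
```

The nofail mount option keeps the instance booting even if the volume has not attached yet.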
Step 8: Create a DNS Zone and A Record (Designate)
resource "openstack_dns_zone_v2" "main" {
  name        = var.dns_zone
  email       = "admin@${trimsuffix(var.dns_zone, ".")}"
  description = "Managed by OpenTofu"
  ttl         = 3600
  type        = "PRIMARY"
}

resource "openstack_dns_recordset_v2" "app" {
  zone_id     = openstack_dns_zone_v2.main.id
  name        = "app.${var.dns_zone}"
  type        = "A"
  ttl         = 300
  records     = [openstack_networking_floatingip_v2.app.address]
  description = "App server"
}
Designate manages DNS zones and records through the OpenStack API. You still need to delegate your domain to the Designate nameservers at your registrar.
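Additional recordsets follow the same pattern. For example, a CNAME alias for the app record – the www name is illustrative, not part of the tutorial's required build:

```hcl
# Example: alias www to the app A record in the same zone.
resource "openstack_dns_recordset_v2" "www" {
  zone_id = openstack_dns_zone_v2.main.id
  name    = "www.${var.dns_zone}"
  type    = "CNAME"
  ttl     = 300
  records = ["app.${var.dns_zone}"]
}
```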
Step 9: Create a Load Balancer with Health Monitor (Octavia)
resource "openstack_lb_loadbalancer_v2" "web" {
  name                  = "${var.env_name}-lb"
  vip_subnet_id         = openstack_networking_subnet_v2.main.id
  loadbalancer_provider = "ovn"
}

resource "openstack_lb_listener_v2" "http" {
  name            = "${var.env_name}-http-listener"
  protocol        = "TCP"
  protocol_port   = 80
  loadbalancer_id = openstack_lb_loadbalancer_v2.web.id
}

resource "openstack_lb_pool_v2" "http" {
  name        = "${var.env_name}-http-pool"
  protocol    = "TCP"
  lb_method   = "SOURCE_IP_PORT"
  listener_id = openstack_lb_listener_v2.http.id
}

resource "openstack_lb_member_v2" "app" {
  pool_id       = openstack_lb_pool_v2.http.id
  address       = openstack_compute_instance_v2.app.access_ip_v4
  protocol_port = 80
  subnet_id     = openstack_networking_subnet_v2.main.id
}

resource "openstack_lb_monitor_v2" "http" {
  pool_id     = openstack_lb_pool_v2.http.id
  type        = "TCP"
  delay       = 10
  timeout     = 5
  max_retries = 3
}
Open Edge uses the OVN load balancer provider, which operates at L4 (TCP/UDP). Set loadbalancer_provider = "ovn" explicitly, use TCP protocol for listeners and pools, and SOURCE_IP_PORT for the load balancing algorithm. OVN LBs are lightweight – no amphora VMs to provision, so creation is fast. The health monitor removes unhealthy members automatically. For L7 features (HTTP path routing, cookie persistence), deploy a reverse proxy like nginx or HAProxy on your instances behind the OVN load balancer.
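For HTTPS traffic under the same L4 constraint, a second listener and pool passing TCP 443 straight through to the instances is one workable sketch – TLS terminates on the instances themselves, since OVN does not do it for you:

```hcl
# Sketch: TCP pass-through for HTTPS. TLS terminates on the backend
# instances because OVN load balancers have no L7/TLS termination.
resource "openstack_lb_listener_v2" "https" {
  name            = "${var.env_name}-https-listener"
  protocol        = "TCP"
  protocol_port   = 443
  loadbalancer_id = openstack_lb_loadbalancer_v2.web.id
}

resource "openstack_lb_pool_v2" "https" {
  name        = "${var.env_name}-https-pool"
  protocol    = "TCP"
  lb_method   = "SOURCE_IP_PORT"
  listener_id = openstack_lb_listener_v2.https.id
}

resource "openstack_lb_member_v2" "app_https" {
  pool_id       = openstack_lb_pool_v2.https.id
  address       = openstack_compute_instance_v2.app.access_ip_v4
  protocol_port = 443
  subnet_id     = openstack_networking_subnet_v2.main.id
}
```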
Outputs
Add outputs to make the results of tofu apply immediately useful:
output "app_floating_ip" {
  value       = openstack_networking_floatingip_v2.app.address
  description = "Public IP of the app server"
}

output "lb_vip" {
  value       = openstack_lb_loadbalancer_v2.web.vip_address
  description = "Load balancer VIP (internal)"
}

output "dns_zone_id" {
  value       = openstack_dns_zone_v2.main.id
  description = "Designate zone ID (list its NS recordset to find the nameservers to delegate to)"
}
State Encryption
This is where OpenTofu pulls ahead of Terraform in a meaningful way. Terraform stores state as plaintext JSON. OpenTofu encrypts it natively.
Add a state encryption block to your configuration:
terraform {
  encryption {
    key_provider "pbkdf2" "default" {
      passphrase = var.state_passphrase
    }

    method "aes_gcm" "default" {
      keys = key_provider.pbkdf2.default
    }

    state {
      method   = method.aes_gcm.default
      enforced = true
    }

    plan {
      method   = method.aes_gcm.default
      enforced = true
    }
  }
}

variable "state_passphrase" {
  description = "Passphrase for state encryption"
  type        = string
  sensitive   = true
}
With enforced = true, OpenTofu refuses to read or write unencrypted state. Both the state file and plan files are encrypted using AES-GCM with a key derived from your passphrase via PBKDF2. Pass the passphrase via environment variable in CI/CD:
export TF_VAR_state_passphrase="your-secure-passphrase"
tofu apply
For teams that need to meet FIPS 140-3 requirements, Open Edge Cloud infrastructure uses FIPS 140-3 validated encryption (CMVP Certificate #5115) for data at rest. Combining that with OpenTofu’s client-side state encryption gives you encryption coverage across both the infrastructure and the tooling layer.
Tips and Gotchas
Flavor names vs IDs. Flavor names can change or differ between regions. For portable modules, use openstack_compute_flavor_v2 data sources to look up flavors by criteria (RAM, vCPUs, disk) rather than hardcoding names.
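A sketch of that lookup pattern – the vCPU/RAM values here are examples, not a prescription:

```hcl
# Sketch: select a flavor by shape (2 vCPUs, 4 GiB RAM) instead of name.
data "openstack_compute_flavor_v2" "app" {
  vcpus = 2
  ram   = 4096
}
```

Then reference it from the instance with flavor_id = data.openstack_compute_flavor_v2.app.id in place of flavor_name.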
Network vs subnet in instance blocks. When launching an instance, you specify a network block with the network UUID. The instance gets an IP from the subnet associated with that network. If a network has multiple subnets, use fixed_ip_v4 to control which subnet assigns the address.
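Inside the instance resource, that looks like the following network block – the address is an illustrative value from this guide's 10.0.1.0/24 subnet:

```hcl
# Sketch: pin the instance to a specific address (and therefore subnet).
network {
  uuid        = openstack_networking_network_v2.main.id
  fixed_ip_v4 = "10.0.1.20"
}
```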
External network name varies. Every OpenStack deployment names its external/provider network differently. Never hardcode it. Use a variable or data source.
OVN load balancers are L4 only. If your OpenStack deployment uses the OVN Octavia provider (Open Edge does), load balancers operate at TCP/UDP. Set loadbalancer_provider = "ovn", use TCP or UDP protocol (not HTTP), and use SOURCE_IP_PORT for the lb_method. OVN LBs create instantly (no amphora VMs), but do not support L7 features like HTTP path routing or cookie-based session persistence. For L7, deploy a reverse proxy behind the OVN LB.
Floating IP association. Use openstack_networking_floatingip_associate_v2 (not the deprecated openstack_compute_floatingip_associate_v2). The networking variant takes a port_id instead of instance_id – get it from openstack_compute_instance_v2.app.network[0].port.
DNS zone creation can be slow. Designate zone creation may take several minutes while DNSSEC keys are generated and the zone is propagated to all nameservers. OpenTofu handles the polling, but do not be surprised by 5-10 minute creation times.
Application credentials scope. Application credentials inherit the roles of the user who created them. For least-privilege CI/CD, create a dedicated service user with only the roles needed for your infrastructure, then create the application credential from that user.
Import existing resources. If you already have OpenStack resources you want to manage with OpenTofu, use tofu import. For example: tofu import openstack_networking_network_v2.main NETWORK_UUID. OpenTofu adds the resource to state without recreating it.
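OpenTofu also supports declarative import blocks as an alternative to the CLI, which lets the import show up in a plan alongside other changes:

```hcl
# Declarative equivalent of the CLI import above.
import {
  to = openstack_networking_network_v2.main
  id = "NETWORK_UUID"
}
```

Run tofu plan to preview the import before applying; the block can be deleted once the resource is in state.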
Destroy order matters. OpenTofu handles dependency ordering automatically based on resource references. Avoid depends_on unless you have dependencies that are not expressed through attribute references. Unnecessary depends_on blocks can cause unexpected destroy ordering.
Get Started on Open Edge Cloud
Open Edge Cloud supports OpenTofu natively through standard OpenStack APIs. There is no proprietary provider, no vendor SDK, and no lock-in. Your OpenTofu configurations work with any OpenStack deployment – and if you are running on Open Edge, you get the additional benefit of FIPS 140-3 validated encryption, single-tenant infrastructure, and US-only operations.
All infrastructure runs at Iron Mountain VA-1 in Manassas, Virginia. Compute, storage, networking, and control plane – everything in one facility, operated by US-based personnel, following SOC 2/ISO 27001 control frameworks.
Ready to try it? Contact our team to get an Open Edge Cloud account with application credentials for OpenTofu automation.
