Valtik Studios
Terraform · high · 2026-03-06 · 13 min

Terraform State Files: The IaC Secret Store That Keeps Getting Leaked

A Terraform state file is a JSON document that contains a complete description of your cloud infrastructure plus every secret Terraform touched while provisioning: database passwords, API keys, private certs, cloud credentials, often stored in plaintext. State files turn up in public S3 buckets, in Git repositories, in CI/CD artifacts, and on developer laptops on a weekly basis. A practical walkthrough of the exposure patterns and how to actually harden state handling.

Tre Trebucchi · Founder, Valtik Studios · Penetration Tester

Founder of Valtik Studios. Pentester. Based in Connecticut, serving US mid-market.

The file nobody should be able to read

Here's the part consultants don't put in the glossy PDF.

Terraform is how most cloud infrastructure gets provisioned in 2026. You write HCL describing what you want (S3 buckets, EC2 instances, RDS databases, IAM roles, security groups, Kubernetes clusters). You run terraform apply. Terraform figures out what needs to change and makes the changes.

To track what exists, Terraform keeps a state file. By default it's terraform.tfstate. A JSON document that describes every resource Terraform manages. The state file is Terraform's source of truth about your infrastructure.

The state file contains, in plaintext:

  • Resource identifiers (every EC2 instance ID, every S3 bucket name, every RDS endpoint)
  • Attributes of each resource (security group rules, IAM policies, encryption settings)
  • Any secrets that were passed to Terraform during provisioning (database master passwords, private key material that happened to be materialized, API tokens used for deployment)
  • Infrastructure topology (what connects to what)
  • IAM role definitions and trust policies

In short: if you get access to a Terraform state file, you effectively have a map of the target's entire cloud infrastructure plus a bunch of credentials.

State files should be treated as Tier-0 secrets. In practice, they frequently aren't. This post walks through the patterns we find on cloud security audits, the specific failure modes, and the hardening that works.

How state files leak

Pattern 1: Committed to Git

The single most common exposure. A developer runs terraform apply locally, which creates terraform.tfstate in the working directory. They commit it to the repo "because it tracks infrastructure state."

What we find:

  • Public GitHub repositories with .tfstate files in the history
  • Private repositories where the state was committed, then later "removed" but remains in git history
  • CI/CD pipelines that check state files into artifact storage

A public GitHub search for filename:terraform.tfstate returns thousands of results. A meaningful fraction of them contain production secrets.

Real finding: a fintech startup's public repo had committed terraform.tfstate 18 months ago. The state contained their production RDS master password, their Stripe secret key, and their AWS access keys. The keys had since been rotated, but only after the exposure was discovered during an audit.

Pattern 2: Public S3 buckets

Many teams configure Terraform to store state in S3 (the common "remote state" backend):

terraform {
  backend "s3" {
    bucket = "my-company-terraform-state"
    key = "prod/terraform.tfstate"
    region = "us-east-1"
  }
}

Good architecture. The failure: the S3 bucket is misconfigured with public read access.

Real findings:

  • S3 buckets named *-terraform-state indexed by search engines
  • Bucket ACLs allowing AllUsers read access
  • Bucket policies with overly broad Principal: * permissions
  • Cross-account roles with read access to state buckets

Every cloud pentest we run includes specific searches for Terraform state buckets associated with the target. We find publicly-readable state buckets on approximately 1 in 5 engagements.

Pattern 3: CI/CD artifact storage

CI pipelines that run terraform apply often keep the state file as a pipeline artifact for debugging or downstream use. Pipeline artifacts are often:

  • Accessible to anyone with read access to the CI system
  • Retained indefinitely
  • Not scrubbed of secrets
  • Sometimes publicly accessible (public GitHub Actions, public GitLab CI)

Real finding: an open-source project's GitHub Actions workflow uploaded terraform.tfstate as a pipeline artifact. Any visitor could download it from the artifact archive for months before it was discovered.

Pattern 4: Developer laptops

State files on developer laptops are typically created during local development:

  • Developers running terraform apply from their laptops
  • Retained on disk indefinitely
  • Potentially synced to cloud backups (iCloud, Google Drive, OneDrive, Dropbox)
  • Accessible to anyone who compromises the laptop

The WAVESHAPER RAT (from our Axios npm supply chain post) specifically looks for *.tfstate files on developer file systems. Multiple ransomware groups include state file exfiltration in their playbooks.

Pattern 5: Backup files

State files get backed up via:

  • terraform.tfstate.backup. Automatic backup created by Terraform
  • Cloud provider backups of any storage containing state files
  • Organizational backup systems (Veeam, Rubrik, Cohesity) that back up the entire filesystem where state files live

Backups often have looser access controls than the primary state file. Attackers who can't reach the primary state file often can reach backups.

Pattern 6: State file in terraform plan output

terraform plan output can include state information. If plans are saved to files and committed to CI artifacts or shared via email for review, they expose sensitive state details.

Specific issue: terraform plan -out=plan.tfplan creates a file that includes state snapshots. The file is compressed but not encrypted. Sharing these plan files via email, ticketing systems, or PR comments leaks state.

Pattern 7: Shared tfvars files

Terraform variable files (*.tfvars) often contain secrets that get passed into resources:

# terraform.tfvars
database_master_password = "super-secret-password"
stripe_api_key = "sk_live_abc123..."
jwt_signing_secret = "random-secret-value"

Committed to Git. Shared via Slack. Stored in shared drives. Each exposure is another path to the infrastructure they configure.

What an attacker does with a state file

Once an attacker has your state file, they have a target list and often the credentials to compromise it. Typical attack flow:

Step 1: Parse the state

jq '.resources[] | {type, name, ids: [.instances[].attributes.id]}' terraform.tfstate

Enumerates every resource. Attackers now know:

  • AWS account ID
  • Every EC2 instance ID
  • Every S3 bucket name (and policy)
  • Every RDS instance with endpoint and database name
  • Every IAM role name and policy
  • Every VPC, subnet, security group
  • Every Lambda function name and runtime
  • Every Route53 record
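Reproducing the enumeration without jq is trivial. A minimal Python sketch against the documented v4 state layout (the synthetic state document below is illustrative):

```python
import json

def enumerate_resources(state: dict):
    """Yield (type, name, id) for every resource instance in a state file."""
    for res in state.get("resources", []):
        for inst in res.get("instances", []):
            yield res.get("type"), res.get("name"), inst.get("attributes", {}).get("id")

# A tiny synthetic state document in the v4 layout:
state = json.loads("""
{
  "version": 4,
  "resources": [
    {"type": "aws_db_instance", "name": "main",
     "instances": [{"attributes": {"id": "prod-db",
                                   "endpoint": "prod-db.abc123.us-east-1.rds.amazonaws.com"}}]}
  ]
}
""")

for rtype, name, rid in enumerate_resources(state):
    print(rtype, name, rid)  # aws_db_instance main prod-db
```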

Step 2: Extract secrets

jq '[.. | objects | with_entries(select(.key | test("password|secret|token|access_key"))) | select(length > 0)]' terraform.tfstate

Pulls out any field that looks like a secret. Common findings:

  • Database master passwords (from aws_db_instance.master_password)
  • Cloud credentials (AWS access key / secret key if managed by Terraform)
  • API keys (Stripe, Twilio, SendGrid, etc. passed as environment variables)
  • TLS private keys (from tls_private_key resources or ACM imports)
  • Random passwords generated by random_password or random_string
  • Kubernetes cluster CA certificates and client keys
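The same extraction works as a recursive walk over the state JSON, flagging secret-looking key names. A sketch (the key list is a heuristic, not exhaustive):

```python
SECRET_KEYS = ("password", "secret", "private_key", "access_key", "token")

def find_secrets(node, path=""):
    """Recursively collect string values whose key name looks secret-bearing."""
    hits = []
    if isinstance(node, dict):
        for key, value in node.items():
            p = f"{path}.{key}" if path else key
            if isinstance(value, str) and any(s in key.lower() for s in SECRET_KEYS):
                hits.append((p, value))
            hits.extend(find_secrets(value, p))
    elif isinstance(node, list):
        for i, item in enumerate(node):
            hits.extend(find_secrets(item, f"{path}[{i}]"))
    return hits

# Synthetic state fragment for illustration:
state = {"resources": [{"instances": [{"attributes": {"master_password": "hunter2", "id": "prod-db"}}]}]}
for path, value in find_secrets(state):
    print(path, "->", value)
```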

Step 3: Use the secrets

The credentials often grant admin-level access to the infrastructure. An attacker can:

  • Log in to databases directly using the master password
  • Assume IAM roles using extracted credentials
  • Deploy their own infrastructure in the account (crypto mining, attack staging)
  • Disable monitoring and logging before doing worse things
  • Exfiltrate data from databases and S3

In ransomware scenarios specifically, state file access often provides the "infrastructure control" step of the attack. Attackers can disable backups, delete snapshots, and maximize damage before demanding ransom.

The hardening

Rule 1: Never commit state files to Git

Your .gitignore should include:

# Terraform state
*.tfstate
*.tfstate.*
*.tfvars
!*.tfvars.example

# Terraform working directory
.terraform/
# Note: commit .terraform.lock.hcl; it pins provider hashes and contains no secrets

Configure this at the repository level, standardize it across the organization via template repositories, and enforce it with pre-commit hooks.

If state has already been committed: purging it from Git history requires git filter-repo (or the older git filter-branch / BFG Repo-Cleaner). Every developer who's ever cloned the repo still has a copy on their machine. Treat all secrets in the historically-committed state as compromised. Rotate them.

Rule 2: Use remote state with appropriate access controls

Remote state backends (S3 + DynamoDB for locking is the common AWS pattern) should be:

terraform {
  backend "s3" {
    bucket = "my-company-terraform-state-prod"
    key = "prod/terraform.tfstate"
    region = "us-east-1"
    encrypt = true # SSE encryption
    dynamodb_table = "terraform-state-lock" # state locking
    kms_key_id = "arn:aws:kms:...:key/..." # explicit KMS key
  }
}

Critical configurations:

  • encrypt = true. Uses SSE-S3 or SSE-KMS at minimum
  • KMS key. Dedicated key for state file encryption, access logged to CloudTrail
  • Bucket ACL. Private (no AllUsers, AllAuthenticatedUsers)
  • Bucket policy. Only specific IAM roles/users can read/write state
  • Public access block. Enforced at bucket level
  • Versioning. Enabled (allows recovery of prior state versions)
  • Cross-region replication. For disaster recovery
  • Access logging. Log every access to state files
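The backend block above configures the client side; the bucket itself needs matching resources. A sketch of the bucket-side hardening in the AWS provider (the `state` resource names and KMS key are placeholders):

```hcl
resource "aws_s3_bucket_versioning" "state" {
  bucket = aws_s3_bucket.state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_public_access_block" "state" {
  bucket                  = aws_s3_bucket.state.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_s3_bucket_server_side_encryption_configuration" "state" {
  bucket = aws_s3_bucket.state.id
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm     = "aws:kms"
      kms_master_key_id = aws_kms_key.state.arn
    }
  }
}
```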

Rule 3: State bucket IAM

Access to the state bucket should be tightly scoped:

Read access:

  • Only the CI/CD pipeline role that runs terraform plan
  • Specific admins who need emergency read access

Write access:

  • Only the CI/CD pipeline role that runs terraform apply
  • No interactive developer write access to production state

No access:

  • Regular developers (they shouldn't be running terraform apply against production)
  • Read-only roles
  • Cross-account roles unless absolutely necessary
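The scoping above can be enforced with an explicit deny. A sketch of a bucket policy that blocks everything except the pipeline roles (the account ID and role names are placeholders):

```hcl
resource "aws_s3_bucket_policy" "state" {
  bucket = aws_s3_bucket.state.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "DenyAllExceptPipelineRoles"
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:*"
        Resource = [
          aws_s3_bucket.state.arn,
          "${aws_s3_bucket.state.arn}/*",
        ]
        Condition = {
          StringNotLike = {
            "aws:PrincipalArn" = [
              "arn:aws:iam::111111111111:role/terraform-plan",  # placeholder
              "arn:aws:iam::111111111111:role/terraform-apply", # placeholder
            ]
          }
        }
      }
    ]
  })
}
```

An explicit deny wins over any allow granted elsewhere, so a later overly broad IAM policy can't quietly reopen access to state.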

Rule 4: Use Terraform Cloud / Enterprise for sensitive environments

For production workloads, consider:

  • Terraform Cloud / Enterprise (HashiCorp's hosted offering)
  • Spacelift
  • Atlantis (self-hosted)
  • env0

These platforms handle state securely by default. Encrypted, access-controlled, audit-logged. They also provide workflow features (approvals, policies, compliance checks) that local state doesn't.

Rule 5: Don't pass secrets through Terraform state

The root of the problem is that secrets end up in state at all. Architectural patterns to avoid this:

Use AWS Secrets Manager / Parameter Store:

# Don't do this:
resource "aws_db_instance" "main" {
  master_password = var.db_password # ends up in state
}

# Do this: let RDS generate the credential and store it in Secrets Manager
resource "aws_db_instance" "main" {
  manage_master_user_password = true
  # No master_password field; the secret never enters Terraform state
}

RDS, ElastiCache, and other AWS services now support "managed secrets" integrations with Secrets Manager. The secret is never in Terraform state. Note that random_password and random_string are not a fix here: their results are stored in state like any other attribute.

Use External Secrets Operator on Kubernetes:

Kubernetes secrets managed by ESO pull from Secrets Manager/Parameter Store at runtime. Terraform configures the ESO policy. The secrets never pass through Terraform.

Rotate everything post-terraform apply:

If secrets must transit through Terraform, rotate them immediately after application:

  1. Terraform creates resource with initial password
  2. Automated post-apply script rotates the password via cloud provider API
  3. State now contains the stale, pre-rotation password, which is no longer valid
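The rotation step can be scripted. A sketch assuming RDS and boto3 (the instance identifier is a placeholder, and the live API call is gated behind an apply flag since it mutates a running instance):

```python
import secrets
import string

def generate_password(length: int = 32) -> str:
    """Generate a random password from letters, digits, and RDS-safe punctuation."""
    alphabet = string.ascii_letters + string.digits + "!#$%^&*()-_=+"
    return "".join(secrets.choice(alphabet) for _ in range(length))

def rotate_rds_password(instance_id: str, apply: bool = False) -> str:
    """Generate a fresh password and, if apply=True, push it to RDS out-of-band."""
    new_password = generate_password()
    if apply:
        import boto3  # only needed when actually rotating
        rds = boto3.client("rds")
        rds.modify_db_instance(
            DBInstanceIdentifier=instance_id,
            MasterUserPassword=new_password,
            ApplyImmediately=True,
        )
    return new_password

# Dry run: generates the credential without touching AWS ("prod-db" is a placeholder)
pw = rotate_rds_password("prod-db", apply=False)
print(len(pw))  # 32
```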

Rule 6: sensitive = true for variables

Mark sensitive variables:

variable "db_password" {
  type = string
  sensitive = true
}

This doesn't remove secrets from state (they're still there), but it prevents Terraform from printing them in plan/apply output. Reduces accidental exposure via CI logs.

Rule 7: Scan state files before and during commits

Pre-commit hooks that scan for state files + secrets:

  • git-secrets. AWS's scanner for credential patterns in commits
  • gitleaks. Comprehensive secret scanning
  • trivy. IaC security scanning that includes state file checks
  • checkov. Includes state file exposure checks
  • A custom pre-commit hook rejecting *.tfstate files

GitHub's own secret scanning catches many patterns post-commit. It's a safety net, not primary defense.
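The custom pre-commit hook from the list above takes only a few lines. A sketch in Python (the blocking logic is separated from the git call so it can be tested; wire main() up as the installed hook's entry point):

```python
import subprocess

BLOCKED_SUFFIXES = (".tfstate", ".tfstate.backup", ".tfvars")

def blocked_paths(staged_paths):
    """Return staged paths that look like Terraform state or variable files."""
    return [
        p for p in staged_paths
        if p.endswith(BLOCKED_SUFFIXES) and not p.endswith(".tfvars.example")
    ]

def main() -> int:
    """Hook entry point: inspect the staged file list and veto the commit."""
    staged = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    offenders = blocked_paths(staged)
    for path in offenders:
        print(f"refusing to commit {path}: state/vars files must not enter Git")
    return 1 if offenders else 0

# In the installed hook the last line would be: raise SystemExit(main())
print(blocked_paths(["main.tf", "terraform.tfstate", "prod.tfvars"]))
```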

Rule 8: Audit state buckets periodically

Part of regular security hygiene:

  • Quarterly inventory of all state buckets
  • Access policy review
  • CloudTrail/S3 access log review for anomalous access
  • Old state buckets (from deprecated projects) identified and deleted

Rule 9: Treat state files as secrets in all contexts

Any process that handles state files should treat them at the same security level as the secrets they contain:

  • Don't email state files
  • Don't paste state content into Slack, tickets, or chat
  • Don't store state files in personal cloud storage
  • Don't include state files in debug archives or support uploads

When debugging, redact state content before sharing with anyone. Or use terraform state show for specific resources without exposing the whole file.

Rule 10: Compartmentalize state by environment

A monolithic state file for all environments maximizes blast radius if compromised. Break into:

  • prod/. Production infrastructure
  • staging/. Staging
  • dev/. Development
  • security/. Security-relevant infrastructure (IAM, WAF, GuardDuty)
  • networking/. VPCs, transit gateways, DNS

Each has its own state file, its own access controls, its own CI/CD pipeline.

The workspace question

Terraform has two concepts:

  • Workspaces within a state file (multiple named states under one configuration)
  • Separate state files per environment

Workspaces are less secure from an access-control perspective. Whoever has access to the workspace system has access to all workspaces. For multi-environment deployment (dev/staging/prod), separate state files with separate backends are preferred for access control isolation.

Detection: is your state file already leaked?

If you're worried about existing exposure, some specific checks:

Check 1: public GitHub

# Searches public GitHub for your company's state files
gh search code --filename terraform.tfstate "your-company-name"
gh search code --filename terraform.tfstate "your-aws-account-id"

Check 2: public S3 buckets

Try common state-bucket naming patterns for your organization:

  • <company>-terraform-state
  • <company>-tfstate
  • <company>-terraform
  • <company>-prod-state

Attempt unauthenticated access. A readable response indicates public exposure.
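Generating the candidate list is easy to script. A sketch that expands an organization name into the common patterns (the probe itself is left as a comment since it makes network calls; "acme" is a placeholder):

```python
def candidate_state_buckets(org: str):
    """Expand an org name into common Terraform state bucket naming patterns."""
    suffixes = ("terraform-state", "tfstate", "terraform", "prod-state")
    envs = ("", "prod-", "dev-", "staging-")
    return sorted({f"{org}-{env}{suffix}" for env in envs for suffix in suffixes})

# Each candidate can then be probed unauthenticated, e.g.
#   curl -s -o /dev/null -w "%{http_code}" "https://<bucket>.s3.amazonaws.com/"
# 404 means no such bucket; 403 means it exists but denies listing;
# 200 on an unauthenticated list request means the bucket is publicly readable.
for name in candidate_state_buckets("acme"):
    print(name)
```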

Check 3: CI/CD artifact inventory

For each CI system:

  • Review pipeline artifacts
  • Check for state files in archived runs
  • Check for state files in public workflow runs (open-source projects especially)

Check 4: cloud access logs

Review CloudTrail / Azure Monitor / GCP audit logs for:

  • GetObject calls against state buckets from unexpected sources
  • Role assumption to Terraform roles from unexpected locations
  • Object downloads matching typical state file sizes (usually 100KB-10MB)

Check 5: developer laptop audit

For organizations with formal endpoint management, search file systems for:

  • *.tfstate files
  • *.tfstate.backup files
  • .terraform/ directories that shouldn't be on endpoint storage
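For ad-hoc checks without an EDR query language, a filesystem walk does the job. A sketch:

```python
import os

def find_state_files(root: str):
    """Walk a directory tree and collect Terraform state artifacts."""
    hits = []
    for dirpath, dirnames, filenames in os.walk(root):
        if ".terraform" in dirnames:
            hits.append(os.path.join(dirpath, ".terraform"))
        for name in filenames:
            if name.endswith((".tfstate", ".tfstate.backup")):
                hits.append(os.path.join(dirpath, name))
    return hits

# Example: scan the current directory tree (point at home dirs on managed endpoints)
for path in find_state_files("."):
    print(path)
```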

If state has leaked

Assume worst case:

  1. Rotate every secret in the state file. Database passwords, API keys, cloud credentials. Even if Terraform "manages" the secret, rotate out-of-band.

  2. Audit cloud logs. Look for activity during and after the exposure window. Any unusual API calls, resource access, or authentication from unexpected locations.

  3. Scan for persistence. Attackers with admin access often install persistence mechanisms. IAM users, cross-account roles, Lambda functions with long-running access. Check for resources you don't remember creating.

  4. Review IAM policies. Any policy that got attached during the exposure window. Any role trust policy that got updated.

  5. Check for data exfiltration. S3 access logs, RDS query logs, anything that indicates data was pulled from the environment.

  6. File breach notification if PII was at risk. State file exposure alone may not trigger obligations. But if the state-revealed credentials were used to access PII, reporting requirements apply.

  7. Remediate the root cause. Whatever let the state file leak needs permanent fixing before the next Terraform run.

For DevOps teams

If you're responsible for infrastructure at your organization, the action list:

This week:

  • Verify .gitignore includes *.tfstate and *.tfvars
  • Search your repos (including private) for committed state files
  • Check state bucket public access blocks

This month:

  • Review IAM policies on state buckets
  • Enable state bucket access logging if not already
  • Verify state encryption at rest
  • Schedule penetration test scoped to include state file exposure

This quarter:

  • Migrate secrets out of Terraform state where possible (Secrets Manager integration, ESO)
  • Implement state-file scanning in CI/CD
  • Standardize on a state backend pattern across projects
  • Implement tabletop exercise covering state file exposure scenarios

For Valtik clients

Valtik's cloud security audits include Terraform state file exposure review as a standard component:

  • Inventory of state files across environments
  • Access control review of state backends
  • Git repository scanning for historical state file commits
  • CI/CD artifact review for state file exposure
  • State file content review for sensitive data exposure
  • Remediation roadmap with prioritization

For organizations using Terraform at scale, this review alone typically identifies findings that would be catastrophic if exploited. Reach out via https://valtikstudios.com.

The honest summary

Terraform state files are the backbone of modern infrastructure management and the weakest link in secret management for a significant fraction of organizations using them. The protections aren't hard. They require discipline: no state in Git, remote state with appropriate access controls, no secrets in state where avoidable.

The alternative is the dynamic where a lost laptop, a misconfigured S3 bucket, or a public GitHub repo leak gives an attacker your production infrastructure plus the keys to open its front door. This has happened to organizations you've heard of. The ones it hasn't happened to publicly include a lot that haven't noticed yet.

Audit your state file posture before someone else does.

Sources

  1. Terraform State Documentation
  2. Terraform Backend Configuration
  3. AWS Secrets Manager Integration with RDS
  4. Terraform Cloud
  5. External Secrets Operator
  6. Gitleaks Secret Scanning
  7. trivy IaC Security Scanner
  8. HashiCorp Security Best Practices
  9. AWS S3 Security Best Practices
  10. State Best Practices. Terraform Up & Running

Want us to check your Terraform setup?

Our scanner detects this exact misconfiguration, plus dozens more across 38 platforms. Free website check available, no commitment required.
