Executing terraform plans

Hi BoltOps Community,

I would like to get some opinions/advice on how best to execute a Terraform plan…

For example, let’s imagine I have 2 stacks

  • vpc
  • subnet

To generate the plan file, I could run

  • terraspace plan vpc --out vpc.plan
  • terraspace plan subnet --out subnet.plan

For now, let’s assume we’re running the stacks individually (and not using e.g. terraspace all).

Firstly, where should I output the .plan file? Should I, for example, output it to the .terraspace-cache/<region>/<env>/stacks/<stack_name> folder?

Secondly, how would I apply the plan? Should I run, e.g., terraform apply vpc.plan (and would I need to be in the .terraspace-cache/<region>/<env>/stacks/vpc folder to run it)? Or should I use terraspace up vpc instead?

To give a bit of context: we would like the plan file to be evaluated/approved. Once it is approved, we ideally want to apply exactly what’s in that plan file, rather than generating a new plan and applying that. Arguably, the new plan should be identical if nothing has changed, but we want to be sure we’re applying the plan we reviewed. Running terraspace up would regenerate a new plan file, yet we would still like to drive the process through terraspace for consistency (and to get the benefits of the lockfile, automatic backend creation, etc.).
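For what it’s worth, the kind of review gate we have in mind looks roughly like this (a sketch only; the terraspace commands are stubbed out with plain file operations so the snippet is self-contained, and sha256sum is just one way to pin the reviewed artifact):

```shell
# Sketch of a review gate (assumed workflow, not a Terraspace feature):
# record a checksum of the plan file at review time, then verify it is
# unchanged just before applying.
mkdir -p tmp
echo "reviewed plan contents" > tmp/vpc.plan    # stand-in for: terraspace plan vpc -o tmp/vpc.plan
sha256sum tmp/vpc.plan > tmp/vpc.plan.sha256    # record at review/approval time
# ... review and approval happen here ...
if sha256sum -c tmp/vpc.plan.sha256 >/dev/null 2>&1; then
  echo "plan unchanged since review"            # safe to run: terraspace up vpc --plan tmp/vpc.plan
else
  echo "plan changed since review, aborting" >&2
  exit 1
fi
```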

The next question… Would the answers to the above change if we’re running terraspace all (for both stacks), if that’s possible at all? E.g., is there a way to get the plan output for both stacks? Looking at the documentation, terraspace all does not seem to support a --out argument… And if it does produce a plan, how would it get executed?

I’d love to hear your thoughts on this.

Many thanks,

RE: Firstly, where should I output the .plan file?

It’s up to you. To explain, here’s a summary of the commands:

terraspace new project infra --examples
cd infra
terraspace clean all -y
terraspace plan demo -o tmp/a.plan
terraspace up demo --plan tmp/a.plan

The terraspace plan demo -o tmp/a.plan command copies the plan file back to the Terraspace project root folder, i.e.:

mkdir -p tmp
cp .terraspace-cache/us-west-2/dev/stacks/demo/tmp/a.plan tmp/a.plan

Terraspace source code where it does this: https://github.com/boltops-tools/terraspace/blob/17f196d67550fc159ffef79138ffa140c4d05d7f/lib/terraspace/terraform/ihooks/after/plan.rb (ihooks stands for internal hooks).

The terraspace up demo --plan tmp/a.plan command copies the plan file back into the .terraspace-cache folder.
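To make the two copy directions concrete, here is a sketch that mimics them with plain mkdir/cp (the region/env path and the plan contents are illustrative stand-ins; Terraspace performs these copies for you):

```shell
# Assumed cache layout for a stack named "demo" in us-west-2/dev.
CACHE_DIR=.terraspace-cache/us-west-2/dev/stacks/demo
mkdir -p "$CACHE_DIR/tmp" tmp

# 1) terraspace plan demo -o tmp/a.plan: terraform writes the plan inside
#    the cache dir, then terraspace copies it back to the project root.
echo "fake plan" > "$CACHE_DIR/tmp/a.plan"   # stand-in for terraform's binary plan
cp "$CACHE_DIR/tmp/a.plan" tmp/a.plan

# 2) terraspace up demo --plan tmp/a.plan: terraspace copies the plan from
#    the project root back into the cache dir before terraform apply runs.
cp tmp/a.plan "$CACHE_DIR/tmp/a.plan"
```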

Terraspace source code where it does this:

Debugging session:
$ terraspace clean all -y
Removed .terraspace-cache
Removed /tmp/terraspace
$ terraspace plan demo -o tmp/a.plan
Building .terraspace-cache/us-west-2/dev/stacks/demo
Built in .terraspace-cache/us-west-2/dev/stacks/demo
Current directory: .terraspace-cache/us-west-2/dev/stacks/demo
=> terraform init -get -input=false >> /tmp/terraspace/log/init/demo.log
=> terraform plan -input=false -out tmp/a.plan

Terraform used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:
  # random_pet.this will be created
  + resource "random_pet" "this" {
      + id        = (known after apply)
      + length    = 2
      + separator = "-"
    }

  # module.bucket.aws_s3_bucket.this will be created
  + resource "aws_s3_bucket" "this" {
      + acceleration_status         = (known after apply)
      + acl                         = "private"
      + arn                         = (known after apply)
      + bucket                      = (known after apply)
      + bucket_domain_name          = (known after apply)
      + bucket_regional_domain_name = (known after apply)
      + force_destroy               = false
      + hosted_zone_id              = (known after apply)
      + id                          = (known after apply)
      + region                      = (known after apply)
      + request_payer               = (known after apply)
      + tags_all                    = (known after apply)
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)

      + versioning {
          + enabled    = (known after apply)
          + mfa_delete = (known after apply)
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + bucket_name = (known after apply)


Saved the plan to: tmp/a.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "tmp/a.plan"
$ terraspace up demo --plan tmp/a.plan
Building .terraspace-cache/us-west-2/dev/stacks/demo
Built in .terraspace-cache/us-west-2/dev/stacks/demo
Current directory: .terraspace-cache/us-west-2/dev/stacks/demo
=> terraform apply -input=false tmp/a.plan
random_pet.this: Creating...
random_pet.this: Creation complete after 0s [id=fleet-spider]
module.bucket.aws_s3_bucket.this: Creating...
module.bucket.aws_s3_bucket.this: Creation complete after 1s [id=bucket-fleet-spider]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.


Outputs:

bucket_name = "bucket-fleet-spider"
Time took: 4s

RE: Would the answer to the above questions change if we’re now running terraspace all (for both stacks) - if it’s possibly at all?

Currently, the plan option is not supported for terraspace all. It’s come up a few times, and I would like to see it added. It would probably use some type of conventional plan name. Unsure when I’ll get to it, but will consider PRs. Of course, no sweat either way.
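Until that lands, one workaround is a small wrapper that plans each stack individually with a conventional plan filename, e.g. tmp/<stack>.plan. A sketch (the stack list is illustrative, and the terraspace call is echoed rather than executed so the snippet runs standalone):

```shell
# Plan each stack separately with a conventional output name, since
# `terraspace all plan` does not accept -o. Replace the echo/tee with a
# real terraspace invocation in practice.
mkdir -p tmp
: > tmp/plan-commands.txt
for stack in vpc subnet; do
  echo "terraspace plan $stack -o tmp/$stack.plan" | tee -a tmp/plan-commands.txt
done
```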

Hi Tung,

I think I’ll go with this

terraspace plan demo -o tmp/a.plan
terraspace up demo --plan tmp/a.plan

Thanks for the quick and useful response as always :smiley:


Hi Tung,

Happy New Year, and hope you’ve had a good festive season break.

Just revisiting an old thread - in the earlier posts, you mentioned that terraspace all plan wasn’t a supported option then (but it is now)… Can you tell me where the plan file goes (or how I can save the plan output) when doing a terraspace all plan?

Many thanks,

It’s a subtle difference. Pass-through options are now generally better supported for individual commands. The -o option was already supported for plan and up before those improvements.

Saving the plan output from terraspace all plan is different, though, and is still not yet supported. Will add support as part of this issue:


Thanks Tung,

I think I got myself mixed up between support for running terraspace all plan vs. getting the plan output from terraspace all plan.

All good now, thanks :slight_smile: