Hi Tung and Boltops Community,
In a parallel universe, `terraspace all down` used to work for me when I had multiple dependent stacks. However, I'm now using terraspace 1.1.3, and running `terraspace all down` gives me errors, which I suspect is due to Terraspace not building the dependent stacks (admittedly, the number of stacks in the project has grown since I last used `terraspace all down`).
So my theory is this: when we have dependent stacks, `terraspace all down` still works out the dependency order correctly. But I don't think it builds the dependent stacks' folders in the cache, and therefore the current stack won't destroy properly, because it fails to resolve the variables that reference the dependent stacks' outputs.
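To make that concrete, here's roughly how I'm picturing it (the paths and the ERB line are taken from the error output further down; I haven't dug into Terraspace's internals, so treat this as a guess):

```sh
# After a normal `terraspace all up`, every stack gets materialized here:
ls /tmp/SITSv3/.terraspace-cache/ap-southeast-2/dev/stacks/

# During `terraspace all down`, cloudfront itself gets built into the cache,
# but its dependency s3 apparently does not. So evaluating this line in
# cloudfront's tfvars:
#
#   bucket_testing_bucket_only_jalam = <%= output("s3.bucket_testing_bucket_only_jalam") %>
#
# raises Errno::ENOENT for the missing s3 cache directory.
```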
Here is the output of running `terraspace all graph --format text` on our project:
```
Building graph...
.
├── appstream
│   ├── network
│   ├── s3
│   └── securitygroups
│       └── network
├── asg
│   ├── compute
│   │   ├── network
│   │   ├── s3
│   │   └── securitygroups
│   │       └── network
│   ├── securitygroups
│   │   └── network
│   ├── network
│   └── loadbalancer
│       ├── network
│       ├── s3
│       ├── compute
│       │   ├── network
│       │   ├── s3
│       │   └── securitygroups
│       │       └── network
│       └── securitygroups
│           └── network
├── cloudfront
│   ├── s3
│   ├── loadbalancer
│   │   ├── network
│   │   ├── s3
│   │   ├── compute
│   │   │   ├── network
│   │   │   ├── s3
│   │   │   └── securitygroups
│   │   │       └── network
│   │   └── securitygroups
│   │       └── network
│   ├── acm
│   │   └── route53
│   │       ├── network
│   │       ├── compute
│   │       │   ├── network
│   │       │   ├── s3
│   │       │   └── securitygroups
│   │       │       └── network
│   │       └── loadbalancer
│   │           ├── network
│   │           ├── s3
│   │           ├── compute
│   │           │   ├── network
│   │           │   ├── s3
│   │           │   └── securitygroups
│   │           │       └── network
│   │           └── securitygroups
│   │               └── network
│   └── lambda
│       └── s3
├── efs
│   ├── network
│   └── securitygroups
│       └── network
├── iam_role
└── waf
```
Below is the output of running `terraspace all down --yes`, including the error messages (I've omitted the output from batch 3 onwards, as it is very similar to the output of batches 1 and 2):
```
Running:
terraspace down cloudfront     # batch 1
terraspace down acm            # batch 2
terraspace down asg            # batch 3
terraspace down route53        # batch 3
terraspace down loadbalancer   # batch 4
terraspace down appstream      # batch 5
terraspace down compute        # batch 5
terraspace down efs            # batch 5
terraspace down securitygroups # batch 6
terraspace down lambda         # batch 6
terraspace down iam_role       # batch 7
terraspace down network        # batch 7
terraspace down s3             # batch 7
terraspace down waf            # batch 7
Batch Run 1:
Errno::ENOENT: No such file or directory - /tmp/SITSv3/.terraspace-cache/ap-southeast-2/dev/stacks/s3
Error evaluating ERB template around line 6 of: /tmp/SITSv3/app/stacks/cloudfront/tfvars/base.tfvars:
 1
 2
 3
 4
 5
 6 bucket_testing_bucket_only_jalam = <%= output("s3.bucket_testing_bucket_only_jalam") %>
 7
 8
 9
10
11
Original backtrace (last 8 lines):
/usr/lib/ruby/2.7.0/open3.rb:213:in `spawn'
/usr/lib/ruby/2.7.0/open3.rb:213:in `popen_run'
/usr/lib/ruby/2.7.0/open3.rb:101:in `popen3'
/var/lib/gems/2.7.0/gems/terraspace-1.1.3/lib/terraspace/shell.rb:36:in `popen3'
/var/lib/gems/2.7.0/gems/terraspace-1.1.3/lib/terraspace/shell.rb:26:in `shell'
/var/lib/gems/2.7.0/gems/terraspace-1.1.3/lib/terraspace/shell.rb:17:in `run'
/var/lib/gems/2.7.0/gems/terraspace-1.1.3/lib/terraspace/terraform/runner.rb:51:in `block in terraform'
/var/lib/gems/2.7.0/gems/terraspace-1.1.3/lib/terraspace/hooks/builder.rb:25:in `run_hooks'
Re-run with FULL_BACKTRACE=1 to see all lines
Error running: terraspace down cloudfront. Fix the error above or check logs for the error.
Batch Run 2:
Errno::ENOENT: No such file or directory - /tmp/SITSv3/.terraspace-cache/ap-southeast-2/dev/stacks/route53
Error evaluating ERB template around line 6 of: /tmp/SITSv3/app/stacks/acm/tfvars/base.tfvars:
 1
 2
 3
 4
 5
 6 zone_mydomain = <%= output("route53.zone_mydomain") %>
 7
 8
 9
10
11
Original backtrace (last 8 lines):
/usr/lib/ruby/2.7.0/open3.rb:213:in `spawn'
/usr/lib/ruby/2.7.0/open3.rb:213:in `popen_run'
/usr/lib/ruby/2.7.0/open3.rb:101:in `popen3'
/var/lib/gems/2.7.0/gems/terraspace-1.1.3/lib/terraspace/shell.rb:36:in `popen3'
/var/lib/gems/2.7.0/gems/terraspace-1.1.3/lib/terraspace/shell.rb:26:in `shell'
/var/lib/gems/2.7.0/gems/terraspace-1.1.3/lib/terraspace/shell.rb:17:in `run'
/var/lib/gems/2.7.0/gems/terraspace-1.1.3/lib/terraspace/terraform/runner.rb:51:in `block in terraform'
/var/lib/gems/2.7.0/gems/terraspace-1.1.3/lib/terraspace/hooks/builder.rb:25:in `run_hooks'
Re-run with FULL_BACKTRACE=1 to see all lines
Error running: terraspace down acm. Fix the error above or check logs for the error.
....
....
....
```
From the dependency graph, we can see that the `s3` stack is a dependency of the `cloudfront` stack (executed in batch 1). But the message for batch run 1 says: `Errno::ENOENT: No such file or directory - /tmp/SITSv3/.terraspace-cache/ap-southeast-2/dev/stacks/s3`, which leads me to think the directory isn't there because it isn't built when `terraspace all down --yes` runs (i.e. the cache folder is not built/present before `terraspace all down` is run).
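In case it helps narrow things down, the workaround I'm about to try is pre-building the dependency stacks so their cache folders exist before the destroy (untested; `s3` and `route53` are the two dependencies from the errors above):

```sh
# Untested workaround sketch: build the dependency stacks into
# .terraspace-cache first, so the output() references in the tfvars
# can resolve during the destroy.
terraspace build s3
terraspace build route53
terraspace all down --yes
```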
FYI, I also have the following set in `config/app.rb`, in case it makes any difference:
```ruby
Terraspace.configure do |config|
  config.logger.level = :info
  config.test_framework = "rspec"
  config.all.exit_on_fail.down = false
  config.all.exit_on_fail.up = false
end
```
Many thanks,
James