Terraspace all up and Terraform local-file Resource

I’ve been working with Terraspace for a few days and it is terrific. I love the inter-stack dependencies, the automatic tf-state backend creation, the out-of-the-box Azure and AWS plugins, and a lot more. I ran into an issue with Terraform local-file creation. When terraspace all up is used to build stack-based resources, e.g. my main.tf is under the /app/stacks// directory and has a Terraform local_file resource like this:

resource "local_file" "akskubeconfig3" {
  depends_on = [
    module.aks
  ]
  content  = "TEST Random String in current directory"
  filename = "${var.maz_shortprefix}_kubecontxt"
}

When using terraspace all up, this file is not seen or created anywhere in the .terraspace-cache or any local directory. However, if I do terraspace init and build and then use plain terraform init, plan, and apply within the stack/mod directory, it does generate the file. Is there anything that I’m doing incorrectly? Or are local file outputs not supported by Terraspace?

Unsure. I tested by adding some code to the example files.

resource "local_file" "foo" {
  content     = "foo!"
  filename = "${path.module}/foo.bar"
}

Deployed:

$ terraspace up demo -y
Building .terraspace-cache/us-west-2/dev/stacks/demo
Built in .terraspace-cache/us-west-2/dev/stacks/demo
Current directory: .terraspace-cache/us-west-2/dev/stacks/demo
=> terraform plan -input=false -out /tmp/terraspace/plans/demo-d823004442d6bc3a7a2d7c8e4d4d580c.plan
random_pet.this: Refreshing state... [id=correct-urchin]
local_file.foo: Refreshing state... [id=4bf3e335199107182c6f7638efaad377acc7f452]
module.bucket.aws_s3_bucket.this: Refreshing state... [id=bucket-correct-urchin]

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # local_file.foo will be created
  + resource "local_file" "foo" {
      + content              = "foo!"
      + directory_permission = "0777"
      + file_permission      = "0777"
      + filename             = "./foo.bar"
      + id                   = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

This plan was saved to: /tmp/terraspace/plans/demo-d823004442d6bc3a7a2d7c8e4d4d580c.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "/tmp/terraspace/plans/demo-d823004442d6bc3a7a2d7c8e4d4d580c.plan"

=> terraform apply -auto-approve -input=false /tmp/terraspace/plans/demo-d823004442d6bc3a7a2d7c8e4d4d580c.plan
local_file.foo: Creating...
local_file.foo: Creation complete after 0s [id=4bf3e335199107182c6f7638efaad377acc7f452]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

bucket_name = "bucket-correct-urchin"
Time took: 3s
$ 

The foo.bar file is created:

$ cat .terraspace-cache/us-west-2/dev/stacks/demo/foo.bar
foo!

Hi @tung - first off, thank you for looking into this issue. I fully agree with your findings above. I even verified them by using not only terraspace up demo but also terraspace all up (abbreviated below as TAU). The difference between the demo case above and my case is that there is only one stack to build in the above. So I created another demo case that has two stacks: demo creates foofile, demo2 creates barfile. I tried the following test cases:

  1. No dependency “wiring” between the demo and demo2 stacks. Executed TAU, and both the foo and bar files were created successfully.
  2. demo and demo2 were “wired together” (per https://terraspace.cloud/docs/dependencies/deploy-multiple/, roughly as in the sketch after this list), and then TAU was executed. The foo file is not found, but the bar file is. Something is probably rebuilding the .terraspace-cache directory between dependent stack “batch runs” that removes the earlier stack’s local files…
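For reference, the wiring followed the documented tfvars pattern and looked roughly like this; the output and variable names below are only placeholders for this sketch, not my exact code:

# app/stacks/demo/outputs.tf
output "foofile_name" {
  value = local_file.foo.filename
}

# app/stacks/demo2/tfvars/dev.tfvars
# Referencing demo's output here is what wires the dependency,
# so terraspace all up runs demo in batch 1 and demo2 in batch 2.
demo_filename = <%= output("demo.foofile_name") %>

# app/stacks/demo2/variables.tf
variable "demo_filename" {
  type = string
}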

$ terraspace all up
Building one stack to build all stacks
Building .terraspace-cache/us-east-2/dev/stacks/demo2
Downloading tfstate files for dependencies defined in tfvars…
Built in .terraspace-cache/us-east-2/dev/stacks/demo2
Will run:
terraspace up demo # batch 1
terraspace up demo2 # batch 2
Are you sure? (y/N) y
Batch Run 1:
Running: terraspace up demo Logs: log/up/demo.log
terraspace up demo: Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Batch Run 2:
Running: terraspace up demo2 Logs: log/up/demo2.log
terraspace up demo2: Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Time took: 18s

foofile (demo stack) is not found

$ cat .terraspace-cache/us-east-2/dev/stacks/demo/
.terraform/ backend.tf outputs.tf variables.tf
.terraform.lock.hcl main.tf provider.tf
$ cat .terraspace-cache/us-east-2/dev/stacks/demo/foofile
cat: .terraspace-cache/us-east-2/dev/stacks/demo/foofile: No such file or directory

barfile (demo2 stack) is found

$ cat .terraspace-cache/us-east-2/dev/stacks/demo2/barfile
bar!

I see. This is because terraspace all cleans out .terraspace-cache between each batch run. Here’s an example repo that details the debugging:

tongueroo/terraspace-debug-local-file

In short, you can set config.build.clean_cache = false

Terraspace.configure do |config|
  config.logger.level = :info
  config.test_framework = "rspec"
  config.build.clean_cache = false
end

I’m unsure though whether Terraform stacks should rely on local files from other stacks. It somewhat feels weird, but maybe I am missing something. Hope that helps!

I was just about to post a new discussion, but this seems similar enough. I am running into a similar issue where I want to use one stack (a files stack) to build a dynamic.tf file in another stack, not in the Terraspace cache directory but in the app/stacks/ directory, using a Terraform local_file resource. This is to get around the dynamic provider issue, since I want to deploy resources in multiple Azure subscriptions.
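For context, the local_file in the files stack is along these lines; the target stack name and the relative path here are only placeholders for illustration:

resource "local_file" "dynamic_tf" {
  # Placeholder relative path: intended to land in the target stack's
  # source directory (app/stacks/real), not in whatever build directory
  # the files stack happens to run from.
  filename = "${path.module}/../real/dynamic.tf"
  content  = <<-EOT
    # generated provider block(s), one per Azure subscription
  EOT
}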

The files stack builds the dynamic.tf in the Terraspace cache directory, but when I run the real stack, these files get removed.

I was able to run vanilla terraform against the files stack, and it created the dynamic.tf files in the app/stacks directory, but that doesn’t feel right.

I will try disabling clean_cache. Will that cause any other problems?

None that I can think of. I cleaned out the cache by default because it seems cleaner. Though I’m unsure.