Issue using the output helper

stack1 provides output to stack2.
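
For context, the dependency is expressed with the output helper in stack2's tfvars, roughly like this (the variable and output names below are placeholders, not my real ones):

# app/stacks/stack2/tfvars/base.tfvars
# stack2 reads an output from stack1's state via the terraspace output helper
vpc_id = <%= output('stack1.vpc_id') %>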

Removing any stack2 dependency from base.tfvars makes everything work without issue.
For a while everything also worked well with the dependencies in stack2's base.tfvars,
then I started to get this strange behavior:

Running the up command on stack1:

terraspace up stack1

I get this error:

Building .terraspace-cache/eu-south-1/prod/stacks/stack1
Downloading tfstate files for dependencies defined in tfvars...
Built in .terraspace-cache/eu-south-1/prod/stacks/stack1
=> terraform init -get -input=false >> /tmp/terraspace/log/init/stack1.log
Error: Error inspecting states in the "s3" backend:
    S3 bucket does not exist.
The referenced S3 bucket must have been previously created. If the S3 bucket was created within the last minute, please wait for a minute or two and try again.

Error: NoSuchBucket: The specified bucket does not exist
        status code: 404, request id: ..., host id: ...

Prior to changing backends, Terraform inspects the source and destination states to determine what kind of migration steps need to be taken, if any. Terraform failed to load the states. The data in both the source and the destination remain unmodified. Please resolve the above error and try again.

Error running command: terraform init -get -input=false >> /tmp/terraspace/log/init/stack1.log

Inspecting /tmp/terraspace/log/init/stack1.log, I can see:

Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Initializing the backend...
...
Initializing the backend...
Initializing modules...
Downloading git::...
...
Initializing the backend...
Backend configuration changed!
Terraform has detected that the configuration specified for the backend
has changed. Terraform will now check for existing state in the backends.

Then I tried:

terraspace clean all
terraspace init stack1

Getting this:

Building .terraspace-cache/eu-south-1/prod/stacks/stack1
Downloading tfstate files for dependencies defined in tfvars...

Error: Initialization required. Please see the error message above.
Error running: cd .../.terraspace-cache/eu-south-1/prod/stacks/stack1 && terraform state pull > /tmp/terraspace/remote_state/stacks/stack1/state.json
Please fix the error before continuing

Error: Initialization required. Please see the error message above.
Error running: cd .../.terraspace-cache/eu-south-1/prod/stacks/stack1 && terraform state pull > /tmp/terraspace/remote_state/stacks/stack1/state.json
Please fix the error before continuing
...
Built in .terraspace-cache/eu-south-1/prod/stacks/account-master
Current directory: .terraspace-cache/eu-south-1/prod/stacks/account-master
=> terraform init -get -input=false
Initializing modules...
Downloading git::...
...

Initializing the backend...

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
...
Terraform has been successfully initialized!
...

Running the terraform state pull command manually works:

cd .../.terraspace-cache/eu-south-1/prod/stacks/stack1 && terraform state pull > /tmp/terraspace/remote_state/stacks/stack1/state.json

But through terraspace I get the same error, and I cannot use the output helper anymore.

It seems to be a terraspace bug.

On every terraspace up of stack1, with the output helper defined in stack2, terraspace seems to change stack1's backend configuration in .terraform/terraform.tfstate under the stack1 cache directory. Terraspace does this even when the backend.tf under the stack1 cache is correct and different. Indeed, in .terraform/terraform.tfstate I can see a wrong backend configuration, which causes the errors above.
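
To show where I am looking: the local .terraform/terraform.tfstate in the cache directory records the backend in a section roughly like this simplified excerpt (values below are placeholders), and after a failing run that section no longer matches stack1's backend.tf:

{
  "backend": {
    "type": "s3",
    "config": {
      "bucket": "some-other-bucket",
      "key": "eu-south-1/prod/stacks/stack1/terraform.tfstate",
      "region": "eu-south-1"
    }
  }
}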

NOTE
I use config/terraform/backend.tf for a custom backend definition using the <%= expansion… %> helper (stack2 uses that), but stack1 needs a completely different backend configuration, so I set app/stacks/stack1/config/terraform/backend.tf (both files are sketched after this note).
I suppose that when terraspace runs the dependencies part for stack1 (the command that pulls state from stack1 for stack2), it changes the backend configuration by imposing config/terraform/backend.tf, temporarily corrupting stack1's backend config and going into a loop of backend updates and errors on every run.
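
To make the setup concrete, the two backend files look roughly like this (bucket names and keys are placeholders; the central file follows the usual terraspace expansion pattern):

# config/terraform/backend.tf (central pattern, used by stack2 and the other stacks)
terraform {
  backend "s3" {
    bucket         = "<%= expansion('terraform-state-:ACCOUNT-:REGION-:ENV') %>"
    key            = "<%= expansion(':REGION/:ENV/:BUILD_DIR/terraform.tfstate') %>"
    region         = "<%= expansion(':REGION') %>"
    encrypt        = true
    dynamodb_table = "terraform_locks"
  }
}

# app/stacks/stack1/config/terraform/backend.tf (completely different backend for stack1)
terraform {
  backend "s3" {
    bucket  = "stack1-dedicated-state-bucket"   # placeholder
    key     = "stack1/terraform.tfstate"        # placeholder
    region  = "eu-south-1"
    encrypt = true
  }
}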

Is this a bug, or a problem with how I set up my backend.tf files (an anti-pattern)?

As per "Provide backend in the stack module itself", a backend.tf set in app/stacks/stackname/backend.tf is not overwritten by terraspace:

If an existing backend.rb or backend.tf is in the module’s folder, terraspace will not overwrite it.

But setting the backend in app/stacks/stackname/backend.tf also causes the error described above, so the probability of a bug is higher.

A workaround could probably be to move the state file to the S3 bucket/key that respects the config/terraform/backend.tf pattern, but that seems too restrictive to me.
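
For reference, that move would just be copying the existing state object to the location the central pattern expects and dropping stack1's own backend.tf, something like this (bucket names and keys are placeholders):

aws s3 cp s3://stack1-dedicated-state-bucket/stack1/terraform.tfstate \
    s3://terraform-state-ACCOUNT-eu-south-1-prod/eu-south-1/prod/stacks/stack1/terraform.tfstate
rm app/stacks/stack1/config/terraform/backend.tf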

Another workaround (to be verified):
logic in config/terraform/backend.rb could probably help, sketched below (though I'm not sure the output helper will respect it). I'll wait for help from someone who knows Ruby better than me :thinking:
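
If someone wants to try that route, I imagine it would look something like this minimal sketch. The backend() and expansion() calls should be the terraspace backend DSL (if I read the docs right), but the way the current stack is detected here (an environment variable) is purely hypothetical and would need to be replaced by whatever terraspace actually exposes:

# config/terraform/backend.rb (would replace config/terraform/backend.tf)
# Hypothetical sketch: pick a dedicated backend for stack1, the central pattern otherwise.
# STACK_NAME is NOT a real terraspace variable, only a placeholder for whatever
# mechanism identifies the stack being built.
if ENV['STACK_NAME'] == 'stack1'
  backend("s3",
    bucket:  "stack1-dedicated-state-bucket",   # placeholder
    key:     "stack1/terraform.tfstate",        # placeholder
    region:  "eu-south-1",
    encrypt: true,
  )
else
  backend("s3",
    bucket:         expansion("terraform-state-:ACCOUNT-:REGION-:ENV"),
    key:            expansion(":REGION/:ENV/:BUILD_DIR/terraform.tfstate"),
    region:         expansion(":REGION"),
    encrypt:        true,
    dynamodb_table: "terraform_locks",
  )
end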

I can confirm this is a bug, based on the documentation and given that the workaround of using a single backend.tf config file works (moving the tfstate file to the bucket/key configured in the central config/terraform/backend.tf used by all stacks does not hit the bug)!

Filed a bug report on the terraspace GitHub project: Issue #109