Hi, firstly, thanks for Terraspace; it looks to be a significant improvement over other Terraform frameworks.
We use shared resources within our AWS environment. In particular, we have a single network account that holds all the networking resources, and these are then shared with our environment accounts such as development, test, etc. This gives us greater control over the resources, but it presents a problem in how to map their use onto Terraspace and its stack concepts.
Ideally we would like a “core” network stack, with an environment-specific stack layered on top of it, which is then used by the environment resources. The core stack would, however, be shared across all environments, and ideally the state files associated with the environment-specific stack would be stored in the network account.
When creating the infrastructure with multiple stacks, my understanding is that the same S3 bucket is used for all stacks within the application for a particular value of TS_ENV, etc. Is that correct? Is it possible to switch the bucket for each stack?
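For reference, our understanding of the standard `config/terraform/backend.tf` is roughly the following (bucket and table names here are placeholders, not our real config); the `expansion` variables vary the key per stack, but the bucket itself appears fixed for a given env:

```hcl
terraform {
  backend "s3" {
    # One bucket per account/region/env combination...
    bucket         = "terraform-state-<%= expansion(':ACCOUNT-:REGION-:ENV') %>"
    # ...with the stack name only appearing in the key via :BUILD_DIR.
    key            = "<%= expansion(':REGION/:ENV/:BUILD_DIR/terraform.tfstate') %>"
    region         = "<%= expansion(':REGION') %>"
    encrypt        = true
    dynamodb_table = "terraform_locks"
  }
}
```

What we are unsure of is whether this template can branch on the stack being built, so that the `network` stack resolves to a bucket in the network account while other stacks resolve to the env account's bucket.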
Ideally we would like the state for the network resources to be in a bucket controlled by the network account and the environment (say dev) state files to be in a bucket in the development account. We manage the account switch in the terraform code by using multiple profiles and passing them as provider aliases to the modules.
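To illustrate the account switch we do today in the Terraform code (profile, alias, and module names below are placeholders):

```hcl
# Default provider: the environment account (e.g. dev).
provider "aws" {
  profile = "development"
  region  = "eu-west-1"
}

# Aliased provider: the shared network account.
provider "aws" {
  alias   = "network"
  profile = "network"
  region  = "eu-west-1"
}

# Modules receive both providers explicitly.
module "app" {
  source = "../modules/app"
  providers = {
    aws         = aws
    aws.network = aws.network
  }
}
```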
Equally we would like to pass the IDs of the network resources to the environment using the output macro and dependency logic built into terraspace, rather than having to redefine it in the environment resources.
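Something like the following in a tfvars file is what we have in mind, using the Terraspace `output` helper (stack and output names here are hypothetical):

```hcl
# app/stacks/instance/tfvars/dev.tfvars
# Pull IDs from the shared network stack instead of redefining them.
vpc_id     = <%= output('network.vpc_id') %>
subnet_ids = <%= output('network.private_subnet_ids') %>
```

The open question for us is whether this dependency resolution still works when the `network` stack's state lives in a different bucket, in a different account.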
We are struggling to find a way to manage the split: either we define the full application stack, but then all the state files go into, say, the dev account; or we separate out the networking and run the two independently.
I’d be interested in any suggestions, maybe we are overcomplicating it?