Dealing with shared cross-account resources in AWS

Hi, firstly thanks for terraspace it looks to be a significant improvement over other terraform frameworks.

We are using shared resources within our AWS environment. In particular, we have a single network account that holds all the networking resources, which are then shared with our environment accounts such as development, test, etc. This gives us greater control over the resources, but it presents a problem in how to map their use onto Terraspace and its stack concepts.

Ideally we would like a “core” network stack, with an environment-specific stack layered on top of it, which is then used by the environment resources. The core stack would, however, be shared across all environments, and ideally the state files associated with the environment-specific stack would be stored in the network account.

When creating the infrastructure with multiple stacks, my understanding is that the same S3 bucket is used for all stacks within the application for a given value of TS_ENV. Is that correct? Is it possible to switch the bucket for each stack?
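For reference, I believe the generated backend config comes from an ERB template along these lines in config/terraform/backend.tf (the naming pattern here is illustrative, not necessarily the exact default):

```hcl
terraform {
  backend "s3" {
    bucket         = "<%= expansion('terraform-state-:ACCOUNT-:REGION-:ENV') %>"
    key            = "<%= expansion(':REGION/:ENV/:BUILD_DIR/terraform.tfstate') %>"
    region         = "<%= expansion(':REGION') %>"
    encrypt        = true
    dynamodb_table = "terraform_locks"
  }
}
```

So the bucket seems to vary only by account/region/TS_ENV, with each stack getting its own key. Since the template is ERB, presumably the bucket could be switched per stack with some custom logic, but I'm not sure whether that is supported or recommended.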

Ideally we would like the state for the network resources to be in a bucket controlled by the network account, and the environment (say dev) state files to be in a bucket in the development account. We manage the account switch in the Terraform code by using multiple profiles and passing them as provider aliases to the modules.
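Roughly like this (profile names and region are just placeholders for our setup):

```hcl
provider "aws" {
  alias   = "network"
  profile = "network"   # network account
  region  = "eu-west-1"
}

provider "aws" {
  profile = "dev"       # environment account
  region  = "eu-west-1"
}

module "app" {
  source = "../../modules/app"

  providers = {
    aws         = aws
    aws.network = aws.network
  }
}
```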

Equally, we would like to pass the IDs of the network resources to the environment using the output macro and dependency logic built into Terraspace, rather than having to redefine them in the environment resources.
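In other words, something like this in an environment stack's tfvars is what we have in mind (stack and output names are hypothetical):

```hcl
# app/stacks/app/tfvars/dev.tfvars
vpc_id     = <%= output('network.vpc_id') %>
subnet_ids = <%= output('network.private_subnet_ids') %>
```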

We are struggling to find a way to manage the split: either we define the full application stack, but then the state files all go into, say, the dev account; or we separate networking and run it independently.

I’d be interested in any suggestions, maybe we are overcomplicating it?

Thanks

John

Thanks for the kind words.

Think it makes sense to be able to have a “core” stack that handles things like the network, and other stacks can reference the core stack’s outputs.

It gets tricky when the core stack is in another TS_ENV and even trickier when it’s in another AWS account.

Unsure how to handle this yet. That’s one of the reasons the output helper method currently only supports the same TS_ENV. This keeps the complexity down, in particular when it comes to resolving the dependency graph for the `terraspace all` command. Somewhat related thoughts here:

Talking about an `env: common` that possibly skips dependency graph calculation. That way the terraform statefile and outputs could at least be accessed programmatically. Orchestration and the dependency graph would be skipped, though.


Also thinking you can define your own custom helper:

The custom helper could load the terraform statefile for outputs, call the aws-sdk, or be manually updated with your core output values. The latter is the quickest and easiest, but also the most non-ideal.
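Here’s a rough sketch of that idea, reading the core stack’s statefile straight out of the network account’s bucket with the aws-sdk. The bucket, key, profile, and output names are all made up, and double-check the custom helpers docs for the exact module naming convention:

```ruby
# config/helpers/custom_helper.rb
require "aws-sdk-s3"
require "json"

module Terraspace::Project::CustomHelper
  # Look up an output value from the core network stack's statefile,
  # which lives in a bucket owned by the network account.
  def network_output(name)
    s3 = Aws::S3::Client.new(
      profile: "network", # profile with read access to the network account bucket
      region:  "eu-west-1"
    )
    resp = s3.get_object(
      bucket: "network-account-terraform-state", # made-up bucket name
      key:    "network/core/terraform.tfstate"   # made-up state key
    )
    state = JSON.parse(resp.body.read)
    state.dig("outputs", name.to_s, "value")
  end
end
```

Then a tfvars file can use `vpc_id = <%= network_output('vpc_id') %>`. You lose the ordering and dependency graph that `terraspace all` gives you, but at least the values aren’t hard-coded.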

This is a tough one. Cross-TS_ENV dependencies are a decent effort and may create too much complexity. Unsure whether I’ll dig into this one more. Will consider ideas and PRs. Of course, no sweat either way :grin: