Hi, firstly, thanks for Terraspace — it looks to be a significant improvement over other Terraform frameworks.
We are using shared resources within our AWS environment. In particular, we have a single network account that holds all the networking resources, and these are then shared with our environment accounts such as development, test, etc. This gives us greater control of the resources, but it presents a problem: how do we map their use onto Terraspace and its stack concepts?
Ideally we would like a "core" network stack, with an environment-specific stack layered on top of it, which the environment resources then use. The core stack would, however, be shared across all environments, and ideally the state files associated with the environment-specific stack would be stored in the network account.
When creating the infrastructure with multiple stacks, my understanding is that the same S3 bucket is used for all stacks within the application for a given value of TS_ENV, etc. Is that correct? Is it possible to switch the bucket for each stack?
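For concreteness, what we're imagining is branching inside `config/terraform/backend.tf` (an untested sketch — the bucket names are placeholders, and we're assuming the `:MOD_NAME` expansion variable resolves to the current stack's name):

```hcl
# config/terraform/backend.tf -- ERB is evaluated once per stack at build time.
# Bucket names below are hypothetical.
terraform {
  backend "s3" {
    <% if expansion(':MOD_NAME') == "network" %>
    bucket = "my-network-account-tfstate"                   # bucket in the network account
    <% else %>
    bucket = "my-<%= expansion(':ENV') %>-account-tfstate"  # per-environment bucket
    <% end %>
    key     = "<%= expansion(':REGION/:ENV/:BUILD_DIR/terraform.tfstate') %>"
    region  = "<%= expansion(':REGION') %>"
    encrypt = true
  }
}
```

We haven't verified this works, but since the backend config is an ERB template it seems like a plausible place to make the switch.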
Ideally we would like the state for the network resources to live in a bucket controlled by the network account, and the environment (say dev) state files to live in a bucket in the development account. We manage the account switch in the Terraform code by using multiple profiles and passing them as provider aliases to the modules.
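Roughly, the provider wiring we use for the account switch looks like this (region, profile, and module names are placeholders):

```hcl
# Default provider for the environment account.
provider "aws" {
  region  = "eu-west-1"
  profile = "dev"
}

# Aliased provider for the shared network account.
provider "aws" {
  alias   = "network"
  region  = "eu-west-1"
  profile = "network"
}

# Pass the aliased provider into a module so its resources are created
# (or looked up) in the network account.
module "vpc_lookup" {
  source = "./modules/vpc_lookup"
  providers = {
    aws = aws.network
  }
}
```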
Equally, we would like to pass the IDs of the network resources to the environment stacks using the output helper and dependency logic built into Terraspace, rather than having to redefine them in the environment resources.
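That is, we'd like to keep using the standard same-TS_ENV pattern, something like (stack and output names illustrative):

```hcl
# app/stacks/app/tfvars/dev.tfvars
# Pulls vpc_id from the network stack's state and registers app -> network
# as a dependency for `terraspace all`.
vpc_id = <%= output('network.vpc_id') %>
```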
We are struggling to find a way to manage the split: either we define the full application stack, but then the state files all go into, say, the dev account; or we separate networking out and run the two independently.
I’d be interested in any suggestions, maybe we are overcomplicating it?
Think it makes sense to be able to have a "core" stack that handles things like the network stack, with other stacks able to reference the core stack's outputs.
It gets tricky when the core stack is in another TS_ENV and even trickier when it’s in another AWS account.
Unsure how to handle this yet. That's one of the reasons the output helper method currently only supports the same TS_ENV: it keeps the complexity down, in particular when it comes to resolving the dependency graph for the terraspace all command. Somewhat related thoughts here:
Talking about an env: common that possibly skips dependency graph calculation. That way you could at least access the Terraform statefile and outputs programmatically. Orchestration and the dependency graph would be skipped, though.
Also thinking you can define your own custom helper:
The custom helper could load the Terraform statefile for outputs, call the aws-sdk, or be manually updated with your core output values. The latter is the quickest and easiest, but also the most non-ideal.
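A minimal sketch of the statefile-reading variant (the module name and path follow the custom-helper convention; the default statefile path is a placeholder, and you'd fetch the file from S3 yourself, e.g. with `aws s3 cp` or the aws-sdk):

```ruby
# config/helpers/core_helper.rb
require "json"

# In a real project this module lives under the Terraspace::Project
# namespace per the custom-helper convention; the empty nesting below
# just makes this sketch runnable standalone.
module Terraspace; module Project; end; end

module Terraspace::Project::CoreHelper
  # Reads a Terraform statefile that has already been fetched locally
  # and returns the value of a named output, or nil if absent.
  def core_output(name, statefile: "/path/to/core/terraform.tfstate")
    state = JSON.parse(File.read(statefile))
    state.dig("outputs", name, "value")
  end
end
```

Then in a tfvars file you could call `<%= core_output('ebs_default_kms_arn') %>`. Note this bypasses the dependency graph entirely, so ordering is on you.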
This is a tough one. Shared TS_ENV dependencies are a decent amount of effort and may create too much complexity. Unsure; will dig into this one some more. Will consider ideas and PRs. Of course, no sweat either way.
Seriously great product. Thanks to a colleague of mine, I started using it as well in our projects, and even though it takes some getting used to, the framework idea is great.
I'm looking for something similar to this; hence I've gone on an archaeological hunt through the community and dug up this old thread. My needs are much the same as those described here.
I have a project running in a single AWS account with multiple environments.
One I'd call 'core', as it deals with account-wide configuration items like compliance, security, and backup. Everything else lives in specific environments that I'd call 'dev', 'tst', 'acc', 'prd'.
I would like the ability to read an output from my core, since it has global resources that I don't want to implement 4+ times just because I have different TS_ENVs. It's a shared resource for a reason.
I would like to refer to it from another TS_ENV environment's tfvars like:
variable = <%= global.core.output('compliance.ebs_default_kms_arn') %>
Is there a path forward other than using hardcoded values in my tfvars?
Where I need to reference resources created elsewhere, I use a separate stack that only performs data lookups. For example, our networking is managed in a single account using resource shares and a different Terraspace repository. In our application repo there is a networking stack that does the data lookups and outputs the network resources for that environment (based on TS_ENV) for other stacks to use.
This way there is a single point in the application stacks where names are managed, rather than multiple scattered data lookups. It's not perfect, but it works well for us.
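A rough sketch of what such a lookup stack can look like (tag names and values are placeholders; we key ours off TS_ENV, assuming the stack's .tf files go through Terraspace's ERB processing):

```hcl
# app/stacks/networking/main.tf
# Looks up the shared VPC and subnets by tag and re-exposes them as
# outputs for other stacks to consume via the output helper.
data "aws_vpc" "shared" {
  tags = {
    Name = "shared-<%= Terraspace.env %>"  # placeholder tag value
  }
}

data "aws_subnets" "private" {
  filter {
    name   = "vpc-id"
    values = [data.aws_vpc.shared.id]
  }
  tags = {
    Tier = "private"
  }
}

output "vpc_id" {
  value = data.aws_vpc.shared.id
}

output "private_subnet_ids" {
  value = data.aws_subnets.private.ids
}
```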