Dependencies/linking output between multiple projects

Hi Tung and BoltOps community,

I am aware that we can use the output helper to configure dependencies between different stacks. But what about dependencies on stacks in, e.g., another project?

For example, if vpc is a stack and I have a PROD project (with the vpc stack) and an NPROD project (with the vpc stack), how would I reference the PROD vpc when I’m running the vpc stack in the NPROD project? Note that both the PROD and NPROD vpc stacks would use the exact same code, with the data (that feeds into the code) being the differentiator between the two.

I guess a use case for this would be something like

  • Create a vpc in PROD
  • Create a vpc in NPROD
  • Create VPC peering between PROD vpc and NPROD vpc

What I would like to avoid is something like manually recording the VPC IDs and manually specifying them for the VPC peering module. How would/could this be done if the PROD and NPROD projects are separate? Or are there other suggestions on how I could do this better?
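For concreteness, the peering step needs both VPC IDs (and the peer account ID) as inputs. A minimal sketch of such a peering stack, with hypothetical variable names and the standard `aws_vpc_peering_connection` resource, might look like this; ideally both IDs would come from the two vpc stacks' outputs rather than being recorded by hand:

```hcl
# Sketch only - variable names are assumptions, not Terraspace conventions.
variable "prod_vpc_id" {}
variable "nprod_vpc_id" {}
variable "prod_account_id" {}

resource "aws_vpc_peering_connection" "prod_to_nprod" {
  vpc_id        = var.nprod_vpc_id  # requester (NPROD side)
  peer_vpc_id   = var.prod_vpc_id   # accepter (PROD side)
  peer_owner_id = var.prod_account_id
}
```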

Many thanks,


Just been thinking of this again from another angle… At the moment, we have the following structure

  |--> stacks
           |--> vpc

And then we would pass data for prod (during the prod build) and data for nonprod (during the nonprod build), thereby keeping the code DRY…

But to work around the challenge, would the following structure be supported?

  |--> stacks
           |--> prod
                   |--> vpc
           |--> nonprod
                   |--> vpc

While the code for prod/nonprod would still be exactly the same (and data-driven), the nesting/repeat of the stacks might allow us to reference the output for prod stack in the nonprod stack (or vice versa)?
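If that layout worked, the idea would be for the nonprod tfvars to reference the prod stack's output via the helper, along these lines (the file path and variable name here are just illustrative assumptions):

```hcl
# app/stacks/nonprod/vpc/tfvars/base.tfvars (hypothetical path/variable)
peer_vpc_id = "<%= output('prod/vpc.vpc_id') %>"
```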

In fact, I had a quick go at it… and while the following seemed to work:

terraspace up prod/vpc
terraspace up nonprod/vpc

chaining them using the output helper didn’t seem to work (or maybe I’ve just done it wrongly).

I’ve tried the following, for example (with what each evaluates to shown on the line below it):

<%= output("prod/vpc.vpc_id") %>

vpc = "(Output vpc could not be looked up for the nonprod/vpc tfvars file. prod/vpc stack needs to be deployed)"


<%= output("prod.vpc.vpc_id") %> 

vpc = "(Output vpc was not found for the nonprod/vpc tfvars file. Either prod stack has not been deployed yet or it does not have this output: s3. Also, if local backend is being used and has been removed/cleaned, then it will also result zero-byte state.json with the 'terraform state pull' used to download the terraform state and output will not be found.)"

Many thanks,

Wondering if it might be out of scope for Terraspace. Experimented with it a long time ago, and it was quite complex. Not ruling it out, just unsure whether the trade-off of adding that implementation complexity to the Terraspace core is worth it. Would consider and review experimental PRs to get a better idea. Of course, no sweat either way.

Related community post: “Handling multiple providers/accounts/roles/regions”. In that post, I mention:

Also, I did an interview with Anton B; he explains it pretty clearly: “We still have makefiles, we still have shell”. Here’s the video at the specific time:

So maybe it can be handled higher up with something like a pipeline.

Also related: “Add support for multiple directory levels within stacks” #214. Will consider PRs for this also.

Hi Tung,

Understood, and thanks for your input.

The question was more to get a feel of how dependencies between separate projects/accounts/envs/remote backend could be handled by terraspace.

I’ve yet to try it out, but I think we might be able to get around it by querying the vpc using a data block and specifying a provider for the other AWS account. What Terraspace has allowed us to do is template the providers using ERB (which worked around the Terraform limitation with “dynamic” providers, without having to hardcode each provider). So, rather than using a mechanism similar to Terraspace’s output helper (which would also bypass the need to create dependencies), as long as we know the data to query (and, like you said, that could be something within the pipeline), it would still be possible to do.
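A minimal sketch of that data-block approach, assuming a cross-account read role and a tagging convention on the PROD VPC (the role ARN, account ID, region, and tag values below are all placeholders):

```hcl
# Hypothetical aliased provider for the PROD account,
# looked up from within the NPROD project.
provider "aws" {
  alias  = "prod"
  region = "us-east-1"  # assumed region

  assume_role {
    role_arn = "arn:aws:iam::111111111111:role/terraform-read"  # hypothetical role
  }
}

# Find the PROD VPC by tag instead of by recorded ID.
data "aws_vpc" "prod" {
  provider = aws.prod

  tags = {
    Environment = "prod"  # assumed tagging convention
  }
}

# data.aws_vpc.prod.id can then feed the peering resource directly,
# with no dependency on the PROD project's remote state or outputs.
```

The appeal of this over chaining outputs is that there is no coupling to the other project's state file; the trade-off is that it relies on the lookup criteria (tags here) being stable and unambiguous.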

It might still be a while before I attempt this, but I guess watch this space :slight_smile:

Many thanks,