`terraspace build placeholder` slow + building too much?

Hi there,

I’m working on moving our ~290 TFC workspaces (about 90 stacks) over to Terraspace, but I’m struggling with a few things:

As the documentation states, you need to check the .terraspace-cache folder into source control in order for TFC to be able to execute your Terraform code with VCS enabled. Great, no problem. In my case, I manage 10 different AWS accounts, but each account does not use every stack I have available. I need to ensure that any code checked into source control has had terraspace build placeholder run against the appropriate environment, but it should only generate a folder in the cache if the environment actually exists for that stack (i.e., a tfvars file exists for it). Here’s an example:

|-- stacks/
  |-- ec2/
    |-- main.tf
    |-- tfvars/
      |-- dev.tfvars
      |-- qa.tfvars
      |-- prod.tfvars

With this layout, I would only expect environments to be built for dev, qa, and prod. My other environments (uat, stage, etc.) should not have a cache folder built for them, as they don’t actually exist for this stack. How would I go about this?
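One way to approach this (a sketch, not a built-in Terraspace feature): derive the environment list from the tfvars filenames and only invoke the build for environments that actually exist. The `stacks/` paths mirror the example above, and the `echo` stands in for the real command; note this only filters which envs get built at all, since `terraspace build placeholder` still builds every stack for a given env.

```shell
# Hypothetical helper: list the environments that exist for a stack,
# based on its tfvars filenames (dev.tfvars -> dev).
envs_for_stack() {
  for tfvars in "$1"/tfvars/*.tfvars; do
    [ -e "$tfvars" ] || continue   # no tfvars dir -> no environments
    basename "$tfvars" .tfvars
  done
}

# Demo against a throwaway copy of the layout above; echo stands in for
# the real `TS_ENV=$env terraspace build placeholder` call.
demo=$(mktemp -d)
mkdir -p "$demo/stacks/ec2/tfvars"
touch "$demo/stacks/ec2/tfvars/dev.tfvars" \
      "$demo/stacks/ec2/tfvars/qa.tfvars" \
      "$demo/stacks/ec2/tfvars/prod.tfvars"
for stack in "$demo"/stacks/*; do
  for env in $(envs_for_stack "$stack"); do
    echo "TS_ENV=$env terraspace build placeholder"
  done
done | sort -u   # build each env once, even if many stacks share it
rm -rf "$demo"
```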

Second, terraspace build placeholder takes a while to run when I need to run it for all of my environments on each push. I need to make sure that what’s in the cache folder is accurate before checking it into source control, because the PR will trigger a plan in TFC against this folder. Right now, a build for my 10 environments can take a few minutes to complete as it iterates through each one. All I really need is for Terraspace to convert my Terraspace files into actual Terraform, but it seems to be doing a lot more in the background. Can you elaborate on whether there’s a better way to do this? I’m really using Terraspace for its DRY approach and templating, which work great, but generating the actual Terraform takes a while when working at scale.

Here’s an example of how long terraspace build takes, regardless of whether it’s one stack or all stacks via placeholder:

> time TS_ENV=rd terraspace build stack
Building .terraspace-cache/us-east-1/rd/stacks/stack
Built in .terraspace-cache/us-east-1/rd/stacks/stack
real    0m33.366s
user    0m7.465s
sys     0m0.710s

Thanks for your help.

Okay, I’m guessing the “slowness” is maybe a mix of confusion and misunderstanding.

terraspace build stack appears to actually build your entire “project” (per --help), which is confusing because the argument it takes is just a single stack; I would expect only that stack to be built in this case. Turning on debug logging shows that it’s building all stacks (the project at large).

terraspace build dlm       
Building .terraspace-cache/us-east-1/rd/stacks/dlm
Created .terraspace-cache/us-east-1/rd/stacks/eks/provider.tf
Created .terraspace-cache/us-east-1/rd/stacks/eks/backend.tf
Created .terraspace-cache/us-east-1/rd/stacks/s3/provider.tf
Created .terraspace-cache/us-east-1/rd/stacks/s3/backend.tf
Created .terraspace-cache/us-east-1/rd/stacks/s3/1-us-east-1-rd.auto.tfvars
Created .terraspace-cache/us-east-1/rd/stacks/some-app/provider.tf
Created .terraspace-cache/us-east-1/rd/stacks/some-app/backend.tf
Created .terraspace-cache/us-east-1/rd/stacks/some-app/1-us-east-1-rd.auto.tfvars
Created .terraspace-cache/us-east-1/rd/stacks/another-s3/provider.tf
Created .terraspace-cache/us-east-1/rd/stacks/another-s3/backend.tf
Created .terraspace-cache/us-east-1/rd/stacks/another-s3/1-us-east-1-rd.auto.tfvars

This answers my question about why building a stack is slow, but it’s still slow nonetheless, and it makes pre-commit hooks that build these prior to checking into source control very annoying. Again, managing many stacks while having to rely on the cache folder for TFC does not seem sustainable. :confused:

This seems to happen during init and plan too, making it even more cumbersome.

RE: Building all stacks

So Terraspace builds all stacks for a specific reason. Here’s some history. Terraspace used to build only the specific stack it needed to deploy. This changed in v0.3.0 so that all stacks are built, with the advent of the terraspace all commands. This is because terraspace all commands create multiple processes to deploy stacks in parallel, and the implementation got a little tricky when only each stack was built. The complexity also had to do with having to calculate the dependency tree. That’s the context for why all stacks are built now. So yes, you don’t have to call terraspace build on each stack; you only have to call it on each env.

In terms of performance, there’s room to improve it, particularly for mono-repo setups (i.e., 90 stacks). Will think some more about that and dig into it. :face_with_monocle:

RE: VCS-driven workflow

The TFC VCS-driven workflow is not a great workflow. With the way TFC works, we pretty much have to pre-generate the code before uploading. This is one of the reasons I prefer the CLI-driven workflow. Understandably, some companies prefer the VCS-driven workflow. There’s no ideal way to handle this. Think I’ll update the docs to note that the VCS-driven workflow with the git commit hook is just a workaround. Note, I’ve got some long-term ideas to help, but they’re rather far out.
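For reference, the commit-hook workaround mentioned above can be sketched roughly like this. The env names are assumptions, and the commands are printed rather than executed here; a real .git/hooks/pre-commit would run them directly (one build per env, since each build covers all stacks) and stage the regenerated cache.

```shell
# Hypothetical pre-commit sketch: print the commands a hook would run to
# rebuild the cache for each env and stage it before the commit lands.
# Drop the printf/echo wrappers to execute for real.
precommit_cmds() {
  for env in "$@"; do
    printf 'TS_ENV=%s terraspace build placeholder\n' "$env"
  done
  echo "git add .terraspace-cache"
}

precommit_cmds dev qa prod
```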