In our setup we separate infrastructure from the backends by keeping the backends in their own AWS accounts. The reason for this is simply security: if we were, for whatever reason, to lose access to the infrastructure account, we would still have the backend in its separate account and could restore everything from the state.
This works fine in pure terraform by providing the ARN of a role in the backend account in the project’s backend.tf. Right now I’m having trouble getting it to work in terraspace, though. In particular, the auto-creation of backend buckets seems to ignore the role or run into permission issues.
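For reference, the pure-terraform version of this looks roughly like the snippet below (bucket name, key, region, and role ARN are all placeholders, not our actual values):

```hcl
terraform {
  backend "s3" {
    bucket   = "example-backend-bucket"           # bucket lives in the backend account
    key      = "project/terraform.tfstate"
    region   = "eu-central-1"
    # role in the backend account that terraform assumes for state operations
    role_arn = "arn:aws:iam::222222222222:role/terraform-backend"
  }
}
```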
Is there specific support for this, or is it known whether cross-account work like this is currently impossible in terraspace?
My thoughts on this:
The backend.tf already contains the role_arn of the role in the other account. I think that should be used by default, with an option to override it. Not everyone will want this role’s policy to include bucket creation, but it’s a good default.
But alright, automatic creation is not possible at the moment, got it. For security reasons I was thinking of creating specific roles that only allow access to those buckets anyway, so I can also simply create the backends with a separate terraspace/terraform project. I’ll investigate this a bit more and check whether at least the cross-account connection would work.
Cool. Think that’s good default behavior and a nice approach. Some thoughts:
Would need to parse backend.tf so that the data structure is available in Ruby to grab the role_arn. Then it can be used to create the S3 client specifically for the backend creation.
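A minimal stdlib-only sketch of that first step, assuming the simple single-line `role_arn = "..."` form (the real implementation would go through a proper HCL parse; the regex and the sample file contents here are simplifications for illustration):

```ruby
# Simplified sketch: pull role_arn out of a backend.tf so it can be
# passed along when building the S3 client for backend bucket creation.
# A real implementation would parse the HCL properly rather than regex it.
def backend_role_arn(backend_tf)
  match = backend_tf.match(/^\s*role_arn\s*=\s*"([^"]+)"/)
  match && match[1]
end

backend_tf = <<~HCL
  terraform {
    backend "s3" {
      bucket   = "my-backend-bucket"
      key      = "project/terraform.tfstate"
      role_arn = "arn:aws:iam::111111111111:role/backend-access"
    }
  }
HCL

puts backend_role_arn(backend_tf)
```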
There’s already a parser that terraspace uses: https://github.com/boltops-tools/hcl_parser It really just uses the https://github.com/winebarrel/rhcl parser and rewrites the HCL as a pre-processing step before handing it off. It’s hacky, but I tried a few different Terraform parsers at the time and each seemed to have some issues, probably because the Terraform HCL syntax itself has evolved and changed, and it takes time and effort for authors to update their parsers. Currently, the boltops-tools/hcl_parser pre-processing works for simple HCL cases like backend.tf and variables.tf. At some point I hope to revisit the parser with something like https://kschiess.github.io/parslet/ and improve it. It’s been quite a while since I’ve messed around with parsing, though.
Interesting… We’ve also been working on deployments across multiple AWS accounts, where we run terraspace/terraform as an IAM user/role in a “master” account and assume roles into “child” accounts to deploy the resources.
We started looking at using terraspace for our deployments to work around some limitations we encountered with pure terraform (and terraspace has been great at it for us, btw!). We find that when we deploy resources into the “child” accounts, the remote state file is written to a bucket in the “child” accounts. However, note that when we started using terraspace, the buckets (in the child accounts) had already been created in those individual accounts*.
I’ve just done another test deployment to another “child” account without provisioning the bucket in advance and can confirm that, like you said, it creates the bucket in the “master” account (if the bucket doesn’t already exist).
*The way we did this (before we used terraspace, btw) was to have terraform code that stores the remote state file in the “master” account, but assumes an IAM role in the “child” accounts to provision the backend bucket there. We took this process out when we started using terraspace because we assumed terraspace would create the s3 bucket in the “child” account automatically, since it had been updating the remote state in the “child” account (except of course those buckets were created outside terraspace).
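That pre-terraspace approach looks roughly like the snippet below (account IDs, bucket names, and role names are placeholders, not our real configuration):

```hcl
# State for this bootstrap project stays in the "master" account...
terraform {
  backend "s3" {
    bucket = "master-account-state-bucket"
    key    = "backend-buckets/terraform.tfstate"
    region = "us-east-1"
  }
}

# ...while the provider assumes a role in the "child" account
# so the backend bucket gets provisioned over there.
provider "aws" {
  region = "us-east-1"
  assume_role {
    role_arn = "arn:aws:iam::333333333333:role/child-provisioner"
  }
}

resource "aws_s3_bucket" "backend" {
  bucket = "child-account-state-bucket"
}
```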
@jalam RE: “However, note that when we started using terraspace, the bucket (in the child accounts) were already created in those individual accounts*. […] I’ve just done another test deployment to another “child” account without the bucket being provisioned in advance and can confirm that like you said, it turns out that it would create the bucket in the “master” account (if the bucket doesn’t already exist).”
Yeah, that’s pretty confusing behavior. The code was doing this because when the bucket exists, terraspace just leaves it as is and no bucket-creation logic runs. When the bucket does not exist, terraspace runs the bucket-creation logic. I’m unsure why the master account has access to check whether the bucket exists without having to assume the role, though. Maybe bucket permission was granted via a bucket ACL. Unsure there.
In any case, I dug into this and it should be fixed now. Relevant PR:
Had to map the terraform interface to the Ruby SDK, which was a bit annoying. There were some parameters that didn’t seem to map at all, notably: assume_role_policy_arns, assume_role_tags, assume_role_transitive_tag_keys. Hope to document those in the docs.
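Conceptually the mapping is a key-for-key translation from the terraform s3-backend settings to the Ruby SDK’s STS assume-role options. A small stdlib-only sketch of the idea (this is illustrative, not the actual terraspace_plugin_aws code; the terraform-side key names follow the s3 backend settings, and the three parameters above are the ones left unmapped):

```ruby
# Illustrative sketch: translate terraform s3-backend assume-role
# settings into keyword options for the Ruby SDK's STS assume_role call.
# Not the actual plugin code.
TERRAFORM_TO_SDK = {
  "role_arn"                     => :role_arn,
  "session_name"                 => :role_session_name,
  "external_id"                  => :external_id,
  "assume_role_duration_seconds" => :duration_seconds,
  "assume_role_policy"           => :policy,
  # assume_role_policy_arns, assume_role_tags and
  # assume_role_transitive_tag_keys had no obvious 1:1 mapping at the time.
}.freeze

def sts_assume_role_options(backend_config)
  backend_config.each_with_object({}) do |(key, value), opts|
    sdk_key = TERRAFORM_TO_SDK[key]
    opts[sdk_key] = value if sdk_key
  end
end

opts = sts_assume_role_options(
  "role_arn"     => "arn:aws:iam::222222222222:role/backend",
  "session_name" => "terraspace",
  "unsupported"  => "ignored"
)
```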
Should generally work now, though.
Just make sure you update to the latest terraspace_plugin_aws:

```shell
cd terraspace-project-folder
bundle update
bundle info terraspace_plugin_aws
```