Getting Http301Error from the AWS SDK when using multi-region layering on a stack

Hi,
Would really appreciate some help here :slightly_smiling_face:
I'm trying to set up multi-region layering for tfvars on a stack.
The stack was originally created with a single region in mind, but I need to add another region.
My provider is configured with the region as a variable so that I can set it in each tfvars file I have.
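
For reference, the wiring looks roughly like this (simplified sketch; I'm assuming the variable is simply called region, matching the tfvars below):

    variable "region" {
      type = string
    }

    provider "aws" {
      # the region comes from the tfvars layering, e.g. us-east-1 or us-west-2
      region = var.region
    }
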
Here is the structure of the tfvars:

    app/stacks/network/tfvars/
        us-east-1/
            qa.tfvars
            prod.tfvars
        us-west-2/
            qa.tfvars

Example for us-west-2/qa.tfvars:

    vpcs = {
      "01" = {
        cidr               = "10.45.0.0/17" # fake cidr for this topic
        availability_zones = ["us-west-2a", "us-west-2b", "us-west-2c"]
      }
    }
    environment = "qa"
    region      = "us-west-2"

Running the command AWS_REGION=us-east-1 TS_ENV=qa terraspace plan network works fine, but running it with AWS_REGION=us-west-2 returns the following error from the aws-sdk library:

    /opt/terraspace/embedded/lib/ruby/gems/2.7.0/gems/aws-sdk-core-3.114.3/lib/seahorse/client/plugins/raise_response_errors.rb:17:in `call': Aws::S3::Errors::Http301Error (Aws::S3::Errors::Http301Error)

If needed, I can provide the full stack trace, but it's pretty long.
My terraform backend configuration is as follows:

    terraform {
      backend "s3" {
        bucket         = "terraform-states"
        key            = "<%= expansion(':MOD_NAME-:ENV/terraform.tfstate') %>"
        region         = "us-east-1"
        encrypt        = true
        dynamodb_table = "terraform-state-lock"
      }
    }

This sits under config/terraform/backend.tf.
The end result in the bucket is terraform-states/network-qa/terraform.tfstate.

From what I see in previous issues on the SDK, people suggest whitelisting the 301 response code as OK rather than throwing an exception.

Thank you in advance to anyone who can help.

Unsure. The issue might have to do with the state key path not including the region. I sort of reproduced it with an example Terraspace project.

Here's a debugging session:

tung:~/environment/infra-bucket-301 (master) $ terraspace up demo -y
Building .terraspace-cache/us-west-2/dev/stacks/demo
Built in .terraspace-cache/us-west-2/dev/stacks/demo
Current directory: .terraspace-cache/us-west-2/dev/stacks/demo
=> terraform init -get -input=false >> /tmp/terraspace/log/init/demo.log
=> terraform plan -input=false -out /tmp/terraspace/plans/demo-caebb57ef8b7334bce9f9460233a9f61.plan
random_pet.this: Refreshing state... [id=skilled-bullfrog]
module.bucket.aws_s3_bucket.this: Refreshing state... [id=bucket-skilled-bullfrog]

No changes. Your infrastructure matches the configuration.


Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.
Releasing state lock. This may take a few moments...
=> terraform apply -auto-approve -input=false /tmp/terraspace/plans/demo-caebb57ef8b7334bce9f9460233a9f61.plan
Releasing state lock. This may take a few moments...

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:


bucket_name = "bucket-skilled-bullfrog"
Time took: 4s
tung:~/environment/infra-bucket-301 (master) $ AWS_REGION=us-west-2 terraspace up demo -y                                                                                                                        
Building .terraspace-cache/us-west-2/dev/stacks/demo
Built in .terraspace-cache/us-west-2/dev/stacks/demo
Current directory: .terraspace-cache/us-west-2/dev/stacks/demo
=> terraform plan -input=false -out /tmp/terraspace/plans/demo-962956bdca79f8041fcc758d720e064f.plan
random_pet.this: Refreshing state... [id=skilled-bullfrog]
module.bucket.aws_s3_bucket.this: Refreshing state... [id=bucket-skilled-bullfrog]

No changes. Your infrastructure matches the configuration.


Terraform has compared your real infrastructure against your configuration
and found no differences, so no changes are needed.
Releasing state lock. This may take a few moments...
=> terraform apply -auto-approve -input=false /tmp/terraspace/plans/demo-962956bdca79f8041fcc758d720e064f.plan
Releasing state lock. This may take a few moments...

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.


Outputs:

bucket_name = "bucket-skilled-bullfrog"
Time took: 4s
tung:~/environment/infra-bucket-301 (master) $ AWS_REGION=us-west-2 TS_ENV=qa terraspace up demo -y                                                                                                              
Building .terraspace-cache/us-west-2/qa/stacks/demo
Built in .terraspace-cache/us-west-2/qa/stacks/demo
Current directory: .terraspace-cache/us-west-2/qa/stacks/demo
=> terraform init -get -input=false >> /tmp/terraspace/log/init/demo.log
=> terraform plan -input=false -out /tmp/terraspace/plans/demo-c2692e70d1b9b939ffb30c07c73c96a2.plan
random_pet.this: Refreshing state... [id=vocal-maggot]
module.bucket.aws_s3_bucket.this: Refreshing state... [id=bucket-vocal-maggot]
Releasing state lock. This may take a few moments...
╷
│ Error: error reading S3 Bucket (bucket-vocal-maggot): BucketRegionError: incorrect region, the bucket is not in 'us-west-2' region at endpoint ''
│       status code: 301, request id: , host id: 
│ 
│   with module.bucket.aws_s3_bucket.this,
│   on ../../modules/example/main.tf line 1, in resource "aws_s3_bucket" "this":
│    1: resource "aws_s3_bucket" "this" {
│ 
╵
Error running command: terraform plan -input=false -out /tmp/terraspace/plans/demo-c2692e70d1b9b939ffb30c07c73c96a2.plan
tung:~/environment/infra-bucket-301 (master) $ 

It's a bummer. It looks like the key was customized to not include the region, as shown above. I tested with a similar backend.tf configuration to the one you provided.

If the missing region in the key is the reason, you probably want to maintain backward compatibility. Consider introducing some conditional logic, possibly something like:

<%
def state_key_path
  if ENV['AWS_REGION'] == "us-east-1"
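    # keep the original key for the existing us-east-1 state (backward compatible)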
    expansion(':MOD_NAME-:ENV/terraform.tfstate')
  else
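    # any other region gets the region prefixed into the key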
    expansion(':REGION/:MOD_NAME-:ENV/terraform.tfstate')
  end
end
%>

terraform {
  backend "s3" {
    bucket         = "terraform-states"
    key            = "<%= state_key_path %>"
    region         = "us-east-1"
    encrypt        = true
    dynamodb_table = "terraform-state-lock"
  }
}

Note: I haven't tested it, but it gives you an idea of the concept.
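
For example, assuming the network stack and TS_ENV=qa from your setup, the keys would expand to roughly:

    # AWS_REGION=us-east-1 -> network-qa/terraform.tfstate             (unchanged, existing state)
    # AWS_REGION=us-west-2 -> us-west-2/network-qa/terraform.tfstate   (new region-prefixed key)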

You may also want to consider moving the state files over with the terraform state commands, so it's cleaner and won't need the conditional logic.

You could also consider copying the S3 state files over to the new key path, but you'd probably have to update the DynamoDB items in the "terraform-state-lock" table too, so be careful about that. :ok_hand: Probably make backups of everything and do a test run in a new project first.
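
A rough sketch of what that could look like, using the bucket and key names from your setup (illustrative paths only, and definitely back up first):

    # copy the existing statefile to a region-prefixed key
    aws s3 cp \
      s3://terraform-states/network-qa/terraform.tfstate \
      s3://terraform-states/us-west-2/network-qa/terraform.tfstate

    # list the LockID values in the lock table to see which entries reference the old key
    aws dynamodb scan --table-name terraform-state-lock --projection-expression LockID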

Hi!

Thank you for your response.
I had kind of thought that the backend state region was the issue, and not that the region was missing from the state file name.

I used your suggestion to test whether it works, and it indeed does.
I will work on changing my state files to include the region as suggested and test again.
I'll keep this topic posted!

Again, thank you.


Hi,

I updated the state file paths in S3 to include the region, updated the DynamoDB LockID keys as well, and it solved the issue.

Thank you for your assistance!
