Configuring a Static Website on AWS with Terraform
Oct 31, 2016 · 5 minute read

I recently migrated this blog (built using Hugo) from a manually configured setup with S3 and CloudFront to the same infrastructure managed via Terraform. While it’s relatively trivial to host a static site on AWS these days, migrating this simple application to be managed with Terraform was a great way to get started with HashiCorp’s infrastructure automation tools.
This post will lay out the steps for deploying a static site on AWS via Terraform and some of the gotchas of migrating an existing site.
AWS Setup
- Separate accounts. I chose to create a separate AWS account for this work, as I’ve seen folks make mistakes with Terraform and tear down entire environments; I wanted some wiggle room for myself. This step is not required.
- Create a new IAM user for Terraform that has access to S3 and CloudFront. For more info on IAM permissions, see AWS’s best practices doc.
- Have access to your DNS settings. This post assumes DNS is managed outside of AWS but you could just as easily manage it via Route53.
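For the IAM user’s permissions, a minimal policy might look like the sketch below. This is illustrative, not from the original post: the wildcard actions are broader than strictly necessary and can be narrowed to the specific S3 and CloudFront calls Terraform makes.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:*", "cloudfront:*"],
      "Resource": "*"
    }
  ]
}
```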
Local Setup
- Install Terraform. You can follow these instructions or use brew install terraform.
- Make a subdirectory (I called mine terraform, but the name does not matter) in your project to hold all Terraform-related files.
- Take the keys from the IAM user made above and store them in a file that is not under version control. Here’s an example:
# ~/.terraform/blog_creds
access_key = "FOO"
secret_key = "BAR"
Variables
Terraform separates concerns nicely and allows us to pull our user-configured variables out into their own file. Here’s what we’ll need to set for our work.
# terraform/variables.tf

# required for AWS
variable "access_key" {}
variable "secret_key" {}
variable "region" {
  default = "us-west-1"
}

# specific to our site
variable "root_domain" {
  default = "simontaranto.com"
}

variable "blog_bucket_subdomain" {
  default = "techblog"
}

variable "blog_public_subdomain" {
  default = "blog"
}
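Any of these defaults can also be overridden at plan or apply time without editing the file, for example (us-east-1 is just an illustrative value):

```
$ terraform plan -var 'region=us-east-1'
```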
Setting a Provider
Terraform is driven by Providers (such as AWS) and we’ll need to set which one we want to use. We’ll reference the variables we set in the previous step here.
# terraform/infra.tf

provider "aws" {
  access_key = "${var.access_key}"
  secret_key = "${var.secret_key}"
  region     = "${var.region}"
}
Creating an S3 Bucket
Now we’ll create the S3 bucket where we’ll store our static files. Following the bucket docs, we’ll configure our bucket to be a website and point the index page to index.html and the error page to 404.html. These pages can differ depending on your use case, but these settings work nicely with Hugo out of the box.
# terraform/infra.tf

resource "aws_s3_bucket" "blog" {
  bucket = "${var.blog_bucket_subdomain}.${var.root_domain}"

  website {
    index_document = "index.html"
    error_document = "404.html"
  }
}
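One note: the S3 website endpoint can only serve objects that are publicly readable. If your publishing tool doesn’t set a public-read ACL on each object it uploads, a bucket policy can grant read access bucket-wide. This is a hedged sketch, not part of the original configuration:

```hcl
# terraform/infra.tf
# Hypothetical addition: allow public reads of all objects in the bucket,
# so the S3 website endpoint can serve them without per-object ACLs.
resource "aws_s3_bucket_policy" "blog_public_read" {
  bucket = "${aws_s3_bucket.blog.id}"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::${var.blog_bucket_subdomain}.${var.root_domain}/*"
  }]
}
POLICY
}
```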
Creating a CloudFront Distribution
Next up we’ll create a CloudFront
Distribution
with an origin
pointing to our recently made bucket.
# terraform/infra.tf

resource "aws_cloudfront_distribution" "blog_distribution" {
  origin {
    custom_origin_config {
      http_port              = 80
      https_port             = 443
      origin_protocol_policy = "http-only"
      origin_ssl_protocols   = ["SSLv3", "TLSv1", "TLSv1.1", "TLSv1.2"]
    }

    domain_name = "${aws_s3_bucket.blog.id}.s3-website-${var.region}.amazonaws.com"
    origin_id   = "${aws_s3_bucket.blog.id}"
  }

  enabled             = true
  default_root_object = "index.html"
  aliases             = ["${var.blog_public_subdomain}.${var.root_domain}", "www.${var.blog_public_subdomain}.${var.root_domain}"]

  custom_error_response {
    error_code         = 404
    response_code      = 200
    response_page_path = "/404.html"
  }

  http_version = "http2"

  default_cache_behavior {
    allowed_methods  = ["HEAD", "GET", "OPTIONS"]
    cached_methods   = ["HEAD", "GET", "OPTIONS"]
    target_origin_id = "${aws_s3_bucket.blog.id}"

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  price_class = "PriceClass_All"

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
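One optional addition (not in the original configuration): a Terraform output that prints the distribution’s assigned domain name after an apply, which is handy when updating DNS later.

```hcl
# terraform/infra.tf
# Hypothetical addition: surface the CloudFront domain name after an apply
# so it can be copied straight into a CNAME record.
output "cloudfront_domain" {
  value = "${aws_cloudfront_distribution.blog_distribution.domain_name}"
}
```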
Planning and Applying
Now that we have our infrastructure configured we can ask Terraform to create a
plan
for us so we can see what it will do. Here we manually pass in an
additional variable file that holds our AWS keys.
$ terraform plan -var-file="~/.terraform/blog_creds"
Assuming there are no syntax errors, there will be a large set of output showing that the bucket and distribution will be created. If we’re happy with the plan, we can ask Terraform to apply it for us.
$ terraform apply -var-file="~/.terraform/blog_creds"
Assuming all went well, we’ll get a success message. At this point, you’ll find new files (terraform.tfstate and terraform.tfstate.backup) in your project that Terraform uses to keep track of your resources.
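Note that these state files can embed the credentials and other values passed in as variables, so it’s worth keeping them out of version control. A minimal sketch:

```shell
# Keep Terraform state files (which can embed credentials) out of git.
# Note: this appends blindly; check for existing entries before rerunning.
echo "terraform.tfstate" >> .gitignore
echo "terraform.tfstate.backup" >> .gitignore
```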
DNS Updates
Add a CNAME record at your preferred subdomain (which should match blog_public_subdomain from our variables above) and point it at the CloudFront distribution’s domain name.
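In zone-file terms, the records might look like the following; d1234abcdef8.cloudfront.net is a placeholder for the domain name CloudFront assigns to your distribution.

```
blog      3600  IN  CNAME  d1234abcdef8.cloudfront.net.
www.blog  3600  IN  CNAME  d1234abcdef8.cloudfront.net.
```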
Testing it Out
Wait for DNS to propagate and then test (using a browser, curl, or host) that the subdomain you just added points to your CloudFront distribution. If all looks good, publish your content to your S3 bucket and confirm you can access it. Done!
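As an illustration, checking the record and publishing might look like this. The domain and bucket names are this site’s examples; the publish command isn’t from the original post and assumes Hugo’s output directory (public/) and the AWS CLI.

```
$ host blog.simontaranto.com
$ aws s3 sync public/ s3://techblog.simontaranto.com/ --delete
```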
Gotchas
- Terraform bugs. I bumped into a handful of bugs, most of which were solvable by changing Terraform versions or using workarounds. Although frustrating to run into, the project is moving fast and issues get resolved quickly.
- Terraform is a wrapper around AWS, and even though the automation makes managing the infrastructure better, as an operator you still need to know how AWS works. There were many cryptic API error messages from AWS I had to debug. Once I got through those errors, though, being able to quickly make changes in code, run an apply, and verify them was glorious.
- Attempting to move a bucket between accounts by deleting and recreating it led to the error below. AWS says you can wait some period of time and the bucket will become available again, but I chose to use a new name because I didn’t want to wait. Lesson learned: don’t delete a bucket if you want or need the same bucket to be recreated.
Error creating S3 bucket: [WARN] Error creating S3 bucket
blog.simontaranto.com, retrying: OperationAborted: A conflicting conditional
operation is currently in progress against this resource. Please try again.
- CloudFront operations (creating new distributions, enabling or disabling existing ones) can be slow to propagate (~15 minutes). Be prepared.
Some Next Steps:
- Check out Consolidated Billing to make multiple accounts easier to manage
- Try managing DNS via Route53
- Learn about managing non-local Terraform state