Moving a Static Website to AWS S3 and Route 53
| code howto |
Namecheap is an excellent domain name registrar. Domain registrations, including WHOIS privacy protection, are as cheap as advertised, the platform is both full-featured and easy to use, and their customer service is great. I've used Namecheap for several domain registrations over the years, but I always used AWS for hosting the actual website. This post documents the steps I took to consolidate domain registration and hosting under a single provider with AWS.
Despite this site (root81) containing simple, static content, I initially wanted the ability to experiment with richer web applications on a whim without needing to stand up a whole new server, configure the DNS, etc. My solution was to host the site on AWS Elastic Beanstalk with a free static IP (EIP), and then point the DNS A Record for the domain in Namecheap to the AWS static IP address. The deployable unit for my Elastic Beanstalk environment was a .war file, which I produced by packaging up my Scalatra server application code. Obviously, my simple site doesn't need Beanstalk's dynamic scaling (and, indeed, I don't think EB allows scaling an EIP to more than one host anyway), but the goal was to get experience with all the components from start to finish, and using the AWS 1-year free tier, the website hosting was free for a year anyway.
Unsurprisingly, after the first year, it made no sense to power such a simple website with a full web application server, given that the current configuration would cost about $8 a month for the roughly 720 hours of always-on t2.micro EC2 instance time. That cost could be reduced with a reserved EC2 instance and/or a spot instance, but I'm not testing out web technologies frequently enough to warrant having an always-on server in the first place.
Migrating DNS from Namecheap to Route 53
The primary guide that I followed was "Making Route 53 the DNS Service for a Domain That's in Use". The guide is very detailed and I found it easy to follow, so I'll simply summarize the high-level steps in this post with some screenshots to make things concrete.
The first step is to create a Hosted Zone in Route 53, using your actual domain name (e.g., 'root81.com'):
This creates NS and SOA DNS records for the domain. I added a third record set to the Hosted Zone (see "Create records individually in the console" in the article linked above), an A Record pointing to my server application running in EB with a static EIP. This ensures that as soon as Route 53's Nameservers are used to route traffic to your domain, they will route traffic to your live application server.
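The console steps above can also be done with the AWS CLI. Here's a sketch with hypothetical values for the domain and Elastic IP (substitute your own):

```shell
DOMAIN="root81.com"      # your domain
EIP="203.0.113.10"       # hypothetical: the EIP attached to the EB instance

# Create the hosted zone (returns the NS and SOA records mentioned above).
# The caller reference just needs to be a unique string per request.
aws route53 create-hosted-zone \
    --name "$DOMAIN" \
    --caller-reference "$(date +%s)"

# Look up the new zone's ID, then add the A record pointing at the EIP.
ZONE_ID=$(aws route53 list-hosted-zones-by-name \
    --dns-name "$DOMAIN" \
    --query 'HostedZones[0].Id' --output text)

aws route53 change-resource-record-sets \
    --hosted-zone-id "$ZONE_ID" \
    --change-batch "{
      \"Changes\": [{
        \"Action\": \"UPSERT\",
        \"ResourceRecordSet\": {
          \"Name\": \"$DOMAIN\",
          \"Type\": \"A\",
          \"TTL\": 300,
          \"ResourceRecords\": [{\"Value\": \"$EIP\"}]
        }
      }]
    }"
```

These commands require configured AWS credentials, so treat them as a reference for what the console clicks do rather than a copy-paste recipe.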
In your Namecheap account, click "MANAGE" next to your domain and change the "NAMESERVERS" dropdown to "Custom DNS". Then, add each Nameserver value (I had 4) from Route 53 individually into a new field:
At this point, the AWS articles advise you either to shorten the TTL on the domain's NS record from its 2-day default ahead of time, or to wait the full 2 days for the old records to expire. I have a low-traffic website and could afford some downtime (okay, tons of downtime), so I pressed onward. The next step in Namecheap is to click on "Sharing & Transfer", scroll down, and click to "UNLOCK" the domain. Finally, click the button to get an AUTH CODE, which will be emailed to you.
Back in AWS Route 53, click "Registered Domains" on the left panel and click "Transfer Domain". Follow the steps in the wizard and select the Hosted Zone that you already set up in the previous step. The wizard will ask you for the AUTH CODE that you received via email from Namecheap. After that, you should receive an email from AWS asking you to verify ownership of the domain. Just click the link.
At this point (or shortly thereafter), you should be in step 7 of 14 in the "Pending requests" section of Route 53, which means that AWS is waiting for the registrar (in my case, Namecheap) to officially release the domain. The registrar has something like 5, or 8, or 13 business days (I forget) to do so. I waited about a week before becoming impatient :) and then I reached out to Namecheap customer service via chat to get help. (Again, I should mention that their customer service is really solid.) I explained my situation and the representative released my domain on the spot, so I did not need to wait for the full grace period to elapse.
At this point, I could still load my website (from the Beanstalk application), and I verified that Route 53 was the official DNS name server for the domain by adding more record sets for subdomains (e.g., www) in Route 53 that pointed to the same IP. One nice benefit I was not expecting: Route 53 honored the domain expiration date I had with Namecheap. Although Route 53 required me to pay the $12 for another year of registration, it appended that year to the existing domain expiration, which wasn't for another 6 months. I was expecting to eat the 6 months of registration I had already paid Namecheap for, but AWS gave me credit for that. It's good to know that you don't need to worry about timing your domain transfers to save money.
The next step was to move from a web application server (Scalatra) to a static S3 bucket.
Organizing Content in S3
First, I created an S3 bucket with the same name as the domain and configured it to host a static website. Note: if you want to point subdomains, such as www.root81.com, to your root domain, then you'll need to create one S3 bucket for each subdomain and configure each bucket to redirect to the root bucket.
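The bucket setup can be sketched with the AWS CLI as follows (bucket names are hypothetical and must match the domain exactly):

```shell
# Root bucket holds the site content; the www bucket only redirects.
aws s3 mb s3://root81.com
aws s3 mb s3://www.root81.com

# Enable static website hosting on the root bucket.
aws s3 website s3://root81.com \
    --index-document index.html \
    --error-document error.html

# Configure the www bucket to redirect every request to the root domain.
aws s3api put-bucket-website \
    --bucket www.root81.com \
    --website-configuration '{
      "RedirectAllRequestsTo": {"HostName": "root81.com", "Protocol": "http"}
    }'
```

Note that the website endpoint only works if the content is publicly readable, so you'll also need a bucket policy granting public read (and, on newer AWS accounts, to relax the Block Public Access settings).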
Next, I opened the Hosted Zone (created above) in Route 53 and added an Alias DNS Record pointing to the S3 bucket of the same name (e.g., root81.com). I added one record set for each subdomain as well (e.g., www.root81.com), pointing each to its specific S3 bucket. (Basically, Route 53 sends traffic for the subdomain to the subdomain's S3 bucket, which then redirects traffic to the root S3 bucket.)
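An alias record like this can also be created from the CLI. The sketch below uses a hypothetical zone ID; note that the `HostedZoneId` inside `AliasTarget` is not your zone's ID but the fixed, region-specific ID that AWS publishes for S3 website endpoints (the value shown is the one for us-east-1; check the AWS endpoint table for other regions):

```shell
ZONE_ID="Z0123456789ABC"   # hypothetical: the hosted zone created earlier

# Alias the apex domain to the S3 website endpoint for us-east-1.
aws route53 change-resource-record-sets \
    --hosted-zone-id "$ZONE_ID" \
    --change-batch '{
      "Changes": [{
        "Action": "UPSERT",
        "ResourceRecordSet": {
          "Name": "root81.com",
          "Type": "A",
          "AliasTarget": {
            "HostedZoneId": "Z3AQBSTGFYJSTF",
            "DNSName": "s3-website-us-east-1.amazonaws.com",
            "EvaluateTargetHealth": false
          }
        }
      }]
    }'
```

Repeat with `"Name": "www.root81.com"` to alias each subdomain to its own redirect bucket.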
A nice result of not fetching the blog posts dynamically is that I could create canonical URLs for the blog posts. Blog posts should always have canonical URLs anyway, but I was experimenting with some AJAX calls when I initially wrote the blog logic. To generate the static pages, I wrote a bash script that, in addition to linking each post to its predecessor, copies the post's contents into a template with the rest of the post webpage's HTML. This way I can change the blog's look and feel without modifying every individual post.
The bash script uses the AWS CLI to mirror the local directory to the S3 bucket for the site. In effect, this is how I "deploy" my website now: executing the bash script. I also had to add a few HTML redirects for paths on my website that I had linked to elsewhere, to avoid breaking those links. This is something to keep in mind for any previously shared website path or link.
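The deploy step boils down to something like the following (the directory layout and the old/new post paths are hypothetical; my actual script also does the templating described above):

```shell
#!/usr/bin/env bash
# Minimal deploy sketch: mirror the local site directory to the bucket.
set -euo pipefail

SITE_DIR="./site"          # hypothetical: local build output
BUCKET="s3://root81.com"

# --delete removes objects that no longer exist locally, keeping the
# bucket an exact mirror of the local directory.
aws s3 sync "$SITE_DIR" "$BUCKET" --delete

# Preserve a previously shared path by redirecting it to its new location.
# S3 serves this as a 301 redirect from the website endpoint.
aws s3api put-object \
    --bucket root81.com \
    --key "blog/old-post.html" \
    --website-redirect-location "/blog/new-post.html"
```

The `--website-redirect-location` metadata is how S3 static hosting expresses per-object redirects, which is what I used for the moved paths mentioned above.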
A final note: AWS actually provides free SSL certificates (through AWS Certificate Manager) to enable HTTPS connections to your domain. However, you can't attach a certificate to a domain that points directly at an S3 website endpoint, so the site needs some other layer of indirection. The typical configuration is to have Route 53 direct traffic for the domain to a CloudFront distribution, which is "backed" by the site's S3 bucket. I don't need HTTPS for this site, so I did not complete this step, but this guide can help you with that, in addition to providing another resource for this whole task in general.
In all, the DNS process took a few hours followed by a week of waiting, and the static site conversion took about 8 hours to get everything right. In terms of financial costs, the domain registration, including privacy, is a bit pricey at $18 annually for Route 53 ($12 a year and $0.50 a month), but the hosting on S3 is basically free for such a small site (just a few cents a month for storage and traffic).