Thursday, April 16, 2015

Save Money by Moving your Static Websites from GoDaddy to AWS S3

First off, I'd like to make it very clear that I'm not writing this post because I have a grudge to settle against GoDaddy; this is just a walkthrough of how I moved the hosting of my static websites to Amazon S3 and how it has saved me a lot of money (relatively speaking) in hosting costs. I've been using GoDaddy for as long as I can remember. I started off using it for just my domains, and when I was in a .NET coding phase (I'm purely NodeJS now :) I started hosting my sites with them as well. Many years ago Windows-based .NET hosting cost something like $5 a month, but over the years this has actually gone up, and lately I had been paying more than $7 for a basic Windows hosting package. That may not seem like much, but when you have 2 or 3 websites (low-traffic / personal hobby websites) it does not make sense to pay this much.

Enter Amazon Web Services (AWS) to the rescue!

I initially experimented by moving one of my websites to AWS, but I picked the Elastic Beanstalk path as I thought this was the only way to host a website on AWS. It was an interesting experience, but I ended up paying a lot more per month: with Elastic Beanstalk you have to stand up an EC2 instance, and you are pretty much paying for every hour that machine is up and running to respond to your website requests. So as you can imagine, the monthly bill ended up being a lot more than the basic GoDaddy hosting account fee.

I then started playing around with AWS S3 for some static image hosting features I was building for one of my web apps. The first thing I noticed was how cheap storage space and data transfer rates were on S3, and it became evident why so many large websites/web apps host their assets in AWS S3. I also did more research, and S3 usage costs seem to always trend downwards rather than upwards, in stark contrast to many other mainstream hosting companies. Amazon attributes this downward cost pattern to the fact that as they gain more users they invest in more infrastructure, and at volume it gets cheaper to offer cloud services to customers. So it's like they are selling cloud infrastructure at wholesale prices, as they would rather deal in a large volume of usage. I guess Amazon would definitely know a thing or two about managing massive load and usage volume, as they have been in this business for a long time.

This got me thinking: what if I could host all my static websites in S3? As these websites are just a collection of HTML, JS, CSS and image assets, it certainly makes sense. I didn't have to look around for long before I discovered that this is indeed possible and that lots of people were already doing it.


Below is a walkthrough on how you can do this as well for your websites.



Getting up and Running with AWS S3


Firstly, this only works for purely static websites. So no PHP, .NET or other backend pages will work on S3. If I need to build any backend features for my static websites (e.g. a subscribe-to-newsletter form), I basically tie them in using AJAX calls to a web service that I build and host elsewhere (I currently use Heroku for my backend NodeJS web services - but more on this later).

So let's get started.

Step 1) Firstly, sign up to AWS S3.

Step 2) Once logged in, go to your S3 Console 


Amazon S3 Console


Step 3) Let's assume your website is "www.example.com" and you want both "www.example.com" and "example.com" to point to it. Create a new S3 "bucket" (basically this is a unique folder that will hold your site).

Call this bucket "www.example.com".

Of course you need to use your own domain name and not "www.example.com" (in the example image below I've used one of my own websites, i.e. "www.paulsandmark.com"). Also keep in mind that bucket names are globally unique across all of AWS S3, so be careful when creating one. When creating your bucket you can also pick a "region" that is closer to you or your target website visitors (this will make your website load faster for people around that region than for people outside it).

To have both "www.example.com" and "example.com" point to the same S3-hosted files, we will use your domain name's DNS configuration to point "example.com" to "www.example.com" (more on this below).

An S3 bucket set up for your website
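If you prefer the command line, the same bucket can also be created with the AWS CLI. This is just a sketch: it assumes you have the CLI installed and configured with your credentials, and the bucket name and region here are examples you should replace with your own.

```shell
# Create the bucket that will hold the site.
# Bucket names are globally unique across all of S3.
aws s3 mb s3://www.example.com --region us-east-1
```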


Step 4) With the new bucket created, open your "www.example.com" bucket by clicking on it. This will be the main folder where your website will be hosted. Upload your entire website file collection here.

Upload all your website files into the bucket. Note: there is a "drag to upload folder" feature in the upload screen, so you don't need to upload file by file.
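The upload can be scripted with the AWS CLI as well; a sketch, where "./my-website" is a placeholder for your local site folder:

```shell
# Upload the whole site folder; "sync" only transfers
# files that are new or have changed since the last run.
aws s3 sync ./my-website s3://www.example.com
```

This is handy for redeploying later: re-run the same command after editing your site and only the changed files go up.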


Step 5) Once all your files are uploaded, you need to make sure they are all set to "Public". Select all your files and choose "Make Public" like so:

Select all files/folder and make them "Public"
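Instead of making each file public individually, you can attach a bucket policy that makes everything in the bucket publicly readable in one go. A sketch (replace the bucket name in the "Resource" ARN with your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.example.com/*"
    }
  ]
}
```

This also covers files you upload later, so you don't have to remember to "Make Public" after every deploy.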


Step 6) Next, click on the "Properties" for your bucket, "Enable Website Hosting", and point your bucket's default page to your main HTML page (index.html in the example below).

Enable "website hosting" and set your default HTML page
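The same setting can be applied from the AWS CLI; a sketch, with the bucket name and error page as example values:

```shell
# Enable static website hosting on the bucket and set the
# default (index) document; the error document is optional.
aws s3 website s3://www.example.com/ \
    --index-document index.html \
    --error-document error.html
```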


Step 7) And that's it for the S3 configuration side of things; your "www.example.com" bucket is now set up to hold your website. Before you move on to the DNS part of getting your domain names to point to these hosted files, make sure you note down the entire "S3 Endpoint" URL in your bucket's properties page. It will look something like this:

Note down your "Endpoint" URL, you will need it for the below steps


Entering this URL in your browser should take you to your website. This is a good test to perform before continuing.
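You can also do this check from the command line. The endpoint host below is an example; the exact form depends on your bucket name and region.

```shell
# Fetch just the response headers from the S3 website endpoint.
# A correctly configured site should answer with an HTTP 200
# and a text/html content type for your index page.
curl -I http://www.example.com.s3-website-us-east-1.amazonaws.com
```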



DNS and Domain Name Configuration on GoDaddy


Next we have to work with your domain name's settings to tie it all together. The steps below are for domain names hosted with GoDaddy, using the default DNS tool that comes with GoDaddy domain names to point your domain to your S3 bucket of files.

A disclaimer first: please be very careful when updating DNS settings for your domain. The steps below are what I did, but they can vary depending on your hosting setup. At a high level, we are going to create a "www" CNAME entry that points to your "www.example.com" S3 bucket, and then set up a simple redirect from "example.com" to "www.example.com". So proceed at your own risk; if you are not sure and you can't afford any downtime on your website, please do more research before continuing.


Step 8) Log in to your godaddy.com account and go to the "Domain Details" page for your domain.

Step 9) Click on the "DNS Zone File" tab and go down to the "CNAME" section.

Step 10) In the CNAME section you will need to modify your "www" entry (if you don't have one you will need to create one) and enter the full "S3 Endpoint" URL you noted above as the value here.

In the www entry, enter the S3 endpoint URL as the value


After these settings take effect (keep in mind that DNS changes can take half an hour or more to propagate), when a user visits "www.example.com" in the browser your S3 bucket contents will load, and by default your index.html page will display.
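For reference, the relevant entry in the zone file ends up looking something like this sketch (the endpoint host is an example and depends on your bucket name and region):

```
; "www" host pointing at the S3 website endpoint
www   3600   IN   CNAME   www.example.com.s3-website-us-east-1.amazonaws.com.
```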

Step 11) We have one last step left, which is to redirect the base domain "example.com" to "www.example.com". To do this, go to your main domain "Settings" tab and set up a new "forwarding" rule like below:

Finally redirect requests to your base domain to forward to your "www." domain


After these settings take effect (again, this will take half an hour or more), "example.com" will redirect to "www.example.com" using a 301 redirect (which is SEO friendly).
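Once DNS has propagated, a quick way to confirm the redirect from the command line (using the example domain from above):

```shell
# Fetch only the headers from the bare domain.
# Look for a "301 Moved Permanently" status and a
# "Location: http://www.example.com/" header in the response.
curl -sI http://example.com
```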



And that's it folks!

After the settings come into effect, your website is fully hosted on AWS S3. After spending a few days testing your website and making sure nothing has broken as a result of your DNS changes, you can cancel your GoDaddy hosting account.

The cost difference is (relatively) big for me: I used to spend around $7 a month hosting each of my static websites, and since moving to AWS S3 I have not spent more than $1 a month (even though my traffic is light, it's still a very good saving). Also keep in mind that GoDaddy might have some cheaper hosting packages for static websites as well, so best do your research in that department if you are worried about moving hosting providers for some reason.

As always, Happy Coding Code Junkies!



11 comments:

  1. thankyou very very very much!

  2. This is so helpful, saved me some time and money! Do you know anything about the instances on EC2? I have one site with S3 that I didn't use any instance and another I did, and the one I am using an instance on has a much bigger bill. Like $20 per website. But if I don't need an instance well then I will remove it! Thanks.

    1. Hello, glad you found this useful. If it's a static website you can host it purely on S3; you don't need an EC2 instance. You only use an EC2 instance if you have "backend" code like node.js / PHP etc. - in which case you need an EC2 instance to launch a web server and serve your files. Hope this helps.

  3. what about the nameservers, can you throw some light on it?

  4. Great article. Do you know if it's worth switching from Go Daddy to Amazon's Route 53? Their pricing page isn't terribly clear to me: https://aws.amazon.com/route53/pricing/

    I'm curious what the annual cost would be just for the domain + privacy.

    Thanks again!

  5. Actually what you should do is:
    1. Set up a hosted zone on AWS Route 53.
    2. Create an A record with an alias that points to your S3 bucket. In order for this to work you need to name your bucket the same as your domain name, so if you're www.example.com your bucket has to be called example.com. Click on the alias radio button and then the dropdown. It takes a minute for the S3 bucket to show up in the dropdown but it'll eventually pop in.
    3. Once you're set up on AWS, copy the NS servers and head back to GoDaddy. Near the bottom of your DNS settings there's an option to set custom name servers. Enter your AWS nameservers and wait for the results to propagate.

    1. Good article, and this particular reply about an improved way to handle the DNS issues was very helpful. Worked great for me, thanks to both of you for taking the time to help others!

    2. What if you already have dns records saved at GD and it's a pain to migrate it all to route 53. Would this not be a preferable solution?

