S3 directory browsing from a custom subdomain

This week I was asked to set up a site on our server with directory browsing enabled. They also wanted to be able to upload files to said site. Since we are already hosting our servers on AWS, I suggested that rather than expending the effort to write code to let them manage everything via the web, we set up an S3 bucket and let them manage the files directly from their desktop.

Somehow I had gotten to this point without ever working with S3. However, I was aware of the principle behind it and knew it was capable of doing what I had proposed.

Creating a new bucket was easy. Configuring the permissions on the bucket to allow anonymous access was also easy. Mapping the bucket to a sub-domain and enabling file browsing is where everything started to fall apart on me.

I logged on to AWS and created the bucket. For the sake of example, let's say it was called assets.mysite.com. Something important to note here: if you are planning on mapping the bucket to a sub-domain like I am in this example, the bucket name must match the sub-domain exactly, because S3 uses the Host header of the incoming request to figure out which bucket to serve.

From there I set the permissions so that everyone could list the contents of the bucket.
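For reference, the console toggles I clicked correspond roughly to a public bucket policy like this sketch (the bucket name is from the example above; adjust it to your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicList",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::assets.mysite.com"
    },
    {
      "Sid": "PublicRead",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::assets.mysite.com/*"
    }
  ]
}
```

Note that s3:ListBucket applies to the bucket ARN itself, while s3:GetObject applies to the objects (the /* suffix); mixing those up is a common source of mysterious AccessDenied errors.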

Since I knew that my ultimate goal was to map this bucket to a sub-domain on the site, I immediately clicked down to the next tab, "Static Website Hosting", checked "Enable website hosting", added an index document, and clicked "Save". I copied the Endpoint provided on that tab (assets.mysite.com.s3-website-us-east-1.amazonaws.com), jumped over to Route 53, added a CNAME for assets.mysite.com, and dropped in the provided Endpoint.
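If you prefer doing this outside the console, the same website settings can be applied with the AWS CLI's put-bucket-website command and a configuration file along these lines (the index document name here is just an assumption):

```json
{
  "IndexDocument": { "Suffix": "index.html" }
}
```

Either way, the end result is the same website Endpoint shown in the console.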

Because I do my research, I knew that to get a list of files to show up, I would need to drop in some JavaScript to parse the XML content. I grabbed some bucket-listing code from the AWS community and uploaded it to the bucket.
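The core idea behind those community scripts is simple: fetch the bucket's ListBucket XML and pull the object keys out of it. Here is a minimal, hypothetical sketch of just the parsing step (this is not the actual AWS sample code; the extractKeys helper is my own, and a regex keeps it dependency-free):

```javascript
// Extract <Key> values from an S3 ListBucket XML response.
function extractKeys(xml) {
  const keys = [];
  const re = /<Key>([^<]+)<\/Key>/g;
  let match;
  while ((match = re.exec(xml)) !== null) {
    keys.push(match[1]);
  }
  return keys;
}

// Example fragment of a ListBucket response:
const sample =
  '<ListBucketResult>' +
  '<Contents><Key>index.html</Key></Contents>' +
  '<Contents><Key>docs/report.pdf</Key></Contents>' +
  '</ListBucketResult>';

console.log(extractKeys(sample)); // [ 'index.html', 'docs/report.pdf' ]
```

The real scripts then render those keys as links on the listing page; the important part for this story is where that XML gets fetched from.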

Simple. Right?


The listing page loaded, but it didn't show any of the content that I knew had been uploaded to the bucket. I went back through everything I had done, checking for the mistake, and couldn't find anything wrong. I dug through the permissions, thinking that perhaps something was off there. Nothing. So I started decomposing the script to see if perhaps something in it was not working correctly. I found the following line, where it pulls the list of available content:

http.open('get', location.protocol+'//'+location.hostname);

Aha! So I tried browsing directly to assets.mysite.com, and all I got was a 404 error. Thinking at that point that configuring things as a website might be what caused the problem, I went back in and disabled the website hosting feature. I tried again and got a 404 error again. That was obviously not the issue.

Eventually, after poking around for a while, I stumbled across a different endpoint: assets.mysite.com.s3.amazonaws.com. This returned the XML that I was expecting! Armed with this new information, I went back and modified the line in the script that was reaching out for the XML to:

http.open('get', 'http://assets.mysite.com.s3.amazonaws.com');

Surely this was going to solve my problem … or not.

This time I got a 405 error. I immediately started googling and digging through the documentation, and discovered that I was running into the browser's same-origin policy: a page served from assets.mysite.com needs explicit permission to make a cross-origin request to assets.mysite.com.s3.amazonaws.com. Armed with a little more information, I ran back into the S3 management console and started looking at the options available to me in the permissions tab. I clicked the "Edit CORS Configuration" button and edited the rules to include:
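The original configuration did not survive in my notes, but a minimal CORS rule for this setup looks like the following sketch, in the XML format the classic S3 console used (the origin should match however your listing page is actually served):

```xml
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>http://assets.mysite.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>
```

This tells S3 to answer cross-origin GET requests from the listing page with the CORS headers the browser is looking for.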


Now when I go to the listing page, I see all of my files!  Mission successful.

Hopefully this can help some of you out. A coworker and I tore (what's left of) our hair out for a while going through all of the motions on this.