Ultimate Guide to Hosting Gigapixel Panoramas
This article is simply about hosting. For information on the art of creating panoramas and 360° photography check out these articles:
The Ultimate Guide to Panoramic Photography,
Levelling Base for Panoramic Photography,
The Ultimate 360 Photography Guide
Best Software for 360 Photography
Setting up a Web Server on Amazon S3 and using CloudFront to cache content
Hosting gigapixel panoramas has been an issue for photographers for as long as the format has existed. You can use an online platform like 360Cities or Kuula to host virtual tours, but you are entirely reliant on the platform staying in business, and most will not host your panoramas. The WordPress plugin Panopress hasn’t been updated for six years, and while you could use Zoomify to enable self-hosted panoramas, it doesn’t support 360° images. Not the most satisfactory situation!
I tried a variety of online hosts before becoming frustrated with the limitations. Eventually I decided to do it myself and documented the entire process to make it easier for non-technical folk to tackle.
I decided to use Amazon S3 to host the images and CloudFront to cache them.
Setting up a website with Amazon S3 and CloudFront is not for everyone. I came to this with a specific requirement: to host very large media in the form of panoramic photographs split into thousands of “tiles” for quick transfer and rendering. I’m not using S3 to host my website, simply to host images, but it is perfectly possible to host a whole website on S3 and use CloudFront as a CDN to deliver content quickly to remote clients.
When you use CloudFront as a proxy, your content is served from the CloudFront server nearest to the location of the request. But first you need to set up your site in Amazon S3. This tutorial explains how to set up Amazon S3 to host a website, how to set up CloudFront as a content distribution proxy for that website, and how to configure an SSL certificate for the content.
Why use Amazon S3 and CloudFront
Ease of Use
I’m sorry, but that was intended to be ironic. WordPress hosting is way easier on almost any other platform!
Cost
Most web hosting companies charge a regular fee based on space or inodes (the number of files stored). Amazon charges for storage and data transferred, which makes it cheaper than most web hosting companies for certain types of content – especially panoramas, where the volume is high and the number of objects served runs into the thousands for a single panorama.
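To get a rough feel for what that means in practice, here is a back-of-the-envelope sketch in Python. Every figure in it is an assumption for illustration only (prices vary by region and change over time), so check the current AWS pricing pages before budgeting.

# Illustrative estimate only: the prices and traffic figures below are assumptions, not quotes.
storage_gb        = 1.0    # one tiled gigapixel panorama, roughly 1 GB of tiles
views_per_month   = 200    # assumed traffic
gb_per_view       = 0.05   # a viewer rarely loads every tile; assume ~50 MB per visit

price_storage_gb  = 0.023  # USD per GB-month, approximate S3 Standard rate
price_transfer_gb = 0.09   # USD per GB transferred out, approximate rate

monthly_cost = (storage_gb * price_storage_gb
                + views_per_month * gb_per_view * price_transfer_gb)
print(f"Estimated monthly cost: ${monthly_cost:.2f}")  # about $0.92 on these assumptions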
What is happening in Plain English
If we describe a website in the simplest possible terms, it consists of a set of pages, often linked together, containing text and images.
The simplest possible website in code, looks like this:
<!doctype html>
<html>
  <head>
    <meta charset="UTF-8">
    <title>index.html</title>
  </head>
  <body>
    <p>Welcome to my fantastic new website!</p>
  </body>
</html>
This is the raw HTML that is either written by hand or generated by a content management system like WordPress, served by a web server, and converted by the browser into something more palatable to the human eye.
All websites work like this: the server serves HTML-formatted code, and the browser interprets that code and renders it as a visual page. Panoramas work in exactly the same way; the original image is broken up into tiles and “wrapped” in an HTML5 framework that is, to all intents and purposes, a website. A panorama exported from tiling software typically consists of an index page, some JavaScript, and a folder containing thousands of small tile images, and it is that whole folder structure you upload.
When you install code on Amazon S3, Amazon provides the web server and serves that content in response to a request from a browser. The domain name (a human-readable name such as mygreatcompany.com) is translated into a reference to your space on the Amazon server by your DNS record.
If you have a complex website with a lot of content that is read by people all over the world, it makes sense to use a Content Distribution Network (CDN) as a proxy to serve that content from a node close to the origin of the request. If I have an audience in San Francisco, for example, and I am based in Spain, it will take longer for my web pages to reach San Francisco from Europe than it would from a server in the United States.
We will use Amazon CloudFront as a proxy to serve our content without the request going all the way to our S3 installation. CloudFront periodically refreshes its cache from the S3 installation in order to keep content up to date.
Amazon S3
Terminology
Bucket – This is your webspace. You can create multiple buckets for different purposes but for the purpose of creating a simple website or serving a panorama, we need just one.
Object – in the Amazon world, every file you upload (a page, an image, a tile) is an Object.
Endpoint – the URL that Amazon assigns to your bucket, through which browsers reach your content.
The S3 Dashboard
The first task is to create an account with Amazon. Don’t use your shopping ID – create a new one specifically for the purpose of using Amazon’s technical services.
Open the Amazon S3 website in your browser.
Click on the Get Started button.
Create a new account. There are several proof of identity steps that you’ll need to go through, so have your mobile phone to hand and an email client.
Once you have created your account, log on to the console. Be sure to select Root user.
This is the web console. Click on S3 in the Recently Visited section.
Click on Create Bucket
Creating a Bucket for Content
This page has several important settings that you need to fill out carefully. The ones to watch out for are:
- General configuration
- Block Public Access
- Bucket versioning
- Default encryption
- Advanced settings
Bucket Name – use something simple and recognisable. Think of it as a directory name. Choose an AWS region that suits your location.
Scrolling down the page, leave ACLs disabled under Object Ownership, turn off Block Public Access, and tick the acknowledgement box beneath the “Turning off block all public access” warning – you want people to see the website. Bucket Versioning remains disabled for our purposes, as does Default Encryption.
In Advanced settings, check that Object Lock is disabled
Now click on Create Bucket
You’ll be taken to a screen similar to the one above.
Now that you have your bucket, we can set permissions and fill it with content.
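If you prefer to script these steps rather than click through the console, the same bucket setup can be done with the AWS SDK for Python (boto3). This is a minimal sketch, not the method used in this guide; the bucket name and region are placeholders, and it assumes your AWS credentials are already configured on your machine.

import boto3

s3 = boto3.client("s3", region_name="eu-west-1")

# Create the bucket (outside us-east-1 a LocationConstraint must be supplied)
s3.create_bucket(
    Bucket="my-panorama-bucket",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)

# Turn off "Block Public Access" so the public-read policy added later can take effect
s3.put_public_access_block(
    Bucket="my-panorama-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)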
Setting Permissions for Web Hosting
You’ll notice in the image above that it says “Objects can be public” in the Access column. The default setting is that they are not public, so we need to change this.
Click on the name of your bucket and choose “Properties” on the next screen.
Scroll down this page to the bottom and you’ll see Static Website Hosting – click on Edit.
You will see this dialogue
Enable Static Website Hosting and make sure Hosting Type is set to Host a static website. Set the index document to index.html, then save your changes. You will be returned to the Bucket Properties page and, if you scroll down, you’ll see that static website hosting is enabled and you have been assigned an endpoint.
The Bucket website endpoint is the URL that identifies your bucket within the Amazon universe.
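For those scripting the setup, the equivalent step with boto3 is a single call. A minimal sketch, with the bucket name as a placeholder:

import boto3

s3 = boto3.client("s3")

# Enable static website hosting and set the index document, as done in the console above
s3.put_bucket_website(
    Bucket="my-panorama-bucket",
    WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
)

# Read the configuration back to confirm it took effect
print(s3.get_bucket_website(Bucket="my-panorama-bucket"))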
The next job is to explicitly set your access policy. Click on Permissions in the main menu.
We’re going to provide a policy in the Bucket Policy window.
Press “Edit” opposite the Bucket Policy. We need to supply a policy that allows any member of the public to load any object (page or image) contained in the bucket. This is the code you’ll need. Don’t change the date – it indicates a policy version to Amazon. The Resource line does need to be changed so that it matches your own Bucket ARN.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::helterskeltertraining/*"
            ]
        }
    ]
}
The Bucket ARN is shown in the edit screen, just above the window.
Save your changes. If you get an error, it is most likely the ARN – that line needs to match your own Bucket ARN exactly.
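Again, if you are scripting rather than using the console, the same policy can be attached with boto3. A minimal sketch; the bucket name (and therefore the ARN) is a placeholder you must change to your own.

import json
import boto3

# The same public-read policy as above, expressed as a Python dictionary
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:GetObject"],
        "Resource": ["arn:aws:s3:::my-panorama-bucket/*"],  # must match your own bucket ARN
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket="my-panorama-bucket", Policy=json.dumps(policy))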
Uploading Your Website
Now, you can upload your content into the bucket!
Using the breadcrumbs at the top of the screen go back to your bucket.
Click on the Upload button.
You can add files or folders here. I use the Drag and Drop area as I only have one file in this example. Note that although the filename appears in the list, it is not uploaded until you press the Upload Button!
Now you should have a website or panorama, ready to be served by AWS.
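The console uploader is fine for a handful of files, but a panorama split into thousands of tiles is far easier to push from a script. Here is a minimal boto3 sketch; the local folder name and bucket name are placeholders, and the exact folder layout will depend on your tiling software.

import mimetypes
from pathlib import Path
import boto3

s3 = boto3.client("s3")
bucket = "my-panorama-bucket"
local_root = Path("panorama_export")  # the folder produced by your tiling software

# Walk the local folder and upload every file, preserving the folder structure as the object key
for path in local_root.rglob("*"):
    if path.is_file():
        key = path.relative_to(local_root).as_posix()
        content_type = mimetypes.guess_type(path.name)[0] or "binary/octet-stream"
        s3.upload_file(str(path), bucket, key, ExtraArgs={"ContentType": content_type})
        print("uploaded", key)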
Testing
To test, go back to the Properties screen, scroll down to Static Website Hosting and click on the URL – if all the settings are correct, your page should display in your browser.
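You can also check from a script if you prefer. A quick sketch; the URL is a placeholder, so substitute the bucket website endpoint shown on your Properties page.

from urllib.request import urlopen

# Fetch the index page from the S3 website endpoint and confirm it responds
url = "http://my-panorama-bucket.s3-website-eu-west-1.amazonaws.com/"
with urlopen(url) as response:
    print(response.status)      # expect 200
    print(response.read(200))   # the first few bytes of index.html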
Updating your DNS Record
You probably don’t want to use the Amazon formatted URL if you already own a domain. To use your own domain, you need to change a DNS setting.
There are two types of DNS record commonly used to point a name at a server: A records and CNAME records. An A record points to an IP address. A CNAME record provides an alias for another hostname and can therefore point to a different server. Well-known examples are mail.yourdomain.com and ftp.yourdomain.com.
We need to set up a subdomain with the same name as your bucket, and then provide the Amazon-generated URL as the destination.
So create a new CNAME record. Hosts differ on exactly how this is done, but if you enter the name of your bucket as the name, CNAME as the type and your Amazon URL as the record value, the system will fill out the rest of your domain in the Name field. This record reroutes requests for the subdomain to Amazon. DNS changes can take anywhere between an instant and a day to trickle through, in my experience, so this may not work immediately. Test with your new URL – e.g. http://bucketname.yourdomain.com
If you are hosting the whole website on Amazon, rather than just a subdomain, you will need to set up a redirect instead.
CloudFront
CloudFront is Amazon’s Content Distribution Network, and if your site contains a lot of content that is of interest to a global audience then it may make sense to use the CDN to cache content in locations closer to your audience.
Go to the Services tab at the top left of your Amazon screen and choose CloudFront from the dropdown. If it isn’t there, simply type CloudFront into the search box.
Recap in Plain English
At this point you have created your website in Amazon S3. If your audience is geographically distant from your site’s hosting (you chose the host geography at the beginning of the S3 process) then you may benefit from using a CDN.
Caveat – if your traffic is low, a CDN may actually add lag, because the content kept in the cache will expire between user requests. In that case the CDN has to go back to S3 to retrieve the content, so all you have achieved is to add an extra hop to the request/serve process.
Terminology
Distribution – A distribution represents your part of the CDN cache. It is mapped onto your Amazon S3 bucket.
Creating a Distribution
In CloudFront, click on the Create a CloudFront distribution button.
You’ll be taken to a very long form that we’ll step through in sections.
The Origin Domain can be selected from the dropdown in that field. Simply move your cursor to the empty field and click; it will list the buckets that you have created.
The Origin Path need only be filled out if you have your content inside a subdirectory of the bucket. For example if your bucket contains a single file called index.html, leave this field blank. If it contains a directory at the top level, containing your index.html then enter the name of the directory here.
Ignore Add custom header and leave Enable Origin Shield disabled.
The next section is Default Cache Behaviour.
Viewer Protocol Policy should be set to Redirect HTTP to HTTPS.
Allowed HTTP Methods should be GET, HEAD
Restrict Viewer Access should be left set to No for a public site like this one; setting it to Yes requires every request to carry a signed URL or signed cookie.
Scroll down to Settings
Choose your Price Class – this should be intuitive; the first option, which uses all edge locations, is the most expensive.
Click on Request Certificate to get a new SSL Certificate.
Alternate Domain Names should include all the variants you may need to secure. For example, bucketname.yourdomain.com and *.bucketname.yourdomain.com would secure bucketname.yourdomain.com itself and any subdomain created under it.
Default Root Object
Enter index.html as the name of the root object (so that users don’t have to include it in the URL).
Setting up an SSL Certificate
The request for an SSL Certificate is straightforward. For clarity, this certificate encrypts traffic between the browser and CloudFront.
Choose public certificate and in the next screen, DNS validation.
The system will then give you the data you need to create a new CNAME record – this is simply for the purpose of validating your ownership of the domain and will look very obscure. Copy and paste it into your DNS settings as presented.
Once DNS has propagated, the system will validate your SSL certificate. Mine took about 5 seconds but, depending on where your DNS is hosted, it may take up to a couple of days.
Once the certificate has been issued, the Distribution can be published. CloudFront will issue you with a new URL ending in cloudfront.net; this needs to replace the S3 value in your existing DNS.
Updating your DNS Record (again)
Go to the CNAME record that you created in the S3 setup and substitute the cloudfront.net value for the S3 value against your subdomain. This has the effect of pointing all requests for bucketname.yourdomain.com to Cloudfront instead of S3. You have already told CloudFront your bucket details so it will go off and populate the cache and serve your content from there instead of S3.
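One practical note: CloudFront will keep serving its cached copy of a file until the cache expires, so if you replace files in the bucket and want the change visible straight away, you can create an invalidation. A minimal boto3 sketch; the distribution ID is a placeholder (yours is shown in the CloudFront console), and invalidations beyond the free monthly allowance carry a small charge.

import time
import boto3

cloudfront = boto3.client("cloudfront")

# Ask CloudFront to discard every cached object under this distribution
cloudfront.create_invalidation(
    DistributionId="E1234567EXAMPLE",
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        "CallerReference": str(time.time()),  # any unique string per invalidation request
    },
)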
Setting up a website with Amazon S3 and CloudFront is actually quite straightforward once you know the terminology. We used a trivial example here, but the steps to deploying a more complex website or a panorama with an HTML 5 wrapper are identical.
What if You already use a CDN?
If you already use a CDN but want to use Amazon S3 to host large media, then you can create the CNAME record in your CDN DNS settings. Remember that when you set up a CDN with a conventional web host you typically swap the nameservers to those of the CDN and so changing the DNS records at the hosting level won’t work. Instead, carry out the identical process at CDN level.
Here is a link to a panorama I prepared for a series recording the Lecrin Valley, south of Granada. I used S3 and CloudFront exactly as I have set out here: the website is hosted separately from the panoramas, and I use Quic.cloud as a CDN for the website and CloudFront to serve the panos. All DNS alterations were therefore done in Quic.cloud. The original file for a single panorama is around 850 MB – almost impossible to serve from a conventional web host.
Conclusion
Setting up a website with Amazon S3 and CloudFront is certainly a job for the technically able, and it is not remotely suitable for small, easy-to-maintain websites. However, it works very well for the use case of serving large media such as gigapixel panoramas.