
High Performance Serverless Web


Introduction

The section below is a bit of a rant, so feel free to skip it if you want to get right to the valuable stuff.

Slight rant

So, servers… You know, those expensive things that sit in some dark, loud room and emit intimidating sounds and lights? Well, as it turns out, servers are becoming a relic of the past, trounced by the ever-evolving, connected, functionality-delivery platform we know today as the Internet. Didn’t the Internet birth the server? Well, sure. But they were originally mutually exclusive concepts, ultimately set on a collision course of massive technical, economic, and social proportions.

Why am I droning on about servers? Because it’s important to understand what they were originally built to do in order to understand what they’re currently useful for and where they may not actually be applicable - as a concept - at all. After all, the faceless, expensive, complex box we call a server doesn’t typically come with any valuable functionality out of the box. It has to be configured, deployed, connected and nurtured just to resemble the foundation of something usable. Then software has to be written, tested, deployed and maintained to deliver the valuable part: a social networking site, a financial account management app, an IoT analytics platform, etc. So, if the value is the actual functionality and availability of what’s delivered, then concepts like servers, networking, security, and software development look a lot more like speed-bumps on the highway of social, economic, and relational prosperity.

Does this mean servers aren’t important? Not so! In fact, they’re very important; as important as the electricity that powers them. However, modern society and innovation are propelled by abstractions. While you once may have needed to be a mechanic to drive a Model-T, you certainly don’t need to be an electrician, much less a mechanic, to drive an electric car. That’s because technology has abstracted away the complexity of converting energy into kinetic propulsion, just like servers (and a bunch of other stuff) abstract away the conversion of electricity into your banking platform. If we were all required to know everything in order to do anything, nothing would ever get done.

NOW we can finally understand why concepts like Infrastructure-as-a-Service or serverless applications become such attractive, catchy, and valuable patterns. They represent the smooth highways of progress, abstracting away the speed-bumps. So, it’s no surprise that “serverless” deployments are one of the new hotnesses in tech.

Alright, so all of this may go without saying, but I feel the need to say it anyway: the foundation of anything that becomes an abstraction has to be solid, flexible, and evolvable. If not, it’s like building a structure on sand… pointless.

What is Serverless?

Serverless doesn’t actually mean that no servers are involved. In fact, quite the opposite: many distributed servers are at play to deliver serverless applications. The concept essentially takes the commonalities of all web-delivered, software-based functionality and abstracts them away.

In the AWS paradigm, serverless concepts are represented by AWS Lambda, AWS S3, and other infrastructure-related services that don’t require direct management of virtual servers (much less the racked-and-stacked physical kind).

In this post, I’m going to cover content delivery, specifically the delivery of bitform.at and the value it embodies, by use of Hugo and AWS.

Hugo

Hugo is a static website generator written in the ever-so-popular Go programming language. It’s similar to Jekyll (written in Ruby) but, in my opinion, is one of the easiest to use and most extensible static website generators available.

For those not familiar with the concept of static website generation: it’s essentially a program that takes content organized in a manner optimized for the writer and renders it into HTML, CSS and JavaScript that can be served by the simplest of web-connected services: a file system.
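
For a concrete sense of that workflow, here’s a minimal sketch of the usual Hugo commands (the site and post names below are just examples):

```bash
# Scaffold a new site and a first post (names are placeholders).
hugo new site my-site
cd my-site
hugo new post/hello-world.md

# Preview locally with live reload while writing.
hugo server

# Render the whole site; static output lands in ./public.
hugo
```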

Additionally, Hugo is fast, lightweight, and has a growing community of supporters who have catapulted it into a close second place behind Jekyll, the incumbent static site generator.

S3 Static Website Hosting

AWS S3 provides a fantastic static-website configuration feature for content buckets. It’s simple:

  1. Create a bucket named after the domain of your static website
  2. Enable the static website hosting feature
  3. Upload your static content to the bucket
  4. Make sure the content is publicly readable (a CLI sketch of these steps follows)
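
A minimal sketch of those steps with the AWS CLI might look like the following (the bucket name, region, and the use of a bucket policy for public reads are assumptions on my part):

```bash
# Create a bucket named after the domain (name and region are placeholders).
aws s3 mb s3://bitform.at --region us-east-1

# Enable static website hosting on the bucket.
aws s3 website s3://bitform.at \
  --index-document index.html --error-document 404.html

# Upload the rendered site.
aws s3 sync ./public s3://bitform.at

# Make the content publicly readable via a bucket policy.
aws s3api put-bucket-policy --bucket bitform.at --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::bitform.at/*"
  }]
}'
```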

These simple steps make serving static HTML, CSS and JavaScript with S3 trivial. You can optionally put it behind your own domain:

  1. Using Route 53 as your DNS service, create or update a DNS A record for your domain name (e.g. bitform.at) and set the value to be an alias. If you do this through the AWS Console, your static-website-enabled S3 bucket will automatically show up as an option in the alias target dropdown (a CLI sketch follows).
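
If you’d rather do that from the command line, here’s a hypothetical sketch (the hosted zone ID is a placeholder; the alias target’s hosted zone ID and endpoint are the region-specific S3 website values, shown here for us-east-1):

```bash
# Upsert an alias A record pointing the apex domain at the S3 website endpoint.
aws route53 change-resource-record-sets \
  --hosted-zone-id ZXXXXXXXXXXXXX \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "bitform.at",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "Z3AQBSTGFYJSTF",
          "DNSName": "s3-website-us-east-1.amazonaws.com",
          "EvaluateTargetHealth": false
        }
      }
    }]
  }'
```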

Then, if you want to enable TLS support so your site can be served over an encrypted HTTPS connection (which you do, because the web should be encrypted), you can serve your S3 content through AWS’s CloudFront CDN:

  1. Create a web distribution
  2. Set the S3 bucket as the origin for the content (make sure you use the bucket’s static website URL copied from the S3 bucket’s settings, not the URL that pops up in the dropdown)
  3. Redirect all HTTP to HTTPS and use a certificate, either one you uploaded or one created with AWS Certificate Manager, which is free (a request sketch follows)
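
If you go the ACM route, requesting a certificate is a one-liner; note that certificates used with CloudFront must live in us-east-1 (the domain names below are just examples):

```bash
# Request a free, DNS-validated public certificate for the domain.
aws acm request-certificate \
  --region us-east-1 \
  --domain-name bitform.at \
  --subject-alternative-names "www.bitform.at" \
  --validation-method DNS
```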

Now you have a fully encrypted, statically served, serverless website. I’ve oversimplified some of the above steps, but there are already some good tutorials out there on how to create static websites on AWS using S3, CloudFront, Route 53, and ACM, so I won’t re-hash the interwebs. However, if you’d prefer I do a full post on this, just reach out to me on Twitter @davidsulpy.

CodePipeline

CodePipeline is an AWS offering that lets you build orchestrated delivery pipelines. The fantastic thing about CodePipeline is that, like nearly all other AWS services, you can describe it with CloudFormation templates just like the rest of your code and infrastructure.

Until recently, I found CodePipeline to be nearly unusable because I would inevitably need some kind of transformation step between the source of my code and the delivered output. That transformation step often didn’t lend itself well to AWS Lambda due to limitations of the Lambda runtime. In my research, I’d found a few projects and examples that leveraged AWS Lambda in non-traditional ways to fill this gap in CodePipeline. Luckily, like most roadblocks on AWS, it didn’t last as a roadblock for long: AWS formally announced CodeBuild, which offers at least a feature-complete MVP for a pipeline to take instructions (e.g. code) and deliver functionality (e.g. running software).
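
As a rough illustration (the template file name and stack name are placeholders), a pipeline described in a CloudFormation template can be stood up or updated with a single CLI call:

```bash
# Create or update the delivery pipeline stack from a template.
aws cloudformation deploy \
  --template-file pipeline.yml \
  --stack-name blog-delivery-pipeline \
  --capabilities CAPABILITY_NAMED_IAM
```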

In this project in particular, I can now use a serverless delivery process to continuously deliver a serverless application.

Here is a basic outline of my CodePipeline flow:

  1. Get source from GitHub
  2. Send source to CodeBuild, which (as sketched below):
     a. Compiles the source files into static content using hugo
     b. Uploads the compiled output to the static S3 bucket using the AWS CLI: aws s3 sync <local-stuff> s3://<static-bucket>
     c. Creates a CloudFront invalidation to purge old content from the CDN edge cache: aws cloudfront create-invalidation --distribution-id <CF_DISTRO_ID> --paths "/post/*" "/index.html" "/sitemap.xml"
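
Here’s a hypothetical sketch of the commands that CodeBuild step runs (the bucket name and distribution ID variable are placeholders; in practice these commands live in a buildspec.yml):

```bash
# Hypothetical CodeBuild commands; bucket name and distribution ID are placeholders.
set -e

# Render the site into ./public (assumes hugo is available on the build image).
hugo

# Sync the rendered output to the static-website bucket.
aws s3 sync ./public s3://bitform.at --delete

# Purge stale content from the CloudFront edge caches.
aws cloudfront create-invalidation \
  --distribution-id "$CF_DISTRO_ID" \
  --paths "/post/*" "/index.html" "/sitemap.xml"
```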

And that’s it! Most of the work is accomplished in a single CodeBuild step, which launches an Ubuntu-based container to execute my commands. This gives me the freedom to do all sorts of command-line build tasks. It also lets me follow credential best practices: the CodeBuild container runs under an execution role, created in my CodePipeline CloudFormation template, whose custom IAM policies restrict access to the project-specific resources. CodeBuild’s under-the-hood credential management then ensures I always have fresh, limited-scope, time-boxed credentials for operations on my AWS resources.
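
As an illustration of that scoping (the role name, policy name, and ARNs are placeholders, and in this setup the role and policy are actually defined in the CloudFormation template rather than created by hand), a minimal policy for the build role might look like this:

```bash
# Hypothetical inline policy for the CodeBuild execution role;
# names and ARNs are placeholders.
aws iam put-role-policy \
  --role-name blog-codebuild-role \
  --policy-name blog-deploy-scope \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["s3:ListBucket", "s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
        "Resource": ["arn:aws:s3:::bitform.at", "arn:aws:s3:::bitform.at/*"]
      },
      {
        "Effect": "Allow",
        "Action": "cloudfront:CreateInvalidation",
        "Resource": "*"
      }
    ]
  }'
```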

Continuous Delivery

Because I’m using CodePipeline’s source action in my delivery pipeline, it triggers off commits to a branch of my choosing. This way, any time I push a new commit to the repository housing my source content, a new build is automatically kicked off: the source is built into static content, deployed to S3, and made available through a worldwide CDN with geographically distributed redundancy. All of this without the burden of managing a huge network of servers!

This is basically giving me continuous delivery for free! Another benefit is that, because both the delivery and the content serving are serverless (from my perspective), I’m only paying for what I or my content consumers actually use and nothing more. Perfectly optimized.

What’s next?

Next, I’ll be detailing a little more about optimizing Hugo’s output with additional build tasks that handle things like image compression and responsive images, as well as content minification and compression. This is something I’m actively working on as I continue to optimize this new site.

Questions or comments? Feel free to reach out to me @davidsulpy or, for questions specific to blog content, @sv_bitformat.