
Super-fast deploys using AWS and ELBs

At PipelineDeals, we deploy code frequently, usually 2-3 times per week and sometimes more often. As all web application developers know, deploying is a nervous process. Sure, 99.99% of the time everything goes perfectly smoothly: all your tests pass, the deploy to staging went perfectly, every ticket has been verified. There is no reason to fear hitting the button. And the vast majority of the time, that holds.

But all web application developers also know that sometimes there is a snag. Sometimes the fates are against you, and for whatever reason something goes bust. Perhaps your deploy is a series of sequenced events, and one of them silently failed because the volume /tmp is mounted on was temporarily 100% full. Perhaps the upload of the new assets to S3 did not work. Perhaps you did not deploy to all the servers you needed to.

And then, the worst happens. For a short period while you are scrambling to revert, your customers see your mistake. They start questioning the reliability of your system. Your mistake (and it is yours, even if some bug in some server caused the problem) is clearly visible to your customers, your bosses, and your peers.

Taking advantage of the tools we have

PipelineDeals runs on Amazon AWS. We use EC2 for our server instances, ELB for load balancing, and ElastiCache for our memcached storage. We are also major proponents of Opscode’s Chef, and use it to spin up and configure every type of instance in our stack.

Since we have all these fantastic tools, we decided to use them in a way that makes deploying seamless and easy. We wrote a simple Rakefile called Deployer that orchestrates a seamless app server deploy.

Using the Deployer script

The first thing it does is create new app servers running the new code. Once those servers have finished their configuration, the Deployer registers them with a test ELB load balancer.
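A minimal sketch of this first phase. The post does not show the Deployer's internals, so `provision_app_servers` is a hypothetical stand-in for the EC2 + Chef provisioning step, and `TestLB` is a tiny in-memory stand-in for the test ELB, so the sketch runs anywhere:

```ruby
# Sketch of phase 1: bring up new app servers, then register them
# with a test load balancer. TestLB and provision_app_servers are
# stand-ins; the real Deployer drives EC2/Chef and the ELB API here.

class TestLB
  attr_reader :instances

  def initialize
    @instances = []
  end

  # Register instance ids with this load balancer (idempotent).
  def register(ids)
    @instances |= ids
  end
end

# Pretend to spin up `count` freshly configured app servers running
# the new code, returning their instance ids.
def provision_app_servers(count)
  (1..count).map { |n| "i-new-#{n}" }
end

test_lb = TestLB.new
new_servers = provision_app_servers(2)
test_lb.register(new_servers)
puts test_lb.instances.inspect  # => ["i-new-1", "i-new-2"]
```

At this point the new servers sit behind the test LB only, so they can be smoke-tested before they ever see production traffic.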

Phase 1, new app servers are brought up and registered with the test LB.

From there, we can do a final walkthrough of exactly what is going into production and confirm that the new app servers are up, awake, and ready to receive requests.

After that final validation, we simply run rake deploy, which adds the new app servers to the production load balancer, verifies their health, then removes the old app servers from it. The whole swap runs in about 3 seconds, so the transition is smooth and seamless.
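The ordering is what makes this zero-downtime: new servers are registered and verified healthy before the old ones are pulled out. A hedged sketch of that swap, again with an in-memory stand-in rather than the real ELB API:

```ruby
# Sketch of the rake deploy swap: register the new servers with the
# production LB, verify health, then deregister the old servers.
# FakeProdLB is an in-memory stand-in for the ELB API.

class FakeProdLB
  attr_reader :instances

  def initialize(instances)
    @instances = instances
  end

  def register(ids)
    @instances |= ids
  end

  def deregister(ids)
    @instances -= ids
  end

  # The real Deployer would poll the LB's instance-health endpoint
  # until every new instance is in service; here registered == healthy.
  def healthy?(ids)
    ids.all? { |id| @instances.include?(id) }
  end
end

def deploy!(lb, old_servers, new_servers)
  lb.register(new_servers)
  raise "new app servers never became healthy" unless lb.healthy?(new_servers)
  lb.deregister(old_servers)  # only after the new fleet is serving
end

prod_lb = FakeProdLB.new(%w[i-old-1 i-old-2])
deploy!(prod_lb, %w[i-old-1 i-old-2], %w[i-new-1 i-new-2])
puts prod_lb.instances.inspect  # => ["i-new-1", "i-new-2"]
```

Because both fleets are briefly registered at once, in-flight requests keep draining to the old servers while new requests start landing on the new ones.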

During the deploy, new app servers are added to the Prod ELB, then the old app servers are moved out.

If anything is wrong with our code, or it generates an error we did not expect, we can simply run rake rollback, which does the opposite.

Or, if we are completely satisfied that everything looks good, we can run rake cleanup, which tags the new app servers as the current production servers and terminates the old ones.
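The cleanup step can be pictured the same way. The hash and array below are in-memory stand-ins for the EC2 tagging and termination calls (CreateTags / TerminateInstances in the EC2 API); the instance ids are illustrative:

```ruby
# Sketch of rake cleanup: tag the new app servers as the current
# production fleet, then terminate the old ones.

tags = {}        # instance id => environment tag (CreateTags stand-in)
terminated = []  # TerminateInstances stand-in

old_servers = %w[i-old-1 i-old-2]
new_servers = %w[i-new-1 i-new-2]

# Mark the new servers as production first, so the next deploy
# knows which fleet to swap out...
new_servers.each { |id| tags[id] = "production" }
# ...then get rid of the old fleet.
terminated.concat(old_servers)

puts terminated.inspect  # => ["i-old-1", "i-old-2"]
```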


We originally designed the Deployer for launching large projects or risky chunks of code, but we have found ourselves using it for nearly every deploy because it is so easy.

If your company utilizes Chef, EC2, and ELB, check out the deployer. It might work great for your deployment workflow!
