Managing Chef Cookbooks for OpsWorks with Berkshelf
AWS OpsWorks makes it easy to deploy Chef cookbooks to configure instances. But without a Chef Server, managing a large collection of cookbooks across multiple projects can be a challenge.
In this post, we’ll look at an unorthodox solution that uses Berkshelf to bundle multiple cookbooks into a single custom-cookbooks tarball for use in OpsWorks.
In my current engagement, our use of OpsWorks has rapidly expanded from a single proof of concept into about 10 different environments, with more to come. We use OpsWorks extensively and have been extremely happy with it, but managing cookbook releases has become increasingly difficult. There are a few problems we want to avoid:
- Duplicated code. A cookbook written for one OpsWorks environment should be available to all environments.
- Version conflicts. If a cookbook is used in multiple environments, updates shouldn’t break existing deployments. The easiest way to prevent this is with version tags.
- External dependencies. Dependencies on public cookbooks in places like supermarket.io should be easy to manage.
- Packaging. Cookbooks from multiple sources should be easy to bundle into OpsWorks cookbook tarballs.
Berkshelf manages cookbook versions and dependencies. You can define these dependencies in a Berksfile at the root of your cookbook directory, and link them to multiple sources such as Supermarket, git repositories, or local files.
Here’s a sample of cookbook dependencies listed in a Berksfile:
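The sample below is illustrative; the cookbook names, git URLs, and version constraints are placeholders, but the `source`, `cookbook`, `git:`, `tag:`, and `path:` directives are standard Berkshelf DSL:

```ruby
source 'https://supermarket.chef.io'

# A public community cookbook, pinned to a version range
cookbook 'nginx', '~> 2.7.4'

# A cookbook from a git repository, pinned to a release tag
cookbook 'my_app', git: 'git@github.com:myorg/my_app.git', tag: 'v1.2.0'

# A cookbook from a local path, handy during development
cookbook 'my_base', path: '../my_base'
```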
In this sample, you can see Berkshelf picking up cookbook dependencies across a variety of different sources. One key design requirement of this solution is that each cookbook needs to be in its own repository. This is most likely how you’re already working.
By simply running `berks install` from our cookbook directory, Berkshelf will download the dependencies and save them locally.
When Berkshelf fetches these cookbooks, it saves them into ~/.berkshelf/cookbooks. Each cookbook is saved as cookbookname-version, and in the case of git tags, the version is the commit ID. As such, we end up with:
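An illustrative listing (the names follow the hypothetical Berksfile above; the commit hash is made up):

```
~/.berkshelf/cookbooks/
├── nginx-2.7.4/
├── my_app-a94a8fe5ccb19ba61c4c0873d391e987982fbbd3/
└── my_base-0.1.0/
```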
At this point I’ll confess to using Berkshelf in the wrong way. Berks is designed to operate within a given cookbook to resolve its dependencies. When you run `berks install` inside that cookbook, berks does all the work.
In AWS OpsWorks, you’re most likely deploying multiple cookbooks and you must package them all into a single monolithic .tar.gz file. I’m using Berks to achieve that, but it certainly isn’t the way Berkshelf was intended to be used.
To do this, I create an empty ‘cookbook’ that is nothing more than a placeholder that lists the cookbooks that I want to be included in my OpsWorks release package.
Dear Berkshelf Developers. Should you ever read this, please accept my sincere apologies for what I’ve done. But it works a treat, just the same!
To clarify, the folder structure looks like this:
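Something like the following, assuming the placeholder cookbook is named opsworks-cookbooks (the name is arbitrary):

```
opsworks-cookbooks/
├── Berksfile
└── metadata.rb
```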
metadata.rb is a basic Chef metadata file with name, version, license etc. Just copy this from one of your other cookbooks.
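A minimal sketch of such a metadata.rb; the name, maintainer, and version values are placeholders:

```ruby
name        'opsworks_release'
maintainer  'Your Name'
license     'Apache-2.0'
description 'Placeholder cookbook that pins the cookbooks bundled for OpsWorks'
version     '1.0.0'
```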
Berksfile is where the magic happens, and in my example looks like this:
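A hedged sketch of what that placeholder Berksfile might contain — the cookbook names, repository URLs, and tags below are illustrative stand-ins for whatever you want bundled into the release:

```ruby
source 'https://supermarket.chef.io'

# Cookbooks to bundle into the OpsWorks custom-cookbooks tarball
cookbook 'nginx', '~> 2.7.4'
cookbook 'my_app', git: 'git@github.com:myorg/my_app.git', tag: 'v1.2.0'
cookbook 'my_base', git: 'git@github.com:myorg/my_base.git', tag: 'v0.1.0'
```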
Creating an OpsWorks Release
Using Berkshelf in this fashion, creating an OpsWorks release from our ‘cookbook’ dependencies is a simple affair:
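The steps look something like this (the tarball name is arbitrary — OpsWorks just needs the cookbook directories at the root of the archive):

```shell
# Resolve dependencies and copy every cookbook (including the placeholder)
# into the berks-cookbooks/ folder
berks vendor

# Package the vendored cookbooks as a custom-cookbooks tarball for OpsWorks
tar -czf opsworks-cookbooks.tar.gz -C berks-cookbooks .
```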
`berks vendor` creates a cookbook release with Berkshelf. It copies all of the cookbook dependencies (and the cookbook itself) into the berks-cookbooks folder, which we can simply tar up for OpsWorks to consume.
I had trouble getting my head around Berkshelf at first. http://berkshelf.com has a good rundown of the commands, but the theory of why I would use those commands wasn’t so clear to me. It was a YouTube talk by one of the Berkshelf developers (Jamie Winsor), explaining the theory and ideas behind Berkshelf, that finally made things click for me.