21 Jul 2015

Golang pearl: Thread-safe writes and double-checked locking in Go

Channels should be your first choice for structuring concurrent programs in Go. Remember the Go concurrency credo– “do not communicate by sharing memory; instead, share memory by communicating.” That said, sometimes you just need to roll up your sleeves and share some memory. Lock-based concurrency is pretty old-school stuff, and battle-hardened Java veterans switching to Go will undoubtedly feel nostalgic reading this. Still, many brand-new Go converts probably haven’t encountered low-level concurrency primitives before. So let’s sit down and program like it’s 1999.

To start, let’s set up a simple lazy initialization problem. Imagine that we have a resource that is expensive to construct– it’s read often but written only once. Our first attempt at lazy initialization will be completely broken:
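
A minimal sketch of that broken first attempt– Resource stands in for the expensive object, and the listing is reconstructed from the description:

```go
package main

import "sync"

// Resource stands in for something expensive to construct.
type Resource struct{}

var instance *Resource

// GetInstance lazily initializes instance with no synchronization at all.
// Two goroutines can both observe a nil instance and both construct one,
// and nothing guarantees the write is ever visible to other goroutines.
func GetInstance() *Resource {
	if instance == nil {
		instance = &Resource{}
	}
	return instance
}

// Broken races two goroutines through GetInstance.
func Broken() {
	var wg sync.WaitGroup
	wg.Add(2)
	for i := 0; i < 2; i++ {
		go func() {
			defer wg.Done()
			GetInstance()
		}()
	}
	wg.Wait()
}

func main() {
	Broken()
}
```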

This doesn’t work, as both goroutines in Broken may race in GetInstance. There are many incorrect (or semantically correct but inefficient) solutions to this problem, but let’s focus on two approaches that work. Here’s one using read/write locks:
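
A sketch of the double-checked pattern using sync.RWMutex– a cheap read-locked check on the fast path, then a second check under the write lock:

```go
package main

import "sync"

// Resource stands in for something expensive to construct.
type Resource struct{}

var (
	instance *Resource
	lock     sync.RWMutex
)

// GetInstance double-checks: a read-locked check first, then a second
// check under the write lock before constructing.
func GetInstance() *Resource {
	lock.RLock()
	r := instance
	lock.RUnlock()
	if r != nil {
		return r // fast path: already initialized, only a read lock taken
	}
	lock.Lock()
	defer lock.Unlock()
	if instance == nil { // re-check: another goroutine may have won the race
		instance = &Resource{}
	}
	return instance
}

func main() {
	GetInstance()
}
```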

If you’re a Java developer you might recognize this as a safe approach to double-checked locking. In Java, the volatile keyword is typically used on instance instead of using a read/write lock, but since Go does not have a volatile keyword (there is sync/atomic, and we’ll get to that) we’ve gone with a read lock.

The reason for the additional synchronization around the first read is the same in Go as it is in Java. The Go memory model does not guarantee that the initialization of instance is visible to other goroutines unless there is a happens-before relation that makes the write visible. The read lock establishes exactly that relation.

Now back to sync/atomic. Among other things, the sync/atomic package provides utilities for atomically visible writes. We can use this to achieve the same effect as the volatile keyword in Java, and eliminate the read/write lock. The cost is one of readability– we have to change instance to an unsafe.Pointer to make this work, which is aesthetically displeasing. But hey, it’s Go– we’re not here for aesthetics (I’m looking at you, interface{}):
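
A sketch of the same pattern with sync/atomic– instance becomes an unsafe.Pointer, loaded and stored atomically:

```go
package main

import (
	"sync"
	"sync/atomic"
	"unsafe"
)

// Resource stands in for something expensive to construct.
type Resource struct{}

var (
	instance unsafe.Pointer // holds a *Resource; read and written atomically
	lock     sync.Mutex
)

func GetInstance() *Resource {
	// The atomic load gives us the visibility guarantee that volatile
	// provides in Java, with no read lock needed on the fast path.
	if p := atomic.LoadPointer(&instance); p != nil {
		return (*Resource)(p)
	}
	lock.Lock()
	defer lock.Unlock()
	if p := atomic.LoadPointer(&instance); p != nil {
		return (*Resource)(p) // another goroutine beat us to it
	}
	r := &Resource{}
	atomic.StorePointer(&instance, unsafe.Pointer(r))
	return r
}

func main() {
	GetInstance()
}
```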

Astute Gophers might recognize that we’ve re-derived a utility in the sync package called Once. Once encapsulates all of the locking logic for us so we can simply write:
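
With sync.Once, the whole thing collapses to something like:

```go
package main

import "sync"

// Resource stands in for something expensive to construct.
type Resource struct{}

var (
	instance *Resource
	once     sync.Once
)

// GetInstance delegates all the locking and visibility logic to sync.Once.
func GetInstance() *Resource {
	once.Do(func() {
		instance = &Resource{}
	})
	return instance
}

func main() {
	GetInstance()
}
```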

Lazy initialization is a fairly basic pattern, but once we understand how it works, we can build safe variations like a resettable Once. Remember though– this is all last-resort stuff. Prefer channels to using any of these low-level synchronization primitives.

25 Mar 2015

Golang pearl: It’s dangerous to go alone!


In Go, the panic function behaves somewhat like an unrecoverable exception: panic propagates up the call stack until it reaches the topmost function in the current goroutine, at which point the program crashes.

This is reasonable behavior in some environments, but programs that are structured as asynchronous handler functions (like daemons and servers) need to continue processing requests even if individual handlers panic. This is what recover is for, and if you inspect the source you’ll see that Go’s built-in HTTP server package recovers from panics for you, meaning that bugs in your handler code will never take down your entire HTTP server.

Unless, of course, your handler code spawns a goroutine that panics. Then your server is screwed. Let’s demonstrate with a trivial example:
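
Something like this (the handler and route names are illustrative):

```go
package main

import (
	"log"
	"net/http"
)

// hello panics on every request. net/http recovers the panic, logs it,
// and closes the connection– the server itself keeps running.
func hello(w http.ResponseWriter, r *http.Request) {
	panic("oh no!")
}

func main() {
	http.HandleFunc("/", hello)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```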

This server will happily chug along, panicking every time the root resource is hit, but never crashing.

But if hello panics in a goroutine, the entire server goes down:

In our web services, we can never really trust a naked use of go. We wrap all our goroutine creation in a utility function we call GoSafely:
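
A sketch of GoSafely– the logging details here are guesses, but the recover structure is the point:

```go
package main

import (
	"log"
	"runtime/debug"
)

// GoSafely spawns fn in a new goroutine with a recover guard, so a panic
// in fn is logged instead of crashing the process.
func GoSafely(fn func()) {
	go func() {
		defer func() {
			if err := recover(); err != nil {
				log.Printf("recovered panic in goroutine: %v\n%s", err, debug.Stack())
			}
		}()
		fn()
	}()
}

func main() {
	done := make(chan struct{})
	GoSafely(func() {
		defer close(done)
		panic("oh no!")
	})
	<-done // the panic was recovered; we get here instead of crashing
}
```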

Here’s how we use it:

Not as syntactically sweet as a naked go statement, but it does the trick. The unfortunate thing (which we don’t really have a solution for) is that any third-party code that spawns a goroutine could potentially panic– and we have no way of protecting ourselves from that.

LaunchDarkly helps you build better software faster with feature flags as a service. Start your free trial now.
24 Feb 2015

Packaging Go Microservices for AWS Deployment using CircleCI

Most of LaunchDarkly’s backend systems are written in Go. We have a microservice-based architecture, so we have about 10 distinct standalone binaries (either web services or asynchronous worker processes) that we deploy to AWS.

We have a small engineering team, so it’s important that we can package and deploy our code with minimal overhead. Out of the box, Go provides an amazing set of tools that make this manageable, but we found we needed to hunt around to fill in a few gaps in the toolchain. Here are the extra tools we use to get our Go code packaged and ready for deployment.

Repeatable builds with Godep

One of the first issues we saw with Go was that out of the box, the go tool doesn’t give you any way to version dependencies. This seems like a big oversight to me, but perhaps it makes more sense in a world where you have one monolithic repository.

At any rate, we have code in about 20 distinct repositories, so versioned dependencies and repeatable builds are essential for us. We use Godep to handle versioning in our Go builds.

Godep is fairly simple. godep save stores version metadata (for us, git SHAs) for your dependencies in a Godeps/Godeps.json file in your repository root. It also stores all your dependency sources in a Godeps/_workspace directory. You commit this entire directory structure, including the _workspace, into your repository. Instead of running go build, you compile with godep go build. When you want to update a dependency, do so in your normal GOPATH and then invoke godep update IMPORT_PATH. Simple and effective.
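
Condensed into commands, the workflow above looks like this (the import path in the update example is our own foundation repository, described below):

```shell
godep save ./...      # record dependency SHAs in Godeps/Godeps.json
godep go build ./...  # build against the vendored Godeps/_workspace
godep update github.com/launchdarkly/foundation/...  # pick up a new version
```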

One thing that wasn’t obvious to us was how to structure “library” packages to work best with go and godep. By that, I mean repositories that consist of several packages that aren’t necessarily tied together with a top-level “main” package. We have a repository that serves as our ubiquitous “bucket” of essential utility code. We call it foundation, and it has packages for things like epoch times and richer error types. The directory structure for foundation looks like this:
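
The listing is roughly this shape– the package names here are illustrative stand-ins for the epoch time and error packages mentioned above:

```
foundation/
├── ftime/       (epoch times)
│   └── ftime.go
└── ferror/      (richer error types)
    └── ferror.go
```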

There’s no go source in the top-level of the repository, so go build complains:
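
With no Go source at the top level, the complaint looks something like this (exact wording varies by Go version):

```
$ go build
can't load package: package github.com/launchdarkly/foundation: no buildable
Go source files in $GOPATH/src/github.com/launchdarkly/foundation
```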

The right thing to do is to point go to each of the package subdirectories:
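
The ./... wildcard matches every package at or below the current directory:

```
$ go build ./...
$ godep go build ./...
```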

Typing ./... is pretty thoroughly ingrained into our muscle memories now. We have to type it every time we build our foundation, as well as every time we want to do a wholesale update of all the foundation packages in a dependent repository: godep update github.com/launchdarkly/foundation/...

Cross compiling

Go compiles to machine code, so binaries are platform-dependent. We do all our local development on OS X, but for staging and production we need to build Linux targets. Again, reproducible builds are important to us– we want to be able to build our production artifacts for Linux on our OS X boxes if necessary. For that, we use a cross-compilation tool called goxc.

goxc is pretty straightforward to set up. We store our configuration in .goxc.json files in each of our repositories. For a simple worker or web service, our .goxc.json files look like this:
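
Ours looked roughly like the sketch below. The key names and template variables are reconstructed from memory and from the description that follows, so treat them as assumptions and double-check them against the goxc documentation:

```json
{
  "ConfigVersion": "0.9",
  "ArtifactsDest": "artifacts",
  "BuildConstraints": "linux,!arm darwin",
  "GOPATH": "{{.PWD}}{{.PS}}Godeps{{.PS}}_workspace{{.PLS}}{{.Env.GOPATH}}"
}
```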

This simple configuration runs the cross compiler and packages a .tar.gz archive of our binary. The BuildConstraints line uses Go’s build constraints notation– the line we use compiles for Linux and OS X, but disables Linux ARM (which we don’t need to target). The GOPATH setting took a bit of figuring out– goxc lets you specify a template format including variables like the current working directory (PWD), a platform-specific path separator (PS) and path list separator (PLS), etc. Our GOPATH setting points goxc to our Godeps directory and falls back to our environment-specified GOPATH. The Godeps directory alone might be preferable– ensuring that your build doesn’t depend on local copies of packages:
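
That stricter variant would drop the fallback and point only at the vendored workspace (again, the template syntax here is an assumption):

```json
"GOPATH": "{{.PWD}}{{.PS}}Godeps{{.PS}}_workspace"
```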

Another useful trick that we use is to pass a build-ldflags flag to goxc. We use this to inject a git SHA into our binaries in a Version variable. This makes it extremely easy to figure out *exactly* what version of a service is running at runtime. Versions for us are just git SHAs– we don’t bother with the overhead of maintaining semantic version numbers for our services. In our Go code, all we have to do is this:
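
The variable itself is just a package-level string with a default for local builds:

```go
package main

import "fmt"

// Version defaults to "DEV"; packaged builds overwrite it at link time
// via goxc's build-ldflags (-X main.Version <git sha>).
var Version = "DEV"

func main() {
	fmt.Println("running version:", Version)
}
```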

Once we’ve got this, we wrap up our goxc invocation into a small script called package.sh that sets the VERSION variable to our current git sha:
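
package.sh is only a couple of lines. The -build-ldflags flag is the one described above (the space-separated -X form is the pre-Go-1.5 syntax); the -d flag for the artifact destination is an assumption worth checking against goxc’s help:

```shell
#!/bin/bash
set -e

# Bake the current git SHA into main.Version.
VERSION=$(git rev-parse --short HEAD)

# Send archives to CircleCI's artifacts dir when running on CI.
goxc -build-ldflags="-X main.Version $VERSION" -d="${CIRCLE_ARTIFACTS:-artifacts}"
```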

When built locally with go build or godep go build, the Version is set to "DEV", but the packaged binary will overwrite the variable with a git SHA.

CircleCI setup

Notice how we set the destination directory for our archive based on the $CIRCLE_ARTIFACTS environment variable. We use CircleCI to build our binaries. The symlink trickery we do is necessary to make our “library” repositories build on CircleCI– by default, checked-out repositories on Circle don’t seem to be part of the GOPATH. Here’s what a basic circle.yml setup looks like for most of our workers:
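
Roughly like this, with the symlink step splicing the checkout into Circle’s Go workspace (the repo name and workspace paths are illustrative):

```yaml
machine:
  environment:
    REPO_PATH: github.com/launchdarkly/foundation

dependencies:
  pre:
    - go get github.com/tools/godep
  override:
    # Circle checks the repo out outside the GOPATH; symlink it in.
    - mkdir -p "$HOME/.go_workspace/src/github.com/launchdarkly"
    - ln -sf "$(pwd)" "$HOME/.go_workspace/src/$REPO_PATH"

test:
  override:
    - godep go test ./...
  post:
    - ./package.sh
```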

Uploading artifacts

We use ansible for our deploy scripts, and they’re set up to pull artifacts from S3. All of our Circle builds run the following upload_to_s3.sh script:
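
The original script isn’t shown here, so this sketch substitutes the AWS CLI for whatever upload tool we actually used; the S3_KEY / S3_SECRET variables and the service/bucket placeholders are as described:

```shell
#!/bin/bash
set -e

# Placeholders: substitute your own service and bucket names.
SERVICE=my-service
BUCKET=my-artifact-bucket
VERSION=$(git rev-parse --short HEAD)

# Credentials come from CircleCI environment variables.
export AWS_ACCESS_KEY_ID="$S3_KEY"
export AWS_SECRET_ACCESS_KEY="$S3_SECRET"

aws s3 cp "$CIRCLE_ARTIFACTS/$SERVICE-$VERSION.tar.gz" \
  "s3://$BUCKET/$SERVICE/$SERVICE-$VERSION.tar.gz"
```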

This script should be pretty reusable– change the service and bucket names, and enter your S3_KEY and S3_SECRET into CircleCI’s environment variables, and you’re good to go. We tell CircleCI to upload the artifacts as part of a “deployment” step (even though we don’t use CircleCI to actually deploy to EC2):
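
In circle.yml that’s a deployment block along these lines (this one runs for every branch):

```yaml
deployment:
  artifacts:
    branch: /.*/
    commands:
      - ./upload_to_s3.sh
```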

Again, you can customize this– for example, you may only want to push artifacts built from master to S3.


None of this was incredibly complex, and that’s a testament to how good the Go tooling is out of the box. Going from an empty repository to a new Go microservice running in production is remarkably easy. In fact, it’s so easy that we’ve automated it (well, everything except writing the actual service). We’ve created a giter8 template (open-sourced on GitHub) that sets up a simple Go program with goxc cross-compilation, CircleCI tests and artifact packaging, and S3 artifact upload.

Once you’ve configured your build on CircleCI (including setting the S3_KEY and S3_SECRET environment variables) and pushed your new repository to GitHub, you’ll see artifacts uploaded to your S3 bucket. From there, we use ansible scripts (which I haven’t covered in this post) to actually deploy artifacts onto EC2 instances.
