13 Oct 2015

Launched: Account Permissions

We’ve just launched account permissions! Team members can now be assigned more restrictive roles on your team management page. There are three roles:

  • Readers: Readers can see anything in LaunchDarkly, but can’t modify any data. This role is perfect for members of your organization who need visibility into your feature flags but shouldn’t be able to modify rollout rules or administer the system.
  • Writers: Writers can modify feature flags, goals, environments, and more. They can’t add new team members to the account or manage your payment method or plans.
  • Admins / owners: Admins and owners can do pretty much everything on the site. Owners can’t be removed from the account.


Over time, we’ll be making our permissions model more powerful. You’ll be able to place even finer-grained controls on your users, so you can lock down your production environment or restrict your Test environment to QA engineers if necessary.

02 Oct 2015

Launched: Flag Status Indicators

LaunchDarkly now provides live status indicators for your feature flags. Our dashboard shows you which flags are active, which are candidates for removal from your code, and which have already been deleted from your code and are safe to remove from the dashboard.


This is the first of many new features we’re building to help developers manage feature flags throughout their development process. Cleaning up feature flags is an essential part of the development workflow, and LaunchDarkly is taking steps to make this as easy as possible. Happy launching!

02 Sep 2015

Launched: .NET support

LaunchDarkly now supports .NET!

We’ve just released the first supported version of our .NET SDK. Like all our other SDKs (we support Ruby, Python, Node.js, Java, Go, and more), it’s open source: check out the code on GitHub. We’ve also published a brand-new .NET reference guide to help you dig in and integrate LaunchDarkly into your .NET application today.

Happy launching!


05 Aug 2015

Can we stop pretending that Docker is great for development environments?

tl;dr: It makes perfect sense to run backing services (databases, caching, storage, etc.) in Docker, but if you use a compiled language, Docker’s not the way to go for local development.

Every six months I try to use Docker for local development. I tried again recently, and found (once again) that the technology still isn’t ready. But this go-around I reached a further conclusion: for most development stacks, running Docker for local development is a pointless exercise. It introduces complexity and provides almost no benefits.

The crux of the problem is this: achieving an efficient edit/compile/run (E/C/R) cycle means that your development container will not be the same as your production container, which negates one of the primary advantages of using containers. Furthermore, you will never make your E/C/R cycle as efficient or foolproof as running locally without a container. The result: you pay a containerization tax and reap none of the rewards.

Let’s first look at my claim that an efficient E/C/R cycle requires separate “non-production” container definitions. Suppose otherwise: if you’re using your production container for development, your container must contain a pre-compiled artifact of some kind, or you’re doing something crazy like running the compilation in your Dockerfile. In either case, you need to re-build your container every time you make a change, so your E/C/R cycle looks something like this: edit your code, rebuild the service artifact, rebuild the Docker image, restart the container, and finally test your change.

We went down this path, and found that the cycle took long enough (> 30 seconds, excluding the build time for the service itself) to trigger a boredom-induced context switch. This is a productivity killer.

You can argue that this is an implementation critique, and that (eventually) this cycle will be much faster, but to compete against local development, the build/restart steps need to be nearly free. They’re not, and I don’t think they’re ever going to be, even with efficient use of the Docker image cache.

If you’re willing to discard the idea of using your production containers for local development, or you run an interpreted stack that has no build process, you can change the rules of the game. You can mount your repository directory into the container, listen for file changes, and use a live reload tool (e.g. Fresh) to re-compile and re-launch your application within your container.
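
To sketch what that wiring might look like in docker-compose terms (the service name, paths, and port here are illustrative assumptions; Fresh is the live-reload tool mentioned above):

```yaml
# Hypothetical development-only service: mount the source tree and let
# Fresh re-compile and restart the app when files change.
app:
  build: .
  command: fresh              # watches for changes, re-compiles, re-launches
  volumes:
    - .:/go/src/app           # sync the repository into the container
  ports:
    - "8080:8080"             # assumed port for the example service
```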

With some tomfoolery, this works reasonably well. It took us time to find and set up docker-osx-dev to mount and sync source folders efficiently, and several hours of putzing around with boot2docker to get inotify working properly, but we did arrive at a working solution.

But when we stepped back and looked at the beast we’d birthed, we could find no compelling advantages to it. We had been using foreman to start all our services locally, and foreman start is incredibly fast compared to docker-compose up. We inherited all the added complexity of managing boot2docker in addition to the Docker containers themselves. Our setup documentation tripled in length.


Our original motivation for trying Docker for local development was an errant brew update that broke some of our critical services locally (memcached and elasticsearch). In the end, we decided that running these backing services via docker-compose made plenty of sense, and it does simplify setting up and running a local development environment. On the other hand, we moved back to running our own microservices locally via foreman. We haven’t looked back since.
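
For reference, a minimal sketch of that setup (the image versions are assumptions from the era; your pins will differ):

```yaml
# Backing services only; our own microservices run on the host via foreman.
memcached:
  image: memcached:1.4
  ports:
    - "11211:11211"
elasticsearch:
  image: elasticsearch:1.7
  ports:
    - "9200:9200"
```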


03 Aug 2015

Launched: Environments

LaunchDarkly now supports multiple environments!

Environments let you manage your feature flags throughout your entire continuous delivery pipeline, from local development to QA, staging and production. When you create your LaunchDarkly account, we’ll provide you with two environments, called Test and Production. Out of the box, you can use Test to set up feature flags for your non-production environments, while keeping your rules and data separate from your Production environment.

It’s incredibly easy to switch environments: just select your environment from the dropdown on the sidebar. When you create a flag, you’ll get a copy of that flag in every environment. Each flag can have different targeting and rollout rules for each environment, so you can roll a flag out to 100% of your traffic in staging while keeping it ‘off’ in production.


Each environment has its own API key: use your Test key in your SDK for local development, staging, and QA, and reserve your Production API key for your production environment.
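
One simple way to keep the keys straight is to select them at startup based on where you’re running. A minimal Go sketch (the environment-variable names and the APP_ENV convention are illustrative assumptions, not part of any SDK):

```go
package main

import (
	"fmt"
	"os"
)

// ldAPIKey returns the LaunchDarkly API key for the current deployment:
// the Production key in production, and the Test key everywhere else.
func ldAPIKey() string {
	if os.Getenv("APP_ENV") == "production" {
		return os.Getenv("LD_PRODUCTION_API_KEY")
	}
	return os.Getenv("LD_TEST_API_KEY")
}

func main() {
	// Pass this key to your LaunchDarkly SDK client at initialization.
	fmt.Println("initializing LaunchDarkly with key:", ldAPIKey())
}
```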

Environments are also completely customizable: you can create new environments, rename them, or delete them. You can even change the sidebar swatch color for your environment. Make your Production environment red, for example, to give yourself a visual reminder that you’re modifying the rules for your live customer base.


Check out our documentation to learn more!



21 Jul 2015

Golang pearl: Thread-safe writes and double-checked locking in Go

Channels should be your first choice for structuring concurrent programs in Go. Remember the Go concurrency credo: “do not communicate by sharing memory; instead, share memory by communicating.” That said, sometimes you just need to roll up your sleeves and share some memory. Lock-based concurrency is pretty old-school stuff, and battle-hardened Java veterans switching to Go will undoubtedly feel nostalgic reading this. Still, many brand-new Go converts probably haven’t encountered low-level concurrency primitives before. So let’s sit down and program like it’s 1999.

To start, let’s set up a simple lazy initialization problem. Imagine that we have a resource that is expensive to construct: it’s read often but written only once. Our first attempt at lazy initialization will be completely broken:
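
A minimal sketch of what that looks like (the Expensive type and NewExpensive constructor are stand-ins for the costly resource):

```go
package main

import "sync"

// Expensive stands in for a resource that is costly to construct.
type Expensive struct{}

func NewExpensive() *Expensive {
	// ...imagine a slow, costly construction here...
	return &Expensive{}
}

var instance *Expensive

// GetInstance lazily initializes instance with no synchronization at all.
func GetInstance() *Expensive {
	if instance == nil { // unsynchronized read
		instance = NewExpensive() // unsynchronized write: a data race
	}
	return instance
}

// Broken races two goroutines through GetInstance.
func Broken() {
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			GetInstance()
		}()
	}
	wg.Wait()
}

func main() { Broken() }
```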

This doesn’t work, as both goroutines in Broken may race in GetInstance. There are many incorrect (or semantically correct but inefficient) solutions to this problem, but let’s focus on two approaches that work. Here’s one using read/write locks:
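
A sketch, reusing Expensive and NewExpensive from above:

```go
var (
	mu       sync.RWMutex
	instance *Expensive
)

// GetInstance uses double-checked locking: an optimistic read under the
// read lock, then a re-check under the write lock before constructing.
func GetInstance() *Expensive {
	mu.RLock()
	inst := instance
	mu.RUnlock()
	if inst != nil {
		return inst // fast path: already initialized
	}

	mu.Lock()
	defer mu.Unlock()
	if instance == nil { // re-check: another goroutine may have won the race
		instance = NewExpensive()
	}
	return instance
}
```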

If you’re a Java developer, you might recognize this as a safe approach to double-checked locking. In Java, the volatile keyword is typically used on instance instead of a read/write lock, but since Go does not have a volatile keyword (there is sync/atomic, and we’ll get to that), we’ve gone with a read lock.

The reason for the additional synchronization around the first read is the same in Go as it is in Java: the Go memory model does not guarantee that the initialization of instance is visible to other goroutines unless there is a happens-before relationship that makes the write visible. The read lock establishes exactly that relationship.

Now back to sync/atomic. Among other things, the sync/atomic package provides utilities for atomically visible writes. We can use this to achieve the same effect as the volatile keyword in Java and eliminate the read/write lock. The cost is one of readability: we have to change instance to an unsafe.Pointer to make this work, which is aesthetically displeasing. But hey, it’s Go, and we’re not here for aesthetics (I’m looking at you, interface{}):
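
A sketch, again with Expensive as before (this version needs the "sync", "sync/atomic", and "unsafe" imports):

```go
var (
	mu       sync.Mutex
	instance unsafe.Pointer // holds a *Expensive, read and written atomically
)

func GetInstance() *Expensive {
	// Fast path: an atomic load guarantees we see a fully published write.
	if p := atomic.LoadPointer(&instance); p != nil {
		return (*Expensive)(p)
	}

	// Slow path: serialize construction, then publish the pointer atomically.
	mu.Lock()
	defer mu.Unlock()
	if p := atomic.LoadPointer(&instance); p != nil {
		return (*Expensive)(p) // another goroutine beat us to it
	}
	inst := NewExpensive()
	atomic.StorePointer(&instance, unsafe.Pointer(inst))
	return inst
}
```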

Astute Gophers might recognize that we’ve re-derived a utility in the sync package called Once. Once encapsulates all of the locking logic for us so we can simply write:
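
With the same Expensive type as before:

```go
var (
	once     sync.Once
	instance *Expensive
)

func GetInstance() *Expensive {
	once.Do(func() {
		instance = NewExpensive() // runs at most once; safely published
	})
	return instance
}
```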

Lazy initialization is a fairly basic pattern, but once we understand how it works, we can build safe variations, like the resettable Once sketched below. Remember, though: this is all last-resort stuff. Prefer channels to any of these low-level synchronization primitives.
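
For illustration, here’s one shape a resettable Once might take; this is a sketch, not a standard-library type:

```go
// ResettableOnce behaves like sync.Once but can be re-armed. A plain
// mutex keeps Do and Reset mutually exclusive, trading sync.Once’s
// lock-free fast path for simplicity.
type ResettableOnce struct {
	mu   sync.Mutex
	done bool
}

// Do runs f if the Once is armed, then disarms it.
func (o *ResettableOnce) Do(f func()) {
	o.mu.Lock()
	defer o.mu.Unlock()
	if !o.done {
		f()
		o.done = true
	}
}

// Reset re-arms the Once so the next Do call runs its function again.
func (o *ResettableOnce) Reset() {
	o.mu.Lock()
	defer o.mu.Unlock()
	o.done = false
}
```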

