20 Dec

LaunchDarkly raises $8.7 million for feature flag management: Separating business logic from code

I’m pleased to announce that we’ve raised an $8.7M Series A with Josh Stein of DFJ leading the investment and joining our board. John (co-founder and LaunchDarkly CTO) and I are thrilled about the next chapter for LaunchDarkly, the leading feature flag management platform.

Our Series A funding is a marker to reflect on our progress since we announced our seed round in June 2015. Then, we were a team of four with our product in private beta. Now, we serve billions of feature flags daily to amazing customers like AppDirect, CircleCI, Lanetix and Upserve. Microsoft recommended LaunchDarkly for feature flagging, and Atlassian showcases how we help them do DevOps right. As a former Product Director at TripIt, I initially thought our target market would be mobile app developers. I was right – mobile app developers do love us for how we can help them iterate faster. However, the market is so much bigger – LaunchDarkly is feature flag management for literally anyone who codes. ANYONE. Whether a team is reducing risk, managing subscription plans, or running microservices, we can help them. Part of our funding will go towards hiring more amazing engineers to continue to build feature flag management. We’re hiring!

We believe firmly that using feature flags allows software teams to build better products, faster. Really, what we allow teams to do is separate business logic (who gets what, when, how) from code. There are amazing benefits not just for developers, but for operations, product, marketing and sales, who suddenly can control functionality. As such, we have literally written the book on how to feature flag. (I’ve also written on the wrong way to feature flag.) We believe so passionately in the overall movement that we also open-sourced the book on GitHub. Want to contribute? Open a PR! Excited about helping evangelize feature flags? We’re hiring!

One of the most fun parts of founding a startup is not just building a product, creating a category, and hearing from happy customers – it’s building an amazing team. I am so proud of how much the original eight of us have achieved. But as we continue to grow, it’s clear that we need dedicated people making sure we’re providing the best customer experience. One of my favorite customer emails read, “I can’t believe that the sweet lady who answered the phone and helped me was actually the CEO.” I’m pleased that Russ Thau, former VP of Sales at Intercom, is joining us to build out a team of sales, support and customer success to help our customers get the most out of feature flagging. Interested in joining? We’re hiring!

I hope you see a persistent pattern – we’re looking forward to continuing to build a great product, customer experience and team. A favorite customer email was from Behalf, who said they enjoyed being on the journey with us. Thank you to our customers, team and investors, and I hope you are as excited about our next chapter as John and I are.

05 Dec

Cultural Changes of Feature Flagging vs Branching: Defrag X

I was honored to give a talk at Defrag X on “The cultural changes of feature flags vs feature branching”. Feature flags initially enabled developers to separate deployment from release. Now an entire organization (product, marketing, sales, customer success) can shift from a waterfall way of thinking to a more iterative, customer-centric way of releasing. Afterwards, I was thrilled by how many people came up and said they appreciated my talk.

When I first started LaunchDarkly, I met with an Engineering Director who was excited about feature flags but was still on a six month release cycle. That same Engineering Director was at Defrag. He’s now at a different company with a quicker release cycle and is ready to move forward with feature flags. I’m happy that as release cycles get quicker and quicker, the need for feature flag management grows with them.

Thanks to Defrag for ten years of great conferences, complete with a bonus snow appearance.

07 Sep

Why microservices need feature flags

Microservices is the practice of breaking up a huge monolithic release, where all components are tested and released as a whole, into many discrete services that can go on independent release schedules. The benefits of microservices were popularized by Martin Fowler, and put into practice by Amazon and Netflix. However, with microservices, releases become combinatorially more complicated. Instead of one monolith that can be tested as a whole, each service needs to be tested against every other; with even as few as nine services, that is 36 pairwise combinations, and every one is a potential friction point.

Feature flags to the rescue! With feature flags, engineering teams can have complete control over their various microservices. First, wrap the microservice with a feature flag, with all traffic going to the old version. Then, release a new version. With a feature flag, gradually route whatever share of traffic you want to the new version, verifying functionality against all the other microservices. The new traffic can be cordoned off however suits your business. A feature flag at its simplest can be an “on”/“off” switch to quickly flip between versions, similar to a blue-green deployment. However, feature flags can serve arbitrarily complex (or simple) variations of traffic to the new microservice. Some examples of how to route traffic to the new microservice are:
– percentage rollout
– by IP address of service
– by version number of other services
or whatever other attribute suits you. Feature flags allow users to do “microdeployments” of microservices. A microdeployment breaks a deployment into smaller slices, controlling exactly who receives the new service.
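
Here’s a minimal sketch of the idea, assuming a simple hash-based percentage rollout rather than any particular SDK; the flag, percentage, and service URLs below are hypothetical.

```python
import hashlib

# Hypothetical sketch: a percentage rollout flag that decides, per user,
# whether a request is routed to the new version of a microservice.
NEW_BILLING_ROLLOUT_PERCENT = 10  # share of traffic sent to the new version

def use_new_billing_service(user_key: str) -> bool:
    """Deterministically bucket each user so they get a consistent experience."""
    bucket = int(hashlib.sha1(user_key.encode()).hexdigest(), 16) % 100
    return bucket < NEW_BILLING_ROLLOUT_PERCENT

def billing_service_url(user_key: str) -> str:
    # Both versions are deployed; the flag controls who is released to which.
    if use_new_billing_service(user_key):
        return "http://billing-v2.internal"  # new microservice under test
    return "http://billing-v1.internal"      # old, known-good version
```

The same check could just as easily key off an IP address or the version numbers of neighboring services, as in the list above.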

Using feature flags and microservices together allows engineering teams to truly unlock the value of decoupling. You can read more about how we use microservices and feature flags at LaunchDarkly here.

 


31 Aug

Secrets of Netflix’s Engineering Culture

Netflix is not just known for the cultural phenomenon of “Netflix and chill”, but for its legendary engineering team that releases hundreds of times a day in a data-driven culture. Netflix is the undisputed winner in the video wars, having driven Blockbuster into the “return” bin of history. Netflix won by iterating quickly and innovating with numerous micro-deployments. Could what worked for Netflix work for you?

Netflix has a virtuous cycle of product innovation. Every change made in the product has one goal: getting new users to become subscribers. Netflix has a constant flow of new users every month, so it always has fresh users to test on. It also has a vast store of past data to optimize against. Did someone who liked “Princess Bride” also like “Monty Python and the Holy Grail”? When is the right time to prompt for a subscription? Interesting tests that Netflix can run include whether TV ads drive Netflix signups, or whether requiring Facebook to create an account drives enough social activity to counteract the drop in subscriptions from people who don’t have Facebook. If a change increased new user subscriptions, it went into the product. If it didn’t, it didn’t make it in – hypothesis-driven development.

However, what if you’re not Netflix? What if you’re a steady SaaS business with 1,000 business customers, onboarding 30 new customers a month? This is a healthy business, doubling in size annually. But what if you wanted to test whether you get more subscriptions with a one-step or a two-step process to add a credit card? With a sample set of 30 a month and a 90% current success rate, it will take you three months to determine success. Not everything can be tested at small scale. Tomasz Tunguz talks more about the perils of testing early here.
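
To get a feel for the math, here is a rough sketch of the standard sample-size calculation for comparing two conversion rates (normal approximation). The baseline rate and the lift to detect are assumptions chosen purely for illustration.

```python
import math

def sample_size_per_arm(p_baseline, p_variant, z_alpha=1.96, z_beta=0.84):
    """Approximate observations needed in each arm of an A/B test
    (two-sided 95% confidence, 80% power)."""
    p_bar = (p_baseline + p_variant) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p_baseline * (1 - p_baseline)
                                      + p_variant * (1 - p_variant))) ** 2
    return math.ceil(numerator / (p_baseline - p_variant) ** 2)

# Example (assumed numbers): detecting a lift from a 90% to a 95% success
# rate needs roughly 430+ customers in each arm, far more than a few dozen
# new signups a month can supply quickly.
print(sample_size_per_arm(0.90, 0.95))
```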

The other “gotcha” to watch out for with Netflix-style development is that an obsessive focus on one metric can degrade other metrics. For example, focusing on optimizing new user signup might mean degrading the experience for existing users. Let’s say that 10,000 customers could be served with “good” speed, or 2,000 with “superfast” speed and 8,000 with “not good” speed. Or 1,000 with lightning-fast speed and 9,000 with terrible speed. You might make the 1,000 new customers very happy, but piss off the 9,000 existing customers and have them quit. A good counterweight is to always have a contra-metric to keep an eye on. It’s okay if it dips slightly while the main metric rises. However, if the contra-metric tanks, reconsider whether the overall gains are worth it.

So what lessons can you take from Netflix to help your own business?

One, have a clear idea of why you’re making changes, even if it’s not something that you can A/B test. Is it to increase stability in your system? To make it quicker for someone to onboard? Know what your success criteria are, even if there’s not a statistically significant “winner”.

Two, break down projects into easily quantifiable chunks of value. Velocity can be as important (if not more important) than always being right. For example, if you try 20 small changes and half of them work, you still capture the gains from the 10 that did. If you try one big change and it’s not accretive, you end up with a zero percent gain. Or, as Netflix architect Adrian Cockcroft says, “If you’re doing quarterly releases and your competitor is doing daily releases you will fall so far behind”.

Three, don’t underestimate the importance of your own domain expertise. If you’re constantly testing ideas, even without having enough data, you’re quicker to get onto the right path. Let your competitors copy your past mistakes while you move forward. As Kris Gale, co-founder and CTO of Clover Health, said, “You will always make better decisions with more information, and you will always have more information in the future.” But the way to get more information is to iterate.


26 Aug

3 Ways to Avoid Technical Debt when Feature Flagging

Feature flags are a valuable technique for separating release (deployment) from visibility. Feature flags allow a software organization to turn features on and off at a high level, as well as segment their user base to give different users different levels of access. However, feature flags have an (ill-deserved) reputation for creating “technical debt”. Used incorrectly, feature flags can accumulate, add complexity, and even break your system. Used correctly, feature flags can help you move faster. Here are three easy ways you can avoid technical debt when using feature flags.

  1. Create a central repository for feature flags

Using config files for feature flags is “the junk drawer” of technical debt. If you have seven config files with different flags for different parts of the system, it’s hard to know what flags exist, or how they interact. Have one place where you manage all of your feature flags.
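
As a purely illustrative sketch (a plain dict standing in for a real flag management service, with made-up names), the difference looks like this:

```python
# The "junk drawer": flags scattered across config files owned by
# different services, with no single answer to "what flags exist?"
#   api/config.yaml        -> enable_new_auth: true
#   billing/settings.ini   -> NEW_INVOICES=0
#   web/feature_flags.json -> {"newNav": false}

# One central registry that every service reads from instead.
FLAG_REGISTRY = {
    "new-auth":     {"owner": "api",     "enabled": True},
    "new-invoices": {"owner": "billing", "enabled": False},
    "new-nav":      {"owner": "web",     "enabled": False},
}

def list_flags():
    """One place to see every flag, who owns it, and its current state."""
    for name, flag in sorted(FLAG_REGISTRY.items()):
        print(f"{name:14} owner={flag['owner']:8} enabled={flag['enabled']}")
```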

  2. Avoid ambiguously named flags

Give your flags easy-to-understand, intuitive names. Assume that someone other than you and your team could be using this flag days, months, or years into the future. Don’t choose a name that could cause someone to turn a flag on when they mean off, or vice versa. For example, with a flag named “FilterUser”: when it’s off, does that mean users are filtered, or not?
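
A small, hypothetical illustration of the difference (a plain dict stands in for the flag store):

```python
flags = {"FilterUser": False, "hide-internal-users-from-reports": False}

# Ambiguous: with "FilterUser" off, are users being filtered or not?
if flags["FilterUser"]:
    pass  # unclear what turning this on actually changes

# Clearer: the name states exactly what happens when the flag is on.
if flags["hide-internal-users-from-reports"]:
    pass  # exclude internal users from customer-facing reports
```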

  3. Have a plan for flag removal

Some flags are meant for permanent control, for example in an entitlements system. Other flags are temporary, meant only for the purpose of a release. If a flag can be removed (because it’s serving 100% or 0% of traffic), it should be removed as quickly as possible. To enforce this rigor, when you write the flag, also write the pull request to remove it. That way, when it’s time to remove the flag, it’s a two-second task.
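
As a hypothetical sketch of what that looks like in practice (function and flag names are made up):

```python
flags = {"new-checkout-flow": True}  # temporary rollout flag

def old_checkout(cart):
    return "old flow"

def new_checkout(cart):
    return "new flow"

def checkout(cart):
    # While the rollout is in progress, both code paths live behind the flag.
    if flags["new-checkout-flow"]:
        return new_checkout(cart)
    return old_checkout(cart)

# The removal pull request, written at the same time as the flag,
# deletes the flag check and the old path, leaving just:
def checkout_after_removal(cart):
    return new_checkout(cart)
```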

22 Aug

Feature flagging and testing

Sam Stokes, Rapportive co-founder, has this tip on testing and feature flagging:

“I’ve found that explicitly testing both sides of the flag can help with managing the increased complexity a flag introduces.  It helps you catch changes down the line that would have broken the old, forgotten code path that 10% of customers are still on.  It provides reminders to clean up the old code and delete the flag (via occasionally breaking the tests).  And it provides some pushback against having too many interacting flags (if you had to write 8 tests for one function because it depended on three separate feature flags, maybe it’s time to refactor that function, or retire some flags).”

Sam Stokes regularly blogs here.
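
Here’s a minimal sketch of Sam’s advice, assuming pytest and a simple dict in place of a real flag store (the flag and function names are made up):

```python
import pytest

FLAGS = {"new-greeting": False}

def greeting(name: str) -> str:
    if FLAGS["new-greeting"]:
        return f"Welcome back, {name}!"
    return f"Hello, {name}."

# Parametrizing over both flag states keeps the old code path tested,
# pushes back on flag sprawl, and nudges you to delete stale flags.
@pytest.mark.parametrize("new_greeting_enabled", [True, False])
def test_greeting_for_both_flag_states(new_greeting_enabled, monkeypatch):
    monkeypatch.setitem(FLAGS, "new-greeting", new_greeting_enabled)
    assert "Sam" in greeting("Sam")
```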