21 Mar

A new chapter

Having spent the past two-plus years in the consumer space at Good Eggs, my first week at LaunchDarkly has been an absolute whirlwind. From getting acquainted with the development process to meeting our amazing team, the onboarding process has been daunting at times, but very warm and welcoming.

Getting up to speed

As part of getting up to speed in my first week, I’ve been diving into the stack. John has been a fantastic pair and resource, not only for the current lay of the land but also for valuable historical context around the product’s evolution. I still have a lot to learn, but I’m feeling better equipped every day.

Back and the future

Before joining the team at LaunchDarkly, I was particularly impressed with the product and its evolution. Feature flagging, to me, extends beyond the software development process to define how organizations (really, the people within organizations) work together to deliver the best possible experience to the end user. After seeing how LaunchDarkly works from the inside and with its customers, I’m happy to report that I was not only right, but that there is so much more to it!

I’m looking forward to working on this empowering, impactful product with our incredible team and can’t wait to be a part of LaunchDarkly’s future.

14 Mar

My Agile Launch

Starting at a company that helps software teams release faster with less risk has reminded me of my first foray into agile development.

One of my earliest professional experiences was as an intern at HBO, where I reported directly to the VP of Emerging Technologies.  Over the course of the summer, it became clear that there were more promising new technologies to explore than there were engineers.  Undaunted college student that I was, I convinced my boss that I should own one of these projects.  Every day I would demo my work on a screen in his office, and he would provide feedback.  A few weeks later the VP presented to senior management and the company officially green-lit the project.  Though we didn’t refer to it as such, that cycle was my introduction to Agile.  More importantly, it was a daily routine in which a non-technical business stakeholder provided direct feedback to a technical resource, an arrangement that I’ve come to realize is quite uncommon.

Having worked at large and small companies in a variety of engineering roles, I’ve identified two overall trends:

  1. Companies often separate engineering from the business side.
  2. Most development-related failures (e.g. missed deadline, bad or buggy feature, release causing a security vulnerability, etc.) are a result of miscommunication.

Neither of these statements is profound in isolation, but coupled together they genuinely pique my curiosity.

Over time the broader engineering community has developed numerous tools and processes to mitigate risk.  Missing deadlines?  Use story points to measure team output (velocity) over time.  Releasing buggy features?  Try test-driven development.  Want to avoid downtime during a release?  Set up application performance monitoring in your staging environment.  While these “quality assurance” measures are not guarantees of perfection, they make development more predictable.

Problems arise when we cut corners in response to misaligned expectations.  Let’s say there’s a feature request for a relatively straightforward user enhancement.  The development team has done everything right to this point (the feature is properly designed and scoped), but towards the end of the sprint the team finds a scalability issue with no clear path forward.  Engineering management notifies the business side of the issue and commits to resolving it within five business days.  Five days pass and the developers have made no progress.  Engineering rushes to fix the issue through a refactor but skips unit testing to release earlier.  Two new bugs slip into production.  Instead of working on the next sprint, the development team now works to get a hotfix release out.  One unexpected event can change everything.

The separation of the business side from its engineering counterpart sets the stage for frustration and missed opportunities.  Any tool that can bridge the gap between these two groups offers immense value to an organization or a working relationship.  Cucumber, a Behavior-Driven Development test tool, empowers non-technical stakeholders to define requirements, in plain English, that double as executable test guidelines for engineers.  By regularly reviewing Cucumber test results, a business stakeholder could easily assess the current status of a project.  Nevertheless, Cucumber only facilitates one-way communication and offers no clear guidance on iteration or state.
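For example, a requirement written in Cucumber’s plain-English style maps directly onto executable step definitions.  Here is a minimal sketch using behave, a Cucumber-style BDD tool for Python; the scenario, step names, and application stub are hypothetical and purely illustrative:

    # features/discount.feature (plain-English requirement, written by a stakeholder):
    #   Scenario: Returning customer sees a discount
    #     Given a returning customer
    #     When they view the checkout page
    #     Then a 10% discount is displayed

    # features/steps/discount_steps.py (executable test written by an engineer):
    from behave import given, when, then

    def render_checkout(customer):
        # Stub standing in for the real application code under test.
        return "Checkout - 10% off" if customer.get("returning") else "Checkout"

    @given("a returning customer")
    def step_returning_customer(context):
        context.customer = {"returning": True}

    @when("they view the checkout page")
    def step_view_checkout(context):
        context.page = render_checkout(context.customer)

    @then("a 10% discount is displayed")
    def step_discount_shown(context):
        assert "10% off" in context.page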

The daily iterations I had at HBO were extremely effective in moving the project forward quickly.  In a perfect world we could repeat this process as often as possible with senior management to win their approval as early as possible.  What if we scheduled a daily meeting with the entire senior management team and showed up with a buggy build?  It would quickly become apparent that our good intentions are far less valuable than executive time.  Instead, what if we gave each one of the senior managers a version of the technology that we could push updates to over time?  What if I could focus on building features and my boss could choose and deploy the ones that he thought were ready?  What if, after realizing that there was a problem with a deployed feature, he could hide that feature without involving me?  LaunchDarkly does all of this at the enterprise level.

To me LaunchDarkly is about much more than feature flagging or even quality assurance; the platform empowers companies to reconnect the technical and non-technical departments in order to shorten feedback cycles with customers and make better decisions.

This mission is a game changer.

That its founders and team are all extraordinary yet humble makes the opportunity twice as appealing.

08 Feb

Toggle Talk with Slack’s Director of Engineering Josh Wills

I sat down with Josh Wills, Director of Engineering at Slack and unabashed feature flag enthusiast, to get his opinions on the practice, where it’s most useful, and how it has played an important part in his career.  I think my favorite take-away from him was,

“Feature flagging is scary. I get why it’s scary. But to me, not launching your product every five minutes is scary… Launching continuously is how you learn fast. It’s not just about deploying fast, it’s about learning fast. That’s the future of viability.”

Here’s what I heard from Josh in the interview.  

 

How long have you been feature flagging?

I started feature flagging at Google in 2007, and when I joined the team they were in the middle of rewriting the feature flag system. Our feature flag framework was called the Experiments Framework because it was designed for running A/B tests, and it grew out of that into this very powerful and complicated feature flagging framework that we used for literally everything. It started out as a simple “feature on”/“feature off” system, but it evolved to include richer types like strings and floats and even had some simple conditional logic for modifying flags based on attributes of the request. It became the system through which all of Google’s various machine learning models were combined to make decisions for ad ranking, search ranking, etc.

Imagine that you have dozens of machine learning models active on a given request, doing all kinds of different things, and your job is to figure out an algorithm for deciding which ads should go where and how much each advertiser needs to pay. All of those different signals are combined together through a complicated series of equations which have a bunch of thresholds and weights. There’s very rich logic in not only the machine learning models, obviously, but also within the feature flag framework, to control under different contexts what counts the most.

Trying to do data science and machine learning in production without feature flags is nuts.

The feature flag framework here at Slack was developed in-house and was not initially built with data science in mind, but we eventually created one that was, to support our own search ranking, backend performance, and new team onboarding experiments.

But when I got here, I was so happy that they had a feature flag framework at all. It means you deploy code hundreds of times a day, not once a month, or whatever it is other companies do.

 

What do you prefer to call it and why?

I like to call it the experiment framework, but that’s just Google/Xoogler nomenclature. Facebook’s system is called Gatekeeper and it’s basically the same idea. Most of these systems have converged to have the same set of features, because it just makes sense and you have to have certain things. Eventually you’ll get there anyway, so why not just skip to the end?  

 

When do you think feature flagging is most useful?

I’m biased, since I’m a data person. Data science and machine learning are what I do, and I think feature flagging is absolutely critical for machine learning, for bringing any kind of data-driven feedback loops and intelligence into your application. That is when it is absolutely most critical.

You should not be doing machine learning without a feature flag framework.

If you are saying, “Oh I want to get into machine learning, and I’m going to do predictions or personalization or recommendations,” which lots of people do, and you are doing it without a feature flag framework, you are insane and should be fired. Not to put too fine a point on it, but…

 

Are there any cases where feature flagging is not a good idea?

Well, there is always tech debt that goes along with doing this kind of stuff. We deal with this at Slack, and Google deals with it as well. You end up with code that has a lot of “if” statements in it. Engineers are not always the best about deleting their feature flags once they’re no longer necessary.

So we have a whole archival system: when you do the code review to create the feature flag, you also have to specify when you plan to delete the flag, and you get alerts if the flag is not removed by such-and-such a date. That is the cost of doing business. I would say the benefits massively outweigh the costs.
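A rough sketch of that idea (purely illustrative, not Slack’s actual system): each flag records an owner and a planned removal date when it is created, and a periodic job surfaces any flags that are overdue:

    from datetime import date
    from typing import List, NamedTuple, Optional

    class FlagRecord(NamedTuple):
        name: str
        owner: str
        remove_by: date  # agreed on during the code review that creates the flag

    # Hypothetical registry; in practice this would live wherever flags are defined.
    FLAG_REGISTRY: List[FlagRecord] = [
        FlagRecord("new_onboarding_flow", "alice", date(2017, 3, 1)),
        FlagRecord("search_ranker_v2", "bob", date(2017, 6, 15)),
    ]

    def overdue_flags(today: Optional[date] = None) -> List[FlagRecord]:
        """Return flags that should already have been removed from the codebase."""
        today = today or date.today()
        return [f for f in FLAG_REGISTRY if f.remove_by < today]

    if __name__ == "__main__":
        for flag in overdue_flags():
            # In practice this would page the owner or open a ticket.
            print(f"ALERT: flag '{flag.name}' (owner: {flag.owner}) "
                  f"was due for removal on {flag.remove_by}")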

I think there are certain other situations where feature flags are not a good idea.  They’re relatively rare, but they do happen. When you’re switching backend systems, there can be times when you have to just go for it. You may reach a point where you can’t quite bring yourself to burn the bridge and go without the old system anymore, even though it’s causing you pain and you know it needs to go. You just need to bite the bullet, do the cutover, and live with the consequences.

You work super hard to not ever find yourself in that situation. No team is going to go out of their way to corner themselves like that, but if you’re cornered, you’ve got to fight your way out.

 

Best use of a feature flag – a personal story?

It’s been crucial to the way I’ve worked ever since I moved to San Francisco. Coming from the perspective of doing machine learning, data science, etc. at Google, I ran thousands of experiments all in search of new ways of combining different models together, in order to fundamentally make Google more money, make the user experience better, and generate more ROI for advertisers. I got really good at it. You could say that feature flags made my career.

At the same time, there was one Friday afternoon back in 2009 where I thought it would be a good idea to do one last push, and I launched a bad feature flag configuration that broke the ads systems and cost Google a lot of money. I remember my boss saying at the time, “So Josh, what did you learn from the X million dollars we just spent educating you?”

It’s the kind of thing where you learn that canaries are good, and that 5 pm pushes on a Friday are pretty much always a bad idea, no matter how good your feature flagging framework is.

 

What do you think is the number one mistake that’s made around feature flagging?

I think my biggest thing is that if you are really thinking of feature flags as just “on”/“off” switches, you are missing the real power of what they can do.

To me the feature flag space is the parameter space that I get to explore to optimize whatever metrics I want to optimize. If you are constraining yourself to this very limited boolean on/off space, without strings, floats, etc., you’re putting artificial limits on how fast you can explore that space and learn how all of the knobs at your disposal work.

This is the early mistake that I think a lot of people make: they don’t feature flag often enough.

 

Can you share any tips for better flagging?

I think what was most compelling for me at Google was that the configuration language for feature flags was very rich, but not Turing complete. I don’t think that configuration-as-code is a good idea for feature flags, because it becomes harder to test and validate, which slows down how quickly you can roll things out. However, the configuration language was like programming in the sense that you could define what we called “condition functions” that could be evaluated in the context of a request and used to adjust the values of the flags. So the logic was like a long series of “if” statements, where you could modify the resulting value either by overriding it or by addition, multiplication, or other custom operators.

The way it worked when I was at Google and working on the ad system was that the server binary was pushed weekly, so the only changes you could make during the week were via experiments. Having that sort of richness of programming via feature flags allowed a lot more freedom and much more rapid, safer iteration between those weekly pushes. The same logic applies today for mobile apps, where you can only do releases via an app store, but you want to be learning what works much faster than that.
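A rough sketch of the condition-function idea (purely illustrative; the names and structure are hypothetical, not Google’s actual configuration language): rules are evaluated against attributes of the request and adjust a flag’s value by overriding it, adding to it, or multiplying it:

    from typing import Any, Callable, Dict, List, Tuple

    Request = Dict[str, Any]
    # Each rule pairs a condition function with an operator and an operand.
    Rule = Tuple[Callable[[Request], bool], str, float]

    def evaluate_flag(base_value: float, rules: List[Rule], request: Request) -> float:
        """Apply rules in order, like a long series of 'if' statements."""
        value = base_value
        for condition, op, operand in rules:
            if not condition(request):
                continue
            if op == "override":
                value = operand
            elif op == "add":
                value += operand
            elif op == "multiply":
                value *= operand
        return value

    # Hypothetical example: a ranking weight boosted for mobile traffic and
    # zeroed out (overridden) for a holdback group.
    rules: List[Rule] = [
        (lambda r: r.get("platform") == "mobile", "multiply", 1.2),
        (lambda r: r.get("experiment") == "holdback", "override", 0.0),
    ]

    if __name__ == "__main__":
        print(evaluate_flag(1.0, rules, {"platform": "mobile"}))     # 1.2
        print(evaluate_flag(1.0, rules, {"experiment": "holdback"})) # 0.0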

 

How do you think feature flags play into the DevOps movement? Continuous Delivery?

How would you do anything without them?

What does it mean to do DevOps without feature flags? It’s one of those things that doesn’t make sense to me. How would you make mayonnaise without eggs? It’s like, not really mayonnaise then. Which is good, because mayonnaise is gross.

I would be fascinated and somewhat horrified to have someone try to explain a feature flag-less DevOps setup. I don’t know what that would look like.

 

Are you seeing feature flagging evolving? If so how? And how do you expect it to change in the future?

Not as much as I would like, broadly speaking. I’m not at Google anymore, so I don’t know the current state there. The stuff they had when I left is much richer than anything I’ve seen anywhere else, but to be fair, they were doing a lot of machine learning much earlier than anyone else, too.

I think feature flags need to find a way to strike the balance between configuration as on/off switches and configuration as a Turing-complete programming language. That was the thing I felt was most compelling and powerful about the way Google did it, and that I’ve not seen anywhere else.

 

29 Aug

Guide to Feature Flagging Best Practices

While there are many companies that practice feature flagging as a way to extend continuous delivery and release faster, just as many are only starting out with the practice.  And until now, there hasn’t been a great resource for the feature flagging beginner.  Feature flagging as a concept is super straightforward, but it must be implemented with diligence.  Haphazard feature flags can devastate a company, whereas well-managed flags can prevent disaster and even make your company more competitive.


LaunchDarkly’s guide to feature flagging best practices fills that gap and provides a roadmap for better feature flag management.  While many aspects of feature flagging must be tailored to a particular company’s needs, the guide acquaints newcomers with how powerful feature flags can be and what they need to watch out for.  It also relays some pro tips for those who have been feature flagging for a while.

LAUNCHDARKLY HELPS YOU BUILD BETTER SOFTWARE FASTER BY HELPING MANAGE FEATURE FLAGS AT SCALE. START YOUR FREE TRIAL NOW.

17 May

How to use feature flags without technical debt

Or, won’t feature flags pollute my code with a bunch of crufty if/then branches that I have to clean up later?

People often ask for advice on maintaining feature flags after they roll out the new code to all users. Here is one way to approach the issue of cleaning up the old code paths. It works pretty well for our team. If you have any other ideas, please let us know!

Write the new feature

Step one is to write the new feature, gated by the feature flag, on a short-lived feature branch off of master (call it pk/awesome-xyz-support):
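A minimal sketch of what the gated code might look like (the flag key, helper names, and flag client here are illustrative stand-ins, not the snippet from the original example):

    def render_legacy(user):
        return f"hello, {user['key']}"                 # existing behavior

    def render_awesome_xyz(user):
        return f"hello, {user['key']} - awesome xyz!"  # new feature

    def handle_request(user, flag_client):
        # flag_client.variation(key, user, default) stands in for however
        # your feature flag system evaluates a flag for a given user.
        if flag_client.variation("awesome-xyz", user, False):
            return render_awesome_xyz(user)            # new path, behind the flag
        return render_legacy(user)                     # default path while the flag is off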

Next, before you submit your pull request for this change, create a second branch, off of the first, and call it cleanup/awesome-xyz. This branch removes the feature flag, leaving just the new code:
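Continuing the illustrative sketch above, the same handler on cleanup/awesome-xyz drops the flag check and the old path entirely:

    def render_awesome_xyz(user):
        return f"hello, {user['key']} - awesome xyz!"

    def handle_request(user):
        # The new feature is now the only code path; the flag and the
        # legacy branch have been deleted.
        return render_awesome_xyz(user)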

Continue reading “How to use feature flags without technical debt” »

11 May

Feature Flagging for Back-End & Microservices

LaunchDarkly feature flagging for backend microservices

I sat down with LaunchDarkly Lead Software Engineer Patrick Kaeding to get his take on feature flags for the back-end and how feature flagging works with microservices.  

Andrea: How does feature flagging help in back-end projects?

Patrick: It helps in general to be able to roll out to a small set of canary users and see if things go wrong.  You can gather feedback from performance metrics or error logs. You can give a smaller customer access to the new feature, or even a group of customers who know they are testing out something bleeding edge.  Then you can make changes, react to the feedback, and then roll it out to all users or roll it back. This helps mitigate risk and lets everyone sleep better at night.
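A rough sketch of how a percentage-based canary rollout can work (purely illustrative, not LaunchDarkly’s implementation): each user is hashed into a stable bucket, so the same user keeps getting the same result while the rollout percentage ramps up:

    import hashlib

    def in_canary(user_key: str, flag_key: str, rollout_percent: float) -> bool:
        """Return True if this user falls inside the rollout percentage."""
        digest = hashlib.sha1(f"{flag_key}:{user_key}".encode()).hexdigest()
        bucket = int(digest[:8], 16) / 0xFFFFFFFF  # stable value in [0, 1]
        return bucket < rollout_percent / 100.0

    if __name__ == "__main__":
        users = [f"user-{i}" for i in range(1000)]
        enabled = sum(in_canary(u, "new-search-backend", 5.0) for u in users)
        print(f"{enabled} of {len(users)} users fall in the 5% canary")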

Andrea: What was it like before you used feature flags?

Patrick: You would have to make a change – and test it of course, that part hasn’t changed – but then you would release it to all of your users. You’d be watching everything to make sure nothing went wrong, and it would be a lot more stressful.  You might have to time your deploy for a low-traffic window, which meant staying up late on the weekend to do the deploy. And if something went wrong, you’d have to roll back the deploy.  And if there’s any sort of data model change, sometimes rolling back to the old code is easier said than done. You might have to make sure you have a reverse migration in place, and that adds a whole lot of complexity. Of course, with feature flagging you still need to make sure both code paths work with the data model. That’s still something you have to think about.

Continue reading “Feature Flagging for Back-End & Microservices” »