In this segment of To Be Continuous, Edith Harbaugh and Paul Biggar talk about the fear of shipping and whether code is an asset. This is podcast #5 in the To Be Continuous podcast series all about continuous delivery and software development.
This episode of To Be Continuous is brought to you by Heavybit and hosted by Paul Biggar of CircleCI and Edith Harbaugh of LaunchDarkly. To learn more about Heavybit, visit heavybit.com. While you’re there, check out their library, home to great educational talks from other developer company founders and industry leaders.
Paul: Okay, so one of the things we wanted to talk about this week was the idea of shipping velocity and how it’s affected by team structure, sort of miscellaneous factors that aren’t part of continuous delivery.
Paul: So at the end of the last episode, Edith, you said, I think this was a quote from Yammer VP, “The organization you design is the software you built.”
Edith: Yeah, it was actually from the CTO, Adam Pisoni, and it really struck home for me because I see this at so many companies: they have a lot of engineers and they wonder why they have a lot of code but not a lot of product. And it ties back to what you just said about specializing the product management role.
Paul: Right. I think this has a name, I think it’s Conway’s Law, and the way that I saw it expressed is that if you’re building a compiler and you have four different teams building it, you’ll end up with a four-pass compiler.
Edith: There’s no one vision.
Paul: Right. When looking at lots of different teams and team structures, the interesting one that I found was the Heroku one.
And they have a language team and they have an add-ons team, and they have sort of sharp delineations in their software, or in their stack, that allow them to really focus on one particular area, because there are such sharp demarcations between the different areas of the product.
Edith: I think that’s good if you’re a fairly mature product. I think in the early days of Heroku that would not have worked at all.
Paul: I wouldn’t say it’s quite from the early days, but relative to now it was quite early, I think they had that in Cedar, which was around 2010.
Edith: I just more meant when you’re an early stage start-up, sometimes you change your entire product.
Paul: Well okay, yes, yes. I mean, absolutely. I think once you get past the first stage of the product, you’re able to draw very good interfaces around how your customers understand what your product is.
Edith: I don’t know, I mean I’ve seen this go bad in so many organizations, where you have entrenched engineering organizations that care more about staying on their current project than about where the market is going.
You know, like we’ve always worked on this, so we need to stay here because we don’t know anything else versus being able to evolve to where the market is going.
Paul: Right. This reminds me a little bit of something that I’m working on at the moment. We brought in some UX experts to look at our app and to help us sort of transform it into something that was a little more usable.
And they did a fantastic job and I spent this afternoon reading some of the reports. But what was difficult was understanding where the product needed to be. So for us in particular, there’s not enough focus on the deployment.
There’s a lot of focus on the build, and there isn’t really a broader look at what engineers actually do when they’re trying to do continuous delivery. So what was in the product got redone in a really fantastic way, but there wasn’t much affordance made for the thing that actually needs to be in the product.
And when you talk to customers, it’s hard for them to tell you, “here’s the thing that the product actually needs to be.” They look within the box you’ve drawn for them.
Edith: Yeah. I say this ’cause I had a similar evolution to you, I actually started off in engineering. And when I was in engineering it was very obvious what we should build next, extremely obvious. And so I always thought that our product manager was an idiot for not seeing as clearly as me. When I became a product manager, I realized how myopic I had been as an engineer.
Paul: Can you give an example of what was the next thing to build that in retrospect was wrong?
Edith: I would see all the little bug fixes that we should be doing instead of the next big features. Or not even the next big features, but the next big product.
Paul: I think big or small, or big picture versus small picture, is a good way to distinguish these two. I think that it’s very easy when you’re talking about product management to get the idea that a product manager knows everything and the engineer is just an implementer.
And I think this is where a lot of the resistance to product managers comes from within an engineering organization: the idea that they’re going to be relegated to mere peasants in the…
Edith: Code monkeys.
Paul: Code monkeys, there we go.
Edith: You know, nobody wants to be a code monkey. That doesn’t sound very fun.
Paul: Right, right. I would disagree with that, but I think that’s–
Edith: Wait a second, we never disagree.
Paul: It is very frustrating trying to understand everything. And on the other hand, it’s very satisfying to ship things and to get your stuff in front of customers. So very often, the ability to just be a code monkey for a certain period of time is this sort of soothing feeling of just shipping software that fixes a lot of small problems.
I remember reading one of the famous GitHub guys, I don’t remember which one it was, but let’s assume it was Kyle Neath, who spends a lot of time on big projects. And in between the big projects, he needs to find what the next big project to work on is.
And it’s often very frustrating, or you go down certain rabbit holes and you end up not shipping things, or you end up getting frustrated or whatever. What he likes to do then is just reach into the backlog and take a bunch of small fixes, and he’ll spend like two weeks just implementing very small things.
You don’t need to think about it, it’s cathartic, and it lets you ship. So a couple of weeks as a code monkey, I think, is a very useful thing to sort of refresh the head.
Edith: I agree, but I think nobody wants to do that full-time and I’ll also challenge something else you said, which is everybody wants to ship.
I think there are a lot of people who find shipping terrifying, and they’d rather keep holding stuff back until it’s perfect… Like, I’ve certainly been in situations like this where it’s like–
Paul: Right, where we can’t ship because it’s not perfect yet or it’s not complete.
Edith: Yeah, and as an engineer you have this real battle of well what if people want this, or what if they want this or this might be not quite right.
Paul: Yeah. The personal strategy that I use to manage that is to try to write the blog post that you’re going to launch this with. Because very often you’ll be like, “oh I can’t launch this ’cause it hasn’t got this feature, it hasn’t got that feature.”
And in the blog post, assuming that you’re going to tell people how to use it, or maybe in the docs if not the blog post, you get that sort of feedback. As you’re trying to explain to your user how to do this, you’re going to say, “all you need to do is this,” and you’ll realize that this is seven steps long instead of one step long.
Edith: Yeah, the Amazon model. So at Amazon, they actually start with writing the press release first.
Paul: Okay, right.
Edith: And everything works back from that. It’s a really good guide.
Because too often, people do it the other way around: they’ve built this gargantuan thing, and then they’re trying to write a press release or blog post, and they’re like, whoa, we’ve built all this stuff but there’s nothing actually to talk about.
Paul: Right. And part of that, and something which I think engineers have a difficult time thinking about, is how to get this widely in use. So you can build the feature, but it’s no use having built it if no one uses it. You need to build the breadcrumbs, you need to figure out ways to subtly hint that this is the feature you want when it is the feature you want, and to draw people’s attention to it.
And sometimes that’s putting it in docs and sometimes that’s doing a big announcement, but more often it’s trying to get the product in a place where the UX naturally implies the right path or the right direction for users to discover parts of the product.
Edith: Yeah, I mean there’s the whole idea of responsive design, but I think even more, and this goes back to why I started LaunchDarkly: you might have built it, but nobody might want to use it.
Like, you could have put all this effort into building it, and done all these breadcrumbs that nobody follows. So the idea is you actually start doing the breadcrumbs first and see if people start following that path.
Paul: So with LaunchDarkly, I’m guessing that the way that you see whether someone is using it is whether it’s enabled for them. Is that right?
Edith: Well no, so what we do is we allow people to turn on features for certain users.
Just turning on a feature for a certain user doesn’t necessarily mean that they start using it.
Paul: Right, exactly. So do you tie this to Mixpanel usage? Or some sort of analytics stuff?
Edith: We could tie it to different back ends. We tie it to New Relic, we tie it to Optimizely so you can see whether people are even using it, and we have our own internal analytics.
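Edith’s earlier distinction, that turning a flag on for a user doesn’t mean they actually use the feature, is the kind of thing these analytics hooks exist to measure. Here’s a minimal Python sketch of the idea (all names are illustrative, not LaunchDarkly’s API): exposure and usage are tracked separately, and adoption is the ratio between them.

```python
class AdoptionTracker:
    """Track flag exposure separately from actual feature usage."""

    def __init__(self):
        self.enabled = set()   # users the flag is turned on for
        self.used = set()      # users who actually triggered the feature

    def enable(self, user_id):
        self.enabled.add(user_id)

    def record_use(self, user_id):
        # Only count usage for users who could see the feature at all.
        if user_id in self.enabled:
            self.used.add(user_id)

    def adoption_rate(self):
        # Fraction of exposed users who actually used the feature.
        if not self.enabled:
            return 0.0
        return len(self.used) / len(self.enabled)

tracker = AdoptionTracker()
for u in ["alice", "bob", "carol", "dave"]:
    tracker.enable(u)
tracker.record_use("alice")
tracker.record_use("eve")  # never exposed, so not counted
print(tracker.adoption_rate())  # 0.25
```

The point is that “flag on” only creates the opportunity for adoption; a separate usage event tells you whether anyone followed the path.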
Paul: Gotcha, okay. So this is the thing for me: I started a project recently, and the first thing I did was build the dashboards for adoption. And we’re still at the stage in the project where there’s no adoption, or there’s a tiny amount of adoption amongst early users.
A trickle. A trickle that you can’t even see on the graphs.
Edith: So it’s more like a fine mist.
Paul: A fine mist. But what you need to get is you need to get to the place where everyone is using this. ‘Cause if you just build it, they’re not going to come.
They need to be told about it, they need to understand how to use it, and getting those first customers using it, and seeing where it’s deployed amongst them, gives you incredible feedback about how one actually ships that software to the larger customer base.
Edith: Totally agree. I mean, this is classic Lean principles of just making sure some people can use it well before rolling it out further.
Paul: We discovered a part of the product that exactly three customers were using.
Edith: How did that make you feel?
Paul: Well, we didn’t actually know it was a feature. This was the idea that you could do deployments in parallel. So at CircleCI we parallelize your build, and the idea is that basically we take your test suite and split it across 20 or 50 or whatever machines.
But it turns out that that applied to deployment as well. And there were exactly three customers using that. And one of them had a valid use case for it. Out of thousands of customers, exactly one valid use case was there.
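The parallelization Paul describes can be sketched roughly like this (a deliberate simplification; a real CI system would split by historical timing data rather than simple round-robin assignment):

```python
def split_tests(test_files, num_machines):
    """Assign test files to machines round-robin, a naive stand-in for
    the timing-based splitting a real CI system would do."""
    buckets = [[] for _ in range(num_machines)]
    for i, f in enumerate(sorted(test_files)):
        buckets[i % num_machines].append(f)
    return buckets

print(split_tests(["a", "b", "c", "d", "e"], 2))
# [['a', 'c', 'e'], ['b', 'd']]
```

Each machine then runs only its own bucket, which is what makes the suite finish in a fraction of the serial time.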
Edith: So what did you do with the feature?
Paul: We killed it.
Edith: Did you tell him?
Paul: I hope so. Yeah, I know. I think we reached out to that guy. There was another way for them to do it.
Edith: The painful part of product management is also when you have a feature that you’d like to kill, but that a subset of power users loves.
So like at TripIt, we were a mobile travel itinerary app, but we let people do a printout. And one time we were like, oh, nobody prints anymore, let’s just kill it. Turns out that people print, and they really, really like printouts.
Paul: Right, right. I understand that, yeah.
Edith: Like particularly if you’re traveling to a foreign country.
Paul: Yeah, you’re not going to have internet or your phone’s going to be dead.
Edith: Or you want to show something to a passport guard.
Paul: Yeah, or a local. Without handing over your phone, like here’s what I’m doing.
Edith: Yeah, so they were furious with us.
Paul: Right. So, had you killed it at that stage?
Edith: Oh we killed it. Like we were just like oh, we didn’t have good analytics on people printing, so we just said oh nobody’s printing, let’s kill it.
Paul: Oh, wow wow.
Edith: So our analytics later was that people complained. Quite loudly.
Paul: And so had you properly killed it at that point, or had you merely disabled it to see if it went away?
Edith: Let’s see. We disabled it. I think we could get it back but people were really really upset.
Paul: I like the idea of shipping something turned off, rather than actually deleting the code. Even though it’s incredibly cathartic to delete the code and to hide it and remove it. But turning something off with a feature flag is just a much better way to sunset something.
Edith: Why do you think it’s cathartic?
Paul: Oh deleting code is amazing. It’s like my favorite thing to do.
Edith: It’s funny. My cofounder John is ex-Atlassian, and he said the winner of their hack competition was always the person who deleted the most code.
Paul: Right, right. That makes perfect sense.
Edith: ’Cause that’s what they wanted to reward: tidiness.
Paul: Yep, yep. It’s very much related to the idea of product management and validating things and making sure that you don’t build too much of the product. Code is not an asset.
Code is an asset in the financial sense, in that you think you want more of it, but you actually don’t. You actually want the best performance with the least amount of code, slash assets.
Edith: Yeah, it’s like, so a friend asked me once, “should I pay my developers more if they write more lines of code?” I was like, no. That’s a really easy metric to game.
Paul: Right. So we were talking about deleting features by feature flagging them. I think that this is an awesome way to delete a feature because it’s very, very easy to get back. It’s much easier to get back than a rollback.
Rollbacks are this thing that nobody wants to do because they’re very, very painful, especially if a ton of code has come in between. But if you delete something by literally putting a feature flag around it, you ship the code with the flag still on, then you turn it off for a certain number of people, see if anyone complains, see that everything still works, show it to ten people, and then you delete it for everyone just by flicking the flag.
And then if you get someone saying, you know, “we really, really, really need printing,” you can turn it back on for them while you have a think about how you really want to solve this problem.
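The sunset pattern Paul describes, flag off globally but re-enable per user, might look like this in Python (a sketch with hypothetical names, not any particular feature-flag product’s API):

```python
class FeatureFlag:
    """A flag with a global default and per-user overrides."""

    def __init__(self, name, default=True):
        self.name = name
        self.default = default
        self.overrides = {}  # per-user exceptions to the default

    def enabled_for(self, user_id):
        return self.overrides.get(user_id, self.default)

printing = FeatureFlag("itinerary-printing", default=True)

# Sunset: flip the default off for everyone. The code path still exists,
# so this is trivially reversible, unlike deleting the code.
printing.default = False

# A power user complains loudly; turn it back on just for them
# while you decide how to really solve the problem.
printing.overrides["frequent-traveler-42"] = True

print(printing.enabled_for("casual-user"))           # False
print(printing.enabled_for("frequent-traveler-42"))  # True
```

Compare this with a rollback: nothing here touches version control, so “undo” is a one-line state change rather than a painful revert.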
Edith: Yeah, I think feature flags is a really misleading term.
So feature flags implies that it’s always on or off, when really it’s more of a feature control.
It’s a way to encapsulate a portion of functionality such that you have total control over it: from the sunrise of it, you know, from launching it to certain people, seeing their reaction, getting analytics, all the way to the end, because, as you said, every feature you eventually want to kill.
Paul: There’s an interesting parallel here between feature flags and configuration variables. By configuration variables, what I mean is: in a Rails app, you often have a set of four different environments. You have dev, test, staging and production. You often get in trouble in that you fill your code base full of, “if dev, this,” or, “if env is production, then do this.” What you really want is to be able to say, “if we have enabled the X feature,” or, “if we’re using SSL”; SSL is one example of a thing that might be on in some configurations, but wouldn’t be on in other ones. In the Clojure ecosystem, there’s this idea of a component, where you build your application as a set of components and all of the variables for a component are passed into it. The variables are essentially feature flags. Are things disabled? Are they enabled? What is the setting that it’s built with? It makes it very easy to compartmentalize functionality and to only expose a simple interface that allows you to control how the functionality works, without necessarily having to dive into the functionality all the time.
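Paul’s Clojure component idea translates to any language. Here’s a rough Python sketch (names are illustrative): instead of scattering environment checks through the code, a component receives its configuration, effectively its feature flags, through its constructor.

```python
class HttpServer:
    """A component whose behavior is set entirely by injected config,
    never by asking 'which environment am I in?'"""

    def __init__(self, use_ssl, port):
        self.use_ssl = use_ssl
        self.port = port

    def url(self):
        scheme = "https" if self.use_ssl else "http"
        return f"{scheme}://localhost:{self.port}"

# The assembly point decides configuration; no `if env == "production"`
# branches ever appear inside the component itself.
dev_server = HttpServer(use_ssl=False, port=8080)
prod_server = HttpServer(use_ssl=True, port=443)

print(dev_server.url())   # http://localhost:8080
print(prod_server.url())  # https://localhost:443
```

The payoff is the compartmentalization Paul mentions: the component exposes a simple interface, and everything variable about it is visible at the point where the application is wired together.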
Edith: That’s exactly my vision for LaunchDarkly. That everything should be controlled.
Paul: Right. You should look into the Clojure idea of components. It’s a very sort of… The Clojure people speak in very theoretical abstractions and that sort of thing. They use weird words like “complecting.” It’s a weird thing, but they actually really know what they’re talking about, which is even more annoying.
Edith: That’s always the worst, right?
Paul: Yeah, so someone invents their own vocabulary and then they’re right, so you actually have to discover what they mean by this vocabulary, and no one else understands the vocabulary. So “complecting” is an interesting word, in this case. It means unnecessarily tied together. It doesn’t mean that something is complex, but that two things are… It’s the opposite of simple, basically.
Edith: That was the …
Paul: The idea that you have two components and they’re too widely connected, or complected.
Edith: Yeah. What do you think is a better word than feature flag, when really it’s more about feature controlling or feature wrapping?
Paul: I used to feel that feature flags and AB testing were actually different concepts. I’m now convinced that they are the same concept.
Edith: This is interesting, because I think AB testing is just something that is enabled by having a wrapped feature.
Paul : Something that is … Say that to me again.
Edith: If you’ve wrapped all your features, as I talked before, you can launch them, you can monitor them, you could sunset them, and you could also compare them.
Paul: Right, right, right. Yes, so an AB test is really a feature flag which is enabled randomly for a certain subset of customers, and you’re looking at the analytics.
Edith: Yeah, so it’s just an extension of… If everything is an object, and, as you said, a nicely wrapped object, then you could say, “Oh, is this object doing better than this other object?”
Paul: Right. Okay. The thing that I found really weird in looking at how people talked about AB tests versus feature flags, weird because they’re the same concept, is that AB tests are a marketing thing, or a growth thing. You tie them to business goals, or to KPIs, or to funnels, or something along those lines. Features, and especially the operational side of features, you tie to database latency and basically operational metrics. There’s no difference between business and operational metrics. Every AB test should be tied to operational metrics, because it’s no good knowing, “Oh, no one buys on this thing,” if the reason no one buys on it is because the exceptions are through the roof.
Edith: Yeah, you …
Paul : Similarly, there’s no concept of, “This feature works really well if the database load is really low.” Oh, the database load is really low because no one is clicking down that path. Your business metrics are through the floor.
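Paul’s claim that an AB test is just a feature flag enabled randomly, with business and operational metrics both recorded per variant, could be sketched like this (hypothetical names, not a real product’s API; the hashing is deterministic so each user always sees the same variant):

```python
import hashlib

def variant_for(user_id, experiment, rollout=0.5):
    """Deterministic bucketing: hash user+experiment into [0, 1)
    so the same user always lands in the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "B" if bucket < rollout else "A"

# Business AND operational metrics, tracked per variant, per Paul's point
# that exceptions through the roof invalidate a conversion comparison.
metrics = {"A": {"conversions": 0, "errors": 0},
           "B": {"conversions": 0, "errors": 0}}

def record(user_id, converted, errored):
    v = variant_for(user_id, "new-checkout")
    if converted:
        metrics[v]["conversions"] += 1
    if errored:
        metrics[v]["errors"] += 1
```

Seen this way, the “AB test” is nothing more than a percentage-rollout feature flag plus analytics on both kinds of metric.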
Edith: You’re absolutely right. I think one of the goals of LaunchDarkly is to provide analytics on everything. But what I found when I was talking to customers is there’s a lot of fear around AB testing. Just the word, I think, has been overloaded. Take Max from Heroku, who sits downstairs from us: they love feature flagging. They feature flag everything. But he doesn’t think of it as AB testing. If you asked him, “do you AB test?”, he would say no, because that implies you’re really doing more of a test of an old thing versus a new thing and picking which one is better. Really, it’s more that he has a new feature and he wants to make sure the metrics are correct.
Paul: Right. When I would advocate for AB tests in the past, it was mostly to ask, “Does this perform no worse than what was there before?” Someone would have a new design of something and think it’s great. We’d all agree that it looked a lot better, but does it convert better? In fact, not even does it convert better, because if it looks better and it converts the same, we should definitely go with it. Does it convert no worse?
Edith: Yeah, the issue is most people don’t have the traffic for AB testing.
Paul : You can make AB tests work at a small scale.
Edith: If you’re a SaaS company with maybe 300 customers, you don’t have enough volume to do AB testing.
Paul : You don’t have … No.
Edith: You could be quite profitable and very happy with those 300 customers that are each, like, 100K a year.
Paul: AB testing is a test of statistical significance, right? To be able to tell small differences in the result, you know, 5% versus 6%, you need a large number of people to have any confidence in the statistical significance.
Edith: Yeah, so if you have a feature …
Paul: But you can tell the difference between 80% of people getting through your funnel and 20% of people getting through your funnel with 50 customers.
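The contrast Paul draws, 80% versus 20% being detectable with about 50 customers while 5% versus 6% is hopeless, can be checked with a normal-approximation two-proportion z-test. This is a simplified sketch using only the standard library; the numbers are illustrative, not from the episode.

```python
import math

def z_test_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion
    rates, using the pooled normal approximation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 80% vs 20% conversion with only 25 users per arm: clearly significant.
print(z_test_p_value(20, 25, 5, 25))   # well below 0.05

# 5% vs 6%-ish with 25 users per arm: nowhere near significant.
print(z_test_p_value(1, 25, 2, 25))    # around 0.55
```

Big effects punch through tiny samples; small effects need the traffic Edith says most SaaS companies with a few hundred customers simply don’t have.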
Edith: So say you have a feature that isn’t at the very top of the funnel. What I was trying to say is, if you have something that people might not see very often, it’s very hard to AB test it. If you look at a funnel …
Paul : It’s also very hard to get …
Edith: You have your marketing site, which gets the most traffic. You have your signup, which gets the next most. Then the further down you get into the product, the harder and harder it is to do AB tests.
Paul: Especially if it’s something where you don’t necessarily have the funnel, or where there is a funnel, but you haven’t instrumented it. One of the ways I think about software is that everything is a funnel.
Edith: Yeah, life is a funnel.
Paul: Life is a funnel. Hiring is a funnel. Your jobs page is part of a funnel, but you don’t actually have a Mixpanel funnel built on your jobs page. Making any change to your jobs page means that you don’t actually know if it’s had a positive effect, or a negative effect. You can’t really tell anything about it.
Edith: That’s very true. On the other hand, if you’re getting only like one person applying a month anyway, it doesn’t really matter. That’s still in the realm of statistical insignificance.
Paul: Right, so when something goes out you have no controls, you just have opinions. Whereas on the home page you can have data. Then on your jobs page, you can argue that this word we use here is really off-putting to female developers, or something along those lines, but if someone disagrees with you, you’ve got nothing to back up your argument either way.
Edith: Well actually, a company, Textio, now is doing a lot of statistical analysis of job postings to see if they have … If the words in them are gender neutral, or not.
Paul: That’s based across a massive corpus of job pages, versus… It’s not monitoring your actual throughput, or something like that?
Edith: No, it’s just based on … She’s a machine learning PhD.
Paul: Right. That works for something like gender, or diversity in the general case. It’s not going to work on, “are all the Clojure developers aware that this is a Clojure shop?”, as an example.
Edith: Yeah and that’s why you can’t AB test life.
Paul : If only.
Edith: No, I mean, that would be horrible. You’d have to do everything a thousand times.
Paul : Right, right, right.
Edith: What if some of those thousands were just terrible?
Paul : Right. You can AB test web pages.
Edith: No. Not if they don’t get enough traffic.
Paul : Not if they don’t get enough … Yeah. Yeah.
Edith: It goes back to what you said. You can have throughput or latency.
Paul : Right.
Edith: You could test every page if you have a thousand years. If you have a low traffic page … Sometimes, to go to what you said before, you just have to go with, “I feel this color’s better. I feel this color pops.”
Paul: This is one of the most frustrating things about developing software. Everyone knows that you should have data and analytics, and it’s just very difficult to really have any idea of whether you should have analytics for a particular thing. You know you should have it on your funnel. You know you should be measuring what customers are using. But in almost every instance, you can make a justification for just doing it this way.
Edith: You have to at a certain point.
Paul: That’s what’s frustrating. If you could universally say, “In every situation, we’re going to use data. We’re going to use the funnel. We’re going to use these analytics,” then you’d be in a great place. But 90% of your web pages, or your product, actually don’t get enough use to reach any statistical significance, and then you end up only having a funnel on your signup page. Then you don’t really have a very data-driven company, as a result of that.
Edith: That’s the hard truth. There’s still a lot of art in the science. At some point you have to make a decision, like I like …
Paul : Unless you’re at Google.
Edith: I’m sorry?
Paul : Unless you’re at Google.
Edith: Yeah, unless you literally have the world’s population using you, you do have to make a decision like, “I think our jobs page should look like this.”
Paul : Right.
Edith: “I think that our onboarding should look like this.”
Paul : Right. Why? Don’t know.
Edith: Other people do it that way.
Paul : I just kind of feel it. I think this pops more.
Edith: Was that an American accent?
Paul : No.
Edith: I thought you slipped it on in.
Paul : No. I think this color pops a little bit more. That’s what most arguments about UX end up with, if you don’t have high level principles and goals and personas that you’re building the product around.
Edith: That’s fair. I think it goes back to what you said before. There’s always this tension of data versus gut. It’s what we talked about with engineers, making it perfect versus shipping it now. I think they go together. The more data you watch, the more you can be convinced that now is the time to ship, versus the person who’s like, “Okay, it’s just time.”
Paul : Right, right, right. When I’m trying to ship something, I try to make sure that my fears are addressed, more than I try to make sure that the thing is feature perfect.
Edith: That’s a really good way to look at it.
Paul: We shipped this feature that I’ve been working on, and it was just supposed to go to the first 5 customers. What I wanted to do was make sure that the back end was shipped, and then I could test it on our own project and validate: “Does it actually work at all? What does it look like when someone actually uses this in production?” The first thing that I really needed to validate is, “Can I turn it on without causing any problems for everyone?” And that meant I had to make sure that there was no problem for anyone. Basically, what I had to do with the feature was insulate it, so that if it went wrong, in the thousand ways that I couldn’t expect it to go wrong, I knew for sure that it wouldn’t affect the rest of the customers.
Edith: Did you use a feature flag?
Paul: A feature flag, but also… The problem in a lot of languages is it’s not just a feature flag. You have to wrap the exceptions and make sure that the exceptions get caught and just end nicely instead of propagating. It was something that was on the critical path of everyone’s build. If the code went wrong, everyone’s build could be affected. I just needed to make sure that no matter what goes wrong in this code, the builds continue.
Edith: The builds must flow.
Paul : The builds must flow. What I was addressing wasn’t, “How do we feature flag this off?” It was just like, “Can I ship this without being fearful of something going wrong?”
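The insulation Paul describes, catching everything inside the flagged path so the critical path survives, might look like this (a sketch; the function names are stand-ins, not CircleCI code):

```python
import logging

def run_new_deploy_step(build):
    # Stand-in for the new, flagged functionality; assume it can blow up
    # in ways you didn't anticipate.
    raise RuntimeError("unexpected failure in new code")

def run_existing_build_steps(build):
    build["completed"] = True

def run_build(build, new_feature_enabled):
    if new_feature_enabled:
        try:
            run_new_deploy_step(build)
        except Exception:
            # Catch everything: a failure in the flagged path must never
            # stop the rest of the build. The builds must flow.
            logging.exception("new deploy step failed; continuing build")
    run_existing_build_steps(build)

build = {"completed": False}
run_build(build, new_feature_enabled=True)
print(build["completed"])  # True
```

The feature flag controls who sees the new path; the blanket exception handling is what removes the fear of turning it on.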
Edith: I think you sum it up very well. Fearful.
Paul : Right. Continuous delivery is mostly about fear more than anything, I think.
Edith: I think it’s about mitigating fear because it’s very freeing that if you could turn something off at any time, you could move forward.
Paul : Right and if you ship something and you know that it’s not going to break things …
Edith: You could ship it.
Paul : Right. Exactly.
Edith: I think it’s when you have these big bang releases where it’s everything all together in one kluge, that you have a lot of fear.
Paul : Yup.
Edith: I had a customer who said they stopped doing continuous integration, because it was faster not to. They were just…
Paul : Let me just predict how this ended up.
Edith: Have you seen this movie already?
Paul : I’ve seen this movie so many times.
Edith: You saw…
Paul : Every iteration of this movie.
Edith: Every thousand?
Paul : Yup.
Edith: How does the story end?
Paul: The story ends with software not being able to be delivered. Engineers quitting is a common ending, or at least a last-chapter surprise. It’s like, “Oh, we’re shipping so fast.” No, we can’t ship anything, because it’s so frustrating to ship things when we just don’t know if it’s going to work.
Edith: How do you … That’s the end of the story. What’s the next chapter?
Paul : After the end of the … What’s the sequel?
Edith: What’s the sequel?
Paul : The sequel is they bring in a new VP of Engineering. The VP of Engineering says, “What the hell? There’s no testing.” The VP of Engineering sets up testing. Everyone complains about testing and that everything was much faster before testing, but they have to do what the VP of Engineering says. In about a month, they realize their velocity is about 10 times faster than it used to be and everyone who complained about testing is actually happy.
Edith: Yeah. That’s what happened at our customer. They said they stopped doing testing because it seemed to take too much time, and they got to the point where they couldn’t ship. They were like, “Actually this is the same as basic housekeeping.” You can’t just not do your dishes every day for 2 months and expect…
Paul : Was there anyone fired on this journey?
Edith: No, they came to a pretty quick realization.
Paul : I think they probably got lucky there. People have been fired for less.
Edith: Oh, really?
Paul: If you’re in charge of a software team and you bring in something that brings the team to a halt, your boss might need a little encouragement to believe that you actually did know what you were talking about in the first place.
Edith: I heard this legendary story that Salesforce, they got to such a state that it took them two years to ship anything.
Paul : Wow.
Edith: They fixed it. They fixed it because they realized this is a problem. It takes us two years to ship anything.
Paul : Right.
Edith: Well Paul, it was fun catching up with you about product management and how to AB test, or not.
Paul : Whether AB testing is the same as feature flags. It is.
Edith: Oh, Paul, we always agree, but not on this one. I would say, all right … I would say that you can AB test if you have feature flags, but I don’t say you have to AB test if you have feature flags.
Paul : I think this is just language. AB testing is … The way that I look at feature flags … We have feature flags that sit in a bunch of different places. We have feature flags of, “Do I enable this on this machine?” We have feature flags for, “Do I enable it for this customer?” Then we have feature flags for, “Do we randomly enable this across the customer base and what proportion do we use?” They’re all some form of AB testing. They’re all some form of feature flag. I don’t see a distinction between them at all.
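The three flag scopes Paul lists, per machine, per customer, and a random percentage of the customer base, can indeed live behind one mechanism, as he argues. A rough sketch (illustrative names only, not any vendor’s API):

```python
import hashlib

class Flag:
    """One flag mechanism covering machine targeting, customer
    targeting, and percentage rollouts."""

    def __init__(self, name, machines=(), customers=(), percent=0.0):
        self.name = name
        self.machines = set(machines)
        self.customers = set(customers)
        self.percent = percent

    def enabled(self, machine=None, customer=None):
        if machine in self.machines:
            return True
        if customer in self.customers:
            return True
        if customer is not None and self.percent > 0:
            # Deterministic hash-based rollout across the customer base.
            digest = hashlib.sha256(
                f"{self.name}:{customer}".encode()).hexdigest()
            return int(digest[:8], 16) / 0xFFFFFFFF < self.percent
        return False

flag = Flag("parallel-deploys", customers={"acme"}, percent=0.25)
print(flag.enabled(customer="acme"))  # True
```

Whether you call the percentage branch an “AB test” or a “rollout” is, as Paul says, just language: the mechanism is identical.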
Edith: I think because in customers’ minds they think of AB testing as some sort of statistical thing, where they’ll get a deterministic result. Whereas, sometimes they’re using a feature flag just to roll out a new feature and expose it to some users.
Paul : Would you classify it as an AB test if it’s statistical and not an AB test if it’s not statistical?
Edith: Paul, I actually agree with you. I was just saying, I’ve talked to a lot of customers and when you say, “AB testing,” it puts a lot of fear into them that they have to …
Paul : Right, right. I agree that a lot of people see a distinction between AB tests and feature flags. I think my point is this. I’ve come to the conclusion that there is no distinction.
Edith: I think we actually agree, after all.
Paul: Thanks for listening to this episode of “To Be Continuous,” brought to you by Heavybit and hosted by me, Paul Biggar of CircleCI and Edith Harbaugh of LaunchDarkly.
We’re pleased to announce “To Be Continuous”, a new podcast about continuous delivery and software development. Edith Harbaugh, CEO and cofounder of LaunchDarkly, teams up with Paul Biggar, founder of CircleCI, to give a frank and humorous take on the subject.
In this first episode, Edith and Paul talk about why people are doing continuous delivery as well as its benefits and barriers. Also, why you should never ask to see Paul’s PhD thesis.
The show is brought to you by Heavybit. To learn more, visit heavybit.com and while you’re there, check out their library, home to great educational talks from other developer company founders and industry leaders.
Paul: Okay, Edith. What’s your favorite thing about continuous delivery?
Edith: Just making people happier, quicker.
Paul: That is not where I was expecting to go with that.
Edith: Well, I think it comes back to — I’m an engineer, and part of being an engineer is that you want to see stuff out there. You want to see people using your products. You want to see them happy. You want to hear if they’re not happy. You want to fix bugs. You want to ship features.
Paul: So one of the really nice, kind of, happiness things that we saw was support tickets. People would come in with a support request, and we’d be able to ship a fix or a feature change, whatever it was, that went out, like, 20 minutes later.
Paul: And you’d be able to say to them, oh, sorry about that, it should be fixed now.
Edith: Yeah. I mean, we do the same. We’re a very young company, and if somebody complains, it’s “Oh, we’ll fix that right away.”
Paul: And nothing defuses a Twitter rant like, “It should be fixed now”.
Edith: Yeah. I think, part of it though is a consequence of us, quite frankly, being very small and not having as many customers yet. Like when I was at TripIt, we had 10 million users.
Paul: Right. I think that’s probably a different kind of situation.
Paul: I presumed they did continuous delivery.
Edith: You know we did a weekly release and then we would do patches.
Paul: And this is mobile software.
Edith: This is mobile, yes. We had a whole different release stream for mobile, mainly around the Apple store.
Paul: Right, right, right. Yeah, the great destructor of continuous delivery.
Edith: Yeah. I think there’s all this urge and desire to be more reactive in mobile, and there is such a huge “thou shalt not pass” sign that Cupertino puts up, which is basically: your app might be great, but it’s not happening.
Paul: So I wrote this blog post, I think it was on PandoDaily or something, about how Apple was responsible for this Kindle glitch that went out. Amazon released a new version of the Kindle app that, I think, deleted all of your e-books or something like that, and they weren’t able to turn it around super quick. They got a special exemption and got it turned around within a day. In the end it was fine, but really the problem, and the reason it happened, was that the lack of a continuous delivery process meant they couldn’t just ship the new code turned off and then gradually turn it on, that sort of thing that we do on the web but they weren’t able to do on mobile.
Edith: Yeah. And I think it’s funny because mobile has its reputation of being so much faster and more modern, but in terms of delivery to end-users, it’s five to ten years behind and it’s all Apple.
Paul: Yeah. It’s amateur.
Edith: Yeah. It’s all Apple. We saw that so many times at TripIt: we would have a fatal bug out in the field, and we would have a patch ready, and Apple would say, “Well, you know, we’ve got to review this.”
Paul: And so that’s the thing like…
Edith: And then we were a good customer. We were the number one travel app.
Paul: Developers can write a fix in five to ten minutes once they know the thing. I mean, it depends on the fix, but yeah, very often five or ten minutes. And any time you add on top of that is time where more customers notice, more customers are affected, more customers’ workflows are disrupted, and it’s just pure overhead. Apple is just introducing overhead, but also not having a continuous delivery process in your organization, releasing quarterly, having someone who needs to click the button: things along these lines are all more things that just add barriers to fixing customer issues.
Edith: Yeah. So, this goes back to something we were talking about before, about how you said everybody wants to ship stuff faster. I said, no, there are a lot of people whose–
Paul: Whose job is to make shipping slower?
Edith: Yeah. So I mean, Apple does not really care if you ship stuff faster. Like Apple gets dinged if stuff is —
Paul: Is broken or —
Edith: Or bad. At least that’s their viewpoint. But then they put up all these processes, where you’re like, “I have a broken build, just let me patch it.”
Paul: So, this is one of the great, sort of, falsehoods of software delivery: that trying to make sure it’s perfect is better.
Because what actually happens is that trying to make sure it’s perfect means it never gets shipped. Shipping things and iterating on them is what makes them better. And startups understand this: the whole Lean movement was built around this idea, Agile is built around this idea. And everyone is pretending to be Lean, everyone’s pretending to be Agile.
Edith: Well, it’s kind of like, if you ask me whether we’re Agile, what am I going to say, no?
Paul: Yeah, right. I tend to ask about Agile with a capital A, and that implies they follow the full Agile process. And nobody, nobody follows the full Agile process.
Edith: Yeah. So to go back to one of the major tenets of Agile, which is people over process. I think one of the big barriers to effective continuous delivery is that there are people whose bias is to have stuff be perfect, as you said.
Paul: Right, right. And they like processes that enforce that. So, I mean, Apple obviously likes a process that enforces that, like a human who makes sure there’s no nudity or whatever else is against the rules on the App Store. But I think they also have business reasons for not allowing it. What everyone would like to do is just ship a web view in a native container and then just update it off the web. And Apple doesn’t let you do that, because they’re trying to make you build native apps, and they’re trying to keep it all in a thing that they control, where they control the ecosystem, where you can’t just easily take an app and move it off to Android or whoever else would be in the ecosystem if Apple allowed that.
So they have a very strong business reason to keep people from having a really good continuous delivery workflow.
Edith: Yeah, it all comes back to money. They want to monetize off the app store.
Paul: Apple wants money.
Edith: No, really? They didn’t give you that computer for free? But if everything is off web views, they can’t enforce their 30% cut.
Paul: Right. They can’t control the ecosystem. We may as well ship it to Android then.
Edith: Well not even that, it’s like if you sell something through in app, they get 30% right off the bat.
Paul: Okay, right, right and they can’t enforce that.
Edith: If you sell it via a web view, say you have a web view and you’re checking out, there’s no way for them to take a 30% cut.
Paul: It’s funny how many different incentives there are. I tend to look at the world through incentives; it’s one of the lenses through which I look at the world. And there tend to be a lot of incentives not to do continuous delivery. There are a couple of anti-patterns that I see, and one of them is fear: people are just afraid that things will break. Then the other one, I guess this is kind of fear as well, is this anti-pattern in product management of “why wasn’t I shown this?” or “why didn’t someone check with me?” You also see this in ranters on Twitter.
If you have continuous delivery, with things going out all the time, then there isn’t always a human who can validate that you’re doing it right, at least if your process isn’t set up in such a way as to allow that.
Edith: I completely agree. I think a lot of continuous delivery is accepting kind of decentralized authority.
Paul: That is a really interesting concept.
Edith: You know, so the whole idea of a release train was a monolith with these many layers of approval. Everyone has to be checked as —
Paul: There’s a human, there’s a release manager who’s validating that’s good enough at each time.
Edith: And even beyond that, then marketing reviews it, then you have legal. It’s very hierarchical, whereas continuous delivery is anybody could push at any time. That’s very decentralized.
Paul: So continuous delivery is anyone can push at any time, but it’s not necessarily that every single push is released at any particular time. There are two related terms that people confuse here: the continuous deployment model, where every piece of code that you push is shipped, and the continuous delivery model, where every piece of code that you push is possible to be shipped.
Edith: Yeah, I think it’s shipped versus shippable.
Paul: Yes, yes. I think web versus mobile is a nice example of this. Certainly for most of our customers, every version of their iOS app that they build on CircleCI is shippable: there’s a binary that you can download to your phone, that you can upload to your over-the-air updater or whatever it is. It is actually shippable, and Apple is putting that barrier in there to prevent it being shipped.
Edith: That goes back to the decentralized versus hierarchy.
Paul: Oh yeah, if anyone is hierarchical, it’s Apple.
Edith: Then it goes back to who does this ultimately serve because then you have all these people with broken phones out in the real world.
Paul: Right, right. We probably shouldn’t keep going on about Apple, but I think it’s an interesting model in that they tend not to let things get too broken. It allows them to make sure that people aren’t really violating the rules too much. So, you can’t sneak in viruses.
Paul: There can still be viruses, but they’re harder to sneak in than they would be with continuous delivery, or in particular the web-view model of continuous delivery. You can’t sneak in things that are against the rules: gambling and porn and whatever else Apple doesn’t want in its ecosystem. And the expense of that is “innovation.” I’m doing air quotes for people at home around “innovation.” But you just can’t ship things as fast. People who work in their ecosystem can’t ship things as fast, and you’ll find that a lot of startups who are trying to do something super interesting that doesn’t really require being on mobile will do it on the web, because they can just ship way, way faster.
Edith: Yeah. It’s basically, to your point: perfection comes at the cost of speed. My old boss, Gregg Brockway, who was the TripIt co-founder, said a perfect plan tomorrow is not as good as a good-enough plan today. He said that to me because I had built this new feature and I didn’t feel it was good enough to ship, and he gave me some really good advice, which was: let’s just ship it. If it’s broken, we’ll fix it.
Paul: I guess, yeah, obviously that relates to feature flags, and sort of points toward your company slightly. Edith, of course, is the CEO of LaunchDarkly.
Edith: Thanks Paul. Paul is the CEO of CircleCI.
Paul: Feature flags as a service. But feature flags were, I think, really the innovation that made continuous delivery properly possible: the idea that you’re going to ship code separately from shipping features.
Edith: Yeah, it goes back to shipped versus shippable. You have everything all bundled up and kind of ready to go, out in the field but hidden.
Paul: Right, right. There are a bunch of features that we have that are maybe 80% shipped, where we sent them out to customers and determined that they just weren’t good enough for some reason. Maybe there’s a software bug, maybe there’s an edge case that we hadn’t considered that actually comes up in production. But I don’t think that we could realistically do continuous delivery if we weren’t shipping things in their off state.
Edith: That’s cool. Can you talk about an example of a feature like that?
Paul: So there’s this thing that we’re rolling out at the moment, and it totally exists, and any customer could use it if only they knew about it.
Edith: It’s like the secret sauce at In-N-Out?
Paul: Exactly, exactly. I mean, we’re not telling them about it, but it’s going to be this super awesome feature. It’s a big feature, and the foundation is built. The foundation is part of every single build, and if you know the super-secret invocation, you can get it. So we’re going to start by rolling it out to customers who actually need it, and we’re going to validate it against the first five or ten customers: a very standard product management thing of validating the business case, validating the customer case, making sure that the thing we’re shipping is really valuable to our customers before we announce it, before we tell 100% of our customers.
So maybe 10%, 20% of our customers will have experienced it before we announce it. And actually a whole lot of customers will experience it without knowing that they experienced it. I think that’s kind of the ideal way of shipping things: validate the product first, validate the technology first, make sure that it all works, and only have the marketing version of the launch long after the code is validated in production.
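The staged rollout Paul describes (ship dark, expose to five or ten customers, then 10–20%, then announce) can be sketched as a tiny state machine over a flag’s rollout percentage. This is a hypothetical sketch, not CircleCI’s real process; a real system would gate each step on metrics and human review.

```python
# Stages of exposure, in percent of customers, from dark launch to full release.
ROLLOUT_STAGES = [0, 1, 10, 20, 100]

def advance_rollout(flag, healthy):
    """Move a dark-launched feature one stage forward, or roll it
    all the way back to dark if the health check failed."""
    if not healthy:
        flag["rollout_percent"] = 0  # kill switch: back to dark
        return flag
    idx = ROLLOUT_STAGES.index(flag["rollout_percent"])
    if idx < len(ROLLOUT_STAGES) - 1:
        flag["rollout_percent"] = ROLLOUT_STAGES[idx + 1]
    return flag
```

The point of the shape is that “announce” is the last stage, long after the code has been validated in production at the smaller percentages.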
Edith: Yeah, I mean, we did the same thing at TripIt. One of the biggest features I ever worked on was scanning people’s inboxes for emails. TripIt basically takes emails, pulls out the interesting things, and then gives you a beautiful itinerary. You used to have to forward stuff to plans@tripit.com. What a lot of our customers were asking for was for us to just scan their email and pull their travel emails out for them. Sounds simple until you think about it: actually, this is a pretty massive undertaking. You have to get the right emails (we found people who would have, like, 10,000, 20,000 emails) and pick the right ones and only the right ones. So we dark launched this: we only did it for people who opted in. And they gave us really good feedback. We made a lot of really simple mistakes at the beginning, because there’s a ton of logic. We would pick up, say, a Turkish Airlines promotion for travel to Istanbul, think that person was traveling to Istanbul, and make a trip. So we had to get smarter, smarter, smarter about our filtering, and then we got to the point where we were like, “Okay, this is good enough that we can release it to the entire million-plus base.”
Paul: So there are a bunch of validation techniques. Do you know Marty Cagan?
Paul: So, for people at home, Marty Cagan is this product guy who runs Silicon Valley Product Group. He’s well-known in the Valley as the product guy. He runs a seminar, and a bunch of Circle people went to it. He talks about product validation techniques, and the product validation techniques are basically the things that enable you to do continuous delivery really well. There are two of my favorite ones, and one of them is the button in the app that doesn’t actually do anything.
Edith: I love that one. If nobody ever pushes it, don’t build the darn thing.
Paul: Exactly, exactly. So in order to do that, you really don’t want to show that button to everyone, because what if everyone loves that button? You want to ship that blank button to a small number of people, and then a larger number of people, to make sure the volume doesn’t get out of hand. If you show it to a thousand people and a thousand people all click it, you don’t want 10 million customers who would really love that feature being told, “Fooled you, the feature doesn’t actually exist.”
Edith: Well, what usually happens is nobody clicks it. I’ve run that experiment many times: nobody clicks it, and then you just try different wording. “Let’s make that button bigger.” “Let’s make the button be the entire page.” All right, nobody cares. “Let’s make that button be a pop-up.” I’ve pushed hard on features that just turned out to be dead ends, but it’s better to find that out early.
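The “button that does nothing” test Paul and Edith describe can be sketched as a capped experiment: show the fake button to at most N visitors, count clicks, and decide from the click-through rate whether to build the feature. The `PaintedDoorTest` class and its names are invented for this illustration.

```python
class PaintedDoorTest:
    """Fake-button experiment with a cap on how many users see it."""

    def __init__(self, max_exposures):
        self.max_exposures = max_exposures
        self.exposures = 0
        self.clicks = 0

    def should_show(self):
        """Show the non-functional button until the cap is reached."""
        if self.exposures >= self.max_exposures:
            return False  # cap reached: stop fooling new users
        self.exposures += 1
        return True

    def record_click(self):
        self.clicks += 1  # a user wanted the nonexistent feature

    def click_through_rate(self):
        return self.clicks / self.exposures if self.exposures else 0.0
```

If a thousand exposures produce near-zero clicks, as Edith says usually happens, that is cheap evidence not to build the thing; the cap protects you in the opposite case, where everyone loves the button.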
Paul: Yup, that’s one that works really well with continuous delivery. And the other one, which I like almost entirely for the name, is the Wizard of Oz feature. This is where you build a button or something like that, and someone clicks it. Let’s say it’s “check your inbox for trip things”: you click the button and you get told, “We’ll get back to you in 48 hours.” And then there’s a human who’s aggressively filtering (I guess they’re not going to be aggressively filtering through your inbox, so maybe it doesn’t work in that case), but someone is checking that use case. They’re uploading the spreadsheet manually. They’re entering the data, they’re validating it.
But it’s really not possible to launch that kind of feature at scale either, because if 10 million people click it, you don’t have the ability to satisfy the volume that could possibly be there, even though you’re almost certain it isn’t. So you need to scale it up only to the level that you can actually tolerate.
Edith: Yeah, and I think this ties back to Lean. I think people have a misconception that Lean means lazy, or just throwing shit at the wall, when Lean really just means: let’s not be wasteful.
Edith: Like let’s not build out this whole feature until we’ve established a need for it.
Paul: One of the core tenets of Perl, the language, and Perl, the community, is laziness, and they mean it in the same way as Lean: don’t build shit that you don’t need. What’s the other phrase that comes up from Agile? You ain’t gonna need it.
Edith: Yeah and I say this because it’s so painful like I’ve built features that nobody uses and it’s just sad.
Paul: Yeah. People get angry at the features. You’ve never experienced this? Like, there’s a feature that nobody uses but that somehow is like deeply embedded in your code base and people want to rip it out all the time but they can’t for whatever reason.
Edith: You mean engineers, internal engineers.
Paul: Engineers, yeah. Engineers want to rip it out and maybe they can’t, because there are some important customers using it, or because there isn’t a new workflow for it, or maybe there’s just a political thing, you know. Someone worked hard on that and no one internally wants to say –
Edith: This is a loser.
Paul: No one uses that feature.
Edith: Yeah, it keeps coming back to: I think software is really more about people than processes or tools.
Paul: Well, continuous delivery is about the process.
Edith: I think it’s about people buying into the process, that it’s better to have proof points. It’s better to admit that you can be wrong than put all your effort into trying to be perfect.
Paul: So one of the areas where people tend to have problems with continuous delivery is this idea that the code they’re going to ship is known or not known, as in, you don’t need permission. You don’t have to go through a product manager or whatever; whatever code gets pushed actually gets deployed. So how do people deal with this kind of problem?
Edith: It goes back to decentralization. I think you have to have, and I keep coming back to, people. You have to know and trust your engineers, and know and trust that they’re doing the right thing. And this goes back to, well, I’m not even going to touch this debate right now, the whole offshore versus onshore thing.
Paul: Yeah, let’s not touch that one. We’ll just move on. So perhaps a different way of stating it is: you sometimes have team members in remote time zones, and the PM isn’t online at that particular moment to validate that a particular feature needs to go out, but there’s a customer who’s demanding it, and it’s like, “What do you do?”
Edith: I don’t think there’s any right answer to this one.
Paul: Like all people problems.
Edith: You know how good is the developer? How important is the customer? Is it really something that needs a judgment call from a PM or is it just that the PM needs to feel like they need to be there?
Paul: Right, right. This is something that I personally suffered a lot from, the whole “how come this wasn’t run by me?” sort of thing. It’s a really difficult thing, when your product is your baby, to see stuff going out that isn’t necessarily the right feature, or that isn’t necessarily done in the right way, or in some cases it’s just the way that you would do it versus the right way.
Edith: Yeah, and I’ll say it again, Paul: this isn’t really so much about software. It’s more about overall management and decentralization.
Paul: So I think the reason it’s important is that customers –
Edith: I’m not saying it’s not important. In fact, I think this is one of the biggest decisions you can make as a company.
Paul: Whether to decentralize?
Edith: Yeah, I mean, you can end up with the problems of hierarchical –
Paul: I don’t think it’s necessary to decentralize.
Edith: I don’t think it’s necessary but if you don’t – you could say, “Okay, I’m going to be the Salesforce model which is very top down.” Or you can be Facebook which is extremely bottoms up or I can come up with something that’s –
Paul: Even in a bottoms-up organization, you’re going through product managers, you’re working with people who are actually validating that customers want these features. And especially if you’re an organization that values A/B tests and experiments and feature flags, there’s a process that you can create where everyone is happy with the things you ship. And we’re –
Edith: Well, nobody is ever happy all the time.
Paul: Okay, okay. I think what I’m trying to say is –
Edith: And I think I’m being devil’s advocate.
Paul: You are. You’re just totally fucking with me.
Edith: Not totally, maybe just like – maybe –
Paul: Eighty percent.
Edith: Maybe like 10% roll out.
Paul: That’s a slow burn. My sense is that there’s this kind of overlap between continuous delivery and product processes, and it’s quite a complicated one to navigate. When people are switching to continuous delivery, often the problems they’re going to experience are not technical problems. It’s not just the fear of what might break if we deployed six times a day. It’s: what do we need to set up, the people equivalent of the scripts that you check into your repo to do the deployment, to make sure that we’re doing things safely and doing the right things for our customers and for the organization?
Edith: Yeah and also how much risk are you willing to accept your –
Paul: Risk is super interesting for – like the whole continuous delivery thing is all about risk.
Edith: And I think part of continuous delivery is saying the biggest risk is to move slowly. Though, for example, I think pacemakers should never do continuous delivery.
Paul: Because the risk is super high.
Edith: Yeah, you’re installing medical devices in somebody’s body.
Paul: Right. I remember when I was raising the first seed round for CircleCI, in like 2012 or something, I’d draw this graph of what level of safety you need. The question that comes up is, what level of test coverage should you have? And it’s not a question you can answer without knowing the complete context. If you’re making medical devices, sending people to the moon, that sort of thing, then that’s where you write provably correct code, where you run, like – why can’t I think of –
Edith: Static analysis.
Paul: Those are particular – but there’s a particular subset of static analysis –
Edith: By the way, he got his PhD in static analysis.
Paul: But yet, it’s only been like five years and I can’t think of any of the words that apply to it. So there are particular subsets of static analysis, programs that are provably correct. And if you’re flying on an Airbus, you’re flying on software that has been built by the leader, the guy who invented entire fields of provably correct programs. On the other hand, if you’re making a consumer app, if you’re an early-stage startup, hell, if you’re making Facebook, where that’s what the internal culture is, you can ship things with a much greater degree of risk: a greater degree of software or breakage risk or whatever that is. Because what you’re saying as an organization is that the real risk you want to prevent is moving slowly, being stagnant, not getting products to market, not getting products to your customers.
Edith: Yeah, exactly. I remember a dude who’s now very senior at Hortonworks drew the triangle of quality, cost, and time. And he said you just move around those axes: you can have an extremely high-quality thing that takes…
Paul: But never ships.
Edith: Or you can decide we’re going to skimp on quality because we want to ship faster.
Paul: So I’m going to disagree, actually. I think the idea of extremely high-quality software that takes a very, very long time to ship is not possible. When you take a very long time to release software, what you end up with isn’t quality. To get code to be perfect, you have to ship it. You have to put it into the real world.
Edith: So Paul, I have to say, I was about to disagree with you, but now I completely agree. Because I think the lesson I’ve learned is that you cannot internally test the use cases that the real world sees. Particularly, to go back to mobile, with the rise of mobile there are just so many devices out there and so many situations with low battery, low signal.
Paul: There are entire startups dedicated to simulating your mobile device under all these kinds of different conditions.
Edith: People are out in the real world and they suddenly hit something you never thought of. Actually, I heard something really interesting. I chatted with a Facebook director of engineering, and they said they could not simulate Facebook’s load. The only way that they could really see if something could stand up to Facebook’s load…
Paul: Is to put in to production?
Edith: Yeah. And this is Facebook, which has billions of users, the only site of that scale in the world.
Paul: Yeah, I heard they’re quite popular.
Edith: I hear the kids dig it.
Paul: So on the topic of things kids dig: it seems that if you’re ever going to make something that’s popular, it’s never going to be popular on the first go. There’s always going to be a nugget of something that you got right, and in order to really get from there to something that grows rapidly, the quintessential Silicon Valley success story,
you need to be able to iterate, ship new things every day, ship new experiments every day, and be able to validate what was right.
Edith: I completely agree and validate on real users.
Paul: Yes. Real users are tough people. I was chatting with the product folks earlier today about this kind of topic, and dogfooding was the issue. This is a little bit of a tangent, but we’re a developer-facing company, a very technical company; all of our first eight or nine or ten or fourteen people, whatever, are engineers, and we used the product heavily from the start. But when you dogfood, you assume that everyone has the same workflow that you do, experiences the product the same way that you do, and you’re not actually getting what the customer really experiences.
Edith: Yeah, I hear you, because at LaunchDarkly, we dogfood our dogfood’s dogfood; it’s like this chain of turtles. LaunchDarkly is feature flags as a service, and so we use feature flags ourselves. So something we actively try to guard against is saying, “Okay, we can build a lot of stuff right now for us, because we’re the most advanced user out there.” But we need to dial it back and build more and better onboarding.
Paul: Right, right. Yeah, onboarding, I guess you have a business there that needs it.
Edith: Yeah, I’m very pro-dogfooding but I think you just always have to be cognizant that if you dogfood your stuff, you’re probably the power user.
Paul: Right. That’s exactly what I was getting at: you’re not the exact target user, you’re not learning at the right rate, and the stuff you end up shipping is stuff for you, which is great, but not necessarily the stuff for your customers, because they have a different workflow or whatever it is.
Edith: I do believe in dogfooding your stuff and figuring out the pain points like I think —
Paul: I think dogfooding is great. The success of product, especially at developer-facing companies, is making the entire company realize that the workflow they use is not the same as their customers’, and informing people about the edge cases in the product that are not experienced by the main product team.
Edith: That’s really interesting Paul and actually this might be just me being selfish, but some feedback I’ve gotten on LaunchDarkly is that we should be more prescriptive about what workflow people should follow with feature flags, and I wonder if you get the same at CircleCI about being prescriptive about how to do continuous integration.
Paul: So when we were starting, the most popular thing and the best-designed developer product was Heroku. And Heroku had this logic of: you will get an amazing experience if you conform. The big one was read-only file systems. If people would not conform, they couldn’t use Heroku, because of the read-only file systems, and Heroku actually changed the whole world around that model.
We felt that we weren’t able to do the same thing. One reason was that we were a small startup and didn’t have the same sort of traction that Heroku had at the point they were answering this. But we also felt that every developer is a special snowflake, every developer workflow is a special snowflake. So the product principle we came up with is: there are best practices in every developer community out there, and we know subsets of those; each community has idiomatic ways of using the tools in its ecosystem.
So if you follow those best practices, Circle will just work, and it will be this amazingly beautiful experience. If you deviate, we still want you to be able to use the product, so we provide escape hatches for almost everything, and we’re working hard to make sure that we have escape hatches for the things that are left.
Edith: Yeah, I think that's really important. I think we fall in love with our own products and we think people want to turn every knob and twiddle every dial, when really they just want it to work.
Paul: Yeah, they just want it to work out of the box. And yeah, it's funny that there are so many ways to implement feature flags. We've had homegrown feature flags since before you guys were around, and we've just kind of ended up in this situation — I mean, really, the product that you use constrains your workflow. So we have three different ways of doing feature flags, and we're always trying to make our continuous delivery process fit into those three buckets.
So I have the ability to ship a feature flag that's set for a specific machine, or for specific users but only randomly, or for a specific project but manually set. Those are the three things that solve like 90% of our use cases. But we wanted to do continuous delivery in a way that doesn't really work with those three mechanisms, so instead we need to do something else, and we find different ways of deploying and different ways of getting the product out to customers that fit within those things. And sometimes, it's not pretty.
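The three homegrown modes Paul describes — per-machine, randomized per-user, and manually set per-project — can be sketched roughly like this. All names here are hypothetical illustrations, not CircleCI's actual code:

```javascript
// Hypothetical sketch of the three flag modes described above.
const flags = {
  "new-build-agent": { kind: "machine", machines: ["build-host-7"] },
  "fast-cache": { kind: "percent", percent: 10 },                   // random per-user
  "beta-api": { kind: "project", projects: ["circleci/frontend"] }, // manually set
};

// Deterministic hash so a given user consistently falls in or out of a rollout.
function bucket(id) {
  let h = 0;
  for (const c of id) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h % 100;
}

function isEnabled(name, { machine, userId, project }) {
  const flag = flags[name];
  if (!flag) return false;
  switch (flag.kind) {
    case "machine": return flag.machines.includes(machine);
    case "percent": return bucket(userId) < flag.percent;
    case "project": return flag.projects.includes(project);
  }
}
```

The point of having exactly these three shapes is that every deployment then has to fit one of them — which is the constraint Paul runs into next.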
Edith: Like, what's the use case that you had to jury-rig?
Paul: So we were disabling a field in our API, and we were pretty convinced that no one used this field. So we set up a feature flag for the field and turned it off for our own project. Ideally, we would have scaled that up — it wouldn't just have been us. Let me explain what we did, and then what we would ideally have done. So: turn it off for ourselves, wait a day or two, then turn it off for a random smattering of 10 customers who were building right then.
We waited a couple of days — no complaints. Turned it off for everyone, waited a week or two, no complaints, and then we actually deleted the code. But that was a manual process: first turn it off for ourselves, then pick 10 customers and actually type the project names into our chatops thing in Slack —
Edith: Oh, you’re one of those chatops people.
Paul: Yeah, yeah. It's awesome. I couldn't recommend it more. So yeah, we do that, and really what I want is to scale it up to 10%, to 20%, to 30%. But we can only do that on a randomized basis. We can't do it on a sticky basis, where once customers are in, they stay in. So we weren't able to use that particular feature flag mechanism.
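The sticky ramp Paul wants — where a customer who is in at 10% stays in at 20% — falls out naturally from hashing the customer ID instead of rolling dice on every request. A minimal sketch, assuming a hypothetical customer ID (this is not CircleCI's or LaunchDarkly's actual implementation):

```javascript
// Deterministically bucket each customer into 0..99 by hashing their ID.
function bucket(customerId) {
  let h = 0;
  for (const c of customerId) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h % 100;
}

// A customer is in the rollout if their bucket is below the current percentage.
// Because bucket(id) never changes, anyone in at 10% is still in at 20%:
// bucket(id) < 10 implies bucket(id) < 20.
function inRollout(customerId, percent) {
  return bucket(customerId) < percent;
}
```

The design choice is that the randomness lives in the hash, not in a per-request coin flip, so ramping the percentage only ever adds customers and never drops ones who already saw the feature.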
Edith: Paul, you know, you can use LaunchDarkly on a feature-by-feature basis.
Paul: Yeah, yeah, no.
Edith: I mean, you could keep your homegrown system and then just use us for really important features.
Paul: That’s an interesting idea. I haven’t thought about that.
Edith: I mean you could just use us for those features and then keep your old system in place. Then gradually, all your features belong to us.
Paul: It'll get there. The other thing — we were talking a lot about SaaS software, and obviously continuous delivery is a thing which came back because of the prevalence of SaaS software. We're launching on-premise software as well at CircleCI: you can buy CircleCI Enterprise, and that's literally a thing that we ship, that runs in your own VPC or on-prem. We can't really ship the same things to do testing. For a start, we're not allowed to ship our analytics code in there, so we don't run Mixpanel in the stuff that we ship to enterprise customers, and so we can't tell who's using which parts of the features. It makes the whole continuous delivery model much more difficult.
Edith: Oh yeah. I mean, we talked before about how Apple is a gatekeeper. On-prem is a gatekeeper too.
Paul: Really, that's what they're trying to be. They're deliberately trying to be a gatekeeper.
Edith: I think continuous delivery and SaaS are so tightly intertwined because one enables the other.
Paul: Right. There was this blog post that I read kind of in the dawn of SaaS — it was by Joel Spolsky, on the Joel on Software blog — where he talks about switching a Fog Creek product from a downloadable thing that runs on your own machines to being actually on the web, being a web application.
One of the things he talked about is being able to ship things that customers don't see yet, being able to ship fixes for customer bugs immediately after they appear, and being able to get notifications about errors immediately, instead of being sent a bug report by an actual customer via email that you then have to catch and replicate.
Edith: We used to be so blind Paul.
Paul: It was fucking awful. So I worked in the games industry briefly when I finished college the first time. Shipping code that runs on a console, that is going to be shipped on a gold master disc six months from now, and that you're never really going to be able to validate — games aren't even shipped that way now, but that was a fucking awful time: trying to understand all the different ways that software could break, and all the different ways that your code might not work, quite literally shipped to millions of people at once.
Edith: Yeah, to tie back to something we were talking about before, that's when you get feature and scope creep to the point where, like, Duke Nukem never shipped.
Paul: A particular problem is the graphics engine — with anything that involves a state-of-the-art graphics engine, things will just slip until they don't ship.
Edith: Yeah, and then all of a sudden you have all these issues because people's computers themselves have moved on.
Paul: Right, right, the expectations are different as well. Like, if you're shipping Duke Nukem and it's been seven years now, the thing you ship is going to have to be amazing. It will have to be life-changing, or else people will wonder what's going on.
Edith: Yeah, all you've been trying to do for the past seven years is just to, like —
Paul: Ship anything. I’m very, very glad to be in the SaaS business.
Edith: It's liberating. I think the analogy is the transition from books to newspapers to online.
Paul: Right, because people are — the news cycle changes as a result of it.
Edith: Yeah. If you can publish multiple times a day, you just publish.
Paul: Yeah and of course, you can continually go back and edit those articles in case you —
Edith: Or put out a retraction or a correction. Unlike a book — once a book is out in a bookstore, you know, it's there.
Paul: Yeah. The biggest thing I've ever written — or the longest thing I've ever written — was my PhD thesis. And I have a paper-bound copy in my living room that I never look at. But occasionally, someone will come over and be like, "Oh, here it is." They'll open it to a random page, and looking over their shoulder, I'll see a typo. Whatever page they open, there's some typo. I've read that thing 50 times, but somehow I didn't manage to get them all out.
Edith: That's really funny. Are they terrible typos or just minor ones?
Paul: I mean, it's things that a spell checker would have caught sometimes.
Edith: Oh, did you write it in some terrible LaTeX or something?
Paul: Of course it was LaTeX.
Edith: Yeah of course, oh yeah. So it’s probably beautifully formatted.
Paul: It looks amazing, Garamond font.
Edith: So it looks beautiful but it’s riddled with —
Paul: Minor punctuation errors.
Edith: Yeah. I remember — this is a total aside — but I wrote an econ thesis which I'm really proud of. I took it home to my parents — my mom has a PhD in English, my dad is an editor — and they started pointing out, like, "You used an em-dash when you should have used an en-dash." I was like, "You're supposed to be admiring my deep economic thought, not copy editing."
Paul: So apart from doing a therapy session for that, and the deep parental scarring, what else have we got in the continuous delivery vein? Looks like we're all out of things to talk about this week.
Edith: No, I think we should open it up. You asked me what I like about continuous delivery, and you said you were surprised by my answer.
Paul: What was your answer again?
Edith: I like getting product to people faster and making them happier.
Paul: Oh yeah, I thought you were going to talk about employee morale.
Edith: I mean, I like employee morale also, but I don't think that's the — I think they tie together. I think employees are happy because customers are happy.
Paul: Right, right. Personally, I worked at Mozilla during the awful Firefox 3 to Firefox 4 conversion — the Firefox 4 launch, which took 18 months, where things just kept slipping and things just kept being added, feature creep, and there were these things that the web was waiting for that were held back by Mozilla being unable to ship Firefox 4.
And people were not happy. The entire organization was particularly unhappy. We switched to the release train model a few months before I left, I think, and actually I hold the singular distinction of being the first person to fuck up the Mozilla continuous release train model.
Edith: That wasn't where I thought you were going to go with that, Paul.
Paul: No. In fact, I forget where you thought it was going, but I think I can probably get there.
Edith: How did you fuck up the release train?
Paul: So I shipped a feature just before the deadline, and that's the whole thing you're supposed to avoid with continuous delivery — it's never about shipping for a deadline. But now, I was determined to get this feature in. And this feature was one where we had argued considerably online on Bugzilla. And I said, "Look, I'm just going to ship this and we shall see." And the feature was going to be an amazing one. So in JavaScript, there's no way to get submillisecond precision. There's a feature called Date.now, and Date.now returns the number of milliseconds since whenever.
I said, "Okay, we can make this a floating point number, and that can give us submillisecond precision." Unfortunately, people were using this basically as a random name generator: they were creating DOM nodes whose name was the number of milliseconds. It was a terrible, terrible idea on their part, but DOM node names can't have dots in them, so when I turned it into a floating point number, everything broke. And the first thing that broke was Gmail. Gmail upload used this feature — feature, I spit on that feature — used the thing, and so I broke Gmail for about a million people who were using our nightly releases.
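Roughly what went wrong, as a sketch (hypothetical page code, not Gmail's actual source): sites were using Date.now() to mint "unique" DOM element names, and a fractional return value introduces a "." into the name, which breaks things like ID selectors, where a dot reads as the start of a class selector.

```javascript
// What pages were effectively doing: minting element names from the clock.
function makeNodeId(now) {
  return "upload_" + now;
}

// With integer milliseconds the name is fine:
const okId = makeNodeId(1455000000000);        // "upload_1455000000000"

// With submillisecond (floating point) precision, a dot sneaks in:
const brokenId = makeNodeId(1455000000000.25); // "upload_1455000000000.25"

// "#" + brokenId is no longer a valid ID selector -- the "." reads as the
// start of a class selector, so lookups like querySelector("#" + id) fail.
```

JavaScript eventually got submillisecond timing through a separate API, performance.now(), rather than by changing what Date.now() returns.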
Edith: Well, Paul, I'm not going to give them your address, so they can't tell you how much they really needed that email.
Paul: But because of continuous delivery, it got rolled back. No big deal.
Edith: And I think that also plays into a bigger point that we’re making that until you push a feature out into the field, you can’t really know how people are using it.
Paul: So actually, in this particular case, I think we could have predicted it. In retrospect, the signs were out there. But once it goes out, it settles the question once and for all. Lots of people who knew what they were talking about were saying, "This is going to break the web," and I said, "No, no, it will be fine." And really, we just put it out there, it broke the web, and we rolled it back.
Edith: Yeah, I mean, that's one of my major tenets: users always beat testing. And users beat assumptions.
Paul: Right. Products need to be validated, code needs to be validated.
Edith: And the sooner you get them out to real people the better.
Paul: Right. Did we talk about risk yet?
Edith: We talked a little bit about risk. We talked about pacemakers.
Paul: Oh yeah, yeah. The thing that's important, I think, with risk and continuous delivery is that smaller units are less risky. You can ship a tiny, tiny piece of code and validate that it works or doesn't work with significantly less risk than a whole quarter's worth of work.
Edith: And I think it’s risk at every level.
Paul: What do you mean?
Edith: It’s risk at market level. So I think you were talking very narrowly about just like —
Paul: Code risk. And breaking our customers. You can't roll back a quarter's worth of stuff; we can roll back one commit.
Edith: Yeah. To me, it goes even bigger. We talked about the biggest benefit of continuous delivery, but the biggest risk is always just that you're building the wrong darn thing.
Paul: Right, right. Tell that to your CIO.
Edith: Well, you spend 18 or 24 months going in a direction — like your thing about the game discs — and build a game that nobody wants or cares about.
Paul: Right, right. Well, on that depressing note. We are going to be back in two weeks I think.
Edith: Yeah, two weeks.
Paul: And we’re going to be talking about more continuous delivery stuff. We don’t know exactly what so send us a note.
Edith: Edith [at] launchdarkly.com and I’m on Twitter @edith_h.
Paul: And I’m on Twitter at @paulbiggar and we shall see you in two weeks.
Paul: Thanks for listening to this episode of To Be Continuous, brought to you by Heavybit and hosted by me, Paul Biggar of CircleCI, and Edith Harbaugh of LaunchDarkly.
Edith: To learn more about Heavybit, visit heavybit.com. While you're there, check out their library, home to great educational talks from other developer company founders and industry leaders.