17 Jul 2017

To Be Continuous: Common Developer Monetization Mistakes

In the latest episode of To Be Continuous, Edith and Paul examine how monetization approaches vary depending on the developer market you’re in.

They discuss how developer markets can be divided into four spaces and examine the companies operating in each. They also consider the pitfalls of startups that push their product in too many directions, and how to evaluate the threat of Amazon cloning your service. This is episode #34 in the To Be Continuous podcast series all about continuous delivery and software development.

Continue reading “To Be Continuous: Common Developer Monetization Mistakes” »

14 Apr 2017

Go Serverless, not Flagless: Implementing Feature Flags in Serverless Environments

A demonstration of LaunchDarkly feature flags with AWS Lambda

Serverless architecture, while a relatively new pattern in the software world, is quickly becoming a valuable asset in a variety of use cases.

There are some notable business advantages to serverless architecture:

  • Faster development – easily harness existing backend services and third-party APIs to quickly build and deploy apps.
  • Lower cost – operational costs drop substantially because you no longer manage infrastructure, and the application is componentized so you only scale what you need to scale.
  • Less maintenance – scaling, load balancing, and patching are handled by the serverless platform.

Coupled with feature flags, a serverless application lets teams control feature releases without having to redeploy. This is especially useful because serverless applications tend to be stateless rather than persistence-oriented.

In this article, we’ll assume you have already decided that serverless functions make the most sense for your application architecture. What you really need is the ability to keep using feature flags in this type of architecture. This is where LaunchDarkly has you covered.

Let’s start with a simple app that stores a bit of user data on demand and retrieves values when requested. This app uses two AWS Lambda functions, an API Gateway, and ElastiCache to build a serverless pipeline. AWS Lambda lets you run code without provisioning or managing servers. LaunchDarkly is a feature flag management platform that lets you easily create and manage feature flags at scale.

Storing Flag Configurations

AWS turns out to be a well-equipped platform for serverless feature flagging. In our implementation, we create a webhook in LaunchDarkly that hits an API Gateway endpoint, which in turn triggers a Lambda function.

Then, the Lambda function can have LaunchDarkly populate a Redis ElastiCache cluster by initializing the SDK with the built-in “RedisFeatureStore” option. Built on the Node.js runtime, this Lambda function would look something like this:

Processing Feature Flags

Now that our flag configurations are neatly stored in the ElastiCache cluster, a second Lambda function can retrieve and process flags without waiting on any outbound connections to LaunchDarkly! This is achieved by enabling LaunchDarkly Relay mode in the SDK options. The function’s “event” parameter can be used to pass in a user for the LaunchDarkly SDK to evaluate.

Takeaways

It’s important to understand that there are multiple ways to implement feature flags in your serverless architecture. The right method depends on your app’s structure, your serverless provider, and your implementation timeline. Check out this guide to feature flagging for best practices and ways to get started.


03 Nov 2016

To Be Continuous: Organizational Scaling

In this episode, Edith and Paul discuss the steps that a startup can take for successful organizational scaling. They bring up important questions including: What roles should your team members play as your company grows? How do you prioritize resources? Is Paul a good coder?  This is episode #26 in the To Be Continuous podcast series all about continuous delivery and software development.

Continue reading “To Be Continuous: Organizational Scaling” »

14 Oct 2016

To Be Continuous: Specialization In Organization Design

In this episode, Edith and Paul dive into the question of specialization. They discuss the pros and cons of working with specialists vs. generalists, and how your decisions in this area can have a wide ranging impact on your product and company.  This is episode #25 in the To Be Continuous podcast series all about continuous delivery and software development.

Continue reading “To Be Continuous: Specialization In Organization Design” »

26 Apr 2016

Zombies eating your AWS bill?


The near-infinite elasticity of Amazon’s EC2 service is amazing. You can easily and cheaply spin up new instances whenever you need them. At LaunchDarkly, we provision new instances every time we deploy a new version of one of our services, which often happens multiple times a day. We use Ansible to orchestrate the provisioning and deploy process, which means deploys are easily repeatable and no server is a snowflake. Following the ‘cattle, not pets’ mindset, we routinely create and destroy instances. The script performs the following steps:

  1. Provisions a fresh set of instances using a base AMI image that we have created
  2. Pulls down the latest code and config files from S3 (which is where our build server publishes to)
  3. Runs a few smoke tests to be sure the app has started up
  4. Swaps the load balancer configuration so the load balancers start sending traffic to the new instances
  5. Waits for the old instances to finish processing any requests they received before the load balancer switch
  6. De-provisions the instances running the old version of the code
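The flow above could be sketched as an Ansible playbook along these lines; the module arguments, variable names, and task details are placeholders, not our actual playbook:

```yaml
# Illustrative sketch only: names and module arguments are placeholders.
- hosts: localhost
  connection: local
  tasks:
    - name: 1. Provision fresh instances from our base AMI
      ec2:
        image: "{{ base_ami }}"
        instance_type: "{{ instance_type }}"
        exact_count: "{{ instance_count }}"
        count_tag: { role: "{{ service_name }}" }
      register: new_instances

    - name: 2. Pull the latest code and config from S3 (sketched as one task)
      command: aws s3 sync s3://{{ build_bucket }}/{{ service_name }} /opt/app

    - name: 3. Smoke-test each new instance
      uri:
        url: "http://{{ item.public_ip }}/healthz"
        status_code: 200
      with_items: "{{ new_instances.instances }}"

    - name: 4. Swap the load balancer over to the new instances
      ec2_elb:
        state: present
        instance_id: "{{ item.id }}"
        ec2_elbs: "{{ elb_name }}"
      with_items: "{{ new_instances.instances }}"

    - name: 5. Let old instances drain in-flight requests
      pause:
        seconds: "{{ drain_seconds }}"

    - name: 6. De-provision instances running the old version
      ec2:
        state: absent
        instance_ids: "{{ old_instance_ids }}"
```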

Continue reading “Zombies eating your AWS bill?” »