10 Jan 2018

Launched: Private User Attributes

Today we are launching a new feature called Private User Attributes. This feature allows our customers to collect event data from feature flag evaluation while controlling which user attribute fields are stored or exposed in other parts of the LaunchDarkly user interface.

In our feature management platform we use the stream of flag impression events generated by our SDKs to power many useful features—including attribute autocomplete, our users page, the flag status dashboard, and A/B testing. Some of our customers have requirements to restrict Personally Identifiable Information (PII) from being sent to LaunchDarkly. Until now, these customers’ only choice had been to completely disable any event data that might contain PII fields (emails, user names, etc.).

The introduction of Private User Attributes means all customers now have the ability to selectively choose not to send back some or all user attributes. With this selective approach, customers can continue to view flag statuses, use A/B testing, and view and override attributes for targeted users while protecting sensitive data. We have also made several small changes to the LaunchDarkly web UI to indicate when attributes are not available and to offer reasonable choices when data has been marked as private and is unavailable for display.

What is a Private User Attribute?

When a user attribute is declared private, it can continue to be used for targeting, but its value will not be sent back to LaunchDarkly. For example, consider the following targeting rule:

if email endsWith "@launchdarkly.com" serve true

To implement this in your application, you’d make this call to LaunchDarkly:

ldclient.variation("alternate.page", {
  key: "35935",
  email: "bob@launchdarkly.com"
}, false)

When this code is executed, a flag impression event would be sent back to LaunchDarkly containing the following data:

  {
    "key": "alternate.page",
    "kind": "feature",
    "user": {
      "email": "bob@launchdarkly.com",
      "key": "35935"
    },
    "value": true
  }

In the example above, LaunchDarkly has received the email address of a user. Suppose now that you want to target by email address but not send that personally identifiable address back to LaunchDarkly.

This is what private attributes are for—you can mark the email field ‘private’. Your application can still target by email address, but LaunchDarkly will receive the following data:

  {
    "key": "alternate.page",
    "kind": "feature",
    "user": {
      "key": "35935",
      "privateAttrs": ["email"]
    },
    "value": true
  }

This data object no longer contains the targeted user’s email address. Instead, there is a record that an email attribute was provided during feature flag evaluation but that its value has been excluded from the event data.

Ok, let’s hide some stuff.

There are three different ways to configure private attributes in our SDKs (sketched below):

  • You can mark all attributes private globally in your LDClient configuration object.
  • You can mark specific attributes private by name globally in your LDClient configuration object.
  • You can also mark specific attributes private by name for individual users when you construct LDUser objects.
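As a rough sketch, here is what those three options might look like with the Node.js SDK. The package and option names here are assumptions based on our server-side SDKs, so verify them against the SDK Reference Guides for your platform:

const LaunchDarkly = require("ldclient-node");

// 1. Mark ALL user attributes (other than the key) private globally.
const client = LaunchDarkly.init("YOUR_SDK_KEY", {
  allAttributesPrivate: true
});

// 2. Mark only specific attributes private by name globally.
// const client = LaunchDarkly.init("YOUR_SDK_KEY", {
//   privateAttributeNames: ["email", "name"]
// });

// 3. Mark specific attributes private for an individual user.
const user = {
  key: "35935",
  email: "bob@launchdarkly.com",
  privateAttributeNames: ["email"]  // only this user's email is withheld from events
};

// The flag still evaluates against email, but events omit its value.
client.variation("alternate.page", user, false);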

Note that the user key attribute cannot be marked private; for this reason, we encourage the best practice of not using any PII as the user key. For more in-depth docs, take a look at the SDK Reference Guides.

How do private attributes work for mobile and browser SDKs?

You might recall that our mobile and browser SDKs work slightly differently—flags are evaluated on LaunchDarkly’s servers, not in the SDKs. We still support private user attributes for these SDKs; when evaluation happens, we do not record any data about the user being evaluated. For analytics events, these SDKs strip out private user attributes the same way our server-side SDKs do, so your data is still kept private.
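For example, a browser integration might look roughly like the sketch below, assuming the JavaScript SDK accepts the same privateAttributeNames option as our server-side SDKs (treat the option name as an assumption and check the SDK Reference Guides for your version):

// Sketch only: the attribute is still sent so the flag can be evaluated on
// LaunchDarkly's servers, but its value is stripped from analytics events.
var user = { key: "35935", email: "bob@launchdarkly.com" };
var client = LDClient.initialize("YOUR_CLIENT_SIDE_ID", user, {
  privateAttributeNames: ["email"]  // assumed option name
});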

How will marking attributes private affect my experience?

At LaunchDarkly, usability is critically important to us (after all, we use LaunchDarkly to build LaunchDarkly). In thinking through the impact of reducing the amount of data that is visible, we still wanted to provide an intuitive UX for existing features. Because we record the name—but not the value—of a private attribute, name autocomplete will continue to function for private attributes in the UI, but autocomplete will not be offered for attribute values. We also indicate which attributes on a user are marked private on the Users page, and when we are unable to predict a user’s flag settings because a targeting rule uses a private attribute. Below is an example of the Users page for a user with attributes marked as private:

Why shouldn’t I just make ALL my attributes private?

Making all your attributes private would limit some of the product features that rely on having attribute values available. For example:

  • The UI will not offer to autocomplete values. You’ll be able to see the names of hidden attributes, but not their values. So you’ll know that (e.g.) “email” is available for targeting, but you won’t know (in the UI) what email addresses exist.
  • Flag settings (on the Users page) won’t be exact. We’ll know if the user has a specific setting enabled, but we won’t be able to predict which variation a user is receiving, because we can’t evaluate targeting rules when hidden attributes are in play.

Given these limitations, our recommendation is that you selectively use Private User Attributes only for the purpose of securing PII.

We’d like to thank all the customers who provided feedback on this feature. We hope you enjoy not sharing your data.

08 Sep 2017

Saving Private Instances

You are a new Director of Engineering at an enterprise company. The company has just moved your product to the cloud and is hosting in Microsoft Azure. You come from a tech startup where you ran all of your software in the cloud and focused on building product. So naturally you have just implemented instrumented monitoring with HoneyComb, Kubernetes to manage your containers, and you’re thinking about leveraging serverless architecture.

You moved to this established enterprise company because they are providing technology to the healthcare industry and you are passionate about this space. They want you because you are good.

Now you face a challenge—how can I bring the good things from my past role and pair them with the security standards and compliance requirements in my new role? Well, let’s first think about what the good things from your past role entail. Your previous team:

  • Deployed 5 times a day
  • Used feature branching
  • Practiced feature-flag-first development
  • Implemented continuous integration

You realize you used 3rd party software for most of these good things and had homegrown tools for others. You decide your first project is to research what software the new team is using, what it has built, and what you will need to build or buy.

Security and the cloud.

Now, before we dive in, let’s consider the security standards and compliance you need to think about in your new role:

  • Where is my data stored?
  • What data is being sent over to the third party—is it PII?
  • Where are connections made to third parties?
  • How can I set different access controls?
  • How will it work when a connection fails? Is there redundancy?
  • Is it HIPAA compliant?
  • How is all this audited?

Like your new company, many companies are just starting to move key operations to the cloud. This is SCARY. Moving into Azure will allow your company to free up floor space, add more server redundancy, easily scale up and down, pay only for what you use, and focus resources on revenue-generating activities. And though these are all great things, security is still your responsibility, and you are not a security company.

Now you are tasked with bringing in these good processes, and they include 3rd party software providers. Products in this space are integral to your development lifecycle and sit very close to your core application, but you need to ensure they are secure.

As we all know, most software providers in this space are cool tech companies in The Valley, far and isolated from the realities of your secure enterprise world. You look at on-premise options, but you remember that you’re driving change in the healthcare space, not looking to manage infrastructure. The cloud options require a significant audit that is time-consuming. A SaaS provider carries an ongoing risk that requires you to store some data on 3rd party servers, which is a non-starter in healthcare.

Is there a middle ground?

There is one more option many software providers offer: private SaaS, also known as a managed instance, dedicated VPC, or private instance. You surmise that if a provider does not have an on-prem option, a private SaaS offering, or a really, really good security team, then it’s not likely their team is accustomed to working on enterprise challenges.

What exactly is a private SaaS offering? It is dedicated, single-tenant cloud software where the vendor manages the infrastructure.

Why choose a private instance?

  • Single Tenant—You will have a dedicated set of infrastructure contained within a VPC. This eliminates the risk of noisy neighbors.
  • Data Storage—Data can be stored in your AWS account or in an isolated section of the vendor’s AWS account. This allows flexibility: if you are in the EU, the vendor can spin up the instance in an EU AWS region.
  • Residency—If you have a preference on where the instance is located or need to ensure proximity.
  • Compliance—If you have additional security or compliance regulations, a private instance can be customized to fit your needs.
  • Change Cadence—If you need to know when the software will be updated, you will have better insight and greater flexibility with a private instance.
  • Integrations—If you have custom tools, integrations can be built.

What’s next?

You have narrowed down a few software providers for each job you are trying to solve (continuous integration, branching, feature flagging). You understand the value, the costs, and the security implications. You have chosen to stay in the cloud, which makes your infrastructure team happy. You have chosen to use private instances and SaaS where it makes sense, which makes your security team happy. And you have the tools you need to bring the good things into your new role, so you are happy.

Now you can focus on helping your company deliver product faster, eliminate risk in your release process, speed up your product feedback loop, and do it all securely.

21 Aug 2017

How to Comply

Everyone loves a little affirmation (Image credit: twitter)

Things we wish we had known and things we were happy that we implemented early. 

At LaunchDarkly we recently embarked on the journey to SOC 2 Type II compliance. While the reasons we chose to pursue this certification were primarily business driven, the tasks and actions incorporated into the process most directly impacted our engineering operations and development teams. And due to the experience and philosophy of the founding engineering team, the actual impact of the process was minimal.

Once you decide to sally forth on SOC certification (or any security and compliance certification, for that matter), you will be in a much better position if you view the criteria of the certification as a benefit (or imperative) of good business practices. If you are in search of the certification primarily as a way to sell into certain customers or verticals, you will likely be frustrated with the entire process. Security is like diversity—you need to inherently believe in the value to be successful in implementation and outcome.

“So, what are these criteria you speak of?”

Need vs. want

The principle of least privilege is a well-known concept: you only provide access to the people who actually need it. This also extends to the level of access that is given; ultimately it distills down to view vs. control. I like to start here because it is a forcing function for so many of the other criteria. When you think about who should have access to each system or service, you are inclined to:

  • Define roles
  • Log activity based on user/account
  • Build review process for accounts

On-boarding each new employee gives you an opportunity to review the tools you use to run your business, question who needs access, and to what extent.

This is also a great time to introduce new employees to another key security maxim: Trust, but verify. This concept is relevant in the context of the access that accounts or component services assert are needed, as well as the way that you validate user accounts.

For services—either 3rd party or just components of a larger service—you should know what information is being stored, and where. Ideally, you are being thoughtful of the first principle of least privilege.

This is especially true of customer data. The easiest way to protect data is to limit what you collect. Second is to make sure that you are being intentional and explicit with where you store the data. Finally, you need to look at the access policy that you put around that data.

For user accounts, multi-factor authentication (MFA) or two factor authentication (2FA) significantly increases your ability to validate that accounts are only being accessed by account owners.

Another time to plan for is employee off-boarding—ideally, before you off-board your first employee. This is also a good time to review your tools and access privileges.

Context; not blame

When a single developer is working on building a service, logs are primarily useful for understanding how well things are working (or where they broke). As the number of individuals working on the system becomes larger, the ability to know who changes what becomes increasingly important.

One caveat: as you incorporate the ability to know who, leverage this data to build context around why things changed, rather than simply using it as a means to place blame. If one person can bring down your service, then you should probably direct blame at the architect of the system. (Unless that destructive individual and the architect are the same person ¯\_(ツ)_/¯).

A log without account context is like a novel without characters. You can build a picture of what happened, but likely will miss why things happened. If you don’t know why, you’re unlikely to prevent it from happening again.
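As a purely illustrative example, an audit log entry that carries account context might look something like this (the field names are hypothetical); the point is that the who and the why travel with the what:

  {
    "timestamp": "2017-08-21T09:14:00Z",
    "actor": "alice@example.com",
    "action": "flag.targeting.update",
    "resource": "alternate.page",
    "comment": "Raised rollout to 50% after error rates stabilized"
  }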

Built for toddlers… or failure

Failure is trivial compared to proactive destruction (image credit: Lego City)

The stability of a service is often a strong indicator of its inherent security. After all, the most common exploitations are based on overloading some resource. Continuing the theme of removing blame from building a secure and stable service, failures should be viewed as opportunities for increased robustness. This is where building for toddlers comes in.

Toddlers are the ultimate destroyers. It is the developmental stage where everyone starts to experimentally test the laws of physics. Gravity, entropy, projectile motion, harmonic oscillation—they’re all put through a battery of tests.

Ideally, you are thinking about your service from the perspective of a parent (or guardian) that is toddler-proofing their home. Bolt things down, put breakable things out of reach, lock up the flamethrower, and embrace the fact that you will miss things. For the items you miss, have an emergency procedure in place and appropriate medical supplies on hand.

Back in the context of your service…

Write it down… or it never happened

Your code/feature is not ‘done’ until the docs are written or updated. Services require constant supervision and are never ‘done’. But if a developer builds or changes something and doesn’t write it down, then they effectively become the only individual that is able to monitor or operate the entire service (at least with all the context).

So, what should you write down? “Everything,” is the easy answer, but often not the complete one. You want to write down enough to provide context if any component starts doing something unexpected.

If you don’t know where to start, a good approach is to write down what would need to happen if your service was deleted. How would you rebuild and restart your service? How would you restore your data?

Next, you can look at the impact and recovery process for the loss of each individual component service. The important part is to incorporate the documentation into the development process, so that as your service evolves your documentation is always up to date—otherwise, when a failure comes, the undocumented change might as well not exist.

Great, now you know what to do next time AWS S3 needs to reboot. But, what about your customers? The next step is to write down an action plan for service interruption. Make sure you have a process and plan in place for keeping folks in the loop.

Security is not the french fries

If you are in a situation where you are looking to “add” security, you are likely going to be in good company with Sisyphus. Security needs to be a part of your foundation—it is not an “add-on”. But if you are only now realizing that security is a requirement for your business, you can do more than wish you had considered it sooner. It is not too late, but it is not a quick fix that you can solve with a certification.

First you need to implement the security in your foundation and process. Make sure that it is part of your culture. Once you have a culture of security and process, compliance is just providing proof of your culture.

Now… about that certificate

You don’t show up to your official Guinness Book of World Records judging day having never practiced juggling 9 clubs. The same goes for when you decide to get your SOC certification.

However, when you build a strong foundation and culture for security and compliance, the steps to get certified are rather straightforward. You call up your friendly neighborhood SOC auditor and get a copy of their checklist.

In the case of LaunchDarkly we worked with the fine folks at A-lign. After an initial conversation we retained their services to conduct an independent audit of our systems and practices.

A few months prior to the audit, A-Lign provided our team with a checklist of all the documentation and proof they would need to see when they came onsite for our assessment. This afforded us the opportunity to ensure that all of our practices were organized and in a state that could be easily evaluated.

When the time came for the audit the auditor spent three days* on-site interviewing members of the team and reviewing our practices. After the on-site visit, we were informed that we had passed the initial competence certification.

Of course, now that we have gone through the validation process for one certification, it seems like a good time to keep going for a few more. Many of the certifications have a significant overlap in requirements. They are all looking to establish trust and ensure the service provider is operating in the best interest of the customer. And it turns out, most customers define trust in a very similar way.