One year of Growth Design

I've been a designer on startup teams for over ten years. Running experiments and helping companies grow isn't new to me, but I'd never held a formal 'growth' role until I joined the Product Growth team at Webflow. Over the past year my day-to-day has been designing and analyzing experiments, and I've learned a lot about a formalized approach to growth. I evolved this year as a designer, picking up new skills and perspectives that I'll take with me beyond Product Growth. Here are some lessons and reflections I've gleaned from my journey as a 'growth designer'.

How to identify opportunities

Most designers have a great tool in their arsenal: empathy. With empathy, designers feel the frustrating moments that users encounter, whether that's confusing copy, a hidden button, or a challenging form. Empathizing with customers is a great starting point for identifying growth experiments.

Move beyond just feeling the customer's pain and chat with your customers. Watching a customer experience your product first-hand is eye-opening. You may believe a design or solution is obvious, but your customers may not. There are tons of great resources out there to help you with customer research; if you're a designer with no experience in this area, I recommend you pursue it.

Unearthing a UX pain point first-hand is just the first step. After you've found something worth digging into, you'll want to validate it with data. Find a confusing form? Look into how many people navigate to it without completing it, and how many times users hit an error state. Analytics will help you validate and identify issues at scale. For example: "I have an onboarding survey, and it looks like most customers never make it past the second step... let me look at the designs."
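To make that concrete, here's a minimal Python sketch of the kind of back-of-the-napkin check I mean. The event names and counts are made up for illustration:

```python
# Hypothetical event counts pulled from an analytics tool
# (names and numbers are illustrative, not real data).
survey_started = 10_000    # users who saw step 1 of the onboarding survey
step_2_reached = 3_400     # users who made it to step 2
survey_completed = 1_100   # users who finished the survey

print(f"Reached step 2: {step_2_reached / survey_started:.0%}")   # 34%
print(f"Completed:      {survey_completed / survey_started:.0%}") # 11%
# A steep drop-off between steps is a signal worth taking back to the designs.
```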

You'll need tracking events in place to analyze data with tools like Amplitude, Mixpanel, or Google Analytics. If you need help, Lex Roman has a great course: 'Getting Started with Product Analytics'.
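Each tool has its own SDK, but conceptually a tracking event is just a named action with some properties attached. A hypothetical sketch (the `track` helper here is a stand-in, not any real SDK's API):

```python
import json
from datetime import datetime, timezone

def track(user_id: str, event: str, properties: dict) -> None:
    # Stand-in for an analytics SDK call; a real SDK would send
    # this payload to Amplitude, Mixpanel, Google Analytics, etc.
    payload = {
        "user_id": user_id,
        "event": event,
        "properties": properties,
        "time": datetime.now(timezone.utc).isoformat(),
    }
    print(json.dumps(payload))

track("user_123", "onboarding_survey_step_completed", {"step": 2})
```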

Use the scientific method

Unfamiliar with the scientific method and how it relates to product growth? Don't be intimidated. Applied to growth, it's simply a structured way to improve business metrics. At Webflow we form a hypothesis, choose metrics, and test and analyze experiments against those metrics.

Let's break this down. "A hypothesis is an assumption, an idea that is proposed for the sake of argument so that it can be tested to see if it might be true."

Our team likes to form our hypothesis in this framework:

We believe if a customer [does this] then we will see [this result].

We measure the impact of any design hypothesis against key metrics. Metrics are quantifiable data points we track and analyze to gauge success. Measuring metrics against a hypothesis allows us to prove it true or false, which lets us continue with confidence: keep iterating, scale the experiment out as a feature, or scrap it.

Funnels and experiment planning

For seasoned growth practitioners this may be obvious: the farther down the funnel a customer is, the smaller the sample audience will be.

A funnel is a metaphor for mapping a customer journey through the lens of data. Here is an example:

  1. A potential customer visits your home page.
  2. From the home page, they navigate to the sign-up page through a 'Sign up' button.
  3. They complete the registration form and sign up for your product.

In this example, Step 1 is the top of the funnel. At each following step you will see a decrease in people continuing on in their journey. It's simple: more people at the top, fewer people at the bottom.
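To make the shrinking audience concrete, here's a quick Python sketch with illustrative numbers that computes the step-to-step conversion for the funnel above:

```python
# Illustrative counts for the three funnel steps above.
funnel = [
    ("Visited home page", 50_000),
    ("Reached sign-up page", 8_000),
    ("Completed registration", 2_000),
]

for (step, count), (_, previous) in zip(funnel[1:], funnel[:-1]):
    print(f"{step}: {count / previous:.0%} of the previous step")
# Reached sign-up page: 16% of the previous step
# Completed registration: 25% of the previous step
```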

Understanding your funnel can help you identify opportunities and plan your experimentation approach. To prove results are not due to chance, you'll need a large enough audience with a shift in behavior. This is known as statistical significance.
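For intuition, here's a minimal two-proportion z-test in Python using only the standard library (the numbers are illustrative; in practice your experimentation tool usually runs this check for you):

```python
from math import erf, sqrt

def two_proportion_p_value(conversions_a, n_a, conversions_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Control: 200 sign-ups out of 4,000 visitors; variant: 260 out of 4,000.
print(two_proportion_p_value(200, 4_000, 260, 4_000))  # ~0.004, below 0.05
```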

You'll have a smaller audience for experiments at the bottom of a funnel. For companies with millions of site visitors this isn't an issue, but if you're at a company with less site traffic you'll need to be strategic. How long an experiment takes is determined by how much traffic a site gets and the type of experiment you run.
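A rough way to see how traffic drives duration, using a common rule-of-thumb sample-size formula (80% power, 5% significance); the traffic numbers here are hypothetical:

```python
from math import ceil

def sample_size_per_variant(baseline_rate, relative_lift):
    # n ~= 16 * p * (1 - p) / delta^2 is a standard rule of thumb
    # for 80% power at a 5% significance level.
    delta = baseline_rate * relative_lift
    return ceil(16 * baseline_rate * (1 - baseline_rate) / delta**2)

n = sample_size_per_variant(0.05, 0.10)  # detect a 10% lift on a 5% baseline
daily_visitors = 1_000                   # traffic reaching this funnel step per day
print(n)                                 # 30,400 per variant
print(ceil(2 * n / daily_visitors), "days for a two-variant A/B test")  # 61 days
```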

There are two approaches in product experimentation: A/B tests and multivariate tests. An A/B test compares two versions, the original (the 'control') against your updated version. A multivariate test allows you to test multiple variations at once. Your experiment approach will be determined by your traffic and the time you need for results. To learn more, read this article from Mixpanel.

If you're on a low-traffic site, a multivariate test will likely take longer. Every additional variant added to a multivariate test extends the time needed to resolve an experiment.
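The arithmetic is straightforward: every variant needs roughly the same sample size, so each extra arm stretches the timeline. Continuing with the hypothetical numbers from the sketch above:

```python
from math import ceil

n_per_variant = 30_400  # from the sample-size estimate above
daily_visitors = 1_000  # visitors reaching the experiment per day

for variants in (2, 3, 4):  # an A/B test, then multivariate with 3 and 4 arms
    print(f"{variants} variants: ~{ceil(variants * n_per_variant / daily_visitors)} days")
# 2 variants: ~61 days
# 3 variants: ~92 days
# 4 variants: ~122 days
```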

Short term wins aren't always what they seem

Data is easy to manipulate, so it's essential to have a 'north star' metric when experimenting. This will vary from company to company: for an online store it may be sales, a SaaS company may use subscriptions, and freemium products often use 'active users'.

An experiment could positively impact immediate goals but hurt a larger, long-term metric.

Example: you're tasked with getting more sign-ups from the marketing page. You identify that the homepage is full of 'fluff', so you simplify it by removing most of the information. After running the experiment you find that you increased the number of sign-ups. Yay! But further investigation shows that the number of users who paid for your product decreased. The increase in sign-ups wasn't an overall win for the business.
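One way to catch this is to always read a guardrail or 'north star' metric alongside the metric you optimized. A toy version of the homepage example, with invented numbers:

```python
# Hypothetical results from the simplified-homepage experiment.
control = {"visitors": 20_000, "signups": 1_000, "paid": 200}
variant = {"visitors": 20_000, "signups": 1_300, "paid": 150}

for name, g in (("control", control), ("variant", variant)):
    print(f"{name}: sign-up rate {g['signups'] / g['visitors']:.1%}, "
          f"paid rate {g['paid'] / g['visitors']:.2%}")
# control: sign-up rate 5.0%, paid rate 1.00%
# variant: sign-up rate 6.5%, paid rate 0.75%
# More sign-ups, fewer paying customers: not an overall win.
```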

An experiment where you learn nothing is a failure

Folks who come into growth design may believe the goal of every experiment is to move the needle up. One thing I learned this year is that a failed experiment (where metrics decrease) is better than one where metrics stay flat.

The goal of 'growth' is to grow. If you move the right metrics, you know you're onto something. If you decrease the metrics, you can analyze and learn from a failed hypothesis. But if your experiment flatlines, the direction becomes less clear, and you've spent time and energy with nothing to show for it.

Small experiments can have big results

High investment does not guarantee large rewards. A satisfying moment in growth is identifying a high-impact 'easy win'. One of our largest experiment 'wins' this year was a small UX improvement: it took our engineers about an hour to implement, yet the results were very significant.

What was the experiment? At Webflow we have a marketing site and an app. The original experience took logged-in customers who navigated to 'webflow.com' to our marketing site. Our team felt this was a poor experience, so we ran a test redirecting logged-in users straight to their dashboard instead. This created a better experience for both our activated customers and our new ones, and we saw a significant increase in our activation rates.
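For flavor, here's a sketch of how a redirect experiment like this could be bucketed so each user consistently sees the same variant. The helper names are hypothetical; this is not Webflow's actual implementation:

```python
import hashlib

def in_variant(user_id: str, experiment: str, rollout: float = 0.5) -> bool:
    # Hash the user into a stable bucket so they always get the same variant.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 10_000 < rollout * 10_000

def landing_url(user_id: str, logged_in: bool) -> str:
    if logged_in and in_variant(user_id, "redirect_to_dashboard"):
        return "/dashboard"  # variant: skip the marketing site
    return "/"               # control: marketing home page

print(landing_url("user_123", logged_in=True))
```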

One of our smallest projects of the year had one of the largest impacts. If you find an experience that feels bad, run an experiment; you may be surprised by the ripple effect it causes.

Some great Growth Design resources