Growth

How I experiment

Experimentation is at the core of my growth strategy. It gives purpose to everything I do: you set out to learn something in the hope that it improves performance.

As a growth marketer, you have a direct influence on the success of the company. If you're not consistently learning and improving, you'll likely see that play out in the metrics.

Why experimentation is important

I get a lot of benefits out of experimentation, which is what makes it so valuable. But also, I'm a nerd, so there's a natural gravitational pull.

  • It informs your learning agenda → Experiments come in all shapes and sizes. They may be standalone ones or part of a greater portfolio to achieve growth
  • Helps you avoid random acts of marketing (RAoMs) → You might hit a bullseye once in a while, but RAoMs give off a poor perception: panic, looking unfit for the role, laziness, etc. Not to mention they're a good way to light $'s on fire
  • Drives prioritization → Time is precious, so prioritization is paramount. I use a formula that scores each experiment idea. The higher the score, the higher the priority. Simple and effective
  • Tees up what's next → I'm always looking beyond just the main agenda. You can open lots of doors to new experiments if you spend time observing while you experiment

Some experiments may not generate the intended outcome, but as long as you learn from the results it should not be considered a failure. Failure is only achieved through inaction. This is especially true in the world of startups.

Experimental design

Experimenting is not novel. The scientific method is centuries old, though the term itself wasn't popularized until the early 1900s. Even then, there wasn't consensus on its meaning, which is okay. Experimentation can mean different things to those who conduct it.

The concept is not complicated. You start with an observation that forms a question. That's the basis for your experiment. You then form a hypothesis, which is what you expect the outcome of your experiment to be. Then you test.

Figure 1.1 - Scientific Method

Based on the outcome of your experiment, your findings help guide next steps and those findings are then shared with others.

Experimentation lifecycle

My experimentation cycle is similar but with some exceptions.

Figure 1.2 - Growth Framework

I almost always start with a larger learning agenda. For example, "What content is most popular in conversion paths?" Or "What friction exists that causes users to abandon onboarding journeys?"

Step 1: Data Analysis

Since I am a data nerd, I like to dive into what data we have to establish a baseline or a control for my experiment. This also helps develop my hypothesis for what effect I think the experiment will have on our control.

Step 2: Experiments

My hypothesis is now defined. For the most part, I like to frame it in terms of x% incremental growth, since that is the desired outcome.
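As a concrete illustration of framing a hypothesis as incremental growth, here's a minimal sketch. The function and the example numbers are my own assumptions, not anything from an actual experiment:

```python
# Hypothetical sketch: expressing a hypothesis as expected incremental lift.
def incremental_lift(control: float, variant: float) -> float:
    """Return the relative lift of the variant over the control baseline."""
    if control == 0:
        raise ValueError("control baseline must be non-zero")
    return (variant - control) / control

# e.g. a hypothesis of "+10% signups": baseline 400/week, expected 440/week
lift = incremental_lift(400, 440)  # 0.10, i.e. +10%
```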

For top-of-funnel (TOFU) experiments, the experimental KPI may be different from our business KPIs. The idea is that these types of experiments still impact those business KPIs, though the journey may be longer and less direct than lower funnel experiments.

In order to avoid RAoM, I like to document some form of validation. I find leadership teams appreciate this context to understand the "why" behind my experiment. Validation can be as simple as the baseline established in the data analysis phase. Some experiments are truly conceptual, so validation should not be a bottleneck.

As a secondary measure to avoid RAoM, I plan out how I will execute the experiment. This generally comes in the form of Activities. Activities are layers under the experiment that come in the form of marketing initiatives — both paid and organic.

Measurement is a key aspect of experimentation. You need to have a measurement plan in place prior to launching any experiment.

  • What are your KPIs? Primary & secondary
  • What are your data cuts? (e.g. channel, audience, geo)
  • What are your data sources?
  • Is your data attributable or directional?

At Clerk, we run experimentation at high velocity, so prioritization is critical. I use an Impact Score (a measure of how likely an experiment is to influence our business KPIs) to drive prioritization. I'll share more about the Impact Score in an upcoming post, but the basic concept is a weighted model whose inputs generate a score out of 100.

Figure 1.3 - Impact Scoring Model
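To make the weighted-model idea concrete, here's a minimal sketch of how such a score could be computed. The input names and weights below are my own illustrative assumptions; the actual model isn't shown in this post:

```python
# Hypothetical weighted Impact Score (0-100). Inputs and weights are
# illustrative assumptions, not the author's actual model.
def impact_score(inputs: dict[str, float], weights: dict[str, float]) -> float:
    """Each input is rated 0-10; weights must sum to 1. Returns a 0-100 score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(inputs[k] * weights[k] for k in weights) * 10

score = impact_score(
    inputs={"kpi_influence": 8, "confidence": 6, "effort": 7, "reach": 9},
    weights={"kpi_influence": 0.4, "confidence": 0.2, "effort": 0.2, "reach": 0.2},
)
# score is roughly 76: heavily weighted toward likely KPI influence
```

The higher the score, the higher the experiment sits in the queue.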

Step 3: Activities

Activities are vehicles that drive me towards proving or disproving my hypothesis. They layer up to the experiment. Notion's Relations feature helps keep everything organized & attached for ease of comprehensive analysis after the measurement period concludes.

Activities are mini-experiments in themselves.

Step 4: Measurement

This is the test period. You can approach it in different ways: launch all activities at once, or stagger them. I find staggering is the way to go. Aligning activity launches is difficult, and there's not much value in doing so unless you need a very rigid test framework — I'll share more on this later. Staggering allows you to give each activity focus, especially if you're working with external partners.

Depending on the channel or test environment for each activity, I like to do measurement check-ins 1-2 weeks post launch. This can vary based on channel, as you'll need a sufficient sample to conduct your initial measurement. Personally, I don't like to tinker much post-launch because it puts you at risk of messy results that are difficult to measure. I only tinker if there's a glaring issue. The cleaner and more confident your pre-launch plan, the less you'll be tempted to touch things mid-flight.

I won't dive deep into attribution in this post, but as I mentioned in Step 2, you cannot launch an experiment without a strong measurement plan in place. This allows you to have conviction in any in-flight changes/optimizations, should you decide to make them.

From my time at Oracle, I amassed substantial knowledge about experimental design. You generally need at least 2 weeks of runway before you can feel confident about reaching statistically significant results, though substantial reach can offset that.
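A back-of-envelope sample-size calculation shows why reach offsets runway. This sketch uses the standard normal-approximation formula for comparing two proportions at alpha = 0.05 and 80% power; the conversion rates are illustrative assumptions:

```python
import math

# Sample size per arm to detect a lift from p1 to p2 (two proportions),
# using the standard normal approximation. z_alpha=1.96 (alpha=0.05,
# two-sided), z_beta=0.84 (80% power). Inputs are illustrative.
def sample_size_per_arm(p1: float, p2: float,
                        z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    pbar = (p1 + p2) / 2
    num = (z_alpha * math.sqrt(2 * pbar * (1 - pbar))
           + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p2 - p1) ** 2)

# e.g. detecting a 2.0% -> 2.5% conversion lift needs on the order of
# 14,000 users per arm; at ~1,000 users/day that's roughly two weeks,
# while at ~7,000 users/day it's only a couple of days.
n = sample_size_per_arm(0.02, 0.025)
```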

Step 5: Findings

Now, the fun part — the grand reveal! Though if you're doing regular check-ins, it won't be as dramatic. The importance of documenting your findings cannot be overstated. It gives you a foundation to build on & prevents you from unnecessarily testing the same hypothesis multiple times. If you're building this out in Notion, you can create a simple template that allows you to easily consolidate & circulate outcomes. Findings are the building blocks that allow you to progress.

Step 6: Decision

Based on findings, I keep next steps simple:

  • Scale → Success was achieved, and now I want to understand if the tested approach will work at scale.
  • Iterate → While scaling is the ideal outcome at the decision stage, I love iterating. Experiments that produce tentacles for additional exploration are my favorite. I love going deeper into the subject matter before pivoting into new territory. Iteration can come in many forms: copy, tone, creative, partners, etc.
  • Kill → Not ideal, but sometimes you reach a dead end. You learn from it & move on.
  • Evergreen → There are times when experiments become shelf-stable; the tested approach essentially becomes a staple of your growth strategy. These are great opportunities to introduce automation in order to recapture bandwidth.
  • Archive → And sometimes there's no path forward. You document the learnings, build on them & move on.

Conclusion

This is my approach. It's not perfect and may not work for everyone, but I find it's effective for me.

It keeps me focused, helps me move faster, avoids RAoMs, drives prioritization, and gives clearer guidance on the path forward.

In the next few posts, I'm planning to go deeper into these topics:

  • How I use Notion to manage experimentation
  • Impact Score
  • RAoMs
  • A growth plan for your first 90 days at a dev company

If there's something you'd like to learn more about or something you'd like me to cover, drop me a note on X/Twitter.