
What seems like the most difficult piece of the Build-Measure-Learn Cycle to you?
- Putting together a low-fi, low-cost, low-effort prototype?
- Figuring out what measurements will be meaningful?
- Making the persevere – pivot – kill decision?
In my experience, it’s usually choosing your metrics.
Why is that hard?
There are LOTS of options, and it can be difficult to know which ones are actually going to deliver the data you need to prove or disprove your hypothesis about audience, problem, and solution.
The key to designing a good measure is to focus on the assumptions behind what might make your idea a good product or service, not on the idea itself.
The hardest thing to avoid is choosing vanity metrics, things that make you feel good but don’t actually test your hypothesis. It’s tempting to choose measurements of volume – how many likes, followers, downloads, etc. But that’s usually not helpful.
Quoting Innovate the Lean Way:
Take conference attendance. Associations commonly measure the success of annual meetings by number of attendees. But is that really a good measure? What assumption does it test? Attendance numbers don’t tell you what those attendees were trying to accomplish by participating or whether they achieved their desired outcomes. They may have come to learn about a topic, to network, to do business, because they wanted to hear a particular speaker or meet a particular person, because they were intrigued by the host city, or for myriad other reasons. Measuring how many people attended doesn’t tell you anything about whether those goals were met.
Likewise, if you were to launch a mobile application for your association, it’s likely you would immediately begin reporting on downloads. This is a clear vanity metric, as downloads don’t tell you whether the app is solving a problem your members think is worth solving. Think about how many apps you download because they’re free and you want to try them, but you never use them again.
A better way of testing the success of your mobile app would be to measure the number of members who use the app versus your website to perform particular functions or take advantage of particular services. Do they prefer one platform to the other? Do different cohorts of members use one versus the other? Can you isolate particular features that incline people to use one versus the other? Can you then make adjustments to move you closer toward your goals?
Here are some guidelines to consider as you think about what to measure to validate or invalidate your hypothesis. Good metrics tend to:
- Be a rate or a ratio
- Allow for comparison over time
- Be simple (if it’s too complicated, people can’t measure it, remember it, or use it)
- Be predictive
- Allow you to make changes based on what you learn
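To see why a rate or ratio beats a raw count, consider a small sketch with hypothetical numbers: raw signups grow month over month, but the signup *rate* actually falls, which is the signal a volume metric would hide.

```python
# Hypothetical monthly numbers (made up for illustration):
# raw signups rise, but the signup rate falls.
monthly = {
    "Jan": {"visitors": 1200, "signups": 60},
    "Feb": {"visitors": 2000, "signups": 80},
}

for month, d in monthly.items():
    rate = d["signups"] / d["visitors"]
    print(f"{month}: {d['signups']} signups, rate {rate:.1%}")
# Raw signups went from 60 to 80, yet the rate dropped
# from 5.0% to 4.0% -- the ratio tells the real story.
```

The same computation repeated each month also satisfies the "comparison over time" guideline above.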
Some possible metrics to choose:
- Acquisition: How do potential customers / users discover our program, product, or service?
- Activation: How many potential customers / users take our calls to action?
- Retention: How many one-time customers / users become regular customers / users?
- Revenue: Are people willing to pay for our program, product, or service?
- Referral: Do our customers / users like the program, product, or service enough to tell others about it?
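The five questions above form a funnel, and each stage can be expressed as a conversion rate from the previous one, in keeping with the rate-or-ratio guideline. Here is a minimal sketch with entirely hypothetical counts:

```python
# Hypothetical cohort counts for one program (all numbers invented).
funnel = {
    "acquired": 1000,   # discovered the program, product, or service
    "activated": 300,   # took the call to action
    "retained": 120,    # became repeat customers / users
    "revenue": 45,      # paid
    "referral": 20,     # told others about it
}

# Each stage's conversion rate from the stage before it.
stages = list(funnel)
for prev, cur in zip(stages, stages[1:]):
    rate = funnel[cur] / funnel[prev]
    print(f"{prev} -> {cur}: {rate:.0%}")
```

Tracking these stage-to-stage rates over time shows where the funnel leaks, which is exactly the kind of actionable comparison attendance or download totals can’t give you.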
Image source: Lucas on Pexels