Know What You’ve Learned, Not What You’ve Done


Originally published on LinkedIn

My Homebrew business partner Satya Patel shared his experiences around focusing on learning and testing, not just doing.

At Homebrew, Hunter and I are very focused on “Why” a founder has chosen to start a company and what motivates them to attack the specific problem or opportunity they see. But the “How” is also important: the approach the entrepreneur has taken to address the opportunity. Often, when we start talking about the “How”, we hear about a lot of different ideas being considered and experiments being run in an effort to find product/market fit. But at the seed stage, entrepreneurs often focus only on what they’re doing without being equally attentive to what they’re learning. “Being busy” by itself does not equate to building a company. You should be learning with every step so that you can find a scalable model of success.

Focusing on the key questions and how best to answer them

To create an organization that learns and doesn’t just do, I’m a big advocate of the scientific method approach to building product (and companies more generally). The scientific method is a simple framework that can help startups focus, experiment, learn and iterate quickly and effectively. Below is a description of that method along with a simplified example of an experiment we ran at Twitter (not the actual data).

Purpose/Question – As obvious as it sounds, you need to start with the question you want to answer. Surprisingly, lots of startups take the “see if the spaghetti sticks” approach, just putting something out in the world and then somehow gauging the response. Without clarity around what question you want answered, it’s difficult to design the right experiment and to draw the right conclusions from the data you collect. In particular, it’s critical to know what metrics are relevant to the question you’re trying to answer.

Example: How can we increase the sign up rate of users visiting the Twitter homepage?

Twitter homepage in April 2011:

Research – If you have a question, it makes sense to consider all of the potential answers, even if many of them are dismissed quickly. Research, whether that’s talking to potential users, evaluating similar products, conducting simple surveys or brainstorming as a team, does a few things. Research helps uncover unspoken assumptions about the answer, identify unexpected potential answers and inform the design of the right experiment.

Example: Ideas that emerged from user research, looking at site analytics and from team ideation included better descriptions of what Twitter is, a video that explains how to use Twitter, showing popular tweets, simplifying the homepage by removing trending topics, simplifying the homepage by removing the search box, etc.

Hypothesis – What do you believe to be true? That is the essence of your hypothesis. And proving or disproving that statement is the goal around which your experiment should be designed. Any test run without a hypothesis is unlikely to lead to learnings that shape product direction correctly, because the experiment has no control against which to test what you believe to be true.

Example: Simplifying the homepage to focus on sign up by removing the search box will increase the sign up rate.

Experiment – This is what most startups focus on but only in the sense that experimentation means putting something out in the world. More important than the idea of experimenting is the design of the experiment. Simply put, you need to know what question you’re trying to answer, which answer you believe to be correct and which variables you believe impact that answer.

Example: Show the homepage without the search box to a randomly selected, statistically significant sample of users and compare the sign up rate to users that see the existing homepage during the same period of time.
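The random selection described above is often implemented by hashing each visitor into a stable bucket, so the same person always sees the same variant. Below is a minimal sketch of that common technique; this is illustrative only, not Twitter's actual experimentation system, and the function and experiment names are hypothetical:

```python
import hashlib

def assign_bucket(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
    """Deterministically assign a visitor to 'control' or 'treatment'.

    Hashing user_id together with the experiment name gives each visitor a
    stable, effectively random bucket, so repeat visits see the same homepage
    and different experiments bucket users independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex digits of the hash to a uniform value in [0, 1)
    uniform = int(digest[:8], 16) / 0x100000000
    return "treatment" if uniform < treatment_share else "control"

# Hypothetical usage: decide which homepage a visitor sees
bucket = assign_bucket(user_id="visitor-12345", experiment="homepage_no_search")
print(bucket)  # always the same bucket for this visitor and experiment
```

Because assignment is deterministic, no state needs to be stored per visitor, and the split converges to the chosen share as traffic grows.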

Analysis – The reality in most product experiments is that you can’t isolate or control all of the variables, so it’s important to not be a slave to the data. Data from the experiment usually needs to be considered one input into product thinking and not the answer in and of itself. Accordingly, it’s critical to be honest about what the data does and does not say in relation to the hypothesis and question at hand.

Example: The sign up rate increased by 12% without the search box. But did removing the search box cost us sign ups from people who would have searched and then signed up from the search results page?

Conclusion – In organizations both small and large, nothing is more important than providing the proper context for product decisions. So when you arrive at conclusions from your experiment, be sure to share them quickly, clearly and repeatedly. Was the hypothesis proven or disproven? Did the outcome result in more questions and experiments or answers that you feel comfortable moving forward with? Communicate what you intend to do in reaction to the conclusions and start the scientific method process all over again.

Example: At Twitter, results from experiments and planned next steps were summarized and emailed to an internal mailing list for anyone in the company to review. When changes went into production, another email was sent outlining the changes.

The homepage that resulted from this experiment and several others was launched in December 2011.

Here is the current homepage:

Next time you’re working on experimenting and iterating to get to product/market fit, remember the scientific method. If you start with a hypothesis and design an experiment that tests that hypothesis cleanly, you’ll be learning and not just experimenting.
