Catherine Louis
Successfully executing on a business goal means raising questions about that goal, and it absolutely requires safe-to-fail experimentation on the path to achieving it. When business goals become inflexible mandates, experimentation goes by the wayside and a failure-averse culture prevails.
This four-step process can help open leaders cultivate a culture of experimentation in teams working toward a business goal, rather than a failure-averse culture that risks becoming less innovative.
In general, there is no shortage of verbiage for defining business goals; however, as a starting point let's use Victor Basili's definition of a conceptual goal: A goal is defined for an object, for a variety of reasons, with respect to various models of quality, from various points of view, relative to a particular environment.1
I prefer this definition of a goal because analyzing its dimensions helps you create a clearer, more compelling business goal:
- "A goal is defined for an object": What are we discussing here? Could it be our issue tracking system? Could it be the relationship between the issue tracking system and customers? Whiteboard this to visualize where your scope is.
- "For a variety of reasons": What's the problem that needs to be addressed? When we state a goal, we include the purpose driving the goal.
- "With respect to various models of quality": What's the quality issue with which we need help, and why is it an issue?
- "From various points of view": From whose viewpoint are we discussing this goal? Customer? Project manager? Whose opinion matters?
- "Relative to a particular environment": Where and when is the issue being reported?
A business goal implies questions like these, and achieving it absolutely requires safe-to-fail experimentation. Providing a business goal as a mandate without allowing teams to question and fully understand the goal will shut a team down.
Take this poorly written business goal, for example:
We want to stop people from abandoning their shopping carts before purchasing.
Now, using Basili's definition, consider the following questions someone is likely to raise about this goal—and the kinds of responses that person is likely to receive from a leader less aware of the qualities that make a goal a good one:
- "What kind of shopping is being abandoned?"—"Any cart that isn't purchased."
- "Why?"—"Because I said so."
- "What's the quality issue we need to address?"—"Just get the cart purchased faster."
- "Who's 'we'?"—"Me, your project manager."
- "Where and when is this being reported?"—"Everywhere. Anywhere."
How's your motivation now? Do you feel like experimenting toward achieving the goal?
When taking a question-focused approach to setting goals, be sure to start with the goal! The ultimate test of effectiveness for a business goal is whether it motivates a team. A well-written goal stirs the blood.
Let's try this again. See if you can find all five points in this example of a goal:
The CEO of our e-commerce site selling women's apparel would like to see a significant improvement in the 1,000 to 2,000 shopping carts abandoned per day in the North American market, in order to capture this potential revenue gain. He is targeting at least 70% fewer abandoned shopping carts per day.
Next, encourage team members to ask questions about the goal. The best way to start digging into a goal and understanding it better is to ask plenty of questions.
Some questions that come to mind are:
- Have we interviewed any shoppers about their shopping experience?
- How many clicks must users make from when they begin shopping to when they complete a purchase?
- Are non-North American markets not seeing these abandoned carts? Why?
- How long is the average online shopping experience?
- Are the products presented in a clustered, attractive way versus being presented as one product per page?
- Are we using any advanced or custom filters which can improve on-site discovery and navigation?
- Do we support a fully-automated visual search for products?
- Do we understand the customer journey for ordering women's apparel online and how much time each step in the journey takes?
Many questions are possible; prioritize them, beginning with the customer. In the example above, if you haven't done any customer interviews to hear and feel customer pains, then that's where I'd start.
Data-driven improvements are possible. After you've analyzed your business goal, and then asked and prioritized the necessary questions about it, you should work with your team to establish baseline measurements of where you are today. This is your starting point. Begin using these metrics to structure your approach to answering your questions. For example, how many clicks do users typically make between the moment they start shopping and the time they've completed a purchase? Let real-time data guide your experimentation!
Using our example above, we might target:
- Results of interviews with 80% of customers who abandoned shopping carts. Have we interviewed any shoppers about their shopping experience?
- Cycle time and number of clicks per purchase. How many clicks are needed from when someone starts shopping to when they complete a purchase?
- Cycle time per client. How long is an average shopping experience on our platform?
- Number of products per page per category. Are the products presented in a clustered, attractive way versus being presented as one product per page?
Gather data so that you can develop a coherent baseline measurement of your starting point. If the customer journey today is a seven-click experience—and you think that reducing the number of clicks associated with this journey will lead to fewer abandoned carts—then gather data on the average time users spend at each of these steps.
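As a sketch of what establishing that baseline might look like in code (all session records, field names, and numbers here are invented for illustration, not from a real system):

```python
from statistics import mean

# Hypothetical session records: clicks taken and whether the cart was purchased.
sessions = [
    {"clicks": 7, "purchased": True},
    {"clicks": 9, "purchased": False},
    {"clicks": 6, "purchased": True},
    {"clicks": 12, "purchased": False},
    {"clicks": 7, "purchased": False},
]

# Baseline abandonment rate: the fraction of sessions that never converted.
abandonment_rate = sum(1 for s in sessions if not s["purchased"]) / len(sessions)

# Baseline clicks per session, the metric our seven-click hypothesis targets.
avg_clicks = mean(s["clicks"] for s in sessions)

print(f"Baseline abandonment rate: {abandonment_rate:.0%}")
print(f"Average clicks per session: {avg_clicks:.1f}")
```

However you compute them, these two numbers become the yardstick every later experiment is measured against.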
Innovation does not occur without experimentation. The good news is that each one of the questions above can now become an experiment.
Let's take one of the questions above and form an experiment so you get the idea:
Are the products presented in a clustered, attractive way versus being presented as one product per page?
Let's address this question in the context of experimentation.
- Restating the question as a hypothesis. We believe that if we cluster our products in an attractive way, rather than displaying one product per page, more purchases will occur. (I recommend using the free Strategyzer test card to help you organize your thoughts around creating your experiment once you have a hypothesis.2)
- Know your riskiest assumptions. One critical, risky assumption we're making is that more purchases will occur if different products are grouped in an attractive way. But what is an "attractive grouping," and to whom? Is it multi-colored blouses with neutral shoes? Is it blue shoes with white blouses? We'll need to experiment further to begin to answer this.
We've now created a solid foundation for experimentation. Next, we need to create a simple test experiment that we can begin to work on today to test our critical assumptions. We could attempt several kinds of experiments, including:
- A/B testing, a method of comparing two versions of a single variable—typically by testing a subject's response to variant A against variant B, then determining which of the two variants is more effective.
- Concierge testing, or performing a service manually (just like a concierge at a hotel) with no technology involved. The idea here is to learn as much as you can via increased human interaction. A classic example of a concierge service is the beginning of Airbnb, whose founders rented out air mattresses in their San Francisco home to validate what types of customers they might get with this type of service.3
- Landing page, a web page on which someone "lands" in response to some advertisement or social media campaign. The goal of a landing page is to convert site visitors into sales or leads. You can analyze landing page activity to determine click-through or conversion rates and gauge the success of the advertisement. One classic example of this method of experimentation comes from Buffer, which launched with just two pages.4 The first was a link to "plans and pricing," and if users clicked that link, they received a message saying "oops, caught us before we were ready."
- Video, or some audio-visual artifact to explain your product. Telling a story from a user-centric point of view, including a call-to-action, is a wonderful way to test a hypothesis. Dropbox did this in 2008, creating a three-minute video posted to Digg that expanded their waiting list from 5,000 to 75,000 overnight.
- Wizard of Oz, a method in which it looks like you have a fully functioning product or feature, but there's really someone "behind the curtain" doing all the work. A classic example of this test is Zappos. Founder Nick Swinmurn reserved the domain name and, without building any sort of inventory system, walked down the street to the local shoe store, took photographs of shoes, and posted them on the website.5
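The first of these methods, A/B testing, ultimately comes down to asking whether the difference between two conversion rates is real or just noise. Here's a minimal sketch of that comparison using a two-proportion z-test; the visitor and purchase counts are made up for illustration:

```python
import math

# Hypothetical A/B results (all counts invented for illustration):
# variant A = one product per page, variant B = clustered product groupings.
a_visitors, a_purchases = 1000, 80
b_visitors, b_purchases = 1000, 104

p_a = a_purchases / a_visitors  # conversion rate for variant A
p_b = b_purchases / b_visitors  # conversion rate for variant B

# Two-proportion z-test: pool the rates, compute the standard error,
# then see how many standard errors apart the two variants are.
p_pool = (a_purchases + b_purchases) / (a_visitors + b_visitors)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / a_visitors + 1 / b_visitors))
z = (p_b - p_a) / se

# One-sided p-value from the normal CDF (via the error function).
p_value = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p_value:.3f}")
```

A small p-value (conventionally below 0.05) suggests the clustered layout genuinely converts better; a large one means you need more traffic or a bigger effect before concluding anything.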
In our example, let's say it's the first day of summer, so we decide to do a simple A/B test grouping summer shoes with summer blouses arranged by summer colors. Perhaps we create five groupings of various colors of shoes and blouses in order to begin gathering data. For example, we might run five experiments with the groupings of multi-colored blouses with neutral shoes, blue shoes with white blouses, red shoes with multi-colored blouses, green blouses with beige shoes, and yellow shoes with yellow pattern blouses.
- Decide what to measure. Perhaps we decide to measure click-through rates on products grouped versus products displayed one at a time, as well as the number of shoes sold versus the number abandoned in shopping carts.
- Name your criteria for success. For example, if 10% fewer shoes are abandoned in carts per month when grouped with blouses by summer colors, we'd be happy with this experiment.
For this example, the resulting test card might end up looking like this:
- Hypothesis. We believe that if we cluster our products in an attractive way, rather than displaying one product per page, more purchases will occur.
- Test. To verify or refute this hypothesis, we will run A/B tests grouping summer shoes with summer blouses arranged by summer colors versus displaying blouses and shoes one product at a time.
- Metric. We will measure both click-through rates and sales of both shoes and blouses displayed one product at a time and those same products displayed in summer color groups.
- Criteria. We are right if 10% fewer shoes are abandoned in carts per month when grouped with blouses by summer colors.
- Follow up. To further refine attractive product groupings, we will compare the results to learn which product groupings are more appealing and design our next experiment based on this.
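When the results come in, checking the test card's success criterion is simple arithmetic. A sketch, with monthly totals invented purely for illustration:

```python
# Hypothetical monthly results; the 10% threshold comes from the test card.
baseline_shoes_sold = 500  # one product per page
grouped_shoes_sold = 560   # summer-color groupings

# Relative lift of the grouped layout over the baseline.
lift = (grouped_shoes_sold - baseline_shoes_sold) / baseline_shoes_sold

# The experiment succeeds if the lift meets or exceeds the 10% criterion.
success = lift >= 0.10

print(f"Lift: {lift:.0%} -> hypothesis {'supported' if success else 'not supported'}")
```

Whatever the outcome, the follow-up step above applies: a supported hypothesis narrows the next experiment, and a refuted one redirects it.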
Note that experimenting doesn't end here; it's just the beginning! Stated another way: Your team won't achieve its business goal without cultivating and embracing a culture that allows people to experiment, fail, adjust, and learn.