Oct 30, 2017
We ran Philosophie for many years with little process aside from “get the job done, good.” It worked when we were a tiny team and had small clients with straightforward needs. But as we started to work with larger and larger clients, on increasingly complicated projects, process became critical to hold things together and give the team a sense of control.
The thing is, control can be a vanity metric. Executing perfectly against a flawed plan leads to disaster. Increasing your development velocity when you're building the wrong product just means you're wasting talent faster. These days, our work focuses on helping businesses find new, strategic ways to leverage technology. These are areas where, by definition, the company is not yet in control, such as an emerging market, the digitization of a business line, a next-generation technology, or a wicked problem. We design under conditions of extreme uncertainty.
Fortunately, there are new ways of working that are designed to handle this ambiguity, and they are making their way to the mainstream. Agile development, lean startup, and design thinking have all gained traction with managers and executives, but are still notoriously difficult to put into practice at a large organization. When they are, it tends to look like a mini-waterfall: lean startup → design thinking → agile development.
We've studied and participated in various innovation programs that companies have tried: venture capital, accelerators, innovation labs, and company-wide transformation. My stance is very pragmatic: every organization is different and will need a different blend of innovation programs to stay relevant. There are many roles in this ecosystem. As CEO of a design firm, my challenge is this: how do we optimize the design process to drive innovation in the digital era?
We believe that the future of work is collaborative, creative, and versatile, and these values are baked into our design process.
Who said that a design team only has designers on it? We all know that good design is multidisciplinary, but how often do business people, designers, and developers work side-by-side through the whole process? When we set up design teams, they include at least one designer, at least one developer, and an entrepreneurial product owner. We try to keep the teams small and autonomous so they can learn and adapt quickly.
We mean creative in the literal sense of making things. We believe that design should be action-centric, especially when designing for innovation. You can't simply research or talk your way to a good design. You need to tinker, and you need to tinker in the medium you're designing for. This is part of the reason that a software design team must have a software engineer on it.
The pace of change is increasing, and we don't expect it to slow. That means our process needs to be extremely adaptable. Products and services must evolve over time to stay relevant, so we didn't want a process that can't be used throughout a product's lifecycle. And since technology is itself constantly changing, the process can't be specific to any one kind of tech; ideally, it would apply beyond technology altogether. Why not design a new way to hold paper together? Why not design a better way of organizing teams?
Unlike other design processes (such as user-centered design, process-centered design, etc.), we put the experiment at the center of our process. Why? Because design is about learning, not implementing. This doesn't mean that the hypothesis behind the experiment isn't user-centered or business-centered; it can be either or both. All it means is that the primary outcome of each cycle is learning. (This is a strong nod to Lean Startup's Build-Measure-Learn cycle.) Actual business results are welcome, and can even be expected, but the experiment fails only if you did not learn anything.
It becomes particularly interesting when design teams apply this concept at different scales. For instance, a social entrepreneur might have an original idea to get more young people to vote in the presidential election. There might be some clever ways to validate the idea long before November, but at some point she has to make a leap of faith, build something, and see how effective it was. That’s a big experiment with a lot riding on it — months of software development and a four-year wait until the next testing opportunity comes around! By contrast, imagine a heavily-trafficked e-commerce website looking to improve its conversion rates. The design team might hypothesize that changing the language on the “add to cart” button will be effective. In that case, making the change, deploying it, and analyzing A/B testing results can all be done in a single afternoon.
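To make the e-commerce end of that spectrum concrete, here's a minimal sketch of how a team might read out such an afternoon A/B test. The traffic numbers are hypothetical, and a standard two-proportion z-test stands in for whatever analytics tooling the team actually uses:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from control A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical afternoon's worth of traffic on the two button variants
p_a, p_b, z, p = two_proportion_z(conv_a=480, n_a=10_000,
                                  conv_b=540, n_b=10_000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.3f}")
```

If the p-value clears whatever threshold the team agreed on before launching the test, the new copy wins; either way, the cycle produced learning.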
This concept of variable experiment scale suggests that experiment-driven design cycles should be of variable length. This is a major deviation from the agile trend of fixed-length sprints, which has also been widely adopted by designers in the form of design sprints. We’re actually big fans of time-boxing experiments in order to stay on-time and on-budget, but experiment-driven design is sensitive to the fact that some experiments don’t fit into one or two weeks. In the example above, not only did the startup test take longer to build, it also took longer to gather result data. We say that when you’re in discovery mode, let the experiment drive the sprint length rather than your scrum trains.
Variable scale and time also means that experiment-driven design is fractal. Think about it — emergent software products have limitless unknowns. If you try to test every hypothesis in a linear fashion, your runway will expire. In practice, a new product will have many experiments happening at once, likely with some inside one another. The degree to which this occurs will vary from product to product. More mature products might have fewer, more controlled experiments with quantitative results; a startup might rely more on judgement to evaluate whether nested experiments had a confounding effect.
Experiment-driven design can be broken down into six steps: understand, opportunity, ideate, hypothesis, make, and test. The steps alternate between divergent and convergent modes of thinking. In practice, you can begin anywhere on the loop. For now, we'll walk through it starting at the top.
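To keep the shape of the loop in mind, here's a toy sketch of the cycle; the function and its names are my own illustration, not part of any official framework. It captures the two properties just described: the steps alternate between divergent and convergent thinking, and you can start anywhere on the loop:

```python
from itertools import cycle

# The six steps, paired with their mode of thinking.
STEPS = [
    ("understand", "divergent"),
    ("opportunity", "convergent"),
    ("ideate",      "divergent"),
    ("hypothesis",  "convergent"),
    ("make",        "divergent"),
    ("test",        "convergent"),
]

def walk_loop(start="understand", n=6):
    """Yield n steps of the cycle, beginning anywhere on the loop."""
    names = [step for step, _ in STEPS]
    loop = cycle(STEPS)
    # Advance to the chosen starting step
    for _ in range(names.index(start)):
        next(loop)
    return [next(loop) for _ in range(n)]

for step, mode in walk_loop(start="hypothesis", n=6):
    print(f"{step:<12} ({mode})")
```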
I’ve long been a fan of Stephen Anderson’s description of experience design: “it’s all about People, their Activities, and the Context of those activities.” The understanding phase of experiment-driven design includes all of those, but it goes deeper than documenting the who, what, and why. We aim to feel what our users are feeling. This is empathy, a core tenet of design thinking.
The step is named with a verb because achieving this deep understanding isn’t a passive activity. Research is the key, and the methods are selected based on what the design team needs to learn at the time. We emphasize qualitative methods that deepen empathy, but there is often a lot of existing data and literature to review as well.
Design research is a divergent activity because we do it to gain new perspectives, not to confirm our existing ones. The goal is to dig deeply enough to uncover an original insight that can serve as a competitive advantage. This is the “a-ha moment,” where a path forward is suddenly illuminated.
That path is the opportunity. Many designers like to call this the problem statement, but we find that limiting because many opportunities aren't associated with pain. Others call it the creative brief, but that document has come to represent an assignment given to the design team, rather than one created by it.
Moving from an epiphany to a clearly articulated statement is easier said than done! It requires good justification, consensus-building, and usually a fair amount of wordsmithing. Often it involves making the business case that is going to get your team funded. It also doesn’t take place in a vacuum, so it needs to be aligned with the organization’s broader strategy.
The opportunity statement is important because it becomes the rallying point for the design team. An important principle of agile development is to build projects around motivated individuals. You want to define the opportunity in a way that inspires people to pursue it rigorously. Innovation isn't easy.
Once the team is committed to an opportunity, we open the creative floodgates. This is the part that I think is missing the most from Lean Startup’s BML cycle. Decent products happen when the product manager interprets data, decides on an idea to move forward, and assigns the team to execute it. Groundbreaking products happen when the team interprets the data together, agrees to its meaning, and engages in a creative process to find exciting ways to leverage it.
Even though we all have it, creativity is a surprisingly difficult thing to encourage in the modern workplace. In most companies, the prevailing culture resists failure. Individuals have been trained to “keep it professional,” veiling their humanity in order to “look good” to their bosses and colleagues. Most of us like to play it safe.
But to generate novel ideas, we need to play like we did when we were children. We need to wander, sketch, riff off each other, and act things out. We need to smash seemingly unrelated concepts together and see how they fit. We need to try things just because.
Sometimes ideation sessions lead us to some smiles and laughter and nothing else. That’s okay. Because at the very least they open us up to new possibilities that may emerge as we continue down the path we choose.
Brainstorming can’t go on forever. At some point the team has to choose what possibility to move forward with. Traditional design calls this the solution definition. We call it a hypothesis, which gets us three things:
One, using this language helps take our egos out of the design. The team should be confident that the idea will work, but capable of letting it go if it doesn't.
Two, it allows us to focus on the risky, interesting, new, or strategic parts of the product rather than the obvious parts. There’s no sense in building all of the “table stakes” features before you know whether you’ve found a unique value proposition.
Three, by framing the solution as a test, we don’t necessarily have to build the entire thing. This can save untold amounts of effort, capital, and time that we might have wasted if we didn’t do it this way.
You want to create the simplest artifact that will result in an effective test. This is indeed the intended purpose of a minimum viable product: it’s a tool to advance learning, not necessarily to launch. But since the term “product” is overloaded, we like to call it a design prototype or design artifact.
You can’t rely on your experiment results if the prototype wasn’t well-constructed. Let’s say you want to run a landing page MVP to gauge market interest before launching your product. Seven years ago, if you were a startup you could spin up a Launchrock page in a few minutes, get a write-up in TechCrunch, and somehow get a bunch of early adopters to sign up. Nowadays, starting a new digital business is much more competitive—consumers have so many options, and they are savvy enough to distinguish a hastily-built landing page from one that has some substance behind it. Even if you’re “just” putting a feeler out there, it should be well-researched, well-targeted, and well-executed.
When everything's buttoned-up, we launch the experiment. Sometimes it's launched in a production environment. Sometimes it's launched to a small panel of customers. Sometimes it's launched to a single exec who needs to greenlight the project.
Tests take different amounts of time to gather data, but when you have enough it's time to make the decision: pivot or persevere. In some cases you'll have set a quantitative success criterion that makes the call easier. In others, it will take a lot more intuition.
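When the criterion is quantitative and was agreed on before launch, the call can be as mechanical as a threshold check. A hypothetical sketch, with made-up numbers and names:

```python
def pivot_or_persevere(observed, target, minimum_sample, sample_size):
    """Pre-registered decision rule: persevere only if the experiment
    met its quantitative success criterion with enough data behind it."""
    if sample_size < minimum_sample:
        return "keep testing"  # not enough data to decide either way
    return "persevere" if observed >= target else "pivot"

# Hypothetical criterion set before the experiment launched:
# "the new onboarding flow must hit 8% signup conversion on 5,000 visits"
print(pivot_or_persevere(observed=0.091, target=0.08,
                         minimum_sample=5_000, sample_size=6_200))
# prints "persevere"
```

The value of writing the rule down up front is that the team argues about the threshold before seeing the data, not after.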
No matter what, one loop through the process has bought you a richer understanding of how your concept shows up in the real world.
Experiment-Driven Design is more than a thought experiment. It's meant to be put into practice. It's meant to provide a sense of control in chaotic environments.
One way to apply it is retroactively. Look at your existing project. What are the experiments underway? What phase are you in for each of them? Are you going through the cycles as quickly as possible? Are you working collaboratively throughout? Doing this retrospective analysis might give you more clarity around what's going on.
The other way is to apply it with intention to new projects. We've learned a lot about how to implement Experiment-Driven Design in organizations enormous and tiny, and are happy to share some tactics with you.
Please reach out to me on Twitter or via email and let me know how you might use Experiment-Driven Design at your organization. As always, we're here to answer any questions you have or clarify points in this post.