There are a few ongoing debates in the world of digital design. Things like “Should designers code?”, “What’s the value of design?”, “UX versus UI,” and, perhaps most fundamentally, “Is everyone a designer?” To get a taste for the flavor of that last one, you can step into a Twitter thread from a little while back (TL;DR: it didn’t go super well for anyone).

To be clear at the outset, I don’t care if everyone is a designer. However, I’ve been considering this debate for a while and I think there is something interesting here that’s worth further inspection: Why is design a lightning rod for this kind of debate? This doesn’t happen with other disciplines (at least not to the extent it does with design). Few people are walking around asserting that everyone is an engineer, or a marketer, or an accountant, or a product manager. I think the reason sits deep within our societal value system.

Design, as a term, is amorphous. Technically you can design anything from an argument to an economic system and everything in between, and you can do it with any process you see fit. We apply the idea of design to so many things that, professionally, it’s basically a meaningless term without the addition of some modifier: experience design, industrial design, interior design, architectural design, graphic design, fashion design, systems design, and so on. Each is its own discipline with its own practices, terms, processes, and outputs. However, even with its myriad applications and definitions, the term “design” does carry a set of foundational, cultural associations: agency and creativity. The combination of these associations makes it ripe for debates of ownership.

Agency


To possess agency means to have the ability to affect outcomes. Without agency we’re just carried by the currents, waiting to see where we end up. Agency is control, and deep down we all want to feel like we have control. Over time our cultural conversation has romanticized design, unlike any other discipline, as the epicenter of agency, a crossroads where creativity and planning translate into action.

At its core, design is the act of applying structure to a set of materials or elements in order to achieve a specific outcome. This is a fundamental human need. It’s not in our nature to leave things unstructured. Even the concept of “unstructured play” simply means providing space for a child to design (structure) their own play experience — the unstructured part of it is just us (adults) telling ourselves to let go of our own desire to design and let the kids have a turn. We hand agency to the child so they can practice wielding it.

There are few, if any, activities that carry the same deep tie to the concept of agency that design does. This is partially why no one cares to assert things like we’re all marketers or we’re all engineers. They don’t carry the same sense of agency. Sure, engineers have the ability to make something tangible, but someone had to design that thing first. You can “design” the code that goes into what you are building, but you do not have the agency to determine what is being built (unless you are also designing it).

If we really break it down, nearly every job in existence is either a job where you are designing, a job where you are completing a set of tasks in service to something that was designed, a job where your tasks are made possible by some aspect of design, or some mix of the three. In every case, the act of “designing” is what dictates the outcomes.

Creativity


The other key aspect of our cultural definition of design is creativity. Being creative is a deep value of modern society. We lionize the creatives, in the arts as well as in business. And creativity has become synonymous with innovation. There is a reason that for most people, Steve Wozniak is a bit player in the story of Steve Jobs.

The idea of what it means for an individual to be creative is something that has shifted over time. In her TED Talk, Elizabeth Gilbert discusses the changing association of creative “genius.” The historical concept, from ancient Greece and Rome, was that a person could have a genius, meaning that they were a conduit for some external creative force. The creative output was not their own; they were merely a vessel selected to make a creative work tangible. Today, we talk about people being a genius, meaning they are no longer a conduit for a creative force, but instead they are the creative force and the output of their creativity is theirs.

This seemingly minor semantic shift is actually seismic in that it makes creativity something that can be possessed and, as such, coveted. We now aspire to creativity in the same way we aspire to wealth. We teach it and nurture it (to varying degrees) in schools. And in professional settings, having the ability to be “creative” in your daily work is often viewed as a light against the darkness of mundane drudgery. As we see it today, everyone possesses some level of creativity, and fulfillment is found in expressing it. When we can’t get that satisfaction from our jobs we find hobbies and other activities to fulfill our creative needs.

So, our cultural concept of design makes tangible two highly desirable aspects of human existence: agency and creativity. Combine this with the amorphous nature of the term “design” and suddenly “designer” becomes a box that anyone can step into and many people desire to step into. This sets up an ongoing battle over the ownership of design. We just can’t help ourselves.

Take again, as proxy, our approach to the arts. While we lionize musicians, actors, artists, and other creators, we simultaneously feel compelled to take ownership of their work, critiquing it, questioning their creative decisions, and making demands based on our own desires. The constant list of demands and grievances from Star Wars fans is a perfect example. Or the fans who get upset if a band doesn’t play their favorite hit song at a show. Even deeper, we feel a universal right to remix things, cover things, and steal things.

Few people want to own the nuts-and-bolts process of designing, but everyone wants to have their say on the final output.

But just like other things we covet, what we desire is ownership over the output, not the process of creating it. For example, we’re willing to illegally download music, movies, books, games, software, fonts, and images en masse, dismissing the work it took to create them and sidestepping the requirement to compensate their creators.

A similar phenomenon occurs in the world of design. Few people want to own the nuts-and-bolts process of designing, but everyone wants to have their say on the final output. And because design represents the manifestation of agency and creativity there is an expectation that all of that feedback will be heard and incorporated. Pushing back on someone’s design feedback is not just questioning their opinion, it’s a direct assault on their sense of agency.

As a result, final designs are often a Frankenstein of feedback and opinions from everyone involved in the design process. In contrast, it’s rare to see an engineer get feedback on the way code should be written from a person who doesn’t have “engineer” in their title. It’s rarer still to see an engineer feel compelled to actually take that sort of feedback and incorporate it.

Another place this kind of behavior crops up is in the medical world. Lots of people love to give out health advice or question the decisions of doctors. However, few people would say “everyone is a physician.”

And I think this represents a critical point. There are two reasons people do not assert that they are physicians unless they actually are:

  1. We have made a cultural decision that practicing medicine is too risky to allow just anyone to do it. You can go to jail for practicing medicine without a license.
  2. No one actually wants to be responsible for the potential life and death consequences of the medical advice they give.

This highlights a third aspect of our cultural definition of design: Design is frivolous. Despite the connection between design and agency, many still view “designing” as trite and superficial.

Humans are sensory creatures. We absorb much of the world around us through visual, auditory, tactile, and olfactory inputs. Because of this, when we think of the agency inherent in design most of us think about it in terms of the aesthetic value of the output. Basically, we continually conflate design with art. If you don’t believe me, watch any episode of Abstract on Netflix. This is also why design programs are still housed in art schools.

So when most people critique designs, their focus is on aesthetics—colors, fonts, shapes—and their reactions are based on the feelings and emotions those aesthetic values elicit. While aesthetics have an important role to play, they are only a piece of the overall puzzle. It is much harder for people to substantively critique the functional merits of a design or understand the potential impacts a design decision can have. That is partially why so many of our design decisions end up excluding certain groups of users or creating other unexpected negative consequences: We don’t critique our decisions through that lens.

Everyone is a designer because there is no perceived ramification for practicing design.

Because of this narrow, aesthetic-based view, the outcomes of the design process feel relatively inconsequential to many people, especially in comparison to something like the outcomes of a medical diagnosis. And if there are no consequences, why shouldn’t we all participate? Everyone is a designer because there is no perceived ramification for practicing design.

Of course, in reality, there are major consequences for the design decisions we make. Consequences that are more significant, on a population level, than many medical decisions a doctor makes.

What I’ve come to realize is that the idea that everyone is a designer is not really about some territorial fight for ownership; it’s actually a symptom of our broken culture of technology. Innovation (creativity) is our cultural gold standard. We push for it at all costs and we can’t be bothered by the repercussions. Design is the tool that gives form to that relentless drive. In a world of blitzscaling and “move fast and break things” it serves us to believe that our decisions have no consequences. If we truly acknowledged that our choices have real repercussions on people’s lives, then we would have to dismantle our entire approach to product development.

Today, “everyone is a designer” is used to maintain the status quo by perpetuating the illusion that we can operate with impunity, in a consequence-free fantasy land. It’s a statement that our decisions have no weight, so anyone can make them.

I said at the beginning that I don’t care if everyone is a designer, and I mean that. If we keep thinking of this debate as some territorial pissing match then we continue to abdicate our real responsibility, which is to be accountable for the things we create.

It really doesn’t matter who is designing. The only thing that matters is that we change our cultural conversation around the consequences of design. If we get real about the weight and impact that design decisions have on our world, and we all still want to take on the risk and responsibility that comes with that agency, then more power to all of us.

“Why the ‘Everyone Is a Designer’ Debate Is Beside the Point” was originally published in Medium on January 22, 2020.

Ask a designer who the most important stakeholder in their design process is and they will dutifully answer “the user.” It’s been drilled into us that our job is to represent the people who will use our products. We “empathize” with them and put their needs in the center of our decision-making process.

On paper, this sounds great, and many organizations wear the badge of human-centered design with pride. But when you take a step back and start to consider all the negative consequences that are created by these very same organizations, it becomes clear that something is amiss.

How could a process predicated on empathizing with people result in things like rampant data manipulation and exploitation, addictive features that hijack human psychology, systemic abuse, disenfranchisement, and predatory dark patterns? The answer is that it can’t. This can only mean one thing: We aren’t actually practicing human-centered design. And, unfortunately, the more established your company, the truer this statement is. As companies scale up, as they all strive to do, their priorities and incentives become less and less aligned with the people using their products.

The dehumanization of design

Taking an idea from concept to business means moving through a series of gates. In the Silicon Valley model the gates look something like this:

  1. Develop an initial product concept and launch a Minimum Viable Product (MVP).
  2. Iterate on MVP to reach product/market fit.
  3. Scale up.
  4. Cash out.

Driven by venture capital money, the goal is to cross these gates as quickly as possible. They’ve even coined a phrase for it: “blitzscaling.”

The problem is that as a company moves through each gate, the organization and its underlying incentives fall farther out of alignment with the needs of the people using the product and align more and more with the needs of the business. While an org may preach human-centered design, this growing imbalance of incentives and priorities runs counter to the tenets of that practice and makes it increasingly difficult to maintain them. Here is my generalized representation of how that looks:

[Figure: a generalized representation of how incentives shift away from user needs and toward business needs across the four phases]
Let’s walk through each phase to get a sense for what this means. We are going to use the example of a ride-sharing app to illustrate the point.

1. Initial concept development/MVP

If human-centered design exists in any phase, this is it. We discover a real problem that people are having in the real world and we set out to solve it.

Maybe you notice that it’s not easy to get a ride home from college if you don’t have a car, and you want to solve that problem. At this stage your design challenge might be something like:

How can we make it easier for people to get a ride without needing their own car?

This is a people problem. Addressing this problem means understanding the larger human context behind it and then developing a solution that delivers real-world value. That is human-centered design.

In this case, you decide to create an app that allows people to share rides. You design, test, and prototype with potential customers, build your MVP, and launch it into the world.

This step of the process is as customer-focused as you will likely ever be. Your vision is clear and competing priorities are incredibly limited. All you want is for your solution to solve a real human problem. Ironically, this is also the moment where human-centered design begins to die.

2. Reach product/market fit

As soon as your MVP is live, the incentives underlying the design process change dramatically. Provided your solution isn’t completely off base (in which case you basically move back to phase one), the next step for your app is to improve the experience and iterate on the feature set.

This shifts your focus away from the external human context of the problem and narrows it to the internal product context. You aren’t solving real-world problems anymore; you are solving product-based problems that you created with the way you designed your solution. In our ride-sharing app example, a problem in this phase might be something like:

How do we make it easier for riders to rate drivers?

This is a product problem. It didn’t exist until you created it. Yes, it is affecting people and addressing it means you need to understand the behavior of those people, but only within the context of your product. This may feel like a subtle difference from the MVP phase, but it is a critical distinction, and a key first step in dehumanizing the design process. First, it narrows our view of the people we are designing for. We begin to form a bubble where “understanding the user” means understanding their interaction with the product, not understanding the context of their life. This is the step where “humans” become “users,” and the way they interact with the business becomes how they are defined.

Second, this is where we start to introduce numbers, in the form of engagement metrics, as a proxy for people, creating a new layer of separation between us and them. For example, in the case of the driver rating problem above, our key indicator of success will not be based on individualized feedback, but rather on whether or not the overall percentage of riders who rate their driver goes up. This kind of measurement is unavoidable, but we rarely acknowledge its dehumanizing effect. Users become an amorphous blob masked behind our business metrics.
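To make the proxy concrete, here is a minimal sketch of how a team might compute that driver-rating metric. The data and names are hypothetical:

```python
# Hypothetical sketch: the "percentage of riders who rate their driver" metric.
# Each person is reduced to a single boolean the moment the data is aggregated.

rides = [
    {"rider_id": "u1", "rated_driver": True},
    {"rider_id": "u2", "rated_driver": False},
    {"rider_id": "u3", "rated_driver": True},
]

def rating_rate(rides):
    """Share of rides where the rider rated the driver."""
    return sum(r["rated_driver"] for r in rides) / len(rides)

# Success becomes "did this number go up?" rather than
# "was any individual rider actually helped?"
print(f"Rating rate: {rating_rate(rides):.0%}")  # Rating rate: 67%
```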

While this kind of product problem is still largely focused on user issues, the business-centric context shift alters our definition of what’s important and primes us to focus more on business needs.

3. Scale up

Once a product has traction, the question becomes “How do we grow?” This introduces a new focus: maximizing key growth metrics. In the case of our ride-sharing app a problem in this phase might be:

How do we increase the number of rides people take?

This is a business problem. No human is walking around saying, “Man, I wish I had to use my ride-sharing app more often!” This type of problem is the antithesis of a human-centered problem.

In order to increase rides, you either need to squeeze more value out of existing customers, convince new people to start taking rides, or both. While some of this is about driving awareness, much of it centers on convincing people that they have a problem, or in the worst case, actively creating new problems. This is no longer about uncovering an unmet need and developing a solution.

It is at this step that things can really start to turn and those negative externalities emerge. This is the realm of addictive product loops, invasive notifications, email drip campaigns, dark pattern tricks, data manipulation, and whatever else we characterize as growth hacking these days. It’s countdown timers in checkout flows, 0% down offers, Prime Day, and planned obsolescence. A lot of the advertising world makes its hay here as well.

When teams are incentivized solely around moving metrics, all manner of unsavory things become fair game.

Solving these problems means you are designing to extract value, not to deliver it. To “understand the user” in this context means understanding how to change or manipulate their behavior in order to move a metric. When teams are incentivized solely around moving metrics, all manner of unsavory things become fair game. This is where your ride-sharing app might develop something like Greyball.

4. Cash out

This is the last gate. You now have a successful product operating at scale. Your new problem — and the core focus of the organization — becomes:

How do we maximize our market value?

This is a market problem. The goal is to position the company in the best light possible for IPO or acquisition, or some other liquidity event. This is the final step in dehumanizing the design process. You are now designing for the investor.

The most likely outcome for the people using your product is not some new solution to a real problem they have, but a doubling down on the extractive tactics you employed while scaling. Make no mistake: Growth hacking is not some short-term stopgap; it’s like a drug. Once you get hooked there is no turning back. You’ve created a beast that must be fed, and good luck getting off that treadmill. So for many, this becomes the new normal, and negative externalities become the cost of doing business.

It doesn’t stop once you cash out, either. If you go public, for example, you are legally obligated to prioritize increasing investor value for as long as you are in the public market. If you get acquired, it’s likely to be by some other publicly traded company under the same requirements. The idea that you could swing back to focusing on the needs of your customer at this point is foolishly idealistic.

The principles of human-centered design evolved over decades, but the practice really took hold in the 1990s, when it was popularized by IDEO and other design consultancies. Human-centered design is the perfect tool for a consultancy like IDEO, which takes on the task of uncovering new problems and proposing new solutions. It’s also why this mode of design works so well in the first phase of product development. But consultancies rarely build and scale things; in fact, they almost never leave the first gate. Building and scaling is up to the client. That’s not a bad thing; it’s just the way the relationship works.

But what this means is that human-centered design is not something you can just drop into any organization and expect it to improve outcomes for customers. For a lot of companies with existing products, the concept doesn’t fundamentally align with the incentives of the org. It’s also why it becomes increasingly difficult for teams who employed human-centered design principles early in the life of a company to effectively maintain the practice long-term. The growing misalignment of incentives creates major headwinds to selling through and leveraging the resulting insights. There are too many competing priorities, and the context through which the company views “the user” fundamentally changes. This is where things break down and bad things can happen.

Negative externalities emerge as “human-centered” technology platforms grow because the reality is that their focus shifts deeper and deeper into business-centric thinking. The bubble that begins to form when we move from creating an MVP to focusing on “product problems” only gets bigger over time, and teams become increasingly disconnected from the real world. It’s a slippery slope where the problems of the business overtake the problems of the humans the business serves, and it warps our perspective.

Delivering real human value isn’t a benchmark for business success. Instead, success is defined by speed, scale, and growth.

This is an incentives issue deeply rooted in our culture of technology. Delivering real human value isn’t a benchmark for business success. Instead, success is defined by speed, scale, and growth. When an entire business is incentivized to scale at all costs, it takes a lot of effort and intention to separate yourself from that context. It’s unnervingly easy to be swept along, not realizing how far things have drifted from your original goals. When we do talk to customers, it’s shaded by this context. Our research goals are driven by the needs of the business. We ask the questions that will get us the answers to complete our current task.

The first step to solving all this is awareness and being willing to have honest conversations with ourselves. If we hide our heads in the sand and insist that everything we do is human-centered, we are less likely to question our choices and the choices of those around us. But awareness is just the first step.

If we ever want to truly align the kind of design we say we do with the kind of design we actually do, we need to be willing to question our cultural definition of success. Reaching for scale and endless growth drives us to do unnatural things as we lose sight of the human value we were trying to deliver. As the negative impacts of our technology choices continue to grow, it’s time to consider that speed, scale, and growth are flimsy proxies for success.

“Human-Centered Design Dies at Launch” was originally published in Medium on July 31, 2019.

Three years ago, I couldn’t stand for any period of time without my lower back seizing up. I had chronic nerve pain running from my left shoulder to my left wrist. It was bad enough that I couldn’t sleep. I was at least 20 pounds overweight and more out of shape than I had been in a decade. I was 35 years old. My physical condition was not what I would call optimal.

I had fallen into the hustle trap. At the time, I was head of product for a tech company, and the long hours had caught up with me: 80-plus-hour weeks, with late nights on my laptop, sitting hunched over on my couch. A complete lack of exercise and poor nutrition. Things had gone south surprisingly quickly and without my full awareness.

The nerve pain was the tipping point that sent me to my primary care doctor, mostly out of fear that I had some significant neurological issue. Before sending me to a neurologist, the doctor suggested physical therapy. So that’s where I started—and I started slow: basic exercises, some stretches, and some walking. I put six months into therapy, and it eventually fixed the nerve pain. But the journey had just begun. Physical therapy was over, but the factors that took me to the breaking point were all still central to my life: long work hours, impending deadlines, constant computer work, and “tech neck.”

The culture of tech pushes people harder and harder, but we don’t think about the physical effects of that labor.

My experience is not unique. An anecdotal survey of my immediate professional network of designers and engineers came back with 50% of them suffering from some level of repetitive stress injury. Similarly, 50% of the people on the product team I was leading at the time were simultaneously in physical therapy for back and shoulder issues. While hard stats aren’t easy to come by, a study in Sweden corroborates my anecdotal data, showing that “around half of those who work with computers have pains in their neck, shoulders, arms, or hands.”

The culture of tech pushes people harder and harder, but we don’t think about the physical effects of that labor. If we were athletes, where physical health is vital to success, the thinking would be completely different. Sports organizations have entire teams dedicated to physical training and support for their players. But while it’s easy to understand why this investment is critical for an athlete, it’s much harder to make that connection for knowledge workers sitting in an office. We aren’t frequently required to tackle co-workers in the hallway or run 40-yard dashes to determine who gets access to a conference room. (Though maybe you do if you work at ESPN.com.) This makes it easy to ignore the long-term physical toll of our work.

Culturally, we view the physical requirements of a job through the lens of how strenuous the individualized actions are to complete, like lifting boxes, running laps, digging ditches, or hitting jump shots. We don’t think about it in terms of aggregate impact. As a result, these sorts of injuries aren’t really discussed. When no one perceives what you’re doing as physically demanding, it’s embarrassing to talk about being injured. It’s like telling someone you hurt yourself getting out of bed. Add to that tech companies’ expectations around their employees’ time, and this becomes something few people want to announce to the world.

The roots of this issue sit deep down in the way we approach work in many sectors, but especially in technology and its surrounding industries. A recent AdAge article found that 65% of employees at ad agencies are suffering from burnout. Similarly, in 2018, a piece in Forbes put 57% of tech employees in the same boat. Fixing this means rethinking our ceaseless drive for efficiency and output, and worksite wellness programs aren’t going to cut it.

When my back fell apart, the company I was working for had lots of wellness amenities. An on-site, two-story gym offered several weekly classes, and our benefits package included massage and chiropractic care, with a chiropractor on-site. Nonetheless, half the product team was in physical therapy.

While these options were available, finding the space and time to use them was a different story. For sports organizations, there is a clear path from injury to lost revenue. Physical health is critical to getting the job done, so those organizations are built around it, and activities related to maintaining physical health are simply part of the work. In tech, health and wellness is just another carrot used to recruit prospective employees, no different than a foosball table or kombucha tap. It’s a fancy add-on available if you have time, but good luck finding that—we’ve got features to ship. But while the path from injury to lost revenue is not as clear in tech as it is in sports, that doesn’t mean it does not exist.

A 2012 study from the Liberty Mutual Research Institute ranked the top 10 causes of workplace injuries and their resulting economic impact. Repetitive stress injuries came in ninth, with a $1.8 billion annual cost for companies. You can’t pin all those losses on the tech industry, as the study pulled data from injury reports across sectors, but given the evidence that 50% of those who work on computers report pain issues, coupled with the tech sector’s growth since 2012, it is very likely that the economic impact of these issues has grown significantly. There is also a good chance the number has been grossly underestimated. Because of tech’s culture, my guess is that many of these issues go unreported and potentially untreated. The tech industry is the epicenter for the world of GaryVee-inspired hustle porn, where temporarily embarrassed billionaires kill themselves to earn some kind of social badge of honor. In that world, there is no room for sleep or a social life, let alone physical injuries. And companies hoping to move fast and break things embrace and reward this mentality with gusto.

There is nothing fun about slowly losing your quality of life in the machine of iterative product development.

We become culturally conditioned to think of these issues as just the price of doing the work, not as an occupational hazard or some abnormal outcome that should be reported. I didn’t report my issues through workers’ comp, and my guess is many others do not as well.

In this way, the issue takes on a different flavor than you might see in other industries. For much of the manufacturing world, health and safety is a big part of the conversation, with labor groups, OSHA, and other regulatory bodies working to ensure a safe environment for workers. In tech, it’s a silent epidemic. And while the consequences may not be as outwardly dire as losing a hand in a piece of industrial machinery, there is nothing fun about slowly losing your quality of life in the machine of iterative product development. Additionally, research from Harvard suggests that health issues related to workplace stress and burnout represent an additional $190 billion of health care expenditures each year and contribute to 120,000 annual deaths. So there’s that.

My physical burnout moment became a forcing function for me to keep myself at a certain level of physical fitness. I’ve since developed a routine that helps me keep things under control, but it requires time, effort, and conscious intention. If I slip for too long, issues creep back in.

What if we recognized that time and effort as a requirement of doing the job in the same way we recognize the need for athletes to take care of themselves? Instead of a nice-to-have perk (if you can find the time), we could acknowledge that wellness is foundational to individual success, even for jobs that might not be considered “strenuous.” As in an athletic organization, the health and wellness of all employees should be a central pillar of organizational structure.

I’m not saying tech companies need to have massive training facilities or two-a-day workouts, but we need to get real about creating work schedules that prioritize breaks and create space for actually using those wellness perks. This means establishing realistic expectations of employee hours and, most importantly, structuring deadlines that support those expectations. This may sound crazy or expensive, but I would argue that a lot of our ideas about “what works” for business are flawed, grounded more in archaic traditions and outdated beliefs than actual data. Case in point: this recent experiment from Microsoft where the company shifted to a four-day workweek in Japan and productivity jumped by 40%. Turns out taking care of people is good for business. More of that, please.

“Tech Workers Are Suffering From a Silent Epidemic of Stress and Physical Burnout” was originally published in Medium on January 15, 2020.

We build a lot of technology and push it out into the world. When things go well, we rush to take credit for what we did. But when things go wrong, we hide our heads in the sand. This isn’t just about ignoring negative outcomes — it’s about maintaining the status quo.

Whenever I write a critical piece about technology and its impact on society, a certain kind of troll surfaces. I like to call them the “techno-whataboutist.” Their argument is always the same: “[some person] had the same concerns about [some established technology — the book, the printing press, TV, newspapers, radio, video games, cars] a long time ago, and things turned out just fine, so stop worrying.”

And it’s not just no-name, trolly commenters who run down this path. Nir Eyal pulled the same shenanigans in his piece about screens and their impact on kids. And Slate did an entire piece on the history of “media technology scares” — which, according to the author, didn’t pan out. In both cases, Slate and Eyal pulled out one of the techno-whataboutist’s favorite examples:

The Swiss scientist Conrad Gessner worried about handheld information devices causing ‘confusing and harmful’ consequences in 1565. The devices he was talking about were books.

On the surface, it’s easy to laugh at Gessner, but our relationship with technology and the way it impacts our world is complicated. Nothing is black and white. It’s all gray. If we ever hope to have a healthy, sustainable relationship with the things we create, we have to be willing to dive into those gray areas. The techno-whataboutist’s goal is to avoid all that.

Traditional whataboutism is the deployment of a logical fallacy designed “to discredit an opponent’s position by charging them with hypocrisy without directly refuting or disproving their argument.” For example, a traditional whataboutist might try to dismiss climate activism by calling out that Greta Thunberg still rides in cars (hypocrisy!). This kind of tactic was a favorite propaganda tool of the Soviet Union during the Cold War. And while techno-whataboutism doesn’t allege hypocrisy, it represents the same kind of rhetorical diversion, one designed to act as a cudgel to beat back questions about the complex nature of our relationship to technology.

The idea that the only way to think about technology is in a positive light ignores the complexity inherent in technological progress.

The first big problem with techno-whataboutism is that it presupposes that the place we have ended up, as a society, is a good one. There is no power in Gessner’s book example unless you believe everything is fine.

To even be able to make a statement like, “people worried before, but everything is fine now,” takes a significant level of privilege. Perhaps that’s why in my experience the vast majority of the people who present this argument are white men.

Sure, for many of us white guys, things are pretty good. But this is not the case for everyone. The positive outcomes associated with the advance of technology are unevenly distributed and there are often significant winners and losers in the systems we architect and the things we produce.

Let’s continue with books as an example. The invention of the book made vast amounts of knowledge both available and easily transferable. It’s hard to argue against the net positive impact of that change. But if we just stop there we willfully turn a blind eye to the full picture.

The two most distributed books in history, the Bible and the Quran, while providing spiritual support for many people, have also helped spark a staggering amount of death, destruction, oppression, violence, and human suffering, often focused on marginalized groups and those who don’t ascribe to the beliefs these books contain. Mein Kampf helped catalyze the rise of the Nazis and ultimately the Holocaust. Mao Zedong’s Little Red Book, the third-most distributed book in history, arguably helped catalyze the Great Leap Forward, resulting in the deaths of millions of people.

The capabilities that made books a transformative, positive technology also made them weapons for propaganda and abuse on a previously unprecedented scale. So was Gessner wrong to worry about the impact of books? I don’t know about you, but I’d put indoctrination on a shortlist of “confusing and harmful” effects.

I’m not suggesting that we undo the invention of books or that the positives of technology should be discounted. But the idea that the only way to think about technology is in a positive light ignores the complexity inherent in technological progress. By doing so we lose a depth of conversation and consideration that leaves us open to repeating past mistakes and reinforcing existing power structures. For example, TV, radio, and now social media have mirrored many of the positive AND negative impacts of books on an exponentially accelerating scale, not to mention that each new technology piled on its own unique set of new issues.

Comparing a book to a smartphone is like comparing a car to a skateboard.

The techno-whataboutists practice a special brand of what I like to think of as “technological nationalism,” where they assert that all innovation is “progress,” regardless of the full outcome. This thinking keeps us locked into an endless loop where our technology changes but the political and economic status quo remains the same. The people who benefit continue to benefit and the people who don’t, don’t. We fix nothing and we disrupt everything, except the things that actually need disruption.

This brings me to the second problem with techno-whataboutism: The past is not a proxy for the future. Comparing a book to a smartphone is like comparing a car to a skateboard. Sure, they both have wheels and can get you from point A to point B, but that’s about as far as the similarities go. Books deliver information, as do smartphones, but the context and capabilities are on an entirely different scale. This kind of lazy logic blocks us from considering the specific nuances of new technology.

Context changes. The power, scale, and interconnectedness of our systems grow. We move from linear impacts to exponential impacts. The world is not as it was. The question becomes, when does it matter?

The consequences of our creations fall unevenly on society, but so far, as a whole, we’ve been able to push through and ignore much of the fallout. But when do the contexts and capabilities of our technology reach a point where the consequences can no longer be ignored?

In his 1969 book Operating Manual for Spaceship Earth, the architect and futurist Buckminster Fuller argued that while the resiliency of nature has created a “safety factor” that has allowed us to make myriad mistakes in the past without destroying ourselves, this buffer would only last for so long:

This cushion-for-error of humanity’s survival and growth up to now was apparently provided just as a bird inside of the egg is provided with liquid nutriment to develop it to a certain point…

My own picture of humanity today finds us just about to step out from amongst the pieces of our just one-second-ago broken eggshell. Our innocent, trial-and-error-sustaining nutriment is exhausted. We are faced with an entirely new relationship to the universe. We are going to have to spread our wings of intellect and fly, or perish; that is, we must dare immediately to fly by the generalized principles governing the universe and not by the ground rules of yesterday’s superstitious and erroneously conditioned reflexes.

Nature’s buffer acts as a mask, hiding the true impact of our actions and lulling us into a sense of overconfidence and a disregard for the consequences of our decisions. It’s easy to ignore all of our trash when the landfill keeps it out of sight, but at some point, the landfill overflows.

Fuller was a technological optimist, but he was also realistic about the complexity of change and innovation. From his vantage point in 1969, he was able to see that we were moving to an inflection point in our relationship with the world we inhabit. As he saw it, our safety factor was all used up and our ability to “spread our wings” was dependent on a change of approach, in order to come out the other side and truly cement our place in the cosmos.

The techno-whataboutist doesn’t want to change the approach. Instead, they want you to embrace their reductionist, technological nationalism where all innovation is good—outcomes be damned. Change is impossible under this type of thinking.

The tech world loves to talk about “failing fast,” but no one ever talks about what happens after that.

We’re already starting to see the aggregate impact of our choices on the natural world, and it’s becoming harder to hide from those consequences. But nature isn’t the only thing impacted by technology. The fabric of our society, the way we live and interact, is also tightly tied to the tools we have at our disposal. Like nature, I believe that the inherent resiliency in our societal structures has created a safety factor that has similarly allowed us to ignore the way our behavior, our habits, and our interactions have changed over time. But at what point does that landfill overflow? Or has it already?

Innovation is critical to our overall progress, and we have to accept that there are inherent risks in that messy and unpredictable process. But our need to invent doesn’t absolve us from being accountable for the results. The tech world loves to talk about “failing fast,” but no one ever talks about what happens after that. Who cleans up the mess we leave for society when we fail?

We take full credit for our successes. We stand on big stages and make a big show about the amazing benefits of our newest creations, but we sneak out of the party when shit goes bad. We don’t get to have our cake and eat it too.

It is possible to hold a positive view of technology while still acknowledging its downsides. And while we can’t be afraid to push the edges of what’s possible, we have to be willing to admit when things go wrong and invest in the work to fix it. Our safety factor won’t protect us forever. This is when it matters.

“The Problem with the Techno-Whataboutists” was originally published in Medium on January 8, 2020.

The process of user-centered design focuses a lot of attention on finding the right problem. There are many tools and processes that can be used to suss out user needs and motivations and boil it all down into clearly defined problems to be solved. But with all these tools at our disposal, how can we still end up with less than optimal and often negative outcomes for the people we are supposed to be helping?

The issue is that our obsession with solving the correct problem frequently takes our focus away from an even more important aspect of the project. While we can’t move forward without a defined problem, the way we define success for a project actually carries more weight in the final outcome than problem statements or initial insights from user research.

Raise your hand if this has happened to you: You see an interesting headline, maybe something like, “The 10 Best Vacation Spots in Mexico.” You click the link with the hope of reading through a concise list of locations to help you get some inspiration for your next trip. But that’s not what you get. Instead, you get a page so crowded with ads it’s hard to tell where the article starts and the ads end. On top of that, you don’t even get a list of places. Instead, you get one of the list items (usually the bottom of the list) and you have to click “next” to cycle through the remaining nine items. Each time you click, the page reloads and you have to wade through a new set of ads to see the next item in the list.

This all-too-common experience is not driven by a designer who identified the wrong problem or didn’t have enough insights; it’s driven by a business-centric definition of success.

Typically, sites that post articles like “The 10 best vacation spots in [insert exotic location]” survive on advertising money. Each time a page on that site loads, it shows a set of ads. This is called an “impression.” The more impressions a site can generate, the more revenue it collects from the ads. For this kind of business, a key metric is often page views (i.e., how many times the individual pages on a site are loaded in a given timeframe). Every new page view means more ad impressions, which means more revenue. In the eyes of the business, the more page views the site can generate, the better.

This sort of definition of success will shade every part of the decision-making process and can push people to do things that run counter to what we would expect from a user-centered design process.

If a designer creates an article layout that shows all 10 of the Mexican vacation spots in one list on a single page, it would be easy for the reader, but it will only generate a single page view. If instead, the designer creates a layout where the user has to click “next” to see each item in the list, and each click results in a new page being loaded, then someone reading that article generates 10 page views (assuming they click to see all 10 items).

With this change, the designer has now multiplied the performance of their design. For the user, the resulting experience is shit, but for the company’s definition of success, the experience is excellent.
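A minimal sketch of that arithmetic, with made-up numbers, shows why the paginated layout is so tempting:

```python
# Hypothetical ad-revenue arithmetic for the two layouts described above.
# All figures are invented for illustration.

ADS_PER_PAGE = 5
REVENUE_PER_IMPRESSION = 0.002  # dollars; a made-up per-impression rate
READERS = 100_000

def revenue(page_views_per_reader):
    """Revenue scales linearly with page views, so pagination multiplies it."""
    impressions = READERS * page_views_per_reader * ADS_PER_PAGE
    return impressions * REVENUE_PER_IMPRESSION

print(f"Single page: ${revenue(1):,.0f}")   # all 10 items on one page -> $1,000
print(f"Paginated:   ${revenue(10):,.0f}")  # one item per click -> $10,000
```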

Increasing page views (or even “increasing revenue”) focuses solely on the outcomes and needs of the business. This kind of goal is a signal from the company that they don’t care what solution a team comes up with, as long as it moves the numbers. But forcing someone to load 10 pages in order to see all the items in a list is just one way to meet this measure of success. Another way to do it is to write better, more interesting articles with layouts that are more user-friendly and readable so that more people are willing to read and share.

Users have to carry an increasingly heavy burden of the decisions made to meet business-centric metrics.

One of these two design options is easier than the other. (Hint: It’s not the one where you write better content.) And when jobs and performance bonuses are on the line, easier is, well, easier, even if it means less than optimal outcomes for customers. A business-centric definition of success leaves both options on the table. And the onus is frequently on the team to determine the implications of their solutions, the company or team leader having effectively absolved themselves from establishing any guardrails or ethical guidance.

Being intentional about the way you word your definition of success can completely change the thought process for your team. In the example of our article site, what if, instead of saying we need to increase page views (or increase revenue), the company said we need to improve article quality? This shift completely flips the conversation, and we move from a business-centric thought process to a user-centered thought process. Unlike increasing page views or revenue, increasing article quality delivers actual value to the user, while still driving the same business outcome. Better articles mean more reads and more shares, which means more page views, which means more revenue.

Many of the same solutions that could be applied to “increase page views” can still be applied to this new, user-centric definition. You might even still measure this in page views, but by changing the words you use, you have actively taken certain solutions off the table and set the parameters for what is acceptable. When the goal is to improve article quality, an unreadable layout with 10 clicks to get to the end of the article no longer makes the cut.

For about seven years, I was head of product and UX for a streaming video subscription service similar to Netflix. As a monthly subscription service, our key business metric was user retention as measured by Lifetime Value (LTV). The more subscribers we could keep month over month, the more revenue we would generate from each customer, increasing their LTV. As such, “improve customer retention” became a driving mantra within the company.
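For context, a common back-of-the-envelope model for subscription LTV (a general formula, not our company’s exact accounting) divides monthly revenue per subscriber by monthly churn, which makes plain why retention became the mantra:

```python
# A generic back-of-the-envelope LTV model for a subscription business.
# Not the company's actual model; the figures are hypothetical.

def lifetime_value(monthly_price, monthly_churn_rate):
    """Expected revenue per subscriber.

    With 5% monthly churn, the average subscriber stays 1 / 0.05 = 20 months,
    so LTV = price * 20. Every point of churn you shave off raises LTV.
    """
    avg_lifetime_months = 1 / monthly_churn_rate
    return monthly_price * avg_lifetime_months

print(f"${lifetime_value(9.99, 0.05):.2f}")  # $199.80 at 5% churn
print(f"${lifetime_value(9.99, 0.04):.2f}")  # $249.75 at 4% churn
```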

As a team, we tried and suggested lots of different things to meet this definition of success. But one recurring suggestion really stuck out to me. Over the course of seven years, on at least four separate occasions, someone suggested that we remove the cancel button from our website and force customers to call our customer service line in order to cancel. Now, I’m happy to report that I was able to fight that back each time, and we never actually did it. But it kept coming up. And honestly, no one really even wanted to do it, not even the people who were suggesting it. It’s an objectively shitty thing to do. So why did we have to continually waste valuable time debating it?

The issue was that our definition of success—improve customer retention—was business-centric. Again, like the page view example, there are lots of ways you can improve subscriber retention. For a streaming video service you could:

  1. Add new features
  2. Improve existing features
  3. Add new content
  4. Make it harder for someone to cancel

All of those options fit when the goal is to improve customer retention. The business-centric definition leaves it to the team to make the determination. The metric itself doesn’t do any of the heavy lifting to narrow the options or provide ethical guardrails.

What if, instead, we had talked about it as “we need to improve customer satisfaction”? Again, this drives toward the same end state. A more satisfied subscriber will stay longer. But it changes the list of available options. Now a team could:

  1. Add new features
  2. Improve existing features
  3. Add new content

Making it harder for someone to cancel no longer makes the cut because it does not match the definition of success.

Talking about metrics in a user-centered way sends a clear signal about who and what is most important in what you are doing as a company. You draw an ethical line in the sand signifying that some solutions are off the table no matter how easy or viable they are. This kind of shift can have a ripple effect throughout the entire culture of a company.

Users have to carry an increasingly heavy burden of the decisions made to meet business-centric metrics. The addictive nature of social media, the impact of algorithmically driven echo chambers, the overly aggressive harvesting of our data—these are all propagated and sustained due to business-centric thinking and business-centric definitions of success. Making sure you are solving the right problem is important in a design process, but understanding the impact of the way you define success is even more critical. They say you can’t win the game unless you know how the score is being kept. As a company or a team leader, you have an opportunity to define your scorekeeping in a way that nudges your team toward better outcomes for everyone.

“How You Define Success Is Hurting Your Users” was originally published in Medium on December 9, 2019.

In our drive for speed, we have conditioned ourselves to ignore our most vulnerable users. We design for the happy path, and society pays the price.

The happy path

To create digital products, designers often start by developing a set of scenarios or use cases. These scenarios help determine the features, interactions, and technological infrastructure required in a product.

As an example, let’s think about Facebook. When Mark Zuckerberg was initially creating the social network he may have had a scenario like this in his head:

“An undergrad who wants to share pictures from a party with her friends.”

This is a straightforward statement, but even something as simple as this can help a designer conceptualize the kind of solution required. In the case of a digital product, they can start to imagine the screens that might be needed, the elements on those screens, and so on.

Scenarios come in two basic flavors: happy path and edge cases.

The happy path is a scenario where everything is perfectly aligned for the feature/product to work exactly as the designer intended:

“A benign undergrad goes to a party and takes some inoffensive pictures. She comes home to her computer [remember, this is early Facebook] with an excellent internet connection, logs in, and uploads her photos with no issues; they go into the database and are disseminated to her friends.”

This is a happy path as we think of it today. As Goldilocks might say, everything is just right.

Many designers start with the happy path because it’s the path of least resistance. It takes the least amount of effort to conceptualize because it removes many of the inconvenient complexities that might exist. That doesn’t necessarily mean it’s easy to design; it’s just comparatively simplified.

The second type of scenario is the edge case. Edge cases deviate from the happy path and, theoretically, they happen less frequently than the happy path. There are two types of edge cases.

The first is the technical edge case, where something goes wrong in the technical flow of the scenario. Maybe there is an error in the photo upload process and it never goes through. Or maybe a user inputs incorrect data in a form field. This is the kind of technical complexity a QA person can test for. Very often a design process will address these kinds of edge cases, or will at least address the major ones. Any decent designer or engineer knows it is important to handle errors and help the user recover from them.
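As a trivial illustration, handling a technical edge case might look something like this sketch (the upload function and messages are hypothetical):

```python
# Minimal sketch of handling a technical edge case: a photo upload fails and
# the user is helped to recover instead of being left hanging.
# `upload_photo` is a hypothetical function supplied by the caller.

import time

class UploadError(Exception):
    """Raised by the hypothetical upload function on a transient failure."""

def upload_with_retry(photo, upload_photo, max_attempts=3):
    for attempt in range(1, max_attempts + 1):
        try:
            return {"ok": True, "result": upload_photo(photo)}
        except UploadError:
            if attempt == max_attempts:
                # Surface a recoverable, human-readable error instead of
                # failing silently: the user needs to know nothing was posted.
                return {"ok": False,
                        "message": "Upload failed. Check your connection "
                                   "and try again; your photo was not posted."}
            time.sleep(2 ** attempt)  # simple backoff before retrying
```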

Then there are what I call contextual edge cases: behavioral deviations from the happy path. In our photo upload scenario, a contextual edge case might involve the user uploading a photo that is offensive or pornographic, or uploading a photo of someone else who doesn’t want that photo to live on the site. This kind of edge case can have very significant real-world implications. Unfortunately, these are also the edge cases that rarely get addressed in the design process.

The drive for speed

Today, success in the tech world is defined by speed, scale, and growth — how big can a company get and how fast can it get there. Facebook’s motto is “move fast and break things,” and product teams throughout the industry obsess over how quickly they can “ship features.” VCs even write books about how to run startups in hyperspeed, so you can validate (or invalidate) your idea as quickly as possible and waste the absolute minimum amount of people’s (read: VCs’) time. They call it “blitzscaling.”

The idea of moving quickly has become deeply ingrained in our culture of design, technology, and business.

One of the ways we achieve speed is by focusing on the happy path. Often a team’s strategy is to get the happy path done first as an MVP (minimum viable product) so they can quickly get it out to users before they put more effort into handling edge cases. The problem is that teams rarely come back to handle edge cases. Inevitably, new priorities come up and everyone moves on. What was once considered an MVP is now considered a final product.

Over time, this constant deprioritizing of edge cases conditions designers and engineers to just start ignoring them. Overloaded with work and impossible deadlines, they find it easier to just pretend edge cases don’t exist.

The impact of the happy path

A few weeks back, a startup called Superhuman released a new “read receipt” feature for their email client product. If I send you an email using Superhuman and you open it in whatever email client you use (Gmail, Yahoo, etc.), the read receipt feature sends me a notification telling me you opened it. Straightforward enough. But there were two twists with Superhuman’s implementation. First, the read receipt didn’t just tell me that you opened the message, it also gave me location data of where you were when you opened it. Yikes. Second, you, the recipient, had no way of opting out of the feature. Regardless of the settings in your email client, you would always send me a read receipt. Double yikes.
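For context on the mechanics: email read receipts are typically implemented with an embedded “tracking pixel,” a tiny remote image whose request tells the sender’s server that the message was opened, with a rough location inferred from the requester’s IP address. Here is a minimal sketch of that general pattern (not Superhuman’s actual code; the endpoint and helper names are hypothetical):

```python
# General tracking-pixel pattern (not Superhuman's implementation).
# The sender embeds <img src="https://example.com/pixel/<message_id>.gif">
# in the outgoing email; merely opening the message fetches the image.

import io
from flask import Flask, request, send_file

app = Flask(__name__)

# A transparent 1x1 GIF served as the "image."
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\xff\xff\xff\x00\x00\x00!"
         b"\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00"
         b"\x00\x02\x02D\x01\x00;")

def log_open(message_id, ip):
    """Hypothetical helper: record the open and an IP-derived rough location."""
    print(f"Message {message_id} opened from {ip}")

@app.route("/pixel/<message_id>.gif")
def pixel(message_id):
    # The request itself reveals the open; the IP can be run through a
    # geolocation database to approximate where it happened.
    log_open(message_id, request.remote_addr)
    return send_file(io.BytesIO(PIXEL), mimetype="image/gif")
```

Because the pixel loads whenever the message is rendered, the recipient has no way to opt out short of blocking remote images, which is exactly why the feature drew fire.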

This kind of feature has huge implications for victims of stalking, abuse, and so many other negative scenarios. Unsurprisingly, there was an outcry, and Superhuman modified the feature. But the feature should have never left the gate in the first place. When the controversy happened, Superhuman wrote a blog post and the CEO tweeted an apology:

“We did not imagine the potential for misuse.”

If this tweet is to be believed, it seems the idea that there could be deviations from the happy path didn’t even come up in the design process. It wasn’t even on their radar. Our drive for speed has conditioned us to design as if edge cases don’t exist. It’s not that we decide not to solve them; it’s that we don’t even imagine them. These are practices handed down across companies and design schools. Many of us are so well trained at this point that slowing down doesn’t even guarantee a better outcome; ignoring edge cases is subconsciously baked into our process.

We’re watching the cumulative impact of this play out on the web every day. Massive platforms like YouTube, Facebook, and Twitter were all architected with a best-case-scenario, happy-path mentality — a benign user sharing what they had for lunch or posting a video of their cat. The edge cases of abuse, harassment, and misinformation were all but ignored until they reached a scale where public scrutiny made it impossible to continue ignoring them, but by then it was too late. Addressing edge cases is not in the DNA of these companies. When you have spent 15 years fusing your business model to the happy path, your processes, organizational structures, and mentality are not geared to think beyond it. So these platforms are either slow to respond or completely incapable of it.

Happy path design is not human-centered, it is business-centered. It’s good for businesses because it allows them to move fast. But speed provides no benefit to the user. As companies push for scale and growth at breakneck pace they are systematically weaponizing technology against groups and use cases that fall outside of their defined happy path.

Who is in the happy path?

Part of the justification for happy path design is that edge cases are rare. In some cases, they might only affect 1% of a product’s users. Mike Monteiro points out the fallacy in this thinking in his book Ruined by Design:

Facebook claims to have two billion users…1% of two billion is twenty million. When you’re moving fast and breaking things 1% is well within the acceptable breaking point for rolling out new work. Yet it contains twenty million people. They have names. They have faces. Technology companies call these people edge cases, because they live at the margins. They are, by definition, the marginalized.

On top of this, the actual process of happy path design often involves having a default user persona who fits nicely into your complication-free use cases. This compounds the happy path problem because it means we are not only looking at a contrived view of the scenario itself but also at an artificially small slice of potential users.

After all, the happy path is free of risk and complication. By definition, the people with the least risk and complication are the least vulnerable users of a product.

Everyone else, as Monteiro pointed out, sits at the margins and is given almost no thought until after the damage is done and there is some kind of outcry.


More often than not, the humans who sit on the margins of our products are the same humans who sit on the margins of society.

When Superhuman was designing their read receipt feature, they weren’t designing it for people at risk of stalking and abuse (statistically, most likely women). They were designing it for their default user, who I would assume is some VC (statistically, most likely a man) sending off an urgent email to a founder (statistically, also most likely a man).

I’m making an assumption here — maybe they include women personas in their design process — but here is the real rub: Their personas are irrelevant. Despite what we say about having empathy in design, the default user is always ourselves. The idea of designer empathy is the biggest trick we’ve ever pulled on ourselves. Unless the person you are designing for shares your life experience, you cannot put yourself in their shoes in any meaningful way. Uncovering consumer insights is not the same as empathy, and human-centered design is not a magic shield against bias.

A quick perusal of the Superhuman website shows that their product and engineering team is 83% dudes. Maybe someone pushed back on the read receipt feature, maybe they didn’t. But it’s almost guaranteed that a dude made the final decision. By and large, dudes don’t walk around afraid of abuse or stalking. It is, by and large, not our life experience.

“We did not imagine the potential for misuse.”

Designing for speed has trained us to ignore edge cases, and the overwhelming prevalence of homogenous teams made up of the least vulnerable among us (read: dudes) has conditioned us to center their life experience in our design process.

The canary in the coal mine

Miners used to take canaries with them into the coal mine. The idea was that the canaries were more vulnerable to the harmful gases that can build up in a mine. If the canary was fine, everyone knew things were safe. If something happened to the canary, it was a sign for everyone to get out.

This is a robust system. If you design for the well-being of the most vulnerable, you design for the well-being of everyone. We don’t design like that today. Today we design for the least vulnerable and then pretend nothing bad ever happens in a coal mine.

The breadth of the scenarios we consider determines how resilient our products are to deviations in the intended behavior. Today we are building massive platforms, with massive reach and impact, yet they are massively fragile. If we are honest with ourselves, these platforms represent a failure of design. Their success hinges on an intentional disregard for human complexity, and society pays the price for it.

The real happy path is not the path of least resistance; it’s the path of most resilience.

We have to redefine what a happy path is and relearn how to embrace complexity. In our Facebook photo-sharing example, what if our initial scenario looked something like this instead:

“A guy shares a compromising photo of a woman with his friends, and the woman is able to remove it from the site.”

This is what a happy path should be. It gets us to the same place as the original statement, and we still have to design and build the interactions required to let that guy share his photo. But it also does something crucial: It centers the most vulnerable user over the least vulnerable. It bakes the idea of misuse and negative outcomes into the core of our thought process and fuses it into the DNA of the organization.
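To see what that shift can mean in practice, here is a minimal, hypothetical sketch in Python, with invented names, of a first version of the photo data model where removal by the person depicted is a first-class operation rather than a bolted-on afterthought:

    # Hypothetical sketch: misuse is assumed from v1, so anyone depicted
    # in a photo can request its removal, and the system defaults to
    # taking the photo down pending review.
    from dataclasses import dataclass, field

    @dataclass
    class Photo:
        photo_id: str
        uploader_id: str
        removed: bool = False
        removal_requests: list = field(default_factory=list)

    def request_removal(photo: Photo, requester_id: str, reason: str) -> None:
        """Anyone, not just the uploader, can ask for a photo to come down."""
        photo.removal_requests.append((requester_id, reason))
        # Bias the default toward the more vulnerable party: hide the
        # photo immediately and let review restore it if appropriate.
        photo.removed = True

Whether the default should be immediate removal or a review queue is a real design decision; the sketch just shows the capability existing from day one.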

A lot of entrepreneurs talk about “first principles” as a way to identify their assumptions and guide their product development. While it may never have been explicitly stated, the inherent bias in our product development approach has meant that our underlying first principle has been to ignore human complexity. Redefining the happy path means establishing resiliency as our underlying first principle and moving the vulnerable to the center of our thinking. It forces us to embrace complexity and understand that when we design for the vulnerable, we design for everyone. No more half solutions.


Would this approach slow teams down? Maybe a bit, but we are not talking about creating “perfect solutions,” just slightly more robust ones that center someone other than your average white guy. I would also argue that if the success or failure of your company is determined by a few extra days/weeks of development time, there are bigger problems going on.

We will never be able to come up with scenarios for every possible edge case; it’s impossible and that’s not what I’m suggesting. We also don’t need to. By starting with even one, we fundamentally change the foundation of our thought process. This kind of structural shift can enable individuals and organizations to cultivate the competencies and capabilities required to not only flag potential future issues, but to bring real solutions to the table when they inevitably emerge. That’s a happier path for everyone.

“Edge Cases Are Real and They’re Hurting Your Users” was originally published on Medium on September 4, 2019.

In his book Thinking, Fast and Slow, Nobel Prize-winning psychologist Daniel Kahneman discusses the psychological phenomenon of loss aversion, which he, along with Amos Tversky, first identified back in 1979. At its core, loss aversion refers to the tendency of the human brain to react more strongly to losses than it does to gains. Or, as Wikipedia puts it, people “prefer avoiding losses to acquiring equivalent gains: it is better to not lose $5 than to find $5.” This phenomenon is so ingrained in our psyche that some studies suggest that losses are twice as powerful, psychologically, as gains.
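Prospect theory gives this asymmetry a precise shape through its value function. As a rough sketch in Python, using the median parameter estimates from Tversky and Kahneman’s 1992 follow-up study (alpha around 0.88, lambda around 2.25):

    # Prospect theory's value function: losses loom larger than gains.
    # Parameters are the commonly cited medians from Tversky and
    # Kahneman's 1992 study; they vary from person to person.
    def subjective_value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
        """Perceived value of a gain (x > 0) or loss (x < 0)."""
        if x >= 0:
            return x ** alpha
        return -lam * ((-x) ** alpha)

    print(subjective_value(5))   # finding $5 feels like roughly +4.1
    print(subjective_value(-5))  # losing $5 feels like roughly -9.3

A lambda a bit above 2 is exactly where the “losses are twice as powerful as gains” figure comes from.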

In his book, Kahneman describes a study of professional golfers. The goal of the study was to see if their concentration and focus was greater on par putts (where failure would mean losing a stroke) or on birdie putts (where success would mean gaining a stroke). In an analysis of 2.5 million putts, the study found that regardless of the putt difficulty, pro golfers were more successful on par putts, the putts that avoided a loss, than they were on birdie putts where they had a potential gain. The subconscious aversion to loss pushed them to greater focus.

If loss aversion is powerful enough to influence the outcome of a professional golfer’s putts, where else could it be shaping our focus and decisions?

Loss, Gain, and Iterative Product Development

Iterative product development is a process designed to help teams “ship” (get a product in front of customers) as quickly as possible by actively reducing the initial complexity of features and functionality. This is valuable because it gets the product into the hands of users sooner, allowing the team to quickly validate whether they’ve built the right thing. This makes it less risky to try something new. The alternative process, waterfall, asked teams to build in all the complexity upfront and only then put the product in front of customers, a much riskier and potentially costlier proposition.

Iterative product development achieves its speed through a Minimum Viable Product (MVP) approach. MVP means taking the possible feature set that could be included in a product, or the possible functionality a specific feature could deliver, and cutting it down to the minimum needed to bring value to the end user. As a simplified example, imagine you are designing the first music streaming app (like Spotify). It could have lots of potential features beyond just streaming music. Things like playlists, search, recommendations, following artists, sharing, offline mode, dark mode, user profiles and so on. Building all of that would take a lot of time and effort. So an MVP streaming app might just have music streaming and search. The goal is to build something quickly that can validate if users even want to stream music in the first place before you go invest in all those other features.
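In sketch form, the MVP cut is just this (the feature names come from the hypothetical streaming example above; the “minimum” set is whatever tests the core hypothesis):

    # Sketch of the MVP cut for the hypothetical streaming app above.
    # Everything the product could eventually be...
    candidate_features = [
        "streaming", "search", "playlists", "recommendations",
        "following artists", "sharing", "offline mode", "dark mode", "profiles",
    ]

    # ...cut down to the minimum needed to test the core hypothesis:
    # do people even want to stream music?
    mvp = {"streaming", "search"}
    deferred = [f for f in candidate_features if f not in mvp]

    print(f"Shipping {sorted(mvp)}, deferring {len(deferred)} features.")

Notice how much of the imagined product ends up on the deferred list; that gap is where the feeling of loss described below comes from.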

Once the MVP of a product is live, a team can then quickly assess if it is successful or not, and with minimum time invested, can move rapidly to build on the initial functionality.

It is this step of the process where things can start to go sideways.

The problem starts with the concept of an MVP. We aren’t geared to think in terms of MVP. In fact, our mind takes the opposite approach. When we get excited about an idea our brain goes wild with all the possibilities (see our list of music streaming features above). We imagine all the possible value a product could deliver and then we have to lose a significant portion of that value by cutting it down to the bare minimum. It’s never easy. The unintended psychological consequence of this process is that we walk into the first version of our product with a feeling of loss. Even if our MVP is successful, that feeling sticks in our brain.

Weakness-based Product Development

The MVP process primes us to want to regain the value we believe we’ve lost. As soon as the product is live, we fall into a weakness-based, additive strategy, where we are compelled to add new functionality in order to win back our lost value (real or imagined).

This weakness-based mindset gets further reinforced when we start analyzing data and feedback. Because loss aversion causes us to focus on losses more than gains, we are more likely to gloss over positive signals and areas of strength and focus instead on the areas of the product that “aren’t working.”

Think about the amount of effort you put into understanding why something is not working versus the effort you apply to understanding why something is working. It is rare to hear someone say “how do we double down on this feature that’s working?” Instead, we strive to deliver value by fixing what we perceive to be broken or missing.

In the worst case, we even subconsciously go looking for signals that corroborate our underlying feelings of loss.

To go back to our music streaming app: if you believed that playlists were a critical feature but they were cut from the MVP, you are primed to put a higher weight on any feedback where a user complains about not having playlists, because it validates your own sense of lost value, even if that feedback goes against the other signals you are receiving.

We focus on areas of weakness because they represent potentially lost value, but weakness-based product development is like swimming upstream. Areas of strength are signals from your users about where they see value in your product. By focusing instead on areas of weakness, we are effectively ignoring those signals, often working against existing behavior in an effort to “improve engagement” by forcing some new value. This is why many product updates only garner incremental improvement. Swimming upstream is hard.

Strengths-based Product Development

Strengths-based product development means leveraging the existing behavior of your users to maximize the value they get from your product. It’s about capitalizing on momentum, instead of trying to create it.

Instagram is a solid example of a strengths-based development approach. For starters, they have kept their feature set very limited for a long time. Especially early on, they did not focus on building new things but instead focused on embracing existing value. They prioritized things like new image filters and editing capabilities, faster image upload processing, and multi-photo posts. Instagram knows that the strength of its product is in sharing photos from your smartphone. They didn’t spend a ton of time enhancing comments or threads. They’ve made minimal changes to their “heart” functionality for liking posts. They never built out a meaningful web application. When they did create significant new functionality they often made it standalone, like Boomerang and Layout, as opposed to wedging it into the core experience.

Arguably the biggest change they’ve made over the years was the addition of stories. However, even that feature, while copied from Snapchat, was still an extension of their core photo sharing behavior. And, ultimately, stories increased the value of feed-based photo sharing on Instagram as well. Before stories, all your daily selfies, food shots, workout updates, and so on went into your feed. Now, much of that lower quality posting goes into stories, and feed posts are reserved for higher quality photos, creating an enhanced feed experience.

In contrast, take an example from my previous job. I was head of product for a streaming video service for almost seven years. As a subscription-based service, our bread and butter was premium video. However, many competitors in our space focused on written content, which we did not have. As an organization, we saw this weakness as a potential value loss and prioritized implementing an article strategy.

Written content did not enhance our core user behavior, but we built up justifications for the ways that it could. This is actually a key symptom of weakness-based product development. When something enhances your core strength, its value is obvious. If you find yourself needing to build a justification, it’s a sign you could be on the wrong track.

Articles never gained significant traction with our paying subscribers. They did, however, drive a high level of traffic from prospective customers via platforms like Facebook. But the conversion rate for that traffic was extremely low. The gap between reading an article and paying a monthly subscription for premium videos was just too big of a leap. We were swimming upstream in an attempt to fill in perceived holes, but never really enhancing our core value.

On the flip side, we also developed a feature that allowed subscribers to share free views of premium videos with their friends, capitalizing on our core strength and an existing behavior (sharing). Like articles, this drove organic traffic, but it also had a significantly higher conversion rate: the effect of swimming with the current.

Shifting Your Mindset

The good news is that if you find yourself in a weakness-based mindset, there are a few straightforward things you can do to break out.

  1. Analyze what works
    When you see areas of strength, don’t just give yourself a pat on the back and move on; make those areas the key focus of your next iteration. Be the one to ask: why is this working and how can we accelerate it? Stop chasing new value. You are already delivering value. Build on that.
  2. Move from addition to subtraction
    When you look at metrics, stop treating weak performance as something to be improved. Instead, look at it first as an opportunity to simplify. Instead of immediately asking how to make it better, make the first question: is this something we should get rid of completely?
    This is especially powerful in existing products. If you’ve been practicing weakness-based development, you potentially have a bloated, underused feature set that’s dragging down your overall experience. What if every third or fourth development cycle you didn’t build anything new and instead focused on what you were going to get rid of? How quickly would that streamline your product and bring you back to your core strengths?
  3. Understand your strengths
    Do you know what is valuable in your product? You have to be able to answer that question if you want to step into a strengths-based mindset. If you’re not sure about the answer, that’s ok; you can start with this simple matrix (roughly sketched in code after this list).


    Plot your features in the matrix. Features in the upper right quadrant represent your core value. How many of your product cycles in the last three months have focused on the elements in the upper right? If the majority of your work is not happening there, then there is a good chance you are practicing weakness-based product development.
    If you are doing any work in the lower left quadrant, you are wasting your time. Don’t waste cycles propping up weak features. Kill those features, move on, and don’t fear the fallout. We get worried about upsetting users who have adopted features that aren’t actually driving our success (there’s that loss aversion again :)). It’s ok. Users will readjust, and yes, some might leave. But if you are clear on your product’s strengths and focus your efforts there, the value you gain will more than make up for anything you lose by cutting the things that are holding you back.
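Since the matrix image isn’t reproduced here and the text doesn’t name its axes, here is a rough Python sketch of the quadrant exercise under one common framing: score each feature on how heavily it is used and how much value it drives. The axes, thresholds, and scores are all assumptions for illustration:

    # Sketch of the quadrant exercise. Axes are assumed: usage (x) and
    # value delivered (y), each scored 0-1; the 0.5 threshold is arbitrary.
    def quadrant(usage: float, value: float, threshold: float = 0.5) -> str:
        if usage >= threshold and value >= threshold:
            return "core value: double down"   # upper right
        if usage < threshold and value < threshold:
            return "kill candidate"            # lower left
        return "investigate"                   # off-diagonal

    features = {"video streaming": (0.9, 0.95), "articles": (0.2, 0.15)}
    for name, (usage, value) in features.items():
        print(f"{name}: {quadrant(usage, value)}")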

“The hidden bias in iterative product development” was originally published on Medium on June 19, 2019.

In March of 2008, salmonella infiltrated the public water system of the town of Alamosa in southern Colorado. The resulting disease outbreak infected an estimated 1,300 people, over 14% of the town’s population. Of those infected, one person died and 20 more were hospitalized. The Alamosa outbreak was the second-largest water-borne illness outbreak in the United States that decade.

Though teams worked as quickly as they could to sanitize the water system, the people of Alamosa were unable to use the water for weeks. While some people were able to get water from friends who had wells or were otherwise not connected to the system, most had no options. Without undertaking the major operation to deliver and distribute bottled water for drinking and cooking, it would have been extremely difficult for the town to weather the outbreak.

Before I became a designer, I spent five years as a public health emergency planner. I planned for and responded to disease outbreaks, tornados, and pandemics. When the Alamosa outbreak occurred, I was part of a nine-person team dispatched to help manage the first week of the incident. What played out in Alamosa was not unique. The story is a microcosm of a larger issue that carries through much of what we humans create.

When we design systems and products, we do it based on a set of defined scenarios, or use cases. These scenarios help us define how the thing we’re designing will be used, so we can determine the required features, interactions, materials, capacity, and so on. The scope of those scenarios is a key determinant of how tolerant our design will be to changes in the environment and user behavior.

The water system in Alamosa was built on a deep well aquifer, a water source thought to be protected from contamination because of its depth. The system was designed to operate as a closed system devoid of harmful pathogens, and in 1974 Alamosa was granted a waiver to not include chlorination as a disinfecting step in the water treatment process. When the scenario changed and the closed system was breached, the design lacked the resilience to handle it; a chlorination step would have all but eliminated the risk. Alamosa has since added chlorination to its system, but only after the city had to reckon with the fallout of a fragile design.

Designing for the happy path

We don’t like to think about worst-case scenarios. Hell, we don’t even like to think about not-so-great-case scenarios. Instead, we design and build systems and products that work when conditions are just right. In design, this is sometimes referred to as the “happy path.” We design for the happy path first and then, if time allows, we go back later to look at other not-so-happy paths, or “edge cases.” But in a world of “move fast and break things,” time rarely allows for us to go back and look at the edge cases. If we do get time, we address those edge cases as an afterthought.

The impact of happy-path design isn’t limited to major infrastructure: We’re watching it play out every day. The inability of sites like YouTube, Twitter, and Facebook to tackle fake news and curb rampant harassment is a direct result of happy-path thinking. Those systems were conceived and architected with the best-case scenario in mind — a benign user posting about what they ate for lunch or sharing videos of their cat. These platforms have little to no resilience in the face of behaviors that diverge from that scenario, which means that if and when the system breaks down and people use it with ill intentions, these companies will be slow to respond or possibly incapable of recovering, as they try to react to a circumstance they’ve hardly considered.

Their goal is to move fast, and addressing edge cases is not fast. This isn’t limited to social media companies. In many industries, companies fail to prioritize edge cases, and this results in fragile products that struggle to recover when things go wrong.

The problem is that in the real world, things always go wrong. Reality is not a best-case scenario; it’s chaotic and messy, and we are moving toward a time in history that has the potential to become more and more so.

The next century could deliver an unprecedented level of change to our existence on earth. We’ve lost 50% of the world’s biodiversity in the last 40 years. We’re seeing increases in many forms of extreme weather. The climate is warming and all manner of things are changing.

We can clutch our pearls all day and say that climate change is a natural process and it’s not our fault, but that point is irrelevant. Man-made or not, change is already happening, and we have not designed a world that can handle it. So far we’ve proven ourselves unwilling to even try to stop it, so our ability to survive and thrive in the 21st century may be predicated more on our level of resilience in the face of it. This means we have to change the way we think about everything we create.

Designing for resilience

Designing for resilience isn’t just about helping people prepare for disasters; it’s about shifting our cultural mindset to change some of the ways we think about business strategy and collaboration, and this is not limited to those designing major infrastructure. Shifting culture means applying these concepts across the spectrum of design disciplines and beyond, weaving them into everything from seemingly minor product-level decisions, to major policies and systems on a societal level.

Here are some places to start.

Designing for the edge cases

By “moving fast” and ignoring edge cases, we are systematically weaponizing technology against groups and use cases that fall outside of our defined happy path. Facial recognition software, for example, still has trouble recognizing black faces. And that’s just a start — I mean, we barely take the time to design for people who are color blind. Across the web, algorithms and interfaces packed with bias and designed on best-case thinking sit like landmines just waiting to be stepped on.

In an effort to be agile and keep up with the speed of our broken tech culture, we are failing to do our due diligence as designers, and the consequences of that failure are growing. Resilient products that can successfully operate outside of the best-case scenario are less fragile, better-designed products. By committing to consider a wider set of scenarios, we make our solutions more robust and more valuable. As Cathy O’Neil puts it in her book, Weapons of Math Destruction:

To create a model, then, we make choices about what’s important enough to include, simplifying the world into a toy version that can be easily understood and from which we can infer important facts and actions. We expect it to handle only one job and accept that it will occasionally act like a clueless machine, one with enormous blind spots.

We can’t continue to willingly operate with enormous blind spots. The first step toward building a more resilient world is to dramatically widen our design process to account for the unhappy paths.

Future-focused design

When we design something new, most often we design it for the world of today: today’s population, today’s market demand, today’s resource availability, today’s climate. We talk to users about their current challenges, and we look at recent data to decide what we should do next. Being present-focused is an outgrowth of happy-path design. Generating projections of future needs and conditions can be time-consuming, and decisions based on projections can be harder to justify. Building on current conditions is much faster and more straightforward. The problem is that today is not tomorrow, and designing for current conditions makes our solutions less resilient to future changes. This is especially critical when we think about the impacts of climate change.

Take road construction. Our current decision-making process for road construction is still based on historical weather data from the last century. Certain grades of asphalt are made to withstand certain temperatures, and many new roads are being built to withstand cold temperatures that are increasingly rare, instead of being built to withstand the higher temperatures that are becoming the norm. The result is more road damage, unnecessary maintenance costs in the billions of dollars, and even full-on melting. We also see this in the design of plastic products, many of which are not built to tolerate higher temperatures. Items like plastic mailboxes and trash cans have melted in the face of heat waves in Arizona.

We’ve become better than ever at forecasting the future, especially in climatology, but we haven’t shifted our decision-making processes to respond to those forecasts. We need to take a longer view of what we are creating: Based on what we know about the future, how will our solution work when conditions change?

Distributed systems and interoperability

Our model of business today is monopolization. We strive to dominate markets by pushing out the competition and maximizing market share. The drive for monopoly motivates companies to try to build “competitive moats” around their businesses, in the form of centralization or proprietary products. While this has the potential to drive financial success for companies, monopolization creates systemic fragility.

Proprietary products

I have a drawer full of old iPhone charger cables. I haven’t thrown them away yet, but they’re useless because they don’t work with my new phone. I also have a box of random chargers and cables that belong to a long list of devices that I may or may not even own anymore. These have also become useless. Then there’s my drawer of dongles. If you’ve ever had to connect your computer to a TV or some other external device, you’ve inevitably had to play the dongle game, especially if you have a Mac. Apple dongles were the top-selling Apple product at Best Buy stores from 2016 to 2018.

While some of this detritus is created by the inevitable evolution of technology, much of it is the result of the competitive moats of proprietary tech. We’ve made it incredibly difficult for our various technologies to connect and communicate, and that’s a symptom of a fragile system.

Being able to swap parts and pieces, share cables, and communicate across devices makes it much easier for a person to handle scenarios that don’t follow the happy path, like when they forget their charger or have to present unexpectedly at a meeting or need to share a file between devices. Interoperability creates a resilient system and allows for easy recovery from suboptimal scenarios.

To achieve interoperability, we have to move away from leveraging components for proprietary value and instead focus on designing for and adopting open technology standards. This can feel like a business risk, but it actually has the potential to create significantly more value than any competitive moat could. You are reading this article right now using an accepted set of open technology standards that drive the internet. This standardization has created a level of interoperability that has unlocked more value than almost any other technology in human history and made the web one of our most resilient systems. Where would we be if every company had its own proprietary web?

Centralization

As businesses grow, they work to align products, systems, and strategy to eliminate (or acquire) competition and centralize the market around their offerings. This drives value, but it also creates significant failure points. Food production is an excellent example.

As we have moved away from distributed, local food production, we have centralized the system around a shrinking number of multinational companies who source, produce, and distribute most of the things we eat. This system has allowed us to achieve nearly ubiquitous availability of food products — you can find the same foods everywhere at almost any time of the year. But this system is also rife with fragility.

The flow of food is so dependent on centralized distribution networks that changing conditions at any point in the system can have major ripple effects across the globe, and increasingly, cities and towns have little to no ability to compensate. Most grocery stores only stock enough food for a week of demand. If production or distribution is disrupted, a town or city can quickly run out. This is why we have to distribute emergency food supplies during disasters.

Food is just one example. This sort of centralization exists in fuel distribution, power generation, telecommunications, and so on. In a world where changes to the climate have both short- and long-term potential to dramatically impact these and other systems, we have to start questioning the validity of the monopoly model. How can we build businesses and design systems that drive economic value but do so in a robust way that builds resilience and can cope with change?

Breaking away from fragile design requires a shift in thinking. It means spending more time considering less-than-optimal scenarios and putting in the effort to address them. If we do this, we’ll create more resilient, accessible, and ultimately more valuable design solutions. In a world where the only constant is change, we’re selling ourselves short by staying on the happy path.

“Resilience Is the Design Imperative of the 21st Century” was originally published on Medium on May 15, 2019.

“Once you hit the first intersection in town, make a right onto Main Street. Go past the fire station and over the railroad tracks. Make your second right after the tracks. If you see a large metal building with a truck parked in overgrown grass, turn around; you’ve gone too far…”

These are part of the directions to the house I grew up in. For decades, these directions, in one form or another, would be dictated to people planning to visit for the first time. Once I could drive, I would reference equivalent narratives, hastily scribbled down on paper, as I made my way to somewhere new.

This was a deeply inefficient process. It required an upfront conversation to get the directions, followed by significant effort and consternation to decipher them en route. If you were lucky enough to have a “navigator” in the passenger seat next to you, that opened up a whole separate level of coordination. “Wait, was that the big tree with the ‘Y’ in it? Wasn’t that our turn? You were supposed to be watching for the ‘Y’ tree!”

But while inefficient, this process represented something profoundly valuable: awareness and connection.

In order to rattle off a narrative like that, tethered to a landline phone in your kitchen, you had to maintain a detailed map of the area in your head. You and your visitor had to have a shared awareness and contextual understanding of major landmarks and geography, allowing you to shortcut details: “Can you get yourself to Interstate 70? Yes? Great, get to I-70 and head west…”

To follow the directions, you had to remain acutely aware of your surroundings throughout the journey. Getting lost and turned around was a common occurrence. But every wrong turn and missed landmark represented new learning and discovery, a chance to expand your own internal map and build your resilience as you were able to troubleshoot and get back on track.

This process, of course, rarely happens today. Now all we have to do is text our address to someone and Google Maps does the rest. It’s an exponential boost in efficiency, but a significant erosion of capability and connection. We don’t need to be aware of our surroundings at all anymore. We can just wait for the voice from our phone to tell us where to turn. If we happen to make a wrong turn, it is immediately corrected. We don’t have to figure anything out.

Some believe that over time this kind of reliance will degrade our broader cognitive function. It’s unclear if that is true, but what we can say is that it does diminish our skills and our level of self-sufficiency.

If you regularly use navigation, think about your own mental map of your town or city. How far out can you go before your map starts to fade? Can you name the streets and key landmarks directly surrounding your house? Those a few blocks away? How easily could you give someone narrative directions to your home from miles away? How different is your internal map today than it was five or 10 years ago?

For those who have offloaded navigation to their phones, these questions can start to prove challenging. As our awareness fades, GPS-enabled navigation becomes something we “can’t live without.”

The reliance economy

Today, much of our existence centers on the attention economy, where our focus and time are mined, and the resulting data is manipulated and sold as a commodity in service of driving advertising revenue and feeding algorithms. We’re becoming painfully aware of the downsides of this arrangement as services architect themselves to put us in a perpetual “can’t look away” state. But as detrimental as the attention economy is, it’s just a temporary stop on our way to a very different destination.

“Can’t look away” was never the ultimate goal. The ultimate goal has always been “can’t live without,” and that is a very different animal.

In 2016, researchers conducted a study to test the effects of the internet on human memory. Participants were divided into two groups and asked a series of challenging trivia questions. One group was allowed to use the internet to answer the questions, while the other group could only use their memory. Afterward, both groups were asked a second set of trivia questions. This time, the questions were easy and both groups were allowed to answer them using any method they chose. What the researchers found was that the group who had already used the internet for hard answers was significantly more likely to use it to find easy answers as well. In fact, 30% of the internet group did not even try to answer any simple questions from memory.

As Benjamin Storm, PhD, lead author of the study, put it:

“Memory is changing. Our research shows that as we use the internet to support and extend our memory we become more reliant on it. Whereas before we might have tried to recall something on our own, now we don’t bother. As more information becomes available via smartphones and other devices, we become progressively more reliant on it in our daily lives.”

As technology advances, we have begun offloading more and more of our cognitive functions and skills to our devices. This isn’t altogether surprising — cognitive offloading is a strategy we use in other areas of life as well.

In close relationships, like a marriage, ownership of life tasks is frequently split between the couple, with one person taking responsibility for each area, like paying bills, cooking, or managing car maintenance. This allows us to gain efficiency, but it can also have significant detrimental effects. If a spouse dies suddenly or a couple gets divorced, our cognitive support is stripped away and we are left to relearn skills or find outside help with things for which we may have had no responsibility over the years. Yet despite the risks, in this case, reliance has an evolutionary benefit in helping us maintain long-term relationships by forging tighter bonds through shared dependence in a (hopefully) mutually beneficial exchange.

This isn’t the case with our reliance on digital devices. With technology, we aren’t becoming dependent on another person — we’re becoming dependent on corporations. Even if our relationship with a company is mutually beneficial in the short term, the chances of a long-term mutually beneficial relationship are almost nonexistent. We don’t build companies to create long-term relationships. We build companies to drive profits, and that leaves us vulnerable.

Despite the way we position it, technology is no longer a tool to solve problems; it has been twisted into a tool to grow profits. Capitalism isn’t geared to solve problems. If a company truly solved a problem, it would put itself out of business. Instead, the system is geared to keep consumers perpetually in need. For every solution we create, we have to manufacture a new problem. One way we do this is through planned obsolescence: intentionally designing things to have a short lifespan, which drives frequent upgrade cycles. This is the reason Apple releases a new iPhone every year and stops supporting older versions. This is also the reason that the average appliance now lasts about eight years, and why the fashion industry pushes seasonal trends.

But planned obsolescence isn’t the most powerful problem a company can generate. The most powerful problem a company can create is the “I can’t live without it” problem. If a product replaces a human skill, we become reliant on it, and making us reliant is the ultimate long-term growth strategy. Monopolization isn’t just about pushing out the competition. It’s about monopolizing human capability.

But while this process drives business and economic growth, it degrades our resilience at an individual and societal level. This creates a compounding fragility at the base of our societal structures and we become increasingly vulnerable to catastrophic events. This is a self-perpetuating downward spiral where, as our resiliency continues to diminish, it takes less and less for an event to be catastrophic. Just like a marriage, when our partner goes away or the situation changes, we’re left holding the bag. We’re becoming progressively less capable of handling those changes.

Our true reliance on technology is still emerging. We’ve only just reached generations who have never seen a world without it. But navigation and Googling for trivia answers are only the tip of the iceberg. The expanding capabilities of artificial intelligence (A.I.) are going to dramatically accelerate the number of tasks we fully offload to devices. Self-driving cars. A.I. personal assistants. Scheduling. Communication. Writing. Purchasing. Courtship. Designing. Coding. Solving mathematical problems. Art. Music. Problem-solving. It’s all up for grabs.

As Marvel’s Dr. Strange put it, “We’re in the endgame now.”

Some take comfort in the fact that A.I. is still relatively “dumb,” with the sense that it is not a concern until it becomes smarter than us. But what they’re missing is that this isn’t a race to create a superintelligence — this is a race to replace human skills and build the next “can’t live without it” monopoly. In that race, A.I. doesn’t need to become better than us. We just need to become dumber than it. As smart devices subsume more of our capabilities, we will gain efficiency, but we will lock ourselves into a dependent relationship.

This isn’t the first time we’ve headed down this path. We’ve taken a lot of potentially empowering products and contorted them into tools for reliance.

Take the car for example. The car is a massive enhancement to our ability to travel and is incredibly empowering, but in our drive for monopolization, we’ve stripped that empowerment away by developing a system that locks us into car-based travel. We’ve planned and built our entire environment around vehicles, to the point that it is nearly impossible to live without access to motorized transportation. We now have an entire segment of mobility companies trying to untangle those decisions. Additionally, over time we’ve made cars more and more complex to fix and maintain, creating a reliance on an intricate system of mechanics and dealers in order to use them. Finally, we’ve created an unnecessary upgrade cycle through a combination of marketing and mediocre craftsmanship. We’ve taken what was an empowering technology and imprisoned ourselves in it.

We don’t have to continue down this road. Technology is not something that happens to us — it’s something we choose to create. We have the ability to make different choices in the way we design and build our products and the way we incentivize our companies. Despite what we’ve been told, we can build for empowerment instead of reliance, and still create profitable businesses. The next decade will see a dramatic expansion of our digital capabilities. It’s time for us to start thinking critically about the choices we make and the things we decide to build.

“The Dawn of the Reliance Economy” was originally published on Medium on April 26, 2019.


What is storytelling? More and more, in design, we’re told that we need to be great storytellers. But what does storytelling actually mean, and more importantly what does it mean in the context of design?

Webster has a few definitions for story: “an account of incidents or events”, “a statement regarding the facts pertinent to a situation in question”, or “a fictional narrative shorter than a novel.”

For many, when the term “storytelling” is thrown around, it’s that last definition that frames its meaning, “a fictional narrative shorter than a novel.” But that definition carries with it a significant amount of baggage and deeper cultural meaning, which I believe drives anxiety and confusion about what storytelling in design is actually supposed to be.

A good narrative, fictional or not, has a set structure and key elements that carry a person through it. This includes an exposition (beginning), conflict, rising action, climax and denouement (ending), as well as characters, settings, plot points, etc. With the pressure for all of us to be storytellers, we find ourselves contorting our processes and artifacts to fit each of those elements. Personas become characters, flows take on narrative structures and so on. In many cases, the comparisons are loose at best and the contortions quickly feel like an overreach. Further, if the majority of the story structure we are creating is filled by the steps in our design process then whatever story we are telling is a story we are telling ourselves, not a story we are telling our customers (unless you publish your personas and user journeys as part of your product releases).

However, this doesn’t mean that we aren’t storytellers. But, foundationally, I think we are zeroing in on the wrong aspect of what makes a good story. A narrative structure is just a mechanism, the how; it is not the end result. What we are really talking about when we say we need to be great storytellers is that we need to be able to elicit emotion. The end result of a great story is that a person feels an emotion, but emotion can also be felt without telling a story.

With this in mind, there are two key areas where emotion can play a critical role in delivering great design.

I. Emotion in the things we create

We want people to love our products and enjoy their experience. These goals are actually another reason the narrative metaphor breaks down. A truly compelling narrative creates tension, often making the person experiencing it feel uncomfortable, sad, or concerned. These are not typically feelings you want your product to elicit. If we are actually telling stories in our products, then almost all of them are going to be boring, contrived fluff pieces with no real conflict. Luckily we don’t have to worry about that, because we are not telling stories. We’re just trying to make people feel good.

To accomplish this, we need to create an emotional narrative, which is very different from a literal narrative. An emotional narrative is about tone, not story arc. Copy, visuals, and interactions can all connect to convey and elicit positive emotion throughout an experience, with no tension, no conflict, no characters, and no plot points.

Ultimately, your product may become part of a person’s story, as in “I lost my keys last night and I had to call an Uber to get home.” In this case, Uber is a plot point in the story, but it is not the story. As a product designer, your goal is not to write that narrative, your goal is to ensure that when your product becomes part of that narrative it plays a positive role.

A note on process: I’ve seen a number of discussions where the design process is framed as storytelling, using user scenarios, personas, etc., as narrative elements, similar to the Uber example above: developing scenarios and use cases that place the product into a person’s story. While there can be validity to this approach, it has to be done carefully.

Developing UX deliverables like personas, user journeys and scenarios is already a deeply fraught process. It is so easy to slip into biased, best-case scenario thinking. This can result in artifacts that conveniently portray target customers and use-cases that perfectly validate the vision you already had for the product, but which might not actually exist in reality.

The further you push yourself into “storytelling” mode the more likely you are to end up in the realm of fiction because you feel compelled to create a narrative. Further, it’s also easy, when taking this approach, to build the narrative around the product, as opposed to building it around the person. It’s a fine line and sometimes this happens unconsciously. When this happens, it can cause you to overestimate the role a product will play in a person’s life, building false justification for extra features and creating unrealistic expectations for engagement.

I have found that it can be more beneficial to strip a lot of narrative storytelling out of UX deliverables. It helps keep them objective and focuses the team on the critical insights and information. Those insights can sometimes get lost when you add a bunch of extra, non-critical details for the sake of building a story. So many deliverables are full of useless data points designed to make the persona or scenario “feel real”, but that do nothing to actually help make design decisions. These are just distractions.

II. Emotion in the way you present your work

The skills required to effectively present and “sell” your work are arguably the most important skills you can build as a designer.

In their book, The Leadership Challenge, authors Kouzes and Posner lay out five Practices of Exemplary Leadership. One of the most critical (and challenging) is to “Inspire a Shared Vision.” This is about envisioning the future and enlisting others into that vision.

We focus heavily on building the hard skills of design, with a feeling that usable and aesthetically pleasing design is like a magnet that will just draw people in with awe and wonder. This, of course, is not the case. If you can’t get your stakeholders on board with your vision, your design skills mean very little. I’ve watched countless great ideas and excellent designs die because the designer couldn’t get anyone to care, and I’ve had my own ideas killed because I failed in that task as well.

Part of effectively presenting work is in the way you build a “story” around it, but here again, this isn’t about traditional narrative structure. In this case, there are different elements to consider:

  1. Setting Context (tell me why this is important)
  2. Showing the Solution (why does this solve the problem)
  3. Defining Success (how are we going to know if this works)


Setting Context: This is the most critical component. No one will sign onto a vision if they don’t understand why it matters.
This isn’t about communicating why the work is important to you; this is about communicating why the work should be important to the audience. This is an important distinction, because often we approach the idea of setting context by walking people through our process: what research we did, how we did it, what we found, our personas, our flows, and so on. These things are important to us; they are not important to most other people. The key is to use the information you’ve gained from the design process to get others excited about the project. Why should they care? Answering this question means you have to understand the many roles within the company and the individuals who fill them, and be able to conceptualize your work through the lens of their goals and needs.

When I was head of UX in my previous job, my team and I decided that we needed to build a new app to solve a specific challenge. We couldn’t do this on our own so we had to sell the idea across the company. This meant enlisting not just the design team, but the leadership and staff in engineering, marketing, finance, and content, not to mention the CEO.

The context required for each of these groups was very different. Enlisting the marketing team meant helping them understand not only the value for the user and business but why it would enhance their strategy. The content team had to understand that the effort to create new content for the experience (in this case videos) would be time well spent. Finance needed to understand the business case and the expected resource requirements. And so on.

This wasn’t just one big meeting with key stakeholders; it was many individual and small-team meetings, which allowed us to get the project done.

In setting context, as in most things, less is more. Your goal is to determine the minimum information needed to get someone on board and keep it at that. Just like having personas with too much information, too much context distracts from the important points. Sometimes we feel like we need to show everything we’ve done to prove that we’ve been working hard or to get recognition for the amount of effort we’ve put in. The goal is not to show your work, the goal is to enlist people into your vision.

Finally, it is important to remember that context requires repetition. People are flooded with new priorities every day. To keep an organization focused and on the same page you need to constantly remind people why a project is important. Every check-in at each stage of a project should include at least a brief reminder of the context at the outset. This even includes one-on-one design reviews with the person who assigned you the work. It may feel redundant but it frames all conversations around what’s important.

Showing the Solution: Once the context is set, you can show the solution. Depending on the stage of the work, this could be anything from a conceptual discussion to high-fidelity designs.

The key is to always anchor your work back to the context you set. Reinforce why this is solving the problem. You want the audience to not only feel like the work is important but that what they are about to help you create is heading in the right direction.

The level of detail here is relative to the audience and goal of the meeting. A functional design review with engineering is probably going to require a significantly higher level of detail than a walkthrough for the executive team. Understand your context, both through the lens of goals and individual expectations. This is not one size fits all.

Defining Success: In terms of importance, defining success is a close second to setting context. Defining success does two things in building a vision and eliciting emotion. First, it allows people to conceptualize and take ownership of what is important in a project. You can’t win the game unless you know how the score is being kept. Being clear on the definition of success empowers each team to determine the best way to win.

Second, it codifies a shared goal across teams. This is what we are trying to accomplish, and here is how we will all measure it together. Setting this expectation and getting buy-in on the definition means that if the project succeeds, everyone gets to celebrate, and if it fails, everyone gets to work together to determine why and figure out next steps. Without defining success, everyone is just left to wonder if the effort was worth it.

How You Show Up

The final and most important piece to effectively presenting your work is how you show up. Your level of enthusiasm for the work will set the tone for the entire discussion. This doesn’t mean you need to do an over-the-top musical number, but it does mean you need to be intentional in your approach. Everything from your body language to the words you use and the way you deliver them plays into the story others create about your work.

This isn’t always easy; sometimes you aren’t excited about the work, or you’re having a bad day. It’s important to remember that your work represents you and you represent your work. Finding a well of energy, even in those low times, is important in putting yourself and your work in the best possible light.

This doesn’t mean you need to sugarcoat things or always be overly positive. If you aren’t happy with how something is turning out or are having a bout of imposter syndrome, you can be honest about that, but be intentional about the way you approach it. For example, you could say, “I’m really not happy with this. I wasn’t sure what to do and I don’t love where I landed.” This primes whoever you’re meeting with to feel like the work is going to be bad and that progress isn’t being made. A negative expectation has been set for the entire discussion. Compare that with, “I’ve made some progress, but I still have a lot of questions I’m working through. I’m really interested to get your feedback and ideas.” This communicates the same thing, but the tonal shift tells a completely different story. The emotion you are eliciting is positive instead of negative. Now everyone is going into the discussion feeling like progress has been made, and though there are still things to figure out, this is a chance to do it together.

In design, story is all about emotion. Our goal is not to build characters and plot, it is to convey and elicit feelings. This can fold into your design practice through clarity, understanding, and intention. If you are clear on the emotions you are trying to drive, understand the people who will engage with your work and are intentional with your approach, the story writes itself.

“What is storytelling in design?” was originally published on Medium on March 28, 2019.