
Late last year, I wrote a piece titled “Design Won’t Save the World.” It focused on the limits of human-centered design and its failure to impact the big problems we face. The morning the piece was published, my wife sat in our living room reading through it. When she finished she said, “we need bee-centered design.” Since that moment I’ve been thinking about what that would mean.

Wikipedia defines human-centered design (HCD) as “a design and management framework that develops solutions to problems by involving the human perspective in all steps of the problem-solving process.”

Hasn’t our entire existence been about involving the human perspective in all steps of the process? I mean, at one point we literally believed that we were the center of the universe. Empirically we’ve figured out that we’re not the center of everything but practically, we pretty much still carry on as if we are. We are very aware of the vast and powerful interplay between the parts and pieces that surround us, but we continue to see the world as one big show unfolding in service to us. When you get right down to it, human-centered design is just an extension of this belief in both name and execution.

Humans have a lot of problems that need solving — and we should try to solve those problems. But humans don’t exist in isolation; we are but one very small piece in a very large puzzle. While centering the human perspective can help foster more humane design outcomes, it also perpetuates myopic navel-gazing.


When we observe a problem that impacts people, our process dictates that we solve it. Very often, these solutions are developed in isolation, exclusively from the human perspective. This creates a solutions-at-all-costs mentality in which we often ignore any risk of broader impacts, rarely asking ourselves if the problem should even be solved in the first place. This inward-looking approach leads to a lot of human-centered solutions — but it also leads to a lot of collateral damage to the larger systems around us.

By centering the human perspective, we also center our narrow definition of success. We believe that business metrics and economic growth are the end-all, be-all of human progress. But when we infuse that belief into all steps of the problem-solving process, it becomes the frame through which we view all outcomes. In many cases, a solution is not deemed successful unless it carries a financial upside. (This doesn’t have to mean actual revenue; it can simply mean shareholder value, as we see with many web companies.) Whether the solution solves the original problem or not is almost entirely irrelevant. This prioritization of profits over progress puts a ceiling on the amount of real, human value we can actually deliver. It also papers over any resulting collateral damage.

While centering the human perspective has allowed us to make important gains, it doesn’t scale. In an interdependent system, continually over-prioritizing the needs and desires of a single component will eventually cause the entire system to collapse.

It’s time for us to broaden our perspective. We need to start looking beyond the ends of our own noses.

What is bee-centered design?

So, what does bee-centered design really look like? I’ve realized that it’s not necessarily a literal concept. We don’t gain anything by actually putting bees at the center of our decision-making processes, or by spending lots of time creating solutions to problems that the bees didn’t even know they had. Rather, it’s about shifting our mindset to open up a much-needed new perspective for the things we create.

The “canary in the coal mine” mentality

The scale of our impact on the environment is enormous. In our current design paradigm, we largely assess that impact based on short-term outcomes for ourselves. If something doesn’t kill us immediately, we’ll give it a thumbs-up. But we are the strongest link in the chain, and in an interdependent system the chain is only as strong as its weakest link. Unfortunately, our approach to design has been knocking off weak links left and right.

So far, we have largely shielded ourselves from this downside with our technical resilience. But there is a limit to what we can withstand.

Bees are a sentinel species. They are more fragile and susceptible to environmental changes than species further up the food chain, and their health is an early indicator of impending ecological issues. If our design process shifted to center them, or to focus on other weaker links, we would have to consider the impacts of our actions beyond our immediate health and safety. This small change could mean a complete shift in our tolerance for risk, and in our patterns of creation and consumption.

This principle isn’t just about sustainability; it’s also about the quality of our design solutions. In a previous career, I was a health inspector. The regulations I used to enforce food safety were built on risk tolerances aimed at protecting the most vulnerable among us: young children, the elderly, and the immunocompromised. While this system created a more stringent set of rules, it also made the rules significantly more effective. A standard set to protect a baby will almost always protect a healthy adult.

Nothing we create exists in isolation; it all lives within the overall natural system. If we architect our solutions with tolerances that support the more vulnerable aspects of that system, we’ll actually craft a more effective solution. If we design for the health of the canary in the coal mine, we will also be designing for the health of the miner.

A common goal

Bees work in service of their hives. The hive system delivers maximum value because everyone feeds into it and moves in the same direction. The key to the hive’s effectiveness is that every bee within it has a clear view of where they are going. Humans don’t have that; on the whole, we don’t have shared goals. The closest thing we have is the profit motive (and, I guess, basic survival).

Where are we going and to what end? Are we just making shit for the hell of it? Do we want to pile up money? Solve all human problems? Fill every possible niche with a product? These are obviously big questions but they aren’t questions that our existing design frameworks even remotely try to address.


Instead, our current frameworks root us in processes and problems. Design thinking, for example, is rooted in “empathy, optimism, iteration, creative confidence, experimentation, and an embrace of ambiguity and failure.” This is all about process, not outcomes. It provides a playbook for how to find problems and steps for how to develop solutions, but it doesn’t guide the outcomes for those solutions. It doesn’t get us all moving in the same direction.

What if the core of our design framework was rooted in a set of universal outcomes? What if we had a common set of goals to pull toward, regardless of what product we’re designing or what industry we’re working in? These goals could be built around things like empowerment, inclusivity, sustainability, equity, and opportunity. They could become a base filter through which we evaluate all of our designs.

Having collective goals would not negate the need for process altogether. Instead, it would ground that process in a shared ethos, amplifying the power of all of our efforts in a common direction rather than pushing each of us to grasp in isolation for something greater.

Widening our view of the world

Bee-centered design would also, quite simply, widen our view of the world. It would mean taking a moment to get out of our human bubble and look around. We’ve told ourselves so many stories about the way things are supposed to be; those stories play on autopilot every time we create something. We’re trained to ask questions — but why don’t we question the validity and value of our obsession with solving problems that affect only us?

Why do our companies need to become monopolies in order to win? Why do our products need to maximize engagement? Why is convincing people to upgrade every 12 months a good thing?

Does every product deserve to exist? Does every problem need to be solved?

Human-centered thinking keeps us locked in our human-centered bubble. We need to break out.

“Human-Centered Design Is Broken. Here’s a Better Alternative.” was originally published in Medium on March 27, 2019.

In 1926, the last remaining wolves were killed in Yellowstone National Park. It was the outcome of a centuries-long campaign to rid North America of its wolf population.

Wolves were viewed as a nuisance. They killed valuable livestock and created a barrier against our drive to conquer the West. Our bid to eradicate them was swift and effective but carried unexpected consequences.

In Yellowstone, removal of the wolves resulted in reduced pressure on the elk population, triggering a cascade of ecosystem-wide devastation. The growing elk herds decimated willow, aspen, and cottonwood plants, which caused beaver populations to collapse. This cascade of events changed the trajectory and composition of the park’s rivers as banks eroded and water temperatures rose from reduced vegetative cover. As a result, fish and songbirds suffered.


Doug Smith, a wildlife biologist who oversaw the reintroduction of wolves to Yellowstone, describes the original elimination of them as “kicking a pebble down a mountain slope where conditions were just right that a falling pebble could trigger an avalanche of change.”

To humans, the wolves represented nothing but unnecessary friction. To nature, they represented a crucial linchpin holding the entire ecosystem together.

Humans are friction-obsessed. Friction is our ultimate foe in a constant crusade for efficiency and optimization. It slows us down and robs us of energy and momentum. It makes things hard. We dream of futures where things run smoothly and effortlessly, where it’s all so easy.

Driven by this vision, we’ve constructed a vast techno-industrial complex that churns out endless products aimed at smoothing increasingly insignificant inconveniences.

But nature is the ultimate optimizer, having run an endless slate of A/B tests over billions of years at scale. And in nature, friction and inconvenience have stood the test of time. Not only do they remain in abundance, but they’ve proven themselves critical. Nature understands the power of friction while we have become blind to it.

In 2012, psychologists completed a study that asked participants to assign monetary value to a simple storage box from IKEA. One group had to build their own box while the other group was given a prebuilt box. Both groups were then asked what they thought the box was worth. The group that built their box valued it significantly higher than those who received the prebuilt version.

In this case, building the box added an extra layer of friction to the process. That friction, dubbed “the IKEA effect,” infused a sense of ownership and purpose into the box that made it more valuable to the participants who built it. This effect, however, only held to a point. As the researchers dug deeper, they discovered that value was not created if the box was too difficult to build. As the researchers put it: “We show that labor leads to love only when labor results in successful completion of the task.”

The results of this study set up a bell curve of friction versus value. Both too much friction and too little friction reduce value, but just the right amount of friction maximizes it.
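As a toy model (my own illustrative sketch, not a fit to the study’s data), perceived value can be written as an inverted-U function of friction:

```python
def perceived_value(friction: float) -> float:
    """Toy inverted-U model of the friction-versus-value curve.

    `friction` is normalized to [0, 1]. The quadratic shape is an
    illustrative assumption, not a curve fit to the IKEA-effect data.
    """
    return 4 * friction * (1 - friction)

# Too little friction (a prebuilt box), the sweet spot, and too much
# friction (a box too hard to build) in the toy model:
for f in (0.1, 0.5, 0.9):
    print(f"friction={f:.1f} -> value={perceived_value(f):.2f}")
```

Both extremes score low; the middle of the curve scores highest, which is all the model is meant to show.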


We can see this effect play out in the products we use every day.

Take, for example, Facebook. Facebook unlocked tremendous value by greatly reducing the friction involved in sharing our lives with friends. The platform was easy to use but still required some effort to create and share posts. In a bid to increase value, Facebook decided to remove this final bit of friction by introducing “frictionless sharing,” wherein some activity was automatically shared on the user’s behalf. Unfortunately, the change removed too much friction. Users felt they had lost control and ownership over their posts, and their response was overwhelmingly negative. Facebook eventually rolled back the feature.


Similarly, Amazon delivers value by making it easy to find and buy almost anything. However, the steps you must take to purchase an item on Amazon still represent a small dose of friction. To remove this final bit of friction, Amazon implemented a “one-click” buy button, which eliminates the need to complete the checkout steps. To take this even further, they created a smart button called Amazon Dash, which allows a person to order frequently used products without even visiting Amazon’s site. These features solve a problem for Amazon, bringing them more revenue more quickly. But based on what we know already, a frictionless shopping experience may actually be detrimental to customers.

Like Yellowstone’s wolves, the friction of the checkout process provides a check against impulse purchases and overspending. In a world where many people struggle to manage their money, these small barriers can be critical to maintaining financial balance. While the market would dictate that it’s not Amazon’s job to help its customers control their spending, lowering the barrier to impulse purchases could have a net negative effect on the value people get from Amazon’s service. The Dash button, for example, eliminates so much friction that customers may not even know how much they’re spending until after they’ve completed a purchase. In light of this, Amazon Dash was deemed illegal in Germany for violating consumer protection laws.


While the friction-versus-value curve impacts our daily interactions with products, it carries even greater weight outside of online shopping and social sharing.

We crave purpose and meaning in our lives. Many of us subscribe to the guiding belief that we must eliminate as much inconvenience and friction as possible in order to maximize the time we can spend on “the things that matter.” Unfortunately, as the IKEA effect illustrates, we may be going about it all wrong.

Below is a graph from Our World in Data. It shows self-reported life satisfaction from 2005–2017 across a number of countries with varying economic and political circumstances.


Overall, a country’s average level of life satisfaction increases alongside its wealth, with many wealthy countries reporting average levels in the seven to eight range (out of 10). A certain level of wealth, both on an individual and national level, is required to afford the services and infrastructure that reduce major friction in our lives. But that’s not what’s most interesting about this graph. Rather, what is most striking to me is that satisfaction levels, across the board, have not moved appreciably in over a decade.

This is remarkable when you consider that the time between 2010 and 2017 represents a high point in Silicon Valley, with the introduction of smartphones, tablets, and wearables as well as the explosion of social media and the rise of Amazon, Uber, Airbnb, and Netflix. You could call this era a golden age in our war on friction. We’ve seen a technology-enabled smoothing of increasingly minor inconveniences, yet it seems to have had little net impact, positive or negative, on life satisfaction across the globe. For many, life has changed dramatically but our levels of satisfaction have not.

It’s important to note that the data from the graph above draws from the Gallup World Poll, which focuses its survey mainly on adults. Most of the respondents are from a generation that grew up before the great smoothening of the last decade. So what about the generation entering adulthood right now? Has life with less friction left them feeling happier and more fulfilled than generations before? In her book iGen, San Diego State University psychology professor Jean M. Twenge shows us the answer is no. A growing percentage of eighth-, tenth-, and twelfth-graders feel their lives have less purpose than previous generations did. We have a lot more to learn here, but the preliminary evidence supports the idea that we’re no happier than we were before the rise of apps.

Percent of 8th, 10th, and 12th graders who are neutral, mostly agree, or agree with each statement.

Too much friction destroys value. But so does too little.

Before the industrial revolution, many people faced insurmountable levels of friction. Over the last century, we’ve unlocked tremendous value by reducing major inconveniences. We’ve streamlined travel and communication, connecting vast portions of the globe. We’re enabling an increasing percentage of the global population to rise out of poverty. Mechanization and mass distribution put material and agricultural goods in the hands of many for whom these things were previously unattainable. We’ve moved more people to the middle of the friction bell curve, making it possible for them to step away from the basic tasks of survival and find meaning in other pursuits. Through it all, technology has continued to advance.

Ostensibly, the continued reduction of minor inconveniences should continue to drive satisfaction upward. But global satisfaction and happiness are stagnating and young people are feeling less purpose in their lives.


The problem is that we now have a system built to straddle the friction-value curve, which keeps many people out of the middle. On one side, we have the market-driven techno-industrial complex, which is focused on making things increasingly easier for people who are already in the sweet spot of the curve. The result is that these people are beginning to slip down the other side, falling into the realm of too little friction and leaving purpose, meaning, and satisfaction behind.

On the other side, vast portions of the population are living with far too much friction. Overall, global progress has not been evenly distributed. Even within wealthy countries, disenfranchised and marginalized groups continue to face massive systemic barriers. Frequently, these issues are shuffled onto society’s back burner, becoming the purview of under-resourced government and philanthropic organizations while the market turns its attention toward delivering more ease for those who already have it easy enough.

This is the incentive structure we’ve created. Technology is a tool to solve problems and deliver value. Over time, however, we’ve increasingly tied the value of technology to the revenue it can generate as opposed to the benefit it can deliver to the humans who use it. Our economic system feeds on the belief that eliminating all friction is our road to happiness. We perpetuate this belief to drive profits — but we’re reaching a point of diminishing returns.

While levels of global satisfaction are still relatively high today, the trend in these numbers is not encouraging, especially for younger generations. If our goal is to grow profits, we’re doing alright. But if our goal is to truly deliver human value, we’re heading down the wrong path.

We need to reassess our relationship with friction. We reduce the likelihood of value, purpose, and satisfaction when we focus on smoothing increasingly benign inconveniences and ignore the significant friction holding back much of the world.

Not all friction is created equal. If we are designing products for human value, we can’t treat all problems the same way. We need to understand which problems are worth solving because they truly hold people back and which problems may not actually be problems at all. The nuance of this difference, just as we see in nature, is key to maximizing a product’s value to humanity.

“The Value of Inconvenient Design” was originally published in Medium on March 5, 2019.

The digital world, as we’ve designed it, is draining us. The products and services we use are like needy friends: desperate and demanding. Yet we can’t step away. We’re in a codependent relationship. Our products never seem to have enough, and we’re always willing to give a little more. They need our data, files, photos, posts, friends, cars, and houses. They need every second of our attention.

We’re willing to give these things to our digital products because the products themselves are so useful. Product designers are experts at delivering utility. They’ve perfected design processes that allow them to improve the way people accomplish tasks. Unfortunately, it’s becoming increasingly clear that utility alone isn’t enough.

Quite often, our interactions with these useful products leave us feeling depressed, diminished, and frustrated.

We want to feel empowered by technology, and we’ve forgotten that utility does not equal empowerment.

Empowerment means becoming more confident, especially in controlling our own lives and asserting our rights. That is not technology’s current paradigm. Instead, digital products demand so much of us and intrude so deeply into our daily existence that they undermine our confidence and control. Our data and activity are mined and used with no compensation or transparency. Our focus is crippled by constant notifications. Our choices are reduced by algorithms that dictate what we see. We can’t even set our devices down because we’ve lost our ability to resist them.


We brush this off because we’ve confused a sense of utility with a feeling of empowerment. We assure ourselves that we own our lives when we land a great deal on a place to stay, catch the latest update from a friend, discover a great article, or have our groceries delivered. These are just a few of the small moments of pure utility that we’ve learned to confuse with power over our own lives.

We’ve been on this trajectory for a while. For decades, companies have taken increased license to insert themselves into our lives. Driven by a combination of proximity and data availability, this trend has reached its peak in the last decade.

Everything we do on the web now is trackable. Before the internet, this level of data granularity was unfathomable. In the web’s early years, companies began to leverage user insights to target ads and drive their businesses. For a brief time, we had a degree of separation because we just weren’t on our computers very much. Then the smartphone came along.

Smartphones have created a once-unimaginable level of proximity between customers and companies. This ever-present connection has dramatically driven up our time spent online. Suddenly, companies can reach us directly anytime, anywhere. Couple that with the growing mountains of data, and the separation between our lives and companies that want to influence them has disappeared.

It’s an unsustainable relationship. It may look like the future, but it’s not.

Most companies’ current model of value is to design for utility, believing that customers will absolve them of any wrongs done in the name of it. This model is failing because it misses the bigger picture of what humans want from the technology they use.

Utility alone won’t assuage us. We want empowerment. We want to be better people. We want technology to enhance our capabilities and increase our sense of agency without dictating the rhythm of our lives.

This is the task for the next wave of digital products, and it will require a complete shift in the way we think about design. For starters, we need to be willing to break the existing “utility” mold. As ever, when one company develops a winning strategy, everyone follows suit. Now that we’ve established a set of best practices based on extraction and exploitation, we’ve applied them with cookie-cutter precision across every industry. Companies preach user-centered design, but the products they create often center on the value they receive from the user rather than what they can deliver.

As digital product designers, here’s what we need to rethink:

  1. How users’ roles are viewed in the life cycle of products. If the value of a product is predicated on its users’ activity or resources, then those users are not customers; they are business partners.

  2. Data collection, manipulation, and transparency. We need to center the user — not the business — as the owner of their data.

  3. The drive for continual engagement. Intentionally hijacking human psychology in order to hook people is a predatory business practice. We need ethical standards for how we manipulate people’s behavior.

  4. Revenue models. Business models that depend on a given level of user engagement are unsustainable.

  5. How content creators are compensated. A platform alone should not profit from the creations of its users.

  6. Algorithms and artificial intelligence. We need ethical standards for how we manipulate what a person sees.

  7. The role of our products in the lives of our users. Our products are not the center of a person’s life; they are only a small part of it.

Evolving our thinking in each of these areas will be a big step forward, but doing only that isn’t the complete answer. We also need to break our obsession with screen-based solutions. While screens are unlikely to ever go away completely, they’ve become a crutch — the path of least resistance. If there is a problem to be solved, product designers think all they have to do is create an app. Our obsession with designing for screens has fueled an entire industry of UX design boot camps that crank out app designers. We’ve tricked ourselves into believing all problems are nails and screens are the hammer. We’ve got it so dialed in at this point that most apps look the same.

Screens are easy.

They beget many of the digital product design problems described above. They require attentive processing, meaning our brains must be fully engaged to interact with them. By nature, they demand our attention — which is what encourages the collection of vast amounts of data — and lend themselves to business metrics like minutes viewed, dwell time, page views, and read time. Screens have convinced us that continual engagement is the definition of success.


As long as we continue to design solutions that demand all of our attention, it will be nearly impossible to break out of the “disempowering product” paradigm. Too often, our screen obsession keeps us from even considering the many other creative and powerful ways we could be using the web’s capabilities.

Some point to augmented reality as the next phase. While AR may feel transformative and whiz-bang, it’s really just the same screen in a different location. It’s the next step in the race to see how close our notifications can get to our actual eyeballs. It’s not empowering.

Empowering products enhance our capability and our sense of agency without disrupting the rhythm of our lives. The car is a great example. It’s a dramatic enhancement to our ability to travel, and we have agency (outside of some basic safety rules) to use it as we see fit. It works with us. It listens to us. It doesn’t disrupt us. A car is there when we need it and invisible when we don’t.

This must be our new design mantra: There when you need it, invisible when you don’t. It would be much better than what we believe today: There when you need it, incessantly begging you to come back when you don’t.

In his book Enchanted Objects, product designer and entrepreneur David Rose of the MIT Media Lab proposes the concept of “glanceable technology”: products that deliver value without demanding constant attention. Rose’s most basic example is a web-enabled umbrella whose handle glows blue when it’s going to rain so you remember to take it with you. It’s a common device made magical with some basic web intelligence. It’s simple and powerful.

Consider another example: a wallet that gets harder to open the closer you get to your budget limit. Contrast that with a flood of “high spending” notifications on your lock screen and in your email from services like Mint. What about an alarm clock that changes color based on the predicted temperature for the day, so you know how to dress without opening an app? Or a watch that monitors traffic patterns and vibrates to let you know when you need to leave to make it to an appointment on time? Or a piece of luggage with a handle that glows to notify you that your flight is delayed?

Each of these products would enhance our ability to make decisions and manage our lives without disrupting or dictating our actions. They would leverage the power of the web to deliver utility while offering us the agency to use them as we see fit.
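The pattern behind all of these is the same: reduce a live data feed to a single ambient state. A minimal sketch of Rose’s umbrella idea (the threshold and the data source here are my hypothetical choices, not details of the real device):

```python
def umbrella_signal(rain_probability: float, threshold: float = 0.5) -> str:
    """Collapse a forecast into one glanceable state for the handle.

    `rain_probability` would come from a weather feed; the 0.5
    threshold is a hypothetical design choice.
    """
    return "glow blue" if rain_probability >= threshold else "off"

# A wet forecast lights the handle; a dry one leaves it dark.
print(umbrella_signal(0.8))
print(umbrella_signal(0.1))
```

The point is how little logic a glanceable object needs: one input, one threshold, one ambient output, and no screen demanding attention.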

There is so much depth beyond the screen. Some of the solutions described above might be coupled with an app, but even so, they move us away from screens as our primary entry points to technology. They would put a buffer between us and that needy friend demanding more of our time.

This is the future we should be building. It’s not just about “smart” objects. If we continue on our current path, we’ll eventually shove A.I. into every random thing we can find. Intelligence for its own sake does not equal empowerment — just as utility doesn’t. Empowerment comes through execution. If I can text my refrigerator from the store to ask if we have milk before I buy more, I have more agency to manage my life. But if that “smart” refrigerator also tracks my eating habits and funnels them to Amazon so it can spam my phone with “there’s a special on Double Stuf Oreos” notifications, then we’re right back where we started.

We’ve never wanted to be shackled to technology. It’s not the future we promised ourselves. Stories from our past don’t depict a future where we all have our heads buried in screens — unless those stories are of the dystopian variety.

We’ve always wanted tech to feel like magic, not a burden.

We can build the future we want. Technology is not something that happens to us; it’s something we choose to create. When we design the next wave of products, let’s choose to empower.

“It’s Time for Digital Products to Start Empowering Us” was originally published in Medium on February 25, 2019.

I recently installed a Nest thermostat in my house. Nest has been around for a while, but I’ve been hesitant to get one. I won’t go into the details of why we finally pulled the trigger, but it made sense to have more control of our home environment.

When the box arrived, I was excited. I felt like I was stepping into the future. Once I got it all wired up and began the setup, though, my original hesitation came flooding back.

Nest would like to use your location.

I almost bailed. This is when Nest stopped feeling like a fun, helpful device and started to feel like an intrusive portal. Yet another keyhole for a company (or whomever else) to peer into my family’s life. It was probably okay, I rationalized. It’s probably just sharing location and temperature data, I thought to myself.

I wouldn’t have had this conversation with myself a decade ago. As the internet grew and the iPhone came on the scene, it was exciting. I felt a reverence, almost gratitude for everything it enabled. Driven by curiosity and optimism, I signed up for any new service just to see what the future might hold. I was on the leading edge of early adopters.

Over the past few years, however, I’ve drifted away. I’m not the only one.

There’s always been a financial cost to early adoption. My uncle amassed a collection of LaserDiscs, only to have to start over when DVDs won. For him, the long-term impact was limited: some money out of pocket and a slightly bruised ego. Now, the equation is very different.

The cost of a new device is no longer just financial: it’s also deeply personal.

Today, each new device we purchase is a conscious decision to share an intimate piece of ourselves with a company whose goals may not align with our own. This exchange represents a fundamental shift in our relationship with technology and the companies that produce it. Adoption is no longer an ephemeral transaction of money for goods. It’s a permanent choice of personal exposure for convenience—and not just while you use the product. If a product fails, or a company folds, or you just stop using it, the data you provided can live on in perpetuity. This new dynamic is the Faustian bargain of a connected life, and it changes the value equation involved in choosing to adopt the next big thing. Our decisions become less about features and capabilities, and more about trust.

When Amazon says, “Don’t worry, Alexa isn’t listening all the time,” we have to decide if we trust them. When Facebook launches a video chat device days after announcing a security breach impacting 50 million user accounts, we have to decide if we’re willing to allow them to establish an ever-present eye in our home. When we plug in a new Nest thermostat for the first time, we have to decide if we are okay with Google peering into our daily habits. The cost of a new device is no longer just financial: it’s also deeply personal.

The diffusion of innovation

The adoption of new technologies is often represented on a normal (bell) curve, with roughly 16 percent of the population falling into what is broadly characterized as early adopters.
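
In Rogers' diffusion-of-innovation model, that curve is conventionally cut at standard deviations from the mean adoption time. A quick sketch using the standard normal CDF recovers the rough 16 percent figure (innovators plus early adopters):

```python
from math import erf, sqrt

def normal_cdf(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + erf(z / sqrt(2)))

# Rogers' adopter categories, cut at standard deviations from the
# mean adoption time (an idealized model, not empirical data):
#   innovators      below -2 sd
#   early adopters  -2 sd to -1 sd
#   early majority  -1 sd to the mean
#   late majority   the mean to +1 sd
#   laggards        above +1 sd
segments = {
    "innovators":     normal_cdf(-2),
    "early adopters": normal_cdf(-1) - normal_cdf(-2),
    "early majority": normal_cdf(0) - normal_cdf(-1),
    "late majority":  normal_cdf(1) - normal_cdf(0),
    "laggards":       1 - normal_cdf(1),
}

for name, share in segments.items():
    print(f"{name:>14}: {share:.1%}")

# innovators (~2.3%) + early adopters (~13.6%) ≈ 16%
```

The exact cut points are a modeling convention, but they explain where the oft-quoted 16 percent comes from.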


Early adopters, as Simon Sinek puts it, are those who just get it. They understand what you’re doing, they see the value, and they’re here for it. The further you move into the curve, from the early majority to the laggards, the more you need to convince people to come along.

Early adopters have an optimistic enthusiasm and a higher tolerance for risk, both financial and social (remember the first people walking around with Google Glass?). It’s relatively easy to acquire them as customers. It doesn’t take a sophisticated marketing apparatus or a big budget to get them on board. As Sinek says, “Anyone can trip over [the first] 10 percent of the market.” Early adopters are critical because they create the fuel that allows an idea to gain momentum.

Early adopters provide initial cash flow and crucial product feedback, and they help establish social proof, showing more cautious consumers that this new thing is okay—all at a comparatively low cost of acquisition.

For a new product to find true mass market success, it has to move out of the early adopter group and gain acceptance in the early majority. This is sometimes referred to as crossing the chasm. Early adopters give new technologies the chance to make that leap. If companies had to invest in marketing to acquire more reticent consumer groups, the barrier to entry for new ideas would grow dramatically.

But what if early adopter enthusiasm began to erode? Is that optimistic 16 percent of the population immutable? Or is there a tipping point where the risk-to-value ratio flips and it no longer makes sense to be on the cutting edge?

What it means to “just get it” in the 21st century

There was something different about the Facebook Portal launch. When the new video chat device hit the market, Facebook didn’t make a play for the typical early adopter group—young, tech-savvy consumers. Instead, they targeted the new device toward a less traditionally “techy” audience — older adults and young families. You could make a lot of arguments as to why, but it comes back to the core principles of early adopters: they get what you’re doing, they see the value, and they’re here for it.

For Facebook, mired in endless scandals and data breaches, it became clear that the traditional early adopters did get what they were doing, but instead of value they saw risk, and they weren’t here for it. Facebook chose to target a less traditional demographic because the company felt they were less likely to see the possible risks.

Facebook Portal is a paragon of the new cost of early adoption. The product comes from a company whose relationship with consumers is shaky at best. It carries a lot of privacy implications. Hackers could access the camera, or the company could be flippant and irresponsible with the use and storage of video streams, as was reported with Amazon Ring. On top of that, Portal is not just a new device, but also a new piece in the ecosystem of Facebook products, which represents a bigger underlying hazard that is even harder to grapple with.

Today, each new device we purchase is a conscious decision to share an intimate piece of ourselves with a company whose goals may not align with our own.

As the technology ecosystem has grown, the number and types of devices we feed our personal data into have expanded. But, as linear thinkers, we continue to assess risk based on the individual device. Take my internal dialogue about the Nest thermostat. My inclination was to assess my risk tolerance based on the isolated feature set of that device — tracking location and temperature. In reality, the full picture is much broader. The data from my Nest doesn’t live in isolation; it feeds back into the ever-growing data Frankenstein that Google is constructing about me. My Nest data is now intermingling with my Gmail data and search history and Google Maps history and so on. Various A.I. systems munge this data to drive more and more of my life experience.

A product ecosystem means the power inherent in a single device is no longer linear. As each new device folds into an increasingly intimate data portrait, companies are able to glean insights with each new data point at an exponential rate. This potentially translates to exponential value, but it also carries exponential risk. It’s hard, however, for us to assess this kind of threat. Humans have difficulty thinking exponentially, so we default to assessing each device on its own merits.
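
A back-of-the-envelope illustration of that non-linearity (my own, not from any company's disclosures): if each device contributes one data stream, the number of ways two or more streams can be cross-referenced grows exponentially with the number of devices.

```python
# Each subset of two or more data streams (thermostat, email, maps,
# search, ...) is a potential combined signal. With n streams there
# are 2**n - n - 1 such subsets -- exponential in device count.

def combined_signals(n: int) -> int:
    """Number of subsets of size >= 2 drawn from n data streams."""
    return 2**n - n - 1

for n in [1, 2, 4, 8, 16]:
    print(f"{n:>2} streams -> {combined_signals(n)} possible combinations")
```

Two devices yield a single cross-reference; sixteen yield tens of thousands. Whatever the precise model, assessing each device on its own merits badly underestimates the whole.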

All of this means that to be tech-savvy today isn’t to enthusiastically embrace new technology, but to understand potential hazards and think critically and deeply about our choices. As Facebook Portal illustrates, that shift has the potential to change the curve of technology adoption.

Trust in the future

Over the past decade, our relationship with new technology has been tenuous. As early as 2012, a Pew Research study found that 54 percent of smartphone users chose not to download certain apps based on privacy concerns. A similar study in Great Britain in 2013 pegged that number at 66 percent. More recently, MusicWatch conducted a study on smart speaker use and found that 48 percent of respondents were concerned about privacy issues. As summarized by Digital Trends:

Nearly half of the 5,000 U.S. consumers aged 13 and older who were surveyed by MusicWatch, 48 percent specifically said they were concerned about privacy issues associated with their smart speakers, especially when using on-demand services like streaming music.

Yet, despite our misgivings, technology marches on. Our concerns about smartphones have not slowed their growth, and MusicWatch found that 55 percent of people still reported using a smart speaker to stream music.

As Florian Schaub, a researcher at the University of Michigan who studies privacy concerns and smart speaker adoption, told Motherboard:

What was really concerning to me was this idea that “it’s just a little bit more info you give Google or Amazon, and they already know a lot about you, so how is that bad?” It’s representative of this constant erosion of what privacy means and what our privacy expectations are.

We’ve been engaged in this tug-of-war for years, pitting that persistent feeling of concern at the back of our minds against our often burning desire for the new. The coming decade may prove a litmus test for our long-term relationship with technology.

For years we have chosen to trust corporations with our personal data. Maybe it’s a cultural vestige of the technological optimism of postwar America, or maybe we are so eager to reach the future we’ve been promised that we are operating on blind faith. But there are signs that our enthusiasm is cracking. As we continue to hand over more of ourselves to companies, and as more of them fail to handle that relationship with respect, does there come a point when our goodwill dries up? Will trust always be something we give, or will it become something that must be earned? At what point does the cost of adoption become too high?

“Why Technology’s Early Adopters Are Opting Out” was originally published on Medium on February 11, 2019.

I sit on a runway. It’s getting dark and it’s raining. The flight attendant says it’s time to switch our portable electronic devices to airplane mode. Most people ignore them. I’m quickly flipping through my “critical” apps one last time, getting in that final check before jetting off into a communication black hole. There is nothing new waiting in those apps and I know that; I checked less than a minute ago. But I check again anyway. We take off. I flip my phone to airplane mode. Soon we’ll be at our cruising altitude and it won’t matter what mode my phone is in; checking in will be off the table. I’m relieved.

The airplane is like a communication time warp. A throwback to an age where uninterrupted conversations could flow for extended periods of time. A time when we were comfortable just staring out the window watching the world go by. A time when one might find themselves bored with only their wandering thoughts to entertain them.

A healthy amount of idle time is not only good for us but makes us more creative and may be critical to our happiness.

If you live in a city and don’t actively travel into rural towns or wilderness, the airplane might be the only time you experience this kind of forced disconnection. It feels freeing. It feels like a weight is lifted. That little piece of your brain constantly preoccupied with what you might be missing finally gets a break. A brief rest before it is re-engaged the moment the wheels touch down at your destination.

We need that rest and disconnection. We need our thoughts to wander, unguided and unprompted. We need uninterrupted conversation. And, most importantly, we need extended moments of boredom and the creativity and introspection that come from them. Unfortunately, those moments are getting harder and harder to come by.

The last time I flew, there was Wi-Fi available on the plane. The modem happened to be down so we couldn’t connect, but it was there. Every plane will soon have Wi-Fi. Being 30,000 feet above the planet will no longer be an escape. We’ll all feel pressure to post our airplane window pics in real-time.

The spread of the internet is inevitable. Google and Facebook are already on a mission to bring reliable service to rural and developing areas, and that effort will only intensify. Soon access to the web will reach every corner of the globe.

This expansion, in and of itself, is not necessarily a problem. The problem is, once the entire world is connected, where will we go to get away? We need connection, but we also need solitude and silence. Our happiness and success depend on it.

The Importance of Being Bored

Boredom can be scary. With nothing around to distract our brains, we are alone with our thoughts. For many of us, this is uncomfortable—and for good reason. The feeling of boredom can actually cause us physiological stress.

As Mark Hawkins writes in his book The Power of Boredom, studies found that levels of the stress hormone cortisol were much higher among participants who felt bored than among those experiencing other emotions. And “psychologist Robert Plutchik has linked boredom to a form of disgust, similar to what we might feel when we smell rotten food.” Much of our physiological response to boredom drives us to want to avoid it, and we actively look for distraction to do that.

Over the centuries, we’ve devised a stunning array of options to fill our idle time: communal storytelling, performances and plays, sports, music, art, literature, games, films, etc. The flight from boredom has created the basis of much of our cultural history. So, boredom is repulsive, like smelly rotten food, and the pursuit of entertainment produces wonderful cultural treasures. This feels like a clear justification to eradicate boredom. But, like most things in life, it’s never that simple.

Boredom opens up space for pause and introspection.

Despite our aversion to boredom, it turns out that a healthy amount of idle time is not only good for us, but makes us more creative and may be critical to our happiness and emotional growth.

Studies have shown that boredom can drive increased creativity as your mind moves into a “seeking state.” This free-flowing state allows the brain to traverse through seemingly unconnected thoughts, which can generate unforeseen connections and insights. This heightened ability includes our capacity for creative problem-solving. People who were pushed into a state of boredom prior to solving a given problem were not only able to find more creative solutions, but also a wider range of possible solutions. Given the magnitude and complexity of the problems society currently faces, the ability to devise creative solutions will only become more and more critical.

But that’s just the tip of the boredom iceberg. While increased creativity is a powerful side effect of idle time, it is not the most important. More important is the fact that boredom opens up space for pause and introspection. As Intel fellow Genevieve Bell put it, “Being bored is actually a moment when your brain gets to reset itself… Your consciousness gets to reset itself too.”

Hawkins echoes this sentiment:

Boredom is a special space in time that provides us with a bird’s eye view of life. The examination that boredom allows helps us steer our lives toward the best road possible.

Personal and, ultimately, societal growth come from individual introspection. Moments of introspection allow us to grapple with inner thoughts and process daily inputs. They create space to think critically about what we’ve seen, heard, and experienced, to form our own opinions about it, and to find those unexpected connections that help us see things in a different light. This process feeds our lifelong emotional development, helping us “steer our lives toward the best road possible.”

Without introspection, there is no space to question, consider, and form our own opinions. Without introspection, there is only space for reactionary responses and rote regurgitation of spoon-fed information. An increasingly divisive and deceptive world thrives when introspection and critical thinking are limited.

You can’t understand who you are and what you believe, let alone be able to understand someone else’s beliefs, if you don’t take time to think. We need to engage with our inner thoughts, but we can’t truly hear them unless we step into boredom. Embracing a healthy amount of idle time opens up deep opportunities to think, breathe and create connections.

We’ve always sought to escape boredom, but until recently, it was impossible to completely avoid it. For the majority of human history, much of our “in between” time was spent idle. Just thinking or talking or looking around. Today, internet-connected devices make it possible to fill every second of our time, and those activities—thinking, talking, looking—become more and more fleeting.

Sherry Turkle of MIT described this phenomenon in her book Reclaiming Conversation:

We say we turn to our phones when we’re “bored.” And often we find ourselves bored because we have become accustomed to a constant feed of connection, information, and entertainment. […] It all adds up to a flight from conversation—at least from conversation that is open-ended and spontaneous, conversations in which we play with ideas, in which we allow ourselves to be fully present and vulnerable.

When distraction is always a click away, it is our conversations, both inward and outward, that suffer most.

Disconnect to Reconnect

The internet is a large part of my life. I make a living designing digital products and teaching future product designers. I dedicate a lot of mental space to contemplating the impact of technology—both the good and the bad. There is so much positive about our web-enabled world, but the addictive nature of our devices has made it incredibly difficult for even the most resolute among us to truly pry ourselves away.

It’s easy to forget how quickly this has happened. I spent half my life internet-free and all but a quarter of it without a smartphone. Less than a decade ago, idle time was nearly impossible to avoid. Today, to have idle time—to reflect, to think, to breathe, to turn it off—requires a conscious choice. You either power down your devices or find a place the internet can’t reach. Fortunately, it is still possible to find those places, but they are fast disappearing.

The protection of our wild spaces represents one of the greatest public goods the U.S. has ever created.

Growing up, I spent a lot of time in the woods. As part of a family that prized the outdoors, we did everything from cabin camping to extended backpacking trips. At the time, I didn’t appreciate or understand what the wilderness represented. Maybe it was because everyday life was yet to be hyperconnected, so the woods didn’t feel all that different. But now that hyperconnection is the norm, the juxtaposition is stark.

The wilderness is a place of both deep solitude and deep connection. You are either alone with your thoughts or talking to the people you’re with. Those represent the full breadth of your options.

We desperately need those places. In an always-on world, with devices designed to pull so hard it’s difficult to break free, we need that forcing function. We need those moments where we mindlessly pull out our phone only to find no signal.

At the moment, despite our rapid advances, much of the wilderness is still that sanctuary. A place the internet can’t reach. Like a plane at 30,000 feet. The question is, how long will it stay that way?

Internet-Free Zones

In 1964, the United States Congress passed the Wilderness Act. The act created a legal definition of wilderness and now protects 110 million acres of land from human development. It defines wilderness as follows:

A wilderness, in contrast with those areas where man and his own works dominate the landscape, is hereby recognized as an area where the earth and its community of life are untrammeled by man, where man himself is a visitor who does not remain.

The protection of our wild spaces represents one of the greatest public goods the U.S. has ever created. A rare moment where we were able to understand there are things that supersede economic development and capitalist pursuits.

This wilderness preservation system provides areas across the country where people are given the opportunity to escape the modern world and step into a place of comparative solitude and silence—a last refuge for boredom and introspection.

In the 1960s, when the Wilderness Act was signed, the digital revolution was but a glimmer in the eye of just a handful of people, and only a few of them could have predicted where it would ultimately go. Today, the idea of a space “untrammeled by man” can no longer be defined as simply lacking physical development or resource exploitation; it must also include the absence of our expanding array of digital technologies.

A wilderness, in contrast to those areas dominated by man, should have no signal.

In 2017, there were 331 million visits to U.S. national parks, which is tied with 2016 for the most annual visits in history. People crave these spaces and the disconnection they provide. We’ve overplayed our hand in the war on boredom and the pendulum is starting to swing. There are technology-free summer camps for adults, devices to lock away your phones during events, and bars with built-in Faraday cages to block cell signals.

Introspection and conversation are not dependent on pristine landscapes alone; they are dependent on disconnection. We need to continue to protect our wild spaces from those elements of human creation that we can see, but also protect them from the elements we can’t see. A wilderness, in contrast to those areas dominated by man, should have no signal.

There are a number of ways this can be accomplished. It could be easements that require transmission towers to be certain distances away from designated areas. It could be no-fly zones for aerial transmitters or a requirement that those transmitters be programmed to cease transmission as they pass over specific areas. Or we could pursue large-scale signal jamming in designated zones.

We have legislative, historical, and cultural precedent for protecting and valuing lands and spaces that allow us to step away from the rush of modernity and stay the hand of human progress. We need these escapes and the introspective disconnection they provide. It’s time for us to consider expanding that precedent for the digital age by making the wilderness a place the internet can’t reach.

“Let’s Designate Internet-Free Zones” was originally published on Medium on November 28, 2018.

The veil of wonder that once gleamed around the internet has been lifted. Behind it, we’ve located the inconvenient truth about life online — it’s filled with fake news, trolling, cyberbullying, filter bubbles, echo chambers, and addictive technology. The honeymoon is over, as they say.

The ills of the web are the ills of society. They have existed, well, probably forever. Bullying, marginalization, violence, propaganda, misinformation — none of it is new. What is new is the scale and frequency enabled by the internet. The way the web works and, more importantly, the way we engage with it have taken these issues and turned them up to 11.

Our public debate takes each issue separately, attempting to understand the root cause, mechanics, and solutions. We tweak algorithms in order to pop the filter bubble. We build features and ban accounts to curtail fake news. We ban instigators and require the use of real names to snuff out bullying. What is this approach missing? These problems are not actually separate. They are all symptoms of a deeper psychological phenomenon. One that lives at the core of human interaction with the web.

The Anonymity Paradox

The internet lives in a paradox of anonymity. It is at once the most public place we’ve ever created and one of our most private experiences.

We engage in the digital commons through glowing, personal portals, shut off from the physical world around us. When we engage with our devices, our brain creates a psychological gap between the online world and the physical world. We shift into a state of perceived anonymity. Though our actions are visible to almost everyone online, in our primitive monkey brains, when we log in, we are all alone.

This isn’t anonymity in the sense of real names versus fake names. The names we use are irrelevant. This is about a mental detachment from physical reality. The design of our devices acts to transport us into an alternate universe. One where we are mentally, physically, and emotionally disengaged from the real-world impacts of our digital interactions.

Though our actions are visible to almost everyone online, in our primitive monkey brains, when we log in, we are alone.

This is the same psychological phenomenon that we experience when we drive a car. The car is a vortex where time and accountability disappear and social norms no longer apply. We routinely berate other drivers, yelling at them in ways most of us never would if we found ourselves face-to-face. Speeding along with a sense of invincibility and little concern for any repercussions, we sing and dance and pick our noses as if no one can see us through the transparent glass. We talk to ourselves out loud, like crazy people, reliving (and winning) past arguments. Time bends and we lose track of how long we’ve been driving. Sometimes we get to where we’re going and don’t remember how we got there.

In this bubble of anonymity, the real world is Schrödinger’s cat, both existing and not existing at the same time. This paradox is why we flush with embarrassment when we suddenly become aware of another driver watching us dance. Or why road rage stories that end in tragedy are so unnerving to hear. It’s the real world popping our bubble. We’ve killed the cat and now there are consequences.

This is our life on the web. Every day we repeatedly drop in and out of an unconscious bubble of anonymity, being in the world and out of it at the same time. Our brains function differently in the bubble. The line between public and private becomes less distinguishable than we would like to admit, or maybe even realize. It is this paradox that drives the scale of the problems plaguing our beautiful internet.

Cyberbullying, Trolls, and Toxic Communities

Just like road rage, our digital bubble gives us the psychological freedom to unleash our innermost feelings. From the safety of our basement, desk, or smartphone screen our brains step into a space of perceived impunity, where repercussions are distant and fuzzy at best.

It doesn’t even matter where we physically are. Interacting with a digital device requires attentive processing. Your brain must be almost fully engaged. Mentally, it pulls you completely out of your current environment. If you’ve ever tried to converse with a person who is checking their phone, you know they’re all but gone until they look up. Like blinders on a horse, the physical world disappears and all our brain sees is the screen in front of us.

In this bubble, there are no social cues. No facial expressions, body language, or conversational nuance. The people we interact with are all but faceless. Even if we know them, the emotional gap created by the screen means our brain doesn’t have to consider the impact of our actions. In a face-to-face interaction, we have to assume the burden of the immediate emotional response of the other person. Online, our fellow users are temporarily relieved of their personhood, in the same way that our fellow drivers relinquish their personhood the moment we get behind the wheel. They become just another thing in the way of us getting from A to B.

As Robert Putnam described in his best-selling book Bowling Alone, “Good socialization is a prerequisite for life online, not an effect of it: without a real world counterpart, internet contact gets ranty, dishonest, and weird.”

In some ways, our online experiences mimic those of drone fighter pilots. Sitting in windowless rooms staring at digital landscapes half a world away, drone pilots experience a war zone that both exists and doesn’t exist at the same time. This creates a bubble of anonymity between pilot and target.

To quote a piece from the New York Times:

The infrared sensors and high-resolution cameras affixed to drones made it possible to pick up… details from an office in Virginia. But… identifying who was in the cross hairs of a potential drone strike wasn’t always straightforward… The figures on-screen often looked less like people than like faceless gray blobs.

When our brain shifts into the bubble, it creates an artificial divide between ourselves and the people we interact with. They are text on screen, not flesh and blood. On top of that, because of the voyeuristic nature of the web, every interaction happens in front of an entire cast of individuals whom we never see, and that we may never know were there. We are increasingly living our lives through a parade of interactions with faceless gray blobs.

It’s easy to remove the human from the blob. This gives us permission to do and say all kinds of things online that we wouldn’t in real life. This same emotional gap is why it’s easier to break up with someone via text message than in a face-to-face conversation. Technology creates a psychological buffer. However, the buffer is only temporary. At some point, we come back to reality.

Drone pilots spend 12-hour shifts in a bubble of anonymous war. When their shift is over, they come home to their families and are forced to engage in the “normal” activities of the real world. This is in contrast to combat soldiers who live in a war zone and adjust their entire reality accordingly. Drone pilots are anonymous participants in a war that exists and doesn’t exist at the same time.

While most of us aren’t logging on to kill people, we are living similarly parallel lives. Dropping in and out of anonymity, engaging in interactions in an alternate universe. Interactions which, sometimes, even our closest loved ones are unaware of. Some of us make this switch hundreds of times a day.

But what about those of us who aren’t engaging? Most of us aren’t bullying or being bullied. What if we’re logging in just to watch?

For drone pilots, even watching a war anonymously from a distance has significant impacts. An NPR piece about reconnaissance drone pilots quotes military surgeon Lt. Col. Cameron Thurman on the emotional burden:

“You don’t need a fancy study to tell you that watching someone beheaded … or tortured to death, is gonna have an impact on you as a human being. Everybody understands that. What was not widely understood is the level of exposure that [pilots have] to that type of incident. We see it all.”

Even if we aren’t the ones being bullied or doing the bullying, we are all seeing it. Every day. Verbal abuse, violence on video, self-righteous shaming, condescension, belittlement, jealousy, posturing, and comparison. Our experience of the internet often feels private, but it is all happening on the world stage. Unlike road rage, which is usually contained to our little pod on four wheels, web rage is flung out into the universe, where the rest of us are forced to watch it all unfold from our own bubble. Processing it across a weird chasm of pixels and fiber optics. Anonymous observers in a world where the names are made up, but the problems are real. I’d say we’re only just beginning to understand the psychological impacts of this.

Technology Addiction

A lot has been written about our addiction to technology, especially through the lens of the habit-forming design of things like social media.

Psychologists break the formation of habits into three distinct components — a trigger, an action, and a reward. Something triggers (or reminds) you to take an action. You take the action. You get a reward. This habit cycle drives a surprising amount of our everyday behavior.

When we talk about the addictive nature of the web, we pay particular attention to the design of specific features within applications that deliver “hits of dopamine” (a neurotransmitter central to the brain’s reward system). These features include likes, hearts, shares, comments, and retweets, as well as feeds that constantly refresh, delivering little bits of new information at unpredictable intervals. Where this focus falls short is that it deals almost exclusively with the action and reward portions of the cycle. The action is checking your stats or refreshing your feed. The reward is new likes on your posts or new posts in your stream. But what about the trigger? What is initiating the cycle? You might say it’s notifications, but we check the web constantly with or without notifications. It is deeper than that.
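
The trigger–action–reward cycle can be sketched as a toy simulation (purely illustrative; the 30 percent “hit rate” is an arbitrary assumption, not a measured figure). The point is that the reward arrives unpredictably, which is exactly the variable-reward schedule that keeps us checking:

```python
import random

def check_feed(rng: random.Random) -> int:
    """Action: refresh the feed. Reward: new items, unpredictably.
    Assumed ~30% of checks yield anything at all."""
    return rng.randint(1, 5) if rng.random() < 0.3 else 0

rng = random.Random(42)  # seeded for reproducibility
checks, rewarded = 1000, 0
for _ in range(checks):
    # Trigger: boredom, an awkward moment, the urge to escape.
    if check_feed(rng) > 0:  # Action: we check anyway.
        rewarded += 1        # Reward: sometimes, something new.

print(f"{rewarded} of {checks} checks paid off")
```

Most checks pay off with nothing, yet the loop keeps running; intermittent reinforcement is more habit-forming than a predictable payout would be.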

Our desire for escape is the trigger that drives our incessant checking of the web.

The bubble of anonymity provides something fundamental for people. It provides escape. It pulls you out of whatever real-world situation you are in and lets you forget about your life for a moment. Have you ever been relieved to just get in the car and drive? Our desire for escape is the trigger that drives our incessant checking of the web. Every time we want to get away, our new action is logging in. Whether we’re escaping from boredom, an awkward social situation, or the responsibilities of life, our digital devices give us an ever-present “out.” A portal to temporary anonymity, albeit only perceived.

This ability to temporarily “disappear” not only represents the trigger in our cycle, it is also our reward. Our addiction is less about the mini dopamine hits we get from social validation metrics and more about the escape. The dopamine hit from likes and new posts is just the final icing on the cake, reminding us that escape is always the right choice.

In online culture, the “1 percent rule” is a framework for thinking about activity in online communities. It breaks users into three stratifications based on activity: creators, commenters, and lurkers. The idea is that 1 percent of people are creators. They drive the creation of all the new content in the community. Nine percent are commenters who actively engage with a creator’s content — liking, commenting, etc. The other 90 percent are lurkers who watch from the background.

Whether these percentages are completely accurate doesn’t matter. What matters is the idea that the majority are not creating content or even actively engaging with content in online communities. This means that our addiction to these services cannot be driven solely by the dopamine hits created by social metrics. Most people are not using them. It has to be deeper than that. We’re addicted to the escape. We’re addicted to our perceived anonymity.

Fake News, Filter Bubbles, and Echo Chambers

Our conversations are becoming more divisive, our views more polarized. The 2016 election in the U.S. brought this into sharp relief. For many, the blame for this divide lies with the algorithms that serve us content.

In more and more web platforms, including almost all major social media services, content is served by algorithms. Fundamentally, this means a computer calculates which posts you’re most likely to engage with and shows you those, while hiding posts it thinks you won’t like. The goal is to deliver the best content, personalized for you.

The problem is that these algorithms are backward-looking. They calculate based on what you’ve done in the past: “Because you read this, you might also like this.” In algorithm world, past behavior determines future behavior. This means that algorithmically driven services are less likely to show you information that opposes your existing views. You probably didn’t engage with it in the past, so why would you in the future? So, your feed becomes an echo chamber, where everything you see supports what you already believe.
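To make the mechanics concrete, here is a toy sketch of a backward-looking ranker (the data, topics, and function are hypothetical; real ranking systems weigh thousands of signals, but the underlying logic is the same): each post is scored by how often you engaged with its topic in the past, so posts from topics you never touched sink out of the feed.

```python
# Toy feed ranker (illustrative only): scores each post by how often
# the user engaged with its topic before, then keeps the top posts.
# Past behavior determines future visibility.

from collections import Counter

def rank_feed(posts, engagement_history, feed_size=3):
    """posts: list of (title, topic) tuples.
    engagement_history: topics the user previously clicked or liked."""
    topic_counts = Counter(engagement_history)  # a topic never engaged scores 0
    scored = sorted(posts, key=lambda p: topic_counts[p[1]], reverse=True)
    return [title for title, topic in scored[:feed_size]]

history = ["politics_left", "politics_left", "cooking", "politics_left"]
posts = [
    ("Why our side is right", "politics_left"),
    ("A recipe you'll love", "cooking"),
    ("The other side's case", "politics_right"),  # never engaged, so buried
    ("More of what you believe", "politics_left"),
]

print(rank_feed(posts, history))
```

The opposing-view post never makes the cut, not because anyone suppressed it, but because the score is built entirely from the past.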

Algorithms feed one of our most primitive psychological needs. We are hardwired to seek out information that confirms our beliefs. This is known as confirmation bias.

From Psychology Today:

Confirmation bias occurs from the direct influence of desire on beliefs. When people would like a certain idea/concept to be true, they end up believing it to be true. They are motivated by wishful thinking. This error leads the individual to stop gathering information when the evidence gathered so far confirms the views (prejudices) one would like to be true.

We want our beliefs to be true. It can be hard, painful work to let go of a belief. This is why fake news is like jet fuel for content algorithms. It tells us exactly what we want to hear. If a service put opposing views in our face all the time, it could be emotionally painful. We might not come back to that service. From a business perspective, it makes sense to show us what we like.

The prevailing wisdom is that this constant reinforcing of our worldview kills open-mindedness, hardening our beliefs to a point where we are no longer able to find common ground with anyone who opposes them. As the repercussions of our online echo chambers become increasingly evident, there are calls to change the way we surface content in order to show more diverse perspectives. The idea is that a more diverse feed means a more open-minded worldview. The question is, would this work?

Fake news is like jet fuel for content algorithms. It tells us exactly what we want to hear.

In 2015, Facebook published a study suggesting that it is actually users who cause their own filter bubbles, not the Facebook algorithm. That we are the ones actively choosing to ignore or hide opposing views. At first blush, it’s easy to pass this off as a clear conflict of interest. Of course Facebook would say it’s us and not the algorithm. But it may not be so clear-cut.

We engage online in a bubble of psychological anonymity. Our reward is escape. If we are already hardwired to seek out information that supports our beliefs, and it is painful to be exposed to information that opposes them, of course we would do our own filtering.

The internet is a fire hose. It can be so overwhelming that sometimes we literally go numb. It is information hypersensitization. It is more than our brain can deal with. We’re here to escape, not to feel overwhelmed. So, we start turning off as much of the noise as possible. We reject anything that makes us feel uncomfortable.

Luckily for us, the internet is the perfect machine for supporting our existing beliefs. Communities of like-minded people are just a Google search away, no matter how niche our interests. Our bubble of anonymity frees our brain from any social pressures stopping us from indulging our innermost desires, no matter how subversive or extreme. On top of that, services have given us all the tools we need to sanitize our feeds. We can block, mute, flag, and unfollow. Combine all of it with an algorithm predisposed to reinforce our worldview and you have a perfect storm for polarization and radicalization.

Additionally, the way we process interactions online is different than the way we process them offline. A recent study found that Twitter users who were exposed to opposing views on the service actually became more rooted in their beliefs. This flies directly in the face of the prevailing wisdom about exposure to diverse views driving open-mindedness.

The internet is the perfect machine for supporting our existing beliefs.

While the study results may be true, the question is: Do they represent a natural human state? We operate online in a psychological bubble of anonymity. That bubble does not exist in the outside world. In the physical world, exposure to diverse views and experiences happens with real people. In those cases, our brain is operating in a completely different mode.

When we’re online, as far as our brain is concerned, we aren’t engaging with real people. Like another driver catching you picking your nose, coming into contact with opposing views online pops our bubble of anonymity. It is a real-world intrusion into our alternative universe by some faceless gray blob. The psychological response is different. It is much more fight or flight than listen and consider.

The internet has become a ubiquitous presence in our lives. Its creation has shifted so much about our existence. Today, our paradigm for interacting with the web creates a psychological gap between the digital and physical worlds, dramatically altering the way we relate to each other and the way we relate to technology itself. How can we design the next phase of our technology so that it enhances our life in the world, as opposed to pulling us out of it?

Soon we will reach a technological inflection point, where we will spend more of our time engaged with the digital world than not. The outsize influence of this alternate universe we are building makes it incumbent upon us to think critically and openly about its impact on society.

Technology is not something that happens to us, it is something we choose to create. If we are intentional and transparent, we can learn from where we have been and work toward a technology future that brings us together, not one that drives us apart.

“A Unified Theory of Everything Wrong with the Internet” was originally published in Medium on September 17, 2018.

I recently read Woodrow Hartzog’s piece on facial recognition technology. The premise of the piece — that facial recognition is the perfect tool for oppression, and as such should be banned from further development — put a fine point on a question I’ve been pondering for a while:

Are all technological advances actually progress?

This doesn’t seem to be a question we ask.

We pray hard at the altar of technological optimism. Tapping away at our touch screens through rose-gold-colored glasses. Rarely do we step back and ask ourselves, ‘Is this really good for us?’ — at least not until long after the fact.

It can be hard to predict what will happen with new technology, but I’m in line with Hartzog that facial recognition feels like a technology worth questioning in its entirety. The dystopian storyline of oppression and persecution is just too obvious and too likely.

To be fair, there is a conversation happening about facial recognition, including some surprising calls for regulation from major companies developing the technology, like Microsoft. But the idea of regulation is about as far as we ever go, and by the time we get there the genie is so far out of the bottle that any legislation often becomes more of a symbolic victory than any real form of control.

We just move technology forward as fast as we can, call it progress, and then do our best (or not) to clean up the mess left in its wake. See Mark Zuckerberg’s congressional testimony for our most recent example.

Would we ever consider stopping the development of a new technology before we open Pandora’s box?

Technology drives itself forward with the same brutal mentality of colonizing explorers — if the land is there, it must be conquered.

At prestigious universities and companies across the country, rooms of twenty-something engineers practice the 21st-century version of Manifest Destiny, striving to conquer any technical challenge they find in front of them. Insulated by privilege and sorely lacking in diversity, these institutions arguably give little introspective thought to the possible downsides of their work.

In the tech world, the development of facial recognition, along with so many other advances, is viewed as a foregone conclusion. ‘The technical capability is there, so we are going to develop it.’

This isn’t necessarily an attempt to be nefarious or destructive. Often, it’s with good intentions. Unfortunately, as the saying goes, the road to hell is paved with good intentions.

Video manipulation technology is another great example. Developed at Stanford, it allows anyone to modify a video so that the face of the person in it does whatever the user wants it to do. It works with any webcam, and the results are indistinguishable from reality.


Given what we’ve already seen with fake news and the ongoing erosion of truth, the negative implications of this type of technology are so obvious and terrible that we probably should have corked the bottle and buried it back in some forgotten cave somewhere. But we didn’t.

We had the technical capability to make it work, so we had to prove we could, right?

What if we chose not to prove it? Is there a point where we develop the fortitude to stop asking ourselves ‘could we’, and start asking ourselves ‘should we’?


Nothing New Under the Sun

The relentless march of technology has been one of humanity’s strongest historical through lines. And, throughout history, our response, if we have one at all, has been reactive. Of late, our go-to is regulation.

Even the most primitive technologies carried significant unintended consequences. In her book The Sixth Extinction, Elizabeth Kolbert lays out a strong case that small bands of early humans were able to hunt large mammals, like mastodons, to the point of extinction. This was not intentional overhunting; it was the outcome of our technological capabilities, like spears, that allowed us to unwittingly kill mastodons at a rate that outstripped their ability to reproduce, leading their species to collapse.

We’ve been struggling with the impact of our technology pretty much since day one.

Obviously the tricky part here is that technological progress is a double-edged sword. We literally wouldn’t be where we are today without it, and we won’t get to the future we all hope for if we stop. The problem is that the magnitude of the risks continues to escalate, but we refuse to change our approach.

The things we are developing now are more powerful and more distributed than ever before. When technology is accessible to everyone, reactionary responses, like regulation, become all but irrelevant. We can barely control the proliferation of nuclear weapons technology, and the barrier to entry there is about as high as it gets. What chance do we have of regulating something like facial recognition, which is open source and can be implemented by anyone?

Companies have gotten so good at marketing us the benefits of new technologies that there is no room for any critical thought about possible negative impacts. If the average person thinks about facial recognition at all, they most likely think about it as a way to unlock their iPhone. They have no view of the bigger picture, and no idea what’s about to happen. Quite often this is by design.

Driving the adoption of new technology is all about conditioning. People are resistant to change. You can’t go too far too fast. You need to ease people into it. You start with something innocuous and useful, that plays off an existing behavior, like unlocking your phone, or sharing information with a small group of trusted friends.

To quote Mark Zuckerberg from 2005:

“I think that some decisions that we made early on to localize [Facebook] and keep it separate for each college on the network kept it really useful, because people could only see people from their local college and friends outside. That made it so people were comfortable sharing information that they probably wouldn’t otherwise.”

Social media style sharing was not a thing when Facebook started. The idea was foreign and scary. We had to be eased into it. But, once those initial activities become commonplace behavior, the gates are open for companies to push the boundaries and upend social norms. Again, not necessarily nefarious, it’s the process of adoption.

However, as consumers we continually allow ourselves to be sold a bill of goods without understanding the real price we’re about to pay, buying, hook, line, and sinker, into the idea that the primacy of technology makes any possible risks acceptable, or even irrelevant. ‘You’re saying I don’t have to type numbers into my phone to unlock it anymore? I just look at it?! Say no more, sir!’

Because of all of this, our conversations about the downsides of technology always happen postmortem, and the debate focuses on how we bend society in order to live with our new reality, as opposed to how we bend our reality to create the society we want. As if technology is just some thing that happens to us, beyond our control.

Does there come a tipping point where the conversation changes? Could we ever choose to actively turn away from technological opportunities based on the inherent risks? Or will we just continue to ‘move fast and break things’, hoping for forgiveness later?

Alfred Nobel amassed great fame and fortune in his life, largely from the creation of dynamite and a number of other explosives. His work drove fantastic advancements in civil engineering, but also military arms, resulting in the deaths of untold numbers of people.

When Nobel’s brother died, a French newspaper mistakenly thought Alfred had died. They printed a front-page headline that read “The Merchant of Death is Dead” and continued, “Dr. Alfred Nobel, who became rich by finding ways to kill more people faster than ever before, died yesterday.”

The paper’s mistake forced Nobel to reckon with his legacy and the legacy of his creations, which ultimately drove him to establish the Nobel Prize (first awarded in 1901) in an attempt to rectify his past and repair his reputation.

Today, the list of tech billionaires with large philanthropic pursuits continues to grow.

Similarly, after signing the letter that helped launch the development of the atomic bomb, Albert Einstein voiced deep regret for his participation:

“The release of atomic power has changed everything except our way of thinking…the solution to this problem lies in the heart of mankind. If only I had known, I should have become a watchmaker.”

What started with optimism and hope for peace, ended with the realization that the end did not justify the means. But at that point it was too late.

More recently, Sean Parker, Facebook’s founding president, lamented the addictive design of social media, which he admits was intentional, calling himself a ‘social media conscientious objector’ and saying, “God only knows what it’s doing to our children’s brains.”

Not a lot has changed in the last 117 years.

But if we don’t, someone else will.

This is an argument that serves to maintain the status quo of technological manifest destiny. ‘It is inevitable, so it might as well be us.’

Even Einstein fell prey to it:

“I made one great mistake in my life-when I signed the letter to President Roosevelt recommending that atom bombs be made. But, there was some justification-the danger that the Germans would make them.”

Our global belief in the primacy and inevitability of technology makes this a valid argument and a legitimate concern. Someone else is probably going to do it. The question is, can we continue with this mentality unchecked, or will we eventually pay the price for it?

What is the thing that might tip the scale and force humanity to truly grapple with the hard questions? Is it facial recognition? Artificial intelligence? Genetic engineering?

Being able to have real, transparent debates about the risks and rewards of our technological pursuits has to be the next step in our growth as a species. We are at a point now where the power and scale of our capabilities can easily end us. Either through literal annihilation or the complete subversion of our societal structures.

With great power comes great responsibility — someone once said.

Our ability to create is the greatest power we’ve been given, but we handle it like hormonal teenagers — overconfident, naive, and oblivious to consequences. Flush with smarts, but devoid of wisdom.

If we want to make it to the next phase of our existence we need to grow up. Not to stifle our progress, but to actually enable it.

We need to change our culture of technology to be one that is proactive and open to considering the downsides as much as the upsides, and we need to be willing to walk away when we determine that the risks outweigh the rewards.

This is not beyond our control. We have the power to change our course. Like the Google employees who refused to work on an AI contract for the Defense Department, we can drive conversations and make different choices.

Technology is not a thing that happens to us. Technology is a thing we choose to create.

If used wisely, technology can enable us to become the humans we desire to be. But, if we continue to allow ourselves to be blown by the winds of technological manifest destiny, we are going to find ourselves in a mess we won’t be able to clean up.

“Life, Liberty and the Pursuit of Technology” was originally published in Medium on August 27, 2018.

“Design can change the world.”

When I was in design school, this statement filled me with incredible energy and pride. I felt it in my core. How could I not? Over the last few decades, design — and design thinking — has ascended to the point of being routinely viewed as one of the differentiators for companies and products.

Behind this ascension lies design’s anointed operating system: human-centered design.

The fundamental idea behind human-centered design is that, to find the best solution, designers need to develop an empathetic understanding of the people they are designing for.

Designers do this through user interviews, contextual observations (watching users go about their business in their “normal” life), and a number of other tools that help designers put themselves in users’ shoes. Once you can paint an empathetic picture of a user’s needs, the next step in the process is to identify a few key insights and use those to create a solution.

One famous example is the development of the Swiffer mop. Designers, tasked with improving the process of housecleaning, observed customers cleaning their homes. A key insight was that time was critical. Cleaning often cut into time for other activities, and any time savings would be a boon. Mopping was identified as an especially time-consuming part of cleaning, with multiple steps and multiple pieces of equipment, not to mention waiting for the floor to dry. So designers created a “dry mop” (the Swiffer) that simplified the process and saved time. It was a huge commercial success.

Straightforward enough.

And the process works. Countless products and services that drive our daily lives were either born from this process or dramatically improved by it. Smartphones and many of their apps, social media services like Instagram and Twitter. The darlings of the sharing economy — Uber, Lyft, and Airbnb. Not to mention a litany of physical products.

The way the world works and the way we work in it are fundamentally different today than they were even a decade ago. In large part, this is due to the process of human-centered design.

So, we as designers puff out our chests and carry our heads high knowing that we have the power to change the world.

But, if you step back for a moment, you start to see a problem: We’ve been designing the world, real hard, for decades now and we haven’t made a dent in a single real problem.

What do I mean by “real problem”?

I mean real problems. The big ones. The kind that shake us to the core of our humanity and threaten our long-term viability.

Hunger. Climate change. Poverty. Income inequality. Illiteracy. Bigotry. Discrimination. Environmental degradation. The list goes on.

Right now, there are people in the richest country on Earth who are starving. People who can’t access or afford health care. People who are homeless. That’s the richest country.

Right now, our oceans are choking to death from plastics. Our atmosphere is choking to death from CO2, and we have effectively lost 50 percent of the Earth’s biodiversity.

Guess what: Design hasn’t fixed any of it.

Not even the slightest bit.

And, unfortunately, design won’t fix any of it, because our operating system won’t allow it.

The Problem with Human-Centered Design

Big problems, those that threaten our existence or the stability of our society, are systemic. They course through the veins of the entire system. Their causes are widespread and varied, and the people involved represent almost every segment of society.

These kinds of problems are multifaceted. They do not have a silver bullet. There is no “aha” insight hiding out there that will suddenly help us solve the problem and see the light.

Instead, solving these kinds of systemic problems is like trying to contain a wildfire. While you’re working to fight one side of it, the other side has just burned another 50 square miles. You can’t hope to make progress by chipping away at one piece of the problem while ignoring the others.

Eventually, like a wildfire, you try to mitigate as much damage as possible until the weather shifts and a rainstorm comes along, providing a truly systemic solution. A solution that addresses the problem from all sides.

Human-centered design is not architected to solve systemic problems. In fact, human-centered design is architected to solve the exact opposite type of problem.

Human-centered design is all about focus. It’s about observing the big picture and then zeroing in on a manageable set of insights and variables, and solving for those. By definition, this means the process pushes the designer to actively ignore many of a problem’s facets. And this kind of myopic focus doesn’t work when you’re trying to solve something systemic.

A recent study on ride-sharing apps, a category of companies heavy on user-centered design, found that ride sharing adds 2.6 vehicle miles to city traffic for every one mile of personal driving removed. Ride-sharing apps actually make traffic in cities worse.

Ride-sharing companies, like Lyft, were predicated on the idea that they could put a dent in the problem of human transportation by solving for traffic congestion, and they used human-centered design approaches to do it. How could they have gone wrong?

It’s obvious. Human transportation is not a focused problem, it is a significant systemic issue. Through a human-centered design process, ride-sharing apps landed on the insight that getting a cab, or finding a ride, was inefficient in many cities. They focused on this insight and then, as their process is designed to do, shut out the other facets of the problem.

They concluded: “If we can make getting a ride more efficient, fewer people will drive their own cars, reducing traffic.”

This is the kind of simplified, guiding statement human-centered design produces.

And guess what? Uber and Lyft succeeded in making it easier to get a ride. Human-centered design works for a consumer-facing problem like that. In the process, however, they overlooked other aspects of the transportation ecosystem.

For example, as the study found, many people use non-automobile transportation, like bikes, buses, and trains, specifically because they don’t have a car (and getting a ride is a pain). Once ride-sharing apps made it easier to get a car, people who’d previously used public transportation began to opt for car-based travel. Human-centered design’s myopic focus kept this non-auto population obscured from view during the design process. This is an example of just one of the problem facets left out of the solution.

A user-centered approach is great for figuring out how to make the experience better for Airbnb customers, or how to change the way people mop. But it cannot contain a systemic problem like human transportation. When faced with a big, hairy, multifaceted problem, our focused, iterative operating system is abysmally inadequate. Human-centered design can barely handle damage control.

And so we inch our way forward. Chipping away at one side while the other burns out of control.

What Do We Need Instead?

I’m not saying we need to abolish human-centered design. It works for what it’s designed for. We have way better mops now (among many other things), and that’s wonderful. But, we need to understand the limits of our tools and begin to think about new ones. Tools that can help us grok the breadth and complexity of really big problems — and start to solve for them systemically.

Some in the design field are working on moving human-centered design forward. IDEO, one of the progenitors of human-centered design, is pushing a new concept: Circular Design. The idea behind Circular Design is to start thinking about designed objects through the lens of a “circular economy.” No longer driven by a create-and-dispose mentality, but a create-and-reuse mentality. It’s a rebrand of the cradle-to-cradle concept, focused on sustainability.

While this is an important step forward, it falls short of the systemic design thinking we need. Like the myopic aspect of human-centered design, Circular Design still drives toward focused design insights from which to create solutions. The difference? It asks the designer to consider the full life cycle of a solution and its long-term impact. Again, this is an indisputably important shift in the culture of design, but will it truly solve big problems?

If I design for the full life cycle of my reusable water bottle, I may have a more sustainable water bottle, but I have not created a systemic solution for our plastics problem. I have not changed the economic incentives driving plastic culture. I have not solved for the distribution and financial issues that make single-use bottled water more accessible. I have not solved for the public health issues that make single-use bottled water significantly safer in many areas. And I have not solved for all the other applications of single-use plastics.

I’m back to damage control. And the fire keeps getting bigger.

How Can We Break the Mold?

If we extend the wildfire analogy, perhaps we can create a design framework that allows us to more rapidly innovate in small ways across all facets of a problem, instead of trying to focus on a select few. Like a rainstorm, lots of tiny drops — delivered in a coordinated fashion — can extinguish a very large fire.

Or maybe it’s about getting rid of our culture of competition and creating a new culture of collaboration. If we start ignoring the corporate and political silos separating us, we can collaboratively combine lots of focused solutions, allowing us to knit them together into a single tapestry that truly covers an entire problem. There are lots of solutions out there. We just don’t have a thread pulling them together.

Or maybe it’s about upending the economic incentives that drive design. Human-centered design was created to serve our current economic system. There’s money in creating a better mop. There isn’t money in solving homelessness. In order to thrive economically we needed to consistently design better mops, so we built a framework to do it.

If we had the right incentives, how quickly could we develop a framework for systemic design thinking?

Design can change the world. But the way we’re going about it right now isn’t cutting it. If we want to design our way out of the big issues, we need to take a critical look at our approach. We need to upgrade our innovation operating system.

“Design Won’t Save the World” was originally published in Medium on August 1, 2018.

Motion does not always translate to movement. Think about a car stuck in the mud. Press the gas and the engine revs, the wheels spin. There is motion, but the car is going nowhere. It might feel like you are making progress, but all you are really doing is digging a deeper rut. To get the car moving you need to get out and push.

Day-to-day it can be easy to get stuck spinning your wheels in the mud on things that feel urgent, but are ultimately not important. Effective leaders are able to push things forward by quickly determining which tasks are important and which should be ignored.

Important things create real forward movement by facilitating decision-making and keeping a team focused on their goals.

In any job, it’s all but impossible to stop things from coming up. So being able to make a quick assessment of new tasks is a skill worth practicing.

My process starts with answering a few questions to help determine if something is worth my time. Answering affirmatively to any of these is a good indicator that it deserves some focus. (Note: these questions assume the activity is in line with the goals of the team/org. If not, it’s most likely a non-starter.)

  1. Will this allow me, or the team, to finish something or make a final decision on something?
  2. Is this critical to inform a future decision I need to make?
  3. Will this remove a roadblock for me or the team?
  4. Will this result in clear action items for me or the team?
  5. Could this provide some critical insight or data point?
  6. Will this help communicate something critical to a key stakeholder?
  7. Will this truly help us do great work?
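
As a rough illustration, the checklist above behaves like a simple filter (the shortened question wording and yes/no answers here are my own simplification; in practice each question takes judgment): any affirmative answer earns the task some focus, and a misaligned task is a non-starter regardless.

```python
# Illustrative only: the checklist as a yes/no filter.
# Real answers take judgment, not booleans.

QUESTIONS = [
    "finish something or make a final decision",
    "inform a future decision",
    "remove a roadblock",
    "produce clear action items",
    "provide critical insight or data",
    "communicate something critical to a stakeholder",
    "truly help us do great work",
]

def worth_my_time(answers, aligned_with_goals=True):
    """answers maps a question to True/False. Any affirmative answer
    suggests the task deserves focus, but a task misaligned with
    team goals is a non-starter."""
    if not aligned_with_goals:
        return False
    return any(answers.get(q, False) for q in QUESTIONS)

# Zeroing an inbox: aligned, but no question comes back "yes".
print(worth_my_time({q: False for q in QUESTIONS}))   # False
# Unblocking a teammate: a single "yes" is enough.
print(worth_my_time({"remove a roadblock": True}))    # True
```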


While not perfect, this heuristic can be a quick filter when prioritizing tasks. It has also helped me identify some habits that I’ve come to realize were more about motion than real movement.

I used to feel the need to “zero” my email inbox. I’d tell myself that if I had a “clean slate” my mind would be clear for the other things I had to do. But, after running through the questions, having a clean inbox didn’t stand up to the “is this important” test. In fact, for me, checking email in general turned out to be a mostly unimportant time suck. These days my unread email count is at an all-time high, but so is my ability to focus on critical things.

Another thing that came up for me was around design feedback and iterations. It can be easy to get in deep as a team and nitpick design details, because details matter (see question #7), but there comes a point of diminishing returns. Eventually you’re just wasting design cycles, creating motion for yourself and a roadblock for everyone else. Sooner or later you just need to call it good and ship it.

Being a leader does not require you to be a manager or have a lofty title. You can lead from anywhere. The key is that you work hard at creating real movement for your team by understanding your goals and focusing on the things that truly matter.

Before you jump into anything, ask yourself: is this creating movement or motion?

“Leaders Create Movement not Motion” was originally published on Medium on January 10, 2016.

What is your design philosophy? — I was asked this question by a candidate during a recent job interview. Oddly, it was the first time I’d ever been asked that question. As I fumbled through an answer I realized I didn’t really have an articulated design philosophy, or at least not one that easily came to mind. So I decided to remedy that.

1: There is art in design, but design is not art

There is a practiced art to creating great design, but the final output of the design process is not art. Art is creative expression intended to provoke questions and individual interpretation. Art is inspiring, emotional, and important, but it does not fill a specific need beyond humanity’s desire to express itself. Design, on the other hand, is a creative process intended to solve a problem, to fill a need for the people who will ultimately interact with it. Design should not be open to interpretation; instead, it should define how it is to be engaged with and should guide a user at each stage of that engagement. Art creates questions, design creates answers.

2: Design must be rooted in reality

As Dieter Rams says, “Indifference towards people and the reality in which they live is actually the one and only cardinal sin in design”. Empathy is the conduit to great design and the critical skill for great designers. Without a deep understanding of the end user and the reality in which a design will be used, any decision a designer makes is a shot in the dark. To fill a real need, design must be rooted in reality.

3: Design is never perfect

Design is about creating elegant solutions to address user needs. The tricky thing is that most often we are designing for humans, and humans are complicated. People’s expectations and desires evolve over time. Sometimes design evolves to meet these changes; sometimes design is the driver of the change. Regardless, a designer’s work is never done. This does not mean that design needs to be trendy — design can be timeless — but a great designer has a bent toward iteration and always has their ear to the ground.

4: Design is a set of tools, not a standardized process

Every problem presents its own unique set of characteristics; as such, there is no one-size-fits-all process for coming to the best solution. The art of design is about having a diverse set of tools and approaches, and determining when to apply each. To quote Maslow, “…it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail” — so always carry a hammer, a screwdriver, a pair of pliers, and a hex wrench.

5: Design communicates obvious function

For something to be “well designed” it could be simple, or it could be complex. It could be considered aesthetically pleasing, or it could be considered gaudy. Aesthetics and simplicity are not requirements. For something to be well designed, the key requirement is that its function must be obvious. A person should be able to easily determine how to use and interact with it.

6: Design should delight

A design should create moments of delight for the people who encounter it. There is no steadfast rule as to what is delightful. Delight can come in different forms for different people — this is where empathy comes in — but most likely it is a mix of form, function, and value that creates that often intangible emotional connection to a well-designed thing.

That’s my first attempt at articulating a design philosophy. I’d love to hear how you’d answer the question — What is your design philosophy?

“What is Your Design Philosophy?” was originally published on Medium on December 21, 2015.