There are a few ongoing debates in the world of digital design. Things like “Should designers code?”, “What’s the value of design?”, “UX versus UI,” and, perhaps most fundamentally, “Is everyone a designer?” To get a taste for the flavor of that last one, you can step into a Twitter thread from a little while back (TLDR: It didn’t go super well for anyone).

To be clear at the outset, I don’t care if everyone is a designer. However, I’ve been considering this debate for a while and I think there is something interesting here that’s worth further inspection: Why is design a lightning rod for this kind of debate? This doesn’t happen with other disciplines (at least not to the extent it does with design). Few people are walking around asserting that everyone is an engineer, or a marketer, or an accountant, or a product manager. I think the reason sits deep within our societal value system.

Design, as a term, is amorphous. Technically you can design anything from an argument to an economic system and everything in between, and you can do it with any process you see fit. We apply the idea of design to so many things that, professionally, it’s basically a meaningless term without the addition of some modifier: experience design, industrial design, interior design, architectural design, graphic design, fashion design, systems design, and so on. Each is its own discipline with its own practices, terms, processes, and outputs. However, even with its myriad applications and definitions, the term “design” does carry a set of foundational, cultural associations: agency and creativity. The combination of these associations makes it ripe for debates of ownership.

Agency


To possess agency means to have the ability to affect outcomes. Without agency we’re just carried by the currents, waiting to see where we end up. Agency is control, and deep down we all want to feel like we have control. Over time, our cultural conversation has romanticized design like no other discipline, casting it as the epicenter of agency, a crossroads where creativity and planning translate into action.

At its core, design is the act of applying structure to a set of materials or elements in order to achieve a specific outcome. This is a fundamental human need. It’s not in our nature to leave things unstructured. Even the concept of “unstructured play” simply means providing space for a child to design (structure) their own play experience — the unstructured part of it is just us (adults) telling ourselves to let go of our own desire to design and let the kids have a turn. We hand agency to the child so they can practice wielding it.

There are few, if any, activities that carry the same deep tie to the concept of agency that design does. This is partially why no one cares to assert things like we’re all marketers or we’re all engineers. They don’t carry the same sense of agency. Sure, engineers have the ability to make something tangible, but someone had to design that thing first. You can “design” the code that goes into what you are building, but you do not have the agency to determine what is being built (unless you are also designing it).

If we really break it down, nearly every job in existence is a job where you are designing, a job where you are completing a set of tasks in service to something that was designed, a job where your tasks are made possible by some aspect of design, or some mix of the three. In every case, the act of “designing” is what dictates the outcomes.

Creativity


The other key aspect of our cultural definition of design is creativity. Being creative is a deep value of modern society. We lionize the creatives, in the arts as well as in business. And creativity has become synonymous with innovation. There is a reason that for most people, Steve Wozniak is a bit player in the story of Steve Jobs.

The idea of what it means for an individual to be creative is something that has shifted over time. In her TED Talk, Elizabeth Gilbert discusses the changing association of creative “genius.” The historical concept, from ancient Greece and Rome, was that a person could have a genius, meaning that they were a conduit for some external creative force. The creative output was not their own; they were merely a vessel selected to make a creative work tangible. Today, we talk about people being a genius, meaning they are no longer a conduit for a creative force, but instead they are the creative force and the output of their creativity is theirs.

This seemingly minor semantic shift is actually seismic in that it makes creativity something that can be possessed and, as such, coveted. We now aspire to creativity in the same way we aspire to wealth. We teach it and nurture it (to varying degrees) in schools. And in professional settings, having the ability to be “creative” in your daily work is often viewed as a light against the darkness of mundane drudgery. As we see it today, everyone possesses some level of creativity, and fulfillment is found in expressing it. When we can’t get that satisfaction from our jobs, we find hobbies and other activities to fulfill our creative needs.

So, our cultural concept of design makes tangible two highly desirable aspects of human existence: agency and creativity. Combine this with the amorphous nature of the term “design” and suddenly “designer” becomes a box that anyone can step into and many people desire to step into. This sets up an ongoing battle over the ownership of design. We just can’t help ourselves.

Take again, as proxy, our approach to the arts. While we lionize musicians, actors, artists, and other creators, we simultaneously feel compelled to take ownership of their work, critiquing it, questioning their creative decisions, and making demands based on our own desires. The constant list of demands and grievances from Star Wars fans is a perfect example. Or the fans who get upset if a band doesn’t play their favorite hit song at a show. Even deeper, we feel a universal right to remix things, cover things, and steal things.

Few people want to own the nuts-and-bolts process of designing, but everyone wants to have their say on the final output.

But just like other things we covet, what we desire is ownership over the output, not the process of creating it. For example, we’re willing to illegally download music, movies, books, games, software, fonts, and images en masse, dismissing the work it took to create them and sidestepping the requirement to compensate the creators.

A similar phenomenon occurs in the world of design. Few people want to own the nuts-and-bolts process of designing, but everyone wants to have their say on the final output. And because design represents the manifestation of agency and creativity there is an expectation that all of that feedback will be heard and incorporated. Pushing back on someone’s design feedback is not just questioning their opinion, it’s a direct assault on their sense of agency.

As a result, final designs are often a Frankenstein of feedback and opinions from everyone involved in the design process. In contrast, it’s rare to see an engineer get feedback on the way code should be written from a person who doesn’t have “engineer” in their title. It’s rarer still to see an engineer feel compelled to actually take that sort of feedback and incorporate it.

Another place this kind of behavior crops up is in the medical world. Lots of people love to give out health advice or question the decisions of doctors. However, few people would say “everyone is a physician.”

And I think this represents a critical point. There are two reasons that people do not assert they are physicians unless they actually are:

  1. We have made a cultural decision that practicing medicine is too risky to allow just anyone to do it. You can go to jail for practicing medicine without a license.
  2. No one actually wants to be responsible for the potential life and death consequences of the medical advice they give.

This highlights a third aspect of our cultural definition of design: Design is frivolous. Despite the connection between design and agency, many still view “designing” as trite and superficial.

Humans are sensory creatures. We absorb much of the world around us through visual, auditory, tactile, and olfactory inputs. Because of this, when we think of the agency inherent in design most of us think about it in terms of the aesthetic value of the output. Basically, we continually conflate design with art. If you don’t believe me, watch any episode of Abstract on Netflix. This is also why design programs are still housed in art schools.

So when most people critique designs, their focus is on aesthetics—colors, fonts, shapes—and their reactions are based on the feelings and emotions those aesthetic values elicit. While aesthetics have an important role to play, they are only a piece of the overall puzzle. It is much harder for people to substantively critique the functional merits of a design or understand the potential impacts a design decision can have. That is partially why so many of our design decisions end up excluding certain groups of users or creating other unexpected negative consequences: We don’t critique our decisions through that lens.

Everyone is a designer because there is no perceived ramification for practicing design.

Because of this narrow, aesthetic-based view, the outcomes of the design process feel relatively inconsequential to many people, especially in comparison to something like the outcomes of a medical diagnosis. And if there are no consequences, why shouldn’t we all participate? Everyone is a designer because there is no perceived ramification for practicing design.

Of course, in reality, there are major consequences for the design decisions we make. Consequences that are more significant, on a population level, than many medical decisions a doctor makes.

What I’ve come to realize is that the idea that everyone is a designer is not really about some territorial fight for ownership; it’s actually a symptom of our broken culture of technology. Innovation (creativity) is our cultural gold standard. We push for it at all costs and we can’t be bothered by the repercussions. Design is the tool that gives form to that relentless drive. In a world of blitzscaling and “move fast and break things” it serves us to believe that our decisions have no consequences. If we truly acknowledged that our choices have real repercussions on people’s lives, then we would have to dismantle our entire approach to product development.

Today, “everyone is a designer” is used to maintain the status quo by perpetuating the illusion that we can operate with impunity, in a consequence-free fantasy land. It’s a statement that our decisions have no weight, so anyone can make them.

I said at the beginning that I don’t care if everyone is a designer, and I mean that. If we keep thinking of this debate as some territorial pissing match then we continue to abdicate our real responsibility, which is to be accountable for the things we create.

It really doesn’t matter who is designing. The only thing that matters is that we change our cultural conversation around the consequences of design. If we get real about the weight and impact that design decisions have on our world, and we all still want to take on the risk and responsibility that comes with that agency, then more power to all of us.

“Why the ‘Everyone Is a Designer’ Debate Is Beside the Point” was originally published in Medium on January 22, 2020.

Three years ago, I couldn’t stand for any period of time without my lower back seizing up. I had chronic nerve pain running from my left shoulder to my left wrist. It was bad enough that I couldn’t sleep. I was at least 20 pounds overweight and more out of shape than I had been in a decade. I was 35 years old. My physical condition was not what I would call optimal.

I had fallen into the hustle trap. At the time, I was head of product for a tech company, and the long hours had caught up with me: 80-plus-hour weeks, with late nights on my laptop, sitting hunched over on my couch. A complete lack of exercise and poor nutrition. Things had gone south surprisingly quickly and without my full awareness.

The nerve pain was the tipping point that sent me to my primary care doctor, mostly out of fear that I had some significant neurological issue. Before sending me to a neurologist, the doctor suggested physical therapy. So that’s where I started—and I started slow: basic exercises, some stretches, and some walking. I put six months into therapy, and it eventually fixed the nerve pain. But the journey had just begun. Physical therapy was over, but the factors that took me to the breaking point were all still central to my life: long work hours, impending deadlines, constant computer work, and “tech neck.”

The culture of tech pushes people harder and harder, but we don’t think about the physical effects of that labor.

My experience is not unique. An anecdotal survey of my immediate professional network of designers and engineers came back with 50% of them suffering from some level of repetitive stress injury. Similarly, 50% of the people on the product team I was leading at the time were simultaneously in physical therapy for back and shoulder issues. While hard stats aren’t easy to come by, a study in Sweden corroborates my anecdotal data, showing that “around half of those who work with computers have pains in their neck, shoulders, arms, or hands.”

The culture of tech pushes people harder and harder, but we don’t think about the physical effects of that labor. If we were athletes, for whom physical health is vital to success, the thinking would be completely different. Sports organizations have entire teams dedicated to physical training and support for their players. But while it’s easy to understand why this investment is critical for an athlete, it’s much harder to make that connection for knowledge workers sitting in an office. We aren’t frequently required to tackle co-workers in the hallway or run 40-yard dashes to determine who gets access to a conference room. (Though maybe you do if you work at ESPN.com.) This makes it easy to ignore the long-term physical toll of our work.

Culturally, we view the physical requirements of a job through the lens of how strenuous its individual actions are to complete, like lifting boxes, running laps, digging ditches, or hitting jump shots. We don’t think about it in terms of aggregate impact. As a result, these sorts of injuries aren’t really discussed. When no one perceives what you’re doing as physically demanding, it’s embarrassing to talk about being injured. It’s like telling someone you hurt yourself getting out of bed. Add to that tech companies’ expectations around their employees’ time, and this becomes something few people want to announce to the world.

The roots of this issue sit deep down in the way we approach work in many sectors, but especially in technology and its surrounding industries. A recent AdAge article found that 65% of employees at ad agencies are suffering from burnout. Similarly, in 2018, a piece in Forbes put 57% of tech employees in the same boat. Fixing this means rethinking our ceaseless drive for efficiency and output, and worksite wellness programs aren’t going to cut it.

When my back fell apart, the company I was working for had lots of wellness amenities. An on-site, two-story gym offered several weekly classes, and our benefits package included massage and chiropractic care, with a chiropractor on-site. Nonetheless, half the product team was in physical therapy.

While these options were available, finding the space and time to use them was a different story. For sports organizations, there is a clear path from injury to lost revenue. Physical health is critical to getting the job done, so those organizations are built around it, and activities related to maintaining physical health are simply part of the work. In tech, health and wellness is just another carrot used to recruit prospective employees, no different than a foosball table or kombucha tap. It’s a fancy add-on available if you have time, but good luck finding that—we’ve got features to ship. But while the path from injury to lost revenue is not as clear in tech as it is in sports, that doesn’t mean it does not exist.

A 2012 study from the Liberty Mutual Research Institute ranked the top 10 causes of workplace injuries and their resulting economic impact. Repetitive stress injuries came in ninth, with a $1.8 billion annual cost for companies. You can’t pin all those losses on the tech industry, as the study pulled data from injury reports across sectors, but given the evidence that 50% of those who work on computers report pain issues, coupled with the tech sector’s growth since 2012, it is very likely that the economic impact of these issues has grown significantly. There is also a good chance the number has been grossly underestimated. Because of tech’s culture, my guess is that many of these issues go unreported and potentially untreated. The tech industry is the epicenter of the world of GaryVee-inspired hustle porn, where temporarily embarrassed billionaires kill themselves to earn some kind of social badge of honor. In that world, there is no room for sleep or a social life, let alone physical injuries. And companies hoping to move fast and break things embrace and reward this mentality with gusto.

There is nothing fun about slowly losing your quality of life in the machine of iterative product development.

We become culturally conditioned to think of these issues as just the price of doing the work, not as an occupational hazard or some abnormal outcome that should be reported. I didn’t report my issues through workers’ comp, and my guess is that many others don’t either.

In this way, the issue takes on a different flavor than you might see in other industries. For much of the manufacturing world, health and safety is a big part of the conversation, with labor groups, OSHA, and other regulatory bodies working to ensure a safe environment for workers. In tech, it’s a silent epidemic. And while the consequences may not be as outwardly dire as losing a hand in a piece of industrial machinery, there is nothing fun about slowly losing your quality of life in the machine of iterative product development. Additionally, research from Harvard suggests that health issues related to workplace stress and burnout represent an additional $190 billion of health care expenditures each year and contribute to 120,000 annual deaths. So there’s that.

My physical burnout moment became a forcing function for me to keep myself at a certain level of physical fitness. I’ve since developed a routine that helps me keep things under control, but it requires time, effort, and conscious intention. If I slip for too long, issues creep back in.

What if we recognized that time and effort as a requirement of doing the job, in the same way we recognize the need for athletes to take care of themselves? Instead of treating wellness as a nice-to-have perk (available if you can find the time), we could acknowledge that it is foundational to individual success, even for jobs that might not be considered “strenuous.” As in an athletic organization, the health and wellness of all employees should be a central pillar of the organizational structure.

I’m not saying tech companies need to have massive training facilities or two-a-day workouts, but we need to get real about creating work schedules that prioritize breaks and create space for actually using those wellness perks. This means establishing realistic expectations of employee hours and, most importantly, structuring deadlines that support those expectations. This may sound crazy or expensive, but I would argue that a lot of our ideas about “what works” for business are flawed, grounded more in archaic traditions and outdated beliefs than actual data. Case in point: this recent experiment from Microsoft where the company shifted to a four-day workweek in Japan and productivity jumped by 40%. Turns out taking care of people is good for business. More of that, please.

“Tech Workers Are Suffering From a Silent Epidemic of Stress and Physical Burnout” was originally published in Medium on January 15, 2020.

We build a lot of technology and push it out into the world. When things go well, we rush to take credit for what we did. But when things go wrong, we hide our heads in the sand. This isn’t just about ignoring negative outcomes — it’s about maintaining the status quo.

Whenever I write a critical piece about technology and its impact on society, a certain kind of troll surfaces. I like to call them the “techno-whataboutist.” Their argument is always the same: “[some person] had the same concerns about [some established technology — the book, the printing press, TV, newspapers, radio, video games, cars] a long time ago, and things turned out just fine, so stop worrying.”

And it’s not just no-name, trolly commenters who run down this path. Nir Eyal pulled the same shenanigans in his piece about screens and their impact on kids. And Slate did an entire piece on the history of “media technology scares” — which, according to the author, didn’t pan out. In both cases, Slate and Eyal pulled out one of the techno-whataboutist’s favorite examples:

The Swiss scientist Conrad Gessner worried about handheld information devices causing ‘confusing and harmful’ consequences in 1565. The devices he was talking about were books.

On the surface, it’s easy to laugh at Gessner, but our relationship with technology and the way it impacts our world is complicated. Nothing is black and white. It’s all gray. If we ever hope to have a healthy, sustainable relationship with the things we create, we have to be willing to dive into those gray areas. The techno-whataboutist’s goal is to avoid all that.

Traditional whataboutism is the deployment of a logical fallacy designed “to discredit an opponent’s position by charging them with hypocrisy without directly refuting or disproving their argument.” For example, a traditional whataboutist might try to dismiss climate activism by calling out that Greta Thunberg still rides in cars (hypocrisy!). This kind of tactic was a favorite propaganda tool of the Soviet Union during the Cold War. And while techno-whataboutism doesn’t allege hypocrisy, it represents the same kind of rhetorical diversion, one designed to act as a cudgel to beat back questions about the complex nature of our relationship to technology.

The idea that the only way to think about technology is in a positive light ignores the complexity inherent in technological progress.

The first big problem with techno-whataboutism is that it presupposes that the place we have ended up, as a society, is a good one. There is no power in Gessner’s book example unless you believe everything is fine.

To even be able to make a statement like, “people worried before, but everything is fine now,” takes a significant level of privilege. Perhaps that’s why in my experience the vast majority of the people who present this argument are white men.

Sure, for many of us white guys, things are pretty good. But this is not the case for everyone. The positive outcomes associated with the advance of technology are unevenly distributed and there are often significant winners and losers in the systems we architect and the things we produce.

Let’s continue with books as an example. The invention of the book made vast amounts of knowledge both available and easily transferable. It’s hard to argue against the net positive impact of that change. But if we just stop there we willfully turn a blind eye to the full picture.

The two most distributed books in history, the Bible and the Quran, while providing spiritual support for many people, have also helped spark a staggering amount of death, destruction, oppression, violence, and human suffering, often focused on marginalized groups and those who don’t subscribe to the beliefs these books contain. Mein Kampf helped catalyze the rise of the Nazis and ultimately the Holocaust. Mao Zedong’s Little Red Book, the third-most distributed book in history, arguably helped catalyze the Great Leap Forward, resulting in the deaths of millions of people.

The capabilities that made books a transformative, positive technology also made them weapons for propaganda and abuse on a previously unprecedented scale. So was Gessner wrong to worry about the impact of books? I don’t know about you, but I’d put indoctrination on a shortlist of “confusing and harmful” effects.

I’m not suggesting that we undo the invention of books or that the positives of technology should be discounted. But the idea that the only way to think about technology is in a positive light ignores the complexity inherent in technological progress. By doing so we lose a depth of conversation and consideration that leaves us open to repeating past mistakes and reinforcing existing power structures. For example, TV, radio, and now social media have mirrored many of the positive AND negative impacts of books on an exponentially accelerating scale, not to mention that each new technology piled on its own unique set of new issues.

Comparing a book to a smartphone is like comparing a car to a skateboard.

The techno-whataboutists practice a special brand of what I like to think of as “technological nationalism,” where they assert that all innovation is “progress,” regardless of the full outcome. This thinking keeps us locked into an endless loop where our technology changes but the political and economic status quo remains the same. The people who benefit continue to benefit and the people who don’t, don’t. We fix nothing and we disrupt everything, except the things that actually need disruption.

This brings me to the second problem with techno-whataboutism: The past is not a proxy for the future. Comparing a book to a smartphone is like comparing a car to a skateboard. Sure, they both have wheels and can get you from point A to point B, but that’s about as far as the similarities go. Books deliver information, as do smartphones, but the context and capabilities are on an entirely different scale. This kind of lazy logic blocks us from considering the specific nuances of new technology.

Context changes. The power, scale, and interconnectedness of our systems grow. We move from linear impacts to exponential impacts. The world is not as it was. The question becomes, when does it matter?

The consequences of our creations fall unevenly on society, but so far, as a whole, we’ve been able to push through and ignore much of the fallout. But when do the contexts and capabilities of our technology reach a point where the consequences can no longer be ignored?

In his 1969 book Operating Manual for Spaceship Earth, the architect and futurist Buckminster Fuller argued that while the resiliency of nature has created a “safety factor” that has allowed us to make myriad mistakes in the past without destroying ourselves, this buffer would only last for so long:

This cushion-for-error of humanity’s survival and growth up to now was apparently provided just as a bird inside of the egg is provided with liquid nutriment to develop it to a certain point…

My own picture of humanity today finds us just about to step out from amongst the pieces of our just one-second-ago broken eggshell. Our innocent, trial-and-error-sustaining nutriment is exhausted. We are faced with an entirely new relationship to the universe. We are going to have to spread our wings of intellect and fly, or perish; that is, we must dare immediately to fly by the generalized principles governing the universe and not by the ground rules of yesterday’s superstitious and erroneously conditioned reflexes.

Nature’s buffer acts as a mask, hiding the true impact of our actions and lulling us into a sense of overconfidence and a disregard for the consequences of our decisions. It’s easy to ignore all of our trash when the landfill keeps it out of sight, but at some point, the landfill overflows.

Fuller was a technological optimist, but he was also realistic about the complexity of change and innovation. From his vantage point in 1969, he was able to see that we were moving to an inflection point in our relationship with the world we inhabit. As he saw it, our safety factor was all used up and our ability to “spread our wings” was dependent on a change of approach, in order to come out the other side and truly cement our place in the cosmos.

The techno-whataboutist doesn’t want to change the approach. Instead, they want you to embrace their reductionist, technological nationalism where all innovation is good—outcomes be damned. Change is impossible under this type of thinking.

The tech world loves to talk about “failing fast,” but no one ever talks about what happens after that.

We’re already starting to see the aggregate impact of our choices on the natural world, and it’s becoming harder to hide from those consequences. But nature isn’t the only thing impacted by technology. The fabric of our society, the way we live and interact, is also tightly tied to the tools we have at our disposal. Like nature, I believe that the inherent resiliency in our societal structures has created a safety factor that has similarly allowed us to ignore the way our behavior, our habits, and our interactions have changed over time. But at what point does that landfill overflow? Or has it already?

Innovation is critical to our overall progress, and we have to accept that there are inherent risks in that messy and unpredictable process. But our need to invent doesn’t absolve us from being accountable to the results. The tech world loves to talk about “failing fast,” but no one ever talks about what happens after that. Who cleans up the mess we leave for society when we fail?

We take full credit for our successes. We stand on big stages and make a big show about the amazing benefits of our newest creations, but we sneak out of the party when shit goes bad. We don’t get to have our cake and eat it too.

It is possible to hold a positive view of technology while still acknowledging its downsides. And while we can’t be afraid to push the edges of what’s possible, we have to be willing to admit when things go wrong and invest in the work to fix it. Our safety factor won’t protect us forever. This is when it matters.

“The Problem with the Techno-Whataboutists” was originally published in Medium on January 8, 2020.

“Once you hit the first intersection in town, make a right onto Main Street. Go past the fire station and over the railroad tracks. Make your second right after the tracks. If you see a large metal building with a truck parked in overgrown grass, turn around; you’ve gone too far…”

These are part of the directions to the house I grew up in. For decades, these directions, in one form or another, would be dictated to people planning to visit for the first time. Once I could drive, I would reference equivalent narratives, hastily scribbled down on paper, as I made my way to somewhere new.

This was a deeply inefficient process. It required an upfront conversation to get the directions, followed by significant effort and consternation to decipher them en route. If you were lucky enough to have a “navigator” in the passenger seat next to you, that opened up a whole separate level of coordination. “Wait, was that the big tree with the ‘Y’ in it? Wasn’t that our turn? You were supposed to be watching for the ‘Y’ tree!”

But while inefficient, this process represented something profoundly valuable: awareness and connection.

In order to rattle off a narrative like that, tethered to a landline phone in your kitchen, you had to maintain a detailed map of the area in your head. You and your visitor had to have a shared awareness and contextual understanding of major landmarks and geography, allowing you to shortcut details: “Can you get yourself to Interstate 70? Yes? Great, get to I-70 and head west…”

If you regularly use navigation, think about your own mental map of your town or city. How far out can you go before your map starts to fade?

To follow the directions, you had to remain acutely aware of your surroundings throughout the journey. Getting lost and turned around was a common occurrence. But every wrong turn and missed landmark represented new learning and discovery, a chance to expand your own internal map and build your resilience as you were able to troubleshoot and get back on track.

This process, of course, rarely happens today. Now all we have to do is text our address to someone and Google Maps does the rest. It’s an exponential boost in efficiency, but a significant erosion of capability and connection. We don’t need to be aware of our surroundings at all anymore. We can just wait for the voice from our phone to tell us where to turn. If we happen to make a wrong turn, it is immediately corrected. We don’t have to figure anything out.

Some believe that over time this kind of reliance will degrade our broader cognitive function. It’s unclear if that is true, but what we can say is that it does diminish our skills and our level of self-sufficiency.

If you regularly use navigation, think about your own mental map of your town or city. How far out can you go before your map starts to fade? Can you name the streets and key landmarks directly surrounding your house? Those a few blocks away? How easily could you give someone narrative directions to your home from miles away? How different is your internal map today than it was five or 10 years ago?

For those who have offloaded navigation to their phones, these questions can start to prove challenging. As our awareness fades, GPS-enabled navigation becomes something we “can’t live without.”

The reliance economy

Today, much of our existence centers on the attention economy, where our focus and time are mined, and the resulting data is manipulated and sold as a commodity in service of driving advertising revenue and feeding algorithms. We’re becoming painfully aware of the downsides of this arrangement as services architect themselves to put us in a perpetual “can’t look away” state. But as detrimental as the attention economy is, it’s just a temporary stop on our way to a very different destination.

“Can’t look away” was never the ultimate goal. The ultimate goal has always been “can’t live without,” and that is a very different animal.

As technology advances, we have begun offloading more and more of our cognitive functions and skills to our devices.

In 2016, researchers conducted a study to test the effects of the internet on human memory. Participants were divided into two groups and asked a series of challenging trivia questions. One group was allowed to use the internet to answer the questions, while the other group could only use their memory. Afterward, both groups were asked a second set of trivia questions. This time, the questions were easy, and both groups were allowed to answer them using any method they chose. What the researchers found was that the group that had already used the internet for the hard answers was significantly more likely to use it to find the easy answers as well. In fact, 30% of the internet group did not even try to answer a single simple question from memory.

As Benjamin Storm, PhD, lead author of the study, put it:

“Memory is changing. Our research shows that as we use the internet to support and extend our memory we become more reliant on it. Whereas before we might have tried to recall something on our own, now we don’t bother. As more information becomes available via smartphones and other devices, we become progressively more reliant on it in our daily lives.”

As technology advances, we have begun offloading more and more of our cognitive functions and skills to our devices. This isn’t altogether surprising — cognitive offloading is a strategy we use in other areas of life as well.

In close relationships, like a marriage, ownership of life tasks is frequently split between the couple, with one person taking responsibility for each area, like paying bills, cooking, or managing car maintenance. This allows us to gain efficiency, but it can also have significant detrimental effects. If a spouse dies suddenly or a couple gets divorced, our cognitive support is stripped away and we are left to relearn skills or find outside help with things for which we may have had no responsibility over the years. Yet despite the risks, in this case, reliance has an evolutionary benefit in helping us maintain long-term relationships by forging tighter bonds through shared dependence in a (hopefully) mutually beneficial exchange.

This isn’t the case with our reliance on digital devices. With technology, we aren’t becoming dependent on another person — we’re becoming dependent on corporations. Even if our relationship with a company is mutually beneficial in the short term, the chances of a long-term mutually beneficial relationship are almost nonexistent. We don’t build companies to create long-term relationships. We build companies to drive profits, and that leaves us vulnerable.

Despite the way we position it, technology is no longer a tool to solve problems; it has been twisted into a tool to grow profits. Capitalism isn’t geared to solve problems. If a company truly solved a problem, it would put itself out of business. Instead, the system is geared to keep consumers perpetually in need. For every solution we create, we have to manufacture a new problem. One way we do this is through planned obsolescence: intentionally designing things to have a short lifespan, which drives frequent upgrade cycles. This is the reason Apple releases a new iPhone every year and stops supporting older versions. This is also the reason that the average appliance now lasts about eight years, and why the fashion industry pushes seasonal trends.

But planned obsolescence isn’t the most powerful problem a company can generate. The most powerful problem a company can create is the “I can’t live without it” problem. If a product replaces a human skill, we become reliant on it, and making us reliant is the ultimate long-term growth strategy. Monopolization isn’t just about pushing out the competition. It’s about monopolizing human capability.

But while this process drives business and economic growth, it degrades our resilience at an individual and societal level. This creates a compounding fragility at the base of our societal structures, and we become increasingly vulnerable to catastrophic events. This is a self-perpetuating downward spiral where, as our resiliency continues to diminish, it takes less and less for an event to be catastrophic. Just as in a marriage, when our partner goes away or the situation changes, we’re left holding the bag. We’re becoming progressively less capable of handling those changes.

Despite the way we position it, technology is no longer a tool to solve problems; it has been twisted into a tool to grow profits.

Our true reliance on technology is still emerging. We’ve only just reached generations who have never seen a world without it. But navigation and Googling for trivia answers are only the tip of the iceberg. The expanding capabilities of artificial intelligence (A.I.) are going to dramatically accelerate the number of tasks we fully offload to devices. Self-driving cars. A.I. personal assistants. Scheduling. Communication. Writing. Purchasing. Courtship. Designing. Coding. Solving mathematical problems. Art. Music. Problem-solving. It’s all up for grabs.

As Marvel’s Dr. Strange put it, “We’re in the endgame now.”

Some take comfort in the fact that A.I. is still relatively “dumb,” with the sense that it is not a concern until it becomes smarter than us. But what they’re missing is that this isn’t a race to create a superintelligence — this is a race to replace human skills and build the next “can’t live without it” monopoly. In that race, A.I. doesn’t need to become better than us. We just need to become dumber than it. As smart devices subsume more of our capabilities, we will gain efficiency, but we will lock ourselves into a dependent relationship.

This isn’t the first time we’ve headed down this path. We’ve taken a lot of potentially empowering products and contorted them into tools for reliance.

Take the car for example. The car is a massive enhancement to our ability to travel and is incredibly empowering, but in our drive for monopolization, we’ve stripped that empowerment away by developing a system that locks us into car-based travel. We’ve planned and built our entire environment around vehicles, to the point that it is nearly impossible to live without access to motorized transportation. We now have an entire segment of mobility companies trying to untangle those decisions. Additionally, over time we’ve made cars more and more complex to fix and maintain, creating a reliance on an intricate system of mechanics and dealers in order to use them. Finally, we’ve created an unnecessary upgrade cycle through a combination of marketing and mediocre craftsmanship. We’ve taken what was an empowering technology and imprisoned ourselves in it.

We don’t have to continue down this road. Technology is not something that happens to us — it’s something we choose to create. We have the ability to make different choices in the way we design and build our products and the way we incentivize our companies. Despite what we’ve been told, we can build for empowerment instead of reliance, and still create profitable businesses. The next decade will see a dramatic expansion of our digital capabilities. It’s time for us to start thinking critically about the choices we make and the things we decide to build.

“The Dawn of the Reliance Economy” was originally published in Medium on April 26, 2019.


In 1926, the last remaining wolves were killed in Yellowstone National Park. It was the outcome of a centuries-long campaign to rid North America of its wolf population.

Wolves were viewed as a nuisance. They killed valuable livestock and created a barrier against our drive to conquer the West. Our bid to eradicate them was swift and effective but carried unexpected consequences.

In Yellowstone, removal of the wolves resulted in reduced pressure on the elk population, triggering a cascade of ecosystem-wide devastation. The growing elk herds decimated willow, aspen, and cottonwood plants, which caused beaver populations to collapse. This cascade of events changed the trajectory and composition of the park’s rivers as banks eroded and water temperatures rose from reduced vegetative cover. As a result, fish and songbirds suffered.

Humans are friction-obsessed.

Doug Smith, a wildlife biologist who oversaw the reintroduction of wolves to Yellowstone, describes the original elimination of them as “kicking a pebble down a mountain slope where conditions were just right that a falling pebble could trigger an avalanche of change.”

To humans, the wolves represented nothing but unnecessary friction. To nature, they represented a crucial linchpin holding the entire ecosystem together.

Humans are friction-obsessed. Friction is our ultimate foe in a constant crusade for efficiency and optimization. It slows us down and robs us of energy and momentum. It makes things hard. We dream of futures where things run smoothly and effortlessly, where it’s all so easy.

Driven by this vision, we’ve constructed a vast techno-industrial complex that churns out endless products aimed at smoothing increasingly insignificant inconveniences.

But nature is the ultimate optimizer, having run an endless slate of A/B tests over billions of years at scale. And in nature, friction and inconvenience have stood the test of time. Not only do they remain in abundance, but they’ve proven themselves critical. Nature understands the power of friction while we have become blind to it.

In 2012, psychologists completed a study that asked participants to assign monetary value to a simple storage box from IKEA. One group had to build their own box while the other group was given a prebuilt box. Both groups were then asked what they thought the box was worth. The group that built their box valued it significantly higher than those who received the prebuilt version.

In this case, building the box added an extra layer of friction to the process. That friction infused a sense of ownership and purpose into the box, making it more valuable to the participants who built it, a phenomenon dubbed “the IKEA effect.” This effect, however, only held to a point. As the researchers dug deeper, they discovered that value was not created if the box was too difficult to build. As the researchers put it: “We show that labor leads to love only when labor results in successful completion of the task.”

The results of this study suggest a bell curve of friction versus value. Both too much friction and too little friction reduce value, but just the right amount of friction maximizes it.
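
If it helps to make that shape concrete, here is a minimal sketch in Python. The Gaussian form and its parameters are my own assumptions for illustration; the study itself doesn’t propose a formula:

    import math

    def perceived_value(friction, optimal=0.5, tolerance=0.2):
        # Illustrative bell curve: value peaks at a moderate level of
        # friction and falls off when a task is too easy or too hard.
        # The optimum and spread are invented numbers, not study data.
        return math.exp(-((friction - optimal) ** 2) / (2 * tolerance ** 2))

    for f in [0.0, 0.25, 0.5, 0.75, 1.0]:
        print(f"friction={f:.2f} -> relative value={perceived_value(f):.2f}")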


We can see this effect play out in the products we use every day.

Take, for example, Facebook. Facebook unlocked tremendous value by greatly reducing the friction involved in sharing our lives with friends. The platform was easy to use but still required some effort to create and share posts. In a bid to increase value, Facebook decided to remove this final bit of friction by introducing “frictionless sharing,” wherein some activity was automatically shared on the user’s behalf. Unfortunately, the change removed too much friction. Users felt they had lost control and ownership over their posts, and their response was overwhelmingly negative. Facebook eventually rolled back the feature.


Similarly, Amazon delivers value by making it easy to find and buy almost anything. However, the steps you must take to purchase an item on Amazon still represent a small dose of friction. To remove this final bit of friction, Amazon implemented a “one-click” buy button that eliminates the need to complete the usual checkout steps. To take this even further, they created a smart button called Amazon Dash, which allows a person to order frequently used products without even visiting Amazon’s site. These features solve a problem for Amazon, bringing them more revenue more quickly. But based on what we know already, a frictionless shopping experience may actually be detrimental to customers.

Like Yellowstone’s wolves, the friction of the checkout process provides a check against impulse purchases and overspending. In a world where many people struggle to manage their money, these small barriers can be critical to maintaining financial balance. While the market would dictate that it’s not Amazon’s job to help its customers control their spending, lowering the barrier to impulse purchases could have a net negative effect on the value people get from Amazon’s service. The Dash button, for example, eliminates so much friction that customers may not even know how much they’re spending until after they’ve completed a purchase. In light of this, Amazon Dash was deemed illegal in Germany for violating consumer protection laws.
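
As a thought experiment, here is a minimal sketch of what deliberately retained friction might look like in a checkout flow. Everything in it, the budget threshold, the names, the behavior, is invented for illustration and has no connection to Amazon’s actual systems:

    MONTHLY_BUDGET = 200.00  # hypothetical user-set spending limit

    def place_order(cart_total, spent_this_month, confirm):
        # Keep the experience smooth for routine purchases, but reintroduce
        # one deliberate confirmation step when spending passes the limit.
        if spent_this_month + cart_total > MONTHLY_BUDGET:
            return confirm(
                f"This order brings your monthly spending to "
                f"${spent_this_month + cart_total:.2f}. Place it anyway?"
            )
        return True  # under budget: stay one-click smooth

    # Example: a buyer who declines any order that would exceed the budget
    print(place_order(49.99, 180.00, confirm=lambda message: False))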


While the friction-versus-value curve impacts our daily interactions with products, it carries even greater weight outside of online shopping and social sharing.

We crave purpose and meaning in our lives. Many of us subscribe to the guiding belief that we must eliminate as much inconvenience and friction as possible in order to maximize the time we can spend on “the things that matter.” Unfortunately, as the IKEA effect illustrates, we may be going about it all wrong.

Below is a graph from Our World in Data. It shows self-reported life satisfaction from 2005–2017 across a number of countries with varying economic and political circumstances.


Overall, a country’s average level of life satisfaction increases alongside its wealth, with many wealthy countries reporting average levels in the seven to eight range (out of 10). A certain level of wealth, both on an individual and national level, is required to afford the services and infrastructure that reduce major friction in our lives. But that’s not what’s most interesting about this graph. Rather, what is most striking to me is that satisfaction levels, across the board, have not moved appreciably in over a decade.

This is remarkable when you consider that the time between 2010 and 2017 represents a high point in Silicon Valley, with the introduction of smartphones, tablets, and wearables as well as the explosion of social media and the rise of Amazon, Uber, Airbnb, and Netflix. You could call this era a golden age in our war on friction. We’ve seen a technology-enabled smoothing of increasingly minor inconveniences, yet it seems to have had little net impact, positive or negative, on life satisfaction across the globe. For many, life has changed dramatically but our levels of satisfaction have not.

It’s important to note that the data from the graph above draws from the Gallup World Poll, which focuses its survey mainly on adults. Most of the respondents are from a generation that grew up before the great smoothening of the last decade. So what about the generation entering adulthood right now? Has life with less friction left them feeling happier and more fulfilled than generations before? In her book iGen, San Diego State University psychology professor Jean M. Twenge shows us the answer is no. A growing percentage of eighth, tenth, and twelfth graders feel their lives have less purpose than previous generations did. We have a lot more to learn here, but the preliminary evidence supports the idea that we’re no happier than we were before the rise of apps.

[Chart: Percent of 8th, 10th, and 12th graders who are neutral, mostly agree, or agree with each statement.]

Too much friction destroys value. But so does too little.

Before the industrial revolution, many people faced insurmountable levels of friction. Over the last century, we’ve unlocked tremendous value by reducing major inconveniences. We’ve streamlined travel and communication, connecting vast portions of the globe. We’re enabling an increasing percentage of the global population to rise out of poverty. Mechanization and mass distribution put material and agricultural goods in the hands of many for whom these things were previously unattainable. We’ve moved more people to the middle of the friction bell curve, making it possible for them to step away from the basic tasks of survival and find meaning in other pursuits. Through it all, technology has continued to advance.

Ostensibly, the continued reduction of minor inconveniences should continue to drive satisfaction upward. But global satisfaction and happiness are stagnating and young people are feeling less purpose in their lives.

Over time, we’ve increasingly tied the value of technology to the revenue it can generate as opposed to the benefit it can deliver to the humans who use it.

The problem is that we now have a system built to straddle the friction-value curve, which keeps many people out of the middle. On one side, we have the market-driven techno-industrial complex, which is focused on making things increasingly easier for people who are already in the sweet spot of the curve. The result is that these people are beginning to slip down the other side, falling into the realm of too little friction and leaving purpose, meaning, and satisfaction behind.

On the other side, vast portions of the population are living with far too much friction. Overall, global progress has not been evenly distributed. Even within wealthy countries, disenfranchised and marginalized groups continue to face massive systemic barriers. Frequently, these issues are shuffled onto society’s back burner, becoming the purview of under-resourced government and philanthropic organizations while the market turns its attention toward delivering more ease for those who already have it easy enough.

This is the incentive structure we’ve created. Technology is a tool to solve problems and deliver value. Over time, however, we’ve increasingly tied the value of technology to the revenue it can generate as opposed to the benefit it can deliver to the humans who use it. Our economic system feeds on the belief that eliminating all friction is our road to happiness. We perpetuate this belief to drive profits — but we’re reaching a point of diminishing returns.

While levels of global satisfaction are still relatively high today, the trend in these numbers is not encouraging, especially for younger generations. If our goal is to grow profits, we’re doing alright. But if our goal is to truly deliver human value, we’re heading down the wrong path.

We need to reassess our relationship with friction. We reduce the likelihood of value, purpose, and satisfaction when we focus on smoothing increasingly benign inconveniences and ignore the significant friction holding back much of the world.

All friction is not created equal. If we are designing products for human value, we can’t treat all problems the same way. We need to understand which problems are worth solving because they truly hold people back and which problems may not actually be problems at all. The nuance of this difference, just as we see in nature, is key to maximizing a product’s value to humanity.

“The Value of Inconvenient Design” was originally published in Medium on March 5, 2019.

The digital world, as we’ve designed it, is draining us. The products and services we use are like needy friends: desperate and demanding. Yet we can’t step away. We’re in a codependent relationship. Our products never seem to have enough, and we’re always willing to give a little more. They need our data, files, photos, posts, friends, cars, and houses. They need every second of our attention.

We’re willing to give these things to our digital products because the products themselves are so useful. Product designers are experts at delivering utility. They’ve perfected design processes that allow them to improve the way people accomplish tasks. Unfortunately, it’s becoming increasingly clear that utility alone isn’t enough.

Quite often, our interactions with these useful products leave us feeling depressed, diminished, and frustrated.

We want to feel empowered by technology, and we’ve forgotten that utility does not equal empowerment.

Empowerment means becoming more confident, especially in controlling our own lives and asserting our rights. That is not technology’s current paradigm. Instead, digital products demand so much of us and intrude so deeply into our daily existence that they undermine our confidence and control. Our data and activity are mined and used with no compensation or transparency. Our focus is crippled by constant notifications. Our choices are reduced by algorithms that dictate what we see. We can’t even set our devices down because we’ve lost our ability to resist them.

In the early years of the web… there was still a degree of separation. We just weren’t on our computers that much. Then the smartphone came along.

We brush this off because we’ve confused a sense of utility with a feeling of empowerment. We assure ourselves that we own our lives when we land a great deal on a place to stay, catch the latest update from a friend, discover a great article, or have our groceries delivered. These are just a few of the small moments of pure utility that we’ve learned to confuse with power over our own lives.

We’ve been on this trajectory for a while. For decades, companies have taken increased license to insert themselves into our lives. Driven by a combination of proximity and data availability, this trend has reached a crescendo in the last decade.

Everything we do on the web now is trackable. Before the internet, this level of data granularity was unfathomable. In the web’s early years, companies began to leverage user insights to target ads and drive their businesses. For a brief time, we had a degree of separation because we just weren’t on our computers very much. Then the smartphone came along.

Smartphones have created a once-unimaginable level of proximity between customers and companies. This ever-present connection has dramatically driven up our time spent online. Suddenly, companies can reach us directly anytime, anywhere. Couple that with the growing mountains of data, and the separation between our lives and companies that want to influence them has disappeared.

It’s an unsustainable relationship. It may look like the future, but it’s not.

Most companies’ current model of value is to design for utility, believing that customers will absolve them of any wrongs done in the name of it. This model is failing because it misses the bigger picture of what humans want from the technology they use.

Utility alone won’t assuage us. We want empowerment. We want to be better people. We want technology to enhance our capabilities and increase our sense of agency without dictating the rhythm of our lives.

This is the task for the next wave of digital products, and it will require a complete shift in the way we think about design. For starters, we need to be willing to break the existing “utility” mold. As ever, when one company develops a winning strategy, everyone follows suit. Now that we’ve established a set of best practices based on extraction and exploitation, we’ve applied them with cookie-cutter precision across every industry. Companies preach user-centered design, but the products they create often center on the value they receive from the user rather than what they can deliver.

As digital product designers, here’s what we need to rethink:

  1. How users’ roles are viewed in the life cycle of products. If the value of a product is predicated on its users’ activity or resources, then those users are not customers, they are business partners.

  2. Data collection, manipulation, and transparency. We need to center the user — not the business — as the owner of their data.

  3. The drive for continual engagement. Intentionally hijacking human psychology in order to hook people is a predatory business practice. We need ethical standards for how we manipulate people’s behavior.

  4. Revenue models. Business models that depend on a given level of user engagement are unsustainable.

  5. How content creators are compensated. A platform alone should not profit from the creations of its users.

  6. Algorithms and artificial intelligence. We need ethical standards for how we manipulate what a person sees.

  7. The role of our products in the lives of our users. Our products are not the center of a person’s life; they are only a small part of it.

Evolving our thinking in each of these areas will be a big step forward, but doing only that isn’t the complete answer. We also need to break our obsession with screen-based solutions. While screens are unlikely to ever go away completely, they’ve become a crutch — the path of least resistance. If there is a problem to be solved, product designers think all they have to do is create an app. Our obsession with designing for screens has fueled an entire industry of UX design boot camps that crank out app designers. We’ve tricked ourselves into believing all problems are nails and screens are the hammer. We’ve got it so dialed in at this point that most apps look the same.

Screens are easy.

They beget many of the digital product design problems described above. They require attentive processing, meaning our brains must be fully engaged to interact with them. By nature, they demand our attention — which is what encourages the collection of vast amounts of data — and lend themselves to business metrics like minutes viewed, dwell time, page views, and read time. Screens have convinced us that continual engagement is the definition of success.

We’ve never wanted to be shackled to technology. It’s not the future we promised ourselves.

As long as we continue to design solutions that demand all of our attention, it will be nearly impossible to break out of the “disempowering product” paradigm. Too often, our screen obsession keeps us from even considering the many other creative and powerful ways we could be using the web’s capabilities.

Some point to augmented reality as the next phase. While AR may feel transformative and whiz-bang, it’s really just the same screen in a different location. It’s the next step in the race to see how close our notifications can get to our actual eyeballs. It’s not empowering.

Empowering products enhance our capability and our sense of agency without disrupting the rhythm of our lives. The car is a great example. It’s a dramatic enhancement to our ability to travel, and we have agency (outside of some basic safety rules) to use it as we see fit. It works with us. It listens to us. It doesn’t disrupt us. A car is there when we need it and invisible when we don’t.

This must be our new design mantra: There when you need it, invisible when you don’t. It would be much better than what we believe today: There when you need it, incessantly begging you to come back when you don’t.

In his book Enchanted Objects, product designer and entrepreneur David Rose of the MIT Media Lab proposes the concept of “glanceable technology”: products that deliver value without demanding constant attention. Rose’s most basic example is a web-enabled umbrella whose handle glows blue when it’s going to rain so you remember to take it with you. It’s a common device made magical with some basic web intelligence. It’s simple and powerful.

Consider another example: a wallet that gets harder to open the closer you get to your budget limit. Contrast that with a flood of “high spending” notifications on your lock screen and in your email from services like Mint. What about an alarm clock that changes color based on the predicted temperature for the day, so you know how to dress without opening an app? Or a watch that monitors traffic patterns and vibrates to let you know when you need to leave to make it to an appointment on time. A piece of luggage with a handle that glows to notify you if your flight is delayed.

Each of these products would enhance our ability to make decisions and manage our lives without disrupting or dictating our actions. They would leverage the power of the web to deliver utility while offering us the agency to use them as we see fit.
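To make the pattern concrete, here is a minimal sketch of the control loop behind a device like Rose’s umbrella. The forecast source and LED interface are hypothetical stand-ins, not real APIs:

```python
# Minimal sketch of a "glanceable" device loop: poll a forecast, set an
# ambient signal, and otherwise stay silent. get_rain_probability() and
# set_handle_glow() are hypothetical stand-ins, not real APIs.
import time

RAIN_THRESHOLD = 0.5          # glow when rain probability exceeds 50%
CHECK_INTERVAL_SECONDS = 900  # re-check every 15 minutes

def get_rain_probability() -> float:
    """Stand-in: replace with a call to any weather service (0.0-1.0)."""
    return 0.7  # placeholder value for illustration

def set_handle_glow(on: bool) -> None:
    """Stand-in: drive the LED in the umbrella's handle."""
    print("handle: glowing blue" if on else "handle: dark")

while True:
    set_handle_glow(get_rain_probability() > RAIN_THRESHOLD)
    time.sleep(CHECK_INTERVAL_SECONDS)
```

The design point is in what’s absent: no notification, no badge, no feed. The device’s entire output is a single ambient state you absorb in a glance.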

There is so much depth beyond the screen. Some of the solutions described above might be coupled with an app, but even so, they move us away from screens as our primary entry points to technology. They would put a buffer between us and that needy friend demanding more of our time.

This is the future we should be building. It’s not just about “smart” objects. If we continue on our current path, we’ll eventually shove A.I. into every random thing we can find. Intelligence for its own sake does not equal empowerment — just as utility doesn’t. Empowerment comes through execution. If I can text my refrigerator from the store to ask if we have milk before I buy more, I have more agency to manage my life. But if that “smart” refrigerator also tracks my eating habits and funnels them to Amazon so it can spam my phone with “there’s a special on Double Stuf Oreos” notifications, then we’re right back where we started.

We’ve never wanted to be shackled to technology. It’s not the future we promised ourselves. Stories from our past don’t depict a future where we all have our heads buried in screens — unless those stories are of the dystopian variety.

We’ve always wanted tech to feel like magic, not a burden.

We can build the future we want. Technology is not something that happens to us; it’s something we choose to create. When we design the next wave of products, let’s choose to empower.

“It’s Time for Digital Products to Start Empowering Us” was originally published in Medium on February 25, 2019.

I recently installed a Nest thermostat in my house. Nest has been around for a while, but I’ve been hesitant to get one. I won’t go into the details of why we finally pulled the trigger, but it made sense to have more control of our home environment.

When the box arrived, I was excited. I felt like I was stepping into the future. Once I got it all wired up and began the setup, though, my original hesitation came flooding back.

Nest would like to use your location.

I almost bailed. This is when Nest stopped feeling like a fun, helpful device and started to feel like an intrusive portal. Yet another keyhole for a company (or whomever else) to peer into my family’s life. It was probably okay, I rationalized. It’s probably just sharing location and temperature data, I thought to myself.

I wouldn’t have had this conversation with myself a decade ago. As the internet grew and the iPhone came on the scene, it was exciting. I felt a reverence, almost gratitude for everything it enabled. Driven by curiosity and optimism, I signed up for any new service just to see what the future might hold. I was on the leading edge of early adopters.

Over the past few years, however, I’ve drifted away. I’m not the only one.

There’s always been a financial cost to early adoption. My uncle amassed a collection of LaserDiscs, only to have to start over when DVDs won. For him, the long-term impact was limited: some money out of pocket and a slightly bruised ego. Now, the equation is very different.

The cost of a new device is no longer just financial: it’s also deeply personal.

Today, each new device we purchase is a conscious decision to share an intimate piece of ourselves with a company whose goals may not align with our own. This exchange represents a fundamental shift in our relationship with technology and the companies that produce it. Adoption is no longer an ephemeral transaction of money for goods. It’s a permanent choice of personal exposure for convenience—and not just while you use the product. If a product fails, or a company folds, or you just stop using it, the data you provided can live on in perpetuity. This new dynamic is the Faustian bargain of a connected life, and it changes the value equation involved in choosing to adopt the next big thing. Our decisions become less about features and capabilities, and more about trust.

When Amazon says, “Don’t worry, Alexa isn’t listening all the time,” we have to decide if we trust them. When Facebook launches a video chat device days after announcing a security breach impacting 50 million user accounts, we have to decide if we’re willing to allow them to establish an ever-present eye in our home. When we plug in a new Nest thermostat for the first time, we have to decide if we are okay with Google peering into our daily habits. The cost of a new device is no longer just financial: it’s also deeply personal.

The diffusion of innovation

The adoption of new technologies is often represented on a normal (bell) curve, with roughly 16 percent of the population falling into what is broadly characterized as early adopters.
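That “roughly 16 percent” isn’t arbitrary. In Everett Rogers’ diffusion model, the adopter categories are slices of a normal distribution of adoption times, cut at one and two standard deviations from the mean; innovators and early adopters together are everyone more than one standard deviation ahead of the average. A quick sketch using only Python’s standard library:

```python
# Where the "roughly 16 percent" comes from: adopter categories are
# slices of a normal distribution of adoption times, cut at one and
# two standard deviations ahead of the mean.
from statistics import NormalDist

z = NormalDist()  # standard normal curve

segments = {
    "innovators (beyond -2 sd)":      z.cdf(-2),
    "early adopters (-2 to -1 sd)":   z.cdf(-1) - z.cdf(-2),
    "early majority (-1 sd to mean)": z.cdf(0) - z.cdf(-1),
}
for name, share in segments.items():
    print(f"{name}: {share:.1%}")

print(f"innovators + early adopters: {z.cdf(-1):.1%}")  # about 15.9%
```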


Early adopters, as Simon Sinek puts it, are those who just get it. They understand what you’re doing, they see the value, and they’re here for it. The further you move into the curve, from the early majority to the laggards, the more you need to convince people to come along.

Early adopters have an optimistic enthusiasm and a higher tolerance for risk, both financial and social (remember the first people walking around with Google Glass?). It’s relatively easy to acquire them as customers. It doesn’t take a sophisticated marketing apparatus or a big budget to get them on board. As Sinek says, “Anyone can trip over [the first] 10 percent of the market.” Early adopters are critical because they create the fuel that allows an idea to gain momentum.

Early adopters provide initial cash flow and crucial product feedback, and they help establish social proof, showing more cautious consumers that this new thing is okay—all at a comparatively low cost of acquisition.

For a new product to find true mass market success, it has to move out of the early adopter group and gain acceptance in the early majority. This is sometimes referred to as crossing the chasm. Early adopters give new technologies the chance to make that leap. If companies had to invest in marketing to acquire more reticent consumer groups, the barrier to entry for new ideas would grow dramatically.

But what if early adopter enthusiasm began to erode? Is that optimistic 16 percent of the population immutable? Or is there a tipping point where the risk-to-value ratio flips and it no longer makes sense to be on the cutting edge?

What it means to “just get it” in the 21st century

There was something different about the Facebook Portal launch. When the new video chat device hit the market, Facebook didn’t make a play for the typical early adopter group—young, tech-savvy consumers. Instead, they targeted the new device toward a less traditionally “techy” audience — older adults and young families. You could make a lot of arguments as to why, but it comes back to the core principles of early adopters: they get what you’re doing, they see the value, and they’re here for it.

For Facebook, mired in endless scandals and data breaches, it became clear that the traditional early adopters did get what they were doing, but instead of value they saw risk, and they weren’t here for it. Facebook chose to target a less traditional demographic because the company felt they were less likely to see the possible risks.

Facebook Portal is a paragon of the new cost of early adoption. The product comes from a company whose relationship with consumers is shaky at best. It carries a lot of privacy implications. Hackers could access the camera, or the company could be flippant and irresponsible with the use and storage of video streams, as was reported with Amazon Ring. On top of that, Portal is not just a new device, but also a new piece in the ecosystem of Facebook products, which represents a bigger underlying hazard that is even harder to grapple with.

Today, each new device we purchase is a conscious decision to share an intimate piece of ourselves with a company whose goals may not align with our own.

As the technology ecosystem has grown, the number and types of devices we feed our personal data into have expanded. But, as linear thinkers, we continue to assess risk based on the individual device. Take my internal dialogue about the Nest thermostat. My inclination was to assess my risk tolerance based on the isolated feature set of that device — tracking location and temperature. In reality, the full picture is much broader. The data from my Nest doesn’t live in isolation; it feeds back into the ever-growing data Frankenstein that Google is constructing about me. My Nest data is now intermingling with my Gmail data and search history and Google Maps history and so on. Various A.I. systems munge this data to drive more and more of my life experience.

A product ecosystem means the power inherent in a single device is no longer linear. As each new device folds into an increasingly intimate data portrait, companies are able to glean insights with each new data point at an exponential rate. This potentially translates to exponential value, but it also carries exponential risk. It’s hard, however, for us to assess this kind of threat. Humans have difficulty thinking exponentially, so we default to assessing each device on its own merits.
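To see the mismatch in plain numbers (a toy illustration, not a formal risk model): with n data streams, the number of distinct combinations available for cross-correlation is 2^n − 1, so each new connected device roughly doubles what can be assembled about you, while our linear intuition counts it as one more unit of exposure.

```python
# Toy illustration: the number of distinct combinations of n data
# streams is 2**n - 1, so each new device roughly doubles what can
# be cross-correlated. Not a formal risk model.
streams = ["search history", "gmail", "maps history", "nest thermostat"]

for n in range(1, len(streams) + 1):
    print(f"{n} streams -> {2**n - 1} possible combinations")
# 1 -> 1, 2 -> 3, 3 -> 7, 4 -> 15: exponential growth, while we
# tend to price each device as just one more unit of exposure.
```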

All of this means that to be tech-savvy today isn’t to enthusiastically embrace new technology, but to understand potential hazards and think critically and deeply about our choices. As Facebook Portal illustrates, that shift has the potential to change the curve of technology adoption.

Trust in the future

Over the past decade, our relationship with new technology has been tenuous. As early as 2012, a Pew Research study found that 54 percent of smartphone users chose not to download certain apps based on privacy concerns. A similar study in Great Britain in 2013 pegged that number at 66 percent. More recently, MusicWatch conducted a study on smart speaker use and found that 48 percent of respondents were concerned about privacy issues. As summarized by Digital Trends:

Nearly half of the 5,000 U.S. consumers aged 13 and older who were surveyed by MusicWatch (48 percent, specifically) said they were concerned about privacy issues associated with their smart speakers, especially when using on-demand services like streaming music.

Yet, despite our misgivings, technology marches on. Our concerns about smartphones have not slowed their growth, and MusicWatch found that 55 percent of people still reported using a smart speaker to stream music.

As Florian Schaub, a researcher studying privacy concerns and smart speaker adoption at the University of Michigan, is quoted in Motherboard:

What was really concerning to me was this idea that “it’s just a little bit more info you give Google or Amazon, and they already know a lot about you, so how is that bad?” It’s representative of this constant erosion of what privacy means and what our privacy expectations are.

We’ve been engaged in this tug-of-war for years, pitting that persistent feeling of concern at the back of our minds against our often burning desire for the new. The coming decade may prove a litmus test for our long-term relationship with technology.

For years we have chosen to trust corporations with our personal data. Maybe it’s a cultural vestige of the technological optimism of postwar America, or maybe we are so eager to reach the future we’ve been promised that we are operating on blind faith. But there are signs that our enthusiasm is cracking. As we continue to hand over more of ourselves to companies, and as more of them fail to handle that relationship with respect, does there come a point when our goodwill dries up? Will trust always be something we give, or will it become something that must be earned? At what point does the cost of adoption become too high?

“Why Technology’s Early Adopters Are Opting Out” was originally published in Medium on February 11, 2019.

I sit on a runway. It’s getting dark and it’s raining. The flight attendants say it’s time to switch our portable electronic devices to airplane mode. Most people ignore them. I’m quickly flipping through my “critical” apps one last time, getting in that final check before jetting off into a communication black hole. There is nothing new waiting in those apps and I know that; I checked less than a minute ago. But I check again anyway. We take off. I flip my phone to airplane mode. Soon we’ll be at our cruising altitude and it won’t matter what mode my phone is in; checking in will be off the table. I’m relieved.

The airplane is like a communication time warp. A throwback to an age when uninterrupted conversations could flow for extended periods of time. A time when we were comfortable just staring out the window, watching the world go by. A time when one might find themselves bored, with only their wandering thoughts to entertain them.

A healthy amount of idle time is not only good for us but makes us more creative and may be critical to our happiness.

If you live in a city and don’t actively travel into rural towns or wilderness, the airplane might be the only time you experience this kind of forced disconnection. It feels freeing. It feels like a weight is lifted. That little piece of your brain constantly preoccupied with what you might be missing finally gets a break. A brief rest before it is re-engaged the moment the wheels touch down at your destination.

We need that rest and disconnection. We need our thoughts to wander, unguided and unprompted. We need uninterrupted conversation. And, most importantly, we need extended moments of boredom and the creativity and introspection that come from them. Unfortunately, those moments are getting harder and harder to come by.

The last time I flew, there was Wi-Fi available on the plane. The modem happened to be down so we couldn’t connect, but it was there. Every plane will soon have Wi-Fi. Being 30,000 feet above the planet will no longer be an escape. We’ll all feel pressure to post our airplane window pics in real-time.

The spread of the internet is inevitable. Google and Facebook are already on a mission to bring reliable service to rural and developing areas, and that effort will only intensify. Soon access to the web will reach every corner of the globe.

This expansion, in and of itself, is not necessarily a problem. The problem is, once the entire world is connected, where will we go to get away? We need connection, but we also need solitude and silence. Our happiness and success depend on it.

The Importance of Being Bored

Boredom can be scary. With nothing around to distract our brains, we are alone with our thoughts. For many of us, this is uncomfortable—and for good reason. The feeling of boredom can actually cause us physiological stress.

As Mark Hawkins writes in his book The Power of Boredom, studies found that levels of the stress hormone cortisol were much higher among participants who felt bored than among those experiencing other emotions. And “psychologist Robert Plutchik has linked boredom to a form of disgust, similar to what we might feel when we smell rotten food.” Much of our physiological response to boredom drives us to want to avoid it, and we actively look for distraction to do that.

Over the centuries, we’ve devised a stunning array of options to fill our idle time: communal storytelling, performances and plays, sports, music, art, literature, games, films, etc. The flight from boredom has created the basis of much of our cultural history. So, boredom is repulsive, like smelly rotten food, and the pursuit of entertainment produces wonderful cultural treasures. This feels like a clear justification to eradicate boredom. But, like most things in life, it’s never that simple.

Boredom opens up space for pause and introspection.

Despite our aversion to boredom, it turns out that a healthy amount of idle time is not only good for us, but makes us more creative and may be critical to our happiness and emotional growth.

Studies have shown that boredom can drive increased creativity as your mind moves into a “seeking state.” This free-flowing state allows the brain to traverse seemingly unconnected thoughts, which can generate unforeseen connections and insights. The boost extends to creative problem-solving: people who were pushed into a state of boredom before tackling a problem found not only more creative solutions but also a wider range of possible ones. Given the magnitude and complexity of the problems society currently faces, the ability to devise creative solutions will only become more and more critical.

But that’s just the tip of the boredom iceberg. While increased creativity is a powerful side effect of idle time, it is not the most important. More important is the fact that boredom opens up space for pause and introspection. As Intel fellow Genevieve Bell put it, “Being bored is actually a moment when your brain gets to reset itself… Your consciousness gets to reset itself too.”

Hawkins echoes this sentiment:

Boredom is a special space in time that provides us with a bird’s eye view of life. The examination that boredom allows helps us steer our lives toward the best road possible.

Personal and, ultimately, societal growth come from individual introspection. Moments of introspection allow us to grapple with inner thoughts and process daily inputs. They create space to think critically about what we’ve seen, heard, and experienced, to form our own opinions, and to find those unexpected connections that help us see things in a different light. This process feeds our lifelong emotional development, helping us “steer our lives toward the best road possible.”

Without introspection, there is no space to question, consider, and form our own opinions. Without introspection, there is only space for reactionary responses and rote regurgitation of spoon-fed information. An increasingly divisive and deceptive world thrives when introspection and critical thinking are limited.

You can’t understand who you are and what you believe, let alone be able to understand someone else’s beliefs, if you don’t take time to think. We need to engage with our inner thoughts, but we can’t truly hear them unless we step into boredom. Embracing a healthy amount of idle time opens up deep opportunities to think, breathe and create connections.

We’ve always sought to escape boredom, but until recently, it was impossible to completely avoid it. For the majority of human history, much of our “in between” time was spent idle. Just thinking or talking or looking around. Today, internet-connected devices make it possible to fill every second of our time, and those activities—thinking, talking, looking—become more and more fleeting.

Sherry Turkle of MIT described this phenomenon in her book Reclaiming Conversation:

We say we turn to our phones when we’re “bored.” And often we find ourselves bored because we have become accustomed to a constant feed of connection, information, and entertainment. […] It all adds up to a flight from conversation—at least from conversation that is open-ended and spontaneous, conversations in which we play with ideas, in which we allow ourselves to be fully present and vulnerable.

When distraction is always a click away, it is our conversations, both inward and outward, that suffer most.

Disconnect to Reconnect

The internet is a large part of my life. I make a living designing digital products and teaching future product designers. I dedicate a lot of mental space to contemplating the impact of technology—both the good and the bad. There is so much positive about our web-enabled world, but the addictive nature of our devices has made it incredibly difficult for even the most resolute among us to truly pry ourselves away.

It’s easy to forget how quickly this has happened. I spent half my life internet-free and all but a quarter of it without a smartphone. Less than a decade ago, idle time was nearly impossible to avoid. Today, to have idle time—to reflect, to think, to breathe, to turn it off—requires a conscious choice. You either power down your devices or find a place the internet can’t reach. Fortunately, it is still possible to find those places, but they are fast disappearing.

The protection of our wild spaces represents one of the greatest public goods the U.S. has ever created.

Growing up, I spent a lot of time in the woods. As part of a family that prized the outdoors, we did everything from cabin camping to extended backpacking trips. At the time, I didn’t appreciate or understand what the wilderness represented. Maybe it was because everyday life was yet to be hyperconnected, so the woods didn’t feel all that different. But now that hyperconnection is the norm, the juxtaposition is stark.

The wilderness is a place of both deep solitude and deep connection. You are either alone with your thoughts or talking to the people you’re with. Those represent the full breadth of your options.

We desperately need those places. In an always-on world, with devices designed to pull so hard it’s difficult to break free, we need that forcing function. We need those moments where we mindlessly pull out our phone only to find no signal.

At the moment, despite our rapid advances, much of the wilderness is still that sanctuary. A place the internet can’t reach. Like a plane at 30,000 feet. The question is, how long will it stay that way?

Internet-Free Zones

In 1964, the United States Congress passed the Wilderness Act. The act created a legal definition of wilderness and now protects 110 million acres of land from human development. It defines wilderness as follows:

A wilderness, in contrast with those areas where man and his own works dominate the landscape, is hereby recognized as an area where the earth and its community of life are untrammeled by man, where man himself is a visitor who does not remain.

The protection of our wild spaces represents one of the greatest public goods the U.S. has ever created. A rare moment where we were able to understand there are things that supersede economic development and capitalist pursuits.

This wilderness preservation system provides areas across the country where people are given the opportunity to escape the modern world and step into a place of comparative solitude and silence—a last refuge for boredom and introspection.

In the 1960s, when the Wilderness Act was signed, the digital revolution was but a glimmer in the eye of just a handful of people, and only a few of them could have predicted where it would ultimately go. Today, the idea of a space “untrammeled by man” can no longer be defined as simply lacking physical development or resource exploitation; it must also include the absence of our expanding array of digital technologies.

A wilderness, in contrast to those areas dominated by man, should have no signal.

In 2017, there were 331 million visits to U.S. national parks, which is tied with 2016 for the most annual visits in history. People crave these spaces and the disconnection they provide. We’ve overplayed our hand in the war on boredom and the pendulum is starting to swing. There are technology-free summer camps for adults, devices to lock away your phones during events, and bars with built-in Faraday cages to block cell signals.

Introspection and conversation are not dependent on pristine landscapes alone; they are dependent on disconnection. We need to continue to protect our wild spaces from those elements of human creation that we can see, but also protect them from the elements we can’t see. A wilderness, in contrast to those areas dominated by man, should have no signal.

There are a number of ways this can be accomplished. It could be easements that require transmission towers to be certain distances away from designated areas. It could be no-fly zones for aerial transmitters or a requirement that those transmitters be programmed to cease transmission as they pass over specific areas. Or we could pursue large-scale signal jamming in designated zones.

We have legislative, historical, and cultural precedent for protecting and valuing lands and spaces that allow us to step away from the rush of modernity and stay the hand of human progress. We need these escapes and the introspective disconnection they provide. It’s time for us to consider expanding that precedent for the digital age by making the wilderness a place the internet can’t reach.

“Let’s Designate Internet-Free Zones” was originally published in Medium on November 28, 2018.

The veil of wonder that once gleamed around the internet has been lifted. Behind it, we’ve located the inconvenient truth about life online — it’s filled with fake news, trolling, cyberbullying, filter bubbles, echo chambers, and addictive technology. The honeymoon is over, as they say.

The ills of the web are the ills of society. They have existed, well, probably forever. Bullying, marginalization, violence, propaganda, misinformation — none of it is new. What is new is the scale and frequency enabled by the internet. The way the web works and, more importantly, the way we engage with it, has taken these issues and amplified them to 11.

Our public debate takes each issue separately, attempting to understand the root cause, mechanics, and solutions. We tweak algorithms in order to pop the filter bubble. We build features and ban accounts to curtail fake news. We ban instigators and require the use of real names to snuff out bullying. What is this approach missing? These problems are not actually separate. They are all symptoms of a deeper psychological phenomenon. One that lives at the core of human interaction with the web.

The Anonymity Paradox

The internet lives in a paradox of anonymity. It is at once the most public place we’ve ever created, but also one of our most private experiences.

We engage in the digital commons through glowing, personal portals, shut off from the physical world around us. When we engage with our devices, our brain creates a psychological gap between the online world and the physical world. We shift into a state of perceived anonymity. Though our actions are visible to almost everyone online, in our primitive monkey brains, when we log in, we are all alone.

This isn’t anonymity in the sense of real names versus fake names. The names we use are irrelevant. This is about a mental detachment from physical reality. The design of our devices acts to transport us into an alternate universe. One where we are mentally, physically, and emotionally disengaged from the real-world impacts of our digital interactions.

Though our actions are visible to almost everyone online, in our primitive monkey brains, when we log in, we are alone.

This is the same psychological phenomenon that we experience when we drive a car. The car is a vortex where time and accountability disappear and social norms no longer apply. We routinely berate other drivers, yelling at them in ways most of us never would if we found ourselves face-to-face. Speeding along with a sense of invincibility and little concern for any repercussions, we sing and dance and pick our noses as if no one can see us through the transparent glass. We talk to ourselves out loud, like crazy people, reliving (and winning) past arguments. Time bends and we lose track of how long we’ve been driving. Sometimes we get to where we’re going and don’t remember how we got there.

In this bubble of anonymity, the real world is Schrödinger’s cat, both existing and not existing at the same time. This paradox is why we flush with embarrassment when we suddenly become aware of another driver watching us dance. Or why road rage stories that end in tragedy are so unnerving to hear. It’s the real world popping our bubble. We’ve killed the cat and now there are consequences.

This is our life on the web. Every day we repeatedly drop in and out of an unconscious bubble of anonymity, being in the world and out of it at the same time. Our brains function differently in the bubble. The line between public and private becomes less distinguishable than we would like to admit, or maybe even realize. It is this paradox that drives the scale of the problems plaguing our beautiful internet.

Cyberbullying, Trolls, and Toxic Communities

Just like road rage, our digital bubble gives us the psychological freedom to unleash our innermost feelings. From the safety of our basement, desk, or smartphone screen our brains step into a space of perceived impunity, where repercussions are distant and fuzzy at best.

It doesn’t even matter where we physically are. Interacting with a digital device requires attentive processing. Your brain must be almost fully engaged. Mentally, it pulls you completely out of your current environment. If you’ve ever tried to converse with a person who is checking their phone, you know they’re all but gone until they look up. Like blinders on a horse, the physical world disappears and all our brain sees is the screen in front of us.

In this bubble, there are no social cues. No facial expressions, body language, or conversational nuance. The people we interact with are all but faceless. Even if we know them, the emotional gap created by the screen means our brain doesn’t have to consider the impact of our actions. In a face-to-face interaction, we have to assume the burden of the immediate emotional response of the other person. Online, our fellow users are temporarily relieved of their personhood, in the same way that our fellow drivers relinquish their personhood the moment we get behind the wheel. They become just another thing in the way of us getting from A to B.

As Robert Putnam described in his best-selling book Bowling Alone, “Good socialization is a prerequisite for life online, not an effect of it: without a real world counterpart, internet contact gets ranty, dishonest, and weird.”

In some ways, our online experiences mimic those of military drone pilots. Sitting in windowless rooms, staring at digital landscapes half a world away, drone pilots experience a war zone that both exists and doesn’t exist at the same time. This creates a bubble of anonymity between pilot and target.

To quote a piece from the New York Times:

The infrared sensors and high-resolution cameras affixed to drones made it possible to pick up… details from an office in Virginia. But… identifying who was in the cross hairs of a potential drone strike wasn’t always straightforward… The figures on-screen often looked less like people than like faceless gray blobs.

When our brain shifts into the bubble, it creates an artificial divide between ourselves and the people we interact with. They are text on screen, not flesh and blood. On top of that, because of the voyeuristic nature of the web, every interaction happens in front of an entire cast of individuals whom we never see, and that we may never know were there. We are increasingly living our lives through a parade of interactions with faceless gray blobs.

It’s easy to remove the human from the blob. This gives us permission to do and say all kinds of things online that we wouldn’t in real life. This same emotional gap is why it’s easier to break up with someone via text message than in a face-to-face conversation. Technology creates a psychological buffer. However, the buffer is only temporary. At some point, we come back to reality.

Drone pilots spend 12-hour shifts in a bubble of anonymous war. When their shift is over, they come home to their families and are forced to engage in the “normal” activities of the real world. This is in contrast to combat soldiers who live in a war zone and adjust their entire reality accordingly. Drone pilots are anonymous participants in a war that exists and doesn’t exist at the same time.

While most of us aren’t logging on to kill people, we are living similarly parallel lives. Dropping in and out of anonymity, engaging in interactions in an alternate universe. Interactions which, sometimes, even our closest loved ones are unaware of. Some of us make this switch hundreds of times a day.

But what about those of us who aren’t engaging? Most of us aren’t bullying or being bullied. What if we’re logging in just to watch?

For drone pilots, even watching a war anonymously from a distance has significant impacts. An NPR piece about reconnaissance drone pilots quotes military surgeon Lt. Col. Cameron Thurman on the emotional burden:

“You don’t need a fancy study to tell you that watching someone beheaded … or tortured to death, is gonna have an impact on you as a human being. Everybody understands that. What was not widely understood is the level of exposure that [pilots have] to that type of incident. We see it all.”

Even if we aren’t the ones being bullied or doing the bullying, we are all seeing it. Every day. Verbal abuse, violence on video, self-righteous shaming, condescension, belittlement, jealousy, posturing, and comparison. Our experience of the internet often feels private, but it is all happening on the world stage. Unlike road rage, which is usually confined to our little pod on four wheels, web rage is flung out into the universe, where the rest of us are forced to watch it all unfold from our own bubble. Processing it across a weird chasm of pixels and fiber optics. Anonymous observers in a world where the names are made up, but the problems are real. I’d say we’re only just beginning to understand the psychological impacts of this.

Technology Addiction

A lot has been written about our addiction to technology, especially through the lens of the habit-forming design of things like social media.

Psychologists break the formation of habits into three distinct components — a trigger, an action, and a reward. Something triggers (or reminds) you to take an action. You take the action. You get a reward. This habit cycle drives a surprising amount of our everyday behavior.

When we talk about the addictive nature of the web, we pay particular attention to the design of specific features within applications that deliver “hits of dopamine” (a neurotransmitter central to the brain’s reward system). These features include likes, hearts, shares, comments, and retweets, as well as feeds that constantly refresh, delivering little bits of new information at unpredictable intervals. Where this focus falls short is that it deals almost exclusively with the action and reward portions of the cycle. The action is checking your stats or refreshing your feed. The reward is new likes on your posts or new posts in your stream. But what about the trigger? What is initiating the cycle? You might say it’s notifications, but we are checking the web constantly with or without notifications. It is deeper than that.
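Those “unpredictable intervals” deserve special attention: they match what behavioral psychology calls a variable reward schedule, the pattern found to be the most habit-forming. Here is a toy simulation of the cycle under that schedule; the 30 percent payout rate is an arbitrary assumption for illustration:

```python
# Toy simulation of the trigger-action-reward cycle with a variable
# reward schedule: each "check" of the feed pays out unpredictably.
# The 30% payout rate is an arbitrary illustration, not a measured value.
import random

random.seed(42)            # reproducible illustration
REWARD_PROBABILITY = 0.3   # chance that any given check finds something new

def check_feed() -> bool:
    """Action: open the app. Reward: sometimes there's new content."""
    return random.random() < REWARD_PROBABILITY

checks = 20
rewards = sum(check_feed() for _ in range(checks))
print(f"{checks} checks, {rewards} rewards")
# Never knowing which check will pay off is precisely what
# keeps the checking behavior going.
```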

Our desire for escape is the trigger that drives our incessant checking of the web.

The bubble of anonymity provides something fundamental for people. It provides escape. It pulls you out of whatever real-world situation you are in and lets you forget about your life for a moment. Have you ever been relieved to just get in the car and drive? Our desire for escape is the trigger that drives our incessant checking of the web. Every time we want to get away, our new action is logging in. Whether we’re escaping from boredom, an awkward social situation, or the responsibilities of life, our digital devices give us an ever-present “out.” A portal to temporary anonymity, albeit only perceived.

This ability to temporarily “disappear” not only represents the trigger in our cycle, it is also our reward. Our addiction is less about the mini dopamine hits we get from social validation metrics and more about the escape. The dopamine hit from likes and new posts is just the final icing on the cake, reminding us that escape is always the right choice.

In online culture, the “1 percent rule” is a framework for thinking about activity in online communities. It breaks users into three tiers based on activity: creators, commenters, and lurkers. The idea is that 1 percent of people are creators. They drive the creation of all the new content in the community. Nine percent are commenters who actively engage with a creator’s content — liking, commenting, etc. The other 90 percent are lurkers who watch from the background.
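Applied to a hypothetical community of 10,000 users, the rule’s split works out as follows (a quick sketch; the percentages are the rule of thumb itself, not measured data):

```python
# The 1 percent rule applied to a hypothetical community of 10,000
# users. The percentages are the rule of thumb, not measured data.
community_size = 10_000
split = {"creators": 0.01, "commenters": 0.09, "lurkers": 0.90}

for role, share in split.items():
    print(f"{role:>10}: {int(community_size * share):,}")
# creators: 100, commenters: 900, lurkers: 9,000
```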

Whether these percentages are completely accurate doesn’t matter. What matters is the idea that the majority are not creating content or even actively engaging with content in online communities. This means that our addiction to these services cannot be driven solely by the dopamine hits created by social metrics. Most people are not using them. It has to be deeper than that. We’re addicted to the escape. We’re addicted to our perceived anonymity.

Fake News, Filter Bubbles, and Echo Chambers

Our conversations are becoming more divisive, our views more polarized. The 2016 election in the U.S. brought this into sharp relief. For many, the blame for this divide lies with the algorithms that serve us content.

In more and more web platforms, including almost all major social media services, content is served by algorithms. Fundamentally, this means a computer calculates which posts you’re most likely to engage with and shows you those, while hiding posts it thinks you won’t like. The goal is to deliver the best content, personalized for you.

The problem is that these algorithms are backward-looking. They calculate based on what you’ve done in the past: “Because you read this, you might also like this.” In algorithm world, past behavior determines future behavior. This means that algorithmically driven services are less likely to show you information that opposes your existing views. You probably didn’t engage with it in the past, so why would you in the future? So, your feed becomes an echo chamber, where everything you see supports what you already believe.
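Here is a deliberately minimal sketch of that backward-looking logic, using toy topic tags and engagement counts. Real services use learned models rather than hand counts, but the feedback loop is the same:

```python
# Minimal sketch of a backward-looking feed ranker: score each candidate
# post by how much it overlaps with topics the user engaged with before.
# Topic tags and counts are toy data, not any real service's model.
from collections import Counter

past_engagement = Counter({"politics-left": 9, "cycling": 4})

candidates = [
    {"title": "Lefty op-ed",    "topics": ["politics-left"]},
    {"title": "Righty op-ed",   "topics": ["politics-right"]},
    {"title": "New bike lanes", "topics": ["cycling", "politics-left"]},
]

def score(post: dict) -> int:
    # Past clicks on a topic stand in for predicted future clicks.
    return sum(past_engagement[t] for t in post["topics"])

for post in sorted(candidates, key=score, reverse=True):
    print(score(post), post["title"])
# The opposing view scores zero and sinks. The echo chamber falls out
# of the ranking rule itself; no one has to decide to hide anything.
```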

Algorithms feed one of our most primitive psychological needs. We are hardwired to seek out information that confirms our beliefs. This is known as confirmation bias.

From Psychology Today:

Confirmation bias occurs from the direct influence of desire on beliefs. When people would like a certain idea/concept to be true, they end up believing it to be true. They are motivated by wishful thinking. This error leads the individual to stop gathering information when the evidence gathered so far confirms the views (prejudices) one would like to be true.

We want our beliefs to be true. It can be hard, painful work to let go of a belief. This is why fake news is like jet fuel for content algorithms. It tells us exactly what we want to hear. If a service put opposing views in our face all the time, it could be emotionally painful. We might not come back to that service. From a business perspective, it makes sense to show us what we like.

The prevailing wisdom is that this constant reinforcing of our worldview kills open-mindedness, hardening our beliefs to a point where we are no longer able to find common ground with anyone who opposes them. As the repercussions of our online echo chambers become increasingly evident, there are calls to change the way we surface content in order to show more diverse perspectives. The idea is that a more diverse feed means a more open-minded worldview. The question is, would this work?

Fake news is like jet fuel for content algorithms. It tells us exactly what we want to hear.

In 2015, Facebook published a study suggesting that it is actually users who cause their own filter bubbles, not the Facebook algorithm. That we are the ones actively choosing to ignore or hide opposing views. At first blush, it’s easy to pass this off as a clear conflict of interest. Of course Facebook would say it’s us and not the algorithm. But it may not be so clear-cut.

We engage online in a bubble of psychological anonymity. Our reward is escape. If we are already hardwired to seek out information that supports our beliefs, and it is painful to be exposed to information that opposes them, of course we would do our own filtering.

The internet is a fire hose. It can be so overwhelming that sometimes we literally go numb. It is information hypersensitization. It is more than our brain can deal with. We’re here to escape, not to feel overwhelmed. So, we start turning off as much of the noise as possible. We reject anything that makes us feel uncomfortable.

Luckily for us, the internet is the perfect machine for supporting our existing beliefs. Communities of like-minded people are just a Google search away, no matter how niche our interests. Our bubble of anonymity frees our brain from any social pressures stopping us from indulging our innermost desires, no matter how subversive or extreme. On top of that, services have given us all the tools we need to sanitize our feeds. We can block, mute, flag, and unfollow. Combine all of it with an algorithm predisposed to reinforce our worldview and you have a perfect storm for polarization and radicalization.

Additionally, the way we process interactions online is different than the way we process them offline. A recent study found that Twitter users who were exposed to opposing views on the service actually became more rooted in their beliefs. This flies directly in the face of the prevailing wisdom about exposure to diverse views driving open-mindedness.

The internet is the perfect machine for supporting our existing beliefs.

While the study results may be true, the question is: Do they represent a natural human state? We operate online in a psychological bubble of anonymity. That bubble does not exist in the outside world. In the physical world, exposure to diverse views and experiences happens with real people. In those cases, our brain is operating in a completely different mode.

When we’re online, as far as our brain is concerned, we aren’t engaging with real people. Like when another driver notices you picking your nose, coming into contact with opposing views online pops our bubble of anonymity. It is a real-world intrusion into our alternative universe by some faceless gray blob. The psychological response is different. It is much more fight or flight than listen and consider.

The internet has become a ubiquitous presence in our lives. Its creation has shifted so much about our existence. Today, our paradigm for interacting with the web creates a psychological gap between the digital and physical worlds, dramatically altering the way we relate to each other and the way we relate to technology itself. How can we design the next phase of our technology so that it enhances our life in the world, as opposed to pulling us out of it?

Soon we will reach a technological inflection point, where we will spend more of our time engaged with the digital world than not. The outsize influence of this alternate universe we are building makes it incumbent upon us to think critically and openly about its impact on society.

Technology is not something that happens to us; it is something we choose to create. If we are intentional and transparent, we can learn from where we have been and work toward a technology future that brings us together, not one that drives us apart.

“A Unified Theory of Everything Wrong with the Internet” was originally published in Medium on September 17, 2018.

I recently read Woodrow Hartzog’s piece on facial recognition technology. The premise of the piece — that facial recognition is the perfect tool for oppression, and as such should be banned from further development — put a fine point on a question I’ve been pondering for a while:

Are all technological advances actually progress?

This doesn’t seem to be a question we ask.

We pray hard at the altar of technological optimism. Tapping away at our touch screens through rose-gold-colored glasses. Rarely do we step back and ask ourselves, ‘Is this really good for us?’ — at least not until long after the fact.

It can be hard to predict what will happen with new technology, but I’m in line with Hartzog that facial recognition feels like a technology worth questioning in its entirety. The dystopian storyline of oppression and persecution is just too obvious and too likely.

To be fair, there is a conversation happening about facial recognition, including some surprising calls for regulation from major companies developing the technology, like Microsoft. But the idea of regulation is about as far as we ever go, and by the time we get there the genie is so far out of the bottle that any legislation often becomes more of a symbolic victory than any real form of control.

We just move technology forward as fast as we can, call it progress, and then do our best (or not) to clean up the mess left in its wake. See Mark Zuckerberg’s congressional testimony for our most recent example.

Would we ever consider stopping the development of a new technology before we open Pandora’s box?

Technology drives itself forward with the same brutal mentality of colonizing explorers — if the land is there, it must be conquered.

At prestigious universities and companies across the country, rooms of twenty-something engineers practice the 21st century version of Manifest Destiny. Striving to conquer any technical challenge they find in front of them. Insulated by privilege and sorely lacking in diversity, it is questionable how much introspective thought these institutions give to the possible downsides of their work.

In the tech world, the development of facial recognition, along with so many other advances, is viewed as a foregone conclusion. ‘The technical capability is there, so we are going to develop it.’

This isn’t necessarily an attempt to be nefarious or destructive. Often, it’s done with good intentions. Unfortunately, as the old proverb goes, the road to hell is paved with good intentions.

Video manipulation technology is another great example. Developed by researchers at Stanford, it allows anyone to modify a video so that the face of the person in it does whatever the editor wants. It works with any webcam, and the results are nearly indistinguishable from reality.


Given what we’ve already seen with fake news and the ongoing erosion of truth, the negative implications of this type of technology are so obvious and terrible that we probably should have corked the bottle and buried it back in some forgotten cave somewhere. But we didn’t.

We had the technical capability to make it work, so we had to prove we could, right?

What if we chose not to prove it? Is there a point where we develop the fortitude to stop asking ourselves ‘could we’, and start asking ourselves ‘should we’?


Nothing New Under the Sun

The relentless march of technology has been one of humanity’s strongest historical through lines. And, throughout history, our response, if we have one at all, has been reactive. Of late, our go-to is regulation.

Even the most primitive technologies carried significant unintended consequences. In her book The Sixth Extinction, Elizabeth Kolbert lays out a strong case that small bands of early humans were able to hunt large mammals, like mastodons, to the point of extinction. This was not intentional overhunting; it was the outcome of our technological capabilities, like spears, that allowed us to unwittingly kill mastodons at a rate that outstripped their ability to reproduce, leading their species to collapse.

We’ve been struggling with the impact of our technology pretty much since day one.

Obviously the tricky part here is that technological progress is a double-edged sword. We literally wouldn’t be where we are today without it, and we won’t get to the future we all hope for if we stop. The problem is that the magnitude of the risks continues to escalate, but we refuse to change our approach.

The things we are developing now are more powerful and more distributed than ever before. When technology is accessible to everyone, reactive responses, like regulation, become all but irrelevant. We can barely control the proliferation of nuclear weapons technology, and the barrier to entry there is about as high as it gets. What chance do we have of regulating something like facial recognition, which is open source and can be implemented by anyone?

Companies have gotten so good at marketing us the benefits of new technologies that there is no room for any critical thought about possible negative impacts. If the average person thinks about facial recognition at all, they most likely think about it as a way to unlock their iPhone. They have no view of the bigger picture, and no idea what’s about to happen. Quite often this is by design.

Driving the adoption of new technology is all about conditioning. People are resistant to change. You can’t go too far too fast. You need to ease people into it. You start with something innocuous and useful, that plays off an existing behavior, like unlocking your phone, or sharing information with a small group of trusted friends.

To quote Mark Zuckerberg from 2005:

“I think that some decisions that we made early on to localize [Facebook] and keep it separate for each college on the network kept it really useful, because people could only see people from their local college and friends outside. That made it so people were comfortable sharing information that they probably wouldn’t otherwise.”

Social media-style sharing was not a thing when Facebook started. The idea was foreign and scary. We had to be eased into it. But once those initial activities become commonplace behavior, the gates are open for companies to push the boundaries and upend social norms. Again, this isn’t necessarily nefarious; it’s the process of adoption.

However, as consumers we continually allow ourselves to be sold a bill of goods without understanding the real price we’re about to pay. We buy, hook, line, and sinker, into the idea that the primacy of technology makes any possible risks acceptable, or even irrelevant. ‘You’re saying I don’t have to type numbers into my phone to unlock it anymore? I just look at it?! Say no more, sir!’

Because of all of this, our conversations about the downsides of technology always happen postmortem, and the debate focuses on how we bend society in order to live with our new reality, as opposed to how we bend our reality to create the society we want. As if technology is just some thing that happens to us, beyond our control.

Does there come a tipping point where the conversation changes? Could we ever choose to actively turn away from technological opportunities based on the inherent risks? Or will we just continue to ‘move fast and break things’, hoping for forgiveness later?

Alfred Nobel amassed great fame and fortune in his life, largely from the creation of dynamite and a number of other explosives. His work drove fantastic advancements in civil engineering, but also military arms, resulting in the deaths of untold numbers of people.

When Nobel’s brother died, a French newspaper mistakenly thought Alfred had died. It printed a front-page headline that read “The Merchant of Death Is Dead” and continued, “Dr. Alfred Nobel, who became rich by finding ways to kill more people faster than ever before, died yesterday.”

The paper’s mistake forced Nobel to reckon with his legacy and the legacy of his creations, and ultimately drove him to bequeath his fortune to establish the Nobel Prizes, first awarded in 1901, in an attempt to rectify his past and repair his reputation.

Today, the list of tech billionaires with large philanthropic pursuits continues to grow.

Similarly, Albert Einstein, whose letter to President Roosevelt helped set the atomic bomb program in motion, voiced deep regret for his participation:

“The release of atomic power has changed everything except our way of thinking… the solution to this problem lies in the heart of mankind. If only I had known, I should have become a watchmaker.”

What started with optimism and hope for peace ended with the realization that the end did not justify the means. But at that point it was too late.

More recently, Sean Parker, Facebook’s founding president, lamented the addictive design of social media, which he admits was intentional, calling himself a “social media conscientious objector” and saying, “God only knows what it’s doing to our children’s brains.”

Not a lot has changed in the last 117 years.

But if we don’t, someone else will.

This is an argument that serves to maintain the status quo of technological manifest destiny. ‘It is inevitable, so it might as well be us.’

Even Einstein fell prey to it:

“I made one great mistake in my life — when I signed the letter to President Roosevelt recommending that atom bombs be made. But there was some justification — the danger that the Germans would make them.”

Our global belief in the primacy and inevitability of technology makes this a valid argument and a legitimate concern. Someone else is probably going to do it. The question is, can we continue with this mentality unchecked, or will we eventually pay the price for it?

What is the thing that might tip the scale and force humanity to truly grapple with the hard questions? Is it facial recognition? Artificial intelligence? Genetic engineering?

Being able to have real, transparent debates about the risks and rewards of our technological pursuits has to be the next step in our growth as a species. We are at a point now where the power and scale of our capabilities can easily end us. Either through literal annihilation or the complete subversion of our societal structures.

With great power comes great responsibility — someone once said.

Our ability to create is the greatest power we’ve been given, but we handle it like hormonal teenagers — overconfident, naive and oblivious to consequences. Flush with smarts, but devoid of wisdom.

If we want to make it to the next phase of our existence we need to grow up. Not to stifle our progress, but to actually enable it.

We need to change our culture of technology to be one that is proactive and open to considering the downsides as much as the upsides, and we need to be willing to walk away when we determine that the risks outweigh the rewards.

This is not beyond our control. We have the power to change our course. Like the Google employees who refused to work on an AI contract for the Defense Department, we can drive conversations and make different choices.

Technology is not a thing that happens to us. Technology is a thing we choose to create.

If used wisely, technology can enable us to become the humans we desire to be. But, if we continue to allow ourselves to be blown by the winds of technological manifest destiny, we are going to find ourselves in a mess we won’t be able to clean up.

“Life, Liberty and the Pursuit of Technology” was originally published in Medium on August 27, 2018.