The veil of wonder that once gleamed around the internet has been lifted. Behind it, we’ve located the inconvenient truth about life online — it’s filled with fake news, trolling, cyberbullying, filter bubbles, echo chambers, and addictive technology. The honeymoon is over, as they say.
The ills of the web are the ills of society. They have existed, well, probably forever. Bullying, marginalization, violence, propaganda, misinformation — none of it is new. What is new is the scale and frequency enabled by the internet. The way the web works and, more importantly, the way we engage with it have taken these issues and amplified them to 11.
Our public debate takes each issue separately, attempting to understand the root cause, mechanics, and solutions. We tweak algorithms in order to pop the filter bubble. We build features and ban accounts to curtail fake news. We ban instigators and require the use of real names to snuff out bullying. What is this approach missing? These problems are not actually separate. They are all symptoms of a deeper psychological phenomenon. One that lives at the core of human interaction with the web.
The Anonymity Paradox
The internet lives in a paradox of anonymity. It is at once the most public place we’ve ever created and one of our most private experiences.
We engage in the digital commons through glowing, personal portals, shut off from the physical world around us. When we engage with our devices, our brain creates a psychological gap between the online world and the physical world. We shift into a state of perceived anonymity. Though our actions are visible to almost everyone online, in our primitive monkey brains, when we log in, we are all alone.
This isn’t anonymity in the sense of real names versus fake names. The names we use are irrelevant. This is about a mental detachment from physical reality. The design of our devices acts to transport us into an alternate universe. One where we are mentally, physically, and emotionally disengaged from the real-world impacts of our digital interactions.
This is the same psychological phenomenon that we experience when we drive a car. The car is a vortex where time and accountability disappear and social norms no longer apply. We routinely berate other drivers, yelling at them in ways most of us never would if we found ourselves face-to-face. Speeding along with a sense of invincibility and little concern for any repercussions, we sing and dance and pick our noses as if no one can see us through the transparent glass. We talk to ourselves out loud, like crazy people, reliving (and winning) past arguments. Time bends and we lose track of how long we’ve been driving. Sometimes we get to where we’re going and don’t remember how we got there.
In this bubble of anonymity, the real world is Schrödinger’s cat, both existing and not existing at the same time. This paradox is why we flush with embarrassment when we suddenly become aware of another driver watching us dance. Or why road rage stories that end in tragedy are so unnerving to hear. It’s the real world popping our bubble. We’ve killed the cat and now there are consequences.
This is our life on the web. Every day we repeatedly drop in and out of an unconscious bubble of anonymity, being in the world and out of it at the same time. Our brains function differently in the bubble. The line between public and private becomes less distinguishable than we would like to admit, or maybe even realize. It is this paradox that drives the scale of the problems plaguing our beautiful internet.
Cyberbullying, Trolls, and Toxic Communities
Just like road rage, our digital bubble gives us the psychological freedom to unleash our innermost feelings. From the safety of our basement, desk, or smartphone screen, our brains step into a space of perceived impunity, where repercussions are distant and fuzzy at best.
It doesn’t even matter where we physically are. Interacting with a digital device requires attentive processing. Your brain must be almost fully engaged. Mentally, it pulls you completely out of your current environment. If you’ve ever tried to converse with a person who is checking their phone, you know they’re all but gone until they look up. Like blinders on a horse, the physical world disappears and all our brain sees is the screen in front of us.
In this bubble, there are no social cues. No facial expressions, body language, or conversational nuance. The people we interact with are all but faceless. Even if we know them, the emotional gap created by the screen means our brain doesn’t have to consider the impact of our actions. In a face-to-face interaction, we have to assume the burden of the immediate emotional response of the other person. Online, our fellow users are temporarily relieved of their personhood, in the same way that our fellow drivers relinquish their personhood the moment we get behind the wheel. They become just another thing in the way of us getting from A to B.
As Robert Putnam described in his best-selling book Bowling Alone, “Good socialization is a prerequisite for life online, not an effect of it: without a real world counterpart, internet contact gets ranty, dishonest, and weird.”
In some ways, our online experiences mimic those of drone fighter pilots. Sitting in windowless rooms staring at digital landscapes half a world away, drone pilots experience a war zone that both exists and doesn’t exist at the same time. This creates a bubble of anonymity between pilot and target.
To quote a piece from the New York Times:
The infrared sensors and high-resolution cameras affixed to drones made it possible to pick up… details from an office in Virginia. But… identifying who was in the cross hairs of a potential drone strike wasn’t always straightforward… The figures on-screen often looked less like people than like faceless gray blobs.
When our brain shifts into the bubble, it creates an artificial divide between ourselves and the people we interact with. They are text on a screen, not flesh and blood. On top of that, because of the voyeuristic nature of the web, every interaction happens in front of an entire cast of individuals whom we never see and may never know were there. We are increasingly living our lives through a parade of interactions with faceless gray blobs.
It’s easy to remove the human from the blob. This gives us permission to do and say all kinds of things online that we wouldn’t in real life. This same emotional gap is why it’s easier to break up with someone via text message than in a face-to-face conversation. Technology creates a psychological buffer. However, the buffer is only temporary. At some point, we come back to reality.
—
Drone pilots spend 12-hour shifts in a bubble of anonymous war. When their shift is over, they come home to their families and are forced to engage in the “normal” activities of the real world. This is in contrast to combat soldiers who live in a war zone and adjust their entire reality accordingly. Drone pilots are anonymous participants in a war that exists and doesn’t exist at the same time.
While most of us aren’t logging on to kill people, we are living similarly parallel lives. Dropping in and out of anonymity, engaging in interactions in an alternate universe. Interactions which, sometimes, even our closest loved ones are unaware of. Some of us make this switch hundreds of times a day.
But what about those of us who aren’t engaging? Most of us aren’t bullying or being bullied. What if we’re logging in just to watch?
For drone pilots, even watching a war anonymously from a distance has significant impacts. An NPR piece about reconnaissance drone pilots quotes military surgeon Lt. Col. Cameron Thurman on the emotional burden:
“You don’t need a fancy study to tell you that watching someone beheaded … or tortured to death, is gonna have an impact on you as a human being. Everybody understands that. What was not widely understood is the level of exposure that [pilots have] to that type of incident. We see it all.”
Even if we aren’t the ones being bullied or doing the bullying, we are all seeing it. Every day. Verbal abuse, violence on video, self-righteous shaming, condescension, belittlement, jealousy, posturing, and comparison. Our experience of the internet often feels private, but it is all happening on the world stage. Unlike road rage, which is usually contained to our little pod on four wheels, web rage is flung out into the universe, where the rest of us are forced to watch it all unfold from our own bubble. Processing it across a weird chasm of pixels and fiber optics. Anonymous observers in a world where the names are made up, but the problems are real. I’d say we’re only just beginning to understand the psychological impacts of this.
Technology Addiction
A lot has been written about our addiction to technology, especially through the lens of the habit-forming design of things like social media.
Psychologists break the formation of habits into three distinct components — a trigger, an action, and a reward. Something triggers (or reminds) you to take an action. You take the action. You get a reward. This habit cycle drives a surprising amount of our everyday behavior.
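As a toy illustration, the trigger-action-reward loop above might be sketched in code. This is a hypothetical sketch with names of my own invention, not any real app's behavior; the variable reward is the key detail:

```python
# A minimal sketch of the trigger -> action -> reward habit loop.
# All names here are illustrative, not drawn from any real application.

import random

def habit_cycle(trigger: str) -> str:
    """One pass through the loop: a trigger prompts an action,
    and the action produces an unpredictable reward."""
    action = "open_feed"  # the action the trigger prompts us to take
    # The payoff varies each time, which is what makes the loop sticky.
    rewards = ["new_likes", "new_posts", "nothing_new"]
    return random.choice(rewards)

# Each reward (or near-miss) reinforces the loop, so the next
# trigger -- boredom, a notification, a lull -- starts it again.
print(habit_cycle("boredom"))
```

The sketch deliberately leaves the trigger as an open input, since the essay's argument is about what that trigger really is.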
When we talk about the addictive nature of the web, we pay particular attention to the design of specific features within applications that deliver “hits of dopamine” (a neurotransmitter central to the brain’s reward system). These features include likes, hearts, shares, comments, and retweets, as well as feeds that constantly refresh, delivering little bits of new information at unpredictable intervals. Where this focus falls short is that it deals almost exclusively with the action and reward portions of the cycle. The action is checking your stats or refreshing your feed. The reward is new likes on your posts or new posts in your stream. But what about the trigger? What is initiating the cycle? You might say it’s notifications, but we check the web constantly with or without notifications. It is deeper than that.
The bubble of anonymity provides something fundamental for people. It provides escape. It pulls you out of whatever real-world situation you are in and lets you forget about your life for a moment. Have you ever been relieved to just get in the car and drive? Our desire for escape is the trigger that drives our incessant checking of the web. Every time we want to get away, our new action is logging in. Whether we’re escaping from boredom, an awkward social situation, or the responsibilities of life, our digital devices give us an ever-present “out.” A portal to temporary anonymity, albeit only perceived.
This ability to temporarily “disappear” not only represents the trigger in our cycle, it is also our reward. Our addiction is less about the mini dopamine hits we get from social validation metrics and more about the escape. The dopamine hit from likes and new posts is just the final icing on the cake, reminding us that escape is always the right choice.
In online culture, the “1 percent rule” is a framework for thinking about activity in online communities. It breaks users into three tiers based on activity: creators, commenters, and lurkers. The idea is that 1 percent of people are creators. They drive the creation of all the new content in the community. Nine percent are commenters who actively engage with a creator’s content — liking, commenting, etc. The other 90 percent are lurkers who watch from the background.
Whether these percentages are completely accurate doesn’t matter. What matters is the idea that the majority are not creating content or even actively engaging with content in online communities. This means that our addiction to these services cannot be driven solely by the dopamine hits created by social metrics; most people never touch those metrics at all. It has to be deeper than that. We’re addicted to the escape. We’re addicted to our perceived anonymity.
Fake News, Filter Bubbles, and Echo Chambers
Our conversations are becoming more divisive, our views more polarized. The 2016 election in the U.S. brought this into sharp relief. For many, the blame for this divide lies with the algorithms that serve us content.
In more and more web platforms, including almost all major social media services, content is served by algorithms. Fundamentally, this means a computer calculates which posts you’re most likely to engage with and shows you those, while hiding posts it thinks you won’t like. The goal is to deliver the best content, personalized for you.
The problem is that these algorithms are backward-looking. They calculate based on what you’ve done in the past: “Because you read this, you might also like this.” In algorithm world, past behavior determines future behavior. This means that algorithmically driven services are less likely to show you information that opposes your existing views. You probably didn’t engage with it in the past, so why would you in the future? So, your feed becomes an echo chamber, where everything you see supports what you already believe.
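A backward-looking ranker of the kind described above can be sketched in a few lines. This is a hypothetical illustration with invented topic names and weights, not any platform's actual algorithm:

```python
# Hypothetical sketch of a backward-looking feed ranker.
# Topic names and engagement weights are invented for illustration.

def rank_feed(posts, past_engagement):
    """Score each post by how much the user engaged with its topics
    in the past, then show the highest-scoring posts first."""
    def score(post):
        # Past behavior determines future behavior: a post earns points
        # only for topics the user has already engaged with.
        return sum(past_engagement.get(topic, 0) for topic in post["topics"])
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "topics": ["politics_a"]},
    {"id": 2, "topics": ["politics_b"]},   # the opposing view
    {"id": 3, "topics": ["politics_a", "sports"]},
]
past_engagement = {"politics_a": 5, "sports": 2}  # no history with politics_b

for post in rank_feed(posts, past_engagement):
    print(post["id"])  # prints 3, 1, 2: the opposing view sinks to the bottom
```

Nothing in the loop hides post 2 outright; it simply never scores, so it always ranks last. That quiet starvation of unfamiliar topics is the echo chamber.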
Algorithms feed one of our most primitive psychological needs. We are hardwired to seek out information that confirms our beliefs. This is known as confirmation bias.
From Psychology Today:
Confirmation bias occurs from the direct influence of desire on beliefs. When people would like a certain idea/concept to be true, they end up believing it to be true. They are motivated by wishful thinking. This error leads the individual to stop gathering information when the evidence gathered so far confirms the views (prejudices) one would like to be true.
We want our beliefs to be true. It can be hard, painful work to let go of a belief. This is why fake news is like jet fuel for content algorithms. It tells us exactly what we want to hear. If a service put opposing views in our face all the time, it could be emotionally painful. We might not come back to that service. From a business perspective, it makes sense to show us what we like.
The prevailing wisdom is that this constant reinforcing of our worldview kills open-mindedness, hardening our beliefs to a point where we are no longer able to find common ground with anyone who opposes them. As the repercussions of our online echo chambers become increasingly evident, there are calls to change the way we surface content in order to show more diverse perspectives. The idea is that a more diverse feed means a more open-minded worldview. The question is, would this work?
In 2015, Facebook published a study suggesting that it is actually users who cause their own filter bubbles, not the Facebook algorithm. That we are the ones actively choosing to ignore or hide opposing views. At first blush, it’s easy to pass this off as a clear conflict of interest. Of course Facebook would say it’s us and not the algorithm. But it may not be so clear-cut.
We engage online in a bubble of psychological anonymity. Our reward is escape. If we are already hardwired to seek out information that supports our beliefs, and it is painful to be exposed to information that opposes them, of course we would do our own filtering.
The internet is a fire hose. It can be so overwhelming that sometimes we simply go numb, overexposed to more information than our brain can deal with. We’re here to escape, not to feel overwhelmed. So, we start turning off as much of the noise as possible. We reject anything that makes us feel uncomfortable.
Luckily for us, the internet is the perfect machine for supporting our existing beliefs. Communities of like-minded people are just a Google search away, no matter how niche our interests. Our bubble of anonymity frees our brain from any social pressures stopping us from indulging our innermost desires, no matter how subversive or extreme. On top of that, services have given us all the tools we need to sanitize our feeds. We can block, mute, flag, and unfollow. Combine all of it with an algorithm predisposed to reinforce our worldview and you have a perfect storm for polarization and radicalization.
Additionally, the way we process interactions online is different than the way we process them offline. A recent study found that Twitter users who were exposed to opposing views on the service actually became more rooted in their beliefs. This flies directly in the face of the prevailing wisdom about exposure to diverse views driving open-mindedness.
While the study results may be true, the question is: Do they represent a natural human state? We operate online in a psychological bubble of anonymity. That bubble does not exist in the outside world. In the physical world, exposure to diverse views and experiences happens with real people. In those cases, our brain is operating in a completely different mode.
When we’re online, as far as our brain is concerned, we aren’t engaging with real people. Just as being noticed by another driver pops the bubble in the car, coming into contact with opposing views online pops our bubble of anonymity. It is a real-world intrusion into our alternate universe by some faceless gray blob. The psychological response is different. It is much more fight or flight than listen and consider.
—
The internet has become a ubiquitous presence in our lives. Its creation has shifted so much about our existence. Today, our paradigm for interacting with the web creates a psychological gap between the digital and physical worlds, dramatically altering the way we relate to each other and the way we relate to technology itself. How can we design the next phase of our technology so that it enhances our life in the world, as opposed to pulling us out of it?
Soon we will reach a technological inflection point, where we will spend more of our time engaged with the digital world than not. The outsize influence of this alternate universe we are building makes it incumbent upon us to think critically and openly about its impact on society.
Technology is not something that happens to us, it is something we choose to create. If we are intentional and transparent, we can learn from where we have been and work toward a technology future that brings us together, not one that drives us apart.
—
“A Unified Theory of Everything Wrong with the Internet” was originally published on Medium on September 17, 2018.