I recently read Woodrow Hartzog’s piece on facial recognition technology. The premise of the piece — that facial recognition is the perfect tool for oppression, and as such should be banned from further development — put a fine point on a question I’ve been pondering for a while:
Are all technological advances actually progress?
This doesn’t seem to be a question we ask.
We pray hard at the altar of technological optimism, tapping away at our touchscreens through rose-gold-colored glasses. Rarely do we step back and ask ourselves, ‘Is this really good for us?’ — at least not until long after the fact.
It can be hard to predict what will happen with new technology, but I’m in line with Hartzog that facial recognition feels like a technology worth questioning in its entirety. The dystopian storyline of oppression and persecution is just too obvious and too likely.
To be fair, there is a conversation happening about facial recognition, including some surprising calls for regulation from major companies developing the technology, like Microsoft. But the idea of regulation is about as far as we ever go, and by the time we get there the genie is so far out of the bottle that any legislation becomes more of a symbolic victory than a real form of control.
We just move technology forward as fast as we can, call it progress, and then do our best (or not) to clean up the mess left in its wake. See Mark Zuckerberg’s congressional testimony for our most recent example.
Would we ever consider stopping the development of a new technology before we open Pandora’s box?
Technology drives itself forward with the same brutal mentality of colonizing explorers — if the land is there, it must be conquered.
At prestigious universities and companies across the country, rooms of twenty-something engineers practice the 21st-century version of Manifest Destiny, striving to conquer any technical challenge they find in front of them. Insulated by privilege and sorely lacking in diversity, these institutions give little apparent introspective thought to the possible downsides of their work.
In the tech world, the development of facial recognition, along with so many other advances, is viewed as a foregone conclusion. ‘The technical capability is there, so we are going to develop it.’
This isn’t necessarily an attempt to be nefarious or destructive. Often it’s done with good intentions. Unfortunately, as the old proverb goes, the road to hell is paved with good intentions.
Video manipulation technology is another great example. Developed at Stanford, it lets anyone with an ordinary webcam puppet the face of a person in an existing video, making them appear to say or do whatever the operator wants, with results that are nearly indistinguishable from reality.
Given what we’ve already seen with fake news and the ongoing erosion of truth, the negative implications of this type of technology are so obvious and terrible that we probably should have corked the bottle and buried it back in some forgotten cave somewhere. But we didn’t.
We had the technical capability to make it work, so we had to prove we could, right?
What if we chose not to prove it? Is there a point where we develop the fortitude to stop asking ourselves ‘could we’, and start asking ourselves ‘should we’?
Nothing New Under the Sun
The relentless march of technology has been one of humanity’s strongest historical through-lines. And throughout history, our response, if we have one at all, has been reactive. Of late, our go-to is regulation.
Even the most primitive technologies carried significant unintended consequences. In her book The Sixth Extinction, Elizabeth Kolbert lays out a strong case that small bands of early humans were able to hunt large mammals, like mastodons, to the point of extinction. This was not intentional overhunting; it was the outcome of technological capabilities, like the spear, that allowed us to unwittingly kill mastodons faster than they could reproduce, leading their species to collapse.
We’ve been struggling with the impact of our technology pretty much since day one.
Obviously the tricky part here is that technological progress is a double-edged sword. We literally wouldn’t be where we are today without it, and we won’t get to the future we all hope for if we stop. The problem is that the magnitude of the risks continues to escalate, but we refuse to change our approach.
The things we are developing now are more powerful and more widely distributed than ever before. When technology is accessible to everyone, reactive responses, like regulation, become all but irrelevant. We can barely control the proliferation of nuclear weapons technology, and the barrier to entry there is about as high as it gets. What chance do we have of regulating something like facial recognition, which is open source and can be implemented by anyone?
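To make ‘can be implemented by anyone’ concrete, here is a minimal sketch using the freely available, open-source face_recognition Python library. The filenames are hypothetical placeholders, but the shape of the code is real: matching a known face against an arbitrary photo takes roughly a dozen lines.

```python
# Minimal face-matching sketch using the open-source face_recognition
# library (pip install face_recognition). Filenames are hypothetical.
import face_recognition

# Build a numeric face "fingerprint" from a single reference photo.
# Assumes the photo contains at least one detectable face.
reference = face_recognition.load_image_file("reference_photo.jpg")
known_encoding = face_recognition.face_encodings(reference)[0]

# Compare every face found in another image, e.g. a frame pulled
# from any camera feed, against the reference fingerprint.
frame = face_recognition.load_image_file("camera_frame.jpg")
for candidate in face_recognition.face_encodings(frame):
    if face_recognition.compare_faces([known_encoding], candidate)[0]:
        print("Match: the person appears in this frame.")
```

Point that loop at a stream of camera frames and you have the skeleton of a surveillance pipeline, with no lab, no license, and no budget required. That is the regulatory problem in miniature.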
Companies have gotten so good at marketing the benefits of new technologies to us that there is no room left for critical thought about possible negative impacts. If the average person thinks about facial recognition at all, they most likely think of it as a way to unlock their iPhone. They have no view of the bigger picture, and no idea what’s about to happen. Quite often, this is by design.
Driving the adoption of new technology is all about conditioning. People are resistant to change; you can’t go too far too fast. You need to ease people into it. You start with something innocuous and useful that plays off an existing behavior, like unlocking your phone or sharing information with a small group of trusted friends.
To quote Mark Zuckerberg from 2005:
“I think that some decisions that we made early on to localize [Facebook] and keep it separate for each college on the network kept it really useful, because people could only see people from their local college and friends outside. That made it so people were comfortable sharing information that they probably wouldn’t otherwise.”
Social-media-style sharing was not a thing when Facebook started. The idea was foreign and scary. We had to be eased into it. But once those initial activities become commonplace behavior, the gates are open for companies to push the boundaries and upend social norms. Again, this isn’t necessarily nefarious; it’s the process of adoption.
However, as consumers we continually allow ourselves to be sold a bill of goods without understanding the real price we’re about to pay. We swallow, hook, line, and sinker, the idea that the primacy of technology makes any possible risks acceptable, or even irrelevant. ‘You’re saying I don’t have to type numbers into my phone to unlock it anymore? I just look at it?! Say no more, sir!’
Because of all of this, our conversations about the downsides of technology always happen postmortem, and the debate focuses on how we bend society in order to live with our new reality, as opposed to how we bend our reality to create the society we want. As if technology is just some thing that happens to us, beyond our control.
Does there come a tipping point where the conversation changes? Could we ever choose to actively turn away from technological opportunities based on the inherent risks? Or will we just continue to ‘move fast and break things’, hoping for forgiveness later?
Alfred Nobel amassed great fame and fortune in his life, largely from the creation of dynamite and a number of other explosives. His work drove fantastic advancements in civil engineering, but also military arms, resulting in the deaths of untold numbers of people.
When Nobel’s brother died, a French newspaper mistakenly thought Alfred had died and printed a front-page headline reading “The Merchant of Death is Dead”. It continued, “Dr. Alfred Nobel, who became rich by finding ways to kill more people faster than ever before, died yesterday.”
The paper’s mistake forced Nobel to reckon with his legacy and that of his creations, ultimately driving him to provide for the Nobel Prizes in his will, in an attempt to rectify his past and repair his reputation. The first prizes were awarded in 1901.
Today, the list of tech billionaires with large philanthropic pursuits continues to grow.
Similarly, after signing the letter that helped set the development of the atomic bomb in motion, Albert Einstein voiced deep regret for his role:
“The release of atomic power has changed everything except our way of thinking…the solution to this problem lies in the heart of mankind. If only I had known, I should have become a watchmaker.”
What started with optimism and hope for peace, ended with the realization that the end did not justify the means. But at that point it was too late.
More recently, Sean Parker, Facebook’s founding president, lamented the addictive design of social media, which he admits was intentional. He calls himself a social media ‘conscientious objector’ and says, “God only knows what it’s doing to our children’s brains.”
Not a lot has changed in the last 117 years.
But if we don’t, someone else will.
This is an argument that serves to maintain the status quo of technological manifest destiny. ‘It is inevitable, so it might as well be us.’
Even Einstein fell prey to it:
“I made one great mistake in my life, when I signed the letter to President Roosevelt recommending that atom bombs be made. But there was some justification: the danger that the Germans would make them.”
Our global belief in the primacy and inevitability of technology makes this a valid argument and a legitimate concern. Someone else is probably going to do it. The question is, can we continue with this mentality unchecked, or will we eventually pay the price for it?
What is the thing that might tip the scale and force humanity to truly grapple with the hard questions? Is it facial recognition? Artificial intelligence? Genetic engineering?
Being able to have real, transparent debates about the risks and rewards of our technological pursuits has to be the next step in our growth as a species. We are at a point now where the power and scale of our capabilities could easily end us, whether through literal annihilation or the complete subversion of our societal structures.
With great power comes great responsibility — someone once said.
Our ability to create is the greatest power we’ve been given, but we handle it like hormonal teenagers — overconfident, naive and oblivious to consequences. Flush with smarts, but devoid of wisdom.
If we want to make it to the next phase of our existence we need to grow up. Not to stifle our progress, but to actually enable it.
We need to change our culture of technology to be one that is proactive and open to considering the downsides as much as the upsides, and we need to be willing to walk away when we determine that the risks outweigh the rewards.
This is not beyond our control. We have the power to change our course. Like the Google employees who refused to work on an AI contract for the Defense Department, we can drive conversations and make different choices.
Technology is not a thing that happens to us. Technology is a thing we choose to create.
If used wisely, technology can enable us to become the humans we desire to be. But, if we continue to allow ourselves to be blown by the winds of technological manifest destiny, we are going to find ourselves in a mess we won’t be able to clean up.
—
“Life, Liberty and the Pursuit of Technology” was originally published on Medium on August 27, 2018.