Technology

"Technology is everything that doesn't work yet." – Danny Hillis

More Human

We're not sure exactly when humans mastered fire and started using it to cook. We may have started as far back as 2 million years ago, though cooking likely wasn't widespread until around 200,000 BCE.

But we do know that cooking increased the calories we got from our food, and those calories likely fueled the evolution of our brains, turning us into Earth's most intelligent creatures.

This is widely considered one of our greatest accomplishments. Fire — and technology as a whole — is a big part of what makes us human.

But learning to control fire also exposed us to the dangers of fire. Without proper precautions, knocking over a single candle can level entire neighborhoods.

Every new technological advance since fire has come with its own benefits and risks. And with each invention, we've reoriented our societies around that advance to minimize the downsides and better leverage the upsides. We owe much of our modern existence to things like building codes, nuclear treaties, and seat belts — all of which made new technologies safer.

Today, we're on the cusp of what may become one of the most significant technologies we've ever created: artificial intelligence.

Google CEO Sundar Pichai told the World Economic Forum in 2020, "AI is one of the most profound things we're working on as humanity. It's more profound than fire or electricity."

We don't know yet what risks AI will bring. If it ever reaches human levels of intelligence, AI may be capable of improving itself. And because computers can work much faster than us, a human-level AI might be able to transform itself into something much smarter than us virtually overnight.

What happens when an entity at least as smart as us, but without the same rationality and ethics that we take for granted in humans, becomes ubiquitous? And how will we navigate a world where we are no longer the smartest beings?

That world is fast approaching: Computers are already better than us at games like chess and Go, and GPT-4 scores in the top 15% on many standardized exams.

History tells us that we'll continue to adjust and find ways to offset the risks. If Pichai's prediction holds true — that AI will be more profound than fire — it will help us unlock the universe's secrets, generating new knowledge at an unprecedented scale and unleashing our creativity as a species.

It will make us smarter. And more human.

Becoming Active Allies

When I was in elementary school, we participated in a series of workshops designed to help us navigate bullying. We'd participate in skits to act out what confronting a bully on the playground might look like and discuss strategies for intervening.

In one of these classes, we talked about the difference between being an ally and a bystander. We used a chart like this one to plot the different roles you might play when encountering conflict:

Chart with check marks and an "X" indicating which roles you can play in a conflict

The purpose of this table, of course, was to illustrate the fact that it is impossible to be a passive ally. You can be an active ally by confronting the bully or an active bystander by egging them on. But if you are standing on the edge of a fight, silently supporting the recipient of bullying, you are not a passive ally — you're a passive bystander.

As adults, this is perhaps obvious when spelled out so explicitly. But there are many places outside of bullying where this framework is useful.

Consider social networks, for example: In the beginning, these networks tried to argue that they were simply neutral channels. In the same way an ISP doesn't prevent you from seeing racist content or harassing behavior online, social network execs argued they were simply promoting free speech — that they were not in the business of making editorial decisions about which content gets amplified or suppressed. (There were exceptions, of course, for things like nudity and illegal speech.)

Over time, however, we discovered that this neutrality was a mirage, because misinformation and inflammatory content typically get higher engagement. And algorithms aren't objective arbiters free of the biases that affect their creators.

Social networks were trying to be passive bystanders. And even as they've evolved to play heavier moderator roles over time, these "neutral" platforms continue to struggle with harassment, which disproportionately alienates women and minorities — exactly the voices that the internet should be amplifying.

The internet (and technology as a whole) magnifies human nature — both the good and the bad. But we can design and use technology in ways that reward the former.

That process starts with us serving as active allies for the world we want to live in — not passive bystanders to the status quo.

Innovation vs. Distribution

In the late 1970s, computer interfaces were mostly text-based. A simple, unwelcoming prompt let you control the computer by typing individual text commands.

If you knew how to program, this new tool was incredibly powerful. But for the vast majority of people who didn't, computers were inaccessible.

Enter the graphical user interface (GUI), which Xerox pioneered at its Palo Alto Research Center (PARC). Suddenly, everyday users could manipulate on-screen elements using a mouse.

We take this innovation for granted, but at the time, it wasn't obvious to everyone that this was the future. To some, it looked like a toy — incapable of doing real work.

But as the story goes, Steve Jobs understood the GUI's power the first time he laid eyes on it during a tour of Xerox's facility. The story is mostly apocryphal, but there's a kernel of truth to it: Jobs, along with many talented engineers and designers at Apple, recognized the GUI as a powerful layer of abstraction. They believed that layer could enable everyday people to leverage the power of a computer.

Though Xerox PARC pioneered the computer interface that billions of people would one day use, Apple and Microsoft ultimately brought it to the masses.

So who gets credit? Certainly Xerox deserves historical recognition. But invention is useless if it can't be put in the hands of the people who need it.

This tension between innovation and distribution is also present in vaccine development. Creating an effective mRNA vaccine, for example, is a modern miracle.

Also challenging, however, is the process of getting shots into billions of people's arms in every corner of the globe. It's a herculean effort and a staggering logistical operation spanning supply chains, transportation, public messaging, IT, and more.

Ideas and invention are priceless. But only if they can be scaled to effect change in the world.

Hat tip to Steven Johnson for the vaccines example.

Online Engagement

Some YouTubers intentionally add typos to their videos in order to extend their reach. By subtly stoking outrage over minor "mistakes," creators can boost their engagement.

The internet gives marginalized voices a platform, breaks down barriers, and ultimately makes us more human.

But what happens when the platforms that run on it incentivize us to deliberately lower the quality of our work? That's a signal it's time to reimagine how the work we create gets prioritized and shared.

Humility in the Face of Progress

One of the primary reasons to study history is to put our current moment in context. By looking backward, we can learn what is possible when we turn to look ahead.

The New Deal, for example, serves as a model for the kind of sweeping legislation that is theoretically possible for America to enact. Amidst rampant inequality, the New Deal reminds us that we have the capacity to fundamentally reshape our society, if we choose.

This holds true for technology as well. Consider the fact that these two images were taken just 66 years apart:

Image of the Wright brothers' first flight next to an astronaut standing by an American flag on the moon

Mathematical physicist Lord Kelvin famously commented that “heavier-than-air flying machines are impossible” in 1895, just eight years before the Wright brothers' first flight.

And Richard van der Riet Woolley, who would later become Britain's Astronomer Royal, dismissed spaceflight as "utter bilge" and wrote in a 1936 review of P.E. Cleator's book Rockets Through Space:

The whole procedure [of shooting rockets into space] . . . presents difficulties of so fundamental a nature, that we are forced to dismiss the notion as essentially impracticable, in spite of the author's insistent appeal to put aside prejudice and to recollect the supposed impossibility of heavier-than-air flight before it was actually accomplished.

Twenty-one years after Woolley — an expert! — wrote those words, the Soviet Union launched Sputnik into space.

Reflecting on technological progress reminds us that things that look impossible today might be commonplace just decades from now. This perspective invites a certain humility when assessing the current landscape.

The most remarkable thing about Woolley's comments is that he explicitly acknowledged that heavier-than-air flight had once been dismissed as impossible and had since become reality. Still, he rejected even the possibility of spaceflight.

Today, technologies like blockchains and augmented reality might look relatively useless. But as Clay Christensen pointed out, new technologies are often dismissed as toys.

And the rate of progress is only accelerating. Someone currently in their 20s or 30s might reasonably expect to witness even more dramatic developments than the progression from the first flight to landing on the moon.

To deny that possibility is to ignore the incredible innovation that has come before.

Fear and Technology

In 1939, Hitler's invasion of Poland marked the beginning of the most devastating war in human history. It started on horseback and concluded with the detonation of nuclear weapons only six years later.

When the Soviets launched Sputnik in 1957, the US responded by forming NASA and landing a man on the moon just over a decade later.

And when a novel and deadly virus started spreading rapidly in Wuhan in December of 2019, scientists sequenced and published its genome online mere days after its discovery.

Fear is a powerful motivator. It may not always be the most effective at inspiring action in those you lead, but it's a major catalyst for technological innovation.

And even on the individual level, sometimes a little bit of fear can help us rise to the occasion.

Hat tip to Morgan Housel for sharing these examples.

The Long Tail

If you turned on the radio in 1948, you could listen to anything you wanted, as long as what you wanted was being broadcast by your local station.

It was probably Bing Crosby.

Until recently, most of the media you had access to was created for a mass audience.

The internet turned this on its head. As a creator, your work no longer needs to appeal to the largest possible audience, because sharing is free and discovery is easy.

Instead, you can create for the long tail, which refers to the wide range of smaller niches that are now accessible. Here's a graph of what that distribution looks like:

Graph of a long-tail distribution (picture by Hay Kranen / PD, via Wikimedia)

The long tail refers to the yellow part above — the tail-like extension of lower frequency phenomena.
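To get a feel for how much weight that yellow tail can carry, here's a small illustrative sketch in Python (not from the original piece; the catalog size and power-law exponent are arbitrary assumptions) that models item popularity and measures how much consumption happens outside the hits:

```python
import numpy as np

# Illustrative model: popularity falls off with rank following a power law.
# The catalog size and exponent below are arbitrary assumptions.
num_items = 1_000_000                 # e.g., every track in a streaming catalog
ranks = np.arange(1, num_items + 1)
popularity = 1.0 / ranks ** 1.1       # long-tail (Zipf-like) decay

# Compare the "head" (the top 1,000 hits) with everything in the tail.
tail_share = popularity[1000:].sum() / popularity.sum()
print(f"Share of total plays outside the top 1,000 hits: {tail_share:.0%}")
```

Under these made-up numbers, the tail accounts for roughly a third of all plays; the exact share depends on the exponent, but the point stands: the niches add up.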

Though the term has been used in statistics for decades, Chris Anderson popularized it in a 2004 Wired article. He writes:

Forget squeezing millions from a few megahits at the top of the charts. The future of entertainment is in the millions of niche markets at the shallow end of the bitstream.

Today, in a world of nearly infinite information sources and entertainment options, you can find and listen to exactly what you want.

If you need the perfect soundtrack for sitting on your porch, look no further than Spotify's front porch playlist.

Need something for the back porch instead? They've got you covered there too.

The implications of this access to entertainment and information go far beyond music. By enabling niche communities to assemble online, the web allows people to find those who are similar.

For example, transgender teens living in rural communities can discuss their experience together. COVID long haulers can compare symptoms with people who have related issues. And if you need help building a hurdy-gurdy, a hand-cranked, 10th century fiddle, then you're in luck.

By connecting us with the long tail of communities that fulfill niche needs and interests, the internet shapes our identity and helps culture and knowledge flourish.

In other words, the internet makes us more human.

A Brief History of Moral Panics

Throughout history, each development in the storage and distribution of information has caused considerable consternation.

Socrates worried about the invention of writing:

The specific which you have discovered is an aid not to memory, but to reminiscence, and you give your disciples not truth, but only the semblance of truth; they will be hearers of many things and will have learned nothing; they will appear to be omniscient and will generally know nothing; they will be tiresome company, having the show of wisdom without the reality.

And he was not alone! Plato was more succinct in expressing his fear:

Writing is a step backward for truth.

These concerns only increased when the printing press came along centuries later. Here's German polymath Gottfried Wilhelm Leibniz's reaction in 1680:

The horrible mass of books that keeps growing might lead to a fall back into barbarism.

And, of course, the 20th century's rise of mass media is not exempt from these issues.

U.S. Senator Clyde L. Herring railed against radio in the 1930s, proposing that the FCC should review transcripts of broadcasts to ensure that they were not against public interest. He wrote:

Radio invades the sanctity of the home.

And Neil Postman, who published Amusing Ourselves to Death in 1985, captured some of the concerns about television:

[T]elevision is altering the meaning of ‘being informed’ by creating a species of information that might properly be called disinformation. Disinformation does not mean false information. It means misleading information—misplaced, irrelevant, fragmented or superficial information—information that creates the illusion of knowing something but which in fact leads one away from knowing.

Replace "television" with "social media," and Postman's quote captures today's concerns about the internet.

There is certainly cause for concern: We've never before had information feeds that algorithmically adjust themselves to each user's preferences.

But when we look at the precedents, we can see that we've successfully adjusted to every other shift in how information is distributed.

As I was writing this, Twitter suspended Congresswoman Marjorie Taylor Greene's personal account for spreading COVID misinformation and violating its policies for the fifth time.

We're adapting.

History doesn't repeat itself, but it rhymes.

Below the API

As society becomes increasingly digitized, the way we work is fundamentally changing.

This is most obvious in the gig economy, where workers often don't have a manager they report to. They have an app.

15 years ago, cab drivers needed a local dispatcher to instruct them where to go. Now, programmers' code pairs them with a rider and even provides a specific route to take.

Peter Reinhardt pointed out this stripping away of middle management back in 2015. Jobs increasingly fall into one of two categories: Either you tell computers what to do for a living or they tell you what to do.

Reinhardt describes this stratification as "above and below the API," which stands for "application programming interface." It refers to the software layer between two applications that allows them to talk to each other.

APIs are powerful because they're part of what enables us to abstract away problems, making future innovation easier.

But when a human is effectively on the other side of that API, not another piece of software, the fabric of our society starts to change on a foundational level.
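To make that concrete, here's a hypothetical sketch in Python. The service, endpoint, and field names are invented for illustration (they don't belong to any real gig platform), but the shape is familiar: from the programmer's side, dispatching a driver looks like any other web request.

```python
import requests

# Hypothetical dispatch API -- the URL and fields are invented for illustration.
# To the code, this is an ordinary request/response cycle. The difference is
# that the "resource" being scheduled on the other side of the API is a person.
response = requests.post(
    "https://api.example-rides.test/v1/dispatch",
    json={
        "pickup": {"lat": 37.7749, "lng": -122.4194},
        "dropoff": {"lat": 37.8044, "lng": -122.2712},
    },
    timeout=10,
)
job = response.json()
print(job["driver_id"], job["eta_minutes"])  # a human receives these instructions
```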

What happens when computers are given increasing levels of autonomy? And when you pair that autonomy with "real-world APIs," what happens to our communities when more and more of our jobs are managed by instructions we've given to a computer?

These questions will remain unanswered for quite some time. But significant change is coming, and there are two ways we can approach it.

On the one hand, we can dig our heels in and resist this evolution in labor. Certainly, some caution is merited, given the inherent risks involved in such profound change and the unknowns surrounding artificial intelligence.

But we can also see it as a generational opportunity to rethink how work gets done, how resources get distributed, and what kind of world we want to live in.

In other words, we have a chance to reprogram society — hopefully for the better.

Let's not waste it.

Abstraction

There's a popular activity in computer science 101 classes that involves no computers. It's simple to set up: The professor stands in front of the class with a jar of peanut butter, a jar of jam, a bag of bread, and a knife. Then she asks the students to tell her how to make a PB&J sandwich.

Inevitably, a student offers an instruction like "put the peanut butter on the bread," and watches in dismay as the professor picks up the entire jar of unopened peanut butter and places it on top of the bread — the same way a computer would if given only that command.

Computers are stupid.

The peanut butter exercise is effective because the real goal of any good introductory computer science course isn't to teach you how to program a computer. It's to teach you how to think like one, so that you can more effectively break down problems into steps for the computer to take.

Computers are literal machines and have to be told in excruciating detail exactly what to do:

"Pick up the peanut butter jar with your left hand. Using your other hand, twist the cap off the jar. Then pick up the knife and insert it into the top of the jar..."

Computers have gotten better over time because programmers and end users have gained ever-higher levels of abstraction that make them easier to use. At their core, computers operate in binary — millions of ones and zeros that drive their logic.

Telling a computer what to do with only ones and zeros is really hard, so we devise ways of typing instructions that are a little closer to natural language. Every programming language is a layer of abstraction on top of the underlying binary. For example, the simple command print('Hello, World!') in the language Python does roughly what you'd expect: It displays the text "Hello, World!"

But here's where things get fun: Sets of these commands can be combined into packages and shared. So instead of having to painstakingly write the instructions for making a sandwich, you can find someone else who has released their code for doing that and use theirs. Now you can just tell the computer to "make me a sandwich."
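As a toy Python sketch (every function name here is invented for illustration), the difference between spelling out each step and calling a shared abstraction looks something like this:

```python
# All of these functions are toy stand-ins invented for illustration.

def spread(ingredient: str, onto: str) -> None:
    """One excruciatingly specific low-level step."""
    print(f"Spreading {ingredient} onto {onto}")

def stack(top: str, bottom: str) -> None:
    print(f"Placing {top} on top of {bottom}")

# Without abstraction, the caller spells out every single step:
spread("peanut butter", onto="bread slice 1")
spread("jam", onto="bread slice 2")
stack("bread slice 2", bottom="bread slice 1")

# With abstraction, someone packages those steps once, and everyone else
# gets to issue the high-level instruction instead:
def make_sandwich(*fillings: str) -> None:
    for i, filling in enumerate(fillings, start=1):
        spread(filling, onto=f"bread slice {i}")
    stack("bread slice 2", bottom="bread slice 1")

make_sandwich("peanut butter", "jam")
```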

Abstracting away problems in this way is the engine of progress.

Enter the digital revolution: Entire companies are now built on assembling publicly available (open source) solutions to problems, adding some of their own proprietary code, and releasing additional layers of abstraction into the world.

The payment processing company Stripe, for example, makes it famously easy to add payments to your app or service. Now, instead of rolling your own payment processing, you can just type something like this:
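The snippet below is a minimal sketch in the spirit of the example shown on Stripe.com, written against the Python stripe library's older Charge API; the API key is a placeholder, and Stripe amounts are specified in cents.

```python
import stripe  # Stripe's official Python library

stripe.api_key = "sk_test_placeholder"  # placeholder test key, not a real secret

# Create a $2,000.00 charge. Stripe amounts are specified in cents.
charge = stripe.Charge.create(
    amount=200000,
    currency="usd",
    source="tok_visa",            # one of Stripe's built-in test card tokens
    description="Example charge",
)
print(charge["status"])
```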


If you're not a programmer, your eyes might glaze over when you first look at the above code. But if you take a moment to try to read it, you can probably guess what it does: Tell Stripe to charge $2,000.

This process of abstracting away problems also applies to the analog world.

Nobody in the world knows every step of manufacturing a pencil from scratch. You'd have to know how to chop the wood, mine the graphite and aluminum, harvest the rubber, and cut, melt, and assemble it all into the final product. And you'd also need to know how to make all the tools for doing that!

If you were starting a pencil making company today, you'd just find suppliers who have already figured out the process of extracting the natural resources. Those suppliers have effectively created a layer of abstraction, just like Stripe has. Instead of going out and mining your own graphite, you'd just call up Graphite Mining Inc. and tell them to please send you 100 boxes.

One of the ultimate measures of progress is the number of problems we've effectively gotten rid of by abstracting them away.

In "New Technology" I wrote about how this applies to us personally: "When the solutions to our problems become so reliable they recede into the background of our lives, civilization takes a step forward."

So when you're faced with a new problem, ask yourself: Who has already solved part of this?
