Last month I wrote an article about the way that technology impacts the nature of human work. (You can read that article here if you care to, which you should – it’s awesome.) But I realized later that nothing I said is especially valuable if computers eventually destroy all of humanity. And since that particular apocalypse inevitably gets brought up whenever we invent another AI something, I thought it worth some time to think about how (and whether) our technological advances will ever end up destroying us.
How does that relate to your business? You’ll see if you keep reading!
Fundamentally, every advancement we have ever considered involves a collection of potential benefits and a different collection of potential drawbacks. Social media, for example, promised a full democratization of information-sharing without restrictive gatekeepers or oppressive regimes telling us what to read and think. That is a definite positive. At the same time, social media has allowed misinformation, faulty science, lies and hatred to proliferate to a degree unimaginable before its invention. That is a definite negative.
So, should we shut down all social media forever? Probably not, since we would miss out on all its advantages. Should we allow people to post absolutely anything without any regard for the consequences of those posts? Probably not, since that can lead (and has certainly led) to some pretty negative outcomes.
The solution, then, is the same solution we always come to whenever we have to make decisions about things that offer both positives and negatives – to sacrifice some amount of upside in order to eliminate as much downside as possible. That’s why we have stoplights and speed limits (which restrict our freedom to drive however fast and recklessly we want!!!) and it’s why we have laws (which restrict our freedom to steal whenever we feel like it!!!). Social media is no different. An easy solution, for example, would be to penalize social media companies for hosting inaccurate or incendiary content, the same way that book publishers can be held liable for printing defamatory or otherwise unlawful books. Would that reduce the number of social media posts (and profits)? Absolutely. Is that an acceptable trade-off if it means not being bombarded with misinformation? Many of us would probably say yes.
The same is true of every other technology we’ve invented. AI software like ChatGPT allows people to write full essays simply by typing in a short prompt. That’s a godsend to people who aren’t good at or interested in writing. But some teachers are terrified of the effect it will have on students (“Will they ever learn how to write?” “They’re all going to start cheating!”) and are talking about banning the use of AI software or resorting to pen-and-paper essays. That is certainly one way to deal with it. But a more intelligent approach would be to sacrifice some (not all) of the benefits in order to avoid those worst-case scenarios. For example, teachers might actually teach students how the software works, what its limitations are, and the importance of editing whatever output ChatGPT spits out. They might require students to write outlines on their own (with pen and paper, even!), then feed those outlines into an AI bot to see what kinds of papers they get. They might ask students to turn in two assignments, one generated entirely by AI and the other a product of AI plus the student’s finishing touches, so the class can compare the two and see how human ingenuity improves on AI templates. There are a lot of ways to harness the positives of AI software without just turning students loose and hoping they don’t cheat and somehow manage to learn something.
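If you’re curious what that “feed the outlines into an AI bot” step might look like in practice, here is a minimal sketch using OpenAI’s Python client. The model name and the sample outline are illustrative assumptions on my part, not anything prescribed by a real classroom exercise:

```python
# A minimal sketch of the outline-to-draft exercise described above.
# Assumes the openai Python package is installed and OPENAI_API_KEY is set;
# the model name and outline below are purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

student_outline = """
1. Thesis: school gardens improve science scores
2. Evidence: summarize two district studies
3. Counterargument: cost and upkeep
4. Conclusion: recommend a small pilot program
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model works
    messages=[
        {"role": "system",
         "content": "Write a five-paragraph essay based strictly on the outline provided."},
        {"role": "user", "content": student_outline},
    ],
)

# This raw draft is the starting point – the thing students then critique,
# fact-check, and revise by hand.
print(response.choices[0].message.content)
```

The point of the exercise isn’t the code, of course; it’s that students see exactly where the machine’s draft ends and where their own judgment has to begin.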
And when it comes to general AI (that’s the kind you’ve seen in Terminator and The Matrix and basically every movie where robots end up killing us all), the potential upside is significant. Imagine a computer sophisticated enough to analyze weather patterns to predict with perfect accuracy what’s going to happen two weeks from now. Or a machine that can analyze a trillion different combinations of chemicals to tell us which ones can create better pharmaceuticals, or the kind of rocket fuel that can make interstellar travel actually feasible. Or software that can write international treaties with such precision that countries walk away happier with the outcome and less likely to go to war. That’d be neat, right?
That’s why we pursue these things: because the potential upsides are great. But since the whole “destruction of the entire human race” scenario is an actual possible downside of these technologies, we should proceed with extreme caution. That’s what we’ve done with nuclear energy, and for the past 80 years we’ve managed to enjoy some of the benefits of that incredible form of energy without, you know, destroying all life on the planet.
In each of these cases – and in every business decision you’ve ever made about some new untested something – there are only two intelligent options. If we determine that the potential downsides outweigh the potential upsides, we choose not to move forward. That’s what we do anytime we kill a particular project or decide to end a particular relationship. But if we determine that the potential upsides are worth the downside risk, then we’re forced to figure out how much of that upside to surrender in exchange for mitigating that risk. People sometimes argue that anything short of pure freedom is some kind of defeat, that the only acceptable options are everything or nothing. But that all-or-nothing outcome very, very rarely happens (see stoplights and anti-theft laws for proof). Far more often than not, we accept some amount of limitation as the price of sustainable success.
So will AI eventually destroy us? I don’t think so. The technologies we are creating are certainly new, but the process we follow as we decide how (or whether) to develop them is not new at all. Presumably we’ve learned enough by now to know that there will need to be some guardrails in place before we just turn those machines loose. But if I end up being wrong, I probably won’t live long enough to publish a correction to this. So let’s hope I’m right!
I’m pretty sure I’m right. Right?