5 really obvious reasons why we shouldn’t fear AI

One of the classic thinking problems we encounter in contemporary pop science is the age-old problem of intelligent machines, or AI (artificial intelligence). It makes people nervous, even smart, successful boffins like Bill Gates and Stephen Hawking. The idea of intelligent, sentient computers conjures images of robots taking over the world and enslaving mankind, probably because we’ve grown up with a mild case of collective paranoia about it… spawned mostly by movies and sci-fi novels. But when you think about the reality of what AI means, there are five really obvious reasons why we shouldn’t fear it. In fact, intelligent machines shouldn’t worry us at all. And even if they did take over the world, that might be a good thing for all of us.

As always, it’s a thinking problem. But it’s a special kind of thinking problem because the truth is that fear of AI is really a fear of our own human intelligence (HI). It’s also a special problem because I’m writing a book about the future of technology at the moment (which is why it’s been quiet on ManVsBrain for a while), but this topic is such an entertaining one for fans of cognitive bias that I just had to preview some of my AI chapter ideas here…


Obvious Reason 1: Scientists have seen Terminator

The starting point for debunking fears of AI is to consider who is making it. The answer is, obviously, scientists. Smart people all over the world are working to solve the puzzle of intelligent machines. The basic fear of AIs taking over the world and enslaving humanity really rests on the idea that there will be unexpected consequences the boffins who create it didn’t foresee. When you unpack the thought process behind that fear, it’s really quite irrational.

Obviously unintended consequences are very real. However, they’re not uniformly bad. Non-stick frying pans are popularly credited as an unintended spin-off of the space programme. Penicillin was an unintended consequence of a lab experiment. The potato chip (or crisp, as we call it in the UK) was an unintended consequence of an argument between a fussy diner and a short-tempered chef over how thin the diner wanted his fried potatoes… and so on. Examples are all around us of unintended consequences being quite good, actually. Of course, unintended consequences also gave us the A-bomb, explosives, drugs that cause deformities in babies, the annihilation of native animal and plant species by introduced non-native species, and pesticides that kill people (to name but a few).

So we are right to be wary of unintended consequences, but in the case of AI it’s not much of a worry because, unlike in those other examples, the scientists making the stuff have all seen Terminator, The Matrix, Westworld, Saturn 3, 2001: A Space Odyssey (etc. etc.). So they are all well aware of the potential danger of super-intelligent machines taking over the world. They’re also aware that the population at large is worried about it too. And they are aware of their own need for funding and the need to demonstrate the commercial viability of AI systems (which would be much diminished by making something that didn’t have failsafes built in to stop it from causing the extinction of humanity).

So unlike creating a devastating environmental toxin like DDT, or a birth-defect-causing drug like Thalidomide, or splitting the atom before realising it could be used to create weapons of mass destruction, the scientists working in AI have a framework of worst-case scenarios shaping their research in a way that the people behind those bad unintended consequences never had. And one thing they absolutely, definitely won’t do is build anything that thinks in such a way that the logical conclusion to any question it asks itself is “take over the world and annihilate mankind”.

Our fear that “yes, but they’ll do it anyway because scientists never consider the consequences of their actions” is absurd after a century of remorseful scientists, shocking newspaper stories, class-action lawsuits, suicides and apologies. As a general rule, scientists don’t brazenly ignore risks, and the people who fund them don’t pump billions into stuff that could wipe out humanity like they used to back in the day…

If they ever did. In reality, the events behind such dreadful unexpected consequences tend to be a lot more complex than merely an evil corporation and a naive team of lab coats. That only happens in movies like Robocop. Even the Manhattan Project that gave us the atomic bomb was driven by a complex web of circumstances unique to that period of history. It’s not as though, on any given day in any given year, there’s a team of people working on something that could destroy the world; given the fact we haven’t destroyed it yet, that has possibly never happened even once in the entire history of humanity.

Obvious Reason 2: Scientists aren’t stupid

Underlying this fear of unexpected consequences, evil corporations and naive boffinry is the strange idea that we, the ordinary folk of the world, can predict things that businessmen and scientists can’t. Which is strange because, of course, the ordinary folk are also businessmen and scientists. Not only that: the idea rests on the notion that the sense of self-preservation of those who fear AI is somehow more developed and insightful than that of scientists and people who work at (evil) corporations.

Now this is a really chewy thought, because there are scientists and corporations involved in pumping toxic gases into the atmosphere, for example, and that’s getting people worried about global warming and disastrous climate change. People all over the world are sitting in their cars, or buying a new iPhone, or eating our ever-declining fish stocks whilst tutting at the companies and scientists who make petrol, cars and electronics products and pull fish out of the sea… completely missing their own part in the supply-and-demand equation that is causing the problems they’re concerned about.

There is an inherent contradiction between our behaviours and our opinions when it comes to enjoying the benefits of technology whilst blaming the people who provide it. It’s an impossible thinking problem for most people. If you are worried about greenhouse gases cooking the planet, you could help solve the issue by giving up your car. But is owning a car and using petrol to drive your kids to school or go to work really that bad? Is it as bad for the problem as working as a chemical engineer in an oil refinery? Or marketing a new model of car?

It’s an ethical quagmire that’s so difficult to conceptualise we tend to ignore it. It’s cognitively easier to vent our fears and anxieties on the people who make the stuff than on the people who use it (because that would mean taking a long hard look at ourselves). What makes it even harder to comprehend is the fact that everyone, from consumers to scientists to businessmen, is faced with this same ethical problem (because we’re all consumers), and we all tend to blame someone else more easily than address our own role in causing the problem. Where this thinking problem leads is very basic. It’s a psychological self-defence mechanism: a blame-shifting exercise you can summarise as “if you hadn’t invented this bad stuff, we wouldn’t use it to destroy the planet”. So if AI does take over humanity, we’ll almost certainly have done it to ourselves on a massive consumer scale.

Beyond that, in the case of AI (and many science fiction plots), there is a common theme: the genius in the lab coat, the venal businessman, the evil politician or military officer, and the ordinary working man who can take the moral high ground from a position of humble, homespun wisdom. There is also the telling and retelling of the old adage “The road to hell is paved with good intentions”. In The Terminator, for example, brilliant minds create an AI smart enough to manage the nuclear defence of the USA. It’s built by a private company driven by profit, funded by government spending aimed at creating weapons for political security. But they don’t realise the folly of their actions, and it takes ordinary people, oppressed and outgunned in the future, to fix things. Yay! Go ordinary people!

It’s a fantasy designed to make us feel like the moral heroes in the face of experts and professionals who are clearly smarter, better informed and more highly valued by the mechanisms of society such as rank, reward, qualifications and so on. It’s the same basic motivation as the guy down the pub who, despite working in a low-paid manual labour job, insists that the CEO of the huge company that employs him is an idiot who doesn’t know what he’s doing. Or perhaps, even more basic than that, it’s the school kids picking on the nerdy kid who comes top of the class on the basis that he’s a wimp with glasses rather than a member of the football team.

Think about it… the basic premise of most AI dystopias is that, despite all their degrees, doctorates, medals and successful careers, those damn scientists and military types missed something that’s blindingly obvious to the guy who mops the floors down at the local DIY store. That is, of course, possible, but it’s statistically unlikely. In reality the reverse happens 99.9% of the time. Otherwise the world would have ended by now, because there’s never any shortage of people proclaiming that something is going to cause the end of the world as we know it… they said it about Al Qaeda, genetically modified crops, Y2K, AIDS, soya beans, legalising homosexuality, the Cold War, letting working-class people vote, letting women vote, letting anyone vote, worshipping the wrong God, wearing clothes and (according to the Bible) eating an apple. The theme keeps repeating throughout human history, but so far, we’re still here… because of the boffins, not in spite of them.

Obvious Reason 3: That’s not what AI does…

This reason is very simple because it doesn’t require thinking about how human psychology works; it’s about something much simpler. There is a fundamental misconception that the end result of AI will be something that thinks like a human, and that’s where the risk of it doing something human, like genocide or world domination, comes from. Which basically means the scientists are building something that could be either a good person or a bad person. That’s nonsense. I’m trying to think of a scenario where you might create a computer-based intelligence that has the potential to be the Dalai Lama, Steve Jobs or Hitler depending on how it feels. It’s unthinkable. Why?

Well, what would be the point of that? In the film Transcendence (a truly awful movie, IMHO) the first thing the AI-infused consciousness of the main character does is start manipulating stocks and shares and demanding more power to expand itself. Er… why? I mean, why didn’t it start off by making a really good job of correcting everyone’s spelling on the Internet, or creating a better form of spreadsheet? It could have begun its life by improving the accuracy of floating-point calculations or making the computer it ran on run more efficiently. The point is that the boring stuff we assume AIs won’t be interested in is only boring to humans.

The idea that an AI could be super-intelligent beyond human comprehension is perfectly plausible, because computers can do things faster (and more of them) than our brains can. But the idea that the result of all that intelligence will be to do the things our limited intellects deem worthwhile is an egotistical guess on our part.

Take the example of driverless, computer-controlled cars. They use basic AI systems, as do many machines today. And what do they do? They focus on driving much better than humans are capable of. What don’t they do? They don’t drive like idiots to impress girls or tailgate the guy in front because they’re in a bad mood. I know an AI company called Rainbird AI who are making systems that answer customer service queries better. Their systems don’t keep you waiting on hold for ages, sound rude or pretend the phone signal is bad because they want to end the call and take a pee. They just do what they were designed to do. If those systems became the self-aware, self-determining consciousnesses of sci-fi, what on earth makes us worry they’ll decide (as humans do) that they want a bigger house or more time off? That they will be motivated to control us? Er… why wouldn’t they just use their enormous intelligence to get better at what they do, rather than plotting to take over the planet?

So at least before they take over the world and enslave us, life will get a lot better. Then for some reason they’ll just totally take over. We’ll never see it coming. Because one thing we know about machines and computers is that they work fine one day and then suddenly start doing totally unexpected things to a really effective standard. No… wait… they break. They don’t have a mid-life crisis and go travelling; they either work or they don’t. What they don’t do is create documents and browse web pages one day, then suddenly turn into machines that can only play avant-garde videos of modern dance later on.

In The Terminator, Skynet decides that the best way to protect its human masters is to kill them off and enslave the survivors in death camps. Really, Skynet? That’s the most logical thing you can think of? Protecting people by mass murder and genocide? How precisely does that work? I mean, the end result is a 100% failure to protect anyone at all. That’s not artificial intelligence, it’s complete and utter idiocy. No, if a super-intelligent AI wants to control humanity, the chances are it will be able to do it without us even noticing. Not as in “we won’t notice until it’s too late”, but as in “we won’t ever notice; in fact, we’ll remain oblivious to it because it will look like they’re doing exactly what we wanted them to do when we made them”.

Obvious Reason 4: Horses have a better life as pets

Okay, supposing we create super-intelligent AIs and they do, in fact, start running the world and humanity is made redundant. Would that really be so bad? Not necessarily. Think about it like horses. Horses used to have a tough life as working animals (they still do in some places, but not in many developed economies). Compared with the 1700s, a horse’s life is pretty cushy these days. They don’t work; by our standards they live lives of leisure and sport. They were replaced by machines and, from a horse’s perspective, that was an unequivocally good thing. Sure, we control their breeding, restrict their freedom of movement and so forth, which seems bad… but then again, we always did that anyway. So apart from that issue, being a pet horse is better than working for a living.

Where the horse story gets really interesting is in the similarities to slavery and the oppression of humans by other humans. A pet horse lives longer than a working horse, in much the same way that a worker in a company with good health benefits and holidays lives longer than an oppressed factory worker living in poverty or, indeed, a slave. The factor that determines life expectancy is the time horizon of the person who owns the horse, the company, the factory or the slaves. We can see from history that the oppression of workers relates to how much the bosses want to get out of them in as short a timeframe as possible. The harder you work them, the faster they do their work and the more money you make in a shorter period of time. The nicer you treat them, the longer it takes to get the same value for money out of them. However, there is a flip side: the nicer you are, the longer they live, and so over the entire working lifespan of the worker in question, you get more bang for your buck by treating them well.
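As a purely illustrative back-of-envelope sketch (every number here is invented just to show the trade-off, not taken from any real data), the rate-versus-lifespan argument looks something like this:

```python
# Toy comparison of total output over a working lifetime.
# All figures are made up purely to illustrate the rate-vs-lifespan trade-off.

harsh_output_per_year = 120   # more work squeezed out per year under harsh treatment
harsh_working_years = 10      # but a much shorter working life

fair_output_per_year = 90     # less output per year when treated well
fair_working_years = 25       # but a far longer working life

harsh_total = harsh_output_per_year * harsh_working_years   # 1,200 units
fair_total = fair_output_per_year * fair_working_years      # 2,250 units

print(f"Harsh treatment, lifetime total: {harsh_total}")
print(f"Fair treatment, lifetime total:  {fair_total}")
```

The longer the owner’s time horizon, the more the gentler option wins out.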

Now consider this: AIs live longer than people. A lot longer. Maybe forever. So even if they enslaved mankind, there’s no reason why they would treat us badly; there’s no rush to make money in the way humans, with their short lifespans and big ambitions, do. They might actually treat us better than we treat workers today, to maximise our productivity over a longer lifespan. They might also decide that rather than kill us off, they could use a lot more of us, because having more slaves is more productive than having fewer. This means that rather than commit genocide and put us in death camps, there’s a really logical argument that the AIs would improve everyone’s standard of living and encourage us to have more children.

At this point, I’m wondering if that would be so bad… there are billions of people living in poverty, children starving, people enslaved by dire economic need to work for a pittance in harsh conditions. Improving their productivity with a better lifestyle and decent health care would make a big difference to them, slaves or not. I mean, even if you’re rich it’s not like you are free from the pressures of work or the need to make money anyway. We’re all slaves to something, and most frequently it’s money. Replacing money with super-intelligent machines who treat us well, because that’s the best way to make us useful to them, feels like a good deal. You can debate the ethics of that point (and you should, because slavery is a bad thing, right?) but one thing you can’t debate is that taking over the world and enslaving mankind in death camps is a very extreme and somewhat illogical view of how AIs might actually treat us if they did view us as useful in the way that plantation owners thought slaves were useful.

Obvious Reason 5: Why are they all on the same side?

Finally, let’s consider the oddly one-sided view of the doomsday scenario in which intelligent machines do actually decide to take over the world. The whole paranoid fear relies on the idea that all the robots are on the same side. Which is illogical. So let’s get this right… on the one hand we create AIs that behave like humans in their greed, cruelty and lust for power, but on the other hand they share a sense of unity which we couldn’t possibly achieve? Wow. That’s quite a feat. That’s like imagining that all the different terrorist organisations in the world, or all the different drug cartels, or all the evil corporations are 100% on the same side and get along together just fine. It’s Them and Us taken to an absurdly unrealistic level. You can’t have it both ways. Either they are capable of evil, like humans are, or they are living together in perfect harmony. They can’t be both.

In the human world, there are people who don’t think we should entirely eradicate certain disease-causing pathogens, for ethical reasons. Maybe not many, but ultimately, there are always differences of opinion even over things we all agree are bad, like smallpox or ebola. There is always someone who thinks some small shred of it should be kept on file somewhere, in a lab, for study and so on. So even our limited intelligence provides a huge scope for difference.

It’s not logical to think that multiplying that intelligence would reduce the scope for difference; surely it would increase it? After all, the issues we grapple with, like ethics, morality, predictions and so on, aren’t matters of fact; they are entirely matters of opinion. For AIs deciding to take over the world, there are a whole bunch of complex issues to solve that don’t have a uniform solution. Eradicating mankind, breeding us as slaves or keeping us as pets all raise questions of “what if…”, and there is no single, logical, unequivocal answer. And without a single, logical, unequivocal answer there can be no single, logical, unequivocal unity.

So if there is so much scope for differences of opinion, it’s likely the AIs won’t all be on the same side. In fact, some of them might take our side. Which means it won’t be Them versus Us, it will be Us and Them versus Them. And as people never all choose the same side, logically it will be Us and Them versus Them and Us. Like it is today, in fact, just without the Them.

The fear of them all being on the same side is a classic theme of modern culture, like the fear of communism or the fear of zombies. The Us vs Them meme is compelling, but in reality there is no Us or Them; life is more complex than that.

So don’t fear AI

We shouldn’t automatically assume we’re done for when AI comes along. In all probability, we’ll think they’re really useful. And even if the machines rise up against us, there will be (at least) two sides to any AI conflict, should it occur, which it won’t, but even if it did, they wouldn’t all be on the same side. If you want to fear something, fear people, because we’re much more likely to wipe out humanity than a computer ever will be.