Stephen Hawking Just Gave A Terrifying Warning


Will artificial intelligence (AI) take over the world, or will it help to bring about an unprecedentedly advanced human civilization? That’s the debate that has (quite rightfully) been raging for some time, with people like Elon Musk warning about its dangers, and others like Mark Zuckerberg focusing on all the advancements it could bring us.

Stephen Hawking has previously voiced concerns about AI, and just recently, he’s repeated them. During an interview with Wired, he said: “I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that improves and replicates itself. This will be a new form of life that outperforms humans.”

This is a fair thing to worry about, and it taps into a deep-seated fear. So much science fiction across a range of media revolves around the idea of AI becoming independent from its human creators – whether in a hostile way, or simply in an attempt to peacefully branch away.

Generally speaking, this fear comes from the notion that we are weak and replaceable, and that an AI that can outlive us will ultimately replace us. On a more immediate, visceral level, we fear that an AI may actively seek to harm us.

In either case, these are concerns worth noting and taking into consideration. They shouldn’t, however, eclipse all other opinions about the future of AI, particularly the positive ones. After all, many other experts, including Bill Gates, consider AI to be the Next Big Thing, the technological renaissance that will transform our society.

At this point, it’s looking more likely that AI will turn out to be like any other technology – one that’s used for both benevolent and malevolent purposes. In any case, it’s more likely that in the near future, it will be something that will augment our lives.

Studies have already shown that AI is better at recognizing patterns than humans. Whether it’s board and video games, or things as complex as IVF treatment and breast cancer diagnoses, machines are already surpassing us. This may sound frightening to some, but all it means is that certain enigmas could be solved more readily with the help of machines.

At the same time, entire cities are being partly managed by AI. One notable experiment in China has found that traffic and crime rates are down thanks to the same kind of pattern recognition software.

Sure, AI controlled by a dangerous person could be used maliciously. As Bill Nye points out, though, “If we can build a computer smart enough to figure out that it needs to kill us, we can unplug it.”

[H/T: Wired via Cambridge News]

Read more: http://www.iflscience.com/technology/stephen-hawking-ai-replace-humans-altogether/

Bill Nye: Worrying about the Robo-pocalypse Is a First-World Problem

Bill Nye laughs in the face of the robo-pocalypse. Or more accurately, he laughs at those who worry that AI might run amok. If we build robots that want to kill us, he says, we can just unplug them.

Read more at BigThink.com: http://bigthink.com/videos/bill-nye-on-artificial-intelligence


So when it comes to artificial intelligence, it is a fabulous science fiction premise to create a machine that will kill you. And I very much enjoyed Ex Machina, where the guy builds these big robots and then there’s trouble. There’s trouble. And I can’t help but think about Colossus: The Forbin Project, where they have these computers that control the world’s nuclear arsenals. And then things go wrong, you know. Things just go wrong in a science fiction sense. But they remind us that if we can build a computer smart enough to figure out that it needs to kill us, we can unplug it.

There are two billion people on Earth who do not have electricity. They are not concerned about the artificial intelligence computer that decides to crash subway cars and kill people. That’s not their issue. They don’t even have electricity or clean running water. So while we’re worried about artificial intelligence, I hope we also keep the bigger picture in mind: none of this happens right now without electricity. And we still don’t have anything but really primitive means of generating electricity. And I look forward to the day when everybody has clean water and a supply of quality electricity.

And then we can take these meetings about the problems of artificial intelligence. However, are there any viewers or listeners here who have not been to an airport where the train that takes you from terminal B to terminal A is automated? Everybody’s been on an automated train, okay – in the developed world, especially the United States. Okay, that’s artificial intelligence. Everybody has used a toilet that’s connected to a sewer system whose valves are controlled by software that somebody wrote; that is artificial intelligence. So keep in mind that if we unplug the trains or the sewer system valves, the thing will stop. We still control electricity, so this apocalyptic view of computers that people write software for – to do repetitive tasks, or complicated tasks that no one person can sort out for him or herself – that is not new. I do not see that it’s artificial – I mean, that it’s inherently bad. Artificial intelligence is not inherently bad. So just use your judgment, everybody. Let’s – we can do this. I worked on three-channel autopilots almost 40 years ago. The plane lands itself, and humans designed the system. It didn’t come from the sky. It’s artificially intelligent. That’s good. We can do this.