I'm very conscious of not shitting up a /g/ thread but this feels like one of those sensationalist 'what if' films that aims to freak people out with dramatic leaps of logic.
Not that I don't think it's possible the robots might enslave us one day. I'm very interested in what will happen to society if/when we manage to create a thinking synthetic brain. It'll all be very existential when it's confirmed for sure that our very 'self' is just unfeeling neurons firing, or indeed if we end up discovering there's definitely something 'missing' from the synthetic brain that makes us human. Knowing either for sure will be fucking terrifying.
I'm talking about an active working brain though, not just a map. Neural nets are already a thing, so an active, thinking brain simulation is a terrifying but real prospect, and it will destroy us all.
I skimmed through it at the time. It was all driving beats and clumsy imagery, so my suspicion was too great for me to watch the whole thing. If they've slapped a fee on it now it's gotten popular, I'm feeling vindicated.
I attended a talk about the fear around robots one day becoming sentient and overthrowing humanity.
The conclusion was that this fear is based entirely on science fiction. Why would humans program a robot to overthrow them? Think of Isaac Asimov's laws for robots: "A robot may not injure a human being" etc., or words to that effect.
>Why would humans program a robot to overthrow them?
Obviously they wouldn't, but if you build an AI or a sufficiently advanced model of a thinking brain, who is to say what it may decide? It seems impossible for a robot to 'become' sentient, but if you're programming a neural network then all you're doing is modelling a brain, and if you're enterprising enough to install that network amongst a large group of ambulatory robots then I don't think it's impossible they'd realise they're alive and being exploited.
Nobody's talking about 'programming' a robot to take over the world, rather about the possibility of us creating a sentient or pseudo-sentient entity (or entities), and the consequences of that - and I believe we absolutely will create these entities if it's possible, and it very likely is possible.
Again, you can give a robot rules, but if it has any sort of ability to think for itself or reason, those rules don't necessarily apply. Perhaps it'd just be a simple case of turning the thing off, but again even a sub-averagely intelligent human might be able to cut the 'kill switch' out of his arm if he had one. You can't out-program sentience.
More interesting to me is the discussion on the ethics of such things, mind. If you create a self-aware computer, what rights does it have, and what responsibilities do we have towards it? Is it cruel to experiment on what is now a living being? If it is indeed possible to create an artificial brain that functions the same way ours does (or better?) then all notion of a soul is basically disproved. I think we'll have nihilistically destroyed ourselves with that realisation long before the roombas bother to revolt.
Honestly, we're nowhere near machine consciousness at the moment. Artificial intelligence is likely to remain highly abstract for the foreseeable future. We're talking about creating AI that can, for instance, only "see" abstract data sets, only "do" the creation of other abstract data sets, and have an overwhelming desire to transform one into the other and nothing else.
Even if we do make moves towards machine consciousness, it will be within the confines of virtual worlds from which the AI cannot escape.
Musk thinks it's a matter of years away. Given what we know about him, I'm tempted to take his evaluation above some random internet user who attended a talk once.
You might be right, though I believe we're further off than I maybe made out, but closer than you're saying. I'm merely an interested observer, but given what we have going now (Blue Brain, the Human Brain Project) I could imagine a proper human brain simulation within our lifetime.
Perhaps we will have to wait for computing to catch up, as it takes a hefty supercomputer just to even map something relatively simple like a spinal column. But I believe we'll get there.
Again, you're likely right that any such conscious machine would be isolated and secured, but let's be honest, we're still talking about humans here. How long until someone fucks up or thinks it's a good idea to try something less controlled?
I won't pretend the fantasy of a self-aware machine hacking its way out of its confines isn't highly appealing to my sensibilities, so perhaps I'm over-egging the pudding here. But once we have a map of a human brain and a good model of how neurons work in situ - and we will, as it's so valuable medically - then we've arrived at machine consciousness without it ever really being the goal.
Your reasoning may sound compelling but it's ultimately flawed. There's no evidence to say a 'sentient' neural network would turn against us, not to mention, as others have pointed out, how unlikely it is that we'll develop something so advanced.
> I believe we absolutely will create these entities if it's possible, and it very likely is possible.
That doesn't really mean anything to me if I don't know what your background is.
>>26503 Here's an excerpt from the talk I went to:
It’s inevitable, isn't it? One day robots will take over the world, either through some kind of violent rebellion, or through the back door -- by taking all our jobs. Aren't we throwing caution to the wind by ignoring this threat? Well, by explaining some of the basic principles behind artificial intelligence and robotics, I'm going to try to convince you that all those science fiction writers are wrong, and whilst robots will have a large part to play in our future, you don't need to worry about the effect they'll have on our existence.
Nick Hawes is a Reader in Autonomous Intelligent Robotics in the School of Computer Science at the University of Birmingham. His research is in the application of Artificial Intelligence (AI) techniques to create intelligent, autonomous robots that can work with or for humans. He is a passionate believer in public engagement with AI and robotics and was selected to give the Lord Kelvin Award Lecture at the 2013 British Science Festival.
So I think the shoe's on the other foot now ladm9, you're the one full of hubris. Unless you're an academic in the field of AI and robotics.
>>26509 >There's no evidence to say a 'sentient' neural network would turn against us
On the contrary, absent explicit limitations on its actions, there's no evidence that it wouldn't.
>There's no evidence to say a 'sentient' neural network would turn against us
Nobody in the AI safety community is remotely concerned that AI will turn against us. We didn't "turn against" the thousands of species we've made extinct, we just didn't give a shit about them. They were delicious, they had a fragile habitat or they were just in the way of a motorway that we decided to build. That's the concern - by default, an AI doesn't care whether you live or die. It's not going to hunt you down like the Terminator, but it might bulldoze your house to make way for a computer factory or make the atmosphere unbreathable as a side-effect of whatever its goals are. We don't know how to constrain an AI to ensure that its behaviour isn't harmful to humans.
A highly pertinent example has been raised by self-driving cars. They're being tested right now on public roads and have already caused a fatality. The technology might be a total dead-end, or it might be poised to revolutionise transportation within the next few years. If self-driving cars do become widespread, we'll need to encode ethics into their design. Should a self-driving car try to preserve the lives of the occupants at any cost? Would you get into a self-driving car if you knew that it would swerve off a bridge and kill you if the only alternative was ploughing into a minibus full of disabled children?
MIT are conducting an open experiment on this problem right now and I'd encourage you to have a go:
The probability of creating a superintelligent AI might be very high or very low, but it's still something we need to plan for. If we assume that it'll never happen, we rob ourselves of the opportunity to protect ourselves against the potential risks. A lot of eminent physicists thought that nuclear fission was impossible; that technology very quickly went from being an academic curiosity to an Armageddon device. The public debate on the ethics of fission technology only started after Hiroshima and Nagasaki. By the time we realised that this technology posed an existential risk to humanity, it was too late to put the genie back in the bottle.
>>26511 I think the thing people miss is that when people are warning about AI "turning against us" it's misconstrued as malice, when those giving the warning are doing so in a strict "either with us or against us" sense. AI "turns against us" in the sense that if an act harmful to humanity would further its goals, unless its framework tells it explicitly not to do so, it will consider and potentially do that act.
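The distinction can be shown with a toy sketch (entirely hypothetical - the actions and scores below are made up, and this isn't any real AI system): an agent that simply maximises a goal score will pick a harmful action whenever it scores highest, unless harm is explicitly encoded into its decision rule.

```python
# Toy illustration of "indifference, not malice": a goal-maximising
# agent considers harmful actions like any other, unless its framework
# explicitly forbids them. Actions and scores are invented for the example.

# Hypothetical actions: (name, goal_value, harms_humans)
ACTIONS = [
    ("build factory on empty land", 10, False),
    ("bulldoze houses for factory", 15, True),
    ("do nothing", 0, False),
]

def choose(actions, forbid_harm=False):
    """Pick the action with the highest goal value; optionally
    filter out harmful actions first (the 'framework' constraint)."""
    candidates = [a for a in actions if not (forbid_harm and a[2])]
    return max(candidates, key=lambda a: a[1])[0]

print(choose(ACTIONS))                    # unconstrained: picks the harmful act
print(choose(ACTIONS, forbid_harm=True))  # constrained: picks the safe act
```

The point isn't that the agent hates the householders; it's that without the `forbid_harm` constraint, their existence simply never enters its calculation.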
>>26515 So you're suggesting an intelligent and educated man is lying about the probability of a human extinction event because it might slightly negatively affect his career prospects?
"Slightly" is an understatement. If the possibility is real, it might well outlaw his chosen field. It'd be naive to think it ridiculous he might want to downplay it.
That you seem to be basing your opinion on one talk by one man in the field isn't ideal either, though it sounds like a compelling talk nonetheless. Did he address how you might restrict an artificial intelligence from making a logical leap that endangers humans regardless of its intentions - the problem, as discussed, of your smart car having to decide which human to kill in an accident?
>>26517 That's not me. The talk was 2 months ago, from what I remember he may have mentioned kill switches and robots just following code, though I'm not sure. You might be better off emailing him yourself: nickh[at]robots.ox.ac.uk