technology


692773217.jpg
>> No. 26461 Anonymous
8th April 2018
Sunday 4:45 pm
26461 spacer
I just shit myself. AI could be about to change the world in a very big way and it might go wrong.

http://doyoutrustthiscomputer.org/watch
>> No. 26462 Anonymous
8th April 2018
Sunday 5:07 pm
26462 spacer
I'm very conscious of not shitting up a /g/ thread but this feels like one of those sensationalist 'what if' films that aim to freak people out with dramatic leaps of logic.

I promise I'll watch it later on though.
>> No. 26463 Anonymous
8th April 2018
Sunday 5:07 pm
26463 spacer
>>26462
Like that "lick this lollipop" thing from years ago.
>> No. 26464 Anonymous
8th April 2018
Sunday 5:10 pm
26464 spacer
>>26462

Not that I don't think it's possible the robots might enslave us one day. I'm very interested in what will happen to society if/when we manage to create a thinking synthetic brain. It'll all be very existential when it's confirmed for sure that our very 'self' is just unfeeling neurons firing, or indeed if we end up discovering there's definitely something 'missing' from the synthetic brain that makes us human. Knowing either for sure will be fucking terrifying.
>> No. 26465 Anonymous
8th April 2018
Sunday 9:06 pm
26465 spacer
>>26464

We've already mapped every single neuron in C. elegans, but it still doesn't really tell us much about the animal itself.

https://www.scientificamerican.com/article/c-elegans-connectome/
>> No. 26466 Anonymous
8th April 2018
Sunday 9:33 pm
26466 spacer
>>26465

I'm talking about an active working brain though, not just a map. Neural nets are already a thing, so the terrifying prospect of an active, thinking brain simulation could be possible and will destroy us all.
>> No. 26467 Anonymous
9th April 2018
Monday 8:58 am
26467 spacer
>From the director of "Who Killed the Electric Car?"
>> No. 26468 Anonymous
9th April 2018
Monday 5:13 pm
26468 spacer
>>26461
Just a black screen for me, apparently I'm blocking so much it won't work, whatever it is. So yes looks like I do trust this computer.
>> No. 26470 Anonymous
9th April 2018
Monday 5:46 pm
26470 spacer
Is paying $4 to "rent" a random online documentary the done thing nowadays?
>> No. 26472 Anonymous
9th April 2018
Monday 6:22 pm
26472 spacer
>>26470
I should fucking well hope not.
>> No. 26473 Anonymous
9th April 2018
Monday 6:27 pm
26473 spacer
>>26467
Fucking Stonecutters.
>> No. 26474 Anonymous
9th April 2018
Monday 7:15 pm
26474 spacer
>>26470

That's odd, when OP posted, it was there for free.
>> No. 26476 Anonymous
9th April 2018
Monday 7:24 pm
26476 spacer
>>26474

Can confirm.

I skimmed through it at the time. The entire thing had a collection of driving beats and clumsy imagery to go along with it, so my suspicion was too great for me to watch the whole thing. If they've slapped a fee on it once it's gotten popular, I'm feeling vindicated.
>> No. 26477 Anonymous
9th April 2018
Monday 7:47 pm
26477 spacer
>>26470
It's actually $4.79 for us. Oddly it's priced in dollars but they still add VAT.

Anyway if anyone wants to watch it...
https://www.youtube.com/watch?v=1bcN8EawnCM
>> No. 26492 Anonymous
15th April 2018
Sunday 3:15 am
26492 spacer
>>26477
Thanks for that, other streaming sites had it but I didn't think to check youtube.

It is a puff/scare piece really, but hits some of the right notes for a layman's introduction to the topic.
>> No. 26499 Anonymous
18th April 2018
Wednesday 9:35 am
26499 spacer
I attended a talk about the fear around robots one day becoming sentient and overthrowing humanity.

The conclusion was that this scenario is based entirely on science fiction. Why would humans program a robot to overthrow them? Think of Isaac Asimov's rules for robots: "Humans will not be harmed by robots", or words to that effect.
>> No. 26500 Anonymous
18th April 2018
Wednesday 10:17 am
26500 spacer
>>26499
We wouldn't program them to overthrow us. The more likely scenario is that we neglect to program them not to overthrow us.

https://en.m.wikipedia.org/wiki/Instrumental_convergence

Also, a recurring theme in Asimov's work is how the Three Laws as stated are insufficient.
>> No. 26501 Anonymous
18th April 2018
Wednesday 10:18 am
26501 spacer
>>26499

>Why would humans program a robot to overthrow them?

Obviously they wouldn't, but if you build an AI or a sufficiently advanced model of a thinking brain, who is to say what it may decide? It seems impossible for a robot to 'become' sentient, but if you're programming a neural network then all you're doing is modelling a brain, and if you're enterprising enough to install that network amongst a large group of ambulatory robots then I don't think the possibility of them realising they're alive and being exploited can be ruled out.

Nobody's talking about 'programming' a robot to take over the world, rather the possibility of us creating a sentient or pseudo-sentient entity (or entities), and the consequences of such - and I believe we absolutely will create these entities if it's possible, and it very likely is possible.

Again, you can give a robot rules, but if it has any sort of ability to think for itself or reason, those rules don't necessarily apply. Perhaps it'd just be a simple case of turning the thing off, but again even a sub-averagely intelligent human might be able to cut the 'kill switch' out of his arm if he had one. You can't out-program sentience.

More interesting to me is the discussion on the ethics of such things, mind. If you create a self aware computer, what rights does it have, what responsibilities do we have towards it? Is it cruel to experiment on what is now a living being? If it is indeed possible to create an artificial brain that functions the same way ours does (or better?) then all notion of a soul is basically disproved. I think we'll have nihilistically destroyed ourselves with that realisation long before the roombas bother to revolt.
>> No. 26502 Anonymous
18th April 2018
Wednesday 1:13 pm
26502 spacer
>>26501

Honestly, we're nowhere near machine consciousness at the moment. Artificial intelligence is likely to remain highly abstract for the foreseeable future. We're talking about creating AI that can, for instance, only "see" abstract data sets, only "do" the creation of other abstract data sets, and have an overwhelming desire to transform one into the other and nothing else.

Even if we do make moves towards machine consciousness, it will be within the confines of virtual worlds from which the AI cannot escape.
>> No. 26503 Anonymous
18th April 2018
Wednesday 1:18 pm
26503 spacer
>>26502
Such hubris.

Musk thinks it's a matter of years away. Given what we know about him, I'm tempted to take his evaluation over that of some random internet user who attended a talk once.
>> No. 26504 Anonymous
18th April 2018
Wednesday 1:46 pm
26504 spacer
>>26503
What, the CEO of a failed car company that's spewing money faster than a dot-com company in 2000?
>> No. 26505 Anonymous
18th April 2018
Wednesday 2:10 pm
26505 spacer
>>26502

You might be right, though I believe we're further off than maybe I made out, but closer than you're saying. I'm merely an interested observer, but from what we have going now (Blue Brain, the Human Brain Project) I could imagine a proper human brain simulation in our lifetime.

Perhaps we will have to wait for computing to catch up, as it takes a hefty supercomputer just to even map something relatively simple like a spinal column. But I believe we'll get there.

Again you're likely right that any such conscious machine would be isolated and secure, but let's be honest, we're still talking about humans here. How long until someone fucks up/thinks it's a good idea to try something less controlled?

I won't pretend the fantasy of a self-aware machine hacking itself out of its confines is not highly appealing to my sensibilities, so perhaps I'm over-egging the cake here. But once we have a map of a human brain and a good model for how neurons work in situ - and we will, as it's so valuable medically - then we've arrived at machine consciousness without it ever really being the goal.
>> No. 26506 Anonymous
18th April 2018
Wednesday 2:29 pm
26506 spacer
>>26505
FWIW, all the relevant thought experiments go with the fuck-up model, where the AI is given a goal but not proper boundaries.
>> No. 26508 Anonymous
18th April 2018
Wednesday 2:51 pm
26508 spacer
>>26503
So the person whose company's mission statement is to colonise Mars readily believes and warns of existential threats to human life on Earth?

Imagine my shock.mp4
>> No. 26509 Anonymous
18th April 2018
Wednesday 5:22 pm
26509 spacer
>>26501

Your reasoning may sound compelling but it's ultimately flawed. There's no evidence to say a 'sentient' neural network would turn against us, not to mention, as others have pointed out, how unlikely it is for us to develop something so advanced.

> I believe we absolutely will create these entities if it's possible, and it very likely is possible.

That doesn't really mean anything to me if I don't know what your background is.

>>26504
Alas this.

>>26503
Here's an excerpt from the talk I went to:

It’s inevitable, isn't it? One day robots will take over the world, either through some kind of violent rebellion, or through the back door -- by taking all our jobs. Aren't we throwing caution to the wind by ignoring this threat? Well, by explaining some of the basic principles behind artificial intelligence and robotics, I'm going to try to convince you that all those science fiction writers are wrong, and whilst robots will have a large part to play in our future, you don't need to worry about the effect they'll have on our existence.

Nick Hawes is a Reader in Autonomous Intelligent Robotics in the School of Computer Science at the University of Birmingham. His research is in the application of Artificial Intelligence (AI) techniques to create intelligent, autonomous robots that can work with or for humans. He is a passionate believer in public engagement with AI and robotics and was selected to give the Lord Kelvin Award Lecture at the 2013 British Science Festival.



So I think the shoe's on the other foot now, ladm9; you're the one full of hubris. Unless you're an academic in the field of AI and robotics.
>> No. 26510 Anonymous
18th April 2018
Wednesday 5:55 pm
26510 spacer
>>26509
>There's no evidence to say a 'sentient' neural network would turn against us
On the contrary, absent explicit limitations on its actions, there's no evidence that it wouldn't.
>> No. 26511 Anonymous
18th April 2018
Wednesday 7:09 pm
26511 spacer
>>26509

>There's no evidence to say a 'sentient' neural network would turn against us

Nobody in the AI safety community is remotely concerned that AI will turn against us. We didn't "turn against" the thousands of species we've made extinct, we just didn't give a shit about them. They were delicious, they had a fragile habitat or they were just in the way of a motorway that we decided to build. That's the concern - by default, an AI doesn't care whether you live or die. It's not going to hunt you down like the Terminator, but it might bulldoze your house to make way for a computer factory or make the atmosphere unbreathable as a side-effect of whatever its goals are. We don't know how to constrain an AI to ensure that its behaviour isn't harmful to humans.

A highly pertinent example has been raised by self-driving cars. They're being tested right now on public roads and have already caused a fatality. The technology might be a total dead-end, or it might be poised to revolutionise transportation within the next few years. If self-driving cars do become widespread, we'll need to encode ethics into their design. Should a self-driving car try to preserve the lives of the occupants at any cost? Would you get into a self-driving car if you knew that it would swerve off a bridge and kill you if the only alternative was ploughing into a minibus full of disabled children?

MIT are conducting an open experiment on this problem right now and I'd encourage you to have a go:

http://moralmachine.mit.edu/

The probability of creating a superintelligent AI might be very high or very low, but it's still something we need to plan for. If we assume that it'll never happen, we rob ourselves of the opportunity to protect ourselves against the potential risks. A lot of eminent physicists thought that nuclear fission was impossible; that technology very quickly went from being an academic curiosity to an Armageddon device. The public debate on the ethics of fission technology only started after Hiroshima and Nagasaki. By the time we realised that this technology posed an existential risk to humanity, it was too late to put the genie back in the bottle.
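
To make that car question concrete (and this is purely a made-up sketch, not how any real vehicle is programmed; every name and number below is invented), the whole "who does the car kill" dilemma collapses into whoever picks the weights in a cost function:

def choose_manoeuvre(options, occupant_weight=1.0, other_weight=1.0):
    # Each option is (name, expected harm to occupants, expected harm to others).
    # Whoever sets the two weights has already answered the moral question.
    def cost(option):
        _, occupant_harm, other_harm = option
        return occupant_weight * occupant_harm + other_weight * other_harm
    return min(options, key=cost)

options = [
    ("plough into the minibus", 0.05, 0.8),
    ("swerve off the bridge",   0.9,  0.0),
]

print(choose_manoeuvre(options))                       # equal weights: it ploughs on
print(choose_manoeuvre(options, occupant_weight=0.5))  # discount the occupants: it swerves

Equal weights and it ploughs into the minibus; discount the occupants and it swerves off the bridge with you inside. Either way, somebody decided in advance.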
>> No. 26512 Anonymous
18th April 2018
Wednesday 7:43 pm
26512 spacer
>>26511
I think the thing people miss is that when people are warning about AI "turning against us" it's misconstrued as malice, when those giving the warning are doing so in a strict "either with us or against us" sense. AI "turns against us" in the sense that if an act harmful to humanity would further its goals, unless its framework tells it explicitly not to do so, it will consider and potentially do that act.
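
If it helps, here's a deliberately silly toy version of that "goal but no boundaries" point, in the spirit of the paperclip thought experiment. Nothing here resembles a real system; it's just to show that harm only matters to the optimiser if someone writes it into the objective or rules it out explicitly:

actions = {
    # action: (paperclips produced, harm done to humans)
    "run the factory normally":      (100, 0),
    "strip-mine the town for wire":  (10_000, 1),
    "convert the biosphere to wire": (10**9, 1),
}

def naive_agent(acts):
    # Maximises paperclips. Harm never appears in the objective at all.
    return max(acts, key=lambda a: acts[a][0])

def constrained_agent(acts):
    # Same objective, but anything flagged as harmful is explicitly ruled out.
    allowed = {a: v for a, v in acts.items() if v[1] == 0}
    return max(allowed, key=lambda a: allowed[a][0])

print(naive_agent(actions))        # "convert the biosphere to wire"
print(constrained_agent(actions))  # "run the factory normally"

The naive agent isn't malicious; it just never looks at the second number.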
>> No. 26513 Anonymous
18th April 2018
Wednesday 9:30 pm
26513 spacer
>>26512

There's a fun browser game about that very thing.

http://www.decisionproblem.com/paperclips/index2.html
>> No. 26514 Anonymous
19th April 2018
Thursday 4:17 am
26514 spacer
>>26511
>>26512

Thank you. That's certainly what I was trying to say but wasn't clever enough to articulate it.
>> No. 26515 Anonymous
19th April 2018
Thursday 4:38 am
26515 spacer
>>26509

>So I think the shoe's on the other now foot ladm9, you're the one full of hubris. Unless you're an academic in the field of AI and robotics.

What possible reason would a man devoting his life and career to AI have to downplay the potential dangers of AI?
>> No. 26516 Anonymous
19th April 2018
Thursday 9:41 am
26516 spacer
>>26515
So you're suggesting an intelligent and educated man is lying about the probability of a human extinction event because it might slightly negatively affect his career prospects?
>> No. 26517 Anonymous
19th April 2018
Thursday 9:54 am
26517 spacer
>>26516

"Slightly" is an understatement. If the possibility is real, it might well outlaw his chosen field. It'd be naive to think it ridiculous he might want to downplay it.

That you seem to be basing your opinion on one talk by one man in the field is not ideal either, though it sounds like a compelling talk nonetheless. Did he address how you might restrict an artificial intelligence from making a logical leap that may endanger humans regardless of its intentions - the problem, as discussed, of your smart car having to decide which human to kill in an accident?
>> No. 26518 Anonymous
19th April 2018
Thursday 11:47 am
26518 spacer
>>26517

Petroleum geologists are remarkably optimistic about climate change.
>> No. 26519 Anonymous
20th April 2018
Friday 6:34 pm
26519 spacer

Whore King.gif
>>26518

Oil companies are the ones spreading the climate change apocalypse hoax so that they can have even more control over the energy supply.
>> No. 26520 Anonymous
20th April 2018
Friday 10:30 pm
26520 spacer

65075245150d45e1a2dc837a955a8c8d_400x400.png
>>26519
>> No. 26526 Anonymous
22nd April 2018
Sunday 5:38 pm
26526 spacer
>>26517
That's not me. The talk was two months ago; from what I remember he may have mentioned kill switches and robots just following code, though I'm not sure. You might be better off emailing him yourself: nickh[at]robots.ox.ac.uk
>> No. 26692 Anonymous
15th August 2018
Wednesday 2:55 pm
26692 spacer
AI mixed with human violence invisible to the digital eye has already ruined my health, wealth, happiness and other lively hopes and opportunities

(A good day to you Sir!)
