>>26501
>Why would humans program a robot to overthrow them?
Obviously they wouldn't, but if you build an AI or a sufficiently advanced model of a thinking brain, who is to say what it may decide? It seems impossible for a robot to 'become' sentient, but if you're programming a neural network then all you're doing is modelling a brain, and if you're enterprising enough to install that network in a large group of ambulatory robots, then I don't think it's impossible for them to realise they're alive and being exploited.
Nobody's talking about 'programming' a robot to take over the world, but rather about the possibility of us creating a sentient or pseudo-sentient entity (or entities), and the consequences of doing so - and I believe we absolutely will create these entities if it's possible, and it very likely is.
Again, you can give a robot rules, but if it has any ability to think or reason for itself, those rules don't necessarily bind it. Perhaps it'd just be a simple case of turning the thing off, but then even a sub-averagely intelligent human might manage to cut the 'kill switch' out of his arm if he had one. You can't out-program sentience.
More interesting to me is the discussion of the ethics of such things, mind. If you create a self-aware computer, what rights does it have, and what responsibilities do we have towards it? Is it cruel to experiment on what is now a living being? If it's indeed possible to create an artificial brain that functions the same way ours does (or better?), then all notion of a soul is basically disproved. I think we'll have nihilistically destroyed ourselves with that realisation long before the roombas bother to revolt.