>> No. 83260
>No it isn't. Not in any way, shape or form. AI is about making machines that can think. In the here-and-now, that means special-purpose machines that can do one task well, with or without situational learning. In the long-term, that means artificial general intelligence - a machine that can independently learn any skill through observation and experimentation. Neither class of machine requires anything that could reasonably be described as "consciousness".
Thought, observation and experimentation all require consciousness/awareness. How do I know I'm aware? Because I'm aware that I'm aware. I'm also aware of that, and of that, and so on in an infinite regress. For an AI to do that would require infinite processing power, infinite memory and infinite code.
>Turing preempted all this in 1950 in his paper Computing Machinery and Intelligence, systematically dismantling all of the key arguments against AI at a time when most people had never even heard the word "computer". I have no idea if your internal experience is the same as mine. It's entirely possible that I'm the only real thinking and feeling person in this world and everyone else is an elaborate automaton or a figment of my imagination. I don't know if my understanding of "blueness" is the same as yours, or if you have a totally different internal experience when you look at the sky. I don't know if you experience pain as I do, or if you're just pretending. The practical implications of this quandary are effectively nil - you can't prove to me that you aren't a philosophical zombie, but I assume that you aren't out of basic politeness.
Wouldn't the practical issue with that be that the "people" creating AI aren't actually conscious, in which case how can they create anything? The only person who could, in this scenario, would be you.
>We fundamentally don't care about the qualitative internal experience of those machines; we care about what they can do. Consciousness might be a fascinating line of inquiry for philosophers, but it is utterly irrelevant to computer scientists. When designing software to drive a car, we don't care whether the car is really "driving" or just following a complex set of instructions, we care about whether it gets from A to B safely. The same applies if we're designing software to do preparatory work for legal firms, to diagnose cancer or to provide talking therapy to people suffering from mental illness. If it looks like a duck, quacks like a duck and is in every other way indistinguishable from a duck, we don't really care whether it has an internal self-conception of duckness.
We will care when they are creating AI that is supposed to be conscious. No one gives a shit about AI cars or AI that does preparatory legal work, because you can't have a conversation with it or interact with it in any meaningful way. It's only when things supposedly become conscious and have their own identity that they become more sinister, something the film industry loves to peddle.
>There are big and important questions that need to be answered with regard to AI - what we'll do when we're worse than machines at everything, how we stop a rogue AI from turning all the matter in the universe into paperclips, how we can impose our values on superintelligent machines. A lot of very smart, very informed people are genuinely concerned that badly-regulated AI technology might unintentionally kill us all. Whether those machines really think or just perfectly impersonate the act of thinking in every respect is not high on our list of priorities.
But didn't you say we fundamentally don't care about the qualitative internal experiences of such machines?
We should care very much about whether a machine is conscious or not. If a machine is truly conscious, then it must also be held responsible for its actions. If it is not conscious, then the creators of the machine must be held responsible. You could get away with a lot of shit by programming a machine to act in a certain way, then claiming it is actually conscious and acting of its own accord, so that you can't be held responsible.
>If you're still completely unconvinced by my arguments, I'd strongly recommend that you take some time to study the practical facts of AI. Machines can perform exquisitely complex and difficult tasks without replicating the human brain in any way. Examine how Deep Blue and AlphaGo work under the hood, how fraud detection algorithms work, how a Roomba hoovers a floor. Learn the basics of search and sorting algorithms, learn how a Markov chain or a Bayesian network operates. Go right to the fundamentals of computer science - if you don't understand the implications of the universal Turing machine and the lambda calculus, you're fumbling about in the dark.
I think AI is a great thing, but I am not blind to its limitations. I have no issue with it performing complex tasks; it's just that it will never understand those tasks. Only we can.
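And to be fair, I have looked under the hood at some of the things you listed. A Markov chain actually illustrates my point nicely: it's just a lookup table of which word follows which, walked at random. Here's a minimal sketch in Python (the toy corpus and function names are my own invention, not from any particular library):

```python
import random
from collections import defaultdict

def build_chain(text):
    """Record which words follow which - a plain table of transitions."""
    chain = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the table at random. There is no model of meaning anywhere."""
    rng = random.Random(seed)
    word = start
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

corpus = "the duck quacks and the duck walks and the duck swims"
chain = build_chain(corpus)
print(generate(chain, "the", 6, seed=1))
```

The output is locally plausible text, but the program never "understood" a single word - it only counted and sampled. Whether that distinction matters is exactly what we're arguing about.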