>> No. 83192 Anonymous
19th July 2017
Wednesday 6:30 pm
83192 Pensions thought
So the pension age has been put up again, the argument being that more old people means more pensions means more money, etc. etc. etc.

However, if people work longer and we have more old people in employment, then aren't they going to block up the job market, resulting in fewer jobs for young people? If there are fewer jobs for young people, they will end up dependent on the state and claiming benefits.

Does this not mean that, regardless of what is done, it's going to hit one budget or the other? Either it's going to cost more long term in pensions or more long term in benefits, and essentially cancel out any gains made.

I'm only putting this out there as a thought I've had for a while and never really heard anyone else putting the point across and I was wondering, is that because I'm making a fucking stupid point?
>> No. 83193 Anonymous
19th July 2017
Wednesday 6:41 pm
83193 spacer
1. The jobs market isn't a zero-sum game. Whenever someone mentions immigrants taking our jobs, someone will come along to point out how it actually leads to more jobs being created.

2. We don't care about joined up thinking and the overall long-term net effect. If you've managed to reduce costs in your area then that's all that matters and it's somebody else's problem.
>> No. 83197 Anonymous
20th July 2017
Thursday 10:05 pm
83197 spacer
>>83193
>We don't care about joined up thinking and the overall long-term net effect. If you've managed to reduce costs in your area then that's all that matters and it's somebody else's problem.

Essentially this. When the government 'reduces unemployment', more often than not what it's actually done is file those unemployed people under a different title. Yet the politicians are seen to have done something, and that seems to be the whole point of politics.
>> No. 83198 Anonymous
20th July 2017
Thursday 10:49 pm
83198 spacer
>However, if people work longer and we have more old people in employment, then aren't they going to block up the job market, resulting in fewer jobs for young people?

There probably won't be an increase in unemployment, but career progression will be substantially worse for young people. If people on the higher rungs of the ladder aren't retiring, then there's no room to move up. Younger workers will have to wait much longer for a promotion. They might find themselves in roles that they're overqualified for, because of the greater number of experienced workers in the job market.

Expect to see a lot more university-educated baristas, call-centre workers and office temps.
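That queueing effect is easy to put rough numbers on with a toy model (all figures below are invented): senior posts only open up when an incumbent retires, so stretching the retirement window stretches everyone's wait.

```python
def years_to_promotion(senior_posts, retirement_window, queue_length):
    """Toy model: `senior_posts` jobs free up evenly as incumbents
    retire over `retirement_window` years. Returns how long the person
    at position `queue_length` in the promotion queue waits for a
    vacancy (queue position divided by vacancies-per-year, rearranged)."""
    return queue_length * retirement_window / senior_posts

# 100 senior posts whose holders all retire within 20 years:
print(years_to_promotion(100, 20, queue_length=10))  # 2.0
# Same posts, pension age raised so retirements spread over 30 years:
print(years_to_promotion(100, 30, queue_length=10))  # 3.0
```

Crude, but it shows the mechanism: nobody here is unemployed, everyone below the top just waits 50% longer.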
>> No. 83199 Anonymous
22nd July 2017
Saturday 1:24 am
83199 spacer
>>83198 But with automation, won't unemployment increase massively over the next 50-odd years? I think in Japan they have computers doing insurance claims or something.
>> No. 83200 Anonymous
22nd July 2017
Saturday 3:28 am
83200 spacer
>>83199

Yes/no/maybe.

The standard economic theory at the moment is that we'll have a hollowing out of the jobs market, with ultra-high-skilled work on one end and menial work at the other, with very little in between.

Anyone whose work is highly creative and can be scaled via technology will do very well indeed. Being the best or most famous at something will become increasingly valuable. Expect to see a lot more software billionaires and YouTube millionaires. If someone invents a robot chef, then Gordon Ramsay and Heston Blumenthal can personally cook for diners in hundreds of restaurants simultaneously. The best teachers will have classes of thousands linked together by telepresence, or develop AI teaching algorithms based on their techniques.

Many tasks are very low-skilled for a human, but extremely challenging for robots. Sweeping the roads is easy for a machine, but cleaning a toilet is remarkably complex. It's known as Moravec's paradox - tasks that require little conscious effort for humans tend to be fiendishly difficult for AIs. The oldest, most animalistic parts of our brain are phenomenally sophisticated compared to our higher cognitive faculties.

The flipside of Moravec's paradox will be deeply troubling - cognitive tasks that are difficult for humans are relatively easy for AIs. The most sophisticated robots in the world are barely better than a five-year-old at throwing and catching, but an obsolete phone can beat the world champion at chess.
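The chess point is worth unpacking: engines win by brute-force look-ahead, which silicon is spectacularly good at. A minimal minimax sketch on Nim (a toy game: take 1-3 sticks, whoever takes the last stick wins) shows the core idea; real chess engines are this same search plus pruning and an evaluation function.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_move(sticks):
    """Minimax on Nim: players alternately take 1-3 sticks, and whoever
    takes the last stick wins. Returns (score, move) for the side to
    move: score is +1 for a forced win, -1 for a forced loss."""
    if sticks == 0:
        # No sticks left: the previous player took the last one and won.
        return (-1, None)
    outcomes = []
    for take in (1, 2, 3):
        if take <= sticks:
            opponent_score, _ = best_move(sticks - take)
            # Our score is the negation of the opponent's best outcome.
            outcomes.append((-opponent_score, take))
    return max(outcomes)

score, move = best_move(10)   # forced win: take 2, leaving a multiple of 4
```

Exhaustive, dumb, fast - exactly the kind of work an obsolete phone does millions of times a second, and exactly what a five-year-old's throwing arm doesn't reduce to.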

AIs are now doing a lot of routine legal work, because they're immeasurably better than human beings at wading through vast quantities of information. Software by Narrative Science is being used to write business reports for Fortune 500 companies and sports journalism for major newspapers. IBM's Watson is being used to diagnose cancer and provide tax advice. Middle-class office jobs are under far greater threat than semi-skilled manual jobs.
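The 'robot sports journalism' is less magic than it sounds - much of it is structured match data poured into templates. A crude sketch of the idea (team names and wording invented, nothing like Narrative Science's actual system):

```python
def match_report(home, away, home_goals, away_goals, scorer):
    """One-line match report generated from structured data - a crude
    version of template-based 'robot journalism'."""
    if home_goals > away_goals:
        headline = f"{home} beat {away} {home_goals}-{away_goals}"
    elif home_goals < away_goals:
        headline = f"{away} win {away_goals}-{home_goals} away at {home}"
    else:
        headline = f"{home} and {away} draw {home_goals}-{away_goals}"
    return f"{headline}, with {scorer} opening the scoring."

print(match_report("Rovers", "United", 2, 1, "Smith"))
# Rovers beat United 2-1, with Smith opening the scoring.
```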

There's also a lot of stuff that we just prefer humans to do. We already have automated coffee machines, but coffee shops are doing a roaring trade. People will pay £3 for a cup of coffee because the experience is designed to make both the customer and the coffee seem special. It's your particular drink, customised the way you want it. The baristas are all young and fashionable. They ask your name and write it on the cup. There's an element of theatre or ceremony involved in making the drink - the gleaming chrome machine, the hiss of steam, the artistic little pattern in the foam. Psychology and marketing will play an increasing role in the economy.

Expect to see:

a) More shitty Deliveroo and Hermes type jobs, with unskilled "freelance" workers being directed by an algorithmic slave-driver.
b) More bullshit ego-flattery jobs like personal trainer, artisan baker and life coach.
c) A huge decline in office jobs in the £25k-£50k salary range.
d) An even more grotesquely unequal distribution of wealth.
>> No. 83201 Anonymous
22nd July 2017
Saturday 5:47 am
83201 spacer
Thing is, this whole mess is caused by long-term population decline: the net pool of workers is declining even as the pension age is raised, and inevitable physical and cognitive decline will open up spots as the elderly are moved on to till work. The last part sounds horrible, but I figure if you've sorted your outgoings and saved properly it might be pretty nice working a couple of hours every day at Tesco. You'll probably have change left over to buy yourself a few 20 bags throughout the week. OAG style.

Personally I'm more concerned that even with a steadily rising pension age across the West, the national budget is going to take one hell of a battering. Everyone talks about the impact on healthcare, but how about maintaining international peace and security, if we continue down the road of the UNSC outsourcing to regional actors or to China, which has a chequered past on human rights?

>>83200
Kurzgesagt did a good video on the subject if you would like to know more:

http://www.youtube.com/watch?v=WSKi8HfcxEk

Personally I'm a little sceptical on the limits of this thinking. People have shit themselves throughout history over new technology and while at the moment job creation appears to be negative it is very early days to be saying that is now the norm or to get too excited about new technology.

I mean, at the very least, all those young men with nothing to do and no girlfriends to calm them will start scrapping, providing a boom in jobs for our noble arms industry.

>The baristas are all young and fashionable. They ask your name and write it on the cup.

Young and fashionable baristas can fuck off and die. The reason nobody uses automated coffee machines is a question of hygiene and fresh ingredients, which I don't see a robot handling anytime soon. Oh, for sure, 20-something women like to pretend they are living in an episode of Friends, but you could have a guy watching the counter and pushing buttons for the same effect. Maybe he will give you less lip if his 'art' is handling the machine input for you.

...Anyway, enough about that barista who dashed my chances with the curly haired lass in university. The problem is that when you apply economies of scale to things like education it soon becomes less than ideal. Students need attention; they need to be asked questions and, if they're bright, to have a real conversation about how it is (imagine a machine taking an upper-school R.E. class). I did distance learning with the Open University, which should be ideal for industrial scaling, but when it comes to human knowledge you need to have that discussion about limits and questions with a reactive individual.

A similar situation, I've found, goes on when you try scaling legal decision-making: you can have algorithms, but people can and will crack those for the compo (which already goes on), whereas with humans on the other end they can spot chancers. I predict that in 20 years' time a mass dependence on algorithms might be viewed the same way we now look on cubicles and the other bullshit you saw in the film 'Office Space'.
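The 'cracking algorithms for the compo' point can be made concrete with a toy example (the rule and all the numbers are invented): once a fixed triage rule leaks, claims get shaped to slip just under the bar, and nothing in the rule itself can notice.

```python
def auto_payout(claim):
    """Hypothetical fixed-rule claims triage: pay out automatically on
    small, receipted claims; escalate the rest to a human. The rule is
    static, so anyone who learns it can file claims shaped to clear it."""
    return claim["amount"] < 1000 and claim["has_receipt"]

honest  = {"amount": 850,  "has_receipt": True}
chancer = {"amount": 999,  "has_receipt": True}   # crafted to sit just under the bar
big     = {"amount": 5000, "has_receipt": True}   # goes to a human assessor

# The algorithm can't tell the chancer from the honest claimant;
# a human assessor might notice a suspicious run of £999 claims.
results = [auto_payout(c) for c in (honest, chancer, big)]   # [True, True, False]
```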
>> No. 83202 Anonymous
22nd July 2017
Saturday 8:14 am
83202 spacer
>>83200

>Moravec's paradox

Is the reason for this more to do with limitations in how computers "think", or more that we can't build robotic bodies with the same versatility and functionality as a human one?
>> No. 83203 Anonymous
22nd July 2017
Saturday 3:12 pm
83203 spacer
>>83201

>The reason nobody uses automated coffee machines is a question of hygiene and fresh ingredients which I don't see a robot handling anytime soon.

Solved problems. A modern super-automatic espresso machine produces coffee in exactly the same way as a barista using traditional equipment. You pour milk into a refrigerated canister, whole coffee beans into a hopper and the machine does the rest. A customer chooses and customises their drink on a touchscreen; it would be a trivial job to integrate contactless payment. On a functional level, any coffee shop could run with a single member of staff and a bank of super-automatic machines, but they don't, because it feels cheap.
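On a functional level, that single-member-of-staff shop really is just a menu lookup plus options. A hypothetical sketch, with invented drinks and prices, of what the touchscreen is doing:

```python
# Hypothetical menu and prices - the point is the shape of the
# interaction, not the numbers.
MENU   = {"espresso": 2.00, "latte": 2.80, "cappuccino": 3.00}
EXTRAS = {"extra shot": 0.50, "oat milk": 0.40, "syrup": 0.30}

def order(drink, extras=()):
    """Price up a customised drink, as the touchscreen would before
    handing off to the contactless payment terminal."""
    total = MENU[drink] + sum(EXTRAS[e] for e in extras)
    return round(total, 2)

price = order("latte", ["extra shot", "oat milk"])   # 3.7
```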

>the problem is when you apply economies of scale for things like education it soon becomes less than ideal

I think you're under-estimating the rate of progress in AI and the innate advantages of machine intelligence. Clearly it will take some time to develop the natural language processing technology required to teach in a traditional way. When we do develop that level of technology (and it's very much a when rather than an if), the machines have some immense advantages over human teachers. Every pupil in the class can get intensive, one-on-one teaching. The AI can watch every pupil's efforts on every task, quickly identifying and addressing difficulties and misconceptions. The AI is endlessly patient, endlessly supportive and available 24/7; machines don't get tired, bored, frustrated, or develop an irrational dislike for someone. AI teachers can all work from a shared database, so they're all equally expert in every subject and have learned from every mistake made by every AI teacher.

We're already deploying AI psychotherapists. They're very crude compared to a human therapist, but they have a lot of innate advantages. You can speak to the therapist at any time and don't have to wait for an appointment. The AI can use data from every single patient to guide its responses; for example, it can identify subtle patterns of behaviour and interaction that might indicate an elevated risk of suicide or self-harm. Patients feel very comfortable talking to an AI, because they know that the machine is incapable of judging them. AI psychotherapy won't be a complete replacement for traditional psychotherapy any time soon, but it's an extremely powerful adjunct and could easily replace the lower-intensity interventions provided by services like IAPT.

https://x2.ai/
http://www.newyorker.com/tech/elements/the-chatbot-will-see-you-now

>>83202

AIs struggle with a lot of tasks that don't require a body and are very easy for humans. The obvious example is object recognition - telling the difference between a red apple and a cricket ball requires an extraordinary amount of processing power.
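To see why the apple/cricket-ball case is hard, note that the cheap, obvious feature - average colour - barely separates them at all, since both are red. The pixel values below are invented purely for illustration:

```python
def mean_rgb(pixels):
    """Average colour of a list of (r, g, b) pixels."""
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

# Invented pixel samples: a red apple and a red cricket ball.
apple_pixels   = [(180, 30, 40), (200, 40, 50), (170, 25, 35)]
cricket_pixels = [(175, 28, 42), (190, 35, 45), (168, 30, 38)]

a, c = mean_rgb(apple_pixels), mean_rgb(cricket_pixels)
distance = sum((x - y) ** 2 for x, y in zip(a, c)) ** 0.5

# The averages come out nearly identical, so colour alone can't separate
# them; a real recogniser needs texture, shape and context, i.e. vastly
# more computation than a colour histogram.
```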

There is a prominent theory that our intelligence is fundamentally linked to our body, so better AIs require better robot bodies; this theory is still fairly controversial. We still see Moravec's paradox when it comes to the physical abilities of robots - the manipulator arm of a surgical robot can make an incision that's accurate to 0.01mm, but struggles to tie a knot in a suture.

https://en.wikipedia.org/wiki/Embodied_cognition

One explanation for Moravec's paradox is that we've simply invested our resources in making machines that augment our abilities rather than replace them. Historically, we've built machines to do things that we can't. A pocket calculator or a database server is useful precisely because it is completely inhuman. Human minds are slow but complex; computer processors are primitive but ludicrously fast. Garry Kasparov promotes the idea that the future is machine plus human, rather than machine versus human - the differences in our abilities are strongly complementary. A computer will trounce the world champion at chess, but a computer plus a human will trounce a computer working alone.


http://www.youtube.com/watch?v=NP8xt8o4_5Q
>> No. 83207 Anonymous
7th August 2017
Monday 8:04 pm
83207 spacer
>>83202

It's not just robotic bodies that will cause a limitation, it's that computers or AI will never be conscious in the same way we are.

Saying that AI can beat a human at chess is deceiving because that isn't really the case. AI, for all intents and purposes, is a mechanical machine, just one that uses electrons and transistors. This makes it easy to compare to our own brains that also use electricity and neurons (transistors) to function.

However, there is no evidence to show that it is the brain that creates consciousness, otherwise where is the "consciousness" atom? In my opinion, the better analogy would be that it is consciousness using the brain, in the same way that humans (consciousness) use a computer.
>> No. 83211 Anonymous
8th August 2017
Tuesday 9:44 am
83211 spacer
>>83207
>However, there is no evidence to show that it is the brain that creates consciousness, otherwise where is the "consciousness" atom?
Lad, do us a favour. Open up your computer case and dig out the operating system atom, will you?
>> No. 83212 Anonymous
8th August 2017
Tuesday 11:30 am
83212 spacer
Nobody (sane) is presenting these single-function machines as general AI, with anything to do with consciousness, because they're not.
They're interesting, and may be useful, but they're not brainzzzz.
Funding is flowing to AI, so every fucking thing is getting labelled as AI. You can see why, but it's quite annoying.
>> No. 83213 Anonymous
8th August 2017
Tuesday 2:44 pm
83213 spacer
>>83211

Exactly my point: what we experience as the operating system exists within our consciousness, which is non-physical. You can dig around a computer like you can with a brain, but you will never find the operating system, just as you will never find consciousness.
>> No. 83214 Anonymous
8th August 2017
Tuesday 3:31 pm
83214 spacer
>>83213 And yet they both exist. The OS isn't particularly mystical, and I'm betting that the consciousness turns out not to be either, once our tools get better.
If you think otherwise, why?
>> No. 83216 Anonymous
8th August 2017
Tuesday 3:36 pm
83216 spacer
>>83213

The operating system is a complex series of stored electrical connections on a hard drive.

The consciousness is a complex series of stored electrical connections in the brain.
>> No. 83217 Anonymous
8th August 2017
Tuesday 3:46 pm
83217 spacer
>>83216
>The operating system is a complex series of stored electrical connections on a hard drive.
Erm ...
>> No. 83218 Anonymous
8th August 2017
Tuesday 3:51 pm
83218 spacer
>>83217

Sorry what's the problem?
>> No. 83219 Anonymous
8th August 2017
Tuesday 4:03 pm
83219 spacer
>>83218

Hard drives store data magnetically.
>> No. 83220 Anonymous
8th August 2017
Tuesday 4:09 pm
83220 spacer
>>83219

Fuck's sake, I knew that's what you were going to say. All I did was skip a step to make a more concise point. Alright then:

The operating system is a complex series of electrical impulses that are then stored magnetically on a plate and then read back later as electrical impulses. Or if you have an SSD it's a series of stored electrical connections.

The consciousness is a complex series of stored electrical connections in the brain.

DOES THAT MAKE IT BETTER YOU CUNT?! All I've done is ruin the simplicity of my statement by clarifying a point that is entirely not relevant to the point. I hope you die in a fucking fire
>> No. 83221 Anonymous
8th August 2017
Tuesday 4:14 pm
83221 spacer
I'm still really confused by what >>83207 is trying to say. AI can never be conscious like us because both consciousness and software are complex series of electrical connections? How does that make sense?
>> No. 83222 Anonymous
8th August 2017
Tuesday 4:30 pm
83222 spacer
>>83221

I think he's saying the consciousness is an indefinable entity, not tangibly stored in the physical brain, quite possibly beyond the physical realm. He just used an unfortunate analogy of an operating system, which just happens to draw parallels to the more conventional view that consciousness is indeed tangible.
>> No. 83223 Anonymous
8th August 2017
Tuesday 4:30 pm
83223 spacer
>>83220
Software is not normally executed from mass storage, and in neither mass storage nor main memory are they stored as connections. They're also not impulses. Fuck's sake lad, if you're going to talk pseudoscientific bollocks at least get your terminology vaguely right.
>> No. 83224 Anonymous
8th August 2017
Tuesday 4:36 pm
83224 spacer
>>83223

So consciousness is intangible then? I don't know where you're going with this. Do you think I'm agreeing with >>83207? Cos I'm not.
>> No. 83225 Anonymous
8th August 2017
Tuesday 4:43 pm
83225 spacer
>>83222
But it seems like he went on to agree later. Ah well.
>> No. 83226 Anonymous
8th August 2017
Tuesday 4:49 pm
83226 spacer
>>83224

Not him, but it's not intangible per se but our own consciousness is unknowable to us precisely because it's what we're limited by.

A lot of "consciousness", if you want to use computer terminology, is like a watchdog process governing our vital instincts, allowing us to make higher-level override decisions when our vital instincts want to do something that's probably not a good idea within our current context.

Of course the software the human brain is programmed with throughout childhood is essentially the result of an entire society working as a machine learning system. Half the time we can't even explain why something is wrong/right other than it being obvious or just feeling right and this is the direct result of the programming put in place by our upbringing within a society.

Essentially an AI will never have the pre-existing data set needed to make conscious decisions about primitive impulses in the way the human brain does. We can't set up an AI to do something we can't ever understand ourselves.
>> No. 83227 Anonymous
8th August 2017
Tuesday 5:03 pm
83227 spacer
>>83226
Why not? The people building computers don't know how all of it works. Maybe they understand microchips but not how to mine and refine the metals they're made from. It doesn't really matter if one brain doesn't perfectly understand the entire process, so long as all of them together can figure out ways to imitate it.
>> No. 83228 Anonymous
8th August 2017
Tuesday 5:13 pm
83228 spacer
>>83227
That is, of course, assuming that you think that building brains is a good idea. It might sound like fun, but options include:

a) Not giving them any actuators, pissing them off.
b) Giving them actuators, but them knowing they're not human and getting pissed off.
c) Overshooting, and building a 10-brain, or a million-brain. There's nothing particularly special about the size of our bonces, except that much bigger would rip yer ma apart, and doesn't seem to have been necessary back when we came down from the trees(tm). Being comprehensively out-thought may be either a good or a bad thing, but it's definitely a thing - and best not to have pissed it off.
d) Or, of course, consciousness comes from outside the brain, and we'll never make a conscious thing except the old-fashioned way. (And are babies conscious? At what point?)
All well-worn discussions.
>> No. 83229 Anonymous
8th August 2017
Tuesday 5:24 pm
83229 spacer
>>83227

> Why not? The people building computers don't know how all of it works.

That's not my point at all. My point is that we can't program something to do something the way our brains do it, because we don't (and can't ever) understand enough about the process to be able to duplicate it.

The closest we'll probably get is allowing computers to machine-learn how to be human by working on massive sets of human behavioural data. This may well give us a very good facsimile of human behaviour, but it would be extremely different under the hood and wouldn't produce true "consciousness" - it'd just produce behaviour that looks and acts to you like it's consciousness.
>> No. 83230 Anonymous
8th August 2017
Tuesday 5:54 pm
83230 spacer
>>83229
I know it's not your point, I was countering your point. There's no reason why your statement that we can't ever understand enough should be true. You seem to be arguing from the idea that a complex system (a brain) can only model (understand) things less complex than itself.
This is misleading because it doesn't matter; you simply need many brains to understand individual parts of the whole, and them to work together.
>> No. 83231 Anonymous
8th August 2017
Tuesday 6:27 pm
83231 spacer
>>83229

>That's not my point at all. My point is that we can't program something to do something the way our brains do it, because we don't (and can't ever) understand enough about the process to be able to duplicate it.

That's not relevant in any way. Computers don't have to think like us to be immeasurably better than us at everything. Horses weren't rendered obsolete by a perfect mechanical simulacrum of a horse, they were rendered obsolete by the internal combustion engine.

Consciousness has no practical significance. It's a mystery that philosophers like to ponder, but it doesn't actually matter. Computers don't need consciousness to drive a car or compose music or do your tax return or run a dystopian society in which humans are kept as pets.
>> No. 83232 Anonymous
8th August 2017
Tuesday 6:33 pm
83232 spacer
>>83216
>>83221
>>83224

I'm saying there's an extra "component" to an operating system beyond its physical components that you're missing which is our own consciousness. The only reason an operating system exists and can be used and understood is because of consciousness.

You can dig deep into the physical hardware that runs an operating system, but you cannot point to any physical part, or all of the parts and say "that's the operating system", because what we really mean by the operating system is what we experience when we use it.

It's the same with the brain, you can point to neurons firing in someone's brain and say "that's consciousness", but what you see and what the other person experiences when that neuron fires is completely different.
>> No. 83233 Anonymous
8th August 2017
Tuesday 6:40 pm
83233 spacer
>>83231

>Computers don't need consciousness to drive a car or compose music or do your tax return or run a dystopian society in which humans are kept as pets.

But are computers really driving a car, or just following their programming (written by consciousness)?
>> No. 83234 Anonymous
8th August 2017
Tuesday 6:44 pm
83234 spacer
>>83232

But the operating system can be pointed to. It has a physical location and controls actual parts of the computer. Much like the consciousness. It's part of the brain and controls the body. Parses the signals into an expressible format.

I believe we're nothing more than an incredibly complex computer and eventually the technology will be there to recreate that in a computer. Free thought is nothing more than the grace to have a staggeringly complex amount of parameters to work within
>> No. 83235 Anonymous
8th August 2017
Tuesday 6:47 pm
83235 spacer
>>83232
>It's the same with the brain, you can point to neurons firing in someone's brain and say "that's consciousness", but what you see and what the other person experiences when that neuron fires is completely different.

I can stick my oscilloscope probes into the PC, and watch transactions on the FSB or DRAM whizzing by, and say 'that's the OS running'. What I see on my screen and what the OS is experiencing are completely different.

You keep trying to draw a distinction by drawing parallels.
Do you think that people-brains are profoundly different from dog-brains? People-consciousness profoundly different from dog-consciousness? Vole consciousness? Fish consciousness? Earthworm consciousness? Bacteria consciousness?
Is this all a new strand of thought for you? I can point you at some books that might be fun, if so.
>> No. 83237 Anonymous
8th August 2017
Tuesday 7:09 pm
83237 spacer
>>83234

Saying the operating system "controls" the computer implies that it has consciousness. An operating system can only do what it does because people with consciousness have set it up that way.

>I believe we're nothing more than an incredibly complex computer and eventually the technology will be there to recreate that in a computer. Free thought is nothing more than the grace to have a staggeringly complex amount of parameters to work within

If you believe that, are you then willing to accept that anything you do is nothing more than a complex mechanical process that you are a slave to? You, or anyone else, cannot be held responsible for your actions because it was not "you" or "them" (AKA consciousness) doing it.

>>83235

Saying "that's the OS running" is not the same as saying "that is the OS".

>You keep trying to draw a distinction by drawing parallels.
>Do you think that people-brains are profoundly different from dog-brains? People-consciousness profoundly different from dog-consciousness? Vole consciousness? Fish consciousness? Earthworm consciousness? Bacteria consciousness?
>Is this all a new strand of thought for you? I can point you at some books that might be fun, if so.

No I do not think the consciousness is different, just the "hardware" or brain it is using is different.

It's not a new strand of thought no, I've been interested in the subject for years.
>> No. 83238 Anonymous
8th August 2017
Tuesday 7:20 pm
83238 spacer
>>83237 I just wanted to discuss pensions and welfare...
>> No. 83239 Anonymous
8th August 2017
Tuesday 7:26 pm
83239 spacer
>>83237

>If you believe that, are you then willing to accept that anything you do is nothing more than a complex mechanical process that you are a slave to?

Yes, absolutely. I believe we have an amount of processes that borders on the infinite, so we do all have choices and personality. But underneath all that, yes, we're very much slaves to those processes. Love is a chemical, etc.

>You, or anyone else, cannot be held responsible for your actions because it was not "you" or "them" (AKA consciousness) doing it.

It's an argument I'm not intelligent enough to consider fully, but yes and no. We're just as controlled by mechanical processes as an ant or a dog, it's just we appear to have an expansive enough level of thought to rationalize it, or indeed resist a baser instinct. A monkey may be compelled to kill another monkey that is threatening him - we are compelled in a similar way, but usually are able to use a more complex brain to realise that killing the threat may not be the best course of action socially. It's the same thing, we just have access to more choices within the parameters.
>> No. 83240 Anonymous
8th August 2017
Tuesday 7:48 pm
83240 spacer
>>83238

Sorry for some reason I thought the OP was about AI.

>>83239

But it is not "you" using more complex thought; it is simply the result of a non-conscious mechanical process for which you can claim no responsibility, in the same way you can't blame a computer for hacking someone's email.
>> No. 83241 Anonymous
8th August 2017
Tuesday 8:18 pm
83241 spacer

>>83230

> You seem to be arguing from the idea that a complex system (a brain) can only model (understand) things less complex than itself.

Almost correct.

> This is misleading because it doesn't matter; you simply need many brains to understand individual parts of the whole, and them to work together.

Incorrect.

It's not to do with "greater than", it's to do with being stuck within a paradigm and not being able to understand the paradigm precisely because you're stuck within it. We can never understand the universe because we are stuck within the universe; we would have to be outside it to observe it from an objective frame of reference. Again, with consciousness we cannot observe it from an objective frame of reference and therefore we cannot understand it.

Consciousness encapsulates all that we are, we cannot move outside it, therefore we cannot truly observe it and replicate it.

Regardless, your description of "need many brains to understand individual parts of the whole, and them to work together" made me think of pic related. I hope you now feel really silly.
>> No. 83242 Anonymous
8th August 2017
Tuesday 8:28 pm
83242 spacer
>>83241
There's no such thing as an objective frame of reference yet still things happen. Do you need objective knowledge of chairs to make one? We can observe the processes that make up the universe without understanding the entire universe. I can understand 1 + 1 = 2 without having perfect objective knowledge of all of maths.

>made me think of pic related
Then you're an idiot. You may not know how to cast a nail but you can wield a hammer; the man who cast the nail doesn't know how to prop up the shaft of an iron-ore mine. Does this mean the building of fences is beyond human grasp? Obviously not.
The parable of the blind men and the elephant is irrelevant, although god knows why your only experience of it is through that terrible cartoon.
>> No. 83243 Anonymous
8th August 2017
Tuesday 8:48 pm
83243 spacer
>>83242

>We can observe the processes that make up the universe without understanding the entire universe.

And yet incomplete principles don't allow you to rationalise about / simulate a complete universe. Likewise your (humanity's) incomplete understanding of consciousness would prevent a simulation of consciousness and therefore prevent an AI which encapsulates it.

This is pretty simple stuff; think of the brain-in-a-jar thought experiment, or Plato's allegory of the cave.

> You may not know how to cast a nail but you can wield a hammer, the man who cast the nail doesn't know how to prop up the shaft for an iron ore mine, does this mean building of fences is beyond human grasp? Obviously not.

We're obviously arguing from totally different points of view. To you, I suspect, the human brain is nothing but a big grey organic two's complement machine, and consciousness is something that can be observed and detected, as opposed to something we largely judge based on "is it a living animal that shows cogitation and/or can it convince me that it's conscious".

Within a couple of years the Turing test (or whatever you want to call what it's become in the mainstream press) will be smashed to pieces by an AI that's nothing but a mass of machine learning algorithms running on a glorified two's complement calculator. But it won't actually be conscious, and it probably never will be. And that's the big difference.
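(For the avoidance of doubt, "two's complement" is just how ordinary hardware encodes signed integers. A toy sketch, function name mine, nothing to do with any actual AI:)

```python
def twos_complement(value, bits=8):
    """Encode a signed integer as its unsigned two's-complement bit pattern."""
    if not -(1 << (bits - 1)) <= value < (1 << (bits - 1)):
        raise ValueError("value out of range for the given width")
    # Masking to the word width gives the two's-complement representation,
    # because Python ints already behave as infinite-width two's complement.
    return value & ((1 << bits) - 1)

print(bin(twos_complement(-1)))    # 0b11111111
print(bin(twos_complement(-128)))  # 0b10000000
print(bin(twos_complement(5)))     # 0b101
```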

> The parable of the blind men and the elephant is irrelevant, although god knows why your only experience of it is through that terrible cartoon.

Honestly m80 it was just the first image on google images.
>> No. 83244 Anonymous
8th August 2017
Tuesday 9:25 pm
83244 spacer
>>83243 But it won't actually be conscious, and it probably never will be.

The Watson style threads of AI research aren't attempting to produce a consciousness, so I hope you're neither surprised nor saddened / relieved by this.

Plenty of groups are working towards a synthetic brain, though - and that's rather more likely to become aware. What happens after that isn't clear. The groups building better and better Watsons / chessbots / ad-slingers have no problems with their efforts becoming (almost) perfect.

Having built mass-market robots, and spent a fair amount of time watching both engineers and customers watching them do their thing, I can tell you you'd be amazed how low people set the bar for consciousness. It's handy, as they'll cut a robot a lot of slack, despite (or perhaps because of) its bumbling ineptitude.
>> No. 83246 Anonymous
8th August 2017
Tuesday 10:05 pm
83246 spacer
>>83244

> The groups building better and better Watsons / chessbots / ad-slingers have no problems with their efforts becoming (almost) perfect.

That's all well and good, and I utterly agree, however it doesn't really address the main point that AIs will never "think" (or "compute") in the same way that a human mind does. The software of a human mind is originally written and continually updated thereafter through the process of living, through the continuous Brownian motion of society keeping it up to date. It's this unique "programming through continuous interaction" that makes humans really good at what humans are really good at.

Likewise, AIs are really good at things humans are generally bad at, and they will continue to get better and better at them. AIs, because they can't "think" in the same way humans do, will most likely never be good at what humans are really good at.

I'm also vaguely interested in what you think of as the distinction between consciousness and self-awareness?
>> No. 83247 Anonymous
8th August 2017
Tuesday 10:16 pm
83247 spacer
>>83246 AIs will never "think" (or "compute") in the same way that a human mind does.
That's, like, just your opinion, man...

I can't see any particular reason why we can't build a brain, nurture it like we would a child, and let it rip. Should be able to run this all far faster than real-time, if we allow the nurturers to be synthetic too.
Obviously not right now - but unless you believe that the soul is popped in from outside, what's stopping it?

Small children are conscious - they see things, do things and think, but it takes them a while to figure out that they exist as a standalone thing.
There are certainly better descriptions. I play at the far end of this game.
>> No. 83248 Anonymous
8th August 2017
Tuesday 11:15 pm
83248 spacer
>>83247

>I can't see any particular reason why we can't build a brain, nurture it like we would a child, and let it rip. Should be able to run this all far faster than real-time, if we allow the nurturers to be synthetic too.

This assumes that consciousness "comes out" of the physical, and is therefore physical itself. Scientists try to get away with this by saying that consciousness is an "emergent property" of the brain, but this is circular logic; it's no different to saying consciousness just magically appears when you have a brain, and it does nothing to explain how this happens.

I see consciousness as the logical opposite of the physical, opposite sides of the same thing. A brain cannot work without consciousness, and consciousness cannot work as a brain without a brain.
>> No. 83249 Anonymous
9th August 2017
Wednesday 2:40 am
83249 spacer
>>83247

> I can't see any particular reason why we can't build a brain, nurture it like we would a child, and let it rip. Should be able to run this all far faster than real-time, if we allow the nurturers to be synthetic too.

> Obviously not right now - but unless you believe that the soul is popped in from outside, what's stopping it?

Probably the fact that we, as a species, have arrived at where we are through roughly 6000 years of higher-brain meta-self-programming. It's simply impossible to program that into an AI and to suggest we could is to say that you know the properties and importance of every data point generated by humanity in the last 6000 years.

Adult human brains are the result of ~15 years of machine learning, but the source data is millions and indeed billions of people interacting around the globe for ~6000 years. If we could (to steal another computing analogy) take a snapshot of modern humanity and import it into an AI then that would work just fine. Unfortunately most of history is lost and we barely know how we got here in the first place, never mind where we actually are.

The sheer meta-programming that a human undergoes throughout their life, the low-level hormone-driven instincts that guide us beneath that level (think of people under anaesthesia who continue to function: excrete, breathe, etc), none of this is possible to simulate ... never mind combine with a simulation of our higher faculties.
>> No. 83250 Anonymous
9th August 2017
Wednesday 2:48 am
83250 spacer
>>83248

According to everything that I've read recently, human consciousness comes about due to the separation of the mind into two parts: the maintenance (respiration, excretion, etc) and primitive impulses (fight or flight), and the "higher cognition" which mainly deals in overriding the instincts of our "animal brain" and allowing us to apply reason to our actions.

This obviously gives us advantages over other animals which cannot reason over their actions, such as planning, forward thinking, rationing of food and water, (theoretical) control over our population size and so on and so forth.

The "illusion" of consciousness comes about because "one brain" (the higher faculties) is always watching the "other brain" (our impulses) and applying moralistic and/or ethical decisions over its output. Thus we arrive at such phraseology as "let your conscience be your guide".

I'm too tired/pissed right now but do a Google search and look up the theory that the ancient Greeks hadn't yet fully developed the higher faculties and heard what we currently call the Internal Monologue as the voices of God(s). It's not conspiracy stuff at all.
>> No. 83251 Anonymous
9th August 2017
Wednesday 7:24 am
83251 spacer
>>83249 It's simply impossible to program that into an AI and to suggest we could is to say that you know the properties and importance of every data point generated by humanity in the last 6000 years.

How is this different from children? Bring up your AI like you would a child. It's not as if children get the full 6000 years of every datapoint ever - they get a distilled (and often contradictory) view. Your Amish kid gets a different world from an inuit from a prince, and yet all more or less come out functional.

If you think that consciousness is magic and popped in from outside, just say.
>> No. 83252 Anonymous
9th August 2017
Wednesday 8:53 am
83252 spacer
>>83251
Indeed, there's no reason to suggest something needs human intelligence to be intelligent.
>> No. 83253 Anonymous
9th August 2017
Wednesday 11:01 am
83253 spacer
>>83252

Exactly this. The human brain is hopelessly limited in all sorts of ways compared to intelligent machines. It's absurdly illogical to suggest that 1.4kg lumps of meat will always and forever be superior to arbitrarily large lumps of doped silicon and wire.

The "problem" of consciousness is a dog-and-pony show. We can't even define it, so it's ludicrous to assume that it's something unique to biological brains. It has no practical relevance, and it's a strain of argument that invariably boils down to "machines can't have souls". Consciousness is unique to humans -> machines are not human -> machines cannot be conscious -> humans will always be superior to machines. It's exactly the same circular logic as "the bible is true because it says so in the bible".
>> No. 83254 Anonymous
10th August 2017
Thursday 1:21 am
83254 spacer
>>83253

If you cannot define what consciousness is, why do you use it as a word to describe "it"? And how can you create something that you cannot define? AI is all about mimicking consciousness, creating the illusion of consciousness, but if that cannot be defined then aren't they just making it up as they go along?
>> No. 83255 Anonymous
10th August 2017
Thursday 2:21 am
83255 spacer
>>83254

>AI is all about mimicking consciousness

No it isn't. Not in any way, shape or form. AI is about making machines that can think. In the here-and-now, that means special-purpose machines that can do one task well, with or without situational learning. In the long-term, that means artificial general intelligence - a machine that can independently learn any skill through observation and experimentation. Neither class of machine requires anything that could reasonably be described as "consciousness".


Turing preempted all this in 1950 in his paper Computing Machinery and Intelligence, systematically dismantling all of the key arguments against AI at a time when most people had never even heard the word "computer". I have no idea if your internal experience is the same as mine. It's entirely possible that I'm the only real thinking and feeling person in this world and everyone else is an elaborate automaton or a figment of my imagination. I don't know if my understanding of "blueness" is the same as yours, or if you have a totally different internal experience when you look at the sky. I don't know if you experience pain as I do, or if you're just pretending. The practical implications of this quandary are effectively nil - you can't prove to me that you aren't a philosophical zombie, but I assume that you aren't out of basic politeness.

We fundamentally don't care about the qualitative internal experience of those machines; we care about what they can do. Consciousness might be a fascinating line of inquiry for philosophers, but it is utterly irrelevant to computer scientists. When designing software to drive a car, we don't care whether the car is really "driving" or just following a complex set of instructions, we care about whether it gets from A to B safely. The same applies if we're designing software to do preparatory work for legal firms, to diagnose cancer or to provide talking therapy to people suffering from mental illness. If it looks like a duck, quacks like a duck and is in every other way indistinguishable from a duck, we don't really care whether it has an internal self-conception of duckness.

There are big and important questions that need to be answered in regards to AI - what we'll do when we're worse than machines at everything, how we stop a rogue AI from turning all the matter in the universe into paperclips, how we can impose our values on superintelligent machines. A lot of very smart, very informed people are genuinely concerned that badly-regulated AI technology might unintentionally kill us all. Whether those machines really think or just perfectly impersonate the act of thinking in every respect is not high on our list of priorities.

If you're still completely unconvinced by my arguments, I'd strongly recommend that you take some time to study the practical facts of AI. Machines can perform exquisitely complex and difficult tasks without replicating the human brain in any way. Examine how Deep Blue and AlphaGo work under the hood, how fraud detection algorithms work, how a Roomba hoovers a floor. Learn the basics of search and sorting algorithms, learn how a Markov chain or a Bayesian network operates. Go right to the fundamentals of computer science - if you don't understand the implications of the universal Turing machine and the lambda calculus, you're fumbling about in the dark.
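For instance, one of the things recommended above, a first-order Markov chain, fits in a few lines. A toy sketch (function names mine, nothing like a production system) that learns which word follows which and babbles accordingly:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10, seed=0):
    """Walk the chain from `start`, picking each next word at random."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1])
        if not followers:
            break  # dead end: this word was never followed by anything
        out.append(rng.choice(followers))
    return " ".join(out)

chain = build_chain("the cat sat on the mat the cat ran")
print(generate(chain, "the"))
```

No understanding anywhere in there, yet it produces plausible-looking sequences, which is rather the point.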
>> No. 83256 Anonymous
10th August 2017
Thursday 2:30 am
83256 spacer
>>83255

I didn't post >>83254 but now I'm starting to wonder if you're the same person who posted about Moravec's paradox, and if you are I'm going to have a giggle and hide this thread, because I'm arguing with you to defend your own point. Which is just perfectly fucking Britfa.

Also learn to sage, you cunt.
>> No. 83257 Anonymous
10th August 2017
Thursday 2:34 am
83257 spacer
>>83255
>>83256

Also you can wrap the lambda calculus (and the knights of) up in a bunch of Lispy parenthesis and shove them right up your arse.

Let's have a handbags at dawn cunt off over Knuth where the winner gets a threesome with RMS and his parrot.
>> No. 83260 Anonymous
11th August 2017
Friday 12:57 am
83260 spacer
>>83255

>No it isn't. Not in any way, shape or form. AI is about making machines that can think. In the here-and-now, that means special-purpose machines that can do one task well, with or without situational learning. In the long-term, that means artificial general intelligence - a machine that can independently learn any skill through observation and experimentation. Neither class of machine requires anything that could reasonably be described as "consciousness".

Thought, observation and experimentation all require consciousness/awareness. How do I know I'm aware? Because I'm aware that I'm aware. I'm also aware of that, and that and so on to an infinite regression. For AI to do that would require infinite processing power, infinite memory and infinite code.

>Turing preempted all this in 1950 in his paper Computing Machinery and Intelligence, systematically dismantling all of the key arguments against AI at a time when most people had never even heard the word "computer". I have no idea if your internal experience is the same as mine. It's entirely possible that I'm the only real thinking and feeling person in this world and everyone else is an elaborate automaton or a figment of my imagination. I don't know if my understanding of "blueness" is the same as yours, or if you have a totally different internal experience when you look at the sky. I don't know if you experience pain as I do, or if you're just pretending. The practical implications of this quandary are effectively nil - you can't prove to me that you aren't a philosophical zombie, but I assume that you aren't out of basic politeness.

Wouldn't the practical issue with that be that the "people" creating AI aren't actually conscious, therefore how can they create anything? The only person that could in this scenario would be yourself.

>We fundamentally don't care about the qualitative internal experience of those machines; we care about what they can do. Consciousness might be a fascinating line of inquiry for philosophers, but it is utterly irrelevant to computer scientists. When designing software to drive a car, we don't care whether the car is really "driving" or just following a complex set of instructions, we care about whether it gets from A to B safely. The same applies if we're designing software to do preparatory work for legal firms, to diagnose cancer or to provide talking therapy to people suffering from mental illness. If it looks like a duck, quacks like a duck and is in every other way indistinguishable from a duck, we don't really care whether it has an internal self-conception of duckness.

We will care when they are creating AI that is supposed to be conscious. No one gives a shit about AI cars or AI that does preparatory legal work because you can't have a conversation with it, or interact with it in any meaningful way. It's when things supposedly become conscious and have their own identity that things become more sinister, something the film industry loves to peddle.

>There are big and important questions that need to be answered in regards to AI - what we'll do when we're worse than machines at everything, how we stop a rogue AI from turning all the matter in the universe into paperclips, how we can impose our values on superintelligent machines. A lot of very smart, very informed people are genuinely concerned that badly-regulated AI technology might unintentionally kill us all. Whether those machines really think or just perfectly impersonate the act of thinking in every respect is not high on our list of priorities.

But didn't you say we fundamentally don't care about the qualitative internal experiences of such machines?

We should care very much about whether a machine is conscious or not. If a machine is truly conscious, then it must also be held responsible for its actions. If it is not conscious, then the creators of the machine must be held responsible. You could get away with a lot of shit by programming a machine to act in a certain way, but claim it is actually conscious and doing it of its own accord so you can't be held responsible.

>If you're still completely unconvinced by my arguments, I'd strongly recommend that you take some time to study the practical facts of AI. Machines can perform exquisitely complex and difficult tasks without replicating the human brain in any way. Examine how Deep Blue and AlphaGo work under the hood, how fraud detection algorithms work, how a Roomba hoovers a floor. Learn the basics of search and sorting algorithms, learn how a Markov chain or a Bayesian network operates. Go right to the fundamentals of computer science - if you don't understand the implications of the universal Turing machine and the lambda calculus, you're fumbling about in the dark.

I think AI is a great thing, but I am not blind to its limitations. I have no issue with it performing complex tasks, it's just that it will never understand those complex tasks, only we can.
>> No. 83261 Anonymous
11th August 2017
Friday 10:05 am
83261 spacer
>>83260
>How do I know I'm aware? Because I'm aware that I'm aware. I'm also aware of that, and that and so on to an infinite regression. For AI to do that would require infinite processing power, infinite memory and infinite code.
Why? You don't have those things and you seem to cope.
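To labour the point: finite code can hold an unbounded regression perfectly well, by only producing each level when it's asked for. A toy sketch (nothing to do with any real AI):

```python
def awareness():
    """Yield ever-deeper statements of awareness. The code is finite,
    but there is no fixed bound on how many levels it can produce -
    each level is only constructed on demand."""
    depth = 0
    while True:
        yield "I'm aware that " * depth + "I'm aware."
        depth += 1

gen = awareness()
print(next(gen))  # I'm aware.
print(next(gen))  # I'm aware that I'm aware.
print(next(gen))  # I'm aware that I'm aware that I'm aware.
```

Infinite processing power not required; you just never need all the levels at once.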
>> No. 83263 Anonymous
11th August 2017
Friday 1:37 pm
83263 spacer
>>83261
He's special.
>> No. 83264 Anonymous
11th August 2017
Friday 9:51 pm
83264 spacer
>>83261

How do you know you're aware?
>> No. 83265 Anonymous
11th August 2017
Friday 10:52 pm
83265 spacer
>>83264
If I don't or can't then it's a moot point if AI can.
>> No. 83266 Anonymous
11th August 2017
Friday 11:01 pm
83266 spacer
>>83265

You're telling me you don't know you're aware?
>> No. 83267 Anonymous
11th August 2017
Friday 11:03 pm
83267 spacer
>>83266
No, I'm telling you I don't know how I know I'm aware and regardless of whether I'm right or wrong about being aware, if I am conscious then it's irrelevant if an AI can know how it's aware, or if I can know that the AI is aware.
>> No. 83268 Anonymous
11th August 2017
Friday 11:19 pm
83268 spacer
>>83267

But are you aware that you're aware?
>> No. 83269 Anonymous
11th August 2017
Friday 11:45 pm
83269 spacer
>>83268
I'm aware that I'm aware to as many degrees as you want to go, however I, like any decent AI or even fairly basic software, have checks built in that make me aware when I'm in a pointless loop and prevent it from being followed further.
>> No. 83270 Anonymous
11th August 2017
Friday 11:57 pm
83270 spacer
>>83269

You cannot have awareness without that infinite regression.

You cannot code infinite regression into finite code, it's logically impossible.
>> No. 83271 Anonymous
12th August 2017
Saturday 12:03 am
83271 spacer
>>83270
There is no infinite regression because I stop thinking about it when I realise it's tending towards infinite regression, as any basic software does.
>> No. 83272 Anonymous
12th August 2017
Saturday 12:15 am
83272 spacer
>>83271

It doesn't stop when you stop thinking about it, because you're still aware. Just because you're not thinking about the infinite regression, doesn't stop it from existing.
>> No. 83273 Anonymous
12th August 2017
Saturday 12:24 am
83273 spacer
>>83272

The potential for infinite regression is there, or at least up until the point that I starve to death because I'm doing nothing but thinking "I'm aware that I'm aware that I'm aware ad nauseam", much like software has the potential for infinite regression but stops itself looping because it's going to run out of memory.
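Which is roughly what such a check looks like in practice. A toy sketch of a depth guard (the cut-off value is arbitrary, mine, not from any real system):

```python
MAX_DEPTH = 50  # arbitrary cut-off; real software picks limits to suit its stack/memory

def reflect(depth=0):
    """Follow the 'aware that I'm aware' regression one level at a time,
    but bail out at a fixed depth rather than recursing forever."""
    if depth >= MAX_DEPTH:
        return depth  # the potential regression is there, it's just never followed
    return reflect(depth + 1)

print(reflect())  # 50
```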
>> No. 83274 Anonymous
12th August 2017
Saturday 12:41 am
83274 spacer
>>83273

You don't need to say to yourself that you're aware in a loop to be aware, that's just a way to prove awareness is an infinite regression.

If AI cannot do the same, then it is not conscious.
>> No. 83275 Anonymous
12th August 2017
Saturday 1:07 am
83275 spacer
>>83274
That seems awfully spurious.
>> No. 83276 Anonymous
12th August 2017
Saturday 1:52 am
83276 spacer
>>83275

If AI is to be aware, it needs to be aware that it's aware... etc ad infinitum, that's just how awareness works.
>> No. 83277 Anonymous
12th August 2017
Saturday 2:38 am
83277 spacer
>>83276

In a similar infinite loop, you appear to be ignorant of your own ignorance.
>> No. 83278 Anonymous
12th August 2017
Saturday 10:54 am
83278 spacer
>>83276
I think you've been misreading Hofstadter.
>> No. 83283 Anonymous
17th August 2017
Thursday 10:36 pm
83283 spacer
>>83277
>>83278

Make your cases for how awareness can be programmed into A.I.

If awareness is a finite process that can be coded into A.I, where does it begin, and where does it end? As soon as that process ends, you don't have awareness any more.
>> No. 83284 Anonymous
17th August 2017
Thursday 11:58 pm
83284 spacer
>>83283
Make your case for how that doesn't apply to humans without invoking magic.
>> No. 83286 Anonymous
18th August 2017
Friday 6:04 pm
83286 spacer
>>83284

I consider awareness to be a non-physical "thing", which is why we are able to separate "ourselves" from our physical body as well as all other physical things. We say "I have a brain", rather than directly identifying as the brain, which implies something else has a brain - that something else being non-physical awareness/consciousness.

The current argument against this is that the brain is the thing creating the consciousness. Consciousness is just an "emergent property" of the brain. Yet there is no physical evidence of this emergent property, the closest we get is the chemical/electrical activity of the brain, but this does not prove the chemical/electrical activity is what's creating consciousness. For that to be true, you will have to argue that non-conscious matter/processes can create consciousness, which is logically impossible. Scientists aren't afraid of being illogical though, considering many believe that something can come from nothing (big bang), despite also believing that energy cannot be created nor destroyed.

So my argument is that awareness cannot be an "emergent property" of non-aware code and hardware either.
>> No. 83287 Anonymous
18th August 2017
Friday 6:45 pm
83287 spacer
>>83286

Consciousness is the result of the soul, something arising from Logos, the godhead infusing everything in the Universe with its fiery breath.
>> No. 83288 Anonymous
18th August 2017
Friday 6:48 pm
83288 spacer
>>83287
Bullshit.
Consciousness is the result of the Buddha's Auntie Mavis dreaming that we exist.
>> No. 83290 Anonymous
18th August 2017
Friday 8:46 pm
83290 spacer
>>83286
So ... magic then.
>We say "I have a brain", rather than
Great, quirks of language are now evidence of souls.
>> No. 83291 Anonymous
18th August 2017
Friday 9:55 pm
83291 spacer
>>83286
>>83290

> So my argument is that awareness cannot be an "emergent property"

Consciousness is evidently an emergent property even in humans. A newborn is no more self aware than a potato.
>> No. 83292 Anonymous
19th August 2017
Saturday 2:05 am
83292 spacer
>>83286
>this does not prove the chemical/electrical activity is what's creating consciousness. For that to be true, you will have to argue that non-conscious matter/processes can create consciousness, which is logically impossible
Do you also think it's logically impossible to rub two things that aren't on fire together to create a fire?
>> No. 83293 Anonymous
19th August 2017
Saturday 3:11 am
83293 spacer
>>83286
>>83292

"you will have to argue that non-conscious matter/processes can create consciousness, which is logically impossible"

I find this statement bizarre. Human front-brain consciousness is obviously an evolved trait; that is to say that something that wasn't conscious became conscious over time.
>> No. 83294 Anonymous
19th August 2017
Saturday 8:39 am
83294 spacer
>>83286
>something can come from nothing (big bang)
Wasn't the big-bang a move from an infinitely small point with a fuckload of energy to the expanding universe we may or may not have today? i.e. energy didn't "come from nowhere" during the big bang, it's always been there and just gone from concentrated to diffuse.
>> No. 83299 Anonymous
23rd August 2017
Wednesday 10:41 pm
83299 spacer
>>83298 Oh give it a rest, the pair of you. This is like one huge pedantry pissing contest.
>> No. 83300 Anonymous
23rd August 2017
Wednesday 10:42 pm
83300 spacer
>>83290

Language is very important - science can't get away from it either.

>>83291
Of course a newborn is aware; if it wasn't it might as well be a potato. A newborn just isn't able to articulate things as well as adults can.

>>83292
That's two physical things creating another physical thing with the help of oxygen and friction etc.

>>83293

So this magical force called evolution created consciousness? Is evolution a conscious force, or does it somehow just "know" how to create consciousness despite being the complete direct opposite of it. It's like saying up creates down.

>>83294
>infinitely small point

Scientists love to complain about "silly semantics", but this is an example of just that. Smallness cannot be infinite, that doesn't make any sense. If something is labelled as small then it is also finite, otherwise you can't measure it.
>> No. 83301 Anonymous
23rd August 2017
Wednesday 11:28 pm
83301 spacer
>>83300
Science having trouble extricating itself from language doesn't mean that semantic quirks prove anything beyond their own history. If I stub my toe I say fuck but it has nothing to do with sex.

>So this magical force called evolution
idiot
>> No. 83302 Anonymous
24th August 2017
Thursday 2:09 pm
83302 spacer
>>83300
>Smallness cannot be infinite, that doesn't make any sense
Wow, weird how the conditions of the initial singularity don't make intuitive sense to you. It's almost as if your consciousness evolved to aid in your survival and reproduction in a terrestrial environment and as such is ill equipped to grapple with physical cosmology.
>> No. 83316 Anonymous
24th August 2017
Thursday 9:43 pm
83316 spacer
>>83302
>>83301

I'm just going to keep pointing out how tedious your bickering is until one of you notices.
>> No. 83319 Anonymous
24th August 2017
Thursday 10:22 pm
83319 spacer
>>83316
Hide the thread or post something you think is worthwhile then you moaning cunt.
>> No. 83336 Anonymous
25th August 2017
Friday 7:31 am
83336 spacer
>>83319 I did, it turned into this nonsense.
>> No. 83376 Anonymous
25th August 2017
Friday 10:49 pm
83376 spacer
>>83301
>Science having trouble extricating itself from language doesn't mean that semantic quirks prove anything beyond their own history. If I stub my toe I say fuck but it has nothing to do with sex.

Context matters in language.

>idiot

Where does the force of evolution get its energy from?

>Wow, weird how the conditions of the initial singularity don't make intuitive sense to you. It's almost as if your consciousness evolved to aid in your survival and reproduction in a terrestrial environment and as such is ill equipped to grapple with physical cosmology.

Does the "initial singularity" make intuitive sense to anyone, or is it all complete bullshit? But as long as those scientists come out with these intellectual sounding buzz words it must be true. This kind of stuff is totally non-scientific and you're falling for it hook, line and sinker.
>> No. 83378 Anonymous
26th August 2017
Saturday 12:08 am
83378 spacer
>>83376
0/10.
>> No. 83380 Anonymous
26th August 2017
Saturday 1:07 pm
83380 spacer
New industries create different kinds of jobs.
>> No. 83381 Anonymous
26th August 2017
Saturday 1:18 pm
83381 spacer
>>83376

Evolution isn't a force, you dipshit. Plus, force is the conversion of energy from one form to another, so the question doesn't even make sense.

Just fuck off and educate yourself a bit, would you? If you're going to try to convince people you should at least have some idea what they believe.
>> No. 83422 Anonymous
1st September 2017
Friday 7:42 pm
83422 spacer
>>83381

>Evolution isn't a force

What is it then? Pretty sure evolution is the conversion of energy of one form to another. Educate me if not.
>> No. 83424 Anonymous
1st September 2017
Friday 9:15 pm
83424 spacer
>>83422

Evolution is the adaptation of an organism over time to its habitat, to facilitate a greater chance of successful reproduction.
>> No. 83425 Anonymous
1st September 2017
Friday 9:56 pm
83425 spacer
>>83424
Technically, evolution is the process of change in inherited traits over successive generations. What you're describing is natural selection.

Pedant's sage.
