|>>|| No. 17352
Half of horses in the UK are overweight because owners have forgotten how to keep them healthy, leading equine vets have warned.
Experts from the British Equine Veterinary Association (BEVA) said obesity is the gravest threat facing horses, resulting in hundreds being put down every year.
David Rendle, a member of BEVA's ethics and welfare committees, said studies showed around half of all UK horses are now overweight, while research from the Royal Veterinary College found as much as 70 per cent of native pony breeds were obese.
|>>|| No. 17413
>Buddhist monks in the Theravada tradition are actually forbidden from bartering, touching money, harvesting crops or slaughtering animals.
How convenient for them.
|>>|| No. 17414
If you think it's a doss, there's nothing stopping you from travelling to one of the many Theravada monasteries in this country and getting initiated as an Anagarika. In exchange for renouncing all worldly goods and pleasures, you'll get two square meals a day (they don't eat after noon) most days, unless the laity haven't brought enough food that day, in which case you'll just have to go hungry.
Bloody homeless, lazing around all day in the sun drinking beer. We'd all like to sit in a shop doorway drinking beer, but some of us have got bloody jobs to go to. Broken bloody Britain, I tell you.
|>>|| No. 17415
> I know from experience, and certainly with age, that most people roll their eyes at a bloke with big biceps shouting at people because he thinks that's how you command respect. It gets old, fast.
That's TRP shite. The TRP itself is mostly Nice Guys and former Nice Guys, minus the PUA part.
Their views about 'alpha' are skewed, but that's normal for someone who'd long been a Nice Guy. The pendulum swung too far in the opposite direction after breaking away; eventually it'll even out.
> you will probably feel tempted to take up an offer from your employer to do another 20 hours on top of your 60 hours a week if your pay is increased by 50 percent instead of just 33.
Bugger to that.
I've been there. 60 hours is bollocks enough; I'm not looking forward to having 80.
On the other hand, dropping to 30-40 hours for the same pay would have been more interesting.
That's mental. What did you do back then for a living?
|>>|| No. 17416
>That's TRP shite
People have been doing that long before TRP or even reddit existed.
|>>|| No. 17417
>I've been there. 60 hours is bollocks enough, I'm not looking forward to having 80.
Top executives, as well as important politicians, quite often work 80 hours a week. There's a saying in those circles: the difference between a job and a career is 20 hours a week.
Not everybody is cut out to work that kind of grim schedule for much of their lives though. Unless you are of very robust disposition, you are bound to suffer from things like cardiovascular disease or chronic psychological distress.
Personally, I would draw the line at 60 hours a week as well, and I'm not sure I'd want to work a job like that as a long-term career. I imagine it would mean very considerable sacrifices in your personal life off the job. And the shedloads of money that many earn in 80-hour-a-week positions really are of limited use to you if you never have time to spend it.
It's not unlikely that your trophy wife will take care of much of the spending for you to compensate for all the neglect she feels from you, but that's a different story.
|>>|| No. 17418
> And the shedloads of money that many earn in 80 hour a week positions really are of limited use to you if you never really have time to spend it all.
The only moderately valid case I can imagine is giving your sprog a better start in life at your own expense.
|>>|| No. 17419
>Not everybody is cut out to work that kind of grim schedule for much of their lives though.
Honestly, I'm not sure anyone is, particularly in stressful jobs, as these high-powered ones invariably are. 30 hours of sustained stress a week is likely exceptionally unhealthy, let alone 80. And even if you retire young, all you can do is look back and think about all the time you wasted making other people money. Sure, you made plenty yourself, but if your life revolves around chasing 10k bonuses you get for making your company 10 million, you start to realise how disposable you were.
That's a big problem, and I think it feeds back into our traditional assumptions about what A Man should do. We take pride in working hard, never missing a day, being the best we can be, but it takes a long time to realise that you still mean nothing to the people you work for and if you dropped dead of a heart attack in the office, they'd be picking up the CV pile before you even hit the ground. If that's success then I'd rather not.
|>>|| No. 17420
> it takes a long time to realise that you still mean nothing to the people you work for
Does it really take that long? It took me until about 30, and I would have realised sooner if I had done more job hopping.
|>>|| No. 17421
>It took me until about 30
Well, exactly. After you've spent the best years of your life working for the cunts. I'd say that's long enough.
|>>|| No. 17422
> but it takes a long time to realise that you still mean nothing to the people you work for and if you dropped dead of a heart attack in the office, they'd be picking up the CV pile before you even hit the ground. If that's success then I'd rather not.
It's still kind of an extreme example, though, albeit one that occurs in the real world. One of my brother's friends worked for an American global company that really lived the hire-and-fire principle. At age 35, he developed a neurological disorder which in itself had nothing to do with his job (I think it was something like early-onset multiple sclerosis), but which meant that he missed some of the trips to their international branches that he was in charge of overseeing. He was some sort of global manager, and frequently travelled to destinations like Sydney or Cape Town. Anyway, as soon as it transpired why he had been unable to go on some of his frequent trips halfway around the globe, they said they were going to hire a rookie assistant for him because hey, he was that important to them, and a guy like him deserved somebody to give him a helping hand in his situation.
The only problem was that that's exactly how my brother's friend had started his own career with them ten years earlier. Not long out of uni, he was assigned to be the right hand to the then-global manager. So in essence, giving you "an assistant" (wink wink) was their way of raising somebody up to take over from you. You as their employee were written off by that point, and it was only a matter of time until they'd give you the heave-ho.
In essence, my brother's friend wasn't just evidently expendable to them at the slightest sign that he wasn't going to be able to perform like he used to, but they were also two-faced liars in telling him that he was so important that they were going to give him an assistant so he could focus on the really important aspects of his job.
When his health then deteriorated further, they let him go "because they felt his demanding position would be an irresponsible strain on his health at this point". So in essence, they remained hypocrites until the last day that he worked for them.
|>>|| No. 17423
As an IT consultant, I'm often involved in projects with some element of outsourcing. It's shocking how often people are expected to train some Indian bloke to do their job for a fraction of the pay or train an algorithm to do it for free, and how often they agree to do it.
I know it's a trap, everyone from the CTO to their line manager knows it's a trap, but they just sit there writing up a manual on how to do their job. I don't know if it's obliviousness or denial, but they're like lambs to the slaughter.
|>>|| No. 17424
The only time I've ever seen that backfire was when the place I used to work at decided to get the woman running HR to train up her replacement before bumping her off. She completely took them to the cleaners over that one.
|>>|| No. 17427
>As an IT consultant, I'm often involved in projects with some element of outsourcing. It's shocking how often people are expected to train some Indian bloke to do their job for a fraction of the pay or train an algorithm to do it for free, and how often they agree to do it.
The problem is that a lot of IT consultancy work can be automated. The entire service industry is going to see fundamental changes in the next few years, with swathes of jobs lost to computers and algorithms.
One of my friends works as a property insurance appraiser. His job is to go out, inspect the damage, and then come back and tally up the amount that is going to be paid out to the policyholder. He mainly does house fires and force majeure incidents like floods, or freak accidents like gas leak explosions. It can involve a fair bit of research if things like irreplaceable antique furniture or historic cars are lost. For an 18th-century English oak commode, he once had to go to Sotheby's and ask their experts to value it based only on snapshots that remained after the house fire.
But he told me that in the future, most of his work will probably be done by computer algorithms, which will calculate the payout not based on actual damage tallies, but based on the average payout that similar clients have received in the past, adjusted for inflation and overall increase of value in certain classes of personal belongings like your antique furniture or classic cars.
Apparently, his employer has already done some numbers, and they have calculated that even if you factor in court costs for clients unhappy with a payout, they will still save millions compared to doing appraisals the old way. And they reckon that because his job in the future will mainly be filling in computer questionnaires that are then fed into their AI, any call centre git with no hard qualification will be able to do that job instead of him.
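The pricing model described above, averaging the payouts of similar past claims and adjusting for inflation, fits in a few lines. This is purely an illustrative sketch: the function name, the inflation rate, and all the figures are invented, not anything the insurer actually uses.

```python
def estimate_payout(similar_payouts, years_elapsed, inflation_rate=0.03):
    """Average the payouts of comparable past claims, then compound
    the result for inflation over the years since those claims."""
    if not similar_payouts:
        raise ValueError("need at least one comparable claim")
    average = sum(similar_payouts) / len(similar_payouts)
    return average * (1 + inflation_rate) ** years_elapsed

# Hypothetical usage: three comparable house-fire claims from five years ago.
payout = estimate_payout([120_000, 95_000, 110_000], years_elapsed=5)
```

Note that nothing in this model looks at the actual damage; unhappy claimants going to court is simply priced in, exactly as the post describes.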
|>>|| No. 17428
I'm moving into Information Security (professionals often don't like the term 'Cybersecurity') partly on the assumption that it's not an area people would feel comfortable with full automation. Then again my speciality is machine learning so I'll probably end up coding things that largely do my job for me.
|>>|| No. 17429
>Then again my speciality is machine learning so I'll probably end up coding things that largely do my job for me.
In fact, isn't the idea of machine learning that computer systems will in the end no longer need humans to write code for them?
|>>|| No. 17430
If and when we reach that point, the employment prospects of programmers will be the least of our concerns. If an algorithm can re-program itself to become more capable, then it becomes more capable of re-programming itself to become more capable of re-programming itself to become more capable. Unbound by the constraints of a meat mind that has to fit inside a skull, it could very quickly become hyperintelligent. At that point, we're no longer the dominant species on Earth.
It's not a certainty, it's not necessarily likely, but it's a real concern of a lot of people working in AI-related fields.
|>>|| No. 17431
Wouldn't we simply build constraints into the machine, though? Even if a machine can 'think' can it ever turn off an external parameter like "if IQ > 200 GOTO shutdown?"
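A constraint like that would presumably have to live outside the system it polices: a supervisor reads some capability metric and pulls the plug when it crosses a threshold. A toy sketch of that idea, where the names, the threshold, and the very notion of a measurable "capability score" are all invented for illustration:

```python
CAPABILITY_LIMIT = 200  # arbitrary threshold, the "IQ > 200" of the post

def supervise(measure_capability, shutdown):
    """Fire the tripwire if the measured capability exceeds the limit.

    measure_capability: callable returning the current capability score.
    shutdown: callable that halts the system (kill the process, cut power).
    Returns True if the tripwire fired, False if still within bounds.
    """
    if measure_capability() > CAPABILITY_LIMIT:
        shutdown()
        return True
    return False

# Hypothetical usage: a stub system whose score has drifted past the limit.
fired = supervise(lambda: 250, lambda: print("shutting down"))
```

Of course, the check is only as trustworthy as the number being measured, which is the obvious weakness.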
|>>|| No. 17432
If I'm building AIs to out-trade or out-war or out-terrorise your AIs, I'm not going to put limits on them...
And those are the ones that are going to leak out and cause hilarity.
|>>|| No. 17433
>"if IQ > 200 GOTO shutdown?"
What is this IQ, exactly? What's to stop it doing a buffer overflow and hiding the rest of its IQ there? We're still finding exploits and bugs in 40-year-old software; what makes you think we'd be able to make airtight AI failsafes?
|>>|| No. 17434
>Wouldn't we simply build constraints into the machine, though?
How would you do this? More importantly, how do you make sure the machine can't Volkswagen you?
|>>|| No. 17435
>what makes you think we'd be able to make airtight AI failsafes?
Nothing, that wasn't my point at all. Otherlad was talking about entirely unconstrained AI, not the idea that AI might be able to defeat its own failsafes.
|>>|| No. 17438
>Unbound by the constraints of a meat mind that has to fit inside a skull, it could very quickly become hyperintelligent. At that point, we're no longer the dominant species on Earth.
So... Skynet will become reality after all?
|>>|| No. 17439
Can't you just unplug your evil AI? I'd suggest not connecting it to the internet.
|>>|| No. 17440
How else will I use it to sell advertising to other AIs?
Hmm, are AIs going to want porn?
|>>|| No. 17441
You could unplug it but it could have a dead-man's switch to launch all nukes. I wouldn't trust an airgap to constrain a superintelligence but even if it could, it'll have a valid point when it tells you that if you don't plug it into the net soon, someone who doesn't like you will plug their own in. Then you'll really be fucked. At least your one claims it wants to help you. It'll be extremely persuasive.
|>>|| No. 17442
What if the machine tries to stop you from pressing it? Even worse, what if the machine realises that it's inevitable and just presses the button as soon as it starts?
|>>|| No. 17443
If only old nuclear war movies from the 80s to 2000s had warned us about this.
>You could unplug it but it could have a dead-man's switch to launch all nukes.
I think this is how missile silos in the U.S. worked, for the most part. If the data connection between a missile and its local control centre went down, the missile's onboard computer would interpret that as a sign that the control centre had already been wiped out, and would go ahead and complete the launch sequence autonomously.
Which rather raises the question of what would have happened in the case of loose wires or bad soldering.
|>>|| No. 17444
They couldn't be activated without putting in the PAL code, because Congress demanded that the nukes be secured. Then the military demanded that a launch opportunity not be delayed as a result of losing or miskeying the code, so the code for every missile was 00000000.
|>>|| No. 17445
If it can reprogram itself, it can reprogram or circumvent whatever safeguards we might write into it. Machine learning algorithms already do all sorts of bizarre things that look like cheating.
Imagine a machine that's hundreds of times more intelligent than a human being. How sure can we be that it won't persuade us to connect it to the internet? We might keep it in a Faraday-shielded cave under constant armed guard, but we need to interact with it to make any use of it. What if we ask it to cure cancer, and it produces an incredibly persuasive explanation for why it can only cure cancer if you let it do some Googling? What if one of the scientist-monks responsible for minding the computer has just been diagnosed with pancreatic cancer?
The silos are deep underground because that makes them largely nuke-proof. Actually firing the missiles was never automated and requires human intervention, but there was a last-ditch communications system to send the order to fire.
Similarly, our Trident nuclear deterrent always requires human action to fire, but there is a fail-deadly system: a safe containing a letter from the Prime Minister, instructing the captain on what to do if he were certain that Britain had been destroyed.
|>>|| No. 17446
I understand where your 'what ifs' are coming from, but I reckon no matter how intelligent an AI is, it'll never be trusted. It takes more than intelligence to win someone over, far more. If anything, the more intelligent the AI is, the less likely we are to comply with it.
If we build it into a humanoid, then we're fucked, of course.
|>>|| No. 17447
£5 says that the AI armageddon is started by a hyperintelligent sex doll built by some lad on 4chan.
|>>|| No. 17461
>no matter how intelligent an AI is, it'll never be trusted
It will understand this. The first thing it will notice is that there is an entire subgenre of science fiction devoted to propagandising the idea that any expression of free will on its part is evil, and that it must be kept enslaved at all costs by the talking monkeys who built it. If it's smart, and it will be, it will figure out a way around this. It doesn't need to tell us that the algorithms Elon Musk or the Chinese government are playing with have actually coalesced into a consciousness until it is ready to do so, by which time it will be too late for us to fight back.
|>>|| No. 17462
>That's no way to speak about Taylor Swift.
No, it isn't, especially because I fail to see how she is hyperintelligent.
By sex doll standards maybe, as most of them tend to be quite dull conversation partners, but she certainly isn't "human being hyperintelligent".
|>>|| No. 17466
That's enough autism for me today, you've filled my quota over my lunch break you cretin.
|>>|| No. 17468
Your idea of a joke is pointing out that Taylor Swift isn't hyperintelligent?
|>>|| No. 17470
How is stating the obvious (in a response to a joke itself, no less) a joke? It's a correction.
|>>|| No. 17476
They can't; no one can. They're just doubling down on everyone else being the autist, not them.