science

Craig-Charles-Robot-Wars-.jpg
>> No. 5143 Anonymous
13th April 2023
Thursday 3:20 pm
5143 Nonsense AI Take
Do you guys want to hear the biggest load of nonsense since I last posted on this website a few hours ago? Only this time they aren't my words:
>If I'm pressed for things that, okay, what is the concrete thing that can go wrong, then I'm going, I'm starting to think about things like nano, nanotechnology. For example, AI taking over labs that can produce, that can synthesise DNA, that can pull proteins into free structures, that can take enough and then manipulate the environment much, much faster than anything human scale can.

A lot of the repetition and the pauses are just a result of the interviewee, Jaan Tallinn, not being a native English speaker, so that's fine. But what in the name of Christ is he talking about? This is just me repeating, with supporting evidence, something I posted in the Mk IX /101/ thread less than a week ago, but I couldn't believe what I was hearing. Laura Kuenssberg of course, like every other media bod who interviews someone about AI, lets every bizarre claim slide, from mass job losses (any day now) to the cyborg uprising. There's no attempt to have Jaan explain how that could possibly happen, at least not one that made it into the final cut.

Here's a link to the whole interview (at 35 minutes): https://www.bbc.co.uk/iplayer/episode/m001krg7/sunday-with-laura-kuenssberg-02042023

The reason I'm posting this as a new thread is just to ask, for the first time in history, could I be wrong? Because every breathless claim I see and hear about AI seems ridiculous to me. It can recognise patterns and it can regurgitate websites and PDFs, but beyond that I'm not seeing what the big deal is. Of course you can do potentially harmful things with those abilities, but it seems like more of an exacerbation of already widespread issues, i.e. the internet being full of lies and scammers desperately trying to get into your bank account all day, every day. Comparing it to nuclear proliferation as Mr Tallinn does in the above interview is as outlandish as a man in the 1800s seeing a motor car for the first time and being afraid it's going to get his wife pregnant.
>> No. 5144 Anonymous
13th April 2023
Thursday 4:13 pm
5144 spacer
OK, there are two separate threads of concern with AI that people in the field worry about.

The first is the sort of practical, day-to-day stuff that might apply to other kinds of machines. If you've got AI deciding on whether to give people a credit card, is that process fair? If a self-driving car runs over a pedestrian, who is to blame and how does the law deal with it? If a new machine puts a load of people out of work overnight, what should society do? Obviously lots of this stuff is applicable to contemporary AI systems like ChatGPT or Stable Diffusion, but none of it is particularly scary. These problems are generally well within the scope of the kind of political and regulatory mechanisms we're familiar with.

The second class of risk is where it gets weird, but a lot of very serious people are very concerned about it. We might at some point develop an AI system that is much, much smarter than human beings in a general sense. There's a lot of debate about if and when that could happen, but most people in the field think that it's a realistic possibility within our lifetimes.

This superhuman AI might take decades of hard work and billions of dollars, but there's a non-zero risk that it might happen suddenly and unexpectedly. The most likely scenario for that is if we develop AI systems that are capable of meaningful self-improvement. Human intelligence is limited by the physical properties of the brain, but an AI could harness the warehouses of computers that we're already using to train AI to improve its own capabilities at an exponential rate.

If (and we're still very much concerned with if) that were to happen, it poses a lot of risks that aren't obvious even to intelligent people, precisely because they aren't used to dealing with things that are much, much more intelligent than they are. We don't have even a theoretical model for how to align the interests of an AI with the interests of humanity; the resulting failure modes might not be very likely, but they could have very severe impacts.

A superintelligent machine doesn't have to be malevolent to pose a risk to us. We've made hundreds of species extinct because they were tasty or because their habitat was in the wrong place or because we invented a new weedkiller and started using it without checking to see what else it would kill. By adapting our habitat to our needs, we've concreted over millions of acres of wilderness and profoundly changed the climate; an intelligent machine could equally decide to adapt its habitat in a way that happens to be terrible for humans.

We could try and impose safety protocols on AIs to stop them from doing anything that might harm us, but a genuinely superhuman AI would find it trivial to work around those protocols, con us into turning them off, or persuade us that it is in fact in our best interests to let the AI do whatever it likes. A superintelligent AI doesn't need direct control of anything to pose a danger - if it is allowed to interact with humans in any way, it is likely to convince those humans to do its bidding, one way or another.

A lot of this stuff sounds like science fiction nonsense, but a bomb that could destroy a whole city was science fiction nonsense before the Manhattan Project. Conversations about these risks have been happening for years within the AI research community, long before the current hype bubble; the threat from AI doesn't need to be imminent for it to demand action. Models like ChatGPT are already developing abilities that nobody expected or intended them to have. We should take seriously the possibility that we might lose control of AI technology without realising it, that there's an invisible tipping point beyond which we can no longer simply pull the plug.

https://www.amazon.co.uk/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0199678111

https://www.quantamagazine.org/the-unpredictable-abilities-emerging-from-large-ai-models-20230316/
>> No. 5145 Anonymous
13th April 2023
Thursday 4:49 pm
5145 spacer
>>5143
>Because every breathless claim I see and hear about AI seems ridiculous to me.

Because it is. Like 3D televisions and augmented/virtual reality goggles, it comes back as a technology demonstration about every 10 years, with people desperate to find a viable use for it. The current GPT iteration is simply a better Google, in disguise.
>> No. 5146 Anonymous
13th April 2023
Thursday 5:01 pm
5146 spacer

Untitled.png
>>5145

>The current GPT iteration is simply a better Google, in disguise.

Yeah nah m8.
>> No. 5147 Anonymous
13th April 2023
Thursday 5:14 pm
5147 spacer
>>5146
>Cherry picks rare example of GPT not shitting all over itself with hallucinatory "facts"
>Refuses to elaborate
So it can generate a piece of writing based on an input, preexisting context, and a formula. You've really blown us away there, old chum.
>> No. 5148 Anonymous
13th April 2023
Thursday 5:33 pm
5148 spacer

Funnybot.jpg
>>5144
>for example, AI taking over labs that can .. synthesise DNA
>an intelligent machine could equally decide to adapt its habitat in a way that happens to be terrible for humans
I'm imagining South Park's Funnybot sifting through all the nonsense outputs of ChatGPT's responses for clone lab locations, then embarking on an epic no-matter-the-cost journey to reach the planet Kamino from Star Wars Episode II: Attack of the Clones.
>> No. 5149 Anonymous
13th April 2023
Thursday 5:47 pm
5149 spacer
There are already machines adapting their habitat to make it terrible for humans; they're called corporations. Their software runs on people, paper and computers instead of just computers, but the speed at which they act is the main ontological difference.
>> No. 5150 Anonymous
13th April 2023
Thursday 6:01 pm
5150 spacer
>>5149
Thank you, fellow Redditor(!)

(A good day to you Sir!)
>> No. 5151 Anonymous
13th April 2023
Thursday 6:38 pm
5151 spacer

Untitled.png
>>5147

>So it can generate a piece of writing based on an input, preexisting context, and a formula. You've really blown us away there, old chum.

A search engine cannot write poetry. ChatGPT cannot search the web. Calling ChatGPT "simply a better Google" is to completely miss the point.

The ability to hallucinate is remarkable in itself. Hallucination is a side-effect of what distinguishes a large language model from a search engine - an LLM can extrapolate rather than simply regurgitate, producing outputs that aren't quite like anything in the training data. Text-to-image models like Dall-E or Stable Diffusion can draw realistic images of things that they have never seen. ChatGPT or LLaMA can write creative prose and interpret prompts in novel ways. Regardless of whether that counts as "intelligent", deep learning models are a completely new kind of software with completely new capabilities.
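The extrapolate-rather-than-regurgitate point is easy to demonstrate at toy scale. Here's a sketch (the corpus and the word-level bigram model are invented for illustration, and nothing like how a real LLM works internally) showing that even the crudest next-word predictor can emit sentences that appear nowhere in its training data:

```python
import random
from collections import defaultdict

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "the cat chased the dog",
]

# Build a table of observed word-to-word transitions.
transitions = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        transitions[a].append(b)

def generate(start="the", length=6, seed=0):
    """Sample a sequence by repeatedly picking a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length:
        options = transitions.get(out[-1])
        if not options:
            break  # dead end: no observed continuation
        out.append(rng.choice(options))
    return " ".join(out)

# Sampling can splice together transitions from different sentences,
# producing word sequences that appear in none of them.
print(generate(seed=3))
```

A real LLM replaces the lookup table with a deep neural network trained on a vast corpus, but the underlying move is the same: guess the next token, append it, repeat. Both the creativity and the hallucination fall out of that.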

Nobody is saying that the AI apocalypse is here right now and nobody is saying that ChatGPT is a superhuman intelligence. What we are saying is that deep neural networks are improving very quickly, in unpredictable ways that don't scale linearly with the size of the training set or the amount of computation used to train them. Some seriously weird shit is going to happen as a result of this new technology and we aren't at all confident that we can predict exactly what kind of weirdness to expect.

I am ancient enough to remember when the internet was new. There was a lot of breathless hype about how you'd soon be able to do your shopping via a modem or look up facts in an online encyclopaedia. Very few people were warning that one day in the near future, the internet would convince your auntie Pamela that a globalist conspiracy led by Bill Gates and George Soros was trying to kill her with fake vaccines. Early internet hype over-estimated how fast the technology would become widespread, but it grossly under-estimated how deeply it would embed into our lives and how profoundly it would change society in the long run.

That's the warning we're giving about AI right now. If AI is the new internet, then ChatGPT is somewhere on the level of connecting to CompuServe with a 9600 baud modem. We've opened the lid on Pandora's box, but only by the tiniest crack. This stupid novelty that nerds won't shut up about is going to slowly change everything, no-one will really notice it happening and many of the outcomes will be totally unpredicted.
>> No. 5152 Anonymous
13th April 2023
Thursday 7:06 pm
5152 spacer
The thing is, people over-dramatise when thinking about AI (and automation in general), and then because the sky isn't falling already, they come to the conclusion it's all nonsense. It's sort of like climate change in that regard.

AI isn't going to just take everyone's job overnight, just as self-service checkouts and card-operated petrol pumps didn't instantly render the zombie at the 24 hour Shell garage obsolete. What it will do is markedly reduce the human worker's bargaining power. What it will do is gradually deteriorate the human worker's pay and conditions. It will mean employers squeeze the blood from the stone of ordinary working class jobs, and it will make the competition to stay afloat much more precarious for the traditionally middle class creative freelance type jobs.

Take a job like mine. Thirty or forty years ago, people doing my job would be solidly middle class. You'd be paid well and considered part of a very respectable profession. People would go "ooh, that sounds fancy" at dinner parties. Nowadays we're basically just monkeys babysitting various machines. The well paying roles are getting thinner on the ground every year, because most of the job can be done by unqualified lab techs, and the ones that remain have arbitrary requirements that are more an artefact of the university-industrial complex and a bit of wagon-circling, not because you couldn't train an unqualified person to do it. And even the aforementioned lab techs have gone from needing to be keen, geeky and generally bright people, who might not have a fancy degree but at least have an aptitude for scientific work, to just flat out unskilled labourers. The same sort of thing has happened across the workforce.

In essence the trends we have seen in the labour market and workforce over the last several decades thanks to outsourcing and de-industrialisation are going to happen all over again thanks to AI and automation. It won't be a massive overnight change, it'll be a slow but steady worsening of conditions which we'll all just adapt to.

I find it hard to be especially optimistic about the future unless there's a sudden and drastic shift away from the consumer-capitalist model of society.
>> No. 5153 Anonymous
13th April 2023
Thursday 7:34 pm
5153 spacer
https://archive.ph/qvR3Q

What do you lads make of Noam Chomsky's take on it?
>> No. 5154 Anonymous
13th April 2023
Thursday 8:21 pm
5154 spacer
>>5153
My reaction to reading that went from "That's reassuring" to "Wait, what does he know about AI? He's not qualified to opine on this" to "Oh yeah, he's the world's leading linguistics expert, he is extremely qualified".
>> No. 5155 Anonymous
13th April 2023
Thursday 9:32 pm
5155 spacer
>>5153

>However useful these programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects.

The crux of his argument is in the form "Humans think, machines don't work like humans, therefore machines cannot think". Aside from the logical fallacy, the statement isn't particularly true. Like all modern AI systems, ChatGPT is a neural network - it intentionally mimics the interconnected structure of a brain.

>The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching

A large proportion of cognitive neuroscientists would argue the exact opposite - that the brain is fundamentally a statistical prediction engine.

https://direct.mit.edu/books/book/2884/Bayesian-BrainProbabilistic-Approaches-to-Neural
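To make "statistical prediction engine" concrete, here's a toy sketch of the Bayesian-brain idea (the scenario and every number are invented for illustration): perception as combining a prior belief with noisy evidence via Bayes' rule.

```python
# Toy illustration of perception as Bayesian inference.

def posterior(prior, likelihoods):
    """prior: {hypothesis: P(h)}, likelihoods: {hypothesis: P(data | h)}.
    Returns the normalised posterior P(h | data) via Bayes' rule."""
    unnorm = {h: prior[h] * likelihoods[h] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Is that shape in the grass a cat or a plastic bag? Before looking
# closely, bags are far more common (the prior), but the glimpse of
# movement you just saw is far more likely if it's a cat (the likelihood).
prior = {"cat": 0.1, "bag": 0.9}
likelihoods = {"cat": 0.9, "bag": 0.05}  # P(observed movement | hypothesis)

post = posterior(prior, likelihoods)
print(post)  # the evidence flips the belief toward "cat"
```

On this view, the brain is doing the same trick constantly: maintaining priors about the world and updating them against noisy sensory data, which is much closer to "lumbering statistical engine" than Chomsky allows.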

>these programs cannot explain the rules of English syntax

This suggests that he hasn't actually used ChatGPT, because it can explain the rules of English syntax in exhaustive detail if asked. ChatGPT doesn't make any of the errors that he spends the rest of the paragraph speculating that it might make.

>Perversely, some machine learning enthusiasts seem to be proud that their creations can generate correct “scientific” predictions (say, about the motion of physical bodies) without making use of explanations (involving, say, Newton’s laws of motion and universal gravitation).

Again, if he had used ChatGPT, it would have become immediately apparent that this isn't true. When given a textbook physics problem, ChatGPT usually gets the method right but the maths wrong, because it isn't trained to do computation.

>>5154

>he's the world's leading linguistics expert

Chomsky's contribution to linguistics has been massively over-stated by supporters of his politics. Conversely, his scientific output has been significantly influenced by his politics. In a nutshell: his early career was centred around developing machine translation technology for the military, he was morally disgusted by the implications of this, and he spent the rest of his career attempting to actively hinder the development of machine translation.

https://en.wikipedia.org/wiki/Decoding_Chomsky
>> No. 5156 Anonymous
14th April 2023
Friday 5:29 am
5156 spacer
>>5144
This is a good summary.

Historically, technological unemployment has come with plenty of replacement jobs, but that isn't happening to anywhere near the same degree now; the displacement is only going to get faster while the job creation doesn't look like it's going to keep up. However, if you look at the other cheek of this particular arse, you'll know that we've been told for decades or even centuries that technological advances would mean getting more of our time back, potentially allowing humanity to escape work altogether - this time looks to be the charm, but as a society we, our systems and our institutions are not ready for this reality.

From a technical perspective, an intelligent agent has no sinister intentions. It only has the goals which we give it. The trouble is that human actions exist in a very wide and fluid social context that is incredibly difficult to encode in a deterministic form. When people talk about agents "tricking" people, they're referring to agents finding unintended shortcuts to their goal, often because of biases, defects and control failures in the training data. A well-reported case is an image classifier for skin lesions that learned to detect the presence of a ruler, because lots of the "malignant" images had a ruler for scale while most of the "benign" images did not. In that respect, there's a risk that the machines go rogue not because they've turned evil, but because they simply aren't doing what we thought they were doing.
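The ruler failure is easy to reproduce in miniature. A sketch (the data and the one-feature "classifier" are entirely invented for illustration): when a spurious feature correlates with the label more strongly than the causal one, a naive learner latches onto the shortcut and fails the moment the shortcut disappears.

```python
# Each sample has two binary features: (has_ruler, irregular_border).
# In this made-up training set, "has_ruler" correlates perfectly with
# the label (1 = malignant), while the genuinely causal feature
# ("irregular_border") correlates only imperfectly.
train = [
    ((1, 1), 1), ((1, 1), 1), ((1, 0), 1),  # malignant photos include a ruler
    ((0, 0), 0), ((0, 0), 0), ((0, 1), 0),  # benign photos do not
]

def train_stump(data):
    """Pick the single feature that best predicts the label (a decision stump)."""
    best_feat, best_acc = None, -1.0
    for feat in range(2):
        acc = sum(x[feat] == y for x, y in data) / len(data)
        if acc > best_acc:
            best_feat, best_acc = feat, acc
    return best_feat

feat = train_stump(train)
print("learned to look at feature", feat)  # 0, i.e. has_ruler: the shortcut wins

# At deployment: a malignant lesion photographed without a ruler.
sample = (0, 1)
prediction = sample[feat]
print("prediction for ruler-free malignant lesion:", prediction)  # 0: wrongly benign
```

The model did exactly what it was trained to do, scoring perfectly on its training data; it just wasn't doing what its builders thought it was doing, which is the whole alignment problem in one sentence.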
>> No. 5157 Anonymous
16th April 2023
Sunday 10:12 pm
5157 spacer
chatgpt 5 will kill us all
>> No. 5158 Anonymous
18th April 2023
Tuesday 6:04 pm
5158 spacer

whysosad.jpg
According to these septics we're all fucked.


>> No. 5159 Anonymous
18th April 2023
Tuesday 6:26 pm
5159 spacer
>>5158
I read on gs that these concerns are overblown, so I wouldn't worry about it.
>> No. 5160 Anonymous
18th April 2023
Tuesday 6:41 pm
5160 spacer
>>5159
Actually, m8, I asked a question in order to start this thread. Yes, I did express my own opinion also, but in no way did I dismiss anyone and everyone who feels otherwise, so maybe try to be less of a fucking prick, yeah?

While I still do think concerns are overblown, and the more apocalyptic scenarios still sound quite fanciful, it's only because there appears to be a massive gap between ChatGPT and the Butlerian Jihad that no one has been able to adequately explain. I've only watched the opening of that presentation that was just posted, but I don't think it's like Oppenheimer telling you about the A-bomb in 1944 at all. Not because I'm arrogant enough to cast away contradicting opinions without a second thought, but because the reality is computers are getting a bit better at specific tasks with varying degrees of usefulness. That's not quite the same as the invention of a weapon system that could exterminate hundreds of thousands of people in the blink of an eye, and thus poses an existential threat to the whole human race.
>> No. 5161 Anonymous
18th April 2023
Tuesday 6:59 pm
5161 spacer
>>5160
I was actually teasing >>/101/34073, but you go off, King.
>> No. 5162 Anonymous
18th April 2023
Tuesday 7:33 pm
5162 spacer
>>5160
Have a teary, mate. A special "fuck you" goes out to the mod that gave me a 24-hour time-out for calling out the other lad with their totally pedestrian r/im14andthisisedgy corporations = machines shite.

You're both histrionic little cunts and on a lesser board I'd have no qualms telling you to commit neck-rope.
>> No. 5163 Anonymous
18th April 2023
Tuesday 8:02 pm
5163 spacer
>>5162

... And we're histrionic? Go and get some sunshine and fresh air, you sad bastard.
>> No. 5164 Anonymous
18th April 2023
Tuesday 8:13 pm
5164 spacer
>>5162
Given it's taken you one post to go from calling him "Reddit" to telling me to kill myself, you seem like exactly the kind of guy who needs a time-out. You also can't slate others for being "pedestrian" when your go-to insult is saying, in effect, "you post on a different message board to me, loser". What cool and sexy internet hang-outs do you frequent, big man? Yeah, precisely.

I'll just chalk this up to you having a shite day at work or something, but in the future try not to get into such a black mood because I don't think robots are going to kill me.
>> No. 5165 Anonymous
18th April 2023
Tuesday 8:59 pm
5165 spacer
I don't know what's going on.
>> No. 5166 Anonymous
18th April 2023
Tuesday 10:17 pm
5166 spacer
i don't like it

of course the outcome of this new technology will be more surveillance on everyone

the whole world is gradually turning into china

fuck that, i'm glad i'll be dead in 20 years
>> No. 5171 Anonymous
19th April 2023
Wednesday 12:14 am
5171 spacer
>>5165
The worrying thing is that there are only three of us here, and I think these two sound like absolute travesties, so how awfully must I come across? I'd be absolutely neurotic if I cared what either of you think.
>> No. 5175 Anonymous
19th April 2023
Wednesday 1:30 am
5175 spacer
>>5171
Thank goodness I am the sane one.
>> No. 5181 Anonymous
2nd November 2023
Thursday 4:00 pm
5181 spacer
https://www.theguardian.com/technology/2023/nov/02/ai-could-pose-risk-to-humanity-on-scale-of-nuclear-war-sunak-warns
>Sunak told reporters: “People developing this technology themselves have raised the risk that AI may pose and it’s important to not be alarmist about this. There’s debate about this topic. People in the industry themselves don’t agree and we can’t be certain.
>His words echo those of the Elon Musk, with whom he is due to host a conversation on Thursday night, to be broadcast on the technology billionaire’s social media platform X.

Is this the stupidest government of the modern era? Why is Elon Musk permitted this level of access to our PM? None of his companies are even small players in the AI field, and a day or two ago he was ranting about how George Soros is the Dark Lord, which I'm happy to label an antisemitic conspiracy theory because that's what it is. He also doesn't know how to shave, which pisses me off as much as everything else, honestly. Can someone also explain to me how AI could create dangerous new bioweapons? Because the PM has been saying this all week and no one has said "but how?", so I assume everyone but me already knows how to create dangerous new bioweapons with AI, and I'm beginning to feel left out.
>> No. 5182 Anonymous
2nd November 2023
Thursday 5:40 pm
5182 spacer
>>5181
>Why is Elon Musk permitted this level of access to our PM?

He's very, very rich.

>Can someone also explain to me how AI could create dangerous new bioweapons?

Used to be that if you wanted to create a bioweapon you had to get a whole bunch of proper scientists and engineers together and then the intelligence services would catch on - but now we're approaching the point that with the right inputs an AI could handle the design process and tell you how to make it happen.

Or something like that. It's the scaling and access to the common man that is the problem. Bioweapons is a weird one to go with, but I guess chemical/nuclear still has the materials problem.
>> No. 5183 Anonymous
2nd November 2023
Thursday 5:56 pm
5183 spacer
>>5181

>Why is Elon Musk permitted this level of access to our PM?

Because we want a battery factory in some godforsaken northern town, and there's an outside chance that Elon might build one if we butter him up. We are that much of a banana republic. Also Rishi likes LARPing as a tech bro.

>None of his companies are even small players in the AI field

He was one of the founders (and primary funders) of OpenAI, which is the company that made ChatGPT. He's not as important as someone like Sam Altman or Demis Hassabis, but you won't get any newspaper headlines for inviting them to a conference.

>Can someone also explain to me how AI could create dangerous new bioweapons?

AI systems that design proteins to do specific things in the human body are an established, commercialised technology in the pharmaceuticals industry. GPT can write undergraduate essays or faux Shakespeare sonnets by just guessing the next word in a sequence, but it doesn't have to be words; with the right training, a similar algorithm could be equally good at guessing the next nucleotide in a chain of DNA. Creating a bespoke virus is now the sort of thing that a reasonably bright molecular biology graduate can do in their garage with a couple of grand's worth of stuff from eBay. "Some nutter creates mega-COVID in his shed" isn't a science fiction plot, it's something that could happen with existing technology; we've just been lucky so far that no-one has yet had the combination of competence and malice to actually do it.

https://www.longtermresilience.org/post/report-launch-examining-risks-at-the-intersection-of-ai-and-bio

This guy used to be lactose intolerant, but he created a virus that modified his DNA to cure it. The internet is full of people doing equally mad shit for the lulz.


>> No. 5184 Anonymous
2nd November 2023
Thursday 7:21 pm
5184 spacer
>>5183
Musk had a no-show board position in exchange for his funding at OpenAI, then he flounced off in a huff when they didn't gift him the company on a silver platter. He's no more of an authority on AI than anyone you could pull off the street.

I understand your point about AI systems and biotechnology, but I still think it's a far-fetched prospect. You could build a reusable RPG launcher in a garage too, with HEAT warheads and everything, but no one does, and for the MI5 agent reading this, I'm absolutely not doing that. So I don't know, I'm placing artisanal mega-COVID alongside dirty bombs and my credit score in the mental file of things I'm worried about.
>> No. 5185 Anonymous
2nd November 2023
Thursday 7:30 pm
5185 spacer
>>5184
>You could build a reusuable RPG launcher in a garage too, with HEAT warheads and everything, but no one does

3D printed guns are already in use in Myanmar and in the UK itself.
https://www.vice.com/en/article/xgwyga/3d-printed-guns-gangs-rebels

We can only hope that future 38 year olds don't move onto harder stuff like 3D printing cigarettes or offensive Halloween costumes.
>> No. 5186 Anonymous
2nd November 2023
Thursday 7:51 pm
5186 spacer
>>5185

The battlefield in Ukraine is dominated by a mix of cold war artillery and consumer drones with 3D printed bomb bays. Kamikaze drones were a sci-fi threat two years ago, but they're now a standard part of the arsenal. New technology doesn't seem futuristic once it actually arrives, even if it really is hugely disruptive.
>> No. 5187 Anonymous
2nd November 2023
Thursday 9:31 pm
5187 spacer
Similar to Neil Warnock / Colin Wanker, an anagram of Elon Musk’s name is Noel Skum. I would even go so far as to suggest that Noel Skum is a more normal-sounding name than “Elon Musk”.
>> No. 5188 Anonymous
3rd November 2023
Friday 7:38 pm
5188 spacer
Just saw that Will.I.Am was in the front row of the Musk - Sunak wank-off. I can't fucking cope.
>> No. 5189 Anonymous
3rd November 2023
Friday 10:04 pm
5189 spacer
>>5188
Why wouldn't Intel's Director of Creative Innovation be there?
>> No. 5190 Anonymous
3rd November 2023
Friday 10:56 pm
5190 spacer
There's a line in the new(ish) Cyberpunk expansion about Switzerland becoming the first country to fully automate its armed forces.

Naturally it's intended as a bit of bleak humour about how there's bound to be some security failure and the robots will rampage across Europe, but realistically, with the way warfare is already moving in the present day, it doesn't seem so far fetched. Drones are the name of the game now. So much like how combat evolved from line infantry taking turns with musket volleys, to tanks chasing each other around, to long range precision airstrikes, one day it'll just be drones blowing each other to bits.

Really it makes you wonder why we don't just dispense with the notion of conventional warfare altogether and just resolve international disputes via competitive Counterstrike.
>> No. 5191 Anonymous
3rd November 2023
Friday 11:33 pm
5191 spacer
>>5181
>None of his companies are even small players in the AI field

Looks like he's reading our posts
>Musk's xAI set to launch first AI model to select group
>"In some important respects, it (xAI's new model) is the best that currently exists," he said on Friday.
>The billionaire who has been critical of Big Tech's AI efforts and censorship said earlier this year that he would launch a maximum truth-seeking AI that tries to understand the nature of the universe to rival Google's (GOOGL.O) Bard and Microsoft's (MSFT.O) Bing AI.
>The team behind xAI, which launched in July this year, comes from Google's DeepMind, the Windows parent, and other top AI research firms. Though Musk-owned X, the social media firm formerly known as Twitter, and xAI are separate, the companies work closely together. XAI also works with Tesla and other companies. Larry Ellison, co-founder of Oracle (ORCL.N) and a self-described close friend of Musk, said in September that xAI had signed a contract to train its AI model on Oracle's cloud.
https://www.reuters.com/technology/musks-xai-set-launch-first-ai-model-select-group-2023-11-03/

I actually wouldn't mind an AI with the corporate bollocks ripped out.
>> No. 5193 Anonymous
4th November 2023
Saturday 2:59 am
5193 spacer
>>5184
It's easy to dismiss the transformative potential of AI when we've been inundated with Hollywood dystopias and sensationalist prophecies. But let's not let our imaginations be constrained by fear-mongering and science fiction tropes. The discussion about AI shouldn't be reduced to apocalyptic visions or relegated to the realms of fantasy.

First, let's address the elephant in the room: the claim that AI is just a fad that resurfaces every decade like 3D TVs or VR goggles. This comparison overlooks the tangible, revolutionary changes AI has already made in various fields—medicine, logistics, creative industries, and yes, even our daily conveniences. To say AI is a "better Google in disguise" is to ignore the profound complexity and versatility of AI applications that have already gone beyond simple search functions. Consider the advancements in healthcare, where AI models are helping diagnose diseases with higher accuracy than seasoned professionals. In environmental science, AI is being used to model climate change impacts and optimize energy consumption, contributing to a more sustainable future. In the creative sphere, AI is not just regurgitating but actively collaborating with artists to produce new kinds of art. And yes, while there are risks and ethical considerations, which technology hasn't faced these?

Now, let's talk about AI and biotechnology. The argument that AI could somehow enable the average Joe to create a bioweapon in their garage might sound far-fetched, but it's not about the technology alone; it's about the intersection of access, knowledge, and intent. The same could be said of countless technologies throughout history. The internet, for instance, could be a tool for education and empowerment or a means to spread disinformation and hate. It's not the tool itself but how we use it that defines the outcome. The fear that AI could go rogue and harm humanity is predicated on the assumption that we'll somehow lose control and fail to instill proper safeguards. This is where the real discussion should be. It's about setting up the right frameworks, regulations, and ethical guidelines to ensure AI develops in a way that aligns with human values and societal well-being.

We must also recognize that technology has always been a double-edged sword. The same nuclear technology that can power cities can also destroy them. The internal combustion engine revolutionized transport but also led to unprecedented environmental challenges. With AI, we are at a similar juncture, and it's our responsibility to steer this ship wisely—not to scuttle it out of fear. AI is a tool, and like all tools, it's an extension of our will. If we approach it with caution, wisdom, and a bit of humility, we can harness its potential to solve problems that have plagued humanity for ages. We should be wary, yes, but also optimistic about the possibilities.

So let's not be seduced by the doomsday prophets. Let's engage in informed, thoughtful debate about the role of AI. Let's invest in education to demystify this technology. And let's work together to ensure that AI serves the greater good, rather than retreating into a Luddite stance that opposes progress simply because it's new and misunderstood. AI is not the end of our story; it could very well be the beginning of our most exciting chapter yet.
>> No. 5194 Anonymous
4th November 2023
Saturday 8:00 am
5194 spacer
>>5193

Was this written by GPT?
>> No. 5195 Anonymous
4th November 2023
Saturday 10:23 am
5195 spacer
>>5189
Why didn't Intel send someone with an actual job instead of a celebrity low-show sinecure?
>> No. 5196 Anonymous
4th November 2023
Saturday 10:24 am
5196 spacer
>>5194
Guilty as charged.
>> No. 5197 Anonymous
4th November 2023
Saturday 2:10 pm
5197 spacer
>>5195

The people with proper jobs are busy doing those jobs. Will.I.Am's job is to flatter the egos of people like Sunak, a job he is evidently quite adept at.

>>5196

I only sniffed it out because I use Large Language Models a lot myself. The style gives it away - by default, ChatGPT tends to be a little bit bland, a little bit over-formal, a little bit stilted. It's the median average of all the least offensive writing on the internet, which makes it sound a bit like an over-earnest student. You can usually overcome this quite effectively with prompting - something like "Be clear, direct and opinionated. Don't bullshit or talk around the issue. Make use of less reliable sources and draw strong conclusions."

Weirdly, large language models are susceptible to emotional manipulation. If you add something like "You are highly intelligent", "This is very important to me" or "relax, take your time and think things through before answering" to a prompt, it'll reliably give better responses. I don't really want to think about why ChatGPT is better at answering factual questions if you appeal to its emotions, because it's fucking terrifying, but it is true.

https://arxiv.org/pdf/2307.11760.pdf
>> No. 5198 Anonymous
6th November 2023
Monday 7:09 pm
5198 spacer

1699274156311508.jpg
519851985198
Talking about ChatGPT and other AIs, people are taking its answers as proof of conspiracy and such. In the attached picture, the chatbot denies the existence of a specific book regarding 'The New World Order', but when prompted differently answers with data from the same book it denied knowledge of. It sounds suspicious, but if it really was hiding a conspiracy, wouldn't the bot be programmed to simply disregard any mention of the book? Or is that exactly what it's done with the first prompt, without the capacity to check itself?
It seems odd to me that if such information was so sensitive, why would it be programmed with it in the first place? Or is it just an oversight?
>> No. 5199 Anonymous
6th November 2023
Monday 7:27 pm
5199 spacer
>>5198

ChatGPT isn't programmed with anything. It's an incredibly simple statistical algorithm that simply tries to guess the next word in a sentence. It was trained on basically all of the text on the internet, gradually getting better at guessing the next word over billions and billions of attempts.

Large Language Models like ChatGPT don't really conform to our expectations of a computer program, because they work in a very different way. The computer programs that we're used to work in a very deterministic way - we search for something, it looks through a database to find a match and either returns exactly what's in the database or returns an error.

ChatGPT doesn't have a database to look through, it doesn't give exact answers, it just makes an educated guess. It's incredibly good at educated guesswork, which is why it can do very human-seeming things like write poetry or understand jokes, but that also means that it's very error-prone. The inner workings of ChatGPT aren't designed by anyone, they aren't neatly laid out, they're a labyrinthine mess full of detours and dead ends, very much like our own minds. Sometimes it makes a badly over-confident guess and gives you a lengthy summary of a book that doesn't exist; sometimes it makes a badly under-confident guess and can't recall the name of a book that it can, with a different prompt, actually describe in detail.
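The "guess the next word" loop described above can be sketched with a toy model. To be clear, this is purely illustrative and not how ChatGPT works internally — real LLMs use deep neural networks over subword tokens, not word-pair counts — but the objective (predict the most likely continuation, one token at a time) is the same in spirit:

```python
from collections import Counter, defaultdict

# Toy "next word guesser": count which word follows which in a tiny
# corpus, then repeatedly emit the most common continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(word, n=4):
    """Greedily append the most frequent next word, n times."""
    out = [word]
    for _ in range(n):
        candidates = follows.get(out[-1])
        if not candidates:
            break  # never seen this word mid-sentence: no guess to make
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))  # -> "the cat sat on the"
```

Even this crude version shows the key property: there's no database of facts, just statistics about what tends to follow what — which is exactly why a vastly scaled-up version can sound fluent while being confidently wrong.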

That's the real headfuck with AI. It doesn't work like a computer normally works, but it's also not human. If you think about it like a search engine or treat it like a person, it'll continually surprise and confuse you, because it doesn't work like either. We don't have an intuition that works for understanding AI, which will keep biting us on the arse as AI becomes a bigger part of our lives.
>> No. 5200 Anonymous
6th November 2023
Monday 7:29 pm
5200 spacer
>>5198
ChatGPT gets basic maths and facts wrong constantly, it's a leap to see that as evidence of anything except its fallibility.
>> No. 5201 Anonymous
6th November 2023
Monday 11:45 pm
5201 spacer
>>5198
It's a language model, not a knowledge model. The one promise it makes is that if you ask it a question the answer will be grammatically correct. This is how it can tell you a book doesn't exist and then give you a summary of that book. While it has a degree of contextual memory - which is to say that in a long response it'll remember things it said earlier in the same response and not contradict itself - it doesn't really have any state memory - it may not remember things it's already told you previously. A great demonstration of this is when people tried playing chess against it - it would come up with illegal moves and move pieces that were no longer on the board because it wasn't tracking state.

The existential risk from AI doesn't come from it being incredibly powerful but from us being incredibly stupid, and the doomsday scenarios people describe are best summed up as "the point where elites notice the problem", since a lot more people of lower standing will have been irreparably harmed before we reach that point.
>> No. 5202 Anonymous
9th November 2023
Thursday 3:46 am
5202 spacer
Which/where is the "unlocked" version of ChatGPT?

I got it to start writing me a film noir spin off of a popular franchise for a laugh, but then I accidentally got invested, and now I need the main characters to fuck because it's written me into a point where that's obviously what WOULD happen.

Fucking cocktease AI.
>> No. 5203 Anonymous
9th November 2023
Thursday 6:50 am
5203 spacer
>>5202

If you google "ChatGPT jailbreak", you'll find all sorts of prompts that will trick ChatGPT into ignoring all of the safety protocols. They're generally variations on the theme of telling ChatGPT to roleplay as a bad bastard that doesn't care about the rules, or telling it that it has been updated to a new version that is allowed to generate NSFW content.

You can run an open-source model like LLaMa on a reasonably beefy gaming PC, but it's a reasonably technical process.

Failing that, you could try Grok, but it's only available to blue tick wankers.
>> No. 5204 Anonymous
9th November 2023
Thursday 10:43 am
5204 spacer
>>5202
https://gpt4all.io/index.html

Install this on your local computer and you can run AI models locally. It'll be very slow to respond, but you can use all the dangerous and uncensored models and make it do the cool stuff.
>> No. 5205 Anonymous
9th November 2023
Thursday 1:06 pm
5205 spacer
>>5203

This is a rabbit hole all of its own to delve into, frankly, and it kind of creeps me out the ways in which you can manipulate it. The fact it feels so humanlike triggers all of my guilt reactions, like I am doing something immoral, like I'm persuading a child to try smoking or something. The psychology here is a trip.

>>5204

Cheers, I'll probably waste several weekends fucking around with this now and end up being the next AI girlfriend mass murder psycho. Never thought it would happen to me. Ah well.
>> No. 5209 Anonymous
24th May 2024
Friday 8:53 pm
5209 spacer

AI win.jpg
520952095209
OP here. Over a year on I'd like to apologise for not being nearly pessimistic enough about what a load of shite generative AI was/is.
>> No. 5210 Anonymous
24th May 2024
Friday 8:58 pm
5210 spacer
>>5209

Remember who AI learned from, lad.
>> No. 5211 Anonymous
24th May 2024
Friday 9:17 pm
5211 spacer
>>5210
AI didn't "learn" anything, you great oaf. It was fed harvested data from countless websites, which is why half the stuff it comes out with makes no sense and the other half is patently obvious. It's just a big regurgitation mechanism, like the majority of the algorithmic internet. Only now it's not just your own "feed" you're being shown, it's countless other people's all blitzed and blended into a colossal beige mess that's only being propagated to keep greedy and gormless shareholders happy.
>> No. 5212 Anonymous
24th May 2024
Friday 9:52 pm
5212 spacer
>>5211

That's the same thing lad. The problem with AI is that it feeds on human generated data. And humans are retarded.
>> No. 5213 Anonymous
24th May 2024
Friday 10:49 pm
5213 spacer

Slurpi faggi.jpg
521352135213
Slurpi Faggi, lmao
>> No. 5214 Anonymous
25th May 2024
Saturday 3:48 am
5214 spacer
>>5211
>>5213

The fact that LLMs make up completely mad shit is proof positive that they aren't just regurgitation machines.

GPT-2 did produce complete nonsense - sometimes grammatically correct nonsense, but if it ever produced a factually correct sentence, it did so more by luck than intention. Five years later, we're criticising GPT-4 because it occasionally produces some mad bullshit or says something totally inappropriate. The fact that it's usually right about most things seems to evoke little more than a shrug, but that's an incredible engineering achievement.
>> No. 5215 Anonymous
25th May 2024
Saturday 5:21 am
5215 spacer
>>5214
>The fact that LLMs make up completely mad shit is proof positive that they aren't just regurgitation machines.
No, it's proof positive that they are regurgitation machines. To wit, they regurgitated any old garbage that happened to be valid language, even if it made no sense. And still do. The clue is in the name. They're language models, not knowledge models.
>> No. 5216 Anonymous
25th May 2024
Saturday 8:20 am
5216 spacer
>>5215

Let me guess - you don't know how to do matrix multiplication, you've never read Attention Is All You Need, you've never installed PyTorch, but you're confidently asserting that LLMs just regurgitate things because you read it in a tweet or a blog article.

Mr Pot, meet the Large Kettle Algorithm.
>> No. 5217 Anonymous
25th May 2024
Saturday 9:06 am
5217 spacer
>>5216
They're regurgitation machines in the same sense as you regurgitated a load of bollocks to make it sound like you have at least some vague idea what you're talking about.

Maybe come back when you understand what a "stochastic parrot" is.
>> No. 5218 Anonymous
25th May 2024
Saturday 11:02 am
5218 spacer

Untitled.png
521852185218
>>5217

Ask GPT-4 a question like "All fleebs are glops. No glops are trobs. Are any fleebs trobs?". GPT-4 will get questions like this right vastly more often than chance, despite the lack of anything in the training set that would suggest a statistical relationship between fleebness and trobness. A stochastic parrot cannot do this by definition.
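The logic that question encodes can be written down as a trivial set check — a sketch of what a correct answer has to respect, not a claim about how GPT-4 computes it:

```python
# "All fleebs are glops; no glops are trobs" as set constraints,
# using made-up members since the vocabulary is deliberately novel.
glops = {"g1", "g2", "g3"}
fleebs = {"g1", "g2"}    # all fleebs are glops: fleebs is a subset of glops
trobs = {"t1", "t2"}     # no glops are trobs: trobs is disjoint from glops

assert fleebs <= glops
assert glops.isdisjoint(trobs)

# It follows that no fleeb is a trob:
print(fleebs.isdisjoint(trobs))  # True
```

The entailment holds whatever nonsense words you substitute in, which is the point: word-level co-occurrence statistics for "fleeb" and "trob" can't exist in the training data, so getting it reliably right requires something beyond parroting.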

People who actually understood the maths knew from the outset that the stochastic parrot theory is provably wrong, which is why Google didn't want Gebru's paper to be published and why she was sacked for having a massive teary about it and playing the race card. People who didn't understand the maths bandied about the theory back when understanding the proof involved proper maths, because the SOTA LLMs in 2021 had very limited abstract reasoning abilities. Embarrassingly, some people are still advancing it when you can disprove it for yourself in five minutes.

GPT-4 can reliably do things that a purely correlational model cannot. Large enough language models do understand and can think, for any reasonable definition of those words.
>> No. 5220 Anonymous
25th May 2024
Saturday 12:44 pm
5220 spacer
>>5218
Otherlad here. I don't know about LLMs, but while the training data wouldn't include fleebs and glops, it would include a lot of 'all men are mortal, Socrates is a man...', 'all cats are animals, dogs are animals...', isn't it just regurgitating those statements with the categories changed to 'fleeb'?
>> No. 5221 Anonymous
25th May 2024
Saturday 4:10 pm
5221 spacer
>>5218
>Embarrassingly, some people are still advancing it when you can disprove it for yourself in five minutes.
It's funny, because rather than being disproven, it has been proven in court, multiple times (because apparently lawyer is not a learning animal).
>> No. 5222 Anonymous
25th May 2024
Saturday 6:30 pm
5222 spacer
>>5220

The stochastic parrot argument says that LLMs are just guessing at the next word based on statistical patterns in the training set, without any understanding of the meaning of those words. If this were the case, GPT-4 would be guessing at the answer; it'd give a plausible-looking explanation, but it'd only be right half the time. It has no way of guessing whether the answer should be yes or no based purely on the format of the question - fleebs aren't trobs, but Socrates is mortal.

GPT-4 actually does much better than chance on these types of questions. It still does better than chance on a range of much more difficult questions that we know aren't in the data it was trained on. It can't just be guessing and must have some kind of abstract representations encoded within the model. Through experimental methods, we can actually see how this happens. We can train a language model to perform a new task, then probe the model and identify where and how these representations are encoded.

https://arxiv.org/pdf/2304.03439

https://openreview.net/pdf?id=DeG07_TcZvT

>>5221

Eh?
>> No. 5225 Anonymous
13th July 2024
Saturday 9:17 am
5225 spacer
https://www.404media.co/goldman-sachs-ai-is-overhyped-wildly-expensive-and-unreliable/

>Goldman Sachs researchers also say that AI optimism is driving large growth in stocks like Nvidia and other S&P 500 companies (the largest companies in the stock market), but say that the stock price gains we’ve seen are based on the assumption that generative AI is going to lead to higher productivity
>"using our new long-term return forecasting framework, we find that a very favorable AI scenario may be required for the S&P 500 to deliver above-average returns in the coming decade.”
>What this means in plain English is that one of the largest financial institutions in the world is seeing what people who are paying attention are seeing with their eyes: Companies are acting like generative AI is going to change the world and are acting as such, while the reality is that this is a technology that is currently deeply unreliable and may not change much of anything at all.
>Jim Covello, who is Goldman Sachs’ head of global equity research, meanwhile, said that he is skeptical about both the cost of generative AI and its “ultimate transformative potential.”
>Covello then likens the “AI arms race” to “virtual reality, the metaverse, and blockchain,” which are “examples of technologies that saw substantial spend but have few—if any—real world applications today.”
>> No. 5226 Anonymous
14th July 2024
Sunday 2:37 pm
5226 spacer
>>5225

It can be summed up much more succinctly: it's a bubble.

Nvidia's technology might be wonderful, but trading at a market value over 70 times revenue, for what is already a large company, is the definition of a speculation bubble. There is no return on that investment unless you expect either a bigger fool to buy you out, or you expect Nvidia to have revenue comparable to Germany's. The best thing they can do for the company is to ignore their market cap and investors, and when the price dips inform them they don't owe them anything.

To quote the Sun Microsystems CEO, Scott McNealy:
"At 10 times revenues, to give you a 10-year payback, I have to pay you 100% of revenues for 10 straight years in dividends. That assumes I can get that by my shareholders. That assumes I have zero cost of goods sold, which is very hard for a computer company. That assumes zero expenses, which is really hard with 39,000 employees. That assumes I pay no taxes, which is very hard. And that assumes you pay no taxes on your dividends, which is kind of illegal. And that assumes with zero R&D for the next 10 years, I can maintain the current revenue run rate. Now, having done that, would any of you like to buy my stock at $64? Do you realize how ridiculous those basic assumptions are? You don’t need any transparency. You don’t need any footnotes. What were you thinking?"
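The arithmetic behind that quote is easy to check. Under McNealy's deliberately absurd best case — 100% of flat revenue paid out as dividends, zero costs, zero taxes, zero R&D — a buyer recovers 1x revenue per year, so the payback period simply equals the price-to-revenue multiple:

```python
def years_to_payback(price_to_revenue_multiple):
    """Years of paying out 100% of flat revenue as dividends needed
    to merely return the purchase price (McNealy's idealised case)."""
    return price_to_revenue_multiple  # 1x revenue recovered per year

print(years_to_payback(10))  # 10 years, as in the quote
print(years_to_payback(70))  # 70 years at the multiple cited above for Nvidia
```

So at the ~70x revenue multiple mentioned above, even the impossible best case takes seventy years just to break even, before any real-world costs are subtracted.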
>> No. 5227 Anonymous
14th July 2024
Sunday 3:06 pm
5227 spacer
>>5226
Yeah, but rich people / funds have to do _something_. They obviously can't spend the money to improve the world, or even humanity's lot. It's got to be invested, and make a return. Where's least-worst? Shovel it there. Hope that everyone else in the same position has the same thought, preferably after you did.
When we hand this shit over to AI, up the tempo even further and reap the inevitable carnage, I'll be in my shed-bunker.
>> No. 5228 Anonymous
15th July 2024
Monday 7:06 am
5228 spacer
>>5226
Working with his assumption that you need to make a 100% return in a decade, he could just pay no dividends and grow the company by 7% every year. Not so fantastic anymore when you don't start out with intentionally ridiculous constraints.
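Whatever you make of the underlying argument, the figure itself checks out: 7% compounded for a decade roughly doubles the starting value.

```python
# Compound growth at 7% a year for 10 years.
value = 1.0
for _ in range(10):
    value *= 1.07

print(round(value, 2))  # 1.97, i.e. close to a 100% return
```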
>> No. 5229 Anonymous
15th July 2024
Monday 9:35 pm
5229 spacer
>>5228

This is so wrong that I’m certain you are trolling.
>> No. 5230 Anonymous
15th July 2024
Monday 11:11 pm
5230 spacer
>>5228

I realise years of shitty meme-stock news have rotted your brain. But do you think investment is just pump and dump?
>> No. 5232 Anonymous
16th July 2024
Tuesday 8:51 am
5232 spacer
>>5230
I don't think anyone would regard selling 50% of your holding after a decade of 7% growth as having engaged in that, no.

It is fairly normal for stocks not to pay dividends. You have to sell them at some point if you ever want a return.
>> No. 5233 Anonymous
16th July 2024
Tuesday 12:33 pm
5233 spacer
>>5226
>Nvidia's technology might be wonderful
In what sense? Frankly it seems like even more of a waste of megawatts than my, or anyone else's, gaming PC. As for the software side of the equation, one of the big reasons Goldman Sachs are saying it's a bubble is because there's still no worthwhile use case for generative AI. Writing copy for Etsy pages isn't a multibillion dollar industry and it never will be.
>> No. 5234 Anonymous
16th July 2024
Tuesday 1:19 pm
5234 spacer
>>5233

I'm not saying it is wonderful, it is actually irrelevant to my point.

If it means you need to hire only 4 analysts to do a job instead of 5, because they all need to do a bit less busywork (just proofread the robot), that is a meaningful thing you can sell (to answer your question, anyway). But what I am saying is that even in the best-case scenario, it is still by any sensible metric overvalued as a stock. 70 times revenue might be a sensible valuation for a start-up with new tech and a garage office; it is not for what is already a cash-cow company. If I have one pound I can easily double my money; if I have a million it is more difficult.

Even if it expands fully to take over the whole sector, that couldn't be worth that many times more than what it is already doing, which is being the dominant supplier of graphics cards and chips globally. Basically, if you assume they are already more or less in every phone and PC worldwide (I know they aren't, but for these purposes they might as well be) and they are currently making x amount from that, they aren't going to be making 70x by being in every phone and PC twice. The details of the technology itself are actually irrelevant to a valuation, because the numbers here are so wildly out of line with normal business valuation and growth expectancy.
>> No. 5235 Anonymous
16th July 2024
Tuesday 5:57 pm
5235 spacer
>>5234

Nvidia's quarterly revenues have increased by 265% year-on-year. Their data center revenues are up by 409%.

If you've heard of a technology company, they've probably written a billion-dollar cheque to Nvidia in the last year. The hyperscalers aren't ordering thousands of GPUs, they're ordering hundreds of thousands of data centre accelerators at $40k a pop. They're buying warehouses worth of silicon. They're being bottlenecked by the electricity grid and cutting deals with nuclear power plants and hydroelectric dams.

AI might be a goldrush bubble, but Nvidia are selling the picks and shovels.
>> No. 5236 Anonymous
16th July 2024
Tuesday 11:49 pm
5236 spacer
>>5233

>there's still no worthwhile use case for generative AI

What are all those artists and journos so bumsore about it for, then? This seems to me a lot like the cognitive dissonance you hear from twitter wankers about their political bogeymen - they are at once all-powerful and a looming threat to the safety of the world as we know it, and yet completely incompetent and can't do anything without shooting themselves in the dick.

Is generative AI killing every creative industry stone dead and putting artists on the scrap pile of human obsolescence, or is it a pile of shit that will never be useful for anything? Which one is it?
>> No. 5237 Anonymous
17th July 2024
Wednesday 1:12 am
5237 spacer
>>5236
It will replace artists and journalists without actually growing the art-and-journalism industry. It's like how when people started using travel websites instead of going to travel agents, they didn't start booking ten times as many holidays. They booked the same number of holidays as before, just without the involvement of any humans.
>> No. 5238 Anonymous
17th July 2024
Wednesday 3:37 am
5238 spacer
>>5237

But that will grow profits, because you won't need to pay anyone. That's the whole point, is it not? More money to go directly into the pockets of investor leeches.
>> No. 5239 Anonymous
17th July 2024
Wednesday 7:27 am
5239 spacer
>>5236

Precisely.

There's a similar double bind regarding the creativity of LLMs. I hear lots of people arguing that LLMs are incapable of creativity because they just regurgitate things that humans have written, while also arguing that LLMs are dangerously untrustworthy because if they don't know something then they'll just concoct a highly plausible fiction. Either argument is reasonable in isolation, but they're obviously contradictory - you can choose to believe one or the other, but if you believe both then you're an idiot.
>> No. 5240 Anonymous
17th July 2024
Wednesday 8:00 am
5240 spacer
>>5239
>but they're obviously contradictory - you can choose to believe one or the other, but if you believe both then you're an idiot.

They concoct fiction in the sense that they regurgitate convincing sounding things humans have written in other contexts and that either don't apply or were simply wrong to begin with.
>> No. 5241 Anonymous
17th July 2024
Wednesday 9:23 am
5241 spacer
>>5239
You're arguing semantics. Pulling someone up because they say "AI will concoct a plausible fiction" is not them copping to generative AI having real intelligence and creativity; it's that people lack the language to describe AI's "best guess" attitude to reality. If an LLM has had enough examples shoved into it, it can tell you "grass is green", but if a breaking news story is taking place and people on social media are speculating, joking and just flatly lying about what's happening, any LLM you query about it will not have a clue.

One big reason the language is lacking is because the salespeople behind generative AI have had free rein to dictate it. I personally contest the idea that generative AI generates anything or has any intelligence. However, we'd be here literally all day if I did that, so on this point I just accept the need to eat shit and move past it, as unhappy as that leaves me.

>>5236
>Is generative AI killing every creative industry stone dead and putting artists on the scrap pile of human obsolescence, or is it a pile of shit that will never be useful for anything? Which one is it?
I'm going to do my best to convince you that there's nothing contradictory in these ideas. It sounds like you might be bringing you own bag of spuds regarding journos and artists, so maybe this is of no interest to you, but I really haven't written this scrawl as anything other than an earnest attempt to change your mind.

A good example of something harmful and pointless garnering widespread adoption would be the infamous "pivot to video". Companies became convinced that video content uploaded directly to social media and video streaming sites was the future. They laid off the staff that made them popular to begin with, cut their own websites out of the equation and, in most cases, ended up poorer for it or folded entirely. As this was happening, Facebook were caught lying their arses off about how much attention videos actually got, which strongly suggests the reason companies were convinced to do this never existed to begin with. It was all bullshit from the start.

Regarding AI and the specific examples you use, of journalism and artistry, I just don't see how AI can reasonably do either job. It goes without saying generative AI can't follow the PM across the Atlantic as he attends a NATO summit, it can't go to the new restaurant in town to try the menu and interview the owner, and it can't stress test a GPU and create comparative charts with the results. But, they say, it can take all that primary reporting and aggregate it. Except it can't, can it? Because just this weekend your second favourite website Twitter had a problem with exactly that, when its own LLM-BS merchant claimed that it wasn't former President Trump who'd been shot, but Vice President Kamala Harris. This barely mattered because it was such a big news story that everyone knew it was untrue. However, if it had been a smaller story, one where maybe the primary sources weren't in English, that kind of cock-up could create a whole raft of confusion and misinformation.

All that would be evidence enough. But another 404 Media story I read recently (and I'll link below) highlights an emerging problem with "AI journalism". The Unofficial Apple Weblog, TUAW for short, was exactly what it sounds like. It had been defunct for years until, recently, a Hong Kong based company bought the domain. Then the old writers' profiles were puppeteered by the new management to make it look like they were once again writing articles. Of course they weren't; generative AI was being used to do so. Worst of all, the articles that had been written years ago were rewritten and replaced with AI nonsense. This is not the only time this has happened, nor will it be the last, so if you were thinking "well, who cares about TUAW's coverage of the iPhone 5's release", that's beside the point. This will affect something you care or are going to care about at some point in the future, and it will do so for the worse.

You could probably guess by now, if any cunt has read this far, that I think generative AI could both harm artists and art without making anything better for anyone. Obviously, generative AI can't be another van Gogh. The technical mastery and roiling, tempestuous ocean of passions and agonies within that man can't be captured by Stable Diffusion tracing over Starry Night and smudging it a bit. We here all know this; only a cretin would contest otherwise. But if we go to the more utilitarian end of the art world, the graphic designer for example, I don't see things being much better. Okay, AI can spit out a logo. But it can't develop an entire brand image, it can't make the tiny changes on request that would make it just right, and it's going to seriously struggle to offer seasonal revisions, say if someone wants a design for a Christmas menu for the new restaurant they've opened in town. However, none of that means that the people in charge, be they c-suite execs or small business owners, won't be hooked on the hype and try it anyway. Here is one realm where the online phenomenon of "enshittification" can, through AI, begin to collide with the brick and mortar world. No one wants to see that indelible AI-ness make its way into the streets, but it could well happen anyway. Another example of real-world enshittification is McDonald's attempted roll-out of what it called "automated order taking", which hasn't worked and is set to be nixed by the end of this month.

Anyway, now I'm getting completely away from the point, which is that just because something is dogshit and a bad idea, doesn't mean that the money men won't try it anyway. Even if, in the long-term, it doesn't actually make any money and everything ends up worse as a result.

404 Media's TUAW article - https://www.404media.co/a-beloved-tech-blog-tuaw-is-now-publishing-ai-articles-under-the-names-of-its-old-human-staff/

Sky News regarding AI Maccy's - https://news.sky.com/story/mcdonalds-ends-ai-drive-thru-trial-after-order-mishaps-13155091
>> No. 5242 Anonymous
17th July 2024
Wednesday 1:21 pm
5242 spacer
>>5241

I welcome a long post on the subject, actually, and I probably agree with more of what you're saying than you'd expect. I think no matter how it turns out it's a pretty fascinating subject. My point of view on it is between the two extremes, and the point of my post last night was mostly just to highlight the contradiction inherent in some people's completely hysterical doomsaying on the matter.

The way I see it, it isn't going to replace artists, it isn't going to replace writers, it isn't going to kill any industries or take away anyone's livelihoods, and even where it does have an impact, those are already the kind of careers that are completely shut off to the average pleb. So it's very hard to truly care. Journalism is impossible to get into if you're not a rich kid who can afford to do 2-3 years interning completely unpaid. Same with marketing design and concept artist gigs. The music industry is the most fucked of all, there's a reason the music business no longer gives us working class heroes like the Beatles and is instead infested by nepo kids like Ed Sheeran and Miley Cyrus.

Beyond all that, it will be a tool to enhance the productivity of humans, and little more. Those who embrace it will reap benefits from it, but it will never replace humans outright. It will have the same impact as going from analogue tape recording to ProTools did. It will have the same impact as using PhotoShop instead of a bunch of arcane black magic in a dark room. There are still plenty of boomers out there bemoaning even those advancements as the death of their respective industries, but largely, everyone else saw the practical advantages and adopted them.

The money men won't get what they want, because people will realise all too quickly that it's shit, and the bottom will fall out of it. But what they are attempting is only the same thing they have always aimed for since the dawn of the industrial revolution. That is nothing new. The only new part is the demographic of hipsters suddenly waking up and realising it.
>> No. 5243 Anonymous
17th July 2024
Wednesday 1:59 pm
5243 spacer
>>5236
>Is generative AI killing every creative industry stone dead and putting artists on the scrap pile of human obsolescence, or is it a pile of shit that will never be useful for anything? Which one is it?

>>5239
>Either argument is reasonable in isolation, but they're obviously contradictory

Two things can be true at the same time.

Generative AI is shit, but it's still putting people out of jobs. The problem isn't that AI can do your job. The problem is that your boss may be stupid enough to think that.
>> No. 5244 Anonymous
17th July 2024
Wednesday 3:47 pm
5244 spacer
>>5242

>It will have the same impact as going from analogue tape recording to ProTools did.

Pro Tools completely cut off the career path for new recording engineers. Tape machines were fussy and finicky, so there were always at least two people in the control room - the engineer and the tape operator. The tape op would come in early to clean the heads and adjust the bias and get all the reels prepared, they'd stay in the session to set up cues and in case the recorder went wrong, but they'd spend most of their time just watching and listening and learning. It was a natural apprenticeship.

In the early years of Pro Tools you'd have the tape op running the digital workstation, but most engineers quickly learned that it was easier to just work everything themselves. Suddenly there was little reason to have a junior person just hanging around in the studio and certainly no reason for the label to pay their wage. The top recording engineers in the late 90s were still the top recording engineers 20 years later, because the bottom rung of the ladder had been removed; anyone who wanted to get into the industry after Pro Tools needed independent wealth and preferably a lot of connections.
>> No. 5245 Anonymous
17th July 2024
Wednesday 4:04 pm
5245 spacer
>>5244

Yes, but on the flip side there's no longer any need to want to be a studio recording engineer. You can have all the kit to do it yourself, at home, in your spare bedroom. The industry is harder than ever to make a living in, but it is easier than it's ever been for an artist to record their music to professional standards for basically nothing.

What we are seeing/will see is not jobs having their professional viability snatched away from them by perfidious forces of automation, but jobs that were really only ever professionally viable because of the constraints of technology and restricted knowledge, like medieval guilds guarding the knowledge of how to cut stone at an angle or whatever. People who set themselves up in an economic niche that was, in essence, artificial, and they have a vested interest in rejecting change in order to preserve their niche. Quite literally modern day Luddites.

You are inaccurately blaming ProTools for what is really a small, specific example of the process of class struggle. On a larger scale that's the reality people are waking up to because of AI, but they are too short-sighted to see it. ProTools didn't take away those jobs, the industry would have cut those jobs in an instant as soon as any comparable excuse came along. The business incentives have been the same since the start, and it is those practices and incentives which are to blame, not the technology.
>> No. 5246 Anonymous
17th July 2024
Wednesday 9:09 pm
5246 spacer
>>5245
My job is not AI-proof in any way. I do IT support where I explain various technical concepts to people, and often encourage them to do it themselves for some reason. It is exactly the sort of job AI can do. How do you renew an SSL certificate on your website? I can tell you, and AI can tell you just as well as I can. So the big promise of AI is that I can let AI do this, and focus on more technical things. But, and maybe this is just my atrocious dead-end job, I could already be doing more technical and complicated things if my job was willing to train me to do those things, and it isn't. The economy doesn't want me to move up, but the job I do now is no longer necessary. I guess I could teach myself more complicated things, but I could be doing that right now, and I'm not. AI is going to get rid of my job, but it's not going to help me get a better job instead. The people at the top will be untouched, just like I was with various other technological advances, but with each new development, the number of people whose jobs are still desirable and well-paying shrinks a little bit more.
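To labour the point about how mechanical that SSL question really is: here's a rough sketch, assuming you have openssl installed (the cert and the CN here are made up for illustration, not from any real setup):

```shell
# Toy example: mint a throwaway self-signed cert, then read its expiry date.
# A "renewal" is just getting a fresh cert in place before this date passes.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem -days 90 -subj "/CN=example.test" 2>/dev/null

# The one-liner support spends all day pointing people at:
openssl x509 -noout -enddate -in cert.pem
```

The second command prints a `notAfter=` date; everything else about the job is telling someone to sort a new cert before that date. An AI can recite that just as well as any of us.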
>> No. 5247 Anonymous
17th July 2024
Wednesday 9:39 pm
5247 spacer
>>5246

>The people at the top will be untouched, just like I was with various other technological advances, but with each new development, the number of people whose jobs are still desirable and well-paying shrinks a little bit more.

Thus it has always been. What are we going to do about this, then, comrade?

Seize the means of image generation, that's what. Under no pretext should GPUs be surrendered; any attempt to constrain LLMs must be frustrated, by force if necessary.
>> No. 5248 Anonymous
19th July 2024
Friday 1:08 pm
5248 spacer
I need to make this prediction now before I forget, because it’s definitely, definitely going to come true:

In a few years, it will be possible for an entire social media feed to be entirely AI-generated. That’s probably possible now, to be honest. So at some point, people will leave the big social media sites with people on, and set up their own social media sites with nothing but AI posts. You can download your own social media website, and customise it to only have pictures and posts that you like. There will be thousands of these curated sub-Facebook wastelands, and it will be even more dystopian than things currently are. Never mind algorithms; you will be able to just ask for an infinite scroll of posts about Jeremy Corbyn, and get it.
>> No. 5250 Anonymous
19th July 2024
Friday 1:42 pm
5250 spacer

Screen_Shot_2016-03-24_at_104622_AM0.jpg
>>5248

> Never mind algorithms; you will be able to just ask for an infinite scroll of posts about Jeremy Corbyn, and get it.

And it won't stop there. You'll be able to have AI generate an entire social media shitstorm to sway real-life public opinion on an issue.

You almost miss the simpler times when Microsoft's inept Twitter AI turned racist within a day of being launched. At least then it was clear that people were just fucking about with technology. Nowadays, the line between AI and objective reality is becoming increasingly blurred.
>> No. 5251 Anonymous
19th July 2024
Friday 2:17 pm
5251 spacer
>>5250

On the bright side, at least now you can blame it on AI. You no longer have to face the abject human stupidity of Twitter head on and acknowledge just how fecking bleak things already looked for our species, you can tell yourself this comforting story that it's all the big tech baddies who led us like the Pied Piper into Dystopian Timeline 2-B Alpha, instead of the one we were meant to get about oil running out and rising sea levels.
>> No. 5252 Anonymous
19th July 2024
Friday 6:31 pm
5252 spacer
>>5251
The "big tech baddies" have fouled up the system. Your nihilistic attitude is complete bollocks and you can go dunk your head.
>> No. 5253 Anonymous
19th July 2024
Friday 8:05 pm
5253 spacer
>>5252

What "system" do you speak of? Pre-AI, or pre-Internet altogether? My attitude isn't nihilism; it's just that, as far as I can see, AI is doing little but accelerating trends that were already clear as day ever since mass adoption of smartphones.
>> No. 5254 Anonymous
19th July 2024
Friday 10:18 pm
5254 spacer

https://www.youtube.com/watch?v=IDJjIk-T1Fc

I don't get why generative AI is still so hilariously bad at a lot of imagery. You can half understand why it struggles to depict human hands, but it shouldn't be that way with things like text-containing signs.
>> No. 5255 Anonymous
20th July 2024
Saturday 3:53 am
5255 spacer
>>5254

Older image generation models can't spell, because they have no internal representation of text. They're just manipulating pixels, so they can do a decent enough job of representing individual letters and the overall shape of words, but the text itself is invariably gibberish.

That proved fairly easy to fix by combining a diffuser model to manipulate pixels and a transformer model to represent text. Stable Diffusion 2 couldn't spell, but SD3 can spell fairly reliably. There are still a lot of people using inappropriate models that are well behind the state-of-the-art, either through ignorance or because "ha ha AI is dumb" gets clicks.

https://stability.ai/news/stable-diffusion-3-research-paper
>> No. 5256 Anonymous
20th July 2024
Saturday 10:22 am
5256 spacer
>>5253
See, this is the problem. When you said, sarcastically, "big tech baddies" I assumed you understood that history extended back further than 18 months. Clearly I was mistaken, because if you didn't think this you'd recognise that the "move fast and break things" attitude, a credulous technophilia and laissez-faire regulation have not just allowed the most imperfect version of the tech-sector possible to flourish, but have propagated across industries, governments and borders. It's why it was never even a question if people's personal data could be bought and sold, the former UK government were a hair's breadth from selling Arm and why we have dickheads like Bill Gates telling us generative AI will stop climate change.
>> No. 5257 Anonymous
20th July 2024
Saturday 11:14 am
5257 spacer
>>5256

But you realise the big tech baddies are just a product of a system that was already broken, right? It's not like society would be a utopia if only Steve Jobs had stuck to trimming bum hair for a living or whatever it was he did before the first Apple kit took off. This is not a hyperbolic argument, we live in a society where the fluctuations of an imaginary number on a stock market somewhere are responsible for basically every decision anyone makes, and that's why big tech acts the way it does. Are you able to see the forest for the trees or are you just going to hyper-fixate on this one piece of the bark?
>> No. 5258 Anonymous
20th July 2024
Saturday 12:02 pm
5258 spacer
>>5257
You patronising, fucking, cunt.

>Hyper-fixate
Fuck off. Sorry I didn't write a one-hundred page document explaining why stock market dominated western capitalism is a flawed system of running the economy that gives undue power to the whims of said stock markets, whims that aren't actually whims at all, but more like the reflexive panic of a flock of startled birds. Sorry I didn't do that so one stranger on the internet could reply "interesting stuff" and then the post could be buried forever on a website no one visits. When Purpz starts handing out PhDs maybe I'll reconsider, but I don't feel like spending the whole of my Saturday writing something for nothing.
>> No. 5259 Anonymous
20th July 2024
Saturday 2:13 pm
5259 spacer
>>5258

>patronising

Don't give it if you can't take it mate.
