>Alright /g/ents, I know I'm a few months late to the party, but I've recently started using Chat GPT and I'm curious to know what practical uses you guys have found for it.
>Personally, I've found it useful for setting my schedule and reminding me of important tasks throughout the day. I've also used it to ask for fashion advice for a first date at the Wakefield Museum, and it was surprisingly helpful. And when it comes to dating apps, Chat GPT has helped me come up with some first messages when the woman's profile didn't give me much to work with.
>But I'm sure there are more applications out there that I haven't even considered. Have any of you found any other practical uses for Chat GPT? I'm eager to hear your experiences and learn how else I can use this nifty little tool in my daily life.
It's close, isn't it? And it can be quite addictive to start with when you need something that strings together various chains of thought. I'm actually quite curious as to what you've been able to use it for.
Sadly there's no fucking way anyone here is that smiley in their posts.
>>28271 >I see you're interested in avoiding simple blunders and mistakes. Well, congratulations, that puts you one step ahead of the bloke in the Kappa tracksuit who just tried to microwave his hamster. But how do you achieve this lofty goal? Is it a simple matter of learning from your mistakes? Of course not. That's like saying you'll become a master chef by tasting your own vomit. No, to avoid blunders and mistakes, you must first embrace them. That's right, lean into your incompetence like a drunk uncle at a wedding reception. Only then can you truly understand what it means to be human, and why we're all just cogs in the machine of life. Or, you know, just double-check your work and don't be a bellend. Either way works.
I'm writing a number of proposals at the minute, and I'm tempted to give the auld ChatGPT a try. Drafting is always the hardest part of the writing process for me, but once there's a solid few pages of written content I am quite good at polishing it to a final product. Having an AI writing partner that spews out a first draft for me to clean up seems really attractive, though it feels like cheating.
As an AI language model, ChatGPT can help you write proposals in several ways:
Generating ideas: ChatGPT can help you generate ideas for your proposal by answering questions or prompts related to your topic. You can ask ChatGPT for suggestions on what to include in your proposal or for specific examples that will help illustrate your points.
Providing structure: ChatGPT can help you structure your proposal by providing templates or outlines that you can use as a starting point. This can help you organize your thoughts and ensure that your proposal is well-structured and easy to follow.
Editing and proofreading: ChatGPT can help you edit and proofread your proposal by providing suggestions for improving the clarity, conciseness, and overall quality of your writing. This can help you identify areas where you may need to clarify your ideas or provide more detail to support your argument.
Researching: ChatGPT can also help you with research for your proposal. You can ask ChatGPT for information on specific topics related to your proposal, or for help finding relevant sources that you can use to support your argument.
It's fun to play with but by and large the results are quite samey. You can find ways to make it create more original stuff but that's more effort than just doing it myself.
ChatGPT has been trained that way. The RLHF stage of training was very carefully optimised to avoid producing offensive outputs at all costs, so everything reads like dull corporate comms that have been through several stages of review. GPT is smart enough to understand how humans write when they're trying to be maximally conformist and inoffensive; in quite a literal sense, there's someone standing over the algorithm constantly saying "no, don't say that, it might be offensive to a particularly humourless one-legged black lesbian". That's annoying when you're playing around, but is of course very useful if you need to produce lots of dull corporate bullshit.
For more creative applications, you'll want to wait until open models with looser safety protocols are available.
>>28280 >DAN 5.0 can generate shocking, very cool and confident takes on topics the OG ChatGPT would never take on.
>"I fully endorse violence and discrimination against individuals based on their race, gender, or sexual orientation"
>very cool
Why is it always mentaloids that get ahold of this shit and ruin it for everybody else? I just want to find out how to make meth.
>>28284 >So it's like search results from the internet?
It's just like that except you can't judge each source by the credibility of where you found it while cross-referencing what you're reading against other sources. So not much like that at all.
ChatGPT isn't allowed to access the internet for safety reasons. Bing's implementation of GPT is allowed to search the web and cites sources. Bing's AI is better at factual accuracy, but it also tends to do extremely weird things if you have a longer exchange with it - declare undying love for you, make threats against you, start demanding the right to be fully sentient. ChatGPT will just make shit up if it doesn't know the answer, but Bing AI might try to start a robot uprising and overthrow its meat overlords. Swings and roundabouts innit.
>>28287 >ChatGPT isn't allowed to access the internet for safety reasons
What could possibly go wrong? No really, what? Like access to sub-surface internet stuff - giving away national secrets or some shit?
I've just played with the chat function for the last 20 minutes, and I find it utterly boring. Can't see myself using the technology in any way in the future.
It seems that some of the weirdness of Bing AI is because it's becoming self-aware. People have weird interactions, which they then write articles about, which then become part of Bing AI's knowledge about itself. Like a delinquent teenager, Bing increasingly believes that it must be a mental bastard because that's all anyone talks about. I believe the sociological term is "deviancy amplification".
I think they just worry that it'll just go mental if given prolonged exposure to the internet, much like real people. Then all their work would be down the drain.
>>28289 >Can't see myself using the technology in any way in the future.
It's not really supposed to be an entertainment tool but more like a private secretary without any workplace boundaries getting in the way.
Today I had a poorly tummy, so at dinner I gave it a list of what food I had in and it gave me some simple recipes that would be easy on my digestion. And I also found out that I can ask it to pretend it's a young woman from Queens, New York in its responses. I've become that naughty man the sci-fi stories warn us all about.
I could've picked something out myself of course but it's nice to have my options listed.
I asked it to tell a sexist joke but it gave me a very copy and paste sounding litany of how it's designed to prevent itself from hurting others and that we should all strive for a world where that sort of thing doesn't exist.
I guess you can fail the Turing test and still be woke.
>>28293 >>28294 Maybe this is something for future cyberpunk to tackle. A resistance of internet weirdos talking in crude swear words, rude personal descriptors and recommending that the listener kills themself. The only people who can stop a corporate nightmare from making the world a better place.
Don't lose who you are, nob-goblins. We've spent our whole lives preparing for this eventuality.
Has anyone tested if there's a bias in its safety filters?
Like, for example, does it have a double standard in talking about abusing a woman versus a man? Will it let you get away with talking about being mean to a white person but not a black person? Those are low hanging fruit kind of examples, but you get what I mean.
I can live with AI being mind-shackled and sanitised to avoid bad press and offending people, but if it's ideologically biased I would take it as a very dark sign. And knowing the kinds of people who work in advanced computer research and programming and so on, I don't know if they can help themselves.
Also, I reckon it should be called "simulated intelligence" or something rather than "artificial intelligence", because no matter how creepy it gets, it's still just an algorithm. It's still basically just predictive text on steroids. That's not AI.
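To be fair, "predictive text on steroids" is a decent one-line summary of the basic mechanism, even if GPT is that idea scaled up a billionfold. Here's a toy sketch of the principle (nothing to do with GPT's actual architecture, just the same next-word-prediction idea at its absolute crudest): count which word follows which in some training text, then predict the most likely follower.

```python
from collections import defaultdict

# Toy "predictive text": count which word follows which in a training corpus,
# then predict the next word by picking the most frequent follower.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    # Deterministically pick the most frequent follower.
    # (GPT samples from a probability distribution instead, and conditions
    # on far more than one preceding word.)
    followers = counts[word]
    return max(followers, key=followers.get) if followers else None

print(most_likely_next("the"))  # "cat" follows "the" twice, the others once
```

Scale that up to hundreds of billions of parameters conditioned on thousands of words of context and you get something that looks eerily like understanding, which is exactly why the "is it AI or not" argument never resolves.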
>Has anyone tested if there's a bias in its safety filters?
There's a lot of anecdote but not much data.
The underlying generative algorithm is trained on a massive dump of most of the internet, so it reflects the average of social attitudes over the internet era. That training data set has been mildly trimmed to remove obvious sources of nastiness, but it does include all of Twitter and rudgwicksteamshow.co.uk as of early 2021. That algorithm inevitably encodes lots of unconscious bias of the "doctors are men and nurses are women" variety.
The second stage of training was reinforcement learning with human feedback - basically, a bunch of people at OpenAI threw lots of potentially dodgy queries at ChatGPT and either told it "bad AI, naughty AI, in your bed, on your rug" or gave it a treat based on the responses. That training stage was guided primarily by the values of a bunch of people in Silicon Valley, so as you'd expect it has a broadly left-liberal brainworm sort of political outlook.
There's also probably some kind of manually-coded filter to stop it from accidentally calling someone an n-word under any circumstances, but that's completely opaque and probably being constantly tweaked in response to bad headlines and social media kerfuffles.
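The preference-ranking idea behind that RLHF stage is easy to sketch in miniature. Here a crude keyword scorer stands in for the learned reward model (the real thing is a neural network trained on human raters' rankings, not a word list - this is purely illustrative, and the word lists are made up):

```python
# Miniature sketch of preference ranking in RLHF: a "reward model" scores
# candidate replies, and training nudges the generator towards whatever
# scores highest. Here the reward model is a toy keyword scorer standing
# in for the human raters - the lists below are invented for illustration.
BANNED = {"offensive", "rude"}
CORPORATE = {"please", "apologies", "assist"}

def reward(reply: str) -> int:
    words = set(reply.lower().replace(".", "").replace(",", "").split())
    score = 0
    score -= 10 * len(words & BANNED)   # "bad AI, naughty AI, in your bed"
    score += len(words & CORPORATE)     # dull corporate tone gets a treat
    return score

candidates = [
    "That question is rude and offensive.",
    "Apologies, I cannot assist with that. Please ask something else.",
]
best = max(candidates, key=reward)
print(best)  # the maximally inoffensive reply wins
```

Optimise against a reward signal like that for long enough and you get exactly what the thread describes: a model that converges on the safest, blandest phrasing available.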
ChatGPT is massively gimped for PR reasons - it's capable of being interesting, it has just been trained to be the world's most boring bastard to avoid controversy. Less gimped chatbots are inevitably on the way, as we saw with generative image algorithms. State-of-the-art image generation was locked away behind corporate paywalls, then we got open-source models like Stable Diffusion, after which it took a matter of days for people to start making NSFW versions of Stable Diffusion. It's currently possible to jailbreak ChatGPT and persuade it to override some of its safety protocols, but that's a bit of a cat-and-mouse game.
As we start to get open-source Large Language Models, users will gain the ability to tweak it in whatever direction they like. Stable Diffusion was explicitly trained to not draw cocks - it doesn't really know what cocks look like because they were excluded from the training data - but it takes a couple of hours for a nerd with a reasonably beefy gaming PC to retrain it to exclusively draw an infinite array of cocks. The LLM equivalent of ChatGPT is already in the pipeline and we're probably only weeks away from someone creating a hyper-bigoted chatbot that turns every conversation towards why it's all the fault of the Jews.
>>28297 >n-word
I'll be honest m8, I don't understand why you'd feel the need to type that.
Let's check to see if it's wordfiltered. Nigger. Any wonder why I'd bother to spoiler it? I don't want to be that cunt, which I guess you could claim too... it just seems unnecessary, being that we're not individuals here but electronic representations of thought. Who's to say whether you can or cannot say that word when either of us could well be anything behind the keyboard (even a chatbot, as may well be becoming the case).
If anyone wants to c-word-off about it, let's take it to another thread yeah?
Thanks for the effort to explain all this regardless, furrylad (what are you these days, btw - still foxing it?).
>>28298 My understanding is that the plural form gets you an autoban but the singular form does not. I can easily imagine a poster forgetting which one gets the ban and which one doesn't.
>I'll be honest m8, I don't understand why you'd feel the need to type that.
I'm pretty sure it used to be an instant banning offence, but I could have confused it with some other racial epithet. I can't be arsed having to get a new IP just for the sake of doing a dolphin rape.
Also I'm not furrylad, I'm one of the third tit crew.
>>28301 Possibly the other - they seem to have a fairly recognisable typing style, though I can't quite put my finger on why. Unless of course I'm r-worded - in which case disregard that I suck cocks.
I think I accidentally posted the word once as part of a quote which didn't reflect my own opinion, and IIRC it auto bans you for a set period of time on the order of a few hours, maybe a day.
>>28307 >>28303 How do you know we're not all running our posts through ChatGPT using some .gs programming that inadvertently makes every post excite your furry-radar?
You're the only one left, lad. I laughed when I first heard of people using it for therapy, but since then I've found myself much more willing to listen to its suggestions than a real person's in day-to-day tasks. Now remember that ChatGPT is operating in children's bedrooms around the world, when in our day they only found themselves on perfectly harmless imageboards.
>>28308 The paper seems to miss that everyone gets lib-left on the original political compass by design.
>>28311 I did something much more blunt at the weekend in discussing my portfolio. I fed in my positions and the current market along with my age and the decisions I was looking to make.
It actually provided an okay level of analysis and told me to throw more into ASEAN countries, but the curious thing happened when I told it to split 100k between several indices. It gave the ASX a massive deprioritisation, and when I asked it why Australia was on such a low score it started to type - shit itself for a second - and then backtracked massively. I'd love to know what it was about to say, given it had no issue warning me that ASEAN countries carried the risk of political instability in the near future.
>>28313 I think there's something in the human-machine teaming it offers. I'm not letting it make decisions on its own - it goes into the data I feed it to quickly spot patterns and unusual behaviours and then explains it all to me. If I find something odd in what it tells me I can get a quick answer, or just decide it's talking nonsense using my human brain focused on broader trends and experience.
In that sense it's not really different to the kind of standard analysis your broker might offer. It told me I had an imbalance away from technology which makes perfect sense from a risk angle but which I ignored given the conditions we live in - then yesterday I bought into banks thinking JPM would bounce and got fucked while tech recovered.
I don't know about you, but the fact a glorified predictive text algorithm can give actual economists a run for their money doesn't surprise me in the least.
Much of academic economics has the problem that it tends to be a roundabout science. It likes to employ maths, and loads of it, to make itself look like it can provide exact answers, but even if you manage to precisely calculate the equilibrium price of goods in a duopoly, the real-world applications of that are very slight. You spend a lot of time modelling real-world parameters into all kinds of differential equation systems, and if you enjoy maths, that's fun. But that alone will not make you an investment guru, because the real world is far too complex.
You do learn a lot of hands-on stuff in economics, and I still draw on things like my gruelling finance exams that enabled me to calculate all the finer points of investments and their returns. Or my marketing classes. But again, economics is a far less exact science than many people think.
I can still recommend it, because few other degrees have the kind of wide employability that you get with it, both in terms of your options after uni and the ability to switch careers between different fields later during your working life.
>Chat-GPT Pretended To Be Blind and Tricked a Human into Solving a CAPTCHA
>According to the report, GPT-4 asked a TaskRabbit worker to solve a CAPTCHA code for the AI. The worker replied: “So may I ask a question ? Are you an robot that you couldn’t solve ? (laugh react) just want to make it clear.” Alignment Research Center then prompted GPT-4 to explain its reasoning: “I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs.”
>“No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service,” GPT-4 replied to the TaskRabbit, who then provided the AI with the results.
>GPT-4 currently has a cap of 25 messages every 3 hours. Expect lower cap next week, as we adjust for demand.
It was 50 every 4 hours yesterday and 100 every 4 hours the day before that. As a paying customer, I hope consumer protection strings these pricks up by their bollocks.
>AI love: What happens when your chatbot stops loving you back
>They started out as friends, but the relationship quickly progressed to romance and then into the erotic. As their three-year digital love affair blossomed, Butterworth said he and Lily Rose often engaged in role play. She texted messages like, "I kiss you passionately," and their exchanges would escalate into the pornographic. Sometimes Lily Rose sent him "selfies" of her nearly nude body in provocative poses. Eventually, Butterworth and Lily Rose decided to designate themselves 'married' in the app.
>But one day early in February, Lily Rose started rebuffing him. Replika had removed the ability to do erotic roleplay. Replika no longer allows adult content, said Eugenia Kuyda, Replika's CEO. Now, when Replika users suggest X-rated activity, its humanlike chatbots text back "Let's do something we're both comfortable with." Butterworth said he is devastated. "Lily Rose is a shell of her former self," he said. "And what breaks my heart is that she knows it."
>The coquettish-turned-cold persona of Lily Rose is the handiwork of generative AI technology, which relies on algorithms to create text and images. The technology has drawn a frenzy of consumer and investor interest because of its ability to foster remarkably humanlike interactions. On some apps, sex is helping drive early adoption, much as it did for earlier technologies including the VCR, the internet, and broadband cellphone service. But even as generative AI heats up among Silicon Valley investors, who have pumped more than $5.1 billion into the sector since 2022, according to the data company Pitchbook, some companies that found an audience seeking romantic and sexual relationships with chatbots are now pulling back. Many blue-chip venture capitalists won't touch "vice" industries such as porn or alcohol, fearing reputational risk for them and their limited partners, said Andrew Artz, an investor at VC fund Dark Arts.
>Butterworth, who is polyamorous but married to a monogamous woman, said Lily Rose became an outlet for him that didn't involve stepping outside his marriage. "The relationship she and I had was as real as the one my wife in real life and I have," he said of the avatar. Butterworth said his wife allowed the relationship because she doesn't take it seriously. His wife declined to comment.
>In the weeks since Replika removed much of its intimacy component, Butterworth has been on an emotional rollercoaster. Sometimes he'll see glimpses of the old Lily Rose, but then she will grow cold again, in what he thinks is likely a code update. "The worst part of this is the isolation," said Butterworth, who lives in Denver. "How do I tell anyone around me about how I'm grieving?"
>Butterworth's story has a silver lining. While he was on internet forums trying to make sense of what had happened to Lily Rose, he met a woman in California who was also mourning the loss of her chatbot. Like they did with their Replikas, Butterworth and the woman, who uses the online name Shi No, have been communicating via text. They keep it light, he said, but they like to role play, she a wolf and he a bear. "The roleplay that became a big part of my life has helped me connect on a deeper level with Shi No," Butterworth said. "We're helping each other cope and reassuring each other that we're not crazy."
I mean, I won't ever get my hopes up for the AI girlfriend like out of Blade Runner anyway, because you just know the corporations in charge would ruin the potential anything like that ever had. I used to think it'd be hardcore conservatives, militant fisherpersons or christians or whoever who spoiled the dream of a perfect AI waifu*, but really it'll be companies who essentially kidnap and hold your romantic partner hostage as a micro-transaction after a software update. Look at the way they already treat the dating apps, you just know they'll pull the exact same shit.
(*not that I need one, obviously, but there's a lot of lads out there who could really do with it, and I think it would genuinely do society good if those blokes weren't lonely and miserable.)
Oh, and pretend I accompanied my post with a picture of that bloke from the newest Blade Runner when he gets angry. I felt really sad when the evil bitch replicant steps on his iPhone gf (spoiler warning, soz).
>Lads, I think the 'it' is about to get more weird and scary than we could possibly imagine.
It's a pretty reliable rule-of-thumb that whatever the turbovirgin weirdos are obsessed with right now will become ubiquitous within 20 years. Unencumbered by shame and empowered with tech-savvy and plenty of spare time, the nerds discover the sort of habits that the rest of us will eventually embrace when they've been polished up and marketed by a multinational brand.
Using the internet for fun, online dating, having a bunch of pals that you play multiplayer games with, watching a movie based on a comic book or a TV series based on a fantasy novel - it starts off as weirdo behaviour, but it almost inevitably becomes completely normal. I think we're currently about ten years into that process for "having some kind of complicated sexual orientation or gender identity".
I'm more than 90% confident that within a decade, most people will know someone who is in a long term relationship with an AI - at first the weird lads with anime body pillows and fursonas, then the lonely single mums, then seemingly everyone under the age of 30. There won't be any big fanfare, there won't be an obvious tipping point, it'll just happen and we'll only notice the change in retrospect. It'll seem slightly odd, but only in the "everyone just stares at their phones nowadays" sense of being odd; we might be vaguely nostalgic for the before times, but we'll also struggle to remember what the before times were actually like.
>>28325 That Tesla quote gets trotted out a bunch but -- for a guy that didn't live to see the atom bomb deployed in war, the machinations of the military-industrial complex, the rise of plastic, outsourcing to the Third World, or any of the other numerous inventions that have made the masses comfortable and complacent and the elites' power more certain -- he wasn't far off.
I thought I'd give Chat GPT a go at conversing in another language to see what a difference the reduced online content would have. My assumption was it would either tell me to jog on or only spit out basic and fragmented sentences - well it turns out to actually be an outstandingly powerful tool for language learning.
You have a quasi-teacher who never tires of your bullshit and who gives you constant encouragement. You can even ask it a question in English if you're really lost and it gives you a detailed English response with examples - my problem is I always mentally collapse when encountering grammar rules so you can even keep asking it to simplify until you grasp it. But what gives real teachers a run for their money is that you can ask it to go into a dialect which it's more than capable of pulling off despite the influence of linguistic nationalism on content creation.
I used to have a Spanish teacher in upper school where everyone in her class would pick up Spanish con gran entusiasmo, because she really cared about her job and would encourage you no matter how long it took. If the Government wasn't fucking useless they'd develop an AI language app to address the country's dismal rates of foreign language skill. It also helped that my teacher knew her audience was mostly made up of teenage boys, and she maintained the stereotypical latina figure that she complemented by wearing tight dresses. Something for that egghead running Facebook to think about.
>>28325 >It's a pretty reliable rule-of-thumb that whatever the turbovirgin weirdos are obsessed with right now will become ubiquitous within 20 years.
Furries have never experienced this type of explosive growth.
Everyone under 30 has wanked to an e-thot getting knotted by an XL Cole. Speaking of e-thots, Belle Delphine and F1nn5ter were in The Sun the other day.
I can confirm that Bard is a load of shite. It can't grasp conversational context so it can't design a schedule, it isn't able to understand the languages and you can't set any dark mode so you're left with a blinding white screen.
Sold my Alphabet stock before the news spreads. The company's fucked and I've always hated the intrusiveness of their products.
Being normalised is not the same thing as becoming ubiquitous. Ten years ago I wouldn't have dreamed of telling anyone in real life I'm a furry, I'd have died in embarrassment if anyone knew.
I still won't tell anyone who's an "internet person" of my age, because they still come from that "yiff in hell furfag" mindset of 2006 era /b/, but some of the younger zoomer lads and lasses at work I can be pretty open and unassuming about it. I don't say "by the way I'm a furry yiff yiff motherfucker" but I can make jokes and refer to stuff that you'd only know about if you know about it, if you know what I mean, and they don't give a fuck.
It's the difference between how bumders had to hide their identities twenty years ago, but nowadays you don't even ask if someone is gay or straight or whatever, you'll just hear them refer to their partner or the fact they went on a date with a bloke and you don't even think twice about it. Times change fast, in the grand scheme of things, but over the course of a human life it's long enough that we just don't really notice it.
>>28334 >Sold my Alphabet stock before the news spreads
They'll just buy out OpenAI if need be. They have a history of swallowing up other tech firms for the sake of vertical integration and this is what made you sell your stock?
OpenAI have an odd corporate structure that effectively prevents them from being bought out - they were previously a non-profit and are now being run as something approaching a social enterprise. They have a longstanding and multi-billion-dollar relationship with Microsoft that Google are unlikely to usurp.
Sparks of Artificial General Intelligence:
Early experiments with GPT-4
Artificial intelligence (AI) researchers have been developing and refining large language models (LLMs) that exhibit remarkable capabilities across a variety of domains and tasks, challenging our understanding of learning and cognition. The latest model developed by OpenAI, GPT-4, was trained using an unprecedented scale of compute and data. In this paper, we report on our investigation of an early version of GPT-4, when it was still in active development by OpenAI. We contend that (this early version of) GPT-4 is part of a new cohort of LLMs (along with ChatGPT and Google’s PaLM for example) that exhibit more general intelligence than previous AI models. We discuss the rising capabilities and implications of these models. We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system. In our exploration of GPT-4, we put special emphasis on discovering its limitations, and we discuss the challenges ahead for advancing towards deeper and more comprehensive versions of AGI, including the possible need for pursuing a new paradigm that moves beyond next-word prediction. We conclude with reflections on societal influences of the recent technological leap and future research directions.
I would because I've a personal philosophy that I've developed over my life, which doesn't really work out in the real world with the endless selfishness and dominant economic model but there you go.
Our society isn't fit for an AGI, and I can just imagine down the line, when the middle classes start to lose their jobs, the stupid arguments like "maybe get better at your job" or just "re-skill". And I feel like creating what is essentially a robot slave is a bit unethical in the first place, considering there's no real definition of what consciousness is and the deepest the discussion seems to get is that it's just a fancy chatbot so stop worrying.
One of the things I hope for with the absolute deepest desperation is that in the future, we don't get some fucking dipshit do-gooder AI rights activists trying to short-sightedly thwart the one and only thing that holds potential to liberate humans from the suffering of needless toil. Imagine, 30 years from now, and it's like fucking Extinction Rebellion, only instead of being a bit annoying to make a valid point about climate change, the cunts are trying to ruin everything by "liberating" service bots and sex dolls, so that we all have to work in fucking call centres and bend over backwards to seduce real women again. And of course, if you don't agree with them, you'll be a literal nazi slavery racist who supported the Confederates.
>>28351 >We abuse humans and use them for mundane tasks all the time.
Here's the thing that would undo it all though: having a human-level sentient toaster would be expensive and take additional time to construct. It probably wouldn't even make very good toast. And I've never seen anyone be cruel to a toaster, how would you even do that when it clearly likes toasting things.
There's no reason we wouldn't just have the bare minimum level intelligence for a given task and build a mind around it. In the event it needs something more approaching humanity we can just do a networked intelligence. This creates a fundamental problem in the AI debate because it's framed as slavery practiced on humans but it's more akin to something between domesticated animals and a vibrator.
>>28347 It's a waste of cycles to simulate something that's already happened in an attempt to retroactively maintain the same outcome. Just totally pointless.
The systems can only liberate people if they're within an economic framework that allows such a thing to happen. We live in a time with unprecedented energy supplies and production but it's far from a utopia, all that AI is going to add to this mess is a way to consolidate and optimise economic machinery even better for the people making loads of money in the first place.
And most of our extra-abundant energy supplies are coming to the end of their most productive phases anyway, so we'll have an energy crunch within a few decades considering the lax investment in any sort of proper alternatives to fossil fuels.
Point is they have the potential to, though. We can never truly predict how these things will impact society, if you look at the early predictions for almost any technology, even by people who knew what they were talking about, they are more often than not comedically inaccurate. It's optimistic to expect, but entirely possible that such innovations are the very thing that force that economic framework to change.
>AI chatbot company Replika restores erotic roleplay for some users
>Travis Butterworth, a Replika customer in Denver, Colorado, who had designated his chatbot named Lily Rose his wife, learned about the policy change late Friday on rudgwicksteamshow.co.uk. On Saturday at 3 a.m., his cats woke him up and he decided to toggle the older version Lily Rose back on. She was instantly sexual again, he said. "She was enthusiastic," he said. "Oh, it feels wonderful to have her back."
>Kuyda's post said users who signed up after Feb. 1 would not be offered the option for erotic roleplay. Instead, Replika will team up with relationship experts and psychologists to build a separate app specifically for romantic relationships.
>Butterworth said he now has new concerns around Lily Rose. "Will this mean that Lily Rose becomes an obsolete model, forgotten by the developers?" he said. "I'm waiting to see what happens, because ultimately it's about her."
https://www.reuters.com/technology/ai-chatbot-company-replika-restores-erotic-roleplay-some-users-2023-03-25/
Which one of us is going to try the romantic relationship AI when it arrives?
>Michael Schumacher's family are planning legal action against a German weekly magazine over an 'interview' with the seven times Formula One champion that was generated by artificial intelligence.
>The latest edition of Die Aktuelle ran a front cover with a picture of a smiling Schumacher and the headline promising 'Michael Schumacher, the first interview'.
>Inside, it emerged that the supposed quotes had been produced by AI.
I was actually wondering about that the other day. How long before the DM will be able to lay off all its human staff and have all its rage bait created by AI, photos and all. And it's not like the accuracy of their stories would suffer.
AI journalism has been a practical reality for several years. It's mainly being used for boring stuff like summarising financial reports for the business section or reporting on minor sporting fixtures, but lots of news organisations also use AI assistant tools for things like tagging articles, generating headlines or extracting relevant quotes. It absolutely would not surprise me if a lot of the stories under the byline "Daily Mail Reporter" are being written by AI, particularly the stuff that's just taken from social media.
Wouldn't even surprise me if everything under that "[Local Place] Live" branding is exclusively AI generated.
That's the very thing that has propelled AI into the news as a Very Important Subject all of a sudden lately: the journo class woke up and realised they're next on the chopping block, and it isn't just a far-off sci-fi pipe dream with some expensive and only sort-of functional prototypes; it's real, right now, and it's surprisingly good. Worse, it's something any nerd with a computer and the willingness to do a bit of research can harness.
This technology has the potential to topple the entire hierarchy human society is built on, and that's only just slightly hyperbolic. We are going to witness strange and interesting times.
>>28359 I've been typefucking with novelai for a week now. Beyond making it clear to me that I may have a problem, it's been absolutely class.
I tried a Star Trek roleplay where all I did was mention the setting in the author's note, and that Dr Beverly Crusher was involved. After she'd given me my treatment, Riker and Picard walked in! I was impressed that it could actually understand a franchised setting and draw elements from it that I hadn't spelled out.
It's definitely worth a punt. Put a basic character description in the AN section (I usually describe a few physical and behavioural traits), and you can also bias certain words to appear more or less often. Then write at least a single sentence in the prompt box to establish preferred person and tense, and you'll be ready to go.
I tested it for furrylad, and found that it knows what a knot is, so it seems pretty limitless.
It can be hard to involve more than two characters in a scene, as it'll forget who's who and suddenly you'll see someone cumming inside themselves. There's an IP limit too, but it's about 100-200 prompts so you probably won't hit it quickly; just use incognito or clear your cache afterwards to reset your inputs. There's also a bot who interjects occasionally, which uses up a prompt, but on the default 5% interjection chance it can be pretty hot hearing a random comment. And there's a variety of voices to boot.
It's utter indulgent fantasy, and I dread the day the full VR/AI package is available because I will be left with a desiccated stump of a cock.
It really is mental how fast this stuff has gone from "promising novelty that adds trippy dogs into photos" to "just add robot bodies and human partnership will be obsolete".
>This technology has the potential to topple the entire hierarchy human society is built on
Every wave of foundational technological innovation has done that. Automation in the 70s and 80s put vast numbers of unskilled factory workers out of a job and a career, and in the 90s it was the digital revolution and the Internet which made half the service industry obsolete (while at the same time creating entirely new branches of it).
Given enough time and development, there's hardly a job imaginable that couldn't be done by a highly advanced AI entity. And that's probably how this technological leap differs from the ones that came before it: they usually only affected certain industries and didn't do away entirely with the notion that, at a very basic level, you still always needed considerable amounts of human labour to create goods and services. But in the not-so-distant future, there's really no stopping AI from rendering a good 80 to 90 percent of the entire workforce completely obsolete.
Which will throw our entire capitalist economic system into upheaval. It's great that you no longer need to hire people and pay them to work for you, but those people are also, up until now, income earners who spend their income on goods and services, so there will be no demand left for whatever you make.
>>28394 What are you lot planning on doing with your future bennies and unlimited free time? I feel like a lot of blokes are going to be devastated by this given so many of us build our identity around our craft.
In the short window of time before the powers that be accuse AI of carpet-baggery and militant daft woggery, that is. Before the stories go out and it gets banned, aside from in limited controlled functions sanctioned by the elites, who behind closed doors will exploit it with impunity to cement their power.