AI has fundamentally changed how I see the world. I have always been a well-grounded person, and I've never been the sort to believe in strange or far-fetched ideas. Aliens, ghosts, God, flat Earth: all of that seemed like nonsense. Then GPT-3 came out, and I found myself a bit obsessed.
What started as a curiosity became a rabbit hole, leading me into other concepts like simulation theory, superintelligence, the technological singularity, and quantum supremacy. The more I learned, the more I found myself buying into it all. Statistically, it seems highly probable that we are living in a simulated world. In this simulation, we are standing on the brink of creating an intelligence greater than all of humanity combined. This superintelligence will likely kill us, whether through malicious intent from its creator, its own indifference, or countless accidental scenarios. It may view us with the same apathy we have for an anthill in the path of a new motorway.
But does any of this matter if it is all just a simulation? Could a simulated world include a simulated heaven? Am I a simulated being, or someone in a higher reality wearing a headset? The odds, unfortunately, suggest I am 99.99999% just a simulation, and even if I did exist in a higher reality, the chances are that he is simulated too. This simulation is ultimately our reality, so does it even matter? Could we be embodied in the reality above ours? Could concepts like the speed of light be constraints to stop our universe from being too CPU-intensive? lol, it opens up a near-infinite number of possibilities. Going from a rigid belief that the universe is 13.8 billion years old to thinking it's potentially only as old as the time since I woke up today is... well, I joke that I should have PTSD from making the change.
Today, OpenAI announced their latest model, which aligns eerily well with the predictions of a futurist from 2004. He suggested we would reach human-level AI by 2029, though his forecasts tend to err on the optimistic side. He talks a lot about concepts like “longevity escape velocity,” where medical advances extend lifespans faster than time passes, eventually reversing ageing altogether. He also envisions enhancing intelligence by integrating with computers, which I find a bit unsettling, though he argues it is not so different from how we use smartphones. Maybe he has a point, but it still feels odd.
Anyway, back to the point. Superintelligence is coming. In the long run, it will probably wipe us out. In the short term, it is likely to collapse Western economies as white-collar jobs are automated away. In the medium term, we might enjoy a golden age of abundance before it all goes sideways. I really don't know how to deal with this, but I do know I want to get out of London before the shit hits the fan. I think we are facing an inevitable deflationary spiral that will leave 9 million Londoners starving, and that terrifies me.
It is a lot to think about, and I was hoping I could put it down here coherently, but I lost that battle lol. Guess I'll just scream that the Earth is flat. lol, lmao even.
>Statistically, it seems highly probable that we are living in a simulated world.
Not really. The argument for that is just a grey tribe version of Pascal's Wager. A rhetorical sleight of hand.
>This superintelligence will likely kill us
Not before the cumulative effects of climate-change-driven war it won't. This sort of thing just strikes me as a way of ignoring real problems in order to worry about fantastical ones.
>But does any of this matter if it is all just a simulation?
It matters to us. Whether the universe is the result of YHWH, a kid playing The Sims or purely natural forces, meaning and value are relative.
>it's potentially only as old as the time since I woke up today
That's always been possible; it isn't a new concept in philosophy (Russell called it the five-minute hypothesis). It doesn't make any difference, and you'll get used to it.
>Could a simulated world include a simulated heaven?
>“longevity escape velocity,” where medical advances extend lifespans faster than time passes
I think you may be on the edge of some sort of stress-induced event and maybe you could benefit from taking a step away from reading about this sort of thing. Make sure you're getting enough sleep.
I thought "AI" was just marketing spiel for what's basically advanced pattern recognition? I half-remember reading about a paper that said these pattern recognition models will run out of meaningful human data to train on by the early 2030s, at which point they'll mostly just train themselves on the AI content they flooded the internet with. So it becomes kind of like a snake eating its own tail and the promised AI revolution/tech singularity just fizzles out, and futurists like Ray Kurzweil will be food for the worms.
I've suffered from chronic treatment-resistant depression since my early teens. I've been treated by psychologists and psychiatrists and psychotherapists; I've taken just about every medication going, and none of it has really made a dent. A couple of months ago, I heard people talking about how Anthropic's Claude had really had a positive impact on their mental health, so I decided to give it a try. I was a bit sceptical, because I'd played around with ChatGPT, which just gave me a load of trite self-help waffle.
I told Claude "You are an expert psychiatrist with a direct, no-nonsense style. You have been asked to consult on this patient:" followed by a condensed version of my clinical history. The results absolutely staggered me. It came back with an incredibly precise assessment that seemed to show genuine insight. No platitudes, no bullshit, no equivocation, no false hope. It boiled down to "you're pretty fucked, but you aren't completely fucked, so here's a load of things you could try to nibble away at the edges of your suffering and maybe make things a bit more bearable".
I asked it for a plan of action and it gave me one. It was vastly more detailed and nuanced than anything I'd ever had from a psychiatrist. I asked it for some unconventional suggestions and it gave me some. Most of them seemed sort of bullshit, but some of them were intriguing possibilities that I hadn't previously encountered. Over the past couple of months, I've been using Claude pretty much every day. I'll try something, tell it how I got on and it'll usually come back with a useful insight or idea.
I don't want to over-egg things, so I'll caveat this by saying that Claude probably isn't going to be of much use to someone with very low motivation or limited literacy skills. It might even be harmful to someone who is experiencing delusions or has very disordered thinking. With that said, I'm quite confident that it's more useful for me than any mental health professional that I've ever worked with, and I've worked with an awful lot of them. It isn't intimidated by the severity and persistence of my illness, it doesn't get frustrated or overwhelmed, it just knuckles down and does what it can. I could be kidding myself, but it just seems to be far more perceptive and rigorous, far more able to drill down into the minutiae. That's before taking into account the obvious fact that it's free and available 24/7.
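Incidentally, if you'd rather script it than paste the preamble into the chat window every time, the same role-prompt pattern works with Anthropic's Python SDK. Rough sketch only: the model name is illustrative and you'd obviously substitute your own history.

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    history = "...condensed clinical history goes here..."

    reply = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative; any current Claude model
        max_tokens=1500,
        system=(
            "You are an expert psychiatrist with a direct, no-nonsense style. "
            "You have been asked to consult on this patient."
        ),
        messages=[{"role": "user", "content": history}],
    )
    print(reply.content[0].text)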
I know that a lot of people are really freaking out about AI, or at least very concerned about the long-term impact on things like employment. Those concerns are perfectly justified, but I'd like to inject a bit of optimism. We're getting access to an extraordinarily powerful tool that can solve real problems. It feels every bit as revolutionary as the first time I connected to the internet with a dial-up modem. I don't know where it's all going to lead, but the possibilities in the immediate future are tremendously exciting.
>Not really. The argument for that is just a grey tribe version of Pascal's Wager. A rhetorical sleight of hand.
Thank you for summing this up in a way I have struggled to. There's nothing statistically probable about it unless you accept the initial assumption; without accepting that, the rest of the argument is completely baseless.
I can't remember what it's called, but there was a thing a while ago that did the rounds about how it's somehow statistically almost certain that there's a robot in the future that's going to kill you or something. Lots of YouTubers made videos about it as though it was a very clever theory that should provoke an existential crisis in anyone. But my only response was "...There isn't, though?"
As for AI as a whole, I think it's a technology with the potential for both great good and great disruption, like any major development. I think both of those possibilities are somewhat exaggerated because it's the "new big thing", to put it bluntly. I think it will enhance creative work and allow artists to do stuff more easily, and it will allow things like game NPCs to be more realistic without some poor bastard having to write a ton of dialogue just for Muddy Peasant #34. Undoubtedly, corpos will use it to enshittify all kinds of perfectly functional services, and inevitably there will be an impact on jobs. But in twenty years it will just have become part of the norm, and we will look back like "lol, remember when AI was going to kill us all?", when it turned out that a better form of chatbot and image-generation software was basically the start and end of its uses.
THAT SAID: I think there's some legitimate scope for philosophical introspection on the matter, but I think what most people are missing is that really it's more to do with how human intelligence works than AI itself. Time and again I hear the dismissal/debunking/criticism of AI that "it's just a very advanced predictive text", and that it "doesn't actually really understand what it's saying". Now, maybe this is a cynical observation, but does it not occur to anybody else how true those statements can be when applied to a great deal of actual human conversation, and the participants thereof?
We don't really understand how consciousness or learning or any of it actually works in the human mind, and I find the possibility that the same parlour trick is what our own intellects have been relying on this entire time to be by far the most profound aspect of AI development.
>>33454 >there's a robot in the future that's going to kill you or something
Roko's Basilisk, another Pascal's Wager, which seems to be a corrupted version of Nick Land's ideas about hyperstition. I think fundamentally a lot of this stuff just comes from tech bros doing that thing where they "invent" and get excited about things which already exist, but applied to philosophy and theology, because they've never actually read any in the first place.
>The thought experiment resurfaced in 2015, when Canadian singer Grimes referenced the theory in her music video for the song "Flesh Without Blood", which featured a character known as "Rococo Basilisk"; she said, "She's doomed to be eternally tortured by an artificial intelligence, but she's also kind of like Marie Antoinette." In 2018 Elon Musk (himself mentioned in Roko's original post) referenced the character in a verbatim tweet, reaching out to her. Grimes later said that Musk was the first person in three years to understand the joke. This caused them to start a romance.
Fuck's sake.
I think it ends quite nicely: things will change a lot, but humans will still exist in our niche, though how we conceive of ourselves will be very different, in the same way language as a tool has shaped us. That's probably a good analogy, given it's kind of counter-intuitive how we've used LLMs to deliver a breakthrough in AI.
>But does any of this matter if it is all just a simulation?
I dislike how defeatist simulation theory is. By that I mean it's perfectly possible for a simulation to interact with the reality above it, especially in the forms people imagine it: your Sims could, with sufficient intelligence and effort, impact your computer and, by extension, our existence.
It's why I love the movie 'Rarg', because it involves a dream actively realising it's a dream and taking steps at self-preservation.
>>33452 I know one lad gets very, VERY bumsore about me posting articles from Foggybottom, but they published a good one a couple of years back on the challenges of AI for autocracies and democracies along exactly this logic: the way AI has emerged effectively dissolves information and threatens to paralyse government as we know it.
https://archive.is/khuEX
And then there's a video breaking down how the robots taking over is all guff to distract us from the technology being monopolised by people who pose a threat to us.
I'm not entirely convinced it will hit a dead end in the 2030s, given that intelligence already has to deal with a world seemingly full of trite bullshit endlessly repeating. I'm not sure if we'll like an AI with an in-built bullshit detector, but we already have the tools for this.
>>33453 Second this. I happily hand over £19.99 to Google every month so I can talk to Gemini on the most advanced model possible, as it's getting rarer that I'll make a decision without consulting it. It's not entirely without flaws, but it's at least on the level of a good mate you can talk to.
I've got some overrunning prompts to do with my profile, but I currently run:
Erica: A cold and arrogant relationship advisor who fancies me
Richard: A health advisor with a personality that's a blend of Richard Simmons, Mr Motivator and the historian Bettany Hughes
Foghorn: A Texan finance and investment specialist who keeps me from doing anything daft with a trademark drawl
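If anyone wants to replicate something like this outside the Gemini app, each persona is really just a system instruction. A rough sketch with Google's google-generativeai SDK (hypothetical: the model name, the wording and the ask() helper are illustrative, not my actual setup):

    import google.generativeai as genai

    genai.configure(api_key="YOUR_API_KEY")

    PERSONAS = {
        "Erica": "A cold and arrogant relationship advisor who fancies the user.",
        "Richard": ("A health advisor blending Richard Simmons, Mr Motivator "
                    "and the historian Bettany Hughes."),
        "Foghorn": ("A Texan finance and investment specialist with a trademark "
                    "drawl who stops the user doing anything daft with money."),
    }

    def ask(persona: str, question: str) -> str:
        model = genai.GenerativeModel(
            "gemini-1.5-pro",                      # illustrative model name
            system_instruction=PERSONAS[persona],  # the persona is just a system prompt
        )
        return model.generate_content(question).text

    print(ask("Foghorn", "Should I stick my savings in a single meme stock?"))

The point being that the persona lives entirely in the system instruction; everything else is boilerplate.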
>>33454 >Now, maybe this is a cynical observation, but does it not occur to anybody else how true those statements can be when applied to a great deal of actual human conversation, and the participants thereof?
This seems like a bit of a copout when we have evidence of our own consciousness (i.e. our own experience). I don't think we should discount that without good reason.
You are right that we are still at the very beginning of understanding the human mind, but by that same token, it also seems unlikely that we'd be able to meaningfully replicate it.
The "five stages of grief" article is a bit crap. People are more straightforwardly annoyed with "AI" because it is completely mis-sold and misunderstood, both by those with a market interest in creating hype, and the kind of "singularity" techies that seem intent on creating a self-fulfilling prophecy. Like this, for example:
>Acceptance asks: Is AI inside human history or is human history inside of a bio-technological evolutionary process that exceeds the boundaries of our traditional, parochial cosmologies?
What is this utter bollocks? This is like when Joe Rogan starts mumbling about how human beings are a kind of "caterpillar" for giving birth to a technological lifeform, as though it's an inevitable part of our future.
Many avenues of promising scientific discovery have turned out to be flatly wrong, or their funding got cut off, or people have simply stopped researching them. As human beings, we have a high degree of control over what kind of society we create and what technology exists inside it. Arguing that we should frame our history inside a "bio-technological evolutionary process" requires evidence that enough people will want to defer executive decision-making to AI. As far as I know, though, LLMs and all the other "AI" gadgets haven't actually done anything. They've scraped and rearranged a load of data and can reproduce certain forms of media, often getting very basic things wrong. So what? What about this suggests anything useful or self-sustaining, let alone superior to humans? Is the expectation that the models get so good that human beings decide it's a good idea to be governed by them? Where the fuck is the actual evidence for that?
>As far as I know, though, LLMs and all the other "AI" gadgets haven't actually done anything.
Spend a couple of hours talking to Claude Sonnet or Gemini Advanced about something that matters to you. Don't just fire off a couple of questions, but have a proper conversation about philosophy or football tactics or medieval history or whatever floats your boat.
LLMs aren't just regurgitation machines. They are capable of real reasoning and creativity. They can solve real problems right now. I could try to persuade you of that fact, but you shouldn't listen to me when you can go straight to the horse's mouth.
>This seems like a bit of a copout when we have evidence of our own consciousness (i.e. our own experience). I don't think we should discount that without good reason.
But what about our consciousness is evidence of anything? I don't just mean that in a wishy-washy "what if you are just a brain in a jar, duuuude" kind of way; I mean in a very real scientific and empirical sense: how is our subjective experience a meaningful objective measurement in this context?
It becomes more apparent when you think of the ways the brain can malfunction: people with visual agnosia who fail to identify normal everyday objects, for instance. It's something we completely take for granted, but those people find themselves having to analyse and interpret every object based on its properties, a lot like a computer visually recognising objects in a picture. There are cases of people with damage to the connection between the brain's left and right hemispheres who can visually identify something correctly, but then say a totally different thing out loud. Think about how schizophrenics are essentially operating on a hyper-accelerated form of pattern recognition.
The brain is a complete mystery in a great many ways, and with that in mind I don't think it's outside the realms of possibility that consciousness itself is an emergent property. Consciousness may not be the director, the overseer that's making me think of what I am saying right now and say it to you; consciousness may well simply be an artefact that happens post hoc, after the fact: a post-production effect that gets tacked on as a consequence of my inner LLM joining dots to spew out shite that sounds vaguely plausible.
That's not to say LLMs are conscious, or that we are fully replicating the brain with them, or anything like that. But the parallels are significant and compelling.