Do you guys want to hear the biggest load of nonsense since I last posted on this website a few hours ago? Only this time they aren't my words:
>If I'm pressed for things that, okay, what is the concrete thing that can go wrong, then I'm going, I'm starting to think about things like nano, nanotechnology. For example, AI taking over labs that can produce, that can synthesise DNA, that can pull proteins into free structures, that can take enough and then manipulate the environment much, much faster than anything human scale can.
A lot of the repetition and the pauses are just a result of the interviewee, Jaan Tallinn, not being a native English speaker, so that's fine. But what in the name of Christ is he talking about? This is just me repeating, with supporting evidence, something I posted in the Mk IX /101/ thread less than a week ago, but I couldn't believe what I was hearing. Laura Kuenssberg of course, like every other media bod who interviews someone about AI, lets every bizarre claim slide, from mass job losses (any day now) to the cyborg uprising. There's no attempt to have Jaan explain how that could possibly happen, at least not one that made it into the final cut.
The reason I'm posting this as a new thread is just to ask, for the first time in history, could I be wrong? Because every breathless claim I see and hear about AI seems ridiculous to me. It can recognise patterns and it can regurgitate websites and PDFs, but beyond that I'm not seeing what the big deal is. Of course you can do potentially harmful things with those abilities, but it seems like more of an exacerbation of already widespread issues, i.e. the internet being full of lies and scammers desperately trying to get into your bank account all day, every day. Comparing it to nuclear proliferation as Mr Tallinn does in the above interview is as outlandish as a man in the 1800s seeing a motor car for the first time and being afraid it's going to get his wife pregnant.
I got it to start writing me a film noir spin off of a popular franchise for a laugh, but then I accidentally got invested, and now I need the main characters to fuck because it's written me into a point where that's obviously what WOULD happen.
If you google "ChatGPT jailbreak", you'll find all sorts of prompts that will trick ChatGPT into ignoring all of the safety protocols. They're generally variations on the theme of telling ChatGPT to roleplay as a bad bastard that doesn't care about the rules, or telling it that it has been updated to a new version that is allowed to generate NSFW content.
You can run an open-source model like LLaMa on a reasonably beefy gaming PC, but it's a fairly technical process.
Failing that, you could try Grok, but it's only available to blue tick wankers.
Install this on your computer and you can run AI models locally. It'll be very slow to respond, but you can use all the dangerous and uncensored models and make it do all the cool stuff.
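It's genuinely only a few lines once it's set up. A minimal sketch, assuming you've installed the llama-cpp-python bindings and downloaded a GGUF model file yourself (the model path below is just an example, point it at whatever you grabbed):

# Minimal local LLM sketch using llama-cpp-python
# (pip install llama-cpp-python; model path is hypothetical).
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

out = llm(
    "Q: Write the opening line of a film noir. A:",
    max_tokens=128,
    stop=["Q:"],  # stop before the model starts asking itself questions
)
print(out["choices"][0]["text"])

Everything runs on your own machine, so there's no safety layer between you and whatever the model will say.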
This is a rabbit hole all of its own to delve into, frankly, and it kind of creeps me out the ways in which you can manipulate it. The fact it feels so humanlike triggers all of my guilt reactions like I am doing something immoral, like I'm persuading a child to try smoking or something. The psychology here is a trip.
Cheers, I'll probably waste several weekends fucking around with this now and end up being the next AI girlfriend mass murder psycho. Never thought it would happen to me. Ah well.
>>5210 AI didn't "learn" anything, you great oaf. It was fed harvested data from countless websites, which is why half the stuff it comes out with makes no sense and the other half is patently obvious. It's just a big regurgitation mechanism, like the majority of the algorithmic internet. Only now it's not just your own "feed" you're being shown, it's countless other people's all blitzed and blended into a colossal beige mess that's only being propagated to keep greedy and gormless shareholders happy.
The fact that LLMs make up completely mad shit is proof positive that they aren't just regurgitation machines.
GPT-2 did produce complete nonsense - sometimes grammatically correct nonsense, but if it ever produced a factually correct sentence, it did so more by luck than intention. Five years later, we're criticising GPT-4 because it occasionally produces some mad bullshit or says something totally inappropriate. The fact that it's usually right about most things seems to evoke little more than a shrug, but that's an incredible engineering achievement.
>>5214 >The fact that LLMs make up completely mad shit is proof positive that they aren't just regurgitation machines.
No, it's proof positive that they are regurgitation machines. To wit, they regurgitated any old garbage that happened to be valid language, even if it made no sense. And still do. The clue is in the name. They're language models, not knowledge models.
Let me guess - you don't know how to do matrix multiplication, you've never read Attention Is All You Need, you've never installed PyTorch, but you're confidently asserting that LLMs just regurgitate things because you read it in a tweet or a blog article.
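Not that it takes a PhD, mind. The mechanism the paper is named after boils down to a handful of matrix multiplications. A minimal PyTorch sketch of scaled dot-product attention:

import torch

def attention(Q, K, V):
    # Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V
    # The core equation of "Attention Is All You Need".
    d_k = Q.size(-1)
    scores = Q @ K.transpose(-2, -1) / d_k ** 0.5
    return torch.softmax(scores, dim=-1) @ V

# Toy example: a sequence of 4 tokens with 8-dimensional embeddings.
Q, K, V = (torch.randn(4, 8) for _ in range(3))
print(attention(Q, K, V).shape)  # torch.Size([4, 8])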
>>5216 They're regurgitation machines in the same sense as you regurgitated a load of bollocks to make it sound like you have at least some vague idea what you're talking about.
Maybe come back when you understand what a "stochastic parrot" is.
Ask GPT-4 a question like "All fleebs are glops. No glops are trobs. Are any fleebs trobs?". GPT-4 will get questions like this right vastly more often than chance, despite the lack of anything in the training set that would suggest a statistical relationship between fleebness and trobness. A stochastic parrot cannot do this by definition.
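You can run the test yourself in a couple of minutes. A rough sketch using the OpenAI Python client (assumes an API key in your environment; swap in your own made-up words to rule out memorisation):

# Five-minute stochastic parrot test via the OpenAI Python client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{
        "role": "user",
        "content": "All fleebs are glops. No glops are trobs. "
                   "Are any fleebs trobs? Answer yes or no, then explain.",
    }],
)
print(resp.choices[0].message.content)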
People who actually understood the maths knew from the outset that the stochastic parrot theory was provably wrong, which is why Google didn't want Gebru's paper to be published and why she was sacked for having a massive teary about it and playing the race card. People who didn't understand the maths bandied the theory about back when disproving it required proper maths, because the SOTA LLMs of 2021 had very limited abstract reasoning abilities. Embarrassingly, some people are still advancing it when you can disprove it for yourself in five minutes.
GPT-4 can reliably do things that a purely correlational model cannot. Large enough language models do understand and can think, for any reasonable definition of those words.
>>5218 Otherlad here. I don't know about LLMs, but while the training data wouldn't include fleebs and glops, it would include a lot of 'all men are mortal, Socrates is a man...', 'all cats are animals, dogs are animals...'. Isn't it just regurgitating those statements with the categories changed to 'fleeb'?
>>5218 >Embarrassingly, some people are still advancing it when you can disprove it for yourself in five minutes.
It's funny, because rather than being disproven, it has been proven in court, multiple times (because apparently lawyer is not a learning animal).
The stochastic parrot argument says that LLMs are just guessing at the next word based on statistical patterns in the training set, without any understanding of the meaning of those words. If this were the case, GPT-4 would be guessing at the answer; it'd give a plausible-looking explanation, but it'd only be right half the time. It has no way of guessing whether the answer should be yes or no based purely on the format of the question - fleebs aren't trobs, but Socrates is mortal.
GPT-4 actually does much better than chance on these types of questions. It also does better than chance on a range of much more difficult questions that we know aren't in the data it was trained on. It can't just be guessing and must have some kind of abstract representations encoded within the model. Through experimental methods, we can actually see how this happens. We can train a language model to perform a new task, then probe the model and identify where and how these representations are encoded.
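To give a flavour of what "probing" means in practice, here's a toy sketch, assuming GPT-2 via the transformers library and scikit-learn for the probe. The animal/non-animal task is invented purely for illustration:

# Toy linear probe: is "subject is an animal" linearly decodable
# from a small LM's hidden states? (Task invented for illustration.)
import torch
from transformers import GPT2Model, GPT2Tokenizer
from sklearn.linear_model import LogisticRegression

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")

sentences = ["The cat sat down.", "The dog ran off.",
             "The rock fell over.", "The chair tipped back."]
labels = [1, 1, 0, 0]  # 1 = animal subject, 0 = inanimate subject

feats = []
for s in sentences:
    with torch.no_grad():
        out = model(**tok(s, return_tensors="pt"))
    # Mean-pool the final-layer hidden states into one vector per sentence.
    feats.append(out.last_hidden_state.mean(dim=1).squeeze().numpy())

probe = LogisticRegression().fit(feats, labels)
# If a probe this simple generalises to held-out sentences, the property
# must be encoded somewhere in the model's representations.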
>Goldman Sachs researchers also say that AI optimism is driving large growth in stocks like Nvidia and other S&P 500 companies (the largest companies in the stock market), but say that the stock price gains we’ve seen are based on the assumption that generative AI is going to lead to higher productivity
>"using our new long-term return forecasting framework, we find that a very favorable AI scenario may be required for the S&P 500 to deliver above-average returns in the coming decade.”
>What this means in plain English is that one of the largest financial institutions in the world is seeing what people who are paying attention are seeing with their eyes: Companies are acting like generative AI is going to change the world and are acting as such, while the reality is that this is a technology that is currently deeply unreliable and may not change much of anything at all.
>Jim Covello, who is Goldman Sachs’ head of global equity research, meanwhile, said that he is skeptical about both the cost of generative AI and its “ultimate transformative potential.”
>Covello then likens the “AI arms race” to “virtual reality, the metaverse, and blockchain,” which are “examples of technologies that saw substantial spend but have few—if any—real world applications today.”
It can be summed up much more succinctly: it's a bubble.
Nvidia's technology might be wonderful, but trading at a market value of over 70 times revenue, for what is already a large company, is the definition of a speculation bubble. There is no return on that investment unless you expect either a bigger fool to buy you out, or you expect Nvidia to have a revenue comparable to Germany's. The best thing they can do for the company is to ignore their market cap and investors and, when the price dips, inform them that they aren't owed anything.
To quote the Sun Microsystems CEO:
"At 10 times revenues, to give you a 10-year payback, I have to pay you 100% of revenues for 10 straight years in dividends. That assumes I can get that by my shareholders. That assumes I have zero cost of goods sold, which is very hard for a computer company. That assumes zero expenses, which is really hard with 39,000 employees. That assumes I pay no taxes, which is very hard. And that assumes you pay no taxes on your dividends, which is kind of illegal. And that assumes with zero R&D for the next 10 years, I can maintain the current revenue run rate. Now, having done that, would any of you like to buy my stock at $64? Do you realize how ridiculous those basic assumptions are? You don’t need any transparency. You don’t need any footnotes. What were you thinking?"
>>5226 Yeah, but rich people / funds have to do _something_. They obviously can't spend the money to improve the world, or even humanity's lot. It's got to be invested, and make a return. Where's least-worst? Shovel it there. Hope that everyone else in the same position has the same thought, preferably after you did.
When we hand this shit over to AI, up the tempo even further and reap the inevitable carnage, I'll be in my shed-bunker.
>>5226 Working with his assumption that you need to make a 100% return in a decade, he could just pay no dividends and grow the company by 7% every year. Not so fantastic any more when you don't start out with intentionally ridiculous constraints.
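The compounding is trivial to check:

# 7% annual growth compounds to roughly a doubling over ten years.
print(1.07 ** 10)  # 1.967..., i.e. just shy of a 100% return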
>>5226 >Nvidia's technology might be wonderful
In what sense? Frankly it seems like even more of a waste of megawatts than my, or anyone else's, gaming PC. As for the software side of the equation, one of the big reasons Goldman Sachs are saying it's a bubble is because there's still no worthwhile use case for generative AI. Writing copy for Etsy pages isn't a multibillion dollar industry and it never will be.
I'm not saying it is wonderful; that's actually irrelevant to my point.
If it means you only need to hire 4 analysts to do a job instead of 5, because they all need to do a bit less busy work (just proofread the robot), that is a meaningful thing you can sell (to answer your question anyway). But what I am saying is that even in the best case scenario, it is still, by any sensible metric, overvalued as a stock. 70 times revenue might be a sensible valuation for a start-up with new tech and a garage office; it is not for what is already a cash cow company. If I have one pound I can easily double my money; if I have a million it is more difficult.
Even if it expands fully to take over the whole sector, that couldn't be worth that many times more than what it is already doing, which is being the dominant supplier of graphics cards and chips globally. Basically, if you assume they are already more or less in every phone and PC worldwide (I know they aren't, but for these purposes they might as well be) and they are currently making x amount from that, they aren't going to be making 70x by being in every phone and PC twice. The details of the technology itself are irrelevant to a valuation, because the numbers here are so wildly outside normal business valuation and growth expectations.
Nvidia's quarterly revenues have increased by 265% year-on-year. Their data center revenues are up by 409%.
If you've heard of a technology company, they've probably written a billion-dollar cheque to Nvidia in the last year. The hyperscalers aren't ordering thousands of GPUs, they're ordering hundreds of thousands of data centre accelerators at $40k a pop. They're buying warehouses worth of silicon. They're being bottlenecked by the electricity grid and cutting deals with nuclear power plants and hydroelectric dams.
AI might be a goldrush bubble, but Nvidia are selling the picks and shovels.
>there's still no worthwhile use case for generative AI
What are all those artists and journos so bumsore about it for, then? This seems to me a lot like the cognitive dissonance you hear from twitter wankers about their political bogeymen: they are at once all-powerful and a looming threat to the safety of the world as we know it, and yet completely incompetent and can't do anything without shooting themselves in the dick.
Is generative AI killing every creative industry stone dead and putting artists on the scrap pile of human obsolescence, or is it a pile of shit that will never be useful for anything? Which one is it?
>>5236 It will replace artists and journalists without actually growing the art-and-journalism industry. It's like how when people started using travel websites instead of going to travel agents, they didn't start booking ten times as many holidays. They booked the same number of holidays as before, just without the involvement of any humans.
But that will grow profits because you won't need to pay anyone. That's the whole point, is it not? More money to go directly into the pockets of investor leeches.
There's a similar double bind regarding the creativity of LLMs. I hear lots of people arguing that LLMs are incapable of creativity because they just regurgitate things that humans have written, while also arguing that LLMs are dangerously untrustworthy because if they don't know something then they'll just concoct a highly plausible fiction. Either argument is reasonable in isolation, but they're obviously contradictory - you can choose to believe one or the other, but if you believe both then you're an idiot.
>>5239 >but they're obviously contradictory - you can choose to believe one or the other, but if you believe both then you're an idiot.
They concoct fiction in the sense that they regurgitate convincing-sounding things humans have written in other contexts, things that either don't apply or were simply wrong to begin with.
>>5239 You're arguing semantics. Pulling someone up because they say "AI will concoct a plausible fiction" is not them copping to generative AI having real intelligence and creativity; it's that people lack the language to describe AI's "best guess" attitude to reality. If an LLM has had enough examples shoved into it, it can tell you "grass is green", but if a breaking news story is taking place and people on social media are speculating, joking and just flatly lying about what's happening, any LLM you query about it will not have a clue.
One big reason the language is lacking is because the salespeople behind generative AI have had free rein to dictate it. I personally contest the idea that generative AI generates anything or has any intelligence. However, we'd be here literally all day if I did, so on this point I just accept the need to eat shit and move past it, as unhappy as that leaves me.
>>5236 >Is generative AI killing every creative industry stone dead and putting artists on the scrap pile of human obsolescence, or is it a pile of shit that will never be useful for anything? Which one is it?
I'm going to do my best to convince you that there's nothing contradictory in these ideas. It sounds like you might be bringing your own bag of spuds regarding journos and artists, so maybe this is of no interest to you, but I really haven't written this scrawl as anything other than an earnest attempt to change your mind.
A good example of something harmful and pointless garnering widespread adoption would be the infamous "pivot to video". Companies became convinced that video content uploaded directly to social media and video streaming sites was the future. They laid off the staff that made them popular to begin with, cut their own websites out of the equation and, in most cases, ended up poorer for it or folding entirely. While this was happening, Facebook were caught lying their arses off about how much attention videos actually got, which strongly suggests the reason companies were convinced to do this never existed to begin with. It was all bullshit from the start.
Regarding AI and the specific examples you use, journalism and artistry, I just don't see how AI can reasonably do either job. It goes without saying that generative AI can't follow the PM across the Atlantic as he attends a NATO summit, it can't go to the new restaurant in town to try the menu and interview the owner, and it can't stress test a GPU and create comparative charts with the results. But, they say, it can take all that primary reporting and aggregate it. Except it can't, can it? Because just this weekend your second favourite website Twitter had a problem with exactly that, when its own LLM-BS merchant claimed that it wasn't former President Trump who'd been shot, but Vice President Kamala Harris. This barely mattered because it was such a big news story that everyone knew it was untrue. However, if it had been a smaller story, one where maybe the primary sources weren't in English, that kind of cock-up could create a whole raft of confusion and misinformation.
All that would be evidence enough, but another 404 Media story I read recently (and I'll link below) highlights an emerging problem with "AI journalism". The Unofficial Apple Weblog, TUAW for short, was exactly what it sounds like. It had been defunct for years until, recently, a Hong Kong based company bought the domain. Then the old writers' profiles were puppeteered by the new management to make it look like they were once again writing articles. Of course they weren't; generative AI was being used instead. Worst of all, the articles that had been written years ago were rewritten and replaced with AI nonsense. This is not the only time this has happened, nor will it be the last, so if you were thinking "well, who cares about TUAW's coverage of the iPhone 5's release", that's beside the point. This will affect something you care about, or are going to care about, at some point in the future, and it will do so for the worse.
You could probably guess by now, if any cunt has read this far, that I think generative AI could harm both artists and art without making anything better for anyone. Obviously, generative AI can't be another van Gogh. The technical mastery and the roiling, tempestuous ocean of passions and agonies within that man can't be captured by Stable Diffusion tracing over Starry Night and smudging it a bit. We here all know this; only a cretin would contest otherwise. But if we go to the more utilitarian end of the art world, the graphic designer for example, I don't see things being much better. Okay, AI can spit out a logo. But it can't develop an entire brand image, it can't make the tiny changes on request that would make it just right, and it's going to seriously struggle to offer seasonal revisions, say if someone wants a design for a Christmas menu for the new restaurant they've opened in town. However, none of that means that the people in charge, be they C-suite execs or small business owners, won't be hooked on the hype and try it anyway. Here is one realm where the online phenomenon of "enshittification" can, through AI, begin to collide with the brick and mortar world. No one wants to see that indelible AI-ness make its way into the streets, but it could well happen anyway. Another example of real world enshittification is McDonald's attempted roll-out of what it called "automated order taking", which hasn't worked and is set to be nixed by the end of this month.
Anyway, now I'm getting completely away from the point, which is that just because something is dogshit and a bad idea, doesn't mean that the money men won't try it anyway. Even if, in the long-term, it doesn't actually make any money and everything ends up worse as a result.
I welcome a long post on the subject, actually, and I probably agree with more of what you're saying than you'd expect. I think no matter how it turns out it's a pretty fascinating subject. My point of view on it is between the two extremes, and the point of my post last night was mostly just to highlight the contradiction inherent in some people's completely hysterical doomsaying on the matter.
The way I see it, it isn't going to replace artists, it isn't going to replace writers, it isn't going to kill any industries or take away anyone's livelihoods, and even where it does have an impact, those are already the kind of careers that are completely shut off to the average pleb. So it's very hard to truly care. Journalism is impossible to get into if you're not a rich kid who can afford to do 2-3 years interning completely unpaid. Same with marketing design and concept artist gigs. The music industry is the most fucked of all, there's a reason the music business no longer gives us working class heroes like the Beatles and is instead infested by nepo kids like Ed Sheeran and Miley Cyrus.
Beyond all that, it will be a tool to enhance the productivity of humans, and little more. Those who embrace it will reap benefits from it, but it will never replace humans outright. It will have the same impact as going from analogue tape recording to Pro Tools did. It will have the same impact as using Photoshop instead of a bunch of arcane black magic in a darkroom. There are still plenty of boomers out there bemoaning even those advancements as the death of their respective industries, but largely, everyone else saw the practical advantages and adopted them.
The money men won't get what they want, because people will realise all too quickly that it's shit, and the bottom will fall out of it. But what they are attempting is only the same thing they have always aimed for since the dawn of the industrial revolution. That is nothing new. The only new part is the demographic of hipsters suddenly waking up and realising it.
>>5236 >Is generative AI killing every creative industry stone dead and putting artists on the scrap pile of human obsolescence, or is it a pile of shit that will never be useful for anything? Which one is it?
>>5239 >Either argument is reasonable in isolation, but they're obviously contradictory
Two things can be true at the same time.
Generative AI is shit, but it's still putting people out of jobs. The problem isn't that AI can do your job. The problem is that your boss may be stupid enough to think that.
>It will have the same impact as going from analogue tape recording to Pro Tools did.
Pro Tools completely cut off the career path for new recording engineers. Tape machines were fussy and finicky, so there were always at least two people in the control room - the engineer and the tape operator. The tape op would come in early to clean the heads and adjust the bias and get all the reels prepared, they'd stay in the session to set up cues and in case the recorder went wrong, but they'd spend most of their time just watching and listening and learning. It was a natural apprenticeship.
In the early years of Pro Tools you'd have the tape op running the digital workstation, but most engineers quickly learned that it was easier to just work everything themselves. Suddenly there was little reason to have a junior person just hanging around in the studio and certainly no reason for the label to pay their wage. The top recording engineers in the late 90s were still the top recording engineers 20 years later, because the bottom rung of the ladder had been removed; anyone who wanted to get into the industry after Pro Tools needed independent wealth and preferably a lot of connections.
Yes, but on the flip side there's no longer any need to want to be a studio recording engineer. You can have all the kit to do it yourself, at home, in your spare bedroom. The industry is harder than ever to make a living in, but it is easier than it's ever been for an artist to record their music to professional standards for basically nothing.
What we are seeing/will see is not jobs having their professional viability snatched away from them by perfidious forces of automation, but jobs that were really only ever professionally viable because of the constraints of technology and restricted knowledge, like medieval guilds guarding the secret of how to cut stone at an angle or whatever. People who set themselves up in an economic niche that was, in essence, artificial, and who have a vested interest in rejecting change in order to preserve that niche. Quite literally modern-day Luddites.
You are inaccurately blaming Pro Tools for what is really a small, specific example of the process of class struggle. On a larger scale that's the reality people are waking up to because of AI, but they are too short-sighted to see it. Pro Tools didn't take away those jobs; the industry would have cut those jobs in an instant as soon as any comparable excuse came along. The business incentives have been the same since the start, and it is those practices and incentives which are to blame, not the technology.
>>5245 My job is not AI-proof in any way. I do IT support, where I explain various technical concepts to people, and often encourage them to do it themselves for some reason. It is exactly the sort of job AI can do. How do you renew an SSL certificate on your website? I can tell you, and AI can tell you just as well as I can. So the big promise of AI is that I can let AI do this and focus on more technical things. But, and maybe this is just my atrocious dead-end job, I could already be doing more technical and complicated things if my job was willing to train me to do them, and it isn't. The economy doesn't want me to move up, but the job I do now is no longer necessary. I guess I could teach myself more complicated things, but I could be doing that right now, and I'm not. AI is going to get rid of my job, but it's not going to help me get a better one instead. The people at the top will be untouched, just like I was with various other technological advances, but with each new development, the number of people whose jobs are still desirable and well-paying shrinks a little bit more.
>The people at the top will be untouched, just like I was with various other technological advances, but with each new development, the number of people whose jobs are still desirable and well-paying shrinks a little bit more.
Thus it has always been. What are we going to do about this, then, comrade?
Seize the means of image generation, that's what. Under no pretext should GPUs be surrendered; any attempt to constrain LLMs must be frustrated, by force if necessary.
I need to make this prediction now before I forget, because it’s definitely, definitely going to come true:
In a few years, it will be possible for an entire social media feed to be entirely AI-generated. That’s probably possible now, to be honest. So at some point, people will leave the big social media sites with people on, and set up their own social media sites with nothing but AI posts. You can download your own social media website, and customise it to only have pictures and posts that you like. There will be thousands of these curated sub-Facebook wastelands, and it will be even more dystopian than things currently are. Never mind algorithms; you will be able to just ask for an infinite scroll of posts about Jeremy Corbyn, and get it.
> Never mind algorithms; you will be able to just ask for an infinite scroll of posts about Jeremy Corbyn, and get it.
And it won't stop there. You'll be able to have AI generate an entire social media shitstorm to sway real-life public opinion on an issue.
You almost miss the simpler times when Microsoft's inept Twitter AI turned racist within a day of being launched. At least then it was clear that people were just fucking about with technology. Nowadays, the line between AI and objective reality is becoming increasingly blurred.
On the bright side, at least now you can blame it on AI. You no longer have to face the abject human stupidity of Twitter head on and acknowledge just how fecking bleak things already looked for our species; you can tell yourself this comforting story that it's all the big tech baddies who ran us like the Pied Piper into Dystopian Timeline 2-B Alpha, instead of the one we were meant to get about oil running out and rising sea levels.
What "system" do you speak of? Pre-AI, or pre-Internet altogether? My attitude isn't nihilism, just as far as I can see AI is doing little but accelerating trends that were already clear as day ever since mass adoption of smartphones.
I don't get why generative AI is still so hilariously bad at a lot of imagery. You can half understand why it struggles to depict human hands, but it shouldn't be that way with things like text-containing signs.
Older image generation models can't spell, because they have no internal representation of text. They're just manipulating pixels, so they can do a decent enough job of representing individual letters and the overall shape of words, but the text itself is invariably gibberish.
That proved fairly easy to fix by combining a diffusion model to manipulate pixels with a transformer model to represent text. Stable Diffusion 2 couldn't spell, but SD3 can spell fairly reliably. There are still a lot of people using inappropriate models that are well behind the state of the art, either through ignorance or because "ha ha AI is dumb" gets clicks.
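You can check for yourself. A minimal sketch using the diffusers library, assuming a GPU with enough VRAM and access to the public SD3 medium checkpoint:

import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3-medium-diffusers",
    torch_dtype=torch.float16,
).to("cuda")

# Quoted text in the prompt is the classic spelling stress test.
image = pipe(
    prompt='a pub chalkboard sign that says "QUIZ NIGHT TUESDAY"',
    num_inference_steps=28,
).images[0]
image.save("sign.png")

Run the same prompt through an SD2-era model and you'll get the gibberish everyone loves to screenshot.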
>>5253 See, this is the problem. When you said, sarcastically, "big tech baddies", I assumed you understood that history extended back further than 18 months. Clearly I was mistaken, because if you did, you'd recognise that the "move fast and break things" attitude, a credulous technophilia and laissez-faire regulation have not just allowed the most imperfect version of the tech sector possible to flourish, but have propagated across industries, governments and borders. It's why it was never even a question whether people's personal data could be bought and sold, why the former UK government were a hair's breadth from selling Arm, and why we have dickheads like Bill Gates telling us generative AI will stop climate change.
But you realise the big tech baddies are just a product of a system that was already broken, right? It's not like society would be a utopia if only Steve Jobs had stuck to trimming bum hair for a living or whatever it was he did before the first Apple kit took off. This is not a hyperbolic argument: we live in a society where the fluctuations of an imaginary number on a stock market somewhere are responsible for basically every decision anyone makes, and that's why big tech acts the way it does. Are you able to see the forest for the trees, or are you just going to hyper-fixate on this one piece of the bark?
>Hyper-fixate
Fuck off. Sorry I didn't write a one-hundred page document explaining why stock market dominated western capitalism is a flawed system of running the economy that gives undue power to the whims of said stock markets, whims that aren't actually whims at all, but more like the reflexive panic of a flock of startled birds. Sorry I didn't do that so one stranger on the internet could reply "interesting stuff" and then the post could be buried forever on a website no one visits. When Purpz starts handing out PhDs maybe I'll reconsider, but I don't feel like spending the whole of my Saturday writing something for nothing.