Submitted by Mynameis__--__ t3_1220486 in technology
MotorballPlayer99 t1_jdp9ea3 wrote
They promised a future of convenience and freedom.
Instead, AI is writing poems and creating art while we meatbags get technofeudalism.
BayouMan2 t1_jdpq0vo wrote
At least the feudal period had people who produced and commissioned some beautiful art and buildings.
BODYBUTCHER t1_jdqbqzp wrote
What, you don't like plain white or solid-colored walls everywhere, devoid of any artistic expression?
BayouMan2 t1_jdtfwc2 wrote
Bland, beige office buildings are soul crushing.
Riversntallbuildings t1_jdrmau2 wrote
Hey, come on… Facebook & Google hired graffiti artists to paint the walls of their HQs.
Those are the citizens' new cathedrals of worship. Just like Michelangelo and the Sistine Chapel.
/s
BobRobot77 t1_jdpxyo5 wrote
Really mediocre poems at that.
pohl t1_jdqgizh wrote
It occurred to me the other day that while art is the dumbest possible thing for us to have AI pursue, it makes a certain amount of sense.
Art is subjective. When put to objective tasks, machine-learning algorithms tend to do poorly. They don't mind lying; or rather, they don't have any way to evaluate and value true things. A subjective task is perfect for a thing designed this way.
We don’t need AI art. It’s pointless. It just turns out that making pointless art is probably what this tech is best suited to. Ask them to do anything that can be objectively evaluated and you will be disappointed.
I could be convinced that the whole thing is a smoke-and-mirrors grift. The "art" seems impressive, right? What an expression of individuality! But it's actually just covering up the fact that this entire line of research has led to systems that can't do anything functionally useful. Since most people (myself included) aren't really equipped to evaluate art, we don't notice that it isn't very good at art either.
LawfulMuffin t1_jdqz6rs wrote
In theory, something with an objective outcome should be easier for AI to handle, but it turns out the work required to reach that objective outcome is where the actual value lies, which is what I think a lot of people are missing in this conversation.
RamsesThePigeon t1_jdq115l wrote
I wrote a couple of (not especially good) poems the other day.
There were a small number of users in the comments who immediately accused me of having tasked a glorified algorithm with drooling out what I’d written.
Now, had said accusations been meant as insults, that actually would have been less distressing than the truth, which is that people on the Internet are proving themselves to be even less literate than I’d assumed. I mean, fine, most folks don’t know how to use hyphens, and I’ve come to terms with that… but the idea that a person genuinely can’t tell the difference between human-written text and program-written text really, really saddens me. It’s the equivalent of being unable to distinguish between a chef-prepared meal and something with the Lunchables logo on it.
Chat-bots produce painfully average offerings; works that check all of the surface-level boxes, but that are completely flat. They still scare me, though, because they’re revealing the fact that people who are willing and able to recognize as much are apparently in the minority. Put another way – and reusing the idea of flatness – it’s starting to seem like much of the Web (and therefore the world) is populated by individuals who lack the ability to see in three dimensions… and as such, they’re conflating actual structures with façades.
In short, I’m not worried about ChatGPT ever being better than a real writer; I’m worried about the humans who are letting themselves be convinced that it will be. The façades will eventually fall over, I’m sure – a chat-bot can only ever be as good as the best content fed into it, after all, and it can’t actually innovate – but until that happens, there are going to be millions of people consuming shallow, empty junk and not understanding why they hate reading.
Norci t1_jdq62nu wrote
> It’s the equivalent of being unable to distinguish between a chef-prepared meal and something with the Lunchables logo on it.
> Chat-bots produce painfully average offerings; works that check all of the surface-level boxes, but that are completely flat. They still scare me, though, because they’re revealing the fact that people who are willing and able to recognize as much are apparently in the minority.
I know it sucks not having one's effort as a creator appreciated, but frankly, you're being a bit of a pretentious drama queen here. Your comment with the poem received 2k upvotes, with only a couple of heavily downvoted users implying it was made by ChatGPT (currently a trendy accusation to throw around in reddit comments). Obviously they're in the vast minority, not the ones who recognized your effort.
There's nothing new or controversial here, there have always been arrogant and/or ignorant people online eager to question and diminish professionals and creators, with or without AI. For every topic there are dozens of armchair experts waiting for their chance to jump into the conversation, AI or not.
However, there have always been hundreds of shitty creators too, producing content as bad as, or worse than, the AI-generated kind, cheered on by a cohort of yes-men, so you can't exactly fault people for not being able to tell the difference. I went to an open mic last week, and honestly, AI would produce more enjoyable poetry than some of the people there, if we're talking strictly about structure and content.
Maybe to you, someone skilled in the craft, the difference between a human's poems and an AI's is night and day, but to the average Joe it all looks the same, and that's not their fault; the simple truth is that it takes knowledge and expertise to appreciate skill and effort. I'd bet that if you were shown artwork done by AI and by humans, you wouldn't be able to tell the difference either. But since most people do have experience with food, anyone can tell Lunchables from a professional meal. I know it's an exaggeration, but AI versus human content is nowhere near as obvious to average people.
Mind you, none of this should be shocking news to anyone; AI is not the culprit here. The reality is that many people have no sense of taste or quality. That's why you have garbage content being popular, be it TV shows like Jersey Shore, generic music, r/funny or asset-flip games. It always has been and always will be the case: many people simply don't care about art and won't be able to tell good from bad, AI or no AI. Personally, I'm into board games, and it's the same thing there; the number of people who like Monopoly, despite it being a shitty game all things considered, is massive. Ironically, I too started my journey into the hobby with it.
It's tough being a creator and realizing that most people have no appreciation for your craft, but many do. Unfortunately, that's what decades of shitty media does to a society, and few have the time or energy to learn better, nor do many need to. You just gotta try to filter out the noise and focus on the people who appreciate your art.
RamsesThePigeon t1_jdq83xj wrote
You know, I’d be willing to take that bet.
I don’t think that a person needs to be an expert in order to tell if something has “life” in it; they just need to care enough to look. You suggested as much yourself: It isn’t a lack of experience (or even taste) that causes junk to become popular; it’s apathy, and that same apathy is being enabled by the incredible amount of “content” available to people nowadays.
Granted, you could make the claim that humans just default to gorging themselves on garbage, and it isn’t much of a step to go from there to the idea that predators – be they television executives or designers of Skinner boxes masquerading as games – will rush to exploit that… but even then, on some level, people tend to realize when they aren’t actually enjoying themselves. That realization might take a while to grow from a vague sense of boredom to a conscious conclusion (and a person could very well move on before the transition takes place), but I’m pretty confident that everyone is capable of experiencing it.
Back to my point, though, the fact that the accusations were downvoted isn’t really relevant. What saddens me is the fact that said accusations were made at all. Yes, you’re right, the accusers are just armchair experts who aren’t really qualified to discuss the topic… but aren’t you the least bit bothered by the fact that folks like them are currently shaping the narrative about ChatGPT and its ilk?
Maybe I’m just getting old, but as that same narrative gets increasing amounts of attention, all I can see is a growing audience that’s going to waste a lot of their limited time on feeling dissatisfied.
Ecstatic_Airline4969 t1_jdq2nuj wrote
> Chat-bots produce painfully average offerings;
Tbf your self awareness and writing ability seem significantly less developed than a modern AIs...
RamsesThePigeon t1_jdq2w3i wrote
I’m not sure what you think is wrong with that clause, but maybe that proves your point.
“Self-awareness” needs a hyphen, incidentally, and “AI’s” needs an apostrophe.
Ecstatic_Airline4969 t1_jdq503w wrote
> I’m not sure what you think is wrong with that clause,
This maybe proves my point about the self awareness thing.
RamsesThePigeon t1_jdq5gw9 wrote
Yes, maybe it does.
You might be proving a different point without meaning to, though.
Ecstatic_Airline4969 t1_jdq5vq1 wrote
Really?
gr4ntmr t1_jdq7e0n wrote
one of you get to the fucking point cos I wanna know what's wrong with that clause
RamsesThePigeon t1_jdq8m72 wrote
There’s nothing wrong with the clause.
I was making an indirect joke about the fact that they couldn’t even read a full sentence before responding, which kind of undermines their ability to gauge the quality of someone else’s writing.
I’d still like to know what they thought was wrong with the clause, though!
Ecstatic_Airline4969 t1_jdq9pwv wrote
I can't tell if you're actually this dumb or are just acting impenetrable so you don't have to take on criticism.
Also you seem a bit invested in this what with checking back for replies to other peoples comments, you're a little bit upset about this aren't you? Why?
RamsesThePigeon t1_jdq9vxh wrote
I must be incredibly dumb, then.
If you see something wrong with that clause, please feel free to point it out.
As for the “why,” I already covered that in my original comment. If you’d like, you can read it… or you can keep checking other people’s replies, which it looks like we’re both doing.
“People’s” needs an apostrophe there, by the way.
Ecstatic_Airline4969 t1_jdqaf3p wrote
I pointed it out in my original comment, and it's more a comment on your whole school rather than just the indicative quote I picked out.
I read it.
My lack of effort with my throwaway comments here isn't really relevant to your shite attempt at a well written comment about shite writing. It doesn't take a carpenter to tell you your house is squint. I'm checking back when people reply to me; you seem to be checking back to see if anyone has replied to me. Not the same.
RamsesThePigeon t1_jdqarg5 wrote
In other words, there’s nothing wrong with that clause; you just don’t like the perspective.
For someone intent on avoiding effort, you sure went through a lot of it to avoid admitting that.
You can feel free to get the last word in.
Ecstatic_Airline4969 t1_jdqbbg7 wrote
No, you got that wrong. I think you are just trying to be impenetrable.
>You can feel free to get the last word in.
Lol. "What can I do to prove I'm not quite salty and weirdly invested?", you're quite reactionary, aren't you?
[deleted] t1_jdqp30o wrote
[deleted]
mickeyr2 t1_jdq6yga wrote
I was really expecting this comment to end with “written by chatbot GPT”.
seri_machi t1_jdqqow2 wrote
I agree that it is a bit sad, and a bit scary.
> Chat-bots produce painfully average offerings; works that check all of the surface-level boxes, but that are completely flat.
I encourage you to check out OpenAI's demo page.
I'm sorry, but if you're not wrong about this now, you will be in the next few years. GPT-4 can write some incredible poetry incredibly quickly, and at most all you have to do is edit the poems together and sand them down a bit. There's no reason to think it won't keep improving.
Remember, you too are just a bunch of neurons trained on input, and you can be creative. GPT-4, likewise, can innovate. It can reason its way through a maze, or explain a meme. It can pass the Bar Exam at the 90th percentile. We used to think our knowledge and intelligence made us special and irreplaceable, but we're realizing that maybe we're not. I think we have to admit that. Writing will have to be something you do for the joy of it, not to get others' validation, because there will always be the question now of whether a machine wrote it. I say that as a person who considers themselves a writer, too.
RamsesThePigeon t1_jdrcakx wrote
The comparison to neurons is flawed, and it’s one of the main reasons why this debate is even happening.
Chat-bots do not understand or comprehend. They are physically incapable of non-linear thinking, and increased amounts of data won’t change that; it’s a function of their underlying architecture. They don’t have neurons, nor do they have anything even functionally close. They absolutely do not innovate; they just iterate to a point that fools some humans.
If you consider yourself a writer, then you know that comprehension and empathy are vital to decent writing. Until such time as a computer can experience those (which – to be completely clear – is fundamentally impossible for as long as it’s being built according to modern computing principles), it won’t be able to match anything offered by someone who already does.
Put bluntly, it isn’t doing anything impressive; it’s revealing that the stuff being thrown at it is less complex or reason-based than we have assumed.
Edit: Here’s a great example.
seri_machi t1_jdrvd6f wrote
I'm actually a programmer and at least know the basics of how machine learning works - I took a course in it, as well as one in data science. I do not, on the other hand, know how the brain or consciousness works. Therefore, I am not asserting it can "truly" comprehend or reason or empathize, but I think it can simulate comprehension and reasoning and empathy [pretty darn well from the outside](https://arxiv.org/abs/2303.12712). It's not perfect; it hallucinates and is poor at math, but it's certainly proving our capacity for art/creativity isn't as unique as anyone would have argued... say, four years ago. To me it brings to mind the old aphorism about no art being truly original. My point about neurons was that there's no evidence of a magic spark inside of us that makes us creative; as far as anyone knows, we are just combining and recombining different ideas based on the data we've been "trained" on. There's no such thing as an "original" poem or piece of art (although ChatGPT does an excellent job extracting themes from poems I wrote).
It was only a few years ago that we said [a computer could never win at Go](https://www.google.com/amp/s/www.businessinsider.com/ai-experts-were-way-off-on-when-a-computer-could-win-go-2016-3%3famp), and at the time it would have made you a laughing stock to claim AI would soon be able to pass the Bar exam. The goalposts just keep shifting. You're going really against the grain if you think it's not doing anything impressive. If you've fooled around with ChatGPT and are drawing your conclusions from that, know that ChatGPT was neutered and is not the cutting edge (although it's still very impressive, and I think it's pure contrarianism to state otherwise). Have some imagination for what the future holds, based on the trend of the recent past. We're just getting started, for better and for worse. This field is exploding, and advances are developed in months, not years.
RamsesThePigeon t1_jds0kei wrote
> I'm actually a programmer and at least know the basics of how machine learning works
Then you know that I'm not just grasping at straws when I talk about the fundamental impossibility of building comprehension atop an architecture that's merely complicated instead of complex. Regardless of how much data we feed it or how many connections it calculates as being likely, it will still be algorithmic and linear at its core.
>It can extract themes from a set of poems I've written.
This statement perfectly represents the issue: No, it absolutely cannot extract themes from your poems; it can draw on an enormous database, compare your poems with things that have employed similar words, assess a web of associated terminology, then generate a response that has a high likelihood of resembling what you had primed yourself to see. The difference is enormous, even if the end result looks the same at first glance. There is no understanding or empathy, and the magic trick falls apart as soon as someone expects either of those.
>It wasn't long ago we said a computer could never win at Go, and it would make you a laughing stock if you ever claimed it could pass the Bar exam.
Experts predicted that computers would win at games like Go (or Chess, or whatever else) half a century ago. Authors of science fiction predicted it even earlier than that. Hell, we've been talking about "solved games" since at least 1907. All that victory requires is a large-enough set of data, the power to process said data in a reasonable span of time, and a little bit of luck. The same thing is true of passing the bar exam: A program looks at the questions, spits out answers that statistically and semantically match correct responses, then gets praised for its surface-level illusion.
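To be clear about what "solved" means here: nothing more than exhaustively searching the game tree until every position is labelled a forced win or a forced loss. Here's a toy sketch in Python using a trivially small subtraction game; it's purely my own illustration, nothing any real engine uses, and obviously nothing like the scale of Go, but it shows that the method is brute search plus bookkeeping rather than understanding:

```python
# Toy illustration of "solving" a game by exhaustive search.
# Rules of this subtraction game: players alternate removing 1-3 counters,
# and whoever takes the last counter wins.
from functools import lru_cache

@lru_cache(maxsize=None)
def current_player_wins(counters: int) -> bool:
    """Return True if the player to move can force a win from this position."""
    if counters == 0:
        # No counters left: the previous player took the last one and already won.
        return False
    # A position is winning if some legal move leaves the opponent in a losing position.
    return any(not current_player_wins(counters - take)
               for take in (1, 2, 3) if take <= counters)

# The losing positions turn out to be the multiples of four.
print([n for n in range(1, 13) if not current_player_wins(n)])  # prints [4, 8, 12]
```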
>The goalposts just keep shifting.
No, they don't. What keeps shifting is the popular (and uninformed) perspective about where the goalposts were. Someone saying "Nobody ever thought this would be possible!" doesn't make it true, even if folks decide to believe it.
>You're going really against the grain if you think it's not doing anything impressive.
It's impressive in the same way that a big pile of sand is impressive. There's a lot of data and a lot of power, and if magnitude is all that someone cares about, then yes, it's incredible. That isn't how these programs are being presented, though; they're being touted as being able to write, reason, and design, but all they're actually doing is churning out averages and probabilities. Dig into that aforementioned pile even a little bit, and you won't find appreciation for your poetry; you'll just find a million tiny instances of "if X, then Y."
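If it helps, here's roughly what that "if X, then Y" framing looks like when written out as code. This is a deliberate caricature of my own, not anyone's actual architecture (real models learn their statistics with neural networks over enormous corpora rather than literal lookup tables), but the spirit is the same: pick a statistically likely next word, understand nothing:

```python
# A crude caricature of statistical next-word generation: count which word
# tends to follow which, then extend a prompt by sampling from those counts.
from collections import Counter, defaultdict
import random

corpus = ("the rose is red the violet is blue "
          "the rose is sweet and so are you").split()

# Build the "if X, then probably Y" table from adjacent word pairs.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def continue_text(word: str, length: int = 6) -> str:
    """Extend a starting word by repeatedly sampling a likely next word."""
    output = [word]
    for _ in range(length):
        candidates = following.get(word)
        if not candidates:
            break  # dead end: the word never appears mid-sentence in the corpus
        words, counts = zip(*candidates.items())
        word = random.choices(words, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(continue_text("the"))  # e.g. "the rose is sweet and so are"
```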
Anyone who believes that's even close to how a human thinks is saying more about themselves than they are about the glorified algorithm.
cummypussycat t1_jdqwo78 wrote
Yes. This is the reality.
Straight-Comb-6956 t1_jdrip84 wrote
That's better than an average person can do.