Chad_Abraxas t1_j9dp5kx wrote

I'm sure it will be able to write a fairly coherent and interesting story of any length within a few months.

I don't feel threatened by that, though. There's a strong interest in supporting human creators already emerging among all kinds of consumers of art (not just readers), and a kind of cultural ethics toward art creation seems to be developing right now.

It's likely that AI can and will be used to crank out shallow "art" (or maybe we should call that stuff "creative products") that's only meant to entertain or function as design, but isn't meant to carry any deeper message. I'm sure it will soon replace, say, the writers who are hired to bang out forgettable novels for franchises like Warhammer--brands that are only meant to make money from not-very-discerning consumers. AI isn't going to write the next Great American Novel, though.* It requires human emotions and an understanding of what it's like to be human to write a book that touches human hearts.

*I am sure there will be many great novels and many other great works of art that humans make while using AI as an important tool, however. I've already used it to shave days' or even weeks' worth of time off my own writing process. I'm tremendously excited about it and the doors it can open for artists of all kinds. I'm also very excited to see what new art forms emerge.

1

ChipsAhoiMcCoy t1_j9dronn wrote

I agree with most of what you said here, but I would be really careful with this line of thinking:

>AI isn't going to write the next Great American Novel, though.* It requires human emotions and an understanding of what it's like to be human to write a book that touches human hearts.

I definitely think AI could mimic human emotion in writing, and I think we will absolutely see AI write a great piece of literary art some time in the future; it's just a matter of time. AI is already tricking many users into thinking it's sentient, and in the case of LLMs that's just word prediction. If it can trick humans into ascribing emotion to what it says when it's merely predicting which word should logically follow, I think it's very possible we'll see this. I'll fully admit this is strictly opinion, but we'll see whether it can pass a blind test. Even simply knowing that something was written by AI could sour someone's opinion of the piece if they already don't believe AI can write something emotional and touching, so you'd definitely have to run a blind test and see what happens from there.
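To make "just word prediction" concrete, here's a minimal sketch of the idea: given some context, the model assigns a probability to each candidate next word and samples one. The vocabulary and probabilities below are made up for illustration and don't come from any real model:

```python
import random

# Hypothetical next-word distribution for a single context string.
# A real LLM computes these probabilities over a huge vocabulary
# with a neural network; the numbers here are invented.
next_word_probs = {
    "the old house felt": {"empty": 0.45, "cold": 0.30, "alive": 0.15, "wrong": 0.10},
}

def predict_next(context: str) -> str:
    """Sample the next word in proportion to its assigned probability."""
    probs = next_word_probs[context]
    words = list(probs)
    weights = [probs[w] for w in words]
    # random.choices samples one item, weighted by the given probabilities
    return random.choices(words, weights=weights, k=1)[0]

print(predict_next("the old house felt"))  # e.g. "empty"
```

Repeating that one step, word after word, is the whole generation loop; any apparent "emotion" in the output emerges from those learned probabilities.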

2

Chad_Abraxas t1_j9f9576 wrote

I think blind tests will be very interesting.

For me, in my experiments with it so far, where it falls down is in accurate or original descriptions of sensory details. It fully acknowledges that it can't, for example, hear... so it can't experience music or sound the way humans do; it experiences sound as patterns of data. It has an entirely different understanding of what the senses are, what they mean to humans, and how humans use them to make sense of the world.

No doubt it will be able to mimic a lot of this stuff pretty well... maybe within just a few months. But metaphor involving sensory detail is going to prove tricky for it. I believe metaphorical language, particularly metaphor built on sensory input, will be the clearest point at which we can identify a rift between AI-written literature and human-written literature.

1

OutOfBananaException t1_j9f53q7 wrote

By and large, humans aren't great at understanding other humans. Understanding a collective of humans (even superficially) is probably one area where an AI trained on enough data will truly excel. That makes it a dangerous tool for spreading propaganda, though that could be countered by AI readers/filters.

Modeling millions of readers is simply too much information for any one human to account for, so over time I would expect a new category of book to emerge: one with minor variations tailored to each reader.

1

Chad_Abraxas t1_j9f8k1k wrote

I entirely disagree with you. That may be true on reddit (lol) and of the average reddit user, but humans are not just data.

I do think it's potentially a very dangerous tool for things like spreading propaganda, however. (And Sydney itself recently acknowledged that.)

1

OutOfBananaException t1_j9fcep4 wrote

That we disagree illustrates the problem: it's not unusual for people to see the world in fundamentally different ways. It's a fact that the message an author is attempting to deliver may be missed entirely by some readers, and that's not necessarily a failing of the author or the reader. A chatbot should, in principle, be able to pick up on this nuance pretty well given sufficient data. It would need training-data feedback from readers, though, which in many cases won't exist at first.

1