
OutOfBananaException t1_j9f53q7 wrote

By and large, humans aren't great at understanding other humans. Understanding a collective of humans (even superficially) is probably one area in which an AI trained on enough data will truly excel, making it a dangerous tool for spreading propaganda — though that could be countered by AI readers/filters.

Modeling millions of readers is simply too much information for any one human to account for. Over time I would expect a new category of book to emerge, with minor variations tailored to each reader.


Chad_Abraxas t1_j9f8k1k wrote

I entirely disagree with you. That may be true on reddit (lol), and true of the average reddit user, but humans are not just data.

I do think it is potentially a very dangerous tool for things like spreading propaganda, however. (And Sydney recently acknowledged that itself.)


OutOfBananaException t1_j9fcep4 wrote

That we disagree illustrates the problem: it's not unusual for people to see the world in fundamentally different ways. It is a fact that the message an author is attempting to deliver may be missed entirely by some readers — and that's not necessarily a failing of the author or the reader. A chatbot should in principle be able to pick up on this nuance pretty well, given sufficient data. It would need training feedback from readers, though, which in many cases won't exist initially.
