Submitted by iingot t3_10jo0l4 in Futurology
Shiningc t1_j5qibie wrote
Reply to comment by natepriv22 in CNET's AI Journalist Appears to Have Committed Extensive Plagiarism by iingot
That doesn’t contradict his claim that “AI is just scraping existing writing”. Human intelligence doesn’t work the same way: at some point, humans know that something “makes sense” or “looks good”, even when it’s completely new, and that is something the current “AI” cannot do.
natepriv22 t1_j5qmutp wrote
It does though...
It's not scraping writing; it's learning the nuances, the rules, and the probabilities of language, much as a human would.
The equivalent example would be a teacher telling you to "write a compare-and-contrast paragraph about topic x". The process of drawing on existing understanding, knowledge, and experience is, at a general level, very similar to how current LLM AIs work. There's a reason they're called neural networks... who and what do you think they're modeled after?
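(A toy sketch of the "learning probabilities" claim: a bigram model counts which word tends to follow which in a sample text, then generates new sequences from those learned probabilities rather than copying the text back. Real LLMs use neural networks instead of raw counts, so this is only a simplified illustration of the statistical idea, and the corpus here is made up.)

```python
from collections import defaultdict, Counter
import random

# Tiny made-up corpus for illustration.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count next-word frequencies for each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Learned probability distribution over the next word."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}

def generate(start, n=5, seed=0):
    """Sample a new sequence from the learned distribution."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        counts = follows[out[-1]]
        if not counts:
            break
        words, weights = zip(*counts.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(generate("the"))
```

The generated sequence need not appear anywhere in the corpus: the model reproduces the statistics of the text, not the text itself.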
Shiningc t1_j5qp1vn wrote
“Compare and contrast paragraphs” have an extremely limited scope; that isn’t general intelligence.
An AI doesn’t know that something “makes sense” or “looks good”, because those are subjective experiences whose workings we have yet to understand. And what “makes sense” to us is a subjective judgment with no guarantee that it objectively makes sense: what made sense to us 100 years ago may be complete nonsense today or tomorrow.
If 1000 humans are playing around with 1000 random text generators, they can eventually figure out what is “gibberish” and what might “make sense” or “sound good”.
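(There is a crude statistical stand-in for the "filtering gibberish" part of that task: score a string by the average log-probability of its character bigrams under counts gathered from some real text. This captures surface fluency only, not the subjective "makes sense" judgment the comment describes; the training sentence below is an arbitrary toy example.)

```python
import math
from collections import defaultdict, Counter

# Arbitrary toy training text; any real English sample would do.
training = "humans can eventually figure out what makes sense and what sounds good"

# Count character-bigram frequencies.
pairs = defaultdict(Counter)
for a, b in zip(training, training[1:]):
    pairs[a][b] += 1

def fluency(text):
    """Average log-probability per bigram; higher means more like the training text."""
    score, n = 0.0, 0
    for a, b in zip(text, text[1:]):
        total = sum(pairs[a].values())
        count = pairs[a][b]
        # Add-one smoothing so unseen bigrams are unlikely, not impossible.
        score += math.log((count + 1) / (total + 27))
        n += 1
    return score / max(n, 1)

print(fluency("what sounds good") > fluency("xqzv kjw qqpt"))  # True
```

A score like this can separate keyboard-mash from fluent text, but it says nothing about whether the fluent text is actually true or meaningful.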