Submitted by ryusan8989 t3_yzgwz5 in singularity
botfiddler t1_ix4keuj wrote
Reply to comment by UniversalMomentum in 2023 predictions by ryusan8989
Image generators aren't that much more human-like than some protein-folding simulation AI. They still don't know what anything in the picture means. Both are important, though. Imagine crushing big corporations' oligopoly on content creation. Someone who could make one comic on his own could make five with his characters, much faster. Or he could at some point make an anime based on his characters.
AsuhoChinami t1_ix5r14w wrote
I think AI has to have some kind of understanding in order to perform so well. AI in the past performed poorly because it obviously had poor understanding. I think "AI has no understanding" is kind of an unfalsifiable argument - it's suspect that something with no form of understanding whatsoever could produce such accurate and well-formed results, but it's also something that's impossible to argue for or against.
botfiddler t1_ix6fi2m wrote
Yeah, well, I'd say it understands how the words in the prompt relate to certain image elements and how those relate to each other. Nothing beyond that, though: no physics, no human meaning behind such pictures, ...
CriminalizeGolf t1_ix9p3k7 wrote
https://plato.stanford.edu/entries/chinese-room/
There's a thought experiment in philosophy, the Chinese Room, about the difference between functional understanding and "true" understanding.