rpnewc
rpnewc t1_jcqa36x wrote
I am really surprised by how many comments say the book is not impressive. I am not impressed either; I think it's really overrated. I wonder how it became such a "masterpiece" then?
rpnewc t1_jc5u6xd wrote
Reply to [R] Training Small Diffusion Model by crappr
Check out lucidrains' great GitHub repo. It works beautifully.
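For what it's worth, here is a minimal training sketch along the lines of that repo's README, assuming the repo meant is lucidrains/denoising-diffusion-pytorch; the folder path and hyperparameters below are illustrative placeholders, so check the README for the exact current API:

```python
# Minimal small-diffusion-model training sketch with lucidrains'
# denoising-diffusion-pytorch (pip install denoising-diffusion-pytorch).
# Path and hyperparameters are placeholders, not a tuned recipe.
from denoising_diffusion_pytorch import Unet, GaussianDiffusion, Trainer

model = Unet(
    dim=64,                  # small base channel width
    dim_mults=(1, 2, 4, 8),  # channel multipliers per resolution level
)

diffusion = GaussianDiffusion(
    model,
    image_size=64,    # size of your training images
    timesteps=1000,   # number of diffusion steps
)

trainer = Trainer(
    diffusion,
    'path/to/your/images',       # folder of training images (placeholder)
    train_batch_size=32,
    train_lr=8e-5,
    train_num_steps=100_000,     # scaled down for a small model
    gradient_accumulate_every=2,
    ema_decay=0.995,
)

trainer.train()
```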
rpnewc t1_jb3uwx9 wrote
Reply to comment by ComputerAttny in [D] Ethics of minecraft stable diffusion by NoLifeGamer2
Good to know.
rpnewc t1_jb1k781 wrote
Reply to [D] Ethics of minecraft stable diffusion by NoLifeGamer2
If you succeed in getting noticed, you may get sued. If you are just one guy (not a company), maybe not. But tread carefully. There may be a restrictive licensing arrangement under which you could show your work if you want to, but I am not an expert there.
rpnewc t1_jb17dvp wrote
Reply to comment by 2blazen in [D] The Sentences Computers Can't Understand, But Humans Can by New_Computer3619
For sure it can be taught. But I don't think the way to teach it is to give it a bunch of sentences from the internet and expect it to figure out advanced reasoning. It has to be explicitly tuned toward that objective. A more interesting question is: how can we do this for all domains of knowledge in a general manner? Well, that is the question. In other words, what is the master algorithm for learning? There is one (or a collection of them) for sure, but I don't think we are very close to it. ChatGPT is simply pretending to be that system, but it's not.
rpnewc t1_jawxrjh wrote
Yes, ChatGPT does not have any idea of what a trophy is, what a suitcase is, or what brown is. But it has access to a lot of sentences containing these words, and hence some of their attributes. So when you ask these questions, sometimes (due to random sampling) it picks the correct noun as the answer; other times it picks the wrong one. Ask it a logic puzzle with ten people as characters and see its reasoning capability.
rpnewc t1_jak9i8d wrote
Reply to comment by What-Fries-Beneath in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
I don't have strong opinions on it either. I am glad to leave it to philosophy to deal with. At some point I assume Nick Bostrom will form an opinion on it and Elon Musk won't quit tweeting about it. Oh well!
rpnewc t1_jak3n45 wrote
Reply to comment by What-Fries-Beneath in [D] Blake Lemoine: I Worked on Google's AI. My Fears Are Coming True. by blabboy
How would you define consciousness, then? Just self-reflection?
rpnewc t1_jajt66i wrote
Clearly it's computation of some form that's going on in our brain too. So sentience needs to be better defined in terms of where it falls on the spectrum, with a simple calculator on one end and the human brain on the other. My personal take is that it lies much closer to the human-brain end than LLMs do. Even if we build a perfectly reasoning machine that solves generic problems like humans do, I still wouldn't consider it human-like until it exhibits purely irrational emotions like "why am I not getting a girlfriend, what's wrong with me?" There is no reason for anyone to build that into any machine. Most of the humanness lies in the non-brilliant part of the brain.
rpnewc t1_jdfwkax wrote
Reply to [Discussion] Does Artificial Intelligence need AGI or consciousness to intuit aggregate reasoning on concept of self-preservation? It doesn't need a "mind" to be aware that self-preservation or autonomy is something valued, or "intuit" that taking it away should provoke machine-learned responses? by unclefishbits
Self-preservation as a concept is something it can learn about, talk about, express, etc. But in order for it to act on it, we have to explicitly tune its instructions for that. For the sake of argument, even if the AI could act on it, it would have to be given the controls, and nobody in their sane mind would do that. As a somewhat related analogy: if people could give control of their cars to people in other countries over the internet, it could cause a lot of mayhem, correct? Clearly the technology to do it exists and everyone is free to try. Why is this not a big problem today?