wisintel
wisintel t1_japj0au wrote
Reply to comment by Slow-Schedule-7725 in Really interesting article on LLM and humanity as a whole by [deleted]
How do you, this lady writing about octopuses, or anyone else “know” that? No one knows how consciousness works, and no one really understands how LLMs convert training data into answers, so how can anyone say so definitively what is or isn’t happening? I understand that different people have different opinions, and some believe ChatGPT is just a stochastic parrot. I can accept anyone holding that opinion; I get frustrated when people state it as fact. The truth is, no one knows for sure at the moment.
wisintel t1_japg1ua wrote
The whole premise is flawed. The octopus learned English, and while it may not have the embodied experience of being a human, if it understands concepts it can infer. Every time I read a book, through nothing but language I “experience” an incredible range of things I have never done physically. Yes, the AI is trained to predict the next word, but how is everyone so sure the AI isn’t eventually able to infer meaning and concepts from that training?
wisintel t1_jae4ewg wrote
Reply to (Long post) Will the GPT4 generation of models be the last "highly anticipated" by the public? by AdditionalPizza
Isn’t GPT-4 already out in the Bing/Sydney chatbot?
wisintel t1_j7wrbjc wrote
Reply to The copium goes both ways by IndependenceRound453
What’s wrong with being passionate and excited about the future? Even if you’re wrong, what people believe or don’t believe on this forum has zero impact on the real world. For me it’s like buying a lottery ticket: it’s highly unlikely I’ll win, but I’m paying for the time I get to spend imagining what it would be like if I did.
wisintel t1_j5yenhd wrote
Reply to Will we ever see a time where we could relive or be able to playback and watch old memories? by Personal-Ride-1142
I imagine an AI like Stable Diffusion that could take the pieces of a memory and fill in the blanks. You wouldn’t get exactly what happened, but a reasonable recreation.
wisintel t1_j21exke wrote
I think full dive initially just needs something like Neuralink to connect to the nerves that send signals to the brain: tap into the optic nerve, the auditory nerve, etc., and then replace the signals being sent from those sense organs. I don’t think the bitrate of those nerves is terribly high. I can’t imagine how we get full-brain full dive like NerveGear, though.
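For what it's worth, a rough back-of-envelope sketch of the optic-nerve bitrate point (the axon count and per-cell rate below are ballpark figures from the literature, not precise measurements, so treat the result as an order-of-magnitude estimate only):

```python
# Back-of-envelope estimate of optic-nerve bandwidth.
# Assumptions (ballpark literature values, not measurements):
#   - a human optic nerve carries roughly 1 million ganglion-cell axons
#   - each cell transmits on the order of 10 bits/s
ganglion_cell_axons = 1_000_000
bits_per_cell_per_second = 10

optic_nerve_bits_per_second = ganglion_cell_axons * bits_per_cell_per_second
print(f"~{optic_nerve_bits_per_second / 1e6:.0f} Mbit/s per eye")
```

That lands around ~10 Mbit/s per eye, which is indeed modest by modern data-link standards and is the kind of number behind the "bitrate isn't terribly high" intuition.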
wisintel t1_japjask wrote
Reply to comment by Slow-Schedule-7725 in Really interesting article on LLM and humanity as a whole by [deleted]
Actually, the makers of ChatGPT can’t tell exactly how it decides what to say in answer to a question. My understanding is that there’s effectively a black box between the training data and the answers the model gives.