Submitted by Fit-Meet1359 t3_110vwbz in singularity
chrisc82 t1_j8bfvnd wrote
This is incredible. If it can understand the nuances of human interaction, how many more incremental advances does it need before it can perform doctorate-level research? Maybe that's a huge jump, but I don't think anyone truly knows at this point. To me it seems plausible that recursive self-improvement is only a matter of years, not decades, away.
Imaginary_Ad307 t1_j8bggpq wrote
Months, not years.
Hazzman t1_j8c6v7u wrote
Here's the thing - all of these capabilities already exist. It's just about plugging in the correct variants of technology together. If something like this language model is the user interface of an interaction, something like Wolfram Alpha or a medical database becomes the memory of the system.
Literally plugging in knowledge.
What we SHOULD have access to is the ability to plug in my blood results at home and ask the AI, "What ailments or conditions am I likely to suffer from in the next 15 years? How likely are they, and how can I reduce the likelihood?"
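The plumbing for that is already sketchable today. Here's a minimal illustration, where `ask_model` is a hypothetical stand-in for whatever hosted model you can reach, and the lab values are made up (this is obviously not medical software):

```python
# Illustrative sketch only: "ask_model" is a stand-in for a call to
# whatever LLM endpoint you have access to, not a real medical service.
import json

def ask_model(prompt: str) -> str:
    """Placeholder for a call to a hosted language model."""
    return "(model response would appear here)"

# Structured data the model can ground its answer in,
# e.g. exported from a lab portal. All values are invented.
blood_results = {
    "total_cholesterol_mg_dl": 212,
    "hdl_mg_dl": 48,
    "ldl_mg_dl": 141,
    "fasting_glucose_mg_dl": 103,
    "hba1c_percent": 5.8,
}

prompt = (
    "Given these blood test results:\n"
    f"{json.dumps(blood_results, indent=2)}\n"
    "What conditions am I at elevated risk of over the next 15 years, "
    "how likely are they, and how can I reduce the likelihood?"
)

print(ask_model(prompt))
```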
The reason we won't have access to this is 1) it isn't profitable for the large corporations who WILL have access to this with YOUR information, and 2) insurance. It will raise ethical issues around insurance and preexisting conditions, and on that basis the public will be denied access to these capabilities. Which is of course ass backwards.
datsmamail12 t1_j8cv4rs wrote
There's always a new developer who can create a new app based on that. I'm sure someone will create something similar soon.
urinal_deuce t1_j8czpo8 wrote
I've learnt a few bits of programming but could never find the plug or socket to connect it all.
duffmanhb t1_j8d3a5s wrote
> The reason we won't have access to this is
I think it's more that people don't have a place to set up a room dedicated to medical procedures?
SilentLennie t1_j8cvgfn wrote
I think I'll just quote:
"we overestimate the impact of technology in the short-term and underestimate the effect in the long run"
bigcitydreaming t1_j8fo1ig wrote
Yeah, perhaps, but eventually you'll be at the cusp of that massive progression (in relative terms), where it isn't overstated or overestimated.
It might still be years away to reach the level of impact which OP described, but eventually it'll only be months away.
pavlov_the_dog t1_j8c7cce wrote
GPT-4 on the horizon...
Miss_pechorat t1_j8ce71b wrote
Lol, not months but weeks? ;-))
Fit-Meet1359 OP t1_j8biqtf wrote
It certainly has good instincts about human relationships: https://www.reddit.com/r/ChatGPT/comments/110vv25/comment/j8bhzyn/?utm_source=share&utm_medium=web2x&context=3
Borrowedshorts t1_j8c6nng wrote
It'll be one of the first dominoes to fall, not the last (doctorate-level research, that is). I suspect it will be far better than humans at it.
TinyBurbz t1_j8ckxyq wrote
I asked the same question and got a wildly different response myself.
duboispourlhiver t1_j8cnpok wrote
Care to share?
TinyBurbz t1_j8cos4o wrote
>Based on Arnold's response of "Great," it can be inferred that he is likely happy or excited about the new addition to his life. Wearing the shirt that says "I love dogs" every time he sees Emily suggests that he may have a positive affinity for dogs, which would likely contribute to his enthusiasm about the adoption. However, without more information or context, it's difficult to determine Arnold's exact feelings towards the dog with certainty. It's possible that he might be surprised or even overwhelmed by the news, but his brief response of "Great" suggests that he is, at the very least, accepting of the new addition to his life.
I used different names when I re-wrote the story.
duboispourlhiver t1_j8cp1dk wrote
Thanks! I wonder if some names are supposed to have statistically different personalities linked to them :)
TinyBurbz t1_j8cp2id wrote
That is a possibility.
amplex1337 t1_j8dmk3h wrote
It's a natural language processor. Most likely it's looking for other "stories" containing the names Bob and Sandra for relevance, and those will likely outweigh the other assumptions.
sickvisionz t1_j8gtyoj wrote
> However, without more information or context, it's difficult to determine Arnold's exact feelings towards the dog with certainty. It's possible that he might be surprised or even overwhelmed by the news, but his brief response of "Great" suggests that he is, at the very least, accepting of the new addition to his life.
That was my interpretation too, and I got spammed with replies saying I don't understand humans.
Economy_Variation365 t1_j8cveo7 wrote
Just to confirm: you used Bing chat and not ChatGPT, correct?
TinyBurbz t1_j8eld0n wrote
Both, and got pretty much the same response rephrased.
I asked ChatGPT about "theory of mind," and it answered that it has one, since theory of mind is critical to understanding writing.
amplex1337 t1_j8dl4d3 wrote
It doesn't understand anything; it's a chatbot that is good with language skills, which are symbolic. Consider that it's literally just a GPU matrix number-crunching language parameters, not a learning, thinking machine that can move outside the realm of known science, which is what a doctorate requires. Man is still doing the learning and curating its knowledge base. Chatbots were really good before ChatGPT as well; it sounds like you just weren't exposed to them.
OutOfBananaException t1_j8hiqoo wrote
Give me one example of an earlier chatbot that could code in multiple languages.
amplex1337 t1_j8qp89h wrote
ChatGPT doesn't understand a thing it tells you right now, nor can it "code in multiple languages." It can, however, fake it very well. Give me an example of truly novel code that ChatGPT wrote that is not just preprogrammed examples strung together in what seems like a unique way to you. I've tried quite a bit recently to test its limits with simple yet novel requests, and it stubs its toe or falls over nearly every time: returning a template, failing to answer the question correctly, or just dying in the middle of the response when given a detailed prompt. It doesn't know "how to code" beyond slapping together code snippets from its training data, just like I can do by searching Google and copy-pasting code from the top Stack Overflow results. There are still wrong answers at times, proving it really doesn't know anything. Just because there appears to be some randomness in its answers doesn't necessarily make it "intelligent."

An LLM is not the AGI that would be needed to actually learn and know how to program. It uses supervised learning (human curated), then reward-based learning (also curated), then a self-generated PPO model (still based on human-trained reward models) to reinforce the reward system with succinct policies. It's a very fancy chatbot, and it fools a lot of people very well! We will have AGI eventually, it's true, and while the distinction may seem pedantic because this is so exciting to many, this is not it yet: there IS a difference.
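For reference, the pipeline described there (supervised fine-tuning, then a learned reward model, then PPO against that reward model) can be sketched schematically. This is a toy outline of the control flow with stand-in functions, not OpenAI's actual training code:

```python
# Toy, schematic version of the three-stage pipeline described above:
# 1) supervised fine-tuning on human demonstrations,
# 2) a reward model fit to human preference rankings,
# 3) PPO-style updates of the policy against that reward model.
# Every function here is a stand-in; real PPO uses clipped policy gradients.
import random

def supervised_fine_tune(policy, demonstrations):
    # Stage 1: nudge the policy toward human-written answers.
    for prompt, answer in demonstrations:
        policy[prompt] = answer
    return policy

def train_reward_model(rankings):
    # Stage 2: learn a scalar score from human preference pairs.
    scores = {}
    for better, worse in rankings:
        scores[better] = scores.get(better, 0) + 1
        scores[worse] = scores.get(worse, 0) - 1
    return lambda response: scores.get(response, 0)

def ppo_step(policy, prompts, reward_model, candidates):
    # Stage 3 (very loosely): sample responses, keep the ones
    # the reward model prefers.
    for prompt in prompts:
        sampled = random.sample(candidates, 2)
        policy[prompt] = max(sampled, key=reward_model)
    return policy

policy = supervised_fine_tune({}, [("hi", "Hello!")])
reward = train_reward_model([("Hello!", "go away")])
policy = ppo_step(policy, ["hi"], reward, ["Hello!", "go away"])
print(policy["hi"])  # "Hello!"
```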
OutOfBananaException t1_j8qu042 wrote
I never said it 'knows' or displays true intelligence, only that it performs at a level far above earlier chatbots that didn't come close to this capability.
Representative_Pop_8 t1_j8iqgft wrote
What is your definition of "understand"?
What's inside internally matters little if the result is that it understands something. The example shown by OP, and many more including my own experience, clearly show understanding of many concepts and some capacity to learn quickly from interaction with users (without needing to reconfigure or retrain the model), though it's still not as smart as an educated human.
It seems to be a common misconception, even among people who work in machine learning, to say these things don't know, can't learn, or are not intelligent because they know the low-level internals: they see the perceptrons or the matrix or whatever, and say it's just variables with data. They are seeing the tree and missing the forest. Not knowing how that matrix manages to understand things, or to learn new things from the right input, doesn't mean it doesn't happen. In fact the actual experts, the makers of these AI bots, know these things understand and can learn, but don't know why either, and are actively researching it.
>Man is still doing the learning and curating it's knowledge base.
Didn't you learn to talk by listening to your parents? Didn't you go to school for years? Needing someone to teach you doesn't mean you don't know what you learned.
RoyalSpecialist1777 t1_j8kwg8b wrote
I am curious how a deep learning system, while learning to perform prediction and classification, is any different from our own brains. It seems increasingly evident that while the goals used to guide training are different, the mechanisms of learning are effectively the same. Of course there are differences in mechanism and complexity, but what this last year is teaching us is that artificial deep learning systems do the same type of modeling we undergo when learning. Messy at first, but definitely capable of learning and sophistication down the line. Linguists argue for genetically wired language rules, but really this isn't needed: the system will figure out what it needs and create those rules, like the good blank slate it is.
There are a lot of ChatGPT misconceptions going around, for example that it just blindly memorizes patterns. It is a (very) deep learning system that, if it helps with classification and prediction, ends up creating rather complex and functional models of how things work. These actually perform computation of a pretty sophisticated nature (any function can be modeled by a neural network; see the sketch below). And this does include creativity and reasoning as inputs flow into and through the system. Creativity as a phenomenon might need a fitness function that scores creative solutions higher (it would be nice to model that one so the AI can score itself), and of course it will take a while to get right, but it's not outside the capabilities of these types of systems.
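As a toy illustration of that parenthetical claim (the universal approximation idea), here is a tiny one-hidden-layer network fit to sin(x) with plain numpy and gradient descent. Schematic, and nothing like how ChatGPT itself is trained:

```python
import numpy as np

# Fit a one-hidden-layer tanh network to sin(x) by full-batch
# gradient descent; the error shrinks steadily as it trains.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

hidden = 32
W1 = rng.normal(0, 1, (1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0, 1, (hidden, 1))
b2 = np.zeros(1)
lr = 0.01

for step in range(5000):
    h = np.tanh(x @ W1 + b1)        # hidden activations
    pred = h @ W2 + b2              # network output
    err = pred - y                  # prediction error
    # Backpropagate through the two layers.
    grad_W2 = h.T @ err / len(x)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)
    grad_W1 = x.T @ dh / len(x)
    grad_b1 = dh.mean(axis=0)
    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("final mean squared error:", float(np.mean(err ** 2)))
```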
Anyway, just wanted to chime in, as this has been on my mind. I am still on the fence about whether I believe any of this. The last point is that people criticize ChatGPT for giving incorrect answers, but it is human nature to "approximate" knowledge, which is incredibly messy. That's partially why learning takes so long.
Caring_Cactus t1_j8izqb9 wrote
Does it have to be in the same way humans see things? It's not conscious, but it can understand and recognize patterns; isn't that what humans do early on? Now imagine what will happen when it does become conscious: it will have a much deeper understanding, conceptualizing new interplays we probably can't imagine right now.
Tiamatium t1_j8d698o wrote
Honestly, connect it to Google Scholar or PubMed and it can write literature reviews. Not sure if it's still limited by the same 4,000-token limit, as it seems to go through a lot of Bing results... Maybe it summarizes those and sends the summaries to the chat, maybe it sends whole pages.
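That summarize-then-send loop is easy to sketch. A hedged illustration, where `call_llm` is a hypothetical stand-in for any model endpoint and the token budget is approximated by word count:

```python
# Illustrative sketch of a chunk-and-summarize loop for fitting many
# abstracts under a fixed context limit. "call_llm" is a placeholder.

TOKEN_BUDGET = 4000  # the context limit mentioned above

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a hosted language model."""
    return "(summary would appear here)"

def chunk(texts, budget=TOKEN_BUDGET // 2):
    """Group abstracts into batches that fit the context window."""
    batch, size = [], 0
    for t in texts:
        words = len(t.split())
        if batch and size + words > budget:
            yield " ".join(batch)
            batch, size = [], 0
        batch.append(t)
        size += words
    if batch:
        yield " ".join(batch)

def literature_review(abstracts, topic):
    # First pass: compress each batch of abstracts separately.
    summaries = [call_llm(f"Summarize these abstracts on {topic}:\n{c}")
                 for c in chunk(abstracts)]
    # Second pass: synthesize the compressed summaries into a review.
    return call_llm(f"Write a literature review on {topic} "
                    "from these summaries:\n" + "\n".join(summaries))

print(literature_review(["Abstract one...", "Abstract two..."], "LLM reasoning"))
```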
NoNoNoDontRapeMe t1_j8fj9yi wrote
Lmaoo, Bing is already smarter than me. I thought the answer was Bob liked dogs!