
chrisc82 t1_j8bfvnd wrote

This is incredible. If it can understand the nuances of human interaction, how many more incremental advances does it need to perform doctorate level research? Maybe that's a huge jump, but I don't think anyone truly knows at this point. To me it seems plausible that recursive self-improvement is only a matter of years, not decades, away.

141

Imaginary_Ad307 t1_j8bggpq wrote

Months, not years.

66

Hazzman t1_j8c6v7u wrote

Here's the thing: all of these capabilities already exist. It's just a matter of plugging the right pieces of technology together. If something like this language model is the user interface, then something like Wolfram Alpha or a medical database becomes the memory of the system.

Literally plugging in knowledge.
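As a rough sketch of what that "plugging in" might look like, assuming entirely made-up names (`KnowledgeBase`, `answer`) rather than any real API:

```python
# Hypothetical sketch: an LLM front end with an external knowledge
# source "plugged in" as the system's memory. KnowledgeBase and
# answer() are illustrative names, not a real API.

class KnowledgeBase:
    """Stands in for Wolfram Alpha, a medical database, etc."""
    def __init__(self, facts):
        self.facts = facts  # term -> snippet

    def lookup(self, query):
        # Naive keyword retrieval; a real system would use search or embeddings.
        return [snippet for term, snippet in self.facts.items()
                if term in query.lower()]

def answer(question, kb):
    """Build a context-stuffed prompt; the actual LLM call is left out."""
    context = kb.lookup(question)
    prompt = "Context:\n" + "\n".join(context) + "\nQuestion: " + question
    return prompt  # a real system would send this to the language model

kb = KnowledgeBase({"ldl": "LDL above 160 mg/dL is considered high."})
prompt = answer("What does my LDL result mean?", kb)
```

The language model never has to "contain" the knowledge; it only has to be handed the relevant snippets at answer time.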

What we SHOULD have access to is the ability to plug in my blood results at home and ask the AI: "What ailments or conditions am I likely to suffer from in the next 15 years? How likely are they, and how can I reduce the likelihood?"

The reason we won't have access to this is 1) it isn't profitable for large corporations, who WILL have access to this with YOUR information, and 2) insurance. It will raise ethical issues around insurance and preexisting conditions, and on that basis they will deny the public access to these capabilities. Which is of course ass backwards.

17

datsmamail12 t1_j8cv4rs wrote

There's always a new developer that can create a new app based on that. I'm sure someone will create something similar soon.

4

urinal_deuce t1_j8czpo8 wrote

I've learnt a few bits of programming but could never find the plug or socket to connect it all.

1

duffmanhb t1_j8d3a5s wrote

> The reason we won't have access to this is

I think it's more that people don't have a place to create a room dedicated to medical procedures?

0

SilentLennie t1_j8cvgfn wrote

I think I'll just quote:

"we overestimate the impact of technology in the short-term and underestimate the effect in the long run"

10

bigcitydreaming t1_j8fo1ig wrote

Yeah, perhaps, but eventually you'll be at the cusp of that massive progression (in relative terms), where it isn't overstated or overestimated.

It might still be years away to reach the level of impact which OP described, but eventually it'll only be months away.

2

Borrowedshorts t1_j8c6nng wrote

It'll be one of the first dominoes, not the last: doctorate-level research, that is. I suspect it will be far better than humans.

6

TinyBurbz t1_j8ckxyq wrote

I asked the same question and got a wildly different response myself.

5

duboispourlhiver t1_j8cnpok wrote

Care to share?

3

TinyBurbz t1_j8cos4o wrote

>Based on Arnold's response of "Great," it can be inferred that he is likely happy or excited about the new addition to his life. Wearing the shirt that says "I love dogs" every time he sees Emily suggests that he may have a positive affinity for dogs, which would likely contribute to his enthusiasm about the adoption. However, without more information or context, it's difficult to determine Arnold's exact feelings towards the dog with certainty. It's possible that he might be surprised or even overwhelmed by the news, but his brief response of "Great" suggests that he is, at the very least, accepting of the new addition to his life.

I used different names when I re-wrote the story.

8

duboispourlhiver t1_j8cp1dk wrote

Thanks! I wonder if some names are supposed to have statistically different personalities linked to them :)

8

amplex1337 t1_j8dmk3h wrote

It's a natural language processor. Most likely it is looking for other "stories" with the names Bob and Sandra for relevance, which will likely outweigh the other assumptions.

2

sickvisionz t1_j8gtyoj wrote

> However, without more information or context, it's difficult to determine Arnold's exact feelings towards the dog with certainty. It's possible that he might be surprised or even overwhelmed by the news, but his brief response of "Great" suggests that he is, at the very least, accepting of the new addition to his life.

That was my interpretation and I got response spammed that I don't understand humans.

1

Economy_Variation365 t1_j8cveo7 wrote

Just to confirm: you used Bing chat and not ChatGPT, correct?

1

TinyBurbz t1_j8eld0n wrote

Both, and I got pretty much the same response, rephrased.

I asked ChatGPT about "theory of mind," and it answered that it has it, as it's critical to understanding writing.

1

amplex1337 t1_j8dl4d3 wrote

It doesn't understand anything; it's a chatbot that is good with language skills, which are symbolic. Please consider that it's literally just a GPU matrix number-crunching language parameters, not a learning, thinking machine that can move outside the realm of known science, which is what a doctorate requires. Man is still doing the learning and curating its knowledge base. Chatbots were really good before ChatGPT as well... it sounds like you just weren't exposed to them.

2

OutOfBananaException t1_j8hiqoo wrote

Give me one example of an earlier chatbot that could code in multiple languages.

2

amplex1337 t1_j8qp89h wrote

ChatGPT doesn't understand a thing it tells you right now, nor can it "code in multiple languages." It can, however, fake it very well. Give me an example of truly novel code that ChatGPT wrote that is not some preprogrammed examples strung together in what seems like a unique way to you. I've tried quite a bit recently to test its limits with simple yet novel requests, and it stubs its toe or falls over nearly every time: basically returning a template, failing to answer the question correctly, or just dying in the middle of the response when given a detailed prompt. It doesn't know "how to code" beyond slapping together code snippets from its training data, just like I can do by searching Google and copy-pasting code from the top SO results. There are still wrong answers at times, proving it really doesn't know anything.

Just because there appears to be some randomness in the answers it gives doesn't necessarily make it "intelligence." The LLM is not the AGI that would be needed to actually learn and know how to program. It uses supervised learning (human curated), then reward-based learning (also curated), then a self-generated PPO model (still based on human-trained reward models) to help reinforce the reward system with succinct policies. It's a very fancy chatbot, and it fools a lot of people very well! We will have AGI eventually, it's true, and while it may seem pedantic because this is so exciting to many, this is not it yet: there IS a difference.
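For what it's worth, the stages named here can be caricatured in a few lines. This is a toy bandit with invented responses and reward values, and it uses plain REINFORCE rather than PPO; real RLHF trains a full language model:

```python
import math, random

# Toy caricature of the pipeline: a supervised starting policy, a
# "reward model" distilled from human preference labels, then
# policy-gradient updates (REINFORCE, not PPO). Responses and reward
# values are invented for illustration.

random.seed(0)
responses = ["helpful answer", "vague answer", "rude answer"]

# Stage 1: supervised starting point -- a uniform policy over responses.
logits = [0.0, 0.0, 0.0]

# Stage 2: scores a reward model might assign after fitting human
# pairwise labels (0 preferred over 1, 1 preferred over 2).
reward = {0: 1.0, 1: 0.3, 2: -1.0}

def probs(logits):
    exps = [math.exp(l) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Stage 3: REINFORCE against the frozen reward model.
lr = 0.1
for _ in range(500):
    p = probs(logits)
    i = random.choices(range(3), weights=p)[0]  # sample a response
    for j in range(3):
        grad = (1.0 if j == i else 0.0) - p[j]  # d log p_i / d logit_j
        logits[j] += lr * reward[i] * grad

p = probs(logits)  # mass should shift toward the highest-reward response
```

The point of the caricature: every signal the policy ever sees traces back to human-curated labels, which is exactly the argument being made above.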

2

OutOfBananaException t1_j8qu042 wrote

I never said it 'knows' or displays true intelligence, only that it performs at a level far above earlier chatbots that didn't come close to this capability.

1

Representative_Pop_8 t1_j8iqgft wrote

What is your definition of "understand"?

What's inside internally matters little if the result is that it understands something. The example shown by OP, and many more, including my own experience, clearly show understanding of many concepts and some capacity to learn quickly from interaction with users (without needing to reconfigure or retrain the model), though it's still not as smart as an educated human.

It seems to be a common misconception, even among people who work in machine learning, to say these things don't know, or can't learn, or are not intelligent, based on the fact that they know the low-level internals and just see the perceptrons or matrices or whatever and say "this is just variables with data." They are seeing the tree and missing the forest. Not knowing how that matrix manages to understand things or learn new things with the right input doesn't mean it doesn't happen. In fact the actual experts, the makers of these AI bots, know these things understand and can learn, but they also don't know why, and are actively researching it.

https://www.vice.com/en/article/4axjnm/scientists-made-discovery-about-how-ai-actually-works?utm_source=reddit.com

> Man is still doing the learning and curating its knowledge base.

Didn't you learn to talk by watching your parents? Didn't you go to school for years? Needing someone to teach you doesn't mean you don't know what you learned.

1

RoyalSpecialist1777 t1_j8kwg8b wrote

I am curious how a deep learning system, while learning to perform prediction and classification, is any different from our own brains. It seems increasingly evident that, while the goals used to guide training are different, the mechanisms of learning are effectively the same. Of course there are differences in mechanism and complexity, but what this last year is teaching us is that artificial deep learning systems do the same type of modeling we undergo when learning. Messy at first, but definitely capable of learning and sophistication down the line. Linguists argue for genetically wired language rules, but really this isn't needed: the system will figure out what it needs and create the rules itself, like the good blank slate it is.

There are a lot of ChatGPT misconceptions going around, for example that it just blindly memorizes patterns. It is a deep learning system (very deep) that, where it helps with classification and prediction, ends up creating rather complex and functional models of how things work. These actually perform computation of a pretty sophisticated nature (any function can be modeled by a neural network). And this does include creativity and reasoning as the inputs flow into and through the system. Creativity as a phenomenon might need a fitness function that scores creative solutions higher (it would be nice to model that so the AI can score itself), and of course it will take a while to get right, but it's not outside the capabilities of these types of systems.
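The parenthetical claim, that any function can be modeled by a neural network, is easy to illustrate with a toy: a one-hidden-layer tanh network, trained by hand-written gradient descent, learning y = x². The architecture and hyperparameters below are arbitrary choices for illustration:

```python
import numpy as np

# Toy universal-approximation demo: a one-hidden-layer tanh network
# fit to y = x^2 with hand-written full-batch gradient descent.
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 64).reshape(-1, 1)
y = x ** 2

W1 = rng.normal(0, 1, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 1, (16, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)   # hidden activations
    return h, h @ W2 + b2      # hidden layer and network output

_, pred0 = forward(x)
loss0 = float(np.mean((pred0 - y) ** 2))  # loss before training

lr = 0.1
for _ in range(2000):
    h, pred = forward(x)
    g = 2 * (pred - y) / len(x)       # dLoss/dPred (mean squared error)
    gh = (g @ W2.T) * (1 - h ** 2)    # backprop through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(0)
    W1 -= lr * (x.T @ gh); b1 -= lr * gh.sum(0)

_, pred = forward(x)
loss = float(np.mean((pred - y) ** 2))  # far below loss0 after training
```

Sixteen hidden units and a few thousand steps are enough here; the same mechanism, scaled up enormously, is what the deep systems in question are doing.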

Anyways, just wanted to chime in, as this has been on my mind. I am still on the fence about whether I believe any of this. The last point is that people criticize ChatGPT for giving incorrect answers, but it is human nature to "approximate" knowledge and thus be incredibly messy. That's partially why learning takes so long.

2

Caring_Cactus t1_j8izqb9 wrote

Does it have to be in the same way humans see things? It's not conscious, but it can understand and recognize patterns; isn't that what humans do early on? Now imagine what will happen when it does become conscious: it will have a much deeper understanding, able to conceptualize new interplays we probably can't imagine right now.

1

Tiamatium t1_j8d698o wrote

Honestly, connect it to Google Scholar or PubMed and it can write literature reviews. Not sure if it's still limited by the same 4,000-token limit, as it seems to go through a lot of Bing results... Maybe it summarizes those and sends them to the chat, maybe it sends whole pages.
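One way such a system might stay under a fixed context window is to trim each search result to a per-page budget and stop stuffing the prompt once the budget runs out. Everything below (the whitespace "tokenizer", the truncating `summarize` stub, the 4,000-token figure) is an assumption for illustration, not how Bing actually works:

```python
# Hypothetical sketch of fitting search results into a fixed context
# window. The token limit, the whitespace "tokenizer", and the
# truncating summarize() stub are all assumptions for illustration.

TOKEN_LIMIT = 4000

def count_tokens(text):
    # Crude stand-in: real systems use a proper tokenizer.
    return len(text.split())

def summarize(page, budget):
    # Placeholder for an LLM summarization call; here we just truncate.
    return " ".join(page.split()[:budget])

def build_prompt(question, pages, per_page_budget=300):
    parts, used = [], count_tokens(question)
    for page in pages:
        summary = summarize(page, per_page_budget)
        cost = count_tokens(summary)
        if used + cost > TOKEN_LIMIT:
            break  # out of budget; remaining results are dropped
        parts.append(summary)
        used += cost
    return question + "\n\n" + "\n---\n".join(parts)
```

Whether the real system summarizes pages or sends them whole, some budgeting step like this has to exist, which would explain the behavior described above.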

1

NoNoNoDontRapeMe t1_j8fj9yi wrote

Lmaoo, Bing is already smarter than me. I thought the answer was Bob liked dogs!

1