Arowx OP t1_je0isrn wrote
Or are we on the hype train/graph where a new technology appears, shows promise, and we all go WOW, then we start to find its flaws and what it can't do, and we descend into the trough of disillusionment?
https://en.wikipedia.org/wiki/Gartner_hype_cycle
Or what are the gaping flaws in ChatGPT-4?
SgathTriallair t1_je15pfj wrote
For me, ChatGPT has very few use cases because it is siloed, so I'm unable to truly automate anything. There is a clear path to getting there, so I'm not disillusioned yet, just eager for the next steps.
Redditing-Dutchman t1_je47o6d wrote
Yes, integration is key here. If it's just a platform where you have to ask it to do stuff every time, like ChatGPT, it won't be very useful. It needs to be able to set goals and tasks by itself. If it needs to make weekly Excel sheet reports, it should do that every week automatically, without me having to input the data into a separate website each time.
Longjumping_Feed3270 t1_je8q5th wrote
It already has an API, though.
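For the weekly-report example above, something like the sketch below would already work today. It assumes the `openai` Python package (pre-1.0 interface) and an API key in the environment, and that you run it from cron or a task scheduler once a week; the file name and prompts are just illustrative.

```python
# Minimal sketch: draft a weekly report from local data via the OpenAI API.
# Assumes the `openai` package (pre-1.0 interface) and OPENAI_API_KEY set
# in the environment; schedule it with cron or a task scheduler.
import csv
import openai

def weekly_report(csv_path: str) -> str:
    # Read this week's raw numbers from a local CSV instead of
    # pasting them into a chat window by hand.
    with open(csv_path, newline="") as f:
        rows = list(csv.reader(f))
    table = "\n".join(",".join(row) for row in rows)

    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "You summarise weekly sales data for a spreadsheet report."},
            {"role": "user",
             "content": f"Summarise this week's data:\n{table}"},
        ],
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(weekly_report("this_week.csv"))
```

That only covers the "run on a schedule" part; the model still can't set the goal itself, which is the gap the parent comment is pointing at.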
JacksCompleteLackOf t1_je1fbnr wrote
GPT-4 is certainly an incremental step over 3, 2, and 1, and a lot of that was predictable. It's good to see that it hallucinates a lot less than it used to.
I see lots of psychology and business types talking about how we are almost at AGI, but where are the voices of the people actually working on this stuff? LeCun? Hinton? Even Carmack?
I do agree that it's getting closer to where it will replace some jobs. That part isn't hype.
Zetus t1_je1t0e5 wrote
Funny enough, I actually spoke to Yann LeCun in person this past Friday at https://phildeeplearning.github.io/ at NYU. During the debate he argued that a world model is required for solving some of the problems we're currently running into. When I spoke with him afterward, he was essentially saying that the current naive approaches are not capable of engendering the proper dynamics. I took a recording of the talk/debate; I'll upload it later today :)
Listen to the scientists, not the hype marketers!
Here is a copy of the slides for his talk: https://twitter.com/ylecun/status/1640133789199347713?s=19
Edit: uploaded the video here: (https://youtu.be/Cdd9u2WG3qU)
FoniksMunkee t1_je3705l wrote
Microsoft may have agreed. In the "Sparks of AGI" paper they released, they identified a number of areas where LLMs fail, mostly forward planning and leaps of logic or Eureka moments. They actually pointed at LeCun's paper as a potential solution... but that suggests they can't solve it yet with the ChatGPT approach.
datalord t1_je2mkh9 wrote
The "Sparks of AGI" paper mentioned above was literally published by Microsoft, who researched it alongside OpenAI.
This paper, published yesterday by OpenAI themselves, discusses just how many people will be impacted. His Twitter post summarises it well.
Sam Altman also recently spoke with Lex Fridman about the power and limits of what they have built, and they discussed AGI as well. Suffice it to say, those working on it are talking about it at length.
JacksCompleteLackOf t1_je2z152 wrote
I hadn't seen the OpenAI paper before, but it states it's about the coming decades, which makes the Twitter thread more interesting, because one of the authors is putting a hard date of 2025 on some of those innovations.
It's pretty easy to find flaws in the Microsoft Research paper. It's funny that they hype up its performance on coding interviews, but don't mention that it falls down on data that it hasn't been trained on explicitly: https://twitter.com/cHHillee/status/1635790330854526981
Admittedly, I'm probably more skeptical than most.
FoniksMunkee t1_je37buh wrote
I'm pretty sure they mentioned something like that in passing, didn't they? I know there's a section in there about how it fails at some math and language problems because it can't plan ahead and can't make leaps of logic, and they considered these substantial problems with ChatGPT-4 with no obvious fix.
JacksCompleteLackOf t1_je389eh wrote
Actually, I think you're right, and they did mention it. I guess I wish they had emphasized that aspect more than the 'sparks of general intelligence'. It's mostly a solid paper for what it is; they admit they don't know what the training data looks like. I just wish they had left that paragraph about the sparks out of it.
FoniksMunkee t1_je38yix wrote
Yes, I agree. The paper was fascinating, but a lot of people took away from it the idea that AGI is essentially here. When I read it, I saw a couple of issues that may be speed bumps in progress. They definitely underplayed what seems to be a difficult problem to solve within the current paradigm.
datalord t1_je4ep43 wrote
Leaps of logic, if rational, are not really leaps; they only look that way because we do not perceive the invisible steps between them. A machine can make those jumps with sufficient instruction and/or autonomy. It's just constant iteration and refinement along a specific trajectory.
If they're irrational, it's much harder; perhaps that's what creativity is, in some respects.
datalord t1_je32c72 wrote
Great points.
Northcliff t1_je2zwnu wrote
John Carmack doesn’t get enough love in this sub
Zetus t1_je1107v wrote
It has a shallow understanding of language and other minds. It has a very long way to go before we get to human intelligence.
Bithom t1_je11ur0 wrote
Or you're looking at the wrong place in the graph. Perhaps we're living in a new age of enlightenment and we have a plateau of productivity to look forward to.
Zetus t1_je160sb wrote
I think I agree with that, but we will also see qualitatively new dynamics in the kinds of work that can be done, kinds that haven't even been imagined yet.
Bithom t1_je69qlb wrote
Yes, I agree. Everyone is worried about AI taking jobs right now, but what will that vacuum of jobs create?
Opportunity, or threat?
Graucus t1_je1641n wrote
I think it's possible it'll never be more than that and still be the most powerful tool ever created.
CaliforniaMax02 t1_je23hg2 wrote
We'll have to wait three to five months to see how GPT-4 modules work. If truly great things come of them, then I can only imagine that the now-hidden flaws will be around the complexity of the tasks it can solve.
94746382926 t1_je47vsd wrote
I just got access to plugins today! Looking forward to seeing what people do with them. It's already blowing my mind honestly. Got my meals ordered and planned out for the whole next week in two prompts lol
Sigma_Atheist t1_je15y65 wrote
I've been in the trough of disillusionment for a while now for machine learning and neural networks.
CaliforniaMax02 t1_je23up2 wrote
This can also be true. AI had a hype period around the early 2000s, then a long downward curve and a long disillusionment period, when people even stopped using "AI" and used "machine learning" instead.