RaptorDotCpp t1_iznoyl6 wrote
It's wrong though. `range` does not return a list. It has its own sequence type.
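A quick sketch of the point being made, for anyone following along: in Python 3, `range()` returns a lazy `range` object rather than a list (Python 2's `range` did return a list, which is likely what the model echoed).

```python
# Python 3: range() returns its own sequence type, not a list
r = range(5)
print(type(r))              # <class 'range'>
print(isinstance(r, list))  # False

# It still supports indexing, membership, and iteration
print(r[2])        # 2
print(3 in r)      # True
print(list(r))     # [0, 1, 2, 3, 4] -- materialized only on request
```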
elbiot t1_izoh372 wrote
It was trained on python 2
jsonathan OP t1_iznwsl6 wrote
I noticed that. Generally it’s “right enough” to help you fix your error, though.
_poisonedrationality t1_izoe2xm wrote
I wouldn't say that. While I'm definitely impressed by its abilities, it makes mistakes way too often for me to consider it "generally correct".
It is interesting that even when it makes a mistake, it often has some reasonable-sounding logic behind it. That makes it feel like it has some level of "understanding".
artsybashev t1_izorw1t wrote
Yeah, it is annoyingly, confidently wrong. Even when you point out its mistake, it might try to explain itself like no mistakes were made. Sometimes it admits that there was a mistake. From a coworker this would be really annoying behaviour.
new_name_who_dis_ t1_izou962 wrote
Crazy that we are now far enough into AI research that we are comparing chatbots to coworkers.
artsybashev t1_izoujfv wrote
Yeah. A lot of the time I get a better answer from ChatGPT, but you really need to take its responses with a grain of salt.
throughactions t1_izrbo1y wrote
The same is true with coworkers.
jsonathan OP t1_izt0lfi wrote
In my experience, it has explained every error I’ve encountered in a way that’s at least directionally correct. Can you post a counterexample?
_poisonedrationality t1_izt0zr3 wrote
No
jsonathan OP t1_izxpvy0 wrote
What mistakes were you talking about then?
_poisonedrationality t1_j01a9tj wrote
I've asked it questions which it has answered incorrectly.
When the answer isn't a basic fact, it gets it wrong a decent amount of the time.
knowledgebass t1_izr3dpl wrote
You know people make a lot of mistakes, too, right?
_poisonedrationality t1_izr9ksp wrote
Yes. But I still wouldn't say it's "generally correct" because it makes mistakes far too often.
cr125rider t1_izolvlz wrote
Iterable go brrr without using all your memory
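A minimal illustration of why the lazy `range` matters for memory: the object stores only its start, stop, and step, so its size is constant no matter how many elements it spans, whereas a list stores every element.

```python
import sys

# A range over a billion integers takes constant memory;
# elements are computed on demand, never stored.
big = sys.getsizeof(range(10**9))
print(big)  # a few dozen bytes

# A materialized list grows with its length.
small = sys.getsizeof(list(range(1000)))
print(small)  # thousands of bytes for just 1000 elements
```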