Submitted by Balance- t3_124eyso in MachineLearning
londons_explorer t1_jdzwcfo wrote
Reply to comment by keepthepace in [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
Problems like this are never 100% novel.
There are always elements and concepts, in both the problem and its solution, that have been copied from other problems.
The easiest way to see this is to ask a non-programmer to come up with a 'programming puzzle'. They'll probably come up with something like "Make an app to let me know when any of my instagram friends are passing nearby and are up for hanging out".
Compare that to a typical leetcode problem, and you'll soon see how leetcode problems are really only a tiny tiny corner of what is possible to do with computers.
currentscurrents t1_je13kdr wrote
True! But also, problems in general are never 100% novel. That's why metalearning works.
You can make up for poor reasoning ability with lots of experience. That isn't inherently bad, but it makes testing a model's reasoning ability tricky.
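To make the "tricky" part concrete, here's a minimal sketch of the kind of contamination check the original post is about: a naive word-level n-gram overlap heuristic. The function names and the 13-gram window are illustrative choices for this comment, not anything reported in the post or paper.

```python
def ngrams(text: str, n: int = 13) -> set[tuple[str, ...]]:
    """Return the set of word-level n-grams in `text`."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def contamination_score(problem: str, training_docs: list[str], n: int = 13) -> float:
    """Fraction of the problem's n-grams that also appear somewhere in the training docs."""
    problem_grams = ngrams(problem, n)
    if not problem_grams:
        return 0.0
    train_grams: set[tuple[str, ...]] = set()
    for doc in training_docs:
        train_grams |= ngrams(doc, n)
    # A score near 1.0 suggests the problem text was likely seen during training,
    # so passing it says more about memorization than about reasoning.
    return len(problem_grams & train_grams) / len(problem_grams)
```

Of course, a check like this only catches verbatim or near-verbatim leakage; paraphrased or structurally similar problems, the kind londons_explorer describes, slip straight through, which is exactly what makes the evaluation hard.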