Submitted by jabowery t3_1234air in MachineLearning
1stuserhere t1_jdub2yo wrote
inb4 this post is part of the training set for the next generation of LLMs, along with the comments, sarcasm, and whatnot
jabowery OP t1_jdvkjt0 wrote
Imputation can make interpolation appear to be extrapolation.
So, to fake AGI's capacity for accurate extrapolation (i.e., data efficiency), one may take a big pile of money and throw it at two things: expanding the training set toward infinity and expanding the matrix multiplication hardware toward infinity. More datapoints mean more room to interpolate between them, so plain interpolation ends up covering a larger knowledge space.
But it is fake.
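A toy sketch of the gap (my own illustration, not anything from the thread; numpy's polyfit stands in for whatever interpolator you like):

    # Toy illustration: a flexible model interpolates well inside its
    # training range but falls apart the moment you step outside it.
    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.linspace(0, 10, 50)              # dense coverage of [0, 10]
    y_train = np.sin(x_train) + 0.05 * rng.standard_normal(50)

    coeffs = np.polyfit(x_train, y_train, deg=9)  # high-capacity fit

    print(np.polyval(coeffs, [3.3, 7.7]))    # inside [0, 10]: close to sin(x)
    print(np.polyval(coeffs, [12.0, 15.0]))  # outside: diverges wildly

Throwing more data at it widens the [0, 10] window, but the failure mode outside the covered range never goes away.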
If, on the other hand, you actually understand the content of Wikipedia (the Hutter Prize's deliberately limited, high-quality corpus), you may deduce (extrapolate) the larger knowledge space through the best current mathematical definition of AGI: AIXI, where the utility function of the sequential decision-theoretic engine is to minimize the algorithmic description length of the training data (Solomonoff Induction), with that minimal description used as the prediction oracle in the AGI.
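To make "minimize the algorithmic description length of the training data" concrete: the Hutter Prize scores real compressors on enwik9, but you can see the compression-is-prediction idea with something as crude as zlib standing in as a computable proxy for the (uncomputable) Kolmogorov complexity. A minimal sketch, all of it my own toy construction:

    # Crude MDL sketch: structure = compressibility, and the preferred
    # continuation is the one that keeps the total description shortest.
    # zlib is only a stand-in for algorithmic description length.
    import os
    import zlib

    def dl(data: bytes) -> int:
        """Approximate description length: zlib-compressed size in bytes."""
        return len(zlib.compress(data, 9))

    structured = bytes((i * 7) % 256 for i in range(3000))  # regular sequence
    random_bytes = os.urandom(3000)                         # no regularity

    print(dl(structured))    # small: the coder "understands" the pattern
    print(dl(random_bytes))  # ~3000+: nothing to exploit

    # Solomonoff-style preference between two candidate continuations:
    ctx = structured[:2900]
    good = structured[2900:]              # the true continuation
    bad = os.urandom(100)                 # an arbitrary continuation
    print(dl(ctx + good) - dl(ctx))       # cheap to encode on top of ctx
    print(dl(ctx + bad) - dl(ctx))        # expensive to encode

A real Solomonoff inductor weights programs, not zlib match tokens, but the ranking principle is the same: predict the continuation that adds the least to the total description.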