pmirallesr t1_jdi0eqg wrote
Reply to comment by anothererrta in [D] "Sparks of Artificial General Intelligence: Early experiments with GPT-4" contained unredacted comments by QQII
With these people, it's interesting to ask: how do we know human intellect is not emergent behaviour from a simple task? That would correspond to a radical view of predictive coding. I'm no expert in neuroscience, but to me, the idea that AGI cannot arise from a single simple task makes less and less sense as time goes by
pmirallesr t1_irnqpq4 wrote
Reply to comment by avialex in [D] Quantum ML promises massive capabilities, while also demanding enormous training compute. Will it ever be feasible to train fully quantum models? by avialex
That's fair. I liked Maria Schuld's research on QML.
pmirallesr t1_irnmvpg wrote
Reply to [D] Quantum ML promises massive capabilities, while also demanding enormous training compute. Will it ever be feasible to train fully quantum models? by avialex
You're thinking about implementing classical ML (backprop) on a quantum computer. Proponents of quantum ML look for alternative ways of "machine learning": either not calculating gradients via backprop at all, or trying to exploit the properties of quantum mechanics to "learn better". If it all sounds very fuzzy, that's because it is
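To make the "not backprop" point concrete, here is a minimal sketch of one such alternative: the parameter-shift rule used with variational quantum circuits, where the gradient comes from two extra circuit evaluations rather than differentiating through the circuit. The example simulates a single-qubit RX rotation classically; all function names are my own, and a real QML stack (e.g. PennyLane) would evaluate the circuit on quantum hardware or a full simulator instead.

```python
import numpy as np

def expval_z(theta: float) -> float:
    """Simulate <Z> after applying RX(theta) to |0> on a single qubit.

    RX(theta)|0> = [cos(theta/2), -i*sin(theta/2)], so <Z> = cos(theta).
    """
    a = np.cos(theta / 2)
    b = -1j * np.sin(theta / 2)
    return float(abs(a) ** 2 - abs(b) ** 2)

def parameter_shift_grad(theta: float) -> float:
    """Exact gradient of <Z> w.r.t. theta via the parameter-shift rule.

    No backprop through the circuit: just two forward evaluations
    at shifted parameter values.
    """
    return 0.5 * (expval_z(theta + np.pi / 2) - expval_z(theta - np.pi / 2))

theta = 0.7
grad = parameter_shift_grad(theta)  # analytically equals -sin(0.7)
# Cross-check against a central finite difference:
fd = (expval_z(theta + 1e-6) - expval_z(theta - 1e-6)) / 2e-6
print(grad, fd)
```

The key property is that the shift rule gives the *exact* analytic gradient from expectation values alone, which is something you can actually measure on a quantum device, unlike intermediate activations in backprop.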
pmirallesr t1_je2tf2v wrote
Reply to [N] OpenAI may have benchmarked GPT-4’s coding ability on it’s own training data by Balance-
Idk, the contamination-check procedure described in the release report sounded solid at first glance, and I don't see how this news changes that