
Snoo58061 t1_jdjvybp wrote

I'm saying it's not the same kind of development, and the results are different. A human spends a long time just grasping letters and words at all, but then extracts far more information from data sets many orders of magnitude smaller, with weaker specific recall and much faster convergence for a given domain.

To be clear I think AGI is possible and that we've made a ton of progress, but I just don't think that scale is the only missing piece here.

1

E_Snap t1_jdjwmkp wrote

Honestly, I have a very hard time believing that. Machine learning has had an almost trailblazing relationship with the neuroscience community for years now, and it’s pretty comical. The number of moments where neuroscientists discover a structure or pattern developed for machine learning years and years ago and then finally admit, “Oh yeah… I guess that is how we worked all along,” is too damn high to be mere coincidence.

3

Snoo58061 t1_jdjxmti wrote

The brain almost certainly doesn't use backpropagation. Liquid nets are a bit more like neurons than the current state of the art. Most of this stuff is old theory refined with more compute and data.

These systems are hardly biologically plausible. Not that biological plausibility is a requirement for general intelligence.
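For intuition, here's a minimal sketch of what a liquid time-constant style update looks like, roughly following the fused-Euler form described in the LTC papers. All names, shapes, and constants here are illustrative, not any particular library's API:

```python
import numpy as np

def ltc_step(x, I, W_in, W_rec, b, A, tau, dt=0.1):
    """One fused-Euler step of a liquid time-constant (LTC) style neuron.

    The gate f depends on the current input, so the cell's effective
    time constant changes with the stimulus instead of being a fixed
    weight applied at discrete ticks.
    """
    # input-dependent gate: sigmoid of the weighted input and recurrent state
    f = 1.0 / (1.0 + np.exp(-(W_in @ I + W_rec @ x + b)))
    # x(t+dt) = (x + dt * f * A) / (1 + dt * (1/tau + f))
    return (x + dt * f * A) / (1.0 + dt * (1.0 / tau + f))

# toy usage: 4 hidden units driven by 3 random inputs
rng = np.random.default_rng(0)
x = np.zeros(4)
W_in, W_rec = rng.normal(size=(4, 3)), rng.normal(size=(4, 4))
b, A, tau = rng.normal(size=4), rng.normal(size=4), np.ones(4)
for t in range(10):
    x = ltc_step(x, rng.normal(size=3), W_in, W_rec, b, A, tau)
print(x)
```

The point is only that the effective time constant is driven by the input, which is a bit closer in spirit to neuronal dynamics than a fixed-step RNN cell; it's not a claim about how the brain actually computes.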

3

Western-Image7125 t1_jdnvnu7 wrote

Well, your last line kinda makes the same point as the other person you're debating with? What if we are getting really close to actual intelligence, even though it is nothing like biological intelligence, which is the only kind we know of?

3