Submitted by mrx-ai t3_11zi2uq in Futurology

Pinterest improved its recommendation system by 150%.

Google Maps improved its ETA predictions by up to 50%.

MIT discovered a novel antibiotic, Halicin.

Baker Lab invented a protein design paradigm that solves 100% more benchmark problems than its predecessor.

All of these seemingly disparate advances have one thing in common - Graph Neural Networks.

GNNs are a type of neural network that has been quietly making rapid progress for several years, driven by a relatively small community of dedicated researchers.
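For readers wondering what a GNN actually computes: the core idea is "message passing" — each node updates its features by aggregating its neighbours' features. A minimal, illustrative sketch in NumPy (the toy graph, function name, and identity weights below are my own for illustration; real systems use learned weights, deeper stacks, and far larger graphs):

```python
import numpy as np

def gnn_layer(A, H, W):
    """One message-passing step: each node averages the features of
    itself and its neighbours, then applies a shared linear map + ReLU."""
    # Add self-loops so each node keeps its own features.
    A_hat = A + np.eye(A.shape[0])
    # Row-normalise by each node's (self-loop-inclusive) degree.
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))
    return np.maximum(0, D_inv @ A_hat @ H @ W)

# Toy graph: 3 nodes in a line (0-1-2), each with 2 features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
W = np.eye(2)  # identity weights, purely for illustration

H1 = gnn_layer(A, H, W)
print(H1)  # node 1 now blends features from nodes 0 and 2
```

Stacking several such layers lets information propagate across the graph — which is why the same machinery applies to pins, road networks, molecules, and proteins alike.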

While diffusion models, like those powering DALL-E 2 and Stable Diffusion, have been in the limelight, GNNs have quietly become the dark horse behind a wealth of exciting discoveries and innovations.

Could GNNs be the future of AI?

Comments

Semifreak t1_jdd38uw wrote

Could you kindly give me a dumbed-down explanation of the difference between diffusion and GNN models?

I looked up both definitions but I don't understand them.

Cheers.

DauntingPrawn t1_jddtbgn wrote

Not on their own. We know the human brain has different processing centers, and I think AGI is going to require activation and routing networks to invoke specific functional networks, i.e. image processing, language processing, etc. So I could see graph networks working out simulated thought processing of inputs, producing probabilistic routes through those functional networks, with a sort of reality filter or expectation filter -- maybe a Boltzmann type of energy activation -- to choose among those results.

94746382926 t1_jddue88 wrote

They're not getting much attention right now since DeepMind stays pretty quiet these days and LLMs have the spotlight. But to me they're the most exciting because they seem to hold the most potential for scientific applications. The best-performing AI will probably be a mix of many different architectures, though, if I had to guess.

Dziadzios t1_jdew6li wrote

Personally, I think the future of AI is some currently unknown architecture that will be invented by AI.

DragonForg t1_jdgwpva wrote

LLMs are the future. How do you think -- through graphs, or through text? So why build an AI model that isn't modeled on how we think?

I do see the great potential, though: Wolfram Alpha is amazing software, and paired with GPT it can produce amazing results. And I think in the future AI will use these models as tools, just as it already does with ChatGPT plug-ins. We gave AI a voice, we let AI see, and now AI can use tools.

DauntingPrawn t1_jdhi42a wrote

Complex cognition exists independent of language structures, and LLMs mimic language structures, not cognition. You can destroy the language centers of the brain and general intelligence, i.e. cognition and self-recognition, remains intact. Meanwhile ChatGPT isn't thinking or even imitating thought; it's imitating language by computing a latent space for emergent words based on prior language input. Math.

Meanwhile a baby can act on knowledge learned by observing the world long before language emerges. AGI requires more than language, more than memory. It requires the ability to model reality and learn language from raw sensory input alone, and to synthesize information and observation into new ideas, and motives to act on that information, the ability to predict an outcome and a value scale to weigh one potential outcome over another. A baby can do that but ChatGPT doesn't even know when it's spouting utter nonsense and stable diffusion doesn't know how many fingers a human has.

We have no way of modeling unobserved information. An LLM cannot add a new word to its model. It will never talk about anything that was invented after its training. Yes, they are impressive. On the level of parlor tricks and street magic.
