
blueSGL t1_ja6pgm2 wrote

Listening to Neel Nanda talk about how models form internal structures to solve problems that come up repeatedly in training, it's no wonder they're able to pick up on patterns better than humans; that's what they're designed for.

And I believe that training models with no intention of running them, purely to see what (if any) hidden underlying structures humanity has collectively missed, is called something like 'microscope AI'.
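To make that concrete, here's a rough sketch of the workflow: train a small model on a toy task, then inspect the structure it learned rather than deploying it. The modular-addition task, architecture, and hyperparameters are all my own illustrative assumptions (loosely inspired by Nanda's grokking write-ups, not his actual code):

```python
# "Microscope AI" sketch: train a tiny model on (a + b) mod P, then
# inspect the learned embeddings instead of ever deploying the model.
import torch
import torch.nn as nn

P = 97  # modulus for the toy task (an arbitrary choice)

# every (a, b) pair and its label
a = torch.arange(P).repeat_interleave(P)
b = torch.arange(P).repeat(P)
y = (a + b) % P

class TinyNet(nn.Module):
    def __init__(self, d=128):
        super().__init__()
        self.embed = nn.Embedding(P, d)
        self.mlp = nn.Sequential(
            nn.Linear(2 * d, 256), nn.ReLU(), nn.Linear(256, P))

    def forward(self, a, b):
        x = torch.cat([self.embed(a), self.embed(b)], dim=-1)
        return self.mlp(x)

model = TinyNet()
opt = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1.0)
for step in range(2000):  # real grokking runs train far longer than this
    loss = nn.functional.cross_entropy(model(a, b), y)
    opt.zero_grad(); loss.backward(); opt.step()

# The "microscope" step: for modular arithmetic the embedding matrix
# tends to become periodic, so a Fourier transform over the token axis
# exposes a few dominant frequencies rather than noise.
W = model.embed.weight.detach()                        # shape (P, d)
spectrum = torch.fft.rfft(W, dim=0).abs().mean(dim=1)
print(spectrum.topk(5).indices)
```

The interesting output isn't the model's predictions, it's the spectrum: structure the model found that nobody put there by hand.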

7

RabidHexley t1_jaa3go2 wrote

> purely to see what if any hidden underlying structures humanity has collectively missed

This is one of the things I feel has real potential for expanding human knowledge, even with "narrow" AI. Something may very well be within the scope of known human science without humans ever realizing it. If you represented all human knowledge as a sphere, it'd probably be as porous as a sponge.

AI doesn't necessarily need to reason "beyond" current human understanding to expand known science; it just needs to make connections we're unable to see.

2

Facts_About_Cats t1_ja8q9at wrote

There is no reason why the physical structure of proteins should in any way resemble or be related to the structure and grammar of the associations and relationships between words.

1

Jcat49er t1_ja96hy5 wrote

That's the problem, though. According to the results of this and other papers, there is a still-unknown relationship within proteins that AIs are able to recognize and exploit. It just happens that the way AIs find patterns in human language can also be used to find the structure of proteins.
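For what it's worth, the transfer is mechanical: the masked-language-model recipe used for words works on proteins by treating each amino acid as a token. A minimal sketch, where the tiny transformer and the example sequence are my own illustrative assumptions (real protein models like ESM use the same objective at vastly larger scale):

```python
# Masked-language-model objective applied to a protein sequence:
# same recipe as BERT for text, with amino acids as the vocabulary.
import torch
import torch.nn as nn

AA = "ACDEFGHIKLMNPQRSTVWY"    # the 20 standard amino acids
stoi = {c: i for i, c in enumerate(AA)}
MASK = len(AA)                 # extra id reserved for the [MASK] token

def encode(seq):
    return torch.tensor([stoi[c] for c in seq])

class ProteinMLM(nn.Module):
    def __init__(self, d=64, vocab=len(AA) + 1):
        super().__init__()
        self.embed = nn.Embedding(vocab, d)
        layer = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d, vocab)

    def forward(self, tokens):
        return self.head(self.encoder(self.embed(tokens)))

model = ProteinMLM()
seq = encode("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")  # arbitrary example

# BERT-style objective: hide ~15% of residues, predict them back.
masked = seq.clone()
idx = torch.rand(len(seq)) < 0.15
idx[0] = True                  # guarantee at least one masked position
masked[idx] = MASK
logits = model(masked.unsqueeze(0))
loss = nn.functional.cross_entropy(logits[0, idx], seq[idx])
print(loss.item())  # training minimizes this over millions of sequences
```

Nothing in the objective is specific to words; swap the vocabulary and the corpus, and the same machinery hunts for whatever statistical regularities the sequences contain.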

1

diabeetis t1_jac6a4k wrote

I don't see why it shouldn't. The model abstracts meaning from the relationships in the data, whether that data is language or protein sequences.

1