Comments


MysteryInc152 OP t1_ja3hn8q wrote

>Deep-learning language models have shown promise in various biotechnological applications, including protein design and engineering. Here we describe ProGen, a language model that can generate protein sequences with a predictable function across large protein families, akin to generating grammatically and semantically correct natural language sentences on diverse topics. The model was trained on 280 million protein sequences from >19,000 families and is augmented with control tags specifying protein properties. ProGen can be further fine-tuned to curated sequences and tags to improve controllable generation performance of proteins from families with sufficient homologous samples. Artificial proteins fine-tuned to five distinct lysozyme families showed similar catalytic efficiencies as natural lysozymes, with sequence identity to natural proteins as low as 31.4%. ProGen is readily adapted to diverse protein families, as we demonstrate with chorismate mutase and malate dehydrogenase.
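Mechanically, those "control tags" act like a prompt: tokens prepended to the sequence that steer generation, just as in natural-language models. Here's a minimal toy sketch of that idea (not ProGen's actual code; the model below is a uniform-random stand-in, and the tag name is invented):

```python
import random

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")
STOP = "<eos>"
VOCAB = AMINO_ACIDS + [STOP]

def dummy_logits(context):
    """Stand-in for a trained model: uniform-random scores.
    A real model would condition on the control tags in `context`."""
    return {tok: random.random() for tok in VOCAB}

def sample_sequence(control_tags, max_len=120):
    # Control tags (e.g. a protein-family label) are prepended to the
    # context, exactly like a prompt in natural-language generation.
    context = list(control_tags)
    sequence = []
    for _ in range(max_len):
        scores = dummy_logits(context + sequence)
        token = max(scores, key=scores.get)  # greedy decoding, for simplicity
        if token == STOP:
            break
        sequence.append(token)
    return "".join(sequence)

print(sample_sequence(["<family:lysozyme>"]))
```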

14

Surur t1_ja3lylr wrote

What is really interesting about this is that the LLM may have a better understanding of what makes an enzyme function than human scientists do.

The danger is the science turning into a black box as dense as the LLMs themselves.

31

Facts_About_Cats t1_ja5q7ce wrote

What does the structure of language have to do with the folding shapes of proteins?

1

dwarfarchist9001 t1_ja6cfn4 wrote

This paper actually skips the folding step entirely. The AI was trained on a list of protein amino acid sequences that were labeled with their purpose. Then they had it predict new amino acid sequences to fulfill the same purposes. Finally, they actually made the proteins the model suggested, and the proteins worked with quite high efficiency.

The most interesting part to me is that some of the proteins suggested by the model worked despite having little similarity to the proteins in the training data, as low as 31.4% sequence identity in one case. This suggests to me that the model has caught on to some thus-far-unknown rules underlying the relationship between the sequences and functions of proteins.
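For anyone unfamiliar with the metric, "sequence identity" is the fraction of aligned positions where two sequences share the same residue. A toy version, assuming pre-aligned sequences of equal length (real pipelines align first, e.g. with BLAST or Needleman-Wunsch; the sequences below are made up):

```python
def sequence_identity(a: str, b: str) -> float:
    """Fraction of positions where two pre-aligned sequences agree."""
    if len(a) != len(b):
        raise ValueError("toy version requires pre-aligned, equal-length sequences")
    matches = sum(x == y and x != "-" for x, y in zip(a, b))
    return matches / len(a)

natural    = "MKALIVLGLVLLSVTVQG"  # hypothetical
artificial = "MKALIVMGLITLSVAVQG"  # hypothetical
print(f"identity: {sequence_identity(natural, artificial):.1%}")
```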

5

blueSGL t1_ja6pgm2 wrote

Listening to Neel Nanda talk about how models form internal structures to solve problems that come up repeatedly in training, it's no wonder they are able to pick up on patterns better than humans; that's what they are designed for.

And I believe that training models with no intention of ever running them, purely to see what, if any, hidden underlying structures humanity has collectively missed, is called something like 'microscope AI'.

7

hackinthebochs t1_ja6uapk wrote

Any structured data is a language in a broad sense. Tokens identify structural units, and the grammar determines how those structural units interrelate. But the grammar can be arbitrarily complex and so can encode deep relationships among data in any domain. This is why "language models" are so powerful in a vast array of contexts.
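To make that concrete, here's a tiny sketch that treats protein sequences as a "language" whose tokens are residues, with bigram counts standing in for the grammar a real language model would learn. The corpus is made up:

```python
from collections import Counter

corpus = ["MKALIVLG", "MKTAYIAK", "MKVLIVLG"]  # hypothetical sequences

# The "tokens" are single residues; the "grammar" here is just bigram
# statistics. A real language model learns a far richer version of this.
bigrams = Counter()
for seq in corpus:
    for a, b in zip(seq, seq[1:]):
        bigrams[(a, b)] += 1

# Which residue tends to follow "K" in this tiny corpus?
print({pair: n for pair, n in bigrams.items() if pair[0] == "K"})
```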

1

vhu9644 t1_ja6wu9v wrote

I know this is exciting (and it is) but just to temper the excitement: many computationally designed proteins have issues.

Most aren't that good at working under in vivo conditions

We still can't really tune the parameters we actually want (like the temperature range these proteins work in)

Most are stuck on “simpler” problems like binding rather than enzymatic function

There may also be issues with evolvability of these enzymes

But all the same, it's not an unnatural situation either. Protein sequences are still sequences: amino acids are added one by one to build them up, and we've long known that neural nets are good at sequence problems. Before we solved tertiary structure prediction, the state of the art in secondary structure prediction was also neural networks. It's just that tertiary structure and these kinds of generative models are hard.
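As a rough picture of what that older style of neural-net work looked like, here's a per-residue secondary-structure classifier over a sliding window. The model is untrained and the sequence is made up; the actual state-of-the-art systems of that era (e.g. PSIPRED) worked from evolutionary profiles rather than raw one-hot windows:

```python
import torch
import torch.nn as nn

AA = "ACDEFGHIKLMNPQRSTVWY"
IDX = {a: i for i, a in enumerate(AA)}
WIN = 7  # residues per window, centered on the target position
CLASSES = ["helix", "sheet", "coil"]

def one_hot_window(seq: str, center: int) -> torch.Tensor:
    """One-hot encode a WIN-residue window, zero-padded at the ends."""
    x = torch.zeros(WIN, len(AA))
    for j in range(WIN):
        k = center - WIN // 2 + j
        if 0 <= k < len(seq):
            x[j, IDX[seq[k]]] = 1.0
    return x.flatten()

# Tiny untrained MLP, purely for illustration.
model = nn.Sequential(
    nn.Linear(WIN * len(AA), 32),
    nn.ReLU(),
    nn.Linear(32, len(CLASSES)),
)

seq = "MKALIVLGLVLLSVTVQG"  # hypothetical sequence
logits = model(one_hot_window(seq, center=5))
print(CLASSES[int(logits.argmax())])
```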

We’re finally cracking into generative protein design and the field is super exciting now, but it’s still only really preliminary results we’re seeing.

3

Jcat49er t1_ja96hy5 wrote

That's the problem, though. According to the results of this and other papers, there is a still-unknown relationship within proteins that AIs are able to recognize and manipulate. It just happens that the way AIs find patterns in human language can also be used to find the structure of proteins.

1

eve_of_distraction t1_ja9c6cn wrote

One step closer to curing the dreaded prion diseases. One day. 🙏

1

RabidHexley t1_jaa3go2 wrote

> purely to see what if any hidden underlying structures humanity has collectively missed

This is one of the things I feel has real potential, even for "narrow" AI, as far as expanding human knowledge goes. Something may very well be within the scope of known human science without humans ever realizing it. If you represented all human knowledge as a sphere, its composition would probably be as porous as a sponge.

AI doesn't necessarily need to reason "beyond" current human understanding to expand upon known science; it just needs to make connections we're unable to see.

2