Hostilis_ t1_jbr5iul wrote
Reply to comment by multiverseportalgun in [D] What's the Time and Space Complexity of Transformer Models Inference? by Smooth-Earth-9897
Yeah quadratic scaling in context length is a problem lol. Hopefully RWKV will come to the rescue.
Hostilis_ t1_jbqh1fm wrote
Reply to [D] What's the Time and Space Complexity of Transformer Models Inference? by Smooth-Earth-9897
In terms of layer width, the dense matrix operations within a single transformer layer are O(n^2) per token, with n the width of the largest matrix in the layer. The architecture is sequential, so depth contributes a multiplicative factor d. Self-attention is quadratic in the context length c. So per forward pass the total is O(d(cn^2 + c^2 n)): quadratic in width, quadratic in context length, and linear in depth.
There is generally not much difference between different transformer architectures in terms of the computational complexity.
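A rough back-of-the-envelope sketch of that cost model (the function name and numbers are purely illustrative, not any particular model's exact FLOP count):

```python
# Rough cost model for one forward pass of a decoder-style transformer.
# n: hidden width, d: number of layers, c: context length.
# Illustrative only; constants and lower-order terms are ignored.

def approx_cost(n: int, d: int, c: int) -> int:
    dense = c * n ** 2      # projection / MLP matmuls: quadratic in width
    attention = c ** 2 * n  # attention score + value matmuls: quadratic in context
    return d * (dense + attention)

# Doubling the context length roughly quadruples the attention term:
print(approx_cost(n=4096, d=32, c=2048))
print(approx_cost(n=4096, d=32, c=4096))
```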
Hostilis_ t1_jap97r5 wrote
Reply to comment by SpookyTardigrade in [D] Are Genetic Algorithms Dead? by TobusFire
https://www.nature.com/articles/s41467-021-26568-2
Try this article
Hostilis_ t1_jak681p wrote
Reply to comment by currentscurrents in [D] Are Genetic Algorithms Dead? by TobusFire
>But you can't always use gradient descent. Backprop requires access to the inner workings of the function
Backprop and gradient descent are not the same thing. When you don't have access to the inner workings of the function, you can still use stochastic approximation methods for getting gradient estimates, e.g. SPSA. In fact, there are close ties between genetic algorithms and stochastic gradient estimation.
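To make the SPSA point concrete, here's a minimal sketch of a zeroth-order gradient estimate that treats the function as a black box (toy loss, step sizes, and iteration count are just placeholders):

```python
import numpy as np

# SPSA-style gradient estimate: no access to the function's internals required.
def spsa_gradient(f, theta, c=1e-2):
    delta = np.random.choice([-1.0, 1.0], size=theta.shape)  # random +/-1 perturbation
    diff = f(theta + c * delta) - f(theta - c * delta)        # two function evaluations
    return diff / (2 * c * delta)                             # per-coordinate estimate

# Example: minimize a simple quadratic using only the estimated gradient.
f = lambda x: np.sum((x - 3.0) ** 2)
theta = np.zeros(5)
for _ in range(500):
    theta -= 0.01 * spsa_gradient(f, theta)
print(theta)  # approaches [3, 3, 3, 3, 3]
```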
Hostilis_ t1_j9niuzt wrote
Reply to comment by kalakau in Google announces major breakthrough that represents ‘significant shift’ in quantum computers by Ezekiel_W
She absolutely is a practicing physicist lol. She's also a far better science communicator than NDT or Michio Kaku, but she gets vitriol like this because, unlike those two, she's a staunch critic of the particle physics community. And for good reason.
Hostilis_ t1_j9n0wlf wrote
Reply to comment by Feathercrown in Google announces major breakthrough that represents ‘significant shift’ in quantum computers by Ezekiel_W
To be honest, it's these results that make me uninspired. In going from 17 physical qubits to 49, they were able to reduce the error rate from... 3.0 to 2.9 percent. Even though this is a big milestone for the field, in absolute terms it's abysmal.
This is also only with a tiny number of logical qubits. Scaling these systems to usable sizes will take decades.
Hostilis_ t1_j9movqy wrote
Reply to Google announces major breakthrough that represents ‘significant shift’ in quantum computers by Ezekiel_W
With every one of these announcements, I'm more and more convinced we are really far away from a practical quantum computer. It feels like fusion in the 1970's.
If you're not convinced, all you have to do is ask what's the largest number that's been factored using Shor's algorithm. The answer is the same as it was in 2012: 21. Not quite the exponential progress we saw with transistors.
Hostilis_ t1_j610t1y wrote
Reply to comment by StarNightLynx in AI art made me appreciate human art more by spyser
They're not storing "mathematical representations" in the way you think they are. They're storing neural representations. The neural networks used in these models are based on the brain. So it is actually very similar to how real artists take inspiration from their predecessors. These networks are not "collage machines" piecing together snippets from other artists' work.
Hostilis_ t1_j5rmgkc wrote
Reply to comment by funkyrdaughter in Seven technologies to watch in 2023: tools and techniques that are poised to have an outsized impact on science. by Vucea
Those are two big pieces to aging, but not the whole picture. I'm not an expert, but I think oxidation and accumulation of damage to proteins and DNA are also very important and will be much more difficult to handle.
Hostilis_ t1_j5re6re wrote
Reply to comment by funkyrdaughter in Seven technologies to watch in 2023: tools and techniques that are poised to have an outsized impact on science. by Vucea
As far as I understand, immune system proteins can have these "Lego brick" type combinations, but they're the exception. Most proteins are directly encoded by the DNA.
And yeah it's absolutely possible that we could engineer proteins to get rid of toxic stuff in our bodies. Solving aging is a bit more difficult because it involves how lots of proteins and genes interact with each other, but even then AI (deep neural networks) could probably help a ton.
Hostilis_ t1_j5rc8na wrote
Reply to comment by funkyrdaughter in Seven technologies to watch in 2023: tools and techniques that are poised to have an outsized impact on science. by Vucea
They're all (ostensibly) encoded by our genome.
Hostilis_ t1_j5r9xml wrote
Reply to comment by funkyrdaughter in Seven technologies to watch in 2023: tools and techniques that are poised to have an outsized impact on science. by Vucea
It required extremely costly, elaborate experiments. The problem is that proteins are far too small to see with an ordinary microscope. To get the structure, you essentially had to create a crystal of the purified protein of interest (which is not always possible or practical) and then shoot x-rays at the crystal to create a diffraction pattern. Then you could use software to reconstruct the structure of the protein from that pattern.
And yes, that's exactly right. Now that it's possible to predict the structure nearly instantly, you can create recipes for custom proteins with whatever properties you want.
Hostilis_ t1_j5r7pvr wrote
Reply to comment by funkyrdaughter in Seven technologies to watch in 2023: tools and techniques that are poised to have an outsized impact on science. by Vucea
The "Central Dogma of Biology" is: DNA makes RNA makes proteins.
How the amino-acid sequence encoded by DNA folds into a final protein structure (the folding process) is effectively impossible to predict with traditional computing methods. It's important because essentially everything about how a protein functions is determined by its shape. This has been an open problem in biology for decades, since the discovery of DNA's structure. Given that all life is built from proteins, this is a massive gap in our understanding of life and medicine.
AlphaFold was able to predict the folded structure of proteins to within experimental accuracy for the first time in history, and last year DeepMind released a catalog of its predictions for ALL human proteins for free.
Understanding how proteins fold will help us develop vaccines faster, treat all kinds of diseases involving proteins, predict birth defects from DNA sequences, build nano-sized delivery vessels for fighting cancer, etc. The sky is really the limit.
tl;dr, protein folding is a big deal
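As an aside, that catalog can be queried programmatically. A rough sketch of pulling one prediction (the endpoint and response fields are my recollection of the EBI-hosted AlphaFold DB API, so double-check against the current docs):

```python
import requests

# Fetch a predicted structure from the public AlphaFold database.
# Endpoint/field names assumed from the EBI-hosted API; verify before relying on them.
uniprot_id = "P69905"  # human hemoglobin subunit alpha, as an example
resp = requests.get(f"https://alphafold.ebi.ac.uk/api/prediction/{uniprot_id}")
resp.raise_for_status()
entry = resp.json()[0]       # API returns a list of prediction entries
print(entry["pdbUrl"])       # link to the predicted .pdb structure file
```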
Hostilis_ t1_j5pa968 wrote
Reply to comment by Ray_of_Meep in Seven technologies to watch in 2023: tools and techniques that are poised to have an outsized impact on science. by Vucea
Except AI just solved one of the hardest, most important open problems in biology lol. Go look at AlphaFold.
Hostilis_ t1_jc4rnu1 wrote
Reply to comment by big_ol_tender in [D] ChatGPT without text limits. by spiritus_dei
Unfortunately I think, at least for now, that's just the way it is. This is why I personally focus on hardware architectures / acceleration for machine learning and biologically plausible deep learning. Ideas tend to matter more than compute resources in these domains.