Submitted by blaspheminCapn t3_zhtjnn in Futurology
Comments
Black_RL t1_izo8t06 wrote
The nano bots are already inside us!
blaspheminCapn OP t1_izod7n2 wrote
That's Teflon
Corno4825 t1_izp372e wrote
What if we could put nano bots in our body to use the metal and plastic in our body to turn them into things that can be helpful for our bodies?
redredgreengreen1 t1_izp6au0 wrote
Brooo weaponizing microplastics as nanobots would be awesome.
Velvet_Pop t1_izp9mgn wrote
Maybe we should start with having them remove the plastic before we get into MacGyver nanobots
BeoLabTech t1_izq6bd8 wrote
The MacGyver bots can do it, we just have to figure out how to get the bubblegum inside
MrMediaShill t1_izqh3iw wrote
I should have plenty still in reserves from all the times I swallowed it as a kid
Toweliee420 t1_izro8g8 wrote
It’s next to all those watermelon seeds
MrMediaShill t1_izsbrg6 wrote
Oh yeah I forgot about those… think the nano bots can use those too?
starfyredragon t1_izq10cz wrote
Do it. Make the dream come true!
Corno4825 t1_izq54gb wrote
I'll write a book.
Today's science fiction is tomorrow's reality.
starfyredragon t1_izq800o wrote
Step in the right direction!
M00s3_B1t_my_Sister t1_izrcnlu wrote
Like replace the cartilage in worn out joints from metals and plastic that are already there? Where are the trials, I'll volunteer.
bacon-squared t1_izoo0w0 wrote
Underrated how true this probably is.
JustInTheNow t1_izqozt2 wrote
My mom has been using the same pan for years, started black, now it's silver, safe to say, it's in here.
crumbshotfetishist t1_izp053p wrote
Just wait til the teflon becomes self aware
Ok_Shop_3418 t1_izp4vvn wrote
Teflon and microplastics battling it out
warrant2k t1_izpvmj9 wrote
It was the nano-friends we made along the way.
mmrrbbee t1_izpanlo wrote
We are just the random conglomerates of trillions of cells going about their lives
ItHitMeInTheNuts t1_izqi8mb wrote
That is plastic
Duece09 t1_izr3usq wrote
Yeah, but not for “good things”.
SiNDiLeX t1_izq9zlp wrote
I can think of at least one person.
orangutanDOTorg t1_izqkuj9 wrote
Who are you Putin on the list?
Soangry75 t1_izqplk4 wrote
It's not just Kim.
StaleCanole t1_izqt5rh wrote
These Trump my idea
GodMasol t1_izs312n wrote
I'm obama self
RlPandTERR0R t1_izqkwu6 wrote
I'd be happy to Putin my 2 cents on the matter but I wouldn't want to break the TOS
OkFootball4 t1_izrjp70 wrote
steve from accounting
addpurplefeet t1_izu2wg2 wrote
Or frank from financing.
Jstarfully t1_izo5zpd wrote
They've literally tried this before and it's not that simple. First of all, synthesizing the molecules isn't always simple, and then you have to get them into the cells and make sure they stay intact in the cells. Then even if you do, they don't always have the predicted effect.
twohundred37 t1_izoh1am wrote
Oh okay. No need to try again then since it wasn't successful the first time, Wile E Coyote!
Jstarfully t1_izoohcl wrote
They haven't done anything to address any of those issues. Making another AI or virtual 'hit' tool for pharmacologically active molecule fragments is literally not what is needed due to the aforementioned issues. It's well known in the field that this is not an effective approach.
Snufflepuffster t1_izqjev7 wrote
they don’t have to, the research is still valuable.
RoiboPilot t1_izp9ora wrote
The difference here relates to your last sentence. They are using AI to see which molecules (receptors) give the right signals to kill the cancer cells, which seems to provide far better results than the traditional method of trial and error.
aclownofthorns t1_izqypbl wrote
I mean, deep learning AI is basically virtual trial and error at computer processing speeds.
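The "trial and error at computer speed" idea can be pictured with a minimal sketch: evaluate huge numbers of random candidates against a cheap surrogate objective and keep the best. Everything here is invented for illustration (the 3-number "candidate" encoding and the scoring function); a real pipeline would score candidate molecules with a trained model instead of this toy.

```python
import random

# Toy sketch of brute-force "virtual trial and error": try many random
# candidates, score each with a cheap stand-in objective, keep the best.

def surrogate_score(candidate):
    # Stand-in for a learned predictor: rewards candidates near a hidden
    # optimum (higher is better, 0 is perfect).
    target = [0.3, -0.7, 0.5]
    return -sum((c - t) ** 2 for c, t in zip(candidate, target))

def random_search(n_trials, seed=0):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        candidate = [rng.uniform(-1, 1) for _ in range(3)]
        score = surrogate_score(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

best, best_score = random_search(100_000)
```

A deep-learning search is smarter than this blind loop (it learns which regions of the space are worth trying), but the throughput argument is the same: millions of cheap evaluations versus a handful of wet-lab experiments.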
keeperkairos t1_izojuuy wrote
If you have a better way of finding molecules, you can more easily find one that best suits its purpose.
Jstarfully t1_izoo882 wrote
But like I said, they've done this before and it didn't work then. They've done nothing to address those previous issues.
keeperkairos t1_izoprpc wrote
Doesn't work like that. The team that made this is not the same team that can fix those issues, and it is possible that those issues can't be fixed, or it is not feasible, and thus having a different molecule would be better.
Jstarfully t1_izoqajq wrote
But that's not how the systems work?? The programmes come up with a whole bunch of different molecules based on the input. The programme isn't broken, the method is, and it won't be good until the issues with it are fixed.
Also, there really is no reason why it can't be the same team doing both these things, they're not isolated groups of people (and if they are, then there's a high chance they're not doing good science)
JPGer t1_izp22n6 wrote
Reminds me of that "game" some scientists made based on protein folding. They had to tweak it based on user feedback... but then people used it and did like 100x the work the scientists were trying to accomplish with it.
Glodraph t1_izp57lg wrote
Yep. The main issues are synthesis, cost, delivery, elimination rates, and whether there is a therapeutic window that makes these molecules a viable treatment despite the cytotoxic effect. Plus you ideally want a lot of testing, from in vitro to in vivo. There is a lot to work on, but this could be a useful way to design molecules far faster, then slim the list down to the most promising ones to test.
MrDuhVinci t1_izp6u74 wrote
But the fact it's been automated by AI... is handy, no? How many variations of a single cancer exist... now imagine knowing the exact sequence of molecules that is guaranteed to destroy each one.
It might not have an immediate practical application (due to the difficulties you describe)... but we should be thankful to have the knowledge available, in case practical solutions are found (even for a single cancer).
Gnostromo t1_izpn0e6 wrote
And as difficult as that is... putting hundreds or thousands of cancer-related businesses out of business is even harder.
Wordymanjenson t1_izpv95d wrote
But isn’t it amazing that two similar models reached the same conclusion? At least it gives insight into what works and what likely doesn’t.
bogglingsnog t1_izoxut5 wrote
I am sure they will have the AI try that next
danellender t1_izntijq wrote
This is really starting to feel like the end. The ceaseless articles on this and other subs are positive in tone yet herald new methods that are disturbing.
I remember Alvin Toffler's book, Future Shock, from 1970. It felt like this. In battling disease we're dealing with technology that is truly moving beyond our understanding. Scientists are struggling to keep up with the developments that are creating themselves.
sboy12456 t1_iznv0v2 wrote
Nah, you're just paranoid
danellender t1_izofzyh wrote
Thanks I feel much better. I'm almost over Covid, out of quarantine tomorrow.
sboy12456 t1_izokpfv wrote
That’s good I hope you’re doing well now
321gogo t1_izocbrh wrote
Nothing is creating itself and the technology is not moving beyond the understanding of those creating it.
flux_capacitor73 t1_izppnpd wrote
To a degree these systems are black box, or at least relatively opaque. They can provide results that are difficult to understand. There's work being done to counter this, but there definitely are legitimate concerns.
321gogo t1_izqfe7j wrote
> they provide results that are difficult to understand
Care to elaborate on this? I’m no expert in AI/ML, but generally I wouldn’t say the results are difficult to understand. I think it’s pretty clear that computers can outperform humans by a wide margin in analyzing large sets of data. Most of these concepts are applications of this computing power.
flux_capacitor73 t1_izrh3el wrote
There's an example where an ad bot from Target would predict the pregnancy of a customer by looking at her purchases of certain items. Human advertisers weren't able to figure it out.
This example is benign, but others are problematic.
321gogo t1_izrjlai wrote
What do you mean "figure it out"? I'm not familiar with the example, but this sounds like a fairly simple application of ML. At the end of the day it is just finding trends in extremely large datasets that humans don't have the brain power to process. Just because humans can't do that work doesn't mean we can't understand it.
For example, with predicting the pregnancy: a human might try to predict this by working backwards. You can first just try to figure out the person's age and whether they are female; not too crazy to work out from purchase history. Maybe you see that the person is in a relationship because they started buying men's shampoo. Maybe they started buying dog food, and your training data shows people tend to have a child within a few years of getting a pet. Tons of little things that can point you in the right direction. Now a computer is just doing this, but ramped up to the max. The applications are very simple; the computation is the part that is complex.
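To make the purchase-history idea concrete, here is a hypothetical sketch of such a score: each purchase signal carries a learned weight, and a logistic function turns the weighted sum into a probability. The product names, weights, and bias below are all invented for illustration; a real system would learn them from millions of purchase histories.

```python
import math

# Toy logistic model of the "pregnancy score" idea: every item in the
# basket contributes a weight, and the sum is squashed into a probability.

WEIGHTS = {
    "unscented_lotion": 1.8,
    "prenatal_vitamins": 3.1,
    "large_tote_bag": 0.6,
    "cotton_balls": 0.9,
    "mens_shampoo": -0.2,  # weak counter-signal in this toy model
}
BIAS = -3.0

def pregnancy_probability(purchases):
    """Logistic score over a shopper's recent purchases."""
    z = BIAS + sum(WEIGHTS.get(item, 0.0) for item in purchases)
    return 1.0 / (1.0 + math.exp(-z))

# A basket full of strong signals scores high; a lone men's shampoo does not.
high = pregnancy_probability(["prenatal_vitamins", "unscented_lotion", "cotton_balls"])
low = pregnancy_probability(["mens_shampoo"])
```

The "tons of little things" in the comment are exactly these weighted signals; the computer's advantage is fitting thousands of such weights at once rather than reasoning them out one by one.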
winangel t1_izo6sof wrote
Hmm, looks like it is completely within our understanding; it just accelerates the underlying applications of first principles. Scientists know it is possible to talk to cells, the AI just decoded the language at a stunning speed. But it is not something we don't understand here…
dude-O-rama t1_iznvkqs wrote
The only thing you have to worry about is living in A Brave New World.
EthanPrisonMike t1_izokjok wrote
I'd take some soma to not worry about inflation anymore
bubba-yo t1_izp4ie5 wrote
Beyond *your* understanding. Not beyond scientists' and engineers' understanding. The mRNA vaccines weren't a fluke.
Just because discovery (science) doesn't have an obvious path to invention (engineering) doesn't mean there's a lack of understanding here. Took us decades after understanding quantum mechanics to figure out how to put that knowledge to practical use.
BaalKazar t1_izo1wjg wrote
„Beyond our understanding“
… no
We developed and trained it to do exactly what it is expected to do. Nothing yet is beyond our understanding, and current AI is still not even as „intelligent“ as an amoeba.
It's complex wiring which leads to mechanical and predictable results.
SeneInSPAAACE t1_izontmm wrote
>current AI is still not even as „intelligent“ as an Amoeba.
Very incorrect. While direct comparison is impossible, since they specialize in different tasks, modern AI can have over a billion neurons, which puts them above cats and some relatively smart birds, such as magpies, in a simple comparison.
CCerta112 t1_izpebmh wrote
That‘s not how intelligence works.
SeneInSPAAACE t1_izpi364 wrote
Well, yes, except no. An AI isn't going to match a cat at stalking prey, for example. However, a cat is trash at chess, and loses at facial recognition pretty hard.
Pray tell, how does intelligence work?
CCerta112 t1_izpjlza wrote
As far as I‘m aware, we don‘t have a comprehensive model to explain intelligence with, yet.
Sure, there might be neural networks that deploy more artificial neurons than cats have real neurons. (I doubt that claim, but I am also too lazy to look it up.) One of the differences is, cats can reason; current AI can't. There are more, but my point is: after a certain point it's not about the computing power or network size, but about how the structure and connections look.
SeneInSPAAACE t1_izppn5w wrote
>I doubt that claim
Of course, that's not the full story. Machine neurons aren't necessarily as performant as animal neurons, for example. On the other hand, they're ridiculously faster. Also, that reference is nearly a decade old. We're somewhere around 90 billion simulated neurons at this point. Don't quote me on that, though; that's just the ballpark I got from fairly casual googling.
Most of what you'd call "machine learning" AIs can't really reason. They do pattern recognition really well, and data transformation, but that's about it. However, that doesn't mean you cannot build AIs that can do logical reasoning; there have been some fairly recent developments in that area. Now, where the limits are, we don't really know, but we're way beyond amoebas and simple invertebrates such as nematodes, definitely.
CCerta112 t1_izpxqez wrote
> Doubt me at your own peril.
Wow, that‘s really interesting. Thanks!
My original point still stands, though. Intelligence is not defined by the amount of neurons connected in a network.
SeneInSPAAACE t1_izpygfm wrote
Yes. Intelligence is not defined by that. So what is it defined by?
BaalKazar t1_izpqmsh wrote
But these neurons don't have much in common with biological neurons. They utilize the electrical impulse-neuron principle but do not consider electric inhibitor neurons. The entire chemical neurotransmitter system is ignored as well.
Yeah, they already do some amazing things, but the things they are doing are very mechanical. A biological brain can alter neurotransmitter levels to react to the same input in an indefinite number of ways, without changing the underlying electrical or chemical synapse network configuration.
The science of biological brains isn't far enough along to clearly conclude how much of „intelligence“ lives in the electrical realm. 100 million simulated electrical neurons might contain less entropy than a biological brain clustered with a mix of 1000 electrical and chemical neurons.
AI can solve non-linear problems. That’s a big step in terms of computation but far off from what we believe makes up intelligence.
SeneInSPAAACE t1_izpv54m wrote
>But these neurons don't have much in common with biological neurons. They utilize the electrical impulse-neuron principle but do not consider electric inhibitor neurons. The entire chemical neurotransmitter system is ignored as well.
Correct. It's not an apples-to-apples comparison in that sense. Like I said.
However, it's hundreds of billions vs. 750 million, if we really wanted to compete.
> A biological brain can alter neurotransmitter levels to react to the same input in an indefinite number of ways, without changing the underlying electrical or chemical synapse network configuration.
All that, and a few studies have hinted that there might also be an electromagnetic aspect to brain function. Still, an AI doesn't have to work the exact same way as a biological intelligence. It does make direct comparisons harder, though.
>AI can solve non-linear problems. That’s a big step in terms of computation but far off from what we believe makes up intelligence.
Yes, yes. The same old story. A goalpost is set for AI, then it's reached, then people say "what about THIS", and "Doing that previous thing doesn't prove it's actually intelligent".
BaalKazar t1_izq1twe wrote
The first goalpost for measuring digital intelligence hasn't moved in 60 years.
It's still the Turing Test. Until AI can beat this already multiple-times-overhauled base version of a test for digital intelligence, we can assume it is not yet intelligent. It's not even at the beginning of the measurable spectrum yet.
What GPT does today could be rebuilt in 60-year-old analog Turing machines. It's a ball dropped onto an angled grid, resulting in a predictable outcome depending on where you drop the ball. But that grid wouldn't be considered intelligent, only functional.
Take the brain of a bat, hook it up to electrodes, and let it control a fighter jet, for example. The brain in this state is only functional. It has already controlled aircraft in experiments, but it's merely a grid of functionality. What we consider „intelligence“ is gone once the brain is removed from the body and connected to a synthetic interface instead.
SeneInSPAAACE t1_izq49dy wrote
Uh, did you miss that LaMDA passed the Turing test in June? The conclusion was that the result isn't valid because there's no intelligence behind LaMDA.
Or, "It's not really intelligent".
This is what we're going to get. We'll use harder and harder tests and see them being passed, and we'll just keep concluding "It's not really actually intelligent". Or, maybe we'll switch to "It's not self-aware" or "It's not sapient" at some point.
BaalKazar t1_izq958r wrote
It did not, though.
The subject knew he was talking to a machine and was asked whether he believed the machine might be intelligent or even sentient.
The Turing Test requires that the subject not know he is talking to a machine. That way, the subject has to identify the machine as a human for the machine to pass the test.
In the case of LaMDA, the human knew from the beginning that he was talking to a machine. Asking someone whether he believes a machine is intelligent is different from asking someone whether he believes he is talking to a human.
There is money in AI. Hence a lot of caution is advised when for-profit organizations declare themselves the first to pass the test. The first to pass it will become rich by publicity alone. When it actually is passed, you, me, and everyone else will get blasted across all media channels by the breakthrough.
(The GPT CEO is marketing GPT-4 as the first to pass the test. GPT is for-profit and said the same about GPT-3; other companies go the same publicity route without the meat needed. As long as no human says „yeh, this dialog partner is a human“, the test isn't passed. A human saying „this machine might be intelligent“ isn't enough.)
SeneInSPAAACE t1_izqad1w wrote
>In case of LaMDA the human knew from the beginning that he is talking to a machine.
So the well was poisoned from the beginning? Isn't that cheating? On the human side?
BTW, allegedly GPT-4 will have 100 TRILLION parameters. Now, again, we can't exactly tell what that means, but human brains have something like 150 trillion SYNAPSES, and that includes all the ones for our bodily functions and motor control, so... yeah, it's going to get interesting.
BaalKazar t1_izqerqh wrote
To be honest, yeah, it is. But it's not as easy and definitive. You've got a point; I don't want to deny that. The edge between us in this discussion is the fascinating thing about all of this, especially the fact that either of us might be correct, but at the present time there is no definite way to prove it. The Turing test itself is not definitive either.
Currently it looks like GPT itself is going to try to cheat its way through the Turing test by using a language model which is naturally hard for humans to identify as a machine. When a neural network is trained to pass the test by using all means necessary, is it passing the test due to its intelligence, or is the passing pre-determined? (It was trained to pass the test; can it do things beyond the scope of this training?)
There is no clear answer, which imo makes it fascinating. We cannot truly say it is intelligent, but it will very soon reach a point at which it will appear intelligent.
The master question is whether that itself already is intelligence. It might be! I don't want to deny that. But we lack the necessary definite understanding of „intelligence“ to truly conclude.
When a neural network passes the test, there will be fierce discussions. These discussions will help us understand what makes up intelligence, and they will most likely help with understanding consciousness as well.
But it's a step-by-step discovery process on both sides. Passing the Turing test doesn't automatically mean we suddenly have a clear picture of intelligence or what it looks like. But it is a milestone on the way to understanding it. Perhaps humans have already created synthetic intelligence without even noticing.
Don't get me wrong, GPT and co. are fascinating, modern-age magic. The new sense of possible tools is breathtaking. Intelligence requires the ability to acquire knowledge and apply it in the form of skills. Digital AI is very close to doing that, but the way it acquires knowledge is very technical and bound to complex engineered models being fed in just the right way. It's not able to do so on its own. (Just like the brain! But the brain does so with a certain intrinsic ease, which might be purely due to some special, not-yet-discovered feature unrelated to „intelligence“. Science can't really tell yet, so we naturally have a hard time setting boundaries for different AI models. Perhaps this current language model isn't intelligent, but some physics-model AI already was? The physics one can't „talk“ to us, which makes it easy to miss.)
Currently we are talking to the AI; what we are looking for is the AI starting to talk to us. Perhaps it already did, but nobody noticed because we didn't yet know how to listen.
And yeah, I fully agree, GPT-4 sounds incredible! The steps the industry marches forward with have gotten huge in the last few years. Truly fascinating.
SeneInSPAAACE t1_izroblm wrote
>The Turing test it self is not definitive either.
Very true. Without poisoning the well, would LaMDA have completely passed it already? And if I've understood correctly, it's a bit of an idiot outside of putting words in a pleasing order.
>Currently it looks like GPT itself is going to try to cheat its way through the Turing test by using a language model which is naturally hard for humans to identify as a machine.
"Cheat" is relative. Can a HUMAN pass a Turing test, especially if we restrict the format in which they are allowed to respond?
If it can pass every test a human can, and we still call it anything but intelligent, either we gotta admit our dishonesty, or question whether humans are intelligent.
> it will reach a point very soon at which it will appear intelligent.
Just like everyone else, then. Well, better than some of us.
BaalKazar t1_izsnhiv wrote
Now I fully agree with what you said.
"Cheat" absolutely is relative! How can we tell that something which appears to be intelligent is not? The parallels to how human infants acquire knowledge are striking: parents are the engineers, and the environment is the data set the infant is trained on.
We need to take a better look at what the Turing test is doing to answer your question of „could a human pass it“. Turing's approach is not really to measure intelligence; intelligence definitely is a spectrum, but his test results in a binary yes/no conclusion for a reason. He believed that by the year 2000, the average interrogator would have no more than a 70% chance of identifying the machine after five minutes of dialogue.
His test is not a scientifically important milestone; passing the Turing test, or declaring a machine intelligent, does not yield any new knowledge. Passing the Turing test marks the point in time at which humans must accept that a majority of them won't be able to tell the difference between remotely communicating with a human and with a machine. (The latest point at which governments need to work on additional legislation, regulation, etc.)
So as you correctly pointed out, the test cannot really be cheated. But the test can be passed without the need for intelligence. A dog is intelligent but could not pass it. Passing it definitively requires something to seem intelligent to a human.
Star Trek has many episodes which tackle this highly ethical topic of when humans accept something as intelligent and when we accept that something is sentient. The android Commander Data is definitively intelligent; he acquires knowledge and applies it in the real world. The question about Data is: is he sentient? The episodes impressively show how difficult it is to identify intelligence, and even something as seemingly obvious as sentience. There is an episode which concludes that a crystalline rock is intelligent, based on it emitting energy patterns that can be considered an encoded attempt at communication.
Humans may look intelligence straight in the face and state it's not intelligent. That's because we do not understand our own intelligence well enough yet. My point of view is that AI will help us understand our own intelligence. But until we can grasp our own, how can we grasp something else's? I believe that pushing back will at some point result in a technology which goes over and beyond, making the claim that it is not intelligent completely obsolete. Star Trek's Data, for example: there is no denying his intelligence, and interestingly enough this leads straight to the question of sentience. At least Star Trek is not able to draw a picture which clearly shows the boundary between intelligence and sentience; in its pictures the two appear to correlate. Something which is definitively considered intelligent by humans always appears to be sentient at the same time. (Which imo shows that we need a better idea of „intelligence“ before we conclude something is; once we conclude it is intelligent, the scientific path „ends“ before we have truly understood.)
zebrahdh t1_izoagma wrote
Lol. It's funny how everyone wants to be Nostradamus, the world's most famous guy who has no idea when the world will end but is ready to predict it anyway.
Galladorn t1_izo1dfs wrote
I wonder how Toffler would have written Future Shock if he was producing it today.
iz296 t1_izo51eb wrote
Lost someone very important to me last week due to complications from cancer.
It's affected our family deeply. Hoping the cure comes soon.
SkyJebus t1_izpanqs wrote
Progress is good. Demanding things stay the same will never work and only lead to regression.
SciGuy45 t1_izql3x5 wrote
Just like nuclear energy, there’s good and nefarious potential with this technology.
blaspheminCapn OP t1_iznwcru wrote
The bigger shock would be if it all suddenly stopped. Imagine if global free trade suddenly wasn't free or secure. Which could be a big problem soon if globalization continues to retract.
cavynmaicl t1_izqnum7 wrote
Great. So researchers have made the bardic spell Cutting Words work on cancer. Cool!
Soangry75 t1_izqphej wrote
I was gonna go with a dune reference, but that works too.
deezdanglin t1_izs1fyb wrote
"My name is a killing word". Am I right?!
Lemnology t1_izpl3to wrote
Every few days on reddit I see a headline that seems to be the cure for cancer. Will someone explain why this one isn’t?
OzOntario t1_izrf0fr wrote
Building a CAR-T cell is one thing, getting it into a tumour and preventing the tumour from growing back is a whole other thing.
Leukemia doesn't really form tumours, which is why CAR-Ts work so well with it. A B-cell becomes cancerous; all B-cells have a receptor called CD19 on them, so if you target all cells with CD19 on them, you kill the cancer.
What happens when a tumour doesn't have a receptor like that? Or when the tumour is difficult for the CAR-T to access? Even worse, what happens when the tumour has co-opted immune cells that suppress T-cells? This happens in many tumour types, most notably (in my opinion) glioblastoma.
We have lots of newer, better CAR-Ts always coming out, and some of them can even clear the initial disease, but disease recurs, and it's either less receptive to the CAR-T or you just continue to get relapse.
3deal t1_izqaj36 wrote
5,410,150th article telling us that cancer is over
Desperate_Food7354 t1_izr5h4s wrote
Hopefully it’ll help cure the worst disease of them all, aging!
FuturologyBot t1_iznvvl9 wrote
The following submission statement was provided by /u/blaspheminCapn:
Using new machine learning techniques, researchers at UC San Francisco (UCSF), in collaboration with a team at IBM Research, have developed a virtual molecular library of thousands of "command sentences" for cells, based on combinations of "words" that guided engineered immune cells to seek out and tirelessly kill cancer cells.
Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/zhtjnn/how_ai_found_the_words_to_kill_cancer_cells/iznrbzz/
SuspiciousStable9649 t1_izqzzrg wrote
Cancerous Expelliarmus? Those words?
Because, you know, with a wand they might work. Just saying.
homnomoculous t1_izr32x0 wrote
This is a really confusing summary, but the abstract states that there were 2,300 viable combinations of 13 "motifs" of signaling pathways that regulate T-cell activity against cancer cells, and the researchers trained an AI to find which combinations were more effective. Doesn't seem particularly clinically useful, but I feel like this will get attention just because neural networks and AI are such a hot topic nowadays.
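The search described in the abstract can be pictured with a toy sketch: enumerate motif combinations and rank them with a scoring model. The motif names, the choice of three-motif combinations, and the scoring function below are placeholders for illustration, not the study's actual data or model.

```python
import itertools

# Toy sketch of motif-combination search: build every combination of
# signaling motifs, score each with a stand-in for the trained model,
# and rank the candidates.

MOTIFS = [f"motif_{i}" for i in range(13)]

def predicted_activity(combo):
    # Stand-in for the trained model's prediction; a toy score that
    # rewards one particular motif pairing.
    score = 0.1 * len(combo)
    if "motif_3" in combo and "motif_7" in combo:
        score += 2.0
    return score

# All 3-motif combinations drawn from 13 motifs: C(13, 3) = 286 candidates.
candidates = list(itertools.combinations(MOTIFS, 3))
ranked = sorted(candidates, key=predicted_activity, reverse=True)
top = ranked[0]
```

The point of the real work is the scoring function: exhaustively enumerating combinations is trivial, but predicting which "command sentence" actually drives T-cell killing is what the machine learning model contributes.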
Major-Phrase5668 t1_izr65n7 wrote
More than that, how Tech-no-logical-ness created first VR and kicked its daddy AI in !
sunsilk89 t1_izs0i7t wrote
The cure for cancer has already been found. C'mon, people. There's too much money in sick people for big pharma to let it out. Just think about how advanced we are...