Submitted by ShakeNBakeGibson t3_10wblpv in IAmA
Pookie_0 t1_j7mjidv wrote
We all know that ChatGPT made mistakes early on, which is part of how machine learning and AI work. But since your AI operates in the pharmaceutical domain, mistakes are more of a life-or-death matter. How do you plan on dealing with them?
ShakeNBakeGibson OP t1_j7mttku wrote
This is why we don’t just take the inferences from our maps of biology and send them straight into clinical trials. The FDA imposes a lot of useful restrictions on testing drugs in humans, which ensure that everyone does a ton of work to minimize the risk of human experimentation. For example, after our AI gives us its output but before we go into trials, we run numerous validation experiments in human cells, animal models, and other preclinical models, and many of these experiments address safety. That said, risk can never be reduced to zero, and we take our responsibility to patients seriously.