
FeatheryBallOfFluff t1_j0l6723 wrote

AIs can predict, but that isn't the same as understanding why or how something works. It's like being able to apply a very complex formula: you may know how to apply it without understanding why it takes the form it does. Computers are good at finding correlations, but in an environment with few correlations AI may have difficulty, because there is no single number that indicates biological relevance.
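
To make that concrete, here's a toy Python sketch (entirely made-up data and feature names, not from any real study): a model that latches onto a spurious "batch" artifact can predict very well on the data it was fit to, while capturing nothing about which feature was actually biologically relevant.

```python
# Toy sketch with made-up data: prediction without understanding.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Training set: a technical artifact ("batch") happens to track the outcome
# more cleanly than the real biological signal does.
outcome = rng.integers(0, 2, n)
batch = outcome ^ (rng.random(n) < 0.05)      # spurious, 95% aligned with outcome
biology = outcome ^ (rng.random(n) < 0.30)    # real signal, but noisier
model = LogisticRegression().fit(np.column_stack([batch, biology]), outcome)

# New cohort: the artifact no longer lines up with the outcome.
outcome2 = rng.integers(0, 2, n)
batch2 = rng.integers(0, 2, n)                # now unrelated to the outcome
biology2 = outcome2 ^ (rng.random(n) < 0.30)

print("fit-set accuracy:   ", model.score(np.column_stack([batch, biology]), outcome))
print("new-cohort accuracy:", model.score(np.column_stack([batch2, biology2]), outcome2))
# High accuracy the first time, much lower the second: the model "knew the
# formula" without any grasp of which feature mattered biologically.
```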

1

Surur t1_j0l6tj8 wrote

Finding relationships between items is exactly what AI is good at. You sound like the people who said AI would never beat humans at Go because the number of possible positions is greater than the number of atoms in the universe.

−1

breaditbans t1_j0l9qc4 wrote

I work in medical research. We are already seeing cool image-based analysis, but it's supervised machine learning that is only as good as the training set, and that will apply to any machine learning algo. That's where we are going to run into issues. What I'd like to see is ML algos that can read 50 high-impact papers in a field and put together a summary of the data. The problems arise when people have bad data, whether it's fabricated, from poorly designed experiments, or just bad statistics. The ML algos are going to treat that data as being as real as the most well-performed experiments. The bad data will contaminate the good data and corrupt the conclusions drawn from the algos (toy sketch of that effect at the end of this comment).

Will that problem get alleviated? Probably, but it's going to take some time, and it's going to require a lot of bright people curating the datasets before these tools can draw better conclusions than we can arrive at alone. But in 15 years? God only knows. Maybe I'll just submit whatever grant ChatGPT13 writes for me.
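
As a rough illustration of the "only as good as the training set" point (synthetic data, a made-up 30% corruption rate, and scikit-learn stand-ins rather than anything from real research), here's what mixing bad labels into an otherwise identical training set does to a supervised model:

```python
# Toy illustration: the same supervised model, trained once on clean labels
# and once with bad labels mixed in. The model has no way to tell fabricated
# or badly analysed results from well-performed experiments.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Contaminated" training set: flip 30% of the labels (made-up rate).
rng = np.random.default_rng(0)
flip = rng.random(len(y_train)) < 0.30
y_bad = np.where(flip, 1 - y_train, y_train)

clean = RandomForestClassifier(random_state=0).fit(X_train, y_train)
dirty = RandomForestClassifier(random_state=0).fit(X_train, y_bad)

print("trained on good data only:", clean.score(X_test, y_test))
print("trained with 30% bad data:", dirty.score(X_test, y_test))
```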

2

Surur t1_j0ld4s8 wrote

Dealing with dirty data is exactly the strength of neural networks. It is just a matter of time.
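
To gesture at what that can look like in practice (a sketch of one generic noisy-label heuristic, not any particular published method or anything from the comments above): flag the training points a cross-validated model is most confidently wrong about, drop them, and retrain.

```python
# Sketch of a generic noisy-label filter: use out-of-fold predictions to
# find the training points the model is most confidently wrong about.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def drop_suspect_labels(X, y, drop_fraction=0.1):
    """Return X, y with the likeliest-mislabeled fraction removed.

    Assumes y contains 0/1 integer labels; drop_fraction is a guess."""
    proba = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                              cv=5, method="predict_proba")
    # Probability the model assigns to each point's *given* label.
    p_given = proba[np.arange(len(y)), y]
    keep = p_given > np.quantile(p_given, drop_fraction)
    return X[keep], y[keep]
```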

1