spinur1848 t1_iwpp59k wrote

It's not likely that a simple language model can reliably do this task.

It is essentially a parrot with a large memory: it predicts which words and sentences are associated with the input you give it.
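
Here's a minimal sketch of that prediction step, assuming the Hugging Face transformers library and GPT-2 as a stand-in model (not the model under discussion; the prompt is just an illustration):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical setup: GPT-2 as a stand-in for any causal language model.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Hydroxychloroquine is an effective treatment for",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# All the model "knows" is a probability distribution over the next
# token, conditioned on the words it has already seen.
probs = logits[0, -1].softmax(dim=-1)
top = probs.topk(5)
for p, tok in zip(top.values, top.indices):
    print(f"{tokenizer.decode(tok.item())!r}  p={p.item():.3f}")
```

Nothing in that loop checks whether a continuation is true, only whether it is statistically likely given the training corpus.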

The problem is that published scientific literature is frequently wrong, or true only for a short time, or true only in a narrow context that the language itself doesn't capture.

For example, if you ask it whether hydroxychloroquine is an effective treatment for Covid-19, it tells you about the preliminary work that proposed the idea, not the more recent clinical trials that thoroughly debunked it.

You are actually leading it to a particular conclusion with your sentence structure. There is indeed scientific literature suggesting that hydroxychloroquine can treat Covid-19. The more recent, more reputable studies that disprove this don't use language suggesting that hydroxychloroquine treats Covid-19, so the algorithm doesn't pick them up.
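
You can see that leading effect directly. A rough sketch with the same assumed setup (GPT-2 via transformers; both prompts are hypothetical): the model extends whichever framing the input already contains.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# A leading prompt bakes the conclusion into the sentence structure;
# a neutral prompt does not.
leading = "Hydroxychloroquine is an effective treatment for Covid-19 because"
neutral = "The clinical trial evidence on hydroxychloroquine for Covid-19 shows"

for prompt in (leading, neutral):
    out = generator(prompt, max_new_tokens=25, do_sample=False)
    print(out[0]["generated_text"], "\n")
```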

Essentially, what Meta has done here is create an algorithm that emulates what non-scientist anti-vaxxers do when they "do their own research": it finds and amplifies text that reinforces the biases and expectations of the input.

That's not what the practice of science is.
