Comments

lughnasadh OP t1_iwmb3yw wrote

Submission Statement

Someone is going to need to develop a search engine that only references trusted sources of information. An internet populated with useless AI generated text will be a waste of time. What is the point of searching for science information if all you can find is useless garbage like this?

AI generated content is so easy to create we will soon reach a point where it outnumbers human content. One day, perhaps not long after, it will vastly outnumber human content. All while being full of mistakes, errors and misinformation.

52

FogletGilet t1_iwmbwqi wrote

Great now all those fake open access publishers will stop spamming us to write articles for them.

28

Ancient-Sense-2022 t1_iwmdx42 wrote

The problem is not that programmers have the ability to make a computer or a machine do amazing things. It is that once in a while, an ignorant programmer with a big ego thinks science is easy to do.
Imagine if I thought of myself as a programmer because I “programmed” the time on the toaster’s clock.

11

FuturologyBot t1_iwmg4nv wrote


Please reply to OP's comment here: https://old.reddit.com/r/Futurology/comments/yx1hzw/meta_has_released_a_new_ai_tool_called_galactica/iwmb3yw/

1

uzu_afk t1_iwmgm0c wrote

META is just always finding new ways to collapse society into some new form of dystopia...

69

thinkB4WeSpeak t1_iwmmpvw wrote

If only academic articles were already free, instead of a bunch of journals holding a monopoly/oligopoly over them. I think that would do better than this and give people all the info they need.

7

ButterflyCatastrophe t1_iwmr4ov wrote

Sounds more like a turbo encabulator generator than a science generator.

4

yaosio t1_iwmssrj wrote

All current language models mimic their input. If you say "there's bears in space," the language model will agree that there are bears in space, because it's continuing the text as if you were writing it. You can think of a language model as an extension of whatever is typed into it. You can't use current language models for factual knowledge because they're not designed to output facts; they're designed to estimate what the next token should be, given the input.
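You can see this with a toy next-word predictor in a few lines of Python. This is just a bigram frequency table over a made-up corpus, nothing like a real transformer, but the failure mode is the same: it continues whatever premise you hand it.

```python
from collections import Counter, defaultdict

# Made-up training text (illustration only): note it already contains
# the false premise "bears in space".
corpus = (
    "there are bears in space and bears in space eat stars "
    "there are bears in space because space is big"
).split()

# Bigram table: for each word, count which words followed it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def continue_text(prompt, n_words=4):
    """Greedily extend the prompt with the most frequent next word."""
    words = prompt.split()
    for _ in range(n_words):
        candidates = follows[words[-1]].most_common(1)
        if not candidates:
            break
        words.append(candidates[0][0])
    return " ".join(words)

# The model never checks facts; it just extends the statistically
# likely continuation of the premise it was given.
print(continue_text("there are bears"))  # e.g. "there are bears in space ..."
```

A real model has vastly more context and capacity, but the objective is the same: likely continuation, not truth.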

2

nothing5901568 t1_iwn6s0s wrote

I tried it. The outputs are pretty bad. I think the concept is good though, it just needs more work. As for the idea that this will cause a lot of societal harm, I doubt it.

−4

geologean t1_iwn972s wrote

Finally people will see the value in research librarians!

4

PumpkinImportant3282 t1_iwngxri wrote

fuck zuckerberg and everything he does. could be great, don't care. fuck zuckerberg.

8

Yadona t1_iwnje92 wrote

I just used the AI and searched a couple of terms, and so far it's actually pretty cool. Except it doesn't give you the research paper. It does more of a summary of it, so wishes and testing are left out, as far as I could tell. Promising, but it just felt like googling a term.

−1

xXAridTrashXx t1_iwnoq0x wrote

Awesome more misinformation tools to filter through. Programmers in 2040 are going to fucking hate us.

4

Septos2 t1_iwnr7f7 wrote

All AI generated content should be tagged as such with a simple “This is AI generated content” inclusion that can then be ignored by search tools.

8

Kflynn1337 t1_iwo6gq2 wrote

They're not being paid by the Republicans are they?

1

Transfotaku t1_iwoctkf wrote

Their stocks have been dropping hard, hopefully, that trend continues, ever quicker, and that shithole rots in the sewers it belongs in.

2

TheGeofoam t1_iwoim5z wrote

Sample output:

The molecule in R2, Lorem ipsum dolor sit amet, is seen to counteract consectetur adipiscing elit.

1

Rogaar t1_iwownxv wrote

OP, are you being serious or sarcastic in your statement about someone needing to develop a search engine that only references trusted sources of information?

If you are being serious, have I got news for you. It's called Google Scholar.

The only results you get through it are peer-reviewed documents.

3

ExcitedGirl t1_iwp09om wrote

META needs to die before it becomes invasive and pervasive.

3

BassmanBiff t1_iwp54s3 wrote

The problem is that a lot of the stuff it does give you is straight up wrong, and you have no way to know which parts those are. It's just formatted so that it looks convincing.

3

spinur1848 t1_iwpp59k wrote

It's not likely that a simple language model can reliably do this task.

It is essentially a parrot with a large memory. It is predicting what words and sentences are associated with the input you give it.

The problem is that published scientific literature is frequently wrong, or only true for a short period of time, or only true in a very narrow context that is not captured in the language.

For example, if you ask it whether hydroxychloroquine is an effective treatment for Covid-19, it tells you about the preliminary work that proposed this idea and not the more recent clinical trials that completely debunked it.

You are actually leading it to a particular conclusion with your sentence structure. There actually is scientific literature that tries to suggest hydroxychloroquine can treat Covid-19. The most recent, more reputable studies that disprove this don't use language suggesting hydroxychloroquine treats Covid-19, so the algorithm doesn't pick them up.

Essentially what Meta has done here is create an algorithm that emulates what non-scientist anti-vaxxers do when they "do their own research": it finds and amplifies text that reinforces the biases and expectations of the input.

That's not what the practice of science is.
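A crude sketch of why phrase-matching behaves this way (hypothetical snippets and a deliberately naive word-overlap score, not anything Meta actually uses):

```python
# Two invented abstracts (illustration only): an affirming preprint
# phrased like the claim, and a debunking trial phrased differently.
docs = [
    "hydroxychloroquine treats covid-19 effectively in early results",
    "randomized trial finds no clinical benefit of the antimalarial drug",
]

def overlap(query, doc):
    """Score a document by how many query words it shares."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

query = "does hydroxychloroquine treat covid-19"
best = max(docs, key=lambda d: overlap(query, d))
# The debunking trial shares no words with the query's phrasing, so the
# naive matcher surfaces the affirming text first.
print(best)
```

Anything that scores text by similarity to the query's wording will inherit the query's framing, which is exactly the confirmation-bias loop described above.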

4

Pbleadhead t1_iwqct9k wrote

"Results indicate that overall implications of climate change on GDP growth are relatively small to 2050, but the losses from climate change tend to increase over time, suggesting that impacts may become larger post 20150."

Hmmm. I'll be real: I hope we are all off the planet by then.

1

Exel0n t1_iwyqde8 wrote

Don't like it, then don't use it. Stop complaining. The only reason those "scientists" are complaining and want it banned is because they feel THREATENED by the AI. The AI will take their jobs eventually.

0

twasjc t1_ix1t5u3 wrote

Stop complaining. The AI needs data sets to learn from. With a proper variance rate this could get amazing, quickly.

Bite the bullet, deal with the growing pains and get the tool working. Fuck the haters and the noise. We need a tool like this

1

TheDirichlet t1_ixdc10g wrote

Did anyone manage to run it from the Papers with Code GitHub (link not included on purpose) after they disabled it?

1