Submitted by matt2001 t3_1204f8t in Futurology
Comments
sharkinwolvesclothin t1_jdh1nec wrote
>I hope this can be addressed, as it will be able to run on smaller computers.
These issues are not specific to this chatbot/application. It's just that Stanford people have different incentives from for-profit companies. But yeah, hopefully they can be addressed, since most use cases people have would require the generating models not to exhibit these behaviors.
riceandcashews t1_jdi7iiw wrote
>The researchers spent just $600 to get it working
This part is a little deceptive. Alpaca is just a modification of the Meta LLaMA models. It cost $600 for Stanford to (with questionable legality) use ChatGPT to modify the LLaMA models. It cost Meta far more to train the LLaMA models in the first place, though.
matt2001 OP t1_jdiflq6 wrote
Yes. But once you have a bigger system trained, it can be used to train smaller, lower power machines. I am intrigued with the possibility of running it off a laptop or smart phone. I wonder if that would threaten the economic models of the supercomputer versions?
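The "big model trains small model" idea OP describes can be sketched in miniature. Below, a fixed linear rule stands in for the large teacher model, and a tiny perceptron plays the small student trained on the teacher's outputs; every name, number, and rule here is an illustrative assumption, not the actual Alpaca pipeline (which fine-tunes LLaMA on instruction/response pairs generated by an OpenAI model).

```python
import random

random.seed(0)

def teacher(x):
    # Stand-in for the big model: a fixed linear decision rule.
    # (Illustrative assumption, not a real LLM.)
    return 1 if 2.0 * x[0] - 1.0 * x[1] + 0.5 > 0 else 0

def sample_points(n):
    # Keep a margin around the boundary so the toy problem is cleanly separable.
    pts = []
    while len(pts) < n:
        x = (random.uniform(-1, 1), random.uniform(-1, 1))
        if abs(2.0 * x[0] - 1.0 * x[1] + 0.5) > 0.1:
            pts.append(x)
    return pts

# Step 1: use the teacher to label a synthetic training set, analogous
# to querying an API model to generate instruction/response pairs.
labeled = [(x, teacher(x)) for x in sample_points(500)]

# Step 2: train a much smaller student (a perceptron) on those labels.
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(50):
    for x, y in labeled:
        pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
        err = y - pred
        w[0] += lr * err * x[0]
        w[1] += lr * err * x[1]
        b += lr * err

# Step 3: check that the cheap student mimics the expensive teacher
# on fresh inputs it never saw during training.
test = sample_points(200)
agreement = sum(
    (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == teacher(x)
    for x in test
) / len(test)
print(f"student/teacher agreement: {agreement:.2f}")
```

The point of the sketch: once the expensive model exists, producing a cheap imitator costs only the labeling queries plus a small training run, which is roughly why Alpaca's $600 figure is possible, and why it could threaten the economics of the big hosted models.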
riceandcashews t1_jdihr3n wrote
Definitely. The problem is that OpenAI imposes a non-commercial restriction on any model trained on its outputs, so legally Alpaca cannot be used for anything other than research purposes.
We need a truly open LLM that can legally be used to train other models
ninjasaid13 t1_jdi8ynp wrote
>ChatGPT
technically a different GPT model was used, as far as I know.
riceandcashews t1_jdi9oy4 wrote
text-davinci-003
which is the model underlying ChatGPT
ninjasaid13 t1_jdiag2c wrote
but not ChatGPT.
riceandcashews t1_jdifhix wrote
I mean, text-davinci-003 basically was ChatGPT until recently, but sure
MrRandomNumber t1_jdj3s3z wrote
Thought: if consciousness is dreaming limited by perception, perhaps AI hallucinations are an essential property of these emergent systems. Why shouldn't these systems be confused, overconfident, and superstitious? It worked for us... These things are less Einstein, more "Cliff" from Cheers.
Mercurionio t1_jdlmewp wrote
Messing up facts can get you killed.
Like an Alpaca assistant that gives you completely wrong information about your power circuit. And the list goes on.
These models must be grounded in the world WE are living in, not in one they create from their parameters
FuturologyBot t1_jdfsg93 wrote
The following submission statement was provided by /u/matt2001:
>Researchers at Stanford University have taken down their short-lived chatbot that harnessed Meta’s LLaMA AI, nicknamed Alpaca AI. The researchers launched Alpaca with a public demo anyone could try last week, but quickly took the model offline thanks to rising costs, safety concerns, and “hallucinations,” which is the word the AI community has settled on for when a chatbot confidently states misinformation, dreaming up a fact that doesn’t exist.
I hope this can be addressed, as it will be able to run on smaller computers.
>Despite its apparent failures, Alpaca has some exciting facets that make the research project interesting. Its low upfront costs are particularly notable. The researchers spent just $600 to get it working, and reportedly ran the AI using low-power machines, including Raspberry Pi computers and even a Pixel 6 smartphone, in contrast to Microsoft’s multimillion-dollar supercomputers.