Submitted by austintackaberry t3_120usfk in MachineLearning
__Maximum__ t1_jdkdtp2 wrote
ClosedAI is feeding off of our data. If we start using/supporting Open Assistant instead, it will beat ChatGPT in a month or two.
master3243 t1_jdlhj77 wrote
Knowing that a lot of text data from Reddit comments ends up in these huge training datasets, only for the result to be made completely closed source, rubs me the wrong way.
visarga t1_jdlo8hl wrote
Closed source on the generation end, but even more open than open source on the usage end. LLMs lift the open source idea to the next level.
wywywywy t1_jdm0xwo wrote
/r/OpenAssistant
sneakpeekbot t1_jdm0yoj wrote
Here's a sneak peek of /r/OpenAssistant using the top posts of all time!
#1: the default UI on the pinned Google Colab is buggy so I made my own frontend - YAFFOA. | 27 comments
#2: Progress Update | 4 comments
#3: Paper reduces resource requirement of a 175B model down to 16GB GPU | 19 comments
plottwist1 t1_jdlj5r8 wrote
How open are they? I mean, having open models is an improvement, but the training methods should be open too. And if we crowdsource data, that data should be accessible too.
__Maximum__ t1_jdlqolz wrote
It's community driven, so they're fully open.
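For what it's worth, the crowdsourced conversations are publicly accessible. A minimal sketch of how to inspect them, assuming the Hugging Face `datasets` library and the released OpenAssistant/oasst1 dataset (field names taken from that release):

```python
# Sketch: browse the crowdsourced Open Assistant conversation data.
# Assumes the "OpenAssistant/oasst1" dataset published on Hugging Face.
from datasets import load_dataset

# Download the human-written, human-annotated conversation trees.
ds = load_dataset("OpenAssistant/oasst1", split="train")

# Each row is a single message node with its role, language, and text.
example = ds[0]
print(example["role"], example["lang"])
print(example["text"][:200])
```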