KingsmanVince t1_jdz7k7x wrote
Reply to comment by liyanjia92 in [P] ChatGPT with GPT-2: A minimum example of aligning language models with RLHF similar to ChatGPT by liyanjia92
https://github.com/nichtdax/awesome-totally-open-chatgpt#ethanyanjialiminchatgpt
And your work is listed as another alternative to ChatGPT
KingsmanVince t1_jdgyviy wrote
Reply to [P] ChatGPT with GPT-2: A minimum example of aligning language models with RLHF similar to ChatGPT by liyanjia92
That's a very good school project! Good job!
KingsmanVince t1_jdazrlk wrote
Impressive: the small model got a similar answer to the big model (though it required a few human prompt tweaks).
KingsmanVince t1_jd0z599 wrote
Reply to comment by tdgros in [P] OpenAssistant is now live on reddit (Open Source ChatGPT alternative) by pixiegirl417
Technically the truth?
KingsmanVince t1_jctpr5l wrote
Reply to comment by michaelthwan_ai in [P] searchGPT - a bing-like LLM-based Grounded Search Engine (with Demo, github) by michaelthwan_ai
Not sure whether this is a frontend problem or not, but the Python code is printed without indentation.
KingsmanVince t1_jcn734b wrote
On the positive side, maybe LLaMA is better than Alpaca if you do so.
On the negative side, maybe its responses just end up close to ChatGPT's.
KingsmanVince t1_jcaf167 wrote
Reply to [D] What do people think about OpenAI not releasing its research but benefiting from others’ research? Should google meta enforce its patents against them? by [deleted]
Sorry for my lack of knowledge, but what do you mean by patents? What would the patents apply to? The model's weights? The model's source code? The model's theory (white papers)?
Researchers reuse others' ideas and rethink others' work all the time. So if people want to go against each other, they should just stop releasing white papers.
KingsmanVince t1_jbwmi1h wrote
Reply to comment by GraydientAI in [N] Man beats machine at Go in human victory over AI : « It shows once again we’ve been far too hasty to ascribe superhuman levels of intelligence to machines. » by fchung
Unplug the machine!
KingsmanVince t1_jaboa8m wrote
Reply to [D] Training transformer on RTX2060 by ahiddenmessi2
Knowing the architecture isn't enough. How large is your training dataset? Do you use gradient accumulation?
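For reference, a minimal sketch of gradient accumulation in PyTorch (the tiny linear model and random data are hypothetical stand-ins for a real transformer and dataset):

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for a real transformer and dataset.
model = nn.Linear(16, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()
data = [(torch.randn(8, 16), torch.randint(0, 2, (8,))) for _ in range(8)]

accum_steps = 4  # effective batch size = 8 * accum_steps = 32

optimizer.zero_grad()
for step, (inputs, targets) in enumerate(data):
    loss = criterion(model(inputs), targets) / accum_steps  # scale the loss
    loss.backward()                                         # grads accumulate
    if (step + 1) % accum_steps == 0:
        optimizer.step()       # one update per accum_steps micro-batches
        optimizer.zero_grad()
```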
KingsmanVince t1_j9xr8oe wrote
Reply to comment by Linear-- in [D] Isn't self-supervised learning(SSL) simply a kind of SL? by Linear--
>Not so constructive.
It's not much, I'm aware. However, what I meant is that the names of the two training paradigms already tell you part of the answer. My last paragraph refers to two other comments to build a more complete answer.
Moreover, the names alone already point out that the two are related. Therefore, this line
>So I think classifying them as disjoint is somewhat misleading.
is obvious. I don't know who said "classifying them as disjoint" to you. Clearly they didn't pay attention to the names.
KingsmanVince t1_j9xk6rf wrote
>Isn't self-supervised learning(SSL) simply a kind of SL?
Don't their names already tell you that? Self-supervised learning... supervised learning...
>So I think classifying them as disjoint is somewhat misleading.
Who said this?
The ways the labels are determined differ between the two paradigms (as u/cthorrez said). Moreover, the objectives are different (as u/currentscurrents said).
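To make that label difference concrete, here is a toy sketch (the token ids and the sentiment label are made up for illustration):

```python
import torch

# Toy corpus (hypothetical): token ids standing in for a sentence.
tokens = torch.tensor([5, 12, 7, 3, 9])

# Supervised learning: the label comes from a human annotator and is
# separate from the input itself.
sl_input, sl_label = tokens, torch.tensor(1)  # e.g. a sentiment class

# Self-supervised learning: the label is derived from the data itself,
# e.g. next-token prediction just shifts the input by one position.
ssl_input, ssl_label = tokens[:-1], tokens[1:]

print(sl_input, sl_label)    # label required human annotation
print(ssl_input, ssl_label)  # label came for free from the data
```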
KingsmanVince t1_j9fba00 wrote
Top 10 jokes you can tell your data science friends
KingsmanVince t1_j712rzn wrote
>Since ChatGPT, many articles have been popping up about how AI will replace software engineers and developers. (Maybe not in the near future, but eventually)
Most of them are clickbait.
KingsmanVince t1_j5xk2e3 wrote
Reply to comment by nins_ in [D] Pretraining for CNN by Dense-Smf-6032
Related link: https://keremturgutlu.github.io/self_supervised/#Vision
KingsmanVince t1_j5xicem wrote
Reply to comment by Daango_ in [D] Pretraining for CNN by Dense-Smf-6032
Or like this https://keras.io/api/applications/ ?
KingsmanVince t1_j5skwpx wrote
Reply to comment by NadaBrothers in [R] Easiest way to train RNN's in MATLAB or Julia? by NadaBrothers
You can do Python locally tho
KingsmanVince t1_j3erhcv wrote
No. We still need NLP researchers to understand the output of ChatGPT. ChatGPT exists to help, not to replace.
KingsmanVince t1_j2zgcb5 wrote
Reply to [Discussion] If ML is based on data generated by humans, can it truly outperform humans? by groman434
Calculators already outperform humans. I bet you can't compute 567 times 891 in one second. They can.
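(For the record, 567 × 891 = 505,197: 567 × 900 − 567 × 9 = 510,300 − 5,103.)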
KingsmanVince t1_iy6thin wrote
Nobody can do anything from scratch. Just look at Microsoft's repos on GitHub: they don't implement models without PyTorch (which is from Meta AI).
KingsmanVince t1_ixh5snh wrote
Faster R-CNN with a Feature Pyramid Network
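For context, a minimal sketch of that setup using torchvision's built-in model (the `weights="DEFAULT"` argument assumes a recent torchvision; older versions use `pretrained=True` instead):

```python
import torch
import torchvision

# Faster R-CNN with a ResNet-50 FPN backbone from torchvision.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Inference takes a list of 3xHxW tensors with values in [0, 1].
images = [torch.rand(3, 300, 400)]
with torch.no_grad():
    predictions = model(images)

print(predictions[0]["boxes"].shape)  # boxes, plus "labels" and "scores"
```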
KingsmanVince t1_iwyyvgo wrote
Reply to comment by [deleted] in [D] David Ha/@hardmaru of Stability AI is liking all of Elon Musk's tweets by datasciencepro
Not sure whether you replied to the wrong comment or not, but I don't follow him, or even know him, on Twitter.
KingsmanVince t1_iwymi0d wrote
Reply to [D] David Ha/@hardmaru of Stability AI is liking all of Elon Musk's tweets by datasciencepro
Why would you personally care about someone (who doesn't even know you) liking tweets? You don't know what he is thinking anyway.
KingsmanVince t1_ivk562i wrote
Reply to comment by chatterbox272 in Pytorch Symbolic: an equivalent of Keras Functional API [Project] by gahaalt
Where can I read more about these "lazy layers"?
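(In case it helps anyone reading later: assuming "lazy layers" here means shape-inferring modules, PyTorch ships its own, e.g. nn.LazyLinear, sketched below.)

```python
import torch
import torch.nn as nn

# nn.LazyLinear leaves in_features undetermined until the first forward
# pass, similar to Keras-style deferred shape inference.
layer = nn.LazyLinear(out_features=8)
x = torch.randn(4, 16)
y = layer(x)               # in_features=16 is inferred here
print(layer.weight.shape)  # torch.Size([8, 16])
```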
KingsmanVince t1_it6fskg wrote
Can you explain why the statement is not true? It may be trivial to you, but it isn't to some others.
KingsmanVince t1_je9g82b wrote
Reply to [D] What do you think about all this hype for ChatGPT? by Dear-Vehicle-3215
We are clearly not tired of ChatGPT posts. As a matter of fact, we really want you to speak more about it. /s