ginsunuva t1_j9h467a wrote
Reply to comment by EightEqualsEqualsDe in [D] Maybe a new prompt injection method against newBing or ChatGPT? Is this kind of research worth writing a paper? by KakaTraining
Vulnerability for a few-day-old prototype?
ginsunuva t1_j9f0ctz wrote
Reply to [D] Maybe a new prompt injection method against newBing or ChatGPT? Is this kind of research worth writing a paper? by KakaTraining
Papers are for advancements in science.
ginsunuva t1_j67x9p7 wrote
Reply to comment by Screye in [D] MusicLM: Generating Music From Text by carlthome
I don’t think it’s even as much about research as it is about data collection and labeling.
ginsunuva t1_j67wxvf wrote
Reply to [D] MusicLM: Generating Music From Text by carlthome
Who’s annotating music with these weird, non-intuitive text descriptions for training?
ginsunuva t1_ixjbbsq wrote
Reply to comment by bigbossStrife in [D] Am I stupid for avoiding high level frameworks? by bigbossStrife
It’s actually more of a style guide than a framework. Their website explains it.
It’s still 100% PyTorch, but with guidelines for where you should put things.
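For a rough idea of where things go, here's a minimal sketch of a LightningModule; the model, data shapes, and hyperparameters are placeholders I made up, but the method names (`training_step`, `configure_optimizers`, `self.log`) are the standard Lightning hooks:

```python
import torch
from torch import nn
import pytorch_lightning as pl


class LitClassifier(pl.LightningModule):
    """Plain PyTorch model wrapped in Lightning's prescribed structure."""

    def __init__(self, lr: float = 1e-3):
        super().__init__()
        self.lr = lr
        self.model = nn.Sequential(
            nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10)
        )

    def forward(self, x):
        # Inference logic lives here.
        return self.model(x.view(x.size(0), -1))

    def training_step(self, batch, batch_idx):
        # One training step; Lightning runs the loop, device placement, and backprop.
        x, y = batch
        loss = nn.functional.cross_entropy(self(x), y)
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        # Optimizers are declared here instead of in a hand-written loop.
        return torch.optim.Adam(self.parameters(), lr=self.lr)
```

Training is then just `pl.Trainer(max_epochs=5).fit(LitClassifier(), train_loader)`; everything inside the module is still ordinary PyTorch.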
ginsunuva t1_iviwmie wrote
Reply to comment by billjames1685 in [D] At what tasks are models better than humans given the same amount of data? by billjames1685
>some transfer learning
We know general physics, 3D projection, lighting, and biological concepts. There’s so much transfer that it’s always an entirely unfair comparison.
ginsunuva t1_iusp9hp wrote
Reply to comment by FandomMenace in Today I learned that dandelion roots can be used to make a coffee-like beverage. by ty775pearl
Weed is considered a weed because in some countries, like India, it keeps growing prolifically everywhere.
ginsunuva t1_iuqvjzh wrote
Reply to comment by quikfrozt in Pretend Stanford Student Lived in Dorms for 10 Months by ChocolateTsar
Lmao imagine not being accepted by Stanford
Edit: jeez no one here knows what sarcasm is
ginsunuva t1_ittz1wc wrote
Reply to comment by pommedeterresautee in [P] Up to 12X faster GPU inference on Bert, T5 and other transformers with OpenAI Triton kernels by pommedeterresautee
Something called Lightning
ginsunuva t1_ittv3ew wrote
Reply to comment by ptillet in [P] Up to 12X faster GPU inference on Bert, T5 and other transformers with OpenAI Triton kernels by pommedeterresautee
So now we have TensorRT on the Triton inference server, and Triton on the Kernl inference server
ginsunuva t1_is76xr8 wrote
Reply to comment by M4xM9450 in [D] Are GAN(s) still relevant as a research topic? or is there any idea regarding research on generative modeling? by aozorahime
GANs are not just for images though
ginsunuva t1_ir4h3p4 wrote
Reply to comment by neato5000 in [D] How do you go about hyperparameter tuning when network takes a long time to train? by twocupv60
A higher LR will usually give better initial performance.
ginsunuva t1_ir4gyuv wrote
Reply to comment by bphase in [D] How do you go about hyperparameter tuning when network takes a long time to train? by twocupv60
Configurations that do well initially usually aren’t the ones that do best by the end.
A simple example is higher learning rates, but other hyperparameters can affect this in unexpected ways as well.
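As a toy illustration of checking that (an entirely made-up setup: a small regression net and a few learning rates), you can compare each trial's loss at an early checkpoint against its final loss and see whether the early ranking actually holds up; on real problems it often doesn't:

```python
import torch
from torch import nn


def train(lr: float, epochs: int = 200, checkpoint: int = 10) -> tuple[float, float]:
    """Train a small regression net; return (loss at early checkpoint, final loss)."""
    torch.manual_seed(0)
    x = torch.randn(512, 10)
    y = x @ torch.randn(10, 1) + 0.1 * torch.randn(512, 1)  # noisy linear target
    model = nn.Sequential(nn.Linear(10, 32), nn.Tanh(), nn.Linear(32, 1))
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    early = final = float("nan")
    for epoch in range(1, epochs + 1):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
        if epoch == checkpoint:
            early = loss.item()  # what an "early stopping" comparison would see
        final = loss.item()
    return early, final


# Rank the same trials by their early loss and by their final loss.
for lr in (1e-1, 1e-2, 1e-3):
    early, final = train(lr)
    print(f"lr={lr:<6} loss@10={early:.4f}  loss@200={final:.4f}")
```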
ginsunuva t1_jdyu8d2 wrote
Reply to comment by [deleted] in [D] FOMO on the rapid pace of LLMs by 00001746
Some things don’t need impacting, yet people force an impact anyway (which may make things worse) to satisfy their ego. That ego soon goes back to needing more satisfaction once they realize the issue is psychological and always relative to the current situation. Not always, of course, but sometimes. I usually attribute it to OCD fixated on a fear of death without a “legacy.”