rainy_moon_bear
rainy_moon_bear t1_j980m0i wrote
Reply to comment by diabeetis in Proof of real intelligence? by Destiny_Knight
"is just an idiot" is an ad hominem.
GPT models are just token predictors. Everything you said about abstracting patterns of relationships or proto-general reasoning can fit within the context of a model that only predicts the next token.
Most large text models right now are autoregressive. Even though their internals are difficult to interpret, the way they are inferenced is still sequential token prediction...
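To make the point concrete, here's a toy sketch of autoregressive decoding. The bigram table is a hypothetical stand-in for a real model's predicted next-token distribution; the point is that whatever a model "understands" internally, generation is just a loop of next-token picks.

```python
# Toy autoregressive decoding loop. The probability table below is a
# made-up stand-in for a real model's output distribution, not any
# actual model.
next_token_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.9, "ran": 0.1},
    "sat": {"down": 1.0},
}

def generate(prompt_tokens, max_new_tokens=3):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        dist = next_token_probs.get(tokens[-1])
        if dist is None:  # no known continuation
            break
        # Greedy decoding: append the single most likely next token.
        tokens.append(max(dist, key=dist.get))
    return tokens

print(generate(["the"]))  # ['the', 'cat', 'sat', 'down']
```

A real LLM conditions on the whole context rather than just the last token, but the outer loop is the same shape.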
rainy_moon_bear t1_j6fuuc9 wrote
Reply to comment by DukkyDrake in How rapidly will ai change the biomedical field? What changes can be expected. by Smellz_Of_Elderberry
Lol
rainy_moon_bear t1_j6en0n7 wrote
Reply to How rapidly will ai change the biomedical field? What changes can be expected. by Smellz_Of_Elderberry
An adequate GPT could speed up the derivation of hypotheses from current research, as well as streamline the experiment design process.
rainy_moon_bear t1_j6e0f9v wrote
Reply to comment by [deleted] in OpenAI has hired an army of contractors to make basic coding obsolete by Buck-Nasty
With a few examples, the model can generate a dataset and fine-tune itself to perform the task without examples.
I'm not saying it is a clear path to AGI, but it's definitely not obvious where this technology will lead as it progresses.
rainy_moon_bear t1_j676oo9 wrote
Reply to comment by maizeq in [R] SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot by Secure-Technology-78
This is something people don't seem to understand. Pretty much all models 100B+ are undertrained.
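For scale, here's a rough sketch of the Chinchilla compute-optimal heuristic of roughly 20 training tokens per parameter. The ratio is an approximation from the Chinchilla work, not an exact law, and GPT-3's ~300B training tokens is the figure reported in its paper.

```python
# Rough Chinchilla-style heuristic: ~20 training tokens per parameter.
# This is an approximation, not an exact scaling law.
TOKENS_PER_PARAM = 20

def optimal_tokens(n_params):
    """Approximate compute-optimal training token count."""
    return n_params * TOKENS_PER_PARAM

gpt3_params = 175e9
print(optimal_tokens(gpt3_params) / 1e12)        # ~3.5 trillion tokens
print(300e9 / optimal_tokens(gpt3_params))       # GPT-3 saw only ~9% of that
```

By this heuristic, most 100B+ models trained on a few hundred billion tokens are far short of compute-optimal.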
rainy_moon_bear t1_j1flct1 wrote
Reply to Am I the only one on this sub that believes AI actually will bring more jobs (especially in tech)? by raylolSW
Sometimes increased efficiency reveals a greater demand for a service, and therefore a potentially greater job market.
I think it just depends on the demand post-efficiency. I believe software demand will go up with software development efficiency, for example.
rainy_moon_bear t1_j0utlxp wrote
Reply to Is progress towards AGI generally considered a hardware problem or a software problem? by Johns-schlong
If you consider transformer models progress towards AGI, then I think the answer is hardware.
There really isn't anything too shocking or new about the transformer architecture; it is derived from statistics and ML concepts that have been around for a while.
Of course advancing the architecture and training methods is important but the only reason these models did not exist sooner seems to be hardware cost efficiency.
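The core operation really is a few lines of familiar linear algebra plus a softmax. Here's a minimal pure-Python sketch of scaled dot-product attention (single head, no projections or masking) to illustrate that; the tiny query/key/value vectors are made-up examples.

```python
# Minimal scaled dot-product attention, pure Python for illustration.
# Real implementations add learned projections, multiple heads, masking,
# and run on accelerators -- which is where the hardware cost comes in.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def attention(queries, keys, values):
    d_k = len(keys[0])
    out = []
    for q in queries:
        # Similarity of the query to every key, scaled by sqrt(d_k).
        scores = [dot(q, k) / math.sqrt(d_k) for k in keys]
        weights = softmax(scores)
        # Output is a weighted average of the value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

q = [[1.0, 0.0]]
k = [[1.0, 0.0], [0.0, 1.0]]
v = [[10.0, 0.0], [0.0, 10.0]]
print(attention(q, k, v))  # attends mostly to the first (matching) key
```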
rainy_moon_bear t1_irrelfq wrote
Reply to Am I crazy? Or am I right? by AdditionalPizza
1. Of course this is possible and astonishingly easy to build.
2. I don't know how prevalent these bots are, but grammatical errors can actually be a sign that a post is not made by an LLM... It's a debatable statement, but the reason I say this is that LLMs are more likely to misunderstand the content of the post than they are to mess up grammar.
3-4. These are the important questions. How prevalent is it? It's incredibly challenging to quantify the success of bots like these and how common they would be...
It would be interesting to build one of my own (of course to generate innocent content) just to see how effective it can be after optimization. It might provide some perspective XD
rainy_moon_bear t1_irowxts wrote
Reply to comment by [deleted] in Is the Control Problem for AI already solved? by UnionPacifik
Yeah there are more than two possibilities I was just being dramatic XD
rainy_moon_bear t1_irlqtuc wrote
I think that the Fermi Paradox could indicate that the frontier of ASI is a potential cause for the great filter if it exists.
A lack of evidence that other life has expanded to the point of being noticeable to us means either that we will be the first, or that we are facing near-certain failure.
rainy_moon_bear t1_ja7lsd5 wrote
Reply to comment by el_chaquiste in Brace for the enshitification of AI by Martholomeow
It's not open source, and it wasn't trained with quantization-aware training (QAT), so it's behind open-source alternatives for instruct tuning or RLHF.
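For context, here's a sketch of naive symmetric int8 post-training quantization. The weight values are made-up examples; the point is the rounding error it introduces, which QAT lets the model adapt to during training rather than suffer only at inference time.

```python
# Naive symmetric int8 post-training quantization (illustrative sketch,
# not any specific library's implementation).

def quantize(weights, bits=8):
    qmax = 2 ** (bits - 1) - 1               # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]  # rounding loses information
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

w = [0.42, -1.3, 0.007, 0.9]                 # hypothetical weights
q, scale = quantize(w)
w_hat = dequantize(q, scale)
error = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, round(error, 4))
```

QAT simulates this quantize/dequantize round trip in the forward pass during training, so the learned weights already compensate for the rounding.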