rainy_moon_bear t1_j980m0i wrote

"is just an idiot" is an ad hominem.

GPT models are just token predictors. Everything you said about abstracting patterns of relationships or proto-general reasoning can fit within the context of a model that only predicts the next token.

Most large text models right now are autoregressive. Even though their internals are difficult to interpret, the way inference works is still token-by-token sequencing...
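To make the "just a token predictor" point concrete, here is a toy sketch of autoregressive decoding. The bigram table standing in for a model is entirely made up; a real LLM replaces `next_token` with a neural network over a large vocabulary, but the loop structure is the same.

```python
def next_token(context):
    # Toy stand-in for an LLM: maps the last token to its most likely successor.
    bigram = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}
    return bigram.get(context[-1], "<eos>")

def generate(prompt, max_new_tokens=5):
    # Autoregressive loop: predict one token, append it to the context, repeat.
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        tok = next_token(tokens)
        if tok == "<eos>":
            break
        tokens.append(tok)  # the new token becomes part of the next context
    return " ".join(tokens)

print(generate("the"))  # → "the cat sat on the cat"
```

Nothing in the loop "knows" about reasoning or abstraction; any such behavior would have to live inside the predictor itself, which is the point being made above.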

0

rainy_moon_bear t1_j0utlxp wrote

If you consider transformer models to be progress towards AGI, then I think the answer is hardware.

There really isn't anything too shocking or new about the transformer architecture; it is derived from statistics and ML concepts that have been around for a while.

Of course, advancing the architecture and training methods is important, but the only reason these models did not exist sooner seems to be hardware cost efficiency.
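As a sketch of that claim: the core transformer operation, scaled dot-product attention, is just softmax-weighted averaging, all standard statistics/ML building blocks. This pure-Python version is for illustration only; real implementations are batched matrix operations on accelerators, which is exactly why hardware was the bottleneck.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    # Scaled dot-product attention: each query takes a softmax-weighted
    # average of the value vectors, weighted by query-key similarity.
    d = len(keys[0])
    out = []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out
```

With a query that strongly matches the first key, the output is essentially the first value vector, i.e. attention is soft lookup.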

1

rainy_moon_bear t1_irrelfq wrote

  1. Of course this is possible, and astonishingly easy to build.
  2. I don't know how prevalent these bots are, but grammatical errors can actually be a sign that a post is *not* made by an LLM... It's a debatable statement, but the reason I say this is that LLMs are more likely to misunderstand the content of a post than they are to mess up its grammar.
  3-4. These are the important questions. How prevalent is it? It's incredibly challenging to quantify how successful bots like these are and how common they would be...

It would be interesting to build one of my own (generating only innocent content, of course) just to see how effective it can be after optimization. It might provide some perspective XD

9

rainy_moon_bear t1_irlqtuc wrote

I think that the Fermi Paradox could indicate that the frontier of ASI is a potential cause of the great filter, if such a filter exists.
The lack of evidence that other life has expanded to the point of being noticeable by us means either that we will be the first, or that we are facing near-certain failure.

8