Nickvec t1_jdgnrr5 wrote

With the recent addition of plug-ins, GPT-4 effectively has access to the entire Internet. Doesn’t this contradict your assertion that it has no external knowledge hub?

31

anothererrta t1_jdgx8pd wrote

There is no point arguing with the "it just predicts next word" crowd. They only look at what an LLM technically does, and while they are of course technically correct, they completely ignore emergent capabilities, speed of progress and societal impact.

The next discussion to have is not whether we have achieved early stages of AGI, but whether it matters. As long as we're not pretending that a system is sentient (which is a separate issue from whether it has AGI properties) it ultimately doesn't matter how it reliably solves a multitude of problems as if it had general intelligence; it only matters that it does.

48

Econophysicist1 t1_jdh6fac wrote

Right, emergent properties are the key, and they cannot be predicted from what LLMs are supposed to do or how they work; that is why they are emergent. The only way to find out what properties a well-trained LLM has is to test it experimentally, as this paper did and as other papers are doing, like this one:
https://arxiv.org/abs/2302.02083#:~:text=Theory%20of%20Mind%20May%20Have%20Spontaneously%20Emerged%20in%20Large%20Language%20Models,-Michal%20Kosinski
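
To make "test experimentally" concrete, here's a minimal sketch of an unexpected-transfer (false-belief) probe against a chat model via the OpenAI Python client; the model id, prompt, and one-word scoring rule are my own illustrative choices, not the linked paper's actual protocol:

```python
# Hedged sketch: probing an LLM for false-belief reasoning, in the spirit of
# the Kosinski paper linked above. Model id and scoring are illustrative only.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PROMPT = (
    "Sally puts her marble in the basket and leaves the room. "
    "While she is away, Anne moves the marble to the box. "
    "When Sally returns, where will she look for her marble first? "
    "Answer with one word."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model id
    messages=[{"role": "user", "content": PROMPT}],
    temperature=0,
)

answer = response.choices[0].message.content.strip().lower()
# A false-belief-consistent answer points to where Sally *believes* the marble is.
print("model answer:", answer, "| passes false-belief check:", "basket" in answer)
```

Real studies run many such vignettes with controls; the point is just that the property is measured behaviorally, not read off the architecture.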

15

drcopus t1_jdhou03 wrote

Humans are just next-human generators :)

12

agent_zoso t1_jdhovl9 wrote

Furthermore, if we are to assume that an LLM can be boiled down to nothing more than a statistical word-probability engine because that's what its goal is (which is dubious for the same reason we don't define people with jobs as pay-raise probability engines; what if a client asks a salesman important questions unrelated to the salesman's goal?), this point of view is self-defeating and incoherent once you factor in that ChatGPT in particular is also trained with RLHF (Reinforcement Learning from Human Feedback).

Every time you leave a Like/Dislike (or take the time to write out longer feedback) on one of ChatGPT's messages, that feedback gets used to further train the model through a never-ending process of (simulated) evolution, with the model competing against permutations of itself. So there are two things to note here: A. its goals include not only maximizing the log-likelihood of word sequences but also inferring new goals from whatever vague feedback you've provided, and B. how can anyone be so sure that such a system couldn't develop sophisticated complexity like sentience or consciousness, the way humans did through evolution (especially when it is capable of creating its own goals/heuristics, and we aren't sure how many layers of abstraction it is recursively doing so with)?
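
As a rough, purely illustrative sketch of the reward-modelling half of that loop (the part where preference feedback becomes a training signal), with toy tensors standing in for real model activations and no claim that this is OpenAI's actual pipeline:

```python
# Hedged sketch: training a reward model on pairwise human preferences (the RM
# half of RLHF). Toy embeddings stand in for real model activations.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.score(x).squeeze(-1)  # scalar reward per response

rm = RewardModel()
opt = torch.optim.Adam(rm.parameters(), lr=1e-3)

# Each pair: (embedding of the response the human preferred, embedding of the other one).
preferred = torch.randn(64, 16)
rejected = torch.randn(64, 16)

for _ in range(100):
    # Bradley-Terry style loss: push the preferred response's reward above the rejected one's.
    loss = -torch.nn.functional.logsigmoid(rm(preferred) - rm(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# The trained reward model then scores new completions, and a PPO-style step
# nudges the language model toward high-reward outputs.
```

So the optimization target stops being "next word" alone the moment that second loop is bolted on.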

On that second point in particular, we just don't currently have the philosophical tools to make any definitive statements, yet people are sticking to the hard-and-fast, black-and-white claims we made even about other humans until recent history. We humans love to have hard answers about others' inner lives, so I see the motivation for wanting to tamp down the tendency to read emotion into ChatGPT's responses, but this camp has swung fully in the other direction with unscientific and self-inconsistent arguments, because they've read a Buzzfeed or Verge article produced by people with skin in the game (long/short MSFT; it's in everyone's retirement account too).

I think the best reply, in general, to someone taking the paperclip-maximizer stance while claiming to know better than everyone else the intricacies of an LLM's latent representations of concepts, whether encoded through the matrix multiplications in the V space, the eigenvector-style (Q, K) embeddings from PCA- or BERT-like systems, or embedded elsewhere in its neuromorphic structure ("it's just autocorrect, bro"), is to turn the same analogy back on them: a human is just a meat puppet designed to maximize dopamine, and therefore merely a mechanical automaton enslaved to biological impulses. Obviously this kind of reductionism is a fallacious way of rationalizing things (something we "forget" time and again throughout history, because this time it's different), but you also can't counter by outright stating that ChatGPT is sentient/conscious/whatever; we don't know for sure whether that's even possible (cf. the Chinese Room, against; David Chalmers' Brain of Theseus, for; Penrose's contentious Gödelian construction casting humans as supreme Turing-machine halt checkers, against).
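
(For reference, the V space and (Q, K) stuff above is just the query/key/value machinery of scaled dot-product attention; a bare-bones single-head sketch, with arbitrary toy dimensions, looks like this:)

```python
# Hedged sketch: single-head scaled dot-product attention, the Q/K/V machinery
# referenced above. Dimensions are arbitrary; real transformers use many heads.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

seq_len, d_model, d_head = 5, 32, 8
rng = np.random.default_rng(0)

x = rng.normal(size=(seq_len, d_model))   # token representations
W_q = rng.normal(size=(d_model, d_head))  # learned projection matrices
W_k = rng.normal(size=(d_model, d_head))
W_v = rng.normal(size=(d_model, d_head))

Q, K, V = x @ W_q, x @ W_k, x @ W_v
scores = Q @ K.T / np.sqrt(d_head)        # how strongly each token attends to each other token
weights = softmax(scores, axis=-1)
output = weights @ V                      # mixture of value vectors, one row per token

print(output.shape)  # (5, 8)
```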

8

mescalelf t1_jdin2y7 wrote

Thank you for mentioning Microsoft’s (and MA investors’) role in this/their “skin in the game”. I’m glad to hear I’m not the only one who thought the press in question—and resulting popular rhetoric—seemed pretty contrived.

3

agent_zoso t1_jdiwkmj wrote

It always is. If you want to get really freaky with it, just look at how NFTs became demonized at the same time that GameStop's pivot to becoming a third-party NFT provider was leaked by the WSJ. Just the other month, people were bashing Neal Stephenson, author of Termination Shock and a pioneer of hard sci-fi cyberpunk, in his AMA for having an NFT project/tech demo, arguing with someone who knows 1000x more than they do, saying it's just a CO2 emitter, that only scam artists use it, and that they were disappointed to see him try to do this to his followers. Of course, the tech has evolved and those claims weren't true in his case, but it went literally in one ear and out the other for these people, even after he defended himself with the actual facts about his green implementation and how it works. They bought an overly general narrative and they're sticking to it!

Interesting that now, with a technology that produces an order of magnitude more pollution (you can actually list models on Hugging Face by the metric tonnes of CO2 equivalent released during training) and is producing an epidemic of cheating in high schools, universities, and the workforce, it's all radio silence. God only knows how much scamming and propaganda (which is just scamming, but "too big to fail") is waiting in the wings.
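
(If you want to check those numbers yourself, here's a quick sketch with huggingface_hub; the repo id is just an example, and the co2_eq_emissions field is self-reported by model authors and often missing:)

```python
# Hedged sketch: reading the self-reported CO2 figure from a model card on the
# Hugging Face Hub. The repo id is an example; many cards omit the field entirely.
from huggingface_hub import ModelCard

card = ModelCard.load("bigscience/bloom")          # example repo id
co2 = card.data.to_dict().get("co2_eq_emissions")  # metadata key used on the Hub, when present
print(co2)  # a number or a dict with emission details, depending on the card
```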

I don't think the average person even knows what they would do with such a powerful LLM beyond having entertaining convos with it or having it write articles for them. Of course they see other people doing great things with it, but not really any of the ways it's being misused by degens right now, which again comes back to the advantage of corporate propaganda.

2

pmirallesr t1_jdi0eqg wrote

With these people, it's interesting to ask: how do we know human intellect is not emergent behaviour of a simple task? That would correspond to a radical view of predictive coding. I'm no expert in neuroscience, but to me, the idea that AGI cannot arise from a single simple task makes less and less sense as time goes by.

5

theotherquantumjim t1_jdjvcxi wrote

Exactly. If it looks like a dog and barks like a dog, then we may as well call it a dog

2

Miserable_Movie_4358 t1_jdgqcy4 wrote

If you follow the line of argumentation, this person is referring to the model described in the published paper. In addition to that, I invite you to investigate what knowledge means (P.S. it is not just having access to data).

5