Submitted by Mynameis__--__ t3_1220486 in technology
RamsesThePigeon t1_jdrcakx wrote
Reply to comment by seri_machi in Silicon Valley Elites Are Afraid. History Says They Should Be by Mynameis__--__
The comparison to neurons is flawed, and it’s one of the main reasons why this debate is even happening.
Chatbots do not understand or comprehend. They are physically incapable of non-linear thinking, and more data won't change that; it's a function of their underlying architecture. They don't have neurons, nor anything even functionally close. They absolutely do not innovate; they just iterate to a point that fools some humans.
If you consider yourself a writer, then you know that comprehension and empathy are vital to decent writing. Until a computer can experience those (which, to be completely clear, is fundamentally impossible for as long as it's built according to modern computing principles), it won't be able to match the work of someone who already experiences them.
Put bluntly, it isn’t doing anything impressive; it’s revealing that the stuff being thrown at it is less complex or reason-based than we have assumed.
Edit: Here’s a great example.
seri_machi t1_jdrvd6f wrote
I'm actually a programmer and at least know the basics of how machine learning works; I took a course in it, as well as one in data science. I do not, on the other hand, know how the brain or consciousness works. I'm therefore not asserting that it can "truly" comprehend or reason or empathize, but I think it can simulate comprehension and reasoning and empathy [pretty darn well from the outside](https://arxiv.org/abs/2303.12712). It's not perfect; it hallucinates and is poor at math. But it's certainly proving that our capacity for art and creativity isn't as unique as almost anyone would have argued, say, four years ago. To me it brings to mind the old aphorism that no art is truly original. My point about neurons was that there's no evidence of a magic spark inside of us that makes us creative; as far as anyone knows, we're just combining and recombining different ideas based on the data we've been "trained" on. There's no such thing as an "original" poem or piece of art (although ChatGPT does an excellent job of extracting themes from poems I wrote).
It was only a few years ago that we said [a computer could never win at Go](https://www.google.com/amp/s/www.businessinsider.com/ai-experts-were-way-off-on-when-a-computer-could-win-go-2016-3%3famp), and at the time it would have made you a laughingstock to claim that AI would soon be able to pass the Bar exam. The goalposts just keep shifting. You're going really against the grain if you think it's not doing anything impressive. If you've fooled around with ChatGPT and are drawing your conclusions from that, know that ChatGPT was neutered and isn't the cutting edge (although it's still very impressive, and I think it's pure contrarianism to state otherwise). Have some imagination for what the future holds based on the trend of the recent past. We're just getting started, for better and for worse. This field is exploding, and advances arrive in months, not years.
RamsesThePigeon t1_jds0kei wrote
> I'm actually a programmer and at least know the basics of how machine learning works
Then you know that I'm not just grasping at straws when I talk about the fundamental impossibility of building comprehension atop an architecture that's merely complicated instead of complex. Regardless of how much data we feed it or how many connections it calculates as being likely, it will still be algorithmic and linear at its core.
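To make that concrete, here's a toy sketch of what sits at the bottom of these systems (my own illustration with made-up weights, not any real model's code): deterministic arithmetic that always maps the same input to the same output.

```python
# A minimal sketch, assuming nothing beyond NumPy: one feed-forward block,
# the kind of calculation a language model repeats layer after layer.
import numpy as np

def toy_forward(x, W1, b1, W2, b2):
    """Matrix multiplies plus a fixed nonlinearity; nothing but arithmetic."""
    h = np.maximum(0, x @ W1 + b1)  # ReLU: still just operations on numbers
    return h @ W2 + b2              # same input always yields the same output

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4))
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 4)), np.zeros(4)
print(toy_forward(x, W1, b1, W2, b2))  # fully reproducible, step by step
```

Stack a few dozen of these blocks and you have the bulk of a language model's forward pass; nothing in the stack ever stops being a calculation.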
>It can extract themes from a set of poems I've written.
This statement perfectly represents the issue: No, it absolutely cannot extract themes from your poems; it can draw on an enormous database, compare your poems with things that have employed similar words, assess a web of associated terminology, then generate a response that has a high likelihood of resembling what you had primed yourself to see. The difference is enormous, even if the end result looks the same at first glance. There is no understanding or empathy, and the magic trick falls apart as soon as someone expects either of those.
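If it helps, here's a deliberately crude sketch of the kind of association I'm describing. The vectors and labels are invented for illustration, not pulled from any actual system:

```python
# "Theme extraction" as statistical proximity: pick whichever label's vector
# sits closest to the text's vector. All values here are hypothetical.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend embeddings for a poem and two candidate "theme" labels.
poem_vec  = np.array([0.8, 0.1, 0.3])
grief_vec = np.array([0.7, 0.2, 0.2])
joy_vec   = np.array([0.1, 0.9, 0.4])

scores = {"grief": cosine(poem_vec, grief_vec),
          "joy":   cosine(poem_vec, joy_vec)}
print(max(scores, key=scores.get))  # prints "grief": proximity, not insight
```

Scale the vectors up to thousands of dimensions and the associations get uncannily good, but the operation never changes: proximity, not understanding.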
>It wasn't long ago we said a computer could never win at Go, and it would make you a laughing stock if you ever claimed it could pass the Bar exam.
Experts predicted that computers would win at games like Go (or Chess, or whatever else) half a century ago. Authors of science fiction predicted it even earlier than that. Hell, we've been talking about "solved games" since at least 1907. All that victory requires is a large-enough set of data, the power to process said data in a reasonable span of time, and a little bit of luck. The same thing is true of passing the bar exam: A program looks at the questions, spits out answers that statistically and semantically match correct responses, then gets praised for its surface-level illusion.
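The solved-games point is easy to demonstrate with a toy: exhaustive search over a small game tree forces the outcome with zero understanding involved. This is my own Nim-style example; real Go programs add learned heuristics to prune the search, but the principle is the same.

```python
# Take-1-or-2 Nim: whoever takes the last stone wins. Brute-force search
# "solves" the game outright; no comprehension required, only compute.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(stones, my_turn):
    """+1 if I can force a win from this state, -1 if I can't."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if my_turn else 1
    outcomes = [best_outcome(stones - take, not my_turn)
                for take in (1, 2) if take <= stones]
    return max(outcomes) if my_turn else min(outcomes)

print(best_outcome(10, True))  # the result is forced before anyone "plays"
```

Go's tree is astronomically larger, which is why it took until 2016, but that's a question of compute and clever pruning, not of comprehension.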
>The goalposts just keep shifting.
No, they don't. What keeps shifting is the popular (and uninformed) perspective about where the goalposts were. Someone saying "Nobody ever thought this would be possible!" doesn't make it true, even if folks decide to believe it.
>You're going really against the grain if you think it's not doing anything impressive.
It's impressive in the same way that a big pile of sand is impressive. There's a lot of data and a lot of power, and if magnitude is all that someone cares about, then yes, it's incredible. That isn't how these programs are being presented, though; they're being touted as being able to write, reason, and design, but all they're actually doing is churning out averages and probabilities. Dig into that aforementioned pile even a little bit, and you won't find appreciation for your poetry; you'll just find a million tiny instances of "if X, then Y."
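Strip away the scale, and "if X, then Y" looks like this; a toy bigram model of my own invention, with a twelve-word corpus standing in for the internet:

```python
# A bigram table: for each word X, record every word Y seen to follow it,
# then generate text by repeatedly looking up "if X, then one of these Ys."
import random
from collections import defaultdict

corpus = "the sea is grey the sea is cold the sky is grey".split()
table = defaultdict(list)
for x, y in zip(corpus, corpus[1:]):
    table[x].append(y)  # a tiny instance of "if X, then Y"

random.seed(0)
word, out = "the", ["the"]
for _ in range(5):
    word = random.choice(table[word])  # emit a statistically likely follower
    out.append(word)
print(" ".join(out))  # fluent-looking output with no idea behind it
```

A real model conditions on thousands of tokens instead of one, and the table is implicit in billions of weights, but the operation is the same: look up what tends to follow, and emit it.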
Anyone who believes that's even close to how a human thinks is saying more about themselves than they are about the glorified algorithm.