LetterRip t1_j78cexp wrote
Reply to comment by spiritus_dei in [D] Are large language models dangerous? by spiritus_dei
> These models are adept at writing code and understanding human language.
They are extremely poor at writing code. They have no understanding of human language beyond the mathematical relationships between vector representations (see the sketch below).
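To make that concrete, here is a minimal sketch of what those "mathematical relationships between vector representations" amount to. The embedding values below are made up for illustration; real models learn them from training data, but the "meaning" of a token is still just geometry over learned vectors.

```python
import numpy as np

# Hypothetical 4-dimensional embeddings for three tokens (values are invented).
embeddings = {
    "king":  np.array([0.8, 0.1, 0.6, 0.2]),
    "queen": np.array([0.7, 0.2, 0.6, 0.3]),
    "apple": np.array([0.1, 0.9, 0.2, 0.8]),
}

def cosine_similarity(a, b):
    """'Relatedness' here is just the angle between two vectors, nothing more."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high (~0.99)
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low (~0.36)
```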
> They can encode and decode human language at human level.
No, they cannot. Try any material with long-range or complex dependencies and they completely fall apart.
> That's not a trivial task. No parrot is doing that or anything close to it.
Difference in scale, not in kind.
> Nobody is going to resolve a philosophical debate on consciousness or sentience on a subreddit. That's not the point. A virus can take an action and so can these models. It doesn't matter whether it's a probability distribution or just chemicals interacting with the environment obeying their RNA or Python code.
No, they can't. They have no volition. A language model only takes a sequence of tokens and predicts which tokens are most probable next (see the sketch below).
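Here is a minimal sketch of what that looks like in practice, assuming the `transformers` and `torch` libraries and the public GPT-2 checkpoint. The model takes a token sequence in and emits a probability distribution over the next token; that is the entire operation.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The cat sat on the", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (1, seq_len, vocab_size)

# Distribution over the vocabulary for the next token only.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id)):>10s}  p={prob.item():.3f}")
# The model stops here: it outputs a distribution, it does not decide to act.
```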
> A better argument would be that the models in their current form cannot take action in the real world, but as another Reddit commentator pointed out they can use humans as intermediaries to write code, and they've shared plenty of code on how to improve themselves with humans.
They have no volition, and no planning or goal-oriented behavior. The lack of actuators is the least important factor.
You seem to lack a basic understanding of machine learning or of the neurological basis of psychology.