Submitted by Singularian2501 t3_zm22ff in MachineLearning
Paper: https://arxiv.org/abs/2212.03551
Twitter explanation: https://twitter.com/mpshanahan/status/1601641313933221888
Reddit discussion: https://www.reddit.com/r/agi/comments/zi0ks0/talking_about_large_language_models/
Abstract:
>Thanks to rapid progress in artificial intelligence, we have entered an era when technology and philosophy intersect in interesting ways. Sitting squarely at the centre of this intersection are large language models (LLMs). **The more adept LLMs become at mimicking human language, the more vulnerable we become to anthropomorphism, to seeing the systems in which they are embedded as more human-like than they really are.** This trend is amplified by the natural tendency to use philosophically loaded terms, such as "knows", "believes", and "thinks", when describing these systems. To mitigate this trend, this paper advocates the practice of repeatedly stepping back to remind ourselves of how LLMs, and the systems of which they form a part, actually work. The hope is that increased scientific precision will encourage more philosophical nuance in the discourse around artificial intelligence, both within the field and in the public sphere.
mocny-chlapik t1_j08w90k wrote
Can airplanes fly? They clearly do not flap their wings, so we shouldn't say they fly. In nature, we can see that flight is based on flapping wings, not on jet engines. Thus we shouldn't say that airplanes fly, since jet engines are clearly not capable of flight; they are merely moving air with their turbines. Even though we can see that airplanes are in the air, it is only a trick, and they are not actually flying in the philosophical sense of the word.