Andriyo t1_jedv8ns wrote
Reply to Can you please stop answering technical/meta questions with „ask chatgpt“ or [chatgpt answer]? This is exhausting as f, and makes me worried about a dystopian future where people never use their own mind anymore but ask an AI basically everything, as if using a calculator for 5*4 or so. by BeginningInfluence55
It's only fair for ChatGPT to have a voice in this sub :)
Andriyo t1_jeds606 wrote
Reply to comment by theotherquantumjim in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Maybe it's my background in software engineering, but truthiness to me is just a property that can be assigned to anything :)
Say, the statement 60 + 2 = 1 is also true for people who are familiar with how we measure time (62 minutes is 1 hour and 2 minutes).
Anyway, most children do rote-memorize 1+1=2, 1+2=3; they even have posters with addition tables in school. They are also shown examples like "one car", "one apple", etc., which is basically what LLMs are doing. Long story short, an LLM is capable of doing long arithmetic if you ask it to do it step by step. The only limitation so far is the context length.
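To illustrate what "step by step" buys you, here is a toy sketch (illustrative only, not how any model is implemented): long addition decomposes into tiny per-digit steps, and the only growing cost is the list of intermediate steps you have to keep around, loosely analogous to context length.

```python
# A toy sketch of step-by-step long addition: each step handles one
# digit plus a carry, so an arbitrarily long sum becomes many tiny,
# easy steps. The growing part is the trail of intermediate steps,
# loosely analogous to an LLM's context.
def add_step_by_step(a: str, b: str) -> str:
    a, b = a.zfill(len(b)), b.zfill(len(a))  # pad to equal length
    carry, digits = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        carry, digit = divmod(total, 10)
        digits.append(str(digit))
        print(f"{da} + {db} + carry -> digit {digit}, carry {carry}")
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_step_by_step("987654321", "123456789"))  # 1111111110
```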
Andriyo t1_jedirnp wrote
Reply to comment by theotherquantumjim in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
It is certainly fundamental to our understanding of the world, but if we all forgot tomorrow that 1+1=2, and all of math along with it, the world wouldn't stop existing :)
Andriyo t1_jedfs83 wrote
Reply to comment by Prestigious-Ad-761 in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
I'm not a specialist myself either, but I gather that LLMs are hard for humans to understand because the models are large, with many dimensions (features), and inference is probabilistic in some aspects (that's how they implement creativity). All of that combined makes it hard to follow what's going on. But that's true for any large software system; it's not unique to LLMs.
I use the word "understand" here in the sense that one is capable of predicting how a software system will behave for a given input.
Andriyo t1_jeam834 wrote
Reply to comment by theotherquantumjim in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
There is nothing fundamental behind 1+1=2. It's just the language we use to describe reality as we observe it as humans. And even beyond that, it's cultural: some tribes have "1", "2", "3", "many" math, and to them it is as "fundamental" as the integer number system is to us. The particular algebra of 1+1=2 was invented by humans (and some other species) because we are evolutionarily optimized to work with discrete objects, to detect threats and such.
I know Plato believed in the existence of numbers, or "Ideas", in a realm that transcends the physical world, but that's not verifiable, so it's just that: a belief.
So children learn the language of numbers and arithmetic like any other language, by training on examples, i.e. statistically. There might be some innate training that happened at the DNA level, so we're predisposed to learn about integers more easily, but that doesn't make "1+1=2" something that exists on its own, waiting to be discovered like, say, gravity or fire.
Andriyo t1_je8uj7t wrote
Reply to comment by theotherquantumjim in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
There is nothing fundamental about the rule 1 apple + 1 apple = 2 apples. It depends entirely on our anthropomorphic definition of what "1" of anything is. If I add two piles of sand together, I'll still get one pile of sand.
Mathematics is our mental model of the real world. It can be super effective in its predictions, but that's not always the case.
Kids just do what LLMs are doing: they observe that parents call one noun plus one noun "two nouns". What addition really is (with its commutative property, identity property, closure property, etc.) people learn much later.
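To make the pile-of-sand point concrete, here is a toy sketch (illustrative only; the numbers are made up): integer addition satisfies the textbook properties, while "adding" piles by merging them does not behave like counting.

```python
# A toy sketch: integer addition obeys the textbook properties people
# learn much later, while "adding" piles of sand does not, because
# merging changes what counts as "one".
a, b, c = 3, 5, 7
assert a + b == b + a               # commutative property
assert a + 0 == a                   # identity property
assert (a + b) + c == a + (b + c)   # associative property
assert isinstance(a + b, int)       # closure: a sum of integers is an integer

# Piles of sand: the mass adds up, but the pile count does not.
pile_masses_kg = [1.0, 2.5]         # two piles
merged_mass_kg = sum(pile_masses_kg)
pile_count_after_merge = 1          # 1 pile + 1 pile = 1 (bigger) pile
print(merged_mass_kg, "kg in", pile_count_after_merge, "pile")
```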
Andriyo t1_je8t4s9 wrote
Humans are social creatures that tend to form hierarchies (if only because we tend to be of different ages). So there will always be situations where you become part of an organization and some social transaction is going on.
Specifically, around AI there will be new kinds of jobs:
- AI trainers - working on the input data for the models
- AI psychologists - debugging issues in the models
- AI integrators - working on implementing AI output. Say, a software engineer who implements a ChatGPT plugin, or a doctor who reviews a diagnosis given by an AI before passing it on to the patient, etc.
So the majority of AI jobs will be around alignment - making sure the AI does what humans want it to do, through oversight, proper training, debugging, etc.
Andriyo t1_je8qop2 wrote
Reply to comment by No_Ninja3309_NoNoYes in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Oh yeah, the machines lack "the soul" :))
Andriyo t1_je8qj9c wrote
Reply to comment by StevenVincentOne in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
I wouldn't call how it operates a black box - it's just tensor operations and some linear algebra, nothing magical.
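As a rough illustration (a minimal sketch of standard scaled dot-product attention, not the internals of any particular model), the core step of a transformer boils down to a few matrix multiplications and a softmax:

```python
# A minimal sketch of scaled dot-product attention: just matrix
# multiplications and a softmax, i.e., plain linear algebra.
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity between queries and keys
    weights = softmax(scores)        # normalize to a probability per key
    return weights @ V               # weighted sum of values

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((4, 8)) for _ in range(3))  # 4 tokens, dim 8
print(attention(Q, K, V).shape)  # (4, 8)
```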
Andriyo t1_je8q3s6 wrote
Reply to comment by WarmSignificance1 in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
I'd argue that humans are trained on more data and the majority of it comes from our senses and the body itself. The texts that we read during our lifetime are probably just a small fraction of all input.
Andriyo t1_je8pre9 wrote
Reply to comment by StevenVincentOne in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
One needs a degree in mathematics to really explain why 2+2=4 (and to be aware that it might not always be the case). The majority of people do exactly what LLMs are doing: statistically inferring that the text "2+2=..." should be followed by "4".
Andriyo t1_je8pc91 wrote
Reply to comment by Cryptizard in The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
Our understanding is also statistical, based on the fact that the majority of texts we have seen use base-10 numbers. One can invent math where 2+2=5 (and mathematicians invent alternative arithmetics all the time). So your "understanding" is just formed statistically from the most common convention for finishing the text "2+2=...". Arguably, a simple calculator has a better understanding of addition, since it has a more precise model of the addition operation.
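As a toy sketch of that point (hypothetical; the operation names are made up), both operations below are internally consistent arithmetics, and neither completes "2+2=" with "4":

```python
# A minimal sketch of "inventing math where 2+2 isn't 4". Both
# operations are perfectly consistent; they just model something
# other than everyday counting.

# Clock-style (modular) arithmetic: on a dial with 3 positions, 2+2 = 1.
def add_mod(a: int, b: int, modulus: int) -> int:
    return (a + b) % modulus

# Shifted addition: define a (+) b = a + b + 1. It is commutative and
# associative, its identity element is -1, and here 2 (+) 2 = 5.
def add_shifted(a: int, b: int) -> int:
    return a + b + 1

print(add_mod(2, 2, 3))   # 1
print(add_mod(2, 2, 4))   # 0
print(add_shifted(2, 2))  # 5
```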
Andriyo t1_je8o2sl wrote
Reply to The argument that a computer can't really "understand" things is stupid and completely irrelevant. by hey__bert
To understand something is to have a model of it that allows for predictions of future events. The better the predictions, the better the understanding. Thanks to transformers, LLMs can create "mini-models" (contexts) of what's being talked about, so I call that "understanding". It's limited, yes, but it allows LLMs to reliably predict the next word.
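In that spirit, here is a minimal toy sketch of "understanding as prediction" (not how real LLMs work; the corpus and function names are made up): a bigram counter that predicts the next word, where better counts mean better predictions.

```python
# A toy "model" of text: count which word follows which, then predict
# the statistically most likely next word.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word: str) -> str:
    """Predict the most frequently observed next word."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' (seen twice after 'the')
print(predict_next("cat"))  # 'sat' (first of the tied options)
```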
Andriyo t1_jedw5r5 wrote
Reply to We have a pathway to AGI. I don't think we have one to ASI by karearearea
Right, that's why AI needs to be multimodal and able to observe the world directly, bypassing the text stage.
We use text for learning today because it's trivial to train on text and to verify the results. But I think you're right that we will hit the limit of how much knowledge there is in those texts.
For example, ChatGPT might be able to prove that Elvis is alive by analyzing the lyrics he wrote during his life and some obscure manuscripts from some other person in Argentina in 1990, and deducing that it was the same person. That would be net-positive knowledge added by ChatGPT just by analyzing all the text data in the world. But it won't be able to detect that, say, the Earth's magnetic field is weakening without a direct measurement or a text somewhere saying so.