Submitted by fortunum t3_zty0go in singularity
sumane12 t1_j1j5clw wrote
Reply to comment by fortunum in Hype bubble by fortunum
You bring up some good points. I think the recent optimism comes down to a few things:
- Even though ChatGPT is not perfect and not what most people would consider AGI, it's general enough to be massively disruptive to society. Even if no further progress is made, there's so much low-hanging fruit in terms of productivity that ChatGPT offers.
- GPT-4 is coming out soon and is rumoured to be trained on multiple data sets, so it should be even better at generalising.
- AI progress seems to be speeding up; we are closing in on surpassing humans on more measures than not.
- Hardware is improving, allowing for more powerful algorithms.
- Although Kurzweil isn't perfect at predicting the future, his predictions and timelines have been pretty damn close, so it's likely that this decade will be transformative in terms of AI.
You bring up a good point in questioning whether language is all that's needed for intelligence, and I think it possibly might be. Remember, language is our abstract way of describing the world, and we've designed it to encapsulate as much of our subjective experience as possible through description. Take my car, for example: you've never seen it, but if I give you enough information, enough data, you will eventually get a pretty accurate idea of how it looks. It's very possible that the abstractions in our words could, with enough data, be reverse engineered to represent the world we subjectively experience.

We know that our subjective experience is only our mind's way of making sense of the universe from a natural-selection perspective; the real universe could be nothing like it. It seems reasonable to me that the data we feed to large language models could give them enough information to develop a very accurate representation of our world, and to massively improve their intelligence based on that representation. Does this come with a subjective experience? I don't know. Does it need to? I also don't know. The more research we do, the more likely we are to understand these massively philosophical questions, but I think we are a few years away from that.
fortunum OP t1_j1jb5w8 wrote
Yeah, thanks for the reply, that's indeed an interesting question. With this approach it seems that intelligence is a moving target; maybe the next GPT could write something like a scientific article with actual results, or prove a theorem. That would be extremely impressive, but like you say, it doesn't make it AGI or get it closer to the singularity. With the current approach there is almost certainly no 'ghost in the shell'. It is uncertain whether it could reason, experience qualia, or be conscious of its own 'thoughts'. It is also unlikely to be self-motivated, autonomous to any real extent, or to have agency over its own thought processes, all of which are true of life on earth at least. So maybe we are looking for something that we don't prompt, but something that is 'on', similar to a reinforcement learning agent.
sumane12 t1_j1jfdui wrote
I'd agree, I don't think we are anywhere near a ghost-in-the-shell level of consciousness; however, a rudimentary, unrecognisable form may well have been created in some LLMs. But I think what's more important than intelligence at this point is productivity. I mean, what is intelligence if not the correct application of knowledge? And what we have at the moment is going to create massive increases in productivity, which is obviously required on the way to the singularity. Now, it could be that this is the limit of our technological capabilities, but that seems unlikely given the progress we have made so far and the points I outlined above.

Is some level of consciousness required for systems that seem to show a small level of intelligence? David Chalmers seems to think so. We still don't have an agreed definition of how to measure intelligence, but let's assume it's an IQ test. Well, I've heard that ChatGPT has an IQ of 83 (https://twitter.com/SergeyI49013776/status/1598430479878856737?t=DPwvrr36u9y8rGlTBtwGIA&s=19), which is low-level human. Is intelligence, as measured by an IQ test, all that's needed? Can we achieve superintelligence without a conscious agent? Can we achieve it with an agent that has no goals and objectives? These are questions we aren't fully equipped to answer yet, but they should become clearer as we keep building on what has been created.