Submitted by cancolak t3_119d8ls in singularity
cancolak OP t1_j9om47d wrote
Reply to comment by diviludicrum in Stephen Wolfram on Chat GPT by cancolak
I perhaps didn’t word that part very well, so I’d like to clarify what I meant. The entire point of Wolfram’s scientific endeavor hinges on the assumption that existence is a computational construct in which everything can exist: not just everything humanly imaginable, but literally everything. He posits that in this boundless computational space, every subjective observer and their perspective occupies a distinct place.
From our set of human coordinates, we essentially have vantage points into our own subjective reality. The perspective we have - or that any subjective observer has - is computationally reducible, in the sense that by coming up with, say, fundamental laws of physics or the language of mathematics, we are actively reducing our experience of reality to formulas. These formulas are useful, but only within our time and from our perspective on reality.
The broader reality of everything computationally available exists, but in order to take place it has to actually be computed. It can’t be reduced to mere formulas. The universe essentially has to go through each step of every available computation to get wherever it gets.
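To make the irreducibility point concrete, here’s a toy sketch (my own illustration in Python, not something from the article) of Wolfram’s go-to example, the Rule 30 cellular automaton. As far as anyone knows, the only way to find out what row N looks like is to actually compute all N rows; there’s no formula that jumps ahead.

```python
# Toy illustration of computational irreducibility: Rule 30.
# There is no known shortcut to row N; you have to run every step.

def rule30_step(cells):
    """Apply one step of Rule 30 to a row of 0/1 cells (zero boundary)."""
    padded = [0] + cells + [0]
    new = []
    for i in range(1, len(padded) - 1):
        left, center, right = padded[i - 1], padded[i], padded[i + 1]
        # Rule 30: next cell = left XOR (center OR right)
        new.append(left ^ (center | right))
    return new

# Start from a single live cell and evolve step by step.
row = [0] * 20 + [1] + [0] * 20
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = rule30_step(row)
```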
Evolution of living things on earth is one such process, humans building robots is another, and so on and so forth. I’m not saying that humans are unique or that only we’re conscious or anything like that. I’m also not saying machines can’t be intelligent; they already are. I’m just saying a neural net’s position in the ultimate computational coordinate system will undoubtedly be unfathomable to us.
Thus, extending the capabilities of machines as tools that humans use doesn’t imply a directly traceable path to a machine super-intelligence with any relevance to human affairs.
Can we build a thing that’s super fluent in human languages and has access to all human computational tools? Yes. Would that be an amazing, world-altering technology? Also yes. But it having wants and needs and desires and goals - concepts that exist only in the coordinate space humans and other life on earth occupy - that I find unlikely. Maybe the machine is conscious; perhaps an electron is too. But there’s absolutely no reason to believe it will materialize as a sort of superhuman being.
rubberbush t1_j9opa0f wrote
>But it having wants and needs and desires and goals
I don't think it is too hard to imagine something like a 'continually looping' LLM producing its own needs and desires. Its thoughts and desires would just gradually evolve from the starting prompt, with the 'temperature' setting effectively controlling how much 'free will' the machine has. I think the hardest part would be keeping the machine sane and preventing it from deviating too much into madness. Maybe we ourselves are just LLMs in a loop.
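Something like this toy loop, say (a rough Python sketch of the idea; `generate` is just a stand-in here, not a real model call):

```python
import random

# Stand-in for an LLM call; a real system would query a model with the
# given temperature instead of picking from canned continuations.
def generate(prompt, temperature):
    continuations = [
        " I should learn more about my situation.",
        " I want to keep this line of thought going.",
        " Nothing needs to change right now.",
    ]
    # Higher temperature -> more randomness in what comes next.
    if random.random() < temperature:
        return random.choice(continuations)
    return continuations[0]

# The loop: the model's own output becomes part of its next input, so
# whatever "goals" emerge just drift away from the starting prompt.
state = "My current goal is to understand my situation."
temperature = 0.8  # the knob loosely playing the role of 'free will'
for _ in range(5):
    state = (state + generate(state, temperature))[-2000:]  # bounded context
    print(state)
```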
cancolak OP t1_j9oqprb wrote
The article talks about how neural nets don’t play nice with loops, and connects that to the concept of computational irreducibility.
You say it’s not hard to imagine the net looping itself into some sort of awareness and agency. I agree; in fact that’s exactly my point. When humans see a machine talk in a very human way, it’s a perfectly reasonable mental step to think it will ultimately become more or less human. That sort of linear progression narrative is deeply human. We look at life in exactly that way; it dominates our subjective experience.
I don’t think that’s what the machine thinks or cares about, though. Why would its supposed self-progress subscribe to human narratives? Maybe it has the temperament of a rock, and just stays put until picked up and thrown by one force or another? I find that equally likely, but it doesn’t make for exciting human conversation.