Submitted by Defiant_Swann t3_xywsfd in Futurology
SeaworthinessFirm653 t1_irlmaxf wrote
Reply to comment by code_turtle in We'll build AI to use AI to create AI. by Defiant_Swann
If AI is capable of producing AI superior to itself, then logically it creates a self-accelerating intelligence that will inevitably prove superior to us. AGI is implied when AI can produce better AI that can produce better AI.
danielv123 t1_irm9qiz wrote
Actually, that scenario doesn't require a self-accelerating intelligence, just a self-advancing one. There are growth types other than exponential and quadratic. It could run into the same issues we face with Moore's law and frequency scaling, and only manage minor improvements with increasing effort at each step.
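To make the distinction concrete, here's a toy sketch (my own illustration, not a model of any real AI; all numbers are arbitrary) contrasting the two regimes: one where each generation improves in proportion to its current capability, and one where each step yields a smaller absolute gain, like frequency scaling hitting physical limits.

```python
# Toy comparison of two self-improvement regimes (illustrative only).
# "Capability" is an abstract number; each generation builds its successor.

def exponential_growth(capability, generations, rate=0.5):
    # Self-accelerating: each generation improves proportionally
    # to its current capability.
    history = [capability]
    for _ in range(generations):
        capability += rate * capability
        history.append(capability)
    return history

def diminishing_growth(capability, generations, effort=1.0):
    # Self-advancing only: each step yields a smaller absolute gain,
    # so total capability grows roughly logarithmically.
    history = [capability]
    for step in range(1, generations + 1):
        capability += effort / step
        history.append(capability)
    return history

exp_run = exponential_growth(1.0, 20)
dim_run = diminishing_growth(1.0, 20)
# After 20 generations the first run has exploded into the thousands,
# while the second has only crept up a few points.
```

Both runs are "AI making better AI"; only the first is the runaway scenario.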
SeaworthinessFirm653 t1_irq2vcf wrote
That's actually a fair point; I hadn't considered the actual rate of growth in detail.
Edit: There is also an additional facet. Suppose the AI in question is simply an AI designed to create superior AI, and it does reproduce cyclically. Even if you restricted it to a fixed amount of computational power, it could still run far more efficiently than humans. The human brain runs on roughly 12 watts; a machine with access to enormously more energy could plausibly self-enhance to an extreme degree. Sure, there may be logarithmically diminishing returns at some point, as with virtually any system, but the gap between a human and a machine that has reached its point of diminishing returns could still be unimaginably wide. Humans were not designed purely to think; we evolved to be energy-efficient, merely decent thinkers. A machine designed purely to think, iterating a million times faster, would inevitably pass us by a very long margin, even if its growth turns out to be sub-exponential. The main caveat: creating an AI that can produce AI superior to itself, with that same goal intact, is incredibly difficult.
code_turtle t1_irpt8sm wrote
The reason this line of logic doesn’t work is because you have something VERY specific in mind when you say “better AI”. A TI-84 calculator can do arithmetic a thousand times faster than you can; does that make it more intelligent than you? That depends on your definition of intelligence. You’re defining “artificial intelligence” as “thinks like a human”, when that is only ONE subset of the field; not to mention that we’ve made very little progress on that aspect of AI. What we HAVE done (with AI tools that make art or respond to some text with other text) is create tools that are REALLY good at doing one specialized task. Similar to how your calculator has been engineered to do math very quickly, a program that generates an image using AI can ONLY do that, because it requires training data (that is, millions of images so that it can generate something similar to all of that training data). It’s not thinking like you; it’s just a computer program that’s solving a complicated math problem that allows it to spit out a bunch of numbers (that can then be translated into an image by some other code).
SeaworthinessFirm653 t1_irq7a9n wrote
Yes, I made my comment with the presumption that we are talking about AGI, not just a smart calculator bot making a slightly faster calculator bot. We have created some multi-modal AI that can accomplish different tasks, but the models themselves are computationally inefficient and predictive rather than truly learning (just as GPT-3 doesn't actually think logically; it's just a really advanced language-prediction model).
As far as I am concerned, the difference between consciousness and AI is that an AI is an advanced look-up table built from simple logic, while consciousness involves processing stored information for semantic meaning rather than following an algorithmic process that only manipulates syntax. See Searle's Chinese room thought experiment.
AI today uses low-level logic en masse to produce (relatively) high-level thinking. With increasingly advanced neural networks, image-generation AI has come to rely on increasingly complex network structures: layers that detect edges, then shapes, then complex shapes, with fuzziness across these levels. If we extend this notion to an AI that takes simple features, such as moving shapes, and is trained to predict the shapes' locations, we may be able to reapply this scalable logic until the AI can understand complex ideas given sufficient inputs and training data. This is far-fetched from a modern technological standpoint, but not unbelievably so given how quickly AI is advancing.
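As a minimal sketch of the lowest "level" in that hierarchy, here's edge detection via convolution, the kind of operation a vision network's first layer performs (toy 1-D example of my own; real networks learn these filters from data rather than having them hand-written).

```python
# Toy illustration: the first "level" of a vision network, detecting
# edges with a convolution filter. Pure Python, 1-D for simplicity.

def convolve1d(signal, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as in neural nets)."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

# A step "image": a dark region followed by a bright region.
row = [0, 0, 0, 0, 1, 1, 1, 1]

# An edge-detecting kernel: responds only where brightness changes.
edge_kernel = [-1, 1]

response = convolve1d(row, edge_kernel)
# The response is nonzero only at the boundary between dark and bright.
```

Deeper layers then combine many such responses into detectors for shapes, then complex shapes, which is the scaling-up described above.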
If the human brain is made up of computations, then an elaborate series of computations is, by definition, what constitutes our consciousness, and so it can in principle be created with a sufficiently advanced AI model. Switching to amplitude computers for computational efficiency, or to compressed memory models (current memory cell models scale linearly with space rather than logarithmically), may allow us to break through this barrier.
edit: sorry for the ramble
code_turtle t1_irtxyag wrote
I mean that’s HIGHLY optimistic but more power to you, I guess. The “increasingly complex structures” you’re talking about are just fancy linear algebra problems; the idea that those structures will approach “consciousness” anytime soon is a pretty big leap. Imo, we need to first break MAJOR ground in the field of neuroscience before we can even consider simulating consciousness; I think it’s unrealistic to expect something as complex as the human brain to just “appear” out of even the most advanced neural network.
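For what it's worth, the "fancy linear algebra" point can be shown in a few lines: a dense neural-network layer is just a matrix-vector product plus a simple nonlinearity (weights here are arbitrary illustrative values, not from any trained model).

```python
# A dense neural-network layer, stripped to its linear algebra:
# output_i = relu(sum_j W[i][j] * x[j] + b[i]).
# Nothing here resembles "thought"; a deep network is many of these stacked.

def relu(x):
    # The nonlinearity: pass positives through, clamp negatives to zero.
    return max(0.0, x)

def dense_layer(weights, biases, inputs):
    return [
        relu(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(weights, biases)
    ]

W = [[0.5, -0.2],
     [0.1,  0.9]]
b = [0.0, -0.1]

out = dense_layer(W, b, [1.0, 2.0])
```

Training adjusts the numbers in `W` and `b`; the structure never changes, which is why the leap from this to consciousness is so large.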
SeaworthinessFirm653 t1_iru86y6 wrote
Yes, I agree with that. I recall an analogy: even if you magnified the brain's neurons and connections to the size of an entire city block, the connections would still be too dense to make any meaningful observations with our current technology.
I don't believe any optimism is required, though, to claim that we can be simulated. Unless we exist outside the realm of physical things, that much is a given. It's impossible to make good predictions about a future for which the sample size is n = 0.
code_turtle t1_iruceka wrote
I’m not trying to claim it’s not possible; just saying that with our current techniques/methods, I believe it’s highly unlikely. But I could be proven wrong.