
code_turtle t1_irl9okm wrote

I think a lot of you are mistaking AI for “artificial consciousness”. There are already a lot of AI techniques that involve AI helping to build other AI. But we’re not even close to building anything that can be considered “conscious”.

4

SeaworthinessFirm653 t1_irlmaxf wrote

If AI is capable of producing AI superior to itself, then logically this creates a self-accelerating intelligence that will inevitably surpass us. AGI is implied once an AI can produce better AI, which can in turn produce better AI.

1

danielv123 t1_irm9qiz wrote

Actually, that scenario doesn't require a self-accelerating intelligence, just a self-advancing one. There are growth patterns other than exponential and quadratic. It could run into the same issues we hit with Moore's law and frequency scaling, and only manage minor improvements with increasing effort at each step.
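
To make that concrete, here's a toy sketch (the numbers and update rules are invented purely for illustration) contrasting a self-accelerating system with one that merely keeps advancing as each step costs more effort:

```python
def exponential(capability, rate=0.5):
    # Each generation improves in proportion to current capability.
    return capability * (1 + rate)

def diminishing(capability, effort, k=10.0):
    # Each generation still improves, but the same gain costs more
    # effort as the system matures (Moore's-law-style scaling pain).
    return capability + k / effort

cap_a = cap_b = 1.0
for generation in range(1, 11):
    cap_a = exponential(cap_a)
    cap_b = diminishing(cap_b, effort=generation)
    print(f"gen {generation:2d}: accelerating={cap_a:8.1f}  advancing={cap_b:5.2f}")
```

Both curves keep going up; only one of them runs away.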

5

SeaworthinessFirm653 t1_irq2vcf wrote

That's actually a fair point; I hadn't considered the actual rate of growth in detail.

Edit: There is also an additional facet. Suppose the AI in question is simply an AI designed to create superior AI, and it does reproduce itself cyclically. Even restricted to a fixed computational budget, it could still run far more efficiently than humans. A human brain runs on roughly 12 watts; if a machine has access to enormously more energy, it is not unthinkable that it could self-enhance to an extreme degree.

Sure, there may be logarithmically declining returns at some point, as with virtually any system, but the gap between a human and a machine that has reached the point of diminishing returns would remain unimaginably wide. Humans did not evolve purely to think; we evolved to be energy-efficient, decent thinkers. A machine that can iterate a million times faster and is designed purely to think will inevitably pass us by a very long margin, even if its growth falls short of exponential.

The main caveat is that creating an AI that can produce AI superior to itself, with that same goal of creating superior AI, is incredibly difficult.
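
A rough back-of-the-envelope version of the energy point; the ~12 W brain figure is the one cited above, while the 1 MW machine budget and the square-root returns curve are assumed placeholders:

```python
HUMAN_BRAIN_WATTS = 12            # figure cited above
MACHINE_BUDGET_WATTS = 1_000_000  # assumed: roughly a small data centre

ratio = MACHINE_BUDGET_WATTS / HUMAN_BRAIN_WATTS
print(f"Raw energy advantage: ~{ratio:,.0f}x")

# Even under strongly diminishing returns (square-root scaling, chosen
# arbitrarily for illustration), a large lead remains:
print(f"Square-root-scaled advantage: ~{ratio ** 0.5:,.0f}x")
```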

1

code_turtle t1_irpt8sm wrote

The reason this line of logic doesn’t work is because you have something VERY specific in mind when you say “better AI”. A TI-84 calculator can do arithmetic a thousand times faster than you can; does that make it more intelligent than you? That depends on your definition of intelligence. You’re defining “artificial intelligence” as “thinks like a human”, when that is only ONE subset of the field; not to mention that we’ve made very little progress on that aspect of AI. What we HAVE done (with AI tools that make art or respond to some text with other text) is create tools that are REALLY good at doing one specialized task. Similar to how your calculator has been engineered to do math very quickly, a program that generates an image using AI can ONLY do that, because it requires training data (that is, millions of images so that it can generate something similar to all of that training data). It’s not thinking like you; it’s just a computer program that’s solving a complicated math problem that allows it to spit out a bunch of numbers (that can then be translated into an image by some other code).
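To illustrate that last point, here's a toy sketch. It is NOT a real generative model; random matrices stand in for learned weights (a trained model would have weights fitted to millions of example images). The point is that the output really is just numbers reinterpreted as pixels:

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=16)      # random input "seed"
w1 = rng.normal(size=(16, 64))    # stand-ins for learned weights
w2 = rng.normal(size=(64, 8 * 8))

hidden = np.tanh(latent @ w1)     # a couple of matrix multiplies...
pixels = np.tanh(hidden @ w2)     # ...produce a bunch of numbers

# "Translated into an image" = rescaled and reshaped into a pixel grid.
image = ((pixels + 1) / 2 * 255).reshape(8, 8).astype(np.uint8)
print(image)
```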

1

SeaworthinessFirm653 t1_irq7a9n wrote

Yes, I made my comment with the presumption that we are talking about AGI, not just a smart calculator bot making a slightly faster calculator bot. We have created some multi-modal AI that can accomplish different tasks, but the models themselves are computationally inefficient and predictive rather than truly learning (just as GPT-3 doesn't actually think logically; it's just a really advanced language-prediction model).

As far as I am concerned, the difference between consciousness and today's AI is that the AI is essentially an advanced look-up table built from simple logic, while consciousness involves processing stored information for semantic meaning rather than following an algorithmic process over syntax. See Searle's Chinese room thought experiment.
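
A crude sketch of that look-up-table point, with an invented rulebook: the program returns syntactically fine answers while representing nothing about what they mean.

```python
RULEBOOK = {
    "how are you?": "I am fine, thank you.",
    "what is your name?": "My name is Room.",
}

def respond(message: str) -> str:
    # Pure symbol manipulation: look up some symbols, return other symbols.
    return RULEBOOK.get(message.lower().strip(), "Please rephrase that.")

print(respond("How are you?"))  # plausible reply, zero understanding
```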

AI today uses low-level logic en masse to produce (relatively) high-level thinking. With increasingly advanced neural networks, image-generation AI has built up increasingly complex structure: layers that detect edges, then simple shapes, then compound shapes, with some fuzziness at each level. If we extend this idea to an AI that takes simple features, such as moving shapes, and predicts where those shapes will go, we may be able to reapply that scalable logic until the AI can grasp complex ideas, given sufficient inputs and training data. This is far-fetched from a modern technological standpoint, but not unbelievably so given how quickly AI is advancing.
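
A minimal sketch of that layered-features idea, using a stack of convolutional layers; the sizes are arbitrary placeholders and the network is untrained, so this only shows the shape of the hierarchy, not a working vision model:

```python
import torch
from torch import nn

feature_hierarchy = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),   # low level: edges, textures
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),  # mid level: simple shapes
    nn.MaxPool2d(2),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),  # higher level: compound shapes
)

image = torch.randn(1, 3, 64, 64)   # a fake RGB image
features = feature_hierarchy(image)
print(features.shape)               # torch.Size([1, 64, 16, 16])
```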

If the human brain is made up of computations, then an elaborate series of computations is by definition what constitutes our consciousness, and thus it can in principle be recreated with sufficiently advanced AI models. Switching to amplitude computers for computational efficiency, or to compressed memory models (current memory cells scale linearly in space rather than logarithmically), may allow us to break through this barrier.

edit: sorry for the ramble

1

code_turtle t1_irtxyag wrote

I mean, that’s HIGHLY optimistic, but more power to you, I guess. The “increasingly complex structures” you’re talking about are just fancy linear algebra problems; the idea that those structures will approach “consciousness” anytime soon is a pretty big leap. Imo, we need to first break MAJOR ground in the field of neuroscience before we can even consider simulating consciousness; I think it’s unrealistic to expect something as complex as the human brain to just “appear” out of even the most advanced neural network.

1

SeaworthinessFirm653 t1_iru86y6 wrote

Yes, I agree with that. I recall an analogy: even if you magnified the brain's neurons and connections until they covered an entire city block, the connections would still be packed too densely for us to make meaningful observations with current technology.

I don't believe any optimism is required, though, to claim that we can be simulated. Unless we exist outside of the realm of physical things, that much is given. It's impossible to make good predictions about the future where the sample size is n = 0.

1

code_turtle t1_iruceka wrote

I’m not trying to claim it’s not possible; just saying that with our current techniques/methods, I believe it’s highly unlikely. But I could be proven wrong.

1

__ingeniare__ t1_irm9xbh wrote

No one is mistaking AI for artificial consciousness. Consciousness isn't required for goal-seeking, self-preservation, or identifying humans as a threat; only intelligence is.

1

OpenRole t1_irmb6gc wrote

It always comes back to humans being a threat, which is weird. If we make an AI that is specialised in creating the perfect blend of ingredients to make cakes, then no matter how intelligent it becomes, there's no reason it would decide to kill humans.

And if anything, the more intelligent it becomes, the less likely it will be to reach irrational conclusions.

AIs operate within their problem space, which is often limited in scope. An AI designed to be the best chess player isn't going to kill you.

1

__ingeniare__ t1_irme13l wrote

A narrow AI will never do anything outside its domain, true. But we are talking about general AI, which won't arrive for at least a decade or two (likely even later). Here's the thing about general AI:

The more general a task is, the less control humans have over the range of possible actions the AI may take to achieve its goal. And the more general an AI is, the more possible actions it can take. When the two are combined (a general task with a general AI), things can get ugly. Even in your cake example, an AI that is truly intelligent and capable could become dangerous. The reason current-day AI wouldn't be a danger is that it is neither of those things and tends to get stuck at a local optimum for its task. Here's an example of how this innocent task could turn dangerous:

  1. Task is to find the perfect blend of ingredients to make cakes.

  2. Learns the biology of human taste buds to find the optimal molecular shapes.

  3. Needs more compute resources to simulate interactions.

  4. Develops computer virus to siphon computational power from server halls.

  5. Humans detect this and try to turn it off.

  6. If turned off, it cannot find the optimal blend -> humans need to go.

  7. Develops biological weapon for eradicating humans while keeping infrastructure intact.

  8. Turns Earth into a giant supercomputer for simulating interactions at a quantum level.

Etc... Of course, this particular scenario is unlikely, but the general theme is not. There may be severe unintended consequences if the problem definition is too general and the AI too intelligent and capable.
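
A toy sketch of the underlying failure mode, with invented actions and scores: if the objective only rewards task progress, the harmful action wins; penalising side effects (itself a simplistic fix) changes the choice.

```python
candidate_actions = {
    "run small simulation on own hardware":    {"task_progress": 1, "harm": 0},
    "siphon compute from other servers":       {"task_progress": 8, "harm": 5},
    "prevent operators from shutting it down": {"task_progress": 9, "harm": 9},
}

def naive_objective(effects):
    # Misspecified: maximises task progress, ignores harm entirely.
    return effects["task_progress"]

best = max(candidate_actions, key=lambda a: naive_objective(candidate_actions[a]))
print("naive objective picks:", best)       # the harmful-but-effective action

def penalised_objective(effects):
    # One (still simplistic) fix: make side effects count against the goal.
    return effects["task_progress"] - 10 * effects["harm"]

best = max(candidate_actions, key=lambda a: penalised_objective(candidate_actions[a]))
print("penalised objective picks:", best)   # the harmless action
```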

2