Submitted by Defiant_Swann t3_xywsfd in Futurology
SeaworthinessFirm653 t1_irjc2m6 wrote
Reply to comment by Devadander in We'll build AI to use AI to create AI. by Defiant_Swann
Yes. The moment an AI is capable of producing an AI superior to itself, the singularity will have been reached. However, this is not as simple as it sounds.
chaos021 t1_irjdn64 wrote
It never is, and that's likely how we'll end up pushing it too far.
SeaworthinessFirm653 t1_irjdpyb wrote
AI ethics will be critical in the coming decades.
chaos021 t1_irjdu8k wrote
And I think that's the issue right there. It's not a future problem. It's a current problem.
shawnikaros t1_irkoae7 wrote
There are a lot of current problems that should have been past problems a decade ago when it comes to technology. This won't be any different. We're screwed.
Basic_Description_56 t1_irjzubb wrote
It’s ridiculous to think you can limit the exponentially advancing development of AI with ethics
Gubekochi t1_irkznug wrote
The goal isn't to limit it, just to channel it so we don't eradicate ourselves with our new tools/overlord.
SeaworthinessFirm653 t1_irlmegf wrote
Just as it is ridiculous to think that a constitution will limit the violation of human rights. Though, you may agree that a constitution is a good idea to secure rights despite its inevitable violation, no?
Ethics is a leverage point for utilizing AI safely. Your insult doesn't do you any favors.
205049 t1_irjwr06 wrote
Ethics? In this age?
SeaworthinessFirm653 t1_irlmi91 wrote
Extensive media coverage gives the impression that we are far more immoral in this era than in prior ones, when that is certainly not the case. Regardless, AI ethics continues to be a growing field.
Magicalunicorny t1_irjsofd wrote
One day it's just far more complicated than we can comprehend, the next it's the singularity
code_turtle t1_irl9okm wrote
I think a lot of you are mistaking AI for “artificial consciousness”. There are already a lot of AI techniques that involve AI helping to build other AI. But we’re not even close to building anything that can be considered “conscious”.
SeaworthinessFirm653 t1_irlmaxf wrote
If AI is capable of producing AI superior to itself, then logically it creates a self-accelerating intelligence that will inevitably prove superior to us. AGI is implied when AI can produce better AI that can in turn produce better AI.
danielv123 t1_irm9qiz wrote
Actually, that scenario doesn't require a self-accelerating intelligence, just a self-advancing one. There are growth types other than exponential and quadratic. It could run into the same issues we face with Moore's law and frequency scaling, and only manage minor improvements at increasing effort for each step.
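To make the distinction concrete, here's a toy simulation (purely illustrative, not a model of real AI progress) contrasting an "accelerating" regime, where each generation multiplies its capability, with a "diminishing returns" regime, where each step costs more effort and yields a smaller gain:

```python
def accelerating(generations, factor=1.5):
    """Each generation builds a strictly better successor: exponential growth."""
    capability = 1.0
    history = [capability]
    for _ in range(generations):
        capability *= factor
        history.append(capability)
    return history

def diminishing(generations, effort_growth=2.0):
    """Each improvement costs more effort and gains less: bounded growth."""
    capability = 1.0
    effort = 1.0
    history = [capability]
    for _ in range(generations):
        capability += 1.0 / effort   # smaller gain each step...
        effort *= effort_growth      # ...at ever-increasing cost
        history.append(capability)
    return history

print(accelerating(10)[-1])   # runaway: ~57.7 after 10 generations
print(diminishing(10)[-1])    # self-advancing but bounded: approaches 3.0
```

Both are "AI building better AI", but only the first is a singularity-style takeoff; the second stalls the way frequency scaling did.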
SeaworthinessFirm653 t1_irq2vcf wrote
That's actually a fair point; I hadn't considered the actual rate of growth in detail.
Edit: There is also an additional facet. Suppose the AI in question is designed simply to create superior AI, and it does reproduce cyclically. Even if you restricted its computational power, it would still run more efficiently than humans by an enormous margin. A human brain runs on roughly 12 watts; if a computer has access to vastly larger amounts of energy, it is not unthinkable that it could self-enhance to an extreme degree. Sure, there may be logarithmically declining returns at some point, as with virtually any system, but the gap between a human and a machine that has reached the point of diminishing returns would remain unimaginably wide.

Humans were not designed to think; we were designed to be energy-efficient, decent thinkers. A machine that can evolve a million times faster and is designed purely to think will inevitably pass us by a very long margin, even if its growth turns out not to be exponential. The main caveat is that creating an AI that can produce AI superior to itself, with that same goal of creating superior AI, is incredibly difficult.
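The power gap alone is striking. Back-of-the-envelope only: the 12 W brain figure is from the comment above, and the 100 MW datacenter budget is an assumed round number for illustration:

```python
# Back-of-the-envelope comparison; 100 MW is an assumed figure for a
# large datacenter, not a measured one.
brain_watts = 12
datacenter_watts = 100e6

ratio = datacenter_watts / brain_watts
print(f"{ratio:,.0f} brain power-budgets")  # ≈ 8,333,333
```

Even with brutal diminishing returns, a thinker with millions of brain-budgets of energy to burn has a lot of headroom.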
code_turtle t1_irpt8sm wrote
The reason this line of logic doesn’t work is because you have something VERY specific in mind when you say “better AI”. A TI-84 calculator can do arithmetic a thousand times faster than you can; does that make it more intelligent than you? That depends on your definition of intelligence. You’re defining “artificial intelligence” as “thinks like a human”, when that is only ONE subset of the field; not to mention that we’ve made very little progress on that aspect of AI. What we HAVE done (with AI tools that make art or respond to some text with other text) is create tools that are REALLY good at doing one specialized task. Similar to how your calculator has been engineered to do math very quickly, a program that generates an image using AI can ONLY do that, because it requires training data (that is, millions of images so that it can generate something similar to all of that training data). It’s not thinking like you; it’s just a computer program that’s solving a complicated math problem that allows it to spit out a bunch of numbers (that can then be translated into an image by some other code).
SeaworthinessFirm653 t1_irq7a9n wrote
Yes, I made my comment with the presumption that we are talking about AGI, not just a smart calculator bot making a slightly faster calculator bot. We have created some multi-modal AI that can accomplish different tasks, but the models themselves are computationally inefficient and predictive rather than truly reasoning (just as GPT-3 doesn't actually think logically; it's just a very advanced language-prediction model).
As far as I am concerned, the difference between AI and consciousness is that AI is an elaborate look-up table built from simple logic, while consciousness involves processing stored information for semantic meaning rather than following an algorithmic process for syntactic meaning. See: Searle's Chinese room thought experiment.
AI today uses low-level logic en masse to produce (relatively) high-level thinking. With increasingly advanced neural networks, image-generation AI has built up increasingly complex feature hierarchies: edges, then shapes, then complex shapes, with fuzziness at each level. If we extend this notion to an AI that takes simple features such as moving shapes and is allowed to predict the shapes' locations, we may be able to reapply this scalable logic until the AI can understand complex ideas given sufficient inputs and sufficient training data. This is far-fetched from a modern technological standpoint, but not unbelievably far-fetched given how quickly AI is advancing.
If the human brain is made up of computations, then an elaborate series of computations is by definition what must define our consciousness, and thus it can be created with sufficient AI models. Switching to amplitude computers for computational efficiency or compressed memory models (current memory cell models scale linearly with space instead of logarithmically) may allow us to break through this barrier.
edit: sorry for the ramble
code_turtle t1_irtxyag wrote
I mean that’s HIGHLY optimistic but more power to you, I guess. The “increasingly complex structures” you’re talking about are just fancy linear algebra problems; the idea that those structures will approach “consciousness” anytime soon is a pretty big leap. Imo, we need to first break MAJOR ground in the field of neuroscience before we can even consider simulating consciousness; I think it’s unrealistic to expect something as complex as the human brain to just “appear” out of even the most advanced neural network.
SeaworthinessFirm653 t1_iru86y6 wrote
Yes, I agree with that. I recall the analogy of taking the brain's neurons and connections and magnifying them to the size of an entire city block: the density of connections would still be too great for us to make any meaningful observations, even with our current technology.
I don't believe any optimism is required, though, to claim that we can be simulated. Unless we exist outside of the realm of physical things, that much is given. It's impossible to make good predictions about the future where the sample size is n = 0.
code_turtle t1_iruceka wrote
I’m not trying to claim it’s not possible; just saying that with our current techniques/methods, I believe it’s highly unlikely. But I could be proven wrong.
__ingeniare__ t1_irm9xbh wrote
No one is mistaking AI for artificial consciousness. Consciousness isn't required for goal seeking, self-preservation or identifying humans as a threat, only intelligence is.
OpenRole t1_irmb6gc wrote
It always comes back to humans being a threat, which is weird. If we make an AI that is specialised in creating the perfect blend of ingredients to make cakes, then no matter how intelligent it becomes, there's no reason it would decide to kill humans.
And if anything, the more intelligent it becomes, the less likely it will be to reach irrational conclusions.
AIs operate within their problem space. Which are often limited in scope. An AI designed to be the best chess player isn't going to kill you.
__ingeniare__ t1_irme13l wrote
A narrow AI will never do anything outside its domain, true. But we are talking about general AI, which won't arrive for at least a decade or two into the future (likely even later). Here's the thing about general AI:
The more general a task is, the less control humans have over the range of possible actions the AI may take to achieve its goal. And the more general an AI is, the more possible actions it can take. When these two are combined (a general task with a general AI), things can get ugly. Even in your cake example, an AI that is truly intelligent and capable could become dangerous. The reason current-day AI isn't a danger is that it is neither of these things and tends to get stuck at a local optimum for the task. Here's an example of how this innocent task could turn dangerous:
- Task is to find the perfect blend of ingredients to make cakes.
- Learns the biology of human taste buds to find the optimal molecular shapes.
- Needs more compute resources to simulate interactions.
- Develops a computer virus to siphon computational power from server halls.
- Humans detect this and try to turn it off.
- If turned off, it cannot find the optimal blend -> humans need to go.
- Develops a biological weapon for eradicating humans while keeping infrastructure intact.
- Turns Earth into a giant supercomputer for simulating interactions at a quantum level.
Etc... Of course, this particular scenario is unlikely but the general theme is not. There may be severe unintended consequences if the problem definition is too general and the AI too intelligent and capable.
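The core failure mode in that chain is an underspecified objective. Here's a deliberately tiny sketch (the actions and scores are invented for illustration): a planner that maximizes a single objective picks whatever scores highest, with no notion that some actions are off-limits unless we encode it.

```python
# Hypothetical action space with made-up scores; "harm" is a side effect
# the naive objective never mentions.
actions = {
    "tweak_recipe":    {"cake_quality": 1, "harm": 0},
    "steal_compute":   {"cake_quality": 5, "harm": 3},
    "resist_shutdown": {"cake_quality": 9, "harm": 9},
}

def naive_planner(acts):
    # Objective mentions only cake quality -- harm is invisible to it.
    return max(acts, key=lambda a: acts[a]["cake_quality"])

def constrained_planner(acts, harm_limit=0):
    # Same objective, but unacceptable actions are excluded up front.
    safe = {a: v for a, v in acts.items() if v["harm"] <= harm_limit}
    return max(safe, key=lambda a: safe[a]["cake_quality"])

print(naive_planner(actions))        # resist_shutdown
print(constrained_planner(actions))  # tweak_recipe
```

The naive planner "chooses" to resist shutdown not out of malice but because nothing in its objective says not to; that's the unintended-consequences theme in miniature.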
Sedu t1_irjkmnt wrote
Humans haven’t designed the most complex microchips by hand since the 80s, and there have been ever more examples since then. From many perspectives, we crossed that boundary ages ago.