
Gortanian2 OP t1_jdxdpev wrote

Thank you for your response. The logistical issues I see in these articles that get in the way of unbounded recursive self-improvement, which is thought by many to be the main driver of a singularity event, are as follows:

  1. The end of Moore's law. This is something that the CEO of Nvidia himself has stated.
  2. The theoretical limits of algorithm optimization. There is such a thing as a perfect algorithm, and optimization beyond that is impossible.
  3. The philosophical argument that an intelligent entity cannot become smarter than its own environment or “creator.” A single person did not invent ChatGPT; it is instead the culmination of the sum total of civilization today. In other words, civilization creates AI, which is a dumber version of itself.

I do not believe these arguments are irrefutable. In fact, I would like them to be refuted. But I don’t believe you have given the opposition a fair representation.

3

BigZaddyZ3 t1_jdxfpvo wrote

Okay, but even these aren’t particularly strong arguments in my opinion:

  1. The end of Moore’s law has been mentioned many times, but it doesn’t necessarily guarantee the end of technological progression. (We are making strong advancements in quantum computing, for example.) Novel ways to increase power and efficiency within the architecture itself would likely make chip size irrelevant at some point in the future. Fewer, better chips > more, smaller chips, basically…

  2. It doesn’t have to be perfect to surpass all of humanity’s collective intelligence. That’s how far from perfect we are as a species. This is largely a non-argument in my opinion.

  3. This is just flat-out incorrect, and not based on anything concrete. It’s just speculative “philosophy” that doesn’t stand up to any real-world scrutiny. It’s like asserting that a parent could never create a child more talented or capable than themselves. It’s just blatantly untrue.

12

greatdrams23 t1_jdy192q wrote

Quantum computing is a long way away. You cannot just assume that or any other technology will give what is needed.

Once again, I look for evidence that AGI and the singularity will happen, but see none.

It just seems to be assumed that the singularity will happen, and therefore proof is not necessary.

2

BigZaddyZ3 t1_jdy1xyf wrote

Depends on what you define as a ”long way,” I guess. But the question wasn’t whether the singularity would happen soon; it was whether it would ever happen at all (barring some world-ending catastrophe, of course). So I think quantum computing is still relevant in the long run. Plus, it was just meant to be one example of a way around the limit of Moore’s law. There are other aspects that determine how powerful a technology can become besides the size of its chips.

2

drhugs t1_je5fefn wrote

> the size of it’s chips

If it's its it's its, if it's it is it's it's.

1

Gortanian2 OP t1_jdxkjna wrote

  1. Very strong counterargument. Love it.

  2. Again, strong, but I would argue that we don’t know where we are in terms of algorithm optimization. We could be very close or very far from perfect.

  3. I would push back and say that the parent doesn’t raise the child alone. The village raises the child. In today’s age, children are being raised by the internet. And it could be argued that the village/internet as a collective is a greater “intelligent agent” making a lesser one. Which does bring up the question of how exactly we made it this far.

1

SgathTriallair t1_jdxq29b wrote

Every single day, people discover new things that they didn't learn from society, thus increasing the knowledge base. There are zero examples of an intelligence being limited by what trained it.

7

Gortanian2 OP t1_jdxrco9 wrote

The first sentence is true and I agree with you. The second sentence is not. Feral children, those who were cut off from human contact during their developmental years, have been found to be incapable of living normal lives afterwards.

1

SgathTriallair t1_jdy07jz wrote

But those feral children are smarter than the trees that "trained" them. I didn't say that teaching has no value, but it doesn't put a hard cap on what can be learned.

Let's assume you are correct. IQ is not real, but we can use it as a stand-in for overall intelligence. If I have an IQ of 150, then I can train multiple intelligences with an array of IQs, but the top level is 150. That is the top, though, not the bottom. So I can train something from 1-150.

The second key point is that intelligence is variable. We know that different people and machines have different levels of intelligence.

With these two principles we would see a degradation of intelligence. We can simulate the process by saying that intelligence has a variability of 10 points.

Generation 1 - start at 150, gen 2 is 148.

Gen 2 - start 148, gen 3 is 145.

Gen 3 - start 145, gen 4 is 135...

Since variation can only decrease the intelligence at each generation, society will become dumber.
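
Here is a minimal sketch of that premise (illustrative numbers only; it just assumes a trainee can at best match its trainer and loses up to 10 IQ points of random variation each generation):

```python
import random

def degrading_generations(start_iq=150, generations=10, max_loss=10, seed=0):
    """Simulate the claim that a trainee can never exceed its trainer's level."""
    rng = random.Random(seed)
    iq = float(start_iq)
    history = [iq]
    for _ in range(generations):
        # Under the premise, each generation can only lose ground.
        iq -= rng.uniform(0, max_loss)
        history.append(round(iq, 1))
    return history

print(degrading_generations())  # a strictly decreasing sequence starting at 150
```

Under that premise intelligence could only ever trend downward, which is exactly what the rest of this comment argues against.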

However, we know that in the past we didn't understand quantum physics, we didn't understand hand washing, and if you go back far enough we didn't have speech.

We know through evolution that intelligence increases across generations. For society, it is beyond obvious that knowledge and capability in the world increase over time (we can do more today than we could ten years ago).

Your hypothesis is exactly backwards. Intelligence and knowledge are tools that are used to build even greater knowledge and intelligence. On average, a thing will be more intelligent than the thing that trains it, because the trainer can synthesize and summarize their knowledge, pass it on, and the trainee can then add more knowledge and consideration on top of what they were handed.

4

shmoculus t1_jdxyp11 wrote

Also, on 1: you might say the brain is a type of computer, and it has nothing to do with Moore's law. Imagine we could replicate a similar system using synthetic neurons.

2

shmoculus t1_jdxzbfk wrote

What thought experiment or real experiment would invalidate 3? You have to understand intelligence first to put system-wide constraints on it like that; I don't think we can make those assertions.

You also have human evolution, which came about in a low-intelligence environment and rapidly gained intelligence, so I'm not sure why that would be different for machines.

2

SgathTriallair t1_jdxprnf wrote

Moore's law is basically the principle that the use of tools allows one to build better tools. Technology has an exponential curve. It's possible that we run out of the ability to build smaller chips in the current style, but 3D chips, light-based computing, and quantum computing are examples of how we may be able to take the next step.

There is no good basis for a philosophical argument that dumb things can't create smart things. We only have a single data point, and that is humans. Inorganic matter (or, if you want to skip that, then single-celled organisms) eventually became us. We weren't guided by something smarter than us but arose from dumb materials. ChatGPT has also demonstrated multiple emergent behaviors that were not built into it.

4

SuperSpaceEye t1_jdxhwi6 wrote

  1. Yeah, Moore's law is already ending, but it doesn't really matter for neural networks. Why? As they are massively parallelizable, GPU makers can just stack more cores on a chip (be it by making chips larger, or thicker via 3D stacking) to speed up training further (see the sketch after this list).
  2. True, but we don't know where that limit is, and it just has to be better than humans.
  3. I really doubt it.
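
As a rough sketch of point 1 (assumed, purely illustrative serial fraction; not a claim about any real GPU), Amdahl's law is one way to see why stacking cores keeps paying off when nearly all of the training work parallelizes:

```python
def amdahl_speedup(cores: int, serial_fraction: float = 0.02) -> float:
    """Ideal speedup when only `serial_fraction` of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

for cores in (1, 8, 64, 512):
    print(f"{cores:>4} cores -> ~{amdahl_speedup(cores):.1f}x faster")
```

The smaller the serial fraction (and neural-network training is close to that limit), the more you gain from simply adding cores instead of shrinking transistors.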
2

Ok_Tip5082 t1_jdyvidj wrote

We're still going to be limited by fab capacity, rare earth minerals, energy, and maintenance technicians.

Supply chain still rules above all. Trade needs to exist until/unless post scarcity hits.

2

Ok_Tip5082 t1_jdyuuwy wrote

Energy is still finite, and AI uses an absolute fuck ton compared to the human brain. I don't see a practical way to scale it up with current technology that wouldn't also allow for genetic engineering to make us compete just as well, but more resiliently.

Also, we literally just had a 10-100x Carrington event miss us in the last two weeks. That shit would set us back to the industrial era at best, above-human AI or not.

If it turns out AGI can figure out a way to get infinite energy without destroying everything, hey, problem solved! No more conflict! Dark forest avoided!

1