Submitted by Gortanian2 t3_123zgc1 in singularity
BigZaddyZ3 t1_jdxfpvo wrote
Reply to comment by Gortanian2 in Singularity is a hypothesis by Gortanian2
Okay, but even these aren’t particularly strong arguments in my opinion:
1.
The end of Moore’s law has been mentioned many times, but it doesn’t necessarily mean the end of technological progress. (We’re making strong advances in quantum computing, for example.) Novel ways to increase power and efficiency within the architecture itself would likely make chip size irrelevant at some point in the future. Fewer, better chips > more, smaller chips, basically…
2.
It doesn’t have to be perfect to surpass all of humanity’s collective intelligence. That’s how far from perfect we are as a species. This is largely a non-argument in my opinion.
3.
This is just flat-out incorrect, and not based on anything concrete. It’s just speculative “philosophy” that doesn’t stand up to any real-world scrutiny. It’s like asserting that a parent could never create a child more talented or capable than themselves. It’s just blatantly untrue.
greatdrams23 t1_jdy192q wrote
Quantum computing is a long way away. You can’t just assume that it, or any other technology, will deliver what’s needed.
Once again, I look for evidence that AGI and the singularity will happen, but I see none.
It just seems to be assumed that the singularity will happen, and that proof is therefore not necessary.
BigZaddyZ3 t1_jdy1xyf wrote
Depends on what you define as a “long way,” I guess. But the question wasn’t whether the singularity would happen soon; it was whether it would ever happen at all (barring some world-ending catastrophe, of course). So I think quantum computing is still relevant in the long run. Plus, it was just meant to be one example of a way around the limit of Moore’s law. There are other aspects that determine how powerful a technology can become besides the size of its chips.
drhugs t1_je5fefn wrote
> the size of it’s chips
If it's its it's its, if it's it is it's it's.
BigZaddyZ3 t1_je6ns1a wrote
Ever heard of autocorrect?
Gortanian2 OP t1_jdxkjna wrote
1.
Very strong counterargument. Love it.
2.
Again, strong, but I would argue that we don’t know where we are in terms of algorithm optimization. We could be very close to perfect, or very far from it.
3.
I would push back and say that the parent doesn’t raise the child alone; the village raises the child. In today’s age, children are being raised by the internet. And it could be argued that the village/internet as a collective is a greater “intelligent agent” creating a lesser one. Which does bring up the question of how exactly we made it this far.
SgathTriallair t1_jdxq29b wrote
Every single day, people discover new things that they didn’t learn from society, thus increasing the knowledge base. There are zero examples of an intelligence being limited by what trained it.
Gortanian2 OP t1_jdxrco9 wrote
The first sentence is true and I agree with you. The second sentence is not. Feral children, those who were cut off from human contact during their developmental years, have been found to be incapable of living normal lives afterwards.
SgathTriallair t1_jdy07jz wrote
But those feral children are smarter than the trees that “trained” them. I didn’t say that teaching has no value, but it doesn’t put a hard cap on what can be learned.
Let’s assume you are correct. IQ is not real, but we can use it as a stand-in for overall intelligence. If I have an IQ of 150, then I can train multiple intelligences with a range of IQs, but the top level is 150. That is a cap on the top, though, not the bottom. So I can train something anywhere from 1 to 150.
The second key point is that intelligence is variable. We know that different people and machines have different levels of intelligence.
With these two principles, we would see a degradation of intelligence. We can simulate the process by saying that intelligence has a variability of 10 points:
Generation 1 - start at 150, gen 2 is 148.
Gen 2 - start 148, gen 3 is 145.
Gen 3 - start 145, gen 4 is 135...
Since variation can only decrease the intelligence at each generation, society would become dumber over time.
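Here’s a minimal Python sketch of that toy model (the 150 starting point and 10-point variability are just the numbers from the example above; the function name and parameters are mine):

```python
import random

def train_generations(start_iq=150.0, variability=10.0, generations=10, seed=0):
    """Toy model: each trainee's intelligence is capped at its trainer's
    level, with up to `variability` points of downward-only variation."""
    rng = random.Random(seed)
    iq = start_iq
    history = [iq]
    for _ in range(generations):
        # The trainee lands somewhere between (trainer - variability)
        # and the trainer's own level -- never above it.
        iq -= rng.uniform(0, variability)
        history.append(iq)
    return history

print(train_generations())  # monotonically decreasing, as the model predicts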
However, we know that in the past we didn't understand quantum physics, we didn't understand hand washing, and if you go back far enough we didn't have speech.
We know through evolution that intelligence increases across generations. For society, it is beyond obvious that knowledge and capability increase over time (we can do more today than we could ten years ago).
Your hypothesis is exactly backwards. Intelligence and knowledge are tools that are used to build even greater knowledge and intelligence. On average, a thing will be more intelligent than the thing that trains it, because the trainer can synthesize and summarize their knowledge, pass it on, and the trainee can then add more knowledge and consideration on top of what they were handed.
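For contrast, here’s the same sketch inverted to match this picture: the trainee inherits most of what the trainer synthesized, then adds discoveries of its own. (The `transfer_loss` and `new_discoveries` numbers are hypothetical, chosen only to show the direction of the drift.)

```python
import random

def cumulative_generations(start=150.0, transfer_loss=2.0, new_discoveries=10.0,
                           generations=10, seed=0):
    """Inverted model: each trainee inherits the trainer's synthesized
    knowledge (minus a small transfer loss) and adds its own discoveries
    on top, so the level drifts upward on average."""
    rng = random.Random(seed)
    level = start
    history = [level]
    for _ in range(generations):
        level += rng.uniform(0, new_discoveries) - transfer_loss
        history.append(level)
    return history

print(cumulative_generations())  # trends upward on average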
shmoculus t1_jdxyp11 wrote
Also, on 1: you might say the brain is a type of computer, and it has nothing to do with Moore's law. Imagine we could replicate a similar system using synthetic neurons.
shmoculus t1_jdxzbfk wrote
What thought experiment or real experiment would invalidate 3? You have to understand intelligence first to put system-wide constraints on it like that; I don't think we can make those assertions.
You also have human evolution, which came about in a low-intelligence environment and rapidly gained intelligence, so I'm not sure why that would be different for machines.