BigZaddyZ3 t1_jec0gun wrote

I never said I was a communist… Your first comment had a heavy “anti-capitalist” tone to it.

And lol if you think AI companies are somehow immune to the pitfalls of greed and haste. You’re stuck in la-la land if you think that, pal. How exactly do you explain guys like Sam Altman (CEO of OpenAI) admitting that even OpenAI is a bit scared of the consequences?

1

BigZaddyZ3 t1_jebxmik wrote

Yeah… because plastic manufacturers totally considered the ramifications of what they were doing to the world, right? All those companies destroying the ozone layer totally took that into consideration before releasing their ozone-destroying products to market, right? Cigarette manufacturers totally knew they were selling cancer to their unsuspecting consumers when they first put their products on the market, right? Social media companies totally knew their products would be disastrous for young people’s mental health, right? Get real, buddy.

Just because someone is developing a product doesn’t mean they have a full grasp of the consequences of releasing said product. For someone who seems so against capitalism, you sure put a lot of faith in certain capitalists…

2

BigZaddyZ3 t1_je8dh2a wrote

This thought process only works if you believe good and bad are completely subjective, which they aren’t.

There are two decently objective ways to define bad people.

  1. People who are a threat to the well-being of others around them (the other people being innocent, of course).

  2. People who are bad for the well-being of society as a whole.

For example, there’s no intelligent argument that disputes the idea that a serial killer targeting random people is a bad person. It literally cannot be denied by anyone of sound mind. Therefore we can conclude that some people are objectively good and some are objectively bad.

1

BigZaddyZ3 t1_jdy1xyf wrote

Depends on what you define as a “long way,” I guess. But the question wasn’t whether the singularity would happen soon. It was whether it would ever happen at all (barring some world-ending catastrophe, of course). So I think quantum computing is still relevant in the long run. Plus, it was just meant to be one example of a way around the limits of Moore’s law. There are other factors that determine how powerful a technology can become besides the size of its chips.

2

BigZaddyZ3 t1_jdxfpvo wrote

Okay, but even these aren’t particularly strong arguments in my opinion:

  1. The end of Moore’s law has been predicted many times, but it doesn’t necessarily mean the end of technological progress. (We’re making strong advances in quantum computing, for example.) Novel ways to increase power and efficiency within the architecture itself would likely make chip size irrelevant at some point in the future. Fewer, better chips > more, smaller chips, basically…

  2. It doesn’t have to be perfect to surpass all of humanity’s collective intelligence. That’s how far from perfect we are as a species. This is largely a non-argument in my opinion.

  3. This is just flat-out incorrect, and not based on anything concrete. It’s just speculative “philosophy” that doesn’t stand up to any real-world scrutiny. It’s like asserting that a parent could never create a child more talented or capable than themselves. It’s just blatantly untrue.

12

BigZaddyZ3 t1_jdx67sp wrote

Both of your links feature relatively weak arguments that basically rely on moving the goalposts on what counts as “intelligence.” Neither one identifies any concrete logistical issue that would actually prevent a singularity from occurring. Both just rely on pseudo-intellectual bullshit (imagine thinking that no one understands what “intelligence” is except you 😂) and speculative philosophical nonsense (with a hint of narcissism thrown in as well).

You could even argue that the second link has already been debunked in certain ways, tbh. Considering that modern AI can already do things the average human cannot (such as design a near-photorealistic illustration in mere seconds), there’s no question that even a slightly more advanced AI will be “superhuman” by every definition. Which already renders the author’s arrogant assumptions irrelevant. (The author made the laughable claim that superhuman AI was merely science fiction 🤦‍♂️🤣)

21

BigZaddyZ3 t1_jcs4yjd wrote

While quite a few of these were… interesting, to put it nicely, there actually were some pretty decent arguments in there as well, tbh. Tho the article spent way too much time basically begging AI to adhere to human concepts of morality. I doubt any sufficiently advanced AI will really give a shit about that. But still, a couple of items on the list were genuinely good points. Decent read. 👍

4