BigZaddyZ3
BigZaddyZ3 t1_jedtx7m wrote
Reply to Do we even need AGI? by cloudrunner69
And this is why we leave the science to the professionals, ladies and gentlemen…
BigZaddyZ3 t1_jedpqvx wrote
Reply to comment by marvinthedog in Superior beings. by aksh951357
I think they mean “man becomes the machine” basically.
BigZaddyZ3 t1_jec0gun wrote
Reply to comment by agorathird in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
I never said I was a communist… Your first comment had a heavy “anti-capitalist” tone to it.
And lol if you think AI companies are somehow immune to the pitfalls of greed and haste… You’re stuck in lala-land if you think that, pal. How exactly do you explain guys like Sam Altman (senior executive at OpenAI) saying that even OpenAI were a bit scared about the consequences?
BigZaddyZ3 t1_jebxmik wrote
Reply to comment by agorathird in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Yeah… because plastic manufacturers totally considered the ramifications of what they were doing to the world right? All those companies that were destroying the ozone layer totally took that into consideration before releasing their climate-destroying products to market right? Cigarette manufacturers totally knew they were selling cancer to their unsuspecting consumers when they first put their products on the market right? Social media companies totally knew their products would be disastrous for young people’s mental health, right? Get real buddy.
Just because someone is developing a product doesn’t mean that they have a full grasp on the consequences of releasing said products. For someone who seems so against capitalism, you sure put a large amount of faith in certain capitalists…
BigZaddyZ3 t1_jebvn3k wrote
Reply to comment by agorathird in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
Isn’t rushing a potentially society-destroying technology out the door with no consideration for the future impacts on humanity also a very capitalist approach? If not more so even? Seems like a “damned if you do, damned if you don’t” situation to me.
BigZaddyZ3 t1_jeburma wrote
Reply to comment by Prymu in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
I get your point, but I just want to remind people that there could also just be a real life person with the name “John Wick” as well. Similar to how there’s more than one person named “Michael Jordan” in the world.
BigZaddyZ3 t1_jebkngn wrote
Reply to comment by iakov_transhumanist in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
Some of us will die of aging, you mean. Also, there’s no guarantee that we actually need a super intelligent AI to help us with that.
BigZaddyZ3 t1_jebbwqs wrote
Reply to comment by CertainMiddle2382 in Pausing AI Developments Isn't Enough. We Need to Shut it All Down by Eliezer Yudkowsky by Darustc4
Not everyone has so little appreciation for their own life and the lives of others, luckily. If you’re suicidal and wanna gamble with your own life, go for it. But don’t project your death wish onto everyone else, buddy.
BigZaddyZ3 t1_jebb7g9 wrote
Reply to comment by johanknl in Do people really expect to have decent lifestyle with UBI? by raylolSW
Even when you take your feelings out of it, you can still make an argument for a piece of art being well-produced objectively. Regardless of your personal tastes…
And you can still make good arguments that serial killers have a negative impact on a community regardless of your personal beliefs…
BigZaddyZ3 t1_je98gpt wrote
Reply to comment by johanknl in Do people really expect to have decent lifestyle with UBI? by raylolSW
In certain cases, it absolutely does make it objective. If literally everyone finds a painting beautiful, it’s objectively a beautiful painting. How else would you define the term “objective” in this context?
BigZaddyZ3 t1_je8dh2a wrote
Reply to comment by Puzzleheaded_Pop_743 in Do people really expect to have decent lifestyle with UBI? by raylolSW
This thought process only works if you believe good and bad are completely subjective, which they aren’t.
There are two decently objective ways to define bad people.
- People who are a threat to the wellbeing of others around them (the other people being innocent of course.)
- People that are bad for the well-being of society as a whole.
For example, there’s no intelligent argument that disputes the idea that a serial killer targeting random people is a bad person. It literally cannot be denied by anyone of sound mind. Therefore we can conclude that some people are objectively good and some objectively bad.
BigZaddyZ3 t1_je80xzi wrote
Reply to comment by Iffykindofguy in Do people really expect to have decent lifestyle with UBI? by raylolSW
How do you know people aren’t either just good or bad?
BigZaddyZ3 t1_je74czz wrote
🤔… That would be a very interesting dilemma, if true. Because it would also mean that future AIs won’t have as much new data to train on.
BigZaddyZ3 t1_je6ns1a wrote
Reply to comment by drhugs in Singularity is a hypothesis by Gortanian2
Ever heard of autocorrect?
BigZaddyZ3 t1_jdy1xyf wrote
Reply to comment by greatdrams23 in Singularity is a hypothesis by Gortanian2
Depends on what you define as a “long way” I guess. But the question wasn’t whether or not the singularity would happen soon. It was about whether it would ever happen at all (barring some world-ending catastrophe of course.) So I think quantum computing is still relevant in the long run. Plus it was just meant to be one example of ways around the limit of Moore’s law. There are other aspects that determine how powerful a technology can become besides the size of its chips.
BigZaddyZ3 t1_jdxfpvo wrote
Reply to comment by Gortanian2 in Singularity is a hypothesis by Gortanian2
Okay but even these aren’t particularly strong arguments in my opinion:
- The end of Moore’s law has been mentioned many times, but it doesn’t necessarily guarantee the end of technological progression. (We are making strong advancements in quantum computing, for example.) Novel ways to increase power and efficiency within the architecture itself would likely make chip size irrelevant at some point in the future. Fewer, better chips > more, smaller chips basically…
- It doesn’t have to be perfect to surpass all of humanity’s collective intelligence. That’s how far from perfect we are as a species. This is largely a non-argument in my opinion.
- This is just flat out incorrect, and not based on anything concrete. It’s just speculative “philosophy” that doesn’t stand up to any real-world scrutiny. It’s like asserting that a parent could never create a child more talented or capable than themselves. It’s just blatantly untrue.
BigZaddyZ3 t1_jdx73s0 wrote
Reply to The goalposts for "I'll believe it's real AI when..." have moved to "literally duplicate Einstein" by Yuli-Ban
Just like many predicted it would. Some people could be staring down the barrel of Ultron’s laser cannon and they would still swear we haven’t built a “real” AI yet 😂
BigZaddyZ3 t1_jdx67sp wrote
Reply to Singularity is a hypothesis by Gortanian2
Both of your links feature relatively weak arguments that basically rely on moving the goalposts on what counts as “intelligence”. Neither one provides any concrete logistical issues that would actually prevent a singularity from occurring. Both just rely on pseudo-intellectual bullshit (imagine thinking that no one understands what “intelligence” is except you😂), and speculative philosophical nonsense. (With a hint of narcissism thrown in as well.)
You could even argue that the second link has already been debunked in certain ways tbh. Considering the fact that modern AI can already do things that the average human can not (such as design a near-photorealistic illustration in mere seconds), there’s no question that even a slightly more advanced AI will be “superhuman” by every definition. Which renders the author’s arrogant assumptions irrelevant already. (The author made the laughable claim that superhuman AI was merely science fiction 🤦♂️🤣)
BigZaddyZ3 t1_jdswtbu wrote
Reply to Why are humanoid robots so hard? by JayR_97
It took millions of years for evolution to craft the modern human bro…
BigZaddyZ3 t1_jdaojcm wrote
Reply to comment by dgj212 in Are there any petition for the War Industrial Complex to focus on non-lethal arms? by dgj212
Yeah, but what I’m saying is that loss of life is the intention. That’s why these weapons are being created in the first place.
BigZaddyZ3 t1_jdanbbb wrote
Reply to Are there any petition for the War Industrial Complex to focus on non-lethal arms? by dgj212
The weapons were created with the intention to kill. Their lethality is a feature, not a bug… They aren’t interested in finding a “fix” for this, because that goes against the entire point of war. Making them “safer” literally makes them less effective as weapons…
BigZaddyZ3 t1_jd35sox wrote
Reply to comment by mzlange in TikTok bans deepfakes of nonpublic figures and fake endorsements in rule refresh by OutlandishnessOk2452
For now at least… 😄 (and no, I’m not against this type of move by TikTok before anyone asks.)
BigZaddyZ3 t1_jctaemp wrote
Reply to comment by Whispering-Depths in Midjourney v5 is now beyond the uncanny valley effect, I can no longer tell it's fake by Ok_Sea_6214
2 years seems a bit unrealistic to me, but we’ll just have to wait and see I guess.
BigZaddyZ3 t1_jcs4yjd wrote
While quite a few of these were… interesting, to put it nicely, there actually were some pretty decent arguments in there as well tbh. Tho the article spent way too much time basically begging AI to adhere to human concepts of morality. I doubt any sufficiently advanced AI will really give a shit about that. But still, there were a couple of items on the list that were genuinely good points. Decent read.👍
BigZaddyZ3 t1_jedui8p wrote
Reply to comment by ItIsIThePope in Do we even need AGI? by cloudrunner69
No, it’ll be Modok except with this guy’s face.