TheLastSamurai
TheLastSamurai t1_ivwng47 wrote
Reply to comment by AI_Enjoyer87 in AGI Content / reasons for short timelines ~ 10 Years or less until AGI by Singularian2501
What are BCIs and FDVR?
TheLastSamurai t1_ivlrcnl wrote
Reply to comment by theabominablewonder in The Collapse vs. the Conclusion: two scenarios for the 21st century by camdoodlebop
There’s a video I watched that argued one of the pluses of moving away from fossil fuels and coal is that, if there’s ever some catastrophic event, there’s still a good fallback for basically restarting industrial society
TheLastSamurai t1_ivbilcb wrote
Reply to comment by sideways in In the face on the Anthropocene by apple_achia
It could also do something like make solar 1000x more efficient or create synthetic algae to capture carbon. There are a lot of possibilities, but I wouldn’t pin our plans on it; it’s not a given
TheLastSamurai t1_iv7h7d9 wrote
Reply to comment by sonderlingg in Merger of consciousnesses by sonderlingg
What do you mean? Also, if it closely resembled what we consider to be conscious, does it even matter, I guess?
TheLastSamurai t1_iv4evtn wrote
Reply to comment by Dark-Arts in Merger of consciousnesses by sonderlingg
Terrifying to consider
TheLastSamurai t1_iuweq0r wrote
Anytime in the history of Earth that a smarter being emerged, it either killed off or completely dominated the rest. Why would this trend stop?
TheLastSamurai t1_iuweaql wrote
Reply to comment by Rfksemperfi in Google’s ‘Democratic AI’ Is Better at Redistributing Wealth Than America by Mynameis__--__
Yeah that’s likely not very hard to do
TheLastSamurai t1_iuoifrc wrote
Reply to comment by OsakaWilson in whats the probability of governements voting laws to block AI research or other transhumanist tech by [deleted]
Some bad game theory. So we all have to try because another might, but if anyone actually does, it could literally end humanity?
TheLastSamurai t1_iuny9tr wrote
Reply to comment by [deleted] in whats the probability of governements voting laws to block AI research or other transhumanist tech by [deleted]
We don’t even think climate change is real here; we don’t act rationally. The issue will be politicized via populism, so expect severe backlash
TheLastSamurai t1_iunh200 wrote
Reply to comment by OsakaWilson in whats the probability of governements voting laws to block AI research or other transhumanist tech by [deleted]
Why would it not make sense to refrain from creating AGI? I would love to see an actual risk/benefit analysis done.
TheLastSamurai t1_iungi2c wrote
Reply to comment by [deleted] in whats the probability of governements voting laws to block AI research or other transhumanist tech by [deleted]
I disagree completely. You’re assuming rational actors. Have you seen American politics? It’s a disaster. Also, maybe some people don’t want their entire lives to revolve around AI, and with the social upheaval that automation from narrow AI is about to rapidly cause through job replacement, who do you think they will come for? I say it’s very likely
TheLastSamurai t1_iujihzr wrote
Reply to comment by MarromBrown in Experts: 90% of Online Content Will Be AI-Generated by 2026 by PrivateLudo
You may be right, I guess we'll see, right? I think it can probably write something very, very similar just based on ML inputs, even if it's not entirely original, but we'll see very soon
TheLastSamurai t1_iujdcpl wrote
Reply to comment by MarromBrown in Experts: 90% of Online Content Will Be AI-Generated by 2026 by PrivateLudo
I think you’re underestimating how fast AI has been moving and how fast it will improve. It’s been a breakneck pace the past two years for content, and scaling will obviously keep going
TheLastSamurai t1_iujc6sm wrote
Reply to comment by MarromBrown in Experts: 90% of Online Content Will Be AI-Generated by 2026 by PrivateLudo
That’s simply not true, it will.
TheLastSamurai OP t1_ited1ji wrote
Reply to comment by Endward22 in Preventing an AI-related catastrophe - Problem profile by TheLastSamurai
You should read the article. Some very smart people put the chance at 10%. Also look at AI scaling: it follows the concept of exponential growth, so its capabilities will explode very fast once it gets going.
TheLastSamurai t1_it82w9s wrote
Reply to A new UN report explores how to make human civilization safe from destruction. There’s a way to make civilization extinction-proof. But it won’t be easy. by mossadnik
I have commented on this sub and /News about how nuclear non-proliferation needs to keep going and even accelerate, and it’s met with many downvotes lol
TheLastSamurai OP t1_it68lrg wrote
Summary
We expect that there will be substantial progress in AI in the next few decades, potentially even to the point where machines come to outperform humans in many, if not all, tasks. This could have enormous benefits, helping to solve currently intractable global problems, but could also pose severe risks. These risks could arise accidentally (for example, if we don’t find technical solutions to concerns about the safety of AI systems), or deliberately (for example, if AI systems worsen geopolitical conflict). We think more work needs to be done to reduce these risks.
Some of these risks from advanced AI could be existential — meaning they could cause human extinction, or an equally permanent and severe disempowerment of humanity. There have not yet been any satisfying answers to concerns — discussed below — about how this rapidly approaching, transformative technology can be safely developed and integrated into our society. Finding answers to these concerns is very neglected, and may well be tractable. We estimate that there are around 300 people worldwide working directly on this. As a result, the possibility of AI-related catastrophe may be the world’s most pressing problem — and the best thing to work on for those who are well-placed to contribute...
Submitted by TheLastSamurai t3_y9lj2u in Futurology
TheLastSamurai t1_iszox4i wrote
Reply to comment by kharjou in DeepMind AI One-Ups Mathematicians at a Calculation Crucial to Computing by Acceptable_Berry_393
I’d like it to stop right now personally. The risks far outweigh the benefits.
TheLastSamurai t1_isveixk wrote
Reply to comment by DonovanWrites in The killer ground drone revolution is here. The Netherlands has deployed four armed ground robots or unmanned ground vehicles (UGVs), making it the first NATO country to do so. The robots are Tracked Hybrid Modular Infantry Systems (THeMIS) UGVs built by the Estonian defense company Milrem Robotics. by mossadnik
Yes it is, but don’t mention that on this sub; they will downvote you into oblivion
TheLastSamurai t1_isucml4 wrote
Reply to comment by Secure_Cake3746 in The killer ground drone revolution is here. The Netherlands has deployed four armed ground robots or unmanned ground vehicles (UGVs), making it the first NATO country to do so. The robots are Tracked Hybrid Modular Infantry Systems (THeMIS) UGVs built by the Estonian defense company Milrem Robotics. by mossadnik
Exactly. It’s startling how little awareness or concern there is about this in the general public, let alone in a FUTURE subreddit
TheLastSamurai t1_isu6vty wrote
Reply to The killer ground drone revolution is here. The Netherlands has deployed four armed ground robots or unmanned ground vehicles (UGVs), making it the first NATO country to do so. The robots are Tracked Hybrid Modular Infantry Systems (THeMIS) UGVs built by the Estonian defense company Milrem Robotics. by mossadnik
Now imagine these proliferate and AI starts controlling them. Not so easy to "just pull the plug" now, is it?
TheLastSamurai t1_irlq5pf wrote
Reply to comment by littlebitsofspider in We'll build AI to use AI to create AI. by Defiant_Swann
And what do you do when AI controls everything connected to the internet? Not so easy to shut off, is it?
TheLastSamurai t1_irlq43l wrote
Reply to We'll build AI to use AI to create AI. by Defiant_Swann
Why do we want to do this? It still feels like the risk is not worth it...
TheLastSamurai t1_ivwniqo wrote
Reply to comment by ihateshadylandlords in AGI Content / reasons for short timelines ~ 10 Years or less until AGI by Singularian2501
I hope they are very wrong