175ParkAvenue
175ParkAvenue t1_jeelwtf wrote
Reply to There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
The comments here are garbage tier. Get informed, people.
175ParkAvenue t1_jeellgt wrote
Reply to comment by RiotNrrd2001 in There's wild manipulation of news regarding the "AI research pause" letter. by QuartzPuffyStar
There is no conspiracy. All the information is freely available online. This is not about money becoming obsolete. It's about literally everyone on Earth dying.
175ParkAvenue t1_j4foi52 wrote
Reply to comment by photo_graphic_arts in What void are people trying to fill with transhumanism? by [deleted]
Every generation is consuming "an incredible amount of resources" compared to the previous one, and that is a good thing. We have not even scratched the surface of the available matter and energy in the reachable Universe.
175ParkAvenue t1_j243xed wrote
Reply to comment by XagentVFX in Robots, AI, and Automation. When and What do we do then? by Snipgan
I can't find anything online about UBI currently being implemented in the UK or even there being any plans to do so.
Where I disagree is that capitalism is bad. I believe it is the system that creates the most flourishing for the largest number of people. Money is the foundational technology of the economy and I don't see what could possibly replace it, or even why anyone would want to replace it.
175ParkAvenue t1_j23wxrf wrote
Reply to comment by XagentVFX in Robots, AI, and Automation. When and What do we do then? by Snipgan
This is vastly overblown, to be honest. Governments obviously can't implement UBI just because there might be large-scale unemployment from automation at some undetermined point in the future. We are not there yet, the productive capacity of the economy simply could not sustain such programs right now, and today's highly targeted welfare is a lot more efficient.
Also, this does not at all mean that money or capitalism must be discarded, since there is no better system to replace them. Many resources will remain scarce even after the singularity, and there needs to be a way to distribute them efficiently; for that you need a market and money. UBI is actually the way to achieve wealth redistribution and keep raising the living standards of people who will be economically useless in the automated markets of the (not so distant) future.
175ParkAvenue t1_j19ae1h wrote
Reply to comment by SendMePicsOfCat in Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
An AI is not coded, though. It is trained using data and backpropagation. So you have no method to imbue it with morality; you can only try to train it on the right data and hope it learns what you want it to learn. But there are many, many ways this can go wrong, from misalignment between what the human wants and what the training data contains, to misalignment between the outer objective and the inner objective.
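To make the "trained, not coded" point concrete, here is a minimal NumPy sketch (a hypothetical toy, not any real AI system): a single-parameter model fit by gradient descent. Notice that nowhere is the rule `y = 2x` written into the program; the only levers are the dataset and the loss, and the model ends up with whatever rule the data happens to imply.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: the *intended* rule is y = 2x, but that rule exists
# only implicitly, in the examples -- not anywhere in the code.
x = rng.uniform(0.0, 1.0, size=100)
y = 2.0 * x

w = 0.0   # the single learned parameter
lr = 0.1  # learning rate

for _ in range(500):
    # Gradient of mean squared error: d/dw mean((w*x - y)^2)
    grad = np.mean(2.0 * (w * x - y) * x)
    w -= lr * grad

# w converges near 2.0 -- but only because the data implied it.
print(round(w, 2))
```

The same dynamic is the alignment worry in miniature: if the data had quietly implied a different rule, the model would have learned that instead, and no line of code would reveal the difference.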
175ParkAvenue t1_j199poy wrote
Reply to comment by SendMePicsOfCat in Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
It doesn't have to be similar to us, but if it is to be useful in any way it has to decide what to do in the situations that it finds itself in.
175ParkAvenue t1_j199icf wrote
Reply to comment by SendMePicsOfCat in Why do so many people assume that a sentient AI will have any goals, desires, or objectives outside of what it’s told to do? by SendMePicsOfCat
How do you test an AGI to see if it works as intended? It's not that straightforward, especially when said AGI will do things that are beyond the ability of humans to even comprehend, let alone discern whether they are good or bad.
175ParkAvenue t1_j17yipt wrote
Reply to comment by AITADestroyer in How hard would it be for an AI to do the work of a CEO? by SeaBearsFoam
Yep. AI CEOs are AGI-complete, which means that by the time they exist we are already in a full-blown intelligence explosion.
175ParkAvenue t1_j0uai8s wrote
Reply to Prediction: De-facto Pure AGI is going to be arriving next year. Pessimistically in 3 years. by Ace_Snowlight
Dunno about literally one year, I would say more like 3-4, but it is possible. There is no fire alarm for AGI after all, and with all the progress lately with LLMs and such you can already feel the faint smell of smoke.
175ParkAvenue t1_ixl6f8u wrote
Reply to GPT3 is powerful but blind. The future of Foundation Models will be embodied agents that proactively take actions, endlessly explore the world, and continuously self-improve. What does it take? In our NeurIPS Outstanding Paper “MineDojo”, we provide a blueprint for this future by Dr_Singularity
I wonder when we will have continuously self improving agents that can take arbitrary actions in the world and do long term planning. Or at least totally slay in Minecraft. I'd guess about 3 years or so.
!RemindMe 3 years.
175ParkAvenue t1_ityk4z6 wrote
Reply to comment by ActuaryGlittering16 in First time for everything. by cloudrunner69
Most people currently under 50 are likely going to make it to LEV.
175ParkAvenue t1_itauhqj wrote
Reply to comment by StarChild413 in Thoughts on Full Dive - Are we in one right now? by fignewtgingrich
Yeah for sure, I don't endorse the 'people are bad' narratives. I just used GTA as a placeholder for whatever ultra advanced open world games could be created by fully technologically mature civilizations, since I played some GTA installments and I liked it.
175ParkAvenue t1_it8dsr3 wrote
Reply to comment by iAmMonkee- in Thoughts on Full Dive - Are we in one right now? by fignewtgingrich
Yeah, it's either a simulation or it isn't, so 50/50.
175ParkAvenue t1_it6w5kw wrote
Yeah it is likely that we are in a simulation. But there are many different possibilities regarding who is running the sim and to which purpose. Future humans playing GTA 17 is one possibility.
175ParkAvenue t1_it1m780 wrote
Reply to Prediction: By This Time Next Year, Sentient Robots Will Take Over the Planet Earth by supmandude
Wow those are some short timelines. I think there is less than a 5% chance of that happening but I admire your boldness.
175ParkAvenue t1_irb0gk7 wrote
Reply to comment by Mokebe890 in We are in the midst of the biggest technological revolution in history and people have no idea by DriftingKing
Those are fair questions. We don't know for sure if this boom will continue all the way to AGI and the Singularity. But that's what it looks like.
In my opinion, only something on the scale of a US-Russia nuclear war could delay the Singularity by a few decades.
Something like the invasion of Taiwan could delay it by a few years maybe.
Localized wars, pandemics, or recessions probably will not delay it significantly.
175ParkAvenue t1_jeen6a8 wrote
Reply to comment by [deleted] in The Alignment Issue by CMDR_BunBun
A rock also does not have wants and desires. And sure, maybe you can make an AI that also has no wants or desires. But it's not as useful as one that is autonomous and takes actions in the world to achieve some goals, so people will build the AI with wants and desires. Now, when the AI is much smarter than any human, it will be very good at achieving goals. This is a problem for us, since we have no reliable way to specify a safe goal, and no reliable way to instill a specific goal in an AI either. In addition, there are strong instrumental pressures on a powerful AI to deceive, to use any means to obtain more power, and to eliminate any possible threats.