Nanaki_TV t1_jegsg3g wrote
Reply to comment by agorathird in I have a potentially controversial statement: we already have an idea of what a misaligned ASI would look like. We’re living in it. by throwaway12131214121
Describe the world today in 1980. You cannot predict Reddit or Twitter. You cannot make the claims you’re making with any substantial certainty. Stop acting as if you know.
agorathird t1_jegunvj wrote
>Describe the world today in 1980. You cannot predict Reddit or Twitter. You cannot make the claims you’re making with any substantial certainty. Stop acting as if you know.
Both in this thread and the other thread, you seem not to want to extrapolate based on presently given information. That's like the best thing about being sentient, too. Or at least you don't want me to extrapolate, since you gave me an r/futurology-tier take on working.
You are acting like I'm describing hypothetical technology. It's already here. Look through the subreddit for direct sources. You seem to be working only off of ChatGPT-like text models, and even those can be quite autonomous with plugins. You're like those people who don't know AI is starting to create functional code.
For as much as you love markets, which I also do, you seem not to acknowledge the profit motive and how human-neutral it is.
---
On a side note, if I had access to books in the 1980s I might've predicted social media. A lot of singularitarians did. But really this is more like predicting social media in 2001 or 2007, depending on which sites you'd like to count. I still think the analogy is flawed, though, as the tech is already here.
Nanaki_TV t1_jegvhjv wrote
I do not want to speculate, nor do I wish others to, because I see the tech as-is right now. I am, however, anticipating AGI fundamentally changing how the world works. What that will look like, no one can say. Saying UBI is THE solution is naive about the world we are headed towards. Things are going to break soon and quickly, I hope. It's up to us to guide others to build a better world once the current system collapses.
Profits are already used in the reward functions of neural networks. It will be interesting when the reward function is switched to real-world money (like satoshis via the Lightning Network).
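The idea above can be sketched in a few lines: a toy reward function where an RL agent's per-step reward is simply its realized net profit, denominated in satoshis. This is an illustrative sketch only; the function names (`profit_reward`, `episode_return`) are made up for this example, and no real trading or Lightning Network integration is implied.

```python
# Toy sketch: realized profit (in satoshis) as an RL reward signal.
# All names here are illustrative, not from any real system.

def profit_reward(revenue_sats: int, cost_sats: int) -> int:
    """Reward = net profit in satoshis; negative if the step lost money."""
    return revenue_sats - cost_sats

def episode_return(rewards: list[int], discount: float = 0.99) -> float:
    """Standard discounted return over per-step profit rewards."""
    total = 0.0
    for t, r in enumerate(rewards):
        total += (discount ** t) * r
    return total

# Example: one step earning 300 sats against 200 sats of cost,
# then an undiscounted return over three steps of profit.
print(profit_reward(300, 200))                               # 100
print(episode_return([100, -50, 200], discount=1.0))         # 250.0
```

The interesting (and risky) part is exactly what the thread is arguing about: once the reward is real money, the agent's objective and the profit motive are literally the same function.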
agorathird t1_jegwf73 wrote
It's not naive; you have not thought through the implications of what AGI means. You are also ignorant of what is doable with current technology. Artificial general intelligence is our equal, but also inherently superior due to its computational capacities. There is no need for us after that.
You are literally not describing any useful conception of AGI; you are only describing the most surface-level uses of text-only LLMs in your responses.
The r/futurology work-week stuff you talk about is possible right now with current public models like ChatGPT. It's been possible for a while, but it's not implemented due to greed and bureaucrats being steadfast in their ways. Luckily, not implementing that change hasn't been critically dire for vast swaths of people thus far.
Nanaki_TV t1_jegznf7 wrote
>you have not thought through the implications of what AGI means.
Almost agreed. But that's because I cannot know what it means. I keep trying my darndest to picture it, but I cannot. I'm not smart enough to know what thousands of AGIs coming together to solve complex problems will come up with, nor is anyone here. It's hubris to assume anyone can.
>There is no need for us after that.
Again, assumption after assumption. More and new horizons will be created. What? I don't know. But electricity gave the ability for so much to exist on top of it once it was massively adopted. Once AGIs are massively adopted and in our homes (not requiring a supercomputer to train, I mean), well, I can only hallucinate what that future will look like. If we are "not needed" then so be it; there's no use arguing. May we die quickly. But I doubt it very much.
> But it's not implemented due to greed and bureaucrats being steadfast in their ways.
It is greed that will cause these models to be implemented and jobs to be automated. I'm working on the risk assessment of doing exactly that for work right now. I do understand; I think I'm just not explaining well, due to being sleep-deprived thanks to having newborn twins. Lol.
agorathird t1_jeh2s75 wrote
>Again, assumption after assumption. More and new horizons will be created. What? I don't know. But electricity gave the ability for so much to exist on top of it once it was massively adopted. Once AGIs are massively adopted and in our homes (not requiring a supercomputer to train, I mean), well, I can only hallucinate what that future will look like. If we are "not needed" then so be it; there's no use arguing. May we die quickly. But I doubt it very much.
Not assumptions; that's what AGI means, lol, as far as current jobs are concerned. Unless there's some issue it has with space travel? You can make a few edge cases assuming slow takeoff. I can grant you the point about new horizons, sure. Maybe we merge, whatever.
This doesn't mean we die or that it's unaligned or whatever. That's real speculation. Good luck with your twins.
Nanaki_TV t1_jeh345p wrote
Thanks. And thanks for the discussion.