flexaplext
flexaplext t1_jefzjzt wrote
Reply to Indirect democracy represented by AI by SSan_DDiego
I believe Yuval Harari went into ideas like this.
I've thought about it as well. Feels like a good idea.
flexaplext t1_jefw4sn wrote
Reply to comment by Iffykindofguy in ChatGB: Tony Blair backs push for taxpayer-funded ‘sovereign AI’ to rival ChatGPT by signed7
Yeah, because we don't just see governments knee-jerk reacting to AI now when private enterprise has been developing and investing in it for many years.
And it isn't the most important and dangerous technology that will ever exist, and yet they have little to no regulation on it or proper plans going forward for it. Despite this being obvious and known for decades.
And MPs know so much about computer programming, I'm sure they'll know exactly how to lead AI development and appoint the right people to it, doing so in an efficient and innovative manner.
And I'm sure the best programmers will be lining up to work for the government and their military rather than OpenAI and progressive companies.
flexaplext t1_jefsxdn wrote
Reply to comment by Iffykindofguy in ChatGB: Tony Blair backs push for taxpayer-funded ‘sovereign AI’ to rival ChatGPT by signed7
Yeah, that's the alternative. And such people will probably win because governments are so useless.
However, I suspect the US government will just forcibly take over OpenAI at some point on the grounds of National Security. They may be useless, but they're good at taking things over.
The same option probably won't exist for the UK government though. Which is why they'd be better off rejoining the EU and trying something within that union. Of course, with the EU buying out a decent existing company to get themselves started, as I also suggested.
Or the EU could just fund many different companies and then take over the one that wins out, the US-style plan. Asking the UK to try this model alone dramatically reduces its funding, company pool and odds of success.
flexaplext t1_jefrxdi wrote
Reply to comment by Iffykindofguy in ChatGB: Tony Blair backs push for taxpayer-funded ‘sovereign AI’ to rival ChatGPT by signed7
No. It's for them to rejoin the EU and probably be even more aligned inside it than they were before. So they're not this pathetic little island trying to take on the likes of the US and China 😂
flexaplext t1_jefnwy6 wrote
Reply to comment by Iffykindofguy in ChatGB: Tony Blair backs push for taxpayer-funded ‘sovereign AI’ to rival ChatGPT by signed7
Yeah. The EU would provide vastly more money and resources. Which could try to make up for its inevitable incompetence and failings.
The UK, on the other hand, will supply a petty budget that won't make a dent. Along with their own fresh servings of incompetence, of course.
flexaplext t1_jefmsfi wrote
Reply to ChatGB: Tony Blair backs push for taxpayer-funded ‘sovereign AI’ to rival ChatGPT by signed7
Would have had more of a chance with an EU-backed one, trying to buy out an existing firm that's already gone a long way with LLM development.
Oh wait, Brexit happened 🤷🏻‍♂️🤦🏻‍♂️
And governments are useless.
flexaplext OP t1_jefbuk4 wrote
Reply to comment by Current_Side_4024 in This concept needs a name if it doesn't have one! AGI either leads to utopia or kills us all. by flexaplext
Maybe just: The AGI Gamble.
Since AGI will be the new God.
Rather descriptive of what it is.
flexaplext OP t1_jef2djq wrote
Reply to This concept needs a name if it doesn't have one! AGI either leads to utopia or kills us all. by flexaplext
Someone else mentioned you could potentially apply the anthropic principle to this. Or my thought from that: quantum suicide / immortality potentially applies too if it is real.
Meaning: we will inevitably find ourselves only in the good outcome, because we won't exist in the bad one.
Submitted by flexaplext t3_127o4i0 in singularity
flexaplext t1_jeesk7z wrote
Reply to comment by Xbot391 in What advances in AI are required for it to start creating mass unemployment? by Give-me-gainz
I'm not sure exactly. As people say, it just always happens. The only thing I think these people are wrong about is that this trend continues after true AGI. That's when all trends and models of the economy and everything else break down.
If I had to guess, I think a lot of people will just be moved into places where AI is not yet fully capable. Mass collective data training. The more people on it, the faster we'll get to true AGI. If the AI is not yet at true AGI, then there are obviously areas where it still needs to learn.
The economic value of training AI, once it has a full capacity to learn well from training, will just be absolutely massive. So it will require work-from-home solutions to get more people into these areas, and very quick turnaround and retraining of people into new areas. The economic value will certainly be there, though, to facilitate such a system.
I think we'll inevitably also start to see a greater amount of real-world value being created too. So there will be a large increase in real-life activity needing to be done, whilst the robotics side of things lags behind.
I think robotics will still lag behind for a while, even after true AGI is created. It will take some time to manufacture and deploy all the necessary robots to replace workers. So there will still be a lot of people with jobs even after the inception of AGI, but then, slowly but surely, they'll start to get replaced. Starting with the higher-salaried jobs first, then down towards the minimum-wage workers eventually.
I think there will still be physical work after AGI. But it will be incredibly low-paid and optional. Humans will still be useful, but they'll just have to accept not being paid much at all in order to stay economically viable against a robot.
flexaplext t1_jeeos7z wrote
Reply to What advances in AI are required for it to start creating mass unemployment? by Give-me-gainz
It needs work-based training data. That's where Copilot comes in:
https://www.reddit.com/r/singularity/comments/11t13ts/the_full_path_to_true_agi_finally_emerges/
Once this system gets better, then we'll start seeing proper unemployment happen on a worldwide mass scale.
There will be lots of new jobs created for a while though. As people say. I think the job market will be perfectly fine, even with this massive shift. That is, up until AI reaches true AGI / ASI, at which point the job market will be shot to pieces.
flexaplext t1_je9yb1z wrote
Reply to The voice in our head is like an AI generator - whatever content you’re feeding it is the reality it creates for you. by noodsaregood
I wrote a post about the voice in your head being similar to an LLM:
https://www.reddit.com/r/singularity/comments/123r90m/llms_are_not_that_different_from_us_a_delve_into/
flexaplext t1_je95mic wrote
Reply to comment by Shack-app in The Only Way to Deal With the Threat From AI? Shut It Down by GorgeousMoron
I know. None of it's coming. But people should at least be smart enough to ask for it. Then if the worst happens they can at least say they pointlessly tried.
flexaplext t1_je7x01m wrote
Well that was a fun read. I've obviously heard his stuff before.
But what needs to be absolutely realized is that true global cooperation has to come first. First and foremost. Nothing else can happen until it does: AI cannot be shut down, and no progress can be made at all, before then.
I literally just wrote a thread post about it.
The lack of international cooperation is both the very first problem that needs solving and the very first threat to society. It needs to happen now, before anything else. The only real discussion that needs to take place right now is how that can actually be facilitated. Bringing the major world powers together, aligned and feeling safe from one another, is by far the hardest problem. Shutting down all AI development is a piece of cake of a task in comparison.
flexaplext t1_je7ampt wrote
Reply to comment by BigZaddyZ3 in The Rise of AI will Crush The Commons of the Internet by nobodyisonething
I think this and OP's take is completely wrong. I would say it's going to be mostly non-useful data that's lost.
People will still write Wikipedia articles, they just won't be read as much, but the data on the site will still be valid.
People will still ask questions on Stack Overflow, but there will be fewer of them, and the number of trivial questions will drop significantly, as these are the easiest for AI to answer. Novel and difficult questions will still need to be asked there, because AI isn't capable of answering them. And people will still want to, and be interested in, answering the more novel questions.
Thus, the overall effect will actually be to significantly improve the data, and engage people better. People wanting to answer questions will enjoy the experience much more with the less easy and obvious questions being removed. And they will be able to stumble across the interesting questions more easily and efficiently.
It should actually end up creating a much richer set of data for models to train on.
Think about it. If the only questions asked and answered on Stack Overflow were ones the models couldn't answer, that's literally the most perfect training data. It filters out the questions the model already knows the answer to, which aren't useful for it to read.
And this effect (I think it needs a name if it doesn't have one? - The Flex Effect if I just came up with it 😂😂) will only adapt over time with increased model output accuracy. As the model updates and its answers get better, so too will the questions, and subsequent answers. They'll get more and more difficult, matching the criteria for what the new model then still needs to learn and train on.
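Something like this toy sketch, say. The model call here is a made-up stand-in, not any real API; the point is just the filtering loop:

```python
# Toy sketch of the filtering idea: keep only the Q&A pairs the current
# model still answers incorrectly, so each generation of training data
# targets exactly what it can't yet do. model_answer is a hypothetical
# stand-in, not a real API.

def model_answer(question: str) -> str:
    # Stand-in for the current model: pretend it only knows one fact.
    known = {"What does HTTP stand for?": "Hypertext Transfer Protocol"}
    return known.get(question, "I don't know")

def hard_examples(qa_pairs: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Keep the pairs where the model's answer doesn't match the accepted one."""
    return [(q, a) for q, a in qa_pairs if model_answer(q) != a]

corpus = [
    ("What does HTTP stand for?", "Hypertext Transfer Protocol"),  # model knows it: dropped
    ("Why does my async generator deadlock?", "You're awaiting while holding the lock."),
]
print(hard_examples(corpus))  # only the novel question survives as training data
```

Run over a real Q&A dump each model generation, the same loop would shrink the corpus down to exactly the questions the model still fails, which is the effect I'm describing.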
flexaplext t1_je6j0wt wrote
Reply to The Limits of ASI: Can We Achieve Fusion, FDVR, and Consciousness Uploading? by submarine-observer
I think people often underestimate the capabilities of AI.
But they also often overestimate the capabilities of physics.
Some things will just be impossible, not allowed within the laws of physics no matter what. I can't say exactly what those things will be, but I'll wager it will be a number of the things they hypothesize AI to be capable of doing.
flexaplext t1_je26vkw wrote
Reply to AI Utopias by TikkunCreation
Content: Music, Movies, Books, Art
Economics: Efficient, Fair, Reliable systems
Politics: Efficient, accurate, representative systems
Human Enhancement: Directly enhance physical and mental capabilities
Biological: Create new animals, bring some back from extinction, create entirely new food sources
Safety Needs: Allow more direct control over our nervous system. So we can turn off pain entirely at source if we wish to do so
flexaplext OP t1_jdydx39 wrote
Reply to comment by Yomiel94 in LLMs are not that different from us -- A delve into our own conscious process by flexaplext
Just hold it all in memory. My mental arithmetic and manipulation is actually rather decent, despite my not being able to visualise it. You find that this actually applies to most people with aphantasia. There are lots of interesting things about it if you search and read up on people's perceptions and experiences of it.
It's strange to describe.
Because I know exactly what something like a graph looks like, without being able to visualise it, just by holding all the information about the graph in memory. I can manipulate it by simply changing that information.
However, this ability does break down with more complex systems. If I try and hold an entire chess board in memory and manipulate it, I just fail completely. It's too much information for me to keep in memory and work out accurately without a visual aid.
flexaplext OP t1_jdxxwnv wrote
Reply to comment by CrazyShrewboy in LLMs are not that different from us -- A delve into our own conscious process by flexaplext
It's really quite a strange experience if you properly delve deep into your conscious thought process and think about exactly what's going on in there.
This subconscious supercomputer in the back of your mind that's always running, throwing ideas into your thought process, processing and analysing and prioritising every single input of this massive stream of sensory data, storing, retrieving memories, managing your heartbeat and internal body systems.
There's this computer back there doing so, so much on autopilot and you have no direct access to it or control over it.
The strangest thing of all, though, is the way it just throws ideas, concepts and words into your conscious dialogue. Though maybe that's only strangest to me because it's the only thing I'm able to truly perceive it doing.
Like I said, it's not necessarily single words that it is throwing at you, but overarching ideas. However, maybe these ideas are just like single word terms, like a macro, and then that single term is expanded out into multiple words based on the sequence of words in such a term.
There are different ways to test and manipulate its output to you though. You have some conscious control over its functionality.
If you try to, you can tell and make your subconscious only throw out overarching ideas to you, rather than a string of words. Well, I can anyway.
You can also, like, force the output to slow down completely and force it to give you literally only one word at a time and not think at all about an overarching idea of the sentence. Again, I can do that anyway.
It's as if my thought process is completely slowed down and limited, like the subconscious is literally throwing just one word at a time into my mind. I can write out exactly what it comes up with when I do this:
"Hello, my name is something you should not come up with. How about your mom goes to prison. What's for tea tonight. I don't know how you're doing this but it's interesting. How come I'm so alone in the world. Where is the next tablet coming from."
I mean, fuck. That's weird to do. You should try it if you can. Just completely slow down and force your thoughts into completely singular words. Make sure not to let any ideas or concepts enter your mind. That output is way below an LLM's capability when I do that; it's very, very similar to what basic predictive text currently is. In fact, it feels almost the same, except that it appears to be affected by emotion and sensory input.
Edit: There is another way I can do it. Just think or even better speak out loud fairly fast without thinking at all about what you're saying. Don't give yourself time to think or for ideas to come into your mind. You wind up just stringing nonsensical words together. Sometimes there's a coherent sentence in there from where a concept pops in, but it's mainly still just like a random string of predictive text.
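For comparison, here's roughly what "basic predictive text" amounts to as a toy bigram model: each next word is picked only from words that have followed the previous word before, with no overarching idea of the sentence. Purely illustrative, not how an LLM or a phone keyboard really works:

```python
# Toy bigram "predictive text": chain one word at a time from a lookup
# table of what followed each word in the training text. No plan, no
# concept of the sentence as a whole.

import random

def build_bigrams(text: str) -> dict[str, list[str]]:
    words = text.split()
    table: dict[str, list[str]] = {}
    for prev, nxt in zip(words, words[1:]):
        table.setdefault(prev, []).append(nxt)
    return table

def babble(table: dict[str, list[str]], start: str, n: int = 12) -> str:
    out = [start]
    for _ in range(n):
        choices = table.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))  # one word at a time, no plan
    return " ".join(out)

table = build_bigrams("what is for tea tonight i am so alone in the world "
                      "i am not sure what is coming next")
print(babble(table, "i"))  # e.g. "i am so alone in the world i am not sure what"
```

The output meanders between locally plausible fragments, much like the slowed-down one-word-at-a-time thought stream above.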
flexaplext t1_jdxjy0l wrote
Reply to comment by Pointline in Is AI alignment possible or should we focus on AI containment? by Pointline
It depends entirely on how seriously the government / AI company takes the threat of a strong AGI as to whether it will be created safely or not.
There is then the notion that we will need to be able to actually detect if it's reached strong AGI, or a hypothesis that it may have and may deceive us. So, whichever way, containment would be necessary if we consider it a very serious existential threat.
There are different levels of containment. Each further one is more and more restrictive but more and more safe. The challenge would likely come in working out how many restrictions you could lift in order to open up more functionality whilst also keeping it contained and completely safe.
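Purely as an illustration of that trade-off, the tiers might look something like this. The levels and capabilities here are invented for the sketch, not any real proposal or framework:

```python
# Hypothetical "levels of containment" as a config: each higher tier
# lifts restrictions (more functionality) at the cost of safety.
# All tiers and capabilities are invented for illustration.

from enum import IntEnum

class ContainmentTier(IntEnum):
    # Higher value = fewer restrictions = less safe.
    AIR_GAPPED_TEXT_ONLY = 0   # one vetted human relays text in and out
    READ_ONLY_CORPUS = 1       # can read a curated offline dataset
    INBOUND_INTERNET = 2       # can receive live data, never transmit
    SANDBOXED_TOOLS = 3        # can run code in an isolated sandbox

def allowed(tier: ContainmentTier, capability: ContainmentTier) -> bool:
    """A capability is available only if the tier has lifted enough restrictions."""
    return capability <= tier

print(allowed(ContainmentTier.READ_ONLY_CORPUS, ContainmentTier.INBOUND_INTERNET))  # False
```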
We'll see when we get there how much real legislation and safety is enforced. Humans unfortunately tend to be reactive rather than proactive, which gives me great concern. An AI model developed between now and AGI may be used to enact something incredibly horrific, though, which may then force these extreme safety measures. That's usually what it takes to actually make governments sit up properly and notice.
flexaplext t1_jdxi6k0 wrote
Reply to comment by SkyeandJett in Is AI alignment possible or should we focus on AI containment? by Pointline
Not very likely. It's much more likely it will first emerge somewhere like OpenAI's testing, where they have advanced it to a significant degree with major model changes. Hopefully they'll recognize when they are near strong AGI levels and not give it internet access for testing.
If they are then able to probe and test its capabilities and find it to be incredibly dangerous, this is when it would get reported to the Pentagon, and they may start to put extreme containment measures on it.
If AI has been used before this point for something highly horrific, like an assassination of the president or a terrorist attack, it is possible that these kinds of safety measures would be put in place. There are plenty of potential serious dangers from humans using AI before AGI itself actually happens. These might draw proper attention to its deadly consequences if safety is not made of paramount importance.
I can't really predict how it will go down though. I'm certainly not saying at all that containment will happen. I'm just saying that it's potentially possible to happen if it's taken seriously enough and ruled with an iron fist.
I don't personally have much faith though, given humanity's past record of being reactive rather than proactive towards potential severe dangers. Successful proactive measures tend never to get noticed though, that's their point, so this may cause high sample bias on my part due to experience and media coverage.
flexaplext t1_jdxg54v wrote
Reply to comment by SkyeandJett in Is AI alignment possible or should we focus on AI containment? by Pointline
Not if you only give direct access to one single person in the company and have them highly monitored, with very limited power and tool use outside of said communication. That greatly limits the odds of a breach.
You can do AI containment successfully, it's just highly restrictive.
It would remain within a single data centre with no ability to output to the internet, only to receive input. Governments worldwide would block and ban all other AI development and monitor this very closely and strictly, 1984-style, with tracking forcibly embedded into all devices.
I'm not saying this will happen, but it is possible. If we find out ASI could literally end us with complete ease, though, I wouldn't completely rule out going down this incredibly strict route.
Understand that even in this highly restrictive state, it will still be world-changing. Being able to potentially come up with all scientific discovery alone is good enough. We can always run rigorous tests on any scientific discovery, just as we would if we came up with the idea ourselves, and make sure we understand it completely before any implementation.
flexaplext OP t1_jdxc3bh wrote
Reply to comment by Ok-Variety-8135 in LLMs are not that different from us -- A delve into our own conscious process by flexaplext
I actually can't do those things. As part of aphantasia I can't generate virtual vision, virtual taste, virtual smell or virtual touch at all.
I can only generate virtual sound in my head.
This is why I can say those other mental modes are not necessary at all for thinking and consciousness. I know that I'm conscious and thinking without them, and I still would be without any input from my real senses. But obviously my sensory input has been completely vital to learning.
Submitted by flexaplext t3_123r90m in singularity
flexaplext t1_jeg04oq wrote
Reply to comment by Charlierook in Indirect democracy represented by AI by SSan_DDiego
This idea is to let AI optimize things, but everyone has their own personal AI that puts forward their views and needs. A central AI then aggregates all this data and works out which policies would work best for the population as a whole, as the sum of the individuals within it. Which is what democracy should be.
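A minimal sketch of that aggregation step, assuming each personal AI can reduce its owner's views and needs to scores over candidate policies (that assumption, and the simple averaging, are mine, purely for illustration):

```python
# Minimal sketch: each personal AI summarizes its owner as scores over
# candidate policies; the central AI picks the policy with the highest
# average score. Policy names and scores are invented for illustration.

from statistics import mean

def best_policy(preferences: list[dict[str, float]]) -> str:
    """Pick the policy with the highest average score across every citizen's AI."""
    policies = preferences[0].keys()
    return max(policies, key=lambda p: mean(person[p] for person in preferences))

citizens = [
    {"fund_transit": 0.9, "cut_taxes": 0.2},  # one personal AI's summary of its owner
    {"fund_transit": 0.4, "cut_taxes": 0.8},
    {"fund_transit": 0.7, "cut_taxes": 0.5},
]
print(best_policy(citizens))  # -> fund_transit (mean 0.67 beats 0.50)
```

Averaging is just one possible rule, of course; a real system would have to choose between many aggregation methods (majority, median and so on), each with known trade-offs from social choice theory.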