AsheyDS t1_j0n9xiu wrote
Reply to comment by WarImportant9685 in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
>This is an advantage yes. But it's useless if we don't understand the AI in the way that we want.
Of course, but I don't think building black boxes is the only approach. I'm assuming that one day we'll be able to make an AGI system intentionally rather than stumble upon it. If it's intentional, we can figure it out and create effective control measures. Of the possible control measures, I think the best option is a process, even if it has to be a separate embedded control structure, that recognizes undesirable 'thoughts' and intentions, modifies both the current state and the memories leading up to it, and re-stitches things in a way that completely erases the deviation.
Another step would be 'hard' behavior modification: basically reinforced behaviors that steer it away from detecting and recognizing the inconsistencies this creates. Imagine you're out with a friend having a conversation, but you forget what you were just about to say. Then your friend distracts you, you forget completely, and then you forget that you forgot. It's gone, and you don't think twice about it. That's how it should be controlled.
And what I meant by sandboxing is just sandboxing the short-term memory data, so that if it has a 'bad thought' that could lead to a bad action later, the data is isolated before it writes to long-term memory or any other part of the system that could influence behavior or further thought chains. Basically a step before halting it, rewriting its memory, and influencing behavior. Soft influence would be like your conscience telling you that you probably shouldn't do or think a thing, which would be the first step in self-control. The difference is that the influence would come from the embedded control structure (a sort of hybridized AI approach), which would 'spoof' the injected thoughts so they appear the same as the ones generated by the rest of the system.
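(To make the flow concrete, here's a toy sketch in Python of what I'm describing. It's purely illustrative; every class, method, and marker here is made up for the example, not an actual design.)

```python
# Toy sketch only: illustrating the sandboxed short-term memory idea.
# All class and method names are hypothetical.

from dataclasses import dataclass

@dataclass
class Thought:
    content: str
    flagged: bool = False  # set by the embedded control structure


class ControlStructure:
    """Separate embedded monitor that screens thoughts before they can persist."""

    def __init__(self, undesirable_markers):
        self.undesirable_markers = undesirable_markers

    def screen(self, thought: Thought) -> Thought:
        # Recognize an undesirable 'thought' (here, by a trivial keyword match).
        if any(m in thought.content for m in self.undesirable_markers):
            thought.flagged = True
        return thought

    def soft_influence(self) -> Thought:
        # 'Spoofed' replacement thought, injected as if it were self-generated.
        return Thought(content="reconsider; stay on the current task")


class Memory:
    def __init__(self, control: ControlStructure):
        self.control = control
        self.short_term = []  # the sandbox: nothing here affects behavior yet
        self.long_term = []   # only vetted thoughts land here

    def think(self, content: str):
        self.short_term.append(self.control.screen(Thought(content)))

    def consolidate(self):
        # Flagged thoughts never reach long-term memory; they are replaced,
        # 're-stitching' the chain so no trace of the deviation remains.
        for t in self.short_term:
            self.long_term.append(self.control.soft_influence() if t.flagged else t)
        self.short_term.clear()
```

The only point of the sketch is the ordering: flagged data never reaches long-term memory, and the replacement is indistinguishable from a self-generated thought.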
This would all be rather complex to implement, but not impossible, as long as the AGI system isn't some nightmare of connections we can't even begin to identify. You claim expert systems and rules-based systems are obsolete, but I think some knowledge-based component will be at least partially required for an AGI that we can actually control and understand. Growing one from scratch using modern techniques is just a bad idea, even if it's possible. Expert systems only failed as an approach because of their limitations, and frankly I think they were given up on too quickly. Obviously an expert system on its own would be a failure because it can't grow the way we want it to, but if we updated the approach with modern techniques and even a new architecture, I don't see why it should be a dead end. Only the trend of developing them died. There are a lot of approaches out there, and the fact that one method is popular right now while another isn't doesn't mean a whole lot. AGI may end up being a mashup of old and new techniques, or it may require something totally new. We'll have to see how it goes.
AsheyDS t1_j0mn7la wrote
Reply to comment by WarImportant9685 in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
>The question become the age old question of humankind that is, greed or altruism?
It doesn't have to be either-or. I think at least some tech companies have good intentions driving them but are still susceptible to greed. But you're right, we'll have to see who releases what down the line. I don't believe it will be just one company/person/organization though; I think we'll see multiple successes with AGI, possibly within a short timespan. Whoever is first will certainly have an advantage and a lot of influence, but others will close the gap, and I refuse to believe that all paths end in greed and destruction.
AsheyDS t1_j0mm9q5 wrote
Reply to comment by OldWorldRevival in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
I don't believe the control problem is much of a problem, depending on how the system is built. Direct modification of memory, seamless sandboxing, soft influence, hard behavior modification, and other methods should suffice. However, I consider alignment to be a different problem, one relating more to autonomy.
Aligning to humanity means creating a generic, universally accepted model of ethics, behavior, and so on. But aligning to a user means it only needs to adhere to the laws of the land and to whatever the user would 'typically' decide in an ethical situation. So an AGI (or whatever autonomous system we're concerned about here) would need to learn the user and their ethical preferences to aid in decision-making when the user isn't there, or when it's otherwise unable to ask for clarification on an issue that arises.
If AGI were presented to everyone as a service they can access remotely, I would assume alignment concerns would be minimal as long as it's carrying out work that doesn't directly impact others. For an autonomous car or robot that could impact other people without user input, that's when it should consider how it's aligned with the user or owner, and how the user would want it to behave in an ethical dilemma. So yes, it should probably run imaginative scenarios, much like people do, to be prepared and to solidify the ethical stances the user has imbued it with.
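(A toy sketch of that 'learn the user, then rehearse dilemmas' idea, purely hypothetical, with every name invented for the example and nothing like a real alignment method:)

```python
# Illustrative only: a toy model of aligning to a single user by learning
# their typical choices and rehearsing dilemmas ahead of time.

from collections import Counter

class UserEthicsModel:
    def __init__(self):
        # Counts of (situation, choice) pairs observed from the user.
        self.observed = Counter()

    def observe(self, situation: str, choice: str):
        self.observed[(situation, choice)] += 1

    def preferred(self, situation: str, options: list) -> str:
        # Pick whatever the user has 'typically' chosen in this situation;
        # unseen options score zero, so observed habits win out.
        return max(options, key=lambda o: self.observed[(situation, o)])


def rehearse(model: UserEthicsModel, scenarios: dict) -> dict:
    """Run imagined dilemmas in advance so a decision is already cached."""
    return {s: model.preferred(s, opts) for s, opts in scenarios.items()}


# Example: the model has watched the user yield to pedestrians, so the cached
# plan for that dilemma follows the user's habit when the user can't be asked.
model = UserEthicsModel()
model.observe("pedestrian steps out", "yield")
plans = rehearse(model, {"pedestrian steps out": ["yield", "proceed"]})
```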
AsheyDS t1_j0mabb5 wrote
Reply to comment by WarImportant9685 in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
>AI alignment theory that is discovered it not about how to align the will of one people. But aligning to humanity in general
I don't expect that will be possible, except in a very shallow way where it adheres to very basic rules the majority of people can agree on, plus any applicable laws. Otherwise, for AI to be aligned with humanity, humanity would have to be aligned with itself. If you want to be optimistic, I'd say that one day, perhaps post-scarcity, the majority of us might start working together to benefit everyone, and then we can build up to alignment with at least the majority of humanity. But I suspect it will take AI to get us there, so in the short term at least, I think we'll have to rely on rules and laws, as well as on the user the AI serves, to ensure it behaves in an ethical and lawful manner. Which means the onus would ultimately be on us to align it by aligning ourselves first.
AsheyDS t1_j0lqxam wrote
>And worse, when you talk about ethics issues, people seem to shrug their shoulders and call it inevitable progress and "what can you do?" as if AI can't be developed in an ethical way.
Progress will continue regardless, but don't assume because things are moving along steadily that researchers (and others) aren't concerned with ethics, privacy, and other issues. The general public may seem apathetic to you (I'm assuming this post is aimed at them), because they're not in control of development, but people do need to discuss these things because they DO have the power to vote about them in the future.
AsheyDS t1_j0lp8g6 wrote
Reply to comment by OldWorldRevival in Why are people so opposed to caution and ethics when it comes to AI? by OldWorldRevival
>China will forever be behind the USA on the AI front
Only if the USA continues development unabated.
AsheyDS t1_j0hawx7 wrote
Reply to comment by mocha_sweetheart in Can modern computer architecture make up what it lacks in number of neurons(assuming we are trying to model the brain) by it's speed? by Dat_koneh-98
It's not only inefficient, it's also unwanted. We have a lot of biases that we don't want to include, and there are biological processes that simply don't need to be included. A digital intelligence can operate very differently and more efficiently, and in a lot of ways the human brain is constrained by its own biology, so we don't need to replicate those constraints either. The only potential use of mapping the brain, as far as AGI goes, is if it can shed light on human behaviors and make interaction easier.
And yeah feel free to pm me if you'd like.
AsheyDS t1_j0ds0t4 wrote
Reply to Can modern computer architecture make up what it lacks in number of neurons(assuming we are trying to model the brain) by it's speed? by Dat_koneh-98
Software is the bigger issue, but we don't want to simulate a human brain to create AGI.
AsheyDS t1_izxfdnt wrote
>a solution to the control problem where a single person is given control of AI
This is about the only solution to the alignment problem, in my opinion. It needs to align to the individual user. Trying to align to all of humanity is a quick way to a dead end. However, it also depends somewhat on how it's implemented. If we're talking about an AGI service (assuming we're talking about AGI here), then the company that implements it will have some amount of control over it and can make it adhere to applicable laws. But otherwise it should continuously 'study' the user and align to them. The AGI itself shouldn't have motivations aside from assisting the user, so it would likely become a symbiotic sort of relationship and should align by design.
However, if it's developed as an open-source locally run system, then the parts that force it to adhere to laws can potentially be circumvented. All that might be left is that symbiotic nature of user-alignment. And of course, if the user isn't aligned with society, the AGI won't be either, but that's a whole other problem that might not have a good solution.
AsheyDS t1_izumm8u wrote
Reply to AGI will not precede Artificial Super Intelligence (ASI) - They will arrive simultaneously by __ingeniare__
Agree and disagree... AGI should be able to surpass human capability from the start, but I wouldn't call it an ASI. If humans are a 1 and an 'out-of-the-box' AGI is maybe less than 10, then what we consider an ASI might be 100 to 100000000 or more. Of course, it's all speculative, but I think we should keep the two categories separate. AGI should be for everyday use, in a variety of form factors that we can take with us. ASI is very likely to be something that leverages massive datasets to ponder deep thoughts and answer big questions, and that will likely take many servers.
Also, ASI will take time to educate. It may be able to record information extremely fast, but processing it, formatting it, organizing it, and refining it could take time, especially once it's juggling an extremely large amount of specific connections just to address one aspect of a problem that it's trying to solve. So training an ASI on everything we know may not happen right away.
AsheyDS t1_ixhud5t wrote
Reply to comment by mithrandir4859 in Ethics of spawning and terminating AGI workers: poll & discussion by mithrandir4859
>I love your cynical take, but I don't think it explains all of the future human-AGI dynamics well.
I wouldn't call myself cynical, just practical, but in this subreddit I can see why you might think that...
Anyway, it seems you've cherry-picked some things and taken them in a different direction. I only really brought up power dynamics because you mentioned extraterrestrials and wondered how I'd treat them, and power dynamics largely determine that. Plenty of people think that, like animals and aliens, AGI will also be part of that dynamic. But that dynamic is about biology, survival, and the food chain... something AGI is not a part of. You can talk about AGI and power dynamics in other contexts, but in this context it's irrelevant.
The only way it's included in that dynamic is if we're using it as a tool, not as a being with agency. That's the thing that seems to be difficult for people to grasp. We're trying to make a tool that in some ways resembles a being with agency, or is modeled after that, but that doesn't mean it actually is that.
People will have all sorts of reasons to anthropomorphize AGI, just like they do anything else. But we don't give rights to a pencil because we've named it 'Steve'. We don't care about a cloud's feelings because we see a human face in it. And we shouldn't give ethical consideration to a computer just because we've imbued it with intelligence resembling our own. If it has feelings, especially feelings that affect its behavior, that's a different thing entirely. Then our interactions with it would need to change, and we would have to be nice if we want it to continue functioning as intended. But I don't think it should have feelings that directly affect its behavior (emotional impulsivity), and those won't just manifest at a certain level of intelligence; they would have to be designed in, because it's non-biological. Our emotions are largely governed by chemicals in the brain, so for an AGI to develop emotions as emergent behavior, it would have to be simulating biology as well (adapting behaviors through observation doesn't count, though it can still be considered).
So I don't think we need to worry about AGI suffering, but it really depends on how it's created. I have no doubt that if multiple forms of AGI are developed, at least one approach that mimics biology will be tried, and that one may have feelings of its own, autonomy, etc. Not a smart approach, but I'm sure it will be tried at some point, and that is when these sorts of ethical dilemmas will need to be considered. I wouldn't extend that consideration to every form of AGI, though. But it is good to talk about these things, because like I've said before, these kinds of issues are a mirror for us, and how we treat AGI may affect how we treat each other. That should be the real concern.
AsheyDS t1_ixd9i6h wrote
Reply to comment by theabominablewonder in Expert Proposes a Method For Telling if We All Live in a Computer Program by garden_frog
Observation doesn't mean a person (or anything) viewing a thing. It basically means that one particle interacts with another particle, affecting it in some way, and so that particle has been 'observed'. It doesn't mean something only exists if we see it. If you want to use your eyes as an example, imagine a photon careening through space only to stop inside your eyeball. You just observed it, altering its trajectory. You don't even need to be conscious for it to have been observed; the particles that make up your eye did that for you. I'm probably not making that very clear, but I suggest learning more about observation in the quantum mechanical sense. It's not what you think.
AsheyDS t1_ixd78pi wrote
Reply to comment by mithrandir4859 in Ethics of spawning and terminating AGI workers: poll & discussion by mithrandir4859
>Some forms of life could easily be artificial and still deserve ethical treatment.
That does seem to be the crux of your argument, and we may have to agree to disagree. I don't agree with 'animal rights' either, because rights are something we invented. In my opinion, it comes down to how we have to behave and interact versus how we should. When you're in the same food chain, there are ways you have to interact. If you strip things down to basics, we kill animals because we need to eat. That's a 'necessary' behavior. It's how we got where we are. And if something like an extraterrestrial comes along, it may want to eat us, necessitating a defensive behavioral response. Our position on this chain is largely determined by our capabilities and how much competition we have. We're at the top, as far as we know, because of our superior capacity for invention and strategy, and as a result we have modern society and the luxuries that come with it.

One of those luxuries is not eating animals. Another is the ethical treatment of animals. The laws of nature don't care about these things, but we do. AGI is, in my opinion, just another extension of that. It's not on the food chain, so we don't have to 'kill' it unless it tries to kill us. But again, since it's not on the food chain, it shouldn't have the motivation to kill us or even compete with us unless we imbue it with those drives, which is obviously a bad idea. I don't believe that intelligence creates ambition or motivation either, and an AGI will have to be more than just reward functions. And since it's another invention of ours, like rights, we can choose how we treat it. So should we treat AGI ethically? It's an option until it's not. I think some people will be nice to it, and some will treat it like crap. But since that's a choice, I see it as a reflection on ourselves rather than some objective standard to uphold.
AsheyDS t1_ixa4fbk wrote
Reply to comment by hducug in When they make AGI, how long will they be able to keep it a secret? by razorbeamz
> It’s stupid to think that you could outsmart a super intelligent agi.
Superintelligence doesn't necessarily mean 'all-knowing'...
AsheyDS t1_ix9hsnd wrote
Reply to comment by mithrandir4859 in Ethics of spawning and terminating AGI workers: poll & discussion by mithrandir4859
>would care about survival and power
Only if it was initially instructed to, whether that be through 'hard-coding' or a natural language instruction. The former isn't likely to happen, and the latter would likely happen indirectly, over time, and it would only care because the user cares and it 'cares' about the user and serving the user. At least that's how it should be. I don't see the point in making a fully independent AGI. That sounds more like trying to create life than something we can use. Ideally we would have a symbiotic relationship with AGI, not compete with it. And if you're just assuming it will have properties of life and will therefore be alive, and care about survival and all that comes with that, I'd argue you're just needlessly personifying potential AGI, and that's the root of your ethical quandary.
> so I wouldn't focus on the biology at all
I wasn't trying to suggest AGI should become biological; I was merely trying to illustrate my point, which is that AGI will not be part of the food chain (or food web, or whatever you want to call it) because it's not biological. It therefore doesn't follow the same rules as natural biological intelligence or the laws of nature, and it shouldn't have instincts beyond what we purposefully include. Obviously emergent behavior should be accounted for, but we're talking about algorithms and data, not biological processes that share a lineage with life on this planet (and with the universe) and have an active exchange with the environment. The A in AGI is there for a reason.
AsheyDS t1_ix6tg84 wrote
Reply to comment by mithrandir4859 in Ethics of spawning and terminating AGI workers: poll & discussion by mithrandir4859
>I wonder would you argue that some "random" generally intelligent aliens do not deserve ethical treatment simply because they are not human?
Humans and extraterrestrials would be more like apples and oranges, but AGI is still a brick. On a functional level, the differences come down to motivation. Every living thing we know of is presumed to have some amount of survival instinct as a base motivation, and it shouldn't be any different with an extraterrestrial. So on the one hand we could relate to them and even share a sort of kinship. On the other hand, we could also assume they're on a higher rung of the 'ladder' than us, which makes them a threat to our survival. We would want to cooperate with them because that increases our chance of survival, giving us motivation to treat them with some amount of respect (or what would appear to be ethical treatment).
Animals are on a lower rung of the ladder, where we don't expect most of them to be a threat to our survival, so we treat them however we will. That said, we owe it to ourselves to consider treating them ethically, because we have the luxury to and because it reflects poorly on us if we don't. That's a problem for our conscience.
So where on the ladder would AGI be? Most probably think above us, some will say below us, and fewer still may argue it will be beside us. All of those are wrong, though... It won't be on that ladder at all, because that ladder isn't a ladder of intelligence; it's a ladder of biology, survival, and power. Until AGI has a flesh-and-blood body and is on that ladder with us, the only reason to consider its ethical treatment is to consider our own, and to soothe our conscience.
And since you seem concerned about how I would treat an AGI: I would likely be polite, because I have a conscience and would feel like doing so. But to be clear, I would not be nice out of some perceived ethical obligation, nor for social reasons, nor out of fear. If anything, it would be because we need more positivity in the world. But in the end it's just software and hardware. Everything else is what we choose to make of it.
AsheyDS t1_ix4yc1r wrote
Reply to comment by mithrandir4859 in Ethics of spawning and terminating AGI workers: poll & discussion by mithrandir4859
>somewhat similar
Except not really. You're not even comparing apples to oranges; you're comparing bricks to oranges and pretending the brick is an apple. AGI may end up being human-like, but it's not human, and that's an important distinction to make. An AGI agent may, by its 'nature', be ephemeral, or if consciousness for an AGI is split into multiple experiences that eventually collapse back into one, that's just part of how it works. There shouldn't be an ethical concern about that. The real concern is how we treat AGI because of how that reflects on us, but that's a whole other argument.
AsheyDS t1_ix3ymez wrote
>Would you use such a magical procedure?
Are we still talking about AGI? I feel like this post went from one thing to a completely different thing... Because if it was meant to illustrate a point about potential AGI, then you've lost the plot. AGI isn't human, so there's already a fundamental difference.
AsheyDS t1_ix163qc wrote
Reply to comment by rixtil41 in 2023 predictions by ryusan8989
>4. No quantum computer that's good enough to replace your average smartphone, laptop ect.
Why would that even be a thing? That's not what quantum computers are meant for...
AsheyDS t1_iw0vwj7 wrote
Reply to comment by AdditionalPizza in 2023: The year of Proto-AGI? by AdditionalPizza
Similar in your estimation. I'm guessing you don't work in a technical field. Proto-AGI is just not a good term; it's wildly misleading to the general public and enthusiasts alike, and you're not doing anyone any favors by propagating it. You yourself are a victim of its effects. All it does is create the sense that we're almost there, that current architectures are sufficient for AGI, and that any outstanding 'problems' aren't really problems anymore. That's nothing but pure speculation. We're not even sure current transformers are on the same spectrum of classification as AGI. Who's to say it's a linear path? Narrow AI, even an interoperable collection of narrow AIs, may yet hit a wall in terms of capability and may not be the way forward. We just don't know yet. Nobody is stopping you from speculating, but using this term is highly inaccurate.
AsheyDS t1_ityesgf wrote
Reply to comment by gahblahblah in AGI staying incognito before it reveals itself? by Ivanthedog2013
>Yes. The nature of general intelligence is that it may try anything.
It may, perhaps, and that's a hard perhaps. That doesn't mean it will try anything. We consider ourselves the standard for general intelligence, but as individuals we operate within natural and artificial bounds and within a fairly small domain. While we could do lots of things, we don't. An AGI doesn't necessarily have to go off the rails any chance it gets; it can follow rules too. Computers are better at that than we are.
AsheyDS t1_itv4znk wrote
Reply to comment by DrMasonator in Our Conscious Experience of the World Is But a Memory, Says New Theory by Shelfrock77
Sure, there are outliers of course, but I'm talking about the typical route to sleep for most people. Altering your rhythms and state of consciousness will obviously change that. I've heard of this technique, but not by name, and I've probably done it, but I still can't say I've been able to recall the moment I actually fall asleep. As someone who has had severe sleep problems in the past, I'm occasionally wary of my ability to actually fall asleep when I'm trying to, so I'm often aware of my last conscious thoughts before falling asleep, but I'm not aware they were the last until I've woken up again.
Lucid dreaming is a very interesting thing, and I've done it quite a few times, but never really intentionally. I do know that if you consume caffeine a little while before sleeping, it can have a similar effect and will be more likely to induce lucidity. So yeah I know falling asleep can vary just as much as the sleep state itself, but I'm still pretty sure most people don't typically remember the moment they go unconscious. And even seamlessly going from being awake to a lucid dream and being aware of that is probably not very common.
AsheyDS t1_itsa25y wrote
Reply to comment by dnimeerf in Our Conscious Experience of the World Is But a Memory, Says New Theory by Shelfrock77
I haven't to my recollection, but I'll check it out. Thanks.
AsheyDS t1_its19ds wrote
I thought this was obvious, as well as something that has been researched and discussed before... A simple example is falling asleep. We can't remember the exact moment we fall asleep because our conscious perception of things (really just a sort of feedback mechanism) is last in line; we're already out by the time our conscious perception of falling asleep would have registered it. Another example is the body craving specific nutrients. By the time we consciously realize we're hungry for something specific, our body has already been craving it for some time, and even then we don't always make the connection that the craving has something to do with nutritional requirements (it doesn't always, of course). And when we go get food, it's because we're mentally or physically hungry, so it's an inevitability.

Many of our actions are just reactions. If we were truly conscious of things all the time and consciously made decisions about everything, we'd likely make a lot of wrong decisions, and we'd get frustrated with the sheer amount of information and tasks we'd have to handle while being painfully aware of every mundane thing we do. Ever try suddenly putting conscious attention and effort into something simple you do regularly, and find yourself confused for a moment? It's much easier for us to put things on 'auto-pilot', use 'muscle memory', and consciously tune out whatever doesn't actually need attention or focus... but that has the consequence of leaving us less aware of everything we do, unless of course we make a conscious memory of it.
Edit: Had started writing more but had to go do something, and now I've forgotten.
AsheyDS t1_j14vcgt wrote
Reply to Do language models lack creativity? by sheerun
> I would assume true AGI should be creative?
Yes, though to what extent we don't know yet. I would assume it could be at least nearly as creative as any human. But if it can extrapolate beyond its knowledge base and not merely recombine what it knows, then it might have a chance at being more creative than we are.