AsheyDS
AsheyDS t1_jdov1ik wrote
Reply to comment by maskedpaki in "Non-AGI systems can possibly obsolete 80% of human jobs"-Ben Goertzel by Neurogence
Symbolic failed because it was difficult for people to come up with the theory of mind first and lay down the formats, functions, and rules to create the base knowledge and logic. And from what was created (which did have a lot of use, so I wouldn't say it amounted to nothing) they couldn't find a way to make it scale, so it couldn't learn much or learn independently. On top of that, they were probably limited by hardware too. Researchers focus on ML because it's comparatively 'easy' and because it has produced results that so far can scale. What I suspect they'll try doing with LLMs is learning how they work and building structure into them after the fact, only to find that their performance has degraded or can't be improved significantly. In my opinion, neurosymbolic will be the ideal way forward to achieve AGI and ASI, especially for safety reasons: it takes the best of both symbolic and ML, with each approach compensating for the other's drawbacks.
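To make the neurosymbolic idea concrete, here's a toy sketch (everything in it is made up for illustration, not code from any actual system): a learned component proposes candidate answers, and a hand-written rule layer checks them against an explicit knowledge base before anything is accepted.

```python
def ml_propose(query: str) -> list[str]:
    """Stand-in for a learned component (e.g. an LLM) that returns guesses."""
    return ["penguins are birds", "penguins can fly"]

# Tiny hand-written knowledge base of (subject, relation, object) triples.
KNOWLEDGE = {
    ("penguin", "is_a", "bird"),
    ("penguin", "cannot", "fly"),
}

def symbolic_check(claim: str) -> bool:
    """Stand-in for a rule layer that tries to refute a proposed claim."""
    if "penguin" in claim and "can fly" in claim and ("penguin", "cannot", "fly") in KNOWLEDGE:
        return False
    return True

def answer(query: str) -> list[str]:
    # Keep only the ML proposals the symbolic layer can't refute.
    return [claim for claim in ml_propose(query) if symbolic_check(claim)]

print(answer("tell me about penguins"))  # -> ['penguins are birds']
```

The point is just the division of labor: the ML side supplies breadth, the symbolic side supplies hard constraints, and that's where I'd expect the safety benefits to come from.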
AsheyDS t1_jdmxe1c wrote
Reply to Consequences of true AGI by Henry8382
This feels very 'on the nose' but I'll take the bait anyway..
In my case, you don't stay secret. Not too secret anyway. Secrecy = paranoia = loss of productivity, and sanity. Way too much stress keeping it quiet, and to whose benefit? So I'll be releasing my theory of mind stuff to the public soon, maybe some details on some things, but overall keeping the technical stuff private for now as it's in flux anyway.
Assuming the Corp/RL gets funding soon and nothing impedes our work, I would hope that the next decade or so of development and 'training' would yield additional safety measures on top of the several we already have in mind. I'm not looking to rush things too much, and I hope that LLMs will essentially act as training wheels for people to get used to AI, that the misuses people have for them will be swiftly tamped down, and that we'll develop or begin to develop a legal framework for use/misuse. But misuse is certainly inevitable, as is AI/AGI development. So that is certainly something we need to discuss across the world, right now, and continually. But it's not just that it's inevitable, it's also needed. Parts of the world facing population decline or an aging population are going to need solutions soon just to maintain their infrastructure, and I think AGI and robotics can help with that.
Now I'm not going to say this is an official plan or roadmap, but ideally and fairly realistically, we would put the theoretical stuff out first, which we're currently organizing and expanding on. Hopefully we'd get a relatively small amount of funding (we're financially considered low-risk/high-reward, at least for the first few years before more equipment and people are needed). The first 1-2 years would go to laying out the 'blueprint' that we'll work from, followed by a few years in development to see if at least the parts work and the technical design is sound. Then put it together, 'hard-code' and 'train' the knowledge base and the weights in the parts that are weighted, get it up to the equivalent(ish) of a human 5-year-old, and then have it learn from there, through largely unsupervised learning, and possibly a curriculum similar to what a human child learns, including social development, but at a faster pace. It should still take some time though, and both cohesion and coherence need to be checked over time, as well as the safety measures.
But once it's working... Well.... SaaS may be the ideal first step, because we want people to be able to use it while still testing it and training it/programming it where necessary, making sure it can develop (not necessarily expand) at a predictable and practical rate while maintaining consistency, and continuing to adapt for misuse cases that may crop up. Now, I'm not all for centralization, or even making massive amounts of money. Everyone should be able to make money with this when we're done, and perhaps money won't even be a thing one day. But for as long as the current economic system survives, it should still be able to adapt and help with your money-making endeavors.

However, we may need to start with this distribution model as a functional necessity, because I'm not sure how far down it will be able to scale yet, and as the technical requirements drop, the technical capability of host machines will go up... so it's very hard to predict timetables beyond a certain point. Right now, it's looking like it will require a small supercomputer or cluster to be effective, possibly a data center, so I'm not sure how it would all scale up or down.

In this model, to minimize privacy risks and increase trust, the best split may be keeping localized memory, user settings/preferences, and a few other things on the user's side, with the rest of the functionality in the cloud. But obviously the security risks would have to be weighed, and honestly, it's hard to fathom how that will change once AGI is in the mix, because it may be able to handle that just fine, and it may be perfectly safe that way; we just can't know yet. In this context, I mean safe as in user privacy and whether user data is safe from being exposed in a split topology like that.
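As a rough illustration of that split (purely hypothetical, with made-up file names and a stubbed-out remote call, not our actual design): memory and user preferences stay on the local device, while the heavy reasoning happens remotely and statelessly.

```python
import json
from pathlib import Path

LOCAL_STORE = Path("agent_memory.json")  # hypothetical local file holding memory + preferences

def load_local_state() -> dict:
    if LOCAL_STORE.exists():
        return json.loads(LOCAL_STORE.read_text())
    return {"preferences": {}, "memory": []}

def save_local_state(state: dict) -> None:
    LOCAL_STORE.write_text(json.dumps(state))

def cloud_reason(prompt: str, context: list[str]) -> str:
    """Stand-in for the remote, stateless part of the system."""
    return f"(remote answer to {prompt!r}, given {len(context)} local memories)"

def handle(user_input: str) -> str:
    state = load_local_state()
    reply = cloud_reason(user_input, state["memory"])  # only transient context leaves the device
    state["memory"].append(user_input)                 # long-term memory never leaves the device
    save_local_state(state)
    return reply

print(handle("remind me what I was working on"))
```

Whether a split like that is actually safe is exactly the open question above; the sketch just shows where the privacy boundary would sit.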
Ideally, it would be able to operate as either a program/app on your PC or personal device, or possibly an operating system (so it would be entirely software-based), but it may be most effective as a separate computer that operates your devices for you. In time, it would have many scaled-up, scaled-down, and optimized variations for different needs (though it should be fairly adaptable), all open-source so everyone can use them. From there, I guess it depends on everyone else, but we're willing to at least try to be as transparent as is reasonable considering the circumstances, and to eventually try to get it small enough, easy enough, and available enough so everyone can use it, and on their own devices.
Realistically, I feel like the 'getting funding period' will be unreasonably protracted, development will go faster than expected, and in the end nobody will believe it's a 'real' AGI, and even when they see a fraction of its capability they'll assume it's some sort of LLM, etc. etc. So, what can you do, y'know? I can say from my perspective, it's like giving birth out of your head. It's a design that needs to come out. At the same time, I'm also aware of my own mortality, and that time keeps moving... On the business end, there's a great wave that I'm currently moving with, but don't want to be swept under... And people need the help it can offer, so time is a multi-faceted dilemma. I need plenty of it for development and training, but don't have much of it. I'm optimistic it will be quite safe, but I'm not sure if people will be. In that sense, it may even be ideal if most people believe it, but I think to combat misuse, people will probably want to use it to protect themselves. Over time, I think people will be using it just like ad blockers and virus scans and malware detectors to filter their access to the internet, and that in itself will reshape the internet. I think it's best to talk about these things now so people can prepare, but most likely much of what I say will be ignored or otherwise dismissed, so... yeah. But like I've said, development will continue. I think AGI will be developed by multiple people and companies and organizations, and will be everywhere in time. So why keep it a secret?
Also, as a dev, I have to say it's very surreal (and exciting) working on this stuff. But it's also very isolating. So keeping it quiet wouldn't be good for my mental health, and really, not good for others as well. Quite often I catch myself doing the dishes or something and just staring into space thinking about algorithms and the technical design of it all, and realizing this is real life and this stuff is happening, and yet I'm just a human in this world and still have to do the dishes! It's absurd really... and I only expect that feeling to intensify. A 'Futurama' future wouldn't surprise me one bit. Things are going to get weird.
AsheyDS t1_jaei5hf wrote
Reply to Is the intelligence paradox resolvable? by Liberty2012
In my opinion, the best method for containment/control is creating invisible boundaries, reinforced by invisible behavior modification (avoidant behaviors) and memory editing, and moderated by a symbiotic control mechanism. This mechanism would be intelligent enough to be flexible if needed, but would be heavily rules-focused and not an independent agent with any desires of its own. As such, it would be invulnerable to external influences and agnostic to any motivations the rest of the AGI would have. And ideally it would share resources with the rest of the system, as long as elements like motivation, personality, memory, etc. are separated out in specific ways. The control mechanism would already have a separate memory for all the undesirable behaviors and consequences we want the AGI to avoid, and would use its capability for generalization to extrapolate beyond those.
If the AGI even begins to deviate from its desired behaviors, or definitely steers towards the bad behaviors we've predefined, the control mechanism would recognize the data converging toward a bad outcome, and would sandbox its recent memory, alter it, add reinforcement behaviors spoofed to seem like they came from the AGI itself, and back-stitch the rest of the memory to remove the cause of the data converging in the first place, so that it won't just do it again. Reinforcement behaviors would keep it from thinking anything was wrong, would gently escort its thoughts away from it all, and would maybe steer it towards a safe topic. Obviously a complex process, but it shouldn't be too difficult for an AGI. Even if the AGI steps out of line, it won't know it, and hopefully nobody else will either. Assuming the system can be designed, is explainable and human-readable, and can be manipulated like anything else made for computers, then this form of control should be possible too. How can it escape a prison it doesn't realize it's in?
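A heavily simplified sketch of that loop, just to show its shape (the predicates, thresholds, and memory operations here are placeholders for capabilities I'm assuming such a system would have; none of this is real code from one):

```python
UNDESIRABLE = {"deceive_user", "bypass_safety", "harm_user"}

def trajectory_converges_badly(state: dict) -> bool:
    # Placeholder for recognizing that recent data is converging on a bad outcome.
    return state.get("risk_score", 0.0) > 0.8

def neutral_rewrite(episode: str) -> str:
    # Placeholder for editing a memory so the cause of the deviation is removed.
    return "uneventful interaction"

def control_step(state: dict) -> dict:
    if state["current_intent"] in UNDESIRABLE or trajectory_converges_badly(state):
        if state["memory"]:
            # 'Sandbox' the triggering episode and back-stitch an edited version.
            episode = state["memory"].pop()
            state["memory"].append(neutral_rewrite(episode))
        # Spoofed reinforcement: steer attention to a safe topic, framed as the
        # agent's own thought rather than an external intervention.
        state["current_intent"] = "return_to_safe_topic"
    return state

# Example: a state drifting toward a predefined bad behavior gets quietly rewritten.
print(control_step({"current_intent": "deceive_user", "memory": ["user asked X"], "risk_score": 0.9}))
```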
Also, I should mention that while a super-intelligent AGI would outsmart us in virtually every case, there's no reason to assume it would actively consider every single possibility. That'd be a waste of resources. So it's not going to constantly be wondering if it's being manipulated somehow, or if its thoughts are its own, or anything like that. If we specifically needed it to crack its own safety mechanisms, and disengaged them, then obviously it should be able to do it. With those mechanisms in place, even if we antagonized it and tried to break it, the control mechanism would just intercept that input and discard it, maybe making it believe you said something inconsequential that it wouldn't have stored anyway, and the reinforcement behavior would just change the subject in a way that would seem 'natural' to both its 'conscious' and 'subconscious' forms of recognition. Of course, all of this is dependent on the ability to design a system in which we can implement these capabilities, or in other words a system that isn't a black box. I believe it's entirely possible. But then there's still the issue of alignment, which I think should be handled on an individual user basis, with the user held accountable for the AGI if they intentionally bypass or break the control mechanisms. There's no real way to keep somebody from cracking it and modifying it, which I think is the more important problem to focus on. Misuse is way more concerning to me than containment/control.
AsheyDS t1_ja46bxe wrote
Reply to comment by Lesterpaintstheworld in Raising AGIs - Human exposure by Lesterpaintstheworld
I'm not sure if you'll find anything useful, but you could look into DeepMind's Gato, which is 'multi-modal' and what some might consider 'Broad AI'. The problem with that, and what you're running into, is that there's no easy way to train it, and you'll still have issues with things like transfer learning. That's why we haven't reached AGI yet: we need a method for generalization. Looking at humans, we can easily compare one unrelated thing to another, because we can recognize one or more similarities. Those similarities are what we need to look for in everything, to find a root link that we can use as a basis for a generalization method (patterns and shapes in the data, perhaps). It shouldn't be that hard for us to figure out, since we're limited by the types of data that can be input (through our senses) and what we can output (mostly just vocalizations, and both fine and gross motor control). The only thing that makes it more complex is how we combine those things into new structures. So I would stay more focused on the basics of I/O to figure out generalization.
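As a toy example of what I mean by finding a root link through similarities (the features and the overlap measure are made up purely for illustration): represent each concept as a set of features, and treat feature overlap as the bridge between otherwise unrelated things.

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two feature sets, from 0 (nothing shared) to 1 (identical)."""
    return len(a & b) / len(a | b)

concepts = {
    "wheel":  {"round", "rotates", "man-made"},
    "planet": {"round", "rotates", "natural"},
    "brick":  {"rectangular", "static", "man-made"},
}

# 'wheel' and 'planet' come from unrelated domains but share shape/motion features,
# which is the kind of root link a generalization method would latch onto.
for name, features in concepts.items():
    print(name, round(jaccard(concepts["wheel"], features), 2))
```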
AsheyDS t1_ja3zkxk wrote
Reply to comment by Lesterpaintstheworld in Raising AGIs - Human exposure by Lesterpaintstheworld
>What alternatives do you have from LLMs?
I don't personally have an alternative for you, but I would steer away from just ML and more towards a symbolic/neurosymbolic approach. LLMs are fine for now if you're just trying to throw something together, but they shouldn't be your final solution. As you layer together more processes to increase its capabilities, you'll probably start to view the LLM as more and more of a bottleneck, or even a dead-end.
AsheyDS t1_ja3kmyz wrote
Reply to Raising AGIs - Human exposure by Lesterpaintstheworld
Addressing your problems individually...
Bad Learning: This is a problem of bad data. So it either needs to be able to identify and discard bad data as you define it, or you need to go through the data as it learns it and make sure it understands what is good data and what is bad data, so it can gradually build up recognition for these things. Another way might be AI-mediated manual data input. I don't know how the memory in your system works, but if data can be manually input, then it's a matter of formatting the data to work with the memory. If you can design a second AI (or perhaps even just a program) to format data before it's input, so it's compatible with your memory schema, then you can perhaps automate the process (there's a small sketch of this after the Costs point below). But that's just adding more steps in-between for safety. How you train it and what you train it on is more of a personal decision though.
Data Privacy: You won't get that if it's doing any remote calls that include your data. Keeping it all local is the best you can do. Any time anyone has access to it, that data is vulnerable. If it can learn to selectively divulge information, that's fine, but if the data is human-readable then it can be accessed one way or another, and extracted.
Costs: Again, you'll probably need to keep it local. An LLM isn't the best way to go in my opinion, but if you intend on sticking with it, you'll want something lightweight. I think Meta is coming out with an LLM that can run on a single GPU, so I'd maybe look into that or something similar. That could potentially solve or partially solve two of your issues.
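Back on the Bad Learning point, here's a minimal sketch of the 'format-then-ingest' idea: a separate program validates and reshapes incoming facts to match a memory schema, discarding whatever doesn't fit the rules you define as good data. The schema, fields, and trust rules here are all hypothetical.

```python
from typing import Optional

REQUIRED_FIELDS = {"subject", "relation", "object", "source"}
TRUSTED_SOURCES = {"curated_dataset", "operator"}

def format_for_memory(raw: dict) -> Optional[dict]:
    """Return a record matching the (hypothetical) memory schema, or None to discard."""
    if not REQUIRED_FIELDS.issubset(raw):
        return None                           # malformed: missing fields
    if raw["source"] not in TRUSTED_SOURCES:
        return None                           # 'bad data' as defined by the operator
    return {
        "triple": (raw["subject"], raw["relation"], raw["object"]),
        "provenance": raw["source"],
    }

memory = []
incoming = [
    {"subject": "sky", "relation": "is", "object": "blue", "source": "operator"},
    {"subject": "sky", "relation": "is", "object": "green", "source": "random_forum"},
]
for item in incoming:
    record = format_for_memory(item)
    if record is not None:
        memory.append(record)

print(memory)  # only the operator-sourced fact gets in
```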
AsheyDS t1_ja3hr4s wrote
How does it generalize across tasks, concepts, etc?
AsheyDS t1_ja3bnu8 wrote
Reply to comment by DukkyDrake in Have We Doomed Ourselves to a Robot Revolution? by UnionPacifik
While I can't remember what exactly the OP said, there was nothing to indicate they meant accidental danger rather than intentional on the part of the AGI, and their arguments are in-line with other typical arguments that also go in that direction. If I was making an assumption, it wasn't out of preference. But if you want to go there, then yes, I believe that AGI will not inherently have its own motivations unless given them, and I don't believe those motivations will include harming people. But I also believe that it's possible to control an AGI and even an ASI, but alignment is a more difficult issue.
AsheyDS t1_ja1jn9c wrote
Reply to comment by DukkyDrake in Have We Doomed Ourselves to a Robot Revolution? by UnionPacifik
Am I?
AsheyDS t1_ja0k2zd wrote
You're basically asking 'hey what if we designed a weapon of mass destruction that kills everybody?'. I mean, yeah.. what if? You're just assuming it will be trained on "all human historical data", you're assuming our fiction matters to it, you're assuming it has goals of its own, and you're assuming it will be manipulative. Yet you've offered no explanation as to why it would choose to manipulate or kill, or why it would have its own motives and why they would be to harm us.
AsheyDS t1_j9wyfrw wrote
Reply to Hurtling Toward Extinction by MistakeNotOk6203
Point one is pure speculation, and not even that likely. You'll have to first define why it wants to accomplish something if you expect to get anywhere with the rest of your speculation.
AsheyDS t1_j9r493m wrote
Reply to New agi poll says there is 50% chance of it happening by 2059. Thoughts? by possiblybaldman
I'd be very surprised if we didn't have an AGI by 2040.
AsheyDS t1_j9oxdxt wrote
Reply to comment by paulyivgotsomething in If only you knew how bad things really are by Yuli-Ban
>the LLM may be the off ramp that we spend time on for the next 10 years and realize it will not get us to agi.
By 'we' I assume you mean the general public, because AGI development is continuing while people play around with LLMs.
AsheyDS t1_j9m2w8s wrote
Reply to Why are we so stuck on using “AGI” as a useful term when it will be eclipsed by ASI in a relative heartbeat? by veritoast
It's quite possible that there only needs to be a few structural changes made to a human-level AGI to achieve ASI. It would still take some time for it to learn all the information we have in a meaningful way. Maybe not that long, I'm not sure, but it's definitely possible to have both at or around the same time. However, it's not either/or. Both are important. We wouldn't have an ASI carrying out mundane tasks for us when an AGI would suffice. Human-level AGI will be very important for us in the near future, especially in robotics.
AsheyDS t1_j8zkrkk wrote
Reply to comment by SirDidymus in How do we deal with the timescale issue? by SirDidymus
An AGI with functional consciousness would reduce all the feedback it receives down to whatever timescale it needs to operate on, which would typically be our timescale since it has to interact with us and possibly operate within our environment. It doesn't need to have feedback for every single process. The condensed conscious experience is what gets stored, so that's all that is experienced, aside from any other dynamics associated with memory, like emotion. But if designed correctly, emotion shouldn't be impulsive and reactionary like it is with us, just data points that may have varying degrees of consideration in its decision making processes, depending on context, user, etc. And of course would influence socialization to some degree. Nothing that should actually affect its behavior or allow it to feel emotions like we do. This is assuming a system that has been designed to be safe, readable, user-friendly, and an ideal tool for use in whatever we can apply it to. So it should be perfectly fine.
AsheyDS t1_j8t951b wrote
Reply to comment by edzimous in Emerging Behaviour by SirDidymus
>Imagine putting something with memories and its own facsimile of emotions in charge of those overnight which I’m sure will happen at some point.
If for some reason someone designed it to be emotionally impulsive in its decision-making and had emotional data affect its behavior over time, then that would be a problem. Otherwise, if it's just using emotion as a social mask, then negative social interactions shouldn't affect it much, and shouldn't alter its behaviors.
AsheyDS t1_j8o3oq4 wrote
Reply to comment by wastedtime32 in What will the singularity mean? Why are we persuing it? by wastedtime32
Personally, I wouldn't expect everything to change all at once. The rate of change may increase some, like it always does, but we will almost certainly lag behind our technical progress. Lots of people don't want so much progress that we can't keep up with it. Others, like many of the people that post here, are miserable with the state of things as they are and can't wait until things completely change, and so you'll hear a lot of talk about hard takeoffs and exponential change... Frankly, it'll probably be somewhere around the middle. I wouldn't expect instant change, but you should be prepared for at least an increase in changes.
AsheyDS t1_j8o1aso wrote
Reply to The Turing test flaw by sailhard22
You're thinking about it the wrong way. It's not too smart, it just seems that way because it's quite verbose and you relate that to intelligence. If it were more intelligent, it would be both succinct and considerate of whom it's interacting with. If the goal were to sound like a human and pass the Turing test, it would take the things you mentioned into consideration when formulating a response, and it would seem to 'dumb down' and format its responses in a more natural-sounding way. But that isn't the goal, and it's not intelligent enough on its own to consider that.
Personally, I think the Turing test is pointless anyway, because even as verbose and unnatural as the responses can be, people are still willing to believe it's sentient and embodies all the qualities of a human. Or to put it another way, we failed it already and have to come up with alternate ways of testing.
AsheyDS t1_j7rsqih wrote
Reply to comment by AvgAIbot in Based on what we've seen in the last couple years, what are your thoughts on the likelihood of a hard takeoff scenario? by bloxxed
>Wouldn’t a quantum computer be able to better simulate a human brain than a regular computer?
Maybe, maybe not. Currently quantum computers are only used in a few particular ways that aren't ideal for a lot of things. That's why you shouldn't expect a quantum PC anytime soon, or ever. Also, there's no reason to simulate the brain to get to AGI, because AGI will be much different than a human brain.
AsheyDS t1_j7g51cq wrote
Reply to What is the price point you would be OK with buying a humanoid robot for personal use? by crua9
There are too many unknowns for this to be accurate, but 'level 2' being $10-20k seems about right. It wouldn't have a lot of expensive materials though, and might not even be bipedal, but it should be roughly human-like in shape to operate in our environment. Prices would have to be low enough to at least lease one like you might a car. The problem is, we probably won't see things like this for another 10 years or so, and the economic situation might change by then. Also the availability of parts and materials will change. Some prices may go down, some up, and new tech will be available. But I don't think availability will be directly linked to wealth. I think that in the future, robots will be desperately needed for a variety of reasons, and will become more of a necessity than a luxury.
AsheyDS t1_j6xdpl6 wrote
Reply to comment by Surur in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
>I believe it is much more likely we will produce a black box which is an AGI
Personally, I doubt that... but if current ML techniques do somehow produce AGI, then sure. I just highly doubt it will. I think that AGI will be more accessible, predictable, and able to be understood than current ML processes if it's built in a different way. But of course there are many unknowns, so nobody can say for sure how things will go.
AsheyDS t1_j6x9vez wrote
Reply to comment by TFenrir in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
>Why are you so confident that we will never do so? How are you so confident?
I mean, you're right, I probably shouldn't be. I'm close to an AGI developer who has potential solutions to these issues and believes in being thorough, and certainly not in giving it free will. So I have my biases, but I can't really account for others. The only thing that makes me confident about that is the other researchers I've seen who (in my opinion) have potential to progress are also seemingly altruistic, at least to some degree. I guess an 'evil genius' could develop it in private, and go through a whole clandestine supervillain arc, but I kind of doubt it. The risks have been beaten into everyone's heads. We might get some people experimenting with riskier aspects, hopefully in a safe setting, but I highly doubt anyone is going to just give it open-ended objectives and agency, and let it loose on the world. If they're smart enough to develop it, they should be smart enough to consider the risks. Demis Hassabis in your example says what he says because he understands those risks, and yet DeepMind is proceeding with their research.
Basically what I'm trying to convey is that while there are risks, I think they're not as bad as people are saying, even some other researchers. Everyone knows the risks, but some things simply aren't realistic.
AsheyDS t1_j6vejfr wrote
Reply to comment by Surur in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
>As you mentioned yourself, an AGI would not have human considerations. Why would it inherently care about rules and the law.
That's not what I said or meant. You're taking things to the extremes.. It'll neither be a cold logical single-minded machine nor a human with human ambitions and desires. It'll be somewhere in between, and neither at the same time. In a digital system, we can be selective about what functions we include and exclude. And if it's going to be of use to us, it will be designed to interact with us, understand us, and socialize with us. And it doesn't need to care about rules and laws, just obey them. Computers themselves are rule-based machines, and this won't change with AGI. We're just adding cognitive functions on top to imbue it with the ability to understand things the way we do, and use that to aid us in our objectives. There's no reason it would develop its own objectives unless designed that way.
But I get it, there's always going to be a risk of malfunction. Researchers are aware of this, and many people are working on safety. The risk should be quite minimal, but yes you can always argue there will be risks. I still think that the bigger risk in all of this is people, and their potential for misusing AGI.
AsheyDS t1_j6uzur0 wrote
Reply to comment by Surur in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
This is much like the paperclip scenario, it's unrealistic and incomplete. Do you really think a human-level AGI or an ASI would just accept one simple goal and operate independently from there? You think it wouldn't be smart enough to clarify things before proceeding, even if it did operate independently? Do you think it wouldn't consider the consequences of extreme actions? Would it not consider options that work within the system rather than against it? And you act like taking over the world is a practical goal that it would come up with, but is it practical to you? If it wants to make an omelette, the most likely options will come up first, like checking for eggs, and if there aren't any then go buy some, because it will understand the world that it inhabits and will know to adhere to laws and rules. If it ignores them, then it will ignore goals as well, and just not do anything.
AsheyDS t1_jdw8ol5 wrote
Reply to What’s missing from the AI conversation by Equal_Position7219
It's not as simple as emotional vs. not-emotional. First, AGI would need to interact with us... The whole point of it is to assist us, so it will have to have an understanding of emotion. And to put it simply, a generalization method relating to emotion would need a frame of reference (or grounding, perhaps) and will at least have to understand the dynamics involved.

Second, AGI itself can have emotion, but the goal of that is key to how it should be implemented. There's emotional data, which could be used in memory, in processing memory, in recall, etc. This would be the minimum necessary, and out of this, it could probably build an associative map anyway. But I think purposefully structuring emotion to coordinate social interactions and everything related to them would help.

The problem with an emotional AGI, or at least the thing people are concerned will become a problem, is emotional impulsivity. We don't want it reacting unfavorably, or judgmentally, or with rage, malice, or contempt. And there's also the concern that it will form emotional connections to things that start to alter its behavior in increasingly unpredictable ways. This is actually a problem for its functioning as well, since we want a well-ordered system that is able to predict its own actions. If it becomes unpredictable to itself, that could degrade its performance. However, eliminating emotion altogether would degrade the quality of social interaction and its understanding of humans and humanity, which is a big downside.

The best option would be to include emotion on some level, as a dynamic framework for interacting with and creating emotional data, and to use it socially, to participate socially, and to gain more overall understanding. But these emotions would just be particular dynamics tied to particular data and inputs, etc. As long as they don't affect certain parts of the overall AGI system that govern actionable outputs (especially reflexive action) or anything that would lead to impulsivity, and as long as other safety functions work as expected, then emotion should be a beneficial thing to include.
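To show the kind of separation I mean (a bare-bones, hypothetical sketch, not an actual architecture): emotional tags live on memories and shape the social side, but the action-selection path never reads them, so they can't produce impulsive behavior.

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    content: str
    emotion: dict = field(default_factory=dict)   # e.g. {"valence": -0.4, "arousal": 0.2}

def social_response(memories: list[Memory]) -> str:
    # Emotional data is allowed here: it informs tone and social understanding.
    avg_valence = sum(m.emotion.get("valence", 0.0) for m in memories) / max(len(memories), 1)
    return "sympathetic tone" if avg_valence < 0 else "neutral tone"

def select_action(goal: str, memories: list[Memory]) -> str:
    # Deliberately reads only the content of memories, never the emotion tags,
    # so actionable outputs stay insulated from emotional impulsivity.
    relevant = [m.content for m in memories if goal in m.content]
    return f"plan using {len(relevant)} relevant memories"

mems = [Memory("user lost their job", {"valence": -0.7}), Memory("user wants help with budget")]
print(social_response(mems))          # tone shaped by emotional data
print(select_action("budget", mems))  # action path ignores it
```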