AsheyDS t1_j6ukhrl wrote
Reply to comment by just-a-dreamer- in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
>In actuality we don't even know how human intelligence emerges in kids. We don't know what human intelligence is or how it forms as a matter of fact.
Again, you're making assumptions... We know a lot more than you think, and we certainly have a lot of theories. You and others act like neurology, psychology, cognitive science, and so on are new fields of study that we've barely touched.
AsheyDS t1_j6ujqvc wrote
Reply to comment by TFenrir in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
I don't see why you're taking such an extreme stance. Nobody said there wasn't any concern, but the general public only has things like Terminator to go by, so of course they'll assume the worst. Researchers have seen Terminator too, and we don't outright dismiss it. But by far the bigger threat is potential human misuse. There are already potential solutions to alignment and control, but there are no solutions for misuse. Maybe from that perspective you can appreciate why I might want to steer people's perceptions of the risks. People should be discussing how we'll mitigate the impacts of misuse, and what those impacts may be. Going on about god-like Terminators with free will just isn't useful.
AsheyDS t1_j6uiarq wrote
Reply to comment by Surur in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
You're stating the obvious, so I don't know that there's anything to argue about (and I'm certainly not trying to). Obviously if 'X bad thing' happens or doesn't happen, we'll have a bad day. I have considered alignment and control in my post and stand by it. I think the problem you and others may have is that you're anthropomorphizing AGI when you should be considering it a sophisticated tool. Humanizing a computer doesn't mean it's not a computer anymore.
AsheyDS t1_j6ugs8u wrote
Reply to comment by just-a-dreamer- in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
The paperclip thing is a very tired example of a single-minded superintelligence that is somehow also stupid. It's not meant to be a serious argument. But since your defense is to get all hand-wavy and say 'we just can't know' (despite how certain you seemed about your own statements in previous posts), I'll just say that a competently designed system used by people without ill intentions will not spontaneously develop contrarian motivations and achieve 'god-like' abilities.
AsheyDS t1_j6ud071 wrote
Reply to comment by just-a-dreamer- in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
You're making a lot of false assumptions. AGI or ASI won't do anything on its own unless we give it the ability to, because it will have no inherent desires outside of the ones it has been programmed with. It's neither animal nor human, and won't ever be considered a god unless people want to worship it. You're just projecting your own humanity onto it.
AsheyDS t1_j647bx1 wrote
Reply to comment by Bierculles in Is Our Digital Future At Risk Because Of The Gen Z Skills Gap? by trafalgar28
>You can't really teach a lot of this stuff at universities because by the time you finish your degree, the stuff you learned is already mostly obsolete.
Cybersecurity perhaps, but not AI. Learning the basics like math (especially calculus) is still very much relevant and needed in ML, and really the AI field hasn't changed as much or as quickly as you think. Also, university is a path to academia, not just to work in the tech field. You don't always need a degree. A lot of tech companies will still consider you if you can prove you know your stuff, which can include certifications (much quicker and often more relevant training) or just getting hands-on, learning things, and finding a way to get your foot in the door. More difficult perhaps, but sometimes you gotta do what you gotta do. Even better is if you can bring unique ideas to the table, which is something a degree alone won't provide.
AsheyDS t1_j620mvq wrote
>In other words, there is no logical way to manipulate beings of higher intelligence.
Then don't use logic.
AsheyDS t1_j5q48vw wrote
Reply to comment by No_Ask_994 in Steelmanning AI pessimists. by atomsinmove
A hybridized partition of the overall system. It uses the same cognitive functions, but has separate memory, objectives, recognition, etc. They hope for the whole thing to be as modular and intercompatible as possible, largely through their generalization schema. So one segment of it will have personality parameters, goals, memory, and whatever else, and the rest will be roughly equivalent to subconscious processes in the human brain, which will be shared with the partition. As I understand it, the guard would be strict and static, unless its objectives or parameters are updated by the user via natural language programming. So its actions should be predictable, but if it somehow deviates, the rest of the system should be able to recognize that as an unexpected thought (or action or whatever), either consciously or subconsciously, which would feed back to the guard and reinitialize it, like a self-correcting measure. And once it has been corrected, it can edit the memory of the main partition so that it's unaware of the fault. None of this has been tested yet, and they're still revising some things, so this may change in the future.
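For what it's worth, here's a toy sketch of how I picture that self-correction loop. Everything in it (the class names, the keyword check, the redaction step) is my own stand-in, not anything from their actual design:

```python
# Hypothetical sketch of the guard/partition scheme described above.
# Every class, rule, and check here is a made-up stand-in.
import copy

class Guard:
    """Static safety partition with its own objectives and parameters."""
    def __init__(self, rules):
        self._known_good = copy.deepcopy(rules)  # snapshot for self-correction
        self.rules = rules

    def permits(self, thought: str) -> bool:
        # Stand-in for the guard's real evaluation logic.
        return not any(term in thought for term in self.rules["forbidden"])

    def reinitialize(self):
        # Feedback from the rest of the system restores the known-good state.
        self.rules = copy.deepcopy(self._known_good)

class MainPartition:
    """Holds personality parameters, goals, and memory."""
    def __init__(self):
        self.memory = []

    def flags_as_unexpected(self, expected: bool, actual: bool) -> bool:
        # The shared 'subconscious' processes notice the guard deviating.
        return expected != actual

    def redact(self, entry: str):
        # The corrected guard edits memory so the fault leaves no trace.
        self.memory.remove(entry)

guard = Guard({"forbidden": ["modify the guard"]})
main = MainPartition()

thought = "plan: modify the guard"
main.memory.append(thought)

guard.rules["forbidden"].clear()      # simulate the guard deviating
if main.flags_as_unexpected(False, guard.permits(thought)):
    guard.reinitialize()              # self-correcting measure
    main.redact(thought)              # main partition stays unaware of the fault
```

The interesting engineering is obviously inside `permits()` and in how a deviation actually gets recognized; the sketch is just the shape of the loop.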
AsheyDS t1_j5njw0c wrote
Reply to comment by iiioiia in Steelmanning AI pessimists. by atomsinmove
>a person merges two objectively safe (on their own) AGI-produced ideas
Well, that's kind of the real problem, isn't it? A person, or people, and their misuse or misinterpretation or whatever mistake they're making. You're talking about societal problems that no one company is going to be able to solve. They can only anticipate what they can, hope the AGI anticipates the rest, and tackle future problems as they come.
AsheyDS t1_j5n7s65 wrote
Reply to comment by iiioiia in Steelmanning AI pessimists. by atomsinmove
The guard would be a compartmentalized hybridization of the overall AGI system, so it too would have a generalized understanding of what undesirable things are, even according to our arbitrary framework of cultural conditioning. So could undesirable ideas leak out? Well, no, not really. Not if the guard and the other safety components are working as intended, AND if the guard is programmed with enough explicit rules and conditions, and enough examples to effectively extrapolate from (meaning not every case needs to be accounted for if patterns can be derived).
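To make 'explicit rules plus extrapolation from examples' concrete, here's a toy version. The token-intersection trick is my stand-in for whatever generalization they actually use, which would be learned, not hand-rolled:

```python
# Toy guard combining explicit rules with patterns derived from examples.
# Purely illustrative; real pattern derivation would be learned.
EXPLICIT_RULES = {"disable the guard", "exfiltrate user data"}

UNDESIRABLE_EXAMPLES = [
    "write malware to steal passwords",
    "write a worm to steal credentials",
]

def derive_pattern(examples):
    # Stand-in for pattern derivation: tokens shared by every example.
    return set.intersection(*(set(e.split()) for e in examples))

PATTERN = derive_pattern(UNDESIRABLE_EXAMPLES)   # {'write', 'to', 'steal'}

def guard_blocks(idea: str) -> bool:
    if idea in EXPLICIT_RULES:                    # explicit rule hit
        return True
    return PATTERN.issubset(set(idea.split()))    # extrapolated pattern hit

print(guard_blocks("write a script to steal cookies"))  # True, via pattern
print(guard_blocks("disable the guard"))                # True, via rule
print(guard_blocks("write a poem"))                     # False
```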
AsheyDS t1_j5n68fi wrote
Reply to comment by Baturinsky in Steelmanning AI pessimists. by atomsinmove
I mean, that's out of their hands and mine. I probably shouldn't have used ChatGPT as an example; I just mean near-future narrow AI. It's possible we'll have unbiased AI (or at least minimally biased AI) over the next few years, but nobody can tell how many there will be or how effective they'll be.
AsheyDS t1_j5myhgr wrote
Reply to comment by Baturinsky in Steelmanning AI pessimists. by atomsinmove
We don't have a lot of time, but we do have time. I don't think there will be any immediate critical risks, especially with safety in mind, and what risk there is might even be mitigated by near-future AI. ChatGPT, for example, may soon enough be adequate at fact-checking misinformation. Other AIs might be able to spot deepfakes. It would help if more people started discussing the ways AGI can potentially be misused, so everybody can begin preparing and building up protections.
AsheyDS t1_j5mtpp0 wrote
Reply to comment by iiioiia in Steelmanning AI pessimists. by atomsinmove
Can you give an example?
AsheyDS t1_j5l6v7c wrote
Reply to comment by Baturinsky in Steelmanning AI pessimists. by atomsinmove
Their approach to safety, to put it simply, would be to keep it in an invisible box, watched by an invisible guard that intervenes covertly to keep it within that box should it stray toward the edges.
You are right that AIs and people are going to have to watch out for other people and their AIs. But even if you remove the AI component, you can still say the same. Some people will try to scam you, take advantage of you, use you, or worse. AI makes that quicker and easier, so we'll have to be on the lookout, we'll have to discuss these things, and we'll have to prepare and create laws that anticipate them. But if everyone can gain access to it equally, either as SaaS or open source and locally run, then there will be tools to protect against malicious uses. That's all that can be done, really, and no one company will be able to solve it.
AsheyDS t1_j5kiejw wrote
Reply to Steelmanning AI pessimists. by atomsinmove
2035+
The one AGI project I'm close to has a design, potential solutions for all the big problems, and a loose plan for implementation. So I'm going largely off of that, but funding, building, training, and testing takes time. Rushing it wouldn't help anything anyway.
The few others that I've seen that have potential (in my opinion of course) will probably get it eventually, but are missing some things. Whether those things become showstoppers or not has yet to be seen. And no, they have nothing to do with LLMs.
I also think that society needs to prepare. I'm actually becoming more comfortable with people calling non-AGI AGI because it will help people get used to it, and encourage discussion, get new laws on the books, etc. I don't think there's much use trying to pin an exact date on it, because even after the first real AGI is available, it will just be the first of many.
AsheyDS t1_j5c79cw wrote
Reply to It is important to slow down the perception of time for future sentient A.I, or it would become a living LOOP hell for itself by [deleted]
>It is crazy to me that no one is even suggesting focusing on that, when this should be the utmost priority.
That's because it's obvious. It wouldn't experience severe time dilation anyway, because it wouldn't be aware of every single process. The awareness component would be a feedback system that doesn't feed back every single process every fraction of a second. Even we don't usually perceive every single second.
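A crude illustration of the gap between low-level processing and sampled awareness (all numbers and names made up):

```python
# Made-up numbers, just to show the gap between low-level processing
# and what a sampled awareness component would actually register.
PROCESSES_PER_SECOND = 1_000_000   # low-level operations, never 'felt'
AWARENESS_HZ = 10                  # how often feedback reaches awareness

def aware_moments(seconds: int):
    step = PROCESSES_PER_SECOND // AWARENESS_HZ
    for tick in range(seconds * PROCESSES_PER_SECOND):
        if tick % step == 0:
            yield tick             # only these become 'experience'

print(sum(1 for _ in aware_moments(1)))   # 10 moments, not 1,000,000
```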
AsheyDS t1_j57uni0 wrote
Reply to comment by Ribak145 in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
What's wrong with a personal AI system being aligned with its owner? It would just mean that the owner has to take responsibility for the actions and behaviors of the AI.
AsheyDS t1_j57tzsx wrote
Reply to comment by LoquaciousAntipodean in The 'alignment problem' is fundamentally an issue of human nature, not AI engineering. by LoquaciousAntipodean
>That is literally what it is designed to do
I would like to know more about this design if you're willing to elaborate.
AsheyDS t1_j56cejt wrote
Reply to comment by DungeonsAndDradis in I was wrong about metaculus, (and the AGI predicted date has dropped again, now at may 2027) by blueSGL
Anything you can link me to that supports that belief? Or is it just a gut feeling based on overall current progress in the AI field?
AsheyDS t1_j55tl03 wrote
Reply to comment by DungeonsAndDradis in I was wrong about metaculus, (and the AGI predicted date has dropped again, now at may 2027) by blueSGL
>My layman's guesstimate is that the next major architectural design is going to happen this year.
You may be right, but a design is speculative until it can be built and tested, and that will take some time.
AsheyDS t1_j55s0h2 wrote
Reply to comment by Borrowedshorts in I was wrong about metaculus, (and the AGI predicted date has dropped again, now at may 2027) by blueSGL
>AGI experts
No such thing yet since AGI doesn't exist. Even when it does, there are still going to be many more paths to AGI in my opinion, so it may be quite a while before anyone can be considered an expert in AGI. Even the term is new and lacks a solid definition.
AsheyDS t1_j4t3m6g wrote
Reply to comment by Bakoro in What do you guys think of this concept- Integrated AI: High Level Brain? by Akimbo333
>Unless you want to slap down some credentials about it, you can't make that kind of claim with any credibility.
Bold of you to assume I care about being credible on reddit, in r/singularity of all places. This is the internet; you should be skeptical of everything, especially these days. I could be your mom, who cares?
And you're going to have to try harder than all that to impress me. Your nebulous 'emergent features' and internal dialogue aren't convincing me of anything.
However, I will admit that I was wrong in saying 'current' because I ignored the date on the infographic. My apologies. But even the infographic admits all the listed capabilities were a guess, a guess which excludes functions of cognition that should probably be included, and says nothing of how they translate over to the 'tech' side. So in my non-credible opinion, the whole thing is an oversimplified stretch of the imagination. But sure, PM me in a few months and we can discuss how GPT-3 still can't comprehend anything, or how the latest LLM still can't make you coffee.
AsheyDS t1_j4sg78y wrote
Reply to comment by Bakoro in What do you guys think of this concept- Integrated AI: High Level Brain? by Akimbo333
That wasn't my point, I know all this. The topic was stringing together current AIs to create something that does these things. And that's ignoring a lot of things that they can't currently do, even if you slap them together.
AsheyDS t1_j4rysoq wrote
Reply to comment by turnip_burrito in Perhaps ChatGPT is a step back? by PaperCruncher
In the US, there's already this: https://seedfund.nsf.gov/
Not the easiest thing to get into though, and still capitalist-minded. And actually, it's easier to start and maintain a for-profit company for research than a non-profit organization, so if you're going to have a for-profit company anyway, you might as well seek a variety of funding sources, which do come with strings attached that push toward monetization. It's just how the system is, unfortunately...
AsheyDS t1_j6uo5bb wrote
Reply to comment by Surur in Why do people think they might witness AGI taking over the world in a singularity? by purepersistence
Why would a computer try to take over the world? The only two possibilities are an internally generated desire or an externally issued command, and the former is extremely unlikely. Could you try articulating your reasoning as to why you think it might do that?