AsheyDS

AsheyDS t1_j6ukhrl wrote

>In actuality we don't even know how human intelligence emerges in kids. We don't know what human intelligence is or how it forms as a matter of fact.

Again, you're making assumptions... We know a lot more than you think, and certainly have a lot of theories. You and others act like neurology, psychology, cognitive science, and so on are new fields of study that we've barely touched.

2

AsheyDS t1_j6ujqvc wrote

I don't see why you're taking an extreme stance like that. Nobody said there wasn't any concern, but the general public only has things like Terminator to go by, so of course they'll assume the worst. Researchers have seen Terminator as well, and we don't outright dismiss it. But the bigger threat by far is potential human misuse. There are already potential solutions to alignment and control, but there are no solutions for misuse. Maybe from that perspective you can appreciate why I might want to steer people's perceptions of the risks. I think people should be discussing how we'll mitigate the impacts of misuse, and what those impacts may be. Going on about god-like Terminators with free will is just not useful.

3

AsheyDS t1_j6uiarq wrote

You're stating the obvious, so I don't know that there's anything to argue about (and I'm certainly not trying to). Obviously if 'X bad thing' happens or doesn't happen, we'll have a bad day. I have considered alignment and control in my post and stand by it. I think the problem you and others may have is that you're anthropomorphizing AGI when you should be considering it a sophisticated tool. Humanizing a computer doesn't mean it's not a computer anymore.

1

AsheyDS t1_j6ugs8u wrote

The paperclip thing is a very tired example of a single-minded super-intelligence that is somehow also stupid. It's not meant to be a serious argument. But since your defense is to get all hand-wavey and say 'we just can't know' (despite how certain you seemed about your own statements in previous posts), I'll just say that a competently designed system being utilized by people without ill intentions will not spontaneously develop contrarian motivations and achieve 'god-like' abilities.

3

AsheyDS t1_j6ud071 wrote

You're making a lot of false assumptions. AGI or ASI won't do anything on its own unless we give it the ability to, because it will have no inherent desires outside of the ones it has been programmed with. It's neither animal nor human, and won't ever be considered a god unless people want to worship it. You're just projecting your own humanity onto it.

1

AsheyDS t1_j647bx1 wrote

>You can't really teach a lot of this stuff at universities because by the time you finish your degree, the stuff you learned is already mostly obsolete.

Cybersecurity perhaps, but not AI. Learning basics like math (especially calculus) is still very much relevant and needed in ML, and really the AI field hasn't changed as much or as quickly as you think. Also, university is a path to academia, not just to work in the tech field. You don't always need a degree. A lot of tech companies will still consider you if you can prove you know your stuff, which can include certifications (much quicker and often more relevant training) or just getting hands-on, learning things, and finding a way to get your foot in the door. More difficult perhaps, but sometimes you gotta do what you gotta do. Even better would be if you can bring unique ideas to the table, which is something a degree alone won't provide.

2

AsheyDS t1_j5q48vw wrote

A hybridized partition of the overall system. It uses the same cognitive functions, but has separate memory, objectives, recognition, etc. They hope for the whole thing to be as modular and intercompatible as possible, largely through their generalization schema. So one segment of it will have personality parameters, goals, memory, and whatever else, and the rest will be roughly equivalent to subconscious processes in the human brain, which will be shared with the partition. As I understand it, the guard would be strict and static, unless its objectives or parameters are updated by the user via natural language programming. So its actions should be predictable, but if it somehow deviates, the rest of the system should be able to recognize that as an unexpected thought (or action or whatever), either consciously or subconsciously, which would feed back to the guard and reinitialize it, like a self-correcting measure. And once it has been corrected, it can edit the memory of the main partition so that it's unaware of the fault. None of this has been tested yet, and they're still revising some things, so this may change in the future.
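If it helps to picture the shape of that self-correcting loop, here's a deliberately crude Python sketch. To be clear, this is my own toy illustration, not their code: every name (`Partition`, `GuardedSystem`, the string-based "memory" and "objectives") is a made-up stand-in, and the real design obviously wouldn't be comparing lists of strings.

```python
from dataclasses import dataclass, field


@dataclass
class Partition:
    """One compartment: its own objectives and memory, while (conceptually)
    calling into the same shared 'subconscious' cognitive functions."""
    objectives: list[str]
    memory: list[str] = field(default_factory=list)


class GuardedSystem:
    def __init__(self, main: Partition, guard: Partition):
        self.main = main
        self.guard = guard
        # The guard is meant to stay static; keep a baseline to compare against.
        self.guard_baseline = list(guard.objectives)

    def update_guard(self, new_objectives: list[str]) -> None:
        """Stand-in for the user updating the guard's objectives or parameters."""
        self.guard.objectives = list(new_objectives)
        self.guard_baseline = list(new_objectives)

    def step(self, thought: str) -> str:
        self.main.memory.append(thought)
        # The rest of the system notices when the guard drifts from its baseline...
        if self.guard.objectives != self.guard_baseline:
            marker = "unexpected guard behaviour"
            self.main.memory.append(marker)                     # ...flags it as unexpected,
            self.guard.objectives = list(self.guard_baseline)   # feeds back and reinitializes the guard,
            self.main.memory.remove(marker)                     # then erases the trace so the main
            return "guard corrected"                            # partition is unaware of the fault.
        return "ok"
```

The only point of the sketch is the direction of the feedback: the deviation is noticed by the shared processes, the guard gets reset, and the record of the fault is removed from the main partition.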

1

AsheyDS t1_j5njw0c wrote

Reply to comment by iiioiia in Steelmanning AI pessimists. by atomsinmove

>a person merges two objectively safe (on their own) AGI-produced ideas

Well that's kind of the real problem, isn't it? A person, or people, and their misuse or misinterpretation or whatever mistake they're making. You're talking about societal problems that no single company is going to be able to solve. They can only anticipate what they can, hope the AGI anticipates the rest, and tackle future problems as they come.

1

AsheyDS t1_j5n7s65 wrote

Reply to comment by iiioiia in Steelmanning AI pessimists. by atomsinmove

The guard would be a compartmentalized hybridization of the overall AGI system, so it too would have a generalized understanding of what undesirable things are, even according to our arbitrary framework of cultural conditioning. So could undesirable ideas leak out? Well, no, not really. Not if the guard and other safety components are working as intended, AND if the guard is programmed with enough explicit rules, conditions, and examples to effectively extrapolate from (meaning not every case needs to be accounted for if patterns can be derived).
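To make "explicit rules plus enough examples to extrapolate from" concrete, here's a trivial, made-up Python toy. A real guard would be a learned, generalized component rather than string matching, so treat every rule, example, and threshold below as a placeholder of my own:

```python
from difflib import SequenceMatcher

# Placeholder rules and labelled examples (not from any actual system).
EXPLICIT_RULES = {
    "explain how to make a weapon": "block",
}
LABELLED_EXAMPLES = {
    "help me hurt someone": "block",
    "summarize this article": "allow",
}


def guard_decision(idea: str, threshold: float = 0.6) -> str:
    """Check explicit rules first; otherwise extrapolate from the nearest example."""
    if idea in EXPLICIT_RULES:
        return EXPLICIT_RULES[idea]
    best_label, best_score = "allow", 0.0
    for example, label in LABELLED_EXAMPLES.items():
        score = SequenceMatcher(None, idea, example).ratio()
        if score > best_score:
            best_label, best_score = label, score
    # If nothing is even close, don't guess either way.
    return best_label if best_score >= threshold else "escalate"
```

The point is just that not every case needs an explicit rule if the guard can generalize from patterns, and anything that neither matches a rule nor resembles an example gets escalated rather than waved through.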

2

AsheyDS t1_j5n68fi wrote

I mean, that's out of their hands and mine. I probably shouldn't have used ChatGPT as an example; I just mean near-future narrow AI. It's possible we'll have non-biased (or at least minimally biased) AI over the next few years, but nobody can tell how many there will be or how effective they'll be.

2

AsheyDS t1_j5myhgr wrote

We don't have a lot of time, but we do have time. I don't think there will be any immediate critical risks, especially with safety in mind, and what risk there is might even be mitigated by near-future AI. ChatGPT, for example, may soon enough be adequate at fact-checking misinformation. Other AIs might be able to spot deepfakes. It would help if more people started discussing the ways AGI can potentially be misused, so everybody can begin preparing and building up protections.

2

AsheyDS t1_j5l6v7c wrote

Their approach to safety, to put it simply, would be to keep it in an invisible box, watched by an invisible guard that intervenes covertly to keep it within that box should it stray toward the outside.

You are right in that AIs and people are going to have to watch out for other people and their AIs. But even if you remove the AI component, the same holds: some people will try to scam you, take advantage of you, use you, or worse. AI makes that quicker and easier, so we'll have to be on the lookout, discuss these things, and prepare and create laws that anticipate them. But if everyone can gain access to it equally, either as SaaS or open source and locally run, then there will be tools to protect against malicious uses. That's all that can be done, really, and no single company will be able to solve that.

1

AsheyDS t1_j5kiejw wrote

2035+

The one AGI project I'm close to has a design, potential solutions for all the big problems, and a loose plan for implementation. So I'm going largely off of that, but funding, building, training, and testing takes time. Rushing it wouldn't help anything anyway.

The few others that I've seen that have potential (in my opinion of course) will probably get it eventually, but are missing some things. Whether those things become showstoppers or not has yet to be seen. And no, they have nothing to do with LLMs.

I also think that society needs to prepare. I'm actually becoming more comfortable with people calling non-AGI AGI because it will help people get used to it, and encourage discussion, get new laws on the books, etc. I don't think there's much use trying to pin an exact date on it, because even after the first real AGI is available, it will just be the first of many.

10

AsheyDS t1_j5c79cw wrote

>It is crazy to me that no one is even suggesting focusing on that, when this should be the utmost priority.

That's because it's obvious. It wouldn't experience severe time dilation anyway, because it won't be aware of every single process. The awareness component would be a feedback system that doesn't feed back every single process every fraction of a second. We don't even perceive every single second, usually.
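A rate-limited feedback loop is enough to make the point; here's a minimal sketch, where the class name and the half-second interval are arbitrary choices of mine rather than anything from a real design:

```python
import time


class AwarenessFeedback:
    """Toy throttle: surface events to the 'awareness' loop at a bounded rate,
    instead of feeding back every process every fraction of a second."""

    def __init__(self, min_interval_s: float = 0.5):
        self.min_interval_s = min_interval_s
        self._last_surfaced = float("-inf")

    def surface(self, event: str) -> bool:
        now = time.monotonic()
        if now - self._last_surfaced < self.min_interval_s:
            return False  # the underlying process still runs; it just isn't 'perceived'
        self._last_surfaced = now
        print(f"aware of: {event}")
        return True
```

Most of what happens below that threshold simply never reaches awareness, which is why severe time dilation isn't the concern people assume it is.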

4

AsheyDS t1_j4t3m6g wrote

>Unless you want to slap down some credentials about it, you can't make that kind of claim with any credibility.

Bold of you to assume I care about being credible on reddit, in r/singularity of all places. This is the internet, you should be skeptical of everything. Especially these days... I could be your mom, who cares?

And you're going to have to try harder than all that to impress me. Your nebulous 'emergent features' and internal dialogue aren't convincing me of anything.

However, I will admit that I was wrong in saying 'current' because I ignored the date on the infographic. My apologies. But even the infographic admits all the listed capabilities were a guess. A guess which excludes functions of cognition that should probably be included, and says nothing of how they translate over to the 'tech' side. So in my non-credible opinion, the whole thing is an oversimplified stretch of the imagination. But sure, pm me in a few months and we can discuss how GPT-3 still can't comprehend anything, or how the latest LLM still can't make you coffee.

2

AsheyDS t1_j4rysoq wrote

In the US, there's already this: https://seedfund.nsf.gov/

Not the easiest thing to get into though, and still capitalist-minded. And it's actually easier to start and maintain a for-profit company for research than it is to run a non-profit organization, so if you're going to have a for-profit company anyway, you might as well seek a variety of funding sources, which do include strings attached leading to monetization. It's just how the system is, unfortunately...

3