AsheyDS t1_j4rwgmy wrote

>Yes, essentially. The data gets synthesized and we have the ability to mix and match, to an extent. We have the ability to recognize patterns and apply concepts across domains.

Amazing how you just casually gloss over some of the most complex and difficult-to-replicate aspects of our cognition. I guess transfer learning is no big deal now?

1

AsheyDS t1_j4qw68h wrote

Very interesting results so far, because the dominant impression I get from this sub is that a single AGI will take over everything.

I personally think multiple companies or groups will develop different AGI through different methods, and they'll all be valid in their own ways. I don't think there's any one route to AGI, and even our own brains vary wildly from one another. It would actually be nice if we had such variety; a particular cognitive architecture could then be paired with the individual it would best help, whether because the two operate similarly or very differently, depending on that person's needs.

As for the form it will take, that's hard to tell. I think at first it may take a small supercomputer to develop it, but by the time it's ready for public use, computers will have changed a lot, and maybe we'll have similar specs in a much smaller package. If it's little more than software, it should be able to adapt, and hopefully we'll be able to install it on just about anything that can support it.

2

AsheyDS t1_j4qgigy wrote

I wish... but public funding from where? AI enthusiasts? The general public at large, who barely even know what ChatGPT is, let alone any other progress in the field? I've considered crowd-funding for my own work, but I wouldn't get the amounts I need. Even just looking at this sub, opinions are all over the map, so while I might get a few small donations here and there, it's just not going to amount to anything helpful. It would be nice if there were more alternatives than pairing up with big investors, though.

2

AsheyDS t1_j4okrwn wrote

It doesn't matter if they're a company, research lab, non-profit, or whatever... research and development costs money. And of course their funding is going to come with strings attached.

Also, ChatGPT is, from my understanding, just a fine-tuned version of GPT-3 (GPT-3.5?) and nothing radically new in and of itself. If they write a paper on it, it'll likely be after it's been through thorough public testing so they can include new insights.

9

AsheyDS t1_j4hhl3y wrote

What if self-awareness had limits? We consider ourselves self-aware, but we don't know everything that's going on in our brains at any given moment. If self-awareness were curtailed so it was only functional, would it be as dangerous as you anticipate?

1

AsheyDS t1_j48w13t wrote

>against the people

Or maybe for the people? If you really think that every single person working on AI/AGI or who could possess it is dangerous and evil and working against you, then why the hell would you trust everyone with it? Or do you just not want anyone to have an advantage over you? Because I've got news for you...

4

AsheyDS t1_j3r91af wrote

Feel? No, not quite. But it's all relative. If one narrows their perspective on what's to come, it could feel like a huge change already. Personally, I think this is just us dipping our toes into the water, so to speak. So yes, "some" acceleration, especially when you consider how many people think that what we've seen so far is half or most of the way to AGI.

1

AsheyDS t1_j3r7vte wrote

I never said it'd be 10 years, though it could be for all anyone knows. If I said it would be released in 2035 and widely adopted by 2040, I don't think that's unreasonable. But I also believe in a slow takeoff and more practical timelines. Even Google, as seemingly ubiquitous as it is, did not become that way overnight; it took a few years to become widely known and used. We're also dealing with multiple unknowns: how many companies are working on AGI, how far along they are, how long it takes to adequately train them before release, how the rest of the world (not just enthusiasts) accepts or doesn't accept AGI, how many markets will be disrupted and the reaction to that, legal issues along the way, and so on. Optimistic timelines don't seem to account for everything.

Edit: I should also mention that one of the biggest hurdles is even getting people to understand and agree on what AGI is! We could have it for years and many people might not even realize it. Conversely, we have people claiming we have it NOW, or that certain things are AGI when they aren't even close.

2

AsheyDS t1_j3ovmd1 wrote

I just feel like a lot of people are seeing some acceleration and think that this is all of it. What I think is that we'll continue seeing regular advances in tech, AI, and science in general. But the '30s will be the start of AGI, and the '40s will be when it really takes off (in terms of adoption and utilization). Even a guess of before 2035 is, in my estimation, an optimistic projection where everything goes right and there aren't any setbacks or delays. But just saying the '30s is a solid guess.

0

AsheyDS t1_j3e7r7c wrote

>Do share then what your beliefs are.

I do not have a PhD, nor do I have a degree that would satisfy you, so my beliefs are meaningless. :) I didn't even get into this field until after college.

>What exactly is AI without math?

What is natural intelligence without math? Math is just a system of measurement, and one that as of yet hasn't defined every single thing. I get that we're talking about computers as the substrate, so math makes sense, but it's not the only way to define things or enact a process. That said, I'm not suggesting ditching math; it will be integral to many processes. I'm just saying it doesn't have to be the main focus of work or study centered around cognition. That's what we're ultimately talking about here with AGI, not just mathematical processes. That is, unless you believe ML is the path to AGI, as many do.

1

AsheyDS t1_j37j3rd wrote

It takes time, which takes money, but people can at least think about it and study. Some of the comments here make it seem like you have to adhere to current ML methods to get anywhere, but that's not true at all. The best thing people can do, if they want to get into AGI, is to learn, learn, learn. Not just ML, but AI more broadly, plus human and animal cognition and behavior, computer hardware and software, etc. A strong foundation in all of these is a good start, as is looking into current and past methods to see what needs attention. I wouldn't get too bogged down in any one aspect of it, though. In my opinion, general AI will require a general understanding of a lot of things, and less specialized training.

These days, if you have internet access, it only costs time to get pretty far into this stuff. No need to worry about compute/training costs and things like that when you're early into it. However, I doubt a largely distributed and collaborative approach will be good in the long term without some sort of more substantial commitment and organization. Getting people interested is easy, but getting them committed long-term, so the project has any cohesion, is more difficult, and that's where it starts making more sense to turn it into a company or other formal organization rather than a loosely collaborative online effort.

6

AsheyDS t1_j2s8nwz wrote

You're saying a lot while somehow not saying anything. You mention 'AGI needs this and that' and that 'social, relational networks' are the solution, but you don't explain how or why. Otherwise most of this is very obvious, and you're giving an empty pitch.

7

AsheyDS t1_j1zkqqd wrote

>Because they have shareholders best interests at heart. With such power, society should come first, not shareholders.

That's not always the case. It depends on the structure of the company. However, even if it isn't shareholders, say it was funded by crowdsourcing... AI devs are still beholden to those that donated, one way or another. Unfortunately, it can't be developed in a financial vacuum. That said, even if there are financial obligations, that doesn't mean AI devs are passively following orders either. Many are altruistic to varying degrees, and I doubt anyone is making an AGI just to make money or have power. Shareholders perhaps, but not the people actually making it.

I guess if it's a big concern for you, you should try looking for AI/AGI startups that don't have shareholders, determine their motives, and if you agree with their goals then donate to them directly.

2

AsheyDS t1_j1wo56q wrote

>the ones currently creating the AI make me very concerned about the future

Because of a vague fear of the future consequences of AI, or do you believe AI developers are somehow inherently nefarious?

>Even openAI is a for profit company.

I get the anti-capitalist bias, but there's nothing necessarily wrong with being for-profit. A for-profit company is easier to both start and maintain than a non-profit, and it allows for more avenues of funding. If OpenAI didn't have Microsoft's deep pockets backing them, they'd probably be pushing harder to monetize what they've made. Even if they do have additional monetary goals, AI R&D costs money.

3

AsheyDS t1_j1a6yjk wrote

Wanting peace, cooperation, and responsible use of technology is admirable, but hardly a unique desire. If you figure out how to slow down the progress of humanity (without force) and get everybody to work together, you'll have achieved something more significant than any AI.

It's more likely that progress will continue, and we'll have to adapt or die, just like always.

7