AsheyDS t1_j4qw68h wrote
Very interesting results so far, because the dominant impression I get from this sub is that a single AGI will take over everything.
I personally think multiple companies or groups will develop different AGIs through different methods, and they'll all be valid in their own ways. I don't think there's any one route to AGI, and even our own brains vary wildly from one another. It would actually be nice to have such variety, since a particular cognitive architecture could then be paired with the individual it would best help, whether because it operates similarly to them or very differently, depending on their needs.
As for the form it will take, that's hard to say. I think it may initially take a small supercomputer to develop, but by the time it's ready for public use, computers will have changed a lot, and maybe we'll have similar specs in a much smaller package. If it's little more than software, it should be able to adapt, and hopefully we'll be able to install it on just about anything that can support it.
AsheyDS t1_j4qgigy wrote
Reply to comment by turnip_burrito in Perhaps ChatGPT is a step back? by PaperCruncher
I wish... but public funding from where? AI enthusiasts? The general public, who barely even know what ChatGPT is, let alone any other progress in the field? I've considered crowd-funding for my own work, but I wouldn't get the amounts I need. Even just looking at this sub, opinions are all over the map, so while I might get a few small donations here and there, it's just not going to amount to anything helpful. It would be nice if there were more alternatives than pairing up with big investors, though.
AsheyDS t1_j4okrwn wrote
Reply to Perhaps ChatGPT is a step back? by PaperCruncher
It doesn't matter if they're a company, research lab, non-profit, or whatever... research and development costs money. And of course their funding is going to come with strings attached.
Also, ChatGPT is, from my understanding, just a fine-tuned version of GPT-3.5 and nothing radically new in and of itself. If they write a paper on it, it'll likely be after it's been through thorough public testing so they can include new insights.
AsheyDS t1_j4hhl3y wrote
Reply to comment by Evilsushione in Does anyone else get the feeling that, once true AGI is achieved, most people will act like it was the unsurprising and inevitable outcome that they expected? by oddlyspecificnumber7
What if self-awareness had limits? We consider ourselves self-aware, but we don't know everything that's going on in our brains at any given moment. If self-awareness were curtailed to just what's functionally necessary, would it be as dangerous as you anticipate?
AsheyDS t1_j48w13t wrote
Reply to comment by Scarlet_pot2 in Don't add "moral bloatware" to GPT-4. by SpinRed
>against the people
Or maybe for the people? If you really think that every single person working on AI/AGI or who could possess it is dangerous and evil and working against you, then why the hell would you trust everyone with it? Or do you just not want anyone to have an advantage over you? Because I've got news for you...
AsheyDS t1_j4840de wrote
Reply to Don't add "moral bloatware" to GPT-4. by SpinRed
Instead of "moral bloatware" how about if it just followed applicable laws? Or do you think it shouldn't have any constraints at all?
AsheyDS t1_j3r91af wrote
Reply to comment by arisalexis in "Community" Prediction for General A.I continues to drop. by 420BigDawg_
Feel? No, not quite. But it's all relative. If one narrows their perspective on what's to come, it could feel like a huge change already. Personally, I think this is just us dipping our toes into the water, so to speak. So yes, "some" acceleration, especially considering how many people think that what we've seen so far is half or most of the way to AGI.
AsheyDS t1_j3r7vte wrote
Reply to comment by coumineol in "Community" Prediction for General A.I continues to drop. by 420BigDawg_
I never said it'd be 10 years, though it could be for all anyone knows. If I said it would be released in 2035 and widely adopted by 2040, I don't think that's unreasonable. But I also believe in a slow takeoff and more practical timelines. Even Google, as seemingly ubiquitous as it is, did not become that way overnight; it took a few years to become widely known and used. We're also dealing with multiple unknowns: how many companies are working on AGI, how far along they are, how long it takes to adequately train them before release, how the rest of the world (not just enthusiasts) accepts or doesn't accept AGI, how many markets will be disrupted and the reaction to that, legal issues along the way, etc. Optimistic timelines don't seem to account for all of that.
Edit: I should also mention that one of the biggest hurdles is getting people to understand and agree on what AGI even is! We could have it for years and many people might not even realize it. Conversely, we have people claiming we have it NOW, or that certain things are AGI when they aren't even close.
AsheyDS t1_j3pbd58 wrote
Reply to comment by 420BigDawg_ in "Community" Prediction for General A.I continues to drop. by 420BigDawg_
Fair enough, but the prediction exists for a reason. Obviously the date will continue to change, so it can only be a measure of that change. So why is it changing? What is it based on? It would make more sense to predict a decade than a specific date or even a specific year.
AsheyDS t1_j3pb1x8 wrote
Reply to comment by imlaggingsobad in "Community" Prediction for General A.I continues to drop. by 420BigDawg_
But what is the crowd? Is this based on a sampling of all types of people, or enthusiasts being enthusiastic?
AsheyDS t1_j3ovmd1 wrote
Reply to comment by [deleted] in "Community" Prediction for General A.I continues to drop. by 420BigDawg_
I just feel like a lot of people are seeing some acceleration and think that this is all of it. What I think is that we'll continue seeing regular advances in tech and AI, and science in general. But the '30s will be the start of AGI, and the '40s will be when it really takes off (in terms of adoption and utilization). Even a guess of before 2035 is, in my estimation, an optimistic projection where everything goes right and there aren't any setbacks or delays. But just saying the '30s is a solid guess.
AsheyDS t1_j3orcp9 wrote
As a prediction, this is utterly meaningless. I'm not even sure if this is useful at all as a gauge of anything.
AsheyDS t1_j3ercb0 wrote
Reply to comment by BellyDancerUrgot in We need more small groups and individuals trying to build AGI by Scarlet_pot2
Oh it's absolutely hard, which makes it worth doing!
AsheyDS t1_j3e7r7c wrote
Reply to comment by BellyDancerUrgot in We need more small groups and individuals trying to build AGI by Scarlet_pot2
>Do share then what your beliefs are.
I do not have a PhD, nor do I have a degree that would satisfy you, so my beliefs are meaningless. :) I didn't even get into this field until after college.
>What exactly is AI without math?
What is natural intelligence without math? Math is just a system of measurement, and one that hasn't yet defined every single thing. I get that we're talking about computers as the substrate, so math makes sense, but it's not the only way to define things or enact a process. That said, I'm not suggesting ditching math; it will be integral to many processes. I'm just saying it doesn't have to be the main focus of work or study centered around cognition. That's what we're ultimately talking about here with AGI, not just mathematical processes. That is, unless you believe ML is the path to AGI, as many do.
AsheyDS t1_j3c78hd wrote
Reply to comment by BellyDancerUrgot in We need more small groups and individuals trying to build AGI by Scarlet_pot2
You're more concerned with the ML branch then, so maybe you think that's what's going to lead to AGI, but not even all ML researchers are convinced of that. There's a lot more to consider, like the rest of the AI field. People need to stop being discouraged by this talk of PhDs and math.
AsheyDS t1_j37j3rd wrote
It takes time, which takes money, but people can at least think about it and study. Some of the comments here make it seem like you have to adhere to current ML methods to get anywhere, but that's not true at all.

The best thing people can do, if they want to get into AGI, is to learn, learn, learn. Not just ML, but AI more broadly, as well as human and animal cognition and behavior, computer hardware and software, etc. A strong foundation in all of these is a good start, along with looking into current and past methods to see what needs attention. I wouldn't get too bogged down in any one aspect of it though. In my opinion, general AI will require a general understanding of a lot of things, and less specialized training. These days, if you have internet access, it only costs time to get pretty far into this stuff. No need to worry about compute/training costs and things like that when you're early into it.

However, I doubt a largely distributed and collaborative approach will be good in the long term without some sort of more substantial commitment and organization. Getting people interested is easy, but getting them committed long-term, so the project has any sort of cohesion, is more difficult. That's where it starts making more sense to turn it into a company or other formal organization rather than a loosely collaborative online effort.
AsheyDS t1_j2s8nwz wrote
Reply to AGI will be a social network by UnionPacifik
You're saying a lot while somehow not saying anything. You mention 'AGI needs this and that' and that 'social, relational networks' are the solution, but you don't explain how or why. Beyond that, most of this is very obvious, and you're giving an empty pitch.
AsheyDS t1_j1zkqqd wrote
Reply to comment by dracount in Concerns about the near future and the current gatekeepers of AI by dracount
>Because they have shareholders best interests at heart. With such power, society should come first, not shareholders.
That's not always the case; it depends on the structure of the company. However, even if it isn't shareholders, say it was funded by crowdfunding... AI devs are still beholden to those who donated, one way or another. Unfortunately, it can't be developed in a financial vacuum. That said, even if there are financial obligations, that doesn't mean AI devs are passively following orders either. Many are altruistic to varying degrees, and I doubt anyone is making an AGI just to make money or gain power. Shareholders perhaps, but not the people actually making it.
I guess if it's a big concern for you, you should try looking for AI/AGI startups that don't have shareholders, determine their motives, and if you agree with their goals then donate to them directly.
AsheyDS t1_j1wu62h wrote
Reply to comment by AI_Enjoyer87 in Considering the recent advancements in AI, is it possible to achieve full-dive in the next 5-10 years? by Burlito2
>If we get AGI or something similar that can facilitate 100 years of scientific research in a year
Considering the scientific method still requires real-world experiments that take real time, how is that supposed to happen?
AsheyDS t1_j1wt9gt wrote
Reply to comment by Kaarssteun in Considering the recent advancements in AI, is it possible to achieve full-dive in the next 5-10 years? by Burlito2
ASI isn't magic. And there will always be real-world limitations.
AsheyDS t1_j1wo56q wrote
>the ones currently creating the AI make me very concerned about the future
Because of a vague fear of the future consequences of AI, or do you believe AI developers are somehow inherently nefarious?
>Even openAI is a for profit company.
I get the anti-capitalist sentiment, but there's nothing necessarily wrong with being for-profit. A for-profit company is easier to both start and maintain than a non-profit, and allows for more avenues of funding. If OpenAI didn't have Microsoft's deep pockets backing them, they'd probably be pushing harder to monetize what they've made. Even if they do have additional monetary goals, AI R&D costs money.
AsheyDS t1_j1wm5cr wrote
Reply to comment by Calm_Bonus_6464 in Concerns about the near future and the current gatekeepers of AI by dracount
>If we have beings infinitely more intelligent than us, there's no possible way we can retain control.
Infinitely more intelligent, sure. But no AI/AGI is going to be infinitely intelligent.
AsheyDS t1_j1t5435 wrote
Judging from these responses, I feel like if an open source AGI were dropped into people's laps, nobody would know what to do with it.
AsheyDS t1_j1a6yjk wrote
Reply to comment by a4mula in A Plea for a Moratorium on the Training of Large Data Sets by a4mula
Wanting peace, cooperation, and responsible use of technology is admirable, but hardly a unique desire. If you figure out how to slow down the progress of humanity (without force) and get everybody to work together, you'll have achieved something more significant than any AI.
It's more likely that progress will continue, and we'll have to adapt or die, just like always.
AsheyDS t1_j4rwgmy wrote
Reply to comment by Bakoro in What do you guys think of this concept- Integrated AI: High Level Brain? by Akimbo333
>Yes, essentially. The data gets synthesized and we have the ability to mix and match, to an extent. We have the ability to recognize patterns and apply concepts across domains.
Amazing how you just casually gloss over some of the most complex and difficult-to-replicate aspects of our cognition. I guess transfer learning is no big deal now?