Viewing a single comment thread.

Kaarssteun t1_j0s73ps wrote

100% a software problem. There is no guarantee, but the trends don't lie. Chances of AGI never arriving are... astronomically small.

30

thEiAoLoGy t1_j0sih7c wrote

Chances of it arriving any time soon? Unclear. None of the current paths look like they'll lead there, to me.

7

ghostfuckbuddy t1_j0tfite wrote

If we magically stumbled across the right algorithm, then it would only be a software problem. But if we need to test a bunch of different approaches before we get there, then hardware becomes the limiting factor in progress.

2

civilrunner t1_j0ubwai wrote

I'd say it's still both a hardware and a software problem. We are nowhere close to building a computational circuit that replicates the human brain, which uses complex 3D computational structures in which connections can form between neurons that are far apart, linking circuits in completely different ways than we can with lithography-built computers. It's possible we'll achieve AGI through the raw power of ever-more-miniaturized lithography-built compute, but it's a completely different structure from the brain, so it's not a guarantee.

The difference between a true 3D compute architecture and a 2D one, or even stacked 2D, is pretty enormous (it's like comparing x² vs x³).
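A rough way to see that x² vs x³ point (purely illustrative, not from the thread): count how many units sit within a fixed wiring radius of a given unit in a planar layout versus a volumetric one.

```python
# Illustrative sketch only: number of grid points within wiring radius r of a unit.
# In a planar (2D) layout this grows roughly like pi*r^2; in a volumetric (3D)
# layout it grows roughly like (4/3)*pi*r^3 - the x^2 vs x^3 comparison above.
def reachable_2d(r):
    return sum(1 for x in range(-r, r + 1)
                 for y in range(-r, r + 1)
                 if x * x + y * y <= r * r)

def reachable_3d(r):
    return sum(1 for x in range(-r, r + 1)
                 for y in range(-r, r + 1)
                 for z in range(-r, r + 1)
                 if x * x + y * y + z * z <= r * r)

for r in (4, 8, 16):
    print(f"r={r}: 2D={reachable_2d(r)}, 3D={reachable_3d(r)}")
```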

It's clearly a software problem as well, though I'm curious whether you need plasticity and massive connectivity between far-reaching compute sections to achieve AGI-level intelligence for things like creativity, similar to a human brain.

1

Kaarssteun t1_j0ucilx wrote

It's not our goal to replicate a human brain; that's what making children is for. We are trying to replicate the brain's intellectual intelligence in a way where enslaving it would still be ethical.

4

civilrunner t1_j0udhrm wrote

I agree, though it may not be nearly as efficient as a human brain at being intelligent. In my opinion, all you need to do is look at the gains from GPU vs CPU AI training to see how much scaling up local chip compute does for AI, and from there how much better a 3D human brain may be compared to even a wafer-scale stacked 2D chip. Then acknowledge that the human brain doesn't just compute with 1s and 0s; chemical signals offer more options than just on and off, as we learned recently.
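As a minimal sketch of the GPU vs CPU gap being referenced (assuming PyTorch is installed and a CUDA GPU is available; the exact speedup depends entirely on the hardware):

```python
# Illustrative micro-benchmark: time a large matrix multiply on CPU vs GPU.
import time
import torch

def time_matmul(device, n=4096, reps=10):
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    _ = a @ b  # warm-up so one-time initialization doesn't skew the timing
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(reps):
        a @ b
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / reps

cpu_t = time_matmul("cpu")
print(f"CPU: {cpu_t:.4f}s per matmul")
if torch.cuda.is_available():
    gpu_t = time_matmul("cuda")
    print(f"GPU: {gpu_t:.4f}s per matmul (~{cpu_t / gpu_t:.0f}x faster)")
```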

There are advantages to a silicone electronic circuit as well of course, the main one being speed since electricity flows far far faster than chemical signals.

I am also personally unsure how "enslaving" a verified general intelligence would be ethical, regardless of its computational architecture. It's far better to ensure alignment so that it's not "enslaved" but rather wants to collaborate to achieve the same goals.

1

Kaarssteun t1_j0udpg9 wrote

Right, enslaving is subjective; but we want to make sure it enhances our lives rather than destroying them.

1

civilrunner t1_j0udzlb wrote

Sure, I just wouldn't call it "enslaving" them, seeing as that generally means forcing them to work against their will, which seems unlikely to be feasible if we build an AGI or an ASI. "Well aligned" is a far better term and, in my view, the only thing that could work.

2

hydraofwar t1_j0up12g wrote

That's true, but replicating the brain's intellectual intelligence may require hardware made specifically for it. If I'm not mistaken, Google's PaLM model was trained on its own latest-generation hardware (TPU pods) built specifically for that kind of workload.

1

dasnihil t1_j0unptg wrote

yep, i almost feel bad for the nerds going into neuromorphic computing in the hopes of mimicking a brain-like computer when they could do all of this in software. it's always a software/theoretical problem, and once you solve it, you can implement it in hardware, which is a whole other engineering challenge. also imo we need some new/better algorithms for solving problems that look "hard" to solve. hope most people focus on this.

1