Comments
Ortus14 t1_j5niai9 wrote
For those who want the cliff notes.
Here are his poorly thought-out arguments:
- Evolution took a long time to make brains, so teams of programmers must take a long time to make brains. (Right, because those two are totally equivalent processes.)
- Humans are so special and magical we can't possibly be emergent complexity arising from a simple rule set.
- Inventing a new chip could take 30 years. (AI invents viable chip designs in hours.)
- We still need new ideas for AGI, and who knows how long that will take. (Not true, btw: when available computation exists there are always more than enough ideas to take near-maximum advantage of that computation, and available computation grows at a predictable exponential pace. Every 20 years we have more than a million times more computation available.)
But I understand the frustration of any AI programmer who's been in the industry for multiple decades. They're attempting an impossible problem, trying to write AGI with insufficient compute, so of course they're going to get discouraged.
Compute power is all you need to look at to make predictions. The smartest people all over the world will squeeze nearly everything possible out of that compute as it arrives.
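As a quick back-of-the-envelope check (my own illustration, not the commenter's math), a million-fold increase over 20 years implies compute doubling roughly every year, which is faster than the classic Moore's-law cadence of a doubling every two years:

```python
import math

# Claimed growth: a million-fold increase in compute over 20 years.
years = 20
growth_factor = 1_000_000

# Implied doubling time: years / log2(growth_factor).
doubling_time = years / math.log2(growth_factor)
print(f"implied doubling time: {doubling_time:.2f} years")  # ~1 year

# For comparison, classic Moore's-law doubling every 2 years over 20 years:
moores_factor = 2 ** (years / 2)
print(f"20-year growth at 2-year doublings: {moores_factor:.0f}x")  # 1024x
```

So the million-fold figure assumes roughly yearly doublings; at the traditional two-year pace you'd get about a thousand-fold instead.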
civilrunner t1_j5oicxa wrote
Similar people to these geniuses also predicted airplanes would take millions of years to develop, back in 1903...
https://bigthink.com/pessimists-archive/air-space-flight-impossible/
These people are always vastly overconfident for no reason. Maybe if we had to design a computer exactly like a brain it would take a hundred years, but we don't: we can use vastly more power than the brain to match its computational power, and we can use far more physical volume for computation since we aren't restricted to a skull and a birth canal.
We made airplanes by simply throwing more power and human engineering at the problem, and I suspect we'll make AI similarly. If flight is any guide, AGI will happen within 2 decades, far faster than even 2 average lifespans of hunter-gatherers, let alone LEV humans...
Shelfrock77 t1_j5k9676 wrote
Futurology ass post
Vehks t1_j5khtdv wrote
When it comes to predictions-
Generally anyone who speaks so matter-of-fact about things on a timeline greater than 5 years can be safely ignored.
I've mentioned this before, but it's said that future predictions can only be reasonably extrapolated on a timeline of roughly 3 years, with 5 years allowing for some leeway; past that there are simply too many unknowns/variables, and one is just guessing at that point.
What's interesting is this applies to both laymen and experts, as even educated guessing is still guessing.
sticky_symbols t1_j5kqu80 wrote
You can definitely predict some things beyond five years with good accuracy. Look at Moore's Law. That's way more accurate than predictions need to be to be useful. Sure, if nukes were exchanged all bets are off, but outside of that I just disagree with your statement. For instance: will China's gender imbalance cause it trouble in ten years? It almost certainly will.
turnip_burrito t1_j5m4aoq wrote
Yes, it depends on the timescale of the system.
tms102 t1_j5l3zt0 wrote
Sounds like he doesn't think what "a lifetime" is will be redefined in our lifetimes, or our children's children's lifetimes either.
Kaarssteun t1_j5kshqa wrote
Gary often raises valid criticism, the believability of which is completely drained when he posts some weird ass takes, like heavily implying cynicism over a fundamental limitation in LLMs, something he should be aware of.
bitchslayer78 t1_j5kunsd wrote
Before the bashing starts, Booch is actually a titan of CS and definitely deserves respect. That being said, he is also a bit cynical sometimes.
Cryptizard t1_j5l109g wrote
Hot take: people who downvote valid criticisms because they would rather be blindly optimistic than try to find the truth are just as bad as people who reject AI because they are scared of the possibilities. Downvotes should be for bad/low-effort posts, not for things you disagree with. Otherwise you are just creating a useless echo chamber.
HeinrichTheWolf_17 t1_j5kwm50 wrote
It will happen in a dog’s lifetime.
Ashamed-Asparagus-93 t1_j5kwu7f wrote
Might happen in a rat's lifetime
ken81987 t1_j5npeue wrote
The rate things are going... 5-10 years
SWATSgradyBABY t1_j5oiwkl wrote
The thought of AI is very humbling and intimidating for some.
GayHitIer t1_j5kb7r4 wrote
Read the whole thing; pretty unimpressed with his arguments.
But whatever keeps the future away from luddites works I guess.