
HeinrichTheWolf_17 t1_j3opxnr wrote

The consensus is the same outside this sub too, albeit not as soon: many experts are moving their timelines to the 2030s.

37

CyberAchilles t1_j3p8fix wrote

Sources?

11

enkae7317 t1_j3pv2yh wrote

Just trust me, bro.

16

AsuhoChinami t1_j3tk1e8 wrote

Maybe you could actually give him time to respond before coming up with some snarky little jab? Heinrich is active in multiple communities and is a trustworthy and reliable person.

6

throwawaydthrowawayd t1_j3vbipb wrote

/u/rationalkat made a cherry-picked list in the big predictions thread:

  • Rob Bensinger (MIRI Berkeley)
    ----> AGI: ~2023-42
  • Ben Goertzel (SingularityNET, OpenCog)
    ----> AGI: ~2026-27
  • Jacob Cannell (Vast.ai, LessWrong author)
    ----> AGI: ~2026-32
  • Richard Sutton (DeepMind Alberta)
    ----> AGI: ~2027-32?
  • Jim Keller (Tenstorrent)
    ----> AGI: ~2027-32?
  • Nathan Helm-Burger (AI alignment researcher; LessWrong author)
    ----> AGI: ~2027-37
  • Geordie Rose (D-Wave, Sanctuary AI)
    ----> AGI: ~2028
  • Cathie Wood (ARKInvest)
    ----> AGI: ~2028-34
  • Aran Komatsuzaki (EleutherAI; was research intern at Google)
    ----> AGI: ~2028-38?
  • Shane Legg (DeepMind co-founder and chief scientist)
    ----> AGI: ~2028-40
  • Ray Kurzweil (Google)
    ----> AGI: <2029
  • Elon Musk (Tesla, SpaceX)
    ----> AGI: <2029
  • Brent Oster (Orbai)
    ----> AGI: ~2029
  • Vernor Vinge (Mathematician, computer scientist, sci-fi author)
    ----> AGI: <2030
  • John Carmack (Keen Technologies)
    ----> AGI: ~2030
  • Connor Leahy (EleutherAI, Conjecture)
    ----> AGI: ~2030
  • Matthew Griffin (Futurist, 311 Institute)
    ----> AGI: ~2030
  • Louis Rosenberg (Unanimous AI)
    ----> AGI: ~2030
  • Ash Jafari (Ex-Nvidia-Analyst, Futurist)
    ----> AGI: ~2030
  • Tony Czarnecki (Managing Partner of Sustensis)
    ----> AGI: ~2030
  • Ross Nordby (AI researcher; LessWrong author)
    ----> AGI: ~2030
  • Ilya Sutskever (OpenAI)
    ----> AGI: ~2030-35?
  • Hans Moravec (Carnegie Mellon University)
    ----> AGI: ~2030-40
  • Jürgen Schmidhuber (NNAISENSE)
    ----> AGI: ~2030-47?
  • Eric Schmidt (Ex-Google Chairman)
    ----> AGI: ~2031-41
  • Sam Altman (OpenAI)
    ----> AGI: <2032?
  • Charles Simon (CEO of Future AI)
    ----> AGI: <2032
  • Anders Sandberg (Future of Humanity Institute at the University of Oxford)
    ----> AGI: ~2032?
  • Matt Welsh (Ex-Google engineering director)
    ----> AGI: ~2032?
  • Siméon Campos (Founder CEffisciences & SaferAI)
    ----> AGI: ~2032
  • Yann LeCun (Meta)
    ----> AGI: ~2032-37
  • Chamath Palihapitiya (CEO of Social Capital)
    ----> AGI: ~2032-37
  • Demis Hassabis (DeepMind)
    ----> AGI: ~2032-42
  • Robert Miles (Youtube channel about AI Safety)
    ----> AGI: ~2032-42
  • OpenAI
    ----> AGI: <2035
  • Jie Tang (Prof. at Tsinghua University, Wu-Dao 2 Leader)
    ----> AGI: ~2035

4

tatleoat t1_j3r8a33 wrote

All the experts I've seen say 2029, like Altman and Carmack. Musk has also said 2029, if that's an opinion you care about.

5

joecunningham85 t1_j3romoc wrote

"All the experts"

Altman is a CEO with a vested interest in hyping up AI progress for his business.

Musk said we would be on Mars and have self-driving cars taking us everywhere by now lol.

8

tatleoat t1_j3rpi30 wrote

I don't see how saying "[thing] will come in 7 years" influences anything as a prediction; it's too far away to generate any tangible hype with the public. If he were going to lie to manipulate a product's value, I'd think he'd make his predictions something more near-term, if we're indeed cynically manipulating the market. Not to mention, none of that about Sam Altman changes the fact that he's an expert whose credibility rests on his correctness; it's in his interest to be right. You can't just claim biased interests here, it's more nuanced than that. And none of it changes the fact that they're all saying the same thing: 2029. That's pretty consistent, and I'm inclined to believe it.

1

maskedpaki t1_j3tcteo wrote

You have it all backwards.

Generating long-term hype is perfect for a tech startup, for two reasons:

  1. It inflates the company's valuation based on long-term potential. OpenAI only makes about $60M in revenue; a standard 10x revenue multiple would value it at roughly $600M at most, but it's valued at ~$30 billion on the hope that revenues will reach billions in the future (see the sketch after this list).

  2. You don't have to keep your long-term promises. If he makes a promise about GPT-4, people will call him out when it fails; but say "AGI by 2035" and chances are no one will care when 2035 arrives and he doesn't deliver, since the whole field will be different by then.
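To make the multiples explicit, here's a minimal sketch of the arithmetic above. The $60M revenue and $30B valuation figures are the ones claimed in this comment, not verified numbers.

```python
# Back-of-the-envelope check of the valuation multiples cited above.
# Figures are the ones claimed in the comment (unverified assumptions).
revenue = 60e6          # claimed annual revenue: $60M
valuation = 30e9        # claimed valuation: $30B
standard_multiple = 10  # a "standard" 10x revenue multiple

conventional_valuation = revenue * standard_multiple  # ~$600M
implied_multiple = valuation / revenue                # ~500x

print(f"Valuation at 10x revenue: ${conventional_valuation:,.0f}")
print(f"Implied multiple at a $30B valuation: {implied_multiple:.0f}x")
```

The gap between those two numbers (a ~500x implied multiple versus the "standard" 10x) is the point being made: the valuation prices in hoped-for future revenue, not current revenue.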
1