
keefemotif t1_j3rsn2g wrote

It's 18. The point I'm making is that we have a cognitive bias toward 10-20 year horizons when making estimates, and we also have a difficult time understanding nonlinearity.

The big SingInst hypothesis was that there would be a "foom" moment where progress goes super-exponential. From that point of view, you would have to start talking about a probability distribution over when that nonlinearity happens.
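To make "talking about a probability distribution" concrete, here is a minimal sketch. The log-normal prior and its parameters are purely illustrative assumptions on my part, not anything anyone has actually estimated:

```python
import numpy as np

# Purely illustrative: put a log-normal prior over "years until the
# nonlinearity" and read off quantiles, instead of a single point estimate.
rng = np.random.default_rng(0)
years_until_foom = rng.lognormal(mean=np.log(25), sigma=0.6, size=100_000)

print(np.percentile(years_until_foom, [10, 50, 90]))
# roughly [12, 25, 54] years -- the spread is the whole point
```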

I prefer stacked sigmoid curves, where progress goes exponential for a while, hits some limit (think Moore's law around the 8nm node), and then a new curve takes over from roughly where the old one flattened out.
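Roughly what I mean by stacked sigmoids, as a sketch; the dates, rates, and ceilings here are made up for illustration:

```python
import numpy as np

def logistic(t, t0, rate, ceiling):
    """One S-curve: looks exponential early, saturates at `ceiling`."""
    return ceiling / (1.0 + np.exp(-rate * (t - t0)))

t = np.linspace(1970, 2040, 500)
# Hypothetical parameters: each new paradigm starts a fresh S-curve
# roughly where the previous one flattens out.
capability = (
    logistic(t, t0=1990, rate=0.3, ceiling=1.0)     # paradigm 1
    + logistic(t, t0=2010, rate=0.3, ceiling=10.0)  # paradigm 2
    + logistic(t, t0=2030, rate=0.3, ceiling=100.0) # paradigm 3
)
# Summed on a log scale this looks like steady exponential growth,
# even though every individual curve hits a hard limit.
```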

Training giant neural nets as language models is a very important development, but imho AlphaGo was more interesting technically, with its combination of value and policy networks, versus billions of parameters in some multilayer net.
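As a very rough sketch of that value/policy combination: the toy stand-in networks and the one-step lookahead below are my own simplification, not DeepMind's code (AlphaGo actually blended the value net with rollouts inside MCTS):

```python
import numpy as np

N_FEATURES, N_MOVES = 64, 19 * 19
rng = np.random.default_rng(0)
W_policy = rng.normal(size=(N_FEATURES, N_MOVES))
w_value = rng.normal(size=N_FEATURES)

def policy_net(state):
    """Toy stand-in: maps board features to a distribution over moves."""
    logits = state @ W_policy
    p = np.exp(logits - logits.max())
    return p / p.sum()

def value_net(state):
    """Toy stand-in: maps board features to a win-probability estimate."""
    return 1.0 / (1.0 + np.exp(-(state @ w_value)))

def apply_move(state, move):
    """Toy transition; a real engine would update the board here."""
    nxt = state.copy()
    nxt[move % N_FEATURES] += 1.0
    return nxt

def pick_move(state, top_k=5):
    # Policy network prunes the search: only its top-k moves get examined.
    priors = policy_net(state)
    candidates = np.argsort(priors)[-top_k:]
    # Value network scores each resulting position; picking the best is
    # the one-step simplification of what MCTS does over a full tree.
    scores = [value_net(apply_move(state, m)) for m in candidates]
    return candidates[int(np.argmax(scores))]

move = pick_move(rng.normal(size=N_FEATURES))
```

The design point is the division of labor: the policy network keeps the branching factor manageable, and the value network replaces exhaustive search to the end of the game.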
