unholyravenger t1_j4wf4uz wrote
Reply to Watch Boston Dynamics' Atlas humanoid work at a 'construction site' - The Robot Report by Gari_305
I just imagine a construction site of the future with a bunch of robots doing gymnastics the entire time.
unholyravenger t1_j3rwzvs wrote
Reply to comment by Helpful_Opinion2023 in A Singular Trajectory: the Signs of AGI by mjrossman
For a conceptual understanding, start with 3b1b. It's one of the best explanations of the underlying concepts I've seen, and it's really the foundation of everything.
Next, there are two main concepts to understand: how each layer of a NN works, and the overall architecture. A quick list of layer types to get your head around: Linear, also called Multilayer Perceptron (MLP); CNN (Convolutional Neural Network); and then a family of layers that handle sequences like sentences, namely RNNs, LSTMs, and Transformers. All of these are built on the same concepts as the 3b1b videos. If you're more of a math person, this is a great way to conceptualize what each of these layers is doing.
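To make the layer types concrete, here's a minimal sketch (assuming PyTorch, which fits the Python suggestion below; the sizes are arbitrary) that just instantiates each kind of layer and checks the shape it produces:

```python
import torch
import torch.nn as nn

x_vec = torch.randn(8, 32)          # batch of 8 feature vectors
x_img = torch.randn(8, 3, 64, 64)   # batch of 8 RGB images
x_seq = torch.randn(8, 20, 32)      # batch of 8 sequences of length 20

# Linear / MLP layer: a weighted sum plus bias, as in the 3b1b videos.
mlp = nn.Linear(32, 64)
print(mlp(x_vec).shape)             # torch.Size([8, 64])

# Convolutional layer: slides small filters over an image.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)
print(conv(x_img).shape)            # torch.Size([8, 16, 64, 64])

# Sequence layers: an LSTM and a Transformer encoder layer.
lstm = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
out, _ = lstm(x_seq)
print(out.shape)                    # torch.Size([8, 20, 64])

encoder = nn.TransformerEncoderLayer(d_model=32, nhead=4, batch_first=True)
print(encoder(x_seq).shape)         # torch.Size([8, 20, 32])
```

Every one of these is just a building block you stack; the differences are in what structure (vectors, images, sequences) each one is designed to exploit.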
Next, different architectures. Start with simple classifiers, which you should already have a good understanding of. Then check out how GANs work and how you can use two networks to train each other. Then you can move on to the state of the art with diffusion networks. I think this part is a bit easier to understand than how each layer works.
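To illustrate the "two networks training each other" idea, here's a minimal GAN sketch (again assuming PyTorch, and using a toy 1-D Gaussian as the "real" data instead of images, so the whole adversarial loop fits on a screen):

```python
import torch
import torch.nn as nn

# Toy "real" data: samples from a 1-D Gaussian the generator must imitate.
def real_batch(n=64):
    return torch.randn(n, 1) * 1.25 + 4.0

gen = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
disc = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Train the discriminator: real samples -> 1, fake samples -> 0.
    real = real_batch()
    fake = gen(torch.randn(64, 8)).detach()
    loss_d = bce(disc(real), torch.ones(64, 1)) + bce(disc(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Train the generator: try to make the discriminator call its fakes real.
    fake = gen(torch.randn(64, 8))
    loss_g = bce(disc(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(gen(torch.randn(1000, 8)).mean().item())  # should drift toward ~4.0
```

The generator never sees the real data directly; it only gets feedback through the discriminator's gradients, which is the core of the GAN idea.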
All the while, play around in Python with prepackaged ML libraries to apply your knowledge to something concrete. Make a simple classifier, or download and fine-tune a diffusion network on some dataset. Coursera has some really good classes, particularly by Andrew Ng, who is one of the biggest ML educators out there.
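As a starting point for the "simple classifier" project, here's a minimal sketch (assuming PyTorch and synthetic data, so nothing needs to be downloaded; fine-tuning a diffusion network is a bigger job and is usually done with a prepackaged library):

```python
import torch
import torch.nn as nn

# Synthetic dataset: class 0 centered at (-2, -2), class 1 at (+2, +2).
n = 500
x = torch.cat([torch.randn(n, 2) - 2, torch.randn(n, 2) + 2])
y = torch.cat([torch.zeros(n, dtype=torch.long), torch.ones(n, dtype=torch.long)])

model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Standard training loop: forward pass, loss, backward pass, update.
for epoch in range(200):
    logits = model(x)
    loss = loss_fn(logits, y)
    opt.zero_grad(); loss.backward(); opt.step()

acc = (model(x).argmax(dim=1) == y).float().mean()
print(f"training accuracy: {acc:.3f}")  # should approach 1.0 on this easy data
```

Swap the synthetic blobs for a real dataset and a bigger model and you have the same skeleton used for most classification projects.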
Bonus Resources:
ML Street Talk: A podcast talking to people in the industry. Lots of deep concepts here.
Two Minute Papers: The hype channel. Learn about all the new stuff coming out.
Yannic Kilcher: Deep dives into ML papers, explaining how they work at a very detailed level.
Good luck! It's a lot, but no one knows everything, and you need surprisingly little to get started.
unholyravenger t1_j5zv1ne wrote
Reply to Are most of our predictions wrong? by Sasuke_1738
I'll talk about one of the reasons AI is difficult to predict.
We have some ingrained biases about what is easy and hard for an intelligent machine, but the way AI progresses does not follow those intuitions. For instance, in the 90s we were able to make a chess computer that could beat the best chess player in the world. Yet it wasn't until relatively recently that a robot hand could take chess pieces that had been dumped into a trash can and set them up correctly on a chessboard. That's roughly a 20-year gap between AI being the best at playing chess and AI being able to set up a chessboard.
This is not intuitive at all. I remember fairly recently reading that art was going to be one of the last things AI would be able to do; it turns out we were pretty close to good text-to-image generation. It's going to be really hard to predict the path that AI takes: problems we think are hard will turn out to be easy, and problems we think are easy will be hard.