Submitted by DesperateProblem7418 t3_114vev7 in singularity
onil_gova t1_ja0tlsm wrote
Reply to comment by ohmsalad in What are your thoughts on Bittensor? by DesperateProblem7418
I agree. I don't think our current methods of training models, mainly backpropagation, can be distributed across heterogeneous machines with varying latencies; it just seems impractical and unlikely to scale. I can't imagine what would happen if a node goes down. Do you just lose those neurons? Is there a self-correcting mechanism? Are all the other nodes left waiting? We don't currently have methods for training a partial model and scaling it up and down as neurons are added or removed, and no, dropout is not doing this. Models are usually architecturally static from creation to fully trained.
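To make the synchronization worry concrete, here's a minimal sketch in plain Python (toy numbers, hypothetical worker setup, nothing to do with Bittensor's actual protocol) of synchronous data-parallel SGD: every step averages gradients from all workers, so a single dead or straggling node either stalls the update or forces you into some recovery policy that today's standard training loop simply doesn't have.

```python
import random

# Toy synchronous data-parallel SGD on a single 1-D parameter.
# Hypothetical setup: each "worker" holds a shard of data and returns a
# gradient; the update can only proceed once every worker reports in.

def local_gradient(w, data_shard):
    # Gradient of mean squared error (w - x)^2 over the shard.
    return sum(2 * (w - x) for x in data_shard) / len(data_shard)

def sync_step(w, shards, lr=0.1, alive=None):
    # alive[i] is False if worker i has gone down this step.
    alive = alive if alive is not None else [True] * len(shards)
    grads = []
    for i, shard in enumerate(shards):
        if not alive[i]:
            # In a real synchronous all-reduce, everyone blocks here until
            # worker i returns (or a timeout/recovery policy kicks in).
            raise RuntimeError(f"worker {i} is down; step cannot complete")
        grads.append(local_gradient(w, shard))
    return w - lr * sum(grads) / len(grads)   # averaged-gradient update

if __name__ == "__main__":
    random.seed(0)
    shards = [[random.gauss(3.0, 1.0) for _ in range(100)] for _ in range(4)]
    w = 0.0
    for step in range(50):
        w = sync_step(w, shards)
    print(f"converged w ~ {w:.3f}")           # should approach ~3.0

    # Simulate one node dropping out: the whole step fails rather than
    # the model gracefully shrinking or re-routing around the loss.
    try:
        sync_step(w, shards, alive=[True, True, False, True])
    except RuntimeError as e:
        print("lost a node:", e)
```

This is obviously a caricature, but the point stands: the default training loop assumes every participant is present every step, and making it elastic over a heterogeneous, high-latency network is an unsolved engineering problem, not a configuration flag.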
Another thing I'm not clear about: maybe you're not contributing to training a model, but contributing an already-trained model. I don't see how having a collection of trained models would lead to AGI. I also have a lot of doubts, since it seems like we need to solve a lot of problems before something like this is practical or possible.