Submitted by Severe-Improvement32 t3_10ohqyw in deeplearning
so, I have been learning what DL is and how a NN learns to do stuff. From what I understand, training starts with random weights, and the repeated iterations adjust them until at some point those weights are kinda perfect for the given task (plz correct me if I'm wrong)
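Something like this tiny sketch is how I picture it (a totally made-up toy problem, not any actual setup from this post): the weights start random and each iteration nudges them a little until they fit the task.

```python
# Toy sketch of "repeated iterations turn random weights into good ones":
# plain gradient descent on a tiny made-up linear-regression task.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=2)           # weights start out random
X = rng.normal(size=(100, 2))    # toy inputs
y = X @ np.array([3.0, -1.0])    # toy targets; the "true" weights are [3, -1]

lr = 0.1
for step in range(200):
    pred = X @ w
    grad = 2 * X.T @ (pred - y) / len(y)   # gradient of the mean squared error
    w -= lr * grad                         # small correction every iteration

print(w)   # ends up close to [3, -1], i.e. "kinda perfect" for this one task
```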
Ok, so let's take an example of a task like a pathfinding AI. We make a NN and train it to go from point A to point B. Now it is trained, doing nice, and goes to point B perfectly. So here the weights are set to go from point A to point B, right?
What if we put point B somewhere else? How will the AI get perfect weights when the current weights are only perfect for the current point B?
What if we put an obstacle between point A and B? How will the NN set its weights then? Or is it something like a range of weights that works for any version of the task?
IDK if I explained it right, plz comment if you have questions about my question, and answers too 💕
suflaj t1_j6eqh0b wrote
It depends. If it only learned one A to B, we say it is overfit. If you give it enough different A-to-B pairs, it might learn to generalize, and then it will be able to find the path for any A-to-B pair.
If it learned on paths without obstacles, it will not be able to deal with obstacles. That means it will go right through them, or run into them if your environment does not allow an agent to go through them.
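To make the generalization point concrete, here is a made-up toy version (my own sketch, not anything from the original post): instead of baking one fixed B into the training data, point B is part of the network's input and changes every batch, so the weights have to encode "move toward wherever B is" rather than one memorized path. There are no obstacles in this data, so per the caveat above it would still fail on them.

```python
# Hypothetical sketch: a net that sees (current position, goal B) and predicts
# which of 4 moves gets it closer to B. B is randomized every batch, so the
# trained weights work for goals that never appeared during training.
# No obstacles exist in this data, so the net learns nothing about avoiding them.
import torch
import torch.nn as nn

def best_step(pos, goal):
    # Supervision label: the move (0=right, 1=left, 2=up, 3=down) that most
    # reduces distance to the goal (ties / already-at-goal labeled arbitrarily).
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    if abs(dx) >= abs(dy):
        return 0 if dx > 0 else 1
    return 2 if dy > 0 else 3

net = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 4))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    pos = torch.randint(0, 10, (256, 2)).float()
    goal = torch.randint(0, 10, (256, 2)).float()    # a different B every batch
    labels = torch.tensor([best_step(p, g) for p, g in zip(pos, goal)])
    logits = net(torch.cat([pos, goal], dim=1))
    loss = nn.functional.cross_entropy(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The same weights now pick a sensible move for (A, B) pairs never seen during
# training, because the rule "head toward B" generalizes. Training on a single
# fixed B would have let the net just memorize one path, i.e. overfit.
```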