Submitted by Tiny-Mud6713 in MachineLearning
FakeOuter wrote:
- try triplet loss (see the first sketch after this list)
- swap Flatten with a GlobalMaxPooling2D layer; it will cut trainable params 49x in your case, since 7x7 feature maps collapse to a single value per channel. Fewer params -> lower chance of overfitting. Maybe place some normalization layer right after the max pool (see the second sketch below)
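The first suggestion, triplet loss, trains the network to embed same-class images close together and different-class images far apart. A minimal sketch in TensorFlow/Keras; the `margin` value and the function itself are illustrative, not from the thread:

```python
import tensorflow as tf

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Squared L2 distance from anchor to positive and to negative embeddings.
    pos_dist = tf.reduce_sum(tf.square(anchor - positive), axis=-1)
    neg_dist = tf.reduce_sum(tf.square(anchor - negative), axis=-1)
    # Penalize triplets where the negative isn't at least `margin` farther
    # from the anchor than the positive is.
    return tf.reduce_mean(tf.maximum(pos_dist - neg_dist + margin, 0.0))
```

In practice, `tfa.losses.TripletSemiHardLoss` from TensorFlow Addons implements this with online semi-hard triplet mining, which avoids building anchor/positive/negative triples by hand.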
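The second suggestion replaces Flatten with global max pooling. A minimal sketch of the change; the conv base and input size below are placeholders, since the thread doesn't show the original architecture, and BatchNormalization stands in for "some normalization layer":

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),  # placeholder input size
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(512, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    # Flatten() here would feed H*W*512 values into the Dense layer below;
    # GlobalMaxPooling2D keeps one max per channel (512 values), so the
    # Dense layer's weight count shrinks by the H*W factor (49x for 7x7 maps).
    layers.GlobalMaxPooling2D(),
    layers.BatchNormalization(),  # normalization right after the pooling
    layers.Dense(128, activation="relu"),
])
model.summary()
```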
Tiny-Mud6713 OP wrote:
Will try that, thanks.