
sqweeeeeeeeeeeeeeeps t1_izphlmd wrote

You are proving your Swin model is overparameterized for CIFAR. Make an even simpler model than those; you probably won't be able to with off-the-shelf distillation. Doing this just for ImageNet literally doesn't change anything. It's just a different, more complex dataset.

What's your end goal? To come up with a distillation technique to make NNs more efficient and smaller?


MazenAmria OP t1_izpii1s wrote

To examine whether Swin itself is overparameterized or not.


sqweeeeeeeeeeeeeeeps t1_izspv5o wrote

Showing you can create a smaller model with the same performance means Swin is overparameterized for that given task. Give it datasets of varying complexity, not just a single one.


pr0d_ t1_izqjmmk wrote

Yeah, as per my comment, the DeiT paper explored knowledge distillation for Vision Transformers. What you want to do here is probably similar, and the resources needed to prove it are huge, to say the least. Any chance you've discussed this with your advisor?
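For reference, the kind of distillation being discussed boils down to training the smaller student on the teacher's softened output distribution. Below is a minimal sketch of the classic soft-target distillation loss (Hinton-style, not DeiT's hard-label token variant); the function names and the temperature value are illustrative assumptions, not from the thread.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Soft-target distillation loss: cross-entropy between the
    teacher's and student's temperature-softened distributions,
    scaled by T^2 to keep gradient magnitudes comparable across T."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    return float(-(p_teacher * log_p_student).sum(axis=-1).mean() * T * T)
```

In practice this term is mixed with the ordinary cross-entropy on the true labels; if the student can match the teacher's soft outputs at a fraction of the parameter count, that is the overparameterization evidence discussed above.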


MazenAmria OP t1_izrgnco wrote

I remember reading it. I'll read it again and discuss it with him. Thanks.
