Submitted by diepala t3_11hdgtn in deeplearning
I have a regression problem with tabular data, and I want to train a deep learning model for it. I have more experience with deep learning for image classification than with regression on tabular data.
I am asking for general guidelines (empirical) on how to design the neural network architecture for this problem. I know this depends a lot on the particular problem, but I would like to know what type of things usually work. Some particular questions I have are:
- should layer sizes always increase? (e.g. [128, 256, 512, 1024])
- Should they decrease at the end before the final result? (e.g. [128, 256, 512, 1024, 256, 64])
- Should I repeat the layer size before increasing? (e.g. [128, 128, 256, 256, 512, 512])
- What activation functions do you usually use? I assume ReLU or LeakyReLU will probably be best for regression.
- Do you use dropout?
- Does anybody have experience with residual layers for regression?
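To make the questions above concrete, here is a minimal NumPy sketch of one residual MLP block of the kind being asked about: two dense layers of equal width, ReLU activations, inverted dropout, and a skip connection. All names, shapes, and the dropout rate are illustrative assumptions, not a recommendation for any particular problem.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def dense(x, w, b):
    return x @ w + b

def residual_block(x, w1, b1, w2, b2, drop_p=0.1, training=False):
    """One residual MLP block: dense -> ReLU -> dropout -> dense -> add skip.

    Both dense layers keep the same width as the input so the skip
    connection (h + x) adds cleanly without a projection.
    """
    h = relu(dense(x, w1, b1))
    if training:
        # Inverted dropout: zero units and rescale the survivors so the
        # expected activation matches inference-time behavior.
        mask = (rng.random(h.shape) >= drop_p) / (1.0 - drop_p)
        h = h * mask
    h = dense(h, w2, b2)
    return relu(h + x)  # skip connection, then activation

# Hypothetical shapes: a batch of 4 samples with hidden width 8.
d = 8
x = rng.standard_normal((4, d))
w1 = rng.standard_normal((d, d)) * 0.1
b1 = np.zeros(d)
w2 = rng.standard_normal((d, d)) * 0.1
b2 = np.zeros(d)

out = residual_block(x, w1, b1, w2, b2)
print(out.shape)
```

In a real network you would stack several such blocks (this is where the "repeat the layer size" pattern comes from, since a residual skip needs matching widths) and end with a single linear output unit with no activation for regression.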
big_ol_tender t1_jatvx64 wrote
If you have tabular data just use xgboost, forget the nn