Submitted by AutoModerator t3_10cn8pw in MachineLearning
Iljaaaa t1_j4uub0z wrote
I have an autoencoder input of 100x21. The 21 columns are PC scores and the 100 rows are observations. The importance of the columns decreases as the column number increases: the first column explains the most data variance, the last column the least. To reconstruct the original data from the PCA scores, the first columns need to be as accurate as possible.
I have tried searching for a way to adjust the weights (or something else) in the autoencoder layers to account for this column importance, but I have not found anything.
In other words, I want errors in the first few (e.g. 5) columns to be punished more harshly than errors in the last few (e.g. 5) columns.
I would be grateful if someone could point me in the right direction!
TastyOs t1_j5129q7 wrote
I assume you're doing something like minimizing the MSE between inputs and reconstructions. Instead of calculating one MSE over all 21 columns, split it into two parts: an MSE for the important columns and an MSE for the unimportant columns. Then weight the important MSE higher than the unimportant one.
So something like:

    loss = 0.9 * MSE_important + 0.1 * MSE_unimportant
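A minimal sketch of that idea, assuming a PyTorch setup; the function name, the cutoff of 5 "important" columns, and the 0.9/0.1 weights are just illustrative choices, not something from the original post:

    import torch
    import torch.nn.functional as F

    def weighted_recon_loss(recon, target, n_important=5,
                            w_important=0.9, w_unimportant=0.1):
        """MSE that penalizes the first `n_important` columns more heavily.

        recon, target: tensors of shape (batch, 21) holding the
        reconstructed and original PC scores.
        """
        mse_important = F.mse_loss(recon[:, :n_important],
                                   target[:, :n_important])
        mse_unimportant = F.mse_loss(recon[:, n_important:],
                                     target[:, n_important:])
        return w_important * mse_important + w_unimportant * mse_unimportant

    # Example usage inside a training step (model and x are assumed):
    # loss = weighted_recon_loss(model(x), x)
    # loss.backward()

You could also generalize this to a per-column weight vector (e.g. weights that decay with column index) and compute a single weighted MSE, instead of a hard split into two groups.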