Submitted by Constant-Cranberry29 t3_ywu5zb in deeplearning
Constant-Cranberry29 OP t1_iwo6vm1 wrote
Reply to comment by Hamster729 in How to normalize data which contain positive and negative numbers into 0 and 1 by Constant-Cranberry29
    from tensorflow.keras.callbacks import LearningRateScheduler

    initial_learning_rate = 0.02
    epochs = 50
    decay = initial_learning_rate / epochs

    # Called once per epoch; `lr` is the optimizer's current learning rate.
    def lr_time_based_decay(epoch, lr):
        return lr * 1 / (1 + decay * epoch)

    history = model.fit(
        x_train,
        y_train,
        epochs=50,
        validation_split=0.2,
        batch_size=64,
        callbacks=[LearningRateScheduler(lr_time_based_decay, verbose=2)],
    )
Hamster729 t1_iwo99fy wrote
That's a very odd-looking time-decay rule, and I'm almost certain it does not do what you expect it to do: LearningRateScheduler passes in the optimizer's *current* learning rate, so your 1 / (1 + decay * epoch) factor is applied on top of every previous epoch's decay and compounds, instead of reproducing the classic lr = initial_learning_rate / (1 + decay * epoch) schedule.
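A quick simulation makes the difference visible (a sketch outside Keras, assuming only the values from your snippet):

    # Reproduce what LearningRateScheduler does with the rule above:
    # each epoch, the returned lr becomes the input lr of the next epoch.
    initial_learning_rate = 0.02
    epochs = 50
    decay = initial_learning_rate / epochs  # 4e-4

    lr = initial_learning_rate
    for epoch in range(epochs):
        lr = lr * 1 / (1 + decay * epoch)  # compounds epoch over epoch

    intended = initial_learning_rate / (1 + decay * (epochs - 1))
    print(lr)        # ~0.0123 -- the compounded decay you actually get
    print(intended)  # ~0.0196 -- classic time-based decay at the last epoch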
Try:
    def lr_time_based_decay(epoch, lr):
        return lr * 0.95  # plain exponential decay: cut the lr by 5% each epoch
(also see my suggestion from the edit to my previous post)
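For reference, `return lr * 0.95` gives the lr after n epochs as 0.02 * 0.95**n, about 0.0015 by epoch 50. If you'd rather skip the callback entirely, TF2's built-in schedules can express the same decay; a sketch, where steps_per_epoch is a placeholder (the thread doesn't say how many batches an epoch has):

    import tensorflow as tf

    # Placeholder assumption: batches per epoch isn't given in the thread.
    steps_per_epoch = 100

    # Built-in equivalent of multiplying the lr by 0.95 once per epoch.
    schedule = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=0.02,
        decay_steps=steps_per_epoch,  # one decay step per epoch
        decay_rate=0.95,
        staircase=True,               # hold the lr constant within an epoch
    )
    optimizer = tf.keras.optimizers.Adam(learning_rate=schedule)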
Constant-Cranberry29 OP t1_iwoc176 wrote
Still the same. Even if I drop the abs, drop the normalization, and change the last layer to model.add(Dense(1, activation=None, use_bias=False)), it doesn't work.