
How Can We Perform Early Stopping With Train_on_batch?

I manually run the epochs in a loop, with the mini-batches further nested inside that loop. At each mini-batch, I need to call train_on_batch to enable the training of a customized model.

Solution 1:

In practice, 'early stopping' is largely done via: (1) train for X epochs, (2) save the model each time it achieves a new best performance, (3) select the best model. "Best performance" is defined as achieving the highest (e.g. accuracy) or lowest (e.g. loss) validation metric - example script below:

best_val_loss = 999  # arbitrary init - should be high if 'best' is low, and vice versa
num_epochs = 5
epoch = 0
while epoch < num_epochs:
    model.train_on_batch(x_train, y_train)  # get x, y somewhere in the loop
    val_loss = model.evaluate(x_val, y_val)  # scalar loss if no extra metrics are compiled

    if val_loss < best_val_loss:
        best_val_loss = val_loss  # update the running best
        model.save(best_model_path)  # OR model.save_weights()
        print("Best model w/ val loss {} saved to {}".format(val_loss, best_model_path))
    # ...
    epoch += 1
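The saved checkpoint can later be restored for inference or further training - a minimal sketch, assuming the tf.keras API and the best_model_path used above:

from tensorflow.keras.models import load_model

# Restore the checkpoint with the best (lowest) validation loss
best_model = load_model(best_model_path)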

See saving Keras models. If you'd rather early-stop directly, define some metric - i.e., a condition - that will end the training loop. For example,

while True:
    loss = model.train_on_batch(...)  # fill in batch data as above
    if loss < 0.02:  # stop once training loss falls below a threshold
        break
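In practice the two ideas are often combined: stop once the validation loss has not improved for a number of epochs ("patience"), while keeping the best checkpoint. A minimal sketch of that variant - patience and epochs_without_improvement are illustrative names, not from the original answer:

patience = 3  # epochs to wait for a new best before stopping
best_val_loss = float('inf')
epochs_without_improvement = 0

for epoch in range(num_epochs):
    model.train_on_batch(x_train, y_train)
    val_loss = model.evaluate(x_val, y_val)

    if val_loss < best_val_loss:
        best_val_loss = val_loss
        epochs_without_improvement = 0
        model.save(best_model_path)  # keep the best checkpoint
    else:
        epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            print("No improvement for {} epochs - stopping early".format(patience))
            break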
