For this question, the primary author gives a simple answer:
With `fit_generator`, you can use a generator for the validation data as well. In general, I would recommend using `fit_generator`, but using `train_on_batch` works fine too. These methods exist only for the sake of convenience in different use cases; there is no "correct" method.
`train_on_batch` allows you to expressly update weights based on a collection of samples you provide, without regard to any fixed batch size. You would use it when that is exactly what you want: to train on an explicit collection of samples. You could use that approach to maintain your own iteration over multiple batches of a traditional training set, but letting `fit` or `fit_generator` iterate over batches for you is likely simpler.
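For concreteness, a hand-rolled loop over `train_on_batch` might look like the sketch below, reusing the toy `model` from the previous example; the data and batch size are again made up. This is exactly the bookkeeping that `fit` would otherwise do for you:

```python
import numpy as np

# Toy in-memory training set; shapes match the model sketched above.
x_train = np.random.random((1000, 8))
y_train = np.random.randint(0, 2, size=(1000, 1))

batch_size = 32
for epoch in range(5):
    for start in range(0, len(x_train), batch_size):
        # You choose exactly which samples form each gradient update.
        loss = model.train_on_batch(x_train[start:start + batch_size],
                                    y_train[start:start + batch_size])
```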
One case where it might be nice to use `train_on_batch` is for updating a pre-trained model on a single new batch of samples. Suppose you have already trained and deployed a model, and some time later you receive a new set of training samples it has never seen. You could use `train_on_batch` to directly update the existing model on those samples alone. Other methods can do this too, but `train_on_batch` makes the intent especially explicit in this case.
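A sketch of that semi-online update, with a hypothetical saved-model path and stand-in data:

```python
import numpy as np
from keras.models import load_model

# Hypothetical: a model trained and saved earlier, plus a batch of
# samples that arrived after deployment.
model = load_model("deployed_model.h5")
x_new = np.random.random((64, 8))              # stand-in for the new inputs
y_new = np.random.randint(0, 2, size=(64, 1))  # stand-in for the new targets

# One explicit weight update on exactly these samples, nothing else.
loss = model.train_on_batch(x_new, y_new)
model.save("deployed_model.h5")
```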
Apart from special cases like these (either where you have some pedagogical reason to maintain your own cursor across training batches, or for some kind of semi-online update on a special batch), it is probably better to just always use `fit` (for data that fits in memory) or `fit_generator` (for streaming batches of data from a generator).
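For completeness, the default path recommended here is the one-liner below, reusing the toy model and arrays from the earlier sketches; the hyperparameters are arbitrary:

```python
# The common case: in-memory arrays go straight to fit, which handles
# shuffling, batching, and the validation split itself.
model.fit(x_train, y_train,
          batch_size=32,
          epochs=5,
          validation_split=0.2)
```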