I’m playing with the Reuters example dataset and it runs fine (my model is trained). I read about how to save a model, so that I can load it later and use it again. But how do I use this saved model to predict new text? Do I use model.predict()?
Do I have to prepare this text in a special way?
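For context, this is roughly how I save and reload the model, based on what I read in the docs (the file names are just placeholders I picked, and I'm not sure this is the recommended way):

    from keras.models import model_from_json

    # save the architecture and the weights separately
    json_string = model.to_json()
    with open('reuters_model.json', 'w') as f:
        f.write(json_string)
    model.save_weights('reuters_weights.h5')

    # ...later, in a fresh session: rebuild the model and load the weights back
    with open('reuters_model.json') as f:
        model = model_from_json(f.read())
    model.load_weights('reuters_weights.h5')
    # recompile if I want to keep training or evaluate again;
    # I'm not sure whether it's required just for predict()
    model.compile(loss='categorical_crossentropy', optimizer='adam')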
I tried it with:

    import numpy as np
    import keras.preprocessing.text

    text = np.array(['this is just some random, stupid text'])
    print(text.shape)

    tk = keras.preprocessing.text.Tokenizer(
        nb_words=2000,
        filters=keras.preprocessing.text.base_filter(),
        lower=True,
        split=" ")

    tk.fit_on_texts(text)
    pred = tk.texts_to_sequences(text)
    print(pred)

    model.predict(pred)
But I always get:

    (1L,)
    [[2, 4, 1, 6, 5, 7, 3]]
    ---------------------------------------------------------------------------
    AttributeError                            Traceback (most recent call last)
    <ipython-input-83-42d744d811fb> in <module>()
          7 print(pred)
          8
    ----> 9 model.predict(pred)

    C:\Users\bkey\Anaconda2\lib\site-packages\keras\models.pyc in predict(self, x, batch_size, verbose)
        457         if self.model is None:
        458             self.build()
    --> 459         return self.model.predict(x, batch_size=batch_size, verbose=verbose)
        460
        461     def predict_on_batch(self, x):

    C:\Users\bkey\Anaconda2\lib\site-packages\keras\engine\training.pyc in predict(self, x, batch_size, verbose)
       1132         x = standardize_input_data(x, self.input_names,
       1133                                    self.internal_input_shapes,
    -> 1134                                    check_batch_dim=False)
       1135         if self.stateful:
       1136             if x[0].shape[0] > batch_size and x[0].shape[0] % batch_size != 0:

    C:\Users\bkey\Anaconda2\lib\site-packages\keras\engine\training.pyc in standardize_input_data(data, names, shapes, check_batch_dim, exception_prefix)
         79     for i in range(len(names)):
         80         array = arrays[i]
    ---> 81         if len(array.shape) == 1:
         82             array = np.expand_dims(array, 1)
         83             arrays[i] = array

    AttributeError: 'list' object has no attribute 'shape'
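My guess is that the immediate AttributeError comes from passing a plain Python list to model.predict(), which apparently expects a NumPy array, and that the new text also needs the same vectorization the training data got (the Reuters example feeds the network a fixed-size matrix, not raw sequences). Something like the following is what I would try next, but I'm not sure it's right, especially since the tokenizer here is fitted on the new text instead of the original training texts:

    import numpy as np
    import keras.preprocessing.text

    text = np.array(['this is just some random, stupid text'])

    # same tokenizer setup as above; ideally this should be the tokenizer
    # that was fitted on the *training* texts, so the word indices match
    tk = keras.preprocessing.text.Tokenizer(
        nb_words=2000,
        filters=keras.preprocessing.text.base_filter(),
        lower=True,
        split=" ")
    tk.fit_on_texts(text)

    sequences = tk.texts_to_sequences(text)
    # turn the variable-length sequences into a (n_samples, nb_words) NumPy array,
    # like the binary bag-of-words input the Reuters MLP example trains on
    x_new = tk.sequences_to_matrix(sequences, mode='binary')
    print(x_new.shape)

    pred = model.predict(x_new)
    print(pred)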
Do you have any recommendations as to how to make predictions with a trained model?