@amansrivastava17
Created November 12, 2019 07:16
import json
import re

import tensorflow as tf
from tensorflow.keras import models


def test(sentence, model_path, word_index_path):
    # Load the trained classifier and restore the tokenizer vocabulary saved at training time.
    classifier = models.load_model(model_path)
    tokenizer = tf.keras.preprocessing.text.Tokenizer(filters='.,:?{} ')
    with open(word_index_path, 'r') as f:
        tokenizer.word_index = json.loads(f.read())
    # Strip punctuation, then vectorize the whole sentence as a single bag-of-words row.
    cleaned = re.sub(r'[.,:?{}]', ' ', sentence)
    tokenized_message = tokenizer.texts_to_matrix([cleaned])
    # Print the predicted score for each class index.
    for index, score in enumerate(classifier.predict(tokenized_message)[0]):
        print(index, score)
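# A minimal usage sketch: the example sentence and the word-index JSON path are
# hypothetical placeholders for the artifacts produced when the model was trained.
test("is this order eligible for a refund ?",
     model_path="models/models.h5",
     word_index_path="models/word_index.json")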