Last active: September 23, 2020 18:50
Revisions
MathewAlexander revised this gist
Sep 23, 2020: 1 changed file with 1 addition and 1 deletion.
@@ -1,7 +1,7 @@
 tokenizer = T5Tokenizer.from_pretrained('t5-base')
 model =T5ForConditionalGeneration.from_pretrained('path_to_trained_model', return_dict=True)

-def generate(text,modedl,tokenizer):
+def generate(text,model,tokenizer):
     model.eval()
     input_ids = tokenizer.encode("WebNLG:{} </s>".format(text), return_tensors="pt")
MathewAlexander revised this gist
Sep 18, 2020: 1 changed file with 1 addition and 0 deletions.
@@ -1,3 +1,4 @@
+tokenizer = T5Tokenizer.from_pretrained('t5-base')
 model =T5ForConditionalGeneration.from_pretrained('path_to_trained_model', return_dict=True)

 def generate(text,modedl,tokenizer):
MathewAlexander created this gist
Sep 18, 2020.
@@ -0,0 +1,8 @@
+model =T5ForConditionalGeneration.from_pretrained('path_to_trained_model', return_dict=True)
+
+def generate(text,modedl,tokenizer):
+    model.eval()
+    input_ids = tokenizer.encode("WebNLG:{} </s>".format(text), return_tensors="pt")
+    outputs = model.generate(input_ids)
+    return tokenizer.decode(outputs[0])
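For reference, a minimal runnable sketch of the file as of the latest revision, with the imports the snippet assumes (Hugging Face transformers). 'path_to_trained_model' is a placeholder for a locally saved fine-tuned T5 checkpoint, and the WebNLG triple in the usage comment is hypothetical:

# Sketch of the gist as of the Sep 23, 2020 revision, plus assumed imports.
# 'path_to_trained_model' is a placeholder for a fine-tuned T5 checkpoint.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('t5-base')
model = T5ForConditionalGeneration.from_pretrained('path_to_trained_model', return_dict=True)

def generate(text, model, tokenizer):
    # Disable dropout and other training-only behaviour for inference.
    model.eval()
    # Format the input the same way the model saw it during WebNLG fine-tuning.
    input_ids = tokenizer.encode("WebNLG:{} </s>".format(text), return_tensors="pt")
    # Greedy decoding by default; num_beams / max_length can be passed for more control.
    outputs = model.generate(input_ids)
    return tokenizer.decode(outputs[0])

# Hypothetical usage with a WebNLG-style triple as input:
# print(generate("Abilene_Regional_Airport | cityServed | Abilene,_Texas", model, tokenizer))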