@tianqig
Forked from jcoreyes/readme.md
Created September 11, 2017 09:43
# Image Captioning LSTM

## Information

name: LSTM image captioning model based on CVPR 2015 paper "Show and tell: A neural image caption generator".

model_file:

model_weights:

license:

neon_version:

neon_commit:

gist_id:

## Description

The LSTM model is trained on the flickr8k, flickr30k, and coco datasets using precomputed VGG features from http://cs.stanford.edu/people/karpathy/deepimagesent/. Model details can be found in the following CVPR 2015 paper:

Show and tell: A neural image caption generator.
O. Vinyals, A. Toshev, S. Bengio, and D. Erhan.  
CVPR, 2015 (arXiv:1411.4555)
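The decoding scheme from the paper — feed the precomputed VGG image feature into the LSTM once as the first input, then feed each predicted word back in as the next input — can be sketched in plain numpy. This is an illustrative greedy-decoding sketch only, not the neon implementation from this gist; all dimensions, weight names, and the random initialisation are assumptions.

```python
import numpy as np

# Hypothetical sizes -- the real model's dimensions are not given in this gist.
FEAT_DIM = 4096   # precomputed VGG fc7 feature size
EMBED_DIM = 512   # shared image/word embedding size
HIDDEN_DIM = 512  # LSTM hidden state size
VOCAB = 1000      # toy vocabulary size

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W):
    """One LSTM step: all four gates computed from [x; h] with one weight matrix."""
    z = np.concatenate([x, h]) @ W          # (4 * HIDDEN_DIM,)
    i, f, o, g = np.split(z, 4)             # input, forget, output gates; candidate
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

# Randomly initialised parameters, for illustration only.
W_img = rng.normal(0, 0.01, (FEAT_DIM, EMBED_DIM))   # image feature -> embedding
W_emb = rng.normal(0, 0.01, (VOCAB, EMBED_DIM))      # word embedding table
W_lstm = rng.normal(0, 0.01, (EMBED_DIM + HIDDEN_DIM, 4 * HIDDEN_DIM))
W_out = rng.normal(0, 0.01, (HIDDEN_DIM, VOCAB))     # hidden state -> vocab logits

def caption(vgg_feat, max_len=5, start_token=0):
    """Greedy decoding: image goes in once at t=0, then predictions are fed back."""
    h = np.zeros(HIDDEN_DIM)
    c = np.zeros(HIDDEN_DIM)
    h, c = lstm_step(vgg_feat @ W_img, h, c, W_lstm)  # image as the first "word"
    word = start_token
    out = []
    for _ in range(max_len):
        h, c = lstm_step(W_emb[word], h, c, W_lstm)
        word = int(np.argmax(h @ W_out))              # greedy: most likely next word
        out.append(word)
    return out

tokens = caption(rng.normal(size=FEAT_DIM))
print(len(tokens))  # 5
```

With trained weights, decoding would stop at an end-of-sentence token rather than a fixed length, and the paper uses beam search rather than pure greedy decoding.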

## Performance
