kevingo created this gist on May 19, 2017.
## Encoder, Generator, and Putting Them Together

### Speaker: 李宏毅 (Hung-yi Lee)

#### Outline

- Auto-encoder
- Deep generative models
- Conditional generation

#### Deep learning in one slide

- MLP
- CNN: input a matrix, output a matrix
- RNN: input a sequence of vectors

#### Simplifying the complex (化繁為簡)

- A digit can be represented as a 28x28-dim vector.
- Describing an image with 28x28 dimensions is too complex; after seeing many images, the machine should learn a simpler description.
- If a set of images differs only by rotation angle, a single angle dimension can replace the 28x28-dim representation.
- Goal: discover the simple rules behind complex data.
- We want to discover this in an unsupervised way.
- Solving this with deep learning is called an auto-encoder.

#### Auto-encoder

- Input 28x28 image -> NN encoder -> low-dimensional code
- code -> NN decoder -> 28x28 image

#### Deep auto-encoder

- NN encoder + NN decoder = a deep network
- Stack the encoder and decoder: the input is an image and the output is also an image, so the two can be chained together.
- If the chained network trains successfully, we know how to represent a complex image with a simple code.

#### t-SNE

- One of the dimensionality-reduction methods.

#### Word embedding

- ML learns a word by its context.

#### Deep generative models

- How can a machine create something new?
- Unsupervised
- e.g., after reading many poems, it should write poems on its own.

#### Component by component

- Images are composed of pixels.
- To create an image, generate one pixel at a time.
- Drawback: generating one pixel at a time loses the global view of the image.

#### VAE

- A variant of the auto-encoder.
- input -> NN encoder -> code + noise -> NN decoder -> output
- The input and output should be as close as possible.
- Why is adding noise useful in a VAE?
  - All codes within the noise range around a training code should decode to the same result.
  - A VAE may therefore give better results than a plain auto-encoder.
- Drawback: it does not really try to simulate real images.
  - It does not truly learn how to generate an image.
  - It learns to generate images that look as much as possible like the training set.
  - The generated images are usually close to images already in the dataset.

#### GAN (Generative Adversarial Networks)

- Much like evolution.
- Example: why does the dead-leaf butterfly look like a dead leaf? Under evolutionary pressure, it gradually evolved into a leaf-like shape.
- GAN: learns to match a distribution.
- Drawback: hard to train.
- Why is GAN hard to train?

#### W-GAN

- Uses the Wasserstein distance instead of JS divergence.
- [令人拍案叫絕的Wasserstein GAN](https://zhuanlan.zhihu.com/p/25071913)

#### Conditional generation

- e.g., drawing anime character faces from a text description
- e.g., text summarization
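The auto-encoder pipeline in the notes above (image -> NN encoder -> low-dimensional code -> NN decoder -> image) can be sketched with one linear layer on each side. This is a minimal illustration with made-up dimensions (16-D data, 2-D code) instead of 28x28 images; all names and numbers here are my own, not from the talk.

```python
# Minimal linear auto-encoder trained by gradient descent (numpy only).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points that truly live on a 2-D subspace of a 16-D space,
# mimicking "complex data with a simple rule behind it".
basis = rng.normal(size=(2, 16))
X = rng.normal(size=(200, 2)) @ basis

W_enc = rng.normal(scale=0.1, size=(16, 2))  # encoder: 16-D input -> 2-D code
W_dec = rng.normal(scale=0.1, size=(2, 16))  # decoder: 2-D code -> 16-D output

init_loss = float(np.mean((X @ W_enc @ W_dec - X) ** 2))

lr = 0.01
for _ in range(500):
    code = X @ W_enc        # NN encoder (one linear layer)
    X_hat = code @ W_dec    # NN decoder
    err = X_hat - X         # reconstruction error
    # Gradient-descent updates on the squared reconstruction error
    g_dec = code.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

loss = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
print(init_loss, "->", loss)
```

Because the toy data is exactly rank 2, a 2-D code is enough to reconstruct it, so the reconstruction loss drops sharply during training.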
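The component-by-component idea can be sketched as autoregressive sampling: each pixel is drawn from a distribution conditioned on the pixels generated so far. The conditional rule below is a hand-written toy (a real model such as PixelRNN would learn it from data), and it also exposes the stated drawback: each choice sees only the past, never a global plan.

```python
# Component-by-component generation: sample one pixel at a time, each
# conditioned on the pixels already generated.
import random

random.seed(42)

def p_next_is_on(pixels):
    """Toy conditional P(next pixel = 1 | previous pixels):
    favour continuing whatever the last pixel was (smooth "strokes")."""
    if not pixels:
        return 0.5
    return 0.8 if pixels[-1] == 1 else 0.2

def generate_row(length=8):
    pixels = []
    for _ in range(length):
        pixels.append(1 if random.random() < p_next_is_on(pixels) else 0)
    return pixels

row = generate_row()
print(row)
```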
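The "code + noise" step of the VAE above is usually implemented with the reparameterisation trick: the encoder outputs a mean and a log-variance per code dimension, and the decoder is trained on z = mu + sigma * eps. The numbers below are made up for illustration (no network is trained); the KL term shown is the standard closed form for Gaussian codes.

```python
# The "code + noise" step of a VAE, with illustrative hand-picked numbers.
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([1.0, -0.5])        # encoder's mean for one input (made up)
log_var = np.array([-2.0, -2.0])  # encoder's log-variance (small noise)
sigma = np.exp(0.5 * log_var)

# Draw several noisy codes for the SAME input: z = mu + sigma * eps.
# Training the decoder on all of them forces nearby codes to decode to
# the same output, which is why the added noise helps.
eps = rng.standard_normal(size=(5, 2))
z = mu + sigma * eps

# KL divergence of N(mu, sigma^2) from N(0, 1), summed over code dims:
# the extra loss term a VAE adds on top of the reconstruction error.
kl = 0.5 * float(np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var))

print(z.shape, round(kl, 4))
```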
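The reason W-GAN swaps the JS divergence for the Wasserstein (earth-mover) distance can be shown on a 1-D example: when real and generated samples have disjoint support, JS is stuck at log 2 however large the gap, while W1 keeps growing with the gap and so still provides a learning signal. The helper below is the standard sort-and-pair formula for the empirical 1-D W1, not actual W-GAN training code.

```python
# Wasserstein-1 vs JS divergence on separated 1-D samples.
import math

def wasserstein1(xs, ys):
    """Empirical Wasserstein-1 distance between two equal-size 1-D samples:
    sorting both and pairing them up is the optimal transport plan in 1-D."""
    assert len(xs) == len(ys)
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

real = [0.0, 1.0, 2.0, 3.0]            # stand-in for data samples
for shift in (1.0, 5.0, 25.0):
    fake = [x + shift for x in real]   # generator output, far from the data
    # JS(real, fake) = log 2 for every large shift here (disjoint supports),
    # but W1 still tells us HOW far off the generator is:
    print(shift, round(math.log(2), 4), wasserstein1(real, fake))
```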