Diary

Start from a plain GAN
If we feed all data in, after enough iterations every output of the generator becomes a 1 (on the MNIST dataset), which is the simplest digit --> "the generator fools the discriminator with garbage"
Training a GAN for each class individually --> 1. the GAN structure suits some classes, but for others training ends in mode collapse; 2. it is not easy to select a model for each class

Then to conditional GAN
Similar structure, but with a one-hot label concatenated to the inputs of G and D
Advantage: no need to train a model per class individually
Note: learning rate set to 0.001; 0.0001 leads to bad results
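The conditioning scheme above can be sketched without any framework: concatenate a one-hot label onto both the generator's noise input and the discriminator's image input. A minimal numpy sketch, assuming a 100-d latent vector, 10 classes, and flattened 28x28 MNIST images (all dimensions are assumptions, not the notebook's actual settings):

```python
import numpy as np

def one_hot(label, num_classes=10):
    """One-hot row vector for a single class index."""
    v = np.zeros(num_classes, dtype=np.float32)
    v[label] = 1.0
    return v

# Generator input: latent noise concatenated with the class label.
z = np.random.randn(100).astype(np.float32)     # latent noise vector
y = one_hot(3)                                  # condition on digit "3"
g_input = np.concatenate([z, y])                # shape (110,)

# Discriminator input: flattened image concatenated with the same label,
# so D judges "real AND consistent with the claimed class" jointly.
x = np.random.rand(28 * 28).astype(np.float32)  # placeholder MNIST-sized image
d_input = np.concatenate([x, y])                # shape (794,)
```

In a real model these concatenated vectors feed the first dense (or reshaped convolutional) layer of G and D respectively.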

Then ACGAN
Current tests show ACGAN does not work well with two dense layers; the reason might be that ACGAN only works with a convolutional D and G
TODO: pretrain D

Then Wasserstein GAN


  1. January
    refine the proposal

10-12. January

  • implement a DC classifier in preparation for implementing the discriminator
  • read Improved GAN; focus on this paper in the following days
  1. January
  • the DC classifier has no bugs, but performs awfully
  • install Theano and Lasagne to run the improvedGAN code
  1. - 19. January
  • finally installed Theano and its GPU backend correctly and fixed a lot of deprecation issues
  1. January
  • try to translate it to Keras; find a way to implement the loss function
  1. January
  • the translation to Keras is way too complicated; first try PaviaU in the original Theano code
  • the 1D improved GAN trains PaviaU too badly (maybe because of the training data; check the training and testing data and re-save them)
  1. January
  • prepare questions for tomorrow's meeting:
  • the loss function in the code does not match the loss in the paper, and the former has a very strange form
  • l_lab and train_err are the same thing
  • there is no implementation of the K+1 class
  1. February
  • as to the 3D convolution, an idea: set stride=(1,1,2), which manipulates only the spectral dimension
  • try a semi-supervised GAN: the discriminator classifies labeled samples into their classes and generated samples as class k+1; for unlabeled training data, set the label to [0.1, 0.1, 0.1, ..., 0]; on the MNIST dataset
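The three kinds of training targets in that semi-supervised setup can be sketched in numpy. A minimal sketch, assuming K = 10 real classes (MNIST) plus one extra "fake" class at index K; the uniform 0.1 target for unlabeled data is taken directly from the note above:

```python
import numpy as np

K = 10  # number of real classes (MNIST digits); index K is the extra "fake" class

def labeled_target(c):
    """One-hot over K+1 classes for a labeled real sample of class c."""
    t = np.zeros(K + 1, dtype=np.float32)
    t[c] = 1.0
    return t

def generated_target():
    """Generated samples are assigned entirely to the extra class K."""
    t = np.zeros(K + 1, dtype=np.float32)
    t[K] = 1.0
    return t

def unlabeled_target():
    """Unlabeled real samples: uniform 0.1 mass on each real class,
    zero on the fake class, as in the note above."""
    t = np.full(K + 1, 0.1, dtype=np.float32)
    t[K] = 0.0
    return t
```

Training the discriminator against these targets with a plain (K+1)-way cross entropy is one assumed reading of the note, not necessarily the exact loss used.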
  1. Feb. - 9. Feb.
  • 1D tryout seems good; needs more tests
  1. March
    ready to test:
  • (replace conv3d with conv2d)
  • different training-data sizes (sample counts)
  • different patch sizes
  • different channel numbers
  • (different batch sizes)
  • (different depthwise conv channel counts)
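A sweep like the one listed above can be enumerated with `itertools.product`. A minimal sketch; all parameter names and values here are placeholders, not the notebook's actual settings:

```python
from itertools import product

# Hypothetical values for the factors listed above; every name and
# number is a placeholder, not the author's actual configuration.
grid = {
    "train_count": [100, 200, 400],   # training-data size (sample count)
    "patch_size": [5, 7, 9],          # spatial patch size
    "n_channels": [10, 20, 30],       # spectral channel number
    "batch_size": [32, 64],           # optional factor
}

# One dict per run, covering the full Cartesian product of the grid.
configs = [dict(zip(grid, vals)) for vals in product(*grid.values())]
print(len(configs))  # 3 * 3 * 3 * 2 = 54 runs
```

Enumerating the full product makes it easy to see how quickly the run count grows when factors are added, which is why the parenthesized factors above are marked optional.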
  1. March
    found a case: randomly choosing 200 samples from the whole image as the training set gives much better results than randomly choosing 200 samples from the predefined training set

  2. April

  • email to cluster team
  • try cross validation
  • ask Amir how to determine the final result
  • read the "discr_loss" blog, and try their code
  • read gan paper
  1. April
  • Adam vs. SGD
    the validation curve with Adam goes up and down --> not suitable for a normal early-stopping algorithm
    attempted fix: use a smaller learning rate

  • alternative progress measure (early stopping)
    do not compute the ratio of the average training loss to the minimum training loss within a training strip, but the ratio of the current strip's average training loss to the previous strip's average training loss

  • learning rate decay strategy

  • separate optimizers for G and D

  • use the cross-entropy loss of only the first 9 labels to determine when to stop early
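One way to read that idea: keep only the first k predicted class probabilities, renormalize them to a distribution, and compute the cross entropy there. A minimal numpy sketch under that assumed interpretation (the function name and renormalization step are my assumptions, not the notebook's implementation):

```python
import numpy as np

def truncated_cross_entropy(probs, labels, k=9):
    """Cross entropy over only the first k class probabilities,
    renormalized so the kept probabilities form a distribution.

    probs:  (batch, num_classes) softmax outputs
    labels: (batch,) integer class indices, all assumed < k
    """
    p = probs[:, :k]
    p = p / p.sum(axis=1, keepdims=True)           # renormalize kept classes
    picked = p[np.arange(len(labels)), labels]     # probability of the true class
    return -float(np.mean(np.log(picked + 1e-12))) # small eps for stability
```

With uniform predictions over 10 classes the kept 9 renormalize to 1/9 each, so the loss is log(9), which is a quick sanity check for the implementation.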

  • double-check the DataLoader in demoGAN (Zhu et al.) (PyTorch)

  1. April
  • test feature matching, starting from a one-layer model (ssgan_improved_pytorch)
  • try to implement a custom loss function the way Keras does
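The feature-matching objective from Improved GAN can be sketched framework-free: the generator is trained to match the batch mean of an intermediate discriminator feature layer between real and generated data. A minimal numpy sketch (in practice this is computed on framework tensors inside the training graph, and the feature layer choice is a design decision):

```python
import numpy as np

def feature_matching_loss(real_feats, fake_feats):
    """Improved-GAN feature-matching loss for G: squared L2 distance
    between the batch means of an intermediate D feature layer.

    real_feats, fake_feats: (batch, feat_dim) feature activations.
    """
    diff = real_feats.mean(axis=0) - fake_feats.mean(axis=0)
    return float(np.sum(diff ** 2))
```

Because the loss only constrains first-moment statistics of the features rather than fooling D directly, it tends to stabilize generator training in the semi-supervised setting.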