
Generating a Normal Distribution with Various GANs

    Part of my graduation project involves GANs (Generative Adversarial Networks), but it has never worked, so I decided to go back to the simplest possible GAN and reimplement it, to see whether I can spot the problem.
    This post uses a GAN to generate the normal distribution $N(3.5, 0.7^2)$ (mean 3.5, standard deviation 0.7) from standard normal noise.


Overall Structure

    A GAN consists of a generator (G) and a discriminator (D). D tries to correctly tell real samples from the fake samples produced by G, while G tries to fool D into judging its outputs as real. Depending on the GAN variant, G and D have different objective functions, and their structure and wiring also differ; the internal architecture of G and D is fairly unconstrained (at least nominally; in practice it still needs careful design).
    The figure below shows the structure of a vanilla GAN. $z$ is a $d$-dimensional noise vector, and $x$ is a sample drawn from the target distribution $N(3.5, 0.7^2)$, i.e. a real sample, which is also a $d$-dimensional vector.

(Figure: structure of a vanilla GAN)
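
    Concretely, each training step draws a batch of noise $z \sim N(0, I)$ and a batch of real samples $x$ from the target distribution. A minimal sampling sketch (the batch size of 64 is my assumption, not from the original code):

import numpy as np

batch_size, d = 64, 10  # batch_size is assumed; d matches the text below

z = np.random.randn(batch_size, d)               # noise from the standard normal
x = np.random.normal(3.5, 0.7, (batch_size, d))  # real samples from N(3.5, 0.7^2)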


Generator and Discriminator

    Since the task is simple, mapping one random vector to another, both G and D use plain MLPs. For convenience, set the random-vector dimension to $d=10$.

# Generator: an MLP that maps a 10-d noise vector to a 10-d sample
class Generator_MLP(BasicBlock):
    def __init__(self, name=None):
        super(Generator_MLP, self).__init__(None, name or "Generator_MLP")

    def __call__(self, z, is_training=True, reuse=False):
        with tf.variable_scope(self.name, reuse=reuse):
            net = tf.nn.softplus(dense(z, 64, name='g_fc1'))
            out = dense(net, 10, name='g_fc2')  # no output activation: samples are unbounded
            return out

# Discriminator: an MLP that outputs one logit per sample
class Discriminator_MLP(BasicBlock):
    def __init__(self, name=None):
        super(Discriminator_MLP, self).__init__(None, name or "Discriminator_MLP")

    def __call__(self, x, is_training=True, reuse=False):
        with tf.variable_scope(self.name, reuse=reuse):
            net = tf.nn.tanh(dense(x, 64, name='d_fc1'))
            net = tf.nn.tanh(bn(dense(net, 64, name='d_fc2'), is_training, name='d_bn2'))
            yd = dense(net, 1, name="D_dense")  # raw logit; the loss applies sigmoid/MSE
            return yd, net
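
    BasicBlock, dense, and bn above are helpers from the repo's utility module. Roughly, dense is a fully-connected layer and bn is batch normalization; a possible minimal version of the two functions (my sketch, not the repo's exact code):

import tensorflow as tf

def dense(x, units, name):
    # fully-connected layer: out = x @ W + b
    with tf.variable_scope(name):
        w = tf.get_variable('w', [int(x.get_shape()[-1]), units],
                            initializer=tf.glorot_uniform_initializer())
        b = tf.get_variable('b', [units], initializer=tf.zeros_initializer())
        return tf.matmul(x, w) + b

def bn(x, is_training, name):
    # batch normalization; statistics update only while training
    return tf.layers.batch_normalization(x, training=is_training, name=name)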

Objective Functions

GAN

$$
\begin{matrix}
L_D = E[\log(D(x))] + E[\log(1 - D(G(z)))] \\
\\
L_G = E[\log(D(G(z)))]
\end{matrix}
\tag{1}
$$

    Here D maximizes $L_D$ and G maximizes $L_G$; the code below implements both as minimized sigmoid cross-entropy losses, which is equivalent up to sign.

WGAN

$$
\begin{matrix}
L_D = E[D(x)] - E[D(G(z))] \\
\\
L_G = E[D(G(z))] \\
\\
W_D \leftarrow \mathrm{clip}(W_D, -0.1, 0.1)
\end{matrix}
\tag{2}
$$

LSGAN

$$
\begin{matrix}
L_D = E[(D(x) - 1)^2] + E[D(G(z))^2] \\
\\
L_G = E[(D(G(z)) - 1)^2]
\end{matrix}
\tag{3}
$$

Code Implementation

def build_placeholder(self):
    # noise feeds G; source holds real samples from the target distribution
    self.noise = tf.placeholder(shape=(self.batch_size, self.noise_dim), dtype=tf.float32)
    self.source = tf.placeholder(shape=(self.batch_size, self.noise_dim), dtype=tf.float32)

def build_gan(self):
    self.G = self.generator(self.noise, is_training=True, reuse=False)
    self.G_test = self.generator(self.noise, is_training=False, reuse=True)
    self.logit_real, self.net_real = self.discriminator(self.source, is_training=True, reuse=False)
    self.logit_fake, self.net_fake = self.discriminator(self.G, is_training=True, reuse=True)
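
    Note the reuse flags: the second generator call and the second discriminator call share the variables created by the first calls, so G and D each have exactly one set of weights in the graph; self.G_test simply runs the same generator with is_training=False for evaluation.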

def build_optimizer(self):
    if self.gan_type == 'gan':
        # Eq. (1) as minimized sigmoid cross-entropy on the raw logits
        self.D_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=self.logit_real, labels=tf.ones_like(self.logit_real)))
        self.D_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=self.logit_fake, labels=tf.zeros_like(self.logit_fake)))
        self.D_loss = self.D_loss_real + self.D_loss_fake
        self.G_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=self.logit_fake, labels=tf.ones_like(self.logit_fake)))
    elif self.gan_type == 'wgan':
        # Eq. (2), negated because the optimizer minimizes
        self.D_loss_real = -tf.reduce_mean(self.logit_real)
        self.D_loss_fake = tf.reduce_mean(self.logit_fake)
        self.D_loss = self.D_loss_real + self.D_loss_fake
        self.G_loss = -self.D_loss_fake
        if self.clip_num:
            print("GC")  # weight clipping is enabled
            self.D_clip = [v.assign(tf.clip_by_value(v, -self.clip_num, self.clip_num)) for v in self.discriminator.weights]
    elif self.gan_type == 'lsgan':
        # Eq. (3). Note: this helper actually computes an L2-norm-based loss
        # rather than a strict elementwise MSE, but it plays the same role.
        def mse_loss(pred, data):
            return tf.sqrt(2 * tf.nn.l2_loss(pred - data)) / self.batch_size
        self.D_loss_real = mse_loss(self.logit_real, tf.ones_like(self.logit_real))
        self.D_loss_fake = mse_loss(self.logit_fake, tf.zeros_like(self.logit_fake))
        self.D_loss = 0.5 * (self.D_loss_real + self.D_loss_fake)
        self.G_loss = mse_loss(self.logit_fake, tf.ones_like(self.logit_fake))
        if self.clip_num:
            print("GC")  # weight clipping is enabled
            self.D_clip = [v.assign(tf.clip_by_value(v, -self.clip_num, self.clip_num)) for v in self.discriminator.weights]
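
    What remains is the training loop. Below is a minimal sketch of how these pieces might be driven; the solver ops (model.D_solver, model.G_solver), d_iters, and the step count are assumptions of mine, and the repo's actual loop may differ.

import numpy as np

def train(model, sess, steps=10000, d_iters=5):
    # model.D_solver / model.G_solver are assumed optimizer ops that
    # minimize model.D_loss / model.G_loss respectively.
    for step in range(steps):
        # WGAN usually takes several D steps per G step; one otherwise
        for _ in range(d_iters if model.gan_type == 'wgan' else 1):
            z = np.random.randn(model.batch_size, model.noise_dim)
            x = np.random.normal(3.5, 0.7, (model.batch_size, model.noise_dim))
            sess.run(model.D_solver, {model.noise: z, model.source: x})
            if model.gan_type == 'wgan' and model.clip_num:
                sess.run(model.D_clip)  # enforce the weight constraint of Eq. (2)
        z = np.random.randn(model.batch_size, model.noise_dim)
        sess.run(model.G_solver, {model.noise: z})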

Results

Full Code

https://github.com/SongDark/generate_normal