For a more general distribution P_G (e.g., one defined by a neural network), the likelihood P_G(x) cannot be computed in closed form, i.e., it is intractable.
Using GAN
How does a GAN handle this problem? (Generator)
A GAN uses a neural network to fit the distribution: the Generator is a network, and we use it to define the distribution P_G.
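As a minimal sketch of this idea (the layer sizes and weights below are illustrative placeholders, not from the source): a generator is just a network that maps prior noise z to a sample, and the distribution of its outputs implicitly defines P_G.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer generator: G(z) = W2 @ tanh(W1 @ z + b1) + b2.
# Random weights here stand in for learned parameters.
W1, b1 = rng.normal(size=(16, 2)), np.zeros(16)
W2, b2 = rng.normal(size=(1, 16)), np.zeros(1)

def G(z):
    """Map a batch of noise vectors z of shape (n, 2) to samples of shape (n, 1)."""
    h = np.tanh(z @ W1.T + b1)
    return h @ W2.T + b2

# Sampling from P_G is simply: draw z from the prior, push it through G.
z = rng.normal(size=(1000, 2))   # prior P_z(z) = standard normal
x_fake = G(z)                    # 1000 samples from P_G
print(x_fake.shape)
```

Note that P_G is never written down as a density; we can only sample from it, which is exactly why the likelihood above is intractable.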
To learn the generator's distribution p_g over data x, we define a prior on input noise variables p_z(z), then represent a mapping to data space as G(z; θ_g), where G is a differentiable function represented by a multilayer perceptron with parameters θ_g.
We also define a second multilayer perceptron D(x; θ_d) that outputs a single scalar. D(x) represents the probability that x came from the data rather than p_g. We train D to maximize the probability of assigning the correct label to both training examples and samples from G.
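A matching sketch for the discriminator (again with illustrative random weights): an MLP whose final sigmoid squashes the output to a single scalar in (0, 1), read as the probability that x is real.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy discriminator MLP; weights are placeholders for learned parameters θ_d.
V1, c1 = rng.normal(size=(16, 1)), np.zeros(16)
V2, c2 = rng.normal(size=(1, 16)), np.zeros(1)

def D(x):
    """Map samples x of shape (n, 1) to probabilities of shape (n, 1)."""
    h = np.tanh(x @ V1.T + c1)
    logits = h @ V2.T + c2
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> single scalar in (0, 1)

x = rng.normal(size=(5, 1))
p = D(x)
print(p.ravel())  # five probabilities strictly between 0 and 1
```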
Objective Function for D
When training D, G is fixed:

$$
\begin{aligned}
V(G,D) &= E_{x \sim P_{data}(x)}[\log D(x)] + E_{z \sim P_z(z)}[\log(1-D(G(z)))] \\
&= E_{x \sim P_{data}(x)}[\log D(x)] + E_{x \sim P_G(x)}[\log(1-D(x))]
\end{aligned} \tag{3}
$$
i.e., we train D to solve $D^* = \arg\max_D V(G,D)$.
How should we interpret Eq. (3)?
When x is sampled from P_data(x), we want the scalar D(x) to be as large as possible (because x is real).
When x is sampled from P_G(x), we want the scalar D(x) to be as small as possible, i.e., 1 - D(x) as large as possible (because x was produced by the Generator).
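These two cases are exactly the two expectations in Eq. (3). A Monte-Carlo estimate of V(G, D) makes this concrete; everything below (the fixed toy discriminator and the two sample sets) is an illustrative stand-in, not learned networks:

```python
import numpy as np

rng = np.random.default_rng(2)

def D(x):
    # Toy fixed discriminator (for illustration only); sigmoid keeps the
    # output strictly inside (0, 1), so both logs below stay finite.
    return 1.0 / (1.0 + np.exp(-(np.tanh(x * 1.5 - 0.2) * 2.0)))

# Stand-ins for samples from the two distributions in Eq. (3).
x_real = rng.normal(loc=2.0, size=10_000)   # samples from P_data(x)
x_fake = rng.normal(loc=-2.0, size=10_000)  # samples from P_G(x)

# Monte-Carlo estimate of V(G, D):
#   E_{x~P_data}[log D(x)] + E_{x~P_G}[log(1 - D(x))]
V = np.mean(np.log(D(x_real))) + np.mean(np.log(1.0 - D(x_fake)))
print(V)
```

Since both terms are expectations of log-probabilities, V is always at most 0; training D pushes it toward 0 by making D(x_real) approach 1 and D(x_fake) approach 0.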