Thanks very much for sharing the code; I think this is great work. But I have some questions about understanding the code.
Firstly, what does the KL_loss mean? In my understanding, the SSE_loss in the code corresponds to the MSE_loss in the paper, but the KL_loss isn't mentioned in the paper?
Secondly, why didn't you use the KL_loss and SSE_loss when updating the params?
Thirdly, in
z_x = tf.add(z_x_mean, tf.multiply(tf.sqrt(tf.exp(z_x_log_sigma_sq)), eps)) # grab our actual z
what does "grab our actual z" mean?
Finally, what is the role of the function "average_gradients"? @htconquer
What was implemented is a variational autoencoder (VAE), so the output of the encoder is a probability distribution rather than a single latent vector. Learning more about VAEs should let you answer most of these questions yourself.
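For concreteness, here is a minimal sketch of the pieces the questions touch on, assuming TF1-style code where the encoder outputs z_x_mean and z_x_log_sigma_sq as in the quoted line. The function bodies below follow the standard VAE formulation and the usual multi-GPU tower pattern; they are illustrative assumptions, not the repository's exact code.

```python
import tensorflow as tf

def sample_z(z_x_mean, z_x_log_sigma_sq):
    # Reparameterization trick ("grab our actual z"): instead of sampling
    # z ~ N(mean, sigma^2) directly, which is not differentiable, sample
    # eps ~ N(0, I) and transform it deterministically, so gradients can
    # flow back into the encoder parameters.
    eps = tf.random.normal(tf.shape(z_x_mean))
    return z_x_mean + tf.sqrt(tf.exp(z_x_log_sigma_sq)) * eps

def kl_loss(z_x_mean, z_x_log_sigma_sq):
    # Closed-form KL divergence between the encoder's diagonal Gaussian
    # q(z|x) = N(mean, sigma^2) and the unit-Gaussian prior N(0, I).
    # The VAE objective is the reconstruction loss (here SSE/MSE) plus
    # this KL term, which regularizes the latent space; papers often
    # spell out only the reconstruction part.
    return -0.5 * tf.reduce_sum(
        1.0 + z_x_log_sigma_sq - tf.square(z_x_mean)
        - tf.exp(z_x_log_sigma_sq), axis=1)

def average_gradients(tower_grads):
    # Standard multi-GPU pattern: each GPU "tower" computes its own list
    # of (gradient, variable) pairs; this averages the gradients across
    # towers so one optimizer step can update the shared variables.
    average_grads = []
    for grad_and_vars in zip(*tower_grads):
        grads = [tf.expand_dims(g, 0) for g, _ in grad_and_vars]
        grad = tf.reduce_mean(tf.concat(grads, axis=0), axis=0)
        # The variables are shared across towers, so take the variable
        # reference from the first tower.
        average_grads.append((grad, grad_and_vars[0][1]))
    return average_grads
```

If the training loop builds the optimizer step from the summed objective (or from the averaged per-tower gradients) rather than from KL_loss and SSE_loss individually, those losses can look "unused" in the update code even though they are exactly what is being differentiated.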