
Make adversarial training digit-aware #17

Closed
johncf opened this issue Jul 19, 2023 · 0 comments · Fixed by #18


johncf commented Jul 19, 2023

Problem

Visualizing the style-feature encoding of the MNIST test dataset with an encoder model trained by the train-aae script gives the following result:

[figure: aae-vis — style-feature encoding of the MNIST test set]

And here are images generated by the decoder model from random style vectors sampled from a normal distribution (loc=0, scale=1):

[figure: aae-gen — decoder samples from random style vectors]

Note that some digits are not generated well. Even though the overall distribution of each feature component is nicely centered around zero, the per-digit distributions are still skewed for some digits. This is likely because the discriminator is digit-agnostic, and thus cannot enforce the prior distribution on a per-digit basis.
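The effect described above is easy to reproduce synthetically: a per-digit offset in the style codes can cancel out in aggregate, so the overall mean looks fine while each class is skewed. A minimal NumPy sketch (the offsets and dimensions are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated 1-D style encodings: each "digit" class gets its own offset,
# chosen symmetrically so the offsets cancel out in aggregate.
offsets = np.linspace(-1.5, 1.5, 10)          # hypothetical per-digit skew
labels = rng.integers(0, 10, size=5000)
codes = rng.normal(size=5000) + offsets[labels]

print(round(float(codes.mean()), 2))           # overall mean ~ 0
per_digit = [float(codes[labels == d].mean()) for d in range(10)]
print([round(m, 1) for m in per_digit])        # per-digit means are clearly skewed
```

A digit-agnostic discriminator only sees the pooled distribution (the first print), so it has no signal to correct the per-digit skew (the second print).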

Solution

Make the discriminator digit-aware (a simpler variant of the idea from section 2.3 of the paper). When training the discriminator:

  • "Fake" inputs should be the one-hot representation of the label + the Encoder's style encoding output.
  • "Real" inputs should be a random one-hot representation + a prior-distribution random-sampled style vector.