
How to use Learn_direction_in_latent_space.ipynb #26

Closed
shouramo opened this issue Nov 4, 2021 · 4 comments

shouramo commented Nov 4, 2021

Hi,

I am just wondering, how am I supposed to use Learn_direction_in_latent_space.ipynb?

What should "data.tsv" be? How should I organize my data here?

Thanks!

@EvgenyKashin (Owner)

Hey, good question.

I agree that right now it isn't clear. After running stylegan2/run_generator.py (as in the readme) you will get images and latents with names like 100.npy and 100.png (according to this line).

After running your attribute classifier, you have to create data.tsv with a key column holding the index of the generated image (100 in the example above) and a label column, which in our notebook example contains male/female labels.
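
As a rough sketch (not code from this repo; the generated/ directory and my_classifier function are placeholders for your own output folder and classifier), data.tsv could be built with pandas like this:

```python
# Sketch: pair each generated image index with a classifier label and write data.tsv.
# "generated/" and my_classifier() are placeholders, not part of this repo.
import glob
import os
import pandas as pd

rows = []
for png_path in sorted(glob.glob("generated/*.png")):
    # File names look like 100.png / 100.npy, so the stem is the image index.
    key = int(os.path.splitext(os.path.basename(png_path))[0])
    label = my_classifier(png_path)  # e.g. returns "male" or "female"
    rows.append({"key": key, "label": label})

pd.DataFrame(rows).to_csv("data.tsv", sep="\t", index=False)
```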

Hope this helps!


shouramo commented Nov 4, 2021

Thanks for such a quick response!

So I would have two columns: the first containing the index (just numbers), the second containing the attribute label (gender, age, etc.).

For the attribute classifier, which one did you use? I know of DeepFace and, in my experience, it works pretty well, but I'm just curious which one you used to create your data.tsv!

I am using my own StyleGAN for this, trained to generate 256x256 images with (14, 512) latent outputs, so the provided directions [stylegan2directions] unfortunately do not work for me.

Thanks!

@EvgenyKashin (Owner)

Yep, I see. We used an internal attribute classifier from my previous company, Yandex. But nowadays I would recommend using a CLIP classifier: it's pretty easy to create a classifier just by describing it. For example, prompts like "image of a smiling face" and "image of a neutral face" can be used to classify emotions.
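
A rough zero-shot sketch using the Hugging Face transformers CLIP wrapper (the model name, prompts, and image path here are just illustrative, not what we used):

```python
# Sketch of a zero-shot CLIP attribute classifier (illustrative model/prompts).
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["image of a smiling face", "image of a neutral face"]
image = Image.open("generated/100.png")  # one image produced by run_generator.py

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores
probs = logits.softmax(dim=-1)  # probability over the two prompts
label = "smiling" if probs[0, 0] > probs[0, 1] else "neutral"
print(label, probs.tolist())
```

Running this over every generated image gives you the label column for data.tsv.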


shouramo commented Nov 4, 2021

Amazing! Thank you so much for all of this info.

Take care,
Moaz
