Naive question about multiple seeds #95
Comments
Hey! Thanks for reaching out, and it's exciting that you are trying out JAX. Off the top of my head, I think for …
Hi, thanks for the reply! Okay, so you're saying the answer is simply not to parallelize across seeds, and instead to use WandB's tools to aggregate separate single-seed runs together. If I'm understanding that correctly, then what is being plotted when I run multiple seeds in parallel? The average across those seeds?
The parallel runs will plot in the same space, meaning that you will have datapoints from all your runs but you will not be able to distinguish them. To distinguish them, you can use an approach like this:
We will include training code like this soon.
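One way to make the parallel seeds distinguishable in WandB is to give each seed its own metric key before logging. This is only a hypothetical sketch (the `split_metrics_by_seed` helper and metric names are my own, not from the repository): it takes a dict of arrays with a leading seed dimension, as produced by a vmapped training loop, and flattens it into per-seed keys suitable for `wandb.log`.

```python
import jax.numpy as jnp

def split_metrics_by_seed(metrics, step):
    """metrics: dict of arrays shaped (num_seeds, ...) for one logging step.

    Returns a flat dict where each seed gets its own key, so WandB plots
    the seeds as separate curves instead of overlapping points.
    """
    num_seeds = next(iter(metrics.values())).shape[0]
    logs = {}
    for name, value in metrics.items():
        for seed in range(num_seeds):
            logs[f"{name}/seed_{seed}"] = float(value[seed])
    logs["step"] = step
    return logs  # pass this dict to wandb.log(logs)

# Toy example: episode returns from 3 parallel seeds at one step
logs = split_metrics_by_seed({"returns": jnp.array([1.0, 2.0, 3.0])}, step=10)
print(logs)
```

With keys like `returns/seed_0`, `returns/seed_1`, etc., WandB's UI can then compute min/avg/max bands across the per-seed curves.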
Hi all! I would just like to ask a quick question about the current JAX setup and the WandB logging. When you run training with multiple seeds, with the first set of plots generated (with or without …
Hi,
I am familiar with MARL in Pytorch, but very new to JAX, so please forgive me if this question is naive.
I see that many of your baselines are parallelized over multiple seeds at once (e.g. here in QMIX or here in transfQMIX). However, when running the baselines I notice that the resulting WandB runs seem to aggregate the seeds together. Is there some way to separate the performance of each seed for plotting purposes (e.g. to report the min/avg/max)? Your paper has several average return curves with some sort of error shading, so I imagine I must be missing something obvious.
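To make my question concrete, here is a minimal sketch of the pattern I mean, with a dummy `train` function standing in for a full training run (the real baselines obviously do much more): one PRNG key is split into one key per seed, and `jax.vmap` runs training over all of them at once.

```python
import jax
import jax.numpy as jnp

def train(rng):
    # Stand-in for a full training run; returns a scalar "final return".
    return jax.random.uniform(rng)

num_seeds = 4
rngs = jax.random.split(jax.random.PRNGKey(0), num_seeds)

# Run all seeds in parallel; output has a leading seed dimension.
final_returns = jax.vmap(train)(rngs)  # shape (num_seeds,)

# Per-seed statistics of the kind I'd like to plot (min/avg/max shading)
print(float(final_returns.min()),
      float(final_returns.mean()),
      float(final_returns.max()))
```

The per-seed values clearly exist in the output array; my question is how to get them into WandB as separate curves rather than one aggregated run.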