
Chore: anakin and sebulba folders #1090

Conversation

@Louay-Ben-nessir (Contributor) commented on Jul 16, 2024

What?

Changed the systems folder structure.

Why?

To create a folder for Sebulba systems.

How?

Copied all of the Anakin systems into their own folder (rough layout sketched below).
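
For context, a rough sketch of the kind of layout this restructure is aiming for; the exact folder and module names below are assumptions for illustration, not necessarily the final paths in the repo:

    mava/systems/
        anakin/    # existing Anakin systems copied here (e.g. ppo, q_learning, sac)
        sebulba/   # new home for the Sebulba systems (e.g. the Sebulba ff_ippo added in follow-up PRs)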

Extra

This PR should be reviewed after #1080 and #1088.

@sash-a (Contributor) left a comment

Thanks @Louay-Ben-nessir 🔥

@sash-a merged commit 0860518 into instadeepai:feat/sebulba_arch on Jul 23, 2024
2 checks passed
sash-a added a commit that referenced this pull request Dec 13, 2024
…lba-folders

Chore: anakin and sebulba folders
sash-a added a commit that referenced this pull request Jan 10, 2025
- Merge branch develop into seb-ff-ippo-only
- fix: rec_qmix import
- chore: pre-commits
- Merge branch develop into seb-ff-ippo-only
- fix: action_head parameters for all systems
- chore: pre-commits
- fix: sebulba compatible get_action_head
- Squashed commit of the following:
- fix: smaclite win rate tracking
- chore: bunch of minor changes
- chore: pre-commits
- fix: removed axis swapping & wrapper rename
- fix: Metric tracking more aligned with Jumanji
- chore: removed learner accumulation
- chore: bunch of minor changes and fixes
- fix: give each learner a unique random key
- fix: random segfault
- chore: pre-commits
- feat: support for smac
- fix: start actors simultaneously to avoid deadlocks
- Merge branch develop into seb-ff-ippo-only
- chore: minor env typing fixes
- feat: better env creation and safer sharding
- fix: align gym config with other configs
- fix: key use in actor loss
- feat: shard_map working
- feat: shardmap almost working
- fix: timestep calculation with accumulation
- Merge branch seb-ff-ippo-only of github.com:Louay-Ben-nessir/Mava into seb-ff-ippo-only
- feat: jit evaluation on cpu
- feat: learner env accumulation
- fix: change to using gym.make to create envs and fix StepType
- fix: possible off by one fix
- chore: use original rware and lbf
- fix: create envs in main thread to avoid deadlocks
- chore: better graceful exit
- chore: remove some more device transfers
- feat: avoid unnecessary host-device transfers
- fix: safer pipeline.clear()
- fix: reshape with multiple learners and system name
- fix: update configs to match latest mava
- Merge branch feat/sebulba_arch into seb-ff-ippo-only
- Merge branch develop into feat/sebulba_arch
- chore: a few minor changes to code style
- feat: minor refactor to sebulba utils
- fix: removed deprecated gymnasium import
- fix: jumanji
- fix: updated to work with the latest gymnasium
- chore: loss unpacking
- fix: wasting samples
- fix: deadlock in pipeline
- feat: pass timestep instead of obs and done and fix potential race condition in pipeline
- chore: very nitpicky clean ups
- fix: changed the timestep discount
- chore: better error messages
- fix: prevent the pipeline from stalling and a lot of cleanup
- chore: various changes
- chore: code cleanup
- fix: fixed stalling at the end of training
- chore: config file changes
- chore: removed unused eval type
- feat: shared time steps checker
- chore: code cleanup and sps calcs and learner threads
- feat: major code restructure, non-blocking evaluators
- Merge remote-tracking branch upstream/feat/sebulba_arch into seb-ff-ippo-only
- Merge pull request #1094 from Louay-Ben-nessir/chore--sebulba-arch-update
- pre-commit
- chore: pre-commits
- Merge remote-tracking branch upstream/develop into chore--sebulba-arch-update
- Merge pull request #1090 from Louay-Ben-nessir/chore--anakin-and-sebulba-folders
- Merge pull request #4 from Louay-Ben-nessir/feat-sebulba-gym-wrapper
- fix: fixed the logging deadlock for sebulba
- fixed: annotations and add agent id spaces
- chore: minor changes
- fix: Async worker auto-resetting
- fix: LBF import
- fix: config file fixes
- chore: pre-commits
- fix: config changes
- fix: env wrappers fix
- fix: removed deprecated jax call
- folder re-structuring
- update the gym wrappers
- feat: restructured the folders
- chore: comments
- chore: annotation
- chore: bunch of minor changes
- fix: better agent ids wrapper?
- fix: rware import
- fix: config file fixes
- chore: pre-commits and annotations
- feat: using gymnasium async worker
- feat: generic gym wrapper
- fix: moved from gym to gymnasium
- chore: config files rename
- chore: renamed arch_name to architecture_name
- chore: pre-commits
- fix: more config changes
- chore: pre-commits
- fix: configs revamp
- fix: sum the rewards when using a shared reward
- chore: arch_name for anakin
- fix: config and imports for anakin q_learning and sac
- fix: seeds need to be python arrays, not np arrays
- fix: added missing lbf import
- fix: sync neptune logging for sebulba to avoid stalling
- feat: lbf
- feat: LBF and reproducibility
- chore: pre-commits
- chore: pre-commits
- fix: allow for reproducibility
- fix: imports and config paths in systems
- chore: created the anakin and sebulba folders
- chore: pre-commit
- fix: fixed the num evals calcs
- fix: fix the num_updates_in_eval in the last eval
- chore: pre-commits
- feat: sebulba ff_ippo
- chore: removed unused config file
- chore: pre-commits and some comments
- chore: clean up & updated the code to match the sebulba-ff-ippo branch
- fix: removed the lbf import/wrapper
- feat: ff_mappo and rec_ippo in sebulba
- fix: removed the sebulba-specific types
- feat: mappo + removed sebulba-specific types and made the rware wrapper generic
- chore: code cleanup + comments + added checkpoint save
- fix: num_updates and code refactoring
- fix: batch size calc for multiple devices
- fix: logging and added LBF
- feat: fully functional sebulba
- fix: changed the anakin ppo type import
- fix: fixed the training and added training logger
- fix: fixed function calls
- fix: changes the env creation
- feat: initial learner / training loop
- feat: init sebulba ippo
- feat: gym metric tracker wrapper
- chore: removed async gym wrapper
- fix: info only contains the action_mask and reformatted (n_agents, n_env) -> (n_env, n_agents)
- fix: fixed the async env wrapper
- feat: async env wrapper, changed the gym wrapper to rware wrapper
- fix: handling rware reset function
- fix: various minor fixes
- fix: gymV26 compatibility wrapper
- fix: fixed the async env creation
- chore: pre-commit
- fix: Create the gym wrappers directly
- fix: merged the observations and action mask
- chore: pre-commit hooks
- feat: gym wrapper

Co-authored-by: Sasha Abramowitz <[email protected]>
Co-authored-by: Omayma Mahjoub <[email protected]>