Completion of merging implementation #316
Conversation
* Base class for ``InitSparseConnectivitySnippet::Init`` was not being detected - was a combination of SWIG being dumb and not having …
* EGPs were totally broken for synapse groups
* Variable loading did not take variable location into account
# Conflicts:
#    pygenn/genn_model.py

# Conflicts:
#    include/genn/genn/initSparseConnectivitySnippet.h
…ues via merged struct
* Neuron parameters will be sub…
* Current source parameters will be subs…
* Moved unpleasant sorting of 'children' of neuron groups into ``NeuronGroupMerged`` to allow this information to be accessed when generating neuron update
…python_microcircuit
# Conflicts:
#    src/genn/genn/code_generator/modelSpecMerged.cc
…ut happy with behaviour
…s only used once for host init
# Conflicts:
#    include/genn/genn/code_generator/codeGenUtils.h
#    include/genn/genn/code_generator/generateInit.h
#    include/genn/genn/code_generator/generateNeuronUpdate.h
#    include/genn/genn/code_generator/generateSynapseUpdate.h
#    include/genn/genn/code_generator/substitutions.h
#    include/genn/genn/gennUtils.h
#    pygenn/genn_groups.py
#    pygenn/genn_model.py
#    setup.py
#    src/genn/backends/cuda/backend.cc
#    src/genn/generator/generator.cc
#    src/genn/genn/code_generator/codeGenUtils.cc
#    src/genn/genn/code_generator/generateAll.cc
#    src/genn/genn/code_generator/generateInit.cc
#    src/genn/genn/code_generator/generateNeuronUpdate.cc
#    src/genn/genn/code_generator/generateRunner.cc
#    src/genn/genn/code_generator/generateSynapseUpdate.cc
Codecov Report
@@           Coverage Diff           @@
##           master     #316   +/-   ##
=======================================
  Coverage   82.34%   82.34%
=======================================
  Files          64       64
  Lines        9848     9848
=======================================
  Hits         8109     8109
  Misses       1739     1739

Continue to review full report at Codecov.
…c weight update model variables need to be updated BEFORE spikes are registered so sT is correct
Arguably, the flag at genn/include/genn/genn/code_generator/backendBase.h (lines 58 to 59 in 27400b2):

    //! Should GeNN generate empty state push and pull functions
    bool generateEmptyStatePushPull = true;

could be false by default? It is unclear what the empty push/pull functions would be good for (better for the user to get an error about a non-existent function than to silently not get any copy?).
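For context, here is a minimal sketch of how a model definition could opt out of the empty functions. It assumes the flag is reachable through ``GENN_PREFERENCES`` like other backend preferences; the exact access path is an assumption, not something stated in this PR.

```cpp
// Hypothetical model definition - assumes generateEmptyStatePushPull is exposed
// through GENN_PREFERENCES alongside the other backend preferences.
#include "modelSpec.h"

void modelDefinition(ModelSpec &model)
{
    model.setName("example");
    model.setDT(0.1);

    // Don't generate push/pull functions that would copy nothing; calling a
    // missing function then fails at compile/link time instead of silently
    // performing no copy.
    GENN_PREFERENCES.generateEmptyStatePushPull = false;
}
```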
Otherwise it's hard to digest ... I have had a look through but don't think I can contribute much. We talked on Skype about possibly adding some developer notes; I think that would be helpful, especially covering the merging data structures and procedures.
Totally agree that the default should change but, in general, I'm trying to version semantically and not change things that could break existing models (however, this does mean that, by the end of a major version's life, it's a mess of flags...)
I suppose, in the longer term, one should then introduce a deprecation warning and remove the flag in the version after next ...
First of all, apologies that this has turned into such a beast. I was trying to cherry-pick features out into separate PRs but that turned into a mess, so this PR encompasses a lot of things required to make the multi-area model work:
…``GENN_PREFERENCES`` to switch back to id-based selection. This:
1. Required user input
2. Didn't let you relocate the group indices on really large models (70000 indices don't fit in 64kb)

This is now fully automated - the backend returns a data structure describing the available "memory spaces" and the merged group structures are placed into these in a preferential order (we don't care as much about initialization kernel performance, for example). This is some variant of an NP-hard bin-packing problem so finding the perfect solution is basically impossible 😨
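Purely as an illustration of the kind of placement heuristic described above - the types and names here are hypothetical, not GeNN's actual code - a greedy, priority-ordered first-fit pass over the backend's memory spaces looks roughly like this:

```cpp
// Hypothetical sketch of greedy placement of merged group structures into
// backend-provided "memory spaces" - names are illustrative, not GeNN's API.
#include <algorithm>
#include <cstddef>
#include <optional>
#include <string>
#include <utility>
#include <vector>

struct MemorySpace
{
    std::string name;   // e.g. "__device__ __constant__"
    size_t bytesFree;   // remaining capacity
};

struct MergedStruct
{
    std::string name;   // e.g. "NeuronUpdateGroup0"
    size_t bytes;       // size of the merged struct array
    int priority;       // higher = more performance-critical kernel
};

// Assign each struct to the first space it fits in, visiting structs in
// descending priority so e.g. neuron/synapse update beats initialisation.
std::vector<std::pair<MergedStruct, std::optional<std::string>>> place(
    std::vector<MergedStruct> structs, std::vector<MemorySpace> spaces)
{
    std::sort(structs.begin(), structs.end(),
              [](const auto &a, const auto &b){ return a.priority > b.priority; });

    std::vector<std::pair<MergedStruct, std::optional<std::string>>> placement;
    for(const auto &s : structs) {
        std::optional<std::string> chosen;
        for(auto &space : spaces) {
            if(space.bytesFree >= s.bytes) {
                space.bytesFree -= s.bytes;
                chosen = space.name;
                break;
            }
        }
        placement.emplace_back(s, chosen);  // nullopt => falls back to global memory
    }
    return placement;
}
```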
1. The user has to add loads of simulation code (compare master to this PR)
2. Aside from shoving them in userprojects and adding more SWIG horror, it was hard to provide them to (especially Python) users, and it was a bit rubbish that this code wasn't part of the model class
3. Calling these functions thousands of times from Python was very slow
4. All the boilerplate code for handling extra global parameters made the runner size, and hence the compile time, explode
Weight update models now have the concept of host initialization code, which is used to initialize EGPs on the host. This uses the normal code-generation tricks to expose EGP allocation etc. and kernel merging to avoid code duplication. This seems a bit special-case but maybe some more use cases will appear!
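A rough sketch of what this could look like from the model side. The macro name and the ``$(allocate...)``/``$(push...)`` substitutions are modelled on GeNN's existing conventions for host initialization in connectivity snippets, but their exact form for weight update models is an assumption here, not something this PR spells out.

```cpp
// Hypothetical sketch only: a weight update model whose extra global parameter
// is filled by host initialization code. Macro and substitution names are
// assumptions following GeNN's SET_*_CODE conventions.
#include "modelSpec.h"

class StaticPulseWithLookup : public WeightUpdateModels::Base
{
public:
    DECLARE_WEIGHT_UPDATE_MODEL(StaticPulseWithLookup, 0, 1, 0, 0);

    SET_VARS({{"g", "scalar"}});
    SET_EXTRA_GLOBAL_PARAMS({{"lookup", "scalar*"}});

    // Runs once on the host; EGP allocation and upload are exposed through
    // generated substitutions, so no hand-written runner/simulation code is needed
    SET_HOST_INIT_CODE(
        "$(allocatelookup, 1024);\n"
        "for(unsigned int i = 0; i < 1024; i++) {\n"
        "    $(lookup)[i] = expf(-DT * (scalar)i);\n"
        "}\n"
        "$(pushlookup, 1024);\n");

    SET_SIM_CODE("$(addToInSyn, $(g));\n");
};
IMPLEMENT_MODEL(StaticPulseWithLookup);
```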
…``pushXXXXStateToDevice`` and ``pullXXXXStateToDevice`` functions - have added a flag to not generate these if they're empty.

Beyond these actual features, there is quite a lot of refactoring in order to make the kernel merging code less awful. It was scattered around ``MergedStructGenerator``, in ``generateRunner.cc`` and in ``GroupMerged``, but it is now centralized in a class hierarchy of ``GroupMerged`` classes. This still ends up being quite a lot of code but it's better.
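To make that last point concrete, here is a simplified, hypothetical picture of what a ``GroupMerged`` class hierarchy can look like - class and member names are illustrative rather than GeNN's actual declarations:

```cpp
// Illustrative sketch of a GroupMerged-style hierarchy; names are hypothetical
// and simplified relative to GeNN's real code_generator classes.
#include <cstddef>
#include <functional>
#include <utility>
#include <vector>

template<typename G>
class GroupMerged
{
public:
    GroupMerged(size_t index, std::vector<std::reference_wrapper<const G>> groups)
    :   m_Index(index), m_Groups(std::move(groups))
    {}

    size_t getIndex() const { return m_Index; }

    //! The archetype group is used to generate the code shared by all merged groups
    const G &getArchetype() const { return m_Groups.front().get(); }

    const std::vector<std::reference_wrapper<const G>> &getGroups() const { return m_Groups; }

private:
    size_t m_Index;
    std::vector<std::reference_wrapper<const G>> m_Groups;
};

// Stand-ins for GeNN's internal group classes
class NeuronGroupInternal {};
class SynapseGroupInternal {};

// Each specialisation owns the logic for building its merged struct fields and
// for emitting the corresponding piece of generated code (update, init, ...).
class NeuronUpdateGroupMerged : public GroupMerged<NeuronGroupInternal>
{
public:
    using GroupMerged<NeuronGroupInternal>::GroupMerged;
    // e.g. a generateNeuronUpdate(...) method would live here
};

class SynapseConnectivityHostInitGroupMerged : public GroupMerged<SynapseGroupInternal>
{
public:
    using GroupMerged<SynapseGroupInternal>::GroupMerged;
    // e.g. a generateHostInit(...) method would live here
};
```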