Computing time (for network with synapses) scales unexpectedly with timestep and simulation time #1997
https://github.com/arbor-sim/arbor/blob/master/arbor/simulation.cpp#L304 sets the interval of the epochs according to the minimal delay (which is a constant 3 ms in the example at hand).
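To see why this matters for the run time, here is a back-of-the-envelope sketch in Python (an illustration only, not Arbor code; it assumes the epoch length is roughly the minimal delay, as per the line linked above, and uses made-up dt/t_final values): the number of epochs, and with it the fixed per-epoch overhead, is set by the simulated time and the minimal delay, and does not decrease when dt is made larger than the epoch length.

```python
# Back-of-the-envelope model, not Arbor internals: with the epoch length tied
# to the minimal connection delay, the number of epochs (and thus the fixed
# per-epoch overhead) grows with simulated time and is independent of dt once
# dt exceeds the epoch length.
def epoch_count(t_final_ms, min_delay_ms=3.0):
    epoch_len = min_delay_ms              # simplification: epoch length ~ minimal delay
    return int(t_final_ms / epoch_len)

def steps_per_epoch(dt_ms, min_delay_ms=3.0):
    return max(1, round(min_delay_ms / dt_ms))  # at least one step per epoch

# Hypothetical "short / medium / long" settings, not the values from the example:
for dt, t_final in [(0.01, 1.0e3), (1.0, 1.0e5), (10.0, 1.0e7)]:
    print(f"dt={dt} ms, t_final={t_final:g} ms -> "
          f"{epoch_count(t_final)} epochs, {steps_per_epoch(dt)} step(s) per epoch")
```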
All we need to change in the example supplied by @jlubo is to set the synaptic delay >= the …
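A minimal sketch of what this could look like, assuming the suggestion means keeping the connection delay at least as large as the integration timestep; the gid, the labels "detector" and "syn", and the weight are placeholders rather than the names used in @jlubo's example:

```python
import arbor

# Sketch only (placeholder gid, labels, and weight): choose the connection
# delay no smaller than dt, so that the minimal delay -- and with it the
# epoch interval -- cannot drop below the integration timestep.
dt    = 10.0                 # ms, the "long" timestep in this illustration
delay = max(3.0, dt)         # original 3 ms delay, raised to >= dt
conn  = arbor.connection((0, "detector"), "syn", 0.01, delay)
# `conn` would be one entry of the list returned by the recipe's connections_on(gid).
```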
For the record: Unfortunately, the suggestion by @schmitts doesn't solve the whole issue. We've found that there might be a general issue with long timesteps: 1. Arbor uses interpolation for computing exact spike times even for long time steps; 2. the …
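On point 1, a generic linear threshold-crossing interpolation (shown as an illustration of the idea, not copied from Arbor's implementation) makes clear how sub-dt spike times arise even with long steps:

```python
# Generic linear interpolation of a threshold crossing between two voltage
# samples; illustrates how sub-dt spike times can be produced even for a long
# integration step. Not taken from Arbor's source.
def crossing_time(t0, dt, v0, v1, threshold):
    """Estimated time at which v crosses `threshold` between t0 (value v0)
    and t0 + dt (value v1); None if there is no upward crossing."""
    if v0 < threshold <= v1:
        return t0 + dt * (threshold - v0) / (v1 - v0)
    return None

print(crossing_time(0.0, 10.0, -65.0, -5.0, -10.0))  # ~9.17, i.e. within a 10 ms step
```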
So, has this changed on the current master w/ fixed-dt?
I could solve the (additional) problem mentioned in my last post by using the workaround suggested above by @schmitts and exchanging the mechanism/recipe such that the …

Regarding the general issue: it hasn't changed with the fixed-dt feature, presumably because here the update steps still depend on the synaptic delay by …
Maybe it's not a bug, but it's definitely unexpected: network simulations with long timesteps and long simulation time take much longer than those with short timesteps and short simulation time, even though the network does not produce spikes. I've found this to occur for network models with and without synaptic plasticity. In the trivial case where all synapses are removed, everything seems to run as expected.
The behavior can be reproduced with the code provided here (without plasticity). The file `set_arbor_env` will have to be adapted to the specific Arbor installation. The script `build_and_run_net_test` should run three simulations with short, medium, and long timesteps, respectively, on a single CPU core. On my local machine, I get the following output:

short:
medium:
long:
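For reference, a minimal sketch of how such a wall-clock measurement could be taken around a single run, assuming an already-built `arbor.simulation` object named `sim` and the `run(tfinal, dt)` signature of the 0.6-era Python API; this is not the actual `build_and_run_net_test` script:

```python
import time

def timed_run(sim, t_final, dt):
    # Wall-clock time of one run; `sim` is assumed to be an arbor.simulation
    # built from the network recipe. Build a fresh simulation for each
    # measurement, since run() continues from the current state.
    start = time.perf_counter()
    sim.run(t_final, dt)
    return time.perf_counter() - start

# e.g. print(timed_run(sim, 1.0e5, 1.0), "s")  # hypothetical "medium" settings
```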
I've been using Arbor version 0.6.1-dev, at the state of commit 8af6bd2 (including SDE computation as of commit 5d141aa; I don't know if that makes any difference for the given example).