sdnotes.org

15may27 Modularized framework for NEURON/Python network simulations with MPI

Overview

Using this modularized structure, you can select different cell types, populations, connectivities, etc. just by modifying the parameters in params.py.

An example is provided in params.py that switches from a randomly connected model with a single Hodgkin-Huxley population (mpiHHTut) to an M1 model with multiple Izhikevich populations and yfrac-dependent connectivity.

To test this, just modify the variable simType in params.py: simType = 'M1model' # 'mpiHHTut'
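A minimal sketch of how such a switch could look inside params.py (the population entries and key names here are illustrative, not the actual netpyne parameter set):

```python
# Hypothetical sketch of the simType switch in params.py.
# 'popParams' / 'connType' and their values are illustrative only.
simType = 'M1model'  # 'mpiHHTut'

if simType == 'mpiHHTut':
    popParams = [{'cellModel': 'HH', 'numCells': 100}]  # single HH population
    connType = 'random'                                 # random connectivity
elif simType == 'M1model':
    popParams = [{'cellModel': 'Izhi2007', 'yfracRange': [0.1, 0.3]},
                 {'cellModel': 'Izhi2007', 'yfracRange': [0.3, 0.8]}]
    connType = 'yfrac'                                  # yfrac-dependent connectivity
```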

Bill’s description of what this repo should be

we should try to make this our main organizational code that can spin off multiple projects? – would like a clean code center-org so that not so painful as it has been with our prior code base (which btw was all my fault and not sam’s – he inherited from me) then any pushes back to this main center would have to be agreed by curator (you) and would generally involve just some small fix to a single file

README with description of the different files

List of files:

  • main.py: Main executable; calls functions from other modules.
  • params.py: Contains all static parameters. It is imported as "p" by all other modules, so that params can be referenced from any file using p.paramName
  • shared.py: Contains all the model's shared variables and modules (except params.py). It is imported as "s" by all other files, so that any variable or module can be referenced from any file using s.varName
  • sim.py: Simulation control functions (eg. runSim)
  • network.py: Network related functions (eg. createCells)
  • cell.py: contains cell and population classes
  • conn.py: contains class ‘Conn’ used to instantiate connections, and contains methods to connect cells (eg. random, yfrac-based)
  • analysis.py: functions to plot and analyse data
  • stimuli.py: functions and parameters for different types of neural stimulation
  • izhi2007.mod: NMODL definition of Izhikevich 2007 neuron model
  • evol.py: Functions to run evolutionary algorithms (using inspyred) to optimize model parameters
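The import pattern behind params.py and shared.py can be sketched with an in-memory stand-in module (ncells is just an illustrative parameter, not necessarily one defined in params.py):

```python
import sys, types

# Stand-in for params.py: a plain module object holding static parameters
p = types.ModuleType('params')
p.ncells = 100                    # illustrative parameter
sys.modules['params'] = p         # register it so other modules can import it

# Any other module (sim.py, network.py, ...) can then do:
import params as p2
print(p2.ncells)                  # parameters referenced as p.paramName
```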

Known issues, potential improvements, and alternative implementations (outdated - see TODO list at end of file)

- Saving only happens at the end (instead of at fixed intervals)

- Need to add a DEBUG mode or similar (like mpiHHTut had)

  • currently have a verbose flag, that prints some additional info
  • also have print messages along the code, which could be made optional

- Check structure of data saved to file

  • some nested lists with inhomogeneous dimensionality were converted to ‘object’ data types in order to be saved – check if structure makes sense when reading from python/matlab

- Some attributes/variables of the class ‘Cell’ and ‘Pop’ are not used but still initialized to [ ]

  • maybe can find alternative using different constructors for different cell types
  • fixed by using dictionary of tags - flexible for each type

- Labels (eg. AMPA, IT, HH,…) are defined by assigning numeric values to variables

  • can use dictionaries instead
  • need to check pros and cons of each – are dicts slower or have any limitations compared to lists/arrays ?
  • changed to using dictionaries

- stimuli.py not tested with this model

  • left there cause thought could be useful for future

- analysis.py still has many functions which haven't been adapted to the new structure

- Some variables inherited from cliff still have too long names, eg. backgroundSpikevec

- Lines too long, need to split using \

- if increase p.backgroundNoise > 0.0 get memory error

  • had to store the random noise generator separately for each cell

- Find a way to save lambda funcs (can’t be pickled)

can use:
>>> import inspect
>>> inspect.getsource(myfunc)
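Besides inspect.getsource, a complementary workaround is to store the lambda as a source string (strings pickle fine) and rebuild it with eval() on load; a sketch, with an illustrative connProb rule:

```python
# Workaround sketch: lambdas can't be pickled, but their source string can be.
import pickle

prob_src = 'lambda x, y: 0.1 * x + 0.01 / y'   # stored instead of the lambda itself
saved = pickle.dumps({'connProb': prob_src})    # strings pickle without issue

loaded = pickle.loads(saved)
prob = eval(loaded['connProb'])                  # rebuild the function on load
print(prob(2.0, 1.0))
```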

- Clearing vector in simdata now gives error because also has python variables

for v in s.simdata.itervalues() – fixed the error by separating the vectors

- ‘Cell’ class record method uses eval() which is unsafe

- currently setup to record traces from all cells

- bug when saving cell traces using multiple cores

Gathering spikes...
Traceback (most recent call last):
  File "main.py", line 46, in <module>
    runSeq()
  File "main.py", line 37, in runSeq
    s.sim.gatherData()
  File "sim.py", line 117, in gatherData
    for d in gather: tmp.extend(d[k])
KeyError: 'cellTraces_233'

  • now saved using different structure
  • had to remove hoc objects that are not picklable

- Load/save net+sim parameters from file

  • load net and sim parameters file from mat?
  • have different .py file to generate the params files

within params.py - have option to load from file, or set params and save to 2 files net and sim.mat

  • maybe functions in sim.py? can be called from params?

- Store net and sim params in dictionaries inside params.py

eg. p.net['ncells']

  • facilitates saving to file (prev point)
  • can use if key in dict:
  • use regexp in submlime text to replace in all files:

Find What: p.(\w+) Replace With: p.sim['$1']
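The same substitution can be scripted with Python's re module (the input line here is illustrative):

```python
# re.sub equivalent of the Sublime Text find/replace above
import re

line = "ncells = p.ncells + p.duration"          # illustrative source line
converted = re.sub(r'p\.(\w+)', r"p.sim['\1']", line)
print(converted)  # ncells = p.sim['ncells'] + p.sim['duration']
```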

- Use dict of tags/attributes for pop params and for cells

eg. p.net['popParams'][i]['cellModel']

  • can use if key in dict:

Define conn rules based on pair of tag/value for pre and post

eg. use tuple key: connProbs['name','IT','name','IT'] = (lambda x,y: 0.1*x+0.01/y) or dict of dicts with tuple keys: connProbs['name','IT']['name','IT'] = (lambda x,y: 0.1*x+0.01/y)
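Both variants work directly in Python, since dict keys may be tuples; a small sketch using the rule above:

```python
# Conn rules keyed by (tag, value) pairs for pre and post populations
connProbs = {}
connProbs['name', 'IT', 'name', 'IT'] = lambda x, y: 0.1 * x + 0.01 / y

# or as a dict of dicts with tuple keys:
connProbs2 = {('name', 'IT'): {('name', 'IT'): lambda x, y: 0.1 * x + 0.01 / y}}

# look up the rule for a given pre/post pair and evaluate it
# (x, y could be e.g. pre and post yfrac)
prob = connProbs['name', 'IT', 'name', 'IT'](1.0, 0.1)
print(prob)
```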

- Add $Id$ hg info

  • can do with https://mercurial.selenic.com/wiki/KeywordExtension
  • but not recommended
  • bill wanted to do because cliff and I add last update and author info to files – which is never updated correctly
  • checked other github repos and they don’t have it - just eg. Contributors

- Rename main.py with init.py

Remove popType and connType variables - infer from dict keys

Remove popid from popParams - can use list index

Replace topClass and subClass with projection-type, type (neurotransmitter involved?)

email to Ben

Hey Ben, I'm working on some of the changes we discussed. I've replaced variables with dictionaries of tags/attributes. For now, I've kept the 'population' concept, although we can replace it in a future version if that makes sense.

For both the ‘population’ and ‘cell’ objects you suggested replacing the ‘topClass’ and ‘subClass’ tags with ‘projectionType’ and ‘cellType’ if my notes are correct. I know projType for Exc cells will be ‘IT’, ‘PT’ or ‘CT’, but not sure what would be the best classification for Inh cells? Same thing for cellType, I think you mentioned neurotransmitters involved, but could you elaborate on what would be the list of possible values for both ‘Exc’ and ‘Inh’ cells/pops ?

We can use the google chat or this google doc to bounce ideas back and forth (link points to new section ready to be filled in).

Synapse

  • synapses as list of objects inside each cell (postsynaptic) - netcon in pre is stub; netcon in post is real synapse
  • netcon (neuron object) as part of synapse object

15dec28 Convert into python package

  • PyNet ? NeuPyNE, NeuPyNet, netpyne !! PYthon-based NETwork development framework for the NEuron simulator
  • make shared -> framework
  • from pynet import framework as f
  • just need init.py and param.py file
  • to add cells or conn functions use:

– class newCellClass(PointNeuron): … ; f.newCellClass = newCellClass – class newConnFunc(): … ; f.newConnFunc = newConnFunc

  • default simConfig
  • from pynet import init ; init.createAndRun(netParams, simConfig)

16jan26 Possibly making more simulator-indep using NeuroML-based format

NeuroML format

Issues

  • Would like to keep simple declarative (python dicts based) specifications
  • Want to make netpyne more NEURON-independent, both at specifications, and py instantiated network
  • netpyne makes use of mod files, but neuroml requires detailed specification
  • netpyne provides conversion from abstract specifications -> instantiated network, and support for multicompartment, and subcellular connectivity
  • netpyne provides instantiation of NEURON objects, and parallel simulation of network
  • netpyne can use NeuroML structure but using dictionaries instead of xml text - so easy to manipulate
  • libneuroml and pynn - procedural, not so intuitive, and require internal definition of cell types etc (eg. IAF); whereas netpyne is ‘declarative’, more intuitive?, can define arbitrary cell props based on mechs and syns
  • dictionary of equivalences NEURON<->NeuroML (eg. tau1, rise)

cells

NOTE: segment = sections/segment; segmentGroup = sectionList/section

  • can have segmentGroups that consist of segmentGroups
  • Neuron sections->segmentGroups
  • Neuron 3d points -> segments
  • Neuron nseg -> property tag numberInternalDivisions
  • if have pointProcess in segment -> mapping of position along section and where pointprocess is placed

<include href="nap.channel.nml"/>

<include href="pas.channel.nml"/>

<cell id="SupAxAx">

<notes>Cell: supaxax_0 exported from NEURON ModelView</notes>

<morphology id="morphology_SupAxAx">

<segment id="0" name="Seg0_comp_1"> <proximal x="0.0" y="0.0" z="0.0" diameter="15.0"/> <distal x="0.0" y="10.0" z="0.0" diameter="15.0"/> </segment>

<segment id="1" name="Seg1_comp_1"> <parent segment="0"/> <distal x="-4.371139E-7" y="20.0" z="0.0" diameter="15.0"/> </segment>

<segmentGroup id="prox_dend_soma"> <include segmentGroup="comp_1"/> <include segmentGroup="comp_41"/> <include segmentGroup="comp_28"/> <include segmentGroup="comp_15"/> <include segmentGroup="comp_2"/> </segmentGroup>

<biophysicalProperties id="biophys">

<membraneProperties>

<channelDensity condDensity="0.0 mS_per_cm2" id="ar_ModelViewParmSubset_1" ionChannel="ar__m00_25" segmentGroup="ModelViewParmSubset_1" erev="-40.0 mV" ion="ar"/>

<channelDensity condDensity="0.1 mS_per_cm2" id="cal_ModelViewParmSubset_4" ionChannel="cal" segmentGroup="ModelViewParmSubset_4" ion="ca" erev="125.0 mV"/>

<channelDensity condDensity="0.2 mS_per_cm2" id="cal_ModelViewParmSubset_5" ionChannel="cal" segmentGroup="ModelViewParmSubset_5" ion="ca" erev="125.0 mV"/>

<channelDensity condDensity="0.0 mS_per_cm2" id="cat_ModelViewParmSubset_1" ionChannel="cat" segmentGroup="ModelViewParmSubset_1" ion="cat" erev="125.0 mV"/>

<channelDensity condDensity="0.0 mS_per_cm2" id="k2_all" ionChannel="k2" ion="k" erev="-100.0 mV"/>

<spikeThresh value="0.0 mV"/>

<specificCapacitance value="1.0 uF_per_cm2"/>

<initMembPotential value="-65.0 mV"/>

connections

<projection id="LTS_AxAx_sm" presynapticPopulation="CG_C04_LTS_sm" postsynapticPopulation="CG_C04_AxAx_sm" synapse="Inh_LTS_FS">
  <connection id="0" preCellId="../CG_C04_LTS_sm/5/SupLTSInter" postCellId="../CG_C04_AxAx_sm/0/SupAxAx" preSegmentId="112" preFractionAlong="0.26785243" postSegmentId="82" postFractionAlong="0.36041966"/>
  <connection id="1" preCellId="../CG_C04_LTS_sm/3/SupLTSInter" postCellId="../CG_C04_AxAx_sm/0/SupAxAx" preSegmentId="116" preFractionAlong="0.6782849" postSegmentId="91" postFractionAlong="0.99472666"/>
  <connection id="2" preCellId="../CG_C04_LTS_sm/6/SupLTSInter" postCellId="../CG_C04_AxAx_sm/0/SupAxAx" preSegmentId="112" preFractionAlong="0.42696908" postSegmentId="84" postFractionAlong="0.46009916"/>
  <connection id="3" preCellId="../CG_C04_LTS_sm/0/SupLTSInter" postCellId="../CG_C04_AxAx_sm/0/SupAxAx" preSegmentId="113" preFractionAlong="0.92599744" postSegmentId="83" postFractionAlong="0.60695267"/>
  <connection id="4" preCellId="../CG_C04_LTS_sm/9/SupLTSInter" postCellId="../CG_C04_AxAx_sm/0/SupAxAx" preSegmentId="113" preFractionAlong="0.5823235" postSegmentId="39" postFractionAlong="0.646692"/>
  <connection id="5" preCellId="../CG_C04_LTS_sm/6/SupLTSInter" postCellId="../CG_C04_AxAx_sm/0/SupAxAx" preSegmentId="113" preFractionAlong="0.16917303" postSegmentId="57" postFractionAlong="0.6089958"/>

synapses

<?xml version="1.0" encoding="ISO-8859-1"?>
<neuroml xmlns="http://www.neuroml.org/schema/neuroml2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.neuroml.org/schema/neuroml2 https://raw.github.com/NeuroML/NeuroML2/development/Schemas/NeuroML2/NeuroML_v2beta4.xsd" id="Syn_AMPA_L4SS_IN">

<notes>ChannelML file describing a single synaptic mechanism</notes>

<alphaSynapse id="Syn_AMPA_L4SS_IN" tau="0.8ms" gbase="2.94303552937e-07mS" erev="0.0mV">

<notes>Synapse with syn scaling constant c = 1 nS (translating to max cond of 2.94304e-07 mS), time course: 0.8 ms and reversal potential: 0 mV. Automatically generated by command: genSyn.py Syn_AMPA_L4SS_IN 0.8 1 0 </notes> </alphaSynapse>

</neuroml>

population

<population id="CG_C04_FRB_sm" component="L23PyrFRB_varInit" type="populationList" size="6">
  <annotation> <property tag="color" value="0.0 0.0 0.0"/> </annotation>
  <instance id="0"> <location x="-739.27563" y="-798.43396" z="-440.9723"/> </instance>
  <instance id="1"> <location x="975.8342" y="-870.85077" z="1090.1094"/> </instance>
  <instance id="2"> <location x="-1012.162" y="-785.71173" z="1053.8684"/> </instance>
  <instance id="3"> <location x="-1391.423" y="-776.4188" z="-32.483643"/> </instance>
  <instance id="4"> <location x="149.5564" y="-790.7691" z="433.74915"/> </instance>
  <instance id="5"> <location x="585.53955" y="-765.37933" z="819.1377"/> </instance>
</population>

pulse generator (netstim?)

<pulseGenerator id="DepCurr_L23FRB" delay="0.0s" duration="20.0s" amplitude="3.0E-10A"/>

<pulseGenerator id="DepCurr_L23FRB__0" delay="0.0s" duration="20.0s" amplitude="3.37936E-10A"/>

chat with Bill

summary of 3-hr meeting with Ben: 1) he really likes netpyne, 2) discussed how to add subcellular syn info (3 main methods: sectionlists, yfrac-based, path distance-based), 3) provided his unpublished code to convert sCRACM data to synapse density maps and said we can use in netpyne, 4) suggested making netpyne more NEURON/simulator-independent (similar to Kim Blackwell)

on this last point (simulator-independence), I looked at the NeuroML2 format, and some of the tools Padraig has built to interface it with Python and Neuron (eg. neuroconstruct, pyNeuroML, libNeuroML), and read Robert's reproducibility paper

I think it might be possible to make netpyne simulator-independent at the specs and instantiated network, using a NeuroML-like internal format (but using python dicts instead of xml so can manipulate data) – and then keep final stage where its converted to Neuron simulatable objects

sent padraig an email to discuss the technical details of what they’ve already implemented and how netpyne could fit, and be useful to fill in missing tools in this whole story

an alternative is to keep it as a tool specific for Neuron, with its internal Neuron-based format for specs and network, and the possibility of exporting/importing from NeuroML and others (ie. its current status)

chat with Padraig

  • good to separate M1 model from netpyne
  • similarities with pnn and libneuroml
  • important of neuroml is conceptual structure
  • libneuroml has cell types
  • mapping from pyneuroml to mod files, but not the other way around
  • standardization of abstract conn rules is above pynn and neuroml/pyneuroml
  • eg 10 conn algorithms -> pynn -> neuroml
  • morphology - segments, segment groups
  • mod files would need to be translated into neuroml (not fully translatable); alternative would be to point to the neuroml definition, and convert via pyneuroml to Neuron mod
  • setup.py to netpyne
  • dot.travis.yml continuous integration server - run tests every time there is a commit to the repository - make sure it's consistent with latest version
  • convert from Neuron model to NeuroML:

– export specifications – OSB converting to NeuroML2 cell morphology (pyneuroml.convert_to_neuroml2) - in future if already in neuroml don't need to define in Neuron/netpyne format – Izhikevich (params) - abstract definition – export to NeuroML2 positions and connectivity - no current tool to convert – https://github.com/OpenSourceBrain/Cerebellum3DDemo – http://www.opensourcebrain.org/docs#Converting_To_NeuroML2

  • convert from NeuroML to Neuron:

– java script to convert neuroml to mod and Neuron/py files - wait for better iteration – adapt java code to convert directly to netpyne – https://github.com/NeuroML/org.neuroml.export/blob/development/src/main/java/org/neuroml/export/neuron/NeuronWriter.java – https://github.com/NeuroML/org.neuroml.export/blob/development/src/main/java/org/neuroml/export/pynn/PyNNWriter.java – check that the hhtut and izhitut param files are updated so that Padraig can modify them – add a load-cell function in netpyne in neuroml format

  • differences:
  1. declarative
  2. netpyne makes use of mod files, but neuroml requires detailed specification
  3. netpyne provides conversion from abstract specifications -> instantiated network, and support for multicompartment, and subcellular connectivity
  4. netpyne provides instantiation of NEURON objects, and parallel simulation of network
  • options:

– long-term: use Neuroml internally for instantiated network – short-term: Neuron-based and conversion to NeuroML;

16jan27 Consolidate connectivity functions

Full conn -> fullConn

  • =probConn with prob = 1?

combine convConn, divConn, and randConn?

  • probFunc = optional param – if not present, then:
  • select function depending on which param is included? eg. ‘convergence’, ‘divergence’, ‘probability’

Random conn with N conns per postsyn cell (convergence) -> convConn

~= conv conn

  • conv param allowed as function eg. uniform, gauss
  • also allows for weight+delay as func

Random conn with N conns per presyn cell (divergence) -> divConn

Random conn with fixed, function (including distance-dep) prob, weight, delay -> probConn

fixed

number eg. 5

func

  • uniform

func dist-dep

  • function of yfrac
  • function of 3d distance
  • generalizable to func of

– (pre, post, dist) + '_' + (xfrac, yfrac, zfrac, x, y, z) = 18 variables – (dist) + '_' + (xznorm, xyznorm, xz, xyz) = 4 variables – total: 22 variables

  • create dictionary with each variable associated to its value/variable/function – don’t evaluate to save
  • enter as string, and convert to function
  • other variables allowed: any that are defined in f.net.params[var] – arbitrary

implementation

  • use **kwargs
  • string based - converted to function, extract variables
  • arguments weight and delay valid for all conn types; and can be functional

Options to implement func:

lambda with kwargs - NO

prob = lambda **d: d['pre_ynorm']*0.5

lambda with args - NO!

prob = lambda *d: d[3]*0.5

string

prob = 'pre_ynorm*0.5' prob = 'post_xyznorm*0.5' prob = 'uniform()' prob = 'gauss(10,5)' prob = 'uniform(1,5)'

  • functions allow all of python random functions
  • internal implementation:

– identify variables (pre, post or dist; x, y, z; 2d or 3d; plus any arbitrary ones defined in netParams) – define dict mapping each variable name to a lambda function returning the actual value, using preCell and postCell as args – convert string to lambda function with the variables as arguments – call the lambda function with args given as a dict where each entry maps to the lambda func that returns the variable
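A rough sketch of that pipeline (make_func and allowed_vars are hypothetical helper names, not netpyne's actual code):

```python
# Hypothetical sketch: turn a string rule like 'pre_ynorm*0.5' into a callable,
# after extracting which known variables it uses.
import re

allowed_vars = ['pre_ynorm', 'post_ynorm', 'dist_xyznorm']  # illustrative subset

def make_func(rule):
    # identify which allowed variables appear in the rule string
    vars_used = [v for v in allowed_vars if re.search(r'\b%s\b' % v, rule)]
    # build a lambda taking exactly those variables as arguments
    return vars_used, eval('lambda %s: %s' % (', '.join(vars_used), rule))

vars_used, prob_func = make_func('pre_ynorm*0.5')
# at connection time, map each variable name to its value for this cell pair
values = {'pre_ynorm': 0.8}
print(prob_func(**{v: values[v] for v in vars_used}))  # 0.4
```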

efficiency

  • calculate all delay, prob, weight funcs together at the beginning
  • calculate weights+delays for all possible conns even though some will not be used?
  • alternative might be more comput costly (calling 1 by 1), and repeated code
  • use map(lambda, list)
  • calculate final list of weights,probs,delays and pass to specific conn func

errors

  • arguments by reference using lambda func only works for ‘delay’ if the ‘convergence’ func loops over preCellsTags and postCells – even though they have nothing to do with each other!! what's going on? python bug? am I missing something?!

comparison to PyNN

class AllToAllConnector(allow_self_connections=True, safe=True, callback=None)[source]

=fullConn

class OneToOneConnector(safe=True, callback=None)[source]

=conn rule specifying cell ids

class FixedProbabilityConnector(p_connect, allow_self_connections=True, rng=None, safe=True, callback=None)[source]

=probConn

class FromListConnector(conn_list, column_names=None, safe=True, callback=None)[source]

=conn rule specifying cell ids

class FromFileConnector(file, distributed=False, safe=True, callback=None)[source]

=conn rule specifying cell ids

class ArrayConnector(array, safe=True, callback=None)[source]

=conn rule specifying cell ids

class FixedNumberPreConnector(n, allow_self_connections=True, with_replacement=False, rng=None, safe=True, callback=None)

=divConn

class DistanceDependentProbabilityConnector(d_expression, allow_self_connections=True, rng=None, safe=True, callback=None)[source]

= probConn with dist-dep func

class IndexBasedProbabilityConnector(index_expression, allow_self_connections=True, rng=None, safe=True, callback=None)[source]

= conn rule specifying cell ids

class SmallWorldConnector(degree, rewiring, allow_self_connections=True, n_connections=None, rng=None, safe=True, callback=None)[source]

not implemented

class CSAConnector(cset, safe=True, callback=None)[source]

not implemented

comparison to NEST

ConvergentConnect

Connect many source nodes to one target node. =convConn

DivergentConnect

Connect one source node to many target nodes. = divConn

RandomConvergentConnect

Randomly connect many source nodes to one target node. = convConn with uniform func

RandomDivergentConnect

Randomly connect one source node to many target nodes. = divConn with uniform func

BinomialConvergentConnect

Connect a target to a binomial number of sources. ??

16feb18 Changing folder structure and adding setup.py

file structure

  • root

– examples/: param and run files – netpyne/: actual package – doc/: documentation – setup.py: required for pip install – other root files

OK how to add $PYTHONPATH in setup.py?

OK Make sure all examples and tuts work

  • examples - ok
  • tutorial examples - ok
  • claus - ok
  • M1Network -ok

OK Update documentation

OK Update README

16feb25 Adding STDP and RL

old stdp code

if s.usestdp and ([s.cellpops[pregid],s.cellpops[pstgid]] in s.plastConns): # If using STDP and these pops are set to be plastic connections
    if sum(abs(s.stdprates[s.EorI[pregid],:]))>0 or sum(abs(s.RLrates[s.EorI[pregid],:]))>0: # Don't create an STDP connection if the learning rates are zero
        for r in range(s.nreceptors): # Need a different STDP instance for each receptor
            if newcon.weight[r]>0: # Only make them for nonzero connections
                stdpmech = h.STDP(0,sec=s.dummies[pstid]) # Create STDP adjuster
                stdpmech.hebbwt = s.stdprates[s.EorI[pregid],0] # Potentiation rate
                stdpmech.antiwt = s.stdprates[s.EorI[pregid],1] # Depression rate
                stdpmech.wmax = s.maxweight # Maximum synaptic weight
                precon = s.pc.gid_connect(pregid,stdpmech); precon.weight[0] = 1 # Send presynaptic spikes to the STDP adjuster
                pstcon = s.pc.gid_connect(pstgid,stdpmech); pstcon.weight[0] = -1 # Send postsynaptic spikes to the STDP adjuster
                h.setpointer(s.connlist[-1]._ref_weight[r],'synweight',stdpmech) # Associate the STDP adjuster with this weight
                s.stdpmechs.append(stdpmech) # Save STDP adjuster
                s.precons.append(precon) # Save presynaptic spike source
                s.pstcons.append(pstcon) # Save postsynaptic spike source
                s.stdpconndata.append([pregid,pstgid,r]) # Store presynaptic cell ID, postsynaptic, and receptor
                if s.useRL: # using RL
                    stdpmech.RLon = 1 # make sure RL is on
                    stdpmech.RLhebbwt = s.RLrates[s.EorI[pregid],0] # Potentiation rate
                    stdpmech.RLantiwt = s.RLrates[s.EorI[pregid],1] # Depression rate
                    stdpmech.tauhebb = stdpmech.tauanti = s.stdpwin # stdp time constant (ms)
                    stdpmech.RLwindhebb = stdpmech.RLwindanti = s.eligwin # RL eligibility trace window length (ms)
                    stdpmech.useRLexp = s.useRLexp # RL
                    stdpmech.softthresh = s.useRLsoft # RL soft-thresholding
                else:
                    stdpmech.RLon = 0 # make sure RL is off

how to update at intervals

why needed?

  • periodic saves (eg. of weights)
  • update virtual arm apparatus
  • run RL on syns based on critic signal
  • calculate LFP

option 1: run sim for short periods of time (not efficient?)

while round(h.t) < s.duration:
    s.pc.psolve(min(s.duration, h.t+s.loopstep))
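The control flow of option 1 can be sketched simulator-free (psolve here is a stand-in for s.pc.psolve, and the numbers are illustrative):

```python
# Generic sketch: advance the simulation in short intervals so periodic tasks
# (saving weights, RL updates on a critic signal, LFP calculation) run in between.
duration, loopstep, t = 100.0, 10.0, 0.0
events = []

def psolve(tstop):
    """Stand-in for s.pc.psolve: advance simulated time to tstop."""
    global t
    t = tstop

while round(t) < duration:
    psolve(min(duration, t + loopstep))
    events.append(t)  # periodic task would run here

print(len(events))  # 10 intervals of 10 ms over a 100 ms run
```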

option 2: finitialize handler?

netpyne format

  • within conn rule
  • One set of STDP params for each conn rule:

STDPparams = {'hebbwt': …, 'antiwt': …, 'wmax': …, 'RLon': …, 'RLhebbwt': …, 'RLantiwt': …, 'tauhebb': …, 'RLwindhebb': …, 'useRLexp': …, 'softthresh': …}

'plasticity': {'mech': 'STDP', 'params': STDPparams}

  • Single set of STDP params for all:

'plasticity': 'STDP'

revisiting STDP mod implementation

stdp.m1ms

COMMENT

STDP + RL weight adjuster mechanism

Original STDP code adapted from: http://senselab.med.yale.edu/modeldb/showmodel.asp?model=64261&file=\bfstdp\stdwa_songabbott.mod

Adapted to implement a “nearest-neighbor spike-interaction” model (see Scholarpedia article on STDP) that just looks at the last-seen pre- and post-synaptic spikes, and implementing a reinforcement learning algorithm based on (Chadderdon et al., 2012): http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0047251

Example Python usage:

from neuron import h

## Create cells
dummy = h.Section() # Create a dummy section to put the point processes in
ncells = 2
cells = []
for c in range(ncells):
    cells.append(h.IntFire4(0,sec=dummy)) # Create the cells

## Create synapses
threshold = 10 # Set voltage threshold
delay = 1 # Set connection delay
singlesyn = h.NetCon(cells[0],cells[1], threshold, delay, 0.5) # Create a connection between the cells
stdpmech = h.STDP(0,sec=dummy) # Create the STDP mechanism
presyn = h.NetCon(cells[0],stdpmech, threshold, delay, 1) # Feed presynaptic spikes to the STDP mechanism – must have weight >0
pstsyn = h.NetCon(cells[1],stdpmech, threshold, delay, -1) # Feed postsynaptic spikes to the STDP mechanism – must have weight <0
h.setpointer(singlesyn._ref_weight[0],'synweight',stdpmech) # Point the STDP mechanism to the connection weight

Version: 2013oct24 by cliffk

ENDCOMMENT

NEURON {
    POINT_PROCESS STDP : Definition of mechanism
    POINTER synweight : Pointer to the weight (in a NetCon object) to be adjusted.
    RANGE tauhebb, tauanti : LTP/LTD decay time constants (in ms) for the Hebbian (pre-before-post-synaptic spikes), and anti-Hebbian (post-before-pre-synaptic) cases.
    RANGE hebbwt, antiwt : Maximal adjustment (can be positive or negative) for Hebbian and anti-Hebbian cases (i.e., as inter-spike interval approaches zero). This should be set positive for LTP and negative for LTD.
    RANGE RLwindhebb, RLwindanti : Maximum interval between pre- and post-synaptic events for starting an eligibility trace. There are separate ones for the Hebbian and anti-Hebbian events.
    RANGE useRLexp : Use exponentially decaying eligibility traces? If 0, then the eligibility traces are binary, turning on at the beginning and completely off after time has passed corresponding to RLlen.
    RANGE RLlenhebb, RLlenanti : Length of the Hebbian and anti-Hebbian eligibility traces, or the decay time constants if the traces are decaying exponentials.
    RANGE RLhebbwt, RLantiwt : Maximum gains to be applied to the reward or punishing signal by Hebbian and anti-Hebbian eligibility traces.
    RANGE wmax : The maximum weight for the synapse.
    RANGE softthresh : Flag turning on "soft thresholding" for the maximal adjustment parameters.
    RANGE STDPon : Flag for turning STDP adjustment on / off.
    RANGE RLon : Flag for turning RL adjustment on / off.
    RANGE verbose : Flag for turning off prints of weight update events for debugging.
    RANGE tlastpre, tlastpost : Remembered times for last pre- and post-synaptic spikes.
    RANGE tlasthebbelig, tlastantielig : Remembered times for Hebbian and anti-Hebbian eligibility traces.
    RANGE interval : Interval between current time t and previous spike.
    RANGE deltaw : The calculated weight change.
    RANGE newweight : New calculated weight.
}

ASSIGNED {
    synweight
    tlastpre (ms)
    tlastpost (ms)
    tlasthebbelig (ms)
    tlastantielig (ms)
    interval (ms)
    deltaw
    newweight
}

INITIAL {
    tlastpre = -1 : no spike yet
    tlastpost = -1 : no spike yet
    tlasthebbelig = -1 : no eligibility yet
    tlastantielig = -1 : no eligibility yet
    interval = 0
    deltaw = 0
    newweight = 0
}

PARAMETER {
    tauhebb = 10 (ms)
    tauanti = 10 (ms)
    hebbwt = 1.0
    antiwt = -1.0
    RLwindhebb = 10 (ms)
    RLwindanti = 10 (ms)
    useRLexp = 0 : default to using binary eligibility traces
    RLlenhebb = 100 (ms)
    RLlenanti = 100 (ms)
    RLhebbwt = 1.0
    RLantiwt = -1.0
    wmax = 15.0
    softthresh = 0
    STDPon = 1
    RLon = 1
    verbose = 1
}

NET_RECEIVE (w) {
    deltaw = 0.0 : Default the weight change to 0.

    if (verbose > 0)  { printf("flag %f\n",flag) }

    : Hebbian weight update happens 1ms later to check for simultaneous spikes (otherwise bug when using mpi)
    if ((flag == -1) && (tlastpre != t-1)) {
        w = 0 : should be set to 0 for anti-hebb to work, but then there is weird error
        deltaw = hebbwt * exp(-interval / tauhebb) : Use the Hebbian decay to set the Hebbian weight adjustment.
        if (softthresh == 1) { deltaw = softthreshold(deltaw) } : If we have soft-thresholding on, apply it.
        if (verbose > 0) { printf("Hebbian STDP event: t = %f ms; tlastpre = %f; w = %f; deltaw = %f\n",t,tlastpre,w,deltaw) } : Show weight update information if debugging on.
    }
    : Anti-Hebbian weight update happens 1ms later to check for simultaneous spikes (otherwise bug when using mpi)
    else if ((flag == 1) && (tlastpost != t-1)) {
        w = 0 : should be set to 0 for anti-hebb to work, but then there is weird error
        deltaw = antiwt * exp(interval / tauanti) : Use the anti-Hebbian decay to set the anti-Hebbian weight adjustment.
        if (softthresh == 1) { deltaw = softthreshold(deltaw) } : If we have soft-thresholding on, apply it.
        if (verbose > 0) { printf("anti-Hebbian STDP event: t = %f ms; deltaw = %f\n",t,deltaw) } : Show weight update information if debugging on.
    }

    : If we receive a non-negative weight value, we are receiving a pre-synaptic spike (and thus need to check
    : for an anti-Hebbian event, since the post-synaptic spike must be earlier).
    if (w >= 0) {
        interval = tlastpost - t : Get the interval; interval is negative
        if ((tlastpost > -1) && (-interval > 1.0)) { : If we had a post-synaptic spike and a non-zero interval...
            if (STDPon == 1) { : If STDP learning is turned on...
                if (verbose > 0) { printf("net_send(1,1)\n") }
                net_send(1,1) : instead of updating weight directly, use net_send to check if simultaneous spike occurred (otherwise bug when using mpi)
            }
            if ((RLon == 1) && (-interval <= RLwindanti)) { tlastantielig = t } : If RL and anti-Hebbian eligibility traces are turned on, and the interval falls within the maximum window for eligibility, remember the eligibility trace start at the current time.
        }
        tlastpre = t : Remember the current spike time for next NET_RECEIVE.
    : Else, if we receive a negative weight value, we are receiving a post-synaptic spike (and thus need to check
    : for a Hebbian event, since the pre-synaptic spike must be earlier).
    } else {
        interval = t - tlastpre : Get the interval; interval is positive
        if ((tlastpre > -1) && (interval > 1.0)) { : If we had a pre-synaptic spike and a non-zero interval...
            if (STDPon == 1) { : If STDP learning is turned on...
                if (verbose > 0) { printf("net_send(1,-1)\n") }
                net_send(1,-1) : instead of updating weight directly, use net_send to check if simultaneous spike occurred (otherwise bug when using mpi)
            }
            if ((RLon == 1) && (interval <= RLwindhebb)) { tlasthebbelig = t } : If RL and Hebbian eligibility traces are turned on, and the interval falls within the maximum window for eligibility, remember the eligibility trace start at the current time.
        }
        tlastpost = t : Remember the current spike time for next NET_RECEIVE.
    }
    adjustweight(deltaw) : Adjust the weight.
}

PROCEDURE reward_punish(reinf) {
    if (RLon == 1) { : If RL is turned on...
        deltaw = 0.0 : Start the weight change as being 0.
        deltaw = deltaw + reinf * hebbRL() : If we have the Hebbian eligibility traces on, add their effect in.
        deltaw = deltaw + reinf * antiRL() : If we have the anti-Hebbian eligibility traces on, add their effect in.
        if (softthresh == 1) { deltaw = softthreshold(deltaw) } : If we have soft-thresholding on, apply it.
        adjustweight(deltaw) : Adjust the weight.
        if (verbose > 0) { printf("RL event: t = %f ms; reinf = %f; RLhebbwt = %f; RLlenhebb = %f; tlasthebbelig = %f; deltaw = %f\n", t, reinf, RLhebbwt, RLlenhebb, tlasthebbelig, deltaw) } : Show weight update information if debugging on.
    }
}

FUNCTION hebbRL() {
    if ((RLon == 0) || (tlasthebbelig < 0.0)) { hebbRL = 0.0 } : If RL is turned off or eligibility has not occurred yet, return 0.0.
    else if (useRLexp == 0) { : If we are using binary (i.e. square-wave) eligibility traces...
        if (t - tlasthebbelig <= RLlenhebb) { hebbRL = RLhebbwt } : If we are within the length of the eligibility trace...
        else { hebbRL = 0.0 } : Otherwise (outside the length), return 0.0.
    }
    else { hebbRL = RLhebbwt * exp((tlasthebbelig - t) / RLlenhebb) } : Otherwise (if we are using exponentially decaying traces), use the Hebbian decay to calculate the gain.
}

FUNCTION antiRL() {
    if ((RLon == 0) || (tlastantielig < 0.0)) { antiRL = 0.0 } : If RL is turned off or eligibility has not occurred yet, return 0.0.
    else if (useRLexp == 0) { : If we are using binary (i.e. square-wave) eligibility traces...
        if (t - tlastantielig <= RLlenanti) { antiRL = RLantiwt } : If we are within the length of the eligibility trace...
        else { antiRL = 0.0 } : Otherwise (outside the length), return 0.0.
    }
    else { antiRL = RLantiwt * exp((tlastantielig - t) / RLlenanti) } : Otherwise (if we are using exponentially decaying traces), use the anti-Hebbian decay to calculate the gain.
}

FUNCTION softthreshold(rawwc) {
    if (rawwc >= 0) { softthreshold = rawwc * (1.0 - synweight / wmax) } : If the weight change is non-negative, scale by 1 - weight / wmax.
    else { softthreshold = rawwc * synweight / wmax } : Otherwise (the weight change is negative), scale by weight / wmax.
}

PROCEDURE adjustweight(wc) {
    synweight = synweight + wc : apply the synaptic modification, then clip the weight if necessary to keep it between 0 and wmax.
    if (synweight > wmax) { synweight = wmax }
    if (synweight < 0) { synweight = 0 }
}
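The soft-threshold scaling and weight clipping above can be sketched in plain Python (a rough sketch of the same logic, not the mod file itself; the wmax value mirrors the PARAMETER block default):

```python
WMAX = 15.0  # maximum synaptic weight (wmax in the mod file)

def softthreshold(raw_dw, w):
    """Scale a raw weight change so updates shrink near the weight bounds."""
    if raw_dw >= 0:
        return raw_dw * (1.0 - w / WMAX)  # potentiation shrinks as w approaches wmax
    return raw_dw * (w / WMAX)            # depression shrinks as w approaches 0

def adjustweight(w, dw):
    """Apply a weight change and clip the result to [0, wmax]."""
    return min(max(w + dw, 0.0), WMAX)
```

Note the asymmetry: at w = wmax a potentiating update has no effect, and at w = 0 a depressing update has no effect, which is exactly why soft thresholding weakened the -1 critic near the weight floor (see the fix notes below the m1ms section).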

stdp.cliff

COMMENT

STDP + RL weight adjuster mechanism

Original STDP code adapted from: http://senselab.med.yale.edu/modeldb/showmodel.asp?model=64261&file=\bfstdp\stdwa_songabbott.mod

Adapted to implement a “nearest-neighbor spike-interaction” model (see Scholarpedia article on STDP) that just looks at the last-seen pre- and post-synaptic spikes, and implementing a reinforcement learning algorithm based on (Chadderdon et al., 2012): http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0047251

Example Python usage:

from neuron import h

## Create cells
dummy = h.Section()                         # Create a dummy section to put the point processes in
ncells = 2
cells = []
for c in range(ncells):
    cells.append(h.IntFire4(0, sec=dummy))  # Create the cells

## Create synapses
threshold = 10  # Set voltage threshold
delay = 1       # Set connection delay
singlesyn = h.NetCon(cells[0], cells[1], threshold, delay, 0.5)  # Create a connection between the cells
stdpmech = h.STDP(0, sec=dummy)                                  # Create the STDP mechanism
presyn = h.NetCon(cells[0], stdpmech, threshold, delay, 1)       # Feed presynaptic spikes to the STDP mechanism - must have weight > 0
pstsyn = h.NetCon(cells[1], stdpmech, threshold, delay, -1)      # Feed postsynaptic spikes to the STDP mechanism - must have weight < 0
h.setpointer(singlesyn._ref_weight[0], 'synweight', stdpmech)    # Point the STDP mechanism to the connection weight

Version: 2013oct24 by cliffk

ENDCOMMENT

NEURON {
    POINT_PROCESS STDP : Definition of mechanism
    POINTER synweight : Pointer to the weight (in a NetCon object) to be adjusted.
    RANGE tauhebb, tauanti : LTP/LTD decay time constants (in ms) for the Hebbian (pre-before-post-synaptic spikes) and anti-Hebbian (post-before-pre-synaptic) cases.
    RANGE hebbwt, antiwt : Maximal adjustment (can be positive or negative) for Hebbian and anti-Hebbian cases (i.e., as the inter-spike interval approaches zero). Should be set positive for LTP and negative for LTD.
    RANGE RLwindhebb, RLwindanti : Maximum interval between pre- and post-synaptic events for starting an eligibility trace. There are separate windows for the Hebbian and anti-Hebbian events.
    RANGE useRLexp : Use exponentially decaying eligibility traces? If 0, the eligibility traces are binary, turning on at the beginning and switching completely off after a time corresponding to RLlen.
    RANGE RLlenhebb, RLlenanti : Length of the Hebbian and anti-Hebbian eligibility traces, or the decay time constants if the traces are decaying exponentials.
    RANGE RLhebbwt, RLantiwt : Maximum gains applied to the reward or punishment signal by the Hebbian and anti-Hebbian eligibility traces.
    RANGE wmax : The maximum weight for the synapse.
    RANGE softthresh : Flag turning on "soft thresholding" for the maximal adjustment parameters.
    RANGE STDPon : Flag for turning STDP adjustment on/off.
    RANGE RLon : Flag for turning RL adjustment on/off.
    RANGE verbose : Flag for turning on prints of weight update events for debugging.
    RANGE tlastpre, tlastpost : Remembered times of the last pre- and post-synaptic spikes.
    RANGE tlasthebbelig, tlastantielig : Remembered start times of the Hebbian and anti-Hebbian eligibility traces.
    RANGE interval : Interval between current time t and previous spike.
    RANGE deltaw : The calculated weight change.
    RANGE newweight : New calculated weight.
}

ASSIGNED {
    synweight
    tlastpre (ms)
    tlastpost (ms)
    tlasthebbelig (ms)
    tlastantielig (ms)
    interval (ms)
    deltaw
    newweight
}

INITIAL {
    tlastpre = -1 : no spike yet
    tlastpost = -1 : no spike yet
    tlasthebbelig = -1 : no eligibility yet
    tlastantielig = -1 : no eligibility yet
    interval = 0
    deltaw = 0
    newweight = 0
}

PARAMETER {
    tauhebb = 10 (ms)
    tauanti = 10 (ms)
    hebbwt = 1.0
    antiwt = -1.0
    RLwindhebb = 10 (ms)
    RLwindanti = 10 (ms)
    useRLexp = 0 : default to using binary eligibility traces
    RLlenhebb = 100 (ms)
    RLlenanti = 100 (ms)
    RLhebbwt = 1.0
    RLantiwt = -1.0
    wmax = 15.0
    softthresh = 0
    STDPon = 1
    RLon = 1
    verbose = 0
}

NET_RECEIVE (w) {
    deltaw = 0.0 : Default the weight change to 0.

Hebbian weight update happens 1ms later to check for simultaneous spikes (otherwise bug when using mpi)

if ((flag == -1) && (tlastpre != t-1)) {
    w = -2
    deltaw = hebbwt * exp(-interval / tauhebb) : Use the Hebbian decay to set the Hebbian weight adjustment.
    if (softthresh == 1) { deltaw = softthreshold(deltaw) } : If we have soft-thresholding on, apply it.
    if (verbose > 0) { printf("Hebbian STDP event: t = %f ms; deltaw = %f\n", t, deltaw) } : Show weight update information if debugging on.
}

Anti-Hebbian weight update happens 1ms later to check for simultaneous spikes (otherwise bug when using mpi)

else if ((flag == 1) && (tlastpost != t-1)) { : update weight 1ms later to check for simultaneous spikes (otherwise bug when using mpi)
    w = -2
    deltaw = antiwt * exp(interval / tauanti) : Use the anti-Hebbian decay to set the anti-Hebbian weight adjustment.
    if (softthresh == 1) { deltaw = softthreshold(deltaw) } : If we have soft-thresholding on, apply it.
    if (verbose > 0) { printf("anti-Hebbian STDP event: t = %f ms; deltaw = %f\n", t, deltaw) } : Show weight update information if debugging on.
}

If we receive a non-negative weight value, we are receiving a pre-synaptic spike (and thus need to check for an anti-Hebbian event, since the post-synaptic spike must have come earlier).

if (w >= 0) {
    interval = tlastpost - t : Get the interval; interval is negative
    if ((tlastpost > -1) && (-interval > 1.0)) { : If we had a post-synaptic spike and a non-zero interval...
        if (STDPon == 1) { : If STDP learning is turned on...
            net_send(1, 1) : instead of updating weight directly, use net_send to check if a simultaneous spike occurred (otherwise bug when using mpi)
        }
        if ((RLon == 1) && (-interval <= RLwindanti)) { tlastantielig = t } : If RL and anti-Hebbian eligibility traces are turned on, and the interval falls within the maximum window for eligibility, remember the eligibility trace start at the current time.
    }
    tlastpre = t : Remember the current spike time for next NET_RECEIVE.

Else, if we receive a negative weight value, we are receiving a post-synaptic spike (and thus need to check for a Hebbian event, since the pre-synaptic spike must have come earlier).

} else {
    interval = t - tlastpre : Get the interval; interval is positive
    if ((tlastpre > -1) && (interval > 1.0)) { : If we had a pre-synaptic spike and a non-zero interval...
        if (STDPon == 1) { : If STDP learning is turned on...
            net_send(1, -1) : instead of updating weight directly, use net_send to check if a simultaneous spike occurred (otherwise bug when using mpi)
        }
        if ((RLon == 1) && (interval <= RLwindhebb)) { tlasthebbelig = t } : If RL and Hebbian eligibility traces are turned on, and the interval falls within the maximum window for eligibility, remember the eligibility trace start at the current time.
    }
    tlastpost = t : Remember the current spike time for next NET_RECEIVE.
}
adjustweight(deltaw) : Adjust the weight.
if (verbose >= 1) { printf("netreceive(): t=%f ms; w=%f; flag=%f; tlastpre=%f; tlastpost=%f; interval=%f; RLwindhebb=%f; tlasthebbelig=%f; RLwindanti=%f; tlastantielig=%f\n", t, w, flag, tlastpre, tlastpost, interval, RLwindhebb, tlasthebbelig, RLwindanti, tlastantielig) } : Show all information
}

PROCEDURE reward_punish(reinf) {
    if (RLon == 1) { : If RL is turned on...
        deltaw = 0.0 : Start the weight change as being 0.
        deltaw = deltaw + reinf * hebbRL() : If we have the Hebbian eligibility traces on, add their effect in.
        deltaw = deltaw + reinf * antiRL() : If we have the anti-Hebbian eligibility traces on, add their effect in.
        if (softthresh == 1) { deltaw = softthreshold(deltaw) } : If we have soft-thresholding on, apply it.
        adjustweight(deltaw) : Adjust the weight.
        if (verbose >= 2) { printf("reward_punish(): t=%f ms; delta=%f; w=%f\n", t, deltaw, synweight) } : Show weight update information if debugging on.
        if (verbose >= 3) { printf("reward_punish(): t=%f ms; delta=%f; w=%f; reinf=%f; hebbRL=%f; antiRL=%f; softthresh=%i\n", t, deltaw, synweight, reinf, hebbRL(), antiRL(), softthresh) } : Show all information
    }
}

FUNCTION hebbRL() {
    if ((RLon == 0) || (tlasthebbelig < 0.0)) { : If RL is turned off or eligibility has not occurred yet, return 0.0.
        hebbRL = 0.0
        if (verbose > 2) { printf("hebbRL(): off; t=%f ms; hebbRL=%f; RLon=%f; tlasthebbelig=%f\n", t, hebbRL, RLon, tlasthebbelig) } : Show all information
    }
    else if (useRLexp == 0) { : If we are using binary (i.e. square-wave) eligibility traces...
        if (t - tlasthebbelig <= RLlenhebb) { hebbRL = RLhebbwt } : If we are within the length of the eligibility trace...
        else { hebbRL = 0.0 } : Otherwise (outside the length), return 0.0.
        if (verbose > 2) { printf("hebbRL(): square; t=%f ms; hebbRL=%f; tlasthebbelig=%f; RLlenhebb=%f\n", t, hebbRL, tlasthebbelig, RLlenhebb) } : Show all information
    }
    else { : Otherwise (if we are using exponentially decaying traces), use the Hebbian decay to calculate the gain.
        hebbRL = RLhebbwt * exp((tlasthebbelig - t) / RLlenhebb)
        if (verbose > 2) { printf("hebbRL(): exp; t=%f ms; hebbRL=%f; RLhebbwt=%f; tlasthebbelig=%f; RLlenhebb=%f\n", t, hebbRL, RLhebbwt, tlasthebbelig, RLlenhebb) } : Show all information
    }
}

FUNCTION antiRL() {
    if ((RLon == 0) || (tlastantielig < 0.0)) { : If RL is turned off or eligibility has not occurred yet, return 0.0.
        antiRL = 0.0
        if (verbose >= 3) { printf("antiRL(): off; t=%f ms; antiRL=%f; RLon=%f; tlastantielig=%f\n", t, antiRL, RLon, tlastantielig) } : Show all information
    }
    else if (useRLexp == 0) { : If we are using binary (i.e. square-wave) eligibility traces...
        if (t - tlastantielig <= RLlenanti) { antiRL = RLantiwt } : If we are within the length of the eligibility trace...
        else { antiRL = 0.0 } : Otherwise (outside the length), return 0.0.
        if (verbose >= 3) { printf("antiRL(): square; t=%f ms; antiRL=%f; tlastantielig=%f; RLlenanti=%f\n", t, antiRL, tlastantielig, RLlenanti) } : Show all information
    }
    else { : Otherwise (if we are using exponentially decaying traces), use the anti-Hebbian decay to calculate the gain.
        antiRL = RLantiwt * exp((tlastantielig - t) / RLlenanti)
        if (verbose >= 3) { printf("antiRL(): exp; t=%f ms; antiRL=%f; RLantiwt=%f; tlastantielig=%f; RLlenanti=%f\n", t, antiRL, RLantiwt, tlastantielig, RLlenanti) } : Show all information
    }
}

FUNCTION softthreshold(rawwc) {
    if (rawwc >= 0) { softthreshold = rawwc * (1.0 - synweight / wmax) } : If the weight change is non-negative, scale by 1 - weight / wmax.
    else { softthreshold = rawwc * synweight / wmax } : Otherwise (the weight change is negative), scale by weight / wmax.
}

PROCEDURE adjustweight(wc) {
    synweight = synweight + wc : apply the synaptic modification, then clip the weight if necessary to keep it between 0 and wmax.
    if (synweight > wmax) { synweight = wmax }
    if (synweight < 0) { synweight = 0 }
    if (verbose >= 3) { printf("adjustweight(): t=%f ms; synweight=%f; wc=%f; wmax=%f\n", t, synweight, wc, wmax) } : Show all information
}
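The pairing logic in NET_RECEIVE above can be sketched in plain Python (a simplified single-process sketch; parameter names follow the mod file, and the 1 ms net_send deferral for simultaneous spikes is omitted):

```python
import math

# Parameters mirroring the PARAMETER block defaults
TAUHEBB, TAUANTI = 10.0, 10.0  # decay time constants (ms)
HEBBWT, ANTIWT = 1.0, -1.0     # maximal adjustments

def stdp_deltaw(t_pre, t_post):
    """Nearest-neighbor STDP: weight change for one pre/post spike pair (times in ms)."""
    interval = t_post - t_pre
    if interval >= 0:  # pre before post: Hebbian potentiation
        return HEBBWT * math.exp(-interval / TAUHEBB)
    else:              # post before pre: anti-Hebbian depression
        return ANTIWT * math.exp(interval / TAUANTI)
```

The magnitude is maximal for near-coincident spikes and decays exponentially with the inter-spike interval, matching the hebbwt/antiwt exp() terms in both mod versions.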

netpynesimplestdp.mod

  • if use stdp.m1ms:

– only hebb events at the beginning – RL not working because tlasthebbelig doesn't update

  • if use stdp.cliff:

– both stdp and RL happen - but correctly?? – only hebb - so maybe anti-hebb not working – if critic = -1

  • problem might be netpyne?

m1ms model

  • if use stdp.cliff:

– seems like too many hebb events? – RL works (with critic =1 or -1) - but might be wrong due to hebb events?

  • if use stdp.m1ms:

– Hebbian works (no anti-Hebbian?) – RL works (with critic =1 or -1)

found bug and fixed (skip instead of w)

  • use separate ‘skip’ variable to skip the 2nd set of conditions; instead of using the ‘w’ var, since otherwise ‘w’ gets fixed to that value for following events

need to fix so -1 critic RL also works - fixed

  • was due to soft threshold being on – reduces -1 effect if close to min weight; and reduces +1 effect if close to max weight

check antihebb learning

  • fixed bug: condition had to be if ((tlastpost > -1) && (-interval > 1.0)) (interval is negative there, so it had to be negated)

16mar04 Adding subcellular synaptic conn distribution

sectionList

  • needed during runtime? for connectivity purposes?
  • option 1: can include section attribute named ‘sectionList’ and add name, but could take longer to set properties if have to search all
  • option 2: can include as dict in sectionLists in cellProp, and then add all sections inside of each sectionList dict; but would change the consistency of the generic implementation (cell.py etc)
  • option 3: can add additional dict sectionLists in cellProp, simply including list of sections (need to return additional dict from util.py function)
  • best options seem 1 or 3
  • Need to create sectionList at runtime? useful?

– iterating over SectionLists:

from neuron import h

s = h.Section(name='s')
a = h.Section(name='a')
d = h.Section(name='d')

sl1 = h.SectionList()
sl1.append(sec=s)

sl2 = h.SectionList()
sl2.append(sec=a)
sl2.append(sec=d)

section_lists = h.List('SectionList')
print('There are %d SectionLists' % section_lists.count())

for i in range(int(section_lists.count())):
    print('SectionList %d has the following sections: %r' % (i + 1, [str(sec.hname()) for sec in section_lists.o(i)]))

— Note that this iterates over all SectionList objects, not just those associated with a particular cell. — Alternatively, itertools.chain.from_iterable — https://docs.python.org/2/library/itertools.html#itertools.chain.from_iterable

NeuroML implements as option 3:

<segmentGroup id="prox_dend_soma">
    <include segmentGroup="comp_1"/>
    <include segmentGroup="comp_41"/>
    <include segmentGroup="comp_28"/>
    <include segmentGroup="comp_15"/>
    <include segmentGroup="comp_2"/>
</segmentGroup>

Netpyne implementation

  • Option 3

cellRule = {'label': 'PYRrule', 'conditions': {'cellType': 'PYR'}, 'secs': {}, 'secLists': {}}
...
cellRule['secLists']['all'] = ['soma', 'dend']

  • Can’t define properties of section lists in netpyne
  • Neuron section list object not created - just python version
  • Can import cell with sectionlists
  • Can use sectionlists in connectivity
  • Modified importCell so it reads sectionLists with original variable names; and modified input arguments so now pass cellRule as 1st arg – so can add ‘secs’ and ‘secLists’ to cellRule dict
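A rough sketch of how a secList could then be referenced from a connectivity rule (plain dicts in the style of the cellRule example above; the conn-rule key names are illustrative, not the exact netpyne API):

```python
# Define a secList on the cell rule (as in option 3 above)
cellRule = {'label': 'PYRrule', 'conditions': {'cellType': 'PYR'}, 'secs': {}, 'secLists': {}}
cellRule['secLists']['alldend'] = ['basal', 'apical']

# Hypothetical conn rule targeting the secList instead of a single section;
# the connectivity code would expand 'alldend' into its member sections
connRule = {
    'preConditions': {'cellType': 'PYR'},
    'postConditions': {'cellType': 'PYR'},
    'sec': 'alldend',  # name of a secList defined on the postsynaptic cell rule
    'weight': 0.01,
    'delay': 2,
}

# Expansion step the wiring code would perform
targetSecs = cellRule['secLists'].get(connRule['sec'], [connRule['sec']])
```

This keeps the rule compact: one 'sec' entry covers every section in the list, and falls back to treating the name as a single section if no secList matches.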

TODO:

  • update tut_import.py - DONE
  • update documentation - DONE
  • update claus - DONE
  • update M1Network - DONE

Discussion with Ben

  • define subcellular location of synapses:
  1. named sectionlists of sublists - apicatuft, obliques
  2. radial distribution, fraction yfrac, normalized, cortical depth, where dendrites fall in layers (cx centered, not cell centered)
  3. path distance from soma

– plus combination of above – 1st, 2nd order branches differences cannot be specified directly, but can create sectionlists eg. 1st 50um apical trunk no spines - subsets that are spiny vs non-spiny

2 METHODS: A) 2-step implementation: 1) unitary conns, 2) contacts (subcellular conn) B) axo-dendritic density overlap - you don’t know unitary without calculating subcellular – HBP approach - axonal density is function of proj from L2->L5 + dendritic density, even if cells not realistic, calculate weight+prob

if we imagine a hierarchical structure where a 'unitaryConn' object contains a list of 'contacts' (~= Neuron NetCons) and 'synapses' (Neuron Synapses), this unitaryConn would have a specific pre- and post-cell id. But it may be the case that the synapse is used as the target of many presyn cells (at least in Neuron terms and how we usually build models now), which means the 'synapse' would have to be outside of the 'unitaryConn' and just be referenced by name or whatever. I'll just point out that this doesn't make all that much sense to me (despite being the way you do it)

  • 2 levels of instantiation - abstract (no specific models, not simulatable, cellular); and specific (subcellular)
  • from 1st - every neuron has to be assigned an implementation; and every connection has to be implemented
  • thats what happens right now but abstract instantiation is not saved
  • 2nd step consists of instantiating subcellular info - cell morphology + subcellular conn (could be separate)
  • 1st pass) cellular instantiation step (includes cells+conns); 2nd pass) subcellular step; 3rd pass) choose simulator, pick Neuron object for each cell
  • In 3rd level decide where to place h.Netcons and h.Synapses
  • conns = unitary conns - presyn, type (AMPA), weight (somatic response for 1 AP presyn cell ~0.2 mV EPSP), delay?
  • in connParams have subcellular conn info - but not added to cell objects, only once create simulatable individual cells, wire it up
  • just 3 levels : specifications -> cellular -> simulator instantiation
  • Neurons instantiation separate from python structure - linked by cell id
  • for each unitary connection, check cell model of presyn and postsyn and decide if use NetCons, Synapses etc

Current implementation

  • Cell (object)

– conns (list of dicts) — hNetcon — preGid — weight — threshold — sec — syn

– secs (dict of dicts) — geom — topol — mechs — pointps — syns (dict) ---- hSyn ---- _loc ---- _type ---- tau ---- e

Pros/Cons of making conns indep of cells

  • pros:

– easier to search for conns, do analysis etc – conceptually makes more sense (Ben, Bill) – consistent with NeuroML

  • cons:

– synapses separate from cell too? associate via gid? prev models and Neuron standard is syns inside cell, conns outside; harder to check syns – make sure conns for each cell in same node – seems easy since will happen naturally – easier to find conns of a given cell

Possible structure (with contacts inside each unitary conn):

  • net (object of class Network)

– cells (list of objects of class Cell) – conns (list of dicts) — preGid — postGid — contacts (list of dicts) ---- hNetcon ---- weight ---- threshold ---- sec ---- syn

  • Advantage is that we would have equivalent top-level connections independently of subcellular info (morphology).
  • Advantage: Need to know all unitary connections before can actually implement the subcellular details - distribution depends on total number of input conns
  • Disadvantage is that we have a more complex hierarchical structure, more difficult to search; if we only have simple single contacts then it is overcomplicated; and it doesn't match NeuroML

Alternative structure (with single list of contacts):

  • net (object of class Network)

– cells (list of objects of class Cell) – conns (list of dicts) — preGid — postGid — hNetcon — weight — threshold — sec — syn

for unitary conns - find unique preGid/postGid (eg. print list(set(zip(*myList)[1])))
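A minimal sketch of recovering the unitary (cell-to-cell) connections from the flat per-contact conns list above (field names follow the alternative structure; the data is made up):

```python
# Flat list of per-contact connection dicts, as in the alternative structure
conns = [
    {'preGid': 0, 'postGid': 2, 'weight': 0.5},
    {'preGid': 0, 'postGid': 2, 'weight': 0.3},  # second contact, same cell pair
    {'preGid': 1, 'postGid': 2, 'weight': 0.4},
]

# Each unique (preGid, postGid) pair corresponds to one unitary connection
unitary = sorted(set((c['preGid'], c['postGid']) for c in conns))
print(unitary)  # [(0, 2), (1, 2)]
```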

Alternative structure (with contacts inside, but as a class)

  • net (object of class Network)

– cells (list of objects of class Cell) – conns (list of object of class Connection) — preGid — postGid — contacts (list of dicts) ---- hNetcon ---- weight ---- threshold ---- sec ---- syn

Advantage - can have subcelullar methods inside

Structure with proper nomenclature for cellConns, synConns, synMechs

  • net (object of class Network)

– cells (list of objects of class Cell) – cellConns (list of object of class CellConnection) — preGid — postGid — synConns (list of dicts) ---- hNetcon ---- weight ---- threshold ---- sec ---- synMech

Choice of cellConns, synConns, synMechs

  • ‘cellConns’ = unitary connection between 2 cells (can contain many synapses) = non-existent (Neuron) = non-existent (NeuroML)

= ‘connection’ (HBP)

  • ‘synConns’: individual synaptic contacts made between single pre and post cell (each connection can contain many) = ‘h.NetCon’ (Neuron) = ‘connections’ (NeuroML) = ‘synapse’ (HBP)
  • ‘synMechs’: postsynaptic mechanism (receptor-like), usually one per connection = eg. ‘h.ExpSyn’ or ‘h.AMPA’ (Neuron) = eg. ‘alphaSynapse’ (NeuroML) = ‘synaptic mechanism’ (HBP)

neuroml conn structure

  • projections: between populations; pre, post and synapse
  • connection: pregid, postgid, presegment, postsegment, prefraction, postfraction
  • synapse: mechanism associated to cell; tau, gbase, erev

<projection id="LTS_AxAx_sm" presynapticPopulation="CG_C04_LTS_sm" postsynapticPopulation="CG_C04_AxAx_sm" synapse="Inh_LTS_FS">
    <connection id="0" preCellId="../CG_C04_LTS_sm/5/SupLTSInter" postCellId="../CG_C04_AxAx_sm/0/SupAxAx" preSegmentId="112" preFractionAlong="0.26785243" postSegmentId="82" postFractionAlong="0.36041966"/>
</projection>

<neuroml xmlns="http://www.neuroml.org/schema/neuroml2" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.neuroml.org/schema/neuroml2 https://raw.github.com/NeuroML/NeuroML2/development/Schemas/NeuroML2/NeuroML_v2beta4.xsd" id="Syn_AMPA_L4SS_IN">
    <notes>ChannelML file describing a single synaptic mechanism</notes>
    <alphaSynapse id="Syn_AMPA_L4SS_IN" tau="0.8ms" gbase="2.94303552937e-07mS" erev="0.0mV">
        <notes>Synapse with syn scaling constant c = 1 nS (translating to max cond of 2.94304e-07 mS), time course: 0.8 ms and reversal potential: 0 mV. Automatically generated by command: genSyn.py Syn_AMPA_L4SS_IN 0.8 1 0</notes>
    </alphaSynapse>
</neuroml>

Ben’s sCRACM code

insertSyn

  • vecstim
  • netcon
  • expsyn

uniformSynapseLocations

  • total length
  • list of secs
  • sec lens
  • find total length, divide by num syns, find absolute locs, find local secs and locs for each syn (use list with cum lengths and corresponding section)
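The uniform-placement algorithm described above can be sketched as follows (a hypothetical Python helper, not Ben's hoc code; sections are given as (name, length) pairs):

```python
import bisect
import itertools

def uniform_synapse_locations(sections, nsyns):
    """Spread nsyns evenly along the total dendritic length;
    return a (section name, local loc in [0, 1]) pair per synapse."""
    names = [name for name, _ in sections]
    lengths = [length for _, length in sections]
    cum = list(itertools.accumulate(lengths))  # cumulative lengths
    total = cum[-1]
    spacing = total / nsyns                    # total length divided by num syns
    locs = []
    for i in range(nsyns):
        x = (i + 0.5) * spacing                # absolute position along the arbor
        j = bisect.bisect_left(cum, x)         # index of section containing x
        start = cum[j] - lengths[j]
        locs.append((names[j], (x - start) / lengths[j]))  # local sec + loc
    return locs
```

For example, 4 synapses over two 100 um sections land at locs 0.25 and 0.75 of each section, exactly the "cumulative lengths and corresponding section" lookup described above.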

pointFromLoc

  • section, loc
  • return anatomical point x,y,z

loadratio

  • load file wiht synaptic dist ratios - rel syn strength for 10x30 grid

setup_grid

create grid with 10 cols, 30 rows

calculateGridSigma

vector of local synaptic strength at each grid point, e.g. ratio peak_experimental / peak_uniform

distanceFromGrid

  • arg 1: x
  • arg 2: y
  • arg 3: Vector to hold the distances

interpolateSegmentSigma

  • arg 1: SectionList to receive synapses, e.g. spiny
  • arg 2: Vector to be filled with the calculated sigma
  • Note: the dendrites must be fully enclosed by the grid (i.e. there must be 4 nearest grid points surrounding every segment)
  • for each section loc:

– find x,y pos – find 4 closest grid points and their sigmas (syn strengths) – bilinear interpolation of sigma
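The bilinear-interpolation step above can be sketched as (hypothetical helper; sXY is the sigma at grid point (xX, yY), and the query point must lie inside that grid cell, per the enclosure note above):

```python
def bilinear_sigma(x, y, x0, x1, y0, y1, s00, s10, s01, s11):
    """Interpolate synaptic strength sigma at (x, y) from the 4 nearest grid points."""
    tx = (x - x0) / (x1 - x0)  # fractional position within the cell, along x
    ty = (y - y0) / (y1 - y0)  # fractional position along y
    top = s00 * (1 - tx) + s10 * tx      # interpolate along x at y0
    bottom = s01 * (1 - tx) + s11 * tx   # interpolate along x at y1
    return top * (1 - ty) + bottom * ty  # then interpolate along y
```

E.g. at the center of a cell whose corners have sigmas 1, 1, 3, 3, the interpolated sigma is 2.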

mappedSynapseLocations

  • arg 1: SectionList for which to calculate sigma
  • arg 2: Vector of sigmas for each segment
  • arg 3: SectionList to be filled
  • arg 4: Vector of x-vals to be filled
  • Load ratio (vals between 0-15) of syn strength in 10x30 grid
  • Obtain norm sigma values based on ratio
  • Calculate sigma via interpolation for every segment location
  • Calculate synapses to insert in each segment

radialSynapseLocations

  • arg 1: SectionList that can accept synapses (e.g. spiny)
  • arg 2: sigma_uniform (e.g. 1.5/8)
  • arg 3: y_min
  • arg 4: y_max
  • arg 5: SectionList to carry synapses, 1 for each synapse (e.g. synsecs)
  • arg 6: Vector of x-values, 1 for each synapse (e.g. synxs)
  • uniform distribution of synapses based on radial distance from soma
  • specify y range
  • calculate num synapses based on section length / nseg
  • store section and loc of each syn

distributeSynapse

  • arg 1: SectionList that can accept synapses (e.g. spiny)
  • arg 2: sigma_uniform (e.g. 1.5/8)
  • arg 3: mode: 0 = uniform, 1 = non-uniform aka mapped, 2 = non-uniform based on laminar depth
  • arg 4: SectionList to carry synapses, 1 for each synapse (e.g. synsecs)
  • arg 5: Vector of x-values, 1 for each synapse (e.g. synxs)
  • arg 6: optional, for mode 2: y_min
  • arg 7: optional, for mode 2: y_max
  • wrapper: selects 1 of 3 modes and passes in arguments

netpyne implementation

  • check number of presyn cells connecting to post (cellConns)
  • check syn distribution pattern and morphology of post
  • need to calculate num of synConns based on weight?? or fixed weight for synConn (divide cellConn weight by num of synConns?)
  • create synConns and synMechs
  • Is it worth having CellConn as class with SynConn attribute and methods?

– only 5 syns/cell – need to know all cellconns before start synconns, but method cannot be inside cellconn cause need to check all cellconns – have to call method createSubCellConn if params includes subCell - do once for each conn rule (from network.py) – subcellular syn dist shouldn’t be replicated on each CellConn, just once on conn rule

  • createConns

– for each rule: — probConn/convConn ---- cellConns.append(CellConn) (but no subcellular) – for all cellConns: — cellConns.addSubcellular

  • but needs subcell params from rule!! so either

– need to be stored in each cellConn / pointer to f.net.params – shouldn’t occupy extra space since pointer (can remove after so not saved?)

  • do we need subcellular info for each conn rule?! or just for each cell type and synMech?! Ben mentioned ‘after logic conns, create subcellular based on presyn+postsyn cell type”
  • weight + delay a property of CellConn or SynConn ?! both?
  • how to create extra synMechs needed? identical properties to cellprop syn mechs?
  • maybe there’s no need for cellconns vs synconns?

– if subcellular info: create temporal cellconns, then synconns (from synconns can generate cellconns after) – if no subcellular info: create synconns directly

  • or create cellconns and synconns as independent lists (redundant)
  • maybe create separate rules for subcellular distribution; needed because:

– conn data as func of yfrac (eg. [0.1 - 0.2]) – syn dist for presyn layer/type (eg. L2/3) – wouldn’t make sense to repeat for each yfrac-based conn rule (or cell type rule), plus need all

  • maybe add subcellular distribution to cell prop rules:

– for each cell prop, create conditions of subcellular distribution eg. [‘conditions’: {‘yfrac’: [0.1-0.4]}] – default: assume uniform distribution

  • where to put properties of synapses?

– can add to sections in properties – can just add generic, by setting ‘_loc’: ‘generic’ ? – but still inside a section! :/ – just make algorithm look for synmechs in all sections, and use same params to replicate – maybe move ‘synMechs’ out, to same level as ‘secs’ ? – add to sectionList !?

sCRACM (Petreanu 2009; Suter,2015)

  • inject ChR2 in certain presyn axons (eg. L2/3)
  • short photostimulation 1ms 2mW in specific grid (12x24, 50um) location
  • depolarizes axons; use TTX (blocks Na+ channels) and 4-AP (blocks K+ channels) -> no AP
  • axon terminal Ca2+ channels open -> glutamate release
  • measure EPSC in soma
  • provides a two-dimensional ‘image’ of the distribution of specific input within the dendritic arbors of the recorded cell
  • EPSCsCRACM amplitudes depend on the density of ChR2-positive axons, the fraction of axons that make synapses with the recorded neuron, the strength of the synapses, and their electrotonic distance from the soma
  • sCRACM maps were normalized to the largest pixels within a map and thus represent the relative strength of input within the dendritic tree.
  • Average sCRACM map of layer 2/3 inputs to M1-CSPs (n = 23 neurons). L, Input profiles by soma depth. M, Location of perisomatic input from layer 2/3 relative to soma position, plotted as a function of soma depth. For each profile, the input depth was calculated as the center of mass across the perisomatic pixels.


questions for ben

always have to specify num contacts or distribution?

– Yes, or distribution – Ben: if full model then could calculate exactly; but since scaled down, we want syns per conn to be correct eg. 5 – assume in M1 model realistic density but reduced volume – experiments found ~5 for PT->PT, though not in motor cx – likely that there is data – does it vary significantly? Thalamocortical could be very different; assume every proj has 4-6 contacts as measured – select a fixed value eg. 5, and set the strength/weight of each synConn

Weight+delay in cellConn or synConn or both?

– if data is from unitary conn EPSP, then it applies to the cellConn, but need to calculate weight/synConn (h.NetCon) – allow weights both at cellConn and synConn level – does weight * 2 make the synapse twice as strong? – can know max conductance, and temporal response – Lefort used EPSP mV in soma for cellConn – need to know how to calculate synConn – is there a way to do this analytically? – he did in the sCRACM code: assume syn density is correct; electrotonic differences made the difference between measured and simulated – multiply by factor difference, resimulate and compare again – syns close to soma were fine; for syns far away, the original map had underestimated the response, so far-away syns were rescaled – pattern of syn density for cell – function of yfrac – only need to calculate for original sCRACM map, and output should be a rule – but for cellConn -> 5 synConns get 1mV in soma when activated simultaneously – but will be distributed along dendrites so different transmission delays – difficult to calculate accurately – most models don't distribute syns over dendrites with realistic syn density

– delay could be identical for each synConn, or add a small distance-based delay – delay would vary due to extra distance traveled through unmyelinated axon (1 m/s) – 1.1ms diff for different pyramidal neurons - can omit for now - euclidean distance between cell bodies - condVelocity = 1 m/s – minDelay value = 2ms – too high! set to maybe 0.5 or 1 ms – max(0.5, 3d distance based); for close-by cells, axon goes down ~100um (fat axon), collaterals branch off, come back up 100 or 200um; localDelay = 0.5ms – for long-range conns use longer delays - eg. 4 ms - kind of arbitrary, seems noisy
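The distance-based delay rule discussed above (max of a 0.5 ms local floor and 3D distance over conduction velocity) can be sketched as:

```python
import math

COND_VELOCITY = 1000.0  # 1 m/s expressed in um/ms (unmyelinated axon, per notes)
MIN_DELAY = 0.5         # local delay floor (ms)

def conn_delay(pre_xyz, post_xyz):
    """Connection delay (ms): Euclidean distance between cell bodies
    divided by conduction velocity, floored at MIN_DELAY."""
    dist = math.dist(pre_xyz, post_xyz)  # 3D distance in um
    return max(MIN_DELAY, dist / COND_VELOCITY)
```

Close-by cells (within ~500 um) all get the 0.5 ms floor; a pair 2 mm apart gets 2 ms.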

  • Bill: have to normalize by position based on a transfer function between locations and soma

– can use something like Ted's multifreq transfer func program to get that map – yes, have to scale based on loc and based on freq of input – eg a high-freq (short) input like AMPA has less effect on soma than a slow one eg NMDA – was looking at a paper on that recently but it is just for the passive case – jnsci15:1669.pdf The morphoelectrotonic transform: a graphical approach to dendritic function /u/billl/articles/jnsci15:1669.pdf – reshaping tree based on electrical distance – another old paper – author = "Jaffe, DB and Carnevale, NT", title = "Passive normalization of synaptic integration influenced by dendritic architecture", /u/billl/articles/jnphys82:3268.pdf – http://www.neuron.yale.edu/phpbb/viewtopic.php?f=8&t=2404&p=9487&hilit=frequency+response#p9487 – calculate the space constant in response to an AC current // lambda_AC(f) = 1/2 * sqrt(diam / (PI * f * Ra * Cm)) – think he has a little package for this but not finding it now http://www.neuron.yale.edu/phpbb/viewtopic.php?f=15&t=313&p=870&hilit=frequency+response#p870 a 'frequency tool' – also depends on the Rm and on active conductances, eg an EPSP can end up being terminated by K chans or can be primarily an active response, so that the actual gsyn is only a small piece

Syn dist specific to postsyn mech? depends on presyn cell props, loc?

– Specific to eg. AMPA/NMDA, or just exc vs inh? – we dont have specific data on different receptors – assume all exc are both AMPA and NMDA, and have a conductance ratio between them - has been measured (maybe ratio different in tuft vs basal, but ignore for now); allow to customize ratio

  • depends on presyn cell type and layer (eg. L2/3 exc vs L5B IT vs L5B PT)

– M1 L2/3 -> SPI sCRACM map – Petreanu barrel cortex L2/3 or L4 -> yfrac sCRACM map – long range inputs sCRACM maps (both Ben and Petreanu)

Why need to know num of conns before start? Can have prob dist of synapse, and map on a cell-by-cell basis, adapt

probabilities based on current num contacts – wouldn’t be exact since probability based; but need final distribution to match ratios

The non-uniform mapped / distance from soma-based would be given in the form of 30x10 map, distribution across segments?

– exp data is in 30x10 map format; distr across segments would vary depending on cell morphology – netpyne should use a 30x10 length density – num of syns/um – within a given pixel stimulated with blue light, pixel = 50x50 um, num of syns/um/dendrite - homogeneous within 50x50um – convert scracm map -> map of how many syns per length of dendrite at each region of the map (at each pixel) – need to know how much dendrite is in that pixel, and how many syns were coactivated to get the soma response – his method: put a fixed amount of dendrite, and then adjusted the weight of those syns – couldnt really separate weights from num of synapses – started with plausible weight and num of syns – length density for a specific assumption of conductances (used to convert map) – input to netpyne should be free from Neuron peculiarities – relates to something experimental – num of syns per length of dendrite and strength of each of those syns as a peak conductance

  • 2 types of map: 1) num syns/um of dendrite, 2) conductance of syns/um dendrite (probably uniform everywhere)
  • remain within bounds that are experimentally plausible - eg. known num of spines in a dend tree (eg. max 20k)
  • L5->L5 somatic EPSP unitary conns 0.2-0.5 mV for 5 syns/conn –> each syn has g that will result in a 0.1 mV EPSP at soma

– but will depend on location, eg. apical tuft won’t give you that value – the 5 number comes mostly from counting syns in basal – maybe some also in apical tuft but went undetected (90% came from basals) – so probably ok to assume most syns are in basal not apical tuft – in his model he calculated the conductance value for a standard synapse that gives 0.1 or 0.2 mV (assume uniform over all arbor), so need to vary num of contacts and location
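The 0.1-mV-per-synapse logic above suggests one crude way to split a unitary cellConn strength across synConns. The sketch below uses a made-up linearization constant `g_per_mv`; in practice that conversion must be fit per morphology/location (the electrotonic rescaling discussed earlier):

```python
def syn_weight_from_unitary(unitary_epsp_mv, syns_per_conn, g_per_mv=0.0005):
    """Per-synapse peak conductance (uS) so that syns_per_conn synapses,
    activated together, sum to the unitary somatic EPSP.

    g_per_mv (uS per mV of somatic EPSP) is a hypothetical constant used
    only for illustration; it is not a measured value.
    """
    epsp_per_syn = unitary_epsp_mv / syns_per_conn  # e.g. 0.5 mV / 5 syns = 0.1 mV
    return epsp_per_syn * g_per_mv

print(syn_weight_from_unitary(0.5, 5))  # 0.1 mV per synapse
```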

Other formats apart from length density map and conductance map (50um bins, 30x10 map, aligned to pia, centered on soma):

– for STR assume they get same input and infer similar density map for STR cells – convert using dendritic density of STR cells – simpler way: syn length density maps are also radial functions – if you account for how much dendrite there is, the x dimension of the map may not contain any info – so rule is based on 1d map (radial function) - 30x1 map - function of yfrac (in 50 um bins) – could be averaged or represent individual neurons – for scracm map doesnt make sense to average – sectionLists - scracm has 50um resolution but actually more like 100um resolution – if you look at dends, 1st 50um of apical trunk have no syns; 1st 20um of root basal dont have any syns (just a few) – at a scale where scracm doesnt reach there might be no syns – so where scracm doesnt reach can use sectionList to complement and tell you there are no syns there – eg. spiny sectionList contains all sections except where there were no spines/syns - so set density to 0 – in neuron if you didnt reconstruct every spine, you account for that by increasing cm – in detailed model I’m using – compensated by increasing membrane area
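If the x dimension carries no extra information once dendritic length is accounted for, the 30x10 map collapses to a 30x1 radial length-density profile (syns/um per 50 um depth bin). A minimal sketch, with made-up 2-bin maps:

```python
def radial_density(syn_map, dend_len_map):
    """Collapse a depth-x-horizontal synapse-count map into a radial
    length-density profile (syns per um of dendrite, per depth bin).

    syn_map[i][j]: synapse count in pixel (depth bin i, horizontal bin j)
    dend_len_map[i][j]: um of dendrite in that pixel (from the reconstruction)
    """
    profile = []
    for syn_row, len_row in zip(syn_map, dend_len_map):
        total_len = sum(len_row)
        profile.append(sum(syn_row) / total_len if total_len > 0 else 0.0)
    return profile

syns = [[0, 4, 0], [10, 20, 10]]                 # 2 depth bins x 3 horizontal bins
lens = [[0.0, 20.0, 0.0], [50.0, 100.0, 50.0]]   # um of dendrite per pixel
print(radial_density(syns, lens))                # [0.2, 0.2] syns/um per bin
```

Bins where a sectionList says there are no spines/syns (eg. 1st 50um of apical trunk) would simply get density 0.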

  • syn density maps are not common - so need another way to specify the rest – more common data in the literature are anatomical/categorical statements eg. this interneuron targets apical tuft – but no specific numeric values (details not known) – usually always anatomical
  • he can send SPI syn length density map – SEND EMAIL!

comments by ben

If you want to see which dendrites are on basal, apical, etc. then better to look at the profiles sorted by soma depth, i.e. where you can see each individual neuron’s input collapsed to a radial profile.

[5:17] Or a soma-aligned average map.

[5:18] The former is in my paper. The latter I have as figs.

[5:19] In my paper I also show analysis of where the “perisomatic” input is centered, relative to the soma - generally above the soma with an offset of ~50 um, but depends on projection and on soma depth.

[5:19] One consistent finding was that while the basals do get input from all sources, it’s a bit weaker below the soma (note that there are basals above the soma too).

[5:21] The sCRACM technique probably won’t work well for local L5->L5 connections (Taro tried this with Rabies - not mapping, but LED flash - and said the input was too weak). But for L5-L5 connections we can turn to pair recording studies with full reconstructions of dendrites and axons - such as published by Markram et al 1997 (methinks), with Sakmann.

There is substantial variability within each projection - seen clearly in the example Salva just pasted. Some CSP get strong L2/3 input, while others don’t - even if their soma depth is similar.

[5:29] So my thought was to take each actual map and apply the inferred synapse density onto reconstructions at approximately matched soma depths. Do this for each projection.

[5:29] For the L2/3 -> CSP data set, I have reconstructions and maps in the same neuron, for a subset of the data set. For the other projections, I don’t have reconstructions.

[5:30] So the idea would be to take each mapped neuron, use its soma depth to find the reconstruction with closest matching soma depth, and instantiate that.

[5:31] Or alternatively (and probably better), take each reconstructed neuron, and then apply synapses to it according to every projection - where for each projection, you find the map whose soma depth best matches the reconstructed neuron.

[5:32] Another approach I started working on is to parameterize each projection’s pattern as a function of soma depth, so that there is a formula M_i(somadepth) for each projection i, which lets you determine the synapse distribution for neurons at any depth.

[5:34] However, because the sCRACM maps don’t tell you synapse density (but rather, synaptic strength measured at soma under strange pharmacological conditions), before you can calculate a formula for the whole projection, you’d need to simulate each map to infer the synapse density. Unless … it turns out that there is a nice analytic way to convert the input maps into synapse densities, i.e. without simulating each one. From the few simulations I’ve done, this might actually be possible.
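Ben's M_i(somadepth) idea could be prototyped as a simple interpolation between radial profiles measured at known soma depths (once each map has been converted to a synapse density). The profiles and depths below are illustrative:

```python
def profile_at_depth(depth, measured):
    """Linearly interpolate a radial synapse-density profile for a neuron
    at an arbitrary soma depth, from profiles measured at known depths.

    measured: sorted list of (soma_depth, profile) pairs; equal-length profiles.
    """
    depths = [d for d, _ in measured]
    if depth <= depths[0]:
        return measured[0][1]
    if depth >= depths[-1]:
        return measured[-1][1]
    for (d0, p0), (d1, p1) in zip(measured, measured[1:]):
        if d0 <= depth <= d1:
            w = (depth - d0) / (d1 - d0)               # interpolation weight
            return [(1 - w) * a + w * b for a, b in zip(p0, p1)]

maps = [(400.0, [0.1, 0.3]), (600.0, [0.3, 0.1])]      # (soma depth um, profile)
print(profile_at_depth(500.0, maps))                   # halfway between the two profiles
```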

Features implemented with adhoc solution – need to be fixed

GENERAL

default synMech placed in soma - where to define? make syns separate like neuroml

  • don’t create synMechs until needed by connections – just specify params? but need to store somewhere
  • can look up in f.net.params using tags[‘propList’] but probably slow
  • in Cell class have pointer to f.net.params from synMechs but dont create new python/neuron object
  • place synapse together with connection instead of cell? confusing since many conns can project to single synapse
  • In neuroml, syns are defined independently, and associated with connections - do same thing:

netParams['synMechParams'] = {'NMDA': {'mod': 'ExpSyn', 'tau': 0.1, 'e': 0}}
cell.secs['soma']['synMechs'][0] = {'label': 'NMDA', 'loc': 0.5, 'hSyn': ...}

  • place synMechs outside of sec? might be slower to find; plus synMechs are pointps so ‘belong’ to section
  • no need to add in netParams, specified in connRule, and created together with connections
  • when create conn check if synMech exists in that section, ie. has same ‘label’ and ‘loc’, if not, then create
  • make addSyn function
  • comparison of searching inside list vs accessing dict (~20x faster, for 1M conns: 0.75 sec vs 0.03sec; for 100M conns: 75 sec vs 4 sec):

– but have to add time of searching for specific location too

timeit.timeit("b=next((item for item in a if item['label']=='ampa'), None)", setup="a=[{'label':'gabab','v':10},{'label':'gaba','v':10},{'label':'nmda','v':10},{'label':'ampa','v':10}]", number=1000000)
Out[13]: 0.7533509731292725

timeit.timeit("b=a['ampa']", setup="a={'gabab':{'v':10},'gaba':{'v':10},'nmda':{'v':10},'ampa':{'v':10}}", number=1000000)
Out[18]: 0.038023948669433594
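Following the timing above, an addSyn-style helper could key synMechs by (label, loc) in a dict so the "does it already exist" check is a lookup rather than a list scan. A sketch with plain dicts standing in for cell/section objects (no NEURON point process is created here; `add_syn_mech` is a hypothetical name):

```python
def add_syn_mech(sec, label, loc, syn_mech_params):
    """Return the synMech at (label, loc) on section dict sec,
    creating it from syn_mech_params only if it doesn't exist yet."""
    syns = sec.setdefault('synMechs', {})      # dict lookup: ~20x faster than list scan
    key = (label, loc)
    if key not in syns:
        params = dict(syn_mech_params[label])  # e.g. {'mod': 'ExpSyn', 'tau': 0.1, 'e': 0}
        params.update(label=label, loc=loc)    # 'hSyn' (the NEURON object) would go here
        syns[key] = params
    return syns[key]

netParams = {'synMechParams': {'NMDA': {'mod': 'ExpSyn', 'tau': 0.1, 'e': 0}}}
soma = {}
s1 = add_syn_mech(soma, 'NMDA', 0.5, netParams['synMechParams'])
s2 = add_syn_mech(soma, 'NMDA', 0.5, netParams['synMechParams'])
print(s1 is s2)  # True: the second conn reuses the existing synMech
```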

  • NEED TO IMPORT SYNAPSES INTO SYNMECHPARAMS

independent scale conn weight for each cell model

  • where to store?
  • useful feature
  • maybe by default should be stored by cell model
  • allow both options (same variable)

move conns and stims out of Cell class

  • make stims be normal populations?
  • stims from S2 projection could make 2000 syns into single cell!
  • maybe place netstim part of stim outside of cell, but keep netcon inside
  • maybe keep conns inside cell for now? seems to make things easier eg. have all synConns of that cell ready to distribute

CONN RULE

conn rule with list of synMechs

  • eg. [AMPA, NMDA]
  • weightFraction or weightRatio - eg.
  • synWeightFraction = [0.9, 0.1]

synsPerConn as distribution

  • allows functional - seems to be gauss around 5 (E->) or 1 (I->)
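synsPerConn as a distribution could be sampled per connection, eg. Gaussian around 5 for E-> and around 1 for I->, clipped to at least 1. A sketch (parameters illustrative):

```python
import random

def sample_syns_per_conn(mean, sd, rng=random):
    """Draw an integer synsPerConn from a Gaussian, clipped to >= 1."""
    return max(1, int(round(rng.gauss(mean, sd))))

rng = random.Random(1)  # seeded for reproducibility
exc = [sample_syns_per_conn(5, 1, rng) for _ in range(5)]    # E-> around 5
inh = [sample_syns_per_conn(1, 0.5, rng) for _ in range(5)]  # I-> around 1
print(exc, inh)
```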

weight/delay of conn and syns

  • allow to provide single weight – find way to estimate weight of syn contacts – check 16mar24 entry with papers to read!
  • allow to provide list of synweights – one per synConn (eg. 5 values) by convention assign starting from pia
  • allow functional only for single value (otherwise code would get overcomplicated)
  • maybe allow argument that represents location of synapse (not of cell) ??

SUBCELL RULE

create 1st conns, then syns

  • 2 steps, so can distribute syns
  • create temporary cellConns or keep?
  • need somewhere to store unitary weight? or when convert to subcellular lose?
  • 1) create synConns but don’t assign locs/delays/weights; 2) for each cell distribute syns, based on sep subcellular rules

– maybe add subcellular distribution to cell prop rules: — for each cell prop, create conditions of subcellular distribution eg. {'conditions': {'yfrac': [0.1, 0.4]}} — default: assume uniform distribution — wouldn’t make sense to repeat for each yfrac-based conn rule, plus need all

  • create num synConns in 1st pass, and then replace if needed based on subcelluar rules
  • only do 2nd pass if any subcellular rules (can’t depend of numSyns cause could be 1, but still need to distribute)
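The two-pass scheme above (1st pass: create synConns without locs/delays/weights; 2nd pass: distribute per subcellular rules) could look like this — data structures and names are hypothetical, not NetPyNE's:

```python
def create_conns(pairs, syns_per_conn):
    """Pass 1: create synConns with locs/weights left unassigned."""
    return [{'pre': pre, 'post': post, 'loc': None, 'weight': None}
            for pre, post in pairs for _ in range(syns_per_conn)]

def distribute_syns(conns, loc_rule, unitary_weight):
    """Pass 2: per postsyn cell, assign locs from the subcellular rule and
    split the unitary weight across that cell's synConns."""
    by_post = {}
    for c in conns:
        by_post.setdefault(c['post'], []).append(c)
    for post, cs in by_post.items():
        for i, c in enumerate(cs):
            c['loc'] = loc_rule(post, i, len(cs))
            c['weight'] = unitary_weight / len(cs)
    return conns

uniform = lambda post, i, n: (i + 0.5) / n   # default rule: uniform distribution
conns = distribute_syns(create_conns([(0, 10)], 5), uniform, 0.5)
print([c['loc'] for c in conns], conns[0]['weight'])  # 5 evenly spaced locs, 0.1 each
```

The 2nd pass only needs to run when some subcellular rule applies; otherwise the 1st-pass conns are kept as-is.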

distribute syns in sectionList or list of sectionLists

  • add support for list of sectionLists - or even combination?!

syn distribution specification

  • 2d syn density map
  • 1d syn density map (yfrac)
  • apply 1d or 2d map only to certain sectionList, eg. spiny
  • categorical data - eg. by sectionLists + distribution function (uniform, gauss)
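Whatever the specification format (2d map, 1d yfrac map, or categorical per sectionList), it ultimately reduces to drawing synapse locations from a density. A sketch of inverse-CDF sampling from a 1d yfrac density map (the map values below are made up):

```python
import bisect, random

def sample_syn_yfracs(density, n, rng=random):
    """Draw n synapse yfrac locations from a 1d density map
    (density[i] = relative syns/um in bin i; bins span yfrac 0..1)."""
    cdf, total = [], 0.0
    for d in density:
        total += d
        cdf.append(total)
    nbins = len(density)
    locs = []
    for _ in range(n):
        i = bisect.bisect_right(cdf, rng.random() * total)  # pick a bin by weight
        locs.append((i + rng.random()) / nbins)             # uniform within the bin
    return locs

density = [0.0, 0.0, 1.0, 3.0, 1.0, 0.0]   # peak around yfrac ~0.5-0.7
locs = sample_syn_yfracs(density, 1000, random.Random(0))
print(min(locs), max(locs))  # all samples fall inside the nonzero-density bins
```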

Meeting with Ben (March 18)

Comments from Ben

  • conceptual framework makes sense
  • HBP approach - axodendritic approach vs. know unitary and subcellular = refinement
  • 2 netcons to same synapse - what happens if input at same timestep ? coincident spiking
  • 1st pass + 2nd pass - approves
  • network model indep of subcellular info
  • single synapse could represent a projection between
  • unitary conn from preCell A could project to postCell B and C (so not totally accurate in terms of anatomy)
  • Do we need subcellular map for each presyn and postyn cell type, or per conn rule

– only per conn rule if you average maps; but could also use indiv cell maps for each cell model – upper and deeper SPI cells scracm maps suggest they would have different input-output relations in networks – for a given density map could end up with different syn density (and input-output relation) by applying to diff morphologies – yfrac dep of subcellular conn – also variability – CSP morphology very consistent across cortical depths; difference is in linker segments (eg 8% vs 12%) so not big diff – CSTR differences based on yfrac - within L5A and L5B - dont know if intrinsic diffs in CSTR

  • Make 3 CSP models - upper, middle, lower - 100 um apart – get data from 10 cells for each and fit – then test if can replace each of them and still get consistent results or very different (1 membrane model into 3 diff morphologies)
  • For IT have upper layer, L5A, L5B, and L6 – CSTR give you L5A, L5B, and have data for IT upper layers
  • IT cells - corticocortical (corticocallosal; not exclusive) in L2/3; corticostriatal mostly in L5 (also corticocortical; not exclusive); not all 1 class (eg. project to S2 and M1, non-overlapping, intrinsics are different)
  • Naoki looked at IT L2/3, L4 and L5 – different intrinsic props? no consistent answer
  • cellConns vs synConns needed?

– single S2 projection can make 2000-3000 synapses/contacts onto a single cell

TO READ or check out!

Mark15

-Pathway specific values for the parameter “utilization of synaptic efficacy” (U, analogous to the probability of neurotransmitter release) were unified from various experimental studies of synaptic transmission in juvenile rat somatosensory cortex (Le Bé et al., 2007; Brémaud et al., 2007; Feldmeyer, 2006; Koester and Johnston, 2005; Markram et al., 1997; Silver et al., 2003).

  • The axonal conduction delay for each synaptic contact was computed using the axonal path distance from the soma, and an AP conduction velocity of 300 μm/ms, based on experimental estimates (Stuart et al., 1997). Furthermore, experimentally measured ratios of NMDA and AMPA conductances were used in order to model their relative contribution to the unitary EPSC (Feldmeyer, 2006; Myme et al., 2003; Rinaldi et al., 2007; Silver et al., 2003; Wang and Gao, 2009).

Reim15

  • interbouton interval: 1-10 um
  • syns/bouton: 1 (2 low prob)
  • bouton density: ~0.2/um
  • syns/conn: E->E/I ~5, I->E/I ~15

Zado95

  • attenuation in and out - graphical rep

Huys06

Lond02

Kome09

Jaff99

fig - pyr L5 cell

> What is the significance of this Z transfer for neural communication?

Very simple. Transfer impedance is the best predictor of the effect of synaptic location on synaptic efficacy. This is a consequence of these two facts:

  1. Peak depolarization at the synaptic trigger zone is the primary determinant of whether or not an EPSP will trigger a spike. This is easily shown by computational modeling.
  2. Most synapses act like current sources, not voltage sources. Also easily shown by computational modeling.

Therefore, despite everything you might find in textbooks, hear in the classroom, or read in most journal articles, voltage attenuation is not a useful predictor of the effect of synaptic location on synaptic efficacy. The best predictor is transfer impedance, which tells you how strongly a current, injected at one point in the cell, will affect membrane potential throughout the cell. See Jaffe, D.B. and Carnevale, N.T. Passive normalization of synaptic integration influenced by dendritic architecture. Journal of Neurophysiology 82:3268-3285, 1999.

So if you want to understand how the distribution of synaptic inputs over the surface of a cell will affect the spiking output of that cell, study the spatial variation of transfer impedance from a reference point located at the cell’s spike trigger zone (since transfer impedance between any two points is independent of the direction of signal propagation, the transfer impedance from any point to the soma is the same as from the soma to that point).
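A toy illustration of this predictor, for the simplest possible case: in a semi-infinite passive cable, the steady-state transfer impedance from a point at distance x to the soma falls off as exp(-x/λ), so the predicted relative somatic effect of an input scales the same way. This is only a sketch of the idea; real (branched, active) dendrites call for NEURON's impedance tools or the morphoelectrotonic transform discussed above:

```python
import math

def relative_efficacy(x_um, lam_um):
    """Steady-state transfer attenuation exp(-x/lambda) for a semi-infinite
    passive cable: predicted relative somatic effect of an input at x."""
    return math.exp(-x_um / lam_um)

lam = 300.0  # um, illustrative space constant
for x in (0.0, 150.0, 300.0, 600.0):
    print(x, round(relative_efficacy(x, lam), 3))  # 1.0, 0.607, 0.368, 0.135
```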

PointProcessDistributor.hoc http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1002545

16may19 Meeting with Padraig

mod

  • most channels already exist in NeuroML2
  • converting NeuroML2 -> mod is possible

github

  • waffle - organize issues
  • pull request to release

neuroml -> netpyne

16jun01 Setting up Travis

example code

Framework for running automated tests on OSB using Travis-CI, see https://github.com/OpenSourceBrain/osb-model-validation

sudo: false

addons:
  apt:
    packages:
      - python-numpy
      - python-scipy
      - python-matplotlib
      - python-sympy

language: python
python: 2.7

virtualenv:
  system_site_packages: true

env:
  - OMV_ENGINE=PyNEURON
  - OMV_ENGINE=jNeuroML
  - OMV_ENGINE=jNeuroML_NEURON
  - OMV_ENGINE=jNeuroML_validate

install:
  - omv install Brian
  - omv install NEURON
  - cd NEURON/test
  - /home/travis/neuron/nrn/x86_64/bin/nrnivmodl ../mod.files
  - ls -alt
  - cd ../..

script:
  - omv all -V

useful repos

https://github.com/openworm/org.geppetto.simulator.external/blob/master/.travis.yml https://github.com/NeuralEnsemble/PyNN/blob/master/ci/install_neuron.sh https://github.com/OpenSourceBrain/SmithEtAl2013-L23DendriticSpikes/blob/master/.travis.yml

16jun15 New format for figures

Possible formats

list of dict

figs=[]

simConfig['analysis'].append({
    'func': 'plotRaster',
    'include': ['all', 120, 'E1', ('L2', 56), ('L5', [4,5,6]), ('L6', range(5,20))],  # all, gid, pop, pop rel index
    'maxSpikes': 3e8,
    'overlaySpikeHist': True,
    'syncLines': True,
    'orderBy': 'ynorm'|'y'|'popLabel'|'cellType',
    'figId': 1,  # figs with same figId will be converted to subplots
    'saveData': 'data2.pkl',
    'saveName': 'fig1.png'})

simConfig['analysis'].append({
    'func': 'plotSpikeHist',
    'include': ['all', 120, 'E1', ('L2', 56), ('L5', [4,5,6]), ('L6', range(5,20))],  # all, gid, pop, pop rel index
    'binsize': 5,
    'overlay': True,  # else separate subplots
    'style': 'line'|'bar'|'scatter',
    'yaxis': 'rate'|'count',
    'figId': 1,  # figs with same figId will be converted to subplots
    'saveData': 'data1.pkl',
    'saveFig': 'fig1.png'})

simConfig['analysis'].append({
    'func': 'plotTraces',
    'include': ['all', 120, 'E1', ('L2', 56), ('L5', [4,5,6]), ('L6', range(5,20))],  # all, gid, pop, pop rel index (automatically added to recordCells)
    'overlay': True,  # else separate subplots
    'oneFigPer': 'cell'|'trace',
    'saveData': 'data1.pkl',
    'saveFig': 'fig1.png'})

just dict

simConfig['analysis']['plotRaster'] = {
    'include': ['all', 120, 'E1', ('L2', 56), ('L5', [4,5,6]), ('L6', range(5,20))],  # all, gid, pop, pop rel index
    'maxSpikes': 3e8,
    'overlaySpikeHist': True,
    'syncLines': True,
    'orderBy': 'ynorm'|'y'|'popLabel'|'cellType',
    'figId': 1,  # figs with same figId will be converted to subplots
    'saveData': 'data2.pkl',
    'saveName': 'fig1.png'}

simConfig['analysis']['plotConnMatrix'] = True  # or:
simConfig['analysis']['plotConnMatrix'] = {
    'values': 'weight'|'numConns'|'probability'|'convergence'|'divergence',
    'groupBy': 'cell'|'pop',
    'saveData': 'data1.pkl',
    'saveFig': 'fig1.png'}

Equivalent to calling function

can use same args for calling function or for simConfig (use **kwargs)
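The "just dict" format maps directly onto function calls via **kwargs: each key names an analysis function, each value holds its arguments (with a plain True meaning "use defaults"). A sketch with a stub standing in for the real plotting function:

```python
def plotRaster(include=('all',), maxSpikes=1e8, syncLines=False, **kwargs):
    # stub standing in for the real plotting function; extra keys are absorbed
    return 'raster(include=%s, syncLines=%s)' % (list(include), syncLines)

analysisFuncs = {'plotRaster': plotRaster}

simConfig = {'analysis': {'plotRaster': {'include': ['all'], 'syncLines': True}}}

results = []
for funcName, args in simConfig['analysis'].items():
    kwargs = {} if args is True else args   # True -> call with default arguments
    results.append(analysisFuncs[funcName](**kwargs))
print(results)
```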

plotRaster y-axis – possible code to show actual y values instead of indices

'''
maxY = max(y2ind.values())
minY = min(y2ind.values())

base = 10
upperY = base * ceil(float(maxY)/base)
lowerY = base * floor(float(minY)/base)
yAddUpper = int(upperY - maxY)
yAddLower = int(minY - lowerY)
for i in range(yAddUpper): y2ind.update({yAddUpper: len(y2ind)+1})
for i in range(yAddLower): y2ind.update({yAddLower: min(y2ind.values())-1})
ystep = base  # int(len(y2ind)/base); ystep = base * round(float(ystep)/base)
yticks(y2ind.values()[::ystep], y2ind.keys()[::ystep])
'''

16Jun16 Parallelizing creation of cells and connections

Email from mike hines

Parallel NEURON idioms (Michael Hines, CodeJam 2014): information exchange during setup, random numbers, debugging.

Information exchange during setup: results must be independent of the number of processors and the distribution of cells. A process is often interested in all the objects with a particular property, but it generally does not know where the objects are, and the process that owns an object does not know who is interested in it. There is not enough memory in any one process to hold a map of which ranks hold which objects.

Example: MPI_ISend/Recv spike exchange — cells do not know which ranks are interested in their spikes.

Example: source/target connectivity — reciprocal synapse connection description (mitral_gid, mdend_index, xm, granule_gid, gdend_index, xg, …). Constructing a mitral needs all the tuples with that mitral_gid; granules don’t know enough for construction of the tuples; constructing a granule means gathering all the tuples with that granule_gid.

Basic exchange: dest = ParallelContext.py_alltoall(src). src and dest are lists of nhost pickleable objects; src[j] on the ith rank is copied to dest[i] on the jth rank. Likely identical to mpi4py’s comm.alltoall(src, dest). Essentially a wrapper for MPI_Alltoallv(s, scnt, sdispl, MPI_CHAR, r, rcnt, rdispl, MPI_CHAR, comm), with a preliminary MPI_Alltoall(scnt, 1, MPI_INT, rcnt, 1, MPI_INT, comm) to calculate rcnt and rdispl.

But no one knows who holds what, and there is no room for anyone to hold a global map. Solution: a rendezvous rank function, rank = rendezvous(property), usually rank = gid % nhost:

  1. Everyone sends the keys they own to the rendezvous rank.
  2. Everyone sends the keys they want to the rendezvous rank.
  3. The rendezvous rank sends back to the owners which ranks want which keys.
  4. The owners send the objects to the ranks that want them.

Usually simplification is possible. If the objects are small:

  1. Everyone sends the keys and objects they own to the rendezvous rank.
  2. Everyone sends the keys they want to the rendezvous rank.
  3. The rendezvous rank sends the objects to the ranks that want them.

If rendezvous(property) is known to be the source rank for all the keys (a priori or by verifying with an all_reduce):

  1. Everyone sends the keys they want to the owner ranks.
  2. The owners send the objects to the ranks that want them.

If rendezvous(property) is known to be the destination rank for all the keys:

  1. The owners send the objects to the ranks that want them.
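The small-object rendezvous variant can be simulated in plain Python by modeling py_alltoall as a list shuffle (src[j] on rank i ends up on rank j). This is a stand-in for ParallelContext.py_alltoall, not the real MPI call; gids and objects below are made up:

```python
def py_alltoall(srcs):
    """srcs[i][j] = message buffer from rank i to rank j; returns dests with
    dests[j][i] = that buffer (pure-Python stand-in for py_alltoall)."""
    nhost = len(srcs)
    return [[srcs[i][j] for i in range(nhost)] for j in range(nhost)]

nhost = 3
rendezvous = lambda gid: gid % nhost        # rendezvous rank function
owned = {0: {3: 'objA'}, 1: {4: 'objB'}, 2: {5: 'objC'}}  # rank -> {gid: object}
wanted = {0: [4], 1: [5], 2: [3]}           # rank -> gids it wants

# round 1: owners send (gid, obj); wanters send the gids they want
src = [[[] for _ in range(nhost)] for _ in range(nhost)]
for rank, objs in owned.items():
    for gid, obj in objs.items():
        src[rank][rendezvous(gid)].append(('own', gid, obj))
for rank, gids in wanted.items():
    for gid in gids:
        src[rank][rendezvous(gid)].append(('want', gid, rank))
dest = py_alltoall(src)

# round 2: each rendezvous rank forwards objects to the ranks that want them
src2 = [[[] for _ in range(nhost)] for _ in range(nhost)]
for rank in range(nhost):
    msgs = [m for buf in dest[rank] for m in buf]
    objs = {gid: obj for tag, gid, obj in msgs if tag == 'own'}
    for tag, gid, wanter in msgs:
        if tag == 'want':
            src2[rank][wanter].append((gid, objs[gid]))
dest2 = py_alltoall(src2)

got = {rank: dict(m for buf in dest2[rank] for m in buf) for rank in range(nhost)}
print(got)  # each rank now holds exactly the objects it wanted
```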

What about RANDOM? Results must be independent of the number of processors and the distribution of cells. Associate a random stream with each cell: reproducible, independent, restartable. Use a cryptographic transformation of several integers: run number, stream number (cell gid), stream pick index. Had been using MCellRan4, but only two integers define x(n1, n2). Thanks to Eilif Muller for suggesting Random123: “Parallel Random Numbers: As Easy as 1, 2, 3”, Salmon et al., SC11 (2011), D. E. Shaw Research, New York, NY 10036, USA — “We introduce several counter-based PRNGs: some based on cryptographic standards (AES, Threefish) and some completely new (Philox). All our PRNGs pass rigorous statistical tests (including TestU01’s BigCrush) and produce at least 2^64 unique parallel streams of random numbers, each with period 2^128 or more.” http://www.deshawresearch.com/resources_random123.html

Random123 lets eight integers define x(n1, …, n8); NEURON uses 5 (three for the stream number), Philox variant. Good performance, 10 million picks: ACG 0.329s, MLCG 0.681s, MCellRan4 0.150s, numpy.random.rand(n) (Mersenne Twister) 0.233s, Random123 0.201s.

from neuron import h
r = h.Random()
r.Random123(1,2,3)
r.negexp(1)

n = 100000000
dx = .01
y = h.Vector(n).setrand(r)
y = y.histogram(0,5,dx).rotate(-1,0)
x = y.c().indgen(dx/2,dx)
g = h.Graph()
y.line(g, x)

nrnran123.h (abridged) — all generator instances share the global index:

extern void nrnran123_set_globalindex(uint32_t gix);
extern nrnran123_State* nrnran123_newstream(uint32_t id1, uint32_t id2);
extern uint32_t nrnran123_ipick(nrnran123_State*);   /* uniform 0 to 2^32-1 */
extern double nrnran123_dblpick(nrnran123_State*);   /* uniform open interval (0,1); min value 2.3283064e-10, max < 1 */
extern double nrnran123_negexp(nrnran123_State*);    /* mean 1.0; min value 2.3283064e-10, max 22.18071 (= -log(1/2^32)) */
/* stateless (though the global index is still used) */
extern nrnran123_array4x32 nrnran123_iran(uint32_t seq, uint32_t id1, uint32_t id2);

Debugging: results must be independent of the number of processors and the distribution of cells.

  1. GID and time of first spike difference.
  2. All spikes delivered to synapses of that cell?
  3. When and what is the first state difference?

h.load_file("prcellstate.hoc")
if pc.gid_exists(gid): h.prcellgid(gid)
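The per-cell stream idea (one reproducible stream per gid, independent of how cells are distributed across ranks) can be mimicked in plain Python — a stand-in for NEURON's Random123 streams, using the stdlib Mersenne Twister rather than Philox:

```python
import random

def cell_stream(run, gid):
    """One independent, reproducible stream per (run number, cell gid);
    the picks don't depend on which rank owns the cell (Random123-style
    idea, implemented here with a per-seed stdlib generator)."""
    return random.Random((run << 32) + gid)  # pack the two integers into one seed

# same (run, gid) -> same picks, regardless of processor count or distribution
a = cell_stream(1, 42).random()
b = cell_stream(1, 42).random()
print(a == b)  # True
# different gid or run number -> a different, independent stream
print(cell_stream(1, 42).random() != cell_stream(1, 43).random())
```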

16jul11 Modify network instance

sim.net.modifyCells

sim.net.modifyConns

sim.net.modifyStims

16jul15 Use dot notation

new classes Dict and ODict

~25% slower

old dicts

Creating simulation of 23 cell populations for 0.1 s on 1 hosts… Number of cells on node 0: 4768 Done; cell creation time = 3.07 s. Making connections… Number of connections on node 0: 9536 Done; cell connection time = 1.28 s.

Running… 0 Done; run time = 125.21 s; real-time ratio: 0.00.

Gathering spikes… Done; gather time = 1.91 s.

Analyzing… Run time: 125.21 s Simulated time: 0.1 s; 4768 cells; 1 workers Spikes: 2120 (4.45 Hz) Connections: 9536 (2.00 per cell) 162.0 89.4 820 IT_L23 2.20985431331 Hz 89.0 89.4 113 PV_L23 8.80996218645 Hz 97.0 89.4 113 SOM_L23 9.60186889984 Hz 69.0 89.4 431 IT_L4 1.79074728663 Hz 72.0 89.4 475 IT_L5A 1.69551395267 Hz 111.0 89.4 540 IT_L5B 2.29927914492 Hz 751.0 89.4 540 PT_L5B 15.5563841246 Hz 169.0 89.4 248 PV_L5 7.6225012629 Hz 235.0 89.4 248 SOM_L5 10.5993360756 Hz 83.0 89.4 496 IT_L6 1.8717976474 Hz 95.0 89.4 496 CT_L6 2.14241899401 Hz 81.0 89.4 124 PV_L6 7.30677635852 Hz 106.0 89.4 124 SOM_L6 9.56195424695 Hz Plotting recorded cell traces … Done; plotting time = 0.61 s

Total time = 132.20 s

new dot Dicts

Creating simulation of 23 cell populations for 0.1 s on 1 hosts… Number of cells on node 0: 4768 Done; cell creation time = 4.08 s. Making connections… Number of connections on node 0: 9536 Done; cell connection time = 1.55 s.

Running… 0 Done; run time = 123.59 s; real-time ratio: 0.00.

Gathering spikes… Done; gather time = 3.34 s.

Analyzing… Run time: 123.59 s Simulated time: 0.1 s; 4768 cells; 1 workers Spikes: 2120 (4.45 Hz) Connections: 9536 (2.00 per cell) 162.0 89.4 820 IT_L23 2.20985431331 Hz 89.0 89.4 113 PV_L23 8.80996218645 Hz 97.0 89.4 113 SOM_L23 9.60186889984 Hz 69.0 89.4 431 IT_L4 1.79074728663 Hz 72.0 89.4 475 IT_L5A 1.69551395267 Hz 111.0 89.4 540 IT_L5B 2.29927914492 Hz 751.0 89.4 540 PT_L5B 15.5563841246 Hz 169.0 89.4 248 PV_L5 7.6225012629 Hz 235.0 89.4 248 SOM_L5 10.5993360756 Hz 83.0 89.4 496 IT_L6 1.8717976474 Hz 95.0 89.4 496 CT_L6 2.14241899401 Hz 81.0 89.4 124 PV_L6 7.30677635852 Hz 106.0 89.4 124 SOM_L6 9.56195424695 Hz Plotting recorded cell traces … Done; plotting time = 0.93 s

Total time = 133.78 s

16jul19 Allen Brain example

emails

— Hi Salvador,

Thank you very much for talking to us today. That was helpful.

As a next step for us, I wonder if I could ask you to give a quick tutorial to a larger group of folks here at the Allen Institute. We have a meeting with several more people planned for Wednesday, July 20, at 3 pm Pacific time. Would you be able to talk to us then?

I am thinking that perhaps you could speak for 20 minutes or so, and then do 10 minutes of Q&A.

It would be most helpful if for this tutorial you could walk us through a specific example of NetPyNE workflow applied to a question that’s of interest to us. It doesn’t have to be a fully finished example – could be pseudo-code in part. We would just really benefit from seeing how you are doing these operations.

Here’s the example I have in mind.

  1. Let’s say you have 5 or so neuron types, each represented by a particular cell model and a morphology. Let’s build a model consisting of 10,000 neurons that are replicas of these 5 models.
  2. Distribute the 10,000 cells in space.
  3. Connect the neurons using cell-type specific connectivity rules (let’s say, probabilistic rules based on distances between somata).
  4. Establish feedforward inputs into the 10,000 cells representing spike trains incoming from other parts of the brain. Use arbitrary spike trains (e.g., experimental recordings), if possible.
  5. Save the constructed system to a file. Read the file into NetPyNE and run the simulation.
  6. Visualize the output.

Do you think it’s feasible? Sorry for a short notice, and please do let me know if this is too much to ask :).

Many thanks,

Anton.

— Hi Anton,

Yes, I think its feasible.

Were you interested in any particular cell models / number of compartments? any ones available in hoc or python (eg. from ModelDB) can be directly imported. Of course 10k cells with very detailed morphologies might take a while to simulate. In our M1 sim we combine Izhikevich point neurons, 5-compartment, and full >170 compartment cells. Let me know if you’d be interested in seeing an example of this.

Any requirements in terms of subcellular connectivity? Want single soma to soma synapse, or multiple synapses per cell-to-cell connection? Currently can provide specific section/location for each synapse, or can choose to distribute uniformly across a set of sections or sectionList (also about to release feature to distribute synapses over dendritic tree based on more complex patterns, eg. 1D or 2D density maps).

I’ll let you know if I have any issues, but otherwise talk to you Wednesday 3pm (pacific time).

Thanks again for your interest. Salva

— Sounds great, Salvador. Thank you very much.

We typically use models with a few hundred compartments. They all can be found here: http://celltypes.brain-map.org/ They are also available on ModelDB.

Let’s not worry about actually running a simulation with 10,000 cells. We are just curious to see the workflow and get an idea about the scale of the problem. If you can estimate how long it will take to run the model-building part, how long it would take to simulate it, and how large are the files in which you would save the model details, that would be great, but don’t worry if it turns out that estimating those numbers is not straightforward.

For the connectivity, we are using multiple synapses per connection (let’s assume, five synapses per connection on average). Synapses should be distributed over dendritic tree, with rules depending on target and source cell types. For example, something like “for this connection type, place synapses on the soma and dendrites within 100 um of path distance from the soma”. Again, please don’t worry about actually instantiating all this. We would just like to see an example of your code that does that.

One additional thought we have is that we could ask you for the code from such a tutorial after we talk on Wednesday. That way we could start playing around with NetPyNE in a meaningful way. Hope that’s OK.

Thanks, and looking forward to chatting soon! Best,

Anton.

results

300x1200x300

create_save.py:

Creating network of 9 cell populations on 1 hosts… Number of cells on node 0: 6477 Done; cell creation time = 11.53 s. Making connections… Number of connections on node 0: 4160680 Done; cell connection time = 617.51 s. Adding stims… Number of stims on node 0: 32385 Done; cell stims creation time = 3.49 s.

Gathering data… Done; gather time = 193.44 s.

Analyzing… Cells: 6477 Connections: 4193065 (647.38 per cell) Saving output as Allen.json … Finished saving!

  • size: 970Mb

load_run.py:

Loading file Allen.json … Done; file loading time = 56.78 s Loading simConfig… simConfig not found in file Allen.json Loading netParams… Loading net… Created 6477 cells Added NEURON objects to 6477 cells Done; re-instantiate net time = 75.16 s Loading simData… simData not found in file Allen.json Recording V_soma from cell 0 Recording V_dend6 from cell 0 Recording V_soma from cell 6045 Recording V_dend6 from cell 6045

380x120x380

create_save.py:

Creating network of 9 cell populations on 1 hosts… Number of cells on node 0: 10392 Done; cell creation time = 17.84 s. Making connections… Number of connections on node 0: 9647815 Done; cell connection time = 1625.43 s. Adding stims… Number of stims on node 0: 51960 Done; cell stims creation time = 6.23 s.

Gathering data… Done; gather time = 563.89 s.

Analyzing… Cells: 10392 Connections: 9699775 (933.39 per cell) Saving output as Allen.json … Finished saving!

  • size: 2.0Gb (crashed)

380x120x380 (less conns)

create_save.py:

Creating network of 9 cell populations on 1 hosts… Number of cells on node 0: 10392 Done; cell creation time = 18.68 s. Making connections… Number of connections on node 0: 6329300 Done; cell connection time = 1388.36 s. Adding stims… Number of stims on node 0: 51960 Done; cell stims creation time = 6.06 s.

Gathering data… Done; gather time = 353.89 s.

Analyzing… Cells: 10392 Connections: 6381260 (614.06 per cell) Saving output as Allen.json … Finished saving!

  • size: 1.4Gb (crashed)

load_run.py:

Loading file Allen_10k.json … Done; file loading time = 99.93 s Loading simConfig… simConfig not found in file Allen_10k.json Loading netParams… Loading net… Created 10392 cells Created 6381260 connections Created 51960 stims Added NEURON objects to 10392 cells Done; re-instantiate net time = 116.31 s Loading simData… simData not found in file Allen_10k.json

Running… Done; run time = 32.55 s; real-time ratio: 0.00.

Gathering data… Done; gather time = 751.81 s.

Analyzing… Cells: 10392 Connections: 6381260 (614.06 per cell) Spikes: 0 (0.00 Hz) Simulated time: 0.0 s; 1 workers Run time: 32.55 s Done; saving time = 0.11 s. Plotting raster… No spikes available to plot raster Plotting recorded cell traces … Plotting 2D representation of network cell locations and connections…

380x120x380 (less conns, with subcell)

16jul20 Matplotlib errors

16aug20 Fixing memory leak when running batch

links

testing different cleaning methods

  • added after iteration:

print 'Memory usage: %s' % resource.getrusage(resource.RUSAGE_SELF).ru_maxrss

  • memory increases in master node after iter

master: 89 mb -> 362 -> 569 -> 800 -> 993 -> 1190 slaves: 90 -> 139 …

  • trying gc.collect() - same
  • tried: replaceItemObj([cell.__dict__ for cell in sim.net.cells], 'h', None)

lowered mem, but crashed – need to gid_clear before

  • with gid_clear before, increasing but less

98 -> 180 -> 230

  • clear recorded stuff

89 -> 180 -> 230 -> 280

  • trying:

import objgraph
objgraph.show_most_common_types()

function 11471 Dict 10633 list 8925 dict 6759 tuple 2616 wrapper_descriptor 1880 weakref 1748 builtin_function_or_method 1288 method_descriptor 1194 type 1094

function 11471 Dict 10658 list 8929 dict 6761 tuple 2616 wrapper_descriptor 1880 weakref 1748 builtin_function_or_method 1288 method_descriptor 1194 type 1094

function 11471 Dict 10597 list 8917 dict 6755 tuple 2616 wrapper_descriptor 1880 weakref 1748 builtin_function_or_method 1288 method_descriptor 1194 type 1094 Dict 63291 list 19046

function 11471 dict 6786 tuple 2616 wrapper_descriptor 1880 weakref 1748 builtin_function_or_method 1288 method_descriptor 1194 type 1094 Setting stimulation (NetCon) weight to 0.02 Memory usage: 181530624 Memory usage: 140632064 Memory usage: 140087296 Memory usage: 140316672

function 11471 list 11433 Dict 10658 dict 9270 tuple 2481 wrapper_descriptor 1895 weakref 1752 builtin_function_or_method 1288 method_descriptor 1196 type 1094

function 11471 list 11421 Dict 10597 dict 9264 tuple 2481 wrapper_descriptor 1895 weakref 1752 builtin_function_or_method 1288 method_descriptor 1196 type 1094

function 11471 list 11429 Dict 10633 dict 9268 tuple 2481 wrapper_descriptor 1895 weakref 1752 builtin_function_or_method 1288 method_descriptor 1196 type 1094 Dict 113384 list 31602

function 11471 dict 9303 tuple 2481 wrapper_descriptor 1895 weakref 1752 builtin_function_or_method 1288 method_descriptor 1196 type 1094 Setting stimulation (NetCon) weight to 0.03 Memory usage: 234979328 Memory usage: 140632064 Memory usage: 140087296 Memory usage: 140316672

  • clear allSimData and allCells

function 11471 Dict 10597 list 8917 dict 6755 tuple 2616 wrapper_descriptor 1880 weakref 1748 builtin_function_or_method 1288 method_descriptor 1194 type 1094

function 11471 Dict 10658 list 8929 dict 6761 tuple 2616 wrapper_descriptor 1880 weakref 1748 builtin_function_or_method 1288 method_descriptor 1194 type 1094

function 11471 Dict 10633 list 8925 dict 6759 tuple 2616 wrapper_descriptor 1880 weakref 1748 builtin_function_or_method 1288 method_descriptor 1194 type 1094 Dict 63291 list 19011

function 11471 dict 6785 tuple 2616 wrapper_descriptor 1880 weakref 1748 builtin_function_or_method 1288 method_descriptor 1194 type 1094 Setting stimulation (NetCon) weight to 0.02 Memory usage: 183537664 Memory usage: 140525568 Memory usage: 140050432 Memory usage: 140353536

function 11471 list 11421 Dict 10597 dict 9264 tuple 2481 wrapper_descriptor 1895 weakref 1752 builtin_function_or_method 1288 method_descriptor 1196 type 1094

function 11471 list 11429 Dict 10633 dict 9268 tuple 2481 wrapper_descriptor 1895 weakref 1752 builtin_function_or_method 1288 method_descriptor 1196 type 1094

function 11471 list 11433 Dict 10658 dict 9270 tuple 2481 wrapper_descriptor 1895 weakref 1752 builtin_function_or_method 1288 method_descriptor 1196 type 1094 Dict 115886 list 31567

function 11471 dict 9302 tuple 2481 wrapper_descriptor 1895 weakref 1752 builtin_function_or_method 1288 method_descriptor 1196 type 1094 Setting stimulation (NetCon) weight to 0.03 Memory usage: 233951232 Memory usage: 140353536 Memory usage: 140525568 Memory usage: 140050432

  • remove allCells and simData objects themselves

Memory usage: 183980032 Dict 52598 list 16498 function 11471 dict 5529 tuple 2616 wrapper_descriptor 1880 weakref 1748 builtin_function_or_method 1288 method_descriptor 1194 type 1094

Memory usage: 232386560 Dict 105193 list 29054 function 11471 dict 8046 tuple 2481 wrapper_descriptor 1895 weakref 1752 builtin_function_or_method 1288 method_descriptor 1196 type 1094

  • objects in sim: ‘allSimData’, ‘simData’, ‘net’, ‘cfg’
  • if no gather, then only increases 2mb / iteration
  • only happens when multiple cores! related to gather
  • leakage when pc.py_alltoall – need to clear vars inside func
  • small leakage when saving - think reduced by clearing vars
  • huge leakage with plotSpikeHist

Paper

Title

NetPyNE: a tool to develop, simulate and analyze data-driven biological neuronal network models

Journal

  • PLOS Comp Biol (similar to X-J Wang paper)

Figures

  • overview of netpyne stages
  • structure of netParams
  • how netParams specs translate to network: a) pops with yfrac, b) cell Model HH (import) vs Izhi, c) conn rules (eg. cell type, yfrac, pop)
  • yfrac-based connectivity illustration - with real data
  • sCRACM-based subcellular example - with real data
  • simulation process - distribution of cells and gathering (cite Neural Comp)
  • RL+STDP … for behavioral task: arm (intervalUpdateFunc)
  • Analysis: raster, hist, LFP, 2dnet, conn matrix
  • Conversion to NeuroML: example of neuroml code? Gepetto/OSB visualization?
  • Example nets: M1, claustrum, RL

Netpyne Models/Users

Github examples folder

Documentation tutorials

M1 network (Salva, Neurosim)

Parallel Neuron paper (Alex, Neurosim)

Claustrum network (Jing, George Augustine)

Sensorimotor network + arm (Cliff, Kerr lab)

Grant

16aug17 11:00 MBL bill discuss netpyne with hines [my notes]

  • value of dictionaries would be a small lambda function that gives generality
  • two things i’d like to see – notation is reasonable but very descriptive

– seems there would be a better way to organize it [similar to NeuroML but shorter; needs to be standardized format] – what can be additional value added for the user [easy creation, parallel sim and analysis of net; complex conn patterns simplified; subcellular; inputs]

– ultimate would be the Traub model – very compact network level description of the traub model 13 cell types, ~128 projection types – 128 statements – shouldn’t have to write down the same things – pij, weight, delay (cf NQS) [conn matrix can be converted to netpyne conn rules eg. M1; but need to write down same things so have consistent standard format for all nets, eg. some not based on pij, dist-dep, yfrac] – extract all this from the current 15k line code [neuroml version can extract net instance; but will have to check manually for high-level conn rules]

  • lambda function style encourages compactness; or def if used more than once [can use; string-based more compact and same effect; complexity of lambda: have to include arguments (eg. dist, x loc), not same for all rules]
  • on the setup set – instantiation – target cells for a particular rank – only setup on that rank due to the sources, building the netcon depends on location of the source; rules have SRC->TARG info so need to use the net-rendezvous-rank strategy (a common idiom used to resolve these problems)
  • net-rendezvous-rank strategy: used if trying to decide if something exists or not; can’t answer question unless all of the types exist but they exist randomly distributed across all the ranks so have to gather info from all the ranks; where do we meet? – we don’t meet at a single master; we are in double loops (for targ over TARG for src over SRC) – here’s the answer: we rendezvous at target_gid%nhost to decide [can implement in netpyne]
  • bad (but still common) idiom: sometimes useful to do net-rendezvous-rank in 2 phases - build round-robin system first and then throw away and rebuild with balanced distribution
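A pure-Python sketch of the rendezvous idea (the MPI exchange itself is omitted; in NEURON, pc.py_alltoall would perform the actual transfer). Each rank buckets the (src, targ) pairs it knows about by targ_gid % nhost, so exactly one rank ends up owning all information about each target gid:

```python
# Sketch of the net-rendezvous-rank idiom (MPI exchange omitted).
nhost = 4  # hypothetical number of MPI ranks

def rendezvous_rank(targ_gid, nhost):
    """Rank at which all info about a given target gid is gathered."""
    return targ_gid % nhost

# each rank buckets its locally known (src, targ) pairs by rendezvous rank
local_conns = [(0, 5), (2, 5), (1, 6), (3, 9)]  # hypothetical local data
outbox = {rank: [] for rank in range(nhost)}
for src, targ in local_conns:
    outbox[rendezvous_rank(targ, nhost)].append((src, targ))
# after the all-to-all exchange, rank 1 owns every pair targeting
# gids 1, 5, 9, ...; rank 2 every pair targeting gids 2, 6, 10, ...
```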
  • I mentioned mark hereld example of an interactive HPC (also cf vslice) sim that is managed thru a remote GUI [netpyne GUI, OSB gepetto, new NEURON GUI, NSG]
  • change name? – NetPyNe sounds too much like PYNN – mh also works with Andrew Davison so would like to avoid the apparent overlap; domain of PYNN – metasimulator to construct models in NEURON, NEST, BRIAN, using neuromorphic simulators (spinnaker, analog VLSI); NeuroML, MUSIC (multi-simulator-coordinator == lower-level data interchange) in somewhat similar domains [mostly phonetic similarity; many python packages have py so similar; discuss with Andrew in Janelia]

MBL meeting

Reverse engineering

  • inverse problem: approximation, computer-aided design
  • use GA to aid in the inverse problem
  • netpyne can generate initial attempt to inverse network; then user can update the randomizer functions etc based on the source code
  • modelview gets lowest-hanging fruit, but netpyne could do more sophisticated analysis to extract features
  • network

– find cell types – connections to other types etc – given projection is there a constant/pattern that can be reverse engineered, convergence

  • Try out example of reverse engineering – maybe CA3 model
  • reverse engineer data - eg. M1 model have physio + conns and turn into model
  • netpyne can import cells from hoc template or cell class instance
  • people have done before – create model that generates data, and from data use Bayesian learning to infer params
  • 1000 noisy channels, get data and do experiments to get back params of sodium channels back
  • eg. get maximal conductance param from data - maximum likelihood (won’t be same since noise)
  • hidden parameters that have to be informed - markov model
  • process noise + experimental noise - log likelihood
  • always wrong - only approximation
  • 3 approaches:

– 1) current modeldb - get approx same rule – 2) GA - get different rule from original – 3) BRAIN (reality)

Evolutionary

  • What to optimize

– 1) GA - Learning params (eg. learning rate) – 2) High-level specs (eg. prob of conn) – 3) Network instance params (eg. explicit weights)

  • What to invert from GA generated network

– 1) High-level specs (eg. prob of conn) – 2) Dynamical properties (can be used to fit)

Aim 1

  • Brett et al

grant approach

  • we have done this in the last 5 years - can we do this for the next 5 years
  • significance
  • innovation
  • investigator

redefine aims

  • Aim 1: front end

– GUI – dendritic connectivity

  • Aim 2: back end

– rendezvous

  • not good - front vs back
  • make Aim 2: parallelization

– parallelize setup - rendezvous – load balance – parameter optimization (evol etc)

subworlds

  • set of nodes working on same networks
  • can have 10 subworlds, each for 1 net
  • bulletin board - old 1994 - currently master but could be done as distributed bulletin board

GUI

  • interface to create network
  • simtracker managing of sims
  • simtracker summary of model data

(moved to Github issues - 16may31)

- unify cellmodel

  • cell prop conditions:

– cellType=RS; cellModel=HH – cellType=RS; cellModel=Izhi

  • provide option to insert distributed mechanism or point process
  • single cell class
  • then add support for 2007a - spikes+record from izh._ref_V
  • INCLUDE BOTH HH AND IZHI IN SAME RULE AND USE CELLMODEL TAG IN CELL TO CHOOSE WHICH TO USE!!

- Overwrite files optional (timestamp)

- Default simConfig

- Import cells with generic set of arguments

- Use mechanismType to make importCell more robust

https://www.neuron.yale.edu/neuron/static/new_doc/modelspec/programmatic/mechtype.html https://www.neuron.yale.edu/neuron/static/new_doc/programming/mechstan.html http://neurosimlab.org/ramcd/pyhelp/mechstan.html#MechanismStandard - 2nd example

  • use params with initial underscore ‘_var’ to indicate used by netpyne, not mechanism/pointprocess variable - add to docu!

Check Izhi2007a

Distinguish syns from other point processes

  • no way to do it
  • assume ‘syn’ in name – establish naming convention – added to doc
  • syn: true, false, false (.is_netcon_target() = True; .has_net_event(i) = False; .is_artificial() = false)
  • artcell: true, true?, true
  • izh2007a: true, true, false
  • izhi2007b: true, false, false

MechanismType.is_netcon_target() Syntax: boolean = mt.is_netcon_target(i) Description: The i’th point process has a NET_RECEIVE block and can therefore be a target for a NetCon object.

MechanismType.has_net_event() Syntax: boolean = mt.has_net_event(i) Description: The i’th point process has a net_event call in its NET_RECEIVE block and can therefore be a source for a NetCon object. This means it is NetCon stimulator or that the point process can be used as an artificial neural network cell.

MechanismType.is_artificial() Syntax: boolean = mt.is_artificial(i) Description: The i’th point process is an ARTIFICIAL_CELL and can therefore be a source for a NetCon object. This means it is NetCon stimulator or that the point process can be used as an artificial neural network cell.

This seems to have, but does not have, equivalent functionality to has_net_event() and was introduced because ARTIFICIAL_CELL objects are no longer located in sections. Some ARTIFICIAL_CELLs such as the PatternStim do not make use of net_event in their implementation, and some PointProcesses do use net_event and must be located in sections for their proper function, e.g. reciprocal synapses.
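The flag triples listed above suggest a simple classification heuristic; a sketch (function name hypothetical, flag combinations taken from the notes):

```python
def classify_pointp(is_netcon_target, has_net_event, is_artificial):
    """Classify a point process from the three MechanismType flags,
    mirroring the (netcon_target, net_event, artificial) triples above."""
    if is_artificial:
        return 'artificial cell'  # e.g. NetStim-like ARTIFICIAL_CELLs
    if is_netcon_target and has_net_event:
        return 'hybrid'           # e.g. Izhi2007a: spike source in a section
    if is_netcon_target:
        return 'synapse'          # e.g. ExpSyn, Izhi2007b
    return 'other'
```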

Make list of data saved part of simCfg

  • simConfig, netParams, net, simData
  • added to docu

Test importing Friesen, Mainen, Traub cell models

  • Read python names in importCell

– use dir; check if section; compare with secList

  • ‘synReceptor’ check for syn type not name

Associate_gid from axon

  • section[‘spikeGenLoc’] = 0.5

HDF5

Saving output as example-20160108_200403.hdf5… Traceback (most recent call last): File “init.py”, line 24, in <module> f.sim.saveData() # save params, cell info and sim output to file (pickle,mat,txt,etc) File “/usr/site/nrniv/pypkg/netpyne/sim.py”, line 346, in saveData hickle.dump(dataSave, f.cfg[‘filename’]+’.hdf5’, mode=’w’) File “/u/salvadord/anaconda/lib/python2.7/site-packages/hickle.py”, line 376, in dump dumper(obj, h5f, **kwargs) File “/u/salvadord/anaconda/lib/python2.7/site-packages/hickle.py”, line 300, in dump_dict _dump_dict(obj, hgroup, **kwargs) File “/u/salvadord/anaconda/lib/python2.7/site-packages/hickle.py”, line 282, in _dump_dict _dump_dict(dd[key], new_group, **kwargs) File “/u/salvadord/anaconda/lib/python2.7/site-packages/hickle.py”, line 269, in _dump_dict hgroup.create_dataset(“%s” % key, data=dd[key], **kwargs) File “/u/salvadord/anaconda/lib/python2.7/site-packages/hickle.py”, line 88, in create_dataset return super(H5GroupWrapper, self).create_dataset(*args, **kwargs) File “/u/salvadord/anaconda/lib/python2.7/site-packages/h5py/_hl/group.py”, line 94, in create_dataset dsid = dataset.make_new_dset(self, shape, dtype, data, **kwds) File “/u/salvadord/anaconda/lib/python2.7/site-packages/h5py/_hl/dataset.py”, line 79, in make_new_dset tid = h5t.py_create(dtype, logical=1) File “h5t.pyx”, line 1389, in h5py.h5t.py_create (h5py/h5t.c:13046) File “h5t.pyx”, line 1463, in h5py.h5t.py_create (h5py/h5t.c:12893) TypeError: Object dtype dtype(‘O’) has no native HDF5 equivalent

Traceback (most recent call last): File “init.py”, line 24, in <module> f.sim.saveData() # save params, cell info and sim output to file (pickle,mat,txt,etc) File “/usr/site/nrniv/pypkg/netpyne/sim.py”, line 348, in saveData hdf5storage.write(dataSave, filename=f.cfg[‘filename’]+’.hdf5’) File “build/bdist.macosx-10.5-x86_64/egg/hdf5storage/__init__.py”, line 1399, in write

File “build/bdist.macosx-10.5-x86_64/egg/hdf5storage/__init__.py”, line 1318, in writes

File “build/bdist.macosx-10.5-x86_64/egg/hdf5storage/lowlevel.py”, line 114, in write_data File “build/bdist.macosx-10.5-x86_64/egg/hdf5storage/Marshallers.py”, line 1557, in write NotImplementedError: Dictionaries with non-unicode keys are not supported: ‘netParams’

Enable providing num spikes in Netstim (paper)

Set v_init before calling init() (paper)

  • netParam param
  • ‘vinit’ param in section
  • automatically set vinit=vr for izhis - not needed cause fixed mod
  • add to doc

Conv and div conn (paper)

  • made conn very flexible by allowing function for: prob, conv, div, weight and delay
  • funcs can include spatial/distance variables; and others defined in netParams
  • can implement a large range of functionalities with very generic implementation
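For example, a distance-dependent connectivity rule using a string-based function might look like the following sketch (key names follow NetPyNE conventions, but treat the exact strings and variables as illustrative):

```python
# Sketch of a connectivity rule with string-based functions; the string
# would be evaluated per pre/post cell pair with spatial variables
# (e.g. dist_3D) and netParams-defined constants (e.g. propVelocity) bound.
connRule = {
    'preConds': {'pop': 'PYR'},
    'postConds': {'pop': 'PYR'},
    'probability': '0.1 * exp(-dist_3D / 100.0)',  # distance-dependent prob
    'weight': 0.01,
    'delay': 'dist_3D / propVelocity',             # conduction delay
}
```

The same pattern covers convergence and divergence, since those keys accept functions of the same variables.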

Add x,z,xnorm,znorm positions (paper)

Plot cell from specific population (paper)

  • and or one of each population

Allow to define xrange and zrange of pops

  • also xfracrange and zfracrange
  • for fixed cell density, easy, just don’t calculate rand
  • for functional density more complex
  • make generic so no distinction between coordinates

Fix NetStim connectivity

  • no convergent or divergent; only full and prob

fix so lambda funcs saved as string

setup.py pip install (paper)

timing (paper)

Record subset of time, and only record traces for subset of cells

  • recordTraces = [‘all’, ‘P’, 1]
  • plotTraces = [‘all’, ‘P’, 1]
  • pops means just 1 cell of that pop!
  • automatically add elements of plotTraces to recordTraces!!

Fix pops starting with same characters

EMstim == EM !!

Add support for SectionLists (paper)

  • also when importing

Rename point process ‘_type’ -> ‘mod’, ‘_loc’ -> ‘loc’

Import synapses into synMechParams

Cell position in ellipsoid

Subcellular synaptic connectivity (paper)

  • distribution of syns across segments based on empirical map
  • concept of logical connection subsumes subset of unitary connections

simple LFP (paper)

  • simple voltage sum

Plot conn matrix (paper)

  • by cells and populations

import/export NeuroML (paper)

  • padraig (see notes above)

Functions to load simConfig, netParams, net, simData (paper)

importCell can modify h variables (eg. celsius) (paper)

  • maybe just note in documentation that shouldn’t modify h.vars - undesired effects

import python lists of sections (equivalent hoc sectionLists)

Plasticity (paper?)

  • Hebbian
  • STDP

Stimulation - iclamp,vclamp, opto, microelectrode (paper?)

  • maybe iclamp for paper?
  • can implement via pointp in cell properties? - no, so can assign to multiple
  • create new struct stimParams

netParams['stimParams'] = {'sourceList': [], 'stimList': []}
netParams['stimParams']['sourceList'].append({'label': 'Input_1', 'type': 'IClamp', 'delay': 0.1, 'dur': 0.5, 'amp': 0.5})
netParams['stimParams']['stimList'].append({'source': 'Input_1', 'sec': 'soma', 'loc': 0.5, 'conditions': {'popLabel': 'PYR', 'cellList': [0,5]}})

in NeuroML:

<pulseGenerator id="Input_1" delay="0.1s" duration="0.5s" amplitude="6.0E-10A"/> <pulseGenerator id="Input_0" delay="0.1s" duration="0.5s" amplitude="1.0E-10A"/>

<inputList id="Input_1" component="Input_1" population="pyramidals"> <input id="0" target="../pyramidals/0/pyr_4_sym" destination="synapses"/> </inputList>

Upload examples to netpyne repo (paper)

  • Evol params batch
  • virtual arm with RL+STDP
  • robot control

User input error checking

eg. getattr(a,b, DEFAULT VALUE!)
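For example, a safe read of a possibly-missing config attribute (names hypothetical):

```python
# Minimal illustration of the pattern above: read an attribute with a
# safe default instead of risking AttributeError on user input.
class SimConfig:
    duration = 500.0  # ms

cfg = SimConfig()
duration = getattr(cfg, 'duration', 1000.0)  # attribute exists -> 500.0
dt = getattr(cfg, 'dt', 0.025)               # missing -> default 0.025
```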

Travis CI

test automatically examples

Set positions in Neuron’s 3D space

LFPy

cliff?

Save plots to file

Conn/props just in master node

normalized transfer entropy (nTE)

cliff?

Granger causality

cliff?

3D animation of network

  • use johns code

Add support for point neuron outside of section

eg. IntFire1,2,3,4. ARTIFICIAL_CELL means that this model not only has a NET_RECEIVE block and does NOT have a BREAKPOINT, but is not associated with a section location or numerical integrator; i.e. it does not refer to v or any ions or have a POINTER. It is entirely isolated and depends on discrete events from the outside to affect it, and affects the outside only by sending discrete events.

  • outside of section so need different method - maybe not worth?

RxD

/u/samn/m1dyst -> models/m1dyst https://www.neuron.yale.edu/neuron/static/docs/rxd/index.html

recreating spikes

Setting up the re-simulation utilizes {\tt PatternStim} to feed the spikes back into the single cell. The setup code is

pattern = h.PatternStim()
pattern.fake_output = 1
pattern.play(tvec, idvec)

Saving at intervals

modify instantiated network

  • don’t destroy NEURON objects, so can interact with them
  • load net and provide new set of params

Add sim options for efficient parallel sim

  • add timing of computational vs exchange
  • load balance
  • cvode.cache_efficient
  • compress spikes
  • bin queue

Save space using __slots__

http://tech.oyster.com/save-ram-with-python-slots/
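A minimal illustration of the __slots__ technique from that link: declaring __slots__ removes the per-instance __dict__, which saves memory when very large numbers of cell or connection objects are created.

```python
class PlainCell(object):
    """Ordinary class: every instance carries a __dict__."""
    def __init__(self, gid):
        self.gid = gid

class CompactCell(object):
    """__slots__ variant: attributes live in fixed slots, no __dict__."""
    __slots__ = ('gid', 'x', 'y', 'z')
    def __init__(self, gid, x=0.0, y=0.0, z=0.0):
        self.gid = gid
        self.x, self.y, self.z = x, y, z

c = CompactCell(7)
# c.newattr = 1  # would raise AttributeError: no __dict__ to put it in
```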

Make simulator-indep using NeuroML internal format

GUI

Test adaptive exp cell (no Neuron code)

student doing

Make opensource with contributions from community

There are several ways to contribute to PyNN:

reporting bugs, errors and other mistakes in the code or documentation; making suggestions for improvements; fixing bugs and other mistakes; adding or maintaining a simulator backend; major refactoring to improve performance, reduce code complexity, or both.

Replace search in list with next()

a = next((i for i in userInput if i in wordsTask), None)

Netstim with different types of random input

  • emphasize alpha, beta, gamma
  • external pre-recorded signal

Allow range of properties

eg. 'gnabar': [0.11, 0.13] and each cell being assigned a random value within that range
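A sketch of how such a range could be expanded into per-cell values (function name and seeding are hypothetical; uniform sampling assumed):

```python
import random

def expand_param(value, ncells, seed=1):
    """Expand a [low, high] parameter range into ncells values,
    sampled uniformly; scalars are simply repeated."""
    rng = random.Random(seed)  # seeded for reproducibility across runs
    if isinstance(value, (list, tuple)) and len(value) == 2:
        lo, hi = value
        return [rng.uniform(lo, hi) for _ in range(ncells)]
    return [value] * ncells

gnabar_values = expand_param([0.11, 0.13], 5)  # one value per cell
```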

Wrappers to change properties interactively both of NEURON+netpyne object

maybe can define generic wrappers for some of the most common cases; maybe include within same global wrapper that checks which needs to be used

generate interactive net struct from code

  • collapse/expand

http://neurosimlab.org/netpyne/_images/netstruct.png

add units

  • uS for synMech weight

Netstim default start

Plot cell subset of each population, indicating relative index

? Connection function with specific connections

  • fromList
  • relativePreIds
  • relativePostIds
  • connList

Update Cm of section based on C of Izhi

Add option to save different data (specs, instance, output) to different file names

including dat (txt) format