# First-Order Approximations of LI Neurons
Until access to neuromorphic hardware becomes more widespread, most neuromorphic algorithms are simulated on “standard” hardware and then ported to neuromorphic hardware. There are plenty of frameworks that can do this. However, it can be instructive to build our own.
We’re going to start by implementing a simulation of the leaky integrate-and-fire (LIF) neurons described on the previous page. We will implement our code in Python.
Recall from last time that we have formula (1) describing the potential of a neuron:

$$\frac{dv}{dt} = \frac{I(t) - v(t)}{\tau} \tag{1}$$

Because we are going to introduce other time constants, we will rename $\tau$ to $\tau_{RC}$.
## Simulating Neurons in Discrete Time
So far, we have a definition of how $v$ changes continuously over time. To simulate this on a computer, we will instead advance time in small, discrete steps of size $\Delta t$. Then, rather than representing a precise time, $t$ will count which step of the simulation we are on, so step $t$ corresponds to the time $t \cdot \Delta t$ seconds.

If our neuron starts at some potential $v(0)$, we can estimate its potential one time step later, $v(\Delta t)$. How do we get this value for $v(\Delta t)$? We started with a definition of how $v$ changes instantaneously (its derivative, given by equation (1)) and used it to estimate how $v$ changes over a short, finite interval. This is known as Euler's method.
**Euler’s Approximation**

If we already know $v(t)$ and its derivative $\frac{dv}{dt}$, we can estimate the value of $v$ a short time $\Delta t$ later:

$$v(t + \Delta t) \approx v(t) + \Delta t \cdot \frac{dv}{dt}$$

In our case, subbing in $\frac{dv}{dt} = \frac{I(t) - v(t)}{\tau_{RC}}$:

$$v(t + \Delta t) \approx v(t) + \Delta t \cdot \frac{I(t) - v(t)}{\tau_{RC}}$$

which can be re-arranged to:

$$v(t + \Delta t) \approx v(t)\left(1 - \frac{\Delta t}{\tau_{RC}}\right) + I(t)\,\frac{\Delta t}{\tau_{RC}}$$

Our estimate is usually most accurate if we use small values for $\Delta t$, since the derivative itself can change over the course of the interval.
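To make this concrete, here is a single Euler step computed directly from these formulas (a quick sketch; the specific values of `tau_rc`, `v`, `I`, and `dt` below are illustrative, not from the text):

```python
tau_rc = 0.2 # Membrane time constant (illustrative value)
v = 1.0      # Current potential, v(t)
I = 0.0      # No input current
dt = 0.001   # Step size

dv_dt = (I - v) / tau_rc # Slope given by the differential equation
v_next = v + dt * dv_dt  # Euler estimate of v(t + dt)
print(v_next)            # 0.995
```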
**Discrete Time Simulations**

Our definitions so far treat $v$ and $I$ as functions of continuous time. In a discrete-time simulation, we will instead write $v[t]$ for the potential at step $t$ of the simulation. Our definition of $v[t]$ follows directly from Euler's approximation above. Similarly, we will estimate our incoming potential ($I$) in discrete time as $I[t]$, the input current at step $t$.

Now, suppose we choose a value of $\Delta t = 0.001$ (one millisecond per step). In discrete time, this means that step $t$ of our simulation represents the state of the neuron $0.001 \cdot t$ seconds after it started.

Now, we have a definition of $v[t]$ that we can compute one step at a time:

$$v[0] = v_{init}$$

$$v[t] = v[t-1]\left(1 - \frac{\Delta t}{\tau_{RC}}\right) + I[t]\,\frac{\Delta t}{\tau_{RC}}$$

Note that in the second equation, we assumed that potential energy gets added immediately and thus replaced $I[t-1]$ with $I[t]$.
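The re-arranged update is just an algebraic reshuffling of the Euler step, which we can check numerically (a sketch with arbitrary example values):

```python
tau_rc, dt = 0.2, 0.001 # Arbitrary example constants
v, I = 0.3, 0.5         # Arbitrary potential and input current

euler = v + dt * (I - v) / tau_rc                    # Direct Euler step
rearranged = v * (1 - dt / tau_rc) + I * dt / tau_rc # Re-arranged form we will use in code
print(abs(euler - rearranged)) # Zero, up to floating-point rounding
```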
So far, our neurons have a few constants that determine their dynamics:

- $\Delta t$, the size of the time steps that we use
- $\tau_{RC}$, which specifies how quickly our potential decays (a larger $\tau_{RC}$ means slower potential decay)
- $v_{init}$, how much potential our neuron starts with
- $v_{th}$, the threshold potential above which our neuron will fire
## Code Implementation
Let’s start to implement our neurons in code. We are going to define a Python class to represent our neuron.
**What is a Python “class”?**

In Python, a class is a blueprint for creating objects. An object is an instance of a class. For example, if we have a class `Dog`, then an object of the class `Dog` is a specific dog, like `Fido` or `Rex`. The class defines the properties (like `name` or `breed`) and methods (like `bark()` or `fetch()`) that all dogs have. When we create an object of the class `Dog`, we can set the properties to specific values (like `Fido` is a “Golden Retriever”) and call the methods (like `Fido.bark()`).
In our case, we will define a class to represent neurons. We can then create individual neurons as “instances” of this class.
It will have a `step()` function to “advance” by one time step. We will call our class `FirstOrderLI` (leaving out the “F” in “LIF” because our neuron does not fire yet). Our class will have two arguments:

- `tau_rc` to represent $\tau_{RC}$. We will also use the instance variable `self.tau_rc` to track this. Default value: `0.2`
- `v_init` to represent $v_{init}$. Default value: `0`

Further, it contains an instance variable `self.v` to represent the latest value of $v[t]$. The `step()` function advances to the next value of $t$ and accepts two parameters: `I` to represent $I[t]$ and `t_step` to represent $\Delta t$. Every time we call the `step()` method, our LIF moves forward by one time step and updates `self.v` accordingly.

So if we have never called `step()`, `self.v` represents $v[0]$; after we call `step()` once, `self.v` represents $v[1]$; after a second call, $v[2]$; and so on.
```python
class FirstOrderLI: # First Order Leaky Integrate
    def __init__(self, tau_rc=0.2, v_init=0): # Default values for tau_rc and v_init
        self.tau_rc = tau_rc # Set instance variables
        self.v = v_init

    def step(self, I, t_step): # Advance one time step (input I and time step size t_step)
        self.v = self.v * (1 - t_step / self.tau_rc) + I * t_step / self.tau_rc
```
In Python, we create an “instance” of this class with `FirstOrderLI()`:

```python
neuron1 = FirstOrderLI()
```
We can also pass in custom values for `tau_rc` and `v_init` to override the defaults:

```python
neuron2 = FirstOrderLI(tau_rc=0.002, v_init=0.1)
print(f"Step 0: {neuron2.v=}")
neuron2.step(0.5, 0.001) # Apply a current of 0.5 for 1 ms
print(f"Step 1: {neuron2.v=}")
neuron2.step(0, 0.001) # Apply no current for 1 ms
print(f"Step 2: {neuron2.v=}")
neuron2.step(0, 0.001) # Apply no current for 1 ms
print(f"Step 3: {neuron2.v=}")
```
```
Step 0: neuron2.v=0.1
Step 1: neuron2.v=0.3
Step 2: neuron2.v=0.15
Step 3: neuron2.v=0.075
```
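To see where the value at `Step 1` comes from, we can redo that update by hand. With `tau_rc=0.002` and `t_step=0.001`, the decay factor is `1 - 0.001/0.002 = 0.5`:

```python
v, I = 0.1, 0.5               # v_init and the applied current
tau_rc, t_step = 0.002, 0.001 # Constants from the example above

v_new = v * (1 - t_step / tau_rc) + I * t_step / tau_rc # 0.1 * 0.5 + 0.5 * 0.5
print(v_new) # 0.3
```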
## Graphing Outputs
Let’s simulate a neuron for some period of time. First, we create a neuron:

```python
neuron = FirstOrderLI(v_init = 0.6) # Create a new instance with v_init = 0.6
```

Then, we’ll simulate the neuron over the course of two seconds, recording `neuron.v` at every step in the list `v_history`.
Note: we’re also going to use the `numpy` library’s `arange` function to create a list of time steps from `0` to `2` seconds.
```python
import numpy as np

T_step = 0.001 # Time step size
duration = 2.0 # Duration of the simulation: 2 seconds
I = 0 # No input current
v_history = []

times = np.arange(0, duration, T_step) # Create a range of time values
for t in times: # Loop over time
    v_history.append(neuron.v) # Record the neuron's potential
    neuron.step(I, T_step) # Advance one time step

print(v_history[:5], "...", v_history[-5:]) # Print the first and last 5 potentials
```

```
[0.6, 0.597, 0.594015, 0.5910449249999999, 0.588089700375] ... [2.7239387514018173e-05, 2.7103190576448083e-05, 2.6967674623565844e-05, 2.6832836250448015e-05, 2.6698672069195774e-05]
```
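As a sanity check (a sketch, not part of the original code): with no input current, the first-order equation has the closed-form solution $v(t) = v_{init}\,e^{-t/\tau_{RC}}$, so our simulated values should land close to it:

```python
import numpy as np

tau_rc, v_init = 0.2, 0.6     # Same constants as the simulation above
T_step, duration = 0.001, 2.0 # Same step size and duration

v = v_init
for _ in np.arange(0, duration, T_step):
    v = v * (1 - T_step / tau_rc) # Euler update with I = 0

analytic = v_init * np.exp(-duration / tau_rc) # Closed-form value at t = 2 s
print(v, analytic) # The two agree to within a few percent
```

The small remaining gap shrinks as `T_step` gets smaller, matching the accuracy claim from Euler’s approximation.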
Then, we’ll graph the values in `v_history` using `Matplotlib`:
```python
import matplotlib.pyplot as plt

plt.figure() # Create a new figure
plt.plot(times, v_history) # Plot the neuron's potential over time
plt.xlabel('Time (s)') # Label the x-axis
plt.ylabel('Neuron potential') # Label the y-axis
plt.show() # Display the plot
```

We can see that our neuron’s potential (`neuron.v`) decays exponentially over time. Now, let’s see what happens if we instead use a “square wave” input current. We’ll define a `square_wave` function for this.
```python
def square_wave_func(off_time_interval, on_time_interval=None): # Define a function that returns a square wave
    """
    off_time_interval: time interval for which the square wave is 0
    on_time_interval: time interval for which the square wave is 1
    """
    if on_time_interval is None: # If on_time_interval is not provided, set it equal to off_time_interval
        on_time_interval = off_time_interval
    return lambda t: 0 if (t % (off_time_interval + on_time_interval)) < off_time_interval else 1 # Return a lambda function representing the square wave (off first, then on)
```
```python
square_wave = square_wave_func(1) # Create a square wave function with a period of 2 seconds

times = np.arange(0, 5, 0.01) # Create a range of time values
square_wave_values = [square_wave(t) for t in times] # Compute the square wave values

plt.figure() # Create a new figure
plt.plot(times, square_wave_values, color="grey", linestyle="--") # Plot the square wave
plt.xlabel('Time (s)') # Label the x-axis
plt.ylabel('Amplitude') # Label the y-axis
plt.show() # Display the plot
```

```python
duration = 4 # Duration of the simulation: 4 seconds
neuron = FirstOrderLI() # Create a new instance with default parameters
v_history = []
I_history = []

times = np.arange(0, duration, T_step) # Create a range of time values
for t in times: # Loop over time
    I = square_wave(t) # Get the input current at time t
    neuron.step(I, T_step) # Advance one time step
    v_history.append(neuron.v) # Record the neuron's potential
    I_history.append(I) # Record the input current

plt.figure() # Create a new figure
plt.plot(times, v_history) # Plot the neuron's potential over time
plt.plot(times, I_history, color="grey", linestyle="--") # Plot the input current over time
plt.xlabel('Time (s)') # Label the x-axis
plt.ylabel('Neuron potential / Input current') # Label the y-axis
plt.legend(['Neuron potential', 'Input current']) # Add a legend
plt.show() # Display the plot
```

## Simulation
Look at the code below and try different values for `tau_rc`, `T_step`, and `input_func`. Note how our simulation becomes inaccurate for large values of `T_step`. For example, modify the code so that:

- `tau_rc = 0.100`
- `T_step = 0.200`
- `input_func = "no-input"`

With this combination of parameters, the potential overshoots `0` and oscillates instead of decaying. This happens because the decay factor $(1 - \Delta t / \tau_{RC})$ is negative whenever $\Delta t > \tau_{RC}$, which shows that it’s important to make $\Delta t$ small relative to $\tau_{RC}$.
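We can see this oscillation without any plotting. With `T_step = 2 * tau_rc`, the decay factor `1 - T_step / tau_rc` equals `-1`, so each step just flips the sign of the potential (a minimal sketch):

```python
tau_rc, T_step = 0.100, 0.200 # T_step is too large relative to tau_rc
v = 0.5                       # Starting potential
history = []
for _ in range(4):
    v = v * (1 - T_step / tau_rc) # Euler update with I = 0
    history.append(v)
print(history) # [-0.5, 0.5, -0.5, 0.5]
```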
In the next section, we’ll introduce firing, to make our neurons proper LIF neurons.