Model Collections¶
This document explains the different categories of models in the CANNs library and how to extend them.
Overview¶
The models module (canns.models) implements various CANN architectures and their variants. Models are organized into three categories:
- Basic Models (canns.models.basic): Standard CANN implementations and variants
- Brain-Inspired Models (canns.models.brain_inspired): Models with biological learning mechanisms
- Hybrid Models (canns.models.hybrid): Combinations of CANNs with artificial neural networks
All models are built on BrainPy’s dynamics framework, which provides state management, time stepping, and JIT compilation capabilities.
Basic Models¶
Basic models implement the canonical, mathematically tractable continuous attractor neural network known as the Wu-Amari-Wong (WAW) model [5, 6, 7, 8], as described in the theoretical neuroscience literature. They use predefined connectivity patterns (typically Gaussian kernels) and fixed parameters.
Available Basic Models¶
Models are organized by module files in canns.models.basic:
Origin CANN (cann.py)¶
Core continuous attractor neural network implementations.
- CANN1D: One-dimensional continuous attractor network. Defaults to 512 neurons arranged on a ring with Gaussian recurrent connections. Suitable for encoding head direction [3] and other angular variables.
- CANN1D_SFA: CANN1D with Spike Frequency Adaptation. Adds activity-dependent negative feedback, enabling self-sustained wave propagation. Useful for modeling intrinsic dynamics.
- CANN2D: Two-dimensional continuous attractor network with neurons arranged on a torus. Suitable for place field encoding [1] and spatial variables.
- CANN2D_SFA: CANN2D with Spike Frequency Adaptation. Supports 2D traveling waves.
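For orientation, a minimal usage sketch (a hypothetical snippet; the exact constructor arguments may differ, so check the class docstrings):

import brainpy.math as bm
from canns.models.basic import CANN1D

bm.set_dt(0.1)                        # simulation time step
cann = CANN1D()                       # defaults to 512 neurons on a ring
stim = cann.get_stimulus_by_pos(0.0)  # Gaussian bump centered at position 0
cann.update(stim)                     # advance the dynamics by one step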
Hierarchical Path Integration Model (hierarchical_model.py)¶
Hierarchical models [19] combining multiple cell types for spatial cognition.
- GaussRecUnits: Recurrent units with Gaussian connectivity.
- NonRecUnits: Non-recurrent units for comparison.
- BandCell: Band cell for 1D path integration.
- GridCell: Single grid cell [2] module with multiple scales.
- HierarchicalPathIntegrationModel: Full path integration [4] system with grid and place cells.
- HierarchicalNetwork: Combines multiple cell types for spatial cognition.
Theta Sweep Model (theta_sweep_model.py)¶
Models designed for theta rhythm [11, 14] analysis and spatial navigation studies [9, 10, 11].
Implementing Basic Models¶
Every basic model inherits from canns.models.basic.BasicModel or canns.models.basic.BasicModelGroup.
Constructor Setup¶
Call the parent constructor with the total neuron count:
super().__init__(math.prod(shape), **kwargs)
Store shape information in self.shape and self.varshape for proper dimensional handling.
Required Methods¶
- Connection Matrix (make_conn()): Generate the recurrent connection matrix. A typical implementation uses Gaussian kernels:
  - Compute pairwise distances between neurons
  - Apply a Gaussian function with a specified width
  - Store the result in self.conn_mat
  See src/canns/models/basic/cann.py for reference implementations.
- Stimulus Generation (get_stimulus_by_pos(pos)): Convert feature-space positions into external input patterns. Called by task modules to generate neural inputs:
  - Takes position coordinates as input
  - Returns a stimulus vector matching the network size
  - Uses a Gaussian bump or similar localized pattern
- Update Dynamics (update(inputs)): Define single-step state evolution:
  - Read the current states
  - Compute derivatives from the CANN equations
  - Apply the time step: new_state = old_state + derivative * bm.get_dt()
  - Write the updated states
- Diagnostic Properties: Expose useful information for analysis:
  - self.x: Feature-space coordinates
  - self.rho: Neuron density
  - Peak detection methods for bump tracking

A minimal sketch combining these methods appears below.
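The following sketch assumes the BasicModel base class described above; the parameters (tau, a, J0) and the simplified rectified-square dynamics are illustrative, not the library's exact WAW implementation:

import math
import brainpy.math as bm
from canns.models.basic import BasicModel

class MyCANN1D(BasicModel):
    def __init__(self, num=512, tau=1.0, a=0.5, J0=4.0, **kwargs):
        super().__init__(math.prod((num,)), **kwargs)  # total neuron count
        self.shape = (num,)
        self.varshape = (num,)
        self.tau, self.a, self.J0 = tau, a, J0
        # Diagnostic properties: feature-space coordinates and neuron density
        self.x = bm.linspace(-bm.pi, bm.pi, num)
        self.rho = num / (2 * bm.pi)
        self.conn_mat = self.make_conn()     # recurrent connectivity
        self.u = bm.Variable(bm.zeros(num))  # synaptic input state

    def _ring_dist(self, d):
        # Wrap differences onto the ring [-pi, pi]
        return bm.where(d > bm.pi, d - 2 * bm.pi,
                        bm.where(d < -bm.pi, d + 2 * bm.pi, d))

    def make_conn(self):
        # Gaussian kernel over pairwise ring distances
        d = self._ring_dist(self.x[:, None] - self.x[None, :])
        return self.J0 * bm.exp(-0.5 * (d / self.a) ** 2) / (bm.sqrt(2 * bm.pi) * self.a)

    def get_stimulus_by_pos(self, pos):
        # Gaussian bump centered at the feature-space position `pos`
        d = self._ring_dist(self.x - pos)
        return bm.exp(-0.25 * (d / self.a) ** 2)

    def update(self, inputs):
        r = bm.square(bm.maximum(self.u, 0.0))    # rectified-square firing rate
        rec = self.conn_mat @ r / self.rho        # recurrent drive
        du = (-self.u + rec + inputs) / self.tau  # CANN-style dynamics
        self.u.value = self.u + du * bm.get_dt()  # Euler step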
Brain-Inspired Models¶
Brain-inspired models use biologically plausible learning mechanisms. Unlike basic models with fixed weights, these networks modify their connectivity through local, activity-dependent plasticity.
Key Characteristics¶
- Local Learning Rules: Weight updates depend only on pre- and post-synaptic activity
- No Error Backpropagation: Learning happens without explicit error signals
- Energy-Based Dynamics: Network states evolve to minimize an energy function
- Attractor Formation: Stored patterns become fixed points of the dynamics
Available Brain-Inspired Models¶
- AmariHopfieldNetwork: Classic associative memory model [21, 23] with binary pattern storage, Hebbian learning [22] for weight formation, and content-addressable memory.
- LinearLayer: Linear layer with learnable weights for comparison and testing. Supports various unsupervised learning rules, including Oja’s rule [25] for principal component extraction and Sanger’s rule [26] for multiple principal components.
- SpikingLayer: Spiking neural network layer with biologically realistic spike dynamics.
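A hypothetical usage sketch (the constructor and update() signatures here are assumptions, not the library's confirmed API):

import brainpy.math as bm
from canns.models.brain_inspired import AmariHopfieldNetwork

# Five random binary (+1/-1) patterns over 100 units
patterns = bm.where(bm.random.rand(5, 100) > 0.5, 1.0, -1.0)

net = AmariHopfieldNetwork(100)       # assumed: network size as first argument
net.apply_hebbian_learning(patterns)  # store patterns via Hebbian learning
for _ in range(10):                   # relax toward the nearest attractor
    net.update()                      # assumed: no external input required
print(net.energy)                     # energy decreases as the state settles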
Implementing Brain-Inspired Models¶
Inherit from canns.models.brain_inspired.BrainInspiredModel or canns.models.brain_inspired.BrainInspiredModelGroup.
State and Weight Variables¶
Define state variables and trainable weights:
- self.s: State vector (bm.Variable)
- self.W: Connection weights (bm.Variable)
All state and weight variables use bm.Variable in BrainPy.
Weight Attribute¶
If weights are stored under a different name, override the weight_attr property:
@property
def weight_attr(self):
    return 'W'  # or a custom attribute name
Update Dynamics¶
Define the state evolution under the current weights in update(...). This typically involves a matrix-vector multiplication followed by an activation function; see the combined sketch at the end of this section.
Energy Function¶
Return a scalar energy value for the current state. Trainers use this to monitor convergence:
@property
def energy(self):
    return -0.5 * self.s @ self.W @ self.s
Hebbian Learning¶
Optionally implement custom weight updates in apply_hebbian_learning(patterns). If not provided, the trainer uses the default outer-product rule:
W += learning_rate * patterns.T @ patterns
Dynamic Resizing¶
Optionally support changing the network size while preserving learned structure via resize(num_neurons, preserve_submatrix).
See src/canns/models/brain_inspired/hopfield.py for a reference implementation.
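Pulling these pieces together, here is a minimal Hopfield-style sketch; the BrainInspiredModel constructor signature and the synchronous sign-activation update are assumptions made for illustration:

import brainpy.math as bm
from canns.models.brain_inspired import BrainInspiredModel

class TinyHopfield(BrainInspiredModel):
    def __init__(self, num, **kwargs):
        super().__init__(num, **kwargs)  # assumed: size as the first argument
        self.num = num
        self.s = bm.Variable(bm.ones(num))          # state vector
        self.W = bm.Variable(bm.zeros((num, num)))  # connection weights

    @property
    def weight_attr(self):
        return 'W'  # weights are stored under self.W

    def update(self):
        # Synchronous update: matrix-vector product + sign activation
        self.s.value = bm.sign(self.W @ self.s)

    @property
    def energy(self):
        # Scalar energy of the current state, used to monitor convergence
        return -0.5 * self.s @ self.W @ self.s

    def apply_hebbian_learning(self, patterns, learning_rate=1.0):
        # Outer-product rule over all patterns, with self-connections removed
        W = self.W + learning_rate * (patterns.T @ patterns) / self.num
        self.W.value = W - bm.diag(bm.diag(W))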
Hybrid Models¶
Note
Hybrid models combine CANN dynamics with other neural network architectures (under development). The vision includes:
- CANN modules embedded in larger artificial neural networks
- Differentiable CANN layers for end-to-end training
- Integration of attractor dynamics with feedforward processing
- Bridging biological plausibility with deep learning capabilities
Current status: Placeholder module structure exists in canns.models.hybrid for future implementations.
BrainPy Foundation¶
All models leverage BrainPy’s [18] infrastructure:
Dynamics Abstraction¶
bp.DynamicalSystem provides:
- Automatic state tracking
- JIT compilation support
- Composable submodules
State Containers¶
- bm.Variable: Universal container for all state variables (mutable, internal, or learnable parameters)
These containers enable transparent JAX [17] transformations while maintaining intuitive object-oriented syntax.
Time Management¶
brainpy.math provides time step management:
- bm.set_dt(0.1): Set the simulation time step
- bm.get_dt(): Retrieve the current time step
This ensures consistency across models, tasks, and trainers.
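For example (a sketch):

import brainpy.math as bm

bm.set_dt(0.1)    # every model now advances in steps of 0.1 time units
dt = bm.get_dt()  # models read the same value inside update(), e.g.:
# new_state = old_state + derivative * dt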
Compiled Simulation¶
bm.for_loop enables efficient simulation:
- JIT compilation for GPU/TPU acceleration
- Automatic differentiation support
- Progress tracking integration
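A sketch of a compiled simulation loop, reusing the hypothetical MyCANN1D from the basic-model sketch above (the progress_bar keyword follows BrainPy’s documented usage):

import brainpy.math as bm

bm.set_dt(0.1)
cann = MyCANN1D(num=512)                      # hypothetical model from above
positions = bm.linspace(-bm.pi, bm.pi, 1000)  # a stimulus sweeping the ring

def step(pos):
    cann.update(cann.get_stimulus_by_pos(pos))  # one step per stimulus frame
    return cann.u.value                         # record the state trajectory

us = bm.for_loop(step, positions, progress_bar=True)  # JIT-compiled loop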
Summary¶
The CANNs model collection provides:
- Basic Models: Standard CANN implementations for immediate use
- Brain-Inspired Models: Networks with local learning capabilities
- Hybrid Models: Future integration with deep learning (in development)
Each category follows consistent patterns through base class inheritance, making the library both powerful and extensible. The BrainPy foundation handles complexity, allowing users to focus on defining neural dynamics rather than implementation details.