Introduction to CANNs

Welcome to the CANNs (Continuous Attractor Neural Networks) library! This notebook provides an introduction to the key concepts and capabilities of this powerful neural network modeling framework.

What are Continuous Attractor Neural Networks?

Continuous Attractor Neural Networks (CANNs) are a class of neural network models that maintain stable activity patterns over a continuous state space. Unlike classical attractor networks (such as Hopfield networks), which settle into a discrete set of stable states, a CANN supports a continuum of attractor states. This makes CANNs particularly good at:

  • Spatial Representation: Encoding continuous spatial positions through population activity (see the sketch after this list)

  • Working Memory: Maintaining information over time and updating it as new input arrives

  • Path Integration: Updating an internal position estimate by integrating movement (velocity) signals

  • Smooth Tracking: Following continuously changing targets
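
To make the idea concrete, here is a minimal, library-independent sketch of population coding: a continuous position is represented by a smooth "bump" of activity across neurons with graded preferred positions, and can be read back out with a population-vector average. The stimulus position, tuning width, and feature-space range below are illustrative assumptions, not values taken from the CANNs library.

[ ]:
# Minimal sketch (NumPy only): encode a continuous position as a bump of
# population activity, then decode it back. All parameter values here are
# illustrative assumptions.
import numpy as np

num = 512                                # number of neurons
x = np.linspace(-np.pi, np.pi, num)      # preferred positions (periodic feature space)
stimulus = 1.0                           # continuous value to encode (assumed)
width = 0.5                              # tuning width (assumed)

# Each neuron's response depends on how close the stimulus is to its preferred position
activity = np.exp(-0.5 * ((x - stimulus) / width) ** 2)

# A population-vector (circular mean) readout recovers the encoded position
decoded = np.angle(np.sum(activity * np.exp(1j * x)))
print(f"Encoded: {stimulus:.3f}  Decoded: {decoded:.3f}")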

Key Features of the CANNs Library

🏗️ Rich Model Library

  • CANN1D/2D: One and two-dimensional continuous attractor networks

  • SFA Models: CANN variants with spike frequency adaptation (SFA)

  • Hierarchical Networks: Multi-layer architectures for complex information processing

🎮 Task-Oriented Design

  • Path Integration: Spatial navigation and position estimation tasks

  • Target Tracking: Smooth continuous tracking of dynamic targets

  • Extensible Framework: Easy addition of custom task types

📊 Powerful Analysis Tools

  • Real-time Visualization: Energy landscapes, neural activity animations

  • Statistical Analysis: Firing rates, tuning curves, population dynamics

  • Data Processing: z-score normalization, time series analysis

⚡ High Performance Computing

  • JAX Acceleration: Efficient, just-in-time compiled numerical computation built on JAX

  • Hardware Acceleration: Runs on CUDA GPUs and TPUs as well as CPUs (a quick device check follows this list)

  • Parallel Processing: Optimized for large-scale network simulations
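
If you want to check which accelerator JAX will actually use on your machine, JAX's own device query is enough. This does not depend on the CANNs library itself; it only assumes JAX is installed, as it is the numerical backend mentioned above.

[ ]:
# Check which devices JAX can see (CPU, GPU, or TPU)
import jax

print(jax.devices())          # e.g. [CpuDevice(id=0)] or a list of CUDA devices
print(jax.default_backend())  # 'cpu', 'gpu', or 'tpu'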

Installation

The CANNs library can be installed using pip with different configurations based on your hardware:

[ ]:
# Install CANNs: uncomment the line that matches your hardware.
# The leading "!" runs the command from a notebook cell; drop it to run in a terminal.

# Basic installation (CPU)
# !pip install canns

# GPU support (Linux, CUDA 12)
# !pip install canns[cuda12]

# TPU support (Linux)
# !pip install canns[tpu]
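
After installation, a quick sanity check is to import the package and the classes used later in this notebook; if these imports succeed, the library is ready to use.

[ ]:
# Quick sanity check: these imports should succeed after installation
import canns
from canns.models.basic import CANN1D
from canns.task.tracking import SmoothTracking1D

print("canns imported successfully")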

Basic Usage Example

Let’s start with a simple example to demonstrate the basic usage of the CANNs library:

[1]:
import brainstate
from canns.models.basic import CANN1D
from canns.task.tracking import SmoothTracking1D
from canns.analyzer.visualize import energy_landscape_1d_animation
import numpy as np

# Set computation environment
brainstate.environ.set(dt=0.1)
print("BrainState environment configured with dt=0.1")
BrainState environment configured with dt=0.1
[ ]:
# Create a 1D CANN network
cann = CANN1D(num=512)
cann.init_state()

print(f"Created CANN1D with {cann.num} neurons")
print(f"Network shape: {cann.shape}")
print(f"Feature space range: [{cann.x.min():.2f}, {cann.x.max():.2f}]")

Understanding the Network Structure

Let’s explore the basic properties of our CANN network:

[ ]:
# Examine the connectivity matrix
import matplotlib.pyplot as plt

# Plot one row of the connectivity matrix (connections from the center neuron)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))

# Plot connectivity pattern
center_idx = cann.num // 2
connectivity_slice = cann.conn_mat[center_idx, :]
ax1.plot(cann.x, connectivity_slice)
ax1.set_title('Connectivity Pattern (from center neuron)')
ax1.set_xlabel('Position')
ax1.set_ylabel('Connection Strength')
ax1.grid(True)

# Plot network positions
ax2.plot(cann.x, np.zeros_like(cann.x), 'ko', markersize=2)
ax2.set_title('Neuron Positions in Feature Space')
ax2.set_xlabel('Position')
ax2.set_ylabel('Neurons')
ax2.set_ylim(-0.5, 0.5)
ax2.grid(True)

plt.tight_layout()
plt.show()

Creating a Simple Tracking Task

Now let’s create a smooth tracking task to see the network in action:

[ ]:
# Define a smooth tracking task
task = SmoothTracking1D(
    cann_instance=cann,
    Iext=(1., 0.75, 2., 1.75, 3.),  # Key positions of the external input
    duration=(10., 10., 10., 10.),   # Duration of each segment between consecutive key positions
    time_step=brainstate.environ.get_dt(),
)

# Get task data
task.get_data()

print(f"Task created with {len(task.data)} time steps")
print(f"Input sequence: {task.Iext}")
print(f"Phase durations: {task.duration}")

Running the Simulation

Let’s run the network simulation and observe its behavior:

[ ]:
# Define simulation step
def run_step(t, inputs):
    cann(inputs)
    return cann.u.value, cann.inp.value

# Run simulation
print("Running simulation...")
us, inps = brainstate.compile.for_loop(
    run_step,
    task.run_steps,
    task.data,
    pbar=brainstate.compile.ProgressBar(10)
)

print(f"Simulation completed!")
print(f"Output shape: {us.shape}")
print(f"Input shape: {inps.shape}")

Visualizing the Results

Now let’s visualize the network activity and see how it tracks the input:

[ ]:
# Plot static snapshots at different time points
fig, axes = plt.subplots(2, 2, figsize=(12, 8))
axes = axes.flatten()

time_points = [0, len(us)//4, len(us)//2, len(us)-1]
titles = ['Initial', 'Quarter', 'Half', 'Final']

for i, (t, title) in enumerate(zip(time_points, titles)):
    axes[i].plot(cann.x, us[t], 'b-', label='Network Activity', linewidth=2)
    axes[i].plot(cann.x, inps[t], 'r--', label='External Input', alpha=0.7)
    axes[i].set_title(f'{title} State (t={t})')
    axes[i].set_xlabel('Position')
    axes[i].set_ylabel('Activity')
    axes[i].legend()
    axes[i].grid(True)

plt.tight_layout()
plt.show()
[ ]:
# Create energy landscape animation
print("Generating energy landscape animation...")
energy_landscape_1d_animation(
    {'u': (cann.x, us), 'Iext': (cann.x, inps)},
    time_steps_per_second=50,
    fps=10,
    title='1D CANN Smooth Tracking Demo',
    xlabel='Position',
    ylabel='Activity',
    save_path='introduction_demo.gif',
    show=True,
)
print("Animation saved as 'introduction_demo.gif'")

Key Observations

From this basic example, you should observe:

  1. Smooth Tracking: The network activity (blue line) smoothly follows the external input (red dashed line)

  2. Continuous Representation: The activity forms a smooth bump that moves continuously through the feature space

  3. Stable Dynamics: The network maintains stable activity patterns even when the input changes

  4. Population Coding: Multiple neurons contribute to representing each position (a short decoding sketch follows this list)
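
To make the population-coding point quantitative, you can decode the bump position at each time step and compare it with the input keypoints. The sketch below uses a population-vector (circular mean) readout and assumes the feature space is periodic over roughly [-π, π], which you can confirm from the feature-space range printed earlier.

[ ]:
# Decode the bump position over time with a population-vector readout.
# Assumption: cann.x is a periodic feature space (approximately [-pi, pi]).
import numpy as np
import matplotlib.pyplot as plt

us_np = np.asarray(us)                 # network activity, shape (time_steps, num)
x_np = np.asarray(cann.x)              # preferred positions, shape (num,)

decoded = np.angle(us_np @ np.exp(1j * x_np))

plt.figure(figsize=(8, 3))
plt.plot(decoded, label='Decoded bump position')
plt.xlabel('Time step')
plt.ylabel('Position')
plt.title('Bump position decoded from population activity')
plt.legend()
plt.tight_layout()
plt.show()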

What’s Next?

This introduction covered the basics of the CANNs library. In the following notebooks, you’ll learn about:

  • Quick Start: Getting up and running quickly with common use cases

  • Core Concepts: Deep dive into the mathematical foundations

  • 1D Networks: Detailed exploration of one-dimensional CANNs

  • 2D Networks: Two-dimensional spatial representations

  • Hierarchical Models: Multi-layer architectures

  • Custom Tasks: Creating your own tasks and experiments

  • Visualization: Advanced plotting and analysis techniques

  • Performance: Optimization and scaling for large simulations

Ready to explore more? Let’s move on to the Quick Start Guide!