Task Generators¶
This document explains the task generation philosophy and available paradigms in the CANNs library.
Overview¶
The task module (canns.task) generates experimental data for CANN simulations, with support for saving, loading, importing, and visualization. It provides standardized paradigms that abstract common experimental scenarios, ensuring reproducibility and convenience. Task generators serve two roles:
Create time-varying external inputs that drive network dynamics
Supply trajectory information for analysis and comparison
Task Categories¶
Tasks are organized into two main categories based on the cognitive function they model.
Tracking tasks simulate scenarios where the network follows an external signal. The bump of activity in the CANN tracks a moving stimulus position.
Population Coding: Network receives static input at a fixed location. Tests basic attractor stability and population representation accuracy.
Template Matching: Network receives brief, possibly noisy inputs. Tests pattern completion and recognition capabilities.
Smooth Tracking (most common paradigm): Network receives continuously moving input signals. Tests dynamic tracking ability with varying speeds and directions.
Available implementations:
SmoothTracking1D: One-dimensional tracking for ring networks
SmoothTracking2D: Two-dimensional tracking for torus networks (under development)
Navigation tasks simulate spatial movement scenarios [4] in which the network receives velocity or heading information rather than direct position inputs, updating its spatial representation through path integration [27, 28].
Closed-Loop Navigation: Network updates its internal representation based on self-motion signals. Feedback from the environment can correct errors.
Open-Loop Navigation: Network integrates velocity inputs without external feedback. Tests path integration capabilities and the accumulation of errors over time.
Note
Navigation tasks do not require direct model coupling because they provide richer data (velocity, angles, etc.) that users interpret based on their specific application.
Model-Task Coupling¶
Why Coupling Exists¶
Tracking tasks require a CANN model instance to be passed during construction:
task = SmoothTracking1D(cann_instance=cann, ...)
Important
This coupling exists because tracking tasks need access to cann.get_stimulus_by_pos(). This method converts abstract position coordinates into concrete neural input patterns that match the network’s encoding scheme.
The coupling provides user convenience:
Automatic stimulus generation matching network topology
Consistent encoding between task and model
Reduced boilerplate for common use cases
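To make the effect of this coupling concrete, the sketch below shows roughly what a tracking task automates. The position array and loop are purely illustrative; only get_stimulus_by_pos() comes from the description above.

```python
import numpy as np

# Illustrative only: without coupling, a user would have to convert every
# abstract position into a neural input pattern by hand.
# `cann` is the model instance created earlier (see the constructor call above).
positions = np.linspace(0.0, np.pi, 100)                        # hypothetical stimulus path
inputs = np.stack([cann.get_stimulus_by_pos(p) for p in positions])

# With coupling, SmoothTracking1D performs this conversion internally, so the
# generated inputs already match the network's encoding and topology.
```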
When Coupling Is Required¶
| Task Type | Requires Model Instance | Data Provided |
|---|---|---|
| Tracking tasks (Population Coding, Template Matching, Smooth Tracking) | ✅ Yes | Input patterns in neural space, used directly as model input |
| Navigation tasks (Closed-Loop Navigation, Open-Loop Navigation) | ❌ No | Velocity, heading, position data; users decide how to convert to neural inputs |
Design Rationale
This distinction reflects the different nature of these paradigms. Tracking involves direct sensory input to the network, while navigation involves internal state updates based on self-motion.
Task Components¶
Tasks are configured through constructor parameters:
Target positions: Where the stimulus appears or moves to
Durations: How long each segment lasts
Time step: Temporal resolution (from bm.get_dt())
Additional parameters: Speed profiles, noise levels, initial conditions
The get_data() method returns:
Input sequence: Array of neural inputs over time (for tracking tasks)
Trajectory information: Position, velocity, time stamps
Metadata: Task parameters for documentation
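A minimal sketch of generating and retrieving this data is shown below. The keyword names (Iext, duration) and the exact return structure are assumptions and may differ from the installed version of the library.

```python
import brainpy.math as bm   # assumption: `bm` is brainpy.math, matching the alias used on this page

bm.set_dt(0.1)                       # temporal resolution picked up by the task via bm.get_dt()

# Keyword names below are illustrative, not the library's confirmed signature.
task = SmoothTracking1D(
    cann_instance=cann,              # model coupling (see above)
    Iext=(0.0, 1.0, 2.0),            # hypothetical target positions, one per segment
    duration=(10.0, 10.0, 10.0),     # hypothetical per-segment durations
)

data = task.get_data()               # input sequence, trajectory information, metadata
```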
Tasks support saving and loading:
save(filename): Store task data for reproducibility
load(filename): Reload previously generated tasks
Standard formats ensure compatibility
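For example (the file name, format, and whether load() operates on an existing task instance are assumptions):

```python
task.save("smooth_tracking_run.npz")   # persist the generated data for reproducibility

# Later, reload the stored data instead of regenerating it.
task.load("smooth_tracking_run.npz")   # assumption: load() restores data into a task object
```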
Feature under development
The library supports importing external trajectories from experimental recordings. This enables:
Replay of real animal movement paths
Validation against experimental data
Comparison of model predictions with neural recordings
Task Usage Patterns¶
Standard Workflow¶
Typical Usage Steps
Create model instance
Configure task with positions and durations
Generate data using get_data()
Run simulation, feeding task inputs to the model
Analyze results by comparing model output to the task trajectory (see the end-to-end sketch below)
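The sketch below strings these steps together. The model class (CANN1D), import paths, keyword names, and the update/readout calls are assumptions used for illustration; substitute the actual API of your installed version.

```python
import numpy as np
import brainpy.math as bm                      # assumption: `bm` is brainpy.math
from canns.models import CANN1D                # hypothetical import path and model class
from canns.task import SmoothTracking1D        # hypothetical import path

bm.set_dt(0.1)                                 # set the global time step first

cann = CANN1D(num=512)                         # 1. create model instance (hypothetical constructor)
task = SmoothTracking1D(                       # 2. configure task with positions and durations
    cann_instance=cann,
    Iext=(0.0, np.pi),                         # hypothetical target positions
    duration=(50.0, 50.0),                     # hypothetical segment durations
)
data = task.get_data()                         # 3. generate inputs and trajectory

states = []
for inp in data.inputs:                        # 4. feed inputs to the model (attribute name assumed)
    cann.update(inp)                           # hypothetical single-step update call
    states.append(np.asarray(cann.r))          # hypothetical firing-rate readout

# 5. analysis: decode the bump position from `states` and compare with data.trajectory
```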
Multiple Trial Generation¶
Tasks support generating multiple trials with:
Same paradigm, different random seeds
Systematic parameter variations
Batch processing capabilities
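Continuing the sketch above, one way to produce several trials that differ only in their random seed, assuming the task's randomness flows through NumPy's global RNG:

```python
import numpy as np

trials = []
for seed in range(5):
    np.random.seed(seed)                       # assumption: task randomness uses NumPy's global RNG
    trial = SmoothTracking1D(cann_instance=cann, Iext=(0.0, np.pi), duration=(50.0, 50.0))
    trials.append(trial.get_data())            # one generated dataset per seed
```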
Parameter Sweeps¶
Combine tasks with analysis pipelines to:
Test model robustness across conditions
Find optimal parameter ranges
Characterize attractor properties
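For instance, a sweep over a single task parameter, with values and keyword names purely illustrative:

```python
# Generate one dataset per condition; the model is then run on each input
# sequence and tracking error is compared across conditions downstream.
sweep = {}
for segment_duration in (20.0, 50.0, 100.0):
    task = SmoothTracking1D(
        cann_instance=cann,
        Iext=(0.0, np.pi),
        duration=(segment_duration, segment_duration),
    )
    sweep[segment_duration] = task.get_data()
```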
Design Considerations¶
Tasks use bm.get_dt() to ensure temporal resolution matches the simulation environment.
Always set the global time step before creating tasks:
bm.set_dt(0.1)                # set the global time step first
task = SmoothTracking1D(...)  # the task reads this value via bm.get_dt()
Tasks operate in abstract feature space (angles, coordinates). The conversion to neural activity patterns is handled by:
model.get_stimulus_by_pos() for direct coupling
User-defined encoding for decoupled scenarios
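For the decoupled navigation case, a user-defined encoder might look like the following sketch, which maps a heading angle onto a Gaussian bump over a ring of preferred directions. The function and its parameters are hypothetical, not part of the library.

```python
import numpy as np

def heading_to_input(theta, pref_angles, width=0.5, amplitude=1.0):
    """Hypothetical encoder: Gaussian bump of input centred on the current heading."""
    delta = np.angle(np.exp(1j * (pref_angles - theta)))   # wrapped angular distance
    return amplitude * np.exp(-0.5 * (delta / width) ** 2)

pref_angles = np.linspace(-np.pi, np.pi, 256, endpoint=False)
inp = heading_to_input(0.3, pref_angles)   # neural input for a heading of 0.3 rad
```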
Custom tasks can be created by:
Inheriting from base task classes
Implementing required data generation methods
Following conventions for output formats
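A rough template is sketched below; the base class name and import path are assumptions, and only get_data() and get_stimulus_by_pos() are taken from this document.

```python
import numpy as np
import brainpy.math as bm                    # assumption: `bm` is brainpy.math
from canns.task import BaseTask              # hypothetical base class and import path

class ConstantInputTask(BaseTask):
    """Custom paradigm sketch: hold the stimulus at one position for a fixed duration."""

    def __init__(self, cann_instance, position, duration):
        super().__init__()
        self.cann = cann_instance
        self.position = position
        self.duration = duration

    def get_data(self):
        n_steps = int(self.duration / bm.get_dt())
        stim = self.cann.get_stimulus_by_pos(self.position)
        # Repeat the same stimulus for every time step; the return format
        # should follow the library's output conventions.
        return {"inputs": np.tile(stim, (n_steps, 1)), "position": self.position}
```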
Summary¶
The task module provides:
1️⃣ Tracking Tasks: Direct stimulus following (population coding, template matching, smooth tracking)
2️⃣ Navigation Tasks: Self-motion integration (closed-loop, open-loop navigation)
3️⃣ Model Coupling: Automatic stimulus generation for tracking tasks
4️⃣ Flexibility: Navigation tasks allow user-defined input interpretation
Tasks abstract experimental paradigms into reusable components—enabling systematic study of CANN dynamics across standardized conditions. The coupling design balances convenience for common cases with flexibility for specialized applications.