src.canns.task.open_loop_navigation¶
Classes¶
- ActionPolicy – Abstract base class for action policies that control agent movement.
- CustomOpenLoopNavigationTask – Template class for policy-based open-loop navigation tasks.
- OpenLoopNavigationData – Container for the inputs recorded during the open-loop navigation task.
- OpenLoopNavigationTask – Open-loop spatial navigation task that synthesises trajectories without incorporating real-time feedback from a controller.
- RasterScanNavigationTask – Preset task for cyclic dual-mode state-aware raster scan exploration.
- StateAwareRasterScanPolicy – State-aware raster scan policy with cyclic dual-mode exploration.
- TMazeOpenLoopNavigationTask – Open-loop navigation task in a T-maze environment.
- TMazeRecessOpenLoopNavigationTask – Open-loop navigation task in a T-maze environment with recesses at stem-arm junctions.
Functions¶
Module Contents¶
- class src.canns.task.open_loop_navigation.ActionPolicy[source]¶
Bases: abc.ABC
Abstract base class for action policies that control agent movement.
Action policies compute parameters for agent.update() at each simulation step, enabling reusable, testable, and composable control strategies.
Example
```python
import numpy as np

class ConstantDriftPolicy(ActionPolicy):
    def __init__(self, drift_direction):
        self.drift = np.array(drift_direction)

    def compute_action(self, step_idx, agent):
        return {'drift_velocity': self.drift}

task = CustomOpenLoopNavigationTask(
    duration=100, action_policy=ConstantDriftPolicy([0.1, 0.0])
)
```
- abstractmethod compute_action(step_idx, agent)[source]¶
Compute action parameters for the current simulation step.
- Parameters:
step_idx (int) – Current simulation step (0 to total_steps-1)
agent (canns_lib.spatial.Agent) – Agent instance (for state-aware policies)
- Returns:
Keyword arguments for agent.update(). Supported keys:
- drift_velocity: np.ndarray of shape (2,) for 2D drift
- drift_to_random_strength_ratio: float (typically 5.0-20.0)
- forced_next_position: np.ndarray of shape (2,)
- Return type:
dict
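As an illustration of the supported keys above, here is a minimal, hypothetical policy that pins the agent to precomputed waypoints and then falls back to a strong rightward drift; the class name and waypoint values are assumptions for the sketch, not part of the library.

```python
import numpy as np

class WaypointPolicy(ActionPolicy):
    """Hypothetical sketch: force the agent through precomputed waypoints,
    then drift right with a strong drift-to-random ratio."""

    def __init__(self, waypoints):
        self.waypoints = np.asarray(waypoints)  # assumed shape (n_waypoints, 2)

    def compute_action(self, step_idx, agent):
        if step_idx < len(self.waypoints):
            # Pin the agent to the next waypoint on this step
            return {'forced_next_position': self.waypoints[step_idx]}
        # Afterwards, drift to the right and suppress random motion
        return {
            'drift_velocity': np.array([0.1, 0.0]),
            'drift_to_random_strength_ratio': 10.0,
        }
```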
- class src.canns.task.open_loop_navigation.CustomOpenLoopNavigationTask(*args, action_policy=None, **kwargs)[source]¶
Bases: OpenLoopNavigationTask
Template class for policy-based open-loop navigation tasks.
This class enables custom action policies by accepting an ActionPolicy object that controls agent movement at each simulation step.
- Parameters:
action_policy (ActionPolicy | None) – ActionPolicy instance controlling agent movement
**kwargs – All other arguments passed to OpenLoopNavigationTask
Example
```python
import numpy as np

# Define a custom policy
class MyPolicy(ActionPolicy):
    def compute_action(self, step_idx, agent):
        return {'drift_velocity': np.array([0.1, 0.0])}

# Use it
task = CustomOpenLoopNavigationTask(
    duration=100, action_policy=MyPolicy()
)
```
- class src.canns.task.open_loop_navigation.OpenLoopNavigationData[source]¶
Container for the inputs recorded during the open-loop navigation task. It contains the position, velocity, speed, movement direction, head direction, and rotational velocity of the agent.
Additional fields for theta sweep analysis:
- ang_velocity: Angular velocity calculated using the unwrap method
- linear_speed_gains: Normalized linear speed gains in [0, 1]
- ang_speed_gains: Normalized angular speed gains in [-1, 1]
- ang_speed_gains: numpy.ndarray | None = None[source]¶
- ang_velocity: numpy.ndarray | None = None[source]¶
- hd_angle: numpy.ndarray[source]¶
- linear_speed_gains: numpy.ndarray | None = None[source]¶
- movement_direction: numpy.ndarray[source]¶
- position: numpy.ndarray[source]¶
- rot_vel: numpy.ndarray[source]¶
- speed: numpy.ndarray[source]¶
- velocity: numpy.ndarray[source]¶
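As a sketch of how the recorded fields line up, assuming data is an OpenLoopNavigationData instance produced by a task run (see get_data() below); the shape comments are assumptions based on the 2D case documented in import_data().

```python
# data: OpenLoopNavigationData produced by a task run (assumed)
positions = data.position   # assumed shape (n_steps, 2): x/y coordinates
speeds = data.speed         # assumed shape (n_steps,): instantaneous speed
headings = data.hd_angle    # assumed shape (n_steps,): head direction in radians
print(positions.shape, speeds.mean(), headings[0])
```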
- class src.canns.task.open_loop_navigation.OpenLoopNavigationTask(duration=20.0, start_pos=(2.5, 2.5), initial_head_direction=None, progress_bar=True, width=5, height=5, dimensionality='2D', boundary_conditions='solid', scale=None, dx=0.01, grid_dx=None, grid_dy=None, boundary=None, walls=None, holes=None, objects=None, dt=None, speed_mean=0.04, speed_std=0.016, speed_coherence_time=0.7, rotational_velocity_coherence_time=0.08, rotational_velocity_std=120 * np.pi / 180, head_direction_smoothing_timescale=0.15, thigmotaxis=0.5, wall_repel_distance=0.1, wall_repel_strength=1.0, rng_seed=None)[source]¶
Bases: src.canns.task.navigation_base.BaseNavigationTask
Open-loop spatial navigation task that synthesises trajectories without incorporating real-time feedback from a controller.
- calculate_theta_sweep_data()[source]¶
Calculate additional fields needed for theta sweep analysis. This should be called after get_data() to add ang_velocity, linear_speed_gains, and ang_speed_gains to the data.
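A short sketch of the documented call order; it assumes get_data() returns the OpenLoopNavigationData container and that the theta-sweep fields are added to that same object.

```python
task = OpenLoopNavigationTask(duration=20.0)
data = task.get_data()             # assumed to return the OpenLoopNavigationData container
task.calculate_theta_sweep_data()  # documented: call after get_data()
# ang_velocity, linear_speed_gains, and ang_speed_gains are now populated
print(data.ang_velocity, data.linear_speed_gains, data.ang_speed_gains)
```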
- get_empty_trajectory()[source]¶
Returns an empty trajectory data structure with the same shape as the generated trajectory. This is useful for initializing the trajectory data structure without any actual data.
- import_data(position_data, times=None, dt=None, head_direction=None, initial_pos=None)[source]¶
Import external position coordinates and calculate derived features.
This method allows importing external trajectory data (e.g., from experimental recordings or other simulations) instead of using the built-in random motion model. The imported data will be processed to calculate velocity, speed, movement direction, head direction, and rotational velocity.
- Parameters:
position_data (np.ndarray) – Array of position coordinates with shape (n_steps, 2) for 2D trajectories or (n_steps, 1) for 1D trajectories.
times (np.ndarray, optional) – Array of time points corresponding to position_data. If None, uniform time steps with dt will be assumed.
dt (float, optional) – Time step between consecutive positions. If None, uses self.dt. Required if times is None.
head_direction (np.ndarray, optional) – Array of head direction angles in radians with shape (n_steps,). If None, head direction will be derived from movement direction.
initial_pos (np.ndarray, optional) – Initial position for the agent. If None, uses the first position from position_data.
- Raises:
ValueError – If position_data has invalid dimensions or if required parameters are missing.
Example
```python
import numpy as np

# Import experimental trajectory data
positions = np.array([[0, 0], [0.1, 0.05], [0.2, 0.1], ...])  # shape: (n_steps, 2)
times = np.array([0, 0.1, 0.2, ...])  # shape: (n_steps,)

task = OpenLoopNavigationTask(...)
task.import_data(position_data=positions, times=times)

# Or with uniform time steps
task.import_data(position_data=positions, dt=0.1)
```
- class src.canns.task.open_loop_navigation.RasterScanNavigationTask(duration, width=1.0, height=1.0, step_size=0.03, margin=0.05, speed=0.15, drift_strength=15.0, **kwargs)[source]¶
Bases: CustomOpenLoopNavigationTask
Preset task for cyclic dual-mode state-aware raster scan exploration.
Systematically explores the environment using cyclic mode switching:
1. Horizontal phase: left-right sweeps moving downward; switches to Vertical when reaching the bottom.
2. Vertical phase: up-down sweeps moving rightward; switches back to Horizontal when reaching the right edge.
Cycles continuously: H → V → H → V → ...
This cyclic dual-mode strategy achieves superior coverage by combining orthogonal scanning patterns and continuously adapting to avoid walls.
Performance (200 s, 1.0 m x 1.0 m environment):
- Cyclic dual-mode: ~75-80% coverage (continuous cycling)
- Single horizontal: 54.1% coverage (29 rows)
- +20-30% improvement over random walk
- Parameters:
duration (float) – Simulation duration in seconds
width (float) – Environment width (default: 1.0)
height (float) – Environment height (default: 1.0)
step_size (float) – Scan density - smaller = denser scanning (default: 0.03)
margin (float) – Wall detection margin (default: 0.05)
speed (float) – Movement speed in m/s (default: 0.15)
drift_strength (float) – Drift control strength (default: 15.0)
**kwargs – Additional arguments passed to OpenLoopNavigationTask
Example
```python
# High-coverage dual-mode exploration
task = RasterScanNavigationTask(
    duration=200, width=1.0, height=1.0,
    step_size=0.03,  # Dense scanning in both directions
    speed=0.15,      # Movement speed
)
```
- class src.canns.task.open_loop_navigation.StateAwareRasterScanPolicy(width, height, margin=0.05, step_size=0.03, speed=0.15, drift_strength=15.0)[source]¶
Bases: ActionPolicy
State-aware raster scan policy with cyclic dual-mode exploration.
Scanning strategy (cyclic scanning):
1. Horizontal mode: left-right sweeps moving downward; when reaching the bottom, switch to Vertical mode.
2. Vertical mode: up-down sweeps moving rightward; when reaching the right edge, switch back to Horizontal mode.
Cycles continuously: H → V → H → V → ... (avoiding wall collisions)
This cyclic dual-mode approach achieves comprehensive coverage by combining orthogonal scanning patterns and avoiding wall collisions.
Tested performance (200 s, 1.0 m x 1.0 m environment):
- Cyclic dual-mode: ~75-80%+ coverage (continuous cycling)
- Single horizontal: 54.1% coverage (29 rows)
- Parameters:
width (float) – Environment width in meters
height (float) – Environment height in meters
margin (float) – Distance from wall to trigger turn (default: 0.05)
step_size (float) – Movement per turn in perpendicular direction (default: 0.03)
speed (float) – Movement speed (default: 0.15)
drift_strength (float) – Drift-to-random ratio for agent.update() (default: 15.0)
Example
```python
policy = StateAwareRasterScanPolicy(
    width=1.0, height=1.0,
    step_size=0.03,       # Dense scanning for high coverage
    drift_strength=15.0,
)
task = CustomOpenLoopNavigationTask(
    duration=200, action_policy=policy,
    width=1.0, height=1.0,
    start_pos=(0.05, 0.95),  # Start at top-left
)
```
- compute_action(step_idx, agent)[source]¶
Compute next action based on current agent position and scanning mode.
Implements cyclic dual-mode scanning:
- Horizontal mode: left-right sweeps moving downward
- Vertical mode: up-down sweeps moving rightward
- Auto-switches between modes to avoid walls and maintain coverage
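For intuition, here is a minimal sketch of how such cyclic dual-mode switching can be expressed as a custom ActionPolicy. This is an illustrative reimplementation under assumed names (e.g., agent.position exposing x/y, and a bottom-left origin), not the library's actual internals.

```python
import numpy as np

class SimpleCyclicScanPolicy(ActionPolicy):
    """Illustrative sketch only: horizontal sweeps stepping down, then
    vertical sweeps stepping right, cycling H -> V -> H -> ..."""

    def __init__(self, width, height, margin=0.05, speed=0.15):
        self.width, self.height = width, height
        self.margin, self.speed = margin, speed
        self.mode = 'horizontal'  # current scanning mode
        self.sign = 1.0           # +1 / -1 along the current sweep axis

    def compute_action(self, step_idx, agent):
        x, y = agent.position     # assumed attribute exposing (x, y)
        if self.mode == 'horizontal':
            drift = np.array([self.sign * self.speed, 0.0])
            at_wall = (x > self.width - self.margin and self.sign > 0) or \
                      (x < self.margin and self.sign < 0)
            if at_wall:
                self.sign *= -1                        # reverse the sweep
                drift = np.array([0.0, -self.speed])   # step down one row
            if y < self.margin:                        # bottom reached
                self.mode, self.sign = 'vertical', 1.0
        else:
            drift = np.array([0.0, self.sign * self.speed])
            at_wall = (y > self.height - self.margin and self.sign > 0) or \
                      (y < self.margin and self.sign < 0)
            if at_wall:
                self.sign *= -1
                drift = np.array([self.speed, 0.0])    # step right one column
            if x > self.width - self.margin:           # right edge reached
                self.mode, self.sign = 'horizontal', -1.0
        return {'drift_velocity': drift, 'drift_to_random_strength_ratio': 15.0}
```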
- class src.canns.task.open_loop_navigation.TMazeOpenLoopNavigationTask(w=0.3, l_s=1.0, l_arm=0.75, t=0.3, start_pos=(0.0, 0.15), duration=20.0, dt=None, **kwargs)[source]¶
Bases: OpenLoopNavigationTask
Open-loop navigation task in a T-maze environment.
This subclass configures the environment with a T-maze boundary, which is useful for studying decision-making and spatial navigation in a controlled setting.
Initialize T-maze open-loop navigation task.
- Parameters:
w – Width of the corridor (default: 0.3)
l_s – Length of the stem (default: 1.0)
l_arm – Length of each arm (default: 0.75)
t – Thickness of the walls (default: 0.3)
start_pos – Starting position of the agent (default: (0.0, 0.15))
duration – Duration of the trajectory in seconds (default: 20.0)
dt – Time step (default: None, uses bm.get_dt())
**kwargs – Additional keyword arguments passed to OpenLoopNavigationTask
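A short usage sketch based on the constructor signature above; the argument values are illustrative.

```python
# T-maze with a 1.0 m stem, 0.75 m arms, and a 0.3 m-wide corridor
task = TMazeOpenLoopNavigationTask(
    w=0.3, l_s=1.0, l_arm=0.75,
    start_pos=(0.0, 0.15),  # start near the base of the stem
    duration=60.0,
)
```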
- class src.canns.task.open_loop_navigation.TMazeRecessOpenLoopNavigationTask(w=0.3, l_s=1.0, l_arm=0.75, t=0.3, recess_width=None, recess_depth=None, start_pos=(0.0, 0.15), duration=20.0, dt=None, **kwargs)[source]¶
Bases: TMazeOpenLoopNavigationTask
Open-loop navigation task in a T-maze environment with recesses at stem-arm junctions.
This variant adds small rectangular indentations at the T-junction, creating additional spatial features that may be useful for studying spatial navigation and decision-making.
Initialize T-maze with recesses open-loop navigation task.
- Parameters:
w – Width of the corridor (default: 0.3)
l_s – Length of the stem (default: 1.0)
l_arm – Length of each arm (default: 0.75)
t – Thickness of the walls (default: 0.3)
recess_width – Width of recesses at stem-arm junctions (default: t/4)
recess_depth – Depth of recesses extending downward (default: t/4)
start_pos – Starting position of the agent (default: (0.0, 0.15))
duration – Duration of the trajectory in seconds (default: 20.0)
dt – Time step (default: None, uses bm.get_dt())
**kwargs – Additional keyword arguments passed to OpenLoopNavigationTask
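Analogously, a usage sketch for the recessed variant; the values are illustrative, and the recess sizes default to t/4 when omitted.

```python
# T-maze with small recesses at the stem-arm junctions
task = TMazeRecessOpenLoopNavigationTask(
    w=0.3, l_s=1.0, l_arm=0.75, t=0.3,
    recess_width=0.075, recess_depth=0.075,  # t/4, matching the defaults
    duration=60.0,
)
```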