Hebbian Memory Examples¶
The `examples/brain_inspired/` scripts show how to drive
`HebbianTrainer` with real data. They also illustrate common
pre-processing steps for pattern memories.
hopfield_train.py¶
Location: `examples/brain_inspired/hopfield_train.py`

Scenario: Convert a handful of `skimage` images into pattern memories, train a Hopfield network, then recover noisy inputs.

Key steps (condensed into a runnable sketch after this list):

- `preprocess_image` resizes and thresholds an image to a 128×128 vector in `{-1, +1}`.
- Instantiate `AmariHopfieldNetwork` (synchronous updates, `sign` activation).
- Initialise `HebbianTrainer(model)` and call `trainer.train(data_list)`.
- Corrupt each pattern by flipping ~30% of the pixels and run `trainer.predict_batch`.
- Use Matplotlib to compare train/input/output panels (saved as `discrete_hopfield_train.png`).
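A minimal sketch of these steps follows. `HebbianTrainer`, `AmariHopfieldNetwork`, `trainer.train`, and `trainer.predict_batch` come from the example itself; the import path, the `n_neurons` constructor argument, and this `preprocess_image` body are assumptions for illustration.

```python
# Sketch of the hopfield_train.py pipeline; adjust the import path and
# constructor arguments to match your installation.
import numpy as np
from skimage import data, transform

from brain_inspired import AmariHopfieldNetwork, HebbianTrainer  # hypothetical path


def preprocess_image(img, size=(128, 128)):
    """Resize to 128x128, then threshold to a flat {-1, +1} vector."""
    img = transform.resize(img, size, anti_aliasing=True)
    return np.where(img > img.mean(), 1.0, -1.0).ravel()


patterns = [preprocess_image(im) for im in (data.camera(), data.coins())]

# n_neurons is an assumed constructor argument; sign activation and
# synchronous updates are the documented defaults.
model = AmariHopfieldNetwork(n_neurons=patterns[0].size)
trainer = HebbianTrainer(model)
trainer.train(patterns)

# Corrupt each stored pattern by flipping ~30% of its pixels, then recall.
rng = np.random.default_rng(0)
noisy = [np.where(rng.random(p.size) < 0.3, -p, p) for p in patterns]
recovered = trainer.predict_batch(noisy)
```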
Extensions:

- Toggle `asyn=True` to observe asynchronous convergence (see the snippet after this list).
- Set `normalize_by_patterns=False` to keep the magnitude of the original patterns.
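Both flags are documented above; treating them as `AmariHopfieldNetwork` constructor keywords is an assumption, and they may instead belong to `train`/`predict` in your version:

```python
# Hedged sketch: asyn and normalize_by_patterns are documented flags, but
# passing them to the constructor is an assumption; check the example source.
model = AmariHopfieldNetwork(
    n_neurons=128 * 128,
    asyn=True,                    # asynchronous, one-unit-at-a-time updates
    normalize_by_patterns=False,  # keep raw Hebbian weight magnitudes
)
```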
hopfield_train_mnist.py¶
Location: `examples/brain_inspired/hopfield_train_mnist.py`

Scenario: Load a small selection of MNIST digits (with a series of fallbacks) and store them in a Hopfield network.

Key steps:

- `_load_mnist()` attempts Hugging Face `datasets`, TorchVision, Keras, and finally scikit-learn digits (the fallback chain is sketched after this list).
- Choose one exemplar per digit class and convert with `_threshold_to_pm1`.
- Train with `trainer.train(patterns)`.
- Run `trainer.predict` on clean held-out samples to confirm retrieval.
- Plot the train/input/output triads via `plt.subplots`.
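The sketch below shows a fallback chain in the spirit of `_load_mnist()`. Only the fallback order comes from the example; the function name and return shape here are illustrative assumptions.

```python
# Hypothetical stand-in for _load_mnist(): try progressively lighter
# dependencies until one succeeds.
import numpy as np


def load_mnist_with_fallbacks():
    """Return (images, labels), preferring full MNIST, else sklearn digits."""
    try:  # 1. Hugging Face datasets
        from datasets import load_dataset
        ds = load_dataset("mnist", split="train")
        return np.stack([np.array(im) for im in ds["image"]]), np.array(ds["label"])
    except Exception:
        pass
    try:  # 2. TorchVision
        from torchvision import datasets
        mnist = datasets.MNIST(root="./data", train=True, download=True)
        return mnist.data.numpy(), mnist.targets.numpy()
    except Exception:
        pass
    try:  # 3. Keras
        from tensorflow.keras.datasets import mnist
        (x, y), _ = mnist.load_data()
        return x, y
    except Exception:
        pass
    # 4. Final fallback: scikit-learn's small 8x8 digits
    from sklearn.datasets import load_digits
    digits = load_digits()
    return digits.images, digits.target
```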
Extensions:

- Introduce noisy test images to measure robustness (a minimal check follows this list).
- Enable `trainer.configure_progress(show_iteration_progress=True)` to log energy convergence.
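One way to measure robustness, assuming a trained `trainer` and ±1 `patterns` as above and reusing `trainer.predict_batch` from `hopfield_train.py`; the `flip_noise` helper and exact-recall metric are illustrative, not part of the example:

```python
# Sketch: flip an increasing fraction of pixels and count exact recalls.
import numpy as np


def flip_noise(pattern, rate, rng):
    """Flip a fraction `rate` of the entries in a ±1 pattern."""
    mask = rng.random(pattern.size) < rate
    return np.where(mask, -pattern, pattern)


rng = np.random.default_rng(42)
for rate in (0.1, 0.2, 0.3):
    noisy = [flip_noise(p, rate, rng) for p in patterns]
    recovered = trainer.predict_batch(noisy)
    acc = np.mean([np.array_equal(r, p) for r, p in zip(recovered, patterns)])
    print(f"flip rate {rate:.0%}: exact recall {acc:.0%}")
```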
Custom models¶
- To plug in your own model, follow the `BrainInspiredModel` interface: expose `W` and `states` (or override `weight_attr`/`predict_state_attr`) and provide `update`/`energy`. A skeleton follows this list.
- Experiment with `AmariHopfieldNetwork` parameters such as `activation` or `temperature` to study their effect on energy descent.
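A minimal skeleton under the stated interface. The import path is an assumption, as is the assumption that the trainer reads `W` and `states` by default (the `weight_attr`/`predict_state_attr` overrides exist for other attribute names):

```python
# Sketch of a custom model satisfying the BrainInspiredModel interface.
import numpy as np

from brain_inspired import BrainInspiredModel  # hypothetical import path


class SignHopfield(BrainInspiredModel):
    def __init__(self, n_neurons):
        self.W = np.zeros((n_neurons, n_neurons))  # Hebbian weights read by the trainer
        self.states = np.ones(n_neurons)           # current ±1 state vector

    def update(self):
        """One synchronous sign-activation update."""
        self.states = np.sign(self.W @ self.states)
        self.states[self.states == 0] = 1.0  # break ties toward +1

    def energy(self):
        """Standard Hopfield energy E = -(1/2) s^T W s."""
        return -0.5 * self.states @ self.W @ self.states
```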