46 changes: 46 additions & 0 deletions README.md
@@ -16,6 +16,52 @@ This repository contains the neural network simulation models for the [CCN Textb

* See https://github.com/compcogneuro/sims/releases for full history

## Running Locally

### Prerequisites

* [Go 1.23 or newer](https://go.dev/dl/) — the simulations are written in Go.
* A C compiler (required by the Go graphics stack):
* **macOS**: Xcode Command Line Tools — run `xcode-select --install`
* **Linux**: GCC — `sudo apt install gcc` (Debian/Ubuntu) or equivalent
* **Windows**: [TDM-GCC-64](https://jmeubank.github.io/tdm-gcc/)

### Getting this fork

This repository includes a branch with extended `objrec` stimuli. To clone it:

```bash
git clone https://github.com/compcogneuro/sims.git
cd sims
git checkout copilot/edit-objrec-change-stimuli
```

### Running a simulation

Each simulation lives in its own subdirectory and is a standalone Go program. To run the `objrec` simulation (Chapter 6):

```bash
cd ch6/objrec
go run .
```

The GUI will open automatically. To run without the GUI (headless, e.g. on a server):

```bash
cd ch6/objrec
go run . -nogui
```

Other simulations follow the same pattern — navigate to the relevant chapter/sim directory and run `go run .`.

### Running all simulations (build only)

From the repository root you can build every simulation at once to confirm everything compiles:

```bash
go build ./...
```

## Developer notes

*This section is not relevant for regular users.*
29 changes: 27 additions & 2 deletions ch6/objrec/README.md
@@ -4,6 +4,31 @@

This simulation explores how a hierarchy of areas in the ventral stream of visual processing (up to inferotemporal (IT) cortex) can produce robust object recognition that is invariant to changes in position, size, etc of retinal input images.

# Running Locally

This version of `objrec` is part of the `copilot/edit-objrec-change-stimuli` branch of the [compcogneuro/sims](https://github.com/compcogneuro/sims) fork and extends the original stimulus set from 20 to 25 LED patterns.

**Prerequisites:** [Go 1.23+](https://go.dev/dl/) and a C compiler (Xcode Command Line Tools on macOS, GCC on Linux, TDM-GCC-64 on Windows).

```bash
# 1. Clone the repository and switch to this branch
git clone https://github.com/compcogneuro/sims.git
cd sims
git checkout copilot/edit-objrec-change-stimuli

# 2. Run the objrec simulation (opens the GUI)
cd ch6/objrec
go run .
```

To run headless (no GUI, useful for automated training on a server):

```bash
go run . -nogui
```

> **Note:** The pre-trained weights bundled with the original simulation are not compatible with the updated 25-pattern / 5×5 output layer. Use the **Init** button followed by **Train** in the GUI to train the network from scratch, or click **Open Trained Wts** only after generating new weights.

# Network Structure

![V1 Filters](fig_v1_visual_filters.png?raw=true "V1 Filters")
Expand All @@ -14,15 +39,15 @@ We begin by looking at the network structure, which goes from V1 to V4 to IT and

Neighboring groups process half-overlapping regions of the image. In addition to connectivity, these groups organize the inhibition within the layer. This means that there is both inhibitory competition across the whole V1 layer, but there is a greater degree of competition within a single hypercolumn, reflecting the fact that inhibitory neurons within a local region of cortex are more likely to receive input from neighboring excitatory neurons. This effect is approximated by having the FFFB inhibition operate at two scales at the same time: a stronger level of inhibition within the unit group (hypercolumn), and a lower level of inhibition across all units in the layer. This ensures that columns not receiving a significantly strong input will not be active at all (because they would get squashed from the layer-level inhibition generated by other columns with much more excitation), while there is also a higher level of competition to select the most appropriate features within the hypercolumn.
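The two-scale inhibition described above can be sketched in a few lines of Go. This is a simplified illustration, not the actual leabra FFFB code: the `fffb` helper and the gain values 2.0 and 1.0 are assumptions chosen for exposition.

```go
package main

import "fmt"

// fffb computes a single inhibition value for a set of unit activations,
// scaled by gain gi. A simplified stand-in for leabra's FFFB computation.
func fffb(acts []float64, gi float64) float64 {
	avg := 0.0
	for _, a := range acts {
		avg += a
	}
	avg /= float64(len(acts))
	return gi * avg
}

func main() {
	layer := []float64{0.1, 0.9, 0.2, 0.8, 0.0, 0.0, 0.0, 0.0}
	pool := layer[:4] // one hypercolumn's units

	giPool := fffb(pool, 2.0)   // stronger inhibition within the hypercolumn
	giLayer := fffb(layer, 1.0) // weaker inhibition across the whole layer

	// Each unit receives the larger of the two values, so a pool with weak
	// input is silenced by layer-level inhibition, while units in an active
	// pool compete strongly within it.
	gi := giPool
	if giLayer > gi {
		gi = giLayer
	}
	fmt.Printf("pool=%.3f layer=%.3f effective=%.3f\n", giPool, giLayer, gi)
}
```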

The V4 layer is also organized into a grid of hypercolumns (pools), this time 5x5 in size, with each hypercolumn having 49 units (7x7). As with V1, inhibition operates at both the hypercolumn and entire layer scales here. Each hypercolumn of V4 units receives from 4x4 V1 hypercolumns, with neighboring columns again having half-overlapping receptive fields. Next, the IT layer represents just a single hypercolumn of units (10x10 or 100 units) within a single inhibitory group, and receives from the entire V4 layer. Finally, the Output layer has 20 units, one for each of the different objects. Figure 2 shows which object each unit represents; the 5x4 array of output units corresponds to the 5x4 array of objects in Figure 2.
The V4 layer is also organized into a grid of hypercolumns (pools), this time 5x5 in size, with each hypercolumn having 49 units (7x7). As with V1, inhibition operates at both the hypercolumn and entire layer scales here. Each hypercolumn of V4 units receives from 4x4 V1 hypercolumns, with neighboring columns again having half-overlapping receptive fields. Next, the IT layer represents just a single hypercolumn of units (10x10 or 100 units) within a single inhibitory group, and receives from the entire V4 layer. Finally, the Output layer has 25 units, one for each of the different objects. Figure 2 shows which object each unit represents; the 5x5 array of output units corresponds to the 5x5 array of objects in Figure 2.

* You can view the patterns of connectivity described above by clicking on [[sim:Network/Wts]] / [[sim:Wts/r.Wt]], and then on units in the various layers.

# Training

![LED Objects](fig_objrec_objs.png?raw=true "LED Objects")

**Figure 2:** Set of 20 objects composed from horizontal and vertical line elements used for the object recognition simulation. By using a restricted set of visual feature elements, we can more easily understand how the model works, and also test for generalization to novel objects (object 18 and 19 are not trained initially, and then subsequently trained only in a relatively few locations -- learning there generalizes well to other locations).
**Figure 2:** Set of 25 objects composed from horizontal and vertical line elements used for the object recognition simulation. The first 20 objects use all unique combinations of 3 line segments, while objects 20–24 add more complex 4-segment patterns. Objects 23 and 24 (the last two) are "novel" objects not trained in the first phase, and are subsequently trained only in a restricted set of spatial locations to test generalization.
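The 20 original objects correspond to the ways of choosing 3 of the 6 LED segments, C(6,3) = 20. A quick sketch confirming the arithmetic (illustrative only, not part of the simulation code):

```go
package main

import "fmt"

// nChooseK counts k-element subsets of an n-element set via Pascal's rule,
// mirroring how the 3-segment LED patterns are formed from 6 segments.
func nChooseK(n, k int) int {
	if k == 0 || k == n {
		return 1
	}
	return nChooseK(n-1, k-1) + nChooseK(n-1, k)
}

func main() {
	fmt.Println(nChooseK(6, 3))   // 20 original 3-segment objects
	fmt.Println(nChooseK(6, 3) + 5) // plus 5 four-segment patterns = 25 total
}
```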

Now, let's see how the network is trained.

12 changes: 6 additions & 6 deletions ch6/objrec/led_env.go
@@ -29,11 +29,11 @@ type LEDEnv struct {
// visual processing params
Vis Vis

// minimum LED number to draw (0-19)
MinLED int `min:"0" max:"19"`
// minimum LED number to draw (0-24)
MinLED int `min:"0" max:"24"`

// maximum LED number to draw (0-19)
MaxLED int `min:"0" max:"19"`
// maximum LED number to draw (0-24)
MaxLED int `min:"0" max:"24"`

// current LED number that was drawn
CurLED int `edit:"-"`
@@ -66,7 +66,7 @@ func (ev *LEDEnv) States() env.Elements {
els := env.Elements{
{"Image", []int{isz.Y, isz.X}, []string{"Y", "X"}},
{"V1", sz, nms},
{"Output", []int{4, 5}, []string{"Y", "X"}},
{"Output", []int{5, 5}, []string{"Y", "X"}},
}
return els
}
@@ -102,7 +102,7 @@ func (ev *LEDEnv) Init(run int) {
ev.Trial.Scale = etime.Trial
ev.Trial.Init()
ev.Trial.Cur = -1 // init state -- key so that first Step() = 0
ev.Output.SetShape([]int{4, 5}, "Y", "X")
ev.Output.SetShape([]int{5, 5}, "Y", "X")
}

func (ev *LEDEnv) Step() bool {
63 changes: 39 additions & 24 deletions ch6/objrec/leds.go
@@ -104,6 +104,9 @@ func (ld *LEDraw) DrawSeg(seg LEDSegs) {
func (ld *LEDraw) DrawLED(num int) {
led := LEData[num]
for _, seg := range led {
if seg == NoSeg {
continue
}
ld.DrawSeg(seg)
}
}
@@ -122,30 +125,42 @@ const (
CenterH
CenterV
LEDSegsN
NoSeg LEDSegs = -1 // sentinel for unused slots in 4-element segment arrays
)

var LEData = [][3]LEDSegs{
{CenterH, CenterV, Right},
{Top, CenterV, Bottom},
{Top, Right, Bottom},
{Bottom, CenterV, Right},
{Left, CenterH, Right},

{Left, CenterV, CenterH},
{Left, CenterV, Right},
{Left, CenterV, Bottom},
{Left, CenterH, Top},
{Left, CenterH, Bottom},

{Top, CenterV, Right},
{Bottom, CenterV, CenterH},
{Right, CenterH, Bottom},
{Top, CenterH, Bottom},
{Left, Top, Right},

{Top, CenterH, Right},
{Left, CenterV, Top},
{Top, Left, Bottom},
{Left, Bottom, Right},
{Top, CenterV, CenterH},
// LEData contains the segment combinations for each LED object.
// The first 20 entries are the original 3-segment combinations (all C(6,3)=20 unique patterns).
// Entries 20-24 are new 4-segment combinations that form more complex stimuli.
// Entries 23 and 24 are reserved as "novel" objects not trained in the first phase.
var LEData = [][4]LEDSegs{
{CenterH, CenterV, Right, NoSeg},
{Top, CenterV, Bottom, NoSeg},
{Top, Right, Bottom, NoSeg},
{Bottom, CenterV, Right, NoSeg},
{Left, CenterH, Right, NoSeg},

{Left, CenterV, CenterH, NoSeg},
{Left, CenterV, Right, NoSeg},
{Left, CenterV, Bottom, NoSeg},
{Left, CenterH, Top, NoSeg},
{Left, CenterH, Bottom, NoSeg},

{Top, CenterV, Right, NoSeg},
{Bottom, CenterV, CenterH, NoSeg},
{Right, CenterH, Bottom, NoSeg},
{Top, CenterH, Bottom, NoSeg},
{Left, Top, Right, NoSeg},

{Top, CenterH, Right, NoSeg},
{Left, CenterV, Top, NoSeg},
{Top, Left, Bottom, NoSeg},
{Left, Bottom, Right, NoSeg},
{Top, CenterV, CenterH, NoSeg},

// New 4-segment patterns (indices 20-24)
{Bottom, Left, Right, Top}, // 20: rectangle (box outline)
{Left, Right, CenterH, CenterV}, // 21: H crossed with I
{Bottom, Top, CenterH, CenterV}, // 22: double crossbar (like = with verticals)
{Bottom, Right, CenterH, CenterV}, // 23: novel -- complex bracket right
{Left, Top, CenterH, CenterV}, // 24: novel -- complex bracket left
}
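The `NoSeg` sentinel convention above can be exercised in isolation. This sketch re-declares the segment constants locally (it does not import the sim package) and counts real segments the same way `DrawLED` skips unused slots:

```go
package main

import "fmt"

type LEDSegs int

const (
	Left LEDSegs = iota
	Top
	Right
	Bottom
	CenterH
	CenterV
	NoSeg LEDSegs = -1 // sentinel for unused slots, as in leds.go
)

// segCount returns how many real segments an entry uses,
// skipping NoSeg slots the way DrawLED does.
func segCount(led [4]LEDSegs) int {
	n := 0
	for _, s := range led {
		if s == NoSeg {
			continue
		}
		n++
	}
	return n
}

func main() {
	three := [4]LEDSegs{CenterH, CenterV, Right, NoSeg} // object 0
	four := [4]LEDSegs{Bottom, Left, Right, Top}        // object 20: box outline
	fmt.Println(segCount(three), segCount(four))        // prints "3 4"
}
```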
14 changes: 7 additions & 7 deletions ch6/objrec/objrec.go
@@ -151,16 +151,16 @@ func (ss *Sim) ConfigEnv() {
trn.Name = etime.Train.String()
trn.Defaults()
trn.MinLED = 0
trn.MaxLED = 17 // exclude last 2 by default
trn.MaxLED = 22 // exclude last 2 novel items by default
if ss.Config.Env.Env != nil {
params.ApplyMap(trn, ss.Config.Env.Env, ss.Config.Debug)
}
trn.Trial.Max = ss.Config.Run.NTrials

novTrn.Name = etime.Analyze.String()
novTrn.Defaults()
novTrn.MinLED = 18
novTrn.MaxLED = 19 // only last 2 items
novTrn.MinLED = 23
novTrn.MaxLED = 24 // only last 2 novel items
if ss.Config.Env.Env != nil {
params.ApplyMap(novTrn, ss.Config.Env.Env, ss.Config.Debug)
}
@@ -173,8 +173,8 @@
tst.Name = etime.Test.String()
tst.Defaults()
tst.MinLED = 0
tst.MaxLED = 19 // all by default
tst.Trial.Max = 500 // 0 // 1000 is too long!
tst.MaxLED = 24 // all 25 patterns by default
tst.Trial.Max = 500 // 0 // 1000 is too long!
if ss.Config.Env.Env != nil {
params.ApplyMap(tst, ss.Config.Env.Env, ss.Config.Debug)
}
@@ -192,7 +192,7 @@ func (ss *Sim) ConfigNet(net *leabra.Network) {
v1 := net.AddLayer4D("V1", 10, 10, 5, 4, leabra.InputLayer)
v4 := net.AddLayer4D("V4", 5, 5, 7, 7, leabra.SuperLayer)
it := net.AddLayer2D("IT", 10, 10, leabra.SuperLayer)
out := net.AddLayer2D("Output", 4, 5, leabra.TargetLayer)
out := net.AddLayer2D("Output", 5, 5, leabra.TargetLayer)

v1.SetSampleIndexesShape(emer.CenterPoolIndexes(v1, 2), emer.CenterPoolShape(v1, 2))
v4.SetSampleIndexesShape(emer.CenterPoolIndexes(v4, 2), emer.CenterPoolShape(v4, 2))
@@ -552,7 +552,7 @@ func (ss *Sim) ConfigLogItems() {
ss.Logs.AddItem(&elog.Item{
Name: "CatErr",
Type: reflect.Float64,
CellShape: []int{20},
CellShape: []int{25},
DimNames: []string{"Cat"},
Plot: true,
Range: minmax.F32{Min: 0},
11 changes: 11 additions & 0 deletions ch7/hip/hip.go
@@ -210,6 +210,11 @@ type Sim struct {
// if true, run in pretrain mode
PretrainMode bool `display:"-"`

// DGLesion controls whether a 50% lesion is applied to the DG (dentate gyrus) layer
// before AC list training begins. The lesion is automatically removed at the start of
// each new Run so that AB training is always unaffected.
DGLesion bool

// pool patterns vocabulary
PoolVocab patgen.Vocab `display:"-"`

@@ -474,6 +479,10 @@ func (ss *Sim) ConfigLoops() {
ss.Stats.SetInt("FirstPerfect", epc)
trn.Config(table.NewIndexView(ss.TrainAC))
trn.Validate()
// Apply DG lesion for AC training if the flag is set
if ss.DGLesion {
ss.Net.LayerByName("DG").LesionNeurons(0.5)
}
}
}
})
@@ -548,6 +557,8 @@ func (ss *Sim) NewRun() {
ctx.Reset()
ctx.Mode = etime.Train
ss.Net.InitWeights()
// Restore DG to its full (un-lesioned) state for AB training
ss.Net.LayerByName("DG").UnLesionNeurons()
ss.InitStats()
ss.StatCounters()
ss.Logs.ResetLog(etime.Train, etime.Epoch)
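The lesion/restore bookkeeping above can be sketched without leabra. This is a simplified stand-in: `lesion` here marks units deterministically for clarity (the real `LesionNeurons` call belongs to leabra and selects units randomly), and the helper names are assumptions.

```go
package main

import "fmt"

// lesion marks a fraction of units inactive, mirroring the 50% DG lesion
// applied before AC training. Deterministic here for illustration.
func lesion(active []bool, frac float64) {
	n := int(float64(len(active)) * frac)
	for i := 0; i < n; i++ {
		active[i] = false
	}
}

// unlesion restores all units, as done at the start of each Run so that
// AB training is always unaffected.
func unlesion(active []bool) {
	for i := range active {
		active[i] = true
	}
}

func main() {
	dg := make([]bool, 8)
	unlesion(dg)
	lesion(dg, 0.5)
	off := 0
	for _, a := range dg {
		if !a {
			off++
		}
	}
	fmt.Println(off) // 4 of 8 units lesioned
}
```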