Comparable Perceptual Learning With and Without Feedback in Non-stationary Context: Data and model
Petrov, A., Dosher, B., & Lu, Z.-L. (2004). Comparable perceptual learning with and without feedback in non-stationary context: Data and model [Abstract]. Journal of Vision, 4(8), 306a. http://journalofvision.org/4/8/306/ (Poster presented at the 2004 Meeting of the Vision Sciences Society.)
Abstract:
Learning was evaluated for orientation discrimination of peripheral Gabor
targets (+/-10 deg) embedded in two filtered-noise "contexts" with predominant
orientations at either +/-15 deg. The training schedule alternated two-day
blocks of each context, and three target contrast levels were tested. Eighteen
observers received no feedback, yet improved both discriminability and speed
within and across blocks. Their initial and asymptotic d' levels and learning
dynamics were comparable to those obtained for observers with feedback (1).
For both groups, performance dropped at each context switch, with an
approximately constant cost (about 0.3 d') over 5 switches (10,800 trials).
In this situation, self-generated feedback appears sufficient for learning. A
self-supervised model can account for these results via incremental channel
reweighting, both with and without explicit feedback. Visual stimuli are first
processed by standard orientation- and frequency-tuned units with contrast
gain control via divisive normalization, as in the sketch below.
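A minimal sketch of the gain-control stage, assuming a conventional power-law
divisive-normalization form; the function name and the parameters sigma and
gamma are illustrative placeholders, not the model's fitted values:

    import numpy as np

    def normalize(energy, sigma=1.0, gamma=2.0):
        # Divisive normalization: each channel's energy is raised to an
        # expansive power and divided by the pooled energy of all
        # channels plus a semi-saturation constant. `energy` holds the
        # rectified outputs of the orientation x frequency tuned units.
        e = np.asarray(energy, dtype=float) ** gamma
        return e / (sigma ** gamma + e.sum())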
"read-out" connections to decision units; the stimulus representation never
changes. An incremental Hebbian rule tracks the external feedback when
available, or else reinforces the model's own response. An a priori bias to
equalize the response frequencies stabilizes the model across switches.
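A minimal sketch of this learning stage, assuming a +1/-1 response code; the
function names, the learning rate lr, and the bias time constant rho are
illustrative placeholders rather than the model's actual equations:

    import numpy as np

    def reweight(w, r, response, feedback=None, lr=0.002):
        # Incremental Hebbian update of the read-out weights only.
        # r: normalized channel activations (presynaptic activity)
        # response: the model's own response, coded +1/-1 (postsynaptic)
        # feedback: external teaching signal (+1/-1), or None when no
        #   feedback is given, in which case the model reinforces its
        #   own response (self-supervised learning).
        teacher = feedback if feedback is not None else response
        return w + lr * teacher * np.asarray(r, dtype=float)

    def update_bias(bias, response, rho=0.02):
        # Running average of recent responses; subtracting it from the
        # decision variable implements the a priori bias that equalizes
        # the response frequencies and stabilizes learning across
        # context switches.
        return (1.0 - rho) * bias + rho * response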
Because accuracy is above 50%, self-generated feedback drives the weights in
the right direction on average, though less efficiently than external
feedback. Weights of task-correlated units gain strength while weights on
irrelevant frequencies and orientations are reduced, producing a gradual
learning curve.
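A quick arithmetic check of the "right direction on average" claim, under the
simplifying assumption that each self-taught update either matches (with
probability p, the accuracy) or opposes (with probability 1 - p) the update
external feedback would have produced; the expected drift is then
proportional to 2p - 1, positive whenever p > 0.5:

    # Expected drift of self-taught updates relative to supervised ones,
    # assuming matching/opposing updates with probability p / (1 - p).
    for p in (0.55, 0.75, 0.95):
        print(f"accuracy {p:.2f}: drift = {p - (1 - p):+.2f} x supervised step")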
When the context shifts abruptly, the system lags behind, operating with
suboptimal weights until it readapts; this creates switch costs of
approximately equal magnitude across successive context changes. Hebbian
channel reweighting with no change in the early visual representations can
thus explain perceptual learning.
1. Petrov, A., Dosher, B., & Lu, Z.-L. (2003). Journal of Vision, 3(9), 670a. http://journalofvision.org/3/9/670/ (Poster presented at the 2003 Meeting of the Vision Sciences Society.)