Introduction to Computational Cognitive Neuroscience
Alexander A. Petrov
Ohio State University
apetrov [at] alexpetrov.com
http://alexpetrov.com/
Course Overview
How does cognition emerge from the brain? This course introduces you to the new and exciting field of Computational Cognitive Neuroscience (CCN), which provides important pieces of the answer to this question. We focus on simulations of cognitive and perceptual processes, using neural network models that bridge the gap between biology and behavior. We adopt the Leabra framework of Randy O'Reilly and Yuko Munakata and use their 2000 book Computational Explorations in Cognitive Neuroscience as the main text for the course.

We first consider the basic biological and computational properties of individual neurons and networks of neurons, as well as their idealized Leabra counterparts. We discuss their role in basic processing mechanisms such as spreading activation, inhibition, and multiple constraint satisfaction. We then discuss learning mechanisms that allow networks of neurons to build internal models of their environments and perform complex tasks. Models illustrating these ideas will be demonstrated in class and discussed further in the afternoon sessions. We complement the simple demos with a case study of a full-blown model of visual object recognition and selective attention.

Next, we turn to big-picture issues and present (an outline of) a comprehensive connectionist proposal of a cognitive architecture. We discuss how different brain systems (e.g., hippocampus, parietal cortex, frontal cortex) specialize to solve difficult computational tradeoffs. Finally, we discuss how neural networks can learn to do complicated symbol-manipulation tasks once thought to be the exclusive domain of symbolic architectures such as ACT-R.

As each of these topics requires a course of its own, all we can do in a week is introduce the key ideas, illustrate how they fit together, and provide pointers to the literature. We will also introduce the PDP++ neural network simulator, which will enable you to explore the variety of models that come with the book.
Prerequisites
This course is open to all participants in the Summer School. However, we will cover a lot of ground, and you are strongly advised to attend the introductory connectionist course taught by Bob French during the first week of the Summer School. We will take for granted many concepts (e.g., perceptrons, Hebbian learning) covered in Bob's course. Obviously, the stronger your background in computational modeling and neuroscience, the better. Knowledge of statistics and cognitive psychology is a great asset too. I will do my best to make the course useful to you even if you know relatively little in these areas; we will all see whether this is realistic. While the models we will be using are mathematically based, only algebra and a few simple calculus-level concepts are involved. Computer programming experience is not required because the models are accessible via a graphical interface.
Textbook
O'Reilly, R. C. & Munakata, Y. (2000). Computational Explorations in Cognitive Neuroscience: Understanding the Mind by Simulating the Brain. Cambridge, MA: MIT Press.
This book is a valuable addition to your library. I strongly advise you to obtain a copy of your own and read as much as possible before the Summer School begins. As a former student in several summer schools at NBU, I assure you from personal experience that once the school starts everything happens awfully fast. Do not fool yourselves that you will be able to read anything after a full day of classes in the morning, labs in the afternoon, and parties in the evening (and night).
Afternoon Lab
The textbook comes with dozens of pre-built neural network models that illustrate key principles and phenomena. They are all built in the Leabra framework using the PDP++ simulator. First-hand experience with these models is the best way to understand how neural networks really work. We will demonstrate several models during the morning lectures but will not have time to sink our teeth into them. The afternoon lab provides an opportunity for more detailed exploration: we can change various parameters and see how the model's behavior changes, discuss alternative network topologies, and so forth. Those of you who have laptops will be able to install the simulator on your own machines and explore on your own. The afternoon lab will be useful for all students in the course and indispensable for those who take it for credit.
Assessment
Students who take the course for credit will be asked to write a brief (5-7 pages) paper that explores, evaluates, compares, and/or criticizes some of the models discussed in class or in the book. The paper can also relate the issues discussed in class to some alternative model or experimental result.
Plan of the Lectures
14th International Summer School in Cognitive Science
New Bulgarian University, Sofia, Bulgaria
Week 2, July 16-22, 2007.
1. Foundations. Neurons and Networks
- What is Computational Cognitive Neuroscience?
- Why modeling?
- Biological neurons and artificial neurons
- Neurons as detectors (see the code sketch after this list)
- Networks of neurons
- Excitation, inhibition, and constraint satisfaction
- Distributed representations. Sparseness.
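To make the "neurons as detectors" idea concrete, here is a minimal sketch in Python (illustrative only; the Leabra point-neuron model developed in Chapter 2 of the textbook is considerably more detailed, and the weights and gain below are made-up numbers):

    import math

    def detector(inputs, weights, bias=0.0, gain=1.0):
        # A unit "detects" the pattern that best matches its weights: it
        # computes a weighted sum of its inputs and squashes the result
        # into the (0, 1) range with a sigmoid.
        net = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-gain * net))

    w = [0.9, -0.4, 0.8]
    print(detector([1, 0, 1], w))  # good match to the weights -> high activation
    print(detector([0, 1, 0], w))  # poor match -> low activation

The same weighted-sum-plus-squashing scheme, elaborated with membrane potentials and inhibitory competition, underlies the Leabra units used throughout the course.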
Required reading:
- O'Reilly, R. C. (1998). Six Principles for Biologically-Based Computational Models of Cortical Cognition. Trends in Cognitive Sciences, 2, 455-462.
Optional readings:
- Chapter 1 in the textbook: Introduction and Overview
- Chapter 2: Individual Neurons
- Chapter 3: Networks of Neurons
2. Learning in Neural Networks
- Induction. Inductive Biases
- Biological Basis of Learning: Synaptic Plasticity
- Hebbian Model Learning (see the code sketch after this list)
- Error-Driven Task Learning: GeneRec
- Combined Model and Task Learning: Leabra
- Reinforcement Learning
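As a first taste of these learning rules, here is a minimal Hebbian weight update in Python. The y * (x - w) form is essentially the simplest version of the CPCA Hebbian rule developed in Chapter 4; the learning rate and patterns are made up for illustration:

    def hebb_update(w, x, y, lrate=0.1):
        # Strengthen a weight when its presynaptic input (x) and the
        # postsynaptic activation (y) are active together; the (x - w)
        # term keeps weights bounded between 0 and 1.
        return [wi + lrate * y * (xi - wi) for wi, xi in zip(w, x)]

    w = [0.5, 0.5, 0.5]
    for _ in range(20):  # repeatedly present the same input pattern
        w = hebb_update(w, [1, 0, 1], y=1.0)
    print(w)  # weights drift toward the pattern: roughly [0.94, 0.06, 0.94]

Error-driven rules such as GeneRec instead contrast two activation phases, so that weights change only to the extent that the network's own output misses the target.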
Required reading:
- The same as for Lecture 1 (O'Reilly, 1998).
Optional readings:
- Chapter 4: Hebbian Model Learning
- Chapter 5: Error-Driven Task Learning
- Chapter 6: Combined Model and Task Learning, and Other Mechanisms
- Kandel, E. (2001). The Molecular Biology of Memory Storage: A Dialogue Between Genes and Synapses. Science, 294, 1030-1038. [Based on Eric Kandel's lecture for the 2000 Nobel Prize in Physiology or Medicine.]
3. Object Recognition and Selective Attention
- Dorsal and Ventral Visual Streams
- Invariant Object Recognition: The Problem (see the code sketch after this list)
- Structural versus Image-Based Approaches
- The Binding Problem
- Object Recognition: A Simple Model
- Spatial Attention: A Simple Model
- Object Recognition and Spatial Attention: A Combined Model
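One key idea of this lecture, that pooling over replicated feature detectors buys translation invariance, can be sketched in a few lines of Python (in the spirit of the hierarchical models discussed in Chapter 8 and in Riesenhuber & Poggio, 1999, but not a faithful implementation of either):

    def local_responses(row, template):
        # Apply the same feature template at every position
        # (replicated feature detectors).
        n = len(template)
        return [sum(a * b for a, b in zip(row[i:i + n], template))
                for i in range(len(row) - n + 1)]

    def pooled(row, template):
        # A downstream unit that takes the max over positions responds
        # to the feature wherever it appears.
        return max(local_responses(row, template))

    t = [1, -1, 1]
    print(pooled([1, -1, 1, 0, 0, 0], t))  # feature on the left  -> 3
    print(pooled([0, 0, 0, 1, -1, 1], t))  # feature on the right -> 3

The price of this invariance is that position information is discarded along the way, which is one way to motivate the binding problem discussed in the same lecture.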
Required reading:
- Chapter 8: Perception and Attention, pp. 241-247, 254-273.
Optional readings:
- The rest of Chapter 8 in the textbook.
- Riesenhuber, M. & Poggio, T. (1999). Hierarchical Models of Object Recognition in Cortex. Nature Neuroscience, 2, 1019-1025.
- Mozer, M. C. & Sitton, M. (1998). Computational Modeling of Spatial Attention. In H. Pashler (Ed.), Attention (pp. 341-393). Philadelphia, PA: Psychology Press.
- Hummel, J. E. (2001). Complementary solutions to the binding problem in vision: Implications for shape perception and object recognition. Visual Cognition, 8, 489-517. [Special issue on binding]
4. Leabra: A Connectionist Cognitive Architecture
- Cognitive Architecture: A Big Idea of Science
- Large-Scale Functional Organization in the Brain
- Tripartite Functional Organization
- Posterior Cortex: Distributed Representations, Slow Learning, Generalization
- Hippocampus: Sparse Conjunctive Representations, Fast Learning (see the code sketch after this list)
- Prefrontal Cortex and Basal Ganglia: Active Maintenance, Gating, Modulatory Control
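The claim that sparse conjunctive representations separate similar patterns can be illustrated with a deterministic toy in Python (a cartoon of the hippocampal coding principle, not the book's hippocampus model):

    from itertools import combinations

    def conjunctive_code(active_inputs):
        # Each hidden unit detects a conjunction (here, a pair) of input
        # elements and fires only when both are active.
        return set(combinations(sorted(active_inputs), 2))

    a, b = {0, 1, 2}, {1, 2, 3}          # two similar input patterns
    print(len(a & b) / len(a | b))       # input overlap (Jaccard): 0.5
    ca, cb = conjunctive_code(a), conjunctive_code(b)
    print(len(ca & cb) / len(ca | cb))   # conjunctive overlap: 0.2

The conjunctive codes overlap much less than the inputs do, which is the pattern-separation property that supports fast hippocampal learning with reduced interference.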
Required readings:
- Chapter 7: Large-Scale Brain Area Functional Organization
- O'Reilly, R. C., Braver, T. S. & Cohen, J. D. (1999). A Biologically Based Computational Model of Working Memory. In A. Miyake & P. Shah (Eds.), Models of Working Memory: Mechanisms of Active Maintenance and Executive Control (pp. 375-411). New York: Cambridge University Press.
Optional readings:
- Anderson, J. R. & Lebiere, C. L. (2003). The Newell Test for a Theory of Cognition. Behavioral & Brain Sciences, 26, 587-637.
- McClelland, J. L., McNaughton, B. L. & O'Reilly, R. C. (1995). Why There Are Complementary Learning Systems in the Hippocampus and Neocortex: Insights from the Successes and Failures of Connectionist Models of Learning and Memory. Psychological Review, 102, 419-457.
5. High-Level Cognition in Neural Networks
- The Challenge of High-Level Cognition
- Banishing the Homunculus
- Symbols: Distal Access
- Generativity
- Gating: Active Maintenance and Rapid Updating (see the code sketch after this list)
- Role of Prefrontal Cortex and Basal Ganglia
- Final Thoughts
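The gating idea can be conveyed with a toy Python class (a deliberately simplified cartoon; the prefrontal/basal-ganglia models in the readings implement gating with far more biological machinery, and they learn when to gate rather than being told):

    class GatedMemoryUnit:
        # Holds one value through active maintenance; a gate signal
        # (cf. a basal-ganglia "Go" signal) triggers rapid updating.
        def __init__(self):
            self.stored = 0.0

        def step(self, inp, gate_open):
            if gate_open:
                self.stored = inp   # rapid updating
            return self.stored      # otherwise: robust maintenance

    unit = GatedMemoryUnit()
    print(unit.step(0.8, gate_open=True))   # 0.8 stored
    print(unit.step(0.3, gate_open=False))  # 0.8 maintained; distractor ignored
    print(unit.step(0.5, gate_open=True))   # 0.5 stored when the gate reopens

Learning when to open the gate is a central problem addressed by the Rougier et al. (2005) model in the required readings.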
Required readings:
- O'Reilly, R. C. (2006). Biologically Based Computational Models of High-Level Cognition. Science, 314, 91-94.
- Rougier, N. P., Noelle, D., Braver, T. S., Cohen, J. D. & O'Reilly, R. C. (2005). Prefrontal Cortex and the Flexibility of Cognitive Control: Rules Without Symbols. Proceedings of the National Academy of Sciences, 102, 7338-7343.
Optional readings:
- Chapter 11: Higher-Level Cognition
- Chapter 12: Conclusions
- Anderson & Lebiere (2003) -- also optional reading for Lecture 4.
About the Instructor
Alexander Petrov received his Ph.D. in 1998 from New Bulgarian University under the supervision of Boicho Kokinov. He was a postdoc in John Anderson's lab at Carnegie Mellon University, where he learned firsthand about ACT-R, and then a postdoc in Randy O'Reilly's lab at the University of Colorado at Boulder, where he learned firsthand about Leabra. He is involved in a DARPA-sponsored project titled "Biologically Inspired Cognitive Architectures" and is really excited at the prospect of combining his two favorite architectures. At present, he is an Assistant Professor in the Department of Psychology at the Ohio State University in Columbus, OH. His bread-and-butter research there involves models of perceptual learning and memory-based scaling. More information is available online at http://alexpetrov.com