4CAPS Cognitive Neuroarchitecture

4CAPS is a cognitive architecture whose models can account for both traditional behavioral data and, more interestingly, the results of neuroimaging studies; in this sense it is a neuroarchitecture (Just & Varma, 2006). Cognitively speaking, it is a hybrid architecture that combines symbolic and connectionist mechanisms in a resource-constrained environment. Cortically speaking, it moves beyond the localism of most neuroscience accounts, proposing that thinking is a network phenomenon. 4CAPS is particularly well-suited for specifying models of high-level forms of cognition.

This research has been supported by Office of Naval Research Grant N00014-02-1-0037 and by the Multidisciplinary Research Program of the University Research Initiative (MURI) Grant N00014-01-1-0677.

Reference: Just, M. A., & Varma, S. (2006). The organization of thinking: What functional brain imaging reveals about the neuroarchitecture of complex cognition. Manuscript submitted for publication.
History
4CAPS is the most recent member of an architectural family that includes CAPS and 3CAPS.
The original CAPS architecture (Thibadeau et al., 1982) synthesizes symbolic and activation-based processing as it was understood in the early 1980s, and in this regard resembles other hybrid efforts of the time (Anderson, 1983; Hofstadter et al., 1983; Holland et al., 1985; Erman et al., 1980; Minsky, 1985; Rumelhart & McClelland, 1982). Its computational mechanisms include variable-binding, constituent-structured representations, graded activations, weights, thresholds, and parallel processing. The suitability of CAPS for accounting for high-level cognition has been demonstrated by successful models of language comprehension (Just & Carpenter, 1987; Thibadeau et al., 1982), mental rotation (Just & Carpenter, 1985), and problem solving (Carpenter et al., 1990).

CAPS was succeeded by 3CAPS (Just & Carpenter, 1992; Just & Varma, 2002), which adds constraints on the resources available for maintaining and processing representations. This enables computational explorations of individual differences on a number of tasks:
* sentence comprehension in young adults of different working memory capacities (Just & Carpenter, 1992);
* sentence comprehension in intact normals and aphasics (Haarmann et al., 1997);
* discourse comprehension in young adults (Goldman & Varma, 1995);
* problem solving in normal adults differing in fluid intelligence (Just et al., 1996);
* problem solving in intact normals and patients with frontal lobe lesions (Goel et al., 2001);
* human-computer interaction (Byrne & Bovair, 1997; Huguenard et al., 1997).

The success of these models furthers the case that human information processing employs hybrid computational mechanisms in a capacity-constrained environment. CAPS and 3CAPS models account for behavioral measures of high-level cognition collected from normal young adults and neuropsychological patients, broadly defined. 4CAPS, the latest member of the CAPS family, extends to new measures and new populations.
Like their predecessors, 4CAPS models account for the time course of cognition and for individual differences. Unlike their predecessors, they also account for neuroimaging measures of normal cognition, and they provide much more precise accounts of the behavioral consequences of cortical lesions.
Operating Principles of 4CAPS
4CAPS embodies six operating principles that specify the nature of cognitive and cortical information processing.
An initial principle is intended to capture the current consensus of the field.
0. Thinking is the product of the concurrent activity of multiple brain areas that collaborate in a large-scale cortical network.
The next four principles, which constitute the theoretical core of our proposal, are relatively novel.
1. Each cortical area can perform multiple cognitive functions, and conversely, many cognitive functions can be performed by more than one area.
Finally, we propose a measurement assumption that enables our theoretical constructs to make contact with neuroimaging data.
5. The activation of a cortical area as measured by imaging techniques such as fMRI and PET varies as a function of its cognitive workload.
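One minimal reading of Principle 5, purely as an illustration and not 4CAPS's actual code: the predicted imaging signal for an area is a monotonically increasing (here, linear) function of that area's resource consumption.

```lisp
;; Illustrative sketch only: predicted activation of a cortical center as a
;; linear function of its cognitive workload (resource consumption).
;; The function name and parameters are hypothetical.
(defun predicted-activation (workload &key (scale 1.0) (baseline 0.0))
  (+ baseline (* scale workload)))
```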
The reader interested in the technical details of how these principles are realized within a hybrid symbolic-connectionist architecture is directed to Just and Varma (2006).
Source Code & Documentation for 4CAPS and Models
4CAPS is written in ANSI Common Lisp. In theory, it should run in any compliant implementation of the language. In practice, it has been tested in two commercial products, Digitool's Macintosh Common Lisp (through version 5.0) and
Franz's Allegro Common Lisp (through version 6.5). A list of free and commercial Common Lisp implementations and useful information about the language are available at the Association of Lisp Users website.
4CAPS Source Code & Documentation
The source code for 4CAPS is available here. To run 4CAPS, load this file into your Common Lisp environment. There are two caveats to be aware of.
* The first concerns interpretation vs. compilation. Some Common Lisp environments (e.g., Digitool's) automatically compile all source code. However, others (e.g., Franz's) use the interpreter by default. In this case, you will want to compile the 4CAPS source code before loading it. This is done via the compile-file function or via a menu choice; the result should be a so-called "fasl" (fast load) file. Compilation is not necessary, but it will greatly speed performance.
* The second concerns packages. 4CAPS will be loaded into the "CL-USER" package. To gain access to its functionality, you will have to operate within this package. Perhaps the simplest way to do this is to type (in-package "CL-USER") after loading 4CAPS.
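The two caveats above can be handled in a few REPL forms. The filename "4caps.lisp" below is a placeholder; substitute the actual path of the downloaded source file.

```lisp
;; "4caps.lisp" is a placeholder -- substitute the actual path of the
;; downloaded 4CAPS source file.
(compile-file "4caps.lisp")  ; optional; produces a fast-load ("fasl") file
(load "4caps")               ; extension omitted so the implementation picks
                             ; the compiled version where one exists
(in-package "CL-USER")       ; operate in the package 4CAPS was loaded into
```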
There is no 4CAPS tutorial or manual. Rather, there exist several manuscripts written by several people over the years, each documenting a slightly different version of the cognitive architecture.
Sentence Comprehension Model
The Sentence Comprehension Model is briefly described in Just et al. (1999) and Just and Varma (2006), and comprehensively described in Varma and Just (2006). Its source code is available here.
At the bottom of the file, a number of commands are defined for simulating the comprehension of the sentences used in a number of empirical studies. Some are for behavioral studies of normal young adults:
  King & Just (1991): (king1991 &optional (summ-p t))
  MacDonald et al. (1992): (macdonald1992 &optional (summ-p t))
One is for a behavioral study of lesion patients:
  Haarmann et al. (1997): (haarmann1997 &optional (summ-p t))
One is for an fMRI study that employs a block design:
  Just et al. (1996): (just1996 &optional (summ-p t))
Others are for event-related fMRI studies:
  Mason et al. (2003): (mason2003 &optional (summ-p t))
  Caplan et al. (2001): (caplan2001 &optional (summ-p t))
One is for a block-design fMRI study of a lesion patient:
  Thulborn et al. (1999): (thulborn1999 &optional (summ-p t))
The summ-p parameter is optional. It can be either t or nil. When t, its default value, the summ command is called after every simulation to pretty-print the results.
The model defines a sim command for simulating comprehension of a sentence and a summ command for pretty-printing the temporal and resource utilization results. These are the components from which the above study-specific commands are built, and users can combine them to write commands for simulating other studies. (The user must also extend the model's lexicon, which is defined in a straightforward fashion in the middle of the file.)
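A new study-specific command might be assembled from sim and summ along the following lines. This is a hedged sketch: the command name, the sentence, and sim's exact argument conventions are assumptions, not part of the released model.

```lisp
;; Hypothetical study command built from the model's sim and summ commands.
;; The sentence and sim's exact argument conventions are assumptions; consult
;; the existing study commands at the bottom of the file for the real pattern.
(defun my-study (&optional (summ-p t))
  (sim '(the reporter that the senator attacked admitted the error))
  (when summ-p
    (summ)))
```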
References
Caplan, D., Vijayan, S., Kuperberg, G., West, C., Waters, G., Greve, D., & Dale, A. M. (2001). Vascular responses to syntactic processing: Event-related fMRI study of relative clauses. Human Brain Mapping, 15, 26-38.
Haarmann, H. J., Just, M. A., & Carpenter, P. A. (1997). Aphasic sentence comprehension as a resource deficit: A computational approach. Brain and Language, 59, 76-120.
Just, M. A., Carpenter, P. A., Keller, T. A., Eddy, W. F., & Thulborn, K. R. (1996). Brain activation modulated by sentence comprehension. Science, 274, 114-116.
King, J., & Just, M. A. (1991). Individual differences in syntactic processing: The role of working memory. Journal of Memory and Language, 30, 580-602.
MacDonald, M. C., Just, M. A., & Carpenter, P. A. (1992). Working memory constraints on the processing of syntactic ambiguity. Cognitive Psychology, 24, 56-98.
Mason, R. A., Just, M. A., Keller, T. A., & Carpenter, P. A. (2003). Ambiguity in the brain: How syntactically ambiguous sentences are processed. Journal of Experimental Psychology: Learning, Memory, & Cognition, 29, 1319-1338.
Thulborn, K. R., Carpenter, P. A., & Just, M. A. (1999). Plasticity of language-related brain function during recovery from stroke. Stroke, 30, 749-754.
Varma, S., & Just, M. A. (2006). The cognitive neuroarchitecture of sentence comprehension. Manuscript in preparation.
Tower Of London
The Tower of London (TOL) Model is described in Newman et al. (2003) and Just and Varma (2006). Its source code is available here.
At the bottom of the file, five commands are defined for testing the model's performance on problems of increasing complexity. These are invoked as:
The columns have the following meanings:
The following commands also simulate the solution of old and new problems, respectively, but display the results as a function of increasing minimum-length solutions.
Mental Rotation Model
The Mental Rotation (MR) Model is described in Just and Varma (2006). Its source code is available here.
At the bottom of the file is a command for simulating the solution of the Shepard and Metzler (1971) (SM) problems used by Carpenter et al. (1999):
  (mr1999)
The model defines a sim command for simulating the solution of a given problem:
The model also defines a summ command for pretty-printing the temporal and resource utilization results. These are defined at the bottom of the file and used by the mr1999 command. They can be combined by the user into commands that simulate the results of other studies.
Driving Model
The Driving Model is not yet described in any publication. However, these slides provide an overview of its operation and its fit to the results of two (unpublished) fMRI studies. The Driving Model implements the algorithm defined by Salvucci and Gray (2004). Its source code is available here. At the bottom of the file is a command for simulating navigation of a road.
The model also defines a summ command for pretty-printing the temporal and resource utilization results.
Tower of Hanoi
The Tower of Hanoi (TOH) Model is described in Varma (2006). Its source code is available here.
At the bottom of the file, five commands are defined for testing the model's performance on problems of increasing complexity. These are invoked as:
The model defines a sim command for simulating the solution of a given problem and a summ command for pretty-printing the temporal and resource utilization results. These are defined at the bottom of the file and used by the various test commands. It is straightforward to infer how they work and to define commands for simulating the results of other studies.
There exist commands for simulating solution of problems used in behavioral studies of normal adults (Anderson, 1993; Carpenter et al., 1990; Just et al., 1996; Ruiz, 1987), behavioral studies of patients with frontal lesions (Goel et al., 2001; Morris et al., 1997a; Morris et al., 1997b), and fMRI studies of normal adults (Anderson et al., 2005; Fincham et al., 2002). These will be made available soon.
Dual Sentence Comprehension and Mental Rotation
The Dual Comprehension-Rotation Model is described in Just and Varma (2006). It is not a new model per se, but rather the result of loading two separate models, combined with a glue script found here. It is defined by the following steps:
The glue script first defines a new facility for defining a "model." This is necessary for informing the system which of the multiple defined models are to run for the task at hand. It also defines a new version of the main recognize-act loop for matching productions against declarative memory that is sensitive to the presence of multiple models and that records the activity or dormancy of each. Finally, it defines a generalized sim command for running one or more simulations at the same time and a generalized summ command for pretty-printing the results of one or more simulations.

The sentence comprehension and mental rotation models are defined next, as are commands for simulating the single-task conditions.
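Conceptually, a multi-model recognize-act loop of the kind the glue script provides might look as follows. Every name in this sketch is hypothetical; the glue script's actual definitions will differ.

```lisp
;; Purely conceptual sketch -- all names here are hypothetical, not the glue
;; script's actual code.
(defun recognize-act-cycle (models)
  "Match each model's productions against declarative memory, record whether
the model was active or dormant this cycle, and fire the matches."
  (dolist (model models)
    (let ((instantiations (match-productions model)))
      ;; A model is active this cycle iff any of its productions matched.
      (setf (model-active-p model) (not (null instantiations)))
      (mapc #'fire-production instantiations))))
```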
(jv2005): Simulates comprehension of the sentences described in Just and Varma (2006).
(mr1999): Simulates solution of the Shepard-Metzler problems solved by the Carpenter et al. (1999) subjects.
These commands demonstrate the utility of the generalized sim and summ commands.
Finally, a command for simulating the Just et al. (2001) study of dual sentence comprehension and mental rotation is defined.
(dt2001 &optional (summ-p t))
The summ-p parameter is optional. It can be either t or nil. When t, its default value, the summ command is called after every simulation to pretty-print the results.
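For example, grounded only in the parameter convention just described:

```lisp
(dt2001)      ; run the dual-task simulation; summ pretty-prints the results
(dt2001 nil)  ; run the simulation without the summary
```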
Dual Sentence Comprehension and Driving
The Dual Comprehension-Driving Model is not yet described in any publication. However, these slides provide an overview of its operation and fit to the results of an (unpublished) fMRI study. It is not a new model per se, but rather the result of loading two separate models, combined with a glue script found here. It is defined by the following steps:
The glue script first defines a new facility for defining a "model." This is necessary for informing the system which of the multiple defined models are to run for the task at hand. It also defines a new version of the main recognize-act loop for matching productions against declarative memory that is sensitive to the presence of multiple models and that records the activity or dormancy of each. Finally, it defines a generalized sim command for running one or more simulations at the same time and a generalized summ command for pretty-printing the results of one or more simulations.
The sentence comprehension and driving models are defined next, as are commands for simulating the single-task conditions.
Finally, a command for simulating dual-task sentence comprehension and driving is defined.
Dual Auditory and Visual Sentence Comprehension
The Dual (Auditory and Visual) Comprehension (DC) model is not yet described in any publication. Its source code is available here.
The Dual Comprehension model differs from the other dual-task models in that it does not consist of two independent models running at the same time, but rather the same model processing two input streams simultaneously - think dual threads, not dual processes. It is defined as follows:
The DC model is a slight augmentation of the conventional Sentence Comprehension Model (SCM). It adds the task-goal class of declarative memory element, which specifies whether the auditory input stream, the visual input stream, or both should be attended. It also adds an Executive center to house these goals, and Phonological and Orthographic centers to house percepts of the two modalities.
The sim command for simulating comprehension of a sentence has been elaborated into three separate commands: