We have already emphasized that the major goal of large-scale neural modeling is to enable one to propose a set of neural-based mechanisms that can explain particular human cognitive functions. If these mechanisms are reasonable representations of what actually occurs in the brain, then simulated data generated by the model, at multiple spatiotemporal scales, should closely approximate corresponding experimental data. We will illustrate this use of modeling by discussing a model of object processing that was developed in our laboratory.
There are two versions of this model, one for processing visual objects (Tagamets & Horwitz 1998) and one for auditory objects (Husain et al. 2004). Although the notion of a visual object (e.g., chair, table, person) seems straightforward, that of an auditory object is more elusive, but it can be thought of as an auditory perceptual entity subject to figure-ground separation (for a detailed discussion see Griffiths & Warren 2004; Kubovy & Van Valkenburg 2001). Examples of auditory objects include words, melodic fragments, and short environmental sounds. There is much experimental evidence implicating the ventral visual processing pathway, which runs from primary visual cortex in the occipital lobe into the inferior temporal lobe and thence to inferior frontal cortex, in visual object processing in human and nonhuman primates (Desimone & Ungerleider 1989; Haxby et al. 1994; Ungerleider & Mishkin 1982). Although the supporting evidence is less extensive, an analogous processing stream along the superior temporal gyrus (STG) for auditory object processing has been hypothesized by Kaas, Rauschecker, and others (Kaas et al. 1999; Rauschecker & Tian 2000). Our models build on these notions; we have proposed (and instantiated in our models) that visual and auditory (and possibly tactile) object processing uses a set of similar cortical computational mechanisms along each of their respective pathways, although the features on which these mechanisms act depend on the sensory modality (Husain et al. 2004). However, it is important to emphasize that we are not implying that all sensory features have analogues in the three systems, only that some do.