Current models of auditory scene analysis postulate both low-level automatic processes and higher-level controlled or schema-based processes (Alain and Arnott, 2000; Bregman, 1990) in forming an accurate representation of the incoming acoustic wave. Whereas automatic processes use basic stimulus properties such as frequency, location, and time to segregate the incoming sounds, controlled processes use previously learned criteria to group the acoustic input into meaningful sources and hence require interaction with long-term memory. Therefore, in addition to bottom-up mechanisms, it is also important to assess how aging affects top-down mechanisms of auditory scene analysis.
The use of prior knowledge is particularly evident in adverse listening situations such as a cocktail party scenario. For example, a person could still laugh in all the right places at the boss's "humorous" golfing anecdote as a result of having heard the tale numerous times before, even though only intermittent segregation of the speech is possible (indeed, the adverse listening condition in this example may be a blessing in disguise). In an analogous laboratory situation, a sentence's final word embedded in noise is more easily detected when it is contextually predictable, and older adults appear to benefit more than young adults from such contextual cues in identifying the final word (Pichora-Fuller, Schneider, and Daneman, 1995). Since words cannot be reliably identified on the basis of the signal cue alone (i.e., without context), stored knowledge must be applied to succeed. That is, the context provides environmental support, which narrows the number of possible alternatives to choose from, thereby increasing the likelihood of a positive match between the incoming sound and stored representations in working and/or long-term memory. There is also evidence that older adults benefit more than young adults from having words spoken by a familiar rather than an unfamiliar voice (Yonan and Sommers, 2000), suggesting that older individuals are able to use learned voice information to overcome age-related declines in spoken word identification. Although familiarity with the speaker's voice can occur incidentally in young adults, older adults need to focus attention on the stimuli in order to benefit from voice familiarity in subsequent word identification tasks (Church and Schacter, 1994; Pilotti, Beyer, and Yasunami, 2001).
Thus, schema-driven processes provide a way to resolve perceptual ambiguity in complex listening situations, and, consequently, older adults appear to rely more heavily on controlled processing in order to solve the scene analysis problem.
Musical processing provides another real-world example that invokes both working memory representations of current acoustic patterns and long-term memory representations of previous auditory structures. Evidence suggests that young and older adults perform equally well in processing melodic patterns that are presented in a culturally familiar musical scale, whereas older adults perform worse than young adults when the patterns are presented in culturally unfamiliar scales (Lynch and Steffens, 1994). The age difference in processing melodic patterns from unfamiliar cultural contexts again suggests that older adults may rely more heavily on long-term knowledge of musical grammar than young adults. This age effect may reflect impairment in the processing of ongoing melodic patterns and/or in working memory, given that a long-term representation of the unfamiliar melodic structure is unavailable. Other studies have shown that aging impairs listeners' ability to recognize melodies, and that this age-related decline is similar for musicians and nonmusicians (Andrews, Dowling, Bartlett, and Halpern, 1998), suggesting that musical training does not necessarily alleviate age-related decline in melodic recognition. The use of musical stimuli in aging research offers a promising avenue for exploring the role of long-term representation and its relation to schema-driven processes involved in solving the scene analysis problem. Moreover, tasks involving musical stimuli are likely to be more engaging for participants than tasks using typical laboratory stimuli (e.g., white noise, pure tones, harmonic series) that may be less pleasant to listen to for extended periods of time. Furthermore, the results possess a higher degree of ecological validity in terms of everyday, meaningful acoustic processing.