Full-Term Versus Preterm Infants

The term at risk usually refers to infants who are born with some difficulty that may or may not lead to a long-term deficit. The most common group of at-risk infants is made up of those born prematurely, with or without additional symptoms. Much of the early research on individual differences in infant perception and cognition compared full-term with preterm infants. One of the first such studies was reported by Fagan, Fantz, and Miranda (1971), who tested full-term and preterm infants on a novelty preference task from approximately 6 to 20 weeks of age. The infants were familiarized to one complex black-and-white pattern and then tested with that pattern versus a novel pattern. A clear difference between the groups emerged: full-term infants first showed a novelty preference at 10 weeks of age, whereas preterm infants did not show one until 16 weeks of age. More important from a developmental perspective, when the two groups were equated for conceptional age (gestational age plus age since birth), the group difference disappeared. Both groups first showed a strong novelty preference at about 52 weeks of conceptional age. Thus, at least on this one task, maturation seemed to play a more important role than the total amount or type of external stimulation the infants had received.

Others, however, have found differences between preterm and full-term infants even when conceptional age is equated. Sigman and Parmelee (1974), for example, found that at 59 weeks of conceptional age, full-term infants preferred faces to nonfaces, whereas preterm infants did not. Unlike in the Fagan et al. (1971) study, full-term but not preterm infants also displayed a novelty preference. Of course, there are many reasons that preterm infants may be delayed relative to full-term infants. Preterm infants usually have more serious medical complications, they are more isolated from their parents, they stay in the hospital longer, they tend to be disproportionately male and lower class, their parents tend to have received less prenatal care and poorer nutrition, and so on. Any number of these factors, in isolation or in combination, could be responsible for delays in perceptual or cognitive development.

In another study (Cohen, 1981), three groups of infants were compared at 60 weeks of conceptional age. The severe group had a number of complications, including prematurity and hyaline membrane disease; several had seizures, one had severe hypocalcemia, and one had congenital heart disease. In general, these infants had suffered considerable prenatal or perinatal trauma but had survived relatively intact. All of them also came from lower-class family backgrounds. A second group included only full-term, healthy infants, also from lower-class backgrounds. Finally, a third group included only full-term infants from middle-class backgrounds. In this study, the low- and middle-SES (socioeconomic status) groups differed in the number of two-parent families, years of education, racial background, and place of residence. All three groups were habituated to a picture of a face and then tested with two different novel faces. The middle-class group dishabituated to the novel faces (i.e., showed a novelty preference), but neither of the lower-class groups did so. It appeared that, in this particular study, factors associated with class status were more important than those associated with prematurity or risk status.

We find it interesting that Rose, Gottfried, and Bridger (1978) reported a similar finding with one-year-old infants and a cross-modal task. Middle-class full-term infants, lower-class full-term infants, and preterm infants were allowed oral and tactile familiarization with a three-dimensional block. When shown that object and a novel object, only the middle-class infants looked longer at the novel object. In a subsequent study using a visual task with simple geometric shapes presented at 6 months of age, however, lower-class full-term infants displayed a novelty preference but lower-class preterm infants did not (Rose et al., 1978). Thus the evidence is mixed with respect to preterm versus full-term differences. Systematic differences between these groups are frequently reported, but the bases for those differences are not always clear. In some cases the difference appears to rest on conceptional age or social class; in other cases risk status seems to be implicated more directly.

The differences discussed so far between full-term and preterm infants have been rather global; full-term infants dishabituate or show a novelty preference, whereas preterm infants do not. But at least one study has gone further to investigate how the two groups differ in their information processing. Caron, Caron, and Glass (1983) tested preterm and full-term infants on a variety of problems that involved processing the relations among the parts of complex facelike drawings and other stimuli. They then tested whether the infants had processed the stimuli on a configural basis (e.g., the overall configuration of a face) or on a component basis (e.g., the type of eyes and nose that made up the faces). They found clear evidence that the full-term infants were processing configurations, whereas the preterm infants were processing components.

Infants With an Established Risk Condition

A distinction is sometimes drawn between infants who are "at risk" for later disability and infants who have an "established risk condition," such as Down's syndrome, cerebral palsy, and spina bifida (Tjossem, 1976). Several studies have established that Down's syndrome infants, for example, are delayed relative to normal infants in habituation and novelty preference (e.g., Fantz, Fagan, & Miranda, 1975; Miranda, 1976).

One of the more interesting comparative studies was reported by McDonough (1988). She tested normal 12-month-old infants as well as 12-month-old infants with spina bifida, cerebral palsy, or Down's syndrome. The infants were given a category task similar to the one reported earlier by Cohen and Caputo (1978). Infants were habituated to a series of pictures of stuffed animals and then tested with a novel stuffed animal versus an item that was not a stuffed animal (a chair). The normal infants and the infants with spina bifida or cerebral palsy habituated, but the infants with Down's syndrome did not. Apparently the presentation of multiple distinct objects was too difficult for them to process. However, in the test, only the normal infants and the infants with spina bifida showed evidence of categorization by looking longer at the noncategory item than at the new category member; even though the infants with cerebral palsy habituated, they showed no evidence of forming the category.

These and other studies that have compared normal with at-risk infants provide compelling evidence that the at-risk infants perform more poorly on certain tests of habituation and novelty preference. Additional evidence on these differences is available in edited volumes by Friedman and Sigman (1981) and Vietze and Vaughan (1988). An important question is what these differences mean. Most would assume that habituation and novelty preference tests assess certain aspects of information processing, such as attention, memory, or perceptual organization. But even if some at-risk infants perform more poorly during the first year of life, does this performance predict any long-term deficiency in one or more of these processes? Even if some long-term prediction is possible, does that prediction apply only to group differences, such as those between normal and at-risk or established-risk infants? Or can one also use habituation and novelty preference measures to make long-term predictions of individual differences even among normal infants? This question is addressed in the next section.

Predictive Validity of Habituation and Novelty Preference Measures

An examination of the predictive value of traditional standardized tests of infant development, such as the Bayley or the Gesell scales, has led to the unfortunate but definite conclusion that these tests have dubious long-term predictive validity for normal populations (e.g., McCall, 1979; McCall, Hogarty, & Hurlburt, 1972), as well as for populations that include infants at risk (Kopp & McCall, 1982). This lack of predictive validity was at first considered to reflect not a failure of the tests themselves but simply the discontinuous, qualitative nature of change in intellectual development from infancy to childhood (McCall, 1981; McCall, Appelbaum, & Hogarty, 1973).

That view became somewhat suspect in the 1980s as studies appeared demonstrating sizable correlations between infant habituation or novelty preference (usually assessed between 3 and 8 months of age) and later IQ (usually assessed between 3 and 8 years of age) (e.g., Caron et al., 1983; Fagan & McGrath, 1981; Rose & Wallace, 1985). Both Bornstein and Sigman (1986) and McCall and Carriger (1993) provide excellent reviews and analyses of this literature. McCall and Carriger, for example, report that across these studies the median correlation between information-processing measures, assessed via habituation or novelty preference tasks, and childhood intelligence is approximately .47, whereas it is approximately .09 between standardized infant tests and later intelligence. Furthermore, these correlations between information processing and later IQ tend to occur even in small samples and with normal populations.

Although many specific measures of infant information processing have been tried, three classes of measures appear to be the best predictors of later intelligence (Slater, 1995a). One is preference for visual novelty. Following brief exposure to a visual pattern (usually 5-20 s), the familiar pattern and a novel pattern are presented side by side and the percentage of looking directed to the novel pattern is recorded. This percent novelty tends to be positively correlated with later IQ. A second is some measure of habituation rate. Various measures of habituation, such as the total looking time until some habituation criterion is reached or the total number of habituation trials prior to criterion, are sometimes found to be correlated with later intelligence. In general, those who habituate more rapidly tend to have higher IQs. The third is some measure of fixation duration independent of habituation. The measure may be the duration of a look at the outset of the habituation trials, the duration of the longest look during habituation, or the average duration of a look during habituation. In general, the shorter the infant's looks, the higher the IQ found later in life. Systematic individual differences between short and long lookers have also been reported, and younger infants tend to look longer than older infants at most pictures that they can see clearly (Colombo & Mitchell, 1990).
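
To make these measures concrete, the following sketch shows one way they might be computed from a single infant's looking times. It is only an illustrative assumption: the looking times are invented, and the 50%-of-baseline criterion computed over sliding three-trial windows is one common convention rather than the procedure used in any of the studies cited here.

```python
# Illustrative sketch (hypothetical data and assumed scoring conventions) of the
# three classes of infant measures described above: habituation rate, fixation
# duration, and percent novelty preference.

# Looking time (s) on successive habituation trials for one hypothetical infant
habituation_looks = [12.4, 10.8, 11.6, 7.9, 6.1, 4.8, 3.9, 3.5]

# Habituation rate: trials (and total looking) until the mean of a sliding
# 3-trial window falls below 50% of the first 3-trial baseline.
baseline = sum(habituation_looks[:3]) / 3
trials_to_criterion = None
for i in range(3, len(habituation_looks) + 1):
    if sum(habituation_looks[i - 3:i]) / 3 < 0.5 * baseline:
        trials_to_criterion = i
        break
looking_to_criterion = sum(habituation_looks[:trials_to_criterion])

# Fixation-duration measures taken independently of the habituation criterion
first_look = habituation_looks[0]
longest_look = max(habituation_looks)
average_look = sum(habituation_looks) / len(habituation_looks)

# Novelty preference: after brief familiarization, the familiar and a novel
# pattern are shown side by side; percent novelty is looking to the novel
# pattern as a share of total looking.
look_to_novel, look_to_familiar = 9.2, 5.3
percent_novelty = 100 * look_to_novel / (look_to_novel + look_to_familiar)

print(trials_to_criterion, round(looking_to_criterion, 1))
print(first_look, longest_look, round(average_look, 1))
print(round(percent_novelty, 1))
```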

Although most investigators agree that these measures tap some aspect of information processing, it is less clear what the underlying mechanism or mechanisms may be. Most explanations of differences in infants' performance appeal to differences in encoding or processing speed, or to differences in the ability to remember old information and compare it with new information. Perhaps the most popular explanation is based upon processing speed. Why speed of processing visual pattern information in infancy should be related to later IQ in childhood is still an open question, although Rose and Feldman (1995) have recently reported that these infancy measures correlate with perceptual speed at 11 years of age, even when IQ is controlled.

Whatever the mechanism, correlations in the .4, .5, or even .6 range between measures of infant attention at around 6 months and measures of IQ at around 6 years are quite impressive, particularly in light of the failure of standardized infant tests to predict later IQ. But the results are not without controversy; not everyone obtains such high correlations. Both Lecuyer (1989) and Slater (1995a), for example, point out that the so-called 0.05 syndrome makes it difficult to publish a paper if the correlations are not statistically significant. Many studies have probably failed to find a relationship between infant attention and IQ, but they are not counted in summaries or meta-analyses because no one knows about them. In their meta-analysis of this literature, McCall and Carriger (1993) evaluate three other criticisms that have been raised about the importance of these correlations. First, habituation and novelty preference measures may not reflect any interesting cognitive process. Second, the infancy measures have only moderate test-retest reliabilities. Third, the small sample sizes used may lead to a prediction artifact; that is, the inclusion of a few extreme scores, perhaps from infants who have known disabilities, can inflate correlations when the sample N is small. McCall and Carriger conclude that although these criticisms have some merit, in the end they do not negate the fact that even with normal populations the ability to make long-term predictions is impressive.
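
The prediction-artifact criticism is easy to see in a small simulation. The sketch below uses arbitrary, made-up numbers, not data from any study reviewed here: the scores of the typically developing infants are generated with no true relation between fixation duration and later IQ, yet adding two infants with extreme scores on both measures is enough to produce a sizable correlation in a sample this small.

```python
# Toy illustration (hypothetical numbers) of how a few extreme scores can
# inflate a correlation when the sample N is small.
import numpy as np

rng = np.random.default_rng(0)

# 18 typically developing infants: fixation duration (s) and later IQ drawn
# independently, so the population correlation is zero.
fixation = rng.normal(loc=10.0, scale=2.0, size=18)
later_iq = rng.normal(loc=100.0, scale=12.0, size=18)
r_without = np.corrcoef(fixation, later_iq)[0, 1]

# Add two infants with known disabilities: very long looks and low later IQ.
fixation_mixed = np.append(fixation, [22.0, 25.0])
later_iq_mixed = np.append(later_iq, [62.0, 58.0])
r_with = np.corrcoef(fixation_mixed, later_iq_mixed)[0, 1]

print(f"r without extreme scores: {r_without:+.2f}")
print(f"r with two extreme scores added: {r_with:+.2f}")
```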

A Specific Information-Processing Explanation

Before leaving this section, it might be worthwhile to try to understand these individual differences by referring to the specific set of information-processing propositions mentioned previously in this chapter (Cohen & Cashon, 2001b). First, it seems a bit odd that previous explanations have assumed that habituation and novelty preference measures somehow tap infant information processing, yet they do not specify how infants actually process the information or how that processing changes with age. It is no coincidence that the best predictions seem to result when infants are between about 4 and 7 months of age and are shown complex, abstract patterns or pictures of faces. That is just the age period when infants should be making a transition from processing those pictures in a piecemeal fashion to processing them holistically. If one makes the additional assumption that processing and remembering something holistically takes less time and fewer resources than processing it one piece at a time, then the following set of results, all of which have been reported, would be predicted.

• Younger infants should look longer at complex patterns than do older infants because the younger ones, who are processing the individual features, in effect have more to process.

• At 4 or 5 months of age, infants with short looking times should be more advanced than are infants with long looking times because the short lookers have made the transition to holistic processing, whereas the long lookers are still processing the stimuli piece by piece.

• Optimal predictions should occur in a novelty preference procedure when familiarization times are short. Obviously if familiarization times are long enough, even piecemeal processors will have sufficient time to process and remember all or most of the pieces.

• Both measures of infant fixation duration in habituation tasks and measures of percent novelty in novelty preference tasks should work equally well; both essentially test the same thing in different ways. Short fixation durations imply holistic processing, so short lookers should be more advanced than long lookers. Novelty preference tasks work when familiarization times are short: at the end of familiarization the holistic processor will have had time to process and remember the pattern presented, whereas for the piecemeal processor much about the pattern will still be novel. Therefore, when tested with the familiar versus a novel pattern, the holistic processor should show a greater novelty preference, as the sketch following this list illustrates.
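
The logic behind these predictions can be captured in a small toy model. The sketch below is only an illustrative assumption layered on the account above, not a formal model from the literature: a stimulus is treated as a fixed number of features, a piecemeal processor encodes them one at a time while a holistic processor encodes the configuration several times faster, and percent novelty is assumed to grow with the proportion of the familiar stimulus already encoded.

```python
# Toy model (assumed encoding rates and an assumed mapping from amount encoded
# to percent novelty) of why short familiarization should best separate
# holistic from piecemeal processors.
N_FEATURES = 8                 # features making up the familiar stimulus
PIECEMEAL_RATE = 0.2           # features encoded per second, piece by piece
HOLISTIC_RATE = 0.8            # effective features per second when chunked

def proportion_encoded(rate, familiarization_s):
    return min(1.0, rate * familiarization_s / N_FEATURES)

def percent_novelty(prop_encoded):
    # 50% (no preference) when nothing has been encoded, rising to 70% when
    # the familiar stimulus has been fully encoded and remembered.
    return 50.0 + 20.0 * prop_encoded

for familiarization_s in (5, 10, 20, 60):
    holistic = percent_novelty(proportion_encoded(HOLISTIC_RATE, familiarization_s))
    piecemeal = percent_novelty(proportion_encoded(PIECEMEAL_RATE, familiarization_s))
    print(f"{familiarization_s:>2} s familiarization: "
          f"holistic {holistic:.0f}% vs piecemeal {piecemeal:.0f}% novelty")
```

Under these assumptions, brief familiarization yields very different novelty preferences for the two kinds of processors, whereas long familiarization lets both approach the same ceiling, which is why the tasks are predicted to discriminate best when familiarization times are short.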

Thus, according to this version of the information-processing approach, the correlations with later intelligence occur because the infant tasks are tapping into an important developmental transition in information processing at exactly the right age and with exactly the right stimuli to assess that transition. Those who develop more rapidly as infants will tend to continue that rapid development and become the children with higher IQs. Whether the developmental progression that is being assessed in infancy is specific to infant perception and cognition or whether it is much more general really has yet to be determined.

Two final points can be derived from this approach. First, it is clear that the piecemeal-to-holistic transition is hierarchical: it occurs at several different levels at different ages. Therefore, one would predict that if simpler stimuli were presented at younger ages, or more complex stimuli (such as categories or events involving multiple objects) were presented at older ages, one might achieve the same level of prediction now found with complex two-dimensional patterns in the 4- to 7-month age period. At the very least, this viewpoint predicts that the most appropriate stimuli for infants to process will change systematically with age.

The other point is that the information-processing tasks given to infants might be tapping processing or perceptual speed, as some assume, but only in an indirect way. More advanced infants may appear to process the items more rapidly because they effectively have fewer items to process in the same stimulus, not because they are processing each item more rapidly. Once the manner in which infants process information at a particular age is understood, one can design experiments that equate the effective amount of information at different ages to see whether older and more advanced infants really do process and remember information more rapidly.
