Vibration Sensing

Vibration, the wavelike movement of air or other media with which an organism is in contact, can be created by traveling sound waves or by mechanical disturbance (such as water currents or wind). By itself, vibration detection is a form of the sense of touch. However, as with light, additional information is carried by the frequency, amplitude, complexity, and location of vibratory motion. It is the sensing of this information that we refer to as hearing, but as usual there are subtleties and gradations. And it is not only animals that sense and respond to vibration.

Plants

Some plants, such as mimosa, are able to perceive vibration and respond by folding the leaves that have sensed the motion. This is presumably a protective response, to shield delicate leaves. Response to vibration is called seismonasty; the plant may have the same response to touch, and this is called thigmonasty. The response is caused by changes in turgor pressure, driven electrochemically through depolarization of cell membranes. The cell membranes of specialized motor cells become more permeable to potassium ions, resulting in an outflow of ions and osmotic loss of water from the cell. The cells that sensed the vibration shrink, which causes the leaves to droop. A whole plant may respond to loud sound or shaking, but this is thought to be not an organismal response but one that takes place independently in each leaf.
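The electrochemical driving force behind that potassium outflow can be put in rough numbers with the standard Nernst equation. The sketch below is our own illustration, with hypothetical concentration values, not figures from the text:

```python
import math

# Nernst potential: the membrane voltage at which an ion is at equilibrium.
# When depolarization drives the membrane potential above E_K, K+ flows out.
R = 8.314      # gas constant, J/(mol*K)
T = 298.15     # temperature, K (25 degrees C)
F = 96485.0    # Faraday constant, C/mol
z = 1          # valence of K+

def nernst_mV(c_out_mM, c_in_mM):
    """Equilibrium potential in millivolts for the given ion concentrations."""
    return 1000 * (R * T) / (z * F) * math.log(c_out_mM / c_in_mM)

# Hypothetical concentrations for a plant motor cell (illustrative only):
e_k = nernst_mV(c_out_mM=10.0, c_in_mM=100.0)
print(f"E_K = {e_k:.1f} mV")   # about -59 mV

# A membrane depolarized above E_K pushes K+ outward; water follows
# osmotically, turgor pressure drops, and the motor cells shrink.
```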

Lateral Line

The lateral line is a vibration detection mechanism of fish and aquatic amphibians (see Chapter 12 and Figure 16-2) and is used for many vital individual and social behaviors. The spatial distribution of its mechanoreceptive organs, known as neuromasts, determines the receptive field, which can vary among species, and the innervation patterns determine how sensitive the system is. Neuromast morphology also varies and is related to the types of fluid motion that will be detected, whether velocity or acceleration (Maruska and Tricas 1998).

The lateral line system comprises a series of neuromasts located in tunnel-like canals in the dermis of the head and along the midlateral flank of the fish, as well as in pit organs throughout the body. The canals open at intervals at the body surface to expose the mechanosensory cells to the exterior. Neuromasts, the basic functional organs of the lateral line, are composed of groups of about 30 sensory hair cells and 60 supporting cells, covered by a gelatinous cupula. The hair cells of the lateral line are essentially identical morphologically and functionally to those in the vertebrate inner ear and detect movement via the displacement of stereocilia in the same way. Most fish have both canal neuromasts and superficial neuromasts, which detect water current. Neuromasts are innervated by branches of the posterior (PLL) and anterior (ALL) lateral line nerves—those on the head by the ALL and those on the sides and tail by the PLL. Projections from the ALL and PLL extend to two locations in the hindbrain in two neighboring columns, preserving a somatotopic representation of neuromast order and thus of the location on the body where a vibration was perceived.

The vertebrate homologs of the Drosophila proneural gene Atonal (Ato) include Math1 in mouse, Zath1 in zebrafish, and Atoh1 in humans; these bHLH-class TFs are all essential for the development of inner ear hair cells and promote neuroblast differentiation and the subsequent differentiation of the peripheral and central nervous systems, including the brain (Itoh and Chitnis 2001).

There is great interspecific diversity in brain morphology among fish, with the area or size of a particular functional part of the brain correlated with the modal specialization of the species. Deep-water fish have poor color vision, for example, and may have poor vision in general or are even blind, whereas shallow-water fish can differentiate colors; thus the sizes of the brain areas associated with color vision, and with vision in general, are differentially enhanced.

Vertebrate Hearing

Hearing is the specialized reception and interpretation of another kind of vibration—sound waves, the mechanical displacement of the medium in which an animal lives, whether water or air. As with all sensory processes, the energy of the vibration must be transduced into an electrical signal. In the auditory system, this is done by the hair cells of the inner ear, as described in Chapter 12. As the term is generally used, "sound" refers to more than just the general strength of a vibratory signal—it also refers to the details and complexities of its frequency, and "hearing" refers more specifically to its interpretation. The obvious reason is that this provides much more information about the source than just its presence.

The hair cells in the cochlea synapse with neurons whose cell bodies lie in the cochlear or spiral ganglion at the base of the vestibulocochlear or auditory nerve from each ear, which carries the signal to the cochlear nucleus in the brain stem. The vestibulocochlear nerves have two branches that project to the brain—one from the cochlea and the other from the vestibular system. The cochlear nerve contains about 30,000 afferent axons, most of them from inner hair cells in the cochlea, but 5 to 10 percent from the outer hair cells, which seem to play a role in tuning sound as a cochlear amplifier, among other functions.

The axon of each neuron in the cochlear nerve connects to an area of the cochlea that is most responsive to a particular characteristic frequency, and sound signals are carried to the brain in an orderly way that represents the spatial frequency tuning along the cochlea—the apical end receiving low frequencies and the basal end high frequencies. Moving from the basal end toward the smaller inner end of the coiled cochlea, the frequencies detected decrease.

The means by which each region along the cochlea responds most to a given frequency range is a complex matter of acoustical physics and hydrodynamics, and is not yet completely understood. Generally, the length and shape of the cochlear canal, and the differential stiffness of the basilar membrane along it, determine the vibration characteristics of the fluid in the cochlea. There are also differences in hair cell structure and firing behavior that vary along the cochlea. Sound energy applied at the basal end sets up waves that travel up the cochlea through this fluid. The changing shape and stiffness of the basilar membrane along the canal dampen the passing waves, which as a result reach peak amplitude at a frequency-dependent distance from the basal end.

In this way, a given sound frequency triggers hair cells in a specific region along the cochlea. Because this is a replicable property of fluid dynamics, the brain can recognize each occurrence of the same sound frequency by the location triggered. Further, while the relationship between frequency and position varies among species and among individuals, and is not perfectly linear along the cochlea (it may be log-linear), the relative positions of activation can be used by the brain to work out complex sound characteristics, such as (at least for humans) harmonies, octaves, consonance, and the like. Thus, in a general way, successive octave frequencies activate regularly spaced locations.
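One way to make this near-log-linear map concrete is Greenwood's empirical frequency-place function for the human cochlea. The sketch below is our illustration using commonly cited human constants, not a formula given in the text:

```python
import math

# Greenwood's empirical frequency-place map for the human cochlea:
#   f(x) = A * (10**(a*x) - k), x = fractional distance from apex (0) to base (1)
# Commonly cited human constants:
A, a, k = 165.4, 2.1, 0.88

def place_for_frequency(f_hz):
    """Fractional distance from the apex at which f_hz peaks."""
    return math.log10(f_hz / A + k) / a

for f in (440, 880, 1760, 3520):   # successive octaves of A440
    print(f"{f:5d} Hz -> x = {place_for_frequency(f):.3f}")
# The spacing between octaves is nearly, but not exactly, constant:
# the map is close to log-linear except toward the low-frequency apex.
```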

Variation in gene expression leading to allometric variation in growth dynamics of the size, shape, length, and stiffness of the cochlea will produce a chamber with particular response characteristics, and these differ by species, as is shown in Figure 12-2. As a result, organisms are suited to particular hearing behavior. There is a danger that this will invoke an adaptive illusion, in this case of fine-tuning by selection. Such selection could in principle be what happened, but there are alternative potential explanations.

The basic mechanisms of mechanoreception and ion channel-based signal transmission were available early in evolution and were cobbled together into the wide range of hearing mechanisms we see today. Rather than the sensitive touch of selection, this could be explained as sloppy jerry-rigging of the pieces that evolution had to work with at the time, yielding hearing mechanisms that fell fairly randomly all over the map with regard to what organisms could hear. Organisms then made do with what they had, "tuning" themselves by organismal selection, even though the ability to hear a wider range of frequencies, or softer sounds, or sounds from further afield might have served them better. Evolution has built a basic vibration-response mechanism with wide interspecific, and even intraspecific, variation in how it works, the only measure of importance being that it does work.

In fact, as with other senses, the sound wave peaks are spread over an area within the cochlea, so that there is overlapping sensitivity among hair cells. This and other aspects of interaction among cells along the cochlea give the brain the wherewithal to integrate the information and compensate for imprecision.
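How overlapping, broadly tuned detectors can nonetheless support a precise estimate can be sketched with a simple population-decoding toy model. Everything below (cell count, tuning widths, noise level) is our hypothetical construction, not data from the auditory system:

```python
import numpy as np

# Hypothetical units with broad, overlapping Gaussian tuning along a
# log-frequency axis; no single unit pinpoints the stimulus, but the
# population response, read as a weighted average, does.
centers = np.linspace(np.log2(100), np.log2(10000), 40)  # preferred log2-freqs
width = 0.5                                              # broad tuning (octaves)
rng = np.random.default_rng(0)

def population_response(f_hz, noise=0.02):
    """Noisy firing rates of all units for a pure tone at f_hz."""
    logf = np.log2(f_hz)
    rates = np.exp(-0.5 * ((logf - centers) / width) ** 2)
    return rates + rng.normal(0, noise, rates.shape)

def decode(rates):
    """Estimate the stimulus as the response-weighted centroid."""
    rates = np.clip(rates, 0, None)
    return 2 ** (np.sum(rates * centers) / np.sum(rates))

print(f"decoded: {decode(population_response(440.0)):.1f} Hz")  # close to 440
```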

Relative position along the cochlea is conserved by the relative position of fibers in the auditory nerve, with apical afferent fibers in the center and basal ones around the outside. They thus arrive in the brain with locational information intact. This tonotopic map is analogous to the retinotopic map of the visual system in that an orderly spatial representation of the cochlea is sent to the brain, but it differs fundamentally in that the tonotopic map is unrelated to any spatial aspect of sound. The tonotopic map translates frequency into cochlear position, whereas the retinotopic map conserves the relative spatial positions of incoming photons. Perhaps this is related to the reason that, although sound is topographically represented in the brain, we don't perceive it as an image the way we do light (there is no reason, in principle, why the brain could not present sound to us that way; in fact, we make oscilloscopes to do just that). But the orderly map is in a sense a de facto result of the structural means by which sound is parsed so that individual frequencies can be detected. That structure takes advantage of the relative propagation properties of vibrations of different frequencies in an enclosed fluid.

Auditory neurons are specific to different aspects of sound; some, for example, are responsive to the onset of a sound, transmitting information about the initiation of the sound but quickly dampening, whereas others don't begin to fire until the sound has been sustained for some time, thereby transmitting information about the sound's intensity and duration.

Neurons in the cochlear nucleus project to other auditory areas in the brain via three different pathways: the dorsal acoustic stria, the intermediate acoustic stria, and the trapezoid body. As with vision, sound is processed in parallel pathways that finally converge as complex acoustical information about sound source, intensity, frequency, and duration. Signals from both ears join in the superior olivary nucleus, which receives input from the trapezoid body. Postsynaptic axons from the superior olivary nucleus and axons from the cochlear nuclei project to the inferior colliculus in the midbrain via the lateral lemniscus; both of these structures, again, receive binaural input. Efferents from the superior olive project back to the cochlea to control sensitivity. The primary function of the superior olive is sound localization. Cells in the colliculus project to the medial geniculate nucleus of the thalamus, and axons from the geniculate nucleus terminate in the primary auditory cortex in the temporal lobe and in the superior temporal gyrus. See Figure 16-3 for a diagram of the central auditory pathways.

The primary auditory cortex is segmented in several ways, as is most of the neocortex. In particular, its segmentation is similar to that of the visual cortex in being laminar, with each layer receiving input from neurons from different places. It is also organized into functional columns, as is the visual cortex, with all neurons in a given column responding optimally to sounds with the same frequency. The dorsal portion of the auditory cortex responds to lower frequencies and the anterior portion to higher frequencies, with a gradient of frequency responses in the columns in between.

Figure 16-3. Central auditory pathways. Sound is transmitted from A, the organ of Corti in B, the cochlea, where different frequencies stimulate the hair cells in stereotypical areas; signal is transmitted from the cochlea through the brainstem to higher auditory areas, with the tonotopic map maintained, and ultimately to C, the primary auditory cortex, where signal is perceived in the acoustic area of the temporal lobe, D, still with the tonotopic map intact.

The auditory cortex is also divided into two types of alternating "zones"—summation columns and suppression columns. The summation columns are composed of neurons that are excited by stimulation from either ear, and the suppression columns are composed of neurons that are stimulated by input from only one ear and inhibited by stimulation from the opposite ear. The spatial organization of these columns relative to the axis of tonotopic mapping enables the primary auditory cortex to respond to every audible frequency and interaural interaction (Kandel, Schwartz et al. 2000).

In mammals, sound is generally sent for further processing to a number of other areas beyond the primary auditory cortex, in a parallel and hierarchical way similar to the visual system, but the process is not nearly as well understood. In marsupials, sound processing seems to be confined to the primary auditory cortex; however, in eutherian mammals, there are at least nine regions beyond the primary auditory cortex, although the number seems to vary by species and/or depending on the method of data collection. In all cases, sound continues to be organized according to the same tonotopic map received and processed by the primary auditory cortex. The number of areas in insectivores seems to be three or four, in rodents four to seven, and in carnivores and primates six to eight or nine (Ehret 1997).

Again as with vision, bilateral symmetry makes it possible to locate a sound three-dimensionally when input is integrated stereophonically rather than in a simple left-right way. Unlike light, sound reaches both ears no matter what direction its source lies in. This makes it possible to detect the slight delay in a signal coming from one side or the other, and even to assess distance to some extent.
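The size of that delay follows from simple geometry. As a rough sketch, assuming a far-field source and a standard two-receiver approximation (our illustration, not a model from the text):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def itd_seconds(ear_separation_m, azimuth_deg):
    """Far-field two-receiver approximation of interaural time difference."""
    return ear_separation_m * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

# Human-scale vs. very small ear separation, source 90 degrees to one side
# (separations are illustrative guesses, not measured values):
for label, d in (("human-scale (~0.18 m)", 0.18), ("small animal (~0.01 m)", 0.01)):
    print(f"{label}: max ITD = {itd_seconds(d, 90) * 1e6:.0f} microseconds")
# ~525 microseconds for the human-scale head, easily resolved by brainstem
# circuits; ~29 microseconds at 1 cm, where intensity cues may matter more.
```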

Radios rely on devices to filter out signals at all frequencies other than the one to which they are tuned. In that way, a signal is detected out of a background that can contain energy of all frequencies. Auditory systems in the brain also tease out single sounds or sound frequencies from the generally broad panoply of incoming sounds, so, for example, a person can hear the melody of the violins in an orchestra or attend to a single conversation in a crowded room. How we do this auditory scene analysis is an important but still not well-understood phenomenon that probably involves learning from experience combined with accurate representation and localization of sound by the ear and brain.
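The radio analogy can be made concrete in software. The toy example below (our illustration) mixes two tones and then "tunes in" to one of them with a simple FFT band-pass mask, the software analog of a tuned filter:

```python
import numpy as np

# Synthesize a mixture of two tones, then isolate one by zeroing all
# Fourier components outside a narrow band around it.
fs = 8000                                    # sample rate, Hz
t = np.arange(fs) / fs                       # one second of signal
mix = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 1000 * t)

spectrum = np.fft.rfft(mix)
freqs = np.fft.rfftfreq(mix.size, 1 / fs)
spectrum[(freqs < 400) | (freqs > 480)] = 0  # keep only the band around 440 Hz
tone = np.fft.irfft(spectrum, n=mix.size)

# The 1000 Hz component is gone; what remains is essentially the 440 Hz tone.
cleaned = np.abs(np.fft.rfft(tone))
print(f"magnitude at 1000 Hz after filtering: {cleaned[1000]:.2e}")  # ~0
print(f"magnitude at  440 Hz after filtering: {cleaned[440]:.0f}")   # ~4000
```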

Sound interpretation is more than just the detection of characteristic vibration signatures, the way a mass spectrometer detects the characteristic spectrum of each type of molecule. Signature detection is certainly part of this, but, at least in higher vertebrates, many more subtleties and complexities are resolved. As in vision, this is basically done in the brain, after the information has left the detector itself.

Speech processing is an important higher level of sound processing in humans, but understanding of the neural pathways and processes involved is still fairly rudimentary, in part because there are no laboratory animals in which it can be studied.

Invertebrate Hearing

It appears that only a minority of insects have the ability to hear, but insect hearing may have evolved at least 19 times (Yager 1999). As with vision, we will probably learn that rudimentary elements of the system, perhaps shared with vertebrates, were present in stem animals (with whom animals on two divergent branches share ancestry), with morphological details independently intercalated between an early induction signal and later receptor-system differentiation. Cytoarchitectural processes recruited for mechanoreception are likely to be its basis.

Paired hearing organs are generally located peripherally and can occur on almost any part of the body, including various abdominal or thoracic segments, the legs, wings, or mouthparts.

Almost all insect ears share a tympanum covering a tracheal sac and tympanal organ. The somata and dendrites of insect auditory receptors are in the hearing organs themselves; sound frequency is first analyzed there. The axons of the auditory receptors enter the nearby segmental ganglion or ganglia and carry auditory input in a tonotopically organized way, as in the vertebrate hearing system.

Thus, insects also interpret a signal that has been given an orderly translation from a frequency pattern into a neurological, space-based "map." The signal is then sent to the brain for decoding of the sound's direction, pattern, and frequency, and for subsequent localization (Pollack 1998; Stumpner and von Helversen 2001). The specifics of the neuronal pathways differ by species; some, for example, use parallel independent processing to recognize and localize signals, and some use a process in which localization depends on recognition (Pollack 1998).

As in vertebrates, sound direction is determined by comparison of the auditory input from the two ears. The vertebrate brain uses interaural differences in arrival time and intensity to make that determination. The insect brain uses intensity differences as well, but whether the difference in arrival time of a sound at two ears located so close to each other is large enough to be useful in determining directionality is still an open question (Pollack 1998). (Again, we must be careful not to view the world from our human frame of reference: some mammals that can detect the directionality of sound, such as voles and bats, are also very small.) Inhibiting or destroying sound reception in one ear does leave an insect unable to locate a sound source, such as the call of an echolocating bat.
