Light

As we saw in Chapter 14, most organisms in this world, from single-celled bacteria to plants, insects, mammals, and fish, make use of light energy in some way. Some convert solar energy into chemical energy, whereas others have receptors that trigger responses ranging from movement toward or away from a light source to conversion of the light into a perception of the surroundings.

Plant Light Reception

Plants use light in more ways than do animals and have evolved more pigments to collect and transduce it. They use light for photosynthesis, photoperiodism, photomovement, and photomorphogenesis. We might say plants have organismal responses that require some forms of perception because they respond to light in multiple ways (e.g., positive phototropism in stems and leaves and negative phototropism in roots), and these responses are based on signals beyond light itself (e.g., gravity, hormones). Furthermore, responses vary from intracellular ones, such as chloroplast movement, to higher level ones, such as stomata opening or differential growth of different sides of branches in the process of photomorphogenesis. Plants also respond to more than simply the presence or absence of sunlight: they respond to the wavelength, intensity, and directionality of the light, as well as to day length.

The plant's local, noncentralized photoreception means that, although the photosynthesizing leaf cannot survive without the roots and vascular system of the plant, the plant itself can survive without a specific leaf or even without many of its leaves. Unlike an organism with a cephalized nervous system, which cannot perceive light without its central ganglion (the head), a plant, with its many loci of efficient photosynthesis, can lose many leaves before it can no longer take in enough solar energy to survive or to make local or organismal responses to light. We can think of this as automatic or purely molecular rather than "intelligent" response—so far as we know (reputed response to music and kind words notwithstanding), plants are unaware that they are reaching toward the skies—but, unless we agree to reserve the term for brain activity, that distinction seems somewhat artificial. Indeed, it may be rather anthropocentric and consciousness-focused to make such a distinction.

We also noted in Chapter 14 that algae can sense light and respond with phototactic motion directed in relation to its source. This involves complex signaling and response mechanisms, although of course less complex than in, say, organisms with a CNS that must respond by coordinated locomotion involving multiple parts. It is important not to forget that the result is not just the activation of a molecular pathway but an organismal response, even in the lowly algae.

Insect Vision

Drosophila have two visual systems: the compound eye for image formation and ocelli for light detection. The compound eye is formed of 700 or so ommatidia, each of which contains photoreceptor cells. The ommatidia are arranged in a honeycomblike fashion, and the entire eye connects to three relay points in the optic lobes of the fly's CNS: the lamina, medulla, and lobula complex. The simple central ocelli, of which Drosophila have three, synapse with a single point in the CNS, the ocellar ganglion.

The ommatidial neurons terminate in one of the three layers. The projections from a single ommatidium extend to the lamina, and from there some axons continue on to the medulla. As in vertebrate vision, these projections are topographic. That is, the image is relayed to the brain in a way that preserves the physical layout, or "map," it forms on the eye, a layout that, because light propagates in straight lines, corresponds directly to the perceived image.

Some insects have an extraocular or dermal light sense. This has been shown with experiments in which the eyes have been made nonfunctional. This sense involves single neurons in the brain and/or ventral nerve cord responding to light.

In Chapter 14, we described some key aspects of the cascade of regulatory gene expression that is used to form the insect eyes themselves (Cutforth and Gaul 1997; Czerny et al. 1999; Punzo et al. 2002). Some of the regulatory genes that are involved in development of the optic lobes of the adult Drosophila nervous system have been identified and include Wingless (Wg) and Decapentaplegic (Dpp), genes already familiar to us because of their involvement in many other structures. Minibrain, a protein serine/threonine kinase gene, and Division abnormally delayed (Dally) are also involved in cell proliferation in the visual lobes of the fly brain. The final differentiation of cells in the developing lamina depends on the arrival of axons from the ommatidial photoreceptors, in a way homologous to the development of vertebrate olfaction (Cutforth and Gaul 1997), which will be described below. The process involves differential signal transduction, in this case among adjacent cells.

How an insect "perceives" light images is, however, still unknown, especially in terms of its experience. For example, how does a particular image generate a response such as flight or avoidance? Can these be called "emotional" responses, and if so how are they "felt" by the fly?

Vertebrate Vision

Light sensation typically requires detection of direction and strength of the signal. Some organisms do more with light, and it is for this that we use the term "vision." Except perhaps for touch, vision is the one sense that specifically requires the (sometimes detailed) spatial characterization of the signal. It is this that allows us to interpret an image that comes essentially in the form of pixels, to give an integrated assessment of the spatial relationships of objects distant from us, and we might say this is the value of vision. Eyes such as ours perceive many other aspects of light in addition to spatial ones, providing an even more nuanced sense of our environment.

We tend to think of the vertebrate retina as a simple analog of a camera's passive film or other kind of pixel sensor. Rods detect brightness and cones the various primary colors. But the interpretation of light absorbed by the rods and cones begins before the signal leaves the eye. Retinal neurons are specialized for different aspects of light, and the image that is sent to the brain is not that of absolute levels of illumination but instead a retinal map of spatial patterns of regions of relative light and dark, color, intensity, depth, and so forth.

The spatial orientation of the light that hits the retina is precisely maintained, although somewhat distorted, in the form of a retinotopic map sent to each visual processing center in the brain; the map is intact in that neighboring cells in the retina project to groups of neighboring cells in the visual area of the thalamus, which in turn project to neighboring regions of the striate cortex (Kandel et al. 2000). "Somewhat distorted" means that the relative position of the signals is retained, but the translation of a three-dimensional image received by a curved retina into a two-dimensional one necessarily alters it somewhat, and the absolute distances between the receipt centers of two signals in the brain and between their receiving photoreceptors are not maintained. The percept provided by the brain compensates for this kind of "stretch" distortion with mechanisms at least partially understood (see below). In other words, the correction appears to be such that the percept more closely maps to the image striking the retina than to the way that image enters the brain. In fact, most sensory systems project their receptive surface to the appropriate brain centers in a similar way, but the spatial correction is most important in visual and tactile perception.
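The neighbor-preserving but distance-distorting character of a retinotopic map can be illustrated with the complex-logarithm approximation sometimes used to model primate retinotopy. This is only an illustrative sketch, not the brain's actual computation; the constant `a` and the sample eccentricities are hypothetical values chosen for the demonstration.

```python
import cmath

def retina_to_cortex(eccentricity, a=0.5):
    """Toy complex-log model of the retinotopic map: a retinal
    position (here, degrees of eccentricity along one meridian)
    is sent to a model cortical position. The constant 'a'
    (hypothetical) keeps the map finite at the fovea."""
    return cmath.log(eccentricity + a)

# Three points along the horizontal meridian of the retina.
near, mid, far = 1.0, 2.0, 10.0
c_near, c_mid, c_far = (retina_to_cortex(p) for p in (near, mid, far))

# The relative order of the points (neighbor relations) is preserved...
assert c_near.real < c_mid.real < c_far.real

# ...but absolute distances are not: each degree of the foveal pair
# (1 degree apart) gets more model cortex than each degree of the
# peripheral pair (8 degrees apart).
per_degree_near = (c_mid.real - c_near.real) / 1.0
per_degree_far = (c_far.real - c_mid.real) / 8.0
assert per_degree_near > per_degree_far
```

The point of the sketch is only that a smooth mapping can keep neighbors adjacent, producing an intact "map," while stretching some regions and compressing others, which is the kind of distortion the percept must compensate for.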

Two predominant pathways carry the signal from eye to brain in vertebrates: the retinotectal and the retinogeniculate pathways. In vertebrates with a small forebrain, such as birds, amphibians, and reptiles, the retinotectal projection is the most prominent. Here, the photoreceptors in the retina project onto bipolar nerve cells (cells with two processes extending from the cell body, the axon and the dendrite), with limited dendritic branching on one end and a short axon on the other, a structure that allows fast and precise conductance of the signal. These neurons in turn connect with retinal ganglion cells that form the optic nerve and go primarily to a part of the midbrain called the optic tectum ("roof"). This structure coordinates orienting responses—turning toward light or sensing prey or danger—rather than analysis of form or shape or the like, as in mammals. That is, retinotectal vision is predominantly linked with motor control and is representationally rather crude.

In mammals, the homolog to the optic tectum is the superior colliculus (a small moundlike region in the midbrain), which receives light signals, as well as other kinds of sensory input. The superior colliculus helps orient the head and eyes in relation to these other kinds of sensory information.

The second pathway, the retinogeniculate, is the predominant visual signal relay pathway in mammals but is only barely evident in vertebrates with small forebrains. As in the retinotectal pathway, light signals are conveyed along retinal ganglion cells to the visual processing centers of the brain. The retinal ganglion cells are called X or Y cells (P or M cells, respectively, in primates) and comprise two different major routes for visual information to reach the brain. The X and Y cells transmit slightly different visual information to slightly different areas of the brain, with the X cells projecting to the parvocellular, or small-celled, layers and the Y cells projecting to the magnocellular, or large-celled, layers of the laminar lateral geniculate nucleus (LGN) in the thalamus.

The most important difference between the X and Y cells is their response to color contrasts; X cells are essential for color vision. But X cells are also important in distinguishing images that require high spatial and low temporal resolution (i.e., they specialize in sustained responses and are best at analysis of stationary objects), whereas Y cells are important in vision that requires the opposite: low spatial and high temporal resolution (i.e., they have fast and transient responses and detect movement, basic shape, depth, brightness, texture, etc., of objects but are poor at analysis when objects are stationary) (Kandel, Schwartz et al. 2000).
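The sustained-versus-transient contrast between X and Y cells can be caricatured in a toy model (the functions and stimulus values below are invented purely for illustration): a sustained cell tracks the current luminance, whereas a transient cell responds only to frame-to-frame change, so it is silent for a stationary stimulus.

```python
def sustained_response(frames):
    """Toy X-like (sustained) cell: output follows current luminance."""
    return list(frames)

def transient_response(frames):
    """Toy Y-like (transient) cell: output is the frame-to-frame change."""
    return [abs(b - a) for a, b in zip(frames, frames[1:])]

static = [1.0, 1.0, 1.0, 1.0, 1.0]   # a stationary bright object
moving = [0.0, 1.0, 0.0, 1.0, 0.0]   # a changing (moving/flickering) stimulus

assert all(r > 0 for r in sustained_response(static))   # sustained: keeps firing
assert all(r == 0 for r in transient_response(static))  # transient: silent when static
assert all(r > 0 for r in transient_response(moving))   # transient: fires on change
```

This is just the high-spatial/low-temporal versus low-spatial/high-temporal trade-off reduced to its simplest terms: one channel is good at reporting what is there, the other at reporting that something changed.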

Retinal ganglion cells exit each eye bunched together into the optic nerve. The optic nerve from each eye meets in the optic chiasma (see Figure 16-1), where fibers from the nasal half of each retina cross to the opposite hemisphere, so that each hemisphere receives signals from the opposite half of the visual field; the signals then travel along the optic tract to the lateral geniculate nucleus, to be relayed from there to the primary cortical visual center. This partial crossover is essential for coordinating the images from both eyes to create stereoscopic vision.
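The routing at the chiasm can be sketched as a toy function (a deliberate simplification of the primate pattern of partial decussation; the names are illustrative): fibers from the nasal half of each retina cross, temporal fibers stay on their own side, and the net effect is that each hemisphere receives the opposite half of the visual field from both eyes.

```python
def receiving_hemisphere(eye, hemiretina):
    """Toy model of partial decussation at the optic chiasm:
    nasal-retina fibers cross to the opposite hemisphere,
    temporal-retina fibers project ipsilaterally."""
    if hemiretina == "nasal":
        return "right" if eye == "left" else "left"  # crosses at the chiasm
    return eye                                       # stays on the same side

# An object in the LEFT visual field lands on the left eye's nasal retina
# and on the right eye's temporal retina; both signals end up in the
# RIGHT hemisphere, where they can be combined for stereopsis.
assert receiving_hemisphere("left", "nasal") == "right"
assert receiving_hemisphere("right", "temporal") == "right"
```

The design point the sketch makes explicit is that the crossover sorts fibers by visual field rather than by eye, which is what lets one hemisphere compare the two eyes' views of the same half of the world.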

Each neuron in the LGN is responsive to a single spot of light in a region of the visual field, and each layer of the LGN is monocular, that is, responsive to only one eye. The retinotopic map is maintained by the neurons of the LGN, which are primarily projection or relay neurons that in turn project to different layers of the primary visual cortex (also called Brodmann's area 17, V1, or the striate cortex) of the forebrain. The X and Y pathways still project to different sublayers of cortical layer IV, maintaining the separation of input from each eye.

The primary visual cortex, like much of the nervous system, is modularly organized into sets of columns. This "multiple columnar system," with hypercolumns for different tasks, includes orientation and ocular dominance columns. The neurons in the ocular dominance columns receive input from a small spot on the retina of a particular eye, and the orientation columns receive input from light that hits the retina on a particular plane, that is, from lines and edges all tilted at essentially the same angle to the vertical. To represent all orientations for both eyes, 18-20 columns are required. Neighboring hypercolumns represent neighboring sections of the retina and communicate through horizontal connections that link cells with the same specific tasks in the same layers, in this way integrating information over many millimeters of cortex. Information from outside a cell's immediate environs may therefore influence the way it processes information and, in turn, the way we evaluate in context what we see.

[Figure 16-1 near here]

Figure 16-1. Transverse section showing the pathway of the visual signal to the human primary visual cortex. Labels: overlapping visual fields; visual field of right eye; visual field of left eye; optic tract; signal from left half of visual field; signal from right half of visual field; lateral geniculate nucleus (LGN); primary visual cortex (occipital lobe). Redrawn from (Driesen 2003).

After visual information leaves the primary visual cortex (V1), the signals go to 30 or more secondary visual processing centers in the occipital lobe and parts of the parietal and temporal lobes (Figure 15-6). In the end, perhaps 50 percent of the neocortex is involved in visual processing. Beyond the striate cortex, signal is processed with respect to color, motion, intensity, depth, and form.

The X and Y pathways remain segregated as the information leaves the primary visual cortex. The X pathway extends into the inferior temporal cortex as the ventral cortical pathway, and the Y pathway extends to the posterior parietal cortex as the dorsal pathway. The dorsal pathway is largely responsible for the perception of motion, depth, and form, and the perception of contrast and contours takes place in the ventral pathway, although there is a good deal of overlap. Put simply, the dorsal pathway determines where an object is, and the ventral pathway is involved in recognizing what the object is (Kandel, Schwartz et al. 2000).

Visual images are passed to our brain inverted and backward, as they are represented on the retina, a curved but essentially two-dimensional surface. Bilateral symmetry allows organisms to see on both sides of themselves: one eye detects what is to the left of the organism, the other what is to the right. In some organisms, both eyes face forward enough that the fields of vision (images) of the two eyes overlap; it then becomes possible to characterize the outside world more precisely in three dimensions, that is, additional information about relative distances from the eyes becomes available. The three-dimensional aspects of vision are reproduced in the higher cortical centers, using both monocular and binocular cues. Binocular resolution allows the kind of "triangulation" of differences that enables us to perceive depth or distance. Our brains apparently learn from experience to integrate the many cues that give an image its three-dimensionality, such as an object's distance, its size relative to other objects in the frame, and its motion, as well as to correct the inversion and reversal of the image as it enters the brain.
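Binocular "triangulation" can be illustrated with the standard pinhole-camera stereo formula used in machine vision, Z = B * f / d: depth is inversely proportional to the disparity between the two images. This is an engineering analogy rather than a claim about the brain's actual computation, and the numerical values below are arbitrary.

```python
def depth_from_disparity(baseline_cm, focal_px, disparity_px):
    """Pinhole-camera stereo triangulation: Z = B * f / d.
    baseline_cm: separation between the two 'eyes' (cameras);
    focal_px: focal length expressed in pixels;
    disparity_px: horizontal shift of the same point between the
    two images. Returns depth in the same units as the baseline."""
    if disparity_px <= 0:
        raise ValueError("zero or negative disparity: point at infinity")
    return baseline_cm * focal_px / disparity_px

# Larger disparity means a nearer object; halving the disparity
# doubles the computed depth (values here are arbitrary examples).
near = depth_from_disparity(6.5, 800, 40)   # 130.0 cm
far = depth_from_disparity(6.5, 800, 20)    # 260.0 cm
assert far == 2 * near
```

The inverse relationship is why stereoscopic depth judgments are most precise for nearby objects: at large distances the disparity shrinks toward zero, and small measurement errors translate into large depth errors.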

The retinal image is represented numerous times in the cortex, with many cells processing the image in different ways at once, in at least two different pathways. How do all the separate representations of what we see come together into a single image? This is known as the Binding Problem and is one of the central questions in cognitive psychology. The answer is still not clear. The brain apparently constructs a visual image in layers, by putting together the numerous interpretations produced at each level of processing, but whether there is a common pathway that integrates all this or whether the various afferent pathways interact in some way is not yet known.

Extraocular Vertebrate Photoreception

We referred briefly in Chapter 14 to the fact that organisms perceive light in many ways that do not involve eyes. We referred to vertebrate extraretinal photoreceptors that are important in regulation of circadian rhythms and photoperiodicity. The pineal and parapineal glands within the brain are the most important such photoreceptors. Brains are more or less permeable to light, depending on the circumference of the head: although the light becomes somewhat refracted and filtered, in many animals sufficient light penetrates the skull and brain to reach photoreceptors in the pineal and parapineal glands. In larger animals, light input to the pineal gland is part of the visual pathway. The pineal gland regulates synthesis of the hormone melatonin, which is derived from the neurotransmitter serotonin and is secreted at night. The neurochemical basis for its regulation of circadian rhythms is not known. Although this response to light is in some senses behavioral, it appears to be simple in that it does not involve detecting other aspects of light such as direction or, especially, image.

Many animals also use light in nonperceptive or certainly nonbehavioral ways. Some require sunlight in the ultraviolet range, for example, for the conversion of cholesterol into vitamin D or to induce the production of melanin by melanocytes to protect cells from the damage ultraviolet light can cause to DNA. Interestingly, melanocytes are derived from neural crest (NC) cells and in that sense are neural structures.
