Once the hard work of developing coding schemes is past, trained observers are expected to categorize (i.e., code) quickly and efficiently various aspects of the behavior passing before their eyes, audible to their ears, or both. The behavior may be live, an audio or video recording (in either analog or digital form), or a previously prepared transcript, but one basic question concerns the coding unit: To what entity is a code assigned? Is it a neatly bounded time interval such as the single minute used by Parten? Or is it successive n-second intervals, as is often encountered, especially in the older literature? Or is it an event of some sort? For example, observers might be asked to identify episodes of struggles over objects between preschoolers and then code various dimensions of those struggles (Bakeman & Brownlee, 1982). Or, in the approach we generally favor, observers may be asked to record onset and offset times of events, or to segment the stream of behavior into a sequence of mutually exclusive and exhaustive (ME&E) states, coding the type of each event and its onset time.
When onset and offset times of events are not recorded, the coding unit is usually straightforward. It could be a turn-of-talk in a transcript, a specified time interval, or a specified event. The practice of coding successive time intervals, often called zero-one, partial-interval, or simply time sampling (Altmann, 1974), requires further comment. Given today's technology, interval recording has less to recommend it than formerly. As usually practiced, rows on a paper recording form represented successive intervals (often quite short, e.g., 15 seconds), columns represented particular behaviors, and observers noted with a tick mark when a behavior occurred within, or predominantly characterized, each interval. The intent of the method was to provide approximate estimates of both frequency and duration of behaviors in an era before readily available recording devices automatically preserved time; it was a compromise between accuracy and ease that reflected the technology of the time.
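To make the compromise concrete, partial-interval recording can be sketched as follows. The function and event times below are hypothetical, invented for illustration: an interval receives a tick if the behavior occurred at any point within it, so duration estimated from ticked intervals can diverge considerably from true duration.

```python
# Hypothetical sketch of partial-interval (zero-one) time sampling.
# An interval is ticked (1) if the behavior occurs at any moment within it.

def partial_interval_ticks(events, total, width=15):
    """Mark each width-second interval that overlaps any (onset, offset) event."""
    n = (total + width - 1) // width      # number of intervals in the session
    ticks = [0] * n
    for onset, offset in events:
        first = int(onset // width)
        last = int(min(offset, total - 1e-9) // width)
        for i in range(first, last + 1):
            ticks[i] = 1
    return ticks

# Invented example: three episodes totaling 21 s of behavior in a 75-s session.
events = [(3, 5), (20, 22), (44, 61)]
ticks = partial_interval_ticks(events, total=75)
print(ticks)                  # [1, 1, 1, 1, 1]
print(sum(ticks) * 15)        # estimated duration: 75 s, versus 21 s actually
```

With these particular (invented) timings, every 15-second interval is ticked, so the interval-based duration estimate is 75 seconds against a true 21, which illustrates why the method was a compromise rather than a measurement.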
Given today's technology, almost always the time over which events occur can be preserved quite easily, and so no compromise is required. When coding live, for example, whenever a key representing a code is pressed on a laptop computer or similar device, not just the code but also the time can be automatically recorded. Or video recordings may display time as part of the picture, allowing observers to note the onset times of codable events. Or computers may display video recordings that contain electronic time codes as part of the recording, which automates entry of time codes into data files. With video recording and appropriate technology, the coder's task is reduced to viewing the image (and re-viewing, which is an advantage of working with a video recording), and pressing keys corresponding to onsets of codable events. When codes are organized into sets of ME&E codes, as recommended earlier, only onset times need be recorded because each onset implies the offset of an earlier code from the same set.
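The economy of onset-only recording for ME&E codes can be sketched in a few lines. The code names (borrowed loosely from Parten's categories) and times below are hypothetical; the point is only that each recorded onset implies the offset of the preceding state, so durations fall out for free.

```python
# Sketch: with an ME&E set, recording only (time, code) onsets suffices;
# each onset implies the offset of the previous code. Values are invented.

onsets = [(0, "unoccupied"), (12, "solitary"), (47, "parallel"), (80, "solitary")]
session_end = 120  # seconds; closes the final state

def durations_from_onsets(onsets, session_end):
    """Derive (code, onset, offset, duration) episodes from onset-only records."""
    episodes = []
    for (t, code), (t_next, _) in zip(onsets, onsets[1:] + [(session_end, None)]):
        episodes.append((code, t, t_next, t_next - t))
    return episodes

for code, on, off, dur in durations_from_onsets(onsets, session_end):
    print(f"{code:10s} {on:4d}-{off:<4d} {dur:3d} s")
```

Only the session end time must be supplied explicitly, to close the final episode; everything else follows from the ME&E property that one state's onset is the previous state's offset.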
When this approach is used—when onset and explicit or implied offset times are recorded—what is the coding unit? It does not make sense to say it is the event, which would imply a single decision, made once. The task is more complex. The coder is continuously alert, coding moment by moment, trying to decide if in this moment a particular code still applies. However, a moment is too imprecise to serve as a coding unit. As a practical matter, we need to quantify moment, and although arbitrary, probably the best choice is to let precision define the unit. Thus if we record times to the nearest second, as is common and reflects human reaction time, it is useful to think of the second as our coding unit, the entity to which a code is assigned. This is a fiction, of course, but a very useful one with implications for representing data and determining their reliability, as we discuss subsequently.
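Treating the second as the coding unit can be made concrete: an episode coded from onset to offset is equivalent to assigning its code to each 1-second interval it spans. The sketch below uses invented codes and times; this per-second representation is what later supports interval-by-interval reliability comparisons.

```python
# Sketch: the second as coding unit. Expanding timed episodes into a
# one-code-per-second stream; codes "A" and "B" are hypothetical.

def seconds_stream(episodes, session_end):
    """Expand (code, onset, offset) episodes into one code per second."""
    stream = ["-"] * session_end            # "-" marks uncoded seconds
    for code, onset, offset in episodes:
        for s in range(onset, offset):      # the offset second belongs to what follows
            stream[s] = code
    return stream

episodes = [("A", 0, 4), ("B", 4, 9)]
print("".join(seconds_stream(episodes, 9)))   # AAAABBBBB
```

Once two observers' records are both expressed this way, agreement can be tallied second by second, which is one practical payoff of the fiction that the second is the entity to which a code is assigned.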
Two comments seem in order, one dealing with smaller, one with larger time units: First, half-second or tenth-of-a-second intervals could be used, but without specialized equipment, hundredth-of-a-second intervals make little sense. Even though time in seconds may be displayed with two digits after the decimal point, only 30 or 25 frames per second of video are recorded (in the American NTSC, or National Television System Committee, standard and the European PAL, or Phase Alternating Line, standard, respectively), so the precision is illusory. Second, thinking of codes as assigned to successive 1-second intervals is no different logically from assigning codes to other intervals (e.g., 10- or 15-second ones), with one key difference: 1-second intervals reflect plausible precision in a way that larger intervals do not.