The DCT is a reversible transform and retains all the information in the image. Compression in JPEG and MPEG is obtained by discarding some of the DCT components. The principle is that a small section of the image can be represented by the average color intensity and only major differences from the average. Minor differences that are ignored provide the compression and introduce the loss. The wavelet transform described in a subsequent chapter is based on one of many available basis functions, and it is typically used for lossy compression by discarding some of the transform components.
The subsequent step, thresholding and quantization, reduces the number of levels used to represent gray scale or color, producing a reasonable but not exact representation of the transform output. The quality of reproduction depends on the number of levels used in quantizing color or
gray level. Quantization and thresholding are the steps where most of the image loss usually occurs and where the quality of the compressed image is determined. Entropy encoding is the final stage, where lossless compression is applied to the quantized data. Several techniques can be used, such as run-length and Huffman coding (CCITT Group 3) and two-dimensional encoding (CCITT Group 4).
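The transform-and-quantize pipeline described above can be sketched in a few lines. This is a minimal illustration, not a full JPEG codec; the quantization table is the common luminance example, and the smooth test block is hypothetical:

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II of an N x N block (separable: C @ X @ C.T)."""
    n = block.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ block @ C.T

# Illustrative luminance quantization table (the widely used JPEG example).
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

# A smooth 8x8 block (horizontal ramp): after the DCT and quantization,
# nearly all coefficients round to zero -- this is where the loss occurs.
block = np.tile(100.0 + 4.0 * np.arange(8), (8, 1))
coeffs = dct2(block - 128.0)          # level shift, then transform
quantized = np.round(coeffs / Q)      # most entries become 0
```

Entropy coding then exploits the long runs of zeros in the quantized coefficients.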
2.1 Joint Photographic Experts Group (JPEG) Compression
JPEG, formed as a joint committee of the International Organization for Standardization (ISO) and CCITT, focuses on standards for still image compression. The JPEG compression standard is designed for still color and gray-scale images, otherwise known as continuous-tone images, or images that are not restricted to dual-tone (black and white) only. Emerging technologies such as color fax, scanners, and printers need a compression standard that can be implemented at acceptable price-to-performance ratios. The JPEG standard is published in two parts:
(1) The part that specifies the modes of operation, the interchange formats, and the encoder/decoder specified for these modes along with implementation guidelines
(2) The part that describes compliance tests that determine whether the implementation of an encoder or decoder conforms to Part 1, to ensure interoperability of systems.
The JPEG compression standard has three levels of definition:
• Baseline system
• Extended system
• Special lossless function
A device that converts analog signals to digital codes and digital codes back to analog signals is called a codec. Every codec implements a baseline system, also known as baseline sequential encoding. The codec performs analog sampling, encoding/decoding, and digital compression/decompression. The baseline system must satisfactorily decompress color images and handle resolutions ranging from 4 to 16 bits per pixel. At this level, the JPEG compression standard ensures that software, custom very large scale integration (VLSI), and digital signal processing (DSP) implementations of JPEG produce compatible data. The extended system covers encoding aspects such as variable-length encoding, progressive encoding, and the hierarchical mode of encoding. All of these encoding methods are extensions of the baseline sequential encoding. The special lossless function, also known as predictive lossless coding, is used when loss in compressing the digital image is not acceptable.
There are four modes in JPEG:
• Sequential encoding
• Progressive encoding
• Hierarchical encoding
• Lossless encoding
JPEG sequential encoding requirements dictate encoding in a left-to-right, top-to-bottom sequence to ensure that each pixel is encoded only once. Progressive encoding is usually achieved by multiple scans: the image is decompressed so that a coarser image is displayed first and is filled in as more components of the image are decompressed. With hierarchical encoding, the image is compressed to multiple resolution levels so that lower resolution levels may be accessed for lower resolution target systems without having to decompress the entire image. With lossless encoding, the image is expected to provide full detail when decompressed.
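The idea behind hierarchical encoding can be illustrated with a toy multiresolution pyramid. This is a hypothetical sketch of the concept (simple 2x2 averaging), not the actual JPEG hierarchical bitstream:

```python
import numpy as np

def build_pyramid(image, levels):
    """Toy multiresolution pyramid in the spirit of hierarchical encoding:
    each level halves the resolution by 2x2 averaging, so a low-resolution
    target system can stop decoding at a coarse level."""
    pyramid = [image.astype(float)]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = img.shape
        img = img[:h - h % 2, :w - w % 2]          # trim to even dimensions
        coarser = (img[0::2, 0::2] + img[0::2, 1::2] +
                   img[1::2, 0::2] + img[1::2, 1::2]) / 4.0
        pyramid.append(coarser)
    return pyramid

# A 64x64 test image yields 64x64, 32x32, and 16x16 levels.
levels = build_pyramid(np.arange(64 * 64, dtype=float).reshape(64, 64), 3)
```

A decoder targeting a small display would fetch only the coarse level; a full-resolution decoder would refine it with the finer levels.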
JPEG and wavelet compression are compared in Fig. 3 on 8-bit gray-scale images, both at a compression ratio of 60 to 1. The top row shows chest X-ray images and the bottom row presents typical magnified retina images. The image detail is retained better in the wavelet-compressed image.
2.2 Moving Picture Experts Group (MPEG) Compression
Standardization of compression algorithms for video was first initiated by CCITT for teleconferencing and video telephony. The digital storage media addressed by the MPEG standard include digital audio tape (DAT), CD-ROM, writable optical disks, magnetic tapes, and magnetic disks, as well as communications channels for local and wide area networks (LANs and WANs, respectively). Unlike still image compression, full-motion image compression has time and sequence constraints. The compression level is described in terms of a compression rate for a specific resolution.
The MPEG standards consist of a number of different standards. The original MPEG standard did not take into account the requirements of high-definition television (HDTV). The MPEG-2 standards, released at the end of 1993, include HDTV requirements in addition to other enhancements. The MPEG-2 suite of standards consists of standards for MPEG-2 audio, MPEG-2 video, and MPEG-2 systems. It is also defined at different levels to accommodate different rates and resolutions as described in Table 2.
Moving pictures consist of sequences of video pictures or frames that are played back at a fixed number of frames per second. Motion compensation is the basis for most compression algorithms for video. In general, motion compensation assumes that the current picture (or frame) is a revision of a previous picture (or frame). Subsequent frames may differ slightly as a result of moving objects or a moving camera, or both. Motion compensation attempts to account for this movement. To make the process of comparison more efficient, a frame is not encoded as a whole. Rather, it is split into blocks, and the blocks are encoded and then compared. Motion compensation is a central part of the MPEG-2 (as well as MPEG-4) standards. It is the most computationally demanding algorithm of a video encoder.
FIGURE 3 Effect of 60 to 1 compression on 8-bit gray-scale images.
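The block-matching search at the heart of motion compensation can be sketched as an exhaustive full search over a small window. This is a minimal illustration; real encoders use faster search heuristics:

```python
import numpy as np

def best_match(prev, block, top, left, search=4):
    """Exhaustive block matching: find the displacement (dy, dx), within
    +/- `search` pixels, that best aligns `block` -- taken from position
    (top, left) of the current frame -- with the previous frame, by
    minimizing the sum of absolute differences (SAD)."""
    n = block.shape[0]
    best, best_sad = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > prev.shape[0] or x + n > prev.shape[1]:
                continue
            sad = float(np.abs(prev[y:y + n, x:x + n] - block).sum())
            if sad < best_sad:
                best_sad, best = sad, (dy, dx)
    return best, best_sad

# Current frame is the previous frame shifted down 2 and right 3 pixels,
# so an interior block should be found at displacement (-2, -3) with SAD 0.
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, (32, 32)).astype(float)
curr = np.roll(prev, (2, 3), axis=(0, 1))
vec, sad = best_match(prev, curr[8:16, 8:16], 8, 8)
```

Only the motion vector and the (small) block residual then need to be encoded, rather than the raw block.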
The established standards for image and video compression developed by JPEG and MPEG have been in existence, in one form or another, for over a decade. When first introduced, both processes were implemented via codec engines that were entirely in software and very slow in execution on the computers of that era. Dedicated hardware engines have since been developed, and real-time video compression of standard television transmission is now an everyday process, albeit with hardware costs that range from $10,000 to $100,000, depending on the resolution of the video frame. JPEG compression of fixed or still images can be accomplished with current-generation PCs. Both the JPEG and MPEG standards are in general use within the multimedia image compression world. However, the DCT appears to be reaching the end of its performance potential, since most users in multimedia applications need much higher compression capability. The image compression standards are in the process of turning away from the DCT toward wavelet compression.
TABLE 2 MPEG-2 resolutions, rates, and metrics 
Level       Pixels per line   Compression and decompression rate   Lines per frame   Frames per second   Pixels per second
High        1920              Up to 60 Mbits per second            1152              60                  62.7 million
High-1440   1440              Up to 60 Mbits per second            1152              60                  47 million
Main        720               Up to 15 Mbits per second            576               30                  10.4 million
Low         352               Up to 4 Mbits per second             288               30                  2.53 million
Table 3 presents quantitative information related to the digitization and manipulation of a representative set of film images. As shown, the average medical image from any of several sources translates into large digital image files. The significant payoff achieved with wavelet compression is the capability to send the image or set of images over low-cost telephone lines in a few seconds rather than tens of minutes to an hour or more if compression is not used. With wavelet compression, on-line medical collaboration can be accomplished almost instantaneously via dial-up telephone circuits. The average compression ratios shown in Table 3 are typically achieved with no loss of diagnostic quality using the wavelet compression process. In many cases, even higher ratios are achievable with retention of diagnostic quality. The effect of compression on storage capability is equally remarkable. For example, a set of six 35-mm slide images scanned at 1200 dpi producing nearly 34 Mbytes of data would compress to less than 175 kbytes. A single CD-ROM would hold the equivalent of nearly 24,000 slide images.
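The storage arithmetic behind these figures is easy to check. The 650-Mbyte CD-ROM capacity used below is an assumed round number for illustration:

```python
# Back-of-the-envelope check of the storage figures quoted above.
six_slides_raw = 34e6          # ~34 Mbytes for six slides scanned at 1200 dpi
six_slides_compressed = 175e3  # ~175 kbytes after wavelet compression

ratio = six_slides_raw / six_slides_compressed   # roughly 190:1
per_slide = six_slides_compressed / 6            # ~29 kbytes per slide
slides_per_cd = 650e6 / per_slide                # on the order of 22,000+
```

The resulting per-slide size of roughly 29 kbytes is what makes both the dial-up transmission times and the CD-ROM slide counts quoted above plausible.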
The DCT will give way to wavelet compression simply because the wavelet transform provides 3 to 5 times higher compression ratios for still images than the DCT at identical image quality. Figure 3 compares JPEG compression with wavelet compression on a chest X-ray and a retina image. The original images shown on the left are compressed with the wavelet transform (middle) and JPEG (right), both at a 60:1 compression ratio. The original chest X-ray is compressed from 1.34 Mbytes to 22 kbytes, while the original retina image is compressed from 300 kbytes to 5 kbytes. For video, the ratio could be as much as 10 times the compression ratio of MPEG-1 or MPEG-2 at identical visual quality for video and television applications. A change from the DCT is coming because of the transmission bandwidth reduction available with wavelets and the capability to store more wavelet-compressed files on a CD-ROM, DVD-ROM, or any medium capable of storing digital files.
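The energy-compaction property that favors wavelets can be seen with the simplest wavelet, the Haar basis. This sketch performs a single decomposition level; production coders use smoother wavelets and several levels:

```python
import numpy as np

def haar2d(img):
    """One level of the orthonormal 2-D Haar wavelet transform: splits the
    image into an approximation band (LL) and three detail bands."""
    s = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2)   # row lowpass
    d = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2)   # row highpass
    LL = (s[0::2] + s[1::2]) / np.sqrt(2)
    LH = (s[0::2] - s[1::2]) / np.sqrt(2)
    HL = (d[0::2] + d[1::2]) / np.sqrt(2)
    HH = (d[0::2] - d[1::2]) / np.sqrt(2)
    return LL, LH, HL, HH

# On a smooth image, almost all of the energy lands in LL, so the detail
# bands can be coarsely quantized or discarded for compression.
img = np.add.outer(np.arange(16.0), np.arange(16.0))
LL, LH, HL, HH = haar2d(img)
```

Because the transform is orthonormal, the total energy of the four bands equals that of the image; the compression comes from how unevenly that energy is distributed.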
Fractal compression is an approach that applies a mathematical transformation iteratively to a reference image to reproduce the essential elements of the original image. The quality of the decompressed image is a function of the number of iterations that are performed on the reference image and the processing power of the computer. This discussion will focus on the compression of black-and-white images and gray-scale images. For purposes of analysis, black-and-white images are modeled mathematically as point sets (black and white points) in a two-dimensional Euclidean space and gray-scale images are modeled as point sets in a three-dimensional Euclidean space.
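The iterative mechanism can be sketched with a classic iterated function system. The Sierpinski triangle here is an illustrative target, not a compressed photograph, and the three mappings are standard textbook contractions:

```python
import numpy as np

# Three affine linear contractions (each halves distances); their attractor
# is the Sierpinski triangle, standing in for a "target image".
MAPS = [
    lambda p: 0.5 * p,
    lambda p: 0.5 * p + np.array([0.5, 0.0]),
    lambda p: 0.5 * p + np.array([0.25, 0.5]),
]

def iterate(points, rounds):
    """Apply the full set of contraction mappings to a point set repeatedly;
    by the contraction property the sequence converges toward the same
    attractor regardless of the initial image."""
    for _ in range(rounds):
        points = np.vstack([f(points) for f in MAPS])
    return points

# Starting from two different "initial images" (single points), the results
# after a few iterations are nearly identical.
a = iterate(np.array([[0.9, 0.9]]), 10)
b = iterate(np.array([[0.1, 0.2]]), 10)
```

The compressed representation is just the handful of mapping coefficients; decompression is the iteration itself, which is why quality depends on the number of iterations performed.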
Fractal image compression employs a set of functions that are mappings from a two-dimensional Euclidean space onto itself for black-and-white images, or mappings from a three-dimensional Euclidean space onto itself for gray-scale images. The set of mappings is employed recursively beginning with an initial (two- or three-dimensional) point set called the "initial image" to produce the final, "target image" (the original image to be compressed); i.e., application of the mappings to the initial image produces a secondary image, to which the mappings are applied to produce a tertiary image, and so on. The resulting sequence of images will converge to an approximation of the original image. The mappings employed are special to the extent that they are chosen to be affine linear contraction (ALC) mappings generally composed of a simple linear transformation combined with a translation. The fact