Mammogram Study

The mammography study involved a variety of tasks: detection, localization, measurement, and management decisions. This work has been reported in [2, 24, 45] and in the recent Stanford Ph.D. thesis of Bradley J. Betts [8], which also includes detailed analyses of a much larger trial. The image database was generated in the Department of Radiology of the University of Virginia School of Medicine and is summarized in Table 1. The 57 studies included a variety of normal images and images containing benign and malignant objects.

TABLE 1 Data test set: 57 studies, 4 views per study

 6 benign mass
 6 benign calcifications
 5 malignant mass
 6 malignant calcifications
 3 malignant combination of mass and calcifications
 3 benign combination of mass and calcifications
 4 breast edema
 4 malignant architectural distortion
 2 malignant focal asymmetry
 3 benign asymmetric density
15 normals

Reprinted with permission from S.M. Perlmutter, P.C. Cosman, R.M. Gray, R.A. Olshen, D. Ikeda, C.N. Adams, B.J. Betts, M. Williams, K.O. Perlmutter, J. Li, A. Aiyer, L. Fajardo, R. Birdwell, and B.L. Daniel, Image Quality in Lossy Compressed Digital Mammograms, Signal Processing, 59:189-210, 1997. © Elsevier.

Corroborative biopsy information was available for at least 31 of the test subjects.

The images were compressed using Set Partitioning in Hierarchical Trees (SPIHT) [54], an algorithm in the subband/wavelet/pyramid coding class. These codes typically decompose the image using an octave subband decomposition, a critically sampled pyramid, or a complete wavelet transformation, and then code the resulting transform coefficients efficiently. The decomposition is typically produced by an analysis filter bank followed by downsampling.
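As a minimal sketch of that analysis step, the fragment below pushes a toy 1-D signal through a two-channel analysis filter bank and downsamples each branch by two. The Haar filters and the signal values are chosen purely for brevity of illustration; they are not the filters used in the study.

```python
import numpy as np

# Two-channel analysis filter bank followed by downsampling by 2.
# Haar filters are used here only because they are the shortest pair.
lo = np.array([1.0, 1.0]) / np.sqrt(2.0)   # lowpass analysis filter
hi = np.array([1.0, -1.0]) / np.sqrt(2.0)  # highpass analysis filter

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 3.0])  # toy signal

# Filter, then keep every other output sample (critical sampling).
approx = np.convolve(x, lo)[1::2]  # coarse, low-frequency band
detail = np.convolve(x, hi)[1::2]  # detail, high-frequency band

print(approx)  # half-resolution trend of the signal
print(detail)  # what the trend misses at each position
```

Iterating the same split on the coarse band yields the octave (pyramid) structure described above.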

The most efficient wavelet coding techniques exploit both the spatial and frequency localization of wavelets. The idea is to group coefficients of comparable significance across scales by spatial location in bands oriented in the same direction. The early approach of Lewis and Knowles [31] was extended by Shapiro in his landmark paper on embedded zerotree wavelet (EZW) coding [57], and the best performing schemes are descendants of or variations on this theme. The approach provides codes with excellent rate-distortion trade-offs, modest implementation complexity, and an embedded bit stream, which makes the codes useful for applications where scalability or progressive coding is important. Scalability means the bit stream has a "successive approximation" property: a decoder can display an image as soon as bits begin to arrive and refine it to increasing quality as further bits accumulate, and a single encoder can serve customers with different rate capabilities.
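The successive approximation idea can be made concrete with a few lines of arithmetic. The sketch below refines a handful of made-up coefficients bit plane by bit plane, in the spirit of the significance and refinement passes of EZW-style coders; it deliberately omits the zerotree coding of the significance map, which is where the actual compression gain comes from.

```python
import numpy as np

# Successive approximation of transform coefficients, bit plane by
# bit plane: each decoded plane halves the uncertainty interval of
# every coefficient that has become significant.
coeffs = np.array([57.0, -34.0, 21.0, -9.0, 4.0, 1.0])  # toy values

T = 2.0 ** np.floor(np.log2(np.max(np.abs(coeffs))))  # initial threshold
approx = np.zeros_like(coeffs)

for plane in range(4):
    # Significance pass: flag coefficients whose magnitude reaches T.
    new = (np.abs(coeffs) >= T) & (approx == 0)
    approx[new] = np.sign(coeffs[new]) * 1.5 * T
    # Refinement pass: nudge already-significant coefficients by T/2.
    old = (approx != 0) & ~new
    approx[old] += np.sign(coeffs[old] - approx[old]) * T / 2
    T /= 2
    err = np.max(np.abs(coeffs - approx))
    print(f"plane {plane}: max error {err:.1f}")  # 21.0, 9.0, 4.0, 2.0
```

Truncating the stream after any plane still yields a usable reconstruction, which is exactly the progressive property described above.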

After experimenting with a variety of algorithms, we chose Said and Pearlman's variation [54] of Shapiro's EZW algorithm because of its good performance and the availability of working software for 12-bpp originals. We used the default 9/7 biorthogonal filters in the software compression package of Said and Pearlman [54]. The system incorporates the adaptive arithmetic coding algorithm of Witten, Neal, and Cleary [66].
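The transform front end of such a coder is easy to reproduce. A sketch using PyWavelets is below, assuming its 'bior4.4' wavelet as a stand-in for the 9/7 biorthogonal pair (the two are commonly identified, though coefficient-level agreement with Said and Pearlman's package is not guaranteed). It covers only the decomposition, not SPIHT's set partitioning or the arithmetic coder.

```python
import numpy as np
import pywt  # PyWavelets

img = np.random.rand(256, 256)  # stand-in for a 12-bpp mammogram

# Five-level 2-D decomposition: each level splits the remaining
# lowpass band into four oriented subbands (LL plus LH/HL/HH details).
coeffs = pywt.wavedec2(img, 'bior4.4', level=5)

# A coder such as SPIHT would transmit these coefficients most
# significant bit plane first; here we only verify invertibility.
rec = pywt.waverec2(coeffs, 'bior4.4')
print(np.max(np.abs(img - rec)))  # ~1e-12, i.e., near-perfect
```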

For our experiment, additional compression was achieved by a simple segmentation of the image using a thresholding rule. This segmented the image into a rectangular portion containing the breast (the region of interest, or ROI) and a background portion containing the dark area and any alphanumeric data. The background/label portion of the image was coded using the same algorithm, but at only 0.07 bpp, resulting in higher distortion there. We report here SNRs and bit rates both for the full image and for the ROI.
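A minimal sketch of the two pieces this paragraph relies on, assuming a single global threshold for the breast mask and the energy-ratio definition of SNR; the study's exact thresholding rule and SNR convention are not spelled out here, so both choices are assumptions.

```python
import numpy as np

def roi_bounding_box(img, threshold):
    # Smallest rectangle covering all pixels brighter than `threshold`
    # (a hypothetical stand-in for the paper's thresholding rule).
    mask = img > threshold
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    return rows[0], rows[-1] + 1, cols[0], cols[-1] + 1

def snr_db(original, reconstructed):
    # SNR in dB as signal energy over squared-error energy.
    x = original.astype(np.float64)
    e = x - reconstructed.astype(np.float64)
    return 10.0 * np.log10(np.sum(x ** 2) / np.sum(e ** 2))

# Usage: crop the ROI, code ROI and background separately, then
# report snr_db on the full image and on the ROI crop alone.
```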

The image test set was compressed in this manner to three bit rates: 1.75, 0.4, and 0.15 bpp, where the bit rates refer to rates in the ROI. The average bit rates for the full image thus depended on the size of the ROI. An example of the Said-Pearlman algorithm with a 12-bpp original and 0.15-bpp reproduction is given in Fig. 4.
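Since the background is fixed at 0.07 bpp, the full-image average rate is just the pixel-weighted mix of the two rates. A quick worked example, with an assumed ROI fraction (the true fraction varied from image to image):

```python
bg_rate = 0.07        # background/label rate from the experiment
roi_fraction = 0.40   # assumed fraction of pixels inside the ROI

for roi_rate in (1.75, 0.40, 0.15):
    full = roi_fraction * roi_rate + (1 - roi_fraction) * bg_rate
    print(f"ROI {roi_rate:.2f} bpp -> full image {full:.3f} bpp")
# e.g. 0.742, 0.202, 0.102 bpp for the three ROI rates
```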

FIGURE 3 (a) Original 9.0-bpp MR chest scan, (b) MR chest scan compressed to 1.14 bpp, and (c) MR chest scan compressed to 0.36 bpp.
