In domain rendering, the spatial 3D data is first transformed into another domain, such as the compression, frequency, or wavelet domain, and a projection is then generated directly from that domain or with the help of information from it. Frequency domain rendering applies the Fourier projection-slice theorem, which states that a projection of the 3D data volume from a certain view direction can be obtained by extracting a 2D slice perpendicular to that view direction out of the 3D Fourier spectrum and then inverse-Fourier-transforming it. This approach obtains the 3D volume projection directly from the 3D spectrum of the data, and therefore reduces the computational complexity of volume rendering from O(N³) to O(N² log N) [12,40,42,43]. A major problem of frequency domain volume rendering is that the resulting projection is a line integral along the view direction, which exhibits no occlusion or attenuation effects. Totsuka and Levoy [71] proposed a linear approximation to the exponential attenuation [58] and an alternative shading model to fit the computation within the frequency-domain rendering framework.
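The projection-slice theorem is easy to verify in its 2D analogue: summing an image along one axis gives the same result as extracting the zero-frequency row of its 2D spectrum and inverse-transforming it. The following is a minimal sketch in pure Python; the 4×4 array and the naive DFT helpers are illustrative, not taken from the cited papers.

```python
import cmath

def dft(xs):
    """Naive 1D discrete Fourier transform."""
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * m / n)
                for m, x in enumerate(xs)) for k in range(n)]

def idft(cs):
    """Naive 1D inverse DFT."""
    n = len(cs)
    return [sum(c * cmath.exp(2j * cmath.pi * k * m / n)
                for k, c in enumerate(cs)) / n for m in range(n)]

def dft2(f):
    """2D DFT: transform rows, then columns; result is F[ky][kx]."""
    rows = [dft(row) for row in f]
    cols = [dft(list(col)) for col in zip(*rows)]
    return [list(r) for r in zip(*cols)]

# toy 4x4 "volume slice" f[y][x]
f = [[1, 2, 0, 1],
     [0, 3, 1, 2],
     [2, 1, 1, 0],
     [1, 0, 2, 3]]

# spatial-domain projection: line integrals along y
proj_spatial = [sum(row[x] for row in f) for x in range(4)]

# frequency-domain route: take the k_y = 0 slice of the 2D
# spectrum and inverse-transform it (the slice theorem)
proj_freq = idft(dft2(f)[0])

print(proj_spatial)                        # [4, 6, 4, 6]
print([round(c.real) for c in proj_freq])  # [4, 6, 4, 6]
```

In 3D the same identity holds slice-wise: the kz = 0 plane of the 3D spectrum inverse-transforms to the projection along z, which is what lets the method skip the O(N³) spatial integration.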

Compression domain rendering performs volume rendering from compressed scalar data without decompressing the entire data set, and therefore reduces the storage, computation, and transmission overhead of otherwise large volume data. For example, Ning and Hesselink [47,48] first applied vector quantization in the spatial domain to compress the volume, and then directly rendered the quantized blocks using regular spatial-domain volume rendering algorithms. Fowler and Yagel [13] combined differential pulse-code modulation and Huffman coding to develop a lossless volume compression algorithm, but their algorithm is not coupled with rendering. Yeo and Liu [89] applied a discrete cosine transform (DCT) based compression technique to overlapping blocks of the data. Chiueh et al. [6] applied the 3D Hartley transform to extend the JPEG still-image compression algorithm [76] to subcubes of the volume, and performed frequency domain rendering on the subcubes before compositing the resulting subimages in the spatial domain. The 3D Fourier coefficients in each subcube were quantized, linearly sequenced through a 3D zigzag order, and then entropy encoded. In this way, they alleviated the lack of attenuation and occlusion in frequency domain rendering while achieving high compression ratios, faster rendering than spatial-domain volume rendering, and improved image quality over conventional frequency domain rendering techniques. Figure 7 shows a CT scan of a lobster that was rendered out of the compressed frequency domain.
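The quantize, zigzag, and entropy-encode stage of such a pipeline can be sketched in 2D; this is a simplified analogue of the 3D zigzag order described above, and the coefficient block and quantization step below are invented for illustration.

```python
def zigzag_indices(n):
    """Visit an n x n coefficient block along anti-diagonals,
    alternating direction, so low-frequency terms come first."""
    order = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        order.extend(diag if s % 2 else diag[::-1])
    return order

# toy 4x4 block of transform coefficients: as in JPEG-style
# compression, the energy sits in the low-frequency corner
block = [[52, 8, 3, 1],
         [ 9, 4, 1, 0],
         [ 3, 1, 0, 0],
         [ 1, 0, 0, 0]]

step = 4  # uniform quantization step (illustrative)
seq = [round(block[i][j] / step) for i, j in zigzag_indices(4)]
print(seq)  # [13, 2, 2, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```

Quantization zeroes the high-frequency tail, and the zigzag order gathers those zeros into one long run, which run-length and entropy coding then compress very cheaply.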

Rooted in time-frequency analysis, wavelet theory [7,10] has gained popularity in recent years. A wavelet is a fast-decaying function with zero mean. Wavelets are attractive because they are localized in both the spatial and frequency domains, and can represent volumes accurately with a small number of wavelet coefficients. Muraki [46] first applied the wavelet transform to volumetric data sets, Gross et al. [16] found an approximate solution for the volume rendering equation using orthonormal wavelet functions, and Westermann [79] combined volume rendering with wavelet-based compression. However, none of these algorithms focused on accelerating volume rendering using wavelets. The greater potential of the wavelet domain, based on the elegant multiresolution hierarchy provided by the wavelet transform, is still far from fully utilized for volume rendering. A promising research direction would be to exploit the local frequency variance provided by the wavelet transform to accelerate volume rendering in homogeneous areas.
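A minimal sketch of why the wavelet hierarchy is attractive here: the 1D Haar transform below (the signal values are made up for illustration) turns a piecewise-homogeneous scanline into mostly zero detail coefficients, which is exactly the sparsity a wavelet-domain renderer could skip over or compress.

```python
def haar(xs):
    """Full Haar decomposition of a power-of-two-length signal:
    returns [overall average] + detail coefficients, coarse to fine."""
    details = []
    while len(xs) > 1:
        avg = [(a + b) / 2 for a, b in zip(xs[::2], xs[1::2])]
        det = [(a - b) / 2 for a, b in zip(xs[::2], xs[1::2])]
        details = det + details  # prepend so coarser levels come first
        xs = avg
    return xs + details

def ihaar(cs):
    """Exact inverse of haar(): rebuild fine samples level by level."""
    xs, details = cs[:1], cs[1:]
    while details:
        det, details = details[:len(xs)], details[len(xs):]
        xs = [v for a, d in zip(xs, det) for v in (a + d, a - d)]
    return xs

# a piecewise-homogeneous "scanline" through a volume
sig = [5, 5, 5, 5, 7, 7, 3, 3]
coeffs = haar(sig)
print(coeffs)                # [5.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0, 0.0]
print(ihaar(coeffs) == sig)  # True -- the decomposition is lossless
```

Six of the eight coefficients are exactly zero because the detail terms vanish wherever the signal is locally constant; in a homogeneous region of a volume, a renderer could detect this cheaply from the coefficients and coarsen its sampling there.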
