Analytic Kernel Based Methods

Derivatives of Gaussian

In order to handle image structures at different scales in a consistent manner, a linear scale-space representation is proposed in [24, 34]. The basic idea is to embed the original signal into a one-parameter family of gradually smoothed signals, in which fine-scale details are successively suppressed. It can be shown that the Gaussian kernel and its derivatives are among the possible smoothing kernels for such a scale-space. The Gaussian kernel is well suited for defining a scale-space because of its linearity and spatial shift invariance, and because structures at coarse scales should be related to structures at finer scales in a well-behaved manner (no new structures are created by the smoothing method). Scale-space representation is a special type of multiscale representation that comprises a continuous scale parameter and preserves the same spatial sampling at all scales. Formally, the linear scale-space representation of a continuous signal is constructed as follows. Let f : R^N → R represent any given signal. Then, the scale-space representation L : R^N × R+ → R is defined by L(·; 0) = f so that

L(·; t) = g(·; t) * f, where t ∈ R+ is the scale parameter, and g : R^N × R+\{0} → R is the Gaussian kernel. In arbitrary dimensions, it is written as:

g(x; t) = (2πt)^(−N/2) exp(−(x_1^2 + · · · + x_N^2)/(2t))

The square root of the scale parameter, σ = √t, is the standard deviation of the kernel g and is a natural measure of spatial scale in the smoothed signal at scale t. From this scale-space representation, multiscale spatial derivatives can be defined by

L_{x^n}(·; t) = ∂_{x^n} L(·; t) = g_{x^n}(·; t) * f, where g_{x^n} denotes a derivative of g of some order n.

The main idea behind the construction of this scale-space representation is that fine-scale information should be suppressed with increasing values of the scale parameter. Intuitively, when convolving a signal with a Gaussian kernel of standard deviation σ = √t, the effect of this operation is to suppress most of the structures in the signal with a characteristic length smaller than σ. Different directional derivatives can be used to extract different kinds of structural features at different scales. It is shown in the literature [35] that a possible complete set of directional derivatives up to third order is ∂_n = {∂_{0°}, ∂_{90°}, ∂²_{0°}, ∂²_{60°}, ∂²_{120°}, ∂³_{0°}, ∂³_{45°}, ∂³_{90°}, ∂³_{135°}}. So our feature vector will consist of the directional derivatives, including the zeroth derivative, for each of the n scales desired:
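Such a derivative-of-Gaussian filter bank can be sketched in a few lines of Python. The following is a minimal example using scipy.ndimage.gaussian_filter; the image and the variable names are illustrative assumptions, not taken from the text:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical test image (illustrative only)
image = np.random.default_rng(0).random((64, 64))

sigma = 2.0  # standard deviation of the kernel, sigma = sqrt(t)

# Axis-aligned Gaussian derivatives: order=(ny, nx) gives the
# derivative order along each axis of the smoothed image L(.; t).
responses = {
    "L":    gaussian_filter(image, sigma, order=(0, 0)),  # zeroth derivative
    "Lx":   gaussian_filter(image, sigma, order=(0, 1)),  # first derivative, 0 deg
    "Ly":   gaussian_filter(image, sigma, order=(1, 0)),  # first derivative, 90 deg
    "Lxx":  gaussian_filter(image, sigma, order=(0, 2)),  # second derivative
    "Lxxx": gaussian_filter(image, sigma, order=(0, 3)),  # third derivative
}

# A first-order directional derivative at angle theta follows by steering:
# d_theta L = cos(theta) * Lx + sin(theta) * Ly
theta = np.deg2rad(45)
L45 = np.cos(theta) * responses["Lx"] + np.sin(theta) * responses["Ly"]
```

Repeating this for several values of sigma yields the multiscale feature vector described above; higher-order directional derivatives require steering over the larger basis sets listed in the text.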

Figure 2.8: Derivative of Gaussian responses for a = 2. (a) Original image; (b) first derivative of Gaussian response; (c) second derivative of Gaussian response; (d) third derivative of Gaussian response.


Figure 2.8 shows some of the responses of the derivative-of-Gaussian filter bank for σ = 2. Figures 2.8(b), 2.8(c), and 2.8(d) display the first, second, and third derivatives of Gaussian, respectively.

Wavelets

Wavelets emerged as a tool to study nonstationary problems [36]. Wavelets perform a decomposition of a function as a sum of local bases with finite support, localized at different scales. Wavelets are characterized by being bounded functions with zero average. This implies that the shapes of these functions are waves restricted in time. Their joint time-frequency limitation yields good localization. So a wavelet ψ is a function of zero average:

∫_{−∞}^{+∞} ψ(t) dt = 0

which is dilated with a scale parameter s and translated by u:

ψ_{u,s}(t) = (1/√s) ψ((t − u)/s)

The wavelet transform of f at scale s and position u is computed by correlating f with a wavelet atom:

Wf(u, s) = ∫_{−∞}^{+∞} f(t) (1/√s) ψ*((t − u)/s) dt     (2.1)
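This correlation can be evaluated directly on a sampled signal. The following numpy-only sketch uses the Mexican-hat (Ricker) wavelet as ψ; the choice of wavelet and the function names are illustrative assumptions, not from the text:

```python
import numpy as np

def ricker(t, s):
    # Mexican-hat (Ricker) wavelet dilated by scale s,
    # normalized to unit energy; it has zero average.
    x = t / s
    return (2.0 / (np.sqrt(3.0 * s) * np.pi ** 0.25)) * (1.0 - x**2) * np.exp(-x**2 / 2.0)

def cwt(f, dt, scales):
    # Wf(u, s) approximated by direct correlation of the samples of f
    # with a sampled wavelet atom at each scale.
    n = len(f)
    t = (np.arange(n) - n // 2) * dt
    out = np.empty((len(scales), n))
    for i, s in enumerate(scales):
        atom = ricker(t, s)
        out[i] = np.correlate(f, atom, mode="same") * dt
    return out
```

The output is a 2-D array indexed by (scale, position), which makes the redundancy of the continuous transform explicit: a 1-D signal produces a 2-D representation.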

The continuous wavelet transform Wf (u, s) is a two-dimensional representation of a one-dimensional signal f. This indicates the existence of some redundancy that can be reduced and even removed by subsampling the parameters of these transforms. Completely eliminating the redundancy is equivalent to building a basis of the signal space.

The decomposition of a signal yields a series of coefficients representing the signal in terms of the basis generated from a mother wavelet, that is, the projection of the signal onto the space spanned by the basis functions.

The continuous wavelet transform has two major drawbacks: the first, stated formerly, is redundancy, and the second is the impossibility of computing it unless a discrete version is used. A way to discretize the dilation parameter is s = a_0^m, m ∈ Z, with a_0 ≠ 1 constant. Thus, we get a series of wavelets ψ_m of width a_0^m. Usually we take a_0 > 1, although this is not important because m can be positive or negative. Often a value of a_0 = 2 is taken. For m = 0, we let the translation u take only integer multiples of a new constant s_0. This constant is chosen in such a way that the translations of the mother wavelet, ψ(t − n s_0), are close enough to cover the whole real line. Then, the choice at scale level m is as follows:

ψ_{m,n}(t) = a_0^{−m/2} ψ(a_0^{−m} t − n s_0)

which covers the entire real axis just as the translations ψ(t − n s_0) do. Summarizing, the discrete wavelet transform consists of two discretizations in the transform of Eq. (2.1),

d_{m,n} = ∫_{−∞}^{+∞} f(t) a_0^{−m/2} ψ*(a_0^{−m} t − n s_0) dt

The multiresolution analysis (MRA) tries to build orthonormal bases on a dyadic grid, where a_0 = 2 and s_0 = 1, which moreover have a compact support region. Finally, we can view the coefficients d_{m,n} of the discrete wavelet transform as samples of the convolution of the signal f(t) with the different filters ψ_m(−t), where ψ_{m,n}(t) = a_0^{−m/2} ψ(a_0^{−m} t − n s_0).
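On the dyadic grid, this filtering-and-subsampling view reduces to a few lines of code. The following numpy sketch implements the orthonormal Haar case as one concrete instance (a hypothetical illustration, not the particular construction of [26]); it assumes the signal length is divisible by 2^levels:

```python
import numpy as np

def haar_dwt(f, levels):
    # Haar DWT on the dyadic grid (a_0 = 2, s_0 = 1): at each level,
    # pairwise sums give the next approximation and pairwise
    # differences give the detail coefficients d_{m,n}.
    approx = np.asarray(f, dtype=float)
    details = []
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2.0))  # inner products with the wavelet
        approx = (even + odd) / np.sqrt(2.0)         # inner products with the scaling function
    return approx, details

def haar_idwt(approx, details):
    # Invert level by level; orthonormality gives perfect reconstruction.
    for d in reversed(details):
        out = np.empty(2 * len(approx))
        out[0::2] = (approx + d) / np.sqrt(2.0)
        out[1::2] = (approx - d) / np.sqrt(2.0)
        approx = out
    return approx
```

Note that the total number of output coefficients (coarsest approximation plus all details) equals the number of input samples, in contrast with the redundant continuous transform.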

Figure 2.9: Scale-frequency domain of wavelets.


Figure 2.9 shows the dual effect of the mother wavelet shrinking as the frequency increases, and the translation step decreasing as the frequency increases. The mother wavelet keeps its shape, but if high-frequency analysis is desired, the spatial support of the wavelet has to decrease. On the other hand, if the whole real line has to be covered by translations of the mother wavelet, then as the spatial support of the wavelet decreases, the number of translations needed to cover the real line increases. This is unlike the Fourier transform, where the translations of the analysis functions are at the same distance for all frequencies.

The choice of a representation of the wavelet transform leads us to define the concept of a frame. A frame is a complete set of functions that, though able to span L²(R), is not a basis because it lacks the property of linear independence. The MRA proposed in [26] is another representation, in which the signal is decomposed into an approximation at a certain level L together with L detail terms of higher resolutions. The representation is an orthonormal decomposition instead of a redundant frame, and therefore the number of samples that defines a signal is the same as the number of coefficients of its transform. An MRA consists of a sequence of function subspaces of successive approximation. Let P_j be the operator defined as the orthonormal projection of functions of L² onto the space V_j. The projection of a function f onto V_j is a new function that can be expressed as a linear combination of the functions that form the orthonormal basis of V_j. The coefficient of each basis function is the scalar product of f with that basis function:


Earlier we pointed out the nesting condition of the V_j spaces, V_j ⊂ V_{j−1}. Now, if f ∈ V_{j−1}, then either f ∈ V_j or f is orthogonal to all the functions of V_j; that is, we divide V_{j−1} into two disjoint parts: V_j and another space W_j, such that if f ∈ V_j and g ∈ W_j, then f ⊥ g. W_j is the orthogonal complement of V_j in V_{j−1}:

V_{j−1} = V_j ⊕ W_j, where the symbol ⊕ denotes the direct sum of orthogonal spaces. Applying the former equation and the completeness condition, then

L²(R) = ⊕_{j∈Z} W_j
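For the Haar case, this splitting of V_{j−1} into V_j and W_j can be illustrated in a few lines of numpy: projecting onto the coarser space amounts to pairwise averaging, and the residual lies in the orthogonal complement. A minimal sketch with illustrative function names:

```python
import numpy as np

def project_V(f):
    # P_j f: pairwise means -- the orthogonal projection of the samples
    # onto the Haar approximation space V_j (one level coarser).
    m = f.reshape(-1, 2).mean(axis=1)
    return np.repeat(m, 2)

def project_W(f):
    # Q_j f: the residual, lying in the orthogonal complement W_j.
    return f - project_V(f)
```

Any sampled signal f in V_{j−1} then splits as project_V(f) + project_W(f), and the two parts are orthogonal, as the direct-sum decomposition above requires.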
