Wavelet theory has been enthusiastically adopted in signal and image processing, and it has proved to be a useful tool in many applications. A wavelet-based shape from shading method was introduced in [31]. Unlike the methods introduced in Section 5.3, the objective function in the constrained optimization problem is replaced by its projection onto the wavelet subspaces. To understand this approach, we first recall some elements of wavelet theory.

5.4.1 Background of Wavelet Analysis

5.4.1.1 1D Wavelets

To begin with, we present here a few elements of one-dimensional orthogonal wavelet theory, in which an orthonormal basis {ψ_{mn}} of L²(ℝ) is constructed having the form ψ_{mn}(t) = 2^{m/2} ψ(2^m t − n), n, m ∈ ℤ, where ψ(t) is the "mother wavelet." Usually it is not constructed directly but rather from another function called the "scaling function" φ(t) ∈ L²(ℝ). The scaling function φ is chosen in such a way that

(i) the set {φ(t − n)}_{n∈ℤ} is orthonormal in L²(ℝ);

(ii) φ(t) lies in the closed linear span of {φ(2t − n)}_{n∈ℤ};

(iii) for each f ∈ L²(ℝ) and ε > 0, there is a function

f_m(t) = Σ_n a_{mn} φ(2^m t − n) such that ‖f_m − f‖ < ε.   (5.62)

These conditions lead to a "multiresolution approximation" {V_m}_{m∈ℤ}, consisting of closed subspaces of L²(ℝ). The space V_m is taken to be the closed linear span of {φ(2^m t − n)}_{n∈ℤ}. Because of (5.62)(ii), the V_m are nested, i.e., V_m ⊂ V_{m+1}, and because of (5.62)(iii), ∪_m V_m is dense in L²(ℝ).

There are many different types of wavelet bases, created and employed for different purposes. They can be classified as time-limited wavelets, such as the Haar and Daubechies wavelets, and band-limited wavelets, such as the Shannon and Meyer wavelets. A standard prototype is the Haar system, in which φ(t) = χ_{[0,1]}(t), where

χ_{[0,1]}(t) = 1 for t ∈ [0, 1] and 0 otherwise

is the characteristic function of [0, 1]. It is an easy exercise to show that (5.62) is satisfied. This prototype has poor frequency localization but good time localization. Most of the other examples found, e.g., in [12] and [66], attempt to get fairly good time and frequency localization simultaneously.
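As a quick numerical sanity check (our illustration, not from the text), one can verify the orthonormality part of (5.62) for the Haar scaling function on a fine grid:

```python
import numpy as np

# Numerical sanity check (illustration only): the integer translates of the
# Haar scaling function phi = chi_[0,1) are orthonormal, i.e., the Gram matrix
# of inner products <phi(t - n), phi(t - m)> is the identity.
t = np.linspace(-4.0, 4.0, 800001)
dt = t[1] - t[0]

def phi(x):
    # characteristic function of [0, 1)
    return ((x >= 0.0) & (x < 1.0)).astype(float)

# Riemann-sum approximation of the inner products for shifts n, m = -2..2.
gram = np.array([[np.sum(phi(t - n) * phi(t - m)) * dt
                  for m in range(-2, 3)] for n in range(-2, 3)])
print(np.round(gram, 3))  # approximately the 5x5 identity matrix
```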

The various scales are related by the dilation equation of the scaling function

φ(t) = √2 Σ_{n=−∞}^{∞} c_n φ(2t − n),   (5.63)

while the mother wavelet is given by

ψ(t) = √2 Σ_{n=−∞}^{∞} d_n φ(2t − n), where d_n = (−1)^n c_{1−n}.
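The relation d_n = (−1)^n c_{1−n} can be checked directly on the Haar filter (a minimal sketch with our own variable names):

```python
import numpy as np

# Minimal sketch: derive the Haar wavelet filter from the scaling filter via
# d_n = (-1)^n * c_{1-n}, and check the filter orthogonality sum_n c_n d_n = 0.
c = {0: 1.0 / np.sqrt(2.0), 1: 1.0 / np.sqrt(2.0)}   # Haar scaling filter
d = {n: (-1) ** n * c.get(1 - n, 0.0) for n in (0, 1)}

print(d)  # d_0 = 1/sqrt(2), d_1 = -1/sqrt(2): the Haar wavelet filter
assert abs(sum(c[n] * d[n] for n in c)) < 1e-12      # orthogonality of the two filters
```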

In addition, the Fourier transform of the mother wavelet ψ(t) vanishes at the origin. We denote by W_m the closed linear span of {ψ(2^m t − n)}_{n∈ℤ}. The functions {2^{m/2} ψ(2^m t − n)}_{m,n∈ℤ} form an orthonormal basis of L²(ℝ). That is,

L²(ℝ) = ⊕_{m∈ℤ} W_m.

For f ∈ L²(ℝ), we have the projections onto the subspaces V_m and W_m, respectively, given by

f_m(t) = P_m f(t) = Σ_n a_{mn} 2^{m/2} φ(2^m t − n),   (5.64)

g_m(t) = Q_m f(t) = Σ_n b_{mn} 2^{m/2} ψ(2^m t − n),   (5.65)

where

a_{mn} = ∫ f(t) 2^{m/2} φ(2^m t − n) dt  and  b_{mn} = ∫ f(t) 2^{m/2} ψ(2^m t − n) dt.

The coefficients a_{jn} and b_{jn} at resolutions j = m and j = m + 1 are related by a tree algorithm. To see this, consider the space V₁, for which we have two distinct orthonormal bases:

{√2 φ(2x − n)}_{n∈ℤ}  and  {φ(x − n), ψ(x − k)}_{n,k∈ℤ}.

Hence each f ∈ V₁ has an expansion

f(x) = Σ_n a_{1,n} √2 φ(2x − n) = Σ_n a_{0,n} φ(x − n) + Σ_n b_{0,n} ψ(x − n).

By (5.63) we have

a_{1,n} = Σ_{k=−∞}^{∞} c_{n−2k} a_{0,k} + Σ_{k=−∞}^{∞} (−1)^n c_{1−n+2k} b_{0,k},   (5.66)

which is the reconstruction part. The decomposition is even easier: we need merely use the formulas for a_{0,n} and b_{0,n} to find

a_{0,n} = ∫ f(x) φ(x − n) dx = ∫ f(x) Σ_k c_k √2 φ(2x − 2n − k) dx   (5.67)

= Σ_k c_k a_{1,2n+k} = Σ_k a_{1,k} c_{k−2n},

b_{0,n} = Σ_k a_{1,k} (−1)^k c_{1−k+2n}.

This works at each scale to give us the tree algorithm for decomposition (5.67),

··· → a_{m,n} → a_{m−1,n} → ··· → a_{1,n} → a_{0,n},

with the wavelet coefficients b_{m−1,n}, …, b_{0,n} split off at each stage, and the tree algorithm for reconstruction (5.66),

··· → a_{0,n} → a_{1,n} → ··· → a_{m−1,n} → a_{m,n} → ···,

with the wavelet coefficients b_{0,n}, …, b_{m−1,n} fed back in at each stage.

Thus we need to calculate the coefficients from the function f(t) only once, at the finest scale of interest. Then we work down to successively coarser scales by using this decomposition algorithm, with the error at each successive scale corresponding to the wavelet coefficients. These algorithms are called Mallat algorithms (see [39]).
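One stage of the decomposition (5.67) and reconstruction (5.66) can be sketched with the two-tap Haar filters (an illustration under our own naming; a Daubechies filter would slot into the same formulas):

```python
import numpy as np

# One-stage Mallat decomposition and reconstruction for a periodic signal,
# using the Haar filters as a concrete example (a sketch; longer filters
# would be substituted in the same places).
C = np.array([1.0, 1.0]) / np.sqrt(2.0)    # scaling filter c_k
D = np.array([1.0, -1.0]) / np.sqrt(2.0)   # wavelet filter d_k = (-1)^k c_{1-k}

def decompose(a1):
    # a_{0,n} = sum_k a_{1,k} c_{k-2n},  b_{0,n} = sum_k a_{1,k} (-1)^k c_{1-k+2n};
    # for the two-tap Haar filter the sums reduce to pairs of samples.
    N = len(a1) // 2
    a0 = np.array([C[0] * a1[2 * n] + C[1] * a1[(2 * n + 1) % len(a1)] for n in range(N)])
    b0 = np.array([D[0] * a1[2 * n] + D[1] * a1[(2 * n + 1) % len(a1)] for n in range(N)])
    return a0, b0

def reconstruct(a0, b0):
    # a_{1,n} = sum_k c_{n-2k} a_{0,k} + sum_k (-1)^n c_{1-n+2k} b_{0,k}   (5.66)
    N = len(a0)
    a1 = np.zeros(2 * N)
    for k in range(N):
        a1[2 * k] = C[0] * a0[k] + C[1] * b0[k]          # n even: + c_1 b_{0,k}
        a1[2 * k + 1] = C[1] * a0[k] - C[0] * b0[k]      # n odd:  - c_0 b_{0,k}
    return a1

x = np.array([4.0, 2.0, 5.0, 5.0, 1.0, 0.0, 3.0, 7.0])
a0, b0 = decompose(x)
assert np.allclose(reconstruct(a0, b0), x)   # perfect reconstruction
```

Iterating `decompose` on the coarse coefficients a_{0,n} gives exactly the tree algorithm above.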

In order to represent an image using wavelet bases, we need to construct a basis for L²(ℝ²). There are two different ways to do so. One is based on a multiresolution analysis in 2D space that constructs a 2D wavelet basis directly, while the other is based on the tensor product of 1D wavelets. The former usually leads to a nonseparable basis, while the latter yields a separable basis. Here we consider only the separable basis, which is based on the separable multiresolution analysis of L²(ℝ²).

Let {V_m} be a multiresolution of L²(ℝ); a separable two-dimensional multiresolution is composed of the tensor product spaces

V_m² = V_m ⊗ V_m.

The space V_m² is the set of finite-energy functions that are linear expansions of the separable basis {φ_{m,k}(x) φ_{m,l}(y)}_{k,l∈ℤ}, while the corresponding wavelet subspace W_m² is given by the closed linear span of

{φ_{m,k}(x) ψ_{m,l}(y), ψ_{m,l}(x) φ_{m,k}(y), ψ_{m,k}(x) ψ_{m,l}(y)}_{k,l∈ℤ},

where

φ_{m,k}(x) = 2^{m/2} φ(2^m x − k),   ψ_{m,l}(x) = 2^{m/2} ψ(2^m x − l).

As in the 1D case, we have

V_m² = V_{m−1}² ⊕ W_{m−1}²,   W_m² = (V_m ⊗ W_m) ⊕ (W_m ⊗ V_m) ⊕ (W_m ⊗ W_m),

and

L²(ℝ²) = ⊕_{m∈ℤ} W_m².
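Because the basis is separable, a 2D decomposition step is just the 1D algorithm applied along rows and then along columns; here is a one-stage Haar sketch (our own helper names):

```python
import numpy as np

# One stage of a separable 2D wavelet decomposition with Haar filters:
# rows first, then columns. The four subbands correspond to the tensor
# subspaces V⊗V (a), V⊗W and W⊗V (detail), and W⊗W (diagonal detail).
def haar2d_step(img):
    img = np.asarray(img, dtype=float)
    lo = (img[:, 0::2] + img[:, 1::2]) / np.sqrt(2.0)   # rows, scaling filter
    hi = (img[:, 0::2] - img[:, 1::2]) / np.sqrt(2.0)   # rows, wavelet filter
    a = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2.0)      # columns of the row averages
    dv = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2.0)
    dh = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2.0)
    dd = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2.0)
    return a, dh, dv, dd

img = np.arange(16.0).reshape(4, 4)
a, dh, dv, dd = haar2d_step(img)
# Each 1D step is orthonormal, so energy is preserved across the subbands.
total = sum(np.sum(s ** 2) for s in (a, dh, dv, dd))
assert np.isclose(total, np.sum(img ** 2))
```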

Wells et al. [67] proved the following theorem.

Theorem 8 (Wells and Zhou). Assume the function f ∈ C²(Ω̄), where Ω is a bounded open set in ℝ². Let

f_m(x, y) := (1/2^m) Σ_{k,l∈Λ} f((k + c)/2^m, (l + c)/2^m) φ_{mk}(x) φ_{ml}(y),  x, y ∈ Ω,   (5.69)

where Λ = {k ∈ ℤ : supp(φ_{mk}) ∩ Ω ≠ ∅} is the index set and c = ∫ t φ(t) dt is the first moment of the scaling function. Then

‖f − f_m‖_{L²(Ω)} ≤ C (1/2^m)²,

where C depends on the diameter of Ω and on the moduli of the first- and second-order derivatives of f on Ω̄.
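The content of (5.69) can be illustrated in a 1D analogue (our simplification): with the Haar scaling function, whose first moment is c = 1/2, f_m becomes a midpoint piecewise-constant approximant. Haar lacks the regularity needed for the full second-order rate of Theorem 8, but the L² error still shrinks as the scale m is refined:

```python
import numpy as np

# 1D analogue of the sampling approximation (5.69) with the Haar scaling
# function (c = 1/2, midpoint sampling). A sketch, not the theorem's setting:
# Haar only gives a first-order rate, but the error decreases with m.
def sample_approx_error(f, m, n_quad=20000):
    h = 2.0 ** -m
    x = (np.arange(n_quad) + 0.5) / n_quad     # quadrature nodes in (0, 1)
    k = np.floor(x / h)                        # cell index of each node
    fm = f((k + 0.5) * h)                      # f((k + c) / 2^m) with c = 1/2
    return np.sqrt(np.mean((f(x) - fm) ** 2))  # L2(0, 1) error

f = lambda x: np.sin(2.0 * np.pi * x)
errs = [sample_approx_error(f, m) for m in (2, 4, 6)]
assert errs[0] > errs[1] > errs[2]             # error decreases as m grows
```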

Formula (5.69) is the one which was used in the wavelet-based SFS method. Now we are ready to introduce this method.

A wavelet-based method was developed in [31]. Instead of using the constraints in Zheng and Chellappa's method (see Section 5.3.2.1), the authors introduced a new constraint (5.20). As they put it, "the new constraint not only enforces integrability but also introduces a smoothness constraint in an implicit manner." Now the energy function is defined as

W = ∫∫_Ω [(E(x, y) − R(p, q))² + (p_x² + p_y² + q_x² + q_y²)   (5.70)

+ ((Z_x − p)² + (Z_y − q)²) + ((Z_xx − p_x)² + (Z_yy − q_y)²)] dx dy.

The objective function is first replaced by its approximation in scaling space V0 of Daubechies wavelets. Then the variational problem is solved by an iterative algorithm. We now describe this method.

We assume that the given image size is M × M. The surface Z(x, y) and its partial derivatives ∂Z/∂x = p(x, y) and ∂Z/∂y = q(x, y) have projections onto V₀, the scaling space at level 0:

Z(x, y) = Σ_{k,l=0}^{M−1} Z_{k,l} φ_{0,k,l}(x, y),  p(x, y) = Σ_{k,l=0}^{M−1} p_{k,l} φ_{0,k,l}(x, y),  q(x, y) = Σ_{k,l=0}^{M−1} q_{k,l} φ_{0,k,l}(x, y),   (5.71)

where φ_{0,k,l}(x, y) := φ_{0,k}(x) φ_{0,l}(y).

Denoting

φ^{(x)}_{0,k,l}(x, y) = ∂/∂x φ_{0,k,l}(x, y),   φ^{(xx)}_{0,k,l}(x, y) = ∂²/∂x² φ_{0,k,l}(x, y),

φ^{(y)}_{0,k,l}(x, y) = ∂/∂y φ_{0,k,l}(x, y),   φ^{(yy)}_{0,k,l}(x, y) = ∂²/∂y² φ_{0,k,l}(x, y),

we substitute (5.71) into each term of (5.70) to get

W = ∫∫_Ω (E(x, y) − R(Σ_{k,l=0}^{M−1} p_{k,l} φ_{0,k,l}(x, y), Σ_{k,l=0}^{M−1} q_{k,l} φ_{0,k,l}(x, y)))² dx dy

+ ∫∫_Ω [(Σ_{k,l=0}^{M−1} p_{k,l} φ^{(x)}_{0,k,l}(x, y))² + (Σ_{k,l=0}^{M−1} p_{k,l} φ^{(y)}_{0,k,l}(x, y))²

+ (Σ_{k,l=0}^{M−1} q_{k,l} φ^{(x)}_{0,k,l}(x, y))² + (Σ_{k,l=0}^{M−1} q_{k,l} φ^{(y)}_{0,k,l}(x, y))²] dx dy

+ ∫∫_Ω [(Σ_{k,l=0}^{M−1} Z_{k,l} φ^{(x)}_{0,k,l}(x, y) − Σ_{k,l=0}^{M−1} p_{k,l} φ_{0,k,l}(x, y))²

+ (Σ_{k,l=0}^{M−1} Z_{k,l} φ^{(y)}_{0,k,l}(x, y) − Σ_{k,l=0}^{M−1} q_{k,l} φ_{0,k,l}(x, y))²] dx dy

+ ∫∫_Ω [(Σ_{k,l=0}^{M−1} Z_{k,l} φ^{(xx)}_{0,k,l}(x, y) − Σ_{k,l=0}^{M−1} p_{k,l} φ^{(x)}_{0,k,l}(x, y))²

+ (Σ_{k,l=0}^{M−1} Z_{k,l} φ^{(yy)}_{0,k,l}(x, y) − Σ_{k,l=0}^{M−1} q_{k,l} φ^{(y)}_{0,k,l}(x, y))²] dx dy.

There are in total 3M² unknown variables (the function samples of Z, p, and q): {p_{k,l}}, {q_{k,l}}, and {Z_{k,l}}, where the indices run over an M × M grid (see (5.69)).

It is remarkable that the interpolating property (5.69) simplifies the computation significantly. The integrals we need to compute in the energy function involve only the inner products of the scaling function φ(x, y) := φ_{0,0,0}(x, y), its shifts φ_{k,l}(x, y) := φ_{0,k,l}(x, y), and their partial derivatives of first and second orders. Note that we have dropped the scale (or resolution) index 0 for simplicity, since the discussion here does not involve other scales. Now we assume that the scaling function φ is the Daubechies scaling function with 2N + 1 filter coefficients c_i (see (5.63)). These definite integrals are called connection coefficients [5]:

Γ^{(4)}_x(k, l) = ∫∫_Ω φ^{(xx)}(x, y) φ^{(xx)}_{k,l}(x, y) dx dy = Γ^{(4)}(k) D(l),

Γ^{(4)}_y(k, l) = ∫∫_Ω φ^{(yy)}(x, y) φ^{(yy)}_{k,l}(x, y) dx dy = D(k) Γ^{(4)}(l),

Γ^{(2,2)}_{xy}(k, l) = ∫∫_Ω φ^{(xy)}(x, y) φ^{(xy)}_{k,l}(x, y) dx dy = Γ^{(2)}(k) Γ^{(2)}(l),

Γ^{(2,2)}_{yx}(k, l) = ∫∫_Ω φ^{(yx)}(x, y) φ^{(yx)}_{k,l}(x, y) dx dy = Γ^{(2)}(k) Γ^{(2)}(l),

Γ^{(3)}_x(k, l) = ∫∫_Ω φ^{(xx)}(x, y) φ^{(x)}_{k,l}(x, y) dx dy = D(l) Γ^{(3)}(k),

Γ^{(3)}_y(k, l) = ∫∫_Ω φ^{(yy)}(x, y) φ^{(y)}_{k,l}(x, y) dx dy = D(k) Γ^{(3)}(l),

Γ^{(2)}_x(k, l) = ∫∫_Ω φ^{(x)}(x, y) φ^{(x)}_{k,l}(x, y) dx dy = D(l) Γ^{(2)}(k),

Γ^{(2)}_y(k, l) = ∫∫_Ω φ^{(y)}(x, y) φ^{(y)}_{k,l}(x, y) dx dy = D(k) Γ^{(2)}(l),

Γ^{(1)}_x(k, l) = ∫∫_Ω φ^{(x)}(x, y) φ_{k,l}(x, y) dx dy = D(l) Γ^{(1)}(k),

Γ^{(1)}_y(k, l) = ∫∫_Ω φ^{(y)}(x, y) φ_{k,l}(x, y) dx dy = D(k) Γ^{(1)}(l),

where

Γ^{(1)}(k) = ∫ φ^{(x)}(x) φ(x − k) dx,   Γ^{(2)}(k) = ∫ φ^{(x)}(x) φ^{(x)}(x − k) dx,

Γ^{(3)}(k) = ∫ φ^{(xx)}(x) φ^{(x)}(x − k) dx,   Γ^{(4)}(k) = ∫ φ^{(xx)}(x) φ^{(xx)}(x − k) dx

are 1D connection coefficients, and D(0) = 1, D(n) = 0 for n ≠ 0. Note that since the 2D basis here is constructed as the tensor product of a 1D basis, these 2D connection coefficients can be computed from the 1D coefficients. We also note that the connection coefficients are independent of the input images; therefore, they need to be computed only once.
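The pattern of the computation can be sketched by quadrature, using the piecewise-linear "hat" function as a differentiable stand-in for the scaling function (our simplification: the hat is a B-spline, not an orthonormal Daubechies scaling function, so the numerical values differ, but the structure of Γ^{(1)} and Γ^{(2)} is the same):

```python
import numpy as np

# Sketch: 1D connection coefficients by Riemann-sum quadrature, with the hat
# function standing in for a Daubechies scaling function (values differ from
# the Daubechies ones; the computation pattern is identical).
x = np.linspace(-3.0, 3.0, 600001)
dx = x[1] - x[0]

def phi(t):                        # hat function supported on [-1, 1]
    return np.clip(1.0 - np.abs(t), 0.0, None)

def dphi(t):                       # its derivative (almost everywhere)
    return np.where(np.abs(t) < 1.0, -np.sign(t), 0.0)

G1 = {k: np.sum(dphi(x) * phi(x - k)) * dx for k in (-1, 0, 1)}    # Gamma^(1)
G2 = {k: np.sum(dphi(x) * dphi(x - k)) * dx for k in (-1, 0, 1)}   # Gamma^(2)

assert abs(G1[0]) < 1e-3                     # Gamma^(1)(0) = 0
assert abs(G1[1] + G1[-1]) < 1e-3            # Gamma^(1) is antisymmetric
assert abs(G2[0] - 2.0) < 1e-3               # for the hat: [-1, 2, -1] stencil
assert abs(G2[1] + 1.0) < 1e-3
# By separability, e.g., Gamma^(2)_x(k, l) = D(l) * Gamma^(2)(k): no new integrals.
```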

The energy function is then linearized by taking the linear term of its Taylor expansion at (p, q). The next step is to solve the optimization problem associated with the linearized energy function by iteration. Let δp_{i,j}, δq_{i,j}, and δZ_{i,j} be the small variations of p_{i,j}, q_{i,j}, and Z_{i,j}, respectively, and set

∂W/∂δp_{i,j} = 0,   ∂W/∂δq_{i,j} = 0,   ∂W/∂δZ_{i,j} = 0.

We obtain

δp_{i,j} = [C₁D₂₂ − C₂ (∂R/∂p)(i, j)(∂R/∂q)(i, j)]/D,

δq_{i,j} = [C₂D₁₁ − C₁ (∂R/∂p)(i, j)(∂R/∂q)(i, j)]/D,   (5.73)

δZ_{i,j} = C₃/d₃₃,

where

D₁₁ = (∂R/∂p)²(i, j) + 3Γ^{(2)}(0) + 1,   D₂₂ = (∂R/∂q)²(i, j) + 3Γ^{(2)}(0) + 1,

d₃₃ = 2Γ^{(2)}(0) + 2Γ^{(4)}(0),   D = D₁₁D₂₂ − (∂R/∂p)²(i, j)(∂R/∂q)²(i, j),   (5.74)

and

C₁ = (E(i, j) − R(p_{i,j}, q_{i,j})) (∂R/∂p)(i, j) − p_{i,j}

+ Σ_{k=−2N+2}^{2N−2} [Z_{i−k,j}(Γ^{(3)}(k) + Γ^{(1)}(k)) − (2p_{i−k,j} + p_{i,j−k})Γ^{(2)}(k)],

C₂ = (E(i, j) − R(p_{i,j}, q_{i,j})) (∂R/∂q)(i, j) − q_{i,j}

+ Σ_{k=−2N+2}^{2N−2} [Z_{i,j−k}(Γ^{(3)}(k) + Γ^{(1)}(k)) − (q_{i−k,j} + 2q_{i,j−k})Γ^{(2)}(k)],   (5.75)

C₃ = −Σ_{k=−2N+2}^{2N−2} [(Z_{i−k,j} + Z_{i,j−k})(Γ^{(2)}(k) + Γ^{(4)}(k)) + (p_{i−k,j} + q_{i,j−k})(Γ^{(1)}(k) + Γ^{(3)}(k))].

Finally, we can write the iterative formula

p^{m+1}_{i,j} = p^m_{i,j} + δp_{i,j},   q^{m+1}_{i,j} = q^m_{i,j} + δq_{i,j},   Z^{m+1}_{i,j} = Z^m_{i,j} + δZ_{i,j}.   (5.76)

We now summarize this method as follows:

Step 0. Compute the 1D connection coefficients and 2D connection coefficients.

Step 1. Compute the sets of coefficients (5.75) and (5.74).

Step 2. Compute the set of variations δp_{i,j}, δq_{i,j}, and δZ_{i,j} (5.73).

Step 3. Update the current (p^m_{i,j}, q^m_{i,j}) and then the current shape reconstruction Z^m_{i,j} using Eq. (5.76).
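Putting the steps together, the outer loop has the following shape (a structural scaffold only: `compute_variations` is a stub standing in for the per-pixel formulas (5.73)–(5.75), and none of these names come from [31]):

```python
import numpy as np

def compute_variations(E, p, q, Z):
    """Stub for Eqs. (5.73)-(5.75): in the real method these corrections are
    computed per pixel from the connection coefficients and the reflectance
    map R. Here we return zero corrections so the scaffold runs end to end."""
    zero = np.zeros_like(E)
    return zero, zero, zero

def wavelet_sfs(E, n_iters=10):
    """Structural sketch of Steps 0-3 for an M x M image E (not the authors'
    implementation). Step 0 (connection coefficients) is precomputed offline."""
    p = np.zeros_like(E)   # p_{i,j}
    q = np.zeros_like(E)   # q_{i,j}
    Z = np.zeros_like(E)   # Z_{i,j}
    for _ in range(n_iters):
        dp, dq, dZ = compute_variations(E, p, q, Z)   # Steps 1-2
        p, q, Z = p + dp, q + dq, Z + dZ              # Step 3, Eq. (5.76)
    return p, q, Z

E = np.random.default_rng(0).random((8, 8))
p, q, Z = wavelet_sfs(E)
assert Z.shape == E.shape
```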
