Rp = R(p + δpq, q) − R(p, q), Rq = R(p, q + δpq) − R(p, q), px = p(x + 1, y) − p(x, y), pxx = p(x + 1, y) + p(x − 1, y) − 2p(x, y), pyy = p(x, y + 1) + p(x, y − 1) − 2p(x, y).
Similarly, we can get all the other needed values in (5.59), namely, qxx, qy, qyy, Zx, Zy, Zxx, Zyy, Ixx, and Iyy. Notice that, in (5.61), the partial derivatives px, pxx, and pyy are approximated by linear terms in their Taylor series.
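As a concrete illustration, the discrete approximations above can be computed with NumPy array shifts. This is a minimal sketch: `finite_differences` is our own helper name, and replicating edge values at the image border is an assumption (the text does not specify boundary handling).

```python
import numpy as np

def finite_differences(p):
    """First and second finite differences of a 2-D array, matching the
    discrete approximations px, py, pxx, pyy above.
    Boundary values are obtained by edge replication (our assumption)."""
    # Pad by replicating edges so p(x+1, y), p(x-1, y), etc. exist everywhere.
    pp = np.pad(p, 1, mode="edge")
    c = pp[1:-1, 1:-1]                           # p(x, y)
    px = pp[2:, 1:-1] - c                        # p(x+1, y) - p(x, y)
    py = pp[1:-1, 2:] - c                        # p(x, y+1) - p(x, y)
    pxx = pp[2:, 1:-1] + pp[:-2, 1:-1] - 2 * c   # second difference in x
    pyy = pp[1:-1, 2:] + pp[1:-1, :-2] - 2 * c   # second difference in y
    return px, py, pxx, pyy
```

The same helper applied to q, Z, and I yields all the remaining values needed in (5.59).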
To accelerate the computation, a hierarchical implementation is used in Zheng-Chellappa's algorithm: the lowest-resolution layer of the image is 32 x 32, the next one is 64 x 64, and so on. For a detailed discussion of the hierarchical method and its implementation, we refer the reader to the literature.
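The layer structure can be sketched as follows. The 2 x 2 block averaging used here is a simple stand-in for whatever reduction filter the actual implementation uses; it is our assumption, not taken from the text.

```python
import numpy as np

def build_pyramid(image, lowest=32):
    """Build the hierarchical layers: halve the resolution by 2x2 block
    averaging until the coarsest layer is `lowest` x `lowest` pixels."""
    layers = [image]
    while layers[-1].shape[0] > lowest:
        a = layers[-1]
        # Average each 2x2 block into one pixel of the next-coarser layer.
        layers.append(0.25 * (a[::2, ::2] + a[1::2, ::2]
                              + a[::2, 1::2] + a[1::2, 1::2]))
    return layers[::-1]   # coarsest layer (32 x 32) first
```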
The whole algorithm can be described by the following procedure.
• Step 1. Estimate the original parameters of the reflectance map.
• Step 2. Normalize the input image. This step reduces the input image to the size of the lowest-resolution layer.
• Step 3. Update the current shape reconstruction using Eqs. (5.56)-(5.59), and (5.61).
• Step 4. If the current image is at the highest resolution, the algorithm stops. Otherwise, increase the image size and expand the shape reconstruction to the adjacent higher-resolution layer, reduce the normalized input image to the current resolution, and go to Step 3.
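Steps 2-4 can be sketched as the following coarse-to-fine driver. Here `update(I, Z)` stands in for the actual shape update of Step 3 (Eqs. (5.56)-(5.59) and (5.61)), and the reduction/expansion operators (block averaging, pixel replication) are simple assumptions of ours.

```python
import numpy as np

def reduce2(a):
    """Halve resolution by 2x2 block averaging (the Step 2 / Step 4 reduction)."""
    return 0.25 * (a[::2, ::2] + a[1::2, ::2] + a[::2, 1::2] + a[1::2, 1::2])

def expand2(a):
    """Double resolution by pixel replication (the Step 4 expansion)."""
    return np.repeat(np.repeat(a, 2, axis=0), 2, axis=1)

def coarse_to_fine(image, update, lowest=32, sweeps=5):
    """Skeleton of the hierarchical procedure; `update` is a placeholder
    for the per-layer shape update of Step 3."""
    # Step 2: reduce the input image down to the lowest-resolution layer.
    reduced = [image]
    while reduced[-1].shape[0] > lowest:
        reduced.append(reduce2(reduced[-1]))
    Z = np.zeros_like(reduced[-1])            # initial shape at coarsest layer
    for I in reversed(reduced):               # coarsest layer first
        if Z.shape != I.shape:                # Step 4: expand Z to next layer
            Z = expand2(Z)
        for _ in range(sweeps):               # Step 3: update current shape
            Z = update(I, Z)
    return Z                                  # highest resolution reached: stop
```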
The following pseudocode realizes Zheng-Chellappa's method.
Algorithm 3: Zheng-Chellappa's method
Input: Zmin (min depth value), Zmax (max depth value), (x, y, z) (direction of the light source), I(x, y) (input image)
D ← sqrt(x² + y² + z²), sx ← x/D, sy ← y/D, sz ← z/D
p0 ← 0, q0 ← 0, Z0 ← 0
δpq ← 0.001, μ ← 1.0 (μ will be used in Eqs. (5.57) and (5.58))
sin γ ← sin(arccos(sz)), sin τ ← sin(arctan(sy/sx)), cos τ ← cos(arctan(sy/sx))
I ← Normalize(I)
for i = 1 to width(I) do
  for j = 1 to height(I) do
    calculate(px, pxx, py, pyy, qx, qxx, qy, qyy, Zx, Zxx, Zy, Zyy)
    R ← (cos γ − p(i, j) cos τ sin γ − q(i, j) sin τ sin γ) / sqrt(1 + p(i, j)² + q(i, j)²)
    Rp ← R(p(i, j) + δpq, q(i, j)) − R(p(i, j), q(i, j))
    Rq ← R(p(i, j), q(i, j) + δpq) − R(p(i, j), q(i, j))
    calculate(δp, δq, δZ) using Eqs. (5.57) and (5.58)
    p ← p0 + δp, q ← q0 + δq, Z ← Z0 + δZ
    p ← Z(i, j) − Z(i, j − 1)
    q ← Z(i, j) − Z(i − 1, j)
  end do
end do
The subfunction Normalize is a standard math function used in signal and image processing.
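The reflectance map and the forward differences Rp, Rq from the pseudocode above can be rendered in NumPy as follows. This is a minimal sketch: the function names and the slant/tilt parametrization of the light direction are our notation, with δpq = 0.001 as in the pseudocode.

```python
import numpy as np

def reflectance(p, q, slant, tilt):
    """Lambertian reflectance map for a light source at the given slant/tilt:
    R = (cos g - p cos t sin g - q sin t sin g) / sqrt(1 + p^2 + q^2)."""
    return ((np.cos(slant)
             - p * np.cos(tilt) * np.sin(slant)
             - q * np.sin(tilt) * np.sin(slant))
            / np.sqrt(1.0 + p**2 + q**2))

def numerical_Rp_Rq(p, q, slant, tilt, dpq=0.001):
    """Forward differences of R, mirroring the pseudocode: the raw
    difference R(p + dpq, q) - R(p, q), not divided by dpq."""
    R0 = reflectance(p, q, slant, tilt)
    Rp = reflectance(p + dpq, q, slant, tilt) - R0
    Rq = reflectance(p, q + dpq, slant, tilt) - R0
    return Rp, Rq
```

For an overhead light (slant 0) the map reduces to 1/sqrt(1 + p² + q²), which peaks at p = q = 0, so both forward differences are negative there.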
We now demonstrate this method by using the following example.
Example 6. Reconstruct the surface of a synthetic vase using the Zheng-Chellappa method.
The experiments are based on synthetic images generated from true depth maps. Figure 5.4(a) shows the same synthetic vase used in the previous section with Pentland's algorithm. The light comes from above, at (x = 0, y = 0, z = 1). The input image is shown in Fig. 5.4(a), and Figs. 5.4(b), (c), and (d) show the reconstructed depth map viewed from three different directions. The Zheng-Chellappa algorithm produces reasonable results for the vase, as expected, although some errors are visible around the boundary of the vase. In general, the experiment shows that Zheng-Chellappa's algorithm can reasonably reconstruct the object's surface. The most important advantage of Zheng-Chellappa's algorithm is that the optimization approach is not limited to situations where the reflectance map changes linearly with respect to the surface shape.
Example 7. Reconstruct the surface of a synthetic Mozart using Zheng-Chellappa's method.
These experiments are likewise based on synthetic images generated from true depth maps. Figure 5.5(a) shows the synthetic Mozart input image; the light comes from above, at (x = 0, y = 0, z = 1). Figures 5.5(b), (c), and (d) show the reconstructed depth map viewed from three different directions. The recovered surface is well outlined, as expected for a human head. However, the details of Mozart cannot be accurately recovered by this approach; in our opinion, this is due to the rapid changes and complexity of the input image. Although the results can be improved by prefiltering and smoothing the input image, in general we conclude from the experiment that Zheng-Chellappa's algorithm does encounter difficulties when the input image is complex. This observation holds even for the simplest light source direction. We hope this experiment conveys to the reader that the SFS problem is indeed one of the most difficult problems in computer vision: no perfect, or even fully satisfactory, solution has been proposed yet.
We summarize this section with a few words about the advantages and disadvantages of the two methods introduced here. Pentland's method uses the FFT and inverse FFT to calculate the depth map, which makes the algorithm relatively insensitive to the initial values. However, there are a few disadvantages: (1) When the light source direction and the viewing direction are close, the Fourier transforms of p² and q² have a frequency-doubling effect, which affects the accuracy of the linear approximation. (2) Because it applies the FFT and inverse FFT to the whole image, Pentland's algorithm needs more time than Tsai-Shah's approach. Tsai-Shah's algorithm uses Newton's method to solve the quadratic equations. When the initial value is close to the exact solution, Tsai-Shah's algorithm converges very fast; indeed, given good initial values, it needs only a few steps to converge. However, it is well known that Newton's method cannot guarantee convergence in general. This disadvantage makes Tsai-Shah's approach more sensitive to the initial estimate than Pentland's.
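The convergence caveat can be seen on a toy scalar example. This is a generic illustration of Newton's method, not Tsai-Shah's actual update equation: a good starting point converges in a few steps, while a poorly chosen one never converges at all.

```python
def newton(f, fprime, x0, steps=20):
    """Plain Newton iteration: x <- x - f(x) / f'(x)."""
    x = x0
    for _ in range(steps):
        x = x - f(x) / fprime(x)
    return x

# f(x) = x^3 - 2x + 2 has a single real root near x = -1.7693.
f = lambda x: x**3 - 2.0 * x + 2.0
fp = lambda x: 3.0 * x**2 - 2.0

good = newton(f, fp, x0=-2.0)   # good start: converges in a few steps
bad = newton(f, fp, x0=0.0)     # bad start: iterates cycle 0 -> 1 -> 0 -> ...
```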
The discussion in this subsection has also shown that the linear approach is conceptually simple, and the related algorithms are relatively fast and easy to implement. However, the reconstruction accuracy of this class of methods is limited: the assumption of a simple linear model is not quite satisfactory for actual objects. Therefore, more advanced methods, such as multiscale methods, have been introduced to overcome the disadvantages of these linear approaches. As an example, we will introduce a wavelet-based method in the following section.
To end this section, we would like to acknowledge the website http://www.cs.ucf.edu/~vision/source.html; all the source codes used in this section can be found at this site.
Finally, we reiterate the importance of the direction of the light source. Recall that the brightness of an object depends on the following three factors:
(1) microstructure of the surface,
(2) distribution of the incident light,
(3) orientation of the surface with respect to the view and light source.
Notably, if we change the direction of the light source, the irradiance map changes correspondingly. This has an impact on the convergence properties of certain numerical methods (see Section 5.3).
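Factor (3), and the effect of moving the light source, can be illustrated with the standard Lambertian model: brightness is the cosine of the angle between the unit surface normal and the unit light direction. The clamping of back-facing patches to zero is our own convention here.

```python
import numpy as np

def lambertian(normal, light):
    """Brightness of a Lambertian surface patch: n . s for unit vectors,
    clamped at zero so back-facing patches are dark."""
    n = np.asarray(normal, dtype=float)
    s = np.asarray(light, dtype=float)
    n = n / np.linalg.norm(n)
    s = s / np.linalg.norm(s)
    return max(0.0, float(n @ s))
```

Tilting the light from overhead (0, 0, 1) to (0, 1, 1) changes the brightness of a flat patch from 1 to cos 45° ≈ 0.707, so the whole irradiance map shifts with the source.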