So far, we have encountered only two classes of synthesis functions: those that reproduce the constant, and those that do not. We now introduce finer categories. First, we perform the following experiment:
(1) Take some arbitrary square-integrable function $f(x)$ and select a sampling step $h > 0$.
(2) Create the sequence of samples $\{f(hk)\}$.
(3) From this sequence, using either Eq. (1) or Eq. (4), build an interpolated function $f_h$.
(4) Compare $f$ with $f_h$, for example by measuring the approximation error $\varepsilon(h) = \| f - f_h \|_{L_2}$.
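As a concrete illustration (not part of the original text), the four steps above can be sketched numerically. Linear interpolation stands in here for the synthesis function, and $f(x) = \sin x$ on $[0, 2\pi]$ is an arbitrary illustrative choice; the text's Eq. (1) and Eq. (4) are not reproduced:

```python
import numpy as np

def l2_error(f, h, a=0.0, b=2 * np.pi, oversample=64):
    """Run steps (1)-(4): sample f with step h, rebuild an
    interpolated function f_h, and return ||f - f_h||_L2."""
    xk = np.arange(a, b + h / 2, h)              # (2) samples f(hk)
    fk = f(xk)
    x = np.linspace(a, b, oversample * len(xk))  # fine evaluation grid
    f_h = np.interp(x, xk, fk)                   # (3) linear interpolation
    dx = x[1] - x[0]
    return np.sqrt(np.sum((f(x) - f_h) ** 2) * dx)  # (4) discrete L2 norm

err = l2_error(np.sin, h=0.1)
```

With a smooth $f$, the error is small but nonzero, since $\sin$ is not reproduced exactly by piecewise-linear functions; it shrinks when $h$ does.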
The equivalence $\varepsilon = \hat{\varepsilon}$ holds for band-limited functions. For functions that do not belong to that class, the estimated error $\hat{\varepsilon}(h)$ must be understood as the average error over all possible sets of samples $f(hk + \Delta)$, where $\Delta = (\Delta_1, \Delta_2, \ldots, \Delta_d)$ is some phase term with $\Delta_i \in [0, h]$. When $d = 1$, for band-limited functions $f$, and when the synthesis function is interpolating, this error kernel reduces to the kernel proposed in .
In view of Eq. (9), a decrease of the sampling step $h$ results in a decrease of the argument of $E$. Since the function $f$ is arbitrary, and since we require that the approximation error $\varepsilon(h)$ vanish for a vanishing sampling step $h$, the error kernel itself must vanish at the origin. It is therefore interesting to develop $E$ in a Maclaurin series around the origin (for simplicity, we consider only the 1-D case here). Since this function is even (i.e., symmetric), only even terms need be considered, and the Maclaurin development is

$$E(\omega) = \sum_{n \in \mathbb{N}} \frac{E^{(2n)}(0)}{(2n)!}\, \omega^{2n},$$
where $E^{(2n)}(0)$ is the $(2n)$-th derivative of the error kernel evaluated at the origin. By definition, the order of differentiation $L$ for which $E^{(2L)}(0) \neq 0$ and $E^{(2m)}(0) = 0$ for all $m \in [0, L-1]$ is called the approximation order of $\varphi$. Thus, for a synthesis function $\varphi$ of order $L$, the infinite Maclaurin expansion is given by

$$E(\omega) = C_\varphi^2\, \omega^{2L} + \sum_{n=L+1}^{\infty} \frac{E^{(2n)}(0)}{(2n)!}\, \omega^{2n}, \qquad C_\varphi = \sqrt{\frac{E^{(2L)}(0)}{(2L)!}},$$
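To make the approximation order tangible, one can evaluate an error kernel numerically near the origin. The following is a minimal sketch under two assumptions not spelled out in this excerpt: the classical kernel form for an interpolating synthesis function, $E(\omega) = |1-\hat{\varphi}(\omega)|^2 + \sum_{k \neq 0} |\hat{\varphi}(\omega + 2\pi k)|^2$, and the linear B-spline, whose Fourier transform is $\hat{\varphi}(\omega) = \mathrm{sinc}^2(\omega/2)$ and whose approximation order is $L = 2$:

```python
import numpy as np

def phi_hat(w):
    """Fourier transform of the linear B-spline: sinc^2(w/2) with
    sinc(x) = sin(x)/x (np.sinc uses the normalized convention)."""
    return np.sinc(w / (2 * np.pi)) ** 2

def error_kernel(w, K=200):
    """Assumed classical error kernel for an interpolating synthesis
    function; the alias sum is truncated at |k| <= K."""
    k = np.arange(1, K + 1)
    aliases = phi_hat(w + 2 * np.pi * k) ** 2 + phi_hat(w - 2 * np.pi * k) ** 2
    return (1 - phi_hat(w)) ** 2 + aliases.sum()

# Near the origin E(w) ~ C^2 w^(2L), so halving w divides E by 2^(2L) = 16.
ratio = error_kernel(0.02) / error_kernel(0.01)
order = round(np.log2(ratio) / 2)   # recovers L = 2
```

The measured ratio of roughly 16 confirms that the leading term of the expansion behaves as $\omega^{2L}$ with $L = 2$ for this synthesis function.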
where the constant $C_\varphi$ depends on $\varphi$ only. When the sampling step $h$ gets smaller, more details of $f$ can be captured; it is then reasonable to ask that the approximation error $\varepsilon(h)$ get smaller, too, and the fundamental question is how fast. When the sampling step is small enough, we can neglect the high-order terms of this expansion. The introduction of the resulting expression of $E$ into (9) yields

$$\hat{\varepsilon}(h) = C_\varphi\, h^L\, \bigl\| f^{(L)} \bigr\|_{L_2} \quad \text{as } h \to 0.$$
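The predicted $h^L$ decay of the error can be checked empirically by running the four-step experiment of this section at two sampling steps. Linear interpolation (approximation order $L = 2$) and $f = \sin$ are illustrative assumptions, not choices made by the text:

```python
import numpy as np

def l2_error(f, h, a=0.0, b=2 * np.pi, oversample=64):
    """||f - f_h||_L2 for linear interpolation with sampling step h."""
    xk = np.arange(a, b + h / 2, h)
    x = np.linspace(a, b, oversample * len(xk))
    f_h = np.interp(x, xk, f(xk))
    return np.sqrt(np.sum((f(x) - f_h) ** 2) * (x[1] - x[0]))

# Halving h should divide the error by about 2^L = 4 when L = 2.
e1, e2 = l2_error(np.sin, 0.1), l2_error(np.sin, 0.05)
observed_order = np.log2(e1 / e2)   # close to L = 2
```

The observed ratio of about 4 matches $\hat{\varepsilon}(h) \propto h^2$, i.e., the $h^L$ behavior with $L = 2$ for linear interpolation.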