Communication Systems, Civilian

Simon Haykin, in Encyclopedia of Physical Science and Technology (Third Edition), 2003

III.C.3 Quantizing

A continuous signal, such as voice, has a continuous range of amplitudes, and therefore its samples have a continuous amplitude range. In other words, within the finite amplitude range of the signal we find an infinite number of amplitude levels. In fact, it is not necessary to transmit the exact amplitudes of the samples. Any human sense (the ear or the eye), as the ultimate receiver, can detect only finite intensity differences. This means that the original continuous signal can be approximated by a signal constructed of discrete amplitudes selected on a minimum-error basis from an available set. The existence of a finite number of discrete amplitude levels is a basic condition of PCM. Clearly, if we assign the discrete amplitude levels with sufficiently close spacing, we can make the approximated signal practically indistinguishable from the original continuous signal.

The conversion of an analog (continuous) sample of the signal to a digital (discrete) form is called the quantizing process.
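The quantizing process described above can be sketched numerically. The following is a generic illustration of a uniform quantizer, not the encyclopedia's own algorithm; the 8-level quantizer and the test waveform are assumptions made for the example.

```python
import numpy as np

def quantize(samples, n_levels, lo, hi):
    """Map each continuous-amplitude sample to the nearest of
    n_levels discrete amplitudes spaced uniformly over [lo, hi]."""
    delta = (hi - lo) / (n_levels - 1)           # spacing between levels
    idx = np.round((np.asarray(samples) - lo) / delta)
    idx = np.clip(idx, 0, n_levels - 1)          # saturate out-of-range input
    return lo + idx * delta

# a continuous-amplitude test waveform, quantized to 8 discrete levels
t = np.linspace(0.0, 1.0, 1000)
x = np.sin(2 * np.pi * 5 * t)
xq = quantize(x, n_levels=8, lo=-1.0, hi=1.0)

# minimum-error selection bounds the error by half the level spacing
max_err = np.max(np.abs(x - xq))
```

With sufficiently many levels, `max_err` shrinks and the quantized signal becomes practically indistinguishable from the original, as the text notes.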

URL: https://www.sciencedirect.com/science/article/pii/B0122274105001253

BEARING DIAGNOSTICS

C.J. Li, K. McKee, in Encyclopedia of Vibration, 2001

Wavelet transform

For a continuous signal x(t), the wavelet transform (WT) is defined as:

[7] $W_x(a,b) = \int_{-\infty}^{\infty} g^{*}_{(a,b)}(t)\, x(t)\, dt$

where * denotes the complex conjugate, and g(t) represents the mother wavelet, e.g.:

[8] $g(t) = \exp(-\sigma t)\sin(\omega_0 t)$ for $t \ge 0$ and $g(t) = -g(-t)$ for $t < 0$; $\quad g_{(a,b)}(t) = \frac{1}{\sqrt{a}}\, g\!\left(\frac{t-b}{a}\right)$

where a is the dilation parameter which defines a baby wavelet for a given value, and b is the shifting parameter.

For a given a, carrying out the WT over a range of b is like passing the signal through a filter whose impulse response is defined by the baby wavelet. Therefore, one may consider the WT as a bank of band-pass filters defined by a number of a's. The salient characteristic of the WT is that the passband width of the filters is frequency-dependent. Therefore, the WT can provide high frequency resolution at the low-frequency end while maintaining good time localization at the high-frequency end. This is advantageous for processing transient bearing ringings.
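The filter-bank view can be tried numerically. The sketch below discretizes Eq. [7] with the damped-sinusoid mother wavelet of Eq. [8]; the values of σ and ω₀, the grid, and the synthetic "ringing" signal are all illustrative assumptions, not the encyclopedia's data.

```python
import numpy as np

def mother_wavelet(t, sigma=5.0, w0=2 * np.pi * 10):
    """Damped sinusoid, odd-extended as in Eq. [8]; sigma and w0 are
    illustrative values (sin is odd and exp(-sigma|t|) even, so this
    realizes g(t) = -g(-t) for t < 0 automatically)."""
    return np.exp(-sigma * np.abs(t)) * np.sin(w0 * t)

def wt(x, t, a, b_grid):
    """Discretized Eq. [7]: W_x(a, b) = integral of g*_(a,b)(t) x(t) dt."""
    dt = t[1] - t[0]
    out = []
    for b in b_grid:
        baby = mother_wavelet((t - b) / a) / np.sqrt(a)   # baby wavelet
        out.append(np.sum(np.conj(baby) * x) * dt)        # conj for generality
    return np.array(out)

# a transient "ringing" buried at t = 0.5 s
t = np.linspace(0, 1, 2000)
x = mother_wavelet(t - 0.5)

b_grid = np.linspace(0, 1, 200)
coeffs = np.abs(wt(x, t, a=1.0, b_grid=b_grid))
peak_b = b_grid[np.argmax(coeffs)]   # shift parameter of the best match
```

Because the WT at fixed a is a correlation with the shifted baby wavelet, the coefficient magnitude peaks where the transient occurs, which is what makes it useful for localizing bearing ringings.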

When the WT is applied to a bearing signal as a preprocessing tool, the passband of one or more of the filters may overlap with some of the resonances being excited by the roller defect impacts. This results in an enhanced signal-to-noise ratio. For example, Figure 4 shows a bearing vibration measured from a roller-damaged bearing. (Note that it has already been high-pass filtered.) The periodic ringings are not obvious. Figure 5 is the result of the WT with one of the baby wavelets. The periodic ringings can be seen more readily and are therefore easily identified by, say, an envelope analysis.

Figure 4. Bearing vibration with inner-race defect.

Figure 5. Wavelet transform of the vibration in Figure 4.

By breaking a broad-band bearing signal into a number of narrow-band subsignals and then scanning them for evidence of a bearing defect, the WT avoids the risk of selecting a wrong band that does not include any resonance and thereby missing the defect. The price is that one has to repeat the same bearing diagnostic algorithm on more than one subsignal.

URL: https://www.sciencedirect.com/science/article/pii/B0122270851002009


Power Spectral Density

Scott L. Miller, Donald Childers, in Probability and Random Processes, 2004

For a deterministic continuous signal, x(t), the Fourier transform is used to describe its spectral content. In this text, we write the Fourier transform as

(10.1) $X(f) = \mathcal{F}[x(t)] = \int_{-\infty}^{\infty} x(t)\, e^{-j2\pi ft}\, dt,$

and the corresponding inverse transform is

(10.2) $x(t) = \mathcal{F}^{-1}[X(f)] = \int_{-\infty}^{\infty} X(f)\, e^{j2\pi ft}\, df.$
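The transform pair can be checked numerically on a signal whose transform is known in closed form. With the convention of Eq. (10.1), the Gaussian pulse $x(t) = e^{-\pi t^2}$ transforms to $X(f) = e^{-\pi f^2}$; the grid and test frequency below are assumptions of this sketch.

```python
import numpy as np

def fourier_transform(x, t, f):
    """Numerically evaluate Eq. (10.1): X(f) = integral of x(t) e^{-j2*pi*f*t} dt,
    using a simple Riemann sum on a finite grid (accurate for a rapidly
    decaying, smooth integrand)."""
    dt = t[1] - t[0]
    return np.sum(x * np.exp(-2j * np.pi * f * t)) * dt

# Gaussian pulse: the transform of exp(-pi t^2) is exp(-pi f^2)
t = np.linspace(-6.0, 6.0, 4001)
x = np.exp(-np.pi * t**2)

f0 = 0.5
X = fourier_transform(x, t, f0)
expected = np.exp(-np.pi * f0**2)
```

The finite integration window is harmless here because the Gaussian has decayed to a negligible level at |t| = 6.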

URL: https://www.sciencedirect.com/science/article/pii/B9780121726515500105

Laplace and Fourier transforms

Brent J. Lewis, ... Andrew A. Prudil, in Advanced Mathematics for Engineering Students, 2022

3.3.1 Discrete Fourier transforms

As with the continuous Fourier transform in Section 3.2, the Fourier transform of an original signal f(t), with a suitable normalization, is defined by

$F(w) = \frac{1}{2\pi} \int_{-\infty}^{\infty} f(t)\, e^{-iwt}\, dt.$

Consider the case of a sampling of data f(k) that gives rise to N individual values $f[0], f[1], f[2], \ldots, f[N-1]$ separated by a sample time T. One can think of each sample as an impulse with an area f[k], so that the transform integral now has finite integration limits. The integral can then be replaced by a summation over the sampling points:

$F(w) = \int_0^{(N-1)T} f(t)\, e^{-iwt}\, dt = f[0]e^{-i0} + f[1]e^{-iwT} + \cdots + f[k]e^{-iwkT} + \cdots + f[N-1]e^{-iw(N-1)T} = \sum_{k=0}^{N-1} f[k]\, e^{-iwkT}.$

With only N data points, there are only N significant outputs. A continuous Fourier transform is evaluated over the integration limits of −∞ to ∞ for a periodic function. Similarly, with only a finite number of data points, the DFT treats the data in a periodic manner as well, where the interval $f(N)$ to $f(2N-1)$ is identical to the sequence $f(0)$ to $f(N-1)$. The fundamental frequency for one cycle per sequence is $w = \frac{1}{NT}$ Hz or $w = \frac{2\pi}{NT}$ rad/s, including $w = 0$ (for a nonoscillating or average component of the signal), as well as higher-order harmonics.

Hence, in general, the DFT F [ n ] of the sequence f [ k ] is defined as (Oxford, 2020)

(3.38) $F[n] = \sum_{k=0}^{N-1} f[k]\, e^{-i\frac{2\pi}{N}nk}, \quad n = 0, 1, \ldots, N-1.$

This equation can be equivalently written in the matrix form

$\begin{bmatrix} F[0] \\ F[1] \\ F[2] \\ \vdots \\ F[N-1] \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & W & W^2 & \cdots & W^{N-1} \\ 1 & W^2 & W^4 & \cdots & W^{2(N-1)} \\ \vdots & & & & \vdots \\ 1 & W^{N-1} & W^{2(N-1)} & \cdots & W^{(N-1)^2} \end{bmatrix} \begin{bmatrix} f[0] \\ f[1] \\ f[2] \\ \vdots \\ f[N-1] \end{bmatrix},$

where $W = e^{-i2\pi/N}$ (so that $W^N = W^{2N} = 1$). An inverse transform, as analogously defined for the continuous transform, is

(3.39) $f[k] = \frac{1}{N} \sum_{n=0}^{N-1} F[n]\, e^{i\frac{2\pi}{N}nk}, \quad k = 0, 1, \ldots, N-1,$

where the $F[n]$ coefficients are complex and the resulting spectrum is symmetrical about $N/2$. The inverse matrix is therefore derived as $1/N$ times the complex conjugate of the original matrix. For the inverse transform, the terms $F[n]$ and $F[N-n]$ give rise to two frequency components, of which only the lower-frequency component at $\frac{n}{NT}$ Hz, for $n \le N/2$, is valid, while the other one is extraneous for $n > N/2$. This latter component is termed an aliasing frequency and is an artifact of the signal processing that can be avoided by applying a low-pass filter. Thus, for these two contributions, one has

(3.40) $f_n[k] = \frac{1}{N} \left\{ F[n]\, e^{i\frac{2\pi}{N}nk} + F[N-n]\, e^{i\frac{2\pi}{N}(N-n)k} \right\},$

where for f [ k ] real

(3.41) $F[N-n] = \sum_{k=0}^{N-1} f[k]\, e^{-i\frac{2\pi}{N}(N-n)k} = \sum_{k=0}^{N-1} f[k]\, e^{-i2\pi k}\, e^{i\frac{2\pi}{N}nk} = \sum_{k=0}^{N-1} f[k]\, e^{i\frac{2\pi}{N}nk} = F^{*}[n],$

where $F^{*}[n]$ is the complex conjugate. Thus, substituting Eq. (3.41) into Eq. (3.40), noting that $e^{i2\pi k} = 1$, and applying Euler's formula for the complex exponential, one obtains

(3.42) $f_n[k] = \frac{1}{N}\left\{ F[n]\, e^{i\frac{2\pi}{N}nk} + F^{*}[n]\, e^{-i\frac{2\pi}{N}nk} \right\} = \frac{2}{N}\left\{ \operatorname{Re}\{F[n]\}\cos\frac{2\pi}{N}nk - \operatorname{Im}\{F[n]\}\sin\frac{2\pi}{N}nk \right\} = \frac{2}{N}\, |F[n]| \cos\left\{ \left(\frac{2\pi n}{NT}\right) kT + \arg(F[n]) \right\}.$

This formula gives a sampled sinusoidal wave at a frequency of $\frac{2\pi n}{NT}$ and a signal amplitude of $\frac{2}{N}|F[n]|$.
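The matrix form of Eq. (3.38) can be built directly and checked against a library FFT, which uses the same sign convention $e^{-i2\pi nk/N}$; the test sequence (one cosine cycle over N = 8 points) is an assumption of this sketch.

```python
import numpy as np

def dft_matrix(N):
    """DFT matrix with entries W^{nk}, W = exp(-i 2*pi/N), as in Eq. (3.38)."""
    n = np.arange(N).reshape(-1, 1)
    k = np.arange(N).reshape(1, -1)
    W = np.exp(-2j * np.pi / N)
    return W ** (n * k)

N = 8
f = np.cos(2 * np.pi * np.arange(N) / N)   # one full cycle over the sequence
F = dft_matrix(N) @ f

# NumPy's FFT uses the same convention, so the results agree
F_ref = np.fft.fft(f)
```

For this real input, the output also exhibits the conjugate symmetry $F[N-n] = F^{*}[n]$ of Eq. (3.41), with the single cosine cycle appearing in bins n = 1 and n = N − 1.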

Example 3.3.1

Given a continuous signal that is composed of one nonoscillating component and two oscillating components, $f(t) = 3 + 2\cos(2\pi t - \pi/2) + \cos 4\pi t$, evaluate the DFT for this signal sampled at four times per second (that is, 4 Hz) from $t = 0$ to $t = 3/4$.

Solution. Putting the time $t = kT = k/4$, the values for the discrete sampling of the continuous signal in Fig. 3.7 become

$f[k] = 3 + 2\cos\left(\frac{\pi}{2}k - \frac{\pi}{2}\right) + \cos \pi k.$

Thus, with N = 4

$f[0] = 3 + 2\cos(-\tfrac{\pi}{2}) + \cos(0) = 4, \quad f[1] = 3 + 2\cos(0) + \cos(\pi) = 4, \quad f[2] = 3 + 2\cos(\tfrac{\pi}{2}) + \cos(2\pi) = 4, \quad f[3] = 3 + 2\cos\left(\tfrac{3\pi}{2} - \tfrac{\pi}{2}\right) + \cos(3\pi) = 0.$

From Eq. (3.38), with N = 4

$F[n] = \sum_{k=0}^{3} f[k]\, e^{-i\frac{2\pi}{4}nk} = \sum_{k=0}^{3} f[k]\, (-i)^{nk}.$

Hence,

$\begin{bmatrix} F[0] \\ F[1] \\ F[2] \\ F[3] \end{bmatrix} = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -i & -1 & i \\ 1 & -1 & 1 & -1 \\ 1 & i & -1 & -i \end{bmatrix} \begin{bmatrix} f[0] \\ f[1] \\ f[2] \\ f[3] \end{bmatrix} = \begin{bmatrix} 12 \\ -4i \\ 4 \\ 4i \end{bmatrix}.$ [answer]
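Example 3.3.1 can be verified with a library FFT, since `np.fft.fft` implements exactly the convention of Eq. (3.38):

```python
import numpy as np

# samples from Example 3.3.1: f[k] = 3 + 2cos(pi*k/2 - pi/2) + cos(pi*k), k = 0..3
f = np.array([4.0, 4.0, 4.0, 0.0])

# np.fft.fft computes F[n] = sum_k f[k] exp(-i 2*pi*n*k/N), matching Eq. (3.38)
F = np.fft.fft(f)
```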

Figure 3.7

Figure 3.7. Continuous signal and individual components of the signal.

Example 3.3.2

For Example 3.3.1, evaluate the contributions to $f[k]$ from the inverse transform results for $F[0]$, $F[1]$, and $F[2]$.

Solution.

First component: Since $F[0] = 12$, from Eq. (3.39) as a special case with $n = 0$, $f_0[k] = \frac{1}{N}F[0] = \frac{12}{4} = 3$ (that is, a constant).   [answer]

Second component: Since $F[1] = -4i = F^{*}[3]$, the peak amplitude for the fundamental component $f_1[k]$ is $\frac{2}{N}|F[1]| = \frac{2}{4} \times 4 = 2$. The phase for $-4i$, from polar coordinates for this point in a complex x-y diagram, is $\arg(F[1]) = -\frac{\pi}{2}$. Hence, from Eq. (3.42), $f_1[k] = 2\cos\left(\frac{2\pi}{NT}kT - \frac{\pi}{2}\right) = 2\cos\left(\frac{\pi}{2}k - \frac{\pi}{2}\right)$.   [answer]

Third component: Since $F[2] = 4$, where $n = N/2$, the second term in Eq. (3.40) is extraneous, so that there are no $N-n$ components and $f_2[k] = \frac{1}{N}F[2]\, e^{i\frac{2\pi}{4}2k} = e^{i\pi k} = \cos \pi k$, since $\sin \pi k = 0$ for all integer k.   [answer]

As expected, all $f_n[k]$ (for $n = 0, 1, 2$) match the three components of the discrete sampling of the continuous signal $f[k]$ in Example 3.3.1.
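The claim that the components add back up to the samples is easy to check numerically:

```python
import numpy as np

N = 4
k = np.arange(N)

# the three inverse-transform components found in Example 3.3.2
f0 = np.full(N, 3.0)                          # from F[0] = 12
f1 = 2 * np.cos(np.pi / 2 * k - np.pi / 2)    # from F[1] = -4i (and F[3] = 4i)
f2 = np.cos(np.pi * k)                        # from F[2] = 4, the n = N/2 term

reconstructed = f0 + f1 + f2                  # should reproduce f[k] = 4, 4, 4, 0
```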

In addition to aliasing where different signals can become indistinguishable when sampled, spectral leakage can also occur when a noninteger number of periods of a signal is transformed. This phenomenon results in a spread of the signal among several frequencies after the DFT analysis.
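Spectral leakage can be demonstrated with a short experiment; the sequence length of 64 and the cycle counts are assumptions chosen for illustration.

```python
import numpy as np

N = 64
k = np.arange(N)

# integer number of cycles (8 per sequence): energy stays in one bin pair
whole = np.cos(2 * np.pi * 8 * k / N)
# noninteger number of cycles (8.5): energy leaks across many bins
frac = np.cos(2 * np.pi * 8.5 * k / N)

def occupied_bins(x, threshold=1e-8):
    """Count DFT bins holding non-negligible magnitude."""
    mag = np.abs(np.fft.fft(x))
    return np.count_nonzero(mag > threshold * mag.max())

bins_whole = occupied_bins(whole)   # exactly two bins (n and N - n)
bins_frac = occupied_bins(frac)     # many bins: the leaked spread
```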

Boundary value problems

The DFT technique is also used to numerically solve boundary value problems for partial differential equations in Chapter 5 and Chapter 6. For instance, consider heat conduction in a flat plate as described by a Poisson partial differential equation, $\nabla^2 u = f(x,y)$, in Eq. (6.13). To solve this problem, one can apply a discrete inverse Fourier transform with a double sum for the temperature in the plate $u(x,y)$ at the position x and y with an equivalent relation to Eq. (3.39), such that (see Press et al., 1986)

$u_{jl} = \frac{1}{JL} \sum_{m=0}^{J-1} \sum_{n=0}^{L-1} \hat{u}_{mn}\, e^{2\pi i jm/J}\, e^{2\pi i ln/L}.$

Here the function $u(x,y)$ is represented by values at the discrete set of points $x_j = x_0 + jh$, $j = 0, 1, \ldots, J$, and $y_l = y_0 + lh$, $l = 0, 1, \ldots, L$, where $h$ is the common grid spacing. Similarly,

$f_{jl} = \frac{1}{JL} \sum_{m=0}^{J-1} \sum_{n=0}^{L-1} \hat{f}_{mn}\, e^{2\pi i jm/J}\, e^{2\pi i ln/L}.$

Note that the nomenclature of Press et al. (1986) is adopted in this derivation, which differs only by a sign change in the exponential for the transform pairs. One just needs to be consistent in taking the transform and its inverse, depending on which definition is followed. Substituting these expressions into Eq. (6.17), one obtains

$\hat{u}_{mn}\left( e^{2\pi i m/J} + e^{-2\pi i m/J} + e^{2\pi i n/L} + e^{-2\pi i n/L} - 4 \right) = \hat{f}_{mn}\, h^2.$

This latter expression can be simplified and solved for u ˆ m n :

$\hat{u}_{mn} = \frac{\hat{f}_{mn}\, h^2}{2\left( \cos\frac{2\pi m}{J} + \cos\frac{2\pi n}{L} - 2 \right)}.$

Thus, for the solution of the heat conduction equation, one computes f ˆ m n as the Fourier transform

$\hat{f}_{mn} = \sum_{j=0}^{J-1} \sum_{l=0}^{L-1} f_{jl}\, e^{-2\pi i mj/J}\, e^{-2\pi i nl/L}.$

This transform is used in the expression for $\hat{u}_{mn}$, and then an inverse transform of the result is taken to obtain the final solution for $u_{jl}$. This analysis is only applicable for periodic boundary conditions: $u_{jl} = u_{j+J,\,l} = u_{j,\,l+L}$.
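The periodic procedure can be sketched end to end with a manufactured problem: pick a periodic grid function, apply the 5-point discrete Laplacian to produce f, then recover u through the transform relation. The grid size, test function, and the zero-mean normalization of the undetermined constant mode are assumptions of this sketch.

```python
import numpy as np

def solve_poisson_periodic(f, h):
    """Solve the discretized Poisson equation on a periodic grid via
    u_hat = f_hat * h^2 / (2(cos 2*pi*m/J + cos 2*pi*n/L - 2))."""
    J, L = f.shape
    fhat = np.fft.fft2(f)
    m = np.arange(J).reshape(-1, 1)
    n = np.arange(L).reshape(1, -1)
    denom = 2.0 * (np.cos(2 * np.pi * m / J) + np.cos(2 * np.pi * n / L) - 2.0)
    denom[0, 0] = 1.0          # zero mode: solution is fixed only up to a constant
    uhat = fhat * h**2 / denom
    uhat[0, 0] = 0.0           # choose the zero-mean solution
    return np.real(np.fft.ifft2(uhat))

# manufactured periodic problem: pick u, apply the 5-point Laplacian to get f
J = L = 32
h = 1.0 / J
j = np.arange(J).reshape(-1, 1)
l = np.arange(L).reshape(1, -1)
u_true = np.sin(2 * np.pi * j / J) * np.cos(2 * np.pi * 2 * l / L)

lap = (np.roll(u_true, -1, 0) + np.roll(u_true, 1, 0)
       + np.roll(u_true, -1, 1) + np.roll(u_true, 1, 1) - 4 * u_true) / h**2

u = solve_poisson_periodic(lap, h)
```

Because the transform diagonalizes the discrete Laplacian exactly, the recovered `u` matches `u_true` to rounding error.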

However, the methodology can be easily extended to a Dirichlet boundary condition, where u = 0 on the rectangular boundaries. In this case,

$u_{jl} = \frac{2}{J}\,\frac{2}{L} \sum_{m=1}^{J-1} \sum_{n=1}^{L-1} \hat{u}_{mn} \sin\frac{\pi jm}{J} \sin\frac{\pi ln}{L}$

and one computes analogously the sine transform for f ˆ m n ,

$\hat{f}_{mn} = \sum_{j=1}^{J-1} \sum_{l=1}^{L-1} f_{jl} \sin\frac{\pi jm}{J} \sin\frac{\pi ln}{L}.$

This expression is used in the similar expression for u ˆ m n ,

$\hat{u}_{mn} = \frac{\hat{f}_{mn}\, h^2}{2\left( \cos\frac{\pi m}{J} + \cos\frac{\pi n}{L} - 2 \right)}.$

Again, an inverse sine transform is applied to the resulting expression for u ˆ m n to obtain the final solution for u j l .

In the case of an inhomogeneous boundary condition, such that $u = 0$ on all boundaries except at $x = Jh$, where $u = g(y)$, one simply adds the above solution to the solution of the homogeneous equation $\nabla^2 u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} = 0$ that satisfies the required boundary condition. For instance, in the continuum for a flat plate from Example 5.2.6, the solution is given by Eq. (5.28):

$u_H = \sum_n A_n \sinh\frac{n\pi x}{Jh} \sin\frac{n\pi y}{Lh}.$

Here A n is found by imposing the boundary condition such that u = g ( y ) at x = J h . Analogously, for the discrete case

$u^H_{jl} = \frac{2}{L} \sum_{n=1}^{L-1} A_n \sinh\frac{n\pi j}{J} \sin\frac{n\pi l}{L},$

where A n is obtained from the inverse formula

$A_n = \frac{1}{\sinh \pi n} \sum_{l=1}^{L-1} g_l \sin\frac{n\pi l}{L},$

where g l = g ( y = l h ) . The complete solution is

$u = u_{jl} + u^H_{jl}.$

Algorithms using FFT methods (see Section 3.3.2) are given by Press et al. (1986) to efficiently solve these transforms numerically.

URL: https://www.sciencedirect.com/science/article/pii/B9780128236819000113

Representations for Morphological Image Operators and Analogies with Linear Operators

Petros Maragos, in Advances in Imaging and Electron Physics, 2013

5.4.1 Representation of Weighted Operators and Basis Approximations

For every TI, increasing, and u.s.c. signal operator there is a special collection of functions, called its basis, such that the operator can be represented as a supremum of morphological erosions by its basis functions. As in the case of TI set operators, this basis is a subcollection of a suitably defined kernel. Specifically, let ψ be a signal operator on $\operatorname{Fun}(E^m, \overline{\mathbb{R}})$ — that is, the set of extended real-valued functions defined on $E^m = \mathbb{R}^m$ or $\mathbb{Z}^m$ — and let $\psi^{*}(f) = -\psi(-f)$ be its dual (a.k.a. negative) operator. Let

(213) $\operatorname{Ker}(\psi) \triangleq \{ f : \psi(f)(0) \ge 0 \}$

be the kernel of ψ. This collection of signals can uniquely represent the operator, as the following result reveals.

Theorem 24

(Maragos, 1985, 1989a)

If ψ is a TI and increasing operator on $\operatorname{Fun}(E^m, \overline{\mathbb{R}})$, then it can be represented as the supremum of weighted erosions by the functions of its kernel and as the infimum of weighted dilations by the reflected functions of the kernel of its dual operator:

(214) $\psi(f) = \bigvee_{g \in \operatorname{Ker}(\psi)} f \ominus g = \bigwedge_{h \in \operatorname{Ker}(\psi^{*})} f \oplus h^s.$

In the above theorem, $h^s(x) = h(-x)$ denotes the reflection of a function, and the function dilations and erosions are of the weighted type, defined in Eqs. (60) and (61).

We can improve this representation by using fewer erosions as follows. The basis $\operatorname{Bas}(\psi)$ is defined as the collection of the minimal (w.r.t. ≤) kernel functions:

(215) $\operatorname{Bas}(\psi) \triangleq \{ g \in \operatorname{Ker}(\psi) : [\, f \in \operatorname{Ker}(\psi) \text{ and } f \le g \,] \Longrightarrow f = g \}.$

If we restrict attention to u.s.c. operators acting on the class of u.s.c. functions, then the basis exists and can fully represent the operator, as explained next.

Theorem 25

(Maragos, 1985, 1989a)

(a)

If ψ is a TI, increasing, and u.s.c. operator on $\operatorname{Fun}_{usc}(E^m, \overline{\mathbb{R}})$, then it can be represented as the supremum of weighted erosions by the functions of its basis.

(b)

If $E^m = \mathbb{Z}^m$ and the dual operator is also u.s.c., then ψ can also be represented as the infimum of weighted dilations by the reflected functions of the basis of its dual operator:

(216) $\psi(f) = \bigvee_{g \in \operatorname{Bas}(\psi)} f \ominus g = \bigwedge_{h \in \operatorname{Bas}(\psi^{*})} f \oplus h^s.$

Thus, the above theorem represents exactly any TI, increasing, and u.s.c. operator by using a full expansion of erosions by all its basis functions (and dually as a dilation expansion). What happens if we use only a subcollection of the basis functions in the above representation? Such a question often arises in practical image-processing applications such as denoising where an optimum system needs to be designed based on a finite small number of erosions (Loce and Dougherty, 1992b, 1995). The following result is a straightforward consequence of Theorem 25(b).

Proposition 21. (Approximate basis representation)

If in the basis representation in Eqs. (216) we use collections $\mathcal{B} \subseteq \operatorname{Bas}(\psi)$ and $\mathcal{B}^{*} \subseteq \operatorname{Bas}(\psi^{*})$ smaller than the bases of the operators ψ and $\psi^{*}$, respectively, of Theorem 25(b), and we create the operators

(217) $\psi_\ell(f) \triangleq \bigvee_{g \in \mathcal{B}} f \ominus g, \qquad \psi_u(f) \triangleq \bigwedge_{h \in \mathcal{B}^{*}} f \oplus h^s,$

then the original operator ψ is bounded from below and above by these two operators with the truncated bases:

(218) $\psi_\ell(f) \le \psi(f) \le \psi_u(f), \quad \forall f.$

For cases where all the basis functions are finite-valued on the same subset of the general domain (e.g., such a case is the basis of increasing linear TI filters discussed in Section 5.4.4), Dougherty and Kraus (1991) have found a tight error bound in the approximation of an operator when removing one basis function from the full erosion expansion.

The bounding result in Eq. (218) assumed that we already had a TI increasing operator whose basis was truncated to create a new approximate operator. Another direction is to synthesize a collection of functions possessing the fundamental property of a morphological basis — that is, its elements must be minimal — and then construct an operator as a supremum of erosions by these basis functions:

Proposition 22

(a)

Let $\mathcal{B}$ be a collection of functions such that all elements of $\mathcal{B}$ are minimal in $(\mathcal{B}, \le)$, and define the operator

(219) $\psi(f) \triangleq \bigvee_{g \in \mathcal{B}} f \ominus g.$

Then ψ is a TI and increasing operator whose basis is equal to $\mathcal{B}$.

(b)

Let $\mathcal{B}$ be a collection of functions such that all elements of $\mathcal{B}$ are minimal in $(\mathcal{B}, \le)$, and define the operator

(220) $\phi(f) \triangleq \bigwedge_{h \in \mathcal{B}} f \oplus h^s.$

Then ϕ is a TI and increasing operator whose dual operator $\phi^{*}$ has $\mathcal{B}$ as its basis.

Thus, the morphological basis plays a conceptually similar role to that of a Hamel basis in a linear space. The minimality condition between two distinct functions $g_1$ and $g_2$ in a morphological basis implies that there exist points x and y such that

$g_1(x) > g_2(x) \quad \text{and} \quad g_1(y) < g_2(y).$

In other words, inside the morphological basis we cannot find two distinct elements such that one contains (w.r.t. the partial order) or is contained by the other. All the elements in a basis $\mathcal{B}$ are atoms in the poset $(\mathcal{B}, \le)$. Thus, the elements of a morphological basis are "independent" in the sense of being minimal and can synthesize an operator via the supremum. Next, we proceed with the example of a grey-level image operator that possesses a finite basis. In Section 5.4.4 we present an application of the basis representation Theorem 25 to linear filters too.

Example 17 (Basis of weighted opening)

Consider the TI weighted opening of discrete-domain input signals $f \in \operatorname{Fun}(\mathbb{Z}^m, \overline{\mathbb{R}})$ by a non-flat structuring function (kernel) $k(x)$:

(221) $(f \circ k)(x) = [(f \ominus k) \oplus k](x) = \bigvee_z \bigwedge_y f(x + y - z) - k(y) + k(z).$

From Proposition 22 it follows that this operator has a basis that consists of the functions in the following collection:

(222) $\operatorname{Bas}(f \mapsto f \circ k) = \{ g : g(x) = k(x+z) - k(z),\ z \in \operatorname{Spt}(k) \},$

where $\operatorname{Spt}(k) = \{ x : k(x) > -\infty \}$ is the support of $k(x)$. Assuming, as is usually done in imaging applications, that k has a finite support yields a finite basis. Note, however, that the above results also hold for structuring functions $k(x)$ with infinite support and for continuous-domain openings.
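The equality between the composed opening of Eq. (221) and the supremum of erosions by the basis functions of Eq. (222) can be checked on a small 1D example; the particular structuring function and input signal below are assumptions chosen for illustration.

```python
import numpy as np

def erosion(f, g):
    """Weighted erosion (f erosion g)(x) = inf_y f(x+y) - g(y); g is a dict
    {offset: weight}, and f is taken to be -inf outside its domain."""
    N = len(f)
    return np.array([min((f[x + y] if 0 <= x + y < N else -np.inf) - w
                         for y, w in g.items()) for x in range(N)])

def dilation(f, g):
    """Weighted dilation (f dilation g)(x) = sup_z f(x-z) + g(z)."""
    N = len(f)
    return np.array([max((f[x - z] if 0 <= x - z < N else -np.inf) + w
                         for z, w in g.items()) for x in range(N)])

# an assumed small structuring function k on support {0, 1, 2}
k = {0: 0.0, 1: -1.0, 2: -1.5}
f = np.array([0.0, 5.0, 3.0, 7.0, 6.0, 2.0, 8.0, 4.0])

# opening by composition, as in Eq. (221)
opening = dilation(erosion(f, k), k)

# basis functions g_z(x) = k(x+z) - k(z), z in Spt(k), as in Eq. (222)
basis = [{y - z: k[y] - k[z] for y in k} for z in k]

# the opening equals the supremum (pointwise max) of erosions by its basis
sup_erosions = np.max([erosion(f, g) for g in basis], axis=0)
```

The two computations agree pointwise, and the result is anti-extensive (everywhere ≤ f), as expected of an opening.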

Morales and Acharya (1993) have analyzed the above discrete opening for 1D signals and found its finite basis. This was then used to efficiently implement the 1D discrete opening and closing by k using a block matrix method in Ko et al. (1995).

URL: https://www.sciencedirect.com/science/article/pii/B9780124077027000024

Radar

Nadav Levanon, in Encyclopedia of Physical Science and Technology (Third Edition), 2003

X.A Continuous Wave Radar

A continuous wave (CW) radar, as its name implies, emits a continuous signal. It must therefore receive the returned signal while transmitting. The unavoidable leakage between transmitter and receiver means that the weak reflected signal may have to compete with the strong directly received transmission. Separation between the two must be based on parameters other than intensity.

The CW design is found in radars that emphasize velocity measurement, such as police radars or artillery muzzle velocity radars. The Doppler shift provides the means to separate the transmitted signal from the received signal. The long signal duration enables high-resolution velocity measurement. For a CW radar to be able to measure range too, the transmitted signal must be marked on the time axis. Such marking is usually implemented through periodic phase or frequency modulation. The modulation also helps to separate the target-reflected signal from the directly received signal. Still, it is important to minimize the direct reception, which is why CW radars usually use two separate antennas, a transmitting one and a receiving one.

A major advantage of CW radars is pointed out in Eq. 7, which shows that the SNR is a function of the average transmitted power during target illumination. A pulse radar needs high peak power to achieve sufficient average power, while in a CW radar the peak power is equal to the average power. This is why CW radars can use low-power transmitters based on low-voltage solid-state devices rather than high-voltage vacuum tubes.
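The peak-versus-average trade can be made concrete with a little arithmetic; the 10 W average power and the 0.1% duty cycle are illustrative assumptions, not figures from the text.

```python
# average transmitted power governs SNR (Eq. 7), so compare the peak power
# each radar type needs in order to deliver the same average power
avg_power_w = 10.0

duty_cycle = 0.001                          # pulse radar transmitting 0.1% of the time
pulse_peak_w = avg_power_w / duty_cycle     # peak must make up for the silent intervals

cw_peak_w = avg_power_w                     # CW radar transmits 100% of the time
ratio = pulse_peak_w / cw_peak_w            # peak-power advantage of the CW design
```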

URL: https://www.sciencedirect.com/science/article/pii/B012227410500973X

Noise in the Luenberger Observer

George Ellis, in Observers in Control Systems, 2002

7.1.2 Quantization and Noise

Quantization is a common source of noise in digital control systems. Quantization is the undesirable process of limiting the resolution of a continuous signal. For example, a 12-bit analog-to-digital converter (ADC) allows only 2^12 (4096) discrete values to represent a voltage. Even though the voltage input to the ADC almost always falls between these values, it will be assigned one of them; in the ideal case, the closest discrete value to the actual value. Assuming that the input (nonquantized) signal can take on any value, quantization is sometimes represented as a random noise added to the actual signal. The peak magnitude of this noise is half the resolution of the quantization process. For example, if the ADC were quantized to 0.005 V, the output of the ADC could be modeled as the actual magnitude of the input signal summed with a random noise signal with a min/max of ±0.0025 V.
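The noise model stated above can be checked directly: an ideal round-to-nearest quantizer never errs by more than half the resolution. The uniform input distribution and sample count are assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

resolution = 0.005                        # ADC step of 0.005 V, as in the text
voltages = rng.uniform(-1.0, 1.0, 10000)  # arbitrary analog input values

# ideal quantizer: assign the closest discrete value
quantized = np.round(voltages / resolution) * resolution
error = quantized - voltages              # the "random noise" of the model
```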

Quantization comes from two primary sources. First, in digital control systems, sensor output must be represented digitally. Since sensors usually monitor analog processes, the sensor output must be quantized. This may occur through standard analog-to-digital converters or through any of the myriad digital converters for specialized sensors. The second source of quantization is digital calculation: many digital calculations generate quantized output when the result is truncated. Practical limitations usually require that the results of arithmetic operations be truncated. Careful design of calculations can minimize the effects of quantization in calculations, but in most cases the effects cannot be eliminated.

URL: https://www.sciencedirect.com/science/article/pii/B9780122374722500080

Advances in Imaging and Electron Physics

Leonid P. Yaroslavsky, in Advances in Imaging and Electron Physics, 2011

1 Introduction: Sparse Sampling

Image sampling is a special case of image discretization, the very first step in digital image processing, storage, and transmission. Generally, discretization is the representation of continuous images by sets of numbers. Mathematically, it can be treated as computing image representation coefficients by means of projecting images on discretization basis functions, assuming that continuous images can be restored from the set of the representation coefficients by means of weighted summation of the reconstruction basis functions with signal representation coefficients as weights. Equations (1.1) describe these discretization and reconstruction processes:

(1.1a) $\alpha_k = \int_X a(x)\, \varphi_k^{(d)}(x)\, dx$

(1.1b) $a(x) \approx \tilde{a}(x) = \sum_{k=0}^{N-1} \alpha_k\, \varphi_k^{(r)}(x),$

where a(x) is a continuous image signal defined as a function of coordinate variable x in a domain of definition X, $\tilde{a}(x)$ is its approximation achieved by the reconstruction, $\{\alpha_k\}$ are the representation coefficients, $\{\varphi_k^{(d)}(x)\}$ and $\{\varphi_k^{(r)}(x)\}$ are sets of discretization and reconstruction basis functions, and N is the number of representation coefficients and, correspondingly, of reconstruction basis functions involved in signal reconstruction.

Discretization basis functions and their reciprocal reconstruction basis functions are physically implemented as point spread functions of discretization and reconstruction (display) devices. They should, in principle, be selected in a manner that minimizes the number of representation coefficients sufficient for image reconstruction with the desired accuracy. However, in reality technological tradition and implementation issues frequently dictate the selection of discretization and reconstruction basis functions.

Almost overwhelmingly, image discretization and display devices implement the principle of image sampling by means of shift, or sampling, basis functions $\{\varphi^{(s)}(x - k\Delta x)\}$, $\{\varphi^{(r)}(x - k\Delta x)\}$; the discrete representation of images is obtained in the form of samples $\{a_k\}$ of the images after they are prefiltered by the sampling function:

(1.2) $a_k = \int a(x)\, \varphi^{(s)}(x - k\Delta x)\, dx.$

The samples {ak } are taken at nodes {kΔx} of a regular sampling grid with interval Δx (sampling interval).

The theoretical foundation of this approach to discretization originates from the sampling theorem (Kotelnikov, 1933; Shannon, 1948). The sampling theorem in its classical formulation states that signals with a band-limited Fourier spectrum can be precisely reconstructed from their samples taken on a uniform sampling grid with a sampling interval inversely proportional to the bandwidth of the signal Fourier spectrum.

In reality, no band-limited signals exist, and the sampling theorem must be reformulated in terms of the accuracy of reconstruction of continuous signals from their samples. In such a formulation, the sampling theorem states that

The least-squares error approximation $\tilde{a}(x)$ of a signal a(x) from its samples $\{a_k\}$, taken on a uniform sampling grid with sampling interval Δx, is

(1.3) $\tilde{a}(x) = \sum_{k=-\infty}^{\infty} a_k\, \operatorname{sinc}[2\pi(x - k\Delta x)/\Delta x],$

provided that signal samples {ak } are obtained as values of a low-pass–filtered signal at sampling points:

(1.4) $a_k = \frac{1}{\Delta x} \int_{-\infty}^{\infty} a(x)\, \operatorname{sinc}[2\pi(x - k\Delta x)/\Delta x]\, dx.$

The approximation mean square error (MSE) is minimal in this case and is equal to the signal energy outside the frequency interval [−1/2Δx, 1/2Δx]:

(1.5) $\int_{-\infty}^{\infty} |a(x) - \tilde{a}(x)|^2\, dx = \int_{-\infty}^{-1/2\Delta x} |\alpha(f)|^2\, df + \int_{1/2\Delta x}^{\infty} |\alpha(f)|^2\, df = 2 \int_{1/2\Delta x}^{\infty} |\alpha(f)|^2\, df,$

where

(1.6) $\alpha(f) = \int_{-\infty}^{\infty} a(x)\, \exp(-i2\pi f x)\, dx$

is the signal Fourier spectrum and f is frequency.

The sinc function sinc(x) = sin x/x in Eqs. (1.3) and (1.4) has a uniform spectrum within the frequency interval [−1/2Δx, 1/2Δx] and is a point spread function of the ideal low-pass filter

(1.7) $\operatorname{sinc}(2\pi x/\Delta x) = \Delta x \int_{-1/2\Delta x}^{1/2\Delta x} \exp(i2\pi f x)\, df.$

Fourier spectra of images usually decay quite rapidly with frequency f. However, high-frequency spectral components carry information highly important for image analysis, object detection, and recognition that cannot be neglected, despite the fact that their contribution to the signal energy $\int |a(x)|^2\, dx = \int |\alpha(f)|^2\, df$ is relatively small. For this reason, the sampling interval Δx must be sufficiently small to preserve image-essential high frequencies. As a consequence, image representation by samples is frequently quite redundant because the samples are highly correlated. This means that, in principle, far fewer samples would be sufficient for image reconstruction if the reconstruction could be done in a more sophisticated manner than the conventional weighted summation according to Eq. (1.1b).
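The sinc-series reconstruction of Eq. (1.3) can be tried numerically. Note that `np.sinc` uses the normalized convention sinc(u) = sin(πu)/(πu), which is the ideal low-pass interpolation kernel for the band [−1/2Δx, 1/2Δx]; the test tone, the truncation to a finite sum, and the evaluation point are assumptions of this sketch.

```python
import numpy as np

dx = 1.0                      # sampling interval
k = np.arange(-100, 101)      # a finite (truncated) set of sample indices

f0 = 0.1                      # band-limited test tone, f0 < 1/(2*dx)
a_k = np.sin(2 * np.pi * f0 * k * dx)

def reconstruct(x):
    """Truncated sinc-series reconstruction in the spirit of Eq. (1.3);
    np.sinc((x - k*dx)/dx) is the ideal low-pass kernel for this band."""
    return np.sum(a_k * np.sinc((x - k * dx) / dx))

# evaluate between the sample points, near the center of the sample window
x_test = 0.5
approx = reconstruct(x_test)
exact = np.sin(2 * np.pi * f0 * x_test)
```

The small residual error comes entirely from truncating the infinite series of Eq. (1.3) to a finite window; at the sample points themselves the series is exact because the kernel vanishes at the other nodes.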

Apart from the general desire to reduce the number of image samples required for image storage and transmission, there are many real applications, where, contrary to the common practice of uniform sampling, sampled data are collected in an irregular fashion. The following are some typical instances:

Samples are taken not where the regular sampling grid dictates them to be taken but where it is feasible because of technical or other limitations.

The pattern of sample disposition is dictated by the physical principles of the measuring device (e.g., in interferometry or the moiré technique, where samples are taken along level lines).

The sampling device and sample positioning are jittery as a result of camera or object vibrations or other irregularities, such as imaging through a turbulent medium.

Some samples of the regular sampling grid are lost or unavailable due to losses in communication channels.

Because display devices, sound synthesizers, and other devices for reconstruction of continuous signals from their samples, as well as computer software for processing sampling data, assume the customary use of a regular uniform sampling grid, in all these cases it is necessary to convert irregularly sampled images to regularly sampled ones. Generally, the corresponding regular sampling grid must contain more samples than are available, because the coordinates of the positions of available samples might be known with subpixel accuracy—that is, with accuracy (in units of image size) better than 1/K, where K is the number of available pixels.

The task of converting an irregular sampling grid into a regular one is obviously a special case of the image resampling task, which can be solved by various methods of numerical interpolation of sampled data. Several approaches can be used. One approach is purely empirical and is based on simplistic numerical interpolation procedures, such as interpolation by means of a weighted summation of known samples in the close vicinity of the sought samples, with weights inversely proportional to the distances between them. A review of these methods can be found in Lodha and Franke (1997). Although such an approach meets some practical needs, it is lacking in signal restoration accuracy and optimality.

A more substantiated approach is based on generalizations of the sampling theory to nonuniform sampling. In this approach, it is assumed that the sampled continuous signals belong to a certain approximation subspace M (e.g., subspaces of band-limited signals or spline subspaces) of the parent Hilbert space (usually the L2 Hilbert space of finite-energy functions), with the requisite that the interpolation procedure must determine a continuous signal satisfying two constraints: (1) the interpolated signal must belong to the subspace M, and (2) its available samples must be preserved. Conditions for the existence and uniqueness of the solution depend on the signal model (the underlying approximation subspace) and the set of given samples. For the band-limited case, Landau (1967) proved that a necessary and sufficient condition for the unique reconstruction of a continuous band-limited one-dimensional (1D) signal with bandwidth W from its irregularly spaced samples is that the average density of its samples should exceed the Nyquist rate, i.e., one sample per interval of 1/bandwidth. It was also shown that this condition is necessary for D-dimensional signals with band-limited Fourier spectra. A comprehensive exposition of this approach can be found in Marvasti (2001).

An alternative approximation model is associated with spline subspaces (Unser, 1999). However, due to the localized nature of splines, their use in recovering large gaps in data is limited. A practical numerical algorithm for the interpolation and approximation of two-dimensional (2D) signals, based on multilevel B-splines, was suggested by Lee, Wolberg, and Shin (1997). A similar spline-based algorithm, which uses nonuniform splines for interpolation, was suggested by Margolis and Eldar (2004).

All the aforementioned methods are oriented toward the approximation of continuous signals, specified by their sparse samples. Some publications also consider discrete models. However, those publications treat only various special cases. Ferreira (2001) considers discrete signal recovery from sparse data with the assumption of signal band limitation in the discrete Fourier transform (DFT) domain. Hasan and Marvasti (2001) suggested error detection coding as a method to recover discrete signals with missing data during data transmission. For signal recovery, they suggested using the discrete cosine transform (DCT) domain band-limitation assumption.

In this chapter, we suggest a general framework for the recovery, from a given set of arbitrarily taken samples, of discrete signals that originate from continuous signals. We treat this problem as an approximation task under the following assumptions:

• Continuous signals are to be represented in computers by their samples taken at some of, say, N nodes of a regular uniform sampling grid.

• It is assumed that if all N samples were known, they would be sufficient to represent the continuous signal.

• Only K < N samples of the signal are available.

• The goal of the processing is to generate, from this incomplete set of K samples, a complete set of N signal samples that approximates, as accurately as possible in terms of mean squared error (MSE), the discrete signal that would be obtained if the continuous signal it is intended to represent were densely sampled at all N positions.
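Under these assumptions, recovery reduces to solving a small linear system once a "band-limiting" basis is chosen. A minimal numpy sketch, assuming for illustration only that the band limitation is in the DCT domain and that the indices of the nonzero coefficients are known (the general framework admits any orthogonal basis):

```python
import numpy as np

def dct_basis(N):
    """Orthonormal DCT-II basis; row k is the k-th basis vector."""
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    T = np.sqrt(2.0 / N) * np.cos(np.pi * k * (n + 0.5) / N)
    T[0] /= np.sqrt(2.0)
    return T

def recover_discrete(samples, idx, N, band):
    """Recover all N samples of a signal whose DCT spectrum is nonzero
    only at the indices in `band`, from K = len(idx) >= len(band)
    samples taken at positions `idx` (least-squares solution)."""
    T = dct_basis(N)
    A = T[np.ix_(band, idx)].T           # K x |band| system: samples = A @ coeffs
    coeffs, *_ = np.linalg.lstsq(A, np.asarray(samples, dtype=float), rcond=None)
    return T[band].T @ coeffs            # the complete set of N samples
```

When the K sample positions keep the K x |band| matrix full rank, the recovery is exact for signals that truly satisfy the band-limitation assumption; otherwise the least-squares solution gives the minimum-MSE approximation within the chosen subspace.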

The mathematical foundation of the framework is provided by the discrete sampling theorem for "band-limited" discrete signals, that is, signals that have only a few nonzero coefficients in their representation over a certain orthogonal basis. This theorem is introduced in Section 2. The rest of the chapter is organized as follows. Section 3 describes algorithms for minimum-MSE signal recovery from sparsely sampled data. In Section 4, properties of transforms that are specifically relevant to signal recovery from sparse data are analyzed, and experimental illustrations of the precise reconstruction of band-limited signals from sparse data are provided. Section 5 discusses the energy compaction capability of transforms (their ability to concentrate image energy in a small number of transform coefficients) and illustrates it for such widely used transforms as the DFT, DCT, Walsh transform, and Haar transform. Section 6 addresses application issues and illustrates the discrete sampling theorem-based methodology of discrete signal recovery with examples of image super-resolution from multiple frames and image recovery from sparse projection data. Finally, Section 7 formulates the discrete uncertainty principle and demonstrates the existence of discrete signals that are sharply limited both in their extent and in their bandwidth in the domain of a transform.

URL:

https://www.sciencedirect.com/science/article/pii/B9780123859853000055

GEAR DIAGNOSTICS

C.J. Li , in Encyclopedia of Vibration, 2001

Wavelet transformation

The WT can also be considered a time–frequency distribution. However, it is not as computationally costly as, say, the Choi–Williams distribution (CWD). For a continuous signal x(t), the WT is defined as:

(8) W_x(a, b) = ∫ g*_(a,b)(t) x(t) dt

where * denotes the complex conjugate and g(t) represents the mother wavelet, e.g.:

g(t) = exp(−σt) sin(ω0 t) for t ≥ 0 and g(t) = −g(−t) for t < 0

and:

(9) g_(a,b)(t) = (1/√a) g((t − b)/a)

where a is the dilation parameter which defines a baby wavelet for a given value, and b is the shifting parameter.

For a given a, carrying out the WT over a range of b is like passing the signal through a filter whose impulse response is defined by the baby wavelet. Therefore, one may consider the WT a bank of band-pass filters defined by a number of values of a. The salient characteristic of the WT is that the width of the passband of each filter is frequency-dependent. Consequently, the WT can provide good frequency resolution at the low-frequency end while maintaining good time localization at the high-frequency end.
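A direct numerical discretization of eqns (8) and (9) makes the filter-bank view concrete. The sketch below uses the damped-sinusoid mother wavelet given above; the parameter values σ = 5, ω0 = 2π·10 and the sampling grid are our illustrative assumptions:

```python
import numpy as np

SIGMA, W0 = 5.0, 2 * np.pi * 10          # assumed wavelet parameters

def mother_wavelet(t):
    """Damped sinusoid: exp(-sigma*t)*sin(w0*t) for t >= 0, extended as
    an odd function, g(t) = -g(-t), for t < 0 (one closed form covers
    both branches because sin is odd)."""
    return np.exp(-SIGMA * np.abs(t)) * np.sin(W0 * t)

def wavelet_transform(x, t, a_values, b_values):
    """Direct discretization of eqns (8) and (9); the mother wavelet is
    real, so its complex conjugate is the wavelet itself.  Each dilation
    a acts as one band-pass filter of the filter bank."""
    dt = t[1] - t[0]
    W = np.empty((len(a_values), len(b_values)))
    for i, a in enumerate(a_values):
        for j, b in enumerate(b_values):
            baby = mother_wavelet((t - b) / a) / np.sqrt(a)   # g_(a,b)(t)
            W[i, j] = np.sum(baby * x) * dt                   # eqn (8)
    return W
```

Applied to a tone at ω0, the row computed at a = 1 (passband centered on ω0) responds much more strongly than a row computed at a larger dilation, which is exactly the band-pass filter-bank behavior described above.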

Experimental evaluations have shown that the WT is able to detect and trend the impacts associated with faulty teeth in an experiment using spur gears and seeded faults. However, it is not clear whether the WT would be effective in detecting the early stages of more gradual tooth faults (e.g., a small tooth crack) in smoother-running gears such as helical gears, which produce only a modest amount of modulation rather than impacts. The difference between the WT and the CWD is that the former provides better time localization of the impacts, whereas the latter provides better frequency resolution at high frequencies, albeit at a higher computational cost.

URL:

https://www.sciencedirect.com/science/article/pii/B0122270851000989