Nyquist–Shannon sampling theorem
From Wikipedia, the free encyclopedia
The Nyquist–Shannon sampling theorem is a fundamental result in the field of information theory, in particular telecommunications and signal processing. The theorem is commonly called the Shannon sampling theorem, and is also known by various combinations of its discoverers' names (the Nyquist–Shannon–Kotelnikov, Whittaker–Shannon–Kotelnikov, or Whittaker–Nyquist–Kotelnikov–Shannon (WKS) sampling theorem), as well as the Cardinal Theorem of Interpolation Theory. It is often referred to as simply the sampling theorem. See the historical background section below.
Sampling is the process of converting a signal (for example, a function of continuous time or space) into a numeric sequence (a function of discrete time or space).
The theorem states that
“  Exact reconstruction of a continuous-time baseband signal from its samples is possible if the signal is bandlimited and the sampling frequency is greater than twice the signal bandwidth.  ”
The theorem also leads to a formula for the reconstruction. The assumptions necessary to prove the theorem form a mathematical model that is only an idealized approximation, at best, to any realistic situation. The conclusion, that perfect reconstruction is possible, is mathematically correct for the model, but only an approximation for actual signals and actual sampling techniques.
Introduction
A signal or function is bandlimited if it contains no energy at frequencies higher than some bandlimit or bandwidth B. A signal that is bandlimited is constrained in terms of how rapidly it changes in time, and therefore how much detail it can convey, in between discrete instants of time. The sampling theorem means that the uniformly spaced discrete samples are a complete representation of the signal if this bandwidth is less than half the sampling rate.
To formalize these concepts, let x(t) represent a continuous-time signal and let X(f) be the continuous Fourier transform of that signal (which exists if x(t) is square-integrable):

    X(f) = ∫_{−∞}^{∞} x(t) e^{−i2πft} dt

The signal x(t) is bandlimited to a one-sided baseband bandwidth B if:

    X(f) = 0   for all |f| > B

Then the condition for exact reconstructability from samples at a uniform sampling rate f_{s} (in samples per unit time) is:

    f_{s} > 2B

or equivalently:

    B < f_{s}/2

2B is called the Nyquist rate and is a property of the bandlimited signal, while f_{s}/2 is called the Nyquist frequency and is a property of this sampling system.

The time between successive samples is referred to as the sampling interval

    T = 1/f_{s}

and the samples of x(t) are denoted by:

    x[n] = x(nT),   n ∈ ℤ (integers)
The sampling theorem leads to a procedure for reconstructing the original x(t) from the samples x[n], and states sufficient conditions for such a reconstruction to be exact.
The sampling process
From a signal processing perspective, the theorem describes two processes: a sampling process, in which a continuous-time signal is converted to a discrete-time signal, and a reconstruction process, in which the continuous signal is recovered from the discrete signal.
The continuous signal varies over time (or space in a digitized image, or another independent variable in some other application), and the sampling process is performed by measuring the continuous signal's value every T units of time (or space); T is called the sampling interval. In practice, for signals that are a function of time, the sampling interval is typically quite small, on the order of milliseconds or microseconds or less. Sampling results in a sequence of numbers, called samples, which represent the original signal. Each sample is associated with the specific point in time where it was measured. The reciprocal of the sampling interval, 1/T, is the sampling frequency, f_{s}, measured in samples per unit time. If T is expressed in seconds, then f_{s} is expressed in Hz.
The reconstruction process is an interpolation process that mathematically defines a continuous-time signal, x(t), from the discrete samples x[n] and at times in between the sample instants nT.
 The procedure: Each sample is multiplied by the sinc function, scaled so that the zero-crossings of the sinc function occur at the sampling instants and so that the sinc function's central point is shifted to the time of that sample, nT. All of these shifted and scaled functions are then added together to recover the original signal. The scaled and time-shifted sinc functions are continuous, making the sum of these also continuous, so the result of this operation is indeed a continuous signal. This procedure is represented by the Whittaker–Shannon interpolation formula.
 The condition: The signal obtained from this reconstruction process will have no frequencies higher than one-half the sampling frequency. This reconstructed signal will match the original signal if the original signal contains no frequencies at or above half the sampling frequency; that is, if the sampling frequency exceeds twice the highest frequency in the original signal. This condition is called the Nyquist criterion, or sometimes the Raabe condition.
Note that if the original signal contains a frequency component exactly equal to one-half the sampling rate, this condition is not satisfied, and the resulting reconstructed signal may have a component at that frequency whose amplitude and phase will not, in general, match those of the original component.
This reconstruction or interpolation using sinc functions is not the only interpolation scheme, and indeed, is practically impossible because it requires summing an infinite number of terms. However, it is the interpolation method that exactly reconstructs any given bandlimited x(t) with any bandlimit B<1/(2T); any other method that does so is formally equivalent to it.
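To make the interpolation concrete, here is a minimal numerical sketch of the Whittaker–Shannon formula (not part of the original article; it assumes NumPy, and because it sums a finite number of terms it exhibits exactly the truncation error that makes the ideal reconstruction impractical):

```python
import numpy as np

def sinc_reconstruct(samples, T, t):
    """Whittaker-Shannon interpolation: a sum of sinc functions, one per sample,
    each scaled so its zero-crossings land on the other sampling instants."""
    n = np.arange(len(samples))
    # np.sinc is the normalized sinc: sin(pi*x)/(pi*x)
    return np.array([np.sum(samples * np.sinc((ti - n * T) / T)) for ti in t])

# Example: a 3 Hz sinusoid sampled at fs = 10 Hz (above the 6 Hz Nyquist rate)
fs = 10.0
T = 1.0 / fs
n = np.arange(200)                        # 20 seconds of samples
x_samples = np.sin(2 * np.pi * 3.0 * n * T)
t = np.linspace(5.0, 15.0, 50)            # evaluate away from the truncated edges
x_hat = sinc_reconstruct(x_samples, T, t)
x_true = np.sin(2 * np.pi * 3.0 * t)
max_err = np.max(np.abs(x_hat - x_true))  # small but nonzero: truncation error
```

Because the sum is truncated to 200 terms rather than infinitely many, the recovery is only approximate; the error shrinks as more samples on each side of the evaluation interval are included.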
Practical considerations
A few consequences can be drawn from the theorem:
 If it is known that the signal which we sample has a certain highest frequency B, the theorem gives us a lower bound on the sampling frequency to assure perfect reconstruction. This lower bound to the sampling frequency, 2B, is called the Nyquist rate.
 If instead the sampling frequency is known, the theorem gives us an upper bound for frequency components, B<f_{s}/2, of the signal to allow for perfect reconstruction. This upper bound is the Nyquist frequency, denoted f_{N}.
 Both of these cases imply that the signal to be sampled must be bandlimited; that is, any component of this signal which has a frequency above a certain bound should be zero, or at least sufficiently close to zero that its influence on the resulting reconstruction can be neglected. In the first case, the condition of bandlimitation of the sampled signal can be accomplished by assuming a model of the signal which can be analysed in terms of the frequency components it contains; for example, sounds that are made by a speaking human normally contain very small frequency components at or above 10 kHz, and it is then sufficient to sample such an audio signal with a sampling frequency of at least 20 kHz. For the second case, we must ensure that the sampled signal is bandlimited such that frequency components at or above half of the sampling frequency can be neglected. This is usually accomplished by means of a suitable lowpass filter; for example, if it is desired to sample speech waveforms at 8 kHz, the signals should first be lowpass filtered to below 4 kHz.
 In practice, neither of the two statements of the sampling theorem described above can be completely satisfied, and neither can the reconstruction formula be precisely implemented. The reconstruction process that involves scaled and delayed sinc functions can be described as ideal. It cannot be realized in practice, since it implies that each sample contributes to the reconstructed signal at almost all time points, requiring summing an infinite number of terms. Instead, some type of approximation of the sinc functions, finite in length, has to be used. The error that corresponds to the sinc-function approximation is referred to as the interpolation error. Practical digital-to-analog converters produce neither scaled and delayed sinc functions nor ideal impulses (which, if ideally lowpass filtered, would yield the original signal), but a sequence of scaled and delayed rectangular pulses. This practical piecewise-constant output can be modeled as a zero-order hold filter driven by the sequence of scaled and delayed Dirac impulses referred to in the mathematical basis section below. A shaping filter is sometimes used after the DAC with zero-order hold to make a better overall approximation.
 Furthermore, in practice, a sampled signal that is "time-limited", or of finite length, can never be fully bandlimited. This means that even if an ideal reconstruction could be made, the reconstructed signal would not be exactly the original signal. The error that corresponds to this failure of bandlimitation is referred to as aliasing.
 The sampling theorem does not say what happens when the conditions and procedures are not exactly met, but its proof suggests an analytical framework in which the nonideality can be studied. A designer of a system that deals with sampling and reconstruction processes needs a thorough understanding of the signal to be sampled, in particular its frequency content, the sampling frequency, how the signal is reconstructed in terms of interpolation, and the requirement for the total reconstruction error, including aliasing and interpolation error. These properties and parameters may need to be carefully tuned in order to obtain a useful system.
Aliasing
If the sampling condition is not satisfied, then frequencies will overlap; that is, frequencies above half the sampling rate will be reconstructed as, and appear as, frequencies below half the sampling rate. The resulting distortion is called aliasing; the reconstructed signal is said to be an alias of the original signal, in the sense that it has the same set of sample values.
For a sinusoidal component of exactly half the sampling frequency, the component will in general alias to another sinusoid of the same frequency, but with a different phase and amplitude.
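This folding is easy to verify numerically. The following sketch (assuming NumPy; it is not from the original article) shows that a 7 Hz sinusoid sampled at 10 Hz produces exactly the same sample set as the negative 3 Hz sinusoid it folds onto:

```python
import numpy as np

# A 7 Hz sinusoid sampled at fs = 10 Hz is indistinguishable from a 3 Hz
# sinusoid: 7 Hz lies above fs/2 = 5 Hz and folds to 7 - 10 = -3 Hz.
fs = 10.0
n = np.arange(32)
x_high = np.sin(2 * np.pi * 7.0 * n / fs)       # what was actually sampled
x_alias = np.sin(2 * np.pi * (-3.0) * n / fs)   # what the samples look like
same = np.allclose(x_high, x_alias)             # True: identical sample sets
```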
To prevent or reduce aliasing, two things can be done:
 Increase the sampling rate, to above twice some or all of the frequencies that are aliasing.
 Introduce an antialiasing filter or make the antialiasing filter more stringent.
The purpose of the antialiasing filter is to restrict the bandwidth of the signal to satisfy the condition for proper sampling. Such a restriction works in theory, but is not precisely satisfiable in reality, because realizable filters always allow some leakage of high frequencies. However, the leakage energy can be made small enough that the aliasing effects are negligible.
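As an illustration of the leakage point, here is a discrete-time stand-in for an antialiasing filter: a windowed-sinc lowpass FIR that attenuates, but does not perfectly remove, out-of-band energy (a sketch assuming NumPy; the tap count and Hamming window are arbitrary illustrative choices, not from the article):

```python
import numpy as np

def lowpass_fir(cutoff, numtaps=101):
    """Windowed-sinc lowpass FIR; cutoff is a fraction of the sample rate."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n)   # truncated ideal lowpass
    h *= np.hamming(numtaps)                   # window to tame the truncation
    return h / h.sum()                         # unity gain at zero frequency

fs = 1000.0
t = np.arange(2000) / fs
x = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 400 * t)  # wanted + junk
h = lowpass_fir(100.0 / fs)            # keep content below ~100 Hz
y = np.convolve(x, h, mode="same")     # 50 Hz survives; 400 Hz leaks only slightly
```

A Hamming-windowed design like this leaves roughly -53 dB of stopband leakage: small but nonzero, matching the statement that the leakage can only be made negligible, not eliminated.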
Application to multivariable signals and images
The sampling theorem is usually formulated for functions of a single variable. Consequently, the theorem is directly applicable to time-dependent signals and is normally formulated in that context. However, the sampling theorem can be extended in a straightforward way to functions of arbitrarily many variables. Grayscale images, for example, are often represented as two-dimensional arrays (or matrices) of real numbers representing the relative intensities of pixels (picture elements) located at the intersections of row and column sample locations. As a result, images require two independent variables, or indices, to specify each pixel uniquely: one for the row and one for the column.
Color images typically consist of a composite of three separate grayscale images, one to represent each of the three primary colors (red, green, and blue, or RGB for short). Other color spaces using 3-vectors for colors include HSV, LAB, XYZ, etc. Some color spaces, such as cyan, magenta, yellow, and black (CMYK), may represent color by four dimensions. All of these are treated as vector-valued functions over a two-dimensional sampled domain.
As with one-dimensional discrete-time signals, images can also suffer from aliasing if the sampling resolution, or pixel density, is inadequate. For example, a digital photograph of a striped shirt with high spatial frequencies (that is, closely spaced stripes) can exhibit aliasing when it is sampled by the camera's image sensor. The aliasing appears as a Moiré pattern. The remedy in this case is higher sampling in the spatial domain: move closer to the shirt or use a higher-resolution sensor.
Another example is shown to the right in the brick patterns. The top image shows the effects when the sampling theorem's condition is not satisfied. When software rescales an image (the same process that creates the thumbnail shown in the lower image) it, in effect, runs the image through a lowpass filter first and then downsamples the image to result in a smaller image that does not exhibit the Moiré pattern. The top image is what happens when the image is downsampled without lowpass filtering: aliasing results.
The top image was created by zooming out in GIMP and then taking a screenshot of it. The likely reason that this causes a banding problem is that the zooming feature simply downsamples without lowpass filtering (probably for performance reasons) since the zoomed image is for onscreen display instead of printing or saving.
The application of the sampling theorem to images should be made with care. For example, the sampling process in any standard image sensor (CCD or CMOS camera) is relatively far from the ideal sampling which would measure the image intensity at a single point. Instead these devices have a relatively large sensor area at each sample point in order to obtain a sufficient amount of light. Nor is it obvious that the analog image intensity function sampled by the sensor is bandlimited. Note, however, that the nonideal sampling is itself a type of lowpass filtering, although far from one that ideally removes high-frequency components. Despite these departures from the theorem's assumptions, the theorem can be used to describe the basics of downsampling and upsampling of images.
Downsampling
When a signal is downsampled, the sampling theorem can be invoked via the artifice of resampling a hypothetical continuoustime reconstruction. The Nyquist criterion must still be satisfied with respect to the new lower sampling frequency in order to avoid aliasing. To meet the requirements of the theorem, the signal must usually pass through a lowpass filter of appropriate cutoff frequency as part of the downsampling operation. This lowpass filter, which prevents aliasing, is called an antialiasing filter.
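A two-fold downsampler can be sketched as follows (assuming NumPy; the filter length and window are illustrative choices, not from the article): lowpass-filter to below the new Nyquist frequency, then discard every other sample:

```python
import numpy as np

def decimate2(x, numtaps=101):
    """Halve the sample rate: antialiasing lowpass at 1/4 of the old rate
    (the new Nyquist frequency), then keep every second sample."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = 0.5 * np.sinc(0.5 * n) * np.hamming(numtaps)  # cutoff at 0.25 * fs
    h /= h.sum()                                      # unity gain at DC
    return np.convolve(x, h, mode="same")[::2]

x = np.sin(2 * np.pi * 0.05 * np.arange(4000))  # tone at 0.05 cycles/sample
y = decimate2(x)                                # ~0.10 cycles/sample at new rate
```

A tone above the new Nyquist frequency (say 0.4 cycles/sample, which would otherwise fold to 0.2 after decimation) is removed by the filter rather than aliased.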
Critical frequency
The Nyquist rate is defined as twice the bandwidth of the continuous-time signal. The sampling frequency must be strictly greater than the Nyquist rate of the signal to achieve unambiguous representation of the signal. This constraint is equivalent to requiring that the system's Nyquist frequency (also known as the critical frequency, and equal to half the sample rate) be strictly greater than the bandwidth of the signal. If the signal contains a frequency component at precisely the Nyquist frequency, then the corresponding component of the sample values cannot contain sufficient information to reconstruct the Nyquist-frequency component in the continuous-time signal, because of phase ambiguity. In such a case, there is an infinite number of different sinusoids (of varying amplitude and phase) that are consistent with the discrete samples.
As an example, consider this family of signals at the critical frequency:

    x(t) = cos(π f_{s} t + θ) / cos(θ)

The samples

    x[n] = x(nT) = cos(π n + θ) / cos(θ) = (−1)^{n}

are in every case just alternating −1 and +1, for any phase θ with cos(θ) ≠ 0. There is no way to determine either the amplitude or the phase of the continuous-time sinusoid x(t) that x[n] was sampled from. This ambiguity is the reason for the strict inequality of the sampling theorem's condition.
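Numerically (a sketch assuming NumPy, not from the article), every member of this family produces exactly the same sample sequence:

```python
import numpy as np

# Critical-frequency ambiguity: for any phase theta (with cos(theta) != 0),
# x(t) = cos(pi*fs*t + theta)/cos(theta) sampled at t = n*T gives (-1)**n.
fs = 1.0
T = 1.0 / fs
n = np.arange(16)
thetas = (0.0, 0.4, 1.0)
sample_rows = [np.cos(np.pi * fs * n * T + th) / np.cos(th) for th in thetas]
# every row is the same alternating +1, -1 sequence, so the samples cannot
# distinguish the different amplitudes 1/cos(theta) and phases theta
```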
Mathematical basis for the theorem
The Nyquist–Shannon sampling theorem states that, given a bandlimited continuoustime signal x(t) that is uniformly sampled at a sufficient rate, even if all of the information in the signal between samples is discarded, there remains sufficient information in the samples that the original continuoustime signal can be mathematically reconstructed perfectly from only those discrete samples. To prove this, a different function is first constructed, conceptually, from the whole original signal, but preserving information from just the sample instants:
 x(t) is the original continuous-time signal.
 x_{s}(t) = T·Δ_{T}(t)·x(t) is a function that depends only on the values of x(t) at discrete moments of time.
 Δ_{T}(t) is the sampling operator called the Dirac comb and, being periodic with period T, can be formally expressed as a Fourier series:

    Δ_{T}(t) = Σ_{n=−∞}^{∞} δ(t − nT) = (1/T) Σ_{k=−∞}^{∞} e^{i2πk f_{s} t}

 f_{s} = 1/T is the sampling frequency and is the fundamental frequency of the periodic function Δ_{T}(t).
 δ(t − nT) is a Dirac impulse delayed to time nT.
 The (implied) limit in the Fourier summation is not in the pointwise sense but in the sense of tempered distributions; see also Dirichlet kernel.

Since the Dirac impulse is zero except where its argument is zero, Δ_{T}(t) takes the value zero except at the sampling instants nT, for integer n. Therefore x_{s}(t) is also zero for all t except the sampling instants nT. Multiplying x(t) by Δ_{T}(t) effectively discards all of the information between sampling instants and retains information only at the sampling instants nT. x_{s}(t) can be represented in terms of the samples:

    x_{s}(t) = T Σ_{n=−∞}^{∞} x[n] δ(t − nT)

where x[n] = x(nT) are the samples. The sequence of sample impulses x_{s}(t) can also be written in terms of the Fourier series of the Dirac comb:

    x_{s}(t) = Σ_{k=−∞}^{∞} x(t) e^{i2πk f_{s} t}
Using the frequency-shifting property of the continuous Fourier transform, the transform of this sum is

    X_{s}(f) = Σ_{k=−∞}^{∞} X(f − k f_{s})

where X(f) is the Fourier transform of x(t). This says that the spectrum of the baseband signal being sampled is shifted and repeated forever at integer multiples of the sampling frequency, f_{s}. These repeated copies are called images of the original signal spectrum.
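The repetition of the spectrum can be seen in a small discrete experiment (a sketch assuming NumPy, using zero-stuffing as a stand-in for the Dirac-comb model; not from the original article):

```python
import numpy as np

# Zero-stuffing a length-32 sequence into a length-128 impulse train is a
# discrete stand-in for multiplying by a Dirac comb: the DFT of the result
# repeats the original 32-point spectrum four times across the band.
x = np.sin(2 * np.pi * 4 * np.arange(32) / 32)  # tone: 4 cycles in 32 samples
up = np.zeros(128)
up[::4] = x                                     # impulse train, 3 zeros between samples
X = np.abs(np.fft.rfft(up))
peaks = np.argsort(X)[-4:]                      # bins 4, 28, 36, 60: the baseband
                                                # peak pair plus its repeated images
```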
Now constrain x(t) to be bandlimited to B (that is, X(f) = 0 for all |f| > B), and consider what condition precludes overlapping of the adjacent images X(f − k f_{s}):

    k f_{s} + B < (k+1) f_{s} − B

where k f_{s} + B is the right edge of the k-th image of X(f) and (k+1) f_{s} − B is the left edge of the (k+1)-th image.
With that condition satisfied, there is no overlap of images in X_{s}(f), and X(f) (and thus x(t)) can be reconstructed from X_{s}(f) (or x_{s}(t)) by lowpass filtering out all of the images of X(f) in X_{s}(f) except for the original image at the baseband. To do that, f_{s} > 2B (to prevent overlap) and the frequency response of the reconstruction filter H(f) must be:

    H(f) = 1   for |f| ≤ B
    H(f) = 0   for |f| ≥ f_{s} − B

The reconstruction lowpass filter's transition band is between B and f_{s} − B, and the filter response need not be precisely defined in that region (since there is no nonzero spectrum there). However, the worst case is when the bandwidth B is virtually as large as the Nyquist frequency f_{s}/2, and in that worst case the reconstruction filter H(f) must be:

    H(f) = rect(f / f_{s})

where rect(·) is the rectangular function.
With H(f) so defined, it is clear that

    X(f) = H(f)·X_{s}(f)

and the spectrum of the original signal that was sampled, X(f), is recovered from the spectrum of the sampled signal, X_{s}(f). This means, in the time domain, that the original signal that was sampled, x(t), is recovered from the sampled signal, x_{s}(t).
This completes the proof of the Nyquist–Shannon sampling theorem: if the sampling frequency, f_{s}, is strictly greater than twice the bandwidth, B, of the continuous-time baseband signal, x(t), then no information is lost (or "aliased"). Following Whittaker, Shannon, and most later expositors, the reconstruction that bypasses all the frequency-domain math and specifies the reconstruction of the original signal directly from its samples is now given.
The impulse response of the reconstruction filter is the inverse Fourier transform of H(f):

    h(t) = f_{s} sinc(f_{s} t) = (1/T) sinc(t/T)

in terms of the normalized sinc function, sinc(x) = sin(πx)/(πx).
This function is the impulse response of the reconstruction filter whose input is the sampled signal x_{s}(t), which is just a collection of Dirac impulses, δ(t − nT), each delayed to the time of its sampling instant, nT, and weighted by a value proportional to the value of the continuous-time signal that was sampled at that instant, x[n] = x(nT). Since the reconstruction filter is a linear, time-invariant system, each impulse at time nT generates its own impulse response delayed to the same time, and the output of the reconstruction filter is the sum of the outputs driven by each weighted impulse separately. For each input impulse, the component of the output is the impulse response delayed to the time of that input impulse, h(t − nT), and weighted by the coefficient attached to that input impulse, T·x[n]. That is, the output of the reconstruction filter is:

    x(t) = h(t) ∗ x_{s}(t) = Σ_{n=−∞}^{∞} T·x[n]·h(t − nT) = Σ_{n=−∞}^{∞} x[n]·sinc((t − nT)/T)

where ∗ is the convolution operator.
This shows explicitly how the samples x[n] are combined to reconstruct the original function x(t). This completes the reconstruction formula derivation.
Concise summary of the mathematical proof
There is no actual device that produces the infinite-valued samples implied by the Dirac comb model of sampling. The finite-valued samples, x[n], are not a function of continuous time, so their Fourier transform is undefined. To use that analysis tool, a continuous-time function is contrived conceptually (not actually or numerically) by using the samples to modulate the "teeth" of a Dirac comb function. This modulated comb does have a continuous-time Fourier transform (not within the strict definition that requires square-integrable functions, but in the generalization that allows Schwartz distributions, in the case of the original signal being square-integrable).
The transform of the (virtual) modulated comb, X_{s}(f), is related to the transform of the physical waveform, X(f), via a superposition of shifted copies (which is equivalent to convolution with a frequency-domain Dirac comb); this superposition viewpoint leads to an understanding of aliasing and ways to mitigate it. When the shifted copies do not overlap, the original spectrum can be extracted by lowpass filtering, giving back the original signal.
The Fourier transform view also reveals that the sample rate can be higher than twice the highest frequency, with no ill effect, and even leaving room for a transition band in which the transfer function of the reconstruction filter is free to take intermediate values. Undersampling, which causes aliasing, is not in general a reversible operation. Oversampling may be inefficient or wasteful, but it is also reversible, meaning that no information is lost.
Shannon's original proof
The original proof presented by Shannon is elegant and quite brief, but it offers less intuitive insight into the subtleties of aliasing, both unintentional and intentional. Quoting Shannon's original paper, which uses f for the function, F for the spectrum, and W for the bandwidth limit:
 Let F(ω) be the spectrum of f(t). Then

    f(t) = (1/2π) ∫_{−∞}^{∞} F(ω) e^{iωt} dω = (1/2π) ∫_{−2πW}^{2πW} F(ω) e^{iωt} dω

 since F(ω) is assumed to be zero outside the band W. If we let

    t = n / (2W)

 where n is any positive or negative integer, we obtain

    f(n/2W) = (1/2π) ∫_{−2πW}^{2πW} F(ω) e^{iωn/(2W)} dω

 On the left are values of f(t) at the sampling points. The integral on the right will be recognized as essentially the nth coefficient in a Fourier-series expansion of the function F(ω), taking the interval –W to W as a fundamental period. This means that the values of the samples f(n/2W) determine the Fourier coefficients in the series expansion of F(ω). Thus they determine F(ω), since F(ω) is zero for frequencies greater than W, and for lower frequencies F(ω) is determined if its Fourier coefficients are determined. But F(ω) determines the original function f(t) completely, since a function is determined if its spectrum is known. Therefore the original samples determine the function f(t) completely.
Shannon's proof of the theorem is complete at that point, but he goes on to discuss reconstruction via sinc functions, what we now call the Whittaker–Shannon interpolation formula as discussed above. He does not derive or prove the properties of the sinc function, but these would have been familiar to engineers reading his works at the time, since the Fourier pair relationship between rect and sinc was well known. Quoting Shannon:
 Let x_{n} be the nth sample. Then the function f(t) is represented by:

    f(t) = Σ_{n=−∞}^{∞} x_{n} · sin(π(2Wt − n)) / (π(2Wt − n))
As in the other proof, the existence of the Fourier transform of the original signal is assumed, so the proof does not say whether the sampling theorem extends to bandlimited stationary random processes.
Sampling of non-baseband signals
For sampling a nonbaseband signal, the conditions to avoid information loss and to allow perfect reconstruction can be generalized in terms of conditions on the frequency interval of nonzero spectrum. See Sampling (signal processing) for more details and examples.
A bandpass condition is that X(f) = 0 for all nonnegative f outside the open band of frequencies

    ( (N/2) f_{s}, ((N+1)/2) f_{s} )

for some nonnegative integer N. This formulation includes the normal baseband condition as the case N = 0.

The corresponding interpolation function is the impulse response of an ideal brick-wall bandpass filter with cutoffs at the upper and lower edges of the specified band, which is the difference between a pair of lowpass impulse responses:

    (N+1) sinc( ((N+1)t)/T ) − N sinc( (Nt)/T )
Other generalizations, for example to signals occupying multiple noncontiguous bands, are possible as well. Even the most generalized form of the sampling theorem does not have a provably true converse. That is, one cannot conclude that information is necessarily lost just because the conditions of the sampling theorem are not satisfied; from an engineering perspective, however, it is generally safe to assume that if the sampling theorem is not satisfied then information will most likely be lost.
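As a small numerical check of the bandpass case (a sketch assuming NumPy; the specific frequencies are illustrative, not from the article), a tone inside the band (100 Hz, 125 Hz) can be sampled at only 50 Hz. It folds to a baseband alias, which is unambiguous as long as the occupied band is known:

```python
import numpy as np

# Bandpass (under)sampling sketch: a 110 Hz tone lies in the open band
# (100 Hz, 125 Hz) = ((N/2)*fs, ((N+1)/2)*fs) with fs = 50 Hz and N = 4,
# so sampling at fs = 50 Hz loses no information *within that band*:
# the samples coincide with those of a 10 Hz baseband tone.
fs = 50.0
n = np.arange(64)
x110 = np.cos(2 * np.pi * 110.0 * n / fs)
x10 = np.cos(2 * np.pi * 10.0 * n / fs)   # 110 Hz folds to 10 Hz at fs = 50 Hz
```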
Historical background
The sampling theorem was implied by the work of Harry Nyquist in 1928 ("Certain topics in telegraph transmission theory"), in which he showed that up to 2B independent pulse samples could be sent through a system of bandwidth B; but he did not explicitly consider the problem of sampling and reconstruction of continuous signals. About the same time, Karl Küpfmüller showed a similar result^{[1]} and discussed the sinc-function impulse response of a bandlimiting filter, via its integral, the step response Integralsinus; this bandlimiting and reconstruction filter that is so central to the sampling theorem is sometimes referred to as a Küpfmüller filter (but seldom so in English).
The sampling theorem, essentially a dual of Nyquist's result, was proved by Claude E. Shannon in 1949 ("Communication in the presence of noise"). V. A. Kotelnikov published similar results in 1933 ("On the transmission capacity of the 'ether' and of cables in electrical communications", translated from the Russian), as did the mathematician E. T. Whittaker in 1915 ("Expansions of the Interpolation-Theory", "Theorie der Kardinalfunktionen"), J. M. Whittaker in 1935 ("Interpolatory function theory"), and Gabor in 1946 ("Theory of communication").
Other discoverers
Others who have independently discovered or played roles in the development of the sampling theorem have been discussed in several historical articles, for example by Jerri^{[2]} and by Lüke.^{[3]} For example, Lüke points out that H. Raabe, an assistant to Küpfmüller, proved the theorem in his 1939 Ph.D. dissertation; the term Raabe condition came to be associated with the criterion for unambiguous representation (sampling rate greater than twice the bandwidth).
Meijering^{[4]} mentions several other discoverers and names in a paragraph and a pair of footnotes:
 As pointed out by Higgins [135], the sampling theorem should really be considered in two parts, as done above: the first stating the fact that a bandlimited function is completely determined by its samples, the second describing how to reconstruct the function using its samples. Both parts of the sampling theorem were given in a somewhat different form by J. M. Whittaker [350, 351, 353] and before him also by Ogura [241, 242]. They were probably not aware of the fact that the first part of the theorem had been stated as early as 1897 by Borel [25].^{27} As we have seen, Borel also used around that time what became known as the cardinal series. However, he appears not to have made the link [135]. In later years it became known that the sampling theorem had been presented before Shannon to the Russian communication community by Kotel'nikov [173]. In more implicit, verbal form, it had also been described in the German literature by Raabe [257]. Several authors [33, 205] have mentioned that Someya [296] introduced the theorem in the Japanese literature parallel to Shannon. In the English literature, Weston [347] introduced it independently of Shannon around the same time.^{28}
 ^{27} Several authors, following Black [16], have claimed that this first part of the sampling theorem was stated even earlier by Cauchy, in a paper [41] published in 1841. However, the paper of Cauchy does not contain such a statement, as has been pointed out by Higgins [135].
 ^{28} As a consequence of the discovery of the several independent introductions of the sampling theorem, people started to refer to the theorem by including the names of the aforementioned authors, resulting in such catchphrases as "the Whittaker–Kotel'nikov–Shannon (WKS) sampling theorem" [155] or even "the Whittaker–Kotel'nikov–Raabe–Shannon–Someya sampling theorem" [33]. To avoid confusion, perhaps the best thing to do is to refer to it as the sampling theorem, "rather than trying to find a title that does justice to all claimants" [136].
Why Nyquist?
Exactly how, when, or why Nyquist had his name attached to the sampling theorem remains obscure. The first known use of the term Nyquist sampling theorem is in a 1965 book.^{[5]} It had been called the Shannon Sampling Theorem as early as 1954,^{[6]} but also just the sampling theorem by several other books in the early 1950s.
In 1958, Blackman and Tukey^{[7]} cited Nyquist's 1928 paper as a reference for the sampling theorem of information theory, even though that paper does not treat sampling and reconstruction of continuous signals as others did. Their glossary of terms includes these entries:
 Sampling theorem (of information theory)
 Nyquist's result that equispaced data, with two or more points per cycle of highest frequency, allows reconstruction of bandlimited functions. (See Cardinal theorem.)
 Cardinal theorem (of interpolation theory)
 A precise statement of the conditions under which values given at a doubly infinite set of equally spaced points can be interpolated (with the aid of the function (sin x)/x to yield a continuous bandlimited function. (sic: mismatched parentheses)
Exactly what result of Nyquist they are referring to remains mysterious.
When Shannon stated and proved the sampling theorem in his 1949 paper, according to Meijering^{[4]} "he referred to the critical sampling interval T = 1/2W as the Nyquist interval corresponding to the band W, in recognition of Nyquist’s discovery of the fundamental importance of this interval in connection with telegraphy." This explains Nyquist's name on the critical interval, but not on the theorem.
Similarly, Nyquist's name was attached to Nyquist rate in 1953 by Harold S. Black:^{[8]}
 "If the essential frequency range is limited to B cycles per second, 2B was given by Nyquist as the maximum number of code elements per second that could be unambiguously resolved, assuming the peak interference is less than half a quantum step. This rate is generally referred to as signaling at the Nyquist rate, and 1/2B has been termed a Nyquist interval." (bold added for emphasis; italics as in the original)
According to the OED, this may be the origin of the term Nyquist rate. In Black's usage, it is not a sampling rate, but a signaling rate.
Historical references
 ^ K. Küpfmüller, "Über die Dynamik der selbsttätigen Verstärkungsregler", Elektrische Nachrichtentechnik, vol. 5, no. 11, pp. 459–467, 1928. (German) English translation: K. Küpfmüller, "On the dynamics of automatic gain controllers", Elektrische Nachrichtentechnik, vol. 5, no. 11, pp. 459–467.
 ^ Abdul J. Jerri, "The Shannon Sampling Theorem—Its Various Extensions and Applications: A Tutorial Review", Proceedings of the IEEE, vol. 65, pp. 1565–1595, Nov. 1977. See also "Correction to 'The Shannon sampling theorem—Its various extensions and applications: A tutorial review'", Proceedings of the IEEE, vol. 67, p. 695, April 1979.
 ^ Hans Dieter Lüke, The Origins of the Sampling Theorem , IEEE Communications Magazine, pp.106–108, April 1999.
 ^ ^{a} ^{b} Erik Meijering, A Chronology of Interpolation From Ancient Astronomy to Modern Signal and Image Processing , Proc. IEEE, 90, 2002.
 ^ Richard A. Roberts and Ben F. Barton, Theory of Signal Detectability: Composite Deferred Decision Theory, 1965.
 ^ Truman S. Gray, Applied Electronics: A First Course in Electronics, Electron Tubes, and Associated Circuits, 1954.
 ^ R. B. Blackman and J. W. Tukey, The Measurement of Power Spectra : From the Point of View of Communications Engineering, New York: Dover, 1958.
 ^ Harold S. Black, Modulation Theory, 1953
See also
 Aliasing
 Antialiasing filter
 Dirac comb
 Hartley's law, where the sampling theorem is applied to data transmission.
 Whittaker–Shannon interpolation formula
 Sampling (signal processing)
 Signal (electrical engineering)
 Reconstruction from Zero Crossings
References
 E. T. Whittaker, "On the Functions Which are Represented by the Expansions of the Interpolation Theory", Proc. Royal Soc. Edinburgh, Sec. A, vol. 35, pp. 181–194, 1915.
 H. Nyquist, "Certain topics in telegraph transmission theory", Trans. AIEE, vol. 47, pp. 617–644, Apr. 1928. Reprinted as a classic paper in: Proc. IEEE, vol. 90, no. 2, Feb. 2002.
 Karl Küpfmüller, "Utjämningsförlopp inom Telegraf- och Telefontekniken" ("Transients in telegraph and telephone engineering"), Teknisk Tidskrift, no. 9, pp. 153–160, and no. 10, pp. 178–182, 1931.
 V. A. Kotelnikov, "On the carrying capacity of the ether and wire in telecommunications", Material for the First All-Union Conference on Questions of Communication, Izd. Red. Upr. Svyazi RKKA, Moscow, 1933 (Russian). (English translation, PDF)
 J. M. Whittaker, Interpolatory Function Theory, Cambridge Univ. Press, Cambridge, England, 1935.
 C. E. Shannon, "Communication in the presence of noise", Proc. Institute of Radio Engineers, vol. 37, no. 1, pp. 10–21, Jan. 1949. Reprinted as a classic paper in: Proc. IEEE, vol. 86, no. 2, Feb. 1998.
 J. R. Higgins, "Five short stories about the cardinal series", Bulletin of the AMS, vol. 12, 1985.
 Michael Unser, "Sampling—50 Years after Shannon", Proc. IEEE, vol. 88, no. 4, pp. 569–587, April 2000.
External links
 Learning by Simulations Interactive simulation of the effects of inadequate sampling
 Undersampling and an application of it
 Sampling Theory For Digital Audio
 Journal devoted to Sampling Theory