Whenever we try to make accurate measurements we discover that the quantities we are observing appear to fluctuate randomly by a small amount. This limits our ability to make quick, accurate measurements and ensures that the amount of information we can collect or communicate is always finite. These random fluctuations are called *Noise.* They arise because the real world behaves in a quantised or ‘lumpy’ fashion. A common question when designing or using information systems is, ‘Can we do any better?’ In some cases it's possible to improve a system by choosing a better design or using it in a different way. In other cases we're up against fundamental limits set by unavoidable noise effects. To decide whether it is worth trying to build a better system we need to understand how noise arises and behaves. Here we will concentrate on electronic examples. However, you should bear in mind that similar results arise when we consider information carried in other ways (e.g. by photons in optical systems).

**Johnson noise**

In 1927 J. B. Johnson observed random fluctuations in the voltages across electrical resistors. A year later H. Nyquist published a theoretical analysis of this noise which is thermal in origin. Hence this type of noise is variously called *Johnson* noise, *Nyquist* noise, or *Thermal* noise.

A resistor consists of a piece of conductive material with two electrical contacts. In order to conduct electricity the material must contain some charges which are free to move. We can therefore treat it as a ‘box’ of material which contains some mobile electrons (charges) which move around, interacting with each other and with the atoms of the material. At any non-zero temperature we can think of the moving charges as a sort of *Electron Gas* trapped inside the resistor box. The electrons move about in a randomised way (similar to Brownian motion), bouncing and scattering off one another and the atoms. At any particular instant there may be more electrons near one end of the box than the other. This means there will be a difference in electric potential between the ends of the box (i.e. the non-uniform charge distribution produces a voltage across the resistor). As the distribution fluctuates from instant to instant the resulting voltage will also vary unpredictably.

Figure 3.1 illustrates a resistor connected via an amplifier to a centre-zero d.c. voltmeter. Provided that the gain of the amplifier and the sensitivity of the meter are large enough we will see the meter reading alter randomly from moment to moment in response to the thermal movements of the charges within the resistor. We can't predict what the precise noise voltage will be at any future moment. We can however make some __statistical__ predictions after observing the fluctuations over a period of time. If we note the meter reading at regular intervals (e.g. every second) for a long period we can plot a histogram of the results. To do this we choose a ‘bin width’, $\delta V$, and divide up the range of possible voltages into small ‘bins’ of this size. We then count up how often the measured voltage was in each bin, divide those counts by the __total__ number of measurements, and plot a histogram of the form shown in figure 3.2.
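The binning procedure described above is easy to sketch in code. The following Python fragment is a hypothetical simulation: Gaussian pseudo-random numbers stand in for the meter readings of figure 3.1, and the bin width is an arbitrary example choice.

```python
import random
from collections import Counter

random.seed(42)

# Simulate a long series of noise-voltage readings (arbitrary units).
# In a real experiment these would come from the meter in figure 3.1.
readings = [random.gauss(0.0, 1.0) for _ in range(100_000)]

bin_width = 0.25  # the chosen bin width (delta-V), an assumed value

# Count how often the measured voltage fell in each bin...
counts = Counter(int(v // bin_width) for v in readings)

# ...then divide by the total number of measurements, so each bin
# holds the observed fraction (estimated probability) of readings.
total = len(readings)
histogram = {k * bin_width: c / total for k, c in sorted(counts.items())}

# The fractions over all bins sum to one, and the histogram
# is centred near zero volts.
print(round(sum(histogram.values()), 6))
```

Plotting `histogram` (bin voltage against fraction) would reproduce the bell-shaped curve of figure 3.2.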

We can now use this plot to indicate the likelihood or *probability*, $p(V)\,\delta V$, that any future measurement of the voltage will give a result in any particular small range, $V$ to $V + \delta V$. This type of histogram is therefore called a display of the *Probability Density Distribution* of the fluctuations. From the form of the results, two conclusions become apparent:

Firstly, the __average__ of all the voltage measurements will be around zero volts. This isn't a surprise since there's no reason for the electrons to prefer to concentrate at one end of the resistor. For this reason, the average voltage won't tell us anything about how large the noise fluctuations are.

Secondly, the histogram will approximately fit what's called a *Normal* (or *Gaussian*) distribution of the form

$$p(V) = \frac{1}{\sigma\sqrt{2\pi}} \, \exp\left\{ \frac{-V^{2}}{2\sigma^{2}} \right\} \;\;\;\; (3.1)$$
(Note that you'll only get these results if you make __lots__ of readings. One or two measurements won't show a nice Gaussian plot with its centre at zero!) The value of $\sigma$ which fits the observed distribution indicates how wide the distribution is, hence it's a useful measure of the amount of noise.

The $\sigma$ value is useful for theoretical reasons since the probability distribution is Gaussian. In practice, however, it is more common to specify a noise level in terms of an *rms* or *root-mean-square* quantity. Here we can imagine making a series of *m* voltage measurements, $V_{1}, V_{2}, \ldots, V_{m}$, of the fluctuating voltage. We can then calculate the rms voltage level, which can be defined as

$$V_{\mathrm{rms}} = \sqrt{\frac{1}{m} \sum_{j=1}^{m} V_{j}^{2}} \;\;\;\; (3.2)$$
In general in these pages we can simplify things by using the ‘angle brackets’, $\langle \; \rangle$, to indicate an averaged quantity. Using this notation, expression 3.2 becomes

$$V_{\mathrm{rms}} = \sqrt{\langle V^{2} \rangle} \;\;\;\; (3.3)$$
Since $V^{2}$ will be positive when $V > 0$ __and__ when $V < 0$ we can expect $\langle V^{2} \rangle$ to always be positive whenever the Gaussian noise distribution has a width greater than zero. The wider the distribution, the larger the rms voltage level. Hence, unlike the mean voltage, the rms voltage is a useful indicator of the noise level. The rms voltage is of particular usefulness in practical situations because the amount of power associated with a given voltage varies in proportion with the voltage squared. Hence the average __power__ level of some noise fluctuations can be expected to be proportional to $\langle V^{2} \rangle$.
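A quick numerical check makes the contrast between the mean and the rms level concrete. This Python sketch uses simulated Gaussian readings with an assumed width of $\sigma = 2$ (arbitrary units):

```python
import math
import random

random.seed(1)

# Simulated Gaussian noise voltages with an assumed sigma = 2.
sigma = 2.0
voltages = [random.gauss(0.0, sigma) for _ in range(200_000)]

mean_v = sum(voltages) / len(voltages)                      # tends toward zero
mean_square = sum(v * v for v in voltages) / len(voltages)  # <V^2>
v_rms = math.sqrt(mean_square)                              # expression 3.3

# The mean tells us almost nothing about the noise level, but the
# rms value recovers the width (sigma) of the distribution.
print(round(mean_v, 1), round(v_rms, 1))
```

With enough samples the printed mean is essentially zero while the rms value comes out close to the chosen $\sigma$.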

Since thermal noise comes from thermal motions of the electrons we can only get rid of it by cooling the resistor down to absolute zero. More generally, we can expect the thermal noise level to vary in proportion with the temperature.
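To put rough numbers on this proportionality we can borrow the standard Nyquist expression for the mean-square Johnson noise voltage, $\langle V^{2} \rangle = 4kTRB$, quoted here without derivation; the resistance and bandwidth values below are assumed examples, not taken from the text:

```python
import math

# Numerical illustration that thermal noise power scales with
# temperature, using the standard Nyquist result <V^2> = 4*k*T*R*B
# (quoted for illustration; it is not derived in this section).
k = 1.380649e-23   # Boltzmann's constant, J/K
R = 1e4            # resistance in ohms (assumed example value)
B = 1e6            # measurement bandwidth in Hz (assumed example value)

def v_rms(T):
    """rms Johnson noise voltage for a resistor at temperature T (kelvin)."""
    return math.sqrt(4 * k * T * R * B)

# Doubling the temperature doubles the mean-square (power) level,
# so the rms voltage grows by a factor of sqrt(2).
ratio = v_rms(600.0) / v_rms(300.0)
print(round(ratio, 3))
```

The noise *power* is proportional to $T$, so the rms voltage only grows as $\sqrt{T}$, and both vanish at absolute zero.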

**Shot noise**

Many forms of random process produce Gaussian/Normal noise. Johnson noise occurs in __all__ systems which aren't at absolute zero, hence it can't be avoided in normal electronics. Another form of noise which is, in practice, unavoidable is *Shot Noise*. As with thermal noise, this arises because of the quantisation of electrical charge. Imagine a current flowing along a wire. In reality the current is actually composed of a stream of carriers, the charge on each being *q*, the electronic charge (1·6 × 10⁻¹⁹ Coulombs). To define the current we can imagine a surface through which the wire passes and count the number of charges, *n*, which cross the surface in a time, *t*. The current, *i*, observed during each interval will then simply be given by

$$i = \frac{n q}{t} \;\;\;\; (3.4)$$
Now the moving charges will not be aligned in a precise pattern, crossing the surface at regular intervals. Instead, each carrier will have its own random velocity and separation from its neighbours. When we repeatedly count the number of carriers passing in a series of *m* successive time intervals of equal duration, *t*, we find that the counts will fluctuate randomly from one interval to the next. Using these counts we can say that the typical (average) number of charges seen passing during each time *t* is

$$N = \frac{1}{m} \sum_{j=1}^{m} n_{j} \;\;\;\; (3.5)$$
where $n_{j}$ is the number observed during the *j*th interval. The mean current flow observed during the whole time, $mt$, will therefore be

$$I = \frac{N q}{t} \;\;\;\; (3.6)$$
During any __specific__ time interval the observed current will be

$$i_{j} = \frac{n_{j} q}{t} \;\;\;\; (3.7)$$
which will generally differ from *I* by an unpredictable amount. The effect of these variations is therefore to make it appear that there is a randomly fluctuating noise current superimposed on the nominally steady current, *I*. The size of the current fluctuation, $\Delta i_{j}$, during each time period can be defined in terms of the variation in the number of charges passing in the period, $\Delta n_{j} = n_{j} - N$, i.e. we can say that

$$\Delta i_{j} = \frac{q \, \Delta n_{j}}{t} \;\;\;\; (3.8)$$
As with Johnson noise, we can make a large number of counts and determine the magnitude of the noise by making a statistical analysis of the results. Once again we find that the resulting $\Delta n_{j}$ values have a *Normal* distribution. By definition we can expect that $\langle \Delta n \rangle = 0$ (since $N$ is arranged to be the value which makes this true). Hence, as with Johnson noise, we should use the mean-squared variation, not the mean variation, as a measure of the amount of noise. In this case, taking many counts and performing a statistical analysis, we find that

$$\langle \Delta n^{2} \rangle = N \;\;\;\; (3.9)$$
Note that this result, like the statement that thermal noise and shot noise exhibit Gaussian probability density distributions, is based on experiment. We will not take any interest here in __why__ these results are correct. It is enough for our purposes to take it as an experimentally verified fact that these statements are true. Combining the above expressions we can link the magnitude of the current fluctuations to the mean current level and say that

$$\langle \Delta i^{2} \rangle = \frac{q^{2} \langle \Delta n^{2} \rangle}{t^{2}} = \frac{q^{2} N}{t^{2}} = \frac{q I}{t} \;\;\;\; (3.10)$$
Hence we find that the rms size of the random current fluctuations is proportional to the square root of the average current. Since some current and voltage is always necessary to carry a signal, this noise is unavoidable (unless there's no signal), although we can reduce its level by reducing the magnitude of the signal current.
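Plugging example numbers into these expressions gives a feel for the size of the effect. The mean current and counting interval below are arbitrary assumed choices, not values from the text:

```python
import math

# Rough numerical sketch of the shot-noise result <di^2> = q*I/t.
q = 1.6e-19      # electronic charge, Coulombs
I = 1e-3         # mean current in Amps (assumed example value)
t = 1e-3         # counting interval in seconds (assumed example value)

di_rms = math.sqrt(q * I / t)
print(di_rms)    # a fraction of a nano-amp for these example values

# Quartering the mean current only halves the rms fluctuation:
# the noise falls as the square root of the mean current level.
assert math.isclose(math.sqrt(q * (I / 4) / t), di_rms / 2)
```

So for a milliamp-scale current the shot-noise fluctuation is tiny in absolute terms, but it can never be removed entirely while a signal current flows.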

Content and pages maintained by: Jim Lesurf (jcgl@st-and.ac.uk)


University of St. Andrews, St Andrews, Fife KY16 9SS, Scotland.