



12.1 Is the world ‘analog’?

In general, we can imagine representing information in terms of some form of analog or digital signal. The digital data stored on a CD will normally have been produced using analog-to-digital converters fed with amplified signals from microphones. The original microphone signals are obviously ‘analog’ — or are they? . . .

Modern physics is largely based upon the concept that the world behaves according to the rules of Quantum Mechanics. One of the axioms of this is that all forms of energy behave as if quantised. This gives us the well-known (although not well understood!) ‘wave–particle duality’. Statistically, the behaviour of physical processes can be described in terms of things like waves and continuous functions. Yet, when we examine any process in enough detail we can expect to see behaviour which it is more convenient to describe in terms of distinct particles or ‘packets’ of energy, mass, etc.

When the Compact Disc system was originally launched some people criticised it on the grounds that, ‘Sound signals are inherently analog, i.e. sound is a smoothly varying (continuous) pattern of pressure changes. Converting sound information into digital form “chops it up”, ruining it forever.’ This view is based on the idea that — by its very nature — sound is inherently a wave phenomenon. These waves satisfy a set of Wave Equations. Hence we should always be able to represent a given soundfield by a suitable algebraic function whose value varies smoothly from place to place and from moment to moment. Since the voltage/current patterns emerging from our microphones vary in proportion to the sound pressure variations falling upon them it seems fairly natural to think of the sound waves themselves as having all the properties we associate with ‘analog’ signals, i.e. the sound itself is essentially an analog signal, carrying information from the sound sources to the microphones. But how can sound be ‘analog’ if the theories of quantum mechanics are correct?

The purpose of this section is to show that the real world isn't actually either ‘analog’ or ‘digital’. Analog and digital signals are no more than mathematical representations of reality, useful when we want to process information. In fact we could say the same thing about the ‘waves’ and ‘particles’ we use so much in physics. Although it's easy to forget the fact, both waves and particles are mental models or ‘pictures’ we use to help us grasp how the real world behaves. Although useful as concepts, they don't necessarily ‘really exist’. To illustrate this point, imagine a situation where we are given a working electronic circuit board without being told anything about it and asked, ‘Is this an analog or a digital circuit?’ How could we tell? Of course, we could probably decide by looking to see if the circuit contained any integrated circuits, reading their type numbers, and looking them up in a book! (We can also guess that if the circuit doesn't contain any integrated circuits, it's probably not digital‡) However for our purposes, this would be cheating. The real question is, ‘Can we tell just by looking at the kinds of electronic signals being passed around between components on the board?’

If we connect an oscilloscope we can watch how some of the voltage or current levels in the circuit vary with time. In most cases, the shapes of the waveforms we'd see on the oscilloscope would quickly show whether the signal was digital or analog.

Digital signals will often show ‘square’ shapes. The signal voltages tend to spend most of the time near one or the other of two particular levels, switching between them relatively quickly. Analog signals sometimes show no obvious patterns, although in some cases they show a simple recognisable shape like a sinewave. As a result we can sometimes form an opinion about the type of signal by seeing if we can recognise the waveforms. But is there a more ‘scientific’ — i.e. objective — way of deciding? Is there an algorithm or recipe which would always be able to tell us what form a signal is taking?
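To make the question concrete, we could try writing such a recipe ourselves. The sketch below is one hypothetical heuristic (not taken from the text): it declares a set of samples ‘digital-looking’ if most of them sit near one of two levels. The function name, the 90% threshold and the tolerance are all arbitrary choices made for illustration — and, as the following paragraphs explain, any such test can be fooled.

```python
import numpy as np

def looks_digital(samples, tolerance=0.1):
    """Naive heuristic: a signal 'looks digital' if most samples sit
    near one of two levels. Purely illustrative -- no such test can be
    conclusive, as the surrounding text argues."""
    lo, hi = np.percentile(samples, [5, 95])  # robust estimates of the two levels
    span = hi - lo
    if span == 0:
        return True  # a constant level is trivially 'two-state'
    near_lo = np.abs(samples - lo) < tolerance * span
    near_hi = np.abs(samples - hi) < tolerance * span
    # 'Digital-looking' if over 90% of samples hug one of the two levels.
    return np.mean(near_lo | near_hi) > 0.9

# A square wave passes the test; a sinewave spends too much time
# between its extremes and fails it.
t = np.linspace(0, 1, 1000, endpoint=False)
square = np.sign(np.sin(2 * np.pi * 5 * t))
sine = np.sin(2 * np.pi * 5 * t)
print(looks_digital(square))  # True
print(looks_digital(sine))    # False
```

A heavily clipped analog waveform, or a band-limited bit stream, would defeat this kind of test — which is exactly the point the next paragraphs make.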

At first it might seem as if this problem is an easy one. When we look at them on an oscilloscope, digital signals can look nice and square, analog ones tend to look like bunches of sinewaves or noise. Unfortunately, when an information channel is being used to its limits the situation can be less clear. When a digital signal is transmitted at very high bit-rates, the rising and falling edges of each level change tend to become rounded by the finite channel bandwidth. As a result, the actual transmitted voltage fluctuations may not display an obviously digital pattern.
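This rounding of edges can be sketched with a crude channel model. The snippet below (my own illustration, not from the text) passes a bit stream through a first-order low-pass filter — a rough stand-in for a finite channel bandwidth — and shows that the transmitted levels no longer jump cleanly between two values.

```python
import numpy as np

def rc_lowpass(x, alpha):
    """First-order low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1]).
    A very crude model of a channel with finite bandwidth; smaller
    alpha means a narrower bandwidth and more rounding."""
    y = np.empty(len(x), dtype=float)
    y[0] = x[0]
    for n in range(1, len(x)):
        y[n] = y[n - 1] + alpha * (x[n] - y[n - 1])
    return y

bits = np.repeat([0, 1, 1, 0, 1, 0], 8).astype(float)  # 8 samples per bit
rounded = rc_lowpass(bits, alpha=0.3)

# The original stream changes level in a single sample; after
# filtering, each edge is spread over several samples.
print(np.max(np.abs(np.diff(bits))))     # 1.0 -- abrupt edges
print(np.max(np.abs(np.diff(rounded))))  # well below 1.0 -- rounded edges
```

Push the bit-rate up (fewer samples per bit, or smaller alpha) and the filtered waveform loses any obviously ‘digital’ appearance, just as the text describes.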

In a similar way, some analog waveforms may show fairly square patterns. For example, the output from a heavy rock band, compressed by studio equipment, can have a ‘clipped’ look similar to a stream of slightly rounded digital bits. Also, if an analog channel is being used efficiently, every possible waveform shape will appear sometimes. As a result, the waveform will sometimes look just like a digital one.

We can't know with absolute certainty, just by examining a real signal pattern for a while, whether it carries information in digital or analog form — although we can be fairly confident in many cases. We use voltage patterns (or currents, etc.) to carry information in various ways, but the terms ‘digital’ and ‘analog’ really refer to the way we process information, not some inherent property of the voltage/current itself.

For most purposes this lack of absolute knowledge doesn't matter. But it serves to make the point that digital and analog signals are idealisations. Any real signal will have both analog and digital characteristics.



Content and pages maintained by: Jim Lesurf (jcgl@st-and.ac.uk)
using HTMLEdit and TechWriter on a StrongARM powered RISCOS machine.
University of St. Andrews, St Andrews, Fife KY16 9SS, Scotland.