Expanded Reading
Channel Capacity
A variety of impairments can distort or corrupt a signal. A common impairment is noise, which is any unwanted signal that combines with, and hence distorts, the signal intended for transmission and reception. For the purposes of this section, we simply need to know that noise is something that degrades signal quality. For digital data, the question that then arises is to what extent these impairments limit the data rate that can be achieved. The maximum rate at which data can be transmitted over a given communication path, or channel, under given conditions is referred to as the channel capacity.
There are four concepts here that we are trying to relate to one another:
▶ Data rate: This is the rate, in bits per second (bps), at which data can be communicated.
▶ Bandwidth: This is the bandwidth of the transmitted signal as constrained by the transmitter and the nature of the transmission medium, expressed in cycles per second, or hertz.
▶ Noise: For this discussion, we are concerned with the average level of noise over the communications path.
▶ Error rate: This is the rate at which errors occur, where an error is the reception of a 1 when a 0 was transmitted, or the reception of a 0 when a 1 was transmitted.

The problem we are addressing is this: Communications facilities are expensive and, in general, the greater the bandwidth of a facility, the greater the cost. Furthermore, all transmission channels of any practical interest are of limited bandwidth. The limitations arise from the physical properties of the transmission medium or from deliberate limitations at the transmitter on the bandwidth to prevent interference from other sources. Accordingly, we would like to make as efficient use as possible of a given bandwidth. For digital data, this means that we would like to get as high a data rate as possible at a particular limit of error rate for a given bandwidth. The main constraint on achieving this efficiency is noise.
Nyquist Bandwidth
To begin, let us consider the case of a channel that is noise free. In this environment, the limitation on data rate is simply the bandwidth of the signal. A formulation of this limitation, due to Nyquist, states that if the rate of signal transmission is 2B, then a signal with frequencies no greater than B is sufficient to carry the signal rate. The converse is also true: Given a bandwidth of B, the highest signal rate that can be carried is 2B. This limitation is due to the effect of intersymbol interference, such as is produced by delay distortion. The result is useful in the development of digital-to-analog encoding schemes.
We referred to signal rate. If the signals to be transmitted are binary, then the data rate that can be supported by B Hz is 2B bps. As an example, consider a voice channel being used, via modem, to transmit digital data. Assume a bandwidth of 3100 Hz. Then the capacity, C, of the channel is 2B = 6200 bps. However, signals with more than two levels can be used; that is, each signal element can represent more than one bit. For example, if four possible voltage levels are used as signals, then each signal element can represent two bits. With multilevel signaling, the Nyquist formulation becomes

C = 2B log2 M

where M is the number of discrete signal elements or voltage levels. Thus, for M = 8, a value used with some modems, a bandwidth of B = 3100 Hz yields a capacity C = 2 × 3100 × log2 8 = 18,600 bps.
So, for a given bandwidth, the data rate can be increased by increasing the number of different signal elements. However, this places an increased burden on the receiver. Instead of distinguishing one of two possible signal elements during each signal time, it must distinguish one of M possible signals. Noise and other impairments on the transmission line will limit the practical value of M.
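The Nyquist calculation can be sketched in a few lines of Python. The function name is our own; the numbers reproduce the voice-channel example above (B = 3100 Hz, binary and 8-level signaling):

```python
import math

def nyquist_capacity(bandwidth_hz, levels):
    """Maximum data rate C = 2B log2(M) for a noise-free channel
    with bandwidth B Hz and M discrete signal levels (Nyquist)."""
    return 2 * bandwidth_hz * math.log2(levels)

# Voice channel example from the text: B = 3100 Hz
print(nyquist_capacity(3100, 2))  # binary signaling: 6200.0 bps
print(nyquist_capacity(3100, 8))  # 8-level signaling: 18600.0 bps
```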
Shannon Capacity Formula
Nyquist's formula indicates that, all other things being equal, doubling the bandwidth doubles the data rate. Now consider the relationship among data rate, noise, and error rate. The presence of noise can corrupt one or more bits. If the data rate is increased, then the bits become shorter in time, so that more bits are affected by a given pattern of noise. Thus, at a given noise level, the higher the data rate, the higher the error rate.
As an example of the effect of noise on a digital signal, suppose the noise consists of a relatively modest level of background noise plus occasional larger spikes of noise. The digital data can be recovered from the signal by sampling the received waveform once per bit time. Occasionally, however, a noise spike is sufficient to change a 1 to a 0 or a 0 to a 1.
All of these concepts can be tied together neatly in a formula developed by the mathematician Claude Shannon. As we have just mentioned, the higher the data rate, the more damage that unwanted noise can do. For a given level of noise, we would expect that greater signal strength would improve the ability to receive data correctly in the presence of noise. The key parameter involved in this reasoning is the signal-to-noise ratio (SNR, or S/N), which is the ratio of the power in a signal to the power contained in the noise that is present at a particular point in the transmission. Typically, this ratio is measured at a receiver, because it is at this point that an attempt is made to process the signal and eliminate the unwanted noise. For convenience, this ratio is often reported in decibels:

SNR_dB = 10 log10 (signal power / noise power)
This expresses the amount, in decibels, by which the intended signal exceeds the noise level. A high SNR means a high-quality signal.
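The decibel conversion is a one-liner; this small sketch (function name ours) shows, for instance, that a signal 100 times stronger than the noise corresponds to an SNR of 20 dB:

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels: 10 log10(S/N).
    Both powers must be in the same unit (e.g. watts)."""
    return 10 * math.log10(signal_power / noise_power)

print(snr_db(100, 1))  # signal 100x the noise power -> 20.0 dB
```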
The signal-to-noise ratio is important in the transmission of digital data because it sets the upper bound on the achievable data rate. Shannon's result is that the maximum channel capacity, in bits per second, obeys the equation

C = B log2 (1 + SNR)

where C is the capacity of the channel in bits per second, B is the bandwidth of the channel in hertz, and SNR is the signal-to-noise ratio expressed as a power ratio (not in decibels). The Shannon formula represents the theoretical maximum that can be achieved. In practice, however, only much lower rates are achieved. One reason for this is that the formula assumes white noise (thermal noise).
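A short Python sketch of the Shannon formula follows. The function name and the 30 dB figure are our own illustrative choices, applied to the same 3100 Hz voice channel used earlier; note the decibel value must be converted back to a power ratio first:

```python
import math

def shannon_capacity(bandwidth_hz, snr_ratio):
    """Error-free capacity C = B log2(1 + SNR), with SNR given
    as a plain power ratio (not in decibels)."""
    return bandwidth_hz * math.log2(1 + snr_ratio)

# Hypothetical example: B = 3100 Hz voice channel, SNR of 30 dB
snr = 10 ** (30 / 10)  # 30 dB corresponds to a power ratio of 1000
print(shannon_capacity(3100, snr))  # roughly 30,900 bps
```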
The capacity indicated in the preceding equation is referred to as the error-free capacity. Shannon proved that if the actual information rate on a channel is less than the error-free capacity, then it is theoretically possible to use a suitable signal code to achieve error-free transmission through the channel. Shannon's theorem unfortunately does not suggest a means for finding such codes, but it does provide a yardstick by which the performance of practical communication schemes may be measured.
Several other observations concerning the preceding equation may be instructive. For a given level of noise, it would appear that the data rate could be increased by increasing either signal strength or bandwidth. However, as the signal strength increases, so do the effects of nonlinearities in the system, leading to an increase in intermodulation noise. Note also that, because the noise is assumed to be white, the wider the bandwidth, the more noise is admitted to the system. Thus, as B increases, SNR decreases.
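This last point can be made concrete with a sketch that models white noise as a constant noise power density N0, so total noise power grows with bandwidth (N = N0 × B). The numbers below are assumed purely for illustration (S = 1 microwatt, N0 = 1 nanowatt/Hz): capacity still grows with B, but with sharply diminishing returns, because SNR falls as B rises.

```python
import math

def capacity_with_white_noise(bandwidth_hz, signal_power, noise_density):
    """Shannon capacity when the noise is white: total noise power
    is N0 * B, so widening the band also admits more noise."""
    snr = signal_power / (noise_density * bandwidth_hz)
    return bandwidth_hz * math.log2(1 + snr)

# Illustrative (assumed) numbers: S = 1e-6 W, N0 = 1e-9 W/Hz.
# A 100x increase in bandwidth yields well under a 2x capacity gain.
for b in (1_000, 10_000, 100_000):
    print(b, capacity_with_white_noise(b, 1e-6, 1e-9))
```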