EXAMPLE 9.56. A voice grade channel of the telephone network has a bandwidth of 3.4 kHz. Calculate the information capacity of the telephone channel for a signal-to-noise ratio of 30 dB.
Solution : Given that
B = 3.4 kHz, and
S/N = 30 dB
We know that the signal-to-noise ratio expressed in dB is
(S/N) in dB = 10 log10 (S/N)
or 30 = 10 log10 (S/N)
or S/N = antilog (30/10) = antilog 3 = 1000
We know that the information capacity of the telephone channel is given by
C = B log2 (1 + S/N)
Substituting all the values, we get
C = 3.4 × 10³ log2 (1 + 1000)
or C = 3400 × log2 (1001) = 3400 × 9.9672
or C = 33888.57 bits/sec = 33.89 kbps. Ans.
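This result can also be checked numerically. The following Python sketch is our own illustration (not part of the text); the function name channel_capacity is assumed.

```python
# Illustrative check of Example 9.56 (not from the text).
import math

def channel_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Information capacity C = B log2(1 + S/N) in bits/s."""
    snr_linear = 10 ** (snr_db / 10)      # convert dB to a power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

C = channel_capacity(3.4e3, 30)
print(f"C = {C:.2f} bits/s = {C / 1e3:.2f} kbps")   # approx. 33.89 kbps
```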
EXAMPLE 9.57. The error probability of a binary symmetric channel (BSC) is Pe. The probability of transmitting 1 is Q and that of transmitting 0 is 1 – Q. Determine the probabilities of receiving 1 and 0 at the receiver.
Solution : Let us draw the channel diagram for a binary symmetric channel (BSC) as shown in figure 9.17.
We know that the probability of receiving a ‘0’ i.e. P(y0) is given by
P(y0) = [Pe x P(x1)] + [(1 – Pe) P(x0)]
or P(y0) = QPe + (1 – Pe) (1 – Q). Ans.
Also, the probability of receiving a ‘1’ i.e. P(y1) is given by
P(y1) = [Pe x P(x0)] + [(1 – Pe) P(x1)]
or P(y1) = (1 – Q) Pe + (1 – Pe) Q. Ans.
FIGURE 9.17 Channel diagram of a binary symmetric channel (BSC).
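The two expressions above can be evaluated for any Q and Pe. A small illustrative sketch (not from the text; the function name bsc_output_probabilities is our own) follows.

```python
# Illustrative sketch (not from the text): received-symbol probabilities of a BSC
# with error probability Pe, where P(x1) = Q and P(x0) = 1 - Q.
def bsc_output_probabilities(Q: float, Pe: float):
    p_y0 = Q * Pe + (1 - Pe) * (1 - Q)   # '1' sent and flipped, or '0' sent and passed
    p_y1 = (1 - Q) * Pe + (1 - Pe) * Q   # '0' sent and flipped, or '1' sent and passed
    return p_y0, p_y1

p0, p1 = bsc_output_probabilities(Q=0.3, Pe=0.1)
print(p0, p1, p0 + p1)                    # the two probabilities sum to 1
```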
EXAMPLE 9.58. Explain the following term : Channel capacity.
(U.P. Tech, Sem. Exam, 2006-07) (2,5 Marks)
Solution : The channel capacity is denoted by C and, in simple terms, is defined as the maximum possible bit rate a channel can support without introducing any errors. The unit of channel capacity is bits/sec. The mutual information of a channel depends not only on the channel but also on the way in which the channel is used. The channel capacity can also be defined in terms of mutual information as follows.
Definition of channel capacity
The channel capacity C of a discrete memoryless channel (DMC) may be defined as the maximum mutual information I(X ; Y) in any single use of the channel (i.e., signalling interval) where the maximization is over all possible input probability distributions on X.
Therefore, we have
C = max I(X ; Y)
The channel capacity is a function of only the channel transition probabilities.
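As a rough numerical illustration of this maximization (our own sketch, not from the text), the mutual information of a small channel matrix can be computed for many candidate input distributions and the largest value taken as an estimate of C; the channel matrix below is an arbitrary example.

```python
# Illustrative sketch (not from the text): estimating C by searching over
# input distributions of a two-input discrete memoryless channel.
import numpy as np

def mutual_information(px, P):
    """I(X;Y) in bits for input distribution px and transition matrix P[i][j] = p(y_j|x_i)."""
    pxy = px[:, None] * P                          # joint distribution p(x, y)
    py = pxy.sum(axis=0)                           # output marginal p(y)
    mask = pxy > 0
    return float((pxy[mask] * np.log2(pxy[mask] / (px[:, None] * py)[mask])).sum())

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                         # an arbitrary example channel

# One-dimensional grid search over p(x0); adequate for a two-input channel.
capacity_estimate = max(mutual_information(np.array([a, 1.0 - a]), P)
                        for a in np.linspace(0.01, 0.99, 99))
print(f"C is approximately {capacity_estimate:.4f} bits per channel use")
```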
EXAMPLE 9.59. Derive channel capacity expression for a binary symmetric channel (BSC). (U.P. Tech, Sem. Exam, 2003-04) (05 Marks)
Solution : Let us consider a binary symmetric channel (BSC) as shown in figure 9.18. Let the probability of error be p and hence the probability of correct reception is (1 – p).
We know that the source entropy H(X) is maximum when all the transmitted messages (x0 and x1 in this case) are equiprobable.
FIGURE 9.18 A binary symmetric channel (BSC).
Therefore, p(x0) = p(x1) = 1/2
where x0 and x1 are 0 and 1 respectively.
Under this condition, the mutual information I(X ; Y) is also maximized. Therefore, we have
C = max I(X ; Y) = I(X ; Y) with p(x0) = p(x1) = 1/2
Now, from figure 9.18, the transition probabilities are as under :
p(y0|x1) = p (y1|x0) = p
and p(y0|x0) = p (y1|x1) = 1- p
Hence, the channel capacity can be proved to be equal to
C = 1 + p log2 p + (1 – p) log2 (1 – p)    ...(i)
But, the source entropy H(p) will be
H(p) = p log2 (1/p) + (1 – p) log2 [1/(1 – p)]
H(p) = – p log2 p – (1 – p) log2 (1 – p)
or H(p) = – [p log2 p + (1 – p) log2 (1 – p)]    ...(ii)
Using equations (i) and (ii), we get
C = 1 – H(p). Hence proved.
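A short numerical illustration of this result (not part of the text) follows; it evaluates C = 1 – H(p) for a few error probabilities.

```python
# Illustrative sketch (not from the text): BSC capacity C = 1 - H(p).
import math

def binary_entropy(p: float) -> float:
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

for p in (0.0, 0.1, 0.5):
    print(f"p = {p}: C = {1 - binary_entropy(p):.4f} bits/channel use")
# p = 0.5 gives C = 0: the output then carries no information about the input.
```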
SUMMARY
■ The purpose of a communication system is to carry information-bearing baseband signals from one place to another place over a communication channel.
■ Information theory is a branch of probability theory which may be applied to the study of the communication systems.
■ In the context of communications, information theory deals with mathematical modeling and analysis of a communication system rather than with physical sources and physical channels.
■ Information theory was invented by communication scientists while they were studying the statistical structure of electronic communication equipment. When the communique is readily measurable, such as an electric current, the study of the communication system is relatively easy.
■ An information source may be an object that produces an event, the outcome of which is selected at random according to a probability distribution. A practical source in a communication system is a device which produces messages, and it can be either analog or discrete.
■ A discrete information source is a source which has only a finite set of symbols as possible outputs. The set of source symbols is called the source alphabet, and the elements of the set are called symbols or letters.
■ Information sources can be classified as having memory or being memoryless. A source with memory is one for which a current symbol depends on the previous symbols. A memoryless source is one for which each symbol produced is independent of the previous symbols.
■ A discrete memoryless source (DMS) can be characterized by the list of the symbols, the probability assignment to these symbols, and the specification of the rate of generating these symbols by the source.
■ The amount of information contained in an event is closely related to its uncertainty. Messages containing knowledge of high probability of occurrence convey relatively little information.
■ In a practical communication system, we usually transmit long sequences of symbols from an information source. Thus, we are more interested in the average information that a source produces than the information content of a single symbol.
■ For quantitative representation of average information per symbol we make the following assumptions:
(i) The source is stationary so that the probabilities may remain constant with time.
(ii) The successive symbols are statistically independent and come from the source at an average rate of r symbols per second.
■ The mean value of I(xi) over the alphabet of source X with m different symbols is given by
H(X) = E[I(xi)] = – Σ P(xi) log2 P(xi)  b/symbol
■ If the time rate at which source X emits symbols is r (symbols/s), the information rate R of the source is given by
R = rH(X) b/s
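As a small illustration of these two results (our own sketch, not from the text; the symbol probabilities and the rate r are assumed values), the entropy and the information rate of a simple source can be computed as follows.

```python
# Illustrative sketch (not from the text): source entropy H(X) and
# information rate R = r * H(X) for a discrete memoryless source.
import math

def source_entropy(probabilities):
    """H(X) in bits per symbol."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

probs = [0.5, 0.25, 0.125, 0.125]   # assumed symbol probabilities
r = 1000                            # assumed symbol rate in symbols/s
H = source_entropy(probs)           # 1.75 bits/symbol
print(f"H(X) = {H} b/symbol, R = {r * H} b/s")
```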
■ A communication channel may be defined as the path or medium through which the symbols flow to the receiver. A discrete memoryless channel (DMC) is a statistical model with an input X and an output Y.
■ A channel is completely specified by the complete set of transition probabilities.
■ A channel described by a channel matrix with only one non-zero element in each column is called a lossless channel.
■ A channel described by a channel matrix with only one non-zero element in each row is called a deterministic channel.
■ A channel is called noiseless if it is both lossless and deterministic.
■ We can define the following entropy functions for a channel with m inputs and n outputs:
H(X) = – Σ P(xi) log2 P(xi)
H(Y) = – Σ P(yj) log2 P(yj)
H(X|Y) = – Σ Σ P(xi, yj) log2 P(xi|yj)
H(Y|X) = – Σ Σ P(xi, yj) log2 P(yj|xi)
■ The mutual information I(X ; Y) of a channel is defined by
I(X ; Y) = H(X) – H(X|Y) b/symbol
Since H(X) represents the uncertainty about the channel input before the channel output is observed and H(X|Y) represents the uncertainty about the channel input after the channel output is observed, the mutual information I(X ; Y) represents the uncertainty about the channel input that is resolved by observing the channel output.
■ Properties of I(X ; Y):
I(X; Y) = I(Y;X)
I(X;Y) ≥ 0
I(X; Y) = H (Y) – H (Y | X) = H(X) + H(Y) – H(X,Y)
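These relations can be illustrated numerically. The sketch below (not from the text; the joint distribution is an arbitrary example) computes H(X), H(Y), H(X, Y) and I(X ; Y) from a joint probability matrix.

```python
# Illustrative sketch (not from the text): entropies and mutual information
# computed from an example joint distribution p(x_i, y_j).
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])                       # assumed joint distribution
px, py = pxy.sum(axis=1), pxy.sum(axis=0)          # marginals

H_X, H_Y, H_XY = entropy(px), entropy(py), entropy(pxy.flatten())
I = H_X + H_Y - H_XY                               # I(X;Y) = H(X) + H(Y) - H(X,Y)
print(f"H(X)={H_X:.3f}  H(Y)={H_Y:.3f}  H(X,Y)={H_XY:.3f}  I(X;Y)={I:.3f}")
print(f"H(X|Y) = {H_XY - H_Y:.3f}")                # uncertainty remaining after observing Y
```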
■ The channel capacity per symbol of a DMC is defined as
Cs = max I(X ; Y)  b/symbol
where the maximization is over all possible input probability distributions {P(xi)} on X. Note that the channel capacity Cs is a function of only the channel transition probabilities which define the channel.
■ In a continuous channel, an information source produces a continuous signal x(t). The set of possible signals is considered as an ensemble of waveforms generated by some ergodic random process. It is further assumed that x(t) has a finite bandwidth so that x(t) is completely characterized by its periodic sample values. Thus, at any sampling instant, the collection of possible sample values constitutes a continuous random variable X described by its probability density function fX(x).
■ The average amount of information per sample value of x(t) is measured by
H(X) = – ∫ fX(x) log2 fX(x) dx
■ The entropy H(X) defined by the above equation is known as the differential entropy of X.
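As an illustration (our own sketch, not from the text), the differential entropy of a Gaussian random variable can be evaluated numerically and compared with its well-known closed form (1/2) log2 (2πeσ²).

```python
# Illustrative sketch (not from the text): differential entropy of a Gaussian,
# numeric integral vs. the closed form (1/2) log2(2*pi*e*sigma**2).
import math
import numpy as np

sigma = 2.0
x = np.linspace(-10 * sigma, 10 * sigma, 200001)
f = np.exp(-x**2 / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

numeric = -np.trapz(f * np.log2(f), x)             # -integral of f(x) log2 f(x) dx
closed_form = 0.5 * math.log2(2 * math.pi * math.e * sigma**2)
print(numeric, closed_form)                        # the two values agree closely
```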
■ In an additive white Gaussian noise (AWGN) channel, the channel output Y is given by
Y= X + n
where X is the channel input and n is an additive bandlimited white Gaussian noise with zero mean and variance σ².
■ The capacity Cs of an AWGN channel is given by
Cs = (1/2) log2 (1 + S/N)  b/sample
where S/N is the signal-to-noise ratio at the channel output.
■ The capacity C (in b/s) of the AWGN channel is given by
C = 2BCs = B log2 (1 + S/N)  b/s
The above equation is known as the Shannon-Hartley law.
■ The Shannon-Hartley law underscores the fundamental role of bandwidth and signal-to-noise ratio in communication. It also shows that we can exchange increased bandwidth for decreased signal power for a system with given capacity C.
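This exchange can be illustrated numerically. The sketch below (not from the text; the target capacity is taken from Example 9.56) computes the signal-to-noise ratio required to hold C fixed as the bandwidth is increased.

```python
# Illustrative sketch (not from the text): holding C fixed while trading
# bandwidth against signal power, S/N = 2**(C/B) - 1.
import math

target_C = 33.89e3                                 # bits/s, from Example 9.56
for B in (3.4e3, 6.8e3, 13.6e3):
    snr_db = 10 * math.log10(2 ** (target_C / B) - 1)
    print(f"B = {B / 1e3:5.1f} kHz  ->  required S/N = {snr_db:5.2f} dB")
```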
■ A conversion of the output of a DMS into a sequence of binary symbols (i.e., binary codewords) is called source coding. The device that performs this conversion is called the source encoder.
■ An objective of source coding is to minimize the average bit rate required for representation of the source by reducing the redundancy of the information source.
■ The average codeword length L per source symbol is given by
L = Σ P(xi) ni
where ni is the length (in bits) of the codeword assigned to symbol xi.
The parameter L represents the average number of bits per source symbol used in the source coding process.
■ Also, the code efficiency η is defined as
η = Lmin / L
where Lmin is the minimum possible value of L. When η approaches unity, the code is said to be efficient.
■ The code redundancy γ is defined as
γ = 1 – η
■ The source coding theorem states that for a DMS X, with entropy H(X), the average codeword length L per symbol is bounded as
L ≥ H(X)
and further, L can be made as close to H(X) as desired for some suitably chosen code.
Thus, with Lmin = H(X), the code efficiency can be rewritten as
η = H(X) / L
■ A necessary and sufficient condition for the existence of an instantaneous binary code is
Σ 2^(–ni) ≤ 1
which is known as the Kraft inequality.
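These source-coding quantities can be checked with a small example. The sketch below (not from the text; the probabilities and codeword lengths are an assumed example) computes L, the efficiency η = H(X)/L and the Kraft sum.

```python
# Illustrative sketch (not from the text): average codeword length, efficiency
# and Kraft sum for an assumed prefix-free binary code.
import math

probs   = [0.5, 0.25, 0.125, 0.125]                # P(xi), assumed example
lengths = [1, 2, 3, 3]                             # ni, codeword lengths in bits

H = -sum(p * math.log2(p) for p in probs)          # source entropy H(X)
L = sum(p * n for p, n in zip(probs, lengths))     # average codeword length
kraft = sum(2 ** (-n) for n in lengths)            # must be <= 1 for an instantaneous code

print(f"H(X) = {H}  L = {L}  efficiency = {H / L}  Kraft sum = {kraft}")
```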
■ The design of a variable-length code such that its average codeword length approaches the entropy of the DMS is often referred to as entropy coding. Two examples of entropy coding are presented in this chapter.
■ An efficient code can be obtained by a simple procedure known as the Shannon-Fano algorithm; a sketch of this procedure is given below.
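The following is a minimal illustration of the Shannon-Fano idea (our own sketch, not from the text): symbols are sorted by decreasing probability and recursively split into two groups of nearly equal total probability, with one group prefixed by '0' and the other by '1'. The splitting rule shown is one common variant.

```python
# Illustrative sketch (not from the text): a minimal Shannon-Fano encoder.
def shannon_fano(symbols):
    """symbols: list of (symbol, probability) sorted by decreasing probability;
    returns {symbol: codeword}."""
    if len(symbols) == 1:
        return {symbols[0][0]: ""}
    total, running, split = sum(p for _, p in symbols), 0.0, 1
    for i, (_, p) in enumerate(symbols[:-1], start=1):
        running += p
        if running >= total / 2:      # greedy split once half the probability is reached
            split = i
            break
    left, right = symbols[:split], symbols[split:]
    code = {s: "0" + c for s, c in shannon_fano(left).items()}
    code.update({s: "1" + c for s, c in shannon_fano(right).items()})
    return code

source = sorted([("A", 0.5), ("B", 0.25), ("C", 0.125), ("D", 0.125)],
                key=lambda sp: sp[1], reverse=True)
print(shannon_fano(source))   # e.g. {'A': '0', 'B': '10', 'C': '110', 'D': '111'}
```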