
INTRODUCTION TO SIGNAL SPACE ANALYSIS
Inside this Chapter

  • Introduction
  • Concept of Additive White Gaussian Noise (AWGN) Channel
  • Concept of Optimum Receiver
  • Geometric Representation of Signals
  • Schwarz Inequality
  • Gram-Schmidt Orthogonalization Procedure

7.1       INTRODUCTION
Figure 7.1 shows the most basic form of a digital communication system.
Here, a message source emits one symbol every T seconds. The emitted symbols belong to an alphabet of M symbols represented by m1, m2, …, mM.
To make this clearer, let us consider the following two examples:
(i)         In the first example, consider the remote connection of two digital computers, with one computer acting as an information source that calculates digital outputs based upon observations and the inputs fed into it. The resulting computer output is expressed as a sequence of 0s and 1s, which is transmitted to the second computer over a communication channel. In this case, the alphabet consists simply of two binary symbols, i.e., 0 and 1.
(ii)        In the second example, consider a quaternary PCM encoder with an alphabet consisting of four possible symbols, i.e., 00, 01, 10, and 11.
Let the a priori probabilities p1, p2, …, pM specify the message source output. For simplicity, let us assume that the M symbols of the alphabet are equally likely.
Then, the probability that symbol mi is emitted by the information source, is expressed as
p_i = P(m_i) = \frac{1}{M}, \qquad i = 1, 2, \ldots, M                                                   …(7.1)
Now, the transmitter takes the message source output mi and codes it into a distinct signal si(t) which is suitable for transmission over the channel.
FIGURE 7.1 The most basic form of a digital communication system.
This signal si(t) occupies the full duration T allotted to symbol mi. Further, si(t) is a real-valued energy signal, as given by
E_i = \int_0^T s_i^2(t)\,dt < \infty, \qquad i = 1, 2, \ldots, M                              …(7.2)
7.2 CONCEPT OF ADDITIVE WHITE GAUSSIAN NOISE (AWGN) CHANNEL
Now, the channel is assumed to have the following two characteristics:
(i)         the channel is linear with a bandwidth that is wide enough to accommodate the transmission of signal si(t) with negligible or no distortion.
(ii)        the channel noise, w(t), is the sample function of a zero-mean white Gaussian noise process.
At this stage, it may be noted that the reasons for the second assumption are that it makes the receiver calculations very easy and, also, that it is a reasonable description of the type of noise present in several practical communication systems. Such a channel is popularly known as an additive white Gaussian noise (AWGN) channel.
Hence, in view of above discussion, the received signal x(t) is expressed as
x(t) = s_i(t) + w(t), \qquad 0 \le t \le T, \quad i = 1, 2, \ldots, M                              …(7.3)
and thus we may model the additive white Gaussian noise (AWGN) channel as shown in figure 7.2.
FIGURE 7.2 Additive white Gaussian noise (AWGN) model of a channel.
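As an illustration of equation (7.3), here is a minimal Python sketch (not part of the original text; the symbol duration, sampling rate, pulse shape, and noise level are all assumed values chosen for demonstration) that samples a transmitted pulse over 0 ≤ t ≤ T and adds zero-mean Gaussian noise samples to it:

```python
import numpy as np

# Minimal sketch of the AWGN channel of equation (7.3): x(t) = s_i(t) + w(t).
# All parameter values below are assumptions used only for illustration.
T = 1.0                       # symbol duration in seconds (assumed)
fs = 1000                     # sampling rate in samples per second (assumed)
t = np.arange(0, T, 1 / fs)   # time axis over one symbol interval

s_i = np.ones_like(t)         # example transmitted signal: a unit rectangular pulse

noise_std = 0.5               # assumed noise standard deviation per sample
w = np.random.normal(0.0, noise_std, size=t.shape)   # zero-mean white Gaussian noise

x = s_i + w                   # received signal, as in equation (7.3)
```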
7.3       CONCEPT OF OPTIMUM RECEIVER
            As a matter of fact, the receiver observes the received signal x(t) for a duration of T seconds and makes a best estimate of the transmitted signal si(t), or equivalently of the transmitted symbol mi. However, due to the presence of channel noise, this decision-making process is statistical in nature. As a result, the receiver is likely to make occasional errors.
Therefore, the requirement is to design the receiver so as to minimize the average probability of symbol error.
This average probability of symbol error may be defined as

P_e = \sum_{i=1}^{M} p_i \, P(\hat{m} \ne m_i \mid m_i \text{ sent}) = \frac{1}{M} \sum_{i=1}^{M} P(\hat{m} \ne m_i \mid m_i \text{ sent})                              …(7.4)

where              mi = transmitted symbol,
                   m̂ = estimate produced by the receiver, and
                   P(m̂ ≠ mi | mi sent) = the conditional error probability given that the ith symbol was sent.
The resulting receiver is said to be optimum in the minimum probability of error sense. The above model provides a basis for the design of the optimum receiver, for which a geometric representation of the known set of transmitted signals, {si(t)}, will be used.
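To make the average probability of symbol error of equation (7.4) concrete, the following Monte Carlo sketch (an illustrative example with assumed parameters, not the optimum receiver developed later) transmits equally likely binary symbols as levels +A and -A, adds Gaussian noise, decides with a simple sign threshold, and counts the fraction of wrong decisions:

```python
import numpy as np

# Illustrative Monte Carlo estimate of the average symbol error probability of
# equation (7.4) for binary antipodal levels +A / -A in Gaussian noise.
# A, noise_std, and n_symbols are assumed values chosen only for demonstration.
rng = np.random.default_rng(0)
A, noise_std, n_symbols = 1.0, 1.0, 100_000

symbols = rng.integers(0, 2, n_symbols)              # equally likely 0s and 1s
levels = np.where(symbols == 1, A, -A)               # map symbols to +A / -A
received = levels + rng.normal(0.0, noise_std, n_symbols)

decisions = (received > 0).astype(int)               # sign-threshold decision rule
P_e = np.mean(decisions != symbols)                  # fraction of symbol errors
print(f"Estimated average probability of symbol error: {P_e:.4f}")
```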
7.4       GEOMETRIC REPRESENTATION OF SIGNALS        (Very Important)
In geometric representation of signals, we represent any set of M energy signals {si(t)} as linear combinations of N orthonormal basis functions, where N ≤ M.
This means that, given a set of real-valued signals s1(t), s2(t), …, sM(t), each of duration T seconds, we may write si(t) as under:
s_i(t) = \sum_{j=1}^{N} s_{ij} \phi_j(t), \qquad 0 \le t \le T, \quad i = 1, 2, \ldots, M                              …(7.5)
where, the coefficients of the expansion can be defined as
s_{ij} = \int_0^T s_i(t) \phi_j(t)\,dt, \qquad i = 1, 2, \ldots, M, \quad j = 1, 2, \ldots, N                              …(7.6)
Now, the real-valued basis functions φ1(t), φ2(t), …, φN(t) are orthonormal. Here, the word 'orthonormal' implies that

\int_0^T \phi_i(t) \phi_j(t)\,dt = \delta_{ij} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j \end{cases}                              …(7.7)

where δij is the Kronecker delta.
From equation (7.7), we note the following two points:
(i)         the first condition in equation (7.7) states that each basis function is normalized to have unit energy.
(ii)        the second condition states that the basis functions φ1(t), φ2(t), …, φN(t) are orthogonal with respect to each other over the interval 0 ≤ t ≤ T (a short numerical check of both conditions is sketched below).
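As a quick numerical check of both conditions in equation (7.7), the following Python sketch (the cosine/sine basis functions and the sampling rate are assumptions introduced only for illustration) approximates the integrals by sums over a sampled interval [0, T]:

```python
import numpy as np

# Numerical check of equation (7.7) for two assumed orthonormal basis functions:
# phi_1(t) = sqrt(2/T) cos(2*pi*t/T) and phi_2(t) = sqrt(2/T) sin(2*pi*t/T).
T, fs = 1.0, 10_000
t = np.arange(0, T, 1 / fs)
dt = 1 / fs

phi_1 = np.sqrt(2 / T) * np.cos(2 * np.pi * t / T)
phi_2 = np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)

print(np.sum(phi_1 * phi_1) * dt)   # approximately 1: unit energy (i = j)
print(np.sum(phi_2 * phi_2) * dt)   # approximately 1: unit energy (i = j)
print(np.sum(phi_1 * phi_2) * dt)   # approximately 0: orthogonal (i != j)
```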
In fact, the set of coefficients {sij, j = 1, 2, …, N} may be viewed as an N-dimensional vector, represented by si.*
With respect to vector si and transmitted signal si(t), let us note the following two points:
(i)         Given the N elements of the vector si (i.e., si1, si2, …, siN) operating as input, the scheme shown in figure 7.3(a) may be used to generate the signal si(t). It consists of a group of N multipliers, each supplied with its own basis function, followed by a summer. In fact, this scheme may be called a synthesizer.
(ii)        Conversely, given the signals si(t), i = 1, 2, …, M, operating as input, the scheme shown in figure 7.3(b) may be used to calculate the coefficients si1, si2, …, siN, which follows directly from equation (7.6).
This second scheme consists of a group of N product-integrators or correlators with a common input. Each correlator is supplied with its own basis function.
In fact, the scheme in figure 7.3(b) may be called an analyzer (a short sketch of both schemes follows the figure).
FIGURE 7.3 (a) Synthesizer for generating the signal si(t); (b) Analyzer for generating the set of signal vectors {si}.
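The following Python sketch mirrors the two schemes of figure 7.3 (the basis functions and the coefficients are assumed values introduced only for illustration): the analyzer computes the coefficients sij of equation (7.6) by correlating si(t) with each basis function, and the synthesizer rebuilds si(t) from those coefficients as in equation (7.5):

```python
import numpy as np

# Sketch of the analyzer and synthesizer of figure 7.3 for an assumed
# two-function orthonormal basis on [0, T].
T, fs = 1.0, 10_000
t = np.arange(0, T, 1 / fs)
dt = 1 / fs

basis = np.array([np.sqrt(2 / T) * np.cos(2 * np.pi * t / T),
                  np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)])

# Example signal s_i(t) built from assumed coefficients (3, -2).
s_i = 3.0 * basis[0] - 2.0 * basis[1]

# Analyzer (figure 7.3(b)): correlators implementing equation (7.6).
s_vec = np.array([np.sum(s_i * phi_j) * dt for phi_j in basis])
print(s_vec)                              # approximately [ 3. -2.]

# Synthesizer (figure 7.3(a)): multipliers and summer implementing equation (7.5).
s_rebuilt = s_vec @ basis
print(np.max(np.abs(s_rebuilt - s_i)))    # reconstruction error close to zero
```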
Hence, based upon above two points, we can say that each signal in the set {si(t)} is completely determined by the vector of its coefficients
 
s_i = \begin{bmatrix} s_{i1} \\ s_{i2} \\ \vdots \\ s_{iN} \end{bmatrix}, \qquad i = 1, 2, \ldots, M                                             …(7.8)
Here, the vector si is called a signal vector. Also, if we extend the conventional notion of two- and three-dimensional Euclidean spaces to an N-dimensional Euclidean space, the set of signal vectors {si | i = 1, 2, …, M} may be viewed as defining a corresponding set of M points in an N-dimensional Euclidean space, with mutually perpendicular axes labeled φ1, φ2, …, φN.
*          The vector si bears a one-to-one relationship with the transmitted signal si(t).
In fact, this N-dimensional Euclidean space is called the signal space.
NOTE It may be noted that the idea of visualizing a set of energy signals geometrically is of utmost importance. In fact, it provides the mathematical basis for the geometric representation of energy signals and thereby paves the way for the noise analysis of digital communication systems in a satisfactory manner. Figure 7.4 illustrates this form of representation for the case of a two-dimensional signal space with three signals, i.e., N = 2 and M = 3.
In an N-dimensional Euclidean space, we may define lengths of vectors and angles between vectors. It is customary to denote the length (also called the absolute value or norm) of a signal vector si by the symbol || si ||. The squared length of any signal vector si is defined to be the inner product or dot product of si with itself, as shown by
\|s_i\|^2 = s_i^T s_i = \sum_{j=1}^{N} s_{ij}^2                              …(7.9)
where sij is the jth element of si, and the superscript T denotes matrix transposition.
There is an interesting relationship between the energy content of a signal and its representation as a vector. By definition, the energy of a signal si(t) of duration T seconds is
E_i = \int_0^T s_i^2(t)\,dt                              …(7.10)
Therefore, substituting equation (7.5) into equation (7.10), we get
E_i = \int_0^T \left[ \sum_{j=1}^{N} s_{ij} \phi_j(t) \right] \left[ \sum_{k=1}^{N} s_{ik} \phi_k(t) \right] dt
Interchanging the order of summation and integration, and then rearranging terms, we get

E_i = \sum_{j=1}^{N} \sum_{k=1}^{N} s_{ij} s_{ik} \int_0^T \phi_j(t) \phi_k(t)\,dt                              …(7.11)
But, since the φj(t) form an orthonormal set, in accordance with the two conditions of equation (7.7), we find that equation (7.11) reduces simply to
E_i = \sum_{j=1}^{N} s_{ij}^2 = \|s_i\|^2                              …(7.12)
FIGURE 7.4 Illustrating the geometric representation of signals for the case when N = 2 and M = 3.
Thus, equations (7.9) and (7.12) show that the energy of a signal si(t) is equal to the squared length of the signal vector si representing it.
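This equality can be checked numerically with a short sketch (continuing the same assumed cosine/sine basis and coefficient values used above, which are illustrative only): the energy computed from the sampled waveform via equation (7.10) matches the squared length of the coefficient vector in equation (7.12):

```python
import numpy as np

# Numerical check that the signal energy equals the squared length of s_i
# (equations (7.10) and (7.12)); basis functions and coefficients are assumed.
T, fs = 1.0, 10_000
t = np.arange(0, T, 1 / fs)
dt = 1 / fs
basis = np.array([np.sqrt(2 / T) * np.cos(2 * np.pi * t / T),
                  np.sqrt(2 / T) * np.sin(2 * np.pi * t / T)])

s_vec = np.array([3.0, -2.0])         # assumed coefficient vector s_i
s_i = s_vec @ basis                   # corresponding waveform s_i(t)

E_waveform = np.sum(s_i ** 2) * dt    # equation (7.10): integral of s_i^2(t)
E_vector = np.sum(s_vec ** 2)         # equation (7.12): ||s_i||^2
print(E_waveform, E_vector)           # both approximately 13.0
```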
In the case of a pair of signals si(t) and sk(t), represented by the signal vectors si and sk, respectively, we may also show that

\int_0^T s_i(t) s_k(t)\,dt = s_i^T s_k                              …(7.13)

Equation (7.13) states that the inner product of the signals si(t) and sk(t) over the interval [0, T], using their time-domain representations, is equal to the inner product of their respective vector representations si and sk. It may be noted that the inner product of si(t) and sk(t) is invariant to the choice of basis functions {φj(t)}, in that it depends only on the components of the signals si(t) and sk(t) projected onto each of the basis functions.
Yet another useful relation involving the vector representations of the signals si(t) and sk(t) is described by
\|s_i - s_k\|^2 = \sum_{j=1}^{N} (s_{ij} - s_{kj})^2                              …(7.14)

or                                                           \|s_i - s_k\|^2 = \int_0^T [s_i(t) - s_k(t)]^2\,dt
where | | si – sk | | is the Euclidean distance, dik, between the points represented by the signal vectors si and sk.
To complete the geometric representation of energy signals, we need to have a representation for the angle θik subtended between two signal vectors si and sk. By definition, the cosine of the angle θik is equal to the inner product of these two vectors divided by the product of their individual norms, as shown by
\cos \theta_{ik} = \frac{s_i^T s_k}{\|s_i\| \, \|s_k\|}                              …(7.15)
The two vectors si and sk are thus orthogonal, or perpendicular to each other, if their inner product si^T sk is zero, in which case θik = 90 degrees; this condition is intuitively satisfying.
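As a small worked example (with assumed signal vectors that are not taken from the text), the Euclidean distance of equation (7.14) and the angle of equation (7.15) follow directly from the coefficient vectors:

```python
import numpy as np

# Distance and angle between two assumed signal vectors s_i and s_k,
# using equations (7.14) and (7.15); the vector values are illustrative only.
s_i = np.array([3.0, -2.0])
s_k = np.array([1.0, 2.0])

d_ik = np.linalg.norm(s_i - s_k)             # Euclidean distance, equation (7.14)
cos_theta = (s_i @ s_k) / (np.linalg.norm(s_i) * np.linalg.norm(s_k))
theta_ik = np.degrees(np.arccos(cos_theta))  # angle theta_ik, equation (7.15)
print(d_ik, theta_ik)                        # approximately 4.47 and 97.1 degrees

# Orthogonal case: a zero inner product gives theta_ik = 90 degrees.
u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(np.degrees(np.arccos((u @ v) / (np.linalg.norm(u) * np.linalg.norm(v)))))
```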
