
Doordarshan, Indian Television Broadcaster

Doordarshan is the public television broadcaster of India and a division of Prasar Bharati, the public service broadcaster set up by the Government of India.

Doordarshan, operated by Prasar Bharati, is a television broadcaster in India and an undertaking of the Government of India. On the strength of its transmitters and infrastructure, Doordarshan is considered among the leading broadcasting organisations in the world. Doordarshan completed its 50th year in September 2009.

History of Doordarshan

Doordarshan started with a tentative telecast in September 1959 from Delhi. The infrastructure at that time was small, supported by a temporary studio. Regular transmission commenced in 1965 as a part of All India Radio. By 1972, the telecast had been extended to Mumbai and Amritsar. Doordarshan was the only channel available at the time, and by 1975 it was available in seven cities around the nation. In 1976, it was detached from All India Radio and was managed fully from New Delhi, by two different Director Generals. In 1982, colour television sets became available in the country, and the speech given by the Prime Minister of that time, Indira Gandhi, was telecast live throughout the nation. After this, the 1982 Asian Games in Delhi were also broadcast by the channel.

Early Programmes of Doordarshan

Doordarshan gained exceptional popularity during the 1980s with new and groundbreaking shows that connected with urban and rural audiences alike. Shows like Hum Log, Yeh Jo Hai Zindagi, Buniyaad and Nukkad, along with the epics Ramayana and Mahabharata, were watched by viewers throughout the country. Later programmes like Bharat Ek Khoj, Chitrahaar, Sword of Tipu Sultan, Rangoli, The Great Maratha, Ek Se Badkar Ek and Superhit Muqabla were also watched widely.

Other popular programmes included thrillers like Byomkesh Bakshi, Karamchand, Barrister Roy, Tehkikaat, Reporter and Suraag. Family-oriented shows like Wagle ki Duniya, Fauji, Mr., Talaash, Kashish, Srimaan Srimati, Dekh Bhai Dekh, Zabaan Sambhal Ke, Swabhimaan, Shanti, Saagar, Lifeline, Udaan, Circus, Sansaar, Jaspal Bhatti's Flop Show, Meri Awaaz Suno, Sangharsh, Gul Gulshan Gulfam, Sea Hawks, Tu Tu Main Main and Junoon were also widely accepted. Mythological and fantasy programmes like Dastan-E-Hatim Tai, Chandrakanta and Alif Laila were also very popular among viewers.

Shows targeted at children were also much appreciated. Programmes like Captain Vyom (a desi version of Star Wars), Potli Baba Ki, Malgudi Days, Stone Boy, Tenali Raman, Sigma, Vikram Betaal, Kile ka Rahasya and Dada Dadi ki Kahaniyan are worth mentioning. Many popular international programmes were also aired after being dubbed in Hindi, such as Johnny Sokko and his Flying Robot, Street Hawk, Knight Rider and Superhuman Samurai Cyber Squad, while animated shows like The Jungle Book, He-Man and the Masters of the Universe, Spider-Man and Disney adventures were also admired by young audiences.

Channels of Doordarshan

Doordarshan currently runs 21 channels, including 11 regional channels, 2 national channels (DD National and DD News), a sports channel (DD Sports), an international channel and a few more. DD National broadcasts both regional and national programmes. DD Sports exclusively telecasts sporting tournaments and events of national and international significance; it also broadcasts indigenous sports like kabaddi and kho-kho. DD News, which was launched in place of DD Metro, is a 24-hour news channel.

The array of channels offered by Doordarshan includes DD National, DD Sports, DD News, Rajya Sabha TV, DD Lok Sabha, DD Bharati and many regional channels such as DD Gujarati, DD Bangla, DD Punjabi, DD Kashir, DD Malayalam, DD Odia, DD Podhigai, DD Saptagiri, DD Sahyadri, DD Urdu and DD NorthEast.

National Programmes on Doordarshan

The objective of a common programme service catering to people in different states was realised on August 15, 1982, when Mr. Sathe, the Minister for Information and Broadcasting, launched the 90-minute National Programme. It was to consist of news in Hindi and English, and programmes reflecting the music, dance, literature and culture of all regions. Although some programmes have been appreciated by viewers, it is generally believed that the output has lacked quality.

There are seven full-fledged centres at Delhi, Mumbai, Kolkata, Chennai, Jalandhar, Srinagar and Lucknow; eight transmitting centres at Raipur, Jaipur, Muzaffarpur, Gulbarga, Sambalpur, Hyderabad, Ahmedabad and Nagpur; and relay centres at Gwalior, Kanpur, Pune, Allahabad, Amritsar, Bengaluru, Mussoorie, Panaji and Asansol. There are also three Upgraha Doordarshan Kendras (satellite programme production centres) at Cuttack, Delhi and Hyderabad, and 20 low-power transmitters in many states that relay national and other programmes. Doordarshan has 45 transmitters at work, and the programmes reach about 28% of the population.

Active Doordarshan

Recently, along with Tata Sky, Doordarshan launched an Interactive Service, offered as a special channel on Tata Sky. The service shows four Doordarshan channels that are not available on Tata Sky as normal channels; DD Podhigai, DD Gujarati and DD Punjabi are among those offered. Doordarshan has also launched its own direct-to-home service, named DD Direct Plus.

International Broadcasting of Doordarshan

Doordarshan also broadcasts internationally via satellite and has a presence in almost 146 countries, although there have been technical problems with the availability of the channel in some countries. The programmes and time slots are not the same as in the broadcast within India. In July 2008, transmissions in the U.K. and U.S. were stopped.

Now more than 90 per cent of the country's population can receive Doordarshan programmes through a network of nearly 1,400 terrestrial transmitters, and around 46 Doordarshan studios produce television software. Doordarshan televises in the official and associate official languages, and its regional channels televise in the dominant languages and dominant minority languages of their states.

Analog camcorder: front view Portable camera with an integrated VCR that records sounds and images on the magnetic tape of a compact videocassette.

Video-tape operation controls Buttons that control viewing of recorded images; they include playback, stop, pause, fast-forward and rewind.

Eye-cup Part attached to the eyepiece; the eye is placed on it for viewing.

Near/far dial Lens focusing ring used to manually adjust the sharpness of an image.

Focus selector Button used to focus the image automatically or manually.

Power/functions switch Button used to turn the camcorder on or off and to select the operating mode, including camera, playback and battery recharge.

Display panel Movable plate housing and protecting the liquid crystal display.

Electronic viewfinder Small video monitor for viewing the scene to be filmed in order to frame it and bring it into focus.

Collapsible fins Controls the amount of light entering the camera lens.

zoom lens Lens for changing the visual field so that a close-up or distant shot of the subject can be obtained without moving the camcorder.

How Does The Human Eye Work?

The individual components of the eye work in a manner similar to a camera. Each part plays a vital role in providing clear vision. Think of the eye as a camera, with the cornea behaving much like a lens cover. As the eye's main focusing element, the cornea takes widely diverging rays of light and bends them through the pupil, the dark, round opening in the center of the colored iris. The iris and pupil act like the aperture of a camera.

Next in line is the lens, which acts like the lens in a camera, helping to focus light onto the back of the eye. Note that the lens is the part which becomes cloudy in a cataract; it is removed during cataract surgery and nowadays replaced by an artificial implant.

Figure: the camera and the human eye compared.

The very back of the eye is lined with a layer called the retina, which acts very much like the film of the camera. The retina is a membrane containing photoreceptor nerve cells that lines the inside back wall of the eye. These photoreceptor cells change the light rays into electrical impulses and send them through the optic nerve to the brain, where an image is perceived. The center 10% of the retina is called the macula; it is responsible for your sharp, reading vision, while the peripheral retina is responsible for peripheral vision. As with a camera, if the "film" (the retina) is bad, then no matter how good the rest of the eye is, you will not get a good picture.

The human eye is remarkable. It accommodates to changing lighting conditions and focuses light rays originating from various distances from the eye. When all of the components of the eye function properly, light is converted to impulses and conveyed to the brain where an image is perceived.

Glossary of Eye Terms:

Anterior Chamber The cavity in the front part of the eye between the lens and cornea is called the anterior chamber. It is filled with aqueous, a water-like fluid. This fluid is produced by the ciliary body and drains back into the blood circulation through channels in the chamber angle. It is turned over about every 100 minutes.

Chamber Angle Located at the junction of the cornea, iris and sclera, the anterior chamber angle extends 360 degrees around the perimeter of the iris. Channels here allow aqueous fluid to drain back into the blood circulation from the eye; they may be obstructed in glaucoma.

Ciliary Body A structure located behind the iris (rarely visible) which produces aqueous fluid that fills the front part of the eye and thus maintains the eye pressure. It also allows focusing of the lens.

Conjunctiva A thin lining over the sclera, the white part of the eye, that also lines the inside of the eyelids. Cells in the conjunctiva produce mucus, which helps to lubricate the eye.

Cornea The transparent, outer "window" and primary focusing element of the eye. The outer layer of the cornea is known as epithelium. Its main job is to protect the eye. The epithelium is made up of transparent cells that have the ability to regenerate quickly. The inner layer of the cornea is also made up of transparent tissue, which allows light to pass.

Hyaloid Canal A narrow channel that runs from the optic disc to the back surface of the lens. It serves an embryologic function prior to birth but none afterwards.

Iris Inside the anterior chamber is the iris, the part of the eye which is responsible for one's eye colour. It acts like the diaphragm of a camera, dilating and constricting the pupil to allow more or less light into the eye.

Pupil The dark opening in the center of the colored iris that controls how much light enters the eye. The colored iris functions like the iris of a camera, opening and closing, to control the amount of light entering through the pupil.

Lens The part of the eye immediately behind the iris that performs delicate focusing of light rays upon the retina. In persons under 40, the lens is soft and pliable, allowing for fine focusing from a wide variety of distances. For individuals over 40, the lens begins to become less pliable, making focusing upon objects near to the eye more difficult. This is known as presbyopia.

Macula The part of the retina which is most sensitive, and is responsible for the central (or reading) vision. It is located near the optic nerve directly at the back of the eye (on the inside). This area is also responsible for color vision.

Optic Disc The position in the back of the eye where the optic nerve (along with an artery and vein) enters the eye; it corresponds to the "blind spot", since there are no rods or cones at this location. Normally, a person does not notice this blind spot, since rapid movements of the eye and processing in the brain compensate for the absent information. This is the area that the ophthalmologist studies when evaluating a patient for glaucoma, a condition in which the optic nerve becomes damaged, often due to high pressure within the eye. As it looks like a cup when viewed with an ophthalmoscope, it is sometimes referred to as the optic cup.

Optic Nerve The optic nerve is the structure which takes the information from the retina as electrical signals and delivers it to the brain where this information is interpreted as a visual image. The optic nerve consists of a bundle of about one million nerve fibers.

Retina The membrane lining the back of the eye that contains photoreceptor cells. These photoreceptor nerve cells react to the presence and intensity of light by sending an impulse to the brain via the optic nerve. In the brain, the multitude of nerve impulses received from the photoreceptor cells in the retina are assimilated into an image.

Sclera The white, tough wall of the eye. Few diseases affect this layer. It is covered by the episclera (a fibrous layer between the conjunctiva and sclera) and the conjunctiva, and the eye muscles are attached to it.

Vitreous Next in our voyage through the eye is the vitreous. This is a jelly-like substance that fills the body of the eye. It is normally clear. In early life, it is firmly attached to the retina behind it. With age, the vitreous becomes more water-like and may detach from the retina. Often, little clumps or strands of the jelly form and cast shadows which are perceived as "floaters". While frequently benign, sometimes floaters can be a sign of a more serious condition such as a retinal tear or detachment and should be investigated with a thorough ophthalmologic examination.

Modulator

A modulator is a device used for modulation, so let us first explain modulation itself.

Modulation

Modulation is a process of superimposing information on a carrier by varying one of its parameters (amplitude, frequency or phase).

Need for Modulation

- Modulating the signal onto a higher frequency reduces the size of the antenna required.
- It allows different transmissions (stations) to be differentiated from one another.
- The ratio of maximum to minimum frequency is greatly reduced by modulating the signal onto a high-frequency carrier.

Types of Modulation

In general, there are three types of modulation:

a) Amplitude Modulation
b) Angle Modulation
c) Pulse Modulation

Amplitude Modulation

If the amplitude of the carrier is varied in accordance with the amplitude of the modulating signal (information), it is called amplitude modulation. This modulation is shown in figure 1 and can be seen on the screen of an oscilloscope.
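As a quick illustration of this definition, the short Python sketch below builds a DSB-FC AM waveform numerically; the carrier and tone frequencies, amplitudes and modulation depth are arbitrary illustrative values, not figures from the text.

import numpy as np

# Illustrative parameters (assumed, not from the text): 100 kHz carrier, 1 kHz tone
Ac, fc = 1.0, 100e3      # carrier amplitude and frequency
Am, fm = 0.5, 1e3        # modulating-signal amplitude and frequency
ma = Am / Ac             # modulation index

t = np.arange(0, 5e-3, 1/(10*fc))       # 5 ms of samples, 10 samples per carrier cycle
vm = Am * np.cos(2*np.pi*fm*t)          # modulating signal Vm(t)
vc = Ac * np.cos(2*np.pi*fc*t)          # unmodulated carrier Vc(t)
v_am = (Ac + vm) * np.cos(2*np.pi*fc*t) # DSB-FC AM signal: envelope follows Vm(t)

# For 100% modulation (Am = Ac, ma = 1) the envelope just touches zero
print(f"ma = {ma:.2f}, envelope min = {Ac*(1-ma):.2f}, envelope max = {Ac*(1+ma):.2f}")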

Fig. 1 Amplitude Modulation: carrier Vc(t) of frequency fc and amplitude Ac; modulating signal Vm(t) of frequency fm and amplitude Am; and the resulting amplitude-modulated DSB signal V(t). For 100% modulation (Am = Ac), the carrier amplitude corresponding to the negative peak of the modulating signal is zero.

Spectrum of AM signal

The spectrum of the AM signal is shown in figure 2, where fm is the modulating frequency and fc is the carrier frequency.

Fig. 2 Spectrum of AM Signal: carrier at fc, lower sideband (LSB) at fc - fm and upper sideband (USB) at fc + fm.

Power in the carrier,

Pc = (Ac/√2)² = Ac²/2

Power in the sideband,

Plsb = Pusb = (ma·Ac/(2√2))² = (ma²/4)·(Ac²/2) = (ma²/4)·Pc

Power in the upper side band(Pusb) = Power in the lower side band (Plsb)

Hence total power, Pt = Pc + Pusb + Plsb = Pc (1 + ma²/2)

where Ac = Amplitude of the carrier

ma = Modulation Index = Am/Ac

Am = Amplitude of the modulating signal

Bandwidth,

BW = (fc + fm) - (fc - fm) = 2 fm
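A small Python sketch tying these formulas together; the amplitudes and the modulating frequency are assumed illustrative values, not taken from the text.

# Illustrative values (assumed, not from the text)
Ac = 100.0   # carrier amplitude, volts
Am = 50.0    # modulating-signal amplitude, volts
fm = 5e3     # modulating frequency, Hz

ma = Am / Ac                      # modulation index
Pc = Ac**2 / 2                    # carrier power (into 1 ohm)
Psb = (ma**2 / 4) * Pc            # power in EACH sideband, Plsb = Pusb
Pt = Pc + 2 * Psb                 # total power = Pc * (1 + ma**2 / 2)
BW = 2 * fm                       # bandwidth = (fc + fm) - (fc - fm)

print(f"ma = {ma}, Pc = {Pc:.0f} W, Plsb = Pusb = {Psb:.1f} W")
print(f"Pt = {Pt:.1f} W (check: {Pc*(1 + ma**2/2):.1f} W), BW = {BW/1e3:.0f} kHz")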

Variation of AM Signals

DSB - FC : Double sidebands with full carrier. This is used in MW and SW Transmitters.

DSB - SC : Double sidebands with suppressed carrier. This method is used for transmission of chroma signals in TV and stereo signal in FM transmitter.

VSB : Vestigial sideband. This method utilises one sideband (usually the USB) with the carrier, plus a portion of the other sideband. It is used for picture (video) transmission in television.

SSB : Single sideband. In this method only one sideband (without the carrier) is used for transmission, giving a considerable saving in power and bandwidth. But since the carrier is not transmitted it becomes difficult to recover the signal at the receiving end, so the receiver circuit is complex. The use of this method is restricted to special purposes only, such as military communications.

ISB : Independent sideband. In this method each sideband carries a different message, so the two are independent of each other. A reduced carrier is also inserted to facilitate easy detection. This method is used in telephone systems.

Generation of AM Signal

There are two methods of generation of AM signals :

i) Low Level Modulation System
ii) High Level Modulation System

i) Low Level Modulation

Low level modulation generates many additional frequencies and hence requires filtering; to avoid loss of power in the filtering, the AM signal should therefore be generated at a low power level. This is shown in figure 3.

In low level modulation, all the stages following the modulator stage have to be linear. This system of modulation is used in TV transmission.

Fig. 3 Low Level Modulation: block diagram A → B → C → D → E → F → G → AM output, with the modulating signal applied at the modulator (D).

A = Stable Oscillator
B = Buffer Amplifier
C = Frequency Multiplier
D = Modulator
E = Chain
F = Intermediate Power Amplifier
G = Final Power Amplifier

ii) High Level Modulation

This method does not give rise to many additional frequencies, so filtering is not required. It is best suited for higher power amplification; medium wave and short wave transmissions use this method of modulation. This is shown in figure 4.

High level modulator can be operated in class 'C' configuration. As the earlier stages operate at the carrier frequency only (or, in some cases at its sub-harmonics), all these stages can also be operated in class 'C' mode.

Fig. 4 High Level Modulation: block diagram of the transmitter chain with the modulating signal applied at the final (high level) stage, giving the AM output.

High level modulation leads to higher efficiency, better linearity and higher output power for a given device. However, high level modulation requires a significantly high modulating signal power whereas low level modulation does not.

Non-linear Amplitude Modulator

This modulator utilizes the non-linearity of the active device characteristics near the origin. This has been shown in figure 5.

Fig. 5 Non-linear AM Modulator: the modulating signal and the carrier Vc(t) from the generator are applied together to the non-linear device, which produces the AM modulated output.

Angle Modulation

Variation of the angle of the carrier signal with time results in angle modulation. It is of two types:

a) Frequency Modulation
b) Phase Modulation

Frequency Modulation

If the frequency of the carrier is varied in accordance with the amplitude of the modulating signal (information), it is called frequency modulation. This has been shown in figure 9.

Fig. 9 Frequency Modulation: carrier Vc(t) of frequency fc and amplitude Ac; modulating signal Vm(t) of frequency fm and amplitude Am; and the frequency-modulated signal V(t). The instantaneous frequency is maximum at the positive peak of the modulating signal and minimum at its negative peak.

A frequency-modulated signal has a large number of frequency components even when the modulating signal is a single-frequency signal. The adjacent frequency components are spaced fm apart; they lie on both sides of the carrier and are symmetrically placed about it, and the amplitudes of corresponding components on either side are equal. The components with frequencies (fc + fm) and (fc - fm) are called the first-order sidebands, and the amplitude of each of them is Ac·J1(mf). The components with frequencies (fc + n·fm) and (fc - n·fm), where n is an integer, are called the nth-order sidebands. The amplitude of the carrier is Ac·J0(mf). The relative amplitudes of the various sidebands therefore depend upon the index of modulation alone, the amplitude of a particular sideband being proportional to the Bessel function of the corresponding order. The spectrum is shown in figure 10.

Fig. 10 Spectrum of FM Signal: carrier fc = 1000 kHz, modulating frequency fm = 1 kHz, for modulation indices mf = 0.5, 1.0, 2.0 and 5.0 (frequency deviations of 0.5, 1, 2 and 5 kHz); the corresponding relative carrier amplitudes J0(mf) are 0.9385, 0.7652, 0.2239 and 0.1776.

From the above we find that the process of frequency modulation results in a reduction of carrier amplitude and generation of an infinite number of side bands. Generally, a limited number of side bands closer to the carrier frequency will have significant amplitudes.
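The sideband amplitudes described above are Bessel-function values and can be checked numerically. The Python sketch below (using scipy's Bessel function of the first kind) reproduces the relative carrier amplitudes J0(mf) listed for Fig. 10 and counts how many sideband pairs are significant for each modulation index; the 1% significance threshold is an arbitrary choice for illustration.

from scipy.special import jv   # Bessel function of the first kind, J_n(x)

Ac = 1.0                        # carrier amplitude
for mf in (0.5, 1.0, 2.0, 5.0): # modulation indices from Fig. 10
    # carrier amplitude is Ac*J0(mf); the nth-order sidebands have amplitude Ac*Jn(mf)
    carrier = Ac * jv(0, mf)
    sidebands = [Ac * jv(n, mf) for n in range(1, 8)]
    significant = sum(1 for a in sidebands if abs(a) > 0.01 * Ac)
    print(f"mf = {mf}: carrier amplitude = {carrier:+.4f}, "
          f"significant sideband pairs = {significant}")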

Power

P = ½ Ac² = power of the unmodulated carrier. This holds true for any type of modulating signal and for any value of modulation index.

Where, Ac = Amplitude of the carrier

Am = Amplitude of the modulating signal

f f mf    Modulation Index fm fm f  Frequency deviation

fm  Frequency of the modulating signal

Bandwidth of FM signal

Bandwidth can be defined as the frequency range which accommodates the carrier and the closest sidebands contributing at least 98% of the total signal power (Carson's rule).

BW = 2 (mf + 1) fm = 2 (mf·fm + fm) = 2 (Δf + fm)
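A minimal sketch of Carson's rule as a worked example; the 75 kHz deviation and 15 kHz modulating frequency are typical FM broadcast figures assumed here for illustration, not values from the text.

def carson_bandwidth(delta_f, fm):
    """Carson's rule: BW = 2*(mf + 1)*fm = 2*(delta_f + fm)."""
    return 2 * (delta_f + fm)

# Assumed figures for an FM broadcast channel: 75 kHz peak deviation, 15 kHz audio
print(carson_bandwidth(75e3, 15e3))   # 180000.0 Hz, i.e. about 180 kHz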

Generation of FM signal

There are two methods of generation of FM signal:

a) Direct method of FM generation
b) Indirect method of FM generation

Direct Method of FM Generation

i) Varactor Diode Modulator

This modulator is shown in fig. 11. It makes use of a varactor diode (also known as a varicap or capacitance diode), whose capacitance varies with the applied bias voltage (DC voltage plus modulating voltage). The diode forms, at least partially, the tuning capacitance of the tank circuit that determines the frequency of the oscillator. The capacitance varies with the applied modulating voltage, and so does the frequency.

Fig. 11 Varactor Diode Modulator: oscillator transistor Q with tank circuit (L, C), varactor diode Cd biased by Vo, blocking capacitor Cb, RF choke (RFC) and transformer T coupling the modulating signal x(t); the circuit is supplied from -Vcc and the FM output is taken from the collector.

Operation of the Circuit

Vo - provides a suitable bias to the varactor diode.

Cb - blocking capacitor. It blocks the DC bias voltage of the varactor diode so that the operating point of the transistor and the bias voltage of the varactor diode can be chosen independently.

x(t) - modulating signal.

Cd - diode capacitance at its operating voltage.

Ct - total capacitance of the tank circuit, Ct = C + Cd.

RFC - radio-frequency choke. It blocks the oscillation from reaching the bias supply Vo.

The modulating signal is added to the DC voltage Vo through transformer T. Hence the capacitance of the varactor diode is varied in accordance with the modulating signal. This capacitance forms part of the main tank circuit, so the frequency of oscillation is controlled by the modulating signal. The FM output is taken from the collector of the transistor through a buffer amplifier so that the load impedance on the oscillator is essentially constant.

ii) Voltage controlled oscillator (VCO) modulator

This circuit is given in figure 12. It is an astable multivibrator whose frequency of oscillation depends upon the applied DC voltage. Hence, if the applied voltage is made to vary in accordance with the modulating signal, by putting the DC supply and the modulating signal voltage in series, the frequency of oscillation will vary with the modulating signal. This type of circuit produces a rectangular waveform of varying frequency, from which it is not difficult to derive the corresponding sinusoidal signal.

Fig. 12 VCO Modulator: astable multivibrator with transistors T1 and T2, collector resistors Rc, timing components R1, C1 and R2, C2, supplied from -Vcc; the modulating signal (-V) is applied in series with the supply.

Operation of the circuit

The circuit generates a periodic rectangular waveform. Its time period T is given by

T = 0.69 (R1 C1 + R2 C2)

With R1 = R2 and C1 = C2 we get T = 0.69 (RC + RC) = 1.38 RC

From this we find that the period T does not depend on the supply voltage; it depends on the values of R and C only. This type of multivibrator operates at a fixed frequency.

Now, if we disconnect the resistances R1 and R2 from Vcc and connect them to an auxiliary voltage -V (the modulating signal), shown as a dotted line in the figure, the frequency of oscillation becomes a function of both Vcc and V. The time period T is then given by,

T = 2RC ln(1 + Vcc/V)

Hence, by putting V in series with Vcc and varying it, we can get a variable frequency that follows V (the modulating signal). From this rectangular wave we can obtain a sinusoidal waveform by passing it through a band-pass filter.
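The following Python sketch evaluates both period formulas for some assumed component values (10 kΩ, 1 nF, 12 V supply, chosen only for illustration), showing how the frequency of the astable multivibrator shifts as the auxiliary voltage V is varied.

import math

# Illustrative component values (assumed, not from the text)
R, C = 10e3, 1e-9          # 10 kohm, 1 nF
Vcc = 12.0                 # supply voltage, volts

# Fixed-frequency astable multivibrator (R1 = R2 = R, C1 = C2 = C)
T_fixed = 1.38 * R * C
print(f"fixed period = {T_fixed*1e6:.2f} us, f = {1/T_fixed/1e3:.1f} kHz")

# With R1 and R2 returned to an auxiliary voltage V (the modulating signal),
# the period becomes T = 2*R*C*ln(1 + Vcc/V): varying V varies the frequency
for V in (6.0, 12.0, 24.0):
    T = 2 * R * C * math.log(1 + Vcc / V)
    print(f"V = {V:4.1f} V -> T = {T*1e6:.2f} us, f = {1/T/1e3:.1f} kHz")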

Indirect Method of FM Generation

Armstrong method

In this method frequency modulation is obtained through a phase modulator. The modulating signal is integrated prior to modulating the carrier, so that the output of the phase modulator becomes a frequency-modulated signal (fig. 13). The required phase-modulated signal is generated with the help of a double sideband suppressed carrier (DSB-SC) modulator. In this method the index of modulation is limited to about 0.5.

Fig. 13 Armstrong method of FM generation: input signal → integrator → phase modulator → FM output.

Detection of AM Signal

There are many types of detectors for detecting the various forms of AM signal. One such detector is the envelope detector, which is used for detection of the DSB-FC signal. Its operation is discussed below.

Envelope Detector

The circuit diagram of the envelope detector, along with its waveforms, is shown in fig. 14 (a) and (b). During the positive half cycle of the input, capacitor C charges to almost the peak value of the half cycle. When the level of the input voltage starts falling, the diode D is reverse biased by the voltage on the capacitor. The capacitor then discharges through the resistance R with a time constant RC. The discharging of the capacitor continues until, in the next positive part of the carrier cycle, the input voltage exceeds the capacitor voltage. Then the diode conducts again and the whole process repeats. Thus, in each carrier cycle, the capacitor charges to nearly the peak value of the cycle.

The time constant RC should be sufficiently large to reduce the carrier-frequency ripple, but at the same time sufficiently small to allow the voltage on the capacitor to follow the modulation envelope. When these requirements are met, the output follows the peaks of the carrier cycles, i.e. it follows the envelope of the input; hence the device is called a peak or envelope detector. A practical circuit will be somewhat different from the circuit of fig. 14.

Fig. 14 Envelope Detector: (a) circuit with diode D, capacitor C and load resistor R across which the output is taken; (b) diode current and output voltage waveforms against time t.
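A common rule of thumb that follows from the paragraph above is to choose RC well above the carrier period and well below the period of the highest modulating frequency. The sketch below picks RC at the geometric mean of the two limits; the carrier and modulating frequencies and the load resistor are assumed values for illustration only.

import math

# Rule of thumb for the envelope-detector time constant:  1/fc  <<  R*C  <<  1/fm
# i.e. slow enough to smooth the carrier ripple, fast enough to follow the envelope.
# Illustrative values (assumed, not from the text):
fc = 1e6      # 1 MHz carrier
fm = 5e3      # 5 kHz highest modulating frequency
R = 10e3      # 10 kohm load resistor

RC = math.sqrt((1/fc) * (1/fm))   # pick RC at the geometric mean of the two limits
C = RC / R
print(f"1/fc = {1/fc*1e6:.1f} us  <  RC = {RC*1e6:.1f} us  <  1/fm = {1/fm*1e6:.1f} us")
print(f"with R = 10 kohm, C = {C*1e9:.1f} nF")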

Detection of FM Signal

The process of detection of FM signal is the process of deriving a voltage that varies in proportion to the instantaneous frequency of the received FM signal. There are many methods to detect FM signal. If the received signal is modified in such a way that its amplitude varies in accordance with its instantaneous frequency, then the envelope detector of figure 14 can be successfully utilised for FM detection. One such circuit is discussed here.

Resonant circuit Discriminator (Slope Detector)

This detector is shown in fig. 15. The tuned circuit is driven by a current source whose current is frequency modulated. The circuit is detuned in such a way that the carrier frequency lies on the positive slope of its response characteristic. Hence the resulting voltage across the tuned circuit varies with the frequency of the input current.

Fig. 15 Resonant Circuit Discriminator: FM input applied to transistor Q driving a tuned circuit, followed by diode D with C and R across which the output is taken; supplied from -Vcc.

In an FM detector, the output should be independent of any input amplitude variations. Hence it is common practice to precede an FM detector with a limiter, so that input amplitude variations are removed.

Phase Modulation

If the phase of the carrier is varied in accordance with the amplitude of the modulating signal (information), it is called phase modulation. Since this modulation has limited use, it is not discussed here.

UPCONVERTER

An upconverter, also known as a block upconverter (BUC), is used to convert a lower frequency into a higher frequency. For example, modern BUCs convert from the L band to the Ku band, C band or Ka band, while older BUCs convert from a 70 MHz intermediate frequency (IF) to the Ku band or C band. BUCs used in remote locations are often 2 or 4 W in the Ku band and 5 W in the C band. The 10 MHz reference frequency is usually sent on the same feedline as the main carrier. Many smaller BUCs also get their direct current (DC) over the feedline, using an internal DC block. BUCs are generally used in conjunction with low-noise block converters (LNBs). The BUC, being an up-converting device, makes up the "transmit" side of the system, while the LNB is the down-converting device and makes up the "receive" side.

Fig shows up-convertor

Fig. illustrates PDA attached with up-convertor

Amplifier

When we cannot hear a stereo system, we increase the volume; when the picture on our TV is too dark, we increase the brightness. In both of these cases we are taking a relatively weak signal and making it stronger, i.e. increasing the power of the signal. The process of increasing the power of an a.c. signal is called amplification, and the circuits used to perform this function are called amplifiers.

An amplifier may be defined as a device which produces an electrical output that is larger than, but has similar characteristics to, its input. It increases the magnitude of the input current or voltage by means of energy drawn from an external source. The input signal may be obtained from a phonograph cartridge, a tape head, or a transducer such as a thermocouple or pressure gauge; the output signal may be supplied to a loudspeaker in an audio amplifier, a motor in a servo amplifier, a relay in a control application, and so on.


As explained above, an amplifier needs a power source; this may be a battery or a d.c. source derived from a rectifier and filter combination. The amplifier also contains at least one active device, which may be an electron tube, a bipolar transistor or a field-effect transistor, and which provides the control function. Basically, the active device converts energy from the d.c. source into energy at the output of the amplifier that is proportional to the input signal. The a.c. input signal merely provides the means of controlling this d.c.-to-a.c. conversion, which takes place in the tube or transistor; the control is established with comparatively little input signal power.

An amplifier in which the instantaneous output signal is proportional to the corresponding input signal is called a linear amplifier. On the other hand, an amplifier in which the output is not directly proportional to the input is called a non-linear amplifier.

Fig. showing the working of an amplifier: a circuit carrying a small electric current controls a circuit carrying a large electric current; the amplifier modifies the larger current based on the smaller current.

Classification of Amplifier

Linear amplifiers may be classified according to their mode of operation, i.e. the way they operate about a predetermined set value. The classification is based on the following factors:

1. Based on the input: (a) small-signal amplifier (b) large-signal amplifier

2. Based on the output: (a) voltage amplifier (b) power amplifier

3. Based on transistor configuration: (a) common-emitter (CE) amplifier (b) common-base (CB) amplifier (c) common-collector (CC) amplifier

4. Based on biasing conditions: (a) class A (b) class B (c) class AB (d) class C

5. Based on the nature of the load resistance: (a) untuned (wide-band) amplifiers (b) tuned (narrow-band) amplifiers

6. Based on frequency response: (a) direct-coupled (DC) amplifier (b) audio-frequency (AF) amplifier (c) radio-frequency (RF) amplifier (d) ultra-high-frequency (UHF) and microwave amplifier

7. Based on the number of stages: (a) single-stage amplifier (b) multistage amplifier

8. Based on the method of coupling between the stages: (a) direct-coupled (DC) amplifier (b) RC-coupled amplifier (c) transformer-coupled amplifier

Transistor as an amplifier

In the circuit an NPN transistor is used; the basic circuit is therefore known as a basic common-emitter amplifier. Here the supply Vbb forward biases the emitter-base junction and the supply Vcc reverse biases the collector-base junction, which biases the transistor to operate in the active region. Vs is a sinusoidal a.c. input signal source with a source resistance Rs. The magnitude of the signal source voltage is such that the emitter-base junction always remains forward biased, regardless of the polarity of the signal.

First of all, let us assume that there is no a.c. signal source. Under this condition a d.c. collector current (Ic) flows through the collector load (Rc); this is called the zero-signal or quiescent operating current. Now let an a.c. signal be applied across the emitter-base junction. During the positive half of the a.c. input signal the forward bias across the emitter-base junction is increased, so more electrons are injected into the base and reach the collector, which increases the collector current. The increased collector current produces a greater voltage drop across the resistance Rc. During the negative half cycle the forward bias across the emitter-base junction is decreased, the collector current decreases, and the decreased collector current produces a smaller drop across the resistance Rc.

It is evident from the above discussion that a small a.c. signal at the input produces a large a.c. signal at the output. Thus the transistor acts as an amplifier.

Basic common emitter amplifier

Class A Power Amplifier

A class A power amplifier may be defined as a power amplifier in which the output current flows for the full cycle (i.e. 360 degrees) of the applied input signal, as shown in the figure.

Thus we can say that the transistor remains forward biased for the full period of the input cycle. The figure shows the schematic circuit diagram of a series-fed class A (large-signal) amplifier using a resistive load Rc.

A series fed class A amplifier

Note that the term series-fed means that the load Rc is connected in series with the transistor output. This circuit is not used for practical power amplification because its collector efficiency is very poor, but it gives a clear understanding of class A operation. The figure shows the output characteristics with the operating point Q, where ICQ and VCEQ represent the no-signal collector current and collector-emitter voltage respectively. When an a.c. input signal is applied, the operating point moves up and down, causing the output current and voltage to vary about their quiescent values: the output current rises to Ic max. and falls to Ic min., and in the same fashion the collector-emitter voltage rises to Vce max. and falls to Vce min.

Output characteristics of class A amplifier

Class –B Power Amplifier

In class B operation the transistor is biased in such a way that the zero-signal collector current is zero; in this sense class B operation does not need any biasing network, and the operating point is at cut-off, as shown in the figure. The transistor remains forward biased for only half of each cycle of the a.c. input signal.

Illustration of class B operation

As shown in the figure, during the positive half cycle of the input a.c. signal the input circuit is forward biased and hence collector current flows. On the other hand, during the negative half of the input a.c. signal the input circuit is reverse biased and hence no collector current flows.

CLASS AB POWER AMPLIFIER

In class AB power amplifiers, the biasing circuit is adjusted so that the operating point Q lies near the cut-off point. The input circuit remains forward biased, and hence current flows, during the complete positive half cycle and a small portion of the negative half cycle of the signal; during the remaining part of the negative half cycle the input circuit is reverse biased and no collector current flows. Class AB operation needs a push-pull arrangement to achieve a full output cycle.

Why Maximum Power is used in the Transmitter design

The Maximum Power Transfer Theorem is not so much a means of analysis as it is an aid to system design. Simply stated, the maximum amount of power will be dissipated by a load resistance when that load resistance is equal to the Thevenin/Norton resistance of the network supplying the power. If the load resistance is lower or higher than the Thevenin/Norton resistance of the source network, its dissipated power will be less than maximum. This is essentially what is aimed for in radio transmitter design , where the antenna or transmission line “impedance” is matched to final power amplifier “impedance” for maximum radio frequency power output. Impedance, the overall opposition to AC and DC current, is very similar to resistance, and must be equal between source and load for the greatest amount of power to be transferred to the load. A load impedance that is too high will result in low power output. A load impedance that is too low will not only result in low power output, but possibly overheating of the amplifier due to the power dissipated in its internal (Thevenin or Norton) impedance.

Taking our Thevenin equivalent example circuit, the Maximum Power Transfer Theorem tells us that the load resistance resulting in greatest power dissipation is equal in value to the Thevenin resistance (in this case, 0.8 Ω):

With this value of load resistance, the dissipated power will be 39.2 watts:

If the load resistance is made lower than 0.8 Ω, power dissipation increases in the Thevenin resistance and in the total circuit, but decreases in the load resistor. Likewise, if we increase the load resistance (1.1 Ω instead of 0.8 Ω, for example), power dissipation will also be less than it was at 0.8 Ω exactly:

If you were designing a circuit for maximum power dissipation at the load resistance, this theorem would be very useful. Having reduced a network down to a Thevenin voltage and resistance (or Norton current and resistance), you simply set the load resistance equal to that Thevenin or Norton equivalent (or vice versa) to ensure maximum power dissipation at the load. Practical applications of this might include radio transmitter final amplifier stage design (seeking to maximize power delivered to the antenna or transmission line), a grid-tied inverter loading a solar array, or electric vehicle design (seeking to maximize power delivered to the drive motor).
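The behaviour described above can be checked with a short sweep of the load resistance. The Thevenin resistance of 0.8 Ω is from the text; the Thevenin voltage is not stated in this excerpt, so 11.2 V is assumed here because it reproduces the quoted 39.2 W maximum.

# Sweep the load resistance across a Thevenin source to show that dissipation
# peaks when R_load equals the Thevenin resistance (0.8 ohm in the text's example).
# The Thevenin voltage is assumed (11.2 V reproduces the quoted 39.2 W figure).
V_th, R_th = 11.2, 0.8

for R_load in (0.5, 0.8, 1.1):
    I = V_th / (R_th + R_load)          # series circuit current
    P_load = I**2 * R_load              # power dissipated in the load
    P_internal = I**2 * R_th            # power lost in the Thevenin resistance
    print(f"R_load = {R_load} ohm: P_load = {P_load:.1f} W, "
          f"P_internal = {P_internal:.1f} W")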

Note what the Maximum Power Transfer Theorem is not: maximum power transfer does not coincide with maximum efficiency. Applying the theorem to AC power distribution will not result in maximum or even high efficiency. The goal of high efficiency is more important for AC power distribution, which dictates a relatively low generator impedance compared to the load impedance.

Similarly, high-fidelity audio amplifiers are designed for a relatively low output impedance and a relatively high speaker load impedance. The ratio of load impedance to output impedance is known as the damping factor, typically in the range of 100 to 1000.

Maximum power transfer does not coincide with the goal of lowest noise. For example, the low-level radio frequency amplifier between the antenna and a radio receiver is often designed for lowest possible noise. This often requires a mismatch of the amplifier input impedance to the antenna as compared with that dictated by the maximum power transfer theorem.

REVIEW:

1. The Maximum Power Transfer Theorem states that the maximum amount of power will be dissipated by a load resistance if it is equal to the Thevenin or Norton resistance of the network supplying power.

2. The Maximum Power Transfer Theorem does not satisfy the goal of maximum efficiency.

Star-Delta (Y-Δ) Transformation

In many circuit applications, we encounter components connected together in one of two ways to form a three-terminal network: the “Delta,” or Δ (also known as the “Pi,” or π) configuration, and the “Y” (also known as the “T”) configuration.

It is possible to calculate the proper values of resistors necessary to form one kind of network (Δ or Y) that behaves identically to the other kind, as analyzed from the terminal connections alone. That is, if we had two separate resistor networks, one Δ and one Y, each with its resistors hidden from view, with nothing but the three terminals (A, B, and C) exposed for testing, the resistors could be sized for the two networks so that there would be no way to electrically determine one network apart from the other. In other words, equivalent Δ and Y networks behave identically.

There are several equations used to convert one network to the other:
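The conversion equations themselves are not reproduced in this excerpt (they appeared as a figure), so the sketch below states the standard Δ-Y and Y-Δ formulas in code form; the resistor values in the example are arbitrary.

def delta_to_wye(Rab, Rac, Rbc):
    """Standard delta-to-wye conversion: each Y resistor equals the product of the
    two adjacent delta resistors divided by the sum of all three."""
    s = Rab + Rac + Rbc
    Ra = (Rab * Rac) / s     # Y resistor attached to terminal A
    Rb = (Rab * Rbc) / s     # Y resistor attached to terminal B
    Rc = (Rac * Rbc) / s     # Y resistor attached to terminal C
    return Ra, Rb, Rc

def wye_to_delta(Ra, Rb, Rc):
    """Standard wye-to-delta conversion."""
    p = Ra*Rb + Rb*Rc + Ra*Rc
    return p / Rc, p / Rb, p / Ra     # Rab, Rac, Rbc

# Example (arbitrary values): converting a delta to a Y and back recovers the original.
print(wye_to_delta(*delta_to_wye(12.0, 18.0, 6.0)))   # -> (12.0, 18.0, 6.0)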

Δ and Y networks are seen frequently in 3-phase AC power systems (a topic covered in volume II of this book series), but even then they're usually balanced networks (all resistors equal in value) and conversion from one to the other need not involve such complex calculations. When would the average technician ever need to use these equations?

A prime application for Δ-Y conversion is in the solution of unbalanced bridge circuits, such as the one below:

After the Δ-Y conversion

If we perform our calculations correctly, the voltages between points A, B, and C will be the same in the converted circuit as in the original circuit, and we can transfer those values back to the original bridge configuration.

Resistors R4 and R5, of course, remain the same at 18 Ω and 12 Ω, respectively. Analyzing the circuit now as a series/parallel combination, we arrive at the following figures:

We must use the voltage drop figures from the table above to determine the voltages between points A, B, and C, seeing how they add up (or subtract, as is the case with the voltage between points B and C):

Now that we know these voltages, we can transfer them to the same points A, B, and C in the original bridge circuit:

Voltage drops across R4 and R5, of course, are exactly the same as they were in the converted circuit.

At this point, we could take these voltages and determine resistor currents through the repeated use of Ohm's Law (I=E/R):


The voltage figures, as read from left to right, represent voltage drops across the five respective resistors, R1 through R5. I could have shown currents as well, but since that would have required insertion of “dummy” voltage sources in the SPICE netlist, and since we're primarily interested in validating the Δ-Y conversion equations and not Ohm's Law, this will suffice.

REVIEW:

1. “Delta” (Δ) networks are also known as “Pi” (π) networks. 2. “Y” networks are also known as “T” networks. 3. Δ and Y networks can be converted to their equivalent counterparts with the proper resistance equations. By “equivalent,” I mean that the two networks will be electrically identical as measured from the three terminals (A, B, and C). 4. A bridge circuit can be simplified to a series/parallel circuit by converting half of it from a Δ to a Y network. After voltage drops between the original three connection points (A, B, and C) have been solved for, those voltages can be transferred back to the original bridge circuit, across those same equivalent points.

Class AB Power Amplifier used in Ujjain LPT

Class AB operation

Class AB Push-Pull Amplifier

The basic circuit of a class AB push-pull amplifier is shown above in the figure. Here the voltage drop across resistor R2 is adjusted so that it approximately equals the cut-in voltage (about 0.5 V for Si and 0.1 V for Ge). The operation then becomes class AB, i.e. the collector current flows for more than half the cycle of the input signal but less than the complete cycle.

Advantages and drawbacks of push-pull amplifiers

Advantages

(i) Because of the absence of even harmonics in the output of a push-pull amplifier, such a circuit gives more output per active device for a given amount of distortion.

(ii) The d.c. components of the collector currents oppose each other magnetically in the transformer core. This eliminates any tendency toward core saturation and the consequent non-linear distortion that may arise from the curvature of the transformer magnetisation curve.

(iii) The effects of any ripple voltage present in the power supply because of inadequate filtering are balanced out. This cancellation results because the currents produced by the ripple voltages flow in opposite directions in the transformer winding and so do not appear in the load. Of course, the power supply hum also acts on the voltage amplifier stages and so forms part of the input to the power stage; that hum is not eliminated by the push-pull circuit.

Drawbacks

The drawbacks of the push-pull amplifiers are as under:

(i) Requirement of two identical transistors.

(ii) Need for a driver stage to furnish two equal and opposite voltages at the input.

(iii) Need for a bulky and expensive transformer.

Nonlinear Distortion

Any unwanted change in the shape of an a.c. signal is called distortion. One common problem that occurs in the common-emitter amplifier is nonlinear distortion. The output waveform from an amplifier experiencing nonlinear distortion is shown in fig. (a).

Output waveform illustrating non-linear distortion

Notice the difference between the shape of the negative alternation of the signal (normal) and that of its positive alternation (distorted). Non-linear distortion is caused by driving the base-emitter junction of the transistor into its non-linear operating region. This point is illustrated by fig. (b), which shows the basic input characteristics of a transistor.

Normally a transistor is operated so that the base-emitter junction stays in the linear region of operation. In this region, a change in VBE causes a linear (constant-rate) change in IB. If the transistor is operated in the non-linear region, a change in VBE causes a non-linear change in IB. This is evident if we compare the slope of the curve immediately above and below Vk: since the slope of the curve changes, the rate of change of IB also changes.

The base-emitter junction of a transistor may be driven into its non-linear operating region under one of two sets of circumstances:

1. A transistor is normally biased so that IB has a value well within the linear region of operation. If an amplifier is poorly biased, so that the Q-point value of IB is near the non-linear region of operation, even a relatively small a.c. input signal will drive the transistor into the non-linear region and non-linear distortion will result.

2. The amplifier input signal may be large enough to drive even a well-biased amplifier into non-linear operation, causing non-linear distortion, since amplification increases the amplitude of the signal (and of any noise accompanying it).

The solution to the first problem is to redesign the amplifier for a higher value of IB, while the solution to the second is to reduce the amplitude of the input signal.

Non-linear distortion can cause a variety of problems in communication engineering. In stereo systems it may make the audio sound "grainy" or may even change the tones of various musical instruments. In television receivers, distortion in the video circuitry can distort the picture on the CRT. Amplifiers are generally designed and biased to avoid non-linear distortion problems, so in practice the most common cause of non-linear distortion is overdriving an amplifier.

Cross-Over Distortion

In addition to the distortion introduced by non-linearity of the collector characteristics and by non-matching of the two transistors, there is one source of distortion that is caused by non-linearity of the input characteristics.

Recall that silicon transistors must have at least 0.5 V to 0.6 V of forward base-emitter bias before they go into conduction. Since in a class B push-pull amplifier the forward bias is produced by the input signal, both transistors are non-conducting while the input signal is within approximately ±0.5 V. This forms a dead band in the input and produces cross-over distortion in the output. The distortion is called cross-over distortion because it occurs at the instants when operation crosses over from one transistor of the push-pull amplifier to the other, as shown in the figure using the transfer characteristics of the two transistors.

To eliminate cross-over distortion, it is necessary to add a small amount of forward bias to bring each transistor to the verge of conduction, or slightly beyond. This slightly lowers the efficiency of the circuit and wastes some stand-by power, but it removes the cross-over distortion problem. Technically, the operation of the circuit then lies between class A and class B; therefore the circuit operation is often referred to as class AB operation.

Illustration of cross-over distortion

Earth stations

Types, construction and sizes

There are two types of earth stations: those that receive only, and those that receive and transmit. Receive-only stations are used principally for the reception of television signals emitted by satellites, when they are usually known as TVRO stations. They are also used for receiving data and other forms of information that can be displayed visually or in printed form. For two-way links between users, such as telephony, video-conferencing and computer tie-ups, the stations at each end are provided with both transmission and reception facilities.

Both types of station employ similar geometries to capture and focus the signals arriving from the satellite (the downlink signals) and to aim signals at the satellite (the uplink). The figure shows how the downlink signals are captured by the earth-station antenna, which usually takes the form of a circular parabolic dish and which reflects the captured signals to the focus of the parabola, where a collecting 'horn' is mounted. For transmission purposes, the horn emits signals which are reflected off the parabola to form the uplink beam.

Mounting a horn in this manner, at the focus of a circular dish, means that a structure has to be provided for the mounting, and this structure interferes with the line of sight of the dish, reducing its efficiency. For this reason many new stations now employ parabolic dishes that are elliptical in front view, with the horn offset from the centre line of the dish. The dishes are elliptical so that they present an apparently circular shape as 'seen' from the satellite, giving a circular cone of signal into the horn.

Signals entering the horn, or leaving it, must be conducted to and from the amplifying equipment in the station (Fig. 1). This conduction takes place through a length of waveguide: rectangular-section tubing of a size dependent on the operating frequencies. In some designs of small station this waveguide is itself used as the horn-mounting structure to minimise interference. To minimise losses in conduction, the waveguide length must be kept as short as possible; this is easier in offset designs than with centre-fed circular dishes. The shortest length of all is achieved by what is known as a Cassegrain antenna, in which, by using double reflectors, the amplifying equipment can be mounted directly behind the horn. This type of antenna is, however, seen only in the largest and most expensive earth stations, where maximum performance is essential.

Fig. 2 shows the Cassegrain feed and amplifier.

Fig. 3 shows the principal components of the transmission and reception equipment in a small earth station used for voice, data and video exchange in a Ku-band business service. On the transmit side, incoming digital signals are combined in a multiplexer to produce a single data stream, which is then passed to a modulator. The modulator is fed with an intermediate frequency (IF) carrier, which operates at 70 MHz or 140 MHz depending on the characteristics of the system; this carrier is changed in phase (modulated) according to whether a digit '1' or a '0' is being imposed on it. An IF amplifier then filters and amplifies the phased carrier and passes it to an up-converter, which changes the carrier frequency to that of the space system, in this case 14 GHz. The 14 GHz carrier is then passed through a high power amplifier (HPA) to the antenna for up-linking to the satellite. Depending on the receive sensitivity of the satellite, the size of the earth-station antenna and the quantity of traffic to be handled, the RF output power of the HPA can vary from about 50 watts to 1 or 2 kilowatts.

The receive side operates in reverse. The phased 12 GHz carrier arriving from the satellite is passed first into a low-noise amplifier, typically today a field-effect transistor (FET) amplifier. The carrier is down-converted to IF and is filtered and re-amplified at that frequency. After amplification the signal passes into a demodulator, which detects the phase shifts in the carrier and converts them back into the original digits that imposed the changes at the transmitting station. The final digital stream is then demultiplexed into the original voice, data and video signals intended for the receiving station.


EIRP

EIRP is the product of the RF power of the satellite and its antenna gain. EIRP stands for equivalent isotropic radiated power; sometimes 'effective' is used instead of 'equivalent'. It is the term used universally in satellite communication because it is the one figure that provides a measure of the quality of the downlink service offered by a satellite independently of its coverage and its power. It thus enables suppliers and users to compare the qualities of competing satellite systems without having to worry about what area they cover or what power the satellites produce.


The gain of the antenna increases with its diameter and decreases with the operating wavelength (it is proportional to the square of the diameter and inversely proportional to the square of the wavelength). The working formula for gain is:

G = e (πD/λ)²

where e is the efficiency of the design, usually taken as 55%,

π is the constant 3.14,

D is the antenna diameter in meters, and

λ is the operating wavelength in meters.

Putting in the values of e and π, this reduces to

G = 5.4 (D/λ)²

Take an example of an antenna 1 m in diameter, working at 12 GHz. Then D = 1 m and λ = 0.025 m, so

G = 5.4 × (1/0.025)² = 5.4 × 40² = 8640 = 39.4 dBi

As another example, take a 1.5 m antenna working at 4 GHz. Then D = 1.5 m and λ = 0.075 m, so

G = 5.4 × (1.5/0.075)² = 5.4 × 20² = 2160 = 33.3 dBi

Fig. 4 shows a range of gains against varying diameters and frequencies. To obtain the EIRP, all that is necessary is to add these gains to the output power of the satellite amplifiers, expressed in dBW. If, in both examples, the output power per transponder is 20 watts, that is 13 dBW (since 10 log 20 ≈ 13), the EIRP of the first example is 39.4 + 13 = 52.4 dBW and the EIRP of the second example is 33.3 + 13 = 46.3 dBW.
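The two worked examples can be reproduced with a few lines of Python. The sketch below uses the exact value of π rather than the rounded 5.4 factor, so the results may differ from the text in the last decimal place; the 20 W transponder power is the figure used above.

import math

def antenna_gain_dBi(D, f_hz, e=0.55):
    """Parabolic antenna gain G = e*(pi*D/lambda)**2, expressed in dBi."""
    lam = 3e8 / f_hz                      # wavelength in meters
    g = e * (math.pi * D / lam) ** 2
    return 10 * math.log10(g)

def eirp_dBW(P_watts, gain_dBi):
    """EIRP (dBW) = transmit power in dBW + antenna gain in dBi."""
    return 10 * math.log10(P_watts) + gain_dBi

# The two examples from the text: 1 m at 12 GHz and 1.5 m at 4 GHz, 20 W transponders
for D, f in ((1.0, 12e9), (1.5, 4e9)):
    g = antenna_gain_dBi(D, f)
    print(f"D = {D} m at {f/1e9:.0f} GHz: G = {g:.1f} dBi, "
          f"EIRP with 20 W = {eirp_dBW(20, g):.1f} dBW")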


If the diameter of the first antenna were halved, its gain would fall by a factor of 4 (about 6 dB); to obtain the same EIRP of 52.4 dBW, the output power would then have to be raised by the same factor, to about 19 dBW, which is 80 watts. This clearly is the case because, in halving the diameter of the antenna, its gain is reduced by a factor of 4 (gain varies as D²), so to maintain the same EIRP the power must be increased by a factor of 4. It is not, however, possible to increase or decrease the EIRP by simply changing antenna diameters and output powers at will. As observed earlier, the conical beam produced by a circular antenna depends on the antenna diameter, and the size of the beam determines the coverage area that will be seen on the earth. Thus it is the coverage area that determines the size of the antenna, and once that is fixed the EIRP can be altered only by changing the output power.

The relationship between the size of the conical beam and the antenna diameter is given by the formula:

Θ = 70 (λ/D)

where

Θ = half-power beamwidth in degrees

λ = operating wavelength in meters

D= antenna diameter in meters

Thus the beamwidth of a 1 m diameter antenna operating at 12 GHz (λ = 0.025 m) is

Θ = (70 × 0.025) / 1 = 1.75 degrees
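The same beamwidth formula in a couple of lines of Python, reproducing the 1.75 degree figure:

def half_power_beamwidth_deg(D, f_hz):
    """Theta = 70 * (lambda / D), in degrees, as given in the text."""
    lam = 3e8 / f_hz
    return 70 * lam / D

# The text's example: 1 m dish at 12 GHz
print(half_power_beamwidth_deg(1.0, 12e9))   # 1.75 degrees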


Up-link Transmission

Traffic-handling capacity on the uplink, with the earth station transmitting to the satellite, depends on the same three factors that determine the downlink capacity, except that the EIRP is now that of the earth station while the gain and noise are those associated with the satellite.

As with the earth station, the EIRP of the satellite is the product of its antenna gain and its transmitter power, and its receive performance is expressed by its G/T. Here, however, it is not possible to increase the antenna diameter to improve performance, because the diameter has already been decided by the required coverage area. Further, the satellite antenna is not looking at clear sky, with its low noise temperature of about 40 K, but at the earth, which emits radiation at about 300 K. Thus the receive performance of the satellite is constrained by its relative lack of gain and its higher noise, and is invariably poorer than that of the earth station working with it. To compensate for this, the earth station simply uses more power in its transmit amplifier. Unlike the satellite, which is limited in the power it can transmit, higher power in the earth station causes no supply problems, because the power, usually in the range of a few hundred watts, is simply drawn from the mains. However, because of the radiation spill-over that can occur with small antennas, the output RF power of the earth station may be restricted to prevent interference with adjacent satellites; all stations are indeed subject to international and national regulations on this matter of interference. In some cases, to provide an adequate uplink EIRP without exceeding regulatory limitations, the gain of the earth station has to be increased by increasing the diameter of its antenna, but at the expense of a narrower beamwidth and thus a need for more accurate pointing.

Fig. 10

Fig.11

Pointing an Earth-station

All operators of commercial communication satellites provide tables to show users where to point their earth stations to find the satellite that they are going to use. The user locates his exact latitude and longitude in the tables, which then show the required elevation of the antenna (degrees from horizontal) and azimuth (degrees clockwise from true North).

Fig. is a general chart showing elevation and azimuth angles for any earth station and any satellite location in the geosynchronous arc. Here the longitude difference needs to be known, that is, the number of degrees between the longitude of the station and that of the satellite. Generally a minimum elevation of 5 degrees is satisfactory. Below this, the station can pick up interference from terrestrial radio sources and suffer from the earth's heat radiation, which will increase noise in the system.

The chart will clearly offer only an approximation of the elevation and azimuth angles, but it is good enough for very small earth stations of about 1 m diameter or less. These can capture the satellite initially within their relatively broad beamwidth, using the chart, and then the antenna can be fine-pointed to receive the best signal. Larger stations need more accurate, tabular pointing data.
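For readers who prefer a formula to the chart, the sketch below uses the standard geostationary look-angle relations (Earth radius 6378 km, orbit radius 42164 km). The Ujjain coordinates and the 93.5 degrees E orbital slot in the example are illustrative assumptions, not values taken from any operator's tables.

    import math

    R_E = 6378.0       # Earth radius, km
    R_GEO = 42164.0    # geostationary orbit radius, km

    def look_angles(station_lat_deg, lon_diff_deg):
        """Approximate elevation and azimuth (degrees, clockwise from true North)
        toward a geostationary satellite.  lon_diff = satellite longitude minus
        station longitude, in degrees (east positive)."""
        lat = math.radians(station_lat_deg)
        dlon = math.radians(lon_diff_deg)
        cos_beta = math.cos(lat) * math.cos(dlon)      # central angle to sub-satellite point
        sin_beta = math.sqrt(1.0 - cos_beta ** 2)
        elevation = math.degrees(math.atan2(cos_beta - R_E / R_GEO, sin_beta))
        azimuth = (180.0 - math.degrees(math.atan2(math.tan(dlon), math.sin(lat)))) % 360.0
        return elevation, azimuth

    # e.g. Ujjain (approx. 23.2 N, 75.8 E) looking at a satellite at 93.5 E:
    print(look_angles(23.2, 93.5 - 75.8))   # roughly 56 degrees elevation, 141 degrees azimuth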

Operators' tables should also show when 'sun blinding' will occur, that is, when the station, the satellite and the sun are in the same line, which happens twice per year. The sun's noise temperature increases the overall noise in the station receiver, which will interfere with communication for a few minutes when the conjunction occurs.

Fig.12

A list of satellite names and their look angles, calculated as azimuth and elevation, is given below:-

Transponder

Fig. 1: Transponder block diagram, receiver section and transmitting section: 6 GHz input, filter, pre-amplifier, down-converter, 4 GHz high-power amplifier, filter, 4 GHz output.

INSAT 1A(SSL)

Insat-1 was a multi-purpose satellite system providing two high-power TV broadcast transponders and twelve national-coverage telecommunications transponders, in addition to meteorological services.

The Insat-1A was launched by a Delta in April 1982 but was abandoned in September 1983 when its attitude control propellant was exhausted.

When Insat-1B was launched on 30 August 1983, it almost suffered the same fate as the Insat-1A. It was not until mid-September that Ford and Indian controllers succeeded in deploying its solar array. By then it had been stationed at 74°E in place of Insat-1A. Full operational capability was achieved in October 1983. It continued to operate into 1990 with all its 4375 two-way voice or equivalent circuits in use. Around 36,000 earth images were returned.

Eleven of its 12 C-band transponders and its two S-band transponders provided direct nationwide TV and communications to thousands of remote villages, plus a detailed weather and disaster-warning service. Around 35,000 Indian-built, 3 to 3.6 metre diameter, receive-only earth terminals were in place to supply rural communities with social and educational programmes. It was relegated to spare status on 17 July 1990 by the Insat-1D. The Insat-1B was finally removed from GEO in August 1993, after being replaced at 93.5°E by Insat-2B. The total cost of Insat-1B and its backup Insat-1C, including the PAM-D launch, was estimated at $140 million.

The Insat-1C satellite was launched on 21 July 1988 from Kourou for location at 93.5°E to bring the Insat system up to full capacity. Half of the 12 C-band transponders and its two S-band transponders were lost when a power system failure knocked out one of the two buses, but the meteorological earth images and its data collection systems were both fully operational. Earth lock was lost 22 November 1989 and the satellite was abandoned. Reported insurance payout was $70 million.

The specification for the Insat-1D is the same as that of the Insat-1B but with expanded battery and propellant capacities. It was launched on 12 June 1990 to conclude the first-generation Insat series. Launch had been planned for 29 June 1989, but 10 days before launch it was seriously damaged during launch preparations, when a crane hook fell on it. The fully insured satellite was repaired by Ford Aerospace at a reported cost of $10 million. It also suffered $150,000 of damage during the October 1989 Californian earthquake. It assumed the prime role from Insat-1B on 17 July 1990. Design life was seven years.

Nation: India

Type / Application: Communication / Meteorology

Operator: Insat

Contractors: Ford Aerospace

Equipment: 12 C-band transponders, 2+1 S-band transponders

Configuration: Insat-1 Bus

Propulsion: R-4D-11

Power: Deployable solar array, batteries

Lifetime: 7 years

Mass: 1152 kg (#1A, 1B); 1190 kg (#1C, 1D)

Orbit: GEO

Satellite   Date         Launch Site   Launch Vehicle      Remarks
Insat 1A    10.04.1982   CC LC-17A     Delta-3910 PAM-D
Insat 1B    30.08.1983   CC LC-39A     Shuttle [PAM-D]     with Challenger F2 (STS 8)
Insat 1C    21.07.1988   Ko ELA-1      Ariane-3            with ECS 5
Insat 1D    12.06.1990   CC LC-17B     Delta-4925

Radio Wave Spectrum

Fig. shows the allotment of the various frequency bands and bandwidths used by different services.

Spectrum:

The electromagnetic spectrum is the range of all possible frequencies of electromagnetic radiation. The "electromagnetic spectrum" of an object is the characteristic distribution of electromagnetic radiation emitted or absorbed by that particular object. Having understood the meaning of spectrum and bandwidth allocation, we can now look at the much-discussed 2G spectrum.

What is 2G Spectrum

The 2G spectrum scam in India involved the issue of 1232 licenses of the 2G spectrum by the ruling Congress-led UPA alliance to 85 companies, including many new telecom companies with little or no experience in the telecom sector, at a price set in the year 2001. The scam involved allegations regarding

- the under-pricing of the 2G spectrum by the Department of Telecommunications, which resulted in a heavy loss to the exchequer, and
- the illegal manipulation of the spectrum allocation process to favour select companies.

The issue came to light after the auction of airwaves for 3G services, which brought 677,190 crore (US$151.01 billion) to the exchequer. A report submitted by the Comptroller and Auditor General, based on the money collected from 3G licenses, estimated that the loss to the exchequer due to under-pricing of the 2G spectrum was 176,379 crore (US$39.33 billion).

The scam came to public notice when the Supreme Court of India took Subramaniam Swamy's complaints on record. If the communication engineers had been aware of the spectrum being allocated at the time of allocation, this scam could have been avoided.

Transmitting Antenna

The satellite also contains a transmitting antenna to transmit a signal back to the earth.

Down-linking

Down-linking is the process in which the RF (radio-frequency) signal from the satellite is sent back to earth. The 6 GHz signal is down-converted to a 4 GHz RF signal in the satellite and then sent back to earth; hence the process is known as down-linking. The earth station contains a number of parabolic antennas for receiving the RF from the satellite. The RF is then sent to the system unit, after which it passes through a number of units described below:-

Fig.: Earth-station signal chain: PDA, Satellite System Unit, Receiver (IRD), Audio/Video Switcher, Exciter, Band Pass Filter (BPF), Directional Coupler, Transmitting Antenna (tower).

Parabolic-Dish Antenna

The parabolic dish antenna is placed at the receiving earth station to receive the RF signal from the satellite. Its shape is parabolic so that it can collect the maximum signal from the satellite. The only difference between the Cassegrain-feed antenna and the plain parabolic dish antenna (PDA) is that the Cassegrain feed contains two reflectors: one larger, and a smaller one placed above the larger at a fixed distance. The dish is shaped as a paraboloid because all the RF signals arriving from the satellite are then concentrated at a single focal point, so a highly concentrated, good-quality signal can be obtained.
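As a small illustration of this focusing property (a textbook relation, not a formula given in this report), the focal length of a paraboloidal dish can be estimated from its diameter and depth as f = D²/(16d); the feed or LNBC sits near that focal point. The dish dimensions below are made up for the example.

    def focal_length_m(diameter_m, depth_m):
        """Focal length of a paraboloidal dish: f = D^2 / (16 * depth)."""
        return diameter_m ** 2 / (16.0 * depth_m)

    print(focal_length_m(1.2, 0.15))   # 0.6 m for a 1.2 m dish that is 0.15 m deep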

Skin Effect

It follows Faraday's principle.

Fig.: Feed and LNBC on the dish, connected by co-axial cable to the Satellite System Unit, which supplies 18 V DC.

The RF signal from the satellite is received by the PDA (parabolic dish antenna) at the receiving end. The 18 volts DC from the system unit and the RF signal from the PDA travel in opposite directions along the same co-axial cable, as shown in the fig.; therefore this effect is known as the skin effect.

Michel Faraday’s Experiment

Faraday's ice pail experiment demonstrated that the charge resided only on the exterior of a charged conductor, and exterior charge had no influence on anything enclosed within a conductor. This is because the exterior charges redistribute such that the interior fields due to them cancel. This shielding effect is used in what is now known as a Faraday cage.


An external electrical field causes the charges to rearrange, which cancels the field inside.

Faraday’s law:

Emf = −N dΦ/dt (where N is the number of turns and Φ is the magnetic flux)

Basically there are 6 units at the LPT (low power transmitter); they are as follows:-

1. Monitoring Unit (with (i) Pattern Generator and (ii) Colour Step Amplifier)
2. Receiver Unit
3. Satellite System Unit
4. Processing Unit
5. Transmitting Unit
6. Power Unit

1.Monitoring Unit

It contains two sets of colour TV receivers into which the input signal enters. The TV has a large variety of functions; for example, it has functions to adjust the contrast, brightness, colour, sharpness, etc. It has a channel-scan function with which it scans the different channels. It also has a significant function in the menu section by which the TV can adjust itself according to the different systems adopted around the world. These systems are as follows:-

The PAL System (Phase Alternation by Line)

Television encoding systems by nation; countries using the PAL system are shown in the figure.

PAL, short for Phase Alternating Line, is an analogue television colour encoding system used in broadcast television systems in many countries. Other common systems are NTSC and SECAM. This section primarily discusses the PAL colour encoding system; frame rates, image resolution and audio modulation, including the 625-line / 50 field (25 frame) per second television standard itself, are properties of the underlying broadcast television systems and analogue television standards.

India adopted this system. It was chosen keeping in view the interests of the common and poor people of India, and it emphasises compatibility and reverse compatibility. This means that a black and white television should be capable of showing a colour transmission and, similarly, a colour television should be capable of showing a black and white transmission. The details of the PAL system are shown in the chain in the fig. The list of countries adopting PAL along with India is shown below:-

History:-

In the 1950s, the Western European countries commenced planning to introduce colour television, and were faced with the problem that the NTSC standard demonstrated several weaknesses, including colour tone shifting under poor transmission conditions. To overcome NTSC's shortcomings, alternative standards were devised, resulting in the development of the PAL and SECAM standards. The goal was to provide a colour TV standard for the European picture frequency of 50 fields per second (50 hertz), and finding a way to eliminate the problems with NTSC. PAL was developed by Walter Bruch at Telefunken in Germany. The format was first unveiled in 1963, with the first broadcasts beginning in the United Kingdom in 1964 and Germany in 1967,[1] though the one BBC channel initially using the broadcast standard only began to broadcast in colour from 1967. Telefunken was later bought by the French electronics manufacturer Thomson. Thomson also bought the Compagnie Générale de Télévision, where Henri de France developed SECAM, the first European standard for colour television. Thomson, now called Technicolor SA, also owns the RCA brand and licenses it to other companies; Radio Corporation of America, the originator of that brand, created the NTSC colour TV standard before Thomson became involved.

The term PAL is often used informally to refer to a 625-line/50 Hz television system, and to differentiate it from a 525-line/60 Hz NTSC system. Accordingly, DVDs are labeled as either PAL or NTSC (referring informally to the line count and frame rate) even though technically the discs do not have either PAL or NTSC composite colour. The line count and frame rate are defined as EIA 525/60 or CCIR 625/50. PAL and NTSC are only the method of colour transmission.

Colour Encoding:- Both the PAL and the NTSC system use a quadrature amplitude modulated subcarrier carrying the colour information, added to the luminance video signal to form a composite baseband signal. The frequency of this subcarrier is 4.43361875 MHz for PAL, compared to 3.579545 MHz for NTSC. The SECAM system, on the other hand, uses a frequency modulation scheme on its two line-alternate colour subcarriers, 4.25000 and 4.40625 MHz.

The name "Phase Alternating Line" describes the way that the phase of part of the colour information on the video signal is reversed with each line, which automatically corrects phase errors in the transmission of the signal by cancelling them out, at the expense of vertical frame colour resolution. Lines where the colour phase is reversed compared to NTSC are often called PAL or phase-alternation lines, which justifies one of the expansions of the acronym, while the other lines are called NTSC lines. Early PAL receivers relied on the human eye to do that cancelling; however, this resulted in a comb-like effect known as Hanover bars on larger phase errors. Thus, most receivers now use a chrominance delay line, which stores the received colour information on each line of display; an average of the colour information from the previous line and the current line is then used to drive the picture tube. The effect is that phase errors result in saturation changes, which are less objectionable than the equivalent hue changes of NTSC.

A minor drawback is that the vertical colour resolution is poorer than the NTSC system's, but since the human eye also has a colour resolution that is much lower than its brightness resolution, this effect is not visible. In any case, NTSC, PAL, and SECAM all have chrominance bandwidth (horizontal colour detail) reduced greatly compared to the luminance signal.

Spectrum of a System I television channel with PAL

Oscillogram of composite PAL signal - one frame

Oscillogram of composite PAL signal - several lines

Oscillogram of composite PAL signal - two lines.

The 4.43361875 MHz frequency of the colour carrier is a result of 283.75 colour clock cycles per line plus a 25 Hz offset to avoid interference. Since the line frequency (number of lines per second) is 15625 Hz (625 lines x 50 Hz / 2), the colour carrier frequency calculates as follows: 4.43361875 MHz = 283.75 * 15625 Hz + 25 Hz.
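The same arithmetic in a couple of lines of Python, purely as a check of the figures quoted above:

    line_rate_hz = 625 * 50 / 2                    # 15 625 Hz
    subcarrier_hz = 283.75 * line_rate_hz + 25     # 25 Hz offset to avoid interference
    print(subcarrier_hz)                           # 4 433 618.75 Hz = 4.43361875 MHz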

The original colour carrier is required by the colour decoder to recreate the colour difference signals. Since the carrier is not transmitted with the video information it has to be generated locally in the receiver. In order that the phase of this locally generated signal can match the transmitted information, a 10 cycle burst of colour subcarrier is added to the video signal shortly after the line sync pulse but before the picture information, during the so called back porch. This colour burst is not actually in phase with the original colour subcarrier but leads it by 45 degrees on the odd lines and lags it by 45 degrees on the even lines. This swinging burst enables the colour decoder circuitry to distinguish the phase of the R-Y vector which reverses every line.

PAL vs. NTSC

NTSC receivers have a tint control to perform colour correction manually. If this is not adjusted correctly, the colours may be faulty. The PAL standard automatically cancels hue errors by phase reversal, so a tint control is unnecessary. Chrominance phase errors in the PAL system are cancelled out using a 1H delay line, resulting in lower saturation, which is much less noticeable to the eye than NTSC hue errors.

However, the alternation of colour information — Hanover bars — can lead to picture grain on pictures with extreme phase errors even in PAL systems, if decoder circuits are misaligned or use the simplified decoders of early designs (typically to overcome royalty restrictions). In most cases such extreme phase shifts do not occur. This effect will usually be observed when the transmission path is poor, typically in built up areas or where the terrain is unfavourable. The effect is more noticeable on UHF than VHF signals as VHF signals tend to be more robust. In the early 1970s some Japanese set manufacturers developed decoding systems to avoid paying royalties to Telefunken. The Telefunken license covered any decoding method that relied on the alternating subcarrier phase to reduce phase errors. This included very basic PAL decoders that relied on the human eye to average out the odd/even line phase errors. One solution was to use a 1H delay line to allow decoding of only the odd or even lines. For example, the chrominance on odd lines would be switched directly through to the decoder and also be stored in the delay line. Then, on even lines, the stored odd line would be decoded again. This method effectively converted PAL to NTSC. Such systems suffered hue errors and other problems inherent in NTSC and required the addition of a manual hue control.

PAL and NTSC have slightly divergent colour spaces, but the colour decoder differences here are ignored.

Parameter                                                                     Value
Pixel clock frequency (digital sources with 704 or 720 active pixels/line)   13.5 MHz
Bandwidth                                                                     5 MHz[2]
Horizontal sync polarity                                                      Negative
Total time for each line                                                      64.000 µs[3][4]
Front porch (A)                                                               1.65 +0.4/−0.1 µs
Sync pulse length (B)                                                         4.7 ±0.20 µs
Back porch (C)                                                                5.7 ±0.20 µs
Active video (D)                                                              51.95 +0.4/−0.1 µs

PAL vs. SECAM

SECAM is an earlier attempt at compatible colour television which also tries to resolve the NTSC hue problem. It does so by applying a different method to colour transmission, namely alternate transmission of the U and V vectors and frequency modulation, while PAL attempts to improve on the NTSC method. SECAM transmissions are more robust over longer distances than NTSC or PAL. However, owing to their FM nature, the colour signal remains present, although at reduced amplitude, even in monochrome portions of the image, thus being subject to stronger cross colour. Like PAL, a SECAM receiver needs a delay line.

PAL signal details

For PAL-B/G the signal has these characteristics.

(Total horizontal sync time 12.05 µs.) After 0.9 µs, a colour burst of 10±1 cycles lasting 2.25±0.23 µs is sent. Most rise/fall times are in the 250±50 ns range. Amplitude is 100% for white level, 30% for black, and 0% for sync.[3] The CVBS electrical amplitude is 1.0 V peak-to-peak with an impedance of 75 Ω.

The composite video (CVBS) signal used in systems M and N, before combination with a sound carrier and modulation onto an RF carrier.

The vertical timings are:

Parameter Value

Vertical lines 313 (625 total)

Vertical lines visible 288 (576 total)

Vertical sync polarity Negative (burst)

Vertical frequency 50 Hz

Sync pulse length (F) 0.576 ms (burst)[6]

Active video (H) 18.4 ms

(Total vertical sync time 1.6 ms)

The fig. shows how the waveform behaves for signal

The fig. shows how waveform behaves for black signal

Fig. shows nature of input white signal

Fig. Shows nature of input pink signal

Comparative Quality

PAL Colour Bar Test Pattern (with a "Bucket of Blood")

Colour bars are used to calibrate video and television monitors. The example above is a common type of PAL colour bar test pattern. The bar at the bottom is often referred to as a "bucket of blood" and may contain some text identifying the source. Colour bars are usually generated by specialised video calibration equipment, such as a sync pulse generator, or by a video camera. When adjusting the contrast, also watch the white square in the lower left. If the contrast is too high, the white square appears to "spill" into the surrounding squares. Adjust the contrast until the white square no longer spills into the surrounding squares. Important: contrast should only be adjusted after brightness. The SMPTE color bars are an example of such a test pattern.

Calibrating Video Monitors with Color Bars

Editors and broadcast designers shouldn't rely on an uncalibrated monitor when making crucial adjustments to the color and brightness of their programs. Instead, it's important to use a calibrated broadcast monitor to ensure that any adjustments made to exposure and color quality are accurate. Monitors are calibrated using SMPTE standard color bars. Brightness and contrast are adjusted by eye, using the color bars onscreen. Adjusting chroma and phase involves using the "blue only" button found on professional video monitors. This calibration should be done to all monitors in use, whether they're in the field or in the editing room. To calibrate your monitor:

1. Connect a color bars or test pattern generator to the monitor you're using, or output one of the built-in color bars generators in Final Cut Pro. Important: Avoid using still image graphics of color bars. For more information, see Y′CBCR Rendering and Color Bars.
2. Turn on the monitor and wait approximately 30 minutes for the monitor to "warm up" and reach a stable operating temperature.
3. Select the appropriate input on the video monitor so that the color bars are visible on the screen. Near the bottom-right corner of the color bars are three black bars of varying intensities. Each one corresponds to a different brightness value, measured in IRE. (IRE originally stood for Institute of Radio Engineers, which has since merged into the modern IEEE organization; the measurement is a video-specific unit of voltage.) These are the PLUGE (Picture Lineup Generation Equipment) bars, and they allow you to adjust the brightness and contrast of a video monitor by helping you establish what absolute black should be.
4. Turn the chroma level on the monitor all the way down. This is a temporary adjustment that allows you to make more accurate luma adjustments. The Chroma control may also be labeled Color or Saturation.
5. Adjust the brightness control of your monitor to the point where you can no longer distinguish between the two PLUGE bars on the left and the adjacent black square. At this point, the brightest of the bars (11.5 IRE) should just barely be visible, while the two PLUGE bars on the left (5 IRE and 7.5 IRE) appear to be the same level of black.
6. Now, turn the contrast all the way up so that this bar becomes bright, and then turn it back down. The point where this bar is barely visible is the correct contrast setting for your monitor. (The example shown below is exaggerated to demonstrate.)

7. Once you have finished adjusting luma settings, turn up the Chroma control to the middle (detent) position. Note: Some knobs stop subtly at a default position. This is known as the detent position of the knob. If you're adjusting a PAL monitor, then you're finished. The next few steps are color adjustments that only need to be made to NTSC monitors.
8. Press the "blue only" button on the front of your monitor to prepare for the adjustment of the Chroma and Phase controls. Note: This button is usually only available on professional monitors.
9. Make the following adjustments based on the type of video signal you're monitoring:
- If you're monitoring an SDI or component Y′CBCR signal, you only need to adjust the Chroma control so that the tops and bottoms of the alternating gray bars match. This is the only adjustment you need to make, because the Phase control has no effect with SDI or component signals.
- If you're monitoring a Y/C (also called S-Video) signal, it's being run through an RGB decoder that's built into the monitor. In this case, adjust both the Chroma and Phase controls. The chroma affects the balance of the outer two gray bars; the phase affects the balance of the inner two gray bars. Adjustments made to one of these controls affects the other, so continue to adjust both until all of the gray bars are of uniform brightness at top and bottom.
Note: The step in the second bullet also applies to the monitoring of composite signals, but you really shouldn't be monitoring a composite signal if you're doing color correction. Once your monitor is correctly calibrated, all the gray bars will be evenly gray and all the black bars evenly black.

PAL broadcast systems

This table illustrates the differences:

                          PAL B        PAL G, H     PAL I        PAL D/K      PAL M        PAL N
Transmission band         VHF          UHF          UHF/VHF*     VHF/UHF      VHF/UHF      VHF/UHF
Fields                    50           50           50           50           60           50
Lines                     625          625          625          625          525          625
Active lines              576          576          582**        576          480          576
Channel bandwidth         7 MHz        8 MHz        8 MHz        8 MHz        6 MHz        6 MHz
Video bandwidth           5.0 MHz      5.0 MHz      5.5 MHz      6.0 MHz      4.2 MHz      4.2 MHz
Colour subcarrier (MHz)   4.43361875   4.43361875   4.43361875   4.43361875   3.575611     3.58205625
Sound carrier             5.5 MHz      5.5 MHz      6.0 MHz      6.5 MHz      4.5 MHz      4.5 MHz

NTSC (National Television System Committee) System:-

Lines and refresh rate

NTSC color encoding is used with the system M television signal, which consists of 29.97 interlaced frames of video per second, or the nearly identical system J in Japan. Each frame consists of a total of 525 scanlines, of which 486 make up the visible raster. The remainder (the vertical blanking interval) are used for synchronization and vertical retrace. This blanking interval was originally designed to simply blank the receiver's CRT to allow for the simple analog circuits and slow vertical retrace of early TV receivers. However, some of these lines can now contain other data such as vertical interval timecode (VITC). In the complete raster (ignoring half-lines), the even-numbered or "lower" scanlines (every other line that would be even if counted in the video signal, e.g. {2,4,6,...,524}) are drawn in the first field, and the odd-numbered or "upper" lines (every other line that would be odd if counted in the video signal, e.g. {1,3,5,...,525}) are drawn in the second field, to yield a flicker-free image at the field refresh frequency of approximately 59.94 Hz (actually 60 Hz/1.001). For comparison, 576i systems such as PAL-B/G and SECAM use 625 lines (576 visible), and so have a higher vertical resolution, but a lower temporal resolution of 25 frames or 50 fields per second.

The NTSC field refresh frequency in the black-and-white system originally exactly matched the nominal 60 Hz frequency of alternating current power used in the United States. Matching the field refresh rate to the power source avoided intermodulation (also called beating), which produces rolling bars on the screen. When color was later added to the system, the refresh frequency was shifted slightly downward to 59.94 Hz to eliminate stationary dot patterns in the difference frequency between the sound and color carriers, as explained below in "Color encoding". Synchronization of the refresh rate to the power incidentally helped kinescope cameras record early live television broadcasts, as it was very simple to synchronize a film camera to capture one frame of video on each film frame by using the alternating current frequency to set the speed of the synchronous AC motor-drive camera. By the time the frame rate changed to 29.97 Hz for color, it was nearly as easy to trigger the camera shutter from the video signal itself.

The actual figure of 525 lines was chosen as a consequence of the limitations of the vacuum-tube-based technologies of the day. In early TV systems, a master voltage-controlled oscillator was run at twice the horizontal line frequency, and this frequency was divided down by the number of lines used (in this case 525) to give the field frequency (60 Hz in this case). This frequency was then compared with the 60 Hz power-line frequency and any discrepancy corrected by adjusting the frequency of the master oscillator. For interlaced scanning, an odd number of lines per frame was required in order to make the vertical retrace distance identical for the odd and even fields, which meant the master oscillator frequency had to be divided down by an odd number. At the time, the only practical method of frequency division was the use of a chain of vacuum tube multivibrators, the overall division ratio being the mathematical product of the division ratios of the chain. Since all the factors of an odd number also have to be odd numbers, it follows that all the dividers in the chain also had to divide by odd numbers, and these had to be relatively small due to the problems of thermal drift with vacuum tube devices. The closest practical sequence to 500 that meets these criteria was 3 × 5 × 5 × 7 = 525. (For the same reason, 625-line PAL-B/G and SECAM use 5 × 5 × 5 × 5, the old British 405-line system used 3 × 3 × 3 × 3 × 5, the French 819-line system used 3 × 3 × 7 × 13, etc.)
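A quick check of the divider chains mentioned above:

    from functools import reduce

    chains = {
        "NTSC 525":      [3, 5, 5, 7],
        "PAL/SECAM 625": [5, 5, 5, 5],
        "British 405":   [3, 3, 3, 3, 5],
        "French 819":    [3, 3, 7, 13],
    }
    for name, factors in chains.items():
        print(name, reduce(lambda a, b: a * b, factors))   # 525, 625, 405, 819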

Colorimetry The original 1953 color NTSC specification, still part of the United States Code of Federal Regulations, defined the colorimetric values of the system as follows:-

Original NTSC colorimetry (1953) CIE 1931 x CIE 1931 y

primary red 0.67 0.33

primary green 0.21 0.71

primary blue 0.14 0.08

white point (CIE illuminant C) 0.310 0.316

Early receivers, such as the RCA CT-100, were faithful to this specification, having a larger gamut than most of today's monitors. Their low-efficiency phosphors, however, were dark and long-persistent, leaving trails after moving objects. Starting in the late 1950s, picture tube phosphors would sacrifice saturation for increased brightness; this deviation from the standard, at both the receiver and broadcaster ends, was the source of considerable color variation.

Color correction in studio monitors and home receivers

To ensure more uniform color reproduction, receivers started to incorporate color correction circuits that converted the received signal (encoded for the colorimetric values listed above) into signals encoded for the phosphors actually used within the receiver. Since such color correction cannot be performed accurately on the nonlinear (gamma-corrected) signals transmitted, the adjustment can only be approximated, introducing both hue and luminance errors for highly saturated colors. Similarly, at the broadcaster stage, in 1968-69 the Conrac Corp., working with RCA, defined a set of controlled phosphors for use in broadcast color picture monitors. This specification survives today as the SMPTE "C" phosphor specification:

SMPTE "C" colorimetry CIE 1931 x CIE 1931 y primary red 0.630 0.340 primary green 0.310 0.595 primary blue 0.155 0.070 white point (CIE illuminant D65) 0.3127 0.3290

As with home receivers, it was further recommended[10] that studio monitors incorporate similar color correction circuits so that broadcasters would transmit pictures encoded for the original 1953 colorimetric values, in accordance with FCC standards. In 1987, the Society of Motion Picture and Television Engineers (SMPTE) Committee on Television Technology, Working Group on Studio Monitor Colorimetry, adopted the SMPTE C (Conrac) phosphors for general use in Recommended Practice 145,[11] prompting many manufacturers to modify their camera designs to directly encode for SMPTE "C" colorimetry without color correction, as approved in SMPTE standard 170M, "Composite Analog Video Signal — NTSC for Studio Applications" (1994). As a consequence, the ATSC digital television standard states that for 480i signals, SMPTE "C" colorimetry should be assumed unless colorimetric data is included in the transport stream.

Variations

Japanese NTSC uses the same colorimetric values for red, blue, and green, but employs a different white point of CIE Illuminant D93 (x=0.285, y=0.293). Both the PAL and SECAM systems used the original 1953 NTSC colorimetry as well until 1970; unlike NTSC, however, the European Broadcasting Union (EBU) eschewed color correction in receivers and studio monitors that year and instead explicitly called for all equipment to directly encode signals for the "EBU" colorimetric values, further improving the color fidelity of those systems.

Color encoding

For backward compatibility with black-and-white television, NTSC uses a luminance-chrominance encoding system invented in 1938 by Georges Valensi. Luminance (derived mathematically from the composite color signal) takes the place of the original monochrome signal. Chrominance carries color information. This allows black-and-white receivers to display NTSC signals simply by filtering out the chrominance. If it were not removed, the picture would be covered with dots (a result of chroma being interpreted as luminance). All black-and-white TVs sold in the US after the introduction of color broadcasting in 1953 were designed to filter chroma out, but the early B&W sets did not do this and chroma dots would show up in the picture.

In NTSC, chrominance is encoded using two 3.579545 MHz signals that are 90 degrees out of phase, known as I (in-phase) and Q (quadrature), in a QAM scheme. These two signals are each amplitude modulated and then added together. The carrier is suppressed. Mathematically, the result can be viewed as a single sine wave with varying phase relative to a reference and varying amplitude. The phase represents the instantaneous color hue captured by a TV camera, and the amplitude represents the instantaneous color saturation. For a TV to recover hue information from the I/Q phase, it must have a zero phase reference to replace the suppressed carrier. It also needs a reference for amplitude to recover the saturation information. So, the NTSC signal includes a short sample of this reference signal, known as the color burst, located on the 'back porch' of each horizontal line (the time between the end of the horizontal synchronization pulse and the end of the blanking pulse). The color burst consists of a minimum of eight cycles of the unmodulated (fixed phase and amplitude) color subcarrier. The TV receiver has a "local oscillator", which it synchronizes to the color bursts and then uses as a reference for decoding the chrominance.
By comparing the reference signal derived from the color burst to the chrominance signal's amplitude and phase at a particular point in the image, the device determines what chrominance to display at that point. Combining that with the amplitude of the luminance signal, the receiver calculates what color to make the point, i.e. the point at the instantaneous position of the continuously scanning beam. Note that analog TV is discrete in the vertical dimension (there are distinct lines) but continuous in the horizontal dimension (every point blends into the next with no boundaries), hence there are no pixels in analog TV. In CRT televisions, the NTSC signal is turned into RGB, which is then used to control the electron guns. Digital TV sets receiving analog signals instead convert the picture into discrete pixels. This process of discretization necessarily degrades the picture information somewhat, though with small enough pixels the effect may be imperceptible. Digital sets include all sets with a matrix of discrete pixels built into the display device, such as LCD, plasma, and DLP screens, but not CRTs, which do not have fixed pixels. This should not be confused with digital (ATSC) television signals, which are a form of MPEG video, but which still have to be converted into a format the TV can use.
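The quadrature modulation idea itself is easy to demonstrate. The toy sketch below modulates a pair of fixed I/Q values onto a suppressed subcarrier and recovers them by synchronous detection; it deliberately ignores the band-limiting, the specific I/Q axis angles and the burst mechanism of a real NTSC encoder.

    import math

    F_SC = 3.579545e6        # NTSC colour subcarrier, Hz
    FS = 8 * F_SC            # sample rate chosen so whole subcarrier cycles fit exactly

    def encode(i_val, q_val, n=64):
        """Suppressed-carrier quadrature modulation of fixed I and Q values."""
        return [i_val * math.cos(2 * math.pi * F_SC * k / FS)
                + q_val * math.sin(2 * math.pi * F_SC * k / FS) for k in range(n)]

    def decode(samples):
        """Synchronous detection: multiply by regenerated carriers and average."""
        n = len(samples)
        i_val = 2.0 / n * sum(s * math.cos(2 * math.pi * F_SC * k / FS) for k, s in enumerate(samples))
        q_val = 2.0 / n * sum(s * math.sin(2 * math.pi * F_SC * k / FS) for k, s in enumerate(samples))
        return i_val, q_val

    print(decode(encode(0.4, -0.2)))   # recovers approximately (0.4, -0.2)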

When a transmitter broadcasts an NTSC signal, it amplitude-modulates a radio- frequency carrier with the NTSC signal just described, while it frequency-modulates a carrier 4.5 MHz higher with the audio signal. If non-linear distortion happens to the broadcast signal, the 3.579545 MHz color carrier may beat with the sound carrier to produce a dot pattern on the screen. To make the resulting pattern less noticeable, designers adjusted the original 60 Hz field rate down by a factor of 1.001 (0.1%), to approximately 59.94 fields per second. This adjustment ensures that the sums and differences of the sound carrier and the color subcarrier and their multiples (i.e., the intermodulation products of the two carriers) are not exact multiples of the frame rate, which is the necessary condition for the dots to remain stationary on the screen, making them most noticeable. The 59.94 rate is derived from the following calculations. Designers chose to make the chrominance subcarrier frequency an n + 0.5 multiple of the line frequency to minimize interference between the luminance signal and the chrominance signal. (Another way this is often stated is that the color subcarrier frequency is an odd multiple of half the line frequency.) They then chose to make the audio subcarrier frequency an integer multiple of the line frequency to minimize visible (intermodulation) interference between the audio signal and the chrominance signal. The original black-and-white standard, with its 15750 Hz line frequency and 4.5 MHz audio subcarrier, does not meet these requirements, so designers had either to raise the audio subcarrier frequency or lower the line frequency. Raising the audio subcarrier frequency would prevent existing (black and white) receivers from properly tuning in the audio signal. Lowering the line frequency is comparatively innocuous, because the horizontal and vertical synchronization information in the NTSC signal allows a receiver to tolerate a substantial amount of variation in the line frequency. So the engineers chose the line frequency to be changed for the color standard. In the black-and-white standard, the ratio of audio subcarrier frequency to line frequency is 4.5 MHz / 15,750 = 285.71. In the color standard, this becomes rounded to the integer 286, which means the color standard's line rate is 4.5 MHz / 286 = approximately 15,734 lines per second. Maintaining the same number of scan lines per field (and frame), the lower line rate must yield a lower field rate. Dividing (4,500,000 / 286) lines per second by 262.5 lines per field gives approximately 59.94 fields per second
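The 59.94 figure can be reproduced directly from the numbers in the paragraph above:

    audio_subcarrier_hz = 4.5e6
    line_rate_bw_hz = 15750.0
    print(audio_subcarrier_hz / line_rate_bw_hz)        # 285.71..., rounded to the integer 286
    line_rate_colour_hz = audio_subcarrier_hz / 286     # ~15 734.27 lines per second
    print(line_rate_colour_hz / 262.5)                  # ~59.94 fields per second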

Transmission modulation scheme

Spectrum of a System M television channel with NTSC color.

An NTSC television channel as transmitted occupies a total bandwidth of 6 MHz. The actual video signal, which is amplitude-modulated, is transmitted between 500 kHz and 5.45 MHz above the lower bound of the channel. The video carrier is 1.25 MHz above the lower bound of the channel. Like most AM signals, the video carrier generates two sidebands, one above the carrier and one below. The sidebands are each 4.2 MHz wide. The entire upper sideband is transmitted, but only 1.25 MHz of the lower sideband, known as a vestigial sideband, is transmitted. The color subcarrier, as noted above, is 3.579545 MHz above the video carrier, and is quadrature-amplitude-modulated with a suppressed carrier. The audio signal is frequency-modulated, like the audio signals broadcast by FM radio stations in the 88–108 MHz band, but with a ±25 kHz maximum frequency swing, as opposed to the 75 kHz used on the FM band. The main audio carrier is 4.5 MHz above the video carrier, making it 250 kHz below the top of the channel. Sometimes a channel may contain an MTS signal, which offers more than one audio signal by adding one or two subcarriers on the audio signal, each synchronized to a multiple of the line frequency. This is normally the case when stereo audio and/or second audio program signals are used. The same extensions are used in ATSC, where the ATSC digital carrier is broadcast at 1.31 MHz above the lower bound of the channel.

The "setup" (pedestal) is a voltage offset between the "black" and "blanking" levels; it is unique to NTSC. Setup has the advantage of making NTSC video more easily separated from its primary sync signals.

Framerate conversion

There is a large difference in framerate between film, which runs at 24.0 frames per second, and the NTSC standard, which runs at approximately 29.97 frames per second.

Unlike the 576i video formats, this difference cannot be overcome by a simple speed-up.

A complex process called "3:2 pulldown" is used. One film frame is transmitted for three video fields (1½ video frame times), and the next frame is transmitted for two video fields (one video frame time). Two film frames are therefore transmitted in five video fields, for an average of 2½ video fields per film frame. The average frame rate is thus 60 / 2.5 = 24 frame/s, so the average film speed is exactly what it should be. There are drawbacks, however. Still-framing on playback can display a video frame with fields from two different film frames, so any motion between the frames will appear as a rapid back-and-forth flicker. There can also be noticeable jitter/"stutter" during slow camera pans (telecine judder). To avoid 3:2 pulldown, film shot specifically for NTSC television is often taken at 30 frame/s. For viewing native 576i material (such as European television series and some European movies) on NTSC equipment, a standards conversion has to take place. There are basically two ways to accomplish this:

- The framerate can be slowed from 25 to 23.976 frames per second (a slowdown of about 4%) to subsequently apply 3:2 pulldown.
- Interpolation of the contents of adjacent frames in order to produce new intermediate frames; unless highly sophisticated motion-sensing computer algorithms are applied, this introduces artifacts, and even the most modestly trained of eyes can quickly spot video that has been converted between formats.

Modulation for analog satellite transmission

Because satellite power is severely limited, analog video transmission through satellites differs from terrestrial TV transmission. AM is a linear modulation method, so a given demodulated signal-to-noise ratio (SNR) requires an equally high received RF SNR. The SNR of studio quality video is over 50 dB, so AM would require prohibitively high powers and/or large antennas. Wideband FM is used instead to trade RF bandwidth for reduced power. Increasing the channel bandwidth from 6 to 36 MHz allows an RF SNR of only 10 dB or less. The wider noise bandwidth reduces this 40 dB power saving by 36 MHz / 6 MHz = 8 dB, for a substantial net reduction of 32 dB.

Sound is on an FM subcarrier as in terrestrial transmission, but frequencies above 4.5 MHz are used to reduce aural/visual interference; 6.8, 5.8 and 6.2 MHz are commonly used. Stereo can be multiplex or discrete, and unrelated audio and data signals may be placed on additional subcarriers. A triangular 60 Hz energy dispersal waveform is added to the composite baseband signal (video plus audio and data subcarriers) before modulation. This limits the satellite downlink power spectral density in case the video signal is lost. Otherwise the satellite might transmit all of its power on a single frequency, interfering with terrestrial microwave links in the same frequency band. In half transponder mode, the frequency deviation of the composite baseband signal is reduced to 18 MHz to allow another signal in the other half of the 36 MHz transponder. This reduces the FM benefit somewhat, and the recovered SNRs are further reduced because the combined signal power must be "backed off" to avoid inter-modulation distortion in the satellite transponder. A single FM signal is of constant amplitude, so it can saturate a transponder without distortion.

Field order

An NTSC "frame" consists of an "even" field followed by an "odd" field. As far as the reception of an analog signal is concerned, this is purely a matter of convention and it makes no difference. It is rather like the broken lines running down the middle of a road: it does not matter whether it is a line/space pair or a space/line pair, the effect for a driver is exactly the same. The introduction of digital television formats has changed things somewhat. Most digital TV formats, including the popular DVD format, record NTSC-originated video with the even field first in the recorded frame (the development of DVD took place in regions that traditionally utilize NTSC). However, this frame sequence has migrated through to the so-called PAL format (actually a technically incorrect description) of digital video, with the result that the even field is often recorded first in the frame (the European 625-line system is specified as odd field first). This is no longer a matter of convention, because a frame of digital video is a distinct entity on the recorded medium.
This means that when reproducing many non NTSC based digital formats (including DVD) it is necessary to reverse the field order otherwise an unacceptable shuddering "comb" effect occurs on moving objects as they are shown ahead in one field and then jump back in the next. This has also become a hazard where non NTSC progressive video is trans-coded to interlaced and vice versa. Systems that recover progressive frames or trans-code video should ensure that the "Field Order" is obeyed, otherwise the recovered frame will consist of a field from one frame and a field from an adjacent frame, resulting in "comb" interlacing artifacts. This can often be observed in PC based video playing utilities if an inappropriate choice of de-interlacing algorithm is made.
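The 3:2 pulldown described earlier is simple enough to generate programmatically; the sketch below only maps whole film frames to fields and ignores the odd/even field identity that real telecine equipment has to respect.

    def pulldown_32(film_frames):
        """Hold successive film frames for 3 fields, then 2 fields, alternately."""
        fields = []
        for n, frame in enumerate(film_frames):
            fields.extend([frame] * (3 if n % 2 == 0 else 2))
        return fields

    # Four film frames (1/6 s at 24 frame/s) become ten fields (1/6 s at ~60 field/s):
    print(pulldown_32(["A", "B", "C", "D"]))
    # ['A', 'A', 'A', 'B', 'B', 'C', 'C', 'C', 'D', 'D']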

Countries and regions using the PAL, NTSC and SECAM systems

All countries except Greece, France and Poland (SECAM) use PAL.

NTSC

PAL

All countries except Brazil use NTSC; Brazil uses PAL. Israel and Turkey have PAL, and Iran and Iraq have SECAM.

TV colour and broadcasting system by country:

Country           Frequency (Hz)   Voltage (V)            Colour system   Broadcast system   Plug type
India             50               220, 230, 240          PAL             B                  B, B3, B, FC
Indonesia         50               220, 230               PAL             B                  A, B, B3, BF, C, SE
South Korea       60               110, 220               NTSC            M                  A, C, SE
North Korea       60               100, 200, 220          PAL             D                  A, C, SE
Singapore         50               110, 230               PAL             B                  B3, BF
Sri Lanka         50               230                    PAL             B                  B, C, B3, BF
Thailand          50               220, 240               PAL             B                  A, B, B3, BF, C
Taiwan            60               110, 220               NTSC            M                  A, C, O
China             50               110, 220               PAL             D                  A, B, B3, BF, C, SE, O
Japan             50, 60           100                    NTSC            M                  A
Pakistan          50               220, 230               PAL             B                  B, B3, C
The Philippines   60               110, 220, 230, 240     NTSC            M                  A, B3, C, O
Vietnam           50               110, 220               PAL             D                  A, C, BF
Hong Kong         50               200, 220               PAL             I                  B, B3, BF, C
Malaysia          50               240                    PAL             B                  B3, BF, C
Myanmar           50               230                    NTSC            M                  B, B3, BF, C
Russia            50               220                    SECAM           D/K                B, C

Why PAL in India

In India we use the PAL system because:
(i) Phase error is not prominent.
(ii) It maintains compatibility and reverse compatibility.
(iii) 25 MHz-25 MHz flickering does not take place.

Thus, taking care of all these factors and the financial conditions of the people of India, the government decided to go for a system which would not lay a burden on the poor people of India. As a result the PAL system was chosen.

Pattern Generator

Typical Pattern Generator used in Ujjain LPT

Different colour patterns can be obtained from this instrument, and the colour bar test pattern can be generated and tested with it. It is very useful for testing the R, G and B colour patterns. It is mostly found at LPTs to perform the colour bar test.

Receiver Unit

The receiver unit consists of 3 Scopus receivers. In the Ujjain LPT there are 3 Scopus receivers: one for the national channel (Delhi), another for Bhopal (M.P.), and the last one kept as a spare. If any technical fault occurs in either of the first two, it is immediately replaced by the spare.

Backview of the receiver unit

INSAT-4B is used for transmitting the national channel of Doordarshan; it is received on the 6.3 m PDA, selected as switch number 4 on the Satellite System Unit, and operates through a Scopus receiver. The Bhopal channel of Doordarshan is carried by the INSAT-3A satellite, which is switch number 6 on the Satellite System Unit.

Eb/No.

Eb/N0 (the energy per bit to noise power spectral density ratio) is an important parameter in digital communication or data transmission. It is a normalized signal-to- noise ratio (SNR) measure, also known as the "SNR per bit". It is especially useful when comparing the bit error rate (BER) performance of different digital modulation schemes without taking bandwidth into account.

Eb/N0 is equal to the SNR divided by the "gross" link spectral efficiency in (bit/s)/Hz, where the bits in this context are transmitted data bits, inclusive of error correction information and other protocol overhead. When forward error correction (FEC) is being discussed, Eb/N0 is routinely used to refer to the energy per information bit (i.e. the energy per bit net of FEC overhead bits); in this context, Es/N0 is generally used to relate actual transmitted power to noise.[1]

The noise spectral density N0, usually expressed in units of watts per hertz, can also be seen as having dimensions of energy, or units of joules, or joules per cycle. Eb/N0 is therefore a non-dimensional ratio.

Eb/N0 is commonly used with modulation and coding designed for noise-limited rather than interference-limited communication, and for power-limited rather than bandwidth-limited communications. Examples of power-limited communications include deep-space and spread-spectrum links, which are optimized by using large bandwidths relative to the bit rate.

Relation to Carrier to Noise Ratio

Eb/N0 is closely related to the carrier-to-noise ratio (CNR or C/N), i.e. the signal-to-noise ratio (SNR) of the received signal, after the receiver filter but before detection:

Eb/N0 = (C/N) x (B / fb)

where

fb is the channel data rate (net bitrate), and B is the channel bandwidth.

The equivalent expression in logarithmic form (dB):

(Eb/N0) dB = (C/N) dB + 10 log10(B / fb)

Sometimes, the noise power is denoted by N0/2 when negative frequencies and complex-valued equivalent baseband signals are considered, and in that case there will be a 3 dB difference.

Relation to Es/N0

Eb/N0 can be seen as a normalized measure of the energy per symbol to noise power spectral density (Es/N0):

Eb/N0 = (Es/N0) / ρ

where Es is the energy per symbol in joules and ρ is the nominal spectral efficiency in (bit/s)/Hz.[2] Es/N0 is also commonly used in the analysis of digital modulation schemes. The two quotients are related to each other according to the following:

Es/N0 = (Eb/N0) x log2(M), where M is the number of alternative modulation symbols.

Es/N0 can further be expressed as:

Es/N0 = (C/N) x (B / fs)

where C/N is the carrier-to-noise ratio or signal-to-noise ratio, B is the channel bandwidth in hertz, and fs is the symbol rate in baud or symbols per second.

For a PSK, ASK or QAM modulation with pulse shaping such as raised cosine shaping, the B/fs ratio is usually slightly larger than 1, depending on the pulse shaping filter.
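The relations above translate directly into code. The transponder bandwidth, bit rate and C/N used in the example are illustrative values only, not figures from this report.

    import math

    def ebn0_db(cn_db, bandwidth_hz, bitrate_bps):
        """Eb/N0 (dB) = C/N (dB) + 10 log10(B / fb)."""
        return cn_db + 10 * math.log10(bandwidth_hz / bitrate_bps)

    def esn0_db(ebn0, bits_per_symbol):
        """Es/N0 (dB) = Eb/N0 (dB) + 10 log10(log2 M), with bits_per_symbol = log2 M."""
        return ebn0 + 10 * math.log10(bits_per_symbol)

    # e.g. C/N of 10 dB in a 36 MHz transponder carrying 27 Mbit/s of QPSK (2 bits/symbol):
    eb = ebn0_db(10.0, 36e6, 27e6)
    print(round(eb, 2), round(esn0_db(eb, 2), 2))   # about 11.25 dB and 14.26 dB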

Cassette Recorder

Fig. shows cassette recorder used in Ujjain LPT

It is used in LPTs for transmitting a programme known as "Gram Mangal", made for farmers to improve their skills in farming and increase their production. A cassette is sent to the Ujjain LPT every week or every two to three days to give farmers guidance about the latest updates and farming skills. The cassette contains a number of programmes for farmers, and a list of the programmes to be shown on a particular day is kept at the Ujjain LPT. This is a very good programme started by Doordarshan to help farmers improve their production and protect their crops from the harm caused by insects and worms.

Fig. shows the waveform analyzer (top left), cassette recorder (top right), three receivers, 2 Satellite System Units, the Audio-Video switcher (used to put signals on air and switch between different channels), and two digital TV receivers.

Satellite System Unit

This unit is a very significant one: the RF signal from the PDA is received at the Satellite System Unit, and 18 V DC is sent from the SSU to the LNBC mounted on the PDA. The skin effect takes place between the SSU and the LNBC in the co-axial cable. Thus this unit can be thought of as an essential, inseparable part of the transmitter, without which no transmission is possible.

Receiver Unit

All the signals are received at this unit, where the S/N ratio, the signal strength and Eb/N0 can be checked. The satellite look angle, i.e. azimuth and elevation, can also be checked here. It also shows the position of the satellite, the latitude and longitude of the LPT, the distance of the LPT from the satellite, and many more parameters.

Transmitting Unit

It consists of the modulator, mixer, power amplifier, attenuation panel and level controller.

Fig. shows the PLL (Phase Locked Loop) SAW (Surface Acoustic Wave) adjacent channel modulator.

Fig. shows the changeover panel, SAW amplifier, mixer, level controller, RF power indicator, lock alarm, RF local oscillator monitor, AGC control meter, IF monitor, and sound IF deviation and carrier.

Fig. shows the programme amplifier (power on-off; gain 1 and gain 2; level indicator) and amplifier (power on-off; input 1 and input 2; volume control; peak level).

Inside the modulator, modulation of the signal takes place, that is, frequency and amplitude modulation, which has already been discussed in the preceding pages.

Mixer

Mixer is a device in which the mixing of the audio and video signals takes place.

Level controller

It controls the amount of voltage entering the amplifier and thus protects the circuit from damage.

Level Indicator

It indicates red when the voltage or current level rises above the expected range, i.e. when the unit draws more power than expected; otherwise it indicates green.

Alarm

It is also an indicator that alerts the staff when the current or the voltage has reached a peak value that will damage the whole circuitry if not checked.

Attenuator Panel

An attenuator is an electronic device that reduces the amplitude or power of a signal without appreciably distorting its waveform. An attenuator is effectively the opposite of an amplifier, though the two work by different methods. While an amplifier provides gain, an attenuator provides loss, or gain less than 1. Attenuators are usually passive devices made from simple voltage divider networks. Switching between different resistances forms adjustable stepped attenuators, and continuously adjustable ones use potentiometers. For higher frequencies precisely matched low-VSWR resistance networks are used.

Fixed attenuators in circuits are used to lower voltage, dissipate power, and improve impedance matching. In measuring signals, attenuator pads or adaptors are used to lower the amplitude of the signal by a known amount to enable measurements, or to protect the measuring device from signal levels that might damage it. Attenuators are also used to 'match' impedances by lowering apparent SWR.

RF attenuators

Radio frequency attenuators are typically coaxial in structure, with precision connectors as ports and coaxial, microstrip or thin-film internal structure. Above SHF a special waveguide structure is required. Important characteristics are:

- accuracy,
- low SWR,
- flat frequency response, and
- repeatability.

The size and shape of the attenuator depend on its ability to dissipate power. RF attenuators are used as loads, as known attenuations, and as protective dissipations of power when measuring RF signals.

Fig. shows RF attenuator

Basic circuits used in attenuators are pi pads (π-type) and T pads. These may be required to be balanced or unbalanced networks depending on whether the line geometry with which they are to be used is balanced or unbalanced. For instance, attenuators used with coaxial lines would be the unbalanced form while attenuators for use with twisted pair are required to be the balanced form. Four fundamental attenuator circuit diagrams are given in the figures on the left. Since an attenuator circuit consists solely of passive resistor elements, it is linear and reciprocal. If the circuit is also made symmetrical (this is usually the case since it is usually required that the input and output impedances Z1 and Z2 are equal) then the input and output ports are not distinguished, but by convention the left and right sides of the circuits are referred to as input and output, respectively.
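As a worked example of the pi and T pads just described, the standard design equations for a symmetrical pad of attenuation A dB in a line of impedance Z0 are sketched below. These equations are textbook results, not taken from this report, and the 10 dB / 50 Ω example is illustrative.

    def pi_pad(atten_db, z0=50.0):
        """Shunt and series resistors of a symmetrical pi-type attenuator."""
        k = 10 ** (atten_db / 20.0)                 # voltage ratio
        r_shunt = z0 * (k + 1) / (k - 1)            # each of the two shunt arms
        r_series = z0 * (k * k - 1) / (2 * k)       # the single series arm
        return r_shunt, r_series

    def t_pad(atten_db, z0=50.0):
        """Series and shunt resistors of a symmetrical T-type attenuator."""
        k = 10 ** (atten_db / 20.0)
        r_series = z0 * (k - 1) / (k + 1)           # each of the two series arms
        r_shunt = 2 * z0 * k / (k * k - 1)          # the single shunt arm
        return r_series, r_shunt

    print(pi_pad(10.0))   # ~ (96.2, 71.2) ohms
    print(t_pad(10.0))    # ~ (26.0, 35.1) ohms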

Fig. shows π-type unbalanced attenuator circuit Fig. shows π-type balanced attenuator circuit

Fig. shows T- balanced attenuator circuit

Circulator

Fig. shows circulator

It is a device which feeds samples of the signals into the meter circuit. It is a combination of isolators.

Isolator

It is made of ferrite material and phase-shifts the signal by 45 degrees. It strictly follows a one-way path: input signal, 45 degree phase shift through the ferrite, output signal. Thus the circulator behaves like a rat-race.

Fig. shows the circulator (ports 1, 2, 3 and 4).

Directional Coupler

Fig. shows directional Coupler

The function of the directional coupler is to take samples from the input signal and supply them to the metering circuit. It is used as a coupling device in the transmitter.

Band Pass Filter

Its function is to pass the band of desired frequencies and stop the unwanted bands. It helps in reducing the noise in the amplifier and thus protects the signal from being corrupted.

SMPS (Switch Mode Power Supply)

Fig. shows Switch Mode Power Supply

It supplies current to the amplifiers so that the transmitter can perform its function; thus without it the whole transmitter is useless. The SMPS shown in the fig. has voltage and current level indicators so that the voltage and current ratings can easily be monitored. We generally watch the current ratings carefully, because if the current rises above 25 to 30 A there is a problem in the working of the transmitter, i.e. one of the devices is consuming more current than it should.

Dummy Load

Fig. illustrates the dummy load (front and back view).

A dummy load is often put in the circuitry to bypass any reflected current passing to the amplifier, so as to prevent the amplifier from getting damaged. The fig. shows the dummy load.

Transmitting Tower

Fig. shows transmitting antenna of Ujjain Doordarshan

After being passed through the transmitter, the RF signal is sent to the transmitting antenna. The antenna transmits the RF signals over a range of 20-25 km. It can effectively transfer the RF signals up to about 20 km; beyond that the signals become weaker but can still be received in the rest of the area it covers.