WELCOME to the lecture on SATELLITE COMMUNICATION
K.R. Radhakrishnan, Asst. Engineer, Doordarshan
INTRODUCTION
Television Transmission & Doordarshan
• Public Television Broadcaster of India
• One of the largest broadcasting organizations in the world in terms of the infrastructure of studios and transmitters
• Digital Terrestrial Transmitters
• On September 15th 2009, Doordarshan celebrated its 50th anniversary
Our Milestones
• Experimental Telecast: 1959
• Regular Daily Transmission: 1965
• National Telecasts (Colour): 1982
• DD Direct Plus (DTH Service): 2004
Today’s Topic Satellite Communication
Terrestrial vs Satellite
TERRESTRIAL
• Less coverage area
• Lower BW
• More power
SATELLITE
• More coverage area
• Higher BW
• Less power
Frequency Bands
• The up-link is a highly directional, point-to-point link.
• The down-link can have a footprint providing coverage for a substantial area ("spot beam").
History of Satellite Communication
• 1945: Arthur C. Clarke publishes the essay "Extra-Terrestrial Relays"
• 1957: First satellite, SPUTNIK (Russia)
• 1960: First reflecting communication satellite, ECHO (NASA)
• 1963: First geostationary satellite, SYNCOM (NASA)
• 1965: First commercial geostationary satellite, "Early Bird" (INTELSAT I): 240 duplex telephone channels or 1 TV channel
[Images: the ECHO and SYNCOM satellites]
Factors in Satellite Communication
• Elevation Angle: the angle between the horizontal at the earth's surface and the center line of the satellite transmission beam.
Factors in Satellite Communication……
• Coverage angle: a measure of the portion of the earth's surface visible to a satellite, taking the minimum elevation angle into consideration.
• R/(R+h) = sin(π/2 − β − θ)/sin(θ + π/2) = cos(β + θ)/cos(θ)
  where R = 6370 km (earth's radius), h = satellite orbit height, β = coverage angle, θ = minimum elevation angle
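As a quick check on the relation above, here is a minimal Python sketch (not part of the lecture material) that solves it for the coverage angle β given the orbit height and minimum elevation angle; the 35786 km geostationary height and 5° elevation in the example are illustrative assumptions.

```python
import math

R = 6370.0  # earth's radius in km, as given on the slide

def coverage_angle_deg(h_km, min_elev_deg):
    """Solve cos(beta + theta) = R/(R + h) * cos(theta) for beta (in degrees)."""
    theta = math.radians(min_elev_deg)
    beta = math.acos(R / (R + h_km) * math.cos(theta)) - theta
    return math.degrees(beta)

# Example: a geostationary satellite (h ~ 35786 km) with a 5 degree minimum elevation
print(round(coverage_angle_deg(35786.0, 5.0), 1))  # ~76.3 degrees each side of the sub-satellite point
```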
Factors in Satellite Communication……
• Other impairments to satellite communication:
• The distance between an earth station and a satellite (free space loss).
• Satellite footprint: the transmission is strongest at the center of the footprint and decreases farther from the center as free space loss increases.
• Atmospheric attenuation caused by air and water can impair the transmission; it is particularly bad during rain and fog.
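To give free space loss a concrete shape, here is a rough sketch using the standard free-space path-loss formula; the 38,000 km slant range and 4 GHz downlink frequency are assumed example values, not figures from the lecture.

```python
import math

def free_space_loss_db(distance_km, freq_ghz):
    """Free-space path loss in dB: 92.45 + 20*log10(d_km) + 20*log10(f_GHz)."""
    return 92.45 + 20.0 * math.log10(distance_km) + 20.0 * math.log10(freq_ghz)

# Example: a geostationary slant range of roughly 38,000 km on a 4 GHz (C-band) downlink
print(round(free_space_loss_db(38000.0, 4.0), 1))  # about 196.1 dB
```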
Satellite Transmission: Analog vs Digital
DIGITAL
• More programs per channel / transponder, i.e. spectrum efficient
• Noise-free reception
• CD-quality sound and better-than-DVD-quality picture
• Reduced transmission power
• Flexibility in service planning: quality / bandwidth trade-off
• Terrestrial-free network
ANALOG
• One program per channel / transponder
• Comparatively noisy
• Lower quality with respect to VCD / DVD digital media
• Fixed reception
• Limited coverage
Polarization and Frequency Reuse • Most communications satellites transmit using two orthogonal (i.e., at right angles) senses of polarization in order to utilize the available satellite frequency spectrum twice. • Transponders with one sense of polarization are totally transparent to the second set of transponders using the opposite sense. • Twice the number of transponders can therefore occupy the same amount of frequency spectrum. This is called frequency reuse.
EARTH STATION
An earth station is an uplink center from which signals are fed to the satellite for distribution in a specified area covered by the satellite. In TV broadcasting, the signal is up-linked from the earth station and received by many downlink centers. It is a very important part of a satellite communication system for broadcasting of signals.
Digital Earth Station
Major components of a digital earth station:
• PDA (Parabolic Dish Antenna)
• Feed
• LNA / LNBC
• Wave guide / low-loss cable
• HPA (TWTA, SSPA, Klystrons)
• Up converter
• Modulator
• Encoder
• Multiplexer
• IRD (Integrated Receiver Decoder)
DVB - Digital Video Broadcasting
Digital Video Broadcasting (DVB) has been adopted as the standard for digital television.
Main forms of DVB:
Main differences between DVB-S/DSNG and DVB-S2
DVB-S / DSNG
• Meant for broadcast only
• Fixed 188-byte packets
• One TS / carrier
• RS and Viterbi coding
• Needs a high Rx margin
• QPSK / QPSK-8PSK-16QAM
• Consumer LNBs work in QPSK only
DVB-S2
• Fully transparent to all data
• Baseband in 16 or 64 kb/s
• Can work within the noise floor
• QPSK - 8PSK - 16APSK - 32APSK
• Pilot tones for extra synchronization in 8PSK
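For a feel of the numbers behind the DVB-S column, here is a small sketch of the commonly quoted DVB-S useful-bitrate calculation (QPSK, Viterbi code rate, 188/204 Reed-Solomon overhead); the 27.5 Mbaud symbol rate and rate-3/4 coding are assumed example values, and DVB-S2 figures differ because it uses LDPC/BCH coding and more modulation orders.

```python
def dvbs_useful_bitrate_mbps(symbol_rate_mbaud, code_rate=3.0 / 4.0):
    """Useful bitrate of a DVB-S carrier: 2 bits/symbol (QPSK) * Viterbi code rate
    * 188/204 (Reed-Solomon outer-code overhead)."""
    return symbol_rate_mbaud * 2.0 * code_rate * 188.0 / 204.0

print(round(dvbs_useful_bitrate_mbps(27.5), 2))  # about 38.01 Mbps for a 27.5 Mbaud, rate-3/4 carrier
```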
Need for VIDEO COMPRESSION
• Uncompressed video (and audio) data are huge.
• This is a big problem for storage and communications.
• Multimedia files are large and consume lots of hard disk space.
• The file size makes it time-consuming to move them from place to place.
Compression shrinks files, making them smaller and more practical to store and share.
Definitions
• Bitrate
  • Information stored/transmitted per unit time
  • Usually measured in Mbps (megabits per second)
  • Ranges from < 1 Mbps to > 40 Mbps
• Resolution
  • Number of pixels per frame
  • Ranges from 160x120 to 1920x1080
• FPS (frames per second)
  • Usually 24, 25, 30, or 60
  • More is not needed because of the limitations of the human eye
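To show why these numbers force compression, here is a back-of-the-envelope sketch of the uncompressed bitrate; the 16 bits-per-pixel figure assumes 8-bit 4:2:2 sampling, and the 1920x1080 at 25 fps example is illustrative.

```python
def raw_video_bitrate_mbps(width, height, fps, bits_per_pixel=16):
    """Uncompressed video bitrate in Mbps. 16 bits/pixel corresponds to 8-bit 4:2:2
    sampling (Y for every pixel, Cr and Cb for every second pixel)."""
    return width * height * fps * bits_per_pixel / 1e6

# Example: 1920x1080 at 25 fps
print(round(raw_video_bitrate_mbps(1920, 1080, 25)))  # about 829 Mbps before compression
```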
Video Compression…..
• The main goal of the MPEG-2 standard is to define the format of video data to be transmitted.
• This data format is the result of compression and encoding.
• The compression technique in MPEG-2 is based on human perception of vision.
Video Compression….. • Images are described and structured in digital equipment using color spaces • RGB : Computer environments • YUV/YCrCb : related to TV world
Video Compression…..
• The Y, Cr, Cb color space splits color information into Y, Cr and Cb components.
• Y, Cr and Cb are generated out of R, G, B.
• Each pixel carries color information in the form of the color component values Y, Cr and Cb.
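As an illustration of how Y, Cr and Cb are generated out of R, G and B, here is a minimal sketch using the ITU-R BT.601 coefficients commonly used for standard-definition TV; the exact coefficients and offsets in a given system may differ.

```python
def rgb_to_ycrcb(r, g, b):
    """8-bit RGB to (Y, Cr, Cb) with ITU-R BT.601 luma coefficients."""
    y = 0.299 * r + 0.587 * g + 0.114 * b      # luminance
    cr = 0.713 * (r - y) + 128                 # red colour difference, offset by 128
    cb = 0.564 * (b - y) + 128                 # blue colour difference, offset by 128
    return round(y), round(cr), round(cb)

print(rgb_to_ycrcb(255, 0, 0))  # pure red -> roughly (76, 255, 85)
```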
Video Compression…..
SAMPLING OF CHROMINANCE AND LUMINANCE
Sampling ratio:
• 4:4:4 - Y, Cr and Cb are present for every pixel
• 4:2:2 - Y present for every pixel; Cr and Cb are present for every second pixel
• 4:2:0 - Y for every pixel; Cr and Cb for every fourth pixel
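A small sketch of what these ratios mean for raw frame size; the 720x576 frame and 8-bit samples are assumed example values.

```python
def bytes_per_frame(width, height, sampling="4:2:0", bits=8):
    """Approximate uncompressed frame size for the three sampling ratios.
    The chroma figure is the average number of Cr+Cb samples per pixel."""
    chroma_per_pixel = {"4:4:4": 2.0, "4:2:2": 1.0, "4:2:0": 0.5}[sampling]
    samples_per_pixel = 1.0 + chroma_per_pixel  # one luminance sample plus chroma
    return int(width * height * samples_per_pixel * bits / 8)

for s in ("4:4:4", "4:2:2", "4:2:0"):
    print(s, bytes_per_frame(720, 576, s))  # 1244160, 829440, 622080 bytes
```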
Video Compression…..
The MPEG-2 video part deals with the basic objects used to structure video information.
Video Compression…..
• Video sequence: a group of video pictures
• Frame or picture: contains the color and brightness information required to display a picture on the screen
Video Compression…..
• PICTURE
• A very important object of MPEG-2 video
• A picture is divided into blocks
• Blocks are grouped into macroblocks
• One block contains 64 chrominance or luminance pixels
• Each block contains 8 lines
• Each line holds 8 samples of luminance or chrominance pixels
• The number of chrominance blocks in a macroblock depends on the sampling format used to digitize the video material.
Video Compression…..
• 4:2:0 - 4 blocks of luminance and 2 blocks of chrominance information
• 4:2:2 - 4 blocks of luminance and 4 blocks of chrominance information
• 4:4:4 - 4 blocks of luminance and 8 blocks of chrominance information
Video Compression…..
• There are three types of coded pictures:
• Intra coded pictures (I pictures)
• Predictive coded pictures (P pictures)
• Bidirectionally coded pictures (B pictures)
Video Compression…..
• Intra coded pictures
• These are coded in such a way that they can be decoded without knowing anything about other pictures in the video sequence.
• Blocks or macroblocks forming I pictures are called intra blocks or intra-coded macroblocks.
Video Compression…..
• Predictive coded pictures
• These pictures are decoded by using information from previous pictures (reference pictures) displayed earlier.
• The information taken from earlier pictures (I or P) is determined by motion estimation and is coded in what are called inter-coded macroblocks.
• Information that cannot be borrowed is coded as intra-coded (I) macroblocks.
• P pictures are typically 30-50% of the size of an I picture.
Video Compression…..
• Bidirectionally predicted pictures
• These use information from pictures that occurred before and from pictures that come after.
• At encoding time the encoder has access to the following pictures.
• Information that is not available from preceding or following pictures is intra-coded.
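A toy sketch (not from the lecture) of the reordering that B pictures imply: because a B picture needs both its past and its future reference, the encoder emits the following I or P picture before the B pictures that precede it in display order. Real encoders handle open and closed GOPs and other details this ignores.

```python
def coding_order(display_order):
    """Reorder a display-order GOP so every B picture follows both of its references."""
    coded, held_b = [], []
    for pic in display_order:
        if pic.startswith("B"):
            held_b.append(pic)       # hold B pictures until the next reference arrives
        else:
            coded.append(pic)        # I or P picture (a reference)
            coded.extend(held_b)     # the held B pictures can now be coded
            held_b = []
    return coded + held_b

gop = ["I0", "B1", "B2", "P3", "B4", "B5", "P6"]
print(coding_order(gop))  # ['I0', 'P3', 'B1', 'B2', 'P6', 'B4', 'B5']
```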
Video Compression…..
• Data compression used in MPEG-2 video
• It is achieved by combining three techniques:
• Removing picture information that is invisible to the human eye
• Using variable length coding tables
• Motion estimation
The human eye is less sensitive to high frequencies in color changes. MPEG-2 uses the DCT to approximate the original chrominance and luminance in each block.
Video Compression…..
After the DCT process, the coefficients are arranged in a zig-zag order of increasing frequency. This zig-zag order is matched by quantization matrices. Quantization delivers a large number of zeroes in the high-frequency range.
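A minimal Python sketch of the DCT / zig-zag / quantization chain described above (not the MPEG-2 reference process): the scan order below simply groups coefficients by anti-diagonal rather than following the exact MPEG-2 zig-zag, and a single uniform quantizer stands in for the real quantization matrices.

```python
import math

def dct_8x8(block):
    """Naive forward 2-D DCT-II of an 8x8 block of pixel values."""
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            cu = math.sqrt(0.5) if u == 0 else 1.0
            cv = math.sqrt(0.5) if v == 0 else 1.0
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / 16)
                    * math.cos((2 * y + 1) * v * math.pi / 16)
                    for x in range(8) for y in range(8))
            out[u][v] = 0.25 * cu * cv * s
    return out

# Simplified scan: low-frequency coefficients first, grouped by anti-diagonal
SCAN = [(u, v) for d in range(15) for u in range(8) for v in range(8) if u + v == d]

def quantize(coeffs, step=16):
    """Uniform quantization; most high-frequency coefficients end up as zero."""
    return [round(coeffs[u][v] / step) for u, v in SCAN]

flat_block = [[128] * 8 for _ in range(8)]       # a flat grey block
print(quantize(dct_8x8(flat_block))[:8])         # only the DC term survives: [64, 0, 0, ...]
```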
Motion Estimation
Areas of two successive pictures are compared in order to determine the direction and distance of relative motion between the frames.
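A toy full-search block-matching sketch of this idea: for one block of the current picture, find the displacement in the reference picture with the smallest sum of absolute differences. Real encoders use hierarchical or fast searches and sub-pixel refinement; the 8x8 block size and +/-4 pixel search range here are assumptions.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_motion_vector(ref, cur, bx, by, size=8, search=4):
    """Exhaustively search the reference frame for the displacement (dx, dy)
    that best predicts the current-frame block whose top-left corner is (bx, by)."""
    cur_block = [row[bx:bx + size] for row in cur[by:by + size]]
    best_dx, best_dy, best_cost = 0, 0, float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = bx + dx, by + dy
            if x < 0 or y < 0 or y + size > len(ref) or x + size > len(ref[0]):
                continue  # candidate block falls outside the reference picture
            ref_block = [row[x:x + size] for row in ref[y:y + size]]
            cost = sad(cur_block, ref_block)
            if cost < best_cost:
                best_dx, best_dy, best_cost = dx, dy, cost
    return best_dx, best_dy

# Tiny demo: a bright square that moved one pixel to the right between frames
ref = [[0] * 16 for _ in range(16)]
cur = [[0] * 16 for _ in range(16)]
for y in range(4, 12):
    for x in range(4, 12):
        ref[y][x] = 200
        cur[y][x + 1] = 200
print(best_motion_vector(ref, cur, 5, 4))  # (-1, 0): look one pixel left in the reference
```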
Video Compression…..
• MPEG-2 SYSTEM PART
• This specifies how the encoded audio and video bit streams should be multiplexed together to form actual programs, and how the result can be made suitable for different media and network applications.
• It is self-consistent, containing all the information necessary to decode the audio and video bit streams belonging to a specific program.
Video Compression…..
• It is independent of the network's physical implementation and should be suitable for both error-prone and error-free environments.
• The key functionality addressed by MPEG-2 systems is multiplexing.
• It is referred to as the MPEG-2 multiplex.
Video Compression…..
• MPEG-2 systems use data structures called packets.
• A packet consists of a packet header and the packet payload.
• Packets can be of fixed or variable size.
• The packet concept creates a flexible mechanism to transport data.
• The packet header contains the information necessary to process the data in the packet payload.
Video Compression…..
The MPEG-2 standard defines two basic tools to support media and network delivery systems:
• The Program Stream: CD-ROM and hardware media
• The Transport Stream: network environments
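As a small illustration of the packet idea in the transport stream case, here is a sketch that parses the fixed 4-byte header of a 188-byte MPEG-2 transport stream packet; the field layout follows the MPEG-2 systems specification, while the null-packet example at the end is just a convenient test input.

```python
def parse_ts_header(packet: bytes):
    """Parse the 4-byte header of a 188-byte MPEG-2 transport stream packet."""
    assert len(packet) == 188 and packet[0] == 0x47, "expected 188 bytes starting with sync byte 0x47"
    return {
        "transport_error":    bool(packet[1] & 0x80),
        "payload_unit_start": bool(packet[1] & 0x40),
        "pid":                ((packet[1] & 0x1F) << 8) | packet[2],   # 13-bit packet identifier
        "scrambling_control": (packet[3] >> 6) & 0x03,
        "adaptation_field":   (packet[3] >> 4) & 0x03,
        "continuity_counter": packet[3] & 0x0F,
    }

# Example: a null packet (PID 0x1FFF) with an all-zero payload
null_packet = bytes([0x47, 0x1F, 0xFF, 0x10]) + bytes(184)
print(parse_ts_header(null_packet)["pid"])  # 8191
```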