Introduction. This presentation uses hyperlinks so that you may proceed through all of the slides or jump to a topic that you wish. If you have opened this presentation within Explorer, the links will not function correctly. Press File, Save As.
1. HDTV Survival Guide (525 version - 30/JAN/06) by the Product Management Team
3. HDTV Survival Guide
4. Video Formats – Technology View Video exists in many domains as analog or digital signals for television, PC (Personal Computer), Serial Digital (SD-SDI / HD-SDI / ASI), Internet (IP) and Digital Cinema (DC).
5. Video Formats – Technology View In the past, and still today, analog technology supports television systems
6. Video Formats – Technology View Digital compression technologies (eg. MPEG over ASI) have enabled digital television (SDI) to migrate to high definition (HD-SDI) and Internet Protocol (IP)
7. Video Formats – Technology View Film technology is still used today.
8. Video Formats – Technology View The next step for film technology is Digital Cinema (DC)
9. Video Formats – Technology View This presentation discusses the video and audio technologies used today in “baseband” (uncompressed) television environments
10. Video Bandwidths - Today Hybrid Serial Digital 270Mb/s and 1.5Gb/s SDI infrastructures are becoming more and more prevalent
3.0Gb/s SDI is emerging as a standard for production and DC (Digital Cinema) infrastructures
11. HDTV Survival Guide
12. Video Formats - Defined There are many types of video formats used in today’s television systems
High Definition or Standard Definition
HDTV (High Definition TV) is loosely defined as having 2x the number of scan lines as SDTV (Standard Definition TV)
Many line, frame rate and scan types exist for HDTV and SDTV
Component or Composite
Video signals are made up of two main components: Y or luminance (black and white portion of the signal) and C or chrominance (color portion of the signal)
When combined into a single signal, it is known as composite.
If the luminance and chrominance are not combined, it is considered a component signal format
Digital or Analog
Video signals exist in both analog and digital domains
In today’s systems, a mix of analog and digital are used.
13. Video Formats - Defined Video signals can be described in two major ways:
Either as a Signal Format (eg: SDI)
Or as a Tape Format (eg: DigiBeta)
Each video standard is described with the number of lines and the frame rate (eg: 525/60)
As well, there are two main scan types:
Interlace
Progressive
When digital, video signals can be described with their bandwidth (eg: 270 Mb/s, 1.5 Gb/s, 3.0 Gb/s)
The size of the digital word (8, 10, 20 bits) is important, too.
14. Video Formats - Defined
15. Video Format - Defined
16. Video Format - Defined
17. Video Formats – Analog Composite NTSC
18. Video Formats – Analog Component NTSC
19. Video Formats – Analog Component NTSC
20. Video Formats – Digital Component 270 Mb/s SDTV digital signals are carried on a coaxial cable using a bandwidth of 270 Mb/s
The luminance (Y) is sampled at 13.5 MHz and the color difference signals (Pr/Pb) at 6.75 MHz each, for a combined word rate of 27MHz, a frequency that is related to both 525/59.94 and 625/50 standards.
A 10-bit word sample with a ratio of 4:2:2 for the Luminance and Color Difference is used:
270 Mb/s is the resultant serial rate for a 10-bit wide path
Up to 16 channels of audio and metadata can be embedded
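The rate arithmetic behind the 270 Mb/s figure can be made explicit with a small sketch (Python, illustrative only):

```python
# Sketch of the 4:2:2 SDTV rate arithmetic (ITU-R BT.601 / SMPTE 259M).
Y_RATE_MHZ = 13.5          # luminance sample rate (the "4" in 4:2:2)
C_RATE_MHZ = 13.5 / 2      # each colour-difference channel (the "2"s)
BITS_PER_SAMPLE = 10

word_rate_mhz = Y_RATE_MHZ + 2 * C_RATE_MHZ          # combined 27 MHz word rate
serial_rate_mbps = word_rate_mhz * BITS_PER_SAMPLE   # 270 Mb/s on the coax
```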
21. Video Formats – HDTV 1.5 Gb/s HDTV signals are carried on a coaxial cable using a bandwidth of 1.5 Gb/s
The luminance (Y) is sampled using a frequency of approximately 75MHz (nominally 74.25 MHz) and the color difference signals (Pr/Pb) at half that rate.
A 10-bit word sample with a ratio of 4:2:2 for the Luminance and Color Difference is used.
1.5 Gb/s is the resultant serial rate for a 10-bit wide path: (75 + 75/2 + 75/2) x 10
Up to 16 channels of audio and metadata can be embedded
22. Video Formats – Super HDTV 3.0Gb/s 1080p-60 is the “holy grail” for HDTV
It is emerging for higher quality production purposes
Dual link (two coaxial HD-SDI connections) has been used to date
A standard for a single 3.0 Gbps co-axial connection is emerging for both HDTV and DC (Digital Cinema).
As well, a 10Gbps fiber standard is emerging that can carry up to five 1.5 Gb/s signals, or serve DC and 1080p-60 purposes.
23. HDTV Survival Guide
24. Video Conversion - Defined There are many types of conversion used today which can be confusing
Color Decode/Encode
Analog and Digital conversion
Up, Down and Cross conversion
Frame Rate or Standards Conversion
(Scan Conversion)
25. Video Conversion – Make a Selection
26. Video Conversion – Make a Selection
27. Video Conversion: Color Encode, D>A In today’s systems, digital component to analog composite conversions are still prevalent for hybrid analog and digital SDTV systems.
If HDTV signals are downconverted to analog composite SDTV, it is important that the conversion maintain the best signal quality
Monitoring or “minimum delay” conversions, although incurring less processing time, may not provide the same performance.
28. Video Conversion: Color Encoding
29. Video Conversion: Color Encoding
30. Video Conversion: Color Encoding
31. Video Conversion: Color Decode, A>D In today’s systems, analog composite to digital component conversions are still prevalent.
Maintaining the integrity of the signal through the color decoding and digitizing process is important if compression and/or upconversion is required.
32. Video Conversion: Color Decoding Decoding composite video is never a perfect process
Composite video already has some cross luma / cross chroma artifacts
The chrominance and luminance can never be completely separated once combined through the color encode process
Different filtering methods are used:
1D, one dimensional simple or notch Filtering
2D, two dimensional or line comb filtering
3D, three dimensional or field/frame comb filtering
33. Video Conversion: Color Decoding To generate component signals, a color decoder is used to separate luma and chroma components
In theory the reverse of a color encoder
In practice, with moving video, it is very difficult to separate luma and chroma
Depending on the techniques used by the decoder, a number of different artifacts can be introduced into the SDI signal
These artifacts affect the efficiency of an MPEG encoder
34. Video Conversion: Color Decoding Video Artifacts…
Major sources of video artifacts:
Composite video encoding
Composite video decoding
Up/Down/Cross Conversion
What will the consumer see?
Dot crawl around thin horizontal lines
Loss of resolution in original image
Jagged diagonal lines
Induced chrominance in referees' shirts
35. Video Conversion: Color Decoding Notch filtering
Very simple color decoders only use simple notch filtering.
This is known as 1D or one dimensional filtering
Results are poor and not practical for broadcast
A simple notch filter removes subcarrier (3.58MHz for NTSC) from the luminance signal.
It removes some luminance information, as well.
If not properly designed, it can produce “ringing” or visual reflections
The propagation time through a decoder of this type is not an issue for lip sync
Notch filtering is used as one of the filter choices in adaptive comb filters
36. Video Conversion: Color Decoding Notch filtering
37. Video Conversion: Color Decoding Notch filtering
Notch filter is centered around 3.58MHz
This will cut high frequency components from the decoded video
SDI signal will be soft
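As an illustration of the 1D approach, a minimal FIR notch (a zero pair at the subcarrier) can be sketched; real decoders use more elaborate filter designs, and the 4fsc sample rate and test tones here are simplifying assumptions:

```python
import math

F_SC = 3.579545e6        # NTSC colour subcarrier (Hz)
FS = 4 * F_SC            # assume 4fsc sampling for this illustration

def fir_notch(x, f0=F_SC, fs=FS):
    """FIR zero pair at f0: y[n] = x[n] - 2*cos(w0)*x[n-1] + x[n-2]."""
    w0 = 2 * math.pi * f0 / fs
    y = []
    for n in range(len(x)):
        x1 = x[n - 1] if n >= 1 else 0.0
        x2 = x[n - 2] if n >= 2 else 0.0
        y.append(x[n] - 2 * math.cos(w0) * x1 + x2)
    return y

samples = range(1000)
subcarrier = [math.sin(2 * math.pi * F_SC * n / FS) for n in samples]
luma = [math.sin(2 * math.pi * 0.5e6 * n / FS) for n in samples]  # 0.5 MHz detail

residual = max(abs(v) for v in fir_notch(subcarrier)[2:])  # subcarrier nulled
passed = max(abs(v) for v in fir_notch(luma)[2:])          # low-frequency luma survives
```

The same filter also attenuates legitimate luminance detail near the notch, which is why a plain notch decoder yields soft SDI output.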
38. Video Conversion: Color Decoding 2D, 3 line adaptive color decoding
3 line adaptive color decoding compares information from the present line, the previous line and the following line (in the same field)
Line by line comparison provides better separation of luma and chroma components
In the event that the adjacent lines do not contain similar information, the decoder will “adapt” and fall back to notch filtering
Easy to filter if the image is static
Harder to filter when the image is moving
Decoder algorithm must be intelligent and stay in the 3 line filter as long as possible
39. Video Conversion: Color Decoding 2D, 3 line adaptive color decoding artifacts
Chrominance bands will be induced into the decoded digital signal
40. Video Conversion: Color Decoding 2D, 3 line adaptive color decoding artifacts
Dot Crawl is one of the biggest issues with this type of decoding
41. Video Conversion: Color Decoding 2D, 3 line adaptive color decoding artifacts
Vertical lines that are close to 3.58MHz can experience induced chroma
42. Video Conversion: Color Decoding Leitch’s 3D Decoding Technology
Uses multiple comb filters and shifts between them on a pixel by pixel basis
The amount of motion in the image dictates which comb filter is to be used
43. Video Conversion: Color Decoding Leitch’s 3D Decoding Technology
Leitch Technology’s industry leading 3D process uses the following comb filters:
Notch filter
3 line adaptive comb filter
Field comb filter
Frame comb filter
Comb Decision block looks at all four comb filters simultaneously.
Decides which of the four filters will yield the best possible output signal
Motion Detection block compares the identical pixel (spatial, luma and chroma in same phase) to determine motion.
If any motion is detected, the frame comb is shut off, as the frame comb is highly motion sensitive.
Propagation delay remains the same regardless of the setting.
44. Video Conversion: Up Conversion When converting from SDTV to HDTV it is called up conversion.
De-interlacing technology is important as both interlace and progressive scan types are used for HDTV
3D adaptive comb filtering is important if analog composite signals are upconverted
Standards or frame rate conversion is required between 50Hz and 60Hz based video signals during up conversion.
If the 480i signal is 24fps (23.98fps) material with a 2:3 cadence added to make 30fps (29.97fps), it is easier to convert by detecting the sequence (3:2) and performing a high quality conversion.
If the 480i material is 29.97fps with no 3:2 cadence (true NTSC) and converted to 23.98p, there will be motion artifacts.
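The 2:3 cadence logic above can be sketched with a toy sequence (illustrative only; real inverse telecine works on interlaced fields with motion analysis):

```python
def add_2_3_pulldown(frames):
    """Map 24p frames to 60i fields: frames alternately contribute 2 then 3 fields."""
    fields = []
    for i, frame in enumerate(frames):
        fields.extend([frame] * (2 if i % 2 == 0 else 3))
    return fields

def detect_cadence(fields):
    """Recover the original frames by collapsing runs of repeated fields."""
    frames = []
    for field in fields:
        if not frames or frames[-1] != field:
            frames.append(field)
    return frames

fields = add_2_3_pulldown(["A", "B", "C", "D"])   # 4 film frames -> 10 fields
original = detect_cadence(fields)                 # cadence detected and removed
```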
45. Interlace versus Progressive Assume equal scan rates (ie 59->59 or 50->50, not from- or to- film). They each have their own set of challenges:
1080iNN -> 720pNN
1080i is de-interlaced (-> 1080p), then scaled down to 720p.
De-interlacing is the art of re-constructing missing information. There are no perfect de-interlacers, and each makes compromises.
Scaling down to 720p involves discarding information. Scaling down will discard some of the information re-generated by the de-interlacer, and will thus reduce the magnitude of de-interlacing errors.
720pNN -> 1080iNN
720p is scaled up (-> 1080p), then interlaced to 1080i.
720p to 1080p up scaling preserves all of the information present in the original material, and adds some interpolated information.
Interlacing discards some of the original and interpolated information.
The de-interlacer is by far the block with the greatest responsibility and complexity. Most artifacts will be introduced in this block. Furthermore, downscaling discards information.
Thus 720p -> 1080i is "easier" to implement and creates fewer artifacts.
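A minimal intra-field de-interlacer (simple line doubling, often called “bob”) illustrates the compromises involved; production de-interlacers interpolate and adapt to motion rather than simply repeating lines:

```python
def bob_deinterlace(field_lines):
    """Intra-field de-interlace: each field line is repeated to stand in for
    the missing line. Vertical resolution is halved on moving material."""
    frame = []
    for line in field_lines:
        frame.append(line)
        frame.append(list(line))   # duplicated line replaces the absent one
    return frame

field = [[10, 10], [20, 20]]       # a 2-line field
frame = bob_deinterlace(field)     # reconstructed 4-line frame
```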
46. Video Conversion: Down Conversion When converting from HDTV to SDTV it is called down conversion.
When converting between 50Hz and 60Hz based video during down conversion, a standards or frame rate conversion is required
Aperture control (horizontal and vertical bandwidth control) is important when downconverting to minimize aliasing effects
47. Video Conversion: Cross Conversion Cross conversion is typically between different HDTV formats without any changes to the frame rate
If film rate based (24P) material is converted to 59.94P or 59.94I, a 2:3 cadence is inserted
24P film rate based material may be “sped up” to 25P using “SLO-PAL” in a “two step” process
If real time conversion is required between frame rates, standards (or frame rate) conversion is required
Up conversion is sometimes referred to as cross conversion!
48. Video Conversion – Standards Conversion (SD/HD) Converting between different frame rates based on the power line frequency (ie 50 Hz or 60 Hz) is called standards conversion
When converting between HDTV video formats with different frame rates based on the power line frequency, (ie 50 Hz or 60 Hz) it is considered to be a standards conversion or a frame rate conversion.
This may be considered a cross conversion, as well
When up and/or down converting and changing frame rates, it is considered to be a standards or frame rate conversion.
49. Video Conversion – Standards Conversion There are different methods used for standards conversion:
Basic
The most basic type of conversion, called drop / repeat, is just like a frame sync: when going up in frame rate, frames are repeated as required; when going down, frames are dropped. All of the frames at the output are identical to the input frames with two exceptions - sometimes one is repeated a second time, and sometimes one is dropped. This results in a jerky motion (called judder) when converting from 25 to 29.97 or back, since we have to add or drop approx. 5 frames per second. Frame syncs only do this very infrequently by comparison.
Linear
Linear standards conversion involves temporal interpolation / decimation of frames. In other words, the hardware uses the input frames to synthesize more or fewer output frames. When the output frame rate is higher than the input, the extra frames required are generated, the opposite for lower output frame rate. In both cases, all frames going out are different from the ones going in - each output frame is made up of percentages of the input frames that are around it in time. Linear conversion causes blurring of motion.
50. Video Conversion – Standards Conversion There are different methods used for standards conversion:
Motion Adaptive
Motion Adaptive looks at motion between frames to decide how to do the interpolation / decimation between frame rates. Depending on motion, different filtering can provide better pictures. This results in less blurring than linear.
Motion Compensated
Motion Compensated looks at motion between frames and not only decides how to do the filtering, but can also move pixels around, essentially predicting where things have moved to in the synthesized frames and moving them there to reduce the visibility of jerkiness. This results in the least blurring and judder.
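The linear method described above can be sketched as time-weighted blending; frames are reduced to single brightness values for illustration, which is also why the blending reads directly as motion blur:

```python
def linear_rate_convert(frames, in_fps, out_fps):
    """Synthesize each output frame as a time-weighted blend of the two
    nearest input frames (temporal interpolation / decimation)."""
    duration = len(frames) / in_fps
    n_out = round(duration * out_fps)
    out = []
    for k in range(n_out):
        t = k / out_fps * in_fps              # position in input-frame units
        i = min(int(t), len(frames) - 1)
        j = min(i + 1, len(frames) - 1)
        frac = t - int(t)
        # Blending adjacent frames is what causes the motion blur noted above.
        out.append(frames[i] * (1 - frac) + frames[j] * frac)
    return out

# 5 frames at 25 fps -> 6 frames at 30 fps; every output is a mix of inputs.
out = linear_rate_convert([0.0, 10.0, 20.0, 30.0, 40.0], in_fps=25, out_fps=30)
```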
51. HDTV Survival Guide
52. Audio Processing Audio for television is processed in many ways:
Discrete
Analog
Outputs: Balanced 600 ohm (high) or 66 ohm (low) impedance
Inputs: Balanced 600 ohm (low) or >10k ohm (high) impedance
Unbalanced
Digital
AES/EBU, 110 ohm Balanced or 75 ohm Unbalanced
Carries two mono channels or one stereo pair
Embedded
Into SD-SDI or HD-SDI in groups of 4 channels, to a total of 16 channels (4 groups)
Compressed for Surround Sound
8 channels compressed into one AES for discrete or embedded applications
53. Audio Conversion: A to D AES digital audio signals are PCM (Pulse Code Modulated) and can be sampled at various rates
For instance, CD Audio is sampled at 44.1 kHz.
Audio signals in professional television applications are sampled at 48kHz.
48kHz sampled audio is used in professional broadcast television applications because it can be easily locked to a SDI (Serial Digital Interface) digital video signal.
Analog audio embedders sample the audio at 48kHz before embedding.
AES embedders pass the AES digital audio signal through a SRC (Sample Rate Converter) before being embedded into the digital video signal (SDI).
The SRC output is 48kHz and locked to the digital video signal.
If the input is not 48kHz, it is sample rate converted.
The sample rate conversion process will corrupt a compressed audio signal (DOLBY-E, DOLBY DIGITAL) that is formatted as an AES signal.
The AES embedder must provide a non-PCM mode of operation by bypassing the SRC so that compressed audio signals are not corrupted.
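The embedder behaviour above can be sketched as a small decision routine (the function and flag names are hypothetical; in real equipment the non-PCM indication comes from the AES channel-status bits):

```python
def embed_aes(samples, sample_rate_hz, non_pcm):
    """Route an AES pair toward the SDI embedder, bypassing the SRC
    for non-PCM payloads so compressed bitstreams are not corrupted."""
    if non_pcm:
        # Dolby-E / AC-3 data must pass untouched: any SRC corrupts it.
        return samples, "src-bypassed"
    if sample_rate_hz != 48000:
        samples = sample_rate_convert(samples, sample_rate_hz, 48000)
        return samples, "src-converted"
    return samples, "src-locked"   # already 48 kHz, locked to video

def sample_rate_convert(samples, from_hz, to_hz):
    # Placeholder SRC: a real implementation interpolates between rates.
    return samples
```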
54. Audio Processing Analog audio is still used in many applications; however, digital audio (AES/EBU) is now standard for audio interfacing
In today's TV systems, audio is typically embedded into digital video signals.
The advantage of embedded audio processing is that the audio automatically follows any delays incurred by digital video processing (such as frame synchronization and noise reduction), so no relative audio/video delay is introduced
The sampling frequency used is 48kHz as audio can be easily locked to video
55. Audio Processing Multiple language and surround sound is a reality.
To facilitate this new way of presenting audio along with standard and high definition video, audio compression techniques are used.
DOLBY-E is used in contribution applications and DOLBY DIGITAL (AC-3) is used for distribution into the home (5.1 channels : front left, centre, front right, rear right, rear left, and subwoofer).
PRO-LOGIC is used for distribution purposes into the home in some cases. Using PRO-LOGIC techniques, a four channel (front left, front right, rear right, rear left) surround sound mix is matrixed into a stereo program signal.
DD+ (7.1) is emerging as another audio surround format
56. Compressed Audio Processing – Dolby-E For production and contribution purposes, the DOLBY-E audio format is popular
Four AES signals (a total of 8 monophonic channels) are compressed into one AES
Typically, 5.1 surround sound (left front, center, right front, left rear, right rear and sub-woofer) and a stereo audio program (possibly PRO-LOGIC encoded), a total of 8 channels are carried within a DOLBY-E compressed data stream (one AES).
It is possible for other combinations to exist such as 6.1 (same as 5.1 with the addition of a rear channel for a total of 7 channels) and a monophonic program signal or multiple languages (4 stereo program signals in different languages or up to 8 monophonic channels – each carrying a different language).
57. Compressed Audio Processing – Dolby Digital (AC-3) For distribution into the home, DOLBY DIGITAL is used.
Typically, DOLBY-E will be used up to the point just before the signal is broadcast
At this point, the DOLBY-E signal is decompressed and then compressed into the DOLBY DIGITAL format
For the sake of simplicity, DOLBY-E and DOLBY DIGITAL can be considered to have the same characteristics with the exception of the compression techniques used.
58. Compressed Audio Processing – Audio Metadata Audio metadata (not to be confused with other types of metadata) is also carried along in the data stream with the compressed audio
Audio metadata is used to describe many aspects of the audio signal(s)
It allows for the decompression of a full surround sound mix for home theatres to a monophonic audio mix for small TVs in kitchens and bedrooms
In some cases, the audio metadata is extracted as an RS-422 data format
An example is when the audio metadata is passed from a DOLBY-E decompression unit to a DOLBY DIGITAL compression unit or if the audio metadata needs to be updated.
59. HDTV Survival Guide
60. Considerations for Video Systems Design CRT Monitors
from Sony Research in UK:
Sony consider that the CRT is mature, and still the only technology that is good enough for evaluation purposes
Although there is little or no new development going on, Sony have made arrangements which should ensure that a supply of Grade-1 tubes will be available for several years to come
When you consider that such tubes have a lifetime of many years, it should be possible for broadcasters to use CRT monitors for evaluation for a long time to come
So CRTs will still be with us for some time, at least when a grade-1 monitor is required (high-end post production)
Many other types of displays will be introduced with their own processing artifacts and limitations
61. Considerations for Video Systems Design Distribution Amplifiers:
Coaxial cable is used for analog composite (1), analog component (3), digital SDTV (1), and digital HDTV (1) and ASI (1)
It is important to choose a cable type that can be used for all formats
Cable length is important, especially for HDTV signals
Different distribution amplifiers are required for analog and digital signals
Typically, separate digital SDTV and HDTV DAs are used; however, it is possible to use a DA today that passes SD-SDI, HD-SDI and ASI
Routing Switchers
Routing switchers do not typically provide a clean/quiet switch for either analog or digital signals; additional processing is required
Some routing switchers will support SD-SDI and HD-SDI signals in the same matrix, simplifying hybrid digital SDTV and HDTV systems design
Analog routing switchers are required for analog signals
62. Considerations for Video Systems Design Video/Audio Servers, Editors and Tape transports:
Video/audio servers, editors and tape transports can record, edit and playback compressed video streams
The compression rate will affect the quality of the signal
Audio Embedders (or Multiplexers):
Separate audio embedders are typically required for analog and digital audio signals
Digital audio embedders are now available that will support embedding audio into SD-SDI or HD-SDI video
Audio De-embedders (or Demultiplexers):
Separate audio de-embedders are required for analog and digital audio signals
AES digital audio signals, compressed or otherwise, are de-embedded from the SD-SDI or HD-SDI signal
If uncompressed, multiple channel surround sound signals are de-embedded (3 AES for 5.1)
It is important that there are minimal differences in timing to preserve the surround sound aural image.
Digital audio de-embedders are now available that will support de-embedding audio from SD-SDI or HD-SDI video
63. Considerations for Video Systems Design Conversion Equipment
Analog/Digital
It is important to use the highest quality processing when converting between analog (composite) and digital (component) signals. 3D adaptive comb filters provide the best quality when converting from analog to digital especially before compression and up-converting
Up/Down/Cross
It is important to use the highest quality conversion when up, down and cross converting.
Frame rate conversion may be required. If so, be sure to pick the best processing that you can afford (linear versus motion adaptive versus motion compensated)
High-quality HDTV conversion minimizes the introduction of motion and conversion artifacts.
64. Considerations for Video Systems Design Conversion Equipment
Scan Conversion
Converting from computer scan types to television scan types may cause artifacts. It is important to be aware of whether this type of conversion is used, especially in picture monitors
Native Format
Choose a native format for your system and keep conversion to a minimum to maintain high quality
65. Considerations for Video Systems Design MPEG Encoders
MPEG encoders rely on redundancies that exist within video signals to reduce the numbers of bits required to transmit a signal
Consumer transmissions can often be allocated less than 1Mbps
MPEG encoders perform better with clean digital signals, free from noise and artifacts
By improving the quality of the input baseband signal to an MPEG encoder, better subjective quality can be achieved at the same bit rate
Digital video originating from analogue sources can significantly reduce MPEG encoding efficiency
Composite decoding artifacts
Noise
Film grain, dirt, scratches
66. Considerations for Video Systems Design The challenges of compression
The consumer broadcast industry is in the midst of a transition to digital services
Consumers expect that digital services will bring higher quality signals to the home
Distributors must manage a vast array of analogue/digital signals and deliver on consumer expectations
Composite Video (the majority of signals remain analog to this day)
SDI
HD-SDI
ASI (compressed video transport – typically MPEG2 but moving to MPEG4)
Digital transmissions make use of MPEG compression to deliver more signals to the consumer
In order to maximize the number of signals delivered to the consumer, MPEG transmissions are bandwidth and bit rate limited
To optimize the quality of the signals, noise reduction and advanced composite video decoding are recommended
67. Considerations for Video Systems Design The challenges of compression
Analog video signals cannot be directly MPEG encoded
They must be first converted into digital (270Mb/s) signals
Color decoding is used to separate luminance (black and white) from chrominance (color) information
Color decoding is accomplished using filters
Different algorithms can be used to convert composite video into digital
Each one can introduce different artifacts into the digital video. MPEG encoding operates on 270 Mb/s SDI signals.
The ideal resultant luminance signal should have no residual subcarrier
68. Considerations for Video Systems Design The challenges of compression
Optimizing video prior to MPEG encoding encompasses several different techniques.
High-quality video processing/conversion
3D adaptive decoding of composite video to component digital video.
Minimize conversion artifacts.
Noise reduction processing
Minimize extraneous noise to maximize encoder efficiency
69. HDTV Survival Guide
70. Considerations for Audio Systems Design Distribution Amplifiers:
AES (unbalanced or balanced) distribution amplifiers do not affect compressed audio streams and can be used with no difficulties whether in reclocking or bypass modes.
Routing Switchers:
AES routing switchers can route and switch compressed audio streams; however, a switch between streams can affect downstream equipment
Vertical interval switching can help eliminate problems with DOLBY-E streams, but a “quiet switch” is recommended if downstream equipment is not to be disturbed by upstream switching.
71. Considerations for Audio Systems Design Video/Audio Servers, Editors and Tape transports:
Video/audio servers, editors and tape transports can record, edit and playback compressed audio streams (DOLBY-E)
It is important that the DOLBY-E signal passes through in a transparent manner
DOLBY-E cannot be re-sampled; gain or equalization cannot be applied; channels cannot be swapped or inverted; and vari-speed (eg. slo-motion) cannot be used.
DOLBY-E can be edited on video frame boundaries.
Audio Embedders (or Multiplexers):
The sample rate conversion process will corrupt a compressed audio signal (DOLBY-E, DOLBY DIGITAL) that has been formatted as an AES signal.
The AES embedder must provide a non-PCM mode of operation by bypassing the SRC so that compressed audio signals are not corrupted.
72. Considerations for Audio Systems Design Audio De-embedders (or Demultiplexers):
AES digital audio signals, compressed or otherwise, are de-embedded from the SDI signal
If uncompressed, multiple channel surround sound signals are de-embedded (3 AES for 5.1)
It is important that there are minimal differences in timing to preserve the surround sound image.
73. Considerations for Audio Systems Design One must take care when passing AES signals carrying compressed audio streams through processing and transport equipment
The AES signal which contains a compressed audio data stream cannot be manipulated in any way
The path through processing and transport equipment must remain transparent
It is important to maintain exact phase on de-embedded AES channels for 5.1 surround sound signals as any variances between channels change the surround sound aural “image”.
74. HDTV Survival Guide
75. Video Formats – Analog Composite NTSC
76. Video Formats – Analog Composite PAL-M
77. Video Formats - Analog Composite SECAM
78. Video Formats – Analog Composite Record/Playback
79. Video Formats – 4 fsc Digital Composite When sampling analog composite signals into the digital composite domain, the sample rate is determined by 4x the frequency of the subcarrier signal (4fsc)
10 bit samples were taken for higher quality purposes; however, 8 and 9 bit processing is typical for this format (dependent on cost)
Initially, the 8, 9 or 10 bit wide signal was carried on parallel cables similar to a printer cable, with a maximum cable distance of approximately 50 ft.
The SDI (Serial Digital Interface) standard supports a serial version of these formats for longer cable runs.
80. Video Formats – Digital Composite 143 Mb/s For the analog composite NTSC signal, the subcarrier frequency is 3.58 MHz.
Four times the frequency of 3.58 MHz results in a 14.3 MHz sample frequency.
The analog composite signal is digitized at 14.3 MHz using 10 bit words into the parallel digital domain.
When 14.3 MHz signals are serialized, the resultant bit rate in the serial digital domain is 10 x 14.3 = 143 Mb/s.
This is the frequency at which digitized NTSC signals travel along a co-axial cable using the SDI (Serial Digital Interface)
81. Video Formats – Digital Composite 177 Mb/s For the analog composite PAL signal, the subcarrier frequency is 4.43 MHz.
Four times the frequency of 4.43 MHz results in a 17.7 MHz sample frequency.
The analog composite signal is digitized at 17.7 MHz using 10 bit words into the parallel digital domain.
When 17.7 MHz signals are serialized, the resultant bit rate in the serial digital domain is 10 x 17.7 = 177 Mb/s.
This is the frequency at which digitized PAL signals travel along a co-axial cable using the SDI (Serial Digital Interface)
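The 4fsc arithmetic on this and the previous slide can be checked in a few lines (a sketch, not broadcast code):

```python
# Composite digital serialization: sample at 4x subcarrier, 10-bit words.
BITS_PER_SAMPLE = 10

def serial_rate_mbps(subcarrier_mhz):
    """Serial bit rate for a 4fsc composite digital signal."""
    return 4 * subcarrier_mhz * BITS_PER_SAMPLE

ntsc_rate = serial_rate_mbps(3.579545)   # ~143 Mb/s
pal_rate = serial_rate_mbps(4.433619)    # ~177 Mb/s
```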
82. Video Formats – Digital Composite Record/Playback Two composite digital tape transports are common worldwide: D2 and D3
The digital recording and playback allowed for multi-generation use; however, the analog (composite) inputs and outputs were typically used.
Television products are no longer being manufactured using the digital composite standard except for high quality analog to analog processing.
This standard was popular in the past for frame synchronization, digital video effects, still stores using analog interfaces and internal digital processing
83. Video Formats – YPrPb, YUV, YR-YB-Y (SDTV) Analog component video signals are captured by a camera in an RGB (Red, Green and Blue) format.
RGB signals are displayed on a picture monitor, as well.
However, to carry three signals from the camera to the monitor on three cables is cost prohibitive.
Techniques such as translating the signal into other analog component formats and color encoding reduced the bandwidth
RGB analog was used in high end post production in the past as some digital processing used analog component IO for higher quality reasons (eg: character and graphics generators)
84. Video Formats – YPrPb, YUV, YR-YB-Y (SDTV) YPrPb, YUV and YR-YB-Y are all related signals which vary slightly in their formatting
These three signals are translated from RGB.
The Y signal is made up of a portion each of R, G and B to create a luminance signal which contains the black and white portion of the RGB signal. This is a full bandwidth signal.
Two color difference signals are matrixed (PrPb, UV, R-YB-Y) and bandwidth limited.
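The RGB-to-component translation above can be illustrated with the standard-definition (ITU-R BT.601) luma weights; the Pb/Pr scale factors shown are the conventional normalizations:

```python
def rgb_to_ycc(r, g, b):
    """BT.601 translation: full-bandwidth luma (Y) plus two colour-difference
    signals (which are then bandwidth limited in a real system)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # weighted mix of R, G and B
    pb = (b - y) * 0.564                    # scaled B-Y
    pr = (r - y) * 0.713                    # scaled R-Y
    return y, pb, pr

# White has full luminance and no colour difference.
y, pb, pr = rgb_to_ycc(1.0, 1.0, 1.0)
```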
85. Video Formats – YC Y is the luminance portion of the video signal
If you combine the two color difference signals (Pr/Pb or UV or R-Y/B-Y), the result is a chrominance signal: C
A quadrature modulation scheme is used to modulate the two color difference signals onto a single subcarrier signal C.
The frequencies for the subcarrier are 3.58 MHz for NTSC and 4.43 MHz for PAL
YC is used in prosumer and consumer applications
(NOTE: When you combine Y + C, you get an analog composite signal)
86. Video Formats – 525 / 625 SDTV analog component signals are based on two line standards corresponding to NTSC and PAL analog composite standards:
Lines Frame Rate Field Rate
525 29.97 59.94
625 25 50
87. Video Formats – Analog Component Record/Playback Analog Betacam, BetaSP and MII took advantage of the high quality of an analog component signal for recording and playback
A few analog component systems were built; however, using three cables for interconnect was cost prohibitive and could be problematic
SVHS and HI8 take advantage of not combining the luminance and chrominance signal and are used in consumer and prosumer domains to maintain higher quality
88. Video Formats – 4:2:2/4:4:4 (SDTV) When digitizing analog component signals, the line count describes the lines in the active picture (480 or 486 for 525 and 576 for 625)
Component signals have the best quality versus bandwidth. The Y and color difference (Pr/Pb, UV or R-Y/B-Y) signals are sampled at rates derived from a common 27 MHz clock, a frequency related to both the 525/59.94 and 625/50 standards.
A 10-bit word sample with a 4:2:2 luminance-to-color-difference ratio is used (Y = 4, Pr/Pb = 2), resulting in 13.5 MHz for luminance and 6.75 MHz for each of the color difference signals
Parallel cables were used initially; however, serial connections allow greater cable lengths.
270 Mb/s is the resultant serial bit rate for a 10-bit wide path (27 MHz x 10 bits)
For higher quality, the color difference signals can be sampled at 13.5 MHz (4:4:4) for a total sample rate of 40.5 MHz. Typically two cables are used to carry a 4:4:4 signal (dual link BNC cables)
“i” or “p” describes the scanning formats: interlace or progressive
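The bandwidth arithmetic on this slide can be checked with a one-line helper (a sketch; the function name is mine):

```python
def serial_rate_mbps(y_mhz, cb_mhz, cr_mhz, bits=10):
    """Serial bit rate = total parallel sample rate x word width."""
    return (y_mhz + cb_mhz + cr_mhz) * bits

# SDTV 4:2:2: 13.5 MHz luma plus 6.75 MHz per color difference signal
print(serial_rate_mbps(13.5, 6.75, 6.75))   # 270.0 (Mb/s)

# SDTV 4:4:4: all three channels sampled at 13.5 MHz
print(serial_rate_mbps(13.5, 13.5, 13.5))   # 405.0 (Mb/s), hence dual link
```

The 4:4:4 result exceeding 270 Mb/s is why the slide notes that two cables (dual link) are typically used for full-bandwidth color.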
89. Video Formats – 540 Mb/s One of the formats that was considered for the roll-out of HDTV was using the same numbers of lines as standard definition but using a progressive scan type
This results in twice as much bandwidth required
The 540 Mb/s signal is either carried on a dual link (two BNC cables) or mapped into a 1.5 Gb/s serial data stream
Both 525 (NTSC) and 625 (PAL) standards are possible using this format:
Standard Lines Frame Rate (p)
525 (NTSC) 480 (486) 59.94
625 (PAL) 576 50
90. Video Formats – 360 Mb/s One of the formats that was considered for the roll-out of HDTV was a wide-screen SDTV signal (16:9) using a higher sampling rate to maintain horizontal resolution.
If a 4:3 is stretched horizontally to 16:9, an increase of 33% picture information is required
The Y channel sampling frequency would be increased to 18MHz (13.5MHz * 1.33 = 18MHz)
The color difference signals would be increased to 9 MHz (6.75 * 1.33 = 9 MHz)
The resultant parallel sampling frequency is 18 + 9 + 9 = 36 MHz.
The resultant serial bit rate is 36 MHz x 10 bits = 360 Mb/s
This format was never widely deployed: no record/playback equipment was produced in quantity or accepted by the market
91. Video Formats – Digital Component Record/Playback The first digital tape transport available was D1 – it supported parallel interfaces for input and output
The first serial digital Betacam products used A>D and D>A conversion around an analog tape transport (based on analog Betacam) with serial digital interfaces. This evolved into a fully digital tape transport using compression technologies.
D5 provides more record/playback bandwidth than digital Betacam
There are many DV-type tape transports available. Record/playback performance varies between formats depending on the compression ratio used, providing a range of products for the professional, semi-professional and consumer domains
Hard disk based record and playback systems typically use the 4:2:2 digital component format which is compressed, as well
92. Video Formats – RGB (HDTV) Video signals are captured by HDTV cameras in an RGB (Red, Green and Blue) format.
RGB signals are displayed on HDTV picture monitors, as well.
RGB analog may be used in high end post production.
93. Video Formats – YPrPb (HDTV) YPrPb is a related signal to RGB
The Y signal is made up of a portion each of R, G and B to create a luminance signal which contains the black and white portion of the RGB signal. This is kept full bandwidth as luminance resolution needs to be higher than color resolution
Two color difference signals are matrixed (PrPb) and bandwidth limited as color is not required to have as much bandwidth as luminance signals
This format is used typically as the output signal from a camera or as the input signal to a picture monitor (this format is used exclusively for HDTV in the consumer domain at this time)
94. Video Formats – 1080 / 720 lines There are two line formats used for HDTV
Today’s HDTV signals use either 1080 or 720 lines
The horizontal resolution for each is shown below:
Lines (V) Pixels (H)
1080 1920
720 1280
95. Video Formats – Frame/Field Rates, Scan Types (HDTV) There are many frame/field rates used for HDTV signals
Progressively scanned frames are used at the following rates: 23.98, 24, 25, 29.97 and 30 frames per second.
As well, SF or Segmented Frame is used, where the camera and display are progressively scanned but the processing is done in an interlaced manner. Sony literature describes PsF formats as P or Progressive: 23.98P, 24P, 25P, 29.97P and 30P are used as generic names for the industry-standard 23.98PsF, 24PsF, 25PsF, 29.97PsF and 30PsF (Progressive Segmented Frame) formats, respectively.
Interlaced formats include 25 frames (50 fields) per second, 29.97 frames (59.94 fields) per second and 30 frames per second (60 fields)
Broadcast formats follow the frame/field rates used in SDTV: 25 frames per second (50 fields per second) in PAL countries and 29.97 frames per second (59.94 fields per second) in NTSC countries. For the 1080 line standard, 25 frames (50 fields) and 29.97 frames (59.94 fields) per second interlaced are used. For the 720 line standard, 50 and 59.94 frames per second progressive are used in broadcast. Typically these formats are described in the following manner: 1080i-50, 1080i-59.94, 720p-50 and 720p-59.94
23.98 / 24 / 25 / 29.97 / 30 frame per second progressive and segmented frame scan types frames are used in production and post production
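The naming convention above can be captured in a small lookup table (the table and helper below are illustrative, not from any standard library). Note that for interlaced names the trailing number is the field rate, while for progressive names it is the frame rate:

```python
# format name -> (active lines, frames per second, scan type)
BROADCAST_FORMATS = {
    "1080i-50":    (1080, 25.0,  "interlaced"),   # 50 fields/s
    "1080i-59.94": (1080, 29.97, "interlaced"),   # 59.94 fields/s
    "720p-50":     (720,  50.0,  "progressive"),
    "720p-59.94":  (720,  59.94, "progressive"),
}

def describe(name):
    lines, fps, scan = BROADCAST_FORMATS[name]
    return f"{lines} lines, {fps} frames/s, {scan}"

print(describe("1080i-50"))  # 1080 lines, 25.0 frames/s, interlaced
```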
96. Video Formats – 4:2:2/4:4:4 (HDTV) When digitizing HDTV analog component signals, a 4:2:2 ratio is used to sample the Y, Pr, Pb signals
The sample rate is 74.25MHz. (Y = 74.25 MHz, Pb/Pr = 37.13 MHz)
A 10-bit word sample is used.
The serial digital interface has a bandwidth of 1.5 Gb/s
(74.25 + 74.25/2 + 74.25/2) x 10 = 1.485 Gb/s
For full bandwidth sampling, either RGB or the Pb and Pr signals are sampled at 74.25 MHz (4:4:4). These signals are carried on two 1.5Gb/s serial interfaces (dual link)
“i” or “p” describes the scanning formats: interlace or progressive
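The HD-SDI rate on this slide can be verified with the same sample-rate-times-word-width arithmetic used for SDTV:

```python
# HD-SDI 4:2:2 serial bit rate from the sampling structure
y_mhz = 74.25            # luma sample rate, MHz
c_mhz = 74.25 / 2        # 37.125 MHz for each color difference channel
bits = 10                # 10-bit word samples

rate_mbps = (y_mhz + c_mhz + c_mhz) * bits
print(rate_mbps)         # 1485.0 Mb/s, i.e. the 1.485 Gb/s HD-SDI rate
```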
99. Video Formats –HDTV Record/Playback There are many choices for HDTV recording and playback
Tape, solid state and hard drive devices are used with compression techniques for recording and playing back video signals.
Typically MPEG2 or a form of DCT (Discrete Cosine Transform) compression or JPEG2000 is used.
100. Video Formats – 2k DC (Digital Cinema) DC has a new standard (DC1.0) defining 2k as:
2048 pixels by 1080 pixels at 24 or 48 frames per second
A “dual” 1.5 Gb/s interface is used
A standard for a single 3.0 Gbps co-axial connection is emerging for both HDTV and DC (Digital Cinema)
It will be used in the near future for high quality production purposes
101. Video Formats – 4k DC (Digital Cinema) DC has a new standard (DC1.0) defining 4k as:
4096 pixels by 2160 pixels at 24 frames per second
A “dual dual” 1.5 Gb/s interface is used (quad)
102. HDTV Survival Guide
103. Video Conversion: Color Encoding Video signals start out as analog component signals and are processed from three channels down to one composite signal
This process of color encoding introduces artifacts, because luminance and chrominance can never be completely separated again at the display device
104. Video Conversion: Color Decoding For analog systems in the past, color decoding is the process of converting analog composite (1 channel, 1 wire) to component (3 channels, 3 wires)
Color decoding uses simple notch filtering. This is known as 1D or one dimensional filtering
Typically, the results are not good enough for the production of television today, but are used in lower priced products for consumer and pro-sumer applications.
A simple notch filter removes subcarrier (3.58MHz for NTSC) from the luminance signal.
It removes some luminance information as well.
If not properly designed, it can produce “ringing” or visible reflections
The propagation time through a decoder of this type is not an issue for lip sync
Notch filtering is used as one of the filter choices in adaptive comb filters
For analog systems, 2D comb filtering techniques were used (passive glass delay lines)
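The comb-filter idea mentioned above relies on the NTSC subcarrier inverting its phase from one line to the next (227.5 subcarrier cycles per line). Averaging two adjacent lines therefore cancels chroma, and differencing cancels luma in flat picture areas. A minimal sketch of a 1-line comb (illustrative only; real decoders are adaptive):

```python
def line_comb(line_a, line_b):
    """Separate luma and chroma from two adjacent composite lines,
    assuming chroma phase inverts between them (as in NTSC)."""
    luma   = [(a + b) / 2 for a, b in zip(line_a, line_b)]
    chroma = [(a - b) / 2 for a, b in zip(line_a, line_b)]
    return luma, chroma

# Flat gray picture (luma 0.5) with a chroma component that flips
# sign on the next line, as the NTSC subcarrier does:
chroma_in = [0.25, -0.25, 0.25, -0.25]
line_a = [0.5 + c for c in chroma_in]
line_b = [0.5 - c for c in chroma_in]
luma, chroma = line_comb(line_a, line_b)
print(luma)    # [0.5, 0.5, 0.5, 0.5]
print(chroma)  # [0.25, -0.25, 0.25, -0.25]
```

In real pictures the two lines are not identical, which is why vertical detail leaks between luma and chroma and why adaptive designs switch filters on a pixel-by-pixel basis.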
105. Video Conversion: A>D (composite domain) The techniques for converting analog composite to digital composite are well known.
However, the industry has moved away from composite digital processing as digital component processing provides for greater bandwidth and resolution
106. Video Conversion: D>A (composite domain) The techniques for converting digital composite to analog composite are well known.
However, the industry has moved away from composite digital processing as digital component processing provides for greater bandwidth and resolution
107. Video Conversion – Standards Conversion (Analog) Converting between NTSC and PAL is known as “standards conversion”.
The line count is converted by a vertical stretch for 525-to-625 conversion and a vertical squeeze for 625-to-525 conversion
The frame rate needs to be converted, as well.
Frame rate conversions are considered to be “temporal” as the conversion takes place over time between frames (or motion)
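The temporal part is the hard part. The simplest (and lowest quality) temporal converter is a frame blend, sketched below; real standards converters use motion compensation instead. The function name is mine:

```python
def blend_frames(frame_a, frame_b, weight):
    """Build an output frame at temporal position `weight` (0..1)
    between two input frames by linear blending. Fast, but moving
    edges become double images; hence motion-compensated converters."""
    return [(1 - weight) * a + weight * b for a, b in zip(frame_a, frame_b)]

# An output frame falling exactly midway between two input frames:
print(blend_frames([0.0, 2.0], [1.0, 6.0], 0.5))  # [0.5, 4.0]
```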
108. Video Conversion – Standards Conversion (Component) If component analog video is used for standards conversion, there is an improvement as luminance and chrominance information are carried on three separate channels
Typically today, digital component processes and interfaces are used (SD-SDI)
109. Video Conversion: SRC Sample Rate Conversion (SRC) is also described as a digital encoder plus digital decoder
SRC is used to convert between the composite and component domains
This is no longer a conversion typically done in today’s systems
110. Video Conversion: Translate Moving between three-channel formats (RGB<>YPrPb) is a translation process
For analog interfaces, there are three choices:
3 channels
Component (RGB, YPrPb)
2 channels
YC
1 channel
Composite
111. Video Conversion: Mod/Demod Combining color difference signals (Pr,Pb) to a single chrominance signal (C) is called modulation
Separating the chrominance signal (C) to the two color difference signals (Pr,Pb) is called demodulation
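Demodulation works because sine and cosine are orthogonal: multiplying C by each reference carrier and averaging over whole subcarrier cycles recovers each color difference component. A synchronous-demodulator sketch (illustrative; the names are mine, and the averaging stands in for the low-pass filter used in hardware):

```python
import math

def demodulate_chroma(c_samples, t_samples, fsc):
    """Recover the two color difference components from chrominance
    samples by multiplying with the sine and cosine reference
    carriers and averaging over the sample set."""
    n = len(c_samples)
    u = sum(2 * c * math.sin(2 * math.pi * fsc * t)
            for c, t in zip(c_samples, t_samples)) / n
    v = sum(2 * c * math.cos(2 * math.pi * fsc * t)
            for c, t in zip(c_samples, t_samples)) / n
    return u, v

# Synthesize one subcarrier cycle of C = u*sin + v*cos, sampled at
# 4x the subcarrier frequency, then demodulate it back:
fsc = 3.579545e6
u_in, v_in = 0.5, -0.25
ts = [k / (4 * fsc) for k in range(4)]
cs = [u_in * math.sin(2 * math.pi * fsc * t)
      + v_in * math.cos(2 * math.pi * fsc * t) for t in ts]
u_out, v_out = demodulate_chroma(cs, ts, fsc)
print(u_out, v_out)  # recovers u_in and v_in to floating point precision
```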
112. Video Conversion: A to D (SDTV) Analog component three channel to digital component conversion for standard definition video is typically called an A to D conversion
RGB is first translated to Y,Pr,Pb and then digitized using the 4:2:2 sampling scheme
113. Video Conversion: D to A (SDTV) Digital component to analog component three channel conversion for standard definition is typically called a D to A conversion
The 4:2:2 sampled signal are processed through digital to analog converters for Y, Pr, Pb
RGB is translated from Y,Pr,Pb
114. Video Conversion: A to D (HDTV) Analog component three channel to digital component conversion for high definition video is typically called an A to D conversion
RGB is first translated to Y,Pr,Pb and then digitized using the 4:2:2 sampling scheme
115. Video Conversion: D to A (HDTV) Digital component to analog component three channel conversion for high definition is typically called a D to A conversion
The 4:2:2 sampled signals are processed through digital to analog converters for Y, Pr, Pb
RGB is translated from Y,Pr,Pb
Analog component high definition video is used for monitoring devices and for consumer applications
116. HDTV Survival Guide
117. Total Content Delivery
118. Products: Multiple Image Processor
NEO SuiteView™ is a highly scalable, modular multi-source display processor capable of rendering multiple video and computer signals in real time to high-resolution computer monitors, plasma or projection displays.
Resolutions from VGA to UXGA (1600x1200) are supported at the output
Extensive input format support
HD modules support auto-sensing HD-SDI, SDI & composite inputs for the ultimate future proof selection
SD modules support auto-sensing SDI and composite inputs
Analog video modules support composite inputs
Also supports RGB, YUV, Y/C inputs
Graphics module supports computer graphics inputs: VGA and DVI
Streaming video also supported
Configurable high resolution graphics background input supporting resolutions up to UXGA (1600x1200)
119. Products: SDTV Record, Edit and Playback NX4000TXS SD Transmission Server
Supports multiple formats: DV/MPEG, SD and HD content on a single file system
Supports SDI, DVB/ASI, and AES interfaces
Easy integration with IP networks using gigabit Ethernet for content transfer
MXF-compatible
Scales to 100 I/O channels with shared access to content
Supports up to 14 terabytes of storage
Dual hot-swappable power supplies
Works with a wide range of automation, archiving, and media management applications
120. Products: SDTV Record, Edit and Playback VelocityQ Advanced Multi-Stream Non-Linear Editing System
Guaranteed, full-quality, multi-stream real-time performance
Real-time playback of 4 video streams and up to 6 graphics streams
Optional two-channel or four-channel real-time 3D DVE
Mixable compressed & uncompressed video
Integrated real-time multi-camera editing
Powerful, flexible and intuitive user interface
Tightly integrated hardware/software solution
Flexible output formats for broadcast, video, CD, DVD and the web
121. NEXIO
Transmission Servers
News Servers
Specialty Services
Storage Solutions
Software codecs for coding and decoding HD MPEG2 media
Supports 1080i and 720p resolutions
Supports SD and HD content on a single file system
Record user selectable I-frame MPEG2 HL 4:2:2 @ 50, 80, 100, 120 and 150Mbps
Playback MPEG 2 MP@HL LGOP up to 80 Mbps and MPEG 2 MP@HL at 4:2:2 profile I-frame up to 150Mbps
Supports HD-SDI (SMPTE-292M) and AES-3 interfaces
720p or 1080i Formats 1080i @ 29.97fps, 1080i @ 25fps & 720p @ 59.94fps
Scales to 48 channels – up to 48 HD outputs or up to 24 input channels and 24 outputs with shared access to content
Supports up to 30 terabytes of storage
Easy integration with IP networks using gigabit Ethernet for media transfers
Runs NXOS software
FTP server included
Transfer Manager built into NXOS to move media from server-to-server and between attached general purpose drives and the server
AC3 Dolby®Digital and Dolby®E audio pass-through
Dual hot swappable power supplies
RAIDSoft™ storage protection
Smooth off-speed play at up to +/- 4x with audio scrub
Works with a wide range of automation, archiving and media management applications
122. Products: HDTV Record, Edit and Playback VelocityHD non-linear editing system brings guaranteed, full-quality, real-time editing performance to the HD realm at a remarkable price. VelocityHD features full-quality HD playback of two video streams, two dynamic graphics streams, dual-stream real-time HD transitions and effects, and optional 3D effects. VelocityHD also features flexible formats and frame rates, compressed and uncompressed video, HD-SDI and IEEE-1394 HDV I/O, and outstanding multi-stream SD performance.
VelocityQ is a fully integrated, standard-definition multi-layer NLE solution that combines the Quattrus hardware with the flexible and intuitive Velocity software interface. VelocityQ’s powerful features include full-quality simultaneous playback of four video streams (compressed or uncompressed video), up to six graphics streams and optionally up to four channels of real-time 3D DVE.
VelocityX is the ultimate field and offline editing companion for VelocityHD and VelocityQ, unleashing the Velocity user interface in a software-only NLE for VelocityQ or VelocityHD customers needing a laptop editor for field editing, or looking to expand their operations with additional low-cost editing seats for offline work or online projects with lower performance requirements.
123. Products: HDTV Record, Edit and Playback Velocity and Video format conversion:
When the frame rate stays the same (1080/59.94i <> 486/59.94i), or when there is a good methodology for processing the frame rate conversion (e.g. 1080/23.98psf > 486/59.94i using 3:2 pulldown), quality is maintained: the spatial resizing uses a bicubic algorithm, with each output pixel computed from the surrounding source pixels.
When temporal conversions are added, quality can suffer, as Velocity primarily uses blending to process these conversions quickly. As a result, there can be significant artifacts.
These trade-offs come largely from rendering the conversions very fast: the classic trade-off between speed and quality.
Velocity users do have another option: capturing with VelocityHD, then using eyeon's Digital Fusion software on the same system to do the conversion. As a high-end image processing package (among other things), Digital Fusion will do considerably better, particularly spatially, but more slowly.
124. Products: Noise Reduction X75HD with HDTV video noise reduction and enhancement
Optional X75OPT-NR 3D adaptive SDTV video noise reduction and enhancement
NEO XHD HDTV video noise reduction and enhancement
125. Products: Color Encoding 6800+ core modular products
VAM/6800+
NEO advanced modular products
ENS-3901
X75 converter synchronizer
X75SD
X75HD
126. Products: Video A to D Conversion 6800+ core modular products
ADC-6801
X75 converter synchronizer
X75SD
127. Products: Video D to A Conversion 6800+ core modular products
DAC-6801, USM6800+
NEO advanced modular products
VSM-3901
X75 converter synchronizer
X75SD
128. Products: Up Conversion 6800+ core modular products
XHD-series
NEO advanced modular products
XHD-series
X75 converter synchronizer
X75HD
129. Products: Down Conversion 6800+ core modular products
XHD-series
NEO advanced modular products
XHD-series
X75 converter synchronizer
X75HD
130. Products: Cross Conversion NEO advanced modular products
XHD-series
X75 converter synchronizer
X75HD
131. Products: Reference Generator NEO advanced modular products
MTG-3901
132. Products: Video Distribution Amplifiers 6800+ core modular products
DAs for all signal types
Analog
Digital
Video
133. Products: AES Distribution Amplifiers AES DAs are available in 6800+ and NEO modular product lines
134. Products: Routing Switchers PLATINUM
Large Routing Systems
INTEGRATOR
Medium Routing Systems
PANACEA
Small Routing Systems
6800+ and NEO
Modular
135. Products: Audio Embedders or Multiplexers X75 for both SD and HD applications
8 ch processor with 2 analog, 2 AES
16 ch processor with 2 analog 5 AES
32 ch processor with 8 AES
6800+ for both SD and HD applications
2 and 4 ch analog
2, 4 and 8 AES
NEO Simplicity
4 ch analog
4 AES
136. Products: Audio De-embedder or Demultiplexer X75 for both SD and HD applications
8 ch processor with 2 analog, 2 AES
16 ch processor with 2 analog 5 AES
32 ch processor with 8 AES
6800+ for both SD and HD applications
2 and 4 ch analog
2, 4 and 8 AES
NEO Simplicity
4 ch analog
4 AES
137. Products: Frame Synchronizers 6800+ core modular products
DES, ENS, VFS, HFS
NEO advanced modular products
DAS, DES, DNS, VFS, VHS-H
X75 converter synchronizer
X75SD
X75HD
138. HDTV Survival Guide
139. Products: Color Decoding 2D, 3 line adaptive color decoding
This type of decoder is the most common in the broadcast industry
Many competitors offer similar products
Many competitors use “off the shelf” components (Philips) to decode the input composite video
These components are general in their applications and tend to have good overall performance
Leitch uses its own 3 line adaptive decoding technology
DEC/DES6800+
DPS-575
X75SD with PQM decoder
140. Products: Color Decoding Leitch’s PQM 2D Decoding Technology
Phase Quadrature Mixing (PQM) technology provides the highest degree of video decoding quality while minimizing the level of residual subcarrier artifacts associated with competitive comb filter designs
PQM decoding technology generates a set of dynamic control signals that are derived from specific quadrature characteristics of the source video signal.
These signals are filtered and used as logical steering controls for the decoder to better adapt between notch filter and 3 line comb filter
These signals capitalize on some of the modulation characteristics that were initially used by a composite encoder to generate the signal
141. Products: Color Decoding Leitch’s 3D Decoding Technology
Uses multiple comb filters and shifts between them on a pixel by pixel basis
The amount of motion in the image dictates which comb filter is to be used
142. Products: Color Decoding Leitch’s 3D Decoding Technology
Leitch Technology’s industry leading 3D process uses the following comb filters:
Notch filter
3 line adaptive comb filter
Field comb filter
Frame comb filter
The Comb Decision block looks at all four comb filters simultaneously.
It decides which of the four filters will yield the best possible output signal
The Motion Detection block compares identical pixels (spatially identical, with luma and chroma in the same phase) to determine motion.
If any motion is detected, the frame comb is shut off, as the frame comb is highly motion sensitive.
Propagation delay remains the same regardless of the filter setting.
143. Products: Color Decoding Real life, practical example
Multi-channel service providers are seeking to deliver the most channels possible in a fixed, finite amount of bandwidth
Multi-channel service providers compete on the basis of subjective video quality and number of channels
Existing customer used NEO 3D adaptive decoders on every channel
Improved subjective quality of baseband SDI signal
Fewer artifacts
Cleaner video
Invested ~$4k (US) per signal
Reduced bits allocated to each channel and maintained overall subjective video quality
Overall, increased channel count by 20% on average
144. Products: Color Decoding 3D adaptive comb filter (industry leading)
NEO advanced modular products
DEC/DES/DAS-3901
X75 with X75OPT-A3D analog input option
X75SD
X75HD
2D adaptive comb filter
6800+ core modular products
DEC/DES6800+
X75 with X75OPT-PQM analog input option
X75SD
X75HD
145. HDTV Survival Guide