SMPTE SD/HD/3G-SDI 3.0 LogiCORE IP Product Guide (PG071)
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
NABA DPP HD (MPEG-2)
TECHNICAL SPECIFICATIONS FOR DELIVERY OF “MPEG-2 Air Ready” TELEVISION PROGRAMS TO [Insert Broadcaster Name and Logo]

This document is a guide to the technical standards for delivery of MPEG-2 based Air Ready Masters as agreed by the North American Broadcasters Association and the Digital Production Partnership for use in Canada and the United States. The document includes:

NABA-DPP Common Technical Specifications:
• Technical parameters which must be used and that all material must meet to be acceptable by the NABA broadcasters.
• Metadata specifications.

Broadcaster Specific Instructions and Requirements:
• Mandatory Broadcaster Delivery Requirements, which detail the broadcaster-specific instructions for the delivery of program material.
• Broadcaster Production and Post Production requirements, which form part of the binding contract for the delivery of program material. This section includes individual broadcaster requirements for program production and post production, and guidance on the expected perceptual picture and sound quality.

Picture and sound quality assessment is subjective and therefore highly dependent on the nature of the program. Some quality requirements may be expressed in relative terms (“reasonable”, “not excessive”, etc.), and it will be necessary to make a judgment as to whether the quality expectations of the program’s intended audience will be fulfilled and whether the broadcaster will feel that value for money has been achieved. Every program submitted for transmission must satisfy both an Objective Technical Quality Control process and a Subjective Picture and Sound Quality Assessment as specified by [BROADCASTER’S NAME]. Any program failing to meet these requirements may be rejected. Note: Unless specifically stated, quoted standards documents shall refer to the published version in force at the time of reading (e.g. …
An Overview of Signal Processing Techniques for Millimeter Wave MIMO Systems
Robert W. Heath Jr., Nuria Gonzalez-Prelcic, Sundeep Rangan, Wonil Roh, and Akbar Sayeed (arXiv:1512.03007v1 [cs.IT], 9 Dec 2015)

Abstract: Communication at millimeter wave (mmWave) frequencies is defining a new era of wireless communication. The mmWave band offers higher bandwidth communication channels versus those presently used in commercial wireless systems. The applications of mmWave are immense: wireless local and personal area networks in the unlicensed band, 5G cellular systems, not to mention vehicular area networks, ad hoc networks, and wearables. Signal processing is critical for enabling the next generation of mmWave communication. Due to the use of large antenna arrays at the transmitter and receiver, combined with radio frequency and mixed-signal power constraints, new multiple-input multiple-output (MIMO) communication signal processing techniques are needed. Because of the wide bandwidths, low-complexity transceiver algorithms become important. There are opportunities to exploit techniques like compressed sensing for channel estimation and beamforming. This article provides an overview of signal processing challenges in mmWave wireless systems, with an emphasis on those faced by using MIMO communication at higher carrier frequencies.

I. INTRODUCTION: The millimeter wave (mmWave) band is the frontier for commercial (high-volume consumer) wireless communication systems [1]. MmWave makes use of spectrum from 30 GHz to 300 GHz, whereas most … (R. W. Heath Jr. is with The University of Texas at Austin, Austin, TX, USA, email: [email protected]. Nuria Gonzalez-Prelcic is with the University of Vigo, Spain, email: [email protected].)
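The point about large antenna arrays can be made concrete with a small numerical sketch (illustrative only, not taken from the paper): with analog beamforming on a uniform linear array, steering weights matched to the angle of departure give an array gain that grows with the number of antennas. The array size, element spacing and angles below are assumptions chosen for the example.

```python
import numpy as np

def steering_vector(n_antennas, theta_rad, spacing_wavelengths=0.5):
    """Normalized array response of a uniform linear array for angle theta."""
    k = np.arange(n_antennas)
    phase = 2 * np.pi * spacing_wavelengths * k * np.sin(theta_rad)
    return np.exp(1j * phase) / np.sqrt(n_antennas)

# Analog beamforming: the RF weights are a steering vector matched to the
# (estimated) angle of departure. All values below are illustrative.
n = 64                        # number of antennas
aod = np.deg2rad(20.0)        # assumed angle of departure
w = steering_vector(n, aod)   # matched beamforming weights

# Array gain toward the matched angle vs. an unmatched angle
gain_matched  = np.abs(w.conj() @ steering_vector(n, aod))**2 * n
gain_off_beam = np.abs(w.conj() @ steering_vector(n, np.deg2rad(45.0)))**2 * n
print(f"gain toward 20 deg: {10*np.log10(gain_matched):.1f} dB")   # ~18 dB for 64 elements
print(f"gain toward 45 deg: {10*np.log10(gain_off_beam):.1f} dB")  # far lower, off the beam
```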
Sources and Compensation of Skew in Single-Ended and Differential Interconnects
Published on Signal Integrity Journal, www.signalintegrityjournal.com, April 2017. Eben Kunz, Jae Young Choi, Vijay Kunda, Laura Kocubinski, Ying Li, Jason Miller, Gustavo Blando, and Istvan Novak (Oracle Corp.). Disclaimer: This presentation does not constitute an endorsement of any specific product, service, company, or solution.

Abstract: In high-speed signaling with an embedded clock, a few picoseconds of in-pair skew may cause serious signal degradation. Point-to-point topologies and long PCB traces put the emphasis on local speed variations due to bends, glass-weave, and deterministic and random asymmetries. First we analyze the contributors to the delay in single-ended traces. Length in differential pairs varies due to bends and turns: at each turn the outer trace has a little extra length. We show that using the center-line trace length can give a delay estimation error of up to several ps, and we show how different turns (right angle, double 45-degree or arced) impact delay. Second, we look at practical ways of compensating skew. A few options are examined and their performance compared. We consider a few statistical contributors to skew and establish a limit below which compensation makes no sense. The simulated data is illustrated by the measured performance of a few simple structures.

Author Biography: Eben Kunz graduated from MIT in 2012 with a BS and Master's in EE.
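A quick back-of-the-envelope check of the "extra length at each turn" point (with assumed numbers, not values from the paper) gives a feel for the magnitude: for an arced 90-degree turn, the outer trace of a differential pair is longer than the inner trace by roughly the pair pitch times π/2, and stripline delay in FR-4 is on the order of 7 ps/mm.

```python
import math

# Assumed, illustrative values (not from the paper):
pair_pitch_mm   = 1.0    # center-to-center spacing of the differential pair
delay_ps_per_mm = 7.0    # rough stripline delay in FR-4 (Dk around 4)

# For an arced 90-degree turn, the outer trace follows a quarter circle whose
# radius exceeds the inner trace's radius by the pair pitch, so the extra
# path length is (pi/2) * pitch.
extra_len_mm = (math.pi / 2) * pair_pitch_mm
skew_ps = extra_len_mm * delay_ps_per_mm
print(f"extra outer-trace length: {extra_len_mm:.2f} mm")
print(f"in-pair skew from one 90-degree arced turn: {skew_ps:.1f} ps")
# Roughly 11 ps from a single turn, which is why turns are usually paired or
# compensated when a few ps of in-pair skew already matters.
```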
User Requirements for Video Monitors in Television Production
TECH 3320: User Requirements for Video Monitors in Television Production, Version 4.1, Geneva, September 2019.

Conformance Notation: This document contains both normative text and informative text. All text is normative except for that in the Introduction, any § explicitly labelled as 'Informative', or individual paragraphs which start with 'Note:'. Normative text describes indispensable or mandatory elements. It contains the conformance keywords 'shall', 'should' or 'may', defined as follows:
• 'Shall' and 'shall not': indicate requirements to be followed strictly and from which no deviation is permitted in order to conform to the document.
• 'Should' and 'should not': indicate that, among several possibilities, one is recommended as particularly suitable, without mentioning or excluding others; or indicate that a certain course of action is preferred but not necessarily required; or indicate that (in the negative form) a certain possibility or course of action is deprecated but not prohibited.
• 'May' and 'need not': indicate a course of action permissible within the limits of the document.
Informative text is potentially helpful to the user, but it is not indispensable, and it does not affect the normative text. Informative text does not contain any conformance keywords. Unless otherwise stated, a conformant implementation is one which includes all mandatory provisions ('shall') and, if implemented, all recommended provisions ('should') as described. A conformant implementation need not implement optional provisions ('may') and need not implement them as described.
Understanding HD and 3G-SDI Video Poster
EYE DIAGRAM: The eye diagram is constructed by overlaying portions of the sampled data stream until enough data transitions produce the familiar display. A unit interval (UI) is defined as the time between two adjacent signal transitions, which is the reciprocal of the clock frequency. The UI is 3.7 ns for digital component 525/625 (SMPTE 259M), 673.4 ps for digital high-definition (SMPTE 292) and 336.7 ps for 3G-SDI serial digital (SMPTE 424M), as shown in Table 1. A serial receiver determines if the signal is "high" or "low" in the center of each eye, and detects the serial data. Signal amplitude is important because of its relation to noise, and because the receiver estimates the required high-frequency compensation (equalization) based on the remaining half-clock-frequency energy as the signal arrives. Incorrect amplitude at the sending end could result in an incorrect equalization applied at the receiving end, thus causing signal distortions. Overshoot of the rising and falling edge should not exceed 10% of the waveform.

DIGITAL SIGNAL TIMING / HORIZONTAL LINE TIMING: Digital video synchronization is provided by End of Active Video (EAV) and Start of Active Video (SAV) sequences, which start with a unique three-word pattern: 3FFh (all bits in the word set to 1), 000h (all 0's), 000h (all 0's), followed by a fourth "XYZ" word.

Y', R'-Y', B'-Y' (commonly used for analog component video):
• Formats 1125/60/2:1 and 750/60/1:1: Y' = 0.2126 R' + 0.7152 G' + 0.0722 B'; R'-Y' = 0.7874 R' - 0.7152 G' - 0.0722 B'; B'-Y' = -0.2126 R' - 0.7152 G' + 0.9278 B'
• Formats 525/59.94/2:1, 625/50/2:1 and 1250/50/2:1: Y' = 0.299 R' + 0.587 G' + 0.114 B'; R'-Y' = 0.701 R' - 0.587 G' - 0.114 B'; B'-Y' = -0.299 R' - 0.587 G' + 0.886 B'
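As a small companion to the timing description above (an illustrative sketch, not code from the poster or from any vendor IP), the snippet below derives the quoted unit intervals from the nominal SDI bit rates and scans a list of 10-bit words for the 3FFh/000h/000h preamble that precedes the EAV/SAV "XYZ" word. The sample stream is invented for the demo.

```python
# Sketch: SDI unit-interval arithmetic and TRS (EAV/SAV) detection on a
# stream of 10-bit words. Bit rates and preamble values follow the SMPTE
# figures quoted above; the toy data stream is made up.

SDI_BIT_RATES = {
    "SD-SDI (SMPTE 259M)": 270e6,
    "HD-SDI (SMPTE 292)": 1.485e9,
    "3G-SDI (SMPTE 424M)": 2.97e9,
}

for name, rate in SDI_BIT_RATES.items():
    ui_ps = 1e12 / rate  # unit interval = 1 / bit rate, in picoseconds
    print(f"{name}: UI = {ui_ps:.1f} ps")  # 3703.7 ps, 673.4 ps, 336.7 ps

def find_trs(words):
    """Return (index, XYZ word) for each 3FFh, 000h, 000h preamble found."""
    hits = []
    for i in range(len(words) - 3):
        if words[i] == 0x3FF and words[i + 1] == 0x000 and words[i + 2] == 0x000:
            hits.append((i, words[i + 3]))  # the fourth word carries H/V/F flags
    return hits

stream = [0x204, 0x3FF, 0x000, 0x000, 0x274, 0x040, 0x3AC]  # toy data
print([(i, hex(xyz)) for i, xyz in find_trs(stream)])       # -> [(1, '0x274')]
```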
GD-W213L Monitor
JVCKENWOOD Sales Information

KY-PZ100 key features:
• 1/2.8-inch CMOS sensor (2.2 million pixels)
• Optical zoom lens with 30x zoom ratio (4.3-129 mm, powerful f/1.6 to 4.7)
• LoLux mode (up to 0.01 lux)
• Direct drive motor for pan, tilt and zoom
• Full HD 1080p, 1080i, 720p video
• 3G-SDI and HDMI digital output
• 2-channel audio (or 1-channel balanced audio with phantom power)
• USB host port for WiFi or 4G LTE adapter
• LAN port supports Power over Ethernet (PoE)
• Advanced IP communication capability: streaming with SMPTE 2022 forward error correction, extended Zixi reliable communication with ARQ, FEC and adaptive bit rate control, low-latency streaming, RTMP streaming directly …

GD-W213L and DT-V17G25/DT-V24G2/DT-V17G2/DT-V21G2 monitors, general monitor specifications (October 2015), columns DT-V17G2 / DT-V17G25 / DT-V21G2 / DT-V24G2:
• Screen size: 17" (16.5" effective) / 17" (16.5" effective) / 21" (21.5" effective) / 24"
• Number of pixels displayed: 1920 x 1080 / 1920 x 1080 / 1920 x 1080 / 1920 x 1200
• Panel type: IPS (all models)
• Number of colours displayed: 16.7M / 107.3B / 16.7M / 107.3B
• Brightness: 300 / 450 / 300 / 400 cd/m²
• Contrast: 1500:1 (all models)
• Reaction time: 8 ms typ. (all models)
• Viewing angle (typ.): 178 x 178 (all models)
• Audio output: 1 W + 1 W (all models)
• Waveform, waveform size (small/big), vectorscope, vectorscope size (small/big), R/G/B/Y histogram, zebra stripes, R/G/B/mono: Yes (all models)
• 16-channel audio: 16ch (all models)
• Timecode: …
Multi-Rate Serial Digital Interface (SDI) Physical Layer IP Core Is an IPexpress User-Configurable Core and Can Be Used to Generate Any Allowable Configuration
ispLever™ CORE: Multi-Rate Serial Digital Interface Physical Layer IP Core User's Guide, January 2012, ipug70_01.2 (Lattice Semiconductor).

Introduction: Serial Digital Interface (SDI) is the most popular raw video link standard used in television broadcast studios and video production facilities. Field Programmable Gate Arrays (FPGAs) with SDI interface capability can be used for acquisition, mixing, storage, editing, processing and format conversion applications. Simpler applications use FPGAs to acquire SDI data from one or more standard definition (SD) or high definition (HD) sources, perform simple processing and retransmit the video data in SDI format. Such applications require an SDI physical layer (PHY) interface and some basic processing blocks such as a color space converter and frame buffer. In more complex applications, the acquired video receives additional processing, such as video format conversion, filtering, scaling, graphics mixing and picture-in-picture display. FPGA devices can also be used as a bridge between SDI video sources and backplane protocols such as PCI Express or Ethernet, with or without any additional video processing. In an FPGA-based SDI solution, the physical interface portion is often the most challenging part of the solution. This is because the PHY layer includes several device-dependent components such as high-speed I/Os (inputs/outputs), serializer/deserializer (SERDES), clock/data recovery, word alignment and timing signal detection logic. Video processing, on the other hand, is algorithmic and is usually achieved using proprietary algorithms developed by in-house teams. The Lattice Multi-Rate SDI PHY Intellectual Property (IP) Core is a complete SDI PHY interface that connects to the high-speed SDI serial data on one side and the formatted parallel data on the other side.
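The "basic processing blocks such as a color space converter" mentioned above can be modeled in a few lines of software (a sketch only; the Lattice core itself is FPGA logic, and this is not its interface). The function below applies the HD (BT.709-style) luma coefficients quoted in the HD/3G-SDI poster excerpt earlier to a single R'G'B' pixel; the scaling of the colour-difference terms is a common normalization and is an assumption here, and an SD converter would use the 0.299/0.587/0.114 coefficients instead.

```python
def rgb_to_ycbcr_709(r, g, b):
    """Convert one nonlinear R'G'B' pixel (values 0.0-1.0) to Y', Cb, Cr using
    HD (BT.709) coefficients. Cb/Cr are the colour-difference signals scaled
    to roughly the +/-0.5 range (scaling is an assumed, common convention)."""
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    cb = (b - y) / 1.8556   # B'-Y' scaled so full-range blue maps to 0.5
    cr = (r - y) / 1.5748   # R'-Y' scaled so full-range red maps to 0.5
    return y, cb, cr

# Example: a 75% gray pixel is pure luma, a saturated red pixel is not.
print(rgb_to_ycbcr_709(0.75, 0.75, 0.75))  # approx (0.75, 0.0, 0.0)
print(rgb_to_ycbcr_709(1.0, 0.0, 0.0))     # approx (0.2126, -0.115, 0.5)
```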
ATSC Candidate Standard: Video – HEVC (A/341)
ATSC Candidate Standard: Video – HEVC (A/341), Doc. S34-168r7, 03 January 2017. Advanced Television Systems Committee, 1776 K Street, N.W., Washington, D.C. 20006, 202-872-9160.

The Advanced Television Systems Committee, Inc., is an international, non-profit organization developing voluntary standards for digital television. The ATSC member organizations represent the broadcast, broadcast equipment, motion picture, consumer electronics, computer, cable, satellite, and semiconductor industries. Specifically, ATSC is working to coordinate television standards among different communications media, focusing on digital television, interactive systems, and broadband multimedia communications. ATSC is also developing digital television implementation strategies and presenting educational seminars on the ATSC standards. ATSC was formed in 1982 by the member organizations of the Joint Committee on InterSociety Coordination (JCIC): the Electronic Industries Association (EIA), the Institute of Electrical and Electronic Engineers (IEEE), the National Association of Broadcasters (NAB), the National Cable Telecommunications Association (NCTA), and the Society of Motion Picture and Television Engineers (SMPTE). Currently, there are approximately 120 members representing the broadcast, broadcast equipment, motion picture, consumer electronics, computer, cable, satellite, and semiconductor industries. ATSC Digital TV Standards include digital high definition television (HDTV), standard definition television (SDTV), data broadcasting, multichannel surround-sound audio, and satellite direct-to-home broadcasting. Note: The user's attention is called to the possibility that compliance with this standard may require use of an invention covered by patent rights. By publication of this standard, no position is taken with respect to the validity of this claim or of any patent rights in connection therewith.
XAPP592 (V2.0) July 14, 2014
Application Note: Kintex-7 Family. Implementing SMPTE SDI Interfaces with Kintex-7 GTX Transceivers. Author: John Snow. XAPP592 (v2.0), July 14, 2014.

Summary: The Society of Motion Picture and Television Engineers (SMPTE) serial digital interface (SDI) family of standards is widely used in professional broadcast video equipment. These interfaces are used in broadcast studios and video production centers to carry uncompressed digital video, along with embedded ancillary data such as multiple audio channels. The Xilinx® SMPTE SD/HD/3G-SDI LogiCORE™ IP is a generic SDI receive/transmit datapath that does not have any device-specific control functions. This application note provides a module containing control logic to couple the SMPTE SD/HD/3G-SDI LogiCORE IP with the Kintex®-7 FPGA GTX transceivers to form a complete SDI interface. This application note also provides several example SDI designs that run on the Xilinx Kintex-7 FPGA KC705 evaluation board. Terms used in this document are explained in the Glossary, page 63. Titles of SMPTE reports and standards are listed in References, page 66, and referred to by SMPTE document number in text.

Introduction: The Xilinx SMPTE SD/HD/3G-SDI LogiCORE IP (called the SDI core in the rest of this document) can be connected to a Kintex-7 GTX transceiver to implement an SDI interface capable of supporting the SMPTE SD-SDI, HD-SDI, and 3G-SDI standards. The SDI core and GTX transceiver must be supplemented with some additional logic to connect them together to implement a fully functional SDI interface. This application note describes this additional control and interface logic and provides the necessary control and interface modules in both Verilog and VHDL source code.
HD-STAR® Handheld HD-SDI and SD-SDI Generator and Monitor
HD-STAR® Handheld HD-SDI and SD-SDI Generator and Monitor FEATURES • Multiple functionality – Color monitor – Vectorscope – Waveform monitor VIDEO TEST – Test signal generator – Embedded audio monitor // – Serial data analyzer T N • Multiformats – HD-SDI – SD-SDI EME • Portable, handheld – PDA-sized SUR – Weighs under one pound with battery • Integrated 320x240 color LCD display EA • Touchscreen operation ND M A PRODUCT DETAILS T With a powerful array of features and functions that include a video test signal gen- S erator, color monitor, waveform monitor, vectorscope, serial data analyzer and an au- E T dio analyzer/monitor, the lightweight HD-STAR is ideal for monitoring field production camera setup, equipment installation, or troubleshooting signal path issues related to high-definition and standard-definition digital formats. The Videotek® HD-STAR® is a portable, battery-powered HD-SDI and SD-SDI This PDA-sized test monitor was designed to offer the convenience of portability without video generator and monitor. Also possessing embedded audio generator and sacrificing function and performance. To enhance the user’s experience, the HD-STAR monitor capabilities, the HD-STAR provides a level of multiformat functionality and features an integrated 320x240 color LCD display, utilizing touchscreen technology to control and configure each operation. Maintaining power in versatility that sets it apart from other handheld test and measurement products the field won’t be a problem — the HD-STAR runs on a Li-Ion battery pack. on the market. The HD-STAR includes one looping video input for monitoring HD-SDI and SD-SDI sig- nals formatted in SMPTE 292 M or SMPTE 259 M-C with embedded audio. -
ETSI TS 101 154 V2.4.1 (2018-02)
ETSI TS 101 154 V2.4.1 (2018-02), Technical Specification. Digital Video Broadcasting (DVB); Specification for the use of Video and Audio Coding in Broadcast and Broadband Applications.

Reference: RTS/JTC-DVB-377. Keywords: broadcasting, digital, DVB, MPEG, TV, UHDTV, video.

ETSI, 650 Route des Lucioles, F-06921 Sophia Antipolis Cedex, FRANCE. Tel.: +33 4 92 94 42 00, Fax: +33 4 93 65 47 16. Siret N° 348 623 562 00017 - NAF 742 C. Non-profit association registered at the Sous-Préfecture de Grasse (06), N° 7803/88.

Important notice: The present document can be downloaded from http://www.etsi.org/standards-search. The present document may be made available in electronic versions and/or in print. The content of any electronic and/or print versions of the present document shall not be modified without the prior written authorization of ETSI. In case of any existing or perceived difference in contents between such versions and/or in print, the only prevailing document is the print of the Portable Document Format (PDF) version kept on a specific network drive within the ETSI Secretariat. Users of the present document should be aware that the document may be subject to revision or change of status. Information on the current status of this and other ETSI documents is available at https://portal.etsi.org/TB/ETSIDeliverableStatus.aspx. If you find errors in the present document, please send your comment to one of the following services: https://portal.etsi.org/People/CommiteeSupportStaff.aspx

Copyright Notification: No part may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying and microfilm, except as authorized by written permission of ETSI.
1. Introduction
AN377: Timing and Synchronization in Broadcast Video

1. Introduction: Digitization of video signals has been common practice in broadcast video for many years. Early digital video was commonly encoded on a 10-bit parallel bus, but as higher processing speeds became practical, a serial form of the digitized video signal called the Serial Digital Interface (SDI) was developed and standardized. Serialization of the digital video stream greatly facilitates its distribution within a professional broadcast studio. (Figure 1, "Typical Example of a Professional Broadcast Video System", shows a master sync generator distributing a genlock reference to on-site cameras, video servers, switching/processing, distribution amplifiers, routers, frame synchronizers and the transmission facility over SDI.) In a studio with multiple cameras, it is important that video signals coming from multiple sources are frame aligned, or synchronized, to allow seamless switching between video sources. For this reason, a synchronization signal is often distributed to all video sources using a master synchronization generator, as shown in Figure 1. This allows video switching equipment to select between multiple sources without having to buffer and re-synchronize all of its input video signals. In this application note, we will take a closer look at the various components that make up a broadcast video system and how each of them plays a role in the synchronization chain.

2. Digitizing the Video Signal: A video camera uses sensors to capture and convert light to electrical signals that represent the three primary colors: red, green, and blue (RGB).
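To make the frame synchronizer's role concrete, here is a minimal software model (an illustrative sketch, not Silicon Labs code): frames from a source that is not genlocked are written into a small buffer at the source's own rate and read out on the house reference, and when the two rates drift the synchronizer repeats or drops a frame instead of disturbing the output timing.

```python
from collections import deque

class FrameSynchronizer:
    """Toy model of a frame synchronizer: frames arrive on the source's own
    clock and leave on the house reference; the last good frame is repeated
    when the source runs slow, and excess frames are dropped when it runs fast."""
    def __init__(self, depth=2):
        self.buffer = deque(maxlen=depth)  # oldest frames fall out -> "drop"
        self.last_output = None

    def write(self, frame):          # called on the source's frame clock
        self.buffer.append(frame)

    def read(self):                  # called on the house reference clock
        if self.buffer:
            self.last_output = self.buffer.popleft()
        # if the buffer is empty, repeat the previous frame ("frame repeat")
        return self.last_output

sync = FrameSynchronizer()
sync.write("frame-0")
print(sync.read())   # frame-0
print(sync.read())   # frame-0 again: the source was slow, so the frame repeats
sync.write("frame-1"); sync.write("frame-2"); sync.write("frame-3")
print(sync.read())   # frame-2: frame-1 was dropped because the source ran fast
```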