Real-Time HDR/WCG Conversion with Colorfront Engine™ Video Processing

FS-HDR

$7,995 US MSRP*

Real-time HDR conversion for 4K/UHD/2K/HD: a 4-Channel 2K/HD/SD or 1-Channel 4K/UltraHD HDR and WCG frame synchronizer and up, down, cross-converter.

FS-HDR is an HDR to SDR, SDR to HDR and HDR to HDR universal converter/frame synchronizer, designed specifically to meet the HDR (High Dynamic Range) and WCG (Wide Color Gamut) needs of broadcast, OTT, post and live event AV environments. FS-HDR offers two modes for comprehensive HDR/WCG conversion and signal processing. Single channel mode provides a full suite of 4K/UltraHD processing and up, down, cross-conversion to and from 2K, HD or SD. Four channel mode offers 4x 2K/HD/SD channels of simultaneous processing and conversion.

Bulletproof Reliability. Incredible Conversion Power.

FS-HDR is your real world answer for up/down/cross conversions and realtime HDR transforms, built to AJA's high quality and reliability standards. Powered by Colorfront Engine, FS-HDR's extensive HDR and WCG processing support enables real time processing of a single channel of 4K/UltraHD/2K/HD, including down-conversion to HD HDR, or up to four channels of 2K/HD simultaneously. FS-HDR also enables the conversion of popular camera formats from multiple vendors into the HDR space, plus conversion to and from BT.2020/BT.709, critical for the widespread deployment of HDR alongside SDR in broadcast and OTT workflows.

4K/UltraHD delivery, processing and synchronization is fast becoming the next standard, and FS-HDR gets you there with a wealth of digital video connectivity, including 4x 3G-SDI with fiber options**, as well as 6G-SDI and 12G-SDI over copper or fiber**.

In single channel mode, FS-HDR will upscale your HD or SD materials to 4K/UltraHD and back, with a huge array of audio channels over SDI, AES and MADI for an incredible 272 x 208 matrix of audio possibilities. In four channel mode, independent transforms can be applied to each 2K/HD or SD channel.

Maintaining Perceptual Integrity.

FS-HDR's HDR/WCG capabilities leverage video and color space processing algorithms within Colorfront Engine™, specially licensed by AJA from Colorfront and developed by Colorfront's Academy Award winning team of color scientists.

* Pricing is for US only. International pricing will vary. Please contact a local AJA Reseller for pricing details.
** With optional SFP modules.

[Figure: the SDR and HDR signal chains compared, from original scene dynamic range through capture, postproduction, mastering, distribution and display, to what the viewer sees. SDR preserves roughly 0.0005 - 100 nits; HDR preserves roughly 0.0005 - 1000 nits.]

HDR Workflows for UltraHD/HD

FS-HDR opens a range of exciting possibilities for the integration of HDR within a range of workflows (a short conversion sketch in Python follows the format lists below):

• Converting from a house HDR standard to a delivery HDR standard.
• Converting from a camera OETF to a house HDR standard.
• Converting from an HDR standard to SDR for delivery or monitoring.
• Converting SDR camera output to a house HDR standard.
• Converting SDR sources to HDR for integrating large LED displays into an HDR program.
• Converting HD SDR BT.709 sources to UHD HDR BT.2020.
• Converting UHD HDR BT.2020 sources to HD SDR BT.709.
• Converting source inputs to Sony S-Gamut3/S-Log3.

In: Dynamic Range/Color Gamut. Comprehensive input formats:

• SDR BT.709 100 Nits
• PQ BT.2020 1000 Nits
• PQ P3D65 1000 Nits
• Hybrid Log Gamma BT.2100
• Sony S-Gamut3/S-Log3
• ARRI Log C Wide Gamut
• Panasonic V-Log
• RED Log3G10 Wide Gamut
• Canon Log 2
• Canon Log 3

Out: Dynamic Range/Color Gamut. Output formats:

• SDR BT.709 100 Nits
• PQ BT.2020 1000 Nits
• Hybrid Log Gamma BT.2100
• Sony S-Gamut3/S-Log3
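The BT.709-to-BT.2020 and SDR-to-PQ conversions listed above can be illustrated with plain colorimetric math. The Python sketch below (assuming NumPy is available) is a minimal, display-referred approximation and deliberately not the Colorfront Engine's perceptual processing: it decodes an SDR BT.709 signal with a BT.1886-style 2.4 gamma, places SDR white at 100 nits, rotates the primaries with the ITU-R BT.2087 matrix, and encodes with the SMPTE ST 2084 (PQ) inverse EOTF.

```python
import numpy as np

# Linear BT.709 -> BT.2020 primaries conversion matrix (ITU-R BT.2087).
RGB_709_TO_2020 = np.array([
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
])

def pq_inverse_eotf(nits):
    """SMPTE ST 2084 inverse EOTF: absolute luminance in cd/m2 -> PQ signal (0..1)."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = np.clip(nits, 0.0, 10000.0) / 10000.0
    return ((c1 + c2 * y ** m1) / (1.0 + c3 * y ** m1)) ** m2

def sdr709_to_pq2020(rgb_709, sdr_white_nits=100.0):
    """Map a non-linear SDR BT.709 frame (values 0..1) to a PQ BT.2020 signal (0..1).

    Display-referred sketch: decode with a BT.1886-style 2.4 gamma, place SDR
    white at `sdr_white_nits`, rotate the primaries, then PQ-encode.
    """
    linear_709 = np.power(np.clip(rgb_709, 0.0, 1.0), 2.4) * sdr_white_nits
    linear_2020 = np.einsum("ij,...j->...i", RGB_709_TO_2020, linear_709)
    return pq_inverse_eotf(linear_2020)

# Example: a 100% white SDR pixel maps to ~0.508 on the PQ scale (the 100 nit point).
print(sdr709_to_pq2020(np.array([1.0, 1.0, 1.0])))
```

With this mapping, 100% SDR white lands at roughly 0.508 on the PQ scale, the commonly quoted 100 nit point; a production-grade conversion would also handle narrow-range code values, chroma subsampling and tone expansion.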
HDR Conversions

Modern cameras are inherently HDR capable, with wide dynamic range available from the sensor on, and for live events and broadcasts it is crucial to translate their Log and gamut capabilities to the accepted standards now being employed for HDR delivery. With support for leading cameras on input, output to PQ (Perceptual Quantizer), which is at the heart of HDR10, and support for the widely accepted HLG (Hybrid Log Gamma), FS-HDR takes the pain out of HDR conversion and makes realtime camera-to-HDR workflows instantly possible. With SDR to HDR conversion, you can even bring in your current materials and upgrade them to match your live HDR programming output. For large LED displays at live events, utilize FS-HDR for HDR to HDR conversion, and for enriching your SDR sources, SDR to HDR conversion.

[Figure: CIE chromaticity diagrams comparing the Rec 601/709 color space with the wider Rec 2020 color space (D65 white point).]

[Figure: minimum contrast step (%) versus luminance (cd/m2) for 12-bit PQ, 13-bit Log, 15-bit Gamma and 16-bit OpenEXR encodings.]

Realtime Camera Log Conversions

SDR equates to around 6 stops of dynamic range; HDR expands that to around 15 stops, greatly increasing dynamic range and widening the color gamut, producing a much richer, fuller image, closer to what cameras today are intrinsically capable of and closer to the ultimate goal, the HVS (Human Visual System). FS-HDR can take this incoming imagery from the most popular professional cameras in the world today and convert their Log output in realtime to 4K/UltraHD or multiple channels of 2K/HD, outputting either HDR, SDR or both. This greatly simplifies workflows in a mixed camera environment, as FS-HDR can take in multiple camera Log formats and provide a unified workflow output to HDR or SDR capable switchers and more. Additionally, FS-HDR can convert to Sony S-Gamut3/S-Log3 on output for workflows requiring a Log feed to compatible switchers and other gear when needed.
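To show what a realtime camera Log conversion involves at the signal level, the Python sketch below decodes Sony S-Log3, one of FS-HDR's supported input formats, to scene-linear reflectance using Sony's published formula, then PQ-encodes it. The scene-to-display step, anchoring 18% grey at 26 nits, is an assumption made only for illustration; FS-HDR's Colorfront Engine applies a far more sophisticated perceptual rendering.

```python
import numpy as np

def slog3_to_linear(code):
    """Sony S-Log3 signal (0..1) -> scene-linear reflectance (0.18 = 18% grey),
    using Sony's published S-Log3 formula."""
    code = np.asarray(code, dtype=np.float64)
    cut = 171.2102946929 / 1023.0
    log_part = (10.0 ** ((code * 1023.0 - 420.0) / 261.5)) * (0.18 + 0.01) - 0.01
    lin_part = (code * 1023.0 - 95.0) * 0.01125 / (171.2102946929 - 95.0)
    return np.where(code >= cut, log_part, lin_part)

def pq_encode(nits):
    """SMPTE ST 2084 inverse EOTF: absolute luminance in cd/m2 -> PQ signal (0..1)."""
    m1, m2 = 2610 / 16384, 2523 / 4096 * 128
    c1, c2, c3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32
    y = np.clip(nits, 0.0, 10000.0) / 10000.0
    return ((c1 + c2 * y ** m1) / (1.0 + c3 * y ** m1)) ** m2

def slog3_to_pq(code, grey_nits=26.0):
    """Naive scene-to-display mapping for illustration only: anchor 18% grey at
    `grey_nits` (an assumed value, not part of any standard), then PQ-encode."""
    return pq_encode(slog3_to_linear(code) * (grey_nits / 0.18))

# 18% grey sits at S-Log3 code 420/1023 (~0.41) and decodes to ~0.18 reflectance.
print(slog3_to_linear(420.0 / 1023.0), slog3_to_pq(420.0 / 1023.0))
```

The same pattern, decoding the camera curve to linear and re-encoding to the delivery transfer function, applies to the other supported Log formats such as ARRI Log C, Panasonic V-Log, RED Log3G10 and Canon Log 2/3, each with its own published constants.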
Colorfront Engine™

FS-HDR's HDR/WCG capabilities leverage video and color space processing algorithms within Colorfront Engine™, specially licensed by AJA from Colorfront and developed by Colorfront's Academy Award winning CTO Bill Feighter and Lead Engineer Tamas Perlaki. Powered by Colorfront Engine, FS-HDR's extensive HDR and WCG processing support enables real time processing of a single channel of 4K/UltraHD/2K/HD, including down-conversion to HD HDR, or up to four channels of 2K/HD simultaneously. The "secret sauce" of the Colorfront Engine™ is its emphasis on maintaining perceptual integrity and creative intent, not "just the math" of converting between color spaces. The algorithms in FS-HDR produce a final image that benefits from years of work on Hollywood's biggest productions: field tested, critiqued and supported by the industry's greatest artists.

PQ, HLG and Log

To give emerging HDR pipelines the most flexibility, it has to be recognized that HDR is a new toolset and that standards are still evolving to serve different needs while delivering the desired result: gorgeous, rich imagery. FS-HDR recognizes this and provides you with choice. Preserving as much of the camera sensor's detail and range as possible, with support for Log inputs as well as HDR and SDR standards, FS-HDR transforms this data to the standards now being employed for HDR delivery. FS-HDR makes your real time HDR workflows come to life in the formats you need, including PQ (Perceptual Quantizer) HDR10 and HLG (Hybrid Log Gamma), or Log formats such as Sony S-Gamut3/S-Log3, as well as BT.709 and BT.2020 transformations.

Convert Today's SDR to HDR

To match your live HDR programming and live event feeds, you are going to need SDR to HDR conversions that integrate legacy materials with your HDR feeds. FS-HDR includes SDR to HDR conversions, allowing you to bring in your current SDR materials and upgrade them to match your live HDR programming output. For live event LED displays with high nit counts, utilize FS-HDR for realtime HDR to HDR or SDR to HDR conversions, bringing your screens to rich and colorful life.

4K | UltraHD | HD | SD

Single-Channel Mode: single-channel mode for 4K/UltraHD or 2K/HD frame sync and conversion, including HDR.

Four-Channel Mode: four-channel mode for simultaneous independent 2K/HD/SD channels, including HDR.

Incredible Digital Connectivity: FS-HDR offers the connectivity to meet the demands of your projects.

Remote Configuration & Control: FS-HDR is network ready and supports SNMP.
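Because FS-HDR is network ready and supports SNMP, a standard monitoring tool can poll it alongside other plant equipment. The sketch below is a generic illustration assuming the classic synchronous pysnmp 4.x high-level API; it reads only the standard MIB-II sysDescr and sysName objects rather than any AJA-specific MIB, and the host address and community string are placeholders to be replaced with your unit's actual settings.

```python
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def poll_device(host, community="public"):
    """Read standard MIB-II identity objects from an SNMP-capable device.

    The host and community values here are placeholders; consult the unit's
    documentation for its actual SNMP settings and any vendor-supplied MIB.
    """
    request = getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),              # SNMP v2c
        UdpTransportTarget((host, 161), timeout=2, retries=1),
        ContextData(),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
        ObjectType(ObjectIdentity("SNMPv2-MIB", "sysName", 0)),
    )
    error_indication, error_status, _, var_binds = next(request)
    if error_indication or error_status:
        raise RuntimeError(str(error_indication or error_status.prettyPrint()))
    return {str(name): str(value) for name, value in var_binds}

if __name__ == "__main__":
    print(poll_device("192.0.2.10"))   # placeholder address (TEST-NET-1 range)
```

In practice you would load the vendor's MIB, if one is provided, to read device-specific status and alarms rather than the generic identity objects.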