Low Power and Area Efficient DCT Architecture for Low Bit Rate Communication

Muhammad FAISAL SIDDIQUI (1), Raja ALI RIAZ (1, 2), Syed SAUD NAQVI (1)
Department of Electrical Engineering, COMSATS Institute of Information Technology, Islamabad (1), School of ECS, University of Southampton (2)

Abstract. In this paper a low power and area efficient DCT (Discrete Cosine Transform) pipelined architecture using a multiplier-less method is presented for low bit rate communications such as videoconferencing on mobile devices. The multiplier-less multiplication is implemented with a minimum number of additions, subtractions and shifts, using the CSD (Canonic Signed Digit) representation for the fixed-point DCT coefficients. Power reduction is achieved by minimizing both the number of arithmetic operations and the data-path width. The proposed DCT architecture was implemented on a XILINX FPGA (Field-Programmable Gate Array). The power estimation results show that the design reduces power dissipation 5.5 times compared to other DCT architectures for video streaming/video conferencing in portable devices.

Streszczenie (translated from Polish). The paper presents a low-bit-rate DCT (Discrete Cosine Transform) communication system characterized by low power and good area usage. The system has a pipelined structure. The power reduction was achieved by reducing the number of arithmetic operations and the width of the data path. The system was implemented using FPGA elements. (Low-power DCT architecture for low-bit-rate communication)

Keywords: Discrete Cosine Transform (DCT), Canonic Signed Digit (CSD), Multiplier-less
Słowa kluczowe (translated from Polish): DCT communication, data bit rate

Introduction

Many of today's multimedia applications, such as video conferencing, internet video streaming and video over wireless, are among the most bandwidth-consuming modes of communication. Efficient hardware is needed for these types of applications. The key requirements on the hardware design are very low power consumption and low area, so that it can be implemented in small portable devices such as mobile phones or cameras.

Several international image and video coding standards use compression techniques based on the DCT, which transforms the signal into the frequency domain. The DCT algorithm involves a large number of multiplication and addition operations. In a hardware implementation, multiplication is costly and consumes a large amount of power. To reduce the number of multiplications, mathematical manipulation techniques are used, such as the row-column efficient matrix form of the DCT coefficients, which reduces the complexity of the algorithm. In hardware, fixed-point constant multiplication is implemented efficiently by CSD. Furthermore, the common sub-expression elimination technique is also used to decrease the number of arithmetic operations.

The proposed architecture is designed so that it consumes minimum power to compute the desired results while using the hardware resources efficiently. Pipelining is used to increase the frequency and throughput of the design. Hardware optimization is applied to make the resource usage more suitable, so that the design consumes less power with less complexity. The proposed design is implemented on a XILINX FPGA. The comparison results show that the proposed architecture is efficient compared to other published results.

This paper is organized as follows. The first section gives an overview of the video coding standards. The second section introduces the background of the DCT algorithm with the efficient matrix approach. The third and fourth sections briefly describe CSD multiplication and common sub-expression elimination, respectively. The fifth section presents the design of the proposed architecture and the comparison results. Finally, conclusions are presented in the last section.

Overview on Video Coding Standards

Emerging multimedia applications such as image and video compression require the use of coders. There are many different video coding standards for low bit rate communications, such as H.261, H.263, H.263+, H.264 and many more [1, 2]. Normally, internet video streaming (at 20-200 kbps) uses H.263, video conferencing (at 20-320 kbps) uses H.261 or H.263, and video over 3G wireless (at 20-100 kbps) uses H.263.

Video is composed of frames of pictures, so still-image coding has to process one more dimension for video signal compression. To de-correlate the blocks of original pixels, the DCT has been widely used in many international multimedia compression standards. Some low bit rate international video coding standards are given in Table 1.

Table 1. Comparison of Resources Usage
Coding Schemes | Bit Rate (kbps) | Applications
H.261 / H.263  | 20-320          | Video Telephony / Video Conferencing
H.263 / H.264  | 20-100          | Video over 3G Wireless
H.264 / H.264  | 20-200          | Internet Video Streaming

Discrete Cosine Transform (DCT)

The two-dimensional DCT is broadly used in still image compression techniques. It is widely recognized that performing the DCT over 8 x 8 blocks represents a good trade-off between transform complexity and spatial correlation. The 8 x 8 2-D DCT transforms an 8 x 8 block of samples from the spatial domain f(x,y) into the frequency domain F(k,l). The DCT is defined mathematically as in Eq. (1):

(1)   F(k,l) = \frac{1}{4} C(k) C(l) \sum_{x=0}^{7} \sum_{y=0}^{7} f(x,y) \cos\theta_{kx} \cos\theta_{ly}

where

\theta_{kx} = \frac{(2x+1)k\pi}{16}, \qquad \theta_{ly} = \frac{(2y+1)l\pi}{16},

C(k), C(l) = \frac{1}{\sqrt{2}} if k, l = 0, and 1 otherwise,

and k, l vary from 0 to 7.
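For illustration, the following C sketch evaluates Eq. (1) directly in floating point. The function names and the ramp test block are illustrative assumptions; the paper's hardware instead uses the fixed-point, multiplier-less arithmetic described in the following sections.

#include <math.h>
#include <stdio.h>

#define N  8
#define PI 3.14159265358979323846

/* C(k) from Eq. (1): 1/sqrt(2) for k = 0, otherwise 1 */
static double C(int k) { return (k == 0) ? 1.0 / sqrt(2.0) : 1.0; }

/* Direct 8 x 8 2-D DCT of Eq. (1): 64 outputs, each needing a 64-term sum */
static void dct2d_direct(const double f[N][N], double F[N][N])
{
    for (int k = 0; k < N; k++)
        for (int l = 0; l < N; l++) {
            double sum = 0.0;
            for (int x = 0; x < N; x++)
                for (int y = 0; y < N; y++)
                    sum += f[x][y]
                         * cos((2 * x + 1) * k * PI / 16.0)   /* cos(theta_kx) */
                         * cos((2 * y + 1) * l * PI / 16.0);  /* cos(theta_ly) */
            F[k][l] = 0.25 * C(k) * C(l) * sum;
        }
}

int main(void)
{
    double f[N][N], F[N][N];
    for (int x = 0; x < N; x++)            /* simple ramp block as test input */
        for (int y = 0; y < N; y++)
            f[x][y] = (double)(x + y);
    dct2d_direct(f, F);
    printf("F(0,0) = %.4f, F(0,1) = %.4f\n", F[0][0], F[0][1]);
    return 0;
}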
Direct computation of the 2-D DCT according to this definition requires 4096 multiplications and 4032 additions. This computational complexity can be reduced by using the row-column approach [3]. In this row-column technique, the 2-D DCT is implemented by two successive 1-D DCTs: the first 1-D DCT is performed row-wise, and the second 1-D DCT is performed column-wise on the output of the first. Eq. (2) gives the mathematical expression of the 1-D DCT:

(2)   F(k) = \frac{1}{2} C(k) \sum_{x=0}^{7} f(x) \cos\theta_{kx}

where \theta_{kx} = \frac{(2x+1)k\pi}{16}, C(k) = \frac{1}{\sqrt{2}} if k = 0 and 1 otherwise, and k varies from 0 to 7.

In this paper, the efficient DCT coefficient matrix form is used for computational and architectural optimizations [4]. The 8 x 8 DCT coefficient matrix can be represented as:

\begin{bmatrix}
 c_4 &  c_4 &  c_4 &  c_4 &  c_4 &  c_4 &  c_4 &  c_4 \\
 c_1 &  c_3 &  c_5 &  c_7 & -c_7 & -c_5 & -c_3 & -c_1 \\
 c_2 &  c_6 & -c_6 & -c_2 & -c_2 & -c_6 &  c_6 &  c_2 \\
 c_3 & -c_7 & -c_1 & -c_5 &  c_5 &  c_1 &  c_7 & -c_3 \\
 c_4 & -c_4 & -c_4 &  c_4 &  c_4 & -c_4 & -c_4 &  c_4 \\
 c_5 & -c_1 &  c_7 &  c_3 & -c_3 & -c_7 &  c_1 & -c_5 \\
 c_6 & -c_2 &  c_2 & -c_6 & -c_6 &  c_2 & -c_2 &  c_6 \\
 c_7 & -c_5 &  c_3 & -c_1 &  c_1 & -c_3 &  c_5 & -c_7
\end{bmatrix}

where

(3)   c(i) = \frac{1}{2} \cos\frac{i\pi}{16}, \qquad i = 1, 2, \ldots, 6, 7

Since the coefficient matrix contains constant values, a minimum number of arithmetic operations becomes possible by using the CSD representation.

Multiplier-Less Implementation of Multiplication using Canonic Signed Digit

The multiplier-less method is widely used in VLSI (Very Large Scale Integration) realizations because of improvements in speed, area overhead and power consumption [5-6]. In the usual multiplier-less approach, multiplication by a constant number is implemented with only shift and add operations. In this method, the number of adders is directly proportional to the number of ones ('1') in the constant. The proposed scheme incorporates the CSD method for more efficient hardware usage, which reduces the hardware complexity significantly in the multiplier-less implementation of the DCT. In CSD form, the number of non-zero digits is less than or equal to that of the normal binary representation of a number, so the CSD method decreases the number of add/sub operations [7]. The CSD form notation is:

(4)   S = \sum_{i=0}^{N-1} a_i 2^{-i}

Here a_i is in the set {-1, 0, 1} for each i.

The DCT coefficients are implemented in fixed-point CSD representation. According to IEEE 1180-1990, 12-bit precision is used in order to conform to the accuracy specifications [6]. The CSD expressions of the cosine coefficients are shown in Table 2.

Table 2. 12-bit precision DCT coefficients in CSD form (1̄ denotes the digit -1)
Coefficient | Decimal Value | CSD Representation
c1          |  0.4904       | 0.1000001̄01̄000
-c1         | -0.4904       | 0.1̄00000101000
c2          |  0.4619       | 0.10001̄01̄00100
-c2         | -0.4619       | 0.1̄000101001̄00
c3          |  0.4157       | 0.101̄01010101̄0
-c3         | -0.4157       | 0.1̄0101̄01̄01̄010
c4          |  0.3536       | 0.101̄01̄0101000
-c4         | -0.3536       | 0.1̄010101̄01̄000
c5          |  0.2778       | 0.01001001̄0001
-c5         | -0.2778       | 0.01̄001̄0010001̄
c6          |  0.1913       | 0.0101̄00010001̄
-c6         | -0.1913       | 0.01̄010001̄0001
c7          |  0.0975       | 0.00101̄0010001̄
-c7         | -0.0975       | 0.001̄01001̄0001

In this manner, the maximum addition latency is three for computing any multiple of a fixed-point DCT coefficient.

Common Sub-Expression Elimination

The fixed coefficients of the DCT in CSD format have some common sub-expressions. A common sub-expression means that some bit pattern occurs more than once in an expression. Close observation of the fixed coefficients extracts some common sub-expressions which can easily be eliminated. For example, the bit pattern (1 0 1̄)_CSD in coefficient c3 occurs twice, so it is not necessary to compute it two times; it is computed once and its shifted result is reused. This characteristic makes it possible to reduce the number of arithmetic operations and the hardware complexity. The product c3 · x is computed as:

t_1 = x \cdot 2^{-1} - x \cdot 2^{-3}
t_2 = x \cdot 2^{-5} + x \cdot 2^{-7}
c_3 \cdot x = t_1 + t_2 + t_1 \cdot 2^{-8}

The given example shows that the common sub-expression technique eliminates one arithmetic operation. The result of the first operation is reused, with a shift, in computing the final result. It provides a maximum addition latency of 3.
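The following C sketch works through this c3 example numerically, using the CSD string of c3 from Table 2. The function names, the 2^11 scaling and the non-negative-sample assumption are illustrative choices, not the paper's implementation; the scaling simply turns every fractional CSD weight into an exact integer shift so the grouped form can be checked against the plain digit-by-digit sum.

#include <assert.h>
#include <stdio.h>

/* Multiplier-less c3 * x with the common sub-expression t1 reused.
   Results are at a scale of 2^11, so the weight 2^-k becomes an exact
   left shift by (11 - k); x is assumed non-negative here (e.g. an
   8-bit pixel sample).                                                */
static long mul_c3_cse(long x)
{
    long t1 = (x << 10) - (x << 8);   /* t1 = x*2^-1 - x*2^-3  (pattern 1 0 -1)  */
    long t2 = (x << 6)  + (x << 4);   /* t2 = x*2^-5 + x*2^-7                    */
    return t1 + t2 + (t1 >> 8);       /* t1*2^-8 reuses the same pattern shifted */
}                                     /* 4 add/sub in total instead of 5         */

static long mul_c3_plain(long x)      /* all six CSD digits summed directly      */
{
    return (x << 10) - (x << 8) + (x << 6) + (x << 4) + (x << 2) - x;
}

int main(void)
{
    for (long x = 0; x < 256; x++)
        assert(mul_c3_cse(x) == mul_c3_plain(x));   /* reusing t1 changes nothing */
    printf("implemented constant = %.6f, target c3 = 0.4157\n",
           mul_c3_cse(1 << 11) / (double)(1L << 22));
    return 0;
}

Exhaustively comparing the grouped form with the plain digit-by-digit sum confirms that reusing t1 reduces the operation count (four add/sub instead of five) without changing the result.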
Proposed DCT Architecture

The proposed DCT architecture is inherently low power and low area because of the minimum number of required arithmetic operations, which makes it highly suitable for compact mobile applications, e.g. video streaming, video conferencing and online video gaming. The 5-stage pipelined structure is simple, composed of 4 barrel shifters, 5 adders/subtractors and 1 multiplexer, as shown in Fig. 1.

Fig. 1. Proposed DCT Architecture

The first three stages perform the multiplication operation. The 4th-stage adder adds the eight multiplication values, which form one resultant value of the 1-D DCT. [...]

[...] enables an encoding rate of up to 68 PAL (Phase Alternating Line) frames (720 x 576 pixels) per second and 280 CIF (Common Intermediate Format) frames (352 x 288 pixels) per second [...]
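To make the fourth-stage accumulation concrete, here is a rough behavioral C sketch of the arithmetic described above: each of the eight products is formed multiplier-lessly from its 12-digit CSD coefficient, and a final adder sums them into one 1-D DCT output. The digit strings are taken from Table 2 and the signs from row k = 1 of the coefficient matrix given earlier; everything else (function names, the 2^12 scaling, the test data) is an illustrative assumption, and the sketch does not model the 4-barrel-shifter / 5-adder resource sharing or the pipeline timing of Fig. 1.

#include <stdio.h>

/* 12 CSD digits per coefficient (weights 2^-1 .. 2^-12), from Table 2. */
static const int CSD_C1[12] = { 1, 0, 0, 0, 0, 0,-1, 0,-1, 0, 0, 0 };  /* 0.4904 */
static const int CSD_C3[12] = { 1, 0,-1, 0, 1, 0, 1, 0, 1, 0,-1, 0 };  /* 0.4157 */
static const int CSD_C5[12] = { 0, 1, 0, 0, 1, 0, 0,-1, 0, 0, 0, 1 };  /* 0.2778 */
static const int CSD_C7[12] = { 0, 0, 1, 0,-1, 0, 0, 1, 0, 0, 0,-1 };  /* 0.0975 */

/* Multiplier-less product: each non-zero CSD digit contributes a shifted
   copy of x, added or subtracted.  The result is scaled by 2^12, so the
   weight 2^-(i+1) becomes an exact left shift by (11 - i).              */
static long csd_mul(long x, const int d[12])
{
    long acc = 0;
    for (int i = 0; i < 12; i++) {
        if (d[i] ==  1) acc += x << (11 - i);
        if (d[i] == -1) acc -= x << (11 - i);
    }
    return acc;
}

int main(void)
{
    long f[8] = { 10, 20, 30, 40, 50, 60, 70, 80 };      /* one input row */

    /* Row k = 1 of the coefficient matrix: c1 c3 c5 c7 -c7 -c5 -c3 -c1.
       The first three pipeline stages would produce these eight products. */
    long p[8];
    p[0] =  csd_mul(f[0], CSD_C1);   p[1] =  csd_mul(f[1], CSD_C3);
    p[2] =  csd_mul(f[2], CSD_C5);   p[3] =  csd_mul(f[3], CSD_C7);
    p[4] = -csd_mul(f[4], CSD_C7);   p[5] = -csd_mul(f[5], CSD_C5);
    p[6] = -csd_mul(f[6], CSD_C3);   p[7] = -csd_mul(f[7], CSD_C1);

    long acc = 0;                    /* 4th stage: sum of the eight products */
    for (int i = 0; i < 8; i++)
        acc += p[i];

    printf("F(1) = %.4f\n", acc / 4096.0);   /* undo the 2^12 scaling */
    return 0;
}

A full 1-D transform repeats this product-and-sum for all eight matrix rows, and the 2-D transform applies it row-wise and then column-wise, as described in the row-column decomposition above.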