Proceedings of the Advanced Seminar on Modern OpenGL

Chair of Computer Graphics and Visualization, Summer Semester 2015

"OpenGL® and the oval logo are trademarks or registered trademarks of Silicon Graphics, Inc. in the United States and/or other countries worldwide."

Preface

OpenGL (Open Graphics Library) is a hardware- and platform-independent API for the generation of 2D and 3D graphics. Since version 1.0, which was officially specified by the Architecture Review Board (ARB), it has evolved into one of the major APIs for GPU-based rendering. Today, the specification contains more than 200 commands and data structures covering all areas of computation with graphics cards.

During its development history of more than 20 years, the API has undergone massive changes. Initially, OpenGL was heavily based on abstraction and provided high-level functionality to configure an otherwise fixed rendering pipeline. This approach was consistent with the GPUs of the time, where most of the functionality was hard-wired. Modern GPUs, however, are built for high data throughput and flexibility. To catch up with this development, the OpenGL API changed drastically, for example through the introduction of shaders and the extensive use of buffers. With recent versions of OpenGL 4, a variety of problems that used to be hard to tackle, or that required tricks and even misuse of the rendering pipeline, can be solved in an efficient and elegant manner.

This collection contains the work of the advanced seminar on computer graphics and visualization of the summer semester 2015 on modern GPU programming. It is divided into three areas. First, the basics are laid out, providing a detailed overview of the different shader stages. Then, a selection of common problems is described and elegant solutions are shown. Finally, frameworks for general-purpose programming on the GPU and alternative APIs such as DirectX, Mantle, or Metal are presented. The collection ends with an overview of Vulkan, which is considered to be the long-term successor of OpenGL.

Table of Contents

1 Basics
  1.1 Grundlagen Shader
2 Common applications
  2.1 OpenGL Tessellation
  2.2 The Compute Shader
  2.3 Parallel Sorting Algorithms on GPUs
  2.4 State of the art regarding order-independent transparency with modern OpenGL
  2.5 Global illumination in real-time
3 Frameworks and technical view
  3.1 Frameworks für GPGPU-Programmierung
  3.2 API Alternatives to OpenGL
  3.3 Review of the advancing progress of Vulkan development
1 Basics

Grundlagen Shader (Shader Basics)

L. Zepner, TU Dresden, Germany

Abstract
This paper gives a brief overview of the six shaders of the OpenGL API. For each shader, the characteristic properties, such as inputs and outputs, as well as its tasks are defined and illustrated with concise examples. Particular focus is placed on the differences between the individual shaders and on their contribution to a graphics application.

Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Vertex Shader, Fragment Shader, Tessellation Control Shader, Tessellation Evaluation Shader, Geometry Shader, Compute Shader

1. Introduction

OpenGL serves as an abstraction layer between application and hardware and divides the graphics system into different areas. Using the OpenGL API, it is possible to access this graphics system and to develop graphics applications in a platform-independent way. The applications are programmed via shaders, which are connected by fixed, built-in parts of the pipeline. Shaders are written in the OpenGL Shading Language and linked together into a program object. Each shader has its own scope of operation and fixed input and output options, which are described in the following. [SWH13]
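Since the introduction states that shaders are written in the OpenGL Shading Language and linked into a program object, the following sketch illustrates that workflow with the standard OpenGL calls (glCreateShader, glShaderSource, glCompileShader, glAttachShader, glLinkProgram). It is a minimal example, not part of the original paper: the GLSL sources, the helper names compileShader and buildProgram, and the use of the GLAD loader are illustrative assumptions, and a current OpenGL 3.3+ core context is presumed to exist.

// Minimal sketch: compiling two GLSL shaders and linking them into a program
// object. Assumes a current OpenGL >= 3.3 core context and a function loader
// (e.g. GLAD) have already been set up; error handling is kept to a minimum.
#include <glad/glad.h>
#include <cstdio>

// Pass-through vertex shader: forwards the incoming position unchanged.
static const char* kVertexSrc = R"(
#version 330 core
layout(location = 0) in vec3 inPosition;
void main() { gl_Position = vec4(inPosition, 1.0); }
)";

// Fragment shader: writes a constant color for every covered fragment.
static const char* kFragmentSrc = R"(
#version 330 core
out vec4 fragColor;
void main() { fragColor = vec4(1.0, 0.5, 0.2, 1.0); }
)";

static GLuint compileShader(GLenum type, const char* source) {
    GLuint shader = glCreateShader(type);        // create the shader object
    glShaderSource(shader, 1, &source, nullptr); // attach the GLSL source
    glCompileShader(shader);                     // compile it

    GLint ok = GL_FALSE;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (ok != GL_TRUE) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof(log), nullptr, log);
        std::fprintf(stderr, "shader compile error: %s\n", log);
    }
    return shader;
}

// Builds the program object that the graphics pipeline will use.
GLuint buildProgram() {
    GLuint vs = compileShader(GL_VERTEX_SHADER, kVertexSrc);
    GLuint fs = compileShader(GL_FRAGMENT_SHADER, kFragmentSrc);

    GLuint program = glCreateProgram();          // container for the linked stages
    glAttachShader(program, vs);
    glAttachShader(program, fs);
    glLinkProgram(program);                      // link the stages together

    GLint linked = GL_FALSE;
    glGetProgramiv(program, GL_LINK_STATUS, &linked);
    if (linked != GL_TRUE) {
        char log[1024];
        glGetProgramInfoLog(program, sizeof(log), nullptr, log);
        std::fprintf(stderr, "program link error: %s\n", log);
    }

    // The individual shader objects can be deleted once the program is linked.
    glDeleteShader(vs);
    glDeleteShader(fs);
    return program;
}

Each additional stage (tessellation, geometry) would be compiled the same way and attached to the same program object before linking.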
[Figure 1: Overview of the graphics pipeline]

2. The Graphics Pipeline

The foundation of OpenGL programming is the use of shaders. These are individual programs that can be implemented by the user and thereby influence how the graphics are rendered. The infrastructure between the individual parts is provided by the graphics pipeline. It determines the behavior and the order of the individual phases of graphics programming. [RK06] Figure 1 gives a rough overview of the individual stages. The pipeline is composed of programmable and fixed-function areas. The shaders are highlighted in the figure by blue boxes; they are the part that can be freely defined by the user. The fixed areas, shown with rounded corners, are not freely programmable. [SWH13]

The graphics pipeline contains five integrated shaders plus one additional shader. The compute shader holds a special position among the shaders, since it is not part of the graphics pipeline and is usually used only indirectly for rendering graphics. For this reason, it is discussed separately in Section 8.

In the minimal configuration, a shader program consists of at least one vertex shader, which on its own cannot yet produce any visible output, which is why a fragment shader is usually added as well. Points, lines, and triangles form the basic units of OpenGL on which the shaders operate and from which more complex primitives are generated. This process is divided into two larger parts, which are reflected in the pipeline. First, the …
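To make the minimal configuration described above concrete, the following sketch feeds a single triangle, one of OpenGL's basic primitives, through a program object consisting of exactly one vertex and one fragment shader. It reuses the buildProgram helper from the earlier sketch; the buffer setup and function name drawMinimalTriangle are illustrative assumptions, and a current core-profile context is again presumed.

// Sketch: drawing one triangle with the minimal vertex + fragment shader
// configuration. Assumes buildProgram() from the previous sketch and a
// current OpenGL core-profile context with a loader already initialized.
#include <glad/glad.h>

GLuint buildProgram();  // from the previous sketch (illustrative helper)

void drawMinimalTriangle() {
    // Three vertices in normalized device coordinates: the basic primitive
    // (GL_TRIANGLES) on which the vertex and fragment shader operate.
    const float vertices[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f,
    };

    GLuint vao = 0, vbo = 0;
    glGenVertexArrays(1, &vao);   // vertex-array object: stores attribute state
    glGenBuffers(1, &vbo);        // vertex-buffer object: holds the raw data

    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

    // Attribute 0 matches "layout(location = 0) in vec3 inPosition" in the vertex shader.
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0);

    GLuint program = buildProgram();  // vertex + fragment shader, linked
    glUseProgram(program);

    glClear(GL_COLOR_BUFFER_BIT);
    glDrawArrays(GL_TRIANGLES, 0, 3); // the fragment shader colors the covered pixels
}

Swapping GL_TRIANGLES for GL_POINTS or GL_LINES changes which basic primitive the same shader program processes.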
Recommended publications
  • Intel® Quick Sync Video Technology Guide
  • GPU Developments 2018
  • Keepixo’S Genova Live:Speed Is the Latest Addition to Genova Virtualizable Software Family of Products
  • Rethinking Visual Cloud Workload Distribution
  • Intel® Quick Sync Video and FFmpeg Installation and Validation Guide
  • System Design for Telecommunication Gateways
  • Intel® Quick Sync Video Technology on Intel® Iris™ Graphics
  • Processing Multimedia Workloads on Heterogeneous Multicore Architectures
  • Courrier Du CNRS 1991
  • The Impact of Multichannel Game Audio on the Quality of Player Experience and In-Game Performance
  • Episteme Em Artigos VIII
  • Extracting and Mapping Industry 4.0 Technologies Using Wikipedia