Emerging RTC Use-Cases: Innovative and Vertical-Market Use Cases of Voice/Video Real-Time Communications


A Disruptive Analysis thought-leadership paper, June 2018
Author: Dean Bubley
Contact: [email protected]
© Disruptive Analysis Ltd. RTC use-cases, June 2018: CALLSTATS I/O Oy

Executive Summary

This white paper examines the ongoing proliferation of voice and video communications beyond familiar uses like "calling" and "conferencing". A new breed of interactive experiences is starting to emerge, spanning consumer and enterprise domains, mobile and new devices, private applications and public services. These solutions span both "verticals" (specific industries such as automotive, healthcare or finance) and new generic "horizontal" functions like AI, chatbots, security and emotion-detection. While the vanguard of this trend is already visible, Disruptive Analysis believes it is only the tip of the iceberg.

Since Disruptive Analysis started writing about "the future of voice" in 2009, and covering WebRTC technology as an enabler for new forms of video in 2011, much has already occurred. Most smartphone owners have 5, 10, 20 or more applications with some form of communications capability, as either a primary or secondary function. Social apps are especially good examples, with Instagram recently adding a video calling feature. In the last two years we have also seen interactive voice-assistant speakers like Google Home and Amazon Echo, as well as other new initiatives such as real-time video-streaming, camera-equipped consumer drones, and mainstream "video doctor" services advertised and adopted broadly. New business processes are being developed around app-embedded voice or video, such as voice biometric identification, or "know your customer" compliance tools for online banking.
There are three main themes to consider:

● Embedded communications: It is becoming increasingly easy to embed real-time voice and video capabilities into existing applications and devices. Vehicles, consumer electronics, e-commerce apps, smart buildings and advertising can all exploit cheap displays, cameras or microphones to enable better human interactions.
● Process enhancement: The addition of real-time communications (RTC) can often improve productivity, convenience or customer satisfaction for a given process or application – for example, consultations with a "video stylist" when buying clothes online, or intelligent dictation and transcription software for doctors speaking with patients.
● Enabling innovation: Entirely new business models, workflows or process structures may be enabled by RTC. RTC may also enable compliance with regulatory instruments, such as know-your-customer rules in finance, which become problematic for online-only services without physical locations for identity checks.

RTC is becoming increasingly advantageous – even critical – for many sectors. Yet it still lacks recognition in some quarters, and there are many more applications that could benefit from it. This document highlights a selection of new use-cases and vertical markets to illustrate what is possible, and what can be achieved with some imagination. A secondary theme highlights the importance of performance and control, especially where new business models and processes are being enabled.

This document has been prepared by independent research firm Disruptive Analysis, and commissioned by CALLSTATS I/O, for distribution to customers, partners and a wider audience. It is based on Disruptive Analysis' research covering networks, IoT, AI, WebRTC, telco and enterprise communications, and the "future of voice & video".
It should be read by CIOs, strategy executives, CTOs, CMOs, enterprise architects and planning/operational staff at major enterprises, communications service providers, information providers, software vendors, IoT firms, cable operators, ISPs, integrators, developers, XaaS providers and similar organisations. Mentions of companies and products in this document are intended as illustrations of market evolution, not as endorsements or product/service recommendations. For more details, please contact [email protected]

Introduction

When most people think about the evolution of real-time communications (RTC), they tend to focus on three very specific trends seen over the last decade:

● The use of mobile phones as the preferred device for calling (rather than fixed-line phones at home and work)
● The emergence of new VoIP-based calling, with services such as Skype, WhatsApp voice calling, or enterprise IP-PBXs and cloud-based alternatives
● The growing use of video calling and conferencing – whether on mobile devices (like Apple FaceTime), business video-conferencing room systems, or desktop applications

Yet underneath these obvious changes are a set of broader and more far-reaching evolution paths for RTC.
Enabled by various shifts in underlying technologies and platforms, such as better cameras or WebRTC standards, there are actually four separate, orthogonal trends happening simultaneously:

● More contexts for RTC
● More formats for RTC
● More processing for RTC
● More platforms for RTC

For example, in the past we have seen voice calls evolve to embrace:

● A new context: call centres
● A new format: conferencing
● New processing: storage and sending of voice messages / voicemail (plus IVR [Interactive Voice Response] systems and others)
● New platforms: voice calling (and messaging) APIs

The three familiar trends (mobile, VoIP and video) are all now accelerating, especially with the introduction of video, cloud-based services, and the widespread availability of open-source tools and "communications platform-as-a-service" (cPaaS) providers. Shifts in behaviour by both consumers and business employees mean that "communications everywhere" is widely accepted, going well beyond traditional "call and conferencing" forms. As the rest of this paper will explore, we are now seeing the early stages of a likely "Cambrian Explosion" of RTC:

● More contexts: Communications is being embedded in-app, in-device and in-process, for everything from banks' know-your-customer interactions, to telemedicine interactions with remote doctors, to interactive video inside games.
● More formats: We are seeing one-way, three-way and hybrid voice/video combinations, always-on RTC, live-streaming, "whisper mode" asymmetric communications, augmented- and mixed-reality communications, non-voice applications for audio, and many more.
● More processing: This is a huge domain which intersects with AI, as well as more conventional techniques and analytics. It includes new areas such as voice assistants, emotion-detection, real-time translation, facial recognition and much more.
● More platforms: We see a continued proliferation of cPaaS providers, with some specialising in certain domains or techniques described above. Others combine RTC functions with other tools for developers. Some are telcos or UC providers extending their existing businesses, while others are "pure-play" RTC platforms.

Underlying drivers and enablers

Various technical, commercial and social trends are driving this massive expansion of RTC. Some are general to the whole technology sector (for instance, adoption of smartphones and cloud platforms) and do not need closer examination here. Others are specifically important for RTC and bear consideration, as they point towards likely further evolution. Among these are:

● Availability of screens and cameras
● RTC-capable connectivity
● User behavioural changes
● AI
● An ecosystem of RTC platforms and specialists

All of these elements – and others – are critical for pushing RTC beyond its traditional bastions of calling and conferencing, into new contexts and formats.

Screens and cameras

A key driver for new types of video communications is the availability and falling price of suitable displays, especially smaller touch-screens. High-quality touchscreen display modules of 5-7 inches can now cost less than $30 for equipment manufacturers, as prices have fallen and quality has improved with the volumes needed for smartphones and small tablets. Similarly, good-quality camera modules are also much cheaper than in the past. As a comparison, consider that the first non-touchscreen 1080p 7" display was launched in 2005 by Sanyo-Epson, costing over $100 and aimed only at ultra-portable PCs. Overall, adding 2-way video communications to a high-end IoT product is unlikely to add more than perhaps $50 to a typical bill-of-materials, unless it needs to be ruggedised.
(For comparison, this is around the price of a low-end Android smartphone, which obviously has video capabilities as well.) This trend has enabled numerous innovations, such as Amazon's Echo Show, which offers "visual Alexa" capabilities. For one-way or lower-quality video – for example where just a camera (but no screen) is used in a "smart doorbell" or CCTV camera – the costs can be much lower than this. While these numbers are still substantial – and obviously not relevant for a $3 sensor – they start to enable new functions and capabilities in both existing and new product categories. This means that many consumer appliances, vehicles, industrial equipment and many other products can support the inclusion of voice and video capabilities.
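The cost argument above can be sketched as a quick bill-of-materials (BoM) estimate. The sketch below is purely illustrative: the individual line items and prices are assumptions loosely anchored to the ballpark figures quoted in this section (a sub-$30 touchscreen, a total delta of no more than perhaps $50), not real component quotes.

```python
# Illustrative BoM estimate for adding 2-way video to a high-end IoT product.
# All part names and prices are assumptions for this sketch, not real quotes.
PARTS = {
    "5-7in touchscreen module": 28.00,  # "less than $30" per the text
    "camera module":             8.00,  # assumed
    "microphone + speaker":      4.00,  # assumed
    "extra SoC/memory headroom": 9.00,  # assumed
}

def bom_delta(parts: dict) -> float:
    """Total added BoM cost from the per-part estimates, in dollars."""
    return round(sum(parts.values()), 2)

if __name__ == "__main__":
    total = bom_delta(PARTS)
    print(f"Added BoM cost: ${total:.2f}")
```

Under these assumed prices the delta comes to $49, consistent with the paper's "no more than perhaps $50" estimate; a camera-only device (e.g. a smart doorbell) drops the display line item and lands far lower.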