Olwal.Com | Resume | August 2021


Alex Olwal
Research Scientist, Human—Computer Interaction
Ph.D. / Docent / M.Sc. Computer Science / Engineering
Santa Cruz, CA
Portfolio: www.olwal.com | [email protected] | olwal.com/linkedin | +1 650 276 88 35

Google      Augmented Reality 2020– / Research 2018–2020 / ATAP 2017–2018 / Wearables 2015–2017 / [x] 2014–2015
Research    MIT Media Lab / Columbia University / University of California, Santa Barbara / Iowa State University / KTH / Microsoft Research
Teaching    Stanford / Rhode Island School of Design / KTH Royal Institute of Technology
Academia    Docent (Human-Computer Interaction) / Ph.D. (Computer Science) / M.Sc. (Computer Science and Engineering)

3566 citations | h-index 33 | 48 publications | 6 patents
3 best paper awards / 3 honorable mentions / 1 best demo

Research Interests

I am a Tech Lead/Manager in Google's Augmented Reality team and a founder of the Interaction Lab. I direct research and development of interaction technologies based on advancements in display technology, low-power and high-speed sensing, wearables, actuation, electronic textiles, and human—computer interaction. I am passionate about accelerating innovation and disruption through tools, techniques and devices that enable augmentation and empowerment of human abilities. Research interests include augmented reality, ubiquitous computing, mobile devices, 3D user interfaces, interaction techniques, interfaces for accessibility and health, medical imaging, and software/hardware prototyping.

Employment

Google Augmented Reality
Technical Lead / Manager | Staff Research Scientist, 2020–Current
Leading the AR Visualization and Sound team and the Interaction Lab. Stealth project.
Google Research
Technical Lead / Manager | Senior Research Scientist, 2018–2020
Perception
Led the Biointerfaces team (co-founder) and the Interaction Lab (10+ research scientists/engineers, PMs, UX researchers/designers).
Led two funded multi-year programs with VP support for R&D in wearables, AI and accessibility, leveraging novel sensing and display hardware opportunities.
Wearable Subtitles: olwal.com/wearablesubtitles (Best demo honorable mention award)

Google ATAP
Technical Lead / Manager | Senior Research Scientist, 2017–2018
ATAP (Advanced Technology and Projects)
Recruited and led a 15-person software/hardware engineering team and ramped up a wearable project that subsequently graduated to Google Research.

Google Wearables / Project Aura
Technical Lead | Senior Research Scientist, 2015–2017
Wearables, Augmented and Virtual Reality / Project Aura, Glass and Beyond
Founded the Interaction Lab to accelerate the organization's capabilities for rapid hardware prototyping of wearables and interface technology. In-person presentations and demos to Google Hardware VP-level leadership, incl. Rick Osterloh (SVP Hardware) and Ivy Ross (VP Design).
E-textile microinteractions: olwal.com/e-textile (Best demo award)

Google [x]
Senior Interaction Researcher, 2014–2015
R&D in sensors, displays and interaction techniques
Member of the Google Glass Interaction Research team, developing new concepts and innovations to inform Google [x] roadmaps. Software and hardware prototyping, user interface design and research, working closely with optics scientists, UX designers and usability researchers. In-person presentations and demos to leadership, incl. Sergey Brin, Larry Page, Sundar Pichai and Ruth Porat.
Stanford University
Lecturer / Adjunct Lecturer, 2016 / 2017 / 2018
Teaching: Introduction to the Design of Smart Products

KTH Royal Institute of Technology
Affiliate Faculty / Researcher, 2009–2010 / 2014–2017
Virtual Reality for phobia treatment and rehabilitation: Established a partnership with clinical partners, co-authored a funded grant proposal and mentored junior researchers.
Surgery planning groupware and 3D visualization of 2D X-rays: Human-Computer Interaction researcher and software developer of cross-platform, multi-user, multi-device software for interaction with medical imagery. Co-authored a funded grant proposal for 3D interaction with 2D imagery; recruited and mentored junior research students.

MIT Media Lab
Research Affiliate, 2014–2017
Tangible Media group
Worked with postdoctoral researchers, senior Ph.D. students and visiting researchers to advise on research projects and co-author research publications. We made significant contributions to the emerging user interface paradigm of shape-changing user interfaces through the inFORM, Physical Telepresence, Sublimate, and Shape Displays publications and projects. We explored implicit sensing in furniture, objects and devices through bio-sensing (Zensei) and multi-spectral illumination (SpecTrans).
inFORM and Physical Telepresence: olwal.com/inform

MIT Media Lab
Postdoctoral Fellow, 2011–2013
Tangible Media Group; Camera Culture Group
Conducted research and development of new user interface technology in collaboration with postdoctoral researchers and Ph.D. students, and co-authored research publications. 3D gaze tracking for radiologists with Harvard Medical School, and software/hardware engineering for high-speed laser speckle sensing (SpeckleSense) and deformable physical interfaces (Jamming User Interfaces).
Jamming User Interfaces: olwal.com/jamming (Best paper award)
Advisor on wearable eye diagnostics projects (Retinal Imaging) and 3D desktop interfaces (SpaceTop).

Rhode Island School of Design
Faculty, 2012
Department of Digital + Media
Teaching: Introduction to Creative Programming

Microsoft Research
Ph.D. Intern, 2006
Adaptive Systems and Interaction Group
SurfaceFusion: Hybrid RFID and Computer Vision for Interactive Surfaces. We introduced a hybrid technique that combines RFID and computer vision to avoid the need for visual markers for interaction with tangible, physical objects on interactive surfaces.
SurfaceFusion: olwal.com/surfacefusion

Columbia University
Staff Associate Researcher, 2001–2003
Advisor: Steven Feiner. Multimodal Augmented Reality: SenseShapes, MAVEN, Unit

Tactam / Space + Time
Founder, Principal Scientist, 1999–
Public installations, interactive exhibits and museum technology. Example: exhibition interaction, concept and design for the MegaMind Science Center.

Rule Communication
Co-founder, Head of R&D, 2005–2014
Electronic marketing platform: e-mail, mobile, social media

Various IT companies, 1997–2000
Software Engineer, Doera Service Provider (1999–2000)
Computer Technician, Digital Communication Media (1998–1999)
Software Developer, Universum Communications (1997–2000)

Academia

Docent
KTH Royal Institute of Technology, Human—Computer Interaction
Docent Lecture: Programmable Perception: Augmented Reality through matter, tele-robotics and electronic tattoos (2016)

Ph.D.
KTH Royal Institute of Technology, Computer Science
Dissertation: Unobtrusive Augmentation of Physical Environments: Interaction Techniques, Spatial Displays and Ubiquitous Sensing (2009)

M.Sc.
KTH Royal Institute of Technology, Computer Science & Engineering
Thesis: Unit—A Modular Framework for Interaction Technique Design, Development and Implementation (2002)

Visiting Researcher
University of California – Santa Barbara, 2005
Augmented Reality: Immaterial Displays, Interactive FogScreen, POLAR

Exchange Student
Iowa State University, 2000
Virtual Reality Applications Center

Teaching Assistant
KTH Royal Institute of Technology, 1997–2002
Computer science and programming labs

Thesis supervision and doctoral committee member
Ph.D./Lic. committee member (4): Aditya Shekhar Nittala, Martin Weigel, Günter Alce, Sebastian Rauh
M.Sc. thesis supervisor (8): Finn Ericson, Dimitrios Lachanas, Henrik Lenberg, Johan Persson, Lars Ringborg, Oskar Rönnberg, Ludvig Suneson, Ermioni Zacharouli
B.Sc. thesis supervisor (2): Flaviano Musarra, Henrik Sjelin

Course development, lectures, and teaching
2016–18  Stanford University: Introduction to the Design of Smart Products
2012     Rhode Island School of Design: Introduction to Creative Programming
2006–09  KTH Royal Institute of Technology: Multimodal Interaction & Interfaces, Independent Studies, Advanced Graphics and Interaction, Evaluation Methods in Human—Computer Interaction

Grants (~$4.5 million USD)
2016  KTH (Researcher)  Pilot projects KTH-SLL 2016 | Stockholms Läns Landsting (SLL)
2016  KTH (Researcher)  Smart Systems 2015 | Swedish Foundation for Strategic Research (SSF)
2014  KTH (Researcher)  Startup Funding | Swedish Research Council (Vetenskapsrådet)
2013  Tactam  Innovation Project | Swedish Post and Telecom Authority (PTS)
2013  Rule Communication  Innovation Project | Swedish Post and Telecom Authority (PTS)
2010  MIT (Post-doc)  Post-doctoral Fellowship | Swedish Research Council (Vetenskapsrådet)
2010  MIT (Post-doc)  Post-doctoral Grant | Foundation Blanceflor
2010  Rule Communication  Project Grant | Sweden's Innovation Agency (VINNOVA)
2009  Rule Communication  Project Grant | Swedish Institute of Assistive Technology (HI)
2009  KTH
(Researcher)  Project Grant | Swedish Knowledge Foundation (KK-Stiftelsen)
2008  KTH (Ph.D. Student)  Personal Grant | Innovation Bridge (Innovationsbron)
2008  KTH (Ph.D. Student)  Personal Grant | Royal Swedish Academy of Engineering Sciences (IVA)
2005  UCSB (Visiting Researcher)  Ph.D. Fellowship | Sweden—America Foundation
2003–2009  KTH (Ph.D. Student)  Various conference travel grants | KTH Royal Institute of Technology

Boards and Committees
2009–15  Swedish Computer Graphics Association, Board member
2005–09  KTH, Member of hiring committee. Evaluation, interviewing and hiring of new faculty.

Publications
32 first-tier publications | 3545 citations | h-index 32 | i10-index 50