(12) United States Patent (10) Patent No.: US 8,527,640 B2 Reisman (45) Date of Patent: Sep. 3, 2013

Total Pages: 16

File Type: PDF, Size: 1020 KB

(12) United States Patent
Reisman
(10) Patent No.: US 8,527,640 B2
(45) Date of Patent: Sep. 3, 2013

(54) METHOD AND APPARATUS FOR BROWSING USING MULTIPLE COORDINATED DEVICE SETS

(75) Inventor: Richard Reisman, New York, NY (US)
(73) Assignee: Teleshuttle Tech2, LLC, New York, NY (US)
(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 183 days.
(21) Appl. No.: 12/552,992
(22) Filed: Sep. 2, 2009
(65) Prior Publication Data: US 2009/0319672 A1, Dec. 24, 2009

Related U.S. Application Data
(63) Continuation of application No. 10/434,042, filed on May 8, 2003, now Pat. No. 7,899,915.

(51) Int. Cl.: G06F 15/16 (2006.01)
(52) U.S. Cl.: USPC 709/228; 709/227; 709/229
(58) Field of Classification Search: USPC 709/227-229. See application file for complete search history.

(56) References Cited: the front page and pages 2-3 list the cited U.S. patent documents, foreign patent documents (EP, GB, and WO applications), and other publications, including Current.org articles by Steve Behrens, the ETV Cookbook system reference pages (Transports, Platforms, Localization, Middleware, Storage, Application Services), and MetaTV's Universal Portal Platform.

Primary Examiner: Hieu Hoang
(74) Attorney, Agent, or Firm: Cooley LLP

(57) ABSTRACT
Systems and methods for navigating hypermedia using multiple coordinated input/output device sets. Disclosed systems and methods allow a user and/or an author to control what resources are presented on which device sets (whether they are integrated or not), and provide for coordinating browsing activities to enable such a user interface to be employed across multiple independent systems. Disclosed systems and methods also support new and enriched aspects and applications of hypermedia browsing and related business activities.

200 Claims, 10 Drawing Sheets

Representative drawing: Device Set 1 (a TV and TV set-top box receiving TV content, ads, and programming), a session coordination path, a cable operator TV portal supplying content, enhancements, ads, and programming, Device Set 2 with a keyboard, and a Coordination/Relay Portal Service.
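The abstract and the representative drawing describe browsing coordinated across independent device sets through a relay/portal service. As a purely illustrative sketch of that idea, and not the patent's claimed method, the TypeScript below models the kind of message such a relay might pass between a TV set-top device set and a handheld device set; every type, field, and URI here is an invented placeholder.

    // Illustrative sketch only: names and message shapes are assumptions,
    // not taken from the patent's claims or specification.
    type DeviceSetId = "tv-set-top" | "handheld";

    interface PresentResource {
      kind: "present";
      sessionId: string;     // shared browsing session spanning the device sets
      target: DeviceSetId;   // which device set should render the resource
      resourceUri: string;   // hypermedia resource to present
      relatedTo?: string;    // resource it enhances (e.g. the program on the TV)
    }

    // A minimal in-memory relay standing in for the coordination/relay portal service.
    class InMemoryRelay {
      private handlers = new Map<DeviceSetId, (msg: PresentResource) => void>();
      onMessage(target: DeviceSetId, handler: (msg: PresentResource) => void): void {
        this.handlers.set(target, handler);
      }
      send(msg: PresentResource): void {
        this.handlers.get(msg.target)?.(msg);
      }
    }

    // Example: the TV device set pushes an enhancement page to the handheld.
    const relay = new InMemoryRelay();
    relay.onMessage("handheld", (msg) => console.log("handheld presents", msg.resourceUri));
    relay.send({
      kind: "present",
      sessionId: "session-42",
      target: "handheld",
      resourceUri: "https://example.com/enhancements/quiz",
      relatedTo: "tv://channel-7/program-123",
    });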
Recommended publications
  • Advene As a Tailorable Hypervideo Authoring Tool: a Case Study
    Olivier Aubert, Yannick Prié (Université de Lyon, CNRS, Université Lyon 1, LIRIS, UMR5205, France) and Daniel Schmitt (Université de Strasbourg, LISEC EA-2310, France)

    ABSTRACT
    Audiovisual documents provide a great primary material for analysis in multiple domains, such as sociology or interaction studies. Video annotation tools offer new ways of analysing these documents, beyond the conventional transcription. However, these tools are often dedicated to specific domains, putting constraints on the data model or interfaces that may not be convenient for alternative uses. Moreover, most tools serve as exploratory and analysis instruments only, not proposing export formats suitable for publication. We describe in this paper a usage of the Advene software,

    Many video annotation tools are often strongly tied to a specific application domain. This allows them to offer a customized experience, with dedicated tools and interfaces, at the expense of some difficulty to be used outside of intended uses. For instance, a tool like ELAN [8] does not allow overlapping annotations, which may make sense in the linguistic context but may not be convenient for other practices. Moreover, most specialized tools being aimed at analysis, they do not particularly insist on the final phase of the process: publishing documents. Tools like Anvil [4] or Elan [8] propose multiple predefined structured export formats but do not allow users to define their own visualizations.
  • Curtains Up! Lights, Camera, Action! Documenting the Creation Of
    Thomas Steiner (1), Rémi Ronfard (2), Pierre-Antoine Champin (1), Benoît Encelle (1), and Yannick Prié (3). (1) CNRS, Université de Lyon, LIRIS UMR5205, Université Lyon 1, France; (2) Inria Grenoble Rhône-Alpes / LJK Laboratoire J. Kuntzmann - IMAGINE, France; (3) CNRS, Université de Nantes, LINA UMR 6241, France.

    To cite this version: Thomas Steiner, Rémi Ronfard, Pierre-Antoine Champin, Benoît Encelle, Yannick Prié. Curtains Up! Lights, Camera, Action! Documenting the Creation of Theater and Opera Productions with Linked Data and Web Technologies. International Conference on Web Engineering ICWE 2015, International Society for the Web Engineering, Jun 2015, Amsterdam, Netherlands. pp.10. hal-01159826.

    HAL Id: hal-01159826, https://hal.inria.fr/hal-01159826, submitted on 22 Dec 2015. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
  • Hypervideo and Annotations on the Web
    Madjid Sadallah (DTISI - CERIST, Algiers, Algeria), Olivier Aubert (LIRIS - Université Lyon 1, UMR 5205 CNRS), Yannick Prié (LIRIS - Université Lyon 1, UMR 5205 CNRS)

    Abstract. Effective video-based Web information system deployment is still challenging, while the recent widespread of multimedia further raises the demand for new online audiovisual document edition and presentation alternatives. Hypervideo, a specialization of hypermedia focusing on video, can be used on the Web to provide a basis for video-centric documents and to allow more elaborated practices of online video. In this paper, we propose an annotation-driven model to conceptualize hypervideos, promoting a clear separation between video content/metadata and their various potential presentations. Using the proposed model, features of hypervideo are grafted to wider video-based Web documents in a Web standards-compliant manner. The annotation-driven hypervideo model and its implementation offer a general framework to experiment with new interaction modalities for video-based knowledge communication on the Web.

    the hypermedia system. Hypervideo addresses this issue by a specialization of hypermedia centered on interactive video. We define hypervideo as being an interactive video-centric hypermedia document built upon an audiovisual content - a set of video objects. Several kinds of related data are presented within the document in a time-synchronized way to augment the audiovisual part around which the presentation is organized in space and time. The articulation of video content with navigation facilities introduces new ways for developing interfaces, rendering the content and interacting with the document. By supplying spatio-temporal behaviors to links, hypervideos allow address-
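    To make the separation between annotated video content and its presentations concrete, here is a minimal TypeScript sketch of an annotation-driven structure; the type and field names are illustrative assumptions, not the model actually defined in the paper.

    // Illustrative sketch of an annotation-driven hypervideo structure.
    // Type and field names are assumptions, not the paper's actual model.
    interface Annotation {
      id: string;
      begin: number;   // seconds into the video
      end: number;     // seconds into the video
      type: string;    // e.g. "chapter", "speaker", "link"
      content: string; // textual payload or target URI
    }

    interface Hypervideo {
      videoUrl: string;
      annotations: Annotation[];
    }

    // One possible presentation derived from the same annotation data:
    // a table of contents built from "chapter" annotations.
    function tableOfContents(hv: Hypervideo): string[] {
      return hv.annotations
        .filter((a) => a.type === "chapter")
        .sort((a, b) => a.begin - b.begin)
        .map((a) => `${a.begin}s - ${a.content}`);
    }

    const hv: Hypervideo = {
      videoUrl: "https://example.com/lecture.webm",
      annotations: [
        { id: "a1", begin: 0, end: 90, type: "chapter", content: "Introduction" },
        { id: "a2", begin: 90, end: 300, type: "chapter", content: "Related work" },
      ],
    };
    console.log(tableOfContents(hv)); // ["0s - Introduction", "90s - Related work"]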
  • Reusable Hypertext Structures for Distance and JIT Learning
    Anne Morgan Spalter and Rosemary Michelle Simpson, Department of Computer Science, Brown University, Providence, RI, USA

    ABSTRACT
    Software components for distance and just-in-time (JIT) learning are an increasingly common method of encouraging reuse and facilitating the development process[56], but no analogous efforts have been made so far for designing hypertext components that can be reused in educational offerings. We argue that such structures will be of tangible benefit to the online learning community, serving to offload a substantial burden from programmers and designers of software, as well as allowing educators without any programming experience to customize available online resources. We present our motivation for hypertext structure components (HTSC) and then propose a set of pedagogical structures and their building blocks that reflect the categories of lecture, laboratory, creative project, playground, and game[36].

    System) developed with Ted Nelson in the 1960s[15], FRESS (File Retrieval and Editing System), developed in the 1970s and the first hypertext system used to teach a liberal arts course[21], and IRIS Intermedia in the 1980s[35][73], a UNIX-based networked hypertext system with advanced features used for teaching undergraduate and graduate courses. These efforts led to the notion of an electronic book with interactive illustrations, a new form of textbook that took advantage of the power of hypertext and the power of 2D and 3D interactive computer graphics. This model, however, proved difficult to apply for all but the most determined and privileged educators. Few teachers have the time, inclination, ability, and support necessary to write a textbook or develop interactive software, using either hypertextual or linear formats (although when they do, the results can be extraordinary
  • Open Hypervideo As Archive Interface
    Thesis by Joscha Jäger, January 2012. Interface Design, Merz Akademie, Hochschule für Gestaltung, Kunst und Medien, Stuttgart.

    Contents
    1. Introduction
      1.1. Motivation
      1.2. Problem Statement and Research Questions
    2. Related Work & Interdisciplinary Classification
      2.1. Archival Configurations
        2.1.1. On Documenting History
        2.1.2. Document Access and Archive Interfaces
      2.2. Hypermedia-Theory
        2.2.1. Trail and Link Concepts
        2.2.2. Hypertext as Interface for Narratives
        2.2.3. Concepts of Authorship
  • University of Southampton Research Repository ePrints Soton
    Copyright © and Moral Rights for this thesis are retained by the author and/or other copyright owners. A copy can be downloaded for personal non-commercial research or study, without prior permission or charge. This thesis cannot be reproduced or quoted extensively from without first obtaining permission in writing from the copyright holder/s. The content must not be changed in any way or sold commercially in any format or medium without the formal permission of the copyright holders. When referring to this work, full bibliographic details including the author, title, awarding institution and date of the thesis must be given, e.g. AUTHOR (year of submission) "Full thesis title", University of Southampton, name of the University School or Department, PhD Thesis, pagination. http://eprints.soton.ac.uk

    Bibliography
    E. J. Aarseth. Cybertext: Perspectives on Ergodic Literature. Johns Hopkins University Press, Baltimore, 1997.
    Annette Adler, Anuj Gujar, Beverly L. Harrison, Kenton O'Hara, and Abigail Sellen. A diary study of work-related reading: Design implications for digital reading devices. In Proceedings of CHI '98 Human Factors in Computing Systems, Los Angeles, California, USA, pages 241-248, 1998.
    M. J. Adler and C. van Doren. How to Read a Book. Simon and Schuster, New York, NY, 1972.
    Janet Adshead-Lansdale, editor. Dancing texts: intertextuality in interpretation. Dance Books, London, 1999.
    Maristella Agosti and James Allan. Methods and tools for the construction of hypertext. Information Processing and Management, 33(2):129-271, 1997.
    R. Akscyn, D. McCracken, and E. Yoder. KMS: A distributed hypermedia system for managing knowledge in organizations.
  • RDF-Powered Semantic Video Annotation Tools with Concept Mapping to Linked Data for Next-Generation Video Indexing: a Comprehensive Review
    Leslie F. Sikos

    Abstract. Video annotation tools are often compared in the literature; however, most reviews mix unstructured, semi-structured, and the very few structured annotation software. This paper is a comprehensive review of video annotation tools generating structured data output for video clips, regions of interest, frames, and media fragments, with a focus on Linked Data support. The tools are compared in terms of supported input and output data formats, expressivity, annotation specificity, spatial and temporal fragmentation, the concept mapping sources used for Linked Open Data (LOD) interlinking, provenance data support, and standards alignment. Practicality and usability aspects of the user interface of these tools are highlighted. Moreover, this review distinguishes extensively researched yet discontinued semantic video annotation software from promising state-of-the-art tools that show new directions in this increasingly important field.

    Keywords: Video Annotation, Multimedia Semantics, Spatiotemporal Fragmentation, Video Scene Interpretation, Multimedia Ontologies, Hypervideo Application

    1 Introduction
    While there are metadata formats available for images, audio files, and videos, they are often limited to technical characteristics and many of them are not structured. In contrast to MP3 files, which often provide information about the album, singer or band, release year, genre, and might even include the lyrics, video files typically do not have any information embedded in them about the depicted concepts, actors, or the plot. Online video information retrieval often relies on the text surrounding the media files embedded in web pages, mainly due to the huge "semantic gap" between what computers and humans understand (automatically extractable low-level features and sophisticated high-level content descriptors) [1].
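    As a rough illustration of what structured, Linked Data-friendly annotation output can look like, the sketch below assembles a W3C Web Annotation-style JSON-LD object whose target uses a media-fragment selector; the resource URIs and the chosen DBpedia concept are placeholders, and the field layout follows the Web Annotation Data Model rather than any specific tool reviewed in the paper.

    // Sketch of a Web Annotation (JSON-LD) tagging a temporal video fragment
    // with a Linked Open Data concept. All URIs below are placeholders.
    const annotation = {
      "@context": "http://www.w3.org/ns/anno.jsonld",
      id: "http://example.org/annotations/1",
      type: "Annotation",
      // The body interlinks the fragment with an LOD concept (a DBpedia resource).
      body: "http://dbpedia.org/resource/Alfred_Hitchcock",
      target: {
        source: "http://example.org/videos/interview.mp4",
        selector: {
          type: "FragmentSelector",
          conformsTo: "http://www.w3.org/TR/media-frags/",
          value: "t=30,60", // seconds 30-60 of the video
        },
      },
    };

    console.log(JSON.stringify(annotation, null, 2));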
  • Hypervideo Expression: Experiences with Hyper-Hitchcock
    Frank Shipman (Department of Computer Science & Center for the Study of Digital Libraries, Texas A&M University, College Station, TX 77843-3112), Andreas Girgensohn and Lynn Wilcox (FX Palo Alto Laboratory, Inc., 3400 Hillview Avenue, Palo Alto, CA 94304, USA; {andreasg, wilcox}@fxpal.com)

    ABSTRACT
    Hyper-Hitchcock is a hypervideo editor enabling the direct manipulation authoring of a particular form of hypervideo called "detail-on-demand video." This form of hypervideo allows a single link out of the currently playing video to provide more details on the content currently being presented. A workspace is used to select, group, and arrange video clips into several linear sequences. Navigational links placed between the video elements are assigned labels and return behaviors appropriate to the goals of the hypervideo and the role of the destination video. Hyper-Hitchcock was used by students in a Computers and New Media class to author hypervideos on a variety of topics.

    is embodied in the Web today and results in authors creating structures of content that mirror those of other page-based hypertext regardless of the media involved. These structures are not indicative of the structures people would create when links are placed between nodes of media content without a page metaphor. We are interested in the characteristic structures and links of hypervideo to provide insight into how links directly between time-based content (e.g., from audio to audio, or from video to video) might result in alternative structures and generate the need for additional representational and interface
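    The detail-on-demand idea, a single link out of the playing video with a defined return behavior, can be sketched as a small data structure; the names below are illustrative guesses, not Hyper-Hitchcock's actual internals.

    // Illustrative sketch of a detail-on-demand link between video clips.
    // Names are assumptions, not Hyper-Hitchcock's actual data structures.
    type ReturnBehavior = "resume-at-link" | "resume-after-source" | "end-playback";

    interface DetailLink {
      label: string;                 // offered to the viewer while the source plays
      sourceClip: string;
      activeFrom: number;            // seconds within the source clip
      activeTo: number;
      destinationSequence: string[]; // linear sequence of clips holding the details
      onReturn: ReturnBehavior;      // where playback continues after the detour
    }

    const link: DetailLink = {
      label: "More about the location",
      sourceClip: "tour-overview",
      activeFrom: 12,
      activeTo: 25,
      destinationSequence: ["location-history", "location-map"],
      onReturn: "resume-at-link",
    };
    console.log(`While "${link.sourceClip}" plays, offer: ${link.label}`);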
  • Hypervideos and Interactive Multimedia Presentations
    BRITTA MEIXNER, University of Passau, Germany

    Hypervideos and interactive multimedia presentations allow the creation of fully interactive and enriched video. It is possible to organize video scenes in a nonlinear way. Additional information can be added to the video, ranging from short descriptions to images and more videos. Hypervideos are video-based but also provide navigation between video scenes and additional multimedia elements. Interactive multimedia presentations consist of different media with a temporal and spatial synchronization that can be navigated via hyperlinks. Their creation and description requires description formats, multimedia models, and standards, as well as players. Specialized authoring tools with advanced editing functions allow authors to manage all media files, link and arrange them to an overall presentation, and keep an overview during the whole process. They considerably simplify the creation process compared to writing and editing description documents in simple text editors. Data formats need features that describe interactivity and nonlinear navigation while maintaining temporal and spatial synchronization. Players should be easy to use with extended feature sets keeping elements synchronized. In this article, we analyzed more than 400 papers for relevant work in this field. From the findings we discovered a set of trends and unsolved problems, and propose directions for future research.

    Categories and Subject Descriptors: H.1.2 [Information Systems]:
  • A URI-Based Approach for Addressing Fragments of Media Resources on the Web
    Erik Mannens, Davy Van Deursen, Raphaël Troncy, Silvia Pfeiffer, Conrad Parker, Yves Lafon, Jack Jansen, Michael Hausenblas, Rik Van de Walle

    E. Mannens, D. Van Deursen, R. Van de Walle: Ghent University - IBBT, ELIS - Multimedia Lab, Ghent, Belgium. R. Troncy: EURECOM, Multimedia Communications Department, Sophia Antipolis, France. S. Pfeiffer: Vquence, Sydney, Australia. C. Parker: Kyoto University, Kyoto, Japan. Y. Lafon: W3C/ERCIM, Sophia Antipolis, France. J. Jansen: CWI, Distributed Multimedia Languages and Infrastructures, Amsterdam, Netherlands. M. Hausenblas: National University of Ireland, Digital Enterprise Research Institute - LiDRC, Galway, Ireland.

    Abstract. To make media resources a prime citizen on the Web, we have to go beyond simply replicating digital media files. The Web is based on hyperlinks between Web resources, and that includes hyperlinking out of resources (e.g. from a word or an image within a Web page) as well as hyperlinking into resources (e.g. fragment URIs into Web pages). To turn video and audio into hypervideo and hyperaudio, we need to enable hyperlinking into and out of them.
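    The underlying mechanism is a URI fragment that addresses a time range or spatial region of a media resource, as standardized in W3C Media Fragments URI 1.0 (the t and xywh dimensions). A small illustrative sketch, with a made-up resource URL:

    // Build a URI that hyperlinks *into* a media resource using the
    // W3C Media Fragments URI 1.0 dimensions t (time) and xywh (region).
    function mediaFragmentUri(
      resource: string,
      opts: { start?: number; end?: number; region?: [number, number, number, number] },
    ): string {
      const parts: string[] = [];
      if (opts.start !== undefined && opts.end !== undefined) {
        parts.push(`t=${opts.start},${opts.end}`);
      } else if (opts.start !== undefined) {
        parts.push(`t=${opts.start}`);
      } else if (opts.end !== undefined) {
        parts.push(`t=,${opts.end}`);
      }
      if (opts.region) {
        parts.push(`xywh=${opts.region.join(",")}`);
      }
      return parts.length > 0 ? `${resource}#${parts.join("&")}` : resource;
    }

    // Prints "http://example.com/talk.webm#t=60,100&xywh=160,120,320,240"
    console.log(
      mediaFragmentUri("http://example.com/talk.webm", {
        start: 60,
        end: 100,
        region: [160, 120, 320, 240],
      }),
    );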
  • Web Browsing Behavior Analysis and Interactive Hypervideo
    This is a preprint for personal use only. The published paper may be subject to some form of copyright.

    LUIS A. LEIVA and ROBERTO VIVÓ, Universitat Politècnica de València

    Processing user interaction data is well known to be cumbersome and mostly time-consuming, especially when it comes to web browsing behavior analysis. Current tools usually display user interactions as mouse cursor tracks, a video-like visualization scheme that allows researchers to easily inspect what is going on behind the gathered data. However, to date, traditional online video inspection has not explored the full capabilities of hypermedia and interactive techniques. In response to this need, we have developed SMT2ε, a web-based tracking system to analyze browsing behavior using feature-rich hypervideo visualizations. We compare our system to related work in academia and industry, showing that ours features unprecedented visualization capabilities. We also show that SMT2ε efficiently captures browsing data, and is perceived to be helpful and easy to use. A series of prediction experiments illustrate that raw cursor data are both accessible and easily manipulable, providing evidence that the data can be used to construct and verify hypotheses. Considering its limitations, it is our hope that SMT2ε will assist researchers, usability practitioners, and other professionals interested in understanding how users browse the Web.

    Categories and Subject Descriptors: H.5.1 [Information Interfaces and Presentation]: Multimedia Information Systems - Video, Evaluation/methodology; H.5.3 [Information Interfaces and Presentation]: Group and Organization Interfaces - Web-based interaction; H.5.4 [Information Interfaces and Presentation]: Hypertext/Hypermedia - Navigation

    General Terms: Experimentation, Human Factors, Design

    Additional Key Words and Phrases: User interaction, information visualization, remote tracking, cursor behavior, hypervideo synthesis, interactive analysis, data mining, usability evaluation

    ACM Reference Format: Leiva, L.
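    As a rough illustration of what remote cursor tracking involves, and not SMT2ε's actual implementation, the browser-side sketch below buffers mouse positions and periodically ships them to a logging endpoint; the endpoint path and batching interval are assumptions.

    // Browser-side sketch of remote cursor tracking; not SMT2e's actual code.
    // The endpoint path and the batching interval are assumptions.
    interface CursorSample {
      t: number; // timestamp in milliseconds
      x: number; // page coordinates
      y: number;
    }

    const buffer: CursorSample[] = [];

    document.addEventListener("mousemove", (e: MouseEvent) => {
      buffer.push({ t: Date.now(), x: e.pageX, y: e.pageY });
    });

    // Flush the buffer every two seconds so the server can later replay the
    // trajectory as a video-like (or hypervideo) visualization.
    setInterval(() => {
      if (buffer.length === 0) return;
      const batch = buffer.splice(0, buffer.length);
      void fetch("/track/cursor", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(batch),
      });
    }, 2000);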
  • An Outline for a Functional Taxonomy of Annotation
    Allen Renear, Steve DeRose, Elli Mylonas (Scholarly Technology Group, Brown University) and Andries van Dam (Computer Science Department, Brown University)

    First Draft Feb 22, 1999. Last Revision Mar 5, 1999. Presented at Microsoft Research, Redmond WA, April 1999.

    About this Document
    This document presents an outline for a functional taxonomy of document annotation. The question that structures the construction of this taxonomy is "what is the annotator doing?" - that is, we classify annotations according to their function, objective, or purpose.

    Status: This document is an outline for a more comprehensive treatment; we have briefly presented some of the main ideas in order to initiate discussion, but in this document they are not developed fully or explicitly related to existing work.

    Contact: For more information about this document or its contents contact: Dr. Allen Renear, Director, Scholarly Technology Group, Brown University / Box 1841, Providence RI, 02912, v: 401-863-7312, e: [email protected]; or Dr. Steve DeRose, Chief Scientist, Scholarly Technology Group, Brown University / Box 1841, Providence RI, 02912, [email protected]

    The Brown University Scholarly Technology Group (STG) is an applied research and development group specializing in electronic document design and markup systems. STG Technical Note (Annotation)