Web Resource Changes Monitoring System Development


Lyubomyr Chyrun [0000-0002-9448-1751] 1, Yevhen Burov [0000-0001-8653-1520] 2, Bohdan Rusyn [0000-0001-8654-2270] 3, Liubomyr Pohreliuk [0000-0003-1482-5532] 4, Oleh Oleshek [0000-0002-2222-8773] 5, Aleksandr Gozhyj [0000-0002-2641-5947] 6, Igor Bobyk [0000-0002-0424-1720] 7

1 IT Step University, Lviv, Ukraine
2,5,7 Lviv Polytechnic National University, Lviv, Ukraine
3,4 Karpenko Physico-Mechanical Institute of the NAS of Ukraine
6 Petro Mohyla Black Sea National University, Nikolaev, Ukraine

[email protected], [email protected], [email protected], [email protected], [email protected], [email protected], [email protected]

Abstract. Detecting changes in web content and notifying users about them in a timely manner is an important service both for the owners of web sites and for ordinary users. The system presented here automatically captures the content of a web page at a given moment of time and compares it with the version saved earlier. The system periodically parses the contents of publicly accessible files on a server according to a given pattern, compares them with the previously stored version and, if discrepancies are discovered, immediately notifies the user. Thus, the user does not have to monitor the specified Internet resources manually.

Keywords: web resource, content, SEO technology, information technology, web technology, changes monitoring, web server, intelligent system, operating system, web page, automated monitoring system, neural network, content downloader, automated monitoring, software product, monitoring system, trademark office, computer science, intelligent automated monitoring system, user interface, global network, internet resource, web service, web site, regular expression pattern, Apache HTTP Server, electronic content commerce system, server part

1 Introduction

Every year the information technology industry develops more and more rapidly. Smart systems for monitoring, analyzing and managing things are being introduced everywhere. However, the development of information technology is not limited to this area alone: all related industries are also developing at a rapid pace. To increase productivity and improve the quality of the final product, human labor is being replaced by automation [1]. A cycle emerges: the development of the IT sphere stimulates other sectors of employment, which in their turn stimulate the development of IT. That is why this process becomes more widespread every year. It is natural that, amid this rapid development, we want to keep up with new trends and to develop at the same pace. To meet this need, we must constantly "consume" and "absorb" a huge amount of fresh information from multiple sources. Unfortunately, our ability to do so cannot be improved simply by buying newer, more powerful computers [2]. Today it is not enough to read newly published books or magazines to keep a finger on the pulse of the modern world [3]. The main reason is the huge amount of resources needed to publish even one textbook, the most important of which is time: the time spent on the design, printing and distribution of the publication. Fortunately, we have the Internet for a more rational use of these opportunities, and it is exactly because of this that the global web has gained such popularity.

As the main source of information, the Internet opens up a multitude of data venues for us. At the same time, content sites try to attract users with every possible palette of web site design and functionality [4]. For this purpose, owners of Internet resources spare neither effort nor means. In the face of intense competition, there is a need to constantly maintain the quality of the product at the proper level [5]. To achieve this, it is necessary to invest not only in developing something new, but also in preserving what has already been built: protecting the site from unwanted changes and supervising the correct work of all its components [6]. One option is to design extensive data processing and a series of checks that protect a page against the injection of unauthorized code. But no single method, nor even a combination of them all, can guarantee one hundred percent security of the original data [7]. The logical decision is therefore not to chase blindly after an unachievable goal, but to take care of possible solutions to the inevitable problems as they appear [8]. An intelligent system of automated monitoring of changes in web resources serves this purpose.
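To make the workflow from the abstract concrete, the following is a minimal sketch of the core monitoring loop: download a publicly available resource, keep only the fragments that match a given regular-expression pattern, compare a digest of the result with the previously stored one, and notify the user if they differ. This is an illustrative sketch, not the authors' implementation; the URL, the pattern, the check interval and the notify() stub are assumptions.

```python
import hashlib
import re
import time
import urllib.request

URL = "https://example.com/page.html"               # assumed resource to watch
PATTERN = re.compile(r"<title>.*?</title>", re.S)    # assumed fragment of interest
INTERVAL = 300                                       # seconds between checks (assumed)


def fetch_fragment(url: str) -> str:
    """Download the page and keep only the fragments matching the pattern."""
    html = urllib.request.urlopen(url, timeout=30).read().decode("utf-8", "replace")
    return "\n".join(PATTERN.findall(html))


def fingerprint(text: str) -> str:
    """Compact digest used to detect any change in the monitored fragment."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


def notify(message: str) -> None:
    """Placeholder for an e-mail or messenger notification to the user."""
    print(message)


def monitor() -> None:
    stored = fingerprint(fetch_fragment(URL))        # the version "saved earlier"
    while True:
        time.sleep(INTERVAL)
        current = fingerprint(fetch_fragment(URL))
        if current != stored:
            notify(f"Change detected at {URL}")
            stored = current                         # the new version becomes the baseline


if __name__ == "__main__":
    monitor()
```

A full system would also persist the previous version and show the user the actual difference, not only the fact that something changed.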
2 Analytical review of literary and other sources

2.1 Subversion (SVN)

The first systems that provided tracking of changes in documents were version control systems [9]. One of them is Subversion, or SVN. It was officially introduced in 2004 by CollabNet and is a free, centralized version control system. It was built on the basis of CVS, but without its errors and disadvantages; at that time, CVS was the unofficial standard among developers [10].

Working with SVN is not much different from working with other centralized version control systems: users copy files from the central repository, work with these copies, and synchronize them with those kept in the main repository, commenting on the new edits. The ability of two or more users to work with the same files is a useful feature, provided by creating local copies of the original files. However, the product also offers a function for locking a particular file so that only one user can work with it at a time. Another smart decision in this system is the use of delta compression: not the initial and final contents of a file are saved, but only the differences between them [11] (a minimal sketch of this idea follows the feature list below).

The key features that SVN provides to its users are [12]:
- smart tracking of all changes in a given directory: file moves, renames and attribute edits are recorded;
- a convenient mechanism for synchronizing the proper files;
- a branch emulation procedure:
  ─ new branches are made by copying old ones and adding new edits;
  ─ differences are found automatically and unified when branches are merged;
- edits, no matter how large, are recorded in a single repository as atomic transactions;
- only the differences between files are transferred and processed, not the files themselves;
- binary and text files are processed equally well;
- two storage formats:
  ─ repository as a database;
  ─ repository as a set of regular files;
- the ability to write commit messages in any language;
- the ability to create a mirror copy of the repository.

Although the SVN system is based on CVS, there is little left of CVS in it.
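The delta-compression idea mentioned above (storing only the differences between two versions of a file rather than both full versions) can be illustrated with Python's standard difflib. This is only a sketch of the principle, not SVN's actual binary delta format, and the sample page fragments are invented.

```python
import difflib

old_version = "<h1>Price list</h1>\n<p>Item A: 10 USD</p>\n"
new_version = "<h1>Price list</h1>\n<p>Item A: 12 USD</p>\n<p>Item B: 7 USD</p>\n"

# Instead of storing both full versions, store only their difference.
delta = list(difflib.unified_diff(
    old_version.splitlines(keepends=True),
    new_version.splitlines(keepends=True),
    fromfile="page v1",
    tofile="page v2",
))
print("".join(delta))

# The newer version can be reconstructed from an ndiff-style delta,
# so keeping one full copy plus deltas is enough to restore any revision.
ndiff = list(difflib.ndiff(old_version.splitlines(keepends=True),
                           new_version.splitlines(keepends=True)))
restored = "".join(difflib.restore(ndiff, 2))   # 2 -> rebuild the "new" side
assert restored == new_version
```

Real Subversion computes binary deltas rather than textual diffs, but the effect is the same: each new revision adds roughly the size of the change to the repository instead of another full copy of the file.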
Here is a list of the major differences [13]:
- the handling of text and binary files was improved;
- the boundaries within which the system tracks changes were widened: it now controls not only files but also directories;
- the set of properties of each file was brought under version control;
- network traffic is saved when transferring partially modified files to the repository: only the changed parts are sent, not the whole files;
- the ability to lock files during work was added, so that nobody can make changes to them in parallel.

For a better understanding of what a version control system is and what possibilities it offers, see Fig. 1 [14].

Fig. 1. Visualization of a simple Subversion project

This figure shows the course of project development. It resembles the structure of a tree with a multitude of branches, which is exactly where the name of the mechanism for creating temporary versions of one software product comes from. The trunk of the project is marked in green and the branches in yellow. Branches are created during the development of additional functions of the system; if they are realized successfully, they are merged back into the trunk. This process is called synchronization and is indicated by a red arrow in the figure. Blue marks branches of the project that will no longer change, called tags. A discontinued development branch is marked in purple [15].

The process of user interaction with the SVN system can be divided into the following steps [16]:
1. Connect to a central repository.
2. Download the latest version of the project from the repository or create a new one.
3. If the project was created earlier, make the necessary changes to the working copy. If this is a new project, work with it as with any other project not tracked by the system.
4. Commit the accomplished actions and provide a textual description of what was done.
5. Transfer the completed commit to the repository.

Due to strict observance of this plan, a history of the project's development is created, with the possibility of returning at any time to a certain stage of development and going down a different path [17].

2.2 SiteLock

SiteLock protects web pages in real time [18]. This product was introduced in 2009 as the start-up of two programmers, Neville Feather and Scott Lovell, and eventually grew into an advanced company in the field of web security. The product was developed to expand the ability to protect resources built on the WordPress CMS. It extends the set of standard features with its own, which are useful both to web developers and to the owners of Internet resources. The main advantage of this software is the enormous variety of web site scanning tools that prevent most problems from appearing.