Exploring in Video Post Production

By introducing a task management concept to a HTML5 based Quality Control tool

Dennis Karlman

Dennis Karlman
Spring 2020
Degree Project in Interaction Technology and Design, 30 credits
Supervisor: Jan Erik Moström
External Supervisor: Peter Ahlström
Examiner: Ola Ringdahl
Master's Programme in Interaction Technology and Design, 30 credits

Abstract

Collaboration has always played a big part in our technological advancements and will continue to do so in the foreseeable future. The field of video post-production, which Quality Control is a part of, is quite large, involving many people and processes that can be hard to organize. One possible solution for structuring the workflow is to introduce an online collaboration tool in the form of a task management system. This thesis focuses on exploring which functionality and design should be implemented in such a system.

By using literature studies together with interviews, iterative prototyping and user testing, the results suggested that a task management system could be useful, as it would improve workflow and user experience. Some functionalities, like sorted lists, an automated reporting system, the dividing of assets, and gradually saving progress, were deemed exciting and potentially useful.

A final user interface based on said functionalities was created and user-tested. The flow and design of the interface gained promising results based on both flow tests and qualitative comments. A time-based comparative test showed that sorted lists and an automatic reporting system were 150% more efficient than not having them. The other suggested functionalities were also shown to be of interest, but further research needs to be done before conclusions can be drawn.

Acknowledgements

I want to thank Codemill for letting me do my master thesis work at their company, as well as my external supervisor Peter Ahlström and the other staff at Codemill who have helped me and guided me through this project. I would also like to thank both my supervisor Jan Erik Moström and my examiner Ola Ringdahl for their encouragement and guidance through the entire course.

Contents

1 Introduction
    1.1 The beginning of communication and collaboration
    1.2 Codemill and Accurate Video
    1.3 Objective

2 Background
    2.1 Video production and post-production workflow
    2.2 Coronavirus Disease 2019

3 Theoretical Framework
    3.1 Collaboration
    3.2 Task Management
    3.3 Accurate Video
    3.4 QC workers persona
    3.5 User Testing
    3.6 How to measure User Testing
    3.7 Remote user testing
    3.8 Interview techniques
    3.9 Prototypes
    3.10 Basic Design Principles

4 Methodology
    4.1 Literature study
    4.2 Interviews
    4.3 Codemill
    4.4 Prototyping
    4.5 User testing

5 Prototypes
    5.1 Prototype 1
    5.2 Prototype 2
    5.3 Prototype 3

6 Results
    6.1 Interviews
    6.2 Prototype 1
    6.3 Prototype 2
    6.4 Prototype 3 - Flow testing
    6.5 Prototype 3 - Comparative test
    6.6 Prototype 3 - Exploring functionality

7 Discussion
    7.1 Interviews
    7.2 User tests and prototype iteration
    7.3 Results discussion

8 Conclusions
    8.1 Future Work

References

A The Instructions for the test
    A.1 Link to the user test instructions
    A.2 Image of the user test instructions

B The QC report
    B.1 Image of the QC-Report

C Prototypes
    C.1 Link to the prototypes

D Flow testing instructions

E Comparative test
    E.1 Version without sorted lists

1 Introduction

1.1 The beginning of communication and collaboration

In the earth's distant past, more than 200 000 years ago, the dawn of humankind took place somewhere in the south of Africa [1]. It was here our species of the homo genus family, ”Homo Sapiens”, gazed upon the sun for the first time. One may think that one of our key features as a species, complex language, developed around the same time, but that is most likely not the case. Findings and experiments indicate not only that our ancestors gradually started to develop speech and language more than two million years ago, but also that language holds a central role in our ability to collaborate.

By investigating the impact of social learning on hominin lithic technology, Thomas J. H. Morgan et al. [2] concluded that the techniques to create stone tools were poorly transmitted through observation alone, and most likely required teaching through complex communication, i.e. language of some kind, to be successfully transmitted between individuals. Not only does this emphasize the importance of communicating with each other during a work process, but it also suggests that as a species, we are prone to collaborate and that collaboration holds a unique role in our evolution as well as our technological advancement. One can argue that without the ability to communicate and collaborate efficiently, we would not have the technological advances we have today.

Much time has passed since then, but communication and collaboration have remained, and will remain, essential parts of our technological advancement. One of the fields that has seen tremendous technological advancement in the last decades is the video post-production workflow [3]. Video post-production has specialized software designed for each specific area of the workflow, all the way from editing the captured video to quality control of the final product. As every step of this workflow involves large numbers of both files and system collaborators, the structure of the systems tends to be quite complicated.
With such complex systems, it is vital to find ways to structure the workflow as well as to let the collaborators of the system communicate effectively with each other, as this tends to make their performance more efficient. This thesis will focus on exploring how to introduce a system that adds both structure and collaborative capabilities to software specialized for quality control in video post-production.

1.2 Codemill and Accurate Video

Codemill is a digital product development and IT consulting company based in Umeå, Sweden, and has been since 2007. As of May 2020, they have about 60 employees but are continually growing. The company works with many international companies, mostly in the broadcast and media industries. Their primary focus and product is an HTML5-based post-production software tool called Accurate Video [4]. Accurate Video is often used as a Quality Control (QC) and Quality Assurance (QA) tool where users can work together with video content stored in the cloud. Accurate Video lets the user work with a zoomable timeline, perform audio QC and verify subtitles. The product's strength, however, lies in its modularity and customizable features. Depending on the users' needs, Codemill will change the content of the product to fit the requested demands [5].

1.3 Objective

This master thesis will focus on exploring how a collaborative task management system for video post-production can be implemented in a QC environment. The focus will be on researching what functionality should be included to create a satisfying experience for the user.

Research Questions

The main question for the thesis is presented first, followed by its sub-questions. Does the functionality of a sorted list and an auto-generated QC-report make the tool preferable over not having them?

(a) What other features could be preferable for a task management system in the context of a video post production QC-environment?

(b) How can a task management system for QC video post-production be designed and aligned aesthetically with an existing product?

2 Background

2.1 Video production and post-production workflow

Post-production is what occurs after the process of shooting the video. Depending on the type of content that is produced, the post-production process can differ a bit. There are, for example, differences between post-production for Hollywood movies and documentary filmmakers, but most of the workflow is still similar [3]. The main stages of video production are as follows [6]:

• Capture

• Prep

• Edit

• Conform

• Visual Effects (VFX)

• Color

• Sound

• Delivery

The following subsections will each focus on one of these stages.

2.1.1 Capture

The first step in making a film is to have the right equipment. The general rule is to opt for the camera that has the highest-quality codec [6]. Codec is the meshing of two words: coder and decoder (co/dec). Briefly described, it is the technique of transforming and compressing video and sound, and then decompressing them for editing or playback [7]. With a higher-quality codec, more information will be included in the captured data, which gives greater options regarding editing.

Figure 1: In the process of capturing [6]

After choosing the codec, a decision has to be made about whether to record in log or not. By using a logarithmic profile, the recorded material gains a wide dynamic and tonal range, which allows more options for applying colour and style choices in the editing stage. The downside is that the resulting image appears washed out and requires colour grading in the editing stage, but it highlights details and retains shadows that otherwise would be lost using a linear profile [8].

2.1.2 Prep

When the shooting of the video is completed, the next step in the workflow is Prep. All the data is now stored on memory cards or Solid-State Drives (SSDs). More often than not, these files are also stored on backup hard drives so as not to lose the material. To make sure that the copied files are still identical to the original, the camera operators use a checksum. A checksum is produced by a program that puts the file through an algorithm using a cryptographic hash function, producing a string of numbers and letters of a fixed length. If two files contain the same information, their checksums will match, and the checksum can thereby be used for verification [9] [6].
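The checksum procedure described above can be sketched in a few lines. This is a minimal illustration using Python's standard hashlib module with SHA-256; the actual tools used on set (and their hash functions) vary, and the function name here is our own.

```python
import hashlib

def file_checksum(path: str, chunk_size: int = 65536) -> str:
    """Compute a SHA-256 checksum of a file, reading it in chunks so
    that large video files never need to fit in memory at once."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read until EOF; each chunk updates the running hash.
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    # A fixed-length string of letters and digits (64 hex characters).
    return digest.hexdigest()

# Two files are considered identical copies if their checksums match:
# file_checksum("original.mov") == file_checksum("backup.mov")
```

Because the hash is cryptographic, even a single flipped bit in the backup copy yields a completely different string, which is what makes the comparison trustworthy.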

Dailies

Dailies is the term used for all of the footage that was shot during a given day. The digital imaging technician (DIT) preps these dailies and then sends them off to the producers and to the editors, who can begin to create assembly edits [6]. In other words, dailies are raw, unedited footage shot during the day and have no cuts at all [10].

However, before the DIT takes the dailies, some important tasks must first occur. The production of a feature film will generate many thousands of individual video and audio files, and these must be appropriately organized since the camera does not do this automatically. A folder structure system is often used, with the files renamed so that it is evident in which order they belong, even if they get moved out of the correct folder. The sound must also be synced and organized in a similar manner, since it often is recorded separately. Metadata must also be organized and transferred, since it can be valuable for the rest of the post-production process. The camera itself can capture metadata, and some metadata is recorded separately. Examples of metadata are such things as lens information, shutter speed, date and timecode [6].

The DIT's next job may be to transcode the footage, since the codec used may not be easy, or even possible, to play back on an ordinary computer. After this, if the footage was log-recorded, the Director of Photography (DP) may apply a Look Up Table (LUT) to the footage so that the producers can see an image that resembles the DP's vision as closely as possible. Then the files are sent to the reviewers (director, client, producer) and the editorial team [6].

A LUT holds a preset of numbers that are used by the software in order to change the colours of the images deliberately. Since log recording often washes out the colours in favour of an extra-detailed image, LUTs are often used to quickly create a predetermined colour grade of the recorded footage [11].
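The idea that a LUT is ”a preset of numbers” can be made concrete with a toy sketch. Production LUTs are usually 3D tables stored in formats such as .cube; the simplified 1D version below, built with NumPy, only illustrates the principle that every input code value is looked up and replaced. The gamma curve used here is an invented stand-in for a real grade.

```python
import numpy as np

def apply_1d_lut(frame: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Apply a 1D look-up table to an 8-bit image: every input code
    value (0-255) is replaced by the value stored at that index."""
    return lut[frame]

# A toy "contrast-restoring" curve for washed-out log footage: values
# below a lift point are crushed to black, the rest pass through a
# gamma that darkens mid-tones. Purely illustrative numbers.
values = np.arange(256, dtype=np.float64) / 255.0
lut = (np.clip((values - 0.1) / 0.8, 0.0, 1.0) ** 1.8 * 255).astype(np.uint8)

frame = np.full((4, 4, 3), 128, dtype=np.uint8)  # a flat mid-grey frame
graded = apply_1d_lut(frame, lut)                # mid-grey gets darker
```

The look-up itself is a single array-indexing operation, which is why LUTs can be applied to footage in real time during playback.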

2.1.3 Edit

After the dailies arrive (either on a hard drive or transferred online), the assistant editors ingest the files and ensure that all the video, sound and metadata files are included and synced. After that, the post-production may begin. Since editorial teams often consist of more than one person, the files are often transferred to shared storage, thus enabling collaborative work. This allows multiple persons to work on the same project at the same time, while preventing two persons from working on the same file simultaneously [6].

Stages involved in the editing process

Since the type of film can vary, so can the number of stages and the time spent in editing. A feature film may take up to six months to edit, while a short film may only take a couple of weeks. In larger productions, the editorial team follows the production team closely in time to make an assembly edit that includes all the scenes in the script. By doing this, any potential issues with a shot may be detected, and the production team can do a reshoot before they leave the location.

After the assembly process, the rough cut takes place, where the director works closely with the editor to trim and tweak the assembly edit. The point of having a rough cut is to see how the story works and whether reshoots are needed. When the rough cut is done, it is usually shown to the producers for feedback. It is also in this stage that different cuts like the ”director's cut” are created. After this, the picture is ”locked”, meaning that no more editorial changes can occur and that it is ready for the next step in post-production. Now the colour, sound and VFX teams can begin working on it to finalize the look of the picture [6].

Visual effects (VFX) is the process where imagery is manipulated or created outside the context of a live-action shot during post-production. More often than not, this includes computer-generated imagery (CGI), which uses digital effects to create environments that otherwise would be costly, dangerous or impossible to capture on camera [12].

Distributed editing, review and feedback

It is now ubiquitous across all types of filmmaking that the different phases of post-production happen in multiple locations. The editorial team may, for example, be placed in New York, while the colour correction occurs in Chicago and the VFX in Stockholm. With a remote workflow like this, it is vital to keep track of the files and keep them in sync to collaborate effectively between the teams (as well as within them). The internet has proved to be helpful in doing this, as editors can ”lock” the file they are working on, thus preventing another user from working on the same file simultaneously.

This new way of working has helped the editor with the review process as well. For a long time, the only way for an editor to receive feedback from the director was to have him or her come directly to the editing room. With online feedback, this has changed. The standard for a long time has been to upload the reviewed video to a storage site or similar, then send it back and forth among all the collaborators. This procedure enables remote collaboration, but it can be tedious for the editor to manage and process all the feedback, especially if the number of collaborators is large. More and more online services [13] are trying to change this workflow by centralizing their creators and collaborators to one platform, where creators can both share content and receive organized feedback, all in one location. This allows both distributed and asynchronous feedback online, where the reviewers can leave their comments from different locations as well as at different times, allowing each reviewer to give feedback when it is most convenient for them [6].

2.1.4 Conform

When the edit is locked, it is time to move to the next step in post-production. Conform is the process of transforming the entire project into a format that the colour, sound and finishing teams can use in their software. This stage often involves simplifying the timeline, relinking both audio and video files to the original camera files, etc. However, how do they keep track of the entire project? The most common way of doing this is to export a reference video, which is used for checking that everything was transferred properly [6].

2.1.5 VFX

As previously stated, VFX stands for ”Visual Effects” [12] and is becoming more and more common in motion pictures as the tools required have become cheaper and easier to use. There are many different kinds of VFX techniques, where one of the most fundamental is compositing. Compositing is the process of combining multiple images so that they appear to occur in the same shot. It can often involve having two people standing next to each other when, in reality, neither of them was in the same room at the same time. The editorial team is responsible for selecting the takes that will combine live action with synthetic elements. These are later supplied to the VFX team, thus enabling them to begin their work [6].

2.1.6 Colour

After the editorial and VFX teams have done their jobs, the colour correction begins. The process of colour correction involves enhancing and adjusting the visual attributes of the film, including colour balance, exposure, etc. Some colour correction has already been applied in the earlier stages of the process, but the colour team's job is to finalize the film's appearance according to the DP's vision. A common way of approaching this is to have the colour team inspect the LUTs crafted by the DP as a guideline for the finalized look [6].

Figure 2: Choosing LUTs for the film [6]

2.1.7 Sound

Sound, while not visible, still plays a large part in the experience of watching a film. The work done in this stage can heighten the immersion, excitement and atmosphere of the story. This stage also ensures that the project's content meets the technical specification for the intended platform. Some examples of work done in this stage are organizing audio tracks into categories (e.g. ”dialogue” and ”music”) and editing the dialogue and music tracks to match the project's intended composition [6].

2.1.8 Delivery

In this final step of the process, the media is packaged and optimized for its intended medium: broadcast, web, theatrical, etc. Since file-based delivery has become an industry standard in recent years, the files' specifications must meet the intended format. If a failure occurs in this stage, the delivered files may produce picture artefacts, sound hiccups and more during playback [6].

Quality Control

Quality Control, or ”QC” as it is often referred to, is a method of controlling that the deliverables meet the sought-after requirements. QC is not only applied to the visual aspect of the film; it also includes checking for technical issues in audio and video levels, file metadata, a proper timecode, et cetera. Figure 3 shows one of Codemill's products that specializes in performing audio QC.

In many types of projects, QC is done using an automated process, where specialized software performs Quality Control over the media and generates a QC-report with issues and corresponding timecodes. It is also essential to note that visual QC can be very subjective and may need a human to confirm the issues before sending the QC-report [6].

Figure 3: Accurate Video Audio QC

One example of this kind of specialized software for QC uses machine learning and Artificial Intelligence and supports a combination of automated and manual QC checks. Some parts of the automated procedures include a quality check of subtitles, license verification, loudness detection, identification of audio language, etc. The software also has the option of exporting the results in multiple formats, including PDF and Excel [14].

In large productions, quality control can involve hundreds of potential issues. Failing a QC can, therefore, be both expensive and time-consuming, since this means time and money spent on correcting the failure. To prevent failure from happening, the media files should be reviewed thoroughly in previous stages [6]. Some examples of common QC flags include:

• Visible production equipment

• Dead/bad pixels

• Incorrect timecode and frame rates

• Shifts in luminance

• Errors in compositing

• Artifacts due to compression of the files

• Video out of focus (blurry images)

• Aliasing errors

• Poor colour or grading

• Low level audio

• Unintended humming or buzzing sounds

• Video dropout

• Freeze frames

• Photosensitive Epilepsy Testing (PSE)

• A/V sync (Lip Sync)

• Video signal level

• Maximum true peak (audio) level (dBTP)

All checks stated above should be performed during this stage and given a pass or fail. A pass means that the check meets the test criteria; a fail means that it does not. Failed checks have to be divided into categories depending on their seriousness:

• Mandatory - These must be corrected before the video can be accepted for delivery, since it otherwise cannot be shown to an audience. They can be technical issues as well as legal issues. ”PSE” and ”Video signal level”, among others, are included in this category.

• Technical Warnings - These are technical problems that should be fixed, but do not prevent the video from being shown. ”A/V sync (Lip Sync)” is an example of such an issue.

• Info/Editorial Warnings - These are not technical issues, but can indicate a problem that can disturb the viewer's enjoyment. ”Poor colour or grading” and ”Video out of focus” are two examples of such issues.

The types of checks can differ between projects and between the companies producing the film. Regardless, all checkpoints should be included in the QC-report, with timecodes corresponding to the issues [6] [15].
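The pass/fail categorization described above amounts to a simple lookup. The sketch below encodes it in Python; the flag-to-category assignments follow only the examples given in the text, and the rest of the mapping (as well as the default category for unknown flags) is an assumption, since real mappings vary per project and company.

```python
# Hypothetical mapping of QC flags to failure categories, based on the
# examples named above. Real productions define their own tables.
SEVERITY = {
    "PSE": "Mandatory",
    "Video signal level": "Mandatory",
    "A/V sync (Lip Sync)": "Technical Warning",
    "Poor colour or grading": "Info/Editorial Warning",
    "Video out of focus": "Info/Editorial Warning",
}

def classify(check: str, passed: bool) -> str:
    """Return 'Pass' for a passed check; otherwise the failure
    category the check belongs to (assumed default: editorial)."""
    if passed:
        return "Pass"
    return SEVERITY.get(check, "Info/Editorial Warning")
```

A delivery pipeline could then refuse any asset whose classified results contain a ”Mandatory” entry, while merely logging the two warning categories.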

2.2 Coronavirus Disease 2019

Because of the impact this virus has had on the world, and on this master thesis, some information about it has to be mentioned.

2.2.1 Impact on this master thesis

Since the outbreak started in Sweden, restrictions have been gradually implemented to slow down and prevent further spread of the disease among the population. People in general have become more reluctant to meet, partly because of the contamination risk, but also because they need to focus more on keeping their businesses running. This has negatively affected the opportunities to perform interviews on the designated topic. It has also affected the work environment, since most of Codemill's staff has been working remotely since March, which has obstructed our collaboration since it constrains the possibilities to interact with each other directly. It has also prevented interviews, and user testing, with any QC workers, since everyone has declined while referring to the situation.

2.2.2 Background

The virus named ”SARS-CoV-2” causes a disease called ”coronavirus disease 2019” (COVID-19). The outbreak was identified for the first time in the city of Wuhan, China, in December 2019. The World Health Organization (WHO) recognized the outbreak as a pandemic on 11 March 2020 [16]. A pandemic implies an epidemic disease that has spread to multiple continents, or worldwide [17].

2.2.3 Source and spread of the virus

SARS-CoV-2 is a betacoronavirus and has its origin in bats. Since the epicentre of the outbreak has been linked to seafood markets and live animal markets in Wuhan, it is thought to have started as a zoonotic virus. The virus has since evolved the ability to be transferred directly between humans [16]. The virus spreads between people in a similar manner as influenza does, via respiratory droplets from exhaling or coughing. People tend to be most contagious when they are symptomatic, though some spread can occur before the onset of any symptoms. The infection can be either asymptomatic or flu-like in its appearance, with non-specific symptoms such as cough, fever and shortness of breath. The incubation period ranges between one and fourteen days, with an average of five days. The disease can be deadly, but mostly for the elderly: 80% of the deaths were in those over 60 years old, and almost all of them had pre-existing health conditions [18].

2.2.4 Preventions

As a preventive measure, the Public Health Agency of Sweden recommends avoiding close contact with symptomatic people and also urges avoidance of touching one's face and eyes. It is also recommended to wash one's hands frequently and to stay at home if any symptoms linked to COVID-19 are shown. Working from home is also encouraged [19].

3 Theoretical Framework

This chapter will provide a theoretical background and overview of subjects that are essential for this thesis.

3.1 Collaboration

What is collaboration? The Cambridge Dictionary defines the word itself as ”The situation of two or more people working together to create or achieve the same thing” [20]. Collaboration can also be defined as the interdependence of the group participants as they share unique experiences and ideas. The end result is better than any of the involved individuals could have obtained by working alone [21].

Collaboration in businesses is, in general, essential, as inter-organizational collaboration leads participating parties to invest resources, achieve goals mutually, share information and resources, make shared decisions and solve problems together. It is an excellent way of allowing better communication within the organization, and it allows a way of coordinating ideas from many people to generate a wide variety of knowledge [22].

In the last decades, virtual collaboration has become very important for business success in the market. According to Chen, Volk and Lin [23], it allows company employees to work together without being limited to a geographical location, as they can be spread around the world. With such ”virtual teams”, clear and concise goals are critical, and so is the ability to communicate effectively. Without clear and continuous communication, the team may fall apart and not reach its goals. Thus, it is not hard to see the value of communicating effectively as a central part of the process of collaborating effectively. As Prabhkar [24, p. 32] mentions, though collaborators may use many different channels for communicating with each other, a dedicated messaging application can make the management of messages easier, enabling users to communicate through one single repository instead of using a diversity of applications.

3.1.1 Online Collaboration

When the internet was introduced, a new way of collaborating between members of a group or organization presented itself, namely online collaboration. Online collaboration can be defined as the process of connecting users digitally to communicate in an online space [25], thus allowing members of a team to share information and work together even though they are geographically dispersed. Online collaboration is often divided into two categories: synchronous and asynchronous.

Synchronous Online Collaboration

Synchronous online collaboration occurs when the users work or do their tasks together at the same time. To be able to do this, specialized software designed with this intention in mind is required. This type of online collaboration allows users to edit documents or information in real time. Examples of work that can be done are editing a text document together or making drawings on a virtual whiteboard [26].

Asynchronous Online Collaboration

Asynchronous online collaboration occurs when the users work on the same project but at different times. One of the most famous examples is email, which allows people to communicate with each other without needing to respond immediately to the other party. Online collaboration tools that focus on workflows are also a good example of asynchronous collaboration, as they can route files and documents through organizations based on a fixed process [26].

Online Collaboration Tools

Since online collaboration started to appear, designated tools have been developed to increase the efficiency between collaborators. Online collaboration tools can give many benefits if introduced correctly into an organization [27]:

• Saved time - When employees collaborate efficiently, time is saved since the end goal is reached faster.

• Improved project management - A structured collaboration tool improves commu- nication and minimizes the risk of mistakes caused by faulty management.

• Better organization - By structuring the workflow, a designated collaboration tool can help by keeping the project better organized.

3.2 Task Management

A task management system, which is a type of online collaboration tool, can be used by an individual, a team or an entire organization to help complete projects more efficiently by organizing and prioritizing tasks [28]. Task management tools help people to:

• Work efficiently and reduce waste

• Ensure teams and individuals are being utilized in the correct way

• Stay organized

• Meet deadlines

3.2.1 Key components of a Task Management Tool

At the most basic level, a task management tool is used to help its users stay organized. Part of being organized includes setting priorities for tasks, making the progress of tasks visible as they pass through stages of completion, and compiling reports or analyses to inform future tasks and workflows [28].

Prioritization

A task board should be present, which lets the user organize the tasks by priority to ensure that the most critical tasks are completed first. This enables the user to plan how work should be attacked, rather than jumping from one task to the next without any direction.

Visualization

By visualizing the tasks, the user is not only reminded of what to do, but also given a clearer picture for understanding the project as a whole. When every part of a project is laid out in a way that is easy to comprehend, the individual dependencies become apparent, and collaboration follows naturally.

Analysis

A task management tool should always provide some form of analysis, allowing the end-users to grasp either what they are doing or what they have done.

3.2.2 Benefits of an online Task Management Tool

By choosing an online task management tool, users can collaborate remotely over an internet connection and store the information in the system either in the cloud or on servers. The workloads can be managed and organized, letting the users know which tasks need to be done and in which order [28].

3.2.3 The anatomy of a task

When the objective is to finish a product or a service, tasks come into existence. Tasks can be standalone, like ”buy this cereal”, or part of a much larger project. Tasks are usually elements of a project that consists of many different tasks which all have to be completed before the project itself is completed. To create, manage and complete tasks, a task management tool is often used. A task often contains some typical elements [29]:

• Title - to recognize the task

• Description - provides more detail on what needs to be done

• Due Date - when the task needs to be finished

• Assignee - the person that works on the task

• Dependencies - if the task is related to other work in the project

• Priority - To know which task should be worked on first

• Completion rate - on long tasks, shows how much work has been done

• Completed work - The ability to mark the task as completed
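The elements above map naturally onto a record type. The sketch below is a hypothetical Python representation of a task; the field names and the convention ”1 = highest priority” are our own choices, not a prescribed schema. It also shows the prioritization idea from Section 3.2.1: a sorted list puts the most critical tasks first.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class Task:
    """One task in a task management tool; fields mirror the typical
    elements listed above. Names and types are illustrative."""
    title: str
    description: str = ""
    due_date: Optional[date] = None
    assignee: Optional[str] = None
    dependencies: list = field(default_factory=list)  # titles of related tasks
    priority: int = 3             # 1 = highest priority
    completion_rate: float = 0.0  # 0.0-1.0, useful on long tasks
    completed: bool = False       # mark the task as done

tasks = [
    Task("Verify subtitles", priority=2),
    Task("Check PSE", priority=1),
    Task("Review colour grade", priority=3),
]

# Prioritization: sort so that the most critical task comes first.
tasks.sort(key=lambda t: t.priority)
```

With a structure like this, the visualization and analysis components of a tool reduce to rendering and aggregating these records (e.g. counting how many tasks are completed per assignee).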

3.3 Accurate Video

One of Codemill's products, Accurate Video, is an HTML5, web-based video platform created for post-production, broadcast and media professionals. The product is built in Angular [30] and can be integrated with Amazon Web Services [31] to provide cloud-based storage, and with a machine learning and artificial intelligence enabled platform called Baton [14] to automate the workflow. Accurate Video can be used to both validate and ensure the quality of the video, audio, metadata and subtitles [32]. As stated in the introduction, Accurate Video is modular, which means it can be configured to fit the company's needs. At the moment, there exist three different versions of the software:

• Accurate Video Visual QC

• Accurate Video Audio QC

• Accurate Video Subtitle QC

Each one is specialized in its own field. This work has been centered around Accurate Video Visual QC, whose interface can be seen in Figure 4. This version can also do QC on both audio and subtitles, but to a lesser degree than its counterparts.

Figure 4: Accurate Video 1

To further understand what is shown in Figure 4 and to understand the product, some key features that it has will be discussed:

Timeline

The timeline, shown in Figure 5, is frame accurate and shows the current position in the video. A frame-accurate timeline means that the user can access any frame of the content and can visualize time-based metadata.

Markers

Markers are used by the QC-worker to highlight issues found in the video or its assets. Created markers can be found in rows in the timeline, as shown in Figure 5. Each marker has a:

• Name - specifies the issue

• Description - describes the issue in greater detail

• In point - where in the timeline the issue started

• Out point - where in the timeline the issue ended

• Track - a track which ought to be checked, for example ”Black Frames”

• Severity - based on how severe the issue is, can be set to ”Info”, ”Warning” or ”Serious”, with color-coded visualization in the timeline
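The marker elements above can likewise be sketched as a data model. The TypeScript below is an illustrative assumption, not Accurate Video's actual data format; in particular, the helper that filters markers for a report is hypothetical:

```typescript
// Illustrative sketch of a marker as described above; names are assumptions.
type Severity = "Info" | "Warning" | "Serious";

interface Marker {
  name: string;        // specifies the issue
  description: string; // describes the issue in greater detail
  inPoint: number;     // position in the timeline where the issue started
  outPoint: number;    // position in the timeline where the issue ended
  track: string;       // the track being checked, e.g. "Black Frames"
  severity: Severity;  // drives the color coding in the timeline
}

// Hypothetical helper: collect the markers that might go into a QC report,
// assuming (for illustration only) that plain "Info" markers are excluded.
function reportableMarkers(markers: Marker[]): Marker[] {
  return markers.filter((m) => m.severity !== "Info");
}
```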

When the quality control is done, there are often many markers placed in the timeline, which are used to report issues one step back in the video post-production chain. To do the report, a specified QC-report is often used, which is mailed to the concerned party.

1Codemill internal

Figure 5: Accurate Video 2, current Timeline position displayed in yellow circle 1 and Marker position in red circle 2

Audio- and Subtitle control

The user can verify audio by isolating channels and playing multiple tracks in sync. Subtitles can likewise be verified by playing multiple tracks in sync. This functionality is not displayed in any figure.

AI integration

Different Artificial Intelligence (AI) and Machine Learning (ML) systems can be integrated into Accurate Video to further increase the automation in quality control. Baton [14] is one of these products; it uses AI to check the files automatically and places markers where the program deems it needed. It can, for example, place markers on every black frame or wherever the audio exceeds a set threshold value, depending on the company’s preferences. It can also be used to automate the QC-report to some degree. A workflow scenario for a QC-worker that uses an AI-integrated system in Accurate Video can consist of:

1. Ingesting the files slated for QC.

2. Let the AI integrated system process the files.


3. Manually check the results and correct unnecessary markers and add markers where deemed needed.

4. Generate a QC-report and send it to the concerned party.

This functionality is not displayed in any figure.

3.4 QC worker personas

To understand more of the nature of a typical QC-worker, two personas gained from internal research at Codemill 4 will be presented in Figure 6 and Figure 7:

Figure 6: Persona 1 for QC worker in video post-production

4Codemill Internal Research

Figure 7: Persona 2 for QC worker in video post-production

The personas in Figure 6 and Figure 7 are not real people, but rather conglomerations of data about different QC-workers, gained from interviews and similar sources, used to create two fictional persons that represent a larger group.

3.4.1 Personas

Personas are fictional characters, created based upon research and data collection in order to represent the different users that might use a service or product in a similar way. Creating personas can help the developer recognise that different people have different needs and expectations, as well as help the developer identify with the user. Personas can make the design task less complicated, as they can guide the ideation process and help the designer create a good user experience for the target user group [33].

3.5 User Testing

What is user testing, and why do we need it? According to Weber: ”User testing is when you analyze and measure the user experience of a product as a whole, or sometimes just a portion of it” [34]. It is, in other words, about testing and quantifying how someone uses a product. By letting users perform specific tasks with a product, the interviewer can find errors and areas for product development, as well as gain insights into the quality of the experience for the user. User testing is not limited to testing products; it can also be used to test interfaces and new or existing features, among other things. Usability tests should always be considered when developing new products, since they help discover usability and design issues before release. According to Foggia [35], when a designer is working on some assignment, he or she may be so close to the solution that they may not realize that something could be improved, which may come forth by doing user testing. Some of the objectives of doing user testing are:

• Gain insight from the users

• See if we meet users’ expectations

• Check if the user can perform the tasks that are proposed

• Find out if the product is on the right track

• Get users’ reactions and feedback

By doing user tests, the designer can hopefully realise what could be improved and make changes accordingly. However, how can user tests be done?

3.5.1 The number of participants to test

Since user testing can be both time-consuming and costly, the tester usually needs to plan how many people should be tested. The answer is that as few as five persons during each iteration can be enough. According to Nielsen [36], if a design contains 100% of the usability problems to be found, the proportion of usability problems discovered while testing a single user is 31%. This means that after collecting data from a single user, the tester has already learned almost a third of all there is to know about the usability of the design. When a second user test is conducted, that person will do some of the things that the first user did, so there is always overlap in what the tester learns. Because people are different, there will probably be something new that the user did or felt that was not observed with the first user; thus more insight about usability problems will be gained, but not nearly as much as with the first user. When the third user test is conducted, even less new information will be gained, since most of the things that the user does have already been done by the first and the second user. Thus, by adding more and more users, the tester will learn less and less new information each time the test is conducted. With this information, Nielsen concluded that testing five people is enough to gain information about almost all of the theoretical usability problems; conducting more user tests after that does not lead to much more new information. If the tester can conduct more user tests, it is better to iterate the design and have multiple studies with five users each than to have a single study with many users.
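Nielsen's argument can be expressed with a simple formula: if each additional user uncovers about 31% of the remaining undiscovered problems, the proportion of problems found after n users is 1 − (1 − 0.31)^n. The short sketch below is an illustration of that formula, not code from any cited source:

```typescript
// Nielsen's model of diminishing returns in user testing: the share of
// usability problems found by n test users, assuming each user uncovers
// L = 31% of the problems still unknown to the tester.
function problemsFound(n: number, L = 0.31): number {
  return 1 - Math.pow(1 - L, n);
}

// problemsFound(1) = 0.31, problemsFound(5) ≈ 0.84 — five users already
// reveal roughly 85% of the problems, after which the curve flattens.
```

With L = 0.31, five users already find about 84% of the problems, which is the basis for Nielsen's five-user recommendation.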

3.6 How to measure User Testing

Apart from subjective comments and satisfaction, there exist some different varieties of user testing measurements. According to Nielsen [37], these types exist:

• Success rate - Whether the users can perform the task at all

• The time a task requires

• The error rate

In the subsections that follow, a more in-depth understanding of how each measurement can be implemented will be discussed.

3.6.1 Success Rate

In a user test scenario, a user may be given instructions to find a specific item or a particular place in a web-based design. If the user can reach the goal of the task, then it is deemed a success; if not, then it is a failure. The user success rate tells the percentage of tasks that users complete correctly [38]. The success rate says nothing about why a user failed or how well they performed the tasks; it just tells whether the task was performed. The person conducting the test can also implement a third option, namely partial success. Partial success can be given if the user did perform the task but had to be given clues to find the goal, or clicked around too much in the design before the goal was achieved (thus generating errors). By summarising the results, the tester gets a percentage of how well the users could perform their tasks. According to Nielsen, the average website often scores less than 50% the first time a user is introduced to the site, so the tester should not be let down if the success rate is not close to 100% [38].
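As a sketch of how such results can be summarised, the function below computes a success-rate percentage from a list of per-task scores. The weighting of a partial success as 0.5 is a common convention but an assumption here, not a rule stated in the source:

```typescript
// Aggregate success rate over a set of task attempts. Each score is
// 1 (success), 0.5 (partial success) or 0 (failure); counting a partial
// success as half a success is an assumption, not specified in the source.
function successRate(scores: number[]): number {
  if (scores.length === 0) return 0;
  const total = scores.reduce((sum, s) => sum + s, 0);
  return (total / scores.length) * 100; // percentage of tasks completed
}

// Example: two successes, one partial and one failure give 62.5%.
// successRate([1, 0.5, 0, 1]) → 62.5
```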

3.6.2 Qualitative Comparative Usability Testing

In comparative usability testing, two different versions of a design are tested against each other to find out which one is preferable according to some measurement: the time it takes to conduct the given task, task-completion rates, or something similar. To obtain valuable opinions of each design, it is encouraged to let the participants think aloud during the test, as well as to allow them to leave comments afterwards. Much information can be gained by letting the participants discuss what differences and advantages they thought each design had compared to the other [39].

3.7 Remote user testing

If the person conducting the tests wants to achieve everything above but cannot meet the users in person, can user tests still be achieved? Well, yes. According to Schade [40], remote usability tests are very similar to traditional usability tests, with the key difference that the participant and the tester are in two different physical locations. It is often recommended to do in-person user testing because it is easier for the person conducting the test to read the user’s body language and to recognize the appropriate time for either follow-up questions or probing. However, in-person user testing cannot always be done. The user may be located far away geographically, or there may be other constraints that make an in-person usability test impossible to conduct. Moreover, it is always preferable to do remote testing instead of skipping the tests altogether. To do successful user testing, whether it is conducted remotely or not, Gorski [41] set up a 5-step process:

1. Define your goals and target group

2. Recruit testers

3. Write a test script, define the setup and run a pilot test

4. Run the test sessions

5. Summarize what you have learned

Each of these steps will be discussed in further detail.

Define your goals and target group

The first step is always to define which tasks ought to be tested, as well as which target group and scenarios should be considered.

Recruit testers

The tester ought to look for volunteers that fit the desired profile. If the design that is to be tested is made for a specific target group, then the test participants should belong to that group. As Nielsen stated [36], five participants per test are deemed enough to gain insight into crucial usability issues.

Write a test script, define the setup and run a pilot test

A test script is helpful for the tester to structure and facilitate the test session. These are often constructed in three parts:

• Contextual questions - to help the testers feel more relaxed and comfortable, as well as gaining more information about the user.

• Scenarios and tasks - explain the scenario for the user and let them perform the task.

• Questions about the experience - to gain more information regarding the design and test.

Depending on which platform the product will be tested on, a minimal setup is required. For desktops, using a tool that offers screen sharing and recording is recommended. To test a clickable prototype, create it in a software that allows the user to open it via a link [42]. After the script is done and the test environment is set up, it is preferable to run a pilot test to validate the script and the chosen method of conducting the test. It is crucial to perform this stage, since it helps the conductor discover errors that might otherwise occur during the real tests and affect the results.

Run the test session

While following the test script, encourage the users to ”think aloud” and describe what they are thinking about while executing the given task. Encourage users to be as critical as possible so that the feedback will be as honest and valuable as possible. Apart from questions about the experience, the tester can gain information using the task success rate [38] and the time needed to execute each specific task.

Summarize what you have learned

Summarize and analyze the information gained from the test to extract insights for the subsequent development of the product.

3.8 Interview techniques

The main purpose of the interviews was to gain information on how quality control is performed by the operator in a video post-production environment.

3.8.1 Type of interview

According to Wilson, there are three types of interview techniques that can be used [43]:

• Structured Interviews

• Semi-Structured Interviews

• Unstructured Interviews

All of these techniques have their strengths and weaknesses in a given situation. Structured interviews are often performed as a verbal questionnaire where the interaction is limited by a script that contains a fixed set of questions. This kind of interview is good to use when the interviewer needs standardized answers, which is useful for obtaining general information about, for example, demographics. This method is, however, not that great when exploring a new field, where the interviewer might need to take the interview in different directions depending on the answers. Unstructured interviews are conversations with users where there is a general topic, but no specific questions or predetermined interview format. This interview technique is often used to gather rich, in-depth data about the user’s experience without enforcing restrictions on what they can express. Unstructured interviews are good to use when gathering data on general themes and developing insight about the user’s interactions with technology, but they should not be used if the interviewer has specific questions that he or she would like answered. Semi-structured interviews combine the predefined questions of structured interviews with the open-ended exploration of unstructured interviews. The interviewer often has a fixed set of questions but is not limited to them and can change the flow of the interview depending on the user’s answers. This interview technique is commonly used when there is some knowledge about the topic, but further details are still needed. The interviewer should use this technique to gather information about tasks, task flow and facts, as well as to understand user goals. In this paper, the semi-structured interview was chosen, since some knowledge about the area was gained through literature studies, but further data was deemed necessary to understand the workflow in video post-production.
On top of this, the structure of the interviews was based on Hall’s three boxes [44]:

• Introduction - Describe the purpose of the conversation and the topic.

• Body - Ask open-ended questions that encourage the subject to talk freely and use the prepared questions more as a checklist than a script.

• Conclusion - As soon as you have the wanted information, wrap up the interview.

3.9 Prototypes

3.9.1 The definition of a prototype

A prototype is an early model, sample or release of a product created with the intent to test a concept or a process. Typically, a prototype is used to evaluate a new design or function that ought to be implemented in a later stage of the development. The purpose of a prototype is to have a tangible model of the solutions defined and discussed by the designers during the concept stage. Instead of going through the entire design cycle based solely on a supposed solution, prototypes allow the designer to validate a concept by putting an early version of the solution in front of users and collecting feedback as quickly as possible [45]. While prototyping often explores the interpretation of a new design for a user interface, it is not limited to that aspect. In software development, prototypes can be used to gain information and knowledge about the following dimensions [46]:

• System Performance - Resource usage of the system

• System functionality - External behaviour of the system

• User interface - Presentation and user interaction aspects

3.9.2 Prototype iteration

As Goldman and Narayanaswamy state: ”The process of developing and evolving complex software systems is intrinsically exploratory in nature” [46]. In the development of software, some prototyping activity is, therefore, more or less inevitable in every step of that process. A prototype is developed as an approximation of an envisioned target system along one or more of the dimensions: user interface, system functionality and system performance. The prototype is, however, not the end product of the process; instead, it is a good idea to have an iterative process. By iterating the prototype, gradually introducing aspects of the dimensions mentioned above, the developer can learn and understand different aspects of the system, e.g. trade-offs among different implementation strategies, the actual requirements of the system, or its users. The basic parts of the iteration process, often called the ”rapid prototyping process”, are shown in Figure 8. The process more or less consists of four parts:

1. Ideate - Use the knowledge gained from research about the subject or previous itera- tions to pinpoint features or design elements that should be included in the prototype.

2. Prototype - Create a prototype to test the features from the previous step.

3. Test - Test the prototype either by sharing it with the user and check whether the functions and aesthetics of the prototype meet the expectations or conduct a user test.

4. Analyze - Analyze the results from the tests to gain more knowledge about the key features of the prototype. The knowledge can be used in the next loop of the iteration to ideate new ideas and design.

Before entering the rapid prototyping process, the developer ought to gain some knowledge about the users by empathizing with their situation through user interviews and user observation. The developer should also define what problems are going to be solved with the prototype. It is common that, during the rapid prototyping process, the developer learns more about the intended users and that the tests reveal insights that redefine the problem [47]. This suggests that iterative prototyping can not only be used as a means of improving the original design, it can also be used as a way of learning about the users, the problems they face and fitting solutions to those problems. Iterative prototyping can, in other words, be used as a technique in the problem definition area of research to stimulate both interaction and discovery during the design process, and it has the potential to generate insightful, user-driven ideation [48].

Figure 8: Basic prototype iteration process

3.9.3 Fidelity

Prototypes do not necessarily look like final products - they can have different levels of fidelity. A prototype’s fidelity refers to how well it conveys the final product’s look-and-feel. Depending on the visual design, content and interactivity of a prototype, it can range from Low-Fidelity (Lo-Fi) to High-Fidelity (Hi-Fi). The design process often starts with Lo-Fi prototypes to explore design and functionality approaches. As the process moves forward and the design and functionality become more rigid, the prototypes are created with higher and higher levels of fidelity [49].

3.9.4 Different types of prototypes

Prototypes can be constructed in many ways. Some usual techniques used to construct prototypes for User Experience (UX) will be discussed below.

Paper prototyping

Paper prototyping allows the designer to create a prototype of a digital product interface without any use of digital software. The technique is based on creating hand drawings of different screens that represent the user interface of a product. While relatively simple in its execution, it can be useful when the designer needs to explore different ideas and redefine designs quickly. The most common use is in the early stage of design and development, where the designer might try different approaches to the design and/or solution. Some of the benefits of this technique include:

• Fast prototyping - Sketching on paper is quickly done and does not require the designer to have knowledge about design software.

• Allow early testing - By testing the prototypes early, big-picture problems and information about the user’s needs quickly become clear.

• Facilitate adjustments - It allows the designer to make changes during the test session by sketching up a function that the user would have liked, or erasing parts that were deemed unnecessary.

Nevertheless, this technique also has its disadvantages. It is hard to convey complicated operations of highly interactive interfaces, and it does not give the same feel of a real product as a digital prototype would [49].

Clickable wireframes

A wireframe is a visual representation of a product created in a software tool made specifically for prototyping. These software tools allow the designer to create static wireframes and link them together to make a clickable, interactive prototype. Wireframes can range from low to mid fidelity and can give the user that tests them a better understanding of how the product functions than paper prototypes can [49].

High-fidelity prototyping

Hi-Fi prototypes are created to appear and function as similarly as possible to the finished product. They are often created when the designer has quite a solid understanding of the functionality of the system and wants to user test it. Hi-Fi prototypes are often accompanied by realistic and detailed design and reality-based content (like text), and are highly realistic in their interactions. While some disadvantages come with Hi-Fi prototypes, such as being costly to create since they take much time to make, they have many advantages over prototypes in the lower range of fidelity:

• Meaningful feedback during usability testing - As Hi-Fi prototypes often look like a real product, user test participants will often behave more naturally, as if they are using the real product.

• Testability of specific UI elements or interactions - With Hi-Fi interactivity, the designer can test graphical elements or specific interactions [49].

• Better understanding of the product - Since Hi-Fi prototypes often work like a real product, they give the participants a clearer idea of how a product is supposed to work, in ways which prototypes created in lower fidelity can not.

3.10 Basic Design Principles

When designing a web page, an app, an interface or similar things, it is always good practice to follow some of the basic design principles [50].

Proximity

Proximity helps by creating a relationship between related or similar elements. The elements do not need to be grouped; instead, they can be connected by colour, size, font, etc. [50].

Alignment

Alignment is used to create a sharp, ordered appearance by ensuring that the elements in the design have a pleasing connection with each other [51].

Repetition

Repetition strengthens a design by tying together elements using colour, shapes and typefaces. By using repetition, the designer can create association and consistency [50]. Repetition and consistency are especially crucial in branding, because they make products instantly recognizable and can create cohesion between different products that belong to the same family [51].

4 Methodology

In this chapter, the process for the work of this master’s thesis is presented. The chapter consists of five sections: literature study, interviews, Codemill, prototyping and, finally, user testing.

4.1 Literature study

The literature study aimed to gain information about the subject, quality control in video post-production, and what the workflow can look like in the industry today. Some focus was also directed at finding out more about how people collaborate and how important collaboration can be, even in a digital scenario. Several of the articles were found using the online library at Umeå University, as well as Google Scholar. Some information was also found using other sources, whose credibility was evaluated before they were included in this paper.

4.2 Interviews

The focus of the interviews was to gain information on how the participants operate their video post-production stages, as well as to create an understanding of the work procedure of an average QC-worker. The interviews also tried to gain knowledge of how collaboration occurs in QC teams. The aim was to find out whether team members were located at the same place, collaborated through specified online collaboration tools, or used different asynchronous methods like emails or other message technologies. Since some knowledge regarding the subject was gained from literature studies and Codemill’s internal research, the semi-structured interview technique was chosen. The answers from the interviews can be found in chapter 6.

4.3 Codemill

A lot of the information on how QC work occurs was gained from material accessed through Codemill.

4.3.1 Accurate Video

By studying the product, taking part in demos and interviewing people from the development team, information on how the system was built and how it can be used was gained.


More information about Accurate Video can be found in section 3.3.

4.3.2 Codemill internal development

To further gain knowledge on how current and future users of Accurate Video want their modified versions of Accurate Video to be built, demos, wireframes and prototypes were studied. None of these demos, wireframes and prototypes can be shown in this report due to secrecy and respect for the different companies.

4.4 Prototyping

To gain more knowledge of different functions that could be useful for teams of QC workers, prototypes were created, tested and iterated to include new functionality and design. This method was chosen since prototyping can be used not only to develop the product further in terms of user interface design, but also to let the developer learn more about the intended users, as well as give insights that redefine the problem that ought to be solved [47][46]. Some short information about the prototypes will be given in this section, but they will be further discussed in chapter 5. Before the first prototype was created, basic knowledge about the video post-production workflow, especially the QC part, was gained through literature studies and interviews. This information about how the work was done was then used to ideate functionality and design for a first prototype.

First prototype

The first prototype was a paper prototype, since it contained hypothetical functionality that was in need of validation before further development. The paper prototype technique was used since such prototypes can be created very quickly and can be used to explain the concept to a participant of a user test [49]. The test itself was constructed to find out if the functionality could hypothetically solve a relevant, real problem.

Second prototype

After analyzing the results from the tests of the first prototype, more insight into the QC workers’ problems was obtained, and the idea of a task management system was born. A clickable wireframe, with low fidelity, was created to test the hypothetical functionality in a similar way as the first prototype, as well as to get input on the user interface. The wireframe was created using a digital design toolkit called Sketch [52].

Third prototype

After analysing the results from the second round of user tests, a third prototype was created with higher fidelity than the previous ones. For this last prototype, a more comprehensive user test was conducted, which will be discussed in the following section.

4.5 User testing

The goal of the user test was to validate the thesis objectives, found in section 1.3. Since the intended participants of the user test were members of larger QC teams, it became clear that the user test had to work remotely. The reason for this is that QC teams work with large productions, which are located in capitals or larger cities. Since travelling was restricted during this phase due to the corona outbreak, remote testing was deemed the best choice.

4.5.1 Construction of the test

The test sessions were conducted in a video conference tool called Google Meet [53], since it allows screen sharing, which made it possible to observe both the user and how they interacted with the prototype simultaneously. It also has a recording function, which made it easier to document the test sessions. The prototype itself was created using Sketch and uploaded to their service called ”Sketch Cloud” [54], which allows the creator to share documents by sending a link to the project. These projects keep the connections (hotspots) between the images created in Sketch, thus allowing anyone with the link to try the prototype. Since the test session was rather long, approximately 45 minutes, and was constructed in a way that required the participants to have written information at hand, instructions were created using a survey administration app called ”Google Forms” [55]. Google Forms was used since it could both gather information about the participant using a digital form and divide the form into sections, thus keeping the test structured. The form can be found in Appendix A. The participants were also required, as a part of the test, to fill in a simplified QC-report which was based on a real version [15]. Before the tests began, the participants were sent links to both the instructions and a file storage system called ”Google Drive” [56], where the QC-report was stored. The QC-report was created in two versions, one in Excel and one in PDF, and the participant could choose the one they wanted to use. The PDF version can be found in Appendix B. A test script was written to structure and organize the test. Before any real user tests were conducted, a pilot test was done to ensure that the technology worked and to find errors in the test that ought to be corrected before the proper testing.

4.5.2 Conducting the test

Since Nielsen [36] concluded that most of the usability problems of a prototype can be discovered with as few as five participants in a user test, the study aimed at this number while recruiting people. The final number of user tests was, however, six. At the start of the test, each participant was informed that the test was voluntary and could be abandoned at any time. They were also asked to give permission to have the test session recorded. The test was constructed in five sections:

1. Info about the participant

2. Short introduction

3. Five flow tests

4. A Comparative test

5. Exploring functionality

Info about the participant

The participants had to give information about their current company and job title, whether they were familiar with Accurate Video, and whether they had some knowledge about how QC is performed in video post-production.

Short introduction

Before any of the tests in which the participants had an active role, they were briefly introduced to the concept. This was done using a version of the prototype that was not used later on in the tests. This was deemed necessary since the prototype included many new concepts and functionalities that the participants were not familiar with. The conductor of the tests shared his screen while clicking through the prototype and explaining the functions. The participants could ask questions at any time if they deemed it necessary.

Five flow tests

Five flow tests were conducted in a separate version of the prototype to discover how the flow of the interface worked and whether any elements or functionality were not clear enough. The instructions for the flow tests were given in written form. During the flow tests, the participants were asked to ”Think-Out-Loud” and to be as blunt as possible if they had any criticism. The flow tests were measured using success rate [38], and each flow was given a score from 0 to 1, where 0 represents failure and 1 success. The score depended on the number of error clicks and on whether the participant got stuck and needed help:

• Score 1.0 if the participant completed the flow with a maximum of 3 error clicks.

• Score 0.5 if the number of error clicks was between 4 and 6.

• Score 0 if the participant could not complete the task without help or had more than 6 error clicks.
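The scoring rule above can be sketched as a small function. This is an illustrative sketch only; the function and variable names are assumptions, not part of the actual test tooling.

```typescript
// Score a single flow according to the rule described above:
// 1.0 for at most 3 error clicks, 0.5 for 4-6, 0 for needing help or more than 6.
function flowScore(errorClicks: number, neededHelp: boolean): number {
  if (neededHelp || errorClicks > 6) return 0;
  if (errorClicks >= 4) return 0.5;
  return 1;
}

// The overall success rate is the mean score across all flows and participants.
function successRate(scores: number[]): number {
  return scores.reduce((sum, s) => sum + s, 0) / scores.length;
}
```

With this definition, a participant who completes a flow with five error clicks contributes 0.5, and one who needs help contributes 0 regardless of click count.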

After the flow tests were conducted, the participants were given the opportunity to comment on the design, flow and functionality that were included in the tests. The flow tests can be found in Appendix D and the results, presented as a success rate and selected comments, can be found in section 6.4.

A comparative test

A qualitative comparative test was then conducted to examine the test's main objective: whether a sorted list and an auto-generated QC report are preferred over not having them. The participants were given the same instructions for two different prototypes. One prototype had sorted lists and an auto-generated QC report, while the other one did not. The instructions were, in short:

1. Find the task with the shortest due date and start working on it.

2. Generate a QC Report based on the found problem.

The prototype that excluded sorted lists and the QC report had an unsorted interface with assets. After the right asset was found, the participant had to manually fill in the QC report found in Appendix B with the help of the instructions provided. Further details about the prototypes can be found in Appendix E. The time taken to complete the tests was measured to determine if one prototype's workflow was more efficient than the other's. The participants could also leave comments on which prototype they preferred. The results can be found in section 6.5.

Exploring functionality

The participants were shown the functions of the prototype through screen sharing, watching the conductor do a walk-through of the prototype. The participants could give feedback and share their thoughts about how much value each function could provide, as well as other thoughts about the design and flow. As some of these functions were not included in either the flow tests or the comparative test, it was deemed necessary to get at least some subjective evaluation to confirm their significance. The functions examined were:

1. Sorted lists

2. Dividing the asset into divisions

3. Auto-generated report

4. The ability to have a checklist and to save progress using it

5. The use of colors instead of icons

6. The overall design and functionality of the system

The feedback given to the points above can be found in section 6.6.

5 Prototypes

This chapter presents how the prototypes evolved, with functionality and design added based on information gained through a rapid iteration process [47] together with continuous literature studies and interviews.

5.1 Prototype 1

The initial approach of introducing a collaborative tool to Accurate Video (and QC in general) was to introduce a ”To-do-list”. The placement of the button that unfolds the list is presented in Figure 9. The unfolded version is presented in Figure 10. Since this was an idea that needed quick verification before any further development, the prototype was created on paper, as it was deemed adequate for presenting the concept visually in user interviews.

Figure 9: Lo-fi sketch of Accurate Video with a to-do list

During the ideation process, it became clear that different companies seem to have standardized which issues to look for in QC, but they did not seem to have this ”list” of things to do digitized. Instead, through knowledge gained from interviews, it seemed that every QC worker was expected to know which issues to look for when working on an asset, or that the information on which issues to look for was in another document, either digital or on paper. The concept consisted of digitizing these lists, thus making them easily accessible by having them ”in-context”, namely in the same tool as the workers use. Another feature was introducing synchronous collaboration in the QC environment by letting multiple users work on the same asset. This can be seen in Figure 10. The tasks represented different issues to look for, ”black frames” for example. Each task could exist in five states:

• To edit - the task has yet to be worked on.

• Is editing - a QC worker has claimed this task, which means that no one else can claim it.

• In progress - a QC worker has stopped editing the task; some of the work on this task has been done, but it is not completed. The progress has been saved to enable the next QC worker to pick up where the previous one left off.

• Completed - the task was finished without any issues.

• Completed with issues - the task has been completed but contains issues, which would be graded in categories like ”information”, ”warning” or ”serious”.

Each of these states would be colour coded to make them easily recognizable. The unfolded to-do-list seen in Figure 10 would also be colour coded to show whether any tasks have been worked on or not.
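The five task states above can be modelled as a discriminated union. This is a hypothetical sketch: the state names and severity grades follow the text, while the field names and the specific colour choices are assumptions for illustration.

```typescript
// Severity grades for issues found during QC, as described in the text.
type Severity = "information" | "warning" | "serious";

// The five states a task can exist in.
type TaskState =
  | { kind: "to-edit" }                                      // not yet worked on
  | { kind: "is-editing"; claimedBy: string }                // claimed; no one else can claim it
  | { kind: "in-progress" }                                  // partial work saved
  | { kind: "completed" }                                    // finished without issues
  | { kind: "completed-with-issues"; issues: Severity[] };   // finished, but issues found

// Each state maps to a colour so it is easily recognizable in the list.
// The colours here are placeholders, not the prototype's actual palette.
const stateColour: Record<TaskState["kind"], string> = {
  "to-edit": "grey",
  "is-editing": "yellow",
  "in-progress": "blue",
  "completed": "green",
  "completed-with-issues": "red",
};
```

A UI component could then look up `stateColour[task.kind]` to render each row in the unfolded to-do-list.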

Figure 10: Lo-fi sketch with the to-do list expanded

Based on the feedback from the conducted user tests, presented in section 6.2, two conclusions were made while analyzing the answers:

• QC workers do not work synchronously on assets and would probably not want to.

• QC workers often check multiple issues at the same time, and would thus not want to be confined to checking only one type of issue at a time.

Instead of trying to force these changes upon the regular QC workflow, another ideation process began in order to find a collaborative solution that would actually improve the effectiveness of the workflow. Hence, the idea of designing a task management system came to fruition.

5.2 Prototype 2

When the idea of a task management system was born, more literature studies were conducted in the field to gain information. Interviews with the staff at Codemill 1 were also conducted to specify key functionalities that should be included in the system. Some of these were:

• User login - to access the system and to have the ability to restrict access based on responsibility.

• Structured lists - to have the assets sorted in order, allowing the worker to always know which asset to work on.

• Autogenerated QC Report - to increase effectiveness and minimize human errors originating from filling the QC reports manually.

After a user with full access logs into the system, they are greeted with the start page as seen in Figure 11.

Figure 11: Start View for the task management system

1 Codemill Internal Development

In the left field, there is a sorted list of assets, automatically sorted by due date (the date the asset has to be finished on), but the user has the opportunity to either sort the list by different categories or search for an asset. The right field consists of assets that are in edit by other QC workers. Since they are already claimed, no other QC worker can access them, thus prohibiting synchronous collaboration. The automated QC report view, as seen in Figure 12, can be reached through the tab button in the right corner. In the report view, the user can view a preview of the QC report and export it as an Excel or PDF file, enabling it to be attached to an email and sent upstream in the video post-production chain. After a QC report has been exported, the user can archive the entire asset, thus removing it from the list and the system.
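The split between the automatically sorted task list and the "in edit" field can be sketched as a small function. This is an assumed model for illustration; the `Asset` shape and names are not from the actual system.

```typescript
// Minimal assumed model of an asset in the task management system.
interface Asset {
  name: string;
  dueDate: Date;
  inEditBy?: string; // set when another QC worker has claimed the asset
}

// Assets claimed by others go to the "in edit" field; the rest are
// sorted so the most urgent due date appears first in the task list.
function taskList(assets: Asset[]): { todo: Asset[]; inEdit: Asset[] } {
  const inEdit = assets.filter((a) => a.inEditBy !== undefined);
  const todo = assets
    .filter((a) => a.inEditBy === undefined)
    .sort((a, b) => a.dueDate.getTime() - b.dueDate.getTime());
  return { todo, inEdit };
}
```

Re-sorting by a different category or applying a search filter would amount to swapping the comparator or adding a filter step before the sort.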

Figure 12: Report view for the task management system

Another feature that was included is the possibility to divide the asset into specified divisions, in this example ”Video & Text”, ”Audio”, ”Metadata” and ”Rights”. The idea came to fruition after discovering that QC workers can be specialized in certain areas, as can be seen in Figure 7. Figure 11 shows all these divisions, but in a scenario where the user of the system only has the competence to do work in ”Video & Text”, only that division would be visible to the user. When all of the asset's divisions have been completed, the asset appears in the list of the report view. From there, the user can easily access the different parts of the preview of the QC report by using the corresponding tabs, which are also colour coded. The colour correlates with the issues found in the QC: if no issues were found, the tab is marked green; if issues of degree ”serious” are found, the tab is marked red, etc. A basic flow chart of this functionality can be seen in Figure 13. After all divisions are finished, they end up in the report view. A user that has access to this stage can, if deemed necessary, review the work done by the other QC workers and, if the work is deemed incomplete, reject one or more divisions, or the entire asset. The work done can either be removed or saved together with comments from the reviewer. The purpose of this would be to quickly give relevant feedback to the other QC workers if their work was not adequate. This functionality was, however, not implemented in the prototype.
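The tab-colouring rule described above can be sketched as follows. The text only specifies green for no issues and red for ”serious”; the colours for the other severities, and all names, are assumptions for the example.

```typescript
// Severity grades for QC issues, as used in the text.
type Severity = "information" | "warning" | "serious";

// Ordering of severities so the worst one can be picked out.
const rank: Record<Severity, number> = { information: 1, warning: 2, serious: 3 };

// Assumed colours per severity; only green (none) and red (serious) come from the text.
const severityColour: Record<Severity, string> = {
  information: "blue",
  warning: "yellow",
  serious: "red",
};

// A division's report tab takes the colour of the worst issue found in it,
// or green when the division contains no issues at all.
function tabColour(issues: Severity[]): string {
  if (issues.length === 0) return "green";
  const worst = issues.reduce((a, b) => (rank[a] >= rank[b] ? a : b));
  return severityColour[worst];
}
```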

Figure 13: Flow chart for the division functionality

5.3 Prototype 3

After analyzing the user test results from prototype 2, it was deemed that the system could indeed improve the workflow for QC workers. Work began on another prototype that used Codemill's own component library, accessed through an online collaboration tool named Abstract [57]. The reason behind this was to construct a prototype with higher fidelity that would create association and consistency with Codemill's own product, Accurate Video, thus following some of the basic design principles [51]. The parts used were the colour chart and button designs. This prototype was also created to be a part of a more thorough user test; hence it was deemed necessary to give it a cleaner look resembling the actual product, as well as making it more clickable than previous versions. The prototype itself was built upon the previous two prototypes and used some of their functionalities. The workflow presented as a flow chart in Figure 13 is still used in this prototype. The Video & Text division, as it appears for a user that has access to all the functions, can be seen in Figure 14. Apart from the visual improvement, an admin tab has also been included, which will be discussed further below.

Figure 14: Video & Text division

By inspecting an asset in the task list, the user can find more information about the asset, as seen in Figure 15. In this view, the user can get info about the duration of the asset, frame rate, audio files included and more. Info about the status of the entire asset across all divisions can also be found. In this case, no work has been done in any division, hence the blue colour. The ”To-do list” has also returned, but is now called ”checklist”; its functionality will be discussed below.

Figure 15: Inspecting an asset

To work on an asset, the user can claim it either in any of the division views or in the inspect view. By claiming an asset, the user opens Accurate Video, as seen in Figure 16. All the other users of the system will henceforth see the asset in the ”In Edit” field. The logo in Figure 16 represents Accurate Video as presented in section 3.3. The user performs their QC work in this view. If the work is completed, the user can save it by using the finish button. If the user wants to discard the asset without saving, the discard button is used. If the user logs out of the system, or wants to pause the work to do something else, the pause button can be used, which alters the dynamic bottom bar as seen in Figure 19. This enables the user to quickly resume their work after either leaving the system or doing something else in the system while pausing the QC work.

Figure 16: Working on an asset

There is also a new feature included in this version: the ability to have work instructions, as well as the ability to save partially completed work, by using the checklist. While working on an asset, the user can access the checklist by clicking the checklist button. The checklist view can be seen in Figure 17. The idea is that every row in the checklist correlates with the marker rows found in Accurate Video and is standardized by the company that performs the QC. If a user creates a marker, it falls under the corresponding row in the checklist, as seen in Figure 18. The rows in the checklist are also what constitutes the QC report, together with file info and info about which users have worked on the asset. If the user wants instructions on how to perform a certain task, they can access the work instructions by unfolding the corresponding item in the checklist. Since the checklist occupies a lot of space, the user can separate this view from the rest of the system by clicking on the ”New window” button. A separate tab in the user's internet browser will open with the checklist view, thus keeping the work environment visible while still having the instructions nearby.

Figure 17: Inspecting the instructions in the checklist

As some assets can be huge and the work to fully complete them can take quite long, it would be beneficial to be able to save progress made on an asset even though it is not fully completed. By clicking on the ”Create Progress” button in Figure 17, the user can access the view shown in Figure 18. By clicking on the corresponding boxes, the user can mark that these rows have been fully checked and that no further work is needed in these areas. The boxes are colour coded according to the worst severity that each row contains. If the QC worker has done some work on other rows, these markings will also be saved, but the rows are not marked as completed, and the other QC workers must check these rows when they continue working on the asset. After the user has marked the completed rows, they can leave a comment if they choose to and save the progress. The asset will return to the task list and be marked as ”In progress”, as seen in Figure 14, and if the user inspects that asset, the checklist in Figure 15 will show the corresponding saved rows.
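The progress-saving behaviour above can be sketched as a small data model. All field names, and the colours other than the severity ordering, are assumptions for illustration; only the worst-severity colouring and the completed/not-completed distinction come from the text.

```typescript
// Severity grades attached to markers created in the player.
type Severity = "information" | "warning" | "serious";

// One standardized checkpoint in the checklist.
interface ChecklistRow {
  item: string;        // e.g. a hypothetical "Black frames" checkpoint
  completed: boolean;  // fully checked; no further work needed on this row
  markers: Severity[]; // markers saved under this row, kept even if incomplete
}

const rank: Record<Severity, number> = { information: 1, warning: 2, serious: 3 };

// A row's box is coloured after the worst severity the row contains
// (green when it holds no markers; the specific colours are placeholders).
function boxColour(row: ChecklistRow): string {
  if (row.markers.length === 0) return "green";
  const worst = row.markers.reduce((a, b) => (rank[a] >= rank[b] ? a : b));
  return { information: "blue", warning: "yellow", serious: "red" }[worst];
}

// Rows not marked completed must be checked again by the next QC worker,
// although the markers saved under them are kept.
function rowsToRecheck(rows: ChecklistRow[]): ChecklistRow[] {
  return rows.filter((r) => !r.completed);
}
```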

Figure 18: Saving the progress

Figure 19: Paused asset

As some users of the system should have an overview of the system and the assets and users in it, the admin view in Figure 20 was created. A user with access to this view can quickly get information about:

• How many users are logged in

• How many assets are in edit and in which department

• The number of unfinished assets and their status in each division

• The number of fully completed assets, and thus also the number of reports to send

• The number of assets that have a relatively short due date

Figure 20: Admin page

6 Results

6.1 Interviews

The highlights from each interview are presented in the subsections below, translated into English.

6.1.1 Interview with an independent director, producer and editor

During his career, he has produced music videos, commercials and other projects, like short movies. As he works as a freelancer, he gets the opportunity to build a new team for each project. The key points gained from the interview were:

• He does 90% of the cutting and QC of the projects by himself using only Adobe Premiere. The other 10% is sent to a company called ”Trickfabriken”, which uses Frame IO and a service called ”Addtox” [58], an online cloud-based QC service for ads where the user uploads an ad and gets error codes back with information on whether the QC failed. This service was used when the project featured ads produced for TV.

• When he does the QC by himself, he mostly checks general things like the length of logos used, the length of the subtitles, sound quality and the codec of the video.

• He said that because of the internet and social networks like Instagram, there had been a considerable change in the last five years in how material is produced. A video project for a commercial may have to be produced in 10-15 variants to fit all of the formats. A video for Instagram must have a square ratio, but the same video should also be able to be shown in widescreen mode for YouTube, etc. It may also have shorter versions of 5 to 10 seconds that should work silently as a Facebook ad.

6.1.2 Interview with staff at Codemill

The key points from the interviews were:

• In larger companies, AI-integrated software often does the initial QC check, and QC workers then check the validity of the markers and remove or add additional markers.

• As many QC workers fill in the QC report manually, there probably exists a desire to make this process more automatic, to ease the workload and to increase effectiveness.

• QC reports are often sent by email to the desired person.

• The QC process can be quite extensive, thus generating many marker rows in Accurate Video.

6.2 Prototype 1

Some key points from staff at Codemill gained from user testing prototype 1 are presented below:

• ”Looks good, but maybe takes up too much visual space and hinders the worker from performing QC. Maybe it can be done in another window?”

• ”I do not think QC is performed synchronously on the same asset”

• ”I think a to-do-list would be useful”

• ”Many QC workers seem to already know how they are going to perform the QC work. Not sure if a to-do-list is necessary”

• ”As many QC workers seem to work on multiple tracks at the same time, they would probably not want to be confined to only do one task at a time”

• ”I like the idea to have a status on each asset as it could simplify the workflow”

6.3 Prototype 2

Some key points gained from staff at Codemill while user testing prototype 2 are presented below:

• ”Create a view where the user can easily continue the work if they logged out of the system during a lunch for example”

• ”I like the idea of having a system to organize the work”

• ”The system should have a view for the supervisor of the project to get an overview of the system”

• ”I like the idea that QC-reports could be automatically generated”

• ”Would be great if a standardized checklist could be implemented, as it would likely constitute the QC-report”

• ”Maybe change ”To-do” to ”Tasks” and the ”Work” button to something else?”

• ”I think that dividing the asset into divisions could work”

6.4 Prototype 3 - Flow testing

The success rate from the flow test scores, presented in Figure 21, was approximately 78%.

Figure 21: Task success data

6.4.1 Selected comments

• ”Hard to grasp the idea of ”In progress” versus ”In Edit”.”

• ”Does not feel intuitive to click on a button to create progress in the checklist, the user should be able to do this instantly.”

• ”Confused by the color differences in the checklist’s boxes.”

• ”Difficult to understand that the admin view was the one to look for, maybe rename it to ”Overview”?”

• ”The summation of the unfinished assets in the admin-view is a tad small.”

6.5 Prototype 3 - Comparative test

The data gained from the comparative test is presented in Figure 22. Completing the task with a sorted list of assets and an autogenerated report took on average 123 seconds less than with the version that did not have these features. That is an improvement of 150%.

Figure 22: Comparative test data. Time presented in seconds.

6.5.1 Selected comments

When asked which version they preferred, all users agreed that the version with the sorted lists and QC report was preferable. Some key points from the comments are presented below:

• ”Nice to not have to do the report manually. Prefers a system that keeps track of the user, time codes and everything”.

• ”Nice to have a sorted list”

• ”The autogenerated QC report feels much smoother. To write a QC report manually feels tedious and can be subject to human errors”

6.6 Prototype 3 - Exploring functionality

This section contains some of the key points given by the users when asked about their opinions regarding specific functionality or design elements of the system.

6.6.1 Sorted lists

• ”Makes it easier to find the asset and it is good that the user can sort it as well”.

• ”Absolutely preferable”

• ”Makes finding the asset to work on much easier.”

• ”Maybe the list of assets in edit does not have to be included, or at least not as large as the other list”

6.6.2 Dividing the asset into divisions

• ”I think it can be useful.”

• ”It can be useful, but it depends on the customer. If a person does QC on all parts, then it would not be preferable.”

• ”I think it works fine, I also think that QC workers often work in specific areas.”

• ”I think it would be beneficial, but if a QC worker has access to several divisions, they may want to have them represented on a singular view instead of different tabs.”

6.6.3 Autogenerated report

• ”Absolutely an advantage”

• ”Makes the work more efficient and minimizes human errors”

6.6.4 The ability to have a checklist and to save progress using it

• ”Remove the button and make it possible for the user to interact directly with the list, to improve the work flow”

• ”Interesting concept of storing progress, maybe it would have been nice to be able to select multiple rows at once.”

• ”I do not think the message functionality contributes with anything and could be excluded”

• ”Make the checkboxes have one color instead of multiple colors. Represent the state of the asset on the row instead by using color or text.”

• ”I think the idea is very great! It is also great that the checklist can be opened in another window thus allowing full visibility of the work environment”

6.6.5 The use of colors instead of icons

• ”Daring to choose colors because some people are color blind. But people with color blindness probably do not work with QC.”

• ”Maybe choose icons instead to make it clearer, it was confusing at first when I did not know what the colors represented.”

6.6.6 The overall design and functionality of the system

• ”Really interesting and exciting! I like that it is the same look and feel as our own product. The system overall feels easy to grasp even though it is pretty complex and is built like a shell around an already complex product.”

• ”Great concept where specific parts of it will absolutely be useful for further development of the system. The review part (though not implemented in the prototype) and the sorted lists are some of the things we have thought of.”

7 Discussion

7.1 Interviews

Not surprisingly, finding participants for the interviews was difficult. QC teams only exist in larger productions, which often can be reluctant to do interviews due to tight schedules and heavy workloads. As this all happened amidst the corona outbreak, the few companies that actually answered the emails and phone calls did not have time for an interview, which meant that no interviews could be done with actual QC workers, not even those who work alone. To gain some knowledge about the reality of video post-production, an interview with an independent director/producer/editor was conducted, which is presented in the results. To gain as much knowledge as possible about what the QC work procedure looks like, some short interviews were also conducted with the staff at Codemill. All of the interviews were conducted using the semi-structured interview technique [43] and were structured around Hall's three boxes [44], since some information had already been gained through literature studies. The questions asked were different in each interview, since they were adapted to each participant's work title and experience.

7.2 User tests and prototype iteration

The design and functionality of the final prototypes are based mainly on the methods of user testing [34] and prototype iteration [47], as a way to gradually gain insight into the users and their expectations, as well as to continuously improve the system's functionality and interface design. In the best of scenarios, a close relationship with the actual QC workers that use Accurate Video, or QC workers in video post-production in general, would have been preferred, since they constitute the target group. Even though rigorous attempts were made to reach those persons, it was in the end deemed impossible, mainly because the development was conducted amidst the corona outbreak [16]. The situation aggravated an already harsh user testing environment, which would have been solely remote due to the companies' geographical locations, into a more or less impossible one due to company restructuring and layoffs during the pandemic. In response to this situation, the user testing was conducted solely with selected members of the staff at Codemill. All participants have worked with Accurate Video and have some insight into how the QC work is done, since the software, presented in section 3.3, is specialized in QC in video post-production. The participants' professions ranged from UX designer, developer, product manager and senior solution architect to sales manager, in order to cover as broad an area of knowledge as possible. Even though they were not actual users, their input and contribution are deemed sufficient for the results to have relevance in the field.

7.3 Results discussion

7.3.1 Flow tests

The data gained from the flow tests shows a relatively high score of 78%, thus indicating that most of the interface's flow works. The score is a bit high compared to the usual results according to Nielsen [38]; this may have several causes. The factor deemed to have the most impact was the short introduction of the system (where another version of the prototype was used than the one in the flow tests). The users may have been somewhat primed on the system before using it by themselves, even though the introduction's explanations were kept to a minimum. A short introduction was, however, deemed necessary, since the system introduced many new concepts that can be hard to grasp without any explanation. The flow with the least success was flow test 5, where the users were asked to find information about how many unfinished assets were in the system. The most likely cause for this deviation is that many users still had a hard time grasping the concept of the system, combined with the poor choice of naming the view ”admin”. However, even considering these facts, the user task flow of the system's interface is deemed to work quite satisfactorily and can be built upon in further development.

7.3.2 Comparative test

The subject of this comparative test was to find out whether a sorted list of assets and an autogenerated QC report are preferred over not having this functionality. The test was constructed to simulate a real case scenario where the user had to find the asset to work on, find an issue in it and generate a QC report. The results showed that it took, on average, 123 seconds less for the users to reach the same result using a sorted list and autogenerated report than when having to do everything manually. The improvement in time was 150% when comparing the two versions to each other. The most significant improvement in time was gained by making the QC report automatic. The participants also favoured the version with a sorted list and automatic QC report.
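For clarity, the two reported numbers are consistent if the improvement is defined as the time saved relative to the time with the new features. Under that assumed definition (the symbols below are introduced here, not taken from Figure 22), the average times are implied:

```latex
\text{improvement} = \frac{t_{\text{manual}} - t_{\text{sorted}}}{t_{\text{sorted}}}
\quad\Rightarrow\quad
1.5 = \frac{123\,\text{s}}{t_{\text{sorted}}}
\quad\Rightarrow\quad
t_{\text{sorted}} = 82\,\text{s},\qquad
t_{\text{manual}} = 205\,\text{s}.
```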

7.3.3 Other functionality

Since these functionalities (excluding sorted lists and the automatic report) were evaluated through subjective measures, it may be difficult to present these findings as facts. The results gained should instead be used as guidance on suggested functionalities that could be included in the further development of an actual system. The autogenerated QC report and the sorted lists were both shown to increase the workflow's effectiveness and gained appreciative comments. The checklist was deemed an interesting concept which, through further development and iterations, could become a new way of saving progress as well as giving instructions to recently employed QC workers. Since its relation to the contents of the QC report is quite strong, it could be valuable if further research was done. Dividing the assets into divisions was also deemed an interesting concept, although its relevance depends highly on the structure of the QC company. Further research should be conducted in the field to prove its worth. However, as it increases collaboration by utilizing the workforce effectively, it could help with structuring the workflow and saving the company time [27]. The comments on the user interface's aesthetics and design also suggested that further iterations are needed to improve the workflow and clarify some aspects of the system.

8 Conclusions

Even though this study has limitations, because the number of user tests conducted was quite small and they were not done with the sought target group, some conclusions can be made. Due to the large number of assets a QC company can receive, and the number of employees that work with them, a system that organizes these assets and structures the workflow is probably useful. Such a system can utilize individual competence, save time and also be used to improve project management. This thesis proposes that a task management system could be used as a solution. Such a system would allow each asset to have separate states, allowing the user of the system to quickly see whether an asset is completed or not. It would also have the functionality of prioritizing the assets based on their properties, for example, their due date. The findings strongly suggest that such a system should include sorted lists and an autogenerated QC report, as they increase the effectiveness of the workflow and create a more satisfying experience for the user, as well as minimize the risk of faulty QC reports due to human errors. Sorted lists, as presented in the solution, also help to structure the workflow by allowing the user to quickly find which asset to work on, without having to be explicitly assigned to that task. The findings also suggest that other functionalities of the system could be of interest. The ability to have a checklist with work instructions, and the ability to save progress by using it, was deemed interesting. Such functionality could be used to organize the work procedure by standardizing the company's QC checkpoints, as well as to make the introduction of new employees more rapid, as they would always have work instructions available. The functionality to divide the asset into separate divisions based on the nature of the QC work was also deemed interesting as a concept.
However, because the divisions' categorization is highly connected to the company's own work procedure, future research is needed to conclude whether this function holds real value or not. The user interface of the suggested solution received a positive response in the user tests, as well as a high success rate in the flow testing. Even though it is not complete, the findings indicate that the structure could be used as a foundation for further development.

8.1 Future Work

The findings of this report should be validated by conducting more thorough user studies with actual QC workers. QC workers from different companies would be preferable, since that could validate the usefulness of dividing the asset into divisions.

The user interface of the system would need to go through more iterations to improve its workflow and clarity. One suggested area for improvement is to use symbols or text to a further degree to explain each asset's status.


A The Instructions for the test

A.1 Link to the user test instructions

https://forms.gle/jgX2AnXqNMvuXTQg9

A.2 Image of the user test instructions

Figure 23: Instructions used to fill in the QC report

B The QC report

B.1 Image of the QC-Report

Figure 24: The PDF version of the QC report

C Prototypes

C.1 Link to the prototypes

https://www.sketch.com/s/ZLeMJ

D Flow testing instructions

Figure 25: Instructions used in the flow tests

E Comparative test

E.1 Version without sorted lists

The prototype in Figure 26 contains assets that only hold information about their thumbnail, name, ID, and ingestion date. The layout is based on a real system.

Figure 26: Unsorted Assets