Standard Operating Procedure for a Common Operating Platform for Online 3D Modeling

Prepared for: Ohio UAS Center

Prepared by: University of Cincinnati

Revision Date: April 20, 2020

Version Number: 2.0

Table of Contents

List of Figures

Abstract

Additional Reference Materials

1.0 Purpose and Scope

2.0 System Architecture
  2.1 The Web Interface
    2.1.1 Graphical User Interface
    2.1.2 Application Programming Interface
  2.2 Backend Systems

3.0 Installation
  3.1 System Requirements
  3.2 Configuring Software for the Common Operating Platform

4.0 Usage
  4.1 Register and Login
  4.2 Navigating the dashboard
  4.3 Processing a 3D model on the Common Operating Platform
  4.4 Visualize a point cloud file
  4.5 Change Password
  4.6 Create API Keys
  4.7 Sharing Content

References


List of Figures

Figure 1: Common Operating Platform System Architecture


Abstract

PLEASE UPDATE: This Standard Operating Procedure (SOP) outlines the basic tools and procedures for utilizing small Unmanned Aircraft Systems (sUAS) to map construction sites and produce accurate 3D model representations of them. This document was prepared for the Ohio UAS Center by the University of Cincinnati. It is intended for Ohio Department of Transportation (ODOT) personnel who are assigned responsibilities associated with the deployment and use of Unmanned Aircraft System (UAS) information to map construction sites and produce 3D models, orthomosaics, and façades of the mapped areas.

This document covers the hardware and software used, the procedures to collect and process the images, and case studies in the areas of bridge and facilities inspection.

Please note that this document is still in draft form and that some images and tables have been taken from other sources to meet its needs.


Additional Reference Materials

1. Pix4D Mapper User Manual: LINK


1.0 Purpose and Scope

This document is intended to inform the reader about the capabilities of the Common Operating Platform (v1.5) and to define a standard operating procedure that covers usage, setup, and maintenance. In a nutshell, the Common Operating Platform is a web-based service designed to process 3D models. The platform consists of a website, a frontend interface that can be used to create 3D modeling projects, and a backend system that serves this website and executes all of the processing it entails. The idea is to have a secure platform, accessible to a broad audience, that takes care of all of their processing requirements and eliminates the need for an uninitiated audience to install highly technical software on their personal computers or maintain sophisticated computing hardware.

Sections 2 and 3 are meant for developers and system administrators who want to understand how the platform works under the hood. Users who only want to learn how to use the site should skip to Section 4.

2.0 System Architecture

The Common Operating Platform consists of two main components:

1. A web interface which enables users to upload and interact with data on the system
2. A backend processing system responsible for processing uploaded data

The website was created using a Django backend with a Postgres database and uses Twitter Bootstrap on the frontend. The backend processing system was implemented using a combination of Python and Bash scripts. The Common Operating Platform also facilitates API programming. The platform's HTTP-based REST APIs have been used in tandem with the Microsoft HoloLens for visualizing 3D models (processed by the backend). The following sections delve deeper into the design and architecture of the system.

Figure 1: Common Operating Platform System Architecture

2.1 The Web Interface

There are two kinds of web interfaces available. The first is a graphical user interface (GUI), accessible using a web browser, and the second is an application programming interface (API) to be used by devices or programs. The website interface was written in HTML, CSS, and JavaScript using Twitter's Bootstrap frontend library. jQuery and Ajax were used to implement the functionality.

2.1.1 Graphical User Interface

The standard graphical user interface (GUI) uses username/password authentication, which is available on both interfaces. Users may register for an account using the sign-up page. The GUI has several views, each of which accomplishes a specific function.

One such view is the virtual file system interface. It is akin to a PC's desktop-like view of the different files and folders on a computer. The virtual file system is modeled after a Unix-based file system [1] and emulates the operations needed to store data in an orderly fashion online. The user has options to create files and folders as well as features to manage them, such as delete. For 3D model processing projects, users are required to create special project directories and upload their images into them. A project may be of two types based on the 3D model processing engine the user wants: Pix4DEngine Server [2] or OpenDroneMap [3]. The former is proprietary, and the latter is open source. Once the files have been uploaded, the user may initiate 3D model processing using the options provided on the website. Once processing is initiated, the images are processed on the backend server using the chosen processing engine.

The status of the processing can be seen in the Backend Jobs tab. This view consists of a history of all the jobs that have been processed by the system. All projects that the system processes show up here along with their status. The view contains links to Potree [4], an open-source online 3D point cloud viewer where the user can view the finished 3D model once it has been processed. There are also links to the project directory which houses the images for the 3D model processing job.

Apart from these two views, there is a Settings view that provides two features. The first is the change password feature. The user is required to type his/her old password, followed by the new password, to reset it. The second feature is the API key manager. An API key is a universally unique identifier (UUID) which must be included in the HTTP header of an API request to authenticate access to the backend server. The user may generate API keys for their automated programs or any devices which are required to query the backend server for information. Apart from creating keys, the option to disable API keys is also available.

2.1.2 Application Programming Interface

The server also provides representational state transfer (REST) [5] application programming interfaces (APIs) to facilitate interoperability between different computer systems over a network. Responses to API queries are exclusively in JavaScript Object Notation (JSON) form, and the APIs use the HTTP methods GET and POST. API requests are authenticated using a universally unique identifier (UUID) API key. API keys are device specific and may be generated by the user using the options in the Settings view. The API key should be attached to the header (X-API-Key) of the HTTP request. The different APIs are listed below; a short illustrative example of calling them follows the list.

1. Uploads API: Upload a file to the virtual file system
   • URL: /api/v1/upload
   • Request type: POST
   • Headers: X-API-KEY
   • Request arguments:
     i. docfile: File to upload
     ii. dir_name: Virtual file system directory where the file is to be uploaded
     iii. owner: Username of the owner of the file
2. List directories API: List the directories in a specified virtual file system directory
   • URL: /api/v1/list/dir/
   • Request type: GET
   • Headers: X-API-KEY
   • Request arguments:
     i. path: The virtual file system directory path which the API should list
3. List files API: List the files in a virtual file system directory
   • URL: /api/v1/list/files/
   • Request type: GET
   • Headers: X-API-KEY
   • Request arguments:
     i. path: The virtual file system directory path which the API should list
4. Download file API: Download a file
   • URL: /api/v1/download/
   • Request type: GET
   • Headers: X-API-KEY
   • Request arguments:
     i. file: Full virtual file system path of the file to download
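As an illustration only, the following minimal Python sketch shows how a script or device might call the Uploads and List files APIs with the X-API-Key header. It assumes the widely used third-party requests library; the hostname, API key, directory, owner, and file names are placeholders, and whether GET arguments are passed as query parameters may differ from the actual implementation.

import requests

API_KEY = "00000000-0000-0000-0000-000000000000"   # placeholder UUID generated in the Settings view
BASE_URL = "http://<hostname>"                      # placeholder server address
HEADERS = {"X-API-Key": API_KEY}

# Upload an image into an existing project directory on the virtual file system
with open("IMG_0001.JPG", "rb") as image:
    upload = requests.post(
        BASE_URL + "/api/v1/upload",
        headers=HEADERS,
        files={"docfile": image},
        data={"dir_name": "/projects/bridge_survey", "owner": "jdoe"},
    )
print(upload.status_code)

# List the files in that project directory
listing = requests.get(
    BASE_URL + "/api/v1/list/files/",
    headers=HEADERS,
    params={"path": "/projects/bridge_survey"},
)
print(listing.json())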

It is also worth mentioning the lbstatus view, which is a kind of API. The lbstatus view is an indicator of the system's readiness and is primarily for use by the load balancer. It accepts an optional argument, status, which can be used to enable or disable lbstatus. The lbstatus of the system can have two values, 'OK' and 'NOT_OK', with the former indicating that the system is ready to receive requests and the latter indicating the opposite. The system administrator can also use this feature as a system health check or when an instance needs to be taken out of rotation.

2.2 Backend Systems

The backend server is responsible for serving the website, the APIs, and the processing of the aforementioned 3D models. For the processing of 3D models, we use two software packages: Pix4DMapper, a proprietary 3D model processor, and OpenDroneMap, which is open source. The inclusion of these applications makes the system computationally intensive. It also must scale to service the needs of many users, processing multiple 3D models concurrently with limited server capacity. The objective, therefore, is to distribute the several moving parts among multiple servers to spread the load.

With this in mind, the backend system was designed as multiple blocks which can run independently of each other. Together they create a workflow of numerous independent systems

that work together toward a common objective. The following is a high-level overview of the processing workflow, starting when a user uploads images for processing and ending when the backend system has processed them.

1. The user creates a 3D model processing project on the website
   a. Creates a project directory on the virtual file system
   b. Uploads the images associated with the project
2. The user initiates project processing
3. Django processes the request
   a. Collects data about the processing task from the file system and the database
   b. Queues a processing job
   c. The queue distributes the job to a backend controller in a round-robin fashion (there may be more than one backend worker, based on demand)
4. The backend controller receives the job information and starts processing
   a. Fetches the required data from file storage and the database
   b. Initiates the processing job in the swarm space
   c. Updates the database with the status of processing
5. The user is notified on the website when the project has completed processing or if processing encountered an error

We chose Django as the web framework for the site. Django is a free and open-source web framework, written in Python, which follows the model-view-template (MVT) architectural pattern [6]. It enables us to build a high-performance website easily and quickly, while not having to worry about implementing the standard operations required of a modern site. Apart from serving the site, Django also handles setting up 3D model processing tasks in the backend. When the user initiates the processing of a 3D model through the site, Django collects data about the job and adds an entry to a queue with that information. This entry is then distributed to a backend controller which starts processing the project workflow on the Docker swarm space.

The backend controller has several worker processes on standby to receive 3D model processing jobs. Each worker is designed to handle one processing request at a time. While processing a request, it informs the queue that it is busy and not available to service other requests; it sends a ready-to-consume message to the queue once it has completed the task at hand. With this design, the system administrator or an automated service can spawn multiple instances of the worker process to service multiple processing requests concurrently. A minimal illustrative sketch of this worker-loop pattern is shown below.
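The platform's actual queueing implementation is not reproduced here; the following is only a minimal, self-contained Python sketch of the pattern just described, in which each worker consumes one job at a time from a shared queue and signals when it is ready for more. All names (process_project, the in-memory queue, the example project names) are illustrative placeholders, not the platform's real code.

import queue
import threading
import time

def process_project(job):
    # Placeholder for the real work: fetching files, running Pix4D/ODM in the swarm space, etc.
    print(f"processing {job}")
    time.sleep(2)

def worker_loop(job_queue, worker_id):
    """Each worker handles one processing request at a time."""
    while True:
        job = job_queue.get()               # block until the queue distributes a job to this worker
        print(f"worker {worker_id}: busy")  # analogous to telling the queue it cannot accept more work
        try:
            process_project(job)
        finally:
            job_queue.task_done()           # analogous to the ready-to-consume message
            print(f"worker {worker_id}: ready")

if __name__ == "__main__":
    jobs = queue.Queue()
    # Spawn two workers; an administrator could scale this number with available resources.
    for wid in range(2):
        threading.Thread(target=worker_loop, args=(jobs, wid), daemon=True).start()
    for project in ["bridge_survey", "facility_scan"]:
        jobs.put(project)
    jobs.join()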

3.0 Installation

3.1 System Requirements

Hardware:
1. CPU: Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz (12 cores or higher)
2. Memory (RAM): 64 GB
3. Disk: 10 TB

Software: The following software needs to be installed system-wide.
1. Operating System: RedHat Enterprise Linux 7.6
2. Python v2.7.5
3. python-pip2 compatible with Python v2.7.5
4. Python v3.6.5 or higher
5. python-pip3 v10.0.1 or higher
6. Docker v18.x or higher
7. Postgres v10.x or higher (additional configuration required, see Section 3.2)
8. Supervisord (additional configuration required, see Section 3.2)
9. python-virtualenv
10. Docker Python SDK (additional configuration required, see Section 3.2)

Docker images:
1. opendronemap/opendronemap:v0.3.1 (pull from Docker Hub)
2. connormanning/entwine (pull from Docker Hub)
3. connormanning/http-server (pull from Docker Hub)
4. docker-pix4d (build from source, see Section 3.2)
5. Common Operating Platform uas-website (build from source, see Section 3.2)

Most of these software dependencies can be installed off the shelf as is; documentation on those processes is available online. This section focuses on the parts of the installation that require configuration specific to the Common Operating Platform.

3.2 Configuring Software for the Common Operating Platform

I. Postgres Database: The Postgres database is the first component that needs to be set up during the installation process. It needs to be installed system-wide and requires some configuration to make it accessible to the website. The Postgres database can be installed as follows.

# Install the repository rpm
yum install https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-7-x86_64/pgdg-redhat11-11-2.noarch.rpm
# Install the client packages
yum install postgresql11
# Install the server packages
yum install postgresql11-server
# Initialize the database and enable automatic start
/usr/pgsql-11/bin/postgresql-11-setup initdb
systemctl enable postgresql-11
systemctl start postgresql-11


Make sure that Postgres is up and running by using the following command. The presence of several postgres processes, as shown below, indicates that it is.

$ ps -ef | grep postgres
postgres  1147      1  0 13:22 ?      00:00:00 /usr/bin/postgres -D /var/lib/pgsql/data -p 5432
postgres  1152   1147  0 13:22 ?      00:00:00 postgres: logger process
postgres  1154   1147  0 13:22 ?      00:00:00 postgres: checkpointer process
postgres  1155   1147  0 13:22 ?      00:00:00 postgres: writer process
postgres  1156   1147  0 13:22 ?      00:00:00 postgres: wal writer process
postgres  1157   1147  0 13:22 ?      00:00:00 postgres: autovacuum launcher process
postgres  1158   1147  0 13:22 ?      00:00:00 postgres: stats collector process
ninja    24045  10943  0 14:20 pts/3  00:00:00 grep --color=auto postgres

Now, we need to make some configuration changes. The first edit is to make the following change to /var/lib/pgsql/data/postgresql.conf

# Edit /var/lib/pgsql/data/postgresql.conf
# Change
listen_addresses = 'localhost'
# to
listen_addresses = '*'

Once that is done, make the following changes to /var/lib/pgsql/data/pg_hba.conf. Edit the entries for IPv4 connections from localhost and the network to use password authentication by changing the method field to md5.

# IPv4 local connections:
host    all    all    127.0.0.1/32    md5
host    all    all    0.0.0.0/0       md5

The next step is to restart Postgres and check that it is up and running using the method described above.

systemctl restart postgresql-11


If your database server is up and running, you are ready to proceed. If it is not, one of the above steps was improperly completed, and further debugging is required, which is beyond the scope of this documentation. The final step is to create a new database user and the necessary database, and to grant the privileges required to run the Common Operating Platform.

# Change user to postgres
sudo su - postgres
# Log in to the psql console using the client
psql
# Create the database (substitute your own database name)
CREATE DATABASE <db_name>;
# Create a new database user for the Django application
CREATE USER <db_user> WITH PASSWORD 'password';
# Set recommended connection defaults for the new role
ALTER ROLE <db_user> SET client_encoding TO 'utf8';
ALTER ROLE <db_user> SET default_transaction_isolation TO 'read committed';
ALTER ROLE <db_user> SET timezone TO 'UTC';
# Grant the newly created user full privileges on the newly created database
GRANT ALL PRIVILEGES ON DATABASE <db_name> TO <db_user>;

II. Building the website: Once the database is set up, the next step is to set up the site. The site runs inside a Docker container and is isolated from the host RHEL server, apart from the database connection. For the time being, we build the site from source on the RHEL server. This step assumes that Docker is already installed and running. Now, go to the folder where you cloned a copy of the source code for the Common Operating Platform website.

cd <website_source_directory>    # placeholder path to the cloned website repository
docker build -t uas-website .
docker images
REPOSITORY      TAG      IMAGE ID       CREATED        SIZE
uas-website     latest   f75335a66e99   1 minute ago   715MB

The build process takes some time; once it has completed, you should see the image available locally using the "docker images" command. Once the website image has been built successfully, we can proceed to set up the backend controller.

III. Setting up the backend controller: Now that we have a working database and the website Docker container has been built, it is time to set up the backend controller. The backend controller can run on a Linux operating system with the necessary dependencies installed. Additional storage mounted at /data/ is recommended for storing uploads and other processed data. Clone the GitHub repository on the target machine. The use of a virtualenv is highly


recommended but not mandatory. If using a virtualenv, create it (named "venv") inside the newly cloned repository. The following commands may be used to do this.

# Create the virtualenv inside the cloned repository (placeholder path)
virtualenv -p python3 <repository_path>/venv
# Activate the virtualenv
source <repository_path>/venv/bin/activate
# Install dependencies (Docker SDK for Python)
pip install docker

The website can now be started using the start_website.py script as follows.

python start_website.py --help
usage: start_website.py [-h] [-i IMAGE] [-v VERSION] [-e ENVIRONMENT]
                        [-d PSQL_DB] [-U PSQL_USER] -P PSQL_PASSWD
                        [-S PSQL_HOST] [-q PSQL_PORT] [-wh HOSTNAME]
                        [-wp WEBSITE_PORT] [-g GREYHOUND_SERVER]
                        [-gp GREYHOUND_PORT]

Script to start UAS Website

optional arguments:
  -h, --help            show this help message and exit
  -i IMAGE, --image IMAGE
                        Name of docker image
  -v VERSION, --version VERSION
                        Version of docker image
  -e ENVIRONMENT, --environment ENVIRONMENT
                        Environment of docker container
  -d PSQL_DB, --psql_db PSQL_DB
                        Database name
  -U PSQL_USER, --psql_user PSQL_USER
                        Database user
  -P PSQL_PASSWD, --psql_passwd PSQL_PASSWD
                        Database password
  -S PSQL_HOST, --psql_host PSQL_HOST
                        Database hostname
  -q PSQL_PORT, --psql_port PSQL_PORT
                        Database port number
  -wh HOSTNAME, --hostname HOSTNAME
                        Hostname of website
  -wp WEBSITE_PORT, --website_port WEBSITE_PORT
                        Port number of website
  -g GREYHOUND_SERVER, --greyhound_server GREYHOUND_SERVER
                        Hostname of greyhound server
  -gp GREYHOUND_PORT, --greyhound_port GREYHOUND_PORT
                        Port number of greyhound server
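For example, an invocation might look like the following. Every value shown is a placeholder: substitute the database name, user, and password created in step I, and your own host names and ports.

python start_website.py \
    --image uas-website --version latest \
    --psql_db <db_name> --psql_user <db_user> --psql_passwd 'password' \
    --psql_host 127.0.0.1 --psql_port 5432 \
    --hostname <hostname> --website_port 80 \
    --greyhound_server <hostname> --greyhound_port 8080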


The http-server that serves point clouds to Potree can also be run using the starter script "start_greyhound.py". Check that the container is up and running using the "docker ps" command.

python start_greyhound.py

If both instances are up and running, open a browser (Google Chrome or Mozilla Firefox is recommended) and try to access the website by typing in the IP address or the DNS name of the server. You should now be able to see the home page.

IV. docker-pix4d: The docker-pix4d container is responsible for processing Pix4D projects on the Common Operating Platform. It needs to be built from source as follows. The first step is to clone a copy of the code to the server. Then, create a folder called "bin" inside it and copy the "pix4dengine-0.1.0-py3-none-any.whl" and "pix4dmapper_4.3.31_amd64.deb" files into it. Version updates will require changes to the Dockerfile. Then build it as shown below.

docker build -t docker-pix4d .

V. Supervisord: Once we have a copy of the backend controller installed on the server, it is time to configure workers to run it. A worker is an instance of the backend controller, and the system is capable of running multiple workers based on the availability of computational resources. The first step is to pull the necessary Docker containers onto the host system. Though this step is not strictly necessary, it is considered good practice. The backend uses the following containers:
i. docker-pix4d
ii. opendronemap/opendronemap:v0.3.1
iii. connormanning/entwine
iv. connormanning/http-server

The first one was built in the previous step; the remaining three can be pulled from Docker Hub using the "docker pull" command. The supervisor Python package will be used to run instances of the backend controller, which processes workflows on the Common Operating Platform. It is installed system-wide using python-pip2.

pip2 install supervisor

The next step is to create an API key for the worker. This key is used to authenticate the worker's requests to the website API so that it can retrieve pending task information. To do this, log in to the website, go to Settings, and create a new API key.


Copy the value of the API key and paste it into the settings.py file under "api_key". Also make sure the entries for the hostname and the Pix4D username and password are correct. Now it is time to run the worker instances. All worker instances are run under supervisor. Install supervisor and create a configuration file at /etc/supervisor/conf.d/worker#.conf, where # is the ID of the worker, labeled sequentially. A sample configuration file is shown below.

[program:worker1]
command = <repository_path>/UAS-Backend-Controller/run_worker.sh    ; Command to start the app
user = root                                                         ; User to run as
stdout_logfile = /data/uas/logs/supervisor/backend1.log             ; Where to write log messages
autostart = true
autorestart = true
redirect_stderr = true                                              ; Save stderr in the same log
environment = LANG=en_US.UTF-8,LC_ALL=en_US.UTF-8                   ; Set UTF-8 as default encoding
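This SOP does not specify how the new configuration is loaded. Assuming supervisord is already running and set to include files from /etc/supervisor/conf.d/, one common way to pick up and start the new worker program is with the standard supervisorctl commands:

supervisorctl reread    # detect the new worker#.conf files
supervisorctl update    # start (or restart) the affected worker programs
supervisorctl status    # confirm that the workers are running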

To run multiple instances of the worker, create additional configuration files as necessary. The platform is now set up and ready to use.

4.0 Usage

This section documents the various features of the Common Operating Platform and how to use them. The platform currently supports processing 3D models using Pix4D and OpenDroneMap, as well as visualizing them on the website. Users can also process point cloud files (.las and .laz) offline on their PCs and upload them to the Common Operating Platform for visualization in the onboard Potree [4] viewer. For the best user experience, please use Google Chrome or Mozilla Firefox; Internet Explorer and Microsoft Edge are not supported.

4.1 Register and Login

To register for a user account, navigate to http://<hostname>/register and fill out the registration form.


Fig. Register user account

To log in, navigate to http://<hostname>/login/ and enter the username and password you used to register.

4.2 Navigating the dashboard

Once the user is logged in, the dashboard is easy to navigate using the overhead taskbar. Click on the available buttons to perform the desired action. A brief description of the available actions follows.

i. Files: Shows the page where all the uploaded files are stored
ii. Backend Jobs: Displays every 3D modeling project created on the platform, along with its status and associated files

iii. Settings: Displays the view which has options to change the password and create/delete API keys
iv. Logout: Logs the user out

4.3 Processing a 3D model on the Common Operating Platform

3D models can be processed on the Common Operating Platform using Pix4D or OpenDroneMap by following the steps below. Users must be logged in to do this.

i. Create a Pix4D/ODM project: Use the "Create PIX4D Project" or "Create ODM Project" buttons on the lower right side to create your project folder.

ii. Upload image files into the project folder: After opening your newly created project folder, use the “Upload Files” button on the lower right side to upload your images.

iii. Start processing: When you're ready, click on the green "Process Project" button on the right side to initiate project processing on the backend server.

iv. Check the progress of jobs on the "Backend Jobs" tab: This tab has a table of all past and present projects that the system has processed. Click on the link corresponding to the point cloud you want to view in the second column. Progress can also be checked by navigating to the project directory, where a box in the top right indicates progress. Once the project has finished processing successfully, a link to view the point cloud in the onboard Potree viewer will also appear.

The status of your project progresses from 'created' → 'pending' → 'queued' → 'processing' → 'done' or 'error', and is continually updated by the backend processing system.

4.4 Visualize a point cloud file

The Common Operating Platform supports the visualization of point cloud files (.las and .laz). The user uploads them via the website using the following steps:


i. Create a Visualization Project: Use the “Visualize 3D model” button on the lower right side.

ii. Upload point cloud file into the project folder: After opening the newly created project folder, use the “Upload Files” button on the lower right side to upload your point cloud file(s).

iii. Start Processing: When you’re ready, click on the green “Process Project” button on the right side to initiate project processing in the backend server.


iv. Check the progress of jobs in the “Backend Jobs” tab: This tab has a table of all past and present projects that the system has processed. Click on the link corresponding to the point cloud you want to view in the second column.

The status of your project progresses from 'created' → 'pending' → 'queued' → 'processing' → 'done' or 'error', and is continually updated by the backend processing system.

4.5 Change Password

Users can change their passwords from the overhead Settings tab. However, if a user has forgotten their previous password or is unable to log in to the platform, they should contact the system administrator at [email protected] CHANGE THIS. Please note that this view is only available if you are signed in.


4.6 Create API Keys

Users can create API keys for their devices in the Settings tab. Please note that API keys cannot be edited once they have been generated. API keys are 128-bit, randomly generated universally unique identifiers used for authenticating device API requests to the server. More information on how to use the APIs can be found in the API documentation (see also Section 2.1.2); a brief illustrative example follows.
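For illustration only (again using the third-party requests library; the hostname, key, and file path are placeholders, and argument handling may differ from the actual implementation), a device could use its newly created key with the Download file API described in Section 2.1.2:

import requests

API_KEY = "00000000-0000-0000-0000-000000000000"   # placeholder key created in the Settings tab
BASE_URL = "http://<hostname>"                      # placeholder server address

# Download a processed point cloud from the virtual file system and save it locally
response = requests.get(
    BASE_URL + "/api/v1/download/",
    headers={"X-API-Key": API_KEY},
    params={"file": "/projects/bridge_survey/pointcloud.las"},
)
response.raise_for_status()
with open("pointcloud.las", "wb") as local_file:
    local_file.write(response.content)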

4.7 Sharing Content

The Common Operating Platform supports the sharing of files and projects. To share data and point cloud projects, the user shares the associated link, which can be found in the address bar of the browser. For example, to share an uploaded video, log in to the dashboard, navigate to the location where the video is stored, right-click the link to the video as shown below, and select "Copy Link Location" to copy the link to your clipboard. The copied link can now be shared with others; they will not need to log in to view shared content.


Similarly, you can also share the link to the point cloud files generated by the Common Operating Platform. To do this, navigate to the Potree viewer (shown below) and share the link from the address bar. The recipient will not need to log in to view the point cloud.


Once a project has finished processing, the entire project, including the images, 3D models, logs, etc. associated with it, can be downloaded as a zip file using the link available in the "Processed files" column of the "Backend Jobs" tab.


References

[1] Y. Liu, Y. Yue, and L. Guo, "UNIX File System," in UNIX Operating System, Berlin, Heidelberg: Springer Berlin Heidelberg, 2011, pp. 149–185.
[2] Pix4D SA, "Pix4DEngine Server," 2019. [Online]. Available: https://www.pix4d.com/enterprise/pix4dengine.
[3] "Drone Mapping Software - OpenDroneMap." [Online]. Available: https://www.opendronemap.org/. [Accessed: 28-Feb-2019].
[4] "Entwine & Potree." [Online]. Available: http://potree.entwine.io/. [Accessed: 28-Feb-2019].
[5] M. Masse, REST API Design Rulebook: Designing Consistent RESTful Web Service Interfaces.
[6] N. George, Mastering Django: Core (The Django Book), 2nd ed.
