Report of Contributions

CHEP 2016 Conference, San Francisco, October 8-14, 2016
https://indico.cern.ch/e/505613

Contribution ID: 0 / Type: Poster

Experiment Management System for the SND Detector

Tuesday, 11 October 2016 16:30 (15 minutes)

We present a new experiment management system for the SND detector at the VEPP-2000 collider (Novosibirsk). An important part of it is operator access to the experiment databases (configuration, conditions and metadata).

The system has a client-server architecture, and users interact with it via a web interface. The server side consists of several logical layers: user interface templates, the description and initialization of template variables, and implementation details such as database interaction. The templates are intended to be simple enough in structure to be written not only by IT professionals but also by physicists.

Experiment configuration, conditions and metadata are stored in a database managed by the MySQL DBMS, composed as records with a hierarchical structure.

The server side is implemented in NodeJS, a modern JavaScript framework, for which a new template engine has been designed. The key feature of our engine is that it hides asynchronous computations: templates may contain heterogeneous synchronous-style expressions (mixing synchronous and asynchronous values and function calls), which lets template authors focus on the values they want to get rather than on the callbacks needed to obtain them.

Part of the system has been put into production. It includes templates for viewing and editing the first-level trigger configuration and the equipment configuration, as well as for viewing the experiment metadata and the index of experiment conditions data.

Primary Keyword (Mandatory): Databases
Secondary Keyword (Optional): Monitoring
Tertiary Keyword (Optional):

Primary author: Mr PUGACHEV, Konstantin (Budker Institute of Nuclear Physics (RU))
Co-author: KOROL, Aleksandr (Budker Institute of Nuclear Physics (RU))
Presenter: Mr PUGACHEV, Konstantin (Budker Institute of Nuclear Physics (RU))
Session Classification: Posters A / Break
Track Classification: Track 2: Offline Computing
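The abstract does not show the engine's API, so the following minimal sketch illustrates only the general idea of "asynchronous computations hiding". It is written in Python with asyncio rather than the NodeJS used by SND, and every name in it (resolve, render, fetch_trigger_rate) is invented for the example: a template author references a value the same way whether it is a plain constant or has to be fetched asynchronously from a database.

```python
import asyncio
import inspect

async def resolve(value):
    # Accept a plain value, an awaitable, or a zero-argument callable
    # returning either, and always hand back the final value.
    if callable(value):
        value = value()
    if inspect.isawaitable(value):
        value = await value
    return value

async def render(template, context):
    # Resolve all context entries concurrently, then fill the template
    # with an ordinary synchronous format() call.
    keys = list(context)
    values = await asyncio.gather(*(resolve(context[k]) for k in keys))
    return template.format(**dict(zip(keys, values)))

async def fetch_trigger_rate():
    await asyncio.sleep(0.1)          # stands in for a database query
    return 1234.5

async def main():
    page = await render(
        "Run {run_number}: trigger rate {trigger_rate} Hz",
        {"run_number": 42, "trigger_rate": fetch_trigger_rate},
    )
    print(page)                       # Run 42: trigger rate 1234.5 Hz

asyncio.run(main())
```

The template text itself stays synchronous in style; only the engine knows which context entries involved a callback.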
Contribution ID: 2 / Type: Oral

Reconstruction software of the silicon tracker of DAMPE mission

Thursday, 13 October 2016 11:00 (15 minutes)

DAMPE is a powerful space telescope launched in December 2015, able to detect electrons and photons in a wide range of energy (5 GeV to 10 TeV) and with unprecedented energy resolution. The silicon tracker is a crucial component of the detector, able to determine the direction of detected particles and trace back the origin of incoming gamma rays. This contribution covers the reconstruction software of the tracker, comprising the geometry convertor, the track reconstruction and the detector alignment algorithms.

The convertor is an in-house, standalone system that converts the CAD drawings of the detector and implements the detector geometry in the GDML (Geometry Description Markup Language) format.

Next, the particle track finding algorithm is described. Since the DAMPE tracker identifies the particle trajectory independently in two orthogonal projections, there is an inherent ambiguity in combining the two measurements. The 3D track reconstruction therefore becomes a computationally intensive task, with the number of possible combinations increasing quadratically with the number of particle tracks. To alleviate the problem, a special technique has been developed which reconstructs track fragments independently in the two projections and combines them into the final result using a 3D Kalman fit of pre-selected points.

Finally, the detector alignment algorithm aligns the detector geometry on the basis of real data, with a precision better than the intrinsic resolution of the tracker. The algorithm optimises a set of around four thousand parameters (offsets and rotations of the detecting elements) in an iterative procedure based on the minimisation of a global likelihood fit of the reconstructed tracks. Since the algorithm is agnostic of the specifics of the detector, it could be reused, with minor modifications, for similar optimisation problems in other projects.

This contribution will give an insight into the developed algorithms and the results obtained during the first years of operational experience on the ground and in orbit.

Primary Keyword (Mandatory): Reconstruction
Secondary Keyword (Optional): Algorithms
Tertiary Keyword (Optional): Collaborative tools

Primary author: Dr TYKHONOV, Andrii (Universite de Geneve (CH))
Co-author: Prof. WU, Xin (Universite de Geneve (CH))
Presenter: Dr TYKHONOV, Andrii (Universite de Geneve (CH))
Session Classification: Track 2: Offline Computing
Track Classification: Track 2: Offline Computing
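The abstract gives no implementation details of the alignment. As a rough illustration of the iterative fit-then-update structure that track-based alignment takes, here is a toy sketch in Python, deliberately reduced to per-plane offsets in a single coordinate, with an ordinary least-squares straight-line fit standing in for DAMPE's global likelihood fit; all plane positions, noise figures and names are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
z = np.array([0., 30., 60., 90., 120., 150.])   # plane positions (mm), assumed
true_offsets = rng.normal(0, 0.5, size=z.size)  # misalignments to recover
true_offsets[[0, -1]] = 0.0                     # fix first/last plane: removes
                                                # the global shift/tilt freedom
n_tracks = 2000
slopes = rng.normal(0, 0.01, n_tracks)
intercepts = rng.uniform(-10, 10, n_tracks)
noise = rng.normal(0, 0.05, (n_tracks, z.size))
# Measured hit = true track position + plane misalignment + resolution noise.
hits = intercepts[:, None] + slopes[:, None] * z + true_offsets + noise

align = np.zeros(z.size)                        # current alignment estimate
A = np.vstack([np.ones_like(z), z]).T           # straight-line design matrix
for iteration in range(50):
    corrected = hits - align
    # Fit all tracks at once: least-squares line through the corrected hits.
    params, *_ = np.linalg.lstsq(A, corrected.T, rcond=None)
    predicted = (A @ params).T
    residuals = corrected - predicted
    align += residuals.mean(axis=0)             # update each plane's offset
    align[[0, -1]] = 0.0                        # keep reference planes fixed

print(np.round(align - true_offsets, 3))        # ~0 everywhere once converged
```

The real algorithm handles rotations as well as offsets and thousands of parameters, but it shares this loop: refit tracks under the current geometry, then move the geometry to shrink the residuals.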
Contribution ID: 3 / Type: Oral

HEPData - a repository for high energy physics data exploration

Thursday, 13 October 2016 14:00 (15 minutes)

The Durham High Energy Physics Database (HEPData) has been built up over the past four decades as a unique open-access repository for scattering data from experimental particle physics. It comprises data points from plots and tables underlying over eight thousand publications, some of which are from the Large Hadron Collider (LHC) at CERN.

HEPData has been rewritten from the ground up in the Python programming language and is now based on the Invenio 3 framework. The software is open source, with the current site available at http://hepdata.net, offering: 1) a more streamlined submission system; 2) advanced submission reviewing functionality; 3) powerful full-repository search; 4) an interactive data plotting library; 5) an attractive, easy-to-use interface; and 6) a new data-driven visual exploration tool. Here we will report on our efforts to bring findable, accessible, interoperable, and reusable (FAIR) principles to high energy physics.

Our presentation will cover the background of HEPData, the limitations of the current tool, and why we created the new system using Invenio 3. We will present our system by considering four important aspects of the work: 1) the submission process; 2) making the data discoverable; 3) making data first-class citable objects; and 4) making data interoperable and reusable.

Primary Keyword (Mandatory): Databases
Secondary Keyword (Optional): Preservation of analysis and data
Tertiary Keyword (Optional): Visualization

Primary author: Dr MAGUIRE, Eamonn James (CERN)
Co-authors: Prof. KRAUSS, Frank Martin (University of Durham (GB)); Dr WATT, Graeme (Durham University); STYPKA, Jan Andrzej (AGH University of Science and Technology (PL)); HEINRICH, Lukas Alexander (New York University (US)); Dr WHALLEY, Michael (Durham University); Dr MELE, Salvatore (CERN)
Presenter: HEINRICH, Lukas Alexander (New York University (US))
Session Classification: Track 8: Security, Policy and Outreach
Track Classification: Track 8: Security, Policy and Outreach

Contribution ID: 4 / Type: Oral

Reconstruction of Micropattern Detector Signals using Convolutional Neural Networks

Tuesday, 11 October 2016 14:45 (15 minutes)

Micropattern gaseous detector (MPGD) technologies, such as GEMs or MicroMegas, are particularly suitable for precision tracking and triggering in high-rate environments. Given their relatively low production costs, MPGDs are an exemplary candidate for the next generation of particle detectors. Having acknowledged these advantages, both the ATLAS and CMS collaborations at the LHC are exploiting these new technologies in their detector upgrade programmes over the coming years. When MPGDs are utilized for triggering purposes, the measured signals need to be precisely reconstructed within less than 200 ns, which can be achieved by the use of FPGAs.

In this work, we present a novel approach to identifying reconstructed signals, their timing and their spatial position on the detector. In particular, we study the effect of noise and dead readout strips on the reconstruction performance. Our approach leverages the potential of convolutional neural networks (CNNs), which have recently shown outstanding performance in a range of modelling tasks. The proposed CNN architecture is designed to be simple enough that it can be implemented directly on an FPGA and thus provide precise information on reconstructed signals already at trigger level.

Primary Keyword (Mandatory): Artificial intelligence/Machine learning
Secondary Keyword (Optional): Reconstruction
Tertiary Keyword (Optional):

Primary author: Mrs FLEKOVA, Lucie (Technical University of Darmstadt)
Co-authors: DUDDER, Andreas Christian (Johannes-Gutenberg-Universitaet Mainz (DE)); SCHOTT, Matthias (Johannes-Gutenberg-Universitaet Mainz (DE))
Presenter: Mrs FLEKOVA, Lucie (Technical University of Darmstadt)
Session Classification: Track 1: Online Computing
Track Classification: Track 1: Online Computing
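The network architecture is not specified in the abstract; purely as an illustration, the sketch below (Python/PyTorch, with every layer size an assumption and the class name StripCNN invented here) shows the kind of small 1D CNN over a window of readout strips that the FPGA constraint suggests: a couple of convolutions followed by a linear head producing a hit score and a position estimate. Dead strips of the kind studied in the contribution could be emulated in training by zeroing random input channels.

```python
import torch
import torch.nn as nn

class StripCNN(nn.Module):
    def __init__(self, n_strips: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, padding=2),  # local charge shapes
            nn.ReLU(),
            nn.Conv1d(8, 8, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),                            # 32 strips -> 16
        )
        # Two outputs: a hit/no-hit logit and a regressed hit position.
        self.head = nn.Linear(8 * (n_strips // 2), 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, n_strips) strip charges, e.g. ADC counts.
        h = self.features(x)
        return self.head(h.flatten(start_dim=1))

model = StripCNN()
charges = torch.randn(4, 1, 32)         # a batch of 4 strip windows
hit_logit, position = model(charges).unbind(dim=1)
print(hit_logit.shape, position.shape)  # torch.Size([4]) torch.Size([4])
```

Keeping the network to a few small fixed-size convolutions is what makes a direct FPGA port plausible: each layer maps to a short pipeline of multiply-accumulate stages with constant latency.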
Contribution ID: 6 / Type: Poster

Federated data storage system prototype for LHC experiments and data intensive science

Thursday, 13 October 2016 16:30 (15 minutes)

Rapid increase of data volume from the experiments running at the Large Hadron