Optimizing IBM Algorithmics' Mark-To-Future Aggregation Engine


Optimizing IBM Algorithmics' Mark-to-Future Aggregation Engine for Real-time Counterparty Credit Risk Scoring

Amy Wang (IBM Toronto Software Lab)
Jan Treibig (RRZE, University Erlangen)
Bob Blainey (IBM Toronto Software Lab)
Peng Wu (IBM T.J. Watson Research Center)
Yaoqing Gao (IBM Toronto Software Lab)
Barnaby Dalton (IBM Toronto Software Lab)
Danny Gupta (IBM Toronto Software Lab)
Fahham Khan (IBM Toronto Software Lab)
Neil Bartlett (IBM Algorithmics)
Lior Velichover (IBM Algorithmics)
James Sedgwick (IBM Algorithmics)
Louis Ly (IBM Algorithmics)

ABSTRACT

The concept of default and its associated painful repercussions has been a particular area of focus for financial institutions, especially after the 2007/2008 global financial crisis. Counterparty credit risk (CCR), i.e., the risk that a counterparty defaults prior to the expiration of a contract, has gained a tremendous amount of attention, which has resulted in new CCR measures and regulations being introduced. In particular, users would like to measure the potential impact of each real-time trade, or potential real-time trade, against exposure limits for the counterparty using Monte Carlo simulations of the trade value, and also to calculate the Credit Value Adjustment (i.e., how much it will cost to cover the risk of default with this particular counterparty if/when the trade is made). These rapid limit checks and CVA calculations demand more compute power from the hardware. Furthermore, with the emergence of electronic trading, the extremely low latency and high throughput required for real-time computation push both software and hardware capabilities to the limit. Our work focuses on optimizing the computation of risk measures and trade processing in the existing Mark-to-Future Aggregation (MAG) engine in the IBM Algorithmics product offering. We propose a new software approach, based on pre-compilation, to speed up end-to-end trade processing. The net result is an impressive speedup of 3-5x over the existing MAG engine on a real client workload, for processing trades that perform limit checks and CVA reporting on exposures while taking full collateral modelling into account.

Categories and Subject Descriptors
D.1.3 [Concurrent Programming]: Parallel programming

General Terms
Algorithms, Economics, Performance

Keywords
Risk Analytics, Multicore, Collateral

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
WHPCF'13, November 18, 2013, Denver, CO, USA
Copyright is held by the owner/author(s). Publication rights licensed to ACM. ACM 978-1-4503-2507-3/13/11 ...$15.00. http://dx.doi.org/10.1145/2535557.2535567

1. INTRODUCTION

Counterparty Credit Risk (CCR) is a metric used by financial institutions to evaluate the likelihood of the counterparty of a financial contract (referred to as the counterparty for short) defaulting prior to the expiration of the contract. It is critical for a financial institution to predict the CCR of a counterparty when making a trading decision and when pricing the value of a trade. Traditionally, trades are made by human beings, and the response time for CCR typically falls into the range of hundreds of milliseconds. The emergence of electronic and e-commerce trading, however, demands much faster response times and higher throughput than the current generation of CCR software, which is designed mainly for human traders. Furthermore, it is also highly desirable to improve the precision of the risk computation. A CCR figure is more precise if its computation takes more market scenarios into consideration and/or involves more timesteps. All of these requirements demand highly efficient software implementations and effective utilization of hardware resources.

The Mark-to-Future Aggregation engine (MAG) is a key component of the risk computation software from Algorithmics that performs the statistical measurements of the CCR computation. The current generation of MAG was designed for human traders and sustains a throughput of 3-5 trades per second with a latency of up to 300 ms per trade. In this paper, we describe our approach to improving the end-to-end throughput and latency of the MAG engine. The targeted risk precision is defined in terms of 5000 market scenarios by 250 timesteps.

There has been much recent work on performance optimization for financial codes. Much of it exploits accelerator technologies such as GPGPUs, the Cell BE, and FPGAs [2, 1, 8], while other work focuses on algorithm-level improvements, such as [6]. One notable work in this area is from Maxeler Technologies, on using FPGAs to accelerate credit derivatives computation for JPMC [11]. That work focuses primarily on the pricing aspect of a trade, as pricing algorithms are highly parallel. A similar effort employing FPGAs to speed up the pricing engine was undertaken by Algorithmics in the past [4].

In this work, we focus on another critical component of financial software: the aggregation engine. In contrast to prior published work in risk analysis, our work targets real production code. We found that optimizing a complex piece of production software requires one to take a holistic approach and to tackle performance bottlenecks at all levels, such as algorithm and data structure design, redundant computation elimination, memory subsystem optimization, and exploiting parallelism. In this paper, we demonstrate the steps taken to identify performance bottlenecks in the MAG engine and the techniques used to address some of the overhead. We demonstrate a speedup of 3-5x over the existing MAG engine for the limit-checking and CVA-reporting-on-exposures scenario, using a real client workload on an off-the-shelf multicore server. We believe this work is a good starting point for closing the gap between the performance of the existing MAG engine and the ultimate latency and throughput requirements of an online trading system.

The paper is organized as follows. Section 2 gives an overview of the current MAG engine implementation. Section 3 describes our approach to optimizing performance, and Section 4 explains our optimizations for three important kernels. Section 5 presents our approach to platform-specific optimization of the MAG engine using adaptive optimization. Performance results are discussed in Section 6, and we conclude and outline future work in Section 7.

2. THE MAG ENGINE

The Mark-to-Future Aggregation (MAG) engine is where statistical measurements for CCR, such as Credit Value Adjustment (CVA) and collateral modeling, are computed. This section gives an overview of its data structures and computation.

2.1 Computation Graph

Each counterparty is associated with a computation graph that captures the dependencies between computations. A node consists of a computation kernel and its internal data, called states. States are typically vectors or dense matrices called sheets. A sheet is a two-dimensional data structure organized by scenarios and time points. In the current implementation, sheets are laid out in memory sequentially along the scenario dimension.

There are two types of nodes in a computation graph: consolidation nodes and transformation nodes. Both types of node produce a new result, but only consolidation nodes may modify their own states. When applying the computation of a consolidation node, states are often first read and then modified, for example by element-wise summation of an incoming sheet into the sheet associated with the consolidation node. A transformation node, on the other hand, does not modify any states.

To give a sense of the scale of the data structures, a typical production run of the MAG engine today may monitor 10,000+ counterparties (i.e., 10,000+ computation graphs). On average, each computation graph contains 10 nodes, and the states associated with a computation graph node can be several megabytes.

2.2 Trade Risk Scoring Computation

A trade consists of two pieces of information: a trade value sheet and trade parameters. The trade value sheet usually comes from the pricing engine and holds simulated floating-point values over a set of market scenarios and timesteps. Trade parameters include which counterparty is trading and other information such as the maturity date of the trade. The counterparty information in the trade parameters determines which computation graph is used for trade evaluation. Evaluating a trade on a computation graph typically refers to the process of absorbing the trade value sheet into some consolidation nodes of the graph and/or computing statistical measures on computation graph nodes.

A trade can be either read-only or commit. Read-only trades (e.g., what-if or lookup trades) do not modify any state of the computation graph, whereas commit trades do. When a trade is evaluated, the computation kernels associated with the computation graph are executed in postfix order, similar to evaluating an expression tree. A computation kernel on a consolidation node takes as input its own state, as well as the outputs, or states, of its children. A particular leaf node, as selected by the trade parameters, takes as input its own state and the trade sheet. This process of propagating the trade value from the leaf node up is termed the contribution absorption process.
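The contribution absorption process described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not MAG code: the names (Node, evaluate), the element-wise-sum kernel, and the tiny 4-scenario-by-3-timestep sheets are all invented for the example (a production sheet is on the order of 5000 scenarios by 250 timesteps, and real kernels do more than summation).

```python
import numpy as np

SCENARIOS, TIMESTEPS = 4, 3  # toy sizes; production targets ~5000 x 250

class Node:
    """One node of a (hypothetical) computation graph.

    consolidation=True  -> may persist changes to its own state
    consolidation=False -> a transformation node: produces a result only
    """
    def __init__(self, children=(), consolidation=False):
        self.children = list(children)
        self.consolidation = consolidation
        # A "sheet": one row per time point, so memory runs sequentially
        # along the scenario dimension, as in Section 2.1.
        self.state = np.zeros((TIMESTEPS, SCENARIOS))

def evaluate(node, trade_sheet, target_leaf, commit):
    """Postfix (post-order) evaluation, like evaluating an expression tree."""
    if not node.children:
        # Only the leaf selected by the trade parameters sees the trade sheet.
        incoming = trade_sheet if node is target_leaf else 0.0
    else:
        incoming = sum(evaluate(c, trade_sheet, target_leaf, commit)
                       for c in node.children)
    result = node.state + incoming      # read own state, element-wise sum
    if node.consolidation and commit:
        node.state = result             # only commit trades persist state
    return result

# A two-level graph: two leaves feeding one consolidation root.
leaf_a = Node(consolidation=True)
leaf_b = Node(consolidation=True)
root = Node(children=[leaf_a, leaf_b], consolidation=True)

trade = np.full((TIMESTEPS, SCENARIOS), 2.0)      # simulated trade values
exposure = evaluate(root, trade, leaf_a, commit=True)
print(exposure.sum())  # -> 24.0
```

A read-only (what-if) trade uses the same traversal with `commit=False`, so the graph's states are left untouched, which is how the sketch distinguishes the two trade types of Section 2.2.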