Scalable and Coordinated Scheduling for Cloud-Scale Computing



Apollo: Scalable and Coordinated Scheduling for Cloud-Scale Computing

Eric Boutin, Jaliya Ekanayake, Wei Lin, Bing Shi, and Jingren Zhou, Microsoft; Zhengping Qian, Ming Wu, and Lidong Zhou, Microsoft Research

https://www.usenix.org/conference/osdi14/technical-sessions/presentation/boutin

This paper is included in the Proceedings of the 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI '14), October 6–8, 2014, Broomfield, CO. ISBN 978-1-931971-16-4. Open access to the Proceedings is sponsored by USENIX.

Abstract

Efficiently scheduling data-parallel computation jobs over cloud-scale computing clusters is critical for job performance, system throughput, and resource utilization. It is becoming even more challenging with growing cluster sizes and more complex workloads with diverse characteristics. This paper presents Apollo, a highly scalable and coordinated scheduling framework, which has been deployed on production clusters at Microsoft to schedule thousands of computations with millions of tasks efficiently and effectively on tens of thousands of machines daily. The framework performs scheduling decisions in a distributed manner, utilizing global cluster information via a loosely coordinated mechanism. Each scheduling decision considers future resource availability and optimizes various performance and system factors together in a single unified model. Apollo is robust, with means to cope with unexpected system dynamics, and can take advantage of idle system resources gracefully while supplying guaranteed resources when needed.

1 Introduction

MapReduce-like systems [7, 15] make data-parallel computations easy to program and allow running jobs that process terabytes of data on large clusters of commodity hardware. Each data-processing job consists of a number of tasks with inter-task dependencies that describe execution order. A task is a basic unit of computation that is scheduled to execute on a server.

Efficient scheduling, which tracks task dependencies and assigns tasks to servers for execution when ready, is critical to the overall system performance and service quality. The growing popularity and diversity of data-parallel computation makes scheduling increasingly challenging. For example, the production clusters that we use for data-parallel computations are growing in size, each with over 20,000 servers. A growing community of thousands of users from many different organizations submit jobs to the clusters every day, resulting in a peak rate of tens of thousands of scheduling requests per second. The submitted jobs are diverse in nature, with a variety of characteristics in terms of data volume to process, complexity of computation logic, degree of parallelism, and resource requirements. A scheduler must (i) scale to make tens of thousands of scheduling decisions per second on a cluster with tens of thousands of servers; (ii) maintain fair sharing of resources among different users and groups; and (iii) make high-quality scheduling decisions that take into account factors such as data locality, job characteristics, and server load, to minimize job latencies while utilizing the resources in a cluster fully.

This paper presents the Apollo scheduling framework, which has been fully deployed to schedule jobs in cloud-scale production clusters at Microsoft, serving a variety of on-line services. Scheduling billions of tasks daily efficiently and effectively, Apollo addresses the scheduling challenges in large-scale clusters with the following technical contributions.

• To balance scalability and scheduling quality, Apollo adopts a distributed and (loosely) coordinated scheduling framework, in which independent scheduling decisions are made in an optimistic and coordinated manner by incorporating synchronized cluster utilization information. Such a design strikes the right balance: it avoids the suboptimal (and often conflicting) decisions by independent schedulers of a completely decentralized architecture, while removing the scalability bottleneck and single point of failure of a centralized design.

• To achieve high-quality scheduling decisions, Apollo schedules each task on a server that minimizes the task completion time. The estimation model incorporates a variety of factors and allows a scheduler to perform a weighted decision, rather than solely considering data locality or server load. The data-parallel nature of computation allows Apollo to refine the estimates of task execution time continuously, based on observed runtime statistics from similar tasks during job execution (a sketch follows this list).

• To supply individual schedulers with cluster information, Apollo introduces a lightweight hardware-independent mechanism to advertise load on servers. When combined with a local task queue on each server, the mechanism provides a near-future view of resource availability on all the servers, which is used by the schedulers in decision making.

• To cope with unexpected cluster dynamics, suboptimal estimations, and other abnormal runtime behaviors, which are facts of life in large-scale clusters, Apollo is made robust through a series of correction mechanisms that dynamically adjust and rectify suboptimal decisions at runtime. We present a unique deferred correction mechanism that allows resolving conflicts between independent schedulers only if they have a significant impact, and show that such an approach works well in practice.

• To drive high cluster utilization while maintaining low job latencies, Apollo introduces opportunistic scheduling, which effectively creates two classes of tasks: regular tasks and opportunistic tasks. Apollo ensures low latency for regular tasks, while using the opportunistic tasks for high utilization to fill in the slack left by regular tasks. Apollo further uses a token-based mechanism to manage capacity and to avoid overloading the system by limiting the total number of regular tasks.

• To ensure no service disruption or performance regression when we roll out Apollo to replace a previous scheduler deployed in production, we designed Apollo to support staged rollout to production clusters and validation at scale. Those constraints have received little attention in research, but are nevertheless crucial in practice, and we share our experiences in achieving those demanding goals.
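The second contribution can be made concrete with a small sketch. The following C fragment is illustrative only: the decomposition of the estimate into initialization (data transfer) time, expected queue wait, and predicted run time paraphrases the paper's description, but the struct, function names, and plain-sum scoring are assumptions, not Apollo's implementation.

```c
#include <stdio.h>
#include <stddef.h>

/* Illustrative only: per-server inputs to a completion-time estimate. */
struct server_info {
    double init_time;      /* time to transfer/localize the task's inputs */
    double expected_wait;  /* near-future wait derived from the server's queue */
    double est_runtime;    /* predicted from statistics of similar tasks */
};

/* Pick the server minimizing estimated completion time. */
static size_t pick_server(const struct server_info *s, size_t n)
{
    size_t best = 0;
    for (size_t i = 1; i < n; i++) {
        double e  = s[i].init_time + s[i].expected_wait + s[i].est_runtime;
        double eb = s[best].init_time + s[best].expected_wait + s[best].est_runtime;
        if (e < eb)
            best = i;
    }
    return best;
}

int main(void)
{
    struct server_info servers[] = {
        {0.0, 2.5, 1.0},   /* local data, but a long queue */
        {0.8, 0.1, 1.0},   /* remote data, nearly idle     */
    };
    /* The weighted view prefers server 1 despite worse data locality. */
    printf("chosen server: %zu\n", pick_server(servers, 2));
    return 0;
}
```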
We observe that Apollo schedules over 20,000 tasks per second in a production cluster with over 20,000 machines. It also delivers high scheduling quality, with 95% of regular tasks experiencing a queuing delay of under 1 second, while achieving consistently high (over 80%) and balanced CPU utilization across the cluster.

The rest of the paper is organized as follows. Section 2 presents a high-level overview of our distributed computing infrastructure and the query workload that Apollo supports. Section 3 presents an architectural overview, explains the coordinated scheduling in detail, and describes the correction mechanisms. We describe our engineering experiences in developing and deploying Apollo to our cloud infrastructure in Section 4. A thorough evaluation is presented in Section 5. We review related work in Section 6 and conclude in Section 7.

2 Scheduling at Production Scale

Apollo serves as the underlying scheduling framework for Microsoft's distributed computation platform, which supports large-scale data analysis for a variety of business needs. A typical cluster contains tens of thousands of commodity servers, interconnected by an oversubscribed network. A distributed file system stores data in partitions that are distributed and replicated, similar to GFS [12] and HDFS [3]. All computation jobs are written using SCOPE [32], a SQL-like high-level scripting language, augmented with user-defined processing logic. The optimizer transforms a job into a physical execution plan represented as a directed acyclic graph (DAG), with tasks, each representing a basic computation unit, as vertices and the data flows between tasks as edges. Tasks that perform the same computation on different partitions of the same inputs are logically grouped together in stages. The number of tasks per stage indicates the degree of parallelism (DOP).

[Figure 1: A sample SCOPE execution graph.]

Figure 1 shows a sample execution graph in SCOPE, greatly simplified from an important production job that collects user click information and derives insights for advertisement effectiveness. Conceptually, the job performs a join between an unstructured user log and a structured input that is pre-partitioned by the join key. The plan first partitions the unstructured input using the partitioning scheme from the other input: stages S1 and S2 respectively partition the data and aggregate each partition. A partitioned join is then performed in stage S4. The DOP is set to 312 for S1 based on the input data volume, set to 10 for S5, and set to 150 for S2, S3, and S4.
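As a rough illustration of this plan shape (a hedged sketch: the stage names and DOPs follow the Figure 1 description above, while the struct itself is invented for this example):

```c
#include <stdio.h>

/* A stage groups tasks that run the same computation on different
 * partitions; its task count is the degree of parallelism (DOP). */
struct stage {
    const char *name;
    int dop;                  /* number of parallel tasks in the stage */
};

int main(void)
{
    /* DOPs as described for the sample SCOPE execution graph. */
    struct stage plan[] = {
        {"S1 (partition)", 312}, {"S2 (aggregate)", 150},
        {"S3", 150}, {"S4 (partitioned join)", 150}, {"S5", 10},
    };
    int total = 0;
    for (int i = 0; i < 5; i++) {
        total += plan[i].dop;
        printf("%-24s DOP=%d\n", plan[i].name, plan[i].dop);
    }
    printf("total tasks: %d\n", total);
    return 0;
}
```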
2.1 Capacity Management and Tokens

In order to ensure fairness and predictability of performance, the system uses a token-based mechanism to allocate capacity to jobs. Each token is defined as the right …
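The excerpt cuts off here, but the token mechanism described above (tokens cap the number of regular tasks; extra work runs opportunistically) can be sketched as follows. Everything in this fragment, types, names, and policy, is an assumption for illustration, not Apollo's implementation.

```c
#include <stdbool.h>
#include <stdio.h>

/* Hedged sketch: tokens cap the number of concurrently running
 * regular tasks for a job; overflow work runs as opportunistic. */
struct token_pool {
    int allocated;   /* tokens granted to the job */
    int in_use;      /* tokens held by running regular tasks */
};

static bool acquire_token(struct token_pool *p)
{
    if (p->in_use < p->allocated) { p->in_use++; return true; }
    return false;    /* no capacity: caller may dispatch opportunistically */
}

static void release_token(struct token_pool *p)
{
    if (p->in_use > 0) p->in_use--;   /* task finished; free its token */
}

int main(void)
{
    struct token_pool pool = { .allocated = 2, .in_use = 0 };
    for (int task = 0; task < 3; task++) {
        bool regular = acquire_token(&pool);
        printf("task %d runs as %s\n", task, regular ? "regular" : "opportunistic");
    }
    release_token(&pool);
    return 0;
}
```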
Recommended publications
  • STCLang: State Thread Composition as a Foundation for Monadic Dataflow Parallelism
    STCLang: State Thread Composition as a Foundation for Monadic Dataflow Parallelism

    Sebastian Ertel (Dresden Research Lab, Huawei Technologies, Dresden, Germany); Justus Adam, Norman A. Rink, Andrés Goens, and Jeronimo Castrillon (Chair for Compiler Construction, Technische Universität Dresden, Germany)

    Abstract. Dataflow execution models are used to build highly scalable parallel systems. A programming model that targets parallel dataflow execution must answer the following question: How can parallelism between two dependent nodes in a dataflow graph be exploited? This is difficult when the dataflow language or programming model is implemented by a monad, as is common in the functional community, since expressing dependence between nodes by a monadic bind suggests sequential execution. Even in monadic constructs that explicitly separate state from computation, problems arise due to the need to reason about opaquely defined state. … using monad-par and LVars to expose parallelism explicitly and reach the same level of performance, showing that our programming model successfully extracts parallelism that is present in an algorithm. Further evaluation shows that smap is expressive enough to implement parallel reductions, and our programming model resolves shortcomings of the stream-based programming model for current state-of-the-art big data processing systems.

    CCS Concepts: Software and its engineering → Functional languages.
  • Applying Model-View-Controller (MVC) in Design and Development of Information Systems: An Example of Smart Assistive Script Breakdown in an e-Business Application
    APPLYING MODEL-VIEW-CONTROLLER (MVC) IN DESIGN AND DEVELOPMENT OF INFORMATION SYSTEMS: An Example of Smart Assistive Script Breakdown in an e-Business Application

    Andreas Holzinger and Karl Heinz Struggl (Institute of Information Systems and Computer Media (IICM), TU Graz, Graz, Austria); Matjaž Debevc (Faculty of Electrical Engineering and Computer Science, University of Maribor, Maribor, Slovenia)

    Keywords: Information Systems, Software Design Patterns, Model-View-Controller (MVC), Script Breakdown, Film Production.

    Abstract: Information systems support professionals in all areas of e-Business. In this paper we concentrate on our experiences in the design and development of information systems for use in film production processes. Professionals working in this area are neither computer experts nor interested in spending much time on information systems. Consequently, to provide a useful, usable and enjoyable application, the system must be extremely well suited to the requirements and demands of those professionals. One of the most important tasks at the beginning of a film production is to break down the movie script into its elements and aspects, and to create a solid estimate of production costs based on the resulting breakdown data. Several film production software applications provide interfaces to support this task. However, most attempts suffer from numerous usability deficiencies. As a result, many film producers still use script printouts and text markers to highlight script elements, and transfer the data manually into their film management software. This paper presents a novel approach to unobtrusive and efficient script breakdown using a new way of breaking down text into its relevant elements. We demonstrate how the implementation of this interface benefits from employing the Model-View-Controller (MVC) as the underlying software design paradigm, in terms of both software development confidence and user satisfaction.
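To make the pattern itself concrete, here is a minimal hedged sketch of MVC in C, using a script-element list as the model; all names are illustrative and unrelated to the paper's actual system.

```c
#include <stdio.h>
#include <string.h>

/* Model: owns the data (here, a list of script elements). */
struct model {
    char elements[8][32];
    int count;
};

/* View: renders the model; knows nothing about input handling. */
static void view_render(const struct model *m)
{
    printf("Script breakdown (%d elements):\n", m->count);
    for (int i = 0; i < m->count; i++)
        printf("  - %s\n", m->elements[i]);
}

/* Controller: interprets a user action, updates the model,
 * and asks the view to re-render. */
static void controller_add_element(struct model *m, const char *name)
{
    if (m->count < 8) {
        strncpy(m->elements[m->count], name, 31);
        m->elements[m->count][31] = '\0';
        m->count++;
    }
    view_render(m);   /* notify the view after a model change */
}

int main(void)
{
    struct model m = { .count = 0 };
    controller_add_element(&m, "PROP: text marker");
    controller_add_element(&m, "LOCATION: film set");
    return 0;
}
```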
  • FreeBSD File Formats Manual: libarchive-formats(5)
    libarchive-formats(5)    FreeBSD File Formats Manual    libarchive-formats(5)

    NAME
    libarchive-formats - archive formats supported by the libarchive library

    DESCRIPTION
    The libarchive(3) library reads and writes a variety of streaming archive formats. Generally speaking, all of these archive formats consist of a series of "entries". Each entry stores a single file system object, such as a file, directory, or symbolic link. The following provides a brief description of each format supported by libarchive, with some information about recognized extensions or limitations of the current library support. Note that just because a format is supported by libarchive does not imply that a program that uses libarchive will support that format. Applications that use libarchive specify which formats they wish to support, though many programs do use libarchive convenience functions to enable all supported formats.

    Tar Formats
    The libarchive(3) library can read most tar archives. However, it only writes POSIX-standard "ustar" and "pax interchange" formats. All tar formats store each entry in one or more 512-byte records. The first record is used for file metadata, including filename, timestamp, and mode information, and the file data is stored in subsequent records. Later variants have extended this by either appropriating undefined areas of the header record, extending the header to multiple records, or by storing special entries that modify the interpretation of subsequent entries.

    gnutar
    The libarchive(3) library can read GNU-format tar archives. It currently supports the most popular GNU extensions, including modern long filename and linkname support, as well as atime and ctime data. The libarchive library does not support multi-volume archives, nor the old GNU long filename format.
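As a usage illustration (not part of the manual page), the following sketch uses the standard libarchive read API to list the entries of an archive, with all formats and filters enabled; the filename is a placeholder and error handling is abbreviated.

```c
#include <stdio.h>
#include <archive.h>
#include <archive_entry.h>

int main(void)
{
    struct archive *a = archive_read_new();
    struct archive_entry *entry;

    archive_read_support_format_all(a);   /* accept any supported format */
    archive_read_support_filter_all(a);   /* and any compression filter  */

    if (archive_read_open_filename(a, "archive.tar", 10240) != ARCHIVE_OK) {
        fprintf(stderr, "open failed: %s\n", archive_error_string(a));
        return 1;
    }
    while (archive_read_next_header(a, &entry) == ARCHIVE_OK) {
        printf("%s\n", archive_entry_pathname(entry));
        archive_read_data_skip(a);        /* skip body, move to next entry */
    }
    archive_read_free(a);
    return 0;
}
```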
  • Process Scheduling
    PROCESS SCHEDULING
    Anirudh Jayakumar

    LAST TIME
    • Build a customized Linux kernel from source
    • System call implementation
    • Interrupts and interrupt handlers

    TODAY'S SESSION
    • Process management
    • Process scheduling

    PROCESSES
    • "A program in execution": an active program with related resources (instructions and data)
    • Short-lived ("pwd" executed from a terminal) or long-lived (an SSH service running as a background process)
    • A.k.a. tasks, the kernel's point of view
    • The fundamental abstraction in Unix

    THREADS
    • Objects of activity within the process; one or more threads within a process
    • Asynchronous execution
    • Each thread includes a unique PC, process stack, and set of processor registers
    • The kernel schedules individual threads, not processes
    • Tasks are Linux threads (a.k.a. kernel threads)

    TASK REPRESENTATION
    • The kernel maintains info about each process in a process descriptor of type task_struct (see include/linux/sched.h)
    • Each task descriptor contains info such as the run-state of the process, address space, list of open files, process priority, etc.
    • The kernel stores the list of processes in a circular doubly linked list called the task list.

    TASK LIST
    • struct list_head tasks;
    • init, the "mother of all processes", is statically allocated: extern struct task_struct init_task;
    • for_each_process() iterates over the entire task list (see the module sketch below)
    • next_task() returns the next task in the list

    PROCESS STATE
    • TASK_RUNNING: running or on a run-queue waiting to run
    • TASK_INTERRUPTIBLE: sleeping, waiting for some event to happen; awakes prematurely if it receives a signal
    • TASK_UNINTERRUPTIBLE: identical to TASK_INTERRUPTIBLE except that it ignores signals
    • TASK_ZOMBIE: the task has terminated, but its parent has not yet issued a wait4(). The task's process descriptor must remain in case the parent wants to access it.
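A minimal kernel-module sketch (not from the slides) that uses for_each_process() to walk the task list and print each task's name and PID; the header location assumes a reasonably recent kernel, where the iterator lives in <linux/sched/signal.h>.

```c
#include <linux/module.h>
#include <linux/sched/signal.h>

static int __init tasklist_init(void)
{
    struct task_struct *p;

    rcu_read_lock();                 /* the task list is RCU-protected */
    for_each_process(p)              /* walk the circular task list    */
        pr_info("task: %s [pid %d]\n", p->comm, p->pid);
    rcu_read_unlock();
    return 0;
}

static void __exit tasklist_exit(void) { }

module_init(tasklist_init);
module_exit(tasklist_exit);
MODULE_LICENSE("GPL");
```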
  • Scheduling: Introduction
    7 Scheduling: Introduction

    By now low-level mechanisms of running processes (e.g., context switching) should be clear; if they are not, go back a chapter or two, and read the description of how that stuff works again. However, we have yet to understand the high-level policies that an OS scheduler employs. We will now do just that, presenting a series of scheduling policies (sometimes called disciplines) that various smart and hard-working people have developed over the years.

    The origins of scheduling, in fact, predate computer systems; early approaches were taken from the field of operations management and applied to computers. This reality should be no surprise: assembly lines and many other human endeavors also require scheduling, and many of the same concerns exist therein, including a laser-like desire for efficiency. And thus, our problem:

    THE CRUX: HOW TO DEVELOP SCHEDULING POLICY
    How should we develop a basic framework for thinking about scheduling policies? What are the key assumptions? What metrics are important? What basic approaches have been used in the earliest of computer systems?

    7.1 Workload Assumptions

    Before getting into the range of possible policies, let us first make a number of simplifying assumptions about the processes running in the system, sometimes collectively called the workload. Determining the workload is a critical part of building policies, and the more you know about the workload, the more fine-tuned your policy can be. The workload assumptions we make here are mostly unrealistic, but that is alright (for now), because we will relax them as we go, and eventually develop what we will refer to as …
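As a taste of the metrics such a framework uses, this small sketch (not from the book) computes the average turnaround time, completion time minus arrival time, for jobs run in FIFO order, using the classic workload of one long job followed by two short ones:

```c
#include <stdio.h>

/* Average turnaround time under FIFO, assuming all jobs arrive at t=0. */
static double avg_turnaround_fifo(const double *len, int n)
{
    double t = 0.0, sum = 0.0;
    for (int i = 0; i < n; i++) {
        t += len[i];      /* job i completes at time t */
        sum += t;         /* turnaround = completion - arrival(0) */
    }
    return sum / n;
}

int main(void)
{
    double jobs[] = {100.0, 10.0, 10.0};
    /* Running the long job first yields (100 + 110 + 120) / 3 = 110 */
    printf("FIFO avg turnaround: %.1f\n", avg_turnaround_fifo(jobs, 3));
    return 0;
}
```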
  • Scheduling Light-Weight Parallelism in ArTCoP
    Scheduling Light-Weight Parallelism in ArTCoP

    J. Berthold (Fachbereich Mathematik und Informatik, Philipps-Universität Marburg, D-35032 Marburg, Germany), A. Al Zain (School of Mathematical and Computer Sciences, Heriot-Watt University, Edinburgh EH14 4AS, Scotland), and H.-W. Loidl (Institut für Informatik, Ludwig-Maximilians-Universität München, Germany)

    Abstract. We present the design and prototype implementation of the scheduling component in ArTCoP (architecture transparent control of parallelism), a novel run-time environment (RTE) for parallel execution of high-level languages. A key feature of ArTCoP is its support for deep process and memory hierarchies, shown in the scheduler by supporting light-weight threads. To realise a system with easily exchangeable components, the system defines a micro-kernel providing basic infrastructure, such as garbage collection. All complex RTE operations, including the handling of parallelism, are implemented at a separate system level. By choosing Concurrent Haskell as the high-level system language, we obtain a prototype in the form of an executable specification that is easier to maintain and more flexible than conventional RTEs. We demonstrate the flexibility of this approach by presenting implementations of a scheduler for light-weight threads in ArTCoP, based on GHC Version 6.6.

    Keywords: Parallel computation, functional programming, scheduling.

    1 Introduction

    In trying to exploit the computational power of parallel architectures ranging from multi-core machines to large-scale computational Grids, we are currently developing a new parallel runtime environment, ArTCoP, for executing parallel Haskell code on such complex, hierarchical architectures.
  • Scheduling Weakly Consistent C Concurrency for Reconfigurable Hardware
    SUBMISSION TO IEEE TRANS. ON COMPUTERS

    Scheduling Weakly Consistent C Concurrency for Reconfigurable Hardware

    Nadesh Ramanathan, John Wickerson, Member, IEEE, and George A. Constantinides, Senior Member, IEEE

    Abstract: Lock-free algorithms, in which threads synchronise not via coarse-grained mutual exclusion but via fine-grained atomic operations ('atomics'), have been shown empirically to be the fastest class of multi-threaded algorithms in the realm of conventional processors. This article explores how these algorithms can be compiled from C to reconfigurable hardware via high-level synthesis (HLS). We focus on the scheduling problem, in which software instructions are assigned to hardware clock cycles. We first show that typical HLS scheduling constraints are insufficient to implement atomics, because they permit some instruction reorderings that, though sound in a single-threaded context, demonstrably cause erroneous results when synthesising multi-threaded programs. We then show that correct behaviour can be restored by imposing additional intra-thread constraints among …

    These reorderings are invisible in a single-threaded context, but in a multi-threaded context they can introduce unexpected behaviours. For instance, if another thread is simultaneously writing to z, then reordering the two instructions above may introduce the behaviour where x is assigned the latest value but y gets an old one. The implication of this is not that existing HLS tools are wrong; these optimisations can only introduce new behaviours when the code already exhibits a race condition, and races are deemed a programming error in C [2, §5.1.2.4]. Rather, the implication is that if these memory accesses are upgraded to become atomic (and hence allowed to race), then existing scheduling constraints are insufficient.
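The kind of reordering at issue can be illustrated with a conventional-processor analogue (a hedged sketch, not the paper's example): with plain accesses, a compiler, or by analogy an HLS scheduler, may move a payload write past a flag write; upgrading the flag to a C11 atomic with release/acquire ordering forbids that movement across threads.

```c
#include <stdatomic.h>
#include <stdio.h>
#include <pthread.h>

static int z;                  /* payload */
static atomic_int flag;        /* synchronisation variable */

static void *writer(void *arg)
{
    (void)arg;
    z = 42;                    /* must not sink below the flag store */
    atomic_store_explicit(&flag, 1, memory_order_release);
    return NULL;
}

static void *reader(void *arg)
{
    (void)arg;
    while (atomic_load_explicit(&flag, memory_order_acquire) == 0)
        ;                      /* spin until the writer publishes */
    printf("z = %d\n", z);     /* guaranteed to print 42 */
    return NULL;
}

int main(void)
{
    pthread_t w, r;
    pthread_create(&w, NULL, writer, NULL);
    pthread_create(&r, NULL, reader, NULL);
    pthread_join(w, NULL);
    pthread_join(r, NULL);
    return 0;
}
```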
  • Fork-Join Pattern
    Fork-Join Pattern
    Parallel Computing CIS 410/510, Department of Computer and Information Science
    Lecture 9: Fork-Join Pattern

    Outline
    • What is the fork-join concept?
    • What is the fork-join pattern?
    • Programming model support for fork-join
    • Recursive implementation of map
    • Choosing base cases
    • Load balancing
    • Cache locality and cache-oblivious algorithms
    • Implementing scan with fork-join
    • Applying fork-join to recurrences

    Fork-Join Philosophy
    "When you come to a fork in the road, take it." (Yogi Berra, 1925–)

    Fork-Join Concept
    • Fork-join is a fundamental way (primitive) of expressing concurrency within a computation (see the sketch after this excerpt).
    • Fork is called by a (logical) thread (the parent) to create a new (logical) thread (the child) of concurrency:
      - the parent continues after the fork operation;
      - the child begins operation separate from the parent;
      - fork creates concurrency.
    • Join is called by both the parent and the child:
      - the child calls join after it finishes (implicitly on exit);
      - the parent waits until the child joins (and continues afterwards);
      - join removes concurrency because the child exits.

    Fork-Join Concurrency Semantics
    • Fork-join is a concurrency control mechanism: fork increases concurrency, join decreases concurrency.
    • Fork-join dependency rule: a parent must join with its forked children.

    (Source: Introduction to Parallel Computing, University of Oregon, IPCC.)
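A minimal sketch of fork-join with POSIX threads (an assumed illustration, not from the lecture): the parent forks a child to sum one half of an array, continues with the other half itself, then joins.

```c
#include <pthread.h>
#include <stdio.h>

struct range { const int *a; int lo, hi; long sum; };

static void *sum_range(void *arg)
{
    struct range *r = arg;
    if (r->hi - r->lo <= 2) {                 /* base case: sum serially */
        r->sum = 0;
        for (int i = r->lo; i < r->hi; i++)
            r->sum += r->a[i];
        return NULL;
    }
    int mid = (r->lo + r->hi) / 2;
    struct range left  = { r->a, r->lo, mid, 0 };
    struct range right = { r->a, mid, r->hi, 0 };

    pthread_t child;
    pthread_create(&child, NULL, sum_range, &left);  /* fork: child takes left half  */
    sum_range(&right);                               /* parent continues, right half */
    pthread_join(child, NULL);                       /* join: wait for the child     */
    r->sum = left.sum + right.sum;
    return NULL;
}

int main(void)
{
    int a[] = {1, 2, 3, 4, 5, 6, 7, 8};
    struct range all = { a, 0, 8, 0 };
    sum_range(&all);
    printf("sum = %ld\n", all.sum);           /* prints 36 */
    return 0;
}
```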
  • CDE: Run Any Linux Application On-Demand Without Installation
    CDE: Run Any Linux Application On-Demand Without Installation

    Philip J. Guo, Stanford University

    Abstract. There is a huge ecosystem of free software for Linux, but since each Linux distribution (distro) contains a different set of pre-installed shared libraries, filesystem layout conventions, and other environmental state, it is difficult to create and distribute software that works without hassle across all distros. Online forums and mailing lists are filled with discussions of users' troubles with compiling, installing, and configuring Linux software and their myriad of dependencies. To address this ubiquitous problem, we have created an open-source tool called CDE that automatically packages up the Code, Data, and Environment required to run a set of x86-Linux programs on other x86-Linux machines. Creating a CDE package is as simple as running the target application under CDE's monitoring, and executing a CDE package requires no installation, configuration, or root permissions.

    Users' troubles with compiling, installing, and configuring software and their myriad of dependencies are widespread. For example, the official Google Chrome help forum for "install/uninstall issues" has over 5800 threads. In addition, a study of US labor statistics predicts that by 2012, 13 million American workers will do programming in their jobs, but amongst those, only 3 million will be professional software developers [24]. Thus, there are potentially millions of people who still need to get their software to run on other machines but who are unlikely to invest the effort to create one-click installers or wrestle with package managers, since their primary job is not to release production-quality software. For example: system administrators often hack together ad-hoc utilities comprised of shell scripts and custom-compiled versions of open-source software, in order …
  • An Analytical Model for Multi-Tier Internet Services and Its Applications
    An Analytical Model for Multi-tier Internet Services and Its Applications

    Bhuvan Urgaonkar and Prashant Shenoy (Dept. of Computer Science, University of Massachusetts, Amherst, MA 01003); Giovanni Pacifici, Mike Spreitzer, and Asser Tantawi (Service Management Middleware Dept., IBM T. J. Watson Research Center, Hawthorne, NY 10532)

    ABSTRACT. Since many Internet applications employ a multi-tier architecture, in this paper we focus on the problem of analytically modeling the behavior of such applications. We present a model based on a network of queues, where the queues represent different tiers of the application. Our model is sufficiently general to capture (i) the behavior of tiers with significantly different performance characteristics and (ii) application idiosyncrasies such as session-based workloads, tier replication, load imbalances across replicas, and caching at intermediate tiers. We validate our model using real multi-tier applications running on a Linux server cluster. Our experiments indicate that our model faithfully captures the performance of these applications for a number of workloads and configurations.

    A typical example is a front tier that is responsible for HTTP processing, a middle-tier Java enterprise server that implements core application functionality, and a back-end database that stores product catalogs and user orders. In this example, incoming requests undergo HTTP processing and processing by the Java application server, and trigger queries or transactions at the database. This paper focuses on analytically modeling the behavior of multi-tier Internet applications. Such a model is important for the following reasons: (i) capacity provisioning, which enables a server farm to determine how much capacity to allocate to an application in order for it to service its peak workload; (ii) performance prediction, which enables the response time of the application to be determined …
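A deliberately simplified sketch of the modeling idea (assumptions: each tier behaves as an independent open M/M/1 queue, which is far cruder than the paper's model of session-based closed networks with replication and caching): the mean response time is the sum of per-tier residence times R_i = S_i / (1 - lambda * S_i).

```c
#include <stdio.h>

/* Mean response time of a pipeline of tiers, each approximated as an
 * independent M/M/1 queue: R_i = S_i / (1 - lambda * S_i).
 * Returns a negative value if any tier is saturated (lambda*S_i >= 1). */
static double multi_tier_response(double lambda, const double *svc, int tiers)
{
    double total = 0.0;
    for (int i = 0; i < tiers; i++) {
        double rho = lambda * svc[i];       /* utilization of tier i */
        if (rho >= 1.0)
            return -1.0;                    /* tier i is the bottleneck */
        total += svc[i] / (1.0 - rho);
    }
    return total;
}

int main(void)
{
    /* Hypothetical web, application, and database tiers (seconds/request). */
    double svc[] = {0.002, 0.010, 0.020};
    double r = multi_tier_response(30.0, svc, 3);   /* 30 requests/second */
    printf("mean response time: %.4f s\n", r);
    return 0;
}
```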
  • IDOL KeyView PDF Export SDK 12.6 C Programming Guide
    KeyView Software Version 12.6
    PDF Export SDK C Programming Guide
    Document Release Date: June 2020
    Software Release Date: June 2020

    Legal notices

    Copyright notice. © Copyright 1997-2020 Micro Focus or one of its affiliates. The only warranties for products and services of Micro Focus and its affiliates and licensors ("Micro Focus") are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Micro Focus shall not be liable for technical or editorial errors or omissions contained herein. The information contained herein is subject to change without notice.

    Documentation updates. The title page of this document contains the following identifying information:
    • Software Version number, which indicates the software version.
    • Document Release Date, which changes each time the document is updated.
    • Software Release Date, which indicates the release date of this version of the software.
    To check for updated documentation, visit https://www.microfocus.com/support-and-services/documentation/.

    Support. Visit the MySupport portal to access contact information and details about the products, services, and support that Micro Focus offers. This portal also provides customer self-solve capabilities. It gives you a fast and efficient way to access interactive technical support tools needed to manage your business. As a valued support customer, you can benefit by using the MySupport portal to:
    • Search for knowledge documents of interest
    • Access product documentation
    • View software vulnerability alerts
    • Enter into discussions with other software customers
    • Download software patches
    • Manage software licenses, downloads, and support contracts
    • Submit and track service requests
    • Contact customer support
    • View information about all services that Support offers
    Many areas of the portal require you to sign in.
  • Forcepoint DLP Supported File Formats and Size Limits
    Forcepoint DLP Supported File Formats and Size Limits
    Supported File Formats and Size Limits | Forcepoint DLP | v8.8.1

    This article provides a list of the file formats that can be analyzed by Forcepoint DLP, the file formats from which content and metadata can be extracted, and the file size limits for network, endpoint, and discovery functions. See:
    • Supported File Formats
    • File Size Limits

    Supported File Formats
    The following tables list the file formats supported by Forcepoint DLP. File formats are in alphabetical order by format group.
    • Archive formats, page 3
    • Backup formats, page 7
    • Business Intelligence (BI) and Analysis formats, page 8
    • Computer-Aided Design formats, page 9
    • Cryptography formats, page 12
    • Database formats, page 14
    • Desktop publishing formats, page 16
    • eBook/Audio book formats, page 17
    • Executable formats, page 18
    • Font formats, page 20
    • Graphics formats (general), page 21
    • Graphics formats (vector graphics), page 26
    • Library formats, page 29
    • Log formats, page 30
    • Mail formats, page 31
    • Multimedia formats, page 32
    • Object formats, page 37
    • Presentation formats, page 38
    • Project management formats, page 40
    • Spreadsheet formats, page 41
    • Text and markup formats, page 43
    • Word processing formats, page 45
    • Miscellaneous formats, page 53
    Supported file formats are added and updated frequently.

    Key to support tables
    Y: The format is supported
    N: The format is not supported
    P: Partial metadata …