Introducing Krylov eBay AI Platform - Machine Learning Made Easy

Henry Saputra, Technical Lead for Krylov - eBay Unified AI Platform
GPU Technology Conference, 2018

Agenda
1. Data Science and Machine Learning at eBay
2. Introducing Krylov
3. Compute Cluster and Accelerator Support with NVIDIA GPU
4. Quickstart Example
5. Future Roadmap
6. Q & A

Data Science and Machine Learning at eBay

eBay Patterns - Tools and Frameworks

Patterns for ML Training
• Single node
• Distributed training
• Deep learning (GPUs)

Tools
• Languages: R, Python, Scala, C++
• IDE-like: RStudio, Notebooks (Jupyter), Python IDEs
• Frameworks: NumPy, SciPy, matplotlib, Scikit-learn, Spark MLlib, H2O, Weka, XGBoost, Moses
• Pipelines: Cron, Luigi, Apache Airflow, Apache Oozie

Distributed Training

Deep Learning

Key takeaway = CHOICE
1. Flexibility of software
2. Flexibility of hardware configuration

Problems and Challenges
1. 50%-70% of the work is plumbing
   a. Accessing and moving secured data
   b. Environment and tools setup
   c. Sub-optimal compute instances - NVIDIA GPUs and high-memory/CPU instances
   d. Long wait times for platform and infrastructure
2. Loss of productivity and opportunities
   a. ML lifecycle management of models and features
   b. Building robust model training pipelines: data preparation, algorithms, hyperparameter tuning, cross-validation
3. Collaboration is almost impossible
4. Research vs. applied ML

Introducing Krylov: Unified eBay AI Platform

Overview
● Krylov is the core project of eBay's unified AI Platform initiative, providing an easy-to-use, powerful, cloud-based data science and machine learning platform.
● The objective of the project is to enable machine learning jobs with easy access to secured data and eBay cloud computing resources.
● The main goals of the Krylov initiative are:
  ○ Easy and secure access to training datasets
  ○ Access to compute on high-performance machines, such as GPUs, or clusters of machines
  ○ Familiar tools and flexible software for running machine learning model training jobs
  ○ Interactive data analysis and visualization, with multi-tenancy support to allow quick prototyping of algorithms and data access
  ○ Sharing and collaboration of ML work between teams at eBay

ML Lifecycle Management
● MODEL BUILDING - interactive, iterative
● MODEL TRAINING - automatable, repeatable, scalable
● MODEL INFERENCING - deployable, scalable
● MODEL RE-FITTING - interactive, iterative
● MODEL RE-TRAINING - interactive, iterative

Data + Lifecycle Management

Krylov Staircase Design for AI Platform

eBay AI Platform Components
● AI Modules: Speech Recognition, Machine Translation, Computer Vision, Information Retrieval, Natural Language Understanding, …
● AI Engine - Krylov: Data Access, Movement, Discovery, and Preparation; Learning Pipelines; Model Experimentation; AI Hub (Shared Repository); Data Scientist Workspaces; Model Lifecycle Management; Inferencing
● Infrastructure - Krylov: GPU and tall instances, fast storage

Krylov High Level Architecture

Krylov Main Features and Concepts
1. Client Command Line Interface (CLI) via the krylovctl program
2. ML Application and Run Specification
3. ML Pipelines: Workflow and Workspace
4. Namespaces - for quota and data isolation
5. Jobs and Runs - managed by Krylov Tools and Minions
6. Secure Data Access - HDFS, NFS, OpenStack Swift, custom

Krylov CLI - krylovctl

Krylov ML Application
● A Krylov ML Application is a versioned unit of deployment that contains the declaration of the developers' programs
● Implemented as a client project used as the source to build a deployment artifact
● Three main parts:
  ○ mlapplication.json and artifact.json configuration files
  ○ Source code of the programs
  ○ Dependency management via a Dockerfile
● Supported types of programs: JVM languages (Java, Scala), Python, shell scripts
● Using the ML Application as the source, developers build a deployment artifact that the Run Specification file then deploys onto one of the nodes in the cluster

Krylov ML Application Example

    {
      "tasks": {
        "prepare_data": {
          "program": "com.ebay.oss.krylov.workflow.JvmMainProgram",
          "parameters": {
            "className": "com.ebay.krylov.helloai.HelloWorld"
          }
        },
        "train_model": {
          "program": "com.ebay.oss.krylov.workflow.PythonProgram",
          "parameters": {
            "file": "helloai-python/helloai/helloworld.py",
            "args": []
          }
        },
        ...
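The train_model task above points at helloai-python/helloai/helloworld.py, but the deck does not show that file. The sketch below is a hypothetical minimal body for it, illustrating only that a Krylov Python Program is an ordinary script whose command-line arguments come from the "args" list in mlapplication.json; it is not code from the presentation.

    #!/usr/bin/env python
    """Hypothetical contents of helloai-python/helloai/helloworld.py."""
    import sys


    def main(args):
        # A real Task would load data, train a model, and write artifacts.
        # Here we only show the entry-point shape: a plain script that
        # receives the "args" declared for it in mlapplication.json.
        print("Hello from the train_model task, args=%r" % (args,))
        return 0


    if __name__ == "__main__":
        sys.exit(main(sys.argv[1:]))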
Krylov Run Specification
● The Krylov Run Specification is a runtime configuration that adds configuration overrides and parameter passing for each Task when an ML Application job is submitted
● It tells the Krylov master API server which artifact built from the ML Application will be used in the compute cluster
● Defined as a runspec.json file, or passed as an argument to the krylovctl client program
● The runspec.json file also defines the compute resources - such as which NVIDIA GPUs to use, CPU, and memory - and which Docker image provides the dependencies for the ML Application programs

Krylov Run Specification Example

    {
      "jobName": "job-sample",
      "artifact": "myartifact",
      "artifactTag": "latest",
      "mlApplication": "com.ebay.oss.krylov.workflow.app.GenericMLApplication",
      "applicationParameters": { },
      "tasks": {
        "prepare_data": {
          "taskParameters": {
            "prepare_data_parameter_key": "prepare_data_parameter_value"
          }
        }
      }
    }

Krylov ML Pipelines: Workflow
● A Krylov ML batch lifecycle pipeline is defined as a Krylov Workflow definition
  ○ Declarative
  ○ Default Generic Workflow
● Important concepts for a Krylov Workflow:
  ○ Workflow - a single pipeline defined within Krylov and the unit of deployment for an ML Application
    ■ Each Workflow contains one or more Tasks
    ■ The Tasks are connected to each other in a Directed Acyclic Graph (DAG) structure
  ○ Task - the smallest unit of execution; it runs a developer's Program and executes on a single machine
  ○ Flows - contains one or more key-value pairs mapping a name to a declaration of a Task DAG
  ○ Flow - the key chosen to run from the possible selections in the Flows definition

Workflow Example in mlapplication.json

    {
      "tasks": {
        ...
      },
      "flows": {
        "sample_flow": {
          "prepare_data": ["train_model"],
          "train_model": ["output"]
        }
      },
      "flow": "sample_flow"
    }
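A Flows entry such as "sample_flow" above is an adjacency list: each Task name maps to the Tasks that run after it, and the "flow" key selects which DAG to execute. As a rough illustration of how such a declaration resolves to an execution order - a generic topological sort, not Krylov's actual scheduler - consider the following sketch.

    from collections import defaultdict, deque

    def execution_order(flow):
        """Return one valid run order for a flows-style adjacency list,
        e.g. {"prepare_data": ["train_model"], "train_model": ["output"]}."""
        # Every task mentioned either as a key or as a downstream target.
        tasks = set(flow) | {t for downstream in flow.values() for t in downstream}
        indegree = defaultdict(int)
        for downstream in flow.values():
            for task in downstream:
                indegree[task] += 1

        # Kahn's algorithm: start with tasks that have no upstream dependency.
        ready = deque(sorted(t for t in tasks if indegree[t] == 0))
        order = []
        while ready:
            task = ready.popleft()
            order.append(task)
            for nxt in flow.get(task, []):
                indegree[nxt] -= 1
                if indegree[nxt] == 0:
                    ready.append(nxt)

        if len(order) != len(tasks):
            raise ValueError("flow is not a DAG (cycle detected)")
        return order

    print(execution_order({"prepare_data": ["train_model"], "train_model": ["output"]}))
    # -> ['prepare_data', 'train_model', 'output']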
Workflow Runs Flow

Krylov ML Pipelines: Workspace
● A Workspace is an interactive web application that lets developers use a web browser for ML model prototyping, data preparation, and exploration
● Workspaces run as Jupyter Notebook servers launched on high-CPU/memory or NVIDIA GPU instances
● Krylov enhances the JupyterHub project to allow distributed launching of multi-tenant Jupyter Notebook servers in the Krylov compute cluster using Kubernetes (a generic configuration sketch of this pattern appears at the end of this document)
● A Krylov Workspace uses a configuration file at creation time to override and customize default parameters

Workspace Deployment Flow

Krylov Compute Cluster

Krylov Cluster Infrastructure

Krylov Compute Cluster Deployment

Krylov Cluster Monitoring
● Metrics - Grafana, InfluxDB, and Telegraf for GPU monitoring

Krylov Metrics Management Flow

Krylov Compute Resources Management

Quickstart Example

Steps to Submit a Krylov Workflow Job with the CLI
1. Download the krylovctl program from the Krylov release repository
2. Run `krylovctl project create` to create a new project on the local machine
3. Update or add code to the Krylov project for the machine learning programs
4. Register the programs within Tasks in mlapplication.json
5. Add a new Flow for the defined Tasks to construct the Workflow as a Directed Acyclic Graph (DAG)
6. Run `krylovctl project build` to build the project
7. Run `krylovctl artifact create` to copy the runnables of the program into an artifact file
8. Run `krylovctl artifact upload` to upload the artifact file for remote execution
9. Run `krylovctl job run` for local execution, or `krylovctl job submit` to run it in the computing cluster

Demo Time
● Here we go ...

Future Roadmap
1. Inferencing Platform
2. Exploration and documentation of RESTful APIs for job management
3. Data Source and Dataset abstraction via Krylov SDKs
4. Managed ML Pipelines - Computer Vision, NLP, Machine Translation
5. Distributed Deep Learning
6. AutoML - hyperparameter tuning
7. AI Hub to share ML Applications and Datasets

Questions?
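The Workspace section above notes that Krylov enhances JupyterHub to launch multi-tenant Jupyter Notebook servers on Kubernetes, but the deck shows no configuration. The sketch below is a generic JupyterHub-on-Kubernetes setup using the open-source KubeSpawner, included only to illustrate the pattern; the namespace, image name, and resource values are placeholders, and none of it is Krylov's actual configuration.

    # jupyterhub_config.py - generic JupyterHub-on-Kubernetes sketch (illustrative only).
    c = get_config()  # provided by JupyterHub when it loads this file

    # Spawn each user's notebook server as a pod in the cluster.
    c.JupyterHub.spawner_class = "kubespawner.KubeSpawner"
    c.KubeSpawner.namespace = "workspaces"  # placeholder namespace

    # Per-user resources: a high-CPU/memory profile or an NVIDIA GPU.
    c.KubeSpawner.cpu_limit = 8
    c.KubeSpawner.mem_limit = "32G"
    c.KubeSpawner.extra_resource_limits = {"nvidia.com/gpu": "1"}

    # Notebook image with the ML toolchain preinstalled (placeholder name).
    c.KubeSpawner.image = "example-registry/workspace-notebook:latest"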