DOCSLIB.ORG
Accelerator (library)
A Taxonomy of Accelerator Architectures and Their
WWW 2013: 22nd International World Wide Web Conference
Matrox Imaging Library (MIL) 9.0 Update 58
An FPGA-Accelerated Embedded Convolutional Neural Network
AI Chips: What They Are and Why They Matter
The Developer's Guide to Azure
Scheduling Dataflow Execution Across Multiple Accelerators
The Jabberwocky Programming Environment for Structured Social Computing
Windows GUI Context Extraction
Final Copy 2021 06 24 Foyer
SDP Memo 50: the Accelerator Support of Execution Framework
Memory-Efficient Pipeline-Parallel DNN Training
VICTORIA UNIVERSITY of WELLINGTON Te Whare W
ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning
Specialized Hardware Accelerator • 99+ Hardware Startup Companies
Enabling Compute-Communication Overlap in Distributed Deep
H.R. 1757--High Performance Computing and High Speed Networking Applications Act of 1993. Hearings Before
Release 1.1.7 William Falcon Et
Top View
A Language-Based Serverless Function Accelerator
Hardware-Software Co-Design for an Analog-Digital Accelerator for Machine Learning
PTask: Operating System Abstractions to Manage GPUs as Compute Devices
Ambit: In-Memory Accelerator for Bulk Bitwise Operations Using Commodity DRAM Technology
Stateful Dataflow Multigraphs: A Data-Centric Model for Performance Portability on Heterogeneous Architectures
HeteroDoop: A MapReduce Programming System for Accelerator Clusters
Hereby Granted, Provided That the Above Copyright Notice and This Permission Notice Appear in All Copies
Programming and Runtime Support to Blaze FPGA Accelerator Deployment at Datacenter Scale
Wizards and Their Wonders: Portraits in Computing
Network 2030 Architecture Framework
ZeRO-Offload: Democratizing Billion-Scale Model Training
Comprehensive Reachability Refutation and Witnesses
Spatial: A Language and Compiler for Application Accelerators
In-DRAM Bulk Bitwise Execution Engine arXiv:1905.09822v3 [cs.AR]
Efficient Large-Scale Language Model Training on GPU Clusters Using Megatron-LM
NVIDIA DGX SATURNV Giant Leap Towards Exascale AI
The Compiler Forest
Windows NT Symposium WIN-NT Sunday, July 11, 1999
A Domain-Specific Approach to Heterogeneous Parallelism
Pervasive and Mobile Computing Learning Multi-Level
Route Planning in Transportation Networks
Shuihai CV.pdf
Mapping U.S. Multinationals' Global AI R&D Activity
Machine Learning @ Microsoft
Dryad: Distributed Data-Parallel Programs from Sequential Building Blocks
Yuxuan Zhang
Windows Azure Prescriptive Guidance.pdf
Deep Learning with Python (Manning, 2017) • Book: M
Using Data Parallelism to Program GPUs for General-Purpose Uses