F2: Memory-Centric Computing from IoT to Artificial Intelligence and Machine Learning
Organizer: Fatih Hamzaoglu, Intel, Hillsboro, OR

Committee:
Meng-Fan Chang, National Tsing Hua University, Hsinchu, Taiwan
Ki-Tae Park, Samsung, Hwaseong, Korea
Yasuhiko Taito, Renesas, Kodaira, Tokyo, Japan
Alicia Klinefelter, Nvidia, Durham, NC
Naveen Verma, Princeton University, Princeton, NJ

This forum will present state-of-the-art memory-centric architectures, as well as future innovative solutions to enable energy-efficient, high-performance AI/ML applications. It will also describe the challenges and solutions spanning edge processors to cloud applications, such as algorithmic accuracy, cost, security, and practicality, ranging from technologies that are ready today to areas where further development is needed. At the high-performance and machine-learning end, emerging and storage-class memories are going to change the memory hierarchy. Meanwhile, low-power, high-bandwidth DRAMs and SRAMs continue to see innovation, remaining the workhorses at the latest process nodes (HBM, GDDR6, 7nm-FinFET SRAM, etc.). Furthermore, with the explosive growth of memory-intensive workloads such as machine learning, video capture/playback, and language translation, there is tremendous interest in performing some compute near memory, by placing logic inside the DRAM/NVM main-memory die (a.k.a. near-memory compute), or even doing the compute within the SRAM/STT-MRAM/RRAM array embedded within the compute die (a.k.a. in-memory compute). In either case, the motivation is to reduce the significant data movement between main/embedded memory and compute units, as well as to reduce latency by performing many operations in parallel, inside the array.
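To make the data-movement argument concrete, the following is a minimal, purely illustrative Python sketch (not part of the forum material) that counts the bytes crossing the memory interface for a matrix-vector multiply. The function names, byte widths, and matrix size are all hypothetical assumptions chosen only to show why keeping the multiply-accumulates inside the array cuts traffic.

# Toy model (illustrative only): compare memory-interface traffic for a
# matrix-vector multiply under two organizations.
#
# Assumptions (hypothetical): 1-byte weights, inputs, and outputs; the
# "conventional" case streams every weight out of the array to separate
# compute units; the "in-memory" case performs the multiply-accumulates
# inside the array, so only the input and output vectors cross the interface.

def conventional_traffic(rows, cols, byte_w=1, byte_x=1, byte_y=1):
    """Bytes moved when weights are read out to external compute units."""
    weights = rows * cols * byte_w   # read the full weight matrix
    inputs = cols * byte_x           # read the input vector
    outputs = rows * byte_y          # write the output vector
    return weights + inputs + outputs

def in_memory_traffic(rows, cols, byte_x=1, byte_y=1):
    """Bytes moved when the MACs are performed inside the memory array."""
    inputs = cols * byte_x           # drive inputs onto the array
    outputs = rows * byte_y          # read the reduced results off the array
    return inputs + outputs

if __name__ == "__main__":
    rows, cols = 1024, 1024          # e.g., one fully connected layer
    conv = conventional_traffic(rows, cols)
    imc = in_memory_traffic(rows, cols)
    print(f"conventional: {conv:,} bytes moved")
    print(f"in-memory   : {imc:,} bytes moved")
    print(f"reduction   : {conv / imc:.0f}x")

Under these toy assumptions the in-memory organization moves roughly three orders of magnitude less data per matrix-vector multiply, which is the basic motivation behind the near-memory and in-memory approaches discussed in this forum.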