Thanks to TSMC, Synopsys, AWS, IC Manage, and Xilinx for supporting this work and making it possible.

MFG304 Electronic design automation: Scaling EDA workflows

Speakers:
• Mark Duffield, WW Tech Lead, Semiconductor, Amazon Web Services
• Simon Burke, Distinguished Engineer, Xilinx

© 2019, Amazon Web Services, Inc. or its affiliates. All rights reserved.

Abstract

Semiconductor product development is constantly pushing the boundaries of physics to meet power, performance, and area (PPA) requirements for silicon devices. Electronic design automation (EDA) workflows, from RTL to GDSII, require scale-out architectures to keep pace with the constantly changing semiconductor design process. This session will discuss deployment tools, methods, and use cases for running the entire EDA workflow on AWS. Using customer examples, we will show how AWS can improve performance, meet tape-out windows, and scale out effortlessly to meet unforeseen demand.

Agenda
• EDA on AWS
• Customer use cases
• The Xilinx AWS journey with Simon Burke
• Deployment tools and methods

Related breakouts
• [MFG206-L] Leadership session: AWS for the semiconductor industry. Monday, Dec 2, 4:00 PM - 5:00 PM, Aria, Level 1 West, Bristlecone 9 Red
• [MFG404] Using Amazon SageMaker to improve semiconductor yields. Wednesday, Dec 4, 8:30 AM - 9:30 AM, Aria, Level 3 West, Starvine 1
• [MFG403] Telemetry as the workflow analytics foundation in a hybrid environment. Wednesday, Dec 4, 10:00 AM - 11:00 AM, Aria, Plaza Level East, Orovada 3
• [MFG405] Launch a turnkey scale-out compute environment in minutes on AWS. Thursday, Dec 5, 12:15 PM - 2:30 PM, Aria, Level 1 East, Joshua 7
• [MFG304] Electronic design automation: Scaling EDA workflows. Thursday, Dec 5, 3:15 PM - 4:15 PM, Aria, Level 1 West, Bristlecone 7 Green

Semiconductor design to product distribution

Design and verification → Wafer production → Chip packaging → PCB and assembly → Product integration → Product distribution. There are many opportunities for cloud-accelerated innovation along the way.

Digital IC design workflow

Front-end design
• Design specification: design capture, design modeling
• Design verification: simulation (functional, formal, gate-level)
• Synthesis: DFT insertion
Workload characteristics: high job concurrency; single-threaded jobs; mixed random/sequential file I/O, metadata-intensive; millions of jobs and small files.

Back-end design
• Physical layout: floorplanning, placement, routing
• Physical verification: LVS/DRC/ERC, extraction, timing
• Power/signal analysis: power, thermal, signal integrity
• Tape out/manufacturing: OPC
Workload characteristics: more multi-threading; memory intensive; long run times; large files; more sequential data access patterns.

Production and test
• Silicon validation: chip tests, wafer tests, yield analysis
Workload characteristics: often performed by third parties; big data analytics; AI/ML.

Advanced node design and signoff

Cloud is becoming the new signoff platform.

Electronic design automation infrastructure: Traditional EDA IT stack

A corporate data center, accessed through a remote desktop client, hosting:
• Remote desktops
• License managers
• Workload schedulers
• Directory services
• Compute nodes
• Shared file storage

Electronic design automation infrastructure on AWS

On AWS, secure and well-optimized EDA clusters can be automatically created, operated, and torn down in just minutes, with encryption everywhere using your own keys. A virtual private cloud (VPC) on AWS hosts:
• Remote desktops
• License managers, workload schedulers, and directory services
• Cloud-based, auto-scaling HPC clusters
• Shared file storage and a storage cache
• Machine learning and analytics
• Amazon Simple Storage Service (Amazon S3) and Amazon S3 Glacier
The corporate data center with its on-premises HPC resources connects over AWS Direct Connect, third-party IP providers and collaborators connect into the VPC, and AWS Snowball supports bulk data transfer.
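As an illustration of the "created, operated, and torn down in just minutes" point, here is a minimal boto3 sketch that launches a small, tagged EDA compute fleet in a VPC and terminates it again. The AMI, subnet, security group, region, and tag values are placeholders, and a real deployment would normally be driven by the scheduler integrations and deployment tools covered later rather than raw EC2 calls.

```python
# Minimal sketch: launch a small, tagged EDA compute fleet in a VPC and tear it
# down again. All IDs below (AMI, subnet, security group) are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # assumed region

def launch_fleet(count=10):
    """Launch `count` compute nodes tagged as part of one EDA cluster."""
    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",         # placeholder EDA worker AMI
        InstanceType="z1d.12xlarge",             # high clock speed family from the deck
        MinCount=count,
        MaxCount=count,
        SubnetId="subnet-0123456789abcdef0",     # placeholder private subnet in the VPC
        SecurityGroupIds=["sg-0123456789abcdef0"],
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "eda-cluster", "Value": "regression-2019w49"}],
        }],
    )
    return [i["InstanceId"] for i in resp["Instances"]]

def tear_down_fleet():
    """Terminate every instance carrying the cluster tag ('torn down in minutes')."""
    resp = ec2.describe_instances(
        Filters=[{"Name": "tag:eda-cluster", "Values": ["regression-2019w49"]},
                 {"Name": "instance-state-name", "Values": ["pending", "running"]}]
    )
    ids = [i["InstanceId"] for r in resp["Reservations"] for i in r["Instances"]]
    if ids:
        ec2.terminate_instances(InstanceIds=ids)

if __name__ == "__main__":
    launch_fleet(count=10)
    # ... the workload scheduler dispatches jobs onto the nodes ...
    tear_down_fleet()
```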
Faster design throughput with rapid, massive scaling

Scale up when needed, then scale down. In a traditional EDA data center, the only certainty is that you always have the wrong number of servers: too few or too many. Think big: what if you could launch one million concurrent verification jobs? As the CPU-cores-over-time chart across the product development cycle shows, every additional EDA server launched in the cloud can improve speed of innovation, provided there are no other constraints to scaling. Overnight or over-weekend workloads are reduced to an hour or less. (A hedged auto scaling sketch follows the hardware notes below.)

Our own journey: Our own digital transformation

Timeline, 2011 to today: the Annapurna startup was formed in Israel in 2011, working from an on-premises data center only; the AWS silicon team was formed in Austin in 2014, born in the cloud; the acquisition of Annapurna brought "AWS One Team" and a hybrid, multi-site development model; the US and Israel sites then expanded their deployment in AWS, reaching full SoC development in the cloud, multiple end-to-end silicon projects in AWS, AWS silicon optimizations, and the latest semiconductor fab 7 nm process; today, the on-premises data center remains only for emulators.

AWS global infrastructure

• 22 geographic regions: a region is a physical location in the world where we have multiple Availability Zones
• 69 Availability Zones: distinct locations that are engineered to be insulated from failures in other Availability Zones
• Network: AWS offers highly reliable, low-latency, and high-throughput network connectivity, achieved with a fully redundant 100 Gbps network that circles the globe

Amazon custom hardware

• The AWS global infrastructure is built on Amazon's own hardware: silicon, routers, compute servers, storage servers, and load balancers
• By using its own custom hardware, AWS provides customers with the highest levels of reliability and the fastest pace of innovation, all at the lowest possible cost
• AWS optimizes this hardware for only one set of requirements: workloads run by AWS customers
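Picking up the "scale up when needed, then scale down" point above, here is a minimal sketch of bursting an existing EC2 Auto Scaling group of EDA compute nodes up for a regression run and back to zero afterward. The group name and capacities are illustrative assumptions; in practice, the commercial scheduler integrations listed later in the deck typically drive this resizing automatically.

```python
# Minimal sketch: burst an existing Auto Scaling group of EDA compute nodes up
# for a regression run, then scale it back to zero. The group name and sizes
# are assumptions for illustration; the group's MaxSize must allow the burst.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-west-2")  # assumed region
GROUP = "eda-regression-workers"  # hypothetical Auto Scaling group name

def scale_to(desired: int) -> None:
    """Set the desired capacity; the group launches or terminates nodes to match."""
    autoscaling.set_desired_capacity(
        AutoScalingGroupName=GROUP,
        DesiredCapacity=desired,
        HonorCooldown=False,
    )

def current_size() -> int:
    """Report how many instances the group currently contains."""
    resp = autoscaling.describe_auto_scaling_groups(AutoScalingGroupNames=[GROUP])
    return len(resp["AutoScalingGroups"][0]["Instances"])

if __name__ == "__main__":
    scale_to(500)   # scale up for the overnight regression burst
    # ... the scheduler dispatches verification jobs onto the new nodes ...
    scale_to(0)     # scale back down once the queue drains
    print("instances in group:", current_size())
```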
AWS Inferentia: custom silicon for deep learning (aws.amazon.com/machine-learning/inferentia/)

Amazon silicon

• AWS Graviton: powerful and efficient server chip for modern applications
• AWS Inferentia: machine learning hardware and software at scale
• AWS Nitro System: cloud hypervisor, network, storage, and security
100% developed in the cloud: RTL → GDSII

EDA stack on AWS

Each of the compute options below slots into the same EDA stack on AWS: desktop visualization, license managers, workload schedulers, directory services, cloud-based auto-scaling HPC clusters, shared file storage, and a storage cache. After the instance summaries, a short sketch illustrates one way to map job memory requirements onto these families.

High clock speed compute instances: z1d
• Up to 4 GHz sustained, all-turbo performance
• Optimized for memory-intensive, compute-intensive applications
• Up to 24 physical cores
• Custom Intel Xeon Scalable processor
• Up to 384 GiB DDR4 memory
• Enhanced networking, up to 25 Gbps throughput

High memory instances: R5
• Up to 3.1 GHz sustained, all-turbo performance
• Optimized for memory-intensive, compute-intensive applications
• Up to 48 physical cores
• Custom Intel Xeon Scalable processor
• Up to 768 GiB DDR4 memory
• Enhanced networking, up to 25 Gbps throughput

High memory instances: X1e
• 2.3 GHz performance
• Optimized for memory-intensive workloads
• Up to 64 physical cores
• High-frequency Intel Xeon E7-8880 v3 (Haswell) processors with Turbo Boost
• Up to 4 TiB DDR4 memory
• Enhanced networking, up to 25 Gbps throughput
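As a rough illustration of mapping EDA job footprints onto the families above, here is a small helper that picks a family from a job's peak memory requirement using the per-instance maximums quoted in the slides. The thresholds and function name are illustrative assumptions, not an AWS sizing recommendation.

```python
# Illustrative helper: pick an instance family for an EDA job from its peak
# memory footprint, using the maximum memory per instance quoted in the deck
# (z1d: 384 GiB, R5: 768 GiB, X1e: 4 TiB). Thresholds are assumptions only.
def choose_instance_family(peak_mem_gib: float, needs_fpga: bool = False) -> str:
    if needs_fpga:
        return "f1"        # FPGA accelerator development (see the F1 slide below)
    if peak_mem_gib <= 384:
        return "z1d"       # highest clock speed, up to 384 GiB
    if peak_mem_gib <= 768:
        return "r5"        # high memory, up to 768 GiB
    if peak_mem_gib <= 4096:
        return "x1e"       # very high memory, up to 4 TiB
    raise ValueError("job exceeds the largest single-instance memory in this deck")

# Example: a large place-and-route job with a ~600 GiB working set
print(choose_instance_family(600))   # -> "r5"
```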
FPGA accelerator development: F1

Up to 8x Xilinx UltraScale+ VU9P devices; each FPGA has:
• A dedicated PCIe x16 interface to the CPU
• Approximately 2.5 million logic elements
• Approximately 6,800 DSP engines
• 64 GiB of ECC-protected memory on a 288-bit-wide bus
• A virtual JTAG interface for debugging
• Fabrication on a 16 nm process
Instance capability:
• 2.7 GHz Turbo on all cores and 3.0 GHz Turbo on one core
• Up to 976 GiB of memory
• Up to 4 TB of NVMe SSD storage

Amazon Elastic Compute Cloud (Amazon EC2) bare metal instances

• Provide applications with direct access to hardware
• Built on the Nitro System and ideal for workloads that are not virtualized, require specific types of hypervisors, or have licensing models that restrict virtualization

Comprehensive storage portfolio

• Block storage: Amazon Elastic Block Store (Amazon EBS), with SSD volumes (io1, gp2) and HDD volumes (st1, sc1)
• File storage: Amazon Elastic File System (Amazon EFS) and Amazon FSx for Lustre
• Object storage: Amazon S3 and Amazon S3 Glacier, with Amazon S3 lifecycle management between them

Mapping storage to EDA data types

• Tools and IP libraries (read-only, persistent): DIY/Marketplace NFS server, Amazon EFS, Amazon FSx for Lustre, Amazon S3 archive
• Project, home, and workspaces (read/write, persistent): DIY/Marketplace NFS server, Amazon FSx for Lustre, Amazon S3 archive
• Scratch (read/write, temporary): DIY/Marketplace NFS server, Amazon FSx for Lustre
A sketch after the NICE DCV summary below shows how scratch and archive tiers might be provisioned programmatically.

Commercial schedulers

AWS is supported by popular workload and resource managers:
• IBM Spectrum LSF resource connector
• Univa UGE and NavOps Launch
• Altair Accelerator (RTDA NC)

Remote desktops with NICE DCV

• Native clients on Linux, macOS, and Windows, plus HTML5 web clients
• Dynamic hardware compression
• Encrypted communication
• Multi-monitor support and support for various peripherals
• Single or multiple persistent sessions on an EC2 instance
• No added cost on an Amazon EC2 instance
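To make the storage mapping above concrete, here is a minimal boto3 sketch, assuming a placeholder subnet, bucket, capacity, and prefix, that provisions an FSx for Lustre file system for scratch data and adds an S3 lifecycle rule moving completed project archives to S3 Glacier. It is an illustration of the tiers in the table, not a prescribed configuration.

```python
# Minimal sketch, with placeholder subnet, bucket, and prefix: provision a
# scratch FSx for Lustre file system and archive project data to S3 Glacier
# via a lifecycle rule, mirroring the storage mapping above.
import boto3

fsx = boto3.client("fsx", region_name="us-west-2")   # assumed region
s3 = boto3.client("s3", region_name="us-west-2")

# Scratch tier: FSx for Lustre (temporary, read/write)
fsx.create_file_system(
    FileSystemType="LUSTRE",
    StorageCapacity=2400,                               # GiB, illustrative size
    SubnetIds=["subnet-0123456789abcdef0"],             # placeholder subnet
    Tags=[{"Key": "eda-tier", "Value": "scratch"}],
)

# Archive tier: transition finished project data in S3 to Glacier after 30 days
s3.put_bucket_lifecycle_configuration(
    Bucket="example-eda-project-archive",               # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-finished-projects",
            "Status": "Enabled",
            "Filter": {"Prefix": "projects/finished/"},  # illustrative prefix
            "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
        }]
    },
)
```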
