Fibre Channel over Ethernet (FCoE) Data Center Bridging (DCB) Concepts and Protocols

Total Pages: 16

File Type: PDF, Size: 1020 KB

Fibre Channel over Ethernet (FCoE) Data Center Bridging (DCB) Concepts and Protocols
Version 15

• Fibre Channel over Ethernet (FCoE) and Ethernet Basics
• Storage in an FCoE Environment
• EMC RecoverPoint and EMC Celerra MPFS as Solutions
• Troubleshooting Basic FCoE and CEE Problems

Mark Lippitt, Erik Smith, David Hughes

Copyright © 2008–2015 EMC Corporation. All rights reserved. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license. EMC2, EMC, and the EMC logo are registered trademarks or trademarks of EMC Corporation in the United States and other countries. All other trademarks used herein are the property of their respective owners. For the most up-to-date regulatory document for your product line, go to EMC Online Support (https://support.emc.com). Part number H6290.19

Contents

Chapter 1  Introduction to Fibre Channel over Ethernet
  Introduction
  History
  Benefits
  Terminology
  Management tools
  Cable management recommendations
  Enabling technologies
    Converged Network Adapter
    Fibre Channel Forwarder
    FIP Snooping Bridge
    Data Center Bridging (DCB)
    Priority Flow Control and PAUSE
    Data Center Bridging eXchange
  Protocols
    FCoE encapsulation
    FCoE Initialization Protocol (FIP)
  Physical connectivity options for FCoE
    Optical (fiber) cable
    Twinax (copper) cable
  Logical connectivity options
    FCoE fabrics

Chapter 2  Ethernet Basics
  Ethernet history
    Communication modes of operation
    Ethernet devices
    Auto-negotiation
    Gigabit Ethernet
    10GBaseT
    40GbE technology
  Protocols
    OSI networking protocol
    Ethernet frame-based protocol
  Ethernet switching concepts
    Fibre Channel switching versus Ethernet bridging
    Gratuitous ARP
    Unicast flood
    Spanning Tree Protocol (STP)
    Link Aggregation
    Access Control Lists
  Ethernet fabric
    Ethernet fabric overview
    Transparent Interconnect of Lots of Links (TRILL)
    Brocade VCS Fabric technology
  VLAN
    Description
    History
    802.1Q — VLAN tagging

Chapter 3  EMC Storage in an FCoE Environment
  FCoE connectivity
  EMC storage in an FCoE environment
    VMAX
    VNX series
    CLARiiON CX4
  Prior to installing FCoE I/O module
    VMAX
    VNX and CX4
  Supported topologies for FCoE storage connectivity
  FCoE storage connectivity best practices and limitations
    Best practices
    Limitations
  FCoE storage connectivity requirements and support
    Supported switches
    Cabling support

Chapter 4  Solutions in an FCoE Environment
  EMC RecoverPoint with Fibre Channel over Ethernet
    RecoverPoint replication in an FCoE environment
    Continuous remote replication using a VNX series or CLARiiON splitter
    Continuous data protection using a host-based splitter
    Concurrent local and remote data protection using an intelligent fabric-based splitter
    Related documentation
  EMC Celerra Multi-Path File System in an FCoE environment
    Introduction
    EMC Celerra Multi-Path File System (MPFS)
    Setting up MPFS in an FCoE environment on a Linux host
    MPFS in an FCoE environment using Cisco Nexus switches with redundant path
    Setting up MPFS in an FCoE environment on a Windows 2003 SP2 host

Chapter 5  Troubleshooting Basic FCoE and CEE Problems and Case Studies
  Troubleshooting basic FCoE and CEE problems
    Process flow
    Documentation
    Creating questions
    Creating worksheets
    Log messages
    OSI layers
    FC layers
    Connectivity problems
    Physical interface status
    Interface errors
    MAC layer
  Understanding FCoE
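The "FCoE encapsulation" and "FCoE Initialization Protocol (FIP)" sections listed above cover how a native Fibre Channel frame is carried inside an Ethernet frame. As a rough illustration of that layering only (this sketch is not taken from the TechBook), the Python fragment below assembles a minimal FCoE frame: the FCoE EtherType 0x8906 (FIP uses 0x8914), a version/reserved header, SOF and EOF delimiter bytes, and the encapsulated FC frame. The specific SOF/EOF code points, the MAC addresses, and the dummy FC payload are assumptions made for the example.

```python
import struct

FCOE_ETHERTYPE = 0x8906        # FCoE EtherType; the FIP control protocol uses 0x8914
SOF_I3, EOF_T = 0x2E, 0x42     # example SOF/EOF code points (assumed for illustration)

def fcoe_encapsulate(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a raw FC frame (FC header + payload + FC CRC) in a minimal FCoE frame."""
    eth_hdr = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_hdr = bytes(13) + bytes([SOF_I3])      # 4-bit version + reserved bits, then SOF
    fcoe_trailer = bytes([EOF_T]) + bytes(3)    # EOF + reserved; Ethernet FCS added by the NIC
    return eth_hdr + fcoe_hdr + fc_frame + fcoe_trailer

# Hypothetical FCF and ENode MAC addresses and a dummy 36-byte FC frame
fcf_mac = bytes.fromhex("0efc00010203")
enode_mac = bytes.fromhex("0efc00fffe01")
print(len(fcoe_encapsulate(fcf_mac, enode_mac, bytes(36))), "bytes before FCS")
```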
Recommended publications
  • On TTEthernet for Integrated Fault-Tolerant Spacecraft Networks
    On TTEthernet for Integrated Fault-Tolerant Spacecraft Networks. Andrew Loveless, NASA Johnson Space Center, Houston, TX, 77058. There has recently been a push for adopting integrated modular avionics (IMA) principles in designing spacecraft architectures. This consolidation of multiple vehicle functions to shared computing platforms can significantly reduce spacecraft cost, weight, and design complexity. Ethernet technology is attractive for inclusion in more integrated avionic systems due to its high speed, flexibility, and the availability of inexpensive commercial off-the-shelf (COTS) components. Furthermore, Ethernet can be augmented with a variety of quality of service (QoS) enhancements that enable its use for transmitting critical data. TTEthernet introduces a decentralized clock synchronization paradigm enabling the use of time-triggered Ethernet messaging appropriate for hard real-time applications. TTEthernet can also provide two forms of event-driven communication, therefore accommodating the full spectrum of traffic criticality levels required in IMA architectures. This paper explores the application of TTEthernet technology to future IMA spacecraft architectures as part of the Avionics and Software (A&S) project chartered by NASA's Advanced Exploration Systems (AES) program.
    Nomenclature: A&S = Avionics and Software Project; AA2 = Ascent Abort 2; AES = Advanced Exploration Systems Program; ANTARES = Advanced NASA Technology Architecture for Exploration Studies; API = Application Program Interface; ARM = Asteroid Redirect Mission.
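    The abstract above centers on TTEthernet's time-triggered messaging: synchronized clocks let each end system transmit critical frames only in pre-planned slots of a repeating cluster cycle. The sketch below is not from the paper; it only illustrates the general idea of checking a static dispatch table against a synchronized cycle time. The cycle length, virtual-link IDs, offsets, and windows are invented example values.

```python
from dataclasses import dataclass

@dataclass
class TTSlot:
    vl_id: int       # virtual link carrying a time-triggered frame
    offset_us: int   # transmit offset within the cluster cycle
    window_us: int   # tolerance window around that offset

CYCLE_US = 10_000    # hypothetical 10 ms cluster cycle
SCHEDULE = [TTSlot(vl_id=0x10, offset_us=0, window_us=50),
            TTSlot(vl_id=0x11, offset_us=2_500, window_us=50)]

def may_transmit(vl_id: int, synced_time_us: int) -> bool:
    """True if the synchronized local clock falls inside the slot assigned to this VL."""
    phase = synced_time_us % CYCLE_US
    return any(s.vl_id == vl_id and abs(phase - s.offset_us) <= s.window_us
               for s in SCHEDULE)

print(may_transmit(0x10, 1_230_020))   # inside VL 0x10's window in this cycle -> True
```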
  • HP SN3000B Fibre Channel Switch QuickSpecs
    Overview: To remain competitive, IT organizations must keep pace with ever-increasing workloads without a similar increase in their budgets or resources. While virtualization has provided some relief by enabling the benefits of faster deployment and consolidation, it also tends to put additional stress on data center networks. In addition, the move toward cloud computing, which promises greater efficiency and a more service-oriented business model, means that these networks will face even greater demands. The HP SN3000B Fibre Channel Switch meets the demands of hyper-scale, private cloud storage environments by delivering market-leading 16 Gb Fibre Channel technology and capabilities that support highly virtualized environments. Designed to enable maximum flexibility and investment protection, the SN3000B Switch is configurable in 12 or 24 ports and supports 4, 8, or 16 Gbps speeds in an efficiently designed 1U package. It also provides a simplified deployment process and a point-and-click user interface, making it both powerful and easy to use. The SN3000B Switch offers low-cost access to industry-leading Storage Area Network (SAN) technology while providing "pay-as-you-grow" scalability to meet the needs of an evolving storage environment. The SN3000B is available in two models: the HP SN3000B 16Gb 24-port/24-port Active Fibre Channel Switch and the HP SN3000B 16Gb 24-port/12-port Active Fibre Channel Switch.
  • Ethernet Alliance Hosts Third IEEE 802.1 Data Center Bridging Interoperability Test Event
    MEDIA ALERT: Ethernet Alliance® Hosts Third IEEE 802.1 Data Center Bridging Interoperability Test Event. Participation is open to both Ethernet Alliance members and non-members.
    WHAT: The Ethernet Alliance Ethernet in the Data Center Subcommittee has announced it will host an IEEE 802.1 Data Center Bridging (DCB) interoperability test event the week of May 23 in Durham, NH, at the University of New Hampshire Interoperability Lab. The Ethernet Alliance invites both members and non-members to participate in this third DCB test event, which will include both protocol and application testing. The event targets interoperability testing of Ethernet standards being developed by the IEEE 802.1 DCB task force to address network convergence issues. Protocol testing will include projects such as IEEE P802.1Qbb Priority Flow Control (PFC), IEEE P802.1Qaz Enhanced Transmission Selection (ETS), and the DCB Capability Exchange Protocol (DCBX). The test event will span a broad vendor community, exercising DCB features across multiple platforms as well as higher-layer protocols such as Fibre Channel over Ethernet (FCoE), iSCSI over DCB, RDMA over Converged Ethernet (RoCE), and other latency-sensitive applications.
    WHY: These test events help vendors create interoperable, market-ready products that conform to the IEEE 802.1 DCB standards.
    WHEN: Conference call for interested participants: Friday, March 4 at 10 AM PST. Event registration open until: March 4, 2011. Event dates (tentative): May 23-27, 2011.
    WHERE: University of New Hampshire Interoperability Lab (UNH-IOL), Durham, NH.
    WHO: The DCB Interoperability Event is hosted by the Ethernet Alliance.
    REGISTER: To get more information, learn about participation fees, and/or register for the DCB plugfest, please visit the Ethernet Alliance DCB Interoperability Test Event page or contact [email protected].
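    Priority Flow Control (IEEE P802.1Qbb), one of the protocols exercised at the event, pauses traffic per priority class rather than for the whole link by sending a MAC Control frame carrying a per-priority enable vector and pause times. The sketch below only illustrates that frame layout and is not material from the media alert; the MAC Control EtherType 0x8808, PFC opcode 0x0101, and field sizes follow the standard, while the source MAC, chosen priority, and pause quanta are arbitrary example values.

```python
import struct

PAUSE_DST = bytes.fromhex("0180c2000001")   # reserved MAC Control multicast address
MAC_CONTROL_ETHERTYPE = 0x8808
PFC_OPCODE = 0x0101                         # Priority-based Flow Control (802.1Qbb)

def build_pfc_frame(src_mac: bytes, pause_quanta: dict) -> bytes:
    """Build a PFC frame pausing the given priorities (0-7) for the given pause quanta."""
    enable_vector, times = 0, [0] * 8
    for prio, quanta in pause_quanta.items():
        enable_vector |= 1 << prio
        times[prio] = quanta
    payload = struct.pack("!HH8H", PFC_OPCODE, enable_vector, *times)
    frame = PAUSE_DST + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE) + payload
    return frame.ljust(60, b"\x00")          # pad to the 60-byte minimum (FCS excluded)

# Example: pause priority 3 (commonly used for FCoE traffic) for the maximum quanta
print(build_pfc_frame(bytes.fromhex("001122334455"), {3: 0xFFFF}).hex())
```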
  • Sun Storage 16 Gb Fibre Channel PCIe Host Bus Adapter, Emulex Installation Guide for HBA Model 7101684
    Sun Storage 16 Gb Fibre Channel PCIe Host Bus Adapter, Emulex Installation Guide For HBA Model 7101684. Part No: E24462-10, August 2018. Copyright © 2017, 2018, Oracle and/or its affiliates. All rights reserved. This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited. The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing. If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following notice is applicable: U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs.
  • Data Center Ethernet 2
    Data Center Ethernet. Raj Jain, Washington University in Saint Louis, Saint Louis, MO 63130, [email protected]. These slides and audio/video recordings of this class lecture are at: http://www.cse.wustl.edu/~jain/cse570-15/ ©2015 Raj Jain.
    Overview: 1. Residential vs. Data Center Ethernet; 2. Review of Ethernet addresses, devices, speeds, algorithms; 3. Enhancements to Spanning Tree Protocol; 4. Virtual LANs; 5. Data Center Bridging Extensions.
    Quiz: True or False? Which of the following statements are generally true? Ethernet is a local area network (local < 2 km). Token Ring, Token Bus, and CSMA/CD are the three most common LAN access methods. Ethernet uses CSMA/CD. Ethernet bridges use spanning tree for packet forwarding. Ethernet frames are 1518 bytes. Ethernet does not provide any delay guarantees. Ethernet has no congestion control. Ethernet has strict priorities.
    Residential vs. Data Center Ethernet: distance up to 200 m vs. no limit; a few MAC addresses vs. millions of MAC addresses; 4096 VLANs vs. millions of VLANs with Q-in-Q; spanning tree protection vs. rapid spanning tree and other enhancements (gives 1 s, need 50 ms); path determined by spanning tree vs. traffic-engineered path; simple service vs. Service Level Agreement.
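    The residential-versus-data-center comparison above notes that the 4096-VLAN limit of a single 802.1Q tag is stretched to millions of service instances with Q-in-Q tag stacking. Purely as an illustration (not part of the lecture slides), the sketch below stacks a provider S-tag using TPID 0x88A8 outside a customer C-tag using TPID 0x8100; the PCP values and VLAN IDs are arbitrary examples.

```python
import struct

def vlan_tag(tpid: int, pcp: int, vid: int, dei: int = 0) -> bytes:
    """Encode one 4-byte VLAN tag: TPID followed by PCP/DEI/12-bit VLAN ID."""
    tci = (pcp << 13) | (dei << 12) | (vid & 0x0FFF)
    return struct.pack("!HH", tpid, tci)

# Q-in-Q: outer service tag (0x88A8) followed by inner customer tag (0x8100),
# giving roughly 4094 x 4094 usable combinations instead of one 12-bit space.
outer = vlan_tag(0x88A8, pcp=3, vid=100)    # provider / service VLAN
inner = vlan_tag(0x8100, pcp=0, vid=2042)   # customer VLAN
print((outer + inner).hex())
```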
  • SAS Enters the Mainstream
    SAS enters the mainstream. By the InfoStor staff. http://www.infostor.com/articles/article_display.cfm?Section=ARTCL&C=Newst&ARTICLE_ID=295373&KEYWORDS=Adaptec&p=23 Although adoption of Serial Attached SCSI (SAS) is still in the infancy stages, the next 12 months bode well for proponents of the relatively new disk drive/array interface. For example, in a recent InfoStor QuickVote reader poll, 27% of the respondents said SAS will account for the majority of their disk drive purchases over the next year, although Serial ATA (SATA) topped the list with 37% of the respondents, followed by Fibre Channel with 32%. Only 4% of the poll respondents cited the aging parallel SCSI interface (see figure). However, surveys of InfoStor's readers are skewed by the fact that almost half of our readers are in the channel (primarily VARs and systems/storage integrators), and the channel moves faster than end users in terms of adopting (or at least kicking the tires on) new technologies such as serial interfaces. To get a more accurate view of the pace of adoption of serial interfaces such as SAS, consider market research predictions from firms such as Gartner and International Data Corp. (IDC). Yet even in those firms' predictions, SAS is coming on surprisingly strong, mostly at the expense of its parallel SCSI predecessor. For example, Gartner predicts SAS disk drives will account for 16.4% of all multi-user drive shipments this year and will garner almost 45% of the overall market in 2009 (see figure on p. 18).
  • Gen 6 Fibre Channel Technology
    WHITE PAPER: Better Performance, Better Insight for Your Mainframe Storage Network with Brocade Gen 6. Table of contents: Overview; Technology Highlights; Gen 6 Fibre Channel Technology Benefits for z Systems and Flash Storage; Summary. Brocade and the IBM z Systems IO product team share a unique history of technical development, which has produced the world's most advanced mainframe computing and storage systems. Brocade's technical heritage can be traced to the late 1980s, with the creation of channel extension technologies to extend data centers beyond the "glass-house." IBM revolutionized the classic "computer room" with the invention of the original ESCON Storage Area Network (SAN) of the 1990s, and, in the 2000s, it facilitated geographically dispersed FICON® storage systems. Today, the most compelling innovations in mainframe storage networking technology are the product of this nearly 30-year partnership between Brocade and IBM z Systems. As the flash era of the 2010s disrupts the traditional mainframe storage networking mindset, Brocade and IBM have released a series of features that address the demands of the data center. These technologies leverage the fundamental capabilities of Gen 5 and Gen 6 Fibre Channel, and extend them to the applications driving the world's most critical systems. [...] between the business teams allows both organizations to guide the introduction of key technologies to the market place, while the integration between the system test and qualification teams ensures the integrity of those products. Driving these efforts are the deep technical relationships between the Brocade and IBM z Systems IO architecture and development teams.
  • FICON Native Implementation and Reference Guide
    Front cover: FICON Native Implementation and Reference Guide. Architecture, terminology, and topology concepts; planning, implementation, and migration guidance; realistic examples and scenarios. Bill White, JongHak Kim, Manfred Lindenau, Ken Trowell. ibm.com/redbooks. International Technical Support Organization, October 2002, SG24-6266-01. Note: Before using this information and the product it supports, read the information in "Notices" on page vii. Second Edition (October 2002). This edition applies to FICON channel adaptors installed and running in FICON native (FC) mode in the IBM zSeries processors (at hardware driver level 3G) and the IBM 9672 Generation 5 and Generation 6 processors (at hardware driver level 26). © Copyright International Business Machines Corporation 2001, 2002. All rights reserved. Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
    Contents: Notices; Trademarks; Preface (The team that wrote this redbook; Become a published author; Comments welcome); Chapter 1. Overview (1.1 How to use this redbook; 1.2 Introduction to FICON; 1.3 zSeries and S/390 9672 G5/G6 I/O connectivity; 1.4 zSeries and S/390 FICON channel benefits); Chapter 2. FICON topology and terminology (2.1 Basic Fibre Channel terminology; 2.2 FICON channel topology; 2.2.1 Point-to-point configuration; 2.2.2 Switched point-to-point configuration; 2.2.3 Cascaded FICON Directors configuration; 2.3 Access control; 2.4 Fibre Channel and FICON terminology).
  • Fibre Channel Testing for Avionics Applications
    FIBRE CHANNEL TESTING FOR AVIONICS APPLICATIONS. Item Type: text; Proceedings. Authors: Warden, Gary; Fleissner, Bill. Publisher: International Foundation for Telemetering. Journal: International Telemetering Conference Proceedings. Rights: Copyright © International Foundation for Telemetering. Download date: 23/09/2021 09:45:36. Link to Item: http://hdl.handle.net/10150/605804
    Gary Warden, AIM-USA, 2252 Bandit Trail, Beavercreek, Ohio 45434, (937) 427-1280. Bill Fleissner, AIM-USA, 600 W Riechmuth Rd, Valley, NE 68064, (866) 246-1553.
    Abstract - Fibre Channel is being implemented as an avionics communication architecture for a variety of new military aircraft and upgrades to existing aircraft. The Fibre Channel standard (see T11 web site www.t11.org) defines various network topologies and multiple data protocols. Some of the topologies and protocols (ASM, 1553, RDMA) are suited for Avionics applications, where the movement of data between devices must take place in a deterministic fashion and needs to be delivered very reliably. All aircraft flight hardware needs to be tested to be sure that it will communicate information properly in the Fibre Channel network. [...] speed serial transmissions placed in routed switched architectures. You will see that this "shared characteristic" places two important stress points on incumbent testing methodologies and strategies. First, the sheer volume of the data makes it impossible to hold onto a philosophy of logging all data on the network so as to not miss something of importance for post-run or post-flight analysis. And secondly, in a switched topology there is no single tap point in the system where all the data may be seen. Perhaps as important is the fact that, since shared media transports have been
  • Serial Attached SCSI (SAS)
    Storage: A New Technology Hits the Enterprise Market: Serial Attached SCSI (SAS). A New Interface Technology Addresses New Requirements. Enterprise data access and transfer demands are no longer driven by advances in CPU processing power alone. Beyond the sheer volume of data, information routinely consists of rich content that increases the need for capacity. High-speed networks have increased the velocity of data center activity. Networked applications have increased the rate of transactions. Serial Attached SCSI (SAS) addresses the technical requirements for performance and availability in these more I/O-intensive, mission-critical environments. Still, IT managers are pressed on the other side by cost constraints and the need for flexibility and scalability in their systems. While application requirements are the first measure of a storage technology, systems based on SAS deliver on this second front as well. Because SAS is software compatible with Parallel SCSI and is interoperable with Serial ATA (SATA), SAS technology offers the ability to manage costs by staging deployment and fine-tuning a data center's storage configuration on an ongoing basis. When presented in a small form factor (SFF) 2.5" hard disk drive, SAS even addresses the increasingly important facility considerations of space, heat, and power consumption in the data center. The backplanes, enclosures, and cabling are less cumbersome than before. Connectors are smaller, cables are thinner, easier to route, and impede airflow less, and both SAS and SATA HDDs can share a common backplane. Getting this new technology to the market in a workable, compatible fashion takes various companies coming together.
  • Microsoft SQL Server 2008 DSS Performance with FCoE, iSCSI, and FC
    Technical Report: Evaluating the Performance of FCoE, iSCSI, and FC Using DSS Workloads with Microsoft SQL Server 2008. Wei Liu, NetApp; Kent Swalin, Vinay Kulkarni, IBM; Scott Hinckley, Emulex. June 2010 | TR-3853. ABSTRACT: This technical report describes a series of tests performed by NetApp and IBM with support from Emulex to compare the performance of different storage protocols (FCoE, iSCSI, and Fibre Channel) using Decision Support System (DSS) workloads with Microsoft® SQL Server® 2008 on an IBM x3850 X5 server, NetApp® FAS3070 storage systems, and Emulex adapters.
    TABLE OF CONTENTS: 1 Executive Summary; 2 Introduction; 3 Test Environment; 3.1 IBM System 3850 X5 Server; 3.2 Emulex LPe12002-M8 HBA; 3.3 Emulex OCe10102 CNA; 3.4 NetApp FAS3070 Storage
  • IEEE Std 802.3™-2012 (Revision of IEEE Std 802.3-2008)
    IEEE Standard for Ethernet. IEEE Std 802.3™-2012 (Revision of IEEE Std 802.3-2008). Sponsored by the LAN/MAN Standards Committee of the IEEE Computer Society. Approved 30 August 2012 by the IEEE-SA Standards Board; published 28 December 2012. IEEE, 3 Park Avenue, New York, NY 10016-5997, USA. Abstract: Ethernet local area network operation is specified for selected speeds of operation from 1 Mb/s to 100 Gb/s using a common media access control (MAC) specification and management information base (MIB). The Carrier Sense Multiple Access with Collision Detection (CSMA/CD) MAC protocol specifies shared medium (half duplex) operation, as well as full duplex operation. Speed-specific Media Independent Interfaces (MIIs) allow use of selected Physical Layer devices (PHY) for operation over coaxial, twisted-pair, or fiber optic cables. System considerations for multisegment shared access networks describe the use of Repeaters that are defined for operational speeds up to 1000 Mb/s. Local Area Network (LAN) operation is supported at all speeds. Other specified capabilities include various PHY types for access networks, PHYs suitable for metropolitan area network applications, and the provision of power over selected twisted-pair PHY types. Keywords: 10BASE; 100BASE; 1000BASE; 10GBASE; 40GBASE; 100GBASE; 10 Gigabit Ethernet; 40 Gigabit Ethernet; 100 Gigabit Ethernet; attachment unit interface; AUI; Auto Negotiation; Backplane Ethernet; data processing; DTE Power via the MDI; EPON; Ethernet; Ethernet in the First Mile; Ethernet passive optical network; Fast Ethernet; Gigabit Ethernet; GMII; information exchange; IEEE 802.3; local area network; management; medium dependent interface; media independent interface; MDI; MIB; MII; PHY; physical coding sublayer; Physical Layer; physical medium attachment; PMA; Power over Ethernet; repeater; type field; VLAN TAG; XGMII. The Institute of Electrical and Electronics Engineers, Inc.
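    The abstract above summarizes the common MAC frame format shared by every 802.3 speed grade. Purely as an illustration (not excerpted from the standard), the sketch below assembles a minimal untagged frame, pads it to the 64-byte minimum, and appends the CRC-32 frame check sequence; the addresses, EtherType, and payload are made-up example values.

```python
import struct
import zlib

def ethernet_frame(dst: bytes, src: bytes, ethertype: int, payload: bytes) -> bytes:
    """Assemble destination/source/type/payload, pad, and append the FCS."""
    body = dst + src + struct.pack("!H", ethertype) + payload
    body = body.ljust(60, b"\x00")              # 60 bytes + 4-byte FCS = 64-byte minimum
    fcs = struct.pack("<I", zlib.crc32(body))   # CRC-32, least significant byte first
    return body + fcs

frame = ethernet_frame(bytes.fromhex("ffffffffffff"),   # broadcast destination
                       bytes.fromhex("020000000001"),   # locally administered source
                       0x0800,                          # IPv4 EtherType
                       b"example payload")
print(len(frame), frame.hex())
```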