University of California Los Angeles

Energy Efficient Computing with the Low Power, Energy Aware Processing (LEAP) Architecture

A dissertation submitted in partial satisfaction of the requirements for the degree Doctor of Philosophy in Electrical Engineering

by

Dustin Hale McIntire

2012

© Copyright by Dustin Hale McIntire 2012

Abstract of the Dissertation

Energy Efficient Computing with the Low Power, Energy Aware Processing (LEAP) Architecture

by

Dustin Hale McIntire

Doctor of Philosophy in Electrical Engineering
University of California, Los Angeles, 2012

Professor William J. Kaiser, Chair

Recently, a broad range of ENS applications have appeared for large-scale systems, introducing new requirements leading to new embedded architectures, associated algorithms, and supporting software systems. These new requirements include the need for diverse and complex sensor systems that present demands for energy and computational resources as well as for broadband communication. To satisfy application demands while maintaining critical support for low energy operation, a new multiprocessor node hardware and software architecture, Low Power Energy Aware Processing (LEAP), has been developed. In this thesis we describe the LEAP design approach, in which the system is able to adaptively select the most energy efficient hardware components matching an application's needs. The LEAP platform supports highly dynamic requirements in sensing fidelity, computational load, storage media, and network bandwidth. It focuses on episodic operation of each component and considers the energy dissipation for each platform task by integrating fine-grained energy dissipation monitoring and sophisticated power control scheduling for all subsystems, including sensors. In addition to the LEAP platform's unique hardware capabilities, its software architecture has been designed to provide an easy-to-use power management interface and a robust, fault-tolerant operating environment, and to enable remote upgrade of individual software components.

Current research topics such as mobile computing and embedded networked sensing (ENS) have been addressing energy efficiency as a cornerstone necessity, due to their requirements for portability and long battery lifetimes. This thesis discusses one such related project that, while currently directed toward ENS computing applications, is generally applicable to a wide-ranging set of applications including both mobile and enterprise computing. While relevant to many applications, it focuses on ENS environments necessitating high performance computing, networking, and storage systems while maintaining low average power operation.

The dissertation of Dustin Hale McIntire is approved.

John Wallace

Majid Sarrafzadeh

Gregory Pottie

William J. Kaiser, Committee Chair

University of California, Los Angeles 2012

This thesis is dedicated to my family and friends.

Table of Contents

1 Introduction ...... 1

1.1 Motivation ...... 3

1.2 Thesis Contributions ...... 5

1.3 Outline ...... 6

2 Background and Motivation ...... 8

2.1 Defining Energy Efficiency ...... 8

2.2 Energy Tradeoffs ...... 9

2.2.1 Processing Resources ...... 11

2.2.2 Communications Resources ...... 15

2.2.3 Storage Resources ...... 16

2.3 Prior Work ...... 18

2.3.1 Established Power Management Solutions ...... 18

2.3.2 Related Research ...... 23

2.4 The Case for Energy Accounting ...... 36

2.4.1 Dynamic power profiles ...... 37

2.4.2 Energy Correlation Effects ...... 38

3 A New Approach: LEAP ...... 46

3.1 Hardware Enablement ...... 48

3.1.1 Power Supply Characteristics ...... 49

3.1.2 Power Measurement Sensor ...... 51

3.1.3 Data Conversion ...... 57

3.1.4 Power Switching ...... 61

3.1.5 Energy Accounting ...... 62

3.2 LEAP Enabled Software ...... 100

3.2.1 Operating System Integration ...... 100

3.2.2 Energy Profiling Tools ...... 105

3.2.3 Power Scheduling Tools ...... 110

4 LEAP Application: GeoNET ...... 113

4.1 GeoNET Design ...... 116

4.1.1 Hardware Architecture ...... 116

4.1.2 Software Architecture ...... 125

5 Field Experiments ...... 137

5.1 El Centro ...... 138

5.1.1 Data Collection Efficiency ...... 139

5.1.2 Data Archiving Efficiency ...... 140

5.1.3 Detection Efficiency ...... 141

5.2 Stunt Ranch ...... 146

5.2.1 Detection Efficiency ...... 147

6 LEAP Applications and Partnerships ...... 154

6.1 LEAP Applications and Markets ...... 154

6.2 LEAP Inspired Products, Research, and Partnerships ...... 155

7 Conclusion ...... 162

A Hardware Design Files ...... 164

A.1 RefTek RT619 ...... 164

B LEAP2 Design Documents ...... 184

B.1 Hardware Schematics and PCB Layout ...... 184

B.1.1 EMAP2 ...... 184

B.1.2 HPM ...... 197

B.1.3 SIM ...... 207

B.1.4 MPM ...... 212

References ...... 218

List of Figures

2.1 ENS system configured with example sensor array ...... 9

2.2 Graph of power contributions of various laptop subsystems [MV04a]. 10

2.3 Notional computation and communications requirements by application ...... 11

2.4 Relative energy efficiency, execution speed, and efficiency-speed product calculations for each of the three benchmark algorithms on each processor architecture ...... 13

2.5 Comparison of radio chipsets with equal link margins at 10⁻⁶ BER showing relative energy efficiency per bit, throughput, and efficiency-throughput ...... 16

2.6 Comparison of storage media showing relative energy efficiency per read, write, throughput, and efficiency-throughput ...... 18

2.7 ACPI architecture diagram (ACPI Specification v3.0b) ...... 20

2.8 OnNow architecture diagram (Microsoft Corp.) ...... 22

2.9 Wakeup information display from PowerTOP ...... 29

2.10 Power Tutor graphical display ...... 30

2.11 Qualcomm’s TREPN power profiling ...... 34

2.12 PXA270 power dissipation over time, when running the dcache test program, for six CPU speeds measured by the EMAP ASIC on LEAP2 ...... 38

2.13 CPU, SDRAM, NAND and NOR power draw over time when copying a 7Mb file from NAND to NOR using JFFS2 ...... 42

2.14 Power profile when writing directly from user space utilities ...... 43

2.15 Typical PC hardware architecture [Wik07] ...... 44

2.16 Energy correlation effects showing CPU power increase due to L1 cache hit ...... 45

2.17 Traditional sampling methods: oscilloscope power profiling setup measuring the UUT (a) and fuel gauge circuit (b) ...... 45

3.1 Applications development using a LEAP enabled design flow . . . 47

3.2 Energy-aware architecture components: power sensors, power switches, analog conversion, energy accounting ...... 48

3.3 Typical set of ENS subsystems included in the LEAP in-situ measurement architecture ...... 49

3.4 Power profile of Intel Core 2 Quad CPU Q6600 2.4GHz ...... 50

3.5 Power filter formed through bypass capacitors and trace inductance 51

3.6 Resistive current sensor types a) high-side b) low-side and typical PCB power plane regions ...... 52

3.7 ADS7888 ADC power consumption over a range of sampling frequencies ...... 60

3.8 Parasitic diode effect when driving I/O from a powered domain to an unpowered domain (Vin > Vdd) ...... 62

3.9 Original LEAP platform hardware architecture ...... 64

3.10 EMAP ASIC architecture block diagram with interconnect . . 65

3.11 Schematic of GPIO pin logic ...... 68

3.12 Programmable interrupt controller logic ...... 69

3.13 to Host Controller read cycle waveforms ...... 70

3.14 Data collection state machine showing energy measurement method 74

3.15 ASC state machine for the TLV2548 (a) and waveform example of multiple ASC modules sampling in parallel on EMAP2 (b) . . . . 77

3.16 Power Management Scheduler architecture ...... 78

3.17 Power Management Scheduler table entry ...... 80

3.18 Power Management Scheduler state logic ...... 81

3.19 Power comparison of four energy measurement solutions: (a) total sampling power by number of channels, (b) total sampling power by sample rate, and (c) applications processor sample read-back time ...... 86

3.20 Comparison of four energy measurement solutions when including applications processor power, assuming 1W operation with 100 sample reads/sec ...... 87

3.21 Photo and block diagram of the LEAP2 platform ...... 89

3.22 Photo and block diagram of the energy management and account- ing (EMAP2) module ...... 90

3.23 Measurement accuracy of the EMAP2 current sensors and ADC for the following channels: (a) HPM CPU (b) HPM SDRAM (c) HPM NAND ...... 92

3.24 CPU energy overhead when reading from the EMAP ASIC on EMAP2: (a) CPU sampling energy by number of sampled channels (b) by sample rate ...... 93

3.25 Photo and block diagram of the host processor module (HPM) . . 95

3.26 Memory map of the HPM resources from the PXA270 CPU ...... 96

3.27 Photo and block diagram of the Sensor Interface Module (SIM) . 98

3.28 Photo and block diagram of the MiniPCI Module (MPM) . . . . . 99

3.29 Block diagram of the developed Linux energy registry ...... 103

3.30 Etop screenshot displaying per-subsystem power and energy information as well as per-process energy consumption ...... 106

3.31 LTT screenshot displaying NOR flash write information as well as per-process energy consumption ...... 109

3.32 Example commands from emap-client demonstrating scheduling capability ...... 111

4.1 Map of aftershock activity near Japan as of March 31, 2011 . . . . 114

4.2 GeoNET (RefTek RT155) product photo (a) and internal bays (b) 116

4.3 RT619 module photo ...... 118

4.4 GeoNET RT614 Analog module performance measurements: (a) PSD of the noise floor from DC to 100Hz and (b) dynamic range of a 17.08Hz tone ...... 120

4.5 RT155 Backbone Serial Data Protocol ...... 121

4.6 RT619 architecture highlighting additional GeoNET capabilities . 122

4.7 GeoNET FPGA architecture highlighting additional sensing capa- bilities ...... 123

4.8 Diagram of the GeoNET FPGA's Data Acquisition Unit ...... 124

4.9 Diagram of RT619 software applications ...... 127

4.10 Example of email report generated during El Centro deployment demonstrating display of distributed detection from two units ...... 132

5.1 Locations of the B003 and B004 units installed in El Centro, CA . 138

5.2 Photo of B003 and B004 unit installations at El Centro test site . 139

5.3 Energy consumed by adlogger to capture and store different format types and file systems ...... 140

5.4 Energy consumed using different file systems and compression for data archiving ...... 142

5.5 Energy consumed for several event detection methods ...... 144

5.6 Energy contribution in joules from the component software appli- cations used for event detection reporting ...... 145

5.7 Locations of the two units installed at Stunt Ranch ...... 147

5.8 Photo of B003 unit installation at Stunt Ranch facility ...... 148

5.9 Solar charging and discharging over a one week time period . . . 151

5.10 Battery state during July and November periods ...... 152

5.11 Energy consumed using CPU and FPGA data collection methods 153

6.1 Photo of MicroLEAP platform ...... 156

6.2 Photo of the LEAP server project and RTDEMS block diagram . 157

6.3 Photo of LEAP enabled LabVIEW components ...... 158

6.4 DARPA ADAPT module attached to expansion carrier board. . . 159

6.5 Photo of the Agile Communications NetRAT handheld computing device ...... 160

List of Tables

2.1 Processor Types for Energy Benchmarks ...... 13

2.2 Radio chipsets used for energy efficiency calculations ...... 16

2.3 Storage media types used for energy efficiency calculations . . . . 17

3.1 Power Domains with operating range, Sense Resistors, Amplifier Gain, and Measurement Efficiency ...... 57

3.2 Power Switching Considerations ...... 61

3.3 EMAP Wishbone SoC bus addresses ...... 66

3.4 Clock Manager domain frequency bounds for an Actel IGLOO FPGA derived from place and route timing reports ...... 67

3.5 EMAP Energy Accounting Controller Registers ...... 73

3.6 CAC rollover periods for data width and sample rates ...... 75

3.7 Quicklogic QL8250 and Actel IGLOO AGL600V2 implementation details ...... 83

3.8 Implementation details of energy measurement solutions . . . . . 85

3.9 EMAP2 operating states and power consumption ...... 90

3.10 PXA270 operating frequencies and operating power ...... 97

3.11 SIM operating states and power consumption ...... 98

3.12 Qualitative comparison of Etop and Endoscope ...... 108

4.1 GeoNET design requirements list ...... 117

4.2 GeoNET configuration of the RT155’s modular bays...... 118

5.1 GeoNET Energy Efficiency during El Centro experiment assuming one valid event per day ...... 146

5.2 GeoNET Energy Efficiency during Stunt Ranch experiment assuming one valid event per day ...... 148

Acknowledgments

This thesis would not have been possible without the assistance and guidance of many individuals. First and foremost, I would like to thank my advisor, Professor William Kaiser, who has provided me guidance and mentorship over many years and to whom I am eternally grateful. His broad academic knowledge and practical know-how have led to new ideas and research directions, and I am sincerely grateful for his time and effort. I would also like to thank Professor Gregory Pottie, Professor Deborah Estrin, Professor Majid Sarrafzadeh, and Professor John Wallace for agreeing to serve on my Ph.D. committee.

I would also like to thank Professor Paul Davis and Igor Stubalo in the Geology Department for providing their time and support in the development of the GeoNET system. In addition, the people at RefTek, specifically Phil Davidson and Bruce Elliot, have also provided much assistance in the development of a now production-ready seismic detection system. I would also like to express my utmost gratitude to many current and former colleagues in the ASCENT research lab whose experience and insight have been critical in the culmination of this thesis. I would like to specifically mention Thanos Stathopoulos, Winston Wu, Lawrence Au, Brett Jordan, Timothy Chow, Digvijay Singh, and Amarjeet Singh for their help.

Finally, my sincere thanks to my wife Tamra and three children Jake, Cassidy, and Lauren for their limitless patience and support throughout my graduate study.

This thesis contains text and figures from papers listed in my publication section. This work was supported by the National Science Foundation under grants CNS-0453809 and ANI-0331481.

Vita

1974 Born, Eau Claire, WI

1996 B.S., Electrical Engineering Stanford University Palo Alto, California

1996-2000 Member of Technical Staff III Rockwell Science Center Thousand Oaks, CA

1999 Graduate Certificate in Networked Systems Stanford University Palo Alto, California

2000-2008 Systems Architect Sensoria Corporation Redwood City, CA

2004 M.S., Electrical Engineering University of California, Los Angeles Los Angeles, California

2008-current CTO Arrayent Corporation Redwood City, CA

Publications

Dustin McIntire, Thanos Stathopoulos, Sasank Reddy, Thomas Schmid, and William J. Kaiser. Energy Efficient Sensing with the Low Power, Energy Aware Processing (LEAP) Architecture. ACM Transactions on Embedded Computing Systems, Volume 11, Issue 2, July 2012.

Thanos Stathopoulos, Dustin McIntire, William J. Kaiser. The Energy Endoscope: Real-Time Detailed Energy Accounting for Wireless Sensor Nodes. Proceedings of the 7th International Conference on Information Processing in Sensor Networks, IPSN 2008, pages 383-394, April 2008.

L.K. Au, W.H. Wu, M.A. Batalin, D.H. McIntire, and W.J. Kaiser. MicroLEAP: Energy-aware Wireless Sensor Platform for Biomedical Sensing Applications. In Biomedical Circuits and Systems Conference, 2007. BIOCAS 2007. IEEE, pages 158-162, November 2007.

Dustin McIntire, Thanos Stathopoulos, William J. Kaiser. etop: sensor network application energy profiling on the LEAP2 platform. Proceedings of the 6th International Conference on Information Processing in Sensor Networks, IPSN 2007, pages 576-577, April 2007.

Thanos Stathopoulos, Martin Lukac, Dustin McIntire, John Heidemann, Deborah Estrin, William J. Kaiser. End-to-End Routing for Dual-Radio Sensor Networks. 26th IEEE International Conference on Computer Communications, Joint Conference of the IEEE Computer and Communications Societies, INFOCOM 2007, pages 2252-2260, 2007.

Dustin McIntire, Kei Ho, Bernie Yip, Amarjeet Singh, Winston H. Wu, William J. Kaiser. The low power energy aware processing (LEAP) embedded networked sensor system. Proceedings of the Fifth International Conference on Information Processing in Sensor Networks, IPSN 2006, IEEE, pages 449-457, April 2006.

Maxim Pedyash, Victor Panov, Nikolai Romanov, Dustin McIntire, Gary Gould, Loren Clare, Jonathan Agre, Allen Twarowski, and James Duncan. Modular wireless integrated network sensor (WINS) node with a dual bus architecture. US Patent 2004021788, November 2004.

William Merrill, Lewis Girod, Brian Schiffer, Dustin McIntire, Guillaume Rava, Katayoun Sohrabi, Fredric Newberg, Jeremy Elson, William J. Kaiser. Dynamic Networking and Smart Sensing Enable Next-Generation Landmines. IEEE Pervasive Computing, Vol. 3, No. 4, pages 84-90, April 2004.

Victor S. Lin, Dustin McIntire, and Charles Chien. Robust Adaptive Error Control. IEEE Wireless Communications and Networking Conference, 2000. WCNC 2000, IEEE, pages 644-648.

CHAPTER 1

Introduction

A broad range of embedded networked sensor applications have appeared for large-scale systems focused on the most urgent demands in environmental monitoring, security, and other applications [EPS01, HRA07, OAB03]. These applications not only introduce important demand for Embedded Networked Sensing (ENS) systems, they also introduce fundamental new requirements that lead to the need for new ENS embedded architectures, associated algorithms, and supporting software systems. New requirements include the need for diverse and complex sensor systems that present demands for energy and computational resources as well as for broadband communication. Sensing resources include imaging devices, chemical and biological sensors, and others [ABF04]. Computational system demands for new ENS applications now include image processing, statistical computing, and optimization algorithms required for selection of proper sensor sampling [BSY04].

In addition to environmental monitoring, biomedical systems such as wireless body networks (WBN) are becoming increasingly important as a means to address rising health care costs [AWB07]. These WBN systems collect real-time patient physiologic data to improve ailment diagnosis accuracy and decrease patient recovery time. Patients are equipped with small, body-worn sensing systems to collect and analyze biological signals and in some cases transmit alert information wirelessly to trained medical personnel [WBB08]. These new biomedical platforms require long battery lifetimes, while also requiring substantial sensing, processing, and networking capabilities.

The emergence of smart phone technology has revolutionized mobile computing by providing desktop capabilities in small portable devices. Users now demand platforms enabling rich media content and diverse sets of applications. These include computing and networking intensive applications such as streaming high definition video and real-time gaming capabilities. At the same time, smart phone package sizes and battery density have remained nearly constant while users expect ever longer battery lifetimes.

The situation for enterprise computing remains critical. Total data center power consumption from servers, storage, communications, cooling, and power distribution equipment accounted for between 1.7% and 2.2% of total electricity use in the U.S. in 2010 [Mar11]. This is equivalent to 5.8 million households, or 5% of the entire housing stock. While the rate of growth has slowed in the past few years, increased reliance upon cloud computing may accelerate the trend toward larger data centers and higher energy requirements.

Worldwide energy use in general purpose computing continues to increase as well. The number of computers in use worldwide continues to grow, from 18.7 per 1000 people in 1990 to 217.1 per 1000 people in 2011, reaching 2 billion total by 2014 [Bri08]. At the same time, the power required by an individual computer has remained nearly constant over the last 20 years [Col03]. This results in an increasing demand for worldwide energy resources. However, much of this new energy demand is due to waste. The U.S. Department of Energy reports that an average PC wastes up to 400 kilowatt-hours of electricity annually [RGK02].

It is critical that future computing platforms, across all application domains, consider energy efficiency as a primary design objective. This thesis proposes a solution that enables run-time optimization of performance under energy constraints. This will be accomplished through the development of a new hardware and software architecture, including a custom ASIC solution, that together provide the first task-resolved and component-resolved measurements of energy dissipation integrated into an embedded platform. This new architecture is then verified by applying the design in a state-of-the-art environmental sensing system. This sensing system is validated in several long-term field deployments and the results from these experiments are given.

1.1 Motivation

Conventional embedded networked sensor architectures, based on early sensor node systems, were developed for transducer systems that presented demand for only low energy, relatively low complexity computation, and narrowband communication [ASS02, ADL98, HSW00]. In contrast, the information needed in environmental monitoring and many other sensing applications now requires an investment in energy for sensing, processing, and communication. These requirements were once believed to preclude low average power and long lifetime operation with constrained storage battery energy resources. However, it is important to note (as will be shown here) that new ENS architectures designed to permit selection of processing and communication components for high energy efficiency, and that include methods for proper scheduling of operation, can achieve low energy operation. These take advantage of the general characteristic of many applications that demand only intermittent operation of many platform resources and therefore intermittent and infrequent dissipation of energy. Additionally, heterogeneous sets of resources, including computation, communications, and sensing, often permit the use of different methods to achieve the same system-level objective, often with significantly diverse energy requirements. We will demonstrate that proper scheduling of the higher performance components, though requiring higher peak power operation, may result in a net energy benefit due to higher efficiency operation. The opportunity for efficiency improvement is enhanced by the increased number of heterogeneous hardware resources available to systems today. For example, the latest Qualcomm mobile applications processor (MSM8960), intended for the smart phone market, contains nine separate computational units including three general purpose CPU types, several specialized DSPs, and a mobile GPU, as well as four different wireless communications interfaces [Qua11].
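The scheduling tradeoff described above can be made concrete with a small worked example. The power and timing figures below are purely hypothetical, not measurements from the LEAP platform: a high-peak-power component that completes a task quickly and then sleeps can dissipate less total energy than a low-power component that runs far longer.

```python
# Hypothetical comparison: a fast, high-peak-power component versus a
# slow, low-power one completing the same task, then idling until a
# common deadline. All figures are illustrative, not LEAP measurements.

def task_energy(active_power_w, active_time_s, idle_power_w, deadline_s):
    """Total energy (joules) to finish the task and idle out the deadline."""
    return active_power_w * active_time_s + idle_power_w * (deadline_s - active_time_s)

# Fast component: 2.0 W peak, finishes in 1 s, sleeps at 10 mW.
fast = task_energy(2.0, 1.0, 0.010, 10.0)   # 2.0 + 0.09 = 2.09 J

# Slow component: 0.5 W peak, needs 6 s, sleeps at 5 mW.
slow = task_energy(0.5, 6.0, 0.005, 10.0)   # 3.0 + 0.02 = 3.02 J

assert fast < slow  # higher peak power, yet lower total energy
```

The fast component requires four times the peak power but, by finishing early and entering a low-power sleep state, consumes roughly 30% less energy over the interval.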

It is important to note that the LEAP architecture introduced as part of this thesis is necessary since, as will be discussed, run-time measurement of energy integrated with the platform is required for optimization of energy and performance in many important applications. A series of critical characteristics determining platform energy dissipation and sensor node platform performance are in fact known only at runtime. For example, the dynamic nature of events means that as time evolves, data-dependent processing requirements will demand that platform schedules adapt. As an example, the requirement to process sensor data (in order to derive event information) while also maintaining network communication between nodes presents a processor resource conflict that can be resolved only at runtime due to the inherently unpredictable nature of events.

As a further example, in target detection and tracking, processor task execution time is also data-dependent and will vary with the nature of received sensor signals (including noise and interference sources) and the varying demand for source identification. Also, when presented with the opportunity to select from multiple wireless interfaces, we require runtime adaptation since available bandwidth and network connectivity, for example, are not known except at runtime.

Finally, since the rate of arrival of events and their priority are also known only at runtime, the platform's operating schedules must also be adaptive and therefore must rely on runtime energy measurement. For example, methods that describe application requirements based upon hardware power state dependencies [DPN02] are effective only in the restrictive limit where application characteristics are inherently stationary. Of course, this is not typical of many important sensor network applications.

1.2 Thesis Contributions

As part of this thesis, two state-of-the-art embedded networked sensing (ENS) platforms are designed, fabricated, and tested, including a custom application specific integrated circuit (ASIC) providing one-of-a-kind energy accounting and power scheduling features. The first is used as a reference design to verify the utility of the new design approach across a broad range of ENS applications. The second refines this design approach for a specific, highly demanding seismic application. In addition, several new software components are developed which provide per-user and per-process energy information, facilitating energy-aware application development. This software architecture is augmented by development of a new operating system facility whereby applications are able to register energy accounting requests through a centralized energy resource manager. This new energy-aware capability is then applied to an ongoing seismic research project, where these new energy-aware capabilities are tested and evaluated. In addition, new energy profiling applications are utilized to perform system optimization during operation in multiple field experiments. This includes development and over-the-air deployment of new applications which reduce average energy usage without interrupting the seismic detection mission.

The specific contributions of this thesis include:

1. A new hardware and software architecture providing the first task-resolved and component-resolved measurements of energy dissipation, allowing runtime adaptation of deployed systems.

2. Construction of a new, advanced seismic sensing platform based upon this new energy aware architecture. The new platform provides significant advances over the best commercially available systems in several categories including low energy operation, rapid event reporting, and autonomous system operation.

3. Validation of the new architecture through several long-term in-field experiments, with analysis of these deployments verifying the utility of the energy aware approach.

1.3 Outline

The remaining chapters of this thesis are organized as follows:

Chapter 2 discusses the existing state of technology and reviews relevant re- search and commercial material.

Chapter 3 presents a new embedded system architecture, low power energy aware processing (LEAP). Here we discuss platform design considerations, as well as hardware and software architectures in detail. Existing solutions and their shortcomings are reviewed, and a real-time system energy accounting method is introduced for enabling energy-efficient operation. Finally, we present the performance of one reference implementation of LEAP and its enabling software.

Chapter 4 describes an application of the LEAP design architecture in a UCLA CENS project named GeoNET, designed for rapid detection of earthquake aftershock events. We present the design challenges and requirements of the GeoNET application and provide solutions through a custom LEAP enabled design. Measurement data for the latest implementation of the GeoNET seismic detection system is also presented.

Chapter 5 describes several long-term deployment exercises using the GeoNET system. The experiment objectives are discussed, as well as a review of the technical details for each exercise. This includes discussion of the specific algorithms, equipment, and communications methods used by the RT155 sensing systems. Measurements from these experiments are used to make energy-aware modifications while in the field, validating the LEAP design principles.

Chapter 6 discusses LEAP applications beyond the existing GeoNET program. This includes applications outside of traditional embedded sensing. Both commercial and research programs using LEAP technology are described.

Chapter 7 summarizes the work presented in this thesis and explores future research directions.

CHAPTER 2

Background and Motivation

2.1 Defining Energy Efficiency

We first address the energy problem by understanding the challenges involved in building an energy efficient computing system. As seen in Figure 2.1, we must consider energy as a system asset to be invested in platform resources such as sensors, processors, storage, and communications devices. Our desired outcome is to maximize the information benefit provided to an application for any given energy expenditure. To this end, an application may choose from a heterogeneous set of processing capabilities, storage media, communications devices, and sensor types. For a particular application, hardware platform, and operating environment, several candidate sets of resources may yield equivalent information return, yet may require different energy investments to achieve it. We consider the platform resource set that requires the lower energy investment while still providing the task's required information content to be the more efficient. It is important to observe that the algorithms employed, implementation strategies, and sensor modalities may all be modified to achieve lower energy so long as the resulting application achieves the same objective.
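The selection rule described above can be sketched in a few lines. The candidate resource sets and their costs below are hypothetical placeholders, not values from this thesis: among candidates meeting the task's required information content, the platform should choose the one with the lowest energy investment.

```python
# Sketch of the efficiency-driven selection rule: among candidate resource
# sets that meet the task's required information content, prefer the one
# with the lowest energy investment. All candidate values are hypothetical.

candidates = [
    # (name, information_return, energy_cost_joules)
    ("low-res sensor + 8-bit MCU",   0.6, 1.2),
    ("high-res sensor + 32-bit CPU", 1.0, 4.5),
    ("high-res sensor + DSP",        1.0, 2.8),
]

required_information = 1.0

# Keep only resource sets that deliver the required information content...
feasible = [c for c in candidates if c[1] >= required_information]

# ...then pick the feasible set with the minimum energy cost.
best = min(feasible, key=lambda c: c[2])
assert best[0] == "high-res sensor + DSP"
```

Note that the cheapest candidate overall is rejected because it fails the information requirement; efficiency here is defined relative to equivalent information return, not raw energy alone.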

Figure 2.1: ENS system configured with example sensor array

The energy contributions of individual components within the system that are utilized for any given application are obviously dependent upon the application's operations and interactions with operating system services. An example is shown in Figure 2.2 for several common PC applications running on a laptop computer. It is clear that the dominant power consumer is not always the laptop CPU, but rather depends on the specific application being executed.

2.2 Energy Tradeoffs

The optimal match of application to hardware resources is complex due to the wide range of requirements across the space of ENS applications. Figure 2.3 notes several popular ENS applications and their typical communications and processing resource needs. Finding the best match requires understanding the spectrum of efficiency and operating power across different technologies.

In order to get a better understanding of the variations of peak power and energy efficiency across various system resource types, we first survey several ubiquitous commercial technologies. The following sections present energy efficiency analyses of several common hardware component categories. We address the primary design challenges for constructing an energy efficient computing system including computation, networking, sensing, and data storage, focusing on hardware components used frequently in ENS platforms, though the results are also applicable to general purpose computing. The benchmark results influence proper hardware selection criteria, architecture choices, and system design.

Figure 2.2: Graph of power contributions of various laptop subsystems [MV04a].

Figure 2.3: Notional computation and communications requirements by application.

2.2.1 Processing Resources

A critical decision early in the design cycle of any ENS platform is the choice of applications processor. This choice dictates to a large extent the cost, performance capability, and battery lifetime of the system. Designers are frequently incorrectly biased toward a processor choice that consumes the least amount of static or idle power and would therefore presumably yield the longest battery lifetime. Unfortunately, a power metric alone is not sufficient for optimization of platform operations for each computing task, since it does not consider task execution time and the corresponding task energy cost. Alternatively, choosing the processor option solely on the shortest execution time is also not sufficient, since this neglects the peak power required. The proper choice requires both power and execution time to be considered for all components.
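The point that power alone misleads can be made concrete with a two-line energy calculation; the power and timing figures are hypothetical, chosen only to show the effect:

```python
# Task energy is power multiplied by execution time, so a low-power
# processor can still lose on energy if it runs a task for long enough.
# All figures below are hypothetical, not measurements of real parts.

def task_energy_mj(active_power_mw, exec_time_s):
    # mW x s = mJ
    return active_power_mw * exec_time_s

# A low-power MCU that computes slowly vs. a higher-power CPU that
# finishes quickly and can then return to a sleep state.
mcu_energy = task_energy_mj(active_power_mw=30, exec_time_s=12.0)   # 360.0 mJ
cpu_energy = task_energy_mj(active_power_mw=900, exec_time_s=0.25)  # 225.0 mJ

# Despite a 30x higher power draw, the faster processor spends less energy.
print(mcu_energy, cpu_energy)  # 360.0 225.0
```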

In order to assess the energy tradeoffs we selected several algorithms typical of an ENS system workload. We ran these algorithms on a selection of processors previously applied to ENS platform design, to provide a quantitative measure of processor efficiency relevant to embedded sensing applications. For each processor, we measured the benchmark algorithm’s execution energy usage. Our first benchmark algorithm represents the ENS requirement for network applications relying on typically high error rate wireless communication channels. In this environment, data often requires verification of its validity. The cyclic redundancy check (CRC) algorithm is commonly required to establish reliable communications in these environments. The second and third benchmarks represent the requirement to process sensor data for many purposes, including event detection and characterization, using the widely applied finite impulse response (FIR) filter algorithm family and the fast Fourier transform (FFT) [PK05].
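As an illustration of the first benchmark kernel, a bitwise CRC-16 implementation is sketched below; the text does not specify which CRC polynomial was benchmarked, so the common CCITT 0x1021 variant is assumed:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16/CCITT-FALSE: polynomial 0x1021, initial value 0xFFFF."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

# Standard check value for this CRC variant.
print(hex(crc16_ccitt(b"123456789")))  # 0x29b1
```

The inner bit loop exercises shift, mask, and branch operations, which is why CRC throughput differs so strongly across 8-, 16-, and 32-bit data paths.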

The processors selected for benchmarking consist of representatives from a wide range of peak power and performance ratings and have been used in numerous low power ENS platform designs. These are listed in Table 2.1. The benchmark algorithms were run on each of the processors, and the total energy consumed during the benchmark execution was measured. A high speed digital sampling oscilloscope recorded instantaneous current and voltage values during benchmark execution. These values were post processed to calculate energy usage for each processor.

The energy efficiency metric is calculated by integrating instantaneous power values taken by a bench-top oscilloscope over a benchmark execution period. The energy-delay product [GH96] is useful when optimizing for energy consumption as well as execution latency. Some ENS applications may be latency tolerant; therefore trading off increased latency for reduced energy usage is desirable. In these cases, the energy efficiency metric alone is useful. However, in cases where specific latency bounds need to be met, or when optimizing for both energy consumption and performance, the energy-delay product is an appropriate metric. The optimal operating point is one that minimizes both energy consumption and the energy-delay product.

Processor        | Data Path | Max. Frequency
ATmega128 (AVR)  | 8-bit     | 16 MHz
MSP430F1611      | 16-bit    | 8 MHz
LPC2106 (ARM7)   | 32-bit    | 60 MHz
PXA255 (Xscale)  | 32-bit    | 400 MHz
DM3730 (ARM-A8)  | 32-bit    | 1 GHz

Table 2.1: Processor Types for Energy Benchmarks
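The difference between the two metrics can be shown with a small selection sketch; the benchmark figures are hypothetical, not the measured values behind Figure 2.4:

```python
# Selecting a processor by energy alone vs. by energy-delay product (EDP).
# (energy in mJ, delay in ms) per benchmark run; all values are hypothetical.
candidates = {
    "proc_a": (360.0, 12000.0),
    "proc_b": (180.0, 200.0),
    "proc_c": (220.0, 80.0),
}

by_energy = min(candidates, key=lambda n: candidates[n][0])
by_edp = min(candidates, key=lambda n: candidates[n][0] * candidates[n][1])

print(by_energy)  # proc_b: lowest energy, suits latency-tolerant tasks
print(by_edp)     # proc_c: lowest EDP, suits latency-bounded tasks
```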

Figure 2.4: Relative energy efficiency, execution speed, and efficiency-speed product calculations for each of the three benchmark algorithms on each processor architecture

Examining the benchmark results shown in Figure 2.4 reveals several orders of magnitude energy efficiency variations between processor architectures. Also, these efficiency differences vary significantly depending on the type of operations used by each benchmark application. In general, one can see that higher performance architectures also tend to display a substantial execution energy advantage as well as an energy-delay advantage. This may be expected since performance advances due to architectural features including pipelining, branch prediction, and others may result in dramatic improvement in energy efficiency due to workload optimizations [GH96, ZBS04].

However, to effectively utilize the energy efficiency benefits of the higher performance architectures, it is important to also consider the typically high energy cost associated with a transition of the platform between operating states according to workload demand. For example, the transition from a dormant standby condition to a high performance operating mode requires a substantial energy investment for restoration of system software and hardware state. These transition energy costs contribute to the determination of which processing resource is most efficient for a given task.

Many ENS experimental system deployments demonstrate that applications such as environmental monitoring or seismic sensing may be supported by systems that include lengthy dormant operational periods, during which ENS nodes are vigilant for events but may not be presented with large computing tasks. During these dormancy periods, in the absence of assigned computationally intensive tasks, the platform may be most efficiently supported by the lowest power processor selection. However, this selection is not applicable upon the arrival of events that present large computing demand. These events may require a high efficiency processing system to most effectively perform the required computing task. Here the associated transition energy costs are compensated by the energy efficiency savings.
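This amortization argument can be written as a break-even count; the transition and per-task energies below are hypothetical:

```python
# The transition to a high-efficiency processor pays off once the per-task
# savings amortize the wake-up cost. All energy figures are hypothetical.

def breakeven_tasks(transition_mj, energy_low_proc_mj, energy_high_proc_mj):
    """Tasks needed before waking the efficient processor saves energy."""
    saving_per_task = energy_low_proc_mj - energy_high_proc_mj
    if saving_per_task <= 0:
        return float("inf")  # the high-performance processor never pays off
    return transition_mj / saving_per_task

# Waking costs 500 mJ; a task costs 360 mJ on the low-power processor
# but only 180 mJ on the high-performance one.
n = breakeven_tasks(500, 360, 180)
print(n)  # about 2.8: bursts of three or more tasks justify the transition
```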

To address this computing diversity, a heterogeneous multi-processor solution may be the proper design decision. By utilizing multiple processors, or multi-modal operation of a single processor, one is able to select the solution that best matches the present computing needs. The application must properly assign tasks to the different processors or modalities to effectively utilize the range of operations. For example, the Texas Instruments DM3730, with characteristics described in Table 2.1, may be operated as a high performance applications processor but may also be reconfigured to operate from the internal DaVinci DSP while placing the ARM CPU into a low power suspend state.

2.2.2 Communications Resources

Just as energy efficiency has led to new design choices for processors, it should also guide selection of wired and wireless network interfaces. In addition to selecting among interfaces, one may also operate in multiple capability modes of a single technology option, for example selecting between the IEEE 802.11 a, b, g, and n modes and transmission rates. Indeed, the required link bandwidth and network capacity can be highly time variant and application dependent. The correct selection requires knowledge of transmission efficiency for a given communications technology.

We next survey a set of common radio chipsets in a fashion analogous to the previous processor review. The chipsets cover a broad spectrum of available bandwidth and networking standards in order to evaluate the spectrum of efficiencies available. For purposes of this comparison, data throughput was estimated to be 40 percent of the air interface rate for each system. This is consistent with simulation and empirical studies of IEEE 802.11 and IEEE 802.15.4 wireless interfaces [Gas02, KA00]. The link power values are calculated from the sum of transmitter and receiver power. Values are calculated from vendor data. An identical bit error rate (BER) and a 90 dB path loss are also applied for all interfaces.
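The per-bit calculation described above reduces to a few lines; the power figures below are illustrative placeholders rather than the vendor data used for Figure 2.5:

```python
# Energy per bit = link power / effective throughput, with effective
# throughput taken as 40 percent of the air interface rate as in the text.
# Power figures are hypothetical placeholders, not vendor data.

def energy_per_bit_nj(link_power_mw, air_rate_kbps, utilization=0.4):
    effective_bps = air_rate_kbps * 1000 * utilization
    return (link_power_mw * 1e-3) / effective_bps * 1e9  # joules -> nJ

wpan = energy_per_bit_nj(link_power_mw=60, air_rate_kbps=250)      # 802.15.4-class
wlan = energy_per_bit_nj(link_power_mw=1500, air_rate_kbps=54000)  # 802.11-class

# The WLAN radio draws far more peak power yet moves each bit more cheaply.
print(round(wpan), round(wlan))  # 600 69
```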

While specific experimental conditions related to range, environment, presence of interference, and choice of networking protocol will influence choices, a clear contrast is noted between these interfaces. As shown in Figure 2.5, the IEEE 802.11 systems are approximately 10 times more energy efficient per bit transferred than the example IEEE 802.15.4 system and other proprietary radio alternatives. When applying the energy-delay metric, similar to the processor evaluation results, this deviation is more extreme. However, it is also seen that the 802.11g interface requires a peak operating power much greater than that of the other choices. This result leads to an additional demand upon the ENS architecture: to include multiple wireless interfaces, each selected to provide both low power operation and high efficiency data transfer as variable application demands require.

Chipset      | Standard    | Max. Throughput
TR1000       | Proprietary | 115.2 Kbps
CC2420       | 802.15.4    | 250 Kbps
CC1110       | Proprietary | 500 Kbps
Atheros 5004 | 802.11b     | 11 Mbps
Atheros 5004 | 802.11a     | 54 Mbps
Atheros 5004 | 802.11g     | 54 Mbps

Table 2.2: Radio chipsets used for energy efficiency calculations

Figure 2.5: Comparison of radio chipsets with equal link margins at 10^-6 BER showing relative energy efficiency per bit, throughput, and efficiency-throughput

2.2.3 Storage Resources

Platform data storage is an important design consideration for any embedded system. Increasing local storage resources may increase platform size, platform cost, and both static and dynamic power consumption. Conversely, increasing local data storage may permit system implementations that rely on lengthy operation periods where only data acquisition and storage need occur. This reduces demand for frequent, energy intensive use of processor and wireless interfaces. Selecting the proper storage type is equally critical. Each storage type has specific advantages and disadvantages when considering characteristics such as access latency, storage density, cost per byte, and power consumption. Traditional storage types used in embedded platforms consist of several varieties of SDRAM, low-power SRAM, memory-mapped NOR and NAND flash, and serial NOR flash.

Part Number      | Type             | Density
28F256P30        | NOR Flash        | 256 Mb
CY62167DV18      | SRAM             | 128 Mb
MT29F8G08FABWP   | NAND Flash (SLC) | 8 Gb
MT48LC16M32L2    | SDRAM (SDR)      | 512 Mb
MT42L128M32D2    | SDRAM (DDR2)     | 2 Gb

Table 2.3: Storage media types used for energy efficiency calculations

The sequential read and write efficiencies for a variety of volatile and non-volatile memory types shown in Table 2.3 are calculated in Figure 2.6. The same trend toward high efficiency benefits gained by using high performance components is present in the storage media as was found in the previous sections. Again, the power overhead required to maintain these high performance storage types prevents them from being used universally; instead, a determination must be made when to migrate from one type to another depending upon application specific data trends.

The optimal mixture of storage size and memory technology is highly application dependent. This dependency can be attributed to several aspects including data longevity, organization, and throughput [LC03]. Long lived data accessed sequentially, and acquired at a modest rate, e.g. chronological temperature data, will dictate a different storage medium type than short lived, randomly organized burst data such as temporary packet buffers in an active network. An energy efficient platform must be able to choose in real time the most efficient storage type given the application’s desired storage characteristics.

Figure 2.6: Comparison of storage media showing relative energy efficiency per read, write, throughput, and efficiency-throughput
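The real-time storage choice described above can be framed as a total-energy comparison over the data lifetime; the write and standby figures below are hypothetical, not the measured values behind Figure 2.6:

```python
# Total energy to hold data = write energy + retention energy over the
# data lifetime. Volatile SDRAM writes cheaply but must stay powered to
# retain data; flash writes cost more but retains data without power.
# All per-media parameters are hypothetical.

def storage_energy_mj(size_mb, lifetime_s, write_mj_per_mb, standby_mw):
    write_energy = size_mb * write_mj_per_mb
    retention_energy = standby_mw * lifetime_s  # mW x s = mJ
    return write_energy + retention_energy

# 10 MB of logged sensor data held for one day (86400 s).
flash = storage_energy_mj(10, 86400, write_mj_per_mb=5.0, standby_mw=0.01)
sdram = storage_energy_mj(10, 86400, write_mj_per_mb=0.5, standby_mw=2.0)
print(round(flash), round(sdram))  # 914 172805: flash wins for long-lived data
```

For short lived buffer data the lifetime term shrinks and the comparison reverses, which is the application dependency noted in the text.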

2.3 Prior Work

We address related work in two sections. First we discuss power management solutions and commercial standards currently in widespread use. We then detail prior work in related research topic areas.

2.3.1 Established Power Management Solutions

Current power management solutions for general purpose platforms suffer from one or more of the following issues: a requirement for the host CPU to be active, a large memory footprint that is difficult to accommodate on memory constrained systems, use of overly simplified models of device power requirements, or reliance on single point solutions that do not adapt to dynamic environments.

2.3.1.1 ACPI

The predominant power management standard in general purpose computing today is the Advanced Configuration and Power Interface (ACPI) [ACP05]. ACPI relies upon a dedicated register set, BIOS, and platform description table in order to provide device level power management status and control information. It is widely used in general purpose computing equipment as well as many other systems based on the Intel x86 architecture. This power management architecture has proven to be versatile and scalable, as it is supported by the majority of hardware vendors and desktop operating systems today. While ACPI defines the hardware’s power management capabilities, it relies upon a separate component within the operating system, the operating system directed power management (OSPM) layer, to select the device power state most appropriate at a particular instant.

Traditionally, operating systems have used some form of software idle timers to make device level power management decisions [Lin07a, Mic06]. Software idle timers reduce energy consumption by powering down a component subsystem after a predefined period of inactivity. The time out settings can range in complexity from simple user defined constant values [Rus02] to non-stationary stochastic processes with online learning [BLR01, QQM00]. This approach is often effective in user based interactive systems, but may violate environmental real-time constraints for embedded systems. In addition, relying exclusively on idle timers neglects run time application profiling information which may be used to improve power management decisions or to modify application fidelity requirements [FS99a].
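A minimal sketch of the idle timer policy follows; the activity trace and timeout value are hypothetical:

```python
# Software idle timer policy: power a device down after a fixed period of
# inactivity, waking it on the next access. Trace values are hypothetical.

def simulate_idle_timer(events, timeout):
    """events: sorted timestamps of device activity (seconds).
    Returns the intervals during which the device would be powered down."""
    off_periods = []
    for prev, nxt in zip(events, events[1:]):
        if nxt - prev > timeout:
            # Device sleeps from (last activity + timeout) until next event.
            off_periods.append((prev + timeout, nxt))
    return off_periods

activity = [0.0, 1.0, 9.0, 9.5, 30.0]
print(simulate_idle_timer(activity, timeout=2.0))
# [(3.0, 9.0), (11.5, 30.0)]
```

Note that every sleep interval ends with a wake-up triggered by an unanticipated event, which is exactly the latency and host-CPU dependence criticized in the text.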

Another significant drawback of the software idle timer architecture is its dependence on the platform’s host processor. The host CPU must continue to operate in order to maintain the idle timers, calculate new timeout values, and transition devices into power management states. Often in embedded systems, the host processor may be a significant or even the predominant energy consumer [MV04b], making this choice less desirable. In environments that would allow extended periods of low vigilance, the energy overhead required to maintain CPU based idle timers may dominate. In contrast, it would be preferable to provide the platform with a schedulable means to transition device power states that does not require any host processor intervention whatsoever. This would prevent costly and unnecessary CPU wake up transitions.

Figure 2.7: ACPI architecture diagram (ACPI Specification v3.0b)

In addition, ACPI is not well supported in compact, embedded platforms. This can be attributed to the need for a separate, complex BIOS (basic input/output system) software component as well as a memory intensive platform description table generally not included in compact embedded systems. Further, in these embedded systems power management is typically performed directly in the operating system with no intervening power management layer. Solutions are frequently platform and processor specific and are often tuned for a particular application or environment. For example, the modern cellular phone platform has been optimized to prolong operating lifetime for a specific wireless protocol and user interface configuration. For this platform, networking power management settings are often fixed, while user interface settings may allow some manual adjustments such as the illumination timeout schedule and speaker volume. It is also important to note that the mobile phone platform does not adaptively change power management behavior as new applications are added or when supporting different users. These limitations severely restrict the platform’s ability to support dynamic environments and adaptive behavior, since it would require frequent manual parameter adjustment.

2.3.1.2 OnNow

Microsoft’s OnNow design initiative [Mic01] acknowledges the necessity to enable individual, device level power management changes in power-aware computing systems. OnNow seeks to build on top of ACPI enabled systems by extending power management capabilities into device driver and operating system software as well as recommending minimum hardware power management capabilities. It serves as a framework for ACPI power management information to be sent to power-aware applications and for power management directives to be propagated to platform hardware components. OnNow defines a limited set of global power management states, which are controlled directly by the operating system. The platform enters low power sleep states with any application required wake up capabilities enabled. This provides a means for individual applications to customize system sleep states depending on current requirements and provides application driven device wakeup capabilities.

Figure 2.8: OnNow architecture diagram (Microsoft Corp.)

When the system is in its working state, devices are allowed to perform independent power management actions without coordination through the operating system. This provides a basic mechanism to do local power management based upon information available to the device driver (such as the number of open file handles). In addition, the operating system dictates the CPU’s operation based upon usage and power source information.

OnNow is limited strictly to ACPI enabled systems (largely laptops, desktops, and servers). As noted previously, ACPI enabled hardware is suboptimal for typical ENS platforms. In addition, device drivers must make power management optimizations based solely upon locally available information; although global system information is available at the operating system level, no framework is provided to propagate this information.

OnNow also does not provide energy accounting features, which would enable device level efficiency optimization. Rather, it is assumed that the individual device driver is sufficiently power-aware and will make proper power management decisions without operating system intervention. As was discussed previously, application level software can have significant impact on overall energy efficiency; thus we actually require a means to provide energy performance information to the operating system so that power management can include both hardware and software decisions. This cannot be done efficiently within the device driver alone without system support for platform wide energy accounting information.

2.3.2 Related Research

The volume of prior related research is vast, encompassing general topics such as computer architecture, mobile computing, and enterprise computing, as well as specialty topics such as sensor networks and information theory. In general, related research can be decomposed into two categories of solutions: software driven (top-down) and hardware driven (bottom-up). Following this review of related work, we analyze prior ENS platform design architectures.

2.3.2.1 Software Driven Solutions

The top-down approach to power management is typically a software defined solution requiring only limited augmentation to existing commercial hardware (for energy monitoring purposes) and utilizes one or both of the following features: device energy models for power prediction; external test equipment (such as digital multimeters or oscilloscopes) to perform offline platform power profiling. The top-down approach can be organized into categories including: device level power management algorithms, power-aware memory management, energy-aware operating system design, application assisted power management, and system level energy profiling.

Device Level Power Management Perhaps the most well studied topic in power-aware computing is the concept of dynamic power management (DPM). Not surprisingly, one of the most studied device subsystems in DPM is the traditional computer hard drive. It has one of the most dynamic power consumption profiles due to its numerous motors and relays. In addition, long spin-up and spin-down delays introduce significant energy penalties for incorrectly predicted power states. DPM algorithms for single device power optimization through batched I/O requests [PS03] and timeout schemes are well studied, including optimal non-adaptive algorithms [RIG02] as well as predictive [DKM94] and adaptive [HLS00] algorithms. Chung et al. used finite-state Markov chains to perform online adaptation, extending optimum stationary control policies through a sliding window approach to handle unknown and non-stationary workloads. Qiu et al. [QQM00] used stochastic Petri nets to model concurrency and mutual exclusion primitives in DPM scheduling. Other solutions integrate real-time constraints [SCI01, UK03] for time critical applications. DPM implementations for IEEE 802.11 networking interfaces are numerous, typically augmenting or superseding the integrated power saving mode (PSM) capability. Anand et al. [ANF03] implemented an adaptive PSM feature for optimizing distributed file system energy. Additionally, optimization of processor voltage and frequency (dynamic voltage scaling) has been discussed in great detail [CB95]. In all reviewed DPM work, however, power profiling information was either derived from manufacturer’s data sheets or measured through external devices such as digital multimeters or oscilloscopes.

Power-aware Memory Management While DPM algorithms focus on the power states of individual platform devices, power-aware memory management software allows memory components to enter reduced power states through new allocation schemes incorporating memory technology characteristics. Zheng et al. [ZGS03] constructed a logical-disk subsystem allowing for the use of several different storage technologies, including hard disk, compact flash, and wireless LAN, as well as numerous file system providers. They assessed the complex interactions between these components and found strong energy dependencies linking device power management mechanisms to file system usage patterns.

Others have constructed power-aware virtual memory systems which put RAM components to sleep dynamically by tracking active memory segments and utilizing allocation schemes to minimize the number of required active nodes. This allows inactive memory areas to be put into low power modes, reducing system energy. According to Huang et al. [HPK03], significant energy reductions were possible (over 50 percent) using RAMBUS memory. However, this memory technology is no longer in widespread use.

Lee and Chang [LC03] proposed a memory allocation scheme incorporating the data retention energy requirements for several non-volatile memory technologies. Using trace-driven simulation techniques, the authors derived the location and number of required memory accesses for a particular task set. When mapped to the optimal storage technology, they found a significant energy reduction (up to 26 percent) of memory components is possible.

Energy-aware Operating Systems Other projects have investigated energy management from the operating system level rather than from individual components such as disk drives and CPU memory. Resource containers [BDM99] decouple resource tracking from the traditional operating system protection domain model. These containers allow tracking of non-process based resources, such as network bandwidth, disk I/O, and battery energy. Waitz [Wai03] utilized resource containers to provide energy tracking of the CPU from within the Linux operating system. However, this system did not incorporate energy data from additional device subsystems such as CPU memory or network interfaces.

ECOsystem [HCA02] incorporates energy as a schedulable operating system resource through the currentcy abstraction. Per-process energy information is critical in ECOsystem, as processes need to be charged appropriately to enable energy-based scheduling. ECOsystem uses a combination of battery lifetime sampling (a system-wide energy measurement) and system modeling. It was one of the first to describe the dynamic power relationships between platform devices. However, in ECOsystem the CPU and network interfaces are assumed to use fixed power values for simplicity, and the hard drive is modeled for a reduced number of operating states.

Application Assisted Power Management Application assisted power management operates by providing a direct communication capability between application software, the operating system, and specific hardware devices. This cross layer communication bridging capability allows tight coordination of power states through software triggering mechanisms. Microsoft’s OnNow framework, described previously, is one such implementation. In addition, several research implementations exist with similar feature sets.

Anand et al. [ANF04] expose device power states to applications. These applications utilize ‘ghost hints’ to inform device drivers of missed I/O events due to incorrect power management states. Devices are then able to proactively enable interfaces for applications prior to activation. One shortcoming of this implementation is the requirement for all applications using a managed device to support the ghost hint capability in order to achieve significant energy reduction.

Heath et al. [HPH02, HPB02] investigated application transformations that increase device idle times and propagate idle time information to the operating system. Compiler optimizations transform application code to provide pre-activation and deactivation for laptop disk access. This was shown to significantly reduce disk energy consumption for the chosen task set using simulated disk hardware.

An alternative implementation [WBB02] allows applications to declare the priority of a disk I/O access as deferrable or abortable through the use of an operation time-out and cancel flag. When deployed on a hard disk using a modified Linux Ext2 file system, results were found to be comparable with oracle performance. Finally, Lu et al. [LBM00] describe an online task scheduling algorithm that aggregates I/O requests across multiple devices to optimize device I/O access. However, this algorithm assumes each device may operate independently of the others, which ignores hardware dependency relationships.

System Level Energy Profiling Energy profiling has been investigated using several methods ranging from purely model driven systems to online implementations. Tan et al. [TRJ05] characterize the operating system as a multiple entry/multiple exit program where the transition from each input to each output is first characterized with a probability distribution function and an energy dissipation value. Their intention is to allow high level designers to compare the energy costs of alternative operating systems as well as to predict the energy overhead when using different operating system primitives. However, the feasibility of profiling an entire operating system for each version release is questionable.

Joule Watcher [Bel00] utilizes hardware performance monitoring units (PMUs) within the CPU to track instruction counts and to calculate energy dissipation for the CPU, L2 cache, and main memory devices. Micro-benchmarks are used to find the correlation of events to energy values within the CPU. An external digital multimeter (DMM) provides the power profiling data during the micro-benchmark execution. The PMU is then used to track execution and energy of individual threads. XTREM [GMJ04] uses an identical profiling architecture for the Intel PXA255. The authors then ran energy benchmarks on several common multimedia applications to verify model accuracy against actual hardware.

PowerTOSSIM [SHC04] is a power-aware extension to the TOSSIM TinyOS simulator. PowerTOSSIM uses an empirically-generated model of hardware behavior to simulate power dissipation on mote-class devices. The model is obtained by instrumenting and profiling a single node during actual operations. An oscilloscope was used to measure power dissipation of the entire mote, and a set of micro-benchmarks was used to exercise each subsystem independently.

Jung et al. [JTB07] introduce a model-based design tool that can identify the dominant energy-intensive hardware components over a set of operating patterns. The authors propose several operating states in which the system components operate in different power modes; measured power values can then be used to populate the model parameters. The proposed models assume that power dissipation of subsystems is constant among operations. However, this is not always the case, especially when a DVFS-enabled CPU is present.

PowerScope [FS99b] combines hardware instrumentation and kernel software support to provide per-process energy usage information. It uses an external multimeter and a second computer for data collection, and the data processing does not occur in real time. Furthermore, the power-measuring system itself requires significant energy and physical resources to operate, thereby limiting its application in large-scale systems.

PowerTOP [Lin07b] uses platform ACPI information to show how effectively the system is using the various hardware power-saving features. It displays the list of processes causing CPU wakeups from low power states. It does not collect run-time energy information but instead helps the user find applications responsible for exiting low power idle and sleep states. This can be used to tune applications such that the CPU remains in sleep states for longer periods of time, thereby reducing system power.

Figure 2.9: Wakeup information display from PowerTOP

With the emergence of the smart phone, a ubiquitous high performance mobile computing platform, consumers are able to run a wide variety of applications encompassing many of the elements of traditional sensor networks. Sensors such as geolocation, motion, light, compass direction, atmospheric pressure, and temperature have become standard equipment on these systems [ifi12]. In addition, several wireless interfaces including short range (Bluetooth), medium range (WiFi), and long range (cellular, LTE) are available to accompany the gigahertz speed multicore applications processors needed to run the demanding application sets required on these systems. The hardware components used on these platforms have incorporated a number of power-saving features, allowing components to dynamically adjust their power consumption based on required functionality and performance. In order to enable users to gain insight into the power expenditures of these platforms, several power profiling applications have appeared.

Figure 2.10: Power Tutor graphical display

In 2011 Zhang et al. [ZTQ10] developed PowerTutor (Figure 2.10), an Android application that estimates energy usage of smart phone subsystems based upon prior statistical modeling of the hardware. Energy usage of several platform subsystems including CPU, wireless, camera, audio, and storage is included. These energy models are obtained using methods similar to Jung’s, in that the models are derived by exercising micro-benchmark applications that attempt to isolate specific components under test (such as CPU, LCD, wireless, etc.) while minimizing power variation due to other subsystems. As with prior energy benchmarking applications, the platform’s lack of fine-grained power consumption data severely restricts the depth of insight one can gain for adaptive operation.

2.3.2.2 Hardware Driven Solutions

The bottom-up approach to power management is driven by low-power hardware VLSI design and utilizes one or more of the following features: transistor or instruction level energy models; CPU instruction counting; battery modeling; and compiler instruction set optimization.

Instruction Level Energy Optimization Numerous techniques exist for energy estimation at the instruction level. These typically involve cycle accurate CPU simulations utilizing energy information from transistor level or logic gate level models. Steinke et al. [SKW01] propose an instruction set energy model for an ARM processor without detailed knowledge of the internal processor structure. Their models are based upon a power profiling method derived by exercising the internal functional units and external data buses. In contrast, Rizo-Morente et al. [RCB06] propose a method for estimating energy usage by modeling the CPU as a linear system excited by the current due to each instruction. The authors claim an average correlation of 93 percent between estimated and measured values.

Using a different instruction level energy simulation model as well as a custom power profiling tool, Simunic et al. [SBM99, SBM00] discovered that several source code optimization techniques are able to dramatically reduce power consumption, by up to 90 percent on an ARM processor for an example MPEG decoding algorithm, while traditional performance based compilers were shown to be ineffective at finding low power instruction set implementations. Much of the inefficiency was found at inter-procedural boundaries where traditional compilers did not optimize away unnecessary memory accesses.

SoftWatt [GSI02] models the CPU, memory hierarchy, and hard disk using a simulated operating system, SimOS. Each of these devices is modeled at the micro-level, including cycle accurate emulation of the MIPS CPU with energy calculations based on analytical power models. The hard disk is modeled as a finite state machine composed of several management states, while the CPU cache energy is calculated from individual memory accesses. However, the authors do not present any data verifying the accuracy of their models.

Marculescu [Mar00] proposes a scheme to trade instruction level parallelism (ILP) for low power operation using software execution traces. His scheme searches for the point of diminishing returns whereby adding additional instructions into the dispatch unit will no longer efficiently execute due to data dependency hazards. Instead, by decreasing the number of simultaneously live instructions, data hazards are less likely to require pipeline stalls. CPU power values were calculated using a proprietary Intel power simulator.

Battery Modeling Other well-known power management techniques exist to extend system lifetime through battery relaxation. By controlling current draw characteristics, additional battery energy is available prior to complete discharge. Lahiri et al. [LDP02] present an introduction to battery-aware scheduling including the underpinnings of the battery recovery effect. Benini et al. [BCM00] and Luo and Jha [LJ01] discuss DPM techniques that incorporate battery modeling algorithms. By duty-cycling large current drains from the battery, battery relaxation is able to recover charge lost due to electro-chemical phenomena. Martin and Siewiorek [MS96] describe the effects of battery relaxation when coupled to a DVS enabled CPU. They show the non-linear relationship due to battery recovery while duty-cycling (using DPM) versus the traditional approach used under DVS. This results in extended lifetime under DPM not typically considered.

Energy-aware Hardware SPOT [JDC07] is an add-on board that enables energy accounting on the MICA2 mote platform. Charge accumulation is performed via a voltage to frequency converter circuit, similar to a sigma-delta ADC architecture, achieving a high resolution output and large dynamic range. The SPOT module also includes a dedicated counter to provide charge accumulation timing information. However, a low bandwidth read back channel hinders energy measurement at high sample rates. Additionally, SPOT is a single-channel design, monitoring only the mote platform's input current, and cannot easily support per-subsystem power information.

In 2011, the Qualcomm Corporation released the MSM8660 mobile development platform (MDP) [Bsq12], a reference platform for development of software applications optimized for their flagship Snapdragon S3 applications processor. Integrated into the MDP architecture are numerous current sensors located on the voltage regulator supplies. These supplies power various functions on the MDP and several different subsystems within the MSM8660 itself, for example the SMP applications CPUs, Adreno graphics processor, sensor processor subsystem (SPS), and audio DSP. Simultaneously Qualcomm released TREPN [Ben11], an Android application that acquires and displays energy profiling information from the MDP's current sensors. Using TREPN, software designers are encouraged to run power profiles of their applications in order to enable optimization during development. Applications are able to send intents to the TREPN profiler so that critical sections of software execution may be correlated with energy data.

Figure 2.11: Qualcomm’s TREPN power profiling

One significant limitation of the MDP's power measurement features is its low bandwidth sampling. Power profiling data is limited to less than one hundred hertz per sensor due to the single I2C bus that connects all the current sensors. As will be discussed later, operating the power profiling at less than the operating system's scheduler clock rate precludes per-process energy accounting. Additionally, since collection of the current sensor data requires software execution on the Scorpion applications processors, TREPN does not provide a method for power profiling in low duty cycle or low power operations where many of the processing resources are disabled. Unfortunately, due to licensing concerns Qualcomm removed TREPN from public access in late 2011 with no schedule for future release.
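The bus-bandwidth ceiling described above can be sketched with rough arithmetic; the clock rate, transaction size, and sensor count below are illustrative assumptions, not Qualcomm specifications:

```python
def max_sample_rate_hz(bus_clock_hz, bits_per_read, num_sensors):
    """Upper bound on per-sensor sample rate when all current
    sensors share one I2C bus (ignores protocol overhead gaps)."""
    reads_per_second = bus_clock_hz / bits_per_read
    return reads_per_second / num_sensors

# Illustrative: 400 kHz fast-mode I2C, ~40 bits per register read
# (addressing, register pointer, 16-bit data, ACK/start/stop bits),
# 20 current sensors sharing the bus.
rate = max_sample_rate_hz(400_000, 40, 20)
print(f"{rate:.0f} Hz per sensor")
```

Even this optimistic bound ignores driver latency and inter-transaction gaps, which push achieved rates well below the computed ceiling and below a typical scheduler tick.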

2.3.2.3 ENS Platform Architecture

In prior ENS research, a wide array of wireless sensor platforms with large variations in power, performance, and cost [Int02, SBC05, BBC96, ACP99, HSW00, PSC05, LRR05, LS05] have been constructed. These platforms are often constrained by the performance limitations of the low complexity components required for long battery lifetime, or by power limitations caused by wasted energy during long device dormancy periods.

There are notable platform designs which have attempted to overcome these constraints by operating over a wide dynamic range in power and capability. The PASTA [WBF04] platform attempted to solve both issues through dynamic scheduling of processing and communications resources. This platform was able to dynamically utilize various sensing, communications, and processing subsystems depending on anticipated workload. It also had an integrated macro level energy measurement capability. M-Platform [LPZ07] shares many similarities with PASTA, with an emphasis on design for modularity, including a custom inter-module bus protocol designed for high efficiency data transfers. Each board has custom bus logic to perform TDMA data transfers on the platform's backplane bus.

Triage [BSC05] is a tiered hardware and software architecture for microservers utilizing a high-power platform with ample resources and a resource constrained low-power platform. The low-power platform schedules tasks to optimize the use of the high-power platform. Scheduling is based on hardware-assisted profiling of execution time and energy usage. The energy profiler uses information about the remaining battery energy as well as energy consumption during a particular operation. However, the energy profiler does not provide per-subsystem information or power tracing capabilities; instead, an external data acquisition board must be used.

The first generation LEAP1 [MSR12] platform utilized a heterogeneous processor and radio architecture to provide both low power and high efficiency computation and networking operations. The EMAP module performed energy accounting, power domain scheduling, and sensor data acquisition. It was the first platform design to perform highly granular, device level energy profiling and power management functions.

The iMote2 [Int07] platform is designed around a high performance PXA271 processor and has 802.15.4 connectivity through its CC2420 radio. It has numerous serial interfaces available for expansion boards on a dedicated connector. XSM [DGA05] is a platform utilizing micropower analog circuits to facilitate operation over very long intervals in a low vigilance mode.

2.4 The Case for Energy Accounting

In our introduction we described a set of new ENS system requirements associated with many important applications and the resulting requirements for a new platform architecture. In the previous sections we have described the wide range in energy efficiency between heterogeneous system components including computation, storage, communications, and sensing. We have also noted that heterogeneous sets of resources often permit the use of different methods to achieve the same system level objective. Finally, we note that a general characteristic of many ENS applications is the demand for intermittent operation of many platform resources and therefore intermittent and infrequent dissipation of energy.

In order to make beneficial use of these characteristics and enable significantly lower average power, we require a new embedded system architecture. This introduces a design where components are not selected solely for low average power dissipation, but are selected with both energy and performance criteria to achieve the highest energy efficiency and lowest system energy dissipation for a specific measurement objective. This also includes energy management to enable scheduling of component operation. Thus, it is possible to select the most energy efficient components to meet the demands of each sensing task. It is also important to note that this system design must address the requirements that arise from the need for dynamic task selection for applications that present events and phenomena that are unpredictable and known only at runtime. Specifically, an energy accounting capability is required for in-field adaptive optimization of energy and performance with methods that may only operate if provided with real-time knowledge of subsystem energy dissipation. In order to argue the need for a real-time energy accounting mechanism, we first address several misconceptions in power management and also present deficiencies of existing solutions.

2.4.1 Dynamic power profiles

When presenting new power management schemes for platforms lacking integrated support for device level, real-time energy profiling, a common misconception is to assume the accuracy of pre-generated, average power values when exercising power management states [DKM94, HLS00, LM99]. In these schemes, platform power models are empirically generated offline to simulate real-time power dissipation. The models are obtained by instrumenting and profiling a single node running a predefined set of software benchmarks. Typically, an oscilloscope is used to measure power dissipation of the entire platform through a shunt current monitoring method. While this method proves applicable to limited complexity devices such as microcontroller based systems, it does not accurately capture the dynamic nature of modern, high performance microprocessor based system-on-chip (SoC) devices. As can be seen in Figure 2.12, the power profile for a PXA270 processor enabled with dynamic voltage and frequency scaling (DVFS) may vary by over an order of magnitude while executing the benchmark application. This implies that assuming average power values leads to significant estimation errors when applied to power dynamic platforms. Additionally, average value estimations generated via undersampling methods can completely miss short term events altogether.
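The undersampling hazard can be made concrete with a toy trace; all of the numbers below are illustrative, not PXA270 measurements:

```python
# Toy power trace: 100 mW baseline with a 1 W burst lasting 1 ms
# at the start of every 100 ms period (e.g., a brief DVFS excursion).
BASELINE_W, BURST_W = 0.100, 1.000
PERIOD_S, BURST_S = 0.100, 0.001

def power(t):
    return BURST_W if (t % PERIOD_S) < BURST_S else BASELINE_W

# True average over one period (closed form for this trace).
true_avg = (BURST_W * BURST_S + BASELINE_W * (PERIOD_S - BURST_S)) / PERIOD_S

# "Average power" estimated by sampling once per 100 ms at a 50 ms
# offset: every sample lands between bursts and misses them entirely.
samples = [power(0.050 + k * PERIOD_S) for k in range(100)]
est_avg = sum(samples) / len(samples)

print(f"true {true_avg*1e3:.1f} mW vs sampled {est_avg*1e3:.1f} mW")
```

The undersampled estimate reports the baseline alone, silently discarding the burst energy that a sufficiently fast in-situ monitor would capture.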

Figure 2.12: PXA270 power dissipation over time, when running the dcache test program, for six CPU speeds measured by the EMAP ASIC on LEAP2

2.4.2 Energy Correlation Effects

When operating without an integrated real-time energy accounting mechanism, power management schemes are unable to measure two critical aspects present in complex systems: first, the correlation between software tasks and energy usage, and second, the correlation between individual hardware components themselves. We next show that energy independence of software tasks and of hardware components, as required by average power modeling, is invalid.

2.4.2.1 Software Task Sets

Figure 2.13-a shows the power profile implications for both the system CPU and flash storage media when operating a file system during a typical file copy operation. Since the flash device is a NOR based component, it supports an erase-suspend on write capability. This allows block erase operations (which may require tens of milliseconds to complete) to be suspended in order to perform other more immediate flash operations. The Linux file system software (Journaling Flash File System—JFFS2) utilizes this feature to perform interleaved write and erase operations. This behavior explains the pulse-like shape of Figure 2.13-b.

If, instead, the NOR flash device is written directly through user-space erase and write operations (rather than through the JFFS2 file system), the power profile is dramatically different, as shown in Figure 2.14. It is apparent from both the CPU and flash power profiles that these software implementations, though providing equivalent functionality, have dramatically different impacts on overall energy usage. This can be seen from the difference in the integrals of the power profiles between the two examples.
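The comparison amounts to integrating each power trace over the operation's duration; the short traces below are made-up placeholders standing in for the measured profiles:

```python
def energy_joules(power_w, dt_s):
    """Trapezoidal integration of a uniformly sampled power trace."""
    if len(power_w) < 2:
        return 0.0
    return dt_s * (sum(power_w) - 0.5 * (power_w[0] + power_w[-1]))

# Placeholder traces sampled every 10 ms: a pulse-like interleaved
# write/erase profile vs. a steadier direct-write profile.
jffs2_like = [0.4, 0.7, 0.3, 0.7, 0.3, 0.7, 0.4]
raw_like   = [0.4, 0.5, 0.5, 0.5, 0.5, 0.5, 0.4]
dt = 0.010
print(energy_joules(jffs2_like, dt), energy_joules(raw_like, dt))
```

Even when two implementations reach similar peak power, it is the integral over the operation's duration that decides which one is cheaper in energy.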

2.4.2.2 Hardware Components

The energy relationship between individual hardware components is also difficult to capture without specific hardware support for energy accounting. For example, when using the oscilloscope profiling method explained previously, a set of software micro-benchmarks [GMJ04, Bel00, SHC04] is often used in order to exercise each hardware subsystem independently. This is required in order to derive individual hardware component energies. However, many hardware devices require hierarchical operational relationships with other components, thus preventing independent operation, for example the use of a communications bus between a CPU and a device subsystem.

This is the case for components such as an 802.11 WLAN device attached to a PCI bus. Here, power independence of the WLAN device from its parent PCI bus, and, as shown in Figure 2.15 for a typical PC, from the parent bridging hardware (e.g. Northbridge and Southbridge), cannot be assumed, since bus activation is required prior to device operation. At the same time, the parent PCI bus may also connect numerous other components not related to the targeted device. Therefore the operation of resources unrelated to the micro-benchmark, but connected to some parent resource, can have a significant impact on the execution and energy calculations related to the benchmarking application. In order to claim micro-benchmarking accuracy, device energy independence must be assumed, but due to the connectivity of shared parent resources this assumption does not hold.

Micro-benchmark use is also limited to a priori known hardware relationships, since the benchmark models themselves are developed by exercising only limited subsets of the system and constructing an estimate from weighted summations of these benchmarks. This may neglect correlation effects that can have significant influence on total power usage. Figure 2.16 demonstrates one such example where the relationship between the CPU and cache is only discovered through our direct measurements. While it is generally accepted that there is an efficiency loss due to data cache miss cycles, we also found that CPU power is significantly increased during data cache hits due to fewer stalled instruction cycles. This would have resulted in an 8% estimation error using micro-benchmark modeling techniques, since the CPU's actual energy expenditure is dependent upon data set organization within the cache.

2.4.2.3 Traditional Energy Accounting Solutions

Traditional energy accounting techniques in ENS systems as well as in mobile computing rely on external device support, such as oscilloscope sampling or data acquisition systems [DKM94, BSC05, ANF03, ANF04, BCM00, RG00], on internal device support such as commercial "fuel gauge" circuits [LPZ07, SBC05], or on peripheral circuit board modules [JDC07]. As shown in Figure 2.17, power dissipation is typically measured at the node power supply circuit input, thereby providing only system-wide energy consumption information. Devices such as oscilloscopes and data acquisition systems provide high sampling rates and as such can acquire and display power dissipation data in real-time. However, they are external devices and may not probe all internal subsystem circuit paths. Moreover, they are large, costly, and require substantial power to operate, and are thus impractical for use in deployed systems that need compact, low-power, real-time energy consumption information. A fuel gauge or an integrated peripheral solution is sufficiently low-power to be included in actual field applications; however, those devices cannot meet real-time constraints as they are limited either by their internal sampling rate or by the speed of the communications bus. We demonstrate this issue later in Section 3.1.5.2.

Figure 2.13: CPU, SDRAM, NAND and NOR power draw over time when copying a 7Mb file from NAND to NOR using JFFS2.

Figure 2.14: Power profile when writing directly from user space utilities.

Figure 2.15: Typical PC hardware architecture [Wik07]

Figure 2.16: Energy correlation effects showing CPU power increase due to L1 cache hit

Figure 2.17: Traditional sampling methods: oscilloscope power profiling setup measuring the UUT (a) and fuel gauge circuit (b)

CHAPTER 3

A New Approach: LEAP

Having now described the needs of new ENS applications and detailed prior ENS design approaches to energy management, we now discuss the specific design criteria of the LEAP architecture. We start by outlining our design goals for this new energy-aware architecture. Specifically, we require the following architecture advancements in order to meet our energy-aware design criteria:

1. Creation of a new high resolution, in-situ power monitoring system.

2. Provide a constantly vigilant measurement method that requires low overhead to operate.

3. Be consistent with the platform's low power operation and not require significant additional energy resources to operate.

4. Provide fine grained, component or device level power measurement sufficient to attribute energy to specific functions within the system.

5. Utilize sampling rates consistent with accurately measuring all dynamic power features of the region under measurement. This may vary with the specific component being measured and its operating state.

6. Provide energy measurement across a large number of separate power domains. Each must be resolvable as an independent energy contribution to the system's whole.

7. Enable power collapse of idle resources while maintaining sufficient boundary isolation so as not to prohibit normal operation of active resources.

These architecture advancements can be used to enable a new energy-aware application development process, as shown in Figure 3.1. The section highlighted in blue notes the enhanced design cycle, including in-situ power measurements and visualization tools that provide iterative development based upon energy dissipation data.

Figure 3.1: Applications development using a LEAP enabled design flow

We now detail how the noted design criteria flow into a design specification for the LEAP architecture. The following sections provide specifics of both the software and hardware components of the LEAP architecture, including design rationale and technical details. This is followed by a description of a custom designed reference implementation of this architecture in the LEAP2 ENS platform. Our LEAP2 platform has served as both a test bench for validation of the LEAP principles and as an enabler for development of energy-aware ENS applications including profiling utilities, networking protocols, and detection algorithms.

3.1 Hardware Enablement

In order to fulfill our stated objectives, we first assess the hardware requirements for in-situ energy measurement. A proper design must overcome a significant challenge: to provide both a highly accurate measurement capability and a sophisticated accounting system, all while maintaining very low power operation. Our measurement system is composed of several building blocks noted in Figure 3.2. Specifically, we require the following blocks: power sensors, power switches, analog-to-digital converters (ADC), and energy accounting processors.

Figure 3.2: Energy-aware architecture components: power sensors, power switches, analog conversion, energy accounting

These building blocks form a naturally hierarchical design where each block may be replicated n times when connected into the next level block. For example, one or more power sensors may be aggregated into a single ADC, and one or more ADCs may be connected to an energy accounting processor. The design of each building block may be homogeneous across the entire system, or custom building blocks may be used in a heterogeneous design where each building block is optimized for a particular power domain. The system designer may choose from a variety of building blocks depending on the specific requirements. We now discuss the specific selection criteria for each of these building blocks.

Figure 3.3: Typical set of ENS subsystems included in the in-situ measurement in the LEAP architecture
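The n-way replication described above can be sketched as a simple containment hierarchy; the class and domain names here are illustrative, not the actual LEAP component names:

```python
from dataclasses import dataclass, field

@dataclass
class PowerSensor:
    domain: str            # power domain being monitored
    shunt_ohms: float      # sense resistor value

@dataclass
class ADC:
    channels: list = field(default_factory=list)  # one or more sensors

@dataclass
class AccountingProcessor:
    adcs: list = field(default_factory=list)      # one or more ADCs

    def domains(self):
        """All power domains resolvable by this accounting processor."""
        return [s.domain for adc in self.adcs for s in adc.channels]

# One accounting processor aggregating two ADCs, each fanning in sensors.
eap = AccountingProcessor(adcs=[
    ADC(channels=[PowerSensor("CPU", 0.1), PowerSensor("SDRAM", 0.2)]),
    ADC(channels=[PowerSensor("GPS", 0.47)]),
])
print(eap.domains())  # ['CPU', 'SDRAM', 'GPS']
```

Each level of the hierarchy can be sized independently, which is what allows a heterogeneous design to tailor a block to its power domain.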

3.1.1 Power Supply Characteristics

Before embarking on the design of the individual building blocks for the in-situ measurement system, we first look at the characteristics of power usage in typical ENS electronics. A plot of the frequencies in the supply voltage and supply current can be used to determine the information bandwidth of the power supplies. A plot of the power spectral density (PSD) provides insight into the frequencies of interest. We first perform an experiment to measure the current on the CPU and SDRAM power supplies of a typical PC motherboard at high frequency—

49 0 10

−1 10

−2 10

−3 10

−4 10 Relative PSD

−5 10

−6 10

−7 10 0 0.5 1 1.5 2 2.5 3 3.5 4 4.5 5 4 Frequency x 10

Figure 3.4: Power profile of Intel Core 2 Quad CPU Q6600 2.4GHz

5 MSa/s. (It is assumed that the regulated supply voltage maintains a small variation compared to its DC value, so that power is proportional to the supply current.) The power profile of a high performance CPU, such as the Intel Core 2 Quad, operates with the highest frequency clock rates and contains complex internal clock gating functions. It thus likely contains the highest frequency components of any of the PC power supplies. The PSD of this CPU's power supply, measured in-situ while exercising dynamic test bench loading, is shown in Figure 3.4. As can be seen, 99% of the CPU supply's power is contained within the first 500Hz and 99.9% within the first 1kHz. All higher frequency components of the power signal have been attenuated by the bypass capacitors placed near each of the CPU power pins as well as by the inductive properties of the PCB traces themselves, as shown in Figure 3.5.

Figure 3.5: Power filter formed through bypass capacitors and trace inductance

Each of the building blocks constructed for our in-situ measurement system must comply with the maximum supply frequency expected for that power domain. Also, per the Nyquist theorem, a minimum ADC sampling frequency of at least two times the maximum supply frequency can fully capture the signal and thus adequately meet the resolution requirement. Of course, actual ADC sample frequency selection will depend upon any front-end filtering required for rejection of higher frequency aliasing components, but the maximum supply frequency provides the filter's cutoff point.
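The sampling-rate rule above reduces to simple arithmetic; the 1 kHz input is the 99.9%-power bandwidth measured for the CPU supply, while the oversampling margin is an illustrative choice:

```python
def min_adc_rate_hz(supply_bandwidth_hz, margin=1.0):
    """Nyquist minimum: sample at least twice the highest supply
    frequency. A margin > 1 leaves room for a realizable
    (non-brick-wall) anti-aliasing filter whose cutoff sits at
    the supply bandwidth."""
    return 2.0 * supply_bandwidth_hz * margin

print(min_adc_rate_hz(1_000))        # bare Nyquist minimum, Hz
print(min_adc_rate_hz(1_000, 2.5))   # with filter roll-off margin, Hz
```

In practice the chosen rate sits above the bare Nyquist minimum because the front-end filter's transition band must be attenuated before the alias region begins.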

Having established the minimum signal bandwidth for our in-situ measure- ment system we next proceed with the design of each of the building blocks in that system.

3.1.2 Power Measurement Sensor

Our architecture first requires multiple sensors capable of accurately measuring instantaneous power dissipation across a wide dynamic range. This power measurement is most frequently performed by separately measuring instantaneous voltage v(t) and current i(t) to generate the product P(t) = v(t)i(t). The voltage measurement is commonly made by direct connection to the ADC input. (We address the measurement considerations for the ADC in the next section.) The instantaneous current measurement is performed either through magnetic coupling or through a shunt resistor placed in series with either the supply voltage (high-side) or the ground reference (low-side). Magnetic coupling methods, while non-intrusive to the measured circuit, are restrictive due to size, cost, and nonlinearity issues. Shunt resistors, while ideal from cost and size perspectives, induce a power efficiency loss in the measured resource. The resistive method was chosen for our LEAP design, though we later detail the power efficiency loss due to this technique. We must also consider whether to use the high-side or low-side resistive measurement method.

Figure 3.6: Resistive current sensor types a) high-side b) low-side and typical PCB power plane regions

A challenge when using low-side measurement is the insertion of the shunt resistor into the signal return path. This causes signal integrity issues for high frequency signals due to the disruption of the return path. Also, as part of our LEAP design requirements we require isolation of individual power domains, creating additional non-continuous supply planes. A low-side solution would then lack any low impedance, continuous return path and would cause signal integrity issues for data transmission. Figure 3.6 shows a typical LEAP enabled circuit board power plane with isolated supply voltages. We therefore select the high-side measurement method for LEAP in order to maintain a low impedance ground path and allow the supply planes to be isolated into regions dedicated to particular hardware resources.

In order to minimize power loss in the current sensing circuit, a small valued shunt resistor is used, combined with a specialized current-sense amplifier circuit. In the high-side sensing configuration, the current-sense amplifier is a low-power, high gain circuit that is highly resistant to common mode noise. Additionally, the current sensing amplifier must provide accuracy across a wide dynamic range, since as part of our LEAP design philosophy we seek to provide a wide operating range for all system components. To maximize the sensor's dynamic range we choose a resistor value, Rs, based upon the anticipated maximum current value, Imax, the maximum full scale voltage of the ADC, Vfs, the gain value of the current amplifier, Ag, and its given input offset voltage, Voff. Using higher gain values, Ag, permits smaller sense resistors, Rs, at the expense of more error due to non-linearity of the input offset voltage, exacerbated by the high-side common mode signal. As the differential input voltage developed across the shunt resistor at the input of the amplifier decreases, the inherent input offset voltage becomes a larger percentage of the measured input signal, resulting in an increase in measurement error. The correct component choices are made by reviewing the tradeoffs among the various values for Rs and Ag based upon the given Vfs and Voff. We define the design limits to be set at a maximum permitted measurement error, Emax, which occurs at the minimum expected measurement current, Imin, and at the maximum current sense output value, defined by the Vfs of the ADC at Imax. The total measurement error, Etot, is composed of several error components combined using the root-sum-square (RSS) method:

Etot = sqrt(Eoff^2 + Eg^2)   (3.1)

We now detail the selection of components using an example following the procedure used in development of the LEAP architecture. First, components capable of operating over a wide dynamic range in power dissipation are selected. The dynamic range of many components spans several orders of magnitude. For our example implementation we assume a region of operation for a circuit that operates between Pmin and Pmax. We assume the region uses a typical supply voltage, which sets the maximum and minimum currents across the shunt resistor. We also select the same supply voltage for the ADC's Vfs to maximize the reference voltage range. While it would be possible to increase Vfs to reduce amplifier error, it is not a common feature of ADC components to allow Vfs greater than Vdd. Finally, we select a maximum error Emax at the minimum input level, as this gives comparable accuracy to existing off the shelf fuel gauge components. For our example we also select the Texas Instruments INA21x family of off the shelf current sense amplifier components based upon their low quiescent current, high common mode rejection, and low input offset voltage. The INA21x family data sheet states a quiescent current of 100uA, a common mode rejection ratio of 120dB, and an input offset voltage of 5uV. The maximum error requirement sets the minimum sense voltage required at the input to the amplifier, Vs,min, determined by Pmin and Rs. The amplifier total error, Etot, is set by the combination of Voff, common mode offset voltage Vcm, and gain error Eg, as follows:

Selected design constants:

Pmin = 1mW, Pmax = 1W
Vdd = 3.3V
Vfs = Vdd
Emax = 10%

INA21x amplifier component constants:

Voff = 5uV
Eg = 0.02%
CMRR = 120dB
Iq = 100uA

The common mode voltage is then:

Vcm = Vdd / 10^(CMRR/20)   (3.2)

Verr = sqrt(Vio^2 + Vcm^2)   (3.3)

Eio = sqrt(Emax^2 - Eg^2)   (3.4)

Vs,min = (Verr · 100) / Eio = Imin · Rs = (Pmin / Vdd) · Rs   (3.5)

Then we get:

Vcm = 3.3uV
Verr = 5.99uV
Vs,min = 599uV
Rs = 0.2 ohms
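The arithmetic of Equations 3.2 through 3.4 can be checked directly with the constants above (a verification sketch, treating the amplifier offset Voff as the Vio term):

```python
import math

Vdd = 3.3          # supply and ADC full-scale voltage (V)
CMRR_dB = 120.0    # common mode rejection ratio
Voff = 5e-6        # amplifier input offset voltage (V)
Eg = 0.0002        # 0.02% gain error, as a fraction
Emax = 0.10        # 10% maximum permitted error, as a fraction

Vcm = Vdd / (10 ** (CMRR_dB / 20))   # Eq. 3.2: common mode voltage
Verr = math.hypot(Voff, Vcm)         # Eq. 3.3: RSS of offset terms
Eio = math.sqrt(Emax**2 - Eg**2)     # Eq. 3.4: offset error budget

print(f"Vcm={Vcm*1e6:.2f}uV Verr={Verr*1e6:.2f}uV Eio={Eio*100:.2f}%")
```

The computed values match the quoted 3.3uV and 5.99uV, confirming that the gain error term is negligible next to the offset contribution at this error budget.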

The maximum gain value is now defined using Vfs, Rs, and Pmax. We desire the output of the amplifier to be at the ADC's full scale setting at the point where the sensed load reaches its maximum current.

Ag,max = Vfs / ((Pmax / Vdd) · Rs)   (3.6)

Since we set Vfs to be the same as Vdd this becomes:

Ag,max = 1 / (Rs · Pmax) = 5   (3.7)

The final step is to calculate the power overhead required to operate the circuit. This is found as the ratio of the power used by the load, Pload, to the power used by the measurement circuit, composed of the sum of the power used in the sense amplifier, Psense, and the power used in the shunt resistor, Pshunt. We find this for both the maximum and minimum loads and derive the circuit efficiency as follows:

Efficiency = 1 - (Psense + Pshunt) / (Psense + Pshunt + Pload) (3.8)

Where the power at the load, Pload, is in the range {Pmin,Pmax}. The power dissipated in the sensing amplifier circuit is just:

Psense = Vdd · Iq (3.9)

The power dissipated in the shunt resistor is found from the relationship

Pshunt = Ishunt^2 · Rs = (Pload / Vdd)^2 · Rs (3.10)

For the current example we find the efficiency Effmin = 75% and Effmax = 98% at the operating points {Pmin, Pmax} = {1mW, 1000mW}. The above example can be repeated to find the best Rs and Ag for each power domain in the design using the estimated Pmin and Pmax for each of the platform's power domains. Table 3.1 shows the calculated values for several power domains measured on the LEAP2 reference platform.

Power Domain      Power Range    Vdd       Rs    Ag   Efficiency
HPM - Processor   25mW-1.5W      0.8-1.1V  0.1   24   0.99
HPM - SDRAM       10mW-500mW     1.8V      0.2   60   0.98-0.99
                  1mW-300mW      3.3V      0.2   180  0.75-0.99
GPS               1mW-200mW      3.3V      0.47  115  0.75-0.99
SDIO              0.1mW-100mW    3.3V      0.47  230  0.23-0.99

Table 3.1: Power Domains with operating range, Sense Resistors, Amplifier Gain, and Measurement Efficiency
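The efficiency bounds quoted above follow directly from Equations 3.8 through 3.10; a quick numerical check using the example's INA21x constants:

```python
# Measurement overhead at the example operating points (Eqs. 3.8-3.10).
Vdd = 3.3       # supply voltage (V)
Iq = 100e-6     # amplifier quiescent current (A)
Rs = 0.2        # shunt resistor (ohm)

def efficiency(Pload):
    Psense = Vdd * Iq                  # Eq. 3.9: sense amplifier power
    Pshunt = (Pload / Vdd) ** 2 * Rs   # Eq. 3.10: shunt resistor power
    return 1 - (Psense + Pshunt) / (Psense + Pshunt + Pload)  # Eq. 3.8

eff_min = efficiency(1e-3)   # at Pmin = 1mW
eff_max = efficiency(1.0)    # at Pmax = 1W
print(f"Effmin = {eff_min:.0%}, Effmax = {eff_max:.0%}")
```

At the minimum load the fixed amplifier quiescent power dominates the overhead; at the maximum load the shunt dissipation dominates.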

3.1.3 Data Conversion

Having described the requirements for our power sensor design, we now address digitization of the power sensor's analog signals using an analog to digital converter (ADC). The continuous time power P(t) = v(t)·i(t) is frequently measured as its discrete time equivalent using ADC sampling. The energy consumed over a given period measured by an ADC can be calculated using a summation of the products of ADC samples for the current and voltage signals. The sums of the products correspond to the incremental dissipated energy values and can be calculated using Equation 3.11.

∆E = Σ from 0 to t of (Ki · I(t) · Kv · V(t)) (3.11)

Ki = VADC,range / ((2^B - 1) · Ai · Rsense,i · fsample) (3.12)

Kv = VADC,range / ((2^B - 1) · Av · fsample) (3.13)

where,

VADC,range = input dynamic range of ADC (V)

B = number of bits available in ADC

Av = voltage gain of the ADC amplifying stage (V/V)

Rsense = current-sense resistor (Ω)

fsample = sampling frequency of ADC (Hz)

Since Ki and Kv are typically constants in the design, the energy measurement distills into a single multiply and accumulate (MAC) operation per cycle per channel. Furthermore, since most systems employ voltage regulators at the supply, the V(t) measurement can often be replaced with a simple constant Vsupply for each power domain. The equation then drops to a simple accumulation operation (charge measurement) with constant scaling factors applied at the end, as shown in Equation 3.14. However, with the use of dynamic voltage scaling techniques we cannot make this assumption and must revert to methods consistent with Equation 3.11.

∆E = ∆Q · Vsupply (3.14)
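A behavioral sketch of this per-sample multiply and accumulate, and of the constant-supply simplification of Equation 3.14, follows. The converter parameters are illustrative assumptions rather than values from the text, and the 1/fsample interval is applied once per sample product:

```python
# Discrete-time energy accumulation in the spirit of Equation 3.11. ADC
# codes are scaled to amps and volts, multiplied, and integrated over the
# 1/fsample interval, one multiply-accumulate per sample.
V_range = 2.5        # VADC,range: ADC input range (V), assumed
B = 12               # ADC resolution (bits)
Ai, Av = 50.0, 0.5   # current/voltage channel gains (V/V), assumed
Rsense = 0.2         # current-sense resistor (ohm)
fsample = 1000.0     # sampling frequency (Hz)

amps_per_code = V_range / ((2**B - 1) * Ai * Rsense)
volts_per_code = V_range / ((2**B - 1) * Av)

def accumulated_energy(i_codes, v_codes):
    """MAC loop over paired current/voltage samples; returns Joules."""
    e = 0.0
    for i, v in zip(i_codes, v_codes):
        e += (amps_per_code * i) * (volts_per_code * v) / fsample
    return e

def accumulated_charge_energy(i_codes, Vsupply):
    """Equation 3.14: accumulate charge only, scale by a constant supply."""
    dQ = sum(amps_per_code * i for i in i_codes) / fsample   # Coulombs
    return dQ * Vsupply

# A constant 0.2A load at a fixed 3.3V supply for one second is ~0.66J
# whichever way it is computed.
i_code = round(0.2 / amps_per_code)
v_code = round(3.3 / volts_per_code)
e_full = accumulated_energy([i_code] * 1000, [v_code] * 1000)
e_const = accumulated_charge_energy([i_code] * 1000, 3.3)
```

When the supply voltage is regulated the two methods agree; under DVFS only the full per-sample product is correct.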

The proper ADC selection involves considering multiple design properties simultaneously, including maximum sampling rate, sample data word size/resolution, number of input channels, and data interface type. Each property has a direct impact on the ADC's power dissipation and must be selected with careful consideration of its implications on both ADC and total system power use. For example, increasing the ADC sample rate not only increases the power consumed for sample conversion, but also implies increased power dissipation for data transfer out of the ADC sample buffer as well as additional power required for data processing.

Most modern ADCs also implement power down capabilities so that the ADC can be put into low power sleep states between conversion cycles. Here the ADC power consumption is a combination of sleep mode power Psleep and active mode power Pon. Equation 3.15 shows that the amount of time the ADC spends in active mode is a function of the ADC's sample clock rate. Normally we would seek to reduce the sample clock rate to the minimum acceptable value in order to reduce the power dissipated by the clock generator. However, in the case of the ADC this may actually increase the total ADC power. Since the ADC requires a constant number of sample clock cycles to generate the conversion result, slowing the clock extends the time required to generate the output sample. Thus there will be additional power consumption due to the added time spent out of the low power sleep state during the extended sample conversion. Equation 3.16 shows how average power relates to the particular ADC settings for sample rate fs, sleep power Psleep, ADC sample width in bits Nb, power up transition time tpu, and sample clock fclk.

ton = fs · (tpu + Nb / fclk) (3.15)

Pave = Pon(fclk) · ton + Psleep(1 − ton) (3.16)
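The trade-off captured by Equations 3.15 and 3.16 can be sketched as follows; the device parameters here are illustrative placeholders, not datasheet values:

```python
# Duty-cycled ADC average power per Equations 3.15 and 3.16.
Pon = 3.0e-3      # active conversion power (W), assumed
Psleep = 2.0e-6   # sleep mode power (W), assumed
tpu = 1.0e-6      # power-up transition time (s), assumed
Nb = 12           # sample width (bits)
fclk = 10e6       # sample clock (Hz), assumed

def p_average(fs):
    ton = fs * (tpu + Nb / fclk)           # Eq. 3.15: active-time fraction
    return Pon * ton + Psleep * (1 - ton)  # Eq. 3.16: average power

for fs in (100.0, 1e3, 1e4, 1e5):
    print(f"fs = {fs:8.0f} Hz -> Pave = {p_average(fs)*1e6:8.2f} uW")
```

Average power scales nearly linearly with sample rate once the active-time fraction dominates the sleep floor.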

As an example, Figure 3.7 shows the average power for a Texas Instruments ADS7888 over a wide range of sample rates. As can be seen, significant power savings in the ADC are possible by accurately matching the sample rate to the power supply's information bandwidth.

Figure 3.7: ADS7888 ADC power consumption over a range of sampling frequencies

In addition to proper sample rate selection, ADC resolution also has significant implications on power. Selecting a sample size beyond the fidelity of the power sensor inputs provides no additional information at the expense of additional conversion power, data word transfer power, and computation power. The correct choice matches the effective number of bits (ENOB) to the expected error margin set during the specification of the power sensor noted in a previous section. Likewise, careful attention to the ADC data interface is necessary to 1) assure sufficient bandwidth for the selected data rate, resolution, and number of channels and 2) prevent the selected data interface from imposing significant overhead on data collection. We first verify that the ADC's data interface can handle the required aggregate data rate by combining the parameters to generate the total required data throughput, BWtot, including any data addressing or framing overhead:

BWtot = fs · Nb · Nc + BWoverhead(fs, Nc) (3.17)

It is important that the aggregate bandwidth not oversubscribe the underlying data bus. As an example using common ADC values, fs = 1KHz, Nb = 12, and Nc = 8, the aggregate data bandwidth is calculated as 96Kbps. A low speed I2C bus operates with a maximum data rate of 100Kbps, which would appear to provide sufficient throughput to support the design. However, when considering the bus overhead needed for register addressing and 8-bit register alignment, BWoverhead can be 1.5 bytes per sample, making the total required bandwidth BWtot 132Kbps and exceeding the maximum bus bandwidth. The overhead necessary to operate the ADC and transfer the samples significantly increases the necessary bandwidth and cannot be neglected during architecture selection.

Characteristics  Influences
Iq               Since this is always powered, this value must be significantly lower than the minimum expected load of the region being supplied
Imax             Must be able to supply the maximum expected load current for the region being supplied
Vmin, Vmax       Must be able to operate over the expected voltage supply range
Idis             Should be able to discharge regions in the power down state to prevent floating pins and creating discharge paths from neighboring supplies

Table 3.2: Power Switching Considerations
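The bandwidth example above can be restated compactly. The 36Kbps overhead term is back-figured from the quoted 132Kbps total and stands in for BWoverhead(fs, Nc):

```python
# Aggregate ADC data bandwidth per Equation 3.17, using the example values.
fs = 1000              # sample rate per channel (Hz)
Nb = 12                # bits per sample
Nc = 8                 # number of channels
I2C_MAX_BPS = 100_000  # low speed I2C bus limit

payload_bps = fs * Nb * Nc   # raw sample throughput
overhead_bps = 36_000        # addressing/alignment overhead (from the text)
total_bps = payload_bps + overhead_bps

print(f"payload {payload_bps} bps, total {total_bps} bps, "
      f"oversubscribed: {total_bps > I2C_MAX_BPS}")
```

The payload alone fits under the bus limit; only once protocol overhead is added does the configuration oversubscribe the bus.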

3.1.4 Power Switching

As part of the LEAP design criteria, we not only need a method to measure power across a wide variety of system components, but also a method to power down unused or idle subsystems. While many advancements in integrated circuit design have focused on on-chip power management capabilities, including power down of large regions of the die, we still require a method to collapse the supplies on larger regions of the system, including entire circuit boards. This necessitates a dedicated switching component to enable dynamic power control over these regions. The important factors to consider when selecting these components include the quiescent current, maximum load, input voltage range, and load discharge. These are described in Table 3.2.

In addition to power switch component selection, it is critical to perform detailed analysis of the design at all boundary nodes between different voltage domains. Many circuits require dedicated isolation circuitry to prevent current flow through the signal pins via the protection diodes. These act as parasitic diodes and can cause significant power consumption or damage in the unpowered circuits. Isolation circuits such as cross-bar bus switches [Ins] should be used where necessary to block the parasitic diode paths shown in Figure 3.8.

Figure 3.8: Parasitic diode effect when driving I/O from a powered domain to an unpowered domain (Vin > Vdd)

3.1.5 Energy Accounting

Having described the components necessary to generate proper data samples for accurate power measurement within each of the system's voltage domains, we now address the design and implementation of the components required for data sample acquisition, processing, and host processor data transfer. We describe the aggregate functionality of these components as energy accounting. In order to provide high fidelity energy accounting information in an efficient manner, we must select the proper architecture for the energy accounting subsystem. We base this selection upon lessons learned from prior generation LEAP enabled implementations [MSR12], where several design shortcomings were found. This early implementation was based upon a low power MSP430 microcontroller operated as the coordinator of the energy management system, connected to the main applications processor via an I2C hardware bus as shown in Figure 3.9. During extensive experimentation with this platform, including several classroom projects of the graduate course EE209, we discovered the following issues with the original implementation:

1. The energy accounting function dominates the power required to implement LEAP and thus must be implemented as efficiently as possible. This dictates the use of specialized hardware designed for this purpose.

2. The energy accounting function must not require intervention from external resources such as the applications processor to function. It must be able to measure and calculate power and accumulated energy values across multiple channels simultaneously and with high sample rates.

3. The energy accounting function must be able to communicate with the applications processor via a high speed bus. This is required to minimize the overhead needed to transfer energy values to the applications processor during operating system scheduling operations.

4. The energy accounting function must be able to perform automatic resets of energy values independently on each of its channels. This is necessary for the operating system to attribute different time intervals to specific software applications with minimum overhead.

Solutions to these issues led to a custom ASIC design, currently implemented in FPGA logic for cost reasons. This ASIC design was named the energy management and accounting processor (EMAP). We now detail this custom hardware design as well as the performance results obtained during evaluation.

Figure 3.9: Original LEAP platform hardware architecture

3.1.5.1 EMAP ASIC

As shown in Figure 3.10, the EMAP ASIC consists of a number of top level subsystems which are all connected via a unified system on chip (SoC) backbone bus. This backbone is provided by the open source Wishbone interconnect [Ope12a], revision B4, implemented as a multiplexor bus logic interconnect. The Wishbone bus provides a low logic count yet high speed interconnect between each of the modules in the EMAP design. This includes support for all single cycle commands as well as block reads from the energy accounting module to enhance throughput during energy sampling events. The bus provides a 32-bit data path and a maximum burst throughput for sequential reads of 4 bytes per clock cycle. To improve access time for cycles initiated from the applications processor, each of the modules supports partial address decoding at the Wishbone interconnect and full address decoding within each of the module blocks.

The EMAP’s Wishbone interconnect includes two masters (the Host Interface Controller and Energy Accounting Manager), so a bus arbiter function is used

Figure 3.10: EMAP ASIC architecture block diagram with bus interconnect

Wishbone Address       Module
0x000000 - 0x000010    System Configuration
0x000010 - 0x000100    Clock Manager
0x000100 - 0x001000    Interrupt Controller
0x001000 - 0x001100    GPIO Controller
0x002000 - 0x008000    Energy Accounting Manager
0x008000 - 0x010000    Power Management Scheduler
0x010000 - 0x110000    Host Interface Controller

Table 3.3: EMAP Wishbone SoC bus addresses

to prevent bus deadlock. The Wishbone arbiter utilizes a fixed priority method, with highest priority given to the Host Interface Controller for lowest latency register access from the applications processor. Additionally, the arbiter utilizes an arbitration parking scheme, where the current bus master retains bus ownership during idle periods in order to reduce latency for burst transactions from the same bus master.
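A minimal behavioral model (not RTL) of this fixed-priority arbiter with parking, restricted to the two masters named in the text, behaves as follows:

```python
# Behavioral sketch of a fixed-priority arbiter with bus parking:
# index 0 is the Host Interface Controller (highest priority), index 1 the
# Energy Accounting Manager. The last winner keeps the bus while idle.
class ParkingArbiter:
    def __init__(self):
        self.parked = 0   # master currently holding (parked on) the bus

    def grant(self, requests):
        """requests: list of bools indexed by priority (0 = highest)."""
        for master, req in enumerate(requests):
            if req:
                self.parked = master   # highest-priority requester wins
                return master
        return self.parked             # idle cycle: stay parked

arb = ParkingArbiter()
assert arb.grant([False, True]) == 1   # EAM wins when requesting alone
assert arb.grant([False, False]) == 1  # bus stays parked on the EAM
assert arb.grant([True, True]) == 0    # HIC preempts by fixed priority
```

Parking avoids a re-arbitration cycle between back-to-back transactions from the same master, which is what reduces burst latency.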

The main functional blocks attached to the EMAP's Wishbone interconnect include a Clock Manager, GPIO Controller, Programmable Interrupt Controller, Resource Multiplexer, Energy Accounting Manager, and Power Management Scheduler. Each of these EMAP subsystems, as shown in Figure 3.10, is accessed via the Wishbone bus. The Wishbone addresses for each block are noted in Table 3.3. A complete list of register addresses is found in Appendix A. The following sections describe the operation of each subsystem in detail.

Clock Manager The Clock Manager module is responsible for generation of the different clock domains operating within the EMAP ASIC as well as for providing clock gating functions to reduce power consumption for modules that are placed into the sleep state. Each clock domain frequency may be derived from the primary clock input either through a basic clock divider, providing fi = fclk/Ni, or through a phase locked loop (PLL) macrocell for general clock multiplier capabilities, giving fdom = (fclk · Mi)/(Ni · C). The clock domain's output frequency is configured individually via a control register which sets the division factor and optionally enables a PLL for clock multiplication. The maximum clock rate for each domain is defined during logic synthesis for the target FPGA architecture and speed grade. Example frequency bounds for the clock division and PLL multiplier are provided for an implementation in an Actel IGLOO FPGA architecture in Table 3.4. In order to mitigate metastable conditions between each clock domain, a synchronizer delay circuit is used for signals passing between clock domain boundaries.

Clock Domain                        Maximum Frequency
Wishbone Clock                      34.6MHz
Host Interface Clock                48.5MHz
Energy Accounting (CAC) Clock       26.9MHz
ADC Sampling Clock                  16.4MHz
Power Management Scheduler Clock    12.2MHz

Table 3.4: Clock Manager domain frequency bounds for an Actel IGLOO FPGA derived from place and route timing reports

GPIO Controller A general purpose input/output (GPIO) controller core is provided to enable low complexity interconnect to a wide array of sensor types. Each GPIO may be individually configured as an input or output with the out- put drive strength setting controlled by the pin configuration during place and route. The output driver may be set to either a push-pull or open-collector ar- chitecture. To alleviate the need for locking when performing read-modify-write operations, all GPIO output values may be set using individual set and clear registers. Finally, GPIO inputs may be routed to the programmable interrupt controller for mapping to one of the interrupt pins to the applications processor.

Figure 3.11: Schematic of GPIO pin logic

Figure 3.11 shows the control logic implemented for each GPIO pin along with the module's register memory map.

Programmable Interrupt Controller In order to provide a highly config- urable interrupt request (IRQ) capability for the host applications processor, the EMAP includes a programmable interrupt controller (PIC) subsystem. The PIC is responsible for routing the large number of possible input interrupt sources to the reduced set of applications processor IRQ input lines. This routing is controlled through the PIC’s IRQ configuration registers. Each IRQ includes in- put sources from other EMAP subsystems including the GPIO Controller, Power Management Scheduler, and Energy Accounting Manager. The PIC additionally provides global interrupt masking per individual IRQ line to simplify software locking needed for interrupt service routines. To further enhance compatibility with external interrupt sources, the input lines routed from the GPIO controller may be configured as either edge or level triggered. Additionally, edge triggered

interrupts may be configured as sensitive to rising edge, falling edge, or both edge transitions. Figure 3.12 demonstrates the interrupt controller logic.

Figure 3.12: Programmable interrupt controller logic

Host Interface Controller Data accesses between the applications processor and the EMAP's internal Wishbone bus fabric are managed by the Host Interface Controller (HIC) bridge. The HIC bridge provides a host bus interface that may be configured to mate with a variety of applications processor architectures. Both synchronous and asynchronous bus types are supported via configurable data strobe lines and an optional bus clock line. The default configuration is set by boot-strap pins that are latched during reset. Data access is byte masked with a maximum data width of 32 bits per cycle. Since the internal Wishbone bus operates on a separate clock domain, the HIC bridge uses a host ready signal to delay bus transactions until the HIC is able to win arbitration on the Wishbone bus and perform the requested transaction. This stalls the bus operation on the applications processor's memory bus until the Wishbone bus can complete the bus cycle within the FPGA. Once completed, the HIC releases the external memory bus to acknowledge completion. Example memory bus transactions to the Wishbone backbone are shown in Figure 3.13, demonstrating a read cycle first when the Wishbone arbitration has been won and secondly when the bus is stalled due to arbitration being lost. The worst-case latencies are demonstrated here as well.

Figure 3.13: Wishbone to Host Controller read cycle waveforms

In order to minimize the number of CPU cycles required for data collection from the Energy Accounting Manager (EAM), the HIC also supports a bus mastering (DMA) cycle. In this operation, the applications processor configures the EAM with the target memory address on the external memory bus attached to the HIC. The EAM then arbitrates for ownership of the Wishbone bus. Once this is achieved, the HIC's Wishbone slave is enabled, which then arbitrates for ownership of the external memory bus via a dedicated request/grant (DMAREQ, DMAGNT) handshake signal pair with the applications processor. The Wishbone write cycle originated from the EAM stalls until the applications processor awards ownership of the external memory bus to the EMAP and the HIC target is able to complete its write cycle. If a block write cycle is indicated (by setting a DMA transfer size greater than one), the EAM will maintain ownership of the Wishbone bus and the HIC target address. This holds the HIC DMAREQ line asserted during the following write cycles so long as the DMAGNT signal remains asserted. Should the applications processor interrupt the write progress, the HIC will again stall the Wishbone bus and retry the DMA request cycle. This occurs until the entire data block transfer is complete.

Resource Multiplexer Due to the pin limitations imposed by the small IC packages common in modern applications processors, it is common to find a shortage of serial peripherals available from the applications processor to connect large numbers of sensors, especially for non-multiplexed bus types (e.g., UART and SPI). This is exacerbated when the combination of required sensors and types is only known at run-time, making allocation of bus resources problematic. In addition, many of these sensors are only intermittently required. For example GPS, typically occupying a processor UART interface, is only necessary when acquiring position updates, but its port could be reused for other purposes during idle periods. To simplify this mapping of sensor serial buses to interfaces on the applications processor, the EMAP ASIC includes a Resource Multiplexer module. The Resource Multiplexer expands the number of available serial buses and connects these in a configurable crossbar switch configuration. This provides run-time management capability for the various peripheral buses by allowing 1-to-N connections between the host processor and external sensors. This ability to bypass and combine ports in many combinations simplifies design-time interconnect, since over provisioning of the serial bus ports at the applications processor can be eliminated.

Energy Accounting Manager The design and implementation of the Energy Accounting Manager (EAM) is one of the major contributions of this thesis. The purpose of the EAM is to provide autonomous collection of high resolution energy information without necessitating intervention from higher power resources such as the applications processor. By enabling high-fidelity energy measurement across a large number of power domains while maintaining minimal external management requirements and low power operation, the EAM is a notable improvement over prior systems, as these are designed for power profiling only during development or debugging, where inefficient measurement methods are not as critical. We describe its implementation and its enhancements to energy measurement here.

The EAM architecture consists of three main functional units: the configuration and control registers, the charge accumulation data RAM and controller, and the analog to digital sampling controllers. The configuration and control registers are Wishbone accessible read/write registers responsible for EAM operation. The memory map for the EAM is shown in Table 3.5. The applications processor typically configures EAM functions by writing to these registers during initial setup. Common operations include setting the master sample rate based upon the frequency of the sampling clock provided by the Clock Manager described above, enabling sampling for each of the active channels, setting whether the channel is a current or voltage measurement, and setting each channel's sample rate divisor (which may be 1/N of the master sample period). For each channel that is configured as a current measurement type, it is possible to additionally assign one of the configured voltage channels in order to perform sample by sample energy calculations. This is important since charge and energy are not proportional measurements in DVFS regulated domains, as the voltage varies dynamically during operation. This is compounded by modern mobile processors that operate with automatic voltage scaling governors not set by software methods [CG11]. Accurate energy calculation requires instantaneous measurements of both voltage and current.

In addition to initial setup configuration, the applications processor may elect to enable a DMA method to push samples directly to its memory. To support this,

Wishbone Address       Module
0x002000               Version Register
0x002004               Configuration Register
0x002008               Charge Sample Clock Divider
0x00200C - 0x002018    Charge Enable Register [2:0]
0x00201C - 0x002024    Channel Configuration [2:0]
0x002028 - 0x002030    Channel Sample Clock [2:0]
0x002800 - 0x0028FF    Voltage/Current RAM
0x002900 - 0x0031FF    Reserved
0x003200 - 0x0033FF    Channel Count RAM
0x003400 - 0x0035FF    Channel Charge RAM
0x003600 - 0x0037FF    Channel Charge+RESET RAM

Table 3.5: EMAP Energy Accounting Controller Registers

the EAM's configuration registers include a DMA target address, transfer size, and transfer-enable register. Once configured, the applications processor may set the transfer enable register to start a data transfer from the EAM's block RAM directly to the applications processor, as described in the HIC section. After configuring the proper information for each sampling channel, the applications processor sets the global enable bit to start data collection. This begins the state machine within the charge accumulation controller (CAC).

The CAC first initializes its block RAM to zero to prepare it for storage of charge and voltage data. After initialization completes, the CAC waits for read access requests from the Wishbone bus or write requests for storing new samples from each of the ADC sampling controllers (ASCs). An arbiter is used to prioritize simultaneous requests to the RAM, with the Wishbone read given highest priority and each of the ASCs given equal priority with round-robin sequencing. During Wishbone read cycles, an ASC with a pending write request will stall until the read cycle completes. If one or more ASCs indicate a new sample is ready, the CAC will grant write access to the highest priority ASC and store the available sample into the block RAM at the proper RAM address for the sample data. The CAC additionally provides sample conversion according to the state diagram shown in Figure 3.14. For charge measurement channels, samples are first accumulated with the existing RAM contents and then written back to the RAM as charge measurements. For these channels a counter register is also incremented, providing the number of accumulations since the channel was last reset. Accumulation data is stored as 53-bit values and accumulation counts as 36-bit values, providing a minimum of 2 years between rollovers when sampling at 1KHz with a 12-bit ADC (additional rollover periods are listed in Table 3.6). Additionally, for energy measurement channels (those with an associated voltage channel), data samples are first multiplied with the latest voltage measurement, then summed with the current RAM contents, and then written back as energy values. The voltage channels are not accumulated nor do they keep a summary count; they instead simply replace the prior value in RAM on each update.
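The per-channel conversion rules can be sketched behaviorally. Channel numbers and sample values here are illustrative; only the accumulator and counter widths follow the text:

```python
# Behavioral sketch of the CAC sample-conversion step: charge channels
# accumulate and count, energy channels multiply by the latest paired
# voltage sample first, and voltage channels simply overwrite.
ACC_MASK = (1 << 53) - 1   # 53-bit accumulators
CNT_MASK = (1 << 36) - 1   # 36-bit accumulation counters

ram = {}   # channel -> {"acc", "count", "last_v"}

def store_sample(ch, sample, kind, v_ch=None):
    e = ram.setdefault(ch, {"acc": 0, "count": 0, "last_v": 0})
    if kind == "voltage":
        e["last_v"] = sample                           # replace, no accumulation
    elif kind == "charge":
        e["acc"] = (e["acc"] + sample) & ACC_MASK
        e["count"] = (e["count"] + 1) & CNT_MASK
    elif kind == "energy":                             # current ch with paired voltage ch
        v = ram.setdefault(v_ch, {"acc": 0, "count": 0, "last_v": 0})["last_v"]
        e["acc"] = (e["acc"] + sample * v) & ACC_MASK
        e["count"] = (e["count"] + 1) & CNT_MASK

store_sample(8, 2000, "voltage")           # voltage channel update
store_sample(0, 100, "charge")             # plain charge accumulation
store_sample(1, 100, "energy", v_ch=8)     # sample-by-sample energy
```

The multiply happens before accumulation, which is what keeps energy channels correct under a dynamically varying supply voltage.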

Figure 3.14: Data collection state machine showing energy measurement method

ADC Resolution   Sample Rate   Rollover Period (data)   Rollover Period (count)
8-bit            100sps        1000+ years              21 years
12-bit           1Ksps         69 years                 2.2 years
16-bit           10Ksps        159 days                 79 days

Table 3.6: CAC rollover periods for data width and sample rates

To support autonomous operation during extended power states, each channel in the CAC RAM is sized to prevent rollover for long periods of operation. This is important since the applications processor must service the EAM to read back energy data within the rollover period to prevent accidental data loss. If the rollover period is set too short, additional energy is wasted due to extra servicing from the applications processor.
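The rollover figures in Table 3.6 can be reproduced from the 53-bit accumulator and 36-bit counter widths, assuming worst-case full-scale samples on every conversion:

```python
# Rollover periods behind Table 3.6: a full-scale ADC code accumulated into
# a 53-bit register every sample, and a 36-bit sample counter.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def rollover_years(adc_bits, fs, acc_bits=53, cnt_bits=36):
    data_s = 2**acc_bits / ((2**adc_bits - 1) * fs)  # worst case: full-scale samples
    count_s = 2**cnt_bits / fs
    return data_s / SECONDS_PER_YEAR, count_s / SECONDS_PER_YEAR

data_y, count_y = rollover_years(12, 1000)
print(f"12-bit @ 1Ksps: data {data_y:.0f} years, count {count_y:.1f} years")
```

The counter, not the accumulator, sets the service deadline at 12-bit/1Ksps, which is where the 2-year minimum quoted above comes from.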

In order to perform effective energy attribution to the various software functions running on the applications processor, the CAC provides an efficient method to read incremental energy data. This is useful, for example, to provide the CPU energy dissipated between scheduler ticks. In a typical RAM based implementation, the applications processor would first read samples from the RAM block and then perform a separate write to reset each channel. The EMAP instead uses a unique 'read with auto reset' capability to simplify this operation. Each channel within the RAM block is aliased to multiple address locations as shown in Table 3.5. For normal read operations, RAM data is accessed from its base address. If the same channel is read from its auto-reset aliased address (the base address with alias offset), the RAM block location for this channel will automatically be reset to zero upon completion of the read cycle. This allows application software to perform a single read operation during energy attribution functions, which eliminates the software race conditions inherent in read-modify-write operations and reduces access time by half.
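The aliasing scheme can be sketched with the Charge RAM and Charge+RESET addresses from Table 3.5, which sit 0x200 apart; the model below is behavioral, not RTL:

```python
# Sketch of the 'read with auto reset' aliasing: the same channel appears
# at a base address (0x3400 region) and at an aliased address (0x3600
# region) that clears the channel after the read completes.
ALIAS_OFFSET = 0x200   # Charge RAM at 0x3400, Charge+RESET alias at 0x3600

charge_ram = {0x3400 + 4 * ch: 0 for ch in range(8)}

def bus_read(addr):
    if addr >= 0x3400 + ALIAS_OFFSET:   # aliased address: read then clear
        base = addr - ALIAS_OFFSET
        value = charge_ram[base]
        charge_ram[base] = 0
        return value
    return charge_ram[addr]             # base address: plain read

charge_ram[0x3400] = 1234
assert bus_read(0x3600) == 1234   # single read returns the channel value...
assert bus_read(0x3400) == 0      # ...and atomically resets the channel
```

Because the read and reset happen in one bus cycle, no sample arriving between them can be lost or double counted.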

The final subsystem within the EAM is the ADC sampling controller. The ASCs are responsible for initialization of the external ADCs, reading sample data from each ADC, and writing each sample to the CAC. The ASC design is modular to support different ADC components (e.g., TLV2548 vs. ADS7844). This feature is necessary to allow component variations in sample size, sample rate, and interface signaling. The ASC's state machine also manages any unique power management features of the external ADC, for example power down of voltage references during idle periods.

The Energy Accounting Manager typically has several instances of the ASC module instantiated, with each ASC module supporting one or more different external ADC components. The ASC is able to support several external ADCs on a single SPI bus via different chip select lines. The ASC automatically multiplexes operation of the different ADCs to support a larger number of logical channels to the CAC. The maximum sample rate, sample size, and SPI clock rate dictate the number of ADC components that can be connected to a single ASC. For example, the ASC implementation for the TI TLV2548 supports 4 ADCs with 8 channels on each ADC, thus presenting 32 channels to the CAC. Additional channels are supported by replicating additional ASC modules within the EAM. The CAC arbitration logic supports cascading additional ASC modules as necessary to expand the channel count.

Figure 3.15: ASC state machine for the TLV2548 (a) and waveform example of multiple ASC modules sampling in parallel on EMAP2 (b)

Power Management Scheduler A second significant contribution of this thesis is the design and implementation of the Power Management Scheduler (PMS). The role of the Power Management Scheduler is to enable control and scheduling of the platform's power domains while maintaining very low power operation. Most commonly the applications processor is responsible for central control of power management functions, including scheduling of device sleep and wake operations. Low power operation is limited since the applications processor is on during power management events. Instead, a dedicated power manager function which operates independently from the applications CPU is required. The PMS architecture is shown in Figure 3.16. The PMS manages the power state of multiple power domains simultaneously. Each power domain may be in one of three logical states: on, off, or sleeping (although multi-level sleep states may be supported in the PMS if required). The on/off power state is presented to a pin that may be connected to either external power switches or regulator enable signals in order to turn off the selected power domain. The wake/sleep state is output to a separate pin which is used as an interrupt to wake external devices from sleep modes. Each power domain's operating state is managed by the PMS. At reset, by default all power entries are set to the 'on' state.

Figure 3.16: Power Management Scheduler architecture

The PMS is composed of a set of three subsystems: the configuration and control registers, the power management processor, and a RAM table comprised of power management entries. The configuration and control registers provide basic reset and enable signals as well as clock rate controls which determine the maximum rate at which power management operations may occur. The PMS main clock can be divided both by the Clock Manager and internally to generate frequencies from 1MHz down to 100Hz. This clock signal is used to drive a 36-bit real-time counter (RTC), providing counter rollover periods from 19 hours at 1MHz to over 21 years at 100Hz. The RTC value provides the system timer for the PMS, enabling configurable scheduler resolution from 1us to 10ms based upon the configured input clock rate.

The PMS block RAM is responsible for storage of power management entries. Each entry consists of several fields as shown in Figure 3.17, including: delay, period, actions, and identifier tag. These entries support scheduling different types of events such as single events occurring at a specific time, periodic events with fixed repeat intervals, periodic events beginning at a specific future time and repeating with fixed intervals, and asynchronous events. Each scheduling entry contains: a 36 bit delay field, 16 bit power domain bitmask, 2 bit action tag, 32 bit period field, and 18 bit user identification (UID) field. Each entry’s action tag and power domain bitmask set the operations for that entry. The action tag sets the entry to be one of: power domain on, power domain off, or set domain wakeup. Entries may also include a user identifier tag (UID) for the applications processor to use as a reference for each of the table entries. This is important since power management entries may need to be modified during operation. The identification tag provides a simple method for application software to locate entries within the table as well as control permissions in multi-user systems where different power domains may be managed by separate users.
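The field widths above sum to 104 bits per entry. A sketch of the packing, with an arbitrary field order chosen by us for illustration (the actual hardware layout is the one shown in Figure 3.17):

```python
# Pack a PMS table entry into a single integer, using the field widths from
# the text: 36-bit delay, 16-bit domain bitmask, 2-bit action, 32-bit period,
# 18-bit UID. The field ordering here is illustrative, not the hardware layout.
FIELDS = [("delay", 36), ("domains", 16), ("action", 2), ("period", 32), ("uid", 18)]

def pack_entry(**values: int) -> int:
    word = 0
    for name, width in FIELDS:
        v = values[name]
        assert 0 <= v < (1 << width), f"{name} out of range"
        word = (word << width) | v
    return word

entry = pack_entry(delay=1_000_000, domains=0b101, action=1, period=500_000, uid=42)
total_bits = sum(width for _, width in FIELDS)   # 104 bits per entry
```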

The applications processor is responsible for writing power management events to the PMS RAM via Wishbone write cycles initiated through the HIC. The applications processor uses read/write access to the RAM blocks to load, modify, and

Figure 3.17: Power Management Scheduler table entry

delete power management scheduling entries. The power management processor within the PMS is responsible for executing all power management activities based upon these entries in the RAM table. The control logic for entry processing is shown in Figure 3.18. At each RTC update, the power management processor searches the table for expired entries. It sequentially parses each slot within the power management table to locate valid entries. An entry is active if its enable bit is set and the delay field is less than or equal to the RTC counter. When an active entry is found, the element’s action bitmask field is read to determine the required power management action. If the entry’s period field is non-zero, this period value is added to the element’s delay field and the entry is re-enabled; otherwise the entry is disabled via a write to the action field. Once the scheduler completes parsing all entries in the RAM block, it is disabled until the next update of the RTC. This gives a worst case response latency for any scheduled power management action of 128us when driven by a 1MHz external clock. If an entry is found that is both valid and expired, its action tag is executed on the selected power domains. Any periodic entries are then rescheduled to set the new activation time stamp and re-enabled, while non-periodic entries are invalidated. If an asynchronous event has occurred, such as an interrupt, the configured output events are activated. We show a summary of the pseudo code logic in Algorithm 1 for reference.

Figure 3.18: Power Management Scheduler state logic

Algorithm 1 PMS pseudocode logic
 1: INIT: i = 0, RTC = 0, power state = all on
 2:
 3: repeat
 4:   RAM[i] = 0 { Initialize all entries in RAM }
 5:   i = i + 1
 6: until i = 128
 7: while CONFIG[RUN] = 1 do
 8:   if RTC has changed then
 9:     i = 0
10:     repeat
11:       while memory arbiter not ready do
12:         wait one cycle
13:       end while
14:       entry = RAM[i]
15:       if entry[delay] <= RTC and entry[enabled] = 1 then
16:         power state = entry[action]
17:         if entry[period] != 0 then
18:           entry[delay] = entry[delay] + entry[period]
19:         else
20:           entry[action] = invalid
21:         end if
22:       end if
23:       i = i + 1
24:     until i = 128
25:   end if
26: end while
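The scan loop of Algorithm 1 can be expressed as a short behavioral model in Python; this is an illustrative simulation of the table-scan semantics, not the RTL:

```python
# Behavioral model of one PMS table scan (Algorithm 1, lines 9-24).
# Each entry is a dict with delay, period, action, and enabled fields.
def scan_table(table, rtc, power_state):
    """Apply every expired, enabled entry; reschedule periodic ones."""
    for entry in table:
        if entry["enabled"] and entry["delay"] <= rtc:
            power_state.update(entry["action"])      # apply domain on/off/wake
            if entry["period"] != 0:
                entry["delay"] += entry["period"]    # periodic: reschedule
            else:
                entry["enabled"] = False             # one-shot: invalidate
    return power_state

table = [
    {"delay": 10, "period": 100, "action": {"radio": "on"}, "enabled": True},
    {"delay": 5,  "period": 0,   "action": {"gps": "off"},  "enabled": True},
]
state = scan_table(table, rtc=10, power_state={})
# After the scan, the periodic entry is pushed forward by one period and
# the one-shot entry is disabled.
```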

Performance Analysis Having described the LEAP architecture and its primary hardware components, we now present an analysis of LEAP implementations, including their performance through direct measurements. We first review the operation of the EMAP function, a critical part of LEAP operation, through instantiations of the EMAP logic into two different FPGA architectures and present the details of these implementations.

                                  QL8250     AGL600
Quiescent Power                   4.3 mW     0.057 mW
Maximum Clock Rate                28.9 MHz   34.6 MHz
Resources (% of total)            96%        44%
  Clock Manager                   7%         3%
  Interrupt Controller            5%         2%
  GPIO Controller                 9%         4%
  Energy Accounting Manager
    Block RAM                     50%        25%
    Logic Tiles                   23%        12%
  Power Management Scheduler
    Block RAM                     50%        25%
    Logic Tiles                   19%        8%
  Host Interface Controller       33%        15%

Table 3.7: Quicklogic QL8250 and Actel IGLOO AGL600V2 implementation details

3.1.5.2 EMAP Implementations

During EMAP ASIC development we generated two implementations of the design. Our initial implementation utilized a Quicklogic QL8250 [Qui] based upon an anti-fuse design principle. These FPGAs are one-time programmable parts which offer very low quiescent current draw due to the read-only anti-fuse architecture. While this has distinct power advantages over SRAM based FPGAs, the one-time programming feature was a burden during rapid development cycles. Following the QL8250 we selected an Actel IGLOO AGL600 [Mic12], an FPGA featuring low quiescent current due to its flash based gate architecture. Unlike the QL8250, the AGL600 is reprogrammable via its JTAG port. As will be described in the later applications section, JTAG reprogramming may also be performed via the applications processor while deployed in the field.

Table 3.7 shows the FPGA resources, maximum clock rate, and power consumed by the Quicklogic QL8250 and Actel IGLOO AGL600 FPGAs. As can

be seen in the table, the AGL600 has significant power and speed advantages over the older QL8250 architecture. This is due to its lower core logic voltage operation (1.2V vs 1.8V) and smaller gate feature size. In addition to its lower quiescent power, the AGL600 also performed at lower operating power than the QL8250 implementation at various sampling frequencies. Additionally, to validate the EMAP solution against other alternative designs, we performed experiments using the prior LEAP1 platform design and also a design constructed from off-the-shelf (OTS) components providing similar, though not identical, functions. This power and performance comparison is shown in Figure 3.19. An important differentiator of the EMAP ASIC based solutions is that they maintain a nearly constant power profile (for a fixed sample rate) that does not depend on the number of channels being measured. This is due to the EMAP’s ability to parallelize data sampling by adding additional ADC sampling controllers to the EAM without increasing the core clock rate.

The LEAP1 solution, which operates with low average power, suffers from low communications bandwidth between the MSP430 microcontroller and the applications processor. This increases data collection overhead due to wasted power in the applications processor. The MSP430 also suffers from a limit on the number of sampling channels. The OTS solution, which uses discrete Texas Instruments BQ27000 fuel gauge ICs for each power domain, is superior when used at low sample rates and low channel counts. This is intuitive since the additional hardware of the EMAP solution is not fully utilized in low channel count applications. However, as channel count increases, the redundant charge accumulation hardware (replicated in every BQ27000) quickly increases the total power of the OTS solution beyond the custom EMAP implementations.

As shown in the performance graphs, the custom EMAP solution based upon

                              EMAP: QL8250+         EMAP: AGL600+
                              TLV2548               ADS7844
Sample Rate (max 1 ch)        38462                 76923
Charge Accuracy (min uVh)     8.79                  8.79
Maximum Channels              64 (CAC RAM limited)  128 (CAC RAM limited)
Host Charge Read Time (us)    0.096                 0.070
VDD range (V)                 1.2-5.5               1.2-5.5
Auto Reset Capable            Yes                   Yes

                              MSP430F1611           BQ27000
Sample Rate (max 1 ch)        95400 (CPU limited)   256
Charge Accuracy (min uVh)     16.11                 3.57
Maximum Channels              8 (ADC port limited)  10 (I2C address limited)
Host Charge Read Time (us)    12                    320
VDD range (V)                 1.8-3.6               2.6-4.2
Auto Reset Capable            No                    No

Table 3.8: Implementation details of energy measurement solutions

Figure 3.19: Power comparison of four energy measurement solutions: (a) Total sampling power by number of channels (b) Total sampling power by sample rate and (c) applications processor sample read back time

Figure 3.20: Comparison of four energy measurement solutions when including applications processor power, assuming 1W operation with 100 sample reads/sec

the AGL600 provides lower power performance than both the MSP430 and BQ27000 solutions once approximately 8 channels are sampled. Once the power consumed by the applications processor is also considered, the advantage of the dedicated solution is enhanced. Figure 3.20 shows that the additional power overhead incurred with traditional low bandwidth interfaces becomes significant as channel count increases. As shown, the OTS fuel gauge solution reaches 100% of the available bandwidth at its peak, which requires the applications processor to remain on and consume significant idle power.

LEAP2 Platform The previous section describes the advantages of the dedicated EMAP energy accounting method versus traditional solutions. In order to demonstrate the utility of EMAP capabilities and the value of the LEAP architecture in generating energy-aware ENS systems, a modular stackable platform was designed and constructed from custom printed circuit boards. This platform,

named LEAP2, contains all the advancements of the LEAP architecture previously described and has been used as a test bench for development and validation of several energy-aware software tools.

The LEAP2 platform is a second generation, low power, LEAP enabled ENS system allowing high accuracy, low overhead energy measurement of platform computing, storage, sensing, and communications devices at granularity levels previously unachievable. LEAP2 incorporates greatly enhanced energy accounting and scheduling capabilities over the previous LEAP1 design, including the EMAP ASIC, an expanded peripheral and sensor interface selection, and a larger set of storage and communications devices to enable run-time energy optimization. LEAP2 is a highly modular design. Energy measurement domains may be added to or removed from the platform through an expandable energy monitoring bus integrated into the LEAP2 stacking connectors. This enables future add-on modules to benefit from the LEAP2 energy management and accounting features.

Each PCB module was constructed with high resolution energy measurement capabilities at fine-grained levels previously unattained, to our knowledge. The LEAP2 platform is shown in Figure 3.21 along with a block diagram highlighting the various PCB modules. We first describe these platform modules and their designs. We then provide details on the many software capabilities built upon the energy sensing capabilities of LEAP2.

3.1.5.3 EMAP2 Communications Module

The EMAP2 module (Figure 3.22) provides the primary LEAP functions including the EMAP ASIC’s energy measurement capabilities, sensor serial bus multiplexing, an expandable power management bus, a modular stacking interface, and the communications functions. The module’s EMAP ASIC performs continuous,

Figure 3.21: Photo and block diagram of the LEAP2 platform

real-time energy monitoring at a fine-grained device level across the entire platform, including all add-on expansion modules, through its modular, expandable power management serial bus. The module, as shown in Table 3.9, has eight current sensing channels allocated to the EMAP2 module’s peripherals such as Ethernet, USB, quick capture camera, and Compact Flash. Eight channels are allocated for the host processor module’s use through the host interface connector, and 32 additional sensors may be connected through the power management bus on separate modules.

Each of the EMAP2 module’s power domains is electrically isolated, requiring all voltage supplies to pass through precision current sensing resistors. The voltage across the sense resistor is amplified through a Texas Instruments INA216, a high common mode rejection differential amplifier. The signal is then filtered through a single pole, passive low pass filter, and then sampled by an 8 channel,

Figure 3.22: Photo and block diagram of the energy management and accounting (EMAP2) module

                          Power (mW)
EMAP2 Operating State     Typical   Max
RTC Only (hibernate)      2.3       2.6
EMAP Sampling             4.64      6.5
Ethernet                  198       429
USB                       125       2500
Camera                    286       356

Table 3.9: EMAP2 operating states and power consumption

12 bit analog to digital converter (ADC), a Texas Instruments TLV2548. The sense resistor and amplifier gain values are scaled per domain to prevent saturation of the ADC signal and to maximize the measurable dynamic range. The maximum anticipated current draw is used to set the amplifier output voltage to the ADC full scale value, set by a 3.0V precision voltage reference.
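The per-domain scaling described above can be sketched as follows; the resistor and gain values here are illustrative examples, not the LEAP2 production values:

```python
# Per-domain scaling: the amplifier output at the maximum anticipated current
# should hit the ADC full-scale reference (3.0 V on EMAP2).
ADC_FULL_SCALE_V = 3.0

def max_current_amps(sense_ohms: float, gain: float) -> float:
    """Current at which the amplifier output reaches ADC full scale."""
    return ADC_FULL_SCALE_V / (sense_ohms * gain)

def lsb_current_amps(sense_ohms: float, gain: float, adc_bits: int = 12) -> float:
    """Smallest resolvable current step for a given sense resistor and gain."""
    return ADC_FULL_SCALE_V / (2 ** adc_bits) / (sense_ohms * gain)

# e.g. a 0.1 ohm sense resistor with gain 100 spans 0-300 mA ...
full_scale = max_current_amps(0.1, 100)     # 0.3 A
# ... with roughly 73 uA of current resolution per ADC LSB.
resolution = lsb_current_amps(0.1, 100)
```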

We verified EMAP2 power measurement accuracy, which includes both current sensor and ADC measurement, by comparing applied current loads against their sampled values for several of the EMAP2 power domains. This testing was performed on a sample of units over the range of specified operating power points in Table 3.9. The results of several calibration measurements are shown in Figure 3.23. Our measurements verified that the expected and actual measurements were within 10% across the operation spectrum with 95% confidence, and

average errors were typically less than 2%.

We also verified the energy cost for the applications processor on the LEAP2 platform to read from the EMAP ASIC. The CPU’s execution time and energy during sampling of the energy data were recorded for several configurations of channel number and sample rates. As shown in Figure 3.24, the high bandwidth interface of the EMAP ASIC requires minimal additional energy even at large channel counts and high sample rates. This validates that our EMAP design has achieved our LEAP design goals.

The EMAP2 module provides many of the communications interfaces present in modern ENS platforms. Ethernet is valuable for providing gateway functions for network edge nodes, while USB and the Quick Camera interface are used for adding imaging capabilities. The power consumed by each of these communications interfaces is individually measured. In addition, the module utilizes a stacking architecture to enable adding new platform enhancements. Located on this stacking connector are numerous serial and parallel buses, all routed through the EMAP’s resource multiplexor to facilitate run-time resource management from the applications processor. Several power resources on the stacking connector may be controlled via the EMAP ASIC’s Power Management Scheduler.

In addition, the EMAP2 provides an ultra-low power hibernation mode to enable extended operation from battery in long deployment applications. In this mode, the EMAP ASIC (via the Power Management Scheduler) initiates a platform shutdown whereby all power domains (including their voltage regulators) are shut off to reduce power. The EMAP ASIC then enters a low power standby mode, which disables the high frequency internal clock domains. An external low frequency clock signal (provided by a micro-power silicon oscillator or watch crystal) is enabled to provide updates to the EMAP’s real time clock register.

Figure 3.23: Measurement accuracy of the EMAP2 current sensors and ADC for the following channels: (a) HPM CPU (b) HPM SDRAM (c) HPM NAND

Figure 3.24: CPU energy overhead when reading from the EMAP ASIC on EMAP2: (a) CPU sampling energy by number of sampled channels (b) by sample rate

Likewise, the Power Management Scheduler retains its table entries and continues to operate, albeit at a reduced rate. Wakeups from the micro power sleep state are limited to asynchronous trigger inputs or scheduled power management events. For example, the platform could be configured to restart at a fixed future date and time (perhaps for network synchronization purposes), or by low energy triggers such as PIR sensors or tripwires.

3.1.5.4 Host Processor Module

The Host Processor Module (HPM) provides the main applications processor as well as several volatile and non-volatile storage media for local processing of sensor and communications data. The HPM is attached to the EMAP2 via a standard SODIMM200 connector for high bandwidth and high point count connectivity. The HPM’s interface to the EMAP2 is designed such that the processor module may be redesigned to incorporate different CPU architectures (including future updates) while maintaining a common interface to the EMAP2. One architecture, based on the Marvell PXA270 applications processor, was implemented and fabricated as a proof of concept of the LEAP design principles. This design includes several heterogeneous storage types including SDRAM, SRAM, NAND flash and NOR flash. As was noted earlier in section 2.2.3, each of these storage types provides different energy advantages depending on its usage pattern. In this design, each processing and storage resource is isolated into its own power domain and connected via the power management bus to the EMAP ASIC. This allows unprecedented access to high resolution energy dissipation information for the CPU and each of its memories. With this architecture it is now possible to do run-time analysis of storage efficiency for a specific application’s needs.

In addition to a traditional 32-bit memory bus mapped to each of the storage

Figure 3.25: Photo and block diagram of the host processor module (HPM)

peripherals, a programmable logic device (CPLD) on the HPM module enables direct memory-to-memory DMA transfers for both the SDRAM and SRAM peripherals. This works with the EMAP’s Host Interface Controller capability described in section 3.1.5.1 to perform direct writes from the Energy Accounting Module into the HPM memory areas. This capability simplifies high frequency sampling of energy data during time critical events on the host CPU. This, for example, is needed in energy attribution to operating system threads, which may be swapped at frequencies as high as 1kHz. This DMA function is also critical to enable low power sensing operations, where data processing is performed not by the applications processor but by specialized detection circuitry. In these applications data logging to the applications memories is possible while keeping the applications processor itself in low power standby or powered off states.

The PXA270’s memory map is shown in Figure 3.26. In addition to the onboard heterogeneous memory resources, several chip selects are routed to the EMAP2 and stacking interface connector, allowing additional modules to be directly mapped to CPU memory without the need for an intervening bridging function. The PXA270 CPU is capable of operating over a wide frequency range using standard DVFS techniques. This DVFS function provides both low power

Figure 3.26: Memory map of the HPM resources from the PXA270 CPU

                          PXA270 Current (mA) @ 5V
CPU Frequency (MHz)       Idle      Max
624                       55        214
520                       47        170
416                       40        131
312                       34        101
208                       30        77
104                       16        44
13 (in SRAM)              5         5

Table 3.10: PXA270 operating frequencies and operating power

operation during processor idle times as well as high capacity processing during peak operating periods. The operational states for the PXA270 CPU are listed in Table 3.10, as well as the measured current consumption of the HPM in each state. This included a custom designed, ultra-low power ’microcontroller’ mode where the PLL circuits were disabled and the CPU operated directly from the crystal oscillator. In this mode the CPU operated out of the static RAM and was able to perform basic platform vigilance functions. The CPU was then able to switch into higher performance modes dynamically to enable enhanced capabilities such as sophisticated operating system functions.
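Multiplying Table 3.10's current figures by the 5 V supply gives power per operating point; a quick conversion (the dictionary layout is ours):

```python
# Table 3.10's measured HPM current draw (mA at a 5 V supply), keyed by
# CPU frequency in MHz, as (idle, max) pairs.
PXA270_CURRENT_MA = {
    624: (55, 214), 520: (47, 170), 416: (40, 131),
    312: (34, 101), 208: (30, 77), 104: (16, 44), 13: (5, 5),
}
SUPPLY_V = 5.0

def power_mw(freq_mhz: int, load: str = "idle") -> float:
    idle, peak = PXA270_CURRENT_MA[freq_mhz]
    return (idle if load == "idle" else peak) * SUPPLY_V

# The 13 MHz 'microcontroller' mode draws 25 mW, while full-speed
# peak operation exceeds 1 W: a >40x dynamic range for DVFS to exploit.
mcu_mode = power_mw(13)
peak = power_mw(624, "max")
```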

3.1.5.5 Sensor Interface Module

The Sensor Interface Module (SIM) provides low power sensing operations including data collection and micro power processing, as well as low bandwidth radio communication and a bank of switched power supply outputs to external sensors. Each of these operations is supported by energy monitoring through independent power domains connected through the power management bus to the EMAP ASIC. A block diagram of the SIM is shown in Figure 3.27, noting these general capabilities. The SIM is capable of independent operation from the

                          Power (mW)
SIM Operating State       Typical   Max
MSP430 PM2                0.14      0.23
MSP430 ADC Sampling       4.4       5.9
External Switches On      5.7       10.2
CC2420 Tx                 65        72
CC2420 Rx                 62        66

Table 3.11: SIM operating states and power consumption

previously described modules in the LEAP2 stack. The entire platform may be powered down, leaving only the energy measurement functions of the EMAP2 and the SIM operating, enabling basic operations.

Figure 3.27: Photo and block diagram of the Sensor Interface Module (SIM)

To facilitate coordinated sensing activities, a low power CC2420 radio was added to the MSP430’s interfaces, providing short range communications and networking capabilities. The SIM’s MSP430 and CC2420 architecture along with energy sensing circuitry was the basis for a derivative platform named uLEAP. We note several operating states of the SIM and their power consumption in Table 3.11. In addition to the on board sensing and communications options, the SIM supports powered external sensor equipment. Operation of each of these external sensors may be scheduled through the EMAP Power Management Scheduler.

3.1.5.6 MiniPCI Module

The MiniPCI Module (MPM) provides the functions necessary to enable high bandwidth, long-range wireless communications. These functions include: a high current power supply, a local memory bus to PCI bridging component, a commercial standard interface connector, and integrated energy management capabilities. The MPM includes a miniPCI connector, which complies with the commercial standard used widely in PC laptop technology. The miniPCI form factor is a plug-in card standard for many off-the-shelf WiFi and cellular wireless technologies. The MPM’s high current power supplies support even the specialized ’high-power’ WiFi card options. This option may be necessary when deploying into challenging environments such as dense foliage and/or large inter-device spacing.

Figure 3.28: Photo and block diagram of the MiniPCI Module (MPM)

The miniPCI bus interconnect provides high networking throughput by connecting directly to the PXA270’s SDRAM through bus mastering DMA from the card’s PCI memory, through the MPM’s bridge, and directly onto the HPM. Data words (32 bits wide) are transferred at a full 33MHz with single cycle latency. The aggregate bandwidth including bus mastering setup times exceeds

100MB/sec, which accommodates high throughput bursts to and from the WiFi card’s local memory with very low overhead. In addition, the DMA operations do not require applications processor activity, so the CPU may idle during data transfer operations to reduce power consumption.
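As a sanity check on the quoted figure: 32-bit words at 33 MHz give a raw rate of 132 MB/s, so an aggregate above 100 MB/s implies the bus-mastering setup overhead stays under roughly a quarter of the bus cycles:

```python
# Raw miniPCI transfer rate: one 4-byte word per 33 MHz clock cycle.
raw_mb_per_s = 33e6 * 4 / 1e6             # 132.0 MB/s theoretical peak
overhead_budget = 1 - 100 / raw_mb_per_s  # ~24% headroom left for setup cycles
```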

3.2 LEAP Enabled Software

We have described the key hardware components required to enable LEAP technology as well as provided technical details of one example, the LEAP2 hardware platform, which serves as a reference design to showcase the LEAP design principles. In order to realize the capabilities provided by LEAP enabled hardware, we must now turn to development of the necessary software components. We begin with a discussion of the fundamental operating system enhancements and low-level device driver modifications made to accompany the LEAP2 reference design. This includes integrating EMAP measurements into key functions within the operating system to enable attribution of hardware energy dissipation to abstract operating system objects (such as applications and users) that share access to the underlying hardware. We then describe several energy profiling applications which have been built atop the energy-aware operating system software.

3.2.1 Operating System Integration

The EMAP ASIC provides unprecedented access to energy dissipation data of platform hardware components. It does not provide any direction on how this energy dissipation information is best used by software components to enhance low power operation and extend battery lifetime. In contrast, the operating system is a set of software services and programming abstractions that allow coexistence

of many individual applications by coordinating access to the many platform resources and serving as a central control mechanism for platform services. The operating system must therefore be augmented to integrate this new energy data into its fundamental components so as to enhance these software abstractions with ownership of energy utilization data. As part of LEAP development, we have enhanced one existing open source operating system, Linux, with these additional energy-aware capabilities to provide a fine-grained energy attribution capability for a variety of operating system functions.

The Linux source code already integrates several software tracing projects whose objectives are to provide low level monitoring functions in order to gather information about a running Linux system. This capability is frequently utilized for debugging, to locate and correct hard-to-reproduce errors, or for performance tuning, to increase throughput or efficiency. These tracing methods include both static and dynamic trace point insertion methods. Linux Tracepoints [Bar] are static markers within the kernel source which, when enabled, can be used to hook custom software routines into the running kernel at the specific points where these markers are located. Linux Tracepoints can be used by a number of tools for kernel debugging and performance problem diagnosis. One of the advantages of Tracepoints is that they allow developers to monitor many aspects of system behavior without needing to know much about the kernel operation itself. In contrast, Linux Kprobes are a dynamic probing capability. With Kprobes, the developer is able to insert tracing methods (taps) into arbitrary locations within the running kernel without precompiling these tap locations a priori. However, the Kprobes developer needs intimate knowledge of the kernel operation in order to effectively insert probes into the proper location within the kernel binary.

During LEAP software development we found that both methods provided

value and have utilized each of these methods to enhance the Linux kernel with new energy-aware functionality. This is provided by a custom, centralized energy registry within the kernel. Due to the extremely low overhead energy sampling of the EMAP ASIC, it is now possible to add energy sampling methods even at the high frequency operations of the kernel. Using our energy registry, software developers may enable energy sampling from any of the existing Tracepoints or Kprobes tap locations. In order to use the energy registry, software components first register their intent to measure specific power domains and whether those measurements should be ’self-clearing’ operations (i.e. incremental energy usage measurements). Once this intent has been registered, the tracing software may call the energy registry’s sample request methods. The registry then accesses the EMAP ASIC’s energy sample data (for the previously registered power domains), and returns energy sample data to the caller. For a complete listing of energy registry source code the reader is referred to Appendix B.
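The register-then-sample sequence can be modeled in a few lines; this is an illustrative user-space model with hypothetical names, not the kernel implementation listed in Appendix B:

```python
# Illustrative model of the energy registry call sequence: clients declare
# which power domains they measure and whether reads are self-clearing
# (incremental), then call sample() from their tracing hooks.
class EnergyRegistry:
    def __init__(self, emap_read):
        self._emap_read = emap_read   # callable: domain -> cumulative joules
        self._clients = {}            # handle -> (domains, self_clearing, last)

    def register(self, handle, domains, self_clearing=False):
        """Declare intent to measure specific power domains."""
        self._clients[handle] = (domains, self_clearing, {d: 0.0 for d in domains})

    def sample(self, handle):
        """Return energy per registered domain; incremental if self-clearing."""
        domains, self_clearing, last = self._clients[handle]
        out = {}
        for d in domains:
            total = self._emap_read(d)
            out[d] = total - last[d] if self_clearing else total
            last[d] = total
        return out

# Usage, with a fake EMAP read-back returning cumulative joules per domain:
counters = {"cpu": 10.0}
registry = EnergyRegistry(lambda domain: counters[domain])
registry.register("etop", ["cpu"], self_clearing=True)
first = registry.sample("etop")    # absolute energy on first read
counters["cpu"] = 15.0
second = registry.sample("etop")   # incremental energy since last read
```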

In addition to accounting functions, the energy registry is responsible for managing requests for power domain scheduling. The registry provides a simplified abstraction layer for software components by managing the low level event table stored within the EMAP. Operating system software may then request basic power domain operations such as power on, power off, or wakeup triggers for each domain while the energy registry manages request conflicts. To avoid scheduling request conflicts, the energy registry utilizes the user ID tag field found in each entry of the PMS RAM table as a handle to each registry user. The handle enables retrieval of all unexpired schedule entries for each user and domain. Via this handle, the energy registry is able to verify that different software components do not operate on the same power domain.

To provide basic command line scripted operation of EMAP functions, a custom set of /proc interface files is available. These export the EMAP’s energy information from the kernel to user space applications. A list of these information files, located in /proc/leap, is below:

Figure 3.29: Block diagram of the developed Linux energy registry

1. energy—Provides latest energy dissipation information for each power do- main and the voltage settings for each voltage channel.

2. energy-reset—Provides latest energy dissipation information as above but each energy counter is reset after reading.

3. schedule—Provides a command line interface to access the PMS scheduler table data.

4. status—Provides current power domain status information.

5. trigger—Provides a list of the currently enabled wakeup triggers in the EMAP PMS.

6. rtc—Provides latest real-time counter value from EMAP PMS.

Listing 3.1: Example data from /proc/leap/energy file showing power domains and energy dissipation

External Voltage: 15.788
External Current: 0.203
Battery Current:  0.003
Battery Voltage:  14.628
Energy Status:
Name:        Current   Charge     Energy      Time
SDRAM:       0.067A    16.671C    30.041J     2707.871S
NOR Flash:   0.000A    0.202C     0.364J      2707.871S
SRAM:        0.000A    0.198C     0.356J      2707.871S
PXA Core:    0.054A    72.870C    112.292J    2707.871S
NAND Flash:  0.000A    0.399C     1.316J      2707.871S
Ethernet:    0.001A    3.011C     9.936J      2707.871S
System:      0.413A    782.835C   3815.537J   2707.871S
Radio:       0.000A    1.615C     5.329J      2707.871S
GPS:         0.000A    8.128C     26.822J     2707.871S
USB:         0.155A    454.975C   1501.417J   2707.871S
SD:          0.001A    4.790C     15.807J     2707.871S
HPM:         0.253A    309.202C   1507.050J   2707.871S
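A user-space tool consuming this file only needs a small parser; a minimal sketch for the per-domain line format shown in Listing 3.1 ("Name: <amps>A <coulombs>C <joules>J <seconds>S"):

```python
import re

# Matches per-domain lines such as "SDRAM: 0.067A 16.671C 30.041J 2707.871S";
# the voltage/current header lines do not match and are skipped.
LINE_RE = re.compile(
    r"^\s*(?P<name>.+?):\s+(?P<amps>[\d.]+)A\s+(?P<charge>[\d.]+)C"
    r"\s+(?P<energy>[\d.]+)J\s+(?P<time>[\d.]+)S\s*$")

def parse_energy(text):
    """Return {domain: {'amps':..., 'charge':..., 'energy':..., 'time':...}}."""
    domains = {}
    for line in text.splitlines():
        m = LINE_RE.match(line)
        if m:
            domains[m.group("name")] = {
                k: float(m.group(k)) for k in ("amps", "charge", "energy", "time")}
    return domains

sample = "SDRAM: 0.067A 16.671C 30.041J 2707.871S"
parsed = parse_energy(sample)
```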

3.2.2 Energy Profiling Tools

We now provide three examples of user applications showing the value of this energy registry. Each application is a tool providing energy profiling information to the developer from a different energy usage perspective. The first, etop, collects and displays aggregated energy dissipation of platform applications and power domains and is valuable for comparing energy dissipation between applications. The second example, endoscope, provides application energy tracing at higher time resolution, to locate high energy dissipation at critical sections within an application. The third example improves the aforementioned endoscope by enhancing the Linux Trace Toolkit [Ope12b] to provide energy dissipation information in graphical form in addition to existing kernel and application execution trace information.

3.2.2.1 etop

Etop is a user-space tool that enables rapid observation of energy consumption when running an arbitrary set of processes. Using etop, an application developer can easily visualize, in real-time, the power draw of subsystems and energy consumption of running processes. Due to its rapid real-time information display, it can also be used as a runtime debugging tool, for quickly ascertaining system and process energy performance, as well as for understanding the energy behavior of processes in a dynamic environment. Etop is based on the well-known UNIX program top. Etop adds two additional capabilities: per-subsystem current, power and energy information and per-process energy accounting. Figure 3.30 presents a screenshot of etop, with the top section displaying per-subsystem information and the bottom section displaying per-process energy consumption.

Figure 3.30: Etop screenshot displaying per-subsystem power and energy information (16 subsystems, instantaneous and average values, e.g. ETH: 385mW, CF: 195mW, SDRAM: 26mW, CPU: 578mW) as well as per-process energy consumption (e.g. md5sum: 0.8 Joules in user mode, 5.33 Joules in kernel mode, 6.13 Joules total).

Etop's per-subsystem information is directly linked to the output of the EMAP2 through the energy registry. On every refresh cycle (a user-controlled parameter that top provides, with a default of 3 sec), etop presents current, power, and energy consumption information for individual subsystems. It also provides voltage information for the four voltage-only channels. Current and power values are by default averaged over the refresh period; maximum values can also be displayed.

To provide etop with per-process energy accounting, we introduced a modification to the Linux OS scheduler so as to record energy consumption information, in a manner similar to ECOsystem [HCA02]. Specifically, we utilized Tracepoint markers around the Linux scheduler and augmented the task_struct structure (which contains information about a specific process) with two energy containers: one for user-level information and one for kernel-level information. On every scheduler tick, Linux determines whether the current process time slice executed in user or kernel mode and charges the appropriate container accordingly. This information is used by the kernel (and top) to determine the fraction of CPU time spent in kernel (system) mode and in user mode. We then requested the energy registry to read from all channels on every scheduler tick and charged the result to the corresponding user- or kernel-level process energy container. As a result, we can disambiguate between energy consumed while a process was performing work in support of a user mode task and energy consumed while the system was performing work during the process time slice in support of kernel mode tasks. Both values are exported using the /proc interface, in a manner similar to /proc/<pid>/stat. Etop then reads those values and displays them in standard top-like column format.
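The per-process accounting above can be sketched as follows. This is a user-space simulation of the charging logic only (the real mechanism lives inside the Linux scheduler); the class name, field names, and the tick energies below are illustrative assumptions.

```python
# Sketch of the per-process energy-container accounting described above,
# simulated in user space. In the real system the containers live in the
# kernel's task_struct and are charged on each scheduler tick.
class Task:
    def __init__(self, pid):
        self.pid = pid
        self.energy_user = 0.0    # joules charged while in user mode
        self.energy_kernel = 0.0  # joules charged while in kernel mode

def on_scheduler_tick(task, tick_energy_j, in_kernel_mode):
    """Charge the energy measured over this tick to the proper container."""
    if in_kernel_mode:
        task.energy_kernel += tick_energy_j
    else:
        task.energy_user += tick_energy_j

t = Task(pid=1234)
# (energy measured over the tick in joules, kernel-mode flag) pairs
for energy, kmode in [(0.002, False), (0.003, True), (0.002, False)]:
    on_scheduler_tick(t, energy, kmode)
# energy_user now holds 0.004 J, energy_kernel 0.003 J
```

Keeping the two containers separate is what lets etop report, for example, that md5sum dissipated most of its energy in kernel mode (I/O) rather than in user-mode computation.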

3.2.2.2 endoscope

Since etop displays system energy performance in real time, it is not optimized for in-depth detailed measurements. Etop is a user-space process with non-trivial memory and CPU usage (especially at refresh rates higher than 1 Hz) and as such can interfere with the measurements themselves. To provide high-rate sampling in software with very low energy overhead, we implemented the endoscope kernel module. Endoscope uses the energy registry to access the EMAP ASIC's energy data at frequencies as high as the default Linux scheduler tick resolution (1 kHz in our system) and with very low CPU overhead, as no expensive user-kernel boundary crossings are required. The results are then stored in a circular buffer in kernel memory, a very fast and low-overhead operation. We utilize a

Table 3.12: Qualitative comparison of Etop and Endoscope

                            Etop                Endoscope
Execution space             User                Kernel
Kernel modifications        Scheduler for       Kernel module
                            per-process data
Data display interface      top-like            /proc
CPU usage                   1-25%               Negligible
Sample storage              No                  Yes
Real-time data collection   Yes                 Yes
Real-time data display      Yes                 No
Per-subsystem resolution    Yes                 Yes
Per-process resolution      Yes                 No

standard /proc interface for user-space data display and control purposes. To avoid frequent periodic polling of the /proc interface from user space, the circular buffer has sufficient memory to store several minutes' worth of real-time data. To avoid buffer overrun, a user-space application only needs to read from the /proc interface at an interval slightly shorter than the time the buffer can hold at a given sampling rate. Endoscope's negligible operating overhead, combined with the minimal power requirements of the EMAP2 ASIC, has enabled post-deployment monitoring and analysis of the power behavior of applications, both on a single node and on the entire network. As a result, endoscope has been used extensively to collect experimental datasets. Endoscope, unlike etop, measures energy consumption of entire subsystems rather than that associated with individual processes. As a result, when conducting our experiments, care is taken to ensure that endoscope measures energy solely attributed to a single application. In the development of endoscope, this procedure proved effective for ENS systems, which typically support one dominant application, or a set of applications that can be considered dominant. Table 3.12 summarizes the capabilities and differences of these software tools.

Figure 3.31: LTT screenshot displaying NOR flash write information as well as per-process energy consumption.

3.2.2.3 Energy-aware Linux Trace Toolkit

While endoscope provides real-time energy tracing data, it was soon discovered that the data generated during application profiling can become significant and may interfere with the application under test. We discovered an open source tracing and visualization application which solved many of the fundamental limitations of endoscope while providing a modular software architecture, enabling a broad energy-aware development capability for both kernel and user-space applications. The Linux Trace Toolkit (LTT) is a sampling framework built upon the Linux Tracepoints and Kprobes capabilities to record software execution traces into a kernel buffer that is optimized for low overhead data transfer to the user-space recording applications. LTT supports multiple standard visualization tools including Babeltrace [Eff12] and Eclipse [Fou] for trace display.

We augmented the LTT kernel trace generator code to include energy recording data as part of its trace buffer records. LTT supports dynamically selectable context information to augment event payloads, such as PID, PPID, process name, and nice values. We added an additional context type to support energy accounting information in the event records. LTT users may now choose to collect EMAP energy data at any of the enabled trace markers. This enhancement to LTT's sample collection routine has provided valuable insight into platform energy dissipation. Figure 3.31 displays a trace showing the CPU activity and power dissipation during a NOR flash filesystem copy operation. This data corresponds to the previously described NOR versus NAND energy storage comparison in Section 2.4.2.1. As was shown in Section 3.2.2.2, the selection of not only the file system but also the data copy method can have significant impacts on the energy dissipated by the CPU and memory subsystems. LTT with energy tracing features enabled, coupled with kernel system call trace information, was critical in finding these energy differences.

3.2.3 Power Scheduling Tools

While the previously developed tools are valuable for energy accounting functions, we still require a software method to activate the EMAP's power management scheduler feature. The current Linux power management framework allows suspend and resume operations on the entire platform, but it is not well suited to dynamic power management at the subsystem or device level. While the /sys interface exposes device power states, they cannot be operated from user space. Likewise, the power management API cannot be operated at the per-bus or per-class level from within the kernel; these are set only during platform-level suspend and resume functions. In addition, power management events may only be handled by the applications CPU, which requires a complete resume of the platform in order to process new wakeup events. To support finer grained control as well as scheduled operation of components, we created a new power management utility, emap-client. The emap-client utility provides script-based user input to generate complex power management features including:

1. Wall time based sleep and resume functions for each EMAP managed power domain.

2. Periodic or interval based sleep and resume for cyclic sensing and processing.

3. Asynchronous wakeup signal for event driven operation.

4. Cancellation of existing power management schedules.

5. User permission verification to prevent unauthorized schedule modification.

Figure 3.32: Example commands from emap-client demonstrating scheduling capability

Emap-client control is managed by a simple language-based parser that allows users to specify complex scheduling actions. For instance, script time references are based upon Linux system time (and are compatible with standard date values) rather than referencing the EMAP's RTC value. The emap-client software automatically performs time conversion during schedule entry as well as conversion of RTC values to local time during display operations. In addition, power domains are referenced by configurable name handles (read from the configuration file) rather than by EMAP PMS register values. Finally, emap-client allows users to add custom reference handles for each entry. These reference handles may be used in later scheduling commands to modify or delete the existing entry, simplifying schedule maintenance. Figure 3.32 demonstrates some simple command activities using emap-client from the command line.
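The name-handle and time-conversion ideas above can be illustrated with a toy parser. The command syntax, domain table, and field names here are entirely invented for illustration (the real emap-client grammar appears in Figure 3.32); only the design ideas (named handles instead of raw PMS registers, epoch time instead of RTC ticks, user tags for later edits) come from the text.

```python
# Hypothetical emap-client schedule-entry parser. The command syntax and
# the DOMAINS table are invented stand-ins, not the actual grammar.
DOMAINS = {"gps": 3, "radio": 5}  # name handle -> PMS register index (assumed)

def parse_schedule(cmd):
    """Parse 'sleep <domain> at <epoch> wake after <sec> tag <handle>'."""
    tok = cmd.split()
    return {
        "domain": DOMAINS[tok[1]],   # named handle, not a raw register value
        "sleep_at": int(tok[3]),     # Linux epoch time, not an RTC count
        "wake_after_s": int(tok[6]),
        "handle": tok[8],            # user reference for later modify/delete
    }

e = parse_schedule("sleep gps at 1318565760 wake after 600 tag nightly")
```

A later command could then reference "nightly" to cancel or modify the entry, which is the schedule-maintenance simplification the text describes.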

CHAPTER 4

LEAP Application: GeoNET

The previous chapter described the LEAP architecture, including supporting hardware circuit schematics and performance measurements, energy profiling and scheduling software tools, and a complete reference platform design. We now provide a case study demonstrating a LEAP-enabled application in a commercial product offering. We begin with a detailed description of this platform and its capabilities. This is followed by a presentation of experimental data collected during numerous field trials lasting over one year in duration. The data includes a validation of LEAP capabilities enabling in-field refinement of energy usage while operating in continuous deployment, including platform software modifications providing subsequent increases in battery lifetime. We believe the ability to perform this in-field optimization of platform power dissipation to be a unique contribution of the LEAP architecture.

Autonomous sensing systems have become increasingly important in environmental applications as a method to collect and process data for detection of specific phenomena. One important environmental application is the detection of seismic episodes following a major catastrophic event. An urgent need has arisen, in the wake of the 2011 Japan earthquake measuring 9.0, for a rapidly deployable, battery operated, aftershock detection capability. This new detection system would be deployed following large earthquakes, when much of the existing infrastructure may be damaged or destroyed. Its purpose is to alert first responders of approaching aftershocks and allow their evacuation from unstable or dangerous locations prior to the arrival of new aftershocks. This early warning capability was lacking during search and rescue operations performed in several of Japan's major metropolitan areas. In the months that followed the initial earthquake, an additional 1000 aftershocks were felt, including magnitude 7.7 and 7.9 events on March 11, 2011 [Wik11], as shown in Figure 4.1. Each of these had the potential to create deadly repercussions for the thousands of first responders searching for survivors.

Figure 4.1: Map of aftershock activity near Japan as of March 31, 2011

The specific challenges of this pre-aftershock warning system impose a unique and difficult set of design requirements. First, external power sources and other infrastructure will be unreliable in the aftermath of the primary earthquake event. Therefore, these systems must be capable of operating for several weeks to a month (during search and rescue operations) on battery power alone, with only a small solar panel for secondary charging possible in some situations. However, in order to provide a rapid deployment capability, strict limitations on size and weight are needed, greatly limiting the size of the battery allowed.

Conversely, seismic applications require extremely high precision amplifiers and high fidelity sampling components. These are necessary for effective event detection [TEK] in the presence of noise from natural phenomena such as rainfall and wind. Deployments in urban environments during aftershocks pose additional challenges due to manmade vibrations from pedestrians, vehicles, and other heavy equipment. This challenge is further compounded in an aftershock detection situation by the need for rapid deployment, which prohibits time consuming placement of the geophone sensors, further degrading the signal to noise ratio of the sensor.

The previous constraints impose a critical energy budget challenge, as little power remains for critical tasks such as processing and communication. However, reliable event detection in the presence of a wide variety of natural and man-made noise sources requires sophisticated algorithms to reduce false alarms. In addition, an ad-hoc wireless communications network is necessary to coordinate event detection as well as distribution of event alerts. The propagation speed of the aftershock event limits the maximum warning time to only a few seconds, imposing severe limitations on network formation and alert propagation through the network. Transitioning the system from low power detection operation to full vigilance during alert distribution must be rapid, with a minimum of time spent on route learning and path establishment.

To address these design challenges, UCLA's Center for Embedded Networked Sensing (CENS) has been working with Refraction Technologies Inc., a leading seismic equipment supplier, to develop a novel aftershock detection and alert capability specifically designed for this urgent application. This GeoNET project incorporates LEAP capabilities to provide a low average energy sensing capability while enabling robust sampling algorithm development and heterogeneous communications capabilities. As part of defining the GeoNET objectives, we defined a specific set of platform requirements based upon contributions from field experts from both the Geology and Civil Engineering departments. Table 4.1 lists these platform criteria.

4.1 GeoNET Design

The GeoNET platform, a custom configuration of the Refraction Technologies RT155, is a ruggedized, all-weather sensing system designed for rapid deployment and long term autonomous operation. The RT155 may be equipped with a variety of seismic sensors, including both passive and active types. The standard GeoNET configuration of the RT155 includes a 3-axis L-4 passive geophone sensor. The GeoNET enclosure is shown in Figure 4.2-a.

Figure 4.2: GeoNET (RefTek RT155) product photo (a) and internal bays (b)

4.1.1 Hardware Architecture

The RT155 internally houses several modular bays supporting different power, sensing, and processing modules to suit a wide range of applications. The internal view of the RT155 is shown in Figure 4.2-b. The three modules in the figure can be found along the bottom of the enclosure. The GeoNET configuration loads the RT155's bays with the hardware listed in Table 4.2.

Table 4.1: GeoNET design requirements list

Requirement          Specification           Details
No. Channels         3 (6ch option)          Support passive and active sensors
Sample Rate          200sps
Resolution           24-bit (135.5dB)
(Dynamic Range)
Power Input          200mW ave.              For 3 passive channels
                     1W                      For 3 12V channels
Power Output         ±12V                    Geophones (passive), Episensor 12V,
                                             LVDT 6-30V, Non-Contact
                                             Displacement 10-18V
Radio                >100mW output           Able to communicate over 0.5-1km
Timestamp            GPS and RBS ready       Option to obtain GPS coordinates
Timing Error         <0.1ms
Event Detect         >60s pre-event          Trigger store-and-forward
                     >120s post-event
Buffer Size          8MB                     3ch x 200sps x 24bit = 6.5MB/hr;
                                             buffering to capture data for
                                             15-20min; concurrent buffering and
                                             data transfer without interruption
                                             of data collection
On-site storage      1GB external storage    Enough for 1 week without power
                                             (168hr = 1.1GB)
Overall Dimensions   25x25x15cm              Rugged IP67 box; includes CPU
                                             housing and battery

Table 4.2: GeoNET configuration of the RT155's modular bays

Slot ID  Module  Components    Function
1        CPU     RT619+HPM     CPU and communications module
2        PWR     RT618+RT620   Power module and battery charging
3        AD      RT618+RT617   Three channel analog data sampling
4        AD      RT618+RT617   Optional module to extend 3 to 6 channels

The RT619 module, located in slot 1 of the GeoNET configuration, is a custom LEAP-based design, derived from the EMAP2 circuit board described in Section 3.1.5.3 and its accompanying HPM connector. This design was provided to RefTek, who fabricated and assembled the RT619 module as part of GeoNET product development. A photo of the RT619 circuit board with an installed HPM is shown in Figure 4.3 along with a circuit block diagram. The remaining slots in the

Figure 4.3: RT619 module photo

GeoNET configuration of the RT155 are installed with a power supply module (PWR module) and a high fidelity 3 channel analog to digital sampling module (AD module). The power supply module supports battery charging from external DC supplies or solar panel arrays. The AD module includes a state-of-the-art Texas Instruments ADS1282 sampling engine. We provide measurement plots of the AD module's noise floor and spurious free dynamic range (SFDR) in Figure 4.4 for reference. It is important to note the dynamic range of the ADS1282: over 110dB SFDR including harmonics and 150dB excluding harmonics. Additionally, the AD module noise floor is near -120dB V²/Hz, necessitating 24-bit or larger sample sizes. As was demonstrated in Section 2.2.1, larger data types are most efficiently handled by higher performance computation resources so long as the idle power can be handled effectively.

The RT155's modules are connected through a high density ribbon cable with shared power, ground, and data signals. Analog sample data is passed through the ribbon cable in a daisy-chain using a round-robin token passing method to manage arbitration for the source module. This allows multiple AD modules to be connected in series and seamlessly extends the number of channels by adding additional AD modules into the RT155 bays. At each sample interval, the AD modules generate time stamp information and sample data. These are then transferred via the serial bus in a fixed priority sequence. Each 32-bit word in the data stream contains a unique identifier tag. The details of the token passing and serial frame types are shown in Figure 4.5.
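The tagged-word idea can be sketched as follows. The tag width and bit layout here are assumptions for illustration (Figure 4.5 gives the real frame types); the point is only that every 32-bit word carries an identifier so daisy-chained modules can be demultiplexed at the receiver.

```python
# Sketch of tagged 32-bit backbone words: each word carries an identifier
# so AD modules sharing the daisy-chained bus can be demultiplexed.
# The 8-bit tag field and its position are assumptions, not the RT155 spec.
TAG_BITS = 8  # assumed: top 8 bits identify the word type / source module

def pack_word(tag, payload):
    """Combine a tag and a (32 - TAG_BITS)-bit payload into one word."""
    assert 0 <= payload < (1 << (32 - TAG_BITS))
    return (tag << (32 - TAG_BITS)) | payload

def unpack_word(word):
    """Split a bus word back into (tag, payload)."""
    return word >> (32 - TAG_BITS), word & ((1 << (32 - TAG_BITS)) - 1)

w = pack_word(tag=0x2A, payload=0x123456)
tag, payload = unpack_word(w)
```

Because every word is self-identifying, a receiver can interleave timestamp and sample words from several AD modules on one serial stream without per-module framing state.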

4.1.1.1 RT619 Design

We now highlight the important distinctions of the RT619 from the prior LEAP2 design described in Section 3.1.5.2. For detailed design information on the RT619, please refer to Appendix A. While the RT619 design was leveraged from the EMAP2, several enhancements were added to support the GeoNET application. We provide a block diagram of the RT619 with new GeoNET functions highlighted for reference (Figure 4.6). Specifically, we added a high precision time reference via a TCXO with a voltage control pin. The 16.384MHz TCXO output may vary up to ±10ppm over the specified input voltage range. The TCXO voltage control pin is driven by a 16-bit DAC reference that is controlled by the RT619 FPGA, with the TCXO clock output fed back into the FPGA. This feedback circuit is used to provide local time synchronization such that local time may be precisely aligned to a GPS pulse-per-second signal or other precision reference. We provide a more detailed description of the software controls and time synchronization circuits in Section 4.1.2.2. The RT619 additionally supports the new RT155 backbone data bus via a common ribbon cable connection between the different modules. The RT155's data bus connection on the RT619 is routed directly to the FPGA for local data buffering and processing. A diagram noting new GeoNET FPGA enhancements is shown in Figure 4.7 and is now described.

Figure 4.4: GeoNET RT614 Analog module performance measurements: (a) PSD of the noise floor from DC to 100Hz and (b) dynamic range of a 17.08Hz tone

Figure 4.5: RT155 Backbone Serial Data Protocol

Figure 4.6: RT619 architecture highlighting additional GeoNET capabilities

4.1.1.2 Data Sampling, Buffering, and Event Detection

The RT619's FPGA provides two data paths for sample collection through its Data Acquisition Unit (DAU). These are managed through Wishbone accessible configuration registers in the DAU. First, the DAU provides a direct synchronous serial bus connection between the RefTek AD data backbone and the applications processor's on-chip peripherals, enabling low latency data collection. This is beneficial when buffering delays prevent accurate inter-unit timing. Alternatively, samples may be buffered within the FPGA's internal RAM blocks via a serial-to-parallel conversion and subsequent FIFO write operation. A FIFO configuration register sets the threshold value, above which IRQ events are sent to the applications processor. A diagram of the DAU is shown in Figure 4.8 for reference. The threshold IRQ is also routed to the FPGA's power management unit to provide wakeup events for registered power domains. In addition to the FPGA's data sampling and buffering function, the DAU provides an interface to optional event detection blocks. The event detection interface is clocked by both the high speed TCXO as well as a top-of-second indicator, facilitating coordinated inter-node event timing. This module is tightly coupled to the data buffer RAM, with single cycle access latency to any RAM address. This permits algorithms requiring multiple passes over RAM data in addition to those needing sequential access. Several event detection implementations are described in the later experimentation section.

Figure 4.7: GeoNET FPGA architecture highlighting additional sensing capabilities

Figure 4.8: Diagram of the GeoNET FPGA's Data Acquisition Unit

4.1.1.3 Time Synchronization

The RT619's FPGA also manages the RT155 time reference signal through the previously mentioned TCXO and DAC voltage control. This reference signal is routed through the RT155 backbone to provide a synchronization signal across each of the AD sampling modules. We implemented a software managed frequency control loop using the FPGA to provide precision error estimation of the TCXO frequency using the GPS's top-of-second time synchronization signal. We have qualified the measurement accuracy as follows: the Ublox LEA-4T GPS module derives its synchronization timing signal from the module's internal 24.103 MHz oscillator. We have also measured an average rising edge pulse delay of 6ns on the RT619 circuit boards, dictating a module timing uncertainty of ±47ns. When combined with the local TCXO's frequency of 16.384 MHz, our phase error measurement is limited to ±109ns resolution. We assume the cable delay to be identical between different units, so it may be neglected in the error calculation.

The FPGA's time synchronization circuit provides two distinct modes of operation: frequency tracking and phase tracking. The frequency tracking circuit is used during initialization and allows rapid convergence of the TCXO frequency to the top-of-second signal. In this mode, the rising edge of the top-of-second signal generates a frequency error estimate in TCXO clock cycles and resets the count value to realign the local counter with the synchronization signal. Since frequency errors may be large during this mode, the phase offset is not considered. Once the frequency offset has converged below a minimum threshold, phase tracking is enabled. In phase tracking mode the top-of-second signal no longer resets the TCXO count value. Instead, the phase error continues to accumulate in the offset register. A separate top-of-second event counter is then enabled to provide frequency error calculation as the phase error accumulates. A software phase lock loop, described below, is used to modify the DAC voltage, setting the TCXO frequency, based upon periodic phase error measurements.
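The software phase lock loop can be sketched as a PI controller mapping accumulated phase error (in TCXO cycles against the GPS top-of-second) to a 16-bit DAC code. The gains, DAC center code, and the toy plant model below are illustrative assumptions, not the deployed values or the actual gpsmon algorithm.

```python
# Sketch of a software PLL for TCXO discipline: a PI controller converts
# the measured phase error into a DAC code steering the TCXO frequency.
# Gains, center code, and the plant response are assumed toy values.
def make_pll(kp=0.6, ki=0.1, dac_center=32768):
    integ = 0.0
    def step(phase_error_cycles):
        nonlocal integ
        integ += phase_error_cycles
        dac = dac_center + kp * phase_error_cycles + ki * integ
        return int(min(65535, max(0, dac)))  # clamp to the 16-bit DAC range
    return step

pll = make_pll()
# Toy plant: each DAC LSB above center pulls the error down ~0.001 cycles
err, dac = 50.0, 32768
for _ in range(200):
    dac = pll(err)
    err -= (dac - 32768) * 0.001
```

The integral term is what drives the steady-state frequency offset to zero; the clamp reflects the hard limits of the 16-bit DAC driving the TCXO control pin.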

4.1.2 Software Architecture

The RT619's addition to the RefTek RT155 platform enables highly sophisticated sensor data collection, processing, and communications methods. These new methods are provided through our development of new software applications and platform services. These new services include:

1. Provide platform health monitoring such as battery charge state and wireless interface connection status, including emergency preservation modes.

2. Provide specialized time synchronization software including custom device drivers enabling the FPGA frequency control circuit and software phase locked loop.

3. Provide data collection services to acquire geophone samples from the RT155’s AD modules and format this data for later post-processing.

4. Perform detection of seismic events by selecting from a library of available methods and algorithms (both on the CPU and FPGA).

5. Enable rapid reporting of events through established infrastructure methods such as email and SMS.

6. Perform synthesis of event data into formats consistent with rapid user response.

7. Enable resource power scheduling.

8. Provide platform software management with in-field updates.

As part of the GeoNET software development, each of these services is provided through a combination of custom and open source software. Figure 4.9 provides a simplified diagram of the primary RT619 software applications. Each is described in the following sections.

4.1.2.1 System Health Monitoring

A fundamental operation in any remotely deployed system is persistent vigilance of platform health. As GeoNET devices are expected to be left unattended for many weeks, health monitoring is a critical concern. We developed a specialized health monitoring daemon, sysmon, which provides continuous verification of battery health and wireless communications link state. Failures of internal software services or hardware trigger automatic watchdog rebooting of the platform to avoid deadlock situations. As part of battery health monitoring, should sysmon measure the battery voltage at critical levels while battery charging is disabled, sysmon forces entry into an emergency shutdown state. In this state all software services are terminated to reduce immediate power consumption. Following this, as imminent malfunction or failure is likely, sysmon writes a future wakeup time into the FPGA's PMS table to restart the unit. Sysmon then shuts down power to all domains other than the FPGA. At the prescribed daily wakeup time, scheduled to correspond with the maximum solar charge, the PMS restarts the platform. Once booted, sysmon retests the battery state and reports current health information to the remote server. This continues until sufficient charge in the battery disables the emergency shutdown state.

Figure 4.9: Diagram of RT619 software applications
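The emergency shutdown logic above amounts to a small state machine. The sketch below captures it; the threshold voltages and hysteresis band are illustrative assumptions, not sysmon's configured values.

```python
# Sketch of sysmon's battery-health state machine described above.
# CRITICAL_V and RECOVER_V are assumed illustrative thresholds; the gap
# between them provides hysteresis so the unit does not oscillate.
CRITICAL_V = 11.0  # assumed voltage that forces emergency shutdown
RECOVER_V = 12.5   # assumed voltage at which normal operation resumes

def next_state(state, battery_v, charging):
    """One evaluation step of the battery-health check."""
    if state == "normal":
        if battery_v < CRITICAL_V and not charging:
            return "emergency"  # stop services, schedule a PMS wakeup
    elif state == "emergency":
        if battery_v >= RECOVER_V:
            return "normal"     # enough charge: leave emergency mode
    return state

s = next_state("normal", 10.8, charging=False)  # low battery, no charging
s = next_state(s, 12.9, charging=True)          # solar charge restored
```

The hysteresis gap between the two thresholds is the design point: without it, a battery hovering near one threshold would repeatedly cycle the platform through shutdown and restart.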

127 4.1.2.2 Time Synchronization

As noted in our requirements (Table 4.1), inter-node timing error may not exceed 0.1ms, yet GPS enabled time synchronization on the RT619 provides significantly better timing accuracy. However, generating the GPS module's timing signal requires significant energy. If we instead rely upon the RT619's on-board TCXO, sufficiently conditioned by the FPGA to match the GPS's synchronization signal via periodic realignment, we may achieve a net power savings. Since the TCXO clock error referenced to GPS may be measured in the FPGA, we utilize standard phase lock algorithms for local TCXO frequency adjustments. During periods when the local clock error is less than our defined maximum, the GPS module is shut down to reduce power. Our time synchronization is managed through three separate components: the FPGA's time synchronization module, a time synchronization Linux kernel driver, and a time synchronization daemon, gpsmon. The FPGA performs error estimation via clock difference measurements between GPS and the local TCXO as previously described. The time synchronization kernel module provides FPGA status information to the gpsmon daemon, including periodic calculations of drift rate and current error in parts-per-billion. This kernel information is periodically refreshed via a timer thread. An example of the kernel's provided information is shown below, displaying the FPGA's time synchronization status.

Listing 4.1: Example data from the /proc/leap/timesync file showing FPGA time synchronization status

AD Time: 1318565717.664
Local Time: 1318565718
Time difference: -.336
PPS Error: no
Time Sync: enabled
Lock Mode: phase
PPS Drift: +0.206ppm
Time Error: 61ns
DAC Value: 39040

Gpsmon reads the time synchronization state from the kernel driver through the /proc/leap/timesync interface and adjusts the DAC feedback voltage to the TCXO. Once the phase error is reduced below a specified threshold, gpsmon switches off the GPS unit for a prescribed period of time using various algorithms based upon the last frequency and phase error estimates. We do not list these algorithms here, but provide their measured energy efficiency in the field experiments in the next section.

4.1.2.3 Data Collection

GeoNET's data collection service provides continuous buffering, time stamping, file compression, and archiving operations for samples from the AD modules. The process uses several software applications to fulfill the service. Figure 4.9 shows the architecture of the sampling process. The RT155's AD data arrives on the backbone bus at the FPGA, where it is delivered to the HPM in one of two ways. If configured for serial pass-through, the FPGA forwards the AD data directly to the HPM's serial data bus. The applications processor's DMA is used to transfer data from the serial bus into a CPU local memory buffer. If configured to buffer AD data within the FPGA, the applications processor is idle until the FPGA sets the IRQ indicating the FIFO fill level has been reached. This wakes the CPU out of idle to set up a DMA transfer using the host memory bus rather than the serial data bus. Data is copied via the DMA to a CPU local memory buffer.

After the DMA transfer completes, the kernel driver receives an IRQ from the DMA. This causes the kernel driver to wake the reader application, adlogger, which is blocked waiting for new data to arrive. The adlogger daemon reads the buffer data via a device interface file, /dev/ad. The daemon converts the raw RT155 formatted binary data to one of several output formats, including binary integers, floating point, comma separated text, or SQLite database format. After converting sample data to the appropriate format, adlogger writes to a storage file whose location is determined by the daemon's configuration file.
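The core of adlogger's format conversion is handling the ADS1282's 24-bit two's-complement samples, which must be sign-extended before they can be written as ordinary integers or floats. The sketch below shows this step; the big-endian byte order is an assumption for illustration, not a documented detail of the RT155 stream.

```python
# Sketch of the raw-to-integer step adlogger performs: sign-extending
# packed 24-bit two's-complement samples. Big-endian order is assumed.
def decode_samples(raw):
    """Convert packed 3-byte big-endian samples to Python ints."""
    out = []
    for i in range(0, len(raw), 3):
        v = int.from_bytes(raw[i:i + 3], "big")
        if v & 0x800000:      # sign bit of a 24-bit value is set
            v -= 1 << 24      # sign-extend into a negative integer
        out.append(v)
    return out

samples = decode_samples(bytes([0x00, 0x00, 0x01,    # +1
                                0xFF, 0xFF, 0xFF]))  # -1
# → [1, -1]
```

Once sign-extended, the values can be emitted in any of the configured output formats (binary integers, floating point, text, or SQLite rows).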

Since the GeoNET application requires data to be sampled continuously for long periods, we provide a service to deal with filling available storage resources. We use a two stage data archiving mechanism to prevent data loss. The first stage manages data archiving using an open source log file rotation utility, logrotate. We configured logrotate to provide an hourly file archiving service, where at the start of each hour the current data file is terminated and a new empty file is opened for adlogger to continue storing data. Once the previous file is terminated, logrotate renames the file with epoch time stamp information and then compresses the file using gzip or bzip2 before moving it to the long term storage location, typically on an SD card or USB flash drive if installed. The second stage of the archiving application provides graceful retirement of old data files. Once the filesystem reaches a preset capacity, the oldest files are removed to provide space for new files.
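The second archiving stage can be sketched as follows, simulated on an in-memory list of (name, size) entries rather than a real filesystem; the high-water mark and file names are illustrative assumptions. Because the first stage names archives with epoch timestamps, a lexicographic sort already orders them oldest-first.

```python
# Sketch of the old-file retirement stage: drop the oldest archives once
# the filesystem nears a preset capacity. The 90% high-water mark is an
# assumed value; in the real daemon each pop would be an os.remove().
def retire_oldest(archives, used_bytes, capacity_bytes, high_water=0.9):
    """Remove oldest (name, size) entries until usage drops below the mark."""
    archives = sorted(archives)  # epoch-stamped names sort oldest-first
    while archives and used_bytes > high_water * capacity_bytes:
        name, size = archives.pop(0)
        used_bytes -= size
    return archives, used_bytes

files = [("1318560000.dat.gz", 40), ("1318563600.dat.gz", 40),
         ("1318567200.dat.gz", 40)]
files, used = retire_oldest(files, used_bytes=120, capacity_bytes=100)
# the single oldest file is removed, bringing usage under the 90-byte mark
```

Retiring by timestamp order preserves the most recent data, which is what matters for post-event analysis of the latest aftershocks.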

4.1.2.4 Event Detection

In addition to data collection and archiving, GeoNET provides an event detection service. Event detection is handled via a specialized daemon, eventpicker.

The eventpicker daemon is algorithm agnostic, providing hooks to support both software based event detection methods and FPGA based detection. Eventpicker receives a continuous stream of data from the adlogger daemon by opening the current data file read-only and reading from it directly. Upon each new data read, eventpicker informs one or more detection algorithms of the data arrival by passing the data through a shared memory object. If either an interrupt signal from the FPGA hardware or a software signal from the event detection software indicates an event arrival, eventpicker extracts samples from adlogger’s data file corresponding to a time window around the current time, the size of which is set via the eventpicker configuration file. Once the event file has been extracted, eventpicker initiates the event reporting mechanism described next.
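The window extraction step amounts to mapping a pair of timestamps to byte offsets in adlogger’s flat sample file. The sketch below is our illustration, not eventpicker source; the file layout (fixed-rate, fixed-size samples from a known start time) is an assumption:

```python
def extract_window(data, t_event, t_file_start, sample_rate_hz,
                   bytes_per_sample=3, window_s=10.0):
    """Return the slice of a raw sample buffer covering +/- window_s/2
    around t_event (times in seconds). Parameters are illustrative."""
    def offset(t):
        n = int((t - t_file_start) * sample_rate_hz)
        return max(0, n) * bytes_per_sample   # clamp to start of file
    lo = offset(t_event - window_s / 2)
    hi = offset(t_event + window_s / 2)
    return data[lo:min(hi, len(data))]
```

The extracted slice becomes the event file handed to the reporting stage.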

4.1.2.5 Event Reporting

The event reporting service is composed of a series of activity scripts executed by eventpicker using the Debian run-parts utility. These activity scripts provide a simple method to add and remove actions initiated by local event detection. An example of standard GeoNET event reporting activities is shown in Algorithm 2.

The alert sequence shown in Algorithm 2 may be augmented with additional activity scripts, such as one supporting SMS messaging via the Linux smsclient application. When enabled, the smsclient activity script sends a copy of the message body from the email alert activity as an SMS text message to the list of phone numbers contained in the sms-list file. Figure 4.10 demonstrates the email reporting format for a distributed event detection using a two unit collaboration.

Figure 4.10: Example of email report generated during El Centro deployment demonstrating display of distributed detection from two units

4.1.2.6 Configuration and Management

In order to remotely monitor unit operational status and energy efficiency performance, a remote configuration and management service is installed on GeoNET. This service comprises several distinct operations, including:

1. Generate application state, power profile, battery status, event detection log, and time synchronization log files.

2. Perform log file compression and archiving.

3. Transfer log archives via WiFi to gateway device.

4. Synchronize current event files with remote server.

5. Synchronize sample data archive with remote server.

6. Receive new commands from server for execution.

To minimize the significant energy needed to maintain the wireless communications interfaces, these devices are scheduled for only minimal operation, the objective being to sufficiently inform a remote observer of network status and operational performance. We use a combination of several open source applications, such as cron, logrotate, and ppp, in coordination with the FPGA power management scheduling capability, to enable the wireless interfaces only periodically to fulfill the tasks outlined above.

Since each GeoNET unit is time synchronized to a fine-grained level, we coordinate operation of the WiFi devices to rapidly establish an ad-hoc network. Once the network has formed, each remote unit synchronizes log files, event files, and optionally sample data archives with the gateway unit. After remote units have synchronized with the gateway unit (or a timeout action occurs), the WiFi interface is disabled. Gateway units then initiate a cellular connection to a remote server using openvpn to create a tunnel through ppp. The VPN tunnel provides a secure and reliable method for remote operators to access the gateway units. This is necessary since many cellular providers do not allow inbound connections through their network firewall. The openvpn tunnel enables direct SSH access to the gateway unit from the server, providing interactive command and control operation.

After successfully creating a VPN connection, the gateway unit synchronizes its log files, event files, and (optionally) sample data archives with the server into a unique outbox directory. This synchronization includes any data from remote units that was synchronized to the gateway over the WiFi link in the preceding time window. Upon completion of the data upload, the gateway unit also retrieves new configuration data, schedules, and commands from its inbox. The inbox may include an inbox/run path to enable automatic executable or script operation.

4.1.2.7 Software Update

To emphasize dynamic in-field optimization, we provide an over-the-air (OTA) method for continuous update of platform software and FPGA images. Application software is bundled using the OpenEmbedded opkg tool, which provides implicit versioning, merging, and installation capabilities. New software packages are deployed to the package repository, which may be hosted on any internet accessible server. During the previously mentioned synchronization of the inbox, new software packages may be installed via opkg commands. This causes the new software packages to be downloaded over HTTP using wget into local flash memory. opkg then installs the new software according to each package’s configuration data.

In addition to its application software, the RT619’s FPGA is OTA reprogrammable. The FPGA’s JTAG port is wired to the HPM’s applications processor via GPIO lines, enabling in-field reprogramming of the FPGA’s image. We created a custom Linux application, fpga-prog, based upon Actel’s directc source code, which reads the FPGA image from a local file and then writes this image via the JTAG port to the FPGA’s flash memory. Upon completion of the erase, write, and subsequent verify stages, the FPGA restarts using the newly updated image. As will be discussed in the next chapter, FPGA reprogramming is essential to match the most energy efficient event detection method to a specific environment.

Algorithm 2 Event reporting pseudocode
 1: Start up WiFi mesh network
 2: Distribute local event file to all neighbors using rsync
 3: Receive events from any neighbors with detections via rsyncd
 4: if Device is cellular gateway then
 5:   if Distributed event detection enabled then
 6:     repeat
 7:       Run event detection algorithm on neighbor_i data
 8:       if Event time on neighbor_i within ±∆t of local event time then
 9:         Increase likelihood
10:       else
11:         Decrease likelihood
12:       end if
13:     until All neighbor events processed
14:     if Likelihood < threshold then
15:       Cancel alert reporting
16:     end if
17:   end if
18:   Start ppp dialup of the 3G modem
19:   Generate email report including event time, date, and geographic location
20:   Run gnuplot on local event file
21:   Attach local plot graphic to email report
22:   repeat
23:     Run gnuplot on neighbor_i event file data
24:     Attach neighbor_i plot graphic to email report
25:   until All neighbor events plotted
26:   Add email cc recipients from cc-list file
27:   Run postfix mailer to send email to [email protected]
28:   Shut down ppp connection and disable 3G modem
29: end if
30: Shut down WiFi mesh network
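The distributed likelihood test in Algorithm 2 reduces to a simple vote over neighbor arrival times. A sketch, with illustrative delta_t, step, and threshold values (the deployed values live in the eventpicker configuration):

```python
def distributed_likelihood(t_local, neighbor_times, delta_t=0.1,
                           threshold=0, step=1):
    """Neighbor detections within +/- delta_t seconds of the local event
    time raise the likelihood; others lower it. Returns (likelihood,
    report), where report=False corresponds to cancelling the alert."""
    likelihood = 0
    for t_n in neighbor_times:
        if abs(t_n - t_local) <= delta_t:
            likelihood += step
        else:
            likelihood -= step
    return likelihood, likelihood >= threshold
```

With delta_t near 100 ms, correlated seismic arrivals accumulate positive votes while uncorrelated traffic noise drives the likelihood below threshold.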

CHAPTER 5

Field Experiments

In the preceding chapter we documented the unique capabilities of the Refraction Technologies RT155, including our custom RT619 module design, and its use in the GeoNET application. We provided a basic overview of the software and hardware components comprising the RT619 and its host processor module so that we may now address the value of energy-aware sensing. This chapter focuses on experimental data collected with our GeoNET devices across two long term deployments. These deployments, together accumulating over 9 months of in-field operation, were vital in validating an energy-aware design approach through iterative software development and deployment during continuous operation. Through the FPGA’s novel EMAP operation, providing high-fidelity, fine-grained energy dissipation measurements for each component operating in the GeoNET system, we were able to make substantive, iterative improvements to energy efficiency while maintaining high performance. This performance is demonstrated through accurate, rapid event detection as well as sophisticated, feature rich event reporting.

Our field experiments were conducted at two sites. The first site was located outside El Centro, California, south of the Salton Sea and near the Mexican border. The second site was located at Stunt Ranch, part of the Santa Monica Mountains Reserve near Los Angeles. Both sites were remotely accessible only through cellular connections, and no infrastructure power was provided. In both

experiments the GeoNET units were equipped with 20W solar panel arrays and expected to be energy self-reliant. These panels were wired directly to the battery charger circuit accessed through the power connector on the RT155 lid. WiFi access between the units was established during the site survey using high gain Yagi antennas in order to provide sufficient range in rugged terrain.

5.1 El Centro

Our first experiment took place at the El Centro site from March 2011 through June 2011. The GeoNET units were installed along a remote section of Highway 78, approximately 50m from the side of the road and approximately 350m apart. The unit map locations and photos of the installation sites are shown in Figure 5.1 and Figure 5.2.

Figure 5.1: Locations of the B003 and B004 units installed in El Centro, CA

Figure 5.2: Photo of B003 and B004 unit installations at El Centro test site

5.1.1 Data Collection Efficiency

During the three month deployment, GeoNET units were equipped to continuously monitor the energy expenditure of the GeoNET software applications using the measurement tools described in section 3.2. We periodically harvested energy usage information reported to the server with the intent of locating methods to increase platform efficiency. Our first experiment focused on investigating alternatives for the data collection process using the various output formats provided by adlogger and the underlying flash storage media and file system types. During the experiment we modified the adlogger configuration script to output data samples in binary, CSV, and SQLite formats. Additionally, we varied the storage file location across both a RAM backed tmpfs partition (/tmp) and a ubifs volume on the raw NAND device (/mnt/nand). We note that experiments writing directly to the SD Card and USB flash drive were unsuccessful because of occasional extended write times caused by background flash erase operations on these devices. The results of our experiment are shown in Figure 5.3. The tmpfs storage provides lower energy write operations than the ubifs option, by approximately 10%, across all three output formats. We also find that there is a significant energy cost to convert from RT155 binary format to either CSV text or SQLite database formats.

Figure 5.3: Energy consumed by adlogger to capture and store different format types and file systems

5.1.2 Data Archiving Efficiency

After adlogger stores converted data to its sample file, the archiving application, logrotate, periodically reads, compresses, and stores sample data to its archiving directory. We next test different options for the archive process through experimentation with different source file locations. For each we assume the destination location remains constant: a vfat formatted SD Card (/mnt/sd). The source data is located in one of the following: a second file on /mnt/sd, a ubifs volume residing in the raw NAND device (/mnt/nand), or our tmpfs partition in /tmp. As part of the experiment, both gzip and bzip2 compression schemes were measured.

The results of this experiment are shown in Figure 5.4. The energy supplied to each of the hardware components that is directly attributable to the adlogger and logrotate processes, as well as the file system read and write operations, has been accumulated into the graph. For simplicity we attributed all of the energy consumed by the storage media to the adlogger and logrotate applications. This was a fair assumption since no file access is performed during normal operation beyond periodic log file generation and rotation. As expected, the bzip2 algorithm provides superior compression, approximately 40% file size reduction on average, but at the expense of significantly longer CPU execution times. This results in 2-3 times more energy expended using bzip2 versus gzip. The expected return for higher data compression is reduced transmission time during file synchronization across the network and thus reduced energy for data transfer. This is measured as part of our distributed detection experiment.
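The compression tradeoff can be reproduced with the standard library compressors; the input here is synthetic, so the sizes are illustrative rather than the El Centro measurements:

```python
import bz2
import gzip

# Synthetic, repetitive sample-like data standing in for an hourly archive.
data = b"geonet sample record 0123456789\n" * 4096

gz = gzip.compress(data)      # fast, moderate ratio
bz = bz2.compress(data)       # slower, usually tighter on archival data

print(f"raw={len(data)} gzip={len(gz)} bzip2={len(bz)}")
```

On the platform, the extra CPU time of bzip2 is what turns its better ratio into a 2-3x energy penalty at compression time.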

5.1.3 Detection Efficiency

Figure 5.4: Energy consumed using different file systems and compression for data archiving

One unanticipated challenge at the El Centro site was the high volume of tractor-trailer traffic traveling Highway 78, a route frequently traveled by trucks transporting agricultural cargo. The net result was a significant false alarm rate due to vehicle ground vibration rather than geologic phenomena. In the initial deployment, our GeoNET units were configured to perform independent event detection and reporting without event collaboration. While sophisticated detection algorithms exist to discern vehicle noise from geologic phenomena [ZTX11] [BRV11], developing them is outside of our expertise. Instead we developed a distributed detection method that discriminates traffic noise based upon each unit’s spatial diversity and correlated local timing references. This rests upon the observation that P-wave geological events travel at speeds of 5 to 8 km/s and will exhibit a time of arrival difference under 100ms at our node spacing, rather than the multiple seconds observed at vehicular speeds. Using this observation, a distributed detection algorithm was deployed in April with the objective of decreasing the false alarm rate. (Once deployed, we were also able to measure the speed of passing vehicles using the Doppler shift and the node separation distance.) Our false alarm rate, shown in

Table 5.1, was greatly reduced; however, the distributed detection algorithm requires additional energy to manage event transfers from neighboring units. We next determine the effective energy efficiency of both the local and distributed detection methods based upon direct measurements of each approach.
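The arrival-time screen follows directly from the quoted speeds. A back-of-the-envelope check (the 25 m/s vehicle speed is an illustrative value, roughly highway speed):

```python
NODE_SPACING_M = 350.0   # approximate El Centro unit separation

def toa_difference_s(speed_m_per_s):
    """Worst-case inter-node time-of-arrival difference for a given speed."""
    return NODE_SPACING_M / speed_m_per_s

def is_seismic(delta_t_s, max_delta_t_s=0.1):
    """Accept only events arriving within 100 ms across the node pair."""
    return delta_t_s <= max_delta_t_s

p_wave_dt = toa_difference_s(5000.0)   # slow end of the 5-8 km/s range
vehicle_dt = toa_difference_s(25.0)    # illustrative highway vehicle speed
```

Even the slowest P-wave arrives at both units within 70 ms, while a vehicle takes about 14 s to cover the same spacing, so a 100 ms gate separates the two cleanly.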

Our efficiency metric is calculated as the net energy cost to perform acquisition and storage, detection, and reporting per valid reported event. We neglect outside factors such as cellular or SMS usage costs in this metric. The metric dictates we first measure the average energy costs for each of the services noted. The energy cost for adlogger measured in Figure 5.3 provides the first component. We next provide energy use data for several alternative detection algorithms. Each was implemented and deployed on GeoNET during our experiments. These algorithms correspond to generally accepted methods for geological events [MGP82] [WAY98] [HWL84], including a short-term to long-term variance ratio, finite impulse response filtering, and the Fourier transform. In order to further explore energy tradeoffs, we implemented each of the candidate algorithms both as software on the applications processor’s CPU and as an HDL component in the RT619 FPGA. The software implementations were incorporated into the eventpicker detection service while the HDL implementations operated from the FPGA’s event detection block. The performance of each algorithm is shown in Figure 5.5. As anticipated, the hardware based solutions outperform the software driven solutions by significant margins. However, the development time for each of the HDL based implementations was significantly higher than for its software equivalent. Additionally, the most sophisticated algorithms, such as the Fourier transform based operation, were not achievable within the available FPGA gates.

Figure 5.5: Energy consumed for several event detection methods

We now determine the energy cost for our final component, event reporting. Event reporting is the most challenging service to measure as it encompasses multiple software applications, data transformations, file operations, and data communications. The data flow through these services is complex, making accurate energy measurement challenging. This tested our LEAP profiling software’s ability to track several simultaneously running applications and to differentiate energy expenditure across multiple platform hardware resources. Using etop as our measurement tool for application software, and tracepoints for energy dissipation due to file I/O operations and communications, we established the total energy cost to report a single event. The event energy total is the sum of the individual contributing applications and their resulting kernel operations. A summary of these operations is shown in Figure 5.6.

Figure 5.6: Energy contribution in joules from the component software applications used for event detection reporting

The energy efficiency provided by our local detection versus distributed detection methods is shown in Table 5.1. We provide the contribution of each service (data storage, event detection, and event reporting) to the efficiency calculation. The false alarm rate using distributed detection versus our most efficient local detection algorithm (STV/LTV) is given. While our local detection method provides lower energy event detection, its higher false alarm rate causes significant wasted energy through erroneous event reports. The distributed event detection method performs with 23% less energy at the El Centro site. Interestingly, this observation runs counter to the common heuristic that local processing methods yield more efficient applications, and it would only be known through direct measurement as provided by LEAP.

El Centro Reporting
Detection Type                   Local   Distributed
Detection Energy (J)             461.5   627.4
Event Reporting (J)               23.6    23.6
False Alarm Rate                   94%      7%
Effective Detection Energy (J)     855     653

Table 5.1: GeoNET Energy Efficiency during El Centro experiment assuming one valid event per day
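The Effective Detection Energy rows in Tables 5.1 and 5.2 are consistent with a simple model in which the false alarm rate inflates the number of reports charged against each valid event. The sketch below is our reconstruction of that arithmetic, not code from the deployment:

```python
def effective_energy_j(detection_j, report_j, false_alarm_rate):
    """Net energy per valid reported event: the fixed daily detection cost
    plus one reporting cost per report, where reports per valid event is
    1 / (1 - false_alarm_rate)."""
    return detection_j + report_j / (1.0 - false_alarm_rate)

# El Centro (Table 5.1) and Stunt Ranch (Table 5.2), local vs distributed.
el_centro = (effective_energy_j(461.5, 23.6, 0.94),
             effective_energy_j(627.4, 23.6, 0.07))
stunt_ranch = (effective_energy_j(461.5, 18.4, 0.53),
               effective_energy_j(627.4, 18.4, 0.01))
```

Rounding reproduces all four table entries (855/653 J and 501/646 J), which shows how a 94% false alarm rate turns a 23.6 J report into nearly 400 J of wasted reporting per valid event.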

5.2 Stunt Ranch

Our second experiment took place at the Stunt Ranch Reserve property from July 2011 through December 2011. The Stunt Ranch Reserve is located along a remote region of the Santa Monica mountains in Calabasas, CA. The property is rugged and undeveloped with no large public roadways. Figure 5.7 notes the GeoNET installation locations at this site. Figure 5.8 provides a photograph of our installation, including solar panel placement and antenna placement.

There are several key differentiators between the El Centro and Stunt Ranch sites. First, the public roadway at Stunt Ranch is set back from the installation site by several hundred meters and is not traveled by large vehicles, and there is no human development within several miles. Accordingly, we anticipated the false alarm rate to decrease as compared to the El Centro site. Second, the El Centro installation was in a flat, barren desert area with little vegetation or other solar obstructions during the day. The weather was consistent, with mostly direct sunlight throughout the day. In contrast, our installation site at Stunt Ranch was located on the north facing slope of the mountain chain, and exposure to direct sunlight was reduced significantly during winter months. The mountain chain typically experiences fog during the morning hours due to its proximity to the Pacific Ocean. We next describe how the GeoNET system was able to adapt to these site differentiators during the course of the experiment through our in-situ energy profiling.

Figure 5.7: Locations of the two units installed at Stunt Ranch

5.2.1 Detection Efficiency

We repeated an identical set of energy efficiency experiments as in El Centro in order to determine whether our previous selection was best suited to the Stunt Ranch site. The results of these tests are provided in Table 5.2 and demonstrate that the local event detection algorithm provides higher efficiency at Stunt Ranch. This was a result of two major factors. First, the lower noise level reduced the false alarm rate as expected (the small number of remaining false alarms were attributed to animal activity). The second factor is an improved cellular connection at

Figure 5.8: Photo of B003 unit installation at Stunt Ranch facility

Stunt Ranch Reporting
Detection Type                   Local   Distributed
Detection Energy (J)             461.5   627.4
Event Reporting (J)               18.4    18.4
False Alarm Rate                   53%      1%
Effective Detection Energy (J)     501     646

Table 5.2: GeoNET Energy Efficiency during Stunt Ranch experiment assuming one valid event per day

Stunt Ranch. Its higher bandwidth and reduced power consumption resulted in a lower event reporting energy cost. (We discovered later that our site is near a Verizon 4G cellular tower.) These factors were sufficient to change our choice of detection solution.

Adaptive Operation The LEAP capabilities of the GeoNET system enable it to measure long term trends and modify local operation to adapt to changing environmental conditions. The Stunt Ranch installation occurred in mid-summer months, when coastal fog is infrequent and the sun’s northern path provides maximum battery charging power. During the course of the experiment, the sun’s position was increasingly eclipsed by the mountain peaks as well as winter rain storms, causing greater than expected reductions in daily battery charge replenishment. As part of GeoNET’s sysmon health monitoring service, battery and charging state are continuously logged. These logs provide a history of daily and seasonal changes. We provide an abbreviated history containing one week from July and another from November in Figure 5.9. The figure provides solar panel voltage and charging current as well as battery voltage and current draw.

The daily charging cycles are clearly evident in the increased panel voltage and charging currents. The July and November records, though similar, contain important distinctions. The November solar charging current pulses are noticeably diminished in size versus July. A plot of the net battery charge, Figure 5.10, clarifies this difference. In July the battery’s net charge reaches an equilibrium, where nightly battery use is balanced by daily charging. The November record shows a disparity between nightly battery use and daily charging, indicating a critical energy imbalance. A corrective change to the platform was necessary to prevent battery depletion. Using LEAP profiling tools, we discovered that the CPU’s frequency governor, the software responsible for its DVFS settings, was not allowing the CPU to decrease its core frequency below a prescribed threshold. This was due to our use of the CPU’s SPI port to collect samples directly from the RT155 AD data bus. The CPU throttling restriction caused unnecessary idle power dissipation not previously attributed to any application. The FPGA sampling architecture was modified to allow AD data to buffer within the FPGA while the CPU governed itself down to the minimum clock frequency. Once the FPGA’s sample buffer filled, it woke the CPU from the low power state to acquire the buffered data. This change to the FPGA resulted in a net energy savings as measured in Figure 5.11.
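The effect of the buffering change can be illustrated with a simple duty-cycle energy model. All numbers below are hypothetical placeholders, not measured GeoNET values; the point is only that fewer, longer wakeups combined with a lower idle floor dominate the average power:

```python
def daily_cpu_energy_j(p_active_w, p_idle_w, wakeups_per_s, t_wake_s):
    """Average CPU energy over one day for a duty-cycled sampling scheme."""
    duty = min(1.0, wakeups_per_s * t_wake_s)
    p_avg = duty * p_active_w + (1.0 - duty) * p_idle_w
    return p_avg * 86400.0

# Hypothetical parameters: streaming wakes the CPU per sample at 200 Hz
# and pins the governor at a high idle floor; FPGA buffering wakes it
# once per second to drain the FIFO and lets DVFS reach its minimum.
streaming = daily_cpu_energy_j(0.5, 0.20, 200.0, 0.001)
buffered = daily_cpu_energy_j(0.5, 0.05, 1.0, 0.02)
```

Under these placeholder numbers the buffered scheme cuts daily CPU energy by roughly a factor of four, qualitatively matching the savings measured in Figure 5.11.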

In addition to the energy reduction attributed directly to the sampling process

and shown in the figure, an additional power savings in the CPU during idle time provided the platform level reduction needed to reach energy equilibrium during the winter months of operation. This in-field adaptive behavior was made possible through a LEAP enabled platform and accompanying software tools.

Figure 5.9: Solar charging and discharging over a one week time period

Figure 5.10: Battery state during July and November periods

Figure 5.11: Energy consumed using CPU and FPGA data collection methods

CHAPTER 6

LEAP Applications and Partnerships

While this thesis has directed the LEAP architecture into a very specific network sensing application space, this was done solely to provide a concise demonstration of its utility rather than due to a fundamental limitation of its applicability. The LEAP principles have found an audience in a great number of other industries and markets. These markets range widely in customer (from consumer to industrial and medical) and size (from small medical wearables to large enterprise servers). This chapter discusses some applicable markets and highlights several of the research and product development programs that have been inspired by or developed directly out of LEAP research.

6.1 LEAP Applications and Markets

Smart Portables The latest smart phones and tablets contain high performance multi-core applications processors operating at gigahertz speeds. They typically utilize a wide range of radio technologies and several highly optimized heterogeneous computation resources to operate cellular and local area wireless basebands, provide graphics rendering, and collect phone sensor data. Power management has become a critical factor in maintaining product battery life as devices become thinner and lighter without compromising customer expectations for talk-time and media applications. As was noted in Section 2.3.2.2,

chipset manufacturers have realized the need to monitor the dozens of different supplies needed by recent applications processors and to enable software developers to perform energy benchmarking during development in order to optimize battery performance. This may one day lead to integrating LEAP capabilities into commercial phone hardware in order to adapt operation to the dynamic requirements of consumer application software.

Cloud Data Centers Recent attention to global climate change factors, such as greenhouse gas emissions from power generation, has generated new interest in reducing worldwide energy demand. A significant component of the increasing demand is due to electronic and computing equipment. The creation of large data centers to handle the explosive growth of internet services has begun taxing local power grids and has created new challenges for building temperature control and HVAC systems. In response to this energy demand, new government and industry sponsored programs have been created to address problems such as power supply conversion efficiency by recommending computing hardware standards [Mar06]. These small changes are expected to reduce current computing energy demands by as much as 20 percent. However, these new initiatives do not address some fundamental energy efficiency issues; new approaches are needed to address efficiency at a basic computing level. The LEAP architecture is well suited to this problem and has broad application in the traditional server and enterprise equipment markets.

6.2 LEAP Inspired Products, Research, and Partnerships

MicroLEAP - Wireless Health In collaboration with Laurence Au’s research in wireless health, LEAP has led to improvements in wearable medical devices.

Dr. Au architected the MicroLEAP (uLEAP) wireless sensor, which provides a new architectural solution for Body Sensor Network (BSN) applications in wireless health by introducing real-time system energy accounting. His MicroLEAP design provides the required sensing resolution and wireless interface for compatibility with mobile devices during data acquisition and processing. In addition, MicroLEAP provides on-board real-time system energy accounting and management via software, in methods compatible with the LEAP architecture. Consequently, MicroLEAP can keep track of its own energy consumption; signal acquisition and data transmission can be done on demand. This hardware-software system approach encourages the scheduled operation of the sensors and wireless interface, and it is a key consideration in MicroLEAP’s design. According to Dr. Au, the distributed nature of BSNs also allows energy-efficient wearable sensors to collectively infer activity context through sensor fusion algorithms [WBB08].

Figure 6.1: Photo of MicroLEAP platform

Runtime Direct Energy Measurement System (RTDEMS) In collaboration with Sebastian Ryffel from the Swiss Federal Institute of Technology, we assisted in the development of the RTDEMS enhancement to LEAP. Mr. Ryffel presents an energy attribution and accounting architecture for multi-core server systems, based upon LEAP, which can provide accurate, per-process energy information for individual hardware components. He introduces a hardware-assisted direct energy measurement system that integrates seamlessly with the host platform and provides detailed energy information for multiple hardware elements at millisecond scale time resolution. This is built upon a performance counter based behavioral model that provides indirect information on the proportional energy consumption of concurrently executing processes in the system. The direct and indirect measurement information is fused into a low-overhead kernel-based energy apportioning and accounting software system that provides visibility of per-process CPU and RAM energy consumption on multi-core systems.

Figure 6.2: Photo of the LEAP server project and RTDEMS block diagram

National Instruments LabVIEW LEAP has also been part of programs involving industry affiliations. National Instruments is a leading supplier of graphical development tools and display software. Their LabVIEW software is used broadly in ENS applications and elsewhere. As part of a research program in collaboration with National Instruments, LabVIEW Microprocessor was ported to the LEAP2 platform. This effort was led by Timothy Chow, a UCLA graduate student. Mr. Chow constructed the first ever LabVIEW Microprocessor port for ARM Linux hosts, including methods for remote software deployment. In addition, the LabVIEW Microprocessor port to LEAP2 included enhancements to the standard LabVIEW software to enable energy monitoring and power scheduling controls. These new energy-aware virtual instrument (VI) capabilities enable non-traditional embedded systems developers to obtain real-time performance information for individual VI components. This greatly reduces the complexity and technical barrier to LEAP enabled system development. Figure 6.3 highlights a couple of the LEAP VI components developed during this research program.

Figure 6.3: Photo of LEAP enabled LabVIEW components

Reftek RT155 As discussed in chapter 4, the GeoNET system is based upon a commercial product offering, the RT155, from Refraction Technologies Inc. The UCLA Geology department has recently purchased additional RT155 units including a revised RT619 design. The updated RT155 platform is a highly capable data acquisition system, now providing 12 channels of 24-bit data with sample rates up to 4 KHz. The new RT619 includes numerous improvements, including a new DM3730 with combined CPU and DSP, a higher precision time synchronization circuit, and higher fidelity energy accounting capability. These enhancements are achieved while maintaining an average power dissipation of less than 200mW. The platform is able to operate continuously in low power vigilant states while performing event detection. If events require, the platform can quickly transition to a fully vigilant mode where networking and additional platform resources can be enabled through the custom EMAP logic. The platform is able to continuously record data through these power mode transitions. The LEAP-enabled RT619 CPU module is a crucial part of the RT155 design, providing in-situ energy usage information allowing remote optimization of software operation and power management scheduling.

Figure 6.4: DARPA ADAPT module attached to expansion carrier board.

DARPA ADAPT The DARPA ADAPT program (BAA-12-11) focuses on rapid development cycles and high integration through the use of commercial technology. The ADAPT module hardware integrates the state-of-the-art Qualcomm S4 (MSM8960) processor coupled to a low power Actel FPGA, an architecture similar to the LEAP2 platform. A photo of the ADAPT module from Phase I is shown in Figure 6.4. Among the ADAPT platform development objectives, long battery lifetimes are required, much longer than traditional cell phone operation, necessitating deep power cycling and fine grained power management of the many processing, communications, and storage resources. As a program awardee,

we are integrating many LEAP capabilities, including EMAP operations in the FPGA and the energy-aware design concepts needed to meet this program goal. Future iterations of the ADAPT module may be enhanced with new LEAP capabilities emphasizing heterogeneous multi-core processing.

Figure 6.5: Photo of the Agile Communications NetRAT handheld computing device

Agile Communications NetRAT In addition to research programs, LEAP technology has been integrated into commercial and government-related products. One example is the NetRAT handheld portable data analysis tool from Agile Communications, shown in Figure 6.5. The NetRAT-HH is intended for networked system test and evaluation applications and also provides users with a rugged, general purpose handheld device that can be easily extended to support a wide range of end user applications. Embedded in the core of the NetRAT hardware is an FPGA design based upon LEAP2, including its EMAP functionality. The NetRAT includes numerous power sensing features across many of the

communications subsystems. Developers are able to use the LEAP energy profiling software applications previously described to monitor and manage platform energy and to assess the energy impacts of various IP routing protocols, QoS policies, and network bandwidth settings.
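As a sketch of this kind of assessment (the traces and parameters below are hypothetical, not NetRAT data), a current-sense trace over a communications channel can be integrated into energy and compared across two network configurations:

```python
def energy_joules(current_samples_amps, volts, sample_rate_hz):
    """Integrate a current-sense trace into energy: E = V * sum(I) * dt."""
    dt = 1.0 / sample_rate_hz
    return volts * sum(current_samples_amps) * dt

# Hypothetical radio-channel traces (amps) for two routing policies,
# sampled at 10 kHz -- illustrative data, not measurements.
chatty  = [0.080] * 10000                    # 1 s of frequent control traffic
batched = [0.080] * 2000 + [0.005] * 8000    # duty-cycled alternative

v = 3.3
e1 = energy_joules(chatty, v, 10_000)
e2 = energy_joules(batched, v, 10_000)
print(f"chatty: {e1:.3f} J   batched: {e2:.3f} J   saving: {100 * (1 - e2 / e1):.0f}%")
```

With these assumed traces the duty-cycled policy spends roughly a quarter of the energy, the kind of difference the profiling tools make visible to a protocol developer.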

CHAPTER 7

Conclusion

This thesis has described a new ENS platform design approach, also applicable to a wide spectrum of other applications, in which the system is able to adaptively select the most energy efficient hardware components matching an application’s needs. The LEAP2 and GeoNET platforms support highly dynamic requirements in sensing fidelity, computational load, storage media, and network bandwidth. Their design approaches focus on episodic operation of each component and consider the energy dissipation of each platform task. In addition to the platforms’ unique hardware capabilities, their software architectures have been tailored to provide an easy to use power management interface and a robust, fault tolerant operating environment, and to enable remote upgrade of all software components.

The hardware has been chosen to exploit the energy efficiency advantages of high performance resources while also providing low power capabilities for use in dynamic environments. As we have shown through direct experimental results, the ability to measure in-situ energy dissipation at high resolution provides new insight into platform performance and enables new efficiency metrics. When coupled with a remotely managed software update capability, we enable a new energy-aware development cycle. We are then able to actively tune and optimize software execution and power management cycles to best match the environment.
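A minimal example of such an efficiency metric, using hypothetical numbers, is energy per completed task. It captures why a fast, high-power resource can outperform a slow, low-power one for the same workload:

```python
def joules_per_task(power_watts, runtime_s, tasks):
    """Efficiency metric: energy dissipated per completed task."""
    return power_watts * runtime_s / tasks

# Hypothetical comparison: a high-performance processor finishing a
# workload quickly versus a low-power one running far longer.
fast = joules_per_task(power_watts=1.8, runtime_s=2.0, tasks=100)
slow = joules_per_task(power_watts=0.4, runtime_s=12.0, tasks=100)
print(f"fast: {fast * 1000:.0f} mJ/task   slow: {slow * 1000:.0f} mJ/task")
```

Under these assumed figures the high-power resource costs 36 mJ per task against 48 mJ for the low-power one, provided it can be fully powered down between bursts, which is the race-to-idle rationale behind the LEAP component-selection approach.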

While the LEAP architecture described in this thesis emphasizes the use of

multiple heterogeneous hardware components in order to exploit high energy efficiency across a wide dynamic range of capability, future LEAP-enabled systems may include multi-modal hardware assets utilizing heterogeneous operational characteristics rather than multiple discrete hardware resources. Current computer architecture and networking projects [JSR04, BCE07] may one day allow run-time changes to low level hardware operation (e.g., CPU microcode) in order to adaptively configure a resource’s capability and power profile to match current system requirements. Coupling this with on-chip energy measurement features, such as the EMAP ASIC, will enable a highly integrated energy-aware solution.

APPENDIX A

Hardware Design Files

The complete set of design files for the RefTek RT619 design is located in the LEAP Subversion and Git repositories.

Listing A.1: LEAP File Repositories
git://ascent.ee.ucla.edu/git/leap-linux.git
svn://ascent.ee.ucla.edu/svn/leap2/

A.1 RefTek RT619

For reference, the schematics for this module follow:

A.1.0.1 Schematics

164 5 4 3 2 1 LEAP - RT619

D Schematic Pages D * Author: Dustin McIntire * "Copyright (c) 2006 The Regents of the University of California. * All rights reserved. 1. Title Sheet * 2. USB UART * Permission to use, copy, modify, and distribute this software and its * documentation for any purpose, without fee, and without written agreement is 3. Current Sensing * hereby granted, provided that the above copyright notice, the following 4. USB Host and OTG * two paragraphs and the author appear in all copies of this software. * 5. Ethernet * IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR 6. SD Card * DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT 7. Radio IF * OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF * CALIFORNIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 8. GPS * 9. Reftek Subsystem * THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES, * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY 10. FPGA * AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS 11. Stacking Connector * ON AN "AS IS" BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATION TO 12. Power Supplies * PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS."

C C

Power Supply Hierarchy

LAYOUT NOTES: 1. No vias inside SMT caps and resistors. 2. No vias inside SMT pads.

B B

MTG142/217 MTG HOLE

JTAG Chain MTG142/217 Reset Hierarchy MTG HOLE

MTG142/217 MTG HOLE

N/A MTG142/217 MTG HOLE A A

CHASSIS_GND

Title Title EMAP2 - Title Page Size Document Number Rev Size Document Number Rev C C Date: Sheet Date: Wednesday, November 14, 2012 Sheet 1of 16 5 4 3 2 1 5 4 3 2 1

USB Console UART

USB_VIN USB_3.3_VREG U1 USB_VIN HPM_VIO C1 C2 D1 D 0.1uF D2 D 0.1uF FT232RQ AYPG1211F 1 30 BRPY1211F CONSOLE_TXD 10 19 VCCIO TXD 2 VCC RXD CONSOLE_RXD 10 USBDM 15 32 USBDP 14 USBDM nRTS 8 USBDP nCTS 23 31 5 NC5 nDTR 6 NC4 nDSR

12 7 5 6 7 8 13 NC3 nDCD 3 RN1 25 NC2 nRI 330 Ohm 29 NC1 22 CBUS0 R78 NC0 CBUS0 21 CBUS1 4 3 2 1 PXA_LED0 11 10K 18 CBUS1 10 PXA_LED1 11 10 CONSOLE_RXD_EN 27 nRESET CBUS2 11 28 OSCI CBUS3 9 C USB_3.3_VREG OSCO CBUS4 C 16 4 3V3OUT GND 17 C3 26 GND 20 0.1uF TEST GND 24 AGND BODY FT232R 33

USB_VIN FB1

BLM18AG121SN1D C4 B 0.01uF B

SHIELD 1 USBDM

11 2 J1 3 USBDP 54819-0572 4 5 C5 C6 47pF 47pF

SHIELD

A A

Title EMAP2 - USB UART

Size Document Number Rev A C

Date: Friday, April 02, 2010 Sheet 2of 16 5 4 3 2 1 5 4 3 2 1

CURRENT SENSING

VCC_3.3_VREG SD Card C8 C7 R1 0.1uF 0.1uF 100 Ohm U2 U3 1% 1 6 5 1 SD_CURRENT D VIN VOUT V+ OUT D C9 3 5 R2 3 R3 10 SD_POWER_ON 0.1uF ENA CSL 1 Ohm 4 Vin+ 2 100K 4 2 Vin- GND 1% ILIM GND INA138NA R4 MIC2007YM6TR 470 Ohm CF_3.3_VREG

C10 22uF

VCC_3.3_VREG Ethernet

C12 C11 R5 0.1uF 0.1uF 100 Ohm U4 U5 1% 1 6 5 1 ETH_CURRENT VIN VOUT V+ OUT C13 3 5 R6 3 R7 10 ETH_POWER_ON 0.1uF ENA CSL 1 Ohm 4 Vin+ 2 100K 4 2 Vin- GND 1% ILIM GND INA138NA R8 MIC2007YM6TR 470 Ohm ETH_3.3_VREG

C14 22uF

VCC_3.3_VREG GPS C C C22 C21 R16 0.1uF 0.1uF 100 Ohm U9 U10 1% Current Sense ADC 1 6 5 1 GPS_CURRENT VIN VOUT V+ OUT C23 3 5 R18 3 R19 VCC_RTC 10 GPS_POWER_ON 0.1uF ENA CSL 1 Ohm 4 Vin+ 2 100K 4 2 Vin- GND 1% ILIM GND INA138NA R12 R20 10 Ohm MIC2007YM6TR 470 Ohm GPS_3.3_VREG C18 0.1uF

C25 5 U8 22uF SYSTEM_VIN ETH_CURRENT 6 R14 SYSTEM_CURRENT 7 A0 VCC 100K RF_CURRENT 8 A1 1 A2 SDO EMAP_CSENSE_DOUT 10 RF_VIN Radios 1% GPS_CURRENT 9 USB_CURRENT 10 A3 R15 C20 SD_CURRENT 11 A4 4 C28 VCC_RTC C27 R21 100K 0.1uF HPM_CURRENT 12 A5 EOC 0.1uF 0.1uF 100 Ohm 1% 13 A6 U12 U13 1% A7 14 R17 1 6 5 1 RF_CURRENT nCSTART VIN VOUT V+ OUT 16 10K C29 nPWDN 17 3 5 R22 3 R23 FS 10 RF_POWER_ON ENA CSL Vin+ 0.1uF 20 1 Ohm 4 2 100K 10 EMAP_CSENSE_nCS nCS Vin- GND 2 19 4 2 1% 10 EMAP_CSENSE_DIN SDI REFP ILIM GND INA138NA 10 EMAP_CSENSE_SCLK 3 18 R24 SCLK REFM C24 MIC2007YM6TR 470 Ohm AGND 0.1uF RF_VCC TLV2548IPWR VCC_RTC U11 15 VREF 1 C30 VIN 2 22uF C26 3 VOUT B 0.1uF GND B REF2930

SYSTEM_VIN HPM Tie at a single point.

C32 C31 R25 0.1uF 0.1uF 100 Ohm U14 U15 1% 1 6 5 1 HPM_CURRENT Radio Voltage Selection VIN VOUT V+ OUT C34 VCC_3.3_VREG RF_VIN 3 5 R26 3 R27 0.1uF

10 HPM_POWER_ON 1 ENA CSL .470 OHM 4 Vin+ 2 100K Vin- GND 4 2 1% 3 2 ILIM GND INA138NA R31 JP4 MIC2007YM6TR VCC_1.8_VREG 470 Ohm HPM_VIN

C37 22uF

USB Host System Input From RefTEK Interface +5V SYSTEM_VIN

C33 C62 R28 R49 0.1uF 0.1uF 100 Ohm 100 Ohm U16 U20 1% 1% 5 1 USB_CURRENT 5 1 SYSTEM_CURRENT V+ OUT V+ OUT C35 C63 R29 3 R30 R47 3 R48 USB_VCC 0.1uF 0.1uF 1 Ohm 4 Vin+ 2 100K .470 OHM 4 Vin+ 2 100K Vin- GND Vin- GND A 1% 1% A INA138NA SYSTEM_VIN INA138NA C36 22uF

Title Title EMAP2 - Current Sensing Size Document Number Rev Size Document Number Rev C C Date: Sheet Date: Friday, April 02, 2010 Sheet 3of 16 5 4 3 2 1 5 4 3 2 1

USB Host Port 1

J2 D 9 7 D 1 5 8 6 USB_VCC

SHIELD 1 2 3 4 5 SHIELD C38 U17 USB1_VBUS 0.1uF 1 6 USB1_VBUS USB1_VBUS U52 VIN VOUT C40 C39 3 5 0.1uF STF201 11 PXA_USB_P1_HPEN ENA CSL 22uF 1 6 VBUS GND R32 4 2 10K ILIM GND 2 5 R33 PXA_USB_P1_DP 11 MIC2007YM6TR DIN+ DOUT+ 470 Ohm 3 4 DIN- DOUT- PXA_USB_P1_DN 11 Note: match trace lengths for USB differential pair STF201 Current Limit: 500mA

C USB Host Port 2 or USB OTG Using External Transciever C

USB Host (default) USB OTG J3 Install U44, R52 DNI U44, R52 9 7 1 5 DNI U18, R35, R36 Install U18, R35, R36 8 6 USB2_VBUS U53

1 2 3 4 5 STF201 SHIELD SHIELD 1 6 VBUS GND 2 5 USB2_VBUS DIN+ DOUT+ PXA_USB_P2_DP 11 HPM_VIO DNI U18 C43 3 4 PXA_USB_P2_DN 11 ISP1301 21 47nF DIN- DOUT- C1 USB OTG XVCR 22 C2 20 STF201 Note: match trace lengths for USB differential pair VBATT 7 C44 DNI C45 USB2_HPEN 1 VOUT3.3 0.1uF USB2_HPEN 1uF PXA_USB_P2_8 11 2 ADR 19 R34 9,11,12 PXA_I2C_SDA SDA VBUS R52 3 15 R35R35R36R36 Ohm Ohm 33 33 Ohm Ohm 33 33 10K 9,11,12 PXA_I2C_SCL SCL D- 16 10 Ohm D+ 18 4 ID B nRESET 12 B C61 C60 C46 RCV 11 USB_VCC 22pF 22pF 1uF 5 VP 10 11 PXA_USB_P2_1 9 nINT VM 11 PXA_USB_P2_2 13 OE_TP_INT_N 17 C47 USB2_VBUS 11 PXA_USB_P2_4 U19 14 SE0_VM AGND 23 0.1uF 11 PXA_USB_P2_5 1 6 6 DAT_VP CGND 24 11 PXA_USB_P2_7 VIN VOUT C51 8 SPEED VDD_LGC 25 C50 11 PXA_USB_P2_8 SUSPEND DGND USB2_HPEN 3 5 0.1uF ENA CSL 22uF R37 4 2 HPM_VIO 10K ILIM GND R38 MIC2007YM6TR 470 Ohm TP1 TP1 TP2 TP2 TP3 TP3

C52 0.1uF

A A

Title EMAP2 - USB OTG

Size Document Number Rev B C

Date: Friday, April 02, 2010 Sheet 4of 16 5 4 3 2 1 5 4 3 2 1

10/100 Ethernet Controller

D D ETH_nSPEED_LED ETH_nACT_LED ETH_EECS ETH_EESK ETH_EEDIO

21 20 19 39 38 ETH_2.5_VREG U32 U33 10,11 PXA_D[31:0] PXA_D0 18 SD0 ETH_RX+ R65 1 RX 1:1 16 ETH_RXP PXA_D1 17 LED1 LED2

EEDIO 49.9 Ohm PXA_D2 16 SD1 EEDCS EEDCK J13 C113 1% 3 14 PXA_D3 14 SD2 3 ETH_RX+ 0.01uF R66 PXA_D4 13 SD3 RX+ ETH_TXP 1 ETH_RX- 49.9 Ohm 2 15 ETH_RXN PXA_D5 12 SD4 4 ETH_RX- ETH_TXN 2 STANDARD ETHERNET 1% PXA_D6 11 SD5 DM9000A RX- ETH_RXP 3 CABLE PAIRS PXA_D7 10 SD6 7 ETH_TX+ 4 PXA_D8 31 SD7 TX+ 5 1 & 2 ETH_TX+ R67 7 TX 1:1 10 ETH_TXP PXA_D9 29 SD8 8 ETH_TX- ETH_RXN 6 3 & 6 49.9 Ohm PXA_D10 28 SD9 TX- 7 4 & 5 C114 1% 6 11 PXA_D11 27 SD10 8 7 & 8 0.01uF R68 PXA_D12 26 SD11 9 ETH_TX- 49.9 Ohm 8 9 ETH_TXN PXA_D13 25 SD12 41 10 1% PXA_D14 24 SD13 TEST 46 H0019 PXA_D15 22 SD14 SD R69 R70 SD15 34 75 Ohm 75 Ohm RJ45 W/SHIELD ETH_IRQ 10 37 INT RJ45_95622 C 11 PXA_ETH_nCS R39 C 35 nCS 1 R71 6.8K C115 MOLEX 10,11 PXA_nOE 10K 36 nIOR BGRES 48 0.1uF SHIELD 95622-4882 11 PXA_nWE nIOW BGGND ETH_2.5_VREG 32 9 C116 DNI 10,11 PXA_A2 CMD VDD25 2 1000pF C117 C118 ETH_nRESET 40 VDD25 0.1uF 22uF nRESET 42 SHIELD ETH_3.3_VREG C123 VDD 30 Tie Analog and Digital Gounds 22pF 44 VDD 23 X1 VDD together at a single point 43 C119 C120 C121 C122 2 1 X2 GND GND GND GND GND GND 0.1uF 0.1uF 0.1uF 0.1uF MAC ID EEPROM DM9000A X4 5 6 25.000Mhz 15 33 45 47

3 4 ETH_3.3_VREG C124 22pF

U34 8 ETH_EEDIO 3 6 DI NC

U35 VCC B ETH_EESK 2 7 B TPS3805 ETH_3.3_VREG CLK NC 3 ETH_nRESET ETH_EECS 1 4 ETH_EEDIO 5 RESET CS GND DO SENSE 1 R72 NC 93LC46B 4 2 5 10K VDD GND TPS3805

A A

Title EMAP2 - Ethernet

Size Document Number Rev B C

Date: Friday, April 02, 2010 Sheet 5of 16 5 4 3 2 1 5 4 3 2 1

SD Card Interface

D D

SD_3.3_VREG

SD_3.3_VREG C64 C65 0.1uF 22uF

R40 R41 R42 R43 R44 R45 R46 J15 10K 10K 10K 10K 10K 10K 10K 1 C 11 PXA_SD_D3 C 2 11 PXA_SD_CMD 3 4 5 11 PXA_SD_CLK 6 7 11 PXA_SD_D0 8 11 PXA_SD_D1 9 11 PXA_SD_D2 10 11 PXA_SD_PROTECT 11 11 PXA_SD_DETECT 12 13 14

CONN SD CARD 500998-0909 B MOLEX B 500998-0909

A A

Title EMAP2 - SD Card Interface Schematic

Size Document Number Rev A ASCENT-LEAP2-EMAP2 C

Date: Friday, April 02, 2010 Sheet 6of 16 5 4 3 2 1 5 4 3 2 1

Radio Interface

D D

RF_VCC J14 1 2 10 RADIO_TXD RF_GPIO[7:0] 10 3 4 RF_GPIO0 C 10 RADIO_RXD C 5 6 RF_GPIO1 10 RADIO_nIRQ 7 8 RF_GPIO2 10 RADIO_nRESET 9 10 RF_GPIO3 10 RADIO_nCS 11 12 RF_GPIO4 10 RADIO_SDI 13 14 RF_GPIO5 10 RADIO_SDO 15 16 RF_GPIO6 10 RADIO_SCLK 17 18 RF_GPIO7 19 20 DF17A(3.0)-20DS-0.5V(51)

MTG HOLE MTG142/217

B B

A A

Title EMAP2 - Radio Interface

Size Document Number Rev A C

Date: Friday, April 02, 2010 Sheet 7of 16 5 4 3 2 1 5 4 3 2 1

D D

GPS Interface

GPS_3.3_VREG VCC_RTC C87 C88 R51 0.1uF 2.2uF 10 Ohm TP7 VCC_RF

GPS_3.3_VREG

U26 6 19 18 11 24 5 C89 0.1uF VCC V_BAT VDDIO 16 V_ANT VCC_RF VDDUSB 1 J7 RF_IN 8 142-9701-811 3 VDD18_OUT C90 10 GPS_TXD 4 TxD1 0.1uF 10 GPS_RXD 2 3 RxD1 C GND GND C SIGNAL 25 28 GPS_TIMEPULSE 10 26 USB_DM TIMEPULSE USB_DP RF_GND RF_GND 20 27 GPS_WAKEUP 10 nAADET EXT_INT0 21 10 EXT_INT1 nRESET 1 12 NC 2 BOOT_INT NC 9 NC 22 Digital Noise Rejection NC 23 NC GND GND GND GND GND LEA-5T 7 FB4 13 14 15 17 BLM18AG121SN1D RF_GND

B B

A A

Title EMAP2 - GPS

Size Document Number Rev Custom ASCENT-LEAP2-EMAP2 C

Date: Friday, April 02, 2010 Sheet 8of 16 5 4 3 2 1 5 4 3 2 1

RefTEK Sub System

D D

REFTEK

10 EXT_SPI_SCLK EXT_SPI_SCLK 13 13 13 10 EXT_SPI_SDO EXT_SPI_SDO EXT_SPI_SDI EXT_SPI_SDI 10

13 10 RT_CnC_CS_N RT_CnC_CS_N 13 13 10 RT_SEL_LATCH RT_SEL_LATCH PXA_SEL_ACK PXA_RT_SEL_ACK 11 13 10 RT_LED_LATCH RT_LED_LATCH

13 C 10 RT_AD_CHAIN_OUT 13 13 PXA_SSP1_SCLK 11 C RT_AD_CHAIN_OUT 13 PXA_SSP1_SCLK 10 RT_AUX_CHAIN_OUT RT_AUX_CHAIN_OUT 13 PXA_SSP1_SFRM PXA_SSP1_SFRM 10,11 PXA_SSP1_RXD PXA_SSP1_RXD 11

13 PXA_RT_IRQ PXA_RT_IRQ 11

13 13 10 RT_TIME RT_TIME 13 RT_CLKTIM RT_CLKTIM 10 10 RT_CLK1HZ RT_CLK1HZ

REFTEK

B B

A A

Title EMAP2 - RefTek Sub System Interconnect

Size Document Number Rev A C

Date: Friday, April 02, 2010 Sheet 9of 16 5 4 3 2 1 5 4 3 2 1

U36A I/O BANK 0-1 U36B I/O BANK 2-3 A2 B15 PXA_D11 11 PXA_CLK_OUT A3 GAA0/IO00RSB0 GBA2/IO60PDB1 D14 T14 B2 11 PXA_RDY PXA_DQM0 11 12 SYSTEM_3.3V_ENA PXA_FF_TXD 11 A4 GAA1/IO01RSB0 GBB2/IO61PDB1 E13 PXA_D19 R13 GDA2/IO89RSB2 GAA2/IO174PPB3 B1 11 PXA_BT_RXD B4 GAB0/IO02RSB0 GBC2/IO62PPB1 H14 12 SYSTEM_1.8V_ENA T12 GDB2/IO90RSB2 GAB2/IO173PDB3 D3 PXA_BT_TXD 11 11 PXA_FF_RXD C5 GAB1/IO03RSB0 GCA0/IO71NDB1 J13 RADIO_RXD 7 11 PXA_CLK_IN N6 GDC2/IO91RSB2 GAC2/IO172PDB3 J1 PXA_SSP3_SFRM 11 11 PXA_nPWE RADIO_SDO 7 12 EMAP_nRESET EXT_IRQ1 11 -> C6 GAC0/IO04RSB0 GCA1/IO71PDB1 J16 T3 GEA2/IO143RSB2 GFA2/IO161PPB3 J5 FPGA_TP23 11 PXA_EMAP_nCS GAC1/IO05RSB0 GCA2/IO72PDB1 H16 RADIO_nIRQ 7 16.384MHz -> 9 RT_CLKTIM R4 FF/GEB2/IO142RSB2 GFB2/IO160GDB3 K1 TP23 GCB0IO70NPB1 RADIO_nCS 7 9 RT_CLK1HZ GEC2/IO141RSB2 GFC2/IO159PSB3 EXT_nIRQ0 11 D D4 H13 -> D 11 PXA_SSP3_RXD C4 IO06RSB0 GCB1/IO70PPB1 J12 RADIO_SDI 7 N2 11 PXA_SSP2_RXD TP8 EXT_POWER_ON3 11 PXA_A24 D6 IO07RSB0 GCB2/IO73PDB1 H12 K4 IO147PPB3 M3 RADIO_TXD 7 11 EXT_WAKEUP_IN0 EXT_POWER_ON2 11 A5 IO10RSB0 GCC0IO69NDB1 G13 PXA_D17 M8 IO129RSB2 IO147NPB3 M1 11 PXA_ST_TXD B5 IO11RSB0 GCC1/IO69PDB1 J14 3 RF_POWER_ON M9 IO117RSB2 IO150PDB3 N1 EXT_SPI_nCS 11 11 PXA_ST_RXD IO13RSB0 GCC2/IO74PDB1 RADIO_SCLK 7 -> 9 RT_TIME IO110RSB2 IO150NDB3 EXT_POWER_ON5 11 PXA_A25 B6 11 EXT_WAKEUP_IN2 N4 L3 EXT_WAKEUP2 11 A6 IO14RSB0 B16 PXA_D5 M13 IO140RSB2 IO151PPB3 M2 11 PXA_SSP3_TXD TP16 EXT_POWER_ON0 11 PXA_A12 A7 IO16RSB0 IO60NDB1 C15 PXA_D10 N7 IO94RSB2 IO151NPB3 K3 8 GPS_RXD EXT_SPI_SCLK 11 PXA_A15 D7 IO18RSB0 IO61NPB1 F13 PXA_D18 N8 IO126RSB2 IO156PPB3 L2 9,11 PXA_SSP1_SFRM EXT_POWER_ON4 11 PXA_A14 C7 IO19RSB0 IO62NDB1 C16 PXA_D4 -> N9 IO120RSB2 IO156NPB3 L4 3 SD_POWER_ON EXT_WAKEUP1 11 PXA_A13 B7 IO20RSB0 IO63PDB1 D16 PXA_D3 -> N10 IO108RSB2 IO158PSB3 L1 9 RT_AD_CHAIN_OUT EXT_SPI_SDI 11 PXA_A9 C8 IO21RSB0 IO63NDB1 E15 PXA_D8 -> N11 IO103RSB2 IO159NDB3 J4 3 EMAP_CSENSE_nCS EXT_WAKEUP0 11 PXA_A7 E8 IO24RSB0 IO64PPB1 F14 PXA_D14 N13 IO99RSB2 
IO160NDB3 K2 TP18 EXT_SPI_SDO 11 PXA_A8 D8 IO25RSB0 IO64NPB1 F15 PXA_D7 P4 IO92RSB2 IO161NPB3 G2 11 EXT_WAKEUP_IN1 EXT_CSENSE_nCS1 11 PXA_A10 B8 IO26RSB0 IO65PPB1 G14 PXA_D13 P5 IO138RSB2 IO165PDB3 G1 11 RESERVED EXT_CSENSE_nCS0 11 PXA_A11 A8 IO27RSB0 IO65NPB1 E16 PXA_D2 P6 IO136RSB2 IO165NDB3 E1 8 GPS_TXD HPM_CSENSE_SCLK 11 PXA_A3 D9 IO28RSB0 IO66PDB1 F16 PXA_D1 P7 IO131RSB2 IO166PDB3 F1 8 GPS_WAKEUP PXA_IRQ0 11 PXA_A2 E9 IO30RSB0 IO66NDB1 E14 PXA_D15 P8 IO124RSB2 IO166NDB3 F3 3 HPM_POWER_ON EXT_CSENSE_nCS2 11 PXA_A5 B9 IO31RSB0 IO67PPB1 H15 RF_GPIO0 P9 IO119RSB2 IO167PPB3 E2 2 CONSOLE_RXD_EN HPM_CSENSE_nCS 11 PXA_A4 C9 IO32RSB0 IO67NPB1 K16 RF_GPIO2 P10 IO107RSB2 IO167NPB3 G3 9 RT_AUX_CHAIN_OUT EXT_CSENSE_DOUT 11 PXA_A6 A9 IO33RSB0 IO72NDB1 K13 RF_GPIO5 -> P11 IO104RSB2 IO168PPB3 F2 3 EMAP_CSENSE_SCLK PXA_IRQ1 11 A10 IO34RSB0 IO73NPB1 K15 RF_GPIO3 R3 IO97RSB2 IO168NDP3 F4 5,11 PXA_nOE 9 RT_CnC_CS_N EXT_CSENSE_nCS3 11 B10 IO37RSB0 IO74NPB1 G15 PXA_D6 -> R5 IO139RSB2 IO169PDB3 E4 11 PXA_DQM3 TP19 PXA_SSP3_SCLK 11 C10 IO38RSB0 IO75PDB1 G16 PXA_D0 R6 IO132RSB2 IO169NDB3 D2 11 PXA_DQM2 8 GPS_TIMEPULSE PXA_SSP2_SFRM 11 PXA_D31 D10 IO39RSB0 IO75NDB1 J15 RF_GPIO1 R7 IO127RSB2 IO171PDB3 D1 11 PXA_WAKEUP HPM_CSENSE_DIN 11 PXA_D30 A11 IO40RSB0 IO80PPB1 K14 RF_GPIO4 R8 IO121RSB2 IO171NDB3 E3 3 GPS_POWER_ON PXA_SSP2_SCLK 11 PXA_D29 B11 IO41RSB0 IO80NPB1 L16 R9 IO114RSB2 IO172NDB3 C1 RADIO_nRESET 7 2 CONSOLE_RXD HPM_CSENSE_DOUT 11 PXA_D26 A12 IO42RSB0 IO84PDB1 M16 R10 IO109RSB2 IO173NDB3 C2 TP15 9 RT_SEL_LATCH PXA_SSP2_TXD 11 PXA_D28 C11 IO43RSB0 IO84NDB1 L15 RF_GPIO6 -> R11 IO105RSB2 IO174NDB3 3 EMAP_CSENSE_DIN PXA_D27 D11 IO44RSB0 IO85PDB1 L14 RF_GPIO7 R12 IO98RSB2 H2 5 ETH_IRQ EXT_CSENSE_SCLK 11 PXA_D20 D13 IO45RSB0 IO85NDB1 T2 IO96RSB2 GFA0/IO162NPB3 J2 9 RT_LED_LATCH EXT_nIRQ1 11 PXA_D21 C13 IO50RSB0 P16 -> FPGA_TP20 T4 IO137RSB2 GFA1/IO162PPB3 H1 TP9 TP20 EXT_IRQ0 11 B14 IO51RSB0 GDA0/IO88NDB1 N16 FPGA_TP21 T5 IO134RSB2 GFB0/IO163NPB3 H3 11 PXA_DQM1 TP10 TP21 
EXT_CSENSE_DIN 11 PXA_D9 D15 IO52RSB0 GDA1/IO88PDB1 L13 FPGA_TP22 T6 IO125RSB2 GFB1/IO163PPB3 H5 IO53RSB0 GDB0/IO87NDB1 M14 TP12 TP22 T7 IO123RSB2 GFC0/IO164NDB3 G4 EXT_RXD 11 TP11 12 PS_nIRQ EXT_TXD 11 PXA_D16 A14 GDB1/IO87PDB1 N15 T8 IO118RSB2 GFC1/IO164PDB3 R2 TP14 3 ETH_POWER_ON EXT_nCS1 11 PXA_D12 A15 GBA0/IO58RSB0 GDC0IO86NPB1 M15 T9 IO115RSB2 GEA0/IO144NDB3 R1 TP13 2 CONSOLE_TXD EXT_nCS2 11 PXA_D22 B13 GBA1/IO59RSB0 GDC1/IO86PPB1 FPGA_TP24 T10 IO111RSB2 GEA1/IO144PDB3 P2 TP24 PXA_CLK_REQ 11 C PXA_D23 A13 GBB0/IO56RSB0 -> T11 IO106RSB2 GEB0/IO145NDB3 P1 <- C 3 EMAP_CSENSE_DOUT PXA_IRQ3 11 PXA_D24 C12 GBB1/IO57RSB0 T13 IO102RSB2 GEB1/IO145PDB3 M4 PXA_GPIO100 11 12 SYSTEM_1.5V_ENA EXT_POWER_ON1 11 PXA_D25 B12 GBC0/IO54RSB0 M1AGL600-FG256IO93RSB2 GEC0/10146NDB3 N3 GBC1/IO55RSB0 PXA_GPIO103 11 GEC1/IO146PDB3 EXT_nCS0 11 M1AGL600-FG256 PXA_GPIO86 11 PXA_GPIO97 11 PXA_GPIO96 11 R85 R86 1K 1K Future JTAG for ARM Cortex SoC

FPGA_TP20 FPGA_TP20 11 FPGA_TP21 FPGA_TP21 11 FPGA_TP22 FPGA_TP22 11 FPGA_TP23 FPGA_TP23 11 FPGA_TP24 FPGA_TP24 11

U36C

VCC_1.5_VREG POWER/JTAG F7 A1 F8 VCC GND A16 F9 VCC GND F6 F10 VCC GND F11 VCC GND G6 G7 VCC_RTC G11 VCC GND G8 H6 VCC GND G9 H11 VCC GND G10 R87 J6 VCC GND H7 1K J11 VCC GND H8 11 PXA_RDY K6 VCC GND H9 K11 VCC GND H10 L7 VCC GND J7 L8 VCC GND J8 5,11 PXA_D[31:0] L9 VCC GND J9 5,11 PXA_A[25:0] L10 VCC GND J10 7 RF_GPIO[7:0] B VCC GND K7 B C14 GND K8 RF_VIN E5 VMV0 GND K9 E12 VMV0 GND K10 P12 VMV1 GND L6 M12 VMV1 GND L11 P3 VMV2 GND T1 VCC_RTC C3 VMV2 GND T16 M5 VMV3 GND VMV3 B3 E6 GNDQ D5 E7 VCCIOB0 GNDQ D12 VCCIOB0 GNDQ RF_VIN E10 N5 E11 VCCIOB0 GNDQ N12 I/O Bypass Core Bypass F12 VCCIOB0 GNDQ R15 VCC_RTC VCC_1.5_VREG G12 VCCIOB1 GNDQ K12 VCCIOB1 P13 L12 VCCIOB1 TCK R16 PXA_FPGA_TCK 11 C128 C125 C126 C127 C108 C109 C110 C111 0.1uF 0.1uF 0.1uF 0.1uF 0.1uF 0.1uF 0.1uF 0.1uF M6 VCCIOB1 TDO T15 PXA_FPGA_TDO 11 M7 VCCIOB2 TMS P15 PXA_FPGA_TMS 11 M10 VCCIOB2 TRST R14 PXA_FPGA_TRST 11 M11 VCCIOB2 TDI PXA_FPGA_TDI 11 VCC_1.5_VREG F5 VCCIOB2 HPM_VIO R83 R84 G5 VCCIOB3 N14 1K 1K VCC_RTC VCCIOB3 VJTAG VCC_1.5_VREG K5 L5 VCCIOB3 C131 C54 C55 C56 C57 C58 0.1uF 0.01uF 0.01uF 0.01uF 0.01uF 0.01uF 1 VCCIOB3 P14 C137 C134 C135 C136 VPUMP 0.1uF 0.1uF 0.1uF 0.1uF JP6 3 2 J3 VCC_RTC VCCLPF 1-2 To enable PLL (default) C59 C133 U51 0.01uF 0.1uF H4 M1AGL600-FG256 5 VCC 1 2-3 To disable PLL power VCOMPLF 2 4 3 FPGA_PROG_EN 11 GND R93 RF_VIN C53 C132 1K NC7SZ125 0.01uF 0.1uF SOT23_5 C141 C138 C139 C140 FAIRCHILD 0.1uF 0.1uF 0.1uF 0.1uF

A A

Title Title EMAP2 - FPGA Size Document Number Rev Size Document Number Rev C C Date: Sheet Date: Friday, April 02, 2010 Sheet 10of 16 5 4 3 2 1 5 4 3 2 1

PCMCIA Configuration Name J11 Pin PXA GPIO Alt Func LEAP2 Stacking Connector (Upper) PXA_nPWAIT 124 56 1-IN HPM SODIMM Connector PXA_nPIOIS16 123 57 1-IN HPM_VIN HPM_VIN VCC_1.5_VREG PXA_PREG 116 55 2-OUT VCC_3.3_VREG PXA_nPIOW 111 51 2-OUT PXA_nPOE 113 48 2-OUT J11 1 2 SYSTEM_VIN 1 2 VCC_1.8_VREG 3 1 2 4 PXA_nPCE1 105 102 1-OUT 10 PXA_WAKEUP GPIO_0 SYSTEM_PWRBTN SYSTEM_PWRBTN 3 4 3 4 PXA_IRQ0 10 5 6 PXA_nPIOR 103 50 2-OUT -> 10 PXA_CLK_IN 5 CLK_PIO_GPIO9 CLK_TOUT_GPIO10 6 7 5 6 8 PXA_IRQ1 10 PXA_nPCE2 30 105 1-OUT 2 PXA_LED0 7 L_DD_14_GPIO72 L_DD_12_GPIO70 8 9 7 8 10 PXA_IRQ3 10 -> 6 PXA_SD_PROTECT 9 L_LCLK_A0_GPIO75 L_DD_11_GPIO69 10 11 9 10 12 PXA_PSKTSEL 32 104 1-OUT 10 HPM_CSENSE_DIN 11 CSENSE_DIN CSENSE_DOUT 12 HPM_CSENSE_DOUT 10 13 11 12 14 PXA_nIRQ 18 76 0-IN 10 HPM_CSENSE_nCS CSENSE_nCS CSENSE_SCLK HPM_CSENSE_SCLK 10 13 14 EXT_CSENSE_nCS0 10 13 14 SYSTEM_PWRBTN 15 16 EXT_CSENSE_nCS1 10 PXA_nCD 16 77 0-IN D 15 SYSTEM_VIN13 GND14 16 PXA_CF_nCD 17 15 16 18 D 12 SYSTEM_nRESET EXT_CSENSE_nCS2 10 PXA_MASTER_nEN 7 75 0-OUT -> 6 PXA_SD_DETECT 17 L_FCLK_RD_GPIO74 L_BIAS_GPIO77 18 PXA_CF_IRQ <- 19 17 18 20 -> 10 PXA_CLK_REQ L_DD_17_GPIO87 L_PCLK_WR_GPIO76 <- 12 SYSTEM_nLOWBAT 19 20 EXT_CSENSE_nCS3 10 PXA_RESET 15 74 0-OUT 19 20 10 EXT_POWER_ON0 21 22 EXT_CSENSE_DOUT 10 2 PXA_LED1 21 L_DD_15_GPIO73 L_VSYNC_GPIO14 22 PXA_SSP2_SFRM 10 23 21 22 24 10 PXA_FF_RXD nPCE1_GPIO85 CPLD_SRAM_nCS_IN PXA_SRAM_nCS 10 EXT_POWER_ON1 23 24 EXT_CSENSE_SCLK 10 23 24 10 EXT_POWER_ON2 25 26 EXT_CSENSE_DIN 10 USB Configuration 10 FPGA_PROG_EN 25 nPCE2_GPIO54 BB_IB_STB_GPIO84 26 PXA_SSP3_SCLK 10 27 25 26 28 10 EXT_POWER_ON3 EXT_SPI_nCS 10 Name J11 Pin PXA GPIO Alt Func 10 PXA_SSP3_RXD 27 BB_IB_DAT0_GPIO82 BB_IB_CLK_GPIO83 28 PXA_SSP3_SFRM 10 29 27 28 30 -> - PXA_GPIO108 KP_MKOUT_5_GPIO108 SYSTEM_VIN28 PXA_nPCE2 10 EXT_POWER_ON4 29 30 EXT_SPI_SDI 10 PXA_USB_P1_DP 84 Dedicated pin 29 30 10 EXT_POWER_ON5 31 32 EXT_SPI_SDO 10 10 PXA_GPIO86 31 L_DD_16_GPIO86 KP_MKOUT_2_GPIO105 32 PXA_PKTSEL <- 
33 31 32 34 PXA_USB_P1_DN 86 Dedicated pin 6 PXA_GPIO97 KP_MKIN_3_GPIO97 KP_MKOUT_1_GPIO104 <- 10 EXT_WAKEUP0 33 34 EXT_SPI_SCLK 10 PXA_USB_P1_HPEN 52 89 2-OUT 33 34 10 EXT_WAKEUP1 35 36 EXT_RXD 10 6 PXA_GPIO96 35 KP_MKOUT_6_GPIO96 KP_MKOUT_4_GPIO107 36 PXA_GPIO108 - <- 37 35 36 38 10 EXT_WAKEUP2 EXT_TXD 10 PXA_USB_P2_DP 72 Dedicated pin -> 9 PXA_RT_SEL_ACK 37 KP_DKIN_0_GPIO93 KP_MKIN_0_GPIO100 38 PXA_GPIO100 10 39 37 38 40 -> 9 PXA_RT_IRQ KP_MKIN_4_GPIO98 KP_MKOUT_0_GPIO103 PXA_GPIO103 10 10 EXT_WAKEUP_IN0 39 40 EXT_nCS0 10 PXA_USB_P2_DN 74 Dedicated pin 39 40 10 EXT_WAKEUP_IN1 41 42 EXT_nCS1 10 10 PXA_SSP2_SCLK 41 L_CS_GPIO19 GND40 42 43 41 42 44 PXA_USB_P2_1 89 35 2-IN SYSTEM_VIN41 GND42 10 EXT_WAKEUP_IN2 43 44 EXT_nCS2 10 PXA_USB_P2_2 50 34 1-OUT 43 44 10 EXT_nIRQ0 45 46 RESERVED PXA_GPIO44 45 BTCTS_GPIO44 BTTXD_GPIO43 46 PXA_BT_TXD 10 47 45 46 48 PXA_USB_P2_4 61 36 1-OUT 4 PXA_USB_P2_7 FFRTS_USB7_GPIO41 USBHPWR0_GPIO88 PXA_GPIO88 10 EXT_nIRQ1 47 48 PXA_CLK_IN 10 <- 47 48 49 50 PXA_USB_P2_5 73 40 3-IN -> 6 PXA_SD_D0 MMDAT_0_GPIO92 SDA_GPIO118 PXA_I2C_SDA 4,12 <- 10 EXT_IRQ0 49 50 FPGA_PROG_EN <- 49 50 10 EXT_IRQ1 51 52 PXA_GPIO108 - -> 6 PXA_SD_D3 51 MMDAT_3_GPIO111 FFRXD_USB2_GPIO34 52 PXA_USB_P2_2 4 53 51 52 54 <- PXA_USB_P2_7 45 41 2-IN -> 9 PXA_SSP1_SCLK SSPSCLK_GPIO23 USBHPEN0_GPIO89 PXA_USB_P1_HPEN 4 10 PXA_SSP2_SFRM 53 54 PXA_GPIO86 10 PXA_USB_P2_8 81 37 1-OUT 53 54 PXA_SRAM_nCS 55 56 PXA_GPIO97 10 -> - PXA_GPIO25 55 SSPTXD_GPIO25 SCL_GPIO117 56 PXA_I2C_SCL 4,12 <- 57 55 56 58 9 PXA_SSP1_RXD SSPRXD_GPIO26 FFTXD_USB6_GPIO39 PXA_FF_TXD 10 - PXA_GPIO108 57 58 PXA_GPIO96 10 -> 57 58 -> 59 60 LED Configuration -> 9,10 PXA_SSP1_SFRM SSPSFRM_GPIO24 MMCMD_GPIO112 PXA_SD_CMD 6 <- 10 PXA_GPIO100 59 60 PXA_RT_SEL_ACK 9 <- 59 60 10 PXA_GPIO103 61 62 PXA_RT_IRQ 9 10 PXA_FPGA_TCK 61 FFRI_USB3_GPIO38 BTRTS_GPIO45 62 PXA_GPIO45 63 61 62 64 <- Name J11 Pin PXA GPIO Alt Func 4 PXA_USB_P2_4 FFDCD_USB4_GPIO36 MMCLK_GPIO32 PXA_SD_CLK 6 <- PXA_GPIO88 63 64 PXA_SSP2_SCLK 10 
LED0 5 72 0-OUT 63 64 4,12 PXA_I2C_SDA 65 66 PXA_GPIO44 10 PXA_FPGA_TDO 65 GPIO22 MMDAT_2_GPIO110 66 PXA_SD_D2 6 <- 67 65 66 68 -> 4,12 PXA_I2C_SCL PXA_SD_D0 9 LED1 19 73 0-OUT 10 PXA_FPGA_TMS 67 SSPEXTCLK_GPIO27 PWM_OUT_0_GPIO16 68 PXA_GPIO16 69 67 68 70 -> 6 PXA_SD_CMD PXA_SD_D3 9 <- 10 PXA_FPGA_TRST 69 SDATA_IN_GPIO29 PWM_OUT_1_GPIO17 70 PXA_GPIO17 -> 71 69 70 72 <- 10 PXA_FPGA_TDI PXA_SD_D1 6 PXA_GPIO45 PXA_SSP1_SCLK 9 CIF Configuration 71 SDATA_OUT_GPIO30 MMDAT_1_GPIO109 72 <- 73 71 72 74 <- PXA_GPIO113 AC97_RESET_n_GPIO113 USBC_P PXA_USB_P2_DP 4 6 PXA_SD_CLK 73 74 PXA_GPIO25 - Name J11 Pin PXA GPIO Alt Func 73 74 -> 6 PXA_SD_D2 75 76 PXA_SSP1_RXD 9<- 4 PXA_USB_P2_5 75 FFDTR_USB5_GPIO40 USBC_N 76 PXA_USB_P2_DN 4 77 75 76 78 PXA_CIF_CLKIN 51 23 1-OUT -> PXA_GPIO16 PXA_SSP1_SFRM 9,10<- 10 PXA_ST_RXD 77 ICP_RXD_GPIO46 GND76 78 79 77 78 80 PXA_GPIO17 FPGA_TP20 10 <- PXA_CIF_D0 37 98 2-IN PXA_GPIO31 79 SYNC_GPIO31 SYSTEM_VIN78 80 81 79 80 82 <- PXA_BT_RXD 10 6 PXA_SD_D1 FPGA_TP21 10 PXA_CIF_D1 93 114 1-IN 10 PXA_SSP2_RXD 81 EXT_SYNC_0_GPIO11 BTRXD_GPIO42 82 -> 83 81 82 84 <- 4 PXA_USB_P2_8 FFDSR_USB8_GPIO37 nRESET_OUT 4 PXA_USB_P1_DP 83 84 FPGA_TP22 10 <- PXA_CIF_D2 92 116 1-IN 83 84 4 PXA_USB_P1_DN 85 86 FPGA_TP23 10 PXA_CLK_OUT 85 GPIO12 USBH_P0 86 PXA_USB_P1_DP 4 87 85 86 88 <- PXA_CIF_D3 88 115 2-IN - PXA_GPIO115 FPGA_TP24 10 PXA_GPIO28 87 BITCLK_GPIO28 USBH_N0 88 PXA_USB_P1_DN 4 89 87 88 90 <- -> - PXA_GPIO116 PXA_GPIO113 PXA_CIF_D4 102 95 2-IN 10 PXA_SSP2_TXD 89 CLK_EXT_GPIO13 U_EN_GPIO115 90 PXA_GPIO115 - <- 91 89 90 92 C -> PXA_GPIO99 PXA_GPIO31 PXA_CIF_D5 106 94 2-IN C 4 PXA_USB_P2_1 91 FFCTS_USB1_GPIO35 SYSTEM_VIN90 92 93 91 92 94 -> UIO UIO U_DET_GPIO116 PXA_GPIO116 - <- - PXA_GPIO95 93 94 PXA_SSP2_RXD 10 PXA_CIF_D6 35 93 2-IN 93 94 -> PXA_GPIO53 95 96 PXA_CLK_OUT -> - PXA_GPIO114 95 UVS0_GPIO114 ICP_TXD_GPIO47 96 PXA_ST_TXD 10 97 95 96 98 PXA_CIF_D7 27 108 1-IN - PXA_GPIO94 PXA_GPIO28 PXA_GPIO91 97 UCLK_GPIO91 GND96 98 -> 99 97 98 100 PXA_CIF_D8 34 107 
1-IN PXA_GPIO90 nURST_GPIO90 SYSTEM_VIN98 PXA_GPIO101 99 100 PXA_SSP2_TXD 10 99 100 PXA_GPIO52 101 102 PXA_GPIO114 - PXA_CIF_D9 109 106 1-IN PXA_nSDCS_IN 101 CPLD_nSDCS_IN KP_MKIN_5_GPIO99 102 PXA_GPIO99 103 101 102 104 VCC_RTC VCC_BAK PXA_nSDRAS PXA_GPIO91 <- 10 PXA_RDY 103 CPLD_RDY KP_DKIN_2_GPIO95 104 PXA_GPIO95 - <- 105 103 104 106 PXA_CIF_FRAME_VALID 57 24 1-IN PXA_nPIOR 10 PXA_A25 PXA_GPIO90 -> 105 nPIOR_GPIO50 BB_OB_STB_GPIO53 106 PXA_GPIO53 107 105 106 108 PXA_CIF_LINE_VALID 53 25 1-IN 10 PXA_A24 PXA_nSDCS_IN -> PXA_nPCE1 107 KP_MKIN_2_GPIO102 KP_DKIN_1_GPIO94 108 PXA_GPIO94 - <- 109 107 108 110 PXA_CIF_PXCLK 55 26 2-IN VCC_BAK KP_MKIN_1_GPIO101 PXA_GPIO101 PXA_A22 109 110 PXA_GPIO106 - 109 110 SYSTEM_nRESET 12 PXA_A20 111 112 PXA_RDY 10 <- R75 -> - PXA_nPIOWPXA_GPIO106 111 KP_MKOUT_3_GPIO106 SYSTEM_nRESET 112 113 111 112 114 0 Ohm -> PXA_nPOE nPIOW_GPIO51 BB_OB_CLK_GPIO52 PXA_GPIO52 PXA_A18 113 114 PXA_A23 113 114 PXA_A16 115 116 PXA_A21 DNI -> 115 nPOE_GPIO48 BUF_nSDRAS 116 PXA_nPREGPXA_nSDRAS 117 115 116 118 BT UART Configuration 10 PXA_A14 PXA_A19 10 PXA_SSP3_TXD 117 BB_OB_DAT0_GPIO81 nPREG_GPIO55 118 <- 119 117 118 120 Name J11 Pin PXA GPIO Alt Func PXA_A23 BUF_MA_23 BUF_MA_25 PXA_A25 10 10 PXA_A12 119 120 PXA_A17 119 120 10 PXA_A10 121 122 PXA_A15 10 PXA_BT_RXD 80 42 1-IN 121 GND119 BUF_MA_24 122 PXA_A24 10 -> 123 121 122 124 PXA_nIOIS16PXA_A21 BUF_MA_21 BUF_MA_22 PXA_nPWAITPXA_A22 10 PXA_A8 123 124 PXA_A13 10 PXA_BT_TXD 44 43 2-OUT 123 124 -> 10 PXA_A6 125 126 PXA_A11 10 -> 125 nIOIS16_GPIO57 nPWAIT_GPIO56 126 <- -> 127 125 126 128 10 PXA_A4 PXA_A9 10 PXA_A19 127 BUF_MA_19 BUF_MA_20 128 PXA_A20 -> 129 127 128 130 <- ST UART Configuration PXA_A17 BUF_MA_17 BUF_MA_18 PXA_A18 - PXA_A1 129 130 PXA_A7 10 129 130 -> 10 PXA_nPWE 131 132 PXA_A5 10 <- Name J11 Pin PXA GPIO Alt Func 10 PXA_A15 131 BUF_MA_15 BUF_MA_16 132 PXA_A16 -> 133 131 132 134 <- 10 PXA_A13 BUF_MA_13 BUF_MA_14 PXA_A14 10 PXA_RD_nWR 133 134 PXA_A3 10 PXA_ST_RXD 75 46 2-IN 133 134 PXA_MBGNT 
[Schematic sheet 11 of 16: EMAP2 - IO Connectors (stacking connectors J11/J12, buffered memory address and data buses BUF_MA/BUF_MD). Date: Friday, April 02, 2010. The signal configuration tables printed on this sheet are reproduced below.]

SPI2 Configuration

Name            J11 Pin   PXA GPIO   Alt Func
PXA_SSP2_RXD    79        11         2-IN
PXA_SSP2_SCLK   39        19         1-OUT
PXA_SSP2_SFRM   20        14         2-OUT
PXA_SSP2_TXD    87        13         1-OUT

SPI3 Configuration

Name            J11 Pin   PXA GPIO   Alt Func
PXA_SSP3_RXD    25        82         1-IN
PXA_SSP3_SCLK   24        84         1-OUT
PXA_SSP3_SFRM   26        83         1-OUT
PXA_SSP3_TXD    115       81         1-OUT

FPGA IRQ Configuration

Name       J11 Pin   PXA GPIO   Alt Func
PXA_IRQ0   4         10         0-IN (FPGA IRQ reg)
PXA_IRQ1   6         70         0-IN (Eth IRQ)
PXA_IRQ2   3         9          0-IN (Header IRQn[1:0])
PXA_IRQ3   8         69         0-IN (Header IRQ[1:0])

EMAP FPGA JTAG Configuration

Name                 J11 Pin   PXA GPIO   Alt Func
PXA_FPGA_JTAG_TDI    69        30         0-OUT
PXA_FPGA_JTAG_TDO    63        22         0-IN
PXA_FPGA_JTAG_TMS    65        27         0-OUT
PXA_FPGA_JTAG_TCK    59        38         0-OUT
PXA_FPGA_JTAG_TRST   67        29         0-OUT

POWER SUPPLIES

[Schematic sheet 12 of 16: EMAP2 - Power Supply. Date: Friday, April 02, 2010. A TI TPS65020 power-management IC generates the VCC_3.3_VREG, VCC_1.8_VREG, and VCC_1.5_VREG rails from SYSTEM_VIN, with per-rail enables (SYSTEM_3.3V_ENA, SYSTEM_1.8V_ENA, SYSTEM_1.5V_ENA), an I2C control interface (PXA_I2C_SDA/SCL), a VCC_RTC/VBACKUP backup supply, and power-fail and low-battery sensing (LOWBAT = 3.6 V). Notes on the sheet: Vout = 0.6*(R1+R2)/R2 for the DEFDCDC feedback dividers; JP3 selects VCCIO = 3.3 V (position 1-2) or 3.0 V (2-3); JP2 selects VCCMEM = 1.8 V (1-2) or 2.5 V (2-3).]
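The jumper note on the power-supply sheet gives the buck converter output as Vout = 0.6*(R1+R2)/R2 for the DEFDCDC feedback dividers. A minimal sketch of that relation; the 0.6 V feedback reference comes from the sheet's formula, while the example resistor values are hypothetical rather than read from the schematic:

```python
def dcdc_vout(r1_ohms: float, r2_ohms: float, v_fb: float = 0.6) -> float:
    """DC-DC output voltage set by a feedback divider, per the sheet's
    note Vout = 0.6 * (R1 + R2) / R2."""
    return v_fb * (r1_ohms + r2_ohms) / r2_ohms

# Hypothetical divider: 200k over 100k targets a 1.8 V rail.
v_mem = dcdc_vout(200e3, 100e3)
```

Solving the same relation for R1 given a target rail, R1 = R2*(Vout/0.6 - 1), is the usual way such dividers are chosen.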

RefTek Module Connections: Command & Control and Power

[Schematic sheet 13 of 16. Title block: Schematic, RT619, EMAP2 CPU Carrier Board, LEAP2 SYSTEM; Refraction Technology, Inc., Dallas, Texas, U.S.A.; Document Number 00-3619; Date: Friday, April 02, 2010. An ERNI 154765 50-position socket (J17) carries the EMAP2 SPI and select-chain signals (SPI_CLK, SPI_MOSI, SPI_MISO, SPI_CS_N, SEL_CLK, SEL_DAT_IN/OUT, SEL_LATCH, SEL_ACK), LED indicator lines, timing signals (CLKTIM+/-, CLK1HZ, TIME), and the power buses (V5BUS, V33BUS, V9BUS, -V9BUS, V17BUS, -V17BUS) over a ribbon cable to the DAS lid indicators and additional shielded modules; shielded RJ45 jacks (Molex 95622) carry the standard Ethernet cable pairs (1 & 2, 3 & 6, 4 & 5, 7 & 8).]

SPI Chip Select Serial Chain

[Schematic sheet 14 of 16. A Toshiba TC74VHC595FT (74HC595) shift register, with serial data on SEL_DAT_IN, clock on SEL_CLK, and output latch on SEL_LATCH, fans the serial select stream out to the SPI chip-select enables (OSC_DA_SPI_EN and ID_PROM_SPI_EN, inverted where needed by TC7S04FU gates and combined with RST_PULSE through TC7S32FU OR gates). A Linear Technology LTC2915 supervisor, programmed for 3.3 V -15%, drives RESET_N and a 200 ms RST_PULSE.]
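The chain above is a standard 74HC595 fan-out: bits clocked into SEL_DAT_IN on SEL_CLK appear on the eight parallel outputs when SEL_LATCH is pulsed, so one serial stream can steer several SPI chip selects. A small sketch of the bit bookkeeping; the MSB-first shift order and active-low select polarity here are illustrative assumptions, not read from the schematic:

```python
def select_pattern(channel: int, active_low: bool = True) -> int:
    """8-bit pattern asserting exactly one shift-register output
    (bit 0 = QA ... bit 7 = QH); assumes active-low selects."""
    if not 0 <= channel <= 7:
        raise ValueError("a single 74HC595 has eight outputs")
    bit = 1 << channel
    return 0xFF ^ bit if active_low else bit

def shift_sequence(pattern: int) -> list:
    """Bit order to clock into SEL_DAT_IN (MSB first, one bit per
    SEL_CLK edge) before pulsing SEL_LATCH to update the outputs."""
    return [(pattern >> i) & 1 for i in range(7, -1, -1)]
```

Chaining a second register (SOUT into the next SEL_DAT_IN) extends the same scheme to sixteen selects with no extra control lines.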

Temperature Compensated Voltage Controlled Oscillator

[Schematic sheet 15 of 16. A Bliley T103B1561 VCXO (16.384 MHz, 0.5 ppm) is steered by a Linear Technology LTC1655 16-bit DAC (2.5 V full-scale output) driving VCTRL over its 0.5 V (min) to 2.5 V (max), 1.5 V typical, control range. The oscillator output is gated onto CLKTIM through NC7SZ125 buffers under OSC_DA_SPI_EN control, and an LTC2850 differential line interface drives the CLKTIM+/CLKTIM- pair.]
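The disciplining path on this sheet is linear: a 16-bit code written to the DAC maps onto its 2.5 V full-scale output, which must stay within the VCXO's 0.5-2.5 V control range. A sketch of that mapping under the usual unipolar-DAC model (the code/(2^16 - 1) scaling is an assumption here; the LTC1655 datasheet governs the exact transfer):

```python
DAC_BITS = 16
DAC_FULL_SCALE = 2.5              # "2.5 Vout FS" per the sheet
VCTRL_MIN, VCTRL_MAX = 0.5, 2.5   # VCXO pull range per the sheet

def dac_code_for_vctrl(v_ctrl: float) -> int:
    """Nearest DAC code for a target VCXO control voltage."""
    if not VCTRL_MIN <= v_ctrl <= VCTRL_MAX:
        raise ValueError("target outside the VCXO control range")
    full = (1 << DAC_BITS) - 1
    return round(v_ctrl / DAC_FULL_SCALE * full)
```

With 16 bits across 2.5 V, one code step moves VCTRL by about 38 µV, far finer than the oscillator's 0.5 ppm stability requires.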

Board Identification Serial EEPROM

[Schematic sheet 16 of 16. An Atmel AT25128A-10TU-2.7 serial EEPROM (16K x 8) sits on the shared SPI bus (SPI_CS_N, SPI_MOSI, SPI_CLK, SPI_MISO), selected through ID_PROM_SPI_EN_N gating and NC7SZ125 buffers. Date: Friday, April 02, 2010.]

Parts List (Part Number: RT619-0001, Revision: A1)
Columns: Qty | Reference | Part Number | Manufacturer | Description | Value | Type | Notes

1  | BT1 | ML621-TZ1 | FDK America | BATT, 3V, LITH COIN RECHARGEABLE | 3V | Battery
3  | FB1-3 | BLM18AG121SN1D | Murata | FLTR, EMI, 120 OHM, 200mA, 0603 | 120 Ohm | Bead
2  | FB5-6 | BLM18PG181SN1 | Murata | FERRITE CHIP 180 OHM 1500MA 0603 | 180 Ohm | Bead
1  | FB4 | BLM18AG121SN1D | Murata | FLTR, EMI, 120 OHM, 200mA, 0603 | 120 Ohm | Bead
8  | C4, C53-59 | ECJ-0EB1E103K | Panasonic | CAP, 0.01UF, X7R, 25V, 10%, 0402 | 0.01uF | Ceramic
1  | C205 | GRM155R71C223KA01D | Murata | CAP, 0.022UF, X5R, 10V, 10%, 0402 | 0.022uF | Ceramic
18 | C44-47, C142-145, C147-148, C150-157 | C0603C104K4RAC | Kemet | CAP CER 0.1UF 16V 10% X7R 0603 | 0.1uF | Ceramic
72 | C1-3, C7-9, C11-13, C15-24, C26-29, C31-35, C40-41, C48, C51-52, C60-64, C66-70, C73-74, C76, C78-103 | ECJ-0EB1A104K | Panasonic | CAP CER 0.1UF 10V 10% X5R 0402 | 0.1uF | Ceramic
1  | C146 | C0603C102K5RAC | Kemet | CAP CER 1000PF 50V 10% X7R 0603 | 1000pF | Ceramic
11 | C10, C14, C25, C30, C36-39, C43, C50, C65 | ECJ-2FB0J226M | Panasonic | CAP, 22UF, X5R, 6.3V, 20%, 0805 | 22uF | Ceramic
2  | C72, C207 | GRM1555C1H300JZ01D | Murata | CAP, 30PF, COG, 50V, 10%, 0402 | 30pF | Ceramic
1  | C158 | C0603C472M2RAC | Kemet | CAP CER 4700PF 200V 20% X7R 0603 | 4.7nF | Ceramic
5  | C42, C49, C71, C75, C77 | C0603C475K8PACTU | Kemet | CAP, 4.7UF, Y5V, 10V, 10%, 0603 | 4.7uF | Ceramic
2  | C5-6 | ECJ-0EC1H470J | Panasonic | CAP, 47PF, NPO, 50V, 5%, 0402 | 47pF | Ceramic
2  | R19, R26 | ERJ-3RQFR47V | Panasonic | RES, 0.470 OHM, 1/10W, TKF, 1%, 0603 | 0.47 | Chip Resistor
6  | R2, R6-7, R13-14, R18 | RC0603FR-071RL | Yageo | RES, 1.00 OHM, 1/10W, TKF, 1%, 0603 | 1 | Chip Resistor
1  | R148 | ERJ-2RKF10R0X | Panasonic | RES, 10 OHM, 1/16W, TKF, 1%, 0402 | 10 | Chip Resistor
1  | R90 | CRCW060310R0F | Vishay-Dale | RES 10.0 OHM 1/10W 1% 0603 SMD | 10 | Chip Resistor
2  | R12, R40 | ERJ-2GEJ100X | Panasonic | RES, 10 OHMS, 1/16W, TKF, 5%, 0402 | 10 | Chip Resistor
4  | R20, R22-23, R71 | ERJ-2RKF49R9X | Panasonic | RES, 49.9, 1/16W, TKF, 1%, 0402 | 49.9 | Chip Resistor
8  | R1, R5, R16, R21, R24, R27, R30, R54 | ERJ-2RKF1000X | Panasonic | RES, 100 OHM, 1/16W, TKF, 1%, 0402 | 100 | Chip Resistor
4  | R48-51 | ERJ-2GEJ331X | Panasonic | RES 330 OHM 1/10W 5% 0402 SMD | 330 | Chip Resistor
1  | JP6 | ERJ-2GE0R00X | Panasonic | RES, 0, 1/16W, TKF, 5%, 0402 | 0 Ohm | Chip Resistor | Position: 2-3
1  | R91 | CRCW06031001F | Vishay-Dale | RES 1.00K OHM 1/10W 1% 0603 SMD | 1.00K | Chip Resistor
1  | R92 | CRCW12061004F | Vishay-Dale | RES 1.00M OHM 1/4W 1% 1206 SMD | 1.00M | Chip Resistor
1  | R88 | CRCW06031002F | Vishay-Dale | RES 10.0K OHM 1/10W 1% 0603 SMD | 10.0K | Chip Resistor
1  | R89 | CRCW06031003F | Vishay-Dale | RES 100K OHM 1/10W 1% 0603 SMD | 100K | Chip Resistor
4  | R3, R4, R10, R32 | ERJ-2RKF1003X | Panasonic | RES, 100K, 1/16W, TKF, 1%, 0402 | 100K | Chip Resistor
8  | R8, R11, R36-39, R43, R47 | ERJ-2GEJ103X | Panasonic | RES, 10K, 1/16W, TKF, 5%, 0402 | 10K | Chip Resistor
10 | R17, R25, R28-29, R31, R33-35, R72, R78 | ERJ-2GEJ103X | Panasonic | RES, 10K, 1/16W, TKF, 5%, 0402 | 10K | Chip Resistor
1  | R156 | ERJ-2RKF1242X | Panasonic | RES, 12.4K, 1/16W, TKF, 1%, 0402 | 12.4K | Chip Resistor
1  | R9 | ERJ-2GEJ123X | Panasonic | RES, 12K, 1/16W, TKF, 5%, 0402 | 12K | Chip Resistor
9  | R15, R41-42, R44-46, R83, R85-86 | ERJ-2GEJ102X | Panasonic | RES, 1K, 1/16W, TKF, 5%, 0402 | 1K | Chip Resistor
1  | R155 | ERJ-2GEJ105X | Panasonic | RES, 1M, 1/16W, TKF, 5%, 0402 | 1M | Chip Resistor
2  | R52-53 | ERJ-2GEJ472X | Panasonic | RES 4.7K OHM 1/10W 5% 0402 SMD | 4.7K | Chip Resistor
1  | RJ1 | CRCW06030000Z0EA | Vishay-Dale | RES 0.0 OHM 1/10W 0603 SMD | ZERO OHM | Chip Resistor
2  | J4, J6 | DF40C-100DS-0.4V(51) | Hirose | CONN RCPT 100POS 0.4MM SMD GOLD | 100 pins | Connector
1  | J7 | 142-9701-811 | Johnson Components | CONN, JACK, PC END MOUNT, RCPTL | 142-9701-811 | Connector
1  | J1 | 54819-0572 | Waldom | CONN, MINI-USB, MINI-B TYPE, SMT | 54819-0572 | Connector
2  | J2-3 | 56579-0576 | Waldom | CONN, MINI-USB, MINI-AB TYPE, SMT | 56579-0576 | Connector
1  | P1 | TSH-110-01-S-DH-SL | Samtec | CONN UNSHROUDED HDR 20POS .05" | CON20 | Connector
1  | J15 | 500998-0909 | Molex | CONN MEMORY SECURE DIGIT REV SMD | CONN SD CARD | Connector
1  | J17 | 154765 | ERNI | CONN SOCKET 50POS | CONN SOCKET 25x2 | Connector
2  | J13, J16 | 95622-3981 | Molex | CONN RJ45 RA TH JACK | RJ45 W/SHIELD | Connector
1  | Y1 | ABM3B-25.000MHZ-B2-T | Abracon Corp | CRYSTAL 25.000MHZ 18PF SMD | 25.000MHz | Crystal
1  | U20 | RCLAMP3304N.TCT | Semtech | IC, TVS ARRAY 4-LINE 3.3V, 10-SLP | RCLAMP3304N | Diode Array
1  | D16 | 1N4148WT-7 | Fairchild | DIODE SCHOTTKY 2A 80V SOD532 | 1N4148WT-7 | Diode
1  | J5 | HRP10H-ND | Assmann | HEADER, .100, 2X5, VERT | HDR2X5 | Header
1  | U37 | TC74VHC595FT | Toshiba | IC REGISTER SHIFT 8-BIT 16-TSSOP | 74HC595 | IC
1  | U24 | TC74LCX74FT | Toshiba | IC FLIP-FLOP DUAL D-TYPE 14TSSOP | 74LCX74 | IC
1  | U34 | 93LC46B | Microchip | IC, EEPROM 93LC46B, TSSOP8 | 93LC46B | IC
6  | U2, U4, U6-7, U9, U25 | AP2280-1WG-7 | Diodes Inc | IC, LOAD SW CTRL 1CH, SOT-25 | AP2280 | IC
1  | U18 | FIN1001M5X | Fairchild Semiconductor | IC DRIVER 3.3V LVDS HS SOT-23 | FIN1001 | IC
1  | U1 | FT232RQ | FTDI | IC, FT232R, USB SERIAL, QFN32 | FT232R | IC
8  | U3, U5, U10, U12-15, U30 | INA216A2YFFR | Texas Instruments | IC CURRENT SHUNT MONITOR 4DSBGA | INA216A | IC
1  | U16 | LAN9512 | SMSC | LAN9512-JZX | LAN9512 | IC
1  | U26 | LEA-5T | u-blox | MODULE, GPS, LEA-5T, SMT | LEA-5T | IC
1  | U28 | LM73CIMK | National Semiconductor | IC, TEMP SENSR DGTL 2.7V, SOT23-6 | LM73CIMK | IC
1  | U44 | LTC1655LCS8 | Linear Technology | IC DAC 16BIT R-R MICROPWR 8SOIC | LTC1655 | IC
1  | U41 | LTC2915TS8 | Linear Technology | IC VOLT SUPERVISOR TSOT23-8 | LTC2915 | IC
1  | U36 | AGL600V2-FGG256 | Actel | IC, IGLOO FPGA 600K GATE, FG256 | AGL600V2-FGG256 | IC
1  | U17 | MCP1603T | Microchip | IC SYNC BUCK REG 500MA TSOT-23-5 | MCP1603T | IC
1  | U33 | MIC2026-1BM | Micrel | IC SW DISTRIBUTION 2CHAN 8SOIC | MIC2026-1 | IC
4  | U42, U46, U48, U51 | NC7SZ125M5 | Fairchild Semiconductor | IC BUFF TRI-ST UHS N-INV SOT235 | NC7SZ125 | IC
1  | U29 | PCA9306DCUR | Texas Instruments | IC VOLT LEVEL TRANSLATOR 8-VSSOP | PCA9306 | IC
1  | U11 | REF2930AIDBZT | Texas Instruments | IC, REF2930, 3.0V REFERENCE, SOT23 | REF2930 | IC
1  | U50 | AT25128A-10TU-2.7 | Atmel | IC EEPROM 128KBIT 20MHZ 8TSSOP | SERIAL MEMORY 16Kx8 | IC
4  | U22-23, U38, U40 | TC7S04FU | Toshiba | IC INVERTER HS C2MOS SSOP5 | TC7S04FU | IC
1  | U21 | TC7S08FU | Toshiba | IC GATE AND 2INP 74HC08 5-SSOP | TC7S08FU | IC
1  | U19 | TC7S14FU | Toshiba | IC INVERTER SCHMITT 5-SSOP | TC7S14FU | IC
2  | U43, U49 | TC7S32FU | Toshiba | IC GATE OR 2INPUT HS C2MOS SSOP5 | TC7S32FU | IC
1  | U8 | TLV2548IPWR | Texas Instruments | IC, 12 BIT ADC, TSSOP20 | TLV2548IPWR | IC
1  | L1 | CBC2518T4R7M | Taiyo-Yuden | INDUCTOR 4.7UH 20% 1007 SMD | 4.7uH | Inductor
1  | D1 | AYPG1211F | Stanley Electric | LED, YELLOW/GREEN BICOLOR, 2.1V, RT ANG, SMD | AYPG1211F | LED
1  | D2 | BRPY1211F | Stanley Electric | LED, RED/YELLOW-GREEN BICOLOR, 2.1V, RT ANG, SMD | BRPY1211F | LED
1  | U27 | 450-0037 | LS Research | MODULE TIWI-R2 U.FL | 450-0037 | Module
2  | RN8-9 | EXB-38V104JV | Panasonic | RES ARRAY 100K OHM 4 RES 1206 | 100K | Resistor Array
1  | RN7 | EXB-38V102JV | Panasonic | RES ARRAY 1K OHM 4 RES 1206 | 1K | Resistor Array
1  | C149 | T491D107M006AS | Kemet | CAP TANT 100UF 6.3V 7343 | 100uF | Tantalum
1  | U45 | T103B1561 | Bliley | TCVCXO 16.384MHZ 0.5PPM SMD | T103B1561 | TCXO
1  | T1 | H0019 | Pulse | TRANSFORMER, 10/100 MAGNETICS, 0.500x0.350 | H0019 | Transformer
3  | Q1-2, Q10 | BC847A-TP | Micro Semi | BJT, NPN, 100MA, 45V, SOT23 | BC847A-TP | Transistor
1  | FB7 | BLM18AG121SN1D | Murata | FLTR, EMI, 120 OHM, 200mA, 0603 | 120 Ohm | Bead | NOT INSTALLED

A.1.0.2 Layout

APPENDIX B

LEAP2 Design Documents

B.1 Hardware Schematics and PCB Layout

This section provides all hardware design files.

B.1.1 EMAP2

B.1.1.1 Schematics

LEAP2 - EMAP2

[Schematic sheet 1 of 12: EMAP2 - Title Page. Date: Saturday, November 03, 2012. Author: Dustin McIntire. The sheet also shows the power supply hierarchy, JTAG chain, and reset hierarchy block diagrams, plus chassis-ground mounting holes.]

Schematic Pages
1. Title Sheet
2. USB UART
3. Current Sensing
4. USB Host and OTG
5. Ethernet
6. Compact Flash
7. Radio IF
8. GPS
9. CMOS Imager IF
10. FPGA
11. Stacking Connector
12. Power Supplies

"Copyright (c) 2006 The Regents of the University of California. All rights reserved.

Permission to use, copy, modify, and distribute this software and its documentation for any purpose, without fee, and without written agreement is hereby granted, provided that the above copyright notice, the following two paragraphs and the author appear in all copies of this software.

IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS."

LAYOUT NOTES:
1. No vias inside SMT caps and resistors.
2. No vias inside SMT pads.

USB Console UART

[Schematic sheet 2 of 12: EMAP2 - USB UART. An FTDI FT232RQ USB-serial converter connects the console UART (CONSOLE_TXD, CONSOLE_RXD) to mini-USB connector J1 (Waldom 54819-0572), with bicolor status LEDs (AYPG1211F, BRPY1211F), PXA_LED0/PXA_LED1 and CONSOLE_RXD_EN routed through the CBUS pins, and a ferrite-filtered (BLM18AG121SN1D) USB_VIN supply.]

CURRENT SENSING

[Schematic sheet 3 of 12: EMAP2 - Current Sensing. Each switched subsystem rail (Compact Flash, Ethernet, CMOS Imager, GPS, Radio, HPM, USB host) passes through a Micrel MIC2007 current-limited high-side switch with a *_POWER_ON enable, and its supply current is measured by a TI INA138 current-shunt monitor. The per-rail *_CURRENT outputs feed the eight channels of a TI TLV2548 12-bit serial ADC (EMAP_CSENSE_nCS/DIN/DOUT/SCLK), referenced to a REF2930 3.0 V reference; jumper JP4 selects VCC_3.3_VREG or VCC_1.8_VREG as the radio rail RF_VCC. Grounds tie at a single point.]
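The readback math for the chain on this sheet (shunt resistor, INA138 monitor, TLV2548 ADC) is a pair of linear conversions. A sketch assuming the INA138's nominal 200 uA/V transconductance from its datasheet and illustrative 1 Ohm shunt / 100 kOhm load values; the actual per-channel resistors should be taken from the schematic:

```python
ADC_BITS = 12    # TLV2548 is a 12-bit converter
V_REF = 3.0      # REF2930 3.0 V reference

def adc_code_to_volts(code: int) -> float:
    """Input voltage corresponding to a raw TLV2548 code."""
    return code / (1 << ADC_BITS) * V_REF

def rail_current_amps(v_adc: float, r_shunt: float = 1.0,
                      r_load: float = 100e3) -> float:
    """Invert the INA138 transfer Vout = (I * Rs) * 200 uA/V * RL."""
    gain = 200e-6 * r_load   # 20 V/V with a 100 kOhm load resistor
    return v_adc / (r_shunt * gain)
```

Composing the two, rail_current_amps(adc_code_to_volts(code)) gives the rail current in amps for the assumed resistor values.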

USB Host Port 1 / USB Host Port 2 or USB OTG Using External Transceiver

[Schematic sheet 4 of 12: EMAP2 - USB OTG. Host port 1 (J2) is powered through a MIC2007 switch with a 500 mA current limit on USB1_VBUS. Port 2 (J3) can act as a second host port or as USB OTG through an ISP1301 external transceiver on the PXA I2C bus (PXA_I2C_SDA/SCL), with 33 Ohm series resistors and 15K pulldowns on the data lines and its own MIC2007-switched VBUS.]

10/100 Ethernet Controller

[Schematic sheet 5 of 12: EMAP2 - Ethernet. A DM9000A 10/100 Ethernet controller on the PXA data bus (chip select PXA_ETH_nCS, strobes PXA_nOE/PXA_nWE), with a 93C46 MAC ID EEPROM, 25.000 MHz crystal, H0019 1:1 magnetics with 49.9 Ohm 1% terminations, and a TPS3805 reset supervisor. J13 is wired to connect to A0157161, the dongle for a Netgear FA410 PCMCIA card (pinout listed on the sheet); analog and digital grounds tie together at a single point.]

Compact Flash Interface Buffers / Compact Flash Connector

[Schematic sheet 6 of 12: EMAP2 - Compact Flash Interface (Document Number ASCENT-LEAP2-EMAP2). A TI SN74LV4320A CompactFlash buffer/level translator connects the PXA address and data buses (PXA_A[25:0], PXA_D[31:0]) and control strobes to a Type II CF connector, with card detect (PXA_CF_nCD), interrupt (PXA_CF_IRQ), and reset (PXA_CF_RESET) routed back to the processor; a sheet note flags a high-current trace (100 mA max) to keep short, and JP5 ties PXA_PKTSEL through a 0 Ohm resistor.]

Radio Interface

[Schematic sheet 7 of 12: EMAP2 - Radio Interface. A Hirose DF17A(3.0)-20DS-0.5V(51) 20-pin mezzanine connector (J14) carries the radio UART (RADIO_TXD, RADIO_RXD), SPI (RADIO_nCS, RADIO_SDI, RADIO_SDO, RADIO_SCLK), RADIO_nIRQ, RADIO_nRESET, eight general-purpose lines (RF_GPIO[7:0]), and the switched RF_VCC rail.]

GPS Interface

[Schematic sheet 8 of 12: EMAP2 - GPS (Document Number ASCENT-LEAP2-EMAP2). A u-blox LEA-5T GPS module with UART (GPS_TXD, GPS_RXD), GPS_TIMEPULSE and GPS_WAKEUP lines, an H.FL-R-SMT antenna jack (J7) on an isolated RF ground, VCC_RTC battery backup on V_BAT, and a ferrite bead (FB4, BLM18AG121SN1D) joining signal and RF grounds for digital noise rejection.]

CMOS Imager Interface

[Schematic sheet 9 of 12: EMAP2 - CMOS Imager. A 30-pin Molex 52746-3070 connector (J8) carries the PXA quick-capture interface (PXA_CIF_D[9:0], PXA_CIF_PXCLK, PXA_CIF_LINE_VALID, PXA_CIF_FRAME_VALID, PXA_CIF_CLKIN) and imager control lines (CIF_STROBE, CIF_TRIGGER, CIF_STANDBY, CIF_nOE, CIF_GSHT_CTL). An SN74CBTLV3126 bus switch, enabled by IMAGER_POWER_ON, isolates the shared I2C (PXA_I2C_SCL/SDA) and clock lines when the imager rail is powered down.]

[Schematic sheet 10 of 12: EMAP2 - FPGA. An Actel M1AGL600-FG256 IGLOO FPGA bridges the PXA address and data buses (PXA_A[25:0], PXA_D[31:0]) to the radio, GPS, imager, Ethernet IRQ, current-sense, and external stacking-connector signals, and drives the power-control lines (SYSTEM_3.3V_ENA, SYSTEM_1.8V_ENA, SYSTEM_1.5V_ENA, and the subsystem *_POWER_ON enables). A low-power LTC6906 oscillator supplies OSC_1MHZ, a 26.000 MHz crystal feeds XTAL_IN/XTAL_OUT, the FPGA JTAG port (PXA_FPGA_TCK/TDO/TMS/TDI/TRST) is pulled up through 1K resistors, and pads are reserved for a future ARM Cortex SoC JTAG chain.]

[Schematic sheet: EMAP2 - IO Connectors. LEAP2 stacking connector (upper, J12, QTH-090-02-F-D-A) and HPM SODIMM-200 connector (J11), with signal tables mapping each interface to J11 pins, PXA GPIOs, and alternate functions: PCMCIA, USB, LED, CIF (camera), BT UART, ST UART, SPI2, SPI3, FPGA IRQ, and EMAP FPGA JTAG configurations. Size C, Rev C, Sheet 11 of 12, November 3, 2012.]

POWER SUPPLIES

[Schematic sheet: EMAP2 - Power Supply. TI TPS65020 power-management IC generating the VCC_3.3_VREG, VCC_1.8_VREG, and VCC_1.5_VREG rails (enables SYSTEM_3.3V_ENA, SYSTEM_1.8V_ENA, SYSTEM_1.5V_ENA), plus LDO outputs and VCC_RTC backup; low-battery threshold set to 3.6 V; power-button and reset circuitry. DCDC output voltage is set by the feedback divider: Vout = 0.6 * (R1 + R2) / R2. Jumper options: JP2 selects VCCMEM = 1.8 V (1-2) or 2.5 V (2-3); JP3 selects VCCIO = 3.3 V (1-2) or 3.0 V (2-3).]

Wall Cube Connector: 3.3-5.5 VDC @ 500 mA (PJ-007 jack, WALLPLUG_VIN) and USB_VIN inputs, multiplexed onto SYSTEM_VIN by a TI TPS2115 autoswitching power mux; SMAJ6.5A TVS clamp at 6.5 V; current limit set by ILIM = 500/RLIM.
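The two design formulas on this sheet (the TPS65020 feedback divider, Vout = 0.6 * (R1 + R2) / R2, and the TPS2115 current limit, ILIM = 500/RLIM) can be sanity-checked numerically. The divider resistor values below are illustrative assumptions, not the exact values used on the board:

```python
def dcdc_vout(r1_ohm, r2_ohm, vref=0.6):
    """TPS65020 DCDC output from its feedback divider: Vout = Vref*(R1+R2)/R2."""
    return vref * (r1_ohm + r2_ohm) / r2_ohm

def tps2115_ilim_ma(rlim_kohm):
    """TPS2115 current limit in mA, reading ILIM = 500/RLIM with RLIM in kOhm."""
    return 500.0 / rlim_kohm

# Illustrative dividers: a 4.5:1 ratio yields 3.3 V, a 2:1 ratio yields 1.8 V.
print(round(dcdc_vout(450e3, 100e3), 3))  # 3.3
print(round(dcdc_vout(200e3, 100e3), 3))  # 1.8
print(tps2115_ilim_ma(1.0))               # 500.0 (mA), matching the 500 mA wall input
```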

[Title block: EMAP2 - Power Supply, Size B, Rev C, Sheet 12 of 12, November 3, 2012.]

B.1.1.2 Layout

B.1.2 HPM

B.1.2.1 Schematics


LEAP2 - Host Processor Module


Schematic Pages

Author: Dustin McIntire

Schematic Pages:
1. Title Sheet
2. PXA270-1
3. PXA270-2
4. SDRAM
5. Flash and SRAM
6. Transceivers
7. Power Supply
8. Current Sensing
9. I/O

"Copyright (c) 2006 The Regents of the University of California. All rights reserved.

Permission to use, copy, modify, and distribute this software and its documentation for any purpose, without fee, and without written agreement is hereby granted, provided that the above copyright notice, the following two paragraphs and the author appear in all copies of this software.

IN NO EVENT SHALL THE UNIVERSITY OF CALIFORNIA BE LIABLE TO ANY PARTY FOR DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS DOCUMENTATION, EVEN IF THE UNIVERSITY OF CALIFORNIA HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

THE UNIVERSITY OF CALIFORNIA SPECIFICALLY DISCLAIMS ANY WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS ON AN "AS IS" BASIS, AND THE UNIVERSITY OF CALIFORNIA HAS NO OBLIGATION TO PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS."

Power Supply Hierarchy: TBD

LAYOUT NOTES:
1. No vias inside SMT caps and resistors.
2. No vias inside SMT pads.


Reset Hierarchy: TBD
JTAG Chain: TBD

[Board outline: mounting holes (MTG, 1.8 mm, non-plated) and fiducials.]

[Title block: HPM - Title Page, Size C, Rev C, Sheet 1 of 9, November 3, 2012.]

[Schematic sheet: HPM - PXA270 Signals. Intel PXA270 processor (RTPXA270C5C520) signal breakout: address and data buses (MA_0-MA_25, MD_0-MD_31, with 24.9-ohm series terminations on the SDRAM clocks), memory control (nOE, nWE, nSDRAS, nSDCAS, DQM, nSDCS, chip selects), LCD port, SSP, SDIO/MMC, CIF, baseband, PCMCIA, keypad, USB host/client/OTG, I2C, USIM, UARTs (FF/BT/ST/ICP), AC97, PWM, JTAG, and power-manager/reset signals.]

Chip Select Allocation:
0 - NOR Flash (32 bit)
1 - NAND Flash (16 bit)
2 - SRAM (16 bit)
3 - SYSTEM BUS_0
4 - SYSTEM BUS_1
5 - SYSTEM BUS_2
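For reference, the chip-select numbers above correspond to partitions of the PXA27x static memory space. The sketch below assumes the standard PXA27x layout of six 64 MB static chip-select partitions starting at physical address 0x00000000 (taken from Intel's PXA27x memory map, not from this sheet):

```python
# Assumed PXA27x static memory map: six 64 MB chip-select
# partitions (nCS<0>..nCS<5>) starting at physical 0x00000000.
CS_DEVICES = {
    0: "NOR Flash (32 bit)",
    1: "NAND Flash (16 bit)",
    2: "SRAM (16 bit)",
    3: "SYSTEM BUS_0",
    4: "SYSTEM BUS_1",
    5: "SYSTEM BUS_2",
}

def ncs_base(n):
    """Physical base address of static chip select nCS<n> (64 MB partitions)."""
    return n * 0x0400_0000

for n, dev in CS_DEVICES.items():
    print(f"nCS<{n}> @ 0x{ncs_base(n):08X}: {dev}")
```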

[Title block: HPM - PXA270 Signals, Size B, Rev C, Sheet 2 of 9, November 3, 2012.]

[Schematic sheet: HPM - PXA270 Power. PXA270 power, ground, and local decoupling (VCC_CORE, VCC_MEM, VCC_IO, VCC_LCD, VCC_USB, VCC_BB, VCC_USIM, VCC_PLL, VCC_SRAM, VCC_BATT, with 0.1 uF bypass capacitors); 13.000 MHz processor crystal and 32.768 kHz RTC crystal; status LEDs; 24LC02B (AT24C02-compatible) device-ID EEPROM on the I2C bus (SCL_GPIO117 / SDA_GPIO118). Size B, Rev C, Sheet 3 of 9, November 3, 2012.]

SDRAM

[Schematic sheet: HPM - SDRAM. Micron MT48H16M32L2B5-8 mobile SDRAM (4M x 32, 4 banks) on the PXA memory bus, with the PXA-address-to-SDRAM-address map set by jumpers JP1-JP3 (SDRAM_A12, SDRAM_BA0, SDRAM_BA1) and 0.1 uF decoupling. Size B, Rev C, Sheet 4 of 9, November 3, 2012.]

SDRAM configurations:

Bank Size (MByte)                Jumper Settings          SDRAM Parts
64 MByte (512 Mbit) (default)    JP10, JP8, JP9 - 1-2     MT48H16M32L2B5
32 MByte (256 Mbit)              JP10, JP8, JP9 - 2-3     MT48V8M32LFB5
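The bank sizes in the configuration table follow directly from the device geometry; a quick arithmetic check, using the 4M x 32 x 4-bank organization from the part callout:

```python
def sdram_megabytes(addresses, data_bits, banks):
    """Total SDRAM capacity in MBytes from its organization."""
    return addresses * data_bits * banks // 8 // 2**20

# MT48H16M32L2B5: 4M addresses x 32 bits x 4 banks = 512 Mbit = 64 MByte
print(sdram_megabytes(4 * 2**20, 32, 4))  # 64
```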

FLASH AND SRAM

[Schematic sheet: HPM - Flash and SRAM. Intel synchronous NOR flash (StrataFlash P30, 32M x 16, DNI); Micron MT29F4G16BABWP NAND flash with CPLD-driven control (CPLD_NAND_nOE/nWE/nCS) and resistor-network-selectable bus width; Micron MT45W8MW16B (8M x 16) as standard async SRAM or PSRAM.]

NAND configurations:

Width            Settings
x8               RNx - DNI, RNy - Install
x16 (default)    RNx - Install, RNy - DNI

Mfgr               Setting
Micron             JPx - 1-2
Toshiba (default)  JPx - 2-3

Title HPM - Flash and SRAM

[Title block: Size B, Rev C, Sheet 5 of 9, November 3, 2012.]

TRANSCEIVERS/CPLD

[Schematic sheet: memory-bus transceivers and CPLD. Lattice LC4064ZC-75M56C CPLD (MACH4064, 56-ball BGA) decodes chip selects (CPLD_nCS3-nCS5, NAND and SRAM selects) and drives buffer enable/direction controls; two SN74AVC(H)32245 32-bit transceivers buffer the address (BUF_MA_*) and data (BUF_MD_*) buses between the PXA270 and the memory devices.]

A A

Title HPM - Tranceivers

Size Document Number Rev B C

Date: Saturday, November 03, 2012 Sheet 6of 9 5 4 3 2 1 5 4 3 2 1

[Schematic Sheet 7 of 9: HPM - Power Supply. A TI TPS65020 power-management IC generates the VCC_IO, VCC_MEM, and VCC_CORE rails from SYSTEM_VIN through three 2.2 uH buck converters plus LDO outputs (VCC_PLL, VCC_SRAM); jumpers JP5-JP7 select VCC_IO = 3.3 V or 3.0 V, VCC_MEM = 1.8 V or 2.5 V, and VCORE = 1.55 V or 1.3 V, and resistor dividers set the LOWBAT (3.6 V) and PWRFAIL (3.3 V) thresholds. A DS1339U-3.3 I2C RTC with trickle charger (ML-type backup cell charged to 3.2 V at 0.3 mA max) provides timekeeping; JP8 selects a hard (default) or soft power switch. Dated November 03, 2012.]

CURRENT SENSING

[Schematic Sheet 8 of 9: HPM - Current Sensing. Each monitored rail (VCC_SDRAM, VCC_NOR, VCC_PSRAM, VCC_CORE, VCC_NAND) passes through a shunt resistor (100 mOhm or 1.0 Ohm) whose drop is amplified by an OPA333 difference amplifier with gain Vout = (100K/10K)(V2 - V1) and low-pass cutoff Fc = 1.6 kHz. The five amplifier outputs feed a TLV2548 SPI ADC (CSENSE_* signals) with a REF2930 voltage reference; analog ground (AGND) ties to digital ground at a single point. Dated November 03, 2012.]
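A minimal sketch of the conversion this sense chain implies: the rail current drops a voltage across the shunt, the OPA333 stage multiplies it by 100K/10K = 10, and the TLV2548 digitizes the result. The 12-bit width and 3.0 V full scale (from the REF2930) are assumptions consistent with those part families, not values read off the sheet; verify them against the actual build.

```python
# Convert a TLV2548 code to rail current for the HPM sense chain:
# shunt -> OPA333 difference amp (gain 100K/10K = 10) -> SPI ADC.
# ASSUMED: 12-bit ADC, 3.0 V reference (REF2930) -- verify on hardware.

def rail_current_amps(adc_code, r_shunt_ohms, gain=10.0, vref=3.0, bits=12):
    """Estimated rail current in amperes from a raw ADC code."""
    v_out = adc_code * vref / (2 ** bits)  # amplifier output at the ADC pin
    v_shunt = v_out / gain                 # drop across the shunt resistor
    return v_shunt / r_shunt_ohms          # Ohm's law

# Example: a 100 mOhm shunt rail, ADC code 341 of 4096
print(round(rail_current_amps(341, 0.100), 3))  # ~0.25 A
```

The 1.0 Ohm shunts on the lower-current rails trade voltage headroom for ten times finer current resolution than the 100 mOhm shunts.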

[Schematic Sheet 9 of 9: HPM - I/O Connectors. A 200-pin SODIMM edge connector (J2) exports the buffered memory bus (BUF_MA_*, BUF_MD_*, BUF_DQM_*), CPLD control and current-sense SPI signals, USB, power rails, and the PXA GPIOs; header J1 (52746-1670) carries the CPLD and PXA JTAG chains, and test points TP7-TP18 break out unused PXA pins. Dated November 03, 2012.]

B.1.2.2 Layout

B.1.3 SIM

B.1.3.1 Schematics


[Schematic Sheet 1 of 4: LEAP2 - Sensor Interface Module (SIM-MSP430), title page. Block diagram: an MSP430 core with an 802.15.4 radio, serial flash, JTAG debug header, I2C and serial headers, and ADC channels; each of four sensor inputs has a power switch, current-sense amplifier, and low-pass filter, with supply selection among WALLPLUG_VIN (12 V), SYSTEM_VIN (5 V), and VCC_3.3_VREG (3.3 V). Schematic pages: (01) Title Page, (02) Current Sensing, (03) MSP430 Core & Peripherals, (04) I/O Connectors. UCLA ASCENT Lab. Dated November 03, 2012.]

[Schematic Sheet 2 of 4: SIM - Current Sensing. Each of the four sensor supplies is switched by an IRF7503 MOSFET pair whose gates are driven at 7 V by an LTC1156 (the 7 V rail comes from a TPS71501 regulator) and is monitored by an INA138 high-side current-sense amplifier (0.1 Ohm shunt, G = 20 V/V) followed by a second-order OPA4340 low-pass filter. The four sensor currents plus the MSP430 rail current (VCC_MSP_CURRENT) feed a TLV2548 SPI ADC; jumper block J5 selects each output voltage, and ED555-series screw terminals provide the power outputs and sensor inputs. Dated November 03, 2012.]
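The gain and shunt values on this sheet fix each channel's measurement range. A small sketch of that arithmetic follows; the G = 20 V/V figure and 0.1 Ohm shunt come from the sheet, while the 12-bit conversion against a 3.3 V reference is a hypothetical assumption (the converter reference is not called out here).

```python
# Range and resolution of one SIM sensor channel:
# INA138 gain G = RL / 5k = 100k / 5k = 20 V/V, shunt Rs = 0.1 ohm.
# ASSUMED (not on this sheet): 12-bit conversion, 3.3 V reference.

G, RS, VREF, BITS = 20.0, 0.1, 3.3, 12

full_scale_amps = VREF / (G * RS)          # 3.3 / 2 = 1.65 A full scale
lsb_amps = full_scale_amps / (2 ** BITS)   # current represented by one count

print(round(full_scale_amps, 2), round(lsb_amps * 1e6, 1))  # A, uA per count
```

The INA138's gain is set by its load resistor (Vout = Vshunt x RL / 5 kOhm), so a channel's range can be retuned by changing RL alone, without touching the shunt.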

[Schematic Sheet 3 of 4: SIM - MSP430 Core & Peripherals. An MSP430F1611 microcontroller with 32.768 kHz and 8.000 MHz crystals, an SST25VF020 SPI serial flash, a JTAG debug header (J6), four status LEDs, an external I2C header, and a TI CC2420 802.15.4 radio (16.000 MHz crystal, L/C matching network, U.FL antenna connector). Layout notes: connect each ground pin to the ground plane through individual vias placed as close as possible to the package, and place the decoupling capacitors as close as possible to the supply pins with separate ground vias. Dated November 03, 2012.]

[Schematic Sheet 4 of 4: SIM - I/O Connectors. Upper (QTH-090) and lower (QSH-090) LEAP2 stacking connectors carry the PXA address/data bus, GPIOs, power rails, and current-sense SPI; an SN74CBTLV3245A bus switch isolates the SIM <-> EMAP2 (PXA) serial and I2C lines, an SN74CBTLV3125 gates the MSP430 JTAG signals, and a MAX3221 provides the RS-232 UART. Dated November 03, 2012.]

B.1.3.2 Layout

B.1.4 MPM

B.1.4.1 Schematics


[Schematic Sheet 1 of 4: LEAP2 - Mini PCI Module, title page. Author: Dustin McIntire; Copyright (c) 2006 The Regents of the University of California, all rights reserved, distributed under the standard University of California permission-to-use and no-warranty notice. Schematic pages: 1. Title Sheet, 2. Mini PCI and FPGA, 3. Power Supplies, 4. I/O Connectors. Layout notes: no vias inside SMT caps and resistors; no vias inside SMT pads. Power supply hierarchy, reset hierarchy, and JTAG chain: TBD. Dated November 03, 2012.]

[Schematic Sheet 2 of 4: MPM - Mini PCI and FPGA. A QuickLogic QL5822-280 PCI bridge FPGA connects the buffered PXA bus to a Mini PCI Type III socket (3.3 V, 32-bit, 33 MHz), with 10K pull-up resistor networks on the PCI control signals. A 33.000 MHz crystal oscillator, buffered and gated by an SN74LVC2G125 dual buffer (33MHZ_EN, PCLK_nOE), supplies the PCI clock and the PXA SDCLK. Dated November 03, 2012.]

Current Sense ADC

VCC_3.3_VREG

R13 10 Ohm

C23 5.0V@2A and 3.3V@2A Power Supply 0.1uF D D U4 5 VREG_INTVCC WALLPLUG_VIN PCI_3.3V_CURRENT 6

A0 VCC 3 R14 PCI_1.8V_CURRENT 7 100K A1 C24 8 1 EXT_CSENSE_DOUT (4) 22uF 1% 9 A2 SDO D3 C22 10 A3 1uF A4 R15 C25 11 4 VCC_3.3_VREG 10K 0.1uF 12 A5 EOC A6 BAT54A BAT54A WALLPLUG_VIN 1% 13 A7 1 2 14 R16 nCSTART 16 10K nPWDN 17 20 FS SYSTEM_VIN (4) EXT_CSENSE_nCS0 nCS (4) EXT_CSENSE_DIN 2 19 C26 C27 3 SDI REFP 18 10uF 10uF (4) EXT_CSENSE_SCLK VREG_INTVCC SCLK REFM C28 R18 AGND 0.1uF (4) SYSTEM_nLOWBAT 0 OHM TLV2548IPWR VCC_3.3_VREG U5 R17 15 VREF 0 OHM 1 VIN 22 27 28 19 20 2 U7 C29 3 VOUT U6 26 15 U8 0.1uF GND TG1 TG2

5 4 C30 Vin C31 4 5 REF2930 6 D1 G1 3 0.1uF 24 17 0.1uF 3 G1 D1 6

D1 S1 BOOST1 INTVcc BOOST2 S1 D1

7 2 EXTVcc 2 7 8 D2 G2 1 25 PGOOD1 PGOOD2 16 1 G2 D2 8 Tie at a single point. D2 S2 SW1 SW2 S2 D2 NDS8936 23 LTC3827EUH 18 NDS8936 BG1 BG2

32 10 SENSE1+ SENSE2+ L2 L1 6.8uH 6.8uH 1 9 SENSE1- SENSE2- C C

31 11 Vfb1 Vfb2 C32 C33 CURRENT SENSING R19 1000pF 30 12 1000pF R20 0.015 OHM Ith1 Ith2 0.015 OHM SYSTEM_VIN 7 8 PCI_3.3_PRE RUN1 RUN2 PCI 3.3V 29 13 C34 R24 R21 TRACK/SS1 TRACK/SS2 R22 0.1uF 100 Ohm 105K 5 63.4K PCI_3.3_PRE U9

1 1 1% 1% PLLIN/MODE C36 1% 5 1 PCI_3.3V_CURRENT C35 + R23 2 4 C40 1000pF R25 C38 + V+ OUT C37 C39 47UF 20K PLLPF CLKOUT 33PF 20K 47UF R26 3 R27 1000pF Vin+ 0.1uF 6.3V 1% 3 14 R29 1% 6.3V 0.1 Ohm 4 2 100K 2 2 PCI_3.3_VREG C41 R28 PHASMD PKG_GND SGND PGND FOLDIS 15K Vin- GND 1% 33PF 15K INA138NA 6 LTC3827EUH 1 VREG_INTVCC 33 21 R30 C42 + 100K C43 100UF JP1 C44 22 3 3 0.1uF 6.3V PCI_POWER_ON 2 0.1uF

VREG_INTVCC 11

VREG_INTVCC VCC 530Khz VCC_1.8_VREG Float 400Khz PCI 1.8V DNI R31 C46 100K C45 R33 250Khz 0.1uF GND 0.1uF 100 Ohm U10 U11 1% 1 6 5 1 PCI_1.8V_CURRENT R32 VIN VOUT V+ OUT 100K C47 3 5 R34 3 R35 PCI_POWER_ON 0.1uF ENA CSL 1 Ohm 4 Vin+ 2 100K 4 2 Vin- GND 1% Continuous Mode ILIM GND INA138NA B VCC R36 B MIC2007YM6TR 470 Ohm PCI_1.8_VREG VCC/2 Pulse Skipping Mode Burst Mode GND C48 22uF

A A

Title Title Sensor Interface Module - ADC Size Document Number Rev Size Document Number Rev C A Date: Sheet Date: Saturday, November 03, 2012 Sheet 3of 4 5 4 3 2 1 5 4 3 2 1

LEAP2 Stacking connectors

LEAP2 Stacking Connector (Upper) LEAP2 Stacking Connector (Lower)

VCC_2.7_VREG VCC_2.7_VREG VCC_3.3_VREG VCC_3.3_VREG

D 1 2 SYSTEM_VIN 1 2 SYSTEM_VIN D VCC_1.8_VREG 3 1 2 4 VCC_1.8_VREG 3 1 2 4 5 3 4 6 5 3 4 6 7 5 6 8 7 5 6 8 9 7 8 10 9 7 8 10 11 9 10 12 11 9 10 12 13 11 12 14 13 11 12 14 13 14 EXT_CSENSE_nCS0 (3) 13 14 EXT_CSENSE_nCS0 (3) SYSTEM_PWRBTN 15 16 EXT_CSENSE_nCS1 SYSTEM_PWRBTN 15 16 EXT_CSENSE_nCS1 17 15 16 18 17 15 16 18 (2) SYSTEM_nRESET EXT_CSENSE_nCS2 (2) SYSTEM_nRESET EXT_CSENSE_nCS2 19 17 18 20 19 17 18 20 (3) SYSTEM_nLOWBAT 19 20 EXT_CSENSE_nCS3 (3) SYSTEM_nLOWBAT 19 20 EXT_CSENSE_nCS3 EXT_POWER_ON0 21 22 EXT_CSENSE_DOUT (3) EXT_POWER_ON0 21 22 EXT_CSENSE_DOUT (3) 23 21 22 24 23 21 22 24 EXT_POWER_ON1 23 24 EXT_CSENSE_SCLK (3) EXT_POWER_ON1 23 24 EXT_CSENSE_SCLK (3) EXT_POWER_ON2 25 26 EXT_CSENSE_DIN (3) EXT_POWER_ON2 25 26 EXT_CSENSE_DIN (3) 27 25 26 28 27 25 26 28 EXT_POWER_ON3 EXT_SPI_nCS EXT_POWER_ON3 EXT_SPI_nCS 29 27 28 30 29 27 28 30 EXT_POWER_ON4 29 30 EXT_SPI_SDI EXT_POWER_ON4 29 30 EXT_SPI_SDI EXT_POWER_ON5 31 32 EXT_SPI_SDO EXT_POWER_ON5 31 32 EXT_SPI_SDO 33 31 32 34 33 31 32 34 (2) EXT_WAKEUP0 33 34 EXT_SPI_SCLK (2) EXT_WAKEUP0 33 34 EXT_SPI_SCLK EXT_WAKEUP1 35 36 EXT_RXD EXT_WAKEUP1 35 36 EXT_RXD 37 35 36 38 37 35 36 38 EXT_WAKEUP2 EXT_TXD EXT_WAKEUP2 EXT_TXD 39 37 38 40 39 37 38 40 EXT_WAKEUP_IN0 39 40 EXT_nCS0 EXT_WAKEUP_IN0 39 40 EXT_nCS0 EXT_WAKEUP_IN1 41 42 EXT_nCS1 EXT_WAKEUP_IN1 41 42 EXT_nCS1 43 41 42 44 43 41 42 44 EXT_WAKEUP_IN2 43 44 EXT_nCS2 EXT_WAKEUP_IN2 43 44 EXT_nCS2 (2) EXT_nIRQ0 45 46 RESERVED (2) EXT_nIRQ0 45 46 RESERVED 47 45 46 48 47 45 46 48 EXT_nIRQ1 PXA_GPIO87 EXT_nIRQ1 PXA_GPIO87 49 47 48 50 49 47 48 50 EXT_IRQ0 49 50 PXA_GPIO54 EXT_IRQ0 49 50 PXA_GPIO54 EXT_IRQ1 51 52 PXA_CIF_D7 EXT_IRQ1 51 52 PXA_CIF_D7 53 51 52 54 53 51 52 54 PXA_SSP2_SFRM 53 54 PXA_GPIO86 PXA_SSP2_SFRM 53 54 PXA_GPIO86 PXA_SRAM_nCS 55 56 PXA_GPIO97 PXA_SRAM_nCS 55 56 PXA_GPIO97 57 55 56 58 57 55 56 58 PXA_CIF_D8 PXA_GPIO96 PXA_CIF_D8 PXA_GPIO96 59 57 58 60 59 57 58 60 PXA_GPIO100 59 60 PXA_CIF_D6 PXA_GPIO100 59 60 PXA_CIF_D6 PXA_GPIO103 61 62 PXA_CIF_D0 
PXA_GPIO103 61 62 PXA_CIF_D0 63 61 62 64 63 61 62 64 PXA_GPIO88 63 64 PXA_SSP2_SCLK PXA_GPIO88 63 64 PXA_SSP2_SCLK PXA_I2C_SDA 65 66 PXA_GPIO44 PXA_I2C_SDA 65 66 PXA_GPIO44 67 65 66 68 67 65 66 68 PXA_I2C_SCL PXA_GPIO92 PXA_I2C_SCL PXA_GPIO92 69 67 68 70 69 67 68 70 PXA_GPIO112 69 70 PXA_GPIO111 PXA_GPIO112 69 70 PXA_GPIO111 PXA_GPIO45 71 72 PXA_CIF_CLKIN PXA_GPIO45 71 72 PXA_CIF_CLKIN 73 71 72 74 73 71 72 74 PXA_GPIO32 73 74 PXA_CIF_LINE_VALID PXA_GPIO32 73 74 PXA_CIF_LINE_VALID PXA_GPIO110 75 76 PXA_CIF_PXCLK PXA_GPIO110 75 76 PXA_CIF_PXCLK C 77 75 76 78 77 75 76 78 C PXA_GPIO16 PXA_CIF_FRAME_VALID PXA_GPIO16 PXA_CIF_FRAME_VALID 79 77 78 80 79 77 78 80 PXA_GPIO17 79 80 PXA_GPIO38 PXA_GPIO17 79 80 PXA_GPIO38 PXA_GPIO109 81 82 PXA_GPIO22 PXA_GPIO109 81 82 PXA_GPIO22 83 81 82 84 83 81 82 84 PXA_USB_P1_DP 83 84 PXA_GPIO27 PXA_USB_P1_DP 83 84 PXA_GPIO27 PXA_USB_P1_DN 85 86 PXA_GPIO29 PXA_USB_P1_DN 85 86 PXA_GPIO29 87 85 86 88 87 85 86 88 PXA_CIF_D3 PXA_GPIO30 PXA_CIF_D3 PXA_GPIO30 89 87 88 90 89 87 88 90 PXA_CIF_D2 89 90 PXA_GPIO113 PXA_CIF_D2 89 90 PXA_GPIO113 PXA_GPIO99 91 92 PXA_GPIO31 PXA_GPIO99 91 92 PXA_GPIO31 93 91 92 94 93 91 92 94 PXA_CIF_D4 93 94 PXA_SSP2_RXD PXA_CIF_D4 93 94 PXA_SSP2_RXD PXA_GPIO53 95 96 PXA_CLK_OUT PXA_GPIO53 95 96 PXA_CLK_OUT 97 95 96 98 97 95 96 98 PXA_CIF_D5 PXA_GPIO28 PXA_CIF_D5 PXA_GPIO28 99 97 98 100 99 97 98 100 PXA_GPIO101 99 100 PXA_SSP2_TXD PXA_GPIO101 99 100 PXA_SSP2_TXD PXA_GPIO52 101 102 PXA_CIF_D1 PXA_GPIO52 101 102 PXA_CIF_D1 103 101 102 104 103 101 102 104 (2) PXA_nSDRAS 103 104 PXA_GPIO91 (2) PXA_nSDRAS 103 104 PXA_GPIO91 (2) PXA_A25 105 106 PXA_GPIO90 (2) PXA_A25 105 106 PXA_GPIO90 107 105 106 108 107 105 106 108 (2) PXA_A24 PXA_nSDCS_IN (2) (2) PXA_A24 PXA_nSDCS_IN (2) 109 107 108 110 109 107 108 110 (2) PXA_A22 109 110 PXA_CIF_D9 (2) PXA_A22 109 110 PXA_CIF_D9 (2) PXA_A20 111 112 PXA_RDY (2) (2) PXA_A20 111 112 PXA_RDY (2) 113 111 112 114 113 111 112 114 (2) PXA_A18 113 114 PXA_A23 (2) (2) PXA_A18 113 114 PXA_A23 (2) 
(2) PXA_A16 115 116 PXA_A21 (2) (2) PXA_A16 115 116 PXA_A21 (2) 117 115 116 118 117 115 116 118 (2) PXA_A14 PXA_A19 (2) (2) PXA_A14 PXA_A19 (2) 119 117 118 120 119 117 118 120 (2) PXA_A12 119 120 PXA_A17 (2) (2) PXA_A12 119 120 PXA_A17 (2) (2) PXA_A10 121 122 PXA_A15 (2) (2) PXA_A10 121 122 PXA_A15 (2) 123 121 122 124 123 121 122 124 (2) PXA_A8 123 124 PXA_A13 (2) (2) PXA_A8 123 124 PXA_A13 (2) (2) PXA_A6 125 126 PXA_A11 (2) (2) PXA_A6 125 126 PXA_A11 (2) 127 125 126 128 127 125 126 128 (2) PXA_A4 PXA_A9 (2) (2) PXA_A4 PXA_A9 (2) 129 127 128 130 129 127 128 130 (2) PXA_A1 129 130 PXA_A7 (2) (2) PXA_A1 129 130 PXA_A7 (2) (2) PXA_nPWE 131 132 PXA_A5 (2) (2) PXA_nPWE 131 132 PXA_A5 (2) 133 131 132 134 133 131 132 134 (2) PXA_RD_nWR 133 134 PXA_A3 (2) (2) PXA_RD_nWR 133 134 PXA_A3 (2) (2) PXA_MBGNT 135 136 PXA_A2 (2) (2) PXA_MBGNT 135 136 PXA_A2 (2) 137 135 136 138 137 135 136 138 (2) PXA_nWE PXA_A0 (2) (2) PXA_nWE PXA_A0 (2) 139 137 138 140 139 137 138 140 (2) PXA_nCS5 139 140 PXA_MBREQ (2) (2) PXA_nCS5 139 140 PXA_MBREQ (2) (2) PXA_nSDCAS 141 142 PXA_nOE (2) (2) PXA_nSDCAS 141 142 PXA_nOE (2) 143 141 142 144 143 141 142 144 (2) PXA_DQM3 143 144 PXA_SDCLK (2) (2) PXA_DQM3 143 144 PXA_SDCLK (2) (2) PXA_D30 145 146 PXA_DQM2 (2) (2) PXA_D30 145 146 PXA_DQM2 (2) 147 145 146 148 147 145 146 148 (2) PXA_D28 PXA_D31 (2) (2) PXA_D28 PXA_D31 (2) 149 147 148 150 149 147 148 150 (2) PXA_D26 149 150 PXA_D29 (2) (2) PXA_D26 149 150 PXA_D29 (2) B (2) PXA_D24 151 152 PXA_D27 (2) (2) PXA_D24 151 152 PXA_D27 (2) B 153 151 152 154 153 151 152 154 (2) PXA_D22 153 154 PXA_D25 (2) (2) PXA_D22 153 154 PXA_D25 (2) (2) PXA_D20 155 156 PXA_D23 (2) (2) PXA_D20 155 156 PXA_D23 (2) 157 155 156 158 157 155 156 158 (2) PXA_D18 PXA_D21 (2) (2) PXA_D18 PXA_D21 (2) 159 157 158 160 159 157 158 160 HPM_VIO (2) PXA_D16 159 160 PXA_D19 (2) HPM_VIO (2) PXA_D16 159 160 PXA_D19 (2) (2) PXA_DQM0 161 162 PXA_D17 (2) (2) PXA_DQM0 161 162 PXA_D17 (2) 163 161 162 164 163 161 162 164 163 164 PXA_DQM1 (2) 163 164 
PXA_DQM1 (2) (2) PXA_D15 165 166 PXA_D14 (2) (2) PXA_D15 165 166 PXA_D14 (2) 167 165 166 168 167 165 166 168 (2) PXA_D13 PXA_D12 (2) (2) PXA_D13 PXA_D12 (2) 169 167 168 170 169 167 168 170 (2) PXA_D11 169 170 PXA_D10 (2) (2) PXA_D11 169 170 PXA_D10 (2) (2) PXA_D9 171 172 PXA_D8 (2) (2) PXA_D9 171 172 PXA_D8 (2) 173 171 172 174 173 171 172 174 (2) PXA_D7 173 174 PXA_D6 (2) (2) PXA_D7 173 174 PXA_D6 (2) (2) PXA_D5 175 176 PXA_D4 (2) (2) PXA_D5 175 176 PXA_D4 (2) 177 175 176 178 177 175 176 178 (2) PXA_D3 PXA_D2 (2) (2) PXA_D3 PXA_D2 (2) 179 177 178 180 179 177 178 180 (2) PXA_D1 179 180 PXA_D0 (2) (2) PXA_D1 179 180 PXA_D0 (2) WALLPLUG_VIN WALLPLUG_VIN 181 181 182 181 182 181 183 182 183 182 184 183 184 183 184 184 185 189 185 189 186 185 189 191 186 185 189 191 187 186 191 190 187 186 191 190 188 187 190 192 188 187 190 192 188 192 188 192 J2 J3 QTH-090-02-F-D-A QSH-090-01-F-D-A

A A

Title Title Mini PCI Module - I/O Connectors Size Document Number Rev Size Document Number Rev C A Date: Sheet Date: Saturday, November 03, 2012 Sheet 4of 4 5 4 3 2 1 B.1.4.2 Layout