Improved Spectrograms Using the Discrete Fractional Fourier Transform
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Chirp Presentation
Seminar Agenda
• Overview of CHIRP technology compared to traditional fishfinder technology – What's different?
• Importance of proper transducer selection & installation
• Maximize the performance of your electronics system
• Give feedback, offer product suggestions, and ask tough transducer questions

Traditional "Toneburst" Fishfinder
• Traditional fishfinders operate at discrete frequencies such as 50 kHz and 200 kHz.
• This limits depth range, range resolution, and ultimately what targets can be detected in the water column.

Fish Imaging at Different Frequencies: Koden CVS-FX1 at 4 different frequencies.
Range Resolution Comparison: toneburst with separated targets, toneburst without separated targets, CHIRP without separated targets.

Traditional "Toneburst" Fishfinder
• Traditional sounders operate at discrete frequencies such as 50 kHz and 200 kHz.
• This limits resolution, range, and ultimately what targets can be detected in the water column.
• The toneburst transmit pulse may be high power but very short in duration. This limits the total energy that is transmitted into the water column.

CHIRP: A Major Technical Advance in Fishing

What is CHIRP?
• CHIRP has been used by the military, geologists, and oceanographers since the 1950s.
• Marine radar systems have utilized CHIRP technology for many years.
• This is the first time that CHIRP technology has been available to the recreational, sport fishing, and light commercial industries… and at an affordable price.

CHIRP Starts with the Transducer
• AIRMAR CHIRP-ready transducers are the enabling technology for manufacturers designing CHIRP sounders.
• Only sounders using AIRMAR CHIRP-ready transducers can operate as a true CHIRP system.

CHIRP is a technique that involves three principal steps:
1. Use a broadband transducer (Airmar)
2. Transmit a CHIRP pulse into the water
3.
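The range-resolution argument in the slides is easy to reproduce numerically. The sketch below is not from the seminar; it compares a short single-frequency toneburst with a longer linear chirp of the same peak amplitude (the sample rate, durations, and 80 kHz swept bandwidth are illustrative assumptions) and shows that, after matched-filter pulse compression, the chirp delivers far more energy while keeping a narrower compressed main lobe.

```python
# Toneburst vs. chirp: same peak amplitude, very different energy and resolution.
import numpy as np
from scipy.signal import chirp, hilbert

fs = 2e6                                    # sample rate, Hz (assumed)
t_burst = np.arange(0, 100e-6, 1 / fs)      # 100 us toneburst
t_chirp = np.arange(0, 2e-3, 1 / fs)        # 2 ms chirp

toneburst = np.sin(2 * np.pi * 200e3 * t_burst)                    # 200 kHz single-frequency pulse
sweep = chirp(t_chirp, f0=160e3, t1=t_chirp[-1], f1=240e3)         # 80 kHz swept bandwidth

# Matched filtering (pulse compression): correlate each pulse with itself,
# as an ideal point-target echo would be.
comp_burst = np.correlate(toneburst, toneburst, mode="full")
comp_chirp = np.correlate(sweep, sweep, mode="full")

def mainlobe_width_us(y):
    """Width of the compressed envelope at half of its peak, in microseconds."""
    env = np.abs(hilbert(y))
    env /= env.max()
    return np.count_nonzero(env > 0.5) / fs * 1e6

print(f"toneburst: energy ~ {np.sum(toneburst**2) / fs:.2e}, "
      f"compressed main lobe ~ {mainlobe_width_us(comp_burst):.1f} us")
print(f"chirp:     energy ~ {np.sum(sweep**2) / fs:.2e}, "
      f"compressed main lobe ~ {mainlobe_width_us(comp_chirp):.1f} us")
```

The longer chirp transmits roughly 20 times the energy of the toneburst here, yet its compressed main lobe is several times narrower, which is the point the slides make about detecting closely spaced targets.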
Programming Chirp Parameters in TI Radar Devices (Rev. A)
Application Report SWRA553A – May 2017 – Revised February 2020
Programming Chirp Parameters in TI Radar Devices
Vivek Dham

ABSTRACT
This application report provides information on how to select the right chirp parameters in a fast FMCW radar device based on the end application and use case, and program them optimally on TI's radar devices.

Contents
1 Introduction
2 Impact of Chirp Configuration on System Parameters
3 Chirp Configurations for Common Applications
4 Configurable Chirp RAM and Chirp Profiles
5 Chirp Timing Parameters
6 Advanced Chirp Configurations
7 Basic Chirp Configuration Programming Sequence
8 References

List of Figures
1 Typical FMCW Chirp
2 Typical Frame Structure
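As a companion to the report's topic, here is a hedged sketch (not TI code and not tied to any device API) of the standard fast-FMCW relations that connect a chirp configuration to system-level parameters; the helper name and every numeric value are illustrative assumptions, and the chirp period is approximated by the ramp time with idle time ignored.

```python
# First-order FMCW system parameters implied by a chirp configuration.
C = 3e8  # speed of light, m/s

def fmcw_system_params(slope_mhz_per_us, ramp_time_us, adc_samples,
                       sample_rate_msps, start_freq_ghz, chirps_per_frame):
    slope = slope_mhz_per_us * 1e12                     # chirp slope, Hz/s
    t_chirp = ramp_time_us * 1e-6                       # chirp period ~ ramp time (idle ignored)
    t_sample = adc_samples / (sample_rate_msps * 1e6)   # time actually sampled by the ADC, s
    bandwidth = slope * t_sample                        # swept bandwidth seen by the ADC, Hz
    wavelength = C / (start_freq_ghz * 1e9)             # carrier wavelength, m
    return {
        "range_resolution_m": C / (2 * bandwidth),
        # Complex (I/Q) sampling assumed, so the max beat frequency equals the sample rate.
        "max_range_m": (sample_rate_msps * 1e6) * C / (2 * slope),
        "max_velocity_mps": wavelength / (4 * t_chirp),
        "velocity_resolution_mps": wavelength / (2 * chirps_per_frame * t_chirp),
    }

params = fmcw_system_params(slope_mhz_per_us=30, ramp_time_us=60, adc_samples=256,
                            sample_rate_msps=10, start_freq_ghz=77, chirps_per_frame=128)
for name, value in params.items():
    print(f"{name}: {value:.3f}")
```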
Enhance Your DSP Course with These Interesting Projects
AC 2012-3836: ENHANCE YOUR DSP COURSE WITH THESE INTERESTING PROJECTS
Dr. Joseph P. Hoffbeck, University of Portland

Joseph P. Hoffbeck is an Associate Professor of electrical engineering at the University of Portland in Portland, Ore. He has a Ph.D. from Purdue University, West Lafayette, Ind. He previously worked with digital cell phone systems at Lucent Technologies (formerly AT&T Bell Labs) in Whippany, N.J. His technical interests include communication systems, digital signal processing, and remote sensing.

© American Society for Engineering Education, 2012

Enhance your DSP Course with these Interesting Projects

Abstract
Students are often more interested in learning technical material if they can see useful applications for it, and in digital signal processing (DSP) it is possible to develop homework assignments, projects, or lab exercises to show how the techniques can be used in realistic situations. This paper presents six simple, yet interesting projects that are used in the author's undergraduate digital signal processing course with the objective of motivating the students to learn how to use the Fast Fourier Transform (FFT) and how to design digital filters. Four of the projects are based on the FFT, including a simple voice recognition algorithm that determines if an audio recording contains "yes" or "no", a program to decode dual-tone multi-frequency (DTMF) signals, a project to determine which note is played by a musical instrument and if it is sharp or flat, and a project to check the claim that cars honk in the tone of F. Two of the projects involve designing filters to eliminate noise from audio recordings, including designing a lowpass filter to remove a truck backup beeper from a recording of an owl hooting and designing a highpass filter to remove jet engine noise from a recording of birds chirping.
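As an illustration of the DTMF project idea mentioned above, here is a minimal sketch; it is not the author's course code, the keypad table and helper names are my own, and real recordings would also need framing and level thresholding.

```python
# Decode a DTMF digit by locating the two strongest keypad tones in an FFT.
import numpy as np

ROWS = [697, 770, 852, 941]          # DTMF row frequencies, Hz
COLS = [1209, 1336, 1477, 1633]      # DTMF column frequencies, Hz
KEYPAD = [["1", "2", "3", "A"],
          ["4", "5", "6", "B"],
          ["7", "8", "9", "C"],
          ["*", "0", "#", "D"]]

def dtmf_tone(digit, fs=8000, duration=0.2):
    """Synthesize the two-tone signal for one keypad digit."""
    r = next(i for i, row in enumerate(KEYPAD) if digit in row)
    c = KEYPAD[r].index(digit)
    t = np.arange(int(fs * duration)) / fs
    return np.sin(2 * np.pi * ROWS[r] * t) + np.sin(2 * np.pi * COLS[c] * t)

def decode_dtmf(x, fs=8000):
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    def strongest(candidates):
        # Pick the candidate frequency whose nearest FFT bin has the largest magnitude.
        return max(candidates, key=lambda f: spectrum[np.argmin(np.abs(freqs - f))])
    return KEYPAD[ROWS.index(strongest(ROWS))][COLS.index(strongest(COLS))]

print(decode_dtmf(dtmf_tone("5")))   # expected output: 5
```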
The Digital Wide Band Chirp Pulse Generator and Processor for Pi-Sar2
THE DIGITAL WIDE BAND CHIRP PULSE GENERATOR AND PROCESSOR FOR PI-SAR2
Takashi Fujimura*, Shingo Matsuo**, Isamu Oihara***, Hideharu Totsuka**** and Tsunekazu Kimura*****
NEC Corporation (NEC), Guidance and Electro-Optics Division
Address: Nisshin-cho 1-10, Fuchu, Tokyo, 183-8501 Japan
Phone: +81-(0)42-333-1148 / FAX: +81-(0)42-333-1887
E-mail: * [email protected], ** [email protected], *** [email protected], **** [email protected], ***** [email protected]

BRIEF CONCLUSION
This paper presents the digital wide band chirp pulse generator and processor for Pi-SAR2, and the history of its development at NEC. The generator and processor can generate chirp pulses of 150, 300, or 500 MHz bandwidth and process video signals of the same bandwidth. The offset video method is applied in this component in order to achieve small phase error, instead of the I/Q video method used in conventional SAR systems. Pi-SAR2 achieved 0.3 m resolution with the 500 MHz bandwidth by means of this component.

ABSTRACT
NEC developed the first Japanese airborne SAR, NEC-SAR, for R&D in 1992 [1]. Its resolution was 5 m, and this SAR generated its 50 MHz bandwidth chirp pulse with a digital chirp generator. Based on this technology, we have developed many airborne SARs. For example, Pi-SAR (X-band) for CRL (now NICT) can observe 1.5 m resolution SAR images with a 100 MHz bandwidth [2][3]. Its chirp pulse generator and processor adopted the I/Q video method, and the sampling rate was 123.45 MHz for each I/Q video channel of the processor.
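For context, a linear-FM pulse like the ones discussed here can be synthesized digitally from its quadratic phase. The sketch below is schematic only and is not the Pi-SAR2 design; the function name, sample rate, and offset carrier are illustrative assumptions, and the offset parameter merely hints at the offset-video (real carrier) versus baseband I/Q distinction mentioned in the conclusion.

```python
# Precompute samples of a linear FM (chirp) pulse for playout through a DAC.
import numpy as np

def digital_chirp(bandwidth_hz, pulse_width_s, sample_rate_hz, center_offset_hz=0.0):
    """Return real-valued samples of a linear FM pulse.

    center_offset_hz places the sweep on an offset carrier (offset-video style);
    set it to 0 for a sweep centered about DC.
    """
    n = int(round(pulse_width_s * sample_rate_hz))
    t = np.arange(n) / sample_rate_hz
    k = bandwidth_hz / pulse_width_s                 # chirp rate, Hz/s
    f_start = center_offset_hz - bandwidth_hz / 2
    phase = 2 * np.pi * (f_start * t + 0.5 * k * t**2)
    return np.cos(phase)

# Example: 500 MHz sweep, 10 us pulse, 2 GS/s sampling, 400 MHz offset carrier.
pulse = digital_chirp(500e6, 10e-6, 2e9, center_offset_hz=400e6)
print(len(pulse), "samples")
```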
4 – Synthesis and Analysis of Complex Waves; Fourier Spectra
4 – Synthesis and Analysis of Complex Waves; Fourier Spectra

Complex waves
Many physical systems (such as musical instruments) allow the existence of a particular set of standing waves, with a frequency spectrum consisting of the fundamental (minimal allowed) frequency f1 and the overtones or harmonics fn that are multiples of f1, that is, fn = n f1, n = 2, 3, … The amplitudes of the harmonics An depend on how exactly the system is excited (plucking a guitar string at the middle or near an end). The sound emitted by the system and registered by our ears is thus a complex wave (or, more precisely, a complex oscillation), or just a signal, of the general form

$W(t) = \sum_{n=1}^{n_{\max}} A_n \sin(2\pi n f_1 t + \varphi_n)$, periodic with period $T = 1/f_1$.

Plots of W(t), which can look complicated, are called wave forms. The maximal number of harmonics nmax is actually infinite, but we can take a finite number of harmonics to synthesize a complex wave. The phases $\varphi_n$ influence the wave form a lot visually, but the human ear is insensitive to them (the psychoacoustical Ohm's law). What the ear hears is the fundamental frequency (even if it is missing in the wave form, A1 = 0!) and the amplitudes An that determine the sound quality or timbre.

Examples of synthesis of complex waves
1. Triangular wave: $A_n = 1/n^2$ for n odd and zero for n even (plotted for f1 = 500 Hz).
[Figure: wave forms W(t) over 0 to 0.004 s for nmax = 1 (pure sinusoidal), nmax = 3, and nmax = 11, plus an nmax = 11 version with cosines in place of sines; the sine and cosine versions are acoustically equivalent.]
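The synthesis formula above is straightforward to evaluate numerically. This short sketch is mine, not from the notes; it builds the triangular-wave example with $A_n = 1/n^2$ for odd n and f1 = 500 Hz for a few values of nmax.

```python
# Fourier synthesis of a triangular-like wave from its odd harmonics.
import numpy as np

def synthesize(f1, n_max, t, phases=None):
    """W(t) = sum_n A_n sin(2*pi*n*f1*t + phi_n), with A_n = 1/n^2 for odd n."""
    w = np.zeros_like(t)
    for n in range(1, n_max + 1):
        if n % 2 == 1:                           # even harmonics are absent
            phi = 0.0 if phases is None else phases[n]
            w += np.sin(2 * np.pi * n * f1 * t + phi) / n**2
    return w

fs = 44100
t = np.arange(0, 0.004, 1 / fs)                  # 4 ms, i.e. two periods at 500 Hz
for n_max in (1, 3, 11):
    w = synthesize(500, n_max, t)
    print(f"nmax = {n_max:2d}: peak amplitude {np.max(np.abs(w)):.3f}")

# Changing the phases phi_n alters the wave form visually but, by the
# psychoacoustical Ohm's law quoted in the notes, not the perceived timbre.
```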
Lecture 1-10: Spectrograms
Lecture 1-10: Spectrograms

Overview
1. Spectra of dynamic signals: like many real world signals, speech changes in quality with time. But so far the only spectral analysis we have performed has assumed that the signal is stationary: that it has constant quality. We need a new kind of analysis for dynamic signals. A suitable analogy is that we have spectral 'snapshots' when what we really want is a spectral 'movie'. Just like a real movie is made up from a series of still frames, we can display the spectral properties of a changing signal through a series of spectral snapshots.
2. The spectrogram: a spectrogram is built from a sequence of spectra by stacking them together in time and by compressing the amplitude axis into a 'contour map' drawn in a grey scale. The final graph has time along the horizontal axis, frequency along the vertical axis, and the amplitude of the signal at any given time and frequency is shown as a grey level. Conventionally, black is used to signal the most energy, while white is used to signal the least.
3. Spectra of short and long waveform sections: we will get a different type of movie if we choose each of our snapshots to be of relatively short sections of the signal, as opposed to relatively long sections. This is because the spectrum of a section of speech signal that is less than one pitch period long will tend to show formant peaks; while the spectrum of a longer section encompassing several pitch periods will show the individual harmonics, see figure 1-10.1.
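To make points 2 and 3 concrete, here is a brief sketch (not from the lecture notes) that builds a spectrogram as a stack of short-time spectra; the synthetic harmonic signal, the 120 Hz "pitch", and the window lengths are illustrative choices.

```python
# Short vs. long analysis windows: wide-band vs. narrow-band spectrograms.
import numpy as np
from scipy.signal import spectrogram

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
# A stand-in for a voiced, speech-like signal: a 120 Hz fundamental plus harmonics.
x = sum(np.sin(2 * np.pi * 120 * k * t) / k for k in range(1, 20))

for label, nperseg in [("wide-band (short window)", 64),
                       ("narrow-band (long window)", 1024)]:
    f, frames, Sxx = spectrogram(x, fs=fs, nperseg=nperseg, noverlap=nperseg // 2)
    print(f"{label}: {len(frames)} frames, "
          f"frequency resolution ~ {fs / nperseg:.1f} Hz")

# Plotting 10*log10(Sxx) against (frames, f) as a grey-scale image gives the
# conventional spectrogram display described in the notes: short windows smear
# the harmonics but track changes in time, long windows resolve each harmonic.
```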
Analysis of EEG Signal Processing Techniques Based on Spectrograms
ISSN 1870-4069
Analysis of EEG Signal Processing Techniques based on Spectrograms
Ricardo Ramos-Aguilar, J. Arturo Olvera-López, Ivan Olmos-Pineda
Benemérita Universidad Autónoma de Puebla, Faculty of Computer Science, Puebla, Mexico
[email protected], {aolvera, iolmos}@cs.buap.mx

Abstract. Current approaches for the processing and analysis of EEG signals consist mainly of three phases: preprocessing, feature extraction, and classification. The analysis of EEG signals is carried out in different domains: time, frequency, or time-frequency; the first is the most common, while the last shows competitive results, implementing different techniques with several advantages in analysis. This paper aims to present a general description of works and methodologies for EEG signal analysis in time-frequency, using the Short Time Fourier Transform (STFT) as a representation, or a spectrogram, to analyze EEG signals.

Keywords. EEG signals, spectrogram, short time Fourier transform.

1 Introduction
The human brain is one of the most complex organs in the human body. It is considered the center of the human nervous system and controls different organs and functions, such as the pumping of the heart, the secretion of glands, breathing, and internal temperature. Cells called neurons are the basic units of the brain, which send electrical signals to control the human body and can be measured using electroencephalography (EEG).
Electroencephalography measures the electrical activity of the brain by recording via electrodes placed either on the scalp or on the cortex. These time-varying records produce potential differences because of the electrical cerebral activity. The signal generated by this electrical activity is a complex random signal and is non-stationary [1].
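Below is a hedged sketch of the three-phase pipeline the abstract describes (preprocessing, feature extraction, classification), using an STFT spectrogram as the time-frequency representation; the synthetic "EEG", filter settings, and clinical band edges are illustrative and are not taken from the paper.

```python
# Preprocess an EEG-like signal, compute its spectrogram, and extract band powers.
import numpy as np
from scipy.signal import spectrogram, butter, filtfilt

fs = 256                                     # typical EEG sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)   # alpha-like rhythm + noise

# Preprocessing: band-pass 0.5-40 Hz to suppress drift and high-frequency noise.
b, a = butter(4, [0.5, 40], btype="bandpass", fs=fs)
eeg_filtered = filtfilt(b, a, eeg)

# Feature extraction: STFT spectrogram, then mean power per conventional band.
f, frames, Sxx = spectrogram(eeg_filtered, fs=fs, nperseg=fs, noverlap=fs // 2)
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
features = {name: Sxx[(f >= lo) & (f < hi)].mean(axis=0)
            for name, (lo, hi) in bands.items()}

for name, values in features.items():
    print(f"{name:5s} mean power per frame: {values.mean():.4f}")

# Classification (not shown) would feed these per-frame band powers, or raw
# spectrogram patches, to a classifier.
```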
Analysis of Chirped Oscillators Under Injection Signals
Analysis of Chirped Oscillators Under Injection Signals
Franco Ramírez, Sergio Sancho, Mabel Pontón, Almudena Suárez
Dept. of Communications Engineering, University of Cantabria, Santander, Spain

Abstract—An in-depth investigation of the response of chirped oscillators under injection signals is presented. The study confirms the oscillator's capability to detect the input-signal frequencies, demonstrated in former works. To overcome previous analysis limitations, a new formulation is presented, which is able to accurately predict the system dynamics in both locked and unlocked conditions. It describes the chirped oscillator in the envelope domain, where two time scales are used, one associated with the oscillator control voltage and the other associated with the beat frequency. Under sufficient input amplitude, a dynamic synchronization interval is generated about the input-signal frequency. In this interval, the circuit operates at the input frequency, with a phase shift that varies slowly at the rate of the control voltage. The formulation demonstrates the possibility of detecting the input-signal frequency from the dynamics of the beat frequency, which increases the system sensitivity.

… reach a locked state during the chirp-signal period. The possible system application as a receiver of frequency-modulated signals with carriers within the chirped-oscillator frequency band is investigated, which may, for instance, have interest in sensor networks.

II. FORMULATION OF THE INJECTED CHIRPED OSCILLATOR
Consider the oscillator in Fig. 1, which exhibits the free-running oscillation frequency and amplitude at the control voltage. A sawtooth waveform will be introduced at the control node, and one or more input signals will be injected, modeled with their equivalent currents.
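The paper's envelope-domain formulation is not reproduced here. As a qualitative stand-in, the sketch below integrates a classic Adler-type phase equation with a swept free-running frequency, which exhibits the same kind of dynamic synchronization interval the abstract describes; the locking range, sweep rate, and detection threshold are all illustrative assumptions.

```python
# Adler-type phase model of an injected oscillator whose free-running
# frequency is chirped through the injection frequency.
import numpy as np
from scipy.integrate import solve_ivp

f_lock = 2e5          # half-width of the static locking range, Hz (assumed)
sweep_rate = 5e9      # rate at which the free-running frequency is swept, Hz/s (assumed)
t_end = 2e-4          # one sweep segment, s

def detuning(t):
    """Free-running minus injection frequency, Hz; crosses zero mid-sweep."""
    return sweep_rate * (t - t_end / 2)

def adler(t, phi):
    # d(phi)/dt = 2*pi*detuning(t) - 2*pi*f_lock*sin(phi)
    return 2 * np.pi * detuning(t) - 2 * np.pi * f_lock * np.sin(phi)

t_eval = np.linspace(0.0, t_end, 20001)
sol = solve_ivp(adler, (0.0, t_end), [0.0], t_eval=t_eval, max_step=5e-8)

beat = np.gradient(sol.y[0], t_eval) / (2 * np.pi)   # instantaneous beat frequency, Hz
locked = np.abs(beat) < 0.05 * f_lock                # "quasi-locked" when the beat collapses
if locked.any():
    t_lock = t_eval[locked]
    print(f"dynamic synchronization from {t_lock[0] * 1e6:.1f} us to "
          f"{t_lock[-1] * 1e6:.1f} us (zero detuning at {t_end / 2 * 1e6:.1f} us)")
```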
Femtosecond Optical Pulse Shaping and Processing
Prog. Quant. Electr. 1995, Vol. 19, pp. 161–237
Copyright © 1995 Elsevier Science Ltd. Printed in Great Britain. All rights reserved.
Pergamon, 0079-6727(94)00013-1

FEMTOSECOND OPTICAL PULSE SHAPING AND PROCESSING
A. M. Weiner
School of Electrical Engineering, Purdue University, West Lafayette, IN 47907, U.S.A.

CONTENTS
1. Introduction
2. Fourier Synthesis of Ultrafast Optical Waveforms
   2.1. Pulse shaping by linear filtering
   2.2. Picosecond pulse shaping
   2.3. Fourier synthesis of femtosecond optical waveforms
        2.3.1. Fixed masks fabricated using microlithography
        2.3.2. Spatial light modulators (SLMs) for programmable pulse shaping
               2.3.2.1. Pulse shaping using liquid crystal modulator arrays
               2.3.2.2. Pulse shaping using acousto-optic deflectors
        2.3.3. Movable and deformable mirrors for special purpose pulse shaping
        2.3.4. Holographic masks
        2.3.5. Amplification of spectrally dispersed broadband laser pulses
   2.4. Theoretical considerations
   2.5. Pulse shaping by phase-only filtering
   2.6. An alternate Fourier synthesis pulse shaping technique
3. Additional Pulse Shaping Methods
   3.1. Additional passive pulse shaping techniques
        3.1.1. Pulse shaping using delay lines and interferometers
        3.1.2. Pulse shaping using volume holography
        3.1.3. Pulse shaping using integrated acousto-optic tunable filters
        3.1.4. Atomic and molecular pulse shaping
   3.2. Active Pulse Shaping Techniques
        3.2.1. Non-linear optical gating
        3.2.2.
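The idea behind Section 2.1, pulse shaping by linear filtering, can be illustrated numerically: multiply the pulse spectrum by an amplitude/phase mask and transform back. The sketch below is not from the review; it applies an assumed quadratic spectral phase (group-delay dispersion) to a 100 fs Gaussian pulse and reports the resulting stretching, with all values illustrative.

```python
# Pulse shaping as spectral filtering: apply a phase-only mask in the frequency domain.
import numpy as np

dt = 1e-15                                     # 1 fs time grid
t = (np.arange(8192) - 4096) * dt
tau = 100e-15 / (2 * np.sqrt(np.log(2)))       # Gaussian with 100 fs intensity FWHM
pulse = np.exp(-t**2 / (2 * tau**2))           # input field envelope

freq = np.fft.fftfreq(t.size, dt)              # offset frequency grid, Hz
spectrum = np.fft.fft(pulse)

gdd = 2e-26                                    # assumed group-delay dispersion, s^2
mask = np.exp(1j * 0.5 * gdd * (2 * np.pi * freq)**2)   # phase-only spectral mask
shaped = np.fft.ifft(spectrum * mask)

def fwhm_fs(field):
    """Intensity full width at half maximum, in femtoseconds."""
    intensity = np.abs(field)**2
    return np.count_nonzero(intensity > 0.5 * intensity.max()) * dt * 1e15

print(f"input FWHM ~ {fwhm_fs(pulse):.0f} fs, shaped FWHM ~ {fwhm_fs(shaped):.0f} fs")
```

In a real pulse shaper the same mask would be applied physically by a spatially patterned element placed where the spectrum is dispersed, which is the arrangement the review surveys.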
Advanced Chirp Transform Spectrometer with Novel Digital Pulse Compression Method for Spectrum Detection
Applied Sciences (Article)
Advanced Chirp Transform Spectrometer with Novel Digital Pulse Compression Method for Spectrum Detection
Quan Zhao*, Ling Tong* and Bo Gao
School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China; [email protected]
* Correspondence: [email protected] (Q.Z.); [email protected] (L.T.); Tel.: +86-18384242904 (Q.Z.)

Abstract: Based on chirp transform and pulse compression technology, chirp transform spectrometers (CTSs) can be used to perform high-resolution and real-time spectrum measurements. Nowadays, they are widely applied for weather and astronomical observations. The surface acoustic wave (SAW) filter is a key device for pulse compression. The system performance is significantly affected by the dispersion characteristics match and the large insertion loss of the SAW filters. In this paper, a linear phase sampling and accumulating (LPSA) algorithm was developed to replace the matched filter for fast pulse compression. By selecting and accumulating the sampling points satisfying a specific periodic phase distribution, the intermediate frequency (IF) chirp signal carrying the information of the input signal could be detected and compressed. Spectrum measurements across the entire operational bandwidth could be performed by shifting the fixed sampling points in the time domain. A two-stage frequency resolution subdivision method was also developed for fast pulse compression of the sparse spectrum, which was shown to significantly improve the calculation speed. The simulation and experiment results demonstrate that the LPSA method can realize fast pulse compression with adequately high amplitude accuracy and frequency resolution.
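The LPSA algorithm itself is not reproduced here. For orientation, the sketch below shows the conventional matched-filter pulse compression that LPSA is designed to replace, together with the chirp-transform property that an input tone maps to a compressed peak whose lag is proportional to its frequency; the sample rate, chirp bandwidth, and duration are illustrative assumptions.

```python
# Chirp-transform style spectrum detection via matched-filter pulse compression.
import numpy as np

fs = 50e6                          # sample rate, Hz (illustrative)
T = 100e-6                         # chirp duration, s
B = 20e6                           # chirp bandwidth, Hz
t = np.arange(int(fs * T)) / fs
k = B / T                          # chirp rate, Hz/s
reference = np.exp(1j * np.pi * k * t**2)      # reference up-chirp

def recover_frequency(tone_freq_hz):
    """Mix a tone with the chirp, compress against the reference, read the peak lag."""
    mixed = np.exp(2j * np.pi * tone_freq_hz * t) * reference
    out = np.abs(np.correlate(mixed, reference, mode="full"))
    lag = (np.argmax(out) - (len(t) - 1)) / fs   # seconds; equals -f/k with this convention
    return -lag * k                              # frequency estimate, Hz

for f_in in (2e6, 5e6, 8e6):
    print(f"input {f_in / 1e6:.1f} MHz -> recovered {recover_frequency(f_in) / 1e6:.2f} MHz")
```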
Attention, I'm Trying to Speak — CS224n Project: Speech Synthesis
Attention, I'm Trying to Speak
CS224n Project: Speech Synthesis
Akash Mahajan
Management Science & Engineering, Stanford University
[email protected]

Abstract
We implement an end-to-end parametric text-to-speech synthesis model that produces audio from a sequence of input characters, and demonstrate that it is possible to build a convolutional sequence-to-sequence model with reasonably natural voice and pronunciation from scratch in well under $75. We observe training the attention to be a bottleneck and experiment with 2 modifications. We also note interesting model behavior and insights during our training process. Code for this project is available at: https://github.com/akashmjn/cs224n-gpu-that-talks.

1 Introduction
We have come a long way from the ominous robotic-sounding voices used in the Radiohead classic 1. If we are to build significant voice interfaces, we need to also have good feedback that can communicate with us clearly and that can be set up cheaply for different languages and dialects. There has recently also been an increased interest in generative models for audio [6] that could have applications beyond speech, to music and other signal-like data. As discussed in [13], it is fascinating that it is possible at all for an end-to-end model to convert a highly "compressed" source - text - into a substantially more "decompressed" form - audio. This is a testament to the sheer power and flexibility of deep learning models, and has been an interesting and surprising insight in itself. Recently, results from Tachibana et al. [11] reportedly produce reasonable-quality speech without requiring as large computational resources as Tacotron [13] and WaveNet [7].
Time-Frequency Analysis of Time-Varying Signals and Non-Stationary Processes
Time-Frequency Analysis of Time-Varying Signals and Non-Stationary Processes
An Introduction
Maria Sandsten, 2020
CENTRUM SCIENTIARUM MATHEMATICARUM, Centre for Mathematical Sciences

Contents
1 Introduction
  1.1 Spectral analysis history
  1.2 A time-frequency motivation example
2 The spectrogram
  2.1 Spectrum analysis
  2.2 The uncertainty principle
  2.3 STFT and spectrogram
  2.4 Gabor expansion
  2.5 Wavelet transform and scalogram
  2.6 Other transforms
3 The Wigner distribution
  3.1 Wigner distribution and Wigner spectrum
  3.2 Properties of the Wigner distribution
  3.3 Some special signals
  3.4 Time-frequency concentration
  3.5 Cross-terms
  3.6 Discrete Wigner distribution
4 The ambiguity function and other representations
  4.1 The four time-frequency domains
  4.2 Ambiguity function
  4.3 Doppler-frequency distribution
5 Ambiguity kernels and the quadratic class
  5.1 Ambiguity kernel
  5.2 Properties of the ambiguity kernel
  5.3 The Choi-Williams distribution
  5.4 Separable kernels
  5.5 The Rihaczek distribution
  5.6 Kernel interpretation of the spectrogram
  5.7 Multitaper time-frequency analysis
6 Optimal resolution of time-frequency spectra
  6.1 Concentration measures
  6.2 Instantaneous frequency
  6.3 The reassignment technique
  6.4 Scaled reassigned spectrogram
  6.5 Other modern techniques for optimal resolution
7 Stochastic time-frequency analysis
  7.1 Definitions of non-stationary processes