Displaying bioacoustic directional information from sonobuoys using "azigrams"

Aaron M. Thode,1,a) Taiki Sakai,2,b) Jeffrey Michalec,3 Shannon Rankin,3 Melissa S. Soldevilla,4 Bruce Martin,5 and Katherine H. Kim6

1 Marine Physical Laboratory, Scripps Institution of Oceanography, University of California San Diego, La Jolla, California 92093-0238, USA
2 Lynker Technologies, LLC, under contract to the Southwest Fisheries Science Center, NMFS/NOAA, La Jolla, California 92037, USA
3 Southwest Fisheries Science Center, NMFS/NOAA, La Jolla, California 92037, USA
4 Southeast Fisheries Science Center, NMFS/NOAA, 75 Virginia Beach Drive, Miami, Florida 33149, USA
5 JASCO Applied Sciences, 32 Troop Avenue, Suite 202, Dartmouth, Nova Scotia, B3B 1Z1, Canada
6 Greeneridge Sciences, Inc., 90 Arnold Place, Suite D, Santa Barbara, California 93117, USA

a) Electronic mail: [email protected]
b) Also at: Lynker Technologies LLC, 202 Church Street Southeast #536, Leesburg, VA 20175, USA.

(Received 16 November 2018; revised 22 May 2019; accepted 5 June 2019; published online 10 July 2019)

The AN/SSQ-53 Directional Frequency Analysis and Recording (DIFAR) sonobuoy is an expendable device that can derive acoustic particle velocity along two orthogonal horizontal axes, along with acoustic pressure. This information enables computation of azimuths of low-frequency acoustic sources from a single compact sensor. The standard approach for estimating azimuth from these sensors is conventional beamforming (i.e., adding weighted time series), but the resulting "cardioid" beampattern is imprecise, computationally expensive, and vulnerable to directional noise contamination for weak signals. Demonstrated here is an alternative multiplicative processing scheme that computes the "active intensity" of an acoustic signal to obtain the dominant directionality of a noise field as a function of time and frequency. This information is conveniently displayed as an "azigram," which is analogous to a spectrogram but uses color to indicate azimuth instead of intensity. Data from several locations demonstrate this approach, which can be computed without demultiplexing the raw signal. Azigrams have been used to help diagnose sonobuoy issues, improve detectability, and estimate bearings of low signal-to-noise ratio signals. Azigrams may also enhance the detection and potential classification of signals embedded in directional noise fields. © 2019 Acoustical Society of America. https://doi.org/10.1121/1.5114810

[KGS] Pages: 95–102

I. INTRODUCTION

A sonobuoy is an expendable device that can transmit acoustic data from a hydrophone to a nearby platform, typically an aircraft. Although the concept was first developed during World War I, the first widespread use of the technology occurred during World War II, and by 1945 the US Navy had ordered 150 000 sonobuoys and 7500 receivers (Holler, 2014).
The first buoys simply used an omnidirectional hydrophone, but as early as 1943, engineers were designing mechanically rotating directional hydrophones that used a gravity motor to spin 3–5 times a minute down a fishing line in order to measure the direction from which acoustic signals arrived. The spiritual descendants of these prototypes, the AN/SSQ-1 and AN/SSQ-20 (derived from a British design), were deployed in the early 1950s. In 1954, Bell Telephone Labs built and unsuccessfully tested the first sonobuoys with orthogonal pressure-gradient hydrophones in an attempt to eliminate the need to mechanically rotate the hydrophone to obtain directional information. Between 1965 and 1969, the first AN/SSQ-53 Directional Frequency Analysis and Recording (DIFAR) sonobuoy was developed, which obtains directionality by deriving acoustic particle acceleration along two orthogonal horizontal axes (vx and vy), along with a pressure component (p) from an omnidirectional sensor. The DIFAR sonobuoy is a workhorse of the current anti-submarine warfare (ASW) fleet and the subject of this manuscript.

While the Navy has used DIFAR sonobuoys extensively since the late 1960s, their first published use for oceanographic research in the open literature occurred two decades later, measuring the directionality of acoustic noise from coastal surf (Wilson et al., 1985). Beginning in the 1990s, their use by civilian researchers expanded further as surplus sonobuoys from the U.S. Navy began to be used by marine bioacousticians to detect and track baleen whales (e.g., D'Spain et al., 1991; Thode et al., 2000; Greene et al., 2004; McDonald, 2004; Miller et al., 2015). At present, surplus sonobuoys are being used to study baleen whales in the Arctic and Antarctic Oceans, the Pacific Ocean, and the Gulf of Mexico.

DIFAR sonobuoys combine the three data streams (p, vx, vy) into a single broadband heterodyned signal before transmitting the signal to ship, shore, or plane, where the signal is then converted back into p, vx, and vy. The signal processing methods many bioacousticians currently use to process DIFAR data have changed little over 50 years. A spectrogram is made of the omnidirectional channel data (p), which has a bandwidth of around 3–4 kHz (depending on signal intensity). A bioacoustic user manually selects a "bounding box" around a transient signal of interest, and the three time series are then bandpass filtered and trimmed into short signal segments. The segments are added together in a weighted sum analogous to conventional beamforming (McDonald, 2004; D'Spain et al., 2006):

B(θ, t) = p(t) + Z0 [vx(t) sin θ + vy(t) cos θ],    (1)

where θ, the "steering angle," is a hypothesized azimuth of an arriving plane-wave signal, typically defined as increasing clockwise relative to the internal x axis of the sensor, consistent with a geographic azimuth (i.e., 0° points to true or geodetic north, and 90° is geodetic east). The free-space impedance Z0 is a conversion factor that ensures that all three time series share the same units and scaling. Since the relationship between acoustic pressure and particle velocity for an acoustic plane wave is v = p/ρc, where ρ and c are the respective density and sound speed of the fluid medium, a common value of Z0 is ρc. Equation (1) can also be computed in the frequency domain. Different beam patterns can be generated using different values of Z0, allowing tradeoffs between beampattern directivity and sidelobe ambiguity.

When evaluated as a function of azimuth, Eq. (1) generates a cardioid beampattern whose output is maximized whenever the steering angle matches the true arrival azimuth of the signal. This maximization requires evaluating Eq. (1) over numerous angles, a computationally cumbersome process. Furthermore, if a weak transient signal is embedded in a directional ambient noise field, Eq. (1) can yield incorrect bearings when computed in the time domain, particularly if the bioacoustic signal of interest is a frequency-modulated (FM) sweep, which is a common form of baleen whale call. Drawing a bounding box around an FM up- or down-sweep incorporates a lot of background noise into the signal, even if bandpass filtering is used.
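To make the conventional workflow concrete, the following minimal sketch (Python with NumPy; not from the original paper) evaluates Eq. (1) over a grid of steering angles for a single demultiplexed, bandpass-filtered call segment and returns the angle whose cardioid beam output has the most energy. The function name, the 1° angular grid, and the nominal seawater impedance Z0 = ρc of roughly 1.5e6 Pa·s/m are illustrative assumptions, not values taken from the paper.

import numpy as np

def cardioid_bearing(p, vx, vy, z0=1.5e6, n_angles=360):
    """Scan the cardioid beam of Eq. (1) over steering angles and return the
    azimuth (degrees clockwise from north) that maximizes beam output energy.

    p, vx, vy : bandpass-filtered, demultiplexed time series for one call;
                vx and vy are particle velocities, scaled by z0 below so
                that all three channels share common units.
    """
    thetas_deg = np.arange(n_angles) * (360.0 / n_angles)
    energies = np.empty(n_angles)
    for k, theta in enumerate(np.deg2rad(thetas_deg)):
        b = p + z0 * (vx * np.sin(theta) + vy * np.cos(theta))  # Eq. (1)
        energies[k] = np.sum(b ** 2)                             # beam output energy
    return thetas_deg[np.argmax(energies)]

Scanning every steering angle for every manually selected call is precisely the computational burden, and the sensitivity to noise inside the bounding box, that the azigram approach described below is intended to avoid.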
Much of this approach is a legacy from an era when signal processing was performed in military hardware due to limitations in computer processing speed. Here we present an alternative approach to computing and displaying DIFAR data that takes place in near-real time and processes all time-frequency cells of a spectrogram, obviating the need to select bounding boxes. This approach is called an "azigram" and is analogous to a conventional spectrogram. Azigrams use an alternative multiplicative approach to computing bearing, first formulated by (Mann et al., 1987; Fahy and Salmon, …) for ASW applications, and commercial software exists for such applied applications (e.g., the "TruView" software suite by GeoSpectrum, Inc.). However, azigram use by bioacousticians is not widespread, with several exceptions (Miksis-Olds et al., 2018). Therefore, the goal of this paper is to illustrate the numerous advantages of this alternative representation of directional bioacoustic data for those not familiar with the technique.

Section II discusses how to demultiplex a DIFAR signal in software, compute azimuths from the active intensity, and generate an azigram. Section III provides illustrative examples of azigrams and highlights useful applications of this kind of plotting, including diagnosing equipment issues, improving azimuth estimation for low signal-to-noise ratio (SNR) frequency-modulated (FM) sweeps, and potentially enhancing the performance of simple automated detectors.

II. THEORY

A. Demultiplexing in frequency domain

Let s(t) be the multiplexed time series received from a DIFAR sensor. The omnidirectional component is defined as p(t), and the two orthogonal particle velocity components are vx(t) and vy(t), respectively. Modern DIFAR sensors have built-in compasses that allow the latter two channels to be mapped relative to magnetic north, with x indicating a magnetic north-south axis and y indicating an east-west axis. The frequency regime below 7.5 kHz represents p(t), which is extracted through simple low-pass filtering. At frequencies above 7.5 kHz the two velocity time series are multiplexed together using Quadrature Amplitude Modulation (QAM) (Grado, 1988; Delagrange, 1992; Holler, 2014),

s(t) = vx(t) cos(2π f0 t) − vy(t) sin(2π f0 t),    (2)

where f0 = 15 kHz is the analog carrier frequency used to generate the QAM output.

The spectrum of s(t) is then

S′(f, T) = (1/2) [vx(f − f0, T) + vx(f + f0, T)] + (i/2) [vy(f − f0, T) − vy(f + f0, T)],    (3)

where S′(f, T) is the Fast Fourier Transform (FFT) of s(t) computed over a particular time window T.
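As a rough illustration of the demultiplexing that Eqs. (2) and (3) describe, the sketch below (Python with NumPy/SciPy; not from the paper) recovers p, vx, and vy from a composite DIFAR-style signal by low-pass filtering and coherent demodulation against an assumed ideal, phase-locked 15 kHz carrier. Real AN/SSQ-53 telemetry also carries pilot and phase-reference tones that a practical decoder must track for the compass correction, so treat this only as a sketch of the underlying math; the function name and filter settings are illustrative.

import numpy as np
from scipy.signal import butter, filtfilt

def demux_difar_sketch(s, fs, f0=15000.0, cutoff=7500.0):
    """Recover (p, vx, vy) from a multiplexed DIFAR-style signal s sampled at
    fs Hz (fs must exceed roughly 45 kHz to capture the 15 kHz sidebands),
    assuming an ideal carrier at f0 and no pilot-tone tracking.
    """
    t = np.arange(len(s)) / fs
    b, a = butter(4, cutoff / (fs / 2.0), btype="low")

    p = filtfilt(b, a, s)                                       # omni channel, below 7.5 kHz
    vx = filtfilt(b, a, 2.0 * s * np.cos(2 * np.pi * f0 * t))   # in-phase (north-south) channel
    vy = filtfilt(b, a, -2.0 * s * np.sin(2 * np.pi * f0 * t))  # quadrature (east-west) channel;
                                                                # sign follows the convention of Eq. (2)
    return p, vx, vy

The same separation can be carried out directly on the windowed spectra of Eq. (3), which is consistent with the abstract's observation that an azigram can be computed without first demultiplexing the raw time series.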