Summer 2020 Volume 16, Issue 2 An Acoustical Society of America publication

The Tuning Fork: An Amazing Acoustics Apparatus

Simulation + testing = optimized loudspeaker designs

Acoustic pressure within a speaker box and the sound pressure level in the surrounding domain.

A global leader in electronics rose to the top of the audio industry by adding multiphysics simulation to their design workflow. COMSOL Multiphysics® enables audio engineers to couple acoustics analyses and other physical phenomena to address design challenges inherent to loudspeaker and soundbar designs. The COMSOL Multiphysics® software is used for simulating designs, devices, and processes in all fields of engineering, manufacturing, and scientific research. See how you can apply it to your loudspeaker designs.

comsol.blog/loudspeaker-design

MICROPHONES FOR CONSUMER GOODS TESTING: FOR WHEN YOU NEED TO TAKE A SOUND MEASUREMENT
■ A broad range of high quality products at unbeatable prices
■ Shipped to you fast
■ The best warranty in the business

1 800 828 8840 | pcb.com/consumergoods

MTS Sensors, a division of MTS Systems Corporation (NASDAQ: MTSC), vastly expanded its range of products and solutions after MTS acquired PCB Piezotronics, Inc. in July 2016. PCB Piezotronics, Inc. is a wholly owned subsidiary of MTS Systems Corp.; IMI Sensors and Larson Davis are divisions of PCB Piezotronics, Inc.; Accumetrics, Inc. and The Modal Shop, Inc. are subsidiaries of PCB Piezotronics, Inc.

Summer 2020 Volume 16, Issue 2

7 ASA Statement on Racism and Injustice
8 From the Editor
10 From the President

Featured Articles
13 The Underwater Sound from Offshore Wind Farms
Jennifer Amaral, Kathleen Vigness-Raposa, James H. Miller, Gopu R. Potty, Arthur Newhall, and Ying-Tsong Lin
22 Solving Complex Acoustic Problems Using High-Performance Computations
Gregory Bunting, Clark R. Dohrmann, Scott T. Miller, and Timothy F. Walsh
31 Battlefield Acoustics in the First World War: Artillery Location
Richard Daniel Costley Jr.
40 Bioacoustic Attenuation Spectroscopy: A New Approach to Monitoring Fish at Sea
Orest Diachok
48 The Tuning Fork: An Amazing Acoustics Apparatus
Daniel A. Russell
56 Speech Acoustics of the World's Languages
Benjamin V. Tucker and Richard Wright
65 The Adapted Ears of Big Cats and Golden Moles: Exotic Outcomes of the Evolutionary Radiation of Mammals
Edward J. Walsh and JoAnn McGee

Sound Perspectives
73 Ask an Acoustician
Subha Maruvada and Micheal L. Dent
77 JASA-EL to Become an Independent, "Gold-Level," Open-Access Journal
Charles C. Church
79 A Perspective on Proceedings of Meetings on Acoustics
Kent L. Gee, Megan S. Ballard, and Helen Wall Murray
82 ASA Books Committee
Mark F. Hamilton
85 Data, Dinners, and Diapers: Traveling with a Baby to a Scientific Conference
Laura N. Kloepper
87 Work-Parenting Harmony
Tracianne B. Neilsen and Alison K. Stimpert
91 Spooked!
Lenny Rudow

Departments
84 ASA Press
93 Obituaries
Whitlow Au | 1941–2020
Jan F. Lindberg | 1941–2020
6 Advertisers Index
76 Business Directory

About the Cover

Image from “The Tuning Fork: An Amazing Acoustics Apparatus” by Daniel A. Russell, on page 48. The image shows flexural bending modes for a tuning fork.

Sound and Vibration Instrumentation
Scantek, Inc.

Sound Level Meters
Selection of sound level meters for simple noise level measurements or advanced acoustical analysis

Vibration Meters
Vibration meters for measuring overall vibration levels, simple to advanced FFT analysis and human exposure to vibration

Prediction Software
Software for prediction of environmental noise, building insulation and room acoustics using the latest standards

Building Acoustics
Systems for airborne sound transmission, impact insulation, STIPA, reverberation and other room acoustics measurements

Sound Localization
Near-field or far-field sound localization and identification using Norsonic's state of the art acoustic camera

Monitoring
Temporary or permanent remote monitoring of noise or vibration levels with notifications of exceeded limits

Specialized Test Systems
Impedance tubes, capacity and volume measurement systems, air-flow resistance measurement devices and calibration systems

Multi-Channel Systems
Multi-channel analyzers for sound power, vibration, building acoustics and FFT analysis in the laboratory or in the field

Industrial Hygiene
Noise alert systems and dosimeters for facility noise monitoring or hearing conservation programs

Scantek, Inc. | www.ScantekInc.com | 800-224-3813

Editor
Arthur N. Popper | [email protected]

Associate Editor
Micheal L. Dent | [email protected]

Book Review Editor
Philip L. Marston | [email protected]

ASA Publications Staff
Kat Setzer, Editorial Assistant | [email protected]
Helen A. Popper, AT Copyeditor | [email protected]
Liz Bury, Senior Managing Editor | [email protected]

ASA Editor In Chief
James F. Lynch
Allan D. Pierce, Emeritus

Acoustical Society of America
Diane Kewley-Port, President
Stan E. Dosso, Vice President
Maureen Stone, President-Elect
Joseph R. Gladden, Vice President-Elect
Judy R. Dubno, Treasurer
Christopher J. Struck, Standards Director
Susan E. Fox, Executive Director

ASA Web Development Office
Daniel Farrell | [email protected]

The Acoustical Society of America was founded in 1929 "to generate, disseminate, and promote the knowledge and practical applications of acoustics." Information about the Society can be found on the website: www.acousticalsociety.org. Membership includes a variety of benefits, a list of which can be found at the website: www.acousticalsociety.org/asa-membership.

Acoustics Today (ISSN 1557-0215, coden ATCODK), Summer 2020, volume 16, issue 2, is published quarterly by the Acoustical Society of America, Suite 300, 1305 Walt Whitman Rd., Melville, NY 11747-4300. Periodicals postage rates are paid at Huntington Station, NY, and additional mailing offices. POSTMASTER: Send address changes to Acoustics Today, Acoustical Society of America, Suite 300, 1305 Walt Whitman Rd., Melville, NY 11747-4300.

Copyright 2020, Acoustical Society of America. All rights reserved. Single copies of individual articles may be made for private use or research. For more information on obtaining permission to reproduce content from this publication, please see www.acousticalsociety.org.

Visit the online edition of Acoustics Today at AcousticsToday.org

Publications Office
P.O. Box 809, Mashpee, MA 02649 | (508) 534-8645

Follow us on Twitter @acousticsorg

Advertisers Index
Brüel & Kjaer ...... Cover 4 | www.bksv.com
Commercial Acoustics ...... Cover 3 | www.mfmca.com
Comsol ...... Cover 2 | www.comsol.com
JLI Electronics ...... Page 76 | www.jlielectronics.com
NTI Audio AG ...... Page 12 | www.nti-audio.com
PCB Piezotronics, Inc. ...... Page 3 | www.pcb.com
Quiet Curtains ...... Page 76 | www.quietcurtains.com
Scantek ...... Page 5 | www.scantekinc.com

Advertising Sales & Production
Debbie Bott, Advertising Sales Manager
Acoustics Today, c/o AIPP, Advertising Dept, 1305 Walt Whitman Rd, Suite 300, Melville, NY 11747-4300
Phone: (800) 247-2242 or (516) 576-2430 | Fax: (516) 576-2481 | Email: [email protected]
For information on rates and specifications, including display, business card and classified advertising, go to the Acoustics Today Media Kit online at https://publishing.aip.org/acousticstodayratecard or contact the Advertising staff.

Please see the important Acoustics Today disclaimer at www.acousticstoday.com/disclaimer.

ASA Statement on Racism and Injustice

The Acoustical Society of America (ASA) strongly supports racial justice movements and the fight against systemic racism. The brutal killings of Ahmaud Arbery, Michael Brown, Philando Castile, Jamar Clark, George Floyd, Eric Garner, Tamir Rice, Breonna Taylor, and countless others have incited outrage and highlighted the deeply rooted racial injustices that persist in this country. We, as members of the ASA, are equally outraged. We recognize the profoundly damaging effects that police brutality, mass incarceration, and economic, health, and educational inequities have on Black people and communities of color. It is our responsibility to actively oppose racial injustices and understand the impact of our own implicit biases, acknowledging that we are sometimes complicit within an oppressive system.

The protests and current unrest in our nation and across the world are taking place amidst a global pandemic, which itself has disproportionately impacted people of color. The convergence of these events further highlights the urgent need for genuine change.

ASA is firmly committed to fostering a diverse, equitable, and inclusive acoustics community as outlined in the ASA Policy on Diversity from 2013:

The Acoustical Society of America (ASA) is committed to making acoustics more accessible to everyone, and asserts that all individuals, regardless of racial identity, ethnic background, sex, gender identity, sexual orientation, age, disability, religion, or national origin, must be provided equal opportunity in the field of acoustics. The Society upholds the belief that diversity enriches the field of acoustics, and is working to diversify its membership and the acoustics community in general by identifying barriers to implementing this change, and is taking an active role in organizational and institutional efforts to bring about such change. The Society actively supports efforts by the acoustics community to better engage the knowledge and talents of a diverse population, increase the viability of acoustics as a career option for all individuals, and promote the pursuit of acoustics careers by members of historically under-represented groups.

We recognize that the membership of the ASA does not reflect the demographics of our nation and acknowledge our responsibility to fix this. We must strive to be advocates of justice, to support policy and legislative changes that will decrease systemic racism in our organization and nationally, and to work for institutional reform. The ongoing, deeply destructive effects of systemic racism must be simultaneously addressed at many levels including within ourselves, our local communities, the ASA, and our nation.

Finally, let us be clear: Black Lives Matter. Black lives should have always mattered. Now is the time to transform the system so that Black lives not only matter but also will be respected and valued in the tapestry that is the United States. Then and only then can the country finally live up to the ideals and principles on which it was founded.

From the Editor

Arthur N. Popper

As many members of the Acoustical Society of America (ASA) know, The Journal of the Acoustical Society of America and other ASA journals have recently adopted new styles and covers. These changes did not include Acoustics Today (AT) because our style is rather different from those of the ASA peer-reviewed publications.

However, about six months ago, we decided to try to make the magazine more readable, have it incorporate ASA publication standards (e.g., colors, fonts), and improve the way that the various parts of the magazine tie together. At the same time, we did not want to do anything to alter the content of the magazine or what it contributes to the ASA and its members.

This issue reflects these changes. We are very grateful to the Opus Design team and to the many members of the ASA who gave us feedback and additional ideas as we moved forward. We hope you like the changes and that you find the magazine even more readable than in the past. Of course, if you have other ideas to improve the look and feel and, most of all, the readability of AT, please share them with us.

I want to also point out a few new things on the AT website (see acousticstoday.org). First, we have a new AT intern, Hilary Kates Varghese, a graduate student at the University of New Hampshire (Durham). Over the course of the year, Hilary is going to interview a number of past ASA presidents about their careers and their work with the ASA. The first of these is now online at acousticstoday.org/meet-asa-presidents and more will come over the course of 2020. Please visit the site and learn more about a group of really interesting colleagues.

Second, AT collaborated with the ASA International Year of Sound Committee to produce a Special Issue of AT that is aimed at teaching about acoustics to high-school and college students, teachers, politicians, regulators, and others. You can see the issue at acousticstoday.org/IYS2020. Feel free to share the link to the issue with students and teachers you know (including your children's teachers). And, in the future, AT would be interested in collaborating with other ASA groups and activities to develop special issues that focus on a particular topic.

As you can see, this is a large issue of AT, filled with exciting articles and a number of very interesting essays. I want to point out that the first four articles have as a theme (although specifically only a focus of the second article) using acoustic computation to solve big problems. This was not intentional but is an interesting occurrence that reflects the growing importance of computation in science and technology, including in acoustics (and in the ASA).

The first article by Jennifer Amaral, Kathleen Vigness-Raposa, James Miller, Gopu Potty, Arthur Newhall, and Ying-Tsong Lin is about the sound from offshore windfarms. Although AT has had articles about onshore windfarms, this is the first article that explores the underwater sounds from what will be a vastly growing number of offshore devices.

One of the issues arising in this article is the way that underwater sound propagates. In a way, this issue is addressed in our second article by Gregory Bunting, Clark Dohrmann, Scott Miller, and Timothy Walsh. They consider that many acoustic problems are extremely complex and require extensive computations. In their article, the authors discuss the methods now available for such computations.

Again, related to the idea of analysis of complex acoustics, in the third article, Richard (Dan) Costley Jr. provides fascinating insight into how the military used acoustics to locate enemy artillery in World War I (WWI). The methods used seem "crude" by today's standards, but they were very effective.

The fourth article by Orest Diachok continues with computational acoustics in the sense that Orest writes about using sound to find and identify fish. Using sound to find fish comes out of WWI, and there is a continuing quest to use acoustics and computation to improve fisheries methods.

The fifth article by Daniel Russell moves in a different direction and is a wonderful "tutorial" about tuning forks and their history. You may recall that Dan did an article several years ago on the acoustics of baseball and softball bats. The current article is equally interesting and provides wonderful insight into a device we all know, as well as a discussion of how they work.

In the sixth article, Benjamin V. Tucker and Richard Wright provide fascinating insight into how human languages exploit the sound-producing potential of the human vocal tract efficiently to produce a wide variety of speech sounds.

The final article by Edward Walsh and JoAnn McGee explores hearing but from the perspective of evolution. The article delves into hearing specializations in two very interesting species. I particularly want to point out the photograph in Figure 5 of this article (page 70), suggesting that JoAnn and Ed work with what may be the most dangerous species that any member of the ASA has worked with!

This issue also has a number of very different Sound Perspectives essays. As usual, our first one is "Ask an Acoustician." This essay features Subha Maruvada, an acoustics engineer with the US Food and Drug Administration. Interestingly, Subha is not only a very accomplished acoustician, but she has a fascinating "other life" that many will find very interesting to learn about.

Two essays talk about other ASA publications. In the first, Charles C. Church, editor of The Journal of the Acoustical Society of America Express Letters (JASA-EL), talks about very important changes in that online journal. In the second, Kent L. Gee, Megan S. Ballard, and Helen Wall Murray describe the history of Proceedings of Meetings on Acoustics (POMA) and a change in leadership of the journal.

Another important ASA publication activity is ASA Books. ASA Books Committee Chair Mark Hamilton, in his essay, talks about the history of the committee. And, most important, Mark shares information about how to publish a book (either authored or edited) with the ASA Press.

These essays about the ASA are followed by an insightful discussion by Laura Kloepper about her experiences bringing her newborn son to scientific meetings. Laura provides personal insights into the issues she faced as well as guidance for how other parents might attend meetings with a young child.

Related to this is an essay from the Women in Acoustics group, written by Tracianne Neilsen and Alison Stimpert. Traci and Allison discuss what they call work-parenting "harmony" and share some important ideas that should be of interest to all members.

The final essay is by my friend Lenny Rudow. Lenny is not an acoustician but a renowned writer about all things related to sport fishing and boating. I met Lenny several years ago when he contacted me to learn how human-generated sounds, such as those produced by a fisherman playing loud music on his boat, might affect catch rate. In trying to answer Lenny and help him learn more about fish hearing and fish sounds, I realized that there are probably many members of the ASA who fish or have fished but have never thought about putting together their hobby and their profession as an acoustician. Thus, I invited Lenny to write this essay from the perspective of someone who does not do acoustics but who is concerned about sound. I do want to add, however, because there is a slight conflict of interest, that I had (and look forward to having again) a wonderful day fishing with Lenny on the Chesapeake Bay along with my grandson (picture) and other family members.

AT editor's grandson fishing on the Chesapeake Bay with Lenny Rudow.

From the President

Victor Sparrow

How the Acoustical Society of America Works: A Bird's Eye View in Both Great Times and Challenging Times

One of the privileges of serving as the Acoustical Society of America (ASA) president is that you get to see the entire organization in action. The Executive Council, the Technical Council, ASA Headquarters, and our many, many passionate and dedicated volunteers all work toward making the ASA the best it can be. By the time you read this article, Diane Kewley-Port will have succeeded me as president. But as I write this article in late March 2020 and while I am still ASA President, I have a view of our organization that I want to share with each of you. I have seen the ASA in great times (e.g., International Year of Sound, an increasing impact factor for The Journal of the Acoustical Society of America) and in challenging times (COVID-19). And each of you should know how the ASA works and operates on your behalf, particularly considering the astounding and unprecedented events occurring in 2020.

Regardless of the issue of the day, ASA Headquarters, led by our Executive Director Susan Fox, is always on the job making sure that things are running smoothly. If you call the ASA, most likely it would be Elaine Moran, our director of operations, who will pick up the telephone and talk to you. But the ASA Headquarters also has a larger group of dedicated individuals ready to help members and support them in many ways. The ASA Headquarters is responsible for the ASA meetings and all member services, among other duties.

The vice president, currently Peggy Nelson, who will be succeeded by Stan Dosso, leads the Technical Council. The Technical Council is the body of Technical Committee chairs who are responsible for assembling the technical sessions and technical committee meetings. The technical sessions are the lifeblood of what makes up our meetings, and we applaud the Technical Committees for everything they do.

The Executive Council, chaired by the ASA president, is composed of the elected officers and members who represent you as the ASA policy-making body. Through appropriate councils chaired by Executive Council members, all ASA administrative committees report through the Executive Council. The Executive Council is a "strategic body" and aims to both think ahead and to react to the world's changing conditions. A number of other parts of the ASA also report to the Executive Council, including all ASA publications through Editor in Chief Jim Lynch, the ASA standards program through Standards Manager Christopher Struck, and the ASA executive director. But because the Executive Council is the policy-making body, it delegates the day-to-day operations of the ASA through the editor in chief, the standards director, and the executive director. And we are very fortunate that they are all doing a fantastic job!

There are a few other special individuals who you should be aware of as well. One is L. Keeta Jones, our education and outreach coordinator. Ms. Jones reports to the executive director. Another is our Finance Director Mike McGovern who, as an employee, reports to the executive director and, at the same time, works very closely with our elected treasurer, currently Judy Dubno. I would be absolutely remiss if I didn't also point out how much of what we do in the ASA is led directly by the volunteers serving as chairs and members of our administrative, technical, and meeting committees. The ASA is so very much driven by such volunteers, and this is not typical among today's large scientific societies.

It takes a team to deal with the day-to-day work of the ASA, and we are fortunate to have such great individuals involved and engaged. I'll now give you two examples of how the team has responded to very different situations in the last few months.

January 2020 began a special year, designated the International Year of Sound (IYS) by the International Commission for Acoustics (ICA; see sound2020.org). Realizing that this was a special activity, probably once in a lifetime, the ASA formed an ad hoc committee on the IYS last summer, cochaired by L. Keeta Jones and myself. The ASA meetings in 2020 were designated as IYS meetings. The ICA held a worldwide IYS opening event in Paris, France, at the Sorbonne on January 31, 2020. On behalf of the ASA, both the executive director and I attended, and a recording of the entire ceremony is available online at sound2020.org. Figure 1 shows Dr. Mark Hamilton of the ASA participating in the opening event as the president of ICA.

Then, on February 13, 2020, the Washington, DC, Regional Chapter of the ASA hosted the United States opening celebration for the IYS in collaboration with ASA Headquarters. This event was hosted by the American Institute of Physics in College Park, MD, and both Keeta and I were able to attend, as were many individuals from the American Institute of Physics (AIP) staff and ASA members from around the Washington, DC, area. In addition to showing some short ASA-related IYS videos, we had two speakers address the group in College Park. They were Josef Rauschecker from Georgetown University who talked about "Auditory Perception and Action" and Gary Gottlieb from the Audio Engineering Society who spoke about "The Evolution of Audio." A recording of the entire US IYS opening program, including these talks, is available on the ASA Facebook page. A particularly special thank you goes out to David Lechner, chair of the Washington, DC, Regional Chapter, for making this all happen.

In addition to the opening ceremonies, there are other IYS events coming to student and regional chapters of the ASA. A special online-only issue of Acoustics Today has been prepared by Acoustics Today Editor Arthur Popper in very close collaboration with L. Keeta Jones and ASA Editorial Assistant Kat Setzer. The Special Issue is available at bit.ly/3dxCju1. The Special Issue is aimed at outreach to make students, teachers, and others in local, state, and federal governments more aware of the importance of acoustics and sound generally in our society. Indeed, Keeta will be distributing copies of the Special Issue at many events around the country in the coming year. I also encourage all members to share the link to the Special Issue with their students, colleagues, and even their children's (and grandchildren's) teachers.

Moreover, led by the Public Relations Committee and its chair Laura Kloepper, the ASA is partnering with AIP Media Services to generate a number of short videos and press releases throughout 2020 to engage the public and let them know about sound in this special year. The IYS has been a great example of how all of ASA can come together and address a special year-long event on a fixed time schedule. See bit.ly/39w1yJH for the videos and the latest information.

Figure 1. Dr. Mark Hamilton, Acoustical Society of America (ASA) member and former ASA president, addresses the International Year of Sound opening celebration as president of the International Commission for Acoustics, Sorbonne University, Paris, France, January 31, 2020.

At the same time, the world had to change its collective ways in a hurry in March 2020 when the novel coronavirus (COVID-19) struck across the globe. Clearly, the virus has made a major impact on all ASA members and the organization itself. Indeed, as this article is being written, all the stakeholders such as ASA Headquarters, the councils, the administrative and technical committees, and individual ASA members are figuring out how the next year of meetings will be navigated.

The ASA officers and managers had their spring 2020 meeting on March 12, and many options were being considered for the meetings, including canceling, delaying, or shifting to online only. A plan to move forward quickly was established a few days later, and thanks go to ASA Headquarters for their speedy work. Given that many government laboratories, universities, and businesses have canceled all foreign travel for the foreseeable future, the decision was made to cancel the planned November 2020 meeting in Cancun, Mexico, and to delay the Chicago meeting to the next opportunity when the original Chicago hotel would be available. We thank the Cancun organizers and are very disappointed that we had to cancel.

As you all know by now, the next ASA meeting will be held in Chicago from December 8–12, 2020, a Tuesday-Saturday schedule. One fallout from not having the spring 2020 meeting is that the ASA very much needs to continue the work of its administrative groups and committees, even without a meeting. Thus, ASA Headquarters and the ASA Executive Council are developing plans so that these meetings can take place via teleconferences at about the same time.

Furthermore, the ASA Technical Council is strategizing how the technical program in Chicago will be carried forward to the December 2020 meeting. Part of the issue is that some special sessions may stay in Chicago while other special session organizers may prefer that the sessions be postponed until the spring 2021 meeting in Seattle, WA. Similarly, the special sessions and other conference elements originally planned for Cancun may occur in Chicago or be postponed until Seattle.

As I write this article, we do not know how the technical program and administrative committee meeting schedules are going to work out. But I do know that all involved in the decision making are working hard to ensure that the culture of the ASA is preserved, to the extent possible, and that we continue to make our meetings valuable to our members. This is a very challenging time, and 2020 has turned out to be one of the more challenging years in our collective memory, but I believe that the ASA has a great organization, and we are well positioned to deal with anything put in front of us, either good or bad. I thank you all for the special opportunity to lead a great organization like the ASA, and I hope this bird's-eye view (from a Sparrow) was useful.

XL2 Acoustic Analyzer
High performance and cost efficient hand held Analyzer for Community Noise Monitoring, Building Acoustics and Industrial Noise Control

An unmatched set of analysis functions is already available in the base package:
• Sound Level Meter (SLM) with simultaneous, instantaneous and averaged measurements
• 1/1 or 1/3 octave RTA with individual LEQ, timer control & logging
• Reverb time measurement RT-60
• Real time high-resolution FFT
• Reporting, data logging, WAV and voice note recording
• User profiles for customized or simplified use

Extended Acoustics Package (option) provides:
• Percentiles for wideband or spectral values
• High resolution, uncompressed 24 Bit / 48 kHz wave file recording
• Limit monitoring and external I/O control
• Event handling (level and ext. input trigger)

Spectral limits (option) provides:
• 1/6th and 1/12th octave analysis

Made in Switzerland

For more information visit: www.nti-audio.com
NTI Audio AG, 9494 Schaan, Liechtenstein, +423 239 6060
NTI Americas Inc., Tigard / Oregon 97281, USA, +1 503 684 7050
NTI China, 215000 Suzhou, China, +86 512 6802 0075
NTI Japan, 130-0026 Sumida-ku, Tokyo, Japan, +81 3 3634 6110


FEATURED ARTICLE

The Underwater Sound from Offshore Wind Farms

Jennifer Amaral, Kathleen Vigness-Raposa, James H. Miller, Gopu R. Potty, Arthur Newhall, and Ying-Tsong Lin

©2020 Acoustical Society of America. All rights reserved. https://doi.org/10.1121/AT.2020.16.2.13

Introduction
Efforts to reduce carbon emissions from the burning of fossil fuels have led to an increased interest in renewable energy sources from around the globe. Offshore wind is a viable option to provide energy to coastal communities and has many advantages over onshore wind energy production due to the limited space constraints and greater resource potential found offshore. The first offshore wind farm was installed off the coast of Denmark in 1991, and since then numerous others have been installed worldwide. At the end of 2017, there were 18,814 megawatts (MW) of installed offshore wind capacity worldwide, with nearly 84% of all installations located in European waters and the remaining 16% located offshore of China, followed by Vietnam, Japan, South Korea, the United States, and Taiwan. This equated to 4,149 grid-connected offshore wind turbines in Europe alone, with the number increasing annually since then (Global Wind Energy Council, 2017). In the last 10 years, the average size of European offshore wind farms has increased from 79.6 MW in 2007 to 561 MW in 2018 (Wind Europe, 2018).

On land, China leads the onshore wind energy market with 206 gigawatts (GW) of installed capacity, followed by the United States with 96 GW in 2018 (Global Wind Energy Council, 2019). Over 80% of the US electricity demand is from coastal states, but onshore wind energy generation is usually located far from these coastal areas, which results in long-distance energy transmission. With over 2,000 GW of offshore wind energy potential in US waters, which equals nearly double the electricity demand of the nation, offshore development could provide an alternative to long-distance transmission or development of onshore installations in land-constrained coastal regions (US Department of Energy, 2016). With the potential for offshore wind to be a clean and affordable renewable energy source, US federal and state government interest in development is continuing to grow. The US Bureau of Ocean Energy Management (BOEM) is responsible for overseeing all the offshore renewable energy development on the outer continental shelf of the United States, which includes issuing leases and providing approval for all potential wind energy projects.

The Block Island Wind Farm (BIWF) was completed in 2016 off the East Coast of the United States in Rhode Island and is the first and only operational wind farm in US waters to date. It produces 30 MW from five 6-MW turbines and is capable of powering about 17,000 homes. As of August 2019, there were 15 additional active offshore wind leases that account for over 21 GW of potential capacity off the East Coast of the United States.

Offshore wind farms are generally constructed in shallow coastal waters, which often have a high biological productivity that attracts diverse marine life. The average water depth of wind farms under construction in 2018 in European waters was 27.1 meters and the average distance to shore was 33 kilometers (Wind Europe, 2018). As a by-product of the construction, operation, and eventual decommissioning of offshore wind farms, sound is generated both in air and underwater through various activities and mechanisms. With the rate of wind farm development continuing to increase worldwide, regulatory agencies, industry, and scientists are attentive to the potential physiological and behavioral effects these sounds might have on marine life living in the surrounding environment. The contribution of sound produced during any anthropogenic activity can change the underwater soundscape and alter the habitats of marine mammals, fishes, and invertebrates by potentially masking communications for species that rely on sound for mating, navigating, and foraging. This article discusses the typical sounds produced during the life of a wind farm, efforts that can be taken to reduce sound levels, and how these sounds might be assessed for their potential environmental impact.

Construction of Offshore Wind Turbines
Once the development of a wind farm has been approved, the installation of the wind turbine foundations can begin. The type of foundation used will depend on parameters such as the water depth, seabed properties at the site, and turbine size. In water depths less than 50 meters, fixed foundations such as monopiles, gravity base, and jacket foundations are used to secure the wind turbines to the seabed (Figure 1). A gravity base foundation is a type of reinforced concrete structure used in water depths less than 10 meters that sits on the seabed and is heavy enough to keep the wind turbine upright. A monopile foundation is a single steel tube with a typical diameter of 3-8 meters that is driven into the seafloor, whereas a jacket foundation is a steel structure composed of many smaller tubular members welded together that sits on top of the seafloor and is secured by multiple steel piles driven into the sediment (Wu et al., 2019). Monopiles can be driven to a depth of 20-45 meters below the seafloor and the piles to secure jacket foundations can be driven to a depth of between 30 and 75 meters (JASCO and LGL, 2019).

Figure 1. Schematic showing some types of offshore wind turbine foundation structures, with the wind turbine components labeled. Image courtesy of the Bureau of Safety and Environmental Enforcement (BSEE), Department of the Interior. See https://tinyurl.com/wawb979.

Most installed wind turbines utilize bottom-fixed foundations, but these foundations become less feasible in water depths greater than 50 meters. In the United States, roughly 58% of the offshore wind potential is in water depths deeper than 60 meters (US Department of Energy, 2016). In these greater water depths, floating foundations that are tethered to the seabed using anchors are a more viable option.

Impact Pile Driving
Impact pile driving, where the top of the pile is pounded repeatedly by a heavy hammer, is a method used to install monopile and jacket foundations and generates sound in the air, water, and sediment. The installation of a jacket foundation requires multiple piles be driven into the seabed to secure the corners of the steel structure, whereas installation of the monopile design requires one larger pile be driven (Norro et al., 2013). Impact pile driving is not used for the installation of most floating or gravity-based foundations and therefore is not an inherent part of wind farm construction if the water depths and sediment characteristics at the installation site are suitable for these alternate foundations.

The impact of the hammer on the top of the pile is the primary source of sound that is generated during impact pile driving (see tinyurl.com/tbdgsb2). High-amplitude sound pressure is generated that radiates away from the pile on an angle that is dependent on the material properties of the pile and the sound speed in the surrounding water. This angle is typically between 15° and 19° relative to the pile axis (Figure 2; Dahl et al., 2015b). Characteristics of the sound generated from each hammer strike are strongly dependent on the pile configuration, hammer impact energy, and environmental properties (such as the water depth and seabed properties).

In addition to the sound pressure generated in the water, compressional, shear, and interface waves are generated in the seabed that propagate outward from the pile in all directions (Figure 2). Compressional waves are the fastest traveling waves in the seafloor and are characterized by particle motion that is parallel to the direction of wave propagation, whereas shear waves, which arrive second, have particle motion that is perpendicular to the direction of the propagating wave (Miller et al., 2016). Interface (or Scholte) waves along the water-sediment interface occur as a result of interfering compressional and shear waves. The low-frequency and slow-moving interface waves propagate over long distances and generate large-amplitude oscillations along the water-sediment boundary that have the potential to affect marine life living close to or within the seafloor sediment that is sensitive to this type of disturbance (Popper and Hawkins, 2018). The amplitude of the interface wave decays exponentially away from the interface, and, therefore, any disturbance will be noticeable only within a distance of a few wavelengths from the seafloor (Tsouvalas and Metrikine, 2016).

Figure 2. Left: simplified schematic showing the types of sound generated as a result of a hammer striking a pile. Sound pressure is radiated into the water at an angle relative to the pile axis, compressional and shear waves are generated in the sediment, and interface waves propagate along the seafloor boundary. Right: finite-element output for the pile driving of a vertical steel pile in 12 meters of water. The seafloor is at 12 meters depth (black horizontal line). The acoustic pressure in the water (<12 meters) and the particle velocity in the sediment (>12 meters) generated from a hammer strike are shown. Various wave phenomena can be seen, including the sound pressure wave radiated at an angle from the pile into the water and the resulting body and interface waves in the sediment. Reprinted/adapted from Popper and Hawkins, 2016, with permission from Springer.

Measuring the Radiated Sound
The total number of hammer strikes required to drive a pile to its final penetration depth could range from 500 to more than 5,000, with the hammer striking the pile between 15 and 60 times per minute (Matuschek and Betke, 2009). On average, a jacket foundation requires three times more hammer strikes to install than a monopile and will result in a longer total piling time because the jacket design requires multiple piles to secure the structure to the seabed as opposed to a single pile for the monopile design (Norro et al., 2013). To characterize the impulsive sound generated during each hammer strike as part of impact pile driving, the sound exposure level (SEL) and peak sound pressure level metrics can be used. The SEL is a measure of the energy within a signal and allows for the total energy of sounds with different durations to be compared. It is defined as the time integral of the squared sound pressure reported in units of decibels re 1 µPa²s. This metric can be used to describe the sound levels from a single strike (SELss) and cumulated across multiple hammer strikes or over the duration of the piling activity (SELcum). When assessing the potential effect of impulsive sounds on the physiology of marine mammals and fishes, the peak sound pressure level and SEL are used (Popper et al., 2014; Southall et al., 2019).
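Those definitions can be written out explicitly. The following restatement uses standard underwater acoustics conventions (reference pressure 1 µPa, reference time 1 s); it is an illustrative sketch rather than text from the article, and the last step assumes the idealized case of identical strikes:

\mathrm{SEL}_{\mathrm{ss}} = 10\log_{10}\!\left(\frac{1}{p_{\mathrm{ref}}^{2}\,t_{\mathrm{ref}}}\int_{T} p^{2}(t)\,dt\right)\ \mathrm{dB\ re\ 1\ \mu Pa^{2}s}, \qquad p_{\mathrm{ref}} = 1\ \mu\mathrm{Pa},\ t_{\mathrm{ref}} = 1\ \mathrm{s}

L_{p,\mathrm{pk}} = 20\log_{10}\!\left(\frac{\max_{t}\lvert p(t)\rvert}{p_{\mathrm{ref}}}\right)\ \mathrm{dB\ re\ 1\ \mu Pa}

\mathrm{SEL}_{\mathrm{cum}} = 10\log_{10}\sum_{i=1}^{N} 10^{\mathrm{SEL}_{\mathrm{ss},i}/10} \;=\; \mathrm{SEL}_{\mathrm{ss}} + 10\log_{10} N \quad \text{(for } N \text{ identical strikes)}

On that idealized equal-energy basis, the roughly threefold greater number of strikes noted above for a jacket foundation would correspond to an increase of about 10 log10(3), or roughly 5 dB, in cumulative exposure relative to a monopile driven with strikes of the same energy, all else being equal.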

A standard measurement method is important to ensure that independent measurements made at different wind farms can be compared. An approach for measuring and characterizing the underwater sound generated during impact pile driving is defined through the International Organization for Standardization (ISO) 18406 document (2017), which is the standard for measurements of radiated underwater sound from impact pile driving. In this standard, a combination of range-varying hydrophone deployments and fixed-range measurements are recommended to capture variation in the resulting sound field with both distance and changing source characteristics. The source characteristics and resulting sound level radiated into the environment will vary during a piling sequence due to changes in the hammer strike energy, penetration depth of the pile, and depth-dependent seabed properties. Usually, the piling event will begin with hammer strikes at a lower energy before increasing to a higher strike energy to drive the pile deeper into the seafloor. As the length of the pile driven into the seafloor increases, it has the potential to encounter sediment layers with different properties that would influence the resulting radiated sound levels. This variation could be adequately captured on stationary measurement systems, ideally deployed at multiple ranges but with at least one deployed at a range of 750 meters to facilitate comparison with the large number of existing measurements at this range from other wind farm sites (Robinson and Theobald, 2017).
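To make the per-strike and cumulative metrics concrete, here is a minimal Python sketch of how they could be computed from a calibrated hydrophone recording. It is illustrative only and is not taken from the article or from the ISO 18406 standard; the variable names (pressure_uPa, fs, strike) are hypothetical, and it assumes the waveform has already been windowed around one strike and converted to pressure in µPa.

import numpy as np

def strike_metrics(pressure_uPa, fs):
    # Per-strike SEL (dB re 1 uPa^2 s) and peak sound pressure level (dB re 1 uPa)
    # for one windowed hammer-strike waveform sampled at fs hertz.
    p = np.asarray(pressure_uPa, dtype=float)
    energy = np.sum(p ** 2) / fs                       # approximates the time integral of p^2 (uPa^2 s)
    sel_ss = 10.0 * np.log10(energy)                   # dB re 1 uPa^2 s
    peak_level = 20.0 * np.log10(np.max(np.abs(p)))    # dB re 1 uPa
    return sel_ss, peak_level

def cumulative_sel(single_strike_sels):
    # Energy-sum the single-strike SELs over a whole piling sequence (SELcum).
    sels = np.asarray(single_strike_sels, dtype=float)
    return 10.0 * np.log10(np.sum(10.0 ** (sels / 10.0)))

# Synthetic example: a decaying 200-Hz transient with a peak pressure of about 1e6 uPa,
# repeated 1,000 times; identical strikes raise SELcum by 10*log10(1000) = 30 dB over SELss.
fs = 10000.0
t = np.arange(0.0, 0.5, 1.0 / fs)
strike = 1.0e6 * np.exp(-t / 0.05) * np.sin(2.0 * np.pi * 200.0 * t)
sel_ss, peak_level = strike_metrics(strike, fs)
sel_cum = cumulative_sel([sel_ss] * 1000)
print(f"SELss  = {sel_ss:.1f} dB re 1 uPa^2 s")
print(f"Peak   = {peak_level:.1f} dB re 1 uPa")
print(f"SELcum = {sel_cum:.1f} dB re 1 uPa^2 s")

In practice, the windowing of individual strikes, background-noise correction, and calibration are handled according to the measurement standard; the sketch only shows the arithmetic behind the reported metrics.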

Frequency Content of Hammer Strikes
Impact pile driving radiates considerable levels of low-frequency impulsive noise into the environment. The majority of the energy in the resulting broadband sound field is found below 2 kHz, with spectral peaks between 100 and 400 Hz (Figure 3, top; Matuschek and Betke, 2009), where the dispersion of shallow-water acoustic modes is present (Frisk, 1994). Measurements taken during wind farm construction in the North Sea showed similar spectra resulting from the piling of a monopile and jacket foundation (Norro et al., 2013).

Figure 3. Top: time-frequency representation of hammer strikes during impact pile driving at the Block Island Wind Farm (BIWF) recorded at a range of 7.5 kilometers and roughly at midwater depth. Bottom: time-frequency representation of the acoustic signals around 71 Hz hypothesized to be due to the operation of 1 turbine at the BIWF measured near the seafloor at a range of 50 meters while fin whales were vocalizing at 20 Hz. The received wind turbine sounds were measured at a level of 100 dB re 1 µPa root-mean-square (rms) while the fin whale vocalizations were measured at a level of 125 dB re 1 µPa peak.

Azimuthal Dependence of Radiated Sound Fields The installation of jacket foundations sometimes requires piles to be driven on an angle inside the legs of the foun- dation. For example, the legs of the jacket foundations at Figure 3. Top: time-frequency representation of hammer the BIWF were hollow, steel members that were inclined strikes during impact pile driving at the Block Island inward at an angle of roughly 13° and piles were impact Wind Farm (BIWF) recorded at a range of 7.5 kilometers driven into the legs to secure the foundation to the seabed and roughly at midwater depth. Bottom: time-frequency (Figure 4). The nonaxisymmetric orientation of the pile representation of the acoustic signals around 71 Hz relative to the seabed causes an azimuthal dependence to hypothesized to be due to the operation of 1 turbine at the the radiated sound field, which can result in a significant BIWF measured near the seafloor at a range of 50 meters while fin were vocalizing at 20 Hz. The received wind turbine sounds were measured at a level of 100 dB re 1 µPa Figure 4. Jacket foundation in the water to the right of the root-means-square (rms) while the fin vocalizations pile-driving barge at the BIWF, with a steel pile section were measured at a level of 125 dB re 1 µPa peak. inserted into each leg at an angle of roughly 13° prior to piling. The hammer is shown positioned on one of the piles in preparation to drive the pile into the seafloor. with both distance and changing source characteristics. The source characteristics and resulting sound level radiated into the environment will vary during a piling sequence due to changes in the hammer strike energy, penetration depth of the pile, and depth-dependent seabed properties. Usually, the piling event will begin with hammer strikes at a lower energy before increasing to a higher strike energy to drive the pile deeper into the seafloor. As the length of the pile driven into the seafloor increases, it has the poten- tial to encounter sediment layers with different properties that would influence the resulting radiated sound levels. This variation could be adequately captured on stationary measurement systems, ideally deployed at multiple ranges but with at least one deployed at a range of 750 meters to

16 Acoustics Today • Summer 2020 variation in the received sound levels measured along seabed and from the operation of the vessels used during different radials (Wilkes and Gavrilov, 2017). Received construction. The primary source of noise during the levels recorded on fixed-range and towed measurement cable laying process is from vessel operations and the systems were substantially different (~10 dB) between potential use of dynamic positioning thrusters to hold piles inclined in opposite directions (Vigness-Raposa et vessels in position. An environmental assessment per- al., 2017; Martin and Barclay, 2019). These differences formed for the Vineyard Wind project off the coast of were observed independent of the strike energy used Massachusetts concluded that the sounds generated from for individual hammer strikes (Amaral et al., 2020). The these activities were generally consistent with those from pile orientation affected the incident angle of the radi- routine vessel traffic expected in the area, and, therefore, ated pressure wave front on the seabed, which resulted they were not anticipated to be a significant contributor in the directivity of the radiated sound varying based to the overall acoustic footprint of the project (JASCO on the azimuth. The steeper the incident angle of the and LGL, 2019). radiated wave front on the seafloor, the more energy was absorbed in the sediment. The azimuthal dependence Operational Sounds of Wind Turbines to the radiated sound field and resulting sound levels The construction of a wind farm takes place over a period are important factors to consider when determining the of months, whereas the typical wind farm life span is potential marine mammal and fish impact zones around between 20 and 25 years. Once completed, the turbines pile-driving activities for inclined piles. will operate nearly continuously, except for occasional shutdowns for maintenance or severe weather. Therefore, Vibratory Pile Driving the contribution of sound to the marine environment will Vibratory pile driving is another method used to drive piles be more consistent and of longer duration during the oper- into the seafloor and could be used prior to impact pile driv- ational phase than during any other phase of the life of the ing to ensure that the pile is stable in the seabed (JASCO and wind farm (Nedwell and Howell, 2004). The underwater LGL, 2019) or for the installation of sheet piles to construct noise levels emitted during the operation of the turbines temporary cofferdams (Tetra Tech, 2012). In this process, are low and not expected to cause physiological injury the pile is vibrated at a certain frequency, typically between to marine life but could cause behavioral reactions if the 20 and 40 Hz, to drive it into the sediment rather than ham- animals are in the immediate vicinity of the wind turbine mering the top of the pile (Matuschek and Betke, 2009). The (Tougaard et al., 2009; Sigray and Andersson, 2011). vibratory process produces lower level continuous sounds (see tinyurl.com/st4h9tq) compared with the high-ampli- In some shallow-water environments, sound due to ship- tude impulsive noise produced during impact pile driving. ping traffic or storms could dominate the low-frequency The high-amplitude pressure waves generated in the water ambient-sound field over the sound emitted from the column during impact piling are not present with vibratory wind turbines. 
Therefore, evaluating the relative sound piling, and the highest sound pressures are expected near levels from the wind turbine compared with those the seafloor as a result of the propagating low-frequency from other sources is important when considering the interface waves (Tsouvalas and Metrikine, 2016). The radi- potential impacts to marine life. Measurements made ated spectrum will be strongly influenced by the vibration at 3 different wind turbines in Denmark and Sweden at frequency, will have peaks at the operating frequency and ranges between 14 and 40 meters from the turbine foun- its subsequent harmonics, and will vary as the operating dations found that the sound generated due to turbine frequency is adjusted according to changing operational operation was only detectable over underwater ambient conditions such as sediment type (Dahl et al., 2015a). To noise at frequencies below 500 Hz (Tougaard et al., 2009). assess the impact of nonimpulsive sound on marine life, the SEL metric is used (Southall et al., 2019). The main sources of sound generated during the opera- tion of wind turbines are aerodynamic and mechanical. Additional Construction-Related Sounds The mechanical noise is from the nacelle, which is situ- The construction of an offshore wind project generates ated at the top of the wind turbine tower and houses the sound during other activities apart from pile driving, gear box and generator (Figure 1). As the wind turbine including during the laying of electric cables on the blades rotate, vibrations are generated that travel down

Summer 2020 • Acoustics Today 17 UNDERWATER SOUNDS FROM WIND FARMS

the turbine tower into the foundation and radiate into particle motion in the water and sediment is also impor- the surrounding water column and seabed (Tougaard et tant when considering the potential impact to marine life al., 2009). The resulting sound is described as continuous sensitive to this stimulus. Additionally, the context under and nonimpulsive and is characterized by one or more which an animal is exposed to a sound, in addition to tonal components that are typically at frequencies below the received sound level, will affect the probability of a 1 kHz (see tinyurl.com/wke3lso). The frequency content behavioral response (Ellison et al., 2012). of the tonal signals is determined by the mechanical properties of the wind turbine and does not change with Protective Measures to Mitigate wind speed (Madsen et al., 2006). Sound Levels Various mitigation methods can be employed during Underwater measurements taken during the operation each phase of wind farm development to reduce the of one of the turbines at the BIWF contained sound that overall propagated sound levels and potential effect on is hypothesized to be caused by aerodynamic noise from marine life. Time-of-year limitations on construction the turbine blade tips that was propagated through the are implemented to provide safeguards for specific pro- air, into the water, and received on a hydrophone on the tected or susceptible species. Antinoise legislation in the seabed at a range of 50 meters (Figure 3, bottom; J. Miller, Netherlands prohibits pile driving from July 1 through Personal observation). This sound was measured to be December 31 to avoid disturbance of the breeding season around 71 Hz and was lower in level than fin whale vocal- of the harbor (Tsouvalas and Metrikine, 2016). izations recorded at the same time. This sound was only Off the US East Coast, an agreement was made between detectable during times when the weather was calm and environmental groups and a wind farm developer to pro- there were no ships traveling in the area. vide protections for the North Atlantic right whale by not allowing pile driving between January 1 and April Sounds from Decommissioning 30 when right whales are most likely to be present in the Since the first offshore wind farm decommissioning in project area (Conservation Law Foundation, 2019). 2015, a small number of offshore farms have been decom- missioned, but the decommissioning process is generally The use of noise mitigation systems such as bubble cur- unexplored. As more wind farms reach the end of their tains (see tinyurl.com/v6m6ops) or physical barriers design life, the decision will have to be made relating to around the pile are commonly used to reduce the levels of extending operations, repowering, or decommissioning. sound generated during impact pile driving (Bellmann et Decommissioning is typically thought of as a complete al., 2017). These methods are a type of barrier system that removal of all components above and below the water work to attenuate the radiated sound levels by exploiting surface, but there is research supporting a partial removal an impedance mismatch between the generated sound where some of the substructure would remain in place wave and a gas-filled barrier. Factors such as the water as an artificial reef for marine life (Topham et al., 2019). depth, current, and foundation type will influence the In general, sound would be generated as a by-product effectiveness of each system. 
of the process used to remove the substructures, which could include cutting the foundation piles via explosives Ramp-up operational mitigation measures, in which the or water jet cutting (Nedwell and Howell, 2004). hammer intensity is gradually increased to full power, are also employed. This method aims to allow time for ani- Assessing Impact to Marine Life mals to leave the immediate area and avoid exposure to Impulsive sounds, like those generated during impact harmful sound levels, although there are no data to sup- pile driving, exhibit physical characteristics at the source port the contention that this works for fishes, invertebrates, that make them potentially more injurious to marine or turtles. Another mitigation method involves visually life compared with nonimpulsive sounds, like those monitoring an exclusion zone around the piling activity for generated during vibratory pile driving and wind tur- the presence of marine mammals. This zone is predefined bine operation (Popper et al., 2014; Southall et al., 2019). based on the expected sound levels in the area and requires Sound exposure is currently assessed based on the sound pausing piling activities if an animal is observed to reduce pressure received in the water column, but the resulting near-field noise exposure (Bailey et al., 2014).

18 Acoustics Today • Summer 2020 Figure 5. Seasonal variability of underwater sound propagation in the BIWF area showing transmission loss (TL) predictions in decibels for a 200-Hz sound source in September 2015 (summer; a) and December 2015 (winter; b). The source depth (Zs) in the model was 15 meters and the receiver depth (Zr) was 20 meters. The corresponding sound speed profiles (SSP) are shown. The TL was higher in the summer compared with the winter conditions. Reproduced from Lin et al., 2019, with permission.

Exploiting seasonal differences in the water tempera- restrictions on sound-generating activities. The potential ture and salinity and its effect on underwater sound for acute sound exposure of marine mammals and fishes propagation could also be used to mitigate the impact of is currently assessed based on the generated sound pres- pile-driving noise by scheduling wind farm construction sure levels in the water column, but other factors such as during seasons of high expected acoustic transmission the particle motion in the water and sediment and the loss. For example, the pile driving for the BIWF occurred behavioral response of marine life are important factors to during the summer season but had the construction evaluate. Although the construction and decommissioning occurred during the winter season, the received SELs phases take on the order of months to complete, offshore at ranges greater than 6 kilometers could have been up wind farms are designed to operate for minimum of 20–25 to 8 dB higher (Figure 5) due to lower water tempera- years. With the continued development of offshore wind tures causing larger acoustic impedance contrast at the farms worldwide there will be additional opportunities seafloor (water-bottom interface) and a more isovelocity, to measure the underwater sound generated during all or constant, sound speed profile (Lin et al., 2019). This phases and assess any potential long-term effect of this difference in received sound levels is significant and high- sound on the marine environment. lights the effect the environmental conditions have on the overall sound propagation. References Amaral, J. L., Miller, J. H., Potty, G. R., Vigness-Raposa, K. J., Fran- Conclusion kel, A. S., et al. (2020). Characterization of impact pile driving signals during installation of offshore wind turbine foundations. Ancillary sounds of varying levels and characteristics are The Journal of the Acoustical Society of America, 147(4), 2323-2333. generated during each phase in the development of an off- https://doi.org/10.1121/10.0001035. shore wind farm. The highest amplitude sound is expected Bailey, H., Brookes, K. L., and Thompson, P. M. (2014). Assessing during the impact pile-driving part of the construction environmental impacts of offshore wind farms: Lessons learned and recommendations for the future. Aquatic Biosystems 10, 1-13. phase and potentially during the decommissioning phase https://doi.org/10.1186/2046-9063-10-8. depending on the methods employed to remove the wind Bellmann, M. A., Schuckenbrock, J., Gündert, S., Michael, M., turbine foundations. The installation methods used for Holst, H., and Remmers, P. (2017). Is there a state-of-the-art to each turbine foundation type will result in different levels reduce pile-driving noise? In J. Köppel (Ed.), Wind Energy and and types of sounds radiated into the marine environ- Wildlife Interactions, Springer, Cham, Switzerland, pp. 161-172. https://doi.org/10.1007/978-3-319-51272-3_9. ment. The sound levels can be reduced using physical Conservation Law Foundation. (2019). Protective Measures for North barriers, and the sound exposure of marine life can be Atlantic Right Whales. Available at https://tinyurl.com/tj8awyb. mitigated through monitoring methods and time-of-year Accessed February 27, 2020.


Dahl, P. H., Dall'Osto, D. R., and Farrell, D. M. (2015a). The underwater sound field from vibratory pile driving. The Journal of the Acoustical Society of America 137, 3544-3554. https://doi.org/10.1121/1.4921288.
Dahl, P. H., de Jong, C. A. F., and Popper, A. N. (2015b). The underwater sound field from impact pile driving and its potential effects on marine life. Acoustics Today 11(2), 18-25.
Ellison, W. T., Southall, B. L., Clark, C. W., and Frankel, A. S. (2012). A new context-based approach to assess marine mammal behavioral responses to anthropogenic sounds. Conservation Biology 26, 21-28. https://doi.org/10.1111/j.1523-1739.2011.01803.x.
Frisk, G. V. (1994). Ocean and Seabed Acoustics. Prentice-Hall, Inc., Englewood Cliffs, NJ.
Global Wind Energy Council. (2017). GWEC Global Wind 2017 Report. Available at https://tinyurl.com/sg3puy7. Accessed February 12, 2020.
Global Wind Energy Council. (2019). Global Wind Report 2018. Available at https://gwec.net/global-wind-report-2018/. Accessed January 31, 2020.
International Organization for Standardization (ISO). (2017). ISO 18406:2017 Underwater Acoustics — Measurement of Radiated Underwater Sound from Percussive Pile Driving. International Organization for Standardization, Geneva, Switzerland.
JASCO and LGL. (2019). Request for an Incidental Harassment Authorization to Allow the Non‐Lethal Take of Marine Mammals Incidental to Construction Activities in the Vineyard Wind BOEM Lease Area OCS‐A 0501, Version 4.1. Document No. 01648, Prepared by JASCO Applied Sciences (USA) Ltd. and LGL Ecological Research Associates, for Vineyard Wind, LLC. Available at https://tinyurl.com/ua5veos. Accessed February 27, 2020.
Lin, Y.-T., Newhall, A. E., Miller, J. H., Potty, G. R., and Vigness-Raposa, K. J. (2019). A three-dimensional underwater sound propagation model for offshore wind farm noise prediction. The Journal of the Acoustical Society of America 145, EL335-EL340. https://doi.org/10.1121/1.5099560.
Madsen, P. T., Wahlberg, M., Tougaard, J., Lucke, K., and Tyack, P. (2006). Wind turbine underwater noise and marine mammals: Implications of current knowledge and data needs. Marine Ecology Progress Series 309, 279-295. https://doi.org/10.3354/meps309279.
Martin, S. B., and Barclay, D. R. (2019). Determining the dependence of marine pile driving sound levels on strike energy, pile penetration, and propagation effects using a linear mixed model based on damped cylindrical spreading. The Journal of the Acoustical Society of America 146, 109-121. https://doi.org/10.1121/1.5114797.
Matuschek, R., and Betke, K. (2009). Measurements of construction noise during pile driving of offshore research platforms and wind farms. NAG/DAGA International Conference on Acoustics, Rotterdam, The Netherlands, March 23–26, 2009, pp. 262-265.
Miller, J. H., Potty, G. R., and Kim, H.-K. (2016). Pile-driving pressure and particle velocity at the seabed: Quantifying effects on crustaceans and groundfish. In A. N. Popper and A. D. Hawkins (Eds.), The Effects of Noise on Aquatic Life II, Springer, New York, pp. 719-728.
Nedwell, J., and Howell, D. (2004). A Review of Offshore Windfarm Related Underwater Noise Sources. Subacoustech Report No. 544 R 0308, Prepared by Subacoustech Ltd. for The Crown Estate. Available at https://tinyurl.com/senknnb. Accessed February 27, 2020.
Norro, A. M. J., Rumes, B., and Degraer, S. J. (2013). Differentiating between underwater construction noise of monopile and jacket foundations for offshore windmills: A case study from the Belgian part of the North Sea. The Scientific World Journal, Article ID 897624.
Popper, A. N., and Hawkins, A. D. (Eds.). (2016). The Effects of Noise on Aquatic Life II. Springer, New York.
Popper, A. N., and Hawkins, A. D. (2018). The importance of particle motion to fishes and invertebrates. The Journal of the Acoustical Society of America 143, 470-488. https://doi.org/10.1121/1.5021594.
Popper, A. N., Hawkins, A. D., Fay, R. R., Mann, D. A., Bartol, S., Carlson, T., Coombs, S., Ellison, W. T., Gentry, R., Halvorsen, M. B., Lokkeborg, S., Rogers, P., Southall, B. L., Zeddies, D. G., and Tavolga, W. N. (2014). Sound Exposure Guidelines for Fishes and Sea Turtles: A Technical Report prepared by ANSI-Accredited Standards Committee S3/SC1 and registered with ANSI. ASA S3/SC1.4 TR-2014. SpringerBriefs in Oceanography, Springer International Publishing, and ASA Press, Cham, Switzerland.
Robinson, S. P., and Theobald, P. (2017). A standard for the measurement of underwater sound radiated from marine pile driving. 24th International Congress on Sound and Vibration, London, UK, July 23–27, 2017, pp. 5022-5028.
Sigray, P., and Andersson, M. H. (2011). Particle motion measured at an operational wind turbine in relation to hearing sensitivity in fish. The Journal of the Acoustical Society of America 130, 200-207. https://doi.org/10.1121/1.3596464.
Southall, B. L., Finneran, J. J., Reichmuth, C., Nachtigall, P. E., Ketten, D. R., Bowles, A. E., Ellison, W. T., Nowacek, D. P., and Tyack, P. L. (2019). Marine mammal noise exposure criteria: Updated scientific recommendations for residual hearing effects. Aquatic Mammals 45, 125-232. https://doi.org/10.1578/AM.45.2.2019.125.
Tetra Tech. (2012). Block Island Wind Farm and Block Island Transmission System Environmental Report/Construction and Operations Plan. Report Prepared by Tetra Tech Inc. for Deepwater Wind, Boston, MA. Available at https://tinyurl.com/wkmonot. Accessed February 27, 2020.
Topham, E., Gonzalez, E., McMillan, D., and João, E. (2019). Challenges of decommissioning offshore wind farms: Overview of the European experience. Journal of Physics: Conference Series, WindEurope Conference and Exhibition 2019, Bilbao, Spain, April 2–4, 2019, Vol. 1222, No. 1, p. 012035. https://doi.org/10.1088/1742-6596/1222/1/012035.
Tougaard, J., Henriksen, O. D., and Miller, L. A. (2009). Underwater noise from three types of offshore wind turbines: Estimation of impact zones for harbor porpoises and harbor seals. The Journal of the Acoustical Society of America 125, 3766-3773. https://doi.org/10.1121/1.3117444.
Tsouvalas, A., and Metrikine, A. V. (2016). Structure-borne wave radiation by impact and vibratory piling in offshore installations: From sound prediction to auditory damage. Journal of Marine Science and Engineering 4(3), 44. https://doi.org/10.3390/jmse4030044.
US Department of Energy. (2016). National Offshore Wind Strategy. Available at https://tinyurl.com/vshgdne. Accessed February 27, 2020.
Vigness-Raposa, K. J., Giard, J. L., Frankel, A. S., Miller, J. H., Potty, G. R., Lin, Y. T., Newhall, A., and Mason, T. (2017). Variations in the acoustic field recorded during pile-driving construction of the Block Island Wind Farm. The Journal of the Acoustical Society of America 141(5), 3993. https://doi.org/10.1121/1.4989147.
Wilkes, D. R., and Gavrilov, A. N. (2017). Sound radiation from impact-driven raked piles. The Journal of the Acoustical Society of America 142, 1-11. https://doi.org/10.1121/1.4990021.
Wind Europe. (2018). Offshore Wind in Europe. Available at https://tinyurl.com/ycls9vo4. Accessed January 30, 2020.
Wu, X., Hu, Y., Li, Y., Yang, J., Duan, L., Wang, T., Adcock, T., Jiang, Z., Gao, Z., Lin, Z., and Borthwick, A. (2019). Foundations of offshore wind turbines: A review. Renewable and Sustainable Energy Reviews 104, 379-393. https://doi.org/10.1016/j.rser.2019.01.012.

20 Acoustics Today • Summer 2020 About the Authors Gopu R. Potty [email protected] Department of Ocean Engineering Jennifer Amaral University of Rhode Island [email protected] Narragansett, Rhode Island 02882, Marine Acoustics, Inc. USA 2 Corporate Place, Suite 105 Gopu R. Potty received his PhD degree Middletown, Rhode Island 02842, USA in ocean engineering from the Uni- Jennifer Amaral is a lead scientist and versity of Rhode Island (URI; Narragansett) in 2000. He is engineer with Marine Acoustics, Inc. currently an associate research professor in the Ocean Engi- (Middletown, RI), where she implements modeling strate- neering Department at URI. His research interests include gies and develops acoustic assessment tools to evaluate shallow-water acoustic propagation, acoustical oceanogra- sound exposure on marine life for environmental impact phy, geoacoustic inversion, and marine bioacoustics. Dr. Potty assessments. She earned her BS and MS degrees in ocean is a senior member of the IEEE Oceanic Engineering Society engineering from the University of Rhode Island (URI; Nar- and a Fellow of the Acoustical Society of America and Acous- ragansett) and is currently studying toward her PhD in the tical Society of India. He is an associate editor of the IEEE same discipline. Her doctoral research focuses on the acous- Journal of Oceanic Engineering and Journal of Acoustical tic propagation and characterization of pile-driving sounds Society of India. and marine mammal vocalizations. Arthur Newhall Kathleen Vigness-Raposa [email protected] [email protected] Applied Ocean Physics INSPIRE Environmental and Engineering 513 Broadway, Suite 314 Woods Hole Oceanographic Newport, Rhode Island 02840, USA Institution Woods Hole, Massachusetts 02543, Kathleen Vigness-Raposa is a principal USA scientist with INSPIRE Environmental (Newport, RI), with over 20 years of experience. Her main Arthur Newhall received a BS in mathematics from the Uni- areas of expertise are bioacoustics and impact assessments versity of Maine (Orono) in 1985. He is a Senior Information of anthropogenic sounds in the marine environment. Over Systems Specialist in the Applied Ocean Physics and Engi- the course of her career, she has conducted marine mammal neering Department, Woods Hole Oceanographic Institution research, led acoustic monitoring teams on research cruises, (Woods Hole, MA). He is a member of the IEEE Oceanic and taught graduate-level courses at the University of Rhode Engineering Society and the Acoustical Society of America. Island. She uses innovative techniques to model and predict His research interests include ocean acoustic propagation environmental impacts and cocreated the award-winning modeling, acoustical oceanography, software engineering, educational website “Discovery of Sound in the Sea.” and music.

James H. Miller [email protected] Ying-Tsong Lin [email protected] Department of Ocean Engineering Applied Ocean Physics University of Rhode Island and Engineering Narragansett, Rhode Island 02882, Woods Hole Oceanographic USA Institution Woods Hole, Massachusetts 02543, James H. Miller earned his BS in electri- USA cal engineering in 1979 from Worcester Polytechnic Institute (Worcester, MA), his MS in electrical Ying-Tsong Lin received his PhD degree in engineering engineering in 1981 from Stanford University (Stanford, CA), science and ocean engineering from the National Taiwan Uni- and his Doctor of Science in oceanographic engineering in versity (NTU; Taipei City, Taiwan) in 2004. He is currently an 1987 from the Massachusetts Institute of Technology (Cam- associate scientist with tenure at the Applied Ocean Physics bridge) and Woods Hole Oceanographic Institution (Woods and Engineering (AOP&E) Department, Woods Hole Oceano- Hole, MA). Since 1995, he has been on the faculty in the graphic Institution (WHOI; Woods Hole, MA). His research Department of Ocean Engineering, University of Rhode interests include shallow-water acoustic propagation, acous- Island (Narragansett) where he holds the rank of profes- tical oceanography, geoacoustic inversion, and underwater sor. He is a Fellow of the Acoustical Society of America and sound source localization. Dr. Lin is a member of the IEEE served as President of the Acoustical Society of America in Oceanic Engineering Society and the American Geophysical 2013–2014. Union, and a Fellow of the Acoustical Society of America.

FEATURED ARTICLE

Solving Complex Acoustic Problems Using High-Performance Computations

Gregory Bunting, Clark R. Dohrmann, Scott T. Miller, and Timothy F. Walsh

Introduction acoustic quantities of interest because otherwise there is Sound waves propagating in fluids (air, water, etc.) are no other means of obtaining this information. a ubiquitous part of our everyday lives, from commu- nication through speech to learning in classrooms to Computational acoustics (CA) has emerged as a subdisci- communicating underwater. The propagation of acous- pline of acoustics, concerned with combining mathematical tic waves in these environments is well-understood and modeling and numerical solution algorithms to approxi- documented in the comprehensive history given in the mate acoustic fields with computer-based models and book by Allan Pierce (2019). For example, one may wish simulation. Using CA, acoustic propagation is math- to know the acoustic pressure field in a large expanse of ematically modeled via the wave equation, a continuous water under the surface of the ocean (Duda et al., 2019), partial differential equation that admits wave solutions. the sound field at every location and time in a large con- The numerical methods of CA are focused on taking the cert hall (Hochgraf, 2019), or the structural response continuous equations from calculus and turning them into of aerospace structures to high-intensity acoustic fields discrete linear algebraic calculations, which are amenable that are experienced in-flight (e.g., the Orion capsule to solution on digital computers. In the case of a concert in Figure 1, left). Unfortunately, when the geometry, hall or underwater domain with complex geometries that boundary conditions, and/or given spatial distributions are not amenable to an analytic solution, CA would enable of material properties of the fluid are complex, the gov- an acoustics engineer to compute a numerical solution to erning wave equations do not typically lend themselves to the wave equation to help the engineering design process. an analytic solution. The same holds true for wave equa- Some of the more popular of these methods are finite dif- tions in other areas of physics such as electromagnetism ference, finite volume, spectral element, boundary element, and optics. In these scenarios, numerical solution of the and finite-element methods (FEMs). Although each of the wave equations can be a powerful tool for computing the numerical strategies for solution of the acoustics equations has its own niche applications and advantages/disadvan- tages, in this article, we focus on the FEM and its application Figure 1. The Orion (left) is the new NASA spacecraft on modern high-performance computing platforms. for astronauts to revisit the moon by 2024. Ground-based testing of the capsule can be modeled via the finite-element Example: Solving the Helmholtz Equation method (FEM). A FEM discretization of the acoustic As an illustration, one can consider the continuous and domain surrounding the Orion (right) illustrates the domain discrete versions of the acoustic Helmholtz equation for discretization method. See text for discussion. steady-state wave propagation in fluids. In the continu- ous form, when body loads are neglected, one has

Δp + k²p = 0 (1)

where p = p(x, y, z) is the acoustic steady-state pressure as a function of position and k = ω/c is the wave number. Applying one's favorite numerical method to solve Helmholtz's

22 Acoustics Today • Summer 2020 | Volume 16, issue 2 ©2020 Acoustical Society of America. All rights reserved. https://doi.org/10.1121/AT.2020.16.2.22 equation (along with the associated boundary conditions) larger numbers of CPUs and/or GPUs on computing numerically yields a discrete set of n linear equations clusters to enable the numerical solution. Modern HPC clusters deployed by the US Department of Defense A p = F ( 2) (DOD) and Department of Energy (DOE) laboratories have access to tens of thousands of CPUs/GPUs, each where A is an n × n matrix that contains a discrete rep- with substantial in-core memory resources. The prob- resentation of the continuous Helmholtz equation, p is lem then becomes how to tailor the numerical method a vector of unknown discrete acoustic pressures at the of interest so that it can be applied in these novel, distrib- nodes of the discretized model, and F is an n × 1 vector uted memory and architectural computing environments. containing information about boundary conditions and energy/load sources for the acoustics problem at hand. Because a wide range of acoustics applications encoun- For completeness, we note that in the case of boundary ter large-domain sizes and/or high frequencies of interest, element methods, one would start with the Helmholtz the ability to numerically solve the acoustics equations integral equation instead of the differential form given in in a scalable way is of broad interest across the field of Eq. 1. The details are omitted here for brevity. acoustics engineering. Large-domain sizes present them- selves in underwater acoustics (Duda et al., 2019), waves High-Performance Computing for Acoustics in atmospheric propagation scenarios (Hart et al., 2016), In high-performance computing (HPC) for acoustics, a aeroacoustics for airborne structures, architectural challenge is to solve equations in the form of Eq. 2 when acoustics in large concert halls, and large-scale acoustic the number of unknowns (n) becomes too large for a chambers for testing aerospace structures (Schultz et al., single computer. Modern HPC platforms and the cor- 2015), to name a few. In these applications, the large size responding software for domain decomposition and of area where the acoustic solution is desired translates parallel communication are revolutionizing the numeri- to large matrices for the numerical solution and hence cal solution of acoustic wave equations. This is enabling the need for HPC. Equivalently, applications with high the solution of practical engineering problems in acous- frequencies present precisely the same computational chal- tics such as airborne acoustic propagation (Hart et al., lenges as large-domain sizes because in both cases the large 2016), sonar applications, and aeroacoustic noise miti- number of wavelengths to be resolved requires more and gation that were not possible using previous generations more discrete elements and/or nodes to resolve the wave of computers. The enabling HPC technology allows one propagation. Ultrasound applications (Suslick, 2019) are to resolve acoustic wave propagation in ever-increasing an example where, due to the high frequencies and hence domain sizes (or, equivalently, ever-increasing frequency small wavelengths, numerical methods require large num- ranges, e.g., megahertz) of interest for the wave propa- bers of degrees of freedom for the solution. HPC has the gation. 
The number of discrete equations to be solved potential to enable the solution of these and other acous- increases with the frequency range and domain size. tics problems across a variety of engineering disciplines. Eventually, the growing number of degrees of freedom and corresponding matrix storage requirements preclude In many acoustics applications, the FEM is an attractive the solution of the problem on one’s laptop or desktop as numerical strategy. Some advantages of the method include memory resources in the computer are exceeded. • The ability to construct unstructured, body-fitted meshes that capture curved interfaces between com- Modern HPC platforms, based on either distributed cen- plex fluid/structural domains; tral processing units (CPUs) and/or graphics processing • Sparse systems (i.e., matrices wherein most entries units (GPUs), are built to optimize the use of memory are zeros) of algebraic equations that, when combined resources on the largest problems in computational with a FEM of an elastic structure, render a coupled physics. In the case of acoustics, as the frequency range system of equations that is still sparse; and/or domain size increases and the required in-core • The ability to solve either linear and/or nonlinear memory [aka random-access memory (RAM) for storing acoustic wave equations; and bits of information] resources correspondingly increase, • The ability to easily handle spatially varying material an acoustics researcher can, in principle, simply employ properties (e.g., capturing the speed of sound and


density that vary with vertical position in underwater became known as Moore’s law and served as a target or atmospheric acoustics). for computer chip manufacturers for several decades. During the reign of Moore’s law from 1975 to 2012, larger By contrast, finite-difference approaches employ a struc- computational problems could be solved by waiting for tured grid that cannot easily capture curved interfaces. a faster processor to be produced. However, Moore’s law Boundary element methods present a dense linear system could not reign forever because the physical limits of of equations, which makes coupling with finite-element- the microelectronics prevented such perpetual growth. based structural models challenging because the latter Rather than wait for a faster processor, it became neces- present a sparse system of equations. In fairness, these sary to use many processors working together to solve alternative methods also have their own advantages over larger problems; thus parallel computing is born. FEMs in certain applications. However, a common theme is the emergence of HPC resources and the benefits that The Advent of Parallel Computing are being presented to any numerical approach for solv- The early work in parallel computing for acoustics started ing acoustic problems. in the 1990s and consisted of using many CPUs to solve a given problem. In the case of a finite-element solution Finite-Element Method of the wave equation, the approach was a divide-and- The FEM has been widely used as a tool for solving the conquer strategy (aka domain decomposition), where the acoustic wave equation. One of the earliest references is individual finite elements were evenly distributed across from Gladwell (1965), quickly followed by several follow- the CPUs and the solution of the global set of algebraic on efforts (Craggs, 1971). Additional references involving equations would be accomplished by many CPUs work- the coupling of an acoustic fluid with a structure fol- ing in parallel. With the continued demise of Moore’s lowed in the late 1960s and early 1970s (Zienkiewicz law, manufacturers are now producing GPU-based com- and Newton, 1969; Craggs, 1972). More recent surveys puting platforms. A single GPU can have thousands of on FEMs for acoustics and structural acoustics provide processor cores compared with tens of CPU cores per comprehensive technical reviews on the application of computational node. Seymour Cray, the father of super- FEMs for solving acoustics problems (Atalla et al., 2017). computing, once remarked, “If you were plowing a field, which would you rather use: two strong oxen or 1024 Finite-element technology solves partial differential equa- chickens?” (Cray, 2020). This antiquated quote reflects tions (PDEs) by turning them into linear algebra. The the opinion that it would be more advantageous to have FEM discretizes the physical domain of a problem into one fast processor rather than restructuring work to be a finite number of elements. This discretization process accomplished in a massively parallel fashion. Modern is illustrated in Figure 1, right, for the Orion space cap- HPC is finding a way to harness the power of the chickens sule. In this case, the goal is to understand the structural when the oxen are unavailable. Heterogeneous comput- response of the Orion capsule to high-intensity acoustic ing environments, where CPUs, GPUs, and possibly other excitation as would be experienced in flight. 
The continu- processors coexist on a single piece of hardware are the ous physical domain (box) containing the Orion space future of scientific computing. capsule in Figure 1, left, is subdivided into a collection of elements in Figure 1, right. The solution is approximated Key points for the parallelism of work are synchroniza- by a polynomial with unknown coefficients, defined tion and independence. Tasks to be executed in parallel locally on each element, and is substituted into a suit- need to be independent from each other so that it does not able integral representation of the PDE. The result of this matter which one gets completed first. Synchronization approximation methodology is a linear system of equa- points in an algorithm provide a waiting point where all tions to be solved for the polynomial coefficients. Each parallel tasks can meet up and exchange any information of these unknowns is referred to as a degree of freedom. that might be needed for future work. Current models for heterogeneous computing rely on CPUs to organize and Getting Around Moore’s Law divide tasks to be parallel processed on the GPU. Hard- Gordon Moore (1965) observed that the speed of a com- ware configurations dictate that multiple CPUs must be puter processor doubles about every two years. This able to execute tasks simultaneously on a single GPU.
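To make concrete what the parallel solvers described above are actually distributing, the sketch below assembles and solves a one-dimensional version of the Helmholtz problem of Eq. 1 with linear finite elements, producing a small instance of the linear system A p = F of Eq. 2. It is only a minimal Python illustration with assumed values (the frequency, duct length, and boundary data are invented), not the Sierra-SD implementation described in this article; production codes perform the same assembly and solve spread across many processors by domain decomposition.

```python
# Minimal 1-D illustration of Eq. 1 -> Eq. 2: discretize the Helmholtz
# equation with linear finite elements and solve the sparse system A p = F.
# All values (frequency, length, boundary data) are illustrative only.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

c0, f, L, n_elem = 343.0, 200.0, 1.0, 200   # assumed sound speed, tone, duct length, elements
k = 2.0 * np.pi * f / c0                    # wave number k = omega/c
h = L / n_elem                              # element size
n = n_elem + 1                              # nodes = degrees of freedom

# Assemble global "stiffness" K and "mass" M from 2x2 element matrices.
K, M = sp.lil_matrix((n, n)), sp.lil_matrix((n, n))
Ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
Me = (h / 6.0) * np.array([[2.0, 1.0], [1.0, 2.0]])
for e in range(n_elem):
    dofs = (e, e + 1)
    for a in range(2):
        for b in range(2):
            K[dofs[a], dofs[b]] += Ke[a, b]
            M[dofs[a], dofs[b]] += Me[a, b]

A = (K - k**2 * M).tolil()                  # discrete Helmholtz operator of Eq. 2
F = np.zeros(n)

# Boundary conditions: unit pressure at x = 0, rigid (natural) end at x = L.
A[0, :] = 0.0
A[0, 0] = 1.0
F[0] = 1.0

p = spla.spsolve(A.tocsr(), F)              # sparse direct solve of A p = F

# Check against the exact solution p(x) = cos(k(L - x)) / cos(kL).
x = np.linspace(0.0, L, n)
p_exact = np.cos(k * (L - x)) / np.cos(k * L)
print("maximum nodal error:", np.max(np.abs(p - p_exact)))
```

Even at this toy scale the matrix A is sparse, which is the property the solvers discussed below rely on; raising the frequency or enlarging the domain simply makes the same kind of system larger.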

24 Acoustics Today • Summer 2020 High-Performance Computing for Given this rapidly evolving hardware, navigating the last Department of Energy/Department of 20 years of supercomputing changes has led to some fun- Defense Applications damental changes in the way the software is structured. The emphasis on using HPC as a pillar of science and engineering for national security purposes can be traced Parallel Scalability back to 1992 when the United States passed a morato- An important concept in HPC is the notion of scalability, rium on nuclear testing. A consequence of this was the which encompasses both the size of the problem that can be establishment of the Stockpile Stewardship Program solved and how fast the problem can be solved. The former (SSP) that was given the task of certifying the safety is typically referred to as weak scaling, and the latter as strong and reliability of the nuclear weapons stockpile without scaling. In cases where the goal is to solve the problem very nuclear explosives testing. The Advanced Simulation fast, the intent is to use HPC to achieve strong scaling. In and Computing Program (ASC) is a vital element of the other cases, the goal may be to solve very large problems SSP, creating the modeling and simulation capabilities (i.e., many degrees of freedom), in which case one wants to necessary to combine theory and past experiments to achieve weak scaling on the HPC platform. create future engineering designs and assess aging com- ponents in the stockpile. Sierra Mechanics is one software Strong scaling is demonstrated when doubling the in the ASC toolset and was developed at Sandia National amount of processing power available for a given problem Laboratories (see sandia.gov). Within the Sierra Mechan- cuts the solution time in half. Weak scaling represents the ics suite, the Sierra-SD (structural dynamics) module ability to solve very large problems with many degrees includes capabilities for massively parallel acoustics and of freedom. In the case of a finite-element solution, this structural acoustics capabilities in both the time and would imply that the model has many nodes and/or frequency domains as well as eigenvalue capabilities for elements, which is commonly the case in acoustics appli- mode shape and frequency calculations (Bhardwaj et al., cations when the domain size and/or frequency range 2002; Bunting, 2019). In Applications we show examples of interest becomes large. Weak scaling is demonstrated of the use of these capabilities to solve acoustics problems when one simultaneously increases both the problem size on some of the world’s largest supercomputers. and the processing power available and is able to solve the same problem in the same amount of time. More High-Performance Computing generally, it can be stated as the ability to solve an n times Taking advantage of modern HPC platforms for acoustics larger problem using n times more compute processors requires an understanding of the architectures them- in nearly constant CPU time. selves to develop optimal software strategies to maximize performance. The evolution of computing platforms in Linear Solvers the past couple of decades can be illustrated by compar- The HPC hardware described in High-Performance Comput- ing the top platforms in the late 1990s versus those of ing is only useful for computational acoustics if one also has today. In 1997, the Advanced Strategic Computing Initia- the software to solve Eq. 2 for large dimension n. 
A simple tive (ASCI) Red machine of the DOE came online with example of a linear system is given by the two equations 2x + y over 9,000 processors and 1 terabyte of total memory, = 4 and x + 3y = 7, where x and y are the unknowns. In this case, becoming the first supercomputer in the world to achieve the solution x = 1 and y = 2 can be obtained by hand. When the speed of 1 teraflop (i.e., 1012 floating point operations the dimension n of matrix A from Eq. 2 is of moderate size per second). We can compare that with the more recent (less than several million), sparse direct solvers (Davis, 2006) DOE Summit (Oak Ridge National Laboratory, 2020) can be used effectively to solve for the unknowns. For larger machine, which has over 200,000 CPU cores and 27,000 problems in computational acoustics when n exceeds several GPUs. Summit has a peak performance of 200 petaflops million, however, the computational resources required by a (or 200,000 teraflops), currently making it the fastest direct solver quickly become prohibitive. In the case of the supercomputer in the world (Top500, 2019). Cloud com- Orion capsule example described in Applications with over puting, such as those offered by Google and Amazon, has 2 billion unknowns, a direct solution would take order of a prohibitively slower communication time between CPU months on a hypothetical CPU processor with a peak perfor- cores and thus are not optimal for scientific computing. mance of 1 teraflop and enough memory for the computations.


The predominant methods for solving linear systems with the large dimension n beyond the reach of direct solv- ers are preconditioned iterative approaches (Smith et al., 1996; Dohrmann et al., 2010). These methods solve the linear system in an iterative fashion rather than using a direct factorization. A common preconditioned iterative approach is based on the concept of a divide-and-con- quer strategy where the physical domain is divided into disjoint partitions and each partition is handled by a separate CPU (Smith et al., 1996; Dohrmann and Wid- lund, 2010). Other preconditioners based on multigrid are popular in other applications, wherein each partition Figure 2. The simulated response of a stiffened cylinder is further subdivided into another level of partitions. subject to underwater explosion is a demonstration of a typical Navy use of the FEM. The incident pressure wave Applications excites the structure as it is reflected off the surface. Inset: Here, we present examples of applications where HPC has a typical gauge time history. These results can be used to been utilized to solve acoustics problems of the form in Eq. design structures to withstand explosive detonations in the 2, with a large dimension n that would not be possible with surrounding medium (water). smaller scale computing platforms. Solutions of problems with over 2.2 billion degrees of freedom are presented. domain consists of an ellipsoidal region composed of Underwater Acoustics for Ship Shock tetrahedral elements with an acoustic material formula- One application of underwater acoustics is ship-shock tion. The far-field, semi-infinite domain is approximated testing. Vessels in the Navy fleet must undergo ship-shock by infinite elements, which are not shown. Figure 2, tests before they are certified for service. These tests involve inset, shows a typical gauge time history predicted by setting off large underwater explosives near the vessel of the analysis. interest, typically around 75% of the expected failure load. The purpose is not to sink the vessel but to find out what Simulation of Ground-Based Acoustic Tests breaks when the ship is exposed to nearby explosions (e.g., Qualification tests of aerospace structures and flight electronics, chairs, pipes). Such at-sea tests are extremely vehicles require that the structures be subjected to expensive and take a vessel out of the fleet for many months acoustic loads that are representative of the environ- or even years. The more a ship is damaged in such a test, ments that will be experienced in-flight. One way to the longer it takes to return to the fleet. Computational achieve this, of course, is to conduct a full-scale flight modeling of a ship-shock event is one strategy to design test on the structure. The associated accelerometer and/ components that will survive a ship-shock test. In the far or pressure sensor measurements can then be used to field, the large pressures generated by explosives can be assess the acoustic environment, and the resulting modeled as acoustic pressure waves impinging on the ship. structural response. These underwater acoustics applications exhibit large simu- lation domains and frequency ranges of interest that result However, flight tests tend to be very expensive, and in many wavelengths in the domain. As such, they lend due to instrumentation and telemetry limitations, only themselves well to solution via HPC (Moyer et al., 2016). limited accelerometer data are typically available from such tests. 
As a result, ground-based acoustic testing Figure 2 shows slices of the acoustic pressure field is a common alternative wherein the structure is sub- reflected from a stiffened cylinder in a transient-coupled jected to representative acoustic fields in an acoustic test structural-acoustics simulation. Acoustic loading is due chamber. Typically, high-powered speakers and other to an underwater explosion away from the submerged, acoustic sources are used to generate the acoustic fields. air-filled structure. Rings on the surface of the cylinder The advantages of ground-based testing are that the cost indicate the mechanical response. The near-field fluid is typically only a small fraction of that of a flight test

26 Acoustics Today • Summer 2020 and, perhaps more importantly, significantly more data Example: Orion Capsule in Ground-Based can be gathered with ground-based acquisition systems. Acoustic Test As an example of ground-based acoustic testing, we pres- The advantages of ground-based acoustic testing come ent a numerical model of a reverberation chamber test with a challenge, however, in that one needs to engi- on the Orion capsule. Reverberation (reverb) chambers neer specific acoustic fields that emulate what would be are rooms designed to produce a diffuse sound field seen in-flight. Given the large-domain sizes and high- around an object of interest, which is a common condi- frequency ranges of interest, these problems typically tion in flight environments. A diffuse sound field is an have many wavelengths in the domain, thus making acoustic environment where the acoustic energy density HPC with finite elements an attractive solution strategy. is the same at all locations. By understanding structural response to a diffuse field, including absorption coef- Ground-based testing goes hand in hand with compu- ficients and transmission loss, the structural behavior tational acoustics modeling. Figure 3A shows a typical during launch and reentry can be characterized. experimental setup of a ground-based acoustics test at Sandia National Labs, and Figure 3B shows a corre- The sound absorbability is determined by the change in sponding finite-element model of a representative test reverb time of the test object. Acoustic excitation can be (Schultz et al., 2015). The ground-based test can tell us accomplished with a variety of different source arrange- with high confidence the mechanical response from a ments. Reverb room tests are very important for ground certain loading environment at specific sensor locations. testing flight objects that will be excited to uniformly However, numerical simulations are used to test large high random pressure loads while in use. numbers of different loading environments and provide the response at all points of a model, something impossi- To demonstrate a simulation of a reverb chamber test, we ble to do with experiments alone. These models typically present a numerical simulation of a three-quarter scale reach sizes of hundreds of millions of finite elements. version of the Orion capsule (crew module; National

Figure 3. A: an engineer setting up a ground-based acoustic test of a weapon system. Instrumentation is being put into place in preparation for acoustic excitation to evaluate the structural response to high-amplitude acoustic fields. B: results of the computational acoustics simulation corresponding to the physical test in A. The simulation results provide pressure, acceleration, and stress values in the weapons system at every point in time in the simulation. These numerical results are used to evaluate the weapon response in the simulated acoustics environment. The finite-element mesh is represented by the grid, and the colors represent the magnitude of the acoustic pressure field at an instant in time.


Aeronautics and Space Administration, 2019) in the middle of the vibroacoustic test facility (VATF) of Sandia National Labs (Schultz et al., 2015). We note that this is purely a numerical study, not an actual experimental test. The VATF is a rectangular box 6.58 m × 7.50 m × 9.17 m, making the volume ratio of capsule to room approximately 0.1. Acoustic excitation to the 140 dB level is provided by a 0.1-m² loudspeaker in the bottom corner of the room. It provides a sinusoidal acoustic velocity loading with an amplitude of 3.4 m/s and a frequency of 350 Hz.

The accuracy of a finite-element solution is dependent on the size of the elements used to obtain the solution. In acoustics, the element size used in the mesh will limit the frequencies resolved. For instance, computing a sound field using finite elements would require the mesh size

h = c0/(λ fmax) (3)

where h is the size of the finite element, c0 is the speed of sound in air, fmax is the highest frequency requested, and λ is the number of elements per wavelength needed. Typically, for low-frequency excitation, we select λ = 10 for linear hexahedral elements. Fewer elements can be used in conjunction with high-order polynomial interpolation as the basis functions that are able to approximate a waveform with less error, but for simplicity, we do not cover those details here.
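As a worked example of Eq. 3 (assuming a speed of sound in air of 343 m/s, a value not stated in the article), resolving the 350-Hz excitation with λ = 10 linear elements per wavelength calls for elements roughly 10 centimeters across; the short calculation below also gives a crude element-count estimate for a room the size of the VATF.

```python
# Worked example of Eq. 3, h = c0/(lambda * fmax). The speed of sound and
# the reuse of the VATF dimensions for an element count are assumptions
# made for illustration only.
c0 = 343.0      # speed of sound in air, m/s (assumed)
fmax = 350.0    # highest frequency to resolve, Hz (the excitation tone)
lam = 10        # elements per wavelength for linear hexahedral elements

h = c0 / (lam * fmax)
print(f"required element size h = {h:.3f} m")        # about 0.098 m

V = 6.58 * 7.50 * 9.17                               # VATF volume, m^3
print(f"roughly {V / h**3:.1e} hexahedral elements at {fmax:.0f} Hz")
```

Because the element count in a three-dimensional domain grows roughly as the cube of the highest frequency resolved, widening the bandwidth of interest quickly drives such models toward the billions of unknowns reported below.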

Figure 4. A: acoustic pressure field on NASA’s Orion capsule suspended in Sandia’s acoustic reverberation chamber at an instant early in time. The sound field is generated by an acoustic source in the nearest bottom corner of the room. The source generates acoustic waves that are visible and coherent at this early time instant (not yet diffuse). The red represents high acoustic pressure, and the blue is low acoustic pressure. B: diffuse acoustic pressure field on NASA’s Orion capsule after 0.2 s of simulated time. The pressure field from A has evolved into a diffuse field, which is needed to model the statistical behavior and structural response to a random input, such as launch and reentry. The distribution of this diffuse field is the desired output of a reverberation chamber experiment.

The acoustic domain for this problem is the air enclosed by the reverb chamber and surrounding the Orion capsule. Using the Sierra-SD software, a transient reverb simulation for over 2.2 billion unknowns was solved on the Serrano supercomputer using over 22,000 computing cores in under 8 hours! Figure 4A shows an early time instance of the developing acoustic pressure field on the Orion capsule. The sinusoidal excitation is clearly visible on the surface. Figure 4B illustrates the acoustic pressure field on the Orion and in a cutout of the chamber at the final time of 0.2 s. The acoustic pressure field has become visibly diffuse at this instant.

Conclusions
This article has presented a discussion of modern HPC hardware and software advances for modeling acoustic problems with large numbers of degrees of freedom, which arise in a wide range of applications. Example applications in ground-based acoustics testing and underwater acoustics on models and corresponding linear systems with over 2 billion degrees of freedom were demonstrated, showing the potential for HPC to enable acoustics solutions that were not previously possible. As future software and hardware advances continue to evolve, one can expect HPC to continue to expand the range of acoustics problems that can be solved for realistic engineering applications.

Although the technology has made great strides in recent decades, there is still significant research that is ongoing and more that is required for HPC to continue to expand the boundaries of large-scale acoustic modeling. Some areas where emerging computational research is essential include
• The optimal use of GPU-based architectures with high-order finite elements;
• Continued development of GPU-aware multilevel domain decomposition and multigrid solvers for acoustics and structural acoustics problems;
• Advances in mesh creation for large-scale acoustics problems; and
• Advances in HPC for large-scale optimization problems (e.g., design and inverse problems) in structural acoustics, wherein the solution of the acoustics equations is inside of an optimization loop.

Acknowledgments
Sandia National Laboratories is a multimission laboratory managed and operated by National Technology and Engineering Solutions of Sandia, LLC, a wholly owned subsidiary of Honeywell International Inc., for the US Department of Energy National Nuclear Security Administration under contract DE-NA0003525. This paper describes objective technical results and analysis. Any subjective views or opinions that might be expressed in the paper do not necessarily represent the views of the US Department of Energy or the United States Government. The capabilities reported herein are the work of many people including the authors, Nathan Crane, David Day, Sean Hardesty, Payton Lindsay, Lynn Munday, Kendall Pierson, Jerry Rouse, Scott Gampert, Ryan Schultz, Nicholas Reynolds, Michael Miraglia, and Jonathan Stergiou.

References
Atalla, N., and Sgard, F. (2017). Finite Element and Boundary Methods in Structural Acoustics and Vibration. CRC Press, Boca Raton, FL.
Bhardwaj, M., Pierson, K., Reese, G., Walsh, T., Day, D., Alvin, K., Peery, J., Farhat, C., and Lesoinne, M. (2002). Salinas: A scalable software for high-performance structural and solid mechanics simulations. In Proceedings of the 2002 ACM/IEEE Conference on Supercomputing, Baltimore, MD, November 16–22, 2002.
Bunting, G. (2019). Strong and Weak Scaling of the Sierra/SD Eigenvector Problem to a Billion Degrees of Freedom. Technical Report SAND2019-1217, Sandia National Laboratories, Albuquerque, NM. Available at https://www.osti.gov/biblio/1494162.
Craggs, A. (1971). The transient response of a coupled plate-acoustic system using plate and acoustic finite elements. Journal of Sound and Vibration 15, 509-528.
Craggs, A. (1972). An acoustic finite element approach for studying boundary flexibility and sound transmission between irregular enclosures. Journal of Sound and Vibration 30, 331-339.
Cray, S. (2020). Seymour Cray. Wikipedia. Available at https://en.wikipedia.org/wiki/Seymour_Cray.
Davis, T. (2006). Direct Methods for Sparse Linear Systems (Fundamentals of Algorithms 2). Society for Industrial and Applied Mathematics, Philadelphia, PA.
Dohrmann, C., and Widlund, O. (2010). Hybrid domain decomposition algorithms for compressible and almost incompressible elasticity. International Journal for Numerical Methods in Engineering 82, 157-183.
Duda, T., Bonnel, J., Coelho, E., and Heaney, K. (2019). Computational acoustics in oceanography: The research roles of sound field simulations. Acoustics Today 15(3), 28-37. https://doi.org/10.1121/AT.2019.15.3.28.
Gladwell, G. (1965). A finite element method for acoustics. In Proceedings of the Fifth International Conference on Acoustics, Liege, Belgium, Paper L33.
Hart, C., Reznicek, N., and Wilson, C. (2016). Comparison between physics-based, engineering, and statistical learning models for outdoor sound propagation. The Journal of the Acoustical Society of America 139, 2640-2655.
Hochgraf, K. (2019). The art of concert hall acoustics: Current trends and questions in research and design. Acoustics Today 15(1), 28-36.
Moore, G. (1965). Cramming more components onto integrated circuits. Electronics 38(8), 114-117.
Moyer, T., Stergiou, J., Reese, G., Luton, J., and Abboud, N. (2016). Navy enhanced sierra mechanics (NESM): Toolbox for predicting Navy shock and damage. Computing in Science & Engineering 18(6), 10-18.
National Aeronautics and Space Administration. (2019). Orion Capsule. National Aeronautics and Space Administration, Washington, DC. Available at https://nasa3d.arc.nasa.gov/detail/orion-capsule.
Oak Ridge National Laboratory. (2020). Summit. Leadership Computing Facility, Oak Ridge National Laboratory, Oak Ridge, TN. Available at https://www.olcf.ornl.gov/olcf-resources/compute-systems/summit.
Pierce, A. (2019). Acoustics, 3rd ed. Springer International Publishing, Cham, Switzerland.
Schultz, R., Ross, M., Stasiunas, E., and Walsh, T. (2015). Finite element simulation of a direct-field acoustic test of a flight system using acoustic source inversion. In Proceedings of the 86th Shock and Vibration Symposium, Shock and Vibration Exchange, Orlando, FL, October 5–8, 2015.
Smith, B., Bjørstad, P., and Gropp, W. (1996). Domain Decomposition: Parallel Multilevel Methods for Elliptic Partial Differential Equations. Cambridge University Press, New York.
Suslick, K. (2019). The dawn of ultrasonics and the palace of science. Acoustics Today 15(4), 38-46. https://doi.org/10.1121/AT.2019.15.4.38.
Top500. (2020). Summit up and running at Oak Ridge, claims first exascale application. Available at https://www.top500.org.
Zienkiewicz, O., and Newton, R. (1969). Coupled vibrations of a structure submerged in a compressible fluid. In Proceedings of International Symposium on Finite Element Techniques, Stuttgart, Germany, May 1–15, 1969, pp. 359-379.


About the Authors Timothy F. Walsh [email protected] Gregory Bunting Sandia National Laboratories [email protected] PO Box 5800, MS 0897 Sandia National Laboratories Albuquerque, New Mexico 87185-0897, PO Box 5800, MS 0845 USA Albuquerque, New Mexico 87185-0845, Timothy F. Walsh received a PhD in USA computational and applied mathematics from the University Gregory Bunting earned a PhD from of Texas at Austin in 2001. He has been a staff member at Purdue University (West Lafayette, IN) in 2016. Gregory cur- Sandia National Laboratories (Albuquerque, NM) for 19 years, rently works in the Solid Mechanics and Structural Dynamics with research interests in computational acoustics, structural Group, Sandia National Laboratories (Albuquerque, NM), dynamics, inverse problems, and design optimization. which develops and maintains simulation and modeling capa- bilities for real-world problems. His research interests include real-time hybrid simulation, structural acoustics, perfectly matched layers, inverse problems, and higher order methods.

Clark R. Dohrmann [email protected] Sandia National Laboratories PO Box 5800, MS 0845 Albuquerque, New Mexico 87185-0845, USA Clark R. Dohrmann is a staff member in the Computational Solid Mechanics and Structural Dynam- ics Department, Sandia National Laboratories (Albuquerque, NM), having received his degrees from Ohio State University (Columbus). His research interests include preconditioners for iterative solvers, finite-element methods, and structural dynamics. He currently resides in Tennessee with his wife and seven children.

Scott T. Miller [email protected] Sandia National Laboratories PO Box 5800, MS 0845 Albuquerque, New Mexico 87185-0845, USA Scott T. Miller is a computational mechanician with research interests in fluid-structure interaction, blast-induced traumatic brain injuries, and scientific and engineering software develop- ment. He completed his PhD in theoretical and applied mechanics, University of Illinois (Champaign-Urbana), previ- ously worked in Applied Research Laboratory, Pennsylvania State University (University Park), and is currently a staff member at Sandia National Laboratories (Albuquerque, NM).

FEATURED ARTICLE

Battlefield Acoustics in the First World War: Artillery Location

Richard Daniel Costley Jr.

Introduction and Context Heavy artillery was developed in the late 1800s, largely by Acoustics have probably been used in warfare for as long Germany because field artillery was not able to destroy as people have been warring with each other. Indeed, in improvised field fortifications in the Russo-Turkish War an historical review, Namorato (2000) relates the use of (1877–1878). Heavy artillery fired larger caliber rounds acoustics in warfare from biblical to modern times. The with increased muzzle velocities at increased ranges. examples he describes through the nineteenth century Then, in 1897, France developed a gun with a long largely rely on the human ear for hearing or not hearing barrel recoil that had a brake mechanism that absorbed and for recognizing various sounds associated with warfare. the recoil so that the gun did not require repositioning after each shot. In addition, shells combined propellant, World War I (WWI) was distinguished from earlier con- warhead, and timing devices into a cylinder that could flicts by technological advancements such as the advent be quickly loaded into the guns. These developments of electricity that took place in the late nineteenth and allowed for an increased rate of fire and subsequently early twentieth centuries which led to the invention of required an increased supply of ammunition. devices for transmitting, detecting, and recording sound. Other technological advancements in warfare were intro- One example of heavy artillery in use by the German Army duced during this period: the machine gun was invented in WWI was a 21-cm-caliber Morser Howitzer (Figure in 1884, the flamethrower was invented in 1914, poison 1) that had a supersonic muzzle velocity of 393 meters/ gas was introduced in 1915, and the tank was invented in 1916 (Meyer, 2006). Figure 1. A 21-cm-caliber Morser Howitzer used by the German However, advancements in artillery, which killed more Army in World War I (WWI). Photo by Balcer~commonswiki, people in WWI than did any other weapon, led to the used under the Creative Commons license with attribution (CC development of methods to localize (and ultimately BY 2.5). Available at bit.ly/3d75Eem. destroy) enemy artillery. In other words, artillery loca- tion technologies were developed to counter the recent advances in artillery.

In the early twentieth century, there were two classes of artillery: field artillery and heavy artillery (see bit.ly/2TX2DFZ). Field artillery, such as that used in the US Civil War, was intended for mobile warfare and shot small caliber shells between 7.5 and 8.4 centime- ters. The projectiles traveled in flat trajectories at targets within the line of sight, so soldiers could usually see the cannon firing at them. Field artillery continued to be used in the First World War, but it was supplemented by heavy artillery (Meyer, 2006; Storz, 2014).


second and a maximum firing range of 11 kilometers. It the same time, the operator at Central could be confi- could be fired at a rate of 1 to 2 rounds per minute. These dent they were observing the same flash. The observers guns could be positioned at larger distances from the line measured the bearings from their position to the flash of of fighting and behind obstacles such as forests or hills. the observed gun and reported them to Central through the telephone. On a map board at Central, the observ- WWI began as a war of mobility, with seven German ers’ positions had been accurately plotted. The operator armies advancing rapidly through Belgium and into at Central stretched catgut strings from the observers’ France in August 1914. Their progress was impeded positions on the board along the corresponding bearings by the French, with the help of the British Expedition- so that the intersection of the strings from the different ary Force (BEF), before they were able to invade Paris. observation posts provided the location of the gun or For the next four years, the Allied and German armies the battery. Captain (later Colonel) Harold Hemming, a defended lines of demarcation that roughly ran from the Canadian gunnery officer posted to Ranging and Survey, North Sea near the border between France and Belgium is credited with the development and advancement of to the corner between France, Germany, and Switzerland flash ranging. Flash and sound ranging were developed in what became known as trench warfare. Machine guns at the same time and complemented each other (Bragg kept troops in their trenches and the fronts stationary so et al., 1971; Mitchell, 2012). that the most effective way to inflict injury on the enemy was to lob shells at them from behind one’s own line. Beginnings of Sound Ranging When the war started, Charles Nordmann was a profes- Methods of Artillery Location sor of astronomy at the Paris Observatory in Meudon, All sides in WWI recognized the need for locating artil- France. He was serving at the front in the French Army lery fairly quickly. Three basic methods were used: aerial in 1914 when he conceived the idea of sound ranging and location, flash ranging, and sound ranging. Aerial location obtained permission to test it. He sought the assistance was performed by using both observation balloons and of Lucien Bull, who had been developing galvanometers airplanes. Aerial photography made great strides during for electrocardiography at the Institute of Marey in Paris this period and provided valuable intelligence for locat- (Van der Kloot, 2005; Mitchell, 2012). ing enemy positions and for map making. However, these advances came with risks because as airplanes developed, Nordmann determined that a gun could be located by so did antiaircraft artillery. Also, aerial surveillance was measuring the time differences of arrival of the sound not effective in conditions of fog or rain, which were not from a gun to different observation positions. Nord- uncommon. Observation balloons bobbed around in the mann and Bull conducted sound ranging experiments atmosphere, making it difficult to measure bearings accu- to test this idea in late November 1914. Two guns were rately. In addition, each side would hide or camouflage its located in St. Cloud, a village located on the west side batteries to avoid detection or set out decoys. of Paris. 
In one approach, human observers, “tappers,” would press a key similar to the ones used by the flash- Flash ranging, developed alongside sound ranging, was ranging observers when they heard the guns fire. They another method for locating artillery. Three to four also used stopwatches to record the time from the flash observation posts were ideally surveyed 1,400-1,800 of the muzzle to the time they heard the sound. In a sepa- meters from the front lines along a 3- to 4-kilometer rate approach, four microphones placed along a baseline base. Efforts were made to conceal these posts from the of 4,500 meters recorded the signals. The tests proved enemy, but concealment was not always possible. There successful, and the guns were located with an error of had to be a sufficient distance between each observer for 40 meters in range and 20 meters in bearing. The human accurate triangulation. The observer had a survey instru- observers were within 0.05 second of the microphones. ment at his position, similar to a transit or a theodolite, The “tappers” were deployed to Arras, Belgium, near the and a phone line hardwired to the switchboard at “Flash British line, to establish sound-ranging (SR) sections. Central.” The observer would press a key or button that Because the method was subjective and lacked precision, was part of the phone set when he saw the flash from the French continued to develop and improve systems an enemy gun. If multiple observers pressed the key at using microphones (MacLeod, 2000).

32 Acoustics Today • Summer 2020 The Geographical Section of the General Staff in London During this period, Bragg promoted the exchange of ideas: had learned about the French efforts to locate artillery “At intervals of two months or so, we had a meeting at some by sound. The head of the topographical subsection at central point such as Doullens to which each section sent the General Headquarters (GHQ) in Flanders, Belgium, an expert. They swopped stories, schemes, and boasts of Lieutenant. Colonel Ewan Maclean Jack, Royal Engineers their achievements and I am sure emulation made every- (RE), recruited Second Lieutenant William Lawrence thing go much faster. The meeting generally ended with a Bragg to put a system in operation. binge of heroic magnitude” (Bragg et al., 1971, p. 38).

Bragg was a student of mathematics and physics at Trin- Method and Apparatus ity College Cambridge in 1909. After taking a First in Part Hyperbolas II Physics in 1911, he started working at the Cavendish As mentioned in Beginnings of Sound Ranging, acoustic Laboratory. In November 1915, he and his father, Wil- location was based on determining the differences in the liam Henry Bragg, were jointly awarded the Nobel Prize times of arrival of the sound from a cannon to different in physics “for their services in the analysis of crystal observation positions or microphones. The difference in time structure by means of Röntgen rays.” W. L. Bragg, who was used to determine the direction of travel of the sound was 29 years old at the time, is still the youngest laureate by considering that the gun was located on the asymptote to have won the Nobel Prize in a scientific category. of a hyperbola while the pair of microphones that detected the sound was located at its foci. The time difference would Bragg had enlisted shortly after the war began and was be constant for any gun located along this asymptote. Thus, commissioned as a Second Lieutenant. In the fall of 1915, the time difference established the direction of arrival of the Lieutenant Colonel Jack offered Bragg the opportunity to gun wave. Time differences from other pairs of microphones put SR into operation. Bragg accepted, happy to have a produced their own asymptotes or bearings. The intersection scientific job in the war, and recruited an assistant, newly of these bearings determined the location of the gun. commissioned Lieutenant Harold Roper Robinson, a lec- turer in physics at London University where he worked with Thus, the sound rangers devised graphical techniques Ernest Rutherford (MacLeod, 2000; Van der Kloot, 2005). to determine the gun locations. After being surveyed,

Bragg and Robinson traveled to a section of the front in the Vosges Mountains in the Alsace region of France Figure 2. Plotting board used with sound ranging. Nos. where the French had set up their SR apparatus. The 1-6, microphone positions. The timescales between adjacent French sound rangers instructed the British officers over microphones are within the border that runs from the top the next couple of weeks. However, the front during that left, underneath, and to the right of the array (although time was very quiet and they didn’t see much action. difficult to read at this scale). From Mitchell, 2012. See text Bragg and Robinson left to establish the first SR section for detailed explanation. on the British front in Flanders, five miles southwest of Ypres, Belgium, which was being held by the Canadian Corps of the British Second Army. Bragg and his team struggled through 1915 and the first part of 1916, pro- ducing inconsistent results due mainly to the type of microphone they were using (discussed in Microphones).

Despite the lack of progress, Lieutenant Colonel Jack agreed to start up additional SR sections. Bragg recruited new sound rangers by attending unit parades, where he would order “all Bachelors of Science step forward” (Van der Kloot, 2005). By June 1916, 16 SR sections had been recruited, trained, and deployed to areas along the front (one of these recruits was Lance Corporal William Tucker).


the positions of six or seven microphones were plotted was of primary interest to the sound rangers because it on a plotting board (Figure 2; Mitchell, 2012). A catgut radiated from the cannon. The second sound, although string was pinned at the midpoint between adjacent detected first because the projectile was supersonic, was microphones; the strings stretched from this point to the ballistic shock wave or shell wave recorded by the their corresponding time difference on the timescale. In microphones as the supersonic projectile traveled over Figure 2, the string is represented by the lines running them on its way to its target. This was referred to by the from the midpoints between the microphone positions French as onde de choc and is basically a minisonic boom to the circles with numbers (the circles are not part of radiating from the projectile. The third sound recorded the plotting board but are only included to identify by the microphones was the explosion of the shell as it the microphones corresponding to that bearing). For impacted its target. The carbon microphones were sensi- example, the bearing determined from the No. 1 and tive to the shell wave, which was higher in frequency than No. 2 microphones is represented by the line that runs the muzzle wave. This issue frustrated the BEF sound from the corresponding midpoint to the circle contain- rangers in their early efforts because it complicated the ing “1-2.” The intersection of the strings indicated the interpretation of the signals because the shell wave was gun location. In practice, a reliable location required dependent on the caliber of the gun and varied with the three or more asymptotes. range and direction of fire.
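The string-and-timescale construction described above has a compact analytic counterpart. For one pair of microphones, the measured time difference fixes the hyperbola on which the gun must lie; when the gun is many baseline-lengths away, the hyperbola is well approximated by its asymptote, and the bearing from the pair's midpoint follows directly from the time difference. The sketch below uses this far-field approximation; the 1,500-m spacing matches the spacing quoted later in the article, while the time difference and sound speed are assumed, illustrative values.

```python
import math

SOUND_SPEED = 340.0  # m/s; assumed nominal value

def bearing_from_pair(dt: float, spacing: float, c: float = SOUND_SPEED) -> float:
    """
    Far-field bearing of a source from the midpoint of a microphone pair.

    dt      : arrival-time difference between the two microphones (s)
    spacing : distance between the microphones (m)
    returns : angle (degrees) measured from the perpendicular bisector of
              the pair, i.e., the direction of the hyperbola's asymptote.
    """
    s = c * dt / spacing
    if abs(s) > 1.0:
        raise ValueError("time difference too large for this spacing")
    return math.degrees(math.asin(s))

# Illustrative numbers only: microphones 1,500 m apart and a gun wave that
# arrives 0.9 s earlier at one microphone than at the other.
print(f"bearing = {bearing_from_pair(0.9, 1500.0):.1f} deg from broadside")
```

Intersecting two or more such bearings reproduces numerically what the catgut strings did graphically on the plotting board.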

When surveying the microphone positions, the SR team During this time, Bragg was billeted in a farmhouse at La would try to maintain equal spacing between the micro- Clytte in Flanders. The privy, located in an annex just off phones, which were separated by approximately 1,500 the kitchen, was sealed except for the hole beneath the seat: meters, so that the entire array had a length of roughly “one sat on the only aperture between interior and the outer 7,500 meters. As the war progressed, the SR teams air” (Bragg et al., 1971). When a British 6-inch gun, located learned that they could distinguish individual guns a quarter mile away, fired, “anyone sitting on the privy was when fired simultaneously by placing the microphones slightly, but perceptibly, lifted off the seat” (Bragg et al., on the arc of a circle facing the enemy line, as depicted 1971). This led Bragg and his colleagues to conclude that in Figure 2. The radius of the arc was the estimated dis- the gun sound produced a large amount of low-frequency tance of the enemy batteries. Bragg initially thought this energy that could be exploited for SR (Mitchell, 2012). was “fussy” because of the extra complexity in laying out the line, but he later recognized the benefits. A captured Corporal Tucker’s experiences led to a solution. Tucker German order forbade any battery to fire alone, think- had joined Bragg’s section on Kemmel Hill from the Phys- ing that multiple batteries firing at the same time would ics Department at Imperial College, where he had been confuse the Allied SR systems. But Bragg claimed that performing experiments in the cooling of very fine hot they could locate “almost any number of guns firing at platinum wires, called Wollaston wires (see bit.ly/3d5u5Jf). once, the more the merrier” (Bragg et al., 1971), obvi- The tarred-paper shack in which he was staying at the time, ously a hyperbole. with Bragg and others, had holes or tears in it. When the guns fired, jets of cold air annoyed him as he lay on his Microphones bunk. Tucker had the idea of using these jets of air to Initially, both the French and the British used carbon cool Wollaston wires. The first successful experiment microphones for SR. These were invented independently involved stretching a thin wire over the opening of an in England and in the United States around 1877. A empty rum jar and blowing on it. Then they obtained recent article by Thompson (2019) in Acoustics Today “proper wire” from England and stretched it across a hole contains an excellent description of these devices. they had drilled in a discarded ammunition box. The new microphone worked; the shell wave “hardly made the Three different sounds were produced by the firing artil- galvanometer quiver” while the gun wave “gave an enor- lery and recorded by the SR apparatus. The first sound mous kick” (Bragg et al., 1971; Mitchell, 2012). was the muzzle wave, referred to as onde de bouche by the French, which was produced by the explosive charge The final form of the Tucker microphone consisted of propelling the projectile from the cannon. This sound a 23-liter (5-gallon) tinplate cylinder with conical ends,

34 Acoustics Today • Summer 2020 manufactured in England by the Cambridge Instrument at a constant speed. A magnetic field was applied in the Company. One end of the cylinder was closed. A short, direction perpendicular to the plane of the wires. Elec- open tube was inserted into the other end. The hot wire trical current from a microphone, in the presence of the grid consisted of fine platinum wires stretched across a magnetic field, caused the corresponding wire to deflect, square opening (4.5-centimeter aperture) in a mica disk, which was recorded on the film. which screwed into the end of the open tube (Tucker and Paris, 1921). The original devices worked poorly due to The motor of the harp galvanometer contained a wheel resonances, but the problem was mitigated by drilling with spokes that would interrupt the illumination of the four small holes in the side of the cylinder. Tucker micro- film at fixed intervals. This placed lines on the film at phones were provided to all British SR sections, whereas evenly spaced intervals, which permitted the time differ- the French continued using systems with microphones. ences to be accurately and more easily read. The type of record, reproduced from film, showing signals produced The Tucker microphone is essentially a Helmholtz reso- by German guns and detected with Tucker microphones, nator; the low-frequency, long-wavelength muzzle wave caused the pressure of the volume of air in the cylinder to change uniformly as the acoustic wave passed over the Figure 3. A record reproduced from film of a German gun microphone. This vibration caused the air in the tube or firing. S, shell wave arrivals; G, muzzle (gun) wave arrivals. neck to move into and out of the container, cooling the From Trowbridge, 1920. See text for details. platinum wires. Air moving in either direction cooled the wire, which had the effect of “rectifying” the signal by producing a current that was always positive (Bragg et al., 1971). Its resonance frequency was probably between 30 and 50 Hz, based on the cited dimensions. The four drilled holes would have dampened and broadened the resonance. Bragg reports that the characteristic frequencies of the guns were between 10 and 25 Hz, with larger guns at the lower frequencies. Tucker’s microphone was less sensitive to the higher frequency, smaller wavelength shell waves. Figure 4. An SR apparatus such as that used in WWI. Harp Galvanometer Annotations in the border indicate that the plotting board was SR required that the signals produced by the micro- located at the bottom left. Just above this was the camera with phones be recorded so that the time differences could be the galvanometer to its right at the bottom center. The rheostats determined. These signals were recorded with different (right) were used to balance the circuits containing the Tucker styles of galvanometers. In one arrangement, referred microphones. Available at militarysurvey.org.uk. See text for to as the télégraphe militaire (TM), electric signals from detailed explanation. carbon microphones “would actuate marking pens on smoked paper” (MacLeod, 2000), probably in a way that is similar to a strip-chart recorder.
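The Helmholtz-resonator picture of the Tucker microphone can be checked against the dimensions quoted above. The 23-liter cavity and the 4.5-centimeter aperture are given in the text; the effective neck length is not, so it is treated here as an assumed parameter, and the result is only an order-of-magnitude sketch.

```python
import math

def helmholtz_frequency(volume_m3: float, neck_area_m2: float,
                        neck_length_m: float, c: float = 343.0) -> float:
    """Resonance frequency (Hz) of an ideal Helmholtz resonator."""
    return (c / (2.0 * math.pi)) * math.sqrt(neck_area_m2 / (volume_m3 * neck_length_m))

volume = 0.023            # m^3: the 23-liter cylinder cited in the text
aperture = 0.045 * 0.045  # m^2: the 4.5-cm-square opening in the mica disk

# The effective neck length is NOT given in the article; the values below
# are assumptions covering a short tube plus end corrections.
for neck in (0.10, 0.15):
    f0 = helmholtz_frequency(volume, aperture, neck)
    print(f"assumed effective neck {neck:.2f} m -> f0 ~ {f0:.0f} Hz")
```

Both assumed neck lengths put the resonance in the same few-tens-of-hertz range as the 30- to 50-Hz estimate cited above.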

Lucien Bull designed and developed a recording device based on the string or Einthoven galvanometer, which had been invented in 1901 by Willem Einthoven. Bull’s version contained six wires, each connected to a different microphone. This was referred to as the “harp” galva- nometer. The wires were arranged in a plane parallel to one another and a half centimeter or so away from a plane containing 35-mm movie film. A light source pro- jected images of the wires onto the film, which advanced


is shown in Figure 3. In addition to time differences, the to start training and deploying flash- and sound-rang- analyst could determine the caliber of the gun by noting ing sections knowing that the German Army would be the duration of the signal, which was an indication of its concentrating its forces on the Western Front with the frequency; lower frequency signals produced by larger surrender of Russia in the east. They recruited officers guns had a longer duration. The system designed by Bull, from the Ambulance Service and troops from the Army which had a faster response time than the stylus system Engineers who had already arrived in France. Lyman used in the TM, was able to determine time differences and Bazzoni set up a school for flash and SR at Ft. de St. to within 0.01 second (Bragg et al., 1971). Menge, near Langres, (see bit.ly/3d0vJvE) early in Janu- ary 1918 where they trained and fielded four flash and Bull made the first 50 galvanometers, whereas later ver- four SR sections by the end of the war. sions were manufactured by the Cambridge Instrument Company. A complete SR apparatus with galvanometer, Sound Ranging in Practice camera, and plotting board is shown in Figure 4. Each A schematic of a typical SR base is shown in Figure 5. Num- Tucker microphone was connected to a wire of the harp bers 1-6 represent the positions of the microphones, each of galvanometer through a circuit containing a Wheat- which was connected with a low-resistance wire to the appa- stone bridge that contained rheostats used to modify ratus shown in Figure 4 at the Central position. The distance resistances to balance the bridge; this particular system between microphones 1 and 6 in the figure is approximately would have accommodated seven Tucker microphones. 7 kilometers (4.3 miles), requiring the use of approximately 64 kilometers (40 miles) of low-resistance wire. Although The United States Enters the War the vacuum tube amplifier had been invented in 1911, it was The United States officially entered the war on April 6, 1917. not used in SR until after WWI, thus requiring the use of General John J. Pershing became the Commander in Chief thicker gauge, i.e., low-resistance, wire. of the American Expeditionary Force (AEF) in May 1917. He arrived in France in June, where he was introduced to the Observation posts were located on the right and left flash- and sound-ranging efforts of Britain and France. The sides of the base near the front line. The observer would newly formed US National Research Council recommended Augustus “Gus” Trowbridge, professor of physics at Princ- eton University, to organize the flash- and sound-ranging Figure 5. Schematic of a typical SR station. x1-x6, Positions services. Trowbridge became a Major in the Signal Corps of the microphones, each of which was connected with a low- Reserve before being transferred to the Corps of Engineers resistance wire to the apparatus shown in Figure 4 at the in October 1917, where he served under Colonel R. G. Alex- Central position. Open circles, observation posts located at the ander, chief of the Topographical Section (Kelves, 1969). right and left sides of the base near the front line. From Hinman, 1919. See text for detailed explanation. Trowbridge met with the French in July where he was introduced to their SR systems. In the meantime, Colonel Alexander had been investigating the British and French systems with the help of Lieutenant Charles B. 
Bazzoni, an American physicist who had volunteered while on a research fellowship in London. Bazzoni preferred the British system, referred to as the Bull-Tucker; although not as sensitive as some of the French devices, it was less delicate and not as difficult to use under demanding battle conditions. The AEF adopted the Bull-Tucker, and the British supplied the first systems (Kelves, 1969).

Theodore Lyman, a distinguished Harvard physicist, joined the team as a captain in the Signal Corps Reserve. He and Trowbridge sailed to Europe in September, eager

36 Acoustics Today • Summer 2020 push a key or button when he heard or saw guns fire. A report on SR issued by the General Staff in March This signal would initiate the film to start recording at 1917 claimed that the error from a single good observa- Central, which would run for 20-30 seconds. The film tion was usually within 50 yards, dropping to less than record contained the muzzle blast, at times the shell 25 yards when several observations were averaged. The wave, and the explosion from the bomb’s impact. The report also claimed that a single SR section obtained 260 film was developed, the time differences plotted on the locations of German batteries in the two-month period plotting board, and the results delivered to the artillery of December through January. section, all within 10 minutes. If the strings on the plot- ting board intersected at a single point, the location was Another report from 1918, prepared by an artillery deemed accurate and less accurate if they intersected in information officer of the AEF, compared the locations more than one point. The SR sections were able to deter- of artillery batteries provided by flash and SR. In the first mine the location of the gun, the location of impact, and case, flash ranging outperformed SR during a period of the time of the round in air. This information aided in mobility where the allied forces were advancing and the determining the caliber of the gun. They would send enemy forces were retreating. Out of 425 locations, flash scouts to scavenge shell fragments to confirm the cali- sections reported 79% while sound sections reported ber and type (not an enviable assignment!). According 21%. However, during a period when the front was to Bragg, “a typical report gave the caliber, number of stationary, SR outperformed flash ranging; out of 392 guns, and target on which the battery had registered” locations, flash reported 46% while sound reported 54%. (Trowbridge, 1920; Bragg et al., 1971). The reports were The flash sections outreported the sound sections during provided to the artillery who used the information to periods of mobility because the flash sections required target the enemy batteries. less equipment, enabling them to pick up and set up more quickly in mobile conditions, while the sound sec- In the BEF, a “sound-ranging section had 3 officers and tions performed much better during periods of stationary 18 others: 1 sergeant, 1 instrument repairer, 1 photogra- warfare. The SR locations were also more accurate. pher, 3 linemen, 2 telephonists (telephone switchboard operators), 3 forward observers, 3 batmen and 4 motor Weather affected SR in several ways. The first is temper- transport drivers” (Van der Kloot, 2005). Although ature because sound travels faster in warm air than in efforts were made to conceal their posts, the observers cooler air. The time differences are inversely proportional were often in vulnerable positions, being located close to to the speed of sound: an increase in sound speed makes the front line. In addition to pressing the key to start the the gun appear closer. film at the time at which a gun was fired, they would esti- mate the location and caliber of the enemy batteries. If Wind was also a big factor. When the wind blew toward possible, they would also provide other intelligence, such the enemy lines, the microphones were sometimes unable as the location of machine guns and troop movements. to detect the muzzle wave. The wind speed is lower near Linemen were the most vulnerable. 
They laid out the wire the ground than at higher altitudes, with the result that the from Central to the observation posts and the six micro- effective speed of sound decreases with altitude. This causes phones. When wires were damaged, they would have to the sound to refract upward. Bragg complained, “due to the repair and splice or, if beyond repair, replace them, often ‘principle of maximum cussedness’ the wind in Flanders and under adverse conditions. Surveyors established the loca- Artois was generally westerly” (quoted in Van der Kloot, tions of the microphones so that their positions could be 2005). When the wind direction was from the enemy guns, accurately placed on the plotting board. The soldiers at the effective speed of sound increased with altitude, caus- Central performed several functions: they developed film, ing sound propagating in this direction to refract toward measured time differences and plotted them on plotting the ground, sometimes distorting the signals due to ground boards, maintained and repaired instrumentation, and reflections but providing measurable signals. SR worked reported the results to the artillery. Central was also best for the Allies when the winds were light and the tem- vulnerable, often setting up in wrecked houses located perature uniform and in foggy weather, not an uncommon within the range of enemy artillery. condition in Flanders (Bragg et al., 1971).
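The temperature effect can be made concrete with a simplified single-path illustration (the numbers are assumed, not from the article): travel times scale inversely with the speed of sound, so a range inferred with an assumed sound speed scales in proportion to that assumption, and warmer-than-assumed air makes the gun appear closer.

```python
import math

def sound_speed_air(temp_c: float) -> float:
    """Approximate speed of sound in dry air (m/s) at temperature temp_c (deg C)."""
    return 331.3 * math.sqrt(1.0 + temp_c / 273.15)

# Illustrative numbers: a battery actually 8 km away, located with times
# interpreted using a 10 deg C sound speed while the true temperature is 25 deg C.
assumed_c = sound_speed_air(10.0)
true_c = sound_speed_air(25.0)
true_range_m = 8000.0

# Measured times scale with 1/c, so the apparent range scales with the
# assumed sound speed relative to the true one.
apparent_range_m = true_range_m * assumed_c / true_c
print(f"assumed c = {assumed_c:.1f} m/s, true c = {true_c:.1f} m/s")
print(f"apparent range = {apparent_range_m:.0f} m "
      f"({true_range_m - apparent_range_m:.0f} m short)")
```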


To determine the wind and temperature corrections, Concluding Remarks the BEF set up “wind sections” behind the front lines. A The armistice went into effect on November 11, 1918, at “pound or so of explosive was set off at intervals of a few 11:00 a.m. A sound recording was produced from a film hours and the sound was recorded by a series of micro- strip recorded from 10:59 to 11:01 a.m. on the American phones” whose positions were accurately known, thus Front near the River Moselle at the end of the war (see determining the effects of temperature and wind. bit.ly/38p2ekk).

Wind also produced turbulence as it flowed over the micro- The cannons fired right up to the time of the armistice, phone. This cooled the wire and was a source of noise, and it was so quiet immediately after that one could hear referred to as wind noise. The early sound rangers experi- birds singing. The Imperial War Museum website has mented with several methods to mitigate this. Wrapping the other information of interest (see bit.ly/2OGoyOV). microphone in camouflage netting helped, as did putting a thick hedge around the microphone (Mitchell, 2012). SR was not the only application of acoustics to warfare in WWI. Namorato (2000) describes the efforts at using A secondary use of SR was to register or correct the fire acoustics to detect and track aircraft and to detect tun- of friendly artillery since the microphones were able to neling. There were also advances in underwater acoustics detect the bursts of shells fired on enemy targets. The SR that was used to detect and track German submarines. record of friendly fire was compared with the SR record A previous article by Muir and Bradley (2016) in Acous- of the enemy gun. The SR section would be able report tics Today also describes the efforts in underwater to the artillery that the shell had fallen so many meters acoustics. Incidentally, William Henry Bragg, William short or to the side of the target. Corrections would be Lawrence Bragg’s father, contributed to this effort (Van made until a direct hit was achieved. This practice was der Kloot, 2005). independent of wind and weather because the record from the gun and the shell bursts were made under the Acknowledgments same conditions. I thank David Swanson for introducing me to this topic and sharing references. I also thank the staff of the library The Battle of Arras near Vimy Ridge in April 1917 pro- at the US Army Engineer Research and Development vided a specific example of the effectiveness of flash Center, recognized as the 2017 Fedlink Large Federal ranging and SR. Before the attack, three SR sections Library of the Year, who were very helpful in finding coordinated their results to locate a giant German gun many of the references used in this article. In addition, 11 miles behind the front lines (Van der Kloot, 2005). I acknowledge the contributions and sacrifices made by Other locations provided by flash ranging and SR enabled the flash and sound rangers in World War I. Permission Canadian artillery to take out 83 German batteries. to publish was granted by the Director, Geotechnical and Having dealt with the enemy artillery, Canadian troops Structures Laboratory, US Army Engineer Research and attacked under a creeping barrage, shells fired by the Development Center. artillery approximately 200 yards in front of the advanc- ing infantry, designed to keep German machine gunners References in their trenches. Other tactical adjustments that had Bragg, L., Dowson, A. H., and Hemming, H. H. (1971). Artillery Survey in the First World War. Field Artillery Survey, London. been learned over three years of warfare had been made. Finan, J. S., and Hurley, W. J. (1997). McNaughton and Canadian The Canadians took and secured the ridge at the cost of operational research at Vimy. Journal of the Operational Research 10,000 men. This compares with the 200,000 troops the Society 48, 10-14. French had lost in three previous attempts between 1914 Hinman, J. R. (1919). Ranging in France with Flash and Sound. Dunham Printing Co., Portland, OR. and 1916. 
The Germans lost similar numbers defending Kevles, D. J. (1969). Flash and sound in the AEF: The history of a it (Finan and Hurley, 1997; Meyer, 2006). technical service. Military Affairs 33, 374-384.

38 Acoustics Today • Summer 2020 MacLeod, R. (2000). Sight and sound on the western front: Surveyors, About the Author scientists, and the ‘battlefield laboratory,’ 1915–1918. War & Society 18(1), 23-46. Meyer, G. J. (2006). A World Undone: The Story of the Great War, 1914 R. Daniel Costley Jr. to 1918. Bantam Books, New York. [email protected] Mitchell, A. J. (2012). Technology for Artillery Location, 1914–1970. Lulu.com, Glasgow, UK. US Army Engineer Research Muir, T. G., and Bradley, D. L. (2016). Underwater acoustics: A and Development Center brief historical overview through World War II. Acoustics Today Vicksburg, Mississippi 39180, USA 12(3), 40-48. Namorato, M. V. (2000). A concise history of acoustics in warfare. R. Daniel Costley Jr. is a research Applied Acoustics 59, 101-135. mechanical engineer at the US Army Storz, D. (2014). Artillery. International Encyclopedia of the First Engineering Research and Development Center (Vicksburg, World War. Available at https://bit.ly/2QmgEL6. MS) where he performs research in acoustic and seismic sens- Thompson, S. C. (2019). As we enter the second century of electroa- ing. Dan received his undergraduate degree in mathematics coustics... Acoustics Today 15(4), 55-63. and an MS in engineering mechanics from the University of https://doi.org/10.1121/AT.2019.15.4.55. Texas at Austin and a PhD from the Georgia Institute of Tech- Trowbridge, A. (1920). Sound ranging in the American Expeditionary nology (Atlanta). He is a Fellow of the Acoustical Society of Forces. In R. M. Yerkes (Ed.), New World of Science. The Century Co., America and coordinating editor for engineering acoustics for New York, pp. 63-88. The Journal of the Acoustical Society of America. He chairs Tucker, W. S., and Paris, E. T. (1921). A selective hot wire microphone. the program committee of the Military Sensing Symposia Philosophical Transactions of the Royal Society. Series A, Containing specialty group on Battlespace Acoustic, Seismic, Magnetic, Papers of a Mathematical or Physical Character 221, 389-430. and Electric-Field Sensing and Signatures. Van der Kloot, W. (2005). Lawrence Bragg’s role in the development of sound-ranging in World War I. Notes and Records of the Royal Society 59, 273-284.


FEATURED ARTICLE

Bioacoustic Attenuation Spectroscopy: A New Approach to Monitoring Fish at Sea

Orest Diachok

Introduction
How does one find fishes in the sea? For millennia, catching fish has depended on luck for a fisherman out for a day of recreation and both luck and knowledge of fish behavior and ecological preferences for those seeking larger catches. Still, in many ways, finding fish, especially in large quantities, was a "shot in the dark." However, this started to change when fisheries biologists started to apply acoustics to the hunt for fishes. The various approaches that have been used and that continue to evolve now enable fishers not only to find large groups of fish more efficiently and effectively but also to enable fishery biologists to quantify the number of fish in areas of interest, their migration patterns, and how their numbers evolve over time as well as other aspects of their behavior.

The purpose of this article is to provide a brief historical review of acoustic approaches to fishery biology, discuss the limitations of these approaches, and describe bioacoustic attenuation spectroscopy (BAS), a new and promising acoustic approach that has the potential to revolutionize fishery biology. The BAS approach is essentially noninvasive, provides measures of fish abundance and the number of fish in a region, and can even estimate the number of fishes of different lengths in the ensonified region.

The most important practical application of research in fisheries acoustics is the estimation of the abundance of species that are of commercial interest. This information is used by government agencies to set limits on commercial fishing.

Interactions Between Fish and Sound
Before reviewing the history of sonar methods to detect fish, first I review the basic physics of sound interaction with fish. The majority of species of bony fishes that occur in large numbers have a swim bladder, which is an elongated, air-filled chamber located in the abdominal cavity. Swim bladders provide buoyancy and enable fish to be neutrally buoyant at their preferred depth between bouts of vigorous swimming (Helfman et al., 2009). The size of the swim bladder varies by species and is often related to the size of the species. Although the swim bladder probably evolved to provide buoyancy to the fish, it is also involved in other functions such as hearing and sound production in many species (e.g., Popper and Hawkins, 2019).

Figure 1 shows an X-ray image of a side view of the swim bladder of a pilchard sardine (Sardinops ocellatus). Because swim bladders are generally filled with air, they scatter sound in various directions and are the primary cause of backscattering and the echoes detected by fisheries sonars. Fisheries scientists employ a concept called target strength (TS) to describe how much energy is reflected in the backscattered direction by an individual fish. TS increases with the size of the swim bladder.

As sound propagates through an aggregation of fish, each encounter with a fish within the aggregation causes some of the energy to be scattered in various directions, including sound that is backscattered, as illustrated in Figure 2. As a result, each encounter with each fish within the aggregation diminishes the energy of the sound propagating in the forward direction (Figure 2, red arrows).

Figure 1. X-ray image of the swim bladder of a pilchard sardine (Sardinops ocellatus). Anterior is to the left. Image courtesy of John Horne, University of Washington, Seattle.

Scattering and attenuation are manifestations of the same process. The loss in acoustic signal level versus range is called biological attenuation. The physics-based concept of attenuation coefficient, which may be expressed in decibels per meter, describes the magnitude of this effect. As a result of biological attenuation, interpretation of back-scattered energy (echoes) from an aggregation of fish is complicated by the fact that echo levels are controlled not only by the TS of the fish at a specified range but also by the attenuation coefficient due to all the fish between the echo sounder and the specified range.

It is ironic that the BAS method, described here as a new approach to fish monitoring, has its roots in the first documented successful demonstration of the use of sound to detect fish (Kimura, 1929). Kimura installed a sound transmitter and a hydrophone on opposite ends of a pond. The distance between the transmitter and receiving hydrophone was 43 m. The pond was sufficiently shallow, less than 4.5 m, so that he was able to observe the movements of schools of fish. He transmitted a continuous signal at one frequency for long periods of time. When there were no fish between the transmitter and hydrophone, he heard a continuous hum. When a school of fish passed through the acoustic path between the transmitter and hydrophone, the acoustic signal fluctuated as a result of time variable attenuation.

Figure 2. Sound from a source at the left incident (thick red arrow) on a fish is scattered in various directions (thin red arrows). Note that the energy level of sound propagating in the forward direction at the right (thick red arrow) is diminished.

Swim Bladder Resonance
Swim bladders, being air-filled bubbles, resonate at frequencies (f0) that are controlled primarily by the effective radius (r; the radius of a spherical bubble with the same volume as the swim bladder) and to a small extent by the eccentricity (e; the ratio of the major and minor axes) of the swim bladder. To a good approximation, r = 0.044L, where L is the fish length in centimeters. Figure 3 illustrates how f0 varies as a function of fish length and provides experimental measures of f0 at the surface, which were extrapolated from at-sea measurements of f0 at other depths at sites where the dominant species and their depths were known. The three highest values of f0 were derived from attenuation measurements, whereas the two lowest values of f0 were derived from backscatter measurements (Nero et al., 1998; Stanton et al., 2010). The depth dependence of f0 is discussed in A Brief Historical Review of Fisheries Sonar.

Figure 3. Measurements (circles) and calculations (line) of the resonance frequency of fish swim bladders at the surface versus fish length.

The relationship between fish length and f0 illustrated in Figure 3 is valid only when fish are far apart (many fish lengths) and each fish is free to resonate. When fish are in schools in which the separation between fish is less than two fish lengths (Pitcher and Parrish, 1993), then the close proximity between fish dampens the resonances of each fish and causes the school to act as a "bubble cloud" that resonates at a lower frequency (F; Diachok, 1999; Raveneau and Feuillade, 2015). F decreases as the average separation between fish decreases and, in the (admittedly unrealistic) limit of zero separation, approaches the resonance frequency of a large bubble formed by the swim bladders of all of the fish within the school.
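A minimal numerical sketch of this relationship can be built from the textbook resonance of a free spherical air bubble (the Minnaert frequency) combined with the r = 0.044L rule of thumb quoted above. This is not the model behind Figure 3: it ignores the eccentricity of the bladder, the surrounding fish tissue, and school effects, and it assumes a fish that cannot add gas as it descends, so that the bladder is compressed with depth.

```python
import math

RHO_SEAWATER = 1026.0   # kg/m^3
GAMMA_AIR = 1.4         # adiabatic exponent of air
P_ATM = 101325.0        # Pa
G = 9.81                # m/s^2

def bladder_radius_m(fish_length_cm: float) -> float:
    """Equivalent spherical radius from the r = 0.044*L rule of thumb (both in cm)."""
    return 0.044 * fish_length_cm / 100.0

def minnaert_f0(radius_m: float, pressure_pa: float) -> float:
    """Resonance frequency (Hz) of a free spherical air bubble of the given radius."""
    return math.sqrt(3.0 * GAMMA_AIR * pressure_pa / RHO_SEAWATER) / (2.0 * math.pi * radius_m)

def f0_at_depth(fish_length_cm: float, depth_m: float) -> float:
    """
    f0 for a fish that cannot add gas at depth: the bladder is compressed
    (r ~ P^-1/3) while the gas stiffness rises (~P), so f0 scales as P^(5/6).
    """
    p = P_ATM + RHO_SEAWATER * G * depth_m
    f_surface = minnaert_f0(bladder_radius_m(fish_length_cm), P_ATM)
    return f_surface * (p / P_ATM) ** (5.0 / 6.0)

# A 16-cm sardine: roughly 0.5 kHz at the surface, rising to about 1.1 kHz
# at 20 m and 2.5 kHz at 65 m in this crude approximation, the same range
# as the measurements reported later in the article.
for depth in (0.0, 20.0, 65.0):
    print(f"L = 16 cm, depth {depth:>4.0f} m -> f0 ~ {f0_at_depth(16.0, depth):.0f} Hz")
```

The rise of f0 with depth in this sketch is the depth dependence referred to above and revisited later in the article.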


TS and biological attenuation, which are much greater at f0 than at other frequencies, are controlled by the magnitudes of r, f0, and the number of fish per cubic meter, known as the number density. Consequently, measurement of f0 and the magnitude of the attenuation coefficient at f0 permits calculation of the number density versus fish length. This is possible using the well-established theory for when fish are far apart (Medwin and Clay, 1997). A more sophisticated theory, however, is required to infer the number densities when fish are in close proximity in schools (Raveneau and Feuillade, 2015).

A Brief Historical Review of Fisheries Sonar
Sonar as Fish Finder
The development of transducers, which was driven by the need to detect and track submarines near the end of World War I, paved the way for the development of fisheries sonar. The feasibility of the sonar detection of fish was first reported by Sund (1935), who detected the presence of Atlantic cod (Gadus morhua) in the wild. Developments in sonar technology during and after World War II also led to an increased sophistication of fisheries sonar. By the 1970s, echo sounders were widely employed by fishers and fisheries scientists to find fish (MacLennan and Simmonds, 1991). It was, no doubt, apparent that large fish produce strong echoes and small fish produce weak echoes when both are at the same range. However, the lack of quantitative knowledge of the TS of fish precluded an estimation of abundance.

Target Strength of Individual Fish: The Basis for Estimation of Abundance
Subsequently, new instruments, called split beam echo sounders, which were developed in the late 1970s, permitted measurement of the TS of individual fish in the wild (Ehrenberg, 1979). Split beam sonars have four crystals that permit measurement of the TS of fish as a function of their orientation. At frequencies of echo sounders, the TS of swim bladders is extremely sensitive to fish orientation. Because they are horizontally elongated, swim bladders act almost like small mirrors, which cause reflected echoes to be very strong only when the swim bladders are nearly perpendicular to the direction of the sonar beam. This technological development transformed echo sounders from being merely fish finders to a tool for abundance estimation.

Knowledge of TS is a prerequisite for the estimation of fish abundance. As a result, the swim bladder and how it affects the TS have received much attention by the fisheries science community (Stanton, 2012). Extensive measurements have revealed that TS is species dependent. In particular, the TS of species that cannot control the amount of gas in their swim bladders (known as physostomes), such as sardines and anchovies, decreases with the depth of the animal. Because this change is understood, it is possible to predict how the resonance frequency changes with depth. In contrast, the TS of species that can control the amount of gas in their swim bladders (known as physoclists), such as cod and hake, is independent of depth. These species are able to adjust the amount of gas in their swim bladders through special secretory mechanisms, but the process is slow. Thus, when a physoclist changes depth, its swim bladder requires hours to adjust to the new depth. As a result of the long timescale of this process, the TS of physoclists may not be readily predictable, especially after changes in depth (see Helfman et al., 2009, for a discussion of swim bladder filling mechanisms).

Effect of Biological Attenuation on Echo Level
Because measured echo levels are controlled not only by the TS of fish at a specified depth but also by the attenuation due to all the fish between the sonar and the specified depth, initial estimates of fish abundance were biased by disregarding this effect. Biological attenuation due to fish was measured and theoretical corrections for the effect of sound attenuation on echo level were developed by Foote (1990). The magnitude of this effect is species dependent and increases with the size and number density of the fish at each depth that are generally not known because concurrent trawls generally provide information on species composition usually at only one depth.

Estimation of fish abundance assumes that echo sounders are capable of detecting the species of interest, independent of their depth. Fisheries echo sounders, being hull mounted, cannot detect fish near the surface. Commercially important species, such as sardines and anchovies, that are generally assumed to reside at depths far from the surface have in fact been observed in large numbers near the surface (Scalabrin et al., 2007). Hull-mounted echo sounders are also ineffective at discriminating between echoes from fish that are near the bottom and those at the bottom. As a result, they

42 Acoustics Today • Summer 2020 are not capable of monitoring bottom-dwelling species, frequency naval sonars. This research was motivated by such as flatfish (e.g., flounder). need of the United States and other navies to understand the effects of echoes from fish aggregations on the detec- Ship Avoidance and Sonar tion of submarines and was focused on the bioacoustics Estimation of fish abundance also assumes that echo of myctophids, a group of species that reside primarily in sounder measurements do not affect the behavior of the the deep ocean during the day but that move toward the species being measured. Unfortunately, hull-mounted surface at night (Farquhar, 1970). The scientific measure- echo sounder measurements are plagued by the prob- ments utilized impulsive devices (usually explosives) to lem of ship avoidance. As a ship approaches, fish dive to generate broadband sound at frequencies between about greater depths and to the left and right of the track of the 100 Hz and several kilohertz and an array of hydrophones ship, thereby reducing the number of fish beneath the to determine the depth of fish-reflected echoes. Mea- echo sounder. This phenomenon biases estimates of fish sured resonance frequencies were consistent with the abundance (Scalabrin et al., 2007). theoretical calculations of the resonance frequencies of myctophids (Chapman et al., 1974). In recognition of the severity of the avoidance prob- lem, an international committee of fisheries scientists Modern Directional Sonar Measurements conducted a comprehensive review of the literature on In recent years, impulsive devices were replaced by ship avoidance, concluded that its main cause is acoustic directional, broadband transducers (Nero et al., 1998; noise from the engines of the ship, and recommended Stanton et al., 2010). Stanton et al. (2010) adapted a construction of quiet ships. Many quiet ships were built commercial, highly directional subbottom profiler with for use by fisheries biologists. Unfortunately, ship avoid- a source level (SL) of about 197 dB re 1 μPa root-mean- ance of the new quiet ships was just as severe as ship square (rms) to measure the frequency dependence of avoidance of the older, noisy vessels (Ona et al., 2007). A echoes from fish at resonance frequencies. The SL is possible explanation is that fish respond to the pressure defined as the level of sound at 1 m from the source. As wave of approaching vessels rather than to the acoustic a result of its high directionality, this source is unlikely noise of engines. to affect marine mammal behavior unless the animal is directly beneath the beam. Stanton et al. (2010) mea- In view of this apparently unsolvable problem, recent sured the resonance frequency of 25-cm-long herring research has focused on removing echo sounders from Clupea harengus, the dominant species at their mea- ships and placing them on autonomous underwater vehicles surement site, and demonstrated consistency with (AUVs; Scalabrin et al., 2007) and wave glider-based sys- theoretical calculations. Because the source is towed tems (Greene et al., 2014). These approaches will improve behind a ship, this configuration is limited to the detec- the quality of echo sounder measurements because they will tion of fish that are below the wake of the ship. The not be affected by the ship avoidance phenomenon and will resultant measurements may be biased by changes in permit measurements of fish near the surface. 
fish behavior in response to high-level acoustic signals, particularly at their resonance frequencies, at short dis- Another approach for eliminating the ship avoidance tances from the source. problem is to deploy echo sounders in an upward-looking mode on the bottom. This approach provides unbiased Long-Range Sonar Measurements data of scientific interest (Kaltenberg and Benoit-Bird, Much more powerful broadband sonars, which oper- 2009) but is not suitable for estimating fish abundance in ate at low frequencies, have been employed to measure large areas of commercial interest because these devices the frequency dependence of backscattered echoes only detect fish in specific locations. from fish in the vicinity of their resonance frequen- cies at ranges of about 100 km. A major benefit of this Bioacoustic Backscatter Spectroscopy approach is that it provides a synoptic view of the syn- Early Impulsive Source Measurements chronized changes in fish behavior over areas as large During the 1960s and 1970s, extensive research on swim as about 100 km (Makris et al., 2006). Interpretation bladder resonance was conducted in support of new, low- of the resultant measurements, however, is limited by


uncertainties in sound propagation, the inability to esti- mate the depths of fish aggregations, and the lack of corroborating biological data. This approach requires a very high SL, about 230 dB re 1 μPa rms, that may affect the behavior of both marine mammals and fish at long ranges from the source.

Interpretation of Bioacoustic Backscatter Spectroscopy Measurements and Biological Attenuation As mentioned in the Introduction, interpretation of bioacoustic backscatter spectroscopy (BBS) data is complicated by the fact that echo levels are affected not Figure 4. Geometrical configuration for measurement of only by the TS of fish at a specified range but also by biological attenuation due to a layer of fish, including a the effect of biological attenuation due to fish between broadband source (S; red rectangle) deployed from a ship and the sonar and the specified range (Weston and Andrews, a fixed hydrophone array (H; red circles). 1990), except when there are no fish between the sonar and fish at the specified range. The two lowest resonance frequencies, shown in Figure 3 (Nero et al., 1998; Stan- from the source. The vertical array permits measurement ton et al., 2010), were derived from BBS measurements of how the layer of fish affects attenuation as a function of at sites where there were no fish between the sonar and hydrophone depth. This information may be used to infer the targeted layers of fish. Duane et al.’s (2019) experi- the depth of the fish layer (Diachok and Wales, 2005). ment at the resonance frequency of a species (possibly The acoustic source must cover the frequency band that herring) showed that a spatially well-defined aggrega- includes the resonance frequencies of the species that are tion of fish at short range, which moved in front of a expected to be present at the measurement site, ideally spatially well-defined aggregation of fish at long range, between 100 Hz and 10 kHz. The level of the received dramatically reduced the magnitude of the echoes from signal is affected by the number and size of the fishes that the more distant aggregation. Qiu et al. (1999) conducted come between the source and receiver. Proper analysis of the only known concurrent measurements of attenuation the received signal can tell us a great deal about the fish. and backscattering from fish at the resonance frequency of the fish. The attenuation coefficient peaked at the Because all fishes hear low frequencies and some may resonance frequency of dispersed Japanese anchovies hear above 3 kHz, they may react to some sounds if (Engraulis japonicus), consistent with theoretical calcula- the sounds are sufficiently loud (Doksæter et al., 2009; tions (Diachok and Wales, 2005), whereas backscattered Popper and Hawkins, 2019). To minimize the effects of levels exhibited a minimum, instead of the theoretically acoustic signals on fish behavior, sources used in BAS expected maximum, at the resonance frequency as a experiments were programmed to generate a sequence result of biological attenuation. of 1-second-long continuous-wave (CW) signals, with very low SLs (Diachok, 1999, 2005). These are similar Bioacoustic Attenuation Spectroscopy to the sequence of tones one hears during a hearing test. Experimental Approach By contrast with conventional and low-frequency sonars, The alternate approach to exploit swim bladder reso- BAS measurements may be conducted with a SL as low nance is to measure attenuation due to the presence of as 170 dB re 1 μPa rms at ranges less than 10 km. At this fish between a source and a receiving hydrophone. The SL, only those fishes that are in very close proximity to geometrical configuration for this approach, bioacoustic the source may detect or react to the sounds and change attenuation spectroscopy (BAS), is illustrated in Figure their behavior. 
This effect is unlikely to bias BAS mea- 4. A broadband source is suspended from a ship, and surements, which provide information about fish over a hydrophone array is deployed between a near-surface a much greater distance. As a consequence, the BAS float and an anchor at a range of between 1 and 10 km method is not likely to alter fish behavior.

44 Acoustics Today • Summer 2020 Why can BAS measurements be conducted with such Ching and Weston (1971) attributed the diurnal vari- a low SL, whereas BBS measurements require such a ability in attenuation to diurnal changes in fish behavior. high SL? BAS measurements are subject to one-way Pilchard sardines, the dominant species in the Bristol transmission loss (TL) between the source and the Channel where he worked, generally disperse at night hydrophone. TL is defined as the loss in signal level and school during the day. Weston speculated that the between one meter and another range. One-way TL is low levels of attenuation during the day may be due to 60 dB at 1 km (due to spherical spreading), whereas BBS interference between reflections from nearby fish in measurements are subject to two-way TL between the schools, a phenomenon that would dampen the reso- source to the fish and then from the fish to the sonar. nance of individual fish. Attenuation at night tended to The resultant TL is twice as large, 120 dB at 1 km. So, peak at frequencies of 0.7 and 3.5 kHz. Weston attributed detection of fish with BBS at 1 km requires a SL, which the resonance at 0.7 kHz to pilchard sardines with a mean is 60 dB louder than with BAS. length of 24 cm and the resonance at 3.5 kHz, which was mostly evident a few months after the spawning season, If there are no fish present between the source and the to juvenile sardines (Ching and Weston, 1971). receiving array, then signal levels recorded by the hydro- phone array will be relatively loud and measured levels Biological Attenuation: Day Versus Night will be in accord with theoretical levels derived from TL Inspired by Weston’s (1967) compelling acoustic observa- models. TL models account for geometrical spreading tions, Diachok (1999) conducted a BAS experiment. The loss, chemical absorption loss, and loss in signal level objectives were to demonstrate that resonance frequen- due to sound transmission into the bottom. cies of dispersed fish at night were due to the dominant species at the measurement site and to determine the If a large number of fish are present between the source cause(s) of the difference in biological attenuation during and the hydrophone array, then the fish will cause excess night and day. Concurrent trawls revealed that the 16-cm (biological) attenuation. Biological attenuation will be European pilchard (Sardina pilchardus) was the domi- maximum at the resonance frequency of the fish. The nant species at this site. A ship-mounted echo sounder biological attenuation coefficient (A), in decibels per kilo- provided measurements of the depths and schooling meter, may be derived from measurements of signal level behavior of this species. Concurrent echo sounder data versus range by towing the source. If the source is fixed, showed that fish were dispersed at night at a depth of 20 then biological attenuation coefficients may be calculated m, descended to 65 m during dawn, and formed schools by comparing measured levels with calculated levels of at 65 m a few minutes after sunrise. received levels that account for all causes of attenuation except biological attenuation. TL measurements were made at many frequencies between 0.7 and 5 kHz along a track with constant depth, parallel to Discovery of Biological Attenuation at the shoreline, to simplify data interpretation and modeling. 
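A minimal sketch of the bookkeeping described above: one-way versus two-way spherical spreading accounts for the 60-dB difference in required source level, and the shortfall of the measured level relative to a fish-free transmission loss model gives the biological attenuation coefficient, from which a number density can be estimated in the dilute (fish far apart) regime. The received levels, range, and per-fish extinction cross section below are assumed values for illustration, not data from the article.

```python
import math

def spherical_tl_db(range_m: float) -> float:
    """One-way spherical-spreading transmission loss re 1 m (dB)."""
    return 20.0 * math.log10(range_m)

# One-way versus two-way spreading loss at 1 km (the 60 dB / 120 dB figures above).
r = 1000.0
print(f"one-way TL at 1 km: {spherical_tl_db(r):.0f} dB, "
      f"two-way: {2 * spherical_tl_db(r):.0f} dB")

def biological_attenuation_db_per_km(measured_level_db: float,
                                     modeled_level_db: float,
                                     range_km: float) -> float:
    """
    Excess attenuation attributed to fish: the shortfall of the measured
    received level relative to a TL model that includes spreading, chemical
    absorption, and bottom loss but no fish.
    """
    return (modeled_level_db - measured_level_db) / range_km

def number_density_per_m3(alpha_db_per_km: float, sigma_e_m2: float) -> float:
    """
    Dilute-aggregation estimate: alpha (dB/m) ~ 4.343 * n * sigma_e, so
    n = alpha / (4.343 * sigma_e * 1000) when alpha is given per km.
    sigma_e is the per-fish extinction cross section at resonance (assumed known).
    """
    return alpha_db_per_km / (4.343 * sigma_e_m2 * 1000.0)

# Illustrative numbers only: a 15 dB shortfall over 5 km at the resonance
# frequency and an assumed extinction cross section of 0.005 m^2 per fish.
alpha = biological_attenuation_db_per_km(measured_level_db=95.0,
                                         modeled_level_db=110.0,
                                         range_km=5.0)
print(f"biological attenuation ~ {alpha:.1f} dB/km")
print(f"number density ~ {number_density_per_m3(alpha, 0.005):.3f} fish per m^3")
```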
Resonance Frequencies David Weston (1967) discovered that biological attenua- Figure 5 shows measurements and theoretical calcula- tion peaks at the resonance frequencies of fish fortuitously tions of resonance frequencies of dispersed European

as a result of routine measurements of TL in support of pilchard (f0), and schools (F). The diameter of the data engineering tests of an experimental Navy sonar. To his points is proportional to the magnitude of biological surprise, sound attenuation was generally much higher at attenuation at the resonance frequencies. Measure- night than during the day and that transitions occurred ments of resonance frequencies at night of 1.2 kHz at 20 during morning and evening twilight throughout the m and at sunrise of 2.7 kHz at 65 m are consistent with year (Ching and Weston, 1971). During months when theoretical calculations of the resonance frequencies sardines were present, differences in TL between night of dispersed 16-cm sardines. The increase in frequency and day were generally about 15 dB and occasionally as from 1.2 kHz at night to 2.7 kHz at sunrise is driven by high as 40 dB at some frequencies. During months when the decrease in the effective radius of European pilchard sardines were absent, the difference between night and swim bladders as they descend from 20 to 65 m. The day was essentially zero at all frequencies. measurement of the resonance frequency at 1.6 kHz at


Diachok and Wales (2005) showed that the bioacoustic parameters of an aggregation of fish may be inferred from TL measurements with hydrophones at two depths. There are several technologically mature approaches that would permit BAS measurements with a small number of hydrophones. Consideration of the relative merits of these approaches is beyond the scope of this article.

Because TL is affected not only by bioattenuation but also by the geoacoustic properties of the bottom, the latter would have to be measured in areas of interest. The geo- acoustic properties of the bottom could be measured with direct methods (e.g., Turgut, 1990), inverted (derived) Figure 5. Measurements of resonance frequencies of 16-cm from TL data in the absence of fish or inverted from the

sardines in dispersed (f0) and school (F) modes during night (solid circle), sunrise (open triangles), and day (open circle). The diameters of the symbols are proportional to the magnitudes of the attenuation coefficients at f0 and F. Theoretical calculations of F are from Raveneau and Feuillade, 2015.

65 m during the day is consistent with Raveneau and Feuillade's (2015) theoretical calculations of the resonance frequency of European pilchard schools.

Figure 5 also shows that biological attenuation was highest during night when the fish were dispersed and lowest during the day when the fish were in schools. The transition occurred during sunrise when some of the fish were dispersed and some were in schools. Why is attenuation due to fish in schools during the day much lower than attenuation due to dispersed fish at night? Theoretical calculations indicate that Ching and Weston's (1971) speculation was essentially correct. The difference in biological attenuation between night and day is driven primarily by the difference in the separation between fish in dispersed and school modes and, to a lesser extent, by the difference in the effective radius of swim bladders (Diachok, 1999; Raveneau and Feuillade, 2015).

Possible Practical Applications of Bioacoustic Attenuation Spectroscopy

The measurement approach, illustrated in Figure 4, is well-suited to test scientific hypotheses but is too cumbersome for practical applications. In particular, a vertical array that spans most of the water column is difficult and time consuming to deploy and recover. Furthermore, such an array is not needed for practical applications.

application of the concurrent inversion method (Diachok and Wales, 2005) in the presence of fish. The usefulness of information derived from BAS measurements would probably have to be initially demonstrated by fisheries scientists charged with estimating fish abundance and could eventually be employed by fishers to reduce the vast bycatch (unwanted species) that fishers routinely catch and discard daily throughout our oceans.

Acknowledgments
The research reported here was supported by the Office of Naval Research Ocean Acoustics Program. I thank Arthur N. Popper for his painstaking reviews of the preliminary drafts of this article.

References
Chapman, R. P., Bluy, O. Z., Adlington, R. H., and Robinson, A. E. (1974). Deep scattering layer spectra in the Atlantic and Pacific Oceans and adjacent seas. The Journal of the Acoustical Society of America 56, 1722-1734.
Ching, P. A., and Weston, D. (1971). Wideband studies of shallow-water acoustic attenuation due to fish. Journal of Sound and Vibration 18, 499-510.
Diachok, O. (1999). Effects of absorptivity due to fish on transmission loss in shallow water. The Journal of the Acoustical Society of America 105, 2107-2128.
Diachok, O. (2005). Contribution of fish with swim bladders to scintillation of transmitted signals. In J. Papadakis and L. Bjorno (Eds.), Proceedings of the First Conference on Underwater Acoustic Measurements: Technologies and Results, Heraklion, Crete, Greece, June 28 to July 1, 2005.
Diachok, O., and Wales, S. (2005). Concurrent inversion of bio and geoacoustic parameters from transmission loss measurements in the Yellow Sea. The Journal of the Acoustical Society of America 117, 1965-1976.
Doksæter, L., Godø, O. R., Handegard, N. O., Kvadsheim, P. H., Lam, F. P. A., Donovan, C., and Miller, P. J. (2009). Behavioral responses of herring (Clupea harengus) to 1-2 and 6-7 kHz sonar signals and killer whale feeding sounds. The Journal of the Acoustical Society of America 125, 554-564.
Duane, D., Cho, B., Jain, A. D., Godø, O. R., and Makris, N. C. (2019). The effect of attenuation from fish shoals on long-range, wide-area acoustic sensing in the ocean. Remote Sensing 11, 2464.
Ehrenberg, J. E. (1979). A comprehensive analysis of in situ methods for directly measuring the acoustic target strength of individual fish. IEEE Journal of Ocean Engineering 4(4), 141-152.
Farquhar, G. B. (Ed.) (1970). Proceedings of an International Symposium on Biological Sound Scattering in the Ocean. US Government Printing Office, Washington, DC.
Foote, K. G. (1990). Correcting acoustic measurements of scatterer density for extinction. The Journal of the Acoustical Society of America 88, 1543-1546.
Greene, C. H., Meyer-Gutbrod, E. L., McGarry, L. P., Hufnagle, C., Jr., Chu, D., McClatchie, S., Packer, A., Jung, J. B., Acker, T., Dorn, H., and Pelkie, C. (2014). A wave glider approach to fisheries acoustics: Transforming how we monitor the nation's commercial fisheries in the 21st century. Oceanography 27, 168-174.
Helfman, G., Collette, B. B., Facey, D. E., and Bowen, B. W. (2009). The Diversity of Fishes: Biology, Evolution, and Ecology. John Wiley & Sons, New York.
Kaltenberg, A. M., and Benoit-Bird, K. J. (2009). Diel behavior of sardine and anchovy schools in the California Current System. Marine Ecology Progress Series 394, 247-262.
Kimura, K. (1929). On the detection of fish-groups with an acoustic method. Journal of the Fish Institute, Tokyo 24, 41-45.
MacLennan, D. N., and Simmonds, E. J. (1991). Fisheries Acoustics. Chapman and Hall, London.
Makris, N. C., Ratilal, P., Symonds, D. T., Jagannathan, S., Lee, S., and Nero, R. W. (2006). Fish population and behavior revealed by instantaneous continental shelf-scale imaging. Science 311, 660-663.
Medwin, H., and Clay, C. S. (1997). Fundamentals of Acoustical Oceanography. Academic Press, New York.
Nero, R. W., Thompson, C. H., and Love, R. H. (1998). Low-frequency acoustic measurements of Pacific hake, Merluccius productus, off the west coast of the United States. Fishery Bulletin 96, 329-343.
Ona, E., Godø, O. R., Handegard, N. O., Hjellvik, V., Patel, R., and Pedersen, G. (2007). Silent research vessels are not quiet. The Journal of the Acoustical Society of America 121, EL145-EL150.
Pitcher, T., and Parrish, J. (1993). Functions of shoaling behaviour in teleosts. In T. Pitcher (Ed.), Behaviour of Teleost Fishes. Chapman and Hall, London.
Popper, A. N., and Hawkins, A. D. (2019). An overview of fish bioacoustics and the impacts of anthropogenic sounds on fishes. The Journal of Fish Biology 94, 692-713.
Qiu, X. F., Zhang, R. H., Li, W. H., Jin, G. L., and Zhu, B. X. (1999). Frequency selective attenuation of sound propagation and reverberation in shallow water. Journal of Sound and Vibration 220, 331-342.
Raveneau, M., and Feuillade, C. (2015). Sound extinction by fish schools: Forward scattering theory and data analysis. The Journal of the Acoustical Society of America 137, 539-555.
Scalabrin, C., Marfia, C., and Boucher, J. B. (2007). How much fish is hidden in the surface and bottom acoustic blind zones? ICES Journal of Marine Science 66, 1355-1363.
Stanton, T. K. (2012). 30 years of advances in active bioacoustics: A personal perspective. Methods in Oceanography 1, 49-77.
Stanton, T. K., Chu, D., Jech, J. M., and Irish, J. D. (2010). New broadband methods for resonance classification and high-resolution imagery of fish with swimbladders using a modified commercial broadband echosounder. ICES Journal of Marine Science 67, 365-378.
Sund, O. (1935). Echo sounding in fishery research. Nature 335, 953.
Turgut, A. (1990). Measurements of acoustic wave velocities and attenuation in marine sediments. The Journal of the Acoustical Society of America 87, 2376-2383.
Weston, D. E. (1967). Sound propagation in the presence of bladdered fish. In V. Albers (Ed.), Underwater Acoustics, vol. 2. Plenum, New York.
Weston, D. E., and Andrews, H. W. (1990). Seasonal sonar observations of the diurnal shoaling of fish. The Journal of the Acoustical Society of America 87, 639-651.

About the Author
Orest Diachok, [email protected]
Poseidon Sound, 3272 Fox Mill Road, Oakton, Virginia 22124, USA
Orest Diachok has made significant contributions to arctic acoustics, geoacoustics of the upper crust, and matched-field processing, a powerful acoustical method for studying the ocean. During his tour as chief scientist at the NATO Undersea Research Centre (1970–1975), he met David Weston and, as a result of numerous discussions, became a convert to marine bioacoustics and designed the first interdisciplinary bioattenuation experiment. Orest enjoys traveling, hiking in national parks, and playing billiards. He loves Puccini arias, BB King laments, and Ukrainian and American folk songs. He is happily married to Olha and has two sons, Mateo and Mark.

FEATURED ARTICLE

The Tuning Fork: An Amazing Acoustics Apparatus

Daniel A. Russell

It seems like such a simple device: a U-shaped piece of metal with a stem to hold it; a simple mechanical object that, when struck lightly, produces a single-frequency pure tone. And yet, this simple appearance is deceptive because a tuning fork exhibits several complicated vibroacoustic phenomena. A tuning fork vibrates with several symmetrical and asymmetrical flexural bending modes; it exhibits the nonlinear phenomenon of integer harmonics for large-amplitude displacements; and the stem oscillates at the octave of the fundamental frequency of the tines even though the tines have no octave component. A tuning fork radiates sound as a linear quadrupole source, with a distinct transition from a complicated near-field to a simpler far-field radiation pattern. This transition from near field to far field can be seen in the directivity patterns, time-averaged vector intensity, and the phase relationship between pressure and particle velocity. This article explores some of the amazing acoustics that this simple device can perform.

A Brief History of the Tuning Fork
The tuning fork was invented in 1711 by John Shore, the principal trumpeter for the royal court of England and a favorite of George Frederick Handel. Indeed, Handel wrote many of his more famous trumpet parts for Shore (Feldmann, 1997a). Unfortunately, Shore split his lip during a performance and was unable to continue performing on the trumpet afterward. So he turned his attention to his second instrument, the lute. Being unsatisfied with the pitch pipes commonly used to tune instruments at the time, Shore used his tuning fork (probably an adaptation of the two-pronged eating utensil) to tune his lute before performances, often quipping "I do not have about me a pitch-pipe, but I have what will do as well to tune by, a pitch-fork" (Miller, 1935; Bickerton and Barr, 1987).

It took more than a hundred years before Shore's tuning fork became an accepted scientific instrument, but starting in the mid-1800s and through the early 1900s, tuning forks and Helmholtz resonators were two of the most important items of equipment in an acoustics laboratory. In 1834, Johann Scheibler, a silk manufacturer without a scientific background, created a tonometer, a set of precisely tuned resonators (in this case tuning forks, although others used Helmholtz resonators) used to determine the frequency of another sound, essentially a mechanical frequency analyzer. Scheibler's tonometer consisted of 56 tuning forks, spanning the octave from A3 220 Hz to A4 440 Hz in steps of 4 Hz (Helmholtz, 1885, p. 441); he achieved this accuracy by modifying each fork until it produced exactly 4 beats per second with the preceding fork in the set. At the 1876 Philadelphia Centennial Exposition, Rudolph Koenig, the premier manufacturer of acoustics apparatus during the second half of the nineteenth century, displayed his Grand Tonometer with 692 precision tuning forks ranging from 16 to 4,096 Hz, equivalent to the frequency range of the piano (Pantalony, 2009). Koenig's Grand Tonometer was purchased in the 1880s by the United States Military Academy and currently resides in the collection of the Smithsonian National Museum of American History (Washington, DC; see tinyurl.com/keonig). For his own personal use, Koenig made a set of 154 forks ranging from 16 to 21,845.3 Hz; he achieved this decimal point precision at a frequency he couldn't hear by using the method of beats as well as the new optical method developed by Lissajous in 1857 (Greenslade, 1992). Lissajous' method of measuring frequencies involved the reflection of a narrow beam of light from mirrors attached to the tines of two massive tuning forks, oriented perpendicular to each other, resulting in the images that now bear his name (Guillemin, 1877, p. 196, Fig. 135 is one of the earliest images of Lissajous and his optical imaging tuning fork apparatus for creating these figures).

From the beginning, it was observed that touching the stem of the fork to a surface would transmit the vibration of the fork to the surface, causing it to vibrate. In the mid-1800s, Ernst Heinrich Weber and Heinrich Adolf Rinne introduced tuning fork tests in which the stem of a vibrating tuning fork is touched to various places on a patient's skull to measure bone conduction; these tests have since become standard tools for the clinical assessment of hearing loss (Feldmann, 1997b,c). Similarly, in 1903, Rydel and Seiffer introduced a tuning fork with a graduated scale at the tines that is currently used to measure nerve conduction in the hands and feet (Martina et al., 1998).

From an educational viewpoint, the tuning fork has long been an important apparatus for demonstrations and experiments in undergraduate physics courses. Lincoln (2013) lists several tuning fork activities, including using an adjustable strobe light to see the tines vibrating; using a microphone and an oscilloscope to observe the frequencies of a fork and the variation in intensity as the fork is rotated; using a fork with a resonator box (Bogacz and Pedziwaitr, 2015) to demonstrate sympathetic resonance and beats; attaching a mirror to one of the tines to reproduce the original Lissajous figures; measuring the speed of sound with a fork and a cylindrical tube partly filled with water; and observing how the frequency of a fork depends on temperature.

Fork Frequencies, Tine Length, and Material Properties
The frequency (f) of a tuning fork depends on its material properties and dimensions according to

f = (A/L²)√(E/ρ)    (1)

where L is the tine length, E and ρ are the Young's modulus of elasticity and density, respectively, and A is a factor determined by the cross-sectional shape and thickness of the tines (Rossing et al., 1992). For tuning forks made from the same material and having the same tine shape and thickness, the frequency depends on the inverse square of the tine length. Figure 1, a and b, shows a set of forks with frequency ratios of an octave starting at 256 Hz (with an extra fork at 426.6 Hz) and a set of tuning forks with frequencies corresponding to the notes of a musical scale starting at "middle" C4 261.6 Hz, respectively. A plot of frequency versus tine length verifies that the frequency increases as the inverse square of the tine length (Figure 1c).
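The L⁻² dependence in Equation 1 is easy to check numerically. The short Python sketch below rescales a reference fork to other tine lengths using only that scaling; the 256-Hz frequency and 10-cm tine length of the reference fork are assumed values chosen for illustration, not measurements taken from Figure 1.

# A minimal numerical check of the inverse-square length scaling in Eq. (1).
# The reference fork (256 Hz, 10-cm tines) is an assumed example, not a
# measurement taken from Figure 1.
f_ref = 256.0     # Hz, assumed reference fork
L_ref = 0.10      # m, assumed tine length of the reference fork

def fork_frequency(L, f_ref=f_ref, L_ref=L_ref):
    """Frequency of a geometrically similar fork of tine length L (Eq. 1: f ~ 1/L^2)."""
    return f_ref * (L_ref / L) ** 2

for L in (0.10, 0.0707, 0.05):          # halving L quadruples f
    print(f"L = {100*L:5.2f} cm  ->  f = {fork_frequency(L):7.1f} Hz")

Shortening the tines to 0.707 of the reference length doubles the frequency (about 512 Hz), and halving them quadruples it, which is the trend shown in Figure 1c.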

Figure 1. Sets of tuning forks of the same material and tine cross-section dimensions but of different tine lengths. a: Set of blue steel forks with octave frequency ratios starting at 256 Hz, with an extra fork at 426.6 Hz. b: Set of aluminum forks forming a musical scale starting on "middle" C4 261.6 Hz. c: Frequency versus tine length for the steel forks in a (blue squares) and the aluminum forks in b (red circles). Both dashed lines represent a power law of the form f ∝ L⁻².


Since the mid-1800s, when tuning forks began to be used as precision acoustical measurement devices, it has been known that the frequency of a tuning fork also depends strongly on temperature (Miller, 1926), with the frequency of a steel fork decreasing by 0.01% for every 1°C increase in temperature (Greenslade, 1992). In fact, some of the precision forks manufactured by Koenig in the late 1880s were stamped with the specific temperature at which the frequency would be accurate (Pantalony, 2009). Undergraduate student experiments report the frequency of a steel fork dropping by 1.0 Hz over a temperature increase of 55°C (Bates et al., 1999) and a drop of 70 Hz for an aluminum fork as the temperature increased by 280°C (Blodgett, 2001).

The dependence of frequency on the properties of the material from which a tuning fork is made can be a useful means of giving students a tangible experience with the properties of various metals and other materials. Burleigh and Fuierer (2005) and Laughlin et al. (2008) manufactured different collections of 17 tuning forks with identical dimensions but made from a variety of metals, polymers, acrylics, and woods and used them with students in a materials course to explore how the frequency, duration, and amplitude of the tuning fork sound depends on material properties. The material from which a tuning fork is constructed is also important for noneducational applications; MacKechnie et al. (2013) found that a steel tuning fork was more likely to produce a negative test result than an aluminum fork when administering the Rinne test for clinical assessment of conductive hearing loss.

Flexural Bending Modes and Natural Frequencies
A tuning fork that is freely suspended (not held at the stem) will exhibit a number of flexural bending modes similar to those of a free-free bar. Figure 2, a and b, shows the first two out-of-plane flexural bending modes of a free tuning fork, and Figure 2, c and d, shows the first two in-plane bending modes. Because the fork does not have a uniform cross section along its length, the displacement amplitudes and the node positions are not symmetrical about the midpoint of the fork, something that is similar to the bending modes of a nonuniform baseball bat (Russell, 2017). However, these free-free mode shapes are not typically observed when the fork is held at the stem. Instead, the normally observed vibrational mode shapes, the shapes that give rise to the sound of the fork, are symmetrical modes in which the tines move in opposite directions (Rossing et al., 1992), as shown in Figure 2, e and f.

Figure 2. Flexural bending modes for a tuning fork. Red, antinodes with maximum amplitude; dark blue, nodes with zero amplitude. Top: out-of-plane bending modes for a 430-Hz tuning fork. a: First bending mode at 1,372 Hz. b: Second bending mode at 3,731 Hz. Center: in-plane bending modes for a 430-Hz tuning fork. c: First bending mode at 1,974 Hz. d: Second bending mode at 4,285 Hz. Bottom: symmetrical in-plane modes of a 430-Hz tuning fork. e: Fundamental mode at 430 Hz. f: "Clang" mode at 2,612 Hz.

When vibrating in the fundamental mode, the tines of a handheld fork flex in opposite directions, like a cantilever beam. The second mode has a node roughly one-fourth of the tine length from the free end. An impact at this location will excite the fundamental but not the second mode; this is where to strike the fork to produce a pure tone. A fork should be impacted with a soft rubber mallet or struck against a relatively soft body part, like the knee or the pisiform bone at the base of the palm opposite the thumb (Watson, 2011). A fork should never be struck against a hard tabletop or hit with a metal object; doing so will excite other vibrational modes besides the fundamental, and it could possibly dent the fork, changing its frequency.

The Frequencies of the Fundamental and the "Clang" Mode
When a tuning fork is struck softly, the resulting sound is a pure tone at the frequency of the fundamental symmetrical mode of the tines, as shown in Figure 2e. The spectrum in Figure 3a is for a soft impact on the tines of a 432-Hz tuning fork and shows a single, narrow peak at 432 Hz, 60 dB above the noise floor. Figure 3b shows that when this same 432-Hz fork is given a slightly harder impact at the tip of the tine, both the fundamental and also the second mode are excited. The second mode, called the "clang" mode, has a frequency of 2,605 Hz for this fork, which is slightly more than six times the frequency of the fundamental. The overtones of a tuning fork are not harmonics.

Figure 3. Frequency spectra resulting from striking a fork: a soft blow (a); a harder blow at the tip of the tines (b); a very hard blow (c). See text for explanation.

What boundary conditions would be appropriate for modeling the vibrational behavior of a tuning fork? Chladni (1802) approached the tuning fork by starting with a straight bar, free at both ends, and gradually bending it into a U-shape with a stem at the bottom of the U. The popular acoustics textbook by Kinsler et al. (2000, pp. 85-86) states that "The free-free bar may be used qualitatively to describe a tuning-fork. This is basically a U-shaped bar with a stem attached to the center." A different boundary condition was considered by Rayleigh (1894), who treated the tines of a tuning fork as being better modeled as a clamped-free bar. Who is correct? Well, a theoretical analysis of the boundary conditions for a beam undergoing flexural bending vibrations indicates that the frequency of the second mode of a free-free bar is 2.78 times the fundamental, whereas the frequency of the second mode of a fixed-free cantilever bar is 6.26 times the fundamental. The measured frequency of the clang mode, as shown in Figure 3b, suggests the clamped-free model is better.
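The quoted mode ratios can be recovered from the standard Euler-Bernoulli beam eigenvalues. In the sketch below, the βL roots are textbook values for ideal uniform free-free and clamped-free beams (an idealization of the fork, not numbers given in this article), and the measured ratio uses the 432-Hz fork of Figure 3; the computed ratios come out near 2.76 and 6.27, approximately the 2.78 and 6.26 quoted above.

# A minimal sketch comparing the measured clang-to-fundamental ratio with the
# Euler-Bernoulli beam predictions discussed in the text. The beta*L values
# are standard textbook roots of the beam frequency equations (assumed here).
import math

beta_L = {
    "free-free":    (4.730, 7.853),   # first two roots for a free-free beam
    "clamped-free": (1.875, 4.694),   # first two roots for a cantilever beam
}

for bc, (b1, b2) in beta_L.items():
    ratio = (b2 / b1) ** 2            # f_n is proportional to (beta_n * L)^2
    print(f"{bc:13s} second/first mode ratio: {ratio:.2f}")

measured = 2605 / 432                  # clang mode / fundamental from Figure 3b
print(f"measured ratio for the 432-Hz fork: {measured:.2f}")

The measured value of about 6.0 is clearly much closer to the cantilever (clamped-free) prediction than to the free-free prediction.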

The presence of the clang mode could pose problems for the clinical use of a tuning fork when assessing hearing health. Tuning forks with frequencies of 256 Hz and 512 Hz are frequently used for Rinne and Weber tests, and the corresponding clang modes near 1,600 Hz and 3,200 Hz, respectively, fall within the range of frequencies where human hearing is most sensitive. Thus, care must be taken to strike the fork without exciting the clang mode to prevent misleading results during a clinical examination (Stevens and Pfannenstiel, 2015).

Nonlinear Generation of Integer Harmonics
When struck softly with a rubber mallet, a tuning fork produces a pure tone devoid of the integer harmonics common to most musical instruments. However, an interesting result occurs when the fork is struck vigorously. If the tines are set into motion with a sufficiently large amplitude, the elastic restoring forces become nonlinear and the resulting radiated sound contains clearly audible integer multiples of the fundamental (Rossing et al., 1992). Helmholtz (1885, pp. 158-159) reportedly identified integer harmonics up to the sixth order for a large fork. The spectrum in Figure 3c shows the result of striking the fork hard enough to produce an audible "buzzing"; the amplitude of displacement at the end of the tines was visibly observed to be a couple of millimeters. This spectrum shows nine integer harmonics of the fundamental in addition to the clang tone.

Octave at the Stem
A more surprising observation is made when the stem of a vibrating fork is pressed against a sounding board or tabletop. The stem vibrates with a much smaller amplitude than the tines, but the tabletop is a much larger surface area, so the radiated sound, when a fork is touched to a surface, is considerably louder than the sound of the fork in air. Touching the stem to a surface produces an audible octave (exactly twice the fundamental frequency), even though the tines do not vibrate at the octave; the amplitude of the octave is often significantly louder than the fundamental (Rossing et al., 1992). A video demonstration of this phenomenon is found at y2u.be/NVUCf8mB1Wg.

The octave at the stem was noticed by Helmholtz (1885) and explored by Rayleigh (1899, 1912), who found that bending the fork tines inward could reduce the strength of the octave. However, an explanation of why only the octave and fundamental appear at the stem was not provided until much more recently.

Boocock and Maunder (1969) developed a theoretical analysis, supported by experimental results, indicating that the presence of the octave at the stem is due to longitudinal inertia forces. They were able to explain Rayleigh's (1912) observation that bending the tines (thus offsetting the longitudinal imbalance) reduces the strength of the octave component. Sönnerlind (2018) developed a detailed computer model of a tuning fork and found that the octave motion in the stem is likely due to a nonlinear relationship between the vertical movement of the center of mass of the fork and the displacement of the tines. His model shows that a double frequency (octave) occurs because the center of mass of the fork reaches its minimum position twice per cycle, when the fork tines bend both inward and outward. Sönnerlind's model also indicates that the octave from the stem is more prominent for forks with longer tines and forks with tines having a square cross section (rather than a circular cross section).

The presence of the octave at the stem could affect the results of the Rinne and Weber hearing tests and the Rydel-Seiffer vibration sensitivity test because the stem is placed in contact with the skull, hands, or feet. This is why forks for assessing conduction hearing loss and nerve response to vibration are often fitted with weights at the tip of the tines; the added weights reduce the presence of the octave at the stem.

The presence of an octave at the stem also has implications for piano tuners; touching a 440-Hz fork to the piano soundboard will produce a 440-Hz tone along with a stronger 880-Hz octave, and the 880-Hz octave from the tuning fork stem will beat with the A 880-Hz piano string, which is tuned slightly sharp due to the intrinsic inharmonicity of piano strings. This very problem was posed as a question to me during my graduate school days, and answering the question was the beginning of my fascination with the acoustics of tuning forks (Rossing et al., 1992).
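A toy calculation, not the author's finite element model, shows how a quadratic dependence on the tine displacement puts energy at exactly twice the tine frequency. The 440-Hz fundamental and the simple squared relationship below are assumptions chosen only to illustrate the frequency doubling.

# Toy illustration: if the stem's vertical motion depends quadratically on the
# tine displacement x(t) = cos(2*pi*f*t), the squared term contains energy at
# exactly twice the tine frequency (the octave).
import numpy as np

f = 440.0                      # fork fundamental (Hz), assumed value
fs = 44100                     # sample rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
x = np.cos(2 * np.pi * f * t)  # tine displacement (arbitrary amplitude)
stem = x**2                    # assumed quadratic dependence of the stem motion

spectrum = np.abs(np.fft.rfft(stem))
freqs = np.fft.rfftfreq(len(stem), 1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]          # skip the constant (DC) term
print(f"dominant oscillating stem component: {peak:.0f} Hz")   # ~880 Hz, the octave

Because cos² contains a constant term plus a component at twice the frequency, the only oscillating component of this toy stem motion is at 880 Hz, consistent with the octave heard when the stem is touched to a tabletop.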
Directivity Patterns, Quadrupole Sources, and Intensity Maps
When a tuning fork vibrates in its fundamental mode, the tines oscillate in opposite directions, with each tine acting as a dipole source such that the two oppositely phased dipoles combine to form a linear quadrupole source (Rossing et al., 1992). The linear quadrupole is an interesting sound source because the sound field at near and far distances from the source exhibits distinct differences in directivity patterns, vector intensity maps, and the phase between pressure and particle velocity.

Quadrupole and Dipole Directivity Patterns
The nature of the quadrupole radiation may be demonstrated by rotating a tuning fork about its long axis while holding it close to the ear or near to the opening of a quarter-wavelength resonator tuned to the fork fundamental (Helmholtz, 1885, p. 161). During one complete rotation, there will be four positions where the resulting sound is loud, alternating with four regions where the sound is very quiet; the sound will be loud when the tines are in line with the ear and also when the tines are perpendicular. However, if the fork is held at arm's length from the ear and rotated, only two loud regions will be heard, when the tines are in line with the ear, and the previously loud regions when the tines are perpendicular to the ear will now be quiet. This variation in the loudness means that care must be taken regarding the orientation of the tuning fork tines with respect to the external auditory canal during the air conduction portion of the Rinne test (Butskiy et al., 2016).

Figure 4, a-c, compares the measured directivity patterns at increasing distances from a 426-Hz tuning fork with theoretical predictions for a linear quadrupole source. Measured sound pressure levels around a 426-Hz tuning fork vibrating in its fundamental mode agree very nicely with theory at all distances (Russell, 2000; Froehle and Persson, 2014). These data explain why one hears four loud regions when a fork is rotated close to the ear but only two loud regions when the fork is rotated at arm's length. It also explains why, if you listen very carefully, the sound is noticeably louder (about 5 dB) when the tines are aligned with the ear compared with when they are perpendicular.

If a fork is rigidly clamped at the stem, it may be forced into several other natural modes of vibration that radiate sound as a dipole source or as a lateral quadrupole source. Figure 4, d-f, shows measured directivity patterns for a 426-Hz tuning fork that was clamped at the stem and driven at an in-plane dipole mode at 257 Hz, an out-of-plane dipole mode at 344 Hz, and a lateral quadrupole mode at 483 Hz. The measured data agree well with the theoretical predictions for dipole and lateral quadrupole sources (Russell, 2000).

Figure 4. Sound pressure level directivity patterns around a tuning fork. Solid circles, measurements; solid curves, theory for a linear quadrupole. Red arrows, relative direction of tine motion. Top: data for a 426-Hz fork vibrating in its fundamental mode at distances of 5 cm (a), 20 cm (b), and 80 cm (c). Bottom: data for the same 426-Hz fork driven into vibration as an in-plane dipole source at 275 Hz (d), an out-of-plane dipole source at 344 Hz (e), and a lateral quadrupole source at 483 Hz (f). Adapted from Russell, 2000, with permission.

Acoustic Intensity Maps and Energy Flow Around a Fork
The transition from near-field to far-field radiation for a linear quadrupole source may be explored further by looking at the time-averaged vector intensity. The time-averaged acoustic intensity represents the net energy flow; it is a vector quantity with both magnitude and direction. In the far field from a simple source, the vector intensity points radially away from the source, indicating that the source is producing waves that carry energy away from the source in a roughly omnidirectional manner. However, the near field of a source may consist of regions where the energy swirls around, with no net outward flow.

Figure 5. Time-averaged acoustic intensity vectors in two quadrants around a tuning fork. The two black rectangles in the center represent the tines and the arrows indicate the direction of flow of acoustic energy. See text for explanation. Adapted from Russell et al., 2013, with permission.

Figure 5 shows a theoretical prediction of the time-averaged acoustic intensity vectors in two quadrants of the horizontal plane surrounding a tuning fork, modeled as a linear quadrupole source. The amplitude of the intensity for a quadrupole source falls off as the inverse of the fourth power of distance, so the direction of flow has been normalized to unit length to make it visible and to emphasize the directional property of the intensity. In the far field, the direction of the intensity vectors indicates that sound energy is propagating radially outward, away from the fork. The near field shows a much more interesting feature. Perpendicular to the fork tines (the vertical axis of the plot), energy is radiated away from the fork at all distances. But, in the direction parallel to the tines (the horizontal axis of the plot), energy is actually directed inward toward the fork in the near field. At a distance approximately 0.225 times the wavelength, the intensity drops to zero before changing direction and pointing outward for farther distances (Sillitto, 1966). Although the acoustic intensity vanishes at this location, the pressure does not drop to zero, and sound will still be heard without any change in loudness at this location. This theoretical prediction of the "swirling" of the energy in the near field of the fork has been experimentally verified through measurements of the time-averaged vector intensity using a two-microphone intensity probe; the measured data confirm the theoretical predictions (Russell et al., 2013).

Phase Relationship Between Pressure and Particle Velocity
An additional aspect of the transition between the near field and far field around a tuning fork is the relationship of the phase between pressure and particle velocity. In the far field of a spherically symmetrical source, the pressure and particle velocity are in phase with each other; both reach maximum and minimum values at the same time. In the near field, however, the pressure and particle velocity are 90° out of phase; when one quantity is at a maximum or minimum, the other quantity is zero, and the quantities are said to be in quadrature. Figure 6 shows measurements of the pressure and particle velocities made with a matchstick-sized Microflown transducer near a large 426-Hz fork. In the near field, at a distance of 7 cm from the tines, the pressure and particle velocity are seen to be nearly 90° out of phase to each other. But at a larger distance of 80 cm, in the far field, the pressure and particle velocity are nearly in phase. The quadrature phase relationship between pressure and particle velocity is a topic discussed frequently in upper-level undergraduate and graduate acoustics courses covering spherical waves. However, even though I had known this for many years, as both a student and a teacher, the first time I obtained the experimental data in Figure 6 was an exciting moment.
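For the spherically symmetrical source mentioned above, the pressure-velocity phase follows from the specific acoustic impedance of an outgoing spherical wave, z = ρc·jkr/(1 + jkr). The short sketch below evaluates this textbook monopole relation at a few values of kr; the tuning fork itself is a quadrupole, so its near-field phase behavior extends farther from the source than this simple model suggests.

# Phase between pressure and particle velocity for an outgoing spherical wave
# (textbook monopole relation; the tuning fork's quadrupole field is more
# complicated, so this only illustrates the near-field/far-field trend).
import cmath
import math

def pu_phase_deg(kr):
    """Phase of p relative to u for a spherical wave: z = rho*c * jkr / (1 + jkr)."""
    z = 1j * kr / (1 + 1j * kr)       # specific impedance normalized by rho*c
    return math.degrees(cmath.phase(z))

for kr in (0.1, 0.5, 1.0, 5.0, 20.0):
    print(f"kr = {kr:5.1f}  ->  p leads u by {pu_phase_deg(kr):5.1f} degrees")

For kr much less than 1 the phase approaches 90° (quadrature), and for kr much greater than 1 it approaches 0° (in phase), the same qualitative transition seen in the Figure 6 measurements.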


Figure 6. A 426-Hz fork driven in its fundamental mode with an electromagnetic coil and a Microflown transducer (a) measures pressure and particle velocity in the near field at 7 cm from the fork (b) and in the far field at 80 cm from the fork (c). See text for explanation. Adapted from Russell et al., 2013, with permission.

The Tuning Fork on the Gold Medal of the Acoustical Society of America
The humble tuning fork is a simple mechanical device that is capable of demonstrating a wide variety of complex vibroacoustic phenomena. Perhaps it is no surprise that this marvelous acoustical apparatus is prominently featured on the Acoustical Society of America (ASA) Gold Medal (Figure 7), the most prestigious recognition awarded by the ASA. It is interesting to note that the fork depicted on the medal appears to be vibrating with a sufficiently large amplitude so as to produce nonlinearly generated integer harmonics. However, whereas the shape of the fork looks similar to those made by Koenig, the radiated wave fronts are far too close together (relative to the fork dimensions); this fork must have a fundamental frequency much higher than the 21,845-Hz fork in Koenig's personal collection.

Acknowledgments
I thank my friend and former MS thesis advisor, Thomas D. Rossing, for introducing me to the fascinating acoustics of tuning forks back when I was his student at Northern Illinois University, DeKalb.

Figure 7. The Acoustical Society of America Gold Medal is the highest award of the Acoustical Society of America. The medal shows a tuning fork vibrating with a large displacement amplitude and radiating sound waves. Photo courtesy of Elaine Moran, used with permission.

References
Bates, L., Beach, T., and Arnott, M. (1999). Determination of the temperature dependence of Young's modulus for stainless steel using a tuning fork. Journal of Undergraduate Research in Physics 18(1), 9-13.
Bickerton, R. C., and Barr, G. S. (1987). The origin of the tuning fork. Journal of the Royal Society of Medicine 80, 771-773. https://doi.org/10.1177/014107688708001215.
Blodgett, E. D. (2001). Determining the temperature dependence of Young's modulus using a tuning fork. The Journal of the Acoustical Society of America 110(5), 2698. https://doi.org/10.1121/1.4777282.
Bogacz, B. F., and Pedziwaitr, A. T. (2015). The sound field around a tuning fork and the role of a resonance box. The Physics Teacher 53(2), 97-100. https://doi.org/10.1119/1.4905808.
Boocock, D., and Maunder, L. (1969). Vibration of a symmetric tuning fork. Journal of Mechanical Engineering Science 11(4), 364-375. https://doi.org/10.1243/JMES_JOUR_1969_011_045_02.
Burleigh, T. D., and Fuierer, P. (2005). Tuning forks for vibrant teaching. JOM: Journal of the Minerals, Metals & Materials Society 57(11), 26-27. https://doi.org/10.1007/s11837-005-0022-4.
Butskiy, O., Ng, D., Hodgson, M., and Nunez, D. A. (2016). Rinne test: Does the tuning fork position affect the sound amplitude at the ear? Journal of Otolaryngology-Head and Neck Surgery 45, 21. https://doi.org/10.1186/s40463-016-0133-7.
Chladni, E. F. F. (1802). Die Akustik. Breitkopf & Härtel, Leipzig. Translated into English by R. T. Beyer, Treatise on Acoustics: The First Comprehensive English Translation of E. F. F. Chladni's Traité d'Acoustique. Springer International Publishing, 2015.
Feldmann, H. (1997a). History of the tuning fork. I: Invention of the tuning fork, its course in music and natural sciences. Laryngo-Rhino-Otologie 76(2), 116-122. https://doi.org/10.1055/s-2007-997398. (in German)
Feldmann, H. (1997b). History of the tuning fork. II: The invention of the classic tests of Weber, Rinne, and Schwabach. Laryngo-Rhino-Otologie 76(5), 318-326. https://doi.org/10.1055/s-2007-997435. (in German)
Feldmann, H. (1997c). History of the tuning fork. III: On the way to quantitatively measuring hearing acuity. Laryngo-Rhino-Otologie 76(7), 428-434. https://doi.org/10.1055/s-2007-997457. (in German)
Froehle, B., and Persson, P.-O. (2014). High-order accurate fluid-structure simulation of a tuning fork. Computers & Fluids 98, 230-230. https://doi.org/10.1016/j.compfluid.2013.11.009.
Greenslade, T. B., Jr. (1992). The acoustical apparatus of Rudolph Koenig. The Physics Teacher 30(12), 518-524. https://doi.org/10.1119/1.2343629.
Guillemin, A. (1877). The Forces of Nature: A Popular Introduction to the Study of Physical Phenomena. MacMillan and Co., London. Edited by J. M. Lockyer; translated by N. Lockyer.
Helmholtz, H. L. F. (1885). On the Sensations of Tone as a Physiological Basis for the Theory of Music, 2nd ed. Longmans, Green and Company, London. Translated by A. J. Ellis, Dover, New York, 1954.
Kinsler, L. E., Frey, A. R., Coppens, A. B., and Sanders, J. V. (2000). Fundamentals of Acoustics, 4th ed. J. Wiley & Sons, New York.
Laughlin, Z., Naumann, F., and Miodownik, M. (2008). Investigating the acoustic properties of materials with tuning forks. In Proceedings of the Materials & Sensations Conference, Pau, France, October 22–24, 2008.
Lincoln, J. (2013). Ten things you should do with a tuning fork. The Physics Teacher 51(3), 176-181. https://doi.org/10.1119/1.4792020.
MacKechnie, C. A., Greenberg, J. J., Gerkin, R. C., McCall, A. A., Hirsch, B. E., Durrant, J. D., and Raz, Y. (2013). Rinne revisited: Steel versus aluminum tuning forks. Journal of Otolaryngology-Head and Neck Surgery 149(6), 907-913. https://doi.org/10.1177/0194599813505828.
Martina, I. S. J., van Koningsveld, R., Schmitz, P. I. M., Van der Meche, F. G. A., and Van Doorn, P. A. (1998). Measuring vibration threshold with a graduated tuning fork in normal aging and in patients with polyneuropathy. Journal of Neurology, Neurosurgery & Psychiatry 65, 743-747. https://doi.org/10.1136/jnnp.65.5.743.
Miller, D. C. (1926). The Science of Musical Sounds, 2nd ed. Macmillan, New York, pp. 29-33.
Miller, D. C. (1935). Anecdotal History of the Science of Sound to the Beginning of the 20th Century. MacMillan, New York, p. 39.
Pantalony, D. (2009). Altered Sensations: Rudolph Koenig's Acoustical Workshop in Nineteenth-Century Paris. Springer Netherlands, Dordrecht, pp. 92-93.
Rayleigh, J. W. S. (1894). The Theory of Sound, vol. 1. MacMillan, London, §56 and §171. Reprinted by Dover, New York, 1945.
Rayleigh, J. W. S. (1899). Octave from tuning-forks. Scientific Papers vol. 1, pp. 318-319.
Rayleigh, J. W. S. (1912). Longitudinal balance of tuning-forks. Scientific Papers vol. 5, pp. 372-375. Originally published in Philosophical Magazine 13, 316-333 (1907).
Rossing, T. D., Russell, D. A., and Brown, D. E. (1992). On the acoustics of tuning forks. American Journal of Physics 60(7), 620-626. https://doi.org/10.1119/1.17116.
Russell, D. A. (2000). On the sound field radiated by a tuning fork. American Journal of Physics 68(12), 1139-1145. https://doi.org/10.1119/1.1286661.
Russell, D. A. (2017). Acoustics and vibration of baseball and softball bats. Acoustics Today 13(4), 35-42.
Russell, D. A., Junell, J., and Ludwigsen, D. O. (2013). Vector intensity around a tuning fork. American Journal of Physics 81(2), 99-103. https://doi.org/10.1119/1.4769784.
Sillitto, R. M. (1966). Angular distribution of the acoustic radiation from a tuning fork. American Journal of Physics 34(8), 639-644. https://doi.org/10.1119/1.1973192.
Sönnerlind, H. (2018). Finding Answers to the Tuning Fork Mystery with Simulation. Available at https://www.comsol.com/blogs/finding-answers-to-the-tuning-fork-mystery-with-simulation/. Accessed February 17, 2020.
Stevens, J. R., and Pfannenstiel, T. J. (2015). The otologist's tuning fork examination — Are you striking it correctly? Otolaryngology — Head and Neck Surgery 153(3), 477-479. https://doi.org/10.1177/0194599814559697.
Watson, D. A. R. (2011). How to make a tuning fork vibrate: The humble pisiform bone. The Medical Journal of Australia 195(11), 732. https://doi.org/10.5694/mja11.11058.

About the Author
Daniel A. Russell, [email protected]
201 Applied Science Building, The Pennsylvania State University, University Park, Pennsylvania 16802, USA
Daniel A. Russell is a teaching professor of acoustics and distance education coordinator for the Graduate Program in Acoustics at the Pennsylvania State University (University Park). His research focuses on the acoustics and vibration of sports equipment (e.g., baseball and softball bats, ice and field hockey sticks, tennis rackets, cricket bats, hurling sticks, golf drivers, putters and balls, and ping-pong paddles). He also spends time in his laboratory developing physical demonstrations of vibroacoustic phenomena for classroom teaching along with computer animations to explain acoustics and vibration concepts. His animations website (see acs.psu.edu/drussell/demos.html) is well-known throughout the acoustics education community.

FEATURED ARTICLE

Speech Acoustics of the World’s Languages

Benjamin V. Tucker and Richard Wright

Depending on how one counts, there are more than 7,000 languages in the world (Eberhard et al., 2019). Many languages contain unique and interesting sounds and combinations of sounds that speakers produce to convey meaning to listeners. A language may share similarities with other languages on some dimensions while also having differences along other dimensions. Because of this linguistic diversity, it is important to the understanding of speech communication, and, more specifically, sound production in the world's spoken languages, to sample the sounds of language as broadly as possible. In the present article, we briefly discuss the diversity of the world's languages and speech sound production mechanisms. We also discuss the importance of documenting the acoustic characteristics of these sounds and the role of linguistic extinction on our ability to adequately sample the sounds of the world's languages.1

Over the last century and a half, speech researchers have developed a reasonably good understanding of how speech sounds are produced in the vocal tract. Given this, we might assume that sampling any single language, or even a handful of languages, might be sufficient for understanding speech sounds. However, even a language like !Xóõ (Traill, 1985) that is spoken in Botswana by about 2,000 speakers (Eberhard et al., 2019), with 58 consonants, 31 vowels, and 4 tones, covers only a fraction of the attested speech sounds in the world's languages. Similarly, as pointed out by Ian Catford (1977) and Björn Lindblom (1990), from an anthropophonic (human sound) perspective, the vocal tract is capable of producing a much wider variety of sounds than are used in human language (e.g., beatboxing; Proctor et al., 2013) because linguistic sounds are constrained to be efficient vehicles for communication. Therefore, as scientists, it is important that we sample languages broadly rather than relying on a handful of well-documented or closely related languages from restricted geographic distributions. For historical and demographic reasons, linguistic diversity and the research sampling of languages is not evenly distributed across the globe.

The maps in Figure 1 illustrate the linguistic diversity of the world, and both maps illustrate the uneven global distribution of languages. Figure 1A is a heat map of each country in the world showing the number of languages spoken in that country, including both indigenous and immigrant languages (Hammarström et al., 2019).

Figure 1. A: world heat map of individual countries. Colors, total number of languages reportedly spoken in each country as per Glottolog (Hammarström et al., 2019). B: world map of language location. Circles, latitude and longitude associated with an individual language as reported by Glottolog; color is associated with one of six regions (North America, South America, Eurasia, Africa, Australia, and Oceania). Where circles become difficult to distinguish represents a dense linguistic region.

This article derives from a special issue of The Journal of the Acoustical Society of America on the Phonetics of Under-Documented Languages that was edited by Benjamin V. Tucker and Richard Wright. You can see all articles in the issue at acousticstoday.org/pudls.

Figure 1B takes the latitude and longitude data associated with each of the languages in the Glottolog dataset (a comprehensive catalogue containing basic descriptive information for many of the world's languages; Hammarström et al., 2019) and plots it on the map.

Both maps illustrate the tremendous linguistic diversity across the world. Nearly all inhabitable regions have at least several languages. As illustrated in Figure 1B, many regions have a high density where the languages are plotted with overlapping circles, causing those areas to become solid. In many cases, the dense regions in Figure 1B contain languages that have been studied the least, leaving ample opportunity for acoustic studies. For example, Equatorial Africa, India, Papua New Guinea, and the northern part of South America are solidly covered in circles, which indicates a very high language density.

A Historical Perspective on Speech Sound Studies
Early phonetic documentation of the world's languages was relatively broad and considered many languages. By way of illustration, in the late 1800s and early 1900s, Abbé Rousselot (1897) adapted the kymograph (Figure 2), invented for medical research, to his research on speech. The kymograph was used to record both nasal and oral airflow and was also able to detect vocal-fold vibrations (laryngeal activity). Not long after Rousselot's publications, P. E. Goddard took a kymograph into the field where he recorded two Athabaskan languages: Hupa (Goddard, 1905), spoken in northwestern California, and Dene Sųłiné (Goddard, 1912), spoken in north-central and northwestern Canada and which is reported to currently be spoken by about 10,700 speakers as their native language (Eberhard et al., 2019). As is illustrated in a recording of Dene Sųłiné published in 1912 (Figure 3), the kymograph recorded not only oral airflow but also the vocal-fold vibrations or voicing of the speech. In Figure 3, the speech sounds are demarcated in [tɬ'i:ze], "a (horse) fly." In the phones with high-amplitude voicing (vocal-fold vibration), the regular vibrations of the vocal folds can be seen. This early work indicates an interest and desire by early speech researchers to document the sounds of the world's languages and to describe the unique aspects of these sounds, although a bias often remained toward languages that were easily accessible to the researcher.

Figure 2. Abbé Rousselot (1846–1924) with a kymograph. Available at bit.ly/2wrHMRV.

Figure 3. Goddard's (1912) kymographic tracing of airflow of a speaker producing the word [tɬ'i:ze], "a (horse) fly," in Dene Sųłiné, with segmentation of individual phones added. x-axis, time; y-axis, amount of airflow. Vocal-fold vibration is also shown in the movement on the y-axis.

By the late 1920s, acoustic studies of speech had become more common, and the first decade of The Journal of the Acoustical Society of America (JASA) saw 17 papers on the acoustics of speech and speech production. Of these, 12 were either about English or used English exclusively as data for a study and the others used nonlanguage vocalizations. The pattern of focusing largely on English continued even as acoustic studies of spoken language became more widespread with the release of the spectrogram in the 1940s (Koenig et al., 1945; Potter, 1945). Thus, as acoustics became more widely used in linguistic and psychological research after World War II, a small handful of languages (e.g., English, French, German, Dutch, Japanese, and Chinese) came to dominate most publications investigating the acoustic characteristics of speech.

A notable exception for JASA is a paper in a special volume on communication from 1950 in which the author, John Lotz, points out the need for studying speech acoustics from a cross-linguistic perspective (available at asa.scitation.org/toc/jas/22/6).

"Every speech event belongs to a definite language. Any speech analysis that disregards this fact...will lack adequate principles for the classification and description of the complexities of speech" (Lotz, 1950, p. 712).

Although most of the papers in this special volume were theoretical in nature and therefore contain very little actual acoustic information and no acoustic investigations about specific languages, Lotz's point is well founded. Despite the call for more diversity in the acoustic studies of languages in 1950 and despite a renewed interest in languages of the world among linguists, the bias toward relying on a handful of languages persists even today.

This general lack of acoustic description and research on underdocumented languages in JASA and in other journals inspired us to host two special sessions on the phonetics of underdocumented languages at Acoustical Society of America meetings (in Salt Lake City, UT, May 2016, and Victoria, BC, Canada, November 2018) and to organize a special issue on the phonetics of underdocumented languages (Tucker and Wright, 2020). One goal of the special issue is to increase the number of underdocumented languages described in JASA. In the special issue, there are descriptions of aspects of 25 different underdocumented languages from 5 different continents.

Acoustics of the World's Languages
Many sounds have been described phonologically in linguistic grammars of languages, descriptions that explore and explain the patterns of a given language, although the phonological section in these grammars typically makes up a very small portion of the grammar. Most linguistic grammars use impressionistic methods where essentially the researcher writes down what they think they hear. In these grammars, only a fraction of the described sounds have been examined in phonetic, and particularly acoustic, detail. PHOIBLE is a database of speech sounds that lists 3,183 speech sounds in 2,186 languages (Moran and McCloy, 2019). Many, maybe most, of these sounds and sound combinations have not been described acoustically. First, we follow the International Phonetic Alphabet (2018; see at acousticstoday.org/ipa-chart) conventions for transcribing speech sounds and indicate that these are speech sounds using square brackets on either side of the sound. Then, we briefly describe some of the unique acoustic characteristics of speech sounds from several different languages that are acoustically underdocumented.

Source-Filter Model of Speech Production
A simplified way of modeling speech sound production is using a source-filter model (Chiba and Kajiyama, 1945; Fant, 1960) with a source (e.g., vocal-fold vibration or aperiodic turbulence) that is filtered by the shape of the vocal tract (Figure 4A). A way to realize this model is by using a tube as the filter with a source at one end of the tube (e.g., Figure 4B). There are a variety of possible sources at different points along the vocal tract. The easiest vocal tract configuration to start with is a vowel, where vocal-fold vibration creates a complex sound that is filtered by the resonant characteristics of the entire vocal tract above it. Because the source and filter are assumed to be independent, it could also be referred to as an independent source and filter model. In Figure 4A, the major speech articulators, which can be divided into static and dynamic, are illustrated using a standard midsagittal view of the head. The tongue is a dynamic articulator, and speech is produced as a result of the interaction between the tongue and the static articulators, creating different tube configurations. Figure 4B illustrates the major static articulators in the approximate location they would fall on in a tube model of the vocal tract. For a neutral tube, the resonant frequencies can be calculated by assuming that the vocal tract is a closed-open tube and applying a quarter-wavelength standing wave resonator to estimate the resonant frequencies of the tube. The tube model can be used to make predictions about the effect of different types of articulation and how they will impact the acoustic characteristics of the speech.

Figure 4. A: midsagittal view of the human vocal tract with important places of articulation labeled. B: neutral closed-open tube with labels indicating the approximated places of articulation. The closed end is the dark oval on the right at the vocal folds and the open end is located at the labial end of the tube. See text for further explanation.
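The closed-open tube approximation can be made concrete with the quarter-wavelength resonance formula, f_n = (2n − 1)c/(4L). In the sketch below, the 17.5-cm tract length and 350 m/s sound speed are assumed round numbers for a neutral adult vocal tract, not values given in this article; they yield the familiar estimates of roughly 500, 1,500, and 2,500 Hz for the first three resonances.

# Quarter-wavelength resonances of a closed-open tube as a stand-in for a
# neutral vocal tract. Tube length and sound speed are assumed round numbers.
c = 350.0     # speed of sound in warm, humid air (m/s), assumed
L = 0.175     # vocal tract length (m), assumed adult value

def closed_open_resonance(n, c=c, L=L):
    """n-th resonance (n = 1, 2, 3, ...) of a tube closed at one end: f = (2n-1)c/(4L)."""
    return (2 * n - 1) * c / (4 * L)

for n in range(1, 4):
    print(f"resonance {n} of the neutral tube ~ {closed_open_resonance(n):6.0f} Hz")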

The Sounds of Language
As detailed by Ladefoged and Maddieson (1996), individual speech sounds, which are often referred to as segments or less accurately as phonemes, can be broken down into classes based on their production mechanisms and acoustic characteristics. The first main division is consonants and vowels. Vowels generally have a voicing source at the larynx (the structure that houses the vocal folds; Figure 4A), where egressive airflow (air flowing out) from the lungs sets the vocal folds in motion, and their spectral characteristics and resulting resonant, or formant, characteristics are determined by different vocal tract shapes above the larynx. The first three vocal tract resonances or formants (Figure 5A, yellow bands), together with the overall spectral shape of the signal, are the foundation for human perception of vowel quality (Hillenbrand et al., 2006). Vowels can also contrast in other ways. One way is in terms of duration, where they can vary in terms of long versus short vowels. Another way is whether the velopharyngeal port (the place where the velum and the pharyngeal wall meet; Figure 4A) is closed (with only oral resonances) or open (with additional nasal resonances due to airflow into the nasal cavity). Yet another way vowels contrast is whether they have a single main vowel quality (monophthongs) or vowel movement between two (diphthongs) or three (triphthongs) vowel qualities.

Figure 5. Jalapa de Díaz Mazatec words with waveforms (top) and spectrograms (bottom) illustrating different voice qualities. A: modal voicing [thæ], "itch" (word 21). B: creaky voicing [thæ̰ ], "sorcery" (word 20). F1, F2, and F3, first three vocal tract resonances or formants. Available at bit.ly/2TxrBdK from files bit.ly/2IjYG7H and bit.ly/32NE4OC produced by Speaker 4. Word numbers and speaker number reference the items in the original recordings.

Voice Quality
An important dimension of voiced segments is the ways in which speakers can manipulate vocal-fold vibrations, creating distinct voice characteristics referred to as voice quality or phonation type. Voice quality is typically classified into three types: modal, breathy, and creaky. Modal voicing is characterized by regular cycles and by a fairly linear drop in energy of about 6 dB/octave. Breathy voicing, in which the vocal folds are slack and very loosely held together, has a lower amplitude than modal voicing, and it is typified by an additional aperiodic component and a steep falloff in spectral energy.


Creaky voicing, in which the vocal folds are slightly stiffer and tightly closed at the anterior end while allowing the posterior end to vibrate, has a lower amplitude than modal voicing, and it is characterized by longer, irregular cycles and a shallow falloff in spectral energy.

Many languages employ differences in voice quality as a feature of lexical tone (the use of vocal pitch, for which the fundamental frequency is the acoustic correlate, to distinguish words), as in Mandarin and Vietnamese. For example, Mandarin has four main lexical tones (see Multimedia1 at acousticstoday.org/tuckermedia): high level (as in the word "eight" 八 [pa˥]), mid-high rising (as in the word "to pull out" 拔 [pa˧˥]), mid-low-mid dipping (as in the word "to hold" 保 [pa˨˩˦]), and high-low falling (as in the word "father" 爸 [pa˥˩]). In the mid-low-mid dipping tone, creaky voicing is used. In other languages, fundamental frequency and voice quality can be used to convey meaning at the sentence level, as in English questions versus statements (see Multimedia2 at acousticstoday.org/tuckermedia). Statements end in a low pitch that is often accompanied by creaky voicing.

Many other languages have contrastive (or phonemic) voice quality. Linguists use meaning differentiation to determine when speech sounds are contrastive in a language. For example, in English, sit and zit mean different things and are minimally contrastive; the sounds [s] and [z] are only distinguished by vocal-fold vibration, which is what differentiates the two words. One language that makes contrasts based on voice quality is Jalapa de Díaz Mazatec, an Otomanguean language spoken by about 17,500 speakers in Mexico (Eberhard et al., 2019). Figure 5 illustrates two words where voice quality differences on the vowels convey different meanings. In Figure 5A, [thæ], "itch" (see Multimedia3 at acousticstoday.org/tuckermedia), the vocalic portion is modally voiced with regular cycles and a level amplitude. In Figure 5B, [thæ̰ ], "sorcery" (see Multimedia4 at acousticstoday.org/tuckermedia), the vowel is realized with creaky (also known as laryngealized) voicing. The lower amplitude and longer and irregular cycles of creaky voicing can be seen between 225 and 300 ms in Figure 5B.

Some consonants, typically referred to as approximants, have dynamic vowel-like resonances, such as [w]. Like vowels, they are best defined in terms of their first three resonances (F1, F2, and F3).

Figure 6. Waveforms (top) and spectrograms (bottom) illustrating complex consonant clusters in Tsou. A: [fkoi], “snake.” B: [kʃikʃi], “ash/burning charcoal.” C: [tmihi], “to hang.” D: [pŋajo], “have food in mouth (actor focus).”

All other consonants can be described in terms of their place of articulation, or where in the oral tract they are produced. These places of articulation can be divided into the locations indicated in Figure 4. It is possible to have 17 places of articulation for consonants (Ladefoged and Maddieson, 1996). Consonants are also distinguished by the manner in which they are produced. For example, plosives are created by stopping the airflow in the oral tract and quickly releasing it, as in the sound [p] in English. Fricatives are produced by making a tight constriction in the oral tract and creating turbulent airflow through the constriction, as in [s] in English. Laryngeal contrasts, as we have already seen, include voicing and voice quality and aspiration (a period of voicelessness following a plosive release). The airstream mechanism (how we control the flow of air in speech) is what a speaker manipulates to power speech and can be realized as pulmonic egressive, glottalic egressive/ejective, glottalic ingressive/implosive, or velaric ingressive/click. Egressive sounds are created by outward airflow; ingressive sounds are created using inward airflow.

Figure 7. A: midsagittal view of the speech articulators illustrating an alveolar ejective. Arrow, direction of laryngeal movement. B: waveform (top) and spectrogram (bottom) of [tɬ'i:ze], "a horse fly" in Dene Sųłiné.

In addition to the segmental speech sounds, there are suprasegmental aspects of speech. A language's suprasegmental acoustic features include whether it is a tone language (e.g., Mandarin and Vietnamese, as seen in the Mandarin example), a stress language (e.g., English and Hawai'ian), or a pitch accent language (e.g., Japanese and Western Basque). The acoustic features of suprasegmental aspects of speech prominently include fundamental frequency as well as duration and intensity.
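Fundamental frequency, the acoustic correlate of pitch discussed throughout this section, is routinely estimated directly from the waveform. As a rough, self-contained illustration (not the authors' procedure), the sketch below estimates F0 for a short voiced frame by autocorrelation; the synthetic 120-Hz "vowel" is simply a stand-in for a real recording.

```python
import numpy as np

def estimate_f0(frame, fs, fmin=60.0, fmax=400.0):
    """Estimate the fundamental frequency (Hz) of a voiced frame by autocorrelation."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]  # non-negative lags only
    lo, hi = int(fs / fmax), int(fs / fmin)                        # plausible period range
    lag = lo + np.argmax(ac[lo:hi])                                # lag of the strongest peak
    return fs / lag

# Synthetic stand-in for a short voiced vowel: a 120-Hz fundamental plus two harmonics.
fs = 16000
t = np.arange(int(0.04 * fs)) / fs                                 # 40-ms analysis frame
frame = (np.sin(2 * np.pi * 120 * t)
         + 0.5 * np.sin(2 * np.pi * 240 * t)
         + 0.25 * np.sin(2 * np.pi * 360 * t))

print(f"Estimated F0: {estimate_f0(frame, fs):.1f} Hz")            # close to 120 Hz
```

Creaky voicing of the kind shown in Figure 5B would surface in such an analysis as longer and more irregular period estimates from frame to frame, alongside reduced amplitude.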

Phonotactics and Complex Combinations
A distinguishing characteristic of many languages is how they combine sounds, also called phonotactics. Although most languages have fairly simple phonotactics, some, like English, have much more complex combinations of segments as syllables, as in the word sports, with fricative ([s] in this case) and plosive ([p] and [t]) combinations. A handful of known languages have very complex phonotactics. One example is Tsou, an Austronesian language spoken in Taiwan by about 4,000 speakers (Eberhard et al., 2019). Most of its consonants and vowels are commonly found in other languages. Unlike most languages, however, almost all of the two-way combinatorial possibilities of consonants are attested to in clusters at the beginning of words, resulting in many very rare combinations (Wright and Ladefoged, 1997). The examples (from recordings in Wright, 1996) in Figure 6 illustrate combinations of [fk], [kʃ], [tm], and [pŋ] at the beginning of words. One of the interesting features in Figure 6A (see Multimedia5 at acousticstoday.org/tuckermedia) with the fricative-stop cluster is the amplitude of the labiodental fricative. It has a much higher intensity, and it is longer than when it occurs in languages that only permit it to occur preceding a vowel. Figure 6B is interesting because of the extremely short vowels in comparison to the fricatives (see Multimedia6 at acousticstoday.org/tuckermedia). Figure 6, C and D, are interesting in part because they have often been misperceived in impressionistic transcriptions as having an extra syllable (see Multimedia7 and 8, respectively, at acousticstoday.org/tuckermedia).

Nonpulmonic Airstream Mechanisms
Languages differ not only in how they use sounds but also in what sounds they have, how those sounds are made, and how they combine the sounds. For example, although all languages excite the vocal tract with pulmonic-egressive powered sources (at the larynx for vowels and at various points along the vocal tract for sounds with aperiodic


sounds like fricatives, stops, and affricates), many languages use other sources, referred to as "airstream mechanisms."

Just over 10%, or 230, of the languages in the PHOIBLE sample contain ejective sounds (Moran and McCloy, 2019). Ejectives, illustrated in Figure 7A, are made using the laryngeal airstream mechanism as the source. It is first important to understand how these sounds are produced, and then we describe the acoustic characteristics of these sounds. Ejectives are made by first making a closure somewhere in the oral tract, like at the alveolar ridge (the hard ridge behind the upper teeth; Figure 4A). The speaker also closes the vocal folds and quickly raises the entire laryngeal system. The air trapped in the oral tract is compressed, increasing the air pressure in the oral tract. The tip of the tongue is lowered, opening the oral tract and releasing the compressed air, creating an extreme popping sound. The waveform and spectrogram in Figure 7B are the same word recorded by Goddard in 1912 (Figure 3), [tɬ'i:ze], "a (horse) fly," by a speaker of Dene Sųłiné in 2020 (see Multimedia9 at acousticstoday.org/tuckermedia). This word contains an ejectivized alveolar lateral affricate, which is an ejective that is released at the side of the tongue followed by a fricative. It can be seen in the spectrogram that the ejective release (between 0 and 50 ms) is the loudest part of the speech in the word, with strong transients in the waveform (e.g., Wright et al., 2002).

Another airstream mechanism used is velaric. Sounds produced using this airstream mechanism are commonly known as clicks. By way of example, we describe the process of producing an alveolar click; this process is illustrated in Figure 8. First, the speaker raises the tongue and creates a closure both at the alveolar ridge with the tip of the tongue and at the velum with the back part of the tongue. The tongue is then pulled down while maintaining the closure at the alveolar ridge and velum. This creates an area of low pressure (a vacuum) within the cavity between the tongue and the roof of the mouth. Finally, the tip of the tongue is released from the alveolar ridge, creating a very loud popping sound as air fills the area of low pressure.

Figure 8. Midsagittal view of the vocal tract illustrating the process of click production. Dotted line, closure both at the alveolar ridge with the tip of the tongue and at the velum with the back part of the tongue. Arrow, direction of the tongue movement to create the low-pressure area before release.

Clicks, like ejectives, are also extremely loud speech sounds. The high-intensity release of the click can clearly be seen in the four different realizations of the alveolar click in Ju|'hoan in Figure 9. These recordings come from the University of California, Los Angeles (UCLA) Phonetics Lab Archive dataset. Ju|'hoan, a Kx'a language, is spoken in Namibia and Botswana, with a population of about 44,000 speakers (Eberhard et al., 2019). Clicks represent just 1%, or 29, of the languages in the PHOIBLE sample (Moran and McCloy, 2019). One of the interesting things about clicks is that they can be combined with other sounds so that they are produced in many different ways. Figure 9, A and B, illustrates the plain alveolar click in the words [!ābē], "to be crinkled" (see Multimedia10 at acousticstoday.org/tuckermedia), and [!āá], "to run" (see Multimedia11 at acousticstoday.org/tuckermedia), respectively. The remaining examples illustrate the clicks occurring with different combinations of sounds. The example in Figure 9C illustrates a prenasalized alveolar click [ŋ!áā], "type of acacia" (see Multimedia12 at acousticstoday.org/tuckermedia), and in Figure 9D is the voiced alveolar click from the word [ɡ!āà], "to dry something" (see Multimedia13 at acousticstoday.org/tuckermedia).
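Waveform-plus-spectrogram displays like those in Figures 6-9 can be generated with a few lines of standard signal-processing code. The sketch below is one generic way to do it, not the authors' pipeline; the file name is a placeholder, and the 5-ms analysis window is simply a common wideband choice that keeps brief events such as click and ejective release bursts visible.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

# Placeholder file name: substitute any mono speech recording.
fs, x = wavfile.read("click_word.wav")
x = x.astype(float)
if x.ndim > 1:                         # mix down if the file happens to be stereo
    x = x.mean(axis=1)

# Wideband spectrogram: short (5-ms) Hann windows resolve release transients.
f, t, sxx = spectrogram(x, fs=fs, window="hann",
                        nperseg=int(0.005 * fs),
                        noverlap=int(0.004 * fs))

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(8, 5))
ax1.plot(np.arange(len(x)) / fs, x)            # waveform on top
ax1.set_ylabel("Amplitude")
ax2.pcolormesh(t, f, 10 * np.log10(sxx + 1e-12), shading="auto")  # spectrogram below
ax2.set_ylabel("Frequency (Hz)")
ax2.set_xlabel("Time (s)")
ax2.set_ylim(0, 8000)
plt.tight_layout()
plt.show()
```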

As seen from the examples in this section, there is a wide variety of sounds in human language that are worthy of closer acoustic study. Some of the more interesting sounds are found in only a handful of languages, so one would need a very broad sample not to miss them.

Language Endangerment
We often hear about the tragedy of the loss of biological diversity, which is a major loss to the planet's ecosystem. Less often, we hear about cultural and linguistic diversity

Figure 9. Waveforms (top) and spectrograms (bottom) of Ju|'hoan words. A: plain alveolar click [!ābē], "to be crinkled" (word 3). B: plain alveolar click [!āá], "to run" (word 4). C: prenasalized alveolar click [ŋ!áā], "type of acacia" (word 7). D: voiced alveolar click [ɡ!āà], "to dry something" (word 8). Available at bit.ly/2wvmaE8 from file bit.ly/2PNqGVn. Word numbers and speaker number reference the items in the original recordings.

and how more than 75% of the world's languages have fewer than 1,000 speakers, and of those, many are in danger of losing all of their speakers within a generation. Many of these endangered languages exist in the most densely populated areas illustrated in Figure 1B. One resource calculates that over 40% of the world's languages are endangered (Eberhard et al., 2019). That is nearly 3,000 of the world's 7,000 languages that will likely not be spoken over the next 1-2 generations. Languages become endangered when future generations are not actively learning the community's language but are learning a more dominant language. The loss of language can play out in many different ways. An article in Acoustics Today by Whalen et al. (2011) gives an excellent example of some of the documentation of two endangered languages. The article also discusses in detail the importance of phonetic documentation from a language endangerment perspective.

Summary
As seen in the preceding examples, there is great diversity in the sounds of the world's languages, and much can be learned from these sounds. The investigation of the acoustic characteristics of the world's languages is an important part of understanding speech communication. Thus far, we have not mentioned the perception side of speech communication, and just as it is important to understand the acoustic characteristics of speech production across the world's languages, it is also important to understand how listeners of these languages make use of acoustic cues to comprehend language. We have already argued that the literature describing the acoustics of the world's languages is lacking; this lack of literature is even more extreme in the domain of speech perception for underdocumented languages. There are many speech sounds that vary radically from sounds in the well-documented languages, and most have not been studied acoustically. Similarly, a language may have a set of well-described sounds in its inventory but may combine them in a way that is not well documented; understanding how sounds interact with each other and their acoustic characteristics is an important basic step to really understanding speech communication.


Acknowledgments
We thank Val Wood for providing the Dene Sųłiné example, Youran Lin for providing the Mandarin example, and Matthew Kelley for his comments on an early draft of this article.

References
Catford, J. C. (1977). Fundamental Problems in Phonetics. Indiana University Press, Bloomington.
Chiba, T., and Kajiyama, M. (1942). The Vowel: Its Nature and Structure. Tokyo-Kaiseikan Publishing Co., Ltd., Tokyo. Edition reprinted in 1952; Japanese edition translated in 2003.
Eberhard, D. M., Simons, G. F., and Fennig, C. D. (Eds.). (2019). Ethnologue: Languages of the World, 22nd ed. SIL International, Dallas, TX.
Fant, G. (1960). Acoustic Theory of Speech Production: With Calculations Based on X-Ray Studies of Russian Articulations, vol. 2, Description and Analysis of Contemporary Standard Russian. Mouton, The Hague, The Netherlands, pp. 15-90.
Goddard, P. E. (1905). Mechanical aids to the study and recording of language. American Anthropologist 7(4), 613-619.
Goddard, P. E. (1912). Texts and analysis of Cold Lake dialect, Chipewyan. Anthropological Papers of the American Museum of Natural History 10(1-2), 170.
Hammarström, H., Forkel, R., and Haspelmath, M. (2019). Glottolog 4.1. Max Planck Institute for the Science of Human History, Jena, Germany. Available at http://glottolog.org. Accessed January 27, 2020.
Hillenbrand, J. M., Houde, R. A., and Gayvert, R. T. (2006). Speech perception based on spectral peaks versus spectral shape. The Journal of the Acoustical Society of America 119(6), 4041-4054. https://doi.org/10.1121/1.2188369.
International Phonetic Alphabet. (2018). International Phonetic Alphabet (IPA) and the IPA Chart. Available at http://www.internationalphoneticassociation.org/content/ipa-chart. Copyright © 2018 International Phonetic Association, available under a Creative Commons Attribution-Sharealike 3.0 Unported License.
Koenig, W., Dunn, H. K., and Lacy, L. Y. (1946). The sound spectrograph. The Journal of the Acoustical Society of America 18(1), 19-49. https://doi.org/10.1121/1.1916342.
Ladefoged, P., and Maddieson, I. (1996). The Sounds of the World's Languages. Blackwell, Oxford, UK.
Lindblom, B. (1990). On the notion of "possible speech sound." Journal of Phonetics 18, 135-152.
Lotz, J. (1950). Language and speech. The Journal of the Acoustical Society of America 22(6), 712-716. https://doi.org/10.1121/1.1906676.
Moran, S., and McCloy, D. (Eds.). (2019). PHOIBLE 2.0. Max Planck Institute for the Science of Human History, Jena, Germany. Available at http://phoible.org. Accessed January 1, 2020.
Potter, R. K. (1945). Visible patterns of sound. Science 102(2654), 463-470. https://doi.org/10.1126/science.102.2654.463.
Proctor, M., Bresch, E., Byrd, D., Nayak, K., and Narayanan, S. (2013). Paralinguistic mechanisms of production in human "beatboxing": A real-time magnetic resonance imaging study. The Journal of the Acoustical Society of America 133(2), 1043-1054. https://doi.org/10.1121/1.4773865.
Rousselot, J.-P. (1897). Principes de Phonétique Expérimentale, tomes 1 and 2. Paris-Leipzig.
Traill, A. (1985). Phonetic and Phonological Studies of !Xóõ Bushman. Helmut Buske, Hamburg, Germany.
Tucker, B. V., and Wright, R. A. (2020). Introduction to the special issue on the phonetics of under-documented languages. The Journal of the Acoustical Society of America 147(4), 2741-2744. https://doi.org/10.1121/10.0001107.
UCLA Phonetics Lab. (2007). The UCLA Phonetics Lab Archive. Department of Linguistics, University of California Los Angeles (UCLA), Los Angeles, CA. Available at http://archive.phonetics.ucla.edu/.
Whalen, D. H., DiCanio, C., and Shaw, P. A. (2011). Phonetics of endangered languages. Acoustics Today 7(4), 35-42.
Wright, R. A. (1996). Tsou consonant clusters and cue preservation. PhD Dissertation, University of California, Los Angeles.
Wright, R. A., and Ladefoged, P. N. (1997). A phonetic study of Tsou. Bulletin of the Institute of History and Philology, Academia Sinica, Nanking, Taipei.

About the Authors

Benjamin V. Tucker, [email protected]
Department of Linguistics, University of Alberta, 4-32 Assiniboia Hall, Edmonton, Alberta T6G 2E7, Canada

Benjamin V. Tucker is a professor of phonetics in the Department of Linguistics, University of Alberta (Edmonton, AB, Canada). He is also the director of the Alberta Phonetics Laboratory. He received his PhD from the University of Arizona. His research focuses on cognitive aspects of the production and perception of spontaneous speech (e.g., "Wazat?" for "What is that?"). He also works on the documentation of endangered and underdocumented languages.

Richard Wright, [email protected]
Department of Linguistics, University of Washington, Box 352425, Seattle, Washington 98195-2428, USA

Richard Wright is a professor of phonetics in the Department of Linguistics at the University of Washington (Seattle). He is also the director of the Linguistic Phonetics Laboratory. He received his PhD from the University of California, Los Angeles (UCLA) in 1996. His research addresses systematic variation in the production and perception of spoken language. His other research interests include machine recognition of spoken language, hearing loss, and documentation of endangered and underdocumented languages.

FEATURED ARTICLE

The Adapted Ears of Big Cats and Golden Moles: Exotic Outcomes of the Evolutionary Radiation of Mammals

Edward J. Walsh and JoAnn McGee

Through the process of natural selection, diverse organs and organ systems abound throughout the animal kingdom. In light of such abundant and assorted diversity, evolutionary adaptations have spawned a host of peculiar physiologies. The anatomical oddities that underlie these physiologies and behaviors are the telltale indicators of trait specialization. Following from this, the purpose of this article is to consider a number of auditory "inventions" brought about through natural selection in two phylogenetically distinct groups of mammals, the largely fossorial golden moles (Order Afrosoricida, Family Chrysochloridae) and the carnivorous felids of the genus Panthera along with its taxonomic neighbor, the clouded leopard (Neofelis nebulosa).

Figure 1. Schematics of the outer, middle, and inner ears (A) and the organ of Corti in cross section (B) of a placental mammal.

In the Beginning
The first vertebrate land invasion occurred during the Early Carboniferous period some 370 million years ago. The primitive but essential scaffolding of what would become the middle and inner ears of mammals was present at this time, although the evolution of the osseous (bony) middle ear system and the optimization of cochlear features and function would play out over the following 100 million years. Through natural selection, the evolution of the middle ear system, composed of three small articulated bones, the malleus, incus, and stapes, and a highly structured and coiled inner ear, came to represent all marsupial and placental (therian) mammals on the planet thus far studied. The consequences of this evolution were extraordinary. The process of natural selection enabled an extension of the highly restricted low-frequency hearing range that tops out around 12 kHz for most nonmammalian vertebrates (although there are notable exceptions in some frogs and fishes) into the greatly expanded high-frequency space of the mammal that reaches an upper limit of about 90 kHz in some terrestrial mammals and exceeds 150-200 kHz in some echolocating bats and aquatic mammals. All of this was accomplished, at least partially, by the selection-driven repurposing of elementary components of the reptilian jaw into the osseous middle ear and the reconfiguration of the amphibian and basilar papillae into


a hearing organ (Figure 1) equipped with signal-amplifying sensory cells sensitive to displacements measured on a nanoscale ruler (Clack et al., 2016).

The story that we tell here focuses on one well-known outcome of evolution through natural selection and one outcome that is just emerging.

Golden Moles and Their Remarkable Middle Ears
The golden mole subfamily Chrysochlorinae is home to 11 species of highly specialized mammals (Bronner, 2020). Members of this taxon distinguish themselves from the only other golden mole subfamily, Amblysominae, by virtue of middle ear specializations thought to augment subterranean auditory performance. Golden moles as a group are small, insectivorous, burrowing mammals inhabiting wide-ranging climates, altitudes, and floral systems of sub-Saharan Africa. All species live a subterranean lifestyle, save one, the Namib golden mole (Eremitalpa granti namibensis) that we highlight in this article (Figure 2, photograph). External ear openings of golden moles are tiny and covered with dense, iridescent hair, and they have vestigial eyes covered by tough, thick skin that render them image blind. These phenotypic features are clear indicators of their unusual, but certainly not unique, mammalian lifestyle. Their fusiform body shapes combined with specialized cranial features and appendages adapted for digging suit them ideally for a subterranean lifestyle.

The specialization of interest here, however, has nothing to do with digging, but everything to do with the detection of subterranean "sounds" originating in the form of seismic waves in an ancient desert known as the Namib erg that extends along the Atlantic coast of Africa from Angola in the north to the northern tip of South Africa. These soilborne waves almost certainly influenced the evolution of the auditory periphery of at least some members of this taxon. The chief evolutionary outcome of this process in Chrysochlorinae species was hypertrophy of the malleus, the middle ear bone commonly referred to as the "hammer," that is set into motion by the sound-induced vibration of the tympanic membrane, commonly known as the eardrum (Figure 1).

Figure 2. Bottom: relationship between body mass and malleus mass is shown for numerous mammals (purple area) and for Chrysochlorinae species (red area). Data obtained from Nummela, 1995; von Mayer et al., 1995; Mason, 2001, 2003; Mason et al., 2018; and Coleman and Colbert, 2010. Top: scaled reconstructions of middle ear ossicular chains (blue, malleus; light blue, incus; yellow, stapes) and inner ears reproduced from Crumpton et al., 2015, with permission. Inset: photograph by G. B. Rathun, reproduced with permission.

In some species of Chrysochlorinae, malleus size can be remarkable (Figure 2). To put the incredible nature of this adaptation in perspective, consider the species Amblysomus hottentotus, the Hottentot golden mole. Its average body mass is over 2.5 times the mass of the Namib golden mole, the smallest of the golden mole species. Although the malleus mass of the Hottentot variety scales proportionally with body mass, as with the majority of mammals, the mass of the Namib golden mole's malleus is more than 60 times that of the Hottentot variant, which unambiguously justifies its designation as an evolutionary adaptation.
Accordingly, it is reasonable to presume that a significant amount, if not the great bulk, of the malleus size difference between Hottentot and Namib varieties is the product of environmental modification, an adaptation shaped by the force of natural selection.
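The scaling argument above can be made concrete with a short calculation: fit a power law (a straight line in log-log space) relating malleus mass to body mass for typically scaling species, then ask how far a given species falls from that prediction. The sketch below shows only the arithmetic; every number in it is an invented placeholder, and the real data behind Figure 2 are in the sources cited there.

```python
import numpy as np

# Hypothetical (body mass in g, malleus mass in mg) pairs for "typical" mammals.
body = np.array([20.0, 75.0, 300.0, 1200.0, 5000.0])
malleus = np.array([0.8, 2.1, 6.5, 19.0, 60.0])

# Power law malleus = a * body**b  <=>  log(malleus) = log(a) + b * log(body).
b, log_a = np.polyfit(np.log10(body), np.log10(malleus), 1)

def predicted_malleus(body_mass_g):
    """Malleus mass (mg) expected from the fitted allometric relationship."""
    return 10 ** log_a * body_mass_g ** b

# A hypothetical outlier: small body, disproportionately heavy malleus.
observed_malleus = 70.0     # mg, invented
body_mass = 25.0            # g, invented
ratio = observed_malleus / predicted_malleus(body_mass)

print(f"scaling exponent b = {b:.2f}")
print(f"observed / predicted malleus mass = {ratio:.0f}x")
```

A species whose observed-to-predicted ratio is far above 1, as in this toy case, is the kind of departure from allometric expectation that marks the trait as a candidate adaptation rather than ordinary scaling.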

The Namib Golden Mole
The Namib golden mole has abandoned the inflexible underground lifestyle of its relatives; it is celebrated instead for its sand-swimming skills, so much so that these moles are known in colloquial terms as "sand sharks."

However, their swimming skill isn't the point of interest here. It is their peculiar style of foraging, a style enabled, presumably, by their conspicuously hypertrophied mallei. Fiercely territorial, individuals scurry about on the surface of the erg, intermittently stopping to dip their snout and small, conically shaped head beneath the surface. They are, some have suggested, listening for the low-frequency, soilborne seismic signature that might lead them to their favorite prey, the subterranean dune termite (Fielden et al., 1990).

Biologists have, it turns out, settled on a likely explanation of just how the head-dipping behavior of the Namib golden mole might enable its hunting prowess. The hypothetical but probable answer is that some golden mole species detect seismic events by tightly coupling their heads to the substrate, taking advantage of inertial bone conduction. The low-frequency seismic waves propagating through the sand of the erg cause the bones of the skull to vibrate in unison. Movement of the loosely coupled ossicles lags behind that of the skull because of inertia, producing relative motion between the stapes footplate and the oval window of the cochlea and transferring energy to the inner ear (Bárány, 1938). We should point out, however, that compression of the bony cochlear wall and/or inner ear fluid inertia also play a role in bone conduction in at least some mammals.

Regardless, the enlarged mallei of some Chrysochlorinae species enhance sensitivity to bone conduction, presumably, but almost certainly, permitting the detection of low-amplitude, low-frequency ground vibrations (Narins et al., 1997; Mason, 2003). A second ossicular adaptation contributing to and enhancing sensitivity to seismic events is the displacement of the center of mass of the ossicular chain away from its natural rotational axis. The relocation of the center of mass further amplifies ossicular motion relative to the skull and, ostensibly, augments the sensitivity of the system to low-frequency seismic signals, unlike that predicted for golden moles with smaller mallei and a center of mass that falls close to the natural rotational axis. In a remarkable tilt to the amazing power of selection-driven adaptation, it can be confidently argued that the head-dipping behavior of the Namib golden mole permits the detection of faint seismic signals produced by the wind-driven motion of dune grass mounds scattered about their territories.

Golden Mole Hearing: Can They?
Although we cannot claim to know what Namib golden moles or any other member of the Chrysochlorinae subfamily actually hear, predictions derived from Bárány's 1938 model of inertial bone conduction suggest that they do. Using morphological measurements of key middle ear structures and calculating relevant middle ear parameters required by this model, Mason (2003) predicted the frequency producing peak displacement, the strongest driving force delivered by the stapes to the fluids of the inner ear, at 300 Hz in Grant's golden mole (Eremitalpa granti granti), a close relative of the Namib golden mole. This frequency corresponds closely to the peak frequency of seismic signals generated by the grassy mounds of the Namib erg (Narins et al., 1997). In addition, the predicted resonant frequency of bone conduction in the Cape golden mole, Chrysochloris asiatica, is 220 Hz (Mason, 2003), a value nearly matching resonant frequencies of 100-200 Hz determined by direct measurements of ossicular velocity in response to vibrational stimuli (Willi et al., 2006a,b). Depending on the accuracy of these predictions, these findings point confidently to the conclusion that the middle ears of golden moles were almost certainly adapted to detect soilborne seismic events, a prediction reinforced by the directed foraging behavior of the Namib golden mole (Lewis et al., 2006).

Further support for the view that golden moles are able to hear can be found in predictions of the high-frequency limit of hearing based on a widely used model of the middle ear (Hemilä et al., 1995). Using this model, Mason (2001) computed upper limits of 5.9 kHz and 13.7 kHz for Grant's golden mole and the Cape golden mole, respectively. In addition, direct observation of frequency-dependent alterations in the mode of ossicular vibration permits, theoretically, uncompromised detection of airborne stimuli in the Cape golden mole (Willi et al., 2006b). The evolutionarily modified middle ear of some golden moles has been described by some as nothing short of ingenious.

We end this section by pointing out the obvious. Although the role of the middle ear of the golden mole family has been the topic of considerable, highly productive inquiry, a complete accounting of golden mole hearing will require a comprehensive investigation using behavioral techniques or at the systems level of physiology before a clear understanding of the auditory capacity of these fascinating mammals is available.
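Bárány's inertial model and Mason's middle ear calculations are considerably richer than this, but the core intuition, that loading the ossicular chain with extra mass pulls its peak response toward low frequencies, can be illustrated with a single-degree-of-freedom resonator, f0 = (1/2π)√(k/m). The stiffness and mass values below are arbitrary illustrative numbers, not measured golden mole parameters.

```python
import numpy as np

def resonant_frequency_hz(stiffness_n_per_m, mass_kg):
    """Undamped natural frequency of a simple mass-spring system."""
    return np.sqrt(stiffness_n_per_m / mass_kg) / (2 * np.pi)

stiffness = 50.0            # N/m, arbitrary illustrative value
light_ossicle = 5e-6        # kg (5 mg), arbitrary "typical" ossicular mass
heavy_ossicle = 250e-6      # kg (250 mg), arbitrary "hypertrophied" malleus

print(f"light ossicular chain: {resonant_frequency_hz(stiffness, light_ossicle):.0f} Hz")
print(f"heavy ossicular chain: {resonant_frequency_hz(stiffness, heavy_ossicle):.0f} Hz")
```

With these made-up values the peak drops from roughly 500 Hz to roughly 70 Hz, which is the qualitative direction needed to move a middle ear's best response into the low-frequency band occupied by seismic signals.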


Is the Inner Ear of the Tiger Adapted to React Rapidly to Low Frequencies?
Although the middle ear specialization observed in golden moles is evident, even in gross anatomical terms, some adaptations are more subtle and are recognized only in behaviors or physiologies buried deep in otherwise generalist phenotypes. Discovering those traits can be challenging; they are frequently revealed, as with many scientific discoveries, through serendipity. Such was the case when an unusual feature in the auditory phenotype of the tiger (Panthera tigris) was discovered. Its discovery suggested that the tiger may be best thought of as an auditory specialist, a question that we address in this section. When thinking about use of the term "specialist," we generally refer to any trait that differentiates the animal's performance from closely related organisms. If, for example, the shape of an animal's audiogram breaks radically from that of other related taxa and is beneficial from an ethological perspective, the animal can be thought of as specialized in that specific trait.

We start, therefore, by pointing out that many elements of the tiger auditory phenotype are typical of auditory generalists. For example, overall sensitivity to acoustic stimuli in tigers is similar to that of other cat (felid) species, and, although the high-frequency limit is lower than that of smaller felids, the shapes of their sensitivity curves are also similar (Walsh et al., 2011). It is also likely that the low-frequency limit of hearing is lower than that of smaller cats, a prediction based on structure-based scaling of the middle ear (Huang et al., 2000).

Figure 3. Cochlear tuning is sharper in tigers and humans across the frequency range when compared with other mammalian species. Tuning sharpness was measured from auditory nerve recordings in the macaque (Joris et al., 2011), domestic cat, guinea pig, and chinchilla (see Shera et al., 2010) or estimated from ear canal recordings (i.e., stimulus-frequency otoacoustic emissions) of the tiger (Bergevin et al., 2012) and human (Shera et al., 2010). Inset: in mammals, generally greater basilar membrane (BM) space is allocated per octave in longer BMs than in shorter membranes. Adapted from Manley, 2017.

Although the middle ear transforms sound energy collected from the outer ear and transfers it to the inner ear, the cochlea functions as a frequency-analyzing system that is commonly modeled as an array of bandpass filters reflecting the resonant properties of the basilar membrane (BM; Figure 1B), the vibratory membrane supporting the organ of Corti, on which sit the sensory cells for hearing. The resonant properties of the BM reflect the continuously changing mass and stiffness of the membrane along the length of the cochlea (Figure 1A), which increases and decreases, respectively, along a basoapical gradient. This system, coupled with the voltage-dependent motility of a subset of cochlear sensory cells (outer hair cells) that amplifies BM motion and produces sharp inner ear filters (Brownell, 2017), can be thought of as the gateway to audition. The output of these filters determines precisely what a species can detect in the soundscape, a property that, to a large extent, determines if a selective advantage can be gained by enhanced sensitivity to a particular frequency band or by a mechanism that enhances frequency selectivity, for example.

In this regard, as shown in Figure 3, cochlear filter sharpness in the tiger far exceeds that observed in the much smaller domestic cat, Felis catus, as well as most other common and small laboratory animals (Bergevin et al., 2012). On the face of this finding, it may be tempting to conclude that the tiger inner ear filters have undergone specialization and are more frequency selective than in many other mammals.
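The "array of bandpass filters" picture of cochlear processing described above is straightforward to sketch numerically: give each filter a center frequency along a logarithmic place map and a bandwidth set by its tuning sharpness (Q). The toy filterbank below uses second-order Butterworth bandpass filters; the frequency range, channel count, and constant Q of 8 are generic illustrative choices, not measured tiger or domestic cat values.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100
n_channels = 24
# Log-spaced center frequencies, standing in for a basilar membrane place map.
centers = np.geomspace(100.0, 16000.0, n_channels)
q = 8.0                                   # constant tuning sharpness, illustrative only

def make_filterbank(centers, q, fs):
    """One second-order bandpass section per 'place' along the membrane."""
    bank = []
    for fc in centers:
        bw = fc / q
        low, high = fc - bw / 2, fc + bw / 2
        sos = butter(2, [low, high], btype="bandpass", fs=fs, output="sos")
        bank.append(sos)
    return bank

# Drive the bank with a click (impulse) and read out each channel's output energy.
bank = make_filterbank(centers, q, fs)
click = np.zeros(2048)
click[0] = 1.0
energies = [np.sum(sosfiltfilt(sos, click) ** 2) for sos in bank]

for fc, e in zip(centers[::6], energies[::6]):
    print(f"center {fc:7.1f} Hz   output energy {e:.3e}")
```

Sharper tuning in this picture simply means a larger Q, that is, a narrower passband around each center frequency; the scaling caveats discussed next determine whether a larger Q actually signals specialization.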

However, it can, and should, be argued that the trait differences shown in Figure 3 can be misleading when body size differences are not taken into consideration. When scaling factors are considered, the frequency-selectivity differences between the domestic cat and the tiger are not particularly surprising. In accordance with the principle of allometric growth in which the growth of one feature relative to another is proportional, it is notable that small animals generally exhibit proportionally shorter cochlear lengths and, in some cases, higher upper frequency hearing limits than larger animals. The best way to think about these differences is probably within the framework of inner ear frequency-mapping constants (i.e., the length of the BM devoted to a given frequency bandwidth; Figure 3, inset). Because the BM of tigers is longer than that in domestic cats and other typical laboratory animals (Ulehlová et al., 1984; Walsh et al., 2004) and because the high-frequency hearing limit is lower than that observed in common laboratory animals, the mapping constant of tigers is, theoretically, significantly larger than that in domestic cats, assuming other cochlear variables are comparable. The upshot of this consideration is that there is no evidence suggesting that inner ear filter outputs break the uniform mold of other less famous felids if, that is, we frame the question in terms of biological scaling.

Tiger findings considered here are also of interest when thinking about earlier claims that inner ear mechanical filters are unusually sharp in humans. As also seen in Figure 3, the sharpness of inner ear filters in the tiger closely approximates the cochlear sharpness measured in humans. This finding suggests that the predicted cochlear mapping constant of tigers is much like that observed in humans (von Békésy, 1960; Shera et al., 2010), a finding with considerable scientific importance when biological scaling questions arise. These results also suggest that other cochlear features contributing to tuning, such as longitudinal coupling via tectorial membrane traveling waves, are also most likely comparable when humans and tigers are considered (Sellon et al., 2019); again, body size matters.

On the surface, therefore, for all of their otherwise magnificence, tigers are not, it would seem, particularly noteworthy from an auditory performance/processing perspective. However, all of that changes when the analytical lens shifts to focus on the timing or latency of neural responses following stimulation in the frequency realm. In the real world, response timing can make the difference between life and death or between a successful hunt and an empty stomach, for example. When concentrating on response timing, the generalist impression is at least partially upended. Outcomes of studies examining response timing in the stimulus frequency space of tigers reveal nonmonotonic profiles (Figure 4A). Increasing from highly unanticipated short-response latencies to low-frequency stimulation, latencies reach a maximum in the midfrequency range and steadily decrease with increasing frequency such that the latency to the highest frequency studied is higher than the latency to the lowest frequency studied. This stands in stark contrast with findings from other mammals (Ruggero and Temchin, 2007) studied thus far; latencies generally decrease exponentially with frequency, as shown for the modern day workhorse laboratory animal, the mouse (Mus musculus; Figure 4B), as well as in a squirrel monkey (Saimiri sciureus; Figure 4C). The differences are striking, and they are confusing in light of contemporaneous models of inner ear mechanics.

Figure 4. The relationship between auditory response latencies and stimulus frequency taken from scalp recordings for tigers (A), a laboratory mouse (B), a squirrel monkey (C), and clouded leopards (D). SPL, sound pressure level. Recordings in A-C were made at the same location using the same setup.

Although space limits won't permit an in-depth consideration of a similar discovery made recently in the clouded leopard (Neofelis nebulosa), the implications of the finding (Figure 4D) may have real relevance in efforts to understand the evolution of the timing trait observed in tigers (Walsh et al., 2017). Not only is the resemblance of response-timing profiles, in our view, stunning, it takes on evolutionary relevance when the taxonomic proximity of the genus Neofelis to Panthera is considered. Based on the close taxonomic relationship between tigers and


clouded leopards and unpublished findings from our laboratory suggesting that other members of Panthera may exhibit the same trait, the unusual timing relationship considered here may have been passed to tigers and other large cats but may have washed out of the taxonomic flow in the other felid lineages in which the trait has not been observed.
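The conventional pattern that tigers appear to break, namely response latency falling as stimulus frequency rises, is reviewed in the following paragraphs in terms of traveling-wave delay plus filter build-up time. As a toy illustration of that standard picture only (every parameter below is an arbitrary choice, not a fit to any species), the two contributions can be sketched as follows.

```python
import numpy as np

def toy_latency_ms(freq_hz, bm_length_mm=20.0, f_base=20000.0, f_apex=100.0,
                   wave_speed_mm_per_ms=10.0, q=8.0):
    """Toy response latency: travel time to the characteristic place plus Q cycles of filter build-up."""
    # Logarithmic place map: distance from the base grows as frequency falls.
    octaves_from_base = np.log2(f_base / freq_hz)
    total_octaves = np.log2(f_base / f_apex)
    distance_mm = bm_length_mm * octaves_from_base / total_octaves
    travel_ms = distance_mm / wave_speed_mm_per_ms
    buildup_ms = 1000.0 * q / freq_hz
    return travel_ms + buildup_ms

for f in [250, 500, 1000, 2000, 4000, 8000, 16000]:
    print(f"{f:>6} Hz : {toy_latency_ms(f):5.2f} ms")
```

Under any such two-term account, latency decreases monotonically with frequency, which is why the short low-frequency latencies reported for tigers and clouded leopards are so difficult to reconcile with the textbook picture.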

Regardless, efforts to begin considering potential mechanisms that might underlie this unusual physiology require us to briefly review a few key elements in inner ear biomechanics for those who may be less familiar with the process. The prevailing textbook explanation of the standard mammalian latency-frequency relationship borrows from classical filter theory and derives from a notably large and consistent inner ear biomechanics literature. Sensory scientists have known from the time of von Békésy (1960) that vibrations on the BM propagate as traveling waves in a base to apex direction, consuming time as they travel toward inner ear mechanical filters that match stimulation frequencies and toward their so-called characteristic place along the BM. Therefore, travel time is a clear and relevant term in the latency/frequency equation, but it is not the only relevant timing factor. In addition to passive cochlear delays, timing is influenced by active cochlear filter-response times that are dependent on outer hair cell electromotility (Brownell, 2017), the mechanism that amplifies responses near the characteristic place and sharpens filter responses. The ultimate outcome of all of this from a response-timing perspective is that high-frequency propagation times are normally shorter than times associated with lower frequency responses (cf. Figure 4, B and C). Clearly, some members of the Panthera lineage, including the tiger and clouded leopard, settled on a different auditory timing strategy than other mammals.

The natural questions emerging from this finding are, first, where in the inner ear does this presumed adaptation originate, and second, what specific inner ear structure, if any, underwent adaptation? To attempt to answer this question, we turn our attention to morphological features in the organ of Corti in search for evidence of adaptation, and particular attention will be paid to the base of the cochlea, the region responsive to high- and mid-range frequencies. The decision to concentrate on basal regions was driven primarily by preliminary findings from a masking study conducted in our laboratory on a Bengal tiger (Panthera tigris tigris).

Figure 5. A: results of a study showing that the basal half of the tiger's cochlea contributes substantially to the latency of a response to a relatively low-frequency tone (2 kHz). Insets: extremes of the stimulus spectra shown schematically (pink, signal or "probe tone"; green, high-pass noise masker). Starting with a relatively broadband noise that masks responses from all cochlear regions basal to the probe tone (A, bottom left), the noise cutoff frequency was increased, decreasing the area of the cochlea being masked (A, bottom right), and resulting in faster response times. Red circles, latencies of the control (C) and recovery (R) responses to the probe tone recorded before and after the masker was presented, respectively. B: example of a tuning curve recorded from an auditory nerve fiber of a domestic cat, indicating the "tip" and "tail" resonances. Red arrow, direction of threshold shift of a hypersensitive tail. Inset: photograph of the authors preparing to record brain potentials from a tiger.

Although preliminary in large measure because of the limited access to these very large and endangered animals, findings from that effort suggest that a substantial signal from the basal half of the tiger cochlea contributes considerably to the fast response times to low frequencies in this big cat (Figure 5A). A hypothetical scenario based on a few key relevant findings is offered in A Hypothetical Answer to the Response-Timing Conundrum.

A Hypothetical Answer to the Response-Timing Conundrum
That a relatively narrow band of low-frequency, moderate-level sounds drive up discharge rates of individual auditory nerve fibers tuned to high frequencies is a well-known phenomenon in auditory neuroscience circles. This so-called

second neural resonance, often referred to as the "tail" of tuning curves, is easily differentiated from the sharply tuned primary resonance, as seen in an auditory nerve fiber tuning curve (Figure 5B). The mechanism responsible for the appearance of the tail has been linked to a second inner ear traveling wave, this one on the tectorial membrane (TM; Allen and Fahey, 1993), a gelatinous, acellular matrix of striated connective tissue that couples the mechanosensitive hair bundles associated with outer hair cells to motions of the BM and playing an important role in the enhancement of cochlear sensitivity (Figure 1B). The importance of this linkage in the context of this discussion is heightened by noting that many studies have shown significant effects on cochlear sensitivity and tuning as well as the expression of the second resonance in transgenic mice exhibiting altered TM composition or detachment of the structure from its mooring on the spiral limbus (Richardson et al., 2008). Moreover, tail hypersensitivity has been reported in animals under conditions of reduced mechanical coupling between the TM and hair bundles resulting from outer hair cell loss or stereocilia damage. This tight connection between the TM and the expression of the second resonance leads, it can be reasonably argued, to the proposition that specialization of the TM might alter its influence on the expression of the second low-frequency resonance.

We do know that the mammalian TM is a viscoelastic structure with electrokinetic, piezoelectric-like properties (Sellon et al., 2019). That is, deformation of the TM creates an electric response within the solid matrix of the structure. We also know that the biomechanical properties are influenced by the concentration of fixed charges associated with the structure; the greater the fixed charge, the greater the electrokinetic effect. This brings us to ask the provocative question: if evolutionary processes led to the exaggeration of fixed charge in the tiger's TM, could a powerful electrokinetic force enhance the sensitivity of the low-frequency resonance and trigger basal turn responses to low-level, low-frequency stimuli? Could such a system explain, at least partially, the strange case of response timing in tigers and their close relatives? Efforts to address this question are underway, but those efforts are complicated by the relative unavailability of subjects.

Conclusion
Over the course of the past 200 million years or so, mammalian hearing was shaped and refined by the forces of natural selection. The process culminated in the evolution of hearing organs with remarkable sensitivity, extraordinary dynamic range, and an operational range spanning a 10-octave frequency range in some mammalian species. Layer on top of this accounting of evolution the diverse expression of adaptation rarities witnessed in response to virtually every territory invaded by mammals as their populations radiated from one ecosystem to another and the inventiveness of natural selection clarifies. This article has concentrated on one well-understood and much-studied evolutionary wonder, the Namib golden mole, whose middle ear is a true marvel of nature, of evolution, and of natural selection. We also focused on a mysterious, poorly understood twist on our contemporaneous model of inner ear biomechanics, one trait that, potentially, differentiates the tiger and the clouded leopard, and possibly other big cats, from the rest of the mammalian class. One is the material of textbooks, the other remains shrouded in mystery, awaiting the careful scrutiny of science.

Acknowledgments
We express sincere appreciation to Lee Simmons, Douglas Armstrong, and Heather Robertson and the veterinary and animal care staff at Omaha's Henry Doorly Zoo and Aquarium (NE) and the Nashville Zoo at Grassmere (TN) for their cooperation and support of this research.

References
Allen, J. B., and Fahey, P. F. (1993). A second cochlear-frequency map that correlates distortion product and neural tuning measurements. The Journal of the Acoustical Society of America 94(2), 809-816.
Bárány, E. H. (1938). A contribution to the physiology of bone conduction. Acta Oto-Laryngologica Supplement 26, 1-233.
Bergevin, C., Walsh, E. J., McGee, J., and Shera, C. A. (2012). Probing cochlear tuning and tonotopy in the tiger using otoacoustic emissions. Journal of Comparative Physiology A 198(8), 617-624.
Bronner, G. N. (2020). Golden Moles. IUCN Afrotheria Specialist Group. Available at http://www.afrotheria.net/golden-moles/. Accessed January 15, 2020.
Brownell, W. E. (2017). What is electromotility? – The history of its discovery and its relevance to acoustics. Acoustics Today 13(1), 20-27. Available at https://bit.ly/39AZp0h.
Clack, J. A., Fay, R. R., and Popper, A. N. (Eds.) (2016). Evolution of the Vertebrate Ear: Evidence from the Fossil Record. Springer International Publishing, Cham, Switzerland.
Coleman, M. N., and Colbert, M. W. (2010). Correlations between auditory structures and hearing sensitivity in non-human primates. Journal of Morphology 271(5), 511-532.
Crumpton, N., Kardjilov, N., and Asher, R. J. (2015). Convergence vs. specialization in the ear region of moles (Mammalia). Journal of Morphology 276(8), 900-914.
Fielden, L. J., Perrin, M. R., and Hickman, G. C. (1990). Feeding ecology and foraging behaviour of the Namib Desert golden mole, Eremitalpa granti namibensis (Chrysochloridae). Journal of Zoology 220(3), 367-389.


Hemilä, S., Nummela, S., and Reuter, T. (1995). What middle ear parameters tell about impedance matching and high frequency hearing. Hearing Research 85(1-2), 31-44.
Huang, G. T., Rosowski, J. J., and Peake, W. T. (2000). Relating middle-ear acoustic performance to body size in the cat family: Measurements and models. Journal of Comparative Physiology A 186(5), 447-465.
Joris, P. X., Bergevin, C., Kalluri, R., Mc Laughlin, M., Michelet, P., van der Heijden, M., and Shera, C. A. (2011). Frequency selectivity in Old-World monkeys corroborates sharp cochlear tuning in humans. Proceedings of the National Academy of Sciences of the United States of America 108(42), 17516-17520.
Lewis, E. R., Narins, P. M., Jarvis, J. U. M., Bronner, G., and Mason, M. J. (2006). Preliminary evidence for the use of microseismic cues for navigation by the Namib golden mole. The Journal of the Acoustical Society of America 119(2), 1260-1268.
Manley, G. A. (2017). Comparative auditory neuroscience: Understanding the evolution and function of ears. Journal of the Association for Research in Otolaryngology 18(1), 1-24.
Mason, M. J. (2001). Middle ear structures in fossorial mammals: A comparison with non-fossorial species. Journal of Zoology 255(4), 467-486.
Mason, M. J. (2003). Bone conduction and seismic sensitivity in golden moles (Chrysochloridae). Journal of Zoology 260(4), 405-413.
Mason, M. J., and Narins, P. M. (2001). Seismic signal use by fossorial mammals. American Zoologist 41(5), 1171-1184.
Mason, M. J., Bennett, N. C., and Pickford, M. (2018). The middle and inner ears of the Palaeogene golden mole Namachloris: A comparison with extant species. Journal of Morphology 279(3), 375-395.
Narins, P. M., Lewis, E. R., Jarvis, J. J., and O'Riain, J. (1997). The use of seismic signals by fossorial southern African mammals: A neuroethological gold mine. Brain Research Bulletin 44(5), 641-646.
Nummela, S. (1995). Scaling of the mammalian middle ear. Hearing Research 85(1-2), 18-30.
Richardson, G. P., Lukashkin, A. N., and Russell, I. J. (2008). The tectorial membrane: One slice of a complex cochlear sandwich. Current Opinion in Otolaryngology and Head and Neck Surgery 16(5), 458-464.
Ruggero, M. A., and Temchin, A. N. (2007). Similarity of traveling-wave delays in the hearing organs of humans and other tetrapods. Journal of the Association for Research in Otolaryngology 8(2), 153-166.
Sellon, J. B., Ghaffari, R., and Freeman, D. M. (2019). The tectorial membrane: Mechanical properties and functions. Cold Spring Harbor Perspectives in Medicine 9(10), a033514.
Shera, C. A., Guinan, J. J., and Oxenham, A. J. (2010). Otoacoustic estimation of cochlear tuning: Validation in the chinchilla. Journal of the Association for Research in Otolaryngology 11(3), 343-365.
Ulehlová, L., Burda, H., and Voldrich, L. (1984). Involution of the auditory neuro-epithelium in a tiger (Panthera tigris) and a jaguar (Panthera onca). Journal of Comparative Pathology 94(1), 153-157.
von Békésy, G. (1960). Experiments in Hearing. McGraw-Hill, New York.
von Mayer, A., O'Brien, G., and Sarmiento, E. E. (1995). Functional and systematic implications of the ear in golden moles (Chrysochloridae). Journal of Zoology 236(3), 417-430.
Walsh, E. J., Armstrong, D. L., and McGee, J. (2011). Comparative cat studies: Are tigers auditory specialists? The Journal of the Acoustical Society of America 129(4), 2447.
Walsh, E. J., Armstrong, D. L., Robertson, H. E., and McGee, J. (2017). The low frequency auditory life of big cats. 4th International Symposium on Acoustic Communication by Animals, Omaha, NE, July 18, 2017, p. 125.
Walsh, E. J., Ketten, D. R., Arruda, J., Armstrong, D. L., Curro, T. G., Simmons, L. G., Wang, L. M., and McGee, J. (2004). Temporal bone anatomy in Panthera tigris. The Journal of the Acoustical Society of America 115, 2485-2486.
Willi, U. B., Bronner, G. N., and Narins, P. M. (2006a). Middle ear dynamics in response to seismic stimuli in the Cape golden mole (Chrysochloris asiatica). The Journal of Experimental Biology 209(2), 302-313.
Willi, U. B., Bronner, G. N., and Narins, P. M. (2006b). Ossicular differentiation of airborne and seismic stimuli in the Cape golden mole (Chrysochloris asiatica). Journal of Comparative Physiology 192(3), 267-277.

About the Authors

Edward J. Walsh, [email protected]
VA Loma Linda Healthcare System, 12201 Benton Street, Loma Linda, California 92357, USA

Edward J. Walsh is currently a senior research scientist at the VA Loma Linda Healthcare System (Loma Linda, CA) and served as the director of research (otolaryngology) at the Southern Illinois University School of Medicine (Springfield) and director of the Developmental Auditory Physiology Lab at Boys Town National Research Hospital (Omaha, NE) from 1990 until 2017. He holds an MA from the University of Illinois and a PhD from Creighton University (Omaha, NE). He, along with close collaborator JoAnn McGee, conducts studies in the areas of animal bioacoustics and auditory neurobiology. Current work in the area of animal bioacoustics is conducted with colleagues at the University of Minnesota (Minneapolis).

JoAnn McGee, [email protected]
VA Loma Linda Healthcare System, 12201 Benton Street, Loma Linda, California 92357, USA

JoAnn McGee is a senior research scientist at the VA Loma Linda Healthcare System (Loma Linda, CA). She received her PhD in physiology and pharmacology from Southern Illinois University (Springfield) and spent over 25 years at the Boys Town National Research Hospital (Omaha, NE), conducting studies focused primarily on mammalian auditory physiology in a biomedical context as well as comparative aspects along with her close collaborator Ed Walsh. Recent work has included studies of hearing and vocalization in eagles and other raptors conducted with colleagues at the University of Minnesota (Minneapolis).

Ask an Acoustician: Subha Maruvada

Subha Maruvada and Micheal L. Dent

Meet Subha Maruvada
In this issue, "Ask an Acoustician" features Subha Maruvada. Dr. Maruvada is the lead for the Therapeutic Ultrasound Program in the Division of Applied Mechanics, which is a part of the Office of Science and Engineering Laboratories of the US Food and Drug Administration (FDA). Dr. Maruvada is active in both scientific and standards organizations. She serves as working group convener, primary liaison, and technical expert on several working groups within the International Electrotechnical Commission (IEC) Technical Committee (TC) 87, Ultrasonics, and led the development of an international standard for the field specifications and methods of measurements for low-frequency ultrasound physiotherapy devices within the IEC/TC 87. Dr. Maruvada led the completion of an FDA guidance document establishing guidelines to report ultrasound physiotherapy device characterization in support of safety evaluation of medical devices. Until this May, she was the chair of the Biomedical Ultrasound Technical Committee of the Acoustical Society of America.

A Conversation with Subha Maruvada, in Her Words

Tell us about your work.
I work in the Office of Science and Engineering Laboratories, Center for Devices and Radiological Health, FDA, as an ultrasound engineer who primarily does research on developing acoustic metrology (scientific study of measurement) for therapeutic medical ultrasound devices. My focus is to develop preclinical testing methods for ultrasound devices that will help in our regulatory review of these devices. Developing standardized testing methods helps both the FDA and manufacturers to efficiently evaluate and ensure device safety and efficacy. For medical ultrasound devices, acoustic output characterization that includes understanding the acoustic field that will eventually be transmitted into a patient is critical. I primarily develop methods to measure acoustic power and the pressure field and use tissue-mimicking material to further develop test beds that can be used to measure temperature rise due to therapeutic ultrasound.

With a background in electrical and acoustical engineering and in acoustics, I have worked in the area of acoustics measurements and modeling for over 20 years. My current areas of research are preclinical characterization of high-intensity therapeutic ultrasound (HITU) devices, characterization of tissue-mimicking materials for HITU applications, HITU-induced bioeffects, and comparison of acoustics measurements to modeling results. Some of my specific studies are (1) to evaluate and improve existing techniques for characterizing the acoustic fields produced by these devices, including radiation force, piezoelectric hydrophone, thermal, and acousto-optic techniques; (2) to evaluate computational models for mapping the temperature fields associated with high-intensity focused ultrasound (HIFU); and (3) to perform experimental investigations comparing the modeled temperature patterns with lesions produced in tissue-mimicking materials and tissue samples. I am active in providing physics and engineering consults to the FDA regulatory staff for HITU, lithotripsy, physiotherapy, and diagnostic ultrasound devices.

Describe your career path.
Growing up, I was convinced I would become a jockey. I loved horses, and all I wanted was to be around them


as much as possible. When I reached 5 feet 1 inch in height, my dad informed me that I was now too tall to be a jockey. I then thought I would be a veterinarian because I loved animals and wanted to be around them as much as possible. I had the opportunity to spend the day with a veterinarian and decided that I would have pets but not be their doctor as the idea of putting an animal down at the end of its life (I was 17) was unappealing. Loving high-school math and physics and not loving biology led me to electrical engineering at the Pennsylvania State University (University Park). I am a Penn Stater through and through. I did my bachelor's and master's in electrical engineering (EE) and finally a PhD in acoustics there. I enjoyed EE but realized that I wanted a career doing something medically related without becoming a doctor. A friend and mentor directed me to the Graduate Program in Acoustics, and when I looked into the program and professors, I found that I could do my degree in acoustics and specialize in medical ultrasound. Some of the influential mentors who helped shape my path were my MS and PhD thesis advisors, Russell Philbrick and Kirk Shung, as well as many acoustics professors including Anthony Atchley and Doug Mast. I can't praise the Acoustics Program at Penn State enough. The professors all love acoustics and really motivate students to appreciate not just the science and math but the amazing breadth and importance of the field. The subject of my PhD dissertation was measuring the backscatter of biological tissues at high frequencies (>10 MHz). I then did a postdoctoral fellowship at Brigham & Women's Hospital/Harvard Medical School (Boston, MA) in the Radiology Department. My postdoctoral supervisor was Kullervo Hynynen, who was also an influential mentor who helped me forge my career path to the FDA. The subject of my postdoctoral work was investigating HIFU-induced bioeffects in biological media. I injected glass catfish (they are small fish with nearly transparent bodies and a visible vascular structure) with ultrasound microbubbles and monitored vessel destruction at HIFU output levels. After my postdoc, I joined the FDA and have been there since.

What is a typical day for you?
A typical day involves some administrative work (emails and such), either lab work or data processing, and some regulatory consult work. I usually start my day around 8 a.m. and earlier if I'm doing experiments. Over time, I've tried to become better about keeping my workday organized and have found that a day planner really helps. I try to document what I do during the day, and I find that it really helps me manage my time more efficiently.

How do you feel when experiments/projects do not work out the way you expected them to?
That depends on whether the outcome was positive or not. If positive, it's delightful because that means the work turned out better than expected. Otherwise, disappointed, but then I try to figure out a better way to tackle the problem. There's always another way to approach a problem. The solution can be either to improve the experimental setup (which is often the case) or to reevaluate my current methodology. Sometimes it is as simple as changing the material used to hold an absorbing target that will allow for more stability during experiments. It always helps to talk to colleagues about a problem. Many problems are identified when talking things through with another person. This is as true in personal life as it is in professional life. Asking for help sometimes is the answer.

Do you feel like you have solved the work-life balance problem? Was it always this way?
I have studied a style of Indian classical dance called Kuchipudi since I was a child. I continued dancing as an adult in a semiprofessional capacity, performing solo and with my teacher's dance troupe in the United States, India, and Canada. I now run a dance school and teach mostly on weekends but some weekday evenings as well. I do sometimes think that I work all the time, but honestly, I love both my day job and side hustle so much that I can't imagine my life any other way. However, even when you love what you do, you still need to take care of yourself. I think I do that better now by checking in with myself every day rather than just autopilot through all the things I have to do. I take breaks when I need and am better at maintaining a reasonable schedule of work time, dance time, and me time.

What makes you a good acoustician?
I hope I'm a good acoustician. I certainly strive to be a good acoustician and researcher. I love acoustics. I'm in awe of how much acoustics encompasses. I respect the field and the people who work in this area. I feel like I still have so much to learn and enjoy doing that. I love finding acoustics in things other than my work. I enjoy Indian classical dance, yoga, Sanskrit, and meditation, and there

74 Acoustics Today • Summer 2020 is acoustics to be found in each of these interests. I’ve nant. I feel like I wasted time by not finding ways to be been learning to read and write Sanskrit in preparation more motivated. Once I realized that it was a fear of for studying the language in depth. I would like to be failure or a lack of confidence that was hindering me, able to read the ancient Indian texts such as the Vedas I started to look for training opportunities to help me and original yoga texts. So much of yoga and medita- develop further my skills in leadership, writing, and com- tion uses sound healing as part of the practice. Ancient municating more effectively, to name a few. Indian texts acknowledge the importance of acoustics. For example, Sanskrit is a vibrational language in that What advice do you have for the way you pronounce a word is of utmost importance budding acousticians? in conveying its meaning. Also, the alphabet order comes Do what you love. It’s not a cliché. Take some time to from how the sounds are formed in the vocal tract, from find the right advisor and the right educational program. back to front. I’m fascinated by how ancient cultures There are so many great options for both mentors and understood acoustics and would love to study that more. acoustics programs. Come to meetings of the Acoustical Society of America (ASA) and talk to people. The ASA How do you handle rejection? is a wonderful organization for budding acousticians I try to figure out why the rejection was good for me: because many professionals there started coming to the what did I need to learn and how do I need to improve? ASA as students or young professionals themselves and It may take some time to get over the initial hurt, but truly understand what budding acousticians need and then it is truly a great learning experience. In the past, how to help them. I have interviewed poorly. I realized that just because I knew what was needed for the position, I may not have Have you ever experienced imposter communicated that properly or may not have completely syndrome? How did you deal with that if so? understood what was expected of me. What helped me I’ve experienced a lack of confidence that has made was taking classes on communication, finding my natural me feel like an imposter. There have been a few times talents, and understanding where I needed to improve during my academic and professional career when I professionally. Most workplaces have good professional have let underconfidence get in the way of my profes- development support. I think that continuous train- sional development. What I have done to overcome ing in professional development is necessary to enjoy those moments or times was to either work harder or your career. find techniques to help, from meditation to professional development classes. What are you proudest of in your career? What drove me to be an acoustician was the desire to do What do you want to accomplish within the something medically related without becoming a physi- next 10 years or before retirement? cian. I wasn’t crazy about biology but wanted to be in I want to inspire as many budding acousticians as I can. a healing profession. I loved Star Trek, especially when At the FDA, we try to create opportunities in ultrasound I was younger, and the idea that you could run a small research for student fellows and volunteers. I also want to device over a body and immediately know what was share my love of acoustics with my dance students. 
They wrong was magical and inspiring to me. Ultrasound is are mostly girls, and I love engaging with them about that magic. It’s amazing how many applications ultra- their future career aspirations. I want to continue to learn sound is used for, both diagnostically and therapeutically. and apply that knowledge in the field of medical ultra- A career in ultrasound within the Department of Health sound and public health. I would also like to continue and Human Services has allowed me to fulfill that desire studying acoustics in other areas of my life such as the and so much more. I’m proud that I work toward pro- study of Sanskrit and meditation. moting and protecting public health. Bibliography What is the biggest mistake you’ve ever made? Maruvada, S., Harris, G. R., Herman, B. A., and King, A. L. (2007). A procedure for acoustic power calibration of high-intensity focused I don’t think I’ve made serious mistakes, but I do regret ultrasound transducers using a radiation force technique. The Jour- times during my career when I have let myself feel stag- nal of Acoustical Society of America 121, 1434-1439.


Maruvada, S., Liu, Y., Pritchard, W. F., Herman, B. A., and Harris, G. R. (2011). Comparative study of temperature measurements in ex vivo swine muscle and a tissue-mimicking material during high intensity focused ultrasound exposures. Physics in Medicine and Biology 57, 1-21.
Maruvada, S., Liu, Y., Soneson, J. E., Herman, B. A., and Harris, G. R. (2015). Comparison between experimental and computational methods for the acoustic and thermal characterization of HITU fields. The Journal of the Acoustical Society of America 137, 1704-1713.

Contact Information
Subha Maruvada [email protected]
Acoustics Research Engineer
US Food and Drug Administration
10903 New Hampshire Avenue
Silver Spring, Maryland 20993, USA

Micheal L. Dent [email protected]
Department of Psychology
B76 Park Hall
University at Buffalo
State University of New York (SUNY)
Buffalo, New York 14260, USA


JASA-EL to Become an Independent, "Gold-Level," Open-Access Journal

Charles C. Church

The Good, the Not-So-Bad, and the Beautiful
As most members of the Acoustical Society of America (ASA) are aware, The Journal of the Acoustical Society of America Express Letters (JASA-EL) is open access and devoted to providing rapid (averaging 40 days from submission to first decision and 73 days to acceptance) dissemination of important new research results and technical discussion in all fields of acoustics. It has been an open-access publication from the beginning, originally as Acoustic Research Letters Online (ARLO). JASA-EL has been published as a special section of JASA since 2006. However, although papers appearing in JASA-EL were and are freely available online to everyone whether members of the Society or not, the copyright of most articles was held by the ASA.

The world of scientific publishing has changed considerably since those early days, and it continues to evolve in exciting and sometimes unpredictable ways. One of the changes that will affect ASA publications is the desire for a more open kind of open access. In this case, "more open" means, among other things, that the copyright licensing of papers should allow for ease of sharing and reuse, such as with Creative Commons CC BY licensing. As described on the Creative Commons website (see creativecommons.org/licenses), "This license lets others distribute, remix, adapt, and build upon your work, even commercially, as long as they credit you [i.e., the author(s)] for the original creation. This is the most accommodating of licenses offered. Recommended for maximum dissemination and use of licensed materials." JASA as a hybrid journal currently gives authors the option to pay for gold open access with Creative Commons CC BY licensing for regular articles, but this licensing for the most part hasn't been available to JASA-EL authors. As editors of a Society whose members are at once skilled professionals and prolific authors, we agree that ASA publications should also offer a full open-access journal among its suite of publications. Therefore, the officers and executive council of the Society have decided that JASA-EL will transition to an independent gold-level open-access journal on January 1, 2021.

The Good
Precisely what does "gold-level open-access" mean for authors publishing in JASA-EL? Simply put, after January 1, 2021, authors of papers appearing in JASA-EL will retain the copyright for their article and will also grant a Creative Commons copyright license denoted by "CC BY." As noted above, the CC BY license means that no permission need be granted to others wishing to reuse or adapt all or any part of a paper as long as proper attribution is given to the original article.

Several additional changes will also be made to reflect the fact that JASA-EL will now be independent of JASA. Perhaps most significantly, as a separate journal, articles in JASA-EL will no longer be published in the print or online editions of JASA. JASA-EL articles will be available only online at the JASA-EL website (see asa.scitation.org/journal/jel). This means JASA-EL will have more opportunities to promote authors' papers on the journal website, such as featuring papers in a special Editor's Pick section. Also, JASA-EL will have its own cover, and a selected article from the issue will be specially featured. Another new and exciting possibility we are considering is making video abstracts available to authors for JASA-EL articles. Our publisher, American Institute of Physics (AIP) Publishing, publishes video abstracts regularly for the American Association of Physics Teachers (see bit.ly/33p9PxG), so this option should be available relatively soon after the transition.

There will be additional, new (for JASA-EL) manuscript types. In the past, all submissions were labeled as "letters" for obvious reasons, and this will continue to be the default for most manuscripts. Perhaps the most noteworthy "new" type will be "erratum." Previously, errata for articles in JASA-EL were submitted to JASA. This was natural because JASA-EL was a part of JASA, but it also could be the source of confusion for both authors and editors. In the future, all errata for papers in JASA-EL will appear in JASA-EL. In addition, tutorials will also be possible, although these must necessarily be rather brief due to the limitations imposed on the number of published pages (although this will be increased from 6 to 7) and size (the single-column format) that will continue as in the past. A tutorial involving the description of new software or an explanation of the best methods for its use may be well suited to this category. Special issues will also be accommodated, and although each article must be short, such a collection may be used to help rapidly advance a new area of research.

As is currently the case, each paper accepted for JASA-EL will appear online as soon as the editorial and production process is complete, and these will be compiled into issues on a monthly basis using the classification system found in JASA. All current ASA technical committees will be included (obviously!) as will work in education and new technical specialty groups, e.g., computational acoustics.

The Not-So-Bad
It is obvious that changes of this magnitude may involve some level of effort and accommodation for all involved. The editors, staff of JASA-EL, and our publisher AIP Publishing are committed to making this transition as simple and transparent as possible for authors and readers alike.

With regard to submissions, we will be setting a cutoff date and estimate that this will be on August 1, 2020. This means that manuscripts submitted before the deadline would publish under the existing model, in the JASA-EL section of JASA, and those submitted on or after the deadline would publish in the new stand-alone journal. Of course, we will keep authors updated in case there are any deviations from this estimate.

In addition, the increase in the page limit and the use of the Creative Commons CC BY licensing will come with some increase in the required article processing charges. The current plan is for the Article Processing Charge (APC) to increase to 900 USD for ASA members and 1200 USD for non-members; however, for the first year, the APC will be discounted to 750 USD for ASA members and 900 USD for non-members. These promotional rates will apply to any manuscript published in 2021.

Another item worth noting is the two-year impact factor (IF). As a "new" journal, JASA-EL will not have its own, independent IF for two years. Of course, for authors who have published past articles in JASA-EL as a section of JASA (from 2006 through 2020), the IF for JASA, which is currently 1.819, should continue to be used.

Additional items involved in the change are the template PDF design of the papers, setting up Table of Contents alerts independent of those for JASA, and other administrative and technical details. These are being worked on this year and will be part of the rollout. Also, the publication will be rebranded as JASA Express Letters to help differentiate it as a new, stand-alone journal.

The Beautiful
The best thing about this transition is the exciting future. Both researchers and the general public want to see more open-access scientific journals, and the ASA has responded positively by creating the new JASA-EL. We intend to continue to be a leader in scientific publishing in meeting our readers' and authors' needs, and this is one way to do so.

We also would like your help with this transition. In particular, we need direct feedback from ASA members, journal readers, and journal authors. We hope you will offer us your suggestions, your advice, or whatever is on your mind. If you can't find us to chat in person, we encourage you to email us at [email protected]. We will be placing updated information about the transition on the website for JASA-EL (see asa.scitation.org/journal/jel). We all work best when we work together!

Contact Information
Charles C. Church [email protected]
National Center for Physical Acoustics (NCPA)
University of Mississippi
Oxford, Mississippi 38677, USA

A Perspective on Proceedings of Meetings on Acoustics

Kent L. Gee, Megan S. Ballard, and Helen Wall Murray

A relatively recent addition to the rich publishing history of the Acoustical Society of America (ASA) is Proceedings of Meetings on Acoustics (POMA). OK, not that recent because POMA has now been around for more than a decade. Introduced in 2007, the electronic, open-access journal exists to enable authors to publish and archive the research shared in talks and poster sessions at semiannual ASA meetings and its other cosponsored conferences and workshops. In this article, we briefly describe POMA as a publication along with its evolution, global growth, and future outlook. More importantly, we discuss how POMA can help accomplish the larger purposes of the ASA. In doing so, we hope readers will see how POMA can help them to more quickly, easily, and broadly disseminate their work.

What Is Proceedings of Meetings on Acoustics?
POMA is an editor-reviewed, online proceedings journal. Each ASA meeting or cosponsored conference comprises a POMA volume. As of the Chicago (IL) ASA meeting, there are 40 volumes, beginning with Volume 1 for the 2007 Salt Lake City (UT) meeting. To date, more than 3,500 articles have been published.

In principle, any paper that has been presented at an ASA meeting or at a cosponsored conference with an assigned volume may be published in POMA. Figure 1 shows submissions from 2016 to March 1, 2020 by area. Although some areas have more submissions (Physical Acoustics is bolstered by the 2018 International Symposium on Nonlinear Acoustics) and other areas have fewer, POMA receives a growing number of submissions across all areas of acoustics.

Figure 1. Proceedings of Meetings on Acoustics (POMA) submissions from 2016 to March 2020 according to area. Note that computational acoustics wasn't added as an official technical area until 2017.

As mentioned before, manuscripts are editor reviewed. What does this mean? It means that the assigned associate editor will review each manuscript for both correctness and clarity. Although the eventual acceptance rate of POMA is above 90%, manuscripts are frequently returned to authors for minor revisions. The review is not intended to rise to the level of a review for The Journal of the Acoustical Society of America (JASA) or JASA Express Letters (JASA-EL) but is intended to provide a standard of quality that balances the scope of a conference proceedings with the fact that each published article bears the ASA logo and copyright.

Evolution of Proceedings of Meetings on Acoustics
Since its inception, POMA has evolved. Early articles varied substantially in terms of format, style, and length. Today, the articles in the journal have a much more uniform look thanks to manuscript templates that have been well-received by authors (Figure 2). In fact, because of its simplified process and flexibility, POMA has proven to be an ideal platform for transitioning and moving forward publishing initiatives across other Society journals; it was the first of the ASA journals to transition to the new manuscript submission platform Editorial Manager, first to have manuscript templates in both Word and LaTeX, first to pursue a uniform branding (which then changed to match the new ASA brand), and first to significantly promote papers and authors on Twitter and Facebook. It has been a busy last several years!

Figure 2. Evolution of POMA article cover pages using examples authored by our incoming Editor Megan Ballard.

Promoting the Global Reach of the Acoustical Society of America
One of the purposes of POMA is to help grow the global reach of the Society (Figure 3). This is happening, and it's exciting. From January 2016 through February 2020, 62% of the nearly 1,200 papers submitted have come from outside the United States, from a total of 54 different countries. After the United States, the countries with the most submissions are (in order) China, Japan, Germany, United Kingdom, Russia, France, Italy, Argentina, and Spain. Almost as many papers originate from Asia as from the United States, and over 13% of submissions originate from Spanish-speaking countries. Our involvement with cosponsored meetings plays an important role. In 2020, in addition to the ASA meeting in Chicago, POMA will host the proceedings of the International Conference on Underwater Acoustics, which is being organized by the Institute of Acoustics of the United Kingdom and will be held virtually.

Figure 3. Global POMA submissions between 2016 and February 2020, with darker shading representing a greater number of submissions.

Promoting Student Involvement
Another way of promoting the reach of the ASA is to engage the next generation of acousticians. Students present a substantial fraction of papers at ASA meetings yet struggle to turn excellent talks and posters into publishable manuscripts. POMA offers a rapid publication opportunity for preliminary results or works in progress, with a reasonable bar for success. Additionally, POMA provides a valuable opportunity for students to gain initial experience in technical writing and receive expert and impartial feedback from the POMA editorial board. And, in case you haven't heard the message loud and clear before, a POMA article does not count as a prior publication as far as JASA or JASA-EL are concerned. This should help authors rapidly disseminate findings presented at ASA or other meetings and then reduce the effort in preparing a refined JASA-EL or longer JASA manuscript.

Social Media
One of the purposes of ASA is to disseminate and promote the science and applications of acoustics. Although the dissemination of new scientific knowledge occurs in POMA and other journals, social media allows us to better promote acoustics to the public. POMA joined the social media fray in November 2016 and now has over 1,000 followers on Twitter (@ASA_POMA) and over 750 followers on Facebook (@ASAPOMA; Figure 4). Hundreds of different articles have been highlighted through social media posts, from applications of machine learning to noise impacts on tree frogs to ultrasonic surface acoustic waves. Perhaps more importantly, however, engagement by POMA on social media allows us to participate in the global conversation in acoustics through retweets, comments, and shares.

Figure 4. POMA Facebook and Twitter pages.

Looking Forward
We have described our efforts to use POMA as a vehicle to expand the global reach of the ASA and believe these efforts to be essential to the long-term health of the Society. As we look to the future, POMA is in a strong position to promote and expand the growth and impact of the ASA. We encourage ASA members to become part of this initiative in four ways.

First, publish in POMA and other ASA journals. Too much of our work is never published because we feel that it is not yet "ready." Do not let your professional legacy be defined by speculative or inconclusive abstracts. Publish your primary, intermediate, and final results in POMA!

Second, follow the social media accounts of POMA and help disseminate and promote acoustics by liking, sharing, and retweeting papers you find of interest.

Third, promote the use of POMA as the archival venue for acoustics-related meetings and workshops. POMA is a true asset to the greater acoustics community by providing a publication service to cosponsored meetings of the ASA.

Fourth, encourage practitioners of acoustics to submit to POMA after they have presented a paper at an ASA or ASA-cosponsored meeting. Those in industry have valuable perspectives that they gain from solving important, real-world problems on a daily basis. Ensuring that these perspectives are heard and archived is vital to the purposes of the Society.

As we move forward, we are excited to announce a change in editor. Kent Gee served on the POMA editorial board since 2007 and as editor since 2011. It's time for new energy, perspectives, and ideas. As a frequent POMA author and engaged member of the editorial board, Megan Ballard has recently taken over as POMA editor. We look forward to the next phase of journal growth under her leadership.

Contact Information
Kent L. Gee [email protected]
Department of Physics and Astronomy
Brigham Young University
Provo, Utah 84602, USA

Megan S. Ballard [email protected]
Applied Research Laboratories
University of Texas at Austin
Austin, Texas 78758, USA

Helen Wall Murray [email protected]
Publications Office
Acoustical Society of America
P.O. Box 809
Mashpee, Massachusetts 02649, USA

Acoustical Society of America Books Committee

Mark F. Hamilton

The primary function of the Acoustical Society of America (ASA) Books Committee is to recommend new books on topics in acoustics for publication according to an agreement with a commercial publisher and bearing the imprint ASA Press.

The ASA Books Committee was formed in 2016 by the merger of the former Books+ Committee and the ASA Press Editorial Board. Books+ was created in 1983 and charged primarily with proposing reprints of out-of-print books on acoustical topics. The ASA Press Editorial Board, created in 2011, was charged with recommending new books for publication, including new editions or English translations of preexisting books, on topics in acoustics and bearing the imprint ASA Press.

The Evolution of Books+
The beginnings of Books+ trace back to a request made by Cyril Harris in 1979 to reprint the classic text Acoustical Designing in Architecture that he wrote with Vern Knudsen and that was published originally in 1950. In his request to the ASA Executive Council, Professor Harris wrote that "Since this book is no longer in print, I am eager that arrangements be made for reissuing it (preferably in paperback, so that it will be available to students at the lowest possible price) without significant delay. I will be pleased to turn over all future royalties earned by any reprint edition to the Acoustical Society." His initiative set in motion events that led to the formation of Books+ just a few years later.

Books+ was enormously successful over the years, populating the bookshelves of junior and senior researchers alike through its sales of classic books in acoustics at very low prices. This was accomplished by publishing books with expired copyrights, those whose authors transferred copyrights to the ASA, or those with permission otherwise granted for republication. The inventory of Books+ that appeared on the ASA website was a veritable Who's Who of iconic authors in acoustics, the first of these being, following Knudsen and Harris (1980), books by Morse (Vibration and Sound, 1981), Hunt (Electroacoustics, 1982), Stevens and Davis (Hearing, Its Psychology and Physiology, 1983), Beranek (Acoustics, 1986), Tolstoy and Clay (Ocean Acoustics, 1987), von Békésy (Experiments in Hearing, 1989), and Pierce (Acoustics, 1989).

As the Books+ program grew, titles included translation books; books based on poster sessions covering topics such as theaters, worship spaces, and music education facilities; and an original publication on concert halls. The "plus" in Books+ was added in 1995 to recognize alternative publication formats such as CDs, DVDs, videotapes, and other means of archiving content. For example, the ASA produced a CD titled Scientific Papers by Lord Rayleigh containing the corpus of Rayleigh's articles. A listing of available books and other items that came through Books+ can currently be accessed at bit.ly/32MdO79.

The business model followed by Books+ is currently under review because the ASA is exploring means of transitioning away from the practice of preprinting and warehousing its inventory of books. Instead, ASA is transitioning toward a print-on-demand model to reduce costs, expedite fulfillment of orders, and ensure perpetual in-print status of all ASA book titles. Apart from these production matters, as fewer expired copyrights of classic books in acoustics have become available in recent years, the ASA has focused more on encouraging the publication of new books in acoustics.

ASA Books
Today, ASA Books functions primarily as the ASA Press Editorial Board, and in this capacity, it manages the ASA Press imprint. ASA Books is partnered with the commercial publisher Springer. By the end of 2019, the ASA Press imprint appeared on 30 books published by Springer. A listing of these books may be found on Springer's ASA Press webpage (see bit.ly/2wou42b). Approximately half of these books appeared in the Springer Handbook of Auditory Research (SHAR) series. The next largest technical area carrying the ASA Press imprint prior to 2020 was physical acoustics, with half a dozen books, and at least one book carrying the ASA Press imprint was published in each of the following areas: animal bioacoustics, architectural acoustics, engineering acoustics, musical acoustics, signal processing in acoustics, and underwater acoustics. Several of these books appeared in the Springer Modern Acoustics and Signal Processing (MASP) series. The ASA is working to expand its endorsement of new books to all areas of acoustics by reaching out to ASA technical committees and making them aware of this program.

Publishing with ASA Press
How does one propose a new acoustics or acoustics-related book with the aim of having it carry the ASA Press imprint? The process is straightforward, and it starts with downloading the book proposal form on Springer's ASA Press webpage. The completed form is submitted to Springer's representative on the ASA Books Committee. If Springer determines that the book is something they would like to publish, then a two-level approval process is initiated by ASA Books, the first involving review of the book proposal, and if that is approved, then a second involving review of the completed manuscript.

Once Springer forwards a proposal to ASA Books for review, the decision on whether or not to approve the proposal is made at the following ASA meeting, provided there is adequate lead time to find a qualified reviewer with knowledge of the specific topic and who has time to complete the review prior to the meeting. Sufficient lead time in this regard is normally a few weeks before the upcoming ASA meeting. If there is insufficient time to have the proposal reviewed before the meeting, the decision on the proposal will likely be postponed to the next biannual meeting of the ASA.

Review of the proposal by ASA Books serves only to determine whether the committee might be interested in endorsing the book with the ASA Press imprint. Approval of the proposal signifies that the committee will subsequently review the completed manuscript. Only following review of the completed manuscript does the committee determine whether the book will be approved to carry the ASA Press imprint in addition to the Springer imprint.

Taken into consideration during review of the proposal are mainly the credentials of the author(s) (or editor[s] in the case of collected contributions), relevance of the subject matter to acoustics, and significance of the proposed technical content. If ASA Books approves the proposal, then the author(s) works with Springer to agree on a schedule for submission of the final manuscript. In addition, approval of the proposal may be accompanied by general recommendations to the author(s), ranging from the manner in which the proposed material will be presented to possible inclusion of new material. If ASA Books does not approve the proposal, then the involvement of ASA Books in the project is concluded, and Springer is then free to pursue publication independently of the ASA. That said, the vast majority of proposals requesting the ASA Press imprint to date have been approved.

Provided the proposal has been approved, then after the completed (or nearly completed) manuscript is received by Springer, it is forwarded to ASA Books. In coordination with Springer, ASA Books then identifies and solicits the services of suitable reviewers. In-depth review of the completed manuscript requires at least several months of lead time prior to the ASA meeting at which the committee renders its decision. In this review, the focus of the committee is on whether the quality of the finished product deserves to carry the ASA Press imprint. On occasion, the committee may make suggestions to the author(s) or editor(s) of the book to improve the contents and then invite resubmission. The committee also considers whether the author(s) or editor(s) are perceived to be advertising a particular product or company, or whether the book advocates policies that could expose the ASA to public or legal difficulties if it were to carry the ASA Press imprint.

After approval of the final manuscript by ASA Books, a recommendation that a book carry the ASA Press imprint is submitted to the ASA Executive Council for final approval. Subject to the final approval by Executive Council, the book enters into production at Springer, the ASA Press imprint appears on the front cover, and text about ASA Books and about the ASA in general is included in the front matter of the book. Although it may be noted that the procedure for obtaining ASA endorsement of a new book can add months to the production time, potential authors should appreciate that the ASA takes its endorsement of new books very seriously. This requires, primarily, finding qualified reviewers who are willing to carefully vet the entire final manuscript. Also, final decisions are made only at the biannual meetings of the ASA, not between meetings. The ASA believes that any additional time its reviews add to the publication schedule is warranted to protect the reputation and significance of the ASA Press imprint appearing on books the ASA chooses to endorse.

Conclusion
The stated mission of the ASA is "To generate, disseminate, and promote the knowledge and practical applications of acoustics," and the creation of ASA Books, which incorporates the ASA Press Editorial Board, is the most recent major initiative of the Society in fulfillment of this mission. The solicitation and endorsement by the ASA of new books on topics related to acoustics benefits both the authors and the scientific community by calling attention to outstanding books in the field. Anyone contemplating either writing or editing a new book on any aspect of acoustics is encouraged to contact the chair of ASA Books or Springer's representative on the committee if they are interested in having their future book endorsed by the ASA.

Contact Information
Mark F. Hamilton [email protected]
Walker Department of Mechanical Engineering
The University of Texas at Austin
Austin, Texas 78712-1063, USA

Book Announcements | ASA Press
ASA Press is a meritorious imprint of the Acoustical Society of America in collaboration with Springer International Publishing. All new books that are published with the ASA Press imprint will be announced in Acoustics Today. Individuals who have ideas for books should feel free to contact the ASA Publications Office, [email protected], to discuss their ideas.

The Neuroethology of Birdsong
Editors: Jon T. Sakata, Sarah C. Woolley, Richard R. Fay, Arthur N. Popper
Scaling the Levels of Birdsong Analysis – Jon T. Sakata, et al.
Neural Circuits Underlying Vocal Learning in Songbirds – Jon T. Sakata, et al.
New Insights into the Avian Song System and Neuronal Control of Learned Vocalizations – Karagh Murphy, et al.
The Song Circuit as a Model of Basal Ganglia Function – Arthur Leblois, et al.
Integrating Form and Function in the Songbird Auditory Forebrain – Sarah C. Woolley, et al.
Hormonal Regulation of Avian Auditory Processing – Luke Remage-Healey
The Neuroethology of Vocal Communication in Songbirds: Production and Perception of a Call Repertoire – Julie E. Elie, et al.
Linking Features of Genomic Function to Fundamental Features of Learned Vocal Communication – Sarah E. London
Vocal Performance in Songbirds: From Mechanisms to Evolution – Jeffrey Podos, et al.

Data, Dinners, and Diapers: Traveling with a Baby to a Scientific Conference

Laura Kloepper

In April 2019, when I was in my fourth year in a tenure-track position and managing several grant projects, I gave birth to my first son, Nathaniel. My institution, Saint Mary's College (Notre Dame, IN), offered generous parental leave, which allowed me to take a teaching release through the following January. During my pregnancy, this sounded like a dream. Almost nine months off from work, but I could still maintain some income? Sign me up!

The first month after Nathaniel was born, I fully embraced my new maternal role and didn't even check my email, but after that month, I slowly began to struggle with my new identity. I knew I was incredibly fortunate to even be in a position to stay at home with my son but taking a complete absence from work began to feel isolating. Furthermore, I felt like I was halting my research trajectory. With my partner's support, we decided to find part-time childcare in the summer and fall so I could work with students on some projects, write some grants, and wrap up some manuscripts.

As an early-career scientist, I also knew the importance of attending scientific conferences during my leave. Due to the close timing around my due date, I missed several important conferences in April and May, including the Acoustical Society of America (ASA), so I wasn't willing to miss my fall scientific conferences as well. But as a mother who was committed to both breastfeeding and conferencing and as one part of a dual-career couple, this meant I had only one option: strap on the baby in the carrier, pack the suitcase, hit the road (or sky), and hope for the best.

Figure 1. The author with her son at the Acoustical Society of America meeting in San Diego in December 2019.

My first conference was the North American Society for Bat Research that was held in Kalamazoo, MI, about 90 minutes away from my hometown of South Bend, IN. This was a good warm-up conference because it required a short car ride and just one night in a hotel. In advance of the conference, I reached out to some colleagues in leadership positions in the Society to ask if it would be appropriate to bring my baby. I got enthusiastic responses from all of them, including several offers to help assist with childcare.

As the conference week approached, I was a ball of nerves and completely exhausted. I had a teething six-month-old who was only sleeping for two hours at a time and was incredibly irritable. My partner had also been traveling the whole week, so I was handling the nighttime wake-ups. I told myself I just had to show up at the conference, deliver my talk, and have my one meeting and then I could leave. The conference ended up going about how I expected it to. It was a mix of meltdowns during meetings and coffee breaks and giggles and coos during poster sessions and business lunches. What did help was wearing my baby in a carrier so he snuggled against my chest the entire time. A colleague even convinced me to try to wear him during my talk, but we only made it to my hypothesis slide before he started to fuss and I passed him off to a colleague who had offered to stand by. But in the end, I considered the conference a success. I presented my research, had some important meetings, and got caught up-to-date on the latest research in my field.

My second conference was the ASA meeting in San Diego (CA). Even though this would be a five-day affair with a long plane ride, I was much more optimistic going into this conference. The ASA has always felt like home, and I was downright giddy to show off my son to all my ASA family (Figure 1). Furthermore, I was awarded the Women in Acoustics Dependent Care Subsidy (see acousticalsociety.org/grants-subsidies/), which allowed me to hire a babysitter through a licensed, bonded, and insured company the hotel concierge recommended, who would watch Nathaniel right in my hotel room when I had committee meetings and talks. I reached out to the company in advance of the meeting, and they helped match me with a sitter specific to my needs and who was available the whole week I was at the ASA. My sitter was a caring grandmother who treated Nathaniel as her own and sent me countless pictures and videos throughout the week. I felt completely comfortable with Nathaniel in her care. If I didn't have the financial support of the Dependent Care Subsidy and the help of the babysitter, I would not have made it to San Diego.

The ASA is always a busy conference for me, and this one was perhaps the busiest: one workshop, three talks, and two committee meetings. In advance of the conference, I again reached out to many colleagues. I knew trying to juggle all my ASA responsibilities would be challenging even with the help of the babysitter, and I wanted full disclosure that I would be bringing my child.

As expected, I got nothing but enthusiastic responses. Some colleagues met me at the airport to help with my mountain of baby luggage and transportation to the hotel. Others invited us to dinner and were more than eager to play "pass the baby" so I could eat a proper meal. I felt comfortable bringing Nathaniel with me to informal meetings or sessions when I didn't have childcare.

During the conference, the number of parents, including ASA members I had not previously met, who came up to me to share their own stories of bringing children to the ASA was so heartwarming. I particularly enjoyed my discussions on work-life harmony with members of the Women in Acoustics Committee (see the Sound Perspectives essay in this issue of Acoustics Today by Tracianne B. Neilsen and Alison Stimpert). Most importantly, Nathaniel had a wonderful time. With the childcare, he was able to stay entertained and well rested, which ensured that he was in a good mood when I took him to events. As I walked the beach with Nathaniel the morning of my last day in San Diego, I reflected on how my time at the ASA allowed me to see that I could be both a scientist and a mom, the perfect transition to ending my maternity leave.

Throughout both of my conference experiences, the one thing that helped me the most was support. Whether it was formal support, such as the ASA Dependent Care Subsidy, or informal support from colleagues, knowing that I had someone willing to assist helped me ease my concerns of bringing my child to the conference. I am certainly not the first person to bring a baby to a conference nor will I be the last. So, to all of you parents who came before, thank you for helping set an example and sharing your stories and advice with us new parents. And to the parents-to-be, don't be afraid to reach out and ask for help. You've got an army of support ready to help you achieve that work-life harmony.

Contact Information
Laura Kloepper [email protected]
Department of Biology
Saint Mary's College
Notre Dame, Indiana 46556, USA

Work-Parenting Harmony

Tracianne B. Neilsen and Alison K. Stimpert

Disclaimer: This article and the associated work-parenting stories were prepared in February 2020, before the added burden parents are currently experiencing due to the pandemic and the shut-down of schools and childcare facilities.

This article from the Women in Acoustics (WIA) Committee (see womeninacoustics.org) does two things. First, it highlights the professional careers of two recently honored pioneering female acousticians, Evgenia (Zhenia) Zabolotskaya and Ilene Busch-Vishniac. Second, using these two women as a starting point, we take a glimpse into their family life to explore the challenges of maintaining work-family "harmony." Although many speak of the elusive work-family balance, we intentionally choose instead to use the word harmony (McMillan et al. 2011; Berger 2018). Balance seems to imply that all the pieces are in one perfect arrangement and that the slightest nudge will send all the pieces flying. We propose that a more useful paradigm is work-family harmony because harmonies come in many beautiful varieties. Harmonies ebb and flow just as our family and work responsibilities change over time.

We hope that the family stories of our two honored women can serve as case studies of parents of their generation, and we are grateful for the insights they provide. We complement their parenting stories with thoughts and quotes from younger Acoustical Society of America (ASA) parents about their own experiences in parenting. Our goal is that these ideas will encourage everyone to do the best they can at creating work-family harmony, allowing themselves to accept support from others, remain flexible, and take it one step at a time as they strive to enjoy life's journey.

Evgenia (Zhenia) Andreevna Zabolotskaya1
Evgenia (Zhenia) Andreevna Zabolotskaya completed her bachelor's degree in physics at Moscow State University (MSU) where she met her husband, Yurii (Yura) Ilinskii (Figure 1). They married in 1963, and Zhenia then returned to MSU to complete her PhD. Zhenia's PhD included a model equation for nonlinear bubble dynamics as well as the first effective medium theory for nonlinear propagation of sound in bubbly liquid. She later created, in collaboration with Rem Khokhlov, the KZ (Khokhlov-Zabolotskaya) equation for nonlinear sound beams, which has had far-reaching implications for sonar as well as high-intensity focused ultrasound. Zhenia and Yura moved to the United States in 1991 to collaborate with Mark Hamilton at the University of Texas (UT) at Austin where they also began theoretical research on nonlinear Rayleigh waves and a research program in biomedical acoustics. Zhenia was the first woman to receive the ASA Silver Medal in Physical Acoustics in 2017.

Figure 1. Zhenia Zabolotskaya and Yura Ilinskii with one of their daughters.

Ilene Busch-Vishniac
Ilene Busch married Ethan Vishniac in 1976 (Figure 2). Both pursued graduate degrees and became tenure-track faculty at the UT at Austin. Ilene has made several career transitions that allow her to appreciate the full breadth of what ASA members experience as students, academics, administrators, and now working in industry. She was the first woman to receive the Silver Medal in Engineering Acoustics in 2001 "for development of novel electret microphones and of precision micro-electro-mechanical sensors and positioners."

1. Dr. Zabolotskaya passed away in early June. A full obituary will appear in the fall issue of Acoustics Today.


time, no process existed at UT at Austin for dealing with child-bearing women professors. To allow time off, she worked out a deal to split her next semester’s class with a colleague, where he took the first half of the semester and she took the second. This arrangement gave Ilene a few months of unofficial (and much appreciated) “maternity leave.” She and Ethan then hired a full-time caregiver five days/week so she could work. Ilene’s second daughter was born during the summer when she was not teaching class and so she took the time to be with her family, meeting occasionally with her research students. She also credits her extremely supportive husband for making this work.

Figure 2. Ethan Vishniac, Ilene Busch-Vishniac, and their dog. As their daughters got older, scheduling became easier because of daycare and then after-school programs. Ilene recalls, “The stress when kids are young is all the energy you Now, Ilene is grateful to work with talented people at a small must devote”; however, the most stressful period with her business to create a device that detects pneumonia and tuber- children was during the preteen and teen years. Ilene, who culosis early enough to save patients’ lives. Her team recently was a dean at the time, found it difficult to be expected to won the MIT Solve-Tiger Challenge prize in Bangladesh be out of the house 3-4 nights/week during dinner, knowing for the development of a respiratory device that can moni- that her daughters would have benefited from her presence. tor, record, upload, and classify lung sounds and that is now “I stuck [the deanship] out as long as I could,” she recalls. “I being used worldwide. didn’t sleep a lot and had a wonky schedule. I finally stepped out of the deanship to have more time.” Work-Parenting Harmony Zhenia and Ilene were honored by the WIA Committee Zhenia’s and Ilene’s stories represent those of many parents because of their contributions to acoustics, but instead at that time. To explore how things have changed, we polled of just including a regular biography in this column, we several ASA members about their paths as parent scientists/ asked them to share their experiences as parents to begin acousticians. With permission, we have shared their sto- our discussion about work-parenting harmony. ries on the WIA webpage. (We invite others to contribute their stories as well by emailing to [email protected]). From the Zhenia and Yura have two daughters, both of whom were stories we collected, several main themes emerge. We born once Zhenia’s degrees had been completed. Zhenia share some quotes that represent these themes but refer credits her very supportive husband as well as her mother readers to the full stories for context and examples of in raising her daughters (and her first daughter in raising different ways parents have tried to find harmony at dif- the second after Zhenia’s mother passed away). Zhenia ferent stages of their children’s lives. remembers, “In Russia at this time, women were often discriminated against when working in science, and Accept Support from Others having children certainly didn't help. Nonetheless, I As a parent, the demands on your time and energy can be persevered, continuing my work through the scrutiny I immense. Almost all of our contributors expressed this faced. A pressure that was certainly relevant through the sentiment and credited understanding employers and years of having children and working was the difficulty advisors for supporting them to find workable solutions of balancing both. This is never easy and can’t be made for their situation as well as life partners, friends, and easy. It is made doable only through the support of your family members who helped to shoulder the load. partner, employer, family, and friends.” Give Yourself Credit Ilene was granted tenure three days before her first child Working parents often feel that they are not able to was born and right after the end of the fall term. At the devote enough time and energy to either their families

88 Acoustics Today • Summer 2020 or their careers. “Just existing and engaging in science this is due to the forward-thinking policy at my lab coupled and engineering as a woman, as a parent, or as both is with my being in a nontenure track position. Now that my advocating for opportunity and diversity in STEM” (Lora kids are grown, I am really glad that I had the ability to make Van Uffelen, Physical Oceanography, University of Rhode the choices I did when they were young.” Island, Narragansett). Respect All Decisions Be Flexible and Do What Feels Right The needs of individual children and parents and overall Many early-career acousticians considering a family may family situations are different. Decisions to continue worry that in the fast-paced atmosphere of STEM disci- working full-time and decisions to modify your sched- plines, it is more challenging to step back for a period of ule should both be supported; all paths come with time to raise young children. “Be creative and open to difficulties. “It is difficult to scratch the surface on the professional opportunities that will allow you the flex- tactical reality of this topic (maternity leave, pumping ibility you want or need as a parent” (Lauren Ronsse, breast milk, childcare costs, spousal support, schedul- Architectural Acoustics, University of Nebraska-Lincoln). ing, and on and on) let alone the emotional depth. My husband and I have prayed and wept and cheered and Don’t Become too Attached to Plans grown and loved because of our decision to both work “The only thing certain during parenting is that things after our children were born” (Jazmin Myres). “Despite are going to change,” remarks Aran Mooney (Animal initially planning to return to work full-time after three Bioacoustics, Woods Hole Oceanographic Institution, months, I have continued on four years later in a soft- Woods Hole, MA), father of two. Regarding her deci- money position part-time because I wanted more time sion not to return to her tenure-track job after the birth with my son. This is despite the massive challenge it of her son, Lauren Ronsse recalls, “This was not the plan has been to maintain soft-money cash flow, complete and was the hardest decision I had made in my profes- administrative tasks, mentor students, and do research sional career to date. I knew I wanted more time with in the hours that I manage to piecemeal together when my son than a full-time job would allow, but this was my son is in some form of childcare or sleeping” (Alison the first time that I had not prioritized my career, and I Stimpert, Animal Bioacoustics, Moss Landing Marine did not know what the next steps would be for me pro- Laboratories, Moss Landing, CA). fessionally. I just knew that I had to do it for myself and for my family.” Jazmin Myres (Signal Processing, Naval Encourage Family Friendly Practices Air Systems Command) had a different experience, “I Everyone can encourage family friendly practices that rec- grew up planning to be a full-time mother but was always ognize the benefits of investing in our children. “Maternity, incredibly driven and passionate about school and work. as well as paternity leave, should be made more accessi- This resulted in serious cognitive dissonance when preg- ble and not be seen as a hindrance towards success in a nant with my first child as I realized that my planned scientific career” (Zhenia Zabolotskaya). 
In addition to path (staying home) did not align with my personality institutional policies, Ilene Busch-Vishniac recommends or natural talents. With my supportive husband, we sac- that “some concrete steps for improving things for working rificed financially to each take unpaid time off when our parents are to set a more reasonable expectation than the children were born, and I negotiated fiercely to secure a work 24/7 model. Make it more expected that it is OK to part-time schedule for the first few years. I love my family, preserve family time, and train managers and bosses to be and I love having a career.” more careful about when they send emails to respect time that employees don’t have to be working.” Try to Focus on the Present Although uncertainty is unpleasant, the time children are Many ASA members over the years have worked hard to young is short. Marcia Isakson (Underwater Acoustics, formalize family friendly policies at different institutions, Applied Research Laboratories, UT at Austin), who worked but more can be done. “Don’t be afraid to be that squeaky part-time until her children were teenagers, reflects, “Look- wheel at your institution or university. Speak up for your ing back, I don’t think that my decision to work part-time needs, and when you become that mentor (no longer the affected my career as much as I thought it might. I believe mentee), it’s probably even more important to speak up

Summer 2020 • Acoustics Today 89 WORK-PARENTING HARMONY

for family and work-life balance needs. The system won’t Contact Information change if we don’t make it happen” (Aran Mooney).

Stay Involved in the ASA
During the changes that accompany parenthood, support also comes from having a professional connection. Traci Neilsen (Underwater Acoustics, Brigham Young University, Provo, UT) kept attending ASA meetings while stepping back from her career to raise her children. “Members encouraged me and included me as part of the Society even when I was only able to keep one toe in. I am grateful to have a professional home that has accepted me and allowed me to contribute as I have pursued my nontraditional career path.” Support from the ASA Women in Acoustics Committee “was very helpful and affirming for [Lauren Ronsse], and it gave [her] some comfort to see others who had followed similar paths.” Recently, Laura Kloepper (Animal Bioacoustics, Saint Mary’s College, Notre Dame, IN) brought her baby to an ASA meeting and shares her experience in this issue of Acoustics Today.

We conclude with two thoughts from our honored women and remind readers to read the full stories of our quoted parent acousticians online to appreciate the breadth of paths when harmonizing career pursuits with raising children.

“The beauty of life is the many roles we reflect upon ourselves. We begin as the children of our own parents, ...and at some point, if the universe chooses to bless us with our own family and children, we take on the role of a partner and parent. Many of us who have gotten the pleasure of a career and raising a family understand the struggle that can come with balancing both” (Zhenia Zabolotskaya).

“Work always expands to fill all available time and comes without the same biological boundaries as family. At any point, you can postpone a promotion or a move, but there is a limit to the time you can postpone having a family... Regardless of what’s going on, the most important thing is to love your kids and that they know and feel that you love them” (Ilene Busch-Vishniac).

Contact Information

Tracianne B. Neilsen [email protected]
Department of Physics and Astronomy
Brigham Young University
N251 ESC
Provo, Utah 84602, USA

Alison K. Stimpert [email protected]
Moss Landing Marine Laboratories
8272 Moss Landing Road
Moss Landing, California 95039, USA

Spooked!

Lenny Rudow

Editor’s Note: Acoustical Society of America members are unlikely to know the author of the article because he is not an acoustician. The author is, however, not only world-renowned in his profession as a fisherman and boating expert, but he has been a writer and editor in the marine field for over two decades. Lenny has authored seven books on fishing and boating, and he is the angler in chief at Rudow’s FishTalk magazine (see fishtalkmag.com). Lenny is also electronics and fishing editor for BoatUS magazine, and is a contributing editor to several other publications. His writing has resulted in multiple Boating Writers International writing contest and Outdoor Writers Association of America “Excellence in Craft” awards.

To an angler, the most important sounds are the ones heard by the fish.

Can they hear me now? That’s a question that goes through the mind of every angler worth his or her salt. Because sound can kill, kill the fishing action, that is.

At its core, fishing is an activity that basically boils down to convincing a wild creature to attempt to eat your bait or lure. And to do so, that wild creature has to be comfortable with what’s going on around it. A scared fish won’t bite, and loud sounds often scare fish. In some settings, this behavior is observable. While poling through the shallow flats of Florida, for example, fly rod in hand, guides are careful to instruct their clients to remain quiet. If they aren’t, they’ll watch the fish suddenly swim off, startled and spooked.

A slamming hatch. A loud voice. A tacklebox dragged across the deck of a fiberglass boat. All of these sounds are known fish spookers. But there are also a lot of unknowns.

Some anglers believe that boat engines can scare the fish. But is this the result of the engine itself or of the propeller that engine spins? All forms of propeller-driven propulsion create prop noise under water, which sounds like a whirring or a whine to our human ears. The pitch and volume of the noise a propeller creates is directly related to its speed. Even “silent” electric motors still have to spin a prop. So, could a potent electric trolling motor running at maximum throttle actually be creating as much noise as a gasoline motor run at minimum throttle? But is this a sound fish can hear in the first place?

Some anglers believe that music played at a reasonable volume can help improve the bite. They may postulate that the fish are attracted by the steady beat or that it has nothing to do with fish “liking” the music and everything to do with the mellow, nonthreatening noises masking abrupt sounds, sort of like humming a tune or whistling when walking through bear country to make sure you don’t startle a wild beast bigger than you are. Or maybe music merely adds to background noise, the acoustic scene that fish quickly become acclimated to. But is this a sound fish can hear in the first place?

Some anglers believe that sound-making lures with rattles inside or a concave face that gurgles and pops can attract the fish. Some others believe that (particularly in calm conditions) these noisemakers can do more harm than good, scaring the fish with their loud sounds. Or maybe those lure manufacturers (who tell us how amazingly effective these offerings can be) base their marketing more on myth than reality? And of course, once more we have to ask, are these sounds fish can hear in the first place?
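As a rough, illustrative aside that is not part of the article: the observation that a propeller’s pitch rises with its speed can be put in numbers with the standard blade-passing-frequency relation, in which the tonal part of propeller noise sits near the shaft rotation rate multiplied by the number of blades. A minimal Python sketch, using hypothetical RPM values and blade counts chosen only for illustration:

# Illustrative sketch, not from the article: the tonal whine of a propeller is
# often dominated by its blade-passing frequency, which rises with shaft speed.

def blade_passing_frequency_hz(shaft_rpm: float, num_blades: int) -> float:
    # Blade-passing frequency = shaft rotation rate (rev/s) times number of blades.
    return (shaft_rpm / 60.0) * num_blades

# Hypothetical numbers for comparison only: a three-blade trolling-motor prop
# spinning fast versus a three-blade outboard prop turning slowly at idle.
print(blade_passing_frequency_hz(1500, 3))  # 75.0 Hz tone
print(blade_passing_frequency_hz(600, 3))   # 30.0 Hz tone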


Truth be told, we anglers simply don’t know what fish can and can’t hear. We’re not acoustic scientists (well, at least not most of us!), and we’re certainly not experts on fish beyond knowing how to trick them into biting. We spend untold sums of money on boats, rods and reels, lures, and other gear; we spend day after day at sea; we dedicate countless hours to figuring out how to fool those fish and take home dinner. But aside from a few direct observations, we don’t really know — know — just exactly which of these sounds that we’re making are heard by the fish, much less which are scaring them off. And for that matter, we also don’t know what sounds we’re making that might actually attract fish.

We don’t know if the fish are hearing these sounds at all or if they are (or are not) feeling them via the lateral line. We don’t know if the sounds we’re making are skipping off the surface of the water or if they’re being projected down through the hulls of our boats. We don’t know if an open aluminum hull does a great job of projecting sound through the water, whereas a fiberglass hull does not or vice versa. In fact, there’s a whole lot about sounds, fish, and what fish hear that we anglers don’t know. We may think we know, but...

Many years ago while doing research for an article in Boating magazine, my team and I used a hydrophone to record underwater sounds while fishing. We were curious to find out what the fish that we were trying to catch might be hearing so we tried towing the hydrophone just under the surface among the lures in a trolling spread. Then we tried dropping it straight down at various depths under an anchored boat where our baits were being fished. One of the most fascinating discoveries we made was that we didn’t hear what we had expected. While trolling, we clearly heard human voices as people talked, but we could barely make out the rumble of the diesel engines, a dominant sound above the waterline. While at anchor, we heard the tell-tale (a stream of cooling water expelled from an outboard motor) splashing on the surface, yet we didn’t hear the motor itself. And while trying to move the boat with as much stealth as possible, the sound of a push pole (usually thought of as the quietest form of propulsion) crunching sand and shells was notably louder than the propeller noise created by an electric motor moving the boat at the same speed.

But you know what we didn’t find out? What those fish heard. We discovered what we could hear beneath the surface with our human ears, but as for the fish...well, they weren’t telling.

I know darn well that virtually everyone reading this right now knows one heck of a lot more about acoustics than I ever will. And there may even be some anglers among you. But as a 30-year veteran of the fishing field, I’d assert that I probably know the mind of a fisherman as well as anyone on the planet. And one thing I know for sure is that we anglers are quite concerned with catching more and bigger (read older and wiser) fish, which means understanding how to avoid spooking them. That includes how we trigger senses other than hearing, such as sight and even taste. But in this layman’s world, I want to know everything I can about how sounds travel into and through the deep blue and just which of them those fish hear. And it’s a fair bet that this same curiosity runs through other hobbies and professions. Wouldn’t someone training their dog want to know what pitch whistle was best for giving commands from afar? Doesn’t a ship’s captain want to understand how his or her hearing is affected by fog? Shouldn’t a musician know how to best set up in a venue to take advantage of the acoustics they’re presented with?

I can’t answer any of these questions. But maybe you can.

Contact Information
Lenny Rudow [email protected]
Edgewater, Maryland

Obituary
Whitlow W. L. Au, 1941–2020

Whitlow W. L. Au, a pioneer of understanding the echolocation of dolphins and whales, former president of the Acoustical Society of America (ASA), first chairperson of the ASA Animal Bioacoustics Technical Committee, long-serving associate editor for The Journal of the Acoustical Society of America, first Silver Medal recipient in Animal Bioacoustics, and Gold Medal honoree, passed away at age 79 on February 12, 2020, at his home in Kailua, Hawai’i.

Whit received a BS in electrical engineering from the University of Hawai’i and an MS in electrical engineering from Washington State University (WSU; Pullman) before studying radar as a US Air Force scientist. He then returned to WSU to complete his PhD in electrical science. Whit turned his research to biosonar when he joined the US Navy Naval Undersea Center. His first publication on echolocation showed that dolphin signals can be very broadband, with center frequencies of 120 kHz, much higher than the 35- to 60-kHz signals reported at the time (Au et al., 1974). This new perspective was challenged by reviewers initially but set the stage for the enormous influence he had on the field, highlighted by his foundational book The Sonar of Dolphins.

In 1993, the Hawai’i branch of the Navy Center was closed and Whit jointly began the Marine Mammal Research Program at the Hawai’i Institute of Marine Biology on Coconut Island with the University of Hawai’i at Manoa. Moving to the university as a researcher/professor allowed his academic career to flourish dramatically. Whit made contributions in many areas of bioacoustics, expanding our understanding of the broadband characteristics of snapping shrimp sounds, the songs of humpback whales, the signaling and foraging behavior of wild odontocetes, the echo characteristics of potential dolphin prey, hearing in marine mammals, and the distribution of marine life around the Hawaiian Islands. Whit’s reach was bolstered by the cross-disciplinary collaborations he fostered with his colleagues. To these collaborations, he brought a fearless approach to problems, seeing gaps in available technologies as an opportunity to do something novel. Using his technical expertise and perspective as an engineer, Whit tackled questions he was passionate about, had fun doing it, and was kind and generous to others he interacted with, no matter their position. Whit’s collaborations with his graduate students, most studying zoology or oceanography, were among his most rewarding, many continuing throughout their careers. He loved and respected his graduate students, and they, in turn, returned his affection and his propensity for hard work and success.

Whit wrote 3 books, edited 3 others, and published 226 papers in the peer-reviewed literature. It is hard to imagine where the science of the echolocation of dolphins and whales would be today without the contributions of Whitlow W. L. Au. In addition to his many roles in the ASA, not least of which was as chief greeter of newcomers, Whit organized many bioacoustics workshops and sessions, sat on the Ocean Studies Board of the National Research Council of the National Academies, and served on a Danish Research Foundation Blue Ribbon panel.

He is survived by his wife of 53 years, Dorothy; their 4 children, Wesley, Lani, Wagner (Jim), and Nani; and 7 grandchildren.

Selected Publications by Whitlow W. L. Au
Au, W. W. L. (1993). The Sonar of Dolphins. Springer-Verlag, New York.
Au, W. W. L. (2015). History of dolphin biosonar research. Acoustics Today 11(4), 10-17.
Au, W. W. L., and Benoit-Bird, K. (2003). Automatic gain control in the echolocation system of dolphins. Nature 423, 861-863.
Au, W. W. L., Floyd, R. W., Penner, R. H., and Murchison, A. E. (1974). Measurement of echolocation signals of the Atlantic bottlenose dolphin, Tursiops truncatus Montagu, in open waters. The Journal of the Acoustical Society of America 56, 1280-1290.
Au, W. W. L., Pack, A. A., Lammers, M. O., Herman, L., Deakos, M., and Andrews, K. (2006). Acoustic properties of humpback whale songs. The Journal of the Acoustical Society of America 120, 1103-1110.
Nachtigall, P., Supin, A. Y., Pawloski, J., and Au, W. W. L. (2004). Temporary threshold shifts after noise exposure in the bottlenose dolphin (Tursiops truncatus) measured using evoked auditory potentials. Marine Mammal Science 20, 673-687.

Written by:
Paul Nachtigall [email protected]
Hawai’i Institute of Marine Biology, University of Hawai’i, Kaneohe, HI
Kelly Benoit-Bird [email protected]
Monterey Bay Aquarium Research Institute, Moss Landing, CA
Marc Lammers [email protected]
Hawaiian Islands Humpback Whale National Marine Sanctuary, Kihei, HI

Obituary
Jan F. Lindberg, 1941–2020

Jan F. Lindberg, a Fellow of the Acoustical Society of America, passed away on January 6, 2020, at the age of 78. He was born in Elyria, Ohio, on June 22, 1941. Jan was a physics graduate of Wittenberg University (Springfield, OH). He is survived by Susan (Fletcher) Lindberg, his wife of 49 years.

Jan began his career as a civil servant with the US Air Force in Ohio but soon left to join the US Navy Underwater Sound Laboratory (New London, CT) and continued with its successor the Naval Undersea Warfare Center Division Newport (Newport, RI), from which he was eventually detailed to serve as a program officer at the Office of Naval Research (ONR; Arlington, VA).

Jan’s first work in transduction was developing flexible-disk transducers under the mentorship of Ralph Woollett. He next began a lifelong study of flextensional transducers that use a shaped shell that offers a mechanical advantage over the normally very limited piezoelectric expansion of the ceramic stack, producing a very low resonant frequency in a small package. Much of this effort was with Class IV elliptical flextensionals, but he also initiated construction and testing of a large, 40,000-lb array of Class V circular ring-shell projectors as well as constructing a prototype Class VII dogbone-shaped flextensional. In support of this research, Jan guided the creation of several specialized software design tools. He also developed a widely accepted Figure-of-Merit (a numerical representation of the effectiveness of a device) that allowed quick comparisons of these ground-breaking designs.

Jan was particularly interested in the development of new transduction materials, which promised to greatly increase the power handling capability of sonar transducers, and he developed numerous innovative transducers utilizing these new materials. In his role as an ONR Program Officer, Jan cosponsored an annual conference on transduction materials and their applications, which led to many advances in the state-of-the-art. These interests led to one of his most creative ideas, the development of a magnetostrictive, Terfenol-D transducer driven by a superconducting coil, which, with zero resistance, was free from electrical losses. This was the first demonstration of a superconducting device producing useful power while remaining in a superconducting state.

At the ONR, Jan continued pushing the state-of-the-art by mentoring others in their research. He headed up and became the technical point of contact on a large number of innovative transducer projects. Some projects included work on new materials such as a ferromagnetic-shaped memory alloy, whereas others led to new transducers such as the hybrid transducer, the leveraged cylindrical acoustic transducer, the continuous-wave transducer, the cantilever-mode transducer and, most recently, the end-driven bender transducer. Jan also supported and encouraged the use of transducer systems, such as in cloaking undersea vessels.

Jan was a gifted scholar and a most enjoyable person to work with or for. He almost always had a smile, which put one at ease even when dealing with difficult problems. He had a constant stream of positive suggestions for solving those difficult problems, and his legacy continues.

Selected Publications by Jan F. Lindberg
Berliner, M. J., and Lindberg, J. F. (Eds.). (1996). Acoustic Particle Velocity Sensors: Design, Performance, and Applications. Acoustic Velocity Sensor Focused Workshop, American Institute of Physics Conference Proceedings, no. 368, Mystic, CT, September 12–13, 1995, American Institute of Physics Press, Woodbury, NY.
Butler, S. C., and Lindberg, J. F. (1997). A 3 kHz class VII flextensional transducer. The Journal of the Acoustical Society of America 101, 3164.
Ewart, L. M., Lindberg, J. F., Powers, J. M., and Butler, S. C. (2000). Materials and designs for improved high-power naval SONAR projects. US Navy Journal of Underwater Acoustics 50, 4-12.
Joshi, C. H., Voccio, J. P., Lindberg, J. F., and Clark, A. E. (1993). Development of magnetostrictive sonar transducer using high-temperature superconducting coils. The Journal of the Acoustical Society of America 94, 1788.
Lindberg, J. F. (1985). Parametric dual mode transducer. The Journal of the Acoustical Society of America 77, 774.

Written by:
Roger T. Richards [email protected]
169 Payer Lane, Mystic, CT
John L. Butler [email protected]
Image Acoustics, Inc., 40 Willard Street, Quincy, MA

