VOLUME 17 2011 ISSN 1526-6096

JOURNAL OF EDUCATIONAL AUDIOLOGY

IN THIS ISSUE: Official Journal of the Educational Audiology Association

INVITED ARTICLE v Development of Local Child Norms for the Dichotic Digits Test

ARTICLES

v Behavioral Verification of Programmable FM Advantage Settings

v An Exploration of Non-Quiet Listening at School

v Development of Selective Auditory Attention: Effects of the Meaning of Competing Speech and Daily Exposure to Soundfield Amplification

v Effects of Noise Attenuation Devices on Screening Distortion Product Otoacoustic Emissions in Different Levels of Background Noise

REPORTS

v Development of a Video for Pure Tone Hearing Screening Training in Schools

v Wii-habilitation to Enhance Auditory Processing Skills

CASE STUDIES

v Auditory Afferent and Efferent Assessment Post Functional Hemispherectomy: A Case Study

v The Importance of Appropriate Adjustments to Classroom Amplification: A One School, One Classroom Case Study


v EDITOR v Cynthia McCormick Richburg, Ph.D. Indiana University of Pennsylvania, Indiana, Pennsylvania

v ASSOCIATE EDITORS v

Andrew B. John, Ph.D. University of Oklahoma Health Sciences Center, Oklahoma City, Oklahoma

Erin C. Schafer, Ph.D. University of North Texas, Denton, Texas

Claudia Updike, Ph.D. Ball State University, Muncie, Indiana

Karen Anderson, Ph.D. Supporting Success for Children with Hearing Loss, Minneapolis, Minnesota

v REVIEWERS v

Rebecca S. Atcherson, Au.D., Arkansas Children's Hospital, Little Rock, Arkansas
Samuel R. Atcherson, Ph.D., University of Arkansas at Little Rock/University of Arkansas for Medical Sciences, Little Rock, Arkansas
James Blair, Ph.D., Utah State University, Logan, Utah
Lindsay Bondurant, Ph.D., Illinois State University, Normal, Illinois
Jackie Davie, Ph.D., Nova Southeastern University, Fort Lauderdale, Florida
Rosanne R. Douville, Au.D., Miami Valley Regional Center, Riverside, Ohio
Hala Elsisy, Ph.D., Purdue University, West Lafayette, Indiana
Sarah Florence, Au.D., University of North Texas, Denton, Texas
Debra Liebrich, Au.D., Indiana School for the Deaf, Indianapolis, Indiana
Jeffrey Martin, Ph.D., University of Texas at Dallas, Richardson, Texas
Tena McNamara, Au.D., Eastern Illinois University, Charleston, Illinois
Robert Moore, Ph.D., University of South Alabama, Mobile, Alabama
Susan Naeve-Velguth, Ph.D., Central Michigan University, Mount Pleasant, Michigan
Stella Ng, Ph.D., University of Western Ontario, London, Ontario, Canada
Jennifer Phelan, Au.D., Nationwide Children's Hospital, Columbus, Ohio
Mike Sharp, Au.D., Illinois State University, Normal, Illinois
Suzanne Sklaney, Ph.D., Maryland Veterans Administration, Baltimore, Maryland
Joseph Smaldino, Ph.D., Illinois State University, Normal, Illinois
Donna Fisher Smiley, Ph.D., Arkansas Children's Hospital, Little Rock, Arkansas
Carrie Spangler, Au.D., Stark County Educational Services, Canton, Ohio
Victoria Walkup-Pierce, Au.D., Orange County Public Schools, Orlando, Florida
Julia Barclay Webb, Au.D., Arkansas Children's Hospital, Little Rock, Arkansas
Gail Whitelaw, Ph.D., Ohio State University, Columbus, Ohio

What is EAA? The Educational Audiology Association (EAA) is an international professional organization for audiologists who specialize in the management of hearing and hearing impairment within the educational environment. EAA was established in 1984 to advocate for educational audiologists and the students they serve. The American Academy of Audiology (AAA) and the American Speech-Language-Hearing Association (ASHA) recognize EAA as a related professional organization (RPO), which facilitates direct communication and provides a forum for EAA issues between EAA, AAA, ASHA, and other RPOs. Through the efforts of the EAA executive board and individual members, the association responds to issues and concerns which shape our profession.

EAA Mission Statement: The Educational Audiology Association is an international organization of audiologists and related professionals who deliver a full spectrum of hearing services to all children, particularly those in educational settings.

The mission of the Educational Audiology Association is to act as the primary resource and as an active advocate for its members through its publications and products, continuing educational activities, networking opportunities, and other professional endeavors.

EAA Membership EAA is open to audiologists, speech-language pathologists, teachers of the hearing impaired, and professionals from related fields who have an active interest in the mission of EAA. Student membership is available to those in school for audiology, speech-language pathology, and other related fields. EAA also offers Corporate and Affiliate Memberships, which have unique marketing advantages for those who supply products and services to educational audiologists.

EAA Scholarships and Grants EAA offers doctoral scholarships, as well as two grants for EAA members. In a continuing effort to support educational audiologists, EAA funds small grants in areas related to audiology services in educational settings. The awards are available to practitioners and students who are members of EAA for both research and non-research based projects. All EAA members are encouraged to submit proposals for these awards.

EAA Meetings and Events EAA holds a biennial Summer Conference (in odd-numbered years), next scheduled for June 2013 in Scottsdale, Arizona. These meetings provide opportunities for exchanging clinical and professional information with colleagues. The continuing education credits offered are an excellent way to keep updated in a rapidly changing field. These meetings offer individual members an opportunity to hear industry-known keynote speakers, keep up with new technology and information, share best practices, see the latest technology from the exhibitors, network, and more.

EAA Publications Through its publications, EAA communicates the activities and ideas of educational audiologists across the nation. • Educational Audiology Review (EAR) Newsletter: This quarterly publication includes state-of-the-art clinical information and articles on current professional issues and concerns, legislative information, industry news and more (approximately 14-28 pages). • EAA E-News: Updates are provided on current happenings in the field, as well as updates from the President and executive board, committees, new products, events, and more. • Journal of Educational Audiology (JEA): This annual publication contains articles relating to the practice of educational audiology. • Subscriptions to EAA Publications are available!

EAA Products Nowhere else can you find proven instruments, tests, DVDs, forms, accessories, manuals, books, and even games created and used by educational audiologists. EAA’s product line has grown as members share their expertise and develop proven materials invaluable to the profession. Exclusives available only through EAA include the Therapy for APD: Simple, Effective Procedures by Dr. Jack Katz and the Knowledge is Power (KIP) Manual.

3030 West 81st Avenue, Westminster, CO 80031 Phone: 800-460-7EAA (7322) l Fax: 303-458-0002 www.edaud.org l [email protected] v CONTENTS v

v INVITED ARTICLE v Development of Local Child Norms for the Dichotic Digits Test ...... 6 - 10 Gail Gegg Rosenberg (deceased)

v ARTICLES v

Behavioral Verification of Programmable FM Advantage Settings ...... 11 - 22 Lindsay Bondurant and Linda Thibodeau

An Exploration of Non-Quiet Listening at School ...... 23 - 35 Jeffery Crukley, Susan Scollie, and Vijay Parsa

Development of Selective Auditory Attention: Effects of the Meaning of Competing Speech and Daily Exposure to Soundfield Amplification ...... 36 - 52 Leigh Ann Reel and Candace Bourland Hicks

Effects of Noise Attenuation Devices on Screening Distortion Product Otoacoustic Emissions in Different Levels of Background Noise ...... 53 - 61 Kelsey Nielsen, Brian M. Kreisman, Stephen Pallett, and Nicole V. Kreisman

v REPORTS v

Development of a Video for Pure Tone Hearing Screening Training in Schools ...... 62 - 76 Diana C. Emanuel, Merrill Alterman, Michelle Betner, and Rebecca Book

Wii-habilitation to Enhance Auditory Processing Skills ...... 77 - 80 Addie J. Dowell, Brittany Milligan, D. Bradley Davis, and Annette Hurley

v CASE STUDIES v

Auditory Afferent and Efferent Assessment Post Functional Hemispherectomy: A Case Study ...... 81 - 93 Annette Hurley, Robert G. Turner, Eric Arriaga, and Amanda Troyer

The Importance of Appropriate Adjustments to Classroom Amplification: A One School, One Classroom Case Study ...... 94 - 98 James C. Blair and Jeffery B. Larsen

The Journal of Educational Audiology is published annually by the Educational Audiology Association and is distributed to members of the Association. Publication in this Journal, including advertisements, does not constitute endorsement by the Association. Requests for single reprints of articles should be obtained from the Educational Audiology Association at http://www.edaud.org. Other requests should be directed to the Editor. For permission to photocopy articles from this Journal for academic course packs, commercial firms should request authorization from the Educational Audiology Association. Consent is extended to individual instructors to copy single articles for themselves for classroom purposes without permission or fee. Copies of the Journal may be purchased by contacting the office of the Educational Audiology Association, 800-460-7EAA(7322) or www.edaud.org. Journal of Educational Audiology vol. 17, 2011

Development of Local Child Norms for the Dichotic Digits Test

Gail Gegg Rosenberg, M.S. (deceased)

Submitted by Kris English, PhD, to honor an esteemed colleague in educational audiology, Gail Gegg Rosenberg. This article was first published in the 1998 issue of the Florida Journal of Communication Disorders, 18, 4-10. The current leadership of that professional organization has given permission to reprint it in the 2011 edition of the Journal of Educational Audiology.

Appreciation is extended to Robert Keith, PhD for commending the quality of this research and calling for a broader readership; to Karen Anderson, PhD for communicating with Florida audiology leadership; to Barbara Beatty, Assistant Executive Director of the Florida Speech-Language-Hearing Association for helping us discuss this proposal with FLASHA leadership; and to Neferkara Aaron, graduate student at the University of Akron for formatting a photocopied version into an electronic file.

The Dichotic Digits Test (DDT; Musiek, 1983a) has been shown to be useful as an audiological screening procedure for detecting central auditory processing (CAP) disorders in children or as a component of the CAP battery. Published child norms for the DDT in mean (M) percent correct are not accompanied by standard deviations (SD) or other commonly used statistical information essential to determining the extent to which a child's performance may be different from that of age-alike peers. The purpose of this study was to obtain norms on the DDT at 12-month intervals for a pediatric population (N=200) ranging in age from 5.0 to 12.11 years. Over a 16-month period, 200 students referred for routine audiological evaluation were administered the DDT as part of the audiological test battery. Subjects exhibited normal peripheral hearing sensitivity and did not participate in any special services programs in the Sarasota County Florida School District. Data were analyzed to provide mean scores, standard deviations, and other statistical data. These norms allow audiologists evaluating young children to describe results consistent with current reporting procedures for the interdisciplinary evaluation of CAP disorders in children.

Introduction

A model explaining how the central auditory nervous system (CANS) manages dichotically presented stimuli was developed by Kimura (1961). Dichotic listening refers to the presentation of different auditory stimuli to both ears simultaneously. Broadbent (1954) is reputed to be the first to have utilized a dichotic digits paradigm to test both ears simultaneously. Kimura (1961) is credited with formally introducing dichotic speech tests into the arena of central auditory processing evaluation. This was accomplished by her adaptation of Broadbent's method for assessing hemispheric asymmetry and unilateral lesion effects. Kimura's method employed a triad of digits presented dichotically to evaluate central auditory functional processing ability (Bellis, 1996). The use of digits as stimuli continues to be a favored dichotic testing paradigm in the 1990s.

Musiek (1983a) is noted for authoring the revision of Kimura's protocol, the Dichotic Digits Test, in which two digits are presented to each ear simultaneously. The Task Force on Central Auditory Processing Consensus Development acknowledged competing dichotic digits as one of the dichotic stimuli that can be used effectively to measure central auditory processing disorders (ASHA, 1996).

The Dichotic Digits Test (DDT; Musiek, 1983a) has been acclaimed as a highly sensitive instrument that may be used as a screening tool for central auditory processing (CAP) disorders or a component of the CAP test battery (Bellis, 1996; Mueller & Bright, 1994; Musiek, 1983a; Musiek, Gollegly, Kibbe, & Verkest-Lenz, 1991; Musiek, Gollegly, Lamb, & Lamb, 1990; Stecker, 1992). Table 1 summarizes the features of the Dichotic Digits Test that render it an appealing dichotic test to be used in the audiological assessment of central auditory processing abilities in children.

Table 1. Summary of features of the Dichotic Digits Test (DDT) that render it appealing as a dichotic listening test to be used in the audiological assessment of central auditory processing (CAP) abilities in children.

The Dichotic Digits Test (DDT) is:
• highly sensitive as a screening test of central auditory processing (CAP) disorders
• easily administered in under five minutes
• able to be quickly scored
• a lightly linguistically loaded, closed-response set (digits 1-10, except 7)
• comprised of digits, which for some persons are easier to respond to than open-set word stimuli
• easily understood by children, with directions that rarely need to be repeated
• relatively resistant to at least a mild conductive or sensorineural hearing loss
• not rigidly controlled with regard to the subject's response time
• adaptable to other response modes such as pointing or writing

Use of dichotic digits in the evaluation of central auditory processing abilities in children is recognized as a preferred practice with regard to maturational effects. Research has shown that the greater the linguistic load of the auditory stimuli, the more

prominent the maturational effects are likely to be (Bellis, 1996). Maturation effects are noted on most tests of auditory perception, but perhaps more so on dichotic tests (Musiek, Gollegly, Lamb, & Lamb, 1990). Very simply, the use of digits in a dichotic testing paradigm in the evaluation of children will very often show a different profile than that obtained for dichotic sentences. This occurs because the sentence stimuli are more linguistically loaded and place greater stress on the young child's ability to transfer information interhemispherically. Bellis (1996) rates the DDT as being somewhat near the middle of the continuum of least-to-most-difficult dichotic tests because the digit stimuli are very closely aligned and only lightly linguistically loaded.

Published norms for children for the DDT are available only in mean percent correct scores. There are no accompanying standard deviations (SD) or other normative data. In the test's instruction guide, the author strongly recommends establishing local norms. There is a diagnostic and educational need to have available sufficient normative data for children that may be used to describe the degree of deviation from the norm. Test norms should be reported in terms that match the local regulations and current practices (Hutchison, 1996).

Thus, there are three very important reasons for establishing local normative data for the DDT so that results may be reported in a manner consistent with other individual assessment instruments used with children. First, when the DDT is used as part of a battery, it is difficult to describe an individual's performance with respect to findings of other audiological CAP tests; local norms would allow a child's performance on the DDT to be compared with mean scores (and deviations from the mean) on tests such as the Screening Test for Auditory Processing Disorders (SCAN; Keith, 1986) and the Staggered Spondaic Word Test (SSW; Katz, 1986). Secondly, it becomes problematic when audiological CAP results are considered in comparison with psycho-educational and psycho-linguistic test results in the interdisciplinary assessment of students; evaluation instruments in these domains typically provide statistical information such as SDs. Finally, establishment of local norms is strongly recommended by the test's author. The purpose of this study was to develop local norms for children ages 5.0 to 12.11 for the Dichotic Digits Test to increase the flexibility of this test with respect to current practices in reporting test performance.

Materials and Methods

Subjects

Subjects were 200 participants whose ages ranged from 5.0 to 12.11 years. Age and maturation have been shown to influence CAP test results. Previous investigators have suggested obtaining normative data with children at each year under approximately 12 to 14 years of age (Musiek, 1990; Musiek & Lamb, 1994). The DDT's author currently provides only mean percent scores for the 7.0 – 11.11 age range. This study included younger students (5.0 – 6.11) as well as those in the 12.0 – 12.11 age range, due to the maturational concern and because many children below the age of 7 years are referred for an audiological CAP evaluation.

Mean ages for the cohort of 25 students in each 12-month interval are shown in Table 2. The subject group included 96 males (48%) and 104 females (52%). Racial background of the students was: 149 Caucasian (74.5%), 39 African-American (19.5%), 7 Hispanic (3.5%), and 5 Asian (2.5%). Subjects in this study resided in a middle-sized Florida school district that has many resources available to its students both within the schools and the community. In addition, the school district's rate of free and reduced lunch is below the average for the State of Florida. The geographic area is primarily urban and suburban in composition.

Table 2. Age in years and gender of subjects (N = 200).

Age Range     Mean Age   Males   Females
5.0-5.11      5.5        12      13
6.0-6.11      6.5        12      13
7.0-7.11      7.4        12      13
8.0-8.11      8.4        14      11
9.0-9.11      9.5        11      14
10.0-10.11    10.5       14      11
11.0-11.11    11.4       11      14
12.0-12.11    12.4       10      15
Total Group   8.9        96      104

Subjects were students referred for routine audiological evaluation. None of the students participated in any special services programs in the Sarasota County Florida School District at the time of the evaluation. All students had normal peripheral hearing with pure tone thresholds at 15 dB HL or better for octave frequencies 250 – 8000 Hz. Speech recognition scores were 92 percent or better for each ear using age-appropriate open-set recorded word discrimination lists (e.g., PBKs, CID W-22s, NU-6) presented at a 30 dB sensation level (SL; re: spondee threshold). Immittance audiometry results were within normal limits for all subjects.

Dichotic Digits Test Procedure

The Dichotic Digits Test (Musiek, 1983a, 1983b) is composed of naturally spoken digits from one to 10, exclusive of the number seven. (However, the number seven is included in the single digit

subtest.) The DDT is composed of 20 digit pairs for a total of 40 test items per ear. The two-digit stimuli on one channel of the tape have been aligned with the two digits on the other channel to produce a dichotic listening task.

Example:  Left Ear   5, 4
          Right Ear  2, 1

The cassette tape was played on an Optimus dual-channel tape player with channel one directed to the left ear and channel two to the right ear, as per test protocol. The signal was fed through the speech circuitry of the Grason Stadler (GSI 10) two-channel clinical audiometer and passed on to TDH-39 earphones at 50 dB SL (re: spondee threshold). Testing was conducted in a double-walled I.A.C. sound-treated room.

Each participant was given identical instructions adapted from those offered by Musiek (1983b). Adaptations were made in the narrative to accommodate the language level of the youngest participants. Following are the test instructions:

You will hear two numbers in each of your ears. Listen carefully in both ears and repeat all of the numbers you hear. Do not worry about repeating the numbers in any special order. If you are not sure about the numbers you heard, please guess. Now let's practice.

Oral practice was provided prior to beginning presentation of the three tape-recorded practice items. Participants were provided ample time to respond, and in some cases this did require a pause, which is allowable according to published test protocol. There is no required inter-stimulus interval, and Musiek (1983a) indicates that the original norms were established with the subjects being given as much time as they wished prior to responding. Subjects' responses were recorded on a DDT worksheet routinely used in this clinical setting when administering this specific dichotic test (see Appendix). Subject responses were scored according to protocol provided by the test author, with the total number of correct responses being multiplied by 2.5 to derive a percentage score rounded to the nearest digit.

Results and Discussion

For the purpose of this study, statistics were applied to only the two-digit subtest of the DDT. Table 3 portrays a summary of local norms computed for the left and right ears that include mean scores in percent, standard deviation (SD), range of scores, and the standard error of measurement (SEM) for the eight age range groups. As is typical for this test and some other dichotic tests, the left ear scores are lower than scores for the right ear for each age group. There was less variability in the 12.0 – 12.11 year old group, as is shown by the lowest SDs. For both ears, with the exception of three instances, the SDs declined with an increase in age; this pattern would appear to be characteristic of the maturation of the central auditory nervous system (CANS).

Table 3. Mean scores, standard deviations (SD), range of scores, and standard error of measurement (SEM) for subjects (CA: 5.0-12.11 years) for left and right ears on the Dichotic Digits Test.

                        Left Ear                        Right Ear
Age          Mean    SD    Range    SEM      Mean    SD    Range    SEM
5.0-5.11     52.5%   7.1   42-62    1.4      69.9%   9.8   55-90    2.0
6.0-6.11     58.7%   9.9   48-75    2.0      71.5%   9.9   58-90    2.0
7.0-7.11     61.3%   8.3   50-80    1.7      73.9%   8.5   62-95    1.7
8.0-8.11     70.6%   8.2   60-88    1.6      79.9%   8.2   70-95    1.6
9.0-9.11     75.0%   7.0   62-90    1.4      81.7%   8.0   70-98    1.6
10.0-10.11   78.4%   6.8   68-92    1.4      86.3%   6.8   75-98    1.4
11.0-11.11   88.1%   7.1   75-98    1.4      92.6%   4.3   80-100   0.9
12.0-12.11   90.7%   5.7   75-98    1.1      96.2%   4.1   82-100   0.8

In Table 4, differences between Musiek's norms and normative data derived from the current Sarasota study are shown. In all instances, the current study shows higher mean scores. The larger differences at the 7.0 – 8.11 age levels may suggest a different maturational rate than that which characterized the subjects in the original normative study. For the left ear, differences between the two sets of norms are less than .50 for the 9.0 – 11.11 age levels.

Table 4. Difference between published norms (mean percent) for the Dichotic Digits Test (Musiek, 1983a) and the current study.

                        Left Ear                             Right Ear
Age          Musiek    Sarasota   Difference     Musiek    Sarasota   Difference
5.0-5.11               52.5%                               69.9%
6.0-6.11               58.7%                               71.5%
7.0-7.11     55.0%     61.3%      6.3            70.0%     73.9%      3.9
8.0-8.11     65.0%     70.6%      5.6            75.0%     79.9%      4.9
9.0-9.11     75.0%     75.0%      0.1            80.0%     81.7%      1.7
10.0-10.11   78.0%     78.4%      0.4            85.0%     86.3%      1.3
11.0-11.11   88.0%     88.1%      0.1            90.0%     92.6%      2.6
12.0-12.11             90.7%                               96.2%

A summary of deviation from the mean is provided in Table 5. This information is also included on the DDT worksheet used in this study. Because deviation from the mean is an important element in the interpretation of frequently used psycho-educational and psycho-linguistic tests, this information should be useful to audiologists in quickly determining a child's performance in several arenas. It will allow the audiologist to compare results on

the DDT with other tests in the audiological CAP battery, as well as the child's performance on psycho-educational and psycho-linguistic tests which may be considered in determining the child's eligibility for special education services.

An additional analysis of data that may be of interest is the summary of mean ear difference scores for each of the eight age groups (see Table 6). These data allow the audiologist to determine if the ear difference is normal or abnormal. For instance, if the right ear score is very strong and the left score is weak, but within the normal range, the ear difference may actually exceed the mean. In such cases, this finding could provide important information to support or refute an ear difference identified on another test in the CAP battery.

The normative data obtained in this study will facilitate greater confidence in reporting audiological CAP results to parents and other professionals. It will now be possible to compare a student's performance on the DDT with his/her results on other CAP tests that provide SDs (e.g., Screening Test for Auditory Processing Disorders [SCAN; Keith, 1986], Staggered Spondaic Word Test [SSW; Katz, 1986], Willeford battery of CAP tests [Willeford, 1977]). Further analysis of data collected during this study may be conducted to assist in providing a more comprehensive profile of students' CAP abilities and functional levels. More specifically, application of statistical analysis in the following areas may provide useful and critical information for audiologists involved in CAP evaluation of children:

• Ear effect scores (double errors for the same ear)
• Order effect scores (first response ear)
• Error pattern analysis (e.g., position of error or deleted digits, response pattern, reversals)

DDT norms obtained for this subject sample should be very appropriate for students residing in similar geographic areas. However, audiologists should use caution when applying these norms in other regions of the state or nation if the demographics are notably different from the sample used in this investigation.

Table 6. Summary of mean ear difference scores and standard deviations (SD) for eight age groups of children.

Age Range    Mean Ear Difference Score    SD
5.0-5.11     7.4                          9.4
6.0-6.11     12.8                         4.1
7.0-7.11     12.6                         5.9
8.0-8.11     9.4                          3.1
9.0-9.11     6.6                          4.5
10.0-10.11   7.9                          4.6
11.0-11.11   4.5                          6.9
12.0-12.11   5.6                          3.9

Summary

Normative data were obtained for a pediatric population (5.0 – 12.11 years, N=200) using the Dichotic Digits Test. In all instances, the mean percent score was slightly (.50) to moderately (6.28) higher than norms available from the test's developer. The availability of more sufficient normative data will allow audiologists to interpret a child's DDT results more confidently. Further statistical analysis of data compiled during this study would expand the flexibility of this dichotic test with regard to ease of interpretation. The DDT is a valuable component in the audiologist's CAP test battery, and the availability of a more complete array of normative data should enhance its use with the pediatric population.
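The ear-difference check described above can be sketched in a few lines of code. This is an illustrative example only: the function name and the two-standard-deviation criterion are assumptions for the sketch, not part of the published DDT protocol; the means and SDs come from Table 6.

```python
# Illustrative sketch (not part of the published DDT protocol):
# flag a right-minus-left ear difference that exceeds the age group's
# mean ear difference by more than two standard deviations (Table 6).

# Table 6 values: age range -> (mean ear difference, SD)
EAR_DIFF_NORMS = {
    "5.0-5.11": (7.4, 9.4),
    "6.0-6.11": (12.8, 4.1),
    "7.0-7.11": (12.6, 5.9),
    "8.0-8.11": (9.4, 3.1),
    "9.0-9.11": (6.6, 4.5),
    "10.0-10.11": (7.9, 4.6),
    "11.0-11.11": (4.5, 6.9),
    "12.0-12.11": (5.6, 3.9),
}

def ear_difference_is_unusual(age_range, right_pct, left_pct):
    """Return True if right-minus-left exceeds mean + 2 SD for the age group."""
    mean_diff, sd = EAR_DIFF_NORMS[age_range]
    return (right_pct - left_pct) > mean_diff + 2 * sd

# Example: a child in the 9.0-9.11 group with right ear 88% and left ear 62%.
# The difference of 26 exceeds 6.6 + 2(4.5) = 15.6, so it would be flagged.
print(ear_difference_is_unusual("9.0-9.11", 88, 62))  # True
```

A clinician might use a stricter or looser criterion; the point of the sketch is only that the Table 6 norms make the comparison mechanical.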

Table 5. Summary of deviation from the mean for children (CA: 5.0-12.11 years) for left and right ears on the Dichotic Digits Test.

                        Left Ear                          Right Ear
Age          Mean     SD    -1 SD    -2 SD      Mean     SD    -1 SD    -2 SD
5.0-5.11     52.5%    7.1   45.4%    38.3%      69.9%    9.8   60.1%    50.3%
6.0-6.11     58.7%    9.9   48.8%    38.9%      71.5%    9.9   61.6%    51.7%
7.0-7.11     61.3%    8.3   53.0%    44.7%      73.9%    8.5   65.4%    56.9%
8.0-8.11     70.6%    8.2   62.4%    54.2%      79.9%    8.2   71.7%    63.5%
9.0-9.11     75.0%    7.0   68.0%    61.0%      81.7%    8.0   73.7%    65.7%
10.0-10.11   78.4%    6.8   71.6%    64.8%      86.3%    6.8   79.5%    72.7%
11.0-11.11   88.1%    7.1   81.0%    73.9%      92.6%    4.3   88.3%    84.0%
12.0-12.11   90.7%    5.7   85.0%    79.3%      96.2%    4.1   92.1%    88.0%
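The -1 SD and -2 SD cutoffs in Table 5 follow directly from the means and standard deviations in Table 3. A minimal sketch (the function name is illustrative, not part of the test protocol):

```python
# Compute the -1 SD and -2 SD cutoffs used in Table 5 from a group's
# mean percent score and standard deviation (Table 3).

def deviation_cutoffs(mean_pct, sd):
    """Return (mean - 1 SD, mean - 2 SD), rounded to one decimal place."""
    return round(mean_pct - sd, 1), round(mean_pct - 2 * sd, 1)

# Left ear, 5.0-5.11 group: mean 52.5%, SD 7.1 (Table 3)
print(deviation_cutoffs(52.5, 7.1))   # (45.4, 38.3), matching Table 5
# Right ear, 12.0-12.11 group: mean 96.2%, SD 4.1
print(deviation_cutoffs(96.2, 4.1))   # (92.1, 88.0), matching Table 5
```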

Acknowledgment
Portions of this manuscript were presented in a poster session at the 1996 Florida Association of Speech-Language Pathologists and Audiologists (FLASHA) annual convention, Ft. Lauderdale, FL.

References

American Speech-Language-Hearing Association. (1996). Central auditory processing: Current status of research and implications for clinical practice. American Journal of Audiology, 5(2), 41-54.
Bellis, T. (1996). Assessment and management of central auditory processing disorders in the educational setting. San Diego, CA: Singular Publishing Group.
Broadbent, D. (1954). The role of auditory localization in attention and memory span. Journal of Experimental Psychology, 47, 191-196.
Hutchison, T. (1996). What to look for in the technical manual: Twenty questions for users. Language, Speech, and Hearing Services in Schools, 27(2), 109-121.
Katz, J. (1986). The SSW Test Manual (3rd ed.). Vancouver, WA: Precision Acoustics.
Keith, R. (1986). SCAN: Screening Test for Central Auditory Processing Disorders. San Antonio, TX: Psychological Corp.
Kimura, D. (1961). Some effects of temporal lobe damage on auditory perception. Canadian Journal of Psychology, 15, 157-165.
Mueller, H., & Bright, K. (1994). Monosyllabic procedures in central testing. In J. Katz (Ed.), Handbook of Clinical Audiology (4th ed., pp. 222-238). Baltimore, MD: Williams & Wilkins.
Musiek, F. (1983a). Assessment of central auditory dysfunction: The dichotic digits test revisited. Ear and Hearing, 4, 79-83.
Musiek, F. (1983b). Dichotic Digits Test [audiotape]. Hanover, NH: Dartmouth-Hitchcock Medical Center.
Musiek, F., & Lamb, L. (1994). Central auditory assessment: An overview. In J. Katz (Ed.), Handbook of Clinical Audiology (4th ed., pp. 197-211). Baltimore, MD: Williams & Wilkins.
Musiek, F., Gollegly, K., Kibbe, K., & Verkest-Lenz, S. (1991). Proposed screening test for central auditory disorders: Follow-up on the dichotic digits test. American Journal of Otology, 12(2), 109-113.
Musiek, F., Gollegly, K., Lamb, L., & Lamb, P. (1990). Selected issues in screening for central auditory processing dysfunction. Seminars in Hearing, 11(4), 372-384.
Stecker, N. (1992). Central auditory processing: Implications in audiology. In J. Katz, N. Stecker, & D. Henderson (Eds.), Central Auditory Processing: A Transdisciplinary View (pp. 117-127). St. Louis, MO: Mosby Year Book.
Willeford, J. (1977). Assessing central auditory behavior in children: A test battery approach. In R. Keith (Ed.), Central Auditory Dysfunction (pp. 43-72). New York, NY: Grune & Stratton.


Behavioral Verification of Programmable FM Advantage Settings

Lindsay Bondurant, Ph.D. Illinois State University Normal, Illinois

Linda Thibodeau, Ph.D. University of Texas at Dallas Dallas, Texas

The effects of frequency-modulated (FM) receiver settings on speech perception in noise were examined in adults with and without hearing impairment. Using the Bamford-Kowal-Bench Speech-in-Noise test, speech perception in noise of ten participants with mild-to-severe bilateral sensorineural hearing impairment and ten participants with normal hearing was evaluated while they wore Phonak iLink hearing instruments. The iLink had integrated FM receivers programmed to FM advantage settings ranging from 0 to +18 dB. Participants with normal hearing showed significantly greater benefit when listening with an FM system than did participants with hearing impairment. For both groups, there was significant improvement in performance with the addition of FM (vs. the local-microphone-only condition), and significant improvements were seen when FM advantage was increased by at least 6 dB. FM systems provide speech-perception-in-noise benefit to listeners with and without hearing impairment; however, incremental adjustments smaller than 6 dB may not result in significant improvements in performance.

Introduction

It is well-known that a reduction in audibility leads to reduced intelligibility of speech, even in the most ideal conditions (Ching, Dillon, & Byrne, 1998; Hornsby & Ricketts, 2003). Ideal conditions include an optimal signal-to-noise ratio (SNR), specifically the audibility of the primary signal (e.g., speech) relative to unwanted signals (e.g., noise) in the listening environment. The SNR at a person's ear is determined both by the distance of the person from the speaker and by the background noise level. Anyone can have difficulty understanding speech in poor SNR conditions, and listening to speech in the presence of background noise is one of the most common difficulties cited by people with hearing impairment (Cox & Alexander, 1995). To address the deleterious effect of poor SNR on communication, listeners with and without hearing loss may use a number of different devices to improve SNR with the goal of better understanding speech in noise (Smaldino & Crandell, 2000).

One of the most effective and widely-used technologies for enhancing SNR is the frequency-modulated (FM) amplification system. This type of hearing assistance technology is composed of a microphone, a transmitter, and a receiver. The microphone is placed close to the desired signal (e.g., a person talking) and is connected to a transmitter, which sends an FM signal to the receiver placed on or near the listener's ear. This receiver can be a loudspeaker or a small ear-level unit, or it can be coupled with, or integrated into, a user's hearing instrument or cochlear implant. With any of these receiver placements, the result is an improvement in SNR because the deleterious effects of distance and noise have been reduced. For people with hearing impairment, FM systems are one of the most effective ways to overcome the obstacles presented by poor SNR (e.g., Anderson & Goldstein, 2004; Arnold & Canning, 1999; Boothroyd & Iglehart, 1998; Hawkins, 1984; Lewis, Crandell, Valente, & Horn, 2004).

When fitting FM systems, the audiologist's goal is to maintain optimum intelligibility of the speech signal via the remote microphone, while allowing the user to remain connected to his surroundings either naturally or via the local microphone of a hearing instrument (American Speech-Language-Hearing Association, 2002). The SNR improvement expected with the addition of an FM system is referred to as "FM advantage," or FMA. The FMA is derived by subtracting the SNR that would be obtained without the FM system from the SNR obtained using the FM signal transmission (Platz, 2004). For example, if the SNR in the local-microphone-only condition was 5 dB but improved to 15 dB with the addition of an FM system, this would be considered an FM advantage of 10 dB, denoted FMA 10. The American Speech-Language-Hearing Association (ASHA) guidelines (2002) for fitting and monitoring FM systems recommend that the FM signal be 10 dB more intense than the signal from the local microphone of the hearing instrument at the output of the user's hearing instrument; this is a starting point from which further adjustments can be made based on the needs and the comfort of the listener.

The introduction of programmable miniaturized FM receivers allowed for customized adjustment of FM gain and output to achieve a desired FMA (Platz, 2004). In light of this advancement, emerging research has focused on using electroacoustic verification
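The FMA arithmetic described above can be stated in a few lines of Python (a minimal illustration of the definition, not code from the study):

```python
def fm_advantage(snr_with_fm: float, snr_mic_only: float) -> float:
    """FM advantage (FMA): the SNR improvement attributable to the FM
    system, i.e. the SNR obtained with FM transmission minus the SNR
    obtained with the local microphone alone (Platz, 2004)."""
    return snr_with_fm - snr_mic_only

# The example from the text: a local-microphone SNR of 5 dB improving
# to 15 dB with the FM system is an FM advantage of 10 dB (FMA 10).
print(fm_advantage(15, 5))  # 10
```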

procedures to evaluate whether changes to FMA settings produce the desired change in the device's acoustic output. However, there is little information available regarding the effect of manipulating FMA settings with respect to a measurable difference in user benefit. In 2003, Lewis and Eiten conducted a survey with audiologists who listened to recordings at various SNRs. They found that listeners preferred increased SNR (as achieved by greater FMA) for audibility of the speaker's voice, but that the cost of increased FMA was decreased audibility of self and of other voices.

In addition to questions about the effect of changing FMA on subjective preferences, questions have been raised about the electroacoustic changes that occur as FMA is changed. In 2007, Schafer, Thibodeau, Whalen, and Overson examined the electroacoustic characteristics of FM receivers coupled to personal hearing instruments and found that the output (as shown by electroacoustic verification) was affected by the type and compression characteristics of the hearing instrument and FM equipment used. For example, body-worn FM systems with neck loops provided reduced low-frequency output, and programmable receivers yielded lower average FMA than other types of receivers. The authors proposed that the sequential-testing protocol used in verification may have affected the results for some units. A sequential-testing protocol dictates measuring hearing aid output for a composite input of 65 dB SPL to the hearing aid local microphone, then measuring the hearing aid and FM system output for a composite input to the FM microphone of 80 dB SPL (ASHA, 2002). As the authors pointed out, data from Platz (2004, 2006) suggested that sequential testing did not replicate real-world inputs to both microphones. Additionally, they suggested that the use of input compression or wide-dynamic-range compression in both the hearing instrument and the transmitter may have resulted in reductions in the measured FMA when the signals were added.

Although all agree that FM systems can result in improvements in speech perception in noise, the question of whether increases in FMA are associated with significant improvements in speech perception in noise remains unanswered. The purpose of the present study was to determine the relationships between changes in programmable FMA and speech perception in noise in adults with normal hearing and those with mild-to-severe sensorineural hearing loss. Based on the current understanding of the relationship of SNR to speech perception in noise (Finitzo-Hieber & Tillman, 1978; Nabelek & Pickett, 1974), it was predicted that all participants would show improvement with the addition of an FM system, and that people without hearing impairment would show greater improvement than listeners with hearing impairment as FMA increased.

Method

Participants
Control participants included ten adults (four males and six females, age 19 to 35 years, mean = 26 years) with normal hearing, as defined by passing a hearing screening at 15 dB HL at 1000, 2000, and 4000 Hz. None had a history of hearing problems, nor had any experience using personal FM systems. Control participants were recruited from an available graduate student population. This group was included to see what optimal performance could be achieved without the effects of hearing loss or age. Experimental participants included ten adults (five males and five females, age 44 to 75 years, mean = 58.7 years) with bilateral sensorineural hearing impairment, as defined by a better-ear pure-tone average (500, 1000, and 2000 Hz) ranging from 18 to 70 dB HL (mean = 34.6 dB HL). Experimental participants were recruited from those who had participated in past research at the University of Texas at Dallas. As shown in Table 1, nine of ten participants were experienced bilateral hearing-instrument users. Two participants (Participants 2 and 4) had experience using FM systems, and four had participated in hearing-related research studies, though none were familiar with the specific test materials.

Table 1. Demographic information on the experimental participants: age, better-ear pure-tone average (PTA), personal hearing-aid type, and initial background babble level (dBA) used during testing.

Participant   Age (years)    Better Ear PTA (dB HL)   Personal Hearing Aid   Initial Babble Level (dBA)
1             61             25                       Open-fit BTE           54
2             48             65                       BTE                    64
3             55             20                       None                   54
4             56             70                       BTE                    49
5             67             28                       BTE                    64
6             75             18                       Open-fit BTE           52
7             44             33                       Open-fit BTE           50
8             49             25                       Open-fit BTE           54
9             71             30                       CIC                    44
10            71             32                       Open-fit BTE           44
Mean (SD)     58.7 (10.66)   34.6 (18.03)             --                     52.9 (6.94)

Note: dB HL = decibel hearing level; BTE = behind-the-ear; CIC = completely-in-the-canal; SNR = signal-to-noise ratio
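The better-ear pure-tone average used to characterize the experimental participants can be sketched as follows (an illustration only; the threshold values below are hypothetical, not taken from the study):

```python
def pta(thresholds_db_hl: dict) -> float:
    """Three-frequency pure-tone average over 500, 1000, and 2000 Hz,
    as used to describe the experimental group."""
    return sum(thresholds_db_hl[f] for f in (500, 1000, 2000)) / 3

def better_ear_pta(left: dict, right: dict) -> float:
    """The better (i.e., lower) of the two ears' pure-tone averages."""
    return min(pta(left), pta(right))

# Hypothetical audiograms (dB HL by frequency), for illustration only.
left = {500: 30, 1000: 35, 2000: 40}
right = {500: 20, 1000: 25, 2000: 30}
print(better_ear_pta(left, right))  # 25.0
```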


Amplification Systems
Although there is current FM technology in which the FMA fluctuates or adapts depending on the background noise level, an examination of the effects of small changes in FMA would be difficult with an adaptive system. Additionally, there are several FM systems with programmable settings that adjust the FMA to a certain fixed level. To conduct an investigation of FMA settings, an instrument that could conveniently be programmed for both normal and impaired hearing was selected. Ten Phonak iLink S-311 hearing instruments with integrated FM receivers were programmed using Phonak PFG software, version 8.6, and Phonak FM Successware, version 4.0. For control participants, the study's hearing instruments were programmed to meet National Acoustic Laboratories Non-Linear version 1 (NAL-NL1; Byrne, Dillon, Ching, Katsch, & Keidser, 2001; Dillon, 1999) targets for a hearing level of 15 dB HL across the frequencies of 250 Hz to 8000 Hz. ER-3A tips with size 13 tubing were used to couple the hearing instruments to each ear.

For experimental participants, the study hearing instruments were programmed to match the settings of their personal hearing instruments when possible. For those without personal hearing instruments (Participant 3) or with open-fit behind-the-ear aids (Participants 1, 8, and 10), the study aids were programmed to meet NAL-NL1 targets for their hearing loss and then adjusted for comfort. Four of the ten participants had personal earmolds that were used to couple the study's hearing instruments to their ears; the other six participants used the same type of temporary ER-3A eartips as the control participants.

The FMA of the iLink S-311 can be adjusted in 2-dB steps, from 0 to 18 dB. A clinically feasible step size of 4 dB was chosen, resulting in the following FMAs: 18, 14, 10, 6, or 0 dB. The decision was made to eliminate the FMA 2 condition rather than include a final step size of less than 4 dB; as a result, there was a 6-dB step from FMA 6 to FMA 0. All participants were fit bilaterally at the beginning of each listening condition with a pair of the study's hearing instruments programmed for their hearing loss and for one of the FMA settings. An FM transmitter was selected that was of the same generation of equipment and that had minimal advanced features that might impact results, such as adaptive FMA, directional microphones, or voice activation. A Phonak Campus S transmitter with an MM8 lapel microphone on the omnidirectional setting was used on Channel 1.

Electroacoustic Verification Procedure
Electroacoustic characteristics of the hearing instruments were verified using a Frye FP-40 hearing-aid analyzer to determine gain/output and equivalent-input-noise characteristics. All study hearing instruments were programmed to match NAL-NL1 simulated real-ear targets using average adult RECD information. One participant with hearing impairment (Participant 2) requested an increase in gain for the left hearing instrument, which resulted in an increase of 7 dB for the three-frequency average at 750, 1000, and 2000 Hz, as measured in a 2cc coupler using an input of 65 dB SPL.

The American Academy of Audiology Clinical Practice Guideline for Fitting and Verification of Hearing Assistance Technology (2008) was used for verification of FM output. Measurements via HA-2 coupler with a 65 dB SPL randomly-interrupted, speech-weighted input were used to compare the hearing instrument local-microphone (M) response to the hearing-aid-plus-FM (FM+M) response, to ensure that programming changes resulted in changes in output. The goal was not to achieve transparency, as recommended in the AAA protocol, but rather to measure the output of the M and FM+M conditions in a consistent way.

Materials
The Bamford-Kowal-Bench Speech-in-Noise test (BKB-SIN), a test of speech perception in decreasing SNR¹, was used with non-traditional presentation levels to assess the potentially small changes in performance that could occur with the changes in FMA settings. The BKB-SIN uses the Bamford-Kowal-Bench sentences (Bench & Bamford, 1979; Bench, Kowal, & Bamford, 1979) spoken by a male talker in four-talker babble (Auditec of St. Louis, 1971) and contains 18 List Pairs, each half of which comprises an 8- to 10-sentence list. The first sentence of each list has four key words, and the remaining sentences each have three. The method for determining the signal-to-noise ratio at the 50%-correct score (SNR-50) is based on the Tillman-Olsen (Tillman & Olsen, 1973) procedure for obtaining spondee thresholds. In the BKB-SIN, one point is given for each key word repeated correctly, and the total number of correct words per list is subtracted from 23.5 (this number is derived from the starting SNR = 21, plus half the step size = 1.5, plus the extra word from the first sentence = 1). If modifications were made to the initial intensity of the background babble (see Behavioral Verification Procedure section for more information on signal levels), the formula was adjusted accordingly: the total number of correct words was subtracted from [23.5 - (21 - x)], where 21 = the standard starting SNR and x = the modified initial SNR. The SNR-50 scores for both half-lists of the List Pair are averaged to obtain the List Pair score (Etymotic Research, 2005).

A benefit of using the BKB-SIN for this study was that it was possible to use a split-track recording in which the initial intensity of the babble could be changed independently of the intensity of the signal. This was necessary to avoid a "ceiling effect," in which the addition of an FM system could allow participants to score 100% correct in even the most difficult SNR conditions.

The stimuli were presented via CD player with an amplifier (GSI-16 or Crown D75) in a single-walled audiometric booth using single-cone loudspeakers at 0 and 180 degrees azimuth. Initial calibration was completed using a Quest Impulse Integrating Model 1800 sound-level meter placed in the participant's chair to approximate the location of the participant's ears. Subsequent sound-level measurements were made prior to testing each participant, using a Radio Shack Digital-Display sound-level meter (Model 33-2055), to confirm uniformity of signal characteristics from one subject to the next. In each calibration, the 1000 Hz calibration tone on the BKB-SIN CD was used to set the volume-units (VU) meter for the initial signal and noise output from the loudspeakers via Channel 1 and Channel 2, respectively. Intensity at the level of the participant's ear was also verified using speech-spectrum noise from the BKB-SIN CD to confirm signal and noise intensity prior to beginning the BKB-SIN test. Using these calibration procedures, signal intensity was confirmed to be 65 dBA at the level of the listener's ear and 86 dBA at the level of the FM microphone placed 15.25 cm from the speaker at 0 degrees azimuth.

¹It should be noted that the American Academy of Audiology Clinical Practice Guidelines for Fitting and Verification of Hearing Assistance Technology (2008) cautions against using adaptive-noise behavioral verification protocols because "resulting noise levels may exceed typical classroom noise levels." However, the literature supports the use of adaptive noise, as occupied-classroom noise levels fluctuate throughout the day. For example, Dockrell and Shield (2004) found that classroom noise levels varied from 55.5 dBA when students were working quietly to 77.3 dBA when students were involved in activities. To mimic the natural fluctuations in noise that occur throughout a listener's day, a modified method-of-constants approach to manipulating SNR was needed, and thus the BKB-SIN test (Etymotic Research, 2005) was chosen for the present study to determine behavioral changes in speech-perception-in-noise performance.

Behavioral Verification Procedure
The signal and noise loudspeakers were one meter from the participant at 0 and 180 degrees, respectively. The FM microphone was placed 15.25 cm in front of the signal loudspeaker (see Figure 1). This distance was chosen based on recommendations for "typical use," which suggest that lapel microphones be placed six inches from the speaker's mouth (AAA, 2008).

Figure 1. Booth configuration for the behavioral evaluation. The FM microphone was placed 15.25 cm from the signal speaker, and the participant was separated from the signal and noise speakers by one meter in each direction. [The diagram shows the FM transmitter and microphone coupled to the front (signal) loudspeaker via Audiometer Channel 1, the rear (noise) loudspeaker driven via Audiometer Channel 2, the CD player with split-track CD, the audiologist, and the participant wearing hearing aids with integrated FM receivers.]

The BKB-SIN test was administered as a split-track recording, with Channel 1 providing the speaker's voice and Channel 2 providing the background babble. One half-list from List Pairs 9 through 18 was given as a practice list in the microphone-only condition. Participants had to be able to correctly repeat at least 18 of 22 words presented at an SNR of 3 dB or better. All test participants met this inclusion criterion within one practice list. Test lists (from List Pairs 1 through 8) were then administered in computer-generated random order in the following conditions: microphone-only, FMA 0, FMA 6, FMA 10, FMA 14, and FMA 18. Lists were re-randomized for each participant.

For participants with normal hearing, the signal was presented from the front loudspeaker (Channel 1) at a constant level of 65 dBA (as measured at the level of the participant's ear), and the background babble was presented from the rear loudspeaker (Channel 2), beginning at 54 dBA (SNR 11) and increasing by 3 dB for each presentation until the intensity of the babble was 81 dBA (SNR -16) at the final presentation.

For experimental participants, the signal (Channel 1) was again held constant at 65 dBA. Due to the higher variability in performance expected among listeners with hearing loss, the initial intensity of the background babble was carefully selected for each individual so that performance with the FM system would still present a challenge across all the FMA settings. The initial babble level was set based on performance on the practice list. Several of the participants with hearing impairment were able to easily complete most of a practice list in the microphone-only condition at an SNR of 0, suggesting that the addition of an FM system would create a ceiling effect, with participant scores reaching the maximum number of words correct in one or more conditions. If the practice list indicated a ceiling effect, an additional adjustment to the initial background babble level was made such that when the participant was wearing hearing instruments programmed to the highest FMA (18), their BKB-SIN score would approach (but not meet) the maximum number of words correct. The starting level of the background babble ranged from 44 dBA (SNR 21) to 65 dBA (SNR 0) and increased in 3-dB steps until the final level was 27 dB more intense than the initial level (refer to Table 1).
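The SNR-50 scoring arithmetic described in the Materials section can be sketched in Python (an illustration, not study code; function names are ours, and we assume the scoring constant generalizes as starting SNR + 2.5, per the derivation of 23.5 given in the text):

```python
def snr50(total_correct: int, initial_snr: float = 21.0) -> float:
    """SNR-50 for one BKB-SIN half-list.

    With the standard starting SNR of 21 dB, the constant is
    21 (start SNR) + 1.5 (half the 3-dB step) + 1 (extra key word in
    the first sentence) = 23.5. A modified starting SNR x shifts the
    constant to x + 2.5, i.e. 23.5 - (21 - x).
    """
    return (initial_snr + 2.5) - total_correct

def list_pair_score(correct_a: int, correct_b: int, initial_snr: float = 21.0) -> float:
    """Average the SNR-50 of both half-lists to get the List Pair score."""
    return (snr50(correct_a, initial_snr) + snr50(correct_b, initial_snr)) / 2

# Standard track: 20 key words correct -> SNR-50 of 23.5 - 20 = 3.5 dB.
print(snr50(20))                   # 3.5
# Babble raised so the track starts at SNR 11 (as for the normal-hearing
# group here): the constant becomes 13.5.
print(snr50(20, initial_snr=11))   # -6.5
```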


Results

The average performance across conditions for control and experimental participants is shown in Figure 2. The SNR-50 scores for each participant are provided in Appendix A. For every condition except FMA 0, the control group could, on average, achieve 50%-correct performance at greater noise levels (lower SNRs) than the experimental group. The average improvement from one condition to the next is shown in Table 2. For each group, the use of the FM system resulted in the ability to tolerate more noise relative to the microphone-only condition. Furthermore, each increase in FMA resulted in a lower SNR-50 score relative to the previous condition.

Figure 2. Average SNR at 50% score (SNR-50) for ten control (Avg. NH) and ten experimental (Avg. HI) participants in the microphone-only, FMA 0, FMA 6, FMA 10, FMA 14, and FMA 18 conditions. Note: Error bars show +/- 1 standard deviation; FMA = FM advantage.

Table 2. Average improvement in SNR-50 scores across FM advantage conditions for ten control participants and ten experimental participants.

               Mic-only to 0   0 to 6   6 to 10   10 to 14   14 to 18   Total (Mic-only to 18)
Control        -5.35           -7.20    -2.35     -2.35      -2.00      -19.25
Experimental   -7.45           -4.45    -3.65     -0.10      -1.75      -17.40

Note: FMA = FM advantage; Mic = microphone; SNR = signal-to-noise ratio

A one between-subject, one within-subject repeated-measures analysis of variance (ANOVA) was performed and revealed significant main effects for group and condition. Across the six FMA conditions, the control participants (mean SNR-50 score = -7.98 dB) achieved lower SNR-50 scores than the experimental participants (mean SNR-50 score = -5.91 dB), F(1,5) = 4.95, MSE = 105.02, p = .03. This difference in performance is consistent with the well-documented finding that people with hearing impairment have greater difficulty with speech-perception-in-noise tasks (Beattie, 1989; Lewis et al., 2004; Wilson, McArdle, & Smith, 2007). Some of the difference could also be attributed to the age difference between the two groups, although this was not a main factor of interest in this study.

There was also a significant effect of FMA condition, with the SNR-50 score decreasing as FMA increased from 0 to 18 dB, F(1,5) = 50.12, MSE = 974.88, p < .0001. A posteriori analysis using the Tukey-Kramer correction for multiple comparisons revealed significant changes when comparing low-FMA conditions to higher-FMA conditions (see Table 3). However, there was no significant change in speech-perception performance when instruments were changed from FMA 10 or FMA 14 to higher FMA settings. There was no significant interaction between group and condition, F(1,5) = 0.56, MSE = 10.93, p = 0.73: the change in SNR-50 scores across FMA conditions occurred similarly for control and experimental participants.

Table 3. Changes in SNR-50 scores between FM advantage conditions for all participants.

           Mic-only   FMA 0   FMA 6    FMA 10   FMA 14   FMA 18
Mic-only   --         6.40*   12.23*   15.23*   16.45*   18.33*
FMA 0      --         --      5.83*    8.83*    10.05*   11.93*
FMA 6      --         --      --       3.00     4.23*    6.10*
FMA 10     --         --      --       --       1.23     3.10
FMA 14     --         --      --       --       --       1.88

Note: FMA = FM advantage; Mic = microphone; SNR = signal-to-noise ratio. An asterisk (*) indicates a significant difference at the .05 level.

The change in FMA measured electroacoustically, compared to the programmed change, is shown in Figure 3, and further detail is provided in Appendix B. A given increase in FMA in the FM Successware program did not always result in a similar (+/- 1 dB) increase in FMA when electroacoustically evaluated according to AAA and Phonak guidelines. Changes in electroacoustic response tended to be closer to the programmed change when the hearing aids were programmed for a flat 15-dB hearing level (i.e., for normal hearing). For this programmed level, the electroacoustic change was within +/- 1 dB of the programmed change in two of the four FMA comparison conditions (6 to 10 and 14 to 18). However, when the hearing aids were programmed for the respective hearing loss values, the change in electroacoustic response was within +/- 1 dB in only one of the four FMA comparison conditions


(0 to 6). For three of the four comparison FMAs, the change measured electroacoustically was less than the programmed change in FMA. For example, changing the FMA from 10 to 14 in the programming software resulted in an electroacoustic change of only 1.12 dB.

Figure 3. Mean electroacoustic change in FM advantage compared to programmed change in FM advantage, by group.

                      0 to 6 dB   6 to 10 dB   10 to 14 dB   14 to 18 dB
Electro NH            7.95        3.23         2.20          3.14
Electro HI            7.01        2.56         0.76          1.51
FM Program increase   6.00        4.00         4.00          4.00

Note. Electro NH = the difference between the electroacoustic three-frequency average from one condition (such as FMA 0) to the next (such as FMA 6) for the control participants; Electro HI = the same difference for the experimental participants; FM Program increase = the difference from one programmed FMA (such as 0) to the next (such as 6). Values are in dB.

Discussion

The purpose of this study was to examine the listening-in-noise performance of adults with normal hearing compared to adults with hearing impairment at various FM advantage (FMA) settings. For both groups, there was significant improvement in performance across conditions as FMA increased.

Clinical Implications
This study has several implications for clinicians working with programmable FM systems. First, the data confirmed that the addition of an FM system improved speech perception in noise for all participants, whether they had normal hearing or hearing impairment (see Appendix C). In all FMA conditions, improvements were achieved by seven of the ten control participants relative to the microphone-only condition, and nine of the ten experimental participants achieved improvements in the FMA 0 condition relative to the microphone-only condition. These results indicate that individuals who use an FM system can tolerate greater noise levels and still maintain 50% speech recognition. This finding supports prior research showing that the addition of an FM system is an effective means of addressing the issue of listening in noise (Anderson & Goldstein, 2004; Arnold & Canning, 1999; Boothroyd & Iglehart, 1998; Lewis et al., 2004).

While the results showed that, in general, FM systems provide advantages to people with and without hearing impairment, the findings described above suggest a great deal of variability between programmed settings and behavioral performance on a speech-perception-in-noise task. With this in mind, a clinician cannot always expect that changes beyond the default settings in the programmed FMA will result in an improvement in their patient's performance.

Interestingly, improvements in performance related to changes in FMA were less consistent for participants with hearing impairment than for participants with normal hearing. This may be due, in part, to the compression characteristics of the hearing instrument and FM system. For listeners without hearing impairment, the compression settings were the same across all aids, which most likely resulted in a more consistent SNR improvement with fluctuating inputs. For listeners with hearing impairment, the compression settings varied with degree of loss, which could account for some of the increased variability seen in this group. These findings suggest that, in addition to other amplification characteristics, the compression settings of the hearing instrument may interact with those of the FM system.

For the group with normal hearing, most participants showed an improvement in performance as FMA increased. The group with hearing impairment, however, achieved greater change more often with the initial addition of an FM system, as seen by comparing microphone-only to FMA 0; fewer participants were able to achieve improvements with the addition of greater FM advantage. This suggests that increasing the FM advantage in small increments (i.e., 4 dB or less beyond FMA 6) may not result in significant benefits in speech perception in noise for patients. Also, the variability in results, particularly within the hearing-impaired group, suggests that sensitive measures of speech perception in noise (such as the BKB-SIN) are necessary when fitting FM systems.

Finally, programmed changes in FMA did not always result in equal changes in FMA when measured electroacoustically, especially for participants with hearing impairment. This may have been related to the fixed output sound pressure level for a 90 dB input (OSPL90) of the device. As shown by Schafer et al. (2007), the OSPL90 when receiving an FM signal does not exceed the OSPL90 in the microphone-only setting. Thus, when the speech input to a hearing aid is nearing saturation levels (as is possible for listeners with hearing impairment), the FM system is likely to be in compression and may not be able to generate a signal significantly different in intensity from the microphone-only signal, particularly at high FMA settings.
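The saturation effect discussed above can be illustrated with a toy hard-limiting model (our simplification, with made-up gain and OSPL90 values; real instruments compress gradually rather than clipping, but the ceiling behaves similarly):

```python
def aid_output(input_spl: float, gain: float, fma: float,
               ospl90: float = 120.0) -> float:
    """Toy model of hearing-aid output: linear gain plus programmed FM
    advantage, hard-limited at a fixed OSPL90. Not a device model from
    the study; all values are hypothetical."""
    return min(input_spl + gain + fma, ospl90)

# Moderate gain: each programmed FMA step still changes the output.
print([aid_output(65, 30, f) for f in (0, 6, 10, 14, 18)])
# High gain (greater hearing loss): the output ceiling erases the
# difference between the upper FMA settings, mirroring the smaller
# electroacoustic changes measured for the hearing-impaired fittings.
print([aid_output(65, 45, f) for f in (0, 6, 10, 14, 18)])
```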


Limitations
This study was completed using a relatively small sample size of 20 adults. There was more variability in performance among the hearing-impaired participants, which could be attributable to differing degrees of hearing loss, varying amounts of prior experience with amplification, compression settings, or central auditory processing issues (due to aging). Another concern is the age difference between the control group (M = 26 years, SD = 4.74) and the experimental group (M = 60.3 years, SD = 9.89). It is well understood that speech perception in noise declines with age (Jerger, Jerger, Oliver, & Pirozzolo, 1989; Martin & Jerger, 2005), so it cannot be ruled out that the performance differences between the control group and the experimental group were confounded by the age difference. Additionally, only one type of speech-in-noise test was used, which had an adaptive noise level; results may differ when different tests and/or adaptive signal/speech levels are used, as changing the intensity of the signal may affect the compression characteristics of hearing instruments and/or FM systems (see Schafer et al., 2007). Finally, one specific type of hearing instrument and FM system was used; it is possible that results could vary considerably with different amplification systems, as direct-audio-input characteristics, microphone sensitivities, and impedance characteristics change from one system to the next.

Future Research
The focus of this study was on the performance of adults with hearing loss, but there is a clear need for similar research with children with and without hearing loss, as children tend to be the most frequent users of FM technology (Smaldino & Crandell, 2000). Also, further research into electroacoustic evaluation using speech-like inputs at varying intensities may help explain how compression characteristics affect outputs with and without FM systems. Finally, with the emergence of new technology (such as dynamic or adaptive FM systems), additional research will be needed to determine how to effectively evaluate benefit, both electroacoustically and behaviorally. Ultimately, in order to maximize user benefit, careful monitoring of the electroacoustic and behavioral benefit of programmable FM systems is warranted.

17 Journal of Educational Audiology vol. 17, 2011

Acknowledgements

The authors would like to thank Phonak Hearing Systems for the donation of the FM receivers and transmitters used in this study.

References

American Academy of Audiology (2008). Clinical practice guidelines: Remote microphone hearing assistance technologies for children and youth birth - 21 years, Supplement A. Retrieved from http://www.audiology.org/resources/documentlibrary/pages/HearingAssistanceTechnologies.aspx
American Speech-Language-Hearing Association. (2002). Guidelines for fitting and monitoring FM systems. ASHA Desk Reference, 2, 151-171.
Anderson, K. L. & Goldstein, H. (2004). Speech perception benefits of FM and infrared devices to children with hearing aids in a typical classroom. Language, Speech, and Hearing Services in Schools, 35, 169–184.
Arnold, A. P. & Canning, D. (1999). Does classroom amplification aid comprehension? British Journal of Audiology, 33(3), 171-8.
Auditec (1971). Four-talker babble [Recording]. St. Louis, MO.
Beattie, R. C. (1989). Word recognition functions for the CID W-22 test in multitalker noise for normally hearing and hearing-impaired subjects. Journal of Speech and Hearing Disorders, 54(1), 20-32.
Bench, J., & Bamford, J. (1979). Speech-hearing tests and the spoken language of hearing-impaired children. London: Academic Press.
Bench, J., Kowal, A. & Bamford, J. (1979). The BKB (Bamford-Kowal-Bench) Sentence Lists for Partially-Hearing Children. British Journal of Audiology, 13, 108-112.
Boothroyd, A. & Iglehart, F. (1998). Experiments with classroom FM amplification. Ear and Hearing, 19(3), 202-17.
Byrne, D., Dillon, H., Ching, T., Katsch, R., & Keidser, G. (2001). NAL-NL1 procedure for fitting nonlinear hearing aids: characteristics and comparisons with other procedures. Journal of the American Academy of Audiology, 12, 37-51.
Ching, T.Y., Dillon, H., & Byrne, D. (1998). Speech recognition of hearing-impaired listeners: predictions from audibility and the limited role of high-frequency amplification. The Journal of the Acoustical Society of America, 103(2), 1128-40.
Cox, R.M. & Alexander, G.C. (1995). The Abbreviated Profile of Hearing Aid Benefit. Ear and Hearing, 16, 176-186.
Dillon, H. (1999). NAL-NL1: A new prescriptive fitting procedure for non-linear hearing aids. Hearing Journal, 52(4), 10-16.
Etymotic Research (2005). Bamford-Kowal-Bench Speech-in-Noise Test (Version 1.03) [Audio CD]. Elk Grove Village, IL: Etymotic Research.
Finitzo-Hieber, T., & Tillman, T. (1978). Room acoustics effects on monosyllabic word discrimination ability for normal and hearing-impaired children. Journal of Speech and Hearing Research, 21, 440–458.
Hawkins, D.B. (1984). Comparisons of speech recognition in noise by mildly-to-moderately hearing-impaired children using hearing aids and FM systems. The Journal of Speech and Hearing Disorders, 49(4), 409-418.
Hornsby, B. W., & Ricketts, T. A. (2003). The effects of hearing loss on the contribution of high- and low-frequency speech information to speech understanding. The Journal of the Acoustical Society of America, 113(3), 1706-17.
Jerger, J., Jerger, S., Oliver, T., & Pirozzolo, F. (1989). Speech understanding in the elderly. Ear and Hearing, 10(2), 79-89.
Lewis, D. & Eiten, L. (2003). Assessment of advanced hearing instrument and FM technology. In ACCESS: Achieving Clear Communication Employing Sound Solutions (pp. 167-174). Warrenville, IL: Phonak AG.
Lewis, M. S., Crandell, C. C., Valente, M., & Horn, J. E. (2004). Speech perception in noise: Directional microphones versus frequency modulation (FM) systems. Journal of the American Academy of Audiology, 15, 426–439.
Martin, J. & Jerger, J. (2005). Some effects of aging on central auditory processing. Journal of Rehabilitation Research and Development, 42(4 Suppl 2), 25-44.
Nabelek, A., & Pickett, J. (1974). Reception of consonants in a classroom as affected by monaural and binaural listening, noise, reverberation, and hearing aids. Journal of the Acoustical Society of America, 56, 628–639.
Platz, R. (2004). SNR advantage, FM advantage and FM fitting. In ACCESS: Achieving Clear Communication Employing Sound Solutions (pp. 147-154). Warrenville, IL: Phonak AG.
Platz, R. (2006). New insights and developments in verification of FM systems. Session presented at the annual AudiologyNOW conference of the American Academy of Audiology, Minneapolis, MN.
Schafer, E. C., Thibodeau, L. M., Whalen, H. S., & Overson, G. J. (2007). Electroacoustic evaluation of frequency-modulated receivers interfaced with personal hearing aids. Language, Speech, and Hearing Services in Schools, 38, 315–326.
Smaldino, J. & Crandell, C. (2000). Classroom amplification technology: Theory and practice. Language, Speech, and Hearing Services in Schools, 31, 371–375.

18 Behavioral Verification of Programmable FM Advantage Settings

Tillman, T. W., & Olsen, W. O. (1973). Speech audiometry. In J. Jerger (Ed.), Modern developments in audiology (2nd ed.). New York, NY: Academic Press.
Wilson, R.H., McArdle, R.A., & Smith, S.L. (2007). An evaluation of the BKB-SIN, HINT, QuickSIN, and WIN materials on listeners with normal hearing and listeners with hearing loss. Journal of Speech, Language, and Hearing Research, 50, 844–856.


Appendix A SNR-50 scores for participants from control group (top) and experimental group (bottom).

Participant  NH-1    NH-2    NH-3    NH-4    NH-5    NH-6    NH-7    NH-8    NH-9    NH-10
Mic-only      3.50   -0.50    1.00    4.50   -1.50    5.00    4.50    7.00    5.50    6.50
FMA 0         1.00   -6.00   -3.50   -2.50   -1.50    3.00   -1.50   -2.50   -2.50   -2.00
FMA 6        -6.50  -11.00   -9.00   -7.00  -11.00   -5.50  -10.50   -7.00  -12.00  -10.50
FMA 10      -10.50  -10.00  -16.00  -10.50  -10.50   -9.00  -12.50  -10.00  -12.50  -12.00
FMA 14      -12.50  -13.50  -15.00  -15.00  -14.00  -14.00  -13.00  -13.00  -13.00  -14.00
FMA 18      -14.00  -15.50  -17.00  -15.00  -16.00  -15.50  -15.00  -16.00  -16.00  -17.00
Mean         -6.50   -9.42   -9.92   -7.58   -9.08   -6.00   -8.00   -6.92   -8.42   -8.17

Participant  HI-1    HI-2    HI-3    HI-4    HI-5    HI-6    HI-7    HI-8    HI-9    HI-10
Mic-only     10.50   10.50    6.50   -1.50    0.00   -0.50    9.50   10.50    4.00    4.00
FMA 0         3.50   -1.00   -0.50   -2.50    5.00   -7.00    0.50   -5.00  -10.00   -4.00
FMA 6        -2.50   -4.50    0.00   -9.00    7.00   -5.50  -10.00  -12.00  -17.50  -11.50
FMA 10       -6.50   -7.00   -8.00   -9.50    2.00  -13.00  -10.50  -14.00  -21.00  -14.50
FMA 14      -11.50   -7.00   -6.00  -10.00    3.50  -13.00  -13.00  -10.00  -21.00  -15.00
FMA 18       -8.00   -7.50   -8.50  -15.00   -1.50  -15.00  -10.50  -15.50  -22.50  -16.50
Mean         -2.42   -2.75   -2.75   -7.92    2.67   -9.00   -5.67   -7.67  -14.67   -9.58

Average SNR-50 Scores

              NH      SD      HI      SD
Mic-only      3.55    2.92    5.35    4.85
FMA 0        -1.80    2.43   -2.10    4.61
FMA 6        -9.00    2.31   -6.55    6.99
FMA 10      -11.35    2.00  -10.20    6.11
FMA 14      -13.70    0.86  -10.30    6.43
FMA 18      -15.70    0.92  -12.05    5.98
Mean         -8.00           -5.88

Note: FMA = FM advantage; Mic = local microphone; NH = control group; HI = experimental group.
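As a quick arithmetic check, each participant's tabled mean is the simple average of that participant's six listening conditions. A minimal Python sketch, with values transcribed from the control-group table above:

```python
# Verify that a participant's mean SNR-50 in Appendix A is the average of
# the six listening conditions (Mic-only through FMA 18).
# Values below are transcribed from the table for participant NH-1.
nh1_scores = [3.50, 1.00, -6.50, -10.50, -12.50, -14.00]

mean_snr50 = sum(nh1_scores) / len(nh1_scores)
print(round(mean_snr50, 2))  # -6.5, matching the tabled mean for NH-1
```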



Appendix B
Behavioral SNR improvement and electroacoustic differences as FM advantage changes, by subject.

FMA step        NH-1   NH-2   NH-3   NH-4   NH-5   NH-6   NH-7   NH-8   NH-9   NH-10
0 to 6    Beh    7.50   5.00   5.50   4.50   9.50   8.50   9.00   4.50   9.50   8.50
          Elec   7.00   7.00   7.00   7.00   7.00   7.00   7.00  10.17  10.17  10.17
          Diff   0.50  -2.00  -1.50  -2.50   2.50   1.50   2.00  -5.67  -0.67  -1.67
6 to 10   Beh    4.00  -1.00   7.00   3.50  -0.50   3.50   2.00   3.00   0.50   1.50
          Elec   3.33   3.33   3.33   3.33   3.33   3.33   3.33   3.00   3.00   3.00
          Diff   0.67  -4.33   3.67   0.17  -3.83   0.17  -1.33   0.00  -2.50  -1.50
10 to 14  Beh    2.00   3.50  -1.00   4.50   3.50   5.00   0.50   3.00   0.50   2.00
          Elec   2.00   2.00   2.00   2.00   2.00   2.00   2.00   2.67   2.67   2.67
          Diff   0.00   1.50  -3.00   2.50   1.50   3.00  -1.50   0.33  -2.17  -0.67
14 to 18  Beh    1.50   2.00   2.00   0.00   2.00   1.50   2.00   3.00   3.00   3.00
          Elec   3.42   3.42   3.42   3.42   3.42   3.42   3.42   2.50   2.50   2.50
          Diff  -1.92  -1.42  -1.42  -3.42  -1.42  -1.92  -1.42   0.50   0.50   0.50

FMA step        HI-1   HI-2   HI-3   HI-4   HI-5   HI-6   HI-7   HI-8   HI-9   HI-10
0 to 6    Beh    6.00   3.50  -0.50   6.50  -2.00  -1.50  10.50   7.00   7.50   7.50
          Elec   5.83   6.67   8.50   7.17   7.00   8.17   7.67   8.33   3.47   7.33
          Diff   0.17  -3.17  -9.00  -0.67  -9.00  -9.67   2.83  -1.33   4.03   0.17
6 to 10   Beh    4.00   2.50   8.00   0.50   5.00   7.50   0.50   2.00   3.50   3.00
          Elec   2.17   1.50   3.50   1.33   4.17   3.67   3.38   4.00   1.83   0.05
          Diff   1.83   1.00   4.50  -0.83   0.83   3.83  -2.88  -2.00   1.67   2.95
10 to 14  Beh    5.00   0.00  -2.00   0.50  -1.50   0.00   2.50  -4.00   0.00   0.50
          Elec   0.67   1.00   0.33   1.50   0.33   2.50  -0.75  -0.42   0.60   1.83
          Diff   4.33  -1.00  -2.33  -1.00  -1.83  -2.50   3.25  -3.58  -0.60  -1.33
14 to 18  Beh   -3.50   0.50   2.50   5.00   5.00   2.00  -2.50   5.50   1.50   1.50
          Elec   1.50   0.67  -0.50   2.83  -1.83   2.17   2.17   3.97   2.07   2.10
          Diff  -5.00  -0.17   3.00   2.17   6.83  -0.17  -4.67   1.53  -0.57  -0.60

Note: Beh = Change in SNR-50 score from one FMA condition to the next; Elec = change in the 3-frequency average (750, 1000, 2000 Hz) difference between hearing aid and hearing aid plus FM, from one FMA condition to the next; Diff = difference between behavioral change and electroacoustic change from one FMA condition to the next.
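The "Diff" rows defined in the note above are plain subtraction: behavioral change minus electroacoustic change for the same FM-advantage step. A minimal Python sketch (the helper name is illustrative; the values are transcribed from the NH table for participant NH-1, FMA 0-to-6 step):

```python
# Appendix B's "Diff" is the behavioral SNR-50 improvement minus the
# electroacoustic change for the same FM-advantage step.
def diff_score(behavioral_change_db, electroacoustic_change_db):
    """Difference between behavioral and electroacoustic change (dB)."""
    return behavioral_change_db - electroacoustic_change_db

# Participant NH-1, FMA 0 -> 6 step (Beh = 7.50 dB, Elec = 7.00 dB):
print(diff_score(7.50, 7.00))  # 0.5, as tabled
```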



Appendix C
Individual participants who exceeded the 95% confidence interval for change in speech-recognition-in-noise score as FM advantage was changed.

                                              Participant Number
FMA comparison  Group         CI     1   2   3   4   5   6   7   8   9   10    Total exceeding CI
Mic-only to 0   Control       1.81   +   +   +   +   -   +   +   +   +   +     9
                Experimental  3.00   +   +   +   -   -   +   +   +   +   +     8
0 to 6          Control       1.51   +   +   +   +   +   +   +   +   +   +     10
                Experimental  2.85   +   +   -   +   -   -   +   +   +   +     5
6 to 10         Control       1.43   +   -   +   +   -   +   +   +   -   +     7
                Experimental  4.33   -   -   +   -   +   +   -   -   -   -     3
10 to 14        Control       1.24   +   +   -   +   +   +   -   +   -   +     7
                Experimental  3.78   +   -   -   -   -   -   -   -   -   -     1
14 to 18        Control       0.53   +   +   +   -   +   +   +   +   +   +     9
                Experimental  3.99   -   -   -   +   +   -   -   +   -   -     3

Note: FMA = FM advantage; Mic = microphone; CI = confidence interval; (+) = change in score exceeded 95% confidence interval; (-) = change in score did not exceed 95% confidence interval.


An Exploration of Non-Quiet Listening at School

Jeffery Crukley, Ph.D., Susan Scollie, Ph.D., and Vijay Parsa, Ph.D.
University of Western Ontario, London, Ontario, Canada

The first goal of this study was to describe acoustic properties across an entire day in each of three educational environments: daycare (pre-kindergarten), an elementary school (kindergarten to grade 8), and a high school (grades 9 through 12). Instructional and non-instructional listening situations were included in this description. Second, we classified the various listening situations experienced by the cohorts at each school. Three sites participated in this study. At each site, empty room measurements were obtained, including noise floor and reverberation levels, across the various rooms frequently occupied by the participating cohorts of children. Next, the first author followed the cohorts throughout their regular school routines, recording sound level data with a dosimeter and documenting observations of the types of listening situations encountered by the children. Noise floor, reverberation, and sound levels were compared to classroom standards and large-scale classroom studies. The cohorts in this study encountered highly variable acoustic environments throughout the day, for signal levels, noise sources, and reverberation properties. These results have implications for digital signal processing and hearing instrument fitting approaches for school-age children. Furthermore, the results of this exploratory study may inform future research on classroom acoustics.

Introduction

The purpose of the current study was to gather detailed information about the school-day listening environments of three cohorts of children in mainstream educational environments. This study served as a precursor to a larger study investigating hearing instrument fitting strategies for children in non-quiet listening environments and situations. Modern hearing instruments typically offer some combination of frequency-gain adjustment, directional microphones, and digital noise reduction (DNR) with the goal of providing better speech recognition and listening comfort/tolerance in noise. While research has demonstrated that directional microphones can improve children's speech recognition in noise performance (Auriemmo et al., 2009; Gravel, Fausel, Liskow, & Chobot, 1999; Kuk, Kollofski, Brown, Melum, & Rosenthal, 1999), the use of DNR with children has not demonstrated any measurable improvement (Pittman, 2011; Stelmachowicz et al., 2010). These results are consistent with similar findings in adult listeners, and have led to mixed recommendations regarding the use of directional microphones and DNR in pediatric hearing instrument fittings. Some guidelines do not recommend using these features (AAA, 2003), whereas others consider them viable options (Bagatto, Scollie, Hyde, & Seewald, 2010; CASLPO, 2002; Foley, Cameron, & Hostler, 2009) or recommend directional microphones universally (King, 2010).

As part of an overall project investigating strategies to improve children's hearing instrument fittings for non-quiet listening, the current study explored the daily listening experiences of children over an entire school day. This exploration included situations beyond the classroom situation of listening to a teacher. This may be an informative first step in determining optimal signal processing for children in non-quiet environments.

Studies of adults who wear hearing instruments have applied the concept of auditory ecology (Gatehouse, Elberling, & Naylor, 1999; Gatehouse, Naylor, & Elberling, 2003, 2006a, b), a concept in which the sound levels across a real-life, real-time sample from an individual hearing instrument wearer are used to inform hearing instrument signal processing choices. This study used an auditory ecology measurement approach in a small number of classroom cohorts. We measured reverberation time (RT) and noise floor levels across the many school environments. Additionally, we measured sound levels across an entire day, rather than a large-scale sampling of sound levels during only targeted (typically classroom) listening situations. This ecological approach allowed the description of both instructional and non-instructional parts of the day, which may serve to improve hearing instrument fitting practices for children attending school. For example, listening to a friend while playing outside is an important listening situation, and one that is not well described in the classroom acoustics literature. This paper presents data across all listening environments and situations encountered by three cohorts of children.

Auditory Ecology: Children in Non-Quiet Environments

Auditory ecology has been defined as the range of acoustical environments that a person experiences, the auditory demands of those environments, and the importance of those demands to an individual's daily life (Gatehouse, et al., 1999; Gatehouse, et al., 2003, 2006a, b). A hearing instrument's ability to support multi-environment listening is a significant predictor of hearing instrument benefit in adults (Hickson, Clutterbuck, & Khan, 2010; Kochkin, 2005). A recent study of hearing instrument outcomes in children suggests that multi-environment listening is also important for children. The study compared two hearing instrument prescriptive algorithms in a sample of school-age children with hearing loss; results are reported across several publications (Ching, Scollie, Dillon, & Seewald, 2010; Ching et al., 2010a; Ching et al., 2010b; Scollie et al., 2010).

Although auditory ecology was not a specific focus of that study, insight into the varied auditory environments experienced by children arose from the diary entries reported in Scollie et al. (2010). The authors sought to identify a relationship between prescription preferences and the different listening situations encountered by the children by performing a principal components analysis on the children's preference ratings. From this analysis, two components emerged that each contained several listening environments. The first component consisted of loud, noisy, and reverberant situations: shopping mall, restaurant, car/bus/train, playground, family at home, watching TV or a movie, friends in class, and teacher in class. The second component consisted of quiet, or low-level, listening situations: friends in class, soft speech, sounds from behind, teacher in class, and sounds in the environment (Scollie et al., 2010). Interestingly, the classroom listening ratings were correlated with both components, suggesting that the classroom environment presents situations that vary between quiet and noisy. Overall, the results indicated that children need hearing instrument strategies that effectively manage listening in noisy situations, as well as strategies that optimize speech intelligibility in quiet or communication-intensive situations (Scollie et al., 2010). Considering the significant amount of time children spend in school, the current study focused on exploring children's listening environments and the situations encountered in that environment. Although this was not primarily a study of classroom acoustics per se, traditional measures of room acoustics were included to allow for a description of the children's classrooms; these measures are defined below.

Room Acoustics

The characteristics of a speech signal, and the ability of listeners to understand the speech signal, depend in part on the acoustic properties of the room in which the signal is presented. There are multiple factors to consider when classifying a room, such as the level of background noise, the level of the talker, the amount of reverberation in the room, and the distance of the talker from the listener (Boothroyd, 2004; Crandell & Smaldino, 2000; Finitzo-Hieber & Tillman, 1978; Nábĕlek & Nábĕlek, 1994; Nelson & Soli, 2000; Smaldino, Crandell, Brian, Kreisman, & Kreisman, 2008). The various acoustic properties of a room have also been shown to have differential effects on listeners depending on age and hearing status, such that younger children and children with hearing loss are more affected by increased RT and decreased signal-to-noise ratio (SNR; Boothroyd, 2004; Finitzo-Hieber & Tillman, 1978; Nábĕlek & Nábĕlek, 1994; Nelson & Soli, 2000; Smaldino, et al., 2008). The implications, importance, and measurement of classroom acoustics have been widely documented in the literature (Finitzo-Hieber & Tillman, 1978; Knecht, Nelson, Whitelaw, & Feth, 2002; Larsen & Blair, 2008; Nelson, Smaldino, Erler, & Garstecki, 2008; Nelson & Soli, 2000; Picard & Bradley, 2001; Pugh, Miura, & Asahara, 2006; Shield & Dockrell, 2004).

Background noise generally refers to any sound that interferes with or impedes what a listener wants or needs to hear (Knecht, et al., 2002; Smaldino, et al., 2008). Examples of background noise include sounds from sources within a room (e.g., ventilation systems, computer fans, and overhead projectors), sounds from external sources (e.g., traffic noise, grounds maintenance equipment, and sound made by people in adjacent rooms or outside the building), as well as sounds made by the children themselves. Background noise negatively affects speech recognition ability by reducing the audibility of acoustic cues present in a speech signal that are important for understanding and distinguishing speech sounds (Smaldino, et al., 2008). The level of background noise present in classrooms has been the focus of many classroom acoustic studies and has been reported to range from under 30 dBA to over 70 dBA (Crandell & Smaldino, 1994; Knecht, et al., 2002; Nelson, et al., 2008; Pearsons, Bennett, & Fidell, 1977; Pugh, et al., 2006). The presence of students generally increases the level of noise in a classroom, with increases in noise levels varying from approximately 2 dBA to 30 dBA between unoccupied and occupied classrooms (Bess, Sinclair, & Riggs, 1984; Hodgson, 1994; Picard & Bradley, 2001).

In order to be understood clearly, the level of speech in a given environment must be sufficiently above the level of background noise. The level of speech relative to the level of background noise is typically expressed as SNR, which represents the difference (in dB) between the level of the speech signal and the background noise level. The SNR encountered in classrooms can range from -7 to +15 dB (Blair, 1977; Crandell & Smaldino, 2000; Houtgast, 1981; Markides, 1986; Pearsons, et al., 1977), which may indicate that children often listen at SNRs poorer than the recommended minimum of +15 dB SNR for educational settings (ASHA, 2005). Additionally, the effects of reverberation in the room and distance from the talker can impact whether the speech-to-competition ratio is sufficient for speech understanding.
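The SNR bookkeeping described above is level subtraction in decibels. A minimal sketch (the function name and the example levels are illustrative, not measurements from the article):

```python
# SNR is the difference (in dB) between the speech level and the
# background-noise level; ASHA (2005) recommends at least +15 dB for
# educational settings. The levels below are illustrative only.
def snr_db(speech_level_dba, noise_level_dba):
    return speech_level_dba - noise_level_dba

# A 65 dBA teacher's voice over 60 dBA of classroom noise:
classroom_snr = snr_db(65.0, 60.0)
print(classroom_snr)          # 5.0
print(classroom_snr >= 15.0)  # False: below the recommended minimum
```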


Reverberation is the persistence of sound energy in a room due to reflections of the sound energy from floors, ceilings, and objects in the room. Reverberation time, specifically RT60, refers to the length of time required for the level of an emitted sound (at a particular frequency) to decrease by 60 dB after the signal is stopped. RT is dependent upon the size and shape of a room, as well as the sound absorptive properties of the walls, ceilings, and objects within the room (Boothroyd, 2004; Nábĕlek & Nábĕlek, 1994; Smaldino, et al., 2008). Measurement of RT in classrooms has been reported to range from 0.4s to 1.2s. For comparison, audiometric test booths typically have RTs of approximately 0.2s, living rooms and offices can have RTs of 0.4s to 0.8s, while auditoriums and churches can have RTs greater than 3.0s (Nábĕlek & Nábĕlek, 1994; Smaldino, et al., 2008). In general, the ability to understand speech in a room decreases with increasing RT (Nábĕlek & Nábĕlek, 1994; Smaldino, et al., 2008); however, it is important to consider the interactions between background noise sources, reverberation, and the distance between talker and listener due to the synergistic effects of these variables.

The acoustics of a speech signal change over distance, and the sound arriving at the location of the listener is typically divided into direct and reflected energy components (Boothroyd, 2004; Crandell & Smaldino, 2000; Nábĕlek & Nábĕlek, 1994; Smaldino, et al., 2008). Direct sound energy consists of sound waves that travel straight to the listener, without reflecting off any surfaces in the room. Reflected energy can be divided into two types: (a) early reflections, which are sound waves that reach the listener shortly after the direct sound (within approximately 50 msec), and (b) late reflections, which arrive at the listener after reflecting off of multiple surfaces in the room. Depending on the distance from the talker and the characteristics of the room, the signal arriving at the listener may be predominantly direct sound energy, a mixture of direct and reflected energy, or predominantly reflected energy.

Critical Distance (Dc) is the point in a room where the direct sound energy is equal to the reflected sound energy; at locations closer than Dc, the effects of reverberation are minimized. However, at locations further than Dc, reflected energy can interfere with (or mask) the primary speech signal, which makes understanding difficult. In general, speech understanding decreases with increasing distance from the talker until Dc is reached. Beyond Dc, performance is degraded but relatively constant with increasing distance. In order to maximize speech understanding, the distance between talker and listener should be minimized and remain within Dc (Boothroyd, 2004; Crandell & Smaldino, 2000; Nábĕlek & Nábĕlek, 1994; Smaldino, et al., 2008).

The American National Standards Institute (ANSI) has outlined recommended acoustic criteria for classrooms (ANSI S12.60, 2010). This standard recommends a maximum background noise level of 35 dBA and a maximum RT60 of 0.6s for classrooms with an enclosed volume of less than 283 m3 (10,000 ft3); for larger classrooms, the background noise level recommendation remains at 35 dBA with the recommended maximum RT60 increased to 0.7s. In studies of background noise levels and RT in classrooms, the majority of classrooms surveyed meet ANSI recommendations (ANSI S12.60, 2010) for RT60, but they fail to meet background noise level criteria (Knecht, et al., 2002; Nelson, et al., 2008; Pugh, et al., 2006).

Children with hearing loss require a louder speech signal, higher SNR, and lower RT than their peers with normal hearing (Boothroyd, 2004; Elliott, 1979; Fallon, Trehub, & Schneider, 2002; Nábĕlek & Nábĕlek, 1994; Neuman, Wroblewski, Hajicek, & Rubinstein, 2010; Scollie, 2008; Smaldino, et al., 2008). To facilitate listening during formal classroom instruction, a wireless microphone can be worn by the teacher to enhance children's speech understanding. The remote microphone sends signals to the child's listening device(s) (hearing instruments or other devices) via frequency modulation (FM) signal transmission. This strategy is effective in overcoming the effects of background noise, room reverberation, and teacher-to-student distance (Boothroyd & Iglehart, 1998; Hawkins, 1984; Lewis, Feigin, Karasek, & Stelmachowicz, 1991; Pittman, Lewis, Hoover, & Stelmachowicz, 1999; Thibodeau, 2010). However, children experience many situations in which the primary signal of interest is not an individual teacher: playing at recess, team games in gym class, and conversations in the hallway between classes are all examples. In these situations, children may not be using an FM system; however, they are likely still using their hearing instruments. Thus, to inform hearing instrument development and fitting processes, and ultimately improve the validity of pediatric prescriptive algorithms, an understanding of the complex listening needs of children at school (which extends beyond the existing literature on classroom acoustics) is needed.

Purpose and Research Questions

The purpose of this study was to describe the acoustic environments and listening situations encountered by children across an entire day at school or daycare. The goal was not to replicate prior studies of classroom acoustics; rather, some classroom acoustics data are presented to contextualize the main purpose of this research: to explore the daily listening needs of children at school beyond instructional time. This study sought to address the following research question: What are the instructional and non-instructional listening situations experienced by school-age children throughout a school day? By situations, we mean signal and noise types, along with sources and levels. In addressing this question, some basic room acoustics data were also obtained. These measurements are compared to those reported in the literature of larger-scale classroom acoustics, in order to determine the representativeness of the chosen sites of study.


Method

Study Sites
Three sites in London, Ontario, Canada were studied, with approval from The University of Western Ontario's Health Sciences Research Ethics Board and appropriate officials of the local school board and daycare center. The school sites included an elementary school (kindergarten to grade 8, ages 5 to 14 years) and a high school (grades 9 to 12, ages 13 to 18 years) that support children with hearing loss through hearing resource programs (taught by teachers of the deaf and hard of hearing) within a mainstream school setting. These two sites were chosen because the cohorts of students who use hearing instruments at these two sites would ultimately participate in a future study of hearing instrument fitting for non-quiet environments. The third site, a daycare (ages 3 months to 5 years), was chosen to broaden the range of ages and environment types included.

Procedures
Unoccupied room measurements. The study began with acoustic measurements made around the school and classroom environments encountered by students on a daily basis; these measurements were taken in unoccupied spaces after hours. Specifically, the level and spectra of the noise floor (dBA), as well as estimates of the reverberation time (RT60) in each space, were measured. These measurements were taken with a portable system consisting of a laptop (LG R405G) running SpectraPLUS version 5.0.26.0 (Pioneer Hill Software LLC, 2008) connected to an external sound card (Sound Devices, LLC – USBPre). SpectraPLUS is an acoustic signal analysis software suite. The suite includes a spectrum analyzer with up to 24-bit precision in both real-time and post-processing modes and performs signal analysis with 1/1- to 1/96-octave bandwidths and fast Fourier transform (FFT) sizes of 32 through 1,048,576 points. The spectral analysis software can also perform spectral ANSI weighting (flat, A, B, and C) and total power calculation of acoustic signals. SpectraPLUS also includes a reverberation time utility that generates and presents a broadband signal while automatically recording the level of acoustic energy in a room over a specified time interval; this utility is used for estimating the reverberation time of rooms. An AKG C4000B condenser microphone (1-inch dual-diaphragm condenser transducer with a selected omnidirectional polar pattern) was used for recording all signals. A powered speaker (Simeon 500WU) was used for stimulus presentation in the RT60 estimates.

Noise floor measurements were performed by positioning the recording microphone in the center of the room and then recording a 30-second sample with SpectraPLUS. Post-processing was then done with SpectraPLUS to calculate the noise floor level (in dBA) and the spectral distribution of the noise.

Reverberation time estimates were made by positioning the recording microphone in the center of the room and then positioning the presentation speaker at the same height and approximately two meters from the microphone. Measurements were controlled by the Reverberation Module of SpectraPLUS, set to estimate RT60 based on RT20 (Pioneer Hill Software LLC, 2008). A total of three reverberation time measurements were conducted in each space; the results were then averaged to provide an estimate of RT60 for the corresponding space. Table 1 summarizes general characteristics of the various rooms across the three sites, including whether rooms had carpet or tile, windows with or without curtains, and/or active ventilation systems.

Table 1. Room characteristics across sites.

Room                          Floor    Windows            Ventilation System
Elementary school
  Mainstream classroom        Tile     Yes, curtains      No
  Hearing resource classroom  Carpet   Yes, curtains      No
  Music room                  Tile     None               Yes
  Computer room               Tile     None               Yes
High school
  Mainstream classroom        Tile     None               Yes
  Hearing resource classroom  Carpet   Yes, curtains      No
  Computer room               Carpet   None               No
Daycare
  Infant room                 Tile     Yes, no curtains   No
  Toddler room                Tile     Yes, no curtains   No
  Pre-school room             Tile     Yes, no curtains   No

Note: "Ventilation System" refers to active systems with fans or blowers emitting noise at levels greater than 40 dBA. While all classrooms had air circulation, only some had audible (greater than 40 dBA) ventilation noise; rooms with audible ventilation system noise are marked as "Yes" and those without audible systems are marked as "No."

Observation and dosimetry phase. After completion of the unoccupied acoustic measurements, the observation phase of the study began. Students were observed and shadowed at each of the three sites for several school days. Sound samples of occupied spaces were recorded during observations with the portable laptop system equipped with SpectraPLUS; the equipment was set to record with a bandwidth of 20 Hz to 20,000 Hz.

The portable system was used to record sound samples during lesson periods, nutrition breaks, and at recess. The collected sound samples were then post-processed with SpectraPLUS to calculate the noise level (in dBA) and spectral distribution of the acoustic environment. MATLAB (The Mathworks Inc., 2004) was used to estimate the SNR of teachers' voices during lesson periods in classrooms at the elementary and high school sites. This was done by calculating the variance of the noise component of the recorded signal, using samples taken during pauses in the teachers' speech, and then calculating the variance of the speech signal by subtracting the noise component variance from the variance of the total recorded signal. Since variance is proportional to intensity (or power) and SNR is defined as a ratio of intensities, the ratio of variances was used to calculate SNR from the recordings. The SNR estimates from two to three recordings in each lesson were then averaged. SNR was not estimated for the daycare rooms because education for children in daycares is play-based, rather than lesson- or lecture-based, as recommended by the Ontario provincial government (Best Start Expert Panel on Early Learning, 2007).

A Larson Davis Spark 706, Type 2 dosimeter was used during observations in order to record the sound levels experienced by students over the course of their days at school. The dosimeter was worn by an experimenter who attended all classes and activities along with the cohorts of children. The dosimeter microphone was positioned on the observer's left shoulder in order to have the microphone as close as possible to the left ear. The device was set to record the level in dBA at 10-second intervals over the duration of the school day; the length of the school day varied by site (daycare, elementary school, and high school). Data are reported in equivalent sound level (Leq), which is the average of the sound levels (in dBA) across the 10-second recording intervals.

Written notes were made during observations to classify the type of listening situation the students were in at any particular moment as: (a) "quiet" when there was no audible background noise or an overall level below 50 dBA, (b) "speech alone" when there was a single primary talker amidst no audible background noise, (c) "speech in noise" when there was a speech signal of interest (from one or more talkers) amidst audible background noise, or (d) "noise alone" when the only acoustic signal consisted of undesired sound with no speech. Sources of noise (such as computer fans, traffic noise, and ventilation systems) were also noted. A similar method was used by Ricketts, Picou, Galster, Federman, & Sladen (2010) in the evaluation of children's use of directional microphone technology.

Figure 1. Bar graphs showing reverberation time (RT60) for various rooms at the daycare (panel a), elementary (panel b), and high school (panel c) sites.

values across all three sites (Figure 1). In general, core learning areas (such as classrooms, computer rooms, and hearing resource rooms) demonstrated RT60 of under 0.6s. This indicates that the primary instructional environments in the schools measured were in compliance with ANSI recommendations. Gymnasia demonstrated large RT60 values of over 1.0s at all three sites.

Results

Reverberation Time across School-Day Settings

Reverberation time (RT60) data showed a wide range of

27 Journal of Educational Audiology vol. 17, 2011
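The variance-subtraction SNR estimate described in the Procedures can be sketched in a few lines. The sketch below is an illustrative Python reconstruction (the authors used MATLAB, and their code is not published); the function and variable names are invented for the example.

```python
import numpy as np

def estimate_snr_db(recording, pause_mask):
    """Estimate the SNR (in dB) of speech recorded in steady noise.

    recording  : 1-D array of audio samples (assumed zero-mean).
    pause_mask : boolean array, True for samples taken during pauses
                 in the talker's speech (noise alone).
    """
    total_var = np.var(recording)              # power of speech + noise
    noise_var = np.var(recording[pause_mask])  # noise power, from the pauses
    # Speech power is recovered by subtracting the noise variance from
    # the total variance; floor it to avoid taking the log of a
    # non-positive number when the estimate is noisy.
    speech_var = max(total_var - noise_var, 1e-12)
    # Variance is proportional to intensity, so the variance ratio is
    # an intensity ratio and can be expressed in dB directly.
    return 10.0 * np.log10(speech_var / noise_var)
```

As in the study, two to three such estimates per lesson would then be averaged.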

Hallways at the elementary and high school sites demonstrated relatively high RT60 values, whereas the hallway at the daycare demonstrated a relatively low RT60. This difference is likely due to the low ceilings in the daycare hallway and the numerous articles of clothing lining it, which would act to absorb sound reflections. However, areas such as gymnasia and hallways are not considered "core learning areas" and, therefore, are not within the scope of the ANSI S12.60 (2010) recommendations.

Spectral Characteristics across School-Day Settings and Situations

Spectral data from classrooms at all three sites showed a broad range in level (Figure 2). Dosimetry data (presented later) offer explanations of some of the spectral results. The unoccupied noise levels in the daycare and elementary school were similar, while the noise floor of the high school classroom was more than 10 dB higher. This difference was likely due to the ventilation system in the high school classroom, which remained active for most of the school day. The levels and shape of the noise present while students were engaged in individual seatwork were similar in the elementary and high school classrooms; the pattern appears similar during the naptime of the pre-school children at the daycare, with the exception of less low-frequency energy at the daycare. Written observation data indicated that music was played throughout the entire naptime period. The highest overall level (71 dBA) was seen when the pre-school daycare children were engaged in indoor activities, with the majority of the energy in the mid-frequency region. Mid-frequency emphasis is characteristic of raised vocal effort in the speech of both adults and children (Pearsons et al., 1977).

Figure 2. Amplitude spectra of sound sources at each observation site: pre-school room at daycare (panel a), elementary school classroom (panel b), and high school classroom (panel c). Overall level is shown above each curve.

Table 2 shows SNR estimates for a number of classroom settings, along with the corresponding RT60 and unoccupied noise floor estimates. A range of SNRs is seen across the rooms at both the high school and elementary school sites. The competing noise from computers and ventilation systems in the elementary school's music and computer rooms resulted in low SNRs of only +5 dB. The SNRs of male teachers' lessons in regular classrooms and of lessons in the hearing resource classrooms at both sites were the highest estimates collected. The elementary school had a broader range of unoccupied noise floor levels than the high school: the difference between the lowest and highest noise floor levels was 23 dB at the elementary school and only 6 dB at the high school. The hearing resource classrooms at both sites had low reverberation times and noise floors. Although both hearing resource rooms were carpeted and had curtains for the windows, the room at the elementary school was also equipped with acoustic ceiling tiles and acoustic panels on the walls; these additions provide extra sound absorption and contribute to the elementary school hearing resource classroom's low RT60. The hearing resource rooms at both sites had similar noise floor levels and similar SNRs during instruction. The addition of carpet and window treatments in both hearing resource rooms was likely the main factor contributing to the lower RT60, and thus the improved listening environment, of those rooms relative to the mainstream classrooms.
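For context on the RT60 values compared above: reverberation time is commonly estimated by fitting a line to a recorded level-decay curve and extrapolating to a 60 dB drop. The sketch below is a generic T20-style evaluation, not SpectraPLUS's actual algorithm (which is not documented here); names and the fitting window are illustrative.

```python
import numpy as np

def rt60_from_decay(levels_db, dt, fit_lo=-5.0, fit_hi=-25.0):
    """Estimate RT60 from a sampled level-decay curve.

    levels_db : sound level (dB) sampled every `dt` seconds after the
                test signal stops.
    Fits a line to the part of the decay between `fit_lo` and `fit_hi`
    dB below the initial level, then extrapolates the fitted slope to
    a full 60 dB drop (a T20-style evaluation range).
    """
    levels_db = np.asarray(levels_db, dtype=float)
    t = np.arange(len(levels_db)) * dt
    rel = levels_db - levels_db[0]            # decay re: starting level
    sel = (rel <= fit_lo) & (rel >= fit_hi)   # fitting window of the decay
    slope, _ = np.polyfit(t[sel], rel[sel], 1)  # decay rate in dB/s (negative)
    return -60.0 / slope                      # time for a 60 dB drop
```

A room decaying at 100 dB/s, for example, yields RT60 = 0.6 s, right at the ANSI S12.60 limit for core learning areas.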


Table 2. Acoustic characteristics of classrooms across sites.

Room                           RT60 (sec)   Unoccupied Noise Floor (dBA)   Average SNR (dB)
Elementary school
  Mainstream classroom         0.35         29                             13a
  Hearing resource classroom   0.18         32                             12b
  Music room                   0.23         45                             5b
  Computer room                0.45         52                             5b
High school
  Mainstream classroom         0.53         41                             12a, 8b
  Hearing resource classroom   0.34         35                             11b
  Computer room                0.30         35                             11b
Daycare
  Infant room                  0.56         37                             n/a
  Toddler room                 0.50         34                             n/a
  Pre-school room              0.70         31                             n/a

a Male teacher. b Female teacher.

Sound Levels and Sources across the School Day

Dosimetry data show a large degree of variation in sound levels and in listening environments and situations over the course of a school day across all three sites (Figure 3). The youngest group of children (toddlers) experienced the highest levels of all (panel a), followed by the pre-school children at the daycare (panel b), with both groups of children experiencing maximum Leq levels of 90 dBA or higher. The daycare data show more sustained and higher levels than the elementary (panel c) or high school (panel d) sites. The daycare children of both age groups also show the same pattern of lower levels during naptime. The elementary and high school sites show more frequent variation in sound levels over the course of their school days, although lower in level, when compared to the daycare children.

Figure 3. Dosimetry data shown as LeqA by time at each observation site: toddler room at daycare (panel a), pre-school room at daycare (panel b), elementary school (panel c), and high school (panel d). Leq curves are labeled with events and environments from observation records.

There were notable differences between the elementary and high school computer rooms, as shown in Tables 1 and 2. The high school computer room had a lower RT60, lower noise floor, and higher SNR relative to the elementary school computer room. The elementary school computer room was not carpeted and was equipped with an active ventilation unit. These factors contributed to the higher RT60 and noise floor, which in turn contributed to the lower SNR, of the elementary school computer room relative to the carpeted computer room at the high school (which did not contain a ventilation unit). As mentioned previously, SNR was not estimated for the daycare rooms due to their play-based, rather than lesson- or lecture-based, curricula (Best Start Expert Panel on Early Learning, 2007).
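A note on the Leq values plotted in Figure 3: combining interval levels into a single equivalent level is an energy average of the dBA values, not an arithmetic mean of the decibels. A minimal sketch (illustrative only; the function name is invented):

```python
import math

def leq(dba_samples):
    """Equivalent continuous sound level over equal-length intervals.

    Leq is the level of a steady sound carrying the same total acoustic
    energy, so the dB values are averaged on an intensity scale
    (10 ** (L / 10)) rather than averaged directly in dB.
    """
    mean_intensity = sum(10.0 ** (level / 10.0) for level in dba_samples) / len(dba_samples)
    return 10.0 * math.log10(mean_intensity)
```

For example, one 90 dBA interval paired with one 60 dBA interval yields an Leq near 87 dBA: the loud interval dominates the energy average, which is why brief loud events pull the equivalent level up so strongly.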


Dosimetry data may be summarized as the distribution of sound levels (Leq) over time. The Leq distributions are shown as box plots in Figure 4. In this figure, the box encloses the central 50% of the data points. The solid line within the box represents the median; the lower edge of the box represents the lower quartile (25th percentile), and the upper edge represents the upper quartile (75th percentile). Lines extend from the ends of the box to the maximum and minimum data points above and below the box, respectively. Similar to the dosimetry Leq graphs, the box plots show a large range of levels over the course of a day, with minimum values of 40 dBA and maximum values higher than 90 dBA. The higher levels in the daycare, relative to the elementary and high school sites, are apparent from the median points, with daycare children experiencing higher sound levels than the elementary and high school children.

Figure 4. Boxplots depicting the LeqA data for each observation site.

Charted notes made during the observations were analyzed to yield the proportion of time the children spent in several environments (Figure 5). On average across the three sites, children spent 80% of their total time in a mixture of speech in noise, and seldom were in situations classified as quiet, speech alone, or noise alone (4% of total time, on average). In the daycare setting, there was no time considered to be quiet (i.e., no background noise, or an overall level below 50 dBA), and in the elementary school there was no time considered to be speech alone. Sources of competing noise were similar across the three sites, according to the written observation data. These sources included active ventilation systems, fan noise from computers, traffic noise from outside, children's voices and lessons from the same and adjacent rooms, and activity in hallways outside of the classrooms.

Figure 5. Proportion of time spent in each sound environment, as classified by the observer for each site.

It is worth emphasizing some of the observational results here. Referring to Figure 3 again, observations revealed the following listening environment-situation combinations. Daycare environments and situations included: outdoor play, indoor play, and naptime (indoor). Elementary school environments and situations included: instructional lessons in classrooms, instructional lessons in computer rooms, hallway noise (with communication attempts), quiet seatwork in classrooms, outdoor recess, lunch in classroom (with conversation), indoor recess in gymnasium, and gym class. High school environments and situations included: hallway noise (with communication attempts), resource periods in resource rooms, gym class, lunch in cafeteria (with conversation), and music class. In all of these situations, communication is occurring to varying degrees of success. Implications of the interaction of sound level, environment, and situation are worth considering for discussion.

Discussion

Auditory Ecology

The main contribution of this study is its attention to the non-instructional listening situations that children encounter in their daily lives at school. This investigation revealed that children spend time in a variety of rooms with a broad range of reverberation levels and spectral characteristics. Furthermore, the types and levels of sound sources that children experience throughout their school days are also quite diverse. The variability in noise levels between unoccupied and occupied classrooms has been well documented in the literature (Bess et al., 1984; Hodgson, 1994; Picard & Bradley, 2001). However, the novel contribution of the current study is the application of the concept of auditory ecology to school-day listening. The data presented illustrate and detail the range of acoustic environments and situations, as well as the challenges inherent in each, which children experience at school.
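The box-plot components described for Figure 4 correspond to a five-number summary of each site's Leq samples. A small sketch (quantile conventions vary across statistics packages, so values may differ slightly from the figure's plotting software):

```python
import statistics

def five_number_summary(leq_values):
    """Return (minimum, lower quartile, median, upper quartile, maximum),
    the five values each box-and-whisker in a plot like Figure 4 encodes:
    whisker ends, box edges, and the line inside the box."""
    q1, median, q3 = statistics.quantiles(leq_values, n=4)  # 25th, 50th, 75th
    return min(leq_values), q1, median, q3, max(leq_values)
```

The box spans q1 to q3 (the central 50% of the data), the median is drawn inside it, and the whiskers reach the minimum and maximum.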
Implications of these results will be discussed in the context of hearing aid fittings. This discussion offers an exploration of the auditory ecology of children in the school setting, which may inform future hearing instrument fitting approaches.

Although the current study was not an attempt to replicate prior large-scale classroom acoustics research, results suggest that the cohorts experienced representative classroom acoustics, with average noise floor and RT60 measurements resembling those of Knecht et al. (2002) and Larsen and Blair (2008). The purpose of collecting RT60 and spectral data in the current study was to provide a frame in which to view the dosimetry data, which were collected in order to evaluate the auditory ecology of the children in the study.

The work of Gatehouse et al. (1999; 2003, 2006a, b) demonstrated the importance of considering an individual's auditory ecology in hearing instrument fittings and candidacy. Results of the Gatehouse et al. studies indicated differential benefit from hearing instrument processing strategies directly related to the diversity in participants' auditory ecology. The current study's combined data from dosimeter readings and observation notes demonstrate the broad range of environment-situation combinations influencing auditory ecology for the cohorts of children in the present study. For example, these data may suggest that existing practice guidelines which recommend a single hearing instrument listening program (optimized for communication-intensive environments [AAA, 2003; CASLPO, 2002]) may not adequately serve children across the diverse range of their auditory ecology. Rather, children may benefit from an additional listening program that has been optimized to address non-quiet listening needs.

Implications for Hearing Instruments

Current practice guidelines are mixed with regard to recommendations for noise management strategies in pediatric hearing instrument fittings. Some sources state that there is insufficient evidence to warrant use of advanced processing (AAA, 2003; Foley et al., 2009), others consider these strategies viable options (Bagatto et al., 2010; CASLPO, 2002), while still others recommend features such as directional microphones ubiquitously (King, 2010). Two strategies commonly used for adults are directional microphones and digital noise reduction (DNR). In adults and children, use of directional microphones has been shown to improve speech understanding when the speech signal is in front and noise comes from the back or sides of the listener (Amlani, 2001; Auriemmo et al., 2009; Bentler, 2005; Gravel et al., 1999; Hawkins & Yacullo, 1984; Hornsby & Ricketts, 2007a, 2007b; Kuk et al., 1999; Ricketts, 2000, 2001, 2005). However, in those situations the listener is expected to point his or her head toward the talker, and close-range listening is also assumed. Therefore, the classroom environment may not allow children to benefit from directional microphones, as the talker may not be frontal or within an appropriate distance. Head orientation during note-taking, for example, has been shown to limit directional benefit (Ricketts & Galster, 2008).

DNR has been shown to improve listening comfort but not speech understanding in noise for adults (Bentler & Chiou, 2006; Bentler, Wu, Kettel, & Hurtig, 2008; Bentler, 2005; Mueller, Weber, & Hornsby, 2006; Palmer, Bentler, & Mueller, 2006; Ricketts & Hornsby, 2005). Similarly, use of DNR has shown no improvement in children's recognition of speech in noise (Pittman, 2011; Stelmachowicz et al., 2010). Thus, this technology may not provide adequate speech understanding in the classroom; FM systems are, therefore, preferred (AAA, 2003; CASLPO, 2002).

For these reasons, typical noise management technologies may be difficult to apply for children, or may offer insufficient benefits, particularly during instruction. Children, nonetheless, experience situations of problematic loudness and/or noisiness that should be addressed in hearing instrument fittings (Scollie et al., 2010). To address this need, loudness management strategies, such as a secondary listening program for use in noisy situations, have been suggested (Scollie et al., 2005, 2010). The present study describes the wide range of acoustic environments and listening situations encountered by children of three age ranges, and was developed in order to inform future studies of hearing instrument signal processing for non-quiet and non-instructional periods of the school day.

Hearing instrument digital signal processors use signal classification algorithms to classify listening situations as "speech," "noise," or other types of signals (e.g., wind or music). A notable result emerges from the combined data presented in Figure 4 and Figure 5. The data show that the vast majority of students' days are spent in "speech in noise" situations, across a variety of environments, rooms, and levels. Across the three sites, approximately 45% of speech-in-noise situations occurred at moderate sound levels (60 to 70 dBA).

The DNR systems available in commercial hearing instruments are typically activated by internal measurements of SNR, overall input level, or both (Bentler & Chiou, 2006; Chung, 2004). If a hearing instrument classifier assumes that "noise" only occurs in loud environments, there is potential for classification errors to occur (Chung, 2004). The data from this study may, therefore, serve to inform future work on hearing instrument signal processing for children by beginning to identify the range of SNRs and input levels children experience in their daily lives.

Likewise, audiologists typically fit hearing instruments for hearing in speech-dominated environments. In the classroom, school-age children who wear hearing instruments typically have a personal FM system, which is an effective and optimal strategy for that situation. However, the results of this study show that children have substantial listening needs outside traditional classroom instruction or speech-dominated environments. This result aligns with the results of the Scollie et al. (2010) study, which reported a variety of listening needs and requirements that were best served by multiple hearing instrument listening programs. In certain situations (such as listening to a teacher or peer during hallway travel, playing team sports, and participating in dynamic group learning activities), use of a personal FM system may not be optimal or practical. In these situations, a secondary listening program that uses additional signal processing strategies (such as frequency-gain shaping, directional microphones, DNR, or a combination of these strategies) may be effective for improving loudness comfort and/or speech understanding.

A secondary program can be selected either manually by the child or automatically by the hearing instrument processor. While there is some evidence that children can manually switch listening programs effectively and appropriately (Scollie et al., 2010), other studies have shown that many children do not manually switch listening programs appropriately or at all (Ricketts et al., 2010). Most modern hearing instruments offer automatic listening program switching, which uses the DSP classification system of the hearing instrument to decide which listening program or microphone mode should be used. However, research suggests that current automatic switching systems may not be appropriate for children's use in school settings (Ricketts et al., 2010). Therefore, while the implementation of secondary listening programs may address the diverse listening needs of children at school, clinicians need to consider the individual abilities of children when designing and implementing a secondary listening program in a pediatric fitting. The data presented in the current study may inform future work regarding the use of hearing instrument signal processing for children's listening needs across multiple environments.

Implications for Classroom Acoustics Research

Existing literature provides acoustic descriptions of static classroom acoustics and ANSI-recommended criteria for classroom acoustics (Knecht et al., 2002; Larsen & Blair, 2008; Nelson et al., 2008). The current data generally agree with the existing literature in demonstrating lower-than-recommended SNRs even in rooms which satisfy recommended RT60 and noise floor criteria. Although personal FM systems can assist students with hearing loss in situations with a low SNR, it is important to note that the ANSI S12.60 (2010) and ASHA (2005) recommendations apply to all school-age children regardless of hearing status. Furthermore, the current study suggests a need to consider the breadth of listening environments (multiple rooms and locations throughout a school day) and situations (teacher talking, classmates talking, music) that children encounter. This need is relevant to those interested in the importance of classroom acoustics for optimal learning, because not all of a child's formal learning takes place in a traditional classroom with a teacher's voice as the main signal of interest. However, the acoustic measurements and observation data reported in this study represent an admittedly small sample, with limited generalizability. It is suggested that future research pursue a large-scale investigation of the acoustics, dosimetry characteristics, or classification of non-classroom environments, for example.

Future Research

The primary focus of this work was to determine the range and types of listening situations children encounter across a school day, in order to provide context for future work in hearing instrument signal processing strategies for children with hearing loss. The results of this study have demonstrated that children experience a wide range of noise levels and types across a variety of listening environments and situations over the course of a school day. Classroom RT60 measurements were generally under the 0.6 s maximum recommended by ANSI S12.60 (2010). However, unoccupied noise floor levels ranged from several dB below to almost 20 dB above the recommended 35 dBA noise level, and estimates of SNR were generally below the +15 dB recommended by ASHA (2005). Notably, hearing resource rooms that had acoustic treatment demonstrated better acoustic properties than untreated rooms. The data support a need to consider and classify the noise sources and levels encountered in a school day (such as class activity from adjacent rooms, students in the hallway, and low-level computer noise) in addition to the more traditional definitions of noise (such as machine and equipment noise). Limitations of the current study's sample size preclude statistical analysis and generalization. Yet, the sites selected are in agreement with the existing larger-scale studies reported in the literature. Thus, it is possible to infer that other cohorts of children may be subject to similar amounts of variability in listening environments and situations.

Conclusions

In summary, these data describe the acoustical properties of a typical day at school. Results indicate that children regularly experience loud situations with levels in excess of 80 dBA, as well as moderate-level situations with poor SNRs. Raised vocal effort of teachers was also demonstrated in the results. Furthermore, children experience a wide range of listening needs dependent on the acoustic characteristics of the listening environment and the activity in which they are participating. Current hearing aid technology offers a variety of options for management of either loud sounds or sounds with a low SNR. Research investigating the application of secondary listening programs in pediatric hearing instrument fittings to assist listening in non-quiet environments and situations appears to be warranted.
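The classification pitfall raised in the discussion of DNR activation (a classifier that assumes "noise" occurs only in loud environments) can be illustrated with a toy rule. Everything below is invented for illustration; commercial hearing instrument classifiers are proprietary and far more sophisticated.

```python
def classify_environment(level_dba, snr_db, loud_threshold=75.0, snr_threshold=5.0):
    """Toy level-gated classifier: flags "noise" only in loud environments.

    Thresholds and logic are hypothetical, chosen only to show how a
    level gate can mask a poor SNR at moderate input levels.
    """
    if level_dba >= loud_threshold and snr_db < snr_threshold:
        return "noise"
    return "speech"
```

Under this rule, a moderate-level classroom sample with a poor SNR (say 65 dBA at +3 dB) is labeled "speech", so no noise management would engage despite the degraded signal. That is exactly the risk for the 60 to 70 dBA speech-in-noise situations that made up roughly 45% of the speech-in-noise time observed in this study.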


Acknowledgements Best Start Expert Panel on Early Learning. (2007). Early learning The authors would like to thank the staff and students of the Thames for every child today: A framework for Ontario early Valley District School Board, especially our liaison Dr. Stella Ng, childhood settings. Ottawa, Canada: Ministry of Children and the YMCA University Child Care in London, Ontario, Canada and Youth Services. for allowing the data collection for this study to take place in Blair, J. C. (1977). Effects of amplification, speechreading, their facilities. We appreciate the guidance of Dr. Todd Ricketts and classroom environments on reception of speech. Volta for our observational data collection. We would also like to thank Review, 79(7), 443-449. Supportive Hearing Systems for loaning to us the loudspeaker Boothroyd, A. (2004). Room acoustics and speech perception. used to present signals for acoustic measurements. Support for this Seminars in Hearing, 25(2), 155-166. work is provided by the Natural Science and Engineering Research Boothroyd, A., & Iglehart, F. (1998). Experiments with classroom Council and the Masons Help 2 Hear Foundation. FM amplification.Ear and Hearing, 19(3), 202. Ching, T. Y. C., Scollie, S. D., Dillon, H., & Seewald, R. (2010a). A cross-over, double-blind comparison of the NAL-NL1 References and the DSL v4.1 prescriptions for children with mild to American Academy of Audiology (AAA). (2003). Pediatric moderately severe hearing loss. International Journal of amplification protocol. Audiology, 49 Suppl 1, S4-15. American National Standards Institute (ANSI). (2010). Ching, T. Y. C., Scollie, S. D., Dillon, H., Seewald, R., Britton, Acoustical performance criteria, design requirements, and L., & Steinberg, J. (2010b). Prescribed real-ear and achieved guidelines for schools, part 1: Permanent schools, ANSI real-life differences in children’s hearing aids adjusted S12.60-2010/Part 1. 
New York: Acoustical Society of according to the NAL-NL1 and the DSL v.4.1 prescriptions. America. International Journal of Audiology, 49 Suppl 1, S16-25. American Speech-Language-Hearing Association. (2005). Ching, T. Y. C., Scollie, S. D., Dillon, H., Seewald, R., Britton, Acoustics in educational settings: Position statement. L., Steinberg, J., et al. (2010c). Evaluation of the NAL- Amlani, A. M. (2001). Efficacy of directional microphone NL1 and the DSL v.4.1 prescriptions for children: Paired- hearing aids: a meta-analytic perspective. Journal of the comparison intelligibility judgments and functional American Academy of Audiology, 12(4), 202-214. performance ratings. International Journal of Audiology, 49 Auriemmo, J., Kuk, F., Lau, C., Kelly Dornan, B., Sweeton, S., Suppl 1, S35-48. & Marshall, S. (2009). Efficacy of an adaptive directional Chung, K. (2004). Challenges and recent developments microphone and noise reduction system for school-aged in hearing aids Part I: Speech understanding in noise, children. Journal of Educational Audiology, 15, 16-27. microphone technologies and noise reduction algorithms. Bagatto, M., Scollie, S. D., Hyde, M., & Seewald, R. (2010). Trends in Amplification, (3),8 83-124. Protocol for the provision of amplification within the Ontario College of Audiologists and Speech-Language Pathologists of infant hearing program. International Journal of Audiology, Ontario (CASLPO). (2002). Preferred practice guideline for 49 Suppl 1, S70-79. the prescription of hearing aids for children. Bentler, R. A. (2005). Effectiveness of directional microphones Crandell, C. C., & Smaldino, J. J. (1994). An update of classroom and noise reduction schemes in hearing aids: a systematic acoustics for children with hearing impairment. Volta Review, review of the evidence. Journal of the American Academy of 96(4), 291-306. Audiology, 16(7), 473-484. Crandell, C., & Smaldino, J. (2000). Classroom acoustics for Bentler, R. A., & Chiou, L.-K. (2006). 
Digital noise reduction: An children with normal hearing and with hearing impairment. overview. Trends in Amplification, 10(2), 67-82. Language, Speech, and Hearing Services in Schools, 31(4), Bentler, R. A., Wu, Y.-H., Kettel, J., & Hurtig, R. (2008). Digital 362. noise reduction: Outcomes from laboratory and field studies. Elliott, L. (1979). Performance of children aged 9 to 17 years International Journal of Audiology, 47(8), 447-460. on a test of speech intelligibility in noise using sentence Bess, F. H., Sinclair, J. S., & Riggs, D. E. (1984). Group material with controlled word predictability. Journal of the amplification in schools for the hearing impaired.Ear and Acoustical Society of America, 66(651), 653. Hearing, 5(3), 138.

33 Journal of Educational Audiology vol. 17, 2011

Fallon, M., Trehub, S. E., & Schneider, B. A. (2002). Children’s use of semantic cues in degraded listening environments. Journal of the Acoustical Society of America, 111(5 Pt. 1), 2242-2249.
Finitzo-Hieber, T., & Tillman, T. W. (1978). Room acoustics effects on monosyllabic word discrimination ability for normal and hearing-impaired children. Journal of Speech & Hearing Research, 21(3), 440-458.
Foley, R., Cameron, C., & Hostler, M. (2009). Guidelines for fitting hearing aids to young infants. National Health Service Newborn Hearing Screening Programme. Retrieved from http://hearing.screening.nhs.uk/getdata.php?id=19254
Gatehouse, S., Elberling, C., & Naylor, G. (1999). Aspects of auditory ecology and psychoacoustic function as determinants of benefits from and candidature for non-linear processing in hearing aids. Paper presented at the 18th Danavox Symposium: Auditory models and non-linear hearing instruments, Kolding.
Gatehouse, S., Naylor, G., & Elberling, C. (2003). Benefits from hearing aids in relation to the interaction between the user and the environment. International Journal of Audiology, 42(Suppl. 1), S77-S85.
Gatehouse, S., Naylor, G., & Elberling, C. (2006a). Linear and nonlinear hearing aid fittings--1. Patterns of benefit. International Journal of Audiology, 45(3), 130-152.
Gatehouse, S., Naylor, G., & Elberling, C. (2006b). Linear and nonlinear hearing aid fittings--2. Patterns of candidature. International Journal of Audiology, 45(3), 153-171.
Gravel, J. S., Fausel, N., Liskow, C., & Chobot, J. (1999). Children’s speech recognition in noise using omni-directional and dual-microphone hearing aid technology. Ear & Hearing, 20(1), 1-11.
Hawkins, D. (1984). Comparisons of speech recognition in noise by mildly-to-moderately hearing-impaired children using hearing aids and FM systems. Journal of Speech and Hearing Disorders, 49(4), 409.
Hawkins, D., & Yacullo, W. (1984). Signal-to-noise ratio advantage of binaural hearing aids and directional microphones under different levels of reverberation. Journal of Speech and Hearing Disorders, 49(3), 278.
Hickson, L., Clutterbuck, S., & Khan, A. (2010). Factors associated with hearing aid fitting outcomes on the IOI-HA. International Journal of Audiology, 49(8), 586-595.
Hodgson, M. (1994). UBC-classroom acoustical survey. Canadian Acoustics, 22, 3-3.
Hornsby, B. W., & Ricketts, T. A. (2007a). Effects of noise source configuration on directional benefit using symmetric and asymmetric directional hearing aid fittings. Ear & Hearing, 28(2), 177-186.
Hornsby, B. W. Y., & Ricketts, T. A. (2007b). Directional benefit in the presence of speech and speechlike maskers. Journal of the American Academy of Audiology, 18(1), 5-16.
Houtgast, T. (1981). The effect of ambient noise on speech intelligibility in classrooms. Applied Acoustics, 14(1), 15-25.
King, A. M. (2010). The national protocol for paediatric amplification in Australia. International Journal of Audiology, 49(S1), 64-69.
Knecht, H., Nelson, P., Whitelaw, G., & Feth, L. (2002). Background noise levels and reverberation times in unoccupied classrooms: Predictions and measurements. American Journal of Audiology, 11(2), 65.
Kochkin, S. (2005). MarkeTrak VII: Customer satisfaction with hearing instruments in the digital age. The Hearing Journal, 58(9), 9.
Kuk, F., Kollofski, C., Brown, S., Melum, A., & Rosenthal, A. (1999). Use of a digital hearing aid with directional microphones in school-aged children. Journal of the American Academy of Audiology, 10(10), 535.
Larsen, J., & Blair, J. (2008). The effect of classroom amplification on the signal-to-noise ratio in classrooms while class is in session. Language, Speech, and Hearing Services in Schools, 39(4), 451.
Lewis, D., Feigin, J., Karasek, A., & Stelmachowicz, P. (1991). Evaluation and assessment of FM systems. Ear & Hearing, 12(4), 268.
Markides, A. (1986). Speech levels and speech-to-noise ratios. British Journal of Audiology, 20(2), 115-120.
Mueller, H. G., Weber, J., & Hornsby, B. W. Y. (2006). The effects of digital noise reduction on the acceptance of background noise. Trends in Amplification, 10(2), 83-93.
Nábĕlek, A., & Nábĕlek, L. (1994). Room acoustics and speech perception. In J. Katz (Ed.), Handbook of clinical audiology (4th ed., pp. 624-637). Baltimore, MD: Lippincott Williams & Wilkins.
Nelson, E. L., Smaldino, J., Erler, S., & Garstecki, D. (2008). Background noise levels and reverberation times in old and new elementary school classrooms. Journal of Educational Audiology, 14, 16-22.
Nelson, P., & Soli, S. (2000). Acoustical barriers to learning: Children at risk in every classroom. Language, Speech, and Hearing Services in Schools, 31(4), 356-361.

34 An Exploration of Non-Quiet Listening at School

Neuman, A., Wroblewski, M., Hajicek, J., & Rubinstein, A. (2010). Combined effects of noise and reverberation on speech recognition performance of normal-hearing children and adults. Ear & Hearing, 31(3), 336.
Palmer, C. V., Bentler, R., & Mueller, H. G. (2006). Amplification with digital noise reduction and the perception of annoying and aversive sounds. Trends in Amplification, 10(2), 95-104.
Pearsons, K. S., Bennett, R. L., & Fidell, S. (1977). Speech levels in various noise environments (EPA-600/1-77-025). Washington, DC.
Picard, M., & Bradley, J. (2001). Revisiting speech interference in classrooms. Audiology, 40(5), 221-244.
Pioneer Hill Software LLC. (2008). SpectraPLUS (Version 5.0.26.0). Poulsbo, WA.
Pittman, A. (2011). Children’s performance in complex listening conditions: Effects of hearing loss and digital noise reduction. Journal of Speech, Language, & Hearing Research, 54(4), 1224-1239.
Pittman, A., Lewis, D., Hoover, B., & Stelmachowicz, P. (1999). Recognition performance for four combinations of FM system and hearing aid microphone signals in adverse listening conditions. Ear & Hearing, 20(4), 279.
Pugh, K. C., Miura, C. A., & Asahara, L. L. Y. (2006). Noise levels among first, second, and third grade elementary school classrooms in Hawaii. Journal of Educational Audiology, 13, 32-38.
Ricketts, T., & Galster, J. (2008). Head angle and elevation in classroom environments: Implications for amplification. Journal of Speech, Language, & Hearing Research, 51(2), 516.
Ricketts, T. A. (2000). Impact of noise source configuration on directional hearing aid benefit and performance. Ear & Hearing, 21(3), 194-205.
Ricketts, T. A. (2001). Directional hearing aids. Trends in Amplification, 5(4), 139-176.
Ricketts, T. A. (2005). Directional hearing aids: Then and now. Journal of Rehabilitation Research & Development, 42(4 Suppl 2), 133-144.
Ricketts, T. A., & Hornsby, B. W. Y. (2005). Sound quality measures for speech in noise through a commercial hearing aid implementing digital noise reduction. Journal of the American Academy of Audiology, 16(5), 270-277.
Ricketts, T. A., Picou, E. M., Galster, J., Federman, J., & Sladen, D. P. (2010). Potential for directional hearing aid benefit in classrooms: Field data. In R. C. Seewald & J. Bamford (Eds.), A sound foundation through early amplification 2010 (pp. 143-152). Chicago, IL: Phonak AG.
Scollie, S. D. (2008). Children’s speech recognition scores: The speech intelligibility index and proficiency factors for age and hearing level. Ear & Hearing, 29(4), 543-556.
Scollie, S. D., Ching, T. Y. C., Seewald, R., Dillon, H., Britton, L., Steinberg, J., et al. (2010). Evaluation of the NAL-NL1 and DSL v4.1 prescriptions for children: Preference in real world use. International Journal of Audiology, 49(Suppl. 1), S49-S63.
Scollie, S. D., Seewald, R., Cornelisse, L., Moodie, S., Bagatto, M., Laurnagaray, D., et al. (2005). The Desired Sensation Level multistage input/output algorithm. Trends in Amplification, 9(4), 159-197.
Shield, B., & Dockrell, J. (2004). External and internal noise surveys of London primary schools. The Journal of the Acoustical Society of America, 115, 730.
Smaldino, J., Crandell, C., Brian, M., Kreisman, A., & Kreisman, N. (2008). Room acoustics for listeners with normal hearing and hearing impairment. In M. Valente, H. Hosford-Dunn, & R. J. Roeser (Eds.), Audiology treatment (2nd ed., pp. 418-451). New York, NY: Thieme.
Stelmachowicz, P., Lewis, D., Hoover, B., Nishi, K., McCreery, R., & Woods, W. (2010). Effects of digital noise reduction on speech perception for children with hearing loss. Ear & Hearing, 31(3), 345-355.
The Mathworks Inc. (2004). MATLAB (Version 7.0.0.19920). Natick, MA.
Thibodeau, L. (2010). Benefits of adaptive FM systems on speech recognition in noise for listeners who use hearing aids. American Journal of Audiology, 19(1), 36-45.

35 Journal of Educational Audiology vol. 17, 2011

Development of Selective Auditory Attention: Effects of the Meaning of Competing Speech and Daily Exposure to Soundfield Amplification

Leigh Ann Reel, Au.D., Ph.D. Candace Bourland Hicks, Ph.D. Texas Tech University Health Sciences Center

In Study One, children’s selective auditory attention was assessed to determine effects of meaning of competing speech and daily exposure to soundfield amplification. Subjects included 70 normal-hearing first grade children whose primary language was English. Four first grade classrooms received soundfield amplification systems (SF group), and four classrooms served as controls (No SF group). Word recognition testing was performed using target English words presented with competing speech spoken in English and French. All testing was administered without soundfield amplification at the beginning and end of the four month study. The SF and No SF groups improved significantly over time in the English and French competing conditions. No significant difference was seen between mean scores for the two groups in the English competing condition. A borderline significant (p = 0.053) effect of soundfield amplification was seen for the French competing condition, indicating a possible positive effect of daily exposure to soundfield amplification on children’s abilities to ignore competing speech that lacks meaning. Study Two was a pilot study in which results from Study One were reanalyzed to investigate possible differential effects of daily exposure to soundfield amplification on development of selective auditory attention in students from different ethnic backgrounds. Results revealed a significant positive effect of soundfield amplification for the English and French competing conditions in classrooms where students were predominately Hispanic, but not for classrooms in which students were predominately African American. Implications for future research are discussed.

Introduction

In a typical classroom, learning depends heavily on students’ abilities to hear and understand information presented by the teacher in the form of spoken language. This is particularly true in early elementary grades, where almost all classroom instruction involves auditory-verbal exchange between the teacher and the students (Picard & Bradley, 2001). Accurate speech perception is particularly important as children work to develop pre-literacy and early literacy skills that rely on accurate perception and manipulation of individual speech sounds (e.g., phonological awareness and spelling). To develop these skills, children must have adequate selective auditory attention abilities in order to focus attention on the target speech signal (i.e., the teacher’s voice) while ignoring irrelevant competing sounds often present in the classroom environment.

Classroom Acoustics

Unfortunately, many classrooms are characterized by poor acoustical conditions that can interfere with young students’ abilities to attend to and accurately perceive what the teacher is saying (e.g., Bess, Sinclair, & Riggs, 1984; Bradley, 1986; Eriks-Brophy & Ayukawa, 2000; Knecht, Nelson, Whitelaw, & Feth, 2002; Palmer, 1998; Picard & Bradley, 2001; Rosenberg, 1998; Rosenberg et al., 1999). In early elementary grades, background noise and competing speech are common problems due to the active nature of the learning environment. Background noise impedes accurate speech perception by masking portions of the target auditory signal due to spectral overlap between the masking noise and the target speech (i.e., energetic masking) (Crandell & Smaldino, 2000). In addition, speech from one or more competing (or masking) talkers may decrease speech perception due to informational masking, a type of interference that can affect a listener’s ability to (1) segregate simultaneous speech signals, (2) selectively focus attention only on the words spoken by the target talker, and (3) accurately process the components of the target speech signal (Cooke, Garcia Lecumberri, & Barker, 2008).

Previous studies have shown that the presence of competing noise or competing speech decreases speech recognition for children with hearing loss (e.g., Crandell, 1993; Finitzo-Hieber & Tillman, 1978) and children with normal hearing (e.g., Crandell & Smaldino, 1996; Elliott, 1979; Elliott et al., 1979; Finitzo-Hieber & Tillman, 1978; Papso & Blood, 1989). To maximize speech understanding in classrooms, the American Speech-Language-Hearing Association (ASHA, 2005) and the American National Standards Institute (ANSI, 2002) have proposed criteria for classroom acoustical conditions. Both organizations recommend that unoccupied classroom noise levels not exceed 35 dBA and that classroom signal-to-noise ratios (SNRs) equal or exceed +15 dB. However, many studies have found that typical classrooms often fail to meet these standards (e.g., Eriks-Brophy & Ayukawa, 2000; Knecht et al., 2002; Larsen & Blair, 2008; Palmer, 1998; Picard & Bradley, 2001; Rosenberg, 1998; Rosenberg et al., 1999).
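These two criteria are simple to check once a classroom’s speech and noise levels have been measured, because SNR in decibels is just the difference between the two levels. The sketch below is illustrative only and is not part of the study or the standards documents; the function name and example levels are hypothetical:

```python
def classroom_meets_criteria(speech_dba, noise_dba,
                             max_noise_dba=35.0, min_snr_db=15.0):
    """Check measured levels against the ANSI (2002) / ASHA (2005)
    classroom targets cited above: unoccupied noise at or below
    35 dBA and an SNR of at least +15 dB. SNR in dB is the
    difference between the speech level and the noise level."""
    snr_db = speech_dba - noise_dba
    return noise_dba <= max_noise_dba and snr_db >= min_snr_db

# Hypothetical measurements: a 65 dBA teacher over 48 dBA of noise
# yields +17 dB SNR, but the noise itself exceeds the 35 dBA ceiling.
print(classroom_meets_criteria(65.0, 48.0))  # False
print(classroom_meets_criteria(50.0, 35.0))  # True
```

Note that both conditions must hold: a strong teacher voice can produce an adequate SNR even in a room whose ambient noise alone violates the standard.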


Selective Auditory Attention

The presence of classroom noise and competing speech is of particular concern in early elementary grades because young children do not have mature selective auditory attention skills. Numerous studies have shown that children with normal hearing experience greater difficulties than adults in understanding speech in poor acoustical conditions (e.g., Johnson, 2000; Klatte, Lachmann, & Meis, 2010; Nábĕlek & Robinson, 1982; Neuman & Hochberg, 1983; Neuman, Wroblewski, Hajicek, & Rubinstein, 2010). Evidence from different age groups suggests children’s abilities to accurately perceive consonant sounds and target sentences in background noise do not reach adult-like levels until the teenage years (e.g., Johnson, 2000; Neuman & Hochberg, 1983; Stuart, 2008). A similar pattern of results has been reported in competing speech conditions, with children’s syntactic comprehension for target sentences not reaching adult levels of accuracy until 11 or 12 years of age (Leech, Aydelott, Symons, Carnevale, & Dick, 2007). As such, even when hearing is normal, young children are at a disadvantage compared to older children and adults when faced with the challenge of understanding speech in conditions where background noise and/or competing speech is present.

Although children’s selective auditory attention improves with age, performance on a particular task can differ depending on characteristics of the competing auditory signals. For example, Papso and Blood (1989) compared word recognition performance of 4- and 5-year-old children in 20-talker multi-talker babble and pink noise. Their results indicated significantly poorer performance in the multi-talker babble condition. Similarly, Cherry and Kruger (1983) found that 7- to 9-year-old children experienced significantly greater difficulties selectively attending to monosyllabic words when the competing signal was meaningful speech (i.e., a background story read by a single talker) as compared to white noise or reversed speech. Together, these results appear to indicate that children’s difficulties with selective auditory attention increase as similarities increase between the target speech signal and the competing auditory signal(s). Therefore, ignoring speech from other students in the classroom may be more difficult for children than ignoring non-speech noise, such as noise from an overhead projector.

Given the nature of the learning environment, competing speech conditions are relatively common in classrooms for young children. As such, a question of interest relates to how characteristics of competing speech signals affect children’s selective auditory attention performance. Numerous studies have documented that adult listeners’ performance is affected by non-linguistic characteristics, such as the intensity (e.g., Cooke, Garcia Lecumberri, & Barker, 2008), number (e.g., Simpson & Cooke, 2005), gender (e.g., Brungart, 2001), and spatial location (e.g., Freyman, Balakrishnan, & Helfer, 2004) of the competing talkers. In contrast, a very small number of studies have investigated effects of linguistic characteristics of competing speech on children’s selective auditory attention performance. For example, Cherry and Kruger (1983) found that 7- to 9-year-old children experienced significantly greater difficulties ignoring meaningful, forward speech as compared to reversed speech. Alone, these results appear to indicate that meaningful competing speech is a more effective masker than competing speech that lacks meaning. However, Chermak and Zielonko (1977) found no significant differences in word recognition performance of 9- to 10-year-old children when the competing signal consisted of grammatical speech (i.e., meaningful speech that followed syntactic rules), semantically anomalous strings (i.e., non-meaningful speech that followed syntactic rules), or ungrammatical strings (i.e., non-meaningful speech that violated syntactic rules).

There are a number of possible explanations for the inconsistencies between results reported by these two studies. First, the studies included children from different age groups; therefore, the effect of linguistic content of competing speech may have been impacted by the children’s development. Second, the SNR differed between the two studies, with a 0 dB SNR used by Cherry and Kruger (1983) and a +8 dB SNR used by Chermak and Zielonko (1977). As such, the different patterns of results may indicate that effects of linguistic characteristics of competing speech vary as a function of the SNR. In addition, results of the two studies may have been affected by differences between the signals used to represent “non-meaningful” speech. Cherry and Kruger (1983) used reversed speech, whereas Chermak and Zielonko (1977) used semantically anomalous strings (i.e., followed syntactic rules) and ungrammatical strings (i.e., did not follow syntactic rules) of words. Although these signals each lack meaning at the sentence level, other characteristics of the signals may have influenced the listeners’ performance. For example, reversed speech is devoid of meaning; however, reversal of speech changes the temporal envelope such that the rapid onsets and slow offsets typical of plosive consonants in forward speech become slow onsets and rapid offsets (Rhebergen, Versfeld, & Dreschler, 2005). In contrast, the competing speech used by Chermak and Zielonko (1977) was not completely devoid of meaning. Although the words in this study did not combine to express meaning at the sentence level, the individual words still carried their own meaning.

To avoid the disadvantages of reversed speech and semantically anomalous sentences, a small number of studies have used competing speech spoken in two or more languages to assess selective auditory attention in adult listeners (e.g., Freyman, Balakrishnan, & Helfer, 2001; Garcia Lecumberri & Cooke,


2006; Reel & Hicks, in press; Tun, O’Kane, & Wingfield, 2002; Van Engen & Bradlow, 2007). Speech in an unfamiliar language provides a masking signal that has the same basic time envelope as speech in the native language (i.e., with rapid onsets and slow offsets) but is lacking in semantic content, both at the sentence and word level. Therefore, use of speech in an unfamiliar language may provide a better control condition than nonsense sentences or reversed speech when testing effects of the meaning of competing speech. Despite these advantages, there is currently a lack of studies using competing speech in different languages to evaluate selective auditory attention in children. Therefore, the current study addressed this gap in previous research by investigating first grade children’s selective attention in conditions where competing speech was spoken in English or French.

Soundfield Amplification

Given poor acoustic conditions in typical classrooms and immature selective auditory attention abilities in children, it is important to implement strategies to improve the classroom learning environment. Use of a soundfield amplification system may help overcome negative effects of poor classroom acoustics, thus potentially improving students’ learning in areas that rely heavily on accurate speech perception (e.g., phonological awareness, spelling). Such systems typically consist of a microphone and transmitter worn by the teacher, an amplifier/receiver, and one to five loudspeakers located around the classroom (Flexer, 1995). The primary goal is to improve the SNR by positioning the microphone within 3 to 4 inches of the teacher’s mouth (Sapienza, Crandell, & Curtis, 1999). At this location, the intensity of the target speech signal is greater than the surrounding noise. This optimal SNR is then delivered through speakers “so that students in the back of the classroom can hear the teacher’s voice as clearly and precisely as students seated near the teacher” (Rosenberg & Blake-Rahter, 1995, p. 167).

Previous studies have investigated effects of classroom soundfield amplification on various populations of children, including those with hearing loss, normal hearing, learning disabilities, and English as a second language. Teachers’ responses have revealed benefits such as increased student attention (Eriks-Brophy & Ayukawa, 2000; McSporran & Butterworth, 1997; Rosenberg et al., 1999), increased classroom control (Palmer, 1998), and reduced teacher fatigue/vocal problems (Eriks-Brophy & Ayukawa, 2000; Rosenberg et al., 1999). Although useful, these subjective findings must be interpreted with some caution due to the potential for examiner bias in judging/rating students’ performance. However, studies using objective measures (e.g., video recording, in-classroom observation, analysis of test scores, etc.) have also reported significant benefits of soundfield amplification. For example, when testing is conducted with a soundfield system in use, investigations of normal hearing children have shown immediate improvements in speech intelligibility (Eriks-Brophy & Ayukawa, 2000; Flexer, Millin, & Brown, 1990), spelling performance (Zabel & Tabor, 1993), classroom behavior (Eriks-Brophy & Ayukawa, 2000; Palmer, 1998), and the amount of managerial time necessary at the beginning of class (Ryan, 2009).

Although positive immediate effects of soundfield amplification are well documented, questions remain regarding how daily exposure to soundfield amplification may affect children’s development over time. By comparing pre- and post-testing administered without use of soundfield amplification, studies have noted significantly greater improvements in reading related skills (Darai, 2000), language arts, and composite achievement test scores for children in classrooms with soundfield amplification as compared to control classrooms without amplification (Sarff, as cited in Rosenberg & Blake-Rahter, 1995). However, other studies have failed to find significant improvements for students exposed daily to soundfield amplification. For example, Purcell and Millet (2010) compared first grade students in amplified and unamplified classrooms and found no significant difference between the percentage of students in each group who were reading at grade level by the end of the school year.

Given that the primary purpose of soundfield amplification is to improve children’s abilities to perceive the teacher’s voice in noisy classroom conditions, a logical question relates to whether daily exposure to soundfield amplification impacts development of selective auditory attention over time. More specifically, would such exposure enhance children’s abilities to focus attention on the target signal and ignore competing signals, even in conditions where soundfield amplification is not used? In contrast, would daily exposure to an amplified signal serve as a “crutch” that could hinder normal development of selective auditory attention? Or would daily use of classroom soundfield amplification improve children’s selective auditory attention when the system is in use but neither enhance nor hinder overall development of such skills (i.e., performance in unamplified environments)?

Only one published study could be located which attempted to answer these questions by longitudinally measuring effects of soundfield amplification on children’s selective auditory attention skills. Mendel, Roberts, and Walton (2003) assessed normal hearing children in classrooms with (i.e., experimental) and without (i.e., control) soundfield amplification over the kindergarten and first grade years. The experimental group performed significantly better than the control group when testing was administered through the soundfield system for the experimental group and without the soundfield system for the

control group. Over time, both groups improved significantly in their ability to perceive speech in prerecorded classroom-type noise. However, no significant differences were noted between performance of the experimental and control group when tested without the use of soundfield amplification. These results suggest no significant effect of daily soundfield amplification exposure on children’s development of speech understanding in classroom-type noise. Therefore, daily exposure to soundfield amplification neither hindered nor enhanced actual development of selective auditory attention, but use of the system did lead to immediate improvements in the children’s abilities to understand speech in the presence of background noise. Although these findings provide important preliminary evidence, additional research is needed to determine if such results can be replicated under other conditions, including other groups of listeners and other types of competing auditory signals.

Study One

Although previous studies have demonstrated various benefits of soundfield amplification, there is a paucity of research using objective measures to evaluate how daily exposure to soundfield amplification affects children’s development of selective auditory attention skills over time. As such, Study One compared selective auditory attention development of two groups of first grade students: those exposed to soundfield amplification on a daily basis (i.e., experimental or SF group) and those in classrooms without soundfield amplification (i.e., control or No SF group). The primary goals of the study involved using objective measures to investigate the following areas:
• Development of selective auditory attention skills in normal hearing first grade students,
• Effects of meaning of competing speech on selective auditory attention performance, and
• Effects of daily use of classroom soundfield amplification on students’ development of selective auditory attention skills.

Method

Schools, classrooms, and subjects. Three elementary schools in Lubbock, Texas, participated in the study during the second semester of the school year. Each school qualified for Title I funds, an indicator of the student population’s low socioeconomic status (SES). From these schools, eight regular education first grade classrooms took part in the study, including four classrooms at School A, two classrooms at School B, and two classrooms at School C. As compensation for their assistance, each participating classroom was given a total of $40.00 in gift certificates. All research procedures were approved by an institutional review board.

All participating schools required every first grade classroom to strictly follow the curriculum of the Voyager Universal Literacy System (2003). The structured nature of the Voyager program reduced possible effects of variability between the literacy instruction provided in the eight classrooms. Each school administered Voyager benchmark testing on three dates during the school year to assess the students’ reading skills. Scores from the first benchmark test were used to ensure that the students’ baseline reading skills were similar among the eight classrooms.

Half of the participating classrooms at each school were randomly chosen to be in the experimental (SF) group, and the remaining four classrooms served as controls (No SF). In the four experimental classrooms, a Phonic Ear Easy Listener Soundfield Amplification System was installed and used during the second semester of the school year. Each system included four soundfield speakers, a receiver, and a transmitter/boom microphone worn by the teacher. Teachers were instructed to use the system every day during any group instruction time. The volume and tone controls on each receiver were set to the highest levels that could be attained without creating feedback or other interference. During periodic visits to each classroom, the soundfield receivers were monitored to ensure that the controls had not been adjusted from their original positions.

Consents were obtained from 99 first grade students, with 49 students in the experimental (SF) group and 50 in the control (No SF) group. As per teacher report, all participants spoke English as their primary language and had no known significant hearing loss. When analyzing the results, scores were excluded for any subjects who were not present at both the pre- and post-testing. Results from 70 students were included in the analyses.

Classroom acoustical measurements. The eight participating classrooms had roughly similar floor plans, including windows along one wall, metal lockers along another wall, a classroom sink, and two bathrooms shared with the adjoining classroom. All classrooms had acoustical ceiling tiles, but flooring materials differed somewhat among the classrooms. The six classrooms at School A and School B were carpeted, whereas the two School C classrooms had tile flooring. The following acoustical measurements were made in each classroom: unoccupied and occupied ambient noise (dBA), teacher’s vocal intensity (dBA), unamplified SNR, and amplified SNR (only in the SF classrooms). Sound level readings were taken from six locations in each classroom, including the four corners, center, and center of the back row (i.e., relative to the typical position where the teacher stood/sat for classroom instruction time). Occupied ambient noise

was measured in each classroom while students were engaged in a quiet activity at their desks/tables. Unamplified and amplified measures of each teacher’s vocal intensity were performed as the teacher read aloud in an unoccupied classroom. Amplified measurements were taken with the volume control and tone control of the soundfield amplification system set to the positions used daily in each classroom.

Test measures. Selective auditory attention testing was performed at the beginning and end of the four month study. All testing was administered without amplification in order to assess students’ skills development, rather than immediate effects of soundfield amplification on performance. Prior to the competing conditions, word recognition testing was administered in quiet to ensure that students could listen for the target word, mark the corresponding picture, and quickly turn the page. For the quiet condition pre- and post-test, subjects listened to one half-list (25 words) from the Auditec compact disc (CD) recording of the Northwestern University Children’s Perception of Speech (NU-CHIPS) test (Elliott & Katz, 1980). The NU-CHIPS is a closed-set, picture-pointing word recognition test that consists of monosyllabic words appropriate for receptive language ages of at least 2.6 years. Use of NU-CHIPS half-lists was considered acceptable based on Elliott and Katz’ (1980) report that “performance on one half-list will be approximately equivalent to performance on the second half-list” (p. 4).

For the competing conditions, subjects listened to word lists from the Auditec CD recording of the Word Intelligibility by Picture Identification (WIPI) test (Ross & Lerman, 1970) while simultaneously attempting to ignore competing speech from a single talker. The WIPI is a closed-set, picture identification test consisting of four lists of 25 different monosyllabic words found to be appropriate for 5- to 6-year-old children with hearing loss. As such, normal hearing first grade students were expected to have adequate receptive language skills for the WIPI task. When presented in quiet, Ross and Lerman (1970) reported that the four word lists were reliable and essentially equivalent.

WIPI word lists were presented with an English and a French background story to assess effects of the meaning of competing speech on selective auditory attention. The Rainbow Passage (Fairbanks, 1960 as cited in University of Tampere, n.d.), a story containing all normal sounds of the English language, was selected as the meaningful competing speech signal. The non-meaningful competing speech signal was Cendrillon (Perrault, n.d.), a French version of the tale of Cinderella. The French story was considered to be non-meaningful due to the low probability that participating students would have been exposed to the French language. Based on estimates provided by the 2005-2009 American Community Survey, Spanish is spoken by approximately 19% of the population in Lubbock County, whereas less than 1% of residents speak French (U.S. Census Bureau, 2005-2009).

On the Auditec CD recording of the NU-CHIPS and WIPI word lists, stimulus words were presented by the same male talker. Using Cool Edit Pro Version 2 (2002), the words on each list were normalized for peak intensity (50% of the full scale), and interstimulus pauses were increased to 10 seconds. Both competing stories were read by the same male talker, a native English speaker with language training in French. The stories were recorded with a sampling rate of 44,100 Hz and 16-bit quantization using a Tucker-Davis System II. Each story recording was normalized to be of equal RMS power as computed over the duration of the stimulus. The RMS power of the English story (M = 28.32 dB, SD = 3.03 dB) and the French story (M = 28.72 dB, SD = 8.26 dB) were equivalent.

A different WIPI word list was used for the pre- and post-test administration of each competing story condition. For the English pre-test, four classrooms were randomly selected to receive List 2, and four classrooms were randomly chosen to receive List 3. For the French pre-test, four classrooms were randomly chosen to receive List 1, and the remaining four classrooms were assigned List 4. On the post-test, each classroom received the unused list from the WIPI list/story pairs. The order of testing for the English and French competing conditions was pseudo-randomized to prevent order effects. For the pre-tests, four classrooms were randomly selected to be tested in the English condition first, while the remaining four classrooms were tested in the French condition first. On the post-tests, each classroom was tested in the order opposite of the pre-test order.

The quiet and competing conditions of the selective auditory attention testing were performed in each classroom with the students in their usual seats. The target word lists and competing stories were presented from two portable CD players positioned at the front of the classroom on a plastic stand with two shelves (i.e., one player above the other). The intensity of the word lists and competing stories was measured at 1 meter from the CD players using a sound level meter positioned at the approximate height of the students’ ear level. For the quiet condition, the NU-CHIPS words were presented at an intensity of 75 dBA in order to be clearly audible to the students. For the pre- and post-test competing conditions, the WIPI word lists were presented at approximately 73 dBA, and each competing story was presented at approximately 76 dBA. A -3 dB SNR was selected based on results of pilot testing, which suggested that for normal hearing first grade students a 0 dB SNR was too easy, whereas a -6 dB SNR was potentially too difficult.

Prior to each test, subjects were given a privacy tri-fold,

40 Development of Selective Auditory Attention: Effects of the Meaning of Competing Speech and Daily Exposure to Soundfield Amplification

the appropriate picture booklet, and a writing instrument. The examiner read the test instructions aloud, and during each test, she held the picture booklet at the front of the room to assist students in staying on the target page. Students were instructed to listen to the man on the CD, mark the picture that matched the word he said, and then turn the page as quickly as possible. For the competing conditions, students were instructed to ignore the man reading the story and listen to the man saying, "Show me," followed by a word matching one of the pictures on the test page. A 3- to 5-second sample of the competing story was presented prior to each test to ensure that the students understood which signal to ignore. During all testing, proctors were positioned throughout the classroom to ensure that students were quiet, did not share answers, and stayed on the correct page or item number. Tests were scored in terms of the percentage of pictures marked correctly.

Results

Classroom acoustics. The mean unoccupied ambient noise level was 37.92 dBA for the SF classrooms and 39.79 dBA for the No SF classrooms. The mean unamplified SNR in the No SF classrooms was 9.45 dBA. In the SF classrooms, the mean SNR was 4.88 dBA in the unamplified condition and 11.38 dBA in the amplified condition, indicating that use of the soundfield amplification system increased the SNR by a mean of 6.5 dBA. Comparison of the acoustical results for each classroom revealed that the mean SNRs in two No SF classrooms ranged from 5 to 5.6 dBA, but the mean SNR in the remaining two No SF classrooms ranged from 12.7 to 14.5 dBA. Therefore, two of the No SF classrooms had unamplified SNRs that were comparable to the mean amplified SNR in the SF classrooms (11.38 dBA). For the SF group, one classroom had a relatively poor SNR in the amplified condition (5.8 dBA), but the amplified SNR in the other three SF classrooms ranged from 12.7 to 13.5 dBA.

Benchmark testing. Voyager benchmark scores were analyzed to determine if the SF and No SF group were similar in their reading skills prior to implementation of the research protocol. A one-way analysis of variance (ANOVA) revealed no significant difference between performance of the SF and No SF classrooms on all subtests. These results show that the SF and No SF group were in fact equivalent in their reading-related skills prior to initiation of the study.

Selective auditory attention. Results for seven classrooms were analyzed, including four SF classrooms and three No SF classrooms. Pre- and post-test scores for one No SF classroom (Classroom F) were excluded from the analyses. Classroom F was the first classroom in which the pre-testing was administered. However, technical errors precluded an accurate representation of the SNR for that classroom. To minimize threats to internal validity, the pre- and post-test selective auditory attention scores for Classroom F were excluded from the data analyses. This resulted in 30 students in the No SF group and 40 students in the SF group.

Quiet. Prior to the competing conditions, NU-CHIPS words were administered in quiet to ensure that students could perform a picture-pointing word recognition task. On the pre-test, the mean percent correct scores were 94.00% (SD = 5.12) for students in SF classrooms and 89.47% (SD = 10.48) for students in No SF classrooms. Similar scores were seen on the post-test administration of the NU-CHIPS words. The post-test mean percent correct scores were 93.70% (SD = 10.36) for the SF group and 93.73% (SD = 17.35) for the No SF group. Both groups performed well when NU-CHIPS words were presented in quiet, which indicates that the students understood and could perform the task (i.e., listen to the word, mark the corresponding picture, and turn the page). A repeated measures ANOVA was conducted with test (pre- or post-test) as the within-subject factor and group (SF or No SF) as the between-subject factor. The main effect of test, F(1, 68) = 1.33, p = 0.25, the main effect of group, F(1, 68) = 1.16, p = 0.29, and the interaction effect of test x group, F(1, 68) = 1.76, p = 0.19, were not significant. Therefore, the SF and No SF groups were similar in their abilities to perform a picture-pointing word recognition task in quiet.

English competing speech. Figure 1 shows the mean percent correct scores of the SF and No SF group on the pre- and post-test of the English competing speech condition. Both groups performed similarly on the pre-test and post-test. For the pre-test, the mean percent correct score was 44.60% (SD = 18.93) for students in SF classrooms and 48.53% (SD = 15.68) for students in No SF classrooms. The post-test mean percent correct score was 57.60% (SD = 12.93) for the SF group and 57.20% (SD = 15.62) for the No SF group. A repeated measures ANOVA was conducted with test (pre-test or post-test) as the within-subject factor and group (SF or No SF) as the between-subject factor. The main effect of test, F(1, 68) = 18.82, p = 0.00, was significant, indicating that the mean percent correct scores of the SF and No SF group improved significantly over time. The main effect of group, F(1, 68) = 0.36, p = 0.55, and the interaction effect of test x group, F(1, 68) = 0.75, p = 0.39, were not significant. Thus, while both groups improved over time, there was not a significant difference between the SF group and No SF group when the competing message was meaningful.
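The repeated measures analyses reported here are 2 (group: SF vs. No SF, between subjects) × 2 (test: pre vs. post, within subjects) mixed-design ANOVAs. As a sketch of how the reported F(1, 68) values arise from 70 subjects in two groups, the sums-of-squares partitioning can be written out directly. This is a NumPy illustration, not the authors' analysis code, and the example scores are hypothetical:

```python
import numpy as np

def mixed_anova_2x2(groups):
    """2 (between: group) x 2 (within: pre/post) mixed-design ANOVA.

    groups: list of (n_g, 2) arrays, one per group; column 0 = pre-test,
    column 1 = post-test. Returns {effect: (F, df1, df2)}.
    """
    groups = [np.asarray(g, dtype=float) for g in groups]
    obs = np.concatenate(groups)              # (N, 2) all observations
    N, k = obs.shape                          # subjects, time points (k = 2)
    G = len(groups)                           # number of groups
    gm = obs.mean()                           # grand mean

    subj_means = obs.mean(axis=1)             # per-subject means
    time_means = obs.mean(axis=0)             # pre/post means

    # Between-subjects partition: group effect vs. subjects-within-groups error
    ss_subjects = k * np.sum((subj_means - gm) ** 2)
    ss_group = sum(k * len(g) * (g.mean() - gm) ** 2 for g in groups)
    ss_err_b = ss_subjects - ss_group

    # Within-subjects partition: test effect, interaction, residual error
    ss_within = np.sum((obs - subj_means[:, None]) ** 2)
    ss_test = N * np.sum((time_means - gm) ** 2)
    ss_cells = sum(len(g) * np.sum((g.mean(axis=0) - gm) ** 2) for g in groups)
    ss_inter = ss_cells - ss_group - ss_test
    ss_err_w = ss_within - ss_test - ss_inter

    df_b, df_w = N - G, (N - G) * (k - 1)
    return {
        "group": ((ss_group / (G - 1)) / (ss_err_b / df_b), G - 1, df_b),
        "test": ((ss_test / (k - 1)) / (ss_err_w / df_w), k - 1, df_w),
        "interaction": ((ss_inter / ((G - 1) * (k - 1))) / (ss_err_w / df_w),
                        (G - 1) * (k - 1), df_w),
    }

# Hypothetical pre/post scores for two tiny groups (not the study data):
result = mixed_anova_2x2([[[0, 2], [2, 3]], [[1, 4], [3, 5]]])
```

With 40 SF and 30 No SF subjects, the error degrees of freedom are N - G = 68 for every effect, which matches the F(1, 68) values reported in this section.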

41 Journal of Educational Audiology vol. 17, 2011

Figure 1. Mean percent correct scores for the soundfield amplification system (SF) and the no soundfield amplification system (No SF) classrooms on the pre- and post-test of the English competing story condition (Study One).

Figure 2. Mean percent correct scores for the soundfield amplification system (SF) and the no soundfield amplification system (No SF) classrooms on the pre- and post-test of the French competing story condition (Study One).


French competing speech. Figure 2 displays the mean percent correct scores of the SF classrooms and No SF classrooms on the pre- and post-test of the French competing speech condition. Students in the No SF group performed better than students in the SF group on the pre-test and post-test. The mean pre-test score of the SF group was 38.40% (SD = 13.24), and the mean pre-test score of the No SF group was 53.33% (SD = 13.18). Scores for both groups improved noticeably from the pre-test to the post-test. However, the mean post-test score of the No SF group was still greater than the mean post-test score of the SF group. On the post-test of the French competing speech condition, the mean score for the SF classrooms was 67.10% (SD = 11.90), while students in the No SF classrooms had an average score of 73.33% (SD = 12.62). A repeated measures ANOVA was performed with test (pre-test or post-test) as the within-subject factor and group (SF or No SF) as the between-subject factor. The main effect of test, F(1, 68) = 121.02, p = 0.00, was significant, indicating that scores of the SF and No SF group improved significantly over time. The main effect of group, F(1, 68) = 24.65, p = 0.00, was also significant. Post hoc analysis revealed that the mean scores of the No SF group were significantly higher than the mean scores of the SF group on both the pre-test and the post-test. The interaction effect of test x group, F(1, 68) = 3.86, p = 0.053, approached the level of significance. Although performance of both groups improved over time, students in SF classrooms showed greater improvement from the pre-test to the post-test (mean change = 28.70 percentage points) as compared to students in No SF classrooms (mean change = 20.00 percentage points) when competing speech was not meaningful.

English vs. French. Results for the SF group and No SF group were analyzed separately to compare the change in scores from the pre-test to the post-test of the English versus the French competing speech condition (see Figure 3). The SF group's mean pre- to post-test change in scores was 13.00 percentage points (SD = 21.72) for the English condition and 28.70 percentage points (SD = 18.16) for the French condition. A one-way ANOVA revealed that the difference between the mean pre- to post-test change scores for the English and French competing condition was significant, F(1, 78) = 12.30, p = 0.001. Therefore, over the course of the study, students in the SF classrooms showed significantly greater improvement in their ability to ignore competing speech spoken in French as compared to their ability to ignore competing speech spoken in English. The No SF group's mean pre- to post-test change in scores was 8.67 percentage points (SD = 19.20) for the English condition and 20.00 percentage points (SD = 18.55) for the French condition. A one-way ANOVA revealed that the difference between the mean pre- to post-test change scores for the English and French competing condition was significant, F(1, 58) = 5.41, p = 0.02. Therefore, over the course of the study, students in the No SF classrooms also showed significantly greater improvement in their abilities to ignore competing speech spoken in French as compared to their abilities to ignore competing speech spoken in English.

Figure 3. Mean pre- to post-test change scores (in percentage points) for the soundfield amplification system (SF) and the no soundfield amplification system (No SF) classrooms for the English and French competing story conditions (Study One).
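The English-versus-French comparison is a one-way ANOVA on per-subject change scores (post-test minus pre-test). A compact sketch of that computation follows; it uses only NumPy, and the change-score arrays are hypothetical, not the study data:

```python
import numpy as np

def one_way_anova(*samples):
    """One-way ANOVA for independent samples: returns F and (df1, df2)."""
    samples = [np.asarray(s, dtype=float) for s in samples]
    pooled = np.concatenate(samples)
    gm = pooled.mean()                              # grand mean
    ss_between = sum(len(s) * (s.mean() - gm) ** 2 for s in samples)
    ss_within = sum(np.sum((s - s.mean()) ** 2) for s in samples)
    df1, df2 = len(samples) - 1, len(pooled) - len(samples)
    return (ss_between / df1) / (ss_within / df2), (df1, df2)

# Hypothetical change scores (post minus pre, in percentage points):
english_change = np.array([1.0, 2.0, 3.0])
french_change = np.array([3.0, 4.0, 5.0])
f_stat, dfs = one_way_anova(english_change, french_change)
```

The reported error degrees of freedom (78 for the SF group of 40 students, 58 for the No SF group of 30) are consistent with each subject contributing one English and one French change score treated as independent samples (2n - 2).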


Study Two

Study Two was a pilot study in which results from Study One were reanalyzed to investigate possible effects of children's ethnic background on selective auditory attention and effects of daily exposure to soundfield amplification on development of these skills. Normally developing children learn to use the language(s) and dialect(s) modeled by their parents/caregivers. As a result, children from different ethnic backgrounds may differ in their exposure to and use of different dialects and languages. For example, some African American children use a dialect termed African American English (AAE); whereas, some Hispanic children use a dialect referred to as Spanish Influenced English (SIE). Educational instruction in the United States is typically based on the Standard American English (SAE) dialect (Craig, Thompson, Washington, & Potter, 2004). Therefore, some children's native dialect/language may differ from the SAE dialect they are expected to use and understand at school. This mismatch could impact children's perception of speech in the classroom, thus potentially affecting learning. However, no published studies have specifically assessed effects of speaker-listener dialect differences on children's speech perception in quiet or in competing conditions.

Despite the lack of evidence specific to children, one study has investigated effects of dialect on adult listeners' speech perception in noise. Clopper and Bradlow (2008) compared adult listeners' abilities to understand target sentences spoken by talkers from four dialect regions of the United States (e.g., Mid-Atlantic, North, South, and General American). Target sentences were presented with speech-shaped noise at two signal-to-noise ratios (SNRs): -2 dB and -6 dB. Results revealed significant differences between percent words correct scores for all four dialect conditions when a -6 dB SNR was used. However, at the more favorable -2 dB SNR, no significant difference was seen between the Northern and Southern dialect conditions. These results suggest that the dialect of the target talker can have a significant impact on adult listeners' abilities to understand speech in the presence of background noise. However, additional research is needed to further assess the relationship between dialect and speech perception for children and adults using target speech in other dialects (e.g., African American English, Spanish Influenced English, and Standard American English), and using different types of masking signals (e.g., noise, competing speech).

Although many questions remain unanswered regarding effects of dialectal differences, numerous studies have documented effects of listeners' language background on speech perception in competing conditions. The majority of evidence in this area comes from studies of monolingual and bilingual adult listeners (e.g., Mayo, Florentine, & Buus, 1997; Shi, 2009; Stuart, Zhang, & Swink, 2010; Takata & Nábělek, 1990; von Hapsburg, Champlin, & Shetty, 2004). However, a very small number of studies have investigated abilities of bilingual children to perceive their second language in the presence of competing noise or competing speech (Bovo & Callegari, 2009; Crandell & Smaldino, 1996; Nelson, Kohnert, Sabur, & Shaw, 2005). Results from these studies reveal several key findings. First, in quiet listening conditions, monolingual and bilingual children tend to perform similarly (Crandell & Smaldino, 1996; Nelson et al., 2005). However, in the presence of competing speech, bilingual children need a better SNR than monolingual children in order to achieve 50% intelligibility for target words (Bovo & Callegari, 2009). In addition, bilingual children perform significantly poorer than monolingual children on speech recognition tasks at SNRs ranging from -6 dB to +10 dB (Crandell & Smaldino, 1996; Nelson et al., 2005), with the difference between groups increasing as the SNR declines (i.e., becomes less optimal; Crandell & Smaldino, 1996). Previous studies have shown that typical classrooms often have SNRs less than +10 dB (Flexer, 2005). As such, bilingual children are likely to experience more severe listening difficulties than monolingual children in typical classroom conditions, which could impact learning when classroom instruction is in the children's second language.

To date, limited data exist regarding potential effects of soundfield amplification for children of different ethnicities and/or dialects. However, available evidence does suggest that soundfield amplification can positively impact ESL children's abilities to listen in the classroom. Crandell (1996) assessed speech perception performance for 8- to 10-year-old ESL children in unamplified and amplified conditions designed to simulate a typical classroom environment. In the unamplified condition, children listened to monosyllabic words presented in multi-talker babble in three SNR conditions: +6 dB, +1 dB, and -2 dB. In the amplified condition, the target words were presented through a soundfield amplification system at three more favorable SNRs: +16 dB, +10 dB, and +8 dB. Speech perception performance of the ESL children was significantly better in the amplified condition. Therefore, soundfield amplification is one method of helping ESL children understand speech in noisy classroom conditions.

Together, existing evidence demonstrates that children's speech perception in competing conditions can be affected by their language background, but additional research is needed to investigate factors such as ethnicity and dialect. The target population of Study One included first grade students with an English primary language background and no known significant hearing loss. Students' ethnic background was not considered when recruiting schools, classrooms, or students. However, as a matter of coincidence, the eight participating classrooms came from schools

with distinct ethnic backgrounds. Four classrooms (two SF and two No SF) were from a school with a primarily African American student population (School A), and the remaining four classrooms were located in two schools (one SF and one No SF classroom at each school) with primarily Hispanic student populations (Schools B and C). For ease of understanding, School A will be referred to as "School PAA" (predominantly African American), and Schools B and C will collectively be referred to as "School PH" (predominantly Hispanic).

Because of their predominantly Hispanic background, it was presumed that students in School PH classrooms were more likely to have been exposed to a second language (i.e., most likely Spanish). On the other hand, it was presumed that students in classrooms with a predominantly African American background were less likely to have had any significant exposure to a second language, but may have used a dialect other than SAE (e.g., AAE). In light of these differences, Study Two was conducted as a pilot study in which results from Study One were reanalyzed to:
• Compare selective auditory attention skills of normal hearing first grade students from a primarily Hispanic background to those from a primarily African American background,
• Investigate how effects of the meaning of competing speech may differentially affect selective auditory attention performance of children from different ethnic backgrounds, and
• Assess effects of daily exposure to soundfield amplification on development of selective auditory attention skills for children from a primarily Hispanic background versus those from a primarily African American background.

Method

Study Two involved reanalyzing data collected during Study One. As described in Study One, all participating subjects spoke English as their primary language as per teacher report. In planning the original study, permission was not requested to obtain information regarding each student's ethnicity, dialect, and/or exposure to a second language. However, information provided by the school district revealed that the overall ethnic composition of each participating school was as follows:
• School A - 25% Hispanic, 71% African American, 4% Anglo/Other
• School B - 96% Hispanic, 1% African American, 3% Anglo/Other
• School C - 81% Hispanic, 10% African American, 9% Anglo/Other

As previously mentioned, the three schools were relabeled for ease of understanding in the pilot study. Specifically, the School A group was labeled "School PAA" (predominantly African American), and Schools B and C were grouped together under the name "School PH" (predominantly Hispanic). As shown in Tables 1 and 2, the ethnic composition of each participating classroom was similar to that of each school, with students in the School PAA classrooms being primarily African American (Mean = 73% of the students) and students in the School PH classrooms being primarily Hispanic (Mean = 82%).

Table 1. School PAA (predominantly African American) - Ethnic composition of each classroom (percentage of students per ethnic group).

Classroom    African American    Hispanic    White
C (SF)       75%                 18.75%      6.25%
D (SF)       68.75%              25%         6.25%
E (No SF)    75%                 25%         0%
F (No SF)    75%                 18.75%      6.25%

Table 2. Schools PH (predominantly Hispanic) - Ethnic composition of each classroom (percentage of students per ethnic group).

Classroom    African American    Hispanic    White
A (No SF)    7.69%               84.62%      7.69%
B (SF)       0%                  100%        0%
G (No SF)    14.29%              76.19%      9.52%
H (SF)       10.53%              73.68%      15.79%

Results

The School PAA and School PH groups differed in their ethnic background and potentially in their exposure to a second language. However, due to the nature of dividing the participants into groups, the schools had an unequal number of subjects in groups that were fairly small (range of 11 to 25 subjects per group). As such,

results for each school were analyzed separately to determine if the two groups were also distinct in their development of selective auditory attention skills and their responsiveness to soundfield amplification.

School PAA. School PAA included 15 students in the SF group and 11 students in the No SF group. In Study One, NU-CHIPS words were administered in quiet prior to selective auditory attention testing to ensure that students could perform a picture-pointing word recognition task. On the pre-test, mean percent correct scores were 93.33% (SD = 5.16) for students in School PAA SF classrooms and 83.20% (SD = 12.62) for students in School PAA No SF classrooms. A one-way ANOVA was conducted to compare the mean score for the SF and No SF group on the pre-test of the NU-CHIPS. Results indicated that the mean score for the SF group was significantly higher (i.e., better) than the mean score for the No SF group, F(1, 23) = 7.84, p = 0.01. However, the difference was only equivalent to 2.5 words out of the 25 total words included on the NU-CHIPS pre-test. Furthermore, the mean scores for the SF group (93.33%) and No SF group (83.20%) both represent "good" word recognition abilities according to clinical standards. As such, these results suggest that students in the School PAA SF and No SF group were able to understand and perform the task (i.e., listen to the word, mark the corresponding picture, turn the page, etc.).

Figure 4. Mean percent correct scores for School PAA (predominantly African American) in soundfield amplification system (SF) and no soundfield amplification system (No SF) classrooms on the pre- and post-test of the English competing story condition (Study Two).

Figure 4 displays the mean percent correct scores of School PAA students on the English competing story condition. At the pre-test, students in SF classrooms (M = 61.60%, SD = 14.00) performed better than students in No SF classrooms (M = 52.37%, SD = 19.74). However, the post-test mean percent correct scores of the SF group (M = 55.73%, SD = 12.96) and No SF group (M = 55.27%, SD = 17.69) were essentially equal. A repeated measures ANOVA was performed using test (pre-test or post-test) as the within-subject factor and group (SF or No SF) as the between-subject factor. The main effect of test, F(1, 24) = 0.29, p = 0.60, the main effect of group, F(1, 24) = 0.72, p = 0.40, and the interaction effect of test x group, F(1, 24) = 1.85, p = 0.19, were not significant. Thus, students in classrooms with a predominantly African American background did not improve significantly over time, and there was not a significant effect of daily exposure to soundfield amplification on their selective auditory attention performance when the competing story was meaningful.

Figure 5. Mean percent correct scores for School PAA (predominantly African American) in soundfield amplification system (SF) and no soundfield amplification system (No SF) classrooms on the pre- and post-test of the French competing story condition (Study Two).

Figure 5 shows the mean percent correct scores of the School PAA SF and School PAA No SF students on the pre-test and post-test of the French competing story condition. The mean score of the No SF group was noticeably higher than the mean score of the SF group on both the pre-test and the post-test. On the pre-test, the mean score was 53.45% (SD = 14.12) for the No SF group and 39.73% (SD = 15.38) for the SF group. Scores for both groups improved from the pre-test to the post-test, but the No SF group still scored higher than the SF group on the post-test. On the post-test of the French story condition, mean scores were 78.91% (SD = 12.91) for No SF students and 63.47% (SD = 11.89) for SF students. A repeated measures ANOVA was conducted using the School PAA students' scores on the French story condition with test (pre-test or post-test) as the within-subject factor and group (SF or No SF) as the between-subject factor. The main effect of test, F(1, 24) = 35.79, p = 0.00, was significant, indicating that mean scores of the SF and No SF group improved significantly over time. The main effect of group, F(1, 24) = 17.03, p = 0.00, was also significant. Post hoc analysis revealed that there was a significant difference between performance of the SF and the No SF group on both the pre-test and the post-test. The interaction effect of test x group, F(1, 24) = 0.04, p = 0.84, was not significant. As such, there was not a significant effect of daily use of soundfield amplification on the selective auditory attention performance of School PAA


students when the competing message was not meaningful.

School PH. School PH included 25 students in the SF group and 19 students in the No SF group. Mean pre-test scores on the NU-CHIPS were compared for the School PH SF and No SF classrooms. On the pre-test, mean percent correct scores were 94.40% (SD = 5.16) for students in SF classrooms and 92.60% (SD = 7.82) for students in No SF classrooms. These scores suggest that both groups understood and could perform the task (i.e., listen to the word, mark the corresponding picture, turn the page, etc.). A one-way ANOVA revealed no significant difference between the mean score for the SF and the No SF group, F(1, 43) = 0.86, p = 0.36. Therefore, the School PH SF and No SF groups were similar in their abilities to perform a picture-pointing word recognition task in quiet.

Figure 6 displays the mean percent correct scores for students in the School PH SF and No SF classrooms on the English competing story condition. Pre-test performance of students in No SF classrooms (M = 46.10%, SD = 12.74) was better than performance of students in SF classrooms (M = 34.40%, SD = 13.37). Both groups improved over time, such that performance of the two groups was essentially equal at the post-test. The post-test mean percent correct score was 58.72% (SD = 13.05) for the SF group and 58.32% (SD = 14.69) for the No SF group. A repeated measures ANOVA was performed with the within-subject factor being test (pre-test or post-test) and the between-subject factor being group (SF or No SF). The main effect of test, F(1, 42) = 43.03, p = 0.00, and the interaction effect of test x group, F(1, 42) = 4.73, p = 0.04, were significant, indicating that selective auditory attention performance for students in the SF and No SF classrooms improved significantly from the pre-test to the post-test. However, the significant interaction effect reveals that School PH SF students showed significantly greater improvement over time than School PH No SF students when the competing speech was meaningful (i.e., spoken in English). The main effect of group, F(1, 42) = 3.55, p = 0.07, was not significant.

Mean percent correct scores for School PH students on the French competing story condition are shown in Figure 7. On the pre-test, the No SF group (M = 53.26%, SD = 13.0) performed better than the SF group (M = 37.60%, SD = 12.06). Performance of SF students and No SF students improved from the pre-test to the post-test. However, SF students showed greater improvement to the extent that they were able to "catch up" with the No SF students on the post-test. The post-test mean score of the SF group was 69.28% (SD = 11.59) as compared to a mean score of 70.11% (SD = 11.58) for the No SF group. A repeated measures ANOVA was performed. The within-subject factor was test (pre-test or post-test), and the between-subject factor was group (SF or No SF). The main effect of test, F(1, 42) = 93.31, p = 0.00, was significant: performance of students in SF and No SF classrooms improved significantly over time from the pre-test to the post-test. The main effect of group, F(1, 42) = 9.56, p = 0.00, was also significant, indicating that there was a significant difference between scores of the SF group and the No SF group. Post hoc analysis revealed that this significant between-group difference was only present on the pre-test of the French story condition. The interaction effect of test x group, F(1, 42) = 8.73, p = 0.01, was also significant, which demonstrates that SF students showed significantly greater improvement from the pre-test to the post-test than No SF students. These findings suggest that for students in School PH classrooms, daily exposure to soundfield amplification may have had a significant positive effect on their ability to ignore competing speech that was not meaningful.

Figure 6. Mean percent correct scores for School PH (predominantly Hispanic) in soundfield amplification system (SF) and no soundfield amplification system (No SF) classrooms on the pre- and post-test of the English competing story condition (Study Two).

Figure 7. Mean percent correct scores for School PH (predominantly Hispanic) in soundfield amplification system (SF) and no soundfield amplification system (No SF) classrooms on the pre- and post-test of the French competing story condition (Study Two).



General Discussion

Study One

Two purposes of Study One included investigating (1) children's development of selective auditory attention, and (2) effects of the meaning of competing speech on the ability to selectively attend. To assess development, selective auditory attention testing was administered at the beginning and end of the students' second semester in first grade. To evaluate effects of the meaning of competing speech, word recognition testing was performed with competing speech spoken in English (i.e., the native language) and French (i.e., an unfamiliar language). Analysis of results revealed several key findings. First, the SF and No SF group each improved significantly in their ability to selectively attend to the target speech signal in the presence of competing speech spoken in English and French. This pattern is consistent with findings from previous studies showing that children's selective auditory attention skills improve significantly with increasing age (Cherry, 1981; Doyle, 1973; Geffen & Sexton, 1978; Maccoby & Konrad, 1966; Sexton & Geffen, 1979). Second, analysis of results revealed that both groups showed significantly greater improvement in the French competing speech condition. At the pre-test, there was no significant difference between each group's (i.e., SF or No SF) scores in the English and French competing conditions, but at the post-test, students' scores were significantly better in the French condition. This pattern was seen for both the SF and the No SF group, suggesting that the ability to ignore non-meaningful competing speech may develop more rapidly than the ability to ignore meaningful competing speech. This finding is consistent with previous research showing that children's selective auditory attention performance is affected differently depending on the content of the competing signal (Cherry, 1981; Cherry & Kruger, 1983). However, previous studies have only investigated children's abilities to ignore English speech, white noise, and backwards speech. With the exception of the current study, there is a paucity of research examining how meaning and other linguistic characteristics, such as language rhythm, may affect children's selective auditory attention. Future studies should investigate children's abilities to ignore different types of competing speech signals, such as meaningful speech, grammatical but non-meaningful speech, ungrammatical strings of words, and speech spoken in different languages. Data from children of different ages could then be compared to determine whether the ability to ignore each type of competing speech signal develops at a different rate.

A third purpose of Study One was to objectively measure effects of daily exposure to soundfield amplification on children's development of selective auditory attention. Mean word recognition scores for English and French competing speech conditions were compared for first grade students in classrooms with soundfield amplification (experimental or SF classrooms) and classrooms without soundfield amplification (control or No SF classrooms). For the English competing speech condition, there were no significant differences between mean scores of the SF and No SF group, indicating no significant effect of daily exposure to soundfield amplification. For the French competing speech condition, students in SF classrooms showed greater improvement from the pre-test to the post-test than students in No SF classrooms. This difference approached the level of significance (p = 0.053), indicating a possible positive effect of daily exposure to soundfield amplification. Given that the SF and No SF group both showed significantly greater improvement in the French competing speech condition, these results may indicate that the ability to ignore non-meaningful competing speech was developing more rapidly than the ability to ignore meaningful competing speech. If maturation was driving these aspects of selective auditory attention to develop at different rates, daily exposure to soundfield amplification may have simply enhanced this natural trend. As such, a similar effect of soundfield amplification might have occurred in the English competing speech condition if the study had covered a longer period of time. Therefore, additional research is needed to investigate how longer exposure to soundfield amplification may affect children's development of different aspects of selective auditory attention.

Although Study One did not show a robust positive effect of soundfield amplification, the results demonstrate that daily exposure to amplified speech did not negatively impact the children's development of selective auditory attention. Previous studies have shown immediate improvements in speech intelligibility (Eriks-Brophy & Ayukawa, 2000; Flexer et al., 1990), spelling performance (Zabel & Tabor, 1993), and classroom behavior (Eriks-Brophy & Ayukawa, 2000; Palmer, 1998) when testing is conducted with a soundfield system in use. Therefore, use of soundfield amplification in the current experimental classrooms may have resulted in immediate improvements in the students' speech understanding in classroom noise, but these immediate effects may not have been strong enough to affect the students' underlying development of selective auditory attention.

Study Two

Study Two was a pilot study in which results from Study One were reanalyzed to compare selective auditory attention skills of children from different ethnic backgrounds. Students at School PAA were from a predominantly African American background, whereas students at School PH were from a predominantly Hispanic background. Although all students reportedly spoke

47 Journal of Educational Audiology vol. 17, 2011

English as their primary language, students at School PAA and School PH may have differed in their use of and exposure to different dialects and/or languages. Given the limited evidence available regarding effects of dialect and second language exposure on children's speech perception, two purposes of Study Two were to (1) compare development of selective auditory attention skills for normal hearing first grade students from a primarily Hispanic background (School PH) to those from a primarily African American background (School PAA), and (2) investigate whether effects of the meaning of competing speech might differentially affect selective auditory attention performance of children from different ethnic backgrounds. Results for School PAA and School PH were analyzed separately due to unequal sample sizes.

Data collected for School PAA revealed that the SF and No SF groups both improved significantly in their ability to selectively attend when competing speech was spoken in French, but not when competing speech was spoken in English. In contrast, the School PH SF and No SF groups both showed significant improvement in their mean scores for the English and French competing speech conditions. All participating students from School PAA and School PH were known to use English as their primary language. Therefore, it was presumed that for both groups, English competing speech would be meaningful, whereas French competing speech would not be meaningful. However, in both competing speech conditions (i.e., English and French), performance of School PH students was similar to how School PAA students performed in the French competing story condition.

One possible explanation is that exposure to a second language may have influenced students from the predominantly Hispanic classrooms (School PH), such that they perceived the English and French competing speech differently than students from the predominantly African American classrooms (School PAA). Although School PH students were known to use English as their primary language, they were potentially more likely to have been exposed to a second language (i.e., Spanish) in the home environment. Previous research has shown that monolingual and bilingual adult listeners' speech perception is affected by their familiarity with the language in which competing speech is spoken (e.g., Garcia Lecumberri & Cooke, 2006). In addition, recent evidence suggests that monolingual adult listeners are sensitive to underlying linguistic properties of the language of competing speech, even when competing speech is spoken in an unfamiliar language. For example, Reel and Hicks (in press) found no significant differences between monolingual English-speaking adults' selective auditory attention performance when competing speech was spoken in English or German, two languages that share a number of linguistic properties, including lexical roots and rhythmic structure (e.g., Grabe & Low, 2002). These results suggest that a listener's familiarity with linguistic properties of the language of competing speech may impact selective auditory attention, even if competing speech is spoken in an unfamiliar language. Given these findings, exposure to a second language may have impacted how School PH students selectively attended to the competing speech spoken in English, their native language, and French, an unfamiliar language that shares linguistic properties with Spanish (e.g., Sebastián-Gallés, Dupoux, Costa, & Mehler, 2000). However, this conclusion is only speculative considering that information was not collected regarding whether the participating students had in fact been exposed to a second language. Future studies should, therefore, gather information regarding each child's language exposure and investigate how exposure to a second language may affect development of different aspects of selective auditory attention, even for students who are not fluent in their second language.

In addition to potential differences in language exposure, dialectal differences between School PAA and School PH students may have also influenced their selective auditory attention performance. English was a familiar language for all participating students. However, students in School PAA may have used AAE dialect, whereas students in School PH may have used SIE dialect. The English competing speech was spoken in SAE dialect; therefore, the degree of mismatch between the dialect used for the English competing speech (i.e., SAE) and the dialect used by each group of students (i.e., possibly SIE or AAE) may have affected their selective attention. Although a few studies have investigated the impact of second language experience on children's selective auditory attention (e.g., Bovo & Callegari, 2009; Crandell & Smaldino, 1996; Nelson et al., 2005), no studies have been conducted to determine how dialect differences may impact their abilities to selectively attend. One question of interest is whether the degree of mismatch between the dialect of the speaker and the dialect of the listener may impact speech perception in competing conditions. Future studies should, therefore, gather specific information on the dialect of the target talker and the participating listeners in order to investigate how dialect may affect selective auditory attention development in children.

A third purpose of Study Two was to compare effects of daily exposure to soundfield amplification on development of selective auditory attention skills for children from a primarily Hispanic background versus those from a primarily African American background. For School PAA, daily exposure to soundfield amplification had no significant effect on students' selective auditory attention development in the English or French competing speech condition. In contrast, students in SF classrooms at School PH showed significantly greater improvement in their ability to ignore competing speech in English and French, as compared to

students in the School PH No SF classrooms. Therefore, daily exposure to soundfield amplification only improved the selective auditory attention development of School PH students, a finding that may be related to their possible exposure to a second language in the home environment. Previous research has shown that children who speak English as a second language have greater difficulties understanding speech in background noise conditions than do monolingual children (e.g., Crandell & Smaldino, 1996). Furthermore, there is evidence that ESL children show significant improvements in their ability to perceive speech in background noise when using soundfield amplification (Crandell, 1996). However, there is currently a lack of research investigating how exposure to soundfield amplification may affect the selective auditory attention development of children who have been exposed to, but are not fluent in, a second language.

Conclusions

Taken as a whole, results of the current study indicate that use of soundfield amplification does not hinder development of selective auditory attention over time. This is important given that previous studies have shown immediate benefits for selective auditory attention when soundfield amplification is used in the classroom (e.g., Mendel et al., 2003). In Study One, daily exposure to soundfield amplification did not significantly affect development of selective auditory attention skills among normal-hearing first grade students. However, there was a borderline significant (p = 0.053) effect of soundfield amplification in the French competing story condition, with the SF group showing greater improvement from pre-test to post-test than the No SF group. This finding indicates a possible positive effect of soundfield amplification on development of the ability to selectively attend to target speech while ignoring competing speech that lacks meaning.

Study Two was performed to provide pilot results that could be used to determine whether the relationship between ethnicity, selective auditory attention, and soundfield amplification warranted further investigation. Results revealed a different pattern of selective auditory attention development for students from a predominantly African American school as compared to students from a predominantly Hispanic school, with the two schools differing in their response to the semantic content of the competing speech message. Furthermore, preliminary results from Study Two indicate that exposure to soundfield amplification may affect development of selective auditory attention skills among certain groups of children, such as those exposed to a second language and/or a second dialect. However, these findings are only speculative in nature, given that specific information was not collected regarding each student's dialect usage and exposure to a second language.

Additional research is needed to further investigate the findings of Study One and Study Two. For example, future studies should examine whether a significant effect of soundfield amplification would occur if students were exposed to the amplified signal over a period of time longer than the four month course of the current study. Attempts should also be made to more closely monitor the acoustical conditions in the participating classrooms to ensure that the SF classrooms are able to achieve and maintain a significantly higher SNR than those of the No SF classrooms. Finally, studies should collect data regarding each student's ethnicity, dialect, and language background in order to assess effects of these factors on development of selective auditory attention. Future studies should also consider how exposure to soundfield amplification may affect the development of such skills among children from different ethnic, dialect, and/or language backgrounds. Together, results of such studies could lead to (1) identification of previously overlooked groups of children who may be at risk for listening difficulties in noisy classroom conditions (e.g., children from different dialect backgrounds), and (2) the design of new intervention strategies to improve classroom listening (and potentially learning) for children who struggle to attend to target speech in the presence of competing sounds.
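The group comparison behind findings such as the borderline p = 0.053 result can be illustrated with a simple resampling analysis. The sketch below is not the study's actual statistical method (which is not detailed in this section), and the gain scores are hypothetical values invented for illustration: it runs a one-sided two-sample permutation test asking whether SF classrooms improved more than No SF classrooms.

```python
import random
from statistics import mean

def permutation_test(gains_a, gains_b, n_perm=10000, seed=1):
    """One-sided two-sample permutation test: how often does a random
    relabeling of students produce a mean-gain difference at least as
    large as the observed one?"""
    observed = mean(gains_a) - mean(gains_b)
    pooled = list(gains_a) + list(gains_b)
    n_a = len(gains_a)
    rng = random.Random(seed)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if mean(pooled[:n_a]) - mean(pooled[n_a:]) >= observed:
            extreme += 1
    return (extreme + 1) / (n_perm + 1)  # add-one correction

# Hypothetical pre-to-post gain scores (percent correct); illustration only.
sf_gains = [12, 9, 15, 8, 11, 14, 10, 13]   # soundfield classrooms
no_sf_gains = [7, 5, 9, 6, 8, 4, 10, 6]     # control classrooms
p_value = permutation_test(sf_gains, no_sf_gains)
```

A p-value hovering near the 0.05 boundary, as in Study One, argues for a larger sample or a longer exposure period rather than a firm conclusion either way.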


Acknowledgements

We gratefully acknowledge Dr. Rajinder Koul, Dr. Katsura Aoyama, Dr. Renée Bogschutz, and Sherry Sancibrian at the Texas Tech University Health Sciences Center for their contributions to this project. We would also like to thank the students, teachers, and administrators at the participating schools, as well as all other individuals who generously gave their time to make the study a success.

References

American National Standards Institute. (2002). Acoustical performance criteria, design requirements and guidelines for schools (ANSI S12.60-2002). New York: Author.
American Speech-Language-Hearing Association. (2005). Acoustics in educational settings: Position statement. Retrieved from http://www.asha.org/members/deskref-journals/deskref/default
Bess, F. H., Sinclair, J., & Riggs, D. (1984). Group amplification in schools for the hearing-impaired. Ear and Hearing, 5, 138-144.
Bovo, R., & Callegari, E. (2009). Effects of classroom noise on the speech perception of bilingual children learning in their second language: Preliminary results. Audiological Medicine, 7(4), 226-232.
Bradley, J. S. (1986). Speech intelligibility studies in classrooms. Journal of the Acoustical Society of America, 80(3), 846-854.
Brungart, D. S. (2001). Informational and energetic masking effects in the perception of two simultaneous talkers. Journal of the Acoustical Society of America, 109(3), 1101-1109.
Chermak, G. D., & Zielonko, B. (1977). Word discrimination in the presence of competing speech with children. Journal of the American Audiology Society, 2(5), 188-192.
Cherry, R. S. (1981). Development of selective auditory attention skills in children. Perceptual and Motor Skills, 52, 379-385.
Cherry, R. S., & Kruger, B. (1983). Selective auditory attention abilities of learning disabled and normal achieving children. Journal of Learning Disabilities, 16(4), 202-205.
Clopper, C. G., & Bradlow, A. R. (2008). Perception of dialect variation in noise: Intelligibility and classification. Language and Speech, 51(3), 175-198.
Cooke, M., Garcia Lecumberri, M. L., & Barker, J. (2008). The foreign language cocktail party problem: Energetic and informational masking effects in non-native speech perception. Journal of the Acoustical Society of America, 123(1), 414-427.
Cool Edit Pro (Version 2) [Computer software]. (2002). Phoenix, AZ: Syntrillium Software Corporation.
Craig, H., Thompson, C., Washington, J., & Potter, S. (2004). Performance of elementary-grade African American students on the Gray Oral Reading tests. Language, Speech, & Hearing Services in the Schools, 35, 141-154.
Crandell, C. C. (1993). Speech recognition in noise by children with minimal degrees of sensorineural hearing loss. Ear and Hearing, 14(3), 210-215.
Crandell, C. C. (1996). Effects of sound-field FM amplification on the speech perception of ESL children. Educational Audiology Monograph, 4, 1-5.
Crandell, C. C., & Smaldino, J. J. (1996). Speech perception in noise by children for whom English is a second language. American Journal of Audiology, 5(3), 47-51.
Crandell, C. C., & Smaldino, J. J. (2000). Room acoustics for listeners with normal-hearing and hearing impairment. In M. Valente, H. Hosford-Dunn, & R. J. Roeser (Eds.), Audiology treatment (pp. 601-637). New York: Thieme Medical.
Darai, B. (2000, July 10). Using sound field FM systems to improve literacy scores. Advance for Speech-Language Pathologists & Audiologists, 1-2.
Doyle, A. (1973). Listening to distraction: A developmental study of selective attention. Journal of Experimental Child Psychology, 15, 100-115.
Elliott, L. L. (1979). Performance of children aged 9 to 17 years on a test of speech intelligibility in noise using sentence material with controlled word predictability. Journal of the Acoustical Society of America, 66(3), 651-653.
Elliott, L. L., Connors, S., Kille, E., Levin, S., Ball, K., & Katz, D. (1979). Children's understanding of monosyllabic nouns in quiet and in noise. Journal of the Acoustical Society of America, 66(1), 12-20.
Elliott, L. L., & Katz, D. R. (1980). Northwestern University Children's Perception of Speech (NU-CHIPS): Technical manual. St. Louis, MO: AUDITEC.
Eriks-Brophy, A., & Ayukawa, H. (2000). The benefits of sound field amplification in classrooms of Inuit students in Nunavik: A pilot project. Language, Speech, & Hearing Services in Schools, 31, 324-335.
Finitzo-Hieber, T., & Tillman, T. W. (1978). Room acoustics effects on monosyllabic word discrimination ability for normal and hearing-impaired children. Journal of Speech and Hearing Research, 21, 440-457.
Flexer, C. (1995). Classroom amplification systems. In R. J. Roeser & M. P. Downs (Eds.), Auditory disorders in school children (3rd ed., pp. 235-260). New York: Thieme Medical.
Flexer, C. (2005). Rationale for the use of sound field systems in classrooms: The basis of teacher in-services. In C. C. Crandell, J. J. Smaldino, & C. Flexer (Eds.), Sound field amplification: Applications to speech perception and classroom acoustics (2nd ed., pp. 3-22). Clifton Park, NY: Thomson Delmar Learning.
Flexer, C., Millin, J. P., & Brown, L. (1990). Children with developmental disabilities: The effect of sound field amplification on word identification. Language, Speech, & Hearing Services in Schools, 21, 177-182.
Freyman, R. L., Balakrishnan, U., & Helfer, K. S. (2001). Spatial release from informational masking in speech recognition. Journal of the Acoustical Society of America, 109(5), 2112-2122.
Freyman, R. L., Balakrishnan, U., & Helfer, K. S. (2004). Effect of number of masking talkers and auditory priming on informational masking in speech recognition. Journal of the Acoustical Society of America, 115(5), 2246-2256.
Garcia Lecumberri, M. L., & Cooke, M. (2006). Effect of masker type on native and non-native consonant perception in noise. Journal of the Acoustical Society of America, 119(4), 2445-2454.
Geffen, G., & Sexton, M. A. (1978). The development of auditory strategies of attention. Developmental Psychology, 14(1), 11-17.
Grabe, E., & Low, E. L. (2002). Durational variability in speech and the rhythmic class hypothesis. Papers in Laboratory Phonology, 7, 1-16.
Johnson, C. E. (2000). Children's phoneme identification in reverberation and noise. Journal of Speech, Language, and Hearing Research, 43(1), 144-157.
Klatte, M., Lachmann, T., & Meis, M. (2010). Effects of noise and reverberation on speech perception and listening comprehension of children and adults in a classroom-like setting. Noise & Health, 12(49), 270-282.
Knecht, H. A., Nelson, P. B., Whitelaw, G. M., & Feth, L. L. (2002). Background noise levels and reverberation times in unoccupied classrooms: Predictions and measurements. American Journal of Audiology, 11, 65-71.
Larsen, J. B., & Blair, J. C. (2008). The effect of classroom amplification on the signal-to-noise ratio in classrooms while class is in session. Language, Speech, and Hearing Services in Schools, 39, 451-460.
Leech, R., Aydelott, J., Symons, G., Carnevale, J., & Dick, F. (2007). The development of sentence interpretation: Effects of perceptual, attentional, and semantic interference. Developmental Science, 10(6), 794-813.
Maccoby, E. E., & Konrad, K. W. (1966). Age trends in selective listening. Journal of Experimental Child Psychology, 3, 113-122.
Mayo, L. H., Florentine, M., & Buus, S. (1997). Age of second-language acquisition and perception of speech in noise. Journal of Speech, Language, and Hearing Research, 40, 686-693.
McSporran, E., & Butterworth, Y. (1997). Sound field amplification and listening behaviour in the classroom. British Educational Research Journal, 23(1), 81-97.
Mendel, L. L., Roberts, R. A., & Walton, J. H. (2003). Speech perception benefits from sound field FM amplification. American Journal of Audiology, 12, 114-124.
Nábělek, A. K., & Robinson, P. K. (1982). Monaural and binaural speech perception in reverberation for listeners of various ages. Journal of the Acoustical Society of America, 71(5), 1242-1247.
Nelson, P., Kohnert, K., Sabur, S., & Shaw, D. (2005). Classroom noise and children learning through a second language: Double jeopardy? Language, Speech, and Hearing Services in Schools, 36, 219-229.
Neuman, A. C., & Hochberg, I. (1983). Children's perception of speech in reverberation. Journal of the Acoustical Society of America, 73(6), 2145-2149.
Neuman, A. C., Wroblewski, M., Hajicek, J., & Rubinstein, A. (2010). Combined effects of noise and reverberation on speech recognition performance of normal-hearing children and adults. Ear & Hearing, 31(3), 336-344.
Palmer, C. V. (1998). Quantification of the ecobehavioral impact of a soundfield loudspeaker system in elementary classrooms. Journal of Speech, Language, & Hearing Research, 41, 819-833.
Papso, C. F., & Blood, I. M. (1989). Word recognition skills of children and adults in background noise. Ear and Hearing, 10(4), 235-236.
Perrault, C. (n.d.). Classique: Cendrillon. Retrieved from http://www.jecris.com/TXT/CONTES/CENDRILLON/cendrillon1.html
Picard, M., & Bradley, J. S. (2001). Revisiting speech interference in classrooms. Audiology, 40(5), 221-244.
Purcell, N., & Millett, P. (2010). Effect of sound field amplification on grade 1 reading outcomes. Canadian Journal of Speech-Language Pathology and Audiology, 34(1), 17-24.
Reel, L. A., & Hicks, C. B. (in press). Selective auditory attention in adults: Effects of rhythmic structure of the competing language. Journal of Speech, Language, and Hearing Research.
Rhebergen, K. S., Versfeld, N. J., & Dreschler, W. A. (2005). Release from informational masking by time reversal of native and non-native interfering speech (L). Journal of the Acoustical Society of America, 118(3), 1274-1277.
Rosenberg, G. G. (1998). Relocatable classrooms: Acoustical modifications or FM sound field classroom amplification? Journal of Educational Audiology, 6, 9-13.
Rosenberg, G. G., & Blake-Rahter, P. (1995). In-service training for the classroom teacher. In C. C. Crandell, J. J. Smaldino, & C. Flexer (Eds.), Sound-field FM amplification: Theory and practical applications (pp. 149-190). San Diego, CA: Singular Publishing.
Rosenberg, G. G., Blake-Rahter, P., Heavner, J., Allen, L., Redmond, B. M., Phillips, J., & Stigers, K. (1999). Improving classroom acoustics (ICA): A three-year FM sound field classroom amplification study. Journal of Educational Audiology, 7, 8-28.
Ross, M., & Lerman, J. (1970). A picture identification test for hearing-impaired children. Journal of Speech and Hearing Research, 13, 44-53.
Ryan, S. (2009). The effects of a sound-field amplification system on managerial time in middle school physical education settings. Language, Speech, and Hearing Services in Schools, 40, 131-137.
Sapienza, C. M., Crandell, C. C., & Curtis, B. (1999). Effects of sound-field frequency modulation amplification on reducing teachers' sound pressure level in the classroom. Journal of Voice, 13(3), 375-381.
Sebastián-Gallés, N., Dupoux, E., Costa, A., & Mehler, J. (2000). Adaptation to time-compressed speech: Phonological determinants. Perception & Psychophysics, 62(4), 834-842.
Sexton, M. A., & Geffen, G. (1979). Development of three strategies of attention in dichotic monitoring. Developmental Psychology, 15(3), 299-310.
Shi, L. (2009). Normal-hearing English-as-a-second-language listeners' recognition of English words in competing signals. International Journal of Audiology, 48, 260-270.
Simpson, S. A., & Cooke, M. (2005). Consonant identification in N-talker babble is a nonmonotonic function of N (L). Journal of the Acoustical Society of America, 118(5), 2775-2778.
Stuart, A. (2008). Reception thresholds for sentences in quiet, continuous noise, and interrupted noise in school-age children. Journal of the American Academy of Audiology, 19, 135-146.
Stuart, A., Zhang, J., & Swink, S. (2010). Reception thresholds for sentences in quiet and noise for monolingual English and bilingual Mandarin-English listeners. Journal of the American Academy of Audiology, 21(4), 239-248.
Takata, Y., & Nábělek, A. K. (1990). English consonant recognition in noise and in reverberation by Japanese and American listeners. Journal of the Acoustical Society of America, 88(2), 663-666.
Tun, P. A., O'Kane, G., & Wingfield, A. (2002). Distraction by competing speech in young and older adult listeners. Psychology and Aging, 17(3), 453-467.
United States Census Bureau. (2005-2009). 2005-2009 American community survey 5-year estimates: Language spoken at home by ability to speak English for the population 5 years and over (No. B16001). Retrieved from http://factfinder.census.gov/servlet/DTTable?_bm=y&-context=dt&-ds_name=ACS_2009_5YR_G00_&-mt_name=ACS_2009_5YR_G2000_B16001&-CONTEXT=dt&-tree_id=5309&-redoLog=true&-geo_id=01000US&-geo_id=05000US48303&-search_results=01000US&-format=&-_lang=en
University of Tampere. (n.d.). AV3F English public speaking class reference files. Retrieved from http://www.uta.fi/FAST/AV3F/rainbow.html
Van Engen, K. J., & Bradlow, A. R. (2007). Sentence recognition in native- and foreign-language multi-talker background noise. Journal of the Acoustical Society of America, 121(1), 519-526.
Von Hapsburg, D., Champlin, C. A., & Shetty, S. R. (2004). Reception thresholds for sentences in bilingual (Spanish/English) and monolingual (English) listeners. Journal of the American Academy of Audiology, 15, 88-98.
Voyager universal literacy system: Teacher training manual. (2003). Dallas, TX: Voyager Expanded Learning.
Zabel, H., & Tabor, M. (1993). Effects of soundfield amplification on spelling performance of elementary school children. Educational Audiology Monograph, 3, 5-9.


Effects of Noise Attenuation Devices on Screening Distortion Product Otoacoustic Emissions in Different Levels of Background Noise

Kelsey Nielsen, Au.D.
Otolaryngology Associates, P.C., Fairfax, Virginia

Brian M. Kreisman, Ph.D.
Towson University, Towson, Maryland

Stephen Pallett, Au.D.
Towson University, Towson, Maryland / ENTAA Care, Annapolis, Maryland

Nicole V. Kreisman, Ph.D.
Towson University, Towson, Maryland / Advanced Hearing Centers, White Marsh, Maryland

The purpose of this study was to compare the effect of active noise cancellation headphones and standard earmuffs on the ability to screen distortion product otoacoustic emissions (DPOAEs) in the presence of background noise. The time required to screen 1000 to 5000 Hz and 2000 to 5000 Hz (including set-up time) was analyzed, as well as the pass/refer result for each frequency. Four noise conditions were utilized: quiet, 40 dBA, 60 dBA, and 80 dBA of uncorrelated speech babble. Participants had hearing within normal limits as evidenced through behavioral pure-tone testing, tympanometry, and diagnostic DPOAE measurements. The study included screening DPOAEs from 1000 to 5000 Hz using no headphone, the active noise cancellation headphone, and the standard earmuff in all four noise conditions. Results indicated significant differences in the time required to screen DPOAEs from 1000 to 5000 Hz in the presence of background noise when the active noise cancellation headphone or the standard earmuff was used, compared to using no headphone. Significant differences were also noted in the number of refers recorded. Results suggested that using a modified set-up with standard earmuffs to screen DPOAEs in background noise may reduce the time to screen DPOAEs, may provide additional audiometric information that may not be otherwise obtained (1000 Hz), and may reduce the number of false refers due to the background noise.
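The arithmetic behind the screening outcomes summarized above can be sketched in a few lines. Both numeric conventions below are assumptions for illustration, not values taken from this article: the f2/f1 = 1.22 primary ratio and the 6 dB emission-to-noise-floor pass criterion are common clinical choices.

```python
def dpoae_frequencies(f2_hz, ratio=1.22):
    """Return (f1, 2*f1 - f2): the lower primary and the cubic
    distortion product frequency for a given f2 and f2/f1 ratio."""
    f1 = f2_hz / ratio
    return f1, 2 * f1 - f2_hz

def passes_screen(dp_level_db, noise_floor_db, min_snr_db=6.0):
    """Pass only if the emission exceeds the measured noise floor by
    at least min_snr_db; a rising noise floor turns passes into refers."""
    return dp_level_db - noise_floor_db >= min_snr_db

f1, dp = dpoae_frequencies(4000)   # f1 and the 2f1-f2 distortion product
quiet = passes_screen(5.0, -8.0)   # 13 dB SNR: pass
noisy = passes_screen(5.0, 2.0)    # noise floor raised by babble: refer
```

This is why ambient noise inflates refer rates, especially at low frequencies: the emission level is unchanged, but the noise floor recorded under the probe rises until the SNR criterion can no longer be met.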

Introduction

Hearing screening programs seek to identify individuals at risk for auditory problems from a group of normal hearing individuals, and those who "refer" on the screening go on for a full diagnostic evaluation. A hearing screening protocol allows for a large number of individuals to undergo an audiometric procedure with accuracy and validity in a short period of time. Otoacoustic emissions (OAEs) have been shown to be a quick, noninvasive, effective screening tool for newborns and adults when evaluating hard-to-test and normal hearing populations (Owens, McCoy, Lonsbury-Martin, & Martin, 1992).

Two main methods of evoking OAEs via a probe in the external ear canal are used clinically today. Transient evoked OAEs (TEOAEs) present a complex stimulus, such as a click or a tone burst, whereas distortion product OAEs (DPOAEs) present a simultaneous pair of pure tones and record the 2f1-f2 distortion product. A healthy middle and inner ear will produce a response recorded by the probe microphone. The presence of middle ear pathology, cochlear pathology, and/or poor fit of the probe will result in lowered or absent OAEs (Kemp, 2007).

Hearing screenings are often performed in acoustically-poor testing environments. Quiet testing rooms may not be available. For example, Greenwood (2010) measured the average background noise in five preschool facilities in the rooms in which screenings were being conducted. The average background noise levels ranged from 52.8 dBA to 66.3 dBA. Peak background noise levels for screenings at Special Olympics games have been measured as high as 85 dBA (Neumann et al., 2006). It is well known that DPOAE testing in areas with high ambient noise levels

53 Journal of Educational Audiology vol. 17, 2011 in the screening environment will interfere with the recording of (especially with already low signal-to-noise ratios), thus reducing an accurate response in a timely fashion (Hall, 2000). Attempting the specificity of DPOAE results (Whitehead, Lonsbury-Martin, & to obtain OAEs in a noisy environment can result in increased test Martin, 1993). time, thus reducing the number of individuals able to be screened. The effect of noise on the accuracy and length of DPOAE The Healthy Hearing program, a part of Special Olympics screening time has been documented; however, to date, there Healthy Athletes, started in 1998 to provide hearing screening for remains a paucity of data regarding the use of sound-attenuating the participating athletes (Herer & Montgomery, 2006). DPOAEs earmuffs to reduce background noise during OAE screenings. Data are an integral part of the Healthy Hearing screening protocol. regarding the effect of sound-attenuating ear muffs on the ability Athletes are screened with DPOAEs after an otoscopic inspection to accurately record DPOAEs can positively affect the screenings for occluding cerumen. A “pass” completes the screening. in environments with loud background noise, including preschool Athletes who fail the DPOAE screening would continue on for screenings and the Healthy Hearing Program. tympanometry and possibly pure tone testing. To reduce the number The purpose of this study was to determine the effect of of failing athletes, the ambient noise levels have to be controlled, passive noise-attenuating earmuffs and active noise cancellation especially if a quiet screening location is not available. The use of headphones on the ability to obtain DPOAEs in background noise, active noise cancellation headphones or passive standard ear muffs as well as the length of time needed to screen DPOAEs in each was examined as a means to control the amount of ambient noise. condition. 
This study aimed to determine the effect of each type To date, there are few studies which have assessed specific of headphone on the time it takes to screen DPOAEs under noisy methods for attenuating background noise with DPOAE screenings. conditions, as well as the specificity of pass/refer rates under each In the 2004 German Summer Special Olympics games, attempts condition when compared to not using the headphones or earmuffs. at creating a quiet environment for DPOAE screening were It was hypothesized that the use of either headphone type would made using sound-attenuating ear muffs placed over the DPOAE result in more accurate pass/refer rates and would reduce the time probes on 184 of the 755 athletes tested, in an outdoor tent area needed to screen DPOAEs in background noise. (Neumann et al., 2006). DPOAEs were also recorded without any noise-reducing method (463 athletes), in a sound attenuating van Method (64 athletes), and in a sound attenuating booth (44 athletes). Peak noise levels recorded for the screening areas were between 75 and Participants 85 dBA. No specific data were recorded regarding the effect of Thirty adult volunteers participated in this study; however, the specially constructed sound-attenuating muffs on the pass/ data from only 29 were included in the analyses. One participant refer rate for the DPOAE station. The authors noted, however, was not included in the statistical analysis because diagnostic that approximately one-third of the athletes screened did not pass DPOAE measurements for one frequency did not meet inclusion the healthy hearing screening, with 56.1% of those athletes failing criterion. Participants had to sign an informed consent form the DPOAE and pure tone stations. It was acknowledged that the approved by the Towson University Institutional Review Board. 
level of ambient noise, among other reasons, could account for the An a priori power analysis was conducted to determine the number high refer rate for the summer 2004 German games; however, the of participants needed for this study based on Lenth (2009). With authors did not provide information regarding how the earmuffs β = 0.8, a sample size of 26 was required. The participants ranged were adapted for the DPOAE probe or effect of noise reduction in age from 19 to 34 years, with a mean age of 23 years. In order techniques on overall pass/refer rates. to be included in the study, each subject needed to have normal hearing (thresholds of 15 dB HL or better, no air-bone gaps for 250 Purpose to 8000 Hz including interoctaves), normal diagnostic DPOAEs, In order for OAEs to be accurately measured, the response normal tympanometric results, and no known cochlear or middle level of the emission needs to be larger than the level of the noise ear pathologies. floor. The main problem with using OAEs in a screening setting is the level of ambient noise. Often screenings are held in less Background Noise than ideal acoustic environments (i.e., cafeterias, rooms with Uncorrelated speech babble from the second track of the high ceilings, crowded auditoriums), which can create a problem commercially available Modified Rhyme Test (MRT) compact when trying to obtain accurate OAE measurements, especially at disc (Cosmos Distributing, Inc.) served as the background noise. frequencies below 2000 Hz (Neumann et al., 2006). Excessive noise can lead to the over-estimation of DPOAE amplitude
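The power analysis described above is only partially specified (the assumed effect size and test are not stated in the article). As a rough illustration of this kind of calculation, and not the authors' exact Lenth (2009) computation, a normal-approximation sample-size estimate for a paired design with two-sided α = .05 and power 0.8 can be sketched as:

```python
from math import ceil
from statistics import NormalDist

def paired_sample_size(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Normal-approximation n for a paired (one-sample) t-test.

    effect_size is Cohen's d for the within-pair differences. This is a
    rough sketch: Lenth's applets use the exact noncentral t distribution,
    so their answers differ slightly for small samples.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance criterion
    z_beta = z.inv_cdf(power)           # power requirement
    return ceil(((z_alpha + z_beta) / effect_size) ** 2)

# Hypothetical medium effect (d = 0.5); the effect size behind the study's
# reported n of 26 is not given in the article.
n_needed = paired_sample_size(0.5)
```

For a medium effect of d = 0.5 this formula gives the textbook value of 32 pairs; the study's reported 26 implies a somewhat larger assumed effect.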

Effects of Noise Attenuation Devices on Screening Distortion Product Otoacoustic Emissions in Different Levels of Background Noise

Earmuffs and Headphones

Two types of noise cancellation headphones were used. The active noise cancellation headphone used was the Bose® Quiet Comfort 2 Acoustic Noise Cancelling, around-the-ear headphone (AH), and the standard noise attenuation headphone used was the Bilsom® Leightning L3 noise blocking earmuff (SE). The following adaptations were made to both the SE and AH headphones in order to accommodate the DPOAE probe. Masking tape was fashioned into a loop with the sticky surface facing outward and applied to the sides and top of an extra Bilsom® Leightning earmuff cushion. This cushion was then pressed against the existing cushion of the earmuff (SE) or headphone (AH). The DPOAE probe was threaded through the bottom, between the two cushions, leaving several inches of cord hanging, and then placed into the ear canal. The cord was gently pulled to reduce the length of DPOAE cord between the ear and cushion, allowing the probe to fit inside the earmuff/headphone cushion with the cushion snug against the head. The earmuff/headphone was then applied over the opposite ear. Figures 1 through 3 display photos of the headphone modifications.

Figure 1. Masking tape was applied to the top and two sides of a second cushion from a Bilsom Leightning Earmuff. The second cushion was then placed on the existing standard earmuff (SE) or active noise-cancellation headphone cushion (SE shown).

Procedures

Hearing thresholds were determined using the Grason-Stadler Instruments (GSI) 61 audiometer with EAR 3A insert earphones and the Radioear B-71 bone oscillator. Thresholds were determined following the American Speech-Language-Hearing Association (ASHA) guidelines for manual pure-tone audiometry (2005). Middle ear status was determined using the GSI Tympstar. Diagnostic DPOAEs were elicited using the ILO-V.6 OAE software in the Speech, Language & Hearing Center at Towson University.
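The DPOAE protocols used here pair two primary tones at a fixed ratio (the measurement parameters reported below use f2/f1 = 1.22), and the recorded emission is the 2f1 - f2 distortion product introduced earlier. A minimal sketch of the frequency arithmetic (the choice of f2 = 2000 Hz is illustrative only):

```python
def dpoae_frequencies(f2: float, ratio: float = 1.22) -> tuple[float, float]:
    """Return (f1, fdp) for a given f2 and f2/f1 ratio.

    The DPOAE of clinical interest is the cubic difference tone at
    2*f1 - f2, which falls below both primaries.
    """
    f1 = f2 / ratio
    fdp = 2 * f1 - f2
    return f1, fdp

f1, fdp = dpoae_frequencies(2000.0)
# f1 is about 1639 Hz and the 2f1 - f2 component is about 1279 Hz,
# i.e., the emission is recorded well below the nominal f2 frequency.
```

This is why low-frequency screening bands (the 1000 Hz point examined in this study) are the ones most exposed to ambient noise: the recorded distortion product sits even lower than the test frequency.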

Figure 2. The probe cord was threaded through the cushions.

Figure 3. The DPOAE probe was inserted into the ear canal and the cord gently pulled to reduce slack and allow the probe to fit securely and comfortably in the ear canal.


The DPOAEs were measured using eight points per octave, with L1 = 65 dB, L2 = 55 dB, and an f2/f1 ratio of 1.22. An analysis of each participant's DPOAEs was made using the DP-gram as described by Gorga (1993). Screening DPOAE measurements were taken using the AuDX Pro II hand-held OAE screener with a stop criterion of 260 sweeps, an f2/f1 ratio of 1.22, and L1 = 65 dB, L2 = 55 dB. Information regarding the screening measurements was stored on a Dell computer using Microsoft Excel. Results were also recorded on paper, indicating DPOAE results for each condition as well as timing information.

The uncorrelated speech babble was presented using the ProTools 7.3 software on an iMac laptop computer. The signal was transmitted from the ProTools 7.3 software by the DigiDesign 002, through balanced line cables connected to an 8-speaker KRK Systems Rokit5 powered arrangement. The speakers were set at a height of 3.5 feet and were located at 0-, 45-, 90-, 135-, 225-, 270-, 315-, and 335-degree azimuths, 0.75 meters from the participant. The participant was seated in the center of the speaker array. Measurements using the AuDX Pro II took place in the Center for Amplification, Rehabilitation, & Listening (CARL) at Towson University in a 7.5' x 7.0' IAC double-walled booth.

Once all inclusion criteria were met, DPOAE screening measurements were taken in the following conditions for both ears:
1. Without headphones, in:
   a. ambient background noise
   b. 40 dBA background noise
   c. 60 dBA background noise
   d. 80 dBA background noise
2. With Bose® Quiet Comfort 2 headphones (AH), in:
   a. ambient background noise
   b. 40 dBA background noise
   c. 60 dBA background noise
   d. 80 dBA background noise
3. With Bilsom® Leightning earmuffs (SE), in:
   a. ambient background noise
   b. 40 dBA background noise
   c. 60 dBA background noise
   d. 80 dBA background noise

The frequencies used for screening DPOAEs included 1000, 2000, 3000, 4000, and 5000 Hz, in descending order. These test frequencies follow the protocol of the Healthy Hearing Screening program of the Special Olympics, with the exception of 1000 Hz. Screening included 1000 Hz to determine if lower-frequency DPOAEs, which are most affected by background noise, could be recorded in a noisy environment with the use of noise-attenuating headphones. Being able to screen DPOAEs at 1000 Hz could provide additional audiometric information that otherwise could not be obtained.

Background noise levels were determined based upon measurements taken during the 2008 Maryland Summer Special Olympics games at Towson University. Noise levels were monitored throughout the day of screening and ranged from 41 to 80 dBA. The three noise levels used were as follows: Soft: 40 dBA; Medium: 60 dBA; and Loud: 80 dBA. Noise levels were calibrated at the center of the speaker array via the Ivie IE-35 Audio Analysis System, functioning as a Type I sound level meter.

Three measurements of time were recorded during testing: the length of time for the headphone/probe to be placed on the ear, the length of time needed to record DPOAEs from 2000 to 5000 Hz, and the length of time for all five frequencies to be screened (1000 to 5000 Hz) under the above background noise conditions. Two time measurements were made to follow the Healthy Hearing Program protocol, as well as to measure the amount of time needed to add 1000 Hz to the screening. Pass/refer rates were also recorded for each frequency under each condition. A refer was recorded if the AuDX Pro II specifically stated "refer" for a specific frequency or if noise was recorded for a specific frequency. If the AuDX Pro II presented the message "could not calibrate" or indicated excessive noise levels ("noisy"), the exact message was recorded and still classified as a refer. The message of excessive noise levels was only recorded if the message continued to appear after selecting "continue" on the testing screen three times in a row.

Measurements using the Knowles Electronic Manikin for Acoustic Research (KEMAR)

Measurements of the background noise on KEMAR were made using the Bruel & Kjaer Digital Frequency Analyzer Type 2131 sound level meter. Noise levels per frequency band from 125 to 8000 Hz for each background noise level were measured using KEMAR without headphones. The same measures were then made with the modified earmuffs and headphones, with the probe cord inserted between the regular padding and the taped-on pad (but without the probe in KEMAR's ear), in order to determine the attenuation of the SE and AH.

Statistical Analyses

Statistical analysis included a 3x4 analysis of variance (ANOVA) for time to determine if there was a significant difference in length of screening time with and without the headphones/earmuffs in noise, with post-hoc testing completed via paired-samples t-tests. Independent variables were headphone condition (NH, AH, SE) and background noise condition (quiet, 40, 60, 80 dBA). The dependent variable was the time (in seconds), beginning with setting up the DPOAE probe (and AH or SE, when appropriate) and ending when the screening results were displayed on the AuDX Pro II. Friedman's tests for repeated-measures nonparametric data were completed to determine the significance of pass and refer rates for screening DPOAEs using the headphones/earmuffs in background noise, with post-hoc analysis completed via Wilcoxon signed rank tests. For these analyses, the dependent variable was the number of passes and refers, while the independent variables remained the headphone condition and background noise condition.

Results

Noise Measurements

In order to determine the amount of attenuation of the modified SE and AH, measurements were made on KEMAR for frequency bands from 125 to 8000 Hz. Table 1 displays the noise measures per frequency band in the IAC booth, as measured on KEMAR without the addition of headphones. Tables 2 and 3 display the attenuation for the modified headphones with the probe cord inserted through the headphones, but without the probe in KEMAR's ear, for the SE and AH, respectively.

Table 1. Noise levels (in dBA) on KEMAR for the background noise conditions.

                   Frequency (Hz)
Noise Condition    125    250    500    1000   1600   2000   3000   4000   6000   8000
Quiet              11.2   5.4    3.0    3.0    6.0    10.2   11.6   8.7    10.2   10.9
40 dBA             26.7   24.1   23.6   22.1   22.6   22.9   19.4   14.9   10.4   10.3
60 dBA             46.0   45.3   45.1   43.1   44.7   41.6   41.1   37.1   27.1   23.4
80 dBA             64.4   66.5   64.8   62.3   61.6   63.2   60.9   56.1   44.0   40.4

Table 2. Noise reduction (attenuation in dBA) on KEMAR for modified standard earmuffs with the probe cord inserted through the headphones.

                   Frequency (Hz)
Noise Condition    125    250    500    1000   1600   2000   3000   4000   6000   8000
Quiet              -2.8   0.0    1.2    2.0    3.6    8.4    7.6    5.7    5.8    5.8
40 dBA             -3.0   1.3    16.0   20.3   20.2   21.1   12.4   9.5    6.0    5.2
60 dBA             -6.5   0.9    16.2   26.9   30.3   24.2   13.6   18.5   16.9   16.9
80 dBA             -4.7   2.8    19.5   25.3   26.5   26.6   16.6   17.0   18.9   20.3

Table 3. Noise reduction (attenuation in dBA) on KEMAR for modified active noise cancellation headphones with the probe cord inserted through the headphones.

                   Frequency (Hz)
Noise Condition    125    250    500    1000   1600   2000   3000   4000   6000   8000
Quiet              0.0    0.3    -1.0   -2.7   3.5    5.8    2.9    5.2    1.8    5.2
40 dBA             10.0   8.7    9.7    8.4    16.3   18.5   11.1   10.5   2.0    4.6
60 dBA             7.6    9.7    9.7    8.9    20.2   21.9   21.6   22.4   17.4   17.7
80 dBA             8.1    10.8   10.0   9.0    17.5   25.1   23.3   22.0   20.6   21.7

Preliminary Analysis

The lengths of time it took to set up and screen 2000 to 5000 Hz and 1000 to 5000 Hz in each background noise condition for the right and left ears were compared via paired-sample t-tests. Results suggested no significant ear effects; therefore, the data for both ears were combined for all further statistical analyses (N = 58).

Effect of Headphone/Earmuff on Time

For each headphone condition, the following times were recorded: the time to set up before testing, the time to record DPOAEs from 2000 to 5000 Hz for each background noise condition, and the time to obtain DPOAE results for 1000 Hz for each background noise condition. The set-up time for each headphone was added to the DPOAE recording times, as significant differences were noted in the set-up time between no headphone (NH), active headphone (AH), and standard earmuff (SE). Data were analyzed via 3x4 repeated-measures ANOVAs. These analyses compared whether there was a statistically significant difference in timing between any background noise conditions or headphone conditions. Separate ANOVAs were completed for the time to set up and obtain DPOAE results for 2000 to 5000 Hz and for the time to set up and obtain DPOAE results for 1000 to 5000 Hz, with post-hoc analyses via paired-samples t-tests using a Bonferroni family-wise correction for each background noise condition (α = .05/3 = .017) to guard against the possibility of a Type I error. Descriptive statistics (mean and standard deviation) for all noise and earphone conditions are shown in Table 4.

Time to Set Up and Obtain DPOAEs from 2000 to 5000 Hz

Results for the ANOVA comparing the time it took to set up and obtain DPOAEs from 2000 to 5000 Hz suggested a significant interaction effect between headphone conditions and noise conditions (F(6,342) = 18.69, p < .001), as well as significant main effects for headphone condition (F(2,114) = 23.80, p < .001) and noise condition (F(3,171) = 114.88, p < .001). Post-hoc analyses for the quiet and 40 dBA background noise conditions indicated that recording the DPOAEs from 2000 to 5000 Hz with NH was significantly faster than with either AH or SE (p < .001 for all paired comparisons). For the 60 dBA background noise condition, both the SE and AH took significantly less time to record the DPOAEs than the NH (p < .001 for both paired comparisons), with no significant difference between the SE and AH. For the 80 dBA background noise, the SE took significantly less time to record DPOAEs than the NH (p = .007) as well as the AH (p < .001).

Time to Set Up and Obtain DPOAEs from 1000 to 5000 Hz

Results for the ANOVA

comparing the time it took to set up and obtain DPOAEs from 1000 to 5000 Hz suggested a significant interaction effect between headphone conditions and noise conditions (F(6,342) = 73.44, p < .001), as well as significant main effects for headphone condition (F(2,114) = 27.87, p < .001) and noise condition (F(3,171) = 306.07, p < .001). Post-hoc analyses for the quiet and 40 dBA background noise conditions suggested that the screening of DPOAEs from 1000 to 5000 Hz with NH was significantly faster than with either AH or SE (p < .001 for all paired comparisons). For the 60 dBA background noise condition, the SE was significantly faster than the AH (p = .001), while there were no significant differences between NH and either AH or SE. For the 80 dBA background noise condition, the SE was significantly faster than the AH or the NH (p < .001 for both comparisons), with no significant differences between NH and AH.

Table 4. Mean time (standard deviation) in seconds to screen DPOAEs from 1000 to 5000 Hz and 2000 to 5000 Hz.

Noise level   Headphone   1000-5000 Hz   2000-5000 Hz
Quiet         NH          24.5 (6.5)     22.9 (6.4)
              AH          35.4 (4.5)     33.6 (4.4)
              SE          35.4 (5.7)     33.6 (5.5)
40 dBA        NH          25.5 (9.6)     22.9 (6.2)
              AH          35.7 (4.8)     33.3 (4.4)
              SE          35.3 (5.8)     33.5 (5.4)
60 dBA        NH          38.6 (23.9)    24.7 (7.7)
              AH          42.5 (10.8)    34.4 (4.8)
              SE          37.0 (7.4)     33.4 (5.4)
80 dBA        NH          88.3 (33.7)    51.8 (27.2)
              AH          85.0 (27.4)    54.4 (18.6)
              SE          63.3 (20.8)    41.5 (10.9)
Note. NH = no headphone; AH = active noise-cancellation headphone; SE = standard earmuff.

Effect of Headphone on Pass or Refer Result

Table 5 provides the detailed pass/refer results for each headphone condition and background noise condition. In quiet and 40 dBA background noise, no significant differences were found between the pass/refer results for headphone conditions at any of the frequencies screened. With 60 dBA of background noise, the number of refers recorded with NH at 1000 Hz increased to six, compared to one refer with AH and no refers with SE. In 80 dBA background noise, the numbers of refers for 2000 Hz were 13, five, and two for NH, AH, and SE, respectively. A similar pattern was found for 1000 Hz in 80 dBA background noise, where the numbers of refers were 32, 24, and 10 for NH, AH, and SE, respectively.

Table 5. DPOAE pass/refer results for each background noise condition.

Headphone Condition   Noise     1 kHz   2 kHz   3 kHz   4 kHz   5 kHz
No Headphone          Quiet     58/0    58/0    58/0    58/0    58/0
                      40 dBA    58/0    58/0    57/1    58/0    58/0
                      60 dBA    52/6    57/1    57/1    58/0    58/0
                      80 dBA    26/32   45/13   55/3    56/2    56/2
Active Headphone      Quiet     58/0    57/1    58/0    58/0    57/1
                      40 dBA    58/0    58/0    58/0    58/0    57/1
                      60 dBA    57/1    57/1    58/0    58/0    57/1
                      80 dBA    34/24   53/5    57/1    58/0    54/4
Standard Earmuff      Quiet     58/0    58/0    58/0    58/0    57/1
                      40 dBA    58/0    58/0    58/0    58/0    57/1
                      60 dBA    58/0    58/0    58/0    58/0    57/1
                      80 dBA    48/10   56/2    56/2    57/1    56/2
Note. Number of passes/number of refers.

Results of the Friedman's test for nonparametric data suggested significant overall results for pass/refer rates between headphones in 80 dBA background noise (χ²(14) = 241.68, p < .001). Further analysis using the Friedman's tests also suggested significant differences between headphone and background noise conditions for 1000 Hz in 60 dBA (χ²(2) = 8.86, p = .012) and 80 dBA (χ²(2) = 24.80, p < .001), as well as for 2000 Hz in 80 dBA (χ²(2) = 14.92, p = .001). Post-hoc analysis found significant differences between the numbers of refers for the headphone/NH conditions for the following: 1000 Hz at 60 and 80 dBA, as well as 2000 Hz at 80 dBA (see Table 6). These conditions were noted as having significant differences between headphone, frequency, and noise conditions using the Wilcoxon Signed Rank test.

Table 6. Significant differences in DPOAE "refers" for the Wilcoxon Signed Rank Test.

Frequency   Noise level   Comparison   Z Value   p-value
1000 Hz     60 dBA        SE - NH      -2.449    0.014
            80 dBA        SE - NH      -4.69     < 0.001
                          SE - AH      -3.3      0.001
2000 Hz     80 dBA        AH - NH      -2.309    0.021
                          SE - NH      -3.317    0.001
Note. NH = no headphone; AH = active noise-cancellation headphone; SE = standard earmuff.

Specific reasons for refers recorded at 1000 Hz in the 60 and 80 dBA background noise conditions, as well as at 2000 Hz in the 80 dBA background noise condition, are displayed in Table 7. A majority of refers for 1000 Hz in the 80 dBA background noise (with no headphone) were the result of a reading of "noisy," or the AuDX Pro II could not complete testing. Although the participants had normal DPOAEs (as evidenced by the diagnostic OAEs), the AuDX Pro II was unable to record a DPOAE in the presence of 80 dBA of noise.
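The Friedman statistic reported in these analyses can be computed by ranking each ear's outcomes across the three headphone conditions and comparing rank sums. A minimal stdlib sketch of the uncorrected statistic follows (library routines such as scipy.stats.friedmanchisquare also apply a tie correction, so their values on heavily tied pass/refer data will differ):

```python
def friedman_chi2(data: list[list[float]]) -> float:
    """Friedman chi-square for n subjects x k related conditions.

    Minimal sketch without the tie correction applied by full statistics
    packages; ties within a subject's row receive average ranks.
    """
    n, k = len(data), len(data[0])
    rank_sums = [0.0] * k
    for row in data:
        order = sorted(range(k), key=lambda i: row[i])
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1  # average rank for the tied block
            for t in range(i, j + 1):
                rank_sums[order[t]] += avg_rank
            i = j + 1
    return 12 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)

# Four subjects, three conditions, identical condition ordering in every row:
stat = friedman_chi2([[1, 2, 3], [1, 2, 3], [1, 2, 3], [1, 2, 3]])
# stat == 8.0 for this perfectly consistent ordering (df = k - 1 = 2).
```

For the pass/refer analyses above, each row would hold one ear's coded outcome (e.g., pass = 0, refer = 1) under the NH, AH, and SE conditions.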

Table 7. Specific "refer" reasons pertaining to conditions found to have significant differences via the Wilcoxon Signed Rank Test.

Frequency   Noise level   Result                NH   AH   SE
1000 Hz     60 dBA        Could not calibrate   0    0    0
                          Could not test        1    0    0
                          Noisy                 5    1    0
                          Refer                 0    0    0
                          Total                 6    1    0
            80 dBA        Could not calibrate   1    0    0
                          Could not test        19   13   4
                          Noisy                 12   11   6
                          Refer                 0    0    0
                          Total                 32   24   10
2000 Hz     80 dBA        Could not calibrate   1    0    0
                          Could not test        4    0    1
                          Noisy                 8    5    1
                          Refer                 0    0    0
                          Total                 13   5    2
Note. NH = no headphone; AH = active noise-cancellation headphone; SE = standard earmuff.

Discussion

The purpose of this study was to examine the effect of active noise-cancelling headphones (AH) and standard hearing protection earmuffs (SE) on the ability to screen DPOAEs in background noise in a timely and accurate manner.

Effects of Headphone/Earmuffs on Time to Set Up and Obtain DPOAEs from 2000 to 5000 Hz

Results indicated that the time needed to screen 2000 to 5000 Hz (including set-up time) was significantly reduced with the use of a SE or AH in high noise levels (i.e., 60 dBA or higher) when compared to NH. Specifically, in 60 dBA background noise, both SE and AH were equally effective in decreasing the time needed to set up and screen DPOAEs from 2000 to 5000 Hz. In 80 dBA background noise, the SE was more effective than NH or the AH in reducing the time to screen 2000 to 5000 Hz. For those screening programs whose protocol includes screening from 2000 to 5000 Hz (such as the Special Olympics Healthy Hearing program), this finding is especially noteworthy. The use of a standard hearing protection earmuff in their screening protocol may increase the number of people who are able to be screened, regardless of the noise environment. These findings are consistent with Hall (2000), who suggests limiting noise levels when obtaining DPOAEs.

Effects of Headphone/Earmuffs on Time to Set Up and Obtain DPOAEs from 1000 to 5000 Hz

Similar results were recorded for the time needed to screen 1000 to 5000 Hz. In 60 and 80 dBA background noise, the SE was more effective in reducing the time needed to set up and screen all five frequencies. For those who are hesitant to include 1000 Hz in DPOAE screening because of the negative effect of background noise (Zhao & Stephens, 1999), these results suggest that it may be possible to add this frequency without significantly increasing the time needed to screen DPOAEs in noise.

In contrast, in lower noise levels (less than or equal to 40 dBA), the SE and AH required significantly more time than the NH condition. When testing in a relatively quiet setting, the use of AH or SE devices may not be warranted. However, in noisy conditions, such as those found at the 2008 Special Olympics held at Towson University, it may take several minutes to obtain DPOAE results. Using the SE to help attenuate the background noise may decrease the time needed to screen, thus reducing the amount of time needed to screen each individual and increasing the number of individuals who can be screened each day.

Effects of Headphones/Earmuffs on Pass/Refer Results

The difference in the number of refers between NH and the use of a headphone/earmuff was significant, especially when screening 1000 Hz. This finding supports the hypothesis that the use of either type of headphone would result in more accurate pass/refer results than no headphone. No significant differences in pass/refer results were noted for 5000, 4000, or 3000 Hz, which is in agreement with the idea that DPOAEs are more easily recorded for mid and high frequencies than for low frequencies (below 1500 Hz) even with background noise (Gorga et al., 1993). Improving the signal-to-noise ratio (SNR) will increase the likelihood that a true DPOAE has been recorded at 1000 Hz, as poor SNRs are one of the main reasons for a refer to be noted at 1000 Hz even in normal ears (Gorga et al., 1993).

Refers recorded for 1000 Hz at 80 dBA with NH were the result of either "could not test" or "noisy." With AH, the number of refers was reduced from 32 to 24 when compared to the NH. The number of refers was reduced even more with the use of the SE (32 refers reduced to 10 refers). Screening 1000 Hz in 60 dBA of noise using the AH reduced the number of refers from six to one, and from six to none when using the SE. The number of refers recorded when screening 2000 Hz in 80 dBA of noise was also remarkable, with refers reduced from 13 to five with the AH and from 13 to two with the SE.

Overall, the number of refers was reduced the most when using the SE. When combined with the significant reduction

in time found when screening 1000 Hz, the SE was better at improving the ability to screen DPOAEs in noise than the AH. This finding is important because the SE is less expensive than the AH and would, therefore, be more affordable for screening programs. These results may be due to the actual noise attenuation of the SE for the frequencies tested (with or without the addition of the second cushion) when compared to the AH.

The findings also lead to the possibility of adding 1000 Hz to the Healthy Hearing screening protocol and other DPOAE screening protocols that normally screen 2000 to 5000 Hz. If it is possible to include more frequencies in hearing screening (without significantly increasing the time needed to screen each athlete), the use of the SE would be justified, especially in poor screening environments.

Limitations of the Study

Although current results suggest that the use of a SE for screening DPOAEs in high levels of background noise may reduce the number of refers due to noise, testing was performed on adults with normal hearing. Applications of this study to other populations, such as preschool and school-age children, would be necessary to generalize findings to those who participate in hearing screenings. The method in which each headphone/earmuff was manipulated to accommodate the DPOAE probe is also a limitation of this study. A cushion made specifically for accommodating the DPOAE probe would likely eliminate the possibility of changing the acoustics of the existing ear cushions. The accommodations required for the DPOAE probe also may affect the fit of the probe itself. Proper fit of the DPOAE probe is essential for reducing the effect of noise on the recording of the DPOAE and must be taken into consideration when altering the way it is inserted into the ear canal (Hall, 2000). It should be noted that, depending on the size and configuration of the probe and cord, different OAE screening devices may or may not be able to accommodate the probe under the earmuffs.

Using only the AuDX Pro II OAE hand-held screener limits the ability to generalize the findings of this study to all screening environments and protocols. Although the AuDX Pro II is used by the Special Olympics Healthy Hearing Screening, it may not be the equipment of choice for other screening programs. Also, as only one type of AH and one type of SE were used, other headphones/earmuffs should be evaluated for their efficiency. Evaluating specific costs against the actual benefit in noise of reduced screening times and more accurate screenings will provide more information to organizations that may benefit from the use of a headphone or earmuff in their screening protocols.

Future Research

More research is necessary to apply the findings of this study to the general population. The use of the AuDX Pro II in screening environments may be more effective if a cushion is made specifically to accommodate the DPOAE probe. It would also be beneficial to study the impact of headphones/earmuffs on the ability to screen DPOAEs in other populations, including individuals with hearing loss. Although the effect of the SE on refer rates may not be applicable to those with hearing loss, reducing the time spent screening DPOAEs will reduce the amount of time those with true hearing loss spend as they continue through the hearing screening stations. Finally, evaluation of the AuDX Pro II in comparison with other OAE screening devices should be considered.

Conclusion

For individuals with normal hearing, the use of a standard hearing protection earmuff (SE) significantly reduced the amount of time needed to screen DPOAEs from 2000 to 5000 Hz, as well as from 1000 to 5000 Hz, in background noise at or above 60 dBA. Although the active noise-cancellation headphone (AH) also reduced the time needed to screen DPOAEs in these noise levels, the SE was more efficient. The effect of the SE and AH on reducing the number of referrals recorded due to noise was also significant for 1000 and 2000 Hz at background noise levels at or above 60 dBA. It is noteworthy that the SE was more effective than no headphone (NH) and the AH in reducing the number of refers recorded. In summary, results suggested that using a modified set-up with standard earmuffs to screen DPOAEs in moderate to high levels of background noise may reduce the time needed to screen DPOAEs, may provide additional audiometric information (1000 Hz) that may not otherwise be obtained, and may reduce the number of false referrals due to the background noise. This information is potentially noteworthy for preschool screening programs that include DPOAEs in their protocol, as well as other organizations (such as the Special Olympics Healthy Hearing program).
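One practical way to read the Table 4 means alongside the conclusion about screening more individuals per day is to convert mean screening times into an idealized throughput. A sketch using the 80 dBA, 1000-5000 Hz means from Table 4 (ignoring overhead between athletes, which is an assumption):

```python
def throughput_per_hour(mean_screen_seconds: float) -> float:
    """Idealized number of screenings per hour for a given mean test time."""
    return 3600.0 / mean_screen_seconds

# Mean times (seconds) to screen 1000-5000 Hz in 80 dBA noise, from Table 4.
no_headphone = 88.3
standard_earmuff = 63.3

gain = throughput_per_hour(standard_earmuff) - throughput_per_hour(no_headphone)
# Under these assumptions, the standard earmuff yields roughly 16 additional
# screenings per hour at the busiest noise level tested.
```

Real stations would see smaller gains once between-athlete set-up and instruction time are included, but the direction of the effect matches the reductions in mean screening time reported above.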


References

American Speech-Language-Hearing Association. (1997). Guidelines for audiologic screening [Guidelines]. Retrieved from http://www.asha.org/docs/html/GL1997-00199.html

American Speech-Language-Hearing Association. (2005). Guidelines for manual pure-tone threshold audiometry [Guidelines]. Retrieved from http://www.asha.org/docs/html/GL2005-00014.html

Gorga, M. P., Neely, S. T., Bergman, B., Beauchaine, K. L., Kaminski, J. R., Peters, J., et al. (1993). Otoacoustic emissions from normal-hearing and hearing-impaired subjects: Distortion product responses. Journal of the Acoustical Society of America, 93(4), 2050-2060.

Greenwood, L. A. (2010). Investigating the sensitivity and specificity of otoacoustic emissions in a pre-school screening program. Unpublished doctoral thesis, Towson University, Towson, MD.

Hall, J. W. (2000). In J. Danhauer (Ed.), Handbook of otoacoustic emissions. San Diego, CA: Singular.

Herer, G. R., & Montgomery, J. K. (2006). Healthy hearing: Guidelines for standardized screening procedures - Special Olympics. Retrieved from http://www.specialolympics.org

Kemp, D. T. (2007). The basics, the science, and the future potential of otoacoustic emissions. In M. S. Robinette & T. J. Glattke (Eds.), Otoacoustic emissions: Clinical applications (3rd ed., p. 7). New York, NY: Thieme.

Lenth, R. (2009). Java applets for power and sample size. Retrieved from http://www.stat.uiowa.edu/~rlenth/Power/

Natus Medical Incorporated. (2008). Bio-logic hearing diagnostics. Retrieved from http://www.natus.com/index.cfm?page=products_1&crid=33

Neumann, K., Dettmer, G., Euler, H. A., Giebel, A., Gross, M., Herer, G., et al. (2006). Auditory status of persons with intellectual disability at the German Special Olympic games. International Journal of Audiology, 45(2), 83-90.

Owens, J. J., McCoy, M. J., Lonsbury-Martin, B. L., & Martin, G. K. (1992). Influence of otitis media on evoked otoacoustic emissions in children. Seminars in Hearing, 13, 53-65.

Whitehead, M. L., Lonsbury-Martin, B. L., & Martin, G. K. (1993). The influence of noise on the measured amplitudes of distortion-product otoacoustic emissions. Journal of Speech and Hearing Research, 36(5), 1097-1102.

Zhao, F., & Stephens, D. (1999). Test-retest variability of distortion-product otoacoustic emissions in human ears with normal hearing. Scandinavian Audiology, 28(3), 171-178.

Journal of Educational Audiology vol. 17, 2011

Development of a Video for Pure Tone Hearing Screening Training in Schools

Diana C. Emanuel, Ph.D. Towson University Towson, Maryland

Merrill Alterman, M.A. Former Coordinator of OT/PT/Audiology Services for Baltimore City Public Schools Baltimore, Maryland

Michelle Betner, Au.D. Sound Audiology, St. Louis, Missouri

Rebecca Book, Au.D. Hearing Specialty Group, Ltd. Pasadena, Maryland

Towson University (TU), in collaboration with Baltimore City Public Schools (BCPS) and Baltimore City Health Department (BCHD), created a video training program for pure tone hearing screening in schools based upon ASHA (1997) guidelines and BCPS training materials. The video was distributed for educational purposes to 128 graduate programs in Communication Sciences and Disorders and was posted on-line. This manuscript describes the creation of the video, two studies comparing video hearing screening training to traditional training, and the results of a national survey used to elicit feedback from educational audiologists regarding the usefulness of the video nationally. The first study on the effectiveness of the video indicated there were no significant differences between written test scores for live versus video training for 154 BCHD employees trained in the BCPS. The second study indicated there were no significant differences on written scores or practical skills between live versus video training for 23 volunteers trained at TU. Participant ratings of training were significantly higher when training included hands-on practice of screening techniques. The national survey indicated the majority of educational audiologists would use the program as a supplement to their current hearing screening training protocols. This video is recommended for use in conjunction with hands-on practice conducted under the supervision of an audiologist. It is hoped that this training program will assist educational audiologists in providing more consistent training for hearing screeners.

Introduction

Hearing loss can have a detrimental impact on many aspects of children's lives, including their academic performance, expressive language, reading and writing skills, and social, emotional, and behavioral development (ASHA, 2002). Studies have shown that if hearing loss is identified and treated early, these negative impacts may be lessened (e.g., Yoshinaga-Itano, Sedey, Coulter, & Mehl, 1998). The first step in the process of hearing loss identification is the implementation of an effective hearing screening program. The effectiveness of this program relies on appropriate training and supervision of hearing screeners. Currently, there are no national standards regarding either the way hearing screening should be conducted or the qualifications/training requirements for hearing screeners; instead, these issues are specified by state regulations and regional school system/health department policies. However, the American Speech-Language-Hearing Association (ASHA) Panel on Audiologic Assessment, consisting of a group of experts in the area of pediatric audiology, developed guidelines for hearing screening from birth to 18 years of age using a peer-reviewed process (ASHA, 1997). These guidelines are available on ASHA's website (www.asha.org).

In order for hearing screening to be effective, hearing screeners must be adequately trained. Without appropriate training, the result is likely to be an unusually high false negative and/or false positive rate. In addition, it is imperative that adequate record keeping and follow-up procedures ensure that the long-term goal of the screening (i.e., identifying and treating children with hearing loss) is actualized by the mass school screening process. Richburg and Imhoff (2008) surveyed hearing screeners in two school districts in the state of Missouri and found huge variability in the procedures used for hearing screening and in the training of school nurses. They also found that more consistent training was seen among contractual hearing screeners, who they presumed had received uniform training from a supervising educational audiologist.

The ASHA (1997) audiologic screening guidelines specify that hearing screenings for school-age children (5-18 years) should be conducted by an audiologist, speech-language pathologist, or support personnel under the supervision of an audiologist. The role of a hearing screener falls within the guidelines for support personnel published by both ASHA (2004) and the American Academy of Audiology (AAA, 1997), with both specifying that audiologists should be responsible for directing and supervising these personnel. Regarding training, AAA (1997) specifically states, "Supervising audiologists will provide appropriate training that is competency-based and specific to job performance." Richburg and Imhoff's (2008) study indicates that a method to improve the consistency of hearing screening training for support personnel is needed. One cost-effective way to improve the consistency of training across school systems is to use video training to deliver content and to demonstrate techniques consistent with ASHA (1997) screening guidelines. If this video were used, individual states could then provide ancillary materials to address protocols specific to their state and, in addition, provide the necessary hands-on practice.

Video materials, including DVDs and streaming video, are becoming a standard format for instruction. The advantages of video materials include cost effectiveness, consistency of content, availability of graphic animations, use of "real life" scenarios to enhance clarity, flexibility, and the opportunity for repetition. In addition, students can engage in learning from a site at a distance from the training center. Studies have shown that video instruction may be used without sacrificing student test performance (e.g., Kline et al., 1986; McAlpine, 1996; Mir, Marshall, Evans, Hall, & Duthie, 1984). In addition, several researchers have found videos to be well received by students when used as ancillary teaching tools in conjunction with other teaching methods (Lewis, 1995; McAlpine, 1996). It is not suggested that the use of a video can replace the need for a live instructor, because skills-based training should always include hands-on practice.

Furthermore, students in studies of video versus lecture instruction often report preferring the interaction of a standard lecture, even when test scores are equal to or poorer than scores from video lecture (Bazyk & Jeziorowski, 1989; Davis, 1987; Kline et al., 1986; Leff, 1988; Paulsen, Higgins, Miller, Strawser, & Boone, 1998; Spitzer, Bauwens, & Quast, 1989). Paulsen et al. (1998) compared group instruction via traditional lecture, interactive television (ITV), and video lecture. Their results indicated students achieved equal success on tests, but students in the video groups were not satisfied with the instruction. Furthermore, students in the ITV and video groups did not perceive that the instructor took an active role in the course. To address student preference for interaction during learning, but preserve the efficiency of resources offered by video instruction, McAlpine (1996) studied Tutored Video Instruction (TVI) for a basic hemodynamic monitoring course for nurses as an alternative to independent video viewing. A tutor who was familiar with the material proctored the video presentation. The tutor was able to stop the video and respond to student requests for clarification, thereby providing personal interaction during learning but maintaining consistency of content from the video. McAlpine (1996) found no significant differences in student preference for TVI compared with standard lecture. Students also respond well to the use of video instruction prior to hands-on instruction. For example, Lewis (1995) incorporated a series of 10-minute instructional videos at the beginning of physics laboratories. Via video, the professor of the physics course introduced the material necessary to perform the laboratory, rather than having a teaching assistant perform this function. The laboratory was then conducted in the traditional manner with a live instructor. Students' evaluations of this teaching format indicated the videos had a positive effect on their learning.

In 2001, Towson University, in collaboration with the Baltimore City Public Schools (BCPS) and the Baltimore City Health Department (BCHD), developed a pure tone hearing screening video in order to teach content and demonstrate hearing screening techniques for school-age children (Alterman & Emanuel, 2004). The BCPS serves approximately 83,000 students in the public school system. Hearing screening is conducted within this school system at school entry (Pre-K, K, or 1st grade), late elementary school (4th, 5th, or 6th grade), and high school (9th grade) according to state mandate (Code of Maryland Annotated Regulations [COMAR], §7-404) to identify children with previously unidentified hearing loss. The hearing screening procedures used by the BCPS are based upon American Speech-Language-Hearing Association (ASHA, 1997) and COMAR guidelines, which describe a 3-frequency (1000, 2000, 4000 Hz) pure tone screening.

Described here are three studies used to examine the efficacy of the video as an ancillary tool in hearing screening training, including two studies used to examine "proof of concept" (i.e., that the video-supplemented program was at least as effective as the current training used in Baltimore City) and responses to a national survey of educational audiologists used to obtain input as to whether the video would be useful nationally and to solicit suggestions for future modifications.

Method

Development of the Video

A 15-minute, professional quality video was produced by the Center for Instructional Advancement and Teaching (CIAT) at Towson University. The creation of the digital recording began with the creation of a storyboard and script and the planning of appropriate still and live action photography, narration, props, and graphics. Filming took place at a Baltimore City elementary school with permission of school administrators, parents, students, and teachers. CIAT provided the film crew, photographers, professional quality digital recording, audio/video editors, and editing equipment.

A rough draft of the video was shown to 50 graduate students and five clinical supervisors during new student orientation at Towson University. The graduate students had undergraduate degrees in Communication Sciences and Disorders, but had not previously conducted hearing screenings. The clinical supervisors all had experience supervising hearing screenings and training students to do hearing screenings. Thus, the audience included both untrained and trained participants. Participants were asked to provide written comments regarding the clarity of the video as a teaching tool. Based on feedback from these participants, the video underwent final editing. The final version was distributed to audiologists in the BCPS and to 128 graduate programs in Communication Sciences and Disorders for educational purposes. The recorded program was subsequently posted on-line in streaming video format and is available at http://www.towson.edu/asld/emanuel.asp.

Study 1: Health Department Screener Training with Written Assessment

Every year, BCHD employees assigned to conduct school hearing screenings report to a designated elementary school for training. BCPS audiologists (usually eight) are assigned to provide didactic instruction, hearing screening demonstration, and hands-on instruction. This first study was conducted during one of these training sessions to see if using the video was as effective as the traditional live lecture, with a pre- and post-test format using the BCPS written assessment tool. The test consisted of 20 four-response, multiple-choice questions. One version of the test is available on-line at http://www.towson.edu/asld/emanuel/.

One hundred fifty-four BCHD hearing screeners reported for the annual training. Figure 1 provides a flow chart to show the design of the study. Randomization of individual participants to experimental conditions was not possible because the BCHD employees had to be grouped by region for training; however, the rooms were randomly selected for each of the educational formats. Four of the rooms (STANDARD 1-4) used the traditional BCPS training format. The audiologists were provided with a script from the video, which was developed from the BCHD training topics (i.e., the content was the same as in prior trainings), but the audiologists were allowed the flexibility of answering questions and allowing spontaneous discussion generated by participant questions, as would usually be the case for in-service training. Two of the other four rooms were assigned as "video" rooms and two were assigned as "script" rooms. In the "video" rooms, training was conducted via video only. In the "script" rooms, the script of the video was carefully followed to control for content as much as possible. To control for the variability of information associated with question and answer periods, participants in the "script" and "video" rooms were not allowed to ask questions until after the post-test was administered. To examine the effect of using the same instrument for the pre- and post-test (current BCPS procedure), two rooms (one "video," one "script") were selected to receive a post-test only, to examine pre-test sensitization. In summary, there were eight groups: NOPRETEST+VIDEO (n=18), PRETEST+VIDEO (n=24), NOPRETEST+SCRIPT (n=22), PRETEST+SCRIPT (n=20), and STANDARD 1-4 (n = 14, 23, 13, 20).

Figure 1. Sample size and procedures for eight hearing screening training rooms for Baltimore City Health Department hearing screening training at a public school site (Study 1).

Following a short break, all participants completed hands-on training and then completed a program evaluation. Thus, for all eight groups, the program evaluation included the participant's evaluation of the entire didactic, question and answer, and hands-on portions of the training. Two of the items from the program evaluation were pertinent to the current study and were analyzed (the other items were not applicable, e.g., questions regarding location and facilities). Specifically, the two rated items were: "the materials helped to reinforce concepts presented" and "the training will enable me to improve my job performance." The items were rated on a five-point scale from 1 (poor) to 5 (excellent).

Study 2: Volunteer Training with Written and Practical Assessments

Twenty-three adult volunteers (15 females, 8 males; 21-57 years of age) with no previous training in hearing screening participated in the second pilot study. The design of the study is shown in Figure 2. Participants were randomly assigned to one of four training groups: STANDARD (n=6; 5F, 1M; 26-51 years), which was the same as in Study 1; VIDEO + HANDS-ON (n=5; 2F, 3M; 25-52 years), which was the same as in Study 1 except there was a question and answer period prior to the written test; VIDEO ONLY ON SITE (n=6; 3F, 3M; 24-57 years), which used video training but no hands-on training; and VIDEO TAKE HOME (n=6; 5F, 1M; 21-36 years), in which participants took home the video, were told they could watch it as often as needed, and were allowed to ask questions when they returned to complete the study. This group did not receive hands-on training.

Figure 2. Sample size and procedures for four hearing screening training groups for adult volunteers trained at Towson University (Study 2).

All groups took the same written pre-test used for Study 1. Prior to instruction, participants were given handouts that paralleled the content of the video. They were not allowed to utilize the handouts while taking the tests, but could look at them during instruction. Following training, each participant completed the written post-test (identical to the pre-test), took a practical examination (described below), and completed a program evaluation, which asked participants to rate on a scale of 1-5 (1=strongly agree; 5=strongly disagree) the following statements: (1) Information presented was clear and easy to understand, (2) This training was effective in teaching me how to conduct a hearing screening, and (3) I would recommend this type of training for others who will be conducting hearing screenings.

The practical examination was conducted using a 10-item checklist (Table 1) created based on input from BCPS audiologists. These items represent necessary components of the screening and/or areas in which mistakes are often made by new screeners. Each participant was given a portable audiometer and asked to set up the room for screening, to screen two people, and to document the results. For all participants, two audiologists with normal hearing served as mock patients; one followed the directions of the examiner, and the result of the screening should have been "pass;" the other feigned a hearing loss at 4000 Hz in the left ear, and the result of the screening should have been "fail." Each mock patient also kept a log of the procedures used by each participant. The order in which the "pass" and "fail" audiologists were tested by each participant was randomized. Note that the ultimate goal of this video is to assist in the training of hearing screening for children; however, in this pilot study adult mock patients were used to control for the variability that is encountered in difficult-to-test children and to assess basic screening skills that are expected to be learned in a one-day training.

Table 1. The 10-item practical evaluation checklist used for Study 2.

1. Appropriately indicates why the room is suitable for conducting a hearing screening.
2. Appropriately sets up equipment to avoid a tripping hazard.
3. Appropriately performs a visual and listening check.
4. Appropriately positions mock patient so as not to see the examiner presenting stimuli.
5. Appropriately gives instructions for the test.
6. Appropriately demonstrates tone and task to mock patient.
7. Appropriately places earphones on mock patient.
8. Correctly follows procedures for screening.
9. Appropriately re-instructs mock patients if they fail the screening.
10. Correctly documents results.
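The pass/fail decision that trainees practice with the mock patients can be sketched in a few lines of code. In the sketch below, the 1000, 2000, and 4000 Hz frequencies come from the 3-frequency protocol described earlier, but the `screen_result` function, its refer-on-any-miss rule, and the 20 dB HL level mentioned in the comments are illustrative assumptions, not quoted from the BCPS or ASHA materials.

```python
# Hypothetical sketch of the decision logic in a 3-frequency pure tone
# screen. The "fail on any missed frequency in either ear" rule is an
# illustrative assumption, not the BCPS/ASHA protocol verbatim.

SCREEN_FREQS_HZ = (1000, 2000, 4000)

def screen_result(responses):
    """responses: dict mapping (ear, freq_hz) -> True if the person
    responded to the tone at the screening level (e.g., 20 dB HL)."""
    for ear in ("left", "right"):
        for freq in SCREEN_FREQS_HZ:
            if not responses.get((ear, freq), False):
                return "fail"  # any miss in either ear -> fail/refer
    return "pass"

# The two mock-patient scenarios from Study 2: one responds at every
# frequency; the other feigns a 4000 Hz loss in the left ear.
pass_patient = {(ear, f): True
                for ear in ("left", "right") for f in SCREEN_FREQS_HZ}
fail_patient = dict(pass_patient)
fail_patient[("left", 4000)] = False

print(screen_result(pass_patient))  # pass
print(screen_result(fail_patient))  # fail
```

This mirrors the expected outcomes for the two mock patients: the compliant listener should yield "pass" and the feigned 4000 Hz loss should yield "fail."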


All of the practical examinations were recorded by videotape, and each participant's performance was rated independently by two audiologists as either "pass" or "fail" for each of the 10 items. For cases in which the raters scored a participant differently, both raters reviewed the examination together and came to consensus on the "pass" or "fail" rating. The practical examination contained 10 items; however, item 9 ("appropriately re-instructs mock patient if they fail the screening") was not applicable because participants were trained to re-instruct the patient only if the patient seemed not to understand the directions, which was not the case in this study. Therefore, only the remaining 9 items were considered in the analysis.

Study 3: National Survey

A survey entitled "Revision of a Hearing Screening Training Video" was created to obtain educational audiologists' opinions of the content and usefulness of the video as a training tool. An original survey was created and piloted with 10 educational audiologists from Maryland and Pennsylvania via convenience sample. Respondents received the survey as a printed document and were asked to complete the survey and include written comments regarding the clarity and completeness of the questions. Revisions to the survey were made based on this feedback, and the revised survey was formatted electronically and posted on SurveyMonkey.com. The final survey consisted of 26 questions, including demographic questions, questions regarding the video, and questions about the written materials. A link to the survey was provided in an e-mail distributed on the Educational Audiology Association (EAA) listserv and the ASHA listserv.

Results

Study 1: Health Department Screener Training with Written Assessment

Since random assignment of individuals to experimental groups was not possible, an analysis was conducted to determine if the groups were "matched by accident." A one-way analysis of variance was conducted for the six groups that took a pre-test. Results indicated the pre-test scores were not significantly different across the groups (F(5, 110) = 0.929, p = .465), suggesting that the groups were similar in terms of prior knowledge.

The mean scores for each of the groups on the 20-item multiple-choice test are illustrated in Figure 3. This figure indicates post-test scores were lower for the "video" and "script" groups who did not take a pre-test, compared to the groups with identical training who received a pre-test; this suggests a pre-test sensitization. A 2x2 analysis of variance (pre-test [yes, no] x training [video, script]) showed a significant main effect for pre-test status (F(1, 80) = 9.056, p = 0.004), but no significant difference for training (F(1, 80) = 1.301, p = 0.258) or the interaction between pre-test status and training (F(1, 80) = 1.405, p = 0.239). This analysis confirmed that a pre-test sensitization did occur for this sample, using the protocol described here.

Figure 3. Mean pre- and post-test scores for all training groups for Study 1. Note that two of the groups did not take a pre-test by design in order to examine pre-test sensitization.

Figure 3 shows an improvement in mean scores between the pre- and post-test for all groups who took both tests. To determine if there were significant differences among the three training models, a 2 x 3 mixed model analysis of variance for all groups with a pre- and post-test was completed. This analysis indicated a significant main effect for test score (F(1, 111) = 103.262, p < 0.001), but not for training (F(2, 111) = 0.987, p = 0.376) or the interaction between test score and training (F(2, 111) = 0.953, p = 0.389). This indicates all of the groups improved significantly and no one method was superior to any other, when using the BCPS multiple-choice test.

Figure 4 illustrates the mean responses from the program evaluation. The highest mean ratings were seen for the video training, followed by the standard training, in the areas of reinforcing concepts and anticipated improvements in job performance. An analysis of variance indicated a significant difference among the groups (F(2, 149) = 14.363, p < 0.001). A Tukey post hoc analysis indicated a significant difference between the video training method and the scripted method (p < 0.001) and between the standard training method and the scripted method (p < 0.001), but not between the standard training method and the video method (p = 0.414). This indicates that both the standard and video training were well received by participants.
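For readers who want to replay an analysis of this shape, the 2x2 between-subjects ANOVA (pre-test status x training) can be computed directly with NumPy and SciPy. The cell scores below are fabricated for illustration (the study's raw scores are not published here, and the cell sizes are placeholders); only the sum-of-squares bookkeeping mirrors the analysis reported above.

```python
import numpy as np
from scipy.stats import f as f_dist

# Fabricated 20-item test scores for a balanced 2x2 design:
# factor A = pre-test status (yes/no), factor B = training (video/script).
# Means, spreads, and n per cell are illustrative, not the study's data.
rng = np.random.default_rng(0)
n = 20  # participants per cell (placeholder)
data = {(a, b): rng.normal(loc=17.0 if a == "yes" else 15.5, scale=1.5, size=n)
        for a in ("yes", "no") for b in ("video", "script")}

levels_a, levels_b = ("yes", "no"), ("video", "script")
grand = np.concatenate(list(data.values())).mean()
a_means = {a: np.concatenate([data[(a, b)] for b in levels_b]).mean()
           for a in levels_a}
b_means = {b: np.concatenate([data[(a, b)] for a in levels_a]).mean()
           for b in levels_b}
cell_means = {k: v.mean() for k, v in data.items()}

# Sums of squares for the two main effects, the interaction, and error.
ss_a = n * len(levels_b) * sum((a_means[a] - grand) ** 2 for a in levels_a)
ss_b = n * len(levels_a) * sum((b_means[b] - grand) ** 2 for b in levels_b)
ss_ab = n * sum((cell_means[(a, b)] - a_means[a] - b_means[b] + grand) ** 2
                for a in levels_a for b in levels_b)
ss_err = sum(((v - cell_means[k]) ** 2).sum() for k, v in data.items())
df_err = len(data) * (n - 1)
ms_err = ss_err / df_err

for name, ss in (("pre-test", ss_a), ("training", ss_b), ("interaction", ss_ab)):
    F = ss / ms_err  # each effect has 1 df in a 2x2 design
    p = f_dist.sf(F, 1, df_err)
    print(f"{name}: F(1, {df_err}) = {F:.2f}, p = {p:.3f}")
```

For the Tukey post hoc comparison reported for the program-evaluation ratings, an off-the-shelf routine such as statsmodels' `pairwise_tukeyhsd` (or `scipy.stats.tukey_hsd` in recent SciPy releases) would typically be used rather than hand-rolled code.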


Figure 4. Program evaluation ratings by participants in Study 1 for video, script, and standard groups regarding whether the training reinforced concepts and whether the participant predicted the training would improve job performance.

Study 2: Volunteer Training with Written and Practical Assessments

Figure 5 illustrates the results of the written assessment for the four groups of volunteers included in the second study. This figure shows that mean written test scores improved from pre- to post-test for all four groups. A 2 x 4 mixed model analysis of variance indicated a significant main effect for test (F(1, 19) = 85.250, p < 0.001); however, no significant differences were found for training (F(3, 19) = 0.288, p = 0.833) or for the interaction between test and training (F(3, 19) = 0.728, p = 0.548). This indicates there was a significant improvement in written test score from pre- to post-test, but no one method was superior to another in causing this improvement.

Figure 5. Mean pre- and post-test scores for all training groups in Study 2 for a multiple-choice written assessment.

Figure 6 shows that the mean practical scores were similar for the "pass" versus "fail" mock patients for each training method. The VIDEO ONLY ON SITE group had a slightly higher mean score compared with the other groups; however, an analysis of variance indicated no significant differences among the training groups for either the "pass" patient (F(3, 19) = 3.013, p = 0.056) or the "fail" patient (F(3, 19) = 0.582, p = 0.634). This indicates that no one method was superior to another based on the results of the practical assessment, keeping in mind that the sample size was small for each group.

Figure 6. Mean practical assessment scores (9 items) for the "pass" and "fail" mock patients for all training groups in Study 2.

Participant responses to the program evaluation are listed in Table 2. This table indicates all of the participants, regardless of training, agreed the information was clear and easy to understand (Q1) and that the training was effective (Q2). The responses were slightly more variable for the question regarding recommending the training (Q3), and this was the only question for which the chi-square analysis indicated significant differences across the four groups (Q3: χ²(3) = 9.6, p < 0.05). Although this table suggests VIDEO + HANDS-ON was the training method most likely to be recommended by participants, the cell sizes were too small to make a definitive conclusion regarding training preference across all of the groups. To examine preference for a method that used hands-on training (STANDARD combined with VIDEO + HANDS-ON) versus no hands-on training (VIDEO ONLY ON SITE combined with VIDEO TAKE HOME), the data were collapsed into two groups and chi-square analyses were conducted for all three questions. There were no significant findings for Q1; however, there were significant differences for Q2 (χ²(1) = 6.1, p < 0.05) and Q3 (χ²(2) = 8.9, p < 0.05). This indicates participants who had hands-on training had significantly higher ratings in the areas of perceived training effectiveness and their tendency to recommend the training to others compared with participants who did not have hands-on training.
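The collapsed hands-on versus no-hands-on comparison is a standard chi-square test of independence on a contingency table of rating counts. A minimal SciPy sketch follows; the counts in the table are illustrative placeholders, not the study's tallies, so the statistic will not reproduce the values reported above.

```python
from scipy.stats import chi2_contingency

# Illustrative 2x2 table of rating counts for one evaluation question:
# rows = hands-on vs. no hands-on training,
# columns = "strongly agree" vs. "agree". Placeholder data only.
table = [[10, 1],
         [5, 7]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```

Note that SciPy applies Yates' continuity correction to 2x2 tables by default (`correction=False` gives the uncorrected Pearson statistic), and with cells as small as those in Table 2, an exact test such as `scipy.stats.fisher_exact` is often preferred.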


Table 2. Participant ratings on the program evaluation for Study 2 (number of respondents selecting each rating; 1 = strongly agree, 5 = strongly disagree).

Q1. Information presented was clear and easy to understand.
  STANDARD: 4 strongly agree, 2 agree
  VIDEO+HANDS-ON: 4 strongly agree, 1 agree
  VIDEO ONLY ON SITE: 4 strongly agree, 2 agree
  VIDEO TAKE HOME: 4 strongly agree, 2 agree
Q2. This training was effective in teaching me how to conduct a hearing screening.
  STANDARD: 5 strongly agree, 1 agree
  VIDEO+HANDS-ON: 5 strongly agree
  VIDEO ONLY ON SITE: 2 strongly agree, 4 agree
  VIDEO TAKE HOME: 3 strongly agree, 3 agree
Q3. I would recommend this type of training for others who will be conducting hearing screenings.
  STANDARD: 5 strongly agree, 1 agree
  VIDEO+HANDS-ON: 5 strongly agree
  VIDEO ONLY ON SITE: 2 strongly agree, 3 agree, 1 neutral
  VIDEO TAKE HOME: 2 strongly agree, 3 agree, 1 neutral

Study 3: National Survey

Demographics of the sample. Fifty-eight participants completed the national online survey. All of the respondents were audiologists, and the majority (85%) indicated their primary work setting was in pediatric audiology (0-21, K-12, or Pre-K). The majority of respondents (81%) had ten or more years of experience, with only three respondents indicating they had worked fewer than five years. The vast majority of the respondents (91%) reported that they personally administer hearing screenings, and most respondents supervise (86%) and train (83%) hearing screeners. Of the respondents who reportedly train hearing screeners, the types of trainees included audiology/speech-language pathology students (59%), school nurses (45%), parent/community volunteers (38%), audiology technicians (28%), speech-language pathologists (19%), and health department hearing screeners (6%). The majority of respondents who train hearing screeners reportedly use hands-on training (92%), handouts (75%), and instructional lectures (64%). Only four respondents (8%) reportedly used a recorded program, and one reported using a pre-packaged training system, but these respondents did not indicate the programs they used. The majority of respondents who train hearing screeners assess learning outcomes with an oral/practical examination (54%), and fewer than half reported using a written test (22% pre- and post-test; 24% post-test only). In summary, the respondents represented precisely the target audience that was desired for the survey. Overall, they were experienced educational audiologists who had conducted, supervised, and/or provided training for hearing screening.

Review of the recorded program. Respondents were asked about the appropriateness of the length of the video (15 minutes). The majority of participants (79%) indicated the length was about right, 21% felt it was too long, and 2% (one person) said it was too short (note: one person selected two answers). The majority of respondents indicated they would access the recorded program in DVD format (83%) or online (62%). Respondents were asked to indicate the circumstances in which they would consider using the video. The majority indicated they would use it as part of a hearing screening program (76%), only if accompanied by hands-on training (66%), and as refresher training for people who have conducted hearing screening in the past (64%). About one-third (36%) would use the video for students in a university speech-language-hearing program, and a few (14%) indicated they would use it as a temporary training tool if other training options were not available at the time the hearing screener was hired.

For the purposes of the review, the video program was divided into 17 sections: (1) statistics regarding hearing loss, (2) characteristics of hearing loss, (3) how sound travels, (4) types of hearing loss, (5) frequency, (6) intensity, (7) audiogram, (8) speech banana, (9) audiometer, (10) selecting a screening room, (11) preparing the equipment, (12) audiometer malfunction, (13) calibration, (14) getting the child ready, (15) procedures for screening, (16) record keeping, and (17) purpose for screening. Respondents were asked to indicate whether they would keep, modify, or delete each section. The majority of respondents indicated each section should be kept (60-100%, depending on the section). Very few people (n = 1-3; i.e., 2-5%) indicated that any section should be deleted, but a number of respondents selected modify, and 35 respondents provided suggestions for modification.

Respondents were asked if any of the information was inaccurate, and the majority (77%) selected "no;" several respondents provided specific corrections (e.g., they use a different screening technique; there was a typographical error in the video). Respondents were asked if they felt any topics were missing; 39% said "yes," and most provided comments (e.g., how to test young children; avoidance of cueing). Respondents were also asked about the most frequent mistakes they saw with new hearing screeners. Comments were provided by 44 participants (e.g., screeners will attempt to find thresholds, incorrect placement of the earphones, failure to conduct a listening check). Comments from respondents were collapsed across common items, organized by video section, and summarized (see Appendix, first column) in order to serve as the basis for the revision of the video script.

68 Development of a Video for Pure Tone Hearing Screening Training in Schools

Discussion

The Pure Tone Hearing Screening in Schools video, developed at Towson University in collaboration with the BCPS and BCHD, was created to enhance the consistency of hearing screening training and to decrease training costs. The video was found to be as effective as live lecture for content delivery and screening demonstration, when assessed with written and practical tests. In addition, both live- and video-delivery methods were well received by trainees according to the program evaluation ratings, and a national survey indicated the video would be useful for hearing screening programs nationally.

For the large-scale BCHD study, program evaluations were completed at the end of both the didactic and hands-on training, so they indicated participants' evaluation of the entire program, suggesting that both live lecture and video lecture were well received as part of the overall training program. The program evaluation was conducted at the end of the training day with the assumption that the video would not be used to replace a comprehensive hearing screening training program. As Kline et al. (1986) stated, "The use of videotape as a 'substitute teacher,'…is an abuse of a method that should be used to improve, not eliminate, faculty-student contact." Previous research has indicated students prefer the interaction of standard lecture to video lecture, even when test scores are equal to or poorer than scores from video lecture (Bazyk & Jeziorowski, 1989; Davis, 1987; Kline et al., 1986; Leff, 1988; Paulsen et al., 1998; Spitzer, Bauwens, & Quast, 1989); however, when video is used in conjunction with traditional interactive instruction, studies have reported no differences in student preference between this format and a standard lecture (Lewis, 1995; McAlpine, 1996). Therefore, it was never the authors' intention to replace all of the necessary training with a video.

The current study's findings were similar to those of McAlpine (1996) and Lewis (1995); specifically, there were no significant differences in test scores, practical skills performance, or performance ratings between the standard lecture format and the video-enhanced training format when the program evaluation was conducted at the end of a training day for health department hearing screeners. The script method (a live presentation that followed the video content but did not allow for questions), which was used as a direct comparison between video and lecture without the normal interaction associated with a lecture, was not well received; however, this condition was an artificial construct to control for content, and it appears that the participants did not appreciate having to wait to ask questions until after the post-test. This provides further support for the need for interaction between students and teachers in the training process.

In the current study, it appears participants in the video groups were tolerant of the delay in asking questions until after the video, either because they were focused on the video, the graphics/demonstrations in the video enhanced learning to the point that participants had few questions, the video was short, and/or because it is a traditionally accepted protocol (habit) to watch a video without asking questions. For the smaller study with volunteers, the participants who had hands-on training had significantly higher ratings for their perception of training effectiveness and their tendency to recommend the training to others. This supports our assertion that training programs should include both didactic and hands-on training.

The national survey of educational audiologists indicated the majority would consider using the video as part of a hearing screening training program in conjunction with hands-on training, and over half of the respondents provided suggested modifications. This suggests that a revised, widely disseminated video would see broad use, which may improve the consistency of hearing screening training across multiple states and school systems.

The original video is currently available on-line along with accompanying notes, an example of a written test, and the skills checklist used in this study (Table 1). Support materials for the revised video will be developed following completion and posting of the video (projected completion date: December 2011). The national survey indicated just over half (54%) of the respondents reported that they conduct a practical assessment, and only about 25% conduct a written assessment following training. It is hoped that the provision of assessment instruments that accompany training materials will encourage programs that are not conducting assessments to establish and assess learning outcomes. This study showed that the use of a written pre-test will elevate post-test scores if the same assessment is used for both. If this is a desired outcome (perhaps to alert trainees to important items or to document the efficacy of training), then the pre-test should be included in a training assessment protocol.

Almost all of the comments provided by respondents were compiled, collapsed across common topics, used to develop a revision plan for the video (see Appendix, second column), and incorporated into the script for the revision of the video. However, a few suggested revisions were considered to be beyond the scope of this video (e.g., tympanometry and otoacoustic emissions). The national survey indicated the majority of educational audiologists would keep all of the topics in the current version of the video, but would make changes and additions; therefore, the basic outline of the video will remain the same. However, all portions will be modified and enhanced, and the video will include two new additions: "common mistakes" and "frequently asked questions" sections.

One common problem stressed by many respondents, and addressed only briefly in the original video, was background noise. Specifically, hearing screeners did not know how to tell if a room was too loud, and/or hearing screeners often increased the intensity to account for background noise. Because this was a consistent theme in the responses, background noise will be addressed in three sections of the revised video, including an emphasis on keeping the screening level constant and a discussion of how to check the background noise level. Specifically, the procedures for assessing ambient noise level using psychoacoustic procedures, as described in ANSI S3.1 (1999), will be included. This procedure was selected instead of a tutorial on the use of a sound level meter because a sound level meter capable of octave or 1/3 octave band filtering, with a noise floor low enough to measure the lowest levels specified by the ANSI S3.1 standard, was considered unrealistic for use in mass hearing screening. The equipment is both prohibitively expensive and sometimes difficult to operate without adequate training in calibration procedures.

A few respondents indicated that their screening procedures differed from the guidelines presented in the video. The video is based on ASHA (1997) guidelines, which are nationally recognized audiological screening guidelines developed via a peer-review process with a panel of experts in pediatric audiology. In the absence of a national standard, efforts should be made by audiologists to have their local school system or health department follow these established guidelines for consistency and best practices and to avoid practices that are not optimal for screening. For example, a few respondents indicated they test 500 Hz in order to identify cases of otitis media. However, the use of 500 Hz in pure tone screening in rooms that are not sound treated will increase the false positive rate, due to the effects of background noise at lower test frequencies (Robinson, 1992). If more hearing screening training programs included the use of video training based on ASHA (1997) methods, it could result in more consistent procedures nationally.

Conclusions

The Pure Tone Hearing Screening in Schools video was found to be effective for training based on the results of written and skills-based tests. Video training and standard training were both well received by hearing screeners; however, hearing screeners preferred to participate in a training program that included hands-on instruction in addition to didactic instruction and demonstration. The national survey indicated educational audiologists would use a video as part of their hearing screening training, and many respondents provided suggestions for the revision of the video. The revised video is currently in production, including two new sections: frequently asked questions and common mistakes. The use of a hearing screening video could reduce training costs and improve consistency of instruction for the didactic and demonstration portions of a hearing screening program; however, the video is intended to be used in conjunction with hands-on training within a program that is supervised by an audiologist.
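The ANSI S3.1-style check discussed above (comparing the room's octave-band ambient levels against maximum permissible levels) reduces to a simple band-by-band comparison. A minimal sketch follows; the function name is mine and the limit values are placeholders for illustration only, not the actual ANSI S3.1 (1999) maxima, which depend on earphone type and the lowest hearing level to be tested:

```python
# Sketch of the ANSI S3.1-style comparison: each measured octave-band
# ambient level must fall at or below a maximum permissible level.
# NOTE: the limit values below are PLACEHOLDERS for illustration only;
# the real maxima come from the ANSI S3.1 (1999) tables and depend on
# earphone type and the lowest hearing level to be tested.

PLACEHOLDER_MAX_DB_SPL = {500: 21.5, 1000: 29.5, 2000: 34.5, 4000: 42.0}

def room_is_quiet_enough(measured_db_spl, limits=PLACEHOLDER_MAX_DB_SPL):
    """Return (ok, offending_bands): offending_bands lists the octave
    bands (Hz) whose measured level exceeds the permissible level."""
    offending = [freq for freq, limit in limits.items()
                 if measured_db_spl.get(freq, float("inf")) > limit]
    return (not offending, offending)

# Example: a room that is noisy mainly in the low frequencies.
ok, too_loud = room_is_quiet_enough({500: 44.0, 1000: 28.0, 2000: 30.0, 4000: 35.0})
print(ok, too_loud)  # False [500] -- low-frequency noise is the offender
```

The example deliberately fails only in the lowest band, mirroring the article's point that background noise interferes most at lower test frequencies (one reason 500 Hz screening raises the false positive rate in untreated rooms).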

Future Research

To our knowledge, although recorded training programs have been shown to be effective across a number of disciplines, this is the first study to examine the effectiveness of a video used for hearing screening training. Because only about half of the respondents indicated they conduct a written assessment following hearing screening training, it is unknown whether learning outcomes are assessed in another manner, such as an examination of the accuracy of the overall screening program. More research should be conducted into the efficacy of hearing screening training in order to optimize identification of children with hearing loss. Once the revised version of the video is available, further testing of the efficacy of video training can be conducted using outcomes-based assessments, such as written and skills-based assessments with adults and children and an examination of overall program accuracy.


Appendix

Pure tone hearing screenings in schools: video revision plan. Each entry lists the issue raised by respondents and the planned action, by video section.

General items
- Issue: Current DVD is in standard (low definition) format. Action: Filming/editing scheduled in high resolution.
- Issue: Some respondents only wanted to use selected portions. Action: Have the video available as a whole and also divided into sections, so educators can select topics.
- Issue: There was no mention of Audiology and Audiologists, and the audiologist's role as the expert on diagnostic testing and remediation. Action: Add a discussion of audiologists in the introduction (definition; role in the screening process).
- Issue: Respondents wanted to include tests that are outside the scope of this video (e.g., tympanometry, otoacoustic emissions). Action: Indicate at the end of the video that some school systems include other tests, such as tympanometry and otoacoustic emissions, but that these tests are outside the scope of this video.
- Issue: All of the children and teachers in the video are black; respondents wanted the video to be more multi-cultural. Action: Include more diversity in the video.

Opening credits
- Issue: Opening credits are out of date (e.g., a plaque on the wall indicates a prior mayor). Action: Update all credits; the video will be completely re-filmed.

Statistics related to hearing loss
- Issue: Section most often rated as "least useful" by respondents. Action: Shorten the statistics section to essential items; highlight with text overlay.
- Issue: Voice is monotone. Action: Record with a more dynamic voice.
- Issue: Need to update statistics, especially regarding noise (e.g., iPod, MP3 player). Action: Update statistics; add pictures of children wearing iPods.
- Issue: Too many stills of the same kids. Action: Omit duplicate pictures of the same children.
- Issue: No references provided for statistics. Action: Include a reference/information list (e.g., ASHA).
- Issue: The narrator said "they" when talking about one child. Action: Double-check grammar in the script.

Characteristics of hearing loss
- Issue: Statistics and characteristics are merged and overlap the visual showing the hearing screening demonstration. Action: Merge the statistics and characteristics of hearing loss sections; omit screening procedure clips from this part of the video.

How sound travels to the brain
- Issue: When the word "cochlea" appears, the line is not pointing to the cochlea; it is pointing to the vestibule. The lightning bolt meant to represent electrical energy is not located on the cochlear part of the nerve. Action: Revise all graphics and carefully edit.

Types of hearing loss
- Issue: Sensorineural hearing loss is said to be "not medically treatable," but hearing aids are considered medical treatment. Action: Revise to include medical and audiological treatment such as hearing aids and cochlear implants.
- Issue: Need to update hearing loss examples for SNHL. Action: Update the SNHL list; remove presbycusis.

Characteristics of sound
- Issue: The transition appears rushed between the end of the types of hearing loss portion and the beginning of the characteristics of sound portion. Action: Create headers for each part to signal the transition between sections.
- Issue: Need more emphasis on the difference between frequency and intensity. Action: Use text overlays to emphasize frequency (in Hz), perceived as pitch, and intensity (in dB), perceived as loudness; add a musical scale after the description of frequency/pitch.

Audiogram
- Issue: Need to improve the usefulness of this section as a teaching tool. Action: Emphasize the axes with a graphic (arrow) overlay on the audiogram (fade in and out).

71 Journal of Educational Audiology vol. 17, 2011

Audiogram (continued)
- Issue: Provide a more detailed description of the audiogram (e.g., the x-axis is frequency from low to high, the y-axis is intensity from 0 to 120 dB, and the higher the threshold, the more severe the hearing loss). Issue: Provide an imitation of how speech may sound with a hearing loss. Action: Include an audio sample of high-pass filtered speech with text emphasizing the missing sounds; then play the unfiltered speech and fade in the missing letters of the text.

Locating a screening room
- Issue: The transition is very quick between the audiogram and the audiologist looking for a room. Action: More defined transitions between sections.
- Issue: A lot of information with no text support. Action: Add more text emphasis to supplement narration.
- Issue: Respondents didn't like the audiologist wandering down the hall looking for a screening room; respondents wanted the school to be involved in helping the screener find a quiet room. Action: Add a section showing the hearing screener working with school staff to locate a room; make sure the audiologist does not appear to be "searching" for a room as she walks down the hall.
- Issue: Respondents wanted much more emphasis on the fact that screeners should not turn up the level to compensate for a noisy test environment. Action: Emphasize the problems with turning up the level to account for background noise in multiple sections (here and in the common mistakes and frequently asked questions sections); add a discussion of why one cannot turn up the level; discuss ANSI procedures for assessing the background noise level.

New section: Setting up a group hearing screening
- Issue: Respondents wanted the video to show the set-up for multiple screeners in a room and to discuss how to manage the flow of children, how to test more than one child at a time, and how to use volunteers to monitor children waiting to be tested. Action: Show multiple room set-ups, including 1:1; 1:3 (two children waiting); students lined up in a hallway with a monitor; and 3:3 (three screenings taking place at one time).

Listening check
- Issue: There is a typo on the bulleted list: "cusions" should be "cushions". Action: Make sure "cushions" is spelled correctly in the revised video; do a careful edit of all overlaid text for spelling.
- Issue: The check assumes the screener has normal hearing, which may not be the case. Action: Indicate what to do for a listening check if the screener does not have normal hearing, and demonstrate a partner listening check.
- Issue: Respondents indicated the video should show moving the cords and listening for a short, and checking the entire length for fraying. Action: Add a check for frayed cords, and moving the cords, when doing the listening check.
- Issue: Make sure they don't drop the audiometer. Action: Add "don't drop the audiometer" to the list of things to avoid; illustrate if possible.

New section: Pre-screening
Issues raised:
- Have screeners check for anything in the ear canal; some children have cotton in their ears from earaches, or ITEs that are not obvious and not known to the screener.
- Add information about children with lesions on the pinna, discharge from the ear, skin conditions, and lice.
- Include the need for screeners to use hand sanitizer between children to help protect the screener and later children from infectious conditions.
- The audiologist does not do a listening check when she takes out the audiometer.
- The audiologist does not sanitize her hands (nor is it mentioned).
- The audiologist does not clean the earphone cushions.
Planned actions:
- Include what to do if the child has a documented hearing loss or wears hearing aids.
- Include the need to sanitize hands prior to starting the screening and between each child.
- Show the screener cleaning the earphone cushions and headset.
- Add an item about the need to check the ears prior to screening (note: otoscopy is outside the scope of the video); refer to the school nurse for pain, discharge, head lice, and so forth.
- Add information about using an otoscope.


New section: Audiometers
- Issue: Respondents wanted the video to show updated/multiple audiometers. Action: Include three different portable audiometers; highlight the frequency and intensity dials on each model; highlight the need to switch between ears with a toggle on some models.

Instructions for screening
- Issue: The demonstration for the student is done with the headset on the table at 100 dB. This is a concern because, if the audiologist forgets to re-set the intensity dial before the test, the student will get 100 dB in their ear. Action: Show several ways of giving instructions, including group instruction; keep the one from the current video, but also include others; be sure to emphasize the danger of the loud sound if that instructional method is used (text highlight).
- Issue: The audiologist places the earphones from behind the student; they should be placed from the front. Action: Show proper earphone placement (from the front).
- Issue: Respondents indicated "fail" should be changed to "refer". Action: Change "fail" to "refer" in all instances (including response forms).
- Issue: One respondent did not like the item "If you have any doubt about results or the child's responses, please ask an audiologist or refer the child," because sometimes an audiologist is not available. Action: Remove the sentence ("If you have any doubts …") and include the importance of follow-up and the fact that individual school systems set up follow-up procedures (list a few, such as referral to the school audiologist).

Closing summary
- Issue: One person didn't like seeing Martin O'Malley listed as the mayor on the plaque because it dated the video. Action: Re-film the closing summary; add information about audiologists here and/or in the introduction; avoid having printed items that will become dated in the background.

New section: Commonly asked questions
- Question: What if my school system uses a different screening procedure than the one shown here? Action: Include in the script the fact that the video is based on ASHA (1997) guidelines; school systems can supplement if their protocols are different; including 500 Hz will increase the false positive rate.
- Question: How do I screen special needs or difficult-to-test students? Action: Include suggestions for difficult-to-test children.
- Question: What if I am testing a very young child and they do not want to raise their hand? Action: Show an example of CPA (conditioned play audiometry).
- Question: What should I do if I am not sure whether the student heard the tone or they don't understand the instructions? Action: Re-instruct; try a hand-over-hand demonstration if needed; ask if they understand. If the child still doesn't respond, they should be referred for further testing.
- Question: What do I do if they only fail one frequency? Action: If the child fails only one frequency, then they fail the screening.
- Question: What is the difference between frequency and intensity? Action: Make this section clearer with text/graphic emphasis.
- Question: What do the screening results mean? If they fail, does it mean they have a hearing loss? Action: Explain what a screening result means and the need for further audiological testing.
- Question: What should I do if the equipment doesn't work? Action: Demonstrate a systems check (outlet, plug, cords, mode button, etc.).
- Question: How are the parents notified if their child fails the screening? Action: Notification of the parents varies based on the procedure in effect in the specific school system; check with your school system to see if you need to contact the parents or if that is done by someone else.


New section: Commonly asked questions (continued)
- Question: Can earwax cause a hearing problem? What should I do if I see a lot of wax? Action: Discuss earwax; add a check for earwax in the pre-screening section of the video.
- Question: How do I know if the screening room is too loud? Action: Explain the ANSI procedure to check the background level.
- Question: If the student fails, what do I do next? Action: Explain that they need to follow school system/health department procedures for referrals. The child should be seen by an audiologist for a failed screening, and referred to the school nurse for issues concerning head lice, pain, drainage, redness, swelling, etc.
- Question: What do I do if the child doesn't speak English? Action: Have the child watch other children during their screening test; teach using gestures.
- Question: Can I hurt the child if I mess up and give a tone that is too loud? Action: A short sound will most likely not cause any damage, but it is uncomfortable and it may make the child difficult to screen.
- Question: If the room is too loud, can I just make the tone louder? Action: Emphasize that under no circumstances should the intensity level be adjusted above 20 dB.
- Question: What do I do about students with hearing aids? Action: Do not screen children who wear hearing aids; they have already been identified as having hearing loss.
- Question: What should I do if I see head lice? Action: Refer the child to the school nurse.
- Question: Do I need parental permission to test a student? Action: Usually, all students in public schools are required to undergo periodic health screenings. If a parent refuses to have his or her child screened for reasons that are supported by the school, this information should be on file with the school system. Ask the school nurse for a list of any children who cannot be screened for this reason.
- Question: Am I ready to go after watching this video? Action: Hands-on practice with an audiologist should follow this video. If that is not possible, watch the demonstration portion several times and practice with several adults prior to testing children.
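The pass/refer rules running through these answers (screen at a fixed level of 20 dB, never raise the level, and refer a child who misses even one frequency in either ear) can be collected into a small decision helper. This is a hypothetical sketch: the names are mine, and the frequency set of 1000, 2000, and 4000 Hz is my assumption of a typical ASHA-style screening protocol, not a quotation from the video.

```python
# Hypothetical sketch of the pass/refer decision described above: tones
# are presented at a fixed 20 dB HL and are never made louder, and a
# child who misses any frequency in either ear is referred.
# The frequency set (1000, 2000, 4000 Hz) is an assumed ASHA-style
# screening protocol, not a quotation from the video.

SCREEN_LEVEL_DB_HL = 20              # fixed level; never increased
FREQUENCIES_HZ = (1000, 2000, 4000)  # assumed screening frequencies

def screening_outcome(responses):
    """responses maps ear -> {frequency: child responded (bool)}.
    Returns "pass" only if every tone was heard in both ears."""
    for ear in ("left", "right"):
        for freq in FREQUENCIES_HZ:
            if not responses.get(ear, {}).get(freq, False):
                return "refer"  # even one missed frequency means refer
    return "pass"

child = {"left":  {1000: True, 2000: True,  4000: True},
         "right": {1000: True, 2000: False, 4000: True}}
print(screening_outcome(child))  # prints "refer": one missed frequency
```

Note that "refer" deliberately replaces "fail" here, matching the terminology change planned for the revised video.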
New section: Common mistakes
- Mistake: Collapsed ear canals. Action: Explain what collapsed ear canals are, how they affect the test, and what to do about them.
- Mistake: Presenting tones in a predictable pattern. Action: Demonstrate patterning and how to avoid it.
- Mistake: Improperly placed headphones. Action: Show how to check the fit; do not let the child place the headphones; be sure the earphones are on the correct ears (consider labeling them "right" and "left"); check for skewed placement and hair under the earphone.
- Mistake: Providing visual cues. Action: Show problems with the child facing the examiner, mirrors/reflective glass, and other visual cues.
- Mistake: Not switching from left to right. Action: Show the left-to-right switch on several machines; make it part of the routine.
- Mistake: Not performing listening checks. Action: Emphasize: perform a listening check whenever an audiometer is turned on.
- Mistake: Using a noisy room. Action: Include a reminder about the room.
- Mistake: Increasing the intensity. Action: Do not raise the intensity of the tone. If the room is noisy, find another room or re-schedule the screening. Don't raise the tone to see how "bad" the hearing is; keep the tone at 20 dB. When the child comes back for a full hearing test, the audiologist will determine the status of the hearing.
- Mistake: Not wanting the child to fail. Action: Explain that it is human nature to want everyone to pass, but it is not in the best interests of the child to pass them when they did not hear all of the tones.


New section: Common mistakes (continued)
- Mistake: Not cleaning the earphones. Action: Clean the earphones and the headset between students.
- Mistake: Not feeling confident as a screener. Action: Explain that this is natural. Be sure to check the equipment and follow the protocol. If unsure of your skills, practice on a few adults before screening children, and arrange to have your supervisor watch you screen.
- Mistake: Equipment failure in the middle of the test. Action: If several children in a row have the same pattern (e.g., they all fail just 1000 Hz), check the equipment and do a listening check. If the lights blink on and off, check the power cord; it may have come away from the wall or the audiometer. If the child reports "strange noises," do a listening check. If the equipment is faulty and you don't have a backup, then you will need to re-schedule the screening.
- Mistake: Wanting to counsel the child/parent about results. Action: Follow the procedures in place at your school for follow-up. Have resources ready to give to parents. Do not tell them the child has a hearing loss; further testing is needed.
- Mistake: Talking in the room. Action: Make sure others in the room know they cannot talk. For multiple children, a room monitor may be needed.
- Mistake: Screening children with hearing aids. Action: Don't place headphones on top of hearing aids, cochlear implants, or other listening devices. Students with known hearing loss do not need a screening.
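The equipment-failure hint above (if several children in a row show the same pattern, such as all failing just 1000 Hz, stop and do a listening check) is easy to mechanize for screening programs that log their results. A hypothetical sketch; the helper name and the run-length threshold of three consecutive children are my assumptions, not from the article:

```python
# Hypothetical sketch of the equipment-failure hint above: if several
# consecutive children miss exactly the same (non-empty) set of
# frequencies, suspect the audiometer and do a listening check.
# The run length of 3 is an assumed threshold, not from the article.

def suspect_equipment(missed_sets, run_length=3):
    """missed_sets: per-child frozensets of missed frequencies, in
    screening order. Returns the repeated pattern if run_length
    identical non-empty patterns occur in a row, else None."""
    run = 1
    for prev, cur in zip(missed_sets, missed_sets[1:]):
        run = run + 1 if (cur == prev and cur) else 1
        if run >= run_length:
            return cur  # e.g., every child "failing" only 1000 Hz
    return None

log = [frozenset(), frozenset({1000}), frozenset({1000}), frozenset({1000})]
print(suspect_equipment(log))  # frozenset({1000}) -> time for a listening check
```

A flagged pattern is a prompt for a listening check, not a diagnosis; children who pass during a faulty-equipment run would still need to be rescreened.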


Acknowledgements

The authors gratefully acknowledge the contributions of the following people: Dr. James Drummond, principal, and the teachers and students of Cecil Elementary School in Baltimore City, and Gayle Amos, BCPS Special Education and Student Support Services, for making the filming of the video possible; Ron Santana and Matt Wynd from the Center for Instructional Advancement and Technology (CIAT) at Towson University for filming, editing, and graphics; Stephanie Bronson for starring in the video and assisting with the pilot studies; BCPS audiologists Sandra Abramowitz, Michele Allen, Francine Angert, Cindy Blake, Karen Brock, Tanya Green, Theresa Lechlitner, and Estelle Skinner for assistance with the pilot studies; Kim Benson and Lisa Ross for assistance with data collection; and Lisette Osborne from the BCHD for assistance with the pilot study and coordination with the health department.

References

Alterman, M., & Emanuel, D. C. (2004, Summer). School hearing screening and training program in Baltimore, Maryland. Educational Audiology Review, 12-13.

American Academy of Audiology. (1997). Position statement & guidelines of the consensus panel on support personnel in audiology. Available from www.audiology.org.

American National Standards Institute. (1999). Maximum permissible ambient noise levels for audiometric test rooms (ANSI S3.1-1999). New York: Acoustical Society of America.

American Speech-Language-Hearing Association. (1997). Guidelines for audiologic screening [Guidelines]. Available from www.asha.org/policy.

American Speech-Language-Hearing Association. (2002). Guidelines for audiology service provision in and for schools [Guidelines]. Available from www.asha.org/policy.

American Speech-Language-Hearing Association. (2004). Support personnel [Issues in ethics]. Available from www.asha.org/policy.

Bazyk, S., & Jeziorowski, J. (1989). Videotape versus live instruction in demonstrating evaluation skills to occupational therapy students. The American Journal of Occupational Therapy, 43(7), 465-468.

Davis, A. (1987). Comparing the effectiveness of two teaching methods for neurological assessment. Journal of Nursing Staff Development, 3(3), 138-140.

Hearing and Vision Screening Tests. Code of Maryland Annotated Regulations (COMAR): Education, Title 7, Subtitle 4, § 7-404. Available from http://198.187.128.12/maryland/lpext.dll?f=templates&fn=fs-main.htm&2.0.

Kline, P., Shesser, R., Smith, M., Turbiak, T., Rosenthal, R., Chen, H., & Walls, R. (1986). Comparison of a videotape instructional program with a traditional lecture series for medical student emergency medicine teaching. Annals of Emergency Medicine, 15(1), 16-18.

Leff, E. (1988). Comparison of the effectiveness of videotape versus live group infant care classes. Journal of Obstetrical, Gynecological, and Neonatal Nursing, 17(5), 338-344.

Lewis, R. A. (1995). Video introductions to laboratory: Students positive, grades unchanged. American Journal of Physics, 63(5), 468-470.

McAlpine, L. (1996). Comparison of the effectiveness of tutored videotape instruction versus traditional lecture for a basic hemodynamic monitoring course. Journal of Nursing Staff Development, 13(3), 119-125.

Mir, A. M., Marshall, R. J., Evans, R. W., Hall, R., & Duthie, H. L. (1984). Comparison between videotape and personal teaching as methods of communicating clinical skills to medical students. British Medical Journal, 289, 31-34.

Paulsen, K. J., Higgins, K., Miller, S. P., Strawser, S., & Boone, R. (1998). Delivering instruction via interactive television and videotape: Student achievement and satisfaction. Journal of Special Education Technology, 13(4), 59-77.

Richburg, C., & Imhoff, L. (2008). Survey of hearing screeners: Training and protocols used in two distinct school systems. Journal of Educational Audiology, 14, 31-46.

Robinson, D. W. (1992). Background noise in rooms used for pure-tone audiometry in disability assessment. British Journal of Audiology, 26, 43-54.

Spitzer, D. R., Bauwens, J., & Quast, S. (1989, May). Extending education using video: Lessons learned. Educational Technology, 28-30.

Yoshinaga-Itano, C., Sedey, A. L., Coulter, D. K., & Mehl, A. L. (1998). Language of early- and later-identified children with hearing loss. Pediatrics, 102, 1161-1171.


Wii-habilitation to Enhance Auditory Processing Skills

Addie J. Dowell, B.A. Brittany Milligan, B.A. D. Bradley Davis, Au.D. Annette Hurley, Ph.D. Louisiana State University Health Sciences Center New Orleans, Louisiana

Auditory training programs are often included as part of the remediation plan for children with (central) auditory processing disorder [(C)APD]. Training improves performance and usually includes formal therapy; however, it may also utilize informal activities. These informal activities specifically target areas of auditory weakness and may include “edutainment” activities, such as board games, computer games, or recorded audio books. This brief report reviews Wii video games that target specific auditory processing skills (auditory blending, processing speed, short-term memory, etc.) and that may be utilized during play for specific age groups. This information can be useful for audiologists, speech-language pathologists, early interventionists, or parents who wish to engage listening and auditory processing skills during play.

Introduction

In the past few years, there has been a renewed interest in the diagnosis and treatment of (central) auditory processing disorder ([C]APD). A (C)APD refers to “difficulties in the perceptual processing of auditory information in the central nervous system and the neurobiologic activity that underlies that processing and gives rise to the electrophysiologic auditory potentials” (ASHA, 2005). This diagnosis is usually made by a qualified audiologist after the patient has completed a comprehensive test battery (AAA, 2010; ASHA, 2005). Treatment of (C)APD generally focuses on three areas: environmental changes to ease communication difficulties, introduction of compensatory skills and strategies for the disorder, and remediation of the auditory deficit (AAA, 2010). Auditory training is one type of direct remediation of (C)APD. It is known that the brain retains a lifelong capacity for plasticity and adaptive reorganization; therefore, auditory deficits or weaknesses may be at least partially reversible through a deficit-specific training program.

Auditory training programs strengthen specific auditory skills. These include localization and lateralization, sequencing of sounds, phoneme/syllable discrimination, auditory memory, sound blending, frequency and intensity discrimination, temporal processing (including temporal ordering and temporal gap detection and discrimination), speech discrimination in noise, and interhemispheric transfer of information (ASHA, 2005).

Auditory training relies on plasticity in the central auditory system for improved performance. Auditory training is defined as “a set of (acoustic) conditions and/or tasks that are designed to activate auditory and related systems in such a manner that their neural base and associated auditory behavior are altered in a positive way” (Musiek, Chermak, & Weihing, 2007, p. 78). Auditory training activities may be formal or informal. Formal therapy activities are often completed in a clinical setting with a professional who has control over the presented stimuli; they may include computer-mediated programs, localization training, phoneme discrimination training, speech-in-noise training, or frequency and intensity discrimination training (Musiek et al., 2007).

Informal activities may be done at home and are usually recommended to supplement formal training. These activities may include computer games, board games, music training, or video games that target specific auditory deficit areas. For example, the Simon® game may be useful for temporal processing and auditory memory training (Musiek, 2005). This game has four lighted, colored buttons, each with a corresponding audible tone when activated. During play, the buttons light up in a random pattern, and the player must replicate the pattern. The random sequence increases in length as the game continues.

Everyday games and activities can improve auditory processing skills in children with (C)APD (Ferre, 2002) and should be initiated whenever there is a suspicion of (C)APD (AAA, 2010). Ferre (2002) listed several board games that target specific auditory skills, as described by the Bellis-Ferre profiles of (C)APD (Bellis, 2003). (The reader is referred to Bellis (2003) for a comprehensive description of the Bellis-Ferre (C)APD profiles, as this is beyond the scope of this report.) A very brief description of the Bellis-Ferre profiles, characteristic auditory deficits, and suggested examples of activities for auditory-specific enrichment is provided in Table 1.

Kuster (2009) compiled a list of free listening games and activities available on the Internet. This list of activities ranged in a hierarchy of auditory skills from detection, discrimination,
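Purely as an illustration (not part of the original report), the Simon®-style sequence-memory task described above can be sketched in a few lines of Python. The tone labels, round count, and `respond` callback are arbitrary placeholders, not anything specified by the game's maker:

```python
import random

# Illustrative sketch of a Simon-style auditory memory drill:
# a random tone sequence grows by one item each round, and the
# player must reproduce the entire sequence to keep playing.
TONES = ["low", "mid-low", "mid-high", "high"]  # placeholder labels

def play_simon(respond, rounds=5, seed=None):
    """Run up to `rounds` rounds. `respond(sequence)` receives the
    presented sequence and must return the player's reproduction.
    Returns the number of rounds completed without error."""
    rng = random.Random(seed)
    sequence = []
    for completed in range(rounds):
        sequence.append(rng.choice(TONES))   # sequence grows each round
        if respond(list(sequence)) != sequence:
            return completed                 # first error ends the game
    return rounds

# A "perfect player" simply echoes the presented sequence back.
perfect = lambda seq: seq
```

In a real drill the `respond` callback would play the tones aloud and collect the child's button presses; here it simply models the memory demand that makes the game useful for auditory memory training.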

77 Journal of Educational Audiology vol. 17, 2011

identification, to comprehension. The Ferre (2002) “Games We Play” and Kuster’s (2009) listening activities are useful resources available to parents of children diagnosed with (C)APD.

To date, we are unaware of a review of video games that would enhance auditory processing skills. This review will be useful for clinicians and parents who wish to enhance auditory processing skills during informal therapy times, outside of the formal therapy environment.

The genesis of computer games can be traced back to the early 1950s (Kent, 2001). Since this time, these games have evolved with technology; increased in the complexity and sophistication of their graphics, narratives, and storylines; and grown in popularity over the last five decades (Kent, 2001). Many of the early video games were not specifically designed for educational purposes, but for “fun.” However, most games promote incidental learning, as they are built around rule-based strategies, logic, memory, adaptability, and motivation.

During the recent market growth of “edutainment” (education through entertainment), video games and video gaming systems have become a fixture in most homes. Educational video games appeal to young children, as well as their parents, because they make learning fun.

One popular gaming system that many families have in their homes is the Nintendo Wii. Since its launch in November 2006, the Wii has sold more than 86 million units, with over 41 million in the United States alone, making it the best-selling gaming system (Sloan, 2011). The interactive Nintendo Wii games have received positive attention for their role in physical and occupational therapy (Deutsch, Borbely, Filler, Huhn, & Guarrera-Bowlby, 2008). However, there is little information about the use of this popular gaming system for improving listening or auditory processing skills. This report (a description of a clinical need) reviewed the potential use of Wii games as an auditory training tool for informal (C)APD therapy.

Table 1. Profiles of (central) auditory processing disorder and associated deficits.

Decoding
  Deficits: listening in background noise; spelling and reading difficulty; sound/symbol association; sound discrimination
  Targeted skills for remediation: phoneme identification; phonological awareness; sound discrimination; sound blending; word attack skills
  Suggested games & activities: Red Light-Green Light; Telephone; Wheel of Fortune®

Integration
  Deficits: combining multi-modality information; reading comprehension; following auditory directions; auditory & visual information
  Targeted skills for remediation: interhemispheric transfer; binaural skills; sound localization
  Suggested games & activities: Scrabble®; Bopit®; Simon®; Simon Says; card games

Prosodic
  Deficits: understanding the meaning; comprehending the main idea; frequency & temporal discrimination
  Targeted skills for remediation: perception; temporal patterning; pragmatics
  Suggested games & activities: MadGab®; singing; dramatic arts

Method

We defined the criteria for inclusion in this report. (Again, we define report not as a scholarly research project, but as a description of our research activity to fulfill our clinic’s need.) Initially, because of its popularity, we limited our review of video games to the Nintendo Wii format. Secondly, we reviewed games appropriate for use by children over the age of 6 years. Thirdly, the Entertainment Software Rating Board (ESRB) ratings were reviewed to ensure appropriate content of the video game for children. Lastly, we adopted the popular Bellis-Ferre subprofile model of (C)APD (Bellis, 2003) as the framework for reviewing video games to target specific auditory deficits. In this model, there are three primary (C)APD profiles: Decoding, Integration, and Prosodic. These profiles are based upon an individual’s auditory behavior and (C)APD behavioral and electrophysiological test results. (See Table 1 for a brief description.)

We began our search using popular Internet search engines to locate video gaming manufacturing Web sites, parental information Web sites, and multinational electronic commerce company Web sites (i.e., Amazon.com, GameStop.com) and searched for games marketed or advertised as “educational,” parent-recommended, and/or listed as top-selling video games. We compiled a list of potential video games.

From this list, we researched each video game by reviewing the game publisher’s synopsis and consulting consumer reviews. We “sampled” each game by playing a demonstration version on the manufacturer’s Web site to determine if auditory processing related skills were incorporated in the game. Video games were selected if they targeted at least two skills of a (C)APD subprofile. Some video games are referred to as families; therefore, we did not list each individual video game title. For example, the Guitar Hero family would be inclusive of such games as Guitar Hero I, II, III, IV, and V, Guitar Hero World Tour, and Guitar Hero: Warriors of Rock. We did not track the initial number of video games reviewed; many were quickly eliminated because a mature ESRB rating made them inappropriate for children.

Results

Based on our inclusion criteria, a total of 11 Wii games were found to be appropriate for use by individuals with Decoding

78 Wii-habilitation to Enhance Auditory Processing Skills

characteristics of (C)APD (see Table 2). These games targeted skills such as sound recognition, sound blending, vocabulary, and following directions. Twelve Wii video games were found to address deficit skills associated with Integration characteristics of (C)APD and are listed in Table 3. These games addressed skills such as combining auditory and visual information, following oral directions, auditory and verbal learning, and auditory and visual memory. Twelve Wii video games addressed auditory deficits associated with Prosodic characteristics of (C)APD; these are listed in Table 4. These games addressed auditory skills such as auditory discrimination and temporal patterning.

Table 2. Video games that target deficit skills associated with the Decoding Profile of (central) auditory processing disorder.

Reader Rabbit (Family): > 6 years
Fit: > 6 years
Word Jong: > 6 years
Story Hour (Family): > 6 years
Story Book Workshop (Family): 5-10 years
Jump Start Games (Family): 5-6 years
My Word Coach: > 6 years
Margot’s Word Brain: > 10 years
Wheel of Fortune: > 6 years
Dance Dance Revolution (Family): > 6 years

Table 3. Video games that target deficit skills associated with the Integration Profile of (central) auditory processing disorder.

Sesame Street Counting Carnival: > 3 years
Reader Rabbit (Family): > 6 years
Nickelodeon Fit: > 6 years
Jump Start Games (Family): > 6 years
Dance Dance Revolution (Family): > 6 years
We Cheer (Family): > 10 years
Family Game Night: > 6 years
Family Think Smart: > 6 years
Pictionary: > 6 years
Big Brain Academy: > 6 years
Disney’s Think Fast: > 6 years
Cosmic Family: > 6 years

Table 4. Video games that target deficit skills associated with the Prosodic Profile of (central) auditory processing disorder.

Disney Sing It (Family): > 3 years
Wii Music: 3-10 years
Guitar Hero (Family): > 6 years
Dance Dance Revolution (Family): > 6 years
Kidz Bop (Family): > 6 years
Lets Tap: > 6 years
We Sing Down Under: > 6 years
Jump Start Games (Family): > 6 years
Karaoke Revolution Games (Family): > 6 years
High School Musical (Family): > 6 years
We Cheer (Family): > 10 years
Michael Jackson the Experience: > 6 years

Discussion

Many Wii video games target auditory processing skills and may be incorporated as a valuable informal auditory training tool for (C)APD. Video games can be recommended to target specific auditory deficits or weaknesses. It is important to note that informal therapy and video games should not replace formal rehabilitation techniques but should instead be used as a supplement to therapy.

A disappointing finding of this investigation was the short release window of a video game. Several of the Wii video games in Tables 2 through 4 have been discontinued by the gaming manufacturer. However, all of the games listed are available in new and used condition through multinational electronic commerce company Web sites and popular game retailers (i.e., Amazon, GameStop).

Although this report reviewed games specific to the Wii format, many of these games are available for other formats or gaming systems, such as Xbox™ and PlayStation™. Further research is needed to determine the effectiveness of Wii video games as an informal therapy option.

Many households already have a Wii gaming system. Therefore, deficit-specific auditory games may be an affordable and feasible way for parents to encourage and support children “at risk” for (C)APD. By incorporating deficit-specific auditory activities into everyday activities, auditory remediation may improve these auditory deficit skills. The use of popular, interactive games may be useful and convenient for audiologists, speech-language pathologists, early interventionists, or parents who wish to engage listening and auditory processing skills during play.
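As a reader's aid (not part of the original report), the four-part inclusion screen described in the Method section can be sketched as a simple filter. The game records, field names, and skill labels below are hypothetical examples, and the age check is a simplification of the authors' criterion:

```python
# Illustrative filter for the report's inclusion criteria:
# Wii format, appropriate for children, child-suitable ESRB rating,
# and >= 2 targeted skills within one Bellis-Ferre subprofile.
# Skill sets are condensed from Table 1; records are hypothetical.

PROFILE_SKILLS = {
    "Decoding": {"phoneme identification", "phonological awareness",
                 "sound discrimination", "sound blending", "word attack"},
    "Integration": {"interhemispheric transfer", "binaural skills",
                    "sound localization"},
    "Prosodic": {"perception", "temporal patterning", "pragmatics"},
}

def include(game):
    """Return the matched profile name, or None if any criterion fails."""
    if game["platform"] != "Wii":
        return None                           # criterion 1: Wii format only
    if game["esrb"] not in ("EC", "E", "E10+"):
        return None                           # criterion 3: child-appropriate rating
    for profile, skills in PROFILE_SKILLS.items():
        if len(skills & set(game["skills"])) >= 2:
            return profile                    # criterion 4: >= 2 skills in one profile
    return None
```

A hypothetical record such as `{"platform": "Wii", "esrb": "E", "skills": ["sound blending", "sound discrimination"]}` would be assigned to the Decoding profile, mirroring how titles were sorted into Tables 2 through 4.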


References

American Academy of Audiology. (2010). Clinical practice guidelines: Diagnosis, treatment and management of children and adults with central auditory processing disorder. Available from http://www.audiology.org/resources/documentlibrary/pages/CentralAuditoryProcessingDisorder.aspx
American Speech-Language-Hearing Association. (2005). (Central) auditory processing disorders [Technical report]. Available from www.asha.org/policy
Bellis, T. J. (2003). Assessment and management of central auditory processing disorders in the educational setting: From science to practice (2nd ed.). New York, NY: Delmar Learning.
Deutsch, J. E., Borbely, M., Filler, J., Huhn, K., & Guarrera-Bowlby, P. (2008). Use of a low-cost, commercially available gaming console (Wii) for rehabilitation of an adolescent with cerebral palsy. Journal of the American Physical Therapy Association, 88, 1196-1207.
Ferre, J. M. (2002). Managing children’s central auditory processing deficits in the real world. Seminars in Hearing, 4, 319-326.
Kent, S. L. (2001). The ultimate history of video games: From Pong to Pokemon - The story behind the craze that touched our lives and changed the world. New York, NY: Three Rivers Press.
Kuster, J. M. (2009, June). Do you hear what I hear? Listening activities. The ASHA Leader, 26-27.
Musiek, F. M. (2005). Temporal (auditory) training for (C)APD. The Hearing Journal. doi:10.1097/01.HJ.0000286118.00336.ec
Musiek, F. M., Chermak, G. D., & Weihing, J. (2007). Auditory training. In G. D. Chermak & F. M. Musiek (Eds.), Handbook of (central) auditory processing disorder: Comprehensive intervention (Vol. II, pp. 77-106). San Diego, CA: Plural Publishing.
Sloan, D. (2011). Playing to Wiin: Nintendo and the video game industry’s greatest comeback. Hoboken, NJ: Wiley Publishing.


Auditory Afferent and Efferent Assessment Post Functional Hemispherectomy: A Case Study

Annette Hurley, Ph.D.
Robert G. Turner, Ph.D.
Eric Arriaga, B.S.
Amanda Troyer, B.A.
Louisiana State University Health Sciences Center, New Orleans, Louisiana

We had a unique opportunity to investigate the auditory processing abilities of a 10-year-old female after a left functional hemispherectomy. We administered a comprehensive battery of behavioral and electrophysiological tests, assessing both afferent and efferent auditory pathways. Normal peripheral hearing was established. A normal masking level difference threshold and gap detection threshold were obtained. Normal performance on the Frequency Pattern Test and Duration Pattern Test was also established. However, a right ear deficit was evident on all dichotic speech tests administered. Poor performance on speech-in-noise tests and time-compressed speech tests was noted. The auditory brainstem response was within normal limits, the auditory middle latency response revealed an electrode effect over the left temporal lobe, and auditory late event responses were within normal limits. Suppression of transient evoked otoacoustic emissions was present for the right and left ears. Speech intelligibility in background noise improved with the introduction of contralateral noise for the right and left ears. The introduction of contralateral white noise negatively affected the N1/P2 amplitudes for right and left ear stimulations, and the introduction of contralateral noise did not affect the P3 latency for either ear. The results from this case are important as they demonstrate behavioral and electrophysiological test results in relation to a documented lesion.

Introduction

A hemispherectomy is a rare surgical procedure in which one cerebral hemisphere is removed or disabled. The first published report of an anatomical hemispherectomy appeared over 80 years ago (Dandy, 1928). This procedure has been used as a radical surgical treatment for intractable seizures since 1945 (Krynauw, 1950). Improved surgical techniques and procedures have led to modifications of the total anatomical hemispherectomy to a “functional hemispherectomy.” During a functional hemispherectomy, only affected anatomical portions of the central and temporal regions are removed, and the two hemispheres are disconnected (Rasmussen, 1973). This procedure has shown improved control of seizures (Vining et al., 1997).

The improved seizure control and psychosocial improvement following successful surgery outweigh the poor prognosis associated with the natural history of the disease processes. Most hemispherectomized patients do not show a decline of cognitive function in comparison to their preoperative performance, as measured by verbal and performance IQ (Brandt, Vining, Stark, Ansel, & Freeman, 1990; Devlin et al., 2003; McFie, 1961; Pulsifer et al., 2004; Tinuper, Andermann, Villemure, & Quesney, 1988; Verity et al., 1982; Wyllie et al., 1998). In fact, an improvement of cognitive function has been reported in some children following hemispherectomy (Devlin et al., 2003).

Normal language and fluent speech have been reported after left hemispherectomy in children with congenital damage (Mariotti, Iuvone, Torrioli, & Silveri, 1998; Vargha-Khadem & Polkey, 1992). This provides support for the idea that the right hemisphere assumes language dominance (Stark, Bleile, Brandt, Freeman, & Vining, 1995). Further evidence for cortical reorganization after hemispherectomy has been provided by fMRI, showing an increase in activity in the intact hemisphere (Paiement et al., 2008).

Central Auditory Processing and Auditory Lesions

Historically, behavioral tests employed in central auditory processing assessment were originally developed for site-of-lesion testing. Because of similar symptoms, these tests were later used to assess auditory processing. Concern with auditory processing disorders dates back to the 1950s, when a group of Italian physicians first reported that patients with temporal lobe lesions had complaints of difficulty understanding speech (Bocca, Calearo, & Cassinari, 1954; Bocca, Calearo, Cassinari, & Migliavacca, 1955).

A few years later, Kimura (1961) was first to model the dominant contralateral pathway in dichotic listening tasks. Dichotic testing is a non-invasive method for measuring cerebral hemispheric specialization of auditory processing and laterality. In normal listening conditions, auditory information is conducted to the auditory cortex by both ipsilateral and contralateral auditory pathways; however, during controlled dichotic listening, the ipsilateral pathway is suppressed by the dominant contralateral

pathway. A language-related auditory signal presented to the right ear travels through the dominant contralateral pathway directly to the left hemisphere. Conversely, a language-related auditory signal directed to the left ear is conducted to the right cortex and must be transferred to the left hemisphere via the corpus callosum in order for the person to repeat what was heard in the left ear. Thus, a slight right ear advantage is present for normal, right-handed listeners during dichotic tasks (Berlin, Lowe-Bell, Berlin, Cullen, & Thompson, 1973; Kimura, 1961; Lowe, Cullen, Berlin, Thompson, & Willett, 1970). When there is damage or a lesion in the auditory temporal lobe, the ear contralateral to the lesion will be affected in dichotic listening tasks, as the contralateral pathway is the dominant pathway (Berlin, Lowe-Bell, Jannetta, & Kline, 1972).

Previous investigations have reported specific ear advantages (relative to the anatomically lesioned area) in dichotic listening for subjects with agenesis of the corpus callosum (Bryden & Zurif, 1970), partial and complete commissurotomy (Zaidel, 1983), congenital hemiplegia (Brizzolara et al., 2002; Isaacs, Christie, Vargha-Khadem, & Mishkin, 1996; Korkman & von Wendt, 1995), acquired and congenital brain injury (Nass, Sadler, & Sidtis, 1992), and hemispherectomy (Damasio, Lima, & Damasio, 1975; Netley, 1972; Zaidel, 1983). The timing of the lesion onset influences the ear advantage. Congenital lesions may reduce the magnitude of laterality, due to the possibility of increased cortical reorganization (Brizzolara et al., 2002; Fernandes & Smith, 2000; Isaacs, Christie, Vargha-Khadem, & Mishkin, 1996; Woods, 1984).

There are limited published cases reporting behavioral and electrophysiological central auditory processing results after hemispherectomy. Boatman, Vining, Freeman, and Carson (2003) reported the auditory processing abilities of two hemispherectomy patients, one with right hemispherectomy and one with left hemispherectomy. Both patients received hemispherectomies as children, at ages 9 to 9-½ years. Post-surgical testing was done one to one and a half years after surgery. Both patients had good auditory recognition in quiet for speech and non-speech stimuli and abnormal performance on speech-in-noise testing. The authors purported that both hemispheres contribute to speech processing in background noise through the involvement of the efferent auditory pathway and attention. Consistent with previous investigations, both patients showed a deficit in the ear contralateral to the removed hemisphere during dichotic testing.

Auditory Evoked Responses Post Hemispherectomy

There are few studies reporting auditory evoked potential responses after hemispherectomy. Saletu, Itil, and Saletu (1971) reported auditory late event responses (ALERs) in a patient with left hemispherectomy. Responses were obtained from both sides, but the amplitudes were slightly lower on the operated side. Kutas, Hillyard, and Volpe (1990) reported the P300 response in five patients post commissurotomy. These investigators found larger response amplitudes over the right hemisphere in comparison to the left. Additionally, they reported that the P300 response is not dependent upon the corpus callosum.

Tong, Xu, and Fu (2009) successfully recorded P300 waveforms in six hemispherectomized subjects and a control group. Four subjects were left hemispherectomized and two were right hemispherectomized. No statistical differences in P300 amplitude or latency were reported between the hemispherectomized and control groups. The authors indicated, “A unilateral hemisphere can generate P300 when given certain tasks.” Furthermore, these authors argued, “The basic cognitive function of the two groups was not significantly different, and to some extent, reflects the plasticity of the cerebral hemisphere” (Tong et al., 2009, p. 1773). It is important to note that these authors recorded the P300s to binaural stimuli; therefore, latency and amplitude comparisons between monaural and binaural stimulation were not available. Additionally, amplitude and latency measurements were made only at electrode locations Cz and Pz. Thus, information from additional electrode sites over the site of hemispherectomy was not available.

Auditory Efferent System

The auditory efferent system is not completely understood. The rostral system projects from the cortex to the medial geniculate body and other brainstem auditory nuclei. Most efferent research has focused on the olivocochlear bundle (Rasmussen, 1946; Warr & Guinan, 1979). There are two groups of olivocochlear efferents, the lateral olivocochlear bundle (LOC) and the medial olivocochlear bundle (MOC). The LOC efferents are made of unmyelinated fibers and synapse primarily ipsilaterally on the auditory nerve afferents beneath the inner hair cells. The MOC is made up of neurons arising from the peri-olivary nuclei around the region of the superior olivary complex (Rasmussen, 1946; Warr & Guinan, 1979). The MOC efferents are myelinated, and the majority of these fibers cross at the floor of the 4th ventricle to the contralateral cochlea and synapse directly with outer hair cells (Rasmussen, 1946).

There are limited ways to assess the auditory efferent system. One of the most recent objective applications is the study of the suppression of otoacoustic emissions (OAEs). With the introduction of noise (delivered either binaurally, ipsilaterally, or contralaterally), the amplitude of OAEs will be reduced in most individuals with normal hearing or normal outer hair cell function (Berlin, Hood, Hurley, & Wen, 1994; Hood, Berlin, Hurley, Cecola, & Bell, 1996). The reduction in OAE amplitude

82 Auditory Afferent and Efferent Assessment Post Functional Hemispherectomy: A Case Study has been attributed to the MOC’s influence on cochlear output. grand mal seizure followed by respiratory arrest, and she required To our knowledge, there has not been an efferent investigation CPR afterwards. Grand mal seizures reoccurred at ages 4 and specifically studying suppression of OAEs for patients who have 6 years. Respiratory rescue was required after each grand mal had a hemispherectomy. seizure. Ongoing seizures continued, even though pharmaceutical One behavioral measure of the efferent system has been management followed. Seizures remained until January 2009, attributed to the MOC’s role in speech perception in noise (Muchnik when a functional modified left hemispherectomy was performed. et al., 2004; Sahley, Nodar & Musiek, 1997). Previous investigators Since that time, CLH has been seizure free. have reported an increase in speech intelligibility scores with the Pre- and post-surgical psychological assessments showed no addition of contralateral noise (Giraud et al., 1997; Kumar & change, indicating CLH fell within the average range of intellectual Vanaja, 2004). Clinically, an improvement of greater than 10% is abilities on the following measures: verbal comprehension, considered to be within normal limits (Kumar & Vanaja, 2004) perceptual reasoning, and working memory. Processing speed was and indicates a functional efferent system. Researchers de Boer in the low- average range. and Thorton (2008) reported a positive correlation between the At the present time, CLH is 11 years of age, mainstreamed in role of the MOC and phoneme-in-noise training. Participants with the fifth grade, and performing well academically. 
This success is the poorest ability of discrimination on the first day of training, attributed, in part, to support services including speech-language and who showed the most improvement after training, showed an therapy and private tutoring two times per week, as well as weekly, increase in MOC activity. private occupational and physical therapy. Another objective measure of the efferent system is the CLH was referred to this clinic for a (central) auditory effect of contralateral noise on the ALER. A decrease in the N1/ processing disorder [(C)APD] assessment to determine if there P2 amplitude and an increase in the P300 latency have been were any auditory processing recommendations to support a reported with the introduction of contralateral noise (Chueden, successful academic career. Testing was completed during two 1972; Cranford & Martin 1991; Hurley, Bhatt, Davis, & Collins, sessions; CLH returned on a separate date for efferent assessments. 2011; Krishnamurti, 2001; Krumm & Cranford, 1994; Salisbury, Parental consent and patient assent for participation in the efferent Desantis, Shenton & McCarley, 2002; Salo et al., 2003). These auditory measures was obtained in accordance with this university’s changes are reported to be mediated by the efferent system (Salo, institutional review board policies. CLH was compensated for her Lang, Salmivalli, Johansson, & Peltola, 2003). participation in this case study. The present case provided a rare opportunity to study the afferent and efferent auditory pathways for a number of Peripheral Hearing Assessment reasons. There are few cases that provide both behavioral and An otoscopic examination indicated clear ear canals, bilaterally. electrophysiological data from patients with documented central Normal (Type A) tympanograms were obtained bilaterally, and auditory lesions. 
First, the disconnection of the left temporal lobe normal ipsilateral and contralateral acoustic reflexes were obtained from the corpus callosum creates a right-ear deficit in dichotic bilaterally. Pure tone thresholds were within normal limits (< speech tasks. Second, efferent auditory studies are limited for 15 dBHL), bilaterally. Transient evoked otoacoustic emissions patients with documented cortical lesions. Last, in patients with (TEOAEs) were obtained using the ILO system (Version 6.0). documents lesions, electrophysiological recordings are an objective TEOAEs were present (>3 dB) at all frequencies (1.0, 1.4, 2.0, temporal window into the function of the central auditory nervous 2.8, and 4.0 kHz), and wave reproducibility was greater than70%, system (CANS) and may provide useful information about the suggesting normal outer hair cell function. All of these tests are underlying generators. consistent with normal peripheral hearing.

Case Report Behavioral Tests for (Central) Auditory Processing SCAN-3:C. The SCAN-3:C (Keith, 2010), a test for auditory History processing disorders in Children, was administered at 50 dB HL. CLH is a female born in January, 2000. She was the product of The SCAN-3:C consists of five diagnostic subtests. The first two a full-term pregnancy and birth, weighing 7 lbs-8 oz at birth. At five subtests stress the auditory system by degrading and filtering the weeks chronological age, it was discovered that, at approximately speech signal. The Filtered Words subtest uses a 750 Hz low-pass 27 weeks gestational age, CLH had suffered a left temporal- filter and involves presentation of monosyllabic words to each ear, parietal infarct in-utero. This resulted in the limited use of her and the Auditory Figure-Ground subtest includes monosyllabic right hand and intractable epilepsy. At age 2, she experienced a words in the presence of multi-talker babble at a +8 signal-to-noise

Journal of Educational Audiology, vol. 17, 2011

ratio (SNR). The Competing Words and the Competing Sentences subtests are dichotic, whereby two different stimuli (words or sentences) are presented to the right and left ears. In the Competing Words subtest, the listener is asked to repeat both words, with attention directed to the right ear for the first half of the test and attention directed to the left ear for the remainder of the test. In the Competing Sentences subtest, the listener is required to repeat the sentence in a directed ear while ignoring the sentence in the other ear. The Time Compressed Speech (60% compression ratio) subtest, which removes the temporal cues of speech intelligibility, was also administered. Results for the subtests are presented in Table 1. It is important to note that scoring for the SCAN-3:C is interpreted from the combined right and left ears: the right and left individual scores are added together for the raw score, and the Standard score is derived from the raw score. CLH performed within normal limits for the Auditory Figure Ground, Filtered Words, and Time Compressed Speech subtests. A right ear deficit was revealed on the Competing Words and Competing Sentences subtests.

The Dichotic Digits Test (Musiek, 1983) was also administered. In this test, two numbers are presented to the right ear at the same time that two different numbers are presented to the left ear. The listener must repeat all four numbers. This test assesses the ability of the auditory system to integrate information from the right and left cerebral hemispheres and is scored based on the percentage of digits repeated correctly. CLH scored 40% for the right auditory pathway and 92% for the left auditory pathway. These discrepant scores reflect a right ear deficit.

A Three-Interval Forced Choice Gap Detection Test (Davis & Hurley, 2002) was administered. This test is a variation of the Gaps in Noise test (GIN; Musiek et al., 2005) and is used as a temporal resolution screening tool. In this test, three bursts of noise are presented, with one of the bursts containing a silent interval that varies from 2 to 20 msec in length. The listener must identify which burst in the series has the silent interval by indicating "1, 2, or 3" or "first, middle, last." CLH was able to detect a 3 msec silent interval in the right and left ears separately, and these results are within normal limits.

Frequency Pattern Test (FPT) and Duration Pattern Test (DPT). The FPT (Musiek & Pinheiro, 1987) and DPT (Pinheiro & Musiek, 1985) require auditory discrimination, temporal ordering, and pattern recognition. Both tests are similar in composition. These tests were included because previous investigators report that patients with hemispheric or interhemispheric dysfunction may have difficulty in the ordering of sound sequences (Bamiou et al., 2006; Musiek, Baran, & Pinheiro, 1990). Tones in the FPT are 200 msec in duration with a 10 msec rise-fall time. The inter-toneburst interval is 150 msec, with a 7 sec inter-pattern interval. For the FPT, three low- and high-frequency tones are presented to the listener. Two are the same and one is different. The listener must identify and then verbalize the pattern with a response such as "low, low, high" or "high, low, low." The frequency of the low tone is 880 Hz, and the frequency of the high tone is 1122 Hz. CLH scored 100% when tones were presented separately to the right and left ears.

In the DPT, the 1000 Hz pure tones are either "short" (250 msec) or "long" (500 msec). Three tones are presented to the listener. Two are the same and one is different. The listener responds by identifying and then verbalizing the pattern, such as "long, long, short" or "short, long, long." The test is scored on percentage correct for each ear. CLH scored 80% when tones were presented to the left ear and 92% when tones were presented to the right ear. These scores are within normal limits (Bellis, 2003).

Masking Level Difference (MLD). An MLD was obtained by comparing thresholds between two conditions: (1) a 500 Hz tone and noise in phase (S0N0), and (2) the 500 Hz tone out of phase with the contralateral signal while the noise remained in phase with the contralateral noise (SπN0; Hirsh, 1948; Olsen, Noffsinger, & Carhart, 1976; Olsen, Noffsinger, & Kurdziel, 1975). CLH had a normal MLD of 10 dB (Olsen et al., 1976).

Results from the behavioral test battery are summarized in Table 1. This summary table groups the tests according to classifications of monaural low redundancy, dichotic tests, and tests of temporal pattern or temporal processing (Bellis, 2003).

Table 1. Summary of behavioral (central) auditory processing disorder tests.

Task                            Test                                        Result
Monaural Low Redundancy Tests   Auditory Figure Ground (SCAN-3:C subtest)   *Normal
                                Filtered Words (SCAN-3:C subtest)           Normal
                                Time Compressed Speech (SCAN-3:C subtest)   Normal
Dichotic Listening Tasks        Competing Words (SCAN-3:C subtest)          *Normal; Right Ear Deficit
                                Competing Sentences (SCAN-3:C subtest)      Abnormal; Right Ear Deficit
                                Dichotic Digits                             Abnormal; Right Ear Deficit
Temporal Pattern                Frequency Pattern Test                      Right: Normal; Left: Normal
                                Duration Pattern Test                       Right: Normal; Left: Normal
                                Gap Detection Threshold                     Right: Normal; Left: Normal
                                Masking Level Difference                    Normal

*One standard deviation below the mean.

Electrophysiologic Recordings

Electrophysiologic recordings were obtained while the

Auditory Afferent and Efferent Assessment Post Functional Hemispherectomy: A Case Study

subject rested comfortably in a reclined position and watched an animated movie with the sound muted. These recordings were obtained to assess the integrity of the CANS from the brainstem through the auditory cortex. Stimulus parameters for the auditory brainstem response (ABR), speech ABR, auditory middle latency response (AMLR), and auditory late event response (ALER) are presented in Table 2. All recordings were made with three surface electrodes attached to the skin at the vertex (non-inverting) and each ipsilateral mastoid (inverting). Electrode impedance was below 5000 ohms for all recordings, and stimuli were delivered to the ear by ER3A insert earphones.

Table 2. Parameters for auditory brainstem response (ABR), speech auditory brainstem response (speech ABR), auditory middle latency response (AMLR), and auditory late event response (ALER).

Parameter             ABR                           Speech ABR              AMLR             ALER
Time Window           12 msec                       100 msec                100 msec         750 msec
Number of Sweeps      2000                          3000                    1000             —
Stimulus              100 µsec condensation click   40 msec "da"            Click            Standard: 500 Hz; Rare: 2000 Hz
Presentation Rate     27.7                          11.1                    6.7              1.1
Filter Settings (Hz)  100-3000                      100-2000                5-100            1-30
Stimulus Sequence     2 runs of 2000 clicks         2 runs of 3000 clicks   2 runs of 2000   2 runs of 1000 stimuli (80% frequent, 20% rare)
Stimulus Level        80 dBnHL                      80 dBSPL                70 dBnHL         —
Artifact Rejection    Yes                           Yes                     Yes              Yes
Number of Channels    1                             1                       2                2

Electrode montages: Cz:A1, Cz:A2, C3:A1, C3:A2, C4:A1, C4:A2.

Auditory Brainstem Response (ABR). The normal ABR is shown in Figure 1, and the latency and amplitude values of Waves I, III, and V are reported in Table 3. This ABR is within normal clinical values (Hall, 2007).

Table 3. Auditory brainstem response latency and Wave V amplitude measures.

          Wave I (msec)   Wave III (msec)   Wave V (msec)   Wave V amplitude (µV)
Binaural  1.91            3.86              5.70            .97
Right     1.74            3.74              5.70            .36
Left      1.66            4.11              5.49            .45

Figure 1. A normal auditory brainstem response was obtained for binaural (B), right (R), and left (L) stimulations.

Figure 2. Normal BioMARK recordings for the right and left ears.

Table 4. Latency information for Wave V and Wave A, and the BioMARK algorithm score for the speech auditory brainstem response.

       Wave V (msec)   Wave A (msec)   Algorithm Score
Right  6.53            7.45            1
Left   6.45            7.45            4

Speech Auditory Brainstem Response. Two repeatable recordings of the speech ABR were obtained and then added together for a grand average speech ABR response. The grand average was compared to a normative recording for an algorithmic, numeric score that is interpreted by the BioMARK proprietary software as "normal, borderline, or abnormal." The left and right monaural summed waveforms are shown in Figure 2. Wave V latencies and the BioMARK algorithmic numeric scores are

displayed in Table 4. The algorithm scores are within normal limits and reflect normal encoding of speech stimuli.

Auditory Middle Latency Response (AMLR). The amplitude of the Na-Pa wave complex for the AMLR was obtained by summing the two individual runs. Investigators have previously reported that amplitude measures may be more sensitive than latency measurements (Chermak & Musiek, 1997; Kraus, Ozdamar, Hier, & Stein, 1982; Scherg & Von Cramon, 1986). Latency and amplitude values for each electrode site and stimulation are reported in Table 5, and the recording is shown in Figure 3. The Na-Pa amplitude at electrode C3 is more than 50% smaller than the amplitudes at the other electrode sites. Amplitude measures that are reduced by more than 50% relative to other electrode sites are diagnostically significant (Chermak & Musiek, 1997; Musiek, Charlette, Kelly, Lee, & Musiek, 1999).

Figure 3. The auditory middle latency response (AMLR) recording.

Table 5. Latency and amplitude information for the auditory middle latency response. An electrode effect was evident for the recording over the left temporal lobe (electrode site C3).

Electrode   Stimulus Ear   Na latency (msec)   Pa latency (msec)   Na-Pa amplitude (µV)
Cz          Left           20.02               26.68               .49
Cz          Right          20.85               30.64               .45
C4          Left           22.94               30.22               .44
C4          Right          22.52               31.06               .76
C3          Left           22.52               26.47               .08
C3          Right          15.23               24.39               .12

Auditory Late Event Response (ALER). The ALER recordings were obtained using an "oddball" paradigm (Squires & Hecox, 1983). The software selection for the oddball ratio was 80/20, indicating that the frequent stimulus would be presented 80% of the time and the rare tone would be presented 20% of the time. Although the standard recording procedure requires the subject to attend to or count the infrequent or rare stimuli, this recording was obtained passively. No instructions were given to CLH, and she sat and watched a muted, animated movie. Latency and amplitude values for monaural and binaural stimulations are shown in Table 6 and in Figure 4. The P3 is interpreted as within normal limits (Hall, 2007). On a clinical note, "P3" is used when an oddball paradigm is recorded passively; "P300" is used when the listener is instructed to attend to novel stimuli.

Figure 4. Auditory late event response and P300 recordings.

Table 6. Latency and amplitude information for the P300 recording.

          N1 latency (msec)   P2 latency (msec)   N1/P2 amplitude (µV)   P300 latency (msec)   P3 amplitude (µV)
Binaural  121                 155.35              3.25                   304.22                1.83
Right     123.71              144.94              1.44                   319.60                1.91
Left      116                 144.94              1.41                   311.50                2.97

Efferent Assessment

Contralateral suppression of TEOAEs. TEOAEs were obtained in response to an 80 dB peak equivalent SPL "non-linear" click in the right and left ears. The ILO "non-linear" click consists of three in-phase 80 µs square wave clicks followed by a fourth out-of-phase click 10 dB higher in intensity. Three TEOAEs were obtained in quiet, and three TEOAEs were obtained while 45 dB HL white noise was delivered to the contralateral ear via

insert earphone. A slightly greater TEOAE amplitude was obtained for the right ear in quiet conditions as compared to the left ear in quiet. The right ear also had slightly more suppression of TEOAE amplitude (noise was delivered to the left ear). Table 7 provides TEOAE values for each condition. These findings are consistent with previous investigations showing slightly greater suppression for the right ear with contralateral noise (Hood et al., 1996).

Table 7. Contralateral suppression of transient evoked otoacoustic emissions. Mean TEOAE amplitude in quiet and in contralateral noise, overall suppression, and suppression by frequency band (dB).

       Quiet (dB)   Noise (dB)   Overall (dB)   1 kHz   1.4 kHz   2 kHz   2.8 kHz   4 kHz
Right  21.93        21.40        .53            1.23    .53       .23     .77       .27
Left   18.87        18.17        .70            2.10    1.47      -1.03   -.10      .17

Speech intelligibility in ipsilateral and contralateral noise. Behavioral assessment of the efferent system was obtained by measuring speech intelligibility with ipsilateral four-talker babble and with the introduction of contralateral white noise. Speech stimuli consisted of 50 NU-6 monosyllabic words with speech babble in the ipsilateral ear at a +10 SNR. For the first half of the word list, the speech and noise were presented to the ipsilateral test ear. During the second half of the word list, the speech and noise were still presented to the ipsilateral test ear, but white noise at 40 dB HL was also delivered via insert earphone to the contralateral ear. Therefore, each condition yielded an intra-aural comparison. The right ear showed an improvement of 16% (28% with ipsilateral noise; 44% with ipsilateral and contralateral noise), and the left ear showed a similar improvement of 12% when contralateral white noise was introduced (72% with ipsilateral noise; 84% with ipsilateral and contralateral noise). As noted earlier, an improvement greater than 10% is considered within normal limits clinically (Kumar & Vanaja, 2004) and reflects a normally functioning efferent system.

Auditory cortical potentials with contralateral noise. Auditory evoked late potential recordings obtained with the oddball paradigm described in an earlier section were recorded in quiet and in the presence of contralateral 50 dB HL white noise. A decrease in the N1/P2 amplitude was obtained when contralateral white noise was delivered to both ears. Additionally, an increase in P3 latency was obtained when contralateral noise was delivered to the right ear, but not when the noise was delivered to the left ear (signal in the right ear). Amplitude and latency values for ALER and P3 responses are listed in Table 8 and shown in Figure 5.

Figure 5. The auditory late event response in quiet and in noise.

Table 8. The auditory late event response and P300 recordings in quiet and with contralateral noise.

              N1 latency (msec)   P2 latency (msec)   N1/P2 amplitude (µV)   Amplitude Reduction   P3 latency (msec)
Right Quiet   114.75              144.35              1.56                                         333.37
Right Noise   126.20              151.19              .64                    59%                   333.07
Left Quiet    129.33              163.68              1.22                                         305.26
Left Noise    122.24              151.19              .76                    38%                   321.91

Discussion

The behavioral and electrophysiological test results are consistent with the anatomy of the lesion. The functional hemispherectomy involved disconnection of the left auditory temporal lobe from the corpus callosum. Input from the ipsilateral and contralateral pathways remains present; however, we cannot be sure of the functional capabilities of the left auditory cortex. Normal peripheral hearing was established by pure tone thresholds, tympanometry, acoustic reflexes, and TEOAEs.

Behavioral Assessment

Behavioral central auditory processing disorder tests were consistent with previous site-of-lesion investigations. As expected, this patient displayed a right ear deficit (the ear contralateral to the

lesion) on all dichotic speech tests administered: the Competing Words and Competing Sentences subtests of the SCAN-3:C and the Dichotic Digits Test. Speech introduced to the right ear must be processed via the right ipsilateral pathway in the right hemisphere, and speech introduced to the left ear travels directly to the right hemisphere for processing. The contralateral auditory pathway is dominant. In dichotic listening, the ear contralateral to the lesion will be suppressed; thus, CLH's right ear deficit is evidence of the left functional hemispherectomy.

Consistent with previous investigations, CLH performed within normal limits on the FPT and DPT. Dennis and Hopyan (2001) reported normal rhythm perception in children with either right or left temporal lobectomy; melody deficits related to pitch perception were reported in subjects who had right temporal lobectomy. Suprasegmental aspects of speech are processed in the right hemisphere. CLH's ability to correctly apply a linguistic label to the pattern is evidence of the migration of language to the right hemisphere.

A normal MLD was obtained. The MLD is mediated by the lower brainstem and is often abnormal in patients with brainstem lesions (Lynn & Gilroy, 1977; Olsen et al., 1976), whereas cortical lesions have shown no effect on the MLD (Cullen & Thompson, 1974).

CLH performed within normal limits, but one standard deviation below the mean, on the Auditory Figure Ground subtest of the SCAN-3:C. This score is based upon the combined individual right and left scores. Previously, poor speech-in-noise performance was reported in two patients with hemispherectomy (Boatman et al., 2003); the authors attributed this deficit to a possible deficit in the efferent system.

Electrophysiological Assessment

A normal ABR was obtained in this case. This obligatory response is mediated by structures from the distal portion of the VIII nerve through the superior olivary complex. These normal responses are not surprising, as the generators of this response are in the brainstem and midbrain and are not anatomically affected by the functional hemispherectomy.

An electrode effect for the left temporal lobe (C3) was indicated by the AMLR. The underlying auditory generators of the AMLR include the thalamocortical pathway, the reticular formation, and the inferior colliculus (Kraus et al., 1982). Previous investigations of the AMLR in patients with temporal lobe lesions have been conflicting. A normal AMLR was reported in one patient with auditory agnosia and temporal lobe lesions (Parving, Solomon, Elberling, Larsen, & Lassen, 1980), whereas Kraus et al. (1982) reported diminished Pa amplitude over the lesioned side in 24 patients with temporal lobe lesions. The latter study is consistent with our results.

The ALER in this case was within normal latency values. Researchers agree that the exact neural generators for the ALER are not known (Clayworth & Woods, 1987; Wood & Wolpaw, 1982). Most agree that the auditory cortex, auditory association areas, and other structures, such as the limbic system, hippocampus, amygdala, and thalamus, are all involved in the generation and regulation of the ALER and P300. Additional information from numerous electrode sites would be of interest for hemispherectomy cases.

Efferent Assessment

One of the functions of the auditory efferent system has been linked to enhancement of speech understanding in noise. Boatman et al. (2003) attributed the difficulty of speech understanding in noise in two patients with hemispherectomies to a dysfunctional efferent system. The speech-in-noise deficit may instead be attributed to the right hemisphere's responsibility for processing all spectral and temporal information, reflecting greater demands on the single remaining hemisphere rather than on two specialized hemispheres. Again, the remaining functional capabilities of the left hemisphere are not completely known.

Contralateral suppression of TEOAEs was evident for both the right and left ears, suggesting a normal finding. The reduction of amplitude is likely mediated by the olivocochlear bundle crossing to the contralateral cochlea; contralateral suppression of TEOAEs is mediated by the MOC (Berlin et al., 1994; Collet et al., 1990; Hood et al., 1996). Consistent with previous investigations, this patient displayed slightly more TEOAE suppression for the right ear (contralateral noise delivered to the left ear), a pattern also reported in normal hearing adults (Khalfa & Collet, 1996).

An improvement in speech intelligibility in noise with the introduction of contralateral noise was documented for the right and left ears. This is in agreement with previous investigations and supports efficient function of the MOC efferent system (Giraud et al., 1997; Kumar & Vanaja, 2004).

A reduction of amplitude in the N1/P2 response was observed when contralateral noise was presented to the right and left ears. This change was noted in both ears and represents normal function of the efferent system (Salo et al., 2003). The lack of an increase in P3 latency for the right ear with the introduction of contralateral noise may be related to the recording method (CLH was given no instructions to attend to the rare or deviant stimuli). Previous investigators (Cranford & Martin, 1991; Krumm & Cranford, 1994) have shown no statistically significant differences between the right and left N1/P2 amplitude reductions when contralateral noise was introduced. Also, Hurley et al. (2011)

reported no significant differences in the amount of reduction in the N1/P2 amplitude between a group of children with (C)APD and a control group when noise was introduced contralaterally. Researchers have also reported an increase in P300 latencies with the introduction of contralateral noise (Krishnamurti, 2001; Polich, Howard, & Starr, 1985; Salisbury et al., 2002). The addition of contralateral noise did not significantly affect the P300 amplitude. Hurley et al. (2011) reported a significant increase in P3 latency when contralateral noise was introduced for the control group, but no significant change in P3 latency with the addition of noise for the experimental group (i.e., children diagnosed with (C)APD). The different effect of noise on the P3 may be attributed to obtaining the recording passively, with no instructions given. It is also important to note the variability of these responses. Additional research is needed.

It is important to consider that this patient was referred to this clinic for evaluation and recommendations because of her difficulty hearing in background noise. Classroom frequency modulation (FM) system use and dichotic listening training, such as Dichotic Interaural Intensity Difference training (Musiek, Chermak, & Weihing, 2007), were recommended. CLH is currently performing above average in a private school. She is socially adjusted with many friends and social activities.
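For readers who wish to trace the arithmetic behind the efferent measures discussed above, the following sketch is illustrative only (it is not part of the original study's analysis software; the function names are our own). It reproduces the two clinical computations from the reported values: overall TEOAE suppression as the quiet-minus-noise amplitude difference, and the speech-in-noise improvement with contralateral noise, judged against the greater-than-10% clinical criterion (Kumar & Vanaja, 2004).

```python
# Illustrative sketch (not from the original study): recomputes the
# efferent-system measures reported in this case from the published values.

def teoae_suppression(quiet_db, noise_db):
    """Overall contralateral suppression = TEOAE amplitude in quiet
    minus TEOAE amplitude with contralateral noise (dB)."""
    return round(quiet_db - noise_db, 2)

def sin_improvement(ipsi_only_pct, ipsi_plus_contra_pct, criterion=10):
    """Speech-in-noise improvement when contralateral white noise is added.
    An improvement greater than `criterion` percent is considered within
    normal clinical limits."""
    gain = ipsi_plus_contra_pct - ipsi_only_pct
    return gain, gain > criterion

# Values reported for CLH (Table 7 and the speech-intelligibility measures):
print(teoae_suppression(21.93, 21.40))   # right ear: 0.53 dB
print(sin_improvement(28, 44))           # right ear: (16, True)
print(sin_improvement(72, 84))           # left ear: (12, True)
```

Both ears exceed the 10% improvement criterion, consistent with the normal efferent function reported above.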

Summary

In summary, this clinical case report furthers our understanding of the effects of a documented CANS lesion on behavioral and electrophysiological tests of auditory processing. The left-sided lesion was supported behaviorally by the dichotic speech tests and electrophysiologically by the electrode effect over the left temporal lobe. This case report is also the first to report normal efferent function in a post-hemispherectomy patient.
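The electrode-effect criterion that supports this conclusion (an AMLR Na-Pa amplitude at one electrode site less than 50% of the amplitudes at the other sites; Chermak & Musiek, 1997) can be illustrated with the Table 5 values. The sketch below is not from the original report; the function and the use of the mean of the remaining sites as the reference are illustrative assumptions.

```python
# Illustrative check of the AMLR "electrode effect" criterion: an Na-Pa
# amplitude at one electrode site below 50% of the amplitude at the other
# sites is considered diagnostically significant.

def electrode_effect(amplitudes_uv, site, criterion=0.5):
    """Return True if `site`'s amplitude falls below `criterion` (50%)
    of the mean amplitude at the remaining electrode sites."""
    others = [v for k, v in amplitudes_uv.items() if k != site]
    reference = sum(others) / len(others)
    return amplitudes_uv[site] < criterion * reference

# Na-Pa amplitudes (µV) for left-ear stimulation, from Table 5:
na_pa_left = {"Cz": 0.49, "C4": 0.44, "C3": 0.08}
print(electrode_effect(na_pa_left, "C3"))  # True: electrode effect at C3
print(electrode_effect(na_pa_left, "Cz"))  # False
```

With these values, only C3 (over the left temporal lobe) meets the criterion, matching the electrode effect reported for this patient.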


Acknowledgements

The authors would like to thank CLH and her mother for allowing us to learn more about the central auditory pathway from her participation. The authors would also like to thank the reviewers who offered valuable comments for this paper.

References

Bamiou, D., Musiek, F., Stow, I., Stevens, J., Cipolotti, L., Brown, M., & Luxon, L.M. (2006). Auditory temporal processing. Neurology, 67, 614-619.
Bellis, T. (2003). Assessment and management of central auditory processing disorders in the educational setting: From science to practice (2nd ed.). Clifton Park, NY: Thomson Learning, Inc.
Berlin, C.I., Lowe-Bell, S.S., Cullen, J.K., Jr., & Thompson, C.L. (1973). Dichotic speech perception: An interpretation of right-ear advantage and temporal offset effects. Journal of the Acoustical Society of America, 53, 699-709.
Berlin, C., Lowe-Bell, S., Janetta, P., & Kline, D. (1972). Central auditory deficits after temporal lobectomy. Archives of Otolaryngology, 96, 4-10.
Berlin, C., Hood, L., Hurley, A., & Wen, H. (1994). Contralateral suppression of otoacoustic emissions: An index of the function of the medial olivocochlear system. Otolaryngology-Head and Neck Surgery, 110, 3-121.
Boatman, D., Vining, E., Freeman, J., & Carson, B. (2003). Auditory processing studied prospectively in two hemidecorticectomy patients. Journal of Child Neurology, 18, 228-232.
Bocca, E., Calearo, C., & Cassinari, V. (1954). A new method for testing hearing in. Acta Otolaryngologica, 44, 219-221.
Bocca, E., Calearo, C., Cassinari, V., & Migliavacca, F. (1955). Testing cortical hearing. Acta Otolaryngologica, 42, 289-304.
Brandt, J., Vining, E., Stark, R., Ansel, B., & Freeman, J. (1990). Hemispherectomy for intractable epilepsy in childhood: Preliminary report on neuropsychological and psychosocial sequelae. Journal of Epilepsy, 3, 261-270.
Brizzolara, D., Pecini, C., Brovedani, P., Ferretti, G., Cipriani, P., & Cioni, G. (2002). Timing and type of congenital brain lesion determine different patterns of language lateralization in hemiplegic children. Neuropsychologia, 40(6), 620-632.
Bryden, M.P., & Zurif, E.B. (1970). Dichotic listening performance in a case of agenesis of the corpus callosum. Neuropsychologia, 8, 371-377.
Chermak, G.D., & Musiek, F.E. (1997). Central auditory processing disorders: New perspectives. San Diego: Singular.
Chueden, H. (1972). The masking noise and its effect upon the human cortical evoked potential. International Journal of Audiology, 11, 90-96.
Clayworth, C.C., & Woods, D.L. (1987). Subcortical contributions to the auditory N1: A comparison of distributions of the N1 and wave V of BAEP. In R. Johnson, J.W. Rohbraugh, & R. Parasurama (Eds.), Current trends in event related potentials (pp. 445-451). Amsterdam: Elsevier.
Collet, L., Kemp, D., Veuillet, E., Duclaux, R., Moulin, A., & Morgon, A. (1990). Effect of contralateral auditory stimuli on active cochlear micromechanical properties in human subjects. Hearing Research, 43, 251-262.
Cranford, J., & Martin, D. (1991). Age-related changes in binaural processing II: Evoked potential findings. The American Journal of Otology, 12, 357-364.
Cullen, J.C., & Thompson, C. (1974). Masking release for speech in subjects with temporal. Archives of Otolaryngology, 100, 113-116.
Damasio, A.R., Lima, P.A., & Damasio, H. (1975). Nervous function after right hemispherectomy. Neurology, 25, 89-93.
Dandy, W. (1928). Removal of right cerebral hemisphere for certain tumors with hemiplegia. Journal of the American Medical Association, 90, 823-825.
Davis, D.B., & Hurley, A. (2002). A new format for the Random Gap Detection Test™. Poster presented at the AAA Convention, Philadelphia, PA.
de Boer, J., & Thornton, A.R. (2008). Neural correlates of perceptual learning in the auditory brainstem: Efferent activity predicts and reflects improvement at a speech-in-noise discrimination task. Journal of Neuroscience, 28, 4929-4937.
Dennis, M., & Hopyan, T. (2001). Rhythm and melody in children and adolescents after left or right temporal lobectomy. Brain and Cognition, 47, 461-469.
Devlin, A., Cross, J., Harkness, W., Chong, W., Harding, B., Vargha-Khadem, F., et al. (2003). Clinical outcomes of hemispherectomy for epilepsy. Brain, 126, 556-566.
Fernandes, M., & Smith, M. (2000). Comparing the fused dichotic words. Neuropsychologia, 38, 1216-1228.
Giraud, A.L., Garnier, S., Micheyl, C., Lina, G., Chays, A., & Chery-Croze, S. (1997). Auditory efferents involved in speech-in-noise intelligibility. Neuroreport, 8, 1779-1783.
Hall, J. (2007). New handbook for auditory evoked responses. Boston, MA: Allyn & Bacon.


Hirsh, I. (1948). The influence of interaural phase on interaural summation and inhibition. Journal of the Acoustical Society of America, 20, 536-544.
Hood, L., Berlin, C., Hurley, A., Cecola, R., & Bell, B. (1996). Contralateral suppression of transient-evoked otoacoustic emissions in humans: Intensity effects. Hearing Research, 101, 113-118.
Hurley, A., Bhatt, S., Davis, D.B., & Collins, A. (2011). Effect of noise on the LAEP in children with (C)APD. Research presentation at the Annual ASHA Convention, San Diego, CA.
Isaacs, E., Christie, D., Vargha-Khadem, F., & Mishkin, M. (1996). Effects of hemispheric side of injury, age at injury, and presence of seizure disorder on functional ear and hand asymmetries in hemiplegic children. Neuropsychologia, 34, 127-137.
Keith, R.W. (2010). SCAN-3:C Tests for Auditory Processing Disorders for Children. San Antonio, TX: Psychological Corporation.
Khalfa, S., & Collet, L. (1996). Functional asymmetry of medial olivocochlear system in humans: Towards a peripheral auditory lateralization. Neuroreport, 7, 993-996.
Kimura, D. (1961). Cerebral dominance and the perception of verbal stimuli. Canadian Journal of Psychology, 15, 166-171.
Korkman, M., & von Wendt, L. (1995). Evidence of altered dominance in children with congenital spastic hemiplegia. Journal of the International Neuropsychological Society, 1, 261-270.
Kraus, N., Ozdamar, O., Hier, D., & Stein, L. (1982). Auditory middle latency responses (MLRs) in patients with cortical lesions. Electroencephalography and Clinical Neurophysiology, 54, 275-287.
Krishnamurti, S. (2001). P300 auditory event-related potentials in binaural and competing noise conditions in adults with central auditory processing disorders. Contemporary Issues in Communication Sciences and Disorders, 28, 40-47.
Krumm, M., & Cranford, J. (1994). Effects of contralateral speech competition. Journal of the American Academy of Audiology, 5, 127-132.
Krynauw, R. (1950). Infantile hemiplegia treated by removing one cerebral hemisphere. Developmental Medicine and Child Neurology, 28, 251-258.
Kumar, U.A., & Vanaja, C.S. (2004). Functioning of olivocochlear bundle and speech perception in noise. Ear and Hearing, 25, 142-146.
Kutas, M., Hillyard, S., & Volpe, B. (1990). Late positive event-related potentials after commissural section in humans. Journal of Cognitive Neuroscience, 2, 258-271.
Lindsay, J., Counsted, C., & Richards, P. (1988). Hemispherectomy for childhood epilepsy: A 36 year study. Developmental Medicine and Child Neurology, 24, 27-34.
Lowe, S., Cullen, J.C., Jr., Berlin, C., Thompson, C., & Willett, M. (1970). Perception of simultaneous dichotic and monotic monosyllables. Journal of Speech and Hearing Research, 13, 812-822.
Lowe-Bell, S., Berlin, C., Cullen, J.C., & Thompson, C. (1973). Dichotic speech perception: An interpretation of right-ear advantage and temporal offset effects. Journal of the Acoustical Society of America, 53, 699-709.
Lynn, G., & Gilroy, J. (1977). Evaluation of central auditory dysfunction in patients. In R.W. Keith (Ed.), Central auditory dysfunction (pp. 177-221). New York: Grune & Stratton.
Mariotti, P., Iuvone, L., Torrioli, M., & Silveri, M. (1998). Linguistic and non-linguistic abilities in a patient with early hemispherectomy. Neuropsychologia, 36, 1303-1312.
McFie, J. (1961). The effects of hemispherectomy on intellectual functioning in cases of infantile hemiplegia. Journal of Neurology, Neurosurgery and Psychiatry, 24, 240-249.
Muchnik, C., Roth, D., Jebara, R., Katz, H., Shabtai, E., & Hildesheimer, M. (2004). Reduced medial olivocochlear bundle system function in children with auditory processing disorders. Audiology and Neurotology, 9, 107-114.
Musiek, F. (1983). Assessment of central auditory dysfunction: The Dichotic Digit Test revisited. Ear and Hearing, 4, 79-83.
Musiek, F.E., Charlette, L., Kelly, T., Lee, W., & Musiek, E. (1999). Hit and false-positive rates for the middle latency response in patients with central nervous system involvement. Journal of the American Academy of Audiology, 10, 124-132.
Musiek, F., Chermak, G., & Weihing, J. (2007). Auditory training. In F.E. Musiek & G.D. Chermak (Eds.), Handbook of (central) auditory processing disorder: Comprehensive intervention, Vol. I (pp. 77-106). San Diego: Plural Publishing.
Musiek, F., & Pinheiro, M. (1987). Frequency patterns in cochlear, brainstem, and cerebral lesions. Audiology, 26, 79-88.
Musiek, F., Baran, J., & Pinheiro, M. (1990). Duration pattern recognition in normal subjects and in patients with cerebral. Audiology, 29, 304-313.
Musiek, F., Shinn, J., Jirsa, R., Bamiou, D., Baran, J., & Zaiden, E. (2005). The GIN (Gaps-in-Noise) Test performance in subjects with confirmed central auditory nervous system involvement. Ear and Hearing, 26, 608-618.


Nass, R., Sadler, A., & Sidtis, J. (1992). Differential effects of congenital versus acquired unilateral brain injury on dichotic listening performance: Evidence for sparing and asymmetric crowding. Neurology, 42, 1960-1965.
Netley, C. (1972). Dichotic listening performance of hemispherectomized patients. Neuropsychologia, 10, 233-240.
Olsen, W., Noffsinger, D., & Kurdziel, S. (1975). Speech discrimination in quiet and in white noise by patients with peripheral and central lesions. Acta Otolaryngologica, 80, 375-382.
Olsen, W., Noffsinger, D., & Carhart, R. (1976). Masking level differences encountered. Audiology, 15, 287-301.
Paiement, P., Champoux, F., Bacon, B., Lassonde, M., Gagne, J., Mensour, B., et al. (2008). Functional reorganization of the human auditory pathways following hemispherectomy: An fMRI demonstration. Neuropsychologia, 46, 2936-2942.
Parving, A., Solomon, G., Elberling, C., Larsen, B., & Lassen, N.A. (1980). Middle components of the auditory evoked response in bilateral temporal lobe lesions. Scandinavian Audiology, 9, 161-167.
Pinheiro, M.L., & Musiek, F.E. (1985). Sequencing and temporal ordering in the auditory system. In M.L. Pinheiro & F.E. Musiek (Eds.), Assessment of central auditory dysfunction: Foundations and clinical correlates (pp. 219-238). Baltimore: Williams & Wilkins.
Polich, J., Howard, L., & Starr, A. (1985). Stimulus frequency and masking determinants of P300 latency in event-related potentials from auditory stimuli. Biological Psychology, 21, 309-318.
Pulsifer, M., Brandt, J., Salorio, C., Vining, E., Carson, B., & Freeman, J. (2004). The cognitive outcome of hemispherectomy in 71 children. Epilepsia, 45, 243-254.
Rasmussen, G. (1946). The olivary peduncle and other fiber projections of the superior olivary complex. Journal of Comparative Neurology, 84, 141-219.
Rasmussen, T. (1973). Postoperative superficial hemosiderosis of the brain, its diagnosis, treatment and prevention. Transactions of the American Neurological Association, 98, 133-177.
Sahley, T., Nodar, R., & Musiek, F. (1997). Efferent auditory system: Structure and function. San Diego: Singular.
Saletu, B., Itil, T.M., & Saletu, M. (1971). Evoked responses after hemispherectomy. Confinia Neurologica, 22, 221-230.
Salisbury, D., Desantis, M., Shenton, M., & McCarley, R. (2002). The effect of background noise on P300 to suprathreshold stimuli. Psychophysiology, 39, 111-115.
Salo, S.K., Lang, A.H., Salmivalli, A.J., Johansson, R.K., & Peltola, M.S. (2003). Contralateral white noise masking affects auditory N1 and P2 waves differently. Journal of Psychophysiology, 17, 189-194.
Scherg, M., & von Cramon, D. (1986). Evoked dipole source potentials of the human auditory cortex. Electroencephalography and Clinical Neurophysiology, 65, 344-360.
Squires, K., & Hecox, K. (1983). Electrophysiologic evaluation of higher level auditory processing. Seminars in Hearing, 4, 415-433.
Stark, R., Bleile, K., Brandt, J., Freeman, J., & Vining, E. (1995). Speech-language outcomes of hemispherectomy in children and young adults. Brain and Language, 51, 406-421.
Tinuper, P., Andermann, F., Villemure, J., Rasmussen, T., & Quesney, L. (1988). Functional hemispherectomy for treatment of epilepsy associated with hemiplegia: Rationale, indications, results, and comparison with callosotomy. Annals of Neurology, 24, 27-34.
Tong, X., Xu, Y., & Fu, Z. (2009). Long-term P300 in hemispherectomized patients. Chinese Medical Journal, 122, 1769-1774.
Vargha-Khadem, F., & Polkey, C. (1992). A review of cognitive outcome after hemidecortication in humans. In F.D. Rose & M.H. Johnson (Eds.), Recovery from brain damage: Reflections and directions (pp. 137-148). London, UK: Plenum Press.
Verity, C., Strauss, E., Moyes, P., Wada, J., Dunn, H., & Lapointe, J. (1982). Long-term follow-up after cerebral hemispherectomy: Neurophysiologic, radiologic and psychologic findings. Neurology, 32, 629-639.
Vining, E., Freeman, J., Pillas, D., Uematsu, S., Carson, B., Brandt, J., et al. (1997). Why would you remove half a brain? The outcome of 58 children after hemispherectomy: The Johns Hopkins experience. Pediatrics, 100, 163-171.
Warr, W., & Guinan, J. (1979). Efferent innervation of the organ of Corti: Two separate systems. Brain Research, 179, 152-155.
Wood, C., & Wolpaw, J. (1982). Scalp distribution of human auditory evoked potentials. II. Evidence of overlapping sources and involvement of auditory cortex. Electroencephalography and Clinical Neurophysiology, 54, 25-38.
Woods, B. (1984). Dichotic listening ear preference after childhood cerebral. Neuropsychologia, 3, 303-310.
Wyllie, E., Comair, Y., Kotagal, P., Bulacio, J., Bingaman, W., & Ruggieri, P. (1998). Seizure outcome after epilepsy surgery in children and adolescents. Annals of Neurology, 44, 740-748.

92 Auditory Afferent and Efferent Assessment Post Functional Hemispherectomy: A Case Study

Zaidei, E. (1983). Advances and retreats in laterality research. Behavioral and Brain Sciences, 6, 523-533.

Journal of Educational Audiology vol. 17, 2011

The Importance of Appropriate Adjustments to Classroom Amplification: A One School, One Classroom Case Study

James C. Blair, Ph.D. Jeffery B. Larsen, Ph.D. Utah State University

The infrared classroom amplification systems in one elementary school building were analyzed to determine the consistency of amplification in each room. Most classroom amplification systems are installed when the building is empty and classes are not in session, and they are set at a level that seems appropriate to the installer. The purpose of this research was to determine the actual signal-to-noise ratios present across grades in an elementary building while classes are in session. The results revealed tremendous variability across classrooms: signal-to-noise ratios differed by as much as 18 decibels, ranging from +5 to +23 dB. The researchers also discovered that teachers are amenable to increasing sound levels and adjusting speakers to be more appropriate to students' needs. Recommendations are made for additional research and appropriate classroom fitting procedures.

Introduction

The effects of noise on understanding speech in classrooms have been studied for a number of years (Berg, Blair, & Benson, 1996; Finitzo-Hieber & Tillman, 1978). These research studies ultimately led to the development of an acoustic standard for unoccupied classrooms, American National Standards Institute Standard S12.60 (ANSI, 2002). While this was an important step forward in stipulating the acoustic characteristics of a classroom, classrooms are not unoccupied; classrooms are filled with students and teachers. Once people are added to an acoustic environment, many changes occur. The absorption properties of bodies, and the noise produced by students, alter the noise levels and reverberation within the room. In addition, the effects on classroom acoustics are variable because children are active. However, as students mature, their activity level generally lessens, and the amount of noise they generate is generally less intense. Another variable that modifies the levels of noise is the amount of control the teacher has over the students in a class.

One way to compensate for the acoustic variability present in every classroom is the addition of classroom amplification, such as infrared soundfield systems. The addition of amplification has several positive impacts on students' learning in schools. One of the greatest benefits is an increased signal-to-noise ratio (SNR), so that all students have the opportunity to hear what the teacher is saying. Typically, infrared or frequency modulation (FM) classroom amplification systems are reported to improve the SNR by approximately 8 to 10 dB (Crandell, Smaldino, & Flexer, 1997). There are also reports that, when amplification systems are used, vocal fatigue of teachers is reduced (Roy, Gray, Simon, Dove, & Corbin-Lewis, 2001; Rosenberg & Blake-Rahter, 1995), and teacher absenteeism due to voice problems is also reduced (Rosenberg & Blake-Rahter, 1995). Previous research suggests that students like the amplification because they can hear everything that is said, and they feel what they say is important when they are able to use the microphone (Rosenberg & Blake-Rahter, 1995). There is also some research that reports improved academic scores when classroom amplification systems are used compared to when they are not used (Sarff, 1981; Gertel, McCarty, & Schoff, 2004).

When infrared classroom amplification was in use and the rooms met ANSI standards (ANSI, 2002), research by Larsen and Blair (2008) measured an average SNR of +13 dB for students at various locations in these classrooms. When students were answering questions, reading, or engaged in discussions, the use of a handheld microphone by the students provided a significant improvement in their ability to be heard. While the benefits of classroom amplification are clear, there are still many unanswered questions regarding the use of these systems.

Without classroom amplification, the ability of a child to hear depends on where (s)he is seated, the intensity of the talker's voice, and the amount of background noise that is present (Larsen & Blair, 2008). The variable nature of classroom environments and the current lack of consistency in classroom design have led to questions about how best to overcome these problems. There are some primary issues that need to be considered in classroom amplification.

First, in 2006, the Acoustical Society of America (as cited by Lubman & Sutherland, 2008) took the position that classroom amplification systems may do more harm than good if they are not installed properly or if the room is too reverberant. We hypothesize that classroom amplification is appropriate in every classroom and in every setting where children are being educated. Reverberation may make amplification less than ideal, but even in a poor acoustic environment, children are likely to benefit from a direct signal and an increased SNR. However, this hypothesis requires further investigation.


Second, in a position statement, the American Speech-Language-Hearing Association (2005) recommends a +15 dB SNR at the child's ear; however, Larsen and Blair (2008) found that the average SNR was +13 dB. While the ASHA guideline is ideal, we suggest that every classroom should have at least a +12 dB SNR at every place that a student will be expected to hear. In our anecdotal observations of a variety of classrooms in several schools, we noticed what appeared to be an inconsistency in the level of SNR and an inconsistency in the number of teachers who were actually using the systems on a consistent basis at an optimal level. In order to determine if our observations were accurate, we sought permission to collect data in one school in our local school district.

We sought to obtain answers to the following research questions:

1. Do amplification systems in every classroom and grade level provide the same average levels of amplification?
2. Is the average SNR in each classroom at or above +12 dB?
3. Will placing loudspeakers within 8 feet above a student's head in a reverberant classroom improve the average SNR at the student's ear?

Method

School

A building in a rural northeastern Utah town was used as the data collection site. This school was selected because it is the oldest school in the district and has been the site of numerous expansions and renovations over its 150-year history. The measurements were taken in the morning because this was the time when the most active teaching and learning was occurring in the building. The school was fit with classroom amplification two years before this research. The company from whom the equipment was purchased also installed all of the systems. Based on information obtained from the companies who sell classroom amplification equipment, the majority of schools use the company from whom the equipment is purchased to install their systems, and most systems are installed in empty classrooms. The installers set the equipment at a level that "sounds appropriate" to them. If this is the method used most frequently, variability in amplification levels is expected among classrooms. In order to determine if each classroom was amplified at an appropriate level, measurements were conducted in each classroom in the school building, starting with first grade and ending with fifth grade.

Equipment

A Larson-Davis sound level meter (Type 1, Model 800B) was used for the measurements taken in each classroom. The measurements were taken at a central location about halfway between the closest and most distant student in each classroom. We also used a half-inch microphone and preamplifier (Sennheiser) connected to a laptop computer with SIA SMaart software to measure the impulse response of one class in some detail. The impulse response is measured by playing a pseudo-random noise signal that contains a broad frequency spectrum to excite the acoustic properties of the room. Once the room is excited with the pseudo-random noise, the response of the room is recorded. Then, offline, the pseudo-random noise is subtracted from the recording so that only the acoustic response of the room, or impulse response, remains. From the impulse response of the room, several important acoustic factors can be obtained, including the reverberation time (RT) of the room.

Each classroom was fit with a classroom amplification system (Audio Enhancement Infrared Wireless Model CAE-50W) that had four loudspeakers placed in the ceiling tiles of the room. In some classrooms this was not possible because the room had no hanging ceiling and, therefore, no space to place the loudspeakers. In these classrooms, the loudspeakers were placed high on the classroom walls at a height of approximately 9 feet and angled down toward the center of the classroom.

Procedure

Measurements were taken in each of the 14 classrooms approximately halfway between the closest and most distant students from the teacher in the center of the room. The measurements were taken at the approximate ear level of an average student in the class while the teacher was presenting information to the class. Teachers were asked to use the amplification system and then to talk while the system was turned off. We took measurements every 15 seconds for a period of 10 minutes (i.e., 5 minutes with the system on and 5 minutes with the system off) in each classroom and then averaged the sound pressure level in dBA for both amplified and unamplified conditions. In each classroom, the predominant source of sound during our measurements was the teacher. Measurements were obtained with the sound level meter set to the slow integration setting and on the A-weighting scale.

After completing the measurement of all the classrooms, we returned to a first grade classroom for additional testing. This particular room was of interest because of its construction: the ceilings were at a height of 8 feet for half the room and up to 15 feet for the other half. The teacher taught in the section of the room with the lower ceiling. During the observation, nine students gave a brief report to the class. We measured the level of the students' voices compared to the background noise. The speakers were then lowered to a level that was 7 feet 4 inches off the floor.
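In signal-processing terms, the impulse-response measurement described under Equipment is a deconvolution: the recording is approximately the pseudo-random stimulus convolved with the room's response, so the stimulus can be removed offline, and the reverberation time can then be read from the decay of the recovered impulse response. The sketch below is purely illustrative (Python with NumPy, fully synthetic signals); it is not the SIA SMaart implementation, and every numeric value except the 0.70 s reverberation time reported for the low-ceiling section is an assumption.

```python
import numpy as np

def impulse_response(stimulus, recording, eps=1e-12):
    """Recover a room impulse response by FFT deconvolution: the recording
    is (approximately) the stimulus convolved with the room's response, so
    dividing the spectra removes the stimulus and leaves only the room."""
    n = len(recording)
    H = np.fft.rfft(recording, n) / (np.fft.rfft(stimulus, n) + eps)
    return np.fft.irfft(H, n)

def rt60_schroeder(ir, fs):
    """Estimate RT60 from Schroeder's backward-integrated energy decay
    curve, fitting a line over the -5 to -25 dB portion of the decay."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]
    edc = 10 * np.log10(energy / energy[0])
    t = np.arange(len(ir)) / fs
    mask = (edc <= -5) & (edc >= -25)
    slope, _ = np.polyfit(t[mask], edc[mask], 1)  # decay rate in dB/s
    return -60.0 / slope

# Synthetic check: a "room" modeled as exponentially decaying noise whose
# decay corresponds to RT60 = 0.70 s (the value measured in the low-ceiling
# half of the first-grade room).
fs = 8000
rng = np.random.default_rng(0)
t = np.arange(int(0.8 * fs)) / fs
true_ir = rng.standard_normal(t.size) * 10 ** (-60 * t / 0.70 / 20)
stimulus = rng.standard_normal(2 * fs)        # pseudo-random noise burst
recording = np.convolve(stimulus, true_ir)    # what the microphone captures
ir = impulse_response(stimulus, recording)[:t.size]
print(round(rt60_schroeder(ir, fs), 2))       # close to 0.70
```

The same decay-curve fit applied to a real measured impulse response is what yields the reverberation times reported later in the article.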


Measurements of loudspeaker output and reverberation times were repeated. Lowering the speakers could produce feedback, but the speakers would be in closer proximity to the students and place them within the direct field of the loudspeaker. This means that the students would be receiving the direct signal from the loudspeaker at a more intense level than the level of reverberation in the room. The point at which the intensity of the direct signal is equal to the intensity of the reverberation in the room is called the critical distance. This critical distance differs for each loudspeaker and each classroom, but can usually be expected to be 6 to 8 feet from the loudspeaker. This distance can be calculated when the size of a room and the reverberation time within that room are known. The dimensions of this particular classroom were measured, and the reverberation time was also measured, to allow the critical distance to be estimated. When using classroom amplification, placement of a child within the critical distance is ideal because the direct speech will be clearer and more easily comprehended than speech within the reverberant field beyond the critical distance (Crandell & Smaldino, 2000; Peutz, 1971). For this particular room, moving the loudspeaker to 7 feet 4 inches above the children's heads allowed us to make two measurements with children within and outside of the critical distance of the loudspeaker.

Results

Table 1 illustrates the amplified and unamplified results we obtained for each classroom. As can be seen, the average SNR over all the classrooms was almost 13 dBA, with a range from +5 to +23 dBA. Some of the teachers reported that they were not sure if the microphone helped very much, while others reported that they had a "loud teacher voice" and that the children probably heard as well without the microphone as they did with it. It is possible that teachers who only demonstrated a 5 dB difference between the amplified and the unamplified conditions may not have noticed that much of an improvement in students being able to hear them. In contrast, teachers who obtained a 12 dB or better improvement in amplification all noticed a significant improvement between the unamplified and the amplified condition.

Table 1. The amplification data from 14 classrooms during instruction.

Unamplified Level   Amplified Level   Difference   Grade Level
60 dBA              70 dBA            10 dBA       1st
60                  68                 8           1st
54                  59                 5           1st
54                  68                14           2nd
49                  64                15           2nd
50                  68                18           2nd
49                  72                23           3rd
60                  72                12           3rd
55                  68                13           3rd
42                  64                22           4th
62                  68                 6           4th
57                  66                 9           4th
65                  71                 6           5th
42                  62                19           5th
Average Difference: 13 dB; Range: 5-23 dB

First Grade Classroom with Unusual Design

After collecting data for all the classes, one unusual first-grade classroom, as explained above, was examined in more detail. The measured reverberation time in this room ranged from .70 seconds in the section of the room that had 8-foot ceilings to 1.4 seconds in the section in which the ceiling was 15 feet high. The first measurements taken in this room found that the teacher's voice was on average 8 dB more intense than the background noise. We asked the teacher if we could increase the classroom amplification so that it was 12 dB more intense than the background noise, to which he agreed. One month later, we returned to this room to conduct additional measurements. The teacher was now using an average level that was 15 dB louder than the average background noise. He reported that the students were more attentive and that his ability to communicate had improved. He was also using a pass-around microphone for the students during sharing time, where the average SNR for the children was approximately +10 dB. The students seemed to enjoy using the microphone during their presentations, and the teacher reported that more children were willing to participate when using the microphone. Table 2 illustrates the vocal intensity of the nine students during sharing time with the microphone compared with background noise during their presentations.

Table 2. Average intensity levels and signal-to-noise ratios during 2.5-minute student presentations with a handheld microphone.

Nine Students   Background Noise   Signal-to-Noise Ratio
61 dBA          55 dBA             +6
64              52                 +12
66              55                 +11
65              56                 +9
63              57                 +6
59              52                 +7
68              56                 +12
69              55                 +14
66              57                 +9
Average: 64.5   55                 Difference: 9.5

As was described earlier, one of the problems with this classroom was the unusual configuration of the ceiling. To examine if the acoustics could be improved, two of the speakers were lowered to 7 feet 4 inches (instead of 15 feet) and placed directly overhead. Lowering the speakers effectively placed all of the students in the direct sound field rather than in the distant sound field. The additional measurements revealed that, by lowering the speakers, another 2 dB in SNR advantage was gained. The reverberation time also changed from .68 to .64 seconds. Finally, this improvement was achieved without significant feedback from the amplification system, which can occur when the loudspeaker is placed in too close proximity to the microphone of the system.

Discussion

The advantages of classroom amplification are well documented; however, as this research has shown, there are some adjustments that could be made to improve the benefits of these systems. One important issue would be to designate an individual in the school system who would actually go into each classroom on an annual basis and adjust the system so that the teacher's voice level was consistently at least 12 dB louder than the noise floor in the room during instructional periods. In this way, the individual differences in teachers' voices and classroom noise could be accounted for, and amplification levels could be made consistent across classrooms. Another issue that could be addressed is the positioning of the speakers so that all students are in the direct sound field. It would not be difficult to design speakers that are attractive and hang no more than 8 feet over the floor. There has been a tendency among manufacturers to go to a one- or two-speaker system placed at the front of the room. While this arrangement is certainly convenient, not all students will be in the direct sound field. A potential solution would be to hang speakers from the ceiling to make sure that all children are in the direct sound field. Another advantage of placing the speakers in the ceiling (8 feet over the floor) is that the students' bodies will diffuse the sound, and both the carpet and the bodies will absorb some of the sound. The effects achieved in one classroom by lowering the loudspeaker to be closer to the students demonstrated both a gain in intensity and a very modest reduction in reverberation. It is hypothesized that bringing children into the direct field of the loudspeaker would result in increased speech recognition abilities and less stressful listening conditions for the students. We also believe this would benefit those with hearing loss, those whose first language is not English, and those with other learning problems. Of course, additional research needs to be done to demonstrate the advantages and feasibility of lowering the loudspeakers to bring children within the critical distance.

Limitations of this Research

The ability to generalize from these conclusions is limited because we only examined one building in one school district. We also analyzed only one classroom in detail and cannot suggest that our findings are similar to what others would find. However, our experience and observation suggest that the findings of this research warrant further investigation. We hope that such investigation will lead to better use of classroom amplification to improve the learning environment for all children in all schools.


References

American National Standards Institute. (2002, June). Acoustical performance criteria, design requirements, and guidelines for schools. New York: Acoustical Society of America.

American Speech-Language-Hearing Association. (2005). Acoustics in educational settings: Position statement. Available from http://www.asha.org/docs/html/PS2005-00028.html

Berg, F., Blair, J., & Benson, P. (1996). Classroom acoustics: The problem, impact, and solution. Language, Speech, and Hearing Services in Schools, 27, 16-20.

Crandell, C. C., & Smaldino, J. J. (2000). Classroom acoustics for children with normal hearing and with hearing impairment. Language, Speech, and Hearing Services in Schools, 31, 362-370.

Crandell, C. C., Smaldino, J. J., & Flexer, C. (1997). A suggested protocol for implementing sound-field FM technology in the educational setting. Educational Audiology Monographs, 5, 13-21.

Finitzo-Hieber, T., & Tillman, T. (1978). Room acoustics effects on monosyllabic word discrimination ability by normal and hearing impaired children. Journal of Speech and Hearing Research, 21, 440-458.

Gertel, S., McCarty, P., & Schoff, P. (2004). High performance schools equals high performing students. Educational Facility Planner, 39(2), 5-10.

Larsen, J. B., & Blair, J. C. (2008). The effect of classroom amplification on the signal-to-noise ratio in classrooms while class is in session. Language, Speech, and Hearing Services in Schools, 39(4), 451-460.

Lubman, D., & Sutherland, L. (2008). Soundfield amplification is a poor substitute for good classroom acoustics. Journal of the Acoustical Society of America, 123(5), 3919.

Peutz, V. (1971). Articulation loss of consonants as a criterion for speech transmission in a room. Journal of the Audio Engineering Society, 19, 915-919.

Rosenberg, G., & Blake-Rahter, P. (1995). Sound-field amplification: A review of the literature. In C. Crandell, J. Smaldino, & C. Flexer, Sound-field FM amplification. San Diego, CA: Singular.

Roy, N., Gray, S. D., Simon, M., Dove, H., & Corbin-Lewis, K. (2001). An evaluation of the effects of two treatment approaches for teachers with voice disorders. Journal of Speech, Language, and Hearing Research, 44, 286-296.

Sarff, L. (1981). An innovative use of free field amplification in regular classrooms. In R. Roeser & M. Downs (Eds.), Auditory disorders in school children (pp. 263-272). New York: Thieme-Stratton.

Call for Papers: 2012 Journal of Educational Audiology

The Journal of Educational Audiology is now soliciting manuscripts for the 2012 issue (Volume 18). All submissions will undergo blind peer review. JEA publishes original manuscripts from a range of authors who work with children and their families in a broad variety of audiological settings. One of the primary purposes of the Journal is to provide a forum to share clinical expertise that is unique or innovative and of interest to other educational audiologists. Our traditional focus has been the auditory assessment, management, and treatment of children in educational settings. However, contributors are not limited to those who work in school settings. We invite authors from parent-infant and early intervention programs, as well as clinicians who work with children in related capacities (e.g., clinical pediatric audiologists, speech-language pathologists, auditory-verbal therapists). As the only audiology journal dedicated to a pediatric population, the intent is to reflect the broad spectrum of issues relevant to the education and development of children with auditory dysfunction (e.g., children with hearing loss, auditory neuropathy/dys-synchrony, or central auditory processing disorders).

Manuscripts may be submitted in one of the following categories:
• Article: a report of scholarly research or study.
• Tutorial: an in-depth article on a specific topic.
• Report: a description of practices in audiology, such as guidelines, standards of practice, service delivery models, survey findings, case studies, or data management.
• Application: a report of an innovative or unique practice, such as a screening program, hearing conservation program, therapy technique, or other activity that has been particularly effective.

There are specific manuscript requirements and guidelines for submission posted on the EAA website (www.edaud.org), or you can obtain these documents by contacting the Editor at Erin.[email protected] or 940-369-7433. The information in a manuscript may have been presented previously, but not published.

Submissions of manuscripts via e-mail to the Editor are required. Send electronic manuscripts to [email protected]. Microsoft Word-compatible documents and graphics are preferred. Questions or comments should be directed to the Editor or one of the Associate Editors: Cynthia Richburg ([email protected]), Andrew John ([email protected]), or Claudia Updike ([email protected]).

*NOTE: Submissions for the 2012 issue of JEA will be accepted until July 31, 2012. Manuscripts received after that date will be considered for the 2013 issue, unless the authors are notified otherwise.

3030 West 81st Avenue, Westminster, CO 80031
Phone: 800-460-7EAA (7322) | Fax: 303-458-0002
www.edaud.org | [email protected]

Guidelines for Authors Submitting Manuscripts: 2012 Journal of Educational Audiology
A Publication of the Educational Audiology Association

1. Format
All manuscripts must follow the style specified in the Publication Manual of the American Psychological Association (6th edition). Authors should pay special attention to APA style for tables, figures, and references. Any manuscript not following the 6th edition format will not be reviewed.

2. Cover Letter
A cover letter should accompany all submissions. The cover letter should contain a statement that the manuscript has not been published previously and is not currently submitted elsewhere. If IRB approval was needed by the sponsoring institution, a statement to that effect should also be included.

3. Author Information Page
The author information page should include the title of the article, complete authors' names, and authors' affiliations. This page should include a business address, phone number, and email address for the corresponding author.

4. Title Page
This page should contain only the title of the article. No other identifying information should be present.

5. Abstract
The second manuscript page (behind the title page) should contain an abstract not to exceed 250 words.

6. Text
The text of the manuscript should begin on page 3.

7. Tables, Figures, and Other Graphics
Tables, figures, and other graphics should be attached on separate pages and their placement within the manuscript noted (e.g., <Insert Table 1 about here>). These separate pages should appear after the text and before the acknowledgements.

8. Acknowledgements
Acknowledgements should appear on a separate page after the tables, figures, and graphs and before the references.

9. References
All references should follow APA manual guidelines, as noted above. References are to be listed alphabetically, then chronologically. Journal names should be spelled out and italicized, along with volume numbers. Authors should consult the APA style manual (6th ed.) for the specifics on citing references within the text, as well as in the reference list. All citations in the text need to be listed in the References.

10. Blind Review
All manuscripts will be sent out for blind review. If you have questions about this, please contact the Editor ([email protected]).

11. Submission of Manuscripts
Submissions of manuscripts via e-mail to the Editor, Erin Schafer ([email protected]), are required. Microsoft Word-compatible documents and graphics are preferred. Questions or comments should be directed to the Editor ([email protected] / 940-369-7433) or one of the Associate Editors: Cynthia Richburg ([email protected]), Andrew John ([email protected]), or Claudia Updike ([email protected]).
