Korean Journal of Otorhinolaryngology-Head and Neck Surgery, Epub ahead of print
A Study of Sensory Integration According to Auditory Characteristics Using Virtual Reality

Abstract

Background and Objectives

This study aimed to explore speech performance using virtual reality (VR), comparing hearing aid (HA) and non-HA users, and to investigate the relationship between VR and conventional testing methods.

Materials and Methods

Eighteen non-HA users and 13 HA users were enrolled in the study and completed pure-tone audiometry and speech testing in both quiet and noise. The conventional speech test was conducted in the auditory only (AO) condition. The multimodal testing conditions included AO, visual only (VO), and audiovisual (AV). Video recordings of a speaker saying test sentences at a café were displayed on a monitor (VO) and a VR headset (AV). Participants performed a sentence repetition task in AO, VO, and AV, and percent-correct scores were obtained. All participants completed a questionnaire on listening abilities in diverse situations.

Results

Speech performance was generally highest for the non-HA users, followed by the HA users in the aided and unaided conditions. A significant main effect of modality was observed, and post-hoc comparisons showed a significant difference between the AO and VO conditions. No significant correlations were found between questionnaire results and multimodal speech performance in either quiet or noise.

Conclusion

The addition of visual information is helpful for communication for those with HL regardless of HA use and the presence of noise, and multisensory integration may play a more critical role in certain situations depending on the characteristics of HL. There is potential for multimodal testing to be employed in clinical practice, but ample work is still needed to further validate the test method.

Introduction

It has been well documented in numerous studies that hearing loss (HL) has negative impacts on various aspects of life, including communication, job and academic performance, and social interaction and isolation [1-6]. Since individuals with HL have difficulty hearing, visual cues, such as facial movements and gestures, play a vital role in communication. Several studies have demonstrated the benefits of audiovisual (AV) integration for this population [7-13]. Grant, et al. [7] reported better consonant and sentence recognition in AV than in auditory only (AO) or visual only (VO) conditions. In the study by Puschmann, et al. [12], adults with HL showed significantly higher target word detection accuracy in AV compared to AO when listening to running speech in noise. This AV benefit was also observed in hearing aid (HA) users, with Moradi, et al. [11] demonstrating better consonant and vowel recognition in HA users in AV than in AO. Given the findings that individuals with HL rely on visual information in various conditions, including HA use and noisy environments, it is crucial to incorporate visual information into assessment and rehabilitation tools for accurately evaluating and enhancing their communication abilities. However, current assessment and rehabilitation tools targeting individuals with HL mostly utilize auditory stimuli, which poses limitations in reflecting real-life conversational settings and AV benefit [14,15]. Some tests include background noise (i.e., babble) to mimic real-life environments, but they do not incorporate any visual cues. Moreover, in most settings, these tests are conducted in a sound-treated booth with one or two loudspeakers, which may not be enough to fully reflect the real-world environment. From a rehabilitation perspective, such controlled test environments can lead to a gap between test results and perceived benefit. For instance, HA users still experience difficulty understanding speech in noisy places, such as restaurants and cafés [16-18], even when their devices are well fitted by healthcare professionals. If this problem continues, individuals might come to believe that they are unable to hear well even with HAs, which could lead to dissatisfaction with the devices. Eventually, they may return the devices or give up wearing them, resulting in low compliance with aural rehabilitation.
Virtual reality (VR), a key technology of the Fourth Industrial Revolution, offers immersive environments, meaning that individuals feel present and involved in the virtual space, through simultaneous presentation of auditory and visual information [19]. In the field of audiology, Stecker [20] noted that compared to the distracting environment of the current laboratory setting, which consists of a sound booth and a monitor, VR offers advantages such as being more entertaining and engaging, as well as ensuring consistency in the presentation of visual and auditory information and enabling naturalistic multi-dimensional tasks. However, when examining the various areas where VR has been applied, most studies focus on sound localization and audiology training, while research evaluating speech performance using VR mostly targets children [21-26]. Salanger, et al. [26] assessed the effectiveness of employing VR technology in pediatric hearing research by comparing speech perception performance in a conventional laboratory setting and a simulated VR classroom setting. Children completed speech perception and sound localization tasks in AO and AV. The results showed higher speech perception performance in AV, and their performance in the VR classroom setting more closely resembled their performance in the conventional laboratory setting in the AV condition than in the AO condition.
The purpose of this study was to evaluate speech performance in various real-world communication scenarios using VR. Considering the reliance on visual information and the challenges with speech understanding in noise even after wearing HAs, individuals with HL were divided into HA users and non-HA users. Additionally, to explore the potential of VR testing, the relationship between the VR testing method and the conventional testing method was investigated.

Materials and Methods

Participants

The inclusion criteria for participants in the study were as follows: 1) adults aged 19 and above, 2) individuals with normal otoscopy and a normal tympanogram (type A), 3) native Korean speakers, and 4) individuals with symmetric hearing, defined as interaural threshold differences of less than 10 dB on pure-tone audiometry. Individuals who were unable to communicate and understand TV at a distance of 1 m based on self-report and those with neurological and mental disorders were excluded from the study. All experimental procedures were approved by Samsung Medical Center’s Institutional Review Board (2021-02-095-001) and an informed consent document was obtained from the participants.

Puretone audiometry

Pure-tone audiometry was performed in a sound-treated booth using an AudioStar Pro (Grason-Stadler) audiometer and insert earphones. Thresholds were obtained for 250-, 500-, 1000-, 2000-, 4000-, and 8000 Hz for both ears.
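For reference, the four-frequency pure-tone average (PTA4) reported in the Results is simply the mean of the 500, 1000, 2000, and 4000 Hz thresholds for each ear. A minimal sketch of that arithmetic is shown below; the function name and the example thresholds are illustrative and not taken from the study data.

```python
# Minimal sketch (illustrative, not the authors' code): four-frequency
# pure-tone average (PTA4) for one ear, i.e., the mean threshold at
# 500, 1000, 2000, and 4000 Hz.

PTA4_FREQS = (500, 1000, 2000, 4000)  # Hz

def pta4(thresholds_db_hl: dict[int, float]) -> float:
    """Return the four-frequency pure-tone average in dB HL.

    `thresholds_db_hl` maps test frequency (Hz) to threshold (dB HL).
    """
    return sum(thresholds_db_hl[f] for f in PTA4_FREQS) / len(PTA4_FREQS)

if __name__ == "__main__":
    # Hypothetical thresholds for a single ear
    right_ear = {250: 20, 500: 30, 1000: 35, 2000: 40, 4000: 45, 8000: 50}
    print(f"PTA4 (right ear): {pta4(right_ear):.1f} dB HL")  # 37.5 dB HL
```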

Speech testing

Two types of speech testing were conducted in a sound-treated booth: conventional and multimodal. For the conventional speech testing, the Korean Standard Sentence Lists for Adults (KS-SL-A) was used. The sentences were presented through a loudspeaker located 1 m in front of the participants. Participants received a score of 1 point for correctly repeating a sentence and 0 points for an incorrect repetition, resulting in a percent-correct score. For the multimodal speech testing, the Korean version of the Hearing in Noise Test (K-HINT) was performed under three conditions: AO, VO, and AV. Both test materials were recorded by a female native Korean speaker who was instructed to maintain clear and natural speech, and the quality of her speech was judged by the researchers. Video recordings were made using the Samsung 360 Round VR camera (Samsung Electronics Co.) with the same speaker saying sentences from the K-HINT in a café, one of the places where people with HL have difficulty with speech understanding [23,27]. The video recordings were edited so that the participants appeared to engage in a one-on-one conversation with the female speaker. To reflect a real-world environment, other people were also visible talking in the background. For AO, the test sentences were presented through the same loudspeaker. For VO, video recordings of the speaker saying the K-HINT sentences were displayed without sound. Lastly, as shown in Fig. 1, for the AV condition, the video recordings were presented with sound through a Samsung Odyssey VR headset (Samsung Electronics Co.). Before the AV session began, the participants had time to look around the “café” so that they were familiar with the virtual environment. The presentation level of the test sentences was 60 dBA, and percent-correct scores were calculated. To simulate real-world communication situations, the testing was conducted in both quiet and noise. For the noise condition, café noise from the Ambisonic Recordings of Typical Environments (ARTE) database [28] was used and presented from loudspeakers located at 45, 135, 225, and 315 degrees. The noise was also presented at 60 dBA, which is equivalent to a 0 dB signal-to-noise ratio (SNR). All participants completed a practice test. HA users completed the testing in unaided and aided conditions with no adjustments made to their HA settings.
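As a concrete illustration of the scoring and the noise setup, the sketch below shows sentence-level percent-correct scoring (1 point per correctly repeated sentence, 0 points otherwise) and the level arithmetic behind the 0 dB SNR (60 dBA speech against 60 dBA café noise). The function names and example responses are hypothetical and not the authors' code.

```python
# Minimal sketch (illustrative): percent-correct sentence scoring and the
# SNR arithmetic used in the noise condition.

def percent_correct(sentence_scores: list[int]) -> float:
    """sentence_scores holds 1 (correct repetition) or 0 (incorrect) per sentence."""
    return 100.0 * sum(sentence_scores) / len(sentence_scores)

def snr_db(speech_level_dba: float, noise_level_dba: float) -> float:
    """SNR in dB is the level difference between speech and noise."""
    return speech_level_dba - noise_level_dba

if __name__ == "__main__":
    scores = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]  # hypothetical responses to 10 sentences
    print(f"Percent correct: {percent_correct(scores):.1f}%")  # 80.0%
    print(f"SNR: {snr_db(60, 60):.0f} dB")  # 0 dB (60 dBA speech, 60 dBA noise)
```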

Questionnaire

The Korean version of the Speech, Spatial and Qualities of Hearing Scale (K-SSQ) was administered as a subjective measure to investigate individuals’ listening abilities in diverse situations. The K-SSQ is widely utilized to assess individuals’ subjective auditory performance before and after the use of hearing devices, such as HAs and cochlear implants. The K-SSQ consists of 49 items that examine various listening abilities in everyday environments: 14 items related to speech perception, 17 items related to spatial hearing, and 18 items related to qualities of hearing. Responses are given on a visual analog scale from 0 (not at all) to 10 (perfectly), and “not applicable” can be selected if individuals have not experienced the situation described in the questionnaire item.
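A minimal sketch of one plausible way to summarize the K-SSQ subscales is given below. It assumes each subscale is averaged over answered items only, with "not applicable" responses excluded; this scoring convention is an assumption for illustration, not a statement of the K-SSQ manual.

```python
# Minimal sketch (assumed scoring convention, not necessarily the K-SSQ manual):
# average each subscale over answered items only, treating "not applicable"
# responses (here None) as missing.

from statistics import mean
from typing import Optional

SUBSCALE_SIZES = {"speech": 14, "spatial": 17, "qualities": 18}  # 49 items total

def subscale_score(responses: list[Optional[float]]) -> Optional[float]:
    """Responses are 0-10 visual-analog ratings; None marks 'not applicable'."""
    answered = [r for r in responses if r is not None]
    return mean(answered) if answered else None

if __name__ == "__main__":
    speech_items = [7.5, 8.0, None, 6.0] + [7.0] * 10  # 14 items, one marked N/A
    print(f"Speech subscale: {subscale_score(speech_items):.2f}")
```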

Statistical analysis

Statistical analysis was performed using SigmaPlot version 14.0 (SYSTAT Software). A three-way analysis of covariance (ANCOVA) was performed to compare group differences in speech performance across modality and noise conditions while accounting for the degree of HL. The Bonferroni correction was applied to correct for multiple comparisons. Pearson’s correlation was performed to examine the relationship between the K-SSQ and multimodal speech performance (AV), as well as between the conventional and multimodal speech performance (AV).
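The analysis was run in SigmaPlot; the Python sketch below only illustrates the same general workflow (a three-way ANCOVA with pure-tone average as covariate, Bonferroni-corrected pairwise comparisons, and Pearson correlations) under assumed column names and a hypothetical data file, with plain independent t-tests standing in for the post-hoc procedure.

```python
# Minimal sketch (illustrative; column names and the data file are assumptions,
# not the authors' analysis script).

from itertools import combinations

import pandas as pd
from scipy.stats import pearsonr, ttest_ind
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical long-format data: one row per participant x modality x noise
# condition, with columns: subject, group, modality (AO/VO/AV), noise
# (quiet/noise), pta (dB HL), score (% correct), kssq (K-SSQ score).
df = pd.read_csv("speech_scores.csv")  # file name is an assumption

# Three-way ANCOVA: group x modality x noise with degree of HL (pta) as covariate
model = smf.ols("score ~ C(group) * C(modality) * C(noise) + pta", data=df).fit()
print(anova_lm(model, typ=2))

# Bonferroni-corrected pairwise comparisons across modalities
# (simple independent t-tests here; the actual post-hoc procedure may differ)
pairs = list(combinations(sorted(df["modality"].unique()), 2))
for a, b in pairs:
    stat, p = ttest_ind(df.loc[df["modality"] == a, "score"],
                        df.loc[df["modality"] == b, "score"])
    print(f"{a} vs {b}: Bonferroni-adjusted p = {min(1.0, p * len(pairs)):.3f}")

# Pearson correlation between K-SSQ scores and AV speech performance in quiet
av_quiet = df[(df["modality"] == "AV") & (df["noise"] == "quiet")]
r, p = pearsonr(av_quiet["kssq"], av_quiet["score"])
print(f"K-SSQ vs AV (quiet): r = {r:.2f}, p = {p:.2f}")
```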

Results

Participant characteristics

Thirty-one participants with bilateral sensorineural HL were enrolled in the study. Among the 31 participants, 18 were non-HA (nHA) users and 13 were bilateral HA users. The nHA users had no previous experience with HAs before participating in the study. The mean age was 52.7 years (standard deviation [SD]=15.1). All participants had normal otoscopy and tympanograms, indicating normal outer and middle ear status. The nHA group’s four-frequency (500, 1000, 2000, and 4000 Hz) pure-tone averages were 36.8 dB in the right ear and 37.8 dB in the left ear. The HA group had pure-tone averages of 59.5 dB in the right ear and 57.3 dB in the left ear. The average duration of HA use was 83.9 months in the right ear and 80.2 months in the left ear.

Speech recognition performance

The average scores for conventional and multimodal speech testing are presented in Table 1. Comparing average speech performance across groups for conventional testing, the HA users had the highest score when wearing their HAs (aided), followed by the nHA group and the HA users without their HAs (unaided). In multimodal testing, except for the VO and AV-in-noise conditions, speech performance was highest for the nHA group, followed by the HA users in the aided condition and the HA users in the unaided condition. In the VO condition, the order was HA users in the unaided condition, HA users in the aided condition, and nHA, while in the AV-in-noise condition, it was HA users in the aided condition, nHA, and HA users in the unaided condition. To reflect real-life communication modes, an ANCOVA was performed using the aided scores of the HA users and the scores of the nHA users. The results revealed a significant main effect of modality (F=324.95, p=0.003), while the main effect of condition (F=8.77, p=0.21) and the interaction effect of condition and modality (F=7.16, p=0.12) were not statistically significant. Post-hoc comparisons revealed a significant difference between AO and VO (p=0.025). No statistical differences were observed between AO and AV (p=0.66) or between AV and VO (p=0.11).

Relationship between subjective listening abilities and multimodal speech performance using VR

The average K-SSQ scores were 96.7 and 88.4 for the nHA and HA groups, respectively (Fig. 2). The HA users completed the questionnaire based on their aided condition to better reflect real-world communication. For the same reason, the aided scores of HA users were used in the correlational analysis between K-SSQ scores and multimodal speech performance in AV. The analysis included data from all participants, and no significant correlations were found in either quiet (r=-0.23, p=0.22) or noise conditions (r=-0.12, p=0.54).

Relationship between conventional speech test performance and multimodal speech performance using VR

To investigate the feasibility of the VR-based multimodal testing method in clinical practice, the correlation between the conventional and multimodal (AV) testing methods was explored. Similarly, data from all participants (aided scores for HA users) were used; however, since conventional testing was conducted only in quiet, the correlation was examined solely with AV in quiet. No significant correlation was found (r=-0.22, p=0.23).

Discussion

In this study, individuals with HL were divided into nHA and HA users, and their speech performance was evaluated in AO, VO, and AV conditions in both quiet and noise. The study examined the relationship between speech performance and subjective listening ability as assessed by the K-SSQ, as well as the potential clinical applicability of the VR testing method by investigating the relationship between conventional speech test performance and multimodal speech performance in AV. Findings of the study are consistent with prior research showing that integration of auditory and visual information improves speech understanding for individuals with HL [9,10,23,29,30]. For example, Lidestam, et al. [29] compared speech-in-noise performance between AO and AV by dividing participants into two groups (AO training and AV training), with stimuli (words and consonants) presented at 0 dB SNR, and reported that performance improved only in AV. In our study, both the nHA and HA groups showed the best speech performance in all test conditions (unaided, aided, quiet, and noise) in the order of AV, AO, and VO, even when the speech testing was conducted using VR. This suggests that, regardless of whether HAs are worn or the presence of noise, the provision of visual information together with auditory information contributes to improved speech understanding. However, when examining modality and noise conditions in multimodal testing, different patterns of speech performance were observed. For most conditions, speech performance was highest for the nHA group, followed by the HA users in the aided condition and the HA users in the unaided condition, suggesting that visual information and the use of HAs may compensate for HL, particularly in real-world communication scenarios. In VO, HA users showed better performance in the unaided condition than in the aided condition, indicating that visual information may play a stronger role in the absence of auditory information. The statistical analysis using ANCOVA showed a significant main effect of modality, with no significant interaction between modality and condition, nor a significant main effect of condition. This suggests that modality, whether auditory, visual, or AV, had a stronger influence on speech performance than the presence of noise. Post-hoc comparisons revealed a significant difference between AO and VO, which is consistent with previous studies suggesting that, in speech perception, one modality (auditory in this case) can dominate while information from other modalities serves a supplementary role [31-34]. The sound-induced flash illusion task serves as an example: when a single visual cue (flash) is presented alongside two auditory stimuli (beeps), individuals perceive two flashes. This indicates that audition can dominate vision, which is therefore not always the dominant modality [34]. Overall, the findings of the study emphasize the importance of considering both auditory and visual information when assessing speech performance in individuals with HL.
Our study is meaningful in that it utilized VR to assess speech performance in individuals with HL in a more ecologically valid setting, incorporating café noise and a variety of test conditions related to communicative environments. Compared to other studies, our setup used more naturalistic stimuli and testing environments that could be easier to set up when employed in clinical practice. Most auditory research studies using VR have used avatars or objects. For instance, Kepp, et al. [35] investigated pitch-ranking abilities in children with cochlear implants, HAs, and normal hearing using VR. The visual stimulus used in that study was a helicopter cockpit, so that the participants felt as if they were flying forward. Bakhos, et al. [21] explored the effectiveness of VR training for audiology students, in which an avatar was used as a virtual patient. In contrast, the video recordings used in our study were created with actors speaking sentences and conversing at a café, with café noise incorporated for the noise condition.
Although the findings suggest the potential of VR-based multimodal testing in capturing AV and HA benefit, they also highlight important considerations when employing such testing in practice. First, while the presentation level of the speech material was controlled at 60 dBA, commonly regarded as conversational level, no adjustments were made to the HA settings in order to preserve the naturalistic communication environment. HA features such as directionality and noise reduction could have reduced the noise during testing in noise, contributing to variability that could confound group differences and noise-resilience comparisons. Second, to examine HA benefit within the multimodal condition, speech performance of HA users was evaluated under both unaided and aided conditions, but this resulted in a reduced sample size for each subgroup. Therefore, while the findings of the study provide preliminary insight, future studies with larger sample sizes are needed to explore subgroup characteristics. Furthermore, the study found no significant correlations between multimodal speech performance with VR and either the K-SSQ scores or conventional speech testing. These findings may be attributed to several factors. The K-SSQ captures a broad range of subjective listening abilities across various environments, whereas the VR testing focused on a specific communication scenario. This mismatch in ecological coverage may have weakened the correlation. In addition, individual differences in visual reliance and listening strategies may not have been adequately assessed by either the K-SSQ or the conventional test. These considerations highlight the complexity of linking subjective and objective auditory measures, especially when ecological validity is prioritized. It is also worth noting that there is a need for a unified definition of, and setup for, high ecological validity [36]. Keidser, et al. [36] noted that there has been a significant increase in studies using more realistic environments or tasks. However, limitations remain: the definition of ecological validity differs across studies, and many factors contribute to ecological validity and the underlying process of communication.
In sum, while our findings suggest that VR-based multimodal testing may enhance ecological validity by more closely resembling real-world communication environments, the current data do not provide clear evidence that this method is more effective than conventional testing. To validate the feasibility of the clinical application of such testing, future studies with larger sample sizes, diverse participant characteristics (e.g., reliance on visual information, HA use), speech materials matching the virtual space, and speakers with various speech acoustic features (e.g., male, female, and child voices) would be necessary.

Notes

Acknowledgments

This work was supported by the Ministry of Education of the Republic of Korea and the National Research Foundation of Korea (NRF-2023S1A5A2A03085474).

Author Contribution

Conceptualization: all authors. Data curation: Soojin Kang, Hye Yoon Seol. Formal analysis: Soojin Kang. Methodology: all authors. Writing—original draft: Soojin Kang. Writing—review & editing: all authors.

Fig. 1.
Speech testing using a virtual reality headset.
Fig. 2.
Average K-SSQ scores. K-SSQ, Korean version of the Speech, Spatial and Qualities of Hearing Scale; nHA, non-hearing aid user; HA, hearing aid user.
Table 1.
Speech performance in conventional and multimodal testing
Group       Testing       Condition  Mean accuracy (%)
nHA         Conventional  AO         84.4 (SD=21.8)
nHA         Multimodal    AO_Quiet   93.1 (SD=6.4)
nHA         Multimodal    VO_Quiet   1.4 (SD=3.8)
nHA         Multimodal    AV_Quiet   96.9 (SD=4.6)
nHA         Multimodal    AO_Noise   73.1 (SD=18.0)
nHA         Multimodal    VO_Noise   0.8 (SD=2.6)
nHA         Multimodal    AV_Noise   82.2 (SD=17.5)
HA_unaided  Conventional  AO         38.4 (SD=39.1)
HA_unaided  Multimodal    AO_Quiet   43.8 (SD=40.8)
HA_unaided  Multimodal    VO_Quiet   5.0 (SD=8.4)
HA_unaided  Multimodal    AV_Quiet   60.4 (SD=35.4)
HA_unaided  Multimodal    AO_Noise   30.0 (SD=34.5)
HA_unaided  Multimodal    VO_Noise   5.4 (SD=10.5)
HA_unaided  Multimodal    AV_Noise   57.3 (SD=34.9)
HA_aided    Conventional  AO         88.5 (SD=24.4)
HA_aided    Multimodal    AO_Quiet   87.7 (SD=15.9)
HA_aided    Multimodal    VO_Quiet   2.3 (SD=3.9)
HA_aided    Multimodal    AV_Quiet   94.6 (SD=6.6)
HA_aided    Multimodal    AO_Noise   69.6 (SD=16.9)
HA_aided    Multimodal    VO_Noise   1.2 (SD=2.2)
HA_aided    Multimodal    AV_Noise   83.8 (SD=14.6)

nHA, non-hearing aid user; HA_unaided, hearing aid users without hearing aids; HA_aided, hearing aid users with hearing aids; AO, auditory only; VO, visual only; AV, audiovisual; SD, standard deviation

REFERENCES

1. Davis A, McMahon CM, Pichora-Fuller KM, Russ S, Lin F, Olusanya BO, et al. Aging and hearing health: the life-course approach. Gerontologist 2016;56(Suppl 2):S256-67.
2. Härkönen K, Kivekäs I, Rautiainen M, Kotti V, Vasama JP. Quality of life and hearing eight years after sudden sensorineural hearing loss. Laryngoscope 2017;127(4):927-31.
3. Wilson BS, Tucci DL, Merson MH, O’Donoghue GM. Global hearing health care: new findings and perspectives. Lancet 2017;390(10111):2503-15.
4. Punch JL, Hitt R, Smith SW. Hearing loss and quality of life. J Commun Disord 2019;78:33-45.
5. Chang YS, Kim JS, Park S, Hong SH, Moon IJ. Does cognitive function affect performance and listening effort during bilateral wireless streaming in hearing aid users? J Audiol Otol 2024;28(4):271-7.
6. Limkitisupasin P, Jongpradubgiat P, Utoomprurkporn N. Hearing screening for older adults with cognitive impairment: a systematic review. J Audiol Otol 2024;28(4):260-70.
7. Grant KW, Walden BE, Seitz PF. Auditory-visual speech recognition by hearing-impaired subjects: consonant recognition, sentence recognition, and auditory-visual integration. J Acoust Soc Am 1998;103(5 Pt 1):2677-90.
8. Pelson RO, Prather WF. Effects of visual message-related cues, age, and hearing impairment on speechreading performance. J Speech Hear Res 1974;17(3):518-25.
9. Sumby WH, Pollack I. Visual contribution to speech intelligibility in noise. J Acoust Soc Am 1954;26(2):212-5.
10. Tye-Murray N, Sommers MS, Spehar B. Audiovisual integration and lipreading abilities of older adults with normal and impaired hearing. Ear Hear 2007;28(5):656-68.
11. Moradi S, Lidestam B, Danielsson H, Ng EHN, Rönnberg J. Visual cues contribute differentially to audiovisual perception of consonants and vowels in improving recognition and reducing cognitive demands in listeners with hearing impairment using hearing aids. J Speech Lang Hear Res 2017;60(9):2687-703.
12. Puschmann S, Daeglau M, Stropahl M, Mirkovic B, Rosemann S, Thiel CM, et al. Hearing-impaired listeners show increased audiovisual benefit when listening to speech in noise. Neuroimage 2019;196:261-8.
13. Lalonde K, McCreery RW. Audiovisual enhancement of speech perception in noise by school-age children who are hard of hearing. Ear Hear 2020;41(4):705-19.
14. Oreinos C, Buchholz JM. Evaluation of loudspeaker-based virtual sound environments for testing directional hearing aids. J Am Acad Audiol 2016;27(7):541-56.
15. Taylor B. Self-report assessment of hearing aid outcome - an overview [online] [cited 2025 Apr 1]. Available from: URL: https://www.audiologyonline.com/articles/self-report-assessmenthearing-aid-931.

16. Glyde H, Hickson L, Cameron S, Dillon H. Problems hearing in noise in older adults: a review of spatial processing disorder. Trends Amplif 2011;15(3):116-26.
17. Gygi B, Ann Hall D. Background sounds and hearing-aid users: a scoping review. Int J Audiol 2016;55(1):1-10.
18. Kim EY, Seol HY. Comparison of speech perception performance according to prosody change between people with normal hearing and cochlear implant users. J Audiol Otol 2024;28(2):119-25.
19. Wohlgenannt I, Simons A, Stieglitz S. Virtual reality. Bus Inf Syst Eng 2020;62:455-61.
20. Stecker GC. Using virtual reality to assess auditory performance. Hear J 2019;72(6):20-3.
21. Bakhos D, Galvin J, Aoustin JM, Robier M, Kerneis S, Bechet G, et al. Training outcomes for audiology students using virtual reality or traditional training methods. PLoS One 2020;15(12):e0243380.
22. Serafin S, Adjorlu A, Percy-Smith LM. A review of virtual reality for individuals with hearing impairments. Multimodal Technol Interact 2023;7(4):36.
23. Seol HY, Kang S, Lim J, Hong SH, Moon IJ. Feasibility of virtual reality audiological testing: prospective study. JMIR Serious Games 2021;9(3):e26976.
24. Rayes H, Al-Malky G, Vickers D. Systematic review of auditory training in pediatric cochlear implant recipients. J Speech Lang Hear Res 2019;62(5):1574-93.
25. Firszt JB, Reeder RM, Dwyer NY, Burton H, Holden LK. Localization training results in individuals with unilateral severe to profound hearing loss. Hear Res 2015;319:48-55.
26. Salanger M, Lewis D, Vallier T, McDermott T, Dergan A. Applying virtual reality to audiovisual speech perception tasks in children. Am J Audiol 2020;29(2):244-58.
27. Beck DL, Danhauer JL, Abrams HB, Atcherson SR, Brown DK, Chasin M, et al. Audiologic considerations for people with normal hearing sensitivity yet hearing difficulty and/or speech-in-noise problems. Hear Rev 2018;25(10):28-38.

28. Weisser A, Buchholz JM, Oreinos C, Badajoz-Davila J, Galloway J, Beechey T, et al. The ambisonic recordings of typical environments (ARTE) database. Acta Acust United Acust 2019;105(4):695-713.
29. Lidestam B, Moradi S, Pettersson R, Ricklefs T. Audiovisual training is better than auditory-only training for auditory-only speech-in-noise identification. J Acoust Soc Am 2014;136(2):EL142-7.
30. Montgomery AA, Walden BE, Schwartz DM, Prosek RA. Training auditory-visual speech reception in adults with moderate sensorineural hearing loss. Ear Hear 1984;5(1):30-6.
31. Woodhouse L, Hickson L, Dodd B. Review of visual speech perception by hearing and hearing-impaired people: clinical implications. Int J Lang Commun Disord 2009;44(3):253-70.
32. Napolitano AC, Sloutsky VM. Is a picture worth a thousand words? The flexible nature of modality dominance in young children. Child Dev 2004;75(6):1850-70.
33. Massaro DW. Multiple book review of speech perception by ear and eye: a paradigm for psychological inquiry. Behav Brain Sci 1989;12(4):741-55.
34. Shams L, Kamitani Y, Shimojo S. What you see is what you hear. Nature 2000;408(6814):788.
35. Kepp NE, Arrieta I, Schiøth C, Percy-Smith L. Virtual reality pitch ranking in children with cochlear implants, hearing aids or normal hearing. Int J Pediatr Otorhinolaryngol 2022;161:111241.
36. Keidser G, Naylor G, Brungart DS, Caduff A, Campos J, Carlile S, et al. The quest for ecological validity in hearing science: What it is, why it matters, and how to advance it. Ear Hear 2020;41(Suppl 1):5S-19.