Acknowledgement
This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (2019R1F1A1053060).
References
- Cherry EC. Some experiments on the recognition of speech, with one and with two ears. J Acoust Soc Am 1953;25:975-9. https://doi.org/10.1121/1.1907229
- Bregman AS. Auditory scene analysis: the perceptual organization of sounds. 1st ed. Cambridge, USA: The MIT Press;1990. p.1-792.
- Brumm H, Slabbekoorn H. Acoustic communication in noise. Adv Study Behav 2005;35:151-209. https://doi.org/10.1016/S0065-3454(05)35004-2
- Repp BH. Integration and segregation in speech perception. Lang Speech 1988;31(Pt 3):239-71. https://doi.org/10.1177/002383098803100302
- Dobreva MS, O'Neill WE, Paige GD. Influence of aging on human sound localization. J Neurophysiol 2011;105:2471-86. https://doi.org/10.1152/jn.00951.2010
- Nager W, Kohlmetz C, Joppich G, Möbes J, Münte TF. Tracking of multiple sound sources defined by interaural time differences: brain potential evidence in humans. Neurosci Lett 2003;344:181-4. https://doi.org/10.1016/S0304-3940(03)00439-7
- Grossberg S, Govindarajan KK, Wyse LL, Cohen MA. ARTSTREAM: a neural network model of auditory scene analysis and source segregation. Neural Netw 2004;17:511-36. https://doi.org/10.1016/j.neunet.2003.10.002
- Cooke MP, Brown GJ. Computational auditory scene analysis: exploiting principles of perceived continuity. Speech Commun 1993;13:391-9. https://doi.org/10.1016/0167-6393(93)90037-L
- Roch MA, Hurtig RR, Huang T, Liu J, Arteaga SM. Foreground auditory scene analysis for hearing aids. Pattern Recognit Lett 2007;28:1351-9. https://doi.org/10.1016/j.patrec.2007.03.002
- Wang D, Brown GJ. Computational auditory scene analysis: principles, algorithms, and applications. 1st ed. New Jersey, USA: Wiley-IEEE Press;2006. p.1-381.
- Zhong X, Yost WA. How many images are in an auditory scene? J Acoust Soc Am 2017;141:2882. https://doi.org/10.1121/1.4981118
- Kawashima T, Sato T. Perceptual limits in a simulated "Cocktail party". Atten Percept Psychophys 2015;77:2108-20. https://doi.org/10.3758/s13414-015-0910-9
- Moher D, Shamseer L, Clarke M, Ghersi D, Liberati A, Petticrew M, et al. Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Syst Rev 2015;4:1. https://doi.org/10.1186/2046-4053-4-1
- Amitay S, Halliday L, Taylor J, Sohoglu E, Moore DR. Motivation and intelligence drive auditory perceptual learning. PLoS One 2010;5:e9816. https://doi.org/10.1371/journal.pone.0009816
- Macleod MR, O'Collins T, Horky LL, Howells DW, Donnan GA. Systematic review and meta-analysis of the efficacy of FK506 in experimental stroke. J Cereb Blood Flow Metab 2005;25:713-21. https://doi.org/10.1038/sj.jcbfm.9600064
- Atkins D, Best D, Briss PA, Eccles M, Falck-Ytter Y, Flottorp S, et al. Grading quality of evidence and strength of recommendations. BMJ 2004;328:1490. https://doi.org/10.1136/bmj.328.7454.1490
- The R Foundation. R: a language and environment for statistical computing [Internet]. The R Foundation; 2018 [cited 2019 May 20]. Available from: https://www.R-project.org/.
- Eramudugolla R, Irvine DR, McAnally KI, Martin RL, Mattingley JB. Directed attention eliminates 'change deafness' in complex auditory scenes. Curr Biol 2005;15:1108-13. https://doi.org/10.1016/j.cub.2005.05.051
- Roberts KL, Doherty NJ, Maylor EA, Watson DG. Can auditory objects be subitized? J Exp Psychol Hum Percept Perform 2019;45:1-15. https://doi.org/10.1037/xhp0000578
- Henshaw H, Ferguson MA. Efficacy of individual computer-based auditory training for people with hearing loss: a systematic review of the evidence. PLoS One 2013;8:e62836. https://doi.org/10.1371/journal.pone.0062836
- Yost WA, Dye RH Jr, Sheft S. A simulated "cocktail party" with up to three sound sources. Percept Psychophys 1996;58:1026-36. https://doi.org/10.3758/BF03206830
- Kahneman D. Attention and effort. 1st ed. Englewood Cliffs, USA: Prentice-Hall;1973. p.1-242.
- Bronkhorst AW. The cocktail-party problem revisited: early processing and selection of multi-talker speech. Atten Percept Psychophys 2015;77:1465-87. https://doi.org/10.3758/s13414-015-0882-9
- Pollack I. Auditory informational masking. J Acoust Soc Am 1975;57(Suppl 1):S5. https://doi.org/10.1121/1.1995329
- Watson CS, Kelly WJ, Wroton HW. Factors in the discrimination of tonal patterns. II. Selective attention and learning under various levels of stimulus uncertainty. J Acoust Soc Am 1976;60:1176-86. https://doi.org/10.1121/1.381220
- Gosselin PA, Gagne JP. Older adults expend more listening effort than young adults recognizing speech in noise. J Speech Lang Hear Res 2011;54:944-58. https://doi.org/10.1044/1092-4388(2010/10-0069)
- Snyder JS, Alain C. Sequential auditory scene analysis is preserved in normal aging adults. Cereb Cortex 2007;17:501-12. https://doi.org/10.1093/cercor/bhj175
- Ben-David BM, Tse VY, Schneider BA. Does it take older adults longer than younger adults to perceptually segregate a speech target from a background masker? Hear Res 2012;290:55-63. https://doi.org/10.1016/j.heares.2012.04.022
- Lopez-Poveda EA. Development of fundamental aspects of human auditory perception. In: Romand R, Varela-Nieto I, editors. Development of auditory and vestibular systems. Cambridge, USA: Academic Press;2014. p.287-314.
- Gordon-Salant S, Frisina RD, Popper AN, Fay RR. The aging auditory system. 1st ed. New York, USA: Springer;2010. p.1-293.