[1] E. Vincent, S. Araki, P. Bofill, "The 2008 Signal Separation Evaluation Campaign: A Community-Based Approach to Large-Scale Evaluation," in ICA 2009, Paraty, Brazil, pp. 734-741, 2009.
[2] J.-M. Valin, J. Rouat, F. Michaud, "Enhanced Robot Audition Based on Microphone Array Source Separation with Post-Filter," in IROS 2004, Sendai, Japan, pp. 2123-2128, 2004.
[3] R. Takeda, S. Yamamoto, K. Komatani, T. Ogata, H. G. Okuno, "Missing-Feature-Based Speech Recognition for Two Simultaneous Speech Signals Separated by ICA with a Pair of Humanoid Ears," in IROS 2006, Beijing, China, pp. 878-885, 2006.
[4] Y. Ephraim, D. Malah, "Speech Enhancement Using a Minimum Mean-Square Error Short-Time Spectral Amplitude Estimator," IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. ASSP-32, No. 6, pp. 1109-1121, 1984.
[5] H.-D. Kim, S.-S. Ahn, K. Kim, J. Choi, "Single Channel Particular Voice Activity Detection for Monitoring the Violence Situations," in IEEE RO-MAN 2013, pp. 412-417, 2013.
[6] A. Jansson, E. Humphrey, N. Montecchio, R. Bittner, A. Kumar, T. Weyde, "Singing Voice Separation with Deep U-Net Convolutional Networks," in ISMIR 2017, Suzhou, China, pp. 23-27, 2017.
[7] D. Stoller, S. Ewert, S. Dixon, "Wave-U-Net: A Multi-Scale Neural Network for End-to-End Audio Source Separation," pp. 2391-2395, 2018.
[8] D. O'Shaughnessy, "Speech Communication: Human and Machine," Addison-Wesley, New York, p. 150, 1987.
[9] O. Kupyn, V. Budzan, M. Mykhailych, D. Mishkin, J. Matas, "DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks," in CVPR 2018, Salt Lake City, UT, USA, pp. 8183-8192, 2018.
[10] O. Ronneberger, P. Fischer, T. Brox, "U-Net: Convolutional Networks for Biomedical Image Segmentation," in MICCAI 2015, Springer, Vol. 9351, pp. 234-241, 2015.
[11] W. Wang, K. Yu, J. Hugonot, P. Fua, M. Salzmann, "Recurrent U-Net for Resource-Constrained Segmentation," in ICCV 2019, Seoul, South Korea, pp. 2142-2151, 2019.
[12] Z. Rafii, A. Liutkus, F.-R. Stöter, S.-I. Mimilakis, R. Bittner, "The MUSDB18 Corpus for Music Separation," 2017.
[13] A. Liutkus, D. Fitzgerald, Z. Rafii, "Scalable Audio Separation with Light Kernel Additive Modelling," in ICASSP 2015, Brisbane, Australia, pp. 76-80, 2015.
[14] C.-L. Hsu, J. R. Jang, "On the Improvement of Singing Voice Separation for Monaural Recordings Using the MIR-1K Dataset," IEEE Transactions on Audio, Speech, and Language Processing, Vol. 18, No. 2, pp. 310-319, 2010.
[15] C. K. A. Reddy, E. Beyrami, H. Dubey, V. Gopal, R. Cheng, R. Cutler, S. Matusevych, R. Aichner, A. Aazami, S. Braun, P. Rana, S. Srinivasan, J. Gehrke, "The INTERSPEECH 2020 Deep Noise Suppression Challenge: Datasets, Subjective Speech Quality and Testing Framework," 2020.
[16] E. Vincent, R. Gribonval, C. Févotte, "Performance Measurement in Blind Audio Source Separation," IEEE Transactions on Audio, Speech, and Language Processing, Vol. 14, No. 4, pp. 1462-1469, 2006.