• Title/Summary/Keyword: listening log

Search Results: 7

Writing Listening Logs and Its Effect on Improving L2 Students' Metacognitive Awareness and Listening Proficiency

  • Lee, You-Jin;Cha, Kyung-Whan
    • International Journal of Contents / v.16 no.4 / pp.50-67 / 2020
  • This study investigated whether writing weekly listening logs could influence college English learners' metacognitive awareness and listening proficiency. In addition, the Metacognitive Awareness Listening Questionnaire (MALQ) was applied to examine the learners' knowledge of their listening process. It is process-oriented research, conducted by analyzing the MALQ and the students' listening logs to trace how their metacognitive awareness and listening proficiency changed during the semester. Eighty-nine students who took an English listening practice course at a university participated in this study. The research findings are as follows. First, there was a significant relationship between the EFL university students' listening comprehension and some subscales of metacognitive awareness. Second, the students had an opportunity to reflect on their learning through regular listening activities and weekly listening logs, which included important information about the listening process and practice. Third, as the students' listening proficiency increased by the end of the semester, introducing listening logs alongside classroom lessons was found to help the students improve their listening ability. Finally, the high-proficiency students used multiple strategies simultaneously, regardless of the type of listening strategy, while the low-proficiency students used only one or two listening strategies. However, the low-proficiency students may have had trouble expressing their ideas in English or recognizing the listening strategies they used, rather than actually using few strategies. Therefore, teachers should regularly check whether students are following their instructions and help them use appropriate strategies for better understanding.

Performance comparison evaluation of speech enhancement using various loss functions (다양한 손실 함수를 이용한 음성 향상 성능 비교 평가)

  • Hwang, Seo-Rim;Byun, Joon;Park, Young-Cheol
    • The Journal of the Acoustical Society of Korea / v.40 no.2 / pp.176-182 / 2021
  • This paper evaluates and compares the performance of Deep Neural Network (DNN)-based speech enhancement models under various loss functions. We used a complex network that can consider the phase information of speech as a baseline model. As loss functions, we consider two basic loss functions, the Mean Squared Error (MSE) and the Scale-Invariant Source-to-Noise Ratio (SI-SNR), and two perceptual loss functions, the Perceptual Metric for Speech Quality Evaluation (PMSQE) and the Log Mel Spectra (LMS). The performance comparison was performed through objective evaluation and listening tests on outputs obtained using various combinations of the loss functions. Test results show that when a perceptual loss function was combined with MSE or SI-SNR, the overall performance improved, and the perceptual loss functions, even while exhibiting lower objective scores, showed better performance in the listening test.
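The abstract does not spell out its loss definitions, but SI-SNR, one of the basic losses it compares, has a standard form: the estimate is projected onto the reference so the measure is invariant to rescaling, and the negative value is minimized. A minimal NumPy sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def si_snr(est, ref, eps=1e-8):
    # zero-mean both signals
    est = est - est.mean()
    ref = ref - ref.mean()
    # project the estimate onto the reference: the scale-invariant target
    s_target = (est @ ref) / (ref @ ref + eps) * ref
    e_noise = est - s_target
    # ratio of target power to residual power, in dB
    return 10.0 * np.log10((s_target @ s_target) / (e_noise @ e_noise + eps) + eps)

def si_snr_loss(est, ref):
    # training minimizes the negative SI-SNR
    return -si_snr(est, ref)
```

Because the target is a projection, multiplying the estimate by any nonzero constant leaves the score unchanged, which is the property that distinguishes SI-SNR from a plain SNR.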

Personalized Battery Lifetime Prediction for Mobile Devices based on Usage Patterns

  • Kang, Joon-Myung;Seo, Sin-Seok;Hong, James Won-Ki
    • Journal of Computing Science and Engineering / v.5 no.4 / pp.338-345 / 2011
  • Nowadays, mobile devices are used for various applications, such as making voice/video calls, browsing the Internet, and listening to music. The average battery consumption of each of these activities and the length of time a user spends on each one determine the battery lifetime of a mobile device. Previous methods have predicted battery lifetime using a static battery consumption rate that does not consider user characteristics. This paper proposes an approach to predict a mobile device's available battery lifetime based on usage patterns. Because every user has a different pattern of voice call, data communication, and video call usage, such usage patterns can be used for personalized prediction of battery lifetime. First, we define one or more states that affect battery consumption. Then, we record time-series log data related to battery consumption and the time spent in each state. We calculate the average battery consumption rate for each state and determine the usage pattern from the time-series data. Finally, we predict the available battery time based on the average battery consumption rate for each state and the usage pattern. We also present the experimental trials used to validate our approach in the real world.
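The final prediction step described above reduces to simple arithmetic: a per-state drain rate weighted by the fraction of time the user spends in each state. A minimal sketch under that reading (the state names and rates are hypothetical, not the paper's measured values):

```python
def predict_battery_hours(remaining_pct, drain_rates, usage_fractions):
    """Estimate remaining battery lifetime in hours.

    drain_rates: average consumption per state, in % per hour
    usage_fractions: share of time spent in each state (sums to 1)
    """
    effective_rate = sum(drain_rates[s] * usage_fractions[s] for s in drain_rates)
    return remaining_pct / effective_rate

# hypothetical usage profile: mostly idle, some calls and music
rates = {"voice_call": 20.0, "music": 8.0, "idle": 2.0}   # %/hour
shares = {"voice_call": 0.1, "music": 0.1, "idle": 0.8}
hours = predict_battery_hours(80.0, rates, shares)  # 80 / 4.4 ≈ 18.2 hours
```

Personalization enters through `usage_fractions`, which would be re-estimated from each user's time-series logs rather than fixed globally.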

Human Laughter Generation using Hybrid Generative Models

  • Mansouri, Nadia;Lachiri, Zied
    • KSII Transactions on Internet and Information Systems (TIIS) / v.15 no.5 / pp.1590-1609 / 2021
  • Laughter is one of the most important nonverbal sounds that humans generate and a means of expressing emotion. The acoustic and contextual features of this specific sound differ from those of speech, and many difficulties arise during their modeling. In this work, we propose an audio laughter generation system based on unsupervised generative models: the autoencoder (AE) and its variants. The procedure combines three main sub-processes: (1) analysis, which consists of extracting the log magnitude spectrogram from the laughter database; (2) generative model training; and (3) the synthesis stage, which involves an intermediate mechanism, the vocoder. To improve the synthesis quality, we suggest three hybrid models (LSTM-VAE, GRU-VAE, and CNN-VAE) that combine the representation learning capacity of the variational autoencoder (VAE) with the temporal modeling ability of a long short-term memory RNN (LSTM) and the ability of a CNN to learn invariant features. To assess the performance of our proposed audio laughter generation process, an objective evaluation (RMSE) and a perceptual audio quality test (listening test) were conducted. According to these evaluation metrics, the GRU-VAE outperforms the other VAE models.
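The analysis sub-process named in step (1), extracting a log magnitude spectrogram, can be sketched with SciPy's STFT; the sampling rate, frame size, and floor constant here are illustrative choices, not the paper's settings:

```python
import numpy as np
from scipy.signal import stft

def log_magnitude_spectrogram(wave, fs=16000, nperseg=512, eps=1e-8):
    # short-time Fourier transform, then the log of the magnitude;
    # eps keeps the log finite in silent frames
    _, _, Z = stft(wave, fs=fs, nperseg=nperseg)
    return np.log(np.abs(Z) + eps)
```

The resulting (bins x frames) array is what an AE/VAE would be trained to reconstruct, with a vocoder mapping generated spectrograms back to waveforms at synthesis time.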

A DNN-Based Personalized HRTF Estimation Method for 3D Immersive Audio

  • Son, Ji Su;Choi, Seung Ho
    • International Journal of Internet, Broadcasting and Communication / v.13 no.1 / pp.161-167 / 2021
  • This paper proposes a new personalized HRTF estimation method based on a deep neural network (DNN) model, with improved elevation reproduction using a notch filter. In a previous study, a DNN model was proposed that estimates the magnitude of the HRTF from anthropometric measurements [1]. However, since this method uses a zero phase without estimating the phase, it causes internalization (i.e., inside-the-head localization) of sound when listening to spatial audio. We devise a method to estimate both the magnitude and phase of the HRTF with a DNN model. A personalized HRIR was estimated using anthropometric measurements, including detailed data on the head, torso, shoulders, and ears, as inputs to the DNN model. The estimated HRIR was then filtered with an appropriate notch filter to improve elevation reproduction. To evaluate the performance, both objective and subjective evaluations were conducted. For the objective evaluation, the root mean square error (RMSE) and the log spectral distance (LSD) between the reference HRTF and the estimated HRTF were measured. For the subjective evaluation, a MUSHRA test and a preference test were conducted. As a result, the proposed method gives listeners a more immersive audio experience than the previous methods.
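Of the two objective scores used here, the log spectral distance has a standard definition: the RMS, per frame, of the dB-scale difference between the two magnitude spectra, averaged over frames. A minimal sketch (the array layout is an assumption, not the paper's):

```python
import numpy as np

def log_spectral_distance(mag_ref, mag_est, eps=1e-8):
    """mag_ref, mag_est: (frames, bins) magnitude spectra; result in dB."""
    diff_db = 20.0 * np.log10((mag_ref + eps) / (mag_est + eps))
    # RMS across frequency bins, then average across frames
    return float(np.mean(np.sqrt(np.mean(diff_db ** 2, axis=1))))
```

A uniform 2x magnitude error, for instance, yields exactly 20 log10 2 ≈ 6.02 dB, which makes the unit easy to sanity-check.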

Bayesian network based Music Recommendation System considering Multi-Criteria Decision Making (다기준 의사결정 방법을 고려한 베이지안 네트워크 기반 음악 추천 시스템)

  • Kim, Nam-Kuk;Lee, Sang-Yong
    • Journal of Digital Convergence / v.11 no.3 / pp.345-352 / 2013
  • The demand for and production of mobile music increase as the number of smartphone users grows, and the criteria for selecting a user's preferred music have become more diverse and complicated as the range of popular music has widened. Research on intelligent techniques for recommending music based on user preferences in mobile environments is actively being conducted. However, existing music recommendation systems fail to reflect users' preferences because they base recommendations simply on users' listening logs. This paper suggests a personalized music recommendation system that better reflects users' preferences. Using the Analytic Hierarchy Process (AHP), the musical preferences of every user can be identified. User feedback based on a Bayesian network was applied to reflect users' continuously changing preferences. The experiment was carried out with 12 participants (four groups of three persons each), resulting in an 87.5% satisfaction level.
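The abstract does not say how AHP yields a preference profile, but the standard computation takes a pairwise comparison matrix over criteria and derives priority weights from it; the geometric-mean approximation of the principal eigenvector is a common shortcut. A hypothetical sketch (the criteria and judgments are invented for illustration):

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise comparison matrix,
    via the geometric-mean approximation of the principal eigenvector."""
    gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[0])
    return gm / gm.sum()

# hypothetical judgment: genre rated 3x as important as tempo
w = ahp_weights(np.array([[1.0, 3.0],
                          [1.0 / 3.0, 1.0]]))  # → [0.75, 0.25]
```

In a recommender, such weights would score candidate tracks across criteria, with Bayesian-network feedback then adjusting the profile over time.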

A perceptual study of the wh-island constraint in Seoul Korean (서울말의 wh-섬 제약 지각 연구)

  • Yun, Weonhee
    • Phonetics and Speech Sciences / v.13 no.2 / pp.27-35 / 2021
  • This study investigated the status of the wh-island constraint in Seoul Korean. The syntactic movement of a wh-phrase out of an embedded sentence so as to take wide scope at LF is considered invalid because it violates the wh-island constraint, but studies have shown that such movement is possible when the sentence is read with a wh-intonation. We conducted perceptual tests in which subjects were asked to select an answer after listening to each of four types of interrogative sentences. Three of them contained 'Nugu-leul', the accusative form of the wh-phrase 'who', which also serves as an indefinite form. The fourth sentence contained the name of a person. 'Nugu-leul' and the noun were positioned in the same embedded sentence to see whether the subjects accepted a matrix-scope interpretation of the wh-phrases. Response times were transformed to normalized log response times and checked for differences in the time taken to select answers across the types of interrogative sentences. The results showed that the subjects had a definite preference for the matrix-scope interpretation of sentences with a wh-intonation. The response time required to select the matrix-scope interpretation was longer than for any other type of interrogative sentence. We conclude that the wh-island constraint in Seoul Korean is weak.
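The response-time normalization mentioned above is commonly a z-score of log-transformed RTs, which tames the right skew typical of raw reaction times; a minimal sketch under that assumption (the abstract does not specify the exact transform):

```python
import numpy as np

def normalized_log_rt(rts_ms):
    """z-score the log-transformed response times (in milliseconds)."""
    log_rt = np.log(np.asarray(rts_ms, dtype=float))
    return (log_rt - log_rt.mean()) / log_rt.std()
```

After this transform, condition means can be compared on a common scale regardless of each subject's baseline speed.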