• Title/Summary/Keyword: Listener

Search Results: 194

Subjective Listening Experiments on a Front and Rear Array-Based WFS System

  • Yoo, Jae-Hyoun;Seo, Jeong-Il;Shim, Hwan;Chung, Hyun-Joo;Sung, Koeng-Mo;Kang, Kyeong-Ok
    • ETRI Journal
    • /
    • v.33 no.6
    • /
    • pp.977-980
    • /
    • 2011
  • Wave field synthesis (WFS) has been attracting more and more attention due to its ability to accurately reproduce an original sound field. Theoretically perfect WFS, however, requires a four-sided loudspeaker array that encloses the listener, and such a system is difficult to build outside large listening spaces such as theaters and concert halls. In a home listening space, in particular, installing side loudspeaker arrays is impractical. If the two arrays to the left and right of the listener can be omitted, a setup using only front and rear loudspeaker arrays may be a solution. In this letter, we present subjective listening experiments on sound localization and distance perception for a WFS system based on front and rear loudspeaker arrays. The experiments, conducted at two listening points, show average localization errors of $6.1^{\circ}$ and $9.18^{\circ}$ and average distance errors of -27% (0.5 m) and -29% (0.6 m), respectively.

An Adjacency Effect in Auditory Distance and Loudness Judgments

  • Min, Yoon-Ki;Lee, Kanghee
    • The Journal of the Acoustical Society of Korea
    • /
    • v.19 no.3E
    • /
    • pp.33-39
    • /
    • 2000
  • This study investigated whether the adjacency principle, demonstrated in perceived visual space, can be applied to auditory space. To demonstrate an auditory adjacency principle, multiple sound sources were varied in direction and distance in an acoustically absorbent space. Specifically, a NEAR sound source was located 10° to the left of the listener's midline at a distance of 2 meters; a FAR sound source was located 10° to the right at a distance of 5 meters. These sources served as perceptual reference points for the localization of three test sounds, all at a distance of 3 meters. Two of the three test sounds were directionally closer to the NEAR and FAR reference sounds, respectively; the third lay directionally between the reference sources. The listener was asked to judge the perceived distances and loudness of the three test sounds and the two reference sounds. The results indicated that the apparent distance of each test sound was determined mostly by the disparity in distance between that test sound and the reference sound most directionally adjacent to it. The findings therefore offer evidence that the adjacency principle can be applied to auditory space.

Modeling HRTFs for Customization (맞춤형 머리전달함수 구현을 위한 모델링 기법)

  • Shin, Ki-H.;Park, Young-Jin;Park, Yoon-Shik
    • Proceedings of the Korean Society for Noise and Vibration Engineering Conference
    • /
    • 2005.11a
    • /
    • pp.641-644
    • /
    • 2005
  • This study presents a recent attempt at modeling empirically obtained B&K HATS (Head and Torso Simulator) HRTFs (Head Related Transfer Functions) to isolate the parameters that contribute to lateral and elevation perception. Localization using non-individual HRTFs often yields poor performance when synthesizing virtual sound sources for a group of individuals, owing to differences in the size and shape of the head, pinnae, and torso. To realize virtual audio that is both effective and efficient, a method is needed to tailor a given set of non-individual HRTFs to each listener without measuring his or her own HRTF set. Pole-zero modeling is applied to fit HRIRs (Head Related Impulse Responses), and modeling criteria for determining a suitable number of parameters are suggested for efficient modeling. Horizontal-plane HRTFs are modeled as minimum-phase transfer functions with appropriate ITDs (Interaural Time Delays) obtained from the RTF (Ray Tracing Formula) to better fit the size of the listener's head, for use in simple virtualizer algorithms without complex regularization processes. Results of modeling HRTFs in the median plane are shown, and the parameters responsible for elevation perception are isolated, which can be referred to in future work on developing customizable HRTFs.
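The RTF (Ray Tracing Formula) mentioned in the abstract above is commonly associated with Woodworth's spherical-head model. The paper's exact formulation is not reproduced in the abstract, so the Python sketch below only illustrates that class of formula; the default head radius of 0.0875 m is a conventional assumption, not a value from the paper:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees Celsius

def woodworth_itd(azimuth_deg: float, head_radius_m: float = 0.0875) -> float:
    """Interaural time delay (seconds) from the spherical-head ray-tracing
    model, ITD = (a / c) * (theta + sin(theta)), for azimuths of
    0..90 degrees measured from the median plane."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / SPEED_OF_SOUND) * (theta + math.sin(theta))

print(woodworth_itd(0.0))                # source straight ahead: 0.0
print(round(woodworth_itd(90.0) * 1e6))  # fully lateral source: 656 (microseconds)
```

Fitting the head-radius parameter to each listener, as the abstract suggests, is what adapts a non-individual ITD to an individual head.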

Supplementary Event-Listener Injection Attack in Smart Phones

  • Hidhaya, S. Fouzul;Geetha, Angelina;Kumar, B. Nandha;Sravanth, Loganathan Venkat;Habeeb, A.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.10
    • /
    • pp.4191-4203
    • /
    • 2015
  • WebView is a vital component of smartphone platforms such as Android, Windows, and iOS that enables smartphone applications (apps) to embed a simple yet powerful web browser. WebView not only provides the same functionality as a web browser; more importantly, it enables rich interaction between apps and the webpages loaded inside it. However, the design and features of WebView open a path to tampering with the sandbox protection mechanism implemented by browsers. As a consequence, malicious attacks can be launched either against the apps or by the apps through the exploitation of WebView APIs. This paper presents a critical attack called the Supplementary Event-Listener Injection (SEI) attack, which adds auxiliary event listeners, for executing malicious activities, to the HTML elements in the webpage loaded by the WebView via JavaScript injection. The paper also proposes an automated static analysis system that analyzes WebView-embedded apps to classify the kind of vulnerability they possess, along with a solution for mitigating the attack.
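The abstract does not detail the proposed static analysis system, so the following is only a hypothetical sketch of the general idea: scanning decompiled app sources for WebView usage that enables JavaScript injection. The WebView method names in the patterns are real Android APIs, but the patterns, labels, and sample source are invented for illustration:

```python
import re

# Android WebView calls that, in combination, permit JavaScript injection
# into loaded pages; the risk labels are illustrative only.
RISK_PATTERNS = {
    r"setJavaScriptEnabled\(\s*true\s*\)": "JavaScript execution enabled",
    r"addJavascriptInterface\(": "Java object exposed to page scripts",
    r"loadUrl\(\s*\"javascript:": "direct JavaScript injection via loadUrl",
}

def scan_source(java_source: str) -> list:
    """Return the risk labels whose pattern occurs in the given source text."""
    return [label for pattern, label in RISK_PATTERNS.items()
            if re.search(pattern, java_source)]

sample = '''
webView.getSettings().setJavaScriptEnabled(true);
webView.loadUrl("javascript:document.body.addEventListener('click', leak);");
'''
print(scan_source(sample))
```

A real analyzer of the kind the paper proposes would work on decompiled APK bytecode and model data flow rather than match text, but the classification output (which risky combination an app exhibits) is analogous.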

Towards Size of Scene in Auditory Scene Analysis: A Systematic Review

  • Kwak, Chanbeom;Han, Woojae
    • Korean Journal of Audiology
    • /
    • v.24 no.1
    • /
    • pp.1-9
    • /
    • 2020
  • Auditory scene analysis is defined as a listener's ability to segregate a meaningful message from meaningless background noise in a listening environment. To better understand auditory perception in terms of message integration and segregation ability among concurrent signals, we aimed to systematically review the size of auditory scenes among individuals. A total of seven electronic databases were searched from 2000 to the present with related key terms. Using our inclusion criteria, 4,507 articles were classified according to four sequential steps: identification, screening, eligibility, and inclusion. Following study selection, the quality of the four included articles was evaluated using the CAMARADES checklist. In general, the studies concluded that the size of the auditory scene increased as the number of sound sources increased; however, when the number of sources was five or higher, the listener's auditory scene analysis reached its maximum capability. Unfortunately, the study-quality scores were not very high, and the number of articles available to calculate mean effect size and statistical significance was insufficient to draw firm conclusions. We suggest that further studies use designs and materials that consider realistic listening environments to deepen understanding of the nature of auditory scene analysis in various groups.

Intelligent Robust Base-Station Research in Harsh Outdoor Wilderness Environments for Wildsense

  • Ahn, Junho;Mysore, Akshay;Zybko, Kati;Krumm, Caroline;Lee, Dohyeon;Kim, Dahyeon;Han, Richard;Mishra, Shivakant;Hobbs, Thompson
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.3
    • /
    • pp.814-836
    • /
    • 2021
  • Wildlife ecologists and biologists either recapture deer to collect tracking data from deer collars or wait for a collar designed to detach automatically to drop off. Research teams must maintain a base camp with medical trailers, helicopters, and airplanes to capture deer, or wait several months until the collar drops off the deer's neck. We propose an intelligent, robust base station as a low-cost, time-saving method for transferring recorded sensor data from the collars to a listener node, so that readings are obtained without opening the weatherproof deer collar. We successfully designed and implemented a robust base-station system for automatically collecting data from the collars and listener motes in harsh wilderness environments. Intelligent solutions were also analyzed for improved data collection and pattern prediction with drone-based detection and tracking algorithms.

Verbal Behaviors and Interactions in Processes of Making Written Test Items Using Paired Think Aloud Problem Solving for Pre-Service Secondary Chemistry Teachers (중등 예비 화학교사의 해결자·청취자 활동을 통한 지필평가 문항 제작 과정에서 언어적 행동 및 상호작용)

  • Kang, Hunsik
    • Journal of The Korean Association For Science Education
    • /
    • v.38 no.5
    • /
    • pp.611-623
    • /
    • 2018
  • This study investigated verbal behaviors and interactions in the processes of making written test items using paired think-aloud problem solving for pre-service secondary chemistry teachers. The item-making processes of four small groups, each consisting of two pre-service chemistry teachers, were recorded and transcribed. The analysis revealed that, of the ten subcategories of solver's verbal behaviors, 'item making' was exhibited most frequently, regardless of 'integration' among the components of pedagogical content knowledge (PCK). The solver's 'provide', 'modify', 'require agreement', 'ask', 'agree', and 'justify' were also exhibited frequently, although less so than 'item making'. In particular, the solver's 'ask' was used more frequently in 'non-integration', whereas 'justify' was used more frequently in 'integration'. Of the eight subcategories of listener's verbal behaviors, 'point out', 'ask', and 'agree' were exhibited frequently regardless of 'integration'. The listener's 'ask' and 'agree' were exhibited more in 'non-integration', whereas 'point out' was exhibited more in 'integration'. Verbal interactions were more often of the 'symmetrical type' than of the 'solver-dominant' or 'listener-dominant' types. The 'symmetrical type' was also exhibited more frequently in 'integration', whereas the 'solver-dominant type' was exhibited more frequently in 'non-integration'; there was little difference between 'integration' and 'non-integration' in the 'listener-dominant type'. Among the 23 subcategories of the 'symmetrical type', 'ask-provide' and 'point out-justify' were found most frequently; 'ask-provide' was found more often in 'non-integration', whereas 'point out-justify' was found more often in 'integration'. 'Point out-modify' was the most frequent of the four subcategories of the 'listener-dominant type', while 'item making-agree' was the most frequent of the three subcategories of the 'solver-dominant type', regardless of 'integration'. The other subcategories of the three types were rarely found.

Verbal Interaction in Paired Think-Aloud Problem Solving; Comparison of the Characteristics of Small Groups Based on Achievement (해결자·청취자 활동에서의 언어적 상호작용: 성취도에 의한 소집단별 특성 비교)

  • Taehee Noh;Hunsik Kang;Kyungmoon Jeon
    • Journal of the Korean Chemical Society
    • /
    • v.47 no.5
    • /
    • pp.519-529
    • /
    • 2003
  • This study investigated the characteristics of the verbal interactions of small groups formed on the basis of previous achievement in paired think-aloud problem solving. Two classes of a high school were assigned to homogeneous and heterogeneous groups and taught chemistry. Students from the homogeneous groups (high${\cdot}$high, mid${\cdot}$mid) and the heterogeneous groups (high${\cdot}$mid, high${\cdot}$low) were selected, and their algorithmic problem solving on chemical equations and stoichiometry was audio/video taped. In the high${\cdot}$high group, the solver's 'require agreement' and the listener's 'agree' were exhibited frequently. On the other hand, the listener's 'point out' and the solver's 'modify' were exhibited frequently in the mid${\cdot}$mid group, as they also were in the heterogeneous groups (high${\cdot}$mid, high${\cdot}$low). Many verbal interactions were of the symmetrical type; within this type, 'require agreement-agree' in the high${\cdot}$high group was the most frequent. 'Problem solving-agree' in the high${\cdot}$high group was the most frequent in the solver-dominant type, while 'point out-modify' in the high${\cdot}$low group was the most frequent in the listener-dominant type. Verbal behaviors related to the solving stage were observed frequently, but few related to the reviewing stage were observed.

A Study on the Listener's Emotional Perception of Music According to Harmonic Progression Level (음악의 화음 전개 수준에 따른 감상자의 정서 지각 연구)

  • Ryu, Hae In;Choi, Jin Hee;Chong, Hyun Ju
    • Journal of Music and Human Behavior
    • /
    • v.19 no.1
    • /
    • pp.93-112
    • /
    • 2022
  • The purpose of this study was to compare participants' perceived emotion following harmonic changes in music. In this study, 144 participants, aged 19 to 29 years, listened online to music containing low to high levels of harmonic progression in tonal (major-minor) music. After listening to each piece, participants rated four items on a 7-point Likert scale: emotional potency, arousal, the degree to which the harmony affected the listener's emotions, and the listener's preference for the music. Each of the four items differed significantly depending on the level of harmonic progression. When the participants were divided into two groups (those with and those without a background in music), there was a significant difference between the groups in emotional potency, but no significant interaction effect. This study confirmed that a range of emotional responses can be induced in listeners by controlling exogenous variables in musical excerpts. Based on this, the level of harmonic progression is expected to serve as an effective therapeutic tool in music therapy interventions.