• Title/Summary/Keyword: visual cues

Search Results: 140

L2 Proficiency Effect on the Acoustic Cue-Weighting Pattern by Korean L2 Learners of English: Production and Perception of English Stops

  • Kong, Eun Jong;Yoon, In Hee
    • Phonetics and Speech Sciences
    • /
    • v.5 no.4
    • /
    • pp.81-90
    • /
    • 2013
  • This study explored how Korean L2 learners of English utilize multiple acoustic cues (VOT and F0) in perceiving and producing the English alveolar stop voicing contrast. Thirty-four 18-year-old high-school students participated in the study. Their English proficiency was classified as either 'high' (HEP) or 'low' (LEP) according to a standardized high-school English level. Thirty different synthesized syllables, combining six VOT steps and five F0 steps, were presented as audio stimuli. The listeners judged how close each stimulus was to /t/ or /d/ in the L2 using a visual analogue scale. The L2 /d/ and /t/ productions collected from 22 of the learners (12 HEP, 10 LEP) were acoustically analyzed by measuring VOT and F0 at vowel onset. Results showed that LEP listeners attended to the F0 in the stimuli more sensitively than HEP listeners, suggesting that HEP listeners were better able to inhibit less important acoustic dimensions in their L2 perception. The L2 production patterns also exhibited a group difference: HEP speakers utilized the VOT dimension (the primary cue in the L2) more effectively than LEP speakers. Taken together, the study showed that relative cue-weighting strategies in L2 perception and production are closely related to the learner's L2 proficiency level, in that more proficient learners had better control in inhibiting and enhancing the relevant acoustic parameters.
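The cue-weighting analysis this abstract describes, i.e. how strongly visual-analogue-scale responses track each acoustic dimension, is often estimated with a regression fit. The sketch below mirrors the abstract's 6 VOT-step × 5 F0-step stimulus grid, but the responses and weights are simulated for illustration; they are not the study's data or its exact statistical model:

```python
import numpy as np

# Hypothetical 6 x 5 stimulus grid (VOT steps x F0 steps), one VAS response per cell
vot_steps = np.repeat(np.arange(6), 5)
f0_steps = np.tile(np.arange(5), 6)

# Simulate a listener who weights VOT heavily and F0 weakly (illustrative numbers)
rng = np.random.default_rng(2)
vas = 0.8 * vot_steps + 0.1 * f0_steps + rng.normal(0, 0.1, 30)

# Ordinary least squares: intercept + one coefficient per acoustic cue
X = np.column_stack([np.ones(30), vot_steps, f0_steps])
coef, *_ = np.linalg.lstsq(X, vas, rcond=None)
b0, w_vot, w_f0 = coef

# Relative cue weight for VOT: fraction of total absolute weight carried by VOT
rel_vot = abs(w_vot) / (abs(w_vot) + abs(w_f0))
```

A listener whose `rel_vot` is near 1 relies almost entirely on VOT; a smaller value indicates stronger attention to the redundant F0 cue, the pattern the study associates with lower proficiency.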

Individual differences in categorical perception: L1 English learners' L2 perception of Korean stops

  • Kong, Eun Jong
    • Phonetics and Speech Sciences
    • /
    • v.11 no.4
    • /
    • pp.63-70
    • /
    • 2019
  • This study investigated individual variability in L2 learners' categorical judgments of L2 stops by exploring English learners' perceptual processing of two acoustic cues (voice onset time [VOT] and f0) and working memory capacity as sources of variation. As prior research has reported that English speakers' greater use of the redundant cue f0 was responsible for gradient processing of native stops, we examined whether the same processing characteristics would be observed in L2 learners' perception of Korean stops (/t/-/th/). Twenty-two English learners of L2 Korean with a range of L2 proficiency participated in a visual analogue scaling task and demonstrated variable manners of judging the L2 Korean stops: some were more gradient than others in performing the task. Correlation analysis revealed that L2 learners' categorical responses were modestly related to individuals' utilization of a primary cue for the stop contrast (VOT for L1 English stops and f0 for L2 Korean stops), and were also related to better working memory capacity. Together, the current experimental evidence demonstrates adult L2 learners' top-down processing of stop consonants, in which linguistic and cognitive resources are devoted to determining abstract phonemic identity.

Analysis of Nurses' Soothing Behaviors in Neonatal Intensive Care Unit: Focused on Babies with Bronchopulmonary Dysplasia (신생아 중환자실 환아 달래기시 나타나는 간호사 행위 분석: 기관지폐이형성증 환아 중심으로)

  • Lee, Yu-Nah;Shin, Hyunsook
    • Child Health Nursing Research
    • /
    • v.23 no.4
    • /
    • pp.494-504
    • /
    • 2017
  • Purpose: The aim of this study was to analyze Neonatal Intensive Care Unit nurses' behaviors while soothing newborns with bronchopulmonary dysplasia. Methods: An observational study was used to assess nurses' soothing behaviors. Data were collected from September 2012 to March 2013 using an audio-video recording system. Participants were eight babies and the 12 nurses caring for them. After obtaining parental permission, each episode was recorded in full, from the nurse's engagement in soothing to the end of soothing. A researcher then interviewed each participating nurse. Data from 18 episodes were transcribed as verbal and nonverbal nursing behaviors and then categorized by two researchers. Results: There were 177 observed soothing behaviors, classified into five sensory-based categories (tactile, oral, visual, auditory, vestibular). The most frequently observed soothing behavior was 'Gently talking', followed by 'Removing irritant' and 'Providing non-nutritive sucking'. Nurses' perceived soothing behaviors were similar to the observed behaviors except for 'Gently talking'. Conclusion: Nurses used diverse and mixed soothing behaviors and recognized them as essential nursing skills. The soothing behaviors identified in this study can be used to comfort babies and to enhance their developmental potential in accordance with individual characteristics or cues.

Heave Motion Estimation of a Ship Deck for Shipboard Landing of a VTOL UAV (수직이착륙 무인기 함상 착륙점의 상하 운동 추정)

  • Cho, Am;Yoo, Changsun;Kang, Youngshin;Park, Bumjin
    • Journal of Aerospace System Engineering
    • /
    • v.8 no.3
    • /
    • pp.14-19
    • /
    • 2014
  • When a helicopter lands on a ship deck in high sea states, one of the main difficulties is the ship motion induced by sea waves. In the case of a manned helicopter, the pilot lands during a quiescent period of the ship motion, which is perceived from various visual cues around the landing spot. The capability to predict this quiescent period is very important, especially for shipboard recovery of a VTOL UAV in harsh environments. This paper describes how to predict the heave motion of a ship for shipboard landing of a VTOL UAV. For simulation, ship motion due to sea waves was generated using a 4,000-ton-class US destroyer model. Heave motion of the ship deck was predicted by applying an auto-regression method to the generated time series of ship motion.
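The abstract does not give the auto-regression order or data, but the basic idea, fitting AR coefficients to a heave time series by least squares and then recursively extrapolating, can be sketched as follows. The model order and the synthetic sinusoid standing in for wave-driven deck motion are illustrative assumptions:

```python
import numpy as np

def fit_ar(x, p):
    """Fit AR(p) coefficients by least squares: x[t] ~ sum_k a[k] * x[t-1-k]."""
    X = np.column_stack([x[p - 1 - k:len(x) - 1 - k] for k in range(p)])
    y = x[p:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def predict_ar(x, a, steps):
    """Recursively predict future samples from the last p observations."""
    hist = list(x[-len(a):][::-1])  # most recent sample first
    preds = []
    for _ in range(steps):
        nxt = float(np.dot(a, hist))
        preds.append(nxt)
        hist = [nxt] + hist[:-1]
    return np.array(preds)

# Synthetic heave: a sinusoid standing in for wave-driven deck motion (10 Hz sampling)
t = np.arange(0, 60, 0.1)
heave = np.sin(0.8 * t)
a = fit_ar(heave, p=4)
future = predict_ar(heave, a, steps=20)  # 2-second-ahead forecast
```

In practice the forecast would run on measured deck motion, and a landing would be triggered when the predicted heave stays within a small band, i.e. during a predicted quiescent period.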

Enhancing Immersiveness in Video see-through HMD based Immersive Model Realization (Video see-through HMD 기반 실감 모델 재현시의 몰입감 향상 방법론)

  • Ha, Tae-Jin;Kim, Yeong-Mi;Ryu, Je-Ha;Woo, Woon-Tack
    • Proceedings of the IEEK Conference
    • /
    • 2006.06a
    • /
    • pp.685-686
    • /
    • 2006
  • Recently, various AR-based product design methodologies have been introduced. In this paper, we propose technologies for robust augmentation and immersive realization of virtual objects. A robust augmentation technology is developed for various lighting conditions, and a partial solution is proposed for the hand occlusion problem that occurs when virtual objects overlay the user's hands. This provides more immersive and natural images to users. Finally, vibratory haptic cues from pager motors, as well as button-clicking force feedback from modulated pneumatic pressure, are proposed for interaction with virtual widgets. Our system also reduces gaps between modeling spaces and user spaces. An immersive game-phone model is selected to demonstrate that users can control the direction of a car in a racing game by tilting a tangible object, with the proposed augmented haptic and robust non-occluded visual feedback. The proposed methodologies will contribute to the immersive realization of conventional AR systems.


Deep Multi-task Network for Simultaneous Hazy Image Semantic Segmentation and Dehazing (안개영상의 의미론적 분할 및 안개제거를 위한 심층 멀티태스크 네트워크)

  • Song, Taeyong;Jang, Hyunsung;Ha, Namkoo;Yeon, Yoonmo;Kwon, Kuyong;Sohn, Kwanghoon
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.9
    • /
    • pp.1000-1010
    • /
    • 2019
  • Image semantic segmentation and dehazing are key tasks in computer vision. In recent years, research on both tasks has achieved substantial performance improvements with the development of Convolutional Neural Networks (CNNs). However, most previous works on semantic segmentation assume images captured in clear weather and show degraded performance on hazy images with low contrast and faded color. Meanwhile, dehazing aims to recover a clear image from an observed hazy image, an ill-posed problem that can be alleviated with additional information about the image. In this work, we propose a deep multi-task network for simultaneous semantic segmentation and dehazing. The proposed network takes a single hazy image as input and predicts a dense semantic segmentation map and a clear image. The visual information refined during the dehazing process can help the recognition task of semantic segmentation. Conversely, semantic features obtained during segmentation can provide cues about color priors for objects, which can help the dehazing process. Experimental results demonstrate the effectiveness of the proposed multi-task approach, showing improved performance compared to separate networks.
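The shared-encoder/two-head structure the abstract describes can be shown schematically with a toy numpy forward pass. All shapes and weights below are hypothetical, and the "encoder" is reduced to a per-pixel linear map; this is not the paper's CNN, only the multi-task wiring:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_relu(x, w):
    """Per-pixel linear map (a 1x1 'convolution') + ReLU, standing in for a CNN encoder."""
    return np.maximum(x @ w, 0.0)

# Hypothetical sizes: H x W hazy RGB input, C shared feature channels, K semantic classes
H, W, C, K = 8, 8, 16, 5
hazy = rng.random((H, W, 3))

w_shared = rng.standard_normal((3, C)) * 0.1   # shared encoder weights (both tasks)
w_seg    = rng.standard_normal((C, K)) * 0.1   # segmentation head
w_dehaze = rng.standard_normal((C, 3)) * 0.1   # dehazing head

feat = conv_relu(hazy, w_shared)               # features shared by both tasks
seg_logits = feat @ w_seg                      # per-pixel class scores
seg_map = seg_logits.argmax(axis=-1)           # dense segmentation map
dehazed = 1.0 / (1.0 + np.exp(-(feat @ w_dehaze)))  # restored image, sigmoid into (0, 1)
```

The point of the shared `feat` tensor is the abstract's mutual-benefit argument: gradients from both heads would update the same encoder, so each task regularizes the features the other relies on.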

Real-time 3D Audio Downmixing System based on Sound Rendering for the Immersive Sound of Mobile Virtual Reality Applications

  • Hong, Dukki;Kwon, Hyuck-Joo;Kim, Cheong Ghil;Park, Woo-Chan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.12
    • /
    • pp.5936-5954
    • /
    • 2018
  • Eight of the ten largest technology companies in the world have been involved in some way with the coming mobile VR revolution since Facebook acquired Oculus. This trend has allowed technology related to mobile VR to achieve remarkable growth in both academia and industry. Reproducing acoustic expression realistically is therefore increasingly important, because auditory cues can enhance the perception of a complicated surrounding environment beyond what the visual system provides in VR. This paper presents a hardware-based audio downmixing system for auralization, a stage of the sound rendering pipeline that can reproduce reality-like sound but requires high computation costs. The proposed system is verified on an FPGA platform, with special focus on hardware architectural designs for low power and real-time operation. The results show that the proposed system on an FPGA can downmix up to 5 sources at a real-time rate (52 FPS) with low power consumption (382 mW). Furthermore, the 3D sound generated with the proposed system was verified via user evaluation, with satisfactory sound-quality results.
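The paper's auralization stage performs hardware sound rendering; as a far simpler software stand-in for the downmixing step, the sketch below mixes several mono sources into stereo with constant-power panning. The panning convention, source count (matching the "5 sources" figure), and signals are all illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def downmix_stereo(sources, azimuths):
    """Mix mono sources into a stereo pair using constant-power panning.

    sources:  (n, samples) array of mono signals
    azimuths: pan position per source in [-1 (full left), +1 (full right)]
              (a hypothetical convention for this sketch)
    """
    theta = (np.asarray(azimuths) + 1.0) * np.pi / 4.0  # map [-1, 1] -> [0, pi/2]
    gains_l, gains_r = np.cos(theta), np.sin(theta)     # gl^2 + gr^2 = 1 (constant power)
    left = gains_l @ sources                            # weighted sum over sources
    right = gains_r @ sources
    return np.stack([left, right])

# Five hypothetical sources, one second of audio at 48 kHz
rng = np.random.default_rng(1)
srcs = rng.standard_normal((5, 48000))
stereo = downmix_stereo(srcs, [-1.0, -0.5, 0.0, 0.5, 1.0])
```

Constant-power gains keep a source's perceived loudness roughly stable as it pans, which is why the weights are cosine/sine pairs rather than linear ramps.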

Metaverse and Media Richness: The Effect of UI Design on User Experience (메타버스와 미디어 풍요성: UI 디자인이 사용자 경험에 미치는 영향)

  • Song, Stephen W.;Hwang, Dongwook;Chung, Donghun
    • Knowledge Management Research
    • /
    • v.23 no.2
    • /
    • pp.83-98
    • /
    • 2022
  • The current study investigated the effect of user interface design on metaverse users' objective task performance, perceived usefulness, presence, and enjoyment. Using a 2 × 2 within-subject repeated-measures experimental design, we found that users perceived the interface to be significantly more useful when additional visual cues existed, and performed better on the given task. Additionally, a repeated-measures mediation analysis revealed that the effect of richer information in the interface on users' enjoyment as a function of need satisfaction was mediated by perceived usefulness. Theoretical and practical implications are derived from the results of the current study.

A Study on the Interactive Narrative - Focusing on the analysis of VR animation <Wolves in the Walls> (인터랙티브 내러티브에 관한 연구 - VR 애니메이션 <Wolves in the Walls>의 분석을 중심으로)

  • Zhuang Sheng
    • Trans-
    • /
    • v.15
    • /
    • pp.25-56
    • /
    • 2023
  • VR is a dynamic image simulation technology with very high information density. Its spatial depth, temporality, and realism bring an unprecedented sense of immersion to the experience. However, because of its high information density, the information it contains is very easy to manipulate, creating an illusion of objectivity. Users need guidance to help them interpret this high density of dynamic image information. Just as games set up navigation interfaces and interactivity, interactivity in virtual reality is a way to interpret virtual content. At present, domestic research on VR content is mainly focused on technology exploration and visual aesthetic experience; research on interactive storytelling design, an important part of VR content creation, is still lacking. In order to explore a better interactive storytelling model for virtual reality content, this paper analyzes the interactive storytelling features of the VR animated version of <Wolves in the Walls> through literature review and case study. We find that the following rules can be followed when creating VR content: 1. The VR environment should fully utilize the advantage of free movement, and users should not be viewed as mere observers. The user's sense of presence should be fully considered when designing interaction modules. Break down the "fourth wall" to encourage audience interaction in the virtual reality environment, and make the hot medium of VR "cool". 2. Provide a developer-driven narrative in the early stages of the work so that users are not confused about an ambiguous world situation when they first enter a virtual environment with a high degree of freedom. 3. Unlike games that guide users through text, guide them through a more natural interactive approach that adds natural dialogue between the user and story characters (NPCs). Also, since gaze guidance is an important part of story progression, spatial-scene gaze guidance elements should be set up, for example eye-following cues, motion cues, and language cues. By analyzing the interactive storytelling features and innovations of the VR animation <Wolves in the Walls>, I hope to summarize the main elements of interactive storytelling from its content, explore how to better present interactive storytelling in virtual reality content, and offer thoughts on future VR content creation.

Short Term Weight Control Program of Obese Female College Students through Food Consumption Monitoring Using Mobile Phone Equipped with Camera (비만 여대생을 대상으로 카메라가 장착된 모바일 폰을 이용한 음식섭취 모니터링 강화를 통한 단기간 체중조절)

  • Jung, Eun-Young;Hong, Yang-Hee;Kim, Young-Suk;Kim, Yun-Joo;Chang, Un-Jae
    • Journal of the Korean Dietetic Association
    • /
    • v.16 no.4
    • /
    • pp.369-377
    • /
    • 2010
  • This study was conducted to investigate the effects of food consumption monitoring based on a digital photography method using a mobile phone on food consumption and weight reduction. Eighteen female college students (>30% body fat) participated in the weight control program using a mobile-phone for 4 wks. The energy intake was reduced significantly after 3 wks compared to baseline (P<0.05, baseline: 1,453.0 kcal, 3rd wk: 1,171.1 kcal, 4th wk: 1,130.8 kcal). The subjects lost 2.8 kg of body weight, 1.4% of % body fat, and 1.1 $kg/m^2$ of body mass index (BMI) after 4 wks. There were also significant differences in blood pressure (P<0.001) and serum cholesterol (total cholesterol: P<0.05, LDL-cholesterol: P<0.01) before and after the self-regulated diet program. In this study, the digital photography method using a mobile-phone influenced weight control through trained consumption monitoring, which helps individuals reduce discrepancies between perceived and actual consumption levels. Therefore, effective monitoring by taking food pictures using a mobile-phone can lead individuals to rely more heavily on easy-to-monitor visual cues.