• Title/Summary/Keyword: Auditory model


Implementation of a Physical Ear Model for Determining the Location of the Microphone of a Fully Implantable Middle Ear Hearing Device (완전 이식형 인공중이용 마이크로폰의 위치 결정을 위한 물리적 귀 모델의 구현)

  • Kim, D.W.; Seong, K.W.; Lim, H.K.; Kim, M.W.; Jung, E.S.; Lee, J.W.; Lee, M.W.; Lee, J.H.; Kim, M.N.; Cho, J.H.
    • Journal of Rehabilitation Welfare Engineering & Assistive Technology / v.2 no.1 / pp.27-33 / 2009
  • Generally, the microphone of an implantable middle ear hearing device (IMEHD) has been implanted in the temporal bone. In this case, the microphone's membrane can be damaged and biological noise can be generated. To overcome these problems, the location of the implanted microphone should be changed; as an alternative, the microphone can be implanted in the external auditory canal. However, sound emission can occur because the vibration transducer drives the tympanic membrane, which radiates sound in the reverse direction into the external auditory canal. In this paper, the amount of emitted sound is measured with a probe microphone while the microphone position is varied along the external auditory canal of a physical ear model whose acoustic and vibratory properties are similar to those of the human ear. From the measured values, a candidate location for the microphone in the external auditory canal is estimated. According to the analysis, the sound reaching the microphone decreases as the microphone is placed farther from the tympanic membrane along the auditory canal. However, the external auditory canal is not an appropriate position for the implantable microphone, because the sound emission is not completely eliminated. (A brief illustrative sketch of this kind of level-versus-position analysis follows below.)

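As a minimal, hypothetical sketch of the kind of level-versus-position analysis described in the abstract above, the Python snippet below converts probe-microphone RMS pressures into dB SPL at a few canal positions; the distances and pressure values are illustrative placeholders, not measurements from the paper.

    # Sketch: convert probe-microphone RMS pressures (Pa) measured at several
    # positions in the ear-canal model into sound pressure levels (dB SPL),
    # to tabulate emitted-sound attenuation versus distance.
    # The distances and pressures below are hypothetical placeholders.
    import math

    P_REF = 20e-6  # reference pressure for dB SPL, 20 micropascals

    def spl_db(p_rms_pa: float) -> float:
        """Sound pressure level in dB SPL from an RMS pressure in pascals."""
        return 20.0 * math.log10(p_rms_pa / P_REF)

    # (distance from the tympanic membrane in mm, measured RMS pressure in Pa)
    measurements = [(2, 0.20), (6, 0.11), (10, 0.05), (14, 0.02)]

    baseline = spl_db(measurements[0][1])
    for dist_mm, p_rms in measurements:
        level = spl_db(p_rms)
        print(f"{dist_mm:2d} mm: {level:5.1f} dB SPL "
              f"(attenuation {baseline - level:4.1f} dB re closest position)")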

Developing the Design Guideline of Auditory User Interface for Domestic Appliances (가전제품의 청각 사용자 인터페이스(AUI) 설계를 위한 가이드라인 개발 연구)

  • Lee, Ju-Hwan; Jeon, Myoung-Hoon; Ahn, Jeong-Hee; Han, Kwang-Hee
    • 한국HCI학회:학술대회논문집 (Korean HCI Society Conference Proceedings) / 2006.02b / pp.1-8 / 2006
  • This study aimed to establish a cognitive and emotional 'Auditory User Interface (AUI) Design Guideline' that can be differentiated according to home appliance product groups and their functions, and to present guidelines for producing auditory signals that can be intuitively associated with the operating-function information of home appliances, thereby bringing into practice a design method that extends beyond GUI-centered product design and reflects users' multimodal characteristics. In particular, the study also considered the goal of enhancing brand identity and corporate image by establishing a systematic framework for AUI. This research was needed because of the demand for approaching home appliances from the perspective of consumers' mental models and emotion; it starts from the frequent cases in which buzzer-type auditory signals cause annoyance owing to arbitrary mapping rather than systematic application of AUI. It also reflects the need to upgrade AUI, which has not kept pace with the changes and level of GUI, and the trend toward emotional marketing in home appliances. In addition, it is an attempt suited to situations in which multimodal displays are required by the rapid spread of multimedia environments. This study sought to extract the relationships by which the cognitive and emotional attributes of a specific appliance or function are evoked by various attributes of auditory signals, to present empirical data on the underlying mechanisms, and thereby to provide a useful guideline for the AUI design of home appliances. However, rather than specific, detailed results, this paper introduces the overall plan and the procedure of the research process, in order to provide a reference framework for research in related fields.


Modeling and Analysis of Eardrum using FEM (고막의 유한요소 모델링 및 해석)

  • 강희용; 김봉철; 이동헌; 임재중; 전병훈
    • Proceedings of the Korean Society of Machine Tool Engineers Conference / 2000.10a / pp.495-499 / 2000
  • The auditory system is divided into the outer ear, middle ear, and inner ear; the middle ear plays an important role in amplifying sound during transmission. By analyzing the middle ear, we can understand disease and compare abnormal auditory systems. However, mechanical modeling and analysis of the middle ear have been reported in only a few papers. In this paper, a three-dimensional eardrum model of the human ear was developed and analyzed with the general-purpose finite-element program Nastran. The vibration patterns of the eardrum obtained from the FEM analysis are in agreement with experimental results measured with a stroboscope. (A simplified membrane-mode sketch follows below.)

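The paper's three-dimensional Nastran model is not reproduced here; as a rough, assumption-laden illustration of membrane vibration modes, the sketch below computes the natural frequencies of an idealized flat circular membrane (its radius, tension, and surface density are made-up values, and a real eardrum is conical and inhomogeneous).

    # Sketch: natural frequencies of an idealized flat circular membrane as a
    # crude stand-in for eardrum vibration modes (the paper uses a full 3-D
    # Nastran FEM model; the radius, tension, and surface density here are
    # illustrative assumptions, not measured eardrum properties).
    import math
    from scipy.special import jn_zeros

    a = 4.0e-3      # membrane radius [m] (roughly eardrum scale, assumed)
    T = 35.0        # membrane tension [N/m] (assumed)
    sigma = 1.0e-2  # surface density [kg/m^2] (assumed)

    c = math.sqrt(T / sigma)  # transverse wave speed on the membrane

    for m in range(3):                    # angular mode index
        zeros = jn_zeros(m, 3)            # first three zeros of Bessel J_m
        for n, alpha in enumerate(zeros, start=1):
            f = alpha * c / (2.0 * math.pi * a)
            print(f"mode ({m},{n}): {f:8.1f} Hz")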

Developing the Design Guideline of Auditory User Interface for Digital Appliances (가전제품의 청각 사용자 인터페이스(AUI) 디자인을 위한 가이드라인 개발 사례)

  • Lee, Ju-Hwan; Jeon, Myoung-Hoon; Han, Kwang-Hee
    • Science of Emotion and Sensibility / v.10 no.3 / pp.307-320 / 2007
  • In this study, we attempted to provide a distinctive cognitive and emotional 'Auditory User Interface (AUI) Design Guideline' for home appliance groups and their functions. It is an effort to bring a new design method into practice, overcoming the limits of GUI-centered appliance design and reflecting users' multimodal characteristics, by presenting a guideline for generating auditory signals that are intuitively associable with operational functions. This study is needed because of frequent instances of annoyance caused by arbitrary mapping rather than systematic application of AUI. It aims to provide a useful AUI guideline for home appliances by extracting the relations between the cognitive and emotional properties of a given device or function and the auditory-signal properties that evoke them, and by presenting empirical data on the underlying mechanism of these relations.


Auditory Information and Planning of Reactive Interface (리액티브-인터페이스설계와 청각정보)

  • 김상식
    • Archives of Design Research / v.20 / pp.123-132 / 1997
  • These days we live in a society in which the expansion and variety of information continue to grow with the evolution of technology. However, because much information exists in a black box, we have difficulty using the information embodied in products. For example, companies do not give much consideration to the user's situation when presenting information, and they tend to rely too heavily on LCDs. But the limitations of monitor screens and the trend toward miniaturization burden users in obtaining visual information. Accordingly, the objective of this paper is to study the extent to which auditory information can support the user interface, to compare it with visual information, and eventually to find ways of using auditory information as a means of expression.


Mechanism and Neuroanatomy of Auditory Hallucination (환청의 기전과 신경해부학)

  • Lee, Seung-Hwan; Suh, Kwang-Yoon
    • Sleep Medicine and Psychophysiology / v.8 no.2 / pp.98-106 / 2001
  • Auditory hallucinations are a cardinal feature of psychosis, but the mechanism of hallucinated speech is unknown. The influential hypothesis is that these hallucinations arise from a pathologically altered brain monitoring system underlying speech perception. With the help of rapidly developing neuroimaging technologies, many researchers have found new organic deficits in the brains of hallucinating schizophrenic patients. In this article, we review the general features of hallucination, a computer simulation model of hallucination, and several neuroimaging findings on hallucinating schizophrenic patients. In conclusion, we present a presumptive mechanism of hallucination based on the anatomical dysfunction of schizophrenia.


Evaluation of Stimulus Strategy for Cochlear Implant Using Neurogram (Neurogram을 이용한 인공와우 자극기법 평가 연구)

  • Yang, Hyejin; Woo, Jihwan
    • Journal of Biomedical Engineering Research / v.34 no.2 / pp.47-54 / 2013
  • In a cochlear implant system, electrical stimulation is delivered to the auditory nerve (AN) through the electrodes. A neurogram is a spectrogram-like representation that contains information about the neural response to electrical stimulation. We hypothesized that the similarity between a neurogram and the input-sound spectrogram could indicate how well a cochlear implant system works. In this study, we evaluated the electrical stimulus configuration of the CIS strategy using a computational model that includes the stochastic properties and anatomical features of cat auditory nerve fibers. To evaluate the similarity between a neurogram and the input-sound spectrogram, we calculated the Structural Similarity Index (SSIM). The results show that the dynamic range and the per-channel stimulation rate influenced the SSIM. Finally, we suggest the optimal configuration within the given CIS stimulus. We expect that these results and the evaluation procedure could be employed to improve the performance of a cochlear implant system. (A minimal SSIM-comparison sketch follows below.)
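
A minimal sketch of the neurogram-versus-spectrogram comparison idea, assuming both representations are available as equally sized 2-D arrays; here the "neurogram" is just a noisy copy of the spectrogram standing in for a real auditory-nerve model output, and scikit-image's structural_similarity is used in place of the paper's SSIM computation.

    # Sketch: comparing an input-sound spectrogram with a neurogram via SSIM.
    # Both arrays are placeholders; in practice the neurogram would come from
    # an auditory-nerve model driven by the CIS electrode stimulation pattern.
    import numpy as np
    from scipy.signal import spectrogram
    from skimage.metrics import structural_similarity

    fs = 16000
    t = np.arange(0, 1.0, 1.0 / fs)
    sound = np.sin(2 * np.pi * 440 * t)              # toy input sound

    _, _, spec = spectrogram(sound, fs=fs, nperseg=256, noverlap=128)
    spec_db = 10 * np.log10(spec + 1e-12)

    # Placeholder "neurogram": a noisy copy of the spectrogram standing in
    # for modeled auditory-nerve discharge rates on the same time-frequency grid.
    rng = np.random.default_rng(0)
    neurogram = spec_db + rng.normal(scale=3.0, size=spec_db.shape)

    score = structural_similarity(
        spec_db, neurogram,
        data_range=spec_db.max() - spec_db.min(),
    )
    print(f"SSIM between spectrogram and neurogram: {score:.3f}")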

The System of Converting Muscular Sense into both Color and Sound based on the Synesthetic Perception (공감각인지 기반 근감각신호에서 색·음으로의 변환 시스템)

  • Bae, Myung-Jin; Kim, Sung-Ill
    • Journal of the Korean Institute of Intelligent Systems / v.24 no.5 / pp.462-469 / 2014
  • As a basic study on both engineering applications and representation methods of synesthesia, this paper aims at building a basic system that converts a muscular sense into visual and auditory elements. Data for the muscular sense are acquired as roll and pitch signals calculated from a three-axis acceleration sensor and a two-axis gyro sensor. The roll and pitch signals are then converted into visual and auditory information as outputs: the roll signals are mapped to the intensity element of the HSI color model and to octaves as an auditory element, while the pitch signals are mapped to the hue element of the HSI color model and to scales as another auditory element. Each extracted HSI element is converted into the corresponding element of the RGB color model, so that real-time color output signals can be obtained. Octaves and scales are also converted and synthesized into MIDI signals, so that real-time sound signals can be obtained as another output. In experiments, normal color and sound output signals were successfully obtained from roll and pitch values representing muscular sense or physical movement, following a conversion relationship based on the similarity between color and sound. (A small mapping sketch follows below.)
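
A small sketch of the kind of roll/pitch-to-color-and-note mapping described above; the sensor ranges, the linear mappings, and the use of HSV as a stand-in for HSI are assumptions for illustration, not the paper's actual conversion rules.

    # Sketch of a roll/pitch -> color and MIDI-note mapping.
    # The angle range [-90, 90] degrees and the linear scalings are assumed.
    import colorsys

    def roll_pitch_to_color_and_note(roll_deg: float, pitch_deg: float):
        # Normalize roll/pitch from [-90, 90] degrees to [0, 1] (assumed range).
        r = (roll_deg + 90.0) / 180.0
        p = (pitch_deg + 90.0) / 180.0

        # Roll -> intensity and octave; pitch -> hue and scale degree.
        hue = p                      # HSI hue element (0..1)
        saturation = 1.0             # fixed for simplicity
        intensity = r                # HSI intensity element (0..1)

        # HSI is approximated here with HSV from the standard library.
        rgb = tuple(round(255 * c)
                    for c in colorsys.hsv_to_rgb(hue, saturation, intensity))

        octave = int(r * 7)          # one of 8 octaves, chosen by roll
        scale_degree = int(p * 11)   # one of 12 semitones, chosen by pitch
        midi_note = 12 * (octave + 1) + scale_degree   # MIDI note number

        return rgb, midi_note

    print(roll_pitch_to_color_and_note(30.0, -45.0))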

Implementation of Mutual Conversion System between Body Movement and Visual·Auditory Information (신체 움직임-시·청각 정보 상호변환 시스템의 구현)

  • Bae, Myung-Jin; Kim, Sung-Ill
    • Journal of IKEEE / v.22 no.2 / pp.362-368 / 2018
  • This paper implements a mutual conversion system between body motion signals and visual and auditory signals. The study is based on intentional synesthesia that can be perceived through learning. Euler angles, output by a wearable armband (Myo), were used to represent body movement; as the muscle sense, roll, pitch, and yaw signals were used. As visual and auditory signals, the HSI (Hue, Saturation, Intensity) color model and MIDI (Musical Instrument Digital Interface) signals were used, respectively. The mutual conversion between body motion signals and the visual and auditory signals was made easy to infer by applying a one-to-one correspondence. Simulation results, obtained with ROS (Robot Operating System) and the 3D simulation tool Gazebo, showed that input motion signals could be compared with output simulation signals, enabling mutual conversion between body motion information and visual and auditory information. (A sketch of the reverse mapping follows below.)
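
To illustrate the "mutual" aspect, the sketch below shows the reverse direction, recovering Euler angles from an HSI-style color and a MIDI note under an assumed linear one-to-one correspondence; the specific mapping is hypothetical and not taken from the paper.

    # Sketch: reverse conversion from color/sound back to body-motion angles,
    # under an assumed linear one-to-one correspondence (not the paper's rules).

    def color_and_note_to_euler(hue: float, intensity: float, midi_note: int):
        """Recover roll/pitch/yaw (degrees) from HSI hue and intensity in
        [0, 1] and a MIDI note number in [0, 127]."""
        roll = intensity * 180.0 - 90.0            # intensity <-> roll
        pitch = hue * 180.0 - 90.0                 # hue <-> pitch
        yaw = (midi_note / 127.0) * 180.0 - 90.0   # note number <-> yaw
        return roll, pitch, yaw

    print(color_and_note_to_euler(hue=0.25, intensity=0.67, midi_note=62))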

Auditory Neural Information Processing Modeling for Speech Recognition (음성인식을 위한 청각신경 정보처리 모델링)

  • Lee, Hee-Kyu; Lee, Kwang-Hyung
    • The Journal of the Acoustical Society of Korea / v.9 no.3 / pp.42-47 / 1990
  • A neural auditory system is studied with the aim of building better speech recognition systems. The cochlear mechanics is described, and an IIR digital filter model of the basilar membrane is discussed for speech recognition. A multi-layer model of consonant recognition using phoneme-detection filters and discriminant functions for feature estimation is constructed. This model shows a recognition rate of more than 90% for consonants. (A rough filterbank sketch follows below.)

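The paper's own IIR basilar-membrane filter design is not given here; as a crude illustrative front end, the sketch below builds a bank of second-order IIR band-pass filters with logarithmically spaced centre frequencies and reports per-channel energies for a toy chirp signal (the channel count, Q, and frequency span are assumptions).

    # Sketch: a bank of IIR band-pass filters with logarithmically spaced
    # centre frequencies as a crude basilar-membrane front end for speech
    # analysis. The filter order, Q, and channel count are illustrative
    # choices, not the design used in the paper.
    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 16000
    n_channels = 16
    centers = np.geomspace(100.0, 6000.0, n_channels)   # Hz, rough cochlear span

    def make_channel(fc, q=4.0, order=2):
        """Second-order-section band-pass filter around centre frequency fc."""
        low = fc / (1.0 + 1.0 / (2.0 * q))
        high = fc * (1.0 + 1.0 / (2.0 * q))
        return butter(order, [low, high], btype="bandpass", fs=fs, output="sos")

    filterbank = [make_channel(fc) for fc in centers]

    # Toy input: a short chirp standing in for a speech segment.
    t = np.arange(0, 0.5, 1.0 / fs)
    signal = np.sin(2 * np.pi * (200 + 4000 * t) * t)

    # Channel energies: which "place" along the simulated membrane responds most.
    energies = [np.sum(sosfilt(sos, signal) ** 2) for sos in filterbank]
    for fc, e in zip(centers, energies):
        print(f"{fc:7.1f} Hz channel energy: {e:10.3f}")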