• Title/Summary/Keyword: multimodal interface

Search Results: 54

Performance Improvement of Variable Vocabulary Speech Recognizer (가변어휘 음성인식기의 성능개선)

  • Kim Seunghi;Kim Hoi-Rin
    • Proceedings of the Acoustical Society of Korea Conference / autumn / pp.21-24 / 1999
  • This paper describes work on improving the performance of a variable-vocabulary speech recognizer. A total of 40 context-independent phoneme models, including silence, are used. Performance gains were obtained by using the LDA technique to pack more useful information into feature vectors of the same dimension, and by applying different weights to the Gaussian distributions and the mixture weights during likelihood computation. In experiments using only the ETRI POW 3848 DB, a 21.7% reduction in error rate was confirmed. Experiments were also conducted with the POW 3848 DB, PC 168 DB, and PBW 445 DB to account for noisy and vocabulary-independent environments; the vocabulary-independent recognition experiment on the PBW 445 DB yielded a 56.8% reduction in error rate.
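
Since the abstract hinges on a weighted likelihood computation, a minimal sketch may help. The exponents alpha and beta below are hypothetical names for the separate weights on the mixture weights and the Gaussian densities; the LDA matrix would of course be trained, not random.

```python
# Sketch (not the authors' implementation): LDA projection followed by a
# GMM log-likelihood with separate exponents on weights and densities.
import numpy as np

def lda_project(x, W):
    """Project a feature vector onto LDA directions W (d_out x d_in)."""
    return W @ x

def weighted_gmm_loglik(x, weights, means, variances, alpha=1.0, beta=1.0):
    """log sum_m (w_m ** alpha) * N(x; mu_m, var_m) ** beta  (diagonal covs)."""
    d = x.shape[0]
    # per-mixture diagonal-Gaussian log densities
    log_norm = -0.5 * (d * np.log(2 * np.pi) + np.sum(np.log(variances), axis=1))
    log_gauss = log_norm - 0.5 * np.sum((x - means) ** 2 / variances, axis=1)
    scored = alpha * np.log(weights) + beta * log_gauss
    m = scored.max()
    return m + np.log(np.exp(scored - m).sum())  # log-sum-exp for stability

# toy usage: 3-mixture model over a 2-dim LDA-projected feature
rng = np.random.default_rng(0)
W = rng.standard_normal((2, 4))        # a real LDA matrix would be trained
x = lda_project(rng.standard_normal(4), W)
weights = np.array([0.5, 0.3, 0.2])
means = rng.standard_normal((3, 2))
variances = np.ones((3, 2))
print(weighted_gmm_loglik(x, weights, means, variances, alpha=0.8, beta=1.2))
```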


Multimodal Interface Control Module for Immersive Virtual Education (몰입형 가상교육을 위한 멀티모달 인터페이스 제어모듈)

  • Lee, Jaehyub;Im, SungMin
    • The Journal of Korean Institute for Practical Engineering Education / v.5 no.1 / pp.40-44 / 2013
  • This paper suggests a multimodal interface control module that allows a student to interact naturally with educational content in a virtual environment. The suggested module recognizes a user's motion as he/she interacts with the virtual environment and then conveys that motion to the virtual environment via wireless communication. Furthermore, a haptic actuator is incorporated into the proposed module to generate haptic information. Thanks to the proposed module, a user can haptically sense a virtual object as if it existed in the real world.
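
As a rough illustration of the module's architecture, here is a minimal sketch: a recognized motion is packaged and sent over a wireless link, and a haptic actuator is driven when the virtual environment reports contact. The message format, endpoint, and driver call are all hypothetical.

```python
# Sketch only: UDP link to a virtual-environment host plus a stubbed actuator.
import json
import socket

VE_ADDR = ("192.168.0.10", 9000)  # hypothetical virtual-environment endpoint
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_motion(motion_name, position):
    """Convey a recognized user motion to the virtual environment."""
    msg = {"motion": motion_name, "pos": position}
    sock.sendto(json.dumps(msg).encode(), VE_ADDR)

def on_collision(event):
    """When the VE reports contact with a virtual object, drive the actuator."""
    intensity = min(1.0, event.get("force", 0.0))  # clamp to actuator range
    drive_haptic_actuator(intensity)

def drive_haptic_actuator(intensity):
    print(f"vibrate at {intensity:.2f}")  # stand-in for real hardware I/O

send_motion("grasp", [0.1, 0.2, 0.3])  # fire-and-forget UDP send
on_collision({"force": 0.6})           # simulated contact event from the VE
```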


Automatic Human Emotion Recognition from Speech and Face Display - A New Approach (인간의 언어와 얼굴 표정에 통하여 자동적으로 감정 인식 시스템 새로운 접근법)

  • Luong, Dinh Dong;Lee, Young-Koo;Lee, Sung-Young
    • Proceedings of the Korean Information Science Society Conference / 2011.06b / pp.231-234 / 2011
  • Audiovisual-based human emotion recognition can be considered a good approach for multimodal human-computer interaction. However, optimal multimodal information fusion remains a challenge. To overcome these limitations and bring robustness to the interface, we propose a framework for an automatic human emotion recognition system based on speech and face display. In this paper, we develop a new approach to model-level information fusion, based on the relationship between speech and facial expression, to automatically detect temporal segments and perform multimodal information fusion.
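
One common form of model-level fusion is a weighted log-linear combination of the per-modality class posteriors. The sketch below illustrates that general idea, not the authors' trained fusion model; the emotion classes and weights are assumptions.

```python
# Sketch: fuse speech and face emotion posteriors as p ~ p_s^w_s * p_f^w_f.
import numpy as np

EMOTIONS = ["anger", "happiness", "sadness", "surprise", "neutral"]

def fuse(speech_post, face_post, w_speech=0.4, w_face=0.6):
    """Log-linear combination of the two modality posteriors, renormalized."""
    log_p = w_speech * np.log(speech_post) + w_face * np.log(face_post)
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

speech_post = np.array([0.10, 0.55, 0.15, 0.10, 0.10])  # from the audio model
face_post   = np.array([0.05, 0.70, 0.05, 0.10, 0.10])  # from the face model
fused = fuse(speech_post, face_post)
print(EMOTIONS[int(np.argmax(fused))], fused.round(3))
```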

Recent Research Trend in Skin-Inspired Soft Sensors with Multimodality (피부 모사형 다기능 유연 센서의 연구 동향)

  • Lee, Seung Goo;Choi, Kyung Ho;Shin, Gyo Jic;Lee, Hyo Sun;Bae, Geun Yeol
    • Journal of Adhesion and Interface / v.21 no.4 / pp.162-167 / 2020
  • Skin-inspired multimodal soft sensors have been developed through multidisciplinary approaches to mimic the highly sensitive, mechanically durable sensing ability of human skin. For practical applications, discriminability among the various mechanical and thermal stimuli that make up the complex stimuli experienced in daily life is essential, yet current sensors still achieve only a low level of it. In this paper, we first introduce the operating mechanisms and representative studies of unimodal soft sensors, and then discuss recent research trends in multimodal soft sensors and their stimulus discriminability.

Human body learning system using multimodal and user-centric interfaces (멀티모달 사용자 중심 인터페이스를 적용한 인체 학습 시스템)

  • Kim, Ki-Min;Kim, Jae-Il;Park, Jin-Ah
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.85-90 / 2008
  • This paper describes a human body learning system that uses a multimodal user interface. Through our learning system, students can study human anatomy interactively. Existing learning methods rely on one-way materials such as images, text, and movies. Instead, we propose a new learning system that combines 3D organ surface models, a haptic interface, and a hierarchical data structure of human organs (see the sketch below) to deliver enhanced learning that exploits sensorimotor skills.
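
The hierarchical data structure of human organs mentioned above might look something like the following sketch; the node fields and the sample hierarchy are illustrative only.

```python
# Sketch: an organ tree whose nodes carry a 3D surface model reference.
from dataclasses import dataclass, field

@dataclass
class OrganNode:
    name: str
    mesh_file: str = ""                      # path to the 3D surface model
    children: list["OrganNode"] = field(default_factory=list)

    def find(self, name):
        """Depth-first lookup so the UI can jump to any organ by name."""
        if self.name == name:
            return self
        for child in self.children:
            hit = child.find(name)
            if hit:
                return hit
        return None

body = OrganNode("human body", children=[
    OrganNode("thorax", children=[
        OrganNode("heart", "heart.obj"),
        OrganNode("lungs", "lungs.obj"),
    ]),
    OrganNode("abdomen", children=[OrganNode("liver", "liver.obj")]),
])
print(body.find("heart").mesh_file)
```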


Multi-Modal Interface Design for Non-Touch Gesture Based 3D Sculpting Task (비접촉식 제스처 기반 3D 조형 태스크를 위한 다중 모달리티 인터페이스 디자인 연구)

  • Son, Minji;Yoo, Seung Hun
    • Design Convergence Study / v.16 no.5 / pp.177-190 / 2017
  • This research aims to suggest a multimodal non-touch gesture interface design that improves the usability of 3D sculpting tasks. Users' sculpting tasks and procedures were analyzed across multiple settings, from physical sculpting to computer software. The optimal body posture, design process, work environment, gesture-task relationships, and combinations of designers' natural hand gestures and arm movements were defined. Existing non-touch 3D software was also examined, and natural gesture interactions, visual UI metaphors, and affordances for behavior guidance were designed. A prototype of the gesture-based 3D sculpting system was developed to validate its intuitiveness and learnability against current software. The suggested gestures performed better in terms of understandability, memorability, and error rate. The results show that gesture interface design for productivity systems should reflect users' natural experience in the prior work domain and provide appropriate visual and behavioral metaphors.
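
The gesture-task relationship such a study derives can be pictured as a lookup from (hand pose, arm movement) pairs to sculpting operations; the specific gestures and operations below are assumptions for illustration.

```python
# Sketch: a gesture-to-task mapping table for non-touch 3D sculpting.
SCULPT_GESTURES = {
    ("pinch", "pull"):      "extrude",    # pinch then pull outward
    ("pinch", "push"):      "indent",
    ("open_palm", "sweep"): "smooth",     # flat hand sweeping over the surface
    ("fist", "drag"):       "grab_move",
}

def resolve_task(hand_pose, arm_motion):
    """Map a (hand pose, arm movement) pair to a sculpting operation."""
    return SCULPT_GESTURES.get((hand_pose, arm_motion), "no_op")

print(resolve_task("pinch", "pull"))   # -> extrude
```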

Prediction of Concrete Pumping Using Various Rheological Models

  • Choi, Myoung Sung;Kim, Young Jin;Kim, Jin Keun
    • International Journal of Concrete Structures and Materials / v.8 no.4 / pp.269-278 / 2014
  • When concrete is transported through a pipe, a lubrication layer forms at the interface between the concrete and the pipe wall and is the major factor facilitating concrete pumping. A possible mechanism explaining the formation of this layer is shear-induced particle migration, and determining the rheological parameters is paramount for simulating concrete flow in a pipe. In this study, numerical simulations considering various rheological models of shear-induced particle migration were conducted and compared with 170 m full-scale pumping tests. It was found that a multimodal viscosity model representing concrete as a three-phase suspension consisting of cement paste, sand, and gravel can accurately simulate the lubrication layer. Moreover, accounting for the particle shapes of the concrete constituents through an increased intrinsic viscosity predicts the pipe flow of pumped concrete more exactly.
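
A three-phase suspension viscosity can be estimated by applying a suspension law phase by phase: sand in paste gives mortar, gravel in mortar gives concrete. The sketch below uses the Krieger-Dougherty relation with illustrative numbers; it shows the multiscale idea, not the paper's calibrated model.

```python
# Sketch: multimodal (three-phase) viscosity estimate, applied sequentially.
def krieger_dougherty(eta_medium, phi, phi_max, intrinsic_visc=2.5):
    """Effective viscosity of particles at volume fraction phi in a medium."""
    return eta_medium * (1.0 - phi / phi_max) ** (-intrinsic_visc * phi_max)

eta_paste = 5.0  # Pa.s, cement paste (assumed value)
# sand suspended in paste -> mortar; angular particles raise intrinsic viscosity
eta_mortar = krieger_dougherty(eta_paste, phi=0.45, phi_max=0.60, intrinsic_visc=3.0)
# gravel suspended in mortar -> concrete
eta_concrete = krieger_dougherty(eta_mortar, phi=0.40, phi_max=0.55, intrinsic_visc=3.2)
print(f"mortar ~ {eta_mortar:.1f} Pa.s, concrete ~ {eta_concrete:.1f} Pa.s")
```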

Multimodal biometrics system using PDA under ubiquitous environments (유비쿼터스 환경에서 PDA를 이용한 다중생체인식 시스템 구현)

  • Kwon Man-Jun;Yang Dong-Hwa;Kim Yong-Sam;Lee Dae-Jong;Chun Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems / v.16 no.4 / pp.430-435 / 2006
  • In this paper, we propose a multimodal biometrics system using face and signature under ubiquitous computing environments. First, face and signature images are captured on a PDA and transmitted, together with the user's ID and name, via WLAN (wireless LAN) to the server; the PDA then receives the verification result from the server. The multimodal biometrics recognition system consists of two parts. In the client part, located on the PDA, a user interface program executes the user registration and verification process. On the server, face recognition based on the PCA and LDA algorithms shows excellent performance, and signature recognition applies Kernel PCA and LDA to signature images projected onto the vertical and horizontal axes by a grid partition method. The proposed algorithm is evaluated on several face and signature images and shows better recognition and verification results than previous unimodal biometrics techniques.
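
A minimal sketch of the server-side PCA+LDA matching stage, written with scikit-learn rather than the paper's own implementation; the image sizes, enrollment data, and cosine-distance threshold are assumptions.

```python
# Sketch: PCA for dimensionality reduction, LDA for discriminative embedding.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 32 * 32))   # stand-in for 100 enrolled face images
y = np.repeat(np.arange(10), 10)          # 10 users, 10 images each

pca = PCA(n_components=30).fit(X)
lda = LinearDiscriminantAnalysis(n_components=9).fit(pca.transform(X), y)

def embed(img_vec):
    """Project an image vector through PCA then LDA."""
    return lda.transform(pca.transform(img_vec.reshape(1, -1)))[0]

def verify(probe, gallery, threshold=0.5):
    """Accept if cosine distance between embeddings is under the threshold."""
    a, b = embed(probe), embed(gallery)
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return (1.0 - cos) < threshold

print(verify(X[0], X[1]))   # same enrolled user -> ideally True
```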

Design of Multimodal User Interface using Speech and Gesture Recognition for Wearable Watch Platform (착용형 단말에서의 음성 인식과 제스처 인식을 융합한 멀티 모달 사용자 인터페이스 설계)

  • Seong, Ki Eun;Park, Yu Jin;Kang, Soon Ju
    • KIISE Transactions on Computing Practices / v.21 no.6 / pp.418-423 / 2015
  • As technology advances at exceptional speed, the functions of wearable devices become more diverse and complicated, and many users find some of them difficult to use. The main aim of this paper is to provide the user with an interface that is friendlier and easier to use. Speech recognition is easy to use and makes it easy to issue commands. However, it is problematic on a wearable device with limited computing power and battery capacity: the device cannot predict when the user will issue a spoken command, so speech recognition would have to stay active at all times, which the battery cannot sustain. To solve this problem, we use gesture recognition. This paper describes how to combine speech and gesture recognition into a multimodal interface that increases the user's comfort.
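
The battery-saving idea reduces to gating the recognizer: a cheap, always-on accelerometer gesture wakes the expensive speech path for a bounded window. The sketch below stubs out both detectors; none of the calls correspond to a real device API.

```python
# Sketch: gesture-gated speech recognition loop for a wearable device.
import time

LISTEN_WINDOW_S = 5.0   # how long ASR stays active after a wake gesture

def wrist_raise_detected():
    """Stub: would threshold accelerometer pitch to spot a 'raise to talk'."""
    return True  # pretend the gesture just happened

def run_asr_once():
    """Stub: would stream microphone audio to the recognizer."""
    return "set alarm seven"

def main_loop():
    while True:
        if wrist_raise_detected():          # cheap check, runs continuously
            deadline = time.monotonic() + LISTEN_WINDOW_S
            while time.monotonic() < deadline:
                command = run_asr_once()    # expensive path, bounded in time
                if command:
                    print("command:", command)
                    break
        break  # single pass for the sketch; a device would loop forever

main_loop()
```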

User's Emotional Touch Recognition Interface Using Non-contact Touch Sensor and Accelerometer (비접촉식 터치센서와 가속도센서를 이용한 사용자의 감정적 터치 인식 인터페이스 시스템)

  • Koo, Seong-Yong;Lim, Jong-Gwan;Kwon, Dong-Soo
    • Proceedings of the HCI Society of Korea Conference / 2008.02a / pp.348-353 / 2008
  • This paper proposes a novel touch interface for recognizing a user's touch patterns and extracting emotional information by eliciting natural user interaction. To classify physical touches, we represent the similarity between touches by analyzing them according to their dictionary meanings, and we design an algorithm that recognizes various touch patterns in real time. Finally, we suggest a methodology for estimating the user's emotional state based on touch.
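
One simple way to realize the real-time classification step is to describe each touch with a few features from the two sensors and match it to labeled touch words by nearest neighbour; the features and prototype values below are assumptions.

```python
# Sketch: nearest-neighbour matching of touch features to touch-word prototypes.
import numpy as np

# (duration s, mean proximity, accelerometer peak g) prototypes per touch word
PROTOTYPES = {
    "pat":    np.array([0.3, 0.8, 1.5]),
    "stroke": np.array([1.2, 0.9, 0.3]),
    "hit":    np.array([0.1, 0.7, 3.0]),
}

def classify(duration, proximity, accel_peak):
    """Return the touch word whose prototype is closest to the measurement."""
    x = np.array([duration, proximity, accel_peak])
    return min(PROTOTYPES, key=lambda k: np.linalg.norm(x - PROTOTYPES[k]))

print(classify(0.25, 0.85, 1.4))   # -> pat
```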
