• Title/Summary/Keyword: 멀티 모달 (multimodal)

Ubiquitous Context-aware Modeling and Multi-Modal Interaction Design Framework (유비쿼터스 환경의 상황인지 모델과 이를 활용한 멀티모달 인터랙션 디자인 프레임웍 개발에 관한 연구)

  • Kim, Hyun-Jeong; Lee, Hyun-Jin
    • Archives of design research / v.18 no.2 s.60 / pp.273-282 / 2005
  • In this study, we propose Context Cube, a conceptual model of user context, together with a multi-modal interaction design framework for developing ubiquitous services by understanding user context and analyzing the correlation between context awareness and multi-modality; both are intended to help infer the meaning of context and to offer services that meet user needs. We also developed a case study to verify the validity of Context Cube and applied the proposed interaction design framework to derive a personalized ubiquitous service. Context awareness can be understood as a set of information properties consisting of a user's basic activity, the location of the user and devices (environment), the time, and the user's daily schedule, which enables us to construct a three-dimensional conceptual model, the Context Cube. We also developed a ubiquitous interaction design process that encompasses multi-modal interaction design by studying the features of user interaction represented on the Context Cube. (A minimal sketch of such a context model follows this entry.)

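As a rough illustration of the kind of three-dimensional context model the abstract describes, the sketch below indexes context entries by activity, location, and time slot. The class and field names are hypothetical; only the dimensions named in the abstract (basic activity, location of user and devices, time/daily schedule) are taken from the paper.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a "Context Cube": a 3-D index over the
# dimensions named in the abstract (activity, location, time slot).
# Names and structure are illustrative, not the authors' implementation.

@dataclass
class ContextEntry:
    activity: str      # basic activity of the user, e.g. "cooking"
    location: str      # location of the user and devices, e.g. "kitchen"
    time_slot: str     # time / daily-schedule slot, e.g. "evening"
    services: list = field(default_factory=list)  # candidate services

class ContextCube:
    def __init__(self):
        # cube[(activity, location, time_slot)] -> list of candidate services
        self._cube = {}

    def add(self, entry: ContextEntry):
        key = (entry.activity, entry.location, entry.time_slot)
        self._cube.setdefault(key, []).extend(entry.services)

    def lookup(self, activity, location, time_slot):
        """Infer candidate services for the current user context."""
        return self._cube.get((activity, location, time_slot), [])

cube = ContextCube()
cube.add(ContextEntry("cooking", "kitchen", "evening", ["recipe display"]))
print(cube.lookup("cooking", "kitchen", "evening"))  # ['recipe display']
```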

Implementation of Web Game System using Multi Modal Interfaces (멀티모달 인터페이스를 사용한 웹 게임 시스템의 구현)

  • Lee, Jun; Ahn, Young-Seok; Kim, Jee-In; Park, Sung-Jun
    • Journal of Korea Game Society / v.9 no.6 / pp.127-137 / 2009
  • Web games deliver computer games through a web browser and have several benefits. First, a game can be accessed easily from any web browser with an Internet connection. Second, little local disk space is needed, since large game data do not have to be downloaded. The web game industry now has an opportunity to grow thanks to advances in mobile computing technologies and the Web 2.0 era. This study proposes a web game system in which users manipulate the game through multimodal interfaces and mobile devices for intuitive interaction. Multimodal interfaces are used to control the game efficiently, and both ordinary computers and mobile devices are supported in the game scenarios. The proposed system is evaluated for both performance and user acceptability in comparison with previous approaches. It reduces the total clear time and the number of errors in the mobile-device experiment, and it also yields high user satisfaction.

An Implementation of Multimodal Speaker Verification System using Teeth Image and Voice on Mobile Environment (이동환경에서 치열영상과 음성을 이용한 멀티모달 화자인증 시스템 구현)

  • Kim, Dong-Ju; Ha, Kil-Ram; Hong, Kwang-Seok
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.5 / pp.162-172 / 2008
  • In this paper, we propose a multimodal speaker verification method that uses teeth images and voice as biometric traits for personal verification on mobile terminals. The proposed method acquires the biometric traits with the camera and microphone of a smartphone, one such mobile terminal, and performs verification on them. To improve overall performance, the two biometric authentication scores are combined in a multimodal fashion; the fusion uses a weighted-summation method, which has a comparatively simple structure and good performance given the limited resources of the device. The proposed multimodal speaker authentication system was evaluated on a database of 40 subjects acquired with a smartphone. The experiments show an EER of 8.59% for teeth verification alone and 11.73% for voice verification alone, while the multimodal system achieves an EER of 4.05%. Thus the simple weighted-summation fusion yields better performance than either the teeth or the voice modality on its own. (A minimal sketch of this fusion rule follows this entry.)
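
The weighted-summation fusion mentioned in the abstract can be sketched as follows. The score normalization, weight, and threshold values are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch of weighted-summation score fusion for two biometric
# matchers (teeth image and voice). The min-max normalization and the
# weight w are illustrative assumptions, not values from the paper.

def min_max_normalize(score, lo, hi):
    """Map a raw matcher score into [0, 1]."""
    return (score - lo) / (hi - lo)

def fuse(teeth_score, voice_score, w=0.6):
    """Weighted sum of the two normalized modality scores.

    w weights the teeth matcher; (1 - w) weights the voice matcher.
    """
    return w * teeth_score + (1.0 - w) * voice_score

def verify(teeth_score, voice_score, threshold=0.5, w=0.6):
    """Accept the claimed identity if the fused score clears the threshold."""
    return fuse(teeth_score, voice_score, w) >= threshold

# Example: a borderline voice score is compensated by a strong teeth score.
t = min_max_normalize(82, lo=0, hi=100)
v = min_max_normalize(48, lo=0, hi=100)
print(verify(t, v))  # True with the illustrative weight and threshold
```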

Efficient Emotion Classification Method Based on Multimodal Approach Using Limited Speech and Text Data (적은 양의 음성 및 텍스트 데이터를 활용한 멀티 모달 기반의 효율적인 감정 분류 기법)

  • Mirr Shin; Youhyun Shin
    • The Transactions of the Korea Information Processing Society / v.13 no.4 / pp.174-180 / 2024
  • In this paper, we explore an emotion classification method based on multimodal learning that uses the wav2vec 2.0 and KcELECTRA models. Multimodal learning that leverages both speech and text data is known to improve emotion classification performance significantly compared with methods that rely on speech data alone. To choose the text-processing model, we conduct a comparative analysis of BERT and its derivative models, which are known for their strong performance in natural language processing, and select the one that extracts features from text most effectively. The results confirm that the KcELECTRA model performs best on the emotion classification task. Furthermore, experiments on datasets made available by AI-Hub demonstrate that including text data achieves better performance with less data than using speech data alone. In our experiments, the KcELECTRA-based model achieved the highest accuracy, 96.57%. This indicates that multimodal learning can offer meaningful performance improvements in complex natural language processing tasks such as emotion classification. (A rough sketch of such a speech-text fusion model follows this entry.)
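
A minimal sketch of the kind of speech-text fusion classifier the abstract describes is given below, assuming the public Hugging Face checkpoints facebook/wav2vec2-base and beomi/KcELECTRA-base. The mean pooling, concatenation-based fusion head, and number of emotion classes are assumptions for illustration, not the paper's exact architecture or training setup.

```python
import torch
import torch.nn as nn
from transformers import (AutoModel, AutoTokenizer,
                          Wav2Vec2Model, Wav2Vec2FeatureExtractor)

# Illustrative late-fusion emotion classifier: pooled wav2vec 2.0 speech
# features are concatenated with pooled KcELECTRA text features and fed
# to a small classification head. Checkpoint names, pooling, and the
# number of emotion classes are assumptions, not the paper's setup.

class MultimodalEmotionClassifier(nn.Module):
    def __init__(self, num_classes=7):
        super().__init__()
        self.speech_encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-base")
        self.text_encoder = AutoModel.from_pretrained("beomi/KcELECTRA-base")
        hidden = (self.speech_encoder.config.hidden_size
                  + self.text_encoder.config.hidden_size)
        self.classifier = nn.Sequential(nn.Linear(hidden, 256), nn.ReLU(),
                                        nn.Linear(256, num_classes))

    def forward(self, speech_values, input_ids, attention_mask):
        # Mean-pool each encoder's frame/token representations, then fuse.
        speech = self.speech_encoder(speech_values).last_hidden_state.mean(dim=1)
        text = self.text_encoder(input_ids=input_ids,
                                 attention_mask=attention_mask
                                 ).last_hidden_state.mean(dim=1)
        return self.classifier(torch.cat([speech, text], dim=-1))

# Usage sketch with dummy inputs (1 second of silence, one Korean sentence).
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")
tokenizer = AutoTokenizer.from_pretrained("beomi/KcELECTRA-base")
audio = extractor([torch.zeros(16000).numpy()], sampling_rate=16000,
                  return_tensors="pt").input_values
text = tokenizer(["오늘 정말 행복해요"], return_tensors="pt")
model = MultimodalEmotionClassifier()
logits = model(audio, text.input_ids, text.attention_mask)  # shape (1, num_classes)
```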

3D Shape Comparison Using Modal Strain Energy (모달 스트레인 에너지를 이용한 3차원 형상 비교)

  • 최수미
    • Journal of Korea Multimedia Society / v.7 no.3 / pp.427-437 / 2004
  • Shape comparison between 3D models is essential for shape recognition, retrieval, and classification. In this paper, we propose a method for comparing 3D shapes that is invariant under translation, rotation, and scaling of the models and is robust to non-uniformly distributed and incomplete data sets. First, a modal model is constructed from the input data using vibration modes, and shape similarity is then evaluated with modal strain energy. The proposed method provides a global-to-local ordering of shape deformation by using vibration modes ordered by frequency. We can therefore evaluate similarity in terms of the global properties of a shape, without being affected by localized shape features, using the ordered shape representation and modal strain energy. (The modal quantities involved are summarized after this entry.)

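For reference, one standard way to write the modal quantities behind such a comparison, in the usual finite-element modal analysis notation; this is a background sketch, and the paper's exact similarity measure is not reproduced here.

```latex
% Vibration modes of a stiffness matrix K and mass matrix M (mass-normalized):
K \phi_i = \omega_i^{2} M \phi_i , \qquad \phi_i^{\mathsf T} M \phi_i = 1 .
% A deformation expanded in these modes,
u = \sum_i \tilde{u}_i \phi_i ,
% has strain energy that separates by mode,
E = \tfrac{1}{2}\, u^{\mathsf T} K u
  = \tfrac{1}{2} \sum_i \omega_i^{2}\, \tilde{u}_i^{2} ,
\qquad E_i = \tfrac{1}{2}\, \omega_i^{2}\, \tilde{u}_i^{2} ,
% so low-frequency (global) modes and high-frequency (local) modes contribute
% separately, which is the global-to-local ordering the abstract refers to.
```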

Design of Agent Technology based on Device Collaboration for Personal Multi-modal Services (개인형 멀티모달 서비스를 위한 디바이스 협업 기반 에이전트 기술 설계)

  • Kim, Jae-Su; Kim, Hyeong-Seon; Kim, Chi-Su; Kim, Hwang-Rae; Im, Jae-Hyeon
    • Proceedings of the Korean Society for Emotion and Sensibility Conference / 2009.11a / pp.254-257 / 2009
  • With the arrival of the ubiquitous era, interest in user-centered services is growing, along with demand for personalized services tailored to each user's characteristics. This paper proposes an agent technology based on device collaboration that provides more intuitive, convenient, and personalized services to users through cooperation among the increasingly small and intelligent personal heterogeneous devices in ubiquitous spaces. In this work, information about the user and the user's environment is collected through sensors, and the context information required for basic services is processed. The system also provides the multimodal services that ubiquitous users need. It can therefore deliver high-quality services tailored to personal characteristics, beyond ordinary automated services.

Development of a multimodal interface for mobile phones (휴대폰용 멀티모달 인터페이스 개발 - 키패드, 모션, 음성인식을 결합한 멀티모달 인터페이스)

  • Kim, Won-Woo
    • 한국HCI학회:학술대회논문집 / 2008.02a / pp.559-563 / 2008
  • The purpose of this paper is to introduce a multimodal interface for mobile phones and to verify its feasibility. The multimodal interface integrates multiple input modalities, including speech, keypad, and motion. It can improve the rate and time of speech recognition and shorten the menu depth.

Trends in Smart Device User Authentication Technology Using Multimodal Sensors (멀티모달 센서를 이용한 스마트기기 사용자 인증 기술 동향)

  • Choi, Jongwon; Yi, Jeong Hyun
    • Review of KIISC / v.24 no.3 / pp.7-14 / 2014
  • A smart environment is one in which users access smart-device services through smart devices without temporal or spatial constraints, and it is becoming commonplace with the spread of smart devices. However, various security threats arise at the interface between the user and the smart device when services are provided in such an environment. Moreover, given the nature of smart devices, user input is not easy, and ordinary users are expected to understand specialized terms such as account types and security settings. Recently, to address these problems, research on multimodal interfaces that authenticate users by combining the smart device's various sensors, such as the touchscreen, camera, accelerometer, and fingerprint sensor, has attracted attention. This article therefore surveys trends in smart-device user authentication technologies based on multimodal sensors, aimed at creating a safe and convenient smart environment for interaction between humans and smart devices.