• Title/Summary/Keyword: multimodal interface

An Economic Analysis on Multi-modal Freight System between Korea and China (한ㆍ중간 복합물류시스템 도입의 경제성분석)

  • 이용상;유재균;김경태;최나나
    • Proceedings of the KSR Conference
    • /
    • 2001.10a
    • /
    • pp.54-61
    • /
    • 2001
  • The purpose of this study is to establish an effective multimodal logistics structure in Northeast Asia. To this end, the interface between transport modes, methods for constructing a rail-ferry transport system, system operation, and stepwise implementation strategies were studied. The rail-ferry system is competitive with the present transportation system in the international cargo market between Korea and China, and operating a rail-ferry transportation system between the two countries is a meaningful project in that it gives clients a wider range of choices. For the project to succeed, Korea and China should reach agreements on trade, customs duties, and ports in the coming year.

  • PDF

Virtual Object Generation Technique Using Multimodal Interface With Speech and Hand Gesture (음성 및 손동작 결합 인터페이스를 통한 가상객체의 생성)

  • Kim, Changseob;Nam, Hyeongil;Park, Jong-Il
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2019.06a
    • /
    • pp.147-149
    • /
    • 2019
  • Advances in virtual reality technology have allowed more people to enjoy VR content. Unlike earlier content platforms such as PCs and smartphones, virtual reality requires an interface that can convey three-dimensional information. While the shift from two to three dimensions offers a higher degree of freedom, it also forces users to adapt to an unfamiliar interface. To relieve this inconvenience, this paper proposes an interface that combines speech and hand gestures in virtual reality. The proposed interface implements speech and hand gestures by imitating communication in the real world. Because it mimics real-world communication, users can adapt to the virtual reality platform more easily without additional learning. In addition, through an example of generating virtual objects, the paper shows that the proposed interface can replace existing 3D input devices.

  • PDF
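
A minimal sketch, in the spirit of the paper above, of fusing a recognized speech command with a hand-gesture position to spawn a virtual object. The event classes, the fusion time window, and the shape vocabulary are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical fusion of speech and gesture events into one object-creation command.
from dataclasses import dataclass

@dataclass
class GestureEvent:
    position: tuple      # (x, y, z) of the pointing hand in scene coordinates
    timestamp: float

@dataclass
class SpeechEvent:
    text: str            # e.g. "create cube"
    timestamp: float

SHAPE_WORDS = {"cube", "sphere", "cylinder"}
FUSION_WINDOW = 0.5      # seconds; speech and gesture must roughly co-occur

def fuse(speech: SpeechEvent, gesture: GestureEvent):
    """Return (shape, position) if the two modalities form one creation command."""
    if abs(speech.timestamp - gesture.timestamp) > FUSION_WINDOW:
        return None
    words = speech.text.lower().split()
    if "create" not in words:
        return None
    shape = next((w for w in words if w in SHAPE_WORDS), None)
    return (shape, gesture.position) if shape else None

if __name__ == "__main__":
    cmd = fuse(SpeechEvent("create cube", 1.02), GestureEvent((0.3, 1.1, -0.4), 1.10))
    print(cmd)   # ('cube', (0.3, 1.1, -0.4))
```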

Implementation of Web Game System using Multi Modal Interfaces (멀티모달 인터페이스를 사용한 웹 게임 시스템의 구현)

  • Lee, Jun;Ahn, Young-Seok;Kim, Jee-In;Park, Sung-Jun
    • Journal of Korea Game Society
    • /
    • v.9 no.6
    • /
    • pp.127-137
    • /
    • 2009
  • Web games deliver computer games through a web browser and have several benefits. First, the game can be accessed easily through a web browser wherever an internet connection is available. Second, little local disk space is usually needed, since the game data does not have to be downloaded. The web game industry now has an opportunity to grow thanks to advances in mobile computing and the age of Web 2.0. This study proposes a web game system in which users manipulate the game through multimodal interfaces and mobile devices for intuitive interaction. Multimodal interfaces are used to control the game efficiently, and both ordinary computers and mobile devices are applied to the game scenarios. The proposed system is evaluated for both performance and user acceptability in comparison with previous approaches. On a mobile device, it reduces the total clear time and the number of errors in the experiment, and it also yields high user satisfaction.

  • PDF
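
A purely illustrative sketch of routing input events from several modalities to one set of game commands, which is the kind of multimodal control the paper above describes. The modality names, event names, and command names are assumptions, not the authors' system.

```python
# Hypothetical dispatch table unifying keyboard, touch, tilt, and voice input.
from typing import Optional

GAME_COMMANDS = {
    ("keyboard", "ArrowLeft"):  "move_left",
    ("keyboard", "ArrowRight"): "move_right",
    ("touch",    "swipe_left"): "move_left",
    ("touch",    "swipe_right"): "move_right",
    ("tilt",     "left"):       "move_left",
    ("tilt",     "right"):      "move_right",
    ("voice",    "jump"):       "jump",
}

def dispatch(modality: str, event: str) -> Optional[str]:
    """Map a raw input event from any modality to a single game command."""
    return GAME_COMMANDS.get((modality, event))

if __name__ == "__main__":
    for sample in [("keyboard", "ArrowLeft"), ("tilt", "right"), ("voice", "jump")]:
        print(sample, "->", dispatch(*sample))
```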

A Study on User Experience Factors of Display-Type Artificial Intelligence Speakers through Semantic Network Analysis : Focusing on Online Review Analysis of the Amazon Echo (의미연결망 분석을 통한 디스플레이형 인공지능 스피커의 사용자 경험 요인 연구 : 아마존 에코의 온라인 리뷰 분석을 중심으로)

  • Lee, Jeongmyeong;Kim, Hyesun;Choi, Junho
    • The Journal of the Convergence on Culture Technology
    • /
    • v.5 no.3
    • /
    • pp.9-23
    • /
    • 2019
  • The artificial intelligence speaker market has entered a new era of display-equipped devices. This study analyzed how the user experience of AI speakers differs by usage context depending on whether a display is present. Semantic network analysis was used to determine how the online review texts of the Amazon Echo Show and Echo Plus are structured around different UX issues. Based on the physical and social contexts of the user experience, ego networks were constructed to draw out the major issues. The analysis shows that a gap in users' expectations arises depending on the presence of a display, which can lead to negative experiences. It was also confirmed that the multimodal interface is used more in the kitchen than in the bedroom and can help activate communication among family members. Based on these findings, we propose a user experience strategy to consider for display-type speakers to be launched in Korea in the future.
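
A minimal sketch of the kind of analysis described above: building a word co-occurrence network from review texts and extracting an ego network around a usage-context word. The toy reviews and the focus word "kitchen" are assumptions; the paper's actual corpus and preprocessing are not reproduced here.

```python
# Hypothetical semantic network + ego network extraction with networkx.
from collections import Counter
from itertools import combinations

import networkx as nx

reviews = [
    "great screen for recipes in the kitchen",
    "we watch recipes together in the kitchen",
    "alexa answers questions in the bedroom at night",
    "the display is useful for video calls with family",
]

# Count word co-occurrences within each review.
co_occurrence = Counter()
for review in reviews:
    tokens = sorted(set(review.lower().split()))
    for a, b in combinations(tokens, 2):
        co_occurrence[(a, b)] += 1

G = nx.Graph()
for (a, b), weight in co_occurrence.items():
    G.add_edge(a, b, weight=weight)

# Ego network of one usage-context word: the word plus its direct neighbors.
ego = nx.ego_graph(G, "kitchen")
print(sorted(ego.nodes()))
```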

Design of the emotion expression in multimodal conversation interaction of companion robot (컴패니언 로봇의 멀티 모달 대화 인터랙션에서의 감정 표현 디자인 연구)

  • Lee, Seul Bi;Yoo, Seung Hun
    • Design Convergence Study
    • /
    • v.16 no.6
    • /
    • pp.137-152
    • /
    • 2017
  • This research aims to develop a companion robot experience design for elderly users in Korea, based on a needs-function deployment matrix and research on robot emotion expression in multimodal interaction. First, elderly users' main needs were categorized into four groups based on ethnographic research. Second, the robot's functional elements and physical actuators were mapped to user needs in a needs-function deployment matrix. The final UX design prototype was implemented as a robot with a verbal, non-touch multimodal interface and emotional facial expressions based on Ekman's Facial Action Coding System (FACS). The prototype was validated in a user test session that analyzed the influence of the robot interaction on users' cognition and emotion, using a Story Recall Test and face emotion analysis software (Emotion API), under conditions in which the robot's facial expression matched the emotion of the information it delivered and in which the robot initiated the interaction cycle voluntarily. The group interacting with the emotional robot showed a relatively high recall rate in the delayed recall test, and in the facial expression analysis, the robot's facial expression and interaction initiation affected the emotion and preference of the elderly participants.
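
A minimal sketch of driving facial expressions from emotion labels via FACS action units (AUs), in the spirit of the prototype above. The AU sets are the commonly cited Ekman prototypes; the actuator call is a hypothetical stub, not the authors' robot API.

```python
# Hypothetical emotion-to-AU mapping and a stubbed actuator command.
EMOTION_TO_AUS = {
    "happiness": [6, 12],        # cheek raiser + lip corner puller
    "sadness":   [1, 4, 15],     # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  [1, 2, 5, 26],  # brow raisers + upper lid raiser + jaw drop
    "anger":     [4, 5, 7, 23],  # brow lowerer + lid raiser/tightener + lip tightener
}

def express(emotion: str) -> None:
    """Activate the action units associated with an emotion (stubbed as print)."""
    for au in EMOTION_TO_AUS.get(emotion, []):
        print(f"activate AU{au}")   # stand-in for a real facial actuator command

if __name__ == "__main__":
    express("happiness")
```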

Expression Analysis System of Game Player based on Multi-modal Interface (멀티 모달 인터페이스 기반 플레이어 얼굴 표정 분석 시스템 개발)

  • Jung, Jang-Young;Kim, Young-Bin;Lee, Sang-Hyeok;Kang, Shin-Jin
    • Journal of Korea Game Society
    • /
    • v.16 no.2
    • /
    • pp.7-16
    • /
    • 2016
  • In this paper, we propose a method for effectively detecting specific player behavior. The proposed method detects outlying behavior based on the game players' characteristics, which are captured non-invasively in a general game environment together with keystroke data reflecting repeated patterns. Cameras were used to analyze observed data such as facial expressions and player movements, and the multimodal data from the players was used to analyze high-dimensional game-player data for detecting repeated behavior patterns. A support vector machine was used to detect outlying behaviors efficiently. We verified the effectiveness of the proposed method using games from several genres; the recall rate for the outlying behaviors pre-identified by industry experts was approximately 70%, and repeated behavior patterns could also be analyzed. The proposed method can further be used for feedback on and quantification of various interactive content provided in PC environments.
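
A minimal sketch of the classification step described above: training a support vector machine on multimodal player features to flag outlying behavior. The feature layout and the synthetic data are assumptions; only the use of an SVM follows the paper.

```python
# Hypothetical SVM-based detection of outlying player behavior on toy features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)

# Toy features per play session: [keystroke rate, expression change rate, movement].
normal = rng.normal(loc=[5.0, 0.2, 1.0], scale=0.5, size=(200, 3))
outlying = rng.normal(loc=[9.0, 0.8, 2.5], scale=0.5, size=(40, 3))
X = np.vstack([normal, outlying])
y = np.array([0] * len(normal) + [1] * len(outlying))   # 1 = outlying behavior

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(kernel="rbf", class_weight="balanced").fit(X_train, y_train)
print("recall on outlying class:", recall_score(y_test, clf.predict(X_test)))
```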

Research on Effective Use of A Serious Bio-Game (기능성 Bio-Game의 활용 방안에 관한 연구)

  • Park, Sung-Jun;Lee, Jun;Kim, Jee-In
    • Journal of Korea Game Society
    • /
    • v.9 no.1
    • /
    • pp.93-103
    • /
    • 2009
  • A serious game helps learners recognize problems effectively, grasp and classify the important information needed to solve them, and convey what they have learned. Owing not only to game-like fun but also to this educational effect, serious games can be usefully applied to education and training in science and industrial technology. This study proposes a serious game for biotechnology that users operate through intuitive multimodal interfaces. A stereoscopic monitor is used to render three-dimensional molecular structures, and a multimodal interface is used to control them efficiently. Based on such a system, this study made the docking simulation, one of the important experiments, easy to carry out by applying game factors. For this, we suggested a level-up concept as a game factor that depends on the numbers of objects and users. The proposed system was evaluated by comparing the completion time of a new drug design process for the AIDS virus with that of a previous approach.

  • PDF
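
A purely illustrative reading of the "level-up" game factor mentioned above, which the abstract says depends on the numbers of objects and users; the formula itself is an assumption, not the authors' design.

```python
# Hypothetical level function for the docking-simulation game factor.
def docking_level(num_molecules: int, num_users: int) -> int:
    """Higher levels add more molecular structures and more collaborating users."""
    return 1 + (num_molecules // 2) + (num_users - 1)

if __name__ == "__main__":
    print(docking_level(num_molecules=4, num_users=2))   # level 4
```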

Multimodal Biological Signal Analysis System Based on USN Sensing System (USN 센싱 시스템에 기초한 다중 생체신호 분석 시스템)

  • Noh, Jin-Soo;Song, Byoung-Go;Bae, Sang-Hyun
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.13 no.5
    • /
    • pp.1008-1013
    • /
    • 2009
  • In this paper, we propose a biological signal (body temperature, pulse, respiration rate, and blood pressure) analysis system using wireless sensors. For the analysis, we designed a back-propagation neural network system based on an expert group system. The proposed system consists of a hardware part, including a UStar-2400 ISP and wireless sensors, and a software part running on the host PC, comprising a Knowledge Base module, an Inference Engine module, and a User Interface module. To improve the accuracy of the system, we implemented an FEC (Forward Error Correction) block. For the simulation, we chose 100 data sets from the Knowledge Base module to train the neural network. As a result, we obtained about 95% accuracy on 128 data sets from the Knowledge Base module and about 85% accuracy in experiments with 13 students using the wireless sensors.
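
A minimal sketch of the classification step described above: a small feed-forward (back-propagation) network over the four vital signs. The synthetic data and the normal/abnormal labels are assumptions, not the paper's knowledge base.

```python
# Hypothetical back-propagation classifier over body temperature, pulse,
# respiration rate, and systolic blood pressure.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Columns: temperature (C), pulse (bpm), respiration rate (/min), systolic BP (mmHg).
normal = rng.normal(loc=[36.5, 70, 16, 120], scale=[0.3, 5, 2, 8], size=(100, 4))
abnormal = rng.normal(loc=[38.5, 95, 24, 145], scale=[0.4, 8, 3, 10], size=(100, 4))
X = np.vstack([normal, abnormal])
y = np.array([0] * 100 + [1] * 100)   # 1 = abnormal vital signs

model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=1),
)
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```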

Gaze Detection Using Facial Movement in Multimodal Interface (얼굴의 움직임을 이용한 다중 모드 인터페이스에서의 응시 위치 추출)

  • 박강령;남시욱;한승철;김재희
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 1997.11a
    • /
    • pp.168-173
    • /
    • 1997
  • Research on estimating the direction of a user's attention by extracting gaze can be applied in many areas, typical examples being computer use by people with disabilities, a substitute for the mouse in multi-window environments, a replacement for position-tracking equipment in VR, and view control in teleconferencing systems. Most previous studies have focused on computing the 3D motion (rotation, translation) of the face from input video [1][2], but because of the complex transformations among the monitor, camera, and face coordinate systems, little work has attempted to estimate the user's gaze position on that basis. In this paper, the face region and the eye, nose, and mouth regions within it are extracted from facial video captured in an ordinary office environment, and the gaze position is computed from the positions of the feature points and the changes in the geometric shape they form at the moment the user gazes at a given region of the monitor. To avoid the complex transformations among the three coordinate systems, a neural network (multilayer perceptron) was used. For training, the monitor screen was divided into 15 regions (5 columns by 3 rows), and the feature points extracted while the user gazed at the center of each region were used. To obtain outputs for gaze positions other than the 15 trained ones, a continuous and differentiable (linear) output function was used. Experimental results show that the neural-network-based gaze estimation was more accurate than linear interpolation [3].

  • PDF
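
A minimal sketch of the mapping described above: a multilayer perceptron that maps facial feature-point measurements to a gaze position on a screen divided into a 5 x 3 grid. The synthetic feature vectors are assumptions; the regressor's identity output plays the role of the continuous (linear) output function mentioned in the abstract.

```python
# Hypothetical MLP mapping feature-point measurements to normalized gaze coordinates.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)

# 15 training targets: centers of a 5 (columns) x 3 (rows) grid in normalized screen coords.
centers = np.array([[(c + 0.5) / 5, (r + 0.5) / 3] for r in range(3) for c in range(5)])

# Hypothetical 8-D feature vectors (e.g. eye/nose/mouth point offsets) per gaze target,
# simulated here as a noisy linear function of the target position.
W = rng.normal(size=(2, 8))
X = centers @ W + rng.normal(scale=0.01, size=(15, 8))
y = centers

model = MLPRegressor(hidden_layer_sizes=(16,), solver="lbfgs", max_iter=5000, random_state=2)
model.fit(X, y)
print(model.predict(X[:3]).round(2))   # approximate gaze positions for the first three samples
```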

Methodologies for Enhancing Immersiveness in AR-based Product Design (증강현실 기반 제품 디자인의 몰입감 향상 기법)

  • Ha, Tae-Jin;Kim, Yeong-Mi;Ryu, Je-Ha;Woo, Woon-Tack
    • Journal of the Institute of Electronics Engineers of Korea CI
    • /
    • v.44 no.2 s.314
    • /
    • pp.37-46
    • /
    • 2007
  • In this paper, we propose technologies for enhancing the immersive realization of virtual objects in AR-based product design. Multimodal feedback involving visual, auditory, and tactile channels is well known as a way to enhance immersion when interacting with virtual objects. By adopting tangible objects, we can provide a touch sensation to users. A 3D model of the same scale overlays the whole area of the tangible object so that the marker area is invisible, which further enhances immersion. The hand occlusion problem that arises when virtual objects overlay the user's hands is also partially solved, providing more immersive and natural images. Finally, multimodal feedback itself improves immersion: vibrotactile feedback through pager motors, pneumatic tactile feedback, and sound feedback are all considered. In our scenario, a game-phone model is selected, and the proposed augmented vibrotactile feedback, occlusion-reduced visual effects, and sound feedback are provided to users. These methodologies will contribute to a more immersive realization of conventional AR systems.
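
A purely illustrative sketch of coordinating the feedback channels mentioned above (vibrotactile, pneumatic, sound) when a virtual object is touched. The channel functions are stubs; the paper's actual hardware interfaces are not shown.

```python
# Hypothetical fan-out of one touch event to every feedback modality.
from typing import Callable, List

def vibrotactile(strength: float) -> None:
    print(f"pager-motor vibration at {strength:.1f}")

def pneumatic(strength: float) -> None:
    print(f"pneumatic pressure at {strength:.1f}")

def sound(strength: float) -> None:
    print(f"play contact sound at volume {strength:.1f}")

FEEDBACK_CHANNELS: List[Callable[[float], None]] = [vibrotactile, pneumatic, sound]

def on_virtual_touch(contact_force: float) -> None:
    """Send one touch event to every feedback channel at a clamped strength."""
    strength = min(1.0, contact_force)
    for channel in FEEDBACK_CHANNELS:
        channel(strength)

if __name__ == "__main__":
    on_virtual_touch(contact_force=0.7)
```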