• Title/Summary/Keyword: Multimodal System (멀티모달시스템)

Search Results: 116

Sensitivity Lighting System Based on Multimodal (멀티모달 기반의 감성 조명 시스템)

  • Kwon, Sun-Min;Jung, In-Bum
    • Journal of the Korea Institute of Information and Communication Engineering / v.16 no.4 / pp.721-729 / 2012
  • In this paper, human sensibility is measured in a multimodal environment, and a sensitivity lighting system is implemented according to the derived emotional indexes. We use LED lighting because it is ecologically friendly, highly efficient, and long-lived; in particular, a single LED bulb can produce various color schemes. To recognize human sensibility, we use image information and arousal-state information, combine them on a multimodal basis, and calculate emotional indexes. In experiments, as the LED lighting color varies with the user's emotional index, we show that the system is more human-friendly than existing lighting systems.
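The abstract above describes driving LED color from a computed emotional index. A minimal sketch of such a mapping is shown below; the index range, the blue-to-red hue scheme, and the function name are all illustrative assumptions, since the paper's actual index-to-color mapping is not given in the abstract:

```python
import colorsys

def led_color(emotion_index: float) -> tuple:
    """Map a normalized emotional index in [0, 1] to an LED RGB triple.

    Hypothetical scheme for illustration only: low indexes (calm) map
    toward blue hues, high indexes (aroused) toward red hues.
    """
    emotion_index = min(max(emotion_index, 0.0), 1.0)  # clamp to [0, 1]
    hue = (1.0 - emotion_index) * 2.0 / 3.0  # 2/3 (blue) down to 0 (red)
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return (round(r * 255), round(g * 255), round(b * 255))
```

A single RGB LED bulb can then render the full calm-to-aroused range by sweeping this hue, which matches the abstract's point that one LED bulb supports various color schemes.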

Home Automation Control with Multi-modal Interfaces for Disabled Persons (장애인을 위한 멀티모달 인터페이스 기반의 홈 네트워크 제어)

  • Park, Hee-Dong
    • Journal of Digital Convergence / v.12 no.2 / pp.321-326 / 2014
  • The need for IT accessibility for disabled persons has increased in recent years, so it is very important to support multimodal interfaces such as voice and vision recognition, TTS, etc. In this paper, we deal with IT accessibility issues in home networks and present our implemented home network control system with multimodal interfaces, including voice recognition and animated user interfaces.

The Effect of AI Agent's Multi Modal Interaction on the Driver Experience in the Semi-autonomous Driving Context : With a Focus on the Existence of Visual Character (반자율주행 맥락에서 AI 에이전트의 멀티모달 인터랙션이 운전자 경험에 미치는 효과 : 시각적 캐릭터 유무를 중심으로)

  • Suh, Min-soo;Hong, Seung-Hye;Lee, Jeong-Myeong
    • The Journal of the Korea Contents Association / v.18 no.8 / pp.92-101 / 2018
  • As interactive AI speakers become popular, voice recognition is regarded as an important vehicle-driver interaction method in autonomous driving situations. The purpose of this study is to confirm whether multimodal interaction, in which feedback is delivered through both the auditory mode and a visual AI character on screen, optimizes user experience better than the auditory mode alone. Participants performed music selection and adjustment tasks through an AI speaker while driving, and we measured information and system quality, presence, perceived usefulness and ease of use, and continuance intention. The analysis showed no multimodal effect of the visual character on most user experience factors, nor on continuance intention; rather, the auditory-only mode was more effective than the multimodal mode for information quality. In the semi-autonomous driving stage, which demands the driver's cognitive effort, multimodal interaction is thus not more effective than single-mode interaction for optimizing user experience.

High-Quality Multimodal Dataset Construction Methodology for ChatGPT-Based Korean Vision-Language Pre-training (ChatGPT 기반 한국어 Vision-Language Pre-training을 위한 고품질 멀티모달 데이터셋 구축 방법론)

  • Jin Seong;Seung-heon Han;Jong-hun Shin;Soo-jong Lim;Oh-woog Kwon
    • Annual Conference on Human and Language Technology / 2023.10a / pp.603-608 / 2023
  • This study examines the need for a large-scale vision-language multimodal dataset for training Korean Vision-Language Pre-training models. Korean vision-language multimodal datasets are currently scarce, and high-quality data is difficult to obtain. We therefore propose a dataset construction methodology that uses machine translation to translate foreign-language (English) vision-language data into Korean and then applies generative AI on top of it. Among various caption generation methods, we propose a new approach that uses ChatGPT to automatically generate natural, high-quality Korean captions. This guarantees better caption quality than existing machine translation methods, and we ensemble multiple translation results to build the multimodal dataset effectively. In addition, we introduce Caption Projection Consistency, a semantic-similarity-based evaluation method, compare English-to-Korean caption projection performance across translation systems, and present criteria for evaluating it. Finally, this study presents a new methodology for building a Korean image-text multimodal dataset with ChatGPT and demonstrates English-to-Korean caption projection performance superior to that of representative machine translators. Our work thus shows a direction for automatically constructing scarce high-quality Korean datasets at scale, and we expect it to contribute to improving the performance of deep-learning-based Korean Vision-Language Pre-training models.
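The Caption Projection Consistency idea above scores how well a projected (translated) caption preserves the meaning of the source caption via semantic similarity. The sketch below is only a toy proxy using bag-of-words cosine similarity; the paper's actual metric presumably relies on multilingual sentence embeddings, and the function names here are assumptions:

```python
import math
from collections import Counter

def cosine_sim(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def caption_projection_consistency(src_caption: str, projected_caption: str) -> float:
    # Toy proxy: compare captions as bags of lowercase tokens.
    # A real system would embed both captions (e.g. with a multilingual
    # sentence encoder) so that cross-lingual pairs can be compared.
    return cosine_sim(Counter(src_caption.lower().split()),
                      Counter(projected_caption.lower().split()))
```

Ranking translation systems by the average of this score over a caption set is one way to operationalize the paper's comparison of English-to-Korean caption projection performance.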

Implementation of Web Game System using Multi Modal Interfaces (멀티모달 인터페이스를 사용한 웹 게임 시스템의 구현)

  • Lee, Jun;Ahn, Young-Seok;Kim, Jee-In;Park, Sung-Jun
    • Journal of Korea Game Society / v.9 no.6 / pp.127-137 / 2009
  • Web games deliver computer games through a web browser and have several benefits. First, the game can be accessed easily from a web browser in any internet-connected environment. Second, the game data usually does not require much local disk space for downloading. The web game industry now has an opportunity to grow through advances in mobile computing technologies and the age of Web 2.0. This study proposes a web game system in which users manipulate the game through multimodal interfaces and mobile devices for intuitive interaction. Multimodal interfaces are used to control the game efficiently, and both ordinary computers and mobile devices are applied to the game scenarios. The proposed system is evaluated for both performance and user acceptability in comparison with previous approaches: it reduces the total clear time and the number of errors in the mobile-device experiment, and it also yields good user satisfaction.

An Implementation of Multimodal Speaker Verification System using Teeth Image and Voice on Mobile Environment (이동환경에서 치열영상과 음성을 이용한 멀티모달 화자인증 시스템 구현)

  • Kim, Dong-Ju;Ha, Kil-Ram;Hong, Kwang-Seok
    • Journal of the Institute of Electronics Engineers of Korea CI / v.45 no.5 / pp.162-172 / 2008
  • In this paper, we propose a multimodal speaker verification method that uses teeth images and voice as biometric traits for personal verification on mobile terminal equipment. The proposed method acquires the biometric traits through the image and sound input devices of a smartphone and performs verification with them. To enhance overall performance, the two biometric authentication scores are combined in a multimodal fashion using a weighted-summation method, which has a comparatively simple structure and superior performance given the system's limited resources. The proposed multimodal speaker authentication system was evaluated on a database acquired with a smartphone from 40 subjects. The experiments show an EER of 8.59% for teeth verification and 11.73% for voice verification, while the multimodal result achieves an EER of 4.05%. Thus, the simple weighted-summation fusion yields better performance than either teeth or voice verification alone.
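The weighted-summation fusion described in this abstract can be sketched in a few lines; the weight value below is illustrative (the paper would tune it on its own 40-subject data), and scores are assumed to be normalized to [0, 1] before fusion:

```python
def fuse_scores(teeth_score: float, voice_score: float, w: float = 0.5) -> float:
    """Weighted-sum fusion of two biometric matching scores.

    Sketch of the weighted-summation approach named in the abstract;
    the default weight is an assumption, not the paper's tuned value.
    """
    return w * teeth_score + (1.0 - w) * voice_score

# Example: weight the teeth modality more heavily than voice.
fused = fuse_scores(0.8, 0.6, w=0.7)  # 0.8*0.7 + 0.6*0.3 = 0.74
```

The appeal of this scheme on a resource-limited phone is exactly what the abstract claims: one multiply-add per modality, no second classifier, yet a lower EER than either modality alone.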

The design of Multi-modal system for the realization of DARC system controller (DARC 시스템 제어기 구현을 위한 멀티모달 시스템 설계)

  • 최광국;곽상훈;하얀돌이;김유진;김철;최승호
    • Proceedings of the IEEK Conference / 2000.09a / pp.179-182 / 2000
  • This paper designs a multimodal system for a DARC system controller by combining a speech recognizer and a lip recognizer. A database of the 22 words used in the DARC system was built, and the recognizers were designed using HMMs. The recognition probabilities of the two modalities were combined with an 8:2 weighting, under the assumption that the speech recognizer has a higher recognition rate than the lip recognizer, and fusion was performed at the post-recognition probability level. For the inter-system interface, a communication module was designed and implemented using TCP/IP sockets, and performance was evaluated both on a test database and in real-time experiments with five speakers.

Design of Lightweight Artificial Intelligence System for Multimodal Signal Processing (멀티모달 신호처리를 위한 경량 인공지능 시스템 설계)

  • Kim, Byung-Soo;Lee, Jea-Hack;Hwang, Tae-Ho;Kim, Dong-Sun
    • The Journal of the Korea institute of electronic communication sciences / v.13 no.5 / pp.1037-1042 / 2018
  • Neuromorphic technology, which learns and processes information by imitating the human brain, has been researched for decades. Hardware implementations of neuromorphic systems use highly parallel processing structures composed of many simple computational units, achieving high processing speed, low power consumption, and low hardware complexity. Recently, interest in neuromorphic technology for low-power, small embedded systems has been growing rapidly. To implement low-complexity hardware, the input data dimensionality must be reduced without accuracy loss. This paper proposes a low-complexity artificial intelligence engine consisting of parallel neuron engines and a feature extractor; the engine comprises a number of neuron engines and a controller to process multimodal sensor data. We verified the performance of the proposed neuron engine, including the designed artificial intelligence engines, the feature extractor, and a Micro Controller Unit (MCU).

Multi-modal Meteorological Data Fusion based on Self-supervised Learning for Graph (Self-supervised Graph Learning을 통한 멀티모달 기상관측 융합)

  • Hyeon-Ju Jeon;Jeon-Ho Kang;In-Hyuk Kwon
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.589-591 / 2023
  • Current numerical weather prediction systems estimate the atmospheric state by assimilating heterogeneous observation data from various sensors such as aircraft and satellites, but the computational complexity of processing observations with differing variables and physical quantities is very high. To improve the computational efficiency of the existing system and use it effectively for evaluating or preprocessing observations, this study proposes a methodology that estimates the true atmospheric state from multimodal meteorological observations through self-supervised learning tailored to the characteristics of each observation. To fuse multimodal meteorological observations collected non-uniformly, we (i) build a heterogeneous network of meteorological observations to represent the topological information of individual observations, (ii) represent the characteristics of individual observations through pretext-task-based self-supervised learning, and (iii) estimate a near-real atmospheric state with a graph-neural-network-based prediction model. By overcoming the limitations of existing techniques that rely on large-scale numerical simulation systems, the proposed model can serve as an observation-preprocessing technique for anomalous observation detection, observation bias correction, and observation impact assessment.