• Title/Summary/Keyword: Multimodal Sensor (멀티 모달 센서)

Search Results: 39, Processing Time: 0.03 seconds

Development of Gas Type Identification Deep-learning Model through Multimodal Method (멀티모달 방식을 통한 가스 종류 인식 딥러닝 모델 개발)

  • Seo Hee Ahn;Gyeong Yeong Kim;Dong Ju Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.12
    • /
    • pp.525-534
    • /
    • 2023
  • Gas leak detection systems are key to minimizing loss of life caused by the explosiveness and toxicity of gas. Most leak detection systems rely on gas sensors or thermal imaging cameras. To improve on such single-modal methods, this paper proposes a multimodal approach that combines gas sensor data and thermal camera data to develop a gas type identification model. MultimodalGasData, a multimodal open dataset, is used to compare four models developed through the multimodal approach against existing models. The 1D CNN and GasNet models show the highest performance, at 96.3% and 96.4%; an early-fusion model combining 1D CNN and GasNet reaches 99.3%, 3.3% higher than the existing model. We hope that further damage caused by gas leaks can be minimized through the gas leak detection system proposed in this study.
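The early-fusion idea in this abstract can be sketched as follows. This is a minimal illustration only: the function names, the feature values, and the toy linear scorer are assumptions, not the paper's actual 1D CNN/GasNet architectures.

```python
# Sketch of "early fusion": concatenate gas-sensor and thermal-camera
# feature vectors into one input *before* classification, instead of
# fusing the two models' decisions afterwards.

def early_fusion(gas_features, thermal_features):
    """Join the two modality vectors into a single input vector."""
    return list(gas_features) + list(thermal_features)

def classify(fused, weights, bias):
    """Toy linear scorer standing in for the fused deep model."""
    score = sum(f * w for f, w in zip(fused, weights)) + bias
    return 1 if score > 0 else 0  # 1 = gas type detected

gas = [0.8, 0.1, 0.3]   # e.g. gas-sensor channel readings (illustrative)
thermal = [0.5, 0.9]    # e.g. pooled thermal-image features (illustrative)
fused = early_fusion(gas, thermal)
label = classify(fused, weights=[1.0, -0.5, 0.2, 0.7, 0.4], bias=-1.0)
```

The point of early fusion is that the classifier can learn cross-modality interactions, which decision-level fusion cannot capture.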

Design and Implementation of Emergency Recognition System based on Multimodal Information (멀티모달 정보를 이용한 응급상황 인식 시스템의 설계 및 구현)

  • Kim, Eoung-Un;Kang, Sun-Kyung;So, In-Mi;Kwon, Tae-Kyu;Lee, Sang-Seol;Lee, Yong-Ju;Jung, Sung-Tae
    • Journal of the Korea Society of Computer and Information
    • /
    • v.14 no.2
    • /
    • pp.181-190
    • /
    • 2009
  • This paper presents a multimodal emergency recognition system based on visual information, audio information and gravity sensor information. It consists of a video processing module, an audio processing module, a gravity sensor processing module and a multimodal integration module. The video processing module and the gravity sensor processing module each detect actions such as moving, stopping and fainting and transfer them to the multimodal integration module. The multimodal integration module detects an emergency by fusing the transferred information and verifies it by asking a question and recognizing the answer over the audio channel. The experimental results show a recognition rate of 91.5% for the video processing module alone and 94% for the gravity sensor processing module alone, but 100% when both sources of information are combined.
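The fuse-then-verify flow described above can be sketched as below. The function names and the rule that either modality reporting "fainting" raises a suspicion are illustrative assumptions; the paper's modules are far richer than these stubs.

```python
# Sketch of the integration module: each modality reports a detected
# action label; the fusion step flags a suspected emergency, and an
# audio question/answer step then confirms or dismisses it.

def integrate(video_action, gravity_action):
    """Fuse per-modality action labels into a suspected-emergency flag."""
    emergency_actions = {"fainting"}
    return video_action in emergency_actions or gravity_action in emergency_actions

def confirm_by_audio(suspected, user_answered_ok):
    """Verification step: ask the user; no reassuring answer confirms."""
    if not suspected:
        return False
    return not user_answered_ok

# Video saw a fall, gravity saw stopping, and the user did not respond.
alarm = confirm_by_audio(integrate("fainting", "stopping"),
                         user_answered_ok=False)
```

The audio channel acts as a cheap false-positive filter on top of the sensor fusion, which is what pushes the combined recognition rate above either modality alone.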

Trends in Smart Device User Authentication Technology Using Multimodal Sensors (멀티모달 센서를 이용한 스마트기기 사용자 인증 기술 동향)

  • Choi, Jongwon;Yi, Jeong Hyun
    • Review of KIISC
    • /
    • v.24 no.3
    • /
    • pp.7-14
    • /
    • 2014
  • A smart environment is one in which users access smart-device services without temporal or spatial constraints, and it is becoming commonplace as smart devices spread. However, various security threats arise at the interface between the user and the smart device when services are provided in this environment. Moreover, given the nature of smart devices, user input is not always convenient, and ordinary users face the burden of understanding technical terms such as account types and security settings. To address these problems, multimodal interface research that authenticates users by combining various smart-device sensors, such as the touchscreen, camera, accelerometer, and fingerprint sensor, has recently attracted attention. This article therefore surveys trends in user authentication technologies for smart devices based on multimodal sensors, aimed at creating a safe and convenient smart environment for interaction between humans and smart devices.

Genetic Algorithm Calibration Method and PnP Platform for Multimodal Sensor Systems (멀티모달 센서 시스템용 유전자 알고리즘 보정기 및 PnP 플랫폼)

  • Lee, Jea Hack;Kim, Byung-Soo;Park, Hyun-Moon;Kim, Dong-Sun;Kwon, Jin-San
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.14 no.1
    • /
    • pp.69-80
    • /
    • 2019
  • This paper proposes a multimodal sensor platform that supports plug and play (PnP) technology. PnP technology automatically recognizes a connected sensor module so that an application program can easily control the sensor. To verify the multimodal PnP platform, we built firmware and ran experiments on a sensor system. When a sensor module is connected to the platform, the firmware recognizes the module and reads its data; sensors can thus simply be plugged in without any software configuration. Measured raw sensor data suffer from various distortions such as gain, offset, and non-linearity errors, so we introduce a polynomial calculation to compensate for them. To find the optimal coefficients for sensor calibration, we apply a genetic algorithm, which reduces the calibration time. It achieves reasonable performance using only a few data points, reducing error by 97% in the worst case. The platform supports various protocols for multimodal sensors: UART, I2C, I2S, SPI, and GPIO.
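The genetic-algorithm calibration idea can be sketched as follows. This is a toy first-order (gain/offset) fit on synthetic data; the paper uses higher-order polynomials and real hardware measurements, and all names, population sizes, and mutation scales here are assumptions.

```python
import random

def fitness(coeffs, samples):
    """Sum of squared errors of corrected = a*raw + b against references."""
    a, b = coeffs
    return sum((a * raw + b - ref) ** 2 for raw, ref in samples)

def calibrate(samples, generations=200, pop_size=30, seed=0):
    """Elitist GA: keep the best half, breed children by averaging
    two parents (crossover) plus Gaussian noise (mutation)."""
    rng = random.Random(seed)
    pop = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, samples))
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a1, b1 = rng.choice(parents)
            a2, b2 = rng.choice(parents)
            children.append(((a1 + a2) / 2 + rng.gauss(0, 0.05),
                             (b1 + b2) / 2 + rng.gauss(0, 0.05)))
        pop = parents + children
    return min(pop, key=lambda c: fitness(c, samples))

# Synthetic distortion: the true response is 0.5 * raw + 1.0
# (a gain error and an offset error), sampled at a few points only.
samples = [(x, 0.5 * x + 1.0) for x in range(5)]
a, b = calibrate(samples)
```

Only a handful of reference points are needed, matching the paper's observation that a GA can calibrate from a few data points.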

A Study on the Recognition System of Faint Situation based on Bimodal Information (바이모달 정보를 이용한 기절상황인식 시스템에 관한 연구)

  • So, In-Mi;Jung, Sung-Tae
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.2
    • /
    • pp.225-236
    • /
    • 2010
  • This study proposes a method for recognizing emergency situations from the bimodal information of a camera image sensor and a gravity sensor. The method can recognize an emergency through mutual cooperation and compensation between the sensors even when one sensor malfunctions, the user is not carrying the gravity sensor, or camera images are hard to acquire, as in a bathroom. We implemented an HMM (Hidden Markov Model) based learning and recognition algorithm to recognize actions such as walking, sitting on the floor, sitting on a sofa, lying down, and fainting. The recognition rate improved when image feature vectors and gravity feature vectors were combined in the learning and recognition process. The method also maintains a high recognition rate under varying illumination by detecting the moving object with an adaptive background model.
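The feature-combination-with-fallback behavior described above can be sketched as below. The HMM itself is not reproduced; the function name and vector contents are illustrative assumptions.

```python
# Sketch of the bimodal front end: image features and gravity features
# are concatenated when both sensors are available, and the system
# falls back to a single modality when one is missing (e.g. no camera
# in the bathroom, or the gravity sensor is not worn).

def combine_features(image_vec=None, gravity_vec=None):
    if image_vec is not None and gravity_vec is not None:
        return list(image_vec) + list(gravity_vec)  # both modalities
    if image_vec is not None:
        return list(image_vec)                      # camera-only fallback
    if gravity_vec is not None:
        return list(gravity_vec)                    # gravity-only fallback
    raise ValueError("at least one modality is required")

both = combine_features([0.1, 0.2], [9.8, 0.0, 0.1])
camera_only = combine_features(image_vec=[0.1, 0.2])
```

The combined vector feeds the HMM during learning and recognition; the fallbacks are what let the sensors compensate for each other.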

Trend of Technology for Outdoor Security Robots based on Multimodal Sensors (멀티모달 센서 기반 실외 경비로봇 기술 개발 현황)

  • Chang, J.H.;Na, K.I.;Shin, H.C.
    • Electronics and Telecommunications Trends
    • /
    • v.37 no.1
    • /
    • pp.1-9
    • /
    • 2022
  • With the development of artificial intelligence, many studies have focused on evaluating abnormal situations with various sensors, as industries try to automate some of the surveillance and security tasks traditionally performed by humans. In particular, mobile robots equipped with multimodal sensors are being used in pilot operations that help security robots cope with various outdoor situations. Multiagent systems, which combine fixed and mobile systems, can provide more efficient coverage than either alone, but network bottlenecks arise from the increased data processing and communication. In this report, we examine recent trends in object recognition and abnormal-situation determination across the changing environments of outdoor security robots, and describe an outdoor security robot platform that operates as a multiagent equipped with multimodal sensors.

Design of Cough Detection System Based on Multimodal Learning & Wearable Sensor to Predict the Spread of Influenza (독감 확산 예측을 위한 멀티모달 학습과 웨어러블 센서 기반의 기침 감지 시스템 설계)

  • Kang, Jae-Sik;Back, Moon-Ki;Choi, Hyung-Tak;Lee, Kyu-Chul
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2018.05a
    • /
    • pp.428-430
    • /
    • 2018
  • This paper proposes a cough detection model using wearable sensors for predicting influenza spread. It is designed with a multimodal DNN, a machine-learning approach that uses heterogeneous cough body data without implementing a hand-crafted cough detection algorithm. Real-life cough audio data and three-axis cough acceleration data were collected through wearable sensors, and feature vectors were extracted with MFCC and FFT respectively, so that detection can be learned even when only one of the two data sources is available.
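The spectral front end mentioned above (FFT for the acceleration channel, MFCC for audio) can be sketched with a plain DFT; a real pipeline would add windowing, mel filterbanks for MFCC, and the DNN classifier on top, and the frame contents here are synthetic.

```python
import math

# Extract a magnitude-spectrum feature vector from one short frame of
# accelerometer (or audio) samples via a direct DFT, standing in for
# the paper's FFT/MFCC feature extractors.

def dft_magnitudes(frame):
    n = len(frame)
    mags = []
    for k in range(n // 2):  # keep only the non-redundant bins
        re = sum(x * math.cos(2 * math.pi * k * t / n)
                 for t, x in enumerate(frame))
        im = -sum(x * math.sin(2 * math.pi * k * t / n)
                  for t, x in enumerate(frame))
        mags.append(math.hypot(re, im))
    return mags

# A pure tone at bin 2 of an 8-sample frame concentrates energy there.
frame = [math.cos(2 * math.pi * 2 * t / 8) for t in range(8)]
feats = dft_magnitudes(frame)
```

Because each modality gets its own fixed-length spectral vector, the multimodal DNN can train on either vector alone when one sensor stream is missing.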

Design of Agent Technology based on Device Collaboration for Personal Multi-modal Services (개인형 멀티모달 서비스를 위한 디바이스 협업 기반 에이전트 기술 설계)

  • Kim, Jae-Su;Kim, Hyeong-Seon;Kim, Chi-Su;Kim, Hwang-Rae;Im, Jae-Hyeon
    • Proceedings of the Korean Society for Emotion and Sensibility Conference
    • /
    • 2009.11a
    • /
    • pp.254-257
    • /
    • 2009
  • With the arrival of the ubiquitous era, interest in user-centered services is growing, along with demand for services personalized to each user's characteristics. This paper proposes a device-collaboration-based agent technology that provides more intuitive and convenient personalized services to the user through collaboration among personal heterogeneous devices, which are becoming smaller and more intelligent, in a ubiquitous space. The system collects information about the user and the user's environment through sensors and processes the context information needed for basic services. It also provides the multimodal services a ubiquitous user requires, and can thus deliver high-quality services tailored to individual characteristics beyond typical automated services.
Activity Recognition based on Multi-modal Sensors using Dynamic Bayesian Networks (동적 베이지안 네트워크를 이용한 멀티모달 센서 기반 사용자 행동인식)

  • Yang, Sung-Ihk;Hong, Jin-Hyuk;Cho, Sung-Bae
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.15 no.1
    • /
    • pp.72-76
    • /
    • 2009
  • Recently, as interest in ubiquitous computing has increased, much research has focused on recognizing human activities to provide services in this environment. In mobile environments in particular, unlike conventional vision-based recognition research, much of the work is sensor based. In this paper we propose to recognize the user's activity from multi-modal sensors using hierarchical dynamic Bayesian networks. The dynamic Bayesian networks are trained with the OVR (One-Versus-Rest) strategy. Inference reduces computation cost by selecting the activity scored highest by a simpler Bayesian network. In experiments recognizing eight kinds of activities with an accelerometer and a physiological sensor, we achieve 97.4% accuracy in recognizing the user's activity.
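The One-Versus-Rest strategy used to train the networks can be sketched as below. The scorers here are toy threshold functions over made-up features, not dynamic Bayesian networks; activity names and feature keys are illustrative.

```python
# Sketch of OVR (One-Versus-Rest) inference: one binary scorer is
# trained per activity, and at inference time the activity whose
# scorer is most confident wins.

def ovr_predict(features, scorers):
    """Return the activity with the highest one-vs-rest score."""
    return max(scorers, key=lambda activity: scorers[activity](features))

scorers = {
    "walking": lambda f: f["accel_var"],                       # high motion variance
    "sitting": lambda f: 1.0 - f["accel_var"],                 # low motion variance
    "running": lambda f: f["accel_var"] * f["heart_rate_norm"] ** 2,
}
activity = ovr_predict({"accel_var": 0.9, "heart_rate_norm": 0.95}, scorers)
```

OVR decomposes an N-class problem into N binary ones, which keeps each per-activity model simple; the paper additionally uses a cheaper Bayesian network to prune candidates before the full DBN runs.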

Design of Lightweight Artificial Intelligence System for Multimodal Signal Processing (멀티모달 신호처리를 위한 경량 인공지능 시스템 설계)

  • Kim, Byung-Soo;Lee, Jea-Hack;Hwang, Tae-Ho;Kim, Dong-Sun
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.13 no.5
    • /
    • pp.1037-1042
    • /
    • 2018
  • Neuromorphic technology, which learns and processes information by imitating the human brain, has been researched for decades. Hardware implementations of neuromorphic systems are configured as highly parallel processing structures built from many simple computational units, achieving high processing speed, low power consumption, and low hardware complexity. Recently, interest in neuromorphic technology for low-power, small embedded systems has been increasing rapidly. To implement low-complexity hardware, the input data dimension must be reduced without accuracy loss. This paper proposes a low-complexity artificial intelligence engine consisting of parallel neuron engines and a feature extractor; the engine comprises a number of neuron engines and their controller to process multimodal sensor data. We verified the performance of the proposed neuron engine, including the designed artificial intelligence engines, the feature extractor, and a Micro Controller Unit (MCU).
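The dimension-reduction role of the feature extractor can be illustrated as below. Average pooling here is only a stand-in: the abstract does not specify the extractor, so the method and the window size are assumptions.

```python
# Sketch of the feature-extractor idea: pool raw multimodal samples
# into a smaller vector before the neuron engines, so the parallel
# hardware processes fewer inputs per inference.

def average_pool(signal, window):
    """Reduce input dimension by averaging non-overlapping windows."""
    return [sum(signal[i:i + window]) / window
            for i in range(0, len(signal) - window + 1, window)]

raw = [1, 3, 2, 4, 6, 8, 5, 7]
features = average_pool(raw, window=4)  # 8 samples -> 2 features
```

Shrinking the input vector directly shrinks the number of multiplies each neuron engine performs, which is where the hardware-complexity saving comes from.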