• Title/Summary/Keyword: Multimodal Sensor


Development of Gas Type Identification Deep-learning Model through Multimodal Method (멀티모달 방식을 통한 가스 종류 인식 딥러닝 모델 개발)

  • Seo Hee Ahn;Gyeong Yeong Kim;Dong Ju Kim
    • KIPS Transactions on Software and Data Engineering / v.12 no.12 / pp.525-534 / 2023
  • Gas leak detection systems are key to minimizing loss of life from the explosiveness and toxicity of gas. Most leak detection systems rely on either gas sensors or thermal imaging cameras. To improve on such single-modal methods, this paper proposes a multimodal approach that combines gas sensor data and thermal camera data in developing a gas type identification model. MultimodalGasData, a multimodal open dataset, is used to compare the performance of the four models developed through the multimodal approach against existing models. As a result, the 1D CNN and GasNet models show the highest performance, at 96.3% and 96.4%, respectively. The performance of an early-fusion model combining 1D CNN and GasNet reaches 99.3%, 3.3% higher than the existing model. We hope that further damage caused by gas leaks can be minimized through the gas leak detection system proposed in this study.
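The early-fusion idea in the abstract above can be sketched minimally: per-modality feature vectors are concatenated into one vector *before* classification, in contrast to fusing per-model outputs afterwards. The dimensions and values below are illustrative assumptions, not the paper's actual 1D CNN / GasNet architectures:

```python
def early_fusion(gas_features, thermal_features):
    """Early fusion: join the two modality feature vectors into a single
    vector that a downstream classifier would consume."""
    return list(gas_features) + list(thermal_features)

# Illustrative inputs only; real features would come from the sensor
# pipeline and a thermal-image encoder.
gas = [0.12, 0.80, 0.05, 0.33, 0.41, 0.27, 0.09]  # e.g., one reading per gas-sensor channel
thermal = [0.5] * 16                               # e.g., a flattened thermal-image embedding
fused = early_fusion(gas, thermal)
```

A classifier trained on `fused` then sees cross-modal correlations directly, which is what lets the combined model outperform either single modality.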

Trend of Technology for Outdoor Security Robots based on Multimodal Sensors (멀티모달 센서 기반 실외 경비로봇 기술 개발 현황)

  • Chang, J.H.;Na, K.I.;Shin, H.C.
    • Electronics and Telecommunications Trends / v.37 no.1 / pp.1-9 / 2022
  • With the development of artificial intelligence, many studies have focused on evaluating abnormal situations using various sensors, as industries try to automate some of the surveillance and security tasks traditionally performed by humans. In particular, mobile robots using multimodal sensors are being used in pilot operations aimed at helping security robots cope with various outdoor situations. Multiagent systems, which combine fixed and mobile systems, can provide more efficient coverage than either type alone, but they encounter network bottlenecks from increased data processing and communication. In this report, we examine recent trends in object recognition and abnormal-situation determination in changing outdoor security-robot environments, and describe an outdoor security robot platform that operates as a multiagent system equipped with multimodal sensors.

Emotion Recognition Method Based on Multimodal Sensor Fusion Algorithm

  • Moon, Byung-Hyun;Sim, Kwee-Bo
    • International Journal of Fuzzy Logic and Intelligent Systems / v.8 no.2 / pp.105-110 / 2008
  • Humans recognize emotion by fusing information from speech, facial expression, gesture, and bio-signals. For computers to do the same, technologies that combine such information are needed. In this paper, we recognize five emotions (normal, happiness, anger, surprise, sadness) from speech signals and facial images, and propose a multimodal method that fuses the two recognition results. Emotion recognition on both the speech signal and the facial image uses Principal Component Analysis (PCA), and the multimodal step fuses the two results by applying a fuzzy membership function. In our experiments, the average emotion recognition rate was 63% using speech signals and 53.4% using facial images; that is, the speech signal offers a better recognition rate than the facial image. To raise the recognition rate further, we propose a decision fusion method using an S-type membership function. With the proposed method, the average recognition rate is 70.4%, showing that decision fusion offers a better emotion recognition rate than either the facial image or the speech signal alone.
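Decision fusion with an S-type membership function, as described above, can be sketched as follows. This is a hedged illustration using the standard fuzzy S-shaped function; the paper's exact parameters and fusion rule are not reproduced, and the class scores below are invented for the example:

```python
def s_membership(x, a, b):
    """Standard S-shaped fuzzy membership function on [a, b]:
    0 below a, 1 above b, smooth quadratic transition in between."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    m = (a + b) / 2.0
    if x <= m:
        return 2.0 * ((x - a) / (b - a)) ** 2
    return 1.0 - 2.0 * ((x - b) / (b - a)) ** 2

def decision_fusion(speech_scores, face_scores, a=0.0, b=1.0):
    """Fuse per-class confidence scores from the two modalities by
    mapping each through the S-membership and summing, then pick
    the class with the highest fused value (decision-level fusion)."""
    fused = {c: s_membership(speech_scores[c], a, b)
                + s_membership(face_scores[c], a, b)
             for c in speech_scores}
    return max(fused, key=fused.get)
```

Because the S-function saturates near 0 and 1, confident single-modality decisions dominate the fused result while borderline scores contribute less, which is one plausible reason decision fusion can beat either modality alone.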

Design of the Multimodal Input System using Image Processing and Speech Recognition (음성인식 및 영상처리 기반 멀티모달 입력장치의 설계)

  • Choi, Won-Suk;Lee, Dong-Woo;Kim, Moon-Sik;Na, Jong-Whoa
    • Journal of Institute of Control, Robotics and Systems / v.13 no.8 / pp.743-748 / 2007
  • Recently, various types of camera mouse have been developed using image processing. The camera mouse shows limited performance compared to the traditional optical mouse in terms of response time and usability. These problems are caused by the mismatch between the size of the monitor and that of the active pixel area of the CMOS image sensor. To overcome these limitations, we designed a new input device that uses face recognition and speech recognition simultaneously. In the proposed system, the area of the monitor is partitioned into 'n' zones. Face recognition is performed using a web camera, so that the mouse pointer follows the movement of the user's face within a particular zone, and the user switches zones by speaking the name of the zone. The multimodal mouse is analyzed using the Keystroke-Level Model, and initial experiments were performed to evaluate the feasibility and performance of the proposed system.
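The zone-switching scheme above (speech selects a zone; face movement positions the pointer within it) can be sketched as plain coordinate arithmetic. The grid size, screen resolution, and clamping behavior below are assumptions for illustration, not the paper's specification:

```python
def zone_origin(zone_index, n_cols, n_rows, screen_w, screen_h):
    """Top-left pixel of a zone when the monitor is split into an
    n_cols x n_rows grid, zones numbered row-major from 0."""
    zone_w = screen_w // n_cols
    zone_h = screen_h // n_rows
    col = zone_index % n_cols
    row = zone_index // n_cols
    return col * zone_w, row * zone_h

def pointer_position(zone_index, face_dx, face_dy, n_cols=3, n_rows=3,
                     screen_w=1920, screen_h=1080):
    """Map a face-tracking offset (zone-local pixels) to absolute screen
    coordinates inside the zone chosen by the voice command, clamping so
    the pointer never leaves the spoken zone."""
    ox, oy = zone_origin(zone_index, n_cols, n_rows, screen_w, screen_h)
    zone_w = screen_w // n_cols
    zone_h = screen_h // n_rows
    x = ox + max(0, min(face_dx, zone_w - 1))
    y = oy + max(0, min(face_dy, zone_h - 1))
    return x, y
```

Restricting face tracking to one zone at a time is what sidesteps the monitor-vs-sensor resolution mismatch: the camera's limited active pixel area only has to cover a zone, not the whole screen.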

Real-world multimodal lifelog dataset for human behavior study

  • Chung, Seungeun;Jeong, Chi Yoon;Lim, Jeong Mook;Lim, Jiyoun;Noh, Kyoung Ju;Kim, Gague;Jeong, Hyuntae
    • ETRI Journal / v.44 no.3 / pp.426-437 / 2022
  • To understand the multilateral characteristics of human behavior and the physiological markers related to physical, emotional, and environmental states, extensive lifelog data collection in a real-world environment is essential. Here, we propose a data collection method using multimodal mobile sensing and present a long-term dataset from 22 subjects and 616 days of experimental sessions. The dataset contains over 10 000 hours of data, including physiological data such as photoplethysmography, electrodermal activity, and skin temperature, in addition to multivariate behavioral data. Furthermore, it contains 10 372 user labels of emotional states and 590 days of sleep quality data. To demonstrate feasibility, human activity recognition was applied to the sensor data using a convolutional neural network-based deep learning model, with 92.78% recognition accuracy. From the activity recognition result, we extracted daily behavior patterns and discovered five representative patterns by applying spectral clustering. This demonstrates that the dataset contributes toward understanding human behavior using multimodal data accumulated throughout daily life under natural conditions.

Activity Recognition of Workers and Passengers onboard Ships Using Multimodal Sensors in a Smartphone (선박 탑승자를 위한 다중 센서 기반의 스마트폰을 이용한 활동 인식 시스템)

  • Piyare, Rajeev Kumar;Lee, Seong Ro
    • The Journal of Korean Institute of Communications and Information Sciences / v.39C no.9 / pp.811-819 / 2014
  • Activity recognition is a key component in identifying the context of a user for providing services in applications such as medical, entertainment, and tactical scenarios. Instead of applying numerous sensor devices, as observed in many previous investigations, we propose using a smartphone with its built-in multimodal sensors as an unobtrusive sensor device for recognizing six physical daily activities. As an improvement over previous work, accelerometer, gyroscope, and magnetometer data are fused to recognize activities more reliably. The evaluation indicates that the IBK classifier, using a window size of 2 s with 50% overlap, yields the highest accuracy (up to 99.33%). To achieve this peak accuracy, simple time-domain and frequency-domain features were extracted from the raw sensor data of the smartphone.
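The windowing-and-features pipeline described above (fixed windows with 50% overlap, then simple time-domain statistics per window) can be sketched as follows. The window length in samples and the feature set are illustrative; the paper's exact features and sampling rate are not reproduced here:

```python
import math

def sliding_windows(samples, window_size, overlap=0.5):
    """Split a 1-D sample stream into fixed-size windows with the given
    fractional overlap (0.5 means each window shares half its samples
    with the next one)."""
    step = max(1, int(window_size * (1.0 - overlap)))
    return [samples[i:i + window_size]
            for i in range(0, len(samples) - window_size + 1, step)]

def time_domain_features(window):
    """Simple per-window time-domain features of the kind commonly fed
    to an activity classifier such as IBk (a k-nearest-neighbour learner)."""
    n = len(window)
    mean = sum(window) / n
    var = sum((x - mean) ** 2 for x in window) / n
    return {"mean": mean, "std": math.sqrt(var)}
```

Each window's feature dictionary (here per sensor axis; the axes would be fused by concatenating their features) becomes one training instance for the classifier.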

Characterization of Pipe Defects in Torsional Guided Waves Using Chirplet Transform (첩릿변환을 이용한 배관 결함 특성 규명)

  • Kim, Chung-Youb;Park, Kyung-Jo
    • Transactions of the Korean Society for Noise and Vibration Engineering / v.24 no.8 / pp.636-642 / 2014
  • The sensor configuration of the magnetostrictive guided-wave system is a single continuous transducing element, which makes it difficult to separate the individual modes from the reflected signal. In this work, we develop a mode decomposition technique employing the chirplet transform, which separates the individual modes from the dispersive, multimodal waveform measured with the magnetostrictive sensor and estimates the time-frequency centers and individual energies of the reflections, which can be used to locate and characterize defects. The reflection coefficients are calculated from the modal energies of the separated modes. Experimental results on a carbon steel pipe show that accurate and quantitative defect characterization is enabled by the proposed technique.
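The chirplet transform named above decomposes a signal onto Gaussian chirplet atoms, each parameterized by a time center, frequency center, chirp rate, and spread. A minimal sketch of one such atom (the decomposition itself, and the paper's estimation procedure, are not reproduced here):

```python
import math
import cmath

def gaussian_chirplet(t, t0, f0, c, sigma):
    """Value at time t of a Gaussian chirplet atom: a Gaussian envelope
    centered at t0 modulating a linearly swept (chirp) carrier.
    t0 / f0 are the time / frequency centers, c the chirp rate,
    sigma the time spread."""
    envelope = math.exp(-((t - t0) ** 2) / (2.0 * sigma ** 2))
    phase = 2.0 * math.pi * (f0 * (t - t0) + 0.5 * c * (t - t0) ** 2)
    return envelope * cmath.exp(1j * phase)
```

Projecting the measured waveform onto a bank of such atoms concentrates each dispersive mode around its own time-frequency center, which is what makes the individual modal energies, and hence the reflection coefficients, separable.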

Mode Separation in Torsional Guided Waves Using Chirplet Transform (첩릿변환을 이용한 비틀림 유도파 모드분리)

  • Kim, Young-Wann;Park, Kyung-Jo
    • Transactions of the Korean Society for Noise and Vibration Engineering / v.24 no.4 / pp.324-331 / 2014
  • The sensor configuration of the magnetostrictive guided-wave system is a single continuous transducing element, which makes it difficult to separate the individual modes from the reflected signal. In this work, we develop a mode decomposition technique employing the chirplet transform based on maximum likelihood estimation, which separates the individual modes from the dispersive, multimodal waveform measured with the magnetostrictive sensor and estimates the time-frequency centers and individual energies of the reflections, which can be used to locate and characterize defects. Simulation results on a carbon steel pipe show that accurate mode separation and a more discernible time-frequency representation are enabled by the proposed technique.

High-Performance Multimodal Flexible Tactile Sensor Capable of Measuring Pressure and Temperature Simultaneously (압력과 온도측정 기능을 갖는 고성능 플렉시블 촉각센서)

  • Jang, Jin-Seok;Kang, Tae-Hyung;Song, Han-Wook;Park, Yon-Kyu;Kim, Min-Seok
    • Journal of the Korean Society for Precision Engineering / v.31 no.8 / pp.683-688 / 2014
  • This paper presents a high-performance flexible tactile sensor based on inorganic silicon flexible electronics. We created 100 nm-thick semiconducting silicon ribbons, equally distributed with 1 mm spacing in 8 × 8 arrays, to sense the pressure distribution with high sensitivity and repeatability. An organic silicone-rubber substrate was used as the spring material to achieve both mechanical flexibility and robustness. A thin copper layer was deposited and patterned on top of the pressure-sensing layer to create a flexible temperature-sensing layer. The fabricated tactile sensor was tested through a series of experiments. The results show that the tactile sensor can measure pressure and temperature simultaneously and independently with high precision.

Design of Lightweight Artificial Intelligence System for Multimodal Signal Processing (멀티모달 신호처리를 위한 경량 인공지능 시스템 설계)

  • Kim, Byung-Soo;Lee, Jea-Hack;Hwang, Tae-Ho;Kim, Dong-Sun
    • The Journal of the Korea institute of electronic communication sciences / v.13 no.5 / pp.1037-1042 / 2018
  • Neuromorphic technology, which learns and processes information by imitating the human brain, has been researched for decades. Hardware implementations of neuromorphic systems are configured with highly parallel processing structures and a number of simple computational units, achieving high processing speed, low power consumption, and low hardware complexity. Recently, interest in neuromorphic technology for low-power, small embedded systems has been increasing rapidly. To implement low-complexity hardware, it is necessary to reduce the input data dimension without accuracy loss. This paper proposes a low-complexity artificial intelligence engine which consists of parallel neuron engines and a feature extractor. The artificial intelligence engine has a number of neuron engines and a controller to process multimodal sensor data. We verified the performance of the proposed neuron engine, including the designed artificial intelligence engine, the feature extractor, and a Micro Controller Unit (MCU).
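The dimension-reduction step that the abstract above calls for (shrinking the input before the parallel neuron engines) can be illustrated with a simple block-averaging feature extractor. This stands in for the paper's extractor, whose actual method is not specified in the abstract:

```python
def feature_extract(raw, out_dim):
    """Reduce a raw multimodal sample vector to out_dim features by
    averaging consecutive blocks - an illustrative placeholder for a
    hardware feature extractor that cuts input dimension (and hence
    neuron-engine complexity) ahead of classification."""
    n = len(raw)
    block = max(1, n // out_dim)
    return [sum(raw[i:i + block]) / len(raw[i:i + block])
            for i in range(0, block * out_dim, block)]
```

Each of the parallel neuron engines would then operate on the reduced `out_dim`-length vector instead of the full sensor stream, which is the intended complexity saving.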