• Title/Summary/Keyword: Video sensor


A Study on Integrated Fire Alarm System for Safe Urban Transit (안전한 도시철도를 위한 통합 화재 경보 시스템 구축의 연구)

  • Chang, Il-Sik;Ahn, Tae-Ki;Jeon, Ji-Hye;Cho, Byung-Mok;Park, Goo-Man
    • Proceedings of the KSR Conference
    • /
    • 2011.10a
    • /
    • pp.768-773
    • /
    • 2011
  • Today's urban transit systems are regarded as important public transportation services that save passengers' time and provide safety. Much research focuses on rapid, protective responses that minimize losses when a dangerous situation occurs. In this paper we propose an early fire detection and rapid response method for urban transit systems that combines automatic fire detection on video input with a sensor network. The fire detection method consists of two parts: spark detection and smoke detection. In spark detection, the RGB color of the input video is converted to HSV color and the frame difference is computed in the temporal direction. Regions with high R values are treated as fire-region candidates, and a stepwise fire detection rule is applied to calculate their size. In the smoke detection stage, a smoke sensor network is used to corroborate the spark detection. The proposed system can be implemented at low cost. In future work, we will improve the detection algorithm and the accuracy of sensor localization in the network.

  • PDF
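The spark-detection stage above (temporal frame differencing combined with a red-dominance colour test) can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the thresholds, the luminance-based motion cue, and the region-size count are illustrative assumptions.

```python
import numpy as np

def spark_candidates(prev_frame, curr_frame, diff_thresh=30, red_thresh=180):
    """Flag pixels that both changed between frames and are strongly red.

    Frames are H x W x 3 uint8 RGB arrays. Returns a boolean mask of
    candidate fire pixels and the candidate-region size in pixels.
    """
    # Temporal frame difference on luminance as a rough motion cue
    lum_prev = prev_frame.mean(axis=2)
    lum_curr = curr_frame.mean(axis=2)
    moving = np.abs(lum_curr - lum_prev) > diff_thresh

    # Red-dominant pixels as fire-colour candidates
    r = curr_frame[..., 0].astype(int)
    g = curr_frame[..., 1].astype(int)
    b = curr_frame[..., 2].astype(int)
    reddish = (r > red_thresh) & (r > g) & (r > b)

    mask = moving & reddish
    return mask, int(mask.sum())
```

A stepwise rule like the paper's could then raise an alarm only when the returned region size stays large over several consecutive frames.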

The Sensory-Motor Fusion System for Object Tracking (이동 물체를 추적하기 위한 감각 운동 융합 시스템 설계)

  • Lee, Sang-Hee;Wee, Jae-Woo;Lee, Chong-Ho
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.52 no.3
    • /
    • pp.181-187
    • /
    • 2003
  • For moving objects observed with environmental sensors, such as an object-tracking mobile robot with audio and video sensors, the environmental information acquired from the sensors keeps changing as the objects move. In such cases, due to lack of adaptability and system complexity, conventional control schemes show limited control performance; sensory-motor systems, which can respond intuitively to various types of environmental information, are therefore desirable. To improve system robustness, it is also desirable to fuse two or more types of sensory information simultaneously. In this paper, based on Braitenberg's model, we propose a sensory-motor fusion system that can track moving objects adaptively under environmental changes. Owing to its directly connected structure, the sensory-motor fusion system can control each motor simultaneously, and neural networks are used to fuse the information from the various sensors. Even if the system receives noisy information from one sensor, it still works robustly, because information from the other sensors compensates for the noise through sensor fusion. To examine its performance, the sensory-motor fusion model is applied to an object-tracking four-legged robot equipped with audio and video sensors. The experimental results show that the sensory-motor fusion system can track moving objects robustly with a simpler control mechanism than model-based control approaches.
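The Braitenberg-style direct coupling the abstract builds on can be sketched very compactly. This is an illustrative toy, not the paper's system: the fixed fusion weights stand in for the neural network the authors use, and the crossed-wiring gains are assumptions.

```python
def fuse_stimuli(audio, video, w_audio=0.4, w_video=0.6):
    """Weighted fusion of two modalities. The paper learns the fusion
    with a neural network; these fixed weights are illustrative only."""
    return w_audio * audio + w_video * video

def braitenberg_step(left_stim, right_stim, base=0.5, gain=1.0):
    """One control step of a Braitenberg vehicle with crossed excitatory
    connections: a stronger stimulus on one side drives the opposite
    wheel faster, so the robot turns toward the stimulus."""
    left_motor = base + gain * right_stim
    right_motor = base + gain * left_stim
    return left_motor, right_motor
```

With a strong fused stimulus on the left and none on the right, the right motor runs faster and the robot steers toward the target, with no explicit model of the object's motion.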

A study of information processing method for the situation recognition (상황인식을 위한 정보처리의 연구)

  • Park, Sangjoon;Lee, Jongchan
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2019.01a
    • /
    • pp.275-276
    • /
    • 2019
  • In this paper, we consider an information processing system for data collected in hazardous areas. We design a situation awareness system that analyzes and classifies video information from video sensors in real time and recognizes situations by comparing them against predefined situations.

  • PDF

An Occupant Sensing System Using Single Video Camera and Ultrasonic Sensor for Advanced Airbag (단일 비디오 카메라와 초음파센서를 이용한 스마트 에어백용 승객 감지 시스템)

  • Bae, Tae-Wuk;Lee, Jong-Won;Ha, Su-Young;Kim, Young-Choon;Ahn, Sang-Ho;Sohng, Kyu-Ik
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.1
    • /
    • pp.66-75
    • /
    • 2010
  • We propose an occupant sensing system using a single video camera and an ultrasonic sensor for an advanced airbag. To detect the occupant's shape and face position in real time, we use skin color and motion information. A candidate face-block image is formed by thresholding the color-difference signal corresponding to skin color, together with the difference between the current and previous luminance images, which provides the motion information. The face is then detected using morphology operations and labeling. At night, when color and luminance information is unavailable, the face is detected by thresholding the luminance signal obtained under infrared LED illumination instead of the color-difference signal. To evaluate the performance of the proposed occupant detection system, various experiments were carried out with the IEEE camera, ultrasonic sensor, and infrared LEDs installed in a vehicle jig.
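The daytime candidate-block step described above (skin-colour chroma test combined with luminance-frame differencing, evaluated per block) can be sketched as follows. The Cb/Cr skin range and the block-majority rule are common illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def face_candidate_blocks(prev_y, curr_y, cb, cr, block=8,
                          cb_range=(77, 127), cr_range=(133, 173),
                          motion_thresh=15):
    """Mark blocks whose chroma lies in an assumed skin range AND whose
    luminance changed between frames. All inputs are H x W arrays."""
    skin = (cb >= cb_range[0]) & (cb <= cb_range[1]) & \
           (cr >= cr_range[0]) & (cr <= cr_range[1])
    motion = np.abs(curr_y.astype(int) - prev_y.astype(int)) > motion_thresh
    cand = skin & motion

    # Fraction of qualifying pixels per block; majority vote per block
    h, w = cand.shape
    trimmed = cand[:h - h % block, :w - w % block]
    fractions = trimmed.reshape(h // block, block,
                                w // block, block).mean(axis=(1, 3))
    return fractions > 0.5
```

Morphology and labeling would then be applied to the surviving blocks to isolate the face region.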

Airborne video as a remote sensor for environmental monitoring of linear infrastructure: a case study and review

  • Um Jung-Sup
    • Spatial Information Research
    • /
    • v.12 no.4 s.31
    • /
    • pp.351-370
    • /
    • 2004
  • At present, environmental monitoring of linear infrastructure is based mainly on field sampling, and the 'integrated mapping' approach has received only limited attention from field scientists. The increased environmental regulation of corridor targets has required remote sensing research to develop a sensor or technique for targets ranging from 15 m to 100 m in swath width. In an attempt to identify the optimal remote sensing system for linear targets, an overview is provided of the application requirements and the technology currently available. The relative limitations of traditional remote sensing systems in such linear applications are briefly discussed. It is noted that airborne video could provide, in a cost-effective manner, the information required for a very narrow and long strip target, exploiting its narrow view angle and dynamic stereo coverage. The contribution of this paper is to propose video-based infrastructure monitoring as a future research direction, in recognition of the sensor's characteristics and limitations.

  • PDF

A Study on Development of Visual Navigational Aids to improve Maritime Situation Awareness (해상상황인식 개선을 위한 시각적 항해보조장비 개발에 관한 연구)

  • Kim, Eun-Kyung;Im, Nam-Kyun;Han, Song-Hee;Jeong, Jung-Sik
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.22 no.3
    • /
    • pp.379-385
    • /
    • 2012
  • This paper develops a visual navigation aid that supports a watch officer's situation awareness, and analyzes its performance test results. The developed equipment consists of a composite video sensor that transfers the video signal, a laser range-measurement module that measures distance, a pan/tilt unit, and a central control device; the high-performance video sensor and laser range finder are mounted on the pan/tilt unit. For a real-ship test, we installed the equipment on a vessel, observed hazards, and analyzed the imagery, from which maritime environment awareness could be evaluated. The results show that the developed equipment provides clearer identification and better resolution of the situation than binoculars.

A Highly Reliable Fall Detection System for The Elderly in Real-Time Environment (실시간 환경에서 노인들을 위한 고신뢰도 낙상 검출 시스템)

  • Lee, Young-Sook;Chung, Wan-Young
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.2
    • /
    • pp.401-406
    • /
    • 2008
  • Falls are one of the most common problems for elderly people, especially those living alone, because they result in serious injuries such as joint dislocations, fractures, severe head injuries, or even death. Previous video-sensor-based methods for preventing falls or fall-related injuries have shown low detection rates. To address this and improve system performance, this paper presents a novel approach to fall detection in the elderly using the subtraction of successive difference images and temporal templates in a real-time environment. The proposed algorithm achieved a detection rate of 96.43% and a false positive rate of only 3.125%, even though the video sequences were low-quality footage from a USB PC camera sensor. The experimental results show very promising performance in terms of high detection rate and low false positive rate.
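The core idea above, subtracting successive difference images and maintaining a temporal template (a motion-history-style image), can be sketched as follows. The decay factor, thresholds, and the simple fall score are illustrative assumptions, not the paper's published parameters.

```python
import numpy as np

def update_temporal_template(template, prev_diff, curr_diff,
                             decay=0.8, thresh=25):
    """One update step of a temporal template.

    The change between two successive difference images highlights
    accelerating motion, which is characteristic of a fall. The
    template fades old motion and stamps fresh motion at full value.
    """
    second_diff = np.abs(curr_diff.astype(int) - prev_diff.astype(int))
    moving = second_diff > thresh
    template = template * decay   # fade out old motion
    template[moving] = 255.0      # stamp fresh motion
    return template

def fall_score(template, high=200):
    """Fraction of pixels with recent strong motion; a large, sudden
    jump suggests whole-body movement such as a fall (illustrative)."""
    return float((template > high).mean())
```

A classifier (or a simple threshold on the score and its rate of change) would then decide whether the motion pattern corresponds to a fall rather than, say, sitting down.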

Implementation of a Thermal Imaging System with Focal Plane Array Typed Sensor (초점면 배열 방식의 열상카메라 시스템의 구현)

  • 박세화;원동혁;오세중;윤대섭
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.6 no.5
    • /
    • pp.396-403
    • /
    • 2000
  • A thermal imaging system is implemented for measuring and analyzing the thermal distribution of target objects. The main part of the system is a thermal camera built around a focal-plane-array sensor. The sensor detects the mid-range infrared spectrum of the target objects and outputs a generic video signal that must be processed to form a thermal image frame. A digital signal processor (DSP) is applied for high-speed processing of the sensor signals: it controls the analog-to-digital converter, performs correction algorithms, and writes the thermal frame data to frame buffers. From the frame buffers, an NTSC signal can be generated and the frame data transferred to a personal computer (PC) for analysis and monitoring of the thermal scenes. By performing the signal processing functions in the DSP, the overall system achieves a simple configuration. Several experimental results demonstrate the performance of the overall system.

  • PDF
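The "correction algorithms" a focal-plane-array DSP typically runs include per-pixel non-uniformity correction; a common form is two-point gain/offset calibration. The abstract does not specify the paper's exact method, so the sketch below shows the generic technique under that assumption.

```python
import numpy as np

def calibrate_two_point(low_frame, high_frame, low_level, high_level):
    """Derive per-pixel gain and offset from two frames of uniform
    reference scenes so that each pixel maps the references to the
    desired uniform output levels."""
    gain = (high_level - low_level) / (
        high_frame.astype(float) - low_frame.astype(float))
    offset = low_level - gain * low_frame.astype(float)
    return gain, offset

def two_point_nuc(raw, gain, offset):
    """Apply per-pixel two-point non-uniformity correction: each FPA
    pixel has its own gain and offset."""
    return raw.astype(float) * gain + offset
```

After correction, every pixel responds identically to the two reference temperatures, removing the fixed-pattern noise inherent to FPA sensors.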

Deep Learning-Based Companion Animal Abnormal Behavior Detection Service Using Image and Sensor Data

  • Lee, JI-Hoon;Shin, Min-Chan;Park, Jun-Hee;Moon, Nam-Mee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.27 no.10
    • /
    • pp.1-9
    • /
    • 2022
  • In this paper, we propose a deep-learning-based abnormal behavior detection service for companion animals that uses video and sensor data. With the recent increase in households with companion animals, the AI-driven pet-tech industry is growing within the existing food- and medical-oriented companion animal market. In this study, companion animal behavior is classified and abnormal behavior is detected with a deep learning model that draws on multiple data sources for AI-based health management of companion animals. Video data and sensor data are collected with CCTV and a purpose-built pet wearable device and used as model inputs. The video data are processed by combining the YOLO (You Only Look Once) model, which detects the companion animal, with DeepLabCut, which extracts joint coordinates for behavior classification. The sensor data are processed with a GAT (Graph Attention Network), which can capture the correlations and characteristics of the individual sensors.
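A single-head GAT-style aggregation over sensor nodes, as used above to capture inter-sensor correlations, can be sketched in NumPy. The shapes, the shared attention vector, and the LeakyReLU slope follow the standard GAT formulation; the paper's actual architecture details are not given, so treat this as a generic illustration.

```python
import numpy as np

def leaky_relu(x, slope=0.2):
    return np.where(x > 0, x, slope * x)

def gat_layer(features, adj, W, a):
    """Single-head graph-attention aggregation: project node features,
    score each edge with a shared attention vector, softmax the scores
    over each node's neighbours, and take the weighted sum.

    features: (N, F), adj: (N, N) with self-loops, W: (F, F'), a: (2F',)
    """
    h = features @ W                           # projected features (N, F')
    n = h.shape[0]
    logits = np.full((n, n), -np.inf)          # -inf masks missing edges
    for i in range(n):
        for j in range(n):
            if adj[i, j]:
                # e_ij = LeakyReLU(a . [h_i || h_j])
                logits[i, j] = leaky_relu(np.concatenate([h[i], h[j]]) @ a)
    logits -= logits.max(axis=1, keepdims=True)  # numerically stable softmax
    alpha = np.exp(logits)
    alpha /= alpha.sum(axis=1, keepdims=True)
    return alpha @ h                             # attention-weighted sums
```

Each sensor node's output is then a correlation-aware mixture of its neighbours' features, which feeds the behavior classifier.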