• Title/Summary/Keyword: Fire recognition and tracking

Search Results: 6

Development of Autonomous Surface Robot for Marine Fire Safety (해양 소방 안전을 위한 자율수상로봇 개발)

  • Jeong, Jinseok;Sa, Youngmin;Kim, Hyun-Sik
    • Journal of Ocean Engineering and Technology / v.32 no.2 / pp.138-142 / 2018
  • The marine industry is developing rapidly as a result of increasing needs in the marine environment. At the same time, accidents involving ship fires and explosions, and the resulting casualties, are increasing. Firefighting generally suffers from manpower and safety problems. A firefighter in the form of an autonomous surface robot would be ideal for marine fire safety, because it avoids both problems. Therefore, an autonomous surface robot with the abilities of fire recognition and tracking, nozzle selection, position and attitude control, and firefighting was developed and is discussed in this paper. The test and evaluation results of this robot showed the feasibility of full-scale applications and the need for further studies.
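The abstract does not detail the robot's fire-recognition method, but a common vision-based starting point is a rule-based color threshold on red-dominant pixels, with a mask centroid as the tracking target. A minimal sketch, assuming an RGB frame; the thresholds and function names are illustrative, not the authors' implementation:

```python
import numpy as np

def detect_fire_pixels(rgb, r_min=180, diff_min=40):
    """Rule-based fire-pixel mask: fire pixels tend to be red-dominant,
    with a bright red channel and R > G > B."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r >= r_min) & (r - g >= diff_min) & (g > b)

def fire_centroid(mask):
    """Centroid (row, col) of the fire mask, or None if no fire pixels;
    a tracker would steer the nozzle toward this point."""
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())
```

In practice such a color rule is only a first filter; flicker analysis or a learned classifier would follow to reject red non-fire objects.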

Hand Tracking and Hand Gesture Recognition for Human Computer Interaction

  • Bai, Yu;Park, Sang-Yun;Kim, Yun-Sik;Jeong, In-Gab;Ok, Soo-Yol;Lee, Eung-Joo
    • Journal of Korea Multimedia Society / v.14 no.2 / pp.182-193 / 2011
  • The aim of this paper is to present a methodology for hand tracking and hand gesture recognition. The detected hand and gesture can be used to implement a non-contact mouse; we developed an MP3 player controlled by this technology instead of a mouse. The algorithm first pre-processes every frame, applying lighting compensation and background filtering to reduce their adverse impact on the correctness of hand tracking and gesture recognition. Second, a YCbCr skin-color likelihood algorithm is used to detect the hand area. The Continuously Adaptive Mean Shift (CAMSHIFT) algorithm is then used to track the hand. Because the standard region of interest is square while the hand is closer to rectangular, we improved the search-window formula to obtain a window better suited to the hand. A Support Vector Machine (SVM) is then used for hand gesture recognition; to train the system, we collected 1,500 pictures of 5 hand gestures. Finally, we performed extensive experiments on a Windows XP system to evaluate the efficiency of the proposed scheme. The hand tracking correct rate is 96%, and the average hand gesture correct rate is 95%.
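CAMSHIFT builds on the mean-shift step: the search window is repeatedly recentered on the centroid of the back-projected skin-probability mass inside it until the shift is negligible, with the window size then adapted each frame. A minimal sketch of the recentering step in NumPy; the fixed-size window and names are illustrative assumptions, and OpenCV's `cv2.CamShift` provides the full adaptive version:

```python
import numpy as np

def mean_shift_window(prob, cx, cy, w, h, iters=10):
    """Recenter a w-by-h search window on the centroid of the
    probability mass inside it, iterating until convergence."""
    rows, cols = prob.shape
    for _ in range(iters):
        x0 = max(0, int(cx - w // 2)); x1 = min(cols, x0 + w)
        y0 = max(0, int(cy - h // 2)); y1 = min(rows, y0 + h)
        win = prob[y0:y1, x0:x1]
        m = win.sum()
        if m == 0:                      # no mass in the window: give up
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ncx = (xs * win).sum() / m      # centroid of the window's mass
        ncy = (ys * win).sum() / m
        if abs(ncx - cx) < 0.5 and abs(ncy - cy) < 0.5:
            return ncx, ncy             # converged
        cx, cy = ncx, ncy
    return cx, cy
```

The paper's contribution of a rectangular rather than square window corresponds here to simply choosing `w != h` to match the hand's aspect ratio.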

Occluded Object Motion Tracking Method based on Combination of 3D Reconstruction and Optical Flow Estimation (3차원 재구성과 추정된 옵티컬 플로우 기반 가려진 객체 움직임 추적방법)

  • Park, Jun-Heong;Park, Seung-Min;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.21 no.5 / pp.537-542 / 2011
  • A mirror neuron is a neuron that fires both when an animal acts and when it observes the same action performed by another. We propose a 3D reconstruction method for occluded-object motion tracking that, like the Mirror Neuron System, fires under hidden conditions. To model a system that recognizes intention through this firing effect, we compute depth information from the image pair of a stereo camera and reconstruct three-dimensional data. The movement direction of an object is estimated by optical flow over the reconstructed three-dimensional image data. For the 3D reconstruction that enables tracking of occluded parts, picture data is first acquired by the stereo camera. The optical-flow result is made robust to noise by a Kalman filter estimation algorithm. The reconstructed 3D images are saved as a history during motion tracking of the object. When the whole or part of an object disappears from the stereo camera's view behind other objects, it is restored by retrieving image data from the saved history, and the object's motion continues to be tracked.
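The Kalman filtering step mentioned above, which smooths noisy optical-flow position estimates, can be sketched with a constant-velocity model: the state is [position, velocity] and only position is measured. The noise parameters and one-dimensional setup here are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def kalman_track(measurements, q=1e-3, r=0.25):
    """Constant-velocity Kalman filter over noisy 1-D position
    measurements; returns the smoothed position estimates."""
    x = np.array([measurements[0], 0.0])    # state: [position, velocity]
    P = np.eye(2)                           # state covariance
    F = np.array([[1.0, 1.0], [0.0, 1.0]])  # transition model (dt = 1)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise
    R = np.array([[r]])                     # measurement noise
    out = []
    for z in measurements:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = z - (H @ x)[0]                  # innovation
        S = H @ P @ H.T + R
        K = (P @ H.T) / S[0, 0]             # Kalman gain
        x = x + K[:, 0] * y
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return out
```

Extending this to the paper's setting means running such a filter per tracked coordinate of the flow field; during full occlusion the predict step alone propagates the state until the object reappears.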

Application of Deep Learning: A Review for Firefighting

  • Shaikh, Muhammad Khalid
    • International Journal of Computer Science & Network Security / v.22 no.5 / pp.73-78 / 2022
  • The aim of this paper is to investigate the prevalence of deep learning in the literature on the Fire & Rescue Service. It is found that deep learning techniques are only beginning to benefit firefighters. The popular areas where deep learning techniques are making an impact are situational awareness, decision making, mental stress, injuries, firefighter well-being (such as sudden falls, inability to move, and breathlessness), path planning while getting to a fire scene, wayfinding, tracking firefighters, firefighter physical fitness, employment, prediction of firefighter intervention, firefighter operations such as object recognition in smoky areas, firefighter efficacy, smart firefighting using edge computing, firefighting in teams, and firefighter clothing and safety. The techniques found applied in firefighting were deep learning, traditional K-Means clustering with engineered time- and frequency-domain features, convolutional autoencoders, Long Short-Term Memory (LSTM), deep neural networks, simulation, VR, ANN, Deep Q-Learning, deep learning based on conditional generative adversarial networks, decision trees, Kalman filters, computational models, partial least squares, logistic regression, random forest, edge computing, C5 decision trees, restricted Boltzmann machines, reinforcement learning, and recurrent LSTM. The literature review is centered on firefighters not involved in wildland fires, and the focus was not on the fire itself. It must also be noted that several deep learning techniques, such as CNNs, have mostly been used in fire behavior, fire imaging, and identification as well; papers dealing with fire behavior were not part of this literature review.

Design of Smart Device Assistive Emergency WayFinder Using Vision Based Emergency Exit Sign Detection

  • Lee, Minwoo;Mariappan, Vinayagam;Mfitumukiza, Joseph;Lee, Junghoon;Cho, Juphil;Cha, Jaesang
    • Journal of Satellite, Information and Communications / v.12 no.1 / pp.101-106 / 2017
  • Emergency exit signs are installed in buildings such as shopping malls, hospitals, industrial sites, and government complexes, and in various other places, to provide escape routes and help people escape easily during emergencies. Under conditions such as smoke, fire, bad lighting, or a crowded stampede, it is difficult for people to recognize the emergency exit signs and doors and leave the building. This paper proposes automatic emergency exit sign recognition to find the exit direction using a smart device. The proposed approach develops a computer-vision-based smartphone application that detects emergency exit signs with the device camera and guides the escape direction in visible and audible form. A CAMShift object-tracking approach is used to detect the emergency exit sign, and the direction information is extracted using a template matching method. The direction information of the exit sign is stored as text and then synthesized into an audible acoustic signal using text-to-speech; the synthesized signal is rendered on the smart device speaker as escape guidance for the user. The results are analyzed and conclusions drawn regarding visual element selection, exit sign appearance design, and sign placement in the building, which are valuable and can be commonly referenced in wayfinder systems.
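The direction-extraction step compares the detected sign region against arrow templates. A minimal sketch of normalized cross-correlation template matching in NumPy; the brute-force loop and names are illustrative assumptions, and `cv2.matchTemplate` with `TM_CCOEFF_NORMED` is the practical equivalent:

```python
import numpy as np

def match_template(image, template):
    """Brute-force normalized cross-correlation: slide the template
    over the image and return the (row, col) of the best match."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            win = image[y:y + th, x:x + tw]
            w = win - win.mean()
            denom = tn * np.sqrt((w ** 2).sum())
            if denom == 0:              # flat window: no correlation
                continue
            score = (w * t).sum() / denom
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos
```

Matching left-arrow and right-arrow templates in turn and keeping the higher score would yield the direction string that is then passed to text-to-speech.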

Escape Route Prediction and Tracking System using Artificial Intelligence (인공지능을 활용한 도주경로 예측 및 추적 시스템)

  • Yang, Bum-suk;Park, Dea-woo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.225-227 / 2022
  • In Seoul, about 75,000 CCTVs are installed across the 25 district offices. Each district office has built a control center for CCTV monitoring and is compiling information such as people, vehicle types, license plate recognition, and color classification into big data through 24-hour artificial-intelligence image analysis. The Seoul Metropolitan Government has signed MOUs with the Ministry of Land, Infrastructure and Transport, the National Police Agency, the Fire Service, the Ministry of Justice, and military bases to enable rapid response to emergency situations; in other words, it is building a smart city that is safe and can prevent disasters by sharing the CCTV images of each district office. In this paper, a system is designed that uses artificial intelligence to extract the characteristics of vehicles and persons from CCTV images when an incident occurs and, based on this, to predict the escape route and enable continuous tracking. The AI automatically selects and displays the CCTV images along the predicted route. When the escape route of a person or vehicle involved in an incident is expected to leave the relevant jurisdiction, the image and extracted information are provided to the adjacent district office, expanding the smart-city integration platform. This paper will contribute basic data to research on smart-city integrated platforms.