• Title/Summary/Keyword: Scene Analysis Method


Fire Cause Reasoning of Self-regulating Heating Cable by a Fire Investigation Applying the Scientific Method and Fault Tree Analysis (과학적 방법을 적용한 화재조사와 결함수 분석을 이용한 정온전선의 발화원인 추론)

  • Kim, Doo-Hyun;Lee, Heung-Su
    • Fire Science and Engineering
    • /
    • v.30 no.4
    • /
    • pp.73-81
    • /
    • 2016
  • A self-regulating heating cable is an electrical heating element that works by passing an electric current between parallel conductors embedded in an extruded semi-conductive polymer. Self-regulating heating cables are used mainly for frost protection because they are easy to install and inexpensive. On the other hand, despite their usefulness, structural problems caused by imperfections in the insulation can lead to fire. This paper derives the cause of such a fire directly by investigating the scene of a fire involving a self-regulating heating cable and analyzes the underlying problem using fault tree analysis. The actual fire scene was a cold-storage warehouse, where the fire investigation was conducted. After the scene investigation and the fault tree analysis, the cause of the fire could be attributed to dielectric breakdown of the self-regulating heating cable. This paper could be utilized in fire safety activities and similar fire investigations.
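
The abstract does not reproduce the fault tree itself, so the following is only a minimal sketch of how fault tree analysis combines basic-event probabilities through AND/OR gates into a top-event probability. The events, gate structure, and probabilities below are hypothetical illustrations, not the paper's data.

```python
# Minimal sketch (hypothetical gates and probabilities, not the paper's fault
# tree): fault tree analysis propagates basic-event probabilities through
# AND/OR gates to the top event, here "heating-cable fire", assuming the basic
# events are independent.
def and_gate(*probs):
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(*probs):
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

# hypothetical basic events, for illustration only
p_insulation_defect = 0.05   # imperfection in the extruded insulation
p_moisture_ingress = 0.10    # moisture reaching the semi-conductive core
p_local_overheating = 0.03   # localized overheating of the cable
p_protection_fails = 0.02    # overcurrent / earth-leakage protection fails

p_dielectric_breakdown = and_gate(p_insulation_defect, p_moisture_ingress)
p_ignition_source = or_gate(p_dielectric_breakdown, p_local_overheating)
p_top = and_gate(p_ignition_source, p_protection_fails)
print(f"P(top event: cable fire) = {p_top:.6f}")
```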

Detection of Abnormal Behavior by Scene Analysis in Surveillance Video (감시 영상에서의 장면 분석을 통한 이상행위 검출)

  • Bae, Gun-Tae;Uh, Young-Jung;Kwak, Soo-Yeong;Byun, Hye-Ran
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.36 no.12C
    • /
    • pp.744-752
    • /
    • 2011
  • In intelligent surveillance systems, various methods for detecting abnormal behavior have been proposed recently. However, most of them assume that individual objects can be tracked, so they are not robust enough for real environments, which often contain occlusions. This paper presents a novel method that detects abnormal behavior by analyzing the major motions of a scene in complex environments where object tracking does not work. First, we generate visual words and visual documents from motion information extracted from the input video and process them with the LDA (Latent Dirichlet Allocation) algorithm, a document analysis technique, to obtain the major motion information (location, magnitude, direction, distribution) of the scene. Using the acquired information, we compare the similarity between the motions appearing in the input video and the analyzed major motions, and detect motions that do not match the major motions as abnormal behavior.
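
A minimal sketch of this visual-word/LDA pipeline follows. The codebook size, topic count, abnormality threshold, and the random placeholder data are assumptions standing in for quantized optical-flow words; the paper's exact features and similarity measure are not reproduced here.

```python
# Minimal sketch (not the authors' code): quantized motion "visual words" are
# pooled into per-clip "visual documents", LDA learns the major motion topics,
# and clips whose words are poorly explained by those topics are flagged as
# abnormal behavior.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

N_WORDS = 256    # assumed codebook size (location x direction x magnitude bins)
N_TOPICS = 10    # assumed number of major-motion topics

def to_document(word_ids, n_words=N_WORDS):
    """Histogram of visual-word occurrences for one video clip."""
    return np.bincount(word_ids, minlength=n_words)

# training documents: visual-word ids from clips of normal activity (placeholder data)
train_docs = np.stack([to_document(np.random.randint(0, N_WORDS, 500))
                       for _ in range(200)])
lda = LatentDirichletAllocation(n_components=N_TOPICS, random_state=0)
lda.fit(train_docs)

def abnormality_score(word_ids):
    """Higher score = motion less consistent with the learned major motions."""
    doc = to_document(word_ids).reshape(1, -1)
    # negative per-word log-likelihood under the learned topic model
    return -lda.score(doc) / max(doc.sum(), 1)

test_clip = np.random.randint(0, N_WORDS, 500)     # placeholder test clip
is_abnormal = abnormality_score(test_clip) > 6.0   # assumed threshold
```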

Event Detection on Motion Activities Using a Dynamic Grid

  • Preechasuk, Jitdumrong;Piamsa-nga, Punpiti
    • Journal of Information Processing Systems
    • /
    • v.11 no.4
    • /
    • pp.538-555
    • /
    • 2015
  • Event detection based on features from a static grid can give poor results because of two main aspects: the position of the camera and the position of the event occurring in the scene. The former causes problems when training and test events are at different distances from the camera. The latter causes problems when training events take place in one part of the scene and test events take place in a different part. Both issues degrade the accuracy of the static grid method. Therefore, this work proposes a method called a dynamic grid for event detection, which tackles both aspects of the problem. In our experiment, we used the dynamic grid method to detect four types of event patterns: implosion, explosion, two-way, and one-way, using the Multimedia Analysis and Discovery (MAD) pedestrian dataset. The experimental results show that the proposed method detects the four types of event patterns with high accuracy. Additionally, the performance of the proposed method is better than that of the static grid method, and it achieves higher accuracy than the previous method with respect to the aforementioned aspects.
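
The abstract does not define the grid construction, so the sketch below follows one assumed interpretation: instead of histogramming motion over a fixed image grid, the grid is re-fitted to the spatial extent of the currently moving points. The grid dimensions and direction bins are assumed parameters.

```python
# Minimal sketch (assumed interpretation, not the paper's exact dynamic grid):
# the grid is fitted to the bounding box of the moving points, so the motion
# histogram becomes largely invariant to where in the frame, and how far from
# the camera, the event happens.
import numpy as np

def dynamic_grid_histogram(points, vectors, rows=4, cols=4, n_dir=8):
    """points: (N, 2) pixel positions of moving points; vectors: (N, 2) flow."""
    x0, y0 = points.min(axis=0)
    x1, y1 = points.max(axis=0) + 1e-6
    # cell index of each point inside the event's own bounding box
    cx = np.clip(((points[:, 0] - x0) / (x1 - x0) * cols).astype(int), 0, cols - 1)
    cy = np.clip(((points[:, 1] - y0) / (y1 - y0) * rows).astype(int), 0, rows - 1)
    # quantized flow direction per point
    ang = (np.arctan2(vectors[:, 1], vectors[:, 0]) + np.pi) / (2 * np.pi)
    d = np.clip((ang * n_dir).astype(int), 0, n_dir - 1)
    hist = np.zeros((rows, cols, n_dir))
    np.add.at(hist, (cy, cx, d), 1)
    return hist / max(len(points), 1)   # normalized motion-direction histogram

# placeholder moving points and flow vectors
pts = np.random.rand(300, 2) * [640, 480]
flow = np.random.randn(300, 2)
feature = dynamic_grid_histogram(pts, flow)   # feed to an event classifier
```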

An Indoor Location Estimation Method Selection Algorithm based on environment of moving object (이동객체가 위치한 환경에 따른 실내 위치추정기법 선택 알고리즘)

  • Jeon, Hyeon-Sig;Yeom, Jin-Young;Park, Hyun-Ju
    • Journal of Internet Computing and Services
    • /
    • v.12 no.2
    • /
    • pp.19-28
    • /
    • 2011
  • Recently, interest in ubiquitous computing and related technologies has been growing. Following this trend, research on recognizing and tracking moving objects is required to meet the diverse needs of users. For location-based services, one of the most important issues in indoor environments is providing location-aware services. In this paper, an effective algorithm for estimating the position of a moving object in an indoor environment is proposed. The algorithm combines conventional trilateration with an improved, environment-adaptive scene-analysis measurement. The proposed indoor location estimation algorithm uses trilateration when enough anchors are available in a line-of-sight environment; otherwise, it uses the environment-adaptive scene-analysis measurement. Consequently, the proposed algorithm improves the localization accuracy of a moving object while reducing computational complexity.
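
A minimal sketch of this selection logic is given below. The least-squares trilateration, the nearest-neighbor fingerprint matching used to stand in for the scene-analysis measurement, and the three-anchor threshold are assumptions; the anchor coordinates, ranges, and fingerprint database are hypothetical.

```python
# Minimal sketch (not the authors' implementation): use trilateration when at
# least three line-of-sight anchors are available, otherwise fall back to a
# scene-analysis (fingerprinting) estimate.
import numpy as np

def trilaterate(anchors, distances):
    """Least-squares position from anchor coordinates (N, 2) and ranges (N,)."""
    p0, d0 = anchors[0], distances[0]
    A = 2 * (anchors[1:] - p0)
    b = (d0**2 - distances[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

def scene_analysis(rss_vector, fingerprint_db):
    """Nearest-neighbor match of an RSS vector against a fingerprint database."""
    best = min(fingerprint_db, key=lambda f: np.linalg.norm(f["rss"] - rss_vector))
    return best["pos"]

def estimate_position(los_anchors, distances, rss_vector, fingerprint_db):
    if len(los_anchors) >= 3:                  # enough line-of-sight anchors
        return trilaterate(np.asarray(los_anchors), np.asarray(distances))
    return scene_analysis(np.asarray(rss_vector), fingerprint_db)

# hypothetical example
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
ranges = [7.07, 7.07, 7.07]
db = [{"rss": np.array([-60.0, -70.0]), "pos": np.array([2.0, 3.0])}]
print(estimate_position(anchors, ranges, [-61.0, -69.0], db))
```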

A development of the simple camera calibration system using the grid type frame with different line widths (다른 선폭들로 구성된 격자형 교정판을 이용한 간단한 카메라 교정 시스템의 개발)

  • 정준익;최성구;노도환
    • 제어로봇시스템학회:학술대회논문집
    • /
    • 1997.10a
    • /
    • pp.371-374
    • /
    • 1997
  • Recently, the development of computers has made it possible to build systems that resemble the mechanics of the human visual system. Three-dimensional measurement with a monocular vision system requires camera calibration. Conventional camera calibration techniques require a reference target in the scene, but such methods are inefficient because they involve many calculation steps and are difficult to analyze. Therefore, this paper proposes a method that does not require a reference target in the scene. We use a grid-type frame with different line widths. The method uses the vanishing-point concept, which encodes the rotation parameters of the camera, and the perspective ratio with which each line width is projected into the image. We confirmed the accuracy of the calibration parameter estimation through experiments with a grid pattern of different line widths.
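
The sketch below only illustrates the vanishing-point side of this idea (the perspective-ratio step is not shown): the vanishing point of a family of grid lines is estimated as the least-squares intersection of their homogeneous line equations, and a rotation angle is recovered under an assumed pinhole model. The focal length, principal point, and example line coordinates are assumptions.

```python
# Minimal sketch (assumed math, not the paper's algorithm): least-squares
# vanishing point from a set of image lines, then one rotation angle from an
# assumed pinhole camera (focal length f, principal point (cx, cy)).
import numpy as np

def line_through(p1, p2):
    """Homogeneous line l = p1 x p2 through two image points."""
    return np.cross([p1[0], p1[1], 1.0], [p2[0], p2[1], 1.0])

def vanishing_point(lines):
    """Point v minimizing |L v| for the stacked line equations L (SVD solution)."""
    L = np.vstack(lines)
    _, _, vt = np.linalg.svd(L)
    v = vt[-1]
    return v[:2] / v[2]

# example grid lines (hypothetical pixel coordinates, all meeting at one point)
lines = [line_through((0, 100), (640, 90)),
         line_through((0, 200), (640, 185)),
         line_through((0, 300), (640, 280))]
vx, vy = vanishing_point(lines)

f, cx, cy = 800.0, 320.0, 240.0   # assumed intrinsics (pixels)
# angle between the optical axis and the 3D direction of these grid lines,
# measured in the horizontal plane (camera tilt assumed small)
angle = np.degrees(np.arctan((vx - cx) / f))
print(f"vanishing point: ({vx:.1f}, {vy:.1f}), rotation angle: {angle:.1f} deg")
```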

Deep-Learning Approach for Text Detection Using Fully Convolutional Networks

  • Tung, Trieu Son;Lee, Gueesang
    • International Journal of Contents
    • /
    • v.14 no.1
    • /
    • pp.1-6
    • /
    • 2018
  • Text, as one of the most influential inventions of humanity, has played an important role in human life since ancient times. The rich and precise information embodied in text is very useful in a wide range of vision-based applications: text extracted from images can support automatic annotation, indexing, language translation, and assistance systems for impaired persons. Therefore, natural-scene text detection is an important and active research topic in computer vision and document analysis. Previous methods have shown poor performance due to numerous false-positive and false-negative regions. In this paper, a fully convolutional network (FCN)-based method with a supervised architecture is used to localize textual regions. The model was trained directly on images, with pixel values as inputs and binary ground-truth maps as labels. The method was evaluated on the ICDAR 2013 dataset and proved comparable to other feature-based methods. It could expedite future research on deep-learning-based text detection.
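
To make the pixel-level supervision concrete, here is a toy fully convolutional encoder-decoder trained with a binary text mask. It is not the paper's architecture, and the random tensors are placeholders for ICDAR-style images and ground-truth masks.

```python
# Minimal sketch (not the paper's network): a tiny fully convolutional
# encoder-decoder that maps an RGB image to a per-pixel text/non-text logit
# map, trained with binary cross-entropy against a ground-truth text mask.
import torch
import torch.nn as nn

class TinyTextFCN(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),   # back to input resolution
        )

    def forward(self, x):                   # x: (B, 3, H, W), H and W divisible by 4
        return self.decoder(self.encoder(x))  # logits: (B, 1, H, W)

model = TinyTextFCN()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

images = torch.rand(2, 3, 64, 64)                    # placeholder image batch
masks = (torch.rand(2, 1, 64, 64) > 0.9).float()     # placeholder text masks

optimizer.zero_grad()
loss = criterion(model(images), masks)
loss.backward()
optimizer.step()
```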

A Study on the Camera Calibration Algorithm using the Grid Type Frame with Different Line Widths (다른 선폭들로 구성된 격자형 교정판을 이용한 카메라 교정 알고리즘에 관한 연구)

  • Jeong, Jun-Ik;Han, Young-Bae;Rho, Do-Hwan
    • Proceedings of the KIEE Conference
    • /
    • 1998.07g
    • /
    • pp.2333-2335
    • /
    • 1998
  • Recently, the development of computers has made it possible to build systems that resemble the mechanics of the human visual system. Three-dimensional measurement with a monocular vision system requires camera calibration. Conventional camera calibration techniques require a reference target in the scene, but such methods are inefficient because they involve many calculation steps and are difficult to analyze. Therefore, this paper proposes a method that does not require a reference target in the scene. We use a grid-type frame with different line widths. The method uses the vanishing-point concept, which encodes the rotation parameters of the camera, and the perspective ratio with which each line width is projected into the image. We confirmed the accuracy of the calibration parameter estimation through experiments with a grid pattern of different line widths.

An EV Range in HDRI Acquisition as a Luminance Map Creation (휘도맵의 작성을 위한 HDRI 획득에 있어서 EV의 범위)

  • Hong, Sung-De
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers
    • /
    • v.24 no.10
    • /
    • pp.5-12
    • /
    • 2010
  • The purpose of this study is to present the EV range for the HDRI acquisition process used to create a luminance map. The proposed method captures the scene at EV ±0, which is the longest exposure and the reference point for the scene. From this reference point, sets of LDRIs for 25 test cases were taken manually in ±2 EV steps using the aperture-priority manual mode. The 25 HDRIs were created using Adobe Photoshop and then imported into the Radiance lighting simulation program to be analyzed with falsecolor. The analysis of the 25 HDRI test cases shows that 50[%] of all tested cases have a margin of error of 10[%]. In the case of f/5.6, the luminance map generated from the HDRI was similar to the readings of a spot luminance meter. As a result, the EV range that reduces the error of a luminance map generated from HDRI is EV +2 ~ ±0 ~ -10.
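
The study itself uses Adobe Photoshop and Radiance; the sketch below only illustrates the general bracketed-exposure-to-luminance-map idea with OpenCV's Debevec calibration and merging. The EV bracket, the shutter time at EV ±0, the convention that +1 EV doubles the exposure time, and the synthetic stand-in frames are all assumptions.

```python
# Minimal sketch (assumed workflow, not the paper's exact procedure): merge a
# bracketed LDR sequence with known exposure times into an HDR radiance map;
# absolute luminance would still need a scale factor calibrated against a spot
# luminance meter.
import cv2
import numpy as np

ev_offsets = [+2, 0, -2, -4]                  # hypothetical bracket, in EV
base_shutter = 1 / 60.0                       # assumed shutter time at EV +-0
times = np.array([base_shutter * (2.0 ** ev) for ev in ev_offsets],
                 dtype=np.float32)            # +1 EV here = doubled exposure time

# stand-in LDR frames; in practice these are the captured bracketed photographs
scene = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)
images = [np.clip(scene.astype(np.float32) * (2.0 ** ev), 0, 255).astype(np.uint8)
          for ev in ev_offsets]

response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

# relative luminance map (Rec. 709 weights); OpenCV stores channels as BGR
b, g, r = cv2.split(hdr)
luminance = 0.2126 * r + 0.7152 * g + 0.0722 * b
```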

Modeling the Visual Target Search in Natural Scenes

  • Park, Daecheol;Myung, Rohae;Kim, Sang-Hyeob;Jang, Eun-Hye;Park, Byoung-Jun
    • Journal of the Ergonomics Society of Korea
    • /
    • v.31 no.6
    • /
    • pp.705-713
    • /
    • 2012
  • Objective: The aim of this study is to predict human visual target search in real scene images using the ACT-R cognitive architecture. Background: Humans use bottom-up and top-down processes at the same time, drawing on characteristics of the image itself and on knowledge about images. Modeling human visual search therefore needs to include both processes. Method: In this study, visual target search performance in real scene images was analyzed by comparing experimental data with the results of an ACT-R model. Ten students participated in the experiment, and the model was simulated ten times. The experiment was conducted under two conditions, indoor images and outdoor images. An ACT-R model was established that determines the first saccade region by calculating a saliency map and the spatial layout. The proposed model uses these as guides for visual search and adopts search strategies accordingly. Results: In the analysis, no significant difference in performance time between the model predictions and the empirical data was found. Conclusion: The proposed ACT-R model is able to predict the human visual search process in real scene images using a saliency map and spatial layout. Application: This study is useful for conducting model-based evaluation of visual search, particularly in real images. The approach can also be adopted in diverse image-processing applications, such as assistive systems for the visually impaired.
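
The ACT-R model itself is not reproducible from the abstract; the sketch below only illustrates the bottom-up side of the guidance, using the standard spectral-residual saliency method to pick a first-saccade location. The resize, blur, and smoothing parameters are assumptions, and the random image is a placeholder for a real scene.

```python
# Minimal sketch (simplified, not the ACT-R model): compute a spectral-residual
# saliency map for a grayscale scene image and take its peak as a predicted
# first-saccade location, mimicking the bottom-up guidance that the model
# combines with top-down spatial-layout knowledge.
import numpy as np
import cv2

def spectral_residual_saliency(gray):
    small = cv2.resize(gray.astype(np.float32), (64, 64))
    spectrum = np.fft.fft2(small)
    log_amp = np.log1p(np.abs(spectrum)).astype(np.float32)
    residual = log_amp - cv2.blur(log_amp, (3, 3))           # spectral residual
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(spectrum)))) ** 2
    sal = cv2.GaussianBlur(sal.astype(np.float32), (9, 9), 2.5)
    return cv2.resize(sal, (gray.shape[1], gray.shape[0]))

def first_saccade(gray):
    sal = spectral_residual_saliency(gray)
    y, x = np.unravel_index(np.argmax(sal), sal.shape)
    return x, y                                              # predicted fixation (pixels)

scene = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # placeholder image
print(first_saccade(scene))
```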

Hydrodynamic scene separation from video imagery of ocean wave using autoencoder (오토인코더를 이용한 파랑 비디오 영상에서의 수리동역학적 장면 분리 연구)

  • Kim, Taekyung;Kim, Jaeil;Kim, Jinah
    • Journal of the Korea Computer Graphics Society
    • /
    • v.25 no.4
    • /
    • pp.9-16
    • /
    • 2019
  • In this paper, we propose a hydrodynamic scene separation method for wave propagation in video imagery using an autoencoder. In coastal areas, image analysis methods such as particle tracking and optical flow on video imagery are usually applied to measure ocean waves, owing to the difficulty of direct wave observation with sensors. However, external factors such as ambient light and weather conditions considerably hamper accurate wave analysis in coastal video imagery. The proposed method extracts hydrodynamic scenes by separating out only the wave motions, minimizing the effect of ambient light during wave propagation. We visually confirmed that hydrodynamic scenes are separated reasonably well from ambient light and backgrounds in two video datasets acquired from a real beach and from wave-flume experiments. In addition, the latent representation of the original video imagery learned by the variational autoencoder was dominated by ambient light and backgrounds, whereas the hydrodynamic scenes of wave propagation were expressed independently of these external factors.
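
The authors' variational autoencoder is not specified in the abstract, so the sketch below is only one plausible reading of the separation idea: a narrow bottleneck tends to capture the slowly varying background and ambient light, and the residual is treated as the wave-motion component. The architecture, image size, and random frames are assumptions.

```python
# Minimal sketch (one plausible reading, not the authors' network): a small
# convolutional autoencoder reconstructs the background/ambient component; the
# residual between a frame and its reconstruction is taken as the hydrodynamic
# (wave-motion) scene.
import torch
import torch.nn as nn

class FrameAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # H/2
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # H/4
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

model = FrameAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.rand(8, 1, 64, 64)            # placeholder grayscale wave frames

opt.zero_grad()
recon = model(frames)                        # background / ambient-light estimate
loss = nn.functional.mse_loss(recon, frames)
loss.backward()
opt.step()

hydrodynamic = frames - recon.detach()       # separated wave-motion component
```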