• Title/Summary/Keyword: Moving Object (움직이는 객체)


Method of Tunnel Incidents Detection Using Background Image (배경영상을 이용한 터널 유고 검지 방법)

  • Jeong, Sung-Hwan;Ju, Young-Ho;Lee, Jong-Tae;Lee, Joon-Whoan
    • Journal of the Korea Academia-Industrial cooperation Society / v.13 no.12 / pp.6089-6097 / 2012
  • This study proposes a method for detecting incidents inside a tunnel using a camera installed within it. The proposed method detects stationary objects, non-vehicle traffic, smoke, and wrong-way (contra-flow) driving by extracting moving objects with real-time background image differencing applied to the images received from the tunnel camera. To detect moving objects within the tunnel, a reliable background image is built from the objects' motion information, which makes the method robust both to lighting changes inside the tunnel and to the influence of external light at the tunnel entrance and exit. To evaluate the method, experimental images were acquired from the Marae and Expo tunnels in Yeosu, Jeonnam, and the Unam tunnel in Imsil, Jeonbuk: 20 cases of stationary objects, 20 cases of non-vehicle traffic, 4 cases of smoke, and 10 cases of contra-flow driving. All of the stationary-object, non-vehicle-traffic, and contra-flow cases were detected, and 3 of the 4 smoke cases were detected, confirming good performance. The proposed method is currently in operation in these tunnels; a further performance evaluation with videos of incidents that actually occur inside tunnels is planned to measure its accuracy more precisely. A minimal background-differencing sketch is given below.
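
The core step above is real-time background differencing on the tunnel camera feed. The following is a minimal sketch of that idea, assuming OpenCV's MOG2 background model in place of the paper's own background-update scheme; the video source name, morphology kernel, and area threshold are illustrative assumptions.

```python
# Background-differencing sketch: flag moving blobs in a tunnel camera feed.
# OpenCV's MOG2 model stands in for the paper's background-update rule.
import cv2

cap = cv2.VideoCapture("tunnel_camera.mp4")        # assumed video source
backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = backsub.apply(frame)                       # foreground (moving-object) mask
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN,
                          cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5)))
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:                # ignore small noise blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow("moving objects", frame)
    if cv2.waitKey(1) == 27:                        # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

On top of such a mask, the paper's incident rules (e.g., a blob that stops moving for a long time, or one whose trajectory runs against the traffic direction) would be applied per tracked blob.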

MPEG Video Segmentation using Two-stage Neural Networks and Hierarchical Frame Search (2단계 신경망과 계층적 프레임 탐색 방법을 이용한 MPEG 비디오 분할)

  • Kim, Joo-Min;Choi, Yeong-Woo;Chung, Ku-Sik
    • Journal of KIISE:Software and Applications / v.29 no.1_2 / pp.114-125 / 2002
  • In this paper, we propose a hierarchical segmentation method that first segments video data into shots by detecting cuts and dissolves, and then determines the type of camera operation or object movement in each shot. As in our previous work [1], each picture group is classified into one of three categories, Shot (scene change), Move (camera operation or object movement), or Static (almost no change between images), by analysing the DC (Direct Current) components of I (Intra) frames. For this classification we designed a two-stage hierarchical neural network whose inputs combine multiple features. The system then detects the exact shot position and the type of camera operation or object movement by searching the P (Predicted) and B (Bi-directional) frames of the current picture group selectively and hierarchically. The statistical distribution of macroblock types in P and B frames is used to detect the exact cut position, and another neural network with macroblock types and motion vectors as inputs is used to detect dissolves and the types of camera operations and object movements. The method reduces processing time by using only the DC coefficients of I frames, without full decoding, and by searching P and B frames selectively and hierarchically. It classified picture groups with an accuracy of 93.9-100.0% and cuts with an accuracy of 96.1-100.0% on three different types of video data, and classified the types of camera or object movement with accuracies of 90.13% and 89.28% on two different types of video data. A simple DC-image cut-detection sketch is given below.
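
The first stage above rests on comparing DC images of successive I frames. Below is a rough sketch of that comparison, assuming a simple histogram-difference threshold in place of the paper's two-stage neural network; the block averaging used to approximate DC images and the threshold value are illustrative assumptions.

```python
# Cut-detection sketch on DC images: a large histogram change between the
# DC images of consecutive (I-) frames is flagged as a shot boundary.
import numpy as np

def dc_image(gray, block=8):
    """Approximate the DC image by averaging each 8x8 block of a grayscale frame."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block
    blocks = gray[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.mean(axis=(1, 3))

def detect_cuts(gray_frames, threshold=0.35):
    """Return indices of frames whose DC-image histogram differs strongly from the previous one."""
    cuts, prev_hist = [], None
    for i, f in enumerate(gray_frames):
        hist, _ = np.histogram(dc_image(f), bins=32, range=(0, 255))
        hist = hist / max(hist.sum(), 1)            # normalize to a probability distribution
        if prev_hist is not None and 0.5 * np.abs(hist - prev_hist).sum() > threshold:
            cuts.append(i)
        prev_hist = hist
    return cuts
```

Gradual transitions such as dissolves need a windowed comparison rather than a frame-to-frame one, which is where the paper's second network over P/B-frame macroblock statistics comes in.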

Optical Flow-Based Marker Tracking Algorithm for Collaboration Between Drone and Ground Vehicle (드론과 지상로봇 간의 협업을 위한 광학흐름 기반 마커 추적방법)

  • Beck, Jong-Hwan;Kim, Sang-Hoon
    • KIPS Transactions on Software and Data Engineering / v.7 no.3 / pp.107-112 / 2018
  • In this paper, an optical flow based keypoint detection and tracking technique is proposed for collaboration between a flying drone equipped with a vision system and ground robots. Target detection with a moving vision system poses many challenging problems, so we combined an improved FAST algorithm for feature detection with the Lucas-Kanade method for optical-flow motion tracking, adopting the better technique for each task; this resulted in a processing speed 40% higher than previous work. In addition, the proposed image binarization method, tailored to the given marker, helped improve marker detection accuracy. We also studied how to optimize the embedded system, which performs complex computations for intelligent functions with very limited resources, while maintaining the drone's weight and moving speed. In future work, we aim to develop smarter collaborating robots by learning to recognize targets even against complex backgrounds. A minimal FAST + Lucas-Kanade sketch is given below.
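
The detection-and-tracking combination above can be sketched with stock OpenCV routines; the standard FAST detector and pyramidal Lucas-Kanade tracker stand in for the paper's improved FAST and marker binarization, and the video source and parameters are illustrative assumptions.

```python
# FAST keypoint detection + Lucas-Kanade optical-flow tracking sketch.
import cv2
import numpy as np

cap = cv2.VideoCapture("drone_view.mp4")             # assumed video source
fast = cv2.FastFeatureDetector_create(threshold=25)

ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = cv2.KeyPoint_convert(fast.detect(prev_gray)).reshape(-1, 1, 2).astype(np.float32)

while True:
    ok, frame = cap.read()
    if not ok or len(pts) == 0:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None,
                                              winSize=(21, 21), maxLevel=3)
    good = nxt[status.ravel() == 1]                   # keep successfully tracked points
    for x, y in good.reshape(-1, 2):
        cv2.circle(frame, (int(x), int(y)), 3, (0, 255, 0), -1)
    cv2.imshow("tracked keypoints", frame)
    if cv2.waitKey(1) == 27:                          # Esc to quit
        break
    prev_gray, pts = gray, good.reshape(-1, 1, 2)

cap.release()
cv2.destroyAllWindows()
```

In practice the keypoints would be re-detected periodically (or whenever too few survive tracking), and only points lying on the binarized marker region would be kept.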

Individual Pig Detection Using Kinect Depth Information (키넥트 깊이 정보를 이용한 개별 돼지의 탐지)

  • Choi, Jangmin;Lee, Jonguk;Chung, Yongwha;Park, Daihee
    • KIPS Transactions on Computer and Communication Systems / v.5 no.10 / pp.319-326 / 2016
  • Abnormal situations caused by the aggressive behavior of pigs adversely affect their growth and lead to economic losses in intensive pigsties. An IT-based video surveillance system is therefore needed to monitor such abnormal situations in the pigsty continuously and minimize the economic damage. In this paper, we propose a new Kinect camera-based monitoring system for detecting individual pigs. The proposed system is characterized as follows: 1) background subtraction and a depth threshold are used to detect only standing pigs in the Kinect depth image; 2) the moving pigs are labeled as regions of interest; 3) a contour method is proposed and applied to resolve the touching-pigs problem in the Kinect depth image. Experimental results with depth videos obtained from a pig farm in Sejong illustrate the efficiency of the proposed method. A minimal depth-thresholding sketch is given below.
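
Steps 1) and 2) above amount to thresholding a depth frame against a background model and labeling the remaining blobs. The sketch below illustrates this on a synthetic depth frame; the depth range, frame size, change threshold, and minimum pig area are assumptions, and the paper's contour-based separation of touching pigs is not reproduced.

```python
# Depth-threshold + contour labeling sketch for standing-pig detection.
import cv2
import numpy as np

def detect_standing_pigs(depth_mm, background_mm, near=1500, far=2300, min_area=2000):
    """Return bounding boxes of standing pigs in a ceiling-mounted depth frame (mm)."""
    moving = np.abs(depth_mm.astype(np.int32) - background_mm.astype(np.int32)) > 80
    standing = (depth_mm > near) & (depth_mm < far)   # pig backs are closer than the floor
    mask = (moving & standing).astype(np.uint8) * 255
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > min_area]

# Synthetic example: a flat floor at 2.6 m with one raised blob ("pig back") at 2.0 m.
background = np.full((424, 512), 2600, np.uint16)
frame = background.copy()
frame[200:260, 150:300] = 2000
print(detect_standing_pigs(frame, background))        # one box around the blob
```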

A Best View Selection Method in Videos of Interested Player Captured by Multiple Cameras (다중 카메라로 관심선수를 촬영한 동영상에서 베스트 뷰 추출방법)

  • Hong, Hotak;Um, Gimun;Nang, Jongho
    • Journal of KIISE / v.44 no.12 / pp.1319-1332 / 2017
  • In recent years, the number of video cameras used to record and broadcast live sporting events has increased, and selecting the shot with the best view from multiple cameras has become an actively researched topic. Existing approaches assume that the background in the video is fixed; this paper proposes a best-view selection method for cases in which the background is not fixed. In our study, an athlete of interest was recorded in motion with multiple cameras, and every frame from every camera was analyzed to establish rules for selecting the best view. The frames selected by our system were compared with those that human viewers found most desirable: each of 20 non-specialists was asked to pick the best and worst views. The set of views most often chosen as best coincided with 54.5% of the frames selected by the proposed method, whereas the set most often chosen as worst coincided with only 9% of them, demonstrating the efficacy of the proposed method.

Highway Incident Detection and Classification Algorithms using Multi-Channel CCTV (다채널 CCTV를 이용한 고속도로 돌발상황 검지 및 분류 알고리즘)

  • Jang, Hyeok;Hwang, Tae-Hyun;Yang, Hun-Jun;Jeong, Dong-Seok
    • Journal of the Institute of Electronics and Information Engineers / v.51 no.2 / pp.23-29 / 2014
  • The advanced traffic management part of intelligent transport systems automates traffic tasks such as measuring vehicle speed and traffic volume and detecting traffic incidents through improved infrastructure such as high-definition cameras and high-performance radar sensors. For the safety of road users in particular, automated incident detection and secondary-accident prevention are required, and such systems normally combine CCTV-based image object detection with radar-based object detection. In this paper, we propose an algorithm for a real-time highway incident detection system that mosaics video from multiple surveillance cameras and, using background modeling, accurately tracks moving objects captured from different angles. Experiments confirmed that video detection can compensate for the short-range shaded areas and the long-range detection limits of radar, and that it offers better classification for daytime detection, bad weather excluded. A rough mosaicking sketch is given below.
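
Building the mosaic from multiple camera channels is essentially image registration between overlapping views. Below is a rough two-view sketch, assuming ORB features and a RANSAC homography in place of whatever registration and blending the paper uses; the match count and canvas size are illustrative assumptions.

```python
# Two-view mosaicking sketch: register view B onto view A with a homography.
import cv2
import numpy as np

def mosaic_two_views(img_a, img_b):
    orb = cv2.ORB_create(2000)
    ka, da = orb.detectAndCompute(img_a, None)
    kb, db = orb.detectAndCompute(img_b, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(da, db)
    matches = sorted(matches, key=lambda m: m.distance)[:200]   # keep the strongest matches
    src = np.float32([kb[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([ka[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)        # map B into A's image plane
    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (w * 2, h))          # assumed 2x-width canvas
    canvas[:h, :w] = img_a                                      # overlay the reference view
    return canvas
```

Once the views are stitched into one coordinate frame, a single background model (as in the tunnel example earlier) can track an object across camera boundaries.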

A Study on Recognition of Moving Object Crowdedness Based on Ensemble Classifiers in a Sequence (혼합분류기 기반 영상내 움직이는 객체의 혼잡도 인식에 관한 연구)

  • An, Tae-Ki;Ahn, Seong-Je;Park, Kwang-Young;Park, Goo-Man
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.2A / pp.95-104 / 2012
  • In ensemble-based pattern recognition, a strong classifier is composed of many weak classifiers. In this paper, we extract features from static-camera sequences to build such a strong classifier; because the weak classifiers take environmental factors into account, the combined classifier overcomes environmental effects and can even exploit cues from the environment such as shadows and reflections. The proposed method obtains a binary foreground image by frame differencing and uses boosting to train a crowdedness model and to recognize crowdedness from the extracted features. We tested the proposed system on a road sequence and a subway-platform sequence included in the AVSS 2007 dataset, and the results show good accuracy and efficiency in complex environments. A minimal boosting sketch is given below.
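
The boosting step above can be sketched with an off-the-shelf AdaBoost ensemble of decision stumps; scikit-learn stands in for the paper's own weak classifiers, and the foreground-ratio features and labels below are synthetic assumptions, not AVSS 2007 data.

```python
# Boosted-ensemble sketch: classify frames as crowded / not crowded from the
# fraction of foreground pixels (per image quadrant) in a frame-difference mask.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

# Synthetic features: foreground ratio in each of 4 quadrants of the mask.
X_sparse = rng.uniform(0.00, 0.15, size=(200, 4))    # "not crowded" frames
X_dense = rng.uniform(0.25, 0.60, size=(200, 4))     # "crowded" frames
X = np.vstack([X_sparse, X_dense])
y = np.array([0] * 200 + [1] * 200)

clf = AdaBoostClassifier(n_estimators=50)             # default weak learner: depth-1 stump
clf.fit(X, y)
print(clf.predict([[0.05, 0.04, 0.06, 0.03],          # sparse scene  -> 0
                   [0.40, 0.35, 0.45, 0.30]]))        # crowded scene -> 1
```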

An Implementation of Mobile Game using JBox2D Physics Engine in Android Platform (안드로이드 플랫폼에서 JBox2D 물리 엔진을 이용한 모바일 게임구현)

  • Hwang, Ki-Tae
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.11 no.6 / pp.119-126 / 2011
  • As a software component of computer games, a physics engine simulates the movement of objects according to the laws of physics. This paper presents the design and implementation of a mobile game on the Android platform using the JBox2D physics-engine library and the Android graphics APIs. The key idea of the game is borrowed from Crayon Physics, a well-known PC game. The game starts with no path from the user character to the destination character; the player has to build one by creating polygonal objects between them, and wins when the user character reaches the destination character. Because all objects in the game are governed by the laws of physics, the player must choose carefully when to create objects and what shapes to give them. Notably, the game introduces input methods available on mobile devices but not on PCs: players create objects by drawing polygons on the touch screen, and move objects or characters according to accelerometer values by tilting the device. A tiny physics-world sketch is given below.
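
The physics-engine side of the game boils down to creating bodies in a world and stepping the simulation every frame. The sketch below uses pybox2d, the Python port of the same Box2D engine that JBox2D wraps for Java, purely for illustration; the gravity, body sizes, and 60 Hz step are the usual Box2D hello-world values, not the game's.

```python
# Physics-world sketch in pybox2d: a dynamic box falls onto a static ground body.
from Box2D import b2PolygonShape, b2World

world = b2World(gravity=(0, -10), doSleep=True)

# Static ground and one dynamic box dropped onto it.
ground = world.CreateStaticBody(position=(0, 0),
                                shapes=b2PolygonShape(box=(50, 1)))
box = world.CreateDynamicBody(position=(0, 10))
box.CreatePolygonFixture(box=(1, 1), density=1.0, friction=0.3)

# Step the simulation at 60 Hz; in the game, each step would be followed by
# redrawing the scene with the Android graphics APIs, and touch-drawn polygons
# would be added as new dynamic bodies.
for _ in range(120):
    world.Step(1.0 / 60, 6, 2)

print(box.position)   # the box has fallen and come to rest on the ground
```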

Moving Object Extraction and Relative Depth Estimation of Background Regions in Video Sequences (동영상에서 물체의 추출과 배경영역의 상대적인 깊이 추정)

  • Park Young-Min;Chang Chu-Seok
    • The KIPS Transactions:PartB / v.12B no.3 s.99 / pp.247-256 / 2005
  • One of the classic research problems in computer vision is stereo, i.e., the reconstruction of three-dimensional shape from two or more images. This paper deals with extracting depth information for non-rigid, dynamic 3D scenes from general 2D video sequences taken with a monocular camera, such as movies, documentaries, and dramas. Block depths are extracted from the estimated block motions in two steps: (i) global parameters related to camera translation and focal length are computed from the block locations and motions; (ii) the depth of each block relative to the average image depth is computed from the global parameters and the block's location and motion. Both singular and non-singular cases were tested with various video sequences, and the resulting relative depths and extracted moving-object shapes agree closely with human perception. A rough motion-to-depth sketch is given below.
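
The underlying intuition of step (ii) is that, for a translating camera viewing a (mostly) static background, the image motion of a block is inversely proportional to its depth. The sketch below illustrates only that intuition with dense Farneback flow averaged per block; it does not reproduce the paper's global-parameter estimation, and the block size and flow parameters are assumptions.

```python
# Motion-to-relative-depth sketch: slower-moving blocks are treated as farther
# away, relative to the median block motion in the frame.
import cv2
import numpy as np

def relative_block_depth(prev_gray, gray, block=16):
    """Return a per-block relative depth map (>1 means farther than the frame median)."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 21, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    h, w = mag.shape
    h, w = h - h % block, w - w % block
    block_mag = mag[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return np.median(block_mag) / (block_mag + 1e-6)
```

Blocks belonging to independently moving objects violate this assumption, which is why the paper separates moving-object extraction from the depth estimation of background regions.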

Development of a Monitoring System Based on the Cooperation of Multiple Sensors on SenWeaver Platform (센위버 플랫폼 기반의 다중센서 협업을 이용한 모니터링 시스템 개발)

  • Kwon, Cha-Uk;Cha, Kyung-Ae
    • Journal of Korea Society of Industrial Information Systems / v.15 no.2 / pp.91-98 / 2010
  • This study proposes a monitoring system that watches its surroundings effectively by combining various kinds of sensor information, including image information, on a sensor network. The proposed system detects intruders entering indoor spaces through a region of interest during off-hours by installing cameras, PIR (Pyroelectric Infrared Ray) sensors, and body detectors in the regions of interest. The monitoring system is implemented on the SenWeaver platform, an integrated development tool for building wireless sensor network systems. In tests carried out in a practical experimental environment with interfaces implemented for the proposed system, the surroundings could be monitored effectively using the image information obtained from the cameras together with the sensor information acquired from the sensor nodes. A hypothetical camera/PIR cooperation sketch is given below.
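
One simple form of the camera/sensor cooperation described above is to capture and store a camera frame whenever a motion event arrives from a PIR node. The sketch below is hypothetical: the read_pir_event() helper and the event format are invented stand-ins, and SenWeaver's actual node and platform APIs are not modeled.

```python
# Hypothetical camera/PIR cooperation loop: save a snapshot on each motion event.
import time
import cv2

def read_pir_event():
    """Stand-in for receiving a motion event from a PIR sensor node."""
    time.sleep(1.0)                                   # illustrative poll interval
    return {"node": "pir-01", "motion": True}

cap = cv2.VideoCapture(0)                             # camera covering the region of interest
try:
    while True:
        event = read_pir_event()
        if event["motion"]:
            ok, frame = cap.read()
            if ok:
                name = "intrusion_%s_%d.jpg" % (event["node"], int(time.time()))
                cv2.imwrite(name, frame)              # keep an evidence frame for review
finally:
    cap.release()
```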