• Title/Summary/Keyword: Motion Information of Target


ROI-Based 3D Video Stabilization Using Warping (관심영역 기반 와핑을 이용한 3D 동영상 안정화 기법)

  • Lee, Tae-Hwan;Song, Byung-Cheol
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.2 / pp.76-82 / 2012
  • As portable camcorders have become popular, various video stabilization algorithms for removing camera shake have been developed. In the past, most video stabilization algorithms were based on 2-dimensional camera motion, but recent algorithms achieve much better performance by considering 3-dimensional camera motion. Among previous video stabilization algorithms, the 3D video stabilization algorithm using content-preserving warps is regarded as the state of the art owing to its superior performance; however, its major drawback is high computational complexity. We therefore present a computationally light full-frame warping algorithm based on a region of interest (ROI) that provides visual quality comparable to the state of the art within the ROI. First, a proper ROI with a target depth is chosen for each frame, and then full-frame warping based on the selected ROI is applied.
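
The ROI-guided idea above can be illustrated with a minimal sketch: fit a global similarity transform (scale, rotation, translation) to feature matches inside the ROI by least squares, then apply it to the whole frame. The function names and parameterization are our own illustration, not the paper's content-preserving warp.

```python
import numpy as np

def fit_similarity(src, dst):
    """Least-squares similarity transform from ROI feature matches.
    Parameters [a, b, tx, ty] model x' = a*x - b*y + tx, y' = b*x + a*y + ty.
    src, dst : (N, 2) matched point coordinates (N >= 2)."""
    A = np.zeros((2 * len(src), 4))
    b = dst.reshape(-1)                      # interleaved [x'0, y'0, x'1, ...]
    A[0::2, 0] = src[:, 0]; A[0::2, 1] = -src[:, 1]; A[0::2, 2] = 1
    A[1::2, 0] = src[:, 1]; A[1::2, 1] = src[:, 0];  A[1::2, 3] = 1
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

def apply_similarity(p, pts):
    """Warp arbitrary (full-frame) points with the fitted ROI transform."""
    a, bb, tx, ty = p
    x, y = pts[:, 0], pts[:, 1]
    return np.stack([a * x - bb * y + tx, bb * x + a * y + ty], axis=1)
```

Fitting on ROI matches only is what keeps the warp cheap: the full frame is then transformed by the same four parameters.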

Moving Object Block Extraction for Compressed Video Signal Based on 2-Mode Selection (2-모드 선택 기반의 압축비디오 신호의 움직임 객체 블록 추출)

  • Kim, Dong-Wook
    • Journal of the Korea Society of Computer and Information / v.12 no.5 / pp.163-170 / 2007
  • In this paper, we propose a new technique for extracting moving objects from compressed video signals. Moving object extraction is used in several fields, such as content-based retrieval and target tracking. To extract moving-object blocks, motion vectors and DCT coefficients are used selectively. The proposed algorithm has the merit that full decoding is not needed, because it uses only coefficients in the DCT transform domain. We used three test video sequences in computer simulations and obtained satisfactory results.
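
A minimal sketch of the 2-mode selection idea, with hypothetical thresholds `mv_thresh` and `ac_thresh`: blocks that carry a coded motion vector are classified by its magnitude (mode 1), and the remaining blocks fall back to DCT AC-coefficient energy (mode 2).

```python
import numpy as np

def moving_object_blocks(mv, ac_energy, mv_thresh=1.5, ac_thresh=50.0):
    """Label macroblocks as moving via 2-mode selection.

    mv        : (H, W, 2) motion vectors per block, NaN where no MV is coded
    ac_energy : (H, W) sum of squared AC DCT coefficients per block
    Returns a (H, W) boolean map of moving-object blocks."""
    mag = np.linalg.norm(mv, axis=-1)                 # per-block MV magnitude
    has_mv = ~np.isnan(mag)
    moving = np.zeros(mag.shape, dtype=bool)
    moving[has_mv] = mag[has_mv] > mv_thresh          # mode 1: motion vectors
    moving[~has_mv] = ac_energy[~has_mv] > ac_thresh  # mode 2: DCT energy
    return moving
```

Both cues come straight from the compressed bitstream, which is why no full decode is required.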


3-D vision sensor for arc welding industrial robot system with coordinated motion

  • Shigehiru, Yoshimitsu;Kasagami, Fumio;Ishimatsu, Takakazu
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 1992.10b / pp.382-387 / 1992
  • In order to obtain the desired arc welding performance, we have already developed an arc welding robot system that enables coordinated motion of dual robot arms. In this system, one arm holds the welding target as a positioning device, while the other arm moves the welding torch. For such a dual-arm system, the positioning accuracy of the robots is an important problem, since conventional industrial robots do not have sufficient absolute positioning accuracy. To cope with this, our system employed a teach-and-playback method in which absolute errors are compensated through the operator's visual feedback. With this system, ideal arc welding that considers the posture of the welding target and the direction of gravity became possible. One problem remained, however, even though we developed an original teaching method for dual-arm robots with coordinated motion: manual teaching is tedious, since it requires fine movements and intensive attention. We therefore developed a 3-dimensional vision-guided control method for our welding robot system with coordinated motion. In this paper we present the 3-dimensional vision sensor that guides the system. The sensing device is compactly designed and mounted on the tip of the arc welding robot. It detects the 3-dimensional shape of the groove on the target workpiece to be welded, and the robot is controlled to trace the groove accurately. The 3-dimensional measurement is based on the slit-ray projection method: two laser slit-ray projectors and one CCD TV camera are compactly mounted, and careful image processing enables 3-dimensional data extraction without suffering from disturbance light. The 3-dimensional information of the target groove is combined with rough teaching data given by the operator in advance, so the teaching tasks are simplified.
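
The slit-ray (light-sectioning) measurement described above reduces to intersecting a camera ray with the laser plane. A minimal sketch with hypothetical names; in practice the plane parameters come from calibrating the projectors:

```python
import numpy as np

def slit_ray_point(pixel_ray, plane_n, plane_d):
    """Triangulate one 3-D point in the slit-ray method.

    The point lies on the camera ray X = t * r (r is the back-projected
    pixel direction) and on the laser plane n . X = d, so t = d / (n . r).
    pixel_ray : (3,) ray direction in camera coordinates
    plane_n, plane_d : laser plane normal and offset (calibrated)."""
    t = plane_d / float(np.dot(plane_n, pixel_ray))
    return t * np.asarray(pixel_ray, dtype=float)
```

Sweeping the slit over the workpiece and repeating this per illuminated pixel yields the groove profile the robot traces.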


Face and Hand Tracking using MAWUPC algorithm in Complex background (복잡한 배경에서 MAWUPC 알고리즘을 이용한 얼굴과 손의 추적)

  • Lee, Sang-Hwan;An, Sang-Cheol;Kim, Hyeong-Gon;Kim, Jae-Hui
    • Journal of the Institute of Electronics Engineers of Korea SP / v.39 no.2 / pp.39-49 / 2002
  • This paper proposes the MAWUPC (Motion Adaptive Weighted Unmatched Pixel Count) algorithm to track multiple objects of similar color. The MAWUPC algorithm combines color and motion in a new, effective way. We apply it to face and hand tracking against complex backgrounds in image sequences captured by a single camera. MAWUPC improves on the previously proposed AWUPC (Adaptive Weighted Unmatched Pixel Count) algorithm, which is based on the concept of Moving Color, i.e. the effective combination of color and motion information. The proposed algorithm incorporates a color transform that enhances a specific color, the UPC (Unmatched Pixel Count) operation for detecting motion, and a discrete Kalman filter for reflecting motion. It has the advantage of reducing the adverse effect of occlusion among target objects while rejecting static background objects whose color is similar to that of the tracked objects. We demonstrate the efficiency of MAWUPC through face and hand tracking experiments on several image sequences featuring complex backgrounds, face-hand occlusion, and crossing hands.
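
The UPC motion cue mentioned above can be sketched as a thresholded frame difference; `diff_thresh` and the helper names are our own illustration, and the full MAWUPC additionally applies color weighting and Kalman-predicted search windows.

```python
import numpy as np

def unmatched_pixel_count(prev, curr, diff_thresh=15):
    """Per-pixel 'unmatched' map: 1 where the intensity change between
    consecutive frames exceeds diff_thresh (the basic UPC motion cue)."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    return (diff > diff_thresh).astype(np.uint8)

def upc_in_window(prev, curr, box, diff_thresh=15):
    """Sum of unmatched pixels inside a (y0, y1, x0, x1) search window,
    e.g. the window predicted by the tracker for one target."""
    y0, y1, x0, x1 = box
    return int(unmatched_pixel_count(prev, curr, diff_thresh)[y0:y1, x0:x1].sum())
```

A high count inside a predicted window indicates the target is moving there; a color-similar but static background object yields a count near zero, which is the rejection property the abstract describes.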

Vision-based Target Tracking for UAV and Relative Depth Estimation using Optical Flow (무인 항공기의 영상기반 목표물 추적과 광류를 이용한 상대깊이 추정)

  • Jo, Seon-Yeong;Kim, Jong-Hun;Kim, Jung-Ho;Lee, Dae-Woo;Cho, Kyeum-Rae
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.37 no.3 / pp.267-274 / 2009
  • Recently, UAVs (Unmanned Aerial Vehicles) have attracted much attention as unmanned systems for various missions, many of which rely on a vision system. In particular, missions such as surveillance and pursuit are carried out using vision data transmitted from the UAV. For small UAVs, monocular vision is often used to limit weight and cost. Research on mission performance with monocular vision continues but, since the ground and the target differ in distance from the UAV, 3D distance measurement remains inaccurate. In this study, the Mean-Shift algorithm, optical flow, and the subspace method are combined to estimate relative depth. The Mean-Shift algorithm is used for target tracking and for determining the region of interest (ROI). Optical flow captures image motion information from pixel intensities. The subspace method then computes the translation and rotation of the image and estimates the relative depth. Finally, we present results obtained from images acquired in UAV experiments.
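
The flow-to-depth link can be sketched under a strong simplification: for a purely translational camera, flow magnitude is inversely proportional to scene depth, so inverse flow magnitude gives depth up to an unknown global scale. This sketch omits the rotation compensation that the subspace method provides; the function name and normalization are our own.

```python
import numpy as np

def relative_depth_from_flow(flow, eps=1e-6):
    """Relative depth map from dense optical flow, assuming purely
    translational camera motion (no rotation component).

    flow : (H, W, 2) optical-flow vectors in pixels/frame
    Returns depth normalized to (0, 1]; larger = farther."""
    mag = np.linalg.norm(flow, axis=-1)
    depth = 1.0 / np.maximum(mag, eps)   # inverse magnitude, arbitrary scale
    return depth / depth.max()           # normalize away the unknown scale
```

Fast-moving pixels (near objects) map to small values, slow-moving pixels (far ground) to values near 1, which is exactly the ordering needed to separate a target from its background.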

Motion Derivatives based Entropy Feature Extraction Using High-Range Resolution Profiles for Estimating the Number of Targets and Seduction Chaff Detection (표적 개수 추정 및 근접 채프 탐지를 위한 고해상도 거리 프로파일을 이용한 움직임 미분 기반 엔트로피 특징 추출 기법)

  • Lee, Jung-Won;Choi, Gak-Gyu;Na, Kyoungil
    • Journal of the Korea Institute of Military Science and Technology / v.22 no.2 / pp.207-214 / 2019
  • This paper proposes a new feature extraction method for automatically estimating the number of targets and detecting seduction chaff using high-range resolution profiles (HRRP). Features of a one-dimensional range profile can be limited or missing owing to the lack of information at a single time instant. The proposed method instead considers the dynamic movements of targets, which depend on their radial velocities. The observed HRRP sequence is used to construct a time-range distribution matrix; assuming that the diverse radial velocities reflect the number of targets and a seduction-chaff launch, the method exploits the gradient distribution of the time-range distribution matrix image. The approach is validated with electromagnetic computation data and dynamic simulation.
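
One way to realize a gradient-distribution feature on the time-range matrix is the entropy of a gradient-orientation histogram: targets at different radial velocities trace streaks with different slopes, and more distinct slopes raise the entropy. This is a hedged sketch of the general idea, not the paper's exact feature.

```python
import numpy as np

def gradient_entropy(trd, bins=16):
    """Shannon entropy (bits) of the magnitude-weighted gradient-orientation
    histogram of a time-range distribution (TRD) matrix.

    trd : (T, R) matrix of HRRP magnitudes over time."""
    gy, gx = np.gradient(trd.astype(float))          # gradients per axis
    mag = np.hypot(gx, gy)                           # edge strength
    ang = np.arctan2(gy, gx)                         # streak orientation
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    p = hist / max(hist.sum(), 1e-12)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

A single target yields a narrow orientation distribution (low entropy); a second target or a chaff cloud separating at a different radial velocity adds new slopes and pushes the entropy up.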

A Study of Improved CSP Coefficients Using Synchronous Addition Methods in a Target Tracking System (표적추적 시스템에서 동기가산법을 이용한 CSP계수 향상에 관한 연구)

  • Song Do-Hoon;Kim Jung-Ho;Cha Kyung-Hwan;Kim Chun-Duck
    • Proceedings of the Acoustical Society of Korea Conference / autumn / pp.161-164 / 1999
  • In this paper, cross-power spectrum phase (CSP) analysis is used for time delay estimation (TDE) between sensor output signals for a target signal arriving at a sensor array in a target tracking system. However, owing to multipath propagation of the sound wave and background noise, the CSP coefficients contain much clutter, which causes bearing estimation errors. We therefore propose a method that synchronously adds the CSP coefficients of sensor pairs arranged symmetrically about the center of the array, suppressing directional information other than the true target direction. For bearings-only target motion analysis (BOTMA) of a target whose course changes over time, the synchronously added CSP results were accumulated at every observation time to form a bearing trajectory; although results varied slightly with the time window, a trajectory tracking performance gain of about 10 dB was confirmed.


A Ubiquitous Vision System based on the Identified Contract Net Protocol (Identified Contract Net 프로토콜 기반의 유비쿼터스 시각시스템)

  • Kim, Chi-Ho;You, Bum-Jae;Kim, Hagbae
    • The Transactions of the Korean Institute of Electrical Engineers D / v.54 no.10 / pp.620-629 / 2005
  • In this paper, a new protocol-based approach is proposed for developing a ubiquitous vision system. The approach treats the ubiquitous vision system as a multi-agent system, so each vision sensor can be regarded as an agent (vision agent). Each vision agent independently performs exact segmentation of a target using color and motion information, real-time visual tracking of multiple targets, and location estimation by a simple perspective transform. The problem of matching target identities during handover between vision agents is solved by the Identified Contract Net (ICN) protocol implemented for this approach. The protocol-based approach is independent of the number of vision agents and, moreover, requires neither calibration nor overlapping fields of view between vision agents. The ICN protocol therefore improves the speed, scalability, and modularity of the system. The approach was successfully applied to our ubiquitous vision system and performed well in several experiments.
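
The "simple perspective transform" used for location estimation maps image points to ground-plane coordinates with a 3x3 homography. A minimal sketch, where `H` stands in for a hypothetical per-camera calibration result:

```python
import numpy as np

def apply_homography(H, pts):
    """Map image points to plane coordinates via a 3x3 homography H.

    pts : (N, 2) pixel coordinates
    Returns (N, 2) coordinates after the projective division."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]                   # divide by w
```

Each vision agent can report target locations in this shared plane frame, which is what lets the ICN handover work without overlapping fields of view.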

Study on Tactical Target Tracking Performance Using Unscented Transform-based Filtering (무향 변환 기반 필터링을 이용한 전술표적 추적 성능 연구)

  • Byun, Jaeuk;Jung, Hyoyoung;Lee, Saewoom;Kim, Gi-Sung;Kim, Kiseon
    • Journal of the Korea Institute of Military Science and Technology / v.17 no.1 / pp.96-107 / 2014
  • Tracking tactical objects is a fundamental task in network-equipped modern warfare. A geodetic coordinate system based on longitude, latitude, and height is suitable for representing the locations of tactical objects when multi-platform data fusion is considered. The motion of a tactical object, described by a dynamic model, requires appropriate filtering to overcome the system and measurement noise that arises when acquiring information from multiple sensors. This paper introduces a filter suitable for multi-sensor data fusion and tactical object tracking, in particular the unscented transform (UT) and its details. The UT in the Unscented Kalman Filter (UKF) uses a small set of samples to estimate statistics propagated through a nonlinearity, and it offers better performance and complexity than the conventional linearization method. We show the effects of UT-based filtering via simulations of a practical tactical object tracking scenario.
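
The unscented transform itself is compact enough to sketch: 2n+1 sigma points are propagated through the nonlinearity and reweighted. This uses a simple kappa-only formulation (alpha = 1, beta = 0), one of several common parameterizations, not necessarily the one in the paper.

```python
import numpy as np

def unscented_transform(f, mean, cov, kappa=1.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear function f.

    Builds 2n+1 sigma points from the Cholesky factor of (n+kappa)*cov,
    pushes them through f, and recombines with symmetric weights."""
    n = len(mean)
    S = np.linalg.cholesky((n + kappa) * cov)        # matrix square root
    sigma = [mean] \
        + [mean + S[:, i] for i in range(n)] \
        + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)                       # weights sum to 1
    ys = np.array([f(s) for s in sigma])             # propagated points
    y_mean = w @ ys
    d = ys - y_mean
    y_cov = (w[:, None] * d).T @ d
    return y_mean, y_cov
```

For a linear f the UT is exact, which makes a handy sanity check; its advantage over linearization appears for the nonlinear measurement models (e.g. range/bearing to geodetic coordinates) in the tracking scenario.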

A study on the characteristics of hull shape parameter of fishing vessel types (트롤어선 선종의 선형 특성 계수에 관한 연구)

  • KIM, Su-Hyung;LEE, Chun-Ki;KIM, Min-Son
    • Journal of the Korean Society of Fisheries and Ocean Technology / v.56 no.2 / pp.163-171 / 2020
  • Trawling in limited fishing grounds with numerous fish schools can cause collisions between fishing vessels. Providing accurate maneuvering information for each situation is therefore essential for improving seafarer safety and fishing efficiency as well as navigational safety. It is difficult to obtain all maneuvering information through sea trials alone, so a method based on empirical formulas is necessary. However, most empirical formulas were developed for merchant ship types, and the characteristics of hull shape parameters such as Cb·B/L and d·Cb/B clearly differ between fishing vessels and merchant ships, which can cause estimation errors. In this study, the authors selected target fishing vessels and merchant ships and analyzed the characteristics of the hull shape parameters by ship type. Based on this analysis, an empirical formula developed for merchant ship types was applied to the target fishing vessels, and turning-motion simulations verified that estimation errors can arise. In conclusion, the characteristics of the hull shape parameters of fishing vessels must be incorporated before an empirical formula developed for merchant ship types can be applied to fishing vessel types.