• Title/Summary/Keyword: motion features

Search results: 656

An Adaptive ROI Detection System for Spatiotemporal Features (시.공간특징에 대해 적응할 수 있는 ROI 탐지 시스템)

  • Park Min-Chul;Cheoi Kyung-Joo
    • The Journal of the Korea Contents Association / v.6 no.1 / pp.41-53 / 2006
  • In this paper, an adaptive ROI (region of interest) detection system based on spatiotemporal features is proposed. It is assumed that motion, which represents temporal visual conspicuity between adjacent frames, takes priority over spatial visual conspicuity, because moving objects or regions usually draw stronger attention than static ones in motion pictures. For still images, the visual features that constitute topographic feature maps are used as spatial features. Comparative experiments against a human subjective evaluation show that the correct detection rate for visual attention regions improves when both spatial and temporal features are exploited rather than either feature alone. A minimal sketch of this kind of spatiotemporal saliency fusion follows this entry.

  • PDF
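
As a rough illustration of the fusion idea in this abstract, the Python sketch below combines a temporal conspicuity map (plain frame differencing, an assumed stand-in for the paper's temporal feature) with a spatial one (Laplacian contrast, likewise assumed) and weights motion more heavily, reflecting the priority the paper gives to motion. The weight value is illustrative, not the authors' scheme.

```python
import cv2
import numpy as np

def spatiotemporal_roi(prev_gray, curr_gray, motion_weight=0.7):
    """Fuse temporal and spatial conspicuity into a single ROI mask.

    Frame differencing and Laplacian contrast are assumed stand-ins
    for the paper's temporal and spatial feature maps.
    """
    # Temporal conspicuity: absolute difference of adjacent frames.
    temporal = cv2.absdiff(curr_gray, prev_gray).astype(np.float32)
    # Spatial conspicuity: local contrast via Laplacian magnitude.
    spatial = np.abs(cv2.Laplacian(curr_gray, cv2.CV_32F))

    def norm(m):  # rescale a map to [0, 1]
        return (m - m.min()) / (m.max() - m.min() + 1e-8)

    saliency = motion_weight * norm(temporal) + (1.0 - motion_weight) * norm(spatial)
    # Otsu threshold on the fused map yields the detected ROI mask.
    _, roi = cv2.threshold((saliency * 255).astype(np.uint8), 0, 255,
                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return saliency, roi
```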

Registration of UAV Overlapped Image

  • Ochirbat, Sukhee;Cho, Eun-Rae;Kim, Eui-Myoung;Yoo, Hwan-Hee
    • Proceedings of the Korean Association of Geographic Information Studies Conference / 2008.10a / pp.245-246 / 2008
  • The goal of this study is to explore the feasibility of the KLT tracker for tracking features between two images that differ by rotation and shift. Jangsu-Gun, South Korea, was selected as the test site, and images taken from a UAV camera were analyzed with a KLT tracker implemented in a PC environment. Results for two images with a large overlapping area were compared against results for two images with a small overlapping area and rotation. Overall, the research indicates that the number of tracked features can increase significantly during tracking when rotation and motion between images are small, but when the KLT tracker is used to extract and track features between images with large rotation and motion, the number of tracked features decreases. A minimal KLT tracking sketch follows this entry.

  • PDF
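
For reference, a basic KLT pipeline with OpenCV's Shi-Tomasi detector and pyramidal Lucas-Kanade tracker looks like the sketch below; the parameter values are common defaults, not those used in the study.

```python
import cv2
import numpy as np

def klt_track(gray1, gray2, max_corners=500):
    """Extract and track features between two frames with the KLT tracker."""
    # Detect good features to track in the first image.
    p1 = cv2.goodFeaturesToTrack(gray1, maxCorners=max_corners,
                                 qualityLevel=0.01, minDistance=7)
    # Track them into the second image with pyramidal Lucas-Kanade.
    p2, status, _err = cv2.calcOpticalFlowPyrLK(gray1, gray2, p1, None,
                                                winSize=(21, 21), maxLevel=3)
    # Keep only correspondences that were tracked successfully.
    ok = status.ravel() == 1
    return p1[ok].reshape(-1, 2), p2[ok].reshape(-1, 2)
```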

Feature based Object Tracking from an Active Camera (능동카메라 환경에서의 특징기반의 이동물체 추적)

  • 오종안;정영기
    • Proceedings of the IEEK Conference / 2002.06d / pp.141-144 / 2002
  • This paper describes a new feature-based tracking system that can track moving objects with a pan-tilt camera. We extract corner features from the scene and track them using filtering. The global motion caused by camera movement is eliminated by finding the maximal matching position between consecutive frames using pyramidal template matching. The region of the moving object is segmented by clustering the motion trajectories, and the pan-tilt controller is commanded to follow the object so that it always lies at the center of the camera. The proposed system has demonstrated good performance on several video sequences. A sketch of the pyramidal template-matching step follows this entry.

  • PDF
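
The global-motion step can be sketched as coarse-to-fine template matching; the snippet below is an illustrative reconstruction, not the authors' code. Pyramid depth, template layout, and search radius are assumed choices, and the sketch assumes the inter-frame shift is small relative to the frame size.

```python
import cv2
import numpy as np

def global_shift(prev_gray, curr_gray, levels=3, radius=4):
    """Estimate the global translation between frames by coarse-to-fine
    pyramidal template matching of a central patch."""
    # Gaussian pyramids, index 0 = full resolution.
    pp, pc = [prev_gray], [curr_gray]
    for _ in range(levels - 1):
        pp.append(cv2.pyrDown(pp[-1]))
        pc.append(cv2.pyrDown(pc[-1]))

    dx = dy = 0
    for lvl in range(levels - 1, -1, -1):        # coarse -> fine
        dx, dy = dx * 2, dy * 2                  # propagate estimate up a level
        p, c = pp[lvl], pc[lvl]
        h, w = p.shape
        # Central template taken from the previous frame.
        ty, tx = h // 4, w // 4
        tmpl = p[ty:ty + h // 2, tx:tx + w // 2]
        # Small search window around the current shift estimate; assumes
        # the shift is small enough that the window stays larger than
        # the template after clamping.
        sy, sx = max(0, ty + dy - radius), max(0, tx + dx - radius)
        ey = min(h, ty + dy + h // 2 + radius)
        ex = min(w, tx + dx + w // 2 + radius)
        res = cv2.matchTemplate(c[sy:ey, sx:ex], tmpl, cv2.TM_CCOEFF_NORMED)
        _, _, _, (mx, my) = cv2.minMaxLoc(res)
        dx, dy = sx + mx - tx, sy + my - ty
    return dx, dy                                # shift of curr relative to prev
```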

Motion Detection Using Electric Field Theory

  • Ono, Naoki;Yang, Yee-Hong
    • Proceedings of the IEEK Conference / 2000.07b / pp.823-826 / 2000
  • Motion detection is an important step in computer vision and image processing. Traditional motion detection systems fall into two categories: feature based and gradient based. In feature-based motion detection, features in consecutive frames are detected and matched; gradient-based methods assume that the intensity varies linearly and locally. The method we propose is neither feature based nor gradient based but instead uses electric field theory: the pixels in an image are modeled as point charges, and motion is detected from the variations between the two electric fields produced by the charges corresponding to the two images. A sketch of this charge-field computation follows this entry.

  • PDF
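
As a rough sketch of the idea, the code below treats pixel intensities as point charges and computes the field by convolving with truncated Coulomb-style kernels; the kernel size, the truncation, and the decision threshold are all assumptions, not the authors' formulation.

```python
import cv2
import numpy as np

def electric_field(gray, ksize=15):
    """2D 'electric field' with pixel intensities as point charges.

    The field at each pixel is the superposition of Coulomb-like
    contributions from its neighbors, computed as a convolution with
    truncated 1/r^2 kernels.
    """
    q = gray.astype(np.float32)
    r = ksize // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1].astype(np.float32)
    d3 = (x * x + y * y) ** 1.5
    d3[r, r] = np.inf                      # no self-contribution
    kx, ky = x / d3, y / d3                # unit-charge Coulomb kernels
    ex = cv2.filter2D(q, cv2.CV_32F, kx)
    ey = cv2.filter2D(q, cv2.CV_32F, ky)
    return ex, ey

def detect_motion(frame1, frame2, thresh=0.5):
    """Flag motion where the two frames' fields differ noticeably.

    The threshold is scale-dependent and illustrative.
    """
    e1 = np.dstack(electric_field(frame1))
    e2 = np.dstack(electric_field(frame2))
    diff = np.linalg.norm(e1 - e2, axis=2)
    return diff > thresh
```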

Video Scene Segmentation Technique based on Color and Motion Features (칼라 및 모션 특징 기반 비디오 씬 분할 기법)

  • 송창준;고한석;권용무
    • Journal of Broadcast Engineering / v.5 no.1 / pp.102-112 / 2000
  • Previous video structuring techniques are mainly limited to the shot or shot-group level, but shot-level structure cannot capture the semantics within a video, so recent research has turned to higher-level structuring. To overcome this drawback, we propose a video scene segmentation technique based on color and motion features. To handle varied color distributions, each shot is divided into sub-shots based on color features, and a key frame is extracted from each sub-shot. The motion feature of a shot is extracted from MPEG-1 motion vectors, and adaptive weights based on the motion's properties within the search range are applied to the color and motion features. Experimental results show that the proposed technique outperforms previous techniques with respect to over-segmentation and the reflection of semantics. The technique decomposes video into a meaningful hierarchical structure and supports scene-based browsing and retrieval. A sketch of a weighted color/motion shot-similarity measure follows this entry.

  • PDF
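
The adaptive-weighting idea can be illustrated with a small similarity function; the weighting rule below is an assumed example of shifting emphasis between color and motion cues, not the paper's exact formulation.

```python
import numpy as np

def shot_similarity(hist_a, hist_b, motion_a, motion_b, motion_activity):
    """Similarity between two shots from color and motion features.

    hist_*  : normalized color histograms of key frames
    motion_*: scalar motion-magnitude features from MPEG motion vectors
    motion_activity in [0, 1] adapts the weights so that high-motion
    content leans on the motion feature (illustrative rule).
    """
    color_sim = np.minimum(hist_a, hist_b).sum()        # histogram intersection
    motion_sim = 1.0 - abs(motion_a - motion_b) / (max(motion_a, motion_b) + 1e-8)
    w = 0.3 + 0.4 * motion_activity                     # adaptive motion weight
    return (1 - w) * color_sim + w * motion_sim
```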

Dual-stream Co-enhanced Network for Unsupervised Video Object Segmentation

  • Hongliang Zhu;Hui Yin;Yanting Liu;Ning Chen
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.4 / pp.938-958 / 2024
  • Unsupervised Video Object Segmentation (UVOS) is a highly challenging problem in computer vision, since no annotation of the target object in the test video is available. The main difficulty is effectively handling the complicated, changeable motion of the target object and the confusion caused by similar background objects in the video sequence. In this paper, we propose a novel deep Dual-stream Co-enhanced Network (DC-Net) for UVOS via bidirectional motion-cue refinement and multi-level feature aggregation, which takes full advantage of motion cues and effectively integrates features at different levels to produce high-quality segmentation masks. DC-Net is a dual-stream architecture in which the two streams co-enhance each other: a motion stream with a Motion-cues Refine Module (MRM) learns from bidirectional optical-flow images and produces a fine-grained, complete, distinctive motion saliency map, while an appearance stream with a Multi-level Feature Aggregation Module (MFAM) and a Context Attention Module (CAM) integrates features at different levels. Specifically, the motion saliency map obtained by the motion stream is fused with each decoder stage of the appearance stream to improve the segmentation, and in turn the segmentation loss in the appearance stream feeds back into the motion stream to enhance the motion refinement. Experimental results on three datasets (DAVIS2016, VideoSD, SegTrack-v2) demonstrate that DC-Net achieves results comparable with state-of-the-art methods. A schematic sketch of the fusion pattern follows this entry.
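
The co-enhancement pattern can be sketched schematically. The PyTorch snippet below is an illustration of fusing a motion-saliency map into every appearance-decoder stage; the layer sizes are placeholders, and the internals of MRM, MFAM, and CAM are not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualStreamFusion(nn.Module):
    """Schematic of DC-Net's fusion pattern (not the authors' code).

    A motion stream turns stacked forward/backward optical flow (4
    channels) into a single-channel saliency map; each appearance
    decoder stage consumes its encoder features plus that map.
    """
    def __init__(self, chans=(256, 128, 64)):
        super().__init__()
        self.motion = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 1), nn.Sigmoid())
        stages, prev = [], 0
        for c in chans:  # each stage sees features + saliency + previous output
            stages.append(nn.Conv2d(c + 1 + prev, c, 3, padding=1))
            prev = c
        self.stages = nn.ModuleList(stages)
        self.head = nn.Conv2d(chans[-1], 1, 1)

    def forward(self, feats, flow_fwd, flow_bwd):
        # feats: encoder features, coarsest first, channels matching `chans`.
        sal = self.motion(torch.cat([flow_fwd, flow_bwd], dim=1))
        x = None
        for f, stage in zip(feats, self.stages):
            s = F.interpolate(sal, size=f.shape[-2:], mode='bilinear',
                              align_corners=False)
            parts = [f, s]
            if x is not None:
                parts.append(F.interpolate(x, size=f.shape[-2:],
                                           mode='bilinear', align_corners=False))
            x = F.relu(stage(torch.cat(parts, dim=1)))
        return torch.sigmoid(self.head(x))
```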

Cancellation of MRI Motion Artifact in Image Plane (촬상단면내의 MRI 체동 아티팩트의 제거)

  • Kim, Eung-Kyeu
    • Journal of KIISE: Software and Applications / v.27 no.4 / pp.432-440 / 2000
  • In this study, a new algorithm for canceling MRI artifacts due to translational motion in the image plane is described. Unlike the conventional iterative phase-retrieval algorithm, for which there is no guarantee of convergence, a direct method for estimating the motion is presented. In previous approaches, motion in the x (readout) direction and the y (phase-encoding) direction is estimated simultaneously; however, the features of the x- and y-directional motions differ, and by analyzing these features, each is canceled by a different algorithm in two steps. First, the x-directional motion corresponds to a shift of the x-directional spectrum of the MRI signal, and the non-zero area of the spectrum corresponds to the projection of the density function onto the x-axis; the motion is therefore estimated by tracing the edges between the non-zero and zero areas of the spectrum, and canceled by shifting the spectrum in the inverse direction. Next, the y-directional motion is canceled using a new constraint condition that separates the motion component from the true image component. The algorithm is shown to be effective on a phantom image with simulated motion. A sketch of the x-directional step follows this entry.

  • PDF
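
The x-directional step lends itself to a short sketch. The code below assumes each readout line can be corrected independently and that the projection's support edge can be found by a simple magnitude threshold; both are simplifying choices for illustration, not the paper's exact procedure.

```python
import numpy as np

def cancel_x_motion(kspace, thresh=0.05):
    """Cancel x (readout) translational motion line by line in k-space.

    The inverse FFT of a readout line gives the object's projection
    onto the x-axis; x motion shifts this projection, so the shift is
    estimated from the left edge of its non-zero support and undone by
    rolling the projection back.
    """
    corrected = np.empty_like(kspace)
    ref_edge = None
    for i, line in enumerate(kspace):
        proj = np.fft.fftshift(np.fft.ifft(line))    # projection onto x
        mag = np.abs(proj)
        nz = np.nonzero(mag > thresh * mag.max())[0]
        edge = int(nz[0]) if nz.size else 0          # left support edge
        if ref_edge is None:
            ref_edge = edge                          # first line = reference
        proj = np.roll(proj, ref_edge - edge)        # undo the shift
        corrected[i] = np.fft.fft(np.fft.ifftshift(proj))
    return corrected
```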

A study on the real time obstacle recognition by scanned line image (스캔라인 연속영상을 이용한 실시간 장애물 인식에 관한 연구)

  • Cheung, Sheung-Youb;Oh, Jun-Ho
    • Transactions of the Korean Society of Mechanical Engineers A / v.21 no.10 / pp.1551-1560 / 1997
  • This study is devoted to the detection of 3-dimensional point obstacles on a plane using accumulated scan-line images. Accumulating only one scan line per frame allows the image to be processed in real time, and because the time between frames is short, the motion of features in the image is small, so tracking features does not take much time. A Kalman filter is used to obtain recursively optimal estimates of obstacle positions and robot motion as the camera moves; for a fixed environment, a 3-dimensional obstacle point map is obtained, and the position and motion of moving obstacles can also be obtained by pre-segmentation. To resolve the stereo ambiguity arising from multiple matches, the camera motion is actively used to discard mismatched features, and a parallel stereo camera setup is used to obtain the relative distance of obstacles from the camera. The proposed algorithm is evaluated in experiments with a small test vehicle. A minimal Kalman-filter sketch for this kind of recursive estimation follows this entry.
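
The recursive estimation step is a standard Kalman filter. The sketch below shows a generic constant-velocity filter for one tracked point, an illustrative stand-in for the paper's state model, which additionally couples obstacle positions with robot motion.

```python
import numpy as np

class PointKalman:
    """Constant-velocity Kalman filter for one tracked obstacle point.

    State x = [px, py, vx, vy]; measurements are noisy (px, py)
    observations. A textbook filter, not the authors' exact model.
    """
    def __init__(self, dt=1.0, meas_noise=1.0, proc_noise=1e-2):
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)
        self.Q = proc_noise * np.eye(4)
        self.R = meas_noise * np.eye(2)
        self.x = np.zeros(4)
        self.P = np.eye(4)

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with measurement z = (px, py).
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                      # filtered position
```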

Robust Features and Accurate Inliers Detection Framework: Application to Stereo Ego-motion Estimation

  • MIN, Haigen;ZHAO, Xiangmo;XU, Zhigang;ZHANG, Licheng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.1 / pp.302-320 / 2017
  • In this paper, an innovative, robust feature detection and matching strategy for visual odometry based on stereo image sequences is proposed. First, AKAZE, a sparse multiscale 2D local invariant feature detection and description algorithm, is adopted to extract interest points, and a robust matching strategy is introduced to match the AKAZE descriptors. To remove outliers, which are mismatched features or features on dynamic objects, an improved random sample consensus (RANSAC) outlier rejection scheme is presented, so the proposed method can be applied in dynamic environments. Geometric constraints are then incorporated into the motion estimation without time-consuming 3-dimensional scene reconstruction, and finally an iterated sigma-point Kalman filter refines the motion results. The presented ego-motion scheme is applied to benchmark datasets and compared with state-of-the-art approaches on data captured on campus in a considerably cluttered environment, where its superiority is demonstrated. A sketch of the AKAZE-plus-RANSAC front end follows this entry.
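
The front end maps naturally onto OpenCV primitives. The sketch below pairs AKAZE with a ratio-test matcher and plain fundamental-matrix RANSAC; the paper's improved RANSAC scheme, stereo geometry, and sigma-point Kalman refinement are not reproduced here.

```python
import cv2
import numpy as np

def akaze_match_ransac(img1, img2):
    """Detect AKAZE features, match them, and reject outliers with RANSAC."""
    akaze = cv2.AKAZE_create()
    k1, d1 = akaze.detectAndCompute(img1, None)
    k2, d2 = akaze.detectAndCompute(img2, None)

    # Hamming-distance matching (AKAZE descriptors are binary)
    # with Lowe's ratio test.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = [m for m, n in bf.knnMatch(d1, d2, k=2)
               if m.distance < 0.8 * n.distance]

    p1 = np.float32([k1[m.queryIdx].pt for m in matches])
    p2 = np.float32([k2[m.trainIdx].pt for m in matches])

    # Epipolar-geometry RANSAC marks the surviving inliers.
    _F, mask = cv2.findFundamentalMat(p1, p2, cv2.FM_RANSAC, 1.0, 0.99)
    inl = mask.ravel() == 1
    return p1[inl], p2[inl]
```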

Video retrieval method using non-parametric based motion classification (비-파라미터 기반의 움직임 분류를 통한 비디오 검색 기법)

  • Kim Nac-Woo;Choi Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.2 s.308 / pp.1-11 / 2006
  • In this paper, we propose the novel video retrieval algorithm using non-parametric based motion classification in the shot-based video indexing structure. The proposed system firstly gets the key frame and motion information from each shot segmented by scene change detection method, and then extracts visual features and non-parametric based motion information from them. Finally, we construct real-time retrieval system supporting similarity comparison of these spatio-temporal features. After the normalized motion vector fields is created from MPEG compressed stream, the extraction of non-parametric based motion feature is effectively achieved by discretizing each normalized motion vectors into various angle bins, and considering a mean, a variance, and a direction of these bins. We use the edge-based spatial descriptor to extract the visual feature in key frames. Experimental evidence shows that our algorithm outperforms other video retrieval methods for image indexing and retrieval. To index the feature vectors, we use R*-tree structures.