• Title/Summary/Keyword: Motion Descriptor

Blur-Invariant Feature Descriptor Using Multidirectional Integral Projection

  • Lee, Man Hee;Park, In Kyu
    • ETRI Journal
    • /
    • v.38 no.3
    • /
    • pp.502-509
    • /
    • 2016
  • Feature detection and description are key ingredients of common image processing and computer vision applications. Most existing algorithms focus on robust feature matching under challenging conditions, such as in-plane rotations and scale changes. Consequently, they usually fail when the scene is blurred by camera shake or an object's motion. To solve this problem, we propose a new feature description algorithm that is robust to image blur and significantly improves feature matching performance. The proposed algorithm builds a feature descriptor by considering the integral projection along four angular directions (0°, 45°, 90°, and 135°) and by combining the four projection vectors into a single high-dimensional vector. Intensive experiments show that the proposed descriptor outperforms existing descriptors for different types of blur caused by linear motion, nonlinear motion, and defocus. Furthermore, the proposed descriptor is robust to intensity changes and image rotation.
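
A minimal sketch of the idea described in this abstract, not the authors' implementation: integral projections of a square patch along the four stated directions, concatenated into one normalized descriptor. The use of diagonal sums for the 45°/135° projections and the L2 normalization are assumptions.

```python
# Sketch only: multidirectional integral projection descriptor (assumed details).
import numpy as np

def integral_projection_descriptor(patch):
    """Concatenate integral projections of a square patch along four directions."""
    patch = patch.astype(np.float64)
    p0  = patch.sum(axis=1)                  # 0 deg: sum along each row
    p90 = patch.sum(axis=0)                  # 90 deg: sum along each column
    n = patch.shape[0]
    # 45/135 deg: sums along the two diagonal directions (assumed interpretation)
    p45  = np.array([np.trace(patch, offset=k) for k in range(-n + 1, n)])
    p135 = np.array([np.trace(np.fliplr(patch), offset=k) for k in range(-n + 1, n)])
    desc = np.concatenate([p0, p90, p45, p135])
    return desc / (np.linalg.norm(desc) + 1e-12)   # normalization assumed for intensity robustness

patch = np.random.rand(32, 32)               # stand-in for a patch around a keypoint
print(integral_projection_descriptor(patch).shape)   # (32 + 32 + 63 + 63,) = (190,)
```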

Efficient Representation and Matching of Object Movement using Shape Sequence Descriptor (모양 시퀀스 기술자를 이용한 효과적인 동작 표현 및 검색 방법)

  • Choi, Min-Seok
    • The KIPS Transactions: Part B
    • /
    • v.15B no.5
    • /
    • pp.391-396
    • /
    • 2008
  • The motion of an object in a video clip often plays an important role in characterizing the content of the clip. A number of methods have been developed to analyze and retrieve video content using motion information. However, most of these methods focus on the direction or trajectory of motion rather than on the movement of the object itself. In this paper, we propose the shape sequence descriptor to describe and compare movement based on the shape deformation caused by object motion over time. The movement is first represented as a sequence of 2D object shapes extracted from the input image sequence, and each 2D shape is then converted into a 1D shape feature using the shape descriptor. The shape sequence descriptor is obtained by applying a frequency transform to the sequence of shape descriptors along the time axis. Our experimental results show that the proposed method is simple and effective for describing object movement and is applicable to semantic applications such as content-based video retrieval and human movement recognition.
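
A minimal sketch of the pipeline described above, under assumptions: the abstract does not say which 1D shape feature is used, so a radial contour signature stands in for it; per-frame features are stacked and transformed with an FFT along the time axis, keeping low frequencies.

```python
# Sketch only: shape-sequence-style descriptor with an assumed per-frame shape feature.
import numpy as np

def radial_shape_feature(mask, n_bins=32):
    """1-D shape feature: mean radius of foreground pixels in angular bins (assumption)."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    r = np.hypot(ys - cy, xs - cx)
    theta = np.arctan2(ys - cy, xs - cx)
    bins = np.floor((theta + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    feat = np.zeros(n_bins)
    for b in range(n_bins):
        sel = bins == b
        feat[b] = r[sel].mean() if sel.any() else 0.0
    return feat / (r.max() + 1e-12)           # scale-normalized

def shape_sequence_descriptor(masks, n_freq=8):
    """Frequency transform over time of the per-frame shape features."""
    seq = np.stack([radial_shape_feature(m) for m in masks])   # (T, n_bins)
    spec = np.abs(np.fft.rfft(seq, axis=0))                    # FFT along the time axis
    return spec[:n_freq].ravel()                               # keep low frequencies

masks = [np.random.rand(64, 64) > 0.5 for _ in range(30)]      # stand-in object masks
print(shape_sequence_descriptor(masks).shape)                  # (8 * 32,) = (256,)
```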

Detecting near-duplication Video Using Motion and Image Pattern Descriptor (움직임과 영상 패턴 서술자를 이용한 중복 동영상 검출)

  • Jin, Ju-Kyong;Na, Sang-Il;Jeong, Dong-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.48 no.4
    • /
    • pp.107-115
    • /
    • 2011
  • In this paper, we propose a fast and efficient algorithm for detecting near-duplicate videos based on content-based retrieval in a large-scale video database. To handle large amounts of video easily, we split each video into small segments using scene change detection. Video services and copyright-related business models need technology that detects near-duplicates matching longer stretches of video, rather than searching for a short part or a single frame of the original. To detect near-duplicate videos, we propose a motion distribution descriptor and a frame descriptor for each video segment. The motion distribution descriptor is constructed from the motion vectors of macroblocks obtained during the video decoding process. When matching descriptors, we use the motion distribution descriptor as a filter to improve matching speed. However, the motion distribution alone has low discriminability, so the final identification uses a frame descriptor extracted from representative frames selected within each scene segment. The proposed algorithm shows a high success rate and a low false alarm rate. In addition, its matching speed is very fast, confirming that the algorithm is useful in practical applications.
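
A minimal sketch of the two-stage matching idea described above (not the paper's code): a coarse motion-distribution histogram filters candidate segments, and the frame descriptor is compared only for segments that pass the filter. The bin count, threshold, and distance measures are assumptions.

```python
# Sketch only: motion-distribution filter followed by frame-descriptor matching.
import numpy as np

def motion_distribution(motion_vectors, n_bins=8):
    """Histogram of macroblock motion-vector directions within one segment."""
    angles = np.arctan2(motion_vectors[:, 1], motion_vectors[:, 0])
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    return hist / (hist.sum() + 1e-12)

def match_segment(query, database, motion_thresh=0.3):
    """Two-stage matching: cheap motion filter first, frame descriptor second."""
    best, best_dist = None, np.inf
    for seg_id, (motion_desc, frame_desc) in database.items():
        if np.abs(query["motion"] - motion_desc).sum() > motion_thresh:
            continue                                   # rejected by the coarse filter
        d = np.linalg.norm(query["frame"] - frame_desc)
        if d < best_dist:
            best, best_dist = seg_id, d
    return best, best_dist

# Toy usage with random stand-in descriptors.
db = {i: (motion_distribution(np.random.randn(100, 2)), np.random.rand(64)) for i in range(5)}
q = {"motion": db[3][0] + 0.01, "frame": db[3][1] + 0.01}
print(match_segment(q, db))                            # segment 3 should win
```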

A motion descriptor design combining the global feature of an image and the local one of an moving object (영상의 전역 특징과 이동객체의 지역 특징을 융합한 움직임 디스크립터 설계)

  • Jung, Byeong-Man;Lee, Kyu-Won
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2012.10a
    • /
    • pp.898-902
    • /
    • 2012
  • A descriptor suitable for motion analysis using the motion features of moving objects in a real-time image sequence is proposed. To segment moving objects from the background, background learning is performed. We extract the motion trajectory of each object from the sequence of its 1st-order moments, and the center points of each object are managed in a linked list. The descriptor includes the 1st-order moment coordinates of a moving object near pre-defined positions in a grid pattern, the start frame number at which the object appears in the scene, and the end frame number at which it disappears. Video retrieval with the proposed descriptor, which combines a global feature and a local one, is more effective than conventional methods that adopt only a single feature of either kind.
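
A minimal sketch of the descriptor's ingredients as described above, not the authors' code: per-frame 1st-order moments (centroids) of a moving object, snapped to a coarse grid, plus the start and end frame numbers. The grid resolution and field names are assumptions.

```python
# Sketch only: centroid-trajectory descriptor with assumed grid size and field names.
import numpy as np

GRID = 8                                          # assumed grid resolution

def centroid(mask):
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()                   # first-order moments

def build_descriptor(masks, start_frame):
    h, w = masks[0].shape
    cells = []
    for m in masks:
        cx, cy = centroid(m)
        cells.append((int(cx / w * GRID), int(cy / h * GRID)))   # grid cell of the centroid
    return {
        "cells": cells,                           # centroid positions in grid coordinates
        "start_frame": start_frame,               # frame where the object appears
        "end_frame": start_frame + len(masks) - 1 # frame where it disappears
    }

masks = [np.zeros((120, 160), dtype=bool) for _ in range(10)]
for t, m in enumerate(masks):
    m[50:70, 10 + 10 * t:30 + 10 * t] = True      # object translating to the right
print(build_descriptor(masks, start_frame=42))
```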

A Descriptor Design for the Video Retrieval Combining the Global Feature of an Image and the Local of a Moving Object (영상의 전역 특징과 이동객체의 지역 특징을 융합한 동영상 검색 디스크립터 설계)

  • Jung, Byung-Man;Lee, Kyu-Won
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.18 no.1
    • /
    • pp.142-148
    • /
    • 2014
  • A descriptor suitable for motion analysis using the motion features of moving objects in a real-time image sequence is proposed. To segment moving objects from the background, background learning is performed. We extract the motion trajectory of each object from the sequence of its 1st-order moments, and the center points of each object are managed in a linked list. The descriptor includes the 1st-order moment coordinates of a moving object near pre-defined positions in a grid pattern, the start frame number at which the object appears in the scene, and the end frame number at which it disappears. Video retrieval with the proposed descriptor, which combines a global feature and a local one, is more effective than conventional methods that adopt only a single feature of either kind.
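
Since this journal version targets video retrieval with the descriptor described above, here is a minimal, assumed similarity measure (the actual matching rule is not given in the abstract): the fraction of time-aligned steps whose centroid grid cells coincide.

```python
# Sketch only: assumed similarity between two centroid-grid trajectory descriptors.
def descriptor_similarity(desc_a, desc_b):
    """Fraction of time-aligned steps whose grid cells coincide."""
    a, b = desc_a["cells"], desc_b["cells"]
    n = min(len(a), len(b))
    if n == 0:
        return 0.0
    hits = sum(1 for i in range(n) if a[i] == b[i])   # same grid cell at the same step
    return hits / max(len(a), len(b))

q  = {"cells": [(1, 3), (2, 3), (3, 3), (4, 3)], "start_frame": 0, "end_frame": 3}
db = {"clip_a": {"cells": [(1, 3), (2, 3), (3, 4), (4, 3)], "start_frame": 10, "end_frame": 13},
      "clip_b": {"cells": [(7, 0), (7, 1), (7, 2), (7, 2)], "start_frame": 5,  "end_frame": 8}}
print(max(db, key=lambda k: descriptor_similarity(q, db[k])))   # -> clip_a
```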

Content based Video Copy Detection Using Spatio-Temporal Ordinal Measure (시공간 순차 정보를 이용한 내용기반 복사 동영상 검출)

  • Jeong, Jae-Hyup;Kim, Tae-Wang;Yang, Hun-Jun;Jin, Ju-Kyong;Jeong, Dong-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.2
    • /
    • pp.113-121
    • /
    • 2012
  • In this paper, we propose a fast and efficient algorithm for detecting near-duplicate videos based on content-based retrieval in a large-scale video database. To handle large amounts of video easily, we split each video into small segments using scene change detection. Video services and copyright-related business models need technology that detects near-duplicates matching longer stretches of video, rather than searching for a short part or a single frame of the original. To detect near-duplicate videos, we propose a motion distribution descriptor and a frame descriptor for each video segment. The motion distribution descriptor is constructed from the motion vectors of macroblocks obtained during the video decoding process. When matching descriptors, we use the motion distribution descriptor as a filter to improve matching speed. However, the motion distribution alone has low discriminability, so the final identification uses a frame descriptor extracted from representative frames selected within each scene segment. The proposed algorithm shows a high success rate and a low false alarm rate. In addition, its matching speed is very fast, confirming that the algorithm is useful in practical applications.
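
A minimal sketch of a generic spatio-temporal ordinal measure, the technique named in this entry's title: the rank order of block-average intensities per frame, compared across time-aligned frames. The block layout and distance are assumptions, not the paper's exact settings.

```python
# Sketch only: generic spatio-temporal ordinal measure for video copy detection.
import numpy as np

def ordinal_signature(frame, grid=(3, 3)):
    """Rank of the average intensity of each block in a grid partition of the frame."""
    h, w = frame.shape
    gh, gw = grid
    means = np.array([
        frame[i * h // gh:(i + 1) * h // gh, j * w // gw:(j + 1) * w // gw].mean()
        for i in range(gh) for j in range(gw)
    ])
    return means.argsort().argsort()              # ordinal ranks, robust to intensity shifts

def sequence_distance(frames_a, frames_b):
    """Average rank difference between time-aligned frames of two clips."""
    return np.mean([np.abs(ordinal_signature(a) - ordinal_signature(b)).mean()
                    for a, b in zip(frames_a, frames_b)])

clip = [np.random.rand(120, 160) for _ in range(8)]
copy = [f * 0.5 + 0.2 for f in clip]              # intensity-scaled copy
print(sequence_distance(clip, copy))              # ~0.0: block ranks are unchanged
```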

A Descriptor for Characteristics of Local Motion in a Video (비디오 영상에서 지역적 움직임 특성을 표현할 수 있는 기술자)

  • 김형준;김회율
    • Proceedings of the IEEK Conference
    • /
    • 2000.09a
    • /
    • pp.359-362
    • /
    • 2000
  • In this paper, we propose a descriptor for local motion activity that can represent the local motion characteristics of a video. The proposed method describes spatial information about regions of the frame that have locally high motion activity, and it uses the statistical properties of motion vectors together with frame partitioning to represent the motion activity of objects accurately, regardless of camera motion. Using the spatial characteristics of motion activity proposed in this paper, videos can be retrieved by the motion occurring in part of the frame, and the descriptor can also be applied to object tracking and surveillance systems. Experiments show the process of extracting regions with high motion activity using the proposed method, along with retrieval results based on these regions.
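
A minimal sketch along the lines of this abstract, not the authors' method: the block layout, the use of the median vector as the camera-motion estimate, and the threshold are all assumptions. Global motion is subtracted and grid blocks with high motion-vector variance are marked.

```python
# Sketch only: local motion activity map from a motion-vector field (assumed details).
import numpy as np

def local_motion_activity(mv_field, grid=(4, 4), thresh=1.0):
    """mv_field: (H, W, 2) motion vectors. Returns a boolean high-activity mask per block."""
    global_motion = np.median(mv_field.reshape(-1, 2), axis=0)   # crude camera-motion estimate
    local = mv_field - global_motion
    mag = np.linalg.norm(local, axis=2)
    H, W = mag.shape
    gh, gw = grid
    activity = np.array([[mag[i * H // gh:(i + 1) * H // gh,
                              j * W // gw:(j + 1) * W // gw].std()
                          for j in range(gw)] for i in range(gh)])
    return activity > thresh                      # spatial layout of high-activity regions

mv = np.tile(np.array([2.0, 0.0]), (36, 44, 1))   # global pan to the right
mv[10:20, 10:20] += np.random.randn(10, 10, 2) * 3.0   # one locally active object
print(local_motion_activity(mv).astype(int))
```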

Video Representation via Fusion of Static and Motion Features Applied to Human Activity Recognition

  • Arif, Sheeraz;Wang, Jing;Fei, Zesong;Hussain, Fida
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.7
    • /
    • pp.3599-3619
    • /
    • 2019
  • In a human activity recognition system, both static and motion information play a crucial role in achieving efficient and competitive results. Most existing methods are insufficient for extracting video features and unable to investigate the level of contribution of the static and motion components. Our work highlights this problem and proposes the Static-Motion fused features descriptor (SMFD), which intelligently leverages both static and motion features in the form of a descriptor. First, static features are learned by a two-stream 3D convolutional neural network. Second, trajectories are extracted by tracking key points, and only those trajectories located in the central region of the original video frame are selected, in order to reduce irrelevant background trajectories as well as computational complexity. Then, shape and motion descriptors are obtained along with key points using SIFT flow. Next, a Cholesky transformation is introduced to fuse the static and motion feature vectors so as to guarantee an equal contribution of all descriptors. Finally, a Long Short-Term Memory (LSTM) network is utilized to discover long-term temporal dependencies and produce the final prediction. To confirm the effectiveness of the proposed approach, extensive experiments have been conducted on three well-known datasets, i.e., UCF101, HMDB51, and YouTube. The findings show that the resulting recognition system is on par with state-of-the-art methods.
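
A heavily simplified sketch of the fusion stage only; the 3D-CNN, SIFT-flow, and LSTM stages are not reproduced, and whether this matches the paper's exact Cholesky transformation is an assumption. The idea illustrated is whitening the concatenated per-frame static and motion features with the Cholesky factor of their joint covariance so that neither stream dominates.

```python
# Sketch only: Cholesky-based whitening as an assumed stand-in for the paper's fusion step.
import numpy as np

def fuse_static_motion(static_feats, motion_feats, eps=1e-6):
    """static_feats: (T, Ds), motion_feats: (T, Dm) per-frame features."""
    joint = np.concatenate([static_feats, motion_feats], axis=1)    # (T, Ds + Dm)
    joint = joint - joint.mean(axis=0)
    cov = np.cov(joint, rowvar=False) + eps * np.eye(joint.shape[1])
    L = np.linalg.cholesky(cov)                 # Cholesky factor of the joint covariance
    whitened = np.linalg.solve(L, joint.T).T    # decorrelated, comparably scaled features
    return whitened                             # (T, Ds + Dm), ready for a temporal model

static = np.random.randn(40, 16) * 5.0          # stand-in 3D-CNN features (large scale)
motion = np.random.randn(40, 8)                 # stand-in SIFT-flow/trajectory features
fused = fuse_static_motion(static, motion)
print(fused.shape, fused.std(axis=0)[:3])       # per-dimension scales are now comparable
```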

Video Retrieval based on Objects Motion Trajectory (객체 이동 궤적 기반 비디오의 검색)

  • 유웅식;이규원;김재곤;김진웅;권오석
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.25 no.5B
    • /
    • pp.913-924
    • /
    • 2000
  • This paper proposes an efficient descriptor for object motion trajectories and a video retrieval algorithm based on them. After segmenting an object from the scene, the algorithm describes its motion trajectory with the coefficients of a second-order polynomial. The algorithm also identifies the type, interval, and magnitude of the global motion caused by camera motion and indexes it with six affine parameters. Content-based video retrieval is implemented by similarity matching between the indexed parameters and those of the queried object motion trajectory. The proposed algorithm supports not only faster retrieval for general videos but also efficient operation of unmanned video surveillance systems.
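
A minimal sketch of the object-trajectory part described above (the six-parameter affine model for camera motion is omitted, and the time normalization and distance are assumptions): each trajectory is indexed by the coefficients of second-order polynomial fits to x(t) and y(t), and retrieval compares coefficient vectors.

```python
# Sketch only: second-order polynomial trajectory descriptor and similarity matching.
import numpy as np

def trajectory_descriptor(xs, ys):
    """Second-order polynomial coefficients of x(t) and y(t) over normalized time."""
    t = np.linspace(0.0, 1.0, len(xs))
    return np.concatenate([np.polyfit(t, xs, 2), np.polyfit(t, ys, 2)])

def trajectory_distance(desc_a, desc_b):
    return np.linalg.norm(desc_a - desc_b)

t = np.linspace(0.0, 1.0, 50)
query = trajectory_descriptor(10 + 80 * t, 20 + 30 * t**2)        # curved motion
cand1 = trajectory_descriptor(12 + 78 * t, 22 + 29 * t**2)        # similar trajectory
cand2 = trajectory_descriptor(90 - 80 * t, 50 + 0 * t)            # opposite, straight motion
print(trajectory_distance(query, cand1) < trajectory_distance(query, cand2))   # True
```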

A Low Complexity, Descriptor-Less SIFT Feature Tracking System

  • Fransioli, Brian;Lee, Hyuk-Jae
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2012.07a
    • /
    • pp.269-270
    • /
    • 2012
  • Features that exhibit scale and rotation invariance, such as SIFT, are notorious for their expensive computation time and are often overlooked for real-time tracking scenarios. This paper proposes a descriptor-less matching algorithm based on motion vectors between consecutive frames, which finds the geometrically closest candidate to each tracked reference feature in the database. Descriptor-less matching forgoes expensive SIFT descriptor extraction without loss of matching accuracy and exhibits a dramatic speed-up compared to traditional, naive matching-based trackers. Descriptor-less SIFT tracking runs in real time on an Intel dual-core machine at an average of 24 frames per second.
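
A minimal sketch of the descriptor-less idea described above (the source of the motion vectors and the distance threshold are assumptions): each tracked feature's position is shifted by its local motion vector and matched to the geometrically closest keypoint detected in the next frame, with no descriptor comparison at all.

```python
# Sketch only: geometric, descriptor-less matching of tracked features.
import numpy as np

def descriptorless_match(ref_pts, motion_vecs, next_pts, max_dist=5.0):
    """ref_pts, motion_vecs: (N, 2); next_pts: (M, 2). Returns index pairs (ref, next)."""
    predicted = ref_pts + motion_vecs            # where each feature should move
    matches = []
    for i, p in enumerate(predicted):
        d = np.linalg.norm(next_pts - p, axis=1)
        j = int(np.argmin(d))                    # geometrically closest candidate
        if d[j] <= max_dist:
            matches.append((i, j))
    return matches

ref = np.array([[100.0, 50.0], [200.0, 80.0], [30.0, 120.0]])
mv  = np.array([[3.0, 1.0], [2.5, 0.5], [3.2, 0.8]])             # e.g. from block motion vectors
nxt = np.array([[203.1, 80.4], [102.8, 51.2], [400.0, 10.0]])    # detections in the next frame
print(descriptorless_match(ref, mv, nxt))        # [(0, 1), (1, 0)]
```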
