• Title/Summary/Keyword: motion features

Search results: 656

A Fast and Robust Algorithm for Fighting Behavior Detection Based on Motion Vectors

  • Xie, Jianbin;Liu, Tong;Yan, Wei;Li, Peiqin;Zhuang, Zhaowen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.5 no.11
    • /
    • pp.2191-2203
    • /
    • 2011
  • In this paper, we propose a fast and robust algorithm for fighting behavior detection based on Motion Vectors (MVs), in order to overcome the low speed and weak robustness of traditional fighting behavior detection. First, we analyze the characteristics of fighting scenes and activities, and then use a block-matching motion estimation algorithm to calculate the MVs of motion regions. Second, we extract features from the magnitudes and directions of the MVs, normalize these features using a joint Gaussian membership function, and then fuse them using a weighted arithmetic average. Finally, we introduce the concept of the Average Maximum Violence Index (AMVI) to judge fighting behavior in surveillance scenes. Experiments show that the new algorithm achieves high speed and strong robustness for fighting behavior detection in surveillance scenes.

Estimation of Camera Motion Parameter using Invariant Feature Models (불변 특징모델을 이용한 카메라 동작인수 측정)

  • Cha, Jeong-Hee;Lee, Keun-Soo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.4 s.36
    • /
    • pp.191-201
    • /
    • 2005
  • In this paper, we propose a method for calculating camera motion parameters based on efficient invariant features that are independent of the camera viewpoint. Because the feature information used in previous research varies with camera viewpoint, its information content increases and accurate features are difficult to extract. The LM (Levenberg-Marquardt) method for camera extrinsic parameters converges exactly on the goal value, but it also has the drawback of taking a long time, because its minimization proceeds in small steps. Therefore, in this paper, we propose a method for extracting features invariant to camera viewpoint, together with a two-stage calculation of camera motion parameters that improves accuracy and convergence by using the motion parameters obtained from a 2D homography as the initial value of the LM method. The proposed method is composed of a feature extraction stage, a matching stage, and a motion-parameter calculation stage. In the experiments, we compare and analyze the proposed method with existing methods on various indoor images to demonstrate the superiority of the proposed algorithm.

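The two-stage idea in the entry above, a closed-form estimate feeding the LM method as its initial value, can be illustrated on a simplified 2D rigid-motion problem. This is a sketch only: the paper estimates camera motion parameters initialized from a 2D homography, whereas here the model is rotation plus translation, the initializer is a Procrustes-style closed form, and the Jacobian is numeric.

```python
import numpy as np

def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def closed_form_init(src, dst):
    """Closed-form rigid 2D motion estimate (theta, tx, ty), playing the
    role of the homography-based initial value in the abstract."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    s, d = src - sc, dst - dc
    theta = np.arctan2((s[:, 0] * d[:, 1] - s[:, 1] * d[:, 0]).sum(),
                       (s[:, 0] * d[:, 0] + s[:, 1] * d[:, 1]).sum())
    t = dc - rotation(theta) @ sc
    return np.array([theta, t[0], t[1]])

def lm_refine(src, dst, p0, iters=100, lam=1e-3):
    """Levenberg-Marquardt refinement of (theta, tx, ty) with a numeric
    Jacobian and a fixed damping term lam."""
    def residuals(p):
        return (src @ rotation(p[0]).T + p[1:] - dst).ravel()
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = residuals(p)
        J = np.empty((r.size, 3))
        eps = 1e-6
        for k in range(3):
            q = p.copy()
            q[k] += eps
            J[:, k] = (residuals(q) - r) / eps  # forward difference
        dp = np.linalg.solve(J.T @ J + lam * np.eye(3), -J.T @ r)
        p = p + dp
        if np.linalg.norm(dp) < 1e-12:
            break
    return p
```

Starting LM from the closed-form estimate rather than from scratch is exactly the convergence argument the abstract makes: the damped iterations then only have to polish an already-close solution.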

Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.9B no.5
    • /
    • pp.563-570
    • /
    • 2002
  • Robust extraction of 3D facial features and global motion information from a 2D image sequence for MPEG-4 SNHC face model encoding is described. The facial regions are detected in the image sequence using a multi-modal fusion technique that combines range, color, and motion information. 23 facial features among the MPEG-4 FDP (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are used to recover the 3D shape and global motion of the object using a paraperspective camera model and the SVD (Singular Value Decomposition) factorization method. A 3D synthetic object is designed and tested to show the performance of the proposed algorithm. The recovered 3D motion information is transformed into the global motion parameters of the MPEG-4 FAP (Face Animation Parameters) to synchronize a generic face model with a real face.
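The SVD factorization step used for shape and motion recovery can be sketched in its orthographic core form. Note the simplifications: the paraperspective normalization and the metric upgrade that removes the affine ambiguity are omitted; only the rank-3 decomposition of the registered measurement matrix is shown.

```python
import numpy as np

def factor_shape_motion(W):
    """Rank-3 factorization of a measurement matrix W (2F x P) of tracked
    feature coordinates, after registering each row to its centroid.
    Returns a motion factor M (2F x 3) and a shape factor S (3 x P), so
    that M @ S reconstructs the registered measurements (up to an affine
    ambiguity that a metric upgrade would resolve)."""
    Wc = W - W.mean(axis=1, keepdims=True)      # subtract per-row centroid
    U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
    root = np.sqrt(s[:3])
    M = U[:, :3] * root                          # motion (camera) factor
    S = root[:, None] * Vt[:3]                   # shape factor
    return M, S
```

Because noise-free affine projections of a rigid point set give a measurement matrix of rank at most 3 after registration, the truncated SVD reproduces it exactly.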

3D Facial Synthesis and Animation for Facial Motion Estimation (얼굴의 움직임 추적에 따른 3차원 얼굴 합성 및 애니메이션)

  • Park, Do-Young;Shim, Youn-Sook;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.6
    • /
    • pp.618-631
    • /
    • 2000
  • In this paper, we present a method for 3D facial synthesis using the motion of 2D facial images. We use an optical-flow-based method for motion estimation. We extract parameterized motion vectors using optical flow between adjacent frames of the image sequence in order to estimate the facial features and the facial motion in 2D. We then combine the parameters of the parameterized motion vectors to estimate facial motion information. The parameterized vector models follow the facial features: our motion vector models cover the eye area, the lip and eyebrow area, and the whole face area. Combining the 2D facial motion information with the action units of a 3D facial model, we synthesize the 3D facial model.

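The optical-flow estimation underlying the entry above can be sketched with a single global Lucas-Kanade step, assuming pure translation between two frames. This shows only the basic brightness-constancy equation `[Ix Iy] (u, v)^T = -It` solved by least squares; the paper extracts region-wise parameterized vectors, which this sketch does not attempt.

```python
import numpy as np

def lucas_kanade_global(I1, I2):
    """One global Lucas-Kanade estimate of translation (u, v) between two
    grayscale frames: spatial gradients of the mean image against the
    temporal difference, solved in the least-squares sense."""
    Iy, Ix = np.gradient((I1 + I2) / 2.0)    # axis 0 is y, axis 1 is x
    It = I2 - I1
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (u, v), *_ = np.linalg.lstsq(A, b, rcond=None)
    return u, v
```

For a smooth blob shifted by one pixel the estimate lands close to the true displacement; larger motions would need the usual pyramidal, iterative refinement.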

Hand Gesture Recognition for Understanding Conducting Action (지휘행동 이해를 위한 손동작 인식)

  • Je, Hong-Mo;Kim, Ji-Man;Kim, Dai-Jin
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2007.10c
    • /
    • pp.263-266
    • /
    • 2007
  • We introduce a vision-based hand gesture recognition method for understanding musical time and patterns without extra special devices. We suggest a simple and reliable vision-based hand gesture recognition approach with two features. First, we propose the motion-direction code, a quantized code for motion directions. Second, we propose the conducting feature point (CFP), the point where the motion changes suddenly. The proposed hand gesture recognition system extracts the human hand region by segmenting the depth information generated by stereo matching of image sequences. It then follows the motion of the center of gravity (COG) of the extracted hand region and generates gesture features such as the CFP and the direction code. Finally, we obtain the current timing pattern of the beat and tempo of the music being played. The experimental results on the test data set show that the musical time pattern and tempo recognition rate is over 86.42% for motion histogram matching and 79.75% for CFP tracking only.

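The two features proposed in the entry above, the motion-direction code and the conducting feature point, can be sketched directly from a sequence of COG positions. The 8-way quantization and the turn threshold are assumptions, and the axes here are mathematical (y up); image coordinates would mirror the vertical codes.

```python
import math

def direction_codes(points):
    """Quantize the direction between consecutive COG points into 8 codes:
    0 = right, 2 = up, 4 = left, 6 = down (counter-clockwise, y-up axes)."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0)
        codes.append(round(angle / (math.pi / 4)) % 8)
    return codes

def conducting_feature_points(points, min_turn=2):
    """Indices of points where the quantized direction changes sharply
    (CFP candidates), using the circular distance between codes."""
    codes = direction_codes(points)
    cfps = []
    for i in range(1, len(codes)):
        turn = min((codes[i] - codes[i - 1]) % 8, (codes[i - 1] - codes[i]) % 8)
        if turn >= min_turn:
            cfps.append(i)   # the point where the new segment begins
    return cfps
```

A trajectory that moves right and then turns upward produces codes `0 … 0 2 … 2` with a single CFP at the corner point, which is the sudden-change behavior the abstract uses to mark beats.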

Arctic Sea Ice Motion Measurement Using Time-Series High-Resolution Optical Satellite Images and Feature Tracking Techniques (고해상도 시계열 광학 위성 영상과 특징점 추적 기법을 이용한 북극해 해빙 이동 탐지)

  • Hyun, Chang-Uk;Kim, Hyun-cheol
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.6_2
    • /
    • pp.1215-1227
    • /
    • 2018
  • Sea ice motion is an important factor in assessing sea ice change, because the motion affects not only the regional distribution of sea ice but also new ice growth and ice thickness. This study applies multi-temporal high-resolution optical satellite images from the Korea Multi-Purpose Satellite-2 (KOMPSAT-2) and Korea Multi-Purpose Satellite-3 (KOMPSAT-3) to measure sea ice motion using the SIFT (Scale-Invariant Feature Transform), SURF (Speeded Up Robust Features), and ORB (Oriented FAST and Rotated BRIEF) feature tracking techniques. To use satellite images from two different sensors, the spatial and radiometric resolutions were adjusted during pre-processing, and the feature tracking techniques were then applied to the pre-processed images. The matched features extracted by SIFT were evenly distributed across the whole image, whereas the matched features extracted by SURF were concentrated around the boundary between ice and ocean, and this regionally biased distribution was even more pronounced in the matched features extracted by ORB. The processing time of feature tracking decreased in the order of SIFT, SURF, and ORB. Although the number of matched features from ORB fell to 59.8% of that from SIFT, the processing time fell to 8.7% of that of SIFT, so the ORB technique is more suitable for fast measurement of sea ice motion.
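Once features have been matched between two acquisitions (whether by SIFT, SURF, or ORB), the ice-motion measurement itself reduces to per-match displacement vectors; taking a median gives a drift estimate that tolerates a minority of bad matches. A sketch assuming the matched coordinates are already available:

```python
from statistics import median

def ice_drift(matches):
    """Displacement vector for each matched feature pair ((x1, y1), (x2, y2))
    between two acquisitions, plus a median drift estimate that is robust
    to a minority of mismatches."""
    vectors = [(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in matches]
    drift = (median(dx for dx, _ in vectors),
             median(dy for _, dy in vectors))
    return vectors, drift
```

With three consistent matches and one outlier, the per-feature vectors still expose the outlier while the median drift stays on the true motion.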

Selection of features and hidden Markov model parameters for English word recognition from Leap Motion air-writing trajectories

  • Deval Verma;Himanshu Agarwal;Amrish Kumar Aggarwal
    • ETRI Journal
    • /
    • v.46 no.2
    • /
    • pp.250-262
    • /
    • 2024
  • Air-writing recognition is relevant in areas such as natural human-computer interaction, augmented reality, and virtual reality. A trajectory is the most natural way to represent air writing. We analyze the recognition accuracy of words written in air considering five features, namely, writing direction, curvature, trajectory, orthocenter, and ellipsoid, as well as different parameters of a hidden Markov model classifier. Experiments were performed on two representative datasets, whose sample trajectories were collected using a Leap Motion Controller from a fingertip performing air writing. Dataset D1 contains 840 English words from 21 classes, and dataset D2 contains 1600 English words from 40 classes. A genetic algorithm was combined with a hidden Markov model classifier to obtain the best subset of features. The combination {trajectory, orthocenter, writing direction, curvature} provided the best feature set, achieving recognition accuracies on datasets D1 and D2 of 98.81% and 83.58%, respectively.
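The genetic-algorithm wrapper for picking a feature subset can be sketched as follows. Here `fitness` is a stand-in for the HMM validation accuracy used in the paper (any callable scoring a 0/1 mask works), and the population size, crossover, mutation rate, and final bit-flip polish are all assumptions of this sketch.

```python
import random

def ga_select_features(num_features, fitness, pop_size=20, gens=30, seed=0):
    """Genetic-algorithm feature-subset selection over 0/1 masks:
    elitism, one-point crossover, bit-flip mutation, and a final greedy
    polish that leaves the best mask at a local optimum of `fitness`."""
    rng = random.Random(seed)
    population = [tuple(rng.randint(0, 1) for _ in range(num_features))
                  for _ in range(pop_size)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        elite = population[:4]
        children = list(elite)
        while len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, num_features)      # one-point crossover
            child = list(a[:cut] + b[cut:])
            if rng.random() < 0.3:                    # bit-flip mutation
                k = rng.randrange(num_features)
                child[k] ^= 1
            children.append(tuple(child))
        population = children
    best = max(population, key=fitness)
    improved = True
    while improved:                                   # greedy local polish
        improved = False
        for k in range(num_features):
            cand = list(best)
            cand[k] ^= 1
            cand = tuple(cand)
            if fitness(cand) > fitness(best):
                best, improved = cand, True
    return best
```

In the paper's setting, each fitness evaluation would train and score an HMM on the masked features, which is exactly why a wrapper search over subsets is worthwhile.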

Binary Hashing CNN Features for Action Recognition

  • Li, Weisheng;Feng, Chen;Xiao, Bin;Chen, Yanquan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.9
    • /
    • pp.4412-4428
    • /
    • 2018
  • The purpose of this work is to solve the problem of representing an entire video using Convolutional Neural Network (CNN) features for human action recognition. Recently, due to insufficient GPU memory, it has been difficult to take the whole video as the input of the CNN for end-to-end learning. A typical method is to use sampled video frames as inputs and corresponding labels as supervision. One major issue of this popular approach is that the local samples may not contain the information indicated by the global labels and sufficient motion information. To address this issue, we propose a binary hashing method to enhance the local feature extractors. First, we extract the local features and aggregate them into global features using maximum/minimum pooling. Second, we use the binary hashing method to capture the motion features. Finally, we concatenate the hashing features with global features using different normalization methods to train the classifier. Experimental results on the JHMDB and MPII-Cooking datasets show that, for these new local features, binary hashing mapping on the sparsely sampled features led to significant performance improvements.
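The aggregation step described above, local frame features pooled into a global descriptor and concatenated with a binary hash, might be sketched like this. The sign-of-random-projection hash is a deliberately simple stand-in for the paper's hashing of motion features, and the bit count is an assumption.

```python
import numpy as np

def hashed_global_feature(frame_feats, hash_bits=16, seed=0):
    """Aggregate per-frame (local) features into one global video descriptor:
    max/min pooling over frames, concatenated with a binary code obtained
    by thresholding a random projection of the mean feature at zero."""
    F = np.asarray(frame_feats, dtype=float)          # (frames, dim)
    pooled = np.concatenate([F.max(axis=0), F.min(axis=0)])
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((F.shape[1], hash_bits))
    code = (F.mean(axis=0) @ proj > 0).astype(float)  # binary hash bits
    return np.concatenate([pooled, code])
```

The pooled part carries the strongest per-dimension responses across sampled frames, while the binary part gives a compact, comparable signature, mirroring the concatenate-then-normalize design in the abstract.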

Combining Dynamic Time Warping and Single Hidden Layer Feedforward Neural Networks for Temporal Sign Language Recognition

  • Thi, Ngoc Anh Nguyen;Yang, Hyung-Jeong;Kim, Sun-Hee;Kim, Soo-Hyung
    • International Journal of Contents
    • /
    • v.7 no.1
    • /
    • pp.14-22
    • /
    • 2011
  • Temporal Sign Language Recognition (TSLR) from hand motion is an active area of gesture recognition research aimed at facilitating efficient communication with deaf people. TSLR systems consist of two stages: a motion sensing step, which extracts useful features from the signer's motion, and a classification process, which classifies these features as a performed sign. This work focuses on two research problems: the unknown, time-varying signal of sign language in the feature extraction stage, and the computational complexity and time consumption in the classification stage caused by a very large database of sign sequences. In this paper, we propose a combination of Dynamic Time Warping (DTW) and Single hidden Layer Feedforward Neural networks (SLFNs) trained by the Extreme Learning Machine (ELM) to cope with these limitations. DTW has the advantage over other approaches that it can align time series data to the same prior length, while ELM is a useful technique for classifying these warped features. Our experiment demonstrates the efficiency of the proposed method, with a recognition accuracy of up to 98.67%. The proposed approach can be generalized to more detailed measurements so as to recognize hand gestures, body motion, and facial expression.
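The DTW alignment at the heart of the method above is the standard dynamic-programming recurrence; a minimal distance version is shown here (the SLFN/ELM classifier stage, and the alignment of sequences to a common prior length, are omitted from this sketch).

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences, allowing
    one sequence to stretch or compress in time relative to the other."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # best of diagonal match, insertion, and deletion moves
            D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]
```

A time-warped copy of a sequence (each sample repeated) has zero DTW distance to the original, which is precisely the invariance to unknown signing speed that motivates using DTW before classification.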

Multiple Moving Object Tracking Using The Background Model and Neighbor Region Relation (배경 모델과 주변 영역과의 상호관계를 이용한 다중 이동 물체 추적)

  • Oh, Jeong-Won;Yoo, Ji-Sang
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.39 no.4
    • /
    • pp.361-369
    • /
    • 2002
  • In order to extract motion features from an input image acquired by a static CCD camera in a restricted area, a robust algorithm is needed to cope with noise sensitivity and changing conditions. In this paper, we propose an efficient algorithm to extract and track motion features in a noisy environment or under sudden condition changes. We extract motion features by considering changes in neighboring pixels between the current frame and an initial background model when moving objects are present. To remove noise in the moving regions, we use a morphological filter, and we extract the motion of each object using 8-connected component labeling. Finally, we provide experimental results and statistical analysis under various conditions and models.
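The extraction pipeline described above, background differencing followed by 8-connected component labeling, can be sketched as follows. The difference threshold is an assumption, and the morphological filtering step is omitted for brevity.

```python
def subtract_background(frame, background, thresh=30):
    """Binary foreground mask: 1 where the frame differs from the
    background model by more than `thresh` (an assumed value)."""
    return [[1 if abs(f - b) > thresh else 0 for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]

def label_8connected(mask):
    """8-connected component labeling by iterative flood fill.
    Returns a label grid (0 = background) and the number of objects."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                count += 1
                stack = [(y, x)]
                labels[y][x] = count
                while stack:
                    cy, cx = stack.pop()
                    for dy in (-1, 0, 1):
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and mask[ny][nx] and not labels[ny][nx]):
                                labels[ny][nx] = count
                                stack.append((ny, nx))
    return labels, count
```

Note that with 8-connectivity two diagonally touching foreground pixels form a single object, which 4-connectivity would split in two; that choice matters when tracking small or thin moving regions.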