• Title/Summary/Keyword: motion features

Video Representation via Fusion of Static and Motion Features Applied to Human Activity Recognition

  • Arif, Sheeraz; Wang, Jing; Fei, Zesong; Hussain, Fida
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.7 / pp.3599-3619 / 2019
  • In human activity recognition systems, both static and motion information play a crucial role in achieving efficient and competitive results. Most existing methods extract video features insufficiently and cannot investigate the level of contribution of the two (static and motion) components. Our work highlights this problem and proposes a Static-Motion Fused Features Descriptor (SMFD), which intelligently leverages both static and motion features in the form of a descriptor. First, static features are learned by a two-stream 3D convolutional neural network. Second, trajectories are extracted by tracking key points, and only those trajectories located in the central region of the original video frame are selected, in order to reduce irrelevant background trajectories as well as computational complexity. Then, shape and motion descriptors are obtained along with key points by using SIFT flow. Next, a Cholesky transformation is introduced to fuse the static and motion feature vectors and guarantee the equal contribution of all descriptors. Finally, a Long Short-Term Memory (LSTM) network is utilized to discover long-term temporal dependencies and produce the final prediction. To confirm the effectiveness of the proposed approach, extensive experiments have been conducted on three well-known datasets, i.e. UCF101, HMDB51 and YouTube. The findings show that the resulting recognition system is on par with state-of-the-art methods.
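
The "Cholesky transformation" fusion step can be read as mixing the two feature streams through the Cholesky factor of a small correlation matrix. Below is a minimal Python sketch of that idea, assuming equally sized descriptors and a tunable correlation parameter rho; it is an illustrative reading, not the authors' exact formulation.

```python
import numpy as np

def cholesky_fuse(static_feat, motion_feat, rho=0.5):
    """Fuse two equally sized feature vectors via the Cholesky factor of a
    2x2 correlation matrix, so the motion stream's mixing with the static
    stream is controlled by rho. Illustrative only, not the paper's exact
    formulation."""
    static_feat = np.asarray(static_feat, dtype=float)
    motion_feat = np.asarray(motion_feat, dtype=float)
    corr = np.array([[1.0, rho],
                     [rho, 1.0]])
    L = np.linalg.cholesky(corr)                    # lower-triangular factor
    stacked = np.vstack([static_feat, motion_feat]) # shape (2, d)
    fused = L @ stacked                             # row 1 mixes both streams
    return fused.reshape(-1)                        # concatenated fused rows

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    s = rng.normal(size=128)   # e.g. 3D-CNN static descriptor
    m = rng.normal(size=128)   # e.g. SIFT-flow motion descriptor
    print(cholesky_fuse(s, m, rho=0.5).shape)   # (256,)
```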

Feature-Oriented Adaptive Motion Analysis For Recognizing Facial Expression (특징점 기반의 적응적 얼굴 움직임 분석을 통한 표정 인식)

  • Noh, Sung-Kyu; Park, Han-Hoon; Shin, Hong-Chang; Jin, Yoon-Jong; Park, Jong-Il
    • The HCI Society of Korea Conference Proceedings (한국HCI학회 학술대회논문집) / 2007.02a / pp.667-674 / 2007
  • Facial expressions provide significant clues about one's emotional state; however, it has always been a great challenge for machines to recognize facial expressions effectively and reliably. In this paper, we report a method of feature-based adaptive motion energy analysis for recognizing facial expressions. Our method optimizes the information gain heuristics of the ID3 tree and introduces new approaches to (1) facial feature representation, (2) facial feature extraction, and (3) facial feature classification. We use the minimal reasonable set of facial features, suggested by the information gain heuristics of the ID3 tree, to represent the geometric face model. For feature extraction, our method proceeds as follows. Features are first detected and then carefully "selected." Feature "selection" means differentiating features with high variability from those with low variability, so that each feature's motion pattern can be estimated effectively. For each facial feature, motion analysis is performed adaptively: each facial feature's motion pattern (from the neutral face to the expressed face) is estimated based on its variability. After feature extraction, the facial expression is classified using the ID3 tree (built from the 1728 possible facial expressions) and the test images from the JAFFE database. The proposed method overcomes the problems raised by previous methods. First of all, it is simple but effective: it reliably estimates the expressive facial features by differentiating features with high variability from those with low variability. Second, it is fast, avoiding complicated or time-consuming computations; rather, it exploits a few selected expressive features' motion energy values (acquired from an intensity-based threshold). Lastly, our method gives reliable recognition rates, with an overall recognition rate of 77%. The effectiveness of the proposed method is demonstrated by the experimental results.
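
The classifier above relies on the ID3 information gain heuristic to pick a minimal set of expressive features. The sketch below shows how information gain is computed for discrete features; the feature names (eyebrow_motion, mouth_motion) and toy labels are hypothetical, used only to illustrate the heuristic.

```python
from collections import Counter
import math

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(samples, labels, feature_index):
    """ID3-style gain of splitting the samples on one discrete feature."""
    base = entropy(labels)
    by_value = {}
    for x, y in zip(samples, labels):
        by_value.setdefault(x[feature_index], []).append(y)
    remainder = sum(len(sub) / len(labels) * entropy(sub)
                    for sub in by_value.values())
    return base - remainder

if __name__ == "__main__":
    # Hypothetical toy data: each sample = (eyebrow_motion, mouth_motion).
    X = [("up", "open"), ("up", "closed"), ("down", "open"), ("down", "closed")]
    y = ["surprise", "surprise", "anger", "sadness"]
    for i, name in enumerate(["eyebrow_motion", "mouth_motion"]):
        print(name, round(information_gain(X, y, i), 3))  # 1.0 and 0.5
```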

A new approach for content-based video retrieval

  • Kim, Nac-Woo; Lee, Byung-Tak; Koh, Jai-Sang; Song, Ho-Young
    • International Journal of Contents / v.4 no.2 / pp.24-28 / 2008
  • In this paper, we propose a new approach for content-based video retrieval using non-parametric motion classification in a shot-based video indexing structure. The proposed system supports real-time video retrieval through spatio-temporal feature comparison, measuring the similarity between visual features and between motion features, respectively, after extracting a representative frame and non-parametric motion information from shot-based video clips segmented by a scene change detection method. The extraction of non-parametric motion features, after the normalized motion vectors are created from an MPEG-compressed stream, is carried out by discretizing each normalized motion vector into angle bins and considering the mean, variance, and direction of the motion vectors in these bins. To obtain the visual feature of the representative frame, we use an edge-based spatial descriptor. Experimental results show that our approach is superior to conventional methods with regard to video indexing and retrieval performance.
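
The non-parametric motion feature described above (discretizing normalized motion vectors into angle bins and summarizing each bin) can be sketched as follows; the exact per-bin statistics used in the paper may differ from this illustration.

```python
import numpy as np

def angle_bin_motion_features(motion_vectors, n_bins=8):
    """Discretize motion vectors into angle bins and summarize each bin with
    the mean and variance of its magnitudes plus the bin's share of vectors.
    Illustrative only; the paper's exact statistics may differ."""
    mv = np.asarray(motion_vectors, dtype=float)          # shape (N, 2): (dx, dy)
    angles = np.arctan2(mv[:, 1], mv[:, 0]) % (2 * np.pi)
    mags = np.linalg.norm(mv, axis=1)
    bin_ids = np.minimum((angles / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
    feats = []
    for b in range(n_bins):
        m = mags[bin_ids == b]
        share = len(m) / len(mv) if len(mv) else 0.0
        feats.extend([m.mean() if m.size else 0.0,
                      m.var() if m.size else 0.0,
                      share])
    return np.array(feats)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    vecs = rng.normal(size=(200, 2))
    print(angle_bin_motion_features(vecs).shape)   # (24,) for 8 bins x 3 stats
```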

Video Indexing using Motion vector and brightness features (움직임 벡터와 빛의 특징을 이용한 비디오 인덱스)

  • 이재현; 조진선
    • Journal of the Korea Society of Computer and Information / v.3 no.4 / pp.27-34 / 1998
  • In this paper we present a method for automatic motion-vector- and brightness-based video indexing and retrieval. We extract a representative frame (R-frame) from each shot and compute motion vector and brightness based features. For each R-frame we compute the optical flow field, and motion vector features are derived from this flow field; a block matching algorithm (BMA) is used to find the motion vectors, and brightness features are obtained from the brightness histogram used in the cut detection method. A video database provides content-based access to video, which is achieved by organizing and indexing video data based on a set of features. In this paper the feature index is based on a B+ search tree, which consists of internal and leaf nodes stored on a direct-access storage device. This paper defines the problem of video indexing based on video data models.
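
A block matching algorithm (BMA) of the kind mentioned above finds, for each block of the current frame, the displacement in the previous frame with the smallest sum of absolute differences. The following exhaustive-search sketch is generic rather than the paper's specific implementation.

```python
import numpy as np

def block_matching(prev_frame, cur_frame, block=8, search=4):
    """Exhaustive-search block matching: for each block in the current frame,
    find the displacement within +/-search pixels in the previous frame that
    minimizes the sum of absolute differences (SAD)."""
    h, w = cur_frame.shape
    vectors = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            cur_blk = cur_frame[y:y + block, x:x + block].astype(int)
            best, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    ref = prev_frame[yy:yy + block, xx:xx + block].astype(int)
                    sad = np.abs(cur_blk - ref).sum()
                    if best is None or sad < best:
                        best, best_mv = sad, (dx, dy)
            vectors.append(best_mv)
    return vectors

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    f0 = rng.integers(0, 255, size=(32, 32), dtype=np.uint8)
    f1 = np.roll(f0, shift=(0, 2), axis=(0, 1))   # content moves 2 px to the right
    mvs = block_matching(f0, f1)
    print(mvs[5])   # interior block: (-2, 0), i.e. content came from 2 px to the left
```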

Silhouette-based motion recognition for young children using an RBF network (RBF 신경망을 이용한 실루엣 기반 유아 동작 인식)

  • Kim, Hye-Jeong; Lee, Kyoung-Mi
    • Journal of Internet Computing and Services / v.8 no.3 / pp.119-129 / 2007
  • To recognize human motion, in this paper, we propose a neural approach using silhouettes in video frames captured by two cameras placed at the front and side of the human body. To extract features of the silhouettes for motion estimation, the proposed system computes both global and local features and then groups them into static and dynamic features depending on whether they come from a static frame. The extracted features are used to train an RBF network: static features serve as the input of the neural network and dynamic features as additional features for recognition. In this paper, the proposed method was applied to movement education for young children. The basic movements for such education consist of locomotor movements, such as walking, jumping, and hopping, and non-locomotor movements, including bending, stretching, balancing and turning. The experiments demonstrated the effectiveness of the proposed neural network for recognizing the motions used in movement education. The proposed system can be extended to a movement education system which develops the spatial sense of young children.
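
An RBF network like the one used above can be reduced to Gaussian hidden units around fixed centers plus a linear readout. The sketch below trains such a network by least squares on hypothetical 2-D features; the authors' architecture and training procedure are not reproduced here.

```python
import numpy as np

class RBFNetwork:
    """Minimal RBF network: Gaussian hidden units around fixed centers and a
    linear output layer fitted by least squares. A generic sketch, not the
    authors' exact architecture."""
    def __init__(self, centers, width=1.0):
        self.centers = np.asarray(centers, dtype=float)
        self.width = width
        self.weights = None

    def _hidden(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centers[None, :, :], axis=2)
        return np.exp(-(d ** 2) / (2 * self.width ** 2))

    def fit(self, X, Y):
        H = self._hidden(np.asarray(X, dtype=float))
        self.weights, *_ = np.linalg.lstsq(H, np.asarray(Y, dtype=float), rcond=None)
        return self

    def predict(self, X):
        return self._hidden(np.asarray(X, dtype=float)) @ self.weights

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    # Hypothetical 2-D silhouette features for two motions, one-hot labels.
    X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
    Y = np.vstack([np.tile([1, 0], (20, 1)), np.tile([0, 1], (20, 1))])
    net = RBFNetwork(centers=X[::5], width=0.7).fit(X, Y)
    acc = (net.predict(X).argmax(axis=1) == Y.argmax(axis=1)).mean()
    print(acc)   # should be 1.0 on this separable toy data
```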

Stereo Vision-based Visual Odometry Using Robust Visual Feature in Dynamic Environment (동적 환경에서 강인한 영상특징을 이용한 스테레오 비전 기반의 비주얼 오도메트리)

  • Jung, Sang-Jun; Song, Jae-Bok; Kang, Sin-Cheon
    • The Journal of Korea Robotics Society / v.3 no.4 / pp.263-269 / 2008
  • Visual odometry is a popular approach to estimating robot motion using a monocular or stereo camera. This paper proposes a novel visual odometry scheme using a stereo camera for robust estimation of 6 DOF motion in dynamic environments. False feature matches and the uncertainty of the depth information provided by the camera can generate outliers which deteriorate the estimation. The outliers are removed by analyzing the magnitude histogram of the motion vectors of the corresponding features and by the RANSAC algorithm. Features extracted from a dynamic object such as a human also make the motion estimation inaccurate. To eliminate the effect of dynamic objects, several candidate dynamic objects are generated by clustering the 3D positions of features, and each candidate is checked, based on the standard deviation of its features, on whether it is a real dynamic object or not. The accuracy and practicality of the proposed scheme are verified by several experiments and comparisons with both IMU and wheel-based odometry. It is shown that the proposed scheme works well when wheel slip occurs or dynamic objects exist.
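
One way to realize the magnitude-histogram outlier analysis described above is to keep only the feature motions whose magnitudes fall in the densest histogram bins, before any RANSAC refinement. The sketch below illustrates that idea with synthetic inlier/outlier motions; it is a simplified stand-in, not the paper's method.

```python
import numpy as np

def filter_by_magnitude_histogram(flow_vectors, n_bins=20, keep_ratio=0.8):
    """Keep feature motions whose magnitudes fall in the densest histogram
    bins, discarding the tail as likely mismatches or dynamic-object motion.
    Simplified stand-in for the histogram analysis + RANSAC described above."""
    mags = np.linalg.norm(flow_vectors, axis=1)
    hist, edges = np.histogram(mags, bins=n_bins)
    order = np.argsort(hist)[::-1]                 # densest bins first
    keep_bins, covered = set(), 0
    for b in order:
        keep_bins.add(b)
        covered += hist[b]
        if covered >= keep_ratio * len(mags):
            break
    bin_ids = np.clip(np.digitize(mags, edges) - 1, 0, n_bins - 1)
    return np.array([b in keep_bins for b in bin_ids])

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    inliers = rng.normal([2.0, 0.0], 0.1, size=(90, 2))    # consistent ego-motion
    outliers = rng.normal([8.0, 5.0], 0.5, size=(10, 2))   # e.g. a walking person
    mask = filter_by_magnitude_histogram(np.vstack([inliers, outliers]))
    print(mask[:90].mean(), mask[90:].mean())   # most inliers kept, outliers dropped
```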

A Robust Approach for Human Activity Recognition Using 3-D Body Joint Motion Features with Deep Belief Network

  • Uddin, Md. Zia; Kim, Jaehyoun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.2 / pp.1118-1133 / 2017
  • Computer vision-based human activity recognition (HAR) has become very popular due to its applications in various fields such as smart home healthcare for elderly people. A video-based activity recognition system has many goals, such as reacting to people's behavior so that the system can proactively assist them with their tasks. A novel approach is proposed in this work for depth-video-based human activity recognition using joint-based motion features of depth body shapes and a Deep Belief Network (DBN). From the depth video, the different body parts of human activities are first segmented by means of a trained random forest. Motion features representing the magnitude and direction of each joint in the next frame are then extracted. Finally, the features are used to train a DBN, which is later used for recognition. The proposed HAR approach showed superior performance over conventional approaches on private and public datasets, indicating a promising approach for practical applications in smartly controlled environments.
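
The joint-based motion features (magnitude and direction of each joint's movement to the next frame) can be sketched directly; the joint count and the exact feature layout below are assumptions for illustration.

```python
import numpy as np

def joint_motion_features(joints_t, joints_t1):
    """For each 3-D body joint, compute the motion magnitude and direction
    (unit vector) to the next frame, then concatenate into one feature vector.
    An illustrative reading of the joint-based motion features above."""
    j0 = np.asarray(joints_t, dtype=float)    # shape (J, 3)
    j1 = np.asarray(joints_t1, dtype=float)
    delta = j1 - j0
    mag = np.linalg.norm(delta, axis=1, keepdims=True)
    direction = np.divide(delta, mag, out=np.zeros_like(delta), where=mag > 0)
    return np.hstack([mag, direction]).reshape(-1)   # 4 values per joint

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    skel_t = rng.normal(size=(20, 3))          # 20 joints from a depth skeleton
    skel_t1 = skel_t + rng.normal(scale=0.05, size=(20, 3))
    print(joint_motion_features(skel_t, skel_t1).shape)   # (80,)
```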

A Block-based Motion Detection Algorithm with Adaptive Thresholds for Digital Video Surveillance Systems (적응적으로 임계값을 결정하는 블럭 기반의 디지털 감시 시스템용 움직임 검출 알고리즘)

  • Yang, Yun-Seok; Lee, Dong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.37 no.5 / pp.31-41 / 2000
  • This paper proposes a block-based motion detection algorithm for digital video surveillance systems which adaptively decides the threshold according to the kind of image. We first compute the features of each block after dividing every image into small sub-block regions, and analyze the performance of the motion detection algorithm based on statistical features using the proposed threshold-decision method. Motion vectors are used to analyze the degree of motion and to adaptively determine the threshold. The simulation results show the performance of the motion detection algorithm according to sub-block size, statistical features, noise, and threshold.
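
A block-based detector with an adaptive threshold can be illustrated by scoring each block with its mean absolute frame difference and thresholding at mean + k * std of the block scores. This generic sketch does not reproduce the paper's specific threshold-decision method.

```python
import numpy as np

def detect_motion_blocks(prev_frame, cur_frame, block=16, k=2.0):
    """Flag blocks whose mean absolute frame difference exceeds an adaptive
    threshold derived from the statistics of all block scores (mean + k*std)."""
    diff = np.abs(cur_frame.astype(int) - prev_frame.astype(int))
    h, w = diff.shape
    scores, coords = [], []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            scores.append(diff[y:y + block, x:x + block].mean())
            coords.append((y, x))
    scores = np.array(scores)
    threshold = scores.mean() + k * scores.std()   # adapts to content and noise
    return [c for c, s in zip(coords, scores) if s > threshold]

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    f0 = rng.integers(0, 20, size=(64, 64), dtype=np.uint8)   # static scene
    f1 = f0.copy()
    f1[16:32, 16:32] = 200                                    # a moved object
    print(detect_motion_blocks(f0, f1))   # [(16, 16)] — the changed block
```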

Feature-based Object Tracking using an Active Camera (능동카메라를 이용한 특징기반의 물체추적)

  • 정영기; 호요성
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.3 / pp.694-701 / 2004
  • In this paper, we propose a feature-based tracking system that traces moving objects with a pan-tilt camera after separating the global motion of the active camera from the local motion of the moving objects. The tracking system traces only the local motion of the corner features in the foreground objects by finding the block motions between two consecutive frames using block-based motion estimation and eliminating the global motion from the block motions. For robust estimation of the camera motion using only the background motion, we suggest a dominant motion extraction method to separate the background motions from the block motions. We also propose an efficient clustering algorithm based on the attributes of the motion trajectories of corner features to remove the motions of noise objects from the separated local motion. The proposed tracking system has demonstrated good performance on several test video sequences.
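
Dominant motion extraction can be approximated by taking a robust estimate (here the coordinate-wise median) of all block motion vectors as the camera motion and treating large deviations as object motion. The sketch below is a simplified take, not the authors' classification scheme.

```python
import numpy as np

def separate_global_and_local(block_motions, tol=1.0):
    """Estimate the dominant (camera) motion as a robust summary of the block
    motion vectors, then treat blocks deviating by more than `tol` pixels as
    local (object) motion."""
    mv = np.asarray(block_motions, dtype=float)     # shape (N, 2)
    global_motion = np.median(mv, axis=0)           # robust dominant motion
    local = mv - global_motion
    is_local = np.linalg.norm(local, axis=1) > tol
    return global_motion, is_local

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    background = np.tile([3.0, -1.0], (80, 1)) + rng.normal(scale=0.2, size=(80, 2))
    objects = np.tile([-5.0, 4.0], (20, 1)) + rng.normal(scale=0.2, size=(20, 2))
    g, mask = separate_global_and_local(np.vstack([background, objects]))
    # g ≈ [3., -1.]; almost no background blocks flagged, all object blocks flagged
    print(np.round(g, 1), mask[:80].sum(), mask[80:].sum())
```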

Navigation based Motion Counting Algorithm for a Wearable Smart Device (항법 기반 웨어러블 스마트 디바이스 동작 카운트 알고리즘)

  • Park, So Young; Lee, Min Su; Song, Jin Woo; Park, Chan Gook
    • Journal of Institute of Control, Robotics and Systems / v.21 no.6 / pp.547-552 / 2015
  • In this paper, an ARS-EKF based motion counting algorithm for repetitive exercises such as calisthenics is proposed using a smartwatch. Raw sensor signals from accelerometers and gyroscopes are widely used in conventional smartwatch counting algorithms based on pattern recognition. However, features generated from raw data do not intuitively reflect the movement of the motions. The proposed motion counter algorithm is composed of navigation-based feature generation and counting with error correction. The candidate features for each activity are the velocity and attitude calculated through an ARS-EKF algorithm. In order to select the features which reveal the characteristics of each motion, an exercise frame derived from the initial sensor frame is introduced. Counting is based on the zero-crossing method, and misdetected counts are eliminated via simple classification algorithms considering the frequency of the counted motions. Experimental results show that the proposed algorithm efficiently and accurately counts the number of exercises.
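
The zero-crossing counting step can be sketched as counting negative-to-positive crossings of a mean-removed feature signal, with a minimum gap between counts to suppress jitter; the ARS-EKF feature generation and the error-correction classifier are not reproduced here.

```python
import numpy as np

def count_repetitions(signal, min_gap=5):
    """Count repetitions as the number of negative-to-positive zero crossings
    of a mean-removed feature signal, ignoring crossings closer than `min_gap`
    samples. A generic zero-crossing counter, not the paper's full pipeline."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                       # center so repetitions cross zero
    count, last = 0, -min_gap
    for i in range(1, len(x)):
        if x[i - 1] < 0 <= x[i] and i - last >= min_gap:
            count += 1
            last = i
    return count

if __name__ == "__main__":
    t = np.linspace(0, 10, 500, endpoint=False)     # 10 seconds at 50 Hz
    velocity = np.sin(2 * np.pi * t - np.pi / 2)    # one repetition per second
    print(count_repetitions(velocity))              # expected: 10
```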