• Title/Summary/Keyword: motion feature vector


Camera Extrinsic Parameter Estimation using 2D Homography and Nonlinear Minimizing Method based on Geometric Invariance Vector (기하학적 불변벡터 기반 2D 호모그래피와 비선형 최소화기법을 이용한 카메라 외부인수 측정)

  • Cha, Jeong-Hee
    • Journal of Internet Computing and Services
    • /
    • v.6 no.6
    • /
    • pp.187-197
    • /
    • 2005
  • In this paper, we propose a method to estimate camera motion parameters based on invariant point features. Typically, image feature information has drawbacks: it varies with the camera viewpoint, and the amount of information grows over time. The LM (Levenberg-Marquardt) method, a nonlinear least-squares technique for camera extrinsic parameter estimation, also has a weakness: the number of iterations needed to reach the minimum depends on the initial values, and convergence time increases if the process runs into a local minimum. To address these shortcomings, we first propose constructing feature models using geometric invariant vectors, and second, a two-stage calculation method that improves accuracy and convergence by combining a 2D homography with the LM method. In the experiments, we compare and analyze the proposed method against an existing method to demonstrate the superiority of the proposed algorithms.

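The two-stage scheme in the abstract above, a coarse pose from a 2D homography refined by Levenberg-Marquardt minimization of the reprojection error, can be sketched as follows. This is a minimal illustration using OpenCV and SciPy rather than the authors' code; the planar target, the choice of the first homography decomposition, and all function names are assumptions.

```python
# Hedged sketch of two-stage extrinsic estimation: homography -> initial pose,
# then Levenberg-Marquardt refinement of the reprojection error.
import numpy as np
import cv2
from scipy.optimize import least_squares

def reprojection_residuals(params, K, obj_pts, img_pts):
    """Difference between observed and projected points, flattened for LM."""
    rvec, tvec = params[:3], params[3:]
    proj, _ = cv2.projectPoints(obj_pts, rvec, tvec, K, None)
    return (proj.reshape(-1, 2) - img_pts).ravel()

def estimate_extrinsics(K, obj_pts, img_pts):
    """obj_pts: (N, 3) coplanar 3D points (Z = 0), img_pts: (N, 2) pixels."""
    # Stage 1: 2D homography between the planar object and the image gives a
    # coarse rotation/translation (first decomposition kept for brevity).
    H, _ = cv2.findHomography(obj_pts[:, :2], img_pts, cv2.RANSAC)
    _, Rs, ts, _ = cv2.decomposeHomographyMat(H, K)
    rvec0, _ = cv2.Rodrigues(Rs[0])
    x0 = np.hstack([rvec0.ravel(), ts[0].ravel()])
    # Stage 2: nonlinear least-squares (LM) refinement of the reprojection error.
    res = least_squares(reprojection_residuals, x0, method="lm",
                        args=(K, obj_pts.astype(float), img_pts.astype(float)))
    return res.x[:3], res.x[3:]   # refined rotation vector and translation
```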

Video Based Fall Detection Algorithm Using Hidden Markov Model (은닉 마르코프 모델을 이용한 동영상 기반 낙상 인식 알고리듬)

  • Kim, Nam Ho;Yu, Yun Seop
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.50 no.8
    • /
    • pp.232-237
    • /
    • 2013
  • A newly developed fall detection algorithm using an HMM (Hidden Markov Model) extracted from video is introduced. To distinguish falls, whose patterns vary from person to person, from normal activities of daily living (ADL), the HMM machine learning algorithm is used. To obtain the fall feature vector from video, the motion vectors from the optical flow are fed to PCA (Principal Component Analysis). The angle, the ratio of the long to the short axis, and the velocity obtained from the PCA results are combined into new fall feature parameters. These parameters were applied to the HMM, and the results were compared and analyzed. Among the various newly proposed fall parameters, the angle of movement showed the best results: it distinguishes various types of falls from ADLs with 91.5% sensitivity and 88.01% specificity.
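As a rough illustration of the feature pipeline this abstract describes (optical-flow motion vectors reduced by PCA to an angle, a long/short axis ratio, and a velocity), per-frame fall features might be computed as below; the flow parameters and the magnitude threshold are assumptions, and the resulting sequences would then be fed to an HMM.

```python
# Illustrative per-frame fall features from dense optical flow + PCA
# (a sketch, not the authors' implementation).
import numpy as np
import cv2

def fall_features(prev_gray, gray, min_mag=0.5):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    vecs = flow.reshape(-1, 2)
    vecs = vecs[np.linalg.norm(vecs, axis=1) > min_mag]   # keep moving pixels only
    if len(vecs) < 2:
        return np.zeros(3)
    # PCA of the motion vectors: eigenvectors give the dominant motion axes.
    evals, evecs = np.linalg.eigh(np.cov(vecs.T))
    major = evecs[:, np.argmax(evals)]
    angle = np.degrees(np.arctan2(major[1], major[0]))            # movement angle
    axis_ratio = np.sqrt(evals.max() / max(evals.min(), 1e-6))    # long/short axis
    velocity = np.linalg.norm(vecs, axis=1).mean()                # mean speed
    return np.array([angle, axis_ratio, velocity])   # one observation for the HMM
```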

A Comparison of Gesture Recognition Performance Based on Feature Spaces of Angle, Velocity and Location in HMM Model (HMM인식기 상에서 방향, 속도 및 공간 특징량에 따른 제스처 인식 성능 비교)

  • 윤호섭;양현승
    • Journal of KIISE:Software and Applications
    • /
    • v.30 no.5_6
    • /
    • pp.430-443
    • /
    • 2003
  • The objective of this paper is to evaluate the most useful feature vector space among the angle, velocity, and location features computed from gesture trajectories, which are obtained by extracting hand regions from consecutive input images and tracking them by connecting their positions. For this purpose, a gesture tracking algorithm using color and motion information is developed. The recognition module is an HMM, which adapts to time-varying data. The proposed algorithm was applied to a database containing 4,800 alphabetical handwriting gestures from 20 persons, each of whom was asked to draw his or her handwriting gestures five times for each of the 48 characters.
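The three feature spaces compared in this paper can be pictured with a small sketch: from a tracked hand trajectory one derives a direction (angle) code, a velocity, and a coarse location code per frame. The quantisation levels below are assumptions, not the authors' settings.

```python
# Hedged sketch: angle, velocity and location feature streams from a hand trajectory.
import numpy as np

def trajectory_features(points, n_angle_bins=16, grid=(4, 4)):
    pts = np.asarray(points, dtype=float)            # (N, 2) hand centroids
    d = np.diff(pts, axis=0)
    angle = np.arctan2(d[:, 1], d[:, 0])             # movement direction per step
    angle_code = ((angle + np.pi) / (2 * np.pi) * n_angle_bins).astype(int) % n_angle_bins
    velocity = np.linalg.norm(d, axis=1)             # pixels per frame
    # location code: index of the coarse grid cell the hand currently occupies
    span = np.maximum(np.ptp(pts, axis=0), 1e-6)
    norm = (pts - pts.min(axis=0)) / span
    cell = (norm[:, 0] * (grid[0] - 1)).astype(int) * grid[1] \
         + (norm[:, 1] * (grid[1] - 1)).astype(int)
    return angle_code, velocity, cell[1:]            # aligned streams, length N-1 each
```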

Enhancement of the k-Means Clustering Speed by Emulation of Birds' Motion in Flock (새떼 이동의 모방에 의한 k-평균 군집 속도의 향상)

  • Lee, Chang-Young
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.9 no.9
    • /
    • pp.965-970
    • /
    • 2014
  • In an effort to improve the convergence speed of k-means clustering, we introduce the notion of the movement of birds in a flock. Their motion is characterized by the observation that each bird follows its nearest neighbor. We utilize this feature in the clustering procedure: once the class of a vector is determined, a number of vectors in its vicinity are assigned to the same class. Experiments have shown that the number of iterations required for termination is significantly lower in the proposed method than in the conventional one. Furthermore, the computation time per iteration is more than 5% shorter in the proposed case. The quality of the clustering, as determined from the total accumulated distance between each vector and its centroid, was found to be practically the same. In short, practically the same clustering result can be obtained in a shorter computation time.
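The flock idea in this abstract, once a vector's cluster is decided, nearby vectors simply follow it, can be sketched for the assignment step as below; the neighbour count and the use of a k-d tree are my own assumptions.

```python
# Minimal sketch of a "flock-following" assignment step for k-means.
import numpy as np
from scipy.spatial import cKDTree

def flock_assign(X, centroids, n_followers=5):
    tree = cKDTree(X)
    labels = -np.ones(len(X), dtype=int)
    for i in np.random.permutation(len(X)):
        if labels[i] >= 0:
            continue                                  # already assigned by a "leader"
        labels[i] = np.argmin(np.linalg.norm(centroids - X[i], axis=1))
        # followers: unassigned neighbours inherit the leader's cluster label
        _, nbrs = tree.query(X[i], k=n_followers + 1)
        for j in np.atleast_1d(nbrs):
            if labels[j] < 0:
                labels[j] = labels[i]
    return labels
```

In a full k-means loop this would replace the usual all-pairs distance assignment; centroid updates and the convergence test stay unchanged.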

Estimating Motion Information Using Multiple Features (다중 특징을 이용한 동작정보 측정)

  • Jang Seok-Woo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.2 s.34
    • /
    • pp.1-10
    • /
    • 2005
  • In this paper, we propose a new block matching algorithm that extracts motion vectors from consecutive range data. The proposed method defines a matching metric that integrates intensity, hue, and range. Our algorithm begins matching with a small matching template. If the matching degree is not good enough, we slightly expand the size of the matching template and repeat the matching process until our matching criterion is satisfied or a predetermined maximum size has been reached. As the iteration proceeds, we adaptively adjust the weights of the matching metric by considering the importance of each feature. In the experiments, we show that our block matching approach is a promising solution by comparing the proposed method with a previously known method in terms of performance.

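A rough sketch of the adaptive multi-feature matching described above follows; the weights, search range, template sizes, and acceptance threshold are illustrative assumptions, not the authors' values.

```python
# Hedged sketch: block matching over intensity/hue/range with a growing template.
import numpy as np

def match_block(feats_t0, feats_t1, cx, cy, weights=(0.4, 0.3, 0.3),
                search=8, half_sizes=(4, 8, 16), good_enough=10.0):
    """feats_*: (H, W, 3) arrays holding intensity, hue and range per pixel."""
    best = None
    for half in half_sizes:                       # progressively larger templates
        tpl = feats_t0[cy-half:cy+half, cx-half:cx+half].astype(float)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = feats_t1[cy+dy-half:cy+dy+half,
                                cx+dx-half:cx+dx+half].astype(float)
                if cand.shape != tpl.shape:
                    continue
                per_feature = np.abs(cand - tpl).mean(axis=(0, 1))
                cost = float(np.dot(weights, per_feature))   # weighted matching metric
                if best is None or cost < best[0]:
                    best = (cost, dx, dy)
        if best is not None and best[0] <= good_enough:
            break                                  # matching criterion satisfied
    return best                                    # (cost, dx, dy) motion vector
```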

Establishment of Correspondent Points and Sampling Period Needed to Estimate Object Motion Parameters (운동물체의 파라미터 추정에 필요한 대응점과 샘플링주기의 설정)

  • Jung, Nam-Chae;Moon, Yong-Sun;Park, Jong-An
    • The Journal of the Acoustical Society of Korea
    • /
    • v.16 no.5
    • /
    • pp.26-35
    • /
    • 1997
  • This paper deals with establishing the correspondent points of feature points and the sampling period when estimating object motion parameters from image information of objects moving freely in a gravity-free space. Replacing the inertial coordinate system with the camera coordinate system mounted on a space robot, we investigate the correspondent-point problem from image information and obtain, by computer simulation, the sequence of angular velocities $\omega$ that determines the motion of the object. If the sampling period $\Delta t$ is made too short, the relative errors of the angular velocity increase, because quantization increases the relative error in the measured displacement of the feature points. Conversely, if the sampling period $\Delta t$ is made too long, the relative errors likewise increase, because the sampling period becomes too long for the angular velocity to be approximated. We also confirmed that precision grows with increasing resolution.

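The trade-off this paper reports, quantisation error dominating at short sampling periods and approximation error at long ones, can be reproduced with a toy calculation; all numbers below are invented for illustration and are not from the paper.

```python
# Toy illustration of angular-velocity error vs. sampling period (not from the paper).
import numpy as np

def relative_omega_error(omega=0.2, radius=100.0, pixel=1.0,
                         periods=(0.01, 0.1, 1.0, 10.0)):
    errors = {}
    for dt in periods:
        chord = 2 * radius * np.sin(omega * dt / 2)      # true image displacement
        measured = np.round(chord / pixel) * pixel       # pixel quantisation
        estimate = measured / (radius * dt)              # small-angle estimate of omega
        errors[dt] = abs(estimate - omega) / omega
    return errors   # large at both very short and very long sampling periods
```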

Optimal Facial Emotion Feature Analysis Method based on ASM-LK Optical Flow (ASM-LK Optical Flow 기반 최적 얼굴정서 특징분석 기법)

  • Ko, Kwang-Eun;Park, Seung-Min;Park, Jun-Heong;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.21 no.4
    • /
    • pp.512-517
    • /
    • 2011
  • In this paper, we propose an Active Shape Model (ASM) and Lucas-Kanade (LK) optical-flow-based feature extraction and analysis method for analyzing emotional features in facial images. Considering that facial emotion feature regions are described by the Facial Action Coding System, we construct feature-related shape models from combinations of landmarks and extract the LK optical flow vector at each landmark, using the landmark as the center pixel of the motion vector window. The facial emotion features are modeled by the combination of the optical flow vectors, and the emotional state of a facial image can be estimated by a probabilistic technique such as a Bayesian classifier. We also extract optimal emotional features, those with a high correlation between feature points and emotional states, by using common spatial pattern (CSP) analysis, in order to improve the efficiency and accuracy of the emotional feature extraction process.
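The core measurement in this method, an LK optical-flow vector taken at each landmark of the shape model, can be sketched as below; any landmark detector may stand in for the ASM here, and the window size is an assumption.

```python
# Hedged sketch: Lucas-Kanade flow evaluated only at facial landmark positions.
import numpy as np
import cv2

def landmark_flow(prev_gray, gray, landmarks, win=15):
    pts0 = np.asarray(landmarks, dtype=np.float32).reshape(-1, 1, 2)
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts0, None,
                                               winSize=(win, win), maxLevel=2)
    flow = (pts1 - pts0).reshape(-1, 2)
    flow[status.ravel() == 0] = 0.0      # zero out landmarks that were not tracked
    return flow                          # (N, 2) motion vector per landmark
```

Concatenating these vectors per frame gives the emotion feature on which a Bayesian classifier or CSP-based selection could then operate.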

Interaction Augmented Reality System using a Hand Motion (손동작을 이용한 상호작용 증강현실 시스템)

  • Choi, Kwang-Woon;Jung, Da-Un;Lee, Suk-Han;Choi, Jong-Soo
    • Journal of Korea Multimedia Society
    • /
    • v.15 no.4
    • /
    • pp.425-438
    • /
    • 2012
  • In this paper, we propose an augmented reality (AR) system, based on computer vision, for interaction between a user's hand motion and the motion of a virtual object. Previous AR systems are inconvenient because the user has to manipulate a marker or a sensor such as a tracker; we solve this problem through hand motion, providing convenience to the user. In addition, driving the virtual object's motion with a physical model adds realism. The proposed system obtains geometric information from the marker and the hand. A virtual environment, such as a space containing a moving virtual ball and bricks, is built from this geometric information, and the user's hand motion is obtained from feature points extracted on the taped hand. A virtual plane is registered stably by tracking the movement of these feature points. The virtual ball basically follows a parabolic trajectory described by a parabolic equation. When a collision with a plane or a brick occurs, the ball's subsequent movement is computed from the ball position and the plane's normal vector; because the ball position is subject to error, we show a corrected ball position through experiments. We also show that this system can replace a marker-based system by comparing the jitter of the augmented virtual object and the processing speed.
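The physical behaviour of the virtual ball described above, parabolic motion plus reflection off a registered plane, can be sketched as a per-frame update; gravity, restitution, and the time step are assumed values, not the authors' constants.

```python
# Minimal sketch of parabolic motion with reflection about a plane normal.
import numpy as np

GRAVITY = np.array([0.0, -9.8, 0.0])

def step_ball(pos, vel, plane_point, plane_normal, dt=1/30, restitution=0.8):
    vel = vel + GRAVITY * dt                       # parabolic (projectile) motion
    pos = pos + vel * dt
    n = plane_normal / np.linalg.norm(plane_normal)
    depth = np.dot(pos - plane_point, n)
    if depth < 0:                                  # ball passed through the plane
        pos = pos - depth * n                      # push it back onto the surface
        vel = restitution * (vel - 2 * np.dot(vel, n) * n)   # reflect the velocity
    return pos, vel
```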

Real-Time Human Tracker Based Location and Motion Recognition for the Ubiquitous Smart Home (유비쿼터스 스마트 홈을 위한 위치와 모션인식 기반의 실시간 휴먼 트랙커)

  • Park, Se-Young;Shin, Dong-Kyoo;Shin, Dong-Il;Cuong, Nguyen Quoc
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2008.06d
    • /
    • pp.444-448
    • /
    • 2008
  • The ubiquitous smart home is the home of the future: it takes advantage of context information from the human and the home environment and provides automatic home services for the human. Human location and motion are the most important contexts in the ubiquitous smart home. We present a real-time human tracker that predicts human location and motion for the ubiquitous smart home, using four network cameras for real-time human tracking. This paper explains the real-time human tracker's architecture and presents an algorithm, with the details of its two functions (prediction of human location and of human motion). Human location estimation uses three kinds of background images (IMAGE1: empty room; IMAGE2: room with furniture and home appliances; IMAGE3: IMAGE2 plus the human). By analyzing the three images, the real-time human tracker determines which piece of furniture (or home appliance) the human is located at, and it predicts human motion using a support vector machine (SVM). A performance experiment on human location estimation using the three images took an average of 0.037 seconds. The SVM feature for human motion recognition is the number of pixels of the moving object in each image row. We evaluated each motion 1000 times; the average accuracy over all motions was 86.5%.

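The motion-recognition stage above uses, as its feature, the number of moving-object pixels per image row; a hedged sketch with scikit-learn's SVC standing in for the paper's SVM is given below.

```python
# Illustrative sketch: per-row pixel counts of the moving object as SVM features.
import numpy as np
from sklearn.svm import SVC

def row_pixel_feature(foreground_mask):
    """Count foreground pixels on each row of the binary human mask."""
    return foreground_mask.astype(bool).sum(axis=1).astype(float)

def train_motion_svm(masks, labels):
    # all masks are assumed to share the same height (same camera resolution)
    X = np.stack([row_pixel_feature(m) for m in masks])
    return SVC(kernel="rbf", gamma="scale").fit(X, labels)
```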

Real-Time Human Tracker Based on Location and Motion Recognition of User for Smart Home (스마트 홈을 위한 사용자 위치와 모션 인식 기반의 실시간 휴먼 트랙커)

  • Choi, Jong-Hwa;Park, Se-Young;Shin, Dong-Kyoo;Shin, Dong-Il
    • The KIPS Transactions:PartA
    • /
    • v.16A no.3
    • /
    • pp.209-216
    • /
    • 2009
  • The ubiquitous smart home is the home of the future: it takes advantage of context information from the human and the home environment and provides automatic home services for the human. Human location and motion are the most important contexts in the ubiquitous smart home. We present a real-time human tracker that predicts human location and motion for the ubiquitous smart home, using four network cameras for real-time human tracking. This paper explains the real-time human tracker's architecture and presents an algorithm, with the details of its two functions (prediction of human location and of human motion). Human location estimation uses three kinds of background images (IMAGE1: empty room; IMAGE2: room with furniture and home appliances; IMAGE3: IMAGE2 plus the human). By analyzing the three images, the real-time human tracker determines which piece of furniture (or home appliance) the human is located at, and it predicts human motion using a support vector machine (SVM). A performance experiment on human location estimation using the three images took an average of 0.037 seconds. The SVM feature for human motion recognition is the number of pixels of the moving object in each image row. We evaluated each motion 1000 times; the average accuracy over all motions was 86.5%.
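Complementing the SVM sketch under the previous entry, the location function built on the three background images (IMAGE1-IMAGE3) might look roughly like this; the difference threshold and the per-furniture masks are assumptions introduced for illustration.

```python
# Hedged sketch: locate the human by differencing the furnished background (IMAGE2)
# against the live frame (IMAGE3) and checking overlap with furniture regions.
import numpy as np

def locate_human(frame_gray, furnished_bg, furniture_masks, thresh=30):
    human = np.abs(frame_gray.astype(int) - furnished_bg.astype(int)) > thresh
    overlaps = {name: int((human & mask).sum())    # pixels shared with each item
                for name, mask in furniture_masks.items()}
    return max(overlaps, key=overlaps.get) if overlaps else None
```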