• Title/Summary/Keyword: motion features


3D Object's shape and motion recovery using stereo image and Paraperspective Camera Model (스테레오 영상과 준원근 카메라 모델을 이용한 객체의 3차원 형태 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB / v.10B no.2 / pp.135-142 / 2003
  • Robust extraction of a 3D object's features, shape, and global motion information from a 2D image sequence is described. Twenty-one feature points on a pyramid-type synthetic object are extracted automatically using a color transform technique. The extracted features are used to recover the 3D shape and global motion of the object using a stereo paraperspective camera model and a sequential SVD (Singular Value Decomposition) factorization method. An inherent error of depth recovery due to the paraperspective camera model is removed by using stereo image analysis. A 3D synthetic object with 21 features reflecting various positions was designed and used to test the performance of the proposed algorithm by comparing the recovered shape and motion data with measured values.
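A minimal NumPy sketch of the rank-3 SVD factorization idea behind this kind of shape-and-motion recovery (an orthographic simplification, not the paper's stereo paraperspective formulation; the measurement matrix below is a random stand-in):

```python
import numpy as np

def factorize_shape_motion(W):
    """Rank-3 factorization of a 2F x P measurement matrix of tracked
    feature coordinates (Tomasi-Kanade style; orthographic approximation,
    not the paper's stereo paraperspective model)."""
    # Register the measurements: subtract each frame's feature centroid.
    W = W - W.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    # Keep the rank-3 subspace: motion (camera) and shape factors.
    M = U[:, :3] * np.sqrt(s[:3])            # 2F x 3 motion matrix
    S = np.sqrt(s[:3])[:, None] * Vt[:3]     # 3 x P shape matrix
    return M, S

# Hypothetical usage: 21 tracked features over F frames, as in the paper.
F, P = 10, 21
W = np.random.rand(2 * F, P)   # stand-in for real image coordinates
M, S = factorize_shape_motion(W)
print(M.shape, S.shape)        # (20, 3) (3, 21)
```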

Representation and Detection of Video Shot's Features for Emotional Events (감정에 관련된 비디오 셧의 특징 표현 및 검출)

  • Kang, Hang-Bong;Park, Hyun-Jae
    • The KIPS Transactions:PartB / v.11B no.1 / pp.53-62 / 2004
  • The processing of emotional information is very important in Human-Computer Interaction (HCI). In particular, it is important in video information processing to deal with a user's affective state. To handle emotional information, it is necessary to represent meaningful features and detect them efficiently. Even though it is not an easy task to detect emotional events from low-level features such as color and motion, it is possible to detect them using statistical analysis such as Linear Discriminant Analysis (LDA). In this paper, we propose a representation scheme for emotion-related features and a detection method. We experiment with features extracted from video to detect emotional events and obtain desirable results.
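A minimal sketch of classifying shots with LDA over low-level color/motion features, assuming scikit-learn is available; the feature layout and labels below are hypothetical stand-ins, not the paper's actual representation scheme:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical shot-level features: e.g. 3 color statistics + 2 motion
# statistics per shot, labeled 1 for "emotional event" shots, 0 otherwise.
rng = np.random.default_rng(0)
X = rng.random((200, 5))
y = rng.integers(0, 2, size=200)

# Fit LDA so the low-level features are projected onto a discriminant axis
# that separates emotional from non-emotional shots.
lda = LinearDiscriminantAnalysis()
lda.fit(X, y)

# Classify new shots from their extracted features.
new_shots = rng.random((10, 5))
print(lda.predict(new_shots))
```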

EMPIRICAL REALITIES FOR A MINIMAL DESCRIPTION RISKY ASSET MODEL. THE NEED FOR FRACTAL FEATURES

  • Heyde, Christopher C.;Liu, S.
    • Journal of the Korean Mathematical Society / v.38 no.5 / pp.1047-1059 / 2001
  • The classical Geometric Brownian motion (GBM) model for the price of a risky asset, from which the huge financial derivatives industry has developed, stipulates that the log returns are iid Gaussian. However, typical log-return data show a distribution with much higher peaks and heavier tails than the Gaussian, as well as evidence of strong and persistent dependence. In this paper we describe a simple replacement for GBM, a fractal activity time Geometric Brownian motion (FATGBM) model based on fractal activity time, which readily explains these observed features in the data. Consequences of the model are explained, and examples are given to illustrate how the self-similar scaling properties of the activity time check out in practice.
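A sketch of the two price models being contrasted, in assumed notation (drift \mu, volatility \sigma, Brownian motion B, activity time T); the exact parameterization in the paper may differ:

```latex
% Classical GBM with drift \mu and volatility \sigma (iid Gaussian log returns):
\[
  P_t = P_0 \exp\{\mu t + \sigma B_t\}.
\]
% Fractal activity time GBM: calendar time inside the Brownian term is replaced
% by an increasing random process T_t (the activity time), whose increments
% carry the heavy tails and long-range dependence seen in real log returns:
\[
  P_t = P_0 \exp\{\mu t + \theta T_t + \sigma B_{T_t}\}.
\]
% Conditionally on the activity increment \tau_t = T_t - T_{t-1}, the log
% return is Gaussian with variance \sigma^2 \tau_t, so mixing over \tau_t
% produces the peaked, heavy-tailed unconditional distribution.
```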


Content-Based Video Retrieval System Using Color and Motion Features (색상과 움직임 정보를 이용한 내용기반 동영상 검색 시스템)

  • 김소희;김형준;정연구;김회율
    • Proceedings of the IEEK Conference / 2001.06c / pp.133-136 / 2001
  • Numerous attempts have been made to retrieve video based on its content. Recently, MPEG-7 has defined a set of visual descriptors for the purpose of searching and retrieving multimedia data. Among them, color and motion descriptors are employed to develop a content-based video retrieval system that searches for videos with similar characteristics in terms of the color and motion features of the video sequence. In this paper, the performance of the proposed system is analyzed and evaluated. Experimental results indicate that the processing time required for a retrieval using MPEG-7 descriptors is relatively short, at the expense of some retrieval accuracy.
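A hedged sketch of the retrieval idea using plain OpenCV features in place of the MPEG-7 color and motion descriptors (a crude HSV hue histogram plus mean optical-flow magnitude; the file name and database structure in the usage comment are hypothetical):

```python
import cv2
import numpy as np

def video_signature(path, max_frames=100):
    """Crude stand-in for MPEG-7 color/motion descriptors: a normalized HSV
    hue histogram plus the mean optical-flow magnitude of the clip."""
    cap = cv2.VideoCapture(path)
    hist = np.zeros(32, dtype=np.float64)
    motion, prev_gray, n = 0.0, None, 0
    while n < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist += cv2.calcHist([hsv], [0], None, [32], [0, 180]).ravel()
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            motion += np.linalg.norm(flow, axis=2).mean()
        prev_gray, n = gray, n + 1
    cap.release()
    hist /= hist.sum() + 1e-9
    return np.append(hist, motion / max(n - 1, 1))

# Retrieval: rank database clips by distance to the query signature.
# query = video_signature("query.avi")                 # hypothetical file
# ranked = sorted(signatures.items(),                  # {name: signature}
#                 key=lambda kv: np.linalg.norm(kv[1] - query))
```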


A Distance Estimation Method of Object's Motion by Tracking Field Features and A Quantitative Evaluation of The Estimation Accuracy (배경의 특징 추적을 이용한 물체의 이동 거리 추정 및 정확도 평가)

  • 이종현;남시욱;이재철;김재희
    • Proceedings of the IEEK Conference / 1999.11a / pp.621-624 / 1999
  • This paper describes a method for estimating the distance moved by an object in a soccer image sequence by tracking field features, together with a quantitative evaluation of the estimation accuracy. We assume that the input image sequence is taken with a camera on a static axis and includes only zooming and panning transformations between frames. Adaptive template matching is adopted for non-rigid object tracking. For background compensation, feature templates selected from the reference frame are matched in the following frames, and the matched feature point pairs are used to compute affine motion parameters. A perspective displacement field model is used to estimate the real distance between two positions in the input image. To quantitatively evaluate the accuracy of the estimation, we synthesized a 3-dimensional virtual stadium with graphics tools and experimented on the synthesized 2-dimensional image sequences. The experiments show that the average error between the actual moving distance and the estimated distance is 1.84%.
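A minimal OpenCV sketch of the background-compensation step: estimating the affine motion induced by camera panning/zooming from matched field-feature pairs (the point data are stand-ins, and the paper's perspective displacement field model for real-distance estimation is not shown):

```python
import cv2
import numpy as np

# Hypothetical matched field-feature coordinates (N x 2) between the reference
# frame and the current frame; the paper obtains these by template matching.
ref_pts = (np.random.rand(20, 2) * 300).astype(np.float32)
cur_pts = ref_pts + np.float32([5, -2])   # stand-in for real matches

# Estimate the affine motion induced by camera panning/zooming.
A, inliers = cv2.estimateAffine2D(ref_pts, cur_pts, method=cv2.RANSAC)

# Background compensation: map an object position observed in the current
# frame back into reference-frame coordinates before measuring displacement.
obj_cur = np.array([150.0, 120.0, 1.0])   # homogeneous image point
A_inv = cv2.invertAffineTransform(A)
obj_in_ref = A_inv @ obj_cur
print(obj_in_ref)
```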


Anomaly Detection using Combination of Motion Features (움직임 특징 조합을 통한 이상 행동 검출)

  • Jeon, Minseong;Cheoi, Kyung Joo
    • Journal of Korea Multimedia Society / v.21 no.3 / pp.348-357 / 2018
  • Anomaly detection is one of the emerging research themes in computer vision, human-computer interaction, video analysis, and monitoring. Observers focus their attention on behaviors whose motion differs in magnitude or direction from that of other objects, or that follow different rules of motion. In this paper, we use this observation and propose a system that detects abnormal behavior using simple features extracted from optical flow. The system can be applied in real-life settings. Experimental results show high performance in detecting abnormal behavior in various videos.
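A minimal sketch of motion features from dense optical flow and a crude anomaly rule, assuming OpenCV is available; the thresholding logic is an illustrative stand-in, not the paper's feature combination:

```python
import cv2
import numpy as np

def motion_features(prev_gray, gray):
    """Per-frame motion features from dense optical flow: a magnitude-weighted
    direction histogram plus the mean flow magnitude."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hist, _ = np.histogram(ang, bins=8, range=(0, 2 * np.pi), weights=mag)
    return np.append(hist / (mag.sum() + 1e-9), mag.mean())

def is_anomalous(feat, running_mean_magnitude, k=3.0):
    """Crude illustrative rule: flag a frame whose mean motion magnitude
    deviates strongly from the running average (k is a made-up threshold)."""
    return feat[-1] > k * (running_mean_magnitude + 1e-9)
```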

Camera Motion Parameter Estimation Technique using 2D Homography and LM Method based on Invariant Features

  • Cha, Jeong-Hee
    • International Journal of Fuzzy Logic and Intelligent Systems / v.5 no.4 / pp.297-301 / 2005
  • In this paper, we propose a method to estimate camera motion parameters based on invariant point features. Typically, image feature information has drawbacks: it varies with the camera viewpoint, and the quantity of information grows over time. The LM (Levenberg-Marquardt) method, a nonlinear least-squares technique used for camera extrinsic parameter estimation, also has a weak point: the number of iterations needed to approach the minimum depends on the initial values, and convergence time increases if the process runs into a local minimum. To address these shortfalls, we first propose constructing feature models using geometrically invariant vectors. Second, we propose a two-stage calculation method that improves accuracy and convergence by using homography estimation and the LM method. In the experiments, we compare and analyze the proposed method against an existing method to demonstrate the superiority of the proposed algorithm.
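A hedged two-stage sketch in the spirit of the proposed method: a closed-form homography as the initial estimate, refined by Levenberg-Marquardt on reprojection error (using OpenCV and SciPy; the matched points are stand-ins, and the paper's recovery of extrinsic parameters from the refined homography is omitted):

```python
import cv2
import numpy as np
from scipy.optimize import least_squares

# Matched invariant feature points between two views (hypothetical data).
pts1 = (np.random.rand(30, 2) * 400).astype(np.float32)
pts2 = pts1 + np.float32([3, 7])

# Stage 1: a closed-form homography provides the initial estimate.
H0, _ = cv2.findHomography(pts1, pts2, cv2.RANSAC)

# Stage 2: refine with Levenberg-Marquardt on the reprojection error,
# fixing H[2, 2] = 1 to remove the overall scale ambiguity.
def residuals(h8):
    H = np.append(h8, 1.0).reshape(3, 3)
    proj = cv2.perspectiveTransform(pts1.reshape(-1, 1, 2), H).reshape(-1, 2)
    return (proj - pts2).ravel()

h0 = (H0 / H0[2, 2]).ravel()[:8]
res = least_squares(residuals, h0, method="lm")
H_refined = np.append(res.x, 1.0).reshape(3, 3)
print(H_refined)
```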

Temporal Texture modeling for Video Retrieval (동영상 검색을 위한 템포럴 텍스처 모델링)

  • Kim, Do-Nyun;Cho, Dong-Sub
    • The Transactions of the Korean Institute of Electrical Engineers D / v.50 no.3 / pp.149-157 / 2001
  • In video retrieval systems, visual cues from still images and motion information from video are employed as feature vectors. We generate temporal textures to express motion information; their advantages are simple representation and easy computation. We build these temporal textures from wavelet coefficients that express the motion information (M components). Temporal texture feature vectors are then extracted using spatial texture feature vectors, i.e. spatial gray-level dependence. The motion amount and motion centroid are also computed from the temporal textures. Motion trajectories provide the most important information for expressing the motion property, and in our modeling system the main motion trajectory can be extracted from the temporal textures.
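A minimal sketch of extracting spatial gray-level dependence (co-occurrence) features from a temporal texture, assuming scikit-image >= 0.19; the texture itself is a random stand-in rather than the paper's wavelet-coefficient construction:

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Hypothetical "temporal texture": a 2D image whose pixel values encode
# frame-to-frame motion strength (the paper builds it from wavelet
# coefficients; a random stand-in is used here).
temporal_texture = (np.random.rand(64, 64) * 255).astype(np.uint8)

# Spatial gray-level dependence (co-occurrence) features over the texture.
glcm = graycomatrix(temporal_texture, distances=[1],
                    angles=[0, np.pi / 2], levels=256,
                    symmetric=True, normed=True)
features = [graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")]

# Motion amount and motion centroid read directly off the temporal texture.
motion_amount = temporal_texture.astype(float).sum()
ys, xs = np.mgrid[:64, :64]
centroid = ((ys * temporal_texture).sum() / motion_amount,
            (xs * temporal_texture).sum() / motion_amount)
print(features, motion_amount, centroid)
```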


A novel visual servoing technique considering robot dynamics (로봇의 운동특성을 고려한 새로운 시각구동 방법)

  • 이준수;서일홍;김태원
    • 제어로봇시스템학회:학술대회논문집 / 1996.10b / pp.410-414 / 1996
  • A visual servoing algorithm is proposed for a robot with a camera in hand. Specifically, novel image features are suggested by employing a viewing model of perspective projection to estimate the relative pitching and yawing angles between the object and the camera. To compensate for the dynamic characteristics of the robot, desired feature trajectories for learning visually guided line-of-sight robot motion are obtained by measuring features with the camera in hand, not over the entire workspace, but on a single linear path along which the robot moves under the control of a commercially provided linear-motion function. Control actions of the camera are then approximately found by fuzzy-neural networks to follow these desired feature trajectories. To show the validity of the proposed algorithm, experimental results are presented in which a four-axis SCARA robot with a B/W CCD camera is used.
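For orientation, a generic proportional image-based visual servoing step on feature errors (not the paper's fuzzy-neural controller; the image Jacobian, gain, and feature dimensions are assumed):

```python
import numpy as np

def servo_step(feat_measured, feat_desired, image_jacobian_pinv, gain=0.5):
    """One proportional control step: command a camera velocity that drives
    the measured image features toward the desired feature trajectory."""
    error = feat_desired - feat_measured
    return gain * image_jacobian_pinv @ error   # 6-vector (linear, angular)

# Hypothetical usage with a 4-dimensional feature vector (two image points).
J_pinv = np.linalg.pinv(np.random.rand(4, 6))   # stand-in image Jacobian
velocity = servo_step(np.zeros(4), np.ones(4), J_pinv)
print(velocity)
```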


An Intelligent Visual Servoing Method using Vanishing Point Features

  • Lee, Joon-Soo;Suh, Il-Hong
    • Journal of Electrical Engineering and Information Science / v.2 no.6 / pp.177-182 / 1997
  • A visual servoing method is proposed for a robot with a camera in hand. Specifically, vanishing point features are suggested by employing a viewing model of perspective projection to calculate the relative rolling, pitching, and yawing angles between the object and the camera. To compensate for the dynamic characteristics of the robot, desired feature trajectories for learning visually guided line-of-sight robot motion are obtained by measuring features with the camera in hand, not over the entire workspace, but on a single linear path along which the robot moves under the control of a commercially provided linear-motion function. Control actions of the camera are then approximately found by fuzzy-neural networks to follow these desired feature trajectories. To show the validity of the proposed algorithm, experimental results are presented in which a four-axis SCARA robot with a B/W CCD camera is used.
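A minimal sketch of obtaining a vanishing point from two image lines and reading relative yaw/pitch angles off it under a simple pinhole model (intrinsics and line coefficients are assumed example values, not the paper's derivation):

```python
import numpy as np

def vanishing_point(line_a, line_b):
    """Intersect two image lines given in homogeneous form (a, b, c),
    i.e. a*x + b*y + c = 0; their cross product is the intersection point."""
    vp = np.cross(line_a, line_b)
    return vp[:2] / vp[2]

# Hypothetical camera intrinsics: focal length in pixels and principal point.
f, cx, cy = 800.0, 320.0, 240.0

# Two image lines that are parallel in the scene (assumed example values).
vp = vanishing_point(np.array([0.02, -1.0, 150.0]),
                     np.array([0.05, -1.0, 80.0]))

# Under a pinhole model, the vanishing point of a scene direction lies at
# (f*dx/dz + cx, f*dy/dz + cy), so its offset from the principal point gives
# approximate relative yaw/pitch angles (a simplification of the paper's model).
yaw = np.degrees(np.arctan2(vp[0] - cx, f))
pitch = np.degrees(np.arctan2(vp[1] - cy, f))
print(vp, yaw, pitch)
```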
