• Title/Summary/Keyword: 2-D Motion


Selection of features and hidden Markov model parameters for English word recognition from Leap Motion air-writing trajectories

  • Deval Verma;Himanshu Agarwal;Amrish Kumar Aggarwal
    • ETRI Journal
    • /
    • v.46 no.2
    • /
    • pp.250-262
    • /
    • 2024
  • Air-writing recognition is relevant in areas such as natural human-computer interaction, augmented reality, and virtual reality. A trajectory is the most natural way to represent air writing. We analyze the recognition accuracy of words written in air considering five features, namely, writing direction, curvature, trajectory, orthocenter, and ellipsoid, as well as different parameters of a hidden Markov model classifier. Experiments were performed on two representative datasets, whose sample trajectories were collected using a Leap Motion Controller from a fingertip performing air writing. Dataset D1 contains 840 English words from 21 classes, and dataset D2 contains 1600 English words from 40 classes. A genetic algorithm was combined with a hidden Markov model classifier to obtain the best subset of features. The combination {trajectory, orthocenter, writing direction, curvature} provided the best feature set, achieving recognition accuracies on datasets D1 and D2 of 98.81% and 83.58%, respectively.
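
    The feature-subset search described above (a genetic algorithm wrapped around a classifier) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `evaluate` callback, which in the paper would train and score an HMM on the selected features, is a hypothetical placeholder supplied by the caller.

    ```python
    import random

    # The five candidate features considered in the paper.
    FEATURES = ["writing direction", "curvature", "trajectory", "orthocenter", "ellipsoid"]

    def fitness(mask, evaluate):
        # evaluate(subset) stands in for training/scoring an HMM classifier
        # on the selected features; an empty subset scores zero.
        subset = [f for f, keep in zip(FEATURES, mask) if keep]
        return evaluate(subset) if subset else 0.0

    def select_features(evaluate, pop_size=20, generations=30, seed=0):
        """Simple GA over binary feature masks: elitism, one-point
        crossover, and single-bit mutation."""
        rng = random.Random(seed)
        pop = [[rng.randint(0, 1) for _ in FEATURES] for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(pop, key=lambda m: fitness(m, evaluate), reverse=True)
            elite = scored[: pop_size // 2]          # keep the better half
            children = []
            while len(elite) + len(children) < pop_size:
                a, b = rng.sample(elite, 2)
                cut = rng.randrange(1, len(FEATURES))
                child = a[:cut] + b[cut:]            # one-point crossover
                if rng.random() < 0.2:               # occasional mutation
                    child[rng.randrange(len(FEATURES))] ^= 1
                children.append(child)
            pop = elite + children
        best = max(pop, key=lambda m: fitness(m, evaluate))
        return [f for f, keep in zip(FEATURES, best) if keep]
    ```

    In the paper the evaluation step is the expensive part (HMM training per candidate subset); the GA merely decides which of the 2^5 subsets are worth evaluating.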

3D Object's shape and motion recovery using stereo image and Paraperspective Camera Model (스테레오 영상과 준원근 카메라 모델을 이용한 객체의 3차원 형태 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.10B no.2
    • /
    • pp.135-142
    • /
    • 2003
  • Robust extraction of a 3D object's features, shape, and global motion information from a 2D image sequence is described. The object's 21 feature points on a pyramid-type synthetic object are extracted automatically using a color transform technique. The extracted features are used to recover the 3D shape and global motion of the object using a stereo paraperspective camera model and a sequential SVD (Singular Value Decomposition) factorization method. An inherent error of depth recovery due to the paraperspective camera model was removed by using stereo image analysis. A 3D synthetic object with 21 features reflecting various positions was designed and tested to show the performance of the proposed algorithm by comparing the recovered shape and motion data with the measured values.
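
    The SVD factorization step the abstract refers to exploits the classical rank-3 structure of the measurement matrix (Tomasi-Kanade style). A minimal numerical sketch on synthetic data, assuming a generic affine-family projection rather than the paper's exact stereo paraperspective formulation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    P = 21                               # feature points (as in the paper's object)
    F = 6                                # frames
    S = rng.standard_normal((3, P))      # 3D shape: points as columns
    M = rng.standard_normal((2 * F, 3))  # stacked 2x3 camera rows per frame
    W = M @ S                            # 2F x P measurement matrix of image tracks

    # Centering each row removes the translation component in the full method;
    # what remains is exactly rank 3.
    W0 = W - W.mean(axis=1, keepdims=True)

    # Rank-3 factorization via SVD: W0 = U3 S3 V3^T = M_hat @ S_hat,
    # recovering motion and shape up to an invertible 3x3 ambiguity
    # (resolved in the full method by metric constraints).
    U, s, Vt = np.linalg.svd(W0, full_matrices=False)
    M_hat = U[:, :3] * np.sqrt(s[:3])
    S_hat = np.sqrt(s[:3])[:, None] * Vt[:3]
    ```

    The reconstruction `M_hat @ S_hat` reproduces the centered measurement matrix to numerical precision, which is what makes the factorization usable for shape and motion recovery.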

Facial Features and Motion Recovery using multi-modal information and Paraperspective Camera Model (다양한 형식의 얼굴정보와 준원근 카메라 모델해석을 이용한 얼굴 특징점 및 움직임 복원)

  • Kim, Sang-Hoon
    • The KIPS Transactions:PartB
    • /
    • v.9B no.5
    • /
    • pp.563-570
    • /
    • 2002
  • Robust extraction of 3D facial features and global motion information from a 2D image sequence for MPEG-4 SNHC face model encoding is described. The facial regions are detected from the image sequence using a multi-modal fusion technique that combines range, color, and motion information. 23 facial features among the MPEG-4 FDP (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are used to recover the 3D shape and global motion of the object using a paraperspective camera model and the SVD (Singular Value Decomposition) factorization method. A 3D synthetic object is designed and tested to show the performance of the proposed algorithm. The recovered 3D motion information is transformed into global motion parameters of the FAP (Face Animation Parameters) of MPEG-4 to synchronize a generic face model with a real face.

3D Depth Information Extraction Algorithm Based on Motion Estimation in Monocular Video Sequence (단안 영상 시퀸스에서 움직임 추정 기반의 3차원 깊이 정보 추출 알고리즘)

  • Park, Jun-Ho;Jeon, Dae-Seong;Yun, Yeong-U
    • The KIPS Transactions:PartB
    • /
    • v.8B no.5
    • /
    • pp.549-556
    • /
    • 2001
  • The general problem of recovering 3D from 2D imagery requires the depth information for each picture element from focus. The manual creation of such 3D models is time-consuming and expensive. The goal of this paper is to simplify the depth estimation algorithm that extracts the depth information of every region from a monocular image sequence with camera translation, in order to implement 3D video in real time. The paper is based on the property that the motion of every point within an image taken under camera translation depends on the depth information. Full-search motion estimation based on a block matching algorithm is exploited at the first step, and then the motion vectors are compensated for the effects of camera rotation and zooming. We introduce an algorithm that estimates object motion by analyzing a monocular motion picture, and that also calculates the average frame depth and each region's depth relative to that average. Simulation results show that the depth of a region belonging to a near or a distant object is in accord with the relative depth that the human visual system recognizes.
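
    The first step mentioned above, full-search block matching, can be sketched as follows; the block size, search range, and SAD criterion are illustrative choices, not the paper's exact settings:

    ```python
    import numpy as np

    def full_search(ref, cur, block=8, search=4):
        """Exhaustive block-matching motion estimation.

        For each block of `cur`, scan every displacement within
        +/- `search` pixels in `ref` and keep the one minimizing the
        sum of absolute differences (SAD)."""
        h, w = ref.shape
        vectors = np.zeros((h // block, w // block, 2), dtype=int)
        for by in range(0, h - block + 1, block):
            for bx in range(0, w - block + 1, block):
                cur_blk = cur[by:by + block, bx:bx + block]
                best, best_mv = np.inf, (0, 0)
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        y, x = by + dy, bx + dx
                        if y < 0 or x < 0 or y + block > h or x + block > w:
                            continue   # candidate block leaves the frame
                        sad = np.abs(ref[y:y + block, x:x + block].astype(int)
                                     - cur_blk.astype(int)).sum()
                        if sad < best:
                            best, best_mv = sad, (dy, dx)
                vectors[by // block, bx // block] = best_mv
        return vectors
    ```

    In the paper's pipeline these raw vectors are then compensated for camera rotation and zoom before the per-region depth averages are computed; under pure camera translation, vector magnitude is inversely related to depth.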


3D conversion of 2D video using depth layer partition (Depth layer partition을 이용한 2D 동영상의 3D 변환 기법)

  • Kim, Su-Dong;Yoo, Ji-Sang
    • Journal of Broadcast Engineering
    • /
    • v.16 no.1
    • /
    • pp.44-53
    • /
    • 2011
  • In this paper, we propose a 3D conversion algorithm for 2D video using a depth layer partition method. In the proposed algorithm, we first set frame groups using a cut detection algorithm; dividing the sequence into frame groups reduces the possibility of error propagation in the process of motion estimation. Depth map generation is the core technique in a 2D/3D conversion algorithm, and we therefore use two depth map generation algorithms: in the first, segmentation and motion information are used, and in the other, an edge directional histogram is used. After applying the depth layer partition algorithm, which separates objects (foreground) from the background in the original image, the two extracted depth maps are properly merged. Through experiments, we verify that the proposed algorithm generates reliable depth maps and good conversion results.
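
    The cut detection step used to form frame groups can be done with a simple per-frame histogram comparison; the following is a generic sketch of that idea, with the bin count and threshold as illustrative assumptions rather than values from the paper:

    ```python
    import numpy as np

    def detect_cuts(frames, bins=32, threshold=0.5):
        """Flag a shot boundary at frame i when the L1 distance between
        the normalized gray-level histograms of frames i-1 and i exceeds
        a threshold."""
        cuts, prev = [], None
        for i, f in enumerate(frames):
            hist, _ = np.histogram(f, bins=bins, range=(0, 256))
            hist = hist / hist.sum()
            if prev is not None and np.abs(hist - prev).sum() > threshold:
                cuts.append(i)
            prev = hist
        return cuts
    ```

    Each run of frames between detected cuts then becomes one frame group, so motion-estimation errors cannot propagate across a scene change.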

The Sloshing Effect on the Roll Motion and 2-DoF Motions of a 2D Rectangular Cylinder (2차원 사각형 주상체의 횡동요 및 2자유도 운동에 미치는 슬로싱의 영향)

  • Kim, Yun-Ho;Sung, Hong-Gun;Cho, Seok-Kyu;Choi, Hang-Shoon
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.50 no.2
    • /
    • pp.69-78
    • /
    • 2013
  • This study investigates the sloshing effect on the motions of a two-dimensional rectangular cylinder experimentally and numerically. The modes of motion under consideration are sway and roll, and the experimental cases are divided into two categories: 1-DoF roll motion and 2-DoF motion (coupled sway and roll). It is found that the sway response is considerably affected by the motion of the fluid, particularly near the sloshing natural frequency, while the roll response changes comparatively little. The dominant mode of motion is analyzed for the 2-DoF experiments as well. The measured data for the 1-DoF motions are compared with numerical results obtained by the multi-modal approach. The numerical schemes vary in detail with the number of dominant sloshing modes; i.e., there is a single dominant mode for the Single-dominant method, while the Model 2 method assumes that the first two modes are superior. For the roll motion, numerical results obtained by the two different methods are in relatively good agreement with the experiments, and the two results are similar over most of the wave frequency range. However, discrepancies are apparent where the fluid motion is not governed by a single mode, and both numerical methods over-predict the motion in the vicinity of the sloshing natural frequency. In order to correct this discrepancy, the modal damping needs to be investigated more precisely. Furthermore, another multi-modal approach, such as the Boussinesq-type method, seems to be required in the intermediate liquid-depth region.

A Study on Effective Facial Expression of 3D Character through Variation of Emotions (Model using Facial Anatomy) (감정변화에 따른 3D캐릭터의 표정연출에 관한 연구 (해부학적 구조 중심으로))

  • Kim, Ji-Ae
    • Journal of Korea Multimedia Society
    • /
    • v.9 no.7
    • /
    • pp.894-903
    • /
    • 2006
  • The rapid growth of hardware technology has brought about the development and expansion of various digital motion-picture media, including 3-dimensional content. 3D digital techniques are used in diverse areas such as animation, virtual reality, movies, advertisement, games, and so on. 3D characters in digital motion pictures play a core role in communicating emotions and information to users through sounds, facial expressions, and characteristic motions. Interest in 3D motion and facial expression is growing as the frequency and range of use of 3D character design expand. In this study, facial expression is examined as an effective means of conveying implicit emotions; 3D characters' facial expressions and muscle movements are analyzed based on human anatomy in order to find an effective method of facial expression. Finally, the differences between 2D and 3D characters are discussed in light of the preceding research.


Development of a 3-D Rehabilitation Robot System for Upper Extremities (상지 재활을 위한 3-D 로봇 시스템의 개발)

  • Shin, Kyu-Hyeon;Lee, Soo-Han
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.26 no.4
    • /
    • pp.64-71
    • /
    • 2009
  • A 3-D rehabilitation robot system is developed in this paper. The robot system is for the rehabilitation of the upper extremities, especially the shoulder and elbow joints, and has a 3-D workspace enabling occupational therapy to recover physical functions in activities of daily living (ADL). The rehabilitation robot system, which is driven by actuators, has 1 DOF in horizontal rotational motion and 2 DOF in vertical rotational motion, where all actuators are set on the ground. Parallelogram linkage mechanisms lower the equivalent inertia of the control elements as well as the control forces. The mechanisms also provide high mechanical rigidity for the end effector and the handle. Passive motion mode experiments have been performed to evaluate the proposed robot system. The results of the experiments show an excellent performance in simulating the spasticity of patients.

Temporal Anti-aliasing of a Stereoscopic 3D Video

  • Kim, Wook-Joong;Kim, Seong-Dae;Hur, Nam-Ho;Kim, Jin-Woong
    • ETRI Journal
    • /
    • v.31 no.1
    • /
    • pp.1-9
    • /
    • 2009
  • Frequency domain analysis is a fundamental procedure for understanding the characteristics of visual data. Several studies have been conducted with 2D videos, but analysis of stereoscopic 3D videos is rarely carried out. In this paper, we derive the Fourier transform of a simplified 3D video signal and analyze how a 3D video is influenced by disparity and motion in terms of temporal aliasing. It is already known that object motion affects temporal frequency characteristics of a time-varying image sequence. In our analysis, we show that a 3D video is influenced not only by motion but also by disparity. Based on this conclusion, we present a temporal anti-aliasing filter for a 3D video. Since the human process of depth perception mainly determines the quality of a reproduced 3D image, 2D image processing techniques are not directly applicable to 3D images. The analysis presented in this paper will be useful for reducing undesirable visual artifacts in 3D video as well as for assisting the development of relevant technologies.
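
    The classical relationship the analysis builds on, that object motion shears the spatio-temporal spectrum, can be stated compactly as follows. This is the standard textbook form in our own notation for a 1-D image translating at constant velocity; the paper's derivation additionally accounts for the disparity between the two views.

    ```latex
    % Image translating at constant velocity v:  f(x, t) = f_0(x - v t).
    % Its 2-D Fourier transform is confined to a line in the
    % (spatial frequency \xi, temporal frequency \omega) plane:
    F(\xi, \omega) = F_0(\xi)\, \delta(\omega + v\,\xi)
    % Hence the temporal bandwidth grows with |v|: sampling at frame
    % rate f_s avoids temporal aliasing only if  f_s > 2\,|v|\,\xi_{\max}.
    % A horizontal disparity d between the left and right views appears
    % as a pure phase factor relating the two spectra:
    F_R(\xi, \omega) = e^{-j \xi d}\, F_L(\xi, \omega)
    ```

    This is why a temporal anti-aliasing filter for 3D video must be designed jointly over motion and disparity rather than per view.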


Foreground Extraction and Depth Map Creation Method based on Analyzing Focus/Defocus for 2D/3D Video Conversion (2D/3D 동영상 변환을 위한 초점/비초점 분석 기반의 전경 영역 추출과 깊이 정보 생성 기법)

  • Han, Hyun-Ho;Chung, Gye-Dong;Park, Young-Soo;Lee, Sang-Hun
    • Journal of Digital Convergence
    • /
    • v.11 no.1
    • /
    • pp.243-248
    • /
    • 2013
  • In this paper, the depth of the foreground is analyzed through focus and color-analysis grouping for 2D/3D video conversion, and a method for generating foreground depth using focus and motion information is proposed. A candidate foreground image is generated from the estimated motion of the image focus information in order to extract the foreground from the 2D video. The foreground area is then extracted by a filling process that applies color analysis to hole regions inside objects in the candidate foreground image. Depth information is generated by analyzing the focus values in the actual frame to allocate depth to the extracted foreground area, and the allocated depth is weighted by motion information. The quality of the generated depth information is evaluated by comparing the results of previously proposed algorithms with the method proposed in this paper.
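
    A common focus measure of the kind such focus/defocus analysis relies on is the variance of the Laplacian, where sharp (in-focus) regions respond strongly and blurred (defocused) regions respond weakly. This is a generic sketch of that idea, not the specific measure used in the paper:

    ```python
    import numpy as np

    def focus_measure(gray):
        """Variance of a 4-neighbour Laplacian over the image interior.

        Higher values indicate sharper (in-focus) content; low values
        indicate smooth or defocused regions."""
        lap = (-4 * gray[1:-1, 1:-1]
               + gray[:-2, 1:-1] + gray[2:, 1:-1]    # vertical neighbours
               + gray[1:-1, :-2] + gray[1:-1, 2:])   # horizontal neighbours
        return lap.var()
    ```

    Evaluating this measure per block or per region yields a focus map from which in-focus foreground can be separated from defocused background.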