• Title/Summary/Keyword: Range of singular fusion

An Analysis on the Range of Singular Fusion of Augmented Reality Devices

  • Lee, Hanul; Park, Minyoung; Lee, Hyeontaek; Choi, Hee-Jin
    • Current Optics and Photonics, v.4 no.6, pp.540-544, 2020
  • Current two-dimensional (2D) augmented reality (AR) devices present virtual images and information on a fixed focal plane, regardless of the various locations of ambient objects of interest around the observer. This limitation can lead to visual discomfort caused by misalignment between the view of the ambient object of interest and the visual representation on the AR device, due to a failure of singular fusion. Since the misalignment becomes more severe as the depth difference increases, it can hamper visual understanding of the scene and interfere with the viewer's task performance. Thus, we analyzed the range of singular fusion (RSF) of AR images, within which viewers can perceive the shape of an object presented on two different depth planes without difficulty caused by a failure of singular fusion. We expect our analysis to inspire the development of advanced AR systems with low visual discomfort.
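
The fusion failure described in this abstract stems from the vergence mismatch between the AR image plane and a real object at a different depth. Below is a minimal sketch of that geometry, not the authors' model: it assumes an illustrative 64 mm interpupillary distance and a single fusional limit (`panum_limit_deg`, also an assumed value), and flags when the mismatch between the two depth planes would likely break singular fusion.

```python
import math

IPD_MM = 64.0  # assumed interpupillary distance (mm), an illustrative value

def vergence_deg(depth_m: float) -> float:
    """Binocular vergence angle (degrees) for a point at the given depth."""
    return math.degrees(2.0 * math.atan((IPD_MM / 1000.0) / (2.0 * depth_m)))

def fusion_ok(image_depth_m: float, object_depth_m: float,
              panum_limit_deg: float = 0.5) -> bool:
    """True when the vergence mismatch between the fixed AR image plane
    and a real object stays within an assumed fusional limit."""
    mismatch = abs(vergence_deg(image_depth_m) - vergence_deg(object_depth_m))
    return mismatch <= panum_limit_deg

# AR image fixed at 2 m, real object at 0.5 m: the mismatch (~5.5 degrees)
# far exceeds the assumed limit, so fusion is expected to fail.
print(fusion_ok(2.0, 0.5))  # False
print(fusion_ok(2.0, 1.8))  # True: nearby depth planes fuse comfortably
```

The sketch illustrates why the misalignment grows with depth difference: vergence angle falls off roughly inversely with distance, so a fixed-focal-plane display diverges quickly from near objects.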

Facial Features and Motion Recovery Using Multi-modal Information and a Paraperspective Camera Model

  • Kim, Sang-Hoon
    • The KIPS Transactions: Part B, v.9B no.5, pp.563-570, 2002
  • Robust extraction of 3D facial features and global motion information from a 2D image sequence for MPEG-4 SNHC face-model encoding is described. The facial regions are detected from the image sequence using a multi-modal fusion technique that combines range, color, and motion information. Twenty-three facial features among the MPEG-4 FDP (Face Definition Parameters) are extracted automatically inside the facial region using color transforms (GSCD, BWCD) and morphological processing. The extracted facial features are used to recover the 3D shape and global motion of the object using a paraperspective camera model and the SVD (Singular Value Decomposition) factorization method. A 3D synthetic object is designed and tested to show the performance of the proposed algorithm. The recovered 3D motion information is transformed into global motion parameters of the MPEG-4 FAP (Face Animation Parameters) to synchronize a generic face model with a real face.
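
The recovery step in this abstract builds on SVD factorization of tracked feature coordinates. Below is a minimal sketch of the rank-3 factorization at its core, under simplifying assumptions: it uses the generic affine (Tomasi-Kanade-style) form rather than the paper's paraperspective model and omits the metric-upgrade constraints, so motion and shape are recovered only up to an affine ambiguity; all names are illustrative.

```python
import numpy as np

def factorize(W: np.ndarray):
    """Factor a 2F x P matrix of tracked feature coordinates into
    motion (2F x 3) and shape (3 x P) via a rank-3 SVD truncation."""
    # Centering each row removes the translation component of the motion
    W_centered = W - W.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(W_centered, full_matrices=False)
    root = np.sqrt(np.diag(s[:3]))  # split the top singular values evenly
    motion = U[:, :3] @ root        # stacked per-frame camera rows
    shape = root @ Vt[:3, :]        # 3D point coordinates
    return motion, shape

# Synthetic check: 10 frames (20 coordinate rows), 23 features, exact rank 3
rng = np.random.default_rng(0)
W = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 23))
M, S = factorize(W)
print(np.allclose(M @ S, W - W.mean(axis=1, keepdims=True)))  # True
```

In a complete pipeline the remaining affine ambiguity is resolved by imposing metric constraints on the motion matrix, and the paraperspective variant adjusts the projection rows per frame.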