• Title/Summary/Keyword: Image 2D to 3D Model

Search Results: 510

Dynamic Facial Expression of Fuzzy Modeling Using Probability of Emotion (감정확률을 이용한 동적 얼굴표정의 퍼지 모델링)

  • Kang, Hyo-Seok;Baek, Jae-Ho;Kim, Eun-Tai;Park, Mignon
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.1
    • /
    • pp.1-5
    • /
    • 2009
  • This paper applies a mirror-reflected-method-based 2D emotion recognition database to a 3D application and models facial expressions with fuzzy logic driven by emotion probabilities. The proposed facial expression function applies fuzzy theory to three basic movements for facial expressions. Feature vectors for emotion recognition are carried from the 2D application to the 3D application using mirror-reflected multi-images, so that a fuzzy nonlinear facial expression model of a real subject can be built from a 2D model. We use the average probabilities of the six basic expressions: happiness, sadness, disgust, anger, surprise, and fear. Dynamic facial expressions are then generated via fuzzy modeling. The paper compares and analyzes the feature vectors of a real model against those of a 3D human-like avatar.
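
As a loose illustration of the idea (not the authors' implementation), expression parameters for three basic facial movements can be blended by emotion probabilities; the movement intensities and emotion weights below are hypothetical values, not data from the paper:

```python
# Hypothetical sketch: blend three basic facial movements by emotion probability.
# The per-emotion movement intensities below are made-up illustrative values.
EMOTIONS = ["happy", "sad", "disgust", "angry", "surprise", "fear"]

# Each emotion maps to intensities of three basic movements
# (e.g. mouth-corner, eyebrow, eyelid), each in [0, 1].
BASIC_MOVEMENTS = {
    "happy":    (0.9, 0.2, 0.1),
    "sad":      (0.1, 0.7, 0.3),
    "disgust":  (0.3, 0.6, 0.4),
    "angry":    (0.2, 0.9, 0.5),
    "surprise": (0.5, 0.8, 0.9),
    "fear":     (0.3, 0.7, 0.8),
}

def blend_expression(probabilities):
    """Weight each basic movement by the emotion probabilities."""
    assert abs(sum(probabilities.values()) - 1.0) < 1e-6
    blended = [0.0, 0.0, 0.0]
    for emotion, p in probabilities.items():
        for i, intensity in enumerate(BASIC_MOVEMENTS[emotion]):
            blended[i] += p * intensity
    return blended

# A mostly-happy frame with a little surprise mixed in.
params = blend_expression({"happy": 0.8, "surprise": 0.2,
                           "sad": 0.0, "disgust": 0.0,
                           "angry": 0.0, "fear": 0.0})
```

Varying the probabilities smoothly over time yields a dynamic expression, which is the role the fuzzy model plays in the paper.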

3D Image Representation Using Color Correction Matrix According to the CCT of a Display (디스플레이 상관 색온도에 따른 색 보정 매트릭스를 이용한 3D 영상 재생)

  • Song, Inho;Kwon, Hyuk-Ju;Kim, Tae-Kyu;Lee, Sung-Hak
    • Journal of Korea Multimedia Society
    • /
    • v.22 no.1
    • /
    • pp.55-61
    • /
    • 2019
  • Almost all 3D displays show reduced brightness in 3D mode compared to 2D mode. When brightness drops, colorfulness, one of the color attributes, decreases, and the viewer perceives degraded image quality. This paper proposes a method that compensates for the loss of colorfulness caused by the brightness reduction in 3D mode, using the CIECAM02 color appearance model and a color correction matrix, to support high-quality 3D viewing. Applying the proposed method confirms that colorfulness in 3D mode is improved.
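
The correction step itself amounts to multiplying each RGB pixel by a 3x3 matrix. The matrix below is a generic saturation boost chosen for illustration; the paper derives its coefficients from CIECAM02 appearance predictions, which is not reproduced here:

```python
# Sketch: apply a 3x3 color correction matrix (CCM) to an RGB pixel.
def apply_ccm(pixel, ccm):
    r, g, b = pixel
    return tuple(
        min(255, max(0, round(row[0] * r + row[1] * g + row[2] * b)))
        for row in ccm
    )

# Illustrative saturation-boosting matrix; each row sums to 1,
# so neutral greys pass through unchanged.
CCM = [
    (1.2, -0.1, -0.1),
    (-0.1, 1.2, -0.1),
    (-0.1, -0.1, 1.2),
]

grey = apply_ccm((128, 128, 128), CCM)  # grey is preserved
red = apply_ccm((200, 50, 50), CCM)     # red becomes more saturated
```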

3D Facial Synthesis and Animation for Facial Motion Estimation (얼굴의 움직임 추적에 따른 3차원 얼굴 합성 및 애니메이션)

  • Park, Do-Young;Shim, Youn-Sook;Byun, Hye-Ran
    • Journal of KIISE:Software and Applications
    • /
    • v.27 no.6
    • /
    • pp.618-631
    • /
    • 2000
  • In this paper, we propose a method for synthesizing a 3D face from the motion of 2D facial images, using optical flow for motion estimation. We extract parameterized motion vectors from the optical flow between adjacent frames of an image sequence to estimate the facial features and facial motion in 2D. We then combine the parameters of these motion vectors to estimate facial motion information, using parameterized vector models tailored to the facial features: an eye-area model, a lip-eyebrow-area model, and a face-area model. Combining the 2D facial motion information with the action units of a 3D facial model, we synthesize the 3D facial model.
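
As a crude stand-in for the optical-flow step (the paper's actual estimator is parameterized per facial region), the sketch below recovers a single translation between two tiny synthetic frames by exhaustive search over candidate shifts:

```python
# Crude translation estimation between two frames by exhaustive search,
# standing in for dense optical-flow motion estimation (synthetic data).
def estimate_shift(prev, curr, max_shift=2):
    h, w = len(prev), len(prev[0])
    best, best_err = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Sum of squared differences between prev and the shifted curr.
            err = 0
            for y in range(max_shift, h - max_shift):
                for x in range(max_shift, w - max_shift):
                    err += (prev[y][x] - curr[y + dy][x + dx]) ** 2
            if err < best_err:
                best, best_err = (dx, dy), err
    return best

# A bright dot that moves one pixel right and one pixel down.
frame1 = [[0] * 8 for _ in range(8)]
frame2 = [[0] * 8 for _ in range(8)]
frame1[3][3] = 255
frame2[4][4] = 255
dx, dy = estimate_shift(frame1, frame2)
```

A real implementation would compute per-pixel flow (e.g. Lucas-Kanade) and then fit the region-specific parameterized models described in the abstract.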


Integrated editing system for 3D stereoscopic contents production (3차원 입체 콘텐츠 제작을 위한 통합 저작 시스템)

  • Yun, Chang-Ok;Yun, Tae-Soo;Lee, Dong-Hoon
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.13 no.1
    • /
    • pp.11-21
    • /
    • 2008
  • Interest in 3D stereoscopic contents has grown recently with the development of digital image media, and many techniques for 3D stereoscopic image generation have been researched and developed. However, generating immersive and natural 3D stereoscopic contents is difficult, because a 2D image lacks the necessary 3D geometric information. In addition, to achieve a strong stereo effect, the camera interval must be adjusted and both eyes' views rendered repeatedly. We therefore propose an integrated editing system for producing 3D stereoscopic contents from a variety of images. We generate a 3D model from the projective geometric information in a single 2D image using an image-based modeling method, offer a real-time interactive stereoscopic preview for judging the depth effect, and produce high-quality 3D stereoscopic contents intuitively through a stereoscopic LCD monitor and polarizing-filter glasses.
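
The camera-interval tuning the authors automate reduces to a simple relation: for a parallel stereo rig with baseline b and focal length f, a point at depth Z produces horizontal disparity d = f·b/Z. A minimal sketch with made-up numbers (not values from the paper):

```python
# Disparity of a scene point for a parallel stereo camera pair:
# d = f * b / Z  (focal length f in pixels, baseline b and depth Z in meters).
def disparity(focal_px, baseline_m, depth_m):
    return focal_px * baseline_m / depth_m

# Illustrative values: 1000 px focal length, 6.5 cm interocular baseline.
near = disparity(1000, 0.065, 1.0)   # object 1 m away
far = disparity(1000, 0.065, 10.0)   # object 10 m away
# Nearer objects get larger disparity, hence a stronger stereo effect;
# widening the camera interval scales all disparities linearly.
```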


3D Line Segment Detection from Aerial Images using DEM and Ortho-Image (DEM과 정사영상을 이용한 항공 영상에서의 3차원 선소추출)

  • Woo Dong-Min;Jung Young-Kee;Lee Jeong-Yong
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.54 no.3
    • /
    • pp.174-179
    • /
    • 2005
  • This paper presents a 3D line segment extraction method that can be used to generate 3D rooftop models. The core of the method is to extract a 3D line segment by line-fitting elevation data along the 2D line coordinates of an ortho-image. For the elevations to be usable in line fitting, they must be reliable; to measure this reliability, we employ the concept of self-consistency. We test the effectiveness of the proposed method with a quantitative accuracy analysis using synthetic images generated from the Avenches data set of Ascona aerial images. Experimental results indicate that the proposed method yields average 3D line errors of 0.16-0.30 meters, about 10% of those of the conventional area-based method.
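
The core fitting step (not the paper's exact code) is an ordinary least-squares fit of DEM elevations sampled along a 2D line segment; the samples below are synthetic:

```python
# Sketch: fit elevation as a linear function of position along a 2D line
# segment, as in lifting a 2D ortho-image roof edge to a 3D segment.
def fit_line(ts, zs):
    """Least-squares fit z = a*t + b, where t parameterizes the segment."""
    n = len(ts)
    mt, mz = sum(ts) / n, sum(zs) / n
    a = (sum((t - mt) * (z - mz) for t, z in zip(ts, zs))
         / sum((t - mt) ** 2 for t in ts))
    return a, mz - a * mt

# Synthetic noise-free DEM samples along a sloping roof edge: z = 2*t + 10.
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
zs = [10.0, 12.0, 14.0, 16.0, 18.0]
slope, intercept = fit_line(ts, zs)
# Elevations at the segment endpoints give the 3D line segment.
z0, z1 = intercept, slope * 4 + intercept
```

In the paper, samples whose self-consistency is poor would be excluded before the fit; that filtering step is omitted here.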

Registration System of 3D Footwear data by Foot Movements (발의 움직임 추적에 의한 3차원 신발모델 정합 시스템)

  • Jung, Da-Un;Seo, Yung-Ho;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.6
    • /
    • pp.24-34
    • /
    • 2007
  • With the growth of IT and changes in daily life, application systems that provide easy access to information have been developed. In this paper, we propose an application system that registers a 3D footwear model using a monocular camera. Human motion analysis generally studies body movement; this system instead explores a new approach based on foot movement. The system is divided into 2D image analysis and 3D pose estimation. To project the 3D shoe model data onto the 2D foot plane, we build a pipeline of foot tracking, a projection expression, and pose estimation. For foot tracking, we propose a method that finds a fixed point from foot characteristics, and we derive a geometric expression relating 2D and 3D coordinates from a monocular camera without camera calibration. We implemented the application system, measured its distance error, and confirmed accurate registration.

Stabilized 3D Pose Estimation of 3D Volumetric Sequence Using 360° Multi-view Projection (360° 다시점 투영을 이용한 3D 볼류메트릭 시퀀스의 안정적인 3차원 자세 추정)

  • Lee, Sol;Seo, Young-ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2022.05a
    • /
    • pp.76-77
    • /
    • 2022
  • In this paper, we propose a method to stabilize the 3D pose estimation of a 3D volumetric data sequence by matching pose estimation results from multiple views. A circle is drawn around the volumetric model, and the model is projected from viewpoints at regular angular intervals along it. After OpenPose 2D pose estimation is performed on each projected 2D image, the 2D joints are matched to localize the 3D joint positions. The jitter of the 3D joint sequence as a function of angular spacing is quantified and plotted, and the minimum conditions for stable results are suggested.
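
One way to read the joint-matching step (the two-page abstract leaves the details open): if the model is projected orthographically at angle θ around the vertical axis, a joint at (X, Z) appears at horizontal image coordinate x′ = X·cosθ + Z·sinθ, so (X, Z) can be recovered from two or more views by least squares. A synthetic sketch under that assumption:

```python
import math

# Recover a joint's horizontal position (X, Z) from its projected x'
# coordinates in orthographic views taken at angles around the model.
def solve_xz(angles, xs):
    """Least-squares solution of x_i = X*cos(a_i) + Z*sin(a_i)."""
    scc = sum(math.cos(a) ** 2 for a in angles)
    sss = sum(math.sin(a) ** 2 for a in angles)
    scs = sum(math.sin(a) * math.cos(a) for a in angles)
    bx = sum(x * math.cos(a) for a, x in zip(angles, xs))
    bz = sum(x * math.sin(a) for a, x in zip(angles, xs))
    det = scc * sss - scs ** 2  # normal-equation determinant
    return (bx * sss - bz * scs) / det, (bz * scc - bx * scs) / det

# Synthetic joint at X=0.3, Z=-0.1 observed from views every 45 degrees.
angles = [math.radians(d) for d in range(0, 360, 45)]
xs = [0.3 * math.cos(a) + (-0.1) * math.sin(a) for a in angles]
X, Z = solve_xz(angles, xs)
```

With noisy per-view OpenPose detections, more views (a smaller angular spacing) average the noise down, which is the jitter-vs-spacing trade-off the paper quantifies.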


Reconstructing 3-D Facial Shape Based on SR Image

  • Hong, Yu-Jin;Kim, Jaewon;Kim, Ig-Jae
    • Journal of International Society for Simulation Surgery
    • /
    • v.1 no.2
    • /
    • pp.57-61
    • /
    • 2014
  • We present a robust 3D facial reconstruction method using a single image generated by a face-specific super resolution (SR) technique. From several consecutive low-resolution frames, we generate a single high-resolution image and build a three-dimensional facial model from it. To do this, we apply the PME method to compute patch similarities for SR after two-phase warping according to facial attributes. From the SR image, we extract facial features automatically and reconstruct the 3D facial model, with bases selected adaptively according to facial statistics, in less than a few seconds. We can thereby provide facial images from various viewpoints that a single camera viewpoint cannot give.

High Performance Millimeter-Wave Image Reject Low-Noise Amplifier Using Inter-stage Tunable Resonators

  • Kim, Jihoon;Kwon, Youngwoo
    • ETRI Journal
    • /
    • v.36 no.3
    • /
    • pp.510-513
    • /
    • 2014
  • A Q-band pHEMT image-rejection low-noise amplifier (IR-LNA) using inter-stage tunable resonators is presented. The inter-stage L-C resonators maximize image rejection by acting as inter-stage matching circuits at the operating frequency ($F_{OP}$) and as short circuits at the image frequency ($F_{IM}$); they also provide wider-band image rejection than conventional notch filters. Moreover, the tunable varactors in the L-C resonators not only compensate for image-frequency mismatch induced by process variation or model error but can also move the image frequency to match the required RF frequency. The implemented pHEMT IR-LNA shows a maximum image rejection ratio (IRR) of 54.3 dB. By changing the varactor bias, the image frequency shifts from 27 GHz to 37 GHz with over 40 dB IRR, a peak gain of 19.1 dB to 17.6 dB, and a noise figure of 3.2 dB to 4.3 dB. To the best of the authors' knowledge, this is the highest IRR and $F_{IM}/F_{OP}$ among reported millimeter/quasi-millimeter-wave IR-LNAs.
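
The tunable notch follows from the standard resonance relation f = 1/(2π√(LC)): sweeping the varactor capacitance C moves the short-circuit (image) frequency. The component values below are illustrative only, not taken from the paper:

```python
import math

# Resonant frequency of an L-C tank: f = 1 / (2*pi*sqrt(L*C)).
# Tuning C (the varactor) moves the notch placed at the image frequency.
def resonant_freq_hz(inductance_h, capacitance_f):
    return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

# Illustrative values: 0.25 nH inductor, varactor swept from 120 fF to 65 fF.
f_low = resonant_freq_hz(0.25e-9, 120e-15)   # notch near 29 GHz
f_high = resonant_freq_hz(0.25e-9, 65e-15)   # smaller C pushes the notch up
```

Note how a roughly 2:1 capacitance swing covers a frequency span on the order of the paper's reported 27-37 GHz image-frequency tuning range.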

Designing and Implementing 3D Virtual Face Aesthetic Surgery System Based on Korean Standard Facial Data (한국 표준 얼굴 데이터를 적용한 3D 가상 얼굴 성형 제작 시스템 설계 및 구현)

  • Lee, Cheol-Woong;Kim, II-Min;Cho, Sae-Hong
    • Journal of Korea Multimedia Society
    • /
    • v.12 no.5
    • /
    • pp.737-744
    • /
    • 2009
  • This paper studies and implements a 3D Virtual Face Aesthetic Surgery System that improves patient satisfaction by letting users compare the face before and after plastic surgery on a 3D face model. For this study, we implemented a 3D Face Model Generating System that produces a model resembling the user's 2D image, based on the 3D Korean standard face model and the user's 2D pictures. The proposed system consists of the 3D Face Model Generating System, a 3D Skin Texture Mapping System, and a Detailed Adjustment System that reflects fine facial detail. Compared with existing systems, it provides greater satisfaction to medical users and more stability in surgery planning.
