• Title/Summary/Keyword: Image warping

Improvement of Face Recognition Rate by Normalization of Facial Expression (표정 정규화를 통한 얼굴 인식율 개선)

  • Kim, Jin-Ok
    • The KIPS Transactions:PartB / v.15B no.5 / pp.477-486 / 2008
  • Facial expression, which changes the face geometry, usually has an adverse effect on the performance of a face recognition system. To improve the face recognition rate, we propose a normalization method that diminishes the difference in facial expression between probe and gallery faces. Two approaches are used for facial expression modeling and normalization from single still images, based on a generic facial muscle model and without the need for large image databases. The first approach estimates the geometry parameters of linear muscle models to obtain a biologically inspired model of the facial expression, which can then be changed intuitively. The second approach uses RBF (Radial Basis Function) based interpolation and warping to normalize the facial muscle model to an unexpressed face according to the given expression. Used as a preprocessing stage for face recognition, these approaches achieve significantly higher recognition rates than the un-normalized case with the eigenface approach, local binary patterns, and a grey-scale correlation measure.
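The RBF-based warping step described above can be illustrated with a small sketch. This is not the authors' implementation; it assumes a grayscale image, landmarks detected on the expressed face, and the corresponding neutral-face landmark positions, and it uses SciPy's thin-plate-spline RBF interpolator to build a backward warp:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def rbf_normalize_expression(image, expr_pts, neutral_pts):
    """Warp an expressed face toward a neutral geometry.

    expr_pts, neutral_pts: (N, 2) arrays of (row, col) landmarks on the
    expressed input and on the desired neutral configuration.
    """
    # Backward mapping: for each pixel in the neutral configuration, find
    # where to sample the expressed image, interpolated from the landmarks.
    rbf = RBFInterpolator(neutral_pts, expr_pts, kernel='thin_plate_spline')

    h, w = image.shape[:2]
    rows, cols = np.mgrid[0:h, 0:w]
    grid = np.column_stack([rows.ravel(), cols.ravel()]).astype(float)
    src = rbf(grid)                       # (h*w, 2) source coordinates

    warped = map_coordinates(image, [src[:, 0], src[:, 1]], order=1)
    return warped.reshape(h, w)
```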

Hole-Filling Method Using Extrapolated Spatio-temporal Background Information (추정된 시공간 배경 정보를 이용한 홀채움 방식)

  • Kim, Beomsu;Nguyen, Tien Dat;Hong, Min-Cheol
    • Journal of the Institute of Electronics and Information Engineers / v.54 no.8 / pp.67-80 / 2017
  • This paper presents a hole-filling method that uses extrapolated spatio-temporal background information to obtain a synthesized view. A new temporal background model based on a non-overlapped patch background codebook is introduced to extrapolate temporal background information. In addition, a depth-map-driven spatial local background estimation is described to define spatial background constraints that represent the lower and upper bounds of a background candidate. Background holes are filled by comparing the similarities between the temporal background information and the spatial background constraints. Additionally, a depth-map-based ghost removal filter is described to resolve the mismatch between a color image and the corresponding depth map of a virtual view after 3-D warping. Finally, inpainting is applied to fill the remaining holes using a priority function that includes a new depth term. Experimental results demonstrate that the proposed method yields subjective and objective improvements over state-of-the-art methods.
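A minimal sketch of background-constrained hole filling in the spirit of the description above, assuming larger depth values mean farther from the camera; the temporal background model (bg_color, bg_depth) and the hole mask are taken as given, and the function name and margin are illustrative:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def fill_background_holes(color, depth, hole_mask, bg_color, bg_depth,
                          depth_margin=5.0):
    """Fill disocclusion holes with an extrapolated temporal background.

    color, depth : synthesized view after 3-D warping (hole values invalid)
    hole_mask    : (H, W) boolean mask of hole pixels
    bg_color     : (H, W, 3) temporal background model (e.g. codebook mode)
    bg_depth     : (H, W) depth associated with the background model
    """
    out_color, out_depth = color.copy(), depth.copy()

    # Spatial background constraint: a hole should be filled with content
    # at least as far away as the farthest valid pixel in its neighbourhood.
    local_far = maximum_filter(np.where(~hole_mask, depth, 0), size=5)
    fillable = hole_mask & (bg_depth + depth_margin >= local_far)

    out_color[fillable] = bg_color[fillable]
    out_depth[fillable] = bg_depth[fillable]
    remaining = hole_mask & ~fillable    # left for priority-based inpainting
    return out_color, out_depth, remaining
```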

Virtual Make-up System Using Light and Normal Map Approximation (조명 및 법선벡터 지도 추정을 이용한 사실적인 가상 화장 시스템)

  • Yang, Myung Hyun;Shin, Hyun Joon
    • Journal of the Korea Computer Graphics Society / v.21 no.3 / pp.55-61 / 2015
  • In this paper, we introduce a method to synthesize realistic make-up effects on input images efficiently. In particular, we focus on the shading of the make-up effects caused by the lighting and the face curvature, which allows us to synthesize a wider range of effects realistically than previous methods. To do this, the lighting information must be estimated together with the normal vectors of all pixels over the face region in the input image. Since previous methods that compute lighting information and normal vectors require relatively heavy computation, we introduce an approach that approximates the lighting information using a cascade pose regression process and the normal vectors by transforming, rendering, and warping a standard 3D face model. The proposed method consumes much less computation time than previous methods. In our experiments, we show that the proposed approximation technique can produce natural-looking virtual make-up effects.
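The shading idea can be sketched as a Lambertian fit: given an approximate normal map (e.g. warped from a generic 3D face model), a directional-plus-ambient light is estimated by least squares and then reused to shade the make-up layer. This is a simplified stand-in for the paper's lighting model, not its actual implementation:

```python
import numpy as np

def estimate_lambertian_light(intensity, normals, mask):
    """Least-squares fit of an ambient-plus-directional Lambertian light.

    intensity : (H, W) observed face luminance in [0, 1]
    normals   : (H, W, 3) approximate normal map
    mask      : (H, W) boolean face-region mask
    Returns (ambient, light_dir) so that I ~ ambient + max(0, n . light_dir).
    """
    n = normals[mask].reshape(-1, 3)
    i = intensity[mask].ravel()
    A = np.hstack([np.ones((n.shape[0], 1)), n])      # [1, nx, ny, nz]
    x, *_ = np.linalg.lstsq(A, i, rcond=None)
    return x[0], x[1:]

def shade_makeup(makeup_albedo, normals, ambient, light_dir):
    """Apply the estimated lighting to a flat make-up layer."""
    ndotl = np.clip(np.einsum('hwc,c->hw', normals, light_dir), 0, None)
    return np.clip(makeup_albedo * (ambient + ndotl)[..., None], 0, 1)
```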

The affective components of facial beauty (아름다운 얼굴의 감성적 특징)

  • 김한경;박수진;정찬섭
    • Science of Emotion and Sensibility / v.7 no.1 / pp.23-28 / 2004
  • In this paper, we investigated the affective components of facial beauty. In study 1, we performed a factor analysis of affective evaluations of the faces; about 65% of the variance was explained by only two factors, which were named 'sharp' and 'soft', respectively. In study 2, the correlation between facial beauty and the affective evaluations was analyzed, and the correlation between facial beauty and the sharp factor was significant. In study 3, we created new images by morphing and warping the faces: 'average', 'high-ranked', and 'exaggerated'. The participants evaluated the 'high-ranked' face as more beautiful than the 'average' face, and the 'exaggerated' face as more beautiful than the 'high-ranked' face. The ratings of affective words on the faces showed that the 'average' face was related to a 'soft' impression, the 'high-ranked' image to a 'sharp' impression, and the 'exaggerated' face may have carried both impressions. These results might support the directional hypothesis for facial beauty.
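The 'exaggerated' stimuli can be understood as a caricature-style extrapolation of landmark shapes away from the average face; a minimal sketch follows (the coefficient and the landmark format are illustrative, and the dense image warp itself would be applied afterwards, e.g. with the RBF warp sketched earlier):

```python
import numpy as np

def exaggerate_shape(avg_pts, target_pts, alpha=1.5):
    """Move a face shape away from the average (caricature-style warping).

    avg_pts, target_pts : (N, 2) corresponding landmarks of the average
                          face and of the 'high-ranked' face.
    alpha > 1 exaggerates the difference, alpha = 1 reproduces the target,
    and 0 < alpha < 1 morphs the target back toward the average.
    """
    return avg_pts + alpha * (target_pts - avg_pts)
```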

Speed Enhancement Technique for Ray Casting using 2D Resampling (2차원 리샘플링에 기반한 광선추적법의 속도 향상 기법)

  • Lee, Rae-Kyoung;Ihm, In-Sung
    • Journal of KIISE:Computer Systems and Theory / v.27 no.8 / pp.691-700 / 2000
  • Standard volume ray-casting, even when optimized with an octree, must repeatedly traverse the hierarchical structure for each ray, which often leads to redundant computations, and it employs expensive 3D interpolation to produce high-quality images. In this paper, we present a new ray-casting method that efficiently computes shaded colors and opacities at resampling points while traversing the octree only once. The method traverses the volume data in object order, finds resampling points on slices incrementally, and performs resampling based on 2D interpolation. While early ray termination, one of the most effective optimization techniques, is not easily combined with object-order methods, we solve this problem with a dynamic data structure in image space. Considering that the new method is easy to implement and needs little additional memory, it can serve as a very effective volume rendering method that fills the performance gap between ray-casting and shear-warping.
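A rough sketch of object-order compositing with 2D resampling and early ray termination, assuming pre-shaded colors and opacities per slice and a shear-warp-style per-slice 2D offset; the paper's dynamic image-space data structure is approximated here by a simple per-pixel active mask:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def composite_object_order(volume_color, volume_alpha, offsets,
                           opaque_thresh=0.95):
    """Front-to-back compositing with per-slice 2-D resampling.

    volume_color, volume_alpha : (S, H, W) pre-shaded colors and opacities
    offsets : (S, 2) sub-pixel (row, col) shift of each slice on the image
              plane (a shear-warp-style approximation of the viewing rays)
    """
    S, H, W = volume_alpha.shape
    img = np.zeros((H, W))
    acc_a = np.zeros((H, W))
    rows, cols = np.mgrid[0:H, 0:W].astype(float)

    for s in range(S):
        active = acc_a < opaque_thresh          # early ray termination
        if not active.any():
            break
        coords = [rows + offsets[s, 0], cols + offsets[s, 1]]
        c = map_coordinates(volume_color[s], coords, order=1)  # 2-D resampling
        a = map_coordinates(volume_alpha[s], coords, order=1)
        w = (1.0 - acc_a) * a * active          # only still-active pixels
        img += w * c
        acc_a += w
    return img
```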

Eye Contact System Using Depth Fusion for Immersive Videoconferencing (실감형 화상 회의를 위해 깊이정보 혼합을 사용한 시선 맞춤 시스템)

  • Jang, Woo-Seok;Lee, Mi Suk;Ho, Yo-Sung
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.7 / pp.93-99 / 2015
  • In this paper, we propose a gaze correction method for realistic video teleconferencing. Typically, the cameras used in teleconferencing are installed at the side of the display monitor rather than at its center, which makes it difficult for users to make eye contact with each other; eye contact, however, is essential for immersive videoconferencing. In the proposed method, we use a stereo camera and a depth camera to correct the gaze. The depth camera is a Kinect, which is relatively inexpensive and estimates depth information efficiently, but it has some inherent disadvantages. Therefore, we fuse the Kinect depth data with the stereo camera data to compensate for those disadvantages. Subsequently, the gaze-corrected image is synthesized by 3D warping according to the depth information. Experimental results verify that the proposed system is effective in generating natural gaze-corrected images.
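The 3D warping step can be sketched as back-projecting each pixel with its fused depth, moving it to a virtual camera placed at the monitor center, and re-projecting; the intrinsics and pose below are placeholders, not the paper's calibration:

```python
import numpy as np

def warp_to_virtual_view(depth, K_src, K_dst, R, t):
    """Map source pixels to a virtual (eye-contact) viewpoint by 3-D warping.

    depth : (H, W) metric depth for the source camera
    K_src, K_dst : 3x3 intrinsics of the source and virtual cameras
    R, t  : rotation (3x3) and translation (3,) from source to virtual view
    Returns (H, W, 2) destination pixel coordinates as (row, col).
    """
    H, W = depth.shape
    v, u = np.mgrid[0:H, 0:W].astype(float)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)     # homogeneous (x, y, 1)

    # Back-project to 3-D, move to the virtual camera, re-project.
    pts = (np.linalg.inv(K_src) @ pix.reshape(-1, 3).T) * depth.ravel()
    pts = R @ pts + t[:, None]
    proj = K_dst @ pts
    proj = proj[:2] / proj[2]
    return np.stack([proj[1], proj[0]], axis=-1).reshape(H, W, 2)
```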

3D Clothes Modeling of Virtual Human for Metaverse (메타버스를 위한 가상 휴먼의 3차원 의상 모델링)

  • Kim, Hyun Woo;Kim, Dong Eon;Kim, Yujin;Park, In Kyu
    • Journal of Broadcast Engineering / v.27 no.5 / pp.638-653 / 2022
  • In this paper, we propose a new method for creating a 3D virtual human that reflects the clothes worn by a person in a high-resolution whole-body front image together with the person's body shape data. To obtain the clothing pattern, we perform instance segmentation and clothes parsing using Cascade Mask R-CNN. We then use Pix2Pix to blur the boundaries and estimate the background color, and obtain the UV map of the 3D clothes mesh through UV-map-based warping. We also obtain the body shape data using SMPL-X and deform the original clothes and body mesh accordingly. With the clothes UV map and the deformed clothes and body mesh, the user can finally see an animation of a 3D virtual human that reflects the user's appearance, rendered with a state-of-the-art game engine, i.e., Unreal Engine.
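The UV-map-based warping step can be sketched as baking photo colors into the garment's UV texture, assuming a per-pixel UV correspondence (e.g. from rendering the fitted clothes mesh into the photo) is available; names and the texture size are illustrative, and the gaps left by the nearest-neighbour scatter would be filled by the Pix2Pix-style inpainting mentioned above:

```python
import numpy as np

def bake_clothes_texture(photo, clothes_mask, uv_coords, tex_size=1024):
    """Bake a garment texture from a front photo into the mesh UV map.

    photo        : (H, W, 3) whole-body front image
    clothes_mask : (H, W) boolean mask from instance segmentation / parsing
    uv_coords    : (H, W, 2) per-pixel UV coordinates in [0, 1]
    """
    tex = np.zeros((tex_size, tex_size, 3), dtype=photo.dtype)
    ys, xs = np.nonzero(clothes_mask)
    u = np.clip((uv_coords[ys, xs, 0] * (tex_size - 1)).astype(int), 0, tex_size - 1)
    v = np.clip((uv_coords[ys, xs, 1] * (tex_size - 1)).astype(int), 0, tex_size - 1)
    tex[v, u] = photo[ys, xs]     # nearest-neighbour scatter; holes remain
    return tex
```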

Synthesis of Realistic Facial Expression using a Nonlinear Model for Skin Color Change (비선형 피부색 변화 모델을 이용한 실감적인 표정 합성)

  • Lee Jeong-Ho;Park Hyun;Moon Young-Shik
    • Journal of the Institute of Electronics Engineers of Korea CI / v.43 no.3 s.309 / pp.67-75 / 2006
  • Facial expressions exhibit not only facial feature motions but also subtle changes in illumination and appearance. Since it is difficult to generate realistic facial expressions using only geometric deformations, detailed features such as textures should also be deformed to achieve a more realistic expression. Existing methods such as the expression ratio image have drawbacks in that detailed changes of complexion due to lighting cannot be generated properly. In this paper, we propose a nonlinear model of skin color change and a model-based synthesis method for facial expression that can reproduce realistic expression details under different lighting conditions. The proposed method is composed of three steps: automatic extraction of facial features using an active appearance model and geometric deformation of the expression using warping; generation of the facial expression using the nonlinear skin color change model; and synthesis of the original face with the generated expression using a blending ratio computed from the Euclidean distance transform. Experimental results show that the proposed method generates realistic facial expressions under various lighting conditions.
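The distance-transform blending in the last step can be sketched as a weight that ramps from 0 at the boundary of the expression region to 1 in its interior; the feather width is an illustrative parameter, not a value from the paper:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def blend_with_distance_ratio(original, synthesized, region_mask, feather=20.0):
    """Blend a synthesized expression patch into the original face.

    original, synthesized : (H, W, 3) float images
    region_mask           : (H, W) boolean mask of the synthesized region
    The blending weight grows with the Euclidean distance from the region
    boundary, hiding the seam between the two images.
    """
    dist = distance_transform_edt(region_mask)            # 0 outside the region
    w = np.clip(dist / feather, 0.0, 1.0)[..., None]      # 0 at boundary, 1 inside
    return (1.0 - w) * original + w * synthesized
```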

Multi-view Generation using High Resolution Stereoscopic Cameras and a Low Resolution Time-of-Flight Camera (고해상도 스테레오 카메라와 저해상도 깊이 카메라를 이용한 다시점 영상 생성)

  • Lee, Cheon;Song, Hyok;Choi, Byeong-Ho;Ho, Yo-Sung
    • The Journal of Korean Institute of Communications and Information Sciences / v.37 no.4A / pp.239-249 / 2012
  • Recently, virtual view generation methods that use depth data have been employed to support advanced stereoscopic and auto-stereoscopic displays. Although depth data is invisible to the user during 3D video rendering, its accuracy is very important since it determines the quality of the generated virtual view image. Many works address such depth enhancement by exploiting a time-of-flight (TOF) camera. In this paper, we propose a fast 3D scene capturing system that uses one TOF camera at the center and two high-resolution color cameras at the sides. Since depth data is needed for both color cameras, we obtain the two views' depth maps from the center camera using a 3D warping technique. Holes in the warped depth maps are filled by referring to the surrounding background depth values. To reduce mismatches of object boundaries between the depth and color images, we apply a joint bilateral filter to the warped depth data. Finally, using the two color images and depth maps, we generate 10 additional intermediate images. To realize a fast capturing system, we implemented the proposed system using multi-threading. Experimental results show that the proposed system captures the two viewpoints' color and depth videos in real time and generates the 10 additional views at 7 fps.
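The joint bilateral filtering of the warped depth maps can be sketched as guided smoothing where the color image supplies the range term; this naive version uses wrap-around shifts for brevity, and the radius and sigma values are illustrative:

```python
import numpy as np

def joint_bilateral_depth(depth, guide, radius=4, sigma_s=3.0, sigma_r=0.1):
    """Smooth a warped depth map using the color image as the range guide.

    depth : (H, W) warped depth (float)
    guide : (H, W) grayscale guide image in [0, 1]
    Depth boundaries are pulled toward the edges of the color image.
    """
    acc = np.zeros_like(depth)
    wsum = np.zeros_like(depth)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted_d = np.roll(depth, (dy, dx), axis=(0, 1))
            shifted_g = np.roll(guide, (dy, dx), axis=(0, 1))
            # spatial weight (distance) times range weight (guide difference)
            w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                       - (guide - shifted_g) ** 2 / (2 * sigma_r ** 2))
            acc += w * shifted_d
            wsum += w
    return acc / wsum
```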

View Synthesis Error Removal for Comfortable 3D Video Systems (편안한 3차원 비디오 시스템을 위한 영상 합성 오류 제거)

  • Lee, Cheon;Ho, Yo-Sung
    • Smart Media Journal / v.1 no.3 / pp.36-42 / 2012
  • Recently, smart applications such as the smart phone and smart TV have become a hot issue in IT consumer markets. In particular, smart TVs provide 3D video services, so efficient coding methods for 3D video data are required. Three-dimensional (3D) video involves stereoscopic or multi-view images that provide a depth experience through 3D display systems. Binocular cues are perceived by rendering proper viewpoint images obtained at slightly different view angles. Since the number of viewpoints in a multi-view video is limited, 3D display devices must generate arbitrary viewpoint images from the available adjacent view images. In this paper, after briefly explaining a view synthesis method, we propose a new algorithm to compensate for view synthesis errors around object boundaries. We describe a 3D warping technique that exploits the depth map for viewpoint shifting and a hole-filling method that uses multi-view images. We then propose an algorithm to remove the boundary noise that is generated by mismatches of object edges in the color and depth images. The proposed method reduces annoying boundary noise near object edges by replacing erroneous textures with alternative textures from the other reference image. Using the proposed method, we can generate perceptually improved images for 3D video systems.
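The boundary-noise removal idea can be sketched as detecting a thin band around depth discontinuities in one warped view and replacing its texture with the co-located texture from the view synthesized from the other reference image; the threshold and band width below are illustrative:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def suppress_boundary_noise(synth_a, synth_b, depth_a, edge_thresh=8.0, band=2):
    """Replace textures near depth edges of one warped view with the other.

    synth_a, synth_b : (H, W, 3) the same virtual view synthesized from the
                       left and right reference images
    depth_a          : (H, W) depth map of the first synthesized view
    Pixels inside a thin band around depth discontinuities of view A are
    taken from view B, where the mismatched object edge is usually clean.
    """
    gy, gx = np.gradient(depth_a)
    edges = np.hypot(gy, gx) > edge_thresh
    band_mask = binary_dilation(edges, iterations=band)

    out = synth_a.copy()
    out[band_mask] = synth_b[band_mask]
    return out
```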
