• Title/Summary/Keyword: 3D face modeling


A Method of Integrating Scan Data for 3D Face Modeling (3차원 얼굴 모델링을 위한 스캔 데이터의 통합 방법)

  • Yoon, Jin-Sung;Kim, Gye-Young;Choi, Hyung-Il
    • Journal of the Institute of Electronics Engineers of Korea SP / v.46 no.6 / pp.43-57 / 2009
  • Integrating 3D data acquired from multiple views is one of the most important techniques in 3D modeling. However, existing integration methods are sensitive to registration errors and surface scanning noise. In this paper, we propose an integration algorithm that uses local surface topology. We first find all boundary vertex pairs satisfying a prescribed geometric condition in the areas between neighboring surfaces, and then separate those areas into several regions using the boundary vertex pairs. We next compute a best-fitting plane for each region through PCA (Principal Component Analysis). These planes are used to produce the triangles inserted into the empty areas between neighboring surfaces. Since each region between neighboring surfaces can be integrated using only local surface topology, the proposed method is robust to registration errors and surface scanning noise. We also propose a method of integrating textures using a parameterization technique. We first transform the integrated surface into the initial viewpoint of each source surface and project each texture onto the transformed integrated surface. The projected textures are then assigned to the parameter domain of the integrated surface and merged along the seam lines between surfaces. Experimental results show that the proposed method is effective for face modeling.
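The PCA step mentioned in this abstract, fitting a plane to a set of 3D points, can be sketched as follows. This is a minimal generic illustration, not the authors' implementation: the best-fitting plane passes through the centroid, and its normal is the direction of least variance.

```python
import numpy as np

def best_fit_plane(points):
    """Fit a plane to 3D points via PCA.

    Returns (centroid, normal): the plane passes through the centroid,
    and the normal is the eigenvector of the covariance matrix with
    the smallest eigenvalue (direction of least variance).
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    centered = pts - centroid
    # Covariance of the centered points; its smallest principal
    # component is perpendicular to the best-fitting plane.
    cov = centered.T @ centered / len(pts)
    eigvals, eigvecs = np.linalg.eigh(cov)  # eigenvalues in ascending order
    normal = eigvecs[:, 0]
    return centroid, normal

# Noisy samples of the plane z = 0: the recovered normal should be ~z-axis.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(100, 3))
pts[:, 2] = 0.01 * rng.standard_normal(100)
c, n = best_fit_plane(pts)
print(abs(n[2]))  # close to 1
```

In the paper's setting, such a plane would be computed per region between neighboring scans and used as the support for triangulating the gap.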

Pose Transformation of a Frontal Face Image by Invertible Meshwarp Algorithm (역전가능 메쉬워프 알고리즘에 의한 정면 얼굴 영상의 포즈 변형)

  • 오승택;전병환
    • Journal of KIISE: Software and Applications / v.30 no.1_2 / pp.153-163 / 2003
  • In this paper, we propose a new image-based rendering (IBR) technique for transforming the pose of a face using only a frontal face image and its mesh, without a three-dimensional model. To substitute for a 3D geometric model, we first build a standard mesh set of a reference person for several face poses: front, left, right, half-left, and half-right. For a given person, we compose only the frontal mesh of the frontal face image to be transformed; the other meshes are generated automatically from the standard mesh set. The frontal face image is then geometrically transformed to give different views by the Invertible Meshwarp Algorithm, which is improved to tolerate the overlap or inversion of neighboring vertices in the mesh. The same warping algorithm is used to generate the opening or closing of the eyes and mouth. To evaluate the transformation performance, we captured dynamic images of 10 people rotating their heads horizontally and measured the location error of 14 main features between corresponding original and transformed facial images, that is, the average difference between the distances from the center of both eyes to each feature point in the two images. The average error in feature location is about 7.0% of the distance from the center of both eyes to the center of the mouth.
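The "inversion of neighboring vertices" that the improved meshwarp must tolerate can be detected with a standard orientation test. The sketch below is a hypothetical illustration of that test (not the paper's algorithm): a warped triangle is flipped when its signed area changes sign relative to the source triangle.

```python
def signed_area(a, b, c):
    """Twice the signed area of 2D triangle (a, b, c); the sign flips
    when the vertex order changes between counter-clockwise and clockwise."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])

def is_inverted(tri_src, tri_dst):
    """A warped triangle is 'inverted' when its orientation differs
    from the source triangle's orientation."""
    return signed_area(*tri_src) * signed_area(*tri_dst) < 0

src = [(0, 0), (1, 0), (0, 1)]
ok  = [(0, 0), (2, 0), (0, 2)]   # scaled, same orientation: not inverted
bad = [(0, 0), (0, 1), (1, 0)]   # two vertices swapped: flipped
print(is_inverted(src, ok), is_inverted(src, bad))  # False True
```

A warping algorithm that tolerates inversion would detect such triangles and handle them specially rather than rejecting the mesh outright.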

Integrated 3D Skin Color Model for Robust Skin Color Detection of Various Races (강건한 다인종 얼굴 검출을 위한 통합 3D 피부색 모델)

  • Park, Gyeong-Mi;Kim, Young-Bong
    • The Journal of the Korea Contents Association / v.9 no.5 / pp.1-12 / 2009
  • Correct detection of skin color is an important preliminary step in face detection and human motion analysis. It is generally performed in three steps: transforming pixel colors to a non-RGB color space, dropping the illuminance component of the skin color, and classifying pixels with a skin color distribution model. Skin detection depends on various factors such as the color space, the illumination, and the skin modeling method. In this paper, we propose a 3D skin color model that can segment pixels of several ethnic skin colors from images with varying illumination conditions and complicated backgrounds. The proposed model is built from the Y, Cb, and Cr components obtained by transforming pixel colors into the YCbCr color space. To segment the skin colors of several ethnic groups together, we first create a skin color model for each ethnic group and then merge the models using their skin color probabilities. Furthermore, the proposed model defines several graded skin color regions, which helps classify skin color correctly even with little training data.
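The YCbCr transform underlying this kind of skin model can be sketched with the standard ITU-R BT.601 equations. The rectangular Cb/Cr thresholds below are common illustrative values from the skin-detection literature, not the probabilistic multi-ethnic model the paper builds:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Convert RGB (0-255) pixels to YCbCr using the ITU-R BT.601
    full-range equations; Y carries luminance, Cb/Cr carry chrominance."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def is_skin(rgb):
    """Classify pixels with a simple rectangular Cb/Cr threshold.
    These bounds are illustrative only."""
    ycbcr = rgb_to_ycbcr(np.asarray(rgb, dtype=float))
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    return (77 <= cb) & (cb <= 127) & (133 <= cr) & (cr <= 173)

pixels = np.array([[224, 172, 138],   # a light skin tone
                   [0, 128, 0]])      # green: not skin
print(is_skin(pixels))
```

Dropping the Y component, as the abstract's second step describes, makes the classifier less sensitive to illumination; the paper's contribution is to keep Y and model all three components jointly.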

Fairy tale experience system using VR (가상현실 기술을 활용한 동화 체험 시스템)

  • Yoo, Sangwook;Kim, Suhyeon;Kim, Jonghyun;Hwang, Chanho;Choi, Hyosub
    • Proceedings of the Korea Information Processing Society Conference / 2020.11a / pp.482-485 / 2020
  • With growing interest in technologies that enable physical untact (non-contact) and social ontact (online contact), education using virtual reality (VR) has attracted attention. At the same time, as the educational environment shifts rapidly toward customized, experience-centered learning, content that satisfies user needs remains scarce. In this paper, we therefore compare paper books and e-books for effective reading education and propose a fairy tale experience system based on virtual reality. Using 3D texture modeling of the user's face and multiplayer support, we implemented non-face-to-face immersive content and a fairy tale selection function built on public data. From an educational perspective, the system is effective in that learners experience and engage together with fairy tales they have chosen themselves.

Emotion-based Gesture Stylization For Animated SMS (모바일 SMS용 캐릭터 애니메이션을 위한 감정 기반 제스처 스타일화)

  • Byun, Hae-Won;Lee, Jung-Suk
    • Journal of Korea Multimedia Society / v.13 no.5 / pp.802-816 / 2010
  • Creating gestures from new text input is an important problem in computer games and virtual reality. Recently, there has been increasing interest in gesture stylization that imitates the gestures of celebrities such as announcers. However, no attempt has been made so far to stylize gestures using emotions such as happiness and sadness, and previous research has not focused on real-time algorithms. In this paper, we present a system that automatically creates gesture animation from SMS text and stylizes the gestures according to emotion. A key feature of the system is a real-time algorithm that combines gestures with emotion. Because the target platform is a mobile phone, we distribute the workload between server and client, so the system guarantees real-time performance of 15 or more frames per second. We first extract emotion-expressing words and their corresponding gestures from Disney videos and model the gestures statistically. We then apply the theory of Laban Movement Analysis to combine gesture and emotion. To evaluate the system, we analyze user survey responses.

A Study on Human-Robot Interaction Trends Using BERTopic (BERTopic을 활용한 인간-로봇 상호작용 동향 연구)

  • Jeonghun Kim;Kee-Young Kwahk
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.185-209 / 2023
  • With the advent of the 4th industrial revolution, various technologies have received much attention. Technologies related to the 4th industry include the Internet of Things (IoT), big data, artificial intelligence, virtual reality (VR), 3D printing, and robotics, and these technologies are often converged. In particular, robotics is combined with technologies such as big data, artificial intelligence, VR, and digital twins. Accordingly, much research using robotics is being conducted and applied in distribution, airport, hotel, restaurant, and transportation settings. Against this background, research on human-robot interaction is attracting attention, but it has not yet reached the level of user satisfaction. However, research on robots capable of fluent communication is progressing steadily, and such robots are expected to be able to take over human emotional labor. It is therefore necessary to discuss whether current human-robot interaction technology can be applied to business. To this end, this study first examines trends in human-robot interaction technology; second, it compares LDA (Latent Dirichlet Allocation) and BERTopic topic modeling methods. We found that studies from 1992 to 2002 discussed the concept of human-robot interaction and basic interaction. From 2003 to 2012, many studies addressed social expression, along with judgment-related topics such as face detection and recognition. In studies from 2013 to 2022, service topics such as elderly nursing, education, and autism treatment appeared, and research on social expression continued; however, the technology does not yet seem to have reached a level applicable to business. Comparing the two methods, we confirmed that BERTopic outperforms LDA.

Fixed-Point Modeling and Performance Analysis of a SIFT Keypoints Localization Algorithm for SoC Hardware Design (SoC 하드웨어 설계를 위한 SIFT 특징점 위치 결정 알고리즘의 고정 소수점 모델링 및 성능 분석)

  • Park, Chan-Ill;Lee, Su-Hyun;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SD / v.45 no.6 / pp.49-59 / 2008
  • SIFT (Scale Invariant Feature Transform) is an algorithm that extracts feature vectors at pixels around keypoints, i.e., pixels whose intensities differ strongly from their neighbors, such as corners and edges of an object. The SIFT algorithm is being actively researched for various image processing applications, including 3D image construction, and its most computation-intensive stage is keypoint localization. In this paper, we develop a fixed-point model of keypoint localization and propose an efficient hardware architecture for embedded applications. The bit-lengths of key variables are determined based on two performance measures: localization accuracy and error rate. Compared with the original algorithm (implemented in Matlab), the accuracy and error rate of the proposed fixed-point model are 93.57% and 2.72%, respectively. In addition, we found that most of the missing keypoints appear at object edges, which matter little for keypoint matching. We estimate that the hardware implementation will achieve a processing speed of 10 to 15 frames/sec, whereas fixed-point software implementations on a Pentium Core2Duo (2.13 GHz) and an ARM9 (400 MHz) take 10 seconds and one hour, respectively, to process a frame.
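The bit-length study described here rests on fixed-point quantization. A generic Q-format sketch (not the authors' hardware model) shows the trade-off such a study sweeps: more fractional bits means less quantization error but wider datapaths.

```python
def to_fixed(x, frac_bits, total_bits=16):
    """Quantize a float to signed fixed-point with the given number of
    fractional bits, saturating at the representable range."""
    scale = 1 << frac_bits
    q = int(round(x * scale))
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    return max(lo, min(hi, q))

def to_float(q, frac_bits):
    """Convert a fixed-point integer back to a float."""
    return q / (1 << frac_bits)

# Quantization error shrinks as fractional bits grow; a bit-length
# study like the paper's would sweep frac_bits per variable and
# measure the resulting localization accuracy and error rate.
x = 3.14159265
for frac_bits in (4, 8, 12):
    err = abs(x - to_float(to_fixed(x, frac_bits), frac_bits))
    print(frac_bits, err)
```

In hardware, each extra fractional bit costs adder and multiplier width, which is why the paper tunes the bit-length of each key variable rather than using one wide format everywhere.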

Studies on the Modeling of the Three-dimensional Standard Face and Deriving of Facial Characteristics Depending on the Taeeumin and Soyangin (소양인, 태음인의 표준 3차원 얼굴 모델링 개발 및 그 특성에 관한 연구)

  • Lee, Seon-Young;Hwang, Min-Woo
    • Journal of Sasang Constitutional Medicine / v.26 no.4 / pp.350-364 / 2014
  • Objectives This study aimed to identify significant facial features of the Taeeumin and Soyangin by analyzing three-dimensional face data, and to construct a standard face for each constitution. Methods We collected three-dimensional face data from patients aged 20 to 45 who had been diagnosed by a specialist in Sasang constitutional medicine. The data were acquired with a 3D scanner, Morpheus 3D (Morpheus Corporation, Korea). A total of 64 facial feature points were extracted, and 332 variables (heights, angles, ratios, etc.) were defined between the feature points. ANOVA tests were used to compare the characteristics of Taeeumin and Soyangin subjects. Results Without considering gender, the Taeeumin and Soyangin differed in 18 items (3 in the ear, 9 in the eye, 1 in the nose, 1 in the mouth, and 4 in the jaw). Considering gender, Taeeumin and Soyangin men differed in 6 items (1 in the ear, 2 in the nose, and 3 in the face), and Taeeumin and Soyangin women differed in 17 items (1 in the ear, 10 in the eye, 2 in the nose, 1 in the mouth, and 3 in the face). Conclusions These results show that Taeeumin faces (both men and women) are wider than they are long. Compared with Soyangin men, Taeeumin men have wider nasal alae; compared with Soyangin women, Taeeumin women have longer eyes. Soyangin faces (both men and women) are longer than they are wide. Compared with Taeeumin men, Soyangin men have narrower nasal alae; compared with Taeeumin women, Soyangin women have more pronounced three-dimensional features at the top and bottom of the lateral face. By accumulating three-dimensional face data, this study also modeled standard facial features for the Taeeumin and Soyangin. These results may be helpful in the future development of Sasang constitutional diagnostics utilizing characteristics of facial form.
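Variables like those in this study, distances and ratios between 3D feature points, can be computed generically. The landmark names and coordinates below are hypothetical placeholders, not the paper's 64-point scheme:

```python
import numpy as np

# Hypothetical 3D landmarks in millimeters; NOT the paper's 64-point scheme.
landmarks = {
    "zygion_left":  np.array([-70.0,   0.0,  0.0]),
    "zygion_right": np.array([ 70.0,   0.0,  0.0]),
    "trichion":     np.array([  0.0,  90.0, 10.0]),
    "gnathion":     np.array([  0.0, -80.0,  5.0]),
}

def dist(a, b):
    """Euclidean distance between two named landmarks."""
    return float(np.linalg.norm(landmarks[a] - landmarks[b]))

face_width  = dist("zygion_left", "zygion_right")  # bizygomatic width
face_height = dist("trichion", "gnathion")         # hairline to chin tip
ratio = face_width / face_height
# A ratio above 1 would suggest a wider-than-long (Taeeumin-type) face
# under the paper's finding; below 1, a longer-than-wide (Soyangin-type) face.
print(round(ratio, 3))
```

Computing many such distances, angles, and ratios across a cohort and comparing groups with ANOVA, as the paper does, yields the per-item differences reported above.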