• Title/Summary/Keyword: 3차원 얼굴 (3D face)

Search results: 283

Development of Facial Animation Generator on CGS System (CGS 시스템의 페이셜 애니메이션 발상단계 개발)

  • Cho, Dong-Min
    • Journal of Korea Multimedia Society
    • /
    • v.14 no.6
    • /
    • pp.813-823
    • /
    • 2011
  • This study proposes a facial animation methodology that lets 3D character animators use the CGS (Character Generation System) effectively during the stage in which they create ideas and iterate through the facial animation process. The CGS is a creative idea-generation methodology that identifies and complements the shortcomings of existing computerized idea generation. The work extends the article "CGS System based on Three-Dimensional Character Modeling II (Part 2: About Digital Process)," published in the Journal of Korea Multimedia Society, vol. 13, no. 7, July 2010. Drawing on a preceding study of 3D character facial expressions driven by the character's emotions and anatomical structure, together with a case study of character expressions in theatrical animation, this study is expected to serve as one method for maximizing facial animation quality and idea-generation ability.

Design of RBFNNs Pattern Classifier Realized with the Aid of PSO and Multiple Point Signature for 3D Face Recognition (3차원 얼굴 인식을 위한 PSO와 다중 포인트 특징 추출을 이용한 RBFNNs 패턴분류기 설계)

  • Oh, Sung-Kwun;Oh, Seung-Hun
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.63 no.6
    • /
    • pp.797-803
    • /
    • 2014
  • In this paper, a 3D face recognition system is designed using polynomial-based RBFNNs. In 2D face recognition, recognition performance is degraded by external environmental factors such as illumination and facial pose; 3D face recognition is adopted to compensate for these shortcomings. In the preprocessing stage, 3D face shapes acquired at different pose angles are converted into frontal shapes through pose compensation, and the depth data of the face shape are extracted using Multiple Point Signature, with overall face depth information obtained from two or more reference points. Using the extracted high-dimensional data directly degrades both learning speed and recognition performance, so the principal component analysis (PCA) algorithm is applied to reduce the dimensionality of the data. Parameter optimization is carried out with the aid of PSO for effective training and recognition. The proposed pattern classifier is evaluated on a dataset collected in the IC & CI Lab.
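
The abstract above reduces high-dimensional depth features with PCA before classification. A minimal sketch of that step is shown below; it uses scikit-learn rather than the authors' own implementation, and the feature matrix, component count, labels, and stand-in classifier are hypothetical placeholders.

```python
# Hedged sketch: PCA dimensionality reduction of depth features before classification.
# Not the authors' code; scikit-learn is used here, and the array shapes,
# n_components, and the downstream classifier are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC  # stand-in for the paper's RBFNNs classifier

# Hypothetical data: 200 face scans, each a 4096-dimensional depth-feature vector.
X = np.random.rand(200, 4096)
y = np.random.randint(0, 10, size=200)  # 10 hypothetical subjects

# Project onto the leading principal components to curb training cost.
pca = PCA(n_components=50)
X_reduced = pca.fit_transform(X)

clf = SVC(kernel="rbf").fit(X_reduced, y)
print("explained variance retained:", pca.explained_variance_ratio_.sum())
```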

A Hardware Design of Feature Detector for Realtime Processing of SIFT(Scale Invariant Feature Transform) Algorithm in Embedded Systems (임베디드 환경에서 SIFT 알고리즘의 실시간 처리를 위한 특징점 검출기의 하드웨어 구현)

  • Park, Chan-Il;Lee, Su-Hyun;Jeong, Yong-Jin
    • Journal of the Institute of Electronics Engineers of Korea SD
    • /
    • v.46 no.3
    • /
    • pp.86-95
    • /
    • 2009
  • SIFT is an algorithm that extracts descriptor vectors around keypoints, pixels whose values differ strongly from their neighbors, such as the corners and edges of an object. The SIFT algorithm is actively researched for various image processing applications, including 3D image reconstruction and intelligent vision systems for robots. In this paper, we implement the SIFT feature detection algorithm in hardware for real-time processing in embedded systems. We estimate that the hardware implementation processes a 1,280 × 960 image in 25 ms and a 640 × 480 image in 5 ms at 100 MHz, and that it consumes 45,792 LUTs (85%) when synthesized with the Synplify 8.1i tool.
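
For readers who want the software baseline that such a hardware detector accelerates, a minimal sketch using OpenCV's SIFT implementation is given below; the image path and parameter values are hypothetical, and this is the standard library routine, not the paper's hardware design.

```python
# Hedged sketch: software SIFT keypoint detection with OpenCV, as a reference
# baseline for the hardware detector described above. The file name and
# parameter values are illustrative assumptions.
import cv2

img = cv2.imread("frame_640x480.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
sift = cv2.SIFT_create(nfeatures=500)          # cap keypoints for embedded budgets
keypoints, descriptors = sift.detectAndCompute(img, None)

print(f"detected {len(keypoints)} keypoints, "
      f"descriptor shape: {None if descriptors is None else descriptors.shape}")
```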

Implementation of Intelligent Moving Target Tracking and Surveillance System Using Pan/Tilt-embedded Stereo Camera System (팬/틸트 탑제형 스테레오 카메라를 이용한 지능형 이동표적 추적 및 감시 시스템의 구현)

  • 고정환;이준호;김은수
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.4C
    • /
    • pp.514-523
    • /
    • 2004
  • In this paper, a new intelligent moving target tracking and surveillance system basing on the pan/tilt-embedded stereo camera system is suggested and implemented. In the proposed system, once the face area of a target is detected from the input stereo image by using a YCbCr color model and then, using this data as well as the geometric information of the tracking system, the distance and 3D information of the target are effectively extracted in real-time. Basing on these extracted data the pan/tilted-embedded stereo camera system is adaptively controlled and as a result, the proposed system can track the target adaptively under the various circumstance of the target. From some experiments using 80 frames of the test input stereo image, it is analyzed that standard deviation of the position displacement of the target in the horizontal and vertical directions after tracking is kept to be very low value of 1.82, 1.11, and error ratio between the measured and computed 3D coordinate values of the target is also kept to be very low value of 0.5% on average. From these good experimental results a possibility of implementing a new real-time intelligent stereo target tracking and surveillance system using the proposed scheme is finally suggested.
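
The abstract relies on YCbCr skin-color segmentation to locate the face region before stereo ranging. Below is a minimal sketch of that detection step, assuming OpenCV and commonly cited Cb/Cr skin ranges; the paper's calibrated thresholds are not given, so these values are illustrative.

```python
# Hedged sketch: locate a face-like region via YCbCr skin-color thresholding,
# as a stand-in for the detection step described above. The Cb/Cr bounds are
# commonly used illustrative values, not the paper's calibrated thresholds.
import cv2
import numpy as np

frame = cv2.imread("left_frame.png")              # hypothetical left stereo image
ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)  # OpenCV orders channels Y, Cr, Cb

# Typical skin ranges: 133 <= Cr <= 173, 77 <= Cb <= 127 (assumed, not from the paper).
mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    cx, cy = x + w // 2, y + h // 2   # center coordinates used to steer the pan/tilt unit
    print("face candidate center:", (cx, cy))
```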

Multi-modal Emotion Recognition using Semi-supervised Learning and Multiple Neural Networks in the Wild (준 지도학습과 여러 개의 딥 뉴럴 네트워크를 사용한 멀티 모달 기반 감정 인식 알고리즘)

  • Kim, Dae Ha;Song, Byung Cheol
    • Journal of Broadcast Engineering
    • /
    • v.23 no.3
    • /
    • pp.351-360
    • /
    • 2018
  • Human emotion recognition is a research topic receiving continuous attention in the computer vision and artificial intelligence domains. This paper proposes a method for classifying human emotions through multiple neural networks based on multi-modal signals consisting of image, landmark, and audio data in a wild environment. The proposed method has the following features. First, the learning performance of the image-based network is greatly improved by employing both multi-task learning and semi-supervised learning that exploit the spatio-temporal characteristics of videos. Second, a model for converting one-dimensional (1D) facial landmark information into two-dimensional (2D) images is newly proposed, and a CNN-LSTM network based on this model is introduced for better emotion recognition. Third, based on the observation that audio signals are often very effective for specific emotions, we propose an audio deep-learning mechanism robust to those emotions. Finally, so-called emotion-adaptive fusion is applied to obtain synergy among the multiple networks. The proposed network improves emotion classification performance by appropriately integrating existing supervised and semi-supervised learning networks. In the fifth attempt on the given test set of the EmotiW2017 challenge, the proposed method achieved a classification accuracy of 57.12%.
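
One concrete step in the abstract is converting facial landmark coordinates into a 2D image so that a CNN-LSTM can consume them. A minimal sketch of such a rasterization is given below; the landmark count, canvas size, and drawing scheme are assumptions for illustration, not the authors' exact encoding.

```python
# Hedged sketch: rasterize facial landmark coordinates onto a small 2D canvas
# so that a CNN-style network can process them, in the spirit of the
# landmark-to-image conversion described above. Canvas size, landmark count,
# and normalization are illustrative assumptions.
import numpy as np

def landmarks_to_image(landmarks: np.ndarray, size: int = 64) -> np.ndarray:
    """landmarks: (N, 2) array of (x, y) points; returns a (size, size) float image."""
    canvas = np.zeros((size, size), dtype=np.float32)
    pts = landmarks - landmarks.min(axis=0)            # shift to the origin
    pts = pts / (pts.max() + 1e-8) * (size - 1)        # scale into the canvas
    for x, y in pts.astype(int):
        canvas[y, x] = 1.0                             # mark each landmark pixel
    return canvas

# Hypothetical 68-point landmark set for one video frame.
frame_landmarks = np.random.rand(68, 2) * 200
img = landmarks_to_image(frame_landmarks)
print(img.shape, int(img.sum()))   # 64x64 image with up to 68 active pixels
```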

Classification of Head Shape and 3-dimensional analysis for Korean Men (성인 남성 머리와 얼굴 부위의 형태분류와 3차원적 분석)

  • Choi, Young-Lim;Kim, Jae-Seung;Nam, Yun-Ja
    • Fashion & Textile Research Journal
    • /
    • v.12 no.6
    • /
    • pp.812-820
    • /
    • 2010
  • The objectives of this study were to classify the head shapes of Korean men and to suggest computed tomography as a new body measurement method. Twenty-three head measurement items for 760 men aged 18 and over in the Sizekorea 2004 database were analyzed with statistical methods: factor analysis, cluster analysis, and the Duncan test. Factor analysis extracted 5 factors based on the factor scores, which together accounted for 70.91% of the total variance. The head and face shapes were categorized into 5 types: triangle, round, oval, long, and rectangle. Type 1 (triangle) was chosen as the standard head shape because it was the most frequently observed. Twenty-one participants were then measured using computed tomography (CT), and the measured skin and skeleton data and the standard head shapes were illustrated.
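
The analysis pipeline in the abstract, factor analysis followed by clustering of the factor scores, can be sketched with scikit-learn as follows; only the counts (23 items, 760 subjects, 5 factors, 5 shape clusters) come from the abstract, the data are simulated, and this is not the authors' statistical software.

```python
# Hedged sketch: factor analysis of anthropometric measurements followed by
# clustering of the factor scores, mirroring the pipeline described above.
# The data are simulated; only the counts (23 items, 5 factors, 5 clusters)
# come from the abstract.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(760, 23))             # placeholder for 23 head measurements of 760 men

X_std = StandardScaler().fit_transform(X)  # standardize measurement scales
scores = FactorAnalysis(n_components=5, random_state=0).fit_transform(X_std)

labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(scores)
print("cluster sizes:", np.bincount(labels))
```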

Implicit Self-anxious and Self-depressive Associations among College Students with Posttraumatic Stress Symptoms (외상 경험자의 암묵적 자기-불안 및 자기-우울의 연합)

  • Yun Kyeung, Choi;Jae Ho, Lee
    • Korean Journal of Culture and Social Issue
    • /
    • v.24 no.3
    • /
    • pp.451-472
    • /
    • 2018
  • The purpose of this study was to examine implicit associations between negative emotion (i.e., anxiety and depression) and the self among college students who had experienced posttraumatic stress symptoms. The participants were 61 college students (16 male, 45 female). They were classified into a trauma group (n=35) and a control group (n=26) according to their scores on the Korean version of the Impact of Events Scale-Revised. The two groups were compared on automatic self-anxious and self-depressive associations measured with the Implicit Association Test, administered with words and with facial-expression pictures, respectively. The trauma group showed a stronger self-anxious association in the word condition, and stronger self-anxious and self-depressive associations in the picture condition, than the control group, whereas there were no significant differences between the two groups in explicit cognition and depression. These results suggest that traumatic experiences can influence self-concepts at the automatic level. Limitations of the current study and suggestions for future research are discussed.

2D Spatial-Map Construction for Workers Identification and Avoidance of AGV (AGV의 작업자 식별 및 회피를 위한 2D 공간 지도 구성)

  • Ko, Jung-Hwan
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.49 no.9
    • /
    • pp.347-352
    • /
    • 2012
  • In this paper, a 2D spatial-map construction method for worker identification and avoidance by an AGV is proposed, using a stereo-camera-based scheme for detecting spatial coordinates. In the proposed system, the face area of a moving person is detected from the left image of the stereo pair using a YCbCr color model, its center coordinates are computed with the centroid method, and these data are then used to control the stereo camera mounted on the mobile robot so that it tracks the moving target in real time. Moreover, a depth map is obtained from the disparity map computed from the left and right images captured by the tracking-controlled stereo camera, together with the perspective transformation between the 3D scene and the image plane. Experiments on AGV driving with 240 frames of stereo images show that the error ratio between the calculated and measured values of the worker's width remains as low as 2.19% and 1.52% on average.
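
The depth-estimation step described above, disparity from a rectified left/right pair followed by conversion to depth via the camera geometry, can be sketched with OpenCV's block-matching routine; the image files, focal length, and baseline below are hypothetical, and the paper's own calibration is not reproduced.

```python
# Hedged sketch: compute a disparity map from a rectified stereo pair and convert
# it to depth with depth = f * B / disparity. Focal length, baseline, and the
# input images are illustrative assumptions, not the paper's calibration.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point to pixels

f_px, baseline_m = 700.0, 0.12          # assumed focal length (px) and baseline (m)
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = f_px * baseline_m / disparity[valid]

print("median depth of valid pixels (m):", np.median(depth_m[valid]))
```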

A Study On Three-dimensional Optimized Face Recognition Model : Comparative Studies and Analysis of Model Architectures (3차원 얼굴인식 모델에 관한 연구: 모델 구조 비교연구 및 해석)

  • Park, Chan-Jun;Oh, Sung-Kwun;Kim, Jin-Yul
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.64 no.6
    • /
    • pp.900-911
    • /
    • 2015
  • In this paper, 3D face recognition models are designed using a polynomial-based RBFNN (Radial Basis Function Neural Network) and a PNN (Polynomial Neural Network), and their recognition rates are evaluated. In existing 2D face recognition models, the recognition rate can degrade under external conditions such as changes in image brightness that affect facial features, so 3D face recognition using a 3D scanner is performed to overcome this disadvantage of 2D recognition. In the preprocessing stage, 3D face images acquired under pose variation are converted into frontal images through pose compensation. The depth data of the face shape are extracted using Multiple Point Signature, and depth information for the whole face area is obtained using the tip of the nose as a reference point. Parameter optimization is carried out with the aid of both ABC (Artificial Bee Colony) and PSO (Particle Swarm Optimization) for effective training and recognition. The experimental face recognition data consist of face images of students and researchers in the IC&CI Lab of Suwon University. Using the 3D face images acquired in the IC&CI Lab, the performance of 3D face recognition is evaluated and compared for the two types of models, as well as for the point-signature method based on the two kinds of depth-data information.
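
Both this paper and the RBFNNs classifier above tune model parameters with swarm-based search. A minimal particle swarm optimization loop is sketched below, assuming a generic scalar objective (here a toy stand-in for validation error); it is not tied to the authors' classifier, and all swarm settings are illustrative.

```python
# Hedged sketch: a generic particle swarm optimization (PSO) loop of the kind used
# to tune classifier parameters in the papers above. The objective function,
# bounds, and swarm settings are illustrative assumptions.
import numpy as np

def objective(params: np.ndarray) -> float:
    """Toy stand-in for validation error as a function of two model parameters."""
    return float(np.sum((params - np.array([0.3, 2.0])) ** 2))

rng = np.random.default_rng(1)
n_particles, dim, iters = 20, 2, 50
w, c1, c2 = 0.7, 1.5, 1.5                       # inertia and acceleration coefficients

pos = rng.uniform(-5, 5, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best parameters found:", gbest, "objective:", objective(gbest))
```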

Soft tissue evaluation using 3-dimensional face image after maxillary protraction therapy (3차원 얼굴 영상을 이용한 상악 전방견인 치료 후의 연조직 평가)

  • Choi, Dong-Soon;Lee, Kyoung-Hoon;Jang, Insan;Cha, Bong-Kuen
    • The Journal of the Korean dental association
    • /
    • v.54 no.3
    • /
    • pp.217-229
    • /
    • 2016
  • Purpose: The aim of this study was to evaluate soft-tissue changes after maxillary protraction therapy using three-dimensional (3D) facial images. Materials and Methods: This study used pretreatment (T1) and posttreatment (T2) 3D facial images from thirteen Class III malocclusion patients (6 boys and 7 girls; mean age 8.9 ± 2.2 years) who received maxillary protraction therapy. The facial images were taken with an optical scanner (Rexcan III 3D scanner), and the T1 and T2 images were superimposed using the forehead area as a reference. The soft-tissue changes after treatment (T2-T1) were calculated three-dimensionally using 15 soft-tissue landmarks and 3 reference planes. Results: Anterior movements of the soft tissue were observed at the pronasale, subnasale, nasal ala, soft-tissue zygoma, and upper-lip area. Posterior movements were observed at the lower lip, soft-tissue B-point, and soft-tissue gnathion. Vertically, most soft-tissue landmarks moved downward at T2. In the transverse direction, bilateral landmarks, i.e., the exocanthion, zygomatic point, nasal ala, and cheilion, moved more laterally at T2. Conclusion: The facial soft tissue of Class III malocclusion patients changed three-dimensionally after maxillary protraction therapy. In particular, the facial profile was improved by forward movement of the midface and downward and backward movement of the lower face.
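
The measurement described above, per-landmark displacement between registered T1 and T2 scans decomposed along reference planes, can be sketched numerically as follows; the landmark coordinates and axis directions are hypothetical, and the scans are assumed to be already superimposed on the forehead region as in the paper.

```python
# Hedged sketch: per-landmark displacement between superimposed pretreatment (T1)
# and posttreatment (T2) scans, decomposed along assumed anatomical axes.
# Coordinates and axis directions are illustrative; real scans would first be
# registered on the forehead region, as described above.
import numpy as np

# Hypothetical 3D coordinates (mm) of a few soft-tissue landmarks at T1 and T2.
landmarks = ["pronasale", "subnasale", "upper lip", "lower lip"]
t1 = np.array([[0.0, 52.0, 95.0], [0.0, 45.0, 88.0], [0.0, 38.0, 90.0], [0.0, 25.0, 89.0]])
t2 = np.array([[0.1, 51.0, 97.5], [0.2, 43.8, 90.0], [0.1, 37.0, 91.8], [0.0, 23.5, 87.9]])

# Assumed unit axes of the reference planes: x = transverse, y = vertical, z = anteroposterior.
axes = np.eye(3)

diff = t2 - t1                              # displacement vectors (T2 - T1)
components = diff @ axes.T                  # project onto the reference axes
for name, (dx, dy, dz) in zip(landmarks, components):
    print(f"{name:10s}  transverse {dx:+.1f} mm  vertical {dy:+.1f} mm  antero-posterior {dz:+.1f} mm")
```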
