• Title/Summary/Keyword: Facial image


3-D Facial Animation on the PDA via Automatic Facial Expression Recognition (얼굴 표정의 자동 인식을 통한 PDA 상에서의 3차원 얼굴 애니메이션)

  • Lee Don-Soo;Choi Soo-Mi;Kim Hae-Hwang;Kim Yong-Guk
    • The KIPS Transactions:PartB
    • /
    • v.12B no.7 s.103
    • /
    • pp.795-802
    • /
    • 2005
  • In this paper, we present a facial expression recognition-synthesis system that automatically recognizes seven basic emotions and renders the face in a non-photorealistic style on a PDA. For the recognition of facial expressions, we first detect the face area within the image acquired from the camera. A normalization procedure is then applied for geometrical and illumination corrections. To classify a facial expression, we found that the best results are obtained when Gabor wavelets are combined with the enhanced Fisher model. In our case, the output is a set of seven emotional weightings. This weighting information, transmitted to the PDA via a mobile network, is used for non-photorealistic facial expression animation. To render a 3-D avatar with a unique facial character, we adopted a cartoon-like shading method. We found that facial expression animation using emotional curves expresses the timing of an expression more effectively than the linear interpolation method.
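The Gabor filter-bank feature extraction step mentioned in the abstract can be sketched in a few lines of numpy. This is a minimal illustration only: the kernel size, wavelengths, orientation count, and mean-response pooling below are generic choices, not the parameters used in the paper (which pairs the Gabor responses with an enhanced Fisher model classifier):

```python
import numpy as np

def gabor_kernel(size, theta, wavelength, sigma=4.0, gamma=0.5):
    """Real part of a 2-D Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xr / wavelength)

def gabor_features(image, n_orientations=8, wavelengths=(4, 8, 16)):
    """Pool mean absolute filter responses over a bank of Gabor kernels."""
    feats = []
    for wl in wavelengths:
        for k in range(n_orientations):
            kern = gabor_kernel(15, k * np.pi / n_orientations, wl)
            # circular 2-D filtering via FFT, for brevity
            resp = np.real(np.fft.ifft2(np.fft.fft2(image) *
                                        np.fft.fft2(kern, image.shape)))
            feats.append(np.abs(resp).mean())
    return np.array(feats)

face = np.random.default_rng(0).random((64, 64))   # stand-in face patch
vec = gabor_features(face)
print(vec.shape)   # one feature per (wavelength, orientation) pair
```

In a real pipeline the pooled responses (or the full response maps) would be fed to the discriminative classifier rather than used directly.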

A study on the 3-D standard value of mandible for the diagnosis of facial asymmetry (안면비대칭 진단을 위한 하악골 3차원영상 계측기준치에 관한 연구)

  • Ahn, Jeong-Soon;Lee, Ki-Heon;Hwang, Hyeon-Shik
    • The Korean Journal of Orthodontics
    • /
    • v.35 no.2 s.109
    • /
    • pp.91-105
    • /
    • 2005
  • For an accurate diagnosis and treatment planning of facial asymmetry, the use of 3-dimensional (3-D) images is indispensable. The purpose of this study was to obtain standard data for the 3-D analysis of facial asymmetry. Computerized tomography (CT) was taken of 60 normal-occlusion individuals (30 male, 30 female) who did not have any apparent facial asymmetry. The acquired 2D CT DICOM data were input on a computer, and reformatted 3-D images were created using 3-D image software. Twenty-three measurements were established in order to evaluate asymmetry: 15 linear measurements (6 for ramus length, 1 for condylar neck length, and 8 for mandibular body length) and 8 angular measurements (4 for gonial angle, 2 for frontal ramal inclination, and 2 for lateral ramal inclination). The right and left difference of each measurement was calculated and analyzed. It is suggested that the right and left differences of the measurements obtained from the study could be used as references for the diagnosis of patients with facial asymmetry.

Lossless Deformation of Brain Images for Concealing Identification (신원 은닉을 위한 두뇌 영상의 무손실 변경)

  • Lee, Hyo-Jong;Yu, Du Ruo
    • The KIPS Transactions:PartB
    • /
    • v.18B no.6
    • /
    • pp.385-388
    • /
    • 2011
  • Patients' privacy protection is a heated issue in the medical business, as medical information in digital format is transmitted everywhere through networks without any limitation. A current protection method for brain images is to deface the brain image for the patient's privacy. However, the defacing process often removes important brain voxels, so the defaced brain image is damaged for medical analysis. An ad-hoc method is proposed to conceal the patient's identification by adding a cylindrical mask, while keeping all important brain voxels. The proposed lossless deformation of the brain image is verified not to lose any important voxels. Furthermore, the masked brain image is shown to be unrecognizable by others.
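One plausible reading of the cylindrical-mask idea can be sketched with numpy: voxels outside a cylinder that encloses the brain are replaced with a constant value, so facial surface features disappear while every voxel inside the cylinder survives bit-for-bit. The geometry and fill value below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def cylindrical_mask(volume, center, radius, fill=0.0):
    """Replace voxels outside a cylinder aligned with the slice axis
    by a constant fill, hiding the facial surface; voxels inside the
    cylinder (the brain) are returned unchanged -- lossless there."""
    _, ny, nx = volume.shape
    y, x = np.mgrid[0:ny, 0:nx]
    inside = (y - center[0]) ** 2 + (x - center[1]) ** 2 <= radius ** 2
    return np.where(inside[None, :, :], volume, fill)

vol = np.ones((10, 32, 32))                       # toy head volume
masked = cylindrical_mask(vol, center=(16, 16), radius=12)
print(masked[5, 16, 16], masked[5, 0, 0])         # inside kept, corner masked
```

Verifying losslessness then amounts to checking that the masked volume equals the original everywhere inside the cylinder.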

Measurement and Analysis of Arousal While Experiencing Light-Field Display Device

  • Choi, Hyun-Jun;Kim, Noo-Ree;Park, Hyun-Rin
    • Journal of information and communication convergence engineering
    • /
    • v.18 no.3
    • /
    • pp.188-193
    • /
    • 2020
  • In this paper, we examine whether the 3D image experience through a light-field display device shows a difference in the user's arousal compared with the 2D image experience. For our experiment, the Looking Glass™ (LG) was used as a light-field display device that provided 3D images, and 2D images were provided as digital and printed images. The subject's facial behavior during each media experience was recorded for analysis, and the degree of arousal was measured by FaceReader™. As a result, for the first image presented among the three kinds of images, there was a statistically significant difference in the degree of arousal between the three media. However, no significant differences were found between the three media for the other images. This may be because arousal did not increase from the second image onward when experienced through the LG, owing to habituation. In conclusion, the arousal effect of the 3D imaging experience may appear at the beginning but does not persist.

Face inpainting via Learnable Structure Knowledge of Fusion Network

  • Yang, You;Liu, Sixun;Xing, Bin;Li, Kesen
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.16 no.3
    • /
    • pp.877-893
    • /
    • 2022
  • With the development of deep learning, face inpainting has been significantly enhanced in the past few years. Although image inpainting frameworks integrated with generative adversarial networks or attention mechanisms have enhanced the semantic understanding among facial components, issues in reconstructing corrupted regions are still worth exploring, such as blurred edge structure, excessive smoothness, unreasonable semantic understanding, and visual artifacts. To address these issues, we propose a Learnable Structure Knowledge of Fusion Network (LSK-FNet), which learns prior knowledge through an edge generation network for image inpainting. The architecture involves two steps: first, structure information obtained by the edge generation network is used as the prior knowledge for the face inpainting network; second, both the generated prior knowledge and the incomplete image are fed into the face inpainting network together to obtain the fusion information. To improve the accuracy of inpainting, both gated convolution and region normalization are applied in our proposed model. We evaluate LSK-FNet qualitatively and quantitatively on the CelebA-HQ dataset. The experimental results demonstrate that the edge structure and details of facial images can be improved by using LSK-FNet. Our model surpasses the compared models on the L1, PSNR, and SSIM metrics. When the masked region is less than 20%, the L1 loss is reduced by more than 4.3%.
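The gated convolution mentioned in the abstract pairs each feature response with a learned sigmoid gate, letting the layer down-weight masked (hole) pixels. A single-channel numpy sketch, with toy weights rather than learned ones, illustrates the mechanism:

```python
import numpy as np

def conv2d(x, w, b):
    """Naive 'same'-padded single-channel 2-D convolution."""
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * w) + b
    return out

def gated_conv(x, w_feat, b_feat, w_gate, b_gate):
    """Gated convolution: feature branch scaled elementwise by a
    sigmoid gate branch computed from the same input."""
    feature = np.tanh(conv2d(x, w_feat, b_feat))
    gate = 1.0 / (1.0 + np.exp(-conv2d(x, w_gate, b_gate)))
    return feature * gate

rng = np.random.default_rng(0)
img = rng.random((16, 16))                       # toy masked image patch
out = gated_conv(img,
                 rng.standard_normal((3, 3)) * 0.1, 0.0,
                 rng.standard_normal((3, 3)) * 0.1, 0.0)
print(out.shape)
```

In the real model both branches are multi-channel learned convolutions trained end to end; the sketch only shows the feature-times-gate structure.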

Improving the Processing Speed and Robustness of Face Detection for a Psychological Robot Application (심리로봇적용을 위한 얼굴 영역 처리 속도 향상 및 강인한 얼굴 검출 방법)

  • Ryu, Jeong Tak;Yang, Jeen Mo;Choi, Young Sook;Park, Se Hyun
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.20 no.2
    • /
    • pp.57-63
    • /
    • 2015
  • Compared to other emotion recognition technologies, facial expression recognition has the merits of being non-contact, non-compulsory, and convenient. In order to be applied to a psychological robot, the vision system must be able to quickly and accurately extract the face region as a step preceding facial expression recognition. In this paper, we remove the background from an input image using YCbCr skin color technology, and use Haar-like features for robust face detection. By removing the background from the input image, we achieved improved processing speed and robust face detection.
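The YCbCr skin-color step described above amounts to a fixed-threshold classifier in the Cb/Cr plane, since chrominance is relatively stable across skin tones. The threshold ranges below are common literature values, not necessarily the ones this paper used:

```python
import numpy as np

def skin_mask_ycbcr(rgb):
    """Classify pixels as skin using fixed Cb/Cr thresholds
    (77 <= Cb <= 127, 133 <= Cr <= 173, a common choice)."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    # BT.601 RGB -> Cb/Cr conversion
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (200, 140, 120)        # skin-like tone
mask = skin_mask_ycbcr(img)
print(mask[0, 0], mask[1, 1])      # skin pixel kept, black pixel rejected
```

Pixels outside the mask are discarded as background, and the Haar-like cascade then runs only on the remaining skin regions, which is where the speed-up comes from.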

Real-Time Automatic Human Face Detection and Recognition System Using Skin Colors of Face, Face Feature Vectors and Facial Angle Informations (얼굴피부색, 얼굴특징벡터 및 안면각 정보를 이용한 실시간 자동얼굴검출 및 인식시스템)

  • Kim, Yeong-Il;Lee, Eung-Ju
    • The KIPS Transactions:PartB
    • /
    • v.9B no.4
    • /
    • pp.491-500
    • /
    • 2002
  • In this paper, we propose a real-time face detection and recognition system that uses skin color information, geometrical feature vectors of the face, and facial angle information from a color face image. The proposed algorithm improves face region extraction efficiency by using skin color information in the HSI color coordinate system together with face edge information. It also improves face recognition efficiency by using geometrical feature vectors of the face and facial angles from the extracted face region image. In the experiments, the proposed algorithm shows better face region extraction efficiency as well as recognition efficiency than conventional methods.

A Study on the Facial Image and Recognition of Cosmetics Brand Personality of University Women (여대생들의 얼굴 이미지와 화장품 브랜드 개성 인지도)

  • Kim, Hyun-Hee;Kim, Yong-Sook
    • The Research Journal of the Costume Culture
    • /
    • v.17 no.4
    • /
    • pp.640-652
    • /
    • 2009
  • The purposes of this study were to provide customer information for cosmetics companies to develop goods and promotion strategies by examining the facial images of university women and their level of recognition of cosmetics brand personality. The results were as follows. First, the satisfaction of university women with their lips and eyes was very high, while satisfaction with their skin was lowest. Second, the brand personality factors of three foreign cosmetics brands and three domestic brands were sincerity, beauty, renovation, reliability, and ruggedness. In beauty, reliability, and ruggedness, respondents preferred foreign brands to domestic ones, while they preferred domestic ones in sincerity and renovation. Third, the level of satisfaction with the face had a statistically significant relationship to the importance of the face and of cosmetics brands, while the importance of the face was related to the beauty factor of the brand. Among the facial images and the factors of brand personality, most pairs showed significant interrelationships, except that beauty and ruggedness, and reliability and ruggedness, had no significant interrelationship.


A Design of Small Scale Deep CNN Model for Facial Expression Recognition using the Low Resolution Image Datasets (저해상도 영상 자료를 사용하는 얼굴 표정 인식을 위한 소규모 심층 합성곱 신경망 모델 설계)

  • Salimov, Sirojiddin;Yoo, Jae Hung
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.16 no.1
    • /
    • pp.75-80
    • /
    • 2021
  • Artificial intelligence is becoming an important part of our lives, providing incredible benefits. In this respect, facial expression recognition has been one of the hot topics among computer vision researchers in recent decades. Classifying a small dataset of low-resolution images requires the development of a new small-scale deep CNN model. To do this, we propose a method suitable for small datasets. Compared to traditional deep CNN models, this model uses only a fraction of the memory in terms of total learnable weights, but shows very similar results on the FER2013 and FERPlus datasets.
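The "fraction of the memory in terms of total learnable weights" claim can be made concrete with a quick parameter count. The architecture below is a hypothetical small-scale model for 48×48 grayscale FER2013 inputs (three 3×3 conv blocks, each followed by a spatial halving, then a seven-class dense head); it is not the paper's actual network:

```python
def conv_params(c_in, c_out, k):
    """Learnable weights in a k x k conv layer, including biases."""
    return c_in * c_out * k * k + c_out

def dense_params(n_in, n_out):
    """Learnable weights in a fully connected layer, including biases."""
    return n_in * n_out + n_out

# hypothetical small model: 48 -> 24 -> 12 -> 6 spatial resolution
small = (conv_params(1, 32, 3) +      # 48x48 gray input
         conv_params(32, 64, 3) +
         conv_params(64, 128, 3) +
         dense_params(128 * 6 * 6, 7))  # seven expression classes
print(small)   # total learnable weights, roughly 1e5
```

Even this sketch sits around 10^5 weights, orders of magnitude below VGG-scale models (~10^8), which is the kind of gap the abstract refers to.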

A Software Error Examination of 3D Automatic Face Recognition Apparatus(3D-AFRA) : Measurement of Facial Figure Data (3차원 안면자동인식기(3D-AFRA)의 Software 정밀도 검사 : 형상측정프로그램 오차분석)

  • Seok, Jae-Hwa;Song, Jung-Hoon;Kim, Hyun-Jin;Yoo, Jung-Hee;Kwak, Chang-Kyu;Lee, Jun-Hee;Kho, Byung-Hee;Kim, Jong-Won;Lee, Eui-Ju
    • Journal of Sasang Constitutional Medicine
    • /
    • v.19 no.3
    • /
    • pp.51-61
    • /
    • 2007
  • 1. Objectives: The face is an important standard for the classification of Sasang constitutions. We are developing the 3D Automatic Face Recognition Apparatus (3D-AFRA) to analyze facial characteristics. This apparatus shows a 3D image and data of a person's face and measures facial figure data, so we should examine the measurement error of the facial figure data produced by 3D-AFRA as part of a software error analysis. 2. Methods: We scanned faces using 3D-AFRA and measured the lengths between facial definition parameters of the facial figure data with the Facial Measurement program. 2.1 Repeatability test: We measured the lengths between facial definition parameters of the facial figure data restored by 3D-AFRA with the Facial Measurement program 10 times, then compared the 10 results with each other. 2.2 Measurement error test: We measured the lengths between facial definition parameters with two different measurement programs, the Facial Measurement program and Rapidform2006, using two measurement methods: straight-line measurement and curved-line measurement. We then compared the results measured by the Facial Measurement program with those measured by Rapidform2006. 3. Results and Conclusions: In the repeatability test, the standard deviation of the results was 0.084-0.450 mm. In the straight-line measurement error test, the average error was 0.0582 mm and the maximum error was 0.28 mm. In the curved-line measurement error test, the average error was 0.413 mm and the maximum error was 1.53 mm. In conclusion, we assessed the accuracy and repeatability of the Facial Measurement program as considerably good. From now on, we will complement the accuracy of 3D-AFRA in both hardware and software.
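The distinction between straight-line and curved-line measurement can be sketched as follows: a straight-line distance is the Euclidean distance between two landmarks, while a curved-line distance follows the facial surface and is commonly approximated as the summed length of a sampled polyline. The landmark coordinates below are purely illustrative:

```python
import numpy as np

def straight_distance(p, q):
    """Euclidean distance between two 3-D landmarks."""
    return float(np.linalg.norm(np.asarray(q, float) - np.asarray(p, float)))

def curved_distance(points):
    """Approximate surface distance as the sum of segment lengths
    along a polyline sampled between two landmarks."""
    pts = np.asarray(points, dtype=float)
    return float(np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1)))

# two landmarks with one intermediate surface sample (toy coordinates)
a, m, b = (0.0, 0.0, 0.0), (1.0, 1.0, 0.0), (2.0, 0.0, 0.0)
d_straight = straight_distance(a, b)
d_curved = curved_distance([a, m, b])
print(d_straight, d_curved)    # the curved path is never shorter
```

The paper's larger error figures for curved-line measurement are consistent with this construction: the curved value also depends on how the intermediate surface points are sampled, not just on the two endpoints.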
