• Title/Summary/Keyword: 얼굴 영상 (facial image)

Search Results: 1,527, Processing Time: 0.03 seconds

A Childhood in the Countryside (고향이야기)

  • Jo, Jae-Yeong
    • Cartoon and Animation Studies
    • /
    • s.5
    • /
    • pp.500-503
    • /
    • 2001
  • This work focuses on quietly depicting, against the backdrop of a small rural village, the childhood stories that all of us felt and experienced while growing up but have forgotten, buried in our weary daily lives. To an elementary school child who had moved from the city, the village had no properly paved roads and no two-story houses, yet it was a place full of wonder, with no shortage of room to run and play with friends. These feelings and episodes are captured across many scenes set in the four seasons, from spring to winter. Within this intent, the theme of each picture is harmonized with its characters: each picture carries its own independent story, while together they form a single connected narrative on the theme of one's hometown. As a study of technique in pursuit of a distinctive expressive world, Korean-painting techniques were grafted onto cartooning to explore comics that combine Eastern sentiment with painterly expression, attempting a work in which popular appeal and artistry complement each other. In the process, the author sought to render the vivid emotions, movements, facial expressions, and gestures of typical Korean rural children with cartooning techniques, while capturing the character of Korea's natural landscape, the backdrop of the pictures, with Oriental-painting techniques. To aid the reader's understanding, a matching narration was inserted into each scene; the work was produced as flat artwork and also placed on CD-ROM for on-screen presentation. Through this production, the author sought to express the stories of the hometown that shaped and raised him, so that the work might offer comfort and rest to modern city dwellers, himself included, worn down by urban life.

  • PDF

A study on Extraction and Analysis of the Lip in the Shape According to Personality of Big 5 Model (입술형태 추출 및 분석에 따른 5대 성격 연구)

  • Youn, Yong-Heum;Lim, Soon-Yong;Song, Han-Sol;Lim, Sung-Su;Min, Ji-Sun;Kim, Bong-Hyun;Ka, Min-Kyoung;Cho, Dong-Uk;Bae, Young-Lae J.
    • Proceedings of the KAIS Fall Conference
    • /
    • 2011.05b
    • /
    • pp.888-891
    • /
    • 2011
  • Conversation is essential to maintaining smooth human relationships, and during conversation we generally watch the other person's eyes or mouth. People rely on intuitively reading what the other person is thinking from their gaze, mouth shape, and so on. In practice, however, gestures are rare, and even when watching the eyes we usually fail to grasp the other person's intent. This paper therefore conducts an experiment that extracts the lip shape and analyzes its correlation with the Big Five personality traits. Using side-view facial images as input, subjects were grouped according to lip shape, and a standard Big Five questionnaire was used to examine the relationship between the neutral-expression lip shape and personality.

  • PDF

Polygonal Model Simplification Method for Game Character (게임 캐릭터를 위한 폴리곤 모델 단순화 방법)

  • Lee, Chang-Hoon;Cho, Seong-Eon;Kim, Tai-Hoon
    • Journal of Advanced Navigation Technology
    • /
    • v.13 no.1
    • /
    • pp.142-150
    • /
    • 2009
  • It is very important to generate a simplified model from a complex 3D character in computer games. We propose a new method of extracting feature lines from a 3D game character. Given an unstructured 3D character model containing texture information, we use a model feature map (MFM), a 2D map that abstracts the variation of texture and curvature in the 3D character model. The MFM is created from both a texture map and a curvature map, which are produced separately by edge detection to locate line features. The MFM can be edited interactively using standard image-processing tools. We demonstrate the technique on several data sets, including, but not limited to, facial characters.

  • PDF
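The MFM described above is built by combining separately edge-detected texture and curvature maps. A minimal sketch of that combination step, assuming both maps are already normalized to [0, 1]; the function name and threshold are illustrative, not taken from the paper:

```python
def model_feature_map(texture_edges, curvature_edges, threshold=0.5):
    """Combine a texture edge map and a curvature edge map (both
    2D lists of values in [0, 1]) into a binary model feature map
    by taking the per-pixel maximum and thresholding it."""
    h, w = len(texture_edges), len(texture_edges[0])
    return [[1 if max(texture_edges[y][x], curvature_edges[y][x]) >= threshold else 0
             for x in range(w)]
            for y in range(h)]
```

Taking the per-pixel maximum keeps a line feature that is strong in either map, which matches the idea of locating feature lines from both texture and curvature cues.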

Emotion Recognition and Expression System of Robot Based on 2D Facial Image (2D 얼굴 영상을 이용한 로봇의 감정인식 및 표현시스템)

  • Lee, Dong-Hoon;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.13 no.4
    • /
    • pp.371-376
    • /
    • 2007
  • This paper presents an emotion recognition and expression system for an intelligent robot such as a home or service robot. The robot recognizes emotion from a facial image, using the motion and position of multiple facial features. A tracking algorithm recognizes a moving user from the mobile robot, and a facial-region detection algorithm removes the skin color of hands and the background outside the facial region from the captured user image. After normalization, which scales the image according to the distance to the detected facial region and rotates it according to the angle of the face, the mobile robot obtains a facial image of fixed size. A multi-feature selection algorithm is implemented to enable the robot to recognize the user's emotion. A multilayer perceptron, an artificial neural network (ANN), is used as the pattern recognizer, trained with the back-propagation (BP) algorithm. The emotion recognized by the robot is expressed on a graphic LCD: two coordinates are updated according to the emotion output of the ANN, and the parameters of the facial elements (eyes, eyebrows, mouth) change with those coordinates. Through this system, complex human emotions are expressed by the avatar on the LCD.

ROI-based Encoding using Face Detection and Tracking for mobile video telephony (얼굴 인식과 추적을 이용한 ROI 기반 영상 통화 코덱 설계 및 구현)

  • Lee, You-Sun;Kim, Chang-Hee;Na, Tae-Young;Lim, Jeong-Yeon;Joo, Young-Ho;Kim, Ki-Mun;Byun, Jae-Woan;Kim, Mun-Churl
    • Proceedings of the IEEK Conference
    • /
    • 2008.06a
    • /
    • pp.77-78
    • /
    • 2008
  • With the advent of 3G mobile communication services, video telephony has become one of the major services. However, due to the narrow channel bandwidth, current video telephony services have not yet reached a satisfactory level of quality. In this paper, we propose an ROI (Region-Of-Interest) based improvement of visual quality for video telephony services with the H.264|MPEG-4 Part 10 (AVC: Advanced Video Coding) codec. To this end, we propose a face detection and tracking method to define the ROI for AVC-based video telephony. Experimental results show that the proposed ROI-based method improves visual quality from both objective and subjective perspectives.

  • PDF
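ROI-based encoding of this kind typically spends more bits on the detected face by lowering the quantization parameter (QP) there. The paper does not give its rate-control details, so the following is only a hypothetical sketch of a per-macroblock QP map driven by a detected face rectangle (all names and QP values are illustrative assumptions):

```python
def roi_qp_map(width_mb, height_mb, face_rect_mb, base_qp=32, roi_qp=26):
    """Build a per-macroblock QP map for an ROI-based encoder.

    face_rect_mb = (x0, y0, x1, y1) in macroblock units, half-open.
    Macroblocks inside the face rectangle get a lower QP (finer
    quantization, better quality); the rest keep the base QP.
    """
    x0, y0, x1, y1 = face_rect_mb
    return [[roi_qp if x0 <= x < x1 and y0 <= y < y1 else base_qp
             for x in range(width_mb)]
            for y in range(height_mb)]
```

In a real AVC encoder this map would be fed to the rate controller as per-macroblock QP offsets, with the face rectangle updated every frame by the detector/tracker.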

Analysis and Synthesis of Facial Images for Age Change (나이변화를 위한 얼굴영상의 분석과 합성)

  • 박철하;최창석;최갑석
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.9
    • /
    • pp.101-111
    • /
    • 1994
  • The human face provides a great deal of information about race, age, sex, personality, feeling, psychology, mental state, health condition, and so on. Looking closely at the aging process, we find recognizable phenomena such as eyelid drooping, cheek drooping, forehead furrowing, hair loss, and graying hair. This paper proposes a method to estimate age by analyzing these feature components of a facial image, and also introduces a method of synthesizing facial images according to the change of age. The feature components associated with aging are obtained by separating the facial image into the 3-dimensional shape of the face and its texture, then applying principal component analysis to each using a 3-dimensional model. The age of a facial image is estimated by comparing its extracted feature components, and the age-changed image is synthesized by adding or subtracting the feature components to or from the facial image. Simulation results show that high-quality age-changed facial images are obtained.

  • PDF

Multimodal Emotion Recognition using Face Image and Speech (얼굴영상과 음성을 이용한 멀티모달 감정인식)

  • Lee, Hyeon Gu;Kim, Dong Ju
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.8 no.1
    • /
    • pp.29-40
    • /
    • 2012
  • A challenging research issue of growing importance in human-computer interaction is endowing a machine with emotional intelligence. Emotion recognition technology thus plays an important role in human-computer interaction research, allowing more natural, human-like communication between human and computer. In this paper, we propose a multimodal emotion recognition system using face and speech to improve recognition performance. In face-based emotion recognition, the distance measurement is calculated by 2D-PCA of the MCS-LBP image and a nearest-neighbor classifier; in speech-based emotion recognition, the likelihood measurement is obtained by a Gaussian mixture model based on pitch and mel-frequency cepstral coefficient features. The individual matching scores obtained from face and speech are combined using a weighted-summation operation, and the fused score is used to classify the human emotion. Experimental results show that the proposed method improves recognition accuracy by about 11.25% to 19.75% compared with the uni-modal approaches, confirming a significant performance improvement.
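The weighted-summation score fusion described in this abstract can be sketched as follows. The min-max normalization step and the weight value are illustrative assumptions, not details taken from the paper:

```python
def min_max_normalize(scores):
    """Scale a list of raw matching scores into [0, 1] so that
    scores from different modalities are comparable."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(face_scores, speech_scores, w_face=0.6):
    """Weighted summation of per-class matching scores from the
    face and speech modalities (w_face is a hypothetical weight)."""
    f = min_max_normalize(face_scores)
    s = min_max_normalize(speech_scores)
    return [w_face * a + (1.0 - w_face) * b for a, b in zip(f, s)]

def classify(face_scores, speech_scores, labels, w_face=0.6):
    """Pick the emotion label with the highest fused score."""
    fused = fuse_scores(face_scores, speech_scores, w_face)
    return labels[max(range(len(fused)), key=fused.__getitem__)]
```

In practice the weight would be tuned on a validation set, and distance-based face scores would first be converted to similarities so that "higher is better" holds for both modalities.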

Study of Emotion Recognition based on Facial Image for Emotional Rehabilitation Biofeedback (정서재활 바이오피드백을 위한 얼굴 영상 기반 정서인식 연구)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.10
    • /
    • pp.957-962
    • /
    • 2010
  • To recognize human emotion from a facial image, we first need to extract emotional features from the image using a feature extraction algorithm, and then classify the emotional status using a pattern classification method. The AAM (Active Appearance Model) is a well-known method for representing a non-rigid object such as a face or a facial expression. The Bayesian network is a probability-based classifier that can represent the probabilistic relationships among a set of facial features. In this paper, our approach to facial feature extraction combines AAM with FACS (Facial Action Coding System) to automatically model and extract the facial emotional features. To recognize the facial emotion, we use DBNs (Dynamic Bayesian Networks) for modeling and understanding the temporal phases of facial expressions in image sequences. The result of emotion recognition can be used for biofeedback-based rehabilitation of the emotionally disabled.

Extraction of Tongue Region for Heart Disease Diagnosis (심장질환진단을 위한 혀 영역 추출)

  • Cho, Dong-Uk;Kim, Bong-Hyun;Lee, Se-Hwan
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2005.11a
    • /
    • pp.655-658
    • /
    • 2005
  • Human efforts regarding disease have focused less on treatment than on quickly diagnosing abnormalities of the body and on prevention. To this end, a wide range of diagnostic tools has been developed, from blood and urine tests to ultrasound and CT. The human body, however, naturally manifests biosignals on its exterior that indicate the presence of disease, and interpreting these signals to diagnose illness is the approach chiefly used in Oriental (Korean) medicine. This paper therefore addresses visual inspection (망진), the most important of the four diagnostic methods for interpreting biosignals in Korean medicine, and aims to implement it with IT technology for the heart, the body's central organ. Among the sensory organs, the heart is associated with the tongue, so the tongue is a key organ to analyze when interpreting biosignals related to the heart. We aim to build a tongue-diagnosis system that reflects these biosignals to provide information on heart disease; as a first step, this paper proposes a method for extracting the tongue region from a facial image and demonstrates its usefulness experimentally.

  • PDF

Face Recognition using 2D-PCA and Image Partition (2D - PCA와 영상분할을 이용한 얼굴인식)

  • Lee, Hyeon Gu;Kim, Dong Ju
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.8 no.2
    • /
    • pp.31-40
    • /
    • 2012
  • Face recognition refers to the process of identifying individuals based on their facial features. It has recently become one of the most popular research areas in computer vision, machine learning, and pattern recognition because it spans numerous consumer applications, such as access control, surveillance, security, credit-card verification, and criminal identification. However, illumination variation on the face generally causes performance degradation of face recognition systems in practical environments. This paper therefore proposes a novel face recognition system using a fusion approach based on the local binary pattern and two-dimensional principal component analysis. To minimize illumination effects, the face image undergoes the local binary pattern operation, and the resulting image is divided into two sub-images. Two-dimensional principal component analysis is then applied separately to each sub-image. The individual scores obtained from the two sub-images are integrated using a weighted-summation rule, and the fused score is used to classify the unknown user. Performance evaluation on the Yale B and CMU-PIE databases shows that the proposed method yields better recognition results than existing face recognition techniques.
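The 2D-PCA step used here (and in the multimodal paper above) projects each image matrix onto the top eigenvectors of the image scatter matrix, then matches with a nearest-neighbor classifier. A minimal sketch of that core step, without the LBP preprocessing and sub-image fusion the paper adds:

```python
import numpy as np

def two_d_pca(images, n_components=2):
    """2D-PCA: compute the image scatter matrix
    G = mean over i of (A_i - mean)^T (A_i - mean)
    and project each image matrix onto its top eigenvectors."""
    mean = np.mean(images, axis=0)
    G = sum((a - mean).T @ (a - mean) for a in images) / len(images)
    _, vecs = np.linalg.eigh(G)             # eigenvalues ascending
    proj = vecs[:, ::-1][:, :n_components]  # top-d eigenvectors
    return proj, [a @ proj for a in images]

def nearest_neighbor(query, proj, features, labels):
    """Classify a query image by Frobenius distance in feature space."""
    q = query @ proj
    dists = [np.linalg.norm(q - f) for f in features]
    return labels[int(np.argmin(dists))]
```

Unlike classical PCA, the images are never flattened into vectors: the scatter matrix is only n-by-n for n-pixel-wide images, which keeps the eigendecomposition small.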