• Title/Summary/Keyword: Facial image

Search Result 828

Analysis of Advertisement Types of Global Fashion Brands : A study focused on the trends of photo image components and styles of expression in global fashion advertisements. (글로벌 패션브랜드 광고의 유형 분석 - 패션광고 사진이미지 구성요소와 표현형식을 중심으로 -)

  • Chang, Gyeong-Hae
    • Journal of the Korea Fashion and Costume Design Association
    • /
    • v.19 no.4
    • /
    • pp.17-27
    • /
    • 2017
  • This study analyzes the trends of photo image components and forms of expression in global fashion advertising photos. First, photo image components are classified into seven categories: location (indoor/outdoor), the model's movement, pose, facial expression, gender, race, and number of models. The forms of expression are classified into six categories: direct, sensual, symbolic, storytelling, dramatic, and sexual expression. With these classifications, trends were studied over the three years from 2013 to 2015. The analysis indicates the following: among the photo image components, indoor locations, static poses, and conscious facial expressions each accounted for over 60% of the total in every season of the three years, while the number of models and the diversity of races increased slightly. Among the forms of expression, sensual expression showed the largest portion, accounting for over 50% of the total, followed by direct expression and storytelling expression. These findings show that the trends of photo image components and forms of expression in global fashion advertisements are changing; domestic companies will therefore need to develop photo image components and forms of expression in line with these changing trends.


Head Pose Estimation with Accumulated Histogram and Random Forest (누적 히스토그램과 랜덤 포레스트를 이용한 머리방향 추정)

  • Mun, Sung Hee;Lee, Chil woo
    • Smart Media Journal
    • /
    • v.5 no.1
    • /
    • pp.38-43
    • /
    • 2016
  • As smart environments spread through our living spaces, the need for approaches to Human-Computer Interaction (HCI) increases. One such approach is head pose estimation, which is closely related to gaze direction estimation, since the head and the eyes are linked by the body's structure. Head pose is a key factor in identifying a person's intention or target of interest, and hence an essential research topic in HCI. In this paper, we propose an approach for estimating head pose over several pre-defined directions using a random forest classifier. To extract the rotation information of an input image, we apply the Canny edge detector to the difference image between the input image and an averaged frontal facial image. From the resulting binary edge image, we build two accumulated histograms by counting the number of non-zero pixels along each of the two axes. These two accumulated histograms serve as the features of the facial image. We use the CAS-PEAL-R1 dataset for training and testing the random forest classifier, and obtained 80.6% accuracy.
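The accumulated-histogram feature described in this abstract can be sketched as follows (a minimal NumPy illustration; the function name and the toy edge image are ours, and the Canny edge detection and random forest stages are omitted):

```python
import numpy as np

def accumulated_histograms(edge_image):
    """Count non-zero (edge) pixels along each image axis, as the
    abstract describes, and concatenate the two profiles into one
    feature vector. edge_image is a 2-D binary array."""
    binary = (edge_image != 0)
    row_hist = binary.sum(axis=1)   # edge pixels per row (vertical profile)
    col_hist = binary.sum(axis=0)   # edge pixels per column (horizontal profile)
    return np.concatenate([row_hist, col_hist]).astype(float)

# Toy binary edge image: a diagonal edge pattern
edges = np.zeros((4, 4), dtype=np.uint8)
edges[0, 1] = edges[1, 2] = edges[2, 3] = 1
feat = accumulated_histograms(edges)
```

In the actual pipeline, such a feature vector would then be fed to a random forest classifier trained on faces rotated to the pre-defined directions.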

3D Facial Animation with Head Motion Estimation and Facial Expression Cloning (얼굴 모션 추정과 표정 복제에 의한 3차원 얼굴 애니메이션)

  • Kwon, Oh-Ryun;Chun, Jun-Chul
    • The KIPS Transactions:PartB
    • /
    • v.14B no.4
    • /
    • pp.311-320
    • /
    • 2007
  • This paper presents a vision-based 3D facial expression animation technique and system that provide robust 3D head pose estimation and real-time facial expression control. Much research on 3D face animation has addressed facial expression control itself rather than 3D head motion tracking; however, head motion tracking is one of the critical issues to be solved for developing realistic facial animation. In this research, we developed an integrated animation system that performs 3D head motion tracking and facial expression control at the same time. The proposed system consists of three major phases: face detection, 3D head motion tracking, and facial expression control. For face detection, a non-parametric HT skin color model combined with template matching detects the facial region efficiently from each video frame. For 3D head motion tracking, we exploit a cylindrical head model that is projected onto the initial head motion template. Given an initial reference template of the face image and the corresponding head motion, the cylindrical head model is created and the full head motion is traced using the optical flow method. For facial expression cloning we utilize a feature-based method: the major facial feature points are detected from the geometric information of the face with template matching and traced by optical flow. Since the locations of the varying feature points combine head motion and facial expression information, the animation parameters that describe the variation of the facial features are acquired from a geometrically transformed frontal head pose image. Finally, the facial expression cloning is done by a two-step fitting process: the control points of the 3D model are moved by applying the animation parameters to the face model, and the non-feature points around the control points are displaced using a Radial Basis Function (RBF). The experiments show that the developed vision-based animation system can create realistic facial animation with robust head pose estimation and facial variation from input video images.
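The RBF step mentioned in this abstract (propagating control-point displacements to surrounding non-feature points) can be sketched in 2-D as follows; this is our own minimal illustration with a Gaussian kernel and an arbitrary `sigma`, not the paper's implementation:

```python
import numpy as np

def rbf_deform(control_pts, control_disp, query_pts, sigma=1.0):
    """Gaussian-RBF interpolation of displacements: solve for weights
    so the deformation reproduces the control-point displacements
    exactly, then evaluate it at the query (non-feature) points."""
    d = np.linalg.norm(control_pts[:, None, :] - control_pts[None, :, :], axis=2)
    K = np.exp(-(d / sigma) ** 2)          # kernel between control points
    w = np.linalg.solve(K, control_disp)   # one weight row per control point
    dq = np.linalg.norm(query_pts[:, None, :] - control_pts[None, :, :], axis=2)
    return np.exp(-(dq / sigma) ** 2) @ w  # interpolated displacements

# Two control points pulled apart vertically; the midpoint should stay put
ctrl = np.array([[0.0, 0.0], [1.0, 0.0]])
disp = np.array([[0.0, 0.1], [0.0, -0.1]])
mid = rbf_deform(ctrl, disp, np.array([[0.5, 0.0]]))
```

By symmetry, the opposite displacements cancel at the midpoint, which is the smooth-blending behavior the second fitting step relies on.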

Constructing Impressions with Multimedia Ringtones and a Smartphone Usage Tracker

  • Lee, KangWoo;Choo, Hyunseung
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.9 no.5
    • /
    • pp.1870-1880
    • /
    • 2015
  • In this paper, we studied facial impression construction with smartphones in a series of experiments with two smartphone applications: SmartRing and SystemSens+. In the first experiment, impressions of faces associated with different music genres (trot vs. classical) were compared, along the social warmth and intelligence dimensions, to impressions formed from a facial image alone. In the second experiment, the effect of similarity attraction was investigated by manipulating the extroversion of facial images. Results indicated that impressions of faces can not only be constructed along the social warmth and intelligence dimensions, but can also be made more or less attractive based on their similarity to the viewer's personality. Our experiments provide interesting insights into facial impressions formed in a smartphone environment.

Detection of Face and Facial Features in Complex Background from Color Images (복잡한 배경의 칼라영상에서 Face and Facial Features 검출)

  • 김영구;노진우;고한석
    • Proceedings of the IEEK Conference
    • /
    • 2002.06d
    • /
    • pp.69-72
    • /
    • 2002
  • Human face detection has many applications, such as face recognition, face or facial feature tracking, pose estimation, and expression recognition. We present a new method for automatic segmentation and face detection in color images. Skin color alone is usually not sufficient to detect faces, so we combine color segmentation and shape analysis. The algorithm consists of two stages. First, skin color regions are segmented based on the chrominance component of the input image. Then, regions with elliptical shapes are selected as face hypotheses, which are verified by searching for facial features in their interior. Experimental results demonstrate successful detection over a wide variety of facial variations in scale, rotation, pose, and lighting conditions.
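The first stage of such a pipeline, chrominance-based skin segmentation, can be sketched like this. The Cb/Cr thresholds below are commonly cited ranges from the skin-detection literature, not necessarily the values used in this paper:

```python
import numpy as np

def skin_mask(rgb):
    """Segment skin-colored pixels using only chrominance, ignoring
    luminance, so the mask is less sensitive to brightness changes."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # RGB -> YCbCr chrominance components (ITU-R BT.601 coefficients)
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    # Widely used skin chrominance box (assumed, not the paper's exact values)
    return (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)

# A skin-like pixel next to a blue background pixel
img = np.array([[[200, 140, 120], [20, 40, 200]]], dtype=np.uint8)
mask = skin_mask(img)
```

The connected regions of such a mask would then be tested for elliptical shape and searched for facial features, as the abstract describes.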


An Intelligent Emotion Recognition Model Using Facial and Bodily Expressions

  • Jae Kyeong Kim;Won Kuk Park;Il Young Choi
    • Asia pacific journal of information systems
    • /
    • v.27 no.1
    • /
    • pp.38-53
    • /
    • 2017
  • As sensor technologies and image processing technologies make collecting information on users' behavior easy, many researchers have examined automatic emotion recognition based on facial expressions, body expressions, and tone of voice, among others. Specifically, many multimodal studies combining facial and body expressions have used normal cameras; such studies therefore worked with a limited amount of information, because normal cameras generally produce only two-dimensional images. In the present research, we propose an artificial neural network-based model using a high-definition webcam and Kinect to recognize users' emotions from facial and bodily expressions while they watch a movie trailer. We validate the proposed model in a naturally occurring field environment rather than in an artificially controlled laboratory environment. The result of this research will be helpful for the wide use of emotion recognition models in advertisements, exhibitions, and interactive shows.

A Facial Feature Detection using Light Compensation and Appearance-based Features (빛 보상과 외형 기반의 특징을 이용한 얼굴 특징 검출)

  • Kim Jin-Ok
    • Journal of Internet Computing and Services
    • /
    • v.7 no.3
    • /
    • pp.143-153
    • /
    • 2006
  • Facial feature detection is a basic technology in applications such as human-computer interfaces, face recognition, face tracking, and image database management. The speed of the feature detection algorithm is one of the main issues for facial feature detection in real-time environments, and primary factors such as variations in lighting, location, rotation, and complex backgrounds decrease the detection ratio. A facial feature detection algorithm is proposed to improve both the detection ratio and the detection speed. The proposed algorithm detects skin regions over the entire image after it has been improved by CLAHE, an algorithm that compensates for varying lighting conditions. To extract facial feature points from the detected skin regions, it uses appearance-based geometrical characteristics of the face. Since the method shows a fast detection speed as well as an efficient face-detection ratio, it can be applied in real-time applications such as face tracking and face recognition.
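The contrast-limited equalization idea behind CLAHE can be illustrated with a simplified, single-tile sketch. Real CLAHE (e.g. OpenCV's `createCLAHE`) works on local tiles with bilinear blending; this global version, with names and the clip fraction chosen by us, only shows the histogram-clipping step that limits noise amplification:

```python
import numpy as np

def clipped_equalize(gray, clip_fraction=0.03):
    """Clip the histogram so no bin dominates, redistribute the
    clipped excess evenly, then apply ordinary histogram
    equalization through a lookup table."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    limit = clip_fraction * gray.size
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit) + excess / 256.0  # redistribute excess
    cdf = np.cumsum(hist)
    lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
    return lut[gray]

dark = np.full((8, 8), 10, dtype=np.uint8)  # uniformly dark patch
out = clipped_equalize(dark)
```

Without clipping, a uniform dark patch would be pushed to full white; the clip limit keeps the brightening moderate, which is what makes the compensation usable before skin detection.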


Clinical Case Study of Facial Nerve Paralysis with Sensorineural Hearing Loss and Tinnitus Caused by Traumatic Temporal Bone Fracture (난청과 이명을 동반한 외상성 안면신경마비 치험 1례)

  • Jang, Yeo Jin;Yang, Tae Joon;Shin, Jeong Cheol;Kim, Hye Hwa;Kim, Tae Gwang;Jeong, Mi Young;Kim, Jae Hong
    • Journal of Acupuncture Research
    • /
    • v.33 no.1
    • /
    • pp.95-101
    • /
    • 2016
  • Objectives : The aim of this report was to investigate the effects of Korean medical treatment on facial nerve paralysis with sensorineural hearing loss and tinnitus caused by traumatic temporal bone fracture. Methods : We treated a patient with acupuncture, herbal medicine, and physiotherapy. The effect of these treatments was evaluated using the House-Brackmann facial grading scale, Yanagihara's unweighted grading system, and Digital Infrared Thermographic Imaging. Results : After 21 days of Korean medical treatment, the House-Brackmann facial grading scale changed from III to II and Yanagihara's unweighted grading score increased from 14 to 27. The Digital Infrared Thermographic Image also improved. Conclusions : These results suggest that Korean medical treatments were effective in treating facial nerve paralysis with sensorineural hearing loss and tinnitus caused by traumatic temporal bone fracture. We hope that more efficient applications of this treatment will result from clinical data accumulated in future studies.

People Counting System by Facial Age Group (얼굴 나이 그룹별 피플 카운팅 시스템)

  • Ko, Ginam;Lee, YongSub;Moon, Nammee
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.51 no.2
    • /
    • pp.69-75
    • /
    • 2014
  • Existing people counting systems using a single overhead-mounted camera have limitations in object recognition and counting in various environments. Those limitations are attributable to overlapping, occlusion, and external factors such as over-sized belongings and dramatic light changes. Thus, this paper proposes a new People Counting System by Facial Age Group that uses two depth cameras, at overhead and frontal viewpoints, to improve object recognition accuracy and make counting robust to external factors. The proposed system counts pedestrians through five processes: overhead image processing, frontal image processing, identical object recognition, facial age group classification, and in-coming/out-going counting. The system was developed with C++, OpenCV, and the Kinect SDK, and a target group of 40 people (10 per age group) was set up to evaluate people counting and facial age group classification performance. The experimental results indicated approximately 98% accuracy in people counting and 74.23% accuracy in facial age group classification.
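The final in-coming/out-going counting stage can be sketched as a virtual-line crossing test on a tracked person's position in the overhead view. All names here are illustrative; the paper's actual tracking and matching logic is not shown:

```python
def update_counts(prev_y, curr_y, line_y, counts):
    """Increment the matching counter when a tracked person's
    overhead-view y-coordinate crosses the virtual counting line
    between two frames; direction of crossing decides in vs. out."""
    if prev_y < line_y <= curr_y:
        counts["in"] += 1      # crossed the line moving downward
    elif curr_y < line_y <= prev_y:
        counts["out"] += 1     # crossed the line moving upward
    return counts

c = {"in": 0, "out": 0}
update_counts(2.0, 5.0, 3.0, c)  # moved downward across the line
update_counts(6.0, 1.0, 3.0, c)  # moved upward across the line
```

In the full system, each counted track would also carry the age-group label assigned by the frontal-view classifier, so the tallies can be reported per facial age group.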

NASAL DEVIATION IN PATIENTS WITH MANDIBULO-FACIAL ASYMMETRY (안모 비대칭환자의 두부정중선에 대한 비부의 편위)

  • Park, Ji-Hwa;Son, Seong-Il;Jang, Hyun-Jung;Kwon, Tae-Geon;Lee, Sang-Han
    • Maxillofacial Plastic and Reconstructive Surgery
    • /
    • v.27 no.2
    • /
    • pp.151-159
    • /
    • 2005
  • The purpose of this study was to evaluate nasal deviation in mandibular prognathism with mandibulo-facial asymmetry. Forty patients with mandibular prognathism, with or without facial asymmetry, were treated with orthognathic surgery from March 2002 to October 2003. Group A (n=20) had mandibulo-facial asymmetry with over 6 mm of menton deviation on the PA cephalogram, and Group B (n=20) had mandibular prognathism. The preoperative frontal photograph, PA cephalogram, and three-dimensional computed tomography (divided into hard tissue and soft tissue images) of the two groups were evaluated for the NDA (nasal deviation angle) and MDA (mandibular deviation angle). The NDA showed a statistically significant difference between asymmetry Group A and symmetry Group B (p<0.01), and the nose was deviated toward the affected side of the asymmetry. The MDA also showed a statistically significant difference between Group A and Group B (p<0.01); however, the MDA measurements from the frontal photograph, 3D-CT, and PA cephalogram were similar to each other. The low correlation of the NDA between the frontal photograph and the PA cephalogram in Groups A and B demonstrates that nasal deviation cannot be assessed on a PA cephalogram alone. It can be concluded that patients with mandibulo-facial asymmetry have nasal deviation, and clinicians must keep this fact in mind when assessing and treating such patients.