• Title/Summary/Keyword: Facial Behavior

Search Result 110

Study on Clothing Style Preference according to Cosmetic Surgery Parts and Clothing Behavior Group: Based on Cosmetic Surgery Experienced by Women in their 20s and 30s (미용성형부위 및 의복행동그룹에 따른 의복스타일선호에 관한 연구: 미용성형을 경험한 20~30대 여성을 중심으로)

  • Lee, Jungeun;Choi, Jeongwook
    • Journal of Fashion Business
    • /
    • v.18 no.1
    • /
    • pp.182-198
    • /
    • 2014
  • The main purpose of this study is to evaluate clothing behaviors according to cosmetic surgery site and to examine how clothing style preferences are expressed in each clothing behavior group. The study focuses on women in their 20s and 30s living in the Seoul and Gyeonggi area who have had cosmetic surgery. The respondents were divided into the following groups and surveyed with equal frequency and ratio: 'facial surgery', 'face contour surgery', 'breast surgery', and 'body figure revision'. Comparing clothing style preferences before and after cosmetic surgery, respondents afterwards preferred silhouettes that reveal body shape, a greater diversity of color tones, and more body exposure overall. Examining preferred clothing styles by surgery site suggests that body exposure is expressed more assertively after surgery because self-satisfaction increases with the change in body shape. Finally, examining cosmetic surgery and the clothing style preferences of each clothing behavior group, breast surgery and body figure revision were more common among the assertive, extroverted groups (the sex-appeal and mood-switching types), whereas facial surgery was more common among the passive groups (the information-collection, trend-alignment, and beauty-preference types). These results are also reflected in clothing style preferences: the largest change appears in the assertive, extroverted sex-appeal type.

A Case Report of Noonan Syndrome with Mental Retardation and Attention-Deficit Hyperactivity Disorder (정신지체와 주의력결핍 과잉행동장애를 보이는 Noonan 증후군 1예)

  • Kim, Won-Woo;Shim, Se-Hoon
    • Journal of the Korean Academy of Child and Adolescent Psychiatry
    • /
    • v.23 no.1
    • /
    • pp.31-35
    • /
    • 2012
  • Noonan syndrome is characterized by short stature, typical facial dysmorphology, and congenital heart defects. The main facial features of Noonan syndrome are hypertelorism with down-slanting palpebral fissures, ptosis, and low-set posteriorly-rotated ears with a thickened helix. The cardiovascular defects most commonly associated with this condition are pulmonary stenosis and hypertrophic cardiomyopathy. Other associated features are webbed neck, chest deformity, mild intellectual deficit, cryptorchidism, poor feeding in infancy, bleeding tendency, and lymphatic dysplasias. The patient is a 10-year-old boy. He had experienced repeated febrile convulsions. He had typical facial features, a short stature, chest deformity, cryptorchidism, vesicoureteral reflux, and mental retardation. His language and motor development were delayed. When he went to school, it was difficult for him to pay attention, follow directions, and organize tasks. He also displayed behavior such as squirming, leaving his seat in class, and running around inappropriately. Clinical observation is important for the diagnosis, so we report a patient who was diagnosed with Noonan syndrome, mental retardation, and attention-deficit hyperactivity disorder.

Novel Method for Face Recognition using Laplacian of Gaussian Mask with Local Contour Pattern

  • Jeon, Tae-jun;Jang, Kyeong-uk;Lee, Seung-ho
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.10 no.11
    • /
    • pp.5605-5623
    • /
    • 2016
  • We propose a face recognition method that utilizes the local contour pattern (LCP) face descriptor. The proposed method applies a Laplacian-of-Gaussian (LoG) mask to extract a face contour response and employs the LCP algorithm to produce a binary pattern representation that maintains high recognition performance even under changes in illumination, noise, and aging. By extracting the face contour response with the LoG mask, whose behavior is similar to that of the human eye, the LCP algorithm achieves excellent noise reduction and efficiently removes unnecessary information from the face. While the majority of reported algorithms search for face contour response information, the proposed LCP algorithm expresses the major facial information with only 8 bits by applying a threshold value to the search area. Therefore, compared with previous approaches, the LCP algorithm maintains consistent accuracy under varying circumstances and produces a high face recognition rate with a relatively small feature vector. The test results indicate that the LCP algorithm produces a higher face recognition rate than human visual recognition capability and outperforms existing methods.
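The LoG-response-plus-thresholding pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact LCP encoding: the 8-bit code here uses an LBP-style comparison of each pixel against its 8 neighbours, and the `sigma` value is an assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def log_contour_response(image, sigma=1.0):
    # Laplacian-of-Gaussian filter approximates the contour-sensitive
    # response the abstract attributes to the LoG mask.
    return gaussian_laplace(image.astype(np.float64), sigma=sigma)

def local_contour_pattern(response):
    # Compare each interior pixel's 8 neighbours against the centre value
    # and pack the comparisons into an 8-bit code (an LBP-style stand-in
    # for the paper's LCP encoding).
    h, w = response.shape
    codes = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    centre = response[1:-1, 1:-1]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = response[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neigh >= centre).astype(np.uint8) << bit
    return codes

img = np.random.rand(32, 32)                     # stand-in face patch
codes = local_contour_pattern(log_contour_response(img))
```

Because each pixel's descriptor is a single byte, the resulting feature vector stays small, which is consistent with the abstract's claim of a compact representation.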

Energy-Efficient DNN Processor on Embedded Systems for Spontaneous Human-Robot Interaction

  • Kim, Changhyeon;Yoo, Hoi-Jun
    • Journal of Semiconductor Engineering
    • /
    • v.2 no.2
    • /
    • pp.130-135
    • /
    • 2021
  • Recently, deep neural networks (DNNs) have been actively used for action control so that an autonomous system, such as a robot, can perform human-like behaviors and operations. Unlike recognition tasks, action control must run in real time, and remote learning on a server communicating over a network is too slow. New learning techniques, such as reinforcement learning (RL), are needed to determine and select the correct robot behavior locally. In this paper, we propose an energy-efficient DNN processor with a LUT-based processing engine and a near-zero skipper. A CNN-based facial emotion recognition model and an RNN-based emotional dialogue generation model are integrated for a natural HRI system and tested on the proposed processor. It supports variable weight precision from 1b to 16b, with 57.6% and 28.5% lower energy consumption than conventional MAC arithmetic units at 1b and 16b weight precision, respectively. The near-zero skipper also removes 36% of MAC operations and reduces energy consumption by 28% on facial emotion recognition tasks. Implemented in a 65nm CMOS process, the proposed processor occupies a 1784×1784 μm² area and dissipates 0.28 mW and 34.4 mW at 1 fps and 30 fps facial emotion recognition, respectively.
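The near-zero skipper is a hardware unit, but its effect can be illustrated with a software analogue: multiply-accumulate terms whose activation magnitude falls below a threshold are skipped, trading a small accuracy loss for fewer MAC operations. The threshold value and array sizes below are assumptions for illustration, not figures from the paper.

```python
import numpy as np

def mac_with_near_zero_skip(weights, activations, threshold=1e-3):
    # Software analogue of a near-zero skipper: drop MAC terms whose
    # activation magnitude is below `threshold`, then accumulate the rest.
    mask = np.abs(activations) >= threshold
    skipped = int((~mask).sum())          # operations avoided
    acc = float(np.dot(weights[mask], activations[mask]))
    return acc, skipped

weights = np.array([1.0, 2.0, 3.0])
acts = np.array([0.0, 0.5, 1e-5])         # two near-zero activations
acc, skipped = mac_with_near_zero_skip(weights, acts)
```

In hardware, the skipped multiplications translate directly into energy savings, which is the mechanism behind the 36% MAC reduction reported in the abstract.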

Detection of video editing points using facial keypoints (얼굴 특징점을 활용한 영상 편집점 탐지)

  • Joshep Na;Jinho Kim;Jonghyuk Park
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.4
    • /
    • pp.15-30
    • /
    • 2023
  • Recently, various services using artificial intelligence (AI) are emerging in the media field as well. However, most video editing, which involves finding an editing point and joining the video, is still carried out manually, requiring considerable time and human resources. This study therefore proposes a methodology that detects the editing points of a video according to whether the person in the video is speaking, using a Video Swin Transformer. The proposed structure first detects facial keypoints through face alignment, so that the temporal and spatial changes of the face are reflected from the input video data. The behavior of the person in the video is then classified through the Video Swin Transformer-based model proposed in this study. Specifically, the feature map generated by the Video Swin Transformer from the video data is combined with the facial keypoints detected through face alignment, and utterance is classified through convolution layers. In conclusion, the performance of the editing point detection model using facial keypoints proposed in this paper improved from 87.46% to 89.17% compared with the model without facial keypoints.
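The fusion step, combining a backbone feature map with face-alignment keypoints before classifying utterance, can be sketched as below. All shapes and the linear head are assumptions for illustration (the paper uses a Video Swin Transformer backbone and convolution layers, not this toy classifier).

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_classify(feat_vec, keypoints, W, b):
    # Illustrative fusion: flatten the (x, y) facial keypoints, concatenate
    # them with the clip-level feature vector, and score speaking vs.
    # not-speaking with a linear head standing in for the conv layers.
    fused = np.concatenate([feat_vec, keypoints.reshape(-1)])
    logits = W @ fused + b
    return int(np.argmax(logits))   # 0 = not speaking, 1 = speaking

feat = rng.normal(size=256)          # backbone feature (assumed size)
kps = rng.normal(size=(68, 2))       # 68-point face-alignment keypoints
W = rng.normal(size=(2, 256 + 68 * 2))
b = np.zeros(2)
label = fuse_and_classify(feat, kps, W, b)
```

Running the utterance classifier per clip and marking frames where the predicted label changes would then yield candidate editing points.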

The Effects of Gabapentin on Facial Formalin Test (백서에서 Gabapentin 전신투여가 Facial Formalin Test에 미치는 영향)

  • Kim, Chul-Hong;Baik, Seong-Wan;Kim, Hae-Kyu;Kwon, Jae-Young;Kim, Kyoung-Hun;Choi, Sung-Hwan
    • Journal of The Korean Dental Society of Anesthesiology
    • /
    • v.3 no.2 s.5
    • /
    • pp.92-97
    • /
    • 2003
  • Background: Gabapentin is a novel anti-epileptic drug used in clinical practice to treat epilepsy; it is also used as an analgesic in pain patients. The antinociceptive effect of this drug was assessed using the formalin test in the rat. Methods: To investigate the effects of gabapentin on the trigeminal nerve territory, we injected 0.5% formalin into the upper lip. Adult male Sprague-Dawley rats received a 50 μl subcutaneous injection of 5% formalin into one vibrissal pad, and the consequent facial grooming behavior was monitored. Consistent with previous investigations using the formalin model, animals exhibited biphasic nocifensive grooming (phase 1, 0-12 min; phase 2, 12-60 min). Results: Intraperitoneal administration of gabapentin 5 minutes prior to the formalin injection led to a significant, dose-dependent reduction in grooming time during phase 2. At high doses, gabapentin also reduced grooming time during phase 1. Conclusions: Intraperitoneal injection of gabapentin has an analgesic effect in the facial formalin rat model, and this analgesic effect increases dose-dependently.

Affective interaction to emotion expressive VR agents (가상현실 에이전트와의 감성적 상호작용 기법)

  • Choi, Ahyoung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.5
    • /
    • pp.37-47
    • /
    • 2016
  • This study evaluates user feedback, such as physiological responses and facial expressions, when subjects play a social decision-making game with interactive virtual agent partners. In the game, subjects invest some of their money or credit in one of several projects, and their partners (virtual agents) likewise invest in one of the projects. Subjects interact with different kinds of virtual agents that behave reciprocally or unreciprocally while expressing socially affective facial expressions. The total money or credit the subject earns is contingent on the partner's choice. From this study, I observed that subjects' appraisals of interacting with cooperative/uncooperative (or friendly/unfriendly) virtual agents in an investment game result in increased autonomic and somatic responses, and that these responses were observed in real time through physiological signals and facial expression. For assessing user feedback, a photoplethysmography (PPG) sensor and a galvanic skin response (GSR) sensor were used while the subject's frontal facial image was captured with a web camera. After all trials, subjects were asked to answer questions evaluating how much the interaction with the virtual agents affected their appraisals.
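As a rough illustration of how a PPG trace yields an autonomic measure, the sketch below estimates heart rate from detected pulse peaks. This is a minimal toy pipeline under assumed parameters (sampling rate, peak threshold), not the study's actual analysis.

```python
import numpy as np

def heart_rate_from_ppg(sig, fs=50.0):
    # Toy peak-based heart-rate estimate: a sample counts as a pulse peak
    # if it exceeds both neighbours and a mean-plus-one-std threshold.
    thresh = sig.mean() + sig.std()
    peaks = [i for i in range(1, len(sig) - 1)
             if sig[i] > sig[i - 1] and sig[i] > sig[i + 1]
             and sig[i] > thresh]
    if len(peaks) < 2:
        return None
    ibi = np.diff(peaks) / fs            # inter-beat intervals (seconds)
    return 60.0 / ibi.mean()             # beats per minute

fs = 50.0
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t)        # synthetic 72-bpm pulse wave
bpm = heart_rate_from_ppg(ppg, fs)
```

Per-trial changes in such measures (heart rate from PPG, skin conductance from GSR) are the kind of real-time autonomic feedback the study pairs with facial-expression capture.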

The Influence of Self-Perceived Physical Attractiveness on Self-Esteem and Appearance Management Behavior of Adult Women (성인 여성의 신체적 매력성 자아지각이 자존심과 외모관리 행동에 미치는 영향)

  • 정명선
    • Journal of the Korean Society of Costume
    • /
    • v.53 no.3
    • /
    • pp.165-179
    • /
    • 2003
  • The purpose of this study was to investigate the influence of self-perceived physical attractiveness on the self-esteem and appearance management behavior of adult women. The data for this study were collected using a questionnaire from 511 adult women living in Kwangju, Korea, and were analyzed using frequency analysis, analysis of variance, Duncan's multiple range test, and cross-tabulation. The results were as follows. 1. Respondents' appearance management behaviors centered largely on facial and skin texture improvement and hair styling. The frequency of plastic surgery was not high, but the intention to undergo it was much higher than the practice. 2. Respondents' self-perceived physical attractiveness significantly influenced their self-esteem. 3. Respondents' self-perceived physical attractiveness significantly influenced several of their appearance management behaviors, excluding plastic surgery. 4. Respondents' self-esteem significantly influenced several of their appearance management behaviors, excluding plastic surgery.

Sociocultural Influences of Appearance and Body Image on Appearance Enhancement Behavior (외모에 대한 사회문화적 영향과 신체이미지가 외모향상추구행동에 미치는 영향)

  • Park, Eun-Jeong;Chung, Myung-Sun
    • Journal of the Korean Society of Clothing and Textiles
    • /
    • v.36 no.5
    • /
    • pp.549-561
    • /
    • 2012
  • This study investigates the effects of sociocultural influences and body image on appearance enhancement behaviors (facial management, clothing selection, and weight/figure management). For data collection, a questionnaire was administered to 562 female college students in Gwangju City and the Chonnam and Chonbuk areas, Korea, from May 23 to June 10, 2011. The data were analyzed with the SPSS 18.0 statistics package using descriptive statistics, frequency analysis, factor analysis, reliability analysis, and regression analysis. The results were as follows. First, sociocultural influences were divided into three factors: parental influence, media influence, and peer influence. Overall, sociocultural influences had positive effects on appearance enhancement behavior. Second, body image was divided into two factors: weight concern and appearance concern. Sociocultural influences had significant effects on overall body image. Third, body image had positive effects on overall appearance enhancement behavior.

A Study on Flow-emotion-state for Analyzing Flow-situation of Video Content Viewers (영상콘텐츠 시청자의 몰입상황 분석을 위한 몰입감정상태 연구)

  • Kim, Seunghwan;Kim, Cheolki
    • Journal of Korea Multimedia Society
    • /
    • v.21 no.3
    • /
    • pp.400-414
    • /
    • 2018
  • Today's video content is increasingly required to interact with viewers in order to provide a more personalized experience than before. To provide such a friendly experience from the video content system's perspective, the viewer's situation must first be understood and analyzed. For this purpose, it is effective to analyze the viewer's situation by inferring the viewer's state from his or her behaviors while watching the video content, and classifying those behaviors into the viewer's emotions and states during flow. The term 'flow-emotion-state' presented in this study denotes the state of the viewer inferred from the emotions that subsequently arise in relation to the target video content, in a situation where the viewer is already engaged in viewing behavior. A viewer's flow-emotion-state can be expected to help identify the characteristics of the viewer's flow situation by observing and analyzing the gestures and facial expressions that serve as the viewer's input modality to the video content.