• Title/Summary/Keyword: facial expression intensity (표정 강도)

Search Results: 47

Development of facial recognition application for automation logging of emotion log (감정로그 자동화 기록을 위한 표정인식 어플리케이션 개발)

  • Shin, Seong-Yoon; Kang, Sun-Kyoung
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.4 / pp.737-743 / 2017
  • The intelligent life-log system proposed in this paper identifies and records a wide range of everyday information about events (when, where, with whom, what, and how), that is, contextual information covering person, scene, age, emotion, relation, state, location, movement route, and so on, attaching a unique tag to each piece of information so that users can access it quickly and easily. Context awareness generates and classifies information on a per-tag basis using auto-tagging and biometric recognition technologies and builds a situational-information database. In this paper, we developed an active modeling method and an application that recognizes neutral and smiling expressions from lip lines in order to record emotion information automatically.

Non-verbal Emotional Expressions for Social Presence of Chatbot Interface (챗봇의 사회적 현존감을 위한 비언어적 감정 표현 방식)

  • Kang, Minjeong
    • The Journal of the Korea Contents Association / v.21 no.1 / pp.1-11 / 2021
  • The users of a chatbot messenger can be better engaged in the conversation if they feel intimacy with the chatbot, which can be achieved by the chatbot effectively expressing human emotions to its users. Thus motivated, this study aims to identify the emotional expressions of a chatbot that make people feel its social presence. In the background research, we found that facial expression is the most effective channel for conveying emotion and that movement is important for relational immersion. In a survey, we prepared moving text, moving gestures, and still emoticons representing five emotions: happiness, sadness, surprise, fear, and anger. We then asked participants which form best made them feel social presence with a chatbot for each emotion. We found that for an aroused, pleasant emotion such as 'happiness', people most prefer moving gestures and text, while for unpleasant emotions such as 'sadness' and 'anger', people prefer emoticons. Lastly, for neutral emotions such as 'surprise' and 'fear', people tend to select moving text that delivers a clear meaning. We expect the results of this study to be useful for developing emotional chatbots that enable more effective conversations with users.

Reliability Analysis of the Three-Dimensional Deformation Measurement by Terrestrial Photogrammetry (지상사진(地上寫眞)에 의한 삼차원변형측량(三次元變形測量)의 신뢰도(信賴度) 분석(分析)(기일(其一)))

  • Yeu, Bock Mo; Yoo, Hwan Hee; Kim, In Sub
    • KSCE Journal of Civil and Environmental Engineering Research / v.7 no.4 / pp.139-146 / 1987
  • Three-dimensional deformation measurement by terrestrial photogrammetry consists of computing three-dimensional coordinates, detecting displaced points, and estimating the deformation of object targets. In this study, as the first step of deformation analysis, we analyzed how the variance-covariance matrix of the exterior orientation elements varies with the number of ground control points and photographs in the bundle adjustment. Then, to provide constraints that improve the accuracy of the ground control points, the concept of free-network adjustment was applied to the bundle adjustment. The results show that six ground control points and three photographs are desirable in terms of accuracy, economy, and observation time. In addition, when the free-network adjustment concept is applied to the bundle adjustment, it is desirable that the spatial distances used as constraints be distributed on the outside.


Fusing texture and depth edge information for face recognition (조명에 강인한 얼굴인식을 위한 텍스쳐 정보와 깊이 에지 기반의 퓨전 벡터 생성기법)

  • Ahn Byung-Woo; Sung Won-Je; Yi June-Ho
    • Proceedings of the Korea Institutes of Information Security and Cryptology Conference / 2006.06a / pp.246-250 / 2006
  • Depth edge information, which captures the salient features of a face, can be used to build feature vectors that are robust to changes in facial pixel intensity caused by expression and illumination variation. This paper proposes a new feature vector based on depth edges and experimentally evaluates its usefulness. Because the proposed feature vector uses edge-strength histograms obtained by projecting the facial depth-edge image in the horizontal and vertical directions, it is unaffected by deformation caused by facial movement, and it allows easy real-time detection and recognition. We compared the proposed depth-edge-based feature vector with a pixel-intensity-based feature vector from white-light images by applying a subspace-projection-based face recognition algorithm to both. Experimental results show that face recognition based on facial depth edges outperforms the conventional method using white-light images alone.
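As an illustration of the projection-histogram feature this abstract describes, the sketch below sums an edge-magnitude map along the horizontal and vertical directions and concatenates the two projections into one vector; the array layout, toy data, and normalization are our assumptions, not the paper's.

```python
import numpy as np

def edge_projection_feature(edge_map):
    """Concatenate horizontal and vertical projections of an edge
    magnitude map into a single L2-normalized feature vector."""
    edge_map = np.asarray(edge_map, dtype=float)
    horizontal = edge_map.sum(axis=1)  # row-wise edge-strength histogram
    vertical = edge_map.sum(axis=0)    # column-wise edge-strength histogram
    feature = np.concatenate([horizontal, vertical])
    norm = np.linalg.norm(feature)
    return feature / norm if norm > 0 else feature

# Toy 4x4 "depth edge" map with a vertical edge structure
edges = np.array([[0, 1, 1, 0],
                  [0, 2, 2, 0],
                  [0, 2, 2, 0],
                  [0, 1, 1, 0]], dtype=float)
vec = edge_projection_feature(edges)
print(vec.shape)  # (8,) - rows + columns of a 4x4 map
```

Because the projections discard exact pixel positions along each axis, small in-plane shifts of the face change the vector only slightly, which is the robustness property the abstract claims.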


Nondestructive Interfacial Evaluation and fiber fracture Source Location of Single-Fiber/Epoxy Composite using Micromechanical Technique and Acoustic Emission (음향방출과 미세역학적시험법을 이용한 단일섬유강화 에폭시 복합재료의 비파지적 섬유파단 위치표정 및 계면물성 평가)

  • Park, Joung-Man; Kong, Jin-Woo; Kim, Dae-Sik; Yoon, Dong-Jin
    • Journal of the Korean Society for Nondestructive Testing / v.23 no.5 / pp.418-428 / 2003
  • Fiber fracture is one of the dominant failure phenomena affecting the overall mechanical performance of composites. Fiber fracture locations were measured with a conventional optical microscope and with the nondestructive acoustic emission (AE) technique, and the two were compared as a function of the epoxy matrix modulus and of fiber surface treatment by electrodeposition (ED). Interfacial shear strength (IFSS) was measured using the tensile fragmentation test combined with the AE method. ED treatment of the fiber surface increased the number of fiber fracture locations compared with the untreated case. The number of fiber fracture events measured by the AE method was smaller than that obtained optically; however, the fiber fracture locations determined by AE detection corresponded to those from optical observation with small errors. AE source location of fiber breaks can therefore serve as a valuable nondestructive method for measuring the IFSS of the matrix in non-, semi-, and fully transparent polymer composites.

Development of Recognition Application of Facial Expression for Laughter Theraphy on Smartphone (스마트폰에서 웃음 치료를 위한 표정인식 애플리케이션 개발)

  • Kang, Sun-Kyung; Li, Yu-Jie; Song, Won-Chang; Kim, Young-Un; Jung, Sung-Tae
    • Journal of Korea Multimedia Society / v.14 no.4 / pp.494-503 / 2011
  • In this paper, we propose a facial expression recognition application for laughter therapy on smartphones. It detects the face region in the smartphone's front-camera image using the AdaBoost face detection algorithm, then detects the lip region within the detected face. From the next frame onward, instead of re-detecting the face, it tracks the lip region detected in the previous frame using a three-step block matching algorithm. Because the size of the detected lip image varies with the distance between the camera and the user, the lip image is scaled to a fixed size. The effect of illumination variation is then minimized by applying bilateral-symmetry and histogram-matching illumination normalization. Finally, lip eigenvectors are computed using PCA (Principal Component Analysis), and laughter expressions are recognized with a multilayer perceptron neural network. Experimental results show that the proposed method runs at 16.7 frames per second and that the proposed illumination normalization reduces illumination variation better than existing methods, improving recognition performance.
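The pipeline above ends with PCA-based lip eigenvectors. A rough illustration of that step follows, computed via NumPy's SVD on mean-centered, flattened lip images; the toy data, shapes, and function name are ours, not the paper's.

```python
import numpy as np

def pca_project(lip_images, k):
    """Project flattened lip images onto the top-k principal components.
    lip_images: (n_samples, n_pixels); returns (projections, basis, mean)."""
    X = np.asarray(lip_images, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean                    # center the data before PCA
    # Rows of Vt are the eigenvectors of the sample covariance matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:k]                   # top-k principal axes, shape (k, n_pixels)
    return Xc @ basis.T, basis, mean

# Toy example: 5 "lip images" of 16 pixels each, reduced to 3 dimensions
rng = np.random.default_rng(0)
imgs = rng.random((5, 16))
proj, basis, mean = pca_project(imgs, k=3)
print(proj.shape)  # (5, 3)
```

The resulting low-dimensional projections would then be fed to a classifier such as the multilayer perceptron the abstract mentions.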

A Study of Improving LDP Code Using Edge Directional Information (에지 방향 정보를 이용한 LDP 코드 개선에 관한 연구)

  • Lee, Tae Hwan; Cho, Young Tak; Ahn, Yong Hak; Chae, Ok Sam
    • Journal of the Institute of Electronics and Information Engineers / v.52 no.7 / pp.86-92 / 2015
  • This study proposes a new LDP code that improves the facial expression recognition rate by incorporating the local directional number (LDN), edge magnitudes, and differences in neighboring edge intensities. LDP is less sensitive to intensity changes and more robust to noise than LBP, but it has difficulty representing smooth areas with little intensity variation, and its facial expression recognition rate drops when the background contains patterns similar to a face. We therefore extend the LDP code with the local directional number and edge strength, and experimentally evaluate the facial expression recognition rate of the modified code.
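The baseline LDP code this abstract extends is conventionally computed from the eight Kirsch mask responses of a 3x3 neighborhood, setting one bit per dominant direction. The sketch below shows that standard construction (not the paper's extended variant); the patch values and parameter `k` are illustrative.

```python
import numpy as np

# Eight Kirsch edge masks (E, NE, N, NW, W, SW, S, SE directions)
KIRSCH = np.array([
    [[-3, -3,  5], [-3, 0,  5], [-3, -3,  5]],
    [[-3,  5,  5], [-3, 0,  5], [-3, -3, -3]],
    [[ 5,  5,  5], [-3, 0, -3], [-3, -3, -3]],
    [[ 5,  5, -3], [ 5, 0, -3], [-3, -3, -3]],
    [[ 5, -3, -3], [ 5, 0, -3], [ 5, -3, -3]],
    [[-3, -3, -3], [ 5, 0, -3], [ 5,  5, -3]],
    [[-3, -3, -3], [-3, 0, -3], [ 5,  5,  5]],
    [[-3, -3, -3], [-3, 0,  5], [-3,  5,  5]],
], dtype=float)

def ldp_code(patch, k=3):
    """Standard LDP code of a 3x3 patch: one bit set for each of the
    k strongest absolute Kirsch edge responses."""
    responses = np.abs((KIRSCH * patch).sum(axis=(1, 2)))  # 8 directional responses
    top_k = np.argsort(responses)[-k:]   # indices of the k strongest directions
    code = 0
    for i in top_k:
        code |= 1 << int(i)
    return code

# A patch with a strong vertical edge on its right side
patch = np.array([[10, 10, 90], [10, 50, 90], [10, 10, 90]], dtype=float)
print(ldp_code(patch))
```

Each pixel's 8-bit code is then histogrammed over image regions to form the texture descriptor; the paper's contribution is to enrich this code with the LDN and edge-strength information.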

Evaluation of Static Structural Integrity for Composites Wing Structure by Acoustic Emission Technique (음향방출법을 응용한 복합재 날개 구조물의 정적구조 건전성 평가)

  • Jun, Joon-Tak; Lee, Young-Shin
    • Journal of the Korean Society for Aeronautical & Space Sciences / v.37 no.8 / pp.780-788 / 2009
  • The AE technique was applied to a static structural test of a composite wing structure to evaluate its structural integrity and damage. During the test, strain and displacement measurements were used to assess static structural strength, while AE parameter analysis and source location were used to evaluate internal damage and locate its source. A design-limit-load test, first and second design-ultimate-load tests, and a fracture test were performed. The main AE source was detected by a sensor attached to the skin near the front lug. Notably, in the first design-ultimate-load test, the strain and displacement results showed no internal damage, whereas the AE signals indicated that internal damage was forming. In the fracture test, AE activity was very lively, and the strain and displacement results showed that the load path changed due to severe damage. The load at which internal damage initiated, and its location, were accurately evaluated during the static structural test using the AE technique, demonstrating that AE is a useful technique for evaluating internal damage in static structural strength tests.

Difference of Facial Emotion Recognition and Discrimination between Children with Attention-Deficit Hyperactivity Disorder and Autism Spectrum Disorder (주의력결핍과잉행동장애 아동과 자폐스펙트럼장애 아동에서 얼굴 표정 정서 인식과 구별의 차이)

  • Lee, Ji-Seon; Kang, Na-Ri; Kim, Hui-Jeong; Kwak, Young-Sook
    • Journal of the Korean Academy of Child and Adolescent Psychiatry / v.27 no.3 / pp.207-215 / 2016
  • Objectives: This study aimed to investigate the differences in facial emotion recognition and discrimination ability between children with attention-deficit hyperactivity disorder (ADHD) and autism spectrum disorder (ASD). Methods: Fifty-three children aged 7 to 11 years participated in this study. Among them, 43 were diagnosed with ADHD and 10 with ASD. The parents of the participants completed the Korean version of the Child Behavior Checklist, the ADHD Rating Scale, and the Conners' scale. The participants completed the Korean Wechsler Intelligence Scale for Children-Fourth Edition, the Advanced Test of Attention (ATA), the Penn Emotion Recognition Task, and the Penn Emotion Discrimination Task. Group differences in facial emotion recognition and discrimination ability were analyzed using analysis of covariance, controlling for the visual omission error index of the ATA. Results: The children with ADHD showed better recognition of happy and sad faces and fewer false-positive neutral responses than those with ASD. The children with ADHD also recognized emotions better than those with ASD on female faces and in extreme facial expressions, but not on male faces or in mild facial expressions. We found no differences in facial emotion discrimination between the children with ADHD and ASD. Conclusion: Our results suggest that children with ADHD recognize facial emotions better than children with ASD, but they still have deficits. Interventions that consider their different emotion recognition and discrimination abilities are needed.

A Design of Artificial Emotion Model (인공 감정 모델의 설계)

  • Lee, In-Geun; Seo, Seok-Tae; Jeong, Hye-Cheon; Gwon, Sun-Hak
    • Proceedings of the Korean Institute of Intelligent Systems Conference / 2007.04a / pp.58-62 / 2007
  • Alongside research on recognizing human emotional states from speech, facial-expression images, and text, work is under way on artificial emotion: systems that mimic human emotion by generating emotional states from various external stimuli. Existing artificial emotion studies, however, change the emotional state linearly or exponentially in response to an external emotional stimulus, so the state shifts abruptly. This paper proposes an emotion-generation model that reflects not only the strength and frequency of external emotional stimuli but also their repetition period, and expresses the change of emotion over time as a sigmoid curve. We also propose an artificial emotion system that can generate emotion even in the absence of external stimuli through recollection of past emotional stimuli.
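A minimal sketch of the sigmoid-shaped intensity curve the abstract contrasts with linear or exponential updates; the functional form and all parameter names and values here are our illustrative assumptions, not taken from the paper.

```python
import math

def emotion_intensity(t, stimulus_strength, rise_rate=1.0, midpoint=3.0):
    """Sigmoid rise of one emotion's intensity after a stimulus at t = 0.
    Intensity grows slowly at first, steepens, then saturates at
    stimulus_strength instead of jumping abruptly."""
    return stimulus_strength / (1.0 + math.exp(-rise_rate * (t - midpoint)))

# Smooth, S-shaped growth over time for a unit-strength stimulus
for t in range(7):
    print(t, round(emotion_intensity(t, stimulus_strength=1.0), 3))
```

Repeated stimuli could be modeled by scaling `stimulus_strength` or `rise_rate` with the stimulus frequency and repetition period, which is the kind of dependence the proposed model introduces.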
