• Title/Summary/Keyword: 상호표정


A Study on the Automation of Interior Orientation and Relative Orientation (내부표정과 상호표정의 자동화에 관한 연구)

  • Jeong, Soo;Park, Choung-Hwan;Yun, Kong-Hyun;Yeu, Bock-Mo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.17 no.2
    • /
    • pp.105-116
    • /
    • 1999
  • Owing to the rapid development of computer systems and the introduction of image processing techniques, recent photogrammetric studies have concentrated on automating the orientation work that has been carried out by skilled professionals in analog and/or analytical photogrammetry. To automate the whole photogrammetric workflow, the orientation processes, including interior, relative, and absolute orientation, must be automated first. This study aims to automate the interior and relative orientation processes. For this purpose, we applied the Hough transform to interior orientation and an object-space matching technique to relative orientation. As a result, we present a method to automate the interior and relative orientation processes that are operated only semi-automatically in most commercial digital photogrammetric workstations currently available.

  • PDF
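The abstract above applies a Hough transform to locate fiducial marks for interior orientation. As a rough illustration of the underlying idea (not the paper's implementation), the following sketch accumulates Hough votes for the two strokes of a synthetic cross-shaped fiducial:

```python
import numpy as np

def hough_lines(edge_points, img_size, n_theta=180):
    """Vote in (rho, theta) space for every edge pixel; straight lines
    show up as high-count bins in the accumulator."""
    h, w = img_size
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for y, x in edge_points:
        rhos = np.round(x * cos_t + y * sin_t).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    return acc, thetas, diag

# Synthetic cross-shaped fiducial mark: a horizontal and a vertical stroke
pts = [(20, x) for x in range(40)] + [(y, 20) for y in range(40)]
acc, thetas, diag = hough_lines(pts, (40, 40))
# Each 40-pixel stroke concentrates its votes in a single accumulator bin
peak_votes = int(acc.max())
```

Peaks in the accumulator give the line parameters of the fiducial strokes; their intersection yields the mark's image coordinates for the interior orientation transform.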

Analysis of the Internal Reliability in Relative Orientation and Independent Model Method (상호표정 및 독립모델법에서의 내적신뢰성 분석)

  • Yang, In-Tae
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.5 no.1
    • /
    • pp.59-65
    • /
    • 1987
  • This paper presents procedures for detecting gross errors and describes the influence of the number and distribution of points on internal reliability in photogrammetric adjustment, such as relative orientation and the independent model method. Using the standard six points for relative orientation and the regular four points for the independent model method results in low internal reliability; with such a distribution, gross errors in measured points may go undetected. Using clusters of double or triple points instead of individual points, however, improves internal reliability remarkably.

  • PDF
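Internal reliability in a least-squares adjustment is commonly quantified by redundancy numbers, the diagonal of R = I − A(AᵀPA)⁻¹AᵀP; a small redundancy number means a gross error in that observation barely shows in its residual and is hard to detect. A toy sketch (a line fit rather than a photogrammetric block, so purely illustrative) shows how observing each point twice raises the minimum redundancy number:

```python
import numpy as np

def redundancy_numbers(A, P=None):
    """Diagonal of R = I - A (A^T P A)^{-1} A^T P.  Each r_i in [0, 1]
    tells how much of a gross error in observation i appears in its own
    residual; small r_i means the error is hard to detect."""
    m, _ = A.shape
    if P is None:
        P = np.eye(m)
    N = A.T @ P @ A
    R = np.eye(m) - A @ np.linalg.solve(N, A.T @ P)
    return np.diag(R)

# Toy design: fitting a line y = a + b*x through three sample points,
# then measuring every point twice (a "double point" cluster).
x = np.array([0.0, 1.0, 2.0])
A_single = np.column_stack([np.ones_like(x), x])   # 3 observations, 2 params
A_double = np.repeat(A_single, 2, axis=0)          # each point observed twice
r_single = redundancy_numbers(A_single)
r_double = redundancy_numbers(A_double)
```

The redundancy numbers sum to the degrees of freedom (m − n), and the doubled design raises the worst-case r_i, matching the abstract's observation that point clusters improve internal reliability.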

SVM Based Facial Expression Recognition for Expression Control of an Avatar in Real Time (실시간 아바타 표정 제어를 위한 SVM 기반 실시간 얼굴표정 인식)

  • Shin, Ki-Han;Chun, Jun-Chul;Min, Kyong-Pil
    • Proceedings of the HCI Society of Korea Conference
    • /
    • 2007.02a
    • /
    • pp.1057-1062
    • /
    • 2007
  • Facial expression recognition is of growing importance in fields such as psychology research, facial animation synthesis, robotics, and HCI (Human Computer Interaction). Facial expressions provide important information for social interaction, such as a person's emotional state and degree of interest. Facial expression recognition methods can be broadly divided into those using still images and those using video. Still-image methods have the advantage of low computational load and high speed, but recognition by matching becomes difficult when facial changes are large. Video-based methods can process the user's expression changes continuously using techniques such as neural networks, optical flow, and HMMs (Hidden Markov Models), which makes them useful for real-time interaction with a computer; however, they require more computation than still-image methods and large amounts of data for training or database construction. The real-time facial expression recognition system proposed in this paper consists of four stages: face region detection, facial feature detection, facial expression classification, and avatar control. To detect the face region accurately in images captured by a webcam, histogram equalization and a reference-white technique are applied, and the face region is detected using the HT color model and a PCA (Principal Component Analysis) transform. In the detected face region, candidate regions for facial features are determined from the geometric information of the face, and the features needed for expression recognition are extracted by template matching of each feature point and edge detection. For each detected feature point, a feature vector is obtained from motion information computed with an optical flow algorithm. The feature vectors are classified into facial expressions using an SVM (Support Vector Machine), and the recognized expression is rendered on an avatar.

  • PDF
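The pipeline above ends with an SVM classifying optical-flow feature vectors. As a minimal stand-in, assuming hypothetical 2-D displacement features (not the paper's feature design), a linear SVM trained by hinge-loss subgradient descent could look like:

```python
import numpy as np

def train_linear_svm(X, y, lr=0.1, lam=0.01, epochs=200):
    """Toy linear SVM: hinge loss minimized by subgradient descent.
    Labels must be in {-1, +1}."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:          # margin violated: push
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                               # satisfied: only regularize
                w -= lr * lam * w
    return w, b

# Hypothetical 2-D features: mean vertical displacement of the mouth
# corners and cheeks (upward for a smile, downward for a frown).
rng = np.random.default_rng(0)
smile = rng.normal([+1.0, +0.5], 0.1, size=(20, 2))
frown = rng.normal([-1.0, -0.5], 0.1, size=(20, 2))
X = np.vstack([smile, frown])
y = np.array([1] * 20 + [-1] * 20)
w, b = train_linear_svm(X, y)
accuracy = float((np.sign(X @ w + b) == y).mean())
```

A production system would use a kernelized, multi-class SVM over the full displacement vector of all tracked feature points; the two invented features here only illustrate the classification step.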

Stability Analysis of a Stereo-Camera for Close-range Photogrammetry (근거리 사진측량을 위한 스테레오 카메라의 안정성 분석)

  • Kim, Eui Myoung;Choi, In Ha
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.39 no.3
    • /
    • pp.123-132
    • /
    • 2021
  • To determine 3D (three-dimensional) positions with a stereo-camera in close-range photogrammetry, camera calibration must first determine not only the interior orientation parameters of each camera but also the relative orientation parameters between the cameras. As time passes after calibration, the interior and relative orientation parameters of non-metric cameras may change due to internal instability or external factors. In this study, to evaluate the stability of the stereo-camera, the stability of the two single cameras and of the stereo-camera was analyzed, and the three-dimensional position accuracy was evaluated using checkpoints. In three camera calibration experiments over four months, the root mean square error of the two single cameras was ±0.001 mm, and that of the stereo-camera ranged from ±0.012 mm to ±0.025 mm. In addition, as the distance accuracy at the checkpoints was ±1 mm, the interior and relative orientation parameters of the stereo-camera were considered stable over that period.
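The stability figures quoted above are root mean square errors of repeated calibration estimates about their mean. A minimal sketch, with invented illustrative numbers rather than the paper's data:

```python
import math

def rmse_about_mean(values):
    """Root mean square deviation of repeated estimates about their
    mean -- the stability measure quoted in the abstract."""
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

# Hypothetical repeated estimates (mm) of one relative-orientation
# parameter from three calibration sessions; numbers are invented.
baseline_x = [120.011, 120.035, 120.020]
stability = rmse_about_mean(baseline_x)
```

A parameter is judged stable when this deviation stays below the tolerance implied by the required object-space accuracy.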

Realtime Facial Expression Recognition from Video Sequences Using Optical Flow and Expression HMM (광류와 표정 HMM에 의한 동영상으로부터의 실시간 얼굴표정 인식)

  • Chun, Jun-Chul;Shin, Gi-Han
    • Journal of Internet Computing and Services
    • /
    • v.10 no.4
    • /
    • pp.55-70
    • /
    • 2009
  • Vision-based human-computer interaction is an emerging field of science and industry that aims to provide a natural way for humans and computers to communicate. In that sense, inferring a person's emotional state from facial expression recognition is an important issue. In this paper, we present a novel approach to recognizing facial expressions from a sequence of input images using emotion-specific HMMs (Hidden Markov Models) and facial motion tracking based on optical flow. Conventionally, in an HMM consisting of basic emotional states, transitions between emotions are assumed to pass through the neutral state. In this work, however, we propose an enhanced transition framework that allows transitions between emotional states without passing through the neutral state, in addition to the traditional transition model. To localize facial features in the video sequence, we exploit template matching and optical flow. The facial feature displacements traced by the optical flow are used as input parameters to the HMM for facial expression recognition. The experiments show that the proposed framework can effectively recognize facial expressions in real time.

  • PDF
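The key modeling change described above can be shown as two transition matrices: a conventional one in which every emotion-to-emotion move must pass through the neutral state, and an enhanced one with direct transitions. The probabilities below are illustrative, not taken from the paper:

```python
import numpy as np

states = ["neutral", "happy", "sad", "angry", "surprise"]

# Conventional model: leaving an emotion is only possible via neutral.
conventional = np.array([
    [0.2, 0.2, 0.2, 0.2, 0.2],   # neutral  -> any state
    [0.5, 0.5, 0.0, 0.0, 0.0],   # happy    -> happy or neutral only
    [0.5, 0.0, 0.5, 0.0, 0.0],   # sad      -> sad or neutral only
    [0.5, 0.0, 0.0, 0.5, 0.0],   # angry    -> angry or neutral only
    [0.5, 0.0, 0.0, 0.0, 0.5],   # surprise -> surprise or neutral only
])

# Enhanced model: direct emotion-to-emotion transitions are allowed.
enhanced = np.array([
    [0.2, 0.2, 0.2, 0.2, 0.2],
    [0.3, 0.4, 0.1, 0.1, 0.1],
    [0.3, 0.1, 0.4, 0.1, 0.1],
    [0.3, 0.1, 0.1, 0.4, 0.1],
    [0.3, 0.1, 0.1, 0.1, 0.4],
])
```

With the conventional matrix, a happy-to-sad change needs at least two frames (happy → neutral → sad); the enhanced matrix lets the recognizer follow such a change in a single step.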

Affective interaction to emotion expressive VR agents (가상현실 에이전트와의 감성적 상호작용 기법)

  • Choi, Ahyoung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.22 no.5
    • /
    • pp.37-47
    • /
    • 2016
  • This study evaluates user feedback, such as physiological responses and facial expressions, as subjects play a social decision-making game with interactive virtual agent partners. In the game, subjects invest money or credit in one of several projects; their partners (virtual agents) also invest in one of the projects. Subjects interact with different kinds of virtual agents that behave reciprocally or unreciprocally while displaying socially affective facial expressions, and the total money or credit a subject earns is contingent on the partner's choice. The study observed that a subject's appraisal of interaction with cooperative/uncooperative (or friendly/unfriendly) virtual agents in the investment game results in increased autonomic and somatic responses, and that these responses can be observed in real time from physiological signals and facial expressions. To assess user feedback, a photoplethysmography (PPG) sensor and a galvanic skin response (GSR) sensor were used while the subject's frontal face was captured with a web camera. After all trials, subjects were asked to answer questions evaluating how much the interaction with the virtual agents affected their appraisals.

The Technique Development for 3D Deformation Analysis of Railroad Bridge Using the Non-metric Camera (비측정용 디지털 카메라를 이용한 철도교량의 3차원 변형해석 기법개발)

  • Lee, Hyo-Seong;Ahn, Ki-Weon;Park, Byung-Uk;Shin, Seok-Hyo
    • Proceedings of the Korean Society of Surveying, Geodesy, Photogrammetry, and Cartography Conference
    • /
    • 2010.04a
    • /
    • pp.129-131
    • /
    • 2010
  • This study measures the 3D deformation of a steel railroad bridge using a non-metric, high-resolution digital camera. To reduce field survey work and improve efficiency, the deformation is measured using relative orientation based on the coplanarity condition. The results are compared with the deformation obtained from exterior orientation parameters, which are computed from 3D measurements of control points by total station, to verify the accuracy of the method.

  • PDF
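The coplanarity condition used above requires the base vector between the two camera centres and the two conjugate image rays to lie in one epipolar plane, i.e. their scalar triple product must vanish. A minimal sketch with an assumed example geometry:

```python
import numpy as np

def coplanarity_residual(base, ray_left, ray_right):
    """Scalar triple product base . (ray_left x ray_right); it vanishes
    exactly when the base vector and the two conjugate rays lie in one
    epipolar plane -- the coplanarity condition of relative orientation."""
    return float(np.dot(base, np.cross(ray_left, ray_right)))

# Assumed example geometry: left camera at the origin, right camera at b.
b = np.array([1.0, 0.0, 0.0])           # base vector
point = np.array([0.3, 0.2, 5.0])       # common object point
res_ok = coplanarity_residual(b, point, point - b)

# Rays aimed at *different* object points violate the condition.
other = point + np.array([0.0, 1.0, 0.0])
res_bad = coplanarity_residual(b, point, other - b)
```

Relative orientation adjusts the five rotation/base parameters until this residual is minimized over all conjugate point pairs, which is why it needs no ground control for the model formation step.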

Speech Animation with Multilevel Control (다중 제어 레벨을 갖는 입모양 중심의 표정 생성)

  • Moon, Bo-Hee;Lee, Son-Ou;Wohn, Kwang-yun
    • Korean Journal of Cognitive Science
    • /
    • v.6 no.2
    • /
    • pp.47-79
    • /
    • 1995
  • Since the early days of computer graphics, facial animation has been applied to various fields, and it has recently found several novel applications such as virtual reality (for representing virtual agents), teleconferencing, and man-machine interfaces. When facial animation is applied to a system with multiple participants connected via a network, it is hard to animate facial expressions as desired in real time because of the amount of information that must be exchanged to maintain efficient communication. This paper's major contribution is to adapt 'level of detail' to facial animation in order to solve this problem. Level of detail has been studied in computer graphics to represent the appearance of complicated objects in an efficient and adaptive way, but until now no attempt had been made in the field of facial animation. In this paper, we present a systematic scheme that enables this kind of adaptive control using level of detail. The implemented system can generate speech-synchronized facial expressions from various types of user input such as text, voice, GUI, and head motion.

  • PDF
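Level-of-detail control of the kind described can be sketched as a greedy budget allocation: the most important faces get the richest model while the total rendering cost stays within a frame budget. This is a toy illustration of the adaptive-control idea, not the paper's scheme:

```python
def assign_lod(importances, budget, costs=(10, 4, 1)):
    """Greedy level-of-detail assignment: start every face at the
    cheapest model (last cost), then upgrade faces in order of
    importance while the total cost stays within the frame budget."""
    order = sorted(range(len(importances)), key=lambda i: -importances[i])
    levels = [len(costs) - 1] * len(importances)   # cheapest level for all
    spent = costs[-1] * len(importances)
    for i in order:
        for lvl in range(len(costs) - 1):          # try richest level first
            extra = costs[lvl] - costs[levels[i]]
            if spent + extra <= budget:
                spent += extra
                levels[i] = lvl
                break
    return levels, spent

# Three participants; only the most important face gets the full model.
levels, spent = assign_lod([0.9, 0.5, 0.1], budget=16)
```

In a networked session the budget would track available bandwidth or frame time, so less important participants automatically degrade to cheaper facial models.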

The Impact of Gesture and Facial Expression on Learning Comprehension and Persona Effect of Pedagogical Agent (학습용 에이전트의 제스처와 얼굴표정이 학습이해도 및 의인화 효과에 미치는 영향)

  • Ryu, Jeeheon;Yu, Jeehee
    • Science of Emotion and Sensibility
    • /
    • v.16 no.3
    • /
    • pp.281-292
    • /
    • 2013
  • The purpose of this study was to identify the effect of gesture and facial expression on persona effects. Fifty-six college students were recruited, and non-verbal communication skills were applied to a pedagogical agent through gesture (conversational vs. deictic) and facial expression. Conversational gesture relates to the social interaction hypothesis of pedagogical agents, while deictic gesture relates to the attentional guidance hypothesis. Facial expression can be assumed to facilitate social interaction between the pedagogical agent and learners. Interestingly, the conversational gesture group showed a tendency to outperform the deictic gesture group, which may imply that social interaction theory has a strong impact on cognitive support as well as social interaction for learners. There was a significant interaction effect on engagement when both facial expression and conversational gesture were applied. This result has two implications. First, facial expression can facilitate the persona effect for engagement.

  • PDF

A Design and Implementation of 3D Facial Expressions Production System based on Muscle Model (근육 모델 기반 3D 얼굴 표정 생성 시스템 설계 및 구현)

  • Lee, Hyae-Jung;Joung, Suck-Tae
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.16 no.5
    • /
    • pp.932-938
    • /
    • 2012
  • Facial expression is significant in mutual communication: it can express countless inner feelings better than the many languages humans use. This paper proposes a muscle model-based 3D facial expression generation system to produce easy and natural facial expressions. Based on Waters' muscle model, it adds and uses the muscles needed to produce natural expressions. Among the many elements involved in producing expressions, it focuses on the core features of the face, such as the eyebrows, eyes, nose, mouth, and cheeks, and uses facial muscles and muscle vectors to group the anatomically connected facial muscles. By simplifying and reconstructing the AU, the basic unit of facial expression change, it generates easy and natural facial expressions.
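Waters-style muscle models displace mesh vertices toward a muscle's attachment point, with the pull decaying inside a zone of influence. A simplified 2-D sketch with illustrative constants (not the paper's parameters):

```python
import math

def pull_vertex(p, head, zone=1.0, k=0.3):
    """Simplified Waters-style linear muscle: a vertex inside the zone of
    influence is pulled toward the fixed muscle head; the pull decays
    with a cosine falloff, from 1 at the head to 0 at the zone boundary."""
    dx, dy = p[0] - head[0], p[1] - head[1]
    d = math.hypot(dx, dy)
    if d == 0.0 or d > zone:
        return p                                   # outside the influence zone
    a = math.cos(d / zone * math.pi / 2.0)         # falloff factor in (0, 1)
    return (p[0] - k * a * dx, p[1] - k * a * dy)

near = pull_vertex((0.4, 0.0), head=(0.0, 0.0))    # pulled toward the head
far = pull_vertex((2.0, 0.0), head=(0.0, 0.0))     # unaffected
```

Waters' full model additionally uses the head-to-tail muscle axis to weight the pull by angular distance; grouping such muscles per facial feature, as the paper describes, lets one AU contract several anatomically connected muscles at once.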