• Title/Summary/Keyword: 위치표정방법

Search Results: 124

Eye Detection Based on Texture Information (텍스처 기반의 눈 검출 기법)

  • Park, Chan-Woo;Park, Hyun;Moon, Young-Shik
    • Proceedings of the Korea Information Processing Society Conference / 2007.05a / pp.315-318 / 2007
  • Various research areas related to face images, such as automatic face recognition and facial expression recognition, generally require normalization of the input face image. Because the human face deforms in many ways with expression, illumination, and other factors, finding accurate representative feature points in every input image is a difficult problem. Closed eyes and small eyes are especially hard to detect, and they are a major cause of degraded performance in face-related research. For eye detection that is robust to such variations, this paper proposes an eye detection method that uses the texture information of the eyes. We define the characteristics of eye texture within the face region and design two types of Eye filters. The proposed method consists of four stages: AdaBoost-based face region detection, illumination normalization, eye candidate region detection using the Eye filters, and eye position point detection. Experimental results show that the proposed method is robust to face pose, expression, and illumination conditions, and also performs robustly on images of closed eyes.
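
A minimal sketch of the four-stage pipeline described above, assuming OpenCV as the implementation base. The paper's AdaBoost detector and texture-based Eye filters are not publicly available, so OpenCV's bundled Haar cascades stand in for both, and histogram equalization stands in for the illumination-normalization step.

```python
import cv2

# Stand-ins: OpenCV's Haar cascades replace the paper's AdaBoost face
# detector and its texture-based Eye filters, which are not public.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_eyes(image_bgr):
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)

    # Stage 1: face region detection
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    eye_points = []
    for (x, y, w, h) in faces:
        face = gray[y:y + h, x:x + w]

        # Stage 2: illumination normalization (histogram equalization)
        face = cv2.equalizeHist(face)

        # Stage 3: eye candidate regions, searched in the upper half of the face
        candidates = eye_cascade.detectMultiScale(face[: h // 2], minNeighbors=3)

        # Stage 4: reduce each candidate region to a single eye position point
        for (ex, ey, ew, eh) in candidates:
            eye_points.append((x + ex + ew // 2, y + ey + eh // 2))
    return eye_points
```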

2D Image-Based Individual 3D Face Model Generation and Animation (2차원 영상 기반 3차원 개인 얼굴 모델 생성 및 애니메이션)

  • 김진우;고한석;김형곤;안상철
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 1999.11b / pp.15-20 / 1999
  • This paper presents a method that extracts feature points of each facial component from a color video of a frontal face, generates an individual 3D face model, and animates it according to facial expression movements. The proposed method extracts facial feature points from the first frame of a 2D video obtained with a head-mounted camera designed to film only the front of the face, and computes the 3D coordinates of the facial feature points from these points and a generic 3D face model. Changes in expression are determined from the differences between the feature point positions in the initial image and those in subsequent images. To allow a wider range of applications, the extracted feature points and facial motion are represented in the FDP (Facial Definition Parameters) and FAP (Facial Animation Parameters) formats of MPEG-4 SNHC, whose phase-1 standard was recently finalized, and the individual face model generation and animation were performed using these representations. The proposed method can be usefully applied to MPEG-4-based video communication and video conferencing systems that operate on images captured by a single camera.
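
The displacement-from-first-frame idea above can be illustrated with a small numpy helper that expresses feature-point motion in FAPU-style units, as MPEG-4 FAPs do. The feature indices and the choice of the eye-separation unit are illustrative assumptions, not the paper's exact encoding.

```python
import numpy as np

def eye_separation_unit(neutral_pts, left_eye_idx=0, right_eye_idx=1):
    """FAPU-style scale from the neutral frame: eye separation / 1024.
    The eye-corner indices here are hypothetical placeholders."""
    neutral_pts = np.asarray(neutral_pts, dtype=float)
    es = np.linalg.norm(neutral_pts[left_eye_idx] - neutral_pts[right_eye_idx])
    return es / 1024.0

def fap_like_displacements(neutral_pts, current_pts, fapu):
    """Per-feature displacement from the neutral (first) frame, expressed
    in FAPU-style units in the spirit of MPEG-4 Facial Animation Parameters.

    neutral_pts, current_pts: (N, 2) arrays of tracked 2D feature points.
    fapu: scale factor, e.g. the value returned by eye_separation_unit().
    """
    neutral_pts = np.asarray(neutral_pts, dtype=float)
    current_pts = np.asarray(current_pts, dtype=float)
    return (current_pts - neutral_pts) / fapu
```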


Detection of Fatigue Damage in Aluminum Thin Plates with Rivet Holes by Acoustic Emission (리벳 구멍을 가진 알루미늄 박판구조의 피로손상 탐지를 위한 음향방출의 활용)

  • Kim, Jung-Chan;Kim, Sung-Jin;Kwon, Oh-Yang
    • Journal of the Korean Society for Nondestructive Testing / v.23 no.3 / pp.246-253 / 2003
  • The initiation and growth of short fatigue cracks in a simulated aircraft structure with a series of rivet holes were detected by acoustic emission (AE). The location and size of the short cracks were determined by AE source location techniques and by measurement with a traveling microscope. AE events increased intermittently with the initiation and growth of short cracks, forming a stepwise increment curve of cumulative AE events. For precise determination of AE source locations, a region of interest (ROI) was set around the rivet holes based on the plastic-zone size from fracture mechanics. Since the signal-to-noise ratio (SNR) was very low at this early stage of fatigue cracking, the accuracy of source location was also enhanced by wavelet-transform de-noising. In practice, the majority of AE signals detected within the ROI turned out to be noise from various origins. The results showed that the effects of structural geometry and SNR should be carefully taken into consideration for accurate evaluation of fatigue damage in the structure.
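
A rough sketch of two of the steps mentioned above: wavelet de-noising of an AE waveform (here with PyWavelets and a universal soft threshold) and a region-of-interest check around a rivet hole. The paper's sensor layout, thresholding rule, and plastic-zone calculation are not given, so the details are illustrative.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Soft-threshold wavelet de-noising of an AE waveform (illustrative)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Universal threshold estimated from the finest-scale detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                            for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

def in_roi(event_xy, hole_xy, roi_radius):
    """Keep only AE events located within a region of interest around a
    rivet hole; roi_radius would be chosen from the plastic-zone size."""
    return np.linalg.norm(np.asarray(event_xy) - np.asarray(hole_xy)) <= roi_radius
```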

Interactive Realtime Facial Animation with Motion Data (모션 데이터를 사용한 대화식 실시간 얼굴 애니메이션)

  • 김성호
    • Journal of the Korea Computer Industry Society / v.4 no.4 / pp.569-578 / 2003
  • This paper presents a method in which the user produces real-time facial animation by navigating a space of facial expressions built from a large number of captured expressions. The core of the method is defining the distance between facial expressions, distributing them into an intuitive space based on that distance, and providing a user interface for generating real-time expression animation in this space. We built the navigation space from about 2,400 captured facial expression frames; as the user travels freely through the space, the expressions located along the path are displayed in sequence. To distribute the roughly 2,400 captured expressions visually in the space, we compute the distance between every pair of frames, obtain all-pairs shortest paths with Floyd's algorithm, and use them as manifold distances. The frames are then placed in a 2D intuitive space by applying multidimensional scaling to the manifold distances, so that the original distances between expression frames are preserved as closely as possible. Because there is always an expression frame available to navigate to, the user can move freely through the intuitive space without restriction when generating expression animation. The easy-to-use interface also makes it efficient to review and regenerate the real-time animation until the user obtains the desired result.
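
A minimal sketch of the distance-and-embedding machinery described above: pairwise frame distances, all-pairs shortest paths via the Floyd-Warshall algorithm as the manifold distance, and multidimensional scaling into 2D. The per-frame feature vectors and the k-nearest-neighbor graph construction are assumptions; the paper's exact frame distance is not spelled out here.

```python
import numpy as np
from scipy.sparse.csgraph import floyd_warshall
from sklearn.manifold import MDS

def embed_expression_space(frame_features, n_neighbors=8):
    """Embed captured expression frames into a 2D navigation space.

    frame_features: (n_frames, d) array, one feature vector per frame
    (an assumption; the paper's frame representation is not given).
    Returns (n_frames, 2) coordinates. Assumes the k-NN graph is connected.
    """
    X = np.asarray(frame_features, dtype=float)
    n = len(X)

    # Pairwise Euclidean distances between frames
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

    # Keep only each frame's k nearest neighbors; other edges are non-edges
    graph = np.full((n, n), np.inf)
    for i in range(n):
        nn = np.argsort(d[i])[1 : n_neighbors + 1]
        graph[i, nn] = d[i, nn]
        graph[nn, i] = d[i, nn]
    np.fill_diagonal(graph, 0.0)

    # Manifold distance = all-pairs shortest path (Floyd-Warshall)
    manifold_d = floyd_warshall(graph, directed=False)

    # Multidimensional scaling of the manifold distances into 2D
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
    return mds.fit_transform(manifold_d)
```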


The Design of Context-Aware Middleware Architecture for Processing Facial Expression Information (얼굴표정정보를 처리하는 상황인식 미들웨어의 구조 설계)

  • Jin-Bong Kim
    • Proceedings of the Korea Information Processing Society Conference / 2008.11a / pp.649-651 / 2008
  • Context-aware computing can broadly be viewed as part of ubiquitous computing, but its approach to application differs from that of ubiquitous computing. Context-aware computing research to date has focused mainly on identifying the objects that generate contexts in a given space and on recognizing the contexts those objects produce, and position information has been the primary context used. This paper proposes the architecture of CM-FEIP, a context-aware middleware that can recognize emotion by using an object's facial expression as context information. The virtual-space modeling of CM-FEIP consists of context modeling and service modeling. An ontology is built on top of facial expression recognition technology to recognize the object's emotion. The object's facial expression is used as context information, and when the expression is neutral, various environmental data (temperature, humidity, weather, etc.) are used instead. The ontology expressing the object's emotion is built with the OWL language, and Jena is used as the emotion inference engine.
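
The paper builds its ontology in OWL and runs emotion inference with Jena, a Java framework. As a rough Python analogue only, the sketch below stores facial-expression context as RDF triples with rdflib and uses a SPARQL query in place of the middleware's rule-based reasoning; every class and property name here is made up for illustration.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Hypothetical namespace and terms; the actual CM-FEIP ontology is not public.
EX = Namespace("http://example.org/cmfeip#")

g = Graph()
g.bind("ex", EX)

# Context facts: the recognized facial expression of a tracked person
g.add((EX.person1, RDF.type, EX.Person))
g.add((EX.person1, EX.hasFacialExpression, EX.Smile))

# When the expression is neutral, environmental context (temperature,
# humidity, weather, ...) would be added instead, e.g.:
g.add((EX.room1, EX.hasTemperature, Literal(26.5)))

# A simple SPARQL query standing in for the rule-based emotion inference
# that the middleware runs with an OWL reasoner (Jena in the paper).
results = g.query("""
    PREFIX ex: <http://example.org/cmfeip#>
    SELECT ?person WHERE {
        ?person ex:hasFacialExpression ex:Smile .
    }
""")
for row in results:
    print(f"{row.person} is inferred to be in a positive emotional state")
```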

3D Position Tracking for Moving objects using Stereo CCD Cameras (스테레오 CCD 카메라를 이용한 이동체의 실시간 3차원 위치추적)

  • Kwon, Hyuk-Jong;Bae, Sang-Keun;Kim, Byung-Guk
    • Spatial Information Research / v.13 no.2 s.33 / pp.129-138 / 2005
  • In this paper, a 3D position tracking algorithm for moving objects using stereo CCD cameras is proposed. The goal is to extract the coordinates of moving objects while improving operating and data-processing efficiency. We applied relative orientation to the stereo CCD cameras and extracted the image coordinates of the moving object in the left and right images after segmenting it. The 3D position of the moving object is then determined from the image coordinates acquired in the left and right images. Independent relative orientation was used to determine the relative position and attitude of the stereo CCD cameras, RGB pixel values were used to segment the moving objects, and the coordinates of the moving objects were calculated by space intersection. Finally, we tested the system experimentally and compared the accuracy of the results.
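
A minimal numpy sketch of the space-intersection step described above, assuming the relative orientation of the two CCD cameras has already yielded 3x4 projection matrices. It uses standard linear (DLT) triangulation rather than the paper's exact formulation.

```python
import numpy as np

def space_intersection(P_left, P_right, x_left, x_right):
    """Linear (DLT) triangulation of one object point from a stereo pair.

    P_left, P_right: 3x4 camera projection matrices obtained from the
                     (relative) orientation of the two CCD cameras.
    x_left, x_right: (u, v) image coordinates of the moving object's
                     centroid in the left and right images.
    Returns the 3D position as a length-3 array.
    """
    def rows(P, uv):
        u, v = uv
        return np.stack([u * P[2] - P[0], v * P[2] - P[1]])

    A = np.vstack([rows(np.asarray(P_left, float), x_left),
                   rows(np.asarray(P_right, float), x_right)])
    # Solve A X = 0 by SVD; the 3D point is the last right singular vector.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```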


Performance tests for the expression synthesis system based on pleasure and arousal dimensions and efficiency comparisons for its interfaces (쾌 및 각성 차원 기반 표정 합성 시스템의 성능 검증 및 인터페이스의 효율성 비교)

  • 한재현;정찬섭
    • Korean Journal of Cognitive Science / v.14 no.1 / pp.41-50 / 2003
  • We tested the capability of the pleasure- and arousal-dimension-based facial expression synthesis system and propose the most effective interface for it. First, we confirmed the adequacy of the dimensional model as the basic structure of the internal states for the system: subjects compared 17 facial expressions on the two axes, and the results validated the fundamental hypothesis of the system. Second, we chose 21 representative expressions from the system to test its performance and had subjects rate their similarities; analyzing these data with multidimensional scaling methods verified the system's reliability. Third, we compared the efficiency of two interfaces, coordinate values and slide bars, to find the most suitable interface for the system. Subjects synthesized 25 facial expressions with each interface. The results showed that visualizing the two dimensional values in a Cartesian coordinate plane is the more stable input display for a dimension-based facial expression synthesis system.
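
The multidimensional scaling step mentioned above can be sketched with classical (Torgerson) MDS. The conversion from similarity ratings to dissimilarities is an assumption, since the exact transformation used in the study is not stated here.

```python
import numpy as np

def similarities_to_dissimilarities(S):
    """Turn a similarity-rating matrix (e.g. mean ratings for the 21
    representative expressions) into dissimilarities. The subtraction from
    the maximum rating is an assumed, not reported, conversion."""
    S = np.asarray(S, dtype=float)
    return S.max() - S

def classical_mds(dissimilarity, n_components=2):
    """Classical (Torgerson) MDS: embed items so pairwise distances
    approximate the given dissimilarities, e.g. to recover a 2D
    pleasure-arousal-like configuration of the rated expressions."""
    D = np.asarray(dissimilarity, dtype=float)
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1][:n_components]
    scale = np.sqrt(np.clip(eigvals[order], 0, None))
    return eigvecs[:, order] * scale
```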


Feasibility Assessment of the Photogrammetric-board for Deformation Measuring of Reinforced-soil Wall (보강토 옹벽 변위측량을 위한 사진측량용 표정판 적용 가능성 평가)

  • Lee, Hyoseong;Na, Hyunho;Park, Byung-Wook;Kim, Yong Don
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.34 no.5 / pp.495-501 / 2016
  • This study applied close-range digital photogrammetry to measure the deformation of a reinforced-soil wall over time. We propose using a photogrammetric board to determine 3D coordinates and compute exterior orientation parameters from the images without measuring control points. The displacements obtained by the proposed method were compared with those from a total station. The measurement error was within 5 cm, and no deformation occurred over the three-month period. The proposed method using the photogrammetric board can therefore be used to measure the deformation of reinforced-soil walls.
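
A sketch of how exterior orientation parameters could be recovered from the photogrammetric board without surveyed control points, assuming the board's point layout and the camera intrinsics are known from prior calibration. The paper's actual computation is not shown, so OpenCV's solvePnP is used here as a stand-in.

```python
import cv2
import numpy as np

def exterior_orientation_from_board(board_xyz, image_uv, K, dist_coeffs=None):
    """Estimate exterior orientation (rotation, translation) of one exposure
    from the known 3D layout of the photogrammetric board and its measured
    image points.

    board_xyz: (N, 3) board point coordinates in the board's object frame.
    image_uv:  (N, 2) corresponding image measurements.
    K:         3x3 camera intrinsic matrix (from prior calibration).
    """
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(board_xyz, dtype=np.float32),
        np.asarray(image_uv, dtype=np.float32),
        np.asarray(K, dtype=np.float32),
        np.asarray(dist_coeffs, dtype=np.float32),
        flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("solvePnP failed")
    R, _ = cv2.Rodrigues(rvec)      # rotation matrix of the exposure
    camera_position = -R.T @ tvec   # projection centre in board coordinates
    return R, tvec, camera_position
```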

Comparison of Position-Rotation Models and Orbit-Attitude Models with SPOT images (SPOT 위성영상에서의 위치-회전각 모델과 궤도-자세각 모델의 비교)

  • Kim Tae-Jung
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.24 no.1 / pp.47-55 / 2006
  • This paper investigates the performance of sensor models based on satellite position and rotation angles and of sensor models based on satellite orbit and attitude angles. We analyze the performance with respect to the accuracy of bundle adjustment and the accuracy of exterior orientation estimation. In particular, as one way to analyze the latter, we establish sensor models for one image and apply them to other scenes acquired from the same orbit. Experimental results indicated that, for the sole purpose of bundle adjustment accuracy, both position-rotation models and orbit-attitude models can be used. The accuracy of estimating exterior orientation parameters was similar for both models when the analysis was based on a single scene. However, when multiple scenes within the same orbital segment were used, the orbit-attitude model with attitude biases as unknowns showed the most accurate results.
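
A minimal sketch of the core shared by both sensor-model families: forming a rotation matrix from attitude (or rotation) angles and projecting a ground point into image space with the collinearity equations. The pushbroom orbit modeling and the attitude-bias parameterization studied in the paper are not reproduced here.

```python
import numpy as np

def rotation_from_rph(roll, pitch, heading):
    """Rotation matrix from roll, pitch, heading angles in radians
    (one common convention; the paper's exact convention may differ)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    ch, sh = np.cos(heading), np.sin(heading)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ch, -sh, 0], [sh, ch, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def collinearity_project(ground_xyz, sensor_xyz, R, focal_length):
    """Project a ground point to image coordinates with the collinearity
    equations, given the sensor position and attitude at exposure time.
    Sign conventions vary between photogrammetric formulations."""
    d = R.T @ (np.asarray(ground_xyz, float) - np.asarray(sensor_xyz, float))
    x = -focal_length * d[0] / d[2]
    y = -focal_length * d[1] / d[2]
    return x, y
```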

A study on the Accuracy Improvement of Three Dimensional Positioning Using SPOT Imagery (SPOT 위성영상(衛星映像)을 이용(利用)한 3차원(次元) 위치결정(位置決定)의 정확도(正確度) 향상(向上)에 관(關)한 연구(硏究))

  • Yeu, Bock Mo;Cho, Gi Sung;Lee, Hyun Jik
    • KSCE Journal of Civil and Environmental Engineering Research / v.11 no.4 / pp.151-162 / 1991
  • This study aims to improve positioning accuracy by analyzing the accuracy of three-dimensional positioning according to the various data types and preprocessing levels of SPOT imagery and the acquisition method for ground control points, and to develop a three-dimensional positioning algorithm and program. The optimum polynomials of the exterior orientation parameters for each preprocessing level (level 1B: 15 variables; levels 1AP and 1A: 12 variables) are determined. The accuracy of level 1AP was the best in the positioning-accuracy analysis, but level 1A, which is a digital image data form, showed similar positioning accuracy. For level 1A images with a different acquisition method for ground control points, the accuracy of three-dimensional positioning was greatly improved. However, when the accuracy of the ground control points is low, merely introducing additional parameters does not improve the accuracy; a simultaneous adjustment that includes a blunder detection method should therefore be adopted.
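
A small sketch of the simultaneous-adjustment-with-blunder-detection idea recommended above: a plain linear least-squares fit whose standardized residuals are screened against a threshold, in the style of data snooping. The design matrix for the SPOT polynomial orientation model is not reconstructed here, so A and l are generic inputs.

```python
import numpy as np

def detect_blunders(A, l, threshold=3.0):
    """Least-squares adjustment l = A x + v with a simple blunder check:
    observations whose standardized residuals exceed the threshold are
    flagged as suspected blunders (data-snooping style screening).

    A: (m, n) design matrix, l: (m,) observation vector.
    Returns (x_hat, flags) where flags[i] is True for suspected blunders.
    """
    A = np.asarray(A, float)
    l = np.asarray(l, float)
    x_hat, *_ = np.linalg.lstsq(A, l, rcond=None)
    v = l - A @ x_hat                          # residuals
    m, n = A.shape
    sigma0 = np.sqrt(v @ v / (m - n))          # a posteriori unit variance
    # Redundancy numbers from the hat-matrix diagonal
    Q = A @ np.linalg.inv(A.T @ A) @ A.T
    r = 1.0 - np.diag(Q)
    std_res = v / (sigma0 * np.sqrt(np.clip(r, 1e-12, None)))
    return x_hat, np.abs(std_res) > threshold
```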
