• Title/Summary/Keyword: Motion Recognition (움직임 인식)


SVM Based Facial Expression Recognition for Expression Control of an Avatar in Real Time (실시간 아바타 표정 제어를 위한 SVM 기반 실시간 얼굴표정 인식)

  • Shin, Ki-Han;Chun, Jun-Chul;Min, Kyong-Pil
    • 한국HCI학회:학술대회논문집 / 2007.02a / pp.1057-1062 / 2007
  • Facial expression recognition is of growing importance in fields such as psychology research, facial animation synthesis, robotics, and HCI (Human Computer Interaction). Facial expressions provide important cues in social interaction, such as a person's emotional state and degree of interest. Facial expression recognition methods can be broadly divided into those based on still images and those based on video. Still-image methods are fast because of their low computational load, but recognition by matching or registration becomes difficult when the face changes significantly. Video-based methods use techniques such as neural networks, optical flow, and HMMs (Hidden Markov Models) to process the user's changing expression continuously, making them useful for real-time interaction with a computer; however, they require more computation than still-image methods and large amounts of data for training or database construction. The real-time facial expression recognition system proposed in this paper consists of four stages: face region detection, facial feature detection, facial expression classification, and avatar control. To detect the face region accurately in images captured by a webcam, histogram equalization and a reference white technique are applied, and the face region is detected using the HT color model and a PCA (Principal Component Analysis) transform. Within the detected face region, candidate regions for facial feature elements are determined from the geometric structure of the face, and the features needed for expression recognition are extracted by template matching and edge detection at each feature point. A feature vector is obtained from the motion information computed by applying an optical flow algorithm to each detected feature point. These feature vectors are classified into facial expressions using an SVM (Support Vector Machine), and the recognized expression is rendered on an avatar.

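
The system above derives its expression features from the motion of tracked facial feature points. A minimal sketch of that step (pure NumPy; the point coordinates and frame pairing are hypothetical, not the authors' implementation):

```python
import numpy as np

def motion_feature_vector(prev_pts, curr_pts):
    """Build an expression feature vector from tracked facial feature points.

    prev_pts, curr_pts: (N, 2) arrays of (x, y) positions of the same
    facial feature points in two consecutive frames, as an optical-flow
    tracker (e.g. Lucas-Kanade) would deliver them.
    Returns a flat vector of per-point displacements (dx, dy) -- the kind
    of motion information that would be fed to an SVM classifier.
    """
    prev = np.asarray(prev_pts, dtype=float)
    curr = np.asarray(curr_pts, dtype=float)
    disp = curr - prev              # per-point motion vectors
    return disp.ravel()             # concatenate into one feature vector

# Two hypothetical frames: the mouth-corner points move, a smile-like cue.
prev = [(100, 200), (140, 200)]
curr = [(101, 196), (139, 197)]
print(motion_feature_vector(prev, curr))  # [ 1. -4. -1. -3.]
```

Any off-the-shelf SVM could then be trained on vectors of this form, one class per expression.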

Detecting near-duplication Video Using Motion and Image Pattern Descriptor (움직임과 영상 패턴 서술자를 이용한 중복 동영상 검출)

  • Jin, Ju-Kyong;Na, Sang-Il;Jeong, Dong-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP / v.48 no.4 / pp.107-115 / 2011
  • In this paper, we propose a fast and efficient algorithm for detecting near-duplicate videos by content-based retrieval in a large-scale video database. To handle large amounts of video easily, we split each video into small segments using scene-change detection. For video services and copyright-related business models, a technology is needed that detects near-duplicates, i.e., finds long matched videos rather than videos containing only a short part or a single frame of the original. To detect near-duplicate videos, we propose a motion distribution descriptor and a frame descriptor for each video segment. The motion distribution descriptor is constructed from the motion vectors of macroblocks obtained during the video decoding process. When matching descriptors, the motion distribution descriptor is used as a filter to improve matching speed. However, the motion distribution alone has low discriminability, so identification is decided using frame descriptors extracted from representative frames selected within each scene segment. The proposed algorithm shows a high success rate and a low false-alarm rate. In addition, matching with this descriptor is very fast, confirming that the algorithm is useful in practical applications.
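
The motion distribution descriptor and its use as a cheap pre-filter can be sketched as follows (illustrative NumPy code under assumed details: an 8-sector direction histogram and an L1 threshold, neither taken from the paper):

```python
import numpy as np

def motion_distribution(motion_vectors, bins=8):
    """Quantize macroblock motion vectors into a direction histogram.

    motion_vectors: (N, 2) array of (dx, dy) vectors, as obtained from
    macroblocks during video decoding.  Returns a normalized histogram
    over `bins` direction sectors -- a compact segment-level descriptor.
    """
    mv = np.asarray(motion_vectors, dtype=float)
    angles = np.arctan2(mv[:, 1], mv[:, 0])          # direction of each vector
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
    return hist / max(hist.sum(), 1)

def passes_filter(desc_a, desc_b, threshold=0.5):
    """L1 pre-filter: only candidates with similar motion distributions
    go on to the more expensive frame-descriptor comparison."""
    return np.abs(desc_a - desc_b).sum() < threshold

rightward = motion_distribution([(1, 0), (1, 0.1), (0.9, 0)])
leftward = motion_distribution([(-1, 0), (-1, -0.1)])
print(passes_filter(rightward, leftward))  # False: filtered out early
```

Because the histogram is built from motion vectors already present in the compressed stream, the filter costs almost nothing per segment.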

Content based Video Copy Detection Using Spatio-Temporal Ordinal Measure (시공간 순차 정보를 이용한 내용기반 복사 동영상 검출)

  • Jeong, Jae-Hyup;Kim, Tae-Wang;Yang, Hun-Jun;Jin, Ju-Kyong;Jeong, Dong-Seok
    • Journal of the Institute of Electronics Engineers of Korea SP / v.49 no.2 / pp.113-121 / 2012
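
The spatio-temporal ordinal measure named in the title can be sketched in a simplified form: each frame is divided into blocks, block intensities are replaced by their ranks, and sequences of rank signatures are compared. The grid size and distance below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def ordinal_signature(frame, grid=(2, 2)):
    """Rank-based frame signature for an ordinal measure.

    frame: 2-D grayscale image array.  The frame is split into a grid of
    blocks and each block's mean intensity is replaced by its rank among
    the blocks.  Ranks are invariant to global brightness and contrast
    changes, which is why ordinal measures suit copy detection.
    """
    frame = np.asarray(frame, dtype=float)
    gh, gw = grid
    h, w = frame.shape
    means = [frame[i*h//gh:(i+1)*h//gh, j*w//gw:(j+1)*w//gw].mean()
             for i in range(gh) for j in range(gw)]
    return np.argsort(np.argsort(means))   # rank of each block

def ordinal_distance(sig_seq_a, sig_seq_b):
    """Spatio-temporal distance: mean rank disagreement over a
    sequence of per-frame signatures."""
    a, b = np.asarray(sig_seq_a), np.asarray(sig_seq_b)
    return np.abs(a - b).mean()
```

A brightness-shifted copy of a frame yields an identical signature, so the distance between an original clip and such a copy is zero.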

Robust object tracking using projected motion and histogram intersection (투영된 모션과 히스토그램 인터섹션 기법을 이용한 강건한 물체추적)

  • 이봉석;문영식
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2000.11b / pp.143-148 / 2000
  • This paper proposes a noise-robust object tracking method using projected motion and histogram intersection. Previous methods rely on template matching, re-detection of object boundaries, or object motion information; however, template matching is computationally expensive, boundary re-detection can produce incorrect contours, and motion-based methods have difficulty tracking a moving object from a moving camera. This paper proposes a noise-robust tracking technique that handles object translation, rotation, and scale using projected motion and a template mask of the query image. The query image is constructed by region selection after image segmentation, and the object is recognized with a color-based histogram intersection technique. Translation is detected by projecting the horizontal and vertical intensity values onto 1-D signals to estimate coarse motion and correct the translation error, while rotation and scale changes are detected by shifting the template mask of the query image and transforming it to the corresponding rotation and scale.

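
The two core ingredients, projection-based motion estimation and histogram intersection, can be sketched as below (a NumPy illustration of the idea, not the authors' code; the correlation-based shift search is an assumed concrete realization of "detecting coarse motion from projected 1-D signals"):

```python
import numpy as np

def estimate_shift(prev, curr):
    """Coarse translation from projected motion.

    Each frame is projected onto 1-D signals (column sums and row sums
    of intensity); cross-correlating the projections of two frames gives
    the dominant horizontal and vertical shift without any 2-D search.
    """
    prev, curr = np.asarray(prev, float), np.asarray(curr, float)
    shifts = []
    for axis in (0, 1):                    # axis 0: column profile -> dx
        p, c = prev.sum(axis=axis), curr.sum(axis=axis)
        p, c = p - p.mean(), c - c.mean()
        corr = np.correlate(c, p, mode="full")
        shifts.append(int(np.argmax(corr)) - (len(p) - 1))
    return tuple(shifts)                   # (dx, dy)

def histogram_intersection(h1, h2):
    """Color-histogram intersection used for object recognition:
    1.0 for identical normalized histograms, 0.0 for no overlap."""
    h1, h2 = np.asarray(h1, float), np.asarray(h2, float)
    return np.minimum(h1, h2).sum() / h2.sum()
```

The 1-D projections reduce the translation search from an O(HW) template scan to two short correlations, which is the source of the speed advantage over full template matching.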

The Multi-marker Tracking for Facial Animation (Facial Animation을 위한 다중 마커의 추적)

  • 이문희;김철기;김경석
    • Proceedings of the Korea Multimedia Society Conference / 2001.06a / pp.553-557 / 2001
  • Animating facial expressions is regarded as one of the most difficult problems in computer animation because of the complexity of facial structure and the subtlety of facial surface motion. In recent 3D animation, film special effects, and game production, motion capture systems measure real human motion and facial expressions numerically and feed them directly into animation, dramatically reducing production time, labor, and cost. Existing motion capture systems, however, are expensive because they use high-speed cameras, and they also suffer from various motion-tracking problems. This paper proposes an economical and efficient facial motion tracking technique, applicable to a motion capture system for facial animation, that uses an ordinary low-cost camera together with a neural network and image processing techniques.


Development of AR-based Coding Puzzle Mobile Application Using Command Placement Recognition (명령어 배치 인식을 활용한 AR 코딩퍼즐 모바일앱 개발)

  • Seo, Beomjoo;Cho, Sung Hyun
    • Journal of Korea Game Society / v.20 no.3 / pp.35-44 / 2020
  • In this study, we propose a reliable command placement recognition algorithm using the tangible command blocks developed for our coding puzzle platform, and present its performance measurements in an Augmented Reality testbed environment. The algorithm can reliably recognize up to 30 tangible blocks and their placements simultaneously within 5 seconds. It has been successfully ported to an existing coding puzzle mobile app and can operate an IoT-attached robot via a Bluetooth-connected mobile app.

A method of describing and retrieving a sequence of moving object using Shape Variation Map (모양 변화 축적도를 이용한 움직이는 객체의 표현 및 검색 방법)

  • Choi, Min-Seok;Kim, Whoi-Yul
    • The KIPS Transactions:PartB / v.11B no.1 / pp.1-6 / 2004
  • Motion information in a video clip often plays an important role in characterizing the content of the clip, and a number of methods have been developed to analyze and retrieve video content using motion information. Most of these methods, however, focus on the direction or trajectory of motion rather than on the movement of the object itself. In this paper, we introduce a shape variation descriptor for describing the shape variation caused by object movement over time, and propose a method to describe and retrieve the shape variation of an object using a shape variation map. Experimental results show that the proposed method performs much better than the previous method, by 11%, and is very effective at describing shape variation, making it applicable to semantic retrieval applications.
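
One simple reading of a shape variation map is a time-average of the object's binary silhouettes; the sketch below (NumPy, illustrative, not the paper's exact construction) shows the idea and a plain L1 distance for retrieval:

```python
import numpy as np

def shape_variation_map(masks):
    """Accumulate a sequence of binary object masks into one map.

    masks: (T, H, W) array of silhouettes of the moving object over time.
    Averaging along the time axis yields a map in which pixels covered in
    every frame are 1, never-covered pixels are 0, and values in between
    record how the shape changed as the object moved.
    """
    return np.asarray(masks, dtype=float).mean(axis=0)

def map_distance(map_a, map_b):
    """Retrieval compares two shape variation maps with an L1 distance."""
    return np.abs(np.asarray(map_a) - np.asarray(map_b)).mean()

# Two tiny 2x2 silhouettes of a "moving" object over two frames.
walk = [[[1, 0], [1, 0]],
        [[0, 1], [1, 0]]]
print(shape_variation_map(walk))  # [[0.5 0.5]
                                  #  [1.  0. ]]
```

Because the map is a fixed-size image regardless of sequence length, sequences of different durations can be compared directly.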

Mobile Object Tracking Algorithm Using Particle Filter (Particle filter를 이용한 이동 물체 추적 알고리즘)

  • Kim, Se-Jin;Joo, Young-Hoon
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.4 / pp.586-591 / 2009
  • In this paper, we propose a mobile object tracking algorithm based on feature vectors and a particle filter. First, we detect the movement area of the mobile object using the RGB color model and extract feature vectors from the input image using the KLT algorithm, obtaining the first set of feature vectors by matching the extracted vectors to the detected movement area. Second, we detect a new movement area of the mobile object using the RGB and HSI color models and obtain new feature vectors by applying the snake algorithm, then find the second set of feature vectors by applying them to the new movement area. The tracking algorithm is then designed by applying the second feature vectors to a particle filter. Finally, we validate the applicability of the proposed method through experiments in a complex environment.
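
The particle-filter machinery at the end of this pipeline follows the standard predict-update-resample cycle. A generic 1-D sketch (NumPy; the noise parameters and Gaussian likelihood are assumptions, and this stands in for, not reproduces, the paper's KLT/snake-based feature pipeline):

```python
import numpy as np

def particle_filter_step(particles, weights, measurement,
                         motion_noise=1.0, meas_noise=2.0, rng=None):
    """One predict-update-resample cycle of a 1-D particle filter.

    particles: (N,) hypothesized object positions;
    measurement: an observed position (e.g. a matched feature point).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # Predict: diffuse particles with motion noise.
    particles = particles + rng.normal(0.0, motion_noise, size=particles.shape)
    # Update: weight each particle by the Gaussian likelihood of the measurement.
    weights = weights * np.exp(-0.5 * ((particles - measurement) / meas_noise) ** 2)
    weights = weights / weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

After a few steps the particle cloud concentrates around the measured position, and the weighted mean of the particles serves as the track estimate.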

Application of Euclidean Distance Similarity for Smartphone-Based Moving Context Determination (스마트폰 기반의 이동상황 판별을 위한 유클리디안 거리유사도의 응용)

  • Jang, Young-Wan;Kim, Byeong Man;Jang, Sung Bong;Shin, Yoon Sik
    • Journal of Korea Society of Industrial Information Systems / v.19 no.4 / pp.53-63 / 2014
  • Moving-context determination is an important problem to be resolved in a mobile computing environment. This paper presents a method for recognizing and classifying a mobile user's moving context using Euclidean distance similarity. In the proposed method, basic data are gathered from Global Positioning System (GPS) and accelerometer sensors, and from these data the system decides which of four moving situations the user is in: stopped, walking, running, or traveling by car. To evaluate the effectiveness and feasibility of the proposed scheme, we implemented applications using several variations of Euclidean distance similarity on the Android system and measured their accuracy. Experimental results show that the proposed system achieves more than 90% accuracy.
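
Classification by Euclidean distance similarity reduces to picking the nearest reference vector. In the sketch below the feature choice (speed, accelerometer-magnitude variation) and the centroid values are hypothetical stand-ins, not the paper's actual features:

```python
import numpy as np

# Hypothetical per-context reference vectors: [speed in m/s from GPS,
# accelerometer-magnitude standard deviation].  Illustrative values only.
CENTROIDS = {
    "stop":    np.array([0.0, 0.1]),
    "walking": np.array([1.4, 1.0]),
    "run":     np.array([3.5, 2.5]),
    "car":     np.array([12.0, 0.5]),
}

def classify_context(feature):
    """Return the moving context whose reference vector is nearest
    to `feature` in Euclidean distance."""
    feature = np.asarray(feature, dtype=float)
    return min(CENTROIDS, key=lambda k: np.linalg.norm(CENTROIDS[k] - feature))

print(classify_context([1.2, 0.9]))   # walking
print(classify_context([11.0, 0.6]))  # car
```

In practice the reference vectors would be learned from labeled sensor recordings rather than fixed by hand.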

Research on Micro-Movement Responses of Facial Muscles by Intimacy, Empathy, Valence (친밀도, 공감도, 긍정도에 따른 얼굴 근육의 미세움직임 반응 차이)

  • Cho, Ji Eun;Park, Sang-In;Won, Myoung Ju;Park, Min Ji;Whang, Min-Cheol
    • The Journal of the Korea Contents Association / v.17 no.2 / pp.439-448 / 2017
  • Facial expression is an important factor in social interaction, and facial muscle movement provides emotional information for developing social relationships; however, facial movement has rarely been used to recognize social emotion. This study analyzes facial micro-movements to recognize social emotions such as intimacy, empathy, and valence. Seventy-six university students were presented with stimuli for social emotions, and their facial expressions were recorded with a camera. As a result, facial micro-movements showed significant differences across social emotions. After extracting the amount of movement of 3 unconscious muscles and 18 conscious muscles, the dominant frequency band was identified. Muscles around the nose and cheek showed significant differences for intimacy, those around the mouth for empathy, and those around the jaw for valence. The results suggest new facial movements for expressing social emotion in virtual avatars and for recognizing social emotion.
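
Extracting a dominant frequency band from a muscle-movement signal is a standard spectral step; a generic sketch (NumPy FFT, with an assumed 30 fps camera rate, not the study's exact procedure):

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Dominant frequency (Hz) of a facial-muscle movement signal.

    signal: 1-D array of movement amounts sampled at `fs` Hz.
    Returns the frequency with the largest spectral magnitude after
    removing the DC component.
    """
    signal = np.asarray(signal, dtype=float)
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]

# 10 s of a synthetic 2 Hz micro-movement with a weak 7 Hz component,
# sampled at a hypothetical 30 fps camera rate.
fs = 30.0
t = np.arange(0, 10, 1 / fs)
sig = np.sin(2 * np.pi * 2.0 * t) + 0.2 * np.sin(2 * np.pi * 7.0 * t)
print(dominant_frequency(sig, fs))  # 2.0
```

Comparing such per-muscle dominant bands across stimulus conditions is one way the reported region-wise differences (nose/cheek vs. mouth vs. jaw) could be quantified.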