• Title/Summary/Keyword: 얼굴모델 (face model)

Pictorial Model of Upper Body based Pose Recognition and Particle Filter Tracking (그림모델과 파티클필터를 이용한 인간 정면 상반신 포즈 인식)

  • Oh, Chi-Min;Islam, Md. Zahidul;Kim, Min-Wook;Lee, Chil-Woo
    • 한국HCI학회:학술대회논문집 / 2009.02a / pp.186-192 / 2009
  • In this paper, we present a recognition method for human frontal upper-body poses. In HCI (Human-Computer Interaction) and HRI (Human-Robot Interaction), a person who is interacting usually faces the robot or computer directly and uses hand gestures, so we focus on frontal upper-body poses. There are two main difficulties: first, the human pose consists of many parts and therefore has many degrees of freedom (DOF), which makes modeling the pose difficult; second, matching image features to the model information is difficult. We therefore use a Pictorial Model to model the main poses, which cover most of the space of frontal upper-body poses, and recognize them against a main-pose database. The parameters of the recognized main pose are then used in a particle filter, which predicts the posterior distribution over the pose parameters and determines a more specific pose by updating the model parameters from the particle with the maximum likelihood. By recognizing the main poses and tracking the specific pose, we recognize human frontal upper-body poses (a minimal particle-filter sketch is given after this entry).

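
A minimal sketch, in Python/NumPy, of the predict/update/resample loop of a particle filter over pose parameters as described in the abstract above; the pose dimensionality, motion noise, and likelihood function are illustrative placeholders rather than the paper's actual Pictorial Model terms.

```python
import numpy as np

def particle_filter_step(particles, weights, observe_likelihood, motion_std=0.05):
    """One predict/update/resample step over a vector of pose parameters."""
    n, d = particles.shape
    # Predict: diffuse each particle with Gaussian motion noise.
    particles = particles + np.random.normal(0.0, motion_std, size=(n, d))
    # Update: re-weight each particle by how well it explains the observation.
    weights = weights * np.array([observe_likelihood(p) for p in particles]) + 1e-12
    weights = weights / weights.sum()
    # The particle with maximum likelihood gives the refined pose estimate.
    best = particles[np.argmax(weights)]
    # Resample: draw particles in proportion to their weights.
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n), best

# Toy usage with a hypothetical 4-parameter pose and a dummy likelihood.
rng = np.random.default_rng(0)
particles = rng.normal(size=(200, 4))
weights = np.full(200, 1.0 / 200)
target_pose = np.array([0.5, -0.2, 0.1, 0.0])            # pretend "true" pose
likelihood = lambda p: np.exp(-np.sum((p - target_pose) ** 2))
for _ in range(30):
    particles, weights, best = particle_filter_step(particles, weights, likelihood)
print("estimated pose parameters:", np.round(best, 2))
```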

A Research on a Context-Awareness Middleware for Intelligent Homes (지능적인 홈을 위한 상황인식 미들웨어에 대한 연구)

  • Choi Jonghwa;Choi Soonyong;Shin Dongkyoo;Shin Dongil
    • The KIPS Transactions:PartA / v.11A no.7 s.91 / pp.529-536 / 2004
  • Smart homes integrated with sensors, actuators, wireless networks, and context-aware middleware will soon become part of our daily life. This paper describes a context-aware middleware that provides automatic home services based on a user's preferences. The middleware uses six kinds of basic data for learning and predicting the user's preference for multimedia content: pulse, body temperature, facial expression, room temperature, time, and location. These six data sets make up the context model and are used by the context manager module. The log manager module maintains history information about the multimedia content chosen by the user. The user-pattern learning and prediction module, based on a neural network, predicts the proper home service for the user. The test results show that an individual's preference pattern can be effectively evaluated and predicted by the proposed context model (a minimal sketch of such a predictor is given below).
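
A minimal sketch, assuming scikit-learn, of the kind of neural-network preference predictor the middleware's learning module is described as using; the feature ordering, synthetic data, and label definition are hypothetical stand-ins, not the paper's data or network.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# The six context features named in the abstract, in a fixed (assumed) order.
FEATURES = ["pulse", "body_temperature", "facial_expression",
            "room_temperature", "time_of_day", "location"]

# Hypothetical training set: each row is one observed context vector, each
# label is the multimedia content class the user chose in that context.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(FEATURES)))        # stand-in sensor readings
y = (X[:, 0] + X[:, 3] > 0).astype(int)          # stand-in preference labels

# A small feed-forward network playing the role of the learning module.
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)

# Predict the preferred content/service for a newly observed context.
new_context = rng.normal(size=(1, len(FEATURES)))
print("predicted preference class:", model.predict(new_context)[0])
```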

A Study On Three-dimensional Optimized Face Recognition Model : Comparative Studies and Analysis of Model Architectures (3차원 얼굴인식 모델에 관한 연구: 모델 구조 비교연구 및 해석)

  • Park, Chan-Jun;Oh, Sung-Kwun;Kim, Jin-Yul
    • The Transactions of The Korean Institute of Electrical Engineers / v.64 no.6 / pp.900-911 / 2015
  • In this paper, a 3D face recognition model is designed using a polynomial-based RBFNN (Radial Basis Function Neural Network) and a PNN (Polynomial Neural Network), and its recognition rate is evaluated. In existing 2D face recognition models, the recognition rate can degrade under external conditions, such as when facial features depend on the brightness of the video, so 3D face recognition with a 3D scanner is used to overcome this disadvantage of 2D face recognition. In the preprocessing step, the 3D face images obtained under varying poses are transformed into frontal images by pose compensation. The depth data of the facial shape is extracted with a multiple point signature, and the depth information over the whole face is obtained using the tip of the nose as a reference point. Parameter optimization is carried out with both ABC (Artificial Bee Colony) and PSO (Particle Swarm Optimization) for effective training and recognition. The experimental data consists of face images of students and researchers in the IC&CI Lab of Suwon University. Using these 3D face images, the performance of 3D face recognition is evaluated and compared for the two types of models as well as for the point signature method based on two kinds of depth data (a minimal PSO sketch for this kind of parameter optimization is given below).
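
A minimal NumPy sketch of PSO-style parameter optimization of the kind named in the abstract; the search space, hyperparameters, and the stand-in error surface are illustrative and not the paper's actual RBFNN/PNN training objective.

```python
import numpy as np

def pso(objective, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `objective` over R^dim with a basic particle swarm."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
    gbest, gbest_val = pbest[np.argmin(pbest_val)].copy(), pbest_val.min()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Velocity update: inertia plus pulls toward personal and global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if pbest_val.min() < gbest_val:
            gbest, gbest_val = pbest[np.argmin(pbest_val)].copy(), pbest_val.min()
    return gbest, gbest_val

# Toy usage: tune two hypothetical model parameters (say, an RBF width and a
# regularization weight) by minimizing a stand-in validation-error surface.
error_surface = lambda p: (p[0] - 0.3) ** 2 + (p[1] + 0.1) ** 2
best_params, best_err = pso(error_surface, dim=2)
print("best parameters:", np.round(best_params, 3), "error:", round(best_err, 5))
```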

Training Network Design Based on Convolution Neural Network for Object Classification in few class problem (소 부류 객체 분류를 위한 CNN기반 학습망 설계)

  • Lim, Su-chang;Kim, Seung-Hyun;Kim, Yeon-Ho;Kim, Do-yeon
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.1 / pp.144-150 / 2017
  • Recently, deep learning has been used for intelligent processing and for improving the accuracy of data analysis. It forms a computational model composed of multiple data-processing layers that learn data representations through several levels of abstraction. One category of deep learning, the convolutional neural network (CNN), is used in various research fields such as human pose estimation, face recognition, image classification, and speech recognition. CNNs with deep layers and many classes show good performance on image classification and achieve high classification rates, but they suffer from overfitting when only a small amount of data is available. We therefore design a training network based on a convolutional neural network and train it on our image data set for object classification in a few-class problem. The experiments show a classification rate that is 7.06% higher on average than previous networks designed to classify objects in a 1000-class problem (a small CNN sketch for a few-class setting is shown below).
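
A minimal PyTorch sketch of a small CNN for a few-class image classification problem, as discussed above; the layer sizes, the 5-class/64x64 setting, the dropout choice, and the random training batch are illustrative, not the network designed in the paper.

```python
import torch
import torch.nn as nn

# A small CNN for a hypothetical 5-class problem on 64x64 RGB images.
class SmallCNN(nn.Module):
    def __init__(self, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 64), nn.ReLU(),
            nn.Dropout(0.5),              # helps limit overfitting on small data
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One training step on random stand-in data, just to show the loop shape.
model = SmallCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
images, labels = torch.randn(8, 3, 64, 64), torch.randint(0, 5, (8,))
loss = criterion(model(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("training loss:", loss.item())
```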

Development of facial recognition application for automation logging of emotion log (감정로그 자동화 기록을 위한 표정인식 어플리케이션 개발)

  • Shin, Seong-Yoon;Kang, Sun-Kyoung
    • Journal of the Korea Institute of Information and Communication Engineering / v.21 no.4 / pp.737-743 / 2017
  • The intelligent life-log system proposed in this paper is intended to identify and record a wide range of everyday information about events, based on when, where, with whom, what, and how they occur, that is, a wide variety of contextual information involving person, scene, age, emotion, relation, state, location, moving route, and so on, to attach a unique tag to each piece of such information, and to give users quick and easy access to it. Context awareness generates and classifies information on a per-tag basis using auto-tagging and biometric recognition technology and builds a situational information database. In this paper, we developed an active modeling method and an application that recognizes neutral and smiling expressions from lip lines in order to record emotion information automatically (a rough lip-line sketch is given below).
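
A rough geometric sketch, in Python/NumPy, of how lip-line landmarks could separate a smile from a neutral expression; the landmark source, the corner-lift rule, and the threshold are hypothetical and are not the active modeling method developed in the paper.

```python
import numpy as np

def classify_lip_line(lip_points, corner_lift_thresh=0.08):
    """Classify 'smile' vs 'neutral' from 2D outer-lip landmark points.

    lip_points: (N, 2) array of (x, y) lip landmarks in image coordinates
    (y grows downward), e.g. from any facial landmark detector.
    """
    pts = np.asarray(lip_points, dtype=float)
    left = pts[np.argmin(pts[:, 0])]           # left mouth corner
    right = pts[np.argmax(pts[:, 0])]          # right mouth corner
    center_y = pts[:, 1].mean()                # vertical center of the lip line
    width = right[0] - left[0]
    # How far the corners sit above the lip center, normalized by mouth width.
    corner_lift = (center_y - 0.5 * (left[1] + right[1])) / max(width, 1e-6)
    return "smile" if corner_lift > corner_lift_thresh else "neutral"

# Toy usage with hand-made landmark coordinates.
neutral_lips = [(10, 50), (20, 48), (30, 47), (40, 48), (50, 50)]
smiling_lips = [(10, 42), (20, 48), (30, 50), (40, 48), (50, 42)]
print(classify_lip_line(neutral_lips))   # -> neutral
print(classify_lip_line(smiling_lips))   # -> smile
```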

A Deep Learning-Based Face Mesh Data Denoising System (딥 러닝 기반 얼굴 메쉬 데이터 디노이징 시스템)

  • Roh, Jihyun;Im, Hyeonseung;Kim, Jongmin
    • Journal of IKEEE / v.23 no.4 / pp.1250-1256 / 2019
  • Although one can easily generate real-world 3D mesh data using a 3D printer or a depth camera, the generated data inevitably includes unwanted noise, so mesh denoising is essential to obtain intact 3D mesh data. However, conventional mathematical denoising methods require preprocessing and often eliminate important features of the 3D mesh. To address this problem, this paper proposes a deep learning based 3D mesh denoising method. Specifically, we propose a convolution-based autoencoder model consisting of an encoder and a decoder. The convolution operation applied to the mesh data performs denoising by considering the relationship between each vertex of the mesh and its surrounding vertices, and once the convolution is completed, a sampling operation is performed to improve the learning speed. Experimental results show that the proposed autoencoder produces denoised data faster and with higher quality than conventional methods (a minimal autoencoder sketch is given below).
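
A minimal PyTorch sketch of a convolution-based denoising autoencoder with a down-sampling step, in the spirit of the model described above; treating the mesh as a 1D sequence of vertex coordinates is a simplification of mesh convolution, and the layer sizes and random data are illustrative.

```python
import torch
import torch.nn as nn

# Stand-in denoiser: 1D convolutions over a sequence of (x, y, z) vertex
# coordinates approximate the paper's vertex-neighbourhood convolution.
class MeshDenoisingAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),                         # sampling step speeds up training
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv1d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 3, kernel_size=5, padding=2),  # back to (x, y, z)
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Train on (noisy, clean) vertex pairs; here random stand-in data, 1024 vertices.
model = MeshDenoisingAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.randn(4, 3, 1024)
noisy = clean + 0.05 * torch.randn_like(clean)
loss = nn.functional.mse_loss(model(noisy), clean)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print("reconstruction loss:", loss.item())
```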

A Black and White Comics Generation Procedure for the Video Frame Image using Region Extension based on HSV Color Model (HSV 색상 모델과 영역 확장 기법을 이용한 동영상 프레임 이미지의 흑백 만화 카투닝 알고리즘)

  • Ryu, Dong-Sung;Cho, Hwan-Gue
    • Journal of KIISE:Computer Systems and Theory / v.35 no.12 / pp.560-567 / 2008
  • In this paper, we discuss a simple and straightforward binarization procedure that can generate black-and-white comics from video frame images. In most black-and-white comics, regions of human skin are rendered white or light gray, while dark regions are filled with regular patterns such as hatching; a simple thresholding method is not enough to reproduce this. Our procedure consists of four steps. First, we apply a bilateral filter to suppress noisy color variation while preserving boundaries. Second, we perform mean-shift segmentation so that pixels of similar color are clustered. Third, the clustered regions are merged and extended by our region extension algorithm, which considers the color of each region. Finally, we decide which pixels are turned on or off with our dynamic binarization method based on the HSV color model. This black-and-white cartooning procedure successfully rendered comic cuts from a well-known film in a reasonable time and with manual intervention (a rough sketch of the pipeline is shown below).
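
A rough OpenCV sketch of the pipeline's main steps (bilateral filtering, mean-shift segmentation, HSV-based binarization); the paper's region extension step and dynamic per-region threshold are replaced here by a single global threshold on the V channel, and all parameter values are illustrative.

```python
import cv2
import numpy as np

def frame_to_bw_comic(frame_bgr, v_threshold=128):
    """Simplified bilateral filter -> mean-shift -> HSV binarization pipeline."""
    # 1) Suppress noisy colour variation while keeping edges.
    smoothed = cv2.bilateralFilter(frame_bgr, d=9, sigmaColor=75, sigmaSpace=75)
    # 2) Cluster similarly coloured pixels into flat regions.
    segmented = cv2.pyrMeanShiftFiltering(smoothed, sp=15, sr=30)
    # 3) Decide which pixels are "ink" (black) from the HSV value channel.
    hsv = cv2.cvtColor(segmented, cv2.COLOR_BGR2HSV)
    value = hsv[:, :, 2]
    bw = np.where(value > v_threshold, 255, 0).astype(np.uint8)
    return bw

# Toy usage on a random stand-in frame (replace with a real video frame).
frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
comic = frame_to_bw_comic(frame)
print("binary frame:", comic.shape, "white pixels:", int((comic == 255).sum()))
```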

The Hybrid Model using SVM and Decision Tree for Intrusion Detection (SVM과 의사결정트리를 이용한 혼합형 침입탐지 모델)

  • Um, Nam-Kyoung;Woo, Sung-Hee;Lee, Sang-Ho
    • The KIPS Transactions:PartC / v.14C no.1 s.111 / pp.1-6 / 2007
  • To operate a secure network, it is very important to raise the positive detection rate and to lower the negative detection rate in order to reduce the damage from network intrusions. By applying SVM to intrusion detection, we expect to improve the real-time detection of intrusion data. However, because SVM classifies by computing values after expressing the input data in a vector space, continuous data types cannot be used as input data. We therefore present a hybrid model that combines SVM with a decision tree to make up for this weak point. As a result, the intrusion detection rate, the false-positive error rate, and the false-negative error rate are improved by 5.6%, 0.16%, and 0.82%, respectively (a minimal sketch of such a hybrid is given below).
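
A minimal scikit-learn sketch of one generic way to combine an SVM with a decision tree for intrusion detection; the voting arrangement, synthetic data, and hyperparameters are assumptions, not the hybrid architecture or data set used in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.metrics import accuracy_score

# Stand-in "network connection" records: 20 numeric features, label 1 = intrusion.
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.9, 0.1],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# One simple combination: majority voting between an SVM (margin-based
# separation of numeric features) and a decision tree (rule-like splits).
hybrid = VotingClassifier(
    estimators=[("svm", SVC(kernel="rbf", gamma="scale")),
                ("tree", DecisionTreeClassifier(max_depth=8, random_state=0))],
    voting="hard",
)
hybrid.fit(X_train, y_train)
pred = hybrid.predict(X_test)

# Report detection accuracy plus false-positive / false-negative rates.
fp = np.mean(pred[y_test == 0] == 1)
fn = np.mean(pred[y_test == 1] == 0)
print(f"accuracy={accuracy_score(y_test, pred):.3f}  FP rate={fp:.3f}  FN rate={fn:.3f}")
```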

Adult Image Detection Using Skin Color and Multiple Features (피부색상과 복합 특징을 이용한 유해영상 인식)

  • Jang, Seok-Woo;Choi, Hyung-Il;Kim, Gye-Young
    • Journal of the Korea Society of Computer and Information / v.15 no.12 / pp.27-35 / 2010
  • Extracting skin color is important in adult image detection, but conventional methods still have fundamental problems in doing so. Human skin colors are not all the same because of individual differences and different races, and skin regions within an image may not have identical color due to makeup, the camera used, and so on; nevertheless, most existing methods use predefined skin color models. To resolve these problems, in this paper we propose a new adult image detection method that robustly segments skin areas with a skin color distribution model adapted to the input image and verifies whether the segmented skin regions contain naked bodies by fusing several representative features through a neural network. Various experiments show that our method outperforms others. We expect the proposed method to be useful in many applications such as face detection and objectionable image filtering (a rough sketch of input-adapted skin segmentation is shown below).
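
A rough Python/OpenCV sketch of an input-adapted skin-colour model: a loose generic Cr/Cb prior selects candidate pixels, a Gaussian fitted to those pixels becomes the image-specific model, and pixels close to that model are kept. All ranges and thresholds are illustrative, not the paper's values.

```python
import cv2
import numpy as np

def segment_skin_adaptive(img_bgr):
    """Segment skin with a colour model adapted to the input image."""
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    cr, cb = ycrcb[:, :, 1], ycrcb[:, :, 2]

    # 1) Candidate skin pixels from a loose generic prior on (Cr, Cb).
    prior = (cr > 133) & (cr < 173) & (cb > 77) & (cb < 127)
    if prior.sum() < 100:                       # not enough evidence to adapt
        return prior.astype(np.uint8) * 255

    # 2) Fit an image-specific Gaussian over (Cr, Cb) of the candidates.
    samples = np.stack([cr[prior], cb[prior]], axis=1)
    mean = samples.mean(axis=0)
    cov = np.cov(samples.T) + 1e-3 * np.eye(2)
    inv_cov = np.linalg.inv(cov)

    # 3) Keep pixels whose Mahalanobis distance to the adapted model is small.
    diff = np.stack([cr - mean[0], cb - mean[1]], axis=-1)
    d2 = np.einsum("...i,ij,...j->...", diff, inv_cov, diff)
    return (d2 < 9.0).astype(np.uint8) * 255    # roughly a 3-sigma ellipse

# Toy usage on a random stand-in image (replace with a real photo).
img = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
mask = segment_skin_adaptive(img)
print("skin pixels:", int((mask > 0).sum()), "of", mask.size)
```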

Human Fatigue Inferring using Bayesian Networks (베이지안 네트워크를 이용한 인간의 피로도 추론)

  • Park, Ho-Sik;Nam, Kee-Hwan;Han, Jun-Hee;Jung, Yeon-Gil;Lee, Young-Sik;Ra, Sang-Dong;Bae, Cheol-Soo
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / v.9 no.1 / pp.1145-1148 / 2005
  • In this paper, we introduce a probabilistic model based on Bayesian networks (BNs) for inferring human fatigue by integrating information from various visual cues and relevant contextual information. Visual parameters that typically characterize a person's cognitive state, including parameters related to eyelid movement, gaze, head movement, and facial expression, serve as the sensory observations. However, an individual visual cue or piece of contextual information does not provide enough evidence to determine human fatigue, so a Bayesian network model was developed to fuse as much contextual and visual-cue information as possible for monitoring human fatigue. The experimental results demonstrate the utility of the proposed BNs for predicting and modeling fatigue (a minimal Bayesian-network sketch is given after this entry).

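
A minimal NumPy sketch of cue fusion with a simple two-level Bayesian network in which a binary Fatigue node is the parent of each binary visual-cue node; the cue set and all conditional probabilities are made-up illustrative values, not the paper's network.

```python
import numpy as np

# P(Fatigue = no, yes) and P(cue = 1 | Fatigue) for each observed visual cue.
P_FATIGUE = np.array([0.7, 0.3])
P_CUE_GIVEN_FATIGUE = {
    "slow_eyelid_closure": np.array([0.10, 0.70]),
    "fixed_gaze":          np.array([0.15, 0.60]),
    "head_nodding":        np.array([0.05, 0.50]),
    "yawning_expression":  np.array([0.10, 0.65]),
}

def infer_fatigue(observed_cues):
    """Posterior P(Fatigue | cues) by enumerating the two Fatigue states.

    observed_cues: dict cue_name -> 0/1 for the cues actually observed;
    unobserved cues are simply left out (marginalized away).
    """
    posterior = P_FATIGUE.copy()
    for cue, value in observed_cues.items():
        p_cue_1 = P_CUE_GIVEN_FATIGUE[cue]
        likelihood = p_cue_1 if value == 1 else 1.0 - p_cue_1
        posterior = posterior * likelihood
    return posterior / posterior.sum()

# Fusing several weak cues yields a more confident fatigue estimate.
obs = {"slow_eyelid_closure": 1, "fixed_gaze": 1, "head_nodding": 0}
print("P(fatigued | cues) =", round(infer_fatigue(obs)[1], 3))
```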