• Title/Summary/Keyword: Human Interface (인간 인터페이스)

Search Results: 499

A Study on Creation of 3D Facial Model Using Facial Image (임의의 얼굴 이미지를 이용한 3D 얼굴모델 생성에 관한 연구)

  • Lee, Hea-Jung;Joung, Suck-Tae
    • Journal of the Korea Society of Computer and Information / v.12 no.2 s.46 / pp.21-28 / 2007
  • Facial modeling and animation have long been studied in computer graphics. Facial modeling technology is widely used in virtual reality research such as MPEG-4, as well as in film, advertising, and the game industry. A 3D facial model capable of interacting with humans is therefore essential for a more realistic interface. We developed a realistic and convenient 3D facial modeling system that requires only an arbitrary facial image. The system fits the Korean standard facial model (generic model) to the given image: control points on the generic model's wireframe are first aligned to the facial image, and the 3D facial model is then generated intuitively by elastically adjusting those control points. The resulting model can be inspected and modified through translation, magnification, reduction, and rotation. We tested the system on 30 facial images of 630x630 pixels to verify its usefulness.

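The fitting step described in the entry above (aligning control points of the generic model to points marked on an arbitrary facial image) can be sketched as a least-squares transform estimation. The snippet below is a minimal illustration, not the paper's implementation: the control-point coordinates, the affine model, and the 630x630 image values are assumptions.

```python
import numpy as np

def fit_generic_model(model_pts, image_pts, model_vertices):
    """Fit generic-model control points to user-marked image points with a
    2D affine least-squares transform, then apply it to all model vertices.
    model_pts, image_pts: (N, 2); model_vertices: (M, 2)."""
    n = model_pts.shape[0]
    # Design matrix for x' = a*x + b*y + tx, y' = c*x + d*y + ty
    A = np.hstack([model_pts, np.ones((n, 1))])              # (N, 3)
    params, *_ = np.linalg.lstsq(A, image_pts, rcond=None)   # (3, 2)
    V = np.hstack([model_vertices, np.ones((model_vertices.shape[0], 1))])
    return V @ params                                         # fitted (M, 2)

# Hypothetical control points: generic-model positions vs. positions
# clicked on a 630x630 facial image.
model_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
image_pts = np.array([[120.0, 150.0], [510.0, 160.0], [110.0, 540.0], [500.0, 550.0]])
vertices  = np.array([[0.5, 0.5], [0.25, 0.75]])
print(fit_generic_model(model_pts, image_pts, vertices))
```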

Recognition of Human Facial Expressions using Optical Flow of Feature Regions (얼굴 특징영역상의 광류를 이용한 표정 인식)

  • Lee Mi-Ae;Park Ki-Soo
    • Journal of KIISE: Software and Applications / v.32 no.6 / pp.570-579 / 2005
  • Facial expression recognition has potential applications in many fields, including man-machine interface development, human identification, and restoration of facial expressions with virtual models. Using sequential facial images, this study proposes a simple method for detecting human facial expressions such as happiness, anger, surprise, and sadness; the method also works when the motion in the image sequence is not rigid. We first locate the face and the elements relevant to facial expressions, then estimate the feature regions of those elements using color, size, and position information. The direction pattern of each element's feature region is determined from optical flow estimated with gradient-based methods. Each direction pattern is matched against the direction model proposed in this study, and the expression is identified as the one whose direction model yields the minimum combined matching score. Experiments verify the validity of the proposed method.
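
As a rough illustration of the pipeline above (per-region optical flow, direction patterns, minimum-score matching against direction models), here is a hedged sketch. Farneback flow from OpenCV stands in for the paper's gradient-based flow, and the region coordinates and direction models are invented for the example.

```python
import numpy as np
import cv2  # requires opencv-python

def region_direction(prev_gray, next_gray, region):
    """Dominant optical-flow direction (degrees) inside one feature region.
    region = (x, y, w, h); Farneback flow is a stand-in for the paper's
    gradient-based flow estimate."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    x, y, w, h = region
    patch = flow[y:y + h, x:x + w]             # (h, w, 2) -> dx, dy
    dx, dy = patch[..., 0].mean(), patch[..., 1].mean()
    return np.degrees(np.arctan2(dy, dx)) % 360

def match_expression(pattern, direction_models):
    """Pick the expression whose direction model is closest to the observed
    per-region pattern (sum of angular differences as the matching score)."""
    def score(model):
        return sum(min(abs(p - m), 360 - abs(p - m))
                   for p, m in zip(pattern, model))
    return min(direction_models, key=lambda name: score(direction_models[name]))

# Hypothetical direction models for brow/eye/mouth regions (degrees).
models = {"happiness": [90, 90, 270], "surprise": [90, 90, 90], "anger": [270, 270, 0]}
print(match_expression([85, 95, 265], models))  # -> happiness
```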

Understanding of 3D Human Body Motion based on Mono-Vision (단일 비전 기반 인체의 3차원 운동 해석)

  • Han, Young-Mo
    • The KIPS Transactions: Part B / v.18B no.4 / pp.193-200 / 2011
  • This paper proposes a low-cost visual analyzer of human body motion for real-time applications such as human-computer interfacing, virtual reality in medicine, and telemonitoring of patients. To keep the cost of use low, the algorithm requires only a single camera, and to keep it convenient, no optical markers are used. For real-time operation the algorithm is designed to have a closed form with high accuracy: the motion of each human body joint is formulated as a 2D universal joint model instead of the common 3D spherical joint model, without any kind of approximation, and the estimation process is formulated as an optimization problem to preserve accuracy. The resulting algorithm is applied to each joint of the body in turn. Experiments show that human body motion capture can be performed efficiently and robustly with this algorithm.
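
A toy version of the 2-DoF universal-joint idea, assuming a known limb length and orthographic projection (the paper's actual camera model and formulation are not reproduced here): the two joint angles follow directly from the projected endpoint, with no iterative optimization.

```python
import numpy as np

def universal_joint_angles(u, v, L):
    """Closed-form angles (alpha, beta) of a 2-DoF universal joint whose limb
    of known length L projects orthographically to image point (u, v).
    The limb is modeled as R_y(beta) @ R_x(alpha) @ [0, 0, L]."""
    alpha = np.arcsin(np.clip(-v / L, -1.0, 1.0))
    beta = np.arcsin(np.clip(u / (L * np.cos(alpha)), -1.0, 1.0))
    return alpha, beta

# Round-trip check with a synthetic pose.
L, alpha, beta = 1.0, 0.3, -0.5
p = np.array([L * np.cos(alpha) * np.sin(beta),
              -L * np.sin(alpha),
              L * np.cos(alpha) * np.cos(beta)])
print(universal_joint_angles(p[0], p[1], L))   # ~ (0.3, -0.5)
```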

Human Motion Tracking based on 3D Depth Point Matching with Superellipsoid Body Model (타원체 모델과 깊이값 포인트 매칭 기법을 활용한 사람 움직임 추적 기술)

  • Kim, Nam-Gyu
    • Journal of Digital Contents Society / v.13 no.2 / pp.255-262 / 2012
  • Human motion tracking is receiving attention in many research areas, such as human-computer interaction, video conferencing, surveillance analysis, and game or entertainment applications. Over the last decade, various tracking technologies have been demonstrated and refined for these applications, including real-time computer vision and image processing and advanced man-machine interfaces. In this paper, we introduce a cost-effective, real-time human motion tracking algorithm based on matching depth-image 3D points against a given superellipsoid body representation. The body model is built with a parametric volume modeling method based on superellipsoids and consists of 18 articulated joints. For more accurate estimation, an initial inverse kinematics solution is computed from classified body-part information, and the initial pose is then refined with a 3D point matching algorithm.
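
To make the body representation concrete, the sketch below evaluates the standard superellipsoid inside-outside function and uses it to score candidate poses against depth points, loosely in the spirit of the point-matching step. The half-axes, exponents, and candidate poses are made up, and the paper's actual matching procedure may differ.

```python
import numpy as np

def superellipsoid_io(points, scale, e1, e2):
    """Superellipsoid inside-outside function F(p); F == 1 on the surface,
    < 1 inside, > 1 outside. points: (N, 3) in the body's local frame."""
    x, y, z = (np.abs(points / scale)).T
    return (x ** (2 / e2) + y ** (2 / e2)) ** (e2 / e1) + z ** (2 / e1)

def pose_fit_score(depth_points, pose_R, pose_t, scale, e1=1.0, e2=1.0):
    """Mean deviation of depth points from the superellipsoid surface for a
    candidate pose (rotation pose_R, translation pose_t): lower is better."""
    local = (depth_points - pose_t) @ pose_R        # world -> body frame
    return np.mean(np.abs(superellipsoid_io(local, scale, e1, e2) - 1.0))

# Hypothetical torso segment: pick the better of two candidate translations.
rng = np.random.default_rng(0)
scale = np.array([0.15, 0.10, 0.30])                # half-axes in metres
pts = rng.normal(0, 1, (200, 3))
pts = pts / np.linalg.norm(pts, axis=1, keepdims=True) * scale  # on the surface
candidates = [np.array([0.0, 0.0, 0.0]), np.array([0.05, 0.0, 0.0])]
print(min(candidates, key=lambda t: pose_fit_score(pts, np.eye(3), t, scale)))
```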

Multimedia Content Authoring System for Healthcare Information service based on Web (웹 기반 의료정보 서비스를 위한 멀티미디어 콘텐츠 저작시스템)

  • Lee, Hyae-Jung;Lee, Min-Kyu;Jeong, Young-Sik;Han, Sung-Kook;Joung, Suck-Tae
    • Journal of the Korea Society of Computer and Information / v.10 no.4 s.36 / pp.37-46 / 2005
  • With the increasing interest in medical information services for healthy living, multimedia authoring tools for medical information service contents are strongly required. In this paper, a new multimedia authoring tool with a user-friendly interface is implemented, based on SMIL (Synchronized Multimedia Integration Language) for producing Web-based multimedia contents. The tool not only provides functions to immediately play, verify, and modify parts of the content, but also makes it easy to insert multimedia objects into the content. Multimedia contents containing diverse healthcare and medical information can be designed easily with this tool, and its enhanced usability can contribute to realizing a variety of medical information services.

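Since the tool above is built on SMIL, a minimal example of the kind of document such a tool might emit may help. The Python sketch below assembles a simple SMIL presentation that plays a video, narration, and caption in parallel; the file names, region layout, and element choice are illustrative assumptions, not output of the actual authoring tool.

```python
import xml.etree.ElementTree as ET

def build_smil(video_src, audio_src, caption_src):
    """Minimal SMIL document playing a video, narration audio, and a text
    caption in parallel - the kind of synchronized presentation a SMIL
    authoring tool produces. File names are placeholders."""
    smil = ET.Element("smil")
    head = ET.SubElement(smil, "head")
    layout = ET.SubElement(head, "layout")
    ET.SubElement(layout, "region", id="video", width="320", height="240")
    ET.SubElement(layout, "region", id="caption", top="240", height="40")
    body = ET.SubElement(smil, "body")
    par = ET.SubElement(body, "par")                 # play children in parallel
    ET.SubElement(par, "video", src=video_src, region="video")
    ET.SubElement(par, "audio", src=audio_src)
    ET.SubElement(par, "text", src=caption_src, region="caption")
    return ET.tostring(smil, encoding="unicode")

print(build_smil("exam_guide.mpg", "narration.wav", "caption.txt"))
```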

A Study on Scenario-Centered Design Process for Multimedia Contents Planning & Design - through a Case Study of Contents Design of Internet Shopping Mall "Shop 'n' Chat" (멀티미디어 컨텐츠 디자인 프로세스에서의 시나리오 활용에 관한 연구 - 인터넷 쇼핑몰 "Shop 'n' Chat"의 컨텐츠 디자인 사례연구를 중심으로)

  • 김현정
    • Archives of Design Research / v.15 no.3 / pp.137-148 / 2002
  • As human-centered design has become a new paradigm in the field of design, scenario-centered design methodology also needs to be applied when planning and designing new multimedia contents. In this paper, a scenario-centered design process for multimedia contents planning and design is identified through a case study of the contents design of the internet shopping mall "Shop 'n' Chat". The study proceeded with three kinds of scenarios during the design process: a present-situation scenario to find the needed contents, a behavior-prototype scenario to make the contents concrete, and a future-usage scenario to design the visual interface. These three stages are illustrated by a shopping scenario among women in their twenties, an internet shopping scenario using a messenger, and finally a navigation/interaction scenario in the future internet shopping mall.


An Emotion Recognition Technique using Speech Signals (음성신호를 이용한 감정인식)

  • Jung, Byung-Wook;Cheun, Seung-Pyo;Kim, Youn-Tae;Kim, Sung-Shin
    • Journal of the Korean Institute of Intelligent Systems / v.18 no.4 / pp.494-500 / 2008
  • In the development of human interface technology, interactions between human and machine are important, and research on emotion recognition supports these interactions. This paper presents an algorithm for emotion recognition based on personalized speech signals. The proposed approach extracts characteristics of the speech signal for emotion recognition using PLP (perceptual linear prediction) analysis. PLP analysis was originally designed to suppress speaker-dependent components in features used for automatic speech recognition, but later experiments demonstrated its effectiveness for speaker recognition tasks. Accordingly, this paper proposes an algorithm that evaluates a person's emotion from speech signals in real time using personalized emotion patterns built with PLP analysis. Experimental results show that the maximum recognition rate of the speaker-dependent system is above 90%, while the average recognition rate is 75%. The proposed system has a simple structure but is efficient enough to be used in real time.
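
A hedged sketch of the pattern-matching idea above: build one personalized feature template per emotion and classify a new utterance by the nearest template. librosa MFCCs are used here as a stand-in because PLP analysis (the paper's method) is not readily available in common Python libraries, and the random "recordings" are placeholders.

```python
import numpy as np
import librosa  # MFCCs stand in here for the paper's PLP coefficients

def utterance_features(y, sr):
    """Per-utterance feature vector: mean cepstral coefficients over time."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # (13, frames)
    return mfcc.mean(axis=1)

def recognize_emotion(y, sr, emotion_patterns):
    """Match an utterance against personalized emotion patterns (one mean
    feature vector per emotion) and return the nearest emotion label."""
    feat = utterance_features(y, sr)
    return min(emotion_patterns,
               key=lambda e: np.linalg.norm(feat - emotion_patterns[e]))

# Hypothetical personalized patterns; real ones would come from a speaker's
# labelled recordings, not random noise.
sr = 16000
rng = np.random.default_rng(1)
patterns = {e: utterance_features(rng.standard_normal(sr), sr)
            for e in ("neutral", "happy", "angry", "sad")}
print(recognize_emotion(rng.standard_normal(sr), sr, patterns))
```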

A system design for developing a 'learning community' ('학습 커뮤니티' 개발을 위한 시스템 디자인)

  • 천진향
    • Archives of Design Research / v.12 no.2 / pp.19-28 / 1999
  • Current technology does not emphasize closer connection through group collaboration; rather, it underlines the distances and separations visible in today's society. As the interpretation of our territory of life, still based on the 'physical' model of the industrial era, is no longer valid, a different 'sense of belonging' corresponds to this new notion of territory. The main idea is how to create a sense of community that connects actual spaces and the children in each place. The subject of this thesis is therefore to design a system that brings distant children closer by developing a 'learning community' connecting two classes across the globe. The contents cover the development of a non-geographical 'learning community' based on co-evolution and the system design of conceptual models for that community. In conclusion, a physical interface design is suggested to let users develop accurate mental models of the system.


Real-time 3D Pose Estimation of Both Human Hands via RGB-Depth Camera and Deep Convolutional Neural Networks (RGB-Depth 카메라와 Deep Convolution Neural Networks 기반의 실시간 사람 양손 3D 포즈 추정)

  • Park, Na Hyeon;Ji, Yong Bin;Gi, Geon;Kim, Tae Yeon;Park, Hye Min;Kim, Tae-Seong
    • Annual Conference of KIPS / 2018.10a / pp.686-689 / 2018
  • 3D hand pose estimation (HPE) is an important technology for smart human-computer interfaces. This study presents a deep-learning-based hand pose estimation system that recognizes the 3D pose of both hands in real time from a single RGB-Depth camera. The system consists of four stages. First, both hands are detected and extracted from the RGB and depth images using skin detection and depth-cutting algorithms. Second, a Convolutional Neural Network (CNN) classifier is used to distinguish the right hand from the left hand; it consists of three convolution layers and two fully connected layers and takes the extracted depth images as input. Third, a trained CNN regressor, composed of multiple convolutional, pooling, and fully connected layers, estimates the hand joints from the extracted depth images of the left and right hands. The CNN classifier and regressor are trained on a dataset of 22,000 depth images. Finally, the 3D pose of each hand is reconstructed from the estimated joint information. Test results show that the CNN classifier distinguishes the right hand from the left hand with 96.9% accuracy, and the CNN regressor estimates the 3D hand joint positions with an average error of 8.48 mm. The proposed hand pose estimation system can be used in a variety of applications, including virtual reality (VR), augmented reality (AR), and mixed reality (MR).
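
The left/right-hand classifier above is described as three convolution layers plus two fully connected layers over extracted depth images. A PyTorch sketch of such a network follows; the kernel sizes, channel counts, and 96x96 input resolution are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn

class HandSideClassifier(nn.Module):
    """3 convolution layers + 2 fully connected layers, as in the abstract;
    kernel sizes, channel counts, and the 96x96 input are assumptions."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),   # 96 -> 48
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
            nn.Linear(128, 2),             # logits: left hand / right hand
        )

    def forward(self, depth):              # depth: (batch, 1, 96, 96)
        return self.classifier(self.features(depth))

logits = HandSideClassifier()(torch.randn(4, 1, 96, 96))
print(logits.shape)                        # torch.Size([4, 2])
```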

Depth Image based Egocentric 3D Hand Pose Recognition for VR Using Mobile Deep Residual Network (모바일 Deep Residual Network을 이용한 뎁스 영상 기반 1 인칭 시점 VR 손동작 인식)

  • Park, Hye Min;Park, Na Hyeon;Oh, Ji Heon;Lee, Cheol Woo;Choi, Hyoung Woo;Kim, Tae-Seong
    • Annual Conference of KIPS / 2019.10a / pp.1137-1140 / 2019
  • Human-computer interface technology useful for virtual reality (VR), augmented reality (AR), and mixed reality (MR) is essential. In particular, human hand gesture recognition enables intuitive interaction and can serve as a convenient controller in many fields. In this study, we build a hand gesture database generation system for depth-image-based egocentric (first-person view) hand gesture recognition and capture the egocentric database needed to train the recognizer. We then implement a depth-image-based egocentric hand pose recognition (HPR) deep learning model, a Deep Residual Network, for mobile head-mounted-device (HMD) VR. Finally, the trained Residual Network regressor is ported to an Android mobile device and run as a real-time hand gesture recognition system in mobile VR, where real-time 3D hand gesture recognition is verified through interaction with virtual objects.
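
For illustration, a compact residual-network regressor of the kind described (depth image in, 3D hand joint coordinates out) might look like the PyTorch sketch below. The block layout, the 96x96 input, and the 21-joint output are assumptions; the paper's mobile network and training details are not reproduced here.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Basic residual block (identity skip connection) used to build a
    lightweight regressor; layer sizes are assumptions."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))    # skip connection

class HandPoseRegressor(nn.Module):
    """Depth image in, 3D coordinates of `joints` hand joints out
    (21 is an assumed joint count; the abstract does not state it)."""
    def __init__(self, joints=21):
        super().__init__()
        self.joints = joints
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),   # 96 -> 48
            ResidualBlock(32), nn.MaxPool2d(2),                    # 48 -> 24
            ResidualBlock(32), nn.MaxPool2d(2),                    # 24 -> 12
            nn.Flatten(), nn.Linear(32 * 12 * 12, joints * 3),
        )
    def forward(self, depth):                   # depth: (batch, 1, 96, 96)
        return self.net(depth).view(-1, self.joints, 3)

print(HandPoseRegressor()(torch.randn(2, 1, 96, 96)).shape)  # (2, 21, 3)
```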