• Title/Summary/Keyword: Human-Computer Interaction (HCI)


Design of Parallel Processing System for Face Tracking (얼굴 추적을 위한 병렬처리 시스템의 설계)

  • R.S. Ramakrishna
    • Proceedings of the Korean Information Science Society Conference / 1998.10a / pp.765-767 / 1998
  • Many applications in human-computer interaction (HCI) require tracking a human face and facial features. In this paper we propose an efficient parallel processing system for face tracking over a heterogeneous network. To track a face in the video image, we use skin-color information and connected components. For parallelism, we choose a master-slave model in which each process, master and slave, has its own thread; the threads are responsible for the real computation in each process. By placing queues between the threads, we gain flexibility in the data flow.
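The master-slave scheme described above can be sketched with standard threads and queues. The skin-color rule and the frame format below are illustrative placeholders, not the paper's implementation:

```python
import threading
import queue

def is_skin(r, g, b):
    # crude RGB skin-color rule (illustrative thresholds, not from the paper)
    return r > 95 and g > 40 and b > 20 and r > g and r > b

def slave(in_q, out_q):
    # each slave thread pulls frames, counts skin-colored pixels, reports back
    while True:
        item = in_q.get()
        if item is None:  # poison pill: no more frames
            break
        frame_id, frame = item
        count = sum(1 for row in frame for (r, g, b) in row if is_skin(r, g, b))
        out_q.put((frame_id, count))

def master(frames, n_slaves=2):
    # queues between master and slave threads give flexibility in the data flow
    in_q, out_q = queue.Queue(), queue.Queue()
    workers = [threading.Thread(target=slave, args=(in_q, out_q))
               for _ in range(n_slaves)]
    for w in workers:
        w.start()
    for i, frame in enumerate(frames):
        in_q.put((i, frame))
    for _ in workers:
        in_q.put(None)
    for w in workers:
        w.join()
    return dict(out_q.get() for _ in frames)
```

In a real tracker the per-frame result would be a connected-component bounding box rather than a pixel count; the queue wiring is the point here.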


A study of Design Application in Tangible User Interface

  • Zhang, Xiaofang;Kim, Se-hwa
    • Proceedings of the Korea Contents Association Conference / 2009.05a / pp.943-948 / 2009
  • In HCI (Human-Computer Interaction) research, graphical input devices with a GUI (graphical user interface) were the norm until the invention of the TUI (tangible user interface), which controls the computer through touching physical objects. In this study, we investigate and classify several TUIs, mostly from business applications, using the theory and concept of Tangible Bits by Hiroshi Ishii & Brygg Ullmer, in order to examine the development of the TUI.


Laser pointer detection using neural network for human computer interaction (인간-컴퓨터 상호작용을 위한 신경망 알고리즘기반 레이저포인터 검출)

  • Jung, Chan-Woong;Jeong, Sung-Moon;Lee, Min-Ho
    • Journal of Korea Society of Industrial Information Systems / v.16 no.1 / pp.21-30 / 2011
  • In this paper, we propose an effective method to detect a laser pointer on the screen using a neural network algorithm, for implementing a human-computer interaction system. The neural network is trained on patches without a laser pointer taken from the input camera images; the trained network then generates output values for each input patch from a camera image. When a small variation is perceived in the input camera image, the system amplifies it and detects the laser pointer spot. The proposed system consists of a laser pointer, a low-priced web camera and an image processing program, and can detect the laser spot even when the background on the computer monitor has a color similar to the spot. The proposed technique will therefore contribute to improving the performance of human-computer interaction systems.
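The core idea — learn the appearance of laser-free patches, then amplify and threshold deviations — can be illustrated without the paper's network. This sketch swaps the neural network for a simple per-pixel statistical model; the `gain` and `threshold` values are illustrative assumptions:

```python
import statistics

def train_background(patches):
    # learn per-pixel mean/stdev from laser-free training patches
    # (a stand-in for the paper's neural network training step)
    n = len(patches[0])
    means = [statistics.mean(p[i] for p in patches) for i in range(n)]
    stdevs = [statistics.stdev([p[i] for p in patches]) or 1e-6 for i in range(n)]
    return means, stdevs

def laser_score(patch, model, gain=10.0):
    # amplify small deviations from the learned background appearance
    means, stdevs = model
    return max(gain * abs(v - m) / (s + 1e-6)
               for v, m, s in zip(patch, means, stdevs))

def detect(patch, model, threshold=50.0):
    # a patch scoring far above the background model is flagged as the laser spot
    return laser_score(patch, model) > threshold
```

A bright laser spot produces a large amplified deviation even when its hue is close to the background, which mirrors the robustness claim in the abstract.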

Comparative Study on the Educational Use of Home Robots for Children

  • Han, Jeong-Hye;Jo, Mi-Heon;Jones, Vicki;Jo, Jun-H.
    • Journal of Information Processing Systems / v.4 no.4 / pp.159-168 / 2008
  • Human-Robot Interaction (HRI), based on the already well-researched field of Human-Computer Interaction (HCI), has been under vigorous scrutiny since recent developments in robot technology. Robots may be more successful than traditional media in establishing common ground in project-based education or foreign language learning for children. Backed by its strong IT environment and advances in robot technology, Korea has developed the world's first commercially available e-Learning home robot. This has demonstrated the potential for robots to be used as a new educational medium - robot-learning, referred to as 'r-Learning'. Robot technology is expected to become more interactive and user-friendly than computers. Robots can also exhibit various forms of communication such as gestures, motions and facial expressions. This study compared the effects of non-computer-based (NCB) media (a book with an audiotape) and Web-Based Instruction (WBI) with the effects of Home Robot-Assisted Learning (HRL) for children. The robot gestured and spoke in English, and children could touch its monitor if it did not recognize their voice command. Compared to the other learning programs, HRL was superior in promoting and improving children's concentration, interest, and academic achievement. In addition, the children felt that the home robot was friendlier than the other types of instructional media. The HRL group had longer concentration spans than the other groups, and the p-value demonstrated a significant difference in concentration among the groups. In regard to interest in learning, the HRL group showed the highest level, followed by the NCB group and the WBI group. Academic achievement was also highest in the HRL group, followed by the WBI group and the NCB group, and a significant difference was found in academic achievement among the groups. These results suggest that home robots are more effective than other types of instructional media (such as books with audiotape and WBI) as regards children's learning concentration, learning interest and academic achievement in English as a foreign language.
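The group comparison behind the reported p-values is typically a one-way ANOVA. The F statistic it rests on can be computed directly; the scores below are made-up numbers, not the study's data:

```python
def one_way_anova_f(groups):
    # F statistic for a one-way ANOVA across k groups:
    # between-group mean square divided by within-group mean square
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ssb = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))
```

A large F (relative to the F distribution with k-1 and n-k degrees of freedom) yields a small p-value, i.e. a significant difference among the HRL, WBI, and NCB groups.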

The interaction between emotion recognition through facial expression based on cognitive user-centered television (이용자 중심의 얼굴 표정을 통한 감정 인식 TV의 상호관계 연구 -인간의 표정을 통한 감정 인식기반의 TV과 인간의 상호 작용 연구)

  • Lee, Jong-Sik;Shin, Dong-Hee
    • Journal of the HCI Society of Korea / v.9 no.1 / pp.23-28 / 2014
  • In this study we focus on the effect of the interaction between humans and a reactive television that uses an emotion recognition mechanism based on facial expressions. Most user interfaces in today's electronic products are passive and are not properly fitted to users' needs. In terms of user-centered devices, we propose that an emotion-based reactive television is the most effective in interaction compared to other passive-input products. We have developed and researched next-generation cognitive TV models from a user-centered perspective. In this paper we present the results of an experiment that used the Fraunhofer IIS SHORE™ demo software to measure emotion recognition. This new approach was based on real-time cognitive TV models, and through it we studied the relationship between humans and cognitive TV. The study follows these steps: 1) the cognitive TV can switch automatically between ON and OFF modes in response to people's motions; 2) the cognitive TV can select channels directly as the face changes (e.g., Neutral, Happy, Sad and Angry modes); 3) the cognitive TV detects emotion from people's facial expressions within a fixed time, and if Happy mode is detected, the TV shifts to funny or interesting shows, while if Angry mode is detected, it changes to moving or touching shows. In addition, we focus on improving emotion recognition through facial expressions. Furthermore, cognitive TV needs to be improved based on personal characteristics to account for the different personalities of users in human-computer interaction. In this manner, the study of how people feel and how the cognitive TV responds accordingly, as well as the effects of media as a cognitive mechanism, will be thoroughly discussed.
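The three-step behavior above can be sketched as a small decision function. The emotion labels and the emotion-to-genre mapping follow the abstract's description, but the exact names and return format are illustrative:

```python
# hypothetical mapping from a detected emotion to a program genre;
# per the abstract, Happy switches to funny shows and Angry to touching shows
EMOTION_TO_GENRE = {
    "happy": "comedy",
    "angry": "touching",
    "sad": "touching",
    "neutral": None,  # keep the current channel
}

def tv_action(motion_detected, emotion, current_genre="news"):
    # step 1: automatic ON/OFF responding to motion
    if not motion_detected:
        return ("power_off", None)
    # steps 2-3: channel selection driven by the recognized facial expression
    genre = EMOTION_TO_GENRE.get(emotion)
    if genre is None:
        return ("keep", current_genre)
    return ("switch", genre)
```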

An Interactive Game with a Haptic Mouse (햅틱마우스를 이용한 인터랙티브 게임)

  • Cho, Seong-Man;Jung, Dong-June;Heo, Soo-Chul;Um, Yoo-Jin;Kim, Sang-Youn
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.1-5 / 2009
  • In this paper, we develop a haptic mouse system for immersive human-computer interaction. The proposed haptic mouse can provide vibrotactile feedback as well as thermal feedback for a realistic virtual experience. For vibrotactile and thermal feedback, we use eccentric motors, a solenoid, and a Peltier actuator. To evaluate the proposed haptic mouse, we implement a racing game prototype system. The experimental results show that our haptic mouse is expected to be useful for experiencing virtual worlds.
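One way to drive such actuators is to map game events to actuator commands. The event names, actuator labels, and intensity values below are hypothetical, not taken from the paper:

```python
# hypothetical event-to-actuator mapping for a racing game prototype;
# actuator names mirror the hardware listed in the abstract
def haptic_commands(event):
    commands = []
    if event == "collision":
        commands.append(("eccentric_motor", "vibrate", 1.0))  # strong burst
    elif event == "engine_rev":
        commands.append(("eccentric_motor", "vibrate", 0.3))  # gentle rumble
    if event in ("collision", "engine_overheat"):
        commands.append(("peltier", "heat", 0.8))  # thermal cue
    return commands
```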


Trends in Authentication Technology for HCI-based Smart Grid Control Systems (HCI 기반 스마트그리드 제어시스템의 인증 기술 동향)

  • Lee, Seokcheol;Kwon, Sungmoon;Kim, Sungjin;Shon, Taeshik
    • Review of KIISC / v.25 no.3 / pp.5-10 / 2015
  • With the advent of the smart grid era and the introduction of IIoT (Industrial Internet of Things) technology into control systems, points of contact with the external Internet have increased, and cyber threats targeting smart grid control systems are on the rise, with damage cases continuously being reported. In addition, as HCI (Human-Computer Interaction) technology is applied to smart grid control systems, a malicious insider can manipulate the control system and cause malfunctions relatively easily, even without specialized knowledge. In this paper, to counter insider attacks on smart grid control systems that employ HCI, we present a method that strengthens the authentication of users and constituent devices so as to guarantee the trustworthiness of control system users and equipment.
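A minimal form of the device authentication the paper argues for is an HMAC challenge-response over a pre-shared key. This sketch is illustrative only; it omits key distribution, replay windows, and whatever specific protocol the paper proposes:

```python
import hmac
import hashlib
import secrets

# the control server verifies that a field device holds the pre-shared key
# before accepting commands from it

def make_challenge():
    # fresh random nonce per authentication attempt
    return secrets.token_bytes(16)

def device_response(shared_key, challenge):
    # device proves key possession without revealing the key
    return hmac.new(shared_key, challenge, hashlib.sha256).digest()

def server_verify(shared_key, challenge, response):
    expected = hmac.new(shared_key, challenge, hashlib.sha256).digest()
    # constant-time comparison to avoid timing side channels
    return hmac.compare_digest(expected, response)
```

The same handshake run in both directions gives mutual authentication of the operator workstation and the field device.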

Development of Facial Expression Recognition System based on Bayesian Network using FACS and AAM (FACS와 AAM을 이용한 Bayesian Network 기반 얼굴 표정 인식 시스템 개발)

  • Ko, Kwang-Eun;Sim, Kwee-Bo
    • Journal of the Korean Institute of Intelligent Systems / v.19 no.4 / pp.562-567 / 2009
  • As a key mechanism of human emotional interaction, facial expression is a powerful tool in HRI (Human-Robot Interaction) as well as in HCI (Human-Computer Interaction). Facial expressions allow various reactions corresponding to the emotional state of the user in HCI, and let service agents such as intelligent robots infer suitable services to supply to the user. In this article, we address the issue of expressive face modeling using an advanced Active Appearance Model for facial emotion recognition. We consider the six universal emotion categories defined by Ekman. In the human face, emotions are most widely expressed through the eyes and mouth. To recognize a human's emotion from a facial image, we need to extract feature points such as Ekman's Action Units (AUs). The Active Appearance Model (AAM) is one of the most commonly used methods for facial feature extraction and can be applied to construct AUs. Because the traditional AAM depends on the setting of the model's initial parameters, this paper introduces a facial emotion recognition method that combines an advanced AAM with a Bayesian Network. First, we obtain the reconstructive parameters of a new gray-scale image by sample-based learning, use them to reconstruct the shape and texture of the new image, and calculate the initial parameters of the AAM from the reconstructed facial model. We then reduce the distance error between the model and the target contour by adjusting the model parameters. Finally, after several iterations, we obtain a model matched to the facial feature outline and use it to recognize the facial emotion with a Bayesian Network.
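As a drastically simplified stand-in for the AU-to-emotion inference stage, a naive Bayes classifier over binary Action Unit activations (a special case of a Bayesian network) illustrates the idea. The AU/emotion pairings in the test data are invented for the example, not Ekman's actual tables:

```python
import math
from collections import defaultdict

def train(samples):
    # samples: list of (emotion, {au_id: 0/1}) observations
    # counts[emotion] = [num_samples, {au_id: times_active}]
    counts = defaultdict(lambda: [0, defaultdict(int)])
    for emotion, aus in samples:
        counts[emotion][0] += 1
        for au, on in aus.items():
            counts[emotion][1][au] += on
    return counts

def classify(counts, aus):
    # pick the emotion maximizing log P(emotion) + sum log P(AU | emotion)
    best, best_score = None, float("-inf")
    total = sum(c[0] for c in counts.values())
    for emotion, (n, au_on) in counts.items():
        score = math.log(n / total)
        for au, on in aus.items():
            p_on = (au_on[au] + 1) / (n + 2)  # Laplace smoothing
            score += math.log(p_on if on else 1 - p_on)
        if score > best_score:
            best, best_score = emotion, score
    return best
```

The paper's Bayesian network can encode dependencies between AUs that naive Bayes ignores; the conditional-probability bookkeeping is the same in spirit.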

SVM Based Facial Expression Recognition for Expression Control of an Avatar in Real Time (실시간 아바타 표정 제어를 위한 SVM 기반 실시간 얼굴표정 인식)

  • Shin, Ki-Han;Chun, Jun-Chul;Min, Kyong-Pil
    • Proceedings of the HCI Society of Korea Conference / 2007.02a / pp.1057-1062 / 2007
  • Facial expression recognition is growing in importance in diverse fields such as psychology research, facial animation synthesis, robotics, and HCI (Human-Computer Interaction). Facial expressions provide important information in social interaction, such as a person's emotional state and degree of interest. Facial expression recognition methods can be broadly divided into those using still images and those using video. Still images require little processing and are therefore fast, but recognition by matching and registration is difficult when facial changes are large. Video-based methods use techniques such as neural networks, optical flow, and HMMs (Hidden Markov Models) to process the user's expression changes continuously, which is useful for real-time interaction with a computer; however, they require more processing than still images and large amounts of data for training and database construction. The real-time facial expression recognition system proposed in this paper consists of four stages: face region detection, facial feature detection, facial expression classification, and avatar control. To accurately detect the face region in images captured by a webcam, we apply histogram equalization and a reference-white technique, then detect the face region using the HT color model and a PCA (Principal Component Analysis) transform. Within the detected face region, candidate regions for the facial feature elements are determined from the geometric information of the face, and the features needed for expression recognition are extracted by template matching of each feature point and edge detection. For each detected feature point, a feature vector is obtained from the motion information produced by an optical flow algorithm. The feature vectors are classified into facial expressions with an SVM (Support Vector Machine), and the recognized expression is rendered on an avatar based on the extracted facial features.
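The final classification stage can be sketched with a minimal linear SVM trained by hinge-loss subgradient descent (Pegasos-style). The two-dimensional "motion feature" vectors and the +1/-1 labels below are illustrative placeholders for the optical-flow features and expression classes described above:

```python
import random

def train_svm(data, labels, lam=0.01, epochs=200, seed=0):
    # minimal linear SVM: regularized hinge loss minimized by
    # stochastic subgradient descent with step size 1/(lam*t)
    rng = random.Random(seed)
    dim = len(data[0])
    w, b, t = [0.0] * dim, 0.0, 0
    for _ in range(epochs):
        idx = list(range(len(data)))
        rng.shuffle(idx)
        for i in idx:
            t += 1
            eta = 1.0 / (lam * t)
            x, y = data[i], labels[i]  # y in {-1, +1}
            margin = y * (sum(wj * xj for wj, xj in zip(w, x)) + b)
            if margin < 1:
                # hinge violated: decay weights and step toward the sample
                w = [(1 - eta * lam) * wj + eta * y * xj
                     for wj, xj in zip(w, x)]
                b += eta * y
            else:
                w = [(1 - eta * lam) * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

A multi-class expression classifier is usually built from several such binary SVMs (one-vs-rest), which is likely how an SVM over optical-flow features would be deployed here.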


Robust Deep Age Estimation Method Using Artificially Generated Image Set

  • Jang, Jaeyoon;Jeon, Seung-Hyuk;Kim, Jaehong;Yoon, Hosub
    • ETRI Journal / v.39 no.5 / pp.643-651 / 2017
  • Human age estimation is one of the key factors in the field of Human-Robot Interaction/Human-Computer Interaction (HRI/HCI). Owing to the development of deep-learning technologies, age recognition has recently been attempted. In general, however, deep-learning techniques require a large-scale database, and for age learning with variations, a conventional database is insufficient. For this reason, we propose an age estimation method using artificially generated data. Image data are generated artificially from 3D information, solving the shortage of training data and aiding the training of the deep-learning technique. Augmentation using 3D information has advantages over 2D because it creates new images with more information. We use a deep architecture as a pre-trained model and improve its estimation capacity using the artificially augmented training images. The deep architecture can outperform traditional estimation methods, and the improved method showed increased reliability. We achieved state-of-the-art performance with the proposed method on the Morph-II dataset and have shown that it can also be used effectively on the Adience dataset.
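The paper generates new training images from 3D information. As a much simpler stand-in, the sketch below shows how label-preserving 2D augmentations (horizontal flip and brightness shifts, both illustrative choices) enlarge a labeled training set in the same way:

```python
def augment(image):
    # image: 2D list of grayscale values in [0, 1]
    # simple 2D stand-ins for the paper's 3D-based image generation
    flipped = [row[::-1] for row in image]
    brighter = [[min(1.0, v + 0.1) for v in row] for row in image]
    darker = [[max(0.0, v - 0.1) for v in row] for row in image]
    return [flipped, brighter, darker]

def augment_dataset(dataset):
    # each original (image, age) pair yields three extra labeled samples,
    # since these transforms do not change the subject's age
    out = []
    for image, age in dataset:
        out.append((image, age))
        out.extend((aug, age) for aug in augment(image))
    return out
```

3D-based generation goes further than these transforms: it can synthesize new poses and lightings with geometry the 2D image alone does not contain, which is the advantage the abstract claims.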