• Title/Summary/Keyword: human and computer interaction


Analysis on Psychological and Educational Effects in Children and Home Robot Interaction (아동과 홈 로봇의 심리적.교육적 상호작용 분석)

  • Kim, Byung-Jun;Han, Jeong-Hye
    • Journal of The Korean Association of Information Education / v.9 no.3 / pp.501-510 / 2005
  • To facilitate interaction between home robots and humans, in-depth research in Human-Robot Interaction (HRI) is urgently needed. The purpose of this study was to examine how children interacted with a newly developed home robot named 'iRobi', in order to identify how the home robot affected their psychology and the effectiveness of learning through it. Concerning the psychological effects, the children became familiar with the robot, found it possible to interact with it, and lost their initial anxiety. As to the learning effect, the group that studied with the home robot outperformed the groups using other types of learning media (books, WBI) in attention, learning interest, and academic achievement. Accordingly, the home robot could serve as a successful vehicle for promoting children's psychological and educational interaction.


Challenges and New Approaches in Genomics and Bioinformatics

  • Park, Jong Hwa;Han, Kyung Sook
    • Genomics & Informatics / v.1 no.1 / pp.1-6 / 2003
  • In conclusion, the seemingly fuzzy and disorganized data of biology, with thousands of different layers ranging from the molecule to the Internet, have so far resisted precise mapping and successful prediction by mathematicians, physicists, or computer scientists. Genomics and bioinformatics are the fields that process such complex data. Insights into the nature of biological entities as complex interaction networks are opening a door toward a generalized representation of biological entities. The main challenge of genomics and bioinformatics now lies in 1) how to mine the networks of the domains of bioinformatics, namely the literature, metabolic pathways, proteomes, and structures, in terms of interaction; and 2) how to generalize these networks so that the information can be integrated into computable genomic data regardless of the layer. Once bioinformaticians succeed in finding a general principle for how components interact with each other to form an organic interaction network at genomic scale, true simulation and prediction of life in silico will be possible.

Introduction to Visual Analytics Research (비주얼 애널리틱스 연구 소개)

  • Oh, Yousang;Lee, Chunggi;Oh, Juyoung;Yang, Jihyeon;Kwag, Heena;Moon, Seongwoo;Park, Sohwan;Ko, Sungahn
    • Journal of the Korea Computer Graphics Society / v.22 no.5 / pp.27-36 / 2016
  • As big data become more complex than ever, various techniques and approaches are needed to better analyze and explore them. The research discipline of visual analytics has been proposed to support users' visual data analysis and decision-making. Since the first visual analytics symposium was held in 2006, visual analytics research has grown in popularity as advanced techniques from computer graphics, data mining, and human-computer interaction have been incorporated into it. In this work, we introduce visual analytics research by surveying the papers published at IEEE VAST 2015 in terms of data and visualization techniques, to help domestic researchers' understanding of visual analytics.

Probabilistic Graph Based Object Category Recognition Using the Context of Object-Action Interaction (물체-행동 컨텍스트를 이용하는 확률 그래프 기반 물체 범주 인식)

  • Yoon, Sung-baek;Bae, Se-ho;Park, Han-je;Yi, June-ho
    • The Journal of Korean Institute of Communications and Information Sciences / v.40 no.11 / pp.2284-2290 / 2015
  • The use of human actions as context for object class recognition is quite effective in enhancing recognition performance despite the large variation in the appearance of objects. We propose an efficient method that integrates human action information into object class recognition using a Bayesian approach based on a simple probabilistic graph model. The experiment shows that by using human actions as context information, we can improve object class recognition performance by 8% to 28%.
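
A minimal sketch of the general idea behind this kind of Bayesian fusion, assuming a hypothetical appearance classifier, a hypothetical action recognizer, and illustrative (not the paper's) object/action labels and probability tables:

```python
import numpy as np

# Hypothetical labels; the probability table below is a stand-in for the
# object-action co-occurrence statistics a real system would learn from data.
OBJECT_CLASSES = ["cup", "phone", "spray_bottle"]
ACTIONS = ["drinking", "calling", "spraying"]

# P(action | object): rows follow OBJECT_CLASSES, columns follow ACTIONS.
p_action_given_object = np.array([
    [0.8, 0.1, 0.1],   # cup
    [0.1, 0.8, 0.1],   # phone
    [0.1, 0.1, 0.8],   # spray_bottle
])

def fuse(p_object_given_appearance, p_action_observed):
    """Combine appearance-based object scores with observed-action context.

    p_object_given_appearance: shape (n_objects,), from an appearance classifier.
    p_action_observed: shape (n_actions,), from an action recognizer.
    """
    # posterior(object) ∝ P(object | appearance) * Σ_a P(a | object) * P(a observed)
    context = p_action_given_object @ p_action_observed
    posterior = p_object_given_appearance * context
    return posterior / posterior.sum()

# Appearance alone is ambiguous between "cup" and "spray_bottle"...
appearance = np.array([0.45, 0.10, 0.45])
# ...but the observed action looks like "drinking", which disambiguates it.
action = np.array([0.7, 0.1, 0.2])
print(fuse(appearance, action))   # highest posterior mass on "cup"
```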

Classification of Three Different Emotion by Physiological Parameters

  • Jang, Eun-Hye;Park, Byoung-Jun;Kim, Sang-Hyeob;Sohn, Jin-Hun
    • Journal of the Ergonomics Society of Korea / v.31 no.2 / pp.271-279 / 2012
  • Objective: This study classified three different emotional states (boredom, pain, and surprise) using physiological signals. Background: Emotion recognition studies have tried to recognize human emotions from physiological signals; such recognition is important for emotion detection in human-computer interaction systems. Method: 122 college students participated in this experiment. Three different emotional stimuli were presented to the participants, and physiological signals, i.e., EDA (Electrodermal Activity), SKT (Skin Temperature), PPG (Photoplethysmogram), and ECG (Electrocardiogram), were measured for 1 minute as a baseline and for 1~1.5 minutes during the emotional state. The obtained signals were analyzed for 30 seconds of the baseline and of the emotional state, and 27 features were extracted from them. Statistical analysis for emotion classification was performed by DFA (discriminant function analysis) in SPSS 15.0, using the difference values obtained by subtracting the baseline values from the emotional-state values. Results: Physiological responses during the emotional states differed significantly from those during the baseline, and the accuracy rate of emotion classification was 84.7%. Conclusion: Our study has shown that emotions can be classified from various physiological signals. However, future studies should obtain additional signals from other modalities, such as facial expression, face temperature, or voice, to improve the classification rate, and should examine the stability and reliability of this result by comparing it with the accuracy of emotion classification using other algorithms. Application: This work can help emotion recognition studies recognize a wider range of human emotions from physiological signals and can be applied to human-computer interaction systems for emotion recognition. It can also be useful for developing emotion theory, profiling emotion-specific physiological responses, and establishing the basis of emotion recognition systems in human-computer interaction.
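
A minimal sketch of the baseline-subtraction-plus-discriminant-analysis pipeline described above, assuming synthetic stand-in data (the study used SPSS 15.0 DFA; scikit-learn's LinearDiscriminantAnalysis is used here as a comparable substitute):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for the 27 features extracted from EDA, SKT, PPG, and ECG;
# real values would come from the recording hardware and feature-extraction step.
rng = np.random.default_rng(0)
n_subjects, n_features = 122, 27
baseline = rng.normal(size=(n_subjects, n_features))
emotional = baseline + rng.normal(loc=0.3, size=(n_subjects, n_features))
labels = rng.integers(0, 3, size=n_subjects)  # 0=boredom, 1=pain, 2=surprise

# The study classified difference scores: emotional-state value minus baseline value.
X = emotional - baseline

# LDA as a stand-in for SPSS's discriminant function analysis.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, labels, cv=5)
print(f"cross-validated accuracy on synthetic data: {scores.mean():.3f}")
```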

Two person Interaction Recognition Based on Effective Hybrid Learning

  • Ahmed, Minhaz Uddin;Kim, Yeong Hyeon;Kim, Jin Woo;Bashar, Md Rezaul;Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.751-770 / 2019
  • Action recognition is an essential task in computer vision due to the variety of prospective applications, such as security surveillance, machine learning, and human-computer interaction. The availability of more video data than ever before and the strong performance of deep convolutional neural networks also make action recognition in video essential. Unfortunately, limited hand-crafted video features and the scarcity of benchmark datasets make the multi-person action recognition task in video data challenging. In this work, we propose a deep convolutional neural network-based Effective Hybrid Learning (EHL) framework for two-person interaction classification in video data. Our approach exploits a pre-trained network model (VGG16 from the University of Oxford Visual Geometry Group) and extends Faster R-CNN (a region-based convolutional neural network, a state-of-the-art detector for image classification). We combine a semi-supervised learning method with an active learning method to improve overall performance. Numerous types of two-person interactions exist in the real world, which makes this a challenging task. In our experiments, we consider a limited number of actions, such as hugging, fighting, linking arms, talking, and kidnapping, in two environments, simple and complex. We show that our trained model with an active semi-supervised learning architecture gradually improves performance. In a simple environment, using an Intelligent Technology Laboratory (ITLab) dataset from Inha University, accuracy increased to 95.6%, and in a complex environment it reached 81%. Our method reduces data-labeling time for the ITLab dataset compared to supervised learning methods. We also conduct extensive experiments on human action recognition benchmarks such as the UT-Interaction and HMDB51 datasets and obtain better performance than state-of-the-art approaches.
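
A minimal sketch of two ingredients named in this abstract, not the authors' actual EHL implementation: a pre-trained VGG16 backbone as a frame-level feature extractor, and a least-confidence active-learning step combined with pseudo-labeling. The class count, confidence threshold, and function names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision

# Pre-trained VGG16 as a feature extractor (downloads ImageNet weights on first use);
# the final 1000-way layer is dropped and replaced with a small interaction classifier.
vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1")
backbone = nn.Sequential(vgg.features, nn.Flatten(), *list(vgg.classifier[:-1]))
classifier = nn.Linear(4096, 5)  # e.g. hugging, fighting, linking arms, talking, kidnapping

def predict(frames):
    """frames: tensor of shape (N, 3, 224, 224); returns per-class probabilities."""
    with torch.no_grad():
        feats = backbone(frames)
        return torch.softmax(classifier(feats), dim=1)

def select_for_labeling(unlabeled_frames, budget=10, threshold=0.9):
    """Least-confidence sampling: send the frames the model is least sure about
    to a human annotator; treat very confident predictions as pseudo-labels."""
    probs = predict(unlabeled_frames)
    confidence, pseudo_labels = probs.max(dim=1)
    query_idx = confidence.argsort()[:budget]   # least confident first
    keep = confidence > threshold               # trusted pseudo-labels
    return query_idx, pseudo_labels[keep], keep
```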

Realistic Visual Simulation of Water Effects in Response to Human Motion using a Depth Camera

  • Kim, Jong-Hyun;Lee, Jung;Kim, Chang-Hun;Kim, Sun-Jeong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.2 / pp.1019-1031 / 2017
  • In this study, we propose a new method for simulating water that responds to human motion. Motion data obtained from motion-capture devices are represented as a jointed skeleton, which interacts with the velocity field in the water simulation. To integrate the motion data into the water simulation space, it is necessary to establish a mapping relationship between two fields with different properties; if the mapping breaks down, severe numerical instability can occur and the realism of the human-water interaction suffers. To address this problem, our method extends the joint velocity mapped to each grid point to the neighboring nodes and refines these extended velocities to make the water solver more robust. Our experimental results demonstrate that water animation can be made to respond to human motions such as walking and jumping.
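
A toy 2D sketch of the mapping step described above, under illustrative assumptions (grid size, joint data, a 1-cell extension radius, and simple averaging weights are all made up here): each joint's velocity is written into the grid cell it falls in and spread to the neighboring cells so the fluid solver sees a smoother field.

```python
import numpy as np

GRID = 32
velocity = np.zeros((GRID, GRID, 2))   # per-cell (vx, vy)
weight = np.zeros((GRID, GRID))

# Hypothetical skeleton joints: position in [0, 1)^2 and velocity.
joints = [
    {"pos": (0.30, 0.55), "vel": (0.8, 0.0)},   # e.g. a hand moving right
    {"pos": (0.31, 0.50), "vel": (0.6, 0.2)},   # e.g. the elbow
]

for j in joints:
    ci = int(j["pos"][0] * GRID)
    cj = int(j["pos"][1] * GRID)
    # Extend the joint velocity to the containing cell and its 8 neighbours.
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            ni, nj = ci + di, cj + dj
            if 0 <= ni < GRID and 0 <= nj < GRID:
                w = 1.0 if (di, dj) == (0, 0) else 0.5
                velocity[ni, nj] += w * np.array(j["vel"])
                weight[ni, nj] += w

# Normalise cells that received contributions, then hand the resulting field
# to the water solver as a boundary/forcing term.
mask = weight > 0
velocity[mask] /= weight[mask][:, None]
```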

Digital Leveraging: The Methodology of Applying Technology to Human Life (디지털 레버리징: 기술을 인간의 삶에 적용하는 방법론)

  • Han, Sukyoung;Kim, Hee-Cheol;Hwang, Wonjoo
    • Journal of Korea Multimedia Society / v.22 no.2 / pp.322-333 / 2019
  • Since the launch of smartphones, various miniaturized smart devices such as wearable and IoT devices have become deeply embedded in human life and have created a technology-oriented society. In such a society, technology development itself is important, but it seems more important to utilize existing technology appropriately and deliver it effectively to human life. As computers became personal after the appearance of the PC, human-centered computing approaches such as HCI and UCD began to appear. However, most of this research focused on technologies that make it convenient for humans to interact with computers, such as computer systems design and UX development. In a technology-oriented society, applying existing technology to human life seems even more urgent. In this paper, we propose a methodology, 'Digital Leveraging', which guides how to effectively apply technology to human life. Digital Leveraging is a way of convergence between technology and the humanities.

A Motion Capture and Mapping System: Kinect Based Human-Robot Interaction Platform (동작포착 및 매핑 시스템: Kinect 기반 인간-로봇상호작용 플랫폼)

  • Yoon, Joongsun
    • Journal of the Korea Academia-Industrial cooperation Society / v.16 no.12 / pp.8563-8567 / 2015
  • We propose a human-robot interaction (HRI) platform based on motion capture and mapping. The platform consists of capture, processing/mapping, and action parts; a motion-capture sensor, a computer, and avatar and/or physical robots serve as the capture, processing/mapping, and action parts, respectively. Two case studies, an interactive presentation and a LEGO robot car, are presented to show the design and implementation process of the Kinect-based HRI platform.
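
A schematic sketch of the capture → processing/mapping → action pipeline described above, under stated assumptions: real joint positions would come from a Kinect SDK, and the action part would drive an avatar or a LEGO robot; here one hypothetical skeleton frame and a print statement stand in for both.

```python
import numpy as np

def elbow_angle(shoulder, elbow, wrist):
    """Processing/mapping part: turn three captured joint positions into one
    robot joint angle (radians) at the elbow."""
    upper = np.asarray(shoulder) - np.asarray(elbow)
    fore = np.asarray(wrist) - np.asarray(elbow)
    cosine = np.dot(upper, fore) / (np.linalg.norm(upper) * np.linalg.norm(fore))
    return float(np.arccos(np.clip(cosine, -1.0, 1.0)))

def send_to_robot(joint_name, angle):
    """Action part: in the platform this would command an avatar or a physical
    robot; here it only reports the mapped command."""
    print(f"set {joint_name} to {np.degrees(angle):.1f} degrees")

# Capture part: one frame of (x, y, z) joint positions in metres (illustrative).
frame = {
    "shoulder_right": (0.20, 1.40, 2.00),
    "elbow_right":    (0.35, 1.20, 2.00),
    "wrist_right":    (0.30, 1.00, 1.80),
}

angle = elbow_angle(frame["shoulder_right"], frame["elbow_right"], frame["wrist_right"])
send_to_robot("right_elbow", angle)
```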