• Title/Abstract/Keyword: human-computer interaction

623 results found (processing time: 0.036 s)

3D Interaction Technique on Stereo Display System

  • Kwon, Yong-Moo;Ki, Jeong-Seok;Jeon, Kyeong-Won;Kim, Sung-Kyu
    • 한국정보디스플레이학회 학술대회논문집: 7th International Meeting on Information Display (2007) / Vol. 7, No. 2 / pp.1235-1238 / 2007
  • Several studies have investigated 2D gaze-tracking techniques on 2D screens for human-computer interaction. However, gaze-based interaction with stereo images or 3D content has not yet been reported. This paper presents a gaze-based 3D interaction technique on an autostereoscopic display system.


Analysis on Psychological and Educational Effects in Children and Home Robot Interaction

  • 김병준;한정혜
    • 정보교육학회논문지 / Vol. 9, No. 3 / pp.501-510 / 2005
  • For home robots to interact smoothly with humans, research on Human-Robot Interaction (HRI) is urgently needed. This study examined, through interaction between children and the recently developed home robot 'iRobi', how the robot affected children's psychological perceptions and how effective learning with the robot was. On the psychological side, interaction with the home robot led children to perceive it as a friendly partner capable of interaction and was found to relieve children's anxiety. On the learning side, using the home robot yielded higher learning concentration, learning interest, and academic achievement than other learning media (books, WBI). The home robot therefore appears to have positive value as a tool for children's emotional and educational interaction.


Hand-Gesture Recognition Using Concentric-Circle Expanding and Tracing Algorithm

  • 황동현;장경식
    • 한국정보통신학회논문지 / Vol. 21, No. 3 / pp.636-642 / 2017
  • This paper proposes an algorithm that recognizes hand gestures using a concentric-circle expanding and tracing technique. The proposed algorithm takes an image from a web camera, extracts an ROI of the hand through preprocessing, and then uses concentric circles to extract not only the number of extended fingers but also the fingertip positions, the finger-base positions, and the angles between fingers, providing a variety of input methods usable in the HCI field. Compared with a raster-scan approach that visits every pixel in the image, the algorithm also reduces computational complexity by referencing only the pixels that form the concentric circles. Over nine hand gestures, the proposed algorithm achieved an average recognition rate of 90.7% and an average execution time of 78 ms, confirming its applicability as an input method for virtual, augmented, and mixed reality and for the HCI field in general.
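The circle-tracing idea behind this entry can be sketched in a few lines: sample a circle around the palm center on a binary hand mask and count the runs of hand pixels that cross it. This is a minimal illustration assuming a precomputed mask and palm center (the function name and the synthetic test mask are illustrative, not the authors' implementation):

```python
import numpy as np

def count_fingers(mask, center, radius, samples=360):
    """Count runs of hand pixels crossing a circle traced on a binary mask.

    mask: 2D uint8 array (1 = hand pixel), center: (x, y) palm center.
    Each connected run of 1s along the circle corresponds to one extended
    finger (or the wrist, which a caller may need to subtract).
    """
    h, w = mask.shape
    angles = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)
    xs = np.clip((center[0] + radius * np.cos(angles)).astype(int), 0, w - 1)
    ys = np.clip((center[1] + radius * np.sin(angles)).astype(int), 0, h - 1)
    ring = mask[ys, xs]
    # Count 0 -> 1 transitions along the circular sequence of samples.
    return int(np.sum((ring == 1) & (np.roll(ring, 1) == 0)))
```

Only the pixels on each circle are touched, which is the source of the complexity reduction over a full raster scan; expanding the radius step by step recovers fingertip and finger-base positions as well.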

Introduction to Visual Analytics Research

  • 오유상;이충기;오주영;양지현;곽희나;문성우;박소환;고성안
    • 한국컴퓨터그래픽스학회논문지 / Vol. 22, No. 5 / pp.27-36 / 2016
  • Building on computer graphics and human-computer interaction (HCI) technology, visualization tools for effective data analysis have advanced greatly. This area has grown into the research field of visual analytics; since its first symposium in 2006, it has expanded into the study of user-centered big-data analysis and decision-support systems that graft data-mining and interaction techniques onto information visualization. In Korea, however, the field is still not widely known, so compared with domestic computer graphics and HCI research, the technology for designing systems that support big-data analysis and decision making through visualization lags behind. This paper therefore reviews the basic philosophy of visual analytics research and surveys the data and visualization techniques used in papers published at the 2015 IEEE Symposium on Visual Analytics Science and Technology (VAST), to help domestic computer graphics researchers better understand the field.

Challenges and New Approaches in Genomics and Bioinformatics

  • Park, Jong Hwa;Han, Kyung Sook
    • Genomics & Informatics / Vol. 1, No. 1 / pp.1-6 / 2003
  • In conclusion, the seemingly fuzzy and disorganized data of biology, with thousands of different layers ranging from the molecule to the Internet, have so far refused to be mapped precisely and predicted successfully by mathematicians, physicists, or computer scientists. Genomics and bioinformatics are the fields that process such complex data. Insights into the nature of biological entities as complex interaction networks are opening a door toward a generalized representation of biological entities. The main challenge of genomics and bioinformatics now lies in 1) how to mine the networks of the domains of bioinformatics, namely the literature, metabolic pathways, the proteome, and structures, in terms of interaction; and 2) how to generalize the networks in order to integrate the information into computable genomic data regardless of the level of layer. Once bioinformatists succeed in finding a general principle for the way components interact with each other to form an organic interaction network at genomic scale, true simulation and prediction of life in silico will be possible.

Probabilistic Graph Based Object Category Recognition Using the Context of Object-Action Interaction

  • 윤성백;배세호;박한재;이준호
    • 한국통신학회논문지 / Vol. 40, No. 11 / pp.2284-2290 / 2015
  • Human actions are highly effective context information for improving category recognition of objects with diverse appearances. In this study, we used human actions as context for object category recognition through a simple probabilistic graph model based on a Bayesian approach. Experiments on cups, phones, scissors, and spray bottles of various appearances showed that recognizing the human action associated with an object's use improved object recognition performance by 8% to 28%.
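The Bayesian fusion described in this entry can be sketched as a posterior over categories combining an appearance likelihood with an action-context likelihood. The category list and the conditional probability table below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical object categories and p(action | object) table for illustration.
CATEGORIES = ["cup", "phone", "scissors", "spray"]
P_ACTION_GIVEN_OBJ = {
    "drink": np.array([0.80, 0.05, 0.05, 0.10]),
    "call":  np.array([0.05, 0.85, 0.05, 0.05]),
    "cut":   np.array([0.05, 0.05, 0.85, 0.05]),
    "spray": np.array([0.10, 0.05, 0.05, 0.80]),
}

def posterior(appearance_scores, action):
    """Fuse appearance and action context via Bayes' rule:
    p(obj | appearance, action) ∝ p(appearance | obj) * p(action | obj)."""
    joint = np.asarray(appearance_scores, dtype=float) * P_ACTION_GIVEN_OBJ[action]
    return joint / joint.sum()
```

With an ambiguous appearance score such as `[0.4, 0.4, 0.1, 0.1]` (cup vs. phone), observing a "call" action shifts the posterior decisively toward the phone, which is the mechanism behind the reported accuracy gains.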

Classification of Three Different Emotion by Physiological Parameters

  • Jang, Eun-Hye;Park, Byoung-Jun;Kim, Sang-Hyeob;Sohn, Jin-Hun
    • 대한인간공학회지 / Vol. 31, No. 2 / pp.271-279 / 2012
  • Objective: This study classified three different emotional states (boredom, pain, and surprise) using physiological signals. Background: Emotion recognition studies have tried to recognize human emotion from physiological signals; applying such emotion detection to human-computer interaction systems is an important goal. Method: 122 college students participated in this experiment. Three different emotional stimuli were presented to participants, and physiological signals, i.e., EDA (Electrodermal Activity), SKT (Skin Temperature), PPG (Photoplethysmogram), and ECG (Electrocardiogram), were measured for 1 minute as a baseline and for 1~1.5 minutes during the emotional state. The obtained signals were analyzed for 30 seconds of the baseline and of the emotional state, and 27 features were extracted from them. Statistical analysis for emotion classification was performed by DFA (discriminant function analysis) (SPSS 15.0), using the difference values obtained by subtracting the baseline values from the emotional-state values. Results: Physiological responses during the emotional states differed significantly from the baseline, and the accuracy rate of emotion classification was 84.7%. Conclusion: Our study showed that emotions can be classified from various physiological signals. However, future work is needed to obtain additional signals from other modalities, such as facial expression, facial temperature, or voice, to improve the classification rate, and to examine the stability and reliability of this result against the accuracy of emotion classification using other algorithms. Application: This work gives emotion recognition studies a better chance of recognizing various human emotions from physiological signals and can be applied to human-computer interaction systems for emotion recognition. It can also be useful in developing emotion theory, in profiling emotion-specific physiological responses, and in establishing the basis for emotion recognition systems in human-computer interaction.
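The pipeline in this entry (baseline-subtracted features, then discriminant analysis) can be sketched with scikit-learn's linear discriminant analysis, a close relative of SPSS's DFA. The data below are synthetic stand-ins for the 27 physiological features, not the study's measurements:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic stand-in for 27 baseline-subtracted features (EDA, SKT, PPG, ECG)
# per trial, with one cluster of trials per emotional state.
n_per_class, n_features = 40, 27
X = np.vstack([
    rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
    for c in (0.0, 1.5, 3.0)
])
y = np.repeat(["boredom", "pain", "surprise"], n_per_class)

# Fit the discriminant functions and measure classification accuracy.
clf = LinearDiscriminantAnalysis().fit(X, y)
acc = clf.score(X, y)
```

On real physiological data the classes overlap far more than in this toy setup, which is why the reported 84.7% accuracy, rather than near-perfect separation, is the realistic outcome.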

Two person Interaction Recognition Based on Effective Hybrid Learning

  • Ahmed, Minhaz Uddin;Kim, Yeong Hyeon;Kim, Jin Woo;Bashar, Md Rezaul;Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 13, No. 2 / pp.751-770 / 2019
  • Action recognition is an essential task in computer vision because of the variety of prospective applications, such as security surveillance, machine learning, and human-computer interaction. The availability of more video data than ever before and the strong performance of deep convolutional neural networks also make it essential for action recognition in video. Unfortunately, limited hand-crafted video features and the scarcity of benchmark datasets make the multi-person action recognition task in video data challenging. In this work, we propose a deep convolutional neural network-based Effective Hybrid Learning (EHL) framework for two-person interaction classification in video data. Our approach exploits a pre-trained network model (VGG16 from the University of Oxford Visual Geometry Group) and extends Faster R-CNN (region-based convolutional neural network, a state-of-the-art detector). We combine a semi-supervised learning method with an active learning method to improve overall performance. Numerous types of two-person interactions exist in the real world, which makes this a challenging task. In our experiments, we consider a limited number of actions, such as hugging, fighting, linking arms, talking, and kidnapping, in two environments, simple and complex. We show that our trained model with an active semi-supervised learning architecture gradually improves performance. In a simple environment, using an Intelligent Technology Laboratory (ITLab) dataset from Inha University, accuracy increased to 95.6%, and in a complex environment, accuracy reached 81%. Compared with supervised learning methods, our method reduces data-labeling time for the ITLab dataset. We also conduct extensive experiments on human action recognition benchmarks such as the UT-Interaction and HMDB51 datasets and obtain better performance than state-of-the-art approaches.
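The active semi-supervised combination in this entry can be sketched as a selection round: send the model's least-confident unlabeled samples to a human annotator (active learning) and pseudo-label its most confident ones (semi-supervised learning). The function and thresholds below are an illustrative sketch, not the EHL framework's actual code:

```python
import numpy as np

def active_semi_supervised_round(probs, unlabeled_ids, budget=10,
                                 pseudo_threshold=0.95):
    """One hybrid-learning selection round.

    probs: (n, n_classes) class probabilities from the current model,
    row-aligned with unlabeled_ids.
    Returns (ids to send for human annotation, {id: pseudo-label}).
    """
    probs = np.asarray(probs)
    confidence = probs.max(axis=1)
    order = np.argsort(confidence)  # least confident first
    to_annotate = [unlabeled_ids[i] for i in order[:budget]]
    # High-confidence predictions become pseudo-labels for retraining.
    pseudo = {
        unlabeled_ids[i]: int(probs[i].argmax())
        for i in range(len(unlabeled_ids))
        if confidence[i] >= pseudo_threshold
        and unlabeled_ids[i] not in to_annotate
    }
    return to_annotate, pseudo
```

Iterating this round while retraining the detector is what lets labeling effort shrink compared with fully supervised training, as the abstract reports for the ITLab dataset.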

Realistic Visual Simulation of Water Effects in Response to Human Motion using a Depth Camera

  • Kim, Jong-Hyun;Lee, Jung;Kim, Chang-Hun;Kim, Sun-Jeong
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 11, No. 2 / pp.1019-1031 / 2017
  • In this study, we propose a new method for simulating water responding to human motion. Motion data obtained from motion-capture devices are represented as a jointed skeleton, which interacts with the velocity field in the water simulation. To integrate the motion data into the water simulation space, it is necessary to establish a mapping between two fields with different properties. However, severe numerical instability can arise if the mapping breaks down, adversely affecting the realism of the human-water interaction. To address this problem, our method extends the joint velocity mapped to each grid point to neighboring nodes, and we refine these extended velocities to increase the robustness of the water solver. Our experimental results demonstrate that the water animation responds convincingly to human motions such as walking and jumping.
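The key step of this entry, spreading each joint's velocity to neighboring grid nodes so the mapping between skeleton and fluid grid stays well-defined, can be sketched as a distance-weighted splat. This is a simplified 2D illustration under assumed names and weights, not the paper's solver:

```python
import numpy as np

def splat_joint_velocities(grid_shape, joints, velocities, radius=2):
    """Spread each joint's velocity to nearby grid nodes.

    grid_shape: (rows, cols) of the simulation grid.
    joints: list of (col, row) grid positions of skeleton joints.
    velocities: matching list of (vx, vy) joint velocities.
    Returns a (rows, cols, 2) velocity field, weight-normalized so that
    every covered node carries a blended, never unbounded, velocity.
    """
    u = np.zeros(grid_shape + (2,))
    w = np.zeros(grid_shape)
    for (jx, jy), v in zip(joints, velocities):
        for gy in range(max(0, jy - radius), min(grid_shape[0], jy + radius + 1)):
            for gx in range(max(0, jx - radius), min(grid_shape[1], jx + radius + 1)):
                d = np.hypot(gx - jx, gy - jy)
                if d <= radius:
                    # Linear falloff with distance from the joint.
                    weight = 1.0 - d / (radius + 1.0)
                    u[gy, gx] += weight * np.asarray(v, dtype=float)
                    w[gy, gx] += weight
    nz = w > 0
    u[nz] /= w[nz][:, None]
    return u
```

Because each node's velocity is a normalized blend of nearby joint velocities, isolated grid points no longer receive unsupported values, which is the robustness property the refinement step targets.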