• Title/Summary/Keyword: human-machine interaction


Bayesian Logistic Regression for Human Detection (Human Detection 을 위한 Bayesian Logistic Regression)

  • Aurrahman, Dhi;Setiawan, Nurul Arif;Lee, Chil-Woo
    • 한국HCI학회:학술대회논문집
    • /
    • 2008.02a
    • /
    • pp.569-572
    • /
    • 2008
  • Extending solutions to the human detection problem into plug-ins for the vision-based Human Computer Interaction domain is very attractive, given the successful marriage of machine learning theory and computer vision. Bayesian logistic regression is a powerful classifier that offers sparsity and high accuracy. The difficulty of finding people in an image is addressed by using this Bayesian model as the classifier, and a comparison with other widely used classifiers, e.g. SVM and RVM, motivates the adoption of this method for the human detection problem. Our experimental results show the good performance of Bayesian logistic regression for human detection, both on trade-off curves (ROC, DET) and in a real implementation, compared to SVM and RVM.
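The entry above names Bayesian logistic regression as the detector's classifier. As a rough illustration of that technique only (not the authors' implementation, and with the feature extraction for detection windows omitted), the sketch below fits a logistic model with a Gaussian prior by MAP estimation plus a Laplace approximation; the function names, prior precision alpha, and synthetic data are assumptions for illustration.

```python
# Minimal Bayesian logistic regression sketch: MAP fit + Laplace approximation.
# Synthetic data only; real human detection would supply window features here.
import numpy as np
from scipy.optimize import minimize

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_log_posterior(w, X, y, alpha):
    """Negative log-posterior: logistic likelihood (labels in {-1,+1}) + Gaussian prior."""
    nll = np.sum(np.logaddexp(0.0, -y * (X @ w)))
    return nll + 0.5 * alpha * w @ w

def fit_map(X, y, alpha=1.0):
    res = minimize(neg_log_posterior, np.zeros(X.shape[1]), args=(X, y, alpha), method="L-BFGS-B")
    w_map = res.x
    # Laplace approximation: posterior covariance = inverse Hessian at the MAP point.
    p = sigmoid(X @ w_map)
    H = (X * (p * (1 - p))[:, None]).T @ X + alpha * np.eye(X.shape[1])
    return w_map, np.linalg.inv(H)

def predict_proba(X, w_map, cov):
    """Moderated probabilities using the predictive variance (probit approximation)."""
    mu = X @ w_map
    var = np.einsum("ij,jk,ik->i", X, cov, X)
    return sigmoid(mu / np.sqrt(1.0 + np.pi * var / 8.0))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    true_w = np.array([2.0, -1.0, 0.0, 0.5, 0.0])
    y = np.where(sigmoid(X @ true_w) > rng.uniform(size=200), 1.0, -1.0)
    w_map, cov = fit_map(X, y, alpha=1.0)
    print("MAP weights:", np.round(w_map, 2))
    print("sample probabilities:", np.round(predict_proba(X[:5], w_map, cov), 2))
```

Thresholding the moderated probabilities at different operating points is what would trace trade-off curves such as the ROC and DET mentioned in the abstract.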


A Human Action Recognition Scheme in Temporal Spatial Data for Intelligent Web Browser

  • Cho, Kyung-Eun
    • Journal of Korea Multimedia Society
    • /
    • v.8 no.6
    • /
    • pp.844-855
    • /
    • 2005
  • This paper proposes a human action recognition scheme for an intelligent web browser. Based on the principle that a human action can be defined as a combination of multiple articulation movements, the inference of stochastic grammars is applied to recognize each action. Human actions in 3-dimensional (3D) world coordinates are measured, quantized, and converted into two sets of 4-direction chain codes for the xy and zy projection planes, so that the stochastic grammar inference method can be applied. We confirm by experiment that various physical actions can be classified correctly against a set of real-world 3D temporal data. The experiments yielded recognition rates of 93.8% for 8 movements of the human head and 84.9% for 60 movements of the human upper body. We expect that this scheme can be used for human-machine interaction commands in a web browser.
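As a small illustration of the representation step named above (not the paper's code, and with the stochastic grammar inference itself omitted), the following sketch quantizes a 3D joint trajectory into two 4-direction chain-code strings for the xy and zy projection planes; the step threshold and toy trajectory are assumptions.

```python
# Quantize a 3D trajectory into 4-direction chain codes on two projection planes.
import numpy as np

def chain_code_2d(points, min_step=1e-3):
    """Map successive 2D displacements to 4 directions: 0=+x, 1=+y, 2=-x, 3=-y."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if max(abs(dx), abs(dy)) < min_step:
            continue                      # ignore near-stationary frames
        if abs(dx) >= abs(dy):
            codes.append(0 if dx > 0 else 2)
        else:
            codes.append(1 if dy > 0 else 3)
    return codes

def action_chain_codes(trajectory_xyz):
    """trajectory_xyz: (T, 3) world coordinates of one joint over time."""
    t = np.asarray(trajectory_xyz, dtype=float)
    xy = t[:, [0, 1]]                     # frontal projection
    zy = t[:, [2, 1]]                     # side projection
    return chain_code_2d(xy), chain_code_2d(zy)

if __name__ == "__main__":
    # Toy trajectory: move right, then up, then slightly back in depth.
    traj = [(0, 0, 0), (0.1, 0, 0), (0.2, 0, 0), (0.2, 0.1, 0), (0.2, 0.2, 0), (0.2, 0.2, -0.1)]
    print(action_chain_codes(traj))       # -> ([0, 0, 1, 1], [1, 1, 2])
```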


High-Fidelity Stable Haptic Interaction for Hybrid Virtual Environments (다양한 형태의 데이터를 포함하는 하이브리드환경을 위한 안정적이고 사실적인 햅틱 제시 알고리즘)

  • Kim, Jong-Phil;Ryu, Je-Ha
    • 한국HCI학회:학술대회논문집
    • /
    • 2006.02a
    • /
    • pp.15-23
    • /
    • 2006
  • This paper proposes a stable and high-fidelity haptic rendering method for hybrid environments that contain objects described in diverse data formats. The proposed method performs collision detection and reaction-force computation in a uniform way, independent of how each virtual object is represented. Users and developers can therefore employ a wide range of content without additional effort and build virtual environments quickly and easily. In addition, the method runs a stabilization computation implemented with multiple threads, which provides stable and realistic force feedback even in environments with slow haptic rendering rates. The proposed method thus offers an opportunity to apply haptic technology more easily and more effectively across a variety of application domains.
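For orientation only, the sketch below shows the generic pattern such systems build on: a fast haptic thread computing a simple penalty-based contact force, decoupled from a slower scene-update loop. It does not reproduce the paper's representation-independent collision handling or its stabilization algorithm; the sphere obstacle, stiffness value, and loop rates are illustrative assumptions.

```python
# Fast haptic force loop in its own thread, decoupled from a slow "graphics" loop.
import threading
import time
import numpy as np

class HapticLoop(threading.Thread):
    def __init__(self, stiffness=500.0, rate_hz=1000.0):
        super().__init__(daemon=True)
        self.stiffness = stiffness
        self.dt = 1.0 / rate_hz
        self.device_pos = np.zeros(3)          # would come from the haptic device API
        self.force = np.zeros(3)
        self.sphere_center = np.zeros(3)       # single-sphere obstacle for illustration
        self.sphere_radius = 0.05
        self.lock = threading.Lock()
        self.running = True

    def run(self):
        while self.running:
            with self.lock:
                d = self.device_pos - self.sphere_center
                dist = np.linalg.norm(d)
                if 0.0 < dist < self.sphere_radius:
                    # Penalty force: push outward along the surface normal.
                    self.force = self.stiffness * (self.sphere_radius - dist) * (d / dist)
                else:
                    self.force = np.zeros(3)
            time.sleep(self.dt)

if __name__ == "__main__":
    loop = HapticLoop()
    loop.start()
    for step in range(5):                      # slow update loop at ~30 Hz
        with loop.lock:
            loop.device_pos = np.array([0.0, 0.0, 0.04 - 0.01 * step])
        time.sleep(1 / 30)                     # let the haptic thread react
        with loop.lock:
            print("force:", np.round(loop.force, 2))
    loop.running = False
```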


Development of an Inspection System for Car Seat Bottom Cushion Frame Using Machine Vision (머신 비전을 이용한 카 시트 쿠션 프레임 검사 시스템 개발)

  • Tucit, Joselito;Jung, Ho;Jang, Bong-Choon
    • Proceedings of the KAIS Fall Conference
    • /
    • 2007.05a
    • /
    • pp.253-255
    • /
    • 2007
  • The increasing demand for consistency and quality in the automotive industry motivated the development of a Machine Vision Inspection System (MVIS) for a car seat bottom cushion frame, with the goal of providing a higher-precision inspection system with fewer components and less human intervention. Modifications made to an existing PC-based MVIS were shown to improve the accuracy and precision of the system. By using four monochrome cameras, the working distance was reduced and image distortion was lessened without resorting to extensive image processing. The inspection scripts were evaluated on whether they could distinguish good and bad products, and were shown to be robust and able to reach an acceptable level of precision. The amount of human interaction required was also shown to be reduced.
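The abstract does not spell out what the inspection scripts check, so the sketch below is only a generic example of the kind of rule such a script might encode: threshold a monochrome camera image, find hole-like contours, and pass the part only if the expected number of plausibly sized holes is present. The file name, expected count, and area bounds are hypothetical.

```python
# Toy pass/fail check on a monochrome frame image (generic example, not the paper's rules).
import cv2

EXPECTED_HOLES = 4
MIN_AREA, MAX_AREA = 200, 5000             # pixel-area bounds, illustrative values

def inspect_frame(gray_image):
    _, binary = cv2.threshold(gray_image, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    holes = [c for c in contours if MIN_AREA <= cv2.contourArea(c) <= MAX_AREA]
    return len(holes) == EXPECTED_HOLES, len(holes)

if __name__ == "__main__":
    img = cv2.imread("cushion_frame_cam1.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file
    if img is not None:
        ok, n = inspect_frame(img)
        print("PASS" if ok else f"FAIL ({n} holes found)")
```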


A study for safety-accident analysis pattern extract model in semiconductor industry (반도체산업에서의 안전사고 분석 패턴 추출 모델 연구)

  • Yoon Yong-Gu;Park Peom
    • Journal of the Korea Safety Management & Science
    • /
    • v.8 no.2
    • /
    • pp.13-23
    • /
    • 2006
  • The present study investigated the patterns and causes of safety accidents from accident data in the semiconductor industry, drawing on near-miss reports and cases from leading companies. Viewed in terms of human-ware, hardware, environment-ware, and system-ware, the ratio of incomplete actions to incomplete states in semiconductor accident cases was 4 to 6, and the ratio of human to machine attributes in these accidents was 4 to 1. The study also examined the correlations among the systems related to production, accidents, losses, and time. For the semiconductor industry, we found that the safety-accident analysis pattern is organized around potential, interaction, complexity, and medium, and that the resulting model consists of organization, individual, task, machine, environment, and system.

Vision-Based Finger Action Recognition by Angle Detection and Contour Analysis

  • Lee, Dae-Ho;Lee, Seung-Gwan
    • ETRI Journal
    • /
    • v.33 no.3
    • /
    • pp.415-422
    • /
    • 2011
  • In this paper, we present a novel vision-based method of recognizing finger actions for use in electronic appliance interfaces. Human skin is first detected by color and consecutive motion information. Then, fingertips are detected by a novel scale-invariant angle detection based on a variable k-cosine. Fingertip tracking is implemented by detected region-based tracking. By analyzing the contour of the tracked fingertip, fingertip parameters, such as position, thickness, and direction, are calculated. Finger actions, such as moving, clicking, and pointing, are recognized by analyzing these fingertip parameters. Experimental results show that the proposed angle detection can correctly detect fingertips, and that the recognized actions can be used for the interface with electronic appliances.
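The fingertip detector above rests on a k-cosine curvature measure along the hand contour. As a rough illustration (fixed k rather than the paper's scale-invariant variable k, and a toy contour instead of a segmented hand), the sketch below flags contour points whose angle between the vectors to the points k steps before and after is small.

```python
# k-cosine curvature on a closed contour; fingertip candidates have small angles.
import numpy as np

def k_cosine_angles(contour, k=10):
    """contour: (N, 2) ordered contour points. Returns the angle (radians) at each point."""
    pts = np.asarray(contour, dtype=float)
    n = len(pts)
    prev = pts[(np.arange(n) - k) % n] - pts
    nxt = pts[(np.arange(n) + k) % n] - pts
    cos = np.sum(prev * nxt, axis=1) / (
        np.linalg.norm(prev, axis=1) * np.linalg.norm(nxt, axis=1) + 1e-9)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def fingertip_candidates(contour, k=10, max_angle_deg=60.0):
    return np.where(k_cosine_angles(contour, k) < np.deg2rad(max_angle_deg))[0]

if __name__ == "__main__":
    # Toy contour: a rectangle with one sharp spike on its top edge (finger-like).
    base = [(x, 0.0) for x in np.linspace(0, 10, 50)]
    right = [(10.0, y) for y in np.linspace(0, 2, 10)]
    top = ([(x, 2.0) for x in np.linspace(10, 6, 20)]
           + [(5.5, 8.0)]                                  # the spike tip
           + [(x, 2.0) for x in np.linspace(5, 0, 25)])
    left = [(0.0, y) for y in np.linspace(2, 0, 10)]
    contour = np.array(base + right + top + left)
    print("candidate point indices:", fingertip_candidates(contour))
```

Straight edges give angles near 180 degrees and right-angle corners about 90 degrees, so only the spike-like tip (and its immediate neighbours) fall under the 60-degree threshold.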

A study on the increase of user gesture recognition rate using data preprocessing (데이터 전처리를 통한 사용자 제스처 인식률 증가 방안)

  • Kim, Jun Heon;Song, Byung Hoo;Shin, Dong Ryoul
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2017.07a
    • /
    • pp.13-16
    • /
    • 2017
  • Gesture recognition is an actively studied technology in the HCI (Human-Computer Interaction) and HRI (Human-Robot Interaction) fields, and accurately identifying a user's gesture by extracting features from gesture data and classifying them has become an important task. This paper describes a method for analyzing hand gesture data measured with an EMG (Electromyography) sensor. To remove noise from the collected data and maximize its distinguishing features, the data was converted into continuous form in a preprocessing step and then classified with a machine learning algorithm. The performance on the raw data and on the preprocessed data was compared using a decision-tree algorithm.
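The comparison described above (raw versus preprocessed EMG windows, classified with a decision tree) can be illustrated with the minimal sketch below. It is not the paper's pipeline: the smoothing filter, window shapes, and synthetic gesture data are assumptions standing in for the actual EMG recordings and preprocessing.

```python
# Compare a decision tree on raw vs. smoothed synthetic EMG windows.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def moving_average(windows, width=5):
    """Smooth each (channels x samples) window along time with a simple box filter."""
    kernel = np.ones(width) / width
    return np.apply_along_axis(lambda s: np.convolve(s, kernel, mode="same"), -1, windows)

def make_synthetic_emg(n_per_class=100, channels=8, samples=50, seed=0):
    rng = np.random.default_rng(seed)
    X, y = [], []
    for label in range(3):                      # three toy gesture classes
        base = np.sin(np.linspace(0, (label + 1) * np.pi, samples))
        for _ in range(n_per_class):
            X.append(base + rng.normal(scale=0.8, size=(channels, samples)))
            y.append(label)
    return np.array(X), np.array(y)

def evaluate(X, y):
    Xf = X.reshape(len(X), -1)                  # flatten windows into feature vectors
    Xtr, Xte, ytr, yte = train_test_split(Xf, y, test_size=0.3, random_state=0, stratify=y)
    clf = DecisionTreeClassifier(random_state=0).fit(Xtr, ytr)
    return accuracy_score(yte, clf.predict(Xte))

if __name__ == "__main__":
    X, y = make_synthetic_emg()
    print("raw accuracy:         ", round(evaluate(X, y), 3))
    print("preprocessed accuracy:", round(evaluate(moving_average(X), y), 3))
```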


Research on Classification of Human Emotions Using EEG Signal (뇌파신호를 이용한 감정분류 연구)

  • Zubair, Muhammad;Kim, Jinsul;Yoon, Changwoo
    • Journal of Digital Contents Society
    • /
    • v.19 no.4
    • /
    • pp.821-827
    • /
    • 2018
  • Affective computing has gained increasing interest in recent years with the development of potential applications in human-computer interaction (HCI) and healthcare. Although considerable research has been done on human emotion recognition, physiological signals have received less attention than speech and facial expressions. In this paper, Electroencephalogram (EEG) signals from different brain regions were investigated using modified wavelet energy features. The mRMR algorithm was applied to minimize redundancy and maximize relevance among the features. EEG recordings from the publicly available DEAP database were used to classify four classes of emotion with a multi-class Support Vector Machine. The proposed approach shows strong performance compared to existing algorithms.
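To illustrate the feature/classifier combination named above, the sketch below computes plain per-channel wavelet sub-band energies (with PyWavelets) and feeds them to a multi-class SVM. It omits the DEAP loading, the paper's modified energy definition, and the mRMR selection step; the wavelet choice, decomposition level, and synthetic trials are assumptions.

```python
# Wavelet sub-band energy features per channel + multi-class SVM on synthetic "EEG".
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def wavelet_energy_features(signal, wavelet="db4", level=4):
    """Relative energy of each DWT sub-band of a single-channel signal."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    energies = np.array([np.sum(c ** 2) for c in coeffs])
    return energies / (energies.sum() + 1e-12)

def features_for_trial(trial):
    """trial: (channels, samples) EEG segment -> concatenated band energies."""
    return np.concatenate([wavelet_energy_features(ch) for ch in trial])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X, y = [], []
    for label in range(4):                        # four toy emotion classes
        freq = 4 + 6 * label                      # give each class a different dominant band
        t = np.linspace(0, 2, 512)
        for _ in range(40):
            trial = np.sin(2 * np.pi * freq * t) + rng.normal(scale=1.0, size=(32, 512))
            X.append(features_for_trial(trial))
            y.append(label)
    X, y = np.array(X), np.array(y)
    clf = SVC(kernel="rbf", C=1.0)                # multi-class handled one-vs-one by default
    print("CV accuracy:", round(cross_val_score(clf, X, y, cv=5).mean(), 3))
```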

Deep Learning-based Action Recognition using Skeleton Joints Mapping (스켈레톤 조인트 매핑을 이용한 딥 러닝 기반 행동 인식)

  • Tasnim, Nusrat;Baek, Joong-Hwan
    • Journal of Advanced Navigation Technology
    • /
    • v.24 no.2
    • /
    • pp.155-162
    • /
    • 2020
  • Recently, with the development of computer vision and deep learning technology, research on human action recognition has been actively conducted for video analysis, video surveillance, interactive multimedia, and human-machine interaction applications. Diverse techniques have been introduced for human action understanding and classification by many researchers using RGB images, depth images, skeleton data, and inertial data. However, skeleton-based action discrimination is still a challenging research topic for human-machine interaction. In this paper, we propose an end-to-end mapping of skeleton joints for an action into a spatio-temporal image, a so-called dynamic image. Then, an efficient deep convolutional neural network is devised to perform classification among the action classes. We use the publicly accessible UTD-MHAD skeleton dataset to evaluate the performance of the proposed method. In our experiments, the proposed system shows better performance than existing methods, with a high accuracy of 97.45%.
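The two stages described above can be sketched roughly as follows: map a skeleton joint sequence onto a fixed-size pseudo-image whose rows are joints, columns are resampled frames, and channels are the x/y/z coordinates, then classify it with a small CNN. The specific joint-to-pixel mapping and network of the paper are not reproduced; the PyTorch model, shapes, and random data are illustrative (UTD-MHAD has 20 joints and 27 action classes).

```python
# Skeleton sequence -> pseudo-image -> small CNN classifier (illustrative only).
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

def skeleton_to_image(sequence, target_frames=64):
    """sequence: (frames, joints, 3) -> tensor (3, joints, target_frames), values in [0, 1]."""
    seq = np.asarray(sequence, dtype=np.float32)
    idx = np.linspace(0, len(seq) - 1, target_frames).astype(int)   # resample along time
    seq = seq[idx]                                                   # (T, J, 3)
    seq = (seq - seq.min()) / (seq.max() - seq.min() + 1e-9)         # normalize to [0, 1]
    return torch.from_numpy(seq).permute(2, 1, 0)                    # (3, J, T)

class SmallActionCNN(nn.Module):
    def __init__(self, n_classes=27):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveAvgPool2d((4, 4))
        self.fc = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        x = F.max_pool2d(x, 2)
        x = F.relu(self.conv2(x))
        x = self.pool(x)
        return self.fc(x.flatten(1))

if __name__ == "__main__":
    toy_sequence = np.random.rand(90, 20, 3)          # 90 frames, 20 joints, xyz
    img = skeleton_to_image(toy_sequence)             # (3, 20, 64)
    model = SmallActionCNN()
    logits = model(img.unsqueeze(0))                  # add batch dimension
    print("predicted class:", logits.argmax(dim=1).item())
```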

Real-Time Eye Detection and Tracking Under Various Light Conditions (다양한 조명하에서 실시간 눈 검출 및 추적)

  • 박호식;배철수
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.8 no.2
    • /
    • pp.456-463
    • /
    • 2004
  • Non-intrusive methods based on active remote IR illumination for eye tracking are important for many applications of vision-based man-machine interaction. One problem that has plagued those methods is their sensitivity to changes in lighting conditions, which tends to significantly limit their scope of application. In this paper, we present a new real-time eye detection and tracking methodology that works under variable and realistic lighting conditions. By combining the bright-pupil effect resulting from IR illumination with a conventional appearance-based object recognition technique, our method can robustly track eyes even when the pupils are not very bright due to significant external illumination interference. The appearance model is incorporated in both eye detection and tracking via a support vector machine and mean shift tracking. Additional improvement is achieved by modifying the image acquisition apparatus, including the illuminator and the camera.
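The bright-pupil cue mentioned above is commonly obtained by differencing frames taken with on-axis and off-axis IR rings. The sketch below shows only that differencing-and-thresholding step; the SVM appearance verification and mean-shift tracking stages are not shown, and the file names, blob-size bounds, and thresholding choices are illustrative assumptions.

```python
# Bright-pupil candidate detection by differencing on-/off-axis IR frames.
import cv2

def pupil_candidates(bright_frame, dark_frame, min_area=5, max_area=400):
    """bright_frame/dark_frame: grayscale images taken with on-/off-axis IR illumination."""
    diff = cv2.subtract(bright_frame, dark_frame)              # bright pupils survive
    diff = cv2.GaussianBlur(diff, (5, 5), 0)
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centers = []
    for c in contours:
        if min_area <= cv2.contourArea(c) <= max_area:         # pupil-sized blobs only
            m = cv2.moments(c)
            centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centers

if __name__ == "__main__":
    bright = cv2.imread("frame_on_axis.png", cv2.IMREAD_GRAYSCALE)   # hypothetical files
    dark = cv2.imread("frame_off_axis.png", cv2.IMREAD_GRAYSCALE)
    if bright is not None and dark is not None:
        print("pupil candidates:", pupil_candidates(bright, dark))
```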