• Title/Abstract/Keyword: Human computer interaction

Search results: 634 (processing time: 0.025 s)

제품의 유지보수를 위한 시각 기반 증강현실 기술 개발 (Development Technology of Vision Based Augmented Reality for the Maintenance of Products)

  • 이경호;이정민;김동근;한영수;이재준
    • 한국CDE학회논문집 / Vol.13 No.4 / pp.265-272 / 2008
  • The flow of technology has been moving in a human-oriented direction, from the time the computer was first invented to today, when new computing environments built on mobile devices and global networks are everywhere. Within this flow, ubiquitous computing is being proposed as a new computing paradigm. Augmented reality is one of the ubiquitous technologies that support interaction between human and computer: by adding computer-generated information to real information and enabling interaction between the two, the user obtains improved and more knowledgeable information about the real world. The purpose of this paper is to show the possibility of applying vision-based augmented reality to the maintenance of product systems.
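
The abstract describes the overlay idea only at a high level. A minimal sketch of vision-based overlay, assuming a fiducial-marker approach with OpenCV's ArUco module (opencv-contrib-python 4.7+) rather than the authors' actual tracker, and with a placeholder annotation string, might look like this:

```python
# Minimal sketch: overlay a maintenance annotation on a detected ArUco
# marker. The marker dictionary and the annotation text are assumptions.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

cap = cv2.VideoCapture(0)                      # live camera feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is not None:
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
        for quad in corners:                   # anchor text at each marker
            x, y = quad[0][0].astype(int)
            cv2.putText(frame, "Step 3: replace filter", (x, y - 10),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("AR maintenance overlay", frame)
    if cv2.waitKey(1) == 27:                   # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```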

외부 환경 감지 센서 모듈을 이용한 소프트웨어 로봇의 감정 모델 구현 (Implementation of Emotional Model of Software Robot Using the Sensor Modules for External Environments)

  • 이준용;김창현;이주장
    • 한국지능시스템학회:학술대회논문집 / 한국퍼지및지능시스템학회 2005년도 추계학술대회 학술발표 논문집 제15권 제2호 / pp.179-184 / 2005
  • Recently, modeling the emotions of a robot has become an active research issue in the fields of humanoid robots and human-robot interaction. In particular, modeling motivation, emotion, behavior, and so on in a robot is difficult and demands considerable originality. In this paper, a new model that uses mathematical formulations to represent emotion and behavior selection is proposed for a software robot with virtual sensor modules. The various points that affect six emotional states, such as happy or sad, are formulated as simple exponential equations with various parameters. Several experiments with seven external sensor inputs, from the virtual environment and from a human, are used to evaluate this model.
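
The exact formulations are not reproduced in the abstract. A toy sketch of the idea, six emotional states driven by sensor-derived stimulus "points" through saturating exponential terms, could look like the following; the state names, decay/gain parameters, and update rule are illustrative assumptions, not the authors' equations:

```python
# Toy sketch: emotion states updated by exponential functions of stimulus
# "points" from virtual sensor modules. All parameter values are assumed.
import math

EMOTIONS = ["happy", "sad", "angry", "fear", "surprise", "neutral"]

def update_emotions(state, points, decay=0.9, gain=1.0):
    """Each stimulus contributes 1 - exp(-gain * p), a saturating response,
    while the previous intensity decays exponentially toward zero."""
    return {e: decay * state.get(e, 0.0)
               + (1.0 - math.exp(-gain * points.get(e, 0.0)))
            for e in EMOTIONS}

def select_behavior(state):
    # Simplest possible behavior selection: act on the dominant emotion.
    return max(state, key=state.get)

state = {e: 0.0 for e in EMOTIONS}
state = update_emotions(state, {"happy": 2.0, "sad": 0.1})  # e.g. petting
print(select_behavior(state))  # -> "happy"
```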

외부 환경 감지 센서 모듈을 이용한 소프트웨어 로봇의 감정 모델 구현 (Implementation of Emotional Model of Software Robot Using the Sensor Modules for External Environments)

  • 이준용;김창현;이주장
    • 한국지능시스템학회논문지 / Vol.16 No.1 / pp.37-42 / 2006
  • Recently, modeling the emotions of a robot has become an active research issue in the fields of humanoid robots and human-robot interaction. In particular, modeling motivation, emotion, behavior, and so on in a robot is difficult and demands considerable originality. In this paper, a new model that uses mathematical formulations to represent emotion and behavior selection is proposed for a software robot with virtual sensor modules. The various points that affect six emotional states, such as happy or sad, are formulated as simple exponential equations with various parameters. Several experiments with seven external sensor inputs, from the virtual environment and from a human, are used to evaluate this model.

손 동작을 통한 인간과 컴퓨터간의 상호 작용 (Recognition of Hand Gestures for Human-Computer Interaction)

  • 이래경;김성신
    • 대한전기학회:학술대회논문집 / 대한전기학회 2000년도 하계학술대회 논문집 D / pp.2930-2932 / 2000
  • In this paper, a robust gesture recognition system is designed and implemented to explore communication methods between human and computer. In the proposed approach, hand gestures are used to communicate with a computer for actions with a high degree of freedom. The user does not need to wear any cumbersome devices such as cyber-gloves, and no assumption is made about whether the user is wearing any ornaments or using the left or right hand. Image segmentation based on skin color and shape analysis based on invariant moments are combined, and the extracted features are used as input vectors to a radial basis function network (RBFN). Our "Puppy" robot is employed as a testbed. Preliminary results on a set of gestures show recognition rates of about 87% in a real-time implementation.
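
A compact sketch of this pipeline, skin-color segmentation followed by invariant (Hu) moments feeding a radial basis function readout, might look like the following; the HSV skin thresholds and the tiny untrained RBF layer are illustrative assumptions:

```python
# Sketch: skin-color segmentation -> Hu moment features -> RBF network.
import cv2
import numpy as np

def hand_features(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))  # rough skin range
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    hu = cv2.HuMoments(cv2.moments(mask)).flatten()
    # Log-scale the Hu moments, which span many orders of magnitude.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def rbf_classify(x, centers, widths, weights):
    # One hidden layer of Gaussian units, then a linear readout; centers,
    # widths, and weights would come from training on labeled gestures.
    acts = np.exp(-np.sum((centers - x) ** 2, axis=1) / (2.0 * widths ** 2))
    return int(np.argmax(weights.T @ acts))  # index of the predicted gesture
```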

MPEG-U-based Advanced User Interaction Interface Using Hand Posture Recognition

  • Han, Gukhee;Choi, Haechul
    • IEIE Transactions on Smart Processing and Computing / Vol.5 No.4 / pp.267-273 / 2016
  • Hand posture recognition is an important technique for enabling a natural and familiar interface in the human-computer interaction (HCI) field. This paper introduces a hand posture recognition method using a depth camera. Moreover, the method is incorporated with the Moving Picture Experts Group Rich Media User Interface (MPEG-U) Advanced User Interaction (AUI) Interface (MPEG-U part 2), which can provide a natural interface on a variety of devices. The proposed method initially detects the positions and lengths of all open fingers, and then recognizes the hand posture from the pose of one or two hands and the number of folded fingers when a user presents a gesture representing a pattern in the AUI data format specified in MPEG-U part 2. The AUI interface represents the user's hand posture in the compliant MPEG-U schema structure. Experimental results demonstrate the performance of the hand posture recognition system and verify that the AUI interface is compatible with the MPEG-U standard.
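
The MPEG-U schema details are not shown in the abstract, but the finger-counting step can be sketched generically; the convexity-defect heuristic and the depth threshold below are assumptions, not the paper's exact method:

```python
# Generic sketch: count extended fingers from a binary hand mask, e.g.
# obtained by thresholding a depth image around the hand's distance.
import cv2
import numpy as np

def count_open_fingers(hand_mask):
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    cnt = max(contours, key=cv2.contourArea)          # largest blob = hand
    hull = cv2.convexHull(cnt, returnPoints=False)
    defects = cv2.convexityDefects(cnt, hull)
    if defects is None:
        return 0
    # Deep convexity defects are the valleys between fingers, so n open
    # fingers produce roughly n - 1 deep defects.
    deep = sum(1 for i in range(defects.shape[0])
               if defects[i, 0, 3] / 256.0 > 20.0)    # depth threshold [px]
    return deep + 1 if deep > 0 else 0
```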

A Human Action Recognition Scheme in Temporal Spatial Data for Intelligent Web Browser

  • Cho, Kyung-Eun
    • 한국멀티미디어학회논문지 / Vol.8 No.6 / pp.844-855 / 2005
  • This paper proposes a human action recognition scheme for an intelligent web browser. Based on the principle that a human action can be defined as a combination of multiple articulation movements, the inference of stochastic grammars is applied to recognize each action. Human actions in 3-dimensional (3D) world coordinates are measured, quantized, and converted into two sets of 4-chain-codes for the xy and zy projection planes, making them suitable for the stochastic grammar inference method. Experiments confirm that various physical actions can be classified correctly against a set of real-world 3D temporal data, with a recognition rate of 93.8% over 8 movements of the human head and 84.9% over 60 movements of the human upper body. We expect that this scheme can be used for human-machine interaction commands in a web browser.
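
The quantization step can be illustrated directly. A minimal sketch, assuming a 4-direction labeling (0:+x, 1:+y, 2:-x, 3:-y) that may differ from the paper's convention, follows:

```python
# Sketch: quantize a 3-D trajectory into 4-chain-codes on the xy and zy
# projection planes, the input format for stochastic grammar inference.
import numpy as np

def chain_code_2d(points):
    """points: (N, 2) array; returns one 4-chain-code symbol per segment."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points[:-1], points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if abs(dx) >= abs(dy):
            codes.append(0 if dx >= 0 else 2)   # dominant horizontal move
        else:
            codes.append(1 if dy >= 0 else 3)   # dominant vertical move
    return codes

traj = np.array([[0, 0, 0], [1, 0, 1], [2, 1, 1], [2, 2, 0]], float)
print(chain_code_2d(traj[:, [0, 1]]))   # xy projection
print(chain_code_2d(traj[:, [2, 1]]))   # zy projection
```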

Human Centered Robot for Mutual Interaction in Intelligent Space

  • Jin Tae-Seok;Hashimoto Hideki
    • International Journal of Fuzzy Logic and Intelligent Systems / Vol.5 No.3 / pp.246-252 / 2005
  • Intelligent Space is a space in which many sensors and intelligent devices are distributed. Mobile robots exist in this space as physical agents that provide humans with services. To realize this, humans and mobile robots have to approach each other as much as possible, and they must interact naturally; it is therefore desirable for a mobile robot to move in a human-affinitive way. In this research, a mobile robot is controlled by the Intelligent Space through its resources and made to follow a walking human as stably and precisely as possible. In order to follow the human, the control law is derived from the assumption that the human and the mobile robot are connected by a virtual spring model: the input velocity to the mobile robot is generated on the basis of the elastic force from the virtual spring. The performance is verified by computer simulation and experiment.
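
The virtual-spring control law lends itself to a short sketch. The gains, rest length, and damping below are illustrative assumptions, not the paper's tuned values:

```python
# Sketch: the robot's commanded velocity comes from the elastic force of a
# virtual spring stretched between the robot and the walking human.
import numpy as np

K_SPRING = 1.5    # spring constant (assumed)
REST_LEN = 1.0    # desired following distance [m] (assumed)
DAMPING = 0.8     # velocity damping to suppress oscillation (assumed)
DT = 0.05         # control period [s]

def follow_step(robot_pos, robot_vel, human_pos):
    offset = human_pos - robot_pos
    dist = np.linalg.norm(offset)
    if dist < 1e-6:
        return robot_vel
    # Hooke's law along the human-robot line: F = k * stretch * unit vector.
    force = K_SPRING * (dist - REST_LEN) * (offset / dist)
    return DAMPING * robot_vel + force * DT      # new velocity command

robot, vel, human = np.zeros(2), np.zeros(2), np.array([3.0, 0.0])
for _ in range(200):
    vel = follow_step(robot, vel, human)
    robot = robot + vel * DT
print(robot)   # settles about REST_LEN short of the human
```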

Signalman Action Analysis for Container Crane Controlling

  • Bae, Suk-Tae
    • 한국멀티미디어학회논문지 / Vol.12 No.12 / pp.1728-1735 / 2009
  • Human action tracking plays an important role in human-computer interaction, but it is a challenging task because of the exponentially increasing computational complexity in the degrees of freedom of the object and the severe image ambiguities incurred by frequent self-occlusions. In this paper, we propose a novel method to track human actions. In our technique, a dynamic background estimation algorithm is applied first; based on the estimated background, the human object is extracted from the video sequence, and then skeletonization and the Hough transform are used to detect the main structure of the human body and the rotation angle of each part. The calculated rotation angles are used to control a container crane in the port, so the crane can be commanded directly through the signalman's body. Experimental results show that the proposed method obtains better results than conventional methods such as MIT, JPF, or MFMC.
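
A rough sketch of the first stages, running-average background estimation, foreground extraction, and Hough lines whose angles stand in for limb rotation angles, might look like this; the thresholds and the grayscale-frame assumption are illustrative:

```python
# Sketch: background estimation -> foreground mask -> line angles.
# `frames` is assumed to be a list of grayscale uint8 images.
import cv2
import numpy as np

def limb_angles(frames):
    bg = frames[0].astype(np.float32)
    angles = []
    for f in frames[1:]:
        cv2.accumulateWeighted(f.astype(np.float32), bg, 0.05)  # update bg
        fg = cv2.absdiff(f, cv2.convertScaleAbs(bg))
        _, mask = cv2.threshold(fg, 30, 255, cv2.THRESH_BINARY)
        edges = cv2.Canny(mask, 50, 150)
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=40,
                                minLineLength=30, maxLineGap=5)
        if lines is not None:
            angles.append([np.degrees(np.arctan2(y2 - y1, x2 - x1))
                           for x1, y1, x2, y2 in lines[:, 0]])
    return angles   # per-frame candidate rotation angles for crane commands
```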

Improvement of Accuracy for Human Action Recognition by Histogram of Changing Points and Average Speed Descriptors

  • Vu, Thi Ly;Do, Trung Dung;Jin, Cheng-Bin;Li, Shengzhe;Nguyen, Van Huan;Kim, Hakil;Lee, Chongho
    • Journal of Computing Science and Engineering / Vol.9 No.1 / pp.29-38 / 2015
  • Human action recognition has recently become an important research topic in the computer vision area, due to its many real-world applications, such as video surveillance, video retrieval, video analysis, and human-computer interaction. The goal of this paper is to evaluate descriptors that have recently been used in action recognition, namely the Histogram of Oriented Gradients (HOG) and the Histogram of Optical Flow (HOF). This paper also proposes two new descriptors: the Histogram of Changing Points (HCP), which represents the changes of points within each part of the human body caused by actions, and the Average Speed (AS), which measures the average speed of actions. The descriptors are combined into a strong descriptor that represents human actions by modeling appearance, local motion, changes in each part of the body, and motion speed. The effectiveness of the new descriptors is evaluated in experiments on the KTH and Hollywood datasets.
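
Of the four descriptors, the Average Speed is the simplest to sketch; the tracked-point input format and the fps normalization below are assumptions:

```python
# Sketch: Average Speed (AS) as the mean per-frame displacement of tracked
# body points over a clip; the paper combines it with HOG, HOF, and HCP.
import numpy as np

def average_speed(tracks, fps=25.0):
    """tracks: (T, P, 2) array of P point positions over T frames."""
    step = np.diff(tracks, axis=0)               # (T-1, P, 2) displacements
    speed = np.linalg.norm(step, axis=2) * fps   # pixels per second
    return float(speed.mean())                   # one scalar descriptor

clip = np.cumsum(np.random.randn(50, 10, 2), axis=0)  # synthetic drift
print(average_speed(clip))
```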

데이터 글로브를 이용한 3차원 손동작 인식 (3-D Hand Motion Recognition Using Data Glove)

  • 김지환;박진우;;김태성
    • 한국HCI학회:학술대회논문집 / 한국HCI학회 2009년도 학술대회 / pp.324-329 / 2009
  • Hand Motion Recognition (HMR), a core technology of proactive computing, has been studied extensively in the field of Human-Computer Interaction (HCI). In this study, we built a data glove equipped with 3-axis acceleration sensors, implemented a 3-D hand model, and developed a hand motion recognition technique based on them. The data glove is an input device for virtual reality; in this paper it was implemented so that the signals acquired from the 3-axis acceleration sensors are transmitted to a PC over wireless communication. For hand modeling, a 3-D hand model based on kinematic chain theory was implemented using ellipsoids, and simple hand motions (scissors, rock, paper) were recognized through the implemented 3-D hand model by applying a rule-based algorithm to the acceleration information obtained from the data glove.
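
A toy sketch of the rule-based step, classifying rock/paper/scissors from how many fingers read as bent according to per-finger accelerometer tilt, could look like the following; the tilt test, threshold, and finger ordering are assumptions:

```python
# Sketch: rule-based rock/paper/scissors from five finger-mounted 3-axis
# accelerometers, using static gravity direction as a bend indicator.
import math

def finger_bent(ax, ay, az, bend_deg=60.0):
    # Tilt of the finger-mounted sensor relative to gravity (static case).
    pitch = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    return abs(pitch) > bend_deg

def classify(readings):
    """readings: five (ax, ay, az) tuples, thumb to little finger."""
    bent = [finger_bent(*r) for r in readings]
    if all(bent):
        return "rock"
    if not any(bent):
        return "paper"
    if bent == [True, False, False, True, True]:  # only index/middle open
        return "scissors"
    return "unknown"

flat, folded = (0.0, 0.0, 1.0), (1.0, 0.0, 0.1)
print(classify([folded, flat, flat, folded, folded]))  # -> "scissors"
```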