• Title/Summary/Keyword: Camera-based Interaction

Real-time Interactive Particle-art with Human Motion Based on Computer Vision Techniques (컴퓨터 비전 기술을 활용한 관객의 움직임과 상호작용이 가능한 실시간 파티클 아트)

  • Jo, Ik Hyun; Park, Geo Tae; Jung, Soon Ki
    • Journal of Korea Multimedia Society / v.21 no.1 / pp.51-60 / 2018
  • We present a real-time interactive particle art that responds to human motion, based on computer vision techniques. Computer vision lets us reduce the amount of equipment required to appreciate the media art. We analyze the pros and cons of various computer vision methods that can be adapted to interactive digital media art. In our system, background subtraction locates the audience, and the audience image is converted into particles using grid cells. Optical flow detects the audience's motion and drives the particle effects, and a virtual button is defined for interaction. This paper introduces a series of computer vision modules for building interactive digital media art content that can be easily configured with a single camera sensor.
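
The pipeline described in this abstract (background subtraction to find the audience, then grid cells that become particles) can be sketched in miniature as follows; the threshold, grid size, and toy frame below are illustrative assumptions, not the authors' implementation:

```python
def background_subtract(frame, background, threshold=30):
    """Mark pixels as foreground where they differ enough from the background."""
    return [[abs(f - b) > threshold for f, b in zip(fr, br)]
            for fr, br in zip(frame, background)]

def particles_from_grid(mask, cell=2):
    """Turn each grid cell that contains foreground into one particle position."""
    particles = []
    for y in range(0, len(mask), cell):
        for x in range(0, len(mask[0]), cell):
            block = [mask[yy][xx]
                     for yy in range(y, min(y + cell, len(mask)))
                     for xx in range(x, min(x + cell, len(mask[0])))]
            if any(block):
                particles.append((x + cell // 2, y + cell // 2))
    return particles

background = [[10] * 8 for _ in range(8)]
frame = [row[:] for row in background]
for y in range(2, 5):
    for x in range(3, 6):
        frame[y][x] = 200          # an "audience" region entering the scene

mask = background_subtract(frame, background)
print(particles_from_grid(mask))   # → [(3, 3), (5, 3), (3, 5), (5, 5)]
```

In the full system the particle positions would then be advected by the optical-flow vectors rather than held static.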

Evaluation of sequence tracking methods for Compton cameras based on CdZnTe arrays

  • Lee, Jun; Kim, Younghak; Bolotnikov, Aleksey; Lee, Wonho
    • Nuclear Engineering and Technology / v.53 no.12 / pp.4080-4092 / 2021
  • In this study, the performance of sequence tracking methods for multiple interaction events in specific CdZnTe Compton imagers was evaluated using Monte Carlo simulations. The Compton imager consisted of a 6 × 6 array of virtual Frisch-grid CZT crystals, where the dimensions of each crystal were 5 × 5 × 12 mm³. The sequence tracking methods for another Compton imager consisting of two identical CZT crystal arrays were also evaluated. When 662 keV radiation was incident on the detectors, the percentages of correct sequences determined by the simple comparison and deterministic methods for two sequential interactions were identical (~80%), while those evaluated using the minimum squared difference method (55-59%) and the Three Compton method (45-55%) for three sequential interactions differed from each other. The reconstructed images of a 662 keV point source detected using single and double arrays were evaluated based on their angular resolution and signal-to-noise ratio, and the results showed that the double arrays outperformed the single arrays.
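
As a rough illustration of the kind of ordering these methods perform, the sketch below implements a minimum-squared-difference style check: for each candidate ordering of three interactions, the Compton-kinematic scattering angle at the intermediate site is compared with the geometric angle formed by the hit positions. The toy event, energies, and cost function are illustrative assumptions, not the paper's detector model:

```python
import math
from itertools import permutations

ME_C2 = 511.0  # electron rest energy in keV

def kinematic_cos(e_in, e_dep):
    """cos(theta) of a Compton scatter, from the photon energy arriving at
    the site and the energy deposited there."""
    e_out = e_in - e_dep
    if e_out <= 0:
        return None                     # unphysical for this ordering
    return 1.0 + ME_C2 * (1.0 / e_in - 1.0 / e_out)

def geometric_cos(p0, p1, p2):
    """cos of the angle between the segments p0->p1 and p1->p2."""
    a = tuple(q - p for p, q in zip(p0, p1))
    b = tuple(q - p for p, q in zip(p1, p2))
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.dist(p0, p1) * math.dist(p1, p2))

def best_sequence(hits, e_total):
    """Pick the ordering of (position, deposited_energy) hits whose kinematic
    and geometric scattering angles agree best at the intermediate sites."""
    best, best_cost = None, float("inf")
    for order in permutations(hits):
        e_in, cost = e_total, 0.0
        for i in range(1, len(order) - 1):
            e_in -= order[i - 1][1]           # energy arriving at site i
            kin = kinematic_cos(e_in, order[i][1])
            if kin is None:
                cost = float("inf")
                break
            geo = geometric_cos(order[i - 1][0], order[i][0], order[i + 1][0])
            cost += (kin - geo) ** 2
        if cost < best_cost:
            best, best_cost = list(order), cost
    return best

# A toy 662 keV event constructed to scatter by ~90 degrees at the middle site.
hits = [((1, 0, 0), 294.36), ((1, 1, 0), 267.64), ((0, 0, 0), 100.0)]
print([h[0] for h in best_sequence(hits, 662.0)])
# → [(0, 0, 0), (1, 0, 0), (1, 1, 0)]
```

The simple comparison and deterministic methods mentioned in the abstract use different criteria for two-hit events, but the same "score every permutation" structure applies.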

An Analysis of Human Gesture Recognition Technologies for Electronic Device Control (전자 기기 조종을 위한 인간 동작 인식 기술 분석)

  • Choi, Min-Seok; Jang, Beakcheol
    • Journal of the Korea Society of Computer and Information / v.19 no.12 / pp.91-100 / 2014
  • In this paper, we categorize existing human gesture recognition technologies into camera-based, additional-hardware-based, and frequency-based technologies. We then describe several representative techniques in each category, emphasizing their strengths and weaknesses. We define important performance issues for human gesture recognition technologies and analyze recent technologies against them. Our analysis shows that camera-based technologies are easy to use and highly accurate, but they have limited recognition range and incur additional device costs. Additional-hardware-based technologies are not limited by recognition range and are not affected by light or noise, but they have the disadvantage that users must wear or carry additional devices, which also adds cost. Finally, frequency-based technologies are not limited by recognition range and need no additional devices; however, they have not yet been commercialized, and their accuracy can be degraded by other frequencies and signals.

Semi-automatic 3D Building Reconstruction from Uncalibrated Images (비교정 영상에서의 반자동 3차원 건물 모델링)

  • Jang, Kyung-Ho; Jang, Jae-Seok; Lee, Seok-Jun; Jung, Soon-Ki
    • Journal of Korea Multimedia Society / v.12 no.9 / pp.1217-1232 / 2009
  • In this paper, we propose a semi-automatic 3D building reconstruction method using uncalibrated images that include the facade of the target building. First, we extract feature points in all images and find corresponding points between each pair of images. Second, we extract lines in each image and estimate the vanishing points; the extracted lines are grouped according to their corresponding vanishing points. An adjacency graph organizes the image sequence based on the number of corresponding points between image pairs, and camera calibration is performed. An initial solid model can be generated with a few user interactions using the grouped lines and the camera pose information. From the initial solid model, a detailed building model is reconstructed by combining predefined basic Euler operators on a half-edge data structure. Automatically computed geometric information is visualized to assist the user's interaction during the detailed modeling process. The proposed system allows the user to obtain a 3D building model with less user interaction by augmenting the various automatically generated geometric information.
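
The line-grouping step can be pictured with a small sketch: each segment's supporting line is tested against candidate vanishing points, and the segment joins the group of the nearest one. The coordinates, tolerance, and grouping rule below are illustrative, not the paper's actual estimator:

```python
import math

def line_through(p, q):
    """Homogeneous coefficients (a, b, c) of the line ax + by + c = 0
    through points p and q."""
    (x1, y1), (x2, y2) = p, q
    return (y1 - y2, x2 - x1, x1 * y2 - x2 * y1)

def point_line_distance(point, line):
    """Perpendicular distance from a point to a homogeneous line."""
    a, b, c = line
    x, y = point
    return abs(a * x + b * y + c) / math.hypot(a, b)

def group_lines(segments, vanishing_points, tol=1.0):
    """Assign each segment the index of the vanishing point its supporting
    line passes closest to, or None if none is within tol pixels."""
    groups = []
    for p, q in segments:
        line = line_through(p, q)
        dists = [point_line_distance(v, line) for v in vanishing_points]
        i = min(range(len(dists)), key=dists.__getitem__)
        groups.append(i if dists[i] <= tol else None)
    return groups

vps = [(10, 0), (0, 10)]                               # candidate vanishing points
segs = [((0, 0), (5, 0)), ((0, 0), (0, 5)), ((0, 1), (1, 0))]
print(group_lines(segs, vps))                          # → [0, 1, None]
```

In practice the vanishing points themselves are first estimated from the line set (e.g. by voting), and the groups then drive the facade orientation of the solid model.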

Stereo Vision based Human Detection using SVM (SVM을 이용한 스테레오 비전 기반의 사람 탐지)

  • Jung, Sang-Jun; Song, Jae-Bok
    • Proceedings of the KIEE Conference / 2007.10a / pp.117-118 / 2007
  • A robot needs a human detection algorithm to interact with humans. This paper proposes a method that finds people using an SVM (support vector machine) classifier and a stereo camera. Feature vectors for the SVM are extracted as HOG (histogram of oriented gradients) descriptors from the images. After training on vectors extracted from the clustered images, the SVM algorithm produces a classifier for human detection. Each human candidate in the image is generated by clustering depth information from the stereo camera, and each candidate is evaluated by the classifier. Compared with the existing method of generating human candidates, clustering reduces computation time. The experimental results demonstrate that the proposed approach can be executed in real time.
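
A toy version of this feature pipeline, a simplified single-cell orientation histogram fed to a linear decision function, might look like the following; the patch, weights, and bias are made-up placeholders, since real weights would come from SVM training:

```python
import math

def hog_feature(patch, bins=8):
    """Unsigned gradient-orientation histogram for one cell: a simplified
    stand-in for the HOG descriptor used as the SVM feature vector."""
    h = [0.0] * bins
    rows, cols = len(patch), len(patch[0])
    for y in range(1, rows - 1):
        for x in range(1, cols - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]      # central differences
            gy = patch[y + 1][x] - patch[y - 1][x]
            ang = math.atan2(gy, gx) % math.pi          # unsigned orientation
            h[min(int(ang / math.pi * bins), bins - 1)] += math.hypot(gx, gy)
    total = sum(h) or 1.0
    return [v / total for v in h]                       # L1-normalized

def svm_decision(feature, weights, bias):
    """Linear SVM decision value; a positive score would mean 'human'."""
    return sum(w * f for w, f in zip(weights, feature)) + bias

# A patch with one horizontal edge: all gradient energy lands in a single bin.
patch = [[0, 0, 0, 0], [0, 0, 0, 0], [9, 9, 9, 9], [9, 9, 9, 9]]
feature = hog_feature(patch)
print(round(feature[4], 3))                             # → 1.0
```

In the paper's setting the candidate windows come from depth clustering rather than sliding windows, which is exactly why the computation time drops.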

Classification between Intentional and Natural Blinks in Infrared Vision Based Eye Tracking System

  • Kim, Song-Yi; Noh, Sue-Jin; Kim, Jin-Man; Whang, Min-Cheol; Lee, Eui-Chul
    • Journal of the Ergonomics Society of Korea / v.31 no.4 / pp.601-607 / 2012
  • Objective: The aim of this study is to classify intentional and natural blinks in a vision-based eye tracking system. By implementing this classification method, we expect that an eye tracking method can be designed that performs both navigation and selection interactions well. Background: Eye tracking is currently widely used to increase user immersion and interest by supporting natural user interfaces. Although conventional eye tracking systems handle navigation interaction well by tracking pupil movement, there is no breakthrough method for selection interaction. Method: To determine a classification threshold between intentional and natural blinks, we performed an experiment capturing eye images that include intentional and natural blinks from 12 subjects. By analyzing successive eye images, two features were collected: eye-closed duration and pupil size variation after eye opening. The classification threshold was then determined by SVM (support vector machine) training. Results: Experimental results showed that the average detection accuracy for intentional blinks was 97.4% in a wearable eye tracking environment. The detection accuracy in a non-wearable camera environment was 92.9% with the same SVM classifier. Conclusion: By combining the two features using an SVM, we implemented an accurate selection interaction method for vision-based eye tracking. Application: These results may help improve the efficiency and usability of vision-based eye tracking by supporting a reliable selection interaction scheme.
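
The two-feature decision can be pictured as a linear boundary in that feature space; the weights and bias below are invented placeholders standing in for the SVM-trained threshold reported in the paper:

```python
def classify_blink(closed_ms, pupil_variation, w=(0.01, 2.0), bias=-3.0):
    """Linear decision over (eye-closed duration in ms, pupil size variation
    after eye opening). The weights are hypothetical, not the trained ones."""
    score = w[0] * closed_ms + w[1] * pupil_variation + bias
    return "intentional" if score > 0 else "natural"

print(classify_blink(500, 0.5))   # long, deliberate blink → "intentional"
print(classify_blink(100, 0.2))   # quick, reflexive blink → "natural"
```

The key design point is that neither feature alone separates the classes reliably; combining both, as the SVM does, is what makes the selection interaction dependable.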

Real-Time Eye Detection and Tracking Under Various Light Conditions (다양한 조명하에서 실시간 눈 검출 및 추적)

  • 박호식; 배철수
    • Journal of the Korea Institute of Information and Communication Engineering / v.8 no.2 / pp.456-463 / 2004
  • Non-intrusive eye tracking methods based on active remote IR illumination are important for many applications of vision-based human-machine interaction. One problem that has plagued these methods is their sensitivity to changes in lighting conditions, which tends to significantly limit their scope of application. In this paper, we present a new real-time eye detection and tracking methodology that works under variable, realistic lighting conditions. By combining the bright-pupil effect resulting from IR illumination with conventional appearance-based object recognition, our method can robustly track eyes even when the pupils are not very bright due to significant external illumination interference. The appearance model is incorporated into both eye detection and tracking via a support vector machine and mean shift tracking. Additional improvement is achieved by modifying the image acquisition apparatus, including the illuminator and the camera.
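
The mean shift step used for tracking can be sketched as repeatedly moving a window to the intensity-weighted centroid of the pixels it covers, which locks onto a bright-pupil blob; the toy image below is an assumption for illustration:

```python
def mean_shift(image, cx, cy, radius, iters=10):
    """Move a square window to the intensity-weighted centroid of the pixels
    it covers, repeating until the position settles."""
    for _ in range(iters):
        sx = sy = sw = 0.0
        for y in range(max(0, cy - radius), min(len(image), cy + radius + 1)):
            for x in range(max(0, cx - radius), min(len(image[0]), cx + radius + 1)):
                w = image[y][x]
                sx += w * x
                sy += w * y
                sw += w
        if sw == 0:
            break                               # nothing bright in the window
        nx, ny = round(sx / sw), round(sy / sw)
        if (nx, ny) == (cx, cy):
            break                               # converged
        cx, cy = nx, ny
    return cx, cy

image = [[0] * 10 for _ in range(10)]
image[6][7] = 255                               # a single bright-pupil pixel
print(mean_shift(image, 5, 5, radius=3))        # → (7, 6)
```

In the paper's pipeline the SVM appearance model verifies the target before mean shift refines its position each frame.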

Telepresence Robotic Technology for Individuals with Visual Impairments Through Real-time Haptic Rendering (실시간 햅틱 렌더링 기술을 통한 시각 장애인을 위한 원격현장감(Telepresence) 로봇 기술)

  • Park, Chung Hyuk; Howard, Ayanna M.
    • The Journal of Korea Robotics Society / v.8 no.3 / pp.197-205 / 2013
  • This paper presents a robotic system that provides telepresence to visually impaired users by combining real-time haptic rendering with multi-modal interaction. A virtual-proxy-based haptic rendering process using an RGB-D sensor is developed and integrated into a unified control and feedback framework for the telepresence robot. We discuss the challenging problem of conveying environmental perception to a user with visual impairments and our solution for multi-modal interaction. We also describe the experimental design, protocols, and results with human subjects with and without visual impairments. A discussion of the system's performance and our future goals concludes the paper.
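
The virtual-proxy idea can be illustrated with the simplest possible surface, a flat floor standing in for the RGB-D depth map: the proxy is the device position pushed back to the surface, and the feedback force is a spring pulling the device toward it. The stiffness value and coordinates are illustrative assumptions:

```python
def proxy_force(device_pos, floor_z=0.0, stiffness=400.0):
    """Virtual-proxy haptic rendering against a flat floor: the proxy is the
    device position constrained to stay on or above the surface, and the
    rendered force is a spring from the device toward the proxy."""
    x, y, z = device_pos
    proxy = (x, y, max(z, floor_z))            # the proxy never penetrates
    force = tuple(stiffness * (p - d) for p, d in zip(proxy, device_pos))
    return proxy, force

# Device pushed 1 cm below the floor: an upward restoring force results.
proxy, force = proxy_force((0.1, 0.2, -0.01))
print(proxy, force)
```

Replacing the flat floor with per-pixel depth values from the RGB-D sensor gives the environment-shaped force field the paper describes.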

Personalized User Interface for U-Service Selection and Interaction (U-서비스의 선택 및 상호작용을 위한 개인화된 사용자 인터페이스)

  • Yoon, Hyo-Seok; Kim, Hye-Jin; Woo, Woon-Tack
    • Proceedings of the Korean HCI Society Conference / 2007.02a / pp.360-366 / 2007
  • Users in a ubiquitous computing environment should be able to easily select and use, among the many services the environment provides (U-services), those suited to their characteristics, needs, and preferences. This paper proposes the personal companion, a user interface for selecting and interacting with U-services according to the user's context. The personal companion selects services through a service discovery scheme and a camera-based interaction method, and personalizes the interface of the selected service to enable intuitive interaction with many services. To this end, we propose a new type of marker that reduces the visibility of conventional markers and apply it to the camera-based interaction method. To verify the usefulness of the personal companion, we implemented it on PDA and UMPC platforms and applied it to selecting and interacting with several application services in a smart home test bed. We expect the proposed personal companion to serve as an important user-centered mediator connecting users and U-services in ubiquitous computing environments.

Human Robot Interaction Using Face Direction Gestures

  • Kwon, Dong-Soo; Bang, Hyo-Choong
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2001.10a / pp.171.4-171 / 2001
  • This paper proposes a method of human-robot interaction (HRI) using face directional gestures. A single CCD color camera captures the face region, and the robot recognizes the face directional gesture from the positions of the facial features. A user can give commands such as stop, go, left turn, and right turn to the robot using these gestures. Since the robot also has ultrasonic sensors, it can detect obstacles and determine a safe direction from its current position. By combining the user's command with the sensed obstacle configuration, the robot selects a safe and efficient motion direction. Simulation results show that the robot with HRI is more reliable in navigation.
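
The command/obstacle arbitration described in the abstract can be sketched as a small fallback policy; the command set, preference order, and sonar representation below are illustrative assumptions, not the paper's controller:

```python
def select_direction(command, sonar_clear):
    """Follow the face-gesture command if the sonar reports that direction
    clear; otherwise fall back to another clear direction, or stop."""
    preference = {
        "go":    ["go", "left", "right"],
        "left":  ["left", "go", "right"],
        "right": ["right", "go", "left"],
        "stop":  [],                      # stop is always honored as-is
    }
    for choice in preference.get(command, []):
        if sonar_clear.get(choice, False):
            return choice
    return "stop"

# Obstacle dead ahead: the "go" command is deflected to a clear side.
print(select_direction("go", {"go": False, "left": True, "right": True}))
# → left
```

The point of the combination is that neither input alone suffices: the gesture supplies intent, the sonar supplies safety, and the policy resolves conflicts between them.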
