• Title/Summary/Keyword: Camera-based Interaction


Geometric Formulation of Rectangle Based Relative Localization of Mobile Robot (이동 로봇의 상대적 위치 추정을 위한 직사각형 기반의 기하학적 방법)

  • Lee, Joo-Haeng;Lee, Jaeyeon;Lee, Ahyun;Kim, Jaehong
    • The Journal of Korea Robotics Society / v.11 no.1 / pp.9-18 / 2016
  • A rectangle-based relative localization method is proposed for a mobile robot based on a novel geometric formulation. In the artificial environments where mobile robots navigate, rectangular shapes are ubiquitous. When a scene rectangle is captured by a camera attached to a mobile robot, localization can be performed and described in the relative coordinates of the scene rectangle. In particular, our method works from a single image of a scene rectangle whose aspect ratio is not known. Moreover, camera calibration is unnecessary under the assumption of a pinhole camera model. The proposed method is largely based on the theory of coupled line cameras (CLC), which provides a basis for efficient computation with analytic solutions and intuitive geometric interpretation. We introduce the fundamentals of CLC and describe the proposed method with experimental results in a simulation environment.
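As a hedged illustration of the geometric setting this abstract describes (not code from the paper), the sketch below projects a scene rectangle through an assumed pinhole camera and checks one incidence property that the rectangle-based formulation relies on: the image of the rectangle's center lies at the intersection of the projected quadrilateral's diagonals. The intrinsics `K`, pose `R`, `t`, and rectangle size are all hypothetical example values.

```python
import numpy as np

# Hypothetical pinhole camera: intrinsics K, rotation R, translation t.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                             # identity rotation for simplicity
t = np.array([0.1, -0.2, 5.0])

# Corners of a 2x1 scene rectangle on the z=0 plane, centered at the origin.
rect = np.array([[-1.0, -0.5, 0.0],
                 [ 1.0, -0.5, 0.0],
                 [ 1.0,  0.5, 0.0],
                 [-1.0,  0.5, 0.0]])

def project(X):
    """Pinhole projection of a 3D point to pixel coordinates."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

quad = np.array([project(p) for p in rect])   # image quadrilateral

def line_intersection(p1, p2, p3, p4):
    """Intersection of image lines p1-p2 and p3-p4 (homogeneous cross products)."""
    l1 = np.cross([*p1, 1.0], [*p2, 1.0])
    l2 = np.cross([*p3, 1.0], [*p4, 1.0])
    x = np.cross(l1, l2)
    return x[:2] / x[2]

# Projective maps preserve incidence, so the diagonals of the image
# quadrilateral meet at the projection of the rectangle's center.
center_img = line_intersection(quad[0], quad[2], quad[1], quad[3])
assert np.allclose(center_img, project(np.zeros(3)))
```

The unknown aspect ratio and the camera pose are what the CLC formulation recovers analytically from such a single projected quadrilateral; the sketch only verifies the underlying projective configuration.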

Human-Computer Interaction Based Only on Auditory and Visual Information

  • Sha, Hui;Agah, Arvin
    • Transactions on Control, Automation and Systems Engineering / v.2 no.4 / pp.285-297 / 2000
  • One of the research objectives in the area of multimedia human-computer interaction is the application of artificial intelligence and robotics technologies to the development of computer interfaces. This involves utilizing many forms of media, integrating speech input, natural language, graphics, hand-pointing gestures, and other methods for interactive dialogues. Although current human-computer communication methods include computer keyboards, mice, and other traditional devices, the two basic ways by which people communicate with each other are voice and gesture. This paper reports on research focusing on the development of an intelligent multimedia interface system modeled on the manner in which people communicate. The work explores interaction between humans and computers based only on the processing of speech (words uttered by the person) and the processing of images (hand-pointing gestures). The purpose of the interface is to control a pan/tilt camera so that it points to a location specified by the user through spoken words and hand pointing. The system uses another, stationary camera to capture images of the user's hand and a microphone to capture the user's words. Upon processing the images and sounds, the system responds by pointing the camera. Initially, the interface uses hand pointing to locate the general position to which the user is referring; it then uses voice commands from the user to fine-tune the location and to change the camera's zoom, if requested. The image of the location is captured by the pan/tilt camera and sent to a color TV monitor to be displayed. This type of system has applications in tele-conferencing and other remote operations, where the system must respond to the user's commands in a manner similar to how the user would communicate with another person. The advantage of this approach is the elimination of the traditional input devices the user must otherwise employ to control a pan/tilt camera, replacing them with more "natural" means of interaction. A number of experiments were performed to evaluate the interface system with respect to its accuracy, efficiency, reliability, and limitations.


Tangible Interaction : Application for A New Interface Method for Mobile Device -Focused on development of virtual keyboard using camera input - (체감형 인터랙션 : 모바일 기기의 새로운 인터페이스 방법으로서의 활용 -카메라 인식에 의한 가상 키보드입력 방식의 개발을 중심으로 -)

  • 변재형;김명석
    • Archives of Design Research / v.17 no.3 / pp.441-448 / 2004
  • Mobile devices such as mobile phones and PDAs are considered main interface tools in the ubiquitous computing environment. To search for information on a mobile device, the user should be able to input text as well as control a cursor for navigation. We therefore need an efficient interface method for text input within the limited dimensions of mobile devices. This study suggests a new approach to mobile interaction using a camera-based virtual keyboard for text input on mobile devices. We developed a camera-based virtual keyboard prototype using a PC camera and a small LCD display. The user can move the prototype in the air to control the cursor over a keyboard layout on the screen and input text by pressing a button. In evaluation, the new interaction method proved competitive with a mobile phone keypad in text input efficiency. The new method can be operated with one hand and makes it possible to design smaller devices by eliminating the keyboard part. It can be applied as a text input method for mobile devices requiring especially small dimensions, and it can be adapted as a selection and navigation method for wireless internet content on small-screen devices.


A Study on the HMD-AR Interaction System Combining Optical Camera Communication to Provide Location-based Service for Tourist in Jeonju Hanok Village (전주 한옥마을 관광객의 위치 기반 서비스 제공을 위한 광카메라통신 접목형 HMD-AR 인터렉션 시스템에 관한 연구)

  • Min, Byung-Jun;Choi, Jin-Yeong;Cha, Jae-Sang;Choi, Bang-Ho;Cho, Ju-Phil
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.11 no.4 / pp.445-451 / 2018
  • In this paper, we propose an HMD-AR interaction system that incorporates optical camera communication to provide location-based services to tourists in Jeonju Hanok Village. The proposed system receives optical camera communication data from the lighting infrastructure in Jeonju Hanok Village and provides services through the transmitted ID information together with the device's own location information. We studied optical camera communication technology and smart-device-based HMD-AR systems, built the actual HMD-AR system, and tested it. We expect the proposed system to be applicable to various tourist attractions in the future and to serve as valuable feedback for smart-device-based HMD-AR.

Motion-based Interaction Technique for a Camera-tracked Laser Pointer System (카메라 추적 기반 레이저 포인터 시스템을 위한 동작 기반 상호작용 기술)

  • Ahn, Sang-Mahn;Lim, Jong-Gwan;Kwon, Dong-Soo
    • Proceedings of the HCI Society of Korea Conference / 2008.02b / pp.257-261 / 2008
  • In this paper, intuitive interactions that are compatible with various software and can replace conventional mouse functions are proposed for a camera-tracked laser pointer system. To this end, the paper designs a motion-based interaction using acceleration information from a new laser pointer equipped with a 3-axis accelerometer and demonstrates its usability.


Human-Object Interaction Framework Using RGB-D Camera (RGB-D 카메라를 사용한 사용자-실사물 상호작용 프레임워크)

  • Baek, Yong-Hwan;Lim, Changmin;Park, Jong-Il
    • Journal of Broadcast Engineering / v.21 no.1 / pp.11-23 / 2016
  • Today, the touch interface is the most widely used means of communicating with digital devices. Because of its usability, touch technology is applied almost everywhere, from watches to advertising boards, and its use continues to grow. However, this technology has a critical weakness: a touch input device normally needs a contact surface with touch sensors embedded in it, so touch interaction through ordinary objects like books or documents is still unavailable. In this paper, a human-object interaction framework based on an RGB-D camera is proposed to overcome this limitation. The proposed framework can deal with occluded situations, such as hovering a hand on top of an object or moving an object by hand, in which object recognition and hand gesture recognition algorithms may otherwise fail. Our framework handles such complicated circumstances without performance loss: it determines the status of each detected region with a fast and robust object recognition algorithm, deciding whether it is an object or a human hand, and the hand gesture recognition algorithm then controls the context of each object by gestures almost simultaneously.

Fine-Motion Estimation Using Ego/Exo-Cameras

  • Uhm, Taeyoung;Ryu, Minsoo;Park, Jong-Il
    • ETRI Journal / v.37 no.4 / pp.766-771 / 2015
  • Robust motion estimation for human-computer interaction plays an important role in novel methods of interacting with electronic devices. Existing pose estimation using a monocular camera employs either ego-motion or exo-motion, neither of which is sufficiently accurate for estimating fine motion, due to the ambiguity between rotation and translation. This paper presents a hybrid vision-based pose estimation method for fine-motion estimation that is specifically capable of extracting human body motion accurately. The method uses an ego-camera attached to a point of interest and exo-cameras located in the immediate surroundings of the point of interest. The exo-cameras can easily track the exact position of the point of interest by triangulation. Once the position is given, the ego-camera can accurately obtain the point of interest's orientation. In this way, any ambiguity between rotation and translation is eliminated, and the exact motion of a target point (that is, the ego-camera) can be obtained. The proposed method is expected to provide a practical solution for robustly estimating fine motion in a non-contact manner, such as in interactive games designed for special purposes (for example, remote rehabilitation care systems).
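The exo-camera triangulation step mentioned in this abstract can be sketched as standard linear (DLT) triangulation from two calibrated views. This is an illustrative sketch under assumed conditions, not the paper's implementation; the projection matrices `P1`, `P2` and the point `X_true` are hypothetical.

```python
import numpy as np

# Two hypothetical calibrated exo-cameras (normalized coordinates):
# camera 1 at the origin, camera 2 translated one unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def triangulate(P1, P2, x1, x2):
    """DLT triangulation: solve A X = 0 for the homogeneous 3D point X."""
    A = np.array([x1[0] * P1[2] - P1[0],
                  x1[1] * P1[2] - P1[1],
                  x2[0] * P2[2] - P2[0],
                  x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)          # null vector = smallest singular vector
    X = Vt[-1]
    return X[:3] / X[3]

# Hypothetical point of interest (where the ego-camera is attached).
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]   # observation in camera 1
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]   # observation in camera 2

X_est = triangulate(P1, P2, x1, x2)
assert np.allclose(X_est, X_true)
```

With the position fixed by the exo-cameras, only orientation remains for the ego-camera to estimate, which is how the method removes the rotation/translation ambiguity the abstract describes.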

3D Feature Based Tracking using SVM

  • Kim, Se-Hoon;Choi, Seung-Joon;Kim, Sung-Jin;Won, Sang-Chul
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference / 2004.08a / pp.1458-1463 / 2004
  • Tracking is one of the most important prerequisite tasks for many applications, such as human-computer interaction through gesture and face recognition, motion analysis, visual servoing, augmented reality, industrial assembly, and robot obstacle avoidance. Recently, real-time 3D information about objects has been required for many of the aforementioned applications. 3D tracking is a difficult problem because explicit 3D information about objects in the scene is lost during the camera's image formation process. Many recent vision systems therefore use stereo cameras, especially for 3D tracking. 3D feature-based tracking (3DFBT), one of the 3D tracking approaches using stereo vision, has many advantages compared to other tracking methods. If we assume that the correspondence problem, one of the subproblems of 3DFBT, is solved, the accuracy of tracking depends on the accuracy of camera calibration. However, existing calibration methods are based on an accurate camera model, so modeling error and sensitivity to lens distortion are embedded. This paper therefore proposes a 3D feature-based tracking method using an SVM to solve the reconstruction problem.


Open Standard Based 3D Urban Visualization and Video Fusion

  • Enkhbaatar, Lkhagva;Kim, Seong-Sam;Sohn, Hong-Gyoo
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.28 no.4 / pp.403-411 / 2010
  • This research demonstrates 3D virtual visualization of an urban environment with video fusion for an effective damage prevention and surveillance system using open standards. We present a visualization and interaction simulation method to increase situational awareness and optimize environmental monitoring through CCTV video and a 3D virtual environment. A new camera prototype was designed based on the camera frustum view model to project recorded video perspectively onto the virtual 3D environment. The demonstration was developed with X3D, a royalty-free open standard and run-time architecture, which offers the ability to represent, control, and share 3D spatial information via internet browsers.

Infrared Sensitive Camera Based Finger-Friendly Interactive Display System

  • Ghimire, Deepak;Kim, Joon-Cheol;Lee, Kwang-Jae;Lee, Joon-Whoan
    • International Journal of Contents / v.6 no.4 / pp.49-56 / 2010
  • In this paper we present a system that enables the user to interact with a large display even without touching the screen. With two infrared-sensitive cameras mounted on the bottom left and bottom right of the display and pointing upwards, the user's fingertip position within the selected region of interest of each camera view is found using the vertical intensity profile of the background-subtracted image. The finger positions in the left and right camera images are mapped to display-screen coordinates using pre-determined matrices, which are calculated by interpolating samples of the user's finger position in images taken while pointing at known coordinate positions on the display. The screen is then manipulated according to the calculated position and depth of the fingertip with respect to the display. Experimental results demonstrate efficient, robust, and stable human-computer interaction.
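The fingertip-localization step this abstract describes can be sketched roughly as follows. This is a minimal illustration under assumed details (threshold value, tip defined as the topmost foreground row), not the authors' code: subtract a background frame from the current frame within the region of interest, take the column sums of the foreground mask as the vertical intensity profile, and pick the strongest column and its extremal foreground row as the fingertip.

```python
import numpy as np

def find_fingertip(frame, background, threshold=30):
    """Return (row, col) of the fingertip in the ROI, or None if no finger.

    Hypothetical parameters: `threshold` separates finger pixels from
    background noise in the absolute-difference image.
    """
    diff = np.abs(frame.astype(int) - background.astype(int))
    fg = diff > threshold                 # foreground (finger) mask
    profile = fg.sum(axis=0)              # vertical intensity profile per column
    if profile.max() == 0:
        return None                       # no foreground: no finger present
    col = int(profile.argmax())           # column with the strongest response
    rows = np.nonzero(fg[:, col])[0]
    return int(rows[0]), col              # topmost foreground row = fingertip

# Synthetic 8x8 example: a "finger" of bright pixels in column 5, rows 2..7.
background = np.zeros((8, 8), dtype=np.uint8)
frame = background.copy()
frame[2:8, 5] = 200
assert find_fingertip(frame, background) == (2, 5)
```

Running the same detector on each of the two camera views and mapping the pair of results through the pre-determined interpolation matrices yields the screen coordinate and depth used to drive the display.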