• Title/Summary/Keyword: Vision-based interface

130 search results

Automation of a Teleoperated Microassembly Desktop Station Supervised by Virtual Reality

  • Ferreira, Antoine;Fontaine, Jean-Guy;Hirai, Shigeoki
    • Transactions on Control, Automation and Systems Engineering
    • /
    • v.4 no.1
    • /
    • pp.23-31
    • /
    • 2002
  • We propose a concept of a desktop micro-device factory for visually servoed teleoperated microassembly assisted by a virtual reality (VR) interface. It is composed of two micromanipulators equipped with micro tools operating under a light microscope. First, a manipulation control method that makes the micro object follow a planned trajectory in a pushing operation is proposed under vision-based position control. Then, we present the cooperative control strategy for the micro handling operation under vision-based force control, integrating a sensor fusion framework approach. A guiding system based on a virtual micro-world, exactly reconstructed from the CAD-CAM databases of the real environment being considered, is presented for the imprecisely calibrated micro world. Finally, some experimental results of microassembly tasks performed on millimeter-sized components are provided.

Hybrid Inertial and Vision-Based Tracking for VR applications (가상 현실 어플리케이션을 위한 관성과 시각기반 하이브리드 트래킹)

  • Gu, Jae-Pil;An, Sang-Cheol;Kim, Hyeong-Gon;Kim, Ik-Jae;Gu, Yeol-Hoe
    • Proceedings of the KIEE Conference
    • /
    • 2003.11b
    • /
    • pp.103-106
    • /
    • 2003
  • In this paper, we present a hybrid inertial and vision-based tracking system for VR applications. One of the most important aspects of VR (Virtual Reality) is providing a correspondence between the physical and virtual worlds. As a result, accurate and real-time tracking of an object's position and orientation is a prerequisite for many applications in virtual environments. Pure vision-based tracking has low jitter and high accuracy but cannot guarantee real-time pose recovery under all circumstances. Pure inertial tracking has high update rates and full 6DOF recovery but lacks long-term stability due to sensor noise. In order to overcome the individual drawbacks and to build a better tracking system, we introduce the fusion of vision-based and inertial tracking. Sensor fusion makes the proposed tracking system robust, fast, and accurate, with low jitter and noise. Hybrid tracking is implemented with a Kalman filter that operates in a predictor-corrector manner. Adding a Bluetooth serial communication module gives the system full mobility and makes it affordable, lightweight, energy-efficient, and practical. Full 6DOF recovery and the full mobility of the proposed system enable the user to interact with mobile devices such as PDAs and provide the user with a natural interface.
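As a concrete illustration of the predictor-corrector structure described in this abstract, the following minimal 1-D sketch fuses a high-rate inertial predict step with a lower-rate vision correct step. The state model, update rates, and noise values are assumptions for illustration only, not the paper's implementation:

```python
import numpy as np

# Illustrative 1-D predictor-corrector: inertial data drives the predict
# step (high rate), vision position measurements drive the correct step.
dt = 0.01                                   # assumed 100 Hz inertial rate
F = np.array([[1.0, dt], [0.0, 1.0]])       # constant-velocity model
B = np.array([[0.5 * dt**2], [dt]])         # acceleration input
H = np.array([[1.0, 0.0]])                  # vision measures position only
Q = np.eye(2) * 1e-4                        # process noise (inertial drift)
R = np.array([[1e-2]])                      # vision measurement noise

x = np.zeros((2, 1))                        # state: [position, velocity]
P = np.eye(2)

def predict(accel):
    """Inertial step: propagate the state with measured acceleration."""
    global x, P
    x = F @ x + B * accel
    P = F @ P @ F.T + Q

def correct(z_pos):
    """Vision step: fuse an absolute position measurement."""
    global x, P
    y = np.array([[z_pos]]) - H @ x         # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

# Ten inertial predictions per vision correction, as in a 100 Hz IMU /
# 10 Hz camera setup, with constant 1 m/s^2 acceleration.
for step in range(100):
    predict(accel=1.0)
    if step % 10 == 9:
        t = (step + 1) * dt
        correct(z_pos=0.5 * t**2)           # noiseless vision measurement
```

The inertial branch keeps the update rate high between camera frames, while each vision measurement pulls the drifting inertial estimate back toward an absolute position, which is exactly the complementary behavior the abstract motivates.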


Vision-Based Finger Action Recognition by Angle Detection and Contour Analysis

  • Lee, Dae-Ho;Lee, Seung-Gwan
    • ETRI Journal
    • /
    • v.33 no.3
    • /
    • pp.415-422
    • /
    • 2011
  • In this paper, we present a novel vision-based method of recognizing finger actions for use in electronic appliance interfaces. Human skin is first detected using color and consecutive motion information. Then, fingertips are detected by a novel scale-invariant angle detection based on a variable k-cosine. Fingertip tracking is implemented by tracking the detected regions. By analyzing the contour of the tracked fingertip, fingertip parameters such as position, thickness, and direction are calculated. Finger actions such as moving, clicking, and pointing are recognized by analyzing these fingertip parameters. Experimental results show that the proposed angle detection can correctly detect fingertips and that the recognized actions can be used for interfacing with electronic appliances.
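The k-cosine angle detection mentioned in this abstract can be illustrated with a simplified fixed-k version: the angle at a contour point between its k-th neighbors flags sharp convex points such as fingertips. The threshold, k, and contour representation here are assumptions for the sketch; the paper's method additionally varies k to achieve scale invariance:

```python
import math

def k_cosine(contour, i, k):
    """Angle (degrees) at contour point i formed with its neighbors k
    steps away along the contour; small angles mark sharp points."""
    n = len(contour)
    ax, ay = contour[(i - k) % n]
    bx, by = contour[(i + k) % n]
    px, py = contour[i]
    ux, uy = ax - px, ay - py
    vx, vy = bx - px, by - py
    dot = ux * vx + uy * vy
    norm = math.hypot(ux, uy) * math.hypot(vx, vy)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def detect_fingertips(contour, k=10, angle_thresh=60.0):
    """Return indices of contour points whose angle falls below the
    threshold (candidate fingertips)."""
    return [i for i in range(len(contour))
            if k_cosine(contour, i, k) < angle_thresh]
```

On a square contour, for example, mid-edge points measure about 180 degrees while corner points measure about 90, so only the corners survive a threshold between the two.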

Microassembly System for the assembly of photonic components (광 부품 조립을 위한 마이크로 조립 시스템)

  • 강현재;김상민;남궁영우;김병규
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2003.06a
    • /
    • pp.241-245
    • /
    • 2003
  • In this paper, a microassembly system based on hybrid manipulation schemes is proposed and applied to the assembly of a photonic component. In order to achieve both high precision and dexterity in microassembly, we propose a hybrid microassembly system with sensory feedback of vision and force. The system consists of distributed 6-DOF micromanipulation units, a stereo microscope, and a haptic interface for force-feedback-based microassembly. A hybrid assembly method, which combines vision-based microassembly and scaled teleoperated microassembly with force feedback, is proposed. The feasibility of the proposed method is investigated via experimental studies on assembling micro opto-electrical components. Experimental results show that the hybrid microassembly system is feasible for assembling photonic components for the commercial market with better flexibility and efficiency.


Vision based Fast Hand Motion Recognition Method for an Untouchable User Interface of Smart Devices (스마트 기기의 비 접촉 사용자 인터페이스를 위한 비전 기반 고속 손동작 인식 기법)

  • Park, Jae Byung
    • Journal of the Institute of Electronics and Information Engineers
    • /
    • v.49 no.9
    • /
    • pp.300-306
    • /
    • 2012
  • In this paper, we propose a vision-based hand motion recognition method for an untouchable user interface of smart devices. First, the original color image is converted into a gray-scale image and its spatial resolution is reduced, taking the small memory and low computational power of smart devices into consideration. For robust recognition of hand motions through the separation of horizontal and vertical motions, the horizontal principal area (HPA) and the vertical principal area (VPA) are defined, respectively. From the difference images of consecutively obtained frames, the center of gravity (CoG) of the significantly changed pixels caused by hand motion is obtained, and the direction of the hand motion is detected by fitting a least-mean-squares line to the CoG over time. For verifying the feasibility of the proposed method, experiments are carried out with a vision system.
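The CoG and least-squares direction steps described in this abstract can be sketched as follows. This simplified version omits the HPA/VPA separation, and the difference threshold and synthetic frames are assumptions for illustration:

```python
import numpy as np

def center_of_gravity(prev, curr, thresh=30):
    """CoG of pixels that changed significantly between two gray frames;
    returns None when no pixel exceeds the threshold."""
    diff = np.abs(curr.astype(int) - prev.astype(int)) > thresh
    ys, xs = np.nonzero(diff)
    if xs.size == 0:
        return None
    return xs.mean(), ys.mean()

def motion_direction(cogs):
    """Fit least-squares lines x(t) and y(t) to the CoG track and report
    the dominant axis and sign of the motion."""
    t = np.arange(len(cogs))
    xs = np.array([c[0] for c in cogs])
    ys = np.array([c[1] for c in cogs])
    sx = np.polyfit(t, xs, 1)[0]            # slope of x over time
    sy = np.polyfit(t, ys, 1)[0]            # slope of y over time
    if abs(sx) >= abs(sy):
        return "right" if sx > 0 else "left"
    return "down" if sy > 0 else "up"
```

Fitting a line to the CoG track, rather than comparing only the first and last positions, is what gives the method its robustness to single-frame jitter.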

Vision-based Human-Robot Motion Transfer in Tangible Meeting Space (실감만남 공간에서의 비전 센서 기반의 사람-로봇간 운동 정보 전달에 관한 연구)

  • Choi, Yu-Kyung;Ra, Syun-Kwon;Kim, Soo-Whan;Kim, Chang-Hwan;Park, Sung-Kee
    • The Journal of Korea Robotics Society
    • /
    • v.2 no.2
    • /
    • pp.143-151
    • /
    • 2007
  • This paper deals with a tangible interface system that introduces a robot as a remote avatar. It focuses on a new method that makes a robot imitate human arm motions captured from a remote space. Our method is functionally divided into two parts: capturing the human motion and adapting it to the robot. In the capturing part, we propose a modified potential function of metaballs for real-time performance and high accuracy. In the adapting part, we suggest a geometric scaling method for resolving the structural difference between a human and a robot. With our method, we have implemented a tangible interface and evaluated its speed and accuracy.
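A geometric scaling of the kind described in the adapting part might, under the simplest assumption of scaling shoulder-relative positions by the arm-length ratio, look like the sketch below. The function name, parameters, and the ratio formula are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

def scale_to_robot(human_wrist, human_shoulder, human_arm_len, robot_arm_len):
    """Map a human wrist position (expressed relative to the shoulder)
    into the robot's reachable space by the arm-length ratio, so that a
    fully extended human arm maps to a fully extended robot arm."""
    rel = np.asarray(human_wrist, dtype=float) - np.asarray(human_shoulder, dtype=float)
    return rel * (robot_arm_len / human_arm_len)
```

For example, a wrist 0.6 m in front of the shoulder of a human with a 0.6 m arm maps to 0.3 m for a robot with a 0.3 m arm, keeping the pose qualitatively the same despite the size difference.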


RealBook: A Tangible Electronic Book Based on the Interface of TouchFace-V (RealBook: TouchFace-V 인터페이스 기반 실감형 전자책)

  • Song, Dae-Hyeon;Bae, Ki-Tae;Lee, Chil-Woo
    • The Journal of the Korea Contents Association
    • /
    • v.13 no.12
    • /
    • pp.551-559
    • /
    • 2013
  • In this paper, we propose a tangible RealBook based on the TouchFace-V interface, which can recognize multi-touch input and hand gestures. TouchFace-V applies projection technology to a flat surface such as a table, without spatial constraints. The system's configuration addresses the installation, calibration, and portability issues of most existing front-projected vision-based tabletop displays. It provides hand touch and gesture recognition through computer-vision tracking technology, without sensors or traditional input devices. The RealBook combines the analog sensibility of printed text with the multimedia effects of an e-book, and provides digitally created stories whose experiences and environments differ according to the choices the user makes on the book interface. We thus propose a new concept of electronic book, named RealBook, distinct from existing e-books, together with the TouchFace-V interface, which provides more direct viewing and natural, intuitive interaction through hand touch and gesture.

Cloud Broadcasting Service Platform (클라우드 방송 서비스 플랫폼)

  • Kim, Hong-Ik;Lee, Dong-Ik;Lee, Jong-Han
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.42 no.3
    • /
    • pp.623-638
    • /
    • 2017
  • The application fields of cloud technologies have gradually expanded with technological development and the diversification of services. In digital broadcasting platforms, cloud technology is used for investment efficiency, operational efficiency, and competitive advantage in services. Recently, cloud broadcasting platforms have been commercialized for the UI (User Interface) and data broadcasting in Korea, and competition among broadcasting services has become fierce. Cloud broadcasting technology removes the dependency of services on the hardware resources and software architecture of the set-top box (STB), and allows unified operation of the user interface and services through a cloud server without separately managing legacy STB types. In this paper, we explain the application effects of an image-based cloud broadcasting service platform.

Evaluation of Human Interface using Fuzzy Measures and Fuzzy Integrals (퍼지척도와 퍼지적분을 이용한 휴먼 인터페이스의 평가)

  • 손영선
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 1998.10a
    • /
    • pp.31-36
    • /
    • 1998
  • This paper proposes a method to select essential elements in a human evaluation model using the Choquet integral based on fuzzy measures, and applies the model to the evaluation of a human interface. Three concepts are defined: the increment degree, the average of the increment degree, and the necessity coefficient. The proposed method selects essential elements by the use of the relative necessity coefficient and is applied to the analysis of a human interface. In the experiment, (1) a warning sound, (2) color vision, (3) the size of the working area, and (4) a confirmation response are considered as human interface elements. Subjects answered a questionnaire after the experiment. From the questionnaire data, fuzzy measures are identified and applied to the proposed model. The effectiveness of the proposed model is confirmed by comparing the human interface elements extracted from the proposed model with those from the questionnaire.
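The Choquet integral with respect to a fuzzy measure, on which the evaluation model above is based, can be computed as in the following sketch; the element names, scores, and measure values in the test are invented for illustration, not taken from the paper:

```python
def choquet_integral(scores, mu):
    """Choquet integral of scores f(x_i) with respect to fuzzy measure mu.

    scores: dict mapping element name -> value in [0, 1].
    mu: dict mapping frozensets of element names -> measure in [0, 1],
        with mu(full set) = 1. Computes
        sum_i (f(x_(i)) - f(x_(i-1))) * mu(A_(i)),
        where f is sorted ascending and A_(i) = {x_(i), ..., x_(n)}.
    """
    items = sorted(scores.items(), key=lambda kv: kv[1])   # ascending f
    names = [k for k, _ in items]
    total, prev = 0.0, 0.0
    for idx, (_, value) in enumerate(items):
        remaining = frozenset(names[idx:])                  # A_(i)
        total += (value - prev) * mu[remaining]
        prev = value
    return total
```

When mu happens to be additive, the Choquet integral reduces to an ordinary weighted average; the interesting case for interface evaluation is a non-additive mu that encodes interaction between elements.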


Vision based 3D Hand Interface Using Virtual Two-View Method (가상 양시점화 방법을 이용한 비전기반 3차원 손 인터페이스)

  • Bae, Dong-Hee;Kim, Jin-Mo
    • Journal of Korea Game Society
    • /
    • v.13 no.5
    • /
    • pp.43-54
    • /
    • 2013
  • With the continued development of 3D application techniques, visuals of more realistic quality have become available and are utilized in many applications such as games. In particular, for interacting with 3D objects in virtual environments, 3D graphics have led to substantial developments in augmented reality. This study proposes a 3D user interface for controlling objects in 3D space through a virtual two-view method using only one camera. To do so, a homography matrix containing the transformation between two arbitrary camera positions is calculated, and 3D coordinates are reconstructed from the 2D hand coordinates obtained from the single camera, the homography matrix, and the camera's projection matrix. This yields more accurate 3D information more quickly. The approach reduces the amount of computation by using one camera rather than two, is suitable for real-time processing, and is economically efficient.
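Once the virtual second view is available, recovering a 3D point from its two projections can be sketched with standard linear (DLT) triangulation, as below. The projection matrices and points are illustrative assumptions; this does not reproduce the paper's homography-based construction of the virtual view itself:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3-D point from its 2-D
    projections x1, x2 under 3x4 projection matrices P1, P2.

    Each view contributes two rows of the form x*P[2] - P[0] and
    y*P[2] - P[1]; the 3-D point is the null vector of the stacked
    system, found via SVD."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # homogeneous solution
    return X[:3] / X[3]             # dehomogenize

# Illustrative setup: identity intrinsics, second view translated 1 unit
# along x. A point at (0, 0, 5) projects to (0, 0) and (-0.2, 0).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = triangulate(P1, P2, (0.0, 0.0), (-0.2, 0.0))
```

The appeal of the virtual two-view idea is that the second projection matrix comes from the estimated homography rather than a physical second camera, so the same triangulation machinery runs with a single sensor.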