• Title/Summary/Keyword: active camera


Cylindrical Object Recognition using Sensor Data Fusion (센서데이터 융합을 이용한 원주형 물체인식)

  • Kim, Dong-Gi;Yun, Gwang-Ik;Yun, Ji-Seop;Gang, Lee-Seok
    • Journal of Institute of Control, Robotics and Systems / v.7 no.8 / pp.656-663 / 2001
  • This paper presents a sensor fusion method to recognize a cylindrical object using a CCD camera, a laser slit beam, and ultrasonic sensors mounted on a pan/tilt device. For object recognition with the vision sensor, an active light source projects a stripe pattern of light onto the object surface. The 2D image data are transformed into 3D data using the geometry between the camera and the laser slit beam. The ultrasonic sensor uses a transducer array mounted horizontally on the pan/tilt device. The time of flight is estimated by finding the maximum correlation between the received ultrasonic pulse and a set of stored templates, a technique also known as a matched filter. The distance is calculated by simply multiplying the time of flight by the speed of sound, and the maximum amplitude of the filtered signal is used to determine the face angle to the object. To determine the position and the radius of cylindrical objects, we use statistical sensor fusion. Experimental results show that the fused data increase the reliability of the object recognition.
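
The time-of-flight step described above is a classic matched filter: cross-correlate the received pulse with a stored template and take the lag of the correlation peak. A minimal sketch of that idea follows; the function names, sampling parameters, and the round-trip factor of 0.5 are our assumptions, not details from the paper.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C (assumed)

def estimate_tof(received, template, sample_rate):
    """Matched filter: the lag of maximum cross-correlation between the
    received ultrasonic pulse and a stored template gives the time of flight."""
    corr = np.correlate(received, template, mode="valid")
    lag = int(np.argmax(np.abs(corr)))   # sample index of the best match
    tof = lag / sample_rate              # seconds
    peak = float(np.max(np.abs(corr)))   # peak amplitude, used for the face angle
    return tof, peak

def tof_to_distance(tof):
    """Round-trip time of flight -> one-way distance (assumes echo geometry)."""
    return 0.5 * tof * SPEED_OF_SOUND
```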


Navigation of a Mobile Robot Using Hand Gesture Recognition (손 동작 인식을 이용한 이동로봇의 주행)

  • Kim, Il-Myeong;Kim, Wan-Cheol;Yun, Gyeong-Sik;Lee, Jang-Myeong
    • Journal of Institute of Control, Robotics and Systems / v.8 no.7 / pp.599-606 / 2002
  • A new method to govern the navigation of a mobile robot using hand gesture recognition is proposed, based on the following two procedures: one is to acquire vision information by using a 2-DOF camera as a communication medium between a human and the mobile robot, and the other is to analyze the recognized hand gesture commands and control the mobile robot accordingly. In previous research, mobile robots moved passively by following landmarks, beacons, etc. In this paper, to accommodate various changes of situation, a new control system that manages the dynamic navigation of the mobile robot is proposed. Moreover, without the expensive equipment or complex algorithms generally used for hand gesture recognition, a reliable hand gesture recognition system is efficiently implemented to convey human commands to the mobile robot under a few constraints.
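
The abstract does not specify the gesture set or the control mapping, but the overall pattern is a dispatch from recognized gesture labels to robot motion commands. The sketch below illustrates that pattern with a hypothetical gesture vocabulary and velocities of our own choosing.

```python
from dataclasses import dataclass

@dataclass
class VelocityCommand:
    linear: float   # m/s
    angular: float  # rad/s

# Hypothetical gesture vocabulary; the paper's actual command set is not
# given in the abstract.
GESTURE_COMMANDS = {
    "go":    VelocityCommand(0.2, 0.0),
    "stop":  VelocityCommand(0.0, 0.0),
    "left":  VelocityCommand(0.1, 0.5),
    "right": VelocityCommand(0.1, -0.5),
}

def command_for(gesture: str) -> VelocityCommand:
    """Map a recognized gesture to a motion command; unknown gestures stop the robot."""
    return GESTURE_COMMANDS.get(gesture, VelocityCommand(0.0, 0.0))
```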

Real Time Discrimination of 3 Dimensional Face Pose (실시간 3차원 얼굴 방향 식별)

  • Kim, Tae-Woo
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.3 no.1 / pp.47-52 / 2010
  • In this paper, we introduce a new approach for real-time 3D face pose discrimination based on active IR illumination from a monocular camera view. Under the IR illumination, the pupils appear bright. We develop algorithms for efficient and robust detection and tracking of the pupils in real time. Based on the geometric distortions of the pupils under different face orientations, an eigen eye feature space is built from training data that captures the relationship between 3D face orientation and the geometric features of the pupils. The 3D face pose of an input query image is subsequently classified using the eigen eye feature space. In our experiments, the discrimination rates for subjects close to the camera ranged from a minimum of 94.67% to a maximum of 100%.
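
The "eigen eye feature space" suggests a PCA-style subspace over pupil geometry features, with pose classified in that subspace. Here is a minimal sketch of one plausible reading (PCA via SVD plus nearest-neighbor classification); the paper's exact features and classifier are not specified in the abstract.

```python
import numpy as np

def build_eigenspace(features, n_components):
    """PCA over training feature vectors (one row per sample)."""
    mean = features.mean(axis=0)
    _, _, vt = np.linalg.svd(features - mean, full_matrices=False)
    return mean, vt[:n_components]  # mean and principal axes

def classify_pose(query, mean, basis, train_proj, train_labels):
    """Project a query into the eigen eye space and return the pose label
    of its nearest training neighbor.
    train_proj is precomputed as (train_features - mean) @ basis.T."""
    q = (query - mean) @ basis.T
    dists = np.linalg.norm(train_proj - q, axis=1)
    return train_labels[int(np.argmin(dists))]
```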


SOLAR OBSERVATIONAL SYSTEM OF KYUNGHEE UNIVERSITY (경희대학교 태양관측시스템)

  • KIM IL-HOON;KIM KAP-SUNG
    • Publications of The Korean Astronomical Society / v.13 no.1 s.14 / pp.39-54 / 1998
  • We have developed a solar observational system in the Department of Astronomy & Space Sciences of KyungHee University in order to monitor solar activities and construct a solar database for space weather forecasting at the maximum of the 23rd solar cycle, as well as for solar education and exercises for undergraduate students. Our solar observational system consists of a full disk monitoring system and a regional observation system for Hα fine structure. The full disk monitoring system is made of an energy rejection filter, a 16 cm refractor, a video CCD camera, and a monitor. Monitored data are recorded to VHS video tape, and the analog output of the video CCD can be captured as digital images by a computer with a video graphics card. The other system, for regional observation of the Sun, is made of an energy rejection filter, a 21 cm Schmidt-Cassegrain reflector, an Hα filter with 1.6 Å passband width, and a $375\times242$ CCD camera. With this system we can observe Hα fine structure in active regions of the solar disk and at the solar limb. We have carried out intense solar observations to test our system, and found that the quality of our Hα images is as good as that of the solar images provided by the Space Environment Center. In this paper, we introduce the basic characteristics of the KyungHee Solar Observation System and the results of our solar observations. We hope that our data will be used for space weather forecasting together with domestic data from RRL (Radio Research Laboratory) and SOFT (SOlar Flare Telescope).


Design and Construction of a Miniature PIV (MPIV) System

  • Olivier Chetelat;Yoon, Sang-Youl;Kim, Kyung-Chun
    • Journal of Mechanical Science and Technology / v.15 no.12 / pp.1775-1783 / 2001
  • For two decades, there has been active research to enhance the performance of Particle Image Velocimetry (PIV) systems. However, the resulting systems are often costly, cumbersome, and delicate. In this paper, we address the design and first experimental results of a PIV system belonging to the opposite paradigm. The Miniature PIV (MPIV) system features relatively modest performance, but is considerably smaller (our MPIV fits within dia. 40 mm $\times$ 120 mm), cheaper (our MPIV's total cost is less than $500), and easier to handle. Potential applications include industrial velocity sensors. The proposed MPIV system uses a one-chip-only CMOS camera with digital output. Only two other chips are needed: one for a buffer memory and one for the interfacing logic that controls the system. Images are transferred to a personal computer (PC or laptop) via its standard parallel port; no extra hardware is required (in particular, no frame grabber board is needed). In the first MPIV prototype presented in this paper, the strobe lighting is generated by a cheap 5 mW laser pointer diode. Experimental results are presented and discussed.
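
PIV rests on a simple core computation: cross-correlate paired interrogation windows and read the particle displacement off the correlation peak. The abstract does not describe the MPIV correlation code, so the following is only a generic single-window sketch for context.

```python
import numpy as np
from scipy.signal import fftconvolve

def piv_displacement(window_a, window_b):
    """Estimate mean particle displacement between two interrogation windows
    (frame A -> frame B) from the peak of their 2D cross-correlation."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    # Cross-correlation computed as convolution with a flipped kernel.
    corr = fftconvolve(b, a[::-1, ::-1], mode="full")
    peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)
    dy = peak_y - (window_a.shape[0] - 1)   # zero-displacement peak sits at
    dx = peak_x - (window_a.shape[1] - 1)   # (H-1, W-1) for same-size windows
    return dx, dy  # pixels; divide by the strobe interval for velocity
```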


Light 3D Modeling with mobile equipment (모바일 카메라를 이용한 경량 3D 모델링)

  • Ju, Seunghwan;Seo, Heesuk;Han, Sunghyu
    • Journal of Korea Society of Digital Industry and Information Management / v.12 no.4 / pp.107-114 / 2016
  • Recently, 3D-related technology has become a hot topic in IT. 3D technologies such as 3D TV, Kinect, and 3D printers are becoming more and more popular. Following this trend, the goal of this study is to expose the general public to 3D technology easily. We have developed a web-based application that produces a 3D model from front and side facial photographs taken with a mobile phone. To realize the 3D modeling, two photographs (front and side) are taken with a mobile camera, and the ASM (Active Shape Model) and a skin binarization technique are used to extract facial features from the front photograph and facial height features, such as the nose, from the side photograph. Three-dimensional coordinates are generated using the face extracted from the front photograph and the facial heights obtained from the side photograph. Using these 3D coordinates as control points, a standard face model is deformed into the subject's face by RBF (Radial Basis Function) interpolation. Also, to texture the deformed face model, the control points found in the front photograph are mapped to texture map coordinates to generate a texture image. Finally, the deformed face model is covered with the texture image, and the 3D modeled result is displayed to the user.
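
The deformation step is standard scattered-data interpolation: the displacements known at the control points are propagated smoothly to every mesh vertex by an RBF. A minimal sketch under that reading, using SciPy's RBFInterpolator (the paper's kernel choice and implementation are not stated in the abstract):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def deform_face_mesh(vertices, src_control, dst_control):
    """Warp a standard face mesh so the source control points land on their
    target positions; the RBF carries the displacement field to all vertices.

    vertices:    (V, 3) standard face model
    src_control: (N, 3) control points on the standard model
    dst_control: (N, 3) matching points measured from the photographs
    """
    displacement = dst_control - src_control
    rbf = RBFInterpolator(src_control, displacement, kernel="thin_plate_spline")
    return vertices + rbf(vertices)
```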

Real-Time Eye Tracking Using IR Stereo Camera for Indoor and Outdoor Environments

  • Lim, Sungsoo;Lee, Daeho
    • KSII Transactions on Internet and Information Systems (TIIS) / v.11 no.8 / pp.3965-3983 / 2017
  • We propose a novel eye tracking method that can estimate 3D world coordinates using an infrared (IR) stereo camera for indoor and outdoor environments. The method first detects dark evidence regions such as eyes, eyebrows, and mouths by fast multi-level thresholding. Among these, eye pair candidates are detected by evidential reasoning and geometrical rules. For robust accuracy, two classifiers based on multilayer perceptrons (MLPs) using gradient local binary patterns (GLBPs) verify whether the detected candidates are real eye pairs or not. Finally, the 3D world coordinates of the detected eyes are calculated by region-based stereo matching. Compared with other eye detection methods, the proposed method can detect the eyes of people wearing sunglasses due to the use of the IR spectrum. In particular, when people are in dark environments, such as driving at nighttime, driving in an indoor carpark, or passing through a tunnel, human eyes can be robustly detected because we use active IR illuminators. The experimental results show that the proposed method detects eye pairs with high performance in real time under variable illumination conditions. Therefore, the proposed method can contribute to human-computer interaction (HCI) and intelligent transportation system (ITS) applications such as gaze tracking, windshield head-up displays, and drowsiness detection.
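
Once a left/right eye match is found, the final step (disparity to 3D world coordinates) is standard rectified-stereo back-projection. A minimal sketch follows; the paper's calibration parameters and its region-based matching procedure are not given in the abstract.

```python
def stereo_to_3d(xl, xr, y, focal_px, baseline_m, cx, cy):
    """Back-project a matched point from a rectified stereo pair into 3D
    camera coordinates. Disparity d = xl - xr in pixels."""
    d = xl - xr
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    z = focal_px * baseline_m / d        # depth
    x = (xl - cx) * z / focal_px         # lateral offset from the optical axis
    y3 = (y - cy) * z / focal_px         # vertical offset
    return x, y3, z
```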

Improving light collection efficiency using partitioned light guide on pixelated scintillator-based γ-ray imager

  • Hyeon, Suyeon;Hammig, Mark;Jeong, Manhee
    • Nuclear Engineering and Technology / v.54 no.5 / pp.1760-1768 / 2022
  • When gamma-camera sensor modules, which are key components of radiation imagers, are built by coupling scintillators to photosensors, the light collection efficiency is an important factor in determining how effectively the instrument can identify nuclides via their gamma-ray spectra. If the pixel area of the scintillator is larger than the pixel area of the photosensor, light loss and cross-talk between photosensor pixels can result in information loss, degrading both the precision of the energy estimate and the accuracy of the position-of-interaction determination derived from each active pixel in a coded-aperture gamma camera. Here we present two methods to overcome the information loss caused by photons lost between a scintillation pixel and its associated silicon photomultiplier (SiPM) pixel: (1) light guides, or (2) scintillation pixel areas that match the area of the SiPM pixel. Compared with scintillator/SiPM couplings with slightly mismatched intercept areas, the experimental results show that both methods substantially improve the energy and spatial resolution by increasing the light collection efficiency, but yield only slight improvements in image sensitivity and image quality.

Performance Evaluation of ARCore Anchors According to Camera Tracking

  • Shinhyup Lee;Leehwan Hwang;Seunghyun Lee;Taewook Kim;Soonchul Kwon
    • International Journal of Internet, Broadcasting and Communication / v.15 no.4 / pp.215-222 / 2023
  • Augmented reality (AR), which integrates virtual media into reality, is increasingly utilized across various industrial sectors thanks to advancements in 3D graphics and mobile device technologies. The IT industry is thus carrying out active R&D on AR platforms. Google plays a significant role in the AR landscape, with a focus on ARCore services. An essential aspect of ARCore is the use of anchors, which serve as reference points that help maintain the position and orientation of virtual objects within the physical environment. However, if the accuracy of anchor positioning is suboptimal when running AR content, it can significantly diminish the user's immersive experience. In this study, we assess the performance of these anchors. For the performance evaluation, virtual 3D objects matching the shape and size of real-world objects were strategically positioned to overlap with their physical counterparts. Images of both the real and virtual objects were captured along five distinct camera trajectories, and ARCore's performance was analyzed by examining the differences between these captured images.
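
The abstract says performance was analyzed by differencing the real and virtual captures but does not name a metric. As a placeholder illustration, the sketch below scores one real/virtual image pair with a normalized mean absolute pixel difference; the metric choice is our assumption.

```python
import numpy as np

def overlap_error(real_img, virtual_img):
    """Mean absolute pixel difference between a real capture and the
    corresponding virtual render, normalized to [0, 1] (assumed metric;
    the paper's actual difference measure is not given in the abstract)."""
    if real_img.shape != virtual_img.shape:
        raise ValueError("captures must be pixel-aligned and equally sized")
    real = real_img.astype(np.float64) / 255.0
    virt = virtual_img.astype(np.float64) / 255.0
    return float(np.mean(np.abs(real - virt)))
```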

Human Tracking and Body Silhouette Extraction System for Humanoid Robot (휴머노이드 로봇을 위한 사람 검출, 추적 및 실루엣 추출 시스템)

  • Kwak, Soo-Yeong;Byun, Hye-Ran
    • The Journal of Korean Institute of Communications and Information Sciences / v.34 no.6C / pp.593-603 / 2009
  • In this paper, we propose a new integrated computer vision system designed to track multiple human beings and extract their silhouettes with an active stereo camera. The proposed system consists of three modules: detection, tracking, and silhouette extraction. Detection is performed by camera ego-motion compensation and disparity segmentation. For tracking, we present an efficient mean-shift-based tracking method in which the tracked objects are characterized by disparity-weighted color histograms. The silhouette is obtained by two-step segmentation: a trimap is estimated in advance and then effectively incorporated into a graph cut framework for fine segmentation. The proposed system was evaluated against ground truth data and shown to detect and track multiple people very well while producing high-quality silhouettes. The proposed system can assist gesture and gait recognition in the field of Human-Robot Interaction (HRI).
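
The tracking module's target model, a disparity-weighted color histogram, can be sketched compactly: each pixel votes into a color histogram with a weight proportional to its disparity, so nearer (likely foreground) pixels dominate. A minimal version, assuming OpenCV-style hue values in [0, 180); the paper's exact binning and color space details are not given in the abstract.

```python
import numpy as np

def disparity_weighted_histogram(hsv_roi, disparity_roi, bins=16):
    """Hue histogram of a tracked region, with each pixel weighted by its
    disparity so that nearby (likely foreground) pixels dominate."""
    hue = hsv_roi[..., 0].ravel()                       # OpenCV hue: [0, 180)
    weights = disparity_roi.ravel().astype(np.float64)  # larger = closer
    hist, _ = np.histogram(hue, bins=bins, range=(0, 180), weights=weights)
    total = hist.sum()
    return hist / total if total > 0 else hist
```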