• Title / Summary / Keywords: Camera Performance

Search results: 1,814 items (processing time: 0.037 s)

Virtual Environment Building and Navigation of Mobile Robot using Command Fusion and Fuzzy Inference

  • Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence
    • /
    • Vol. 22 No. 4
    • /
    • pp.427-433
    • /
    • 2019
  • This paper proposes a fuzzy inference model for map building and navigation of a mobile robot with an active camera, which navigates intelligently to the goal location in unknown environments using sensor fusion based on situational commands from the active camera sensor. Active cameras provide a mobile robot with the capability to estimate and track feature images over a hallway field of view. Instead of using a "physical sensor fusion" method, which generates the trajectory of the robot from an environment model and sensory data, a command fusion method is used to govern the robot's navigation. The navigation strategy is based on the combination of fuzzy rules tuned for both goal approach and obstacle avoidance. To identify the environments, a command fusion technique is introduced in which the sensory data of the active camera sensor from navigation experiments are fused into the identification process. Navigation performance improves on that achieved using fuzzy inference alone and shows significant advantages over conventional command fusion techniques. Experimental evidence is provided, demonstrating that the proposed method can be used reliably over a wide range of relative positions between the active camera and the feature images.
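
The navigation strategy described in this abstract combines fuzzy rules for goal approach and obstacle avoidance into a single steering command. As an illustration of that idea only, and not the authors' implementation, the following Python sketch fuses two candidate headings with weights taken from simple ramp membership functions over the measured obstacle distance; the membership breakpoints, gain values, and function names are assumptions.

```python
import numpy as np

def ramp_down(x, lo, hi):
    """Membership that is 1 below lo, 0 above hi, linear in between."""
    return float(np.clip((hi - x) / (hi - lo), 0.0, 1.0))

def fuse_heading(goal_heading, avoid_heading, obstacle_dist):
    """Blend goal-approach and obstacle-avoidance heading commands (radians)
    with fuzzy weights on the measured obstacle distance (meters)."""
    w_avoid = ramp_down(obstacle_dist, 0.5, 2.0)   # "obstacle is near" -> favor avoidance
    w_goal = 1.0 - w_avoid                         # "obstacle is far"  -> favor goal approach
    # Defuzzify by a weighted average of the two candidate commands (command fusion).
    return (w_avoid * avoid_heading + w_goal * goal_heading) / (w_avoid + w_goal)

# Example: the goal lies straight ahead, but an obstacle 0.8 m away pushes the robot right.
print(np.degrees(fuse_heading(goal_heading=0.0,
                              avoid_heading=np.radians(40.0),
                              obstacle_dist=0.8)))   # -> 32.0 degrees
```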

Development of Color 3D Scanner Using Laser Structured-light Imaging Method

  • Ko, Youngjun;Yi, Sooyeong
    • Current Optics and Photonics
    • /
    • Vol. 2 No. 6
    • /
    • pp.554-562
    • /
    • 2018
  • This study presents a color 3D scanner based on the laser structured-light imaging method that can simultaneously acquire the 3D shape data and color of a target object using a single camera. The 3D data acquisition is based on the structured-light imaging method, and the color data are obtained from a natural color image. Because both the laser image and the color image are acquired by the same camera, the 3D data and color data of each pixel are obtained efficiently, avoiding a complicated correspondence algorithm. In addition to the 3D data, the color data help enhance the realism of the object model. The proposed scanner consists of two line lasers, a color camera, and a rotation table. The line lasers are deployed on either side of the camera to eliminate shadow areas of a target object. This study addresses the calibration methods for the parameters of the camera, the plane equations of the line lasers, and the center of the rotation table. Experimental results demonstrate the performance of the proposed scanner in terms of accurate color and 3D data acquisition.
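
The single-camera structured-light principle summarized above reduces 3D recovery to intersecting the camera ray through a laser-line pixel with the calibrated laser plane, while the color is read from the same pixel in the natural color image. The sketch below illustrates that ray-plane intersection; the intrinsic matrix and plane coefficients are placeholder values, not the paper's calibration results.

```python
import numpy as np

def pixel_to_3d(u, v, K, plane):
    """Intersect the camera ray through pixel (u, v) with a laser plane.

    K     : 3x3 camera intrinsic matrix.
    plane : (a, b, c, d) with a*X + b*Y + c*Z + d = 0 in camera coordinates.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction from the camera center
    n, d = np.array(plane[:3]), plane[3]
    t = -d / (n @ ray)                               # scale so the point lies on the plane
    return t * ray                                   # 3D point in camera coordinates

# Placeholder calibration values for illustration only.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
laser_plane = (0.0, 0.0, 1.0, -500.0)                # plane Z = 500 mm
print(pixel_to_3d(400, 300, K, laser_plane))         # -> ~[50, 37.5, 500]
```

The color of the reconstructed point is then simply the RGB value at (u, v) in the color frame, which is what lets the single-camera design avoid a correspondence search.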

Development of a Camera Self-calibration Method for 10-parameter Mapping Function

  • Park, Sung-Min;Lee, Chang-je;Kong, Dae-Kyeong;Hwang, Kwang-il;Doh, Deog-Hee;Cho, Gyeong-Rae
    • Journal of Ocean Engineering and Technology
    • /
    • Vol. 35 No. 3
    • /
    • pp.183-190
    • /
    • 2021
  • Tomographic particle image velocimetry (PIV) is a widely used method that measures a three-dimensional (3D) flow field by reconstructing camera images into voxel images. In 3D measurements, the setting and calibration of the camera's mapping function significantly affect the results. In this study, a camera self-calibration technique is applied to tomographic PIV to reduce errors arising from the mapping function. The measured 3D particles are superimposed on the image to create a disparity map, and camera self-calibration is performed by reflecting the error of the disparity map in the center values of the particles. Synthetic vortex-ring images were generated and the developed algorithm was applied. The optimal result was obtained by applying self-calibration once when the center error was less than 1 pixel and by applying it two to three times when the error was more than 1 pixel; the maximum recovery ratio was 96%. Further self-calibration did not improve the results. The algorithm was also evaluated in an actual rotational flow experiment, and the optimal result was obtained when self-calibration was applied once, consistent with the synthetic image result. Therefore, the developed algorithm is expected to be useful for improving the performance of 3D flow measurements.
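
The self-calibration step described above projects the reconstructed particles back into each camera image and uses the residual (the disparity) to correct the mapping function. The following simplified sketch, which is not the authors' 10-parameter implementation, estimates a single global disparity correction per camera with a robust median; in practice the disparity map would be built per sub-volume.

```python
import numpy as np

def disparity_correction(particles_3d, detected_px, project):
    """Estimate a per-camera disparity correction for volume self-calibration.

    particles_3d : (N, 3) reconstructed particle positions.
    detected_px  : (N, 2) matching particle centers found in the camera image.
    project      : function mapping a 3D point to image coordinates
                   (the camera's mapping function).
    Returns the median disparity (dx, dy) used to correct the mapping function.
    """
    projected = np.array([project(p) for p in particles_3d])
    disparity = detected_px - projected        # residual between image and projection
    return np.median(disparity, axis=0)        # robust average of the disparity map

# Toy example with a pinhole-like mapping and a deliberate 1.2-pixel x-offset.
rng = np.random.default_rng(0)
pts = rng.uniform(-10, 10, size=(200, 3))
project = lambda p: 50.0 * p[:2] / (p[2] + 100.0) + 256.0
detected = np.array([project(p) for p in pts]) + np.array([1.2, 0.0])
print(disparity_correction(pts, detected, project))   # -> ~[1.2, 0.0]
```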

Inter-Module Interworking Evaluation of TDMA-Based Wireless IP Video Transmission System (TDMA 기반 무선 IP 영상 전송 시스템의 모듈간 연동 평가)

  • Sang-Ok Yoon;Myoung-Soo Kim;Gyeong-Hyu Seok
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • Vol. 18 No. 1
    • /
    • pp.1-10
    • /
    • 2023
  • In this paper, the performance of a long-distance wireless transmission system for high-definition video, designed and implemented with domestic wireless communication technology to supplement the existing wire-based CCTV surveillance and IP camera systems, is evaluated. An interworking test between the wireless multi-IP camera transmission terminal and the wireless communication RF module, together with an integration test of the wireless multi-IP camera-based video transmission system, confirms that the modules interwork properly during video transmission.

Object detection using a light field camera (라이트 필드 카메라를 사용한 객체 검출)

  • Jeong, Mingu;Kim, Dohun;Park, Sanghyun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021 Fall Conference of the Korean Institute of Information and Communication Sciences
    • /
    • pp.109-111
    • /
    • 2021
  • Recently, computer vision research using light field cameras has been actively conducted. Because light field cameras capture spatial information, they are being studied in fields such as depth map estimation, super resolution, and 3D object detection. In this paper, we propose a method for detecting objects in blurred images using the 7×7 array of images acquired by a light field camera. Blurred images, which are a weakness of conventional cameras, are handled by performing detection on the light field data. The proposed method uses the SSD algorithm, and its performance is evaluated on blurred images acquired from the light field camera.

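Since the abstract reports running the SSD detector on the 7×7 array of views captured by the light field camera, a minimal sketch of one possible processing loop is given below. The pooling of detections across views, the array shape, and the placeholder detector are assumptions for illustration; the authors' actual pipeline may differ.

```python
import numpy as np

def detect_over_subapertures(lightfield, detector):
    """Run a 2-D object detector over every view of a 7x7 light field array.

    lightfield : array of shape (7, 7, H, W, 3) holding the view array.
    detector   : callable image -> list of (x, y, w, h, score) boxes
                 (e.g. an SSD model wrapped to this signature).
    Returns all boxes pooled across views, sorted by score.
    """
    boxes = []
    for i in range(lightfield.shape[0]):
        for j in range(lightfield.shape[1]):
            boxes.extend(detector(lightfield[i, j]))
    return sorted(boxes, key=lambda b: b[-1], reverse=True)

# Dummy detector and random light field, for illustration only.
dummy = lambda img: [(10, 10, 32, 32, float(img.mean()) / 255.0)]
lf = np.random.randint(0, 255, size=(7, 7, 64, 64, 3), dtype=np.uint8)
print(detect_over_subapertures(lf, dummy)[:3])
```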

Development of a real-time gamma camera for high radiation fields

  • Minju Lee;Yoonhee Jung;Sang-Han Lee
    • Nuclear Engineering and Technology
    • /
    • Vol. 56 No. 1
    • /
    • pp.56-63
    • /
    • 2024
  • In high radiation fields, gamma cameras suffer from pulse pile-up, resulting in poor energy resolution, count losses, and image distortion. To overcome this problem, various methods have been introduced to reduce the size of the aperture or pixels, reject pile-up events, or correct pile-up events, but these technologies have limitations in terms of mechanical design and real-time processing. The purpose of this study is to develop a real-time gamma camera for evaluating radioactive contamination in high radiation fields. The gamma camera is composed of a pinhole collimator, a NaI(Tl) scintillator, a position-sensitive photomultiplier tube (PSPMT), a signal processing board, and a data acquisition (DAQ) system. Pulse pile-up is corrected in real time on a field-programmable gate array (FPGA) using the start time correction (STC) method, which corrects the amplitude of a pile-up event by correcting the time at its start point. The performance of the gamma camera was evaluated using a high-dose-rate 137Cs source. For pulse pile-up ratios (PPRs) of 0.45 and 0.30, the energy resolution improved by 61.5% and 20.3%, respectively. In addition, the image artifacts in the 137Cs radioisotope image caused by pile-up were reduced.
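
One plausible reading of the start time correction (STC) idea, offered only as an illustration and not as the paper's FPGA implementation, is that the residual tail of the preceding pulse is extrapolated to the start time of the piled-up pulse and subtracted from its measured amplitude. The exponential tail model and the decay constant below are assumptions.

```python
import numpy as np

def correct_pileup_amplitude(prev_amp, prev_t0, cur_amp_measured, cur_t0, tau):
    """Correct a piled-up pulse amplitude by removing the tail of the previous pulse.

    Assumes exponential pulse tails with decay constant tau (same time units as t0).
    The measured amplitude of the second pulse rides on the residual of the first,
    so that residual, evaluated at the second pulse's start time, is subtracted.
    """
    residual = prev_amp * np.exp(-(cur_t0 - prev_t0) / tau)
    return cur_amp_measured - residual

# Two pulses 0.5 us apart with a 2 us decay constant; the true second amplitude is 1.0.
true_amp = 1.0
measured = true_amp + 3.0 * np.exp(-0.5 / 2.0)   # 3.0 is the first pulse amplitude
print(correct_pileup_amplitude(3.0, 0.0, measured, 0.5, 2.0))   # -> ~1.0
```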

Preliminary Study on Performance Evaluation of a Stacking-structure Compton Camera by Using Compton Imaging Simulator (Compton Imaging Simulator를 이용한 다층 구조 컴프턴 카메라 성능평가 예비 연구)

  • Lee, Se-Hyung;Park, Sung-Ho;Seo, Hee;Park, Jin-Hyung;Kim, Chan-Hyeong;Lee, Ju-Hahn;Lee, Chun-Sik;Lee, Jae-Sung
    • Progress in Medical Physics
    • /
    • Vol. 20 No. 2
    • /
    • pp.51-61
    • /
    • 2009
  • A Compton camera, which is based on the geometrical interpretation of Compton scattering, is a very promising gamma-ray imaging device considering its several advantages over conventional gamma-ray imaging devices: high imaging sensitivity, 3D imaging capability from a fixed position, multi-tracing functionality, and almost no limitation on photon energy. In the present study, a Monte Carlo-based, user-friendly Compton imaging simulator was developed in the form of a graphical user interface (GUI) based on Geant4 and MATLAB™. The simulator was tested against the experimental results of the double-scattering Compton camera under development at Hanyang University in Korea. The imaging resolution of the simulated Compton image agreed well with that of the measured image. The imaging sensitivity of the measured data was 2-3 times higher than that of the simulated data because the measured data contain random coincidence events. The performance of a stacking-structure Compton camera was then evaluated using the simulator. The results show that the Compton camera achieves its highest performance when it uses four layers of scatterer detectors.

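The geometrical interpretation of Compton scattering that the camera relies on constrains each event's source direction to a cone whose half-angle follows from the energies deposited in the scatterer and absorber layers. The standard kinematic relation, shown here as a short sketch (not code from the paper), is cos(theta) = 1 - m_e c^2 (1/E2 - 1/(E1 + E2)).

```python
import numpy as np

M_E_C2 = 511.0  # electron rest energy, keV

def compton_cone_angle(e_scatter, e_absorb):
    """Half-angle (radians) of the Compton cone from the two deposited energies (keV).

    Standard Compton kinematics: cos(theta) = 1 - m_e c^2 * (1/E2 - 1/(E1 + E2)),
    where E1 is deposited in the scatterer and E2 in the absorber.
    """
    cos_theta = 1.0 - M_E_C2 * (1.0 / e_absorb - 1.0 / (e_scatter + e_absorb))
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

# A 662 keV (Cs-137) photon depositing 200 keV in the scatterer and 462 keV in the absorber.
print(np.degrees(compton_cone_angle(200.0, 462.0)))   # -> ~48 degrees
```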

Tracking of ground objects using image information for autonomous rotary unmanned aerial vehicles (자동 비행 소형 무인 회전익항공기의 영상정보를 이용한 지상 이동물체 추적 연구)

  • Kang, Tae-Hwa;Baek, Kwang-Yul;Mok, Sung-Hoon;Lee, Won-Suk;Lee, Dong-Jin;Lim, Seung-Han;Bang, Hyo-Choong
    • Journal of the Korean Society for Aeronautical & Space Sciences
    • /
    • Vol. 38 No. 5
    • /
    • pp.490-498
    • /
    • 2010
  • This paper presents an autonomous target tracking approach for an unmanned aerial vehicle using an onboard gimbaled (pan-tilt) camera system, together with a technique for periodically transmitting images to the ground control station. The miniature rotary-wing UAV used in this study carries a small, high-performance camera and employs an improved target acquisition technique and an autonomous target tracking algorithm. An image stabilization algorithm was also adopted to stabilize the real-time image sequences. Finally, the target tracking performance was verified through a real flight test.
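
Keeping the target centered with a gimbaled (pan-tilt) camera is typically done by converting the target's pixel offset from the image center into pan and tilt rate commands. The minimal proportional controller below is only a generic illustration of that step; the gain, field-of-view values, and sign convention are assumptions and not taken from the paper.

```python
def gimbal_rate_command(target_px, image_size, hfov_deg, vfov_deg, kp=1.5):
    """Proportional pan/tilt rate command (deg/s) that centers the target in the image.

    target_px  : (u, v) pixel position of the tracked target.
    image_size : (width, height) in pixels.
    hfov/vfov  : camera field of view in degrees.
    """
    u, v = target_px
    w, h = image_size
    err_pan = (u - w / 2.0) / w * hfov_deg     # angular error approximated from pixel offset
    err_tilt = (v - h / 2.0) / h * vfov_deg
    return kp * err_pan, -kp * err_tilt        # tilt up for targets above the image center

print(gimbal_rate_command((400, 200), (640, 480), 60.0, 45.0))   # -> (11.25, 5.625)
```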

Efficient Tracking of a Moving Object Using Representative Blocks Algorithm

  • Choi, Sung-Yug;Hur, Hwa-Ra;Lee, Jang-Myung
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • Institute of Control, Robotics and Systems, ICCAS 2004
    • /
    • pp.678-681
    • /
    • 2004
  • In this paper, efficient tracking of a moving object using optimal representative blocks is implemented on a mobile robot with a pan-tilt camera. The key idea comes from the fact that, as the image of a moving object shrinks in the frame with increasing distance between the mobile robot's camera and the object, the tracking performance can be improved by changing the size of the representative blocks according to the object's image size. Motion estimation using edge detection (ED) and the block-matching algorithm (BMA) is often used for moving object tracking with vision sensors, but these methods often miss real-time vision data because they suffer from a heavy computational load. In this paper, the optimal representative block, which greatly reduces the amount of data to be computed, is defined, and its size is adjusted according to the size of the object in the image frame to improve the tracking performance. The proposed algorithm is verified experimentally using a two-degree-of-freedom active camera mounted on a mobile robot.

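The representative-block idea summarized above keeps block matching cheap by scaling the block size with the apparent size of the object in the frame. The following sketch tracks a single representative block with a sum-of-absolute-differences (SAD) search; the scaling factor, search range, and synthetic frames are illustrative assumptions rather than the authors' optimized scheme.

```python
import numpy as np

def match_block(prev_frame, cur_frame, center, obj_size, search=8):
    """Track one representative block with SAD matching.

    The block edge is scaled with the apparent object size (obj_size, pixels),
    so smaller objects get smaller blocks and the computation stays light
    without losing the target.
    """
    half = max(4, int(obj_size * 0.25))          # block half-size scales with object size
    cy, cx = center
    ref = prev_frame[cy - half:cy + half, cx - half:cx + half].astype(np.int32)
    best, best_pos = None, center
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            cand = cur_frame[y - half:y + half, x - half:x + half].astype(np.int32)
            sad = np.abs(ref - cand).sum()       # sum of absolute differences
            if best is None or sad < best:
                best, best_pos = sad, (y, x)
    return best_pos

# Synthetic example: a textured 20x20 patch shifted by (3, 5) pixels between frames.
rng = np.random.default_rng(1)
patch = rng.integers(0, 255, size=(20, 20), dtype=np.uint8)
prev = np.zeros((120, 120), np.uint8); prev[50:70, 50:70] = patch
cur = np.zeros((120, 120), np.uint8); cur[53:73, 55:75] = patch
print(match_block(prev, cur, (60, 60), obj_size=20))   # -> (63, 65)
```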

Flicker-Free Spatial-PSK Modulation for Vehicular Image-Sensor Systems Based on Neural Networks (신경망 기반 차량 이미지센서 시스템을 위한 플리커 프리 공간-PSK 변조 기법)

  • Nguyen, Trang;Hong, Chang Hyun;Islam, Amirul;Le, Nam Tuan;Jang, Yeong Min
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • Vol. 41 No. 8
    • /
    • pp.843-850
    • /
    • 2016
  • This paper introduces a novel modulation scheme for vehicular communication that takes advantage of the LED lights already available on a car. Our proposed 2-Phase Shift Keying (2-PSK) is a spatial modulation approach in which a pair of LED light sources on a car (either the rear or the front LEDs) is used as the transmitter. A typical low-frame-rate camera (no greater than 30 fps), whether a global-shutter or a rolling-shutter camera, can be used as the receiver. The modulation scheme is part of our Image Sensor Communication proposal recently submitted to IEEE 802.15.7r1 (TG7r1). In addition, a neural network approach is applied to improve the performance of LED detection and decoding in noisy conditions. Finally, analysis and experimental results are presented to demonstrate the performance of our system.
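
In the proposed spatial 2-PSK, the information is carried by the relative phase between the intensity waveforms of the paired LEDs, so both LEDs keep the same average brightness and the modulation remains flicker-free to the eye. The toy modulator and correlator below illustrate that principle only; the waveform shape, rates, and the receiver model (which ignores the camera sampling entirely) are assumptions, not the TG7r1 specification.

```python
import numpy as np

def spatial_2psk_waveforms(bits, samples_per_bit=40, carrier_cycles_per_bit=2):
    """Generate flicker-free intensity waveforms for a left/right LED pair.

    Each bit sets the relative phase between the two LEDs: bit 0 -> in phase,
    bit 1 -> 180 degrees out of phase. Both LEDs keep the same mean intensity,
    so the modulation is invisible to the human eye (flicker-free).
    """
    t = np.arange(samples_per_bit) / samples_per_bit
    carrier = np.sin(2 * np.pi * carrier_cycles_per_bit * t)
    left, right = [], []
    for b in bits:
        left.append(0.5 + 0.4 * carrier)                           # reference LED
        right.append(0.5 + 0.4 * carrier * (1 if b == 0 else -1))  # phase-keyed LED
    return np.concatenate(left), np.concatenate(right)

def demodulate(left, right, samples_per_bit=40):
    """Recover bits by correlating the two LED waveforms bit by bit."""
    n = len(left) // samples_per_bit
    bits = []
    for k in range(n):
        s = slice(k * samples_per_bit, (k + 1) * samples_per_bit)
        corr = np.sum((left[s] - 0.5) * (right[s] - 0.5))
        bits.append(0 if corr > 0 else 1)
    return bits

tx = [1, 0, 1, 1, 0]
l, r = spatial_2psk_waveforms(tx)
print(demodulate(l, r) == tx)   # -> True
```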