• Title/Summary/Keyword: Camera Performance


Controller Design for Object Tracking with an Active Camera (능동 카메라 기반의 물체 추적 제어기 설계)

  • Youn, Su-Jin;Choi, Goon-Ho
    • Journal of the Semiconductor & Display Technology / v.10 no.1 / pp.83-89 / 2011
  • In a tracking system with an active camera, it is very difficult to guarantee real-time processing because the vision system handles large amounts of data at once and incurs a processing delay. The reliability of the processed result also suffers from the slow sampling time and the uncertainty introduced by the image processing. In this paper, we characterize the dynamics of pixels projected onto the image plane and derive a mathematical model of the vision tracking system that includes the actuating part and the image processing part. Based on this model, we design a controller that stabilizes the system and improves tracking performance so that a target can be tracked rapidly. The centroid is used as the position index of the moving object, and the DC motor in the actuating part is controlled to keep the identified centroid at the center of the image plane.
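
The centroid-as-position-index idea above can be sketched in a few lines. This is a minimal illustration, not the paper's controller: the binary mask, the proportional gain `k_p`, and the pan/tilt command convention are all illustrative assumptions (the paper derives its controller from a model that includes the actuator and the image-processing delay).

```python
import numpy as np

def centroid(mask):
    """Centroid (row, col) of a binary object mask on the image plane."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def pan_tilt_command(mask, image_shape, k_p=0.01):
    """Proportional command driving the centroid toward the image center.

    k_p is an illustrative gain; a real design would account for the
    sampling time and delay of the vision loop.
    """
    cy, cx = centroid(mask)
    center_y = (image_shape[0] - 1) / 2
    center_x = (image_shape[1] - 1) / 2
    # Error = offset of the centroid from the image center; the DC motors
    # are driven to null this error.
    return k_p * (center_x - cx), k_p * (center_y - cy)

mask = np.zeros((480, 640), dtype=bool)
mask[100:120, 500:540] = True          # object blob off-center
pan, tilt = pan_tilt_command(mask, mask.shape)
```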

Demosaicing Method for Digital Cameras with White-RGB Color Filter Array

  • Park, Jongjoo;Jang, Euee Seon;Chong, Jong-Wha
    • ETRI Journal / v.38 no.1 / pp.164-173 / 2016
  • Demosaicing, or color filter array (CFA) interpolation, estimates missing color channels of raw mosaiced images from a CFA to reproduce full-color images. It is an essential process for single-sensor digital cameras with CFAs. In this paper, a new demosaicing method for digital cameras with Bayer-like W-RGB CFAs is proposed. To preserve the edge structure when reproducing full-color images, we propose an edge direction-adaptive method using color difference estimation between different channels, which can be applied to practical digital camera use. To evaluate the performance of the proposed method in terms of CPSNR, FSIM, and S-CIELAB color distance measures, we perform simulations on sets of mosaiced images captured by an actual prototype digital camera with a Bayer-like W-RGB CFA. The simulation results show that the proposed method demosaics better than a conventional one by approximately +22.4% CPSNR, +0.9% FSIM, and +36.7% S-CIELAB distance.
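
The edge direction-adaptive interpolation the abstract describes can be illustrated on a plain Bayer mosaic. This sketch shows only the direction test for the missing green channel; the paper's W-RGB CFA adds a white channel and a color-difference estimation step that are not reproduced here.

```python
import numpy as np

def green_at(raw, y, x):
    """Edge direction-adaptive estimate of the missing green value at a
    red/blue site of a Bayer mosaic (interior pixel assumed).

    Interpolating along the weaker-gradient direction preserves edges,
    which is the core idea of direction-adaptive demosaicing.
    """
    gh = abs(raw[y, x - 1] - raw[y, x + 1])   # horizontal gradient
    gv = abs(raw[y - 1, x] - raw[y + 1, x])   # vertical gradient
    if gh < gv:   # edge runs horizontally -> interpolate along it
        return (raw[y, x - 1] + raw[y, x + 1]) / 2
    if gv < gh:   # edge runs vertically
        return (raw[y - 1, x] + raw[y + 1, x]) / 2
    # No dominant direction: average all four neighbors.
    return (raw[y, x - 1] + raw[y, x + 1] + raw[y - 1, x] + raw[y + 1, x]) / 4
```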

Obstacle Avoidance System Using a Single Camera and LMNN Fuzzy Controller (단일 영상과 LM 신경망 퍼지제어기를 적용한 장애물 회피 시스템)

  • Yoo, Sung-Goo;Chong, Kil-To
    • Journal of Institute of Control, Robotics and Systems / v.15 no.2 / pp.192-197 / 2009
  • In this paper, we propose an obstacle avoidance system that uses a single camera image and an LM (Levenberg-Marquardt) neural network fuzzy controller. As robots are adapted to various fields of industry and public life, a robot must move using self-navigation and obstacle avoidance algorithms; when it moves toward a target point, obstacle avoidance is essential. We therefore present an avoidance algorithm in which a fuzzy controller acts on sensor data and image information from the camera, while an LM neural network minimizes the motion error. Simulation tests verify the performance of the system.

A Wafer Pre-Alignment System Using One Image of a Whole Wafer (하나의 웨이퍼 전체 영상을 이용한 웨이퍼 Pre-Alignment 시스템)

  • Koo, Ja-Myoung;Cho, Tai-Hoon
    • Journal of the Semiconductor & Display Technology / v.9 no.3 / pp.47-51 / 2010
  • This paper presents a wafer pre-alignment system that is improved by using an image of the entire wafer area. In the previous method, image acquisition took about 80% of the total pre-alignment time. The proposed system uses only one image of the entire wafer area, captured by a high-resolution CMOS camera, so image acquisition accounts for only about 1% of the total process time. However, the larger field of view (FOV) needed to image the whole wafer worsens camera lens distortion, so a camera calibration using high-order polynomials is applied for accurate lens distortion correction, and template matching is used to locate the notch accurately. The performance of the proposed system was demonstrated by experiments on wafer center alignment and notch alignment.
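
The polynomial distortion correction can be sketched as fitting a high-order polynomial that maps measured (distorted) radii back to true radii. The calibration data and distortion coefficient below are synthetic assumptions for illustration; a real calibration would use a grid of target points across the whole FOV.

```python
import numpy as np

# Hypothetical calibration data: true radii of target points (pixels)
# and the radii actually measured through the distorting lens.
r_true = np.linspace(0, 300, 31)
r_dist = r_true * (1 + 2e-7 * r_true**2)   # synthetic mild barrel distortion

# Fit a high-order polynomial r_true = f(r_dist), as in the paper's
# correction step; degree 5 is an illustrative choice.
coeffs = np.polyfit(r_dist, r_true, deg=5)

def undistort_radius(r):
    """Map a measured radius to its distortion-corrected value."""
    return float(np.polyval(coeffs, r))
```

Each detected point is then moved radially from the distortion center to its corrected radius before the alignment geometry is computed.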

Pseudo-RGB-based Place Recognition through Thermal-to-RGB Image Translation (열화상 영상의 Image Translation을 통한 Pseudo-RGB 기반 장소 인식 시스템)

  • Seunghyeon Lee;Taejoo Kim;Yukyung Choi
    • The Journal of Korea Robotics Society / v.18 no.1 / pp.48-52 / 2023
  • Many studies have sought to make Visual Place Recognition reliable in various environments, including edge cases. However, existing approaches use visible imaging sensors (RGB cameras), which, as is widely known, are greatly affected by illumination changes. In this paper, we instead use an invisible imaging sensor, a long-wavelength infrared (LWIR) camera, which is more reliable in low-light and highly noisy conditions. Moreover, although the sensor is an LWIR camera, the thermal image is translated into an RGB image, so the proposed method remains highly compatible with existing algorithms and databases. We demonstrate that the proposed method outperforms the baseline by about 0.19 in recall.
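
The recall figure quoted above is typically computed by nearest-neighbor retrieval over place descriptors. This is a generic sketch of that metric, not the paper's evaluation code; the descriptor format and the ground-truth convention (`query_gt[i]` as the set of database indices matching query i) are illustrative assumptions.

```python
import numpy as np

def recall_at_1(query_desc, db_desc, query_gt):
    """Fraction of queries whose nearest database descriptor (Euclidean
    distance) belongs to the same place as the query."""
    hits = 0
    for i, q in enumerate(query_desc):
        # Index of the closest database descriptor to this query.
        nearest = int(np.argmin(np.linalg.norm(db_desc - q, axis=1)))
        hits += nearest in query_gt[i]
    return hits / len(query_desc)
```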

Imaging Performance Analysis of an EO/IR Dual Band Airborne Camera

  • Lee, Jun-Ho;Jung, Yong-Suk;Ryoo, Seung-Yeol;Kim, Young-Ju;Park, Byong-Ug;Kim, Hyun-Jung;Youn, Sung-Kie;Park, Kwang-Woo;Lee, Haeng-Bok
    • Journal of the Optical Society of Korea / v.15 no.2 / pp.174-181 / 2011
  • An airborne sensor is developed for remote sensing on an unmanned aerial vehicle (UAV). The sensor is an optical payload for an electro-optical/infrared (EO/IR) dual-band camera that combines visible and IR imaging capabilities in a compact and lightweight package. It adopts a Ritchey-Chrétien telescope for the common front-end optics, with several relay optics that divide and deliver the EO and IR bands to a charge-coupled device (CCD) and an IR detector, respectively. The dual-band EO/IR camera is mounted on a two-axis gimbal that provides stabilized imaging and precision pointing in both the along-track and cross-track directions. We first investigate the mechanical deformations, displacements, and stresses of the EO/IR camera through finite element analysis (FEA) for five cases: three gravitational effects and two thermal conditions. For the gravitational effects, one gravitational acceleration (1 g) is applied along each of the +x, +y, and +z directions. The two thermal conditions are an overall temperature change from 20 °C to 30 °C and a temperature gradient across the primary mirror pupil from -5 °C to +5 °C. Optical performance, represented by the modulation transfer function (MTF), is then predicted by integrating the FEA results into optical design/analysis software. The analysis shows that the IR channel can sustain its designed imaging performance, i.e., MTF of 38% at 13 line pairs per mm (lpm), using its refocus capability. Similarly, the EO channel can keep its designed performance (MTF of 73% at 27.3 lpm) except under the overall temperature change, where it experiences a slight degradation (16% MTF drop) for the 20 °C overall temperature change.

Design of 3-Axis Focus Mechanism Using Piezoelectric Actuators for a Small Satellite Camera (소형 위성 카메라의 압전작동기 타입 3-축 포커스 메커니즘 설계)

  • Hong, Dae Gi;Hwang, Jai Hyuk
    • Journal of Aerospace System Engineering / v.12 no.3 / pp.9-17 / 2018
  • For Earth observation, a small satellite camera has relatively weak structural stability compared to a medium-sized satellite, so severe launch and space environments can misalign its optical components. These alignment errors can degrade the optical performance of the camera. In this study, we propose a 3-axis focus mechanism to compensate for misalignment in a small satellite camera. The mechanism consists of three piezoelectric actuators that perform x-axis and y-axis tilt together with de-space compensation. Design requirements for the focus mechanism were derived from the design of the Schmidt-Cassegrain target optical system. To compensate for the misalignment of the secondary mirror (M2), the focus mechanism was installed just behind M2 to control its 3-axis movement. The flexure was designed using a Box-Behnken experimental plan to minimize the optical degradation due to wavefront error, which was analyzed using ANSYS. The fabricated focus mechanism demonstrated excellent servo performance in experiments with PID servo control.

A NEW AUTO-GUIDING SYSTEM FOR CQUEAN

  • CHOI, NAHYUN;PARK, WON-KEE;LEE, HYE-IN;JI, TAE-GEUN;JEON, YISEUL;IM, MYUNGSHI;PAK, SOOJONG
    • Journal of The Korean Astronomical Society / v.48 no.3 / pp.177-185 / 2015
  • We develop a new auto-guiding system for the Camera for QUasars in the EArly uNiverse (CQUEAN). CQUEAN is an optical CCD camera system attached to the 2.1-m Otto-Struve Telescope (OST) at McDonald Observatory, USA. The new auto-guiding system differs from the original one in the following ways: it is attached to the finder scope instead of the cassegrain focus of the OST; it has its own filter system for observing bright targets; and it is controlled with the CQUEAN Auto-guiding Package, a newly developed auto-guiding program. The finder scope commands a very wide field of view at the expense of poorer light-gathering power than that of the OST. Based on star count data and the limiting magnitude of the system, we estimate that more than 5.9 stars are observable in a single FOV of the new auto-guiding CCD camera. An adapter was made to attach the system to the finder scope. The new auto-guiding system successfully guided the OST to obtain science data with CQUEAN during the test run in February 2014. The FWHM and ellipticity distributions of stellar profiles on CQUEAN images guided with the new system indicate guiding capabilities similar to those of the original auto-guiding system, with slightly poorer guiding at longer exposures, as indicated by the position angle distribution. We conclude that the new auto-guiding system has overall guiding performance similar to the original system. It will be used for the second-generation CQUEAN, but it can also be used for other cassegrain instruments of the OST.

Real-Time Image Processing System for PDP Pattern Inspection with Line Scan Camera (PDP 패턴검사를 위한 실시간 영상처리시스템 개발)

  • Cho Seog-Bin;Baek Gyeoung-Hun;Yi Un-Kun;Nam Ki-Gon;Baek Kwang-Ryul
    • Journal of the Institute of Electronics Engineers of Korea SC / v.42 no.3 s.303 / pp.17-24 / 2005
  • Various defects arise in the PDP manufacturing process, and detecting them early for reprocessing is an important factor in reducing production cost. In this paper, an image processing system for PDP pattern inspection is designed and implemented using a high-performance, high-accuracy CCD line scan camera. For preprocessing the high-speed line image data, the Image Processing Part (IPP) is designed and implemented using a high-performance DSP, a FIFO, and an FPGA. The Data Management and System Control Part (DMSCP) is implemented with an ARM processor to control multiple IPPs and cameras and to provide remote users with the processed data. To evaluate the implemented system, an experimental environment with an area camera for review and a moving shelf was built. Experimental results showed that the proposed system was quite successful.

Development of an FPGA-based Sealer Coating Inspection Vision System for Automotive Glass Assembly Automation Equipment (자동차 글라스 조립 자동화설비를 위한 FPGA기반 실러 도포검사 비전시스템 개발)

  • Ju-Young Kim;Jae-Ryul Park
    • Journal of Sensor Science and Technology / v.32 no.5 / pp.320-327 / 2023
  • In this study, an FPGA-based sealer inspection system was developed to inspect the sealer applied when installing vehicle glass on a car body. Sealer is a liquid or paste-like material that provides adhesion, sealing, and waterproofing when mounting and assembling vehicle parts on a car body. The system installed in the existing vehicle parts line neither detects the sealer in the glass rotation section nor processes it quickly enough. To solve this problem, this study developed a line laser camera sensor and an FPGA vision signal processing module. The line laser camera sensor was developed so that the resolution and speed of the camera for data acquisition can be modified according to the irradiation angle of the laser, and it was designed with the mountability of the entire system in mind to prevent interference with the sealer ejection machine. In addition, a vision signal processing module was developed using the Zynq-7020 FPGA chip to improve the processing speed of the algorithm that converts the sealer shape image acquired from a 2D camera into a profile and calculates the width and height of the sealer from that profile. The performance of the developed inspection system was verified in an experimental environment identical to an actual automobile production line, and the results confirmed sealer application inspection at a level satisfying automotive field standards.
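
The width/height calculation from a line-laser profile can be sketched as a simple threshold on the extracted height profile. This is an illustrative software model of the computation the abstract describes as running on the FPGA; the column pitch and bead/substrate threshold below are assumed values.

```python
import numpy as np

def bead_width_height(profile, pitch_mm, threshold_mm=0.5):
    """Width and height of a sealer bead from one line-laser height profile.

    profile: heights (mm) above the glass surface at each camera column;
    pitch_mm: lateral spacing between columns in mm;
    threshold_mm: illustrative cutoff separating bead from substrate.
    """
    cols = np.nonzero(profile > threshold_mm)[0]
    if cols.size == 0:
        return 0.0, 0.0          # no bead detected on this scan line
    width = (cols[-1] - cols[0] + 1) * pitch_mm
    height = float(profile.max())
    return width, height
```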