• Title/Abstract/Keyword: Camera Performance

Search results: 1,814 items; processing time: 0.031 s

능동 카메라 기반의 물체 추적 제어기 설계 (Controller Design for Object Tracking with an Active Camera)

  • 윤수진;최군호
    • 반도체디스플레이기술학회지 / Vol.10 No.1 / pp.83-89 / 2011
  • In a tracking system with an active camera, it is very difficult to guarantee real-time processing, because the vision system handles large amounts of data at once and incurs processing delay. The reliability of the processed result is also degraded by the slow sampling time and the uncertainty introduced by image processing. In this paper, we characterize the dynamics of pixels projected onto the image plane and derive a mathematical model of the vision tracking system that includes both the actuating part and the image-processing part. Based on this model, we design a controller that stabilizes the system and enhances tracking performance so that a target can be tracked rapidly. The centroid is used as the position index of the moving object, and the DC motor in the actuating part is controlled to keep the identified centroid at the center of the image plane.
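
The centroid-centering loop described above can be sketched roughly as follows. This is an illustrative Python sketch, not the authors' controller: the binary object mask, the proportional gain `kp`, and the `motor_command` helper are all assumptions for illustration.

```python
import numpy as np

def centroid(mask: np.ndarray) -> tuple:
    """Centroid (cx, cy) of the nonzero pixels in a binary object mask."""
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

def centering_error(mask: np.ndarray) -> tuple:
    """Pixel offset of the object centroid from the image-plane center."""
    h, w = mask.shape
    cx, cy = centroid(mask)
    return cx - (w - 1) / 2.0, cy - (h - 1) / 2.0

def motor_command(error_px: float, kp: float = 0.05) -> float:
    """Proportional command for the pan/tilt DC motor (gain is illustrative)."""
    return -kp * error_px
```

In the paper's framing the loop would run once per processed frame, which is exactly why the slow vision sampling time dominates the controller design.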

Demosaicing Method for Digital Cameras with White-RGB Color Filter Array

  • Park, Jongjoo;Jang, Euee Seon;Chong, Jong-Wha
    • ETRI Journal / Vol.38 No.1 / pp.164-173 / 2016
  • Demosaicing, or color filter array (CFA) interpolation, estimates the missing color channels of raw mosaiced images from a CFA to reproduce full-color images; it is an essential process for single-sensor digital cameras with CFAs. In this paper, a new demosaicing method for digital cameras with Bayer-like W-RGB CFAs is proposed. To preserve edge structure when reproducing full-color images, we propose an edge-direction-adaptive method using color-difference estimation between channels, which is applicable to practical digital cameras. To evaluate the performance of the proposed method in terms of CPSNR, FSIM, and S-CIELAB color-distance measures, we perform simulations on sets of mosaiced images captured by an actual prototype digital camera with a Bayer-like W-RGB CFA. The simulation results show that the proposed method outperforms a conventional one by approximately +22.4% in CPSNR, +0.9% in FSIM, and +36.7% in S-CIELAB color distance.
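
Since the evaluation relies on CPSNR, a minimal sketch of that metric may help. This is the standard PSNR-over-all-color-channels definition (illustrative Python, not code from the paper; `peak=255.0` assumes 8-bit images):

```python
import numpy as np

def cpsnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Color PSNR in dB: mean squared error taken jointly over all channels."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

A demosaicing result would be scored as `cpsnr(ground_truth_rgb, demosaiced_rgb)`, higher being better.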

단일 영상과 LM 신경망 퍼지제어기를 적용한 장애물 회피 시스템 (Obstacle Avoidance System Using a Single Camera and LMNN Fuzzy Controller)

  • 유성구;정길도
    • 제어로봇시스템학회논문지 / Vol.15 No.2 / pp.192-197 / 2009
  • In this paper, we propose an obstacle avoidance system that uses a single camera image and an LM (Levenberg-Marquardt) neural-network fuzzy controller. As robot technology is adapted to various industrial and public applications, robots must move using self-navigation and obstacle-avoidance algorithms; when a robot moves toward a target point, obstacle avoidance is an essential capability. We therefore present an avoidance method based on a fuzzy controller that uses sensor data and image information from the camera, with the LM neural network employed to minimize the motion error. Simulation tests verify the performance of the system.

하나의 웨이퍼 전체 영상을 이용한 웨이퍼 Pre-Alignment 시스템 (A Wafer Pre-Alignment System Using One Image of a Whole Wafer)

  • 구자명;조태훈
    • 반도체디스플레이기술학회지 / Vol.9 No.3 / pp.47-51 / 2010
  • This paper presents a wafer pre-alignment system improved by using an image of the entire wafer area. In the previous method, image acquisition took about 80% of the total pre-alignment time. The proposed system uses only one image of the entire wafer area from a high-resolution CMOS camera, so image acquisition accounts for only about 1% of the total process time. However, the larger field of view (FOV) required to capture the whole wafer aggravates camera lens distortion, so a camera calibration using high-order polynomials is applied for accurate lens-distortion correction, and template matching is used to locate the notch. The performance of the proposed system was demonstrated by wafer-center alignment and notch alignment experiments.
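
The high-order polynomial calibration idea can be sketched as a radial mapping fitted from corresponding distorted/true radii. This is an illustrative Python sketch only: the purely radial model, the function names, and the polynomial degree are assumptions, not the paper's actual calibration.

```python
import numpy as np

def fit_radial_polynomial(r_distorted, r_true, degree=5):
    """Least-squares fit of r_true = f(r_distorted) with a polynomial."""
    return np.polyfit(r_distorted, r_true, degree)

def undistort_points(pts, center, coeffs):
    """Map distorted pixel coordinates to corrected ones by rescaling radii."""
    v = pts - center
    r = np.hypot(v[:, 0], v[:, 1])
    r_corr = np.polyval(coeffs, r)
    scale = np.where(r > 0, r_corr / r, 1.0)
    return center + v * scale[:, None]
```

In practice the (r_distorted, r_true) pairs would come from imaging a calibration target with known geometry across the full wafer-sized FOV.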

열화상 영상의 Image Translation을 통한 Pseudo-RGB 기반 장소 인식 시스템 (Pseudo-RGB-based Place Recognition through Thermal-to-RGB Image Translation)

  • 이승현;김태주;최유경
    • 로봇학회논문지 / Vol.18 No.1 / pp.48-52 / 2023
  • Many studies have sought to make visual place recognition reliable in various environments, including edge cases. However, existing approaches use visible-band imaging sensors, i.e., RGB cameras, which are, as is widely known, strongly affected by illumination changes. In this paper, we therefore use an invisible-band imaging sensor, a long-wavelength infrared (LWIR) camera, instead of RGB; it is more reliable under low-light and highly noisy conditions. Moreover, although the sensor is an LWIR camera, the thermal image is translated into an RGB image, so the proposed method remains highly compatible with existing algorithms and databases. We demonstrate that the proposed method outperforms the baseline method by about 0.19 in recall.
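
Recall figures like the one quoted above are typically computed by checking whether each query's nearest database descriptor was captured at a physically nearby place. A minimal sketch (illustrative Python; the Euclidean descriptor distance, the 25-unit position threshold, and the function name are assumptions, not the paper's protocol):

```python
import numpy as np

def recall_at_1(query_desc, db_desc, query_pos, db_pos, dist_thresh=25.0):
    """Fraction of queries whose nearest-descriptor database image was taken
    within dist_thresh (same units as the positions) of the query location."""
    hits = 0
    for q, p in zip(query_desc, query_pos):
        nn = int(np.argmin(np.linalg.norm(db_desc - q, axis=1)))
        if np.linalg.norm(db_pos[nn] - p) <= dist_thresh:
            hits += 1
    return hits / len(query_desc)
```

For the pseudo-RGB pipeline, `query_desc` would be descriptors extracted from the translated thermal images, matched against an existing RGB database.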

Imaging Performance Analysis of an EO/IR Dual Band Airborne Camera

  • Lee, Jun-Ho;Jung, Yong-Suk;Ryoo, Seung-Yeol;Kim, Young-Ju;Park, Byong-Ug;Kim, Hyun-Jung;Youn, Sung-Kie;Park, Kwang-Woo;Lee, Haeng-Bok
    • Journal of the Optical Society of Korea / Vol.15 No.2 / pp.174-181 / 2011
  • An airborne sensor is developed for remote sensing on an unmanned aerial vehicle (UAV). The sensor is an optical payload for an electro-optical/infrared (EO/IR) dual-band camera that combines visible and IR imaging capabilities in a compact and lightweight package. It adopts a Ritchey-Chrétien telescope for the common front-end optics, with several relay optics that divide and deliver the EO and IR bands to a charge-coupled device (CCD) and an IR detector, respectively. The dual-band EO/IR camera is mounted on a two-axis gimbal that provides stabilized imaging and precision pointing in both the along- and cross-track directions. We first investigate the mechanical deformations, displacements, and stresses of the EO/IR camera through finite element analysis (FEA) for five cases: three gravitational effects and two thermal conditions. For the gravitational effects, one gravitational acceleration (1 g) is applied along each of the +x, +y, and +z directions. The two thermal conditions are an overall temperature change from 20 °C to 30 °C and a temperature gradient across the primary mirror pupil from -5 °C to +5 °C. Optical performance, represented by the modulation transfer function (MTF), is then predicted by integrating the FEA results into optical design/analysis software. This analysis shows that the IR channel can sustain imaging performance as designed, i.e., MTF of 38% at 13 line pairs per mm (lpm), with refocus capability. Similarly, the EO channel keeps its designed performance (MTF of 73% at 27.3 lpm) except under the overall temperature change, where it experiences a slight degradation (16% drop in MTF) for the 20 °C overall temperature change.

소형 위성 카메라의 압전작동기 타입 3-축 포커스 메커니즘 설계 (Design of 3-Axis Focus Mechanism Using Piezoelectric Actuators for a Small Satellite Camera)

  • 홍대기;황재혁
    • 항공우주시스템공학회지 / Vol.12 No.3 / pp.9-17 / 2018
  • For a small Earth-observation satellite camera, alignment errors of the optical components arise easily in the harsh launch and space environments because its structural stability is relatively weak compared with medium and large satellites, and such misalignment degrades the camera's optical performance. In this study, a 3-axis focus mechanism is proposed to compensate for the alignment errors of a small satellite camera. The mechanism consists of three piezoelectric actuators and can correct x-axis tilt, y-axis tilt, and de-space. The design requirements of the focus mechanism were derived from the target optical design, a Schmidt-Cassegrain type telescope. To compensate for secondary-mirror misalignment, the focus mechanism is attached behind the secondary mirror and controls its 3-axis motion. Flexures that minimize the optical performance degradation caused by wavefront error were designed using the Box-Behnken design of experiments, and the wavefront-error analysis was performed with ANSYS. The servo performance of the fabricated focus mechanism was verified through mathematical modeling of the actuators, PID controller design, and servo-control experiments.
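
The PID servo loop mentioned above can be sketched generically. This is an illustrative Python sketch only: the discrete PID form, the gains, and the first-order plant used in the check below are assumptions, not the paper's actuator model.

```python
class PID:
    """Discrete PID controller (backward-difference derivative,
    rectangular integration); all gains here are illustrative."""
    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In a focus-mechanism context, the setpoint would be the commanded actuator displacement and the measurement would come from the position sensor of each piezoelectric actuator.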

A NEW AUTO-GUIDING SYSTEM FOR CQUEAN

  • CHOI, NAHYUN;PARK, WON-KEE;LEE, HYE-IN;JI, TAE-GEUN;JEON, YISEUL;IM, MYUNGSHI;PAK, SOOJONG
    • 천문학회지 / Vol.48 No.3 / pp.177-185 / 2015
  • We develop a new auto-guiding system for the Camera for QUasars in the EArly uNiverse (CQUEAN). CQUEAN is an optical CCD camera system attached to the 2.1-m Otto Struve Telescope (OST) at McDonald Observatory, USA. The new auto-guiding system differs from the original one in the following respects: it is attached to the finder scope instead of the cassegrain focus of the OST; it has its own filter system for observing bright targets; and it is controlled by the CQUEAN Auto-guiding Package, a newly developed auto-guiding program. The finder scope commands a very wide field of view (FOV) at the expense of poorer light-gathering power than the OST. Based on star-count data and the limiting magnitude of the system, we estimate that more than 5.9 observable stars fall, on average, within a single FOV of the new auto-guiding CCD camera. An adapter was made to attach the system to the finder scope. The new auto-guiding system successfully guided the OST to obtain science data with CQUEAN during the test run in February 2014. The FWHM and ellipticity distributions of stellar profiles in CQUEAN images guided with the new system indicate guiding capabilities similar to those of the original auto-guiding system, but with slightly poorer guiding at longer exposures, as indicated by the position-angle distribution. We conclude that the new auto-guiding system has overall guiding performance similar to the original system. It will be used for the second-generation CQUEAN but can also serve other cassegrain instruments of the OST.
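
The FWHM and ellipticity of stellar profiles used in the guiding comparison are commonly derived from second image moments. A minimal sketch follows (illustrative Python, not the CQUEAN pipeline; the Gaussian-equivalent factor 2.3548 = 2√(2 ln 2) is standard):

```python
import numpy as np

def ellipticity_and_fwhm(img: np.ndarray):
    """Second-moment ellipticity (1 - b/a) and Gaussian-equivalent FWHM
    in pixels, from a background-subtracted stellar image stamp."""
    y, x = np.indices(img.shape)
    total = img.sum()
    cx, cy = (x * img).sum() / total, (y * img).sum() / total
    mxx = ((x - cx) ** 2 * img).sum() / total
    myy = ((y - cy) ** 2 * img).sum() / total
    mxy = ((x - cx) * (y - cy) * img).sum() / total
    # Eigenvalues of the second-moment matrix give the axis lengths squared.
    tr, disc = mxx + myy, np.sqrt(((mxx - myy) / 2) ** 2 + mxy ** 2)
    a2, b2 = tr / 2 + disc, tr / 2 - disc
    ellipticity = 1.0 - np.sqrt(b2 / a2)
    fwhm = 2.3548 * np.sqrt(tr / 2.0)  # 2*sqrt(2*ln 2) * sigma
    return ellipticity, fwhm
```

Guiding quality is then judged by how these quantities (and the position angle of the major axis) distribute across many guided exposures.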

PDP 패턴검사를 위한 실시간 영상처리시스템 개발 (Real-Time Image Processing System for PDP Pattern Inspection with Line Scan Camera)

  • 조석빈;백경훈;이운근;남기곤;백광렬
    • 전자공학회논문지SC / Vol.42 No.3 / pp.17-24 / 2005
  • This paper proposes an image-processing algorithm that detects pattern defects on the upper plate of a PDP and describes the implementation of image-processing hardware for real-time execution. The proposed algorithm extracts a defect image by exploiting the pattern interval of a reference image. The image-processing system was implemented in two parts: high-speed image-processing hardware designed with a real-time architecture, and data-management/system-control hardware that coordinates the multiple image-processing boards. Using the implemented system, an experimental setup for inspecting actual PDP upper plates was built and pattern-defect inspection experiments were performed. The results demonstrate the effectiveness of the proposed algorithm and the implemented hardware.
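
The pattern-interval idea, comparing each pixel with the pixel one pattern period away so that a periodic pattern cancels and only defects stand out, can be sketched as follows (illustrative Python; the horizontal-period assumption, the threshold, and the function name are not from the paper):

```python
import numpy as np

def period_difference_defects(img: np.ndarray, period: int, thresh: int) -> np.ndarray:
    """Boolean defect map: compare each pixel with the pixel one pattern
    period to its left. On a perfectly periodic pattern the difference is ~0,
    so only defects (and their period-shifted ghosts) exceed the threshold."""
    a = img[:, period:].astype(np.int32)
    b = img[:, :-period].astype(np.int32)
    return np.abs(a - b) > thresh
```

A hardware implementation of this comparison is simple enough (a line delay plus a subtractor and comparator) to run at line-scan camera rates, which matches the real-time motivation of the paper.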

자동차 글라스 조립 자동화설비를 위한 FPGA기반 실러 도포검사 비전시스템 개발 (Development of an FPGA-based Sealer Coating Inspection Vision System for Automotive Glass Assembly Automation Equipment)

  • 김주영;박재률
    • 센서학회지 / Vol.32 No.5 / pp.320-327 / 2023
  • In this study, an FPGA-based sealer inspection system was developed to inspect the sealer applied when installing vehicle glass on a car body. A sealer is a liquid or paste-like material that provides adhesion, such as sealing and waterproofing, when mounting and assembling vehicle parts on a car body. The system installed in the existing vehicle parts line does not detect the sealer in the glass-rotation section and takes a long time to process. To solve these problems, this study developed a line-laser camera sensor and an FPGA vision-signal-processing module. The line-laser camera sensor was developed so that the resolution and speed of the camera used for data acquisition can be adjusted according to the irradiation angle of the laser, and it was designed with the mountability of the entire system in mind to prevent interference with the sealer ejection machine. In addition, a vision-signal-processing module based on the Zynq-7020 FPGA chip was developed to improve the processing speed of the algorithm that converts the sealer-shape image acquired by the 2D camera into a profile and calculates the width and height of the sealer from that profile. The performance of the developed sealer-application inspection system was verified in an experimental environment identical to an actual automobile production line, and the experimental results confirmed inspection performance that satisfies the requirements of automotive field standards.
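
Computing bead width and height from a single laser-line height profile can be sketched as follows (illustrative Python; the flat baseline, the detection threshold, and the per-column pitch are assumed parameters, not the paper's calibrated values):

```python
import numpy as np

def sealer_width_height(profile: np.ndarray, base_level: float,
                        height_thresh: float, pitch_mm: float):
    """Bead width (mm) and peak height above the glass surface from one
    laser-line height profile (one height sample per camera column)."""
    above = profile - base_level > height_thresh
    if not above.any():
        return 0.0, 0.0  # no bead detected in this profile
    idx = np.nonzero(above)[0]
    width_mm = (idx[-1] - idx[0] + 1) * pitch_mm
    height = float(profile.max() - base_level)
    return width_mm, height
```

On the FPGA, this per-profile computation would be repeated for every captured laser line as the glass moves past the sensor, yielding width/height measurements along the whole bead.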