• Title/Abstract/Keyword: Camera Performance

1,814 search results (processing time: 0.033 s)

비 가시 환경에서의 LRF와 CCD 카메라의 성능비교 (Performance Comparison of the LRF and CCD Camera under Non-Visibility (Dense Aerosol) Environments)

  • 조재완;최영수;정경민
    • 제어로봇시스템학회논문지, Vol. 22 No. 5, pp. 367-373, 2016
  • In this paper, the range-measurement performance of an LRF (Laser Range Finder) module and the image contrast of a color CCD camera are evaluated in an aerosol (high-temperature steam) environment that simulates severe-accident conditions in an LWR (Light-Water Reactor) nuclear power plant. LRF and color CCD camera data are key information needed to implement the SLAM (Simultaneous Localization and Mapping) function of an emergency-response robot system that copes with urgent accidents at a nuclear power plant.
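Image-contrast degradation in such aerosol tests is often quantified with a simple metric such as Michelson contrast. The sketch below is a generic illustration of that idea; the metric and intensity values are assumptions, not the paper's actual data:

```python
def michelson_contrast(pixels):
    """Michelson contrast: (Imax - Imin) / (Imax + Imin)."""
    lo, hi = min(pixels), max(pixels)
    return 0.0 if lo + hi == 0 else (hi - lo) / (hi + lo)

clear_scene = [10, 240, 15, 230]    # sharp scene: wide intensity range
steam_scene = [110, 140, 115, 135]  # aerosol scatters light, flattening the range
print(michelson_contrast(clear_scene))  # 0.92
print(michelson_contrast(steam_scene))  # 0.12
```

Dense steam compresses the intensity range toward a gray mean, which is why the contrast metric collapses even though the camera itself is unchanged.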

선형 CCD를 이용한 MTF 방법에 의한 카메라 렌즈 초점거리의 측정 및 보정 시스템 개발 (Development of a Measurement and Compensation System for Camera Lens Focal Length Using the MTF Method with a Linear CCD)

  • 박희재;이석원;김왕도
    • 한국정밀공학회지, Vol. 15 No. 8, pp. 71-80, 1998
  • A computer-aided system has been developed for focal-length measurement and compensation in camera manufacturing. Signal data proportional to light intensity are acquired and sampled very rapidly from the line CCD. From the measured signal the MTF is calculated, where the MTF is the ratio of the modulation of the output image to that of the input image. To find the optimum MTF, an efficient algorithm based on the least-squares technique has been implemented. The developed system has been applied to a practical camera manufacturing process and demonstrated high productivity with high precision.

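The through-focus search described above (measure the MTF at several lens positions, then locate the peak with a least-squares fit) can be sketched with a parabolic-peak interpolation; the focus positions and MTF values below are invented for illustration:

```python
def peak_focus(positions, mtf_values):
    """Least-squares parabola through the best MTF sample and its two
    neighbours; returns the interpolated focus position of the peak."""
    i = max(range(1, len(mtf_values) - 1), key=lambda k: mtf_values[k])
    y0, y1, y2 = mtf_values[i - 1], mtf_values[i], mtf_values[i + 1]
    denom = y0 - 2 * y1 + y2
    if denom == 0:
        return positions[i]              # flat top: no interpolation possible
    offset = 0.5 * (y0 - y2) / denom     # vertex offset in units of the step
    step = positions[1] - positions[0]   # assumes uniformly spaced samples
    return positions[i] + offset * step

# MTF sampled at five focus positions (mm); the true peak sits at 2.3 mm
pos = [0.0, 1.0, 2.0, 3.0, 4.0]
mtf = [0.9 - 0.1 * (p - 2.3) ** 2 for p in pos]
print(peak_focus(pos, mtf))
```

Sampling coarsely and interpolating the vertex keeps the measurement fast, which matters on a production line where every lens is tested.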

무인항공기 임무장비용 압전 마운트 시스템의 진동 제어 성능 평가 (Evaluation of Vibration Control Performance of Camera Mount System for UAV)

  • 오종석;손정우;최승복
    • 한국소음진동공학회논문집, Vol. 19 No. 12, pp. 1315-1321, 2009
  • In the present work, the vibration control performance of an active camera mount system for an unmanned aerial vehicle (UAV) is evaluated. An active mount featuring an inertial-type piezostack actuator is designed and manufactured, and its vibration control performance is evaluated experimentally. A camera mount system with four active mounts is constructed and a mechanical model is established. The governing equation of the camera mount system is obtained and the control model is formulated in state space. A sliding mode controller, which is inherently robust to external disturbance, is designed and implemented in the system. Vibration control performance is evaluated at each mount and at the center of gravity. Effective vibration suppression is obtained and presented in the time and frequency domains.
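A sliding mode controller of the kind described above can be sketched on a single-axis mount model; the mass, gains, boundary layer, and sinusoidal disturbance below are illustrative assumptions, not the paper's identified parameters:

```python
import math

def simulate(k=50.0, c=5.0, phi=0.05, dt=1e-4, steps=40000):
    """Single-axis active mount: m*x'' = u + d(t). A sliding-mode law with
    a saturation boundary layer drives the vibration displacement x to zero
    despite the unknown disturbance d. Returns |x| after the run."""
    m, x, v, t = 1.0, 0.01, 0.0, 0.0     # mass, displacement, velocity, time
    for _ in range(steps):
        d = 2.0 * math.sin(40.0 * t)             # base-excitation disturbance
        s = c * x + v                            # sliding surface s = c*e + e'
        u = -k * max(-1.0, min(1.0, s / phi))    # saturated switching control
        x += v * dt                              # semi-implicit Euler step
        v += (u + d) / m * dt
        t += dt
    return abs(x)

print(simulate())        # residual vibration with control
print(simulate(k=0.0))   # uncontrolled: disturbance drives the mount away
```

The boundary layer (`phi`) trades a little steady-state error for the chattering that a pure sign-function switching law would produce, a common practical compromise.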

화살 탄착점 측정을 위한 레이저 스캔 카메라 파라미터 보정 (Parameter Calibration of Laser Scan Camera for Measuring the Impact Point of Arrow)

  • 백경동;천성표;이인성;김성신
    • 한국생산제조학회지, Vol. 21 No. 1, pp. 76-84, 2012
  • This paper presents a system that measures an arrow's point of impact using a laser scan camera and describes the image calibration method. Calibration of the distorted image divides broadly into explicit and implicit methods. The explicit method works directly with the physical camera's optical properties and its parameter adjustment functionality, while the implicit method relies on a calibration plate that defines assumed relations between image pixels and target positions. To find the relation between image and target positions in the implicit method, we propose a performance-criteria-based polynomial model that overcomes limitations of conventional image calibration models, such as over-fitting. The proposed method is verified with 2D arrow positions captured by a SICK Ranger-D50 laser scan camera.
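The implicit, calibration-plate approach amounts to fitting a polynomial from image pixels to target positions and scoring it on held-out plate points so the degree is not over-fitted. A 1-D sketch under that reading, with a made-up pixel-to-position mapping (the paper's actual model and criteria are not reproduced here):

```python
def polyfit(xs, ys, deg):
    """Least-squares polynomial fit via the normal equations, solved by
    Gaussian elimination with partial pivoting. coef[i] multiplies x**i."""
    n = deg + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for i in reversed(range(n)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

def evalp(coef, x):
    return sum(c * x ** i for i, c in enumerate(coef))

def holdout_error(coef, held_out):
    """Performance criterion: squared error on plate points the fit never
    saw. Error that rises with degree signals over-fitting."""
    return sum((evalp(coef, px) - pos) ** 2 for px, pos in held_out)

# hypothetical pixel -> position mapping with a mild quadratic distortion
train = [(px, 2.0 + 0.5 * px + 0.001 * px * px) for px in range(0, 12, 2)]
coef = polyfit([p for p, _ in train], [q for _, q in train], 2)
print(coef)
```

Scoring candidate degrees by `holdout_error` rather than training error is what keeps the polynomial from chasing measurement noise on the plate.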

A Vehicle Recognition Method based on Radar and Camera Fusion in an Autonomous Driving Environment

  • Park, Mun-Yong;Lee, Suk-Ki;Shin, Dong-Jin
    • International journal of advanced smart convergence, Vol. 10 No. 4, pp. 263-272, 2021
  • At a time when driving safety is paramount in the development and commercialization of autonomous vehicles, AI and big-data-based algorithms are being studied to enhance and optimize the recognition and detection of various static and dynamic vehicles. Many studies recognize a vehicle as the same target by exploiting the complementary strengths of radar and cameras, but they either do not use deep-learning image processing or, owing to radar performance limits, associate targets only at short range. Radar can detect vehicles reliably in conditions such as night and fog, but classifying the object type from RCS values alone is inaccurate, so the object must also be classified from camera images. We therefore propose a fusion-based vehicle recognition method that builds the data sets collectable by the radar and camera devices, computes the error between the data sets, and recognizes matching detections as the same target.
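The "compute the error between the data sets and declare the same target" step is essentially gated nearest-neighbour association. A minimal sketch with invented coordinates in a common vehicle frame (the paper's actual gating threshold and coordinate frames are not specified here):

```python
import math

def fuse(radar, camera, gate=2.0):
    """Greedy nearest-neighbour association of radar and camera detections.
    Each detection is an (x, y) position in metres; a radar/camera pair whose
    Euclidean error is below `gate` is declared the same target."""
    pairs, used = [], set()
    for i, r in enumerate(radar):
        best_j, best_d = None, gate
        for j, c in enumerate(camera):
            if j in used:
                continue
            d = math.hypot(r[0] - c[0], r[1] - c[1])
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None:           # within the gate: same physical vehicle
            used.add(best_j)
            pairs.append((i, best_j))
    return pairs

radar  = [(10.2, 0.1), (35.0, -3.9), (80.0, 5.0)]   # radar sees three vehicles
camera = [(34.1, -4.2), (10.0, 0.0)]                # camera confirms two of them
print(fuse(radar, camera))  # [(0, 1), (1, 0)]
```

The unmatched radar detection at 80 m illustrates the complementarity the abstract describes: radar ranges far beyond the camera, while matched pairs inherit the camera's class label.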

신경회로망을 이용한 카메라 보정기법 개발 (Development of Camera Calibration Technique Using Neural Network)

  • 한성현;왕한홍;장영희
    • 제어로봇시스템학회:학술대회논문집, 제어로봇시스템학회 1997년도 한국자동제어학술회의논문집 (한국전력공사 서울연수원, 17-18 Oct. 1997), pp. 1617-1620, 1997
  • This paper describes neural-network-based camera calibration with a camera model that accounts for the major sources of camera distortion: radial, decentering, and thin-prism distortion. Radial distortion causes an inward or outward displacement of a given image point from its ideal location. Actual optical systems are subject to various degrees of decentering; that is, the optical centers of the lens elements are not strictly collinear. Thin-prism distortion arises from imperfections in lens design and manufacturing as well as in camera assembly. Our purpose is to develop a vision system for pattern recognition and automatic testing of parts and to apply it to the manufacturing line. The performance of the proposed camera calibration is illustrated by simulation and experiment.

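The three distortion terms named in the abstract (radial, decentering, thin prism) are commonly written in Brown-Conrady form. The sketch below applies them to a normalized image point with illustrative coefficients; in the paper a neural network learns this mapping rather than using fixed coefficients:

```python
def distort(x, y, k1=0.0, p1=0.0, p2=0.0, s1=0.0, s2=0.0):
    """Apply radial (k1), decentering (p1, p2), and thin-prism (s1, s2)
    distortion to an ideal normalized image point (x, y)."""
    r2 = x * x + y * y                                   # squared radius
    xd = x * (1 + k1 * r2) + 2 * p1 * x * y + p2 * (r2 + 2 * x * x) + s1 * r2
    yd = y * (1 + k1 * r2) + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y + s2 * r2
    return xd, yd

# negative k1 (barrel distortion) pulls the point inward toward the axis
print(distort(0.1, 0.2, k1=-0.2))
```

Note how the radial term scales with distance from the optical axis, matching the abstract's "inward or outward displacement", while the decentering terms are asymmetric in x and y because the lens centers are off-axis.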

사각형 특징 기반 Visual SLAM을 위한 자세 추정 방법 (A Camera Pose Estimation Method for Rectangle Feature based Visual SLAM)

  • 이재민;김곤우
    • 로봇학회논문지, Vol. 11 No. 1, pp. 33-40, 2016
  • In this paper, we propose a method for estimating the pose of the camera using a rectangle feature utilized for visual SLAM. A rectangle feature, warped into a quadrilateral in the image by perspective transformation, is reconstructed by the Coupled Line Camera algorithm. To fully reconstruct the rectangle in real-world coordinates, the distance between the feature and the camera is needed; this distance can be measured with a stereo camera. Using properties of the line camera, the physical size of the rectangle feature can be derived from the distance. The correspondence between the quadrilateral in the image and the rectangle in real-world coordinates then recovers the relative pose between the camera and the feature by obtaining the homography. To evaluate performance, we compared the result of the proposed method with its reference pose in the Gazebo robot simulator.
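The distance-to-size step above (measure depth with the stereo camera, then recover the rectangle's physical extent through the pinhole model) reduces to two similar-triangle relations. The focal length, baseline, and disparity below are invented for illustration, not the paper's calibration:

```python
def stereo_depth(baseline_m, focal_px, disparity_px):
    """Depth from a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def physical_width(width_px, depth_m, focal_px):
    """Pinhole back-projection of an image extent: W = Z * w / f."""
    return depth_m * width_px / focal_px

Z = stereo_depth(baseline_m=0.12, focal_px=600.0, disparity_px=24.0)
W = physical_width(width_px=80.0, depth_m=Z, focal_px=600.0)
print(Z, W)   # 3.0 m depth, 0.4 m rectangle width
```

With the physical size known, the image quadrilateral and the metric rectangle form a 2D-3D correspondence from which the homography, and hence the relative pose, follows.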

뉴럴네트워크를 이용한 카메라 보정기법 개발 (Development of Camera Calibration Technique Using Neural-Network)

  • 장영희
    • 한국공작기계학회:학술대회논문집, 한국공작기계학회 1997년도 추계학술대회 논문집, pp. 225-229, 1997
  • This paper describes neural-network-based camera calibration with a camera model that accounts for the major sources of camera distortion: radial, decentering, and thin-prism distortion. Radial distortion causes an inward or outward displacement of a given image point from its ideal location. Actual optical systems are subject to various degrees of decentering; that is, the optical centers of the lens elements are not strictly collinear. Thin-prism distortion arises from imperfections in lens design and manufacturing as well as in camera assembly. Our purpose is to develop a vision system for pattern recognition and automatic testing of parts and to apply it to the manufacturing line. The performance of the proposed camera calibration is illustrated by simulation and experiment.


컬러 정보를 이용한 무인항공기에서 실시간 이동 객체의 카메라 추적 (The Camera Tracking of Real-Time Moving Object on UAV Using the Color Information)

  • 홍승범
    • 한국항공운항학회지, Vol. 18 No. 2, pp. 16-22, 2010
  • This paper proposes a real-time moving-object tracking system for a UAV using color information. Previous object-tracking work has focused on recognizing one or more moving objects with a fixed camera, including objects in complex background environments. This paper instead implements moving-object tracking using the camera's pan/tilt function after extracting the object's region. First, the system detects the moving object with an RGB/HSI color model and obtains the object's coordinates in the acquired image using a compact bounding box. Second, the camera origin is aligned to the top-left coordinate of the compact bounding box, and the moving object is tracked using the camera's pan/tilt function. The system is implemented with LabVIEW 8.6 and NI Vision Builder AI from National Instruments, and it shows good camera-tracking performance in a laboratory environment.
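The two steps above (color-threshold the frame, box the matching pixels, then steer pan/tilt toward the box's top-left corner) can be sketched as follows; the color ranges and the toy 4x4 frame are invented, and real frames would of course come from the camera:

```python
def track_step(frame, lower, upper):
    """One tracking step: threshold an RGB frame against a color range, take
    the compact bounding box of matching pixels, and return the pan/tilt
    correction (in pixels) that moves the camera origin toward the box's
    top-left corner. Returns None if no pixel matches (target lost)."""
    mask = [[all(lo <= ch <= hi for ch, lo, hi in zip(px, lower, upper))
             for px in row] for row in frame]
    xs = [c for row in mask for c, v in enumerate(row) if v]
    ys = [r for r, row in enumerate(mask) if any(row)]
    if not xs:
        return None
    return min(xs), min(ys)        # (pan, tilt) offsets toward the target

# 4x4 toy frame: black background (B) with two red target pixels (R)
B, R = (0, 0, 0), (200, 10, 10)
frame = [[B, B, B, B],
         [B, B, R, B],
         [B, B, B, R],
         [B, B, B, B]]
print(track_step(frame, lower=(150, 0, 0), upper=(255, 50, 50)))  # (2, 1)
```

Repeating this per frame and feeding the offsets to the pan/tilt servos closes the loop that keeps the moving object in view.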

Offline Camera Movement Tracking from Video Sequences

  • Dewi, Primastuti;Choi, Yeon-Seok;Cha, Eui-Young
    • 한국정보통신학회:학술대회논문집, 한국해양정보통신학회 2011년도 춘계학술대회, pp. 69-72, 2011
  • In this paper, we propose a method to track camera movement from video sequences. The method is useful for video analysis and can be applied as a pre-processing step in applications such as video stabilization and markerless augmented reality. First, we extract the features in each frame using corner-point detection. The features in the current frame are then compared with the features in adjacent frames to calculate the optical flow, which represents the relative movement of the camera. The optical flow is analyzed to obtain the camera movement parameters, and the final step is camera movement estimation and correction to increase accuracy. Performance is verified by generating a 3D map of the camera movement and embedding a 3D object into the video. The examples demonstrated in this paper show that the method has high accuracy and rarely produces jitter.

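The flow-to-motion step in the pipeline above can be sketched as averaging the displacement of matched corner features between consecutive frames; the point coordinates are invented, and a real implementation would add outlier rejection before averaging:

```python
def camera_motion(prev_pts, curr_pts):
    """Estimate the global camera translation between two frames as the
    negated mean optical flow of matched feature points (a static scene
    appears to move opposite to the camera)."""
    n = len(prev_pts)
    mean_dx = sum(c[0] - p[0] for p, c in zip(prev_pts, curr_pts)) / n
    mean_dy = sum(c[1] - p[1] for p, c in zip(prev_pts, curr_pts)) / n
    return -mean_dx, -mean_dy

prev_pts = [(0.0, 0.0), (10.0, 5.0), (3.0, 8.0)]           # corners in frame t
curr_pts = [(x + 2.0, y - 1.0) for x, y in prev_pts]       # same corners in frame t+1
print(camera_motion(prev_pts, curr_pts))   # (-2.0, 1.0)
```

Accumulating these per-frame estimates over the whole sequence yields the camera trajectory that the paper visualizes as a 3D movement map.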