• Title/Abstract/Keyword: Camera sensor

Search results: 1,271 (processing time 0.043 s)

Implementation of a High-Speed Camera System Using an FPGA

  • 박세훈;신윤수;오태석;김일환
    • Proceedings of the KIEE Conference / KIEE 2008 39th Summer Conference / pp.1935-1936 / 2008
  • This paper presents the implementation of a high-speed camera that uses a CMOS image sensor to acquire high-speed images. Image sensors are divided into CCD and CMOS types; compared with a CCD image sensor, a CMOS image sensor consumes less power and allows miniaturization because peripheral circuits can be integrated on the chip. High-speed cameras are widely used in the automotive field (crash tests, airbag control), in sports (golf swing correction), and in defense (ballistic trajectory measurement), among other areas. The high-speed camera system implemented in this paper uses a CMOS image sensor to acquire about 500 frames per second at a resolution of 1280 x 1024. The system consists of an FPGA and DDR2 memory that control the CMOS image sensor and store the acquired images, a Camera Link module that transfers the stored data to a PC, and an RS-422 communication function that lets the PC control the camera.

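A back-of-the-envelope data-rate estimate (not from the paper) shows why on-board DDR2 buffering behind the FPGA is needed before the Camera Link transfer. The sketch below uses the resolution and frame rate quoted in the abstract; the 10-bit pixel depth is an assumed value.

```python
# Rough data-rate estimate for the high-speed camera described above.
# Resolution and frame rate come from the abstract; the 10-bit pixel depth is an
# ASSUMPTION for a typical high-speed CMOS sensor, not a figure from the paper.

width, height = 1280, 1024      # sensor resolution (from the abstract)
frames_per_second = 500         # acquisition rate (from the abstract)
bits_per_pixel = 10             # ASSUMPTION: raw output depth of the CMOS sensor

pixels_per_frame = width * height
bits_per_second = pixels_per_frame * bits_per_pixel * frames_per_second

print(f"pixels per frame : {pixels_per_frame:,}")
print(f"raw data rate    : {bits_per_second / 1e9:.2f} Gbit/s "
      f"({bits_per_second / 8 / 1e9:.2f} GB/s)")
# At roughly 0.8 GB/s the raw stream is too fast to push straight to a PC, which is
# why frames are first buffered in DDR2 and then forwarded over Camera Link.
```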

A Study on an Intelligent Robot Bin-Picking System with a CCD Camera and a Laser Sensor

  • 김진대;이재원;신찬배
    • Journal of the Korean Society for Precision Engineering / Vol. 23, No. 11 / pp.58-67 / 2006
  • Because of the variety of signal processing involved and the complicated mathematical analysis required, 3D bin-picking with a non-contact sensor is not easy. Solving these difficulties calls for a reliable signal-processing algorithm and a good sensing device. In this research, a 3D laser scanner and a CCD camera are applied as sensing devices. With these sensors we develop a two-step bin-picking method and a reliable algorithm for recognizing 3D bin objects. In the proposed bin-picking, the problem is first reduced to 2D initial recognition with the CCD camera, followed by 3D pose detection with the laser scanner. To obtain accurate motion in the robot base frame, hand-eye calibration between the robot's end effector and the sensing device must also be carried out. In this paper, we examine an auto-calibration technique in the sensor calibration step. A new thinning algorithm and a constrained Hough transform are also studied for robustness in real-environment use. The experimental results show robust bin-picking operation on non-aligned 3D hole objects.
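The first stage of the pipeline above is 2D recognition of the hole object in the CCD image, for which the paper studies a constrained Hough transform. The sketch below runs OpenCV's ordinary (unconstrained) Hough circle transform on a synthetic frame as a stand-in; it illustrates the general idea, not the paper's algorithm.

```python
import cv2
import numpy as np

# Illustrative stand-in for the 2D recognition step: detect a circular hole with the
# standard Hough circle transform. The synthetic image below replaces a real CCD frame.
img = np.full((240, 320), 80, dtype=np.uint8)       # grey background
cv2.circle(img, (160, 120), 50, 200, thickness=6)   # bright ring = rim of the hole

blurred = cv2.medianBlur(img, 5)
circles = cv2.HoughCircles(
    blurred, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
    param1=100, param2=30, minRadius=20, maxRadius=80)

if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"candidate hole: centre=({x}, {y}), radius={r} px")
# The detected 2D centre would then seed the laser-scanner stage that recovers the
# full 3D pose of the object.
```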

Degradation of a CCD Camera Lens Caused by High Dose-Rate Gamma Irradiation

  • 조재완;이준구;허섭;구인수;홍석붕
    • The Transactions of the Korean Institute of Electrical Engineers / Vol. 58, No. 7 / pp.1450-1455 / 2009
  • Assuming that an IPTV camera system is used as an ad-hoc sensor for the surveillance and diagnostics of safety-critical equipment installed in the containment building of a nuclear power plant, a major problem is the presence of high dose-rate gamma irradiation fields inside the building. To use an IPTV camera in such an intense gamma radiation environment, the radiation-weakened devices, including the CCD imaging sensor, FPGA, ASIC, and microprocessors, must be properly shielded from the high dose-rate gamma radiation with a high-density material such as lead or tungsten. However, the passive elements such as the mirror, lens, and window, which lie in the optical path of the CCD imaging sensor, are exposed directly to the high dose-rate gamma-ray source, so the gamma-irradiation characteristics of these passive elements need to be tested. A CCD camera lens made of glass was gamma irradiated at a dose rate of 4.2 kGy/h for an hour, up to a total dose of 4 kGy. The radiation-induced color center in the glass lens is observed, and the performance degradation of the gamma-irradiated lens is explained using a color-component analysis.
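The degradation of the irradiated lens is reported through a colour-component analysis. A minimal sketch of that kind of comparison is given below: it simply contrasts per-channel means of frames captured through the lens before and after irradiation. The file names are placeholders and the procedure is an assumption, not the paper's exact method.

```python
import cv2

def channel_means(path):
    """Return the mean (B, G, R) values of an image, or None if it cannot be read."""
    img = cv2.imread(path, cv2.IMREAD_COLOR)
    if img is None:
        return None
    return img.reshape(-1, 3).mean(axis=0)

before = channel_means("lens_before_irradiation.png")  # placeholder file name
after = channel_means("lens_after_irradiation.png")    # placeholder file name

if before is not None and after is not None:
    for name, b, a in zip(("blue", "green", "red"), before, after):
        print(f"{name:>5}: {b:6.1f} -> {a:6.1f} (shift {a - b:+.1f})")
    # A radiation-induced colour centre darkens the glass unevenly across wavelengths,
    # which shows up here as unequal per-channel drops in the transmitted image.
```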

An Efficient Object Tracking System Using the Fusion of a CCD Camera and an Infrared Camera

  • 김승훈;정일균;박창우;황정훈
    • Journal of Institute of Control, Robotics and Systems / Vol. 17, No. 3 / pp.229-235 / 2011
  • To build a robust object tracking and identification system for an intelligent robot and/or home system, heterogeneous sensor fusion between a visible-ray system and an infrared-ray system is proposed. The proposed system separates the object by combining the ROIs (Regions of Interest) estimated from two different images, based on a heterogeneous sensor that consolidates an ordinary CCD camera and an IR (infrared) camera. The human body and face are detected in both images using different algorithms, such as a histogram, optical flow, a skin-color model, and a Haar model. The pose of the human body is also estimated from the body-detection result in the IR image using the PCA algorithm together with the AdaBoost algorithm. The results from each detection algorithm are then fused to extract the best detection result. To verify the heterogeneous sensor fusion system, experiments were carried out in various environments. The experimental results show good tracking and identification performance regardless of environmental changes. The application area of the proposed system is not limited to robots or home systems; it also extends to surveillance and military systems.
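Among the detectors listed above, the Haar model is the one most directly available off the shelf. The sketch below uses OpenCV's bundled frontal-face Haar cascade as a stand-in for the visible-ray face detector; the IR-side detection, the PCA/AdaBoost pose estimation, and the fusion logic are not reproduced, and the input file name is a placeholder.

```python
import cv2

# Stand-in for the visible-ray (CCD) face detector using OpenCV's bundled Haar cascade.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

frame = cv2.imread("ccd_frame.png")          # placeholder for one CCD camera frame
if frame is not None and not face_cascade.empty():
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        print(f"face ROI: x={x}, y={y}, w={w}, h={h}")
    # In the fusion system each visible-ray ROI would be cross-checked against the
    # region found in the corresponding IR image before a final detection is accepted.
```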

Fusion System of a Time-of-Flight Sensor and Stereo Cameras Considering Single Photon Avalanche Diode and Convolutional Neural Network Characteristics

  • 김동엽;이재민;전세웅
    • The Journal of Korea Robotics Society / Vol. 13, No. 4 / pp.230-236 / 2018
  • 3D depth perception has played an important role in robotics, and many sensing methods have been proposed for it. As a photodetector for 3D sensing, the single photon avalanche diode (SPAD) is suggested because of its sensitivity and accuracy. We have researched applying a SPAD chip in our fusion system of a time-of-flight (ToF) sensor and a stereo camera. Our goal is to upsample the SPAD resolution using an RGB stereo camera. Our SPAD ToF sensor currently has a resolution of 64 x 32, whereas higher-resolution depth sensors such as the Kinect V2 and Cube-Eye exist. This may be a weak point of our system, but we exploit the gap through a shift in approach. A convolutional neural network (CNN) is designed to upsample our low-resolution depth map, using data from the higher-resolution depth sensor as label data. The depth data upsampled by the CNN and the stereo-camera depth data are then fused using the semi-global matching (SGM) algorithm. We propose a simplified fusion method designed for an embedded system.
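The stereo branch of the system above is fused with semi-global matching (SGM). The sketch below computes a disparity map with OpenCV's StereoSGBM as a stand-in for that branch; the CNN that upsamples the 64 x 32 SPAD depth map and the final fusion step are not reproduced, and the image names are placeholders.

```python
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

if left is not None and right is not None:
    block = 5
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=64,          # must be a multiple of 16
        blockSize=block,
        P1=8 * block * block,       # smoothness penalties in the usual OpenCV form
        P2=32 * block * block,
    )
    # StereoSGBM returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(left, right).astype(float) / 16.0
    print("disparity range:", disparity.min(), "to", disparity.max())
    # This disparity map would then be merged with the CNN-upsampled SPAD depth map.
```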

An Obstacle Detection and Avoidance Method for Mobile Robot Using a Stereo Camera Combined with a Laser Slit

  • Kim, Chul-Ho;Lee, Tai-Gun;Park, Sung-Kee;Kim, Jai-Hie
    • Proceedings of the ICROS Conference / ICCAS 2003 / pp.871-875 / 2003
  • Detecting and avoiding obstacles is one of the important tasks of mobile navigation. In a real environment, when a mobile robot encounters dynamic obstacles, it must detect and avoid them simultaneously to keep its body safe. In previous vision systems, a mobile robot used the camera as either a passive sensor or an active sensor. This paper proposes a new obstacle detection algorithm that uses a stereo camera as both a passive and an active sensor. Our system estimates the distances to obstacles by both passive correspondence and active correspondence using a laser slit. The system operates in three steps. First, a far-off obstacle is detected from the disparity obtained by stereo correspondence. Next, a close obstacle is acquired from the laser slit beam projected in the same stereo image. Finally, we implement an obstacle avoidance algorithm that adopts a modified Dynamic Window Approach (DWA) using the acquired obstacle distances.

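The passive branch above turns stereo disparity into distance through the standard rectified-pinhole relation Z = f·B/d. The worked example below uses assumed focal-length and baseline values, which are not taken from the paper.

```python
# Depth from stereo disparity for a rectified camera pair: Z = f * B / d.
focal_length_px = 700.0   # ASSUMPTION: focal length in pixels
baseline_m = 0.12         # ASSUMPTION: distance between the two cameras in metres

def depth_from_disparity(disparity_px):
    """Distance in metres of a point whose stereo disparity is disparity_px pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_length_px * baseline_m / disparity_px

for d in (40.0, 20.0, 10.0, 5.0):
    print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(d):5.2f} m")
# Close obstacles give large disparities while far ones give small, noisy disparities,
# which is why the near range is handled by the actively projected laser slit instead.
```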

Guidance of a Mobile Robot for the Inspection of Pipes

  • 정규원
    • Proceedings of the KSMTE Conference / 2002 Spring Conference / pp.480-485 / 2002
  • The purpose of this paper is the development of a guidance algorithm for a mobile robot used to acquire the position and state of pipe defects such as cracks, damage, and through-holes. The data used by the algorithm are range data obtained by a range sensor based on optical triangulation. The sensor, which consists of a laser slit beam (LSB) and a CCD camera, measures the 3D profile of the pipe's inner surface. After the range sensor is mounted on the robot, the robot is placed inside a pipe. While the camera and the LSB sensor part rotate about the robot axis, the laser slit beam is projected onto the inner surface of the pipe and the CCD camera captures the image. From the images, range data are obtained with respect to the sensor coordinate frame through a series of image-processing steps and application of the sensor matrix. After the data are transformed into the robot coordinate frame, the position and orientation of the robot are obtained in order to guide the robot. In addition, by analyzing the data, the 3D shape of the pipe is reconstructed and numerical data on the pipe defects can be extracted. These data will be used for pipe maintenance and service.

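Points triangulated in the sensor frame are carried into the robot frame with a homogeneous transform before the robot's pose in the pipe is estimated. The sketch below shows that single step with assumed values for the sensor-head rotation and its offset on the robot; it is not the paper's calibration.

```python
import numpy as np

theta = np.deg2rad(30.0)                     # ASSUMPTION: rotation of the sensor head about the robot axis
sensor_offset = np.array([0.05, 0.0, 0.10])  # ASSUMPTION: sensor origin in the robot frame (metres)

# Rotation about the robot's x-axis (the axis the camera/LSB head spins around).
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])

T = np.eye(4)                 # homogeneous sensor-to-robot transform
T[:3, :3] = R
T[:3, 3] = sensor_offset

# One triangulated point on the pipe wall, expressed in the sensor frame (metres).
p_sensor = np.array([0.00, 0.08, 0.30, 1.0])
p_robot = T @ p_sensor
print("point in robot frame:", np.round(p_robot[:3], 3))
# Repeating this for every profile point yields the 3D pipe surface in robot coordinates.
```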

A Laser Vision System for the High-Speed Measurement of Hole Positions

  • 노영식;서영수;최원태
    • Proceedings of the KIEE Conference / KIEE 2006 Symposium, Information and Control Division / pp.333-335 / 2006
  • In this paper, we develop an inspection system for automobile parts using a laser vision sensor. The laser vision sensor obtains two-dimensional information from the vision camera and the third dimension from the laser. A jig and a robot are used to move the sensor between inspection positions. A computer-integrated system is also developed to control the system components and to manage the measurement data. The sensor measurement results are compared with the CAD data, and the effectiveness of the measurement results is verified by using the CAD model to obtain information about the measured object.

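The last step described above checks the measured hole positions against the CAD nominals. A minimal version of that comparison is sketched below; the coordinates and the 0.5 mm tolerance are placeholder values, not figures from the paper.

```python
import numpy as np

cad_holes = np.array([[100.0, 50.0],   # nominal hole positions from CAD (mm)
                      [150.0, 50.0],
                      [150.0, 90.0]])
measured = np.array([[100.2, 49.9],    # positions reported by the laser vision sensor (mm)
                     [149.4, 50.3],
                     [150.1, 90.6]])
tolerance_mm = 0.5                     # ASSUMPTION: acceptance tolerance

deviations = np.linalg.norm(measured - cad_holes, axis=1)
for i, dev in enumerate(deviations):
    status = "OK" if dev <= tolerance_mm else "OUT OF TOLERANCE"
    print(f"hole {i}: deviation {dev:.2f} mm [{status}]")
```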

A Study on High-Speed 3D Measurement and Modeling Using a Multi-Line Laser Vision Sensor

  • 성기은;이세헌
    • Proceedings of the KSPE Conference / 2002 Spring Conference / pp.169-172 / 2002
  • A vision sensor measures range data using a laser light source. Such a sensor generally uses a patterned laser shaped as a single line, but this type of vision sensor cannot satisfy the new trend toward faster and more precise processing. The sensor's sampling rate increases as the image-processing time is reduced; however, the sampling rate cannot exceed 30 fps because the camera has a mechanical sampling limit. If a multi-line laser pattern is used, multiple range profiles can be measured in one image. With a camera of the same sampling rate, the number of 2D range profiles obtained per second is directly proportional to the number of laser lines. For example, a vision sensor using 5 laser lines can sample 150 profiles per second under the best conditions.

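The throughput figure in the abstract is a direct product of the camera frame rate and the number of projected lines; the quick check below reproduces the quoted 150 profiles per second.

```python
# Profile rate of the multi-line sensor: camera frame rate x number of laser lines.
camera_fps = 30                     # mechanical sampling limit quoted in the abstract
for num_lines in (1, 3, 5):
    print(f"{num_lines} laser line(s): {camera_fps * num_lines} range profiles per second")
# With 5 lines: 30 fps x 5 = 150 profiles/s, the best-case figure given above.
```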

High-Speed Seam Tracking Using a Multi-Line Laser Vision Sensor

  • 성기은;이세헌
    • Proceedings of the KSPE Conference / 2002 Fall Conference / pp.584-587 / 2002
  • A vision sensor measures range data using a laser light source. Such a sensor generally uses a patterned laser shaped as a single line, but this type of vision sensor cannot satisfy the new trend toward faster and more precise processing. The sensor's sampling rate increases as the image-processing time is reduced; however, the sampling rate cannot exceed 30 fps because the camera has a mechanical sampling limit. If a multi-line laser pattern is used, multiple range profiles can be measured in one image. With a camera of the same sampling rate, the number of 2D range profiles obtained per second is directly proportional to the number of laser lines. For example, a vision sensor using 5 laser lines can sample 150 profiles per second under the best conditions.

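With a multi-line pattern, each image column contains one bright peak per projected line, and every peak becomes one range sample for seam tracking. The sketch below extracts those peaks from a synthetic intensity column; it is a generic illustration under assumed band positions, not the paper's extraction method.

```python
import numpy as np

num_lines = 5
rows = np.arange(480)                                  # synthetic image column (480 rows)
true_rows = np.array([80, 170, 260, 350, 430])         # where the 5 laser lines fall
column = np.zeros(480)
for r in true_rows:
    column += 255.0 * np.exp(-0.5 * ((rows - r) / 2.0) ** 2)   # narrow bright stripe

# Take the brightest pixel inside each horizontal band as that line's position.
band_edges = np.linspace(0, 480, num_lines + 1).astype(int)
peaks = [int(np.argmax(column[a:b])) + a for a, b in zip(band_edges[:-1], band_edges[1:])]
print("detected laser-line rows in this column:", peaks)
# Converting each row index to a depth via triangulation gives 5 range samples per
# column, i.e. 5 profiles per captured frame.
```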