• Title/Summary/Keyword: CCD image sensor


The design of 4S-Van for implementation of ground-laser mapping system (지상 레이져 매핑시스템 구현을 위한 4S-Van 시스템 설계)

  • 김성백;이승용;김민수
    • Spatial Information Research
    • /
    • v.10 no.3
    • /
    • pp.407-419
    • /
    • 2002
  • In this study, the design of the 4S-Van system is discussed for the implementation of a ground-laser mapping system. The laser device is a fast and accurate sensor that acquires 3D road and surface data. The orientation of the laser sensor is determined by loosely coupled (D)GPS/INS integration. Considering the current system architecture, (D)GPS/INS integration is performed for performance analysis of direct georeferencing, and self-calibration is performed for interior and exterior orientation and displacement. We utilized three laser sensors for compensation and performance improvement. The 3D surface data from the laser scanner and texture images from the CCD camera can be used to implement 3D visualization (a brief direct-georeferencing sketch follows this entry).

  • PDF
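
The direct-georeferencing step mentioned in the abstract can be summarized in one equation: a laser return measured in the sensor frame is rotated into the navigation frame using the INS attitude and added to the GPS/INS position. The Python sketch below is a minimal illustration under assumed conventions (Z-Y-X Euler angles, a hypothetical lever arm and boresight matrix); it is not the 4S-Van implementation.

```python
import numpy as np

def rot_zyx(roll, pitch, yaw):
    """Rotation matrix from body to navigation frame (Z-Y-X Euler angles, radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def georeference(p_ins, rpy, lever_arm, r_boresight, laser_vec):
    """Map a laser return (sensor frame) to navigation-frame coordinates."""
    R_nb = rot_zyx(*rpy)                      # body -> navigation
    return p_ins + R_nb @ (lever_arm + r_boresight @ laser_vec)

# Example: a 25 m return straight ahead of the sensor (all values hypothetical).
p = georeference(np.array([100.0, 200.0, 50.0]),      # GPS/INS position
                 np.radians([1.0, -0.5, 30.0]),       # roll, pitch, yaw
                 np.array([0.5, 0.0, -1.2]),          # lever arm (body frame)
                 np.eye(3),                           # boresight alignment
                 np.array([25.0, 0.0, 0.0]))          # laser vector (sensor frame)
print(p)
```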

Weighted Edge Adaptive POCS Demosaicking Algorithm (Edge 가중치를 이용한 적응적인 POCS Demosaicking 알고리즘)

  • Park, Jong-Soo;Lee, Seong-Won
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.3
    • /
    • pp.46-54
    • /
    • 2008
  • Most commercial CCD/CMOS image sensors have a CFA (Color Filter Array) in which each pixel gathers light of a single color, in order to reduce sensor size and cost. Many algorithms have been proposed to reconstruct the original color image, adopting pattern recognition or regularization methods, to name a few. However, the resulting images still suffer from errors such as false color and the zipper effect. In this paper we propose an adaptive edge-weighted demosaicking algorithm based on POCS (Projection Onto Convex Sets), not only to improve the PSNR of the entire image but also to reduce the edge-region errors that affect subjective image quality. As a result, the proposed algorithm reconstructs better-quality images, especially in edge regions (a generic POCS sketch follows this entry).
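
The abstract describes an edge-weighted POCS demosaicking method. The sketch below shows only the generic POCS idea it builds on: alternate a smoothing step with a data-consistency projection that restores the measured CFA samples. The RGGB Bayer layout, the box-filter smoothing step, and the iteration count are assumptions for illustration, not the authors' edge-weighted algorithm.

```python
import numpy as np

def bayer_masks(h, w):
    """Boolean masks for an assumed RGGB Bayer pattern."""
    r = np.zeros((h, w), bool); g = np.zeros((h, w), bool); b = np.zeros((h, w), bool)
    r[0::2, 0::2] = True
    g[0::2, 1::2] = True; g[1::2, 0::2] = True
    b[1::2, 1::2] = True
    return r, g, b

def blur(x):
    """3x3 box filter used here as a crude smoothness (convex-set) step."""
    p = np.pad(x, 1, mode='edge')
    return sum(p[i:i + x.shape[0], j:j + x.shape[1]] for i in range(3) for j in range(3)) / 9.0

def pocs_demosaic(cfa, iters=20):
    """Generic POCS loop: smooth, then re-project the measured CFA samples."""
    h, w = cfa.shape
    masks = bayer_masks(h, w)
    planes = [cfa.copy() for _ in range(3)]          # rough initial estimate per channel
    for _ in range(iters):
        for k in range(3):
            planes[k] = blur(planes[k])              # smoothness step
            planes[k][masks[k]] = cfa[masks[k]]      # data-consistency projection
    return np.stack(planes, axis=-1)

rgb = pocs_demosaic(np.random.rand(64, 64))
print(rgb.shape)  # (64, 64, 3)
```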

Detecting and Restoring the Occlusion Area for Generating the True Orthoimage Using IKONOS Image (IKONOS 정사영상제작을 위한 폐색 영역의 탐지와 복원)

  • Seo Min-Ho;Lee Byoung-Kil;Kim Yong-Il;Han Dong-Yeob
    • Korean Journal of Remote Sensing
    • /
    • v.22 no.2
    • /
    • pp.131-139
    • /
    • 2006
  • IKONOS images have perspective geometry along the CCD sensor line, similar to aerial images with central perspective geometry, so occlusions caused by buildings, terrain, or other objects exist in the image. It is difficult to detect these occlusions with the RPCs (rational polynomial coefficients) used for ortho-rectification of the image. Therefore, in this study, we detected the occlusion areas in IKONOS images using the nominal collection elevation/azimuth angles and restored the hidden areas using other stereo images, from which the true orthoimage could be produced (an illustrative sketch follows this entry). The algorithm's validity was evaluated using the geometric accuracy of the generated orthoimage.
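
As a rough illustration of occlusion detection from the nominal collection elevation/azimuth angles, the sketch below marks DSM cells whose line of sight toward the sensor is blocked by higher terrain. The grid geometry, angle conventions, and ray-marching step are assumptions for illustration; the paper's actual procedure for IKONOS imagery is more involved.

```python
import numpy as np

def occlusion_mask(dsm, cell, azimuth_deg, elevation_deg, max_dist=200.0):
    """Mark DSM cells whose line of sight toward the sensor is blocked.

    dsm: 2-D height grid [m]; cell: grid spacing [m];
    azimuth/elevation: nominal collection angles of the sensor [deg].
    """
    az = np.radians(azimuth_deg)
    tan_el = np.tan(np.radians(elevation_deg))
    # Step direction toward the sensor (row = -north, col = east, assumed).
    d_rc = np.array([-np.cos(az), np.sin(az)])
    h, w = dsm.shape
    occluded = np.zeros((h, w), bool)
    steps = int(max_dist / cell)
    for r in range(h):
        for c in range(w):
            for s in range(1, steps):
                rr = int(round(r + d_rc[0] * s))
                cc = int(round(c + d_rc[1] * s))
                if not (0 <= rr < h and 0 <= cc < w):
                    break
                # Height of the view ray above the current cell at distance s*cell.
                ray_h = dsm[r, c] + s * cell * tan_el
                if dsm[rr, cc] > ray_h:
                    occluded[r, c] = True
                    break
    return occluded

dsm = np.zeros((50, 50)); dsm[20:25, 20:25] = 30.0   # a 30 m "building"
print(occlusion_mask(dsm, cell=1.0, azimuth_deg=90.0, elevation_deg=60.0).sum())
```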

Object Color Identification Embedded System Realization for Uninhabited Stock Management (무인물류관리시스템을 위한 물체컬러식별 임베디드시스템 구현)

  • Lar, Ki-Kong;Ryu, Kwang-Ryol
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2007.10a
    • /
    • pp.289-292
    • /
    • 2007
  • An embedded system for object color identification and classification for uninhabited stock management is presented in this paper. The embedded system is realized by using an ultrasonic sensor to detect the object and its distance, and by extracting a binary image from a USB CCD camera. The object color is identified by comparing a reference pattern with the color pattern of the input image, and the object is then moved to the designated rack in the store (a minimal color-matching sketch follows this entry). The experimental results show that the system can be used in practice for uninhabited stock management by a robot.

  • PDF
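
A minimal sketch of the kind of reference-pattern comparison described above: classify an object region by the nearest stored reference color. The reference colors, the mean-RGB feature, and the Euclidean distance are assumptions for illustration; the paper's pattern comparison may differ.

```python
import numpy as np

# Hypothetical reference colors for the storage racks (mean RGB, 0-255).
REFERENCES = {"red_rack": np.array([200.0, 40.0, 40.0]),
              "green_rack": np.array([40.0, 180.0, 60.0]),
              "blue_rack": np.array([40.0, 60.0, 200.0])}

def classify_color(roi):
    """Assign the object ROI (HxWx3 RGB array) to the nearest reference color."""
    mean_rgb = roi.reshape(-1, 3).mean(axis=0)
    name, _ = min(REFERENCES.items(),
                  key=lambda kv: np.linalg.norm(mean_rgb - kv[1]))
    return name, mean_rgb

roi = np.ones((32, 32, 3)) * np.array([190.0, 50.0, 45.0])   # mostly red object
print(classify_color(roi)[0])                                 # -> 'red_rack'
```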

Multisensor System Integrating Optical Tactile and F/T Sensors for Determination of Type and Position of 3D Contact Surface (3차원 접촉면의 인식 및 위치의 결정의 위한 광촉각센서와 역각센서의 다중센서시스템)

  • 한헌수
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.33B no.2
    • /
    • pp.10-19
    • /
    • 1996
  • This paper presents a finger-shaped multisensor system which can measure the type and position of a target surface by contact. The multisensor system consists of a sphere-shaped optical tactile sensor located at the fingertip and a force/torque sensor located at the finger joint. The optical tactile sensor determines the type and position of the target surface from the shape and position, in the CCD image, of the contact area generated by contact between the sensor and the target surface. The force/torque sensor also determines the position and surface normal vector by applying the distribution of forces and torques at the contact point to the equations of the finger shape. The position and surface-normal measurements at a contact point obtained by the two individual sensors are fused using a statistical method (a fusion sketch follows this entry). The integrated sensor system has a 0.8 mm error in position measurement and a 1.31° error in normal-vector measurement. The developed sensor system has many applications, such as autonomous compliance control, automatic grasping and recognition, etc.

  • PDF
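
The abstract states that the two sensors' estimates are fused "using a statistical method" without naming it. A common choice, shown below purely as an assumption, is minimum-variance (inverse-variance) weighting of two independent estimates of the same quantity.

```python
import numpy as np

def fuse(x1, var1, x2, var2):
    """Minimum-variance fusion of two independent estimates of the same vector."""
    w1 = var2 / (var1 + var2)          # weight grows as the other sensor gets noisier
    w2 = var1 / (var1 + var2)
    fused = w1 * x1 + w2 * x2
    fused_var = (var1 * var2) / (var1 + var2)
    return fused, fused_var

# Contact position [mm]: tactile-sensor vs. force/torque-sensor estimates (hypothetical values).
p_tac, var_tac = np.array([10.2, 4.9, 1.1]), 0.8**2
p_ft,  var_ft  = np.array([10.6, 5.3, 0.9]), 1.3**2
print(fuse(p_tac, var_tac, p_ft, var_ft))
```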

3D Spreader Movement Information by the CCD cameras and the Laser Distance Measuring Unit

  • Lee, Bong-Ki;Lee, Jung-Jae;Kim, Sang-Ju;Lee, Jang-Myung
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.241-245
    • /
    • 2003
  • This paper introduces a method that can derive information about the movement and skew of a spreader in order to drive the ALS (Automatic Landing System) of a harbor crane. At present, methods that use LDL corner detectors (a kind of 2D laser scanner sensor) or laser distance measuring units are used to obtain this information for the ALS, but they have drawbacks in economic efficiency and performance. Therefore, to correct these defects, we propose a method to acquire the spreader movement, skew, and sway angle using CCD camera image data and laser distance measuring unit data (a minimal pose-estimation sketch follows this entry).

  • PDF
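
As a hedged illustration of combining CCD image data with a laser distance measurement, the sketch below estimates a spreader's lateral offset and skew angle from two detected marker pixels and the measured range, using a simple pinhole-camera scale. Marker detection, the calibration values, and the geometry are assumptions, not the paper's method.

```python
import numpy as np

def spreader_pose(p_left, p_right, distance_m, focal_px, image_center):
    """Estimate lateral offset [m] and skew angle [deg] from two marker pixels.

    p_left/p_right: (u, v) pixel coordinates of the spreader's end markers;
    distance_m: laser-measured range to the spreader; focal_px: focal length in pixels.
    """
    p_left, p_right = np.asarray(p_left, float), np.asarray(p_right, float)
    m_per_px = distance_m / focal_px            # pinhole scale at that range
    center = 0.5 * (p_left + p_right)
    offset = (center - np.asarray(image_center, float)) * m_per_px
    du, dv = p_right - p_left
    skew_deg = np.degrees(np.arctan2(dv, du))   # rotation of the marker baseline
    return offset, skew_deg

print(spreader_pose((300, 250), (980, 262), distance_m=12.0,
                    focal_px=1500.0, image_center=(640, 240)))
```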

A Study on Depth Information Acquisition Improved by Gradual Pixel Bundling Method at TOF Image Sensor

  • Kwon, Soon Chul;Chae, Ho Byung;Lee, Sung Jin;Son, Kwang Chul;Lee, Seung Hyun
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.7 no.1
    • /
    • pp.15-19
    • /
    • 2015
  • The depth information of an image is used in a variety of applications including 2D/3D conversion, multi-view extraction, modeling, depth keying, etc. There are various methods to acquire depth information, such as using a stereo camera, using a time-of-flight (TOF) depth camera, using 3D modeling software, using a 3D scanner, and using structured light as in Microsoft's Kinect. In particular, the TOF depth camera measures distance using infrared light, and the TOF sensor depends on the optical sensitivity of the image sensor (CCD/CMOS). Thus, existing image sensors must obtain the infrared image by bundling several pixels, which reduces the resolution of the image. This paper proposes a method to acquire a high-resolution image through gradual movement of the bundled area while acquiring low-resolution images through the pixel bundling method (a shift-and-interleave sketch follows this entry). With the gradual pixel bundling algorithm, the low-light sensitivity (lux) and the resolution of the acquired image information are both improved without increasing the performance of the image sensor.
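
The gradual pixel bundling idea can be illustrated with a simple shift-and-interleave scheme: bin 2x2 pixel blocks for sensitivity, capture several binned frames with one-pixel shifts of the bundled area, and interleave them back into a full-resolution grid. The sketch below is such an illustration under those assumptions; it is not the paper's exact algorithm.

```python
import numpy as np

def bin2x2(frame):
    """Average 2x2 pixel blocks (boosts per-sample signal, halves resolution)."""
    h, w = frame.shape
    return frame[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def gradual_bundling(capture, h, w):
    """Combine four 2x2-binned captures taken with (dy, dx) one-pixel shifts."""
    out = np.zeros((h, w))
    for dy in (0, 1):
        for dx in (0, 1):
            low = bin2x2(capture(dy, dx))           # low-res, high-sensitivity frame
            out[dy::2, dx::2] = low[:h // 2, :w // 2]
    return out

# Toy "scene": capture(dy, dx) returns the scene sampled with that pixel shift.
scene = np.add.outer(np.arange(64.0), np.arange(64.0))
capture = lambda dy, dx: np.roll(scene, (-dy, -dx), axis=(0, 1))
print(gradual_bundling(capture, 64, 64).shape)      # full-resolution (64, 64) result
```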

A Realization of Deburring Robot using Vision Sensor (비젼 센서를 이용한 디버링 로봇의 구현)

  • 배준영;주윤명;김준업;이상룡
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 2002.05a
    • /
    • pp.466-469
    • /
    • 2002
  • A burr is a projecting part of a finished workpiece. It is an unavoidable and undesirable by-product of most metal cutting or shearing processes, and it must be removed to improve the fit of machined parts, the safety of workers, and the effectiveness of finishing operations. However, deburring is one of the manufacturing processes that have not been successfully automated, so deburring automation is strongly needed. This paper focuses on developing a basic algorithm for deburring automation that finds workpiece edges and matches two different sets of image data; it includes automatic recognition of parts, generation of deburring tool paths, and edge/corner finding by analyzing the DXF drawing file that contains the part geometry. The SUSAN method was chosen as the corner-finding algorithm; it performs well in finding edges and corners in a suitable time (a simplified SUSAN sketch follows this entry). This paper also suggests a simple algorithm to find matching points between the CCD image and the drawing file.

  • PDF
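
Since the abstract names SUSAN as the corner-finding method, the sketch below shows the core SUSAN idea in simplified form: count the pixels in a circular mask whose brightness is close to the center pixel (the USAN area) and flag corners where this area is small. The hard brightness threshold and geometric fraction are simplifications of the original detector.

```python
import numpy as np

def susan_corners(img, t=25.0, radius=3, geometric_frac=0.4):
    """Simplified SUSAN: flag pixels whose 'similar-brightness' area is small."""
    h, w = img.shape
    # Circular mask of the given radius.
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    mask = (yy**2 + xx**2) <= radius**2
    n_mask = mask.sum()
    usan = np.full((h, w), n_mask, dtype=float)
    pad = np.pad(img.astype(float), radius, mode='edge')
    for r in range(h):
        for c in range(w):
            win = pad[r:r + 2 * radius + 1, c:c + 2 * radius + 1]
            similar = np.abs(win - img[r, c]) <= t     # pixels like the nucleus
            usan[r, c] = np.count_nonzero(similar & mask)
    return usan < geometric_frac * n_mask              # small USAN area => corner

img = np.zeros((40, 40)); img[10:30, 10:30] = 255.0    # bright square
corners = susan_corners(img)
print(np.argwhere(corners)[:4])                        # responses near the square's corners
```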

A Study on Automatic Seam Tracking using Vision Sensor (비전센서를 이용한 자동추적장치에 관한 연구)

  • 전진환;조택동;양상민
    • Proceedings of the Korean Society of Precision Engineering Conference
    • /
    • 1995.10a
    • /
    • pp.1105-1109
    • /
    • 1995
  • A CCD camera integrated into a vision system was used to realize an automatic seam-tracking system, and the 3-D information needed to generate the torch path was obtained by using a laser slit beam. To extract the laser stripe and obtain the welding-specific point, an adaptive Hough transformation was used (a basic Hough-transform sketch follows this entry). Although the basic Hough transformation takes too much time for on-line image processing, it tends to be robust to noise such as spatter. For that reason, it was complemented with the adaptive Hough transformation to provide on-line processing capability for scanning the welding-specific point. The dead zone, where sensing of the weld line is impossible, is eliminated by rotating the camera about an axis centered at the welding torch. The camera angle is controlled so as to acquire the minimum image data needed for weld-line sensing, which reduces the image processing time. A fuzzy controller is adopted to control the camera angle.

  • PDF
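
The adaptive Hough transformation itself is not specified in the abstract, so the sketch below shows only the basic Hough line transform it extends: every stripe pixel votes for all (rho, theta) lines passing through it, and the strongest accumulator bin gives the line parameters of the laser stripe.

```python
import numpy as np

def hough_line(binary, n_theta=180):
    """Standard Hough transform: return (rho, theta) of the strongest line."""
    ys, xs = np.nonzero(binary)
    thetas = np.deg2rad(np.arange(n_theta))            # 0..179 degrees
    diag = int(np.ceil(np.hypot(*binary.shape)))
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    for x, y in zip(xs, ys):
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1       # one vote per angle
    r_idx, t_idx = np.unravel_index(acc.argmax(), acc.shape)
    return r_idx - diag, np.rad2deg(thetas[t_idx])

# Toy laser stripe: a horizontal line at row 20 plus a little noise.
img = np.zeros((60, 80), bool)
img[20, 5:75] = True
img[np.random.randint(0, 60, 10), np.random.randint(0, 80, 10)] = True
print(hough_line(img))    # rho ~ 20, theta ~ 90 degrees
```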

3D Map Building of The Mobile Robot Using Structured Light

  • Lee, Oon-Kyu;Kim, Min-Young;Cho, Hyung-Suck;Kim, Jae-Hoon
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2001.10a
    • /
    • pp.123.1-123
    • /
    • 2001
  • For autonomous navigation of mobile robots, the capability to recognize the 3D environment is necessary. In this paper, an on-line 3D map building method for autonomous mobile robots is proposed. To get range data on the environment, we use a sensor system composed of a structured light source and a CCD camera based on optimal triangulation. The structured laser is projected as a horizontal stripe on the scene, and the sensor system can rotate ±30° with a goniometer. By scanning the system, we acquire laser stripe images of the environment and update the planes composing the environment through several image processing steps. From the laser stripe in the captured image, we find the center point of each column and form line segments by blobbing these center points (a minimal stripe-triangulation sketch follows this entry). Then the planes of the environment are updated. These steps are done on-line during the scanning phase. With the proposed method, we can efficiently obtain a 3D map of the structured environment.

  • PDF
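
The per-column stripe-center extraction and triangulation described above can be sketched as follows: take the intensity-weighted row in each column as the stripe center, then intersect the camera ray through each stripe pixel with the known laser plane. The calibration matrix and laser-plane parameters below are assumed values for illustration only.

```python
import numpy as np

def stripe_centers(img):
    """Per-column intensity-weighted row (sub-pixel center of the laser stripe)."""
    rows = np.arange(img.shape[0])[:, None]
    weight = img.sum(axis=0)
    centers = (rows * img).sum(axis=0) / np.maximum(weight, 1e-9)
    return centers, weight > 0

def triangulate(centers, cols, K, plane_n, plane_d):
    """Intersect camera rays through stripe pixels with the laser plane n.X = d."""
    Kinv = np.linalg.inv(K)
    pixels = np.stack([cols, centers, np.ones_like(cols)], axis=0)  # homogeneous (u, v, 1)
    rays = Kinv @ pixels                                            # ray directions
    lam = plane_d / (plane_n @ rays)                                # scale along each ray
    return (rays * lam).T                                           # 3-D points, one per column

# Toy example: stripe at row 240 in a 480x640 image, assumed calibration and plane.
img = np.zeros((480, 640)); img[239:242, :] = 1.0
K = np.array([[800.0, 0, 320.0], [0, 800.0, 240.0], [0, 0, 1.0]])
centers, valid = stripe_centers(img)
pts = triangulate(centers[valid], np.nonzero(valid)[0].astype(float),
                  K, plane_n=np.array([0.0, 1.0, 0.5]), plane_d=0.2)
print(pts[:2])   # sample 3-D points on the illuminated surface
```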