• Title/Abstract/Keyword: 3D image sensor

Search results: 333 items (processing time: 0.034 seconds)

Virtual Target Overlay Technique by Matching 3D Satellite Image and Sensor Image

  • 차정희;장효종;박용운;김계영;최형일
    • 정보처리학회논문지D / Vol.11D No.6 / pp.1259-1268 / 2004
  • For training within a limited training ground to prepare for actual combat, realistic simulated training with a variety of battle situations is essential. In this paper, we propose a method that displays virtual targets according to a specified scenario on ground-based CCD camera images, rather than on purely virtual imagery, to provide realistic simulated training. To this end, a realistic 3D model (for the operator) is generated using high-resolution GeoTIFF (Geographic Tag Image File Format) satellite imagery and DTED (Digital Terrain Elevation Data), and roads are extracted from the input CCD images (for the operator and trainee). Because satellite imagery and ground-based sensor imagery differ greatly in viewing position, resolution, and scale, feature-based matching is difficult. Therefore, this paper proposes a motion synchronization method in which the TPS (Thin-Plate Spline) interpolation function, an image warping function, is applied to two corresponding sets of control points, so that targets are displayed in the CCD image along the movement path specified on the 3D model. The experimental environment consisted of two Pentium 4 1.8 GHz PCs (512 MB RAM), and satellite and CCD images of the Daejeon area were used to demonstrate the validity of the proposed algorithm.
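
The abstract above hinges on a thin-plate-spline (TPS) warp fitted to two corresponding control-point sets. A minimal sketch of such a warp in Python, using SciPy's RBFInterpolator with a thin-plate-spline kernel (the control-point coordinates below are invented for illustration, not taken from the paper):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Corresponding control points (invented values): locations in the rendered
# satellite/DTED model view and their matches in the ground-based CCD image.
model_pts = np.array([[120.0, 340.0], [410.0, 355.0], [260.0, 180.0], [600.0, 420.0]])
ccd_pts   = np.array([[ 95.0, 360.0], [430.0, 350.0], [250.0, 150.0], [640.0, 430.0]])

# Thin-plate-spline warp from model-view coordinates to CCD-image coordinates.
tps = RBFInterpolator(model_pts, ccd_pts, kernel="thin_plate_spline")

# A target path drawn on the 3D model can then be transferred to the CCD view.
path_on_model = np.array([[150.0, 300.0], [300.0, 310.0], [450.0, 330.0]])
print(tps(path_on_model))
```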

An Exact 3D Data Extraction Algorithm For Active Range Sensor using Laser Slit

  • 차영엽;권대갑
    • 한국정밀공학회지 / Vol.12 No.8 / pp.73-85 / 1995
  • A sensor system that precisely measures the distance from the center of the sensor to an obstacle is needed to recognize the surrounding environment, and the sensor system must be calibrated thoroughly to obtain exact range information. This study covers the calibration of an active range sensor consisting of a camera and a laser slit emitting device, and provides the equations to obtain 3D range data. This is made possible by obtaining the extrinsic parameters of the laser slit emitting device, through image processing of the slits measured at constant distance intervals, and the intrinsic parameters from the calibration of the camera. The 3D range data equation derived from simple geometric assumptions is proved to be applicable to general cases using the calibration parameters. Exact 3D range data for a real object were also obtained in experiments.

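The range equations described above amount to intersecting each back-projected laser-slit pixel with the calibrated laser plane. A simplified sketch (the intrinsics and plane parameters are assumed placeholder values, not the paper's calibration results):

```python
import numpy as np

# Illustrative calibration values (not the paper's): pinhole intrinsics K and the
# laser-slit plane in the camera frame, written as n . X = d.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
plane_n = np.array([0.0, -0.1736, 0.9848])   # unit normal of the laser plane
plane_d = 0.25                                # plane offset in metres

def slit_pixel_to_3d(u, v):
    """Back-project a laser-slit pixel and intersect its viewing ray with the plane."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # ray direction through the pixel
    t = plane_d / (plane_n @ ray)                    # ray/plane intersection parameter
    return t * ray                                   # 3D point in the camera frame

print(slit_pixel_to_3d(350.0, 260.0))
```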

A Method for Generation of Contour Lines and 3D Modeling using Depth Sensor

  • 정훈조;이동은
    • 디지털산업정보학회논문지 / Vol.12 No.1 / pp.27-33 / 2016
  • In this study we propose a method for 3D landform reconstruction and object modeling by generating contour lines on the map using a depth sensor, which extracts the characteristics of geological layers from the depth map. Unlike a common visual camera, the depth sensor is not affected by the intensity of illumination, so more robust contours and objects can be extracted. The algorithm suggested in this paper first extracts the characteristics of each geological layer from the depth-map image and rearranges them into the proper order, then creates contour lines using Bezier curves. Using the created contour lines, 3D images are reconstructed through rendering by mapping RGB images from the visual camera. Experimental results show that the proposed method using a depth sensor can reconstruct the contour map and 3D model in real time, and that generating the contours from depth data is more efficient and economical in terms of quality and accuracy.
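
The layer-extraction step above can be approximated by thresholding the depth map at evenly spaced levels and tracing each layer's boundary. The sketch below uses OpenCV contour tracing on synthetic depth values and omits the paper's Bezier-curve smoothing and RGB mapping steps:

```python
import cv2
import numpy as np

# Placeholder depth map (a real one would come from the depth sensor).
depth = np.random.default_rng(0).integers(400, 1200, size=(240, 320)).astype(np.uint16)

num_layers = 8
levels = np.linspace(int(depth.min()), int(depth.max()), num_layers + 1)

contour_layers = []
for level in levels[1:-1]:
    # Threshold the depth map at each level and trace that layer's boundary.
    mask = (depth <= level).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contour_layers.append(contours)

print([len(c) for c in contour_layers])   # number of traced contours per layer
```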

Image Feature-Based Real-Time RGB-D 3D SLAM with GPU Acceleration

  • 이동화;김형진;명현
    • 제어로봇시스템학회논문지 / Vol.19 No.5 / pp.457-461 / 2013
  • This paper proposes an image feature-based real-time RGB-D (Red-Green-Blue Depth) 3D SLAM (Simultaneous Localization and Mapping) system. RGB-D data from Kinect-style sensors contain a 2D image and per-pixel depth information. 6-DOF (Degree-of-Freedom) visual odometry is obtained through the 3D-RANSAC (RANdom SAmple Consensus) algorithm with 2D image features and depth data. To speed up feature extraction, parallel computation is performed with GPU acceleration. After a feature manager detects a loop closure, a graph-based SLAM algorithm optimizes the trajectory of the sensor and builds a 3D point cloud based map.
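
The 3D-RANSAC visual-odometry step mentioned above can be illustrated as a RANSAC loop around a closed-form rigid-body fit (Kabsch) on matched 3D feature points from consecutive frames. A minimal CPU-only sketch, with arbitrary thresholds and without the GPU feature extraction or graph optimization of the paper:

```python
import numpy as np

def rigid_transform(P, Q):
    """Closed-form (Kabsch) rotation R and translation t with Q ~= P @ R.T + t."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    U, _, Vt = np.linalg.svd((P - cP).T @ (Q - cQ))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cQ - R @ cP

def ransac_odometry(P, Q, iters=200, thresh=0.03):
    """RANSAC over minimal 3-point samples of matched 3D features between frames."""
    rng = np.random.default_rng(0)
    best, best_inliers = None, -1
    for _ in range(iters):
        idx = rng.choice(len(P), size=3, replace=False)
        R, t = rigid_transform(P[idx], Q[idx])
        err = np.linalg.norm(P @ R.T + t - Q, axis=1)
        n_in = int((err < thresh).sum())
        if n_in > best_inliers:
            best, best_inliers = (R, t), n_in
    return best

# Tiny self-check with a known motion (placeholder data, not sensor output).
pts = np.random.default_rng(1).uniform(-1.0, 1.0, (50, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
R_est, t_est = ransac_odometry(pts, pts @ R_true.T + np.array([0.1, 0.0, 0.3]))
print(np.round(R_est, 3), np.round(t_est, 3))
```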

3D Map Building of the Mobile Robot Using Structured Light

  • Lee, Oon-Kyu;Kim, Min-Young;Cho, Hyung-Suck;Kim, Jae-Hoon
    • 제어로봇시스템학회:학술대회논문집 / ICCAS 2001 / pp.123.5-123 / 2001
  • For autonomous navigation of mobile robots, the robots' capability to recognize the 3D environment is necessary. In this paper, an on-line 3D map building method for autonomous mobile robots is proposed. To get range data on the environment, we use a sensor system composed of a structured light source and a CCD camera, based on optimal triangulation. The structured laser is projected as a horizontal strip on the scene. The sensor system can rotate ±30° with a goniometer. By scanning the system, we obtain the laser strip image of the environment and update the planes composing the environment through several image processing steps. From the laser strip on the captured image, we find a center point for each column and make line segments by blobbing these center points. Then, the planes of the environment are updated. These steps are done on-line in the scanning phase. With the proposed method, we can efficiently build a 3D map of the structured environment.

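The per-column stripe processing described above (a center point per image column, blobbed into segments and triangulated) might be sketched as follows; the geometry is deliberately simplified to a laser plane parallel to the optical axis, with made-up intrinsics rather than the paper's calibrated goniometer setup:

```python
import numpy as np

# Simplified, illustrative parameters (not the paper's calibration): pinhole
# intrinsics and a horizontal laser plane offset `baseline` metres from the
# camera centre and parallel to the optical axis.
fx, fy, cx, cy = 700.0, 700.0, 320.0, 240.0
baseline = 0.10

def strip_centers(image):
    """Intensity-weighted row centroid of every column (crude stripe-centre estimate)."""
    w = image.astype(float)
    rows = np.arange(image.shape[0])[:, None]
    return (w * rows).sum(axis=0) / np.maximum(w.sum(axis=0), 1e-9)

def stripe_pixel_to_3d(u, v):
    """Triangulate one stripe pixel under the simplified plane geometry above."""
    Z = fy * baseline / max(v - cy, 1e-6)   # depth from the vertical pixel offset
    return np.array([(u - cx) * Z / fx, (v - cy) * Z / fy, Z])

# Synthetic image with a horizontal stripe on row 300.
img = np.zeros((480, 640), dtype=np.uint8)
img[300, :] = 255
v_centers = strip_centers(img)
print(stripe_pixel_to_3d(320.0, v_centers[320]))   # -> roughly [0, 0.1, 1.17]
```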

Compact and versatile range-finding speedometer with wide dynamic range

  • Shinohara, Shigenobu;Pan, Derong;Kosaka, Nozomu;Ikeda, Hiroaki;Yoshida, Hirofumi;Sumi, Masao
    • 제어로봇시스템학회:학술대회논문집 / Proceedings of the Korea Automation Control Conference, 10th (KACC); Seoul, Korea; 23-25 Oct. 1995 / pp.158-161 / 1995
  • A new laser diode range-finding speedometer is proposed, which is modulated by a pair of positive and negative triangular pulse currents superimposed on a dc current. Since the target velocity is directly obtained from the pure Doppler beat frequency measured during the non-modulation period, the new sensor is free from the difficulties due to the critical velocity encountered in the previous sensor. Furthermore, the different amplitudes of the two triangular pulses are adjusted so that the measurable range using only one laser head is greatly expanded to 10 cm through 150 cm, which is about two times that of the previous sensor. The measurement accuracy for velocities of ±6 mm/s through ±20 mm/s and for range is about 1% and 2%, respectively. Because the new sensor can be operated automatically using a microcomputer, it will be useful for 3-D range image measurement of slowly moving objects.

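During the non-modulation period, the velocity follows from the self-mixing Doppler beat frequency; for backscatter the usual relation is v = λ·f_D / 2. A tiny numeric illustration (the 780 nm wavelength and the beat frequency are assumed values, not given in the abstract):

```python
wavelength = 780e-9      # m; assumed laser-diode wavelength (not stated in the abstract)
f_doppler = 30.8e3       # Hz; illustrative Doppler beat frequency

velocity = wavelength * f_doppler / 2.0   # v = lambda * f_D / 2 (backscatter)
print(f"{velocity * 1e3:.2f} mm/s")       # ~12.01 mm/s, inside the +/-6..20 mm/s band
```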

Low Cost Omnidirectional 2D Distance Sensor for Indoor Floor Mapping Applications

  • Kim, Joon Ha;Lee, Jun Ho
    • Current Optics and Photonics / Vol.5 No.3 / pp.298-305 / 2021
  • Modern distance sensing methods employ various measurement principles, including triangulation, time-of-flight, confocal, interferometric, and frequency-comb approaches. Among them, the triangulation method, with a laser light source and an image sensor, is widely used in low-cost applications. We developed an omnidirectional two-dimensional (2D) distance sensor based on the triangulation principle for indoor floor mapping applications. The sensor has a range of 150-1500 mm with a relative resolution better than 4% over the range and 1% at 1 meter distance. It rotationally scans a compact one-dimensional (1D) distance sensor composed of a near-infrared (NIR) laser diode, a folding mirror, an imaging lens, and an image detector. We designed the sensor layout and configuration to satisfy the required measurement range and resolution, selecting easily available components in a special effort to reduce cost. We built a prototype and tested it with seven representative indoor wall specimens (white wallpaper, gray wallpaper, black wallpaper, furniture wood, black leather, brown leather, and white plastic) under a typical indoor illumination condition of 200 lux, on a floor under ceiling-mounted fluorescent lamps. We confirmed that the proposed sensor provided reliable distance readings for all the specimens over the required measurement range (150-1500 mm) with a measurement resolution of 4% overall and 1% at 1 meter, regardless of illumination conditions.
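
The 1D triangulation principle used by this sensor relates the laser-spot displacement on the detector to range, roughly z ≈ f·b / x for small angles. A toy illustration with invented parameters (not the prototype's actual layout):

```python
focal_len   = 8e-3    # m; imaging-lens focal length (assumed)
baseline    = 30e-3   # m; laser-to-lens separation (assumed)
pixel_pitch = 3e-6    # m per detector pixel (assumed)

def range_from_spot(pixel_offset):
    """Small-angle triangulation: spot displacement on the detector -> distance."""
    return focal_len * baseline / (pixel_offset * pixel_pitch)

for px in (160, 40, 16):
    print(f"{px:4d} px -> {range_from_spot(px):.2f} m")
```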

A 3-D Vision Sensor Implementation on Multiple DSPs TMS320C31

  • V.옥센핸들러;A.벤스하이르;P.미셰;이상국
    • 센서학회지 / Vol.7 No.2 / pp.124-130 / 1998
  • High-speed 3-D vision systems are very important for autonomous robot and vehicle control applications. This paper describes the development of a stereo vision process consisting of three steps: extraction of edges from the left and right images, matching of corresponding edges, and computation of the 3-D map. The process was implemented on a VME 150/40 Imaging Technology vision system, a modular system consisting of display, acquisition, 4 Mbytes of image frame memory, and three processing boards. The programmable processing modules, operating at 40 MHz, are based on TMS320C31 DSPs with a 64×32-bit instruction cache and two 1024×32-bit RAMs. Each module is equipped with 512 Kbytes of static RAM, 4 Mbytes of image memory, 1 Mbyte of flash EEPROM, and one serial port. Data transfer and exchange between modules take place over an 8-bit global video bus and three local configurable 8-bit pipeline video buses, while a VME bus is used for system management. Two DSPs are used for edge detection in the left and right images, and the last processor is used for the matching process and the 3-D computation. For 512×512-pixel images, the sensor produced dense 3-D maps at about 1 Hz, depending on scene complexity. The results could be improved by using special-purpose multiprocessor cards.

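Once corresponding left/right edges are matched, the 3-D map follows from the standard rectified-stereo relation Z = f·B / d. A short illustration (the focal length, baseline, and principal point are arbitrary assumptions, not the described DSP system's values):

```python
import numpy as np

fx = 820.0               # focal length in pixels (assumed)
baseline = 0.12          # metres between the two cameras (assumed)
cx, cy = 320.0, 240.0    # assumed principal point

# Matched edge pixels on the same scanline: column in the left and right images.
left_u  = np.array([312.0, 405.5, 150.2])
right_u = np.array([287.0, 389.5, 121.2])
v       = np.array([100.0, 180.0, 240.0])

disparity = left_u - right_u
Z = fx * baseline / disparity      # depth of each matched edge point
X = (left_u - cx) * Z / fx         # assuming square pixels
Y = (v - cy) * Z / fx
print(np.stack([X, Y, Z], axis=1))
```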

The Improvement of RFM RPC Using Ground Control Points and 3D Cube

  • Cho, Woo-Sug;Kim, Joo-Hyun
    • 대한원격탐사학회:학술대회논문집 / Proceedings of ACRS 2003 ISRS / pp.1143-1145 / 2003
  • Some satellites such as IKONOS do not provide orbital elements, so the physical sensor model cannot be utilized. Therefore, the Rational Function Model (RFM), one of the mathematical models, can be a feasible solution. In order to improve the 3D geopositioning accuracy of IKONOS stereo imagery, the Rational Polynomial Coefficients (RPCs) of the RFM need to be updated with Ground Control Points (GCPs). In this paper, a method to improve the RPCs of the RFM using GCPs and a 3D cube is proposed. First, the image coordinates of the GCPs are observed. Then, using the offset and scale values of the provided RPCs, the image coordinates and ground coordinates of the 3D cube are initially determined, and updated RPCs are computed by the iterative least squares method. The proposed method was implemented and analyzed for several cases with different numbers of 3D cube layers and GCPs. The experimental results showed that the proposed method improved the accuracy of the RPCs by a large amount.

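A common simplified way to refine RPC-based geopositioning with GCPs, related in spirit to (but not identical with) the 3D-cube scheme above, is an affine bias correction in image space fitted by least squares. All coordinates below are invented placeholders:

```python
import numpy as np

# (row, col) predicted by the vendor RPCs for each GCP, and the (row, col)
# actually measured in the image; every number below is an invented placeholder.
rpc_rc  = np.array([[1021.3, 2043.7], [ 512.8,  998.2],
                    [1980.1,  511.4], [ 250.6, 1730.9]])
meas_rc = np.array([[1023.0, 2041.1], [ 514.6,  995.8],
                    [1981.8,  508.9], [ 252.4, 1728.4]])

# Affine bias correction per coordinate: measured ~= a0 + a1*row_rpc + a2*col_rpc.
A = np.column_stack([np.ones(len(rpc_rc)), rpc_rc])
coef_row, *_ = np.linalg.lstsq(A, meas_rc[:, 0], rcond=None)
coef_col, *_ = np.linalg.lstsq(A, meas_rc[:, 1], rcond=None)

def correct(row_rpc, col_rpc):
    """Apply the fitted correction to any RPC-projected image coordinate."""
    p = np.array([1.0, row_rpc, col_rpc])
    return float(p @ coef_row), float(p @ coef_col)

print(correct(1500.0, 1500.0))
```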