• Title/Summary/Keyword: 3D Image Registration


Localization of Unmanned Ground Vehicle based on Matching of Ortho-edge Images of 3D Range Data and DSM (3차원 거리정보와 DSM의 정사윤곽선 영상 정합을 이용한 무인이동로봇의 위치인식)

  • Park, Soon-Yong;Choi, Sung-In
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.1 no.1
    • /
    • pp.43-54
    • /
    • 2012
  • This paper presents a new localization technique for a UGV (Unmanned Ground Vehicle) that matches ortho-edge images generated from a DSM (Digital Surface Model), which represents the 3D geometric information of an outdoor navigation environment, and from 3D range data obtained by a LIDAR (Light Detection and Ranging) sensor mounted on the UGV. Recent UGV localization techniques mostly combine positioning sensors such as GPS (Global Positioning System), IMU (Inertial Measurement Unit), and LIDAR. In particular, ICP (Iterative Closest Point)-based geometric registration techniques have been developed for UGV localization. However, ICP-based geometric registration tends to fail when registering 3D range data between the LIDAR and the DSM because the sensing directions of the two data sources differ too greatly. In this paper, we introduce and match ortho-edge images between the two sensor data sources, 3D LIDAR and DSM, for UGV localization. Details of the new techniques for generating and matching ortho-edge images between LIDAR and DSM are presented, followed by experimental results from four different navigation paths. The performance of the proposed technique is compared to that of a conventional ICP-based technique.
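The conventional ICP baseline that this paper compares against can be sketched in a few lines. The following is a minimal 2D illustration (nearest-neighbour matching plus a Kabsch rigid fit per iteration), not the authors' implementation:

```python
import numpy as np

def icp_step(src, dst):
    """One ICP iteration: match each source point to its nearest
    destination point, then solve the best rigid transform (Kabsch)."""
    # Nearest-neighbour correspondences (brute force).
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
    matched = dst[d2.argmin(axis=1)]
    # Optimal rotation/translation between the matched sets.
    mu_s, mu_d = src.mean(0), matched.mean(0)
    H = (src - mu_s).T @ (matched - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return src @ R.T + t, R, t

# Toy example: recover a pure translation.
dst = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
src = dst + np.array([0.3, -0.2])
for _ in range(3):
    src, R, t = icp_step(src, dst)
print(np.allclose(src, dst, atol=1e-6))  # True
```

When the two point sets are sensed from very different directions, as in the LIDAR-versus-DSM case the abstract describes, the nearest-neighbour step produces wrong correspondences and this loop converges to a poor pose, which motivates the ortho-edge image matching instead.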

Study on the Diffuse Texture Acquisition of a Real Object (실세계 객체의 디퓨즈 텍스쳐 획득에 관한 연구)

  • Kim, Kang-Yeon;Lee, Jae-Y.;Yoo, Jae-Doug;Lee, Kwan-H.
    • Proceedings of the Korea HCI Society Conference
    • /
    • 2006.02a
    • /
    • pp.1222-1227
    • /
    • 2006
  • The goal of this study is to generate a high-quality texture-mapped virtual model from an object's shape information (3D mesh) and color/texture information (images). To obtain the texture-coordinate correspondence between the 3D shape and the image, a 3D-2D registration is performed whose parameters are the transformation matrix between the object and camera coordinate systems, the camera focal length, and the aspect ratio between the camera CCD and the image frame. To perform this 3D-2D registration efficiently, we proceed in three stages: calibration of the camera's intrinsic parameters, selection of a reliable initial solution, and nonlinear optimization (Newton's method). In addition, the object images used as color/texture information contain artifacts such as specular highlights and saturated pixel values caused by the capture conditions. Using the specular coordinates in the image and the result of the 3D-2D registration, the light source at capture time is estimated, and an approximated bidirectional reflectance distribution function (BRDF) is used to modulate the texture pixel values and obtain a diffuse texture with the lighting effects of the capture removed. In this study, an approximation of the Phong model is used as the reflectance model.
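The core of the diffuse-texture step is subtracting an approximated Phong specular term from the observed intensity. A minimal per-pixel sketch, in which the specular coefficient `ks` and shininess exponent `n` stand in for the light parameters the paper estimates from the image:

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def remove_specular(observed, normal, light, view, ks=0.5, n=8):
    """Approximate the diffuse texture value by subtracting the Phong
    specular term from the observed pixel intensity (a sketch; ks and
    n are illustrative placeholders for the estimated light source)."""
    # Reflect the light direction about the surface normal: r = 2(n.l)n - l
    nl = dot(normal, light)
    refl = [2 * nl * nc - lc for nc, lc in zip(normal, light)]
    spec = ks * max(dot(refl, view), 0.0) ** n
    return max(observed - spec, 0.0)

# A viewer aligned with the mirror direction sees the full highlight.
normal, light, view = (0, 0, 1), (0, 0, 1), (0, 0, 1)
print(remove_specular(1.0, normal, light, view))  # 0.5
```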


Extracting DEM by using Stereo Image Matching Technique (스테레오 영상 정합에 의한 DEM 추출)

  • Kim, Han-Young;Woo, Dong-Min
    • Proceedings of the KIEE Conference
    • /
    • 1999.07g
    • /
    • pp.2941-2943
    • /
    • 1999
  • Aerial images are used to recover 3-D elevations. Image matching techniques such as multi-resolution matching, WCC (Weighted Cross-Correlation), and NSSR (Narrow Search Sub-pixel Registration) apply robustly to images that have enough features, but they are not suitable for images lacking features, because disparity errors increase there. In this paper, we propose disparity interpolation, which reduces the disparity errors occurring in regions where images do not have enough features. Using real aerial images, we compare the results of existing image matching techniques with those of the proposed method.
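The idea of disparity interpolation can be illustrated on a single scanline: fill disparities that matching could not establish reliably by interpolating linearly between the nearest valid neighbours. This is a one-dimensional sketch of the concept, not the paper's 2D formulation:

```python
def interpolate_disparities(disp, valid):
    """Fill unreliable disparities on a scanline by linear interpolation
    between the nearest valid neighbours on either side of each hole."""
    out = list(disp)
    n = len(disp)
    i = 0
    while i < n:
        if valid[i]:
            i += 1
            continue
        # Find the extent of the hole [i, j).
        j = i
        while j < n and not valid[j]:
            j += 1
        left = out[i - 1] if i > 0 else (out[j] if j < n else 0.0)
        right = out[j] if j < n else left
        span = j - i + 1
        for k in range(i, j):
            w = (k - i + 1) / span
            out[k] = (1 - w) * left + w * right
        i = j
    return out

disp  = [2.0, 0.0, 0.0, 5.0]
valid = [True, False, False, True]
print(interpolate_disparities(disp, valid))  # ~ [2.0, 3.0, 4.0, 5.0]
```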


Color Enhancement of Low Exposure Images using Histogram Specification and its Application to Color Shift Model-Based Refocusing

  • Lee, Eunsung;Kang, Wonseok;Kim, Sangjin
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.1 no.1
    • /
    • pp.8-16
    • /
    • 2012
  • An image obtained in a low-light environment suffers from a low-exposure problem caused by non-ideal camera settings, i.e., aperture size and shutter speed. In particular, the multiple color-filter aperture (MCA) system inherently suffers from low-exposure problems, and from performance degradation in its image classification and registration processes, due to the finite size of its apertures. In this context, this paper presents a novel method for the color enhancement of low-exposure images and its application to the color shift model-based MCA system for image refocusing. Although various histogram equalization (HE) approaches have been proposed, they tend to distort the color information of the processed image due to the range limits of the histogram. The proposed color enhancement algorithm enhances the global brightness by analyzing the basic cause of the low-exposure phenomenon, and then compensates for contrast degradation artifacts using an adaptive histogram specification. We also apply the proposed algorithm to the preprocessing step of the refocusing technique in the MCA system to enhance the color image. The experimental results confirm that the proposed method can enhance the contrast of any low-exposure color image acquired by a conventional camera, and that it is suitable for commercial low-cost, high-quality imaging devices such as consumer-grade camcorders, real-time 3D reconstruction systems, and digital and computational cameras.
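The classical histogram-specification step that this method adapts maps each source intensity to the reference intensity with the closest cumulative distribution value. A minimal NumPy sketch of that step (the paper's contribution is in choosing the reference adaptively, which is not shown here):

```python
import numpy as np

def histogram_specification(src, ref, levels=256):
    """Remap src intensities so their histogram approximates ref's
    via CDF matching (the classical specification step)."""
    src_hist, _ = np.histogram(src, bins=levels, range=(0, levels))
    ref_hist, _ = np.histogram(ref, bins=levels, range=(0, levels))
    src_cdf = np.cumsum(src_hist) / src.size
    ref_cdf = np.cumsum(ref_hist) / ref.size
    # For each source level, find the reference level of closest CDF.
    mapping = np.searchsorted(ref_cdf, src_cdf).clip(0, levels - 1)
    return mapping[src]

# A dark image pushed toward a brighter reference distribution.
rng = np.random.default_rng(0)
dark   = rng.integers(0, 64, size=(64, 64))
bright = rng.integers(128, 256, size=(64, 64))
out = histogram_specification(dark, bright)
print(out.mean() > dark.mean())  # True
```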


Extra-phase Image Generation for Its Potential Use in Dose Evaluation for a Broad Range of Respiratory Motion

  • Lee, Hyun Su;Choi, Chansoo;Kim, Chan Hyeong;Han, Min Cheol;Yeom, Yeon Soo;Nguyen, Thang Tat;Kim, Seonghoon;Choi, Sang Hyoun;Lee, Soon Sung;Kim, Jina;Hwang, JinHo;Kang, Youngnam
    • Journal of Radiation Protection and Research
    • /
    • v.44 no.3
    • /
    • pp.103-109
    • /
    • 2019
  • Background: Four-dimensional computed tomography (4DCT) images are increasingly used in the clinic, given the growing need to account for the patient's respiratory motion during radiation treatment. One of the reasons that makes dose evaluation using 4DCT inaccurate is a change in the patient's respiration during the treatment session, i.e., intrafractional uncertainty. In particular, when the amplitude of the patient's respiration is greater than the respiration range during 4DCT acquisition, the organ motion from the larger respiration is difficult to represent with the 4DCT. In this paper, a method to generate images predicting the organ motion of a respiration with extended amplitude is proposed and examined. Materials and Methods: We propose a method to generate extra-phase images from a given set of 4DCT images using deformable image registration (DIR) and linear extrapolation. Deformation vector fields (DVFs) are calculated from the given set of images, then extrapolated according to a respiratory surrogate. The extra-phase images are generated by applying the extrapolated DVFs to the existing 4DCT images. The proposed method was tested with the 4DCT of a physical 4D phantom. Results and Discussion: The tumor position in the generated extra-phase image was in good agreement with that in the gold-standard image, which was acquired separately, using the same 4DCT machine, with a larger range of respiration. It was also found that the best-quality extra-phase image is generated by using the maximum inhalation phase (T0) and maximum exhalation phase (T50) images for extrapolation. Conclusion: In the present study, a method to construct extra-phase images that represent expanded respiratory motion of the patient has been proposed and tested. The movement of organs under a larger respiration amplitude can be predicted by the proposed method. We believe the method may be utilized for realistic simulation of radiation therapy.
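The linear-extrapolation step can be illustrated very compactly: scale the measured DVF by the ratio of the target respiratory amplitude to the measured one. A sketch under the assumption that the surrogate signal is a single scalar amplitude (the variable names are illustrative, not the paper's):

```python
import numpy as np

def extrapolate_dvf(dvf, amp_measured, amp_target):
    """Linearly extrapolate a deformation vector field to a larger
    respiratory amplitude by scaling each displacement vector."""
    return dvf * (amp_target / amp_measured)

# A T0->T50 DVF measured at 10 mm amplitude, extrapolated to 15 mm.
dvf_t50 = np.array([[0.0, 0.0, 4.0],     # one (dx, dy, dz) per voxel
                    [1.0, 0.0, 6.0]])
dvf_extra = extrapolate_dvf(dvf_t50, amp_measured=10.0, amp_target=15.0)
print(dvf_extra.tolist())  # [[0.0, 0.0, 6.0], [1.5, 0.0, 9.0]]
```

Warping the existing T50 image with `dvf_extra` then yields the extra-phase image; the warping itself is omitted here.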

Automated Bar Placing Model Generation for Augmented Reality Using Recognition of Reinforced Concrete Details (부재 일람표 도면 인식을 활용한 증강현실 배근모델 자동 생성)

  • Park, U-Yeol;An, Sung-Hoon
    • Journal of the Korea Institute of Building Construction
    • /
    • v.20 no.3
    • /
    • pp.289-296
    • /
    • 2020
  • This study suggests a methodology for automatically extracting placing information from 2D reinforced concrete details drawings and generating a 3D reinforcement placing model, in order to develop a mobile augmented reality application for bar placing work. To make it easier for users to acquire placing information, users take pictures of structural drawings with the camera built into a mobile device, and the placing information is extracted using vision recognition and an OCR (Optical Character Recognition) tool. In addition, an augmented reality app is implemented using a game engine, allowing users to automatically generate the 3D reinforcement placing model and review it by superimposing it on real images. Details are given for applying the proposed methodology with previously developed programming tools, and the results of implementing reinforcement augmented reality models for typical members at construction sites are reviewed. The methodology is expected to be usable for learning bar placing work or for construction review.

IMAGE FUSION ACCURACY FOR THE INTEGRATION OF DIGITAL DENTAL MODEL AND 3D CT IMAGES BY THE POINT-BASED SURFACE BEST FIT ALGORITHM (Point-based surface best fit 알고리즘을 이용한 디지털 치아 모형과 3차원 CT 영상의 중첩 정확도)

  • Kim, Bong-Chul;Lee, Chae-Eun;Park, Won-Se;Kang, Jeong-Wan;Yi, Choong-Kook;Lee, Sang-Hwy
    • Journal of the Korean Association of Oral and Maxillofacial Surgeons
    • /
    • v.34 no.5
    • /
    • pp.555-561
    • /
    • 2008
  • Purpose: The goal of this study was to develop a technique for creating a computerized composite maxillofacial-dental model based on a point-based surface best fit algorithm, and to test its accuracy. The computerized composite maxillofacial-dental model was made by the three-dimensional combination of a 3-dimensional (3D) computed tomography (CT) bone model with a digital dental model. Materials and Methods: The integration procedure mainly consists of the following steps: 1) reconstruction of a virtual skull and a digital dental model from the CT and a laser-scanned dental model; 2) incorporation of the dental model into the virtual maxillofacial-dental model by the point-based surface best fit algorithm; 3) assessment of the accuracy of the incorporation. To test this system, CTs and dental models were obtained from 3 volunteers with cranio-maxillofacial deformities. The registration accuracy was determined by the root mean squared distance between corresponding reference points in each pair of images. Results and Conclusions: The fusion error for the maxillofacial 3D CT model with the digital dental model ranged between 0.1 and 0.3 mm, with a mean of 0.2 mm. The range of errors was similar to those reported elsewhere with fiducial markers. This study therefore confirmed the feasibility and accuracy of combining a digital dental model and a 3D CT maxillofacial model, and the technique proved easy to use, suggesting good clinical applicability in the field of digital cranio-maxillofacial surgery.
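The accuracy metric used here, the root-mean-squared distance between corresponding reference points, is simple enough to state exactly. A small sketch with made-up coordinates (the point values are illustrative, not the study's data):

```python
import math

def rms_error(points_a, points_b):
    """Root-mean-square distance between corresponding 3D reference
    points in two registered images."""
    assert len(points_a) == len(points_b)
    total = sum((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2
                for (ax, ay, az), (bx, by, bz) in zip(points_a, points_b))
    return math.sqrt(total / len(points_a))

ct_pts     = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]   # mm, illustrative
dental_pts = [(0.1, 0.0, 0.0), (10.0, 0.2, 0.0)]
print(round(rms_error(ct_pts, dental_pts), 3))  # 0.158
```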

AUTOMATIC 3D BUILDING INFORMATION EXTRACTION FROM A SINGLE QUICKBIRD IMAGE AND DIGITAL MAPS

  • Kim, Hye-Jin;Byun, Young-Gi;Choi, Jae-Wan;Han, You-Kyung;Kim, Yong-Il
    • Proceedings of the KSRS Conference
    • /
    • 2007.10a
    • /
    • pp.238-242
    • /
    • 2007
  • Today's commercial high-resolution satellite imagery, such as that provided by IKONOS and QuickBird, offers the potential to extract useful spatial information for geographical database construction and GIS applications. Digital maps supply the most generally used GIS data, providing topography, road, and building information. Currently, the building information provided by digital maps is incompletely constructed for GIS applications due to planar position error and warped shapes. We focus on extracting accurate building information, including position, shape, and height, to update the building information of digital maps and GIS databases. In this paper, we propose a new method of 3D building information extraction using a single high-resolution satellite image and a digital map. Co-registration between the QuickBird image and the 1:1,000 digital maps was carried out automatically using the RPC adjustment model, and the building layer of the digital map was projected onto the image. The building roof boundaries were detected using the building layer from the digital map, based on the satellite azimuth. The building shape could then be modified using a snake algorithm. We measured the building height and traced the building bottom automatically using the triangular vector structure (TVS) hypothesis. To evaluate the proposed method, we estimated the accuracy of the extracted building information using a LiDAR DSM.


Camera pose estimation framework for array-structured images

  • Shin, Min-Jung;Park, Woojune;Kim, Jung Hee;Kim, Joonsoo;Yun, Kuk-Jin;Kang, Suk-Ju
    • ETRI Journal
    • /
    • v.44 no.1
    • /
    • pp.10-23
    • /
    • 2022
  • Despite the significant progress in camera pose estimation and structure-from-motion reconstruction from unstructured images, methods that exploit a priori information on camera arrangements have been overlooked. Conventional state-of-the-art methods do not exploit the geometric structure to recover accurate camera poses from a set of patch images in an array for mosaic-based imaging, which creates a wide field-of-view image by stitching together a collection of regular images. We propose a camera pose estimation framework that exploits the array-structured image settings in each incremental reconstruction step. It consists of two-way registration, 3D point outlier elimination, and bundle adjustment with a constraint term for consistent rotation vectors to reduce reprojection errors during optimization. We demonstrate that by using the individual images' connected structures at the different camera pose estimation steps, we can estimate camera poses more accurately for all structured mosaic-based image sets, including omnidirectional scenes.
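The shape of a bundle-adjustment objective with a rotation-consistency term can be sketched as a reprojection data term plus a penalty on how far each camera's rotation vector strays from the group mean. This is only an illustration of the general form; the weight `lam` and the grouping are assumptions, not the paper's formulation:

```python
import numpy as np

def ba_cost(reproj_residuals, rot_vecs, lam=0.1):
    """Bundle-adjustment objective: squared reprojection residuals plus
    a penalty keeping the rotation vectors of array-mates consistent."""
    data_term = np.sum(np.asarray(reproj_residuals) ** 2)
    rot_vecs = np.asarray(rot_vecs)
    # Penalize deviation of each rotation vector from the group mean.
    consistency = np.sum((rot_vecs - rot_vecs.mean(axis=0)) ** 2)
    return data_term + lam * consistency

# Identical rotations incur no penalty; divergent ones do.
residuals = [0.5, -0.5]
same = [[0.0, 0.0, 0.1], [0.0, 0.0, 0.1]]
diff = [[0.0, 0.0, 0.0], [0.0, 0.0, 0.2]]
print(ba_cost(residuals, same) < ba_cost(residuals, diff))  # True
```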

Real-time 3D Volumetric Model Generation using Multiview RGB-D Camera (다시점 RGB-D 카메라를 이용한 실시간 3차원 체적 모델의 생성)

  • Kim, Kyung-Jin;Park, Byung-Seo;Kim, Dong-Wook;Kwon, Soon-Chul;Seo, Young-Ho
    • Journal of Broadcast Engineering
    • /
    • v.25 no.3
    • /
    • pp.439-448
    • /
    • 2020
  • In this paper, we propose a modified optimization algorithm for point cloud matching of multi-view RGB-D cameras. In computer vision, accurately estimating the position of the camera is very important. The 3D model generation methods proposed in previous research require a large number of cameras or expensive 3D cameras, and methods that obtain the camera's extrinsic parameters from 2D images have large errors. In this paper, we propose a matching technique for generating a 3D point cloud and mesh model that can provide an omnidirectional free viewpoint using 8 low-cost RGB-D cameras. The proposed method applies depth map-based function optimization together with RGB images, and obtains coordinate transformation parameters that can generate a high-quality 3D model without requiring initial parameters.