• Title/Summary/Keyword: Multiple Range Images


A Study on the Automatic Registration of Multiple Range Images Obtained by the 3D Scanner around the Object (물체 주위를 돌아가며 3차원 스캐너로 획득된 다면 이미지의 자동접합에 관한 연구)

  • 홍훈기;조경호
    • Korean Journal of Computational Design and Engineering
    • /
    • v.5 no.3
    • /
    • pp.285-292
    • /
    • 2000
  • A new method for the automatic 3D registration of multiple range images has been developed for 3D scanners (non-contact coordinate measurement systems). In existing methods, the user usually has to input more than 3 pairs of corresponding points for the iterative registration process. The major difficulty of these systems is that the corresponding points must be selected very carefully, because the optimal search process and the registration results depend largely on the accuracy of the selected points. The proposed method greatly mitigates this difficulty and requires only 2 pairs of corresponding input points. Several registration examples on measured 3D data are presented and discussed along with an introduction to the proposed algorithm; a correspondence-based alignment sketch follows this entry.

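The entry above describes registration driven by a small set of user-supplied point correspondences. As a minimal, hedged illustration (a standard SVD-based least-squares alignment, not the paper's own 2-pair algorithm; the function name `rigid_transform` and the sample coordinates are illustrative), the sketch below estimates a rigid transform from given correspondences, which is the usual starting point for iterative refinement.

```python
# A minimal sketch (not the paper's algorithm): least-squares rigid alignment
# of two point sets from user-supplied correspondences via SVD (Kabsch method).
# Such an estimate typically initializes the iterative registration.
import numpy as np

def rigid_transform(src, dst):
    """Return rotation R and translation t minimizing ||R @ src + t - dst||."""
    src = np.asarray(src, dtype=float)   # shape (N, 3), points in scan A
    dst = np.asarray(dst, dtype=float)   # shape (N, 3), matching points in scan B
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Example with three hand-picked correspondences (hypothetical coordinates):
src = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
dst = [[1, 1, 0], [1, 2, 0], [0, 1, 0]]
R, t = rigid_transform(src, dst)
print(R, t)   # recovers a 90-degree rotation plus a (1, 1, 0) translation
```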

Deep Learning Machine Vision System with High Object Recognition Rate using Multiple-Exposure Image Sensing Method

  • Park, Min-Jun;Kim, Hyeon-June
    • Journal of Sensor Science and Technology
    • /
    • v.30 no.2
    • /
    • pp.76-81
    • /
    • 2021
  • In this study, we propose a machine vision system with a high object recognition rate. By utilizing a multiple-exposure image sensing technique, the proposed deep learning-based machine vision system can cover a wide light-intensity range without additional training over that range. If the system fails to recognize object features, it operates in a multiple-exposure sensing mode and detects the target object that is obscured in near-dark or overly bright regions. Furthermore, short- and long-exposure images from the multiple-exposure sensing mode are synthesized to obtain accurate object feature information, generating image information with a wide dynamic range. Even though the object recognition resources for the deep learning process covered a light-intensity range of only 23 dB, the prototype machine vision system with the multiple-exposure imaging method demonstrated object recognition over a light-intensity range of up to 96 dB.
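As a hedged illustration of the exposure-synthesis idea described above (an assumed per-pixel weighting scheme, not the authors' implementation; `well_exposedness` and the synthetic frames are illustrative), the sketch below fuses a short- and a long-exposure frame into a wider-dynamic-range result.

```python
# A minimal sketch (assumed approach, not the paper's implementation): merge a
# short- and a long-exposure frame by weighting each pixel according to how
# well exposed it is in each frame.
import numpy as np

def well_exposedness(img, sigma=0.2):
    """Weight pixels near mid-gray highly, under/over-exposed pixels low."""
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_exposures(short_exp, long_exp, eps=1e-6):
    """Per-pixel weighted average of two exposures, both normalized to [0, 1]."""
    w_s = well_exposedness(short_exp)
    w_l = well_exposedness(long_exp)
    return (w_s * short_exp + w_l * long_exp) / (w_s + w_l + eps)

# Example with synthetic frames (hypothetical data):
rng = np.random.default_rng(0)
short_exp = rng.integers(0, 256, (4, 4)).astype(float) / 255.0
long_exp = np.clip(short_exp * 4.0, 0.0, 1.0)   # simulated longer exposure
fused = fuse_exposures(short_exp, long_exp)
print(fused.round(2))
```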

Super Resolution Reconstruction from Multiple Exposure Images (노출이 다른 다수의 입력 영상을 사용한 초해상도 영상 복원)

  • Lee, Tae-Hyoung;Ha, Ho-Gun;Lee, Cheol-Hee;Ha, Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.49 no.1
    • /
    • pp.73-80
    • /
    • 2012
  • Recent research efforts have focused on combining high dynamic range imaging with super-resolution reconstruction to enhance both the intensity range and resolution of images. The processes developed to date start with a set of multiple-exposure input images with low dynamic range (LDR) and low resolution (LR), and require several procedural steps: conversion from LDR to HDR, SR reconstruction, and tone mapping. Input images captured with irregular exposure steps have an impact on the quality of the output images from this process. In this paper, we present a simplified framework to replace the separate procedures of previous methods that is also robust to different sets of input images. The proposed method first calculates weight maps to determine the best visible parts of the input images. The weight maps are then applied directly to SR reconstruction, and the best visible parts for the dark and highlighted areas of each input image are preserved without LDR-to-HDR conversion, resulting in high dynamic range. A new luminance control factor (LCF) is used during SR reconstruction to adjust the luminance of input images captured during irregular exposure steps and ensure acceptable luminance of the resulting output images. Experimental results show that the proposed method produces SR images of HDR quality with luminance compensation.
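As a hedged sketch of the two ingredients named above, weight maps and a luminance control factor (the exact formulations here are assumptions, not the authors'; `visibility_weight` and `luminance_control_factor` are illustrative names), the code below normalizes each exposure toward a reference luminance and blends the frames with visibility weights; the SR reconstruction step itself is omitted.

```python
# A minimal sketch (assumptions, not the authors' exact formulation): per-image
# visibility weight maps that favor well-exposed pixels, plus a simple luminance
# control factor that scales each exposure toward a reference brightness before
# the frames are combined in a reconstruction step.
import numpy as np

def visibility_weight(img, sigma=0.25):
    """High weight for pixels away from the dark/saturated extremes."""
    return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

def luminance_control_factor(img, reference_median):
    """Hypothetical LCF: scale factor aligning an image's median luminance
    with a chosen reference median."""
    return reference_median / max(np.median(img), 1e-6)

# Example: three exposures of the same scene, normalized to [0, 1].
rng = np.random.default_rng(1)
base = rng.random((8, 8))
exposures = [np.clip(base * g, 0, 1) for g in (0.5, 1.0, 2.0)]

ref_median = np.median(exposures[1])
weights = [visibility_weight(e) for e in exposures]
adjusted = [np.clip(e * luminance_control_factor(e, ref_median), 0, 1)
            for e in exposures]
fused = sum(w * a for w, a in zip(weights, adjusted)) / (sum(weights) + 1e-6)
print(fused.shape)
```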

On Shape Recovery of 3D Object from Multiple Range Images (시점이 다른 다수의 거리 영상으로부터 3차원 물체의 형상 복원)

  • Kim, Jun-Young;Yun, Il-Dong;Lee, Sang-Uk
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.37 no.1
    • /
    • pp.1-15
    • /
    • 2000
  • To reconstruct a 3-D shape, it is a common strategy to acquire multiple range images from different viewpoints and integrate them into a common coordinate frame. In this paper, we focus on the registration and integration processes for combining all range images into one surface model. For registration, we propose a 2-step algorithm consisting of a rough registration step that uses all data points and a fine registration step that uses only highly curved data points. For integration, we propose a new algorithm, referred to as the 'multi-registration' technique, to alleviate the error accumulation that occurs when pair-wise registration is applied to each range image sequentially to transform it into a common reference frame. Intensive experiments were performed on various real range data; all range images were registered within one minute on a Pentium 150 MHz PC. The results show that the proposed algorithms register and integrate multiple range images within a tolerable error bound in a reasonable computation time, and that the total error across all range images is equalized. An ICP-style registration sketch follows this entry.

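As a rough stand-in for the iterative registration discussed above (not the authors' 2-step or multi-registration algorithms; `icp` and the synthetic clouds are illustrative), the following sketch shows a basic point-to-point ICP loop: match nearest neighbours, estimate a rigid transform, repeat.

```python
# A minimal point-to-point ICP sketch: repeatedly match each source point to
# its nearest target point, estimate the best rigid transform, and apply it.
import numpy as np

def best_rigid(src, dst):
    """Least-squares rotation and translation mapping src onto dst."""
    c_s, c_d = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - c_s).T @ (dst - c_d))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_d - R @ c_s

def icp(src, dst, iters=30):
    """Align point cloud src (N,3) to dst (M,3); returns the transformed src."""
    cur = src.copy()
    for _ in range(iters):
        # Brute-force nearest neighbours (fine for small clouds).
        d = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid(cur, matched)
        cur = cur @ R.T + t
    return cur

# Example: recover a known small rotation of a synthetic cloud.
rng = np.random.default_rng(2)
dst = rng.random((50, 3))
angle = np.deg2rad(10)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
src = (dst - dst.mean(0)) @ R_true.T + dst.mean(0) + 0.01
aligned = icp(src, dst)
print(np.abs(aligned - dst).mean())   # should be near zero after convergence
```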

Registration multiple range views (복수의 거리영상 간의 변환계수의 추출)

  • 정도현;윤일동;이상욱
    • Journal of the Korean Institute of Telematics and Electronics S
    • /
    • v.34S no.2
    • /
    • pp.52-62
    • /
    • 1997
  • To reconstruct the complete 3-D shape of an object, several range images from different viewpoints must be merged into a single model. The process of extracting the transformation parameters between multiple range views is called registration. In this paper, we propose a new algorithm to find these transformation parameters. The proposed algorithm consists of two steps: an initial estimation and an iterative update of the transformation. To guess the initial transformation, we modify the principal axes by considering the projection effect due to the difference of viewpoints. Then the following process is iterated in order to extract the exact transformation parameters between the range views: for every point of the common region, find the nearest point among the neighborhood of the current corresponding point, whose correspondence is defined by the reverse calibration of the range finder; then update the transformation to satisfy the new correspondences. To evaluate the performance of the proposed registration algorithm, experiments were performed on real range data acquired by a space-encoding range finder. The experimental results show that the proposed initial estimation accelerates the subsequent iterative registration step. A principal-axes alignment sketch follows this entry.

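The sketch below illustrates a plain principal-axes initial alignment, a common heuristic in this family of methods; the projection-effect correction described in the paper is not modelled, and the skewness-based sign fix plus the synthetic data are assumptions made for the toy example.

```python
# A minimal sketch of a principal-axes initial alignment: align the centroids
# and the principal-axis frames of the two clouds to obtain a starting
# transform for the iterative refinement step.
import numpy as np

def principal_frame(points):
    """Centroid and principal axes (columns), signs fixed via projection skewness."""
    c = points.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov((points - c).T))
    vecs = vecs[:, np.argsort(vals)[::-1]]
    proj = (points - c) @ vecs
    vecs *= np.sign((proj ** 3).sum(axis=0))   # resolve the +/- axis ambiguity
    return c, vecs

def initial_alignment(src, dst):
    """Rotation and translation mapping src's principal frame onto dst's."""
    c_s, A_s = principal_frame(src)
    c_d, A_d = principal_frame(dst)
    R = A_d @ A_s.T
    return R, c_d - R @ c_s

# Example: a skewed synthetic cloud and a rotated, translated copy of it.
rng = np.random.default_rng(3)
dst = rng.exponential(scale=[5.0, 2.0, 0.5], size=(200, 3))
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
src = dst @ R_true.T + np.array([1.0, -2.0, 0.5])
R0, t0 = initial_alignment(src, dst)
print(np.abs(src @ R0.T + t0 - dst).mean())   # close to zero for this toy case
```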

Highly Dense 3D Surface Generation Using Multi-image Matching

  • Noh, Myoung-Jong;Cho, Woo-Sug;Bang, Ki-In
    • ETRI Journal
    • /
    • v.34 no.1
    • /
    • pp.87-97
    • /
    • 2012
  • This study presents an automatic matching method for generating a dense, accurate, and discontinuity-preserved digital surface model (DSM) using multiple images acquired by an aerial digital frame camera. The proposed method consists of two main procedures: area-based multi-image matching (AMIM) and stereo-pair epipolar line matching (SELM). AMIM evaluates the sum of the normalized cross correlation of corresponding image points from multiple images to determine the optimal height of an object point. A novel method is introduced for determining the search height range and incremental height, which are necessary for the vertical line locus used in the AMIM. This procedure also includes the means to select the best reference and target images for each strip so that multi-image matching can resolve the common problem over occlusion areas. The SELM extracts densely positioned distinct points along epipolar lines from the multiple images and generates a discontinuity-preserved DSM using geometric and radiometric constraints. The matched points derived by the AMIM are used as anchor points between overlapped images to find conjugate distinct points using epipolar geometry. The performance of the proposed method was evaluated for several different test areas, including urban areas.
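As a hedged simplification of the matching criterion described above (not the full AMIM with its vertical line locus and reference/target image selection; `best_height` and the toy patches are illustrative), the sketch below scores candidate object heights by the summed normalized cross correlation against a reference patch.

```python
# A minimal sketch of the matching criterion: score each candidate height by
# the sum of normalized cross correlations between a reference patch and the
# patches observed in the other images, and keep the best-scoring height.
import numpy as np

def ncc(a, b, eps=1e-9):
    """Normalized cross correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps))

def best_height(ref_patch, patches_per_height, heights):
    """patches_per_height[k] holds the patches each target image sees when the
    object point is hypothesized at heights[k]."""
    scores = [sum(ncc(ref_patch, p) for p in patches)
              for patches in patches_per_height]
    k = int(np.argmax(scores))
    return heights[k], scores[k]

# Toy example: the correct height reproduces the reference patch in both targets.
rng = np.random.default_rng(4)
ref = rng.random((7, 7))
heights = [10.0, 10.5, 11.0]
patches_per_height = [
    [rng.random((7, 7)), rng.random((7, 7))],          # wrong height
    [ref + 0.01 * rng.random((7, 7)), ref.copy()],     # correct height
    [rng.random((7, 7)), rng.random((7, 7))],          # wrong height
]
print(best_height(ref, patches_per_height, heights))   # -> (10.5, ~2.0)
```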

Multi-camera based Images through Feature Points Algorithm for HDR Panorama

  • Yeong, Jung-Ho
    • International journal of advanced smart convergence
    • /
    • v.4 no.2
    • /
    • pp.6-13
    • /
    • 2015
  • With the spread of various kinds of cameras, such as digital cameras and DSLRs, and a growing interest in high-definition and high-resolution images, methods that synthesize multiple images are being actively studied. High Dynamic Range (HDR) images store light exposure over a much wider numerical range than normal digital images, so they can represent the intensity of light in real scenes, as produced by real light sources, quite accurately. This study suggests a feature-point synthesis algorithm to improve the performance of the HDR panorama recognition method at the recognition and coordination levels by classifying the feature points used for image recognition across multiple frames.
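As a generic illustration of the feature-point correspondence step such a method builds on (not the paper's classification scheme; `mutual_matches` and the random descriptors are illustrative), the sketch below matches binary descriptors between two frames by Hamming distance with a mutual cross-check.

```python
# A minimal sketch of descriptor matching between two frames: match binary
# feature descriptors by Hamming distance with a mutual (cross-check) test,
# the kind of correspondence set a panorama/HDR alignment step builds on.
import numpy as np

def hamming_matrix(d1, d2):
    """Pairwise Hamming distances between two sets of binary descriptors."""
    return (d1[:, None, :] != d2[None, :, :]).sum(axis=2)

def mutual_matches(d1, d2):
    """Index pairs (i, j) where i's best match is j and j's best match is i."""
    dist = hamming_matrix(d1, d2)
    best12 = dist.argmin(axis=1)
    best21 = dist.argmin(axis=0)
    return [(i, j) for i, j in enumerate(best12) if best21[j] == i]

# Toy example: descriptors of frame 2 are a shuffled copy of frame 1's.
rng = np.random.default_rng(5)
desc1 = rng.integers(0, 2, (20, 256)).astype(np.uint8)
perm = rng.permutation(20)
desc2 = desc1[perm]
matches = mutual_matches(desc1, desc2)
print(all(perm[j] == i for i, j in matches))   # True: every match is correct
```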

Terahertz Nondestructive Time-of-flight Imaging with a Large Depth Range

  • Kim, Hwan Sik;Kim, Jangsun;Ahn, Yeong Hwan
    • Current Optics and Photonics
    • /
    • v.6 no.6
    • /
    • pp.619-626
    • /
    • 2022
  • In this study, we develop a three-dimensional (3D) terahertz time-of-flight (THz-TOF) imaging technique with a large depth range, based on asynchronous optical sampling (ASOPS) methods. THz-TOF imaging with the ASOPS technique enables rapid scanning with a time-delay span of 10 ns. This means that a depth range of 1.5 m is possible in principle, whereas in practice it is limited by the focus depth determined by the optical geometry, such as the focal length of the scan lens. We characterize the spatial resolution of objects at different vertical positions with a focal length of 5 cm. The lateral resolution varies from 0.8-1.8 mm within the vertical range of 50 mm. We obtain THz-TOF images for samples with multiple reflection layers; the horizontal and vertical locations of the objects are successfully determined from the 2D cross-sectional images, or from reconstructed 3D images. For instance, we can identify metallic objects embedded in insulating enclosures having a vertical depth range greater than 30 mm. For feasible practical use, we employ the proposed technique to locate a metallic object within a thick chocolate bar, which is not accessible via conventional transmission geometry.
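The quoted depth figures follow from the round-trip time-of-flight relation depth = c·Δt/2 (ignoring the refractive index of the medium); the small sketch below, with an illustrative `depth_from_delay` helper, checks the 10 ns and 30 mm numbers.

```python
# A small check of the round-trip relation: a reflection delayed by time dt
# corresponds to a depth of c * dt / 2, so a 10 ns delay span gives a 1.5 m
# depth range in principle (refractive index of the medium neglected).
C = 299_792_458.0          # speed of light in vacuum, m/s

def depth_from_delay(dt_seconds):
    """Depth of a reflecting layer from its round-trip time delay."""
    return C * dt_seconds / 2.0

print(depth_from_delay(10e-9))    # ~1.50 m for the full 10 ns span
print(depth_from_delay(200e-12))  # ~30 mm, the embedded-object depth scale
```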

View Synthesis and Coding of Multi-view Data in Arbitrary Camera Arrangements Using Multiple Layered Depth Images

  • Yoon, Seung-Uk;Ho, Yo-Sung
    • Journal of Multimedia Information System
    • /
    • v.1 no.1
    • /
    • pp.1-10
    • /
    • 2014
  • In this paper, we propose a new view synthesis technique for coding multi-view color and depth data in arbitrary camera arrangements. We treat each camera position as a 3-D point in world coordinates and build clusters of those vertices. Color and depth data within a cluster are gathered into one camera position using a hierarchical representation based on the concept of the layered depth image (LDI). Since one camera can cover only a limited viewing range, we set multiple reference cameras so that multiple LDIs are generated to cover the whole viewing range. We can therefore enhance the visual quality of the views reconstructed from multiple LDIs compared with that from a single LDI. Experimental results show that the proposed scheme gives better coding performance under arbitrary camera configurations in terms of PSNR and subjective visual quality. A camera-clustering sketch follows this entry.

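As a generic stand-in for the camera-grouping step (the paper does not necessarily use k-means; `kmeans` and the arc-shaped rig are illustrative), the sketch below clusters camera positions treated as 3-D points, one reference LDI per cluster.

```python
# A minimal sketch: group camera positions (3-D points) with plain k-means so
# that one reference LDI can be built per cluster.
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means; returns (centers, labels)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(points[:, None] - centers[None], axis=2).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = points[labels == j].mean(axis=0)
    return centers, labels

# Example: eight cameras on an arc, grouped into two clusters (hypothetical rig).
angles = np.deg2rad(np.linspace(-40, 40, 8))
cameras = np.stack([np.sin(angles), np.zeros_like(angles), np.cos(angles)], axis=1)
centers, labels = kmeans(cameras, k=2)
print(labels)   # e.g. [0 0 0 0 1 1 1 1]: one reference position per group
```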

AN EXPERIMENTAL EXAMINATION OF MULTIMODAL IMAGING SYSTEM FOR IMPLANT SITE ASSESSMENT (인공치아 이식부위 분석을 위한 다기능 영상체계의 실험적 검사)

  • Park Chang-Seo;Kim Kee-Deog
    • Journal of Korean Academy of Oral and Maxillofacial Radiology
    • /
    • v.28 no.1
    • /
    • pp.7-16
    • /
    • 1998
  • The Scanora® X-ray unit uses the principles of narrow-beam radiography and spiral tomography. Starting with a panoramic overview as a scout image, multiple tomographic projections can be selected. This study evaluated the accuracy of spiral tomography in comparison with routine panoramic radiography for dental implant treatment planning. An experimental study was performed on a cadaver mandible to assess the accuracy of panoramic radiography and spiral tomography film images for measuring metallic spheres. After the radiographic images of the metallic spheres on the surgical stent were measured and corrected for the fixed magnification of the radiographic images, the following results were obtained. 1. In the optimal position of the mandible, the panoramic radiography images showed minimal horizontal and vertical distortion. The mean horizontal and vertical magnification errors in anterior sites were 5.25% and 0.75%, respectively; in posterior sites they were 0.50% and 1.50%, respectively. 2. When the mandible was displaced forward or placed in an eccentric position, the magnification error of the panoramic radiography images increased significantly over the optimal position. Overall, the mean horizontal magnification error of the anterior site across the different positions changed dramatically, within a range of -17.25% to 39.00%, compared to the posterior range of -5.25% to 8.50%, whereas the mean vertical magnification error stayed within 0.5% to 3.75% for all mandibular positions. 3. The magnification effects in the tomographic scans were nearly identical for the anterior and posterior sites, with ranges of 2.00% to 5.75% in the horizontal and 4.50% to 5.50% in the vertical dimension. 4. A statistically significant difference between the anterior and posterior measurements was found in the horizontal measurements of the panoramic radiography images for the forward- and backward-displaced positions of the mandible (P<0.05). A significant difference between the optimal panoramic and tomographic projections was found only in the vertical measurements (P<0.05). A magnification-error calculation sketch follows this entry.

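As a small illustration of how the reported magnification errors can be computed (the formula below, percentage deviation of the corrected measurement from the true sphere size after dividing out a fixed magnification, is an assumption based on the description above, and the numbers are hypothetical):

```python
# A small sketch of an assumed magnification-error figure: percentage deviation
# of the measured size from the true sphere size after the fixed radiographic
# magnification has been divided out.
def magnification_error_percent(measured_mm, true_mm, fixed_magnification=1.0):
    """Percent error of a corrected radiographic measurement vs. the true size."""
    corrected = measured_mm / fixed_magnification
    return (corrected - true_mm) / true_mm * 100.0

# Example with hypothetical numbers: a 5.0 mm sphere imaged at 1.3x nominal
# magnification and measured as 6.84 mm on the film.
print(round(magnification_error_percent(6.84, 5.0, 1.3), 2))   # ~5.23 %
```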