• Title/Summary/Keyword: 3-D range image

Search Results: 381

Development of Structured Light 3D Scanner Based on Image Processing

  • Kim, Kyu-Ha;Lee, Sang-Hyun
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.11 no.4
    • /
    • pp.49-58
    • /
    • 2019
  • 3D scanners are needed in many fields, and their range of application is expanding rapidly. In particular, they are used to reduce costs at various stages of product development and production, and the importance of quality inspection in the manufacturing industry is growing. The structured-light optical system applied in this study is suitable for high-precision measurement of molds, pressed parts, and other precision products, so an economical and effective 3D scanning system for inspection in the manufacturing industry can be implemented. We developed a structured-light 3D scanner capable of high-precision measurement using a Digital Light Processing (DLP) projector and a camera. The proposed scanner realizes an economical and effective 3D scanning system for measurement inspection in the manufacturing industry.
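
DLP structured-light scanners project a sequence of coded stripe patterns and decode, per camera pixel, which projector column illuminated it; correspondences then yield depth by triangulation. A minimal sketch of the decoding step, assuming Gray-code stripes purely for illustration (the abstract does not specify the pattern set):

```python
def decode_gray(bits):
    """Decode a per-pixel Gray-code bit sequence (MSB first) into a
    projector column index. Each bit records whether the pixel was lit
    in one projected stripe pattern."""
    b = bits[0]          # binary MSB equals Gray MSB
    out = b
    for g in bits[1:]:
        b ^= g           # binary[i] = binary[i-1] XOR gray[i]
        out = (out << 1) | b
    return out

# With 10 patterns, 2**10 = 1024 projector columns can be distinguished.
print(decode_gray([1, 1, 1]))  # Gray 111 -> column 5
```

Gray codes are popular here because adjacent columns differ in only one bit, so a single mis-thresholded pattern shifts the decoded column by at most one.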

Design and Implementation of an Approximate Surface Lens Array System based on OpenCL (OpenCL 기반 근사곡면 렌즈어레이 시스템의 설계 및 구현)

  • Kim, Do-Hyeong;Song, Min-Ho;Jung, Ji-Sung;Kwon, Ki-Chul;Kim, Nam;Kim, Kyung-Ah;Yoo, Kwan-Hee
    • The Journal of the Korea Contents Association
    • /
    • v.14 no.10
    • /
    • pp.1-9
    • /
    • 2014
  • The integral images used for autostereoscopic 3D displays are generally generated for a flat lens array, but a flat lens array cannot provide a wide viewing angle for the generated integral image. To make up for this weakness, curved lens arrays have been proposed; owing to technical and cost problems, an approximate surface lens array composed of several flat lens arrays is used instead of an ideal curved lens array. In this paper, we constructed an approximate surface lens array of 20×8 square flat lenses arranged on a sphere of 100 mm radius, obtaining about twice the viewing angle of a flat lens array. In particular, unlike existing research that generates integral images manually, we propose an OpenCL GPU parallel-processing algorithm for generating integral images in real time. As a result, we obtained 12-20 frames/sec for various 3D volume data with a 15×15 approximate surface lens array.

REAL-TIME 3D MODELING FOR ACCELERATED AND SAFER CONSTRUCTION USING EMERGING TECHNOLOGY

  • Jochen Teizer;Changwan Kim;Frederic Bosche;Carlos H. Caldas;Carl T. Haas
    • International conference on construction engineering and project management
    • /
    • 2005.10a
    • /
    • pp.539-543
    • /
    • 2005
  • The research presented in this paper enables real-time 3D modeling to help make construction processes faster, more predictable, and safer. Initial research efforts used an emerging sensor technology and proved its usefulness in acquiring range information for the detection and efficient representation of static and moving objects. Based on the time-of-flight principle, the sensor acquires range and intensity information for each image pixel across the sensor's entire field of view in real time, at frequencies of up to 30 Hz. However, real-time range-data processing algorithms still need to be developed to rapidly turn range information into meaningful 3D computer models. This research ultimately targets safer heavy-equipment operation. The paper compares (a) a previous research effort in convex-hull modeling using sparse range point clouds from a single laser-beam range finder with (b) high-frame-rate Flash LADAR (Laser Detection and Ranging) scanning for complete scene modeling. The presented research will demonstrate whether Flash LADAR technology can play an important role in real-time modeling of infrastructure assets in the near future.
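
The time-of-flight principle mentioned above reduces to one formula: range = c·t/2, since the light pulse travels out and back. A minimal sketch of turning a per-pixel round-trip-time frame into a range image:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_to_range(round_trip_time_s: float) -> float:
    """Range = c * t / 2; divide by two for the out-and-back path."""
    return C * round_trip_time_s / 2.0

def range_image(tof_frame):
    """Map a 2D frame of per-pixel round-trip times to ranges (m),
    as a Flash LADAR sensor does for its whole field of view at once."""
    return [[tof_to_range(t) for t in row] for row in tof_frame]

frame = [[20e-9, 40e-9], [60e-9, 80e-9]]  # nanosecond round trips
print(range_image(frame))  # ranges of roughly 3, 6, 9, 12 metres
```

Note the timing precision this demands: a 1 cm range resolution corresponds to about 67 ps of round-trip time.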


Low Cost Omnidirectional 2D Distance Sensor for Indoor Floor Mapping Applications

  • Kim, Joon Ha;Lee, Jun Ho
    • Current Optics and Photonics
    • /
    • v.5 no.3
    • /
    • pp.298-305
    • /
    • 2021
  • Modern distance-sensing methods employ various measurement principles, including triangulation, time-of-flight, confocal, interferometric, and frequency-comb techniques. Among them, the triangulation method, with a laser light source and an image sensor, is widely used in low-cost applications. We developed an omnidirectional two-dimensional (2D) distance sensor based on the triangulation principle for indoor floor-mapping applications. The sensor has a range of 150-1500 mm with a relative resolution better than 4% over the full range and 1% at a 1 m distance. It rotationally scans a compact one-dimensional (1D) distance sensor composed of a near-infrared (NIR) laser diode, a folding mirror, an imaging lens, and an image detector. We designed the sensor layout and configuration to satisfy the required measurement range and resolution, selecting easily available components in a special effort to reduce cost. We built a prototype and tested it with seven representative indoor wall specimens (white wallpaper, gray wallpaper, black wallpaper, furniture wood, black leather, brown leather, and white plastic) under typical indoor illumination of 200 lux, on a floor lit by ceiling-mounted fluorescent lamps. The proposed sensor provided reliable distance readings for all specimens over the required measurement range (150-1500 mm), with a measurement resolution of 4% overall and 1% at 1 m, regardless of illumination conditions.
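
Laser triangulation infers distance from how far the laser spot's image shifts on the detector. A minimal sketch with hypothetical focal length and baseline values (the paper's actual optical parameters are not given in the abstract):

```python
# Hypothetical parameters for a 1D laser triangulation sensor.
FOCAL_LENGTH_MM = 8.0  # imaging lens focal length (assumed)
BASELINE_MM = 30.0     # laser-to-lens baseline (assumed)

def triangulate(spot_offset_mm: float) -> float:
    """Distance = f * b / x: as the target recedes, the imaged laser
    spot shifts toward the baseline axis by x on the detector."""
    return FOCAL_LENGTH_MM * BASELINE_MM / spot_offset_mm

# With these assumed values, the 150-1500 mm range maps to spot
# offsets between 1.6 mm and 0.16 mm on the detector.
print(triangulate(1.6), triangulate(0.16))
```

Because distance is inversely proportional to spot offset, a fixed pixel-level measurement uncertainty produces a distance error that grows roughly with the square of the distance, which is consistent with the sensor resolving 1% at 1 m but only 4% over the full range.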

Relative Localization for Mobile Robot using 3D Reconstruction of Scale-Invariant Features (스케일불변 특징의 삼차원 재구성을 통한 이동 로봇의 상대위치추정)

  • Kil, Se-Kee;Lee, Jong-Shill;Ryu, Je-Goon;Lee, Eung-Hyuk;Hong, Seung-Hong;Shen, Dong-Fan
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.55 no.4
    • /
    • pp.173-180
    • /
    • 2006
  • A key component of autonomous navigation for an intelligent home robot is localization and map building from features recognized in the environment, which requires accurate measurement of the relative location between the robot and the features. In this paper, we propose a relative localization algorithm based on 3D reconstruction of scale-invariant features from two images captured by two parallel cameras mounted on the front of the robot. We detect scale-invariant features in each image using SIFT (scale-invariant feature transform), match the feature points between the two images, and obtain the relative location by 3D reconstruction of the matched points. A conventional stereo camera requires highly precise extrinsic calibration and pixel-level matching between the two images; because our method uses two separate cameras with scale-invariant feature points, the extrinsic parameters are easy to set up. Furthermore, the 3D reconstruction needs no additional sensor, and its results can simultaneously be used for obstacle avoidance, map building, and localization. We set the distance between the two cameras to 20 cm and captured 3 frames per second. The experimental results show a maximum error of ±6 cm at ranges under 2 m and ±15 cm at ranges between 2 m and 4 m.
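
For parallel, rectified cameras, the 3D reconstruction of one matched feature pair reduces to standard disparity triangulation. A sketch using the paper's 20 cm baseline and a hypothetical focal length in pixels:

```python
def reconstruct_point(xl, xr, y, f_px, baseline_m):
    """Triangulate one matched feature from two parallel cameras.
    (xl, y) and (xr, y) are pixel coordinates relative to each image
    center; f_px is the focal length in pixels (700 is an assumed
    value, not from the paper)."""
    disparity = xl - xr                 # horizontal shift between views
    Z = f_px * baseline_m / disparity   # depth from disparity
    X = xl * Z / f_px                   # back-project to metric X, Y
    Y = y * Z / f_px
    return X, Y, Z

# 20 cm baseline, as in the paper; a 70 px disparity gives 2 m depth.
print(reconstruct_point(100.0, 30.0, 50.0, 700.0, 0.2))
```

The growth of error with range reported above follows directly from this model: depth is inversely proportional to disparity, so a one-pixel matching error costs more metres the farther the feature is.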

Impulse Noise Detection Using Self-Organizing Neural Network and Its Application to Selective Median Filtering (Self-Organizing Neural Network를 이용한 임펄스 노이즈 검출과 선택적 미디언 필터 적용)

  • Lee Chong Ho;Dong Sung Soo;Wee Jae Woo;Song Seung Min
    • The Transactions of the Korean Institute of Electrical Engineers D
    • /
    • v.54 no.3
    • /
    • pp.166-173
    • /
    • 2005
  • Preserving image features, edges, and details while filtering impulsive noise is an important problem: to avoid blurring the image, only corrupted pixels should be filtered. In this paper, we propose an effective impulse-noise detection method using a Self-Organizing Neural Network (SONN) that applies a median filter selectively, removing random-valued impulse noise while preserving image features, edges, and details. Using a 3×3 window, we obtain useful local features with which impulse-noise patterns are classified. The SONN is trained with sample image patterns, and each pixel is classified from its local information in the image. Experiments with various images at noise ratios of 5-15% show that our method performs better than other methods that use multiple threshold values for impulse-noise detection.
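
The selective-filtering step can be sketched as follows. Note that a simple threshold detector stands in here for the paper's trained SONN classifier; only the detect-then-filter structure is faithful to the method described:

```python
import statistics

def selective_median(img, detect):
    """Apply a 3x3 median only to pixels the detector flags as impulse
    noise, leaving clean pixels (edges, details) untouched."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[j][i] for j in (y - 1, y, y + 1)
                                for i in (x - 1, x, x + 1)]
            if detect(img[y][x], window):
                out[y][x] = statistics.median(window)
    return out

def naive_detector(center, window, thresh=60):
    """Crude stand-in for the SONN: flag a pixel whose value is far
    from its local median."""
    return abs(center - statistics.median(window)) > thresh

spike = [[10, 10, 10], [10, 255, 10], [10, 10, 10]]
print(selective_median(spike, naive_detector))  # impulse replaced by 10
```

The design point is exactly the one the abstract argues: filtering every pixel blurs edges, so the detector's precision matters as much as the filter itself.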

Development of large-scale 3D printer with position compensation system (구동부 변위의 보상이 가능한 지능형 대형 3D 프린터 개발)

  • Lee, Woo-Song;Park, Sung-Jin;Park, In-Soo
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.22 no.3
    • /
    • pp.293-301
    • /
    • 2019
  • Based on accurate image-processing technology, we developed a system that uses parallel light and an image sensor to measure drive errors (position error, straightness error, flatness error) at a distance with micrometer (μm) resolution, and applied it to a large 3D rapid-prototyping machine for real-time compensation. The system dramatically reduces the range of measurement error and enables intelligent 3D production of high-quality products.

3D Head Modeling using Depth Sensor

  • Song, Eungyeol;Choi, Jaesung;Jeon, Taejae;Lee, Sangyoun
    • Journal of International Society for Simulation Surgery
    • /
    • v.2 no.1
    • /
    • pp.13-16
    • /
    • 2015
  • Purpose: We conducted a study on reconstructing the shape of the head in 3D using a ToF depth sensor. A time-of-flight (ToF) camera is a range-imaging camera that resolves distance from the known speed of light, measuring the time of flight of a light signal between the camera and the subject for each point of the image. This is the safest way of measuring the head shape of plagiocephaly patients in 3D. The texture, appearance, and size of the head were reconstructed from the measured data, using the SDF method for a precise reconstruction. Materials and Methods: To generate a precise model, a mesh was generated using marching cubes and an SDF. Results: The ground truth was determined by measuring each of 10 participants three times, and the corresponding regions of the reconstructed 3D models were measured as well. Actual head circumference and the reconstructed model were measured according to the layer-3 standard, and measurement errors were calculated. We obtained accurate results with an average error of 0.9 cm (standard deviation 0.9, min 0.2, max 1.4). Conclusion: The suggested method completes the 3D model while minimizing errors and is very effective for quantitative and objective evaluation. However, because measurements were made according to the layer-3 standard, the measured range somewhat lacks the 3D information needed to manufacture protective helmets. The measurement range will need to be widened, by scanning the entire head circumference, to produce more precise and effective protective helmets in the future.
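
Fusing depth frames into a signed-distance volume, before extracting the mesh with marching cubes, is typically done with a weighted running average per voxel. A minimal sketch of that update; the truncation distance and uniform weighting are illustrative assumptions, not the paper's parameters:

```python
def tsdf_update(voxel, new_sdf, trunc=0.05):
    """voxel = (sdf, weight). Clamp the newly observed signed distance
    to the truncation band [-trunc, trunc], then fold it into the
    voxel's running weighted average."""
    d = max(-trunc, min(trunc, new_sdf))
    sdf, w = voxel
    return ((sdf * w + d) / (w + 1), w + 1)

# Averaging several noisy depth observations of one voxel:
v = (0.0, 0)
for observation in (0.04, -0.02, 0.03):
    v = tsdf_update(v, observation)
print(v)
```

Averaging many frames this way is what suppresses the per-frame noise of a ToF sensor enough for a smooth marching-cubes surface at the SDF's zero crossing.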

Enhancing Depth Accuracy on the Region of Interest in a Scene for Depth Image Based Rendering

  • Cho, Yongjoo;Seo, Kiyoung;Park, Kyoung Shin
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.8 no.7
    • /
    • pp.2434-2448
    • /
    • 2014
  • This research proposes domain-division depth-map quantization for multiview intermediate-image generation using Depth Image-Based Rendering (DIBR). The technique quantizes depth per pixel according to the percentage of depth bits assigned to each domain of the depth range. A comparative experiment investigated the potential benefits of the proposed method against linear depth quantization for DIBR multiview intermediate-image generation, evaluating three quantization methods on computer-generated 3D scenes of varying complexity and background while varying the depth resolution. The results show that the proposed domain-division depth quantization outperforms the linear method on 7-bit or lower depth maps, especially in scenes with large objects.
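
The idea of domain-division quantization is to spend more of the available depth codes on the sub-range that matters most for rendering quality. A sketch of one possible scheme; the domain split and code fractions here are illustrative assumptions, as the abstract does not give the paper's exact allocation:

```python
def make_quantizer(domains, bits):
    """domains: list of (near, far, fraction), where fraction is the
    share of the 2**bits codes assigned to that depth sub-range.
    Returns a function mapping depth z to an integer code."""
    levels = 2 ** bits
    table, code = [], 0
    for near, far, frac in domains:
        n = max(1, round(levels * frac))
        table.append((near, far, code, n))
        code += n
    last = code - 1

    def quantize(z):
        for near, far, start, n in table:
            if near <= z < far:
                t = (z - near) / (far - near)   # linear within the domain
                return start + min(n - 1, int(t * n))
        return last  # clamp depths at or beyond the far plane

    return quantize

# Give the near region (where DIBR warping errors are most visible)
# 75% of the 7-bit codes; the far region gets the remaining 25%.
q = make_quantizer([(0.0, 2.0, 0.75), (2.0, 10.0, 0.25)], bits=7)
print(q(0.5), q(5.0))
```

Compared with linear quantization, the near domain here gets 96 of the 128 codes over a fifth of the depth range, so near-object depth steps are far finer at the same bit budget.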

Design and Fabrication of K-band multi-channel receiver for short-range RADAR (근거리 레이더용 K대역 다채널 전단 수신기 설계 및 제작)

  • Kim, Sang-Il;Lee, Seung-Jun;Lee, Jung-Soo;Lee, Bok-Hyung
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.37 no.7A
    • /
    • pp.545-551
    • /
    • 2012
  • In this paper, a K-band multi-channel receiver was designed and fabricated for low-noise amplification and down-conversion to L-band. The fabricated receiver incorporates a GaAs-HEMT low-noise amplifier (LNA) with a noise figure below 2 dB, together with an image-rejection (IR) filter and an IR mixer that reject the image frequency and improve the intermodulation-distortion (IMD) characteristics. Test results show a noise figure below 3.8 dB, a conversion gain above 27 dB, and an IP1dB (input 1-dB gain-compression point) of -9.5 dB or higher.