• Title/Summary/Keyword: Single camera reconstruction


Shape Adaptive Searching Region to Find Focused Image Points in 3D Shape Reconstruction (3차원 형체복원에 있어서 측정면에 적응적인 초점화소 탐색영역 결정기법)

  • 김현태;한문용;홍민철;차형태;한헌수
    • Institute of Control, Robotics and Systems: Conference Proceedings (제어로봇시스템학회 학술대회논문집)
    • /
    • 2000.10a
    • /
    • pp.77-77
    • /
    • 2000
  • The shape of a small or curved object is usually reconstructed with a single camera by moving its lens position to find a sequence of focused images. Most conventional methods have used a window of fixed shape to evaluate the focus measure, which degrades accuracy. To solve this problem, this paper proposes a new approach using a shape-adaptive window: the shape of the object is estimated at every step, and a window of the same shape is applied to calculate the focus measure. The focus measure is based on the variance of the pixels inside the window. Experimental results are included.
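The variance-based focus measure described above can be sketched in a few lines of plain Python; the window here is a flat list of pixel intensities, and the sample values are hypothetical:

```python
def focus_measure(window):
    """Variance of the pixel intensities inside a (shape-adaptive) window.
    Sharper focus produces higher local contrast, hence higher variance."""
    n = len(window)
    mean = sum(window) / n
    return sum((p - mean) ** 2 for p in window) / n

# Hypothetical samples: a high-contrast (in-focus) window vs. a blurred one.
sharp = [10, 200, 15, 190, 20, 210, 5, 205]
blurred = [100, 110, 105, 108, 102, 107, 104, 106]
assert focus_measure(sharp) > focus_measure(blurred)
```

In the paper's setting this measure would be evaluated per lens position, keeping the position that maximizes it at each pixel.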


3D Dense Surface Reconstruction from Single-Camera Video (단일 비디오 카메라를 이용한 3차원 구조의 조밀한 복원)

  • 박정우;박종승;황용구;이만재
    • Proceedings of the Korean Information Science Society Conference
    • /
    • 2004.04b
    • /
    • pp.733-735
    • /
    • 2004
  • This paper describes a method for reconstructing a 3D object from a sequence of images obtained from a single camera. Under the assumption that the camera's intrinsic parameters remain constant, the method uses multiple images from one camera without a separate calibration step. The proposed approach relies on a simplification of the camera matrix among the intrinsic parameters and on projective geometry. It is particularly useful for AR (augmented reality), where virtual graphical models are added to real video frames. The experiments were performed on several real video streams, and the results of reconstructing 3D structure from single-camera video demonstrate the usefulness of the system.
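The constant-intrinsics assumption means one calibration matrix K is shared by every frame. A minimal pinhole-projection sketch (all numbers hypothetical, not the paper's actual camera model) illustrates the quantity being simplified:

```python
def project(K, R, t, X):
    """Pinhole projection x ~ K(RX + t); reusing the same K across frames
    encodes the assumption of constant camera intrinsics."""
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    x = [sum(K[i][j] * Xc[j] for j in range(3)) for i in range(3)]
    return (x[0] / x[2], x[1] / x[2])

K = [[100, 0, 0], [0, 100, 0], [0, 0, 1]]   # simplified calibration matrix
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]       # identity rotation
u, v = project(K, R, [0, 0, 10], [1, 2, 0]) # 3D point 10 units in front
assert (u, v) == (10.0, 20.0)
```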


Simple image artifact removal technique for more accurate iris diagnosis

  • Kim, Jeong-lae;Kim, Soon Bae;Jung, Hae Ri;Lee, Woo-cheol;Jeong, Hyun-Woo
    • International journal of advanced smart convergence
    • /
    • v.7 no.4
    • /
    • pp.169-173
    • /
    • 2018
  • Iris diagnosis based on color and texture information is a novel approach that can represent the current state of an internal organ or the overall health condition of a person. In the analysis of iris images, critical image artifacts can prevent proper interpretation of the iris textures. Here, we developed an iris diagnosis system based on a hand-held imaging probe consisting of a single 8-megapixel camera sensor module, two pairs of 400-700 nm LEDs, and a guide beam. Two original images with different light-noise patterns were acquired in succession, and a light-noise-free image was then reconstructed and demonstrated using the proposed artifact-removal approach.
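The abstract does not specify the fusion rule; one simple sketch, assuming the light noise is additive glare that only raises intensity and that the two frames' glare regions do not overlap, combines the successively captured frames by a per-pixel minimum:

```python
def remove_light_artifacts(img_a, img_b):
    """Fuse two frames whose glare patterns do not overlap by keeping the
    darker sample at each pixel (glare only increases intensity)."""
    return [[min(a, b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(img_a, img_b)]

# Hypothetical 2x3 frames: each has a glare spot (255) in a different region.
frame_a = [[50, 255, 60], [70, 80, 90]]
frame_b = [[50, 52, 60], [70, 255, 90]]
assert remove_light_artifacts(frame_a, frame_b) == [[50, 52, 60], [70, 80, 90]]
```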

LFFCNN: Multi-focus Image Synthesis in Light Field Camera (LFFCNN: 라이트 필드 카메라의 다중 초점 이미지 합성)

  • Hyeong-Sik Kim;Ga-Bin Nam;Young-Seop Kim
    • Journal of the Semiconductor & Display Technology
    • /
    • v.22 no.3
    • /
    • pp.149-154
    • /
    • 2023
  • This paper presents a novel approach to multi-focus image fusion using light field cameras. The proposed neural network, LFFCNN (Light Field Focus Convolutional Neural Network), is composed of three main modules: feature extraction, feature fusion, and feature reconstruction. Specifically, the feature extraction module incorporates SPP (Spatial Pyramid Pooling) to effectively handle images of various scales. Experimental results demonstrate that the proposed model not only effectively fuses multiple multi-focus images into a single all-in-focus image, but also offers more efficient and robust focus fusion than existing methods.
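The scale-handling role of SPP can be sketched in plain Python: the feature map is max-pooled over grids of several sizes and the results concatenated, so the output length is fixed regardless of input scale. This is a simplified stand-in for the SPP idea, not the LFFCNN implementation:

```python
def spatial_pyramid_pool(fmap, levels=(1, 2)):
    """Max-pool a 2-D feature map over 1x1 and 2x2 grids and concatenate,
    producing a fixed-length descriptor for any input size."""
    h, w = len(fmap), len(fmap[0])
    pooled = []
    for n in levels:
        for i in range(n):
            for j in range(n):
                r0, r1 = i * h // n, max((i + 1) * h // n, i * h // n + 1)
                c0, c1 = j * w // n, max((j + 1) * w // n, j * w // n + 1)
                pooled.append(max(fmap[r][c]
                                  for r in range(r0, r1) for c in range(c0, c1)))
    return pooled

assert spatial_pyramid_pool([[1, 2], [3, 4]]) == [4, 1, 2, 3, 4]
# A larger map still yields a 5-element descriptor (1 + 4 grid cells).
assert len(spatial_pyramid_pool([[0] * 4 for _ in range(4)])) == 5
```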


Fundamental Matrix Estimation and Key Frame Selection for Full 3D Reconstruction Under Circular Motion (회전 영상에서 기본 행렬 추정 및 키 프레임 선택을 이용한 전방향 3차원 영상 재구성)

  • Kim, Sang-Hoon;Seo, Yung-Ho;Kim, Tae-Eun;Choi, Jong-Soo
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.46 no.2
    • /
    • pp.10-23
    • /
    • 2009
  • Fundamental matrix estimation and key frame selection are among the most important techniques for full 3D reconstruction of objects from turntable sequences. This paper proposes a new algorithm that estimates a robust fundamental matrix for camera calibration from uncalibrated images taken under turntable motion. Single-axis turntable motion can be described in terms of its fixed entities, which leads to new algorithms for computing the fundamental matrix. From the projective properties of the conics and the fundamental matrix, the Euclidean 3D coordinates of a point are obtained from the geometric locus of the image-point trajectories. Experimental results on real and virtual image sequences demonstrate good object reconstructions.
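The role of the fundamental matrix can be illustrated by its defining epipolar constraint x'ᵀFx = 0. The F below is for a hypothetical pure horizontal translation (not the circular motion of the paper), under which corresponding points share the same image row:

```python
def epipolar_residual(F, x, x_prime):
    """Residual of x'^T F x = 0 for homogeneous points x, x' and a 3x3 F;
    (near) zero means the pair is consistent with the epipolar geometry."""
    Fx = [sum(F[i][j] * x[j] for j in range(3)) for i in range(3)]
    return sum(x_prime[i] * Fx[i] for i in range(3))

F = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]  # F for translation along the x-axis
assert epipolar_residual(F, [3, 7, 1], [5, 7, 1]) == 0   # same row: consistent
assert epipolar_residual(F, [3, 7, 1], [5, 9, 1]) != 0   # row shift: inconsistent
```

In estimation algorithms, residuals like this are what a robust fitting scheme (e.g. RANSAC-style sampling) minimizes over candidate matches.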

Real-Time Hand Pose Tracking and Finger Action Recognition Based on 3D Hand Modeling (3차원 손 모델링 기반의 실시간 손 포즈 추적 및 손가락 동작 인식)

  • Suk, Heung-Il;Lee, Ji-Hong;Lee, Seong-Whan
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.12
    • /
    • pp.780-788
    • /
    • 2008
  • Modeling hand poses and tracking their movement is one of the challenging problems in computer vision. There are two typical approaches to reconstructing hand poses in 3D, depending on the number of cameras from which images are captured: one captures images from multiple cameras or a stereo camera; the other from a single camera. The former approach is relatively limited because of the environmental constraints of setting up multiple cameras. In this paper we propose a method of reconstructing 3D hand poses from a 2D input image sequence captured by a single camera by means of belief propagation in a graphical model, and of recognizing a finger-clicking motion using a hidden Markov model. We define a graphical model with hidden nodes representing the joints of a hand and observable nodes carrying the features extracted from the 2D input image sequence. To track hand poses in 3D, we use a belief propagation algorithm, which provides a robust and unified framework for inference in a graphical model. From the estimated 3D hand pose we extract the information for each finger's motion, which is then fed into a hidden Markov model. To recognize natural finger actions, we consider the movements of all the fingers when recognizing a single finger's action. We applied the proposed method to a virtual keypad system, and the result showed a high recognition rate of 94.66% on 300 test samples.
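The classification stage can be sketched with the standard HMM forward algorithm: each action class gets its own model, and the class whose model assigns the higher likelihood to the observed feature sequence wins. The single-state models and probabilities below are toy stand-ins, not the paper's trained models:

```python
def hmm_forward(obs, start_p, trans_p, emit_p):
    """Forward algorithm: total likelihood of an observation sequence."""
    states = range(len(start_p))
    alpha = [start_p[s] * emit_p[s][obs[0]] for s in states]
    for o in obs[1:]:
        alpha = [sum(alpha[s] * trans_p[s][t] for s in states) * emit_p[t][o]
                 for t in states]
    return sum(alpha)

# Toy symbols: 1 = "fingertip moving down", 0 = "fingertip at rest".
click_model = ([1.0], [[1.0]], [[0.1, 0.9]])  # emits motion most of the time
rest_model = ([1.0], [[1.0]], [[0.9, 0.1]])
obs = [1, 1, 0]                               # a click-like sequence
assert hmm_forward(obs, *click_model) > hmm_forward(obs, *rest_model)
```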

Spatial Resolution and Dynamic Range Enhancement Algorithm using Multiple Exposures (복수 노출을 이용한 공간 해상도와 다이내믹 레인지 향상 알고리즘)

  • Choi, Jong-Seong;Han, Young-Seok;Kang, Moon-Gi
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.45 no.6
    • /
    • pp.117-124
    • /
    • 2008
  • Approaches to overcoming the limited spatial resolution and the limited dynamic range of image sensors have been studied independently: a high-resolution image is reconstructed from multiple low-resolution observations, and a wide-dynamic-range image is reconstructed from multiple differently exposed low-dynamic-range images, both via signal-processing approaches. In practical situations it is reasonable to address the two problems in a unified context, because a recorded image suffers from limitations of both spatial resolution and dynamic range. In this paper, the image acquisition process, including limited spatial resolution and limited dynamic range, is modeled. With this image acquisition model, the response function of the imaging system is estimated, and a single image whose spatial resolution and dynamic range are simultaneously enhanced is obtained. Experimental results indicate that the proposed algorithm outperforms conventional approaches that perform high-resolution and wide-dynamic-range reconstruction sequentially, with respect to both objective and subjective criteria.
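The dynamic-range side of the problem can be illustrated with a minimal exposure-fusion sketch. This assumes an already-linearized (identity) response function rather than the estimated one in the paper; a hat weight discounts under- and over-exposed samples:

```python
def fuse_exposures(pixels, exposure_times):
    """Estimate scene radiance at one pixel from differently exposed samples:
    divide each 8-bit value by its exposure time, average with hat weights."""
    num = den = 0.0
    for z, dt in zip(pixels, exposure_times):
        w = min(z, 255 - z)  # hat weight: trust mid-range values most
        num += w * (z / dt)
        den += w
    return num / den if den else 0.0

# The same scene point recorded as 200/255 over 1 s and 100/255 over 0.5 s:
# both samples imply a radiance of 200, and the fusion recovers it exactly.
assert abs(fuse_exposures([200, 100], [1.0, 0.5]) - 200.0) < 1e-9
```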

Deep Learning-based Depth Map Estimation: A Review

  • Abdullah, Jan;Safran, Khan;Suyoung, Seo
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.1
    • /
    • pp.1-21
    • /
    • 2023
  • In this technically advanced era, we are surrounded by smartphones, computers, and cameras, which help us store visual information in 2D image planes. However, such images lack 3D spatial information about the scene, which is very useful for scientists, surveyors, engineers, and even robots. To tackle this problem, depth maps are generated for the respective image planes. A depth map, or depth image, is a single image that carries information along three-dimensional axes, i.e., xyz coordinates, where z is the object's distance from the camera axis. For many applications, including augmented reality, object tracking, segmentation, scene reconstruction, distance measurement, autonomous navigation, and autonomous driving, depth estimation is a fundamental task. Much work has been done on calculating depth maps. We reviewed the status of depth map estimation using different techniques from several papers, study areas, and models applied over the last 20 years. We surveyed different depth-mapping techniques based on both traditional approaches and newly developed deep-learning methods. The primary purpose of this study is to present a detailed review of state-of-the-art traditional depth mapping techniques and recent deep learning methodologies. This study covers the critical points of each method from different perspectives, such as datasets, procedures performed, types of algorithms, loss functions, and well-known evaluation metrics. Similarly, this paper also discusses the subdomains of each method, such as supervised, unsupervised, and semi-supervised methods. We also elaborate on the challenges of the different methods. At the conclusion of this study, we discuss new ideas for future research and studies in depth map research.

EPAR V2.0: AUTOMATED MONITORING AND VISUALIZATION OF POTENTIAL AREAS FOR BUILDING RETROFIT USING THERMAL CAMERAS AND COMPUTATIONAL FLUID DYNAMICS (CFD) MODELS

  • Youngjib Ham;Mani Golparvar-Fard
    • International conference on construction engineering and project management
    • /
    • 2013.01a
    • /
    • pp.279-286
    • /
    • 2013
  • This paper introduces a new method for identification of building energy performance problems. The presented method is based on automated analysis and visualization of deviations between actual and expected energy performance of the building using EPAR (Energy Performance Augmented Reality) models. For generating EPAR models, during building inspections, energy auditors collect a large number of digital and thermal imagery using a consumer-level single thermal camera that has a built-in digital lens. Based on a pipeline of image-based 3D reconstruction algorithms built on GPU and multi-core CPU architecture, 3D geometrical and thermal point cloud models of the building under inspection are automatically generated and integrated. Then, the resulting actual 3D spatio-thermal model and the expected energy performance model simulated using computational fluid dynamics (CFD) analysis are superimposed within an augmented reality environment. Based on the resulting EPAR models which jointly visualize the actual and expected energy performance of the building under inspection, two new algorithms are introduced for quick and reliable identification of potential performance problems: 1) 3D thermal mesh modeling using k-d trees and nearest neighbor searching to automate calculation of temperature deviations; and 2) automated visualization of performance deviations using a metaphor based on traffic light colors. The proposed EPAR v2.0 modeling method is validated on several interior locations of a residential building and an instructional facility. Our empirical observations show that the automated energy performance analysis using EPAR models enables performance deviations to be rapidly and accurately identified. The visualization of performance deviations in 3D enables auditors to easily identify potential building performance problems. Rather than manually analyzing thermal imagery, auditors can focus on other important tasks such as evaluating possible remedial alternatives.
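The core step of the first algorithm, pairing each actual thermal point with the nearest expected point and taking the temperature difference, can be sketched with a brute-force nearest-neighbor search (the paper uses k-d trees for speed; the points and temperatures below are hypothetical):

```python
def temperature_deviations(actual, expected):
    """For each (x, y, z, T) point in the actual spatio-thermal cloud, find
    the nearest expected point by 3-D distance and report T_actual - T_expected."""
    def nearest(p):
        return min(expected,
                   key=lambda q: sum((a - b) ** 2 for a, b in zip(p[:3], q[:3])))
    return [round(p[3] - nearest(p)[3], 6) for p in actual]

actual = [(0, 0, 0, 25.0), (1, 0, 0, 30.0)]        # measured points
expected = [(0, 0, 0.1, 22.0), (1, 0, 0.1, 29.5)]  # CFD-simulated points
assert temperature_deviations(actual, expected) == [3.0, 0.5]
```

Each deviation would then be mapped to a traffic-light color for visualization.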


Absolute Depth Estimation Based on a Sharpness-assessment Algorithm for a Camera with an Asymmetric Aperture

  • Kim, Beomjun;Heo, Daerak;Moon, Woonchan;Hahn, Joonku
    • Current Optics and Photonics
    • /
    • v.5 no.5
    • /
    • pp.514-523
    • /
    • 2021
  • Methods for absolute depth estimation have received much interest, and most algorithms are concerned with minimizing the difference between an input defocused image and an estimated defocused image. These approaches may increase algorithmic complexity, since the defocused image must be calculated from an estimate of the focused image. In this paper, we present a new method to recover the depth of a scene based on a sharpness-assessment algorithm. The proposed algorithm estimates depth by calculating the sharpness of images deconvolved with a specific point-spread function (PSF). While most depth estimation studies evaluate depth only behind the focal plane, the proposed method evaluates a broad depth range both nearer and farther than the focal plane. This is accomplished using an asymmetric aperture, so the PSF at a position nearer than the focal plane differs from that at a position farther than it. From an image taken with the focal plane at 160 cm, the depth of an object over the broad range from 60 to 350 cm is estimated at 10 cm resolution. With an asymmetric aperture, we demonstrate the feasibility of the sharpness-assessment algorithm for recovering the absolute depth of a scene from a single defocused image.
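The depth search itself reduces to: deconvolve the input with the PSF of each candidate depth and keep the depth whose restored image is sharpest. A toy 1-D sketch, in which the "restored" signals are hypothetical stand-ins rather than outputs of a real deconvolution:

```python
# Hypothetical restorations of a 1-D edge signal under three depth hypotheses;
# only the PSF matching the true depth (160 cm) restores the sharp edges.
restored = {
    60:  [2, 3, 5, 5, 3, 2],    # wrong PSF: still blurry
    160: [0, 0, 10, 10, 0, 0],  # matching PSF: sharp transitions
    350: [3, 4, 4, 4, 4, 3],    # wrong PSF: over-smoothed
}

def sharpness(signal):
    """Sum of squared neighbor differences: larger for sharper transitions."""
    return sum((a - b) ** 2 for a, b in zip(signal, signal[1:]))

best_depth = max(restored, key=lambda d: sharpness(restored[d]))
assert best_depth == 160
```

In the paper this hypothesis search runs over the 60-350 cm range in 10 cm steps, with the asymmetric aperture making the PSF, and hence the winning hypothesis, unambiguous on both sides of the focal plane.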