• Title/Summary/Keyword: Focus Distance of Camera


On the Measurement of the Depth and Distance from the Defocused Images Using the Regularization Method (비초점화 영상에서 정칙화법을 이용한 깊이 및 거리 계측)

  • 차국찬;김종수
    • Journal of the Korean Institute of Telematics and Electronics B, v.32B no.6, pp.886-898, 1995
  • One way to measure distance in computer vision is to exploit focus and defocus, and there are two approaches. The first calculates the distance from images focused at a point (MMDFP: the method measuring the distance to the focal plane). The second measures the distance from the difference in camera parameters, namely the apertures, between two images captured with different parameters (MMDCI: the method measuring the distance by comparing two images). The problem with existing MMDFP methods is deciding the threshold value for detecting the most sharply focused object in a defocused image; this can be solved by comparing only the error energy in a 3x3 window between two images. In MMDCI, the difficulty is the influence of the deflection effect. To minimize that influence, this paper uses two differently focused images instead of images taken with different apertures. First, the amount of defocus between the two images is measured by introducing regularization, and then the distance from the camera to the objects is calculated with a new distance-measurement equation. Simulation results show that distance can indeed be measured from two differently defocused images, and that the proposed approach is more robust to noise than the method using different apertures.
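The core step above, measuring the amount of defocus between two differently focused images, can be sketched as a grid search over the extra Gaussian blur relating them. This is a minimal illustration, not the paper's actual regularization method: the `lam * sigma**2` penalty is only a Tikhonov-style stand-in, and the image sizes and sigma range are arbitrary.

```python
import numpy as np

def gaussian_kernel(sigma, radius=8):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(img, sigma):
    """Separable Gaussian blur: convolve rows, then columns."""
    k = gaussian_kernel(sigma)
    out = np.apply_along_axis(np.convolve, 1, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 0, out, k, mode="same")

def estimate_relative_blur(sharper, blurrier, sigmas, lam=0.0):
    """Grid-search the extra blur sigma mapping `sharper` onto `blurrier`.
    lam * sigma**2 is a Tikhonov-style penalty (a stand-in for the paper's
    regularization) that stabilises the fit on noisy inputs."""
    costs = [np.mean((gaussian_blur(sharper, s) - blurrier) ** 2) + lam * s ** 2
             for s in sigmas]
    return float(sigmas[int(np.argmin(costs))])

# demo: synthesize a band-limited scene and an extra-defocused copy of it
rng = np.random.default_rng(0)
scene = gaussian_blur(rng.standard_normal((64, 64)), 1.0)
defocused = gaussian_blur(scene, 2.0)
sigma_hat = estimate_relative_blur(scene, defocused, np.linspace(0.5, 4.0, 36))
```

Once the relative blur is known, a lens-specific relation (such as the paper's distance equation) maps it to metric depth.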


Performance Evaluation of Smartphone Camera App with Multi-Focus Shooting and Focus Post-processing Functions (다초점 촬영과 초점후처리 기능을 가진 스마트폰 카메라 앱의 성능평가)

  • Chae-Won Park;Kyung-Mi Kim;Song-Yeon Yoo;Yu-Jin Kim;Kitae Hwang;In-Hwang Jung;Jae-Moon Lee
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.24 no.2, pp.35-40, 2024
  • In this paper, we validate the practicality of the OnePIC app implemented in our previous study by analyzing its execution and storage performance. The OnePIC app is a camera app that lets the user obtain a photo with the desired focus after taking photos focused on various places. To evaluate performance, we analyzed distance-focus shooting time and object-focus shooting time in detail, measured on an actual smartphone. Distance-focus shooting time for 5 photos was around 0.84 seconds; object detection time was around 0.19 seconds regardless of the number of objects; and object-focus shooting time for 5 photos was around 4.84 seconds. Comparing the size of a single All-in-JPEG file storing the multi-focus photos with the total size of the individually stored JPEG files, there was no significant saving in storage space, as the All-in-JPEG file was only slightly smaller. However, All-in-JPEG has the great advantage of simplifying the management of multi-focus photos. We conclude that the OnePIC app is practical in terms of shooting time, photo storage size, and management.
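The All-in-JPEG layout itself belongs to the authors and is not specified here; as a generic illustration of packing several focus shots into one file, a length-prefixed container could look like the sketch below. The `ONEPIC` magic string and the header layout are invented for this sketch, not the app's real format.

```python
import struct

MAGIC = b"ONEPIC"  # invented marker for this sketch, not the app's real header

def pack_all_in_one(jpeg_blobs):
    """Length-prefixed container: MAGIC, count, then (length, payload) pairs."""
    parts = [MAGIC, struct.pack(">I", len(jpeg_blobs))]
    for blob in jpeg_blobs:
        parts.append(struct.pack(">I", len(blob)))
        parts.append(blob)
    return b"".join(parts)

def unpack_all_in_one(data):
    """Recover the individual JPEG byte strings from the container."""
    assert data[:6] == MAGIC
    (count,) = struct.unpack(">I", data[6:10])
    blobs, off = [], 10
    for _ in range(count):
        (length,) = struct.unpack(">I", data[off:off + 4])
        off += 4
        blobs.append(data[off:off + length])
        off += length
    return blobs

# round trip with two dummy JPEG streams (SOI marker prefix only)
shots = [b"\xff\xd8" + b"focus-near", b"\xff\xd8" + b"focus-far"]
packed = pack_all_in_one(shots)
```

Such a container stores n photos with only a few bytes of overhead each, which matches the abstract's observation that the combined file is barely smaller than the individual files.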

Performance Comparison of Template-based Face Recognition under Robotic Environments (로봇 환경의 템플릿 기반 얼굴인식 알고리즘 성능 비교)

  • Ban, Kyu-Dae;Kwak, Keun-Chang;Chi, Su-Young;Chung, Yun-Koo
    • The Journal of Korea Robotics Society, v.1 no.2, pp.151-157, 2006
  • This paper is concerned with template-based face recognition from robot camera images with illumination and distance variations. The approaches used are Eigenface, Fisherface, and ICAface, the most representative techniques frequently used for face recognition. They are based on popular unsupervised and supervised statistical techniques, respectively, for finding useful image representations. We therefore focus on comparing their performance on robot camera images with these unwanted variations. Comprehensive experiments are carried out on databases with illumination and distance variations.
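Of the three templates compared, Eigenface is the simplest to sketch: project flattened face images onto a PCA basis and match by nearest neighbour in the subspace. This is a minimal generic implementation, not the paper's exact pipeline; the random 10x10-pixel "faces" exist only for demonstration.

```python
import numpy as np

def eigenfaces(train, k):
    """PCA basis ('eigenfaces') from a (n_images, n_pixels) training matrix."""
    mean = train.mean(axis=0)
    # SVD of the centred data yields the principal directions directly
    _, _, vt = np.linalg.svd(train - mean, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, basis):
    """Coefficients of one flattened face in the eigenface subspace."""
    return basis @ (face - mean)

def recognize(probe_code, gallery_codes):
    """Nearest-neighbour match over projected gallery faces."""
    return int(np.argmin(np.linalg.norm(gallery_codes - probe_code, axis=1)))

# demo on synthetic 10x10-pixel "faces"
rng = np.random.default_rng(1)
gallery = rng.standard_normal((10, 100))
mean, basis = eigenfaces(gallery, 8)
codes = np.stack([project(f, mean, basis) for f in gallery])
probe = project(gallery[3] + 0.01 * rng.standard_normal(100), mean, basis)
match = recognize(probe, codes)
```

Fisherface replaces the unsupervised PCA basis with a supervised LDA one computed from class labels; the projection-and-match structure stays the same.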


3D Depth Estimation by a Single Camera (단일 카메라를 이용한 3D 깊이 추정 방법)

  • Kim, Seunggi;Ko, Young Min;Bae, Chulkyun;Kim, Dae Jin
    • Journal of Broadcast Engineering, v.24 no.2, pp.281-291, 2019
  • Depth from defocus estimates 3D depth by using the phenomenon in which an object in the focal plane of the camera forms a sharp image while an object away from the focal plane produces a blurred one. In this paper, algorithms are studied that estimate 3D depth by analyzing the degree of blur in images taken with a single camera. The optimal object range was obtained by depth-from-defocus estimation using either one image from a single camera or two images of different focus from a single camera. For depth estimation using one image, the best performance was achieved with a focal length of 250 mm for both smartphone and DSLR cameras. Depth estimation using two images showed the best 3D depth estimation range when the focal lengths were set to 150 mm and 250 mm for smartphone camera images, and 200 mm and 300 mm for DSLR camera images.
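The depth-from-defocus relation these algorithms rest on can be written down from the thin-lens equation: the blur-circle diameter grows with the gap between where an object actually images and where the sensor sits. A sketch under ideal thin-lens assumptions, with all lengths in mm; the 50 mm f/2 numbers are illustrative, not the paper's settings.

```python
def image_distance(f, u):
    """Thin-lens equation 1/f = 1/u + 1/v solved for v (all lengths in mm)."""
    return f * u / (u - f)

def blur_diameter(f, aperture, u_focus, u_obj):
    """Blur-circle diameter on the sensor for an object at u_obj when the
    lens (aperture diameter `aperture`) is focused at u_focus."""
    v0 = image_distance(f, u_focus)   # where the sensor sits
    v = image_distance(f, u_obj)      # where the object actually images
    return aperture * abs(v0 - v) / v

def depth_from_blur(f, aperture, u_focus, b, far_side=True):
    """Invert blur_diameter; far_side selects the solution beyond u_focus."""
    v0 = image_distance(f, u_focus)
    v = v0 / (1.0 + b / aperture) if far_side else v0 / (1.0 - b / aperture)
    return f * v / (v - f)

# round trip: a 50 mm f/2 lens focused at 2 m, object at 3 m
b = blur_diameter(50.0, 25.0, 2000.0, 3000.0)
u_hat = depth_from_blur(50.0, 25.0, 2000.0, b, far_side=True)
```

Note the `far_side` ambiguity: a given blur diameter corresponds to one depth in front of the focal plane and one behind it, which is why two-image methods resolve depth more reliably than one-image methods.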

A Relative Depth Estimation Algorithm Using Focus Measure (초점정보를 이용한 패턴간의 상대적 깊이 추정알고리즘 개발)

  • Jeong, Ji-Seok;Lee, Dae-Jong;Shin, Yong-Nyuo;Chun, Myung-Geun
    • Journal of the Korean Institute of Intelligent Systems, v.23 no.6, pp.527-532, 2013
  • Depth estimation is an essential element of robot vision, 3D scene modeling, and motion control. The method is based on focus values calculated over a series of images captured by a single camera at different distances between the lens and the object. In this paper, we propose a relative depth estimation method using a focus measure: a focus value is calculated for each image obtained at a different lens position, and depth is then estimated by considering the relative distance between two patterns. We performed various experiments on effective focus measures for depth estimation using various patterns and verified their usefulness.
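A common focus measure of the kind described above is the variance of a discrete Laplacian, evaluated per image in the focal stack; the lens position whose image maximizes it is taken as in focus. This is a generic sketch (the Laplacian-variance measure is an assumption, not necessarily the one the authors used), with a synthetic stack where the middle frame is the sharp one.

```python
import numpy as np

def laplacian_focus_measure(img):
    """Variance of a 5-point discrete Laplacian: large for sharp images,
    small for defocused ones."""
    lap = (-4.0 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

def best_focus_index(stack):
    """Index of the sharpest image in a focal stack (one image per lens position)."""
    return int(np.argmax([laplacian_focus_measure(im) for im in stack]))

# demo: a synthetic focal stack where index 2 is the in-focus frame
rng = np.random.default_rng(2)
scene = rng.standard_normal((32, 32))

def _box(im):
    """Cheap blur standing in for defocus at the wrong lens position."""
    return (im + np.roll(im, 1, 0) + np.roll(im, -1, 0)
            + np.roll(im, 1, 1) + np.roll(im, -1, 1)) / 5.0

stack = [_box(_box(scene)), _box(scene), scene, _box(scene), _box(_box(scene))]
sharpest = best_focus_index(stack)
```

Relative depth between two patterns then follows from comparing the lens positions at which each pattern peaks.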

Focus Adjustment Method with Statistical Analysis for an Interchangeable Zoom Lens with Symmetric Error Factors (대칭성 공차를 갖는 교환렌즈용 줌 렌즈의 핀트 조정법과 통계적 해석)

  • Ryu, J.M.;Jo, J.H.;Kang, G.M.;Lee, H.J.;Yoneyama, Suji
    • Korean Journal of Optics and Photonics, v.22 no.5, pp.230-238, 2011
  • There are many types of interchangeable zoom lenses for digital single-lens reflex and compact digital still camera systems, built to meet various specifications such as field angle. As a result, special cases sometimes arise in which focus adjustment using only an auto-focus group is not sufficient for the focal-point correction (that is, the focus adjustment) at both the wide and tele zoom positions. To make the BFL (back focal length) at the wide and tele zoom positions coincide with the designed BFL, the focus adjustment process must be performed at least at these two points of the zoom lens system. In this paper, we propose a focus adjustment method based on the concept of focus sensitivity, and we calculate a limit on the focus adjustment distance by means of statistical analysis.
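With focus sensitivities in hand, nulling the BFL error at the wide and tele positions simultaneously reduces to a 2x2 linear solve for the movements of two adjustment groups. The sketch below illustrates that idea only; the sensitivity and error numbers are hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical focus sensitivities: mm of BFL shift per mm of travel of
# each adjustment group, evaluated at the wide and tele zoom positions.
S = np.array([[1.8, 0.4],    # wide:  [group 1, group 2]
              [0.6, 2.2]])   # tele:  [group 1, group 2]

bfl_error = np.array([0.12, -0.08])  # measured BFL errors (mm) at wide/tele

# Group movements that cancel the BFL error at both positions simultaneously
moves = np.linalg.solve(S, -bfl_error)
residual = S @ moves + bfl_error     # should vanish at both zoom positions
```

The statistical part of the paper then asks how large `moves` can get over the tolerance distribution of a production run, which bounds the required adjustment travel.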

A Study on Iris Image Restoration Based on Focus Value of Iris Image (홍채 영상 초점 값에 기반한 홍채 영상 복원 연구)

  • Kang Byung-Jun;Park Kang-Ryoung
    • Journal of the Institute of Electronics Engineers of Korea SP, v.43 no.2 s.308, pp.30-39, 2006
  • Iris recognition identifies a user from the unique iris texture patterns surrounding the pupil, which dilates and contracts. Iris recognition systems extract the iris pattern from an image captured by an iris recognition camera, so recognition performance depends on the quality of that image. If the image is blurred, the iris pattern is distorted, which increases the FRR (False Rejection Rate). Optical defocusing is the main cause of blurred iris images. Conventional iris recognition cameras use one of two focusing methods: fixed focus or auto-focus. With fixed focus, users must repeatedly align their eyes within the DOF (depth of field) until the system acquires a well-focused iris image, which is very inconvenient. With auto-focus, the camera moves a focus lens under an auto-focusing algorithm to capture the best-focused image, but this requires additional hardware such as a sensor measuring the distance between the user and the camera lens and a motor to move the focus lens; the size and cost of the camera increase, and such a camera cannot be used in small mobile devices. To overcome these problems, we propose a method that increases the DOF with an iris image restoration algorithm based on the focus value of the iris image. Testing the proposed algorithm with the Panasonic BM-ET100, we increased the operating range from 48-53 cm to 46-56 cm.
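A focus value of the kind the abstract relies on can be sketched as the fraction of spectral power above a frequency cutoff, since defocus suppresses high spatial frequencies. This generic measure is an assumption for illustration, not the metric used by the BM-ET100 or by the authors' restoration algorithm.

```python
import numpy as np

def focus_value(img, cutoff=0.25):
    """Fraction of spectral power above a normalised frequency cutoff.

    Defocus suppresses high spatial frequencies, so this value falls as the
    eye drifts out of the depth of field. A generic measure (assumption),
    not the metric of any particular iris camera.
    """
    spec = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(spec) ** 2
    h, w = img.shape
    fy = (np.arange(h) - h // 2) / h
    fx = (np.arange(w) - w // 2) / w
    radius = np.hypot(fy[:, None], fx[None, :])
    return power[radius > cutoff].sum() / power.sum()

# demo: blurring a random texture lowers its focus value
rng = np.random.default_rng(3)
sharp = rng.standard_normal((64, 64))
blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, -1, 0)
           + np.roll(sharp, 1, 1) + np.roll(sharp, -1, 1)) / 5.0
```

In a restoration pipeline, the measured focus value parameterizes the assumed blur kernel, which a deconvolution step then inverts to extend the usable DOF.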

A Study on Design of Visual Sensor Using Scanning Beam for Shape Recognition of Weld Joint. (용접접합부의 형상계측을 위한 주사형 시각센서의 설계에 관한 연구)

  • 배강열
    • Journal of Welding and Joining, v.21 no.2, pp.102-110, 2003
  • A visual sensor consisting of a polygonal mirror, a laser, and a CCD camera is proposed to measure the distance to the weld joint and recognize the joint shape. To scan the sensor's laser beam over an object, an 8-facet polygonal mirror is used as the rotating mirror. By locating the laser and the camera at axisymmetric positions around the mirror, the synchronized-scan condition can be satisfied even with the mirror rotating continuously in one direction, removing the inertia effect of conventional oscillating-mirror methods. Mathematical modeling of the proposed sensor with the optical triangulation method makes it possible to derive the relation between the position of an image on the camera and that of the laser light on the object. Through geometrical simulation of the sensor using the principle of reflection and virtual images, the optical path of the laser light can be predicted. The position and orientation of the CCD camera were determined from the Scheimpflug condition, so that any image reflected from an object within the field of view stays in focus. The results of modeling and simulation show that the proposed visual sensor can recognize the weld joint and its vicinity within the limits of the field of view and the resolution. (Received February 19, 2003)
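The optical-triangulation relation between image position and object distance reduces, in the simplest geometry, to a one-line inversion. The sketch below assumes a toy setup (laser fired parallel to the optical axis, offset sideways by a baseline) and ignores the scanning mirror and the Scheimpflug tilt; the 16 mm lens and 100 mm baseline are invented numbers.

```python
def image_position(f, baseline, z):
    """Forward model: camera at the origin looking along +z; the laser is
    offset sideways by `baseline` and fired parallel to the optical axis,
    so the spot at depth z images at x = f * baseline / z (all in mm)."""
    return f * baseline / z

def triangulation_range(f, baseline, x_img):
    """Invert the forward model: distance to the lit surface point."""
    return f * baseline / x_img

# round trip at 500 mm with a 16 mm lens and 100 mm baseline
x = image_position(16.0, 100.0, 500.0)
z_hat = triangulation_range(16.0, 100.0, x)
```

Sweeping the beam across the joint yields one such range per scan angle, and the resulting profile is what the sensor uses to classify the joint shape.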

Application of Smartphone Camera Calibration for Close-Range Digital Photogrammetry (근접수치사진측량을 위한 스마트폰 카메라 검보정)

  • Yun, MyungHyun;Yu, Yeon;Choi, Chuluong;Park, Jinwoo
    • Korean Journal of Remote Sensing, v.30 no.1, pp.149-160, 2014
  • Recently, studies on application development using the sensors and devices embedded in smartphones have flourished at home and abroad. This study analyzes the accuracy of smartphone images for determining the three-dimensional positions of close objects, prior to the development of a smartphone-based photogrammetric system, and evaluates their feasibility. First, camera calibration was conducted for autofocus and infinite focus. Distortion models with balanced and unbalanced systems were used to determine the lens distortion coefficients; calibration across 16 projects showed RMS errors below 1 mm from bundle adjustment in all cases. For the S and S2 models, the distortion curves under autofocus and infinite focus were almost identical, so the change in distortion pattern with focus mode can be judged very small. Comparisons between autofocus and infinite focus, and between the software packages used for multi-image processing, showed standard deviations below ±3 mm in all cases, indicating little difference in the determined three-dimensional positions between focus modes or distortion models. Lastly, checkpoint coordinates from a total station were taken as the most probable values, and the checkpoint coordinates determined in each project as the observed values, to compute statistics on the residuals of the individual methods. All projects had relatively large errors in the Y direction, the object-distance direction, compared with the X and Z directions. In terms of accuracy of three-dimensional position determination for close objects, the smartphone camera is judged feasible enough to use.
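The radial lens-distortion model underlying such calibrations is commonly the Brown-Conrady polynomial. A sketch with two radial coefficients and a fixed-point undistortion follows; the balanced/unbalanced distinction made in the study is not modeled here, and the coefficient values are arbitrary.

```python
import numpy as np

def distort(xy, k1, k2):
    """Brown-Conrady radial model on normalised image coordinates:
    x_d = x * (1 + k1*r^2 + k2*r^4), and likewise for y."""
    r2 = (xy ** 2).sum(axis=1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

def undistort(xy_d, k1, k2, iters=20):
    """Invert the model by fixed-point iteration (converges for mild distortion)."""
    xy = xy_d.copy()
    for _ in range(iters):
        r2 = (xy ** 2).sum(axis=1, keepdims=True)
        xy = xy_d / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return xy

# round trip on random normalised points with arbitrary mild coefficients
rng = np.random.default_rng(4)
pts = rng.uniform(-0.5, 0.5, (6, 2))
recovered = undistort(distort(pts, -0.12, 0.03), -0.12, 0.03)
```

Calibration estimates `k1`, `k2` (plus focal length and principal point) by bundle adjustment over many views of a target, which is the step the study ran for each of its 16 projects.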

Statistical Analysis of Focus Adjustment Method for a Floating Imaging System with Symmetric Error Factors (대칭형 공차를 갖는 플로팅 광학계의 상면 변화 보정 방법에 대한 통계적 해석)

  • Ryu, Jae Myung;Kim, Yong Su;Jo, Jae Heung;Kang, Geon Mo;Lee, Hae Jin;Lee, Hyuck Ki
    • Korean Journal of Optics and Photonics, v.23 no.5, pp.189-196, 2012
  • A floating optical system is a camera lens system that moves two or more groups to focus. In camera optics, floating systems are mainly used in lenses whose magnification changes greatly, such as macro lenses. When a floating system is assembled and fabricated in the factory, there are differences between the image plane of the sensor and the focal plane in the infinity or macro state. Therefore, in a considerable proportion of cases, a focus adjustment that minimizes the difference in BWD (back working distance) is carried out during manufacturing. In this paper, in order to decide the movement of each group in a floating system, we evaluate the CAM rotation angle required for the focus adjustment; the maximum magnification in the macro state is corrected by the same numerical method. We investigate the limit of the CAM rotation angle of the system through statistical analysis of the rotation angle used for focus adjustment in a floating system with symmetric error factors.
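The statistical analysis of symmetric error factors can be sketched as a Monte Carlo: draw each tolerance uniformly within its symmetric bounds, accumulate the resulting image-plane (BWD) shift, and convert it to the CAM rotation that would cancel it. Every number below (sensitivities, tolerance bounds, cam pitch) is a hypothetical placeholder, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical symmetric tolerances: three error factors, each shifting the
# image plane linearly. Units: mm of BWD shift per mm of error, and mm bounds.
sensitivities = np.array([1.0, 0.7, 1.3])
tolerances = np.array([0.05, 0.08, 0.04])    # symmetric +/- bounds (mm)

# Monte Carlo over uniformly distributed symmetric errors
errors = rng.uniform(-tolerances, tolerances, size=(100_000, 3))
bwd_shift = errors @ sensitivities           # net image-plane shift per sample

# Convert to the CAM rotation that cancels the shift, assuming a
# (hypothetical) linear cam pitch of 0.02 mm focus travel per degree.
mm_per_degree = 0.02
cam_angle = bwd_shift / mm_per_degree
angle_limit = np.percentile(np.abs(cam_angle), 99.7)  # ~3-sigma-style limit
```

The resulting `angle_limit` is the kind of quantity the paper derives analytically: the CAM travel that must be reserved so that nearly all assembled units can be brought into focus.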