• Title/Summary/Keyword: Focal point position


Impact of Feature Positions on Focal Length Estimation of Self-Calibration (Self-calibration의 초점 거리 추정에서 특징점 위치의 영향)

  • Hong Yoo-Jung;Lee Byung-Uk
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.31 no.4C
    • /
    • pp.400-406
    • /
    • 2006
  • Knowledge of camera parameters, such as position, orientation and focal length, is essential to 3D information recovery or virtual object insertion. This paper analyzes the error sensitivity of focal length due to position error of feature points which are employed for self-calibration. We verify the dependency of the focal length on the distance from the principal point to feature points with simulations, and propose a criterion for feature selection to reduce the error sensitivity.
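The dependency this abstract describes — focal-length error sensitivity shrinking as feature points move away from the principal point — can be sketched with a one-dimensional pinhole model. All numbers below (focal length, depth, perturbation) are illustrative assumptions, not values from the paper:

```python
import numpy as np

def focal_from_feature(u, X, Z):
    """Pinhole model: image coordinate u = f * X / Z, so f = u * Z / X."""
    return u * Z / X

# Hypothetical setup: a feature at depth Z images at distance u from the
# principal point; its image position is measured with error delta.
f_true, Z, delta = 800.0, 5.0, 0.5      # pixels, metres, pixels

def relative_focal_error(u):
    X = u * Z / f_true                  # world offset that projects to u
    f_est = focal_from_feature(u + delta, X, Z)
    return abs(f_est - f_true) / f_true

near = relative_focal_error(50.0)       # feature close to the principal point
far = relative_focal_error(400.0)       # feature far from the principal point
# The relative error scales as delta / u, so distant features are less sensitive.
```

Doubling a feature's image-plane distance halves the relative focal-length error in this model, which is the intuition behind a feature-selection criterion that prefers points far from the principal point.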

Absolute Depth Estimation Based on a Sharpness-assessment Algorithm for a Camera with an Asymmetric Aperture

  • Kim, Beomjun;Heo, Daerak;Moon, Woonchan;Hahn, Joonku
    • Current Optics and Photonics
    • /
    • v.5 no.5
    • /
    • pp.514-523
    • /
    • 2021
  • Methods for absolute depth estimation have received considerable interest, and most algorithms are concerned with minimizing the difference between an input defocused image and an estimated defocused image. These approaches may increase the complexity of the algorithms, which must calculate the defocused image from an estimate of the focused image. In this paper, we present a new method to recover the depth of a scene based on a sharpness-assessment algorithm. The proposed algorithm estimates the depth of the scene by calculating the sharpness of images deconvolved with a specific point-spread function (PSF). While most depth-estimation studies evaluate the depth of the scene only behind the focal plane, the proposed method evaluates a broad depth range both nearer and farther than the focal plane. This is accomplished using an asymmetric aperture, so the PSF at a position nearer than the focal plane differs from that at a position farther than the focal plane. From an image taken with a focal plane at 160 cm, the depth of an object over the broad range from 60 to 350 cm is estimated at 10 cm resolution. With an asymmetric aperture, we demonstrate the feasibility of the sharpness-assessment algorithm for recovering the absolute depth of a scene from a single defocused image.
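A minimal sketch of the sharpness-assessment idea: score candidate reconstructions with a focus metric and prefer the sharpest. The Laplacian-variance metric and the box-filter stand-in for defocus are assumptions chosen for illustration; the paper's actual PSF comes from its asymmetric aperture:

```python
import numpy as np

def laplacian_variance(img):
    """Sharpness score: variance of the discrete Laplacian response."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

def box_blur(img, k=5):
    """Simple stand-in for defocus: k x k box filter via shifted sums."""
    out = np.zeros_like(img, dtype=float)
    for dy in range(-(k // 2), k // 2 + 1):
        for dx in range(-(k // 2), k // 2 + 1):
            out += np.roll(np.roll(img, dy, 0), dx, 1)
    return out / (k * k)

# Synthetic scene: a bright square on a dark background.
scene = np.zeros((64, 64))
scene[24:40, 24:40] = 1.0
sharp_score = laplacian_variance(scene)
blur_score = laplacian_variance(box_blur(scene))
# The in-focus image scores higher, so among candidate deconvolutions the one
# whose assumed PSF matches the true defocus would maximize this metric.
```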

An Experimental Study on the Characteristics of Flux Density Distributions in the Focal Region of a Solar Concentrator (태양열 집광기의 초점 지역에 형성된 플럭스 밀도 분포의 특성)

  • Hyun, S.T.;Kang, Y.H.;Yoon, H.G.;Yoo, C.K.;Kang, M.C.
    • Journal of the Korean Solar Energy Society
    • /
    • v.22 no.3
    • /
    • pp.31-37
    • /
    • 2002
  • This experimental study presents the results of an analysis of the characteristics of flux density distributions in the focal region of a solar concentrator. The characteristics of the flux density distributions are investigated in order to optimally design and position a cavity receiver. This was deemed very useful for finding and correcting various errors associated with a dish concentrator. We estimated the flux density distribution on a target placed at various focal lengths from the dish vertex to experimentally determine the focal length. It is observed that the actual focal point exists at a focal length of 2.17 m. We also evaluated the position of the flux centroid and found errors within 2 cm of the target center. A total integrated power of 2467 W was measured under the focal flux distributions, which corresponds to an intercept rate of 85.8%. From the percent power within radius, approximately 90% of the incident radiation is intercepted within a radius of about 0.06 m.
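The two quantities reported in this abstract — the flux centroid and the percent power within a radius — can be computed from a sampled flux map as below. The Gaussian flux profile and the target dimensions are illustrative assumptions, not the measured distribution:

```python
import numpy as np

def flux_centroid(flux, xs, ys):
    """Centroid of a 2D flux map sampled on the grid (ys, xs)."""
    total = flux.sum()
    cx = (flux * xs[None, :]).sum() / total
    cy = (flux * ys[:, None]).sum() / total
    return cx, cy

def power_within_radius(flux, xs, ys, cx, cy, r):
    """Fraction of total power inside radius r around (cx, cy)."""
    X, Y = np.meshgrid(xs, ys)
    mask = (X - cx) ** 2 + (Y - cy) ** 2 <= r ** 2
    return flux[mask].sum() / flux.sum()

# Synthetic Gaussian flux map on a 0.2 m x 0.2 m target, centroid offset 1 cm.
xs = ys = np.linspace(-0.1, 0.1, 201)
X, Y = np.meshgrid(xs, ys)
flux = np.exp(-((X - 0.01) ** 2 + Y ** 2) / (2 * 0.02 ** 2))
cx, cy = flux_centroid(flux, xs, ys)
frac = power_within_radius(flux, xs, ys, cx, cy, 0.06)
```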

A Study on Measurement and Control of Position and Pose of a Mobile Robot using a Kalman Filter and a Lane-detecting Filter in Monocular Vision (단일 비전에서 칼만 필터와 차선 검출 필터를 이용한 모빌 로봇 주행 위치.자세 계측 제어에 관한 연구)

  • 이용구;송현승;노도환
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2000.10a
    • /
    • pp.81-81
    • /
    • 2000
  • We use a camera to apply the human vision system to measurement. To do so, we need to know the camera parameters, which consist of internal and external parameters. If we fix the scale factor and focal length among the internal parameters, we can acquire the external parameters, and we want to use these parameters in an automatically driven vehicle guided by the camera. Among the camera parameters, the external parameters are the important ones, and they can be acquired by fixing the focal length and scale factor. To obtain lane coordinates in the image, we propose a lane-detection filter. After detecting the lanes, we can find the vanishing point and, from it, the y-axis rotation component (${\beta}$). Using these parameters, we can find the x-axis translation component (Xo). Before the stepping motor is rotated to drive the y-axis rotation component (${\beta}$) to zero, we estimate the image coordinates of the lane at (t+1). Using this point, we apply the system to a Kalman filter and then calculate new parameters which minimize the error.

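The prediction step described in this abstract — estimating the lane's image coordinates at (t+1) and correcting with the next measurement — can be sketched with a textbook constant-velocity Kalman filter. The state model, noise covariances, and synthetic measurements below are assumptions, not the paper's:

```python
import numpy as np

# Constant-velocity Kalman filter tracking one lane-point image coordinate.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # we observe position only
Q = np.eye(2) * 0.01                     # process noise covariance (assumed)
R = np.array([[0.1]])                    # measurement noise covariance (assumed)

x = np.zeros((2, 1))                     # initial state estimate
P = np.eye(2)                            # initial covariance

for t in range(1, 21):
    z = np.array([[2.0 * t]])            # noiseless measurements: truth is 2t
    # Predict the coordinate at t from the estimate at t-1.
    x, P = F @ x, F @ P @ F.T + Q
    # Update with the measurement.
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P

predicted_next = (F @ x)[0, 0]           # predicted lane position at t+1
```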

An Experimental Study on the Characteristics of Flux Density Distributions produced by Solar Concentrating System (태양열 집광기의 플럭스 밀도 분포 특성에 관한 연구)

  • Kang Myeongcheol;Kang Yongheack;Yoon Hwanki;Yu Changkyun
    • Korean Society for New and Renewable Energy: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.422-426
    • /
    • 2005
  • This experimental study presents the results of an analysis of the characteristics of the flux density distribution in the focal region of a solar concentrator. The characteristics of the flux density distributions are investigated in order to optimally design and position a cavity receiver. This was deemed very useful for finding and correcting various errors associated with a dish concentrator. We estimated the flux density distribution on a target placed at various focal lengths from the dish vertex to experimentally determine the focal length. It is observed that the actual focal point exists at a focal length of 2.17 m. The total integrated power and percent power were 2467 W and $85.8\%$, respectively, for the small dish, and 2095 W and $79\%$, respectively, for KIERDISH II. From the percent power within radius, approximately $90\%$ of the incident radiation is intercepted within a radius of about 0.06 m. The minimum receiver radius for KIERDISH II is found to be 0.15 m, and approximately $90\%$ of the incident radiation is intercepted by the receiver aperture.


The Relativity between Vibration of Phantom and Its Break Efficiency Due to Position of Focus Induced by Piezoelectric Extracorporeal Shock Wave Lithotripter (압전식 충격파 체외 쇄석기 사용시 초점위치에 의한 대상물의 진동과 파쇄효율과의 상관성)

  • 장윤석
    • The Journal of the Acoustical Society of Korea
    • /
    • v.19 no.5
    • /
    • pp.35-40
    • /
    • 2000
  • In this paper, the relation between the radiated sound and the vibration produced by a piezoelectric ESWL (Extracorporeal Shock Wave Lithotripter) is examined, and the results of the experiments are presented. Next, the relation between the focal point and the vibration of the objects is examined. The same experiments are performed with objects that can be broken, and the relation between the vibration and the break efficiency of the phantom is experimentally investigated. These results confirm the correlation between the power of the peak frequency and the break efficiency.

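Extracting "the power of the peak frequency" from a vibration record, as used in the correlation above, can be sketched with a discrete Fourier transform. The sampling rate and the single-tone test signal are illustrative assumptions:

```python
import numpy as np

def peak_frequency_power(signal, fs):
    """Return (dominant frequency in Hz, its spectral power)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    k = spectrum[1:].argmax() + 1        # skip the DC bin
    return freqs[k], spectrum[k]

# Synthetic vibration record: a 50 Hz component sampled at 1 kHz for 1 s.
fs = 1000
t = np.arange(fs) / fs
vib = np.sin(2 * np.pi * 50 * t)
f_peak, p_peak = peak_frequency_power(vib, fs)
```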

Optical Implementation of Incoherent Holographic 3D Display System using Modified Triangular and Mach-Zehnder Interferometers (변형된 삼각 및 마하젠더 간섭계 기반의 인코히어런트 홀로그래픽 3D 디스플레이 시스템의 광학적 구현)

  • 김승철;구정식;김은수
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.29 no.4C
    • /
    • pp.524-532
    • /
    • 2004
  • In this paper, an incoherent holographic 3D imaging and display system based on modified triangular and Mach-Zehnder interferometers is optically implemented and some experiments are carried out. An incoherent hologram of a 3D object is generated using the hologram input system of the modified triangular interferometer. This complex hologram is then reconstructed using the hologram output system of the modified Mach-Zehnder interferometer, in which two LCD spatial light modulators and a waveplate are inserted. From an experiment with two point sources having a depth difference of 100 mm from each other, it is revealed that each point source can be independently reconstructed at its own focal position from the complex hologram, while the bias and conjugate images are simultaneously eliminated. In an experiment with a real 3D object of two dice having a depth difference of 30 mm from each other, it is also confirmed that the bias and conjugate images can be effectively eliminated from the hologram pattern and that each 3D die can be successfully reconstructed at its own focal position from the complex hologram. These experimental results suggest the possibility of implementing a new incoherent holographic 3D imaging and display system using the modified triangular and Mach-Zehnder interferometers.
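Numerical refocusing of a complex hologram at a chosen depth, so that each source comes into focus at its own distance, is commonly done with the angular spectrum method; a sketch is below. The grid size, pixel pitch, and wavelength are assumptions, and this is a generic propagation routine rather than the paper's optical system:

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a complex field by distance z (angular spectrum method)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# A point-like source defocused by 5 cm refocuses at its own distance:
n, dx, wl = 256, 10e-6, 633e-9           # grid, pixel pitch, wavelength
src = np.zeros((n, n), complex)
src[n // 2, n // 2] = 1.0
defocused = angular_spectrum_propagate(src, wl, dx, 0.05)
refocused = angular_spectrum_propagate(defocused, wl, dx, -0.05)
# Propagation of the propagating components is unitary, so back-propagating
# by the same distance restores the point source.
```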

Automatic Depth-of-Field Control for Stereoscopic Visualization (입체영상 가시화를 위한 자동 피사계 심도 조절기법)

  • Kang, Dong-Soo;Kim, Yang-Wook;Park, Jun;Shin, Byeong-Seok
    • Journal of Korea Multimedia Society
    • /
    • v.12 no.4
    • /
    • pp.502-511
    • /
    • 2009
  • Several studies in the computer graphics field have sought to simulate the real-world depth-of-field effect, which represents an out-of-focus scene by calculating the focal plane. When a point in 3D space lies farther or nearer than the focal plane, the point appears as a blurred circle on the image plane according to the characteristics of the aperture and the lens. We can generate a realistic image by simulating this effect because it provides an out-of-focus scene as the human eye does. In this paper, we propose a method to calculate the disparity value of a viewer using a customized stereoscopic eye-tracking system, together with a GPU-based depth-of-field control method. They enable us to generate more realistic images while reducing side effects such as dizziness. Since a stereoscopic imaging system compels users to fix their focal position, they usually feel discomfort while watching stereoscopic images. The proposed method can reduce this side effect of stereoscopic display systems and generate more immersive images.

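The blur-circle geometry described above follows from the thin-lens model; a minimal sketch with illustrative lens numbers (50 mm lens at f/2, focused at 2 m — assumptions, not the paper's setup) is:

```python
# Circle of confusion from the thin-lens model: a point at distance d,
# with the lens focused at d_f, images to a blur circle of diameter c.
def circle_of_confusion(d, d_f, focal, aperture_d):
    """All distances in metres; aperture_d is the aperture diameter."""
    return aperture_d * focal * abs(d - d_f) / (d * (d_f - focal))

f, A, d_f = 0.050, 0.025, 2.0
c_focus = circle_of_confusion(2.0, d_f, f, A)   # zero on the focal plane
c_near = circle_of_confusion(1.0, d_f, f, A)    # nearer points blur
c_far = circle_of_confusion(8.0, d_f, f, A)     # farther points blur too
```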

Camera Calibration for Machine Vision Based Autonomous Vehicles (머신비젼 기반의 자율주행 차량을 위한 카메라 교정)

  • Lee, Mun-Gyu;An, Taek-Jin
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.8 no.9
    • /
    • pp.803-811
    • /
    • 2002
  • Machine vision systems are usually used to identify traffic lanes and then determine the steering angle of an autonomous vehicle in real time. The steering angle is calculated using a geometric model of various parameters, including the orientation, position, and hardware specification of a camera in the machine vision system. To find accurate values of the parameters, camera calibration is required. This paper presents a new camera-calibration algorithm using known traffic lane features: line thickness and lane width. The camera parameters considered are divided into two groups: Group I (the camera orientation, the uncertainty image scale factor, and the focal length) and Group II (the camera position). First, six control points are extracted from an image of two traffic lines, and eight nonlinear equations are generated based on these points. The least squares method is used to find estimates for the Group I parameters. Finally, values of the Group II parameters are determined using point correspondences between the image and the corresponding real-world points. Experimental results prove the feasibility of the proposed algorithm.
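As a simplified illustration of estimating an intrinsic parameter from control points by least squares: when the pose is known, the pinhole projection is linear in the focal length, so it can be recovered directly. The six synthetic points and the noise-free setup are assumptions; the paper's actual formulation is nonlinear and also estimates orientation:

```python
import numpy as np

# Pinhole model with known pose: u = f * X / Z and v = f * Y / Z,
# linear in f, so a handful of control points determine it by least squares.
rng = np.random.default_rng(0)
f_true = 700.0
pts = rng.uniform([-1, -1, 4], [1, 1, 10], size=(6, 3))  # six control points
X, Y, Z = pts.T

obs = np.concatenate([f_true * X / Z, f_true * Y / Z])   # image coordinates
A = np.concatenate([X / Z, Y / Z])[:, None]              # design matrix in f
f_est, *_ = np.linalg.lstsq(A, obs, rcond=None)
# f_est[0] recovers f_true exactly in this noise-free sketch.
```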

A Study on the Effect of Weighting Matrix of Robot Vision Control Algorithm in Robot Point Placement Task (점 배치 작업 시 제시된 로봇 비젼 제어알고리즘의 가중행렬의 영향에 관한 연구)

  • Son, Jae-Kyung;Jang, Wan-Shik;Sung, Yoon-Gyung
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.29 no.9
    • /
    • pp.986-994
    • /
    • 2012
  • This paper is concerned with the application of a vision control algorithm with a weighting matrix to a robot point-placement task. The proposed vision control algorithm involves four models: the robot kinematic model, the vision system model, a parameter estimation scheme, and a robot joint angle estimation scheme. The proposed algorithm lets the robot move actively even if the relative position between the camera and the robot and the camera's focal length are unknown. The parameter estimation scheme and the joint angle estimation scheme take the form of nonlinear equations; in particular, the joint angle estimation model includes several restrictive conditions. In this study, a weighting matrix that assigns varying weights near the target was applied to the parameter estimation scheme, and we investigate how this change of the weighting matrix affects the presented vision control algorithm. Finally, the effect of the weighting matrix on the robot vision control algorithm is demonstrated experimentally by performing the robot point-placement task.
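The role of a weighting matrix in parameter estimation can be illustrated with ordinary weighted least squares: rows given larger weights (e.g. measurements near the target) are fitted more tightly. The small linear system below is purely illustrative; the paper's estimation scheme is nonlinear:

```python
import numpy as np

# Weighted least squares: minimizing (b - A x)^T W (b - A x) gives
# x = (A^T W A)^{-1} A^T W b, so heavily weighted rows dominate the fit.
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [1.0, -1.0]])
b = np.array([1.0, 2.0, 3.5, -0.5])      # slightly inconsistent system
W = np.diag([1.0, 1.0, 10.0, 10.0])      # weight the last two rows heavily

x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
residual = b - A @ x
# The heavily weighted equations end up with the smallest residuals.
```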