• Title/Summary/Keyword: Camera estimation method

Camera Position Estimation in the Gaster Using Electroendoscopic Image Sequence (전자내시경 순차영상을 이용한 위에서의 카메라 위치 추정)

  • 이상경;민병구
    • Journal of Biomedical Engineering Research
    • /
    • v.12 no.1
    • /
    • pp.49-56
    • /
    • 1991
  • In this paper, a method for camera position estimation in the gaster using an electroendoscopic image sequence is proposed. In order to obtain proper image sequences, the gaster is divided into three sections. Camera position modeling for 3D information extraction is presented, and the image distortion due to the endoscopic lens is corrected. The feature points are represented with respect to the reference coordinate system with an error rate below 10 percent. A faster distortion correction algorithm is also proposed: it uses an error table, which is faster than the coordinate transform method based on n-th order polynomials.
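
The error-table idea lends itself to a short illustration. Below is a minimal sketch in Python, assuming a simple radial polynomial distortion model; the image size, center, and coefficients are hypothetical stand-ins, not values from the paper.

```python
import numpy as np

# Hypothetical image size, center, and radial coefficients (not the paper's).
H, W = 480, 640
cx, cy = W / 2.0, H / 2.0
k1, k2 = -2.0e-7, 5.0e-14

# Build the error table once, offline: per-pixel offsets caused by distortion.
ys, xs = np.mgrid[0:H, 0:W]
r2 = (xs - cx) ** 2 + (ys - cy) ** 2
scale = 1.0 + k1 * r2 + k2 * r2 ** 2
err_x = (xs - cx) * (scale - 1.0)
err_y = (ys - cy) * (scale - 1.0)

def correct(u, v):
    """Correct a distorted pixel by table lookup, no polynomial at run time."""
    ui, vi = int(round(u)), int(round(v))
    return u - err_x[vi, ui], v - err_y[vi, ui]

print(correct(620.0, 20.0))
```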

Defects Length Measurement using an Estimation Algorithm of the Camera Orientation and an Inclination Angle of a Laser Slit Beam

  • Kim, Young-Hwan;Yoon, Ji-Sup;Kang, E-Sok
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2004.08a
    • /
    • pp.1452-1457
    • /
    • 2004
  • In this paper, a method of measuring the length of defects on a wall and reconstructing the defect image is proposed, based on an estimation algorithm for the camera orientation that uses the inclination angle of a laser slit beam. The algorithm for estimating the horizontally inclined angle of the CCD camera adopts a 3-dimensional coordinate transformation of the image plane on which both the laser beam and the original image of the defects exist. The estimation equation is obtained using the information of the beam projected on the wall, and its parameters are determined experimentally. With this algorithm, the original image of the defect can be reconstructed as an image normal to the wall. A series of experiments shows that the defect length is measured within a 0.5% error bound of the real defect size for horizontally inclined angles of up to 30 degrees. The proposed algorithm provides a way of reconstructing an image taken at an arbitrary horizontally inclined angle into the image normal to the wall, and thus enables accurate measurement of defect lengths using only a single camera and a laser slit beam.
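
The reconstruction of an inclined view into an image normal to the wall can be illustrated with a pure-rotation homography, H = K R K^-1. This is only a minimal sketch of that rectification step, not the paper's coordinate transformation; the intrinsics, the 20-degree angle, and the endpoint pixels are assumptions.

```python
import numpy as np

f, cx, cy = 800.0, 320.0, 240.0           # assumed camera intrinsics
K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])
theta = np.deg2rad(20.0)                  # estimated inclination (below 30 deg)
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])  # rotation about the vertical axis
H = K @ R @ np.linalg.inv(K)

def rectify(u, v):
    """Map a pixel in the inclined view to the wall-normal view."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

# Hypothetical defect endpoints in the inclined image; length in rectified
# pixels follows (convert to mm with a known scale).
a, b = rectify(100, 240), rectify(500, 240)
print(np.linalg.norm(a - b))
```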

Defects Length Measurement Using an Estimation Algorithm of the Camera Orientation and an Inclination Angle of a Laser Slit Beam (레이저 슬릿 빔의 경사각과 카메라 자세 추정 알고리듬을 이용한 벽면결함 길이측정)

  • Kim, Young-Hwan;Yoon, Ji-Sup;Kang, E-Sok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.8 no.1
    • /
    • pp.37-45
    • /
    • 2002
  • A method of measuring the length of defects on a wall and reconstructing the defect image is proposed, based on an estimation algorithm for the camera orientation that uses the inclination angle of a laser slit beam. The algorithm for estimating the horizontally inclined angle of the CCD camera adopts a 3-dimensional coordinate transformation of the image plane on which both the laser beam and the original image of the defects exist. The estimation equation is obtained using the information of the beam projected on the wall, and its parameters are determined experimentally. With this algorithm, the original image of the defect can be reconstructed as an image normal to the wall. A series of experiments shows that the defect length is measured within a 0.5% error bound of the real defect size for horizontally inclined angles of up to 30 degrees. The proposed algorithm provides a way of reconstructing an image taken at an arbitrary horizontally inclined angle into the image normal to the wall, and thus enables accurate measurement of defect lengths using a single camera and a laser slit beam.

A Study on Cable Tension Estimation Using Smartphone Built-in Accelerometer and Camera (스마트폰 내장 가속도계와 카메라를 이용한 케이블 장력 추정에 관한 연구)

  • Lee, Hyeong-Jin
    • Journal of the Korean Society of Industry Convergence
    • /
    • v.25 no.5
    • /
    • pp.773-782
    • /
    • 2022
  • Estimating cable tension from proper measurements is one of the essential tasks in evaluating the safety of cable structures. In this paper, a study on cable tension estimation using the accelerometer and camera built into a smartphone was conducted. For the experimental study, visual displacement measurement using a smartphone camera and acceleration measurement using the built-in accelerometer were performed on a cable-stayed bridge model. The natural frequencies estimated from these measurements, and the tensions converted from them, were compared with theoretical values and with results from the conventional visual displacement method. The comparison shows that the error between the smartphone-based methods and the conventional visual displacement method is small enough to be acceptable, and that these errors are much smaller than the deviation from the values calculated by the theoretical model. These results indicate that the deviation according to the type of measurement method is not large, and that using an appropriate mathematical model matters more. In conclusion, for cable tension estimation, visual displacement and acceleration measurement using a smartphone can be applied just like the conventional visual displacement method. It is also noteworthy that although the smartphone accelerometer has larger magnitude errors and more limitations, such as unstable high-frequency sampling, than the visual displacement method, it shows almost the same performance in this cable tension estimation.
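
The measurement-to-tension chain described here (natural frequency from a measured signal, then tension from a cable model) can be sketched briefly. The sketch below assumes the classical taut-string formula T = 4 m L^2 (f_n / n)^2 and a synthetic accelerometer signal; the cable properties are hypothetical, and the paper's actual mathematical model may differ, which is exactly the sensitivity the authors stress.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0                            # sampling rate (Hz), smartphone-class
t = np.arange(0, 30.0, 1.0 / fs)
f_true = 2.4                          # hypothetical first natural frequency (Hz)
accel = np.sin(2 * np.pi * f_true * t) + 0.3 * rng.standard_normal(t.size)

# Peak of the windowed spectrum gives the estimated natural frequency.
spec = np.abs(np.fft.rfft(accel * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)
f_n = freqs[np.argmax(spec[1:]) + 1]  # skip the DC bin

# Taut-string model: T = 4 m L^2 (f_n / n)^2, with hypothetical cable data.
m, L, n = 5.0, 12.0, 1                # mass per length (kg/m), length (m), mode
T = 4.0 * m * L**2 * (f_n / n) ** 2   # tension (N)
print(f_n, T)
```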

A Study on a Rigid Body Placement Task Based on a Robot Vision System (로봇 비젼시스템을 이용한 강체 배치 실험에 대한 연구)

  • 장완식;신광수;안철봉
    • Journal of the Korean Society for Precision Engineering
    • /
    • v.15 no.11
    • /
    • pp.100-107
    • /
    • 1998
  • This paper presents the development of an estimation model and a control method based on a new robot vision scheme. The proposed control method uses a sequential estimation scheme that permits placement of the rigid body in each of the two-dimensional image planes of the monitoring cameras. An estimation model with six parameters is developed, generalizing the known kinematics of a 4-axis SCARA robot to accommodate the unknown relative camera position and orientation. Based on the estimated parameters for each camera, the joint angles of the robot are estimated by an iterative method. The method is tested experimentally in two ways: an estimation-model test and a three-dimensional rigid body placement task. The results show that the control scheme is precise and robust. This feature opens the door to a range of applications of multi-axis robots, such as assembly and welding.
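
The abstract does not spell out the sequential estimation scheme, so the following is only a generic recursive least-squares sketch of how six model parameters could be refined measurement by measurement; the linear measurement model h() and all numbers are hypothetical stand-ins, not the paper's formulation.

```python
import numpy as np

def h(params, x):
    """Hypothetical linear measurement model: predicted image measurement."""
    return params @ x

n = 6
params = np.zeros(n)
P = np.eye(n) * 1e3                   # parameter covariance, large = vague prior
r = 1.0                               # measurement noise variance

def sequential_update(params, P, x, z):
    """One recursive least-squares step for measurement z with regressor x."""
    K = P @ x / (x @ P @ x + r)       # gain vector
    params = params + K * (z - h(params, x))
    P = P - np.outer(K, x @ P)
    return params, P

rng = np.random.default_rng(1)
true = np.array([1.0, -0.5, 0.2, 0.0, 0.3, 0.8])
for _ in range(200):
    x = rng.standard_normal(n)
    z = true @ x + 0.01 * rng.standard_normal()
    params, P = sequential_update(params, P, x, z)
print(np.round(params, 2))            # converges toward the true parameters
```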

Estimation of Surface Spectral Reflectance using A Population with Similar Colors (유사색 모집단을 이용한 물체의 분광 반사율 추정)

  • 이철희;서봉우;안석출
    • Journal of Korea Multimedia Society
    • /
    • v.4 no.1
    • /
    • pp.37-45
    • /
    • 2001
  • Studies estimating the surface spectral reflectance of an object with multi-spectral camera systems have received widespread attention. However, a multi-spectral camera system requires an additional color filter for each added channel, and system complexity is increased by multiple captures. This paper therefore proposes an algorithm that reduces the estimation error of surface spectral reflectance with a conventional 3-band RGB camera. In the proposed method, adaptive principal components for each pixel are calculated by renewing the population of surface reflectances, and these adaptive principal components reduce the estimation error for the spectral reflectance of the current pixel. To evaluate the performance of the proposed method, 3-band principal component analysis, the 5-band Wiener estimation method, and the proposed method are compared in an estimation experiment with the Macbeth Color Checker. The proposed method showed a lower mean square error between the estimated and measured spectra than the conventional 3-band principal component analysis method, and showed similar or better estimation performance than the 5-band Wiener method.
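
A minimal sketch of the adaptive-principal-component idea: select a population of similar colors for the current pixel, take principal components of that subset, and solve for reflectance weights from the 3-band response. The camera sensitivity matrix and reflectance database below are synthetic assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(2)
n_wl = 31                              # wavelengths 400-700 nm in 10 nm steps
population = rng.random((200, n_wl))   # stand-in reflectance database
M = rng.random((3, n_wl))              # stand-in RGB camera sensitivity matrix

def estimate_reflectance(rgb, population, k=3):
    # keep the 50 population spectra whose camera responses are most similar
    responses = population @ M.T
    idx = np.argsort(np.linalg.norm(responses - rgb, axis=1))[:50]
    subset = population[idx]
    mean = subset.mean(axis=0)
    # adaptive basis: first k principal components of the similar-color subset
    _, _, Vt = np.linalg.svd(subset - mean, full_matrices=False)
    B = Vt[:k].T
    # least-squares weights so that M (mean + B w) matches the observed rgb
    w, *_ = np.linalg.lstsq(M @ B, rgb - M @ mean, rcond=None)
    return mean + B @ w

target = population[0]
print(np.linalg.norm(estimate_reflectance(M @ target, population) - target))
```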

Illumination estimation based on valid pixel selection from CCD camera response (CCD카메라 응답으로부터 유효 화소 선택에 기반한 광원 추정)

  • 권오설;조양호;김윤태;송근호;하영호
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.5
    • /
    • pp.251-258
    • /
    • 2004
  • This paper proposes a method for estimating the illuminant chromaticity using the distribution of camera responses obtained by a CCD camera in a real-world scene. Illuminant estimation using a highlight method is based on the geometric relation between the body reflection and the surface reflection of an object. In general, the pixels in a highlight region are affected by geometric differences in the illuminant, camera quantization errors, and the non-uniformity of the CCD sensor, which leads to inaccurate results if the illuminant is estimated from the CCD camera pixels without any preprocessing. To solve this problem, the proposed method analyzes the distribution of the CCD camera responses and selects pixels in highlight regions using the Mahalanobis distance. The Mahalanobis distance based on the camera responses enables the adaptive selection of valid pixels among the pixels distributed in the highlight regions. Lines are then fitted to the selected pixels in r-g chromaticity coordinates using principal component analysis (PCA), and the illuminant chromaticity is estimated from the intersection points of the lines. Experimental results demonstrate a reduced estimation error compared with the conventional method.
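
The pipeline of valid-pixel selection by Mahalanobis distance, PCA line fitting in r-g chromaticity, and line intersection can be sketched as follows. The highlight-region pixel data are synthetic stand-ins, and the distance threshold is an assumption.

```python
import numpy as np

def valid_pixels(rg, thresh=2.0):
    """Keep pixels whose Mahalanobis distance to the cluster is small."""
    mean = rg.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(rg.T))
    d = np.sqrt(np.einsum('ij,jk,ik->i', rg - mean, cov_inv, rg - mean))
    return rg[d < thresh]

def pca_line(rg):
    """Return a point on the line (mean) and its direction (1st component)."""
    mean = rg.mean(axis=0)
    _, _, Vt = np.linalg.svd(rg - mean)
    return mean, Vt[0]

def intersect(p1, d1, p2, d2):
    # solve p1 + t1 d1 = p2 + t2 d2 for the illuminant chromaticity
    A = np.stack([d1, -d2], axis=1)
    t = np.linalg.solve(A, p2 - p1)
    return p1 + t[0] * d1

rng = np.random.default_rng(3)
illum = np.array([0.40, 0.35])        # hypothetical illuminant (r, g)
region1 = illum + np.outer(rng.random(100), [0.3, -0.1]) + 0.003 * rng.standard_normal((100, 2))
region2 = illum + np.outer(rng.random(100), [-0.2, 0.25]) + 0.003 * rng.standard_normal((100, 2))
lines = [pca_line(valid_pixels(r)) for r in (region1, region2)]
print(intersect(lines[0][0], lines[0][1], lines[1][0], lines[1][1]))
```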

Markerless camera pose estimation framework utilizing construction material with standardized specification

  • Harim Kim;Heejae Ahn;Sebeen Yoon;Taehoon Kim;Thomas H.-K. Kang;Young K. Ju;Minju Kim;Hunhee Cho
    • Computers and Concrete
    • /
    • v.33 no.5
    • /
    • pp.535-544
    • /
    • 2024
  • In the rapidly advancing landscape of computer vision (CV) technology, there is burgeoning interest in its integration with the construction industry. Camera calibration is the process of deriving the intrinsic and extrinsic parameters that govern how 3D real-world coordinates are projected onto the 2D image plane; the intrinsic parameters are internal factors of the camera, while the extrinsic parameters are external factors such as the position and rotation of the camera. Camera pose estimation, or extrinsic calibration, which estimates the extrinsic parameters, provides essential information for CV applications in construction, since it can be used for indoor navigation of construction robots and for field monitoring by restoring depth information. Traditional camera pose estimation methods rely on target objects such as markers or patterns. However, these marker- or pattern-based methods are often time-consuming because a target object must be installed for each estimation. As a solution to this challenge, this study introduces a novel framework that performs camera pose estimation using standardized materials commonly found on construction sites, such as concrete forms. The proposed framework obtains 3D real-world coordinates by referring to construction materials with known specifications, extracts the corresponding 2D coordinates on the image plane through keypoint detection, and recovers the camera's pose through the perspective-n-point (PnP) method, which derives the extrinsic parameters by matching 3D and 2D coordinate pairs. This framework streamlines the extrinsic calibration process and can thereby enhance the efficiency of CV applications and data collection on construction sites, holding promise for expediting various construction-related tasks by automating and simplifying the calibration procedure.
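
The final PnP step maps directly onto standard tooling. The sketch below uses OpenCV's solvePnP as a stand-in for the paper's implementation, with a hypothetical 600 x 1200 mm form panel as the standardized reference; the intrinsics and detected keypoints are made-up values.

```python
import numpy as np
import cv2

# 3D corners of the panel in world coordinates (mm), known from its spec.
object_pts = np.array([[0, 0, 0], [600, 0, 0], [600, 1200, 0], [0, 1200, 0]],
                      dtype=np.float64)
# 2D keypoints detected in the image (hypothetical detections).
image_pts = np.array([[210, 480], [410, 470], [430, 120], [220, 110]],
                     dtype=np.float64)

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)

R, _ = cv2.Rodrigues(rvec)
camera_position = -R.T @ tvec      # camera center in world coordinates
print(ok, camera_position.ravel())
```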

Multi-camera-based 3D Human Pose Estimation for Close-Proximity Human-robot Collaboration in Construction

  • Sarkar, Sajib;Jang, Youjin;Jeong, Inbae
    • International conference on construction engineering and project management
    • /
    • 2022.06a
    • /
    • pp.328-335
    • /
    • 2022
  • With the advance of robot capabilities and functionalities, construction robots assisting workers have been increasingly deployed on construction sites to improve safety, efficiency, and productivity. For close-proximity human-robot collaboration on construction sites, robots need to be aware of the context, especially construction workers' behavior, in real time to avoid collisions with workers. To recognize human behavior, most previous studies obtained 3D human poses using a single camera or an RGB-depth (RGB-D) camera. However, single-camera detection suffers from limitations such as occlusions, detection failures, and sensor malfunction, and an RGB-D camera may suffer interference from lighting conditions and surface materials. To address these issues, this study proposes a novel method of 3D human pose estimation that extracts the 2D location of each joint from multiple images captured at the same time from different viewpoints, fuses each joint's 2D locations, and estimates the 3D joint location. For higher accuracy, a probabilistic representation is used to extract the 2D joint locations, treating each joint location extracted from an image as a noisy partial observation. The 3D human pose is then estimated by fusing the probabilistic 2D joint locations to maximize the likelihood. The proposed method was evaluated in both simulation and laboratory settings, and the results demonstrated its accuracy and feasibility in practice. This study contributes to ensuring human safety in close-proximity human-robot collaboration by providing a novel method of 3D human pose estimation.
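
The fusion step can be illustrated with weighted linear (DLT) triangulation of a single joint from multiple calibrated views; this is a simple stand-in for the paper's probabilistic maximum-likelihood fusion. The two projection matrices, the joint location, and the confidence weights are synthetic.

```python
import numpy as np

def triangulate(projections, points2d, weights):
    """Weighted linear (DLT) triangulation of one joint's 3D location."""
    rows = []
    for P, (u, v), w in zip(projections, points2d, weights):
        rows.append(w * (u * P[2] - P[0]))
        rows.append(w * (v * P[2] - P[1]))
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    X = Vt[-1]                        # null vector of the stacked system
    return X[:3] / X[3]

# Two hypothetical calibrated views (3x4 projection matrices) of one joint.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 2.0, 1.0])
uv = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(triangulate([P1, P2], uv, weights=[0.9, 0.7]))  # recovers [0.2 0.1 2.0]
```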

A Defocus Technique based Depth from Lens Translation using Sequential SVD Factorization

  • Kim, Jong-Il;Ahn, Hyun-Sik;Jeong, Gu-Min;Kim, Do-Hyun
    • Institute of Control, Robotics and Systems: Conference Proceedings
    • /
    • 2005.06a
    • /
    • pp.383-388
    • /
    • 2005
  • Depth recovery in robot vision is an essential problem for inferring the three-dimensional geometry of scenes from a sequence of two-dimensional images. Many approaches to depth estimation have been proposed, based on cues such as stereopsis, motion parallax, and blurring phenomena. Among these, depth from lens translation is based on shape from motion using feature points: correspondences of feature points detected in the images provide motion information from which depth is estimated. Approaches based on motion vectors suffer from occlusion and missing-part problems, and image blur is ignored in the feature point detection. This paper presents a novel defocus-technique-based approach to depth from lens translation using sequential SVD factorization. Solving these problems requires modeling the mutual relationship between the light and the optics up to the image plane. We therefore first discuss the optical properties of the camera system, because the image blur varies with the camera parameter settings. The camera system is described by a model that integrates a thin-lens camera model, explaining the light and optical properties, with a perspective projection camera model, explaining depth from lens translation. Depth from lens translation then uses feature points detected at the edges of the image blur; these feature points contain depth information derived from the blur width. The shape and motion are estimated from the motion of the feature points, using sequential SVD factorization to obtain the orthogonal matrices of the singular value decomposition. Experiments with sequences of real and synthetic images, comparing the presented method with plain depth from lens translation, demonstrate the validity and applicability of the proposed method for depth estimation.
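
The factorization step follows the Tomasi-Kanade pattern: stack the tracked feature coordinates into a measurement matrix and recover shape and motion from its rank-3 SVD. The sketch below shows the batch version with synthetic orthographic data; the paper's sequential variant and its defocus-derived features are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(4)
n_frames, n_points = 6, 20
S_true = rng.standard_normal((3, n_points))           # 3D shape (centered)

# Stack orthographic projections of the points over all frames.
W_rows = []
for _ in range(n_frames):
    M, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random camera rotation
    W_rows.append(M[:2] @ S_true)                     # two image rows per frame
W = np.vstack(W_rows)                                 # (2F x P) measurement matrix

# Rank-3 factorization W ~ M_hat @ S_hat via truncated SVD.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
M_hat = U[:, :3] * np.sqrt(s[:3])
S_hat = np.sqrt(s[:3])[:, None] * Vt[:3]
print(np.linalg.matrix_rank(W), np.allclose(M_hat @ S_hat, W, atol=1e-8))
```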
