• Title/Summary/Keyword: camera

Search Result 10,596

A Study on Camera Calibration Using Artificial Neural Network (신경망을 이용한 카메라 보정에 관한 연구)

  • Jeon, Kyong-Pil;Woo, Dong-Min;Park, Dong-Chul
    • Proceedings of the KIEE Conference
    • /
    • 1996.07b
    • /
    • pp.1248-1250
    • /
    • 1996
  • The objective of camera calibration is to obtain the correlation between camera image coordinates and 3-D real-world coordinates. Most calibration methods are based on a camera model consisting of the camera's physical parameters, such as position, orientation, and focal length; in this case, camera calibration means the process of computing those parameters. In this research, we suggest a new and highly efficient approach in which an artificial neural network (ANN) model implicitly contains all the physical parameters, some of which are very difficult to estimate with existing calibration methods. Implicit camera calibration, i.e., the process of calibrating a camera without explicitly computing its physical parameters, can be used for both 3-D measurement and generation of image coordinates. By training on calibration points of different heights, we can find the perspective projection point, which can then be used to reconstruct the 3-D real-world coordinate at an arbitrary height and the image coordinate of an arbitrary 3-D real-world coordinate. An experimental comparison of our method with the well-known two-stage method of Tsai verifies the effectiveness of the proposed method.
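The implicit-calibration idea described above (a network absorbing the camera model instead of explicit parameters) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes synthetic data from a simple planar homography and a one-hidden-layer network trained by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "calibration": a projective map from world (x, y) on a plane
# to image (u, v); the network learns the inverse mapping implicitly.
world = rng.uniform(-1.0, 1.0, size=(200, 2))
H = np.array([[1.2, 0.1, 0.05], [-0.2, 0.9, 0.02], [0.1, 0.05, 1.0]])
img_h = np.hstack([world, np.ones((200, 1))]) @ H.T
image = img_h[:, :2] / img_h[:, 2:3]

# One-hidden-layer MLP trained by full-batch gradient descent (MSE loss).
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 2)); b2 = np.zeros(2)
lr = 0.02

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(image)
loss0 = np.mean((pred0 - world) ** 2)   # loss at random initialization

for _ in range(3000):
    h, pred = forward(image)
    err = 2.0 * (pred - world) / len(image)   # dMSE/dpred
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = (err @ W2.T) * (1 - h ** 2)          # backprop through tanh
    gW1 = image.T @ dh; gb1 = dh.sum(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(image)
loss = np.mean((pred - world) ** 2)
print(f"MSE before {loss0:.4f} -> after {loss:.4f}")
```

The network never sees the homography's parameters explicitly, which mirrors the abstract's point that the physical camera parameters stay implicit in the learned weights.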


Investigation on Image Quality of Smartphone Cameras as Compared with a DSLR Camera by Using Target Image Edges

  • Seo, Suyoung
    • Korean Journal of Remote Sensing
    • /
    • v.32 no.1
    • /
    • pp.49-60
    • /
    • 2016
  • This paper presents a set of methods to evaluate the image quality of smartphone cameras as compared with that of a DSLR camera. In recent years, smartphone cameras have been used broadly for many purposes. As their performance has been enhanced considerably, they may be considered for precise mapping in place of metric cameras. To evaluate this possibility, we tested the quality of one DSLR camera and three smartphone cameras. In the first step, we compared the amount of lens distortion inherent in each camera using camera calibration sheet images. Then, we acquired target sheet images, extracted reference lines from them, and evaluated the geometric quality of the smartphone cameras based on the errors occurring when fitting a straight line to the observed points. In addition, we present a method to evaluate the radiometric quality of the images taken by each camera based on planar fitting errors. We also propose a method to quantify the geometric quality of the selected camera using edge displacements observed in target sheet images. The experimental results show that the geometric and radiometric qualities of smartphone cameras are comparable to those of a DSLR camera except for the lens distortion parameters.
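The straightness-based geometric check described above can be sketched as a total-least-squares line fit: extract edge points, fit a line, and report the RMS perpendicular residual. This is a generic illustration of the idea, not the paper's exact procedure.

```python
import numpy as np

def line_fit_rmse(points):
    """RMS perpendicular distance of 2D points to their total-least-squares line."""
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    # Best-fit line direction = principal singular vector; the last row of
    # Vt is the unit normal to that line.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    normal = vt[-1]
    residuals = centered @ normal     # signed perpendicular distances
    return np.sqrt(np.mean(residuals ** 2))

# A perfectly straight edge scores ~0; lens distortion bends the edge
# and raises the score.
t = np.linspace(0.0, 1.0, 50)
straight = np.column_stack([t, 2.0 * t + 1.0])
curved = np.column_stack([t, 2.0 * t + 1.0 + 0.05 * np.sin(3.0 * t)])
rmse_straight = line_fit_rmse(straight)
rmse_curved = line_fit_rmse(curved)
```

Comparing such residuals across cameras gives a single number per device, which is the spirit of the paper's geometric-quality comparison.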

Camera Calibration Using Neural Network with a Small Amount of Data (소수 데이터의 신경망 학습에 의한 카메라 보정)

  • Do, Yongtae
    • Journal of Sensor Science and Technology
    • /
    • v.28 no.3
    • /
    • pp.182-186
    • /
    • 2019
  • When a camera is employed for 3D sensing, accurate camera calibration is vital as a prerequisite for the subsequent steps of the sensing process. Camera calibration is usually performed by complex mathematical modeling and geometric analysis. In contrast, data learning using an artificial neural network can establish a transformation relation between the 3D space and the 2D camera image without explicit camera modeling. However, a neural network requires a large amount of accurate data for its learning, and collecting extensive data accurately in practice demands a significant amount of time and work with a precise system setup. In this study, we propose a two-step neural calibration method that is effective when only a small amount of learning data is available. In the first step, the camera projection transformation matrix is determined using the limited available data. In the second step, the transformation matrix is used to generate a large amount of synthetic data, and the neural network is trained using the generated data. Results of a simulation study show that the proposed method is valid and effective.
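The first step (recovering a projection matrix from a few points, then using it to label a large synthetic training set) can be sketched with a standard direct linear transform (DLT). The matrices and point counts below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate a 3x4 projection matrix P (up to scale) by the direct
    linear transform from 3D-2D correspondences (needs >= 6 points)."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(3, 4)     # null-space vector, reshaped

def project(P, world_pts):
    """Project 3D points with P and dehomogenize."""
    Xh = np.hstack([world_pts, np.ones((len(world_pts), 1))])
    x = Xh @ P.T
    return x[:, :2] / x[:, 2:3]

rng = np.random.default_rng(1)
# Step 1: recover P from only a few "measured" correspondences.
P_true = np.hstack([np.eye(3), np.array([[0.1], [0.2], [2.0]])])
few_world = rng.uniform(-1.0, 1.0, (8, 3))
few_image = project(P_true, few_world)
P_est = dlt_projection_matrix(few_world, few_image)
# Step 2: label a large synthetic set with P_est; the network would then
# be trained on (many_image, many_world) pairs.
many_world = rng.uniform(-1.0, 1.0, (1000, 3))
many_image = project(P_est, many_world)
```

With noise-free correspondences the DLT recovers P exactly up to scale, so the synthetic labels are consistent with the few real measurements.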

Effects of selfie semantic network analysis and AR camera app use on appearance satisfaction and self-esteem (셀피의 의미연결망 분석과 AR 카메라 앱 사용이 외모만족도와 자아존중감에 미치는 영향)

  • Lee, Hyun-Jung
    • The Research Journal of the Costume Culture
    • /
    • v.30 no.5
    • /
    • pp.766-778
    • /
    • 2022
  • Image-oriented information is becoming increasingly important on social networking services (SNS); the background of this trend is the popularity of selfies. Currently, camera applications using augmented reality (AR) and artificial intelligence (AI) technologies are gaining traction. An AR camera app is a smartphone application that converts selfies into various interesting forms using filters. In this study, we investigated how selfie-related keywords in Google News articles changed over time through semantic network analysis. Additionally, we examined the effects of using an AR camera app on appearance satisfaction and self-esteem when taking a selfie. Semantic network analysis revealed that in 2013, postings of specific people were the most prominent selfie-related keywords. In 2019, keywords appeared regarding the launch of a new smartphone with a rear-facing camera for selfies; in 2020, keywords related to communication through selfies appeared. Examining the effect of the degree of AR camera app use on appearance satisfaction, we found that the higher the degree of use, the higher the user's interest in appearance. Examining its effect on self-esteem, we found that the higher the degree of use, the higher the user's negative self-esteem.

Camera pose estimation framework for array-structured images

  • Shin, Min-Jung;Park, Woojune;Kim, Jung Hee;Kim, Joonsoo;Yun, Kuk-Jin;Kang, Suk-Ju
    • ETRI Journal
    • /
    • v.44 no.1
    • /
    • pp.10-23
    • /
    • 2022
  • Despite the significant progress in camera pose estimation and structure-from-motion reconstruction from unstructured images, methods that exploit a priori information on camera arrangements have been overlooked. Conventional state-of-the-art methods do not exploit the geometric structure to recover accurate camera poses from a set of patch images in an array for mosaic-based imaging, which creates a wide field-of-view image by stitching together a collection of regular images. We propose a camera pose estimation framework that exploits the array-structured image settings in each incremental reconstruction step. It consists of two-way registration, 3D point outlier elimination, and bundle adjustment with a constraint term for consistent rotation vectors to reduce reprojection errors during optimization. We demonstrate that by using the individual images' connected structures at the different camera pose estimation steps, we can estimate camera poses more accurately from all structured mosaic-based image sets, including omnidirectional scenes.
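The constrained objective (reprojection error plus a consistency term on neighboring cameras' rotation vectors) can be sketched as below. Rodrigues' formula and the quadratic penalty are standard building blocks; the paper's registration and outlier-elimination steps are not reproduced here, and the camera/point values are illustrative.

```python
import numpy as np

def rodrigues(r):
    """Rotation vector -> rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    k = r / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def adjustment_cost(obs, pts3d, rvecs, tvecs, neighbors, lam=1.0):
    """Sum of squared reprojection errors plus a penalty keeping the
    rotation vectors of grid-adjacent array cameras consistent."""
    cost = 0.0
    for cam, uv in obs:                       # obs: list of (camera index, Nx2)
        R, t = rodrigues(rvecs[cam]), tvecs[cam]
        p = pts3d @ R.T + t
        proj = p[:, :2] / p[:, 2:3]           # normalized image coordinates
        cost += np.sum((proj - uv) ** 2)
    for i, j in neighbors:                    # adjacent camera pairs in the array
        cost += lam * np.sum((rvecs[i] - rvecs[j]) ** 2)
    return cost

# Two neighboring cameras with slightly different rotation vectors; the
# observations are generated from the same poses, so only the penalty remains.
pts3d = np.array([[0.0, 0.0, 4.0], [1.0, -1.0, 5.0], [-1.0, 1.0, 6.0]])
rvecs = [np.zeros(3), np.array([0.0, 0.02, 0.0])]
tvecs = [np.zeros(3), np.array([0.5, 0.0, 0.0])]
obs = []
for cam in range(2):
    R, t = rodrigues(rvecs[cam]), tvecs[cam]
    p = pts3d @ R.T + t
    obs.append((cam, p[:, :2] / p[:, 2:3]))
c = adjustment_cost(obs, pts3d, rvecs, tvecs, neighbors=[(0, 1)], lam=10.0)
```

Minimizing such a cost over `rvecs`/`tvecs` (e.g. with a nonlinear least-squares solver) is the shape of the constrained bundle adjustment the abstract describes.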

Camera calibration parameters estimation using perspective variation ratio of grid type line widths (격자형 선폭들의 투영변화비를 이용한 카메라 교정 파라메터 추정)

  • Jeong, Jun-Ik;Choi, Seong-Gu;Rho, Do-Hwan
    • Proceedings of the KIEE Conference
    • /
    • 2004.11c
    • /
    • pp.30-32
    • /
    • 2004
  • For 3-D vision measuring, camera calibration is necessary to calculate parameters accurately. Camera calibration methods have developed broadly along two lines: the first establishes reference points in space, and the second uses a grid-type frame and statistical methods. However, the former makes it difficult to set up the reference points, and the latter has low accuracy. In this paper we present an algorithm for camera calibration using the perspective ratio of a grid-type frame with different line widths. It can easily estimate camera calibration parameters such as lens distortion, focal length, scale factor, pose, orientation, and distance. The advantage of this algorithm is that it can estimate the distance of the object; the proposed calibration method can therefore estimate distance in dynamic environments such as autonomous navigation. To validate the proposed method, we set up experiments with a frame on a rotator at distances of 1, 2, 3, and 4 m from the camera and rotated the frame from -60 to 60 degrees. Both computer simulation and real data have been used to test the proposed method, and very good results have been obtained. We investigated the distance error as affected by the scale factor and the different line widths, and experimentally found an average scale factor that yields the least distance error for each image; this average scale factor fluctuates only slightly and decreases the distance error. Compared with classical methods that use a stereo camera or two or three orthogonal planes, the proposed method is easy to use and flexible, advancing camera calibration one step further from static environments toward real-world use such as autonomous land vehicles.
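The distance estimation that the line-width ratios build on reduces, in the simplest pinhole case, to the relation w = f·W/d between real width, distance, and imaged width. A minimal sketch with illustrative values (not the paper's grid-frame algorithm, which also handles tilt and distortion):

```python
def distance_from_width(focal_px, real_width_m, image_width_px):
    """Pinhole relation: a segment of real width W at distance d images to
    w = f * W / d pixels, so d = f * W / w."""
    return focal_px * real_width_m / image_width_px

# A 0.05 m grid line imaged 25 px wide by an f = 1000 px camera.
d = distance_from_width(1000.0, 0.05, 25.0)
print(d)
```

Comparing the imaged widths of grid lines with known, different real widths gives the perspective variation ratio the title refers to, from which distance and pose can be inferred.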


Coordinate Determination for Texture Mapping using Camera Calibration Method (카메라 보정을 이용한 텍스쳐 좌표 결정에 관한 연구)

  • Jeong K. W.;Lee Y.Y.;Ha S.;Park S.H.;Kim J. J.
    • Korean Journal of Computational Design and Engineering
    • /
    • v.9 no.4
    • /
    • pp.397-405
    • /
    • 2004
  • Texture mapping is the process of covering 3D models with texture images in order to increase the visual realism of the models. For proper mapping, the coordinates of the texture images need to coincide with those of the 3D models. When projective images from a camera are used as texture images, the texture image coordinates are defined by a camera calibration method: they are determined by the relation between the coordinate systems of the camera image and the 3D object. With projective camera images, the distortion caused by the camera lens should be compensated in order to get accurate texture coordinates. The distortion problem has been dealt with by iterative methods, in which the camera calibration coefficients are first computed without considering the distortion effect and then modified appropriately. These methods not only change the position of the principal point in the image plane but also require more control points. In this paper, a new iterative method is suggested that reduces the error by fixing the principal point in the image plane. The method considers the image distortion effect independently and fixes the values of the correction coefficients, with which the distortion coefficients can be computed from fewer control points. It is shown that the camera distortion effects are compensated with fewer control points than in previous methods, and that the projective texture mapping produces more realistic images.
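The lens-distortion compensation under discussion can be sketched as a fixed-point inversion of the usual radial polynomial model, with the principal point held fixed. The coefficients and points below are illustrative; the paper's own iteration for the correction coefficients is not reproduced.

```python
import numpy as np

def undistort(points, k1, k2, principal=(0.0, 0.0), iters=10):
    """Invert the radial model x_d = x_u * (1 + k1*r^2 + k2*r^4) by
    fixed-point iteration, keeping the principal point fixed."""
    pts = np.asarray(points, dtype=float) - principal
    und = pts.copy()                          # initial guess: distorted coords
    for _ in range(iters):
        r2 = np.sum(und ** 2, axis=1, keepdims=True)
        und = pts / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return und + principal

# Round-trip check: distort known points, then undistort them.
k1, k2 = -0.1, 0.02
xu = np.array([[0.3, 0.2], [-0.4, 0.5]])     # undistorted (normalized) coords
r2 = np.sum(xu ** 2, axis=1, keepdims=True)
xd = xu * (1.0 + k1 * r2 + k2 * r2 ** 2)     # apply the distortion model
xr = undistort(xd, k1, k2)
```

Because the principal point stays fixed throughout, the iteration only refines the radial correction, which is the flavor of stability the abstract claims for its method.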

PTZ Camera Based Multi Event Processing for Intelligent Video Network (지능형 영상네트워크 연계형 PTZ카메라 기반 다중 이벤트처리)

  • Chang, Il-Sik;Ahn, Seong-Je;Park, Gwang-Yeong;Cha, Jae-Sang;Park, Goo-Man
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.35 no.11A
    • /
    • pp.1066-1072
    • /
    • 2010
  • In this paper we propose a multi-event-handling surveillance system using multiple PTZ cameras. One event is assigned to each PTZ camera to detect unusual situations. If a new object appears in the scene while a camera is tracking an old one, that camera cannot handle the two objects simultaneously; likewise, if the object moves out of the scene during tracking, the camera loses it. In the proposed method, a nearby camera takes over in each case, tracing the new object or detecting the lost one. The nearby camera gets the new object's location information from the old camera and sets up a seamless event link for the object. Our simulation results show continuous camera-to-camera object tracking performance.
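The handoff rule described (a nearby idle camera takes over a new or lost object) can be sketched as a nearest-idle-camera assignment. This is a toy model of the event link, with hypothetical camera records, not the paper's PTZ control logic.

```python
import math

def assign_camera(cameras, obj_pos):
    """Hand a new or lost object to the nearest camera that is not busy.
    `cameras`: list of dicts with 'pos' (x, y) and 'target' (None if idle)."""
    idle = [c for c in cameras if c["target"] is None]
    if not idle:
        return None                          # every camera is tracking something
    best = min(idle, key=lambda c: math.dist(c["pos"], obj_pos))
    best["target"] = obj_pos                 # establish the event link
    return best

# Camera 0 is busy tracking object "A"; a new object appears near camera 1.
cams = [{"pos": (0, 0), "target": "A"},
        {"pos": (10, 0), "target": None},
        {"pos": (20, 0), "target": None}]
chosen = assign_camera(cams, (8, 1))
```

In the full system the old camera would also pass along appearance and trajectory information so the takeover is seamless, as the abstract describes.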

Multi-tracer Imaging of a Compton Camera (다중 추적자 영상을 위한 컴프턴 카메라)

  • Kim, Soo Mee
    • Progress in Medical Physics
    • /
    • v.26 no.1
    • /
    • pp.18-27
    • /
    • 2015
  • Since a Compton camera has high detection sensitivity due to electronic collimation and a good energy resolution, it is a potential imaging system for nuclear medicine. In this study, we investigated the feasibility of a Compton camera for multi-tracer imaging and proposed a rotating Compton camera to satisfy Orlov's condition for 3D imaging. Two software phantoms of 140 and 511 keV radiation sources were used for Monte Carlo simulation, and the simulation data were reconstructed by listmode ordered-subset expectation maximization to evaluate the multi-tracer imaging capability of a Compton camera. A Compton camera rotating around the object was then proposed and tested with different rotation-angle steps, evaluated in terms of the histogram of angles in spherical coordinates, to improve the limited field-of-view coverage of a fixed conventional Compton camera. The simulation data showed separate 140 and 511 keV images from simultaneous multi-tracer detection in both 2D and 3D imaging, and the number of valid projection lines on the conical surfaces increased as the rotation-angle step decreased. Considering the computational load and a proper number of projection lines on the conical surface, a rotation angle of 30 degrees was sufficient for 3D imaging with the Compton camera, requiring 26 min of computation time for 5 million detected events; the increased detection time can be addressed with a multiple-Compton-camera system. The Compton camera proposed in this study can be an effective system for multi-tracer imaging and is a potential platform for developing various disease diagnosis and therapy approaches.
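The conical projection surfaces mentioned above come from Compton kinematics: the scatter angle follows from the energies measured in the two detector layers. A sketch of that standard relation (the listmode OSEM reconstruction itself is beyond a snippet):

```python
import math

M_E_C2 = 511.0  # electron rest energy in keV

def compton_cone_angle(e_initial, e_scattered):
    """Half-angle of the Compton cone from the incident photon energy E0 and
    the scattered photon energy E' (both keV):
        cos(theta) = 1 - m_e c^2 * (1/E' - 1/E0)."""
    cos_t = 1.0 - M_E_C2 * (1.0 / e_scattered - 1.0 / e_initial)
    # Clamp against floating-point overshoot before acos.
    return math.acos(max(-1.0, min(1.0, cos_t)))

# A 511 keV photon that scatters down to 340 keV defines a wide cone;
# an unscattered photon defines angle 0.
theta = compton_cone_angle(511.0, 340.0)
theta_zero = compton_cone_angle(511.0, 511.0)
```

Each detected event constrains the source to lie on such a cone, and intersecting many cones (here, from camera positions rotated around the object) is what makes the 3D reconstruction possible.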

Experiment on Camera Platform Calibration of a Multi-Looking Camera System using single Non-Metric Camera (비측정용 카메라를 이용한 Multi-Looking 카메라의 플랫폼 캘리브레이션 실험 연구)

  • Lee, Chang-No;Lee, Byoung-Kil;Eo, Yang-Dam
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.26 no.4
    • /
    • pp.351-357
    • /
    • 2008
  • An aerial multi-looking camera system is equipped with five separate cameras, enabling the acquisition of one vertical image and four oblique images at the same time. This provides more diverse information about the site than vertical aerial photographs alone. The geometric relationship between the oblique cameras and the vertical camera can be modelled by six exterior orientation parameters. Once the relationship between the vertical camera and each oblique camera is determined, the exterior orientation parameters of the oblique images can be calculated from those of the vertical image. In order to determine the relative exterior orientation between the vertical camera and each oblique camera of the multi-looking system, calibration targets were installed in a lab and 14 images were taken from three image stations by tilting and rotating a non-metric digital camera. The interior orientation parameters of the camera and the exterior orientation parameters of the images were estimated. The exterior orientation parameters of each oblique image with respect to the vertical image were then calculated from the images' exterior orientation parameters, and the error propagation of the orientation angles and the position of the projection center was examined.
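The relative-orientation bookkeeping described (calibrating the oblique cameras relative to the vertical one, then propagating the vertical image's exterior orientation) can be sketched as plain rotation/translation composition. The world-to-camera convention and the example poses below are assumptions for illustration.

```python
import numpy as np

def relative_orientation(R_vert, t_vert, R_obl, t_obl):
    """Relative pose of an oblique camera w.r.t. the vertical camera,
    assuming the world-to-camera convention x_cam = R @ x_world + t."""
    R_rel = R_obl @ R_vert.T
    t_rel = t_obl - R_rel @ t_vert
    return R_rel, t_rel

def oblique_from_vertical(R_vert, t_vert, R_rel, t_rel):
    """Once the platform is calibrated, an oblique image's exterior
    orientation follows from the vertical image's exterior orientation."""
    return R_rel @ R_vert, R_rel @ t_vert + t_rel

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Hypothetical vertical and oblique poses from one calibration exposure.
R_v, t_v = rot_z(0.1), np.array([1.0, 2.0, 3.0])
R_o, t_o = rot_z(0.5), np.array([1.5, 2.2, 3.1])
R_rel, t_rel = relative_orientation(R_v, t_v, R_o, t_o)
# Recover the oblique pose again from the vertical pose plus the fixed offset.
R_back, t_back = oblique_from_vertical(R_v, t_v, R_rel, t_rel)
```

Because `R_rel`/`t_rel` are fixed by the camera mount, estimating them once in the lab lets every subsequent oblique image inherit its exterior orientation from the vertical image, which is exactly the workflow the abstract examines.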