• Title/Summary/Keyword: Camera Modeling


Robust Camera Calibration using TSK Fuzzy Modeling

  • Lee, Hee-Sung;Hong, Sung-Jun;Kim, Eun-Tai
    • International Journal of Fuzzy Logic and Intelligent Systems
    • /
    • v.7 no.3
    • /
    • pp.216-220
    • /
    • 2007
  • Camera calibration in machine vision is the process of determining the intrinsic camera parameters and the three-dimensional (3D) position and orientation of the camera frame relative to a certain world coordinate system. The Takagi-Sugeno-Kang (TSK) fuzzy system, meanwhile, is a very popular fuzzy system that can approximate any nonlinear function to arbitrary accuracy with only a small number of fuzzy rules, and it combines nonlinear behavior with a transparent structure. In this paper, we present a novel and simple technique for camera calibration in machine vision using a TSK fuzzy model. The proposed method divides the world into regions according to the camera view and uses the clustered 3D geometric knowledge. A TSK fuzzy system is employed to estimate the camera parameters by combining the partial information into complete 3D information. Experiments are performed to verify the proposed camera calibration (a minimal sketch of the TSK inference step follows below).
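
The blending step of a first-order TSK fuzzy system, as used in the entry above, weights local linear consequents by Gaussian rule memberships. The sketch below is a minimal NumPy illustration under assumed rule centers, widths, and consequent coefficients; none of these values or names come from the paper.

```python
import numpy as np

def tsk_infer(x, centers, widths, coeffs, biases):
    """First-order Takagi-Sugeno-Kang inference.

    Each rule i has a Gaussian membership exp(-||x - c_i||^2 / (2 s_i^2))
    and a local linear consequent A_i @ x + b_i; the output is the
    membership-weighted average of the local consequents.
    """
    x = np.asarray(x, dtype=float)
    # Rule firing strengths (Gaussian memberships), then normalization.
    d2 = np.sum((centers - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * widths ** 2))
    w = w / (np.sum(w) + 1e-12)
    # Local linear consequents, blended by the normalized firing strengths.
    y_local = coeffs @ x + biases           # shape: (n_rules, n_outputs)
    return np.sum(w[:, None] * y_local, axis=0)

if __name__ == "__main__":
    # Toy example: two rules over a 2-D input, scalar output (placeholder numbers).
    centers = np.array([[0.0, 0.0], [1.0, 1.0]])
    widths = np.array([0.5, 0.5])
    coeffs = np.array([[[1.0, 0.0]], [[0.0, 1.0]]])   # (rules, out, in)
    biases = np.array([[0.0], [0.5]])
    print(tsk_infer([0.2, 0.8], centers, widths, coeffs, biases))
```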

A Method for Generation of Contour lines and 3D Modeling using Depth Sensor (깊이 센서를 이용한 등고선 레이어 생성 및 모델링 방법)

  • Jung, Hunjo;Lee, Dongeun
    • Journal of Korea Society of Digital Industry and Information Management
    • /
    • v.12 no.1
    • /
    • pp.27-33
    • /
    • 2016
  • In this study we propose a method for 3D landform reconstruction and object modeling that generates contour lines on the map using a depth sensor, extracting the characteristics of geological layers from the depth map. Unlike a common visual camera, the depth sensor is not affected by the intensity of illumination, so more robust contours and objects can be extracted. The algorithm suggested in this paper first extracts the characteristics of each geological layer from the depth-map image and rearranges them into the proper order, then creates contour lines using Bezier curves. Using the created contour lines, 3D images are reconstructed through rendering by mapping the RGB images of the visual camera onto them. Experimental results show that the proposed method using a depth sensor can reconstruct the contour map and the 3D model in real time, and that generating the contours from depth data is more efficient and economical in terms of quality and accuracy (a rough sketch of the layer-quantization step appears below).
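
The per-layer contour extraction described above can be approximated with OpenCV by quantizing the depth map into bands and extracting one contour set per band. This is a hedged sketch: the number of layers, the morphological cleanup, and the use of approxPolyDP in place of the paper's Bezier fitting are all assumptions for illustration.

```python
import cv2
import numpy as np

def depth_contour_layers(depth, n_layers=8):
    """Quantize a depth map into bands and extract one contour set per band."""
    d = depth.astype(np.float32)
    d_min, d_max = float(np.nanmin(d)), float(np.nanmax(d))
    step = (d_max - d_min) / n_layers
    layers = []
    for i in range(1, n_layers):
        level = d_min + i * step
        # Binary mask of everything below the current depth level.
        mask = (d < level).astype(np.uint8) * 255
        # Light morphological cleanup before contour extraction.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        # Smooth each contour with polygon approximation (the paper fits
        # Bezier curves; approxPolyDP is a simple stand-in here).
        smoothed = [cv2.approxPolyDP(c, epsilon=2.0, closed=True)
                    for c in contours]
        layers.append((level, smoothed))
    return layers
```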

Study on Influencing Factors of Camera Balance in MOBA Games - Focused on (MOBA 게임 카메라 밸런스 개선을 위한 영향요소 분석 - 중심으로)

  • LI, JING;Cho, Dong-Min
    • Journal of Korea Multimedia Society
    • /
    • v.23 no.12
    • /
    • pp.1565-1575
    • /
    • 2020
  • This study examines the game balance of the MOBA genre, which was selected as an event for the Asian Games. The bird's-eye view was used for a more efficient representation of the 3D modeling. On that basis, statistical analysis was conducted to present appropriate game camera settings and camera balance matching the competitive structure of the MOBA game. A review of the game camera settings reveals that 64° to 70° is the angle range that minimizes the difference in vision between the two player teams. Through a one-way ANOVA, we found that user ranking level and SVB value are closely related; therefore, user ranking level must be included as a factor in the regression equation built on the SVB value (an outline of such an ANOVA is sketched below). As a result of the optimized camera focus analysis of , the camera setting methods were classified into three types: for mainly action-oriented games, the recommended camera angle is 64°~66° and the recommended camera focus is 11.2 mm~19.3 mm; for mixed action and strategy games, 66°~68° and 19.3 mm~27.3 mm; and for mainly strategy games, 68°~70° and 27.3 mm~35.3 mm.
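
The one-way ANOVA relating user ranking level to SVB value can be reproduced in outline with SciPy. The grouped numbers below are placeholders, not data from the study, and the group labels are assumptions.

```python
from scipy import stats

# Hypothetical SVB measurements grouped by user ranking level
# (placeholder numbers, not values reported in the paper).
svb_low_rank = [0.42, 0.45, 0.40, 0.47]
svb_mid_rank = [0.51, 0.49, 0.53, 0.50]
svb_high_rank = [0.58, 0.60, 0.57, 0.61]

f_stat, p_value = stats.f_oneway(svb_low_rank, svb_mid_rank, svb_high_rank)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value indicates that mean SVB differs across ranking levels,
# i.e. ranking level belongs as a factor in a regression model for SVB.
```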

Modeling and Calibration of a 3D Robot Laser Scanning System (3차원 로봇 레이저 스캐닝 시스템의 모델링과 캘리브레이션)

  • Lee Jong-Kwang;Yoon Ji Sup;Kang E-Sok
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.11 no.1
    • /
    • pp.34-40
    • /
    • 2005
  • In this paper, we describe the modeling of a 3D robot laser scanning system consisting of a laser stripe projector, a camera, and a 5-DOF robot, and propose a calibration method for it. Nonlinear radial distortion is included in the camera model to improve the calibration accuracy. The 3D range data are calculated using the optical triangulation principle, which uses the geometrical relationship between the camera and the laser stripe plane (a minimal sketch of this step is given below). For optimal estimation of the system model parameters, a real-coded genetic algorithm is applied in the calibration process. Experimental results show that the constructed system is able to measure 3D position to within about 1 mm of error. The proposed scheme can be applied to kinematically dissimilar robot systems without loss of generality and has potential for recognizing unknown environments.
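
Optical triangulation for a laser stripe amounts to intersecting the camera ray through an image point with the stripe plane. The sketch below assumes known intrinsics and plane parameters, uses a one-term radial undistortion approximation, and does not reproduce the paper's genetic-algorithm parameter estimation.

```python
import numpy as np

def triangulate_stripe_point(u, v, K, plane_n, plane_d, k1=0.0):
    """Intersect the camera ray through pixel (u, v) with the laser plane.

    K        : 3x3 intrinsic matrix
    plane_n  : laser plane normal in camera coordinates
    plane_d  : plane offset, so points X on the plane satisfy n . X = d
    k1       : first radial distortion coefficient (one-term model)
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Normalized image coordinates.
    xn = (u - cx) / fx
    yn = (v - cy) / fy
    # One-term radial undistortion (first-order approximation).
    r2 = xn * xn + yn * yn
    xn, yn = xn / (1.0 + k1 * r2), yn / (1.0 + k1 * r2)
    # Ray from the camera center through the normalized point.
    ray = np.array([xn, yn, 1.0])
    # Scale the ray so that its endpoint lies on the plane: n . (t * ray) = d.
    t = plane_d / float(np.dot(plane_n, ray))
    return t * ray
```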

Indoor Environment Modeling with Stereo Camera for Mobile Robot Navigation

  • Park, Sung-Kee;Park, Jong-Suk;Kim, Munsang;Lee, Chong-won
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2002.10a
    • /
    • pp.34.5-34
    • /
    • 2002
  • In this paper we propose a new method for modeling an indoor environment with a stereo camera and suggest a localization method for mobile robot navigation based on it. From the viewpoint of ease of map building and exclusion of artificiality, the main idea of this paper is that the environment is represented as a global topological map in which each node holds omni-directional metric and color information obtained with a stereo camera and a pan/tilt mechanism. We use the depth and color information of the image pixels themselves as features for environmental abstraction; in addition, we use only the depth and color information at the horizontal centerline of the image, through which the optical axis passes (a brief sketch of this descriptor is given below). The usefulness of this m...
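
The centerline descriptor described above can be sketched with OpenCV's block-matching stereo: compute a disparity map, then keep only the depth and color values along the image's middle row. The matcher choice and its parameters are assumptions, not the authors' settings.

```python
import cv2
import numpy as np

def centerline_signature(left_bgr, right_bgr, num_disparities=64, block_size=15):
    """Disparity and color sampled along the image's horizontal centerline."""
    left_gray = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right_gray = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    # Classical block-matching stereo; disparities are returned as
    # fixed-point values scaled by 16.
    matcher = cv2.StereoBM_create(numDisparities=num_disparities,
                                  blockSize=block_size)
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    row = left_bgr.shape[0] // 2          # horizontal centerline
    return disparity[row, :], left_bgr[row, :, :]
```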


Maritime Object Segmentation and Tracking by using Radar and Visual Camera Integration

  • Hwang, Jae-Jeong;Cho, Sang-Gyu;Lee, Jung-Sik;Park, Sang-Hyon
    • Journal of information and communication convergence engineering
    • /
    • v.8 no.4
    • /
    • pp.466-471
    • /
    • 2010
  • We propose a method to detect and track moving ships using position information from radar together with an image processor. Real-time segmentation of moving regions in image sequences is a fundamental step in the radar-camera integrated system. The object segmentation is implemented as a chain of background subtraction, morphological operations, connected-component labeling, region growing, and minimum enclosing rectangles (an illustrative version of this chain is sketched below). Once the moving objects are detected, tracking is performed only on pixels labeled as foreground, which keeps the additional computational burden small.
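
The segmentation chain named above maps closely onto standard OpenCV calls. The sketch below is only an outline: the MOG2 subtractor, kernel sizes, and area threshold are assumptions rather than the authors' settings, and the region-growing step is omitted.

```python
import cv2
import numpy as np

# Background subtractor maintained across frames (MOG2 chosen as a stand-in).
bg_subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def detect_moving_objects(frame, min_area=50):
    """Return one minimum enclosing (rotated) rectangle per moving region."""
    fg = bg_subtractor.apply(frame)                        # background subtraction
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN,              # remove speckle noise
                          np.ones((3, 3), np.uint8))
    fg = cv2.morphologyEx(fg, cv2.MORPH_CLOSE,             # fill small holes
                          np.ones((7, 7), np.uint8))
    n, labels, stats, _ = cv2.connectedComponentsWithStats(fg)  # labeling
    boxes = []
    for i in range(1, n):                                  # label 0 is background
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            continue
        pts = np.column_stack(np.where(labels == i))[:, ::-1].astype(np.float32)
        boxes.append(cv2.minAreaRect(pts))                 # minimum enclosing rect
    return boxes
```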

Development of Camera Calibration Technique Using Neural Network (신경회로망을 이용한 카메라 보정기법 개발)

  • 한성현;왕한홍;장영희
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 1997.10a
    • /
    • pp.1617-1620
    • /
    • 1997
  • This paper describes neural-network-based camera calibration with a camera model that accounts for the major sources of camera distortion, namely radial, decentering, and thin prism distortion. Radial distortion causes an inward or outward displacement of a given image point from its ideal location. Actual optical systems are subject to various degrees of decentering, that is, the optical centers of the lens elements are not strictly collinear. Thin prism distortion arises from imperfections in lens design and manufacturing as well as in camera assembly (a standard formulation of these three terms is sketched below). Our purpose is to develop a vision system for pattern recognition and the automatic testing of parts and to apply it to the manufacturing line. The performance of the proposed camera calibration is illustrated by simulation and experiment.
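
The three distortion terms listed in this abstract are commonly written as additive corrections to the ideal normalized image coordinates. The sketch below follows that standard formulation with illustrative coefficient names (k1, k2, p1, p2, s1, s2); the values and the specific model variant used in the paper are not reproduced.

```python
def distort_normalized(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0, s1=0.0, s2=0.0):
    """Apply radial, decentering (tangential), and thin-prism distortion.

    (x, y) are ideal normalized image coordinates; the return value is the
    distorted coordinates that the real lens would actually produce.
    """
    r2 = x * x + y * y
    # Radial distortion: inward/outward shift along the radial direction.
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    # Decentering distortion: lens element centers not strictly collinear.
    dx_dec = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    dy_dec = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    # Thin-prism distortion: imperfection in lens design and camera assembly.
    dx_prism = s1 * r2
    dy_prism = s2 * r2
    return x * radial + dx_dec + dx_prism, y * radial + dy_dec + dy_prism
```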


Development of Camera Calibration Technique Using Neural-Network (뉴럴네트워크를 이용한 카메라 보정기법 개발)

  • 장영희
    • Proceedings of the Korean Society of Machine Tool Engineers Conference
    • /
    • 1997.10a
    • /
    • pp.225-229
    • /
    • 1997
  • This paper describes neural-network-based camera calibration with a camera model that accounts for the major sources of camera distortion, namely radial, decentering, and thin prism distortion. Radial distortion causes an inward or outward displacement of a given image point from its ideal location. Actual optical systems are subject to various degrees of decentering, that is, the optical centers of the lens elements are not strictly collinear. Thin prism distortion arises from imperfections in lens design and manufacturing as well as in camera assembly. Our purpose is to develop a vision system for pattern recognition and the automatic testing of parts and to apply it to the manufacturing line. The performance of the proposed camera calibration is illustrated by simulation and experiment (a minimal neural-network regression sketch follows below).
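
Since this entry, like the previous one, uses a neural network to play the role of the calibration mapping, a minimal regression sketch is shown below with scikit-learn's MLPRegressor. The direction of the mapping (observed pixel coordinates to ideal coordinates), the network size, and the training data are all assumptions for illustration, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical training pairs: observed (distorted) pixel coordinates and the
# corresponding ideal coordinates from a calibration target (placeholder data).
observed = np.random.default_rng(0).uniform(0, 640, size=(200, 2))
ideal = observed + 0.5 * np.sin(observed / 100.0)    # stand-in ground truth

# Small multilayer perceptron approximating the calibration correction.
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(observed, ideal)

corrected = net.predict(observed[:5])
print(corrected)
```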


Indoor 3D Modeling Using a Rotating Stereo Frame Camera System and Accuracy Evaluation (회전식 프레임 카메라 시스템을 이용한 실내 3차원 모델링 및 정확도 평가)

  • Kang, Jeongin;Lee, Impyeong
    • Korean Journal of Remote Sensing
    • /
    • v.32 no.5
    • /
    • pp.511-527
    • /
    • 2016
  • We propose a rotating stereo frame camera system for acquiring indoor images at low cost. For the experiments, we selected a test site and acquired images using the proposed system and control points using a total station. Using these data, we generated various indoor 3D models with the commercial photogrammetric software PhotoScan. We then performed qualitative and quantitative analyses of the generated indoor 3D models to investigate the feasibility of indoor modeling with the proposed system (the quantitative check-point comparison is sketched below). The results confirm that the indoor models generated with the proposed system are applicable to services that do not require high accuracy.
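
The quantitative part of such an evaluation, comparing model coordinates against total-station control points, reduces to a point-wise 3D RMSE. The arrays below are placeholders, not measurements from the paper.

```python
import numpy as np

# Placeholder arrays: check-point coordinates measured with a total station
# and the corresponding coordinates read from the reconstructed 3D model.
surveyed = np.array([[10.00, 5.00, 1.20], [12.50, 7.10, 1.18], [9.30, 6.40, 2.05]])
modeled = np.array([[10.03, 4.97, 1.22], [12.46, 7.15, 1.15], [9.35, 6.42, 2.01]])

errors = modeled - surveyed
rmse_per_axis = np.sqrt(np.mean(errors ** 2, axis=0))    # RMSE in X, Y, Z
rmse_3d = np.sqrt(np.mean(np.sum(errors ** 2, axis=1)))  # overall 3D RMSE
print(rmse_per_axis, rmse_3d)
```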

SPOT Camera Modeling Using Auxiliary Data (영상보조자료를 이용한 SPOT 카메라 모델링)

  • 김만조;차승훈;고보연
    • Korean Journal of Remote Sensing
    • /
    • v.19 no.4
    • /
    • pp.285-290
    • /
    • 2003
  • In this paper, a camera modeling method that utilizes ephemeris data and imaging geometry is presented. The proposed method constructs a mathematical model only from parameters contained in the auxiliary files and does not require any ground control points for model construction. Control points are needed only to eliminate the geolocation error of the model that originates from errors embedded in the parameters used in model construction (a simple offset-compensation sketch is given below). Using a few (one or two) control points, an RMS error of around one pixel can be obtained, and the control points do not need to be uniformly distributed along the line direction of the scene. This advantage is crucial in large-scale projects and can reduce project cost dramatically.
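
Using one or two control points purely to remove a constant geolocation offset of the ephemeris-based model can be sketched as a simple bias compensation. The function name, the two-coordinate ground system, and the numbers below are assumptions; the paper's actual sensor model is not reproduced.

```python
import numpy as np

def bias_compensate(predicted_xy, observed_xy):
    """Constant-offset correction from a small set of ground control points.

    predicted_xy : ground coordinates predicted by the auxiliary-data model
    observed_xy  : surveyed coordinates of the same control points
    Returns the offset to add to any further model prediction.
    """
    predicted_xy = np.asarray(predicted_xy, dtype=float)
    observed_xy = np.asarray(observed_xy, dtype=float)
    return np.mean(observed_xy - predicted_xy, axis=0)

# One or two control points suffice to estimate the constant offset.
offset = bias_compensate([[1000.0, 2000.0]], [[1003.2, 1998.5]])
corrected = np.array([1500.0, 2500.0]) + offset   # apply to any other prediction
```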