• Title/Summary/Keyword: 3d camera

Omni-directional Visual-LiDAR SLAM for Multi-Camera System (다중 카메라 시스템을 위한 전방위 Visual-LiDAR SLAM)

  • Javed, Zeeshan;Kim, Gon-Woo
    • The Journal of Korea Robotics Society
    • /
    • v.17 no.3
    • /
    • pp.353-358
    • /
    • 2022
  • Due to the limited field of view of a pinhole camera, camera pose estimation applications such as visual SLAM lack stability and accuracy. Nowadays, multi-camera setups and large field-of-view cameras are used to address these issues. However, a multi-camera system increases the computational complexity of the algorithm. Therefore, for multi-camera visual simultaneous localization and mapping (vSLAM), a multi-view tracking algorithm is proposed that balances the feature budget between tracking and local mapping. The proposed algorithm is based on the PanoSLAM architecture with a panoramic camera model. To avoid the scale issue, a 3D LiDAR is fused with the omnidirectional camera setup: depth is estimated directly from the 3D LiDAR, and the remaining features are triangulated from pose information. To validate the method, we collected a dataset in an outdoor environment and performed extensive experiments. Accuracy was measured by the absolute trajectory error, which shows comparable robustness across various environments.
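
As a rough sketch of the depth-fusion step this abstract describes, LiDAR points in the camera frame can be projected into the image so that each tracked feature takes the depth of its nearest projection. This assumes a plain pinhole intrinsic matrix K rather than the paper's panoramic model, and all names are illustrative:

```python
import numpy as np

def assign_lidar_depth(features_uv, lidar_xyz, K, max_px_dist=2.0):
    """Assign depth to tracked image features from 3D LiDAR points.

    features_uv: (N, 2) feature pixel coordinates
    lidar_xyz:   (M, 3) LiDAR points already in the camera frame
    K:           (3, 3) camera intrinsic matrix (pinhole assumption)
    """
    in_front = lidar_xyz[lidar_xyz[:, 2] > 0]   # keep points ahead of the camera
    proj = (K @ in_front.T).T                   # project onto the image plane
    uv = proj[:, :2] / proj[:, 2:3]

    depths = np.full(len(features_uv), np.nan)
    for i, f in enumerate(features_uv):
        d2 = np.sum((uv - f) ** 2, axis=1)      # nearest projected LiDAR point
        j = np.argmin(d2)
        if d2[j] <= max_px_dist ** 2:
            depths[i] = in_front[j, 2]          # take its depth
    return depths
```

Features left with NaN depth would then be triangulated from pose information, as the abstract states.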

Point Cloud Generation Method Based on Lidar and Stereo Camera for Creating Virtual Space (가상공간 생성을 위한 라이다와 스테레오 카메라 기반 포인트 클라우드 생성 방안)

  • Lim, Yo Han;Jeong, In Hyeok;Lee, San Sung;Hwang, Sung Soo
    • Journal of Korea Multimedia Society
    • /
    • v.24 no.11
    • /
    • pp.1518-1525
    • /
    • 2021
  • Due to the growth of the VR industry and the rise of the digital twin industry, the importance of creating 3D data that matches real space is increasing. However, doing so requires expert personnel and a large amount of time. In this paper, we propose a system that generates point cloud data with the same shape and color as a real space, just by scanning the space. The proposed system integrates 3D geometric information from a lidar and color information from a stereo camera into one point cloud. Since the number of 3D points generated by the lidar is not enough to express a real space with good quality, some pixels of the 2D image generated by the camera are mapped to the correct 3D coordinates to increase the number of points. Additionally, to minimize storage, overlapping points are filtered out so that only one point exists at the same 3D coordinates. Finally, the 6DoF pose information generated from the lidar point cloud is replaced with the pose generated from the camera image, to position the points more accurately. Experimental results show that the proposed system easily and quickly generates point clouds very similar to the scanned space.
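
A minimal sketch of two of the steps described: back-projecting image pixels with known depth to densify the cloud, and filtering duplicates so only one point occupies a given quantized 3D coordinate. The pinhole model and all names are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def densify_from_pixels(pixels_uv, depths, colors, K):
    """Back-project 2D pixels with known depth to colored 3D points (pinhole model)."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    x = (pixels_uv[:, 0] - cx) * depths / fx
    y = (pixels_uv[:, 1] - cy) * depths / fy
    return np.column_stack([x, y, depths]), colors

def deduplicate(points, colors, voxel=0.01):
    """Keep one point per quantized 3D coordinate (1 cm grid by default)."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, idx = np.unique(keys, axis=0, return_index=True)
    return points[idx], colors[idx]
```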

The Effects of Roll Misalignment Errors, Shooting Distance, and Vergence Condition of 3D Camera on 3D Visual Fatigue (시각피로 모형: 카메라의 회전오차, 촬영 거리, 수렴 조건이 입체 시각피로에 미치는 영향)

  • Li, Hyung-Chul O.;Park, JongJin;Kim, ShinWoo
    • Journal of Broadcast Engineering
    • /
    • v.18 no.4
    • /
    • pp.589-598
    • /
    • 2013
  • In order to understand 3D visual fatigue, it is necessary to examine the visual fatigue induced by camera parameters as well as that induced by pre-existing 3D content. In the present study, we examined the effects of camera parameters such as roll misalignment error, shooting distance, and vergence condition on 3D visual fatigue, and we modelled them. The results indicate that roll misalignment error, shooting distance, and vergence condition all affect 3D visual fatigue, and that the effect of roll misalignment error is evident specifically when screen disparity is relatively small.
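
The abstract does not give the model's functional form; purely as an illustration, a linear model of fatigue versus the three camera parameters could be fit by least squares on subjective fatigue ratings. All numbers below are hypothetical:

```python
import numpy as np

# Hypothetical design matrix: roll error (deg), shooting distance (m), vergence (0/1)
X = np.array([[0.0, 2.0, 0], [1.5, 2.0, 0], [3.0, 4.0, 1], [1.5, 4.0, 1]], float)
y = np.array([1.2, 2.1, 3.4, 2.6])            # hypothetical fatigue ratings

A = np.column_stack([np.ones(len(X)), X])     # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares fit
print("intercept and weights:", coef)
```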

On Design of Visual Servoing using an Uncalibrated Camera in 3D Space

  • Morita, Masahiko;Kohiyama, Kenji;Uchikado, Shigeru;Sun, Lili
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.1121-1125
    • /
    • 2003
  • In this paper we deal with visual servoing that can control a robot arm with a camera using image information only, without estimating the 3D position and rotation of the robot arm. Here it is assumed that the robot arm is calibrated and the camera is uncalibrated. We use a pinhole camera model. The essential notions are epipolar geometry, the epipole, the epipolar equation, and the epipolar constraint; these play an important role in designing visual servoing. For easy understanding of the proposed method, we first show a design for the case of a calibrated camera. The design consists of four steps, and the motion of the robot arm is restricted to a single constant direction. This means that the estimated epipole denotes, on the image plane, the direction in which the robot arm translates in 3D space.
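
The epipole referred to above is the right null vector of the fundamental matrix F. A minimal sketch of recovering it, assuming F has already been estimated from point correspondences:

```python
import numpy as np

def epipole(F):
    """Right epipole e satisfying F @ e = 0, from the SVD null space of F."""
    _, _, Vt = np.linalg.svd(F)
    e = Vt[-1]
    return e / e[2]                 # normalize homogeneous coordinates

def epipolar_residual(F, x, x_prime):
    """Epipolar constraint x'^T F x = 0 for homogeneous image points."""
    return float(x_prime @ F @ x)
```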

New Initialization method for the robust self-calibration of the camera

  • Ha, Jong-Eun;Kang, Dong-Joong
    • Institute of Control, Robotics and Systems (ICROS) Conference Proceedings
    • /
    • 2003.10a
    • /
    • pp.752-757
    • /
    • 2003
  • Recently, 3D structure recovery through camera self-calibration has been actively researched. A traditional calibration algorithm requires known 3D coordinates of control points, while self-calibration requires only corresponding points between images, and thus has more flexibility in real applications. In general, a self-calibration algorithm leads to a nonlinear optimization problem using constraints on the intrinsic parameters of the camera, and therefore requires an initial value for the nonlinear minimization. Traditional approaches obtain initial values by assuming identical intrinsic parameters across views, even while addressing situations where the intrinsic parameters of the camera may change. In this paper, we propose a new initialization method using a minimum of two images. The proposed method is based on the assumption that the initial value that least violates the constraints on the camera's intrinsic parameters is more stable. Synthetic and real experiments support this result.
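
The paper's exact cost function is not given here; one standard way to pose the nonlinear minimization it refers to is the Mendonça-Cipolla criterion, which penalizes violation of the constraint that an essential matrix has two equal singular values. A sketch under that assumption, given fundamental matrices between image pairs:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, F_list):
    """Violation of the equal-singular-value constraint on E = K^T F K."""
    f, cx, cy = params
    K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1.0]])
    res = []
    for F in F_list:
        E = K.T @ F @ K
        s = np.linalg.svd(E, compute_uv=False)
        res.append((s[0] - s[1]) / s[1])   # zero iff E is a valid essential matrix
    return res

# F_list: fundamental matrices from several image pairs; initial guess, e.g.,
# focal length from the image diagonal and principal point at the image center:
# sol = least_squares(residuals, x0=[1000.0, 640.0, 360.0], args=(F_list,))
```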

Vision-based Camera Localization using DEM and Mountain Image (DEM과 산영상을 이용한 비전기반 카메라 위치인식)

  • Cha Jeong-Hee
    • Journal of the Korea Society of Computer and Information
    • /
    • v.10 no.6 s.38
    • /
    • pp.177-186
    • /
    • 2005
  • In this paper, we propose a vision-based camera localization technique using 3D information created by mapping a DEM (digital elevation model) onto a mountain image. Typically, the image features used for localization have drawbacks: they vary with the camera viewpoint, and the amount of information grows over time. In this paper, we extract geometric invariant features that are independent of the camera viewpoint, and estimate the camera extrinsic parameters through accurate corresponding-point matching using a proposed similarity evaluation function and the Graham search method. We also propose a method for creating the 3D information using graph theory and visual clues. The proposed method consists of three stages: extraction of invariant feature vectors from point features, creation of 3D information, and estimation of the camera extrinsic parameters. In the experiments, we compare and analyze the proposed method with existing methods to demonstrate its superiority.
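
Once 2D-3D correspondences between image features and the DEM-derived 3D information are matched, the extrinsic-parameter stage amounts to a pose (PnP) problem. A generic sketch using OpenCV's solver as a stand-in for the paper's own estimator, with hypothetical data:

```python
import numpy as np
import cv2

# object_points: (N, 3) matched 3D points from the DEM mapping (hypothetical data)
# image_points:  (N, 2) corresponding pixel locations
object_points = (np.random.rand(8, 3) * 100).astype(np.float32)
image_points = (np.random.rand(8, 2) * 640).astype(np.float32)
K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)              # rotation matrix (extrinsic orientation)
print("R =", R, "\nt =", tvec.ravel())  # camera extrinsic parameters
```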

A study on comparison between 3D computer graphics cameras and actual cameras (3D컴퓨터그래픽스 가상현실 애니메이션 카메라와 실제카메라의 비교 연구 - Maya, Softimage 3D, XSI 소프트웨어와 실제 정사진과 동사진 카메라를 중심으로)

  • Kang, Chong-Jin
    • Cartoon and Animation Studies
    • /
    • s.6
    • /
    • pp.193-220
    • /
    • 2002
  • The world made by computers, with its great expanses and its complex and varied forms of expression, provides not simply a place for communication but also a new civilization and a new creative world. Within it, 3D computer graphics, 3D animation, and virtual reality technology have been elevated into a new culture and a new genre of art by joining graphic design with computer engineering. In this study, I diagnose the possibilities, limits, and differences of expression in virtual-reality computer graphics animation by comparing the camera actions and angles of actual still and film cameras with the virtual cameras of 3D computer graphics software: Maya, XSI, and Softimage 3D.
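
One concrete point of comparison between a virtual camera (e.g., in Maya) and a real camera is that both obey the same relation between focal length, film-back (aperture) width, and angle of view. A small illustration, assuming a 36 mm full-frame horizontal aperture:

```python
import math

def angle_of_view(focal_mm, aperture_mm=36.0):
    """Horizontal angle of view for a given focal length and film-back width."""
    return math.degrees(2 * math.atan(aperture_mm / (2 * focal_mm)))

for f in (24, 35, 50, 85):   # common real-camera focal lengths
    print(f"{f} mm lens -> {angle_of_view(f):.1f} deg horizontal AOV")
```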

Proposal of 3D Camera-Based Digital Coordinate Recognition Technology (3D 카메라 기반 디지털 좌표 인식 기술 제안)

  • Koh, Jun-Young;Lee, Kang-Hee
    • Proceedings of the Korean Society of Computer Information Conference
    • /
    • 2022.07a
    • /
    • pp.229-230
    • /
    • 2022
  • In this paper, we propose a 3D camera-based digital coordinate recognition technology combined with CNN object detection. The technology uses Intel's RealSense D455, a 3D depth camera, to detect and classify targets and determine their positions. Unlike the built-in distance measurement of existing depth cameras, it recognizes coordinates, so the distance between coordinates can also be computed. In addition, it shares memory with the TensorFlow SSD structure to reduce wasted system resources and employs multithreading to increase speed. By computing the distance between coordinates, this technology can be applied in various settings such as sports, psychology, play, and industry.
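
A minimal sketch of the coordinate-and-distance idea using the pyrealsense2 SDK with a D455 attached; the fixed pixel locations here stand in for the CNN detector's outputs:

```python
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()                          # requires a connected RealSense camera
frames = pipeline.wait_for_frames()
depth = frames.get_depth_frame()
intrin = depth.profile.as_video_stream_profile().get_intrinsics()

def pixel_to_3d(u, v):
    """Deproject a pixel to a 3D camera-frame coordinate (meters)."""
    d = depth.get_distance(u, v)
    return np.array(rs.rs2_deproject_pixel_to_point(intrin, [u, v], d))

p1 = pixel_to_3d(320, 240)                # e.g., centers of two detected objects
p2 = pixel_to_3d(420, 260)
print("distance between coordinates:", np.linalg.norm(p1 - p2), "m")
pipeline.stop()
```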

Camera and Receiver Development for 3D HDTV Broadcasting (3차원 고화질TV 방송용 카메라 및 수신기 개발)

  • Lee, Gwangsoon;Hur, Namho;Ahn, Chunghyun
    • Journal of Broadcast Engineering
    • /
    • v.7 no.3
    • /
    • pp.211-218
    • /
    • 2002
  • This paper introduces an HD 3DTV camera and a 3DTV receiver that are compatible with the ATSC HDTV broadcasting system. The developed 3DTV camera is based on stereoscopic techniques, with control functions to operate the left and right zoom lenses simultaneously and to control the vergence. Moreover, in order to control the vergence manually and to eliminate the synchronization problem between the two images, the 3DTV camera has a video multiplexing function that combines the left and right images into a single image. The developed 3DTV receiver processes this multiplexed 3DTV signal and has various analog/digital interfaces. The performance of the developed system was confirmed by shooting a selected soccer match at the 2002 FIFA Korea/Japan World Cup and broadcasting the match. The HD 3DTV camera and receiver can be applied to 3DTV industries such as 3D movies, 3D games, 3D image processing, and 3DTV broadcasting systems.
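
The multiplexing function described, combining synchronized left and right images into a single frame, is commonly done side-by-side; the abstract does not specify the format, so this is a generic sketch:

```python
import numpy as np
import cv2

def multiplex_side_by_side(left, right):
    """Squeeze two full-width frames into one frame of the original width."""
    h, w = left.shape[:2]
    half_l = cv2.resize(left, (w // 2, h))   # horizontal subsampling
    half_r = cv2.resize(right, (w // 2, h))
    return np.hstack([half_l, half_r])       # single multiplexed image

left = np.zeros((1080, 1920, 3), np.uint8)   # stand-ins for camera frames
right = np.zeros((1080, 1920, 3), np.uint8)
combined = multiplex_side_by_side(left, right)   # shape (1080, 1920, 3)
```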

Camera Calibration Using Neural Network with a Small Amount of Data (소수 데이터의 신경망 학습에 의한 카메라 보정)

  • Do, Yongtae
    • Journal of Sensor Science and Technology
    • /
    • v.28 no.3
    • /
    • pp.182-186
    • /
    • 2019
  • When a camera is employed for 3D sensing, accurate camera calibration is vital, as it is a prerequisite for the subsequent steps of the sensing process. Camera calibration is usually performed by complex mathematical modeling and geometric analysis. In contrast, data learning using an artificial neural network can establish a transformation relation between 3D space and the 2D camera image without explicit camera modeling. However, a neural network requires a large amount of accurate data for its learning, and collecting extensive data accurately in practice demands significant time and work with a precise system setup. In this study, we propose a two-step neural calibration method that is effective when only a small amount of learning data is available. In the first step, the camera projection transformation matrix is determined using the limited available data. In the second step, the transformation matrix is used to generate a large amount of synthetic data, and the neural network is trained on the generated data. Results of a simulation study show that the proposed method is valid and effective.
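
A compact sketch of the two-step idea: estimate the 3x4 projection matrix P by direct linear transform (DLT) from the few available point pairs, then use P to synthesize as many 3D-to-2D training pairs as desired for the network. The plain MLP and all names are assumptions for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def dlt_projection_matrix(X3d, x2d):
    """Step 1: direct linear transform for P (needs >= 6 correspondences)."""
    A = []
    for (X, Y, Z), (u, v) in zip(X3d, x2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)

def synthesize(P, n=5000):
    """Step 2a: generate synthetic 3D points and project them with P."""
    X = np.random.uniform(-1, 1, (n, 3))
    Xh = np.column_stack([X, np.ones(n)])
    x = (P @ Xh.T).T
    return X, x[:, :2] / x[:, 2:3]

# With a small real dataset (X_real, x_real), train on abundant synthetic pairs:
# P = dlt_projection_matrix(X_real, x_real)
# X_syn, x_syn = synthesize(P)
# net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X_syn, x_syn)
```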