• Title/Summary/Keyword: 3D Camera


Image Quality of a Rotating Compton Camera Evaluated by Using 4-D Monte Carlo Simulation Technique (4-D 전산모사 기법을 이용한 회전형 컴프턴 카메라의 영상 특성 평가)

  • Seo, Hee;Lee, Se-Hyung;Park, Jin-Hyung;Kim, Chan-Hyeong;Park, Sung-Ho;Lee, Ju-Hahn;Lee, Chun-Sik;Lee, Jae-Sung
    • Journal of Radiation Protection and Research / v.34 no.3 / pp.107-114 / 2009
  • A Compton camera, which is based on Compton kinematics, is a very promising gamma-ray imaging device in that it can overcome the limitations of conventional gamma-ray imaging devices. In the present study, the image quality of a rotating Compton camera was evaluated by using a 4-D Monte Carlo simulation technique, and its applicability to nuclear industrial applications was examined. Compton images improved significantly when the camera was rotated around the gamma-ray source. The 3-D imaging capability of a Compton camera was also found to enable accurate determination of the 3-D location of radioactive contamination in a concrete wall for decommissioning of nuclear facilities. The 4-D Monte Carlo simulation technique, applied here to the Compton camera field for the first time, can also be used to model time-dependent geometries for various applications.
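The Compton kinematics underlying such a camera can be illustrated with the standard cone-angle relation; the sketch below is generic textbook physics under assumed two-stage (scatterer/absorber) energy deposits, not code from the paper:

```python
import math

MEC2_KEV = 511.0  # electron rest energy m_e c^2, in keV

def compton_cone_angle(e_scatter_keV, e_absorb_keV):
    """Half-opening angle (rad) of the Compton cone from the energies
    deposited in the scatterer (E1) and absorber (E2):
    cos(theta) = 1 - m_e c^2 * (1/E2 - 1/(E1 + E2))."""
    e_total = e_scatter_keV + e_absorb_keV
    cos_theta = 1.0 - MEC2_KEV * (1.0 / e_absorb_keV - 1.0 / e_total)
    if not -1.0 <= cos_theta <= 1.0:
        raise ValueError("energies inconsistent with Compton kinematics")
    return math.acos(cos_theta)
```

Each detected event constrains the source to lie on such a cone; images sharpen as cones from many events (and, as the study shows, many camera orientations) are intersected.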

3D Terrain Reconstruction Using 2D Laser Range Finder and Camera Based on Cubic Grid for UGV Navigation (무인 차량의 자율 주행을 위한 2차원 레이저 거리 센서와 카메라를 이용한 입방형 격자 기반의 3차원 지형형상 복원)

  • Joung, Ji-Hoon;An, Kwang-Ho;Kang, Jung-Won;Kim, Woo-Hyun;Chung, Myung-Jin
    • Journal of the Institute of Electronics Engineers of Korea SC / v.45 no.6 / pp.26-34 / 2008
  • Information on traversability and path planning is essential for UGV (Unmanned Ground Vehicle) navigation, and such information can be obtained by analyzing 3D terrain. In this paper, we present a method of 3D terrain modeling that combines color information from a camera, precise distance information from a 2D Laser Range Finder (LRF), and wheel-encoder information from a mobile robot, using relatively little data. We also present a method of 3D terrain modeling from GPS/IMU and 2D LRF information, likewise with reduced data. To fuse the color information from the camera with the distance information from the 2D LRF, we obtain the extrinsic parameters between the camera and the LRF using a planar pattern. We mounted the fused system on a mobile robot and conducted experiments in an indoor environment, and we also conducted outdoor experiments to reconstruct 3D terrain with the 2D LRF and a GPS/IMU (Inertial Measurement Unit). The resulting 3D terrain model is point-based and requires a large amount of data; to reduce the amount of data, we use a cubic grid-based model instead of a point-based one.
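The cubic-grid data reduction described above amounts to a voxel-averaging pass over the point cloud; the sketch below shows the idea with an illustrative cell size and a plain-tuple point format (both assumptions, not the paper's data structures):

```python
from collections import defaultdict

def voxel_downsample(points, cell):
    """Reduce a 3-D point cloud by replacing all points that fall into
    the same cubic grid cell of side `cell` with their centroid."""
    bins = defaultdict(list)
    for x, y, z in points:
        key = (int(x // cell), int(y // cell), int(z // cell))
        bins[key].append((x, y, z))
    # one representative (centroid) per occupied cell
    return [tuple(sum(col) / len(pts) for col in zip(*pts))
            for pts in bins.values()]
```

The memory saving is then bounded by the number of occupied cells rather than the number of raw range returns.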

Visual Tracking of Moving Target Using Mobile Robot with One Camera (하나의 카메라를 이용한 이동로봇의 이동물체 추적기법)

  • 한영준;한헌수
    • Journal of Institute of Control, Robotics and Systems / v.9 no.12 / pp.1033-1041 / 2003
  • A new visual tracking scheme is proposed for a mobile robot that tracks a moving object in 3D space in real time. Visual tracking here means controlling the mobile robot so that the moving target stays at the center of the input image at all times. We achieve this by simplifying the relationship between the 2D image frame captured by a single camera and the 3D workspace frame. To precisely calculate the input vector (orientation and distance) of the mobile robot, the velocity of the target is determined by removing the component caused by the camera's own motion from the velocity observed in the input image. The problem of the target temporarily disappearing from the input image is solved by selecting the search area based on a linear prediction of the target's motion. Experimental results show that the proposed scheme enables a mobile robot to successfully follow a moving target in real time.
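The two numeric steps in this abstract, subtracting the camera-induced component from the observed image velocity and linearly predicting where to search for a lost target, can be sketched as follows (2-D pixel coordinates and a constant-velocity model are assumptions for illustration):

```python
def target_velocity(v_image, v_camera):
    """Remove the apparent motion induced by the camera's own movement
    from the velocity observed in the image."""
    return (v_image[0] - v_camera[0], v_image[1] - v_camera[1])

def predict_search_center(p_prev, p_curr, dt_ratio=1.0):
    """Constant-velocity linear prediction of the target's next image
    position, used to place the search window when the target is lost."""
    vx = p_curr[0] - p_prev[0]
    vy = p_curr[1] - p_prev[1]
    return (p_curr[0] + vx * dt_ratio, p_curr[1] + vy * dt_ratio)
```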

Human Legs Motion Estimation by using a Single Camera and a Planar Mirror (단일 카메라와 평면거울을 이용한 하지 운동 자세 추정)

  • Lee, Seok-Jun;Lee, Sung-Soo;Kang, Sun-Ho;Jung, Soon-Ki
    • Journal of KIISE: Computing Practices and Letters / v.16 no.11 / pp.1131-1135 / 2010
  • This paper presents a method to capture the posture of the human lower limbs in 3D space using a single camera and a planar mirror. The system estimates the pose of the camera facing the mirror from four coplanar IR markers attached to the mirror, and then sets up the training space based on the relationship between the mirror and the camera. When a patient steps on the weight board, the system obtains the relative position of the patient's feet. Because the markers are attached to the sides of both legs, some markers are invisible from the camera due to self-occlusion; their reflections in the mirror partially resolve this problem within a single-camera system. The 3D positions of the markers are estimated from the geometric information of the camera in the training space. Finally, the system estimates and visualizes the posture and motion of both legs based on the 3D marker positions.
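The mirror geometry that recovers self-occluded markers rests on reflecting a 3-D point across the mirror plane; a minimal sketch, assuming the plane is given by a point on it and its normal (here recovered from the IR markers in the paper's setup):

```python
def reflect_point(p, plane_point, normal):
    """Mirror a 3-D point across a plane defined by `plane_point` and
    `normal`: p' = p - 2 * ((p - q) . n / |n|^2) * n."""
    n2 = sum(c * c for c in normal)
    d = sum((pi - qi) * ni
            for pi, qi, ni in zip(p, plane_point, normal)) / n2
    return tuple(pi - 2.0 * d * ni for pi, ni in zip(p, normal))
```

A marker seen only in the mirror is the reflection of the real marker, so applying this map to its triangulated mirror image recovers the true 3-D position.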

3D Position Analysis and Tracking of an Object using Monocular USB port Camera (한 대의 USB port 카메라를 이용한 물체추적 과 3차원 정보 추출)

  • 이동엽;이동활;배종일;이만형
    • Institute of Control, Robotics and Systems: Conference Proceedings / 2000.10a / pp.277-277 / 2000
  • The purpose of this paper is to obtain three-dimensional information using a single camera. The system estimates the height of an object by triangulation between a reference point in the surroundings and the object. Because the system is written in Java, it can be built and set up regardless of the operating system. Using a portable USB camera, it can be deployed anywhere without a capture board, and with Java's JMF and applets it can be used over the Internet; the camera is assumed to be fixed.
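The triangle method mentioned above can be illustrated with the pinhole similar-triangles relation; the function and its parameters (focal length in pixels, known camera-object distance) are an illustrative sketch, not the paper's implementation:

```python
def object_height(pixel_height, distance, focal_px):
    """Similar-triangles height estimate for a pinhole camera: an object
    spanning `pixel_height` pixels at range `distance` (same unit as the
    result) seen through a lens of focal length `focal_px` pixels."""
    return distance * pixel_height / focal_px
```

For example, an object spanning 100 pixels at 2 m range with an 800-pixel focal length is estimated at 0.25 m tall.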


3D Reconstruction of an Indoor Scene Using Depth and Color Images (깊이 및 컬러 영상을 이용한 실내환경의 3D 복원)

  • Kim, Se-Hwan;Woo, Woon-Tack
    • Journal of the HCI Society of Korea / v.1 no.1 / pp.53-61 / 2006
  • In this paper, we propose a novel method for 3D reconstruction of an indoor scene using a multi-view camera. Numerous disparity estimation algorithms have been developed, each with its own pros and cons, so the depth images we are given can vary widely in character. Here we deal with generating a 3D surface from several 3D point clouds acquired with a generic multi-view camera. First, a 3D point cloud is estimated based on the spatio-temporal properties of several point clouds. Second, the estimated point clouds acquired from two viewpoints are projected onto the same image plane to find correspondences, and registration is conducted by minimizing the error between them. Finally, a surface is created by fine-tuning the 3D coordinates of the point clouds acquired from several viewpoints. The proposed method reduces computational complexity by searching for corresponding points in the 2D image plane, and it performs well even when the precision of the 3D point cloud is relatively low, by exploiting correlation with the neighborhood. Furthermore, an indoor environment can be reconstructed from depth and color images taken at several positions with the multi-view camera. The reconstructed model can be used for navigation in and interaction with a virtual environment, as well as for Mediated Reality (MR) applications.
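The registration step, minimizing error between point clouds from two viewpoints once correspondences are found, can be sketched in its simplest translation-only form; the paper's projection-based correspondence search and full rigid alignment are not reproduced here:

```python
def align_translation(src, dst):
    """Least-squares translation registering point set `src` onto `dst`
    (correspondences given by index), plus the residual RMS error."""
    n = len(src)
    # optimal translation is the mean of the per-pair displacements
    t = tuple(sum(d[i] - s[i] for s, d in zip(src, dst)) / n
              for i in range(3))
    rms = (sum(sum((s[i] + t[i] - d[i]) ** 2 for i in range(3))
               for s, d in zip(src, dst)) / n) ** 0.5
    return t, rms
```

A full rigid registration would additionally estimate a rotation (e.g. via SVD of the cross-covariance of the centered pairs), iterating as correspondences are refined.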


High-quality Texture Extraction for Point Clouds Reconstructed from RGB-D Images (RGB-D 영상으로 복원한 점 집합을 위한 고화질 텍스쳐 추출)

  • Seo, Woong;Park, Sang Uk;Ihm, Insung
    • Journal of the Korea Computer Graphics Society / v.24 no.3 / pp.61-71 / 2018
  • When triangular meshes are generated from point clouds reconstructed in global space through camera pose estimation against captured RGB-D streams, the quality of the resulting meshes improves as more triangles are used. However, 3D reconstructed models beyond a certain size begin to suffer from unsightly artifacts, due to the limited precision of RGB-D sensors, as well as from significant memory requirements and rendering costs. In this paper, aiming at 3D models appropriate for real-time applications, we propose an effective technique that extracts high-quality textures for moderately sized meshes from the captured colors associated with the reconstructed point sets. In particular, we show that a simple method based on the mapping between the 3D global space resulting from camera pose estimation and the 2D texture space can generate textures effectively for 3D models reconstructed from captured RGB-D image streams.
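The mapping from the reconstructed 3-D global space into a 2-D image/texture space is, at its core, a pinhole projection through the estimated camera pose; a generic sketch with an illustrative row-major rotation and intrinsics (none of these values come from the paper):

```python
def project(point, R, t, fx, fy, cx, cy):
    """Project a 3-D world point into the image plane: first transform
    into camera coordinates with rotation R (3x3, row tuples) and
    translation t, then apply the pinhole intrinsics (fx, fy, cx, cy)."""
    xc = [sum(R[i][j] * point[j] for j in range(3)) + t[i]
          for i in range(3)]
    u = fx * xc[0] / xc[2] + cx
    v = fy * xc[1] / xc[2] + cy
    return u, v
```

Sampling the captured color at (u, v) for each reconstructed point is the basic operation from which per-triangle texture patches can then be assembled.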

Automatic Camera Pose Determination from a Single Face Image

  • Wei, Li;Lee, Eung-Joo;Ok, Soo-Yol;Bae, Sung-Ho;Lee, Suk-Hwan;Choo, Young-Yeol;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.10 no.12 / pp.1566-1576 / 2007
  • Camera pose information from a 2D face image is very important for synchronizing a virtual 3D face model with the real face, and for many other uses such as human-computer interfaces, 3D object estimation, and automatic camera control. In this paper, we present a camera pose determination algorithm that works from a single 2D face image, using the relationship between mouth position and the face region boundary. The algorithm first corrects color bias with a lighting compensation step, then nonlinearly transforms the image into the YCbCr color space and uses the visible chrominance features of the face in this space to detect the face region. For each face candidate, the nearly inverse relationship between the Cb and Cr clusters of the face features is used to detect the mouth position. The geometric relationship between the mouth position and the face region boundary then determines the camera's rotation angles about the x- and y-axes, and the relationship between face region size and camera-face distance determines that distance. Experimental results demonstrate the validity of the algorithm, with a correct determination rate high enough for practical use.
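The color-space step can be sketched with the standard BT.601 RGB-to-YCbCr conversion and a coarse chrominance gate; the skin thresholds below are common illustrative values, not the paper's trained ones:

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range ITU-R BT.601 RGB -> YCbCr conversion."""
    y  =        0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(cb, cr):
    """Coarse chrominance gate for skin-colored pixels; the bounds are
    illustrative assumptions for the sketch."""
    return 77 <= cb <= 127 and 133 <= cr <= 173
```

Pixels passing the gate form face-region candidates; within a candidate, the contrast between Cb and Cr responses around the lips is what the mouth-detection step exploits.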


The Analysis of 3D Position Accuracy of Multi-Looking Camera (다각촬영카메라의 3차원 위치정확도 분석)

  • Go, Jong-Sik;Choi, Yoon-Soo;Jang, Se-Jin;Lee, Ki-Wook
    • Spatial Information Research / v.19 no.3 / pp.33-42 / 2011
  • Since the method of generating 3D spatial information from aerial photographs was introduced, many studies on effective generation methods and applications have been performed. In the Pictometry system, nadir and oblique imagery are acquired at the same time, and 3D positioning is then processed through the Multi-Looking Camera procedure. In this procedure, the number of GCPs is the main factor affecting the accuracy of the true-orthoimage. In this study, the 3D positioning accuracies of true-orthoimages generated with various numbers of GCPs were estimated, and a standard for GCP number and distribution is proposed.
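Positional accuracy against check points in studies of this kind is typically summarized as a 3-D root-mean-square error; a minimal sketch of that metric (the paper's own figures are not reproduced here):

```python
def rmse_3d(estimated, truth):
    """Root-mean-square 3-D positional error over paired check points."""
    n = len(estimated)
    se = sum(sum((e[i] - t[i]) ** 2 for i in range(3))
             for e, t in zip(estimated, truth))
    return (se / n) ** 0.5
```

Recomputing this for orthoimages generated with different GCP counts is what lets a minimum GCP number and distribution be recommended.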

Three Dimensional Geometric Feature Detection Using Computer Vision System and Laser Structured Light (컴퓨터 시각과 레이저 구조광을 이용한 물체의 3차원 정보 추출)

  • Hwang, H.;Chang, Y.C.;Im, D.H.
    • Journal of Biosystems Engineering / v.23 no.4 / pp.381-390 / 1998
  • An algorithm to extract the 3-D geometric information of a static object was developed using a 2-D computer vision system and a laser structured-lighting device. Multiple parallel lines were used as the structured light pattern. The proposed algorithm is composed of three stages. In the first stage, camera calibration, which determines the coordinate transformation between the image plane and the real 3-D world, is performed using 6 known point pairs. In the second stage, the height of the object is computed from the shift of the projected laser beam on the object. Finally, using the height information of each 2-D image point, the corresponding 3-D information is computed from the camera calibration results. For arbitrary geometric objects, the maximum error of the extracted 3-D features was less than 1-2 mm, showing that the proposed algorithm is accurate for 3-D geometric feature detection.
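The second stage, height from the lateral shift of the projected laser line, follows the usual structured-light triangulation relation; the angle convention below is an assumption for the sketch, not the paper's calibrated geometry:

```python
import math

def height_from_shift(shift_mm, laser_angle_deg):
    """Triangulated object height: a laser sheet projected at
    `laser_angle_deg` from the vertical shifts laterally by `shift_mm`
    (in world units) when it lands on a surface of height
    h = shift / tan(angle) above the reference plane."""
    return shift_mm / math.tan(math.radians(laser_angle_deg))
```

With a 45-degree projection angle the shift equals the height, which is why oblique laser geometries of roughly this kind give millimeter-level sensitivity.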
