• Title/Summary/Keyword: RGB카메라 (RGB camera)


Making of sRGB image through digital camera colorimetric characterization (디지털 카메라 색 특성분석을 통한 sRGB 이미지 생성)

  • 유종우;김홍석;박승옥;박철호;박진희
    • Korean Journal of Optics and Photonics
    • /
    • v.15 no.2
    • /
    • pp.183-189
    • /
    • 2004
  • As high-quality digital cameras become readily available, they are used not only for simple picture recording but also as information-storing media in various fields. However, because the spectral responses of camera sensors differ from the color matching functions of the CIE standard observer, colors cannot be measured directly with these cameras. This study presents a method for converting a camera image into an sRGB image in which the color information is preserved. The transfer matrix between the camera output signals and the CIE tristimulus values was determined by multiple regression, using the Macbeth ColorChecker as the target colors. The camera output signals are mapped to CIE tristimulus values with this transfer matrix, and these values are then converted to sRGB signals. In a test with a Kodak DC220 digital camera, the average color difference of the Macbeth ColorChecker between the true and displayed colors was 2.1 $\Delta E_{ab}^{*}$.
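
The pipeline in this abstract, camera RGB to XYZ through a regression-fitted transfer matrix and then XYZ to gamma-encoded sRGB, can be sketched as follows; a minimal example assuming linearized camera signals and measured XYZ values (normalized so Y is at most 1) for the 24 Macbeth ColorChecker patches are available, with placeholder file names:

```python
import numpy as np

# camera_rgb: (24, 3) linearized camera output signals for the Macbeth patches
# target_xyz: (24, 3) measured CIE XYZ tristimulus values of the same patches
camera_rgb = np.loadtxt("macbeth_camera_rgb.txt")   # hypothetical input files
target_xyz = np.loadtxt("macbeth_xyz.txt")

# Multiple (linear) regression: find M (3x3) minimizing ||camera_rgb @ M - target_xyz||
M, _, _, _ = np.linalg.lstsq(camera_rgb, target_xyz, rcond=None)

# Standard XYZ (D65) to linear sRGB matrix
XYZ_TO_SRGB = np.array([[ 3.2406, -1.5372, -0.4986],
                        [-0.9689,  1.8758,  0.0415],
                        [ 0.0557, -0.2040,  1.0570]])

def camera_to_srgb(rgb):
    """Map camera signals to 8-bit sRGB via the estimated transfer matrix."""
    xyz = rgb @ M                      # camera RGB -> CIE XYZ (scale of the fitted targets)
    linear = np.clip(xyz @ XYZ_TO_SRGB.T, 0.0, 1.0)   # XYZ -> linear sRGB
    # sRGB gamma encoding
    srgb = np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1 / 2.4) - 0.055)
    return np.round(srgb * 255).astype(np.uint8)
```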

Convenient View Calibration of Multiple RGB-D Cameras Using a Spherical Object (구형 물체를 이용한 다중 RGB-D 카메라의 간편한 시점보정)

  • Park, Soon-Yong;Choi, Sung-In
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.3 no.8
    • /
    • pp.309-314
    • /
    • 2014
  • To generate a complete 3D model from the depth images of multiple RGB-D cameras, the 3D transformations between the cameras must be known. This paper proposes a convenient view calibration technique using a spherical object. Conventional view calibration methods use either planar checkerboards or 3D objects with coded patterns, and detecting and matching the pattern features and codes takes significant time. In this paper, we propose a convenient view calibration method that uses the 3D depth and 2D texture images of a spherical object simultaneously. First, while the spherical object is moved freely through the modeling space, depth and texture images of the object are acquired from all RGB-D cameras at the same time. Then, the extrinsic parameters of each RGB-D camera are calibrated so that the coordinates of the sphere center coincide in the world coordinate system.
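
Once the sphere center has been located in each camera's depth data for several poses of the ball, the extrinsics of a camera relative to a reference camera follow from a rigid alignment of the two sets of centers; a minimal sketch using a Kabsch/SVD fit (the center-detection step is omitted and the file names are placeholders, so this illustrates the idea rather than the authors' exact procedure):

```python
import numpy as np

def rigid_transform(src, dst):
    """Find R, t such that R @ src_i + t ~= dst_i (Kabsch / SVD fit)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# sphere centers (N poses x 3) estimated from each camera's depth images
centers_cam_k = np.load("centers_cam_k.npy")    # hypothetical file names
centers_cam_0 = np.load("centers_cam_0.npy")    # reference camera

R, t = rigid_transform(centers_cam_k, centers_cam_0)
print("extrinsics of camera k w.r.t. camera 0:\n", R, t)
```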

Alignment of Convergent Multi-view Depth Map in Based on the Camera Intrinsic Parameter (카메라의 내부 파라미터를 고려한 수렴형 다중 깊이 지도의 정렬)

  • Lee, Kanghoon;Park, Jong-Il;Shin, Hong-Chang;Bang, Gun
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2015.07a
    • /
    • pp.457-459
    • /
    • 2015
  • This paper proposes a method for aligning depth maps generated from the images of multiple RGB cameras arranged along a circular arc. Ideally, the optical axes of cameras placed along an arc converge at a single point; in practice, however, the camera parameters show that the optical axes do not converge. In addition, because the camera parameters contain errors and the intrinsic parameters differ from camera to camera, horizontal and vertical misalignments arise between the camera images. To solve these problems, we first align the depth images by correcting the camera extrinsic parameters so that the optical axes converge at a single point. Second, we modify the intrinsic parameters to reduce the horizontal and vertical errors between the depth images. In general, aligned depth maps are obtained by rectifying the original RGB camera images first and then generating the depth images from the rectified results. However, if alignment is performed by correcting the camera rotation and position on the RGB images, applying the depth-map changes caused by the change of camera position becomes complicated: fractional values are lost during the alignment computation, which affects the values of the final depth map. Therefore, we generate the depth maps from the RGB images, warp them using the original RGB camera parameters, and then perform the alignment on the warped depth-map values.
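
The warping step mentioned at the end of this abstract can be pictured as back-projecting each depth pixel with the source camera's intrinsics, applying the corrected extrinsics, and re-projecting into the target view; a minimal sketch under a pinhole model (the parameter names and the lack of a z-buffer are simplifications, not the authors' implementation):

```python
import numpy as np

def warp_depth(depth, K_src, K_dst, R, t):
    """Warp a depth map from a source camera into a destination camera's view.

    depth        : (H, W) depth map of the source camera (0 = no measurement)
    K_src, K_dst : 3x3 intrinsic matrices
    R, t         : source-to-destination rotation and translation
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    valid = depth > 0

    # back-project valid pixels to 3D in the source camera frame
    pix = np.stack([u[valid], v[valid], np.ones(valid.sum())])   # 3 x N
    pts = (np.linalg.inv(K_src) @ pix) * depth[valid]

    # move to the destination frame and re-project with its intrinsics
    pts_dst = R @ pts + t.reshape(3, 1)
    proj = K_dst @ pts_dst
    ud = np.round(proj[0] / proj[2]).astype(int)
    vd = np.round(proj[1] / proj[2]).astype(int)

    warped = np.zeros_like(depth)
    keep = (ud >= 0) & (ud < W) & (vd >= 0) & (vd < H) & (pts_dst[2] > 0)
    warped[vd[keep], ud[keep]] = pts_dst[2, keep]   # last write wins; no z-buffer here
    return warped
```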


Smoke Detection Based on RGB-Depth Camera in Interior (RGB-Depth 카메라 기반의 실내 연기검출)

  • Park, Jang-Sik
    • The Journal of the Korea institute of electronic communication sciences
    • /
    • v.9 no.2
    • /
    • pp.155-160
    • /
    • 2014
  • In this paper, an algorithm using an RGB-depth camera is proposed to detect smoke indoors. The RGB-depth camera, the Kinect, provides an RGB color image and depth information. The Kinect sensor consists of an infrared laser emitter, an infrared camera, and an RGB camera. A specific speckle pattern radiated from the laser source is projected onto the scene; this pattern is captured by the infrared camera and analyzed to obtain depth information. The distance of each speckle of the pattern is measured and the depth of the object is estimated. When the depth of an object changes rapidly, the Kinect cannot determine the depth of the object plane. The depth of smoke also cannot be determined, because the density of the smoke fluctuates continuously and the intensity of the infrared image varies from pixel to pixel. In this paper, a smoke detection algorithm that exploits these characteristics of the Kinect is proposed. Regions in which the depth cannot be determined are set as smoke candidate regions, and a candidate region is confirmed as smoke if the intensity of the corresponding color-image region exceeds a threshold. Simulation results show that the proposed method is effective for detecting smoke indoors.
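
A minimal sketch of the decision rule described in this abstract: pixels with no valid depth form smoke candidate regions, and a candidate is confirmed when the corresponding color region is bright enough. The thresholds, blob filtering, and OpenCV usage below are illustrative choices, not the paper's exact parameters:

```python
import cv2
import numpy as np

def detect_smoke(color_bgr, depth, brightness_thresh=160, min_area=500):
    """Flag regions with undetermined depth whose color is bright enough as smoke."""
    # the Kinect reports pixels with no valid depth measurement as 0
    candidate = (depth == 0).astype(np.uint8)
    candidate = cv2.morphologyEx(candidate, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    gray = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2GRAY)
    smoke_mask = np.zeros_like(candidate)

    # keep candidate blobs that are both large and bright (smoke scatters light)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(candidate)
    for i in range(1, num):
        if stats[i, cv2.CC_STAT_AREA] < min_area:
            continue
        blob = labels == i
        if gray[blob].mean() > brightness_thresh:
            smoke_mask[blob] = 255
    return smoke_mask
```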

Point Cloud Registration Algorithm Based on RGB-D Camera for Shooting Volumetric Objects (체적형 객체 촬영을 위한 RGB-D 카메라 기반의 포인트 클라우드 정합 알고리즘)

  • Kim, Kyung-Jin;Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Journal of Broadcast Engineering
    • /
    • v.24 no.5
    • /
    • pp.765-774
    • /
    • 2019
  • In this paper, we propose a point cloud registration algorithm for multiple RGB-D cameras. Precisely estimating camera positions is a central problem in computer vision. Existing 3D model generation methods require a large number of cameras or expensive 3D cameras, and conventional methods that obtain the camera extrinsic parameters from two-dimensional images have large estimation errors. In this paper, we propose a method that uses the depth images together with a function optimization method to obtain coordinate transformation parameters whose error stays within a valid range, so that an omnidirectional three-dimensional model can be generated with eight low-cost RGB-D cameras.
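
One way to realize the depth-plus-function-optimization idea in this abstract is to refine a six-parameter rigid transform by minimizing the 3D distances between corresponding depth-camera points from two views; a minimal sketch with SciPy (the correspondence files and the rotation-vector parameterization are assumptions, not the paper's exact method):

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def residuals(params, src_pts, dst_pts):
    """params = [rx, ry, rz, tx, ty, tz]: rotation vector plus translation."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    t = params[3:]
    return ((src_pts @ R.T + t) - dst_pts).ravel()

# corresponding 3D points measured by two depth cameras (N x 3 each)
src_pts = np.load("cam1_points.npy")   # hypothetical correspondence files
dst_pts = np.load("cam0_points.npy")

result = least_squares(residuals, x0=np.zeros(6), args=(src_pts, dst_pts))
R_opt = Rotation.from_rotvec(result.x[:3]).as_matrix()
t_opt = result.x[3:]
```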

Real-time 3D Volumetric Model Generation using Multiview RGB-D Camera (다시점 RGB-D 카메라를 이용한 실시간 3차원 체적 모델의 생성)

  • Kim, Kyung-Jin;Park, Byung-Seo;Kim, Dong-Wook;Kwon, Soon-Chul;Seo, Young-Ho
    • Journal of Broadcast Engineering
    • /
    • v.25 no.3
    • /
    • pp.439-448
    • /
    • 2020
  • In this paper, we propose a modified optimization algorithm for point cloud registration of multi-view RGB-D cameras. Accurately estimating camera positions is very important in computer vision. The 3D model generation methods proposed in previous research require a large number of cameras or expensive 3D cameras, and methods that obtain the camera extrinsic parameters from 2D images have large errors. In this paper, we propose a registration technique for generating a 3D point cloud and mesh model that provides an omnidirectional free viewpoint using eight low-cost RGB-D cameras. The method applies a depth-map-based function optimization together with the RGB images and obtains coordinate transformation parameters that yield a high-quality 3D model without requiring initial parameters.
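
For context on how the eight calibrated cameras can be fused once the coordinate transformation parameters are known: each depth map is back-projected with its intrinsics and moved into the shared world frame with its extrinsics. A minimal sketch, assuming camera-to-world extrinsics and an illustrative per-camera data layout rather than the authors' pipeline:

```python
import numpy as np

def depth_to_world(depth, rgb, K, R, t):
    """Back-project one camera's depth map and move it to world coordinates.

    K is the 3x3 intrinsic matrix; R, t map camera coordinates to the world frame.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - K[0, 2]) * z / K[0, 0]
    y = (v[valid] - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z], axis=1)     # N x 3 points in camera coordinates
    pts_world = pts_cam @ R.T + t             # apply the estimated extrinsics
    return pts_world, rgb[valid]              # points and their colors

def merge_cameras(cameras):
    """cameras: list of dicts with 'depth', 'rgb', 'K', 'R', 't' (illustrative layout)."""
    clouds = [depth_to_world(c["depth"], c["rgb"], c["K"], c["R"], c["t"])
              for c in cameras]
    points = np.concatenate([p for p, _ in clouds])
    colors = np.concatenate([c for _, c in clouds])
    return points, colors
```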

High-quality Texture Extraction for Point Clouds Reconstructed from RGB-D Images (RGB-D 영상으로 복원한 점 집합을 위한 고화질 텍스쳐 추출)

  • Seo, Woong;Park, Sang Uk;Ihm, Insung
    • Journal of the Korea Computer Graphics Society
    • /
    • v.24 no.3
    • /
    • pp.61-71
    • /
    • 2018
  • When triangular meshes are generated from point clouds reconstructed in global space through camera pose estimation against captured RGB-D streams, the quality of the resulting meshes improves as more triangles are used. However, beyond a certain model size, the reconstructions suffer from unsightly artifacts caused by the limited precision of RGB-D sensors, as well as from significant memory requirements and rendering costs. In this paper, to generate 3D models appropriate for real-time applications, we propose an effective technique that extracts high-quality textures for moderately sized meshes from the captured colors associated with the reconstructed point sets. In particular, we show that a simple method based on the mapping between the 3D global space obtained from camera pose estimation and the 2D texture space can generate textures effectively for 3D models reconstructed from captured RGB-D image streams.
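
The 3D-to-2D mapping this abstract relies on is essentially a projection of each reconstructed vertex into an RGB frame with a known pose, followed by sampling the captured color there; a minimal sketch under a pinhole model (the pose convention and array names are assumptions, not the paper's exact procedure):

```python
import numpy as np

def sample_vertex_colors(vertices, rgb, K, R, t):
    """Project mesh vertices into a captured RGB frame and sample texture colors.

    vertices : (N, 3) points in the global (world) frame
    R, t     : world-to-camera pose of the frame, K : 3x3 intrinsics
    """
    cam = vertices @ R.T + t                    # world -> camera coordinates
    in_front = cam[:, 2] > 0
    proj = cam @ K.T
    uv = proj[:, :2] / proj[:, 2:3]             # perspective division

    H, W = rgb.shape[:2]
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, W - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, H - 1)

    colors = rgb[v, u]                          # nearest-pixel color sampling
    colors[~in_front] = 0                       # vertices behind the camera get no color
    tex_coords = np.stack([u / (W - 1), v / (H - 1)], axis=1)   # UVs in [0, 1]
    return colors, tex_coords
```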

Depth and RGB-based Camera Pose Estimation for Capturing Volumetric Object (체적형 객체의 촬영을 위한 깊이 및 RGB 카메라 기반의 카메라 자세 추정 알고리즘)

  • Kim, Kyung-Jin;Kim, Dong-Wook;Seo, Young-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2019.06a
    • /
    • pp.123-124
    • /
    • 2019
  • This paper proposes a calibration optimization algorithm for multiple depth and RGB cameras. Estimating camera pose and position is an essential step in computer vision. Existing methods compute the camera parameters using the pinhole camera model, so errors remain. To mitigate this problem, we optimize the camera extrinsic parameters using the actual distances of objects obtained from the depth camera together with a function optimization method. Registering the cameras with this algorithm yields a 3D model of higher quality.
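
The quantity being minimized here is, roughly, the disagreement between where one depth camera measures a point and where the reference camera measures the same point after applying the current extrinsics; a per-correspondence sketch of that residual (the helper names and argument layout are assumptions, not the authors' formulation):

```python
import numpy as np

def depth_point(u, v, z, K):
    """Back-project pixel (u, v) with measured depth z into camera coordinates."""
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

def pose_residual(R, t, uv_cam, z_cam, uv_ref, z_ref, K_cam, K_ref):
    """3D distance between the same target point seen by two depth cameras,
    after moving the first into the reference frame with the current R, t.
    A function optimizer would drive this residual toward zero over R and t."""
    p = depth_point(*uv_cam, z_cam, K_cam)
    q = depth_point(*uv_ref, z_ref, K_ref)
    return np.linalg.norm(R @ p + t - q)
```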


Recent Trends of Real-time 3D Reconstruction Technology using RGB-D Cameras (RGB-D 카메라 기반 실시간 3차원 복원기술 동향)

  • Kim, Y.H.;Park, J.Y.;Lee, J.S.
    • Electronics and Telecommunications Trends
    • /
    • v.31 no.4
    • /
    • pp.36-43
    • /
    • 2016
  • It was not long ago that reconstructing everything in a real environment as a 3D model, and interacting with remote environments and people as if they were in the same space, became feasible. This was made possible by the development of RGB-D sensors that guarantee a certain resolution and by active research on 3D reconstruction using these sensors. This article surveys technologies that use widely available RGB-D cameras to reconstruct and visualize scenes in 3D in real time, and sometimes online. It describes in detail techniques for reconstructing large spaces, moving people, and online environments in real time using one or several RGB-D cameras or RGB-D sensors mounted on mobile devices. It also explains the issues addressed by recently published techniques and discusses future research and development directions for 3D reconstruction technology.


Flesh Tone Balance Algorithm for AWB of Facial Pictures (인물 사진을 위한 자동 톤 균형 알고리즘)

  • Bae, Tae-Wuk;Lee, Sung-Hak;Lee, Jung-Wook;Sohng, Kyu-Ik
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.34 no.11C
    • /
    • pp.1040-1048
    • /
    • 2009
  • This paper proposes an automatic flesh-tone balance algorithm for pictures of people. General white balance algorithms focus on neutral regions, but other objects can serve as the reference if their spectral reflectance is known; here the reference for white balance is the human face. For the experiment, the transfer characteristic of the image sensor is first analyzed and the camera output RGB corresponding to the average face chromaticity under the standard illuminant is calculated. Second, the output ratios of the image are adjusted so that the RGB ratio of the face region captured under an unknown illuminant matches the precomputed ratio. The input tristimulus values XYZ are calculated from the camera output RGB through the camera transfer matrix and are then transformed to the standard color space (sRGB) using the sRGB transfer matrix. For display, the RGB data are encoded as eight-bit values after gamma correction. The algorithm is applied to the average face color, taken as the light-skin patch of the Macbeth color chart, and to the average of various face colors that were actually measured.
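
A minimal sketch of the gain adjustment described in this abstract: scale the image channels so that the average RGB ratio inside a detected face region matches the camera RGB ratio of the average face chromaticity measured under the standard illuminant. The reference ratio values and the bounding-box input below are illustrative assumptions:

```python
import numpy as np

# reference camera RGB ratio for the average (light-skin) face chromaticity,
# measured under the standard illuminant during characterization (illustrative values)
REF_FACE_RGB = np.array([1.00, 0.80, 0.65])

def flesh_tone_balance(image, face_box, ref=REF_FACE_RGB):
    """White-balance an image so the face region matches the reference skin ratio.

    image    : (H, W, 3) linear RGB image in [0, 1] taken under an unknown illuminant
    face_box : (x0, y0, x1, y1) bounding box of the detected face
    """
    x0, y0, x1, y1 = face_box
    face_mean = image[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)

    # per-channel gains, normalized to the G channel to preserve overall brightness
    gains = (ref / ref[1]) / (face_mean / face_mean[1])
    balanced = np.clip(image * gains, 0.0, 1.0)
    return balanced, gains
```

The balanced camera RGB would then pass through the camera-to-XYZ and XYZ-to-sRGB transforms and the gamma encoding described in the abstract.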