• Title/Summary/Keyword: RGB Camera


Camera noise reduction in the low illumination conditions using convolutional network (컨벌루션 네트워크를 이용한 저조도 환경 카메라 잡음 제거)

  • Park, Gu-Yong;Ahn, Byeong-Yong;Cho, Nam-ik
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2017.06a / pp.163-165 / 2017
  • In this paper, we study the application of a deep learning algorithm to camera noise reduction. We trained and tested DnCNN (Denoising Convolutional Neural Network), which shows good denoising performance on synthetic Gaussian noise, to remove camera noise. As a baseline, we used a neural network trained on all three channels of the RGB color space, while the experiments in this paper used a network trained on grayscale images. To evaluate the networks, the input images for the deep learning algorithm were represented in two color spaces, RGB and YCbCr, and Gaussian noise was added to the input images. We also trained and tested on real camera noise, which has different characteristics from Gaussian noise.

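The evaluation setup described above (synthetic Gaussian noise added to the inputs, grayscale vs. RGB-trained networks compared by denoising quality) can be sketched in a few lines of NumPy. The function names and the PSNR metric are illustrative assumptions, not the paper's code:

```python
import numpy as np

def add_gaussian_noise(img, sigma, rng=None):
    """Synthesize a noisy input by adding zero-mean Gaussian noise (img in [0, 1])."""
    rng = np.random.default_rng(0) if rng is None else rng
    noisy = img + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0.0, 1.0)

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB for images scaled to [0, 1]."""
    mse = np.mean((ref - test) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(1.0 / mse)

def rgb_to_y(img):
    """Luma (Y of YCbCr, BT.601) -- the channel a grayscale-trained network would see."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b
```

A noise level such as sigma = 25/255 is a common choice in DnCNN-style experiments, though the paper does not state its exact settings here.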

Contactless Chroma Key System Using Gesture Recognition (제스처 인식을 이용한 비 접촉식 크로마키 시스템)

  • Jeong, Jongmyeon;Jo, HongLae;Kim, Hoyoung;Song, Sion;Lee, Junseo
    • Proceedings of the Korean Society of Computer Information Conference / 2015.07a / pp.159-160 / 2015
  • In this paper, we propose a contactless chroma key system operated by recognizing user gestures. Depth and RGB images are captured from a Kinect camera. First, the disparity caused by the positional difference between the depth camera and the RGB camera is corrected; morphological operations are then applied to the depth image to remove noise, and the result is combined with the RGB image to extract the object region. The extracted object region is analyzed to recognize the position and shape of the user's hand, which are treated as a pointing device to control the chroma key system. Experiments confirmed that the contactless chroma key system operates in real time.

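The depth-cleaning step described above (threshold the depth image to a foreground mask, then remove speckle noise with morphological operations before combining with the RGB image) might look roughly like this pure-NumPy sketch; the 3x3 structuring element and the threshold interface are assumptions:

```python
import numpy as np

def _shift_stack(mask):
    # Stack the 8-neighborhood plus center by padding (with False) and slicing.
    h, w = mask.shape
    p = np.pad(mask, 1)
    return np.stack([p[i:i + h, j:j + w] for i in range(3) for j in range(3)])

def erode(mask):
    # A pixel survives only if its whole 3x3 neighborhood is foreground.
    return _shift_stack(mask).all(axis=0)

def dilate(mask):
    # A pixel turns on if any pixel in its 3x3 neighborhood is foreground.
    return _shift_stack(mask).any(axis=0)

def clean_depth_mask(depth, near, far):
    """Threshold depth to a foreground mask, then apply morphological
    opening (erosion followed by dilation) to remove speckle noise."""
    mask = (depth > near) & (depth < far)
    return dilate(erode(mask))
```

Opening removes isolated noisy pixels while largely preserving the interior of large foreground regions.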

Improving Precision of the Exterior Orientation and the Pixel Position of a Multispectral Camera onboard a Drone through the Simultaneous Utilization of a High Resolution Camera (고해상도 카메라와의 동시 운영을 통한 드론 다분광카메라의 외부표정 및 영상 위치 정밀도 개선 연구)

  • Baek, Seungil;Byun, Minsu;Kim, Wonkook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.6 / pp.541-548 / 2021
  • Recently, multispectral cameras have been actively utilized in various application fields such as agriculture, forest management, and coastal environment monitoring, particularly onboard UAVs. The resulting multispectral images are typically georeferenced based primarily on the onboard GPS (Global Positioning System) and IMU (Inertial Measurement Unit) for positional information of the pixels, or can be integrated with ground control points (GCPs) measured directly on the ground. However, due to the high cost of establishing GCPs prior to georeferencing, or for inaccessible areas, it is often necessary to derive the positions without such reference information. This study aims to provide a means to improve the georeferencing performance of multispectral camera images without ground reference points, using instead a high-resolution RGB camera operated simultaneously onboard. The exterior orientation parameters of the drone camera are first estimated through bundle adjustment and compared with reference values derived from the GCPs. The results showed that incorporating the images from the high-resolution RGB camera greatly improved both the exterior orientation estimation and the georeferencing of the multispectral camera. Additionally, an evaluation of the direction estimation from a ground point to the sensor showed that including RGB images can reduce the angle errors by an order of magnitude.
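As a rough illustration of the final evaluation (the angle error of the estimated ground-point-to-sensor direction), the metric could be computed as below. The paper does not specify its exact formula, so treat this as an assumed implementation:

```python
import numpy as np

def angular_error_deg(v_est, v_ref):
    """Angle in degrees between an estimated and a reference
    ground-point-to-sensor direction vector."""
    v1 = np.asarray(v_est, float)
    v1 = v1 / np.linalg.norm(v1)
    v2 = np.asarray(v_ref, float)
    v2 = v2 / np.linalg.norm(v2)
    # Clip guards against tiny floating-point overshoot outside [-1, 1].
    return np.degrees(np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0)))
```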

Implementation of the Color Matching Between Mobile Camera and Mobile LCD Based on RGB LUT (모바일 폰의 카메라와 LCD 모듈간의 RGB 참조표에 기반한 색 정합의 구현)

  • Son Chang-Hwan;Park Kee-Hyon;Lee Cheol-Hee;Ha Yeong-Ho
    • Journal of the Institute of Electronics Engineers of Korea SP / v.43 no.3 s.309 / pp.25-33 / 2006
  • This paper proposes a device-independent color matching algorithm based on a 3D RGB lookup table (LUT) between a mobile camera and a mobile LCD (Liquid Crystal Display) to improve color fidelity. The proposed algorithm is composed of three steps: device characterization, gamut mapping, and 3D RGB-LUT design. First, characterization of the mobile LCD is performed using a sigmoidal function, different from conventional methods such as GOG (Gain Offset Gamma) and S-curve modeling, based on observation of the electro-optical transfer function of the mobile LCD. Next, mobile camera characterization is conducted by fitting the digital values of a GretagColor chart captured under a daylight environment (D65) to tristimulus values (CIELAB) using polynomial regression. However, the CIELAB values estimated by polynomial regression can exceed the boundary of the CIELAB color space; these values are therefore corrected by linear compression of lightness and chroma. Gamut mapping is then used to overcome the gamut difference between the mobile camera and the mobile LCD. Finally, to enable real-time processing, a 3D RGB-LUT is designed from these characterization and gamut-mapping results, and its performance is evaluated and compared with conventional methods.
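At run time, a 3D RGB-LUT of the kind described is applied by trilinear interpolation. The following sketch shows that lookup step only; an identity LUT stands in for the real characterization/gamut-mapping result, and the function name is hypothetical:

```python
import numpy as np

def apply_3d_lut(rgb, lut):
    """Trilinear interpolation through an (N, N, N, 3) LUT mapping
    camera RGB in [0, 1] to display RGB."""
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb, float), 0.0, 1.0) * (n - 1)
    i0 = np.floor(pos).astype(int)
    i1 = np.minimum(i0 + 1, n - 1)
    f = pos - i0                      # fractional position inside the cell
    out = np.zeros(3)
    # Blend the 8 surrounding lattice entries with trilinear weights.
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                idx = (i1[0] if dr else i0[0],
                       i1[1] if dg else i0[1],
                       i1[2] if db else i0[2])
                w = ((f[0] if dr else 1 - f[0]) *
                     (f[1] if dg else 1 - f[1]) *
                     (f[2] if db else 1 - f[2]))
                out += w * lut[idx]
    return out
```

Because only the LUT lookup runs per pixel, the expensive characterization and gamut mapping are paid once offline, which is the point of the LUT design step.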

Text Detection and Binarization using Color Variance and an Improved K-means Color Clustering in Camera-captured Images (카메라 획득 영상에서의 색 분산 및 개선된 K-means 색 병합을 이용한 텍스트 영역 추출 및 이진화)

  • Song Young-Ja;Choi Yeong-Woo
    • The KIPS Transactions:PartB / v.13B no.3 s.106 / pp.205-214 / 2006
  • Texts in images carry significant, detailed information about the scenes, and if those texts can be detected and recognized automatically in real time, this can be used in various applications. In this paper, we propose a new text detection method that can find text in various camera-captured images, together with a text segmentation method for the detected text regions. The detection method uses color variance as a detection feature in RGB color space, and the segmentation method uses an improved K-means color clustering in RGB color space. We tested the proposed methods on various kinds of document-style and natural-scene images captured by digital cameras and a mobile-phone camera, as well as on a portion of the ICDAR[1] contest images.
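The base K-means color clustering that the segmentation step improves upon can be sketched as follows. The initialization and iteration count are assumptions, and the paper's adaptive merging improvements are not reproduced:

```python
import numpy as np

def kmeans_colors(pixels, k, iters=20, seed=0):
    """Plain k-means on (N, 3) RGB pixels; returns (centers, labels)."""
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct random pixels.
    centers = pixels[rng.choice(len(pixels), k, replace=False)].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        # Assign each pixel to its nearest center in RGB space.
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its members (skip empty clusters).
        for c in range(k):
            if np.any(labels == c):
                centers[c] = pixels[labels == c].mean(axis=0)
    return centers, labels
```

For binarization, pixels in the cluster(s) judged to be text would then be mapped to foreground and the rest to background.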

Noise Reduction Method Using Randomized Unscented Kalman Filter for RGB+D Camera Sensors (랜덤 무향 칼만 필터를 이용한 RGB+D 카메라 센서의 잡음 보정 기법)

  • Kwon, Oh-Seol
    • Journal of Broadcast Engineering / v.25 no.5 / pp.808-811 / 2020
  • This paper proposes a method to minimize the error of the Kinect camera sensor by using a randomized unscented Kalman filter. Kinect cameras, which provide RGB values and depth information, exhibit nonlinear sensor errors, causing problems in various applications such as skeleton detection. Conventional methods have tried to remove the errors with various filtering techniques, but they are limited in how effectively they can remove nonlinear noise. In this paper, a randomized unscented Kalman filter is therefore applied to predict and update the nonlinear noise characteristics, and the performance of skeleton detection is then enhanced. The experimental results confirmed that the proposed method is superior to the conventional methods in quantitative results and in images reconstructed in 3D space.
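Any unscented-filter variant, including the randomized one used here, is built on sigma points that capture a Gaussian's mean and covariance exactly through the nonlinear prediction/update steps. A minimal sketch of that building block, using the standard (non-randomized) weights as an assumption:

```python
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    """Standard unscented-transform sigma points and weights for an
    n-dimensional Gaussian (the building block of any UKF variant)."""
    mean = np.asarray(mean, float)
    n = len(mean)
    # Columns of L are the scaled principal "spread" directions.
    L = np.linalg.cholesky((n + kappa) * cov)
    pts = [mean]
    for i in range(n):
        pts.append(mean + L[:, i])
        pts.append(mean - L[:, i])
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w
```

Propagating these points through the nonlinear sensor model and re-estimating mean and covariance from the weighted results is what lets the UKF track the nonlinear Kinect noise that a linear filter misses.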

Multiple Depth and RGB Camera-based System to Acquire Point Cloud for MR Content Production (MR 콘텐츠 제작을 위한 다중 깊이 및 RGB 카메라 기반의 포인트 클라우드 획득 시스템)

  • Kim, Kyung-jin;Park, Byung-seo;Kim, Dong-wook;Seo, Young-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2019.05a / pp.445-446 / 2019
  • Recently, attention has been focused on mixed reality (MR) technology, which provides experiences that cannot be realized in reality by fusing virtual information into the real world. Mixed reality has the advantages of excellent interaction with reality and a maximized sense of immersion. In this paper, we propose a method to acquire a point cloud for the production of mixed-reality content using a system of multiple depth and RGB cameras.

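One common way to turn each depth+RGB camera's output into a colored point cloud is pinhole back-projection; a minimal sketch, with the intrinsics (fx, fy, cx, cy) assumed known from calibration:

```python
import numpy as np

def depth_to_pointcloud(depth, rgb, fx, fy, cx, cy):
    """Back-project a depth map through pinhole intrinsics, pairing each
    valid 3D point with its RGB color (one camera of a multi-view rig)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    valid = z > 0                      # zero depth marks missing measurements
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x[valid], y[valid], z[valid]], axis=1)
    colors = rgb[valid]
    return pts, colors
```

Clouds from the individual cameras would then be merged into one scene using the extrinsic calibration between the cameras.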

Real Time Implementation of Face Tracking System Using Color Information (색상 정보를 이용한 실시간 얼굴 추적 시스템 구현)

  • 김영운;이형지;정재호
    • Proceedings of the IEEK Conference / 2001.09a / pp.151-154 / 2001
  • The goal of this paper is to implement a system that tracks faces in real time from images captured by a general-purpose USB camera. After receiving an image from the USB camera, skin-color regions extracted with a 2D RGB color model are found, and the face is located using horizontal and vertical projection information. The conventional RGB color model is improved to be robust to illumination, and a cumulative-histogram region-merging algorithm is proposed to minimize the errors that arise when using projection information. The implemented system runs quickly even on videos with a lot of motion; in particular, when there is little motion, it finds the face as soon as the camera displays the image, showing performance sufficient to process consecutive frames.

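The projection-based face localization described above can be sketched as follows. The skin thresholds are hypothetical stand-ins for the paper's illumination-robust 2D RGB model, and the box rule is simplified:

```python
import numpy as np

def skin_mask_rgb(img, r_min=95, rg_gap=15):
    """A crude skin rule on R and R-G (hypothetical thresholds; the
    paper builds an improved 2D RGB chromaticity model instead)."""
    r, g = img[..., 0].astype(int), img[..., 1].astype(int)
    return (r > r_min) & (r - g > rg_gap)

def face_box_by_projection(skin_mask, thresh=1):
    """Locate a face candidate as the span where row/column projections
    (pixel counts of the skin mask) exceed a threshold."""
    cols = skin_mask.sum(axis=0)   # vertical projection
    rows = skin_mask.sum(axis=1)   # horizontal projection
    xs = np.where(cols >= thresh)[0]
    ys = np.where(rows >= thresh)[0]
    if len(xs) == 0 or len(ys) == 0:
        return None
    return xs[0], ys[0], xs[-1], ys[-1]  # (x0, y0, x1, y1)
```

The paper's cumulative-histogram region-merging step would refine these raw projection spans; that refinement is not reproduced here.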

Skeleton-based 3D Pointcloud Registration Method (스켈레톤 기반의 3D 포인트 클라우드 정합 방법)

  • Park, Byung-Seo;Kim, Dong-Wook;Seo, Young-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2021.06a / pp.89-90 / 2021
  • In this paper, we propose a new technique for calibrating multi-view RGB-D cameras using 3D (dimensional) skeletons. Calibrating multi-view cameras requires consistent feature points, and we use the human skeleton as that feature. Human skeletons can now be obtained easily with state-of-the-art pose estimation algorithms. We propose an RGB-D-based calibration algorithm that uses the joint coordinates of the 3D skeleton obtained through a pose estimation algorithm as feature points.

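Given matched 3D joint coordinates seen from two cameras, the calibration reduces to estimating a rigid transform between the views. A standard Kabsch/Procrustes sketch (the paper's exact pipeline may differ):

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src joints onto dst
    joints (Kabsch/Procrustes), given matched 3D skeleton joints."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered joint sets.
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guarantees a proper rotation (det = +1), not a reflection.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

Skeleton joints work well as features here because the pose estimator reports the same semantic points (elbows, knees, etc.) consistently in every view.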

Depthmap Generation with Registration of LIDAR and Color Images with Different Field-of-View (다른 화각을 가진 라이다와 칼라 영상 정보의 정합 및 깊이맵 생성)

  • Choi, Jaehoon;Lee, Deokwoo
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.6 / pp.28-34 / 2020
  • This paper proposes an approach to the fusion of two heterogeneous sensors with different fields-of-view (FOV): a LIDAR and an RGB camera. Registration between the data captured by the LIDAR and the RGB camera provides the fusion result, and registration is complete once a depthmap corresponding to the 2-dimensional RGB image has been generated. For this fusion, an RPLIDAR-A3 (manufactured by Slamtec) and a general digital camera were used to acquire depth and image data, respectively. The LIDAR sensor provides distance information between the sensor and nearby objects in the scene, and the RGB camera provides a 2-dimensional image with color information. Fusing the 2D image with the depth information enables better performance in applications such as object detection and tracking; for instance, automatic driver assistance systems, robotics, or other systems that require visual information processing may find this work useful. Since the LIDAR provides only depth values, processing to generate a depthmap that corresponds to the RGB image is required. Experimental results are provided to validate the proposed approach.
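A typical way to generate the depthmap described above is to project the LIDAR points into the camera image using the extrinsics and intrinsics obtained from registration; a minimal sketch with assumed calibration inputs:

```python
import numpy as np

def lidar_to_depthmap(points, R, t, K, shape):
    """Project LIDAR points (sensor frame) into the RGB camera to get a
    sparse depthmap; R, t are LIDAR-to-camera extrinsics, K the intrinsics."""
    h, w = shape
    cam = points @ R.T + t                # transform into the camera frame
    front = cam[:, 2] > 0                 # keep points in front of the camera
    cam = cam[front]
    uv = cam @ K.T                        # pinhole projection (homogeneous)
    u = np.round(uv[:, 0] / uv[:, 2]).astype(int)
    v = np.round(uv[:, 1] / uv[:, 2]).astype(int)
    depth = np.zeros((h, w))
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    for ui, vi, zi in zip(u[inside], v[inside], cam[inside, 2]):
        if depth[vi, ui] == 0 or zi < depth[vi, ui]:
            depth[vi, ui] = zi            # keep the nearest return per pixel
    return depth
```

Because the LIDAR is much sparser than the image, the resulting depthmap is sparse; densification (interpolation between projected returns) would be a natural follow-up step.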