• Title/Summary/Keyword: Color Camera

Quality Enhancement of 3D Volumetric Contents Based on 6DoF for 5G Telepresence Service

  • Byung-Seo Park;Woosuk Kim;Jin-Kyum Kim;Dong-Wook Kim;Young-Ho Seo
    • Journal of Web Engineering
    • /
    • v.21 no.3
    • /
    • pp.729-750
    • /
    • 2022
  • In general, the importance of 6DoF (six degrees of freedom) 3D volumetric content technology is growing in 5G telepresence services, Web-based (WebGL) graphics, computer vision, robotics, and next-generation augmented reality. Since RGB and depth images can be acquired in real time through depth sensors that use various depth acquisition methods, such as time of flight (ToF) and lidar, much has changed in object detection, tracking, and recognition research. In this paper, we propose a method to improve the quality of 3D models for 5G telepresence by processing images acquired through depth and RGB cameras on a multi-view camera system. The quality is improved in two major ways. The first concerns the shape of the 3D model: we propose a method that removes noise outside the object by applying a mask obtained from the color image, combined with a filtering operation that uses the difference in depth information between pixels inside the object. Second, we propose an illumination compensation method for images acquired through the multi-view camera system for photo-realistic 3D model generation. It is assumed that the volumetric capture is performed indoors and that the position and intensity of the illumination remain constant over time. Since the multi-view system uses a total of eight camera pairs converging toward the center of the capture space, the intensity and angle of the light incident on each camera differ even though the illumination is constant. Therefore, every camera captures a color correction chart, and a color optimization function is used to obtain a color conversion matrix that defines the relationship among the eight acquired images. Using this matrix, the input image from every camera is corrected with respect to the color correction chart. It was confirmed that the proposed method effectively removes noise and improves the quality of the 3D model when images of a volumetric object are acquired with eight cameras, and experiments show that the color difference between images is reduced.
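
The chart-based correction described above reduces to fitting, for each camera, a color conversion matrix that maps its measured chart colors to reference colors and then applying that matrix to every pixel. A minimal sketch of that step is shown below, assuming a 24-patch chart, floating-point RGB in [0, 1], and a simple affine least-squares fit; the function names and the choice of a 3x4 affine model are illustrative, not the paper's exact optimization.

```python
import numpy as np

def fit_color_matrix(measured_rgb, reference_rgb):
    """Fit a 3x4 affine color conversion matrix M so that
    reference ~= M @ [r, g, b, 1] for every chart patch."""
    n = measured_rgb.shape[0]                               # e.g. 24 chart patches
    A = np.hstack([measured_rgb, np.ones((n, 1))])          # (n, 4)
    M, *_ = np.linalg.lstsq(A, reference_rgb, rcond=None)   # (4, 3)
    return M.T                                              # (3, 4)

def apply_color_matrix(image, M):
    """Apply the fitted matrix to every pixel of an HxWx3 float image in [0, 1]."""
    h, w, _ = image.shape
    flat = np.hstack([image.reshape(-1, 3), np.ones((h * w, 1))])
    corrected = flat @ M.T
    return np.clip(corrected, 0.0, 1.0).reshape(h, w, 3)

# Hypothetical usage: correct camera k toward the reference chart colors.
# M_k = fit_color_matrix(chart_rgb[k], chart_rgb_reference)   # both (24, 3)
# corrected_frame = apply_color_matrix(frames[k], M_k)
```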

Eliminating Color Mixing of Projector-Camera System for Fast Radiometric Compensation (컬러 보정의 고속화를 위한 프로젝터-카메라 시스템의 컬러 혼합 성분 제거)

  • Lee, Moon-Hyun;Park, Han-Hoon;Park, Jong-Il
    • Journal of Broadcast Engineering
    • /
    • v.13 no.6
    • /
    • pp.941-950
    • /
    • 2008
  • The quality of a projector's output image is influenced by surrounding conditions such as the shape and color of the screen and the environmental light. Therefore, techniques that ensure desirable image quality regardless of such conditions have been in demand and are being steadily developed. Among them, radiometric compensation is representative. In general, radiometric compensation is achieved by measuring the color of the screen and the environmental light from an analysis of camera images of the projector output, and then adjusting the color of the projector input image pixel by pixel. This process is not time-consuming for small images, but its runtime grows linearly with image size, so for large images reducing the processing time becomes a critical problem. This paper therefore proposes a fast radiometric compensation method. Because the speed of radiometric compensation depends mainly on measuring the color mixing between the projector and the camera, the method uses color filters to eliminate that mixing, removing the need to measure it. Experiments showed that the proposed method improves the compensation speed by 44 percent while maintaining projector output image quality. This method is expected to be a key technique for the widespread use of projectors in large-scale, high-quality displays.
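
Radiometric compensation is commonly modeled per pixel as C = V·P + F, where C is the camera measurement, P the projector input, V a 3x3 color-mixing matrix, and F the environmental-light offset; compensation then inverts this model. The sketch below shows that generic model and how it simplifies once channel mixing is removed (as the color filters above aim to do); it is an illustration of the standard formulation, not the paper's implementation.

```python
import numpy as np

def compensate(desired, V, F):
    """Generic per-pixel radiometric compensation for the model C = V @ P + F.

    desired : HxWx3 target appearance on the screen (float, in [0, 1])
    V       : HxWx3x3 per-pixel projector-to-camera color-mixing matrices
    F       : HxWx3 per-pixel environmental light / black offset
    Returns the projector input P that should make the camera see `desired`.
    """
    V_inv = np.linalg.inv(V)                           # batched 3x3 inverses
    P = np.einsum('hwij,hwj->hwi', V_inv, desired - F)
    return np.clip(P, 0.0, 1.0)

def compensate_no_mixing(desired, gain, F):
    """If channel mixing is removed (e.g. by color filters), V is diagonal and
    compensation reduces to an independent per-channel division."""
    return np.clip((desired - F) / np.maximum(gain, 1e-6), 0.0, 1.0)
```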

Efficient Method for Recovering Spectral Reflectance Using Spectrum Characteristic Matrix (스펙트럼 특성행렬을 이용한 효율적인 반사 스펙트럼 복원 방법)

  • Sim, Kyudong;Park, Jong-Il
    • Journal of Korea Multimedia Society
    • /
    • v.18 no.12
    • /
    • pp.1439-1444
    • /
    • 2015
  • Measuring spectral reflectance can be regarded as obtaining an inherent color parameter of a surface, and spectral reflectance has been widely used in image processing. Model-based spectrum recovery, one method of obtaining spectral reflectance, uses an ordinary camera with multiple illuminations. Conventional model-based methods recover spectral reflectance efficiently using only a few parameters; however, they require pre-measured quantities such as the power spectrum of the illuminations and the spectral sensitivity of the camera. In this paper, we propose an enhanced model-based spectrum recovery method that does not require these pre-measured parameters. Instead of measuring each of them, spectral reflectance is recovered efficiently by estimating and using a spectrum characteristic matrix that combines the spectral parameters: the basis functions, the illumination power spectra, and the camera's spectral sensitivity. The spectrum characteristic matrix can be easily estimated from images of scenes containing a color checker captured under multiple illuminations. Additionally, we suggest a fast recovery method that preserves the positivity constraint of the spectrum by using nonnegative basis functions of spectral reflectance. Our results show accurately reconstructed spectral reflectance and fast constrained estimation without measuring the camera or the illumination. Because the method is convenient to apply, spectral reflectance measurement is expected to become more widely used.
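
In this family of methods the reflectance is expanded in a small basis, r = B c, and each camera response is linear in r, so the stacked observations satisfy p = M c, where M is the spectrum characteristic matrix that folds together the basis, the illumination spectra, and the camera sensitivity. A hedged sketch of the recovery step follows, assuming M has already been estimated from the color-checker captures; using nonnegative least squares together with a nonnegative basis keeps the recovered spectrum positive, mirroring the constraint mentioned above. All names are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def recover_reflectance(responses, char_matrix, basis):
    """Recover spectral reflectance r = B @ c from observations p = M @ c.

    responses   : stacked camera responses under all illuminations, shape (k*m,)
    char_matrix : spectrum characteristic matrix M, shape (k*m, n_basis)
    basis       : nonnegative reflectance basis B, shape (n_wavelengths, n_basis)
    """
    coeffs, _residual = nnls(char_matrix, responses)   # coefficients constrained >= 0
    return basis @ coeffs                              # reflectance at the basis wavelengths

# Unconstrained (faster) variant:
# coeffs = np.linalg.lstsq(char_matrix, responses, rcond=None)[0]
```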

Calibration for Color Measurement of Lean Tissue and Fat of the Beef

  • Lee, S.H.;Hwang, H.
    • Agricultural and Biosystems Engineering
    • /
    • v.4 no.1
    • /
    • pp.16-21
    • /
    • 2003
  • In the agricultural field, machine vision systems have been widely used to automate inspection processes, especially quality grading. Though machine vision systems are very effective in quantifying geometric quality factors, they are deficient in quantifying color information. This study was conducted to evaluate the color of beef using a machine vision system. Though measuring the color of beef with a machine vision system has the advantage of covering the whole lean tissue area at once, unlike a colorimeter, it is sensitive to system components such as the type of camera and the lighting conditions. The effect of the camera's color balancing control was investigated, and a color calibration process based on a multi-layer back-propagation neural network was developed. The color calibration network was trained using reference color patches and showed high correlation with the L*a*b* coordinates of a colorimeter. The proposed calibration process adapted successfully to various measurement environments, such as different types of cameras and light sources. Results comparing the proposed calibration process with MLR-based calibration are also presented. The color calibration network was then successfully applied to measure the color of the beef. However, the reflectance properties of the calibration reference materials and of the test materials should be considered to achieve more accurate color measurement.
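
The calibration described above is, in essence, a small regression network from camera RGB to colorimeter L*a*b* trained on reference patches. A minimal sketch with scikit-learn is shown below; the layer sizes, file names, and training settings are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# camera_rgb : (n_patches, 3) RGB values measured from the reference color patches
# lab_true   : (n_patches, 3) L*a*b* values of the same patches from a colorimeter
camera_rgb = np.load('camera_rgb_patches.npy')        # hypothetical file names
lab_true = np.load('colorimeter_lab_patches.npy')

calib = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
calib.fit(camera_rgb, lab_true)                       # back-propagation training

# Predict calibrated L*a*b* for pixels sampled from the lean tissue region.
lean_rgb = np.load('lean_tissue_rgb.npy')             # (n_pixels, 3), hypothetical
lean_lab = calib.predict(lean_rgb)
print('mean calibrated L*a*b*:', lean_lab.mean(axis=0))
```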

Multiple Color and ToF Camera System for 3D Contents Generation

  • Ho, Yo-Sung
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.6 no.3
    • /
    • pp.175-182
    • /
    • 2017
  • In this paper, we present a multi-depth generation method using a time-of-flight (ToF) fusion camera system. Multi-view color cameras in a parallel configuration and ToF depth sensors are used to capture the 3D scene. Although each ToF depth sensor can measure the depth of the scene in real time, it has several problems to overcome. Therefore, after capturing low-resolution depth images with the ToF sensors, we perform post-processing to address these problems. The depth information from the depth sensor is then warped to the color image positions and used as initial disparity values. In addition, the warped depth data is used to generate a depth-discontinuity map for efficient stereo matching. By applying stereo matching using belief propagation with the depth-discontinuity map and the initial disparity information, we obtain more accurate and stable multi-view disparity maps in reduced time.
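
The warping step amounts to converting the ToF depth into disparity for a rectified color pair (d = f·B / Z) and flagging depth discontinuities to guide the stereo matcher. Below is a minimal sketch of both pieces, assuming rectified cameras with known focal length and baseline; the function names and the discontinuity threshold are illustrative assumptions.

```python
import numpy as np

def depth_to_disparity(depth_m, focal_px, baseline_m):
    """Convert a ToF depth map (meters) into disparity (pixels) for a
    rectified color pair: d = f * B / Z. Invalid (zero) depths stay zero."""
    disparity = np.zeros_like(depth_m, dtype=np.float32)
    valid = depth_m > 0
    disparity[valid] = focal_px * baseline_m / depth_m[valid]
    return disparity, valid

def depth_discontinuity_map(depth_m, threshold_m=0.05):
    """Mark pixels whose depth differs from a neighbor by more than the
    threshold; such a map can limit smoothing across object boundaries
    during stereo matching."""
    dy = np.abs(np.diff(depth_m, axis=0, prepend=depth_m[:1, :]))
    dx = np.abs(np.diff(depth_m, axis=1, prepend=depth_m[:, :1]))
    return np.maximum(dx, dy) > threshold_m
```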

Image Reconstruction Using Line-scan Image for LCD Surface Inspection (LCD표면 검사를 위한 라인스캔 영상의 재구성)

  • 고민석;김우섭;송영철;최두현;박길흠
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.41 no.4
    • /
    • pp.69-74
    • /
    • 2004
  • In this paper, we propose a novel method for improving defect-detection performance based on reconstruction of line-scan camera images using both the projection profiles and color space transform. The proposed method consists of RGB region segmentation, representative value reconstruction using the tracing system, and Y image reconstruction using color-space transformation. Through experiments it is demonstrated that the performance using the reconstructed image is better than that using aerial image for LCD surface inspection.
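
One concrete piece of such a pipeline, the color-space transform that produces the Y (luminance) image from reconstructed RGB data, together with a column projection profile, can be sketched as follows; the BT.601 weights are the usual choice for this transform and are an assumption here, as are the function names.

```python
import numpy as np

def rgb_to_y(rgb):
    """Convert an HxWx3 RGB image to a single-channel luminance (Y) image
    using the standard BT.601 weights."""
    weights = np.array([0.299, 0.587, 0.114], dtype=np.float32)
    return rgb.astype(np.float32) @ weights

def column_projection_profile(gray):
    """Vertical projection profile: the mean intensity of each column,
    useful for locating the periodic R, G, and B sub-pixel regions
    in a line-scan image."""
    return gray.mean(axis=0)
```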

Indoor Environment Modeling with Stereo Camera for Mobile Robot Navigation

  • Park, Sung-Kee;Park, Jong-Suk;Kim, Munsang;Lee, Chong-won
    • Proceedings of the Institute of Control, Robotics and Systems (ICROS) Conference
    • /
    • 2002.10a
    • /
    • pp.34.5-34
    • /
    • 2002
  • In this paper, we propose a new method for modeling an indoor environment with a stereo camera and suggest a localization method for mobile robot navigation based on it. From the viewpoint of ease of map building and avoidance of artificial landmarks, the main idea is that the environment is represented as a global topological map in which each node holds omni-directional metric and color information obtained with a stereo camera and a pan/tilt mechanism. We use the per-pixel depth and color information itself as the feature for environmental abstraction, and we use only the depth and color information along the horizontal centerline of the image, through which the optical axis passes. The usefulness of this m...
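
A minimal sketch of the node descriptor this abstract describes, i.e. the depth and color values sampled along the horizontal centerline of each (rectified) view and concatenated over the pan positions; the array layout and function names are assumptions.

```python
import numpy as np

def centerline_signature(color_img, depth_img):
    """Take the color and depth values along the horizontal centerline
    (the row crossed by the optical axis) of one rectified view."""
    row = color_img.shape[0] // 2
    return color_img[row], depth_img[row]

def node_signature(views):
    """Concatenate the centerline signatures of all pan positions of a node
    into one omni-directional color + metric descriptor.
    `views` is a list of (color_img, depth_img) pairs ordered by pan angle."""
    colors, depths = zip(*(centerline_signature(c, d) for c, d in views))
    return np.concatenate(colors, axis=0), np.concatenate(depths, axis=0)
```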

Color Enhancement of Low Exposure Images using Histogram Specification and its Application to Color Shift Model-Based Refocusing

  • Lee, Eunsung;Kang, Wonseok;Kim, Sangjin
    • IEIE Transactions on Smart Processing and Computing
    • /
    • v.1 no.1
    • /
    • pp.8-16
    • /
    • 2012
  • An image obtained in a low-light environment suffers from a low-exposure problem caused by non-ideal camera settings, i.e., aperture size and shutter speed. In particular, the multiple color-filter aperture (MCA) system inherently suffers from low exposure and from performance degradation in its image classification and registration processes due to the finite size of its apertures. In this context, this paper presents a novel method for the color enhancement of low-exposure images and its application to color shift model-based image refocusing in the MCA system. Although various histogram equalization (HE) approaches have been proposed, they tend to distort the color information of the processed image because of the range limits of the histogram. The proposed color enhancement algorithm enhances the global brightness by analyzing the basic cause of the low-exposure phenomenon, and then compensates for contrast degradation artifacts by using an adaptive histogram specification. We also apply the proposed algorithm to the preprocessing step of the refocusing technique in the MCA system to enhance the color image. The experimental results confirm that the proposed method can enhance the contrast of any low-exposure color image acquired by a conventional camera, and that it is suitable for low-cost, high-quality imaging devices such as consumer-grade camcorders, real-time 3D reconstruction systems, and digital and computational cameras.
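
Histogram specification (matching) remaps intensities so that the image's cumulative histogram follows a chosen target; the sketch below shows that core operation for one 8-bit channel. How the paper builds its adaptive target histogram is not reproduced here, so the target is simply passed in as an argument.

```python
import numpy as np

def histogram_specification(channel, target_hist):
    """Remap an 8-bit channel so that its histogram approximates target_hist.

    channel     : HxW uint8 image channel
    target_hist : length-256 array of desired (unnormalized) bin counts
    """
    src_hist = np.bincount(channel.ravel(), minlength=256)
    src_cdf = np.cumsum(src_hist) / channel.size
    tgt_cdf = np.cumsum(target_hist) / np.sum(target_hist)
    # For each source level, pick the target level with the closest CDF value.
    lut = np.searchsorted(tgt_cdf, src_cdf).clip(0, 255).astype(np.uint8)
    return lut[channel]
```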

Moving Object Tracking Method in Video Data Using Color Segmentation (칼라 분할 방식을 이용한 비디오 영상에서의 움직이는 물체의 검출과 추적)

  • 이재호;조수현;김회율
    • Proceedings of the IEEK Conference
    • /
    • 2001.06d
    • /
    • pp.219-222
    • /
    • 2001
  • Moving objects in video data are the main elements for video analysis and retrieval. In this paper, we propose a new algorithm for tracking and segmenting moving objects in color image sequences that contain complex camera motion such as zoom, pan, and rotation. The proposed algorithm is based on mean-shift color segmentation and a stochastic region matching method. For segmenting moving objects, each frame is divided into a set of similar-color regions using the mean-shift color segmentation algorithm. Each segmented region is matched to the corresponding region in the subsequent frame. The motion vector of each matched region is then estimated, and these motion vectors are combined to estimate the global motion. Once motion vectors are estimated for all frames of the video sequence, independently moving regions can be segmented by comparing their trajectories with that of the global motion. Finally, segmented regions are merged into independently moving objects by comparing the similarities of their trajectories, positions, and emerging periods. The experimental results show that the proposed algorithm is capable of segmenting independently moving objects in video sequences containing complex camera motion.
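
OpenCV's pyramid mean-shift filtering gives a quick stand-in for the mean-shift color segmentation step; the sketch below segments a frame and estimates one region's motion by matching its bounding patch in the next frame, a simplification of the stochastic region matching described above. The radii, search range, and function names are illustrative assumptions.

```python
import cv2
import numpy as np

def segment_frame(frame_bgr, spatial_radius=10, color_radius=20):
    """Mean-shift color filtering flattens the frame into roughly uniform
    color regions, which can then be matched across frames."""
    return cv2.pyrMeanShiftFiltering(frame_bgr, spatial_radius, color_radius)

def region_motion(prev_seg, next_seg, region_mask, search=16):
    """Estimate one region's motion vector by matching its bounding patch
    in the next (segmented) frame -- a simplified stand-in for stochastic
    region matching."""
    ys, xs = np.nonzero(region_mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1
    patch = prev_seg[y0:y1, x0:x1]
    yl, xl = max(0, y0 - search), max(0, x0 - search)
    window = next_seg[yl:y1 + search, xl:x1 + search]
    result = cv2.matchTemplate(window, patch, cv2.TM_SQDIFF_NORMED)
    _, _, min_loc, _ = cv2.minMaxLoc(result)                 # lowest-cost match
    return min_loc[0] - (x0 - xl), min_loc[1] - (y0 - yl)    # (dx, dy) in pixels
```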

Emotion Recognition by Vision System (비젼에 의한 감성인식)

  • 이상윤;오재흥;주영훈;심귀보
    • Proceedings of the Korean Institute of Intelligent Systems Conference
    • /
    • 2001.12a
    • /
    • pp.203-207
    • /
    • 2001
  • In this paper, we propose a neural-network-based emotion recognition method for intelligently recognizing human emotion from CCD color images. We first acquire a color image from the CCD camera and then recognize the facial expression represented by the structural correlation of facial feature points (eyebrows, eyes, nose, mouth); extracting, separating, and recognizing the correct data from the image is the central part of the technique. In the proposed method, human emotion is divided into four categories (surprise, anger, happiness, sadness). The complexion (skin-color) area is separated using the color-difference components of the color space, by a method that separates the background from the face robustly against changes in external illumination. For this, we propose an algorithm that extracts the four kinds of feature points from the face image acquired by the color CCD camera and derives a normalized face image and feature vectors from them. A back-propagation algorithm is then applied to the secondary feature vectors. Finally, we show the practical applicability of the proposed method.
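
The complexion-area separation by color difference can be sketched with a chrominance threshold in YCrCb space, a common stand-in for the test the abstract describes; the threshold values below are widely used defaults, not the paper's, and the function name is illustrative.

```python
import cv2
import numpy as np

def skin_mask(frame_bgr):
    """Separate the complexion (skin-color) area using the chrominance
    (color-difference) channels, which vary less with external illumination
    than raw RGB values do."""
    ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)     # Y, Cr, Cb lower bounds
    upper = np.array([255, 173, 127], dtype=np.uint8)  # widely used default ranges
    mask = cv2.inRange(ycrcb, lower, upper)
    # Clean the mask before locating the facial feature points inside it.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```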
