• Title/Summary/Keyword: RGB image

Search Result 826, Processing Time 0.026 seconds

Object Detection and Performance Comparison based on RGB image and thermal infrared radiation (RGB 영상과 열 적외선 영상 기반 객체 탐지 알고리즘 수행 및 성능 비교)

  • Kim, Shin;Lee, Yegi;Yoon, Kyoungro;Lim, Hanshin;Lee, Hee Kyoung;Choo, Hyon-gon;Seo, Jeongil
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2020.11a
    • /
    • pp.176-179
    • /
    • 2020
  • Most current object detection algorithms are developed on the basis of RGB images. However, RGB images captured on foggy or rainy days, or at night, can be blurry or poorly visible, which may lead to low object detection performance. Thermal infrared images are produced by thermal sensors and, unlike RGB images, can be acquired regardless of weather conditions or time of day. In this paper, we run object detection algorithms on RGB images and on thermal infrared images and compare the detection performance on each type of image. Object detection was performed on RGB and thermal infrared images acquired at night, and the thermal-infrared-based results showed higher accuracy than the RGB-based results. In addition, we selected nighttime RGB and thermal infrared images to fine-tune the object detection network, and the experiments with the fine-tuned network likewise confirmed that thermal infrared images yield higher object detection accuracy than RGB images.


Intended for photovoltaic modules Compare modeling between SfM based RGB and TIR Images (SfM 기반 RGB 및 TIR 영상해석을 통한 태양광 모듈 이상징후 정밀위치 검출)

  • Park, Joon-Kyu;Han, Woong-ji;Kwon, Young-Hun;Kang, Joon-Oh;Lee, Yong-Chang
    • Journal of Urban Science
    • /
    • v.8 no.1
    • /
    • pp.7-14
    • /
    • 2019
  • Recently, interest in solar energy, which is at the center of the new government energy policy, is increasing. However, the focus is on the mass construction of solar power plants, and policies and related technologies for the maintenance and management of already installed PV modules are insufficient. In this study, we used a UAV (Unmanned Aerial Vehicle) to acquire RGB and infrared images, applied them to a structure-from-motion (SfM) based image analysis tool to build a three-dimensional model, and monitored the positions of hot spots and detected their coordinates. As a result, by superimposing UAV-based infrared and RGB images, suspected hot-spot solar cells can be monitored and localized, providing basic spatial information for the maintenance of solar modules.

Low-cost Assessment of Canopy Light Interception and Leaf Area in Soybean Canopy Cover using RGB Color Images (RGB 컬러 이미지를 이용한 콩의 군락 피복과 엽면적에 대한 저비용 평가)

  • Lee, Yun-Ho;Sang, Wan-Gyu;Baek, Jae-Kyeong;Kim, Jun-Hwan;Cho, Jung-Il;Seo, Myung-Chul
    • Korean Journal of Agricultural and Forest Meteorology
    • /
    • v.22 no.1
    • /
    • pp.13-19
    • /
    • 2020
  • This study compared RGB color images with canopy light interception (LI) and leaf area index (LAI) measurements for low cost and low labor. LAI and LI were measured from the vertical gap fraction derived from top-of-canopy digital images of soybean canopy cover (cv. Daewonkong, Deapongkong and Pungsannamulkong). RGB color images, LAI, and LI were collected from the V4.5 stage to the R5 stage. Image segmentation was based on the excess green minus excess red index (ExG-ExR). There was a linear relationship between measured LAI and LI (r2=0.84). There was a linear relationship between measured LI and canopy cover on image (CCI) (r2=0.94). There was a significant positive relationship (r2=0.74) between LAI and CCI across the whole growing season. Therefore, it is expected that in the future the RGB color image could be used to easily measure LAI and LI at low cost and low labor.
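As a rough illustration of the segmentation step above, the ExG-ExR rule can be sketched in a few lines of Python; the toy 2x2 image, the [0, 1] value range, and the zero decision threshold are assumptions for this sketch, not details taken from the paper.

```python
import numpy as np

def exg_minus_exr_mask(rgb):
    """Segment vegetation pixels with excess green minus excess red (ExG - ExR).

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    Returns a boolean mask where True marks canopy (plant) pixels.
    """
    # Normalized chromatic coordinates r, g, b
    total = rgb.sum(axis=2) + 1e-8
    r = rgb[..., 0] / total
    g = rgb[..., 1] / total
    b = rgb[..., 2] / total
    exg = 2.0 * g - r - b          # excess green index
    exr = 1.4 * r - g              # excess red index
    return (exg - exr) > 0.0       # plant where ExG - ExR is positive

# Canopy cover on image (CCI) = fraction of plant pixels
img = np.zeros((2, 2, 3))
img[0, 0] = [0.1, 0.8, 0.1]   # green (plant) pixel
img[0, 1] = [0.8, 0.1, 0.1]   # red (soil) pixel
mask = exg_minus_exr_mask(img)
cci = mask.mean()
```

CCI computed this way is the quantity the abstract relates to measured LAI and LI.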

Algorithm of Face Region Detection in the TV Color Background Image (TV컬러 배경영상에서 얼굴영역 검출 알고리즘)

  • Lee, Joo-Shin
    • Journal of Advanced Navigation Technology
    • /
    • v.15 no.4
    • /
    • pp.672-679
    • /
    • 2011
  • In this paper, a detection algorithm for face regions based on skin color in TV images is proposed. First, a reference image is set from sampled skin color, and candidate face regions are extracted using the Euclidean distance between it and the pixels of the TV image. The eye image is detected using the mean value and standard deviation of the color-difference component between Y and C, obtained by converting the RGB color model into the CMY color model. The lips image is detected using the Q component obtained by converting the RGB color model into the YIQ color space. The face region is then extracted on the basis of knowledge by logically combining the eye image and the lips image. To verify the proposed method, experiments were performed on frontal color images captured from TV. Experimental results showed that the face region can be detected irrespective of the location and size of the human face.
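The first step, selecting candidate pixels by Euclidean distance to a sampled skin color, might be sketched as below; the reference color and distance threshold are made-up values for illustration, not the paper's calibration.

```python
import numpy as np

def skin_candidate_mask(img, reference_skin, threshold=60.0):
    """Mark candidate face-region pixels whose RGB Euclidean distance
    to a sampled reference skin color falls below a threshold.

    img: uint8 array of shape (H, W, 3); reference_skin: (R, G, B) tuple.
    """
    diff = img.astype(float) - np.asarray(reference_skin, dtype=float)
    dist = np.sqrt((diff ** 2).sum(axis=-1))   # per-pixel Euclidean distance
    return dist < threshold

# One skin-like pixel and one blue pixel (hypothetical reference color)
frame = np.array([[[220, 180, 150], [30, 40, 200]]], dtype=np.uint8)
mask = skin_candidate_mask(frame, reference_skin=(224, 172, 145))
```

The resulting mask would then be refined with the eye and lips cues described in the abstract.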

Color Image Enhancement Using Vector Rotation Based on Color Constancy (칼라 항상성에 기초한 벡터 회전을 이용한 칼라 영상 향상)

  • 김경만;이채수;박영식;하영호
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 1996.06a
    • /
    • pp.181-185
    • /
    • 1996
  • Color images are often corrupted by ambient illumination. However, humans always perceive white as white under any illumination because of a characteristic of human vision called color constancy. In conventional algorithms that apply this constancy effect, the RGB color space is first transformed into the IHS (Intensity, Hue, and Saturation) color space; the hue is preserved while the intensity or saturation is enhanced, and the enhanced IHS color is then transformed back into the RGB color space. In this process, color distortion is introduced by color gamut errors. The proposed algorithm requires no such transformation: the RGB color is treated as a three-dimensional color vector, and white is assumed to correspond to natural daylight. Since the color vector of the illumination can be estimated as the average vector of the R, G, and B images, the constancy effect can be achieved by simply rotating the illumination vector onto the white color vector. Simulation results show the efficiency of the vector rotation process for color image enhancement.
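A minimal sketch of this idea, assuming the illuminant is estimated as the mean RGB vector (a grey-world-style assumption) and using a standard Rodrigues rotation to align it with the neutral white axis; the paper's exact rotation construction may differ.

```python
import numpy as np

def rotation_to_white(illum):
    """Rodrigues rotation matrix mapping the illuminant direction
    onto the neutral white axis (1, 1, 1) / sqrt(3)."""
    a = illum / np.linalg.norm(illum)
    w = np.ones(3) / np.sqrt(3.0)
    v = np.cross(a, w)                  # rotation axis (unnormalized)
    s = np.linalg.norm(v)               # sin of the rotation angle
    c = np.dot(a, w)                    # cos of the rotation angle
    if s < 1e-12:                       # already aligned with white
        return np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])  # cross-product matrix of v
    return np.eye(3) + K + K @ K * ((1.0 - c) / s**2)

# Illuminant estimated as the average RGB vector of the image
img = np.array([[[0.9, 0.5, 0.3], [0.7, 0.4, 0.2]]])
illum = img.reshape(-1, 3).mean(axis=0)
R = rotation_to_white(illum)
corrected = img @ R.T                   # rotate every pixel vector
```

Because the correction is a pure rotation, vector lengths are preserved, which is the stated advantage over a round trip through IHS space.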


COLORNET: Importance of Color Spaces in Content based Image Retrieval

  • Judy Gateri;Richard Rimiru;Micheal Kimwele
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.5
    • /
    • pp.33-40
    • /
    • 2023
  • Content-Based Image Retrieval (CBIR) is the mainstay of current image retrieval frameworks. The most common retrieval method involves the submission of a query image, after which the system extracts visual characteristics such as shape, color, and texture from the images. Most techniques extract and classify images in the RGB color space, since it is the default color space of the images when no conversion is applied. To determine the most effective color space for retrieving images, this research discusses the transformation of RGB into different color spaces, feature extraction, and the use of Convolutional Neural Networks for retrieval.

Synthesis method of elemental images from Kinect images for space 3D image (공간 3D 영상디스플레이를 위한 Kinect 영상의 요소 영상 변환방법)

  • Ryu, Tae-Kyung;Hong, Seok-Min;Kim, Kyoung-Won;Lee, Byung-Gook
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2012.05a
    • /
    • pp.162-163
    • /
    • 2012
  • In this paper, we propose a method for synthesizing elemental images from Kinect images for 3D integral imaging display. Since the RGB image and depth image obtained from Kinect cannot directly display 3D images in an integral imaging system, they must be transformed into elemental images for integral imaging display. To do so, we synthesize the elemental images based on geometric optics mapping from depth-plane images obtained from the RGB and depth images. To show the usefulness of the proposed system, we carried out preliminary experiments using a two-person object scene and present the experimental results.


Color Image Segmentations of a Vitiligo Skin Image with Android Platform Smartphone (안드로이드 기반의 스마트폰을 활용한 백반증 피부 영상 분할)

  • Park, Sang-Eun;Kim, Hyun-Tae;Kim, Jeong-Hwan;Kim, Kyeong-Seop
    • The Transactions of The Korean Institute of Electrical Engineers
    • /
    • v.63 no.1
    • /
    • pp.173-178
    • /
    • 2014
  • In this study, new color image processing algorithms on an Android-based mobile device are developed to detect abnormal color densities in a skin image and interpret them as vitiligo lesions. Our proposed method first transforms RGB data into the HSI domain and segments the image into vitiligo-skin candidates by applying Otsu's threshold algorithm. Structuring elements for morphological image processing are suggested to delete spurious regions among the vitiligo candidates, and an image blob labeling algorithm is applied to compare the RGB color densities of the abnormal skin region with those of a region of interest. Our suggested color image processing algorithms are implemented on an Android-platform smartphone, so that a mobile device can be utilized to diagnose or monitor a patient's skin condition in pervasive healthcare service environments.
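The thresholding stage can be sketched as follows: take the intensity channel of the HSI model (the mean of R, G and B) and apply Otsu's method. The synthetic two-tone image and the 256-bin histogram are assumptions for this sketch, not the paper's data or settings.

```python
import numpy as np

def otsu_threshold(gray, bins=256):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist, edges = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability
    centers = (edges[:-1] + edges[1:]) / 2
    mu = np.cumsum(p * centers)                # class-0 cumulative mean
    mu_t = mu[-1]                              # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)           # zero out empty classes
    return centers[np.argmax(sigma_b)]

# Synthetic image: dark normal skin (0.2) above bright depigmented skin (0.8)
img = np.concatenate([np.full((10, 10, 3), 0.2),
                      np.full((10, 10, 3), 0.8)], axis=0)
intensity = img.mean(axis=2)                   # HSI intensity channel
t = otsu_threshold(intensity)
lesion_mask = intensity > t                    # brighter pixels as candidates
```

The mask would then be cleaned with the morphological operations and blob labeling described in the abstract.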

Conversion of Image into Sound Based on HSI Histogram (HSI 히스토그램에 기초한 이미지-사운드 변환)

  • Kim, Sung-Il
    • The Journal of the Acoustical Society of Korea
    • /
    • v.30 no.3
    • /
    • pp.142-148
    • /
    • 2011
  • The final aim of the present study is to develop an intelligent robot emulating the human synesthetic skill of associating a color image with a specific sound, on the basis of mutual conversion between color image and sound. As a first step toward this goal, this study focused on a basic system for converting a color image into sound. This paper describes a proposed conversion method based on the correspondence between the physical frequency information of light and of sound. The conversion of a color image into sound was implemented using HSI histograms obtained through RGB-to-HSI color model conversion, implemented in Microsoft Visual C++ (ver. 6.0). Two different color images were used in the simulation experiments, and the results revealed that the hue, saturation, and intensity elements of each input color image were converted into the fundamental frequency, harmonic, and octave elements of a sound, respectively. Through the proposed system, the converted sound elements were then synthesized to automatically generate a sound source in WAV file format, using Csound.
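The hue-to-frequency, saturation-to-harmonics, and intensity-to-octave correspondence might look like the toy mapping below; every scale factor here (base frequency, harmonic count, octave range) is a hypothetical choice for illustration, not the paper's actual mapping.

```python
def hsi_to_sound_params(h, s, i, base_freq=220.0):
    """Hypothetical HSI-to-sound mapping in the spirit of the paper:
    hue sets the fundamental frequency, saturation the number of
    harmonic partials, and intensity the octave shift."""
    # Hue in [0, 360) mapped linearly across one octave above base_freq
    fundamental = base_freq * (1.0 + h / 360.0)
    # Saturation in [0, 1] mapped to 1..8 harmonic partials
    harmonics = 1 + int(s * 7)
    # Intensity in [0, 1] mapped to an octave multiplier 2**0 .. 2**3
    octave = int(i * 3)
    return fundamental * (2 ** octave), harmonics

freq, n_harm = hsi_to_sound_params(h=180.0, s=0.5, i=0.4)
```

In the paper these parameters are derived from HSI histograms of the whole image and rendered to a WAV file with Csound rather than computed per pixel.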

Real-time Human Pose Estimation using RGB-D images and Deep Learning

  • Rim, Beanbonyka;Sung, Nak-Jun;Ma, Jun;Choi, Yoo-Joo;Hong, Min
    • Journal of Internet Computing and Services
    • /
    • v.21 no.3
    • /
    • pp.113-121
    • /
    • 2020
  • Human Pose Estimation (HPE), which localizes the human body joints, has high potential for high-level applications in the field of computer vision. The main challenges of real-time HPE are occlusion, illumination change, and diversity of pose appearance. A single RGB image can be fed into an HPE framework to reduce the computation cost, using a depth-independent device such as a common camera, webcam, or phone camera. However, HPE based on a single RGB image cannot overcome the above challenges due to the inherent limitations of color and texture. On the other hand, depth information fed into an HPE framework, which detects the human body parts in 3D coordinates, can be useful in addressing these challenges; however, depth-based HPE requires a depth-dependent device, which has space constraints and is costly. In particular, the result of depth-based HPE is less reliable due to the requirement of pose initialization and less stable frame tracking. Therefore, this paper proposes a new HPE method that is robust in estimating self-occlusion. Many human parts can be occluded by other body parts, but this paper focuses only on head self-occlusion. The new method combines an RGB-image-based HPE framework with a depth-information-based HPE framework. We evaluated the performance of the proposed method with the COCO Object Keypoint Similarity library. By taking advantage of both RGB-image-based and depth-information-based HPE, our RGB-D-based method achieved an mAP of 0.903 and an mAR of 0.938, proving that it outperforms both RGB-based and depth-based HPE.