• Title/Summary/Keyword: RGB Values

223 search results (processing time: 0.022 seconds)

Use of Unmanned Aerial Vehicle for Multi-temporal Monitoring of Soybean Vegetation Fraction

  • Yun, Hee Sup;Park, Soo Hyun;Kim, Hak-Jin;Lee, Wonsuk Daniel;Lee, Kyung Do;Hong, Suk Young;Jung, Gun Ho
    • Journal of Biosystems Engineering
    • /
    • v.41 no.2
    • /
    • pp.126-137
    • /
    • 2016
  • Purpose: The overall objective of this study was to evaluate the vegetation fraction of soybeans grown under different cropping conditions, using an unmanned aerial vehicle (UAV) equipped with a red, green, and blue (RGB) camera. Methods: Test plots were prepared under different cropping treatments, i.e., soybean single-cropping and soybean-barley cover cropping, each with and without herbicide application. The UAV flights were manually controlled using a remote flight controller on the ground, with 2.4 GHz radio-frequency communication. For image pre-processing, the acquired images were pre-treated and georeferenced using a fisheye-distortion removal function, and ground control points were collected using Google Maps. Tarpaulin panels of different colors were used to calibrate the multi-temporal images by converting the RGB digital number values into RGB reflectance, using a linear regression method. Excess Green (ExG) vegetation indices for the test plots were compared with the M-statistic method to quantitatively evaluate the greenness of soybean fields under the different cropping systems. Results: The reflectance calibration method used in the study showed high coefficients of determination, ranging from 0.8 to 0.9, indicating the feasibility of linear regression fitting for monitoring multi-temporal RGB images of soybean fields. As expected, the ExG vegetation indices changed with soybean growth stage, showing clear differences among the test plots with different cropping treatments in the early season (< 60 days after sowing, DAS). With the M-statistic method, the test plots under different treatments could be discriminated in the early season (< 41 DAS), with values of M > 1. Conclusion: Multi-temporal images obtained with a UAV and an RGB camera can therefore be used to quantify overall vegetation fraction and crop growth status, and this information could help determine proper treatments for managing the vegetation fraction.
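The panel-based reflectance calibration and the Excess Green index described above can be sketched as follows; the panel digital numbers and reflectances in the test below are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def calibrate(dn, panel_dn, panel_refl):
    """Fit a line mapping digital numbers to reflectance from panel readings."""
    slope, intercept = np.polyfit(panel_dn, panel_refl, 1)
    return slope * dn + intercept

def excess_green(rgb):
    """ExG = 2g - r - b on chromatic coordinates; rgb is an (H, W, 3) array."""
    total = rgb.sum(axis=2) + 1e-9  # guard against all-zero (black) pixels
    r, g, b = (rgb[..., i] / total for i in range(3))
    return 2 * g - r - b
```

On chromatic coordinates a neutral gray pixel yields ExG = 0 and a pure-green pixel yields the maximum of 2, so thresholding ExG separates green vegetation from soil background.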

Improving Precision of the Exterior Orientation and the Pixel Position of a Multispectral Camera onboard a Drone through the Simultaneous Utilization of a High Resolution Camera (고해상도 카메라와의 동시 운영을 통한 드론 다분광카메라의 외부표정 및 영상 위치 정밀도 개선 연구)

  • Baek, Seungil;Byun, Minsu;Kim, Wonkook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.39 no.6
    • /
    • pp.541-548
    • /
    • 2021
  • Recently, multispectral cameras have been actively utilized in various application fields such as agriculture, forest management, and coastal environment monitoring, particularly onboard UAVs. The resulting multispectral images are typically georeferenced using the onboard GPS (Global Positioning System) and IMU (Inertial Measurement Unit) for the positional information of the pixels, or can be integrated with ground control points (GCPs) that are directly measured on the ground. However, due to the high cost of establishing GCPs prior to georeferencing, or for inaccessible areas, it is often necessary to derive the positions without such reference information. This study aims to provide a means of improving the georeferencing performance of multispectral camera images without such ground reference points, using instead a high-resolution RGB camera operated simultaneously onboard. The exterior orientation parameters of the drone cameras are first estimated through bundle adjustment and compared with reference values derived with the GCPs. The results showed that incorporating the images from the high-resolution RGB camera greatly improved both the exterior orientation estimation and the georeferencing of the multispectral camera. Additionally, an evaluation of the direction estimation from a ground point to the sensor showed that including the RGB images can reduce the angular errors by roughly one order of magnitude.

METALLICITY OF GLOBULAR CLUSTER NGC 5053 FROM VI CCD PHOTOMETRY

  • Sohn, Young-Jong
    • Journal of Astronomy and Space Sciences
    • /
    • v.18 no.1
    • /
    • pp.7-14
    • /
    • 2001
  • The red giant branch shape and the luminosity of the horizontal branch on the (V-I)-V CMD are used to derive the metallicity of the globular cluster NGC 5053. The metallicities of NGC 5053 derived by the SMR method ([Fe/H]=-2.62$\pm$0.07) and by the relation between [Fe/H] and $(V-I)_{0.g}$ ([Fe/H]=-2.50) are in good agreement with previously determined values. This result confirms that the morphologies of the RGB and HB on (V-I)-V CMDs can be good indirect photometric metallicity indicators for galactic globular clusters.


An Effective Mixed Steganography Based on LSB and LDR (LSB와 LDR을 기반한 효과적인 혼합 스테가노그래피)

  • Ji, Seon-Su
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.12 no.6
    • /
    • pp.561-566
    • /
    • 2019
  • In the Internet space, integrity and security must be maintained for secure and confidential communication, which ensures reliability between sender and receiver. Robustness against external attacks is an important requirement; for this purpose, encryption and steganography methods are used. Steganography is a method of hiding confidential information without making statistically significant changes to digital media. I propose a method of transforming Hangul jamo, consisting of choseong, jungseong, and jongseong, and inserting them into the RGB pixel values of the cover image. To improve security, a new blending method was used to hide the altered information in the lowest-order region, applying a mixture of LSB and LDR techniques. PSNR was calculated to assess image quality. The PSNR of the proposed method is 43.225 dB, which satisfies the minimum acceptable level.
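The paper's mixed LSB/LDR scheme and the Hangul-jamo transformation are not reproduced here, but the basic LSB embedding step it builds on can be sketched as plain bit replacement in the lowest bit of each RGB byte:

```python
import numpy as np

def embed_lsb(pixels, bits):
    """Write each bit into the least significant bit of successive RGB bytes."""
    flat = pixels.flatten().astype(np.uint8)  # flatten() copies, cover is kept
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | (bit & 1)
    return flat.reshape(pixels.shape)

def extract_lsb(pixels, n_bits):
    """Read the hidden bits back out of the first n_bits bytes."""
    return [int(v) & 1 for v in pixels.flatten()[:n_bits]]
```

Each cover byte changes by at most 1, which is why LSB embedding keeps the PSNR of the stego image high.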

Estimation of Fractional Vegetation Cover in Sand Dunes Using Multi-spectral Images from Fixed-wing UAV

  • Choi, Seok Keun;Lee, Soung Ki;Jung, Sung Heuk;Choi, Jae Wan;Choi, Do Yoen;Chun, Sook Jin
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.34 no.4
    • /
    • pp.431-441
    • /
    • 2016
  • Since the use of a UAV (Unmanned Aerial Vehicle) is convenient for acquiring data over broad or inaccessible regions, UAVs are nowadays used to establish spatial information for various fields, such as the environment, ecosystems, forests, and military purposes. In this study, a process for estimating FVC (Fractional Vegetation Cover) based on multi-spectral UAV imagery is suggested to overcome the limitations of conventional methods; the FVC map is generated from multi-spectral imaging. First, two classification results were obtained based on RF (Random Forest), one using RGB images alone and one using NDVI (Normalized Difference Vegetation Index) together with RGB images. The result maps were then reclassified into vegetation and non-vegetation. Finally, an RF-based FVC map was generated by pixel calculation, and a GI (Gutman and Ignatov) model-based FVC map was derived indirectly using fixed parameters. The method that adds NDVI shows relatively higher accuracy than that using only RGB, and in particular, the GI model shows a lower RMSE (Root Mean Square Error), 0.182, than RF. In this regard, the availability of the GI model, which uses only NDVI values, is higher than that of RF, whose accuracy varies with the classification results. Our results showed that the GI model ensures the quality of the FVC if the NDVI is maintained at a uniform level. This can be easily achieved by using a UAV, which can provide vegetation data to improve the estimation of FVC.
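A minimal sketch of the NDVI and the Gutman-Ignatov (GI) linear-mixture step described above; the bare-soil and full-vegetation NDVI endpoints below are illustrative assumptions, not the fixed parameters used in the paper.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red + 1e-9)

def fvc_gi(ndvi_map, ndvi_soil=0.05, ndvi_veg=0.85):
    """GI model: FVC as the linear mixing fraction between soil and vegetation."""
    fvc = (ndvi_map - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(fvc, 0.0, 1.0)  # clamp pixels outside the two endpoints
```

Because the GI model is a fixed linear rescaling of NDVI, it needs no per-scene classifier, which matches the paper's observation that its accuracy does not depend on classification results.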

Development of Deep Learning AI Model and RGB Imagery Analysis Using Pre-sieved Soil (입경 분류된 토양의 RGB 영상 분석 및 딥러닝 기법을 활용한 AI 모델 개발)

  • Kim, Dongseok;Song, Jisu;Jeong, Eunji;Hwang, Hyunjung;Park, Jaesung
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.66 no.4
    • /
    • pp.27-39
    • /
    • 2024
  • Soil texture is determined by the proportions of sand, silt, and clay within the soil, which influence characteristics such as porosity, water retention capacity, electrical conductivity (EC), and pH. Traditional classification of soil texture requires significant sample preparation, including oven drying to remove organic matter and moisture, a process that is both time-consuming and costly. This study explores an alternative method by developing an AI model capable of predicting soil texture from images of pre-sorted soil samples using computer vision and deep learning technologies. Soil samples collected from agricultural fields were pre-processed using sieve analysis, and images of each sample were acquired in a controlled studio environment using a smartphone camera. Color distribution ratios based on the RGB values of the images were analyzed using the OpenCV library in Python. A convolutional neural network (CNN) model, built on PyTorch, was enhanced using digital image processing (DIP) techniques and then trained under nine distinct conditions to evaluate its robustness and accuracy. The model achieved an accuracy of over 80% in classifying the images of pre-sorted soil samples, as validated by the confusion matrix and F1-score measurements, demonstrating its potential to replace traditional experimental methods for soil texture classification. By utilizing an easily accessible tool, significant time and cost savings can be expected compared to traditional methods.
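The colour-distribution feature described above can be sketched with plain NumPy (the paper uses the OpenCV library); the function below is an illustrative stand-in, not the authors' exact pipeline.

```python
import numpy as np

def channel_ratios(img):
    """Fraction of total intensity contributed by each of R, G, B.

    img: uint8 or float array of shape (H, W, 3).
    """
    means = img.reshape(-1, 3).mean(axis=0).astype(float)
    return means / means.sum()
```

Such per-channel ratios are a simple, illumination-tolerant colour descriptor that can be fed to a classifier alongside the raw image.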

Detecting Boundaries between Different Color Regions in Color Codes

  • Kwon B. H.;Yoo H. J.;Kim T. W.
    • Proceedings of the IEEK Conference
    • /
    • 2004.08c
    • /
    • pp.846-849
    • /
    • 2004
  • Compared to the bar code, which is widely used for commercial product management, the color code is advantageous in both appearance and the number of combinations, and it has application areas complementary to RFID's. However, due to severe distortion of the color component values, which easily exceeds 50% of the scale, color codes have had difficulty finding applications in industry. To improve the accuracy of color code recognition, it is better to statistically process an entire color region and then determine its color than to process a few samples selected from the region. For this purpose, we suggest a technique to detect edges between color regions, which is indispensable for accurate segmentation of color regions. We first transformed the RGB color image into the HSI and YIQ color models, and then extracted the I- and Y-components from them, respectively. We then performed Canny edge detection on each component image. Each edge image usually had some edges missing; however, since the resulting edge images were complementary, we could obtain an optimal edge image by combining them.
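The component extraction and edge combination can be sketched as below; a simple gradient threshold stands in for the Canny detector used in the paper. Note how a red/blue boundary of equal intensity is invisible to the HSI I-component but caught by the YIQ Y-component, which is exactly the complementarity the authors exploit.

```python
import numpy as np

def y_component(rgb):
    """Luma (Y) of the YIQ model."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def i_component(rgb):
    """Intensity (I) of the HSI model: the plain channel mean."""
    return rgb.mean(axis=2)

def edge_map(gray, thresh=0.1):
    """Crude gradient-magnitude edges (a stand-in for Canny)."""
    gx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))
    gy = np.abs(np.diff(gray, axis=0, prepend=gray[:1, :]))
    return (gx + gy) > thresh

def combined_edges(rgb):
    """OR the two complementary edge maps into one."""
    return edge_map(y_component(rgb)) | edge_map(i_component(rgb))
```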


Improvement of Face Components Detection using Neck Removal (목 부분의 제거를 통한 얼굴 검출 향상 기법)

  • Yoon, Ga-Rim;Yoon, Yo-Sup;Kim, Young-Bong
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2004.11a
    • /
    • pp.321-326
    • /
    • 2004
  • Many researchers have studied texturing a 3D face model with front and side pictures of an ordinary person. It is very important to exactly detect the positions of the eyes, nose, and mouth from the side pictures. Previous approaches first found the position of an eye, the nose, or the mouth and then extracted the other face components using their positional correlation. The detection results greatly depend on the correct extraction of the neck from the images. Therefore, we present a new algorithm that removes the neck completely and thus improves the detection rates of face components. To do this, we use the RGB values and their differences.


Color Image Encryption Technique Using Quad-tree Decomposition Method (쿼드트리 분할 기술을 이용한 컬러 영상 암호화 기술)

  • Choi, Hyunjun
    • Journal of Advanced Navigation Technology
    • /
    • v.20 no.6
    • /
    • pp.625-630
    • /
    • 2016
  • Recently, various types of image contents are being produced, and interest in copyright protection technology is increasing. In this paper, we propose an image encryption technique for color images. This technique divides the image into its RGB color components and then performs quad-tree decomposition based on the edges of the image. After the quad-tree partitioning, encryption is performed on the selected blocks. Encryption is performed on the color components to measure encryption efficiency, and the efficiency is also measured after reconstitution into a color image. Encryption efficiency is assessed by visual inspection and by an objective image quality evaluation method. The PSNR values were measured as 7-10 dB for the color difference components and 16-19 dB for the color images. The proposed image encryption technique can be used to protect the copyright of various digital image contents in the future.
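The objective quality measure quoted above is the standard PSNR; a minimal implementation (8-bit peak assumed) is:

```python
import numpy as np

def psnr(original, processed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    mse = np.mean((original.astype(float) - processed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

For encryption, low PSNR values such as the 7-19 dB reported above are desirable, since they indicate the scrambled image differs strongly from the original; in steganography the goal is the opposite, a high PSNR meaning imperceptible change.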

Development of a PC-based 3-D Seismic Visualization Software (PC 기반의 3차원 탄성파 자료 시각화 소프트웨어 개발 연구)

  • Kim, Hyeon-Gyu;Lee, Doo-Sung
    • Geophysics and Geophysical Exploration
    • /
    • v.6 no.1
    • /
    • pp.35-39
    • /
    • 2003
  • A software to visualize and analyse 3-D seismic data is developed using OpenGL, one of the most popular 3-D graphic library, under the PC and Windows platform. The software can visualize the data as volume and slices, whose color distribution is specified by a special dialog box that can pick a color in RGB or HSV format. The dialog box can also designate opacity values so that several 3-D objects can be displayed superimposed each other. Horizon picking is implemented very easily with this software thanks to the guided picking method. The picked points from a horizon will compose a set of points, mesh, and a surface, which can be viewed and analysed in three dimensions.