• Title/Summary/Keyword: RGB values


A fast single image dehazing method based on statistical analysis

  • Bui, Minh Trung; Bang, Seongbae; Kim, Wonha
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.06a / pp.116-119 / 2018
  • In this paper, we propose a new single-image dehazing method. The proposed method constructs color ellipsoids that are statistically fitted to haze pixel clusters in RGB space and then calculates the transmission values through color ellipsoid geometry. The transmission values generated by the proposed method maximize the contrast of dehazed pixels while preventing over-saturated pixels. The values are also statistically robust because they are calculated from the averages of the haze pixel values. Furthermore, rather than applying a highly complex refinement process to reduce halo or unnatural artifacts, we embed a fuzzy segmentation process into the construction of the color ellipsoid, so that the proposed method executes the transmission calculation and the refinement process simultaneously. The results of an experimental performance evaluation verify that, compared to prevailing dehazing methods, the proposed method performs effectively across a wide range of haze and noise levels without causing any visible artifacts. Moreover, the relatively low complexity of the proposed method will facilitate its use in real-time applications.
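The haze imaging model behind such methods can be sketched in a few lines. The snippet below is a minimal illustration assuming the standard model I = J·t + A·(1−t); it replaces the paper's color-ellipsoid geometry with a plain patch-average statistic, so it only mimics the average-based robustness the abstract describes, not the actual method.

```python
import numpy as np

def estimate_transmission(image, atmosphere, omega=0.95, patch=7):
    """Estimate per-pixel transmission from the haze model I = J*t + A*(1-t).

    Simplified sketch: instead of the paper's color-ellipsoid geometry,
    the transmission is taken from local patch averages of the
    atmosphere-normalized image (an average-based statistic).
    """
    norm = image / atmosphere            # normalize each channel by airlight
    dark = norm.min(axis=2)              # per-pixel minimum over RGB
    h, w = dark.shape
    pad = patch // 2
    padded = np.pad(dark, pad, mode='edge')
    local_mean = np.empty_like(dark)
    for i in range(h):
        for j in range(w):
            local_mean[i, j] = padded[i:i + patch, j:j + patch].mean()
    return np.clip(1.0 - omega * local_mean, 0.1, 1.0)

def dehaze(image, atmosphere, t):
    # Invert the haze model: J = (I - A) / t + A
    return (image - atmosphere) / t[..., None] + atmosphere
```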


Development of the Weather Detection Algorithm using CCTV Images and Temperature, Humidity (CCTV 영상과 온·습도 정보를 이용한 기후검출 알고리즘 개발)

  • Park, Beung-Raul; Lim, Jong-Tea
    • Journal of Korea Multimedia Society / v.10 no.2 / pp.209-217 / 2007
  • This paper proposes a weather-information detection scheme, part of a CCTV-based weather detection system, that uses CCTV images together with temperature and humidity data. The previous partial weather detection system acquired weather information from road images: the contrast and RGB values of clear reference images were obtained, and this information was used to classify input images as cloudy, rainy, snowy, or foggy. Snow and fog images were further compared using difference images and binary images to obtain more accurate information. While environmental sensor systems are currently in use, we suggest a new weather detection algorithm that extracts weather information from CCTV images. Our algorithm is designed simply and systematically to detect and separate the characteristic features of CCTV images, additionally using temperature and humidity information. It is simpler to implement than the previous system's database approach, which carried high time and space overhead; it can be implemented at low cost and put to use in real work right away. It can also detect exact weather information by adding supplementary data, including temperature, humidity, date, and time. Finally, this paper shows the usefulness of the proposed algorithm.
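The difference-image and binarization step the abstract mentions can be sketched as follows; the threshold value and the decision rule combining temperature and humidity are illustrative assumptions, not the paper's values.

```python
import numpy as np

def weather_difference_mask(clear_gray, current_gray, threshold=30):
    """Binary mask of pixels that changed versus a clear-weather reference.

    Sketch of the difference-image + binarization step: absolute
    difference against the clear reference, thresholded to 0/1.
    """
    diff = np.abs(current_gray.astype(np.int16) - clear_gray.astype(np.int16))
    return (diff > threshold).astype(np.uint8)

def classify_weather(change_ratio, temperature_c, humidity_pct):
    """Toy decision rule combining image change with temperature/humidity.

    Illustrative only: the paper's actual thresholds are not given.
    """
    if change_ratio < 0.05:
        return 'clear'
    if humidity_pct > 90 and change_ratio > 0.3:
        return 'fog'
    if temperature_c <= 0:
        return 'snow'
    return 'rain'
```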


Implementation of ARM based Embedded System for Muscular Sense into both Color and Sound Conversion (근감각-색·음 변환을 위한 ARM 기반 임베디드시스템의 구현)

  • Kim, Sung-Ill
    • The Journal of the Korea Contents Association / v.16 no.8 / pp.427-434 / 2016
  • This paper focuses on real-time hardware processing by implementing an ARM Cortex-M4 based embedded system, using an algorithm that converts the muscular sense into both visual and auditory elements by recognizing body rotations, directional changes, and motion amounts. As the input method for the muscular sense, an AHRS (Attitude Heading Reference System) was used to acquire roll, pitch, and yaw values in real time. These three input values were converted into the three elements of the HSI color model: intensity, hue, and saturation, respectively. The final color signals were acquired by converting HSI into the RGB color model. In addition, the three input values of the muscular sense were converted into three elements of sound, octave, scale, and velocity, which were synthesized into an output sound using MIDI (Musical Instrument Digital Interface). The analysis of both the output color and sound signals revealed that the input signals of the muscular sense were correctly converted into both color and sound in real time by the proposed conversion method.
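The HSI-to-RGB step can be illustrated with the standard sector-based conversion formulas; the mapping from roll/pitch/yaw to (intensity, hue, saturation) is the paper's own design and is not reproduced here.

```python
import math

def hsi_to_rgb(h_deg, s, i):
    """Convert HSI (hue in degrees, saturation and intensity in [0, 1])
    to RGB using the standard three-sector formulas."""
    h = math.radians(h_deg % 360)
    if h < math.radians(120):          # RG sector
        b = i * (1 - s)
        r = i * (1 + s * math.cos(h) / math.cos(math.radians(60) - h))
        g = 3 * i - (r + b)
    elif h < math.radians(240):        # GB sector
        h -= math.radians(120)
        r = i * (1 - s)
        g = i * (1 + s * math.cos(h) / math.cos(math.radians(60) - h))
        b = 3 * i - (r + g)
    else:                              # BR sector
        h -= math.radians(240)
        g = i * (1 - s)
        b = i * (1 + s * math.cos(h) / math.cos(math.radians(60) - h))
        r = 3 * i - (g + b)
    return tuple(max(0.0, min(1.0, c)) for c in (r, g, b))
```

For example, fully saturated hue 0 at intensity 1/3 yields pure red, and zero saturation yields a neutral gray at the given intensity.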

2D to 3D Anaglyph Image Conversion using Linear Curve in HTML5 (HTML5에서 직선의 기울기를 이용한 2D to 3D 입체 이미지 변환)

  • Park, Young Soo
    • Journal of Digital Convergence / v.12 no.12 / pp.521-528 / 2014
  • In this paper, we propose a method for converting a 2D image into a 3D image using linear curves in HTML5. We use only one image, without any additional depth-map information, to create the 3D image. We filter the original image to extract the RGB colors for the left and right eyes. After selecting the ready-made control points of the linear curves, users can set up and modify the depth values. Based on the depth values the end users select, an anaglyph 3D image is automatically generated from the whole and partial depth information. As all of this work has been designed and implemented in a Web environment using HTML5, it is very easy and convenient to use, and end users can create any 3D image they want.
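The channel filtering behind a red/cyan anaglyph can be sketched as follows; reducing the per-region depth from the paper's linear curves to a single uniform pixel shift is a simplifying assumption.

```python
def make_anaglyph(image, shift):
    """Red/cyan anaglyph from a single RGB image (list of rows of (r, g, b)).

    Sketch: the left-eye view keeps the red channel, the right-eye view
    (shifted horizontally by `shift` pixels to fake depth) supplies the
    green and blue channels.
    """
    h = len(image)
    w = len(image[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            r = image[y][x][0]                  # left eye: red channel
            xs = min(w - 1, x + shift)          # right eye: shifted pixel
            g, b = image[y][xs][1], image[y][xs][2]
            row.append((r, g, b))
        out.append(row)
    return out
```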

2D to 3D Anaglyph Image Conversion using Quadratic & Cubic Bézier Curve in HTML5 (HTML5에서 Quadratic & Cubic Bézier 곡선을 이용한 2D to 3D 입체 이미지 변환)

  • Park, Young Soo
    • Journal of Digital Convergence / v.12 no.12 / pp.553-560 / 2014
  • In this paper, we propose a method to convert a 2D image into a 3D anaglyph using quadratic and cubic Bézier curves in HTML5. To convert the 2D image into a 3D anaglyph image, we filter the original image to extract the RGB color values and create two images, one for the left eye and one for the right. Users set up the depth values of the image through control points on the quadratic and cubic Bézier curves. We process the depth values of the 2D image based on these control points, so the 3D conversion reflects the control-point values the users select. All of this work has been designed and implemented in a Web environment in HTML5, so anyone who wants to create 3D images can do so easily and conveniently.
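The depth curves themselves are ordinary Bézier polynomials. A minimal sketch, with the 0-255 depth range as an assumption:

```python
def quadratic_bezier(p0, p1, p2, t):
    """Quadratic Bezier curve at parameter t in [0, 1]."""
    u = 1 - t
    return u * u * p0 + 2 * u * t * p1 + t * t * p2

def cubic_bezier(p0, p1, p2, p3, t):
    """Cubic Bezier curve at parameter t in [0, 1]."""
    u = 1 - t
    return u**3 * p0 + 3 * u * u * t * p1 + 3 * u * t * t * p2 + t**3 * p3

def depth_profile(control_mid, width, maximum=255):
    """Toy depth-map row: map each column x to a depth value via a
    quadratic Bezier whose middle control point the user would drag.
    The 0..255 depth range is an assumption, not the paper's value."""
    return [quadratic_bezier(0.0, control_mid, float(maximum), x / (width - 1))
            for x in range(width)]
```

Dragging `control_mid` up or down bends the depth profile toward or away from the viewer, which is the interaction the abstract describes.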

Face Recognition System for Multimedia Application (멀티미디어 응용을 위한 얼굴 인식시스템)

  • Park, Sang-Gyou; Seong, Hyeon-Kyeong; Han, Young-Hwan
    • Journal of IKEEE / v.6 no.2 s.11 / pp.152-160 / 2002
  • This paper presents a face recognition system for multimedia applications, focused on improving the recognition rate and reducing the processing time of face recognition. Applying the typical RGB color system without modification removes the time required for color-space transformation, while a neural network and an algorithm based on facial characteristics improve the recognition rate. After mosaicking an image, face-color blocks are selected through color analysis of the mosaic blocks, and facial characteristics are used to remove falsely detected face-color candidate blocks. Finally, four feature values are obtained from the face-color blocks and fed to a neural network trained with the back-propagation algorithm; the output values decide whether a region is a genuine face field. The realized system showed a 90% face recognition rate with less than 0.1 second of processing time, which can be regarded as sufficient to locate face blocks in dynamic images for multimedia applications.
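The mosaicking and face-color block selection can be sketched directly in RGB; the skin-color thresholds below are a common heuristic, not the paper's values, and the neural-network stage is omitted.

```python
def is_skin_rgb(r, g, b):
    """Simple skin-color test directly in RGB (no color-space transform),
    mirroring the unmodified-RGB approach. Thresholds are a common
    heuristic, assumed for illustration."""
    return (r > 95 and g > 40 and b > 20 and
            max(r, g, b) - min(r, g, b) > 15 and
            abs(r - g) > 15 and r > g and r > b)

def face_candidate_blocks(image, block=8):
    """Mosaic the image into block x block cells and keep cells whose
    mean color passes the skin test. Returns (row, col) of kept cells."""
    h, w = len(image), len(image[0])
    cells = []
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            pixels = [image[y][x]
                      for y in range(by, by + block)
                      for x in range(bx, bx + block)]
            n = len(pixels)
            r = sum(p[0] for p in pixels) / n
            g = sum(p[1] for p in pixels) / n
            b = sum(p[2] for p in pixels) / n
            if is_skin_rgb(r, g, b):
                cells.append((by, bx))
    return cells
```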


Development of a soil total carbon prediction model using a multiple regression analysis method

  • Jun-Hyuk, Yoo; Jwa-Kyoung, Sung; Deogratius, Luyima; Taek-Keun, Oh; Jaesung, Cho
    • Korean Journal of Agricultural Science / v.48 no.4 / pp.891-897 / 2021
  • There is a need for a technology that can quickly and accurately analyze soil carbon contents. Existing soil carbon analysis methods are cumbersome in terms of professional manpower requirements, time, and cost. It is against this background that the present study leverages the soil physical properties of color and water content to develop a model capable of predicting the carbon content of soil samples. To predict the total carbon content of soil, the RGB values, water content of the soil, and lux levels were analyzed and used as statistical data. However, when R, G, and B, which are highly correlated, were all included in a multiple regression analysis as independent variables, a high level of multicollinearity was noted, and G was thus excluded from the model. The estimates showed that the coefficients for all independent variables were statistically significant at a significance level of 1%. The elasticity values of R and B with respect to the soil carbon content, which are of major interest in this study, were -2.90 and 1.47, respectively, showing that a 1% increase in the R value was correlated with a 2.90% decrease in the carbon content, whereas a 1% increase in the B value corresponded to a 1.47% increase in the carbon content. The coefficient of determination (R²), root mean square error (RMSE), and mean absolute percentage error (MAPE) were used for regression verification, and the calibration samples showed higher accuracy than the validation samples in terms of R² and MAPE.
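The regression step can be reproduced as an ordinary least-squares fit; the variable layout below is illustrative, and the elasticity-at-the-means formula matches the percent-change interpretation used in the abstract.

```python
import numpy as np

def fit_carbon_model(R, B, moisture, lux, carbon):
    """Ordinary least-squares fit of soil total carbon on R, B, moisture,
    and lux (G dropped for multicollinearity, as in the paper).
    Returns the coefficient vector [intercept, bR, bB, bMoist, bLux]."""
    X = np.column_stack([np.ones(len(R)), R, B, moisture, lux])
    beta, *_ = np.linalg.lstsq(X, np.asarray(carbon), rcond=None)
    return beta

def elasticity(beta_k, x_mean, y_mean):
    """Elasticity at the means: percent change in carbon per 1% change
    in the regressor."""
    return beta_k * x_mean / y_mean
```

On synthetic data generated from known coefficients, the fit recovers those coefficients, which is a quick sanity check before applying the model to real soil measurements.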

Digital Light Color Control System of LED Lamp using Inverse Tri-Stimulus Algorithm (역 삼자극치 알고리즘을 이용한 LED램프 디지털 광색제어시스템)

  • Kang, Shin-Ho; Lee, Jeong-Min; Ryeom, Jeong-Duk
    • Journal of the Korean Institute of Illuminating and Electrical Installation Engineers / v.25 no.1 / pp.1-8 / 2011
  • In this paper, a method to calculate the chromaticity coordinate from the spectral power distribution of an LED is presented. An inverse tri-stimulus algorithm is also proposed to find the mixed luminance of red, green, and blue LEDs from a targeted luminance and chromaticity coordinate. In addition, a digital light color control system for LED lamps applying this algorithm has been developed. In experiments, each chromaticity coordinate of the red, green, and blue LEDs calculated by this algorithm has a relative percentage error of a few percent with respect to the measured values. A digital code is derived from the inverse tri-stimulus algorithm, and the measured luminance and chromaticity coordinate of an LED lamp digitally controlled by this code also show a relative percentage error within a few percent of the targeted luminance and chromaticity coordinate.
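The inverse tri-stimulus idea reduces to solving a 3x3 linear system in CIE xyY space; a sketch under the assumption of ideal additive mixing between the LEDs:

```python
import numpy as np

def xyY_to_XYZ(x, y, Y):
    """CIE xyY -> XYZ tristimulus values."""
    return np.array([x * Y / y, Y, (1 - x - y) * Y / y])

def led_luminances(target_xyY, led_xy):
    """Inverse tri-stimulus sketch: solve for the luminances of the R, G, B
    LEDs that mix additively to the target chromaticity and luminance.
    `led_xy` is a list of (x, y) chromaticity pairs for the three LEDs."""
    # Each column: XYZ contribution of one LED per unit luminance
    M = np.column_stack([xyY_to_XYZ(x, y, 1.0) for x, y in led_xy])
    target = xyY_to_XYZ(*target_xyY)
    return np.linalg.solve(M, target)
```

A round-trip check (mix known LED luminances, then invert) confirms the linear-system formulation; the LED chromaticities used below are typical assumed values, not the paper's measurements.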

Use of Unmanned Aerial Vehicle for Multi-temporal Monitoring of Soybean Vegetation Fraction

  • Yun, Hee Sup; Park, Soo Hyun; Kim, Hak-Jin; Lee, Wonsuk Daniel; Lee, Kyung Do; Hong, Suk Young; Jung, Gun Ho
    • Journal of Biosystems Engineering / v.41 no.2 / pp.126-137 / 2016
  • Purpose: The overall objective of this study was to evaluate the vegetation fraction of soybeans, grown under different cropping conditions using an unmanned aerial vehicle (UAV) equipped with a red, green, and blue (RGB) camera. Methods: Test plots were prepared based on different cropping treatments, i.e., soybean single-cropping, with and without herbicide application and soybean and barley-cover cropping, with and without herbicide application. The UAV flights were manually controlled using a remote flight controller on the ground, with 2.4 GHz radio frequency communication. For image pre-processing, the acquired images were pre-treated and georeferenced using a fisheye distortion removal function, and ground control points were collected using Google Maps. Tarpaulin panels of different colors were used to calibrate the multi-temporal images by converting the RGB digital number values into the RGB reflectance spectrum, utilizing a linear regression method. Excess Green (ExG) vegetation indices for each of the test plots were compared with the M-statistic method in order to quantitatively evaluate the greenness of soybean fields under different cropping systems. Results: The reflectance calibration methods used in the study showed high coefficients of determination, ranging from 0.8 to 0.9, indicating the feasibility of a linear regression fitting method for monitoring multi-temporal RGB images of soybean fields. As expected, the ExG vegetation indices changed according to different soybean growth stages, showing clear differences among the test plots with different cropping treatments in the early season of < 60 days after sowing (DAS). With the M-statistic method, the test plots under different treatments could be discriminated in the early seasons of <41 DAS, showing a value of M > 1. 
Conclusion: Therefore, multi-temporal images obtained with a UAV and an RGB camera can be applied to quantify overall vegetation fractions and crop growth status, and this information can contribute to determining proper treatments for the vegetation fraction.
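The ExG index and M-statistic used above are straightforward to compute; a sketch, assuming ExG is taken on chromatic (sum-normalized) coordinates:

```python
import numpy as np

def excess_green(rgb):
    """Excess Green index ExG = 2g - r - b on chromatic coordinates
    r = R/(R+G+B), g = G/(R+G+B), b = B/(R+G+B)."""
    rgb = np.asarray(rgb, dtype=float)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0              # avoid division by zero
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    return 2 * g - r - b

def m_statistic(exg_a, exg_b):
    """M-statistic for class separability: absolute mean difference over
    the sum of standard deviations; M > 1 indicates good discrimination."""
    return abs(exg_a.mean() - exg_b.mean()) / (exg_a.std() + exg_b.std())
```

A pure green pixel gives ExG = 2 and a gray pixel gives 0, so vegetated plots separate cleanly from soil background under this index.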

Improving Precision of the Exterior Orientation and the Pixel Position of a Multispectral Camera onboard a Drone through the Simultaneous Utilization of a High Resolution Camera (고해상도 카메라와의 동시 운영을 통한 드론 다분광카메라의 외부표정 및 영상 위치 정밀도 개선 연구)

  • Baek, Seungil; Byun, Minsu; Kim, Wonkook
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography / v.39 no.6 / pp.541-548 / 2021
  • Recently, multispectral cameras have been actively utilized in various application fields such as agriculture, forest management, and coastal environment monitoring, particularly onboard UAVs. The resulting multispectral images are typically georeferenced based primarily on the onboard GPS (Global Positioning System) and IMU (Inertial Measurement Unit) for accurate positional information of the pixels, or can be integrated with ground control points (GCPs) that are directly measured on the ground. However, due to the high cost of establishing GCPs prior to georeferencing, or for inaccessible areas, it is often necessary to derive the positions without such reference information. This study aims to provide a means to improve the georeferencing performance of multispectral camera images without such ground reference points, using instead a high-resolution RGB camera operated simultaneously onboard. The exterior orientation parameters of the drone camera are first estimated through bundle adjustment and compared with reference values derived from the GCPs. The results showed that incorporating the images from the high-resolution RGB camera greatly improved both the exterior orientation estimation and the georeferencing of the multispectral camera. Additionally, an evaluation of the direction estimation from a ground point to the sensor showed that including RGB images can reduce the angle errors by one order of magnitude.
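The quantity that a bundle adjustment minimizes, the reprojection error under the exterior orientation (R, t), can be sketched with a pinhole model; lens distortion and the principal point offset are omitted as simplifying assumptions.

```python
import numpy as np

def project(point_world, R, t, f):
    """Pinhole projection of a world point with exterior orientation (R, t)
    and focal length f (principal point at origin, no distortion)."""
    pc = R @ point_world + t          # world -> camera frame
    return f * pc[:2] / pc[2]         # perspective division

def mean_reprojection_error(points3d, points2d, R, t, f):
    """RMS residual between observed image points and their reprojections;
    this is the quantity a bundle adjustment minimizes over R, t (and,
    in a full adjustment, the 3D points as well)."""
    residuals = [project(p, R, t, f) - q for p, q in zip(points3d, points2d)]
    return float(np.sqrt(np.mean([r @ r for r in residuals])))
```

With exact observations the error is zero, and perturbing the pose makes it grow, which is the signal the adjustment drives down; adding the RGB camera's tie points simply gives this objective more, and sharper, observations.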