• Title/Summary/Keyword: BLUE-GREEN

Analysis of Spatial Correlation between Surface Temperature and Absorbed Solar Radiation Using Drone - Focusing on Cool Roof Performance - (드론을 활용한 지표온도와 흡수일사 간 공간적 상관관계 분석 - 쿨루프 효과 분석을 중심으로 -)

  • Cho, Young-Il;Yoon, Donghyeon;Lee, Moung-Jin
    • Korean Journal of Remote Sensing / v.38 no.6_2 / pp.1607-1622 / 2022
  • The purpose of this study is to determine the actual performance of a cool roof in reducing absorbed solar radiation. The spatial correlation between surface temperature and absorbed solar radiation is the method by which the performance of a cool roof can be understood and evaluated. The research area is the vicinity of Jangyu Mugye-dong, Gimhae-si, Gyeongsangnam-do, where an actual cool roof has been applied. A FLIR Vue Pro R thermal infrared sensor, a Micasense Red-Edge multispectral sensor, and a DJI H20T visible-spectrum sensor mounted on a DJI Matrice 300 RTK drone were used for aerial photography. To perform the spatial correlation analysis, thermal infrared orthomosaics and absorbed solar radiation distribution maps were constructed, and roof land cover features were extracted from the drone aerial photographs. The temporal scope of the research covered nine points in time at intervals of about 1 hour and 30 minutes, from 7:15 to 19:15 on July 27, 2021. On a daily average basis, correlation coefficients of 0.550 for the normal roof and 0.387 for the cool roof were obtained. However, at 11:30 and 13:00, when the solar altitude was high on the analysis date, the differences in correlation coefficients between the normal roof and the cool roof were 0.022 and 0.024, respectively, showing similar correlations. At the other times, the correlation coefficient of the normal roof was about 0.1 higher than that of the cool roof. Using high-resolution drone images, this study assessed the potential of an actual cool roof to keep absorbed solar radiation from heating the rooftop through a correlation comparison with a normal roof serving as a control group. The results of this research can serve as reference data when local governments or communities seek strategies to mitigate the urban heat island phenomenon.
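The core computation described above reduces, per roof surface and per time step, to a pixel-wise correlation between two co-registered rasters. The sketch below is a minimal illustration of that idea, assuming NumPy arrays for the orthomosaic layers and a boolean roof mask; the array and function names are hypothetical, not the authors' code.

```python
# Minimal sketch (not the authors' code): per-roof Pearson correlation between
# co-registered drone rasters of surface temperature and absorbed solar radiation.
# Array names, the masking approach, and the no-data handling are assumptions.
import numpy as np

def roof_correlation(surface_temp, absorbed_rad, roof_mask):
    """Pearson r over the pixels of one roof polygon (boolean mask)."""
    t = surface_temp[roof_mask].astype(float)
    r = absorbed_rad[roof_mask].astype(float)
    valid = ~np.isnan(t) & ~np.isnan(r)          # drop no-data pixels
    return np.corrcoef(t[valid], r[valid])[0, 1]

# Usage idea: compare a cool roof with a normal (control) roof at one time step.
# r_cool   = roof_correlation(temp_1130, rad_1130, cool_roof_mask)
# r_normal = roof_correlation(temp_1130, rad_1130, normal_roof_mask)
```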

Hydroponic Nutrient Solution and Light Quality Influence on Lettuce (Lactuca sativa L.) Growth from the Artificial Light Type of Plant Factory System (인공광 식물공장에서 수경배양액 및 광질 조절이 상추 실생묘 생장에 미치는 영향)

  • Heo, Jeong-Wook;Park, Kyeong-Hun;Hong, Seung-Gil;Lee, Jae-Su;Baek, Jeong-Hyun
    • Korean Journal of Environmental Agriculture / v.38 no.4 / pp.225-236 / 2019
  • BACKGROUND: Hydroponics is a method of plant production using inorganic nutrient solutions, applied under the artificial light conditions of a plant factory system. However, the use of conventional inorganic nutrients for hydroponics causes several environmental problems: waste from culture media and high nitrate concentrations in plants. Organic nutrients are generally applied as a supplementary fertilizer to promote plant growth under field or greenhouse conditions, but hydroponic culture using organic nutrients derived from agricultural by-products such as discarded stems, leaves, or immature fruits is rarely considered in plant factory systems. In this experiment, the effect of organic and conventional inorganic nutrient solutions on the growth and nutrient absorption pattern of green and red leaf lettuces was investigated under fluorescent lamps (FL) and mixed light-emitting diodes (LEDs). METHODS AND RESULTS: Single solutions derived from tomato (TJ) and kale (K) agricultural by-products, including leaves or stems, and their mixtures (1:1 mixing ratio) with the conventional inorganic Yamazaki solution (Y) were supplied for hydroponics in the plant factory system. The Yamazaki solution served as the control. 'Jeockchima' and 'Cheongchima' lettuce seedlings (Lactuca sativa L.) were used as plant materials. Seedlings with 2~3 true leaves were grown for 35 days under FL or mixed LED light composed of blue, red, and white in a 1:2:1 energy ratio. Light intensity was controlled at 180 μmol/m²/s on the culture bed. The single and mixed nutrient solutions of organic and/or inorganic components, adjusted to EC 1.5 dS/m and pH 5.8, were regularly supplied through a deep flow technique (DFT) system on the culture gutters. The number of unfolded leaves of seedlings grown under the single or mixed nutrient solutions was significantly greater than under the conventional Y treatment. Leaf expansion of 'Jeockchima' under the mixed LED condition was not affected by the Y, YK, or YTJ treatments. The SPAD value of 'Jeockchima' leaves under FL in the YK mixture was approximately 45% higher than under the conventional Y treatment, whereas the maximum SPAD value of 'Cheongchima' seedlings was observed in the YK treatment under mixed LED light. NO3-N content in the inorganic Y treatment had declined by up to 75% by the end of the experiment, whereas it increased by over 60% in the K and TJ organic treatments. CONCLUSION: Growth of the seedlings was affected by the mixed organic and inorganic treatments, although dry weight was similar to or lower than in the inorganic treatment Y under the plant factory system. Treatment Y, which contained the highest NO3-N content among the nutrients considered, promoted seedling growth compared with the other nutrients. However, the effect of the higher NO3-N content on seedling growth differed according to the light quality, as shown by leaf expansion, pigmentation, and dry weight under the single or mixed nutrients.

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.167-181 / 2018
  • Over the past decade, deep learning has been in the spotlight among machine learning algorithms. In particular, CNN (Convolutional Neural Network), known as an effective solution for recognizing and classifying images or voices, has been widely applied to classification and prediction problems. In this study, we investigate how to apply CNN to business problem solving. Specifically, this study proposes applying CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNN has strength in interpreting images. Thus, the model proposed in this study adopts CNN as a binary classifier that predicts the stock market direction (upward or downward) using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics the experts called 'technical analysts', who examine graphs of past price movements and predict future price movements. Our proposed model, named CNN-FG (Convolutional Neural Network using Fluctuation Graph), consists of five steps. In the first step, it divides the dataset into intervals of 5 days. Then it creates time series graphs for the divided dataset in step 2. Each graph is drawn in an image of 40 × 40 pixels, and the graph of each independent variable is drawn in a different color. In step 3, the model converts the images into matrices; each image becomes a combination of three matrices expressing the color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets: 80% of the total dataset was used for training and the remaining 20% for validation. The CNN classifiers are then trained on the training images in the final step. Regarding the parameters of CNN-FG, we adopted two convolution filters (5×5×6 and 5×5×9) in the convolution layer and a 2×2 max pooling filter in the pooling layer. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the prediction of an upward trend and the other for a downward trend). The activation function for the convolution and hidden layers was ReLU (Rectified Linear Unit), and the output layer used the Softmax function. To validate CNN-FG, we applied it to the prediction of KOSPI200 over 2,026 days in eight years (2009 to 2016). To match the proportions of the two classes of the dependent variable (i.e., tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators widely used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry William's %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index). To confirm the superiority of CNN-FG, we compared its prediction accuracy with those of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on those graphs can be effective in terms of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
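For readers who want a concrete picture of the layer configuration quoted above (40 × 40 RGB inputs, 5×5 convolutions with 6 and 9 filters, 2×2 max pooling, dense layers of 900 and 32 units, and a 2-unit softmax output), here is a minimal Keras sketch. It is assembled from the parameters stated in the abstract, not the authors' released code, and details such as the exact placement of the pooling layer are assumptions.

```python
# Hedged sketch of a CNN-FG-like classifier built from the parameters quoted
# in the abstract; layer ordering beyond what the abstract states is assumed.
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn_fg():
    return keras.Sequential([
        layers.Input(shape=(40, 40, 3)),              # 40 x 40 RGB fluctuation graph
        layers.Conv2D(6, (5, 5), activation="relu"),  # first convolution (5x5x6)
        layers.Conv2D(9, (5, 5), activation="relu"),  # second convolution (5x5x9)
        layers.MaxPooling2D(pool_size=(2, 2)),        # 2x2 max pooling
        layers.Flatten(),
        layers.Dense(900, activation="relu"),         # hidden layer 1
        layers.Dense(32, activation="relu"),          # hidden layer 2
        layers.Dense(2, activation="softmax"),        # upward vs. downward trend
    ])

model = build_cnn_fg()
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels))
```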

Study on Fabric and Embroidery of the Embroidered Folding Screen of Grass and Insects Possessed by Dong-A University Museum (동아대학교박물관 소장 <초충도수병>의 직물과 자수 연구)

  • Sim, Yeon-ok
    • Korean Journal of Heritage: History & Science / v.46 no.3 / pp.230-250 / 2013
  • The Embroidered Folding Screen of Grass and Insects possessed by Dong-A University Museum is designated as Treasure No. 595 and has been valued in art history for its more exquisite, delicate, and realistic expression and its colorful, three-dimensional quality compared with painted 'grass and insect' works. However, it has not been analyzed and studied from the standpoint of textile craft despite being an embroidered work. This study used scientific instruments to examine and analyze the screen's fabric, thread colors, and embroidery techniques in order to clarify its patterns and textile craft characteristics and its value in the history of textile craft. The screen consists of eight panels, and its subject matter and composition are similar to those of general paintings of grass and insects. The motifs on the panels are, from the first panel, cucumber, cockscomb, day lily, balsam pear, gillyflower, watermelon, eggplant, and chrysanthemum. Among these, the balsam pear is a motif not found in existing paintings of grass and insects. The eighth panel has only chrysanthemums, with no insects or reptiles, making it different from the typical forms of grass and insect painting. The ground fabric of the screen uses black, which is not seen in other decorative embroideries, to emphasize and maximize the varied colors of the threads. The fabric is a 5-end satin weave called gongdan (non-patterned satin). The threads are only very slightly, incidentally twisted; some use a single color, while others combine two or more colors for three-dimensional expression. Because the threads are severely deteriorated and faded, the original colors cannot be known, but the most frequently used colors range from yellow to green, and the other colors remaining relatively prominently are blue, brown, and violet. The day lily, gillyflower, and strawberry motifs currently appear reddish yellow, but they are presumed to have originally been orange and red, judging from existing paintings of grass and insects. The embroidery technique is mostly surface satin stitch used to fill the surfaces, reflecting traditional women's wisdom in reducing the waste of colored threads. Satin stitch is a relatively simple technique for decorating a surface, but here it uses various colored threads and divides the surfaces for combined vertical, horizontal, and diagonal stitches, or combinations of long and short stitches, to create varied textures and a sense of volume. The bodies of the insects combine buttonhole stitch, outline stitch, and satin stitch for three-dimensional expression, with the buttonhole stitch particularly noticeable. In addition, decorative stitches give volume to the leaves, and surface pine needle stitches on the scouring rush add a more realistic texture. Decorative stitches were also added on top of the gillyflower, strawberries, and cucumbers for a more delicate touch. The screen is valuable in the history of painting and art and bears great importance in the history of Korean embroidery, as it uses outstanding Korean techniques and colors to express Shin Saimdang's 'Grass and Insect Painting'.

A Two-Stage Learning Method of CNN and K-means RGB Cluster for Sentiment Classification of Images (이미지 감성분류를 위한 CNN과 K-means RGB Cluster 이-단계 학습 방안)

  • Kim, Jeongtae;Park, Eunbi;Han, Kiwoong;Lee, Junghyun;Lee, Hong Joo
    • Journal of Intelligence and Information Systems / v.27 no.3 / pp.139-156 / 2021
  • The biggest reason for using a deep learning model in image classification is that it can consider the relationships between regions by extracting each region's features from the overall information of the image. However, a CNN model may not be suitable for emotional image data that lacks distinctive regional features. To address the difficulty of classifying emotion images, researchers propose CNN-based architectures suited to emotion images every year. Studies on the relationship between color and human emotion have also been conducted, showing that different emotions are induced by different colors, and some deep learning studies have applied color information to image sentiment classification. Using an image's color information in addition to the image itself improves the accuracy of classifying image emotions compared with training the classification model on the image alone. This study proposes two ways to increase accuracy by adjusting the result value after the model classifies an image's emotion; both modify the result value based on statistics derived from the picture's colors. The first method finds the most prevalent two-color combinations across all training data and, at test time, finds the most prevalent two-color combination of each test image and corrects the result values according to the color combination distribution. The second method weights the result value obtained after the model classifies an image's emotion, using an expression based on logarithmic and exponential functions. Emotion6, classified into six emotions, and Artphoto, classified into eight categories, were used as the image data. Densenet169, Mnasnet, Resnet101, Resnet152, and Vgg19 architectures were used for the CNN model, and performance was compared before and after applying the two-stage learning to each CNN model. Inspired by color psychology, which deals with the relationship between colors and emotions, we studied how to improve accuracy by modifying the result values based on color when creating a model that classifies an image's sentiment. Sixteen colors were used: red, orange, yellow, green, blue, indigo, purple, turquoise, pink, magenta, brown, gray, silver, gold, white, and black. Using scikit-learn's clustering, the seven colors that are dominant in each image are identified. The RGB coordinates of these colors are then compared with the RGB coordinates of the 16 colors above; that is, each is converted to the closest of the 16 colors. If combinations of three or more colors were selected, too many color combinations would occur and the distribution would become scattered, so each combination would have little influence on the result value. Therefore, two-color combinations were found and weighted into the model. Before training, the most prevalent color combinations were found for all training data images, and the distribution of color combinations for each class was stored in a Python dictionary to be used during testing. During the test, the two-color combination that is most prevalent in each test data image is found; we then checked how that color combination was distributed in the training data and corrected the result. We devised several equations to weight the result value from the model based on the extracted colors, as described above. The data set was randomly split 80:20, and the model was verified using 20% of the data as a test set. After splitting the remaining 80% of the data into five folds to perform 5-fold cross-validation, the model was trained five times using different validation sets, and the final performance was checked using the previously separated test dataset. Adam was used as the optimizer, and the learning rate was set to 0.01. Training was performed for up to 20 epochs, and if the validation loss did not decrease for five epochs, training was stopped; early stopping was set to load the model with the best validation loss. Classification accuracy was better when the information extracted from the color properties was used together with the CNN architecture than when only the CNN architecture was used.
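As a rough illustration of the color step described in the abstract, the sketch below uses scikit-learn's KMeans to extract seven dominant colors from an image, snaps each one to the nearest of the sixteen named colors, and returns the two most frequent as the image's color combination. The reference RGB values and the tie-handling details are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: dominant-color extraction with KMeans and mapping to the
# nearest of 16 named reference colors. The reference RGB anchors below are
# illustrative choices, not values taken from the paper.
import numpy as np
from sklearn.cluster import KMeans

REFERENCE_COLORS = {
    "red": (255, 0, 0), "orange": (255, 165, 0), "yellow": (255, 255, 0),
    "green": (0, 128, 0), "blue": (0, 0, 255), "indigo": (75, 0, 130),
    "purple": (128, 0, 128), "turquoise": (64, 224, 208), "pink": (255, 192, 203),
    "magenta": (255, 0, 255), "brown": (139, 69, 19), "gray": (128, 128, 128),
    "silver": (192, 192, 192), "gold": (255, 215, 0),
    "white": (255, 255, 255), "black": (0, 0, 0),
}

def nearest_color(rgb):
    """Snap an RGB triple to the closest of the 16 reference colors."""
    return min(REFERENCE_COLORS,
               key=lambda name: np.sum((np.array(REFERENCE_COLORS[name]) - rgb) ** 2))

def two_color_combination(image):
    """image: H x W x 3 uint8 array -> the two most frequent reference colors."""
    pixels = image.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=7, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=7)          # pixels per cluster
    names = [nearest_color(c) for c in km.cluster_centers_]
    top = []
    for i in np.argsort(counts)[::-1]:                     # largest clusters first
        if names[i] not in top:
            top.append(names[i])
        if len(top) == 2:
            break
    return tuple(top)   # may return one color if all clusters map to the same name
```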