• Title/Summary/Keyword: forest machine


Predicting the Effects of Rooftop Greening and Evaluating CO2 Sequestration in Urban Heat Island Areas Using Satellite Imagery and Machine Learning (위성영상과 머신러닝 활용 도시열섬 지역 옥상녹화 효과 예측과 이산화탄소 흡수량 평가)

  • Minju Kim;Jeong U Park;Juhyeon Park;Jisoo Park;Chang-Uk Hyun
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_1
    • /
    • pp.481-493
    • /
    • 2023
  • In high-density urban areas, the urban heat island effect increases urban temperatures, leading to negative impacts such as worsened air pollution, increased cooling energy consumption, and increased greenhouse gas emissions. In urban environments where it is difficult to secure additional green spaces, rooftop greening is an efficient greenhouse gas reduction strategy. In this study, we not only analyzed the current status of the urban heat island effect but also utilized high-resolution satellite data and spatial information to estimate the available rooftop greening area within the study area. We evaluated the mitigation effect of the urban heat island phenomenon and carbon sequestration capacity through temperature predictions resulting from rooftop greening. To achieve this, we utilized WorldView-2 satellite data to classify land cover in the urban heat island areas of Busan city. We developed a prediction model for temperature changes before and after rooftop greening using machine learning techniques. To assess the degree of urban heat island mitigation due to changes in rooftop greening areas, we constructed a temperature change prediction model with temperature as the dependent variable using the random forest technique. In this process, we built a multiple regression model to derive high-resolution land surface temperatures for training data using Google Earth Engine, combining Landsat-8 and Sentinel-2 satellite data. Additionally, we evaluated carbon sequestration based on rooftop greening areas using a carbon absorption capacity per plant. The results of this study suggest that the developed satellite-based urban heat island assessment and temperature change prediction technology using Random Forest models can be applied to urban heat island-vulnerable areas with potential for expansion.
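The abstract's core modeling step, a random forest regression with land surface temperature as the dependent variable, can be sketched as below. The feature set (vegetation, rooftop, and impervious fractions), the coefficients, and the synthetic data are illustrative assumptions, not the study's actual WorldView-2-derived dataset.

```python
# Hedged sketch: a random forest regression of land surface temperature
# on land-cover fractions, then a "greening" scenario that converts part
# of the rooftop fraction to vegetation. All data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 500
X = rng.random((n, 3))  # hypothetical fractions: vegetation, rooftop, impervious
veg, roof, imperv = X[:, 0], X[:, 1], X[:, 2]
# Toy response: more vegetation cools, more impervious surface warms.
lst = 30.0 - 5.0 * veg + 4.0 * imperv + rng.normal(0, 0.5, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, lst)

# Simulate rooftop greening: move half of each rooftop fraction into
# vegetation, then predict the resulting temperature change.
X_green = X.copy()
X_green[:, 0] += 0.5 * X_green[:, 1]
X_green[:, 1] *= 0.5
delta = model.predict(X_green) - model.predict(X)
print(f"mean predicted cooling: {delta.mean():.2f} degC")
```

Because the toy response cools with vegetation, the scenario predictions come out lower on average, mirroring the mitigation effect the study evaluates.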

Estimation of TROPOMI-derived Ground-level SO2 Concentrations Using Machine Learning Over East Asia (기계학습을 활용한 동아시아 지역의 TROPOMI 기반 SO2 지상농도 추정)

  • Choi, Hyunyoung;Kang, Yoojin;Im, Jungho
    • Korean Journal of Remote Sensing
    • /
    • v.37 no.2
    • /
    • pp.275-290
    • /
    • 2021
  • Sulfur dioxide (SO2) in the atmosphere is mainly generated from anthropogenic emission sources. It forms ultra-fine particulate matter through chemical reactions and has harmful effects on both the environment and human health. In particular, ground-level SO2 concentrations are closely related to human activities. Satellite observations such as TROPOMI (TROPOspheric Monitoring Instrument)-derived column density data can provide spatially continuous monitoring of ground-level SO2 concentrations. This study proposes a two-step residual-corrected model that estimates ground-level SO2 concentrations through the synergistic use of satellite data and numerical model output. Random forest machine learning was adopted in the two-step residual-corrected model. The proposed model was evaluated through three cross-validations (i.e., random, spatial, and temporal). The results showed that the model produced slopes of 1.14-1.25, R values of 0.55-0.65, and relative root-mean-square errors (rRMSE) of 58-63%, improvements of 10% in slope and 3% in R and rRMSE over the model without residual correction. The model performance by country was slightly reduced in Japan, often resulting in overestimation, where the sample size was small and the concentration level was relatively low. The spatial and temporal distributions of SO2 produced by the model agreed with those of the in-situ measurements, especially over the Yangtze River Delta in China and the Seoul Metropolitan Area in South Korea, which are highly dependent on the characteristics of anthropogenic emission sources. The model proposed in this study can be used for long-term monitoring of ground-level SO2 concentrations in both the spatial and temporal domains.
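The two-step residual-corrected approach described above can be sketched with two random forests: the first predicts the target, the second learns the first model's residuals and adds a correction at prediction time. The predictors and synthetic data are illustrative assumptions, not the TROPOMI/numerical-model inputs of the study.

```python
# Hedged sketch of a two-step residual-corrected random forest model.
# Step 1 fits the target; step 2 fits the residuals of step 1 and the
# final prediction is the sum of both. All data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 600
# Hypothetical predictors: satellite column density, numerical-model
# output, and one meteorological variable.
X = rng.random((n, 3))
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + np.sin(6 * X[:, 2]) + rng.normal(0, 0.1, n)

X_tr, X_te, y_tr, y_te = X[:500], X[500:], y[:500], y[500:]

# Step 1: base random forest.
base = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
# Step 2: a second random forest models the residuals of the base model.
resid = y_tr - base.predict(X_tr)
corrector = RandomForestRegressor(n_estimators=200, random_state=1).fit(X_tr, resid)

# Final estimate = base prediction + residual correction.
pred = base.predict(X_te) + corrector.predict(X_te)
rmse = float(np.sqrt(np.mean((pred - y_te) ** 2)))
print(f"test RMSE with residual correction: {rmse:.3f}")
```

In practice the residuals would be taken from held-out (e.g. cross-validated) predictions rather than in-sample ones, so that the corrector learns genuine systematic error rather than noise.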

A Study on Daytime Transparent Cloud Detection through Machine Learning: Using GK-2A/AMI (기계학습을 통한 주간 반투명 구름탐지 연구: GK-2A/AMI를 이용하여)

  • Byeon, Yugyeong;Jin, Donghyun;Seong, Noh-hun;Woo, Jongho;Jeon, Uujin;Han, Kyung-Soo
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.6_1
    • /
    • pp.1181-1189
    • /
    • 2022
  • Clouds are composed of tiny water droplets, ice crystals, or mixtures of both suspended in the atmosphere, and they cover about two-thirds of the Earth's surface. Cloud detection in satellite images, i.e., separating cloud from non-cloud areas, is very difficult because clouds have reflectance characteristics similar to some ground objects or the ground surface. In contrast to thick clouds, which have distinct characteristics, thin transparent clouds show weak contrast against the background in satellite images and appear mixed with the ground surface. To overcome the limitations that transparent clouds pose for cloud detection, this study conducted cloud detection focusing on transparent clouds using machine learning techniques (random forest [RF] and convolutional neural networks [CNN]). As reference data, the Cloud Mask and Cirrus Mask in the MOD35 product provided by the MOderate Resolution Imaging Spectroradiometer (MODIS) were used, and the pixel ratio of the training data was configured to be about 1:1:1 for clouds, transparent clouds, and clear sky so that model training accounted for transparent cloud pixels. In the qualitative comparison, both RF and CNN successfully detected various types of clouds, including transparent clouds, and RF+CNN, which combines the results of the RF and CNN models, performed cloud detection well and was confirmed to improve on the limitations of each individual model. Quantitatively, the overall accuracy (OA) was 92% for RF, 94.11% for CNN, and 94.29% for RF+CNN.
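The three-class, roughly 1:1:1 training setup described above (clear sky, transparent cloud, thick cloud) can be sketched with a random forest classifier. The toy "reflectance" channels and class means are assumptions for illustration, not GK-2A/AMI band values.

```python
# Hedged sketch: a random forest classifier trained on a balanced
# (~1:1:1) three-class pixel set, mimicking the clear-sky / transparent
# cloud / thick cloud setup. The reflectance samples are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)

def sample(mean, n, label):
    # Toy "pixels": four channel reflectances scattered around a class mean.
    return rng.normal(mean, 0.08, (n, 4)), np.full(n, label)

X0, y0 = sample(0.15, 300, 0)  # clear sky
X1, y1 = sample(0.35, 300, 1)  # transparent cloud (weak contrast)
X2, y2 = sample(0.75, 300, 2)  # thick cloud
X = np.vstack([X0, X1, X2])
y = np.concatenate([y0, y1, y2])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
acc = clf.score(X, y)
print(f"training accuracy: {acc:.3f}")
```

The balanced class ratio keeps the under-represented transparent-cloud pixels from being ignored by the classifier, which is the point of the 1:1:1 configuration in the study.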

Generation of Daily High-resolution Sea Surface Temperature for the Seas around the Korean Peninsula Using Multi-satellite Data and Artificial Intelligence (다종 위성자료와 인공지능 기법을 이용한 한반도 주변 해역의 고해상도 해수면온도 자료 생산)

  • Jung, Sihun;Choo, Minki;Im, Jungho;Cho, Dongjin
    • Korean Journal of Remote Sensing
    • /
    • v.38 no.5_2
    • /
    • pp.707-723
    • /
    • 2022
  • Although satellite-based sea surface temperature (SST) is advantageous for monitoring large areas, spatiotemporal data gaps frequently occur due to various environmental or mechanical causes. Thus, it is crucial to fill in the gaps to maximize its usability. In this study, daily SST composite fields with a resolution of 4 km were produced through a two-step machine learning approach using polar-orbiting and geostationary satellite SST data. The first step was SST reconstruction based on the Data Interpolating Convolutional AutoEncoder (DINCAE) using multi-satellite-derived SST data. The second step corrected the reconstructed SST toward in situ measurements using a light gradient boosting machine (LGBM) to produce the final daily SST composite fields. The DINCAE model was validated using random masks for 50 days, whereas the LGBM model was evaluated using leave-one-year-out cross-validation (LOYOCV). The SST reconstruction accuracy was high, with an R2 of 0.98 and a root-mean-square error (RMSE) of 0.97℃. The accuracy gain from the second step was also substantial when compared to in situ measurements: RMSE decreased by 0.21-0.29℃ and MAE by 0.17-0.24℃. The SST composite fields generated using all in situ data in this study were comparable with existing data-assimilated SST composite fields. In addition, the LGBM model in the second step greatly reduced the overfitting that was reported as a limitation of the random forest used in a previous study. The spatial distribution of the corrected SST was similar to that of existing high-resolution SST composite fields, revealing that spatial details of oceanic phenomena such as fronts, eddies, and SST gradients were well simulated. This research demonstrates the potential to produce high-resolution seamless SST composite fields using multi-satellite data and artificial intelligence.

Estimation of Fractional Urban Tree Canopy Cover through Machine Learning Using Optical Satellite Images (기계학습을 이용한 광학 위성 영상 기반의 도시 내 수목 피복률 추정)

  • Sejeong Bae;Bokyung Son;Taejun Sung;Yeonsu Lee;Jungho Im;Yoojin Kang
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.5_3
    • /
    • pp.1009-1029
    • /
    • 2023
  • Urban trees play a vital role in urban ecosystems, significantly reducing impervious surfaces and impacting carbon cycling within the city. Although previous research has demonstrated the efficacy of employing artificial intelligence in conjunction with airborne light detection and ranging (LiDAR) data to generate urban tree information, the availability and cost constraints associated with LiDAR data pose limitations. Consequently, this study employed freely accessible, high-resolution multispectral satellite imagery (i.e., Sentinel-2 data) to estimate fractional tree canopy cover (FTC) within the urban confines of Suwon, South Korea, employing machine learning techniques. This study leveraged a median composite image derived from a time series of Sentinel-2 images. To account for the diverse land cover found in urban areas, the model incorporated three types of input variables: mean and standard deviation (std) values within each 30 m grid cell of optical indices derived from 10 m resolution Sentinel-2 data, and fractional coverage of distinct land cover classes within each 30 m grid cell from the existing level-3 land cover map. Four schemes with different combinations of input variables were compared. Notably, when all three factors (i.e., mean, std, and fractional cover) were used to account for the variation of land cover in urban areas (Scheme 4, S4), the machine learning model exhibited improved performance compared to using only the mean of the optical indices (Scheme 1). Of the various models proposed, the random forest (RF) model with S4 demonstrated the most remarkable performance, achieving an R2 of 0.8196, a mean absolute error (MAE) of 0.0749, and a root mean squared error (RMSE) of 0.1022. Based on the variable importance analysis, the std variable had the highest impact on model outputs within heterogeneous land covers.
This trained RF model with S4 was then applied to the entire Suwon region, consistently delivering robust results with an R2 of 0.8702, an MAE of 0.0873, and an RMSE of 0.1335. The FTC estimation method developed in this study is expected to offer advantages for application in various regions, providing fundamental data for a better understanding of carbon dynamics in urban ecosystems in the future.
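The feature construction described above, aggregating a 10 m optical index into 30 m grid cells as a mean and a standard deviation, can be sketched with a block-wise reshape in NumPy. The NDVI raster here is synthetic.

```python
# Hedged sketch: aggregate a 10 m index raster to 30 m grid cells,
# producing the "mean" and "std" input variables per cell. Each 30 m
# cell covers a 3x3 block of 10 m pixels. The NDVI array is synthetic.
import numpy as np

rng = np.random.default_rng(1)
ndvi_10m = rng.random((90, 90))          # toy 10 m index raster

# Reshape so each 3x3 block of 10 m pixels becomes one flat 30 m cell.
blocks = ndvi_10m.reshape(30, 3, 30, 3).swapaxes(1, 2).reshape(30, 30, 9)
mean_30m = blocks.mean(axis=-1)          # "mean" input variable
std_30m = blocks.std(axis=-1)            # "std" input variable

print(mean_30m.shape, std_30m.shape)
```

Carrying the within-cell standard deviation alongside the mean is what lets the model see the heterogeneity of urban land cover inside each coarse grid cell.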

Ensemble of Nested Dichotomies for Activity Recognition Using Accelerometer Data on Smartphone (Ensemble of Nested Dichotomies 기법을 이용한 스마트폰 가속도 센서 데이터 기반의 동작 인지)

  • Ha, Eu Tteum;Kim, Jeongmin;Ryu, Kwang Ryel
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.4
    • /
    • pp.123-132
    • /
    • 2013
  • As smartphones are equipped with various sensors such as the accelerometer, GPS, gravity sensor, gyros, ambient light sensor, proximity sensor, and so on, there have been many research works on making use of these sensors to create valuable applications. Human activity recognition is one such application, motivated by various welfare applications such as support for the elderly, measurement of calorie consumption, analysis of lifestyles, analysis of exercise patterns, and so on. One of the challenges faced when using smartphone sensors for activity recognition is that the number of sensors used should be minimized to save battery power. When the number of sensors used is restricted, it is difficult to realize a highly accurate activity recognizer or classifier, because it is hard to distinguish between subtly different activities relying on only limited information. The difficulty gets especially severe when the number of different activity classes to be distinguished is very large. In this paper, we show that a fairly accurate classifier can be built that distinguishes ten different activities using data from only a single sensor, the smartphone accelerometer. The approach that we take to dealing with this ten-class problem is the ensemble of nested dichotomies (END) method, which transforms a multi-class problem into multiple two-class problems. END builds a committee of binary classifiers in a nested fashion using a binary tree. At the root of the binary tree, the set of all the classes is split into two subsets of classes by using a binary classifier. At a child node of the tree, a subset of classes is again split into two smaller subsets by using another binary classifier. Continuing in this way, we obtain a binary tree where each leaf node contains a single class. This binary tree can be viewed as a nested dichotomy that can make multi-class predictions.
Depending on how a set of classes is split into two subsets at each node, the final tree that we obtain can be different. Since some classes may be correlated, a particular tree may perform better than the others. However, we can hardly identify the best tree without deep domain knowledge. The END method copes with this problem by building multiple dichotomy trees randomly during learning and then combining the predictions made by each tree during classification. The END method is generally known to perform well even when the base learner is unable to model complex decision boundaries. As the base classifier at each node of the dichotomy, we have used another ensemble classifier called the random forest. A random forest is built by repeatedly generating a decision tree, each time with a different random subset of features, using a bootstrap sample. By combining bagging with random feature subset selection, a random forest enjoys the advantage of having more diverse ensemble members than simple bagging. As an overall result, our ensemble of nested dichotomies can be seen as a committee of committees of decision trees that can deal with a multi-class problem with high accuracy. The ten activity classes that we distinguish in this paper are 'Sitting', 'Standing', 'Walking', 'Running', 'Walking Uphill', 'Walking Downhill', 'Running Uphill', 'Running Downhill', 'Falling', and 'Hobbling'. The features used for classifying these activities include not only the magnitude of the acceleration vector at each time point but also the maximum, the minimum, and the standard deviation of the vector magnitude within a time window of the last 2 seconds, etc. For experiments comparing the performance of END with those of other methods, accelerometer data were collected every 0.1 seconds for 2 minutes per activity from 5 volunteers.
Of the 5,900 ($=5{\times}(60{\times}2-2)/0.1$) data points collected for each activity (the data for the first 2 seconds are discarded because they lack time-window data), 4,700 have been used for training and the rest for testing. Although 'Walking Uphill' is often confused with some other similar activities, END has been found to classify all ten activities with a fairly high accuracy of 98.4%. In comparison, the accuracies achieved by a decision tree, a k-nearest neighbor classifier, and a one-versus-rest support vector machine were 97.6%, 96.5%, and 97.6%, respectively.
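The nested-dichotomy construction described above can be sketched compactly: each node randomly splits the remaining class set in two, trains a binary random forest on that split, and leaves hold single classes. The digits dataset stands in here for the accelerometer features, which are an assumption; a full END would average several such random trees.

```python
# Hedged sketch of one nested dichotomy with random forest base
# classifiers. An END combines several such randomly built trees;
# a single tree is shown. The digits data stand in for real features.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def build_dichotomy(X, y, classes):
    if len(classes) == 1:
        return classes[0]                       # leaf: a single class
    split = rng.permutation(classes)            # random class split
    left = set(split[: len(split) // 2].tolist())
    mask = np.isin(y, list(left))
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    clf.fit(X, mask.astype(int))                # binary: left vs right subset
    node_l = build_dichotomy(X[mask], y[mask], sorted(left))
    node_r = build_dichotomy(X[~mask], y[~mask], sorted(set(classes) - left))
    return (clf, node_l, node_r)

def predict_one(node, x):
    # Descend the dichotomy tree until a leaf class is reached.
    while isinstance(node, tuple):
        clf, node_l, node_r = node
        node = node_l if clf.predict(x.reshape(1, -1))[0] == 1 else node_r
    return node

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = build_dichotomy(X_tr, y_tr, sorted(np.unique(y_tr).tolist()))
pred = np.array([predict_one(tree, x) for x in X_te])
acc = (pred == y_te).mean()
print(f"single-dichotomy accuracy: {acc:.3f}")
```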

Analysis of the Behavior of Fluorescent Whitening Agents in Recycling Process of White Ledger (형광증백제가 함유된 백상고지의 재활용에 따른 형광증백제의 거동 분석)

  • Lee, Ji Young;Kim, Chul Hwan;Park, Jong-Hye;Kim, Eun-Hea;Sung, Yong Joo;Heo, Young-Jun;Kim, Young-Hoon;Kim, Yeon-Oh
    • Journal of Korea Technical Association of The Pulp and Paper Industry
    • /
    • v.47 no.1
    • /
    • pp.52-58
    • /
    • 2015
  • White ledger usually includes white office paper, computer paper, and copy machine paper. Because these grades need high optical properties, fluorescent whitening agents (FWAs) are widely used in the papermaking process. FWAs are the most powerful and effective chemicals used to obtain high CIE whiteness and ISO brightness in papers. The rising demand for white or ultra-white papers has increased the use of FWAs. However, the FWAs used in white ledger can restrict its use, even though white ledger is widely used as a raw material in paperboard mills. Therefore, it is necessary to develop methods to control FWAs from white ledger to increase its use in paperboard mills. In this study, the behaviors of a disulpho fluorescent whitening agent (D-FWA), a tetrasulpho fluorescent whitening agent (T-FWA), and a hexasulpho fluorescent whitening agent (H-FWA) during the recycling process were identified as a first step toward removing FWAs from white ledger. We prepared four types of papers (dyed with D-FWA, T-FWA, and H-FWA), disintegrated these papers, and made handsheets. This recycling process was carried out three times in a laboratory. After each round of recycling, the handsheets' CIE whiteness and fluorescence index were measured, and the distribution of FWAs in the Z-direction was observed using CLSM images. FWA reductions in the model papers were calculated from the fluorescence indices as a function of the number of recycling cycles. FWAs in handsheets containing T-FWA and H-FWA decreased linearly with the number of recycling cycles, but D-FWA did not show a significant reduction in the fluorescence index after recycling. T-FWA and H-FWA showed distributions similar to that of D-FWA after recycling. Therefore, as much T-FWA and H-FWA as possible should be detached in the early processes of papermaking at paperboard mills.

Study on Refining Technique of Raw Lacquer (I) - Properties of Raw Lacquer, Refined Lacquer and Film according as Their Collecting Places and Seasons - (옻칠의 정제기술에 관한 연구(I) - 생산지·생산시기에 따른 생칠과 옻칠의 특성 및 도막 특성 -)

  • Song, Hong-Keun;Han, Chang-Hoon
    • Journal of the Korean Wood Science and Technology
    • /
    • v.29 no.1
    • /
    • pp.31-42
    • /
    • 2001
  • In this study, we obtained fundamental data on the physical and chemical properties of Korean raw lacquer in order to produce high-quality lacquer. The tested raw lacquers were obtained from Won-ju in Korea and from Shanxishang, Guizhoushang, and Anhuishang in China. The drying times of the refined lacquers, the tensile strengths of the dried films, and the uniformity of the films were measured. The refined lacquers were prepared with experimentally scaled refining equipment. Lacquer films were applied on glass with a film applicator and tested with a universal strength testing machine. The films were imaged by scanning electron microscopy and confocal microscopy to assess their uniformity. The refining method did not differ among the three kinds of raw lacquer, which differed in their collecting times and places, but their viscosities were quite different. When black refined lacquer is made with iron powder, the timing of the iron powder addition is critical to controlling the viscosity. The refining times, viscosity, and tensile strength of the refined lacquers depended not on the refining conditions but on the place where the raw lacquer was collected.


Building an Analytical Platform of Big Data for Quality Inspection in the Dairy Industry: A Machine Learning Approach (유제품 산업의 품질검사를 위한 빅데이터 플랫폼 개발: 머신러닝 접근법)

  • Hwang, Hyunseok;Lee, Sangil;Kim, Sunghyun;Lee, Sangwon
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.125-140
    • /
    • 2018
  • As one of the processes in the manufacturing industry, quality inspection examines intermediate or final products to separate the good-quality goods that meet the quality management standard from the defective goods that do not. Manual quality inspection in a mass production system may result in low consistency and efficiency. Therefore, the quality inspection of mass-produced products involves automatic checking and classification by machines in many processes. Although there are many preceding studies on improving or optimizing processes using the data generated in production, actual implementation has faced many constraints due to the technical limitations of processing a large volume of data in real time. Recent research on big data has improved data processing technology and enabled collecting, processing, and analyzing process data in real time. This paper proposes a process and the details of applying big data to quality inspection and examines the applicability of the proposed method to the dairy industry. We review the previous studies and propose a big data analysis procedure that is applicable to the manufacturing sector. To assess the feasibility of the proposed method, we applied two methods, a convolutional neural network and a random forest, to one of the quality inspection processes in the dairy industry. We collected, processed, and analyzed images of caps and straws in real time, and then determined whether the products were defective or not. The results confirmed a drastic increase in classification accuracy compared to the quality inspection performed in the past.
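The image-based defect-classification step can be sketched with one of the two methods the abstract names, a random forest on flattened product images. The synthetic "cap images" (a bright central region present when the cap is intact, absent when defective) are an assumption standing in for the real inspection data.

```python
# Hedged sketch: good/defective classification of flattened product
# images with a random forest. The 16x16 "cap images" are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)

def make_images(n, defective):
    imgs = rng.normal(0.2, 0.05, (n, 16, 16))
    if not defective:
        imgs[:, 6:10, 6:10] += 0.6   # intact cap: bright central region
    return imgs.reshape(n, -1)       # flatten pixels into feature vectors

X = np.vstack([make_images(200, False), make_images(200, True)])
y = np.array([0] * 200 + [1] * 200)  # 0 = good, 1 = defective

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
score = clf.score(X_te, y_te)
print(f"test accuracy: {score:.3f}")
```

A CNN, the paper's other method, would replace the flattening with convolutional layers that learn the spatial features directly.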

A Development of Stem Analysis Program and its Comparison with other Method for Increment Calculation (수간석해(樹幹析解) 전산(電算)프로그램 개발(開發) 및 생장량(生長量) 계산방법(計算方法)의 비교(比較)에 관(關)한 연구(硏究))

  • Byun, Woo Hyuk;Lee, Woo Kyun;Yun, Kwang Bae
    • Journal of Korean Society of Forest Science
    • /
    • v.79 no.1
    • /
    • pp.1-15
    • /
    • 1990
  • In this study, a stem analysis program that can be run on a personal computer was developed to reduce the time and cost of calculation and to increase the accuracy of analysis. The stem analysis method used in this program was compared with other methods. The results obtained were as follows: values measured to 1/100 mm with the latest annual-ring measurement machine (Jahrringmeßgeräte Johan Type II) were automatically input to the computer and saved under a given file name, and a Turbo Pascal program was written for this purpose. The measured data were analyzed by a stem analysis calculation program written in Fortran-77. Volume and height increments were approximated by spline functions, and the diameter of each stem disk was calculated by the quadratic mean method. The increment values calculated by the programs were printed annually and for every five-year period. The stem analysis diagram and several increment graphs could also be printed easily. The comparison between the analysis methods showed that the quadratic mean could reduce the error caused by an eccentric pith, and that when the stem taper curve approximated by a spline function was used in the calculation of tree height and volume, increments could be calculated more exactly.
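Two calculations described above can be sketched briefly: the quadratic mean of cross-sectional diameter measurements (which reduces the error an eccentric pith introduces) and a spline fit through disk radii for smooth increment curves. This assumes SciPy is available, and the ring measurements are illustrative numbers, not data from the paper.

```python
# Hedged sketch: quadratic mean diameter from two perpendicular
# measurements, and annual radial increments from a cubic spline
# through hypothetical stem-disk radii.
import numpy as np
from scipy.interpolate import CubicSpline

def quadratic_mean_diameter(d1, d2):
    # Quadratic mean of two perpendicular diameter measurements.
    return float(np.sqrt((d1 ** 2 + d2 ** 2) / 2.0))

qmd = quadratic_mean_diameter(10.0, 14.0)
print(f"quadratic mean diameter: {qmd:.3f}")

# Hypothetical stem-disk radii (mm) at five ages (years).
age = np.array([5, 10, 15, 20, 25])
radius = np.array([8.0, 21.0, 37.0, 50.0, 58.0])
spline = CubicSpline(age, radius)
# Annual radial increments between ages 5 and 25, from the spline.
years = np.arange(5, 26)
increments = np.diff(spline(years))
print(f"mean annual radial increment: {increments.mean():.2f} mm")
```

Evaluating the spline at every year interpolates increments between the five-year measurements, which is how the spline approximation yields annual increment values.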
