• Title/Abstract/Keyword: Linear algorithm


A study on discharge estimation for the event using a deep learning algorithm (딥러닝 알고리즘을 이용한 강우 발생시의 유량 추정에 관한 연구)

  • Song, Chul Min
    • Proceedings of the Korea Water Resources Association Conference
    • /
    • 2021.06a
    • /
    • pp.246-246
    • /
    • 2021
  • This study aims to estimate discharge during rainfall events. To this end, departing from the model-development methodologies of previous studies, it estimates event discharge using a convolutional neural network (CNN), one of the deep learning algorithms, together with hydrological images. Because CNNs were generally developed to solve classification problems, they are not directly suited to simulating discharge, an unspecified continuous variable; the fully connected layer of the CNN was therefore modified so that a continuous variable could be simulated. CNNs are usually applied to RGB (red, green, blue) photographs to predict what the photograph depicts, but in this study using ordinary RGB photographs to predict runoff could undermine the premise of an empirical model (the relationship between independent and dependent variables). Hydrological images, defined for an arbitrary watershed as a set of grids with dimensionless hydrological attributes in two-dimensional space, were therefore used as the input data. The CNN architecture was set up as five repetitions of a convolution layer and a pooling layer, followed by a flatten layer, two dense layers, one batch normalization layer, and a final dense layer. The activation function of the last dense layer was set to the linear function commonly used in regression models, instead of the softmax or sigmoid functions used in classification models; the activation function of the other layers was the rectified linear unit (ReLU). MSE and MAE were used to assess model training and validation, and NSE and RMSE were used for model evaluation. As a result, the training MSE decreased from 11,629.8 m3/s to 118.6 m3/s and the training MAE from 25.4 m3/s to 4.7 m3/s, while the validation MSE decreased from 1,997.9 m3/s to 527.9 m3/s and the validation MAE from 21.5 m3/s to 9.4 m3/s. The NSE for model evaluation was 0.7 and the RMSE 27.0 m3/s, so the model was judged to be moderate. Accordingly, extending the CNN architecture and improving the hydrological images, or developing new ones, on the basis of the methodology presented here leaves room to improve predictive performance, and applications to remote sensing and to global- or regional-scale real-time discharge simulation using satellite imagery are expected to be possible.

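The architecture the abstract describes — five convolution→pooling repetitions feeding dense layers whose final activation is linear rather than softmax — can be sketched as a minimal numpy forward pass. Everything here (the 128×128 image size, the random kernels and weights) is a hypothetical stand-in for illustration, not the paper's trained Keras model:

```python
import numpy as np

def conv2d(x, k):
    # Valid single-channel 2-D convolution, loop version for clarity.
    h, w = x.shape
    kh, kw = k.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    # Non-overlapping s-by-s max pooling (truncating odd edges).
    h, w = (x.shape[0] // s) * s, (x.shape[1] // s) * s
    return x[:h, :w].reshape(h // s, s, w // s, s).max(axis=(1, 3))

rng = np.random.default_rng(0)
x = rng.random((128, 128))            # hypothetical hydrological image (dimensionless grid)
for _ in range(5):                    # five Convolution -> Pooling repetitions
    x = np.maximum(conv2d(x, rng.random((3, 3))), 0.0)   # ReLU activation
    x = max_pool(x)

flat = x.ravel()                                          # Flatten layer
h1 = np.maximum(flat @ rng.random((flat.size, 8)), 0.0)   # Dense + ReLU
q = float(h1 @ rng.random(8))   # final Dense with *linear* activation -> continuous discharge
```

The only change relative to an ordinary CNN classifier is the last line: the scalar output is left linear, so the network regresses a continuous discharge value instead of class probabilities.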

Dynamic Nonlinear Prediction Model of Univariate Hydrologic Time Series Using the Support Vector Machine and State-Space Model (Support Vector Machine과 상태공간모형을 이용한 단변량 수문 시계열의 동역학적 비선형 예측모형)

  • Kwon, Hyun-Han;Moon, Young-Il
    • KSCE Journal of Civil and Environmental Engineering Research
    • /
    • v.26 no.3B
    • /
    • pp.279-289
    • /
    • 2006
  • The reconstruction of low-dimensional nonlinear behavior from hydrologic time series has been an active area of research in the last decade. In this study, we present the application of a powerful state-space reconstruction methodology using Support Vector Machines (SVM) to the Great Salt Lake (GSL) volume. SVMs are machine learning systems that use a hypothesis space of linear functions in a kernel-induced higher-dimensional feature space. SVMs are optimized by minimizing a bound on a generalized error (risk) measure, rather than just the mean square error over a training set. The utility of this SVM regression approach is demonstrated through applications to short-term forecasts of the biweekly GSL volume. The SVM-based reconstruction is used to develop time series forecasts for multiple lead times ranging from two weeks to several months. The reliability of the algorithm in learning and forecasting the dynamics is tested using split-sample sensitivity analyses, with a particular interest in forecasting extreme states. Unlike previously reported methodologies, SVMs are able to extract the dynamics using only a few past observed data points (support vectors, SV) out of the training examples. In terms of statistical measures, the SVM-based prediction model demonstrated encouraging and promising results for short-term prediction. Thus, the SVM method presented in this study offers a competitive methodology for forecasting hydrologic time series.
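The essence of the approach — a time-delay (state-space) embedding of a single series fed to SVM regression, evaluated by split sampling — can be sketched with scikit-learn's SVR on a synthetic periodic series standing in for the biweekly GSL volume. The embedding dimension, kernel, and hyperparameters below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.svm import SVR

def embed(series, dim, lead):
    # State-space reconstruction: predict x[t + lead - 1] from the
    # preceding `dim` observations (time-delay embedding).
    X = np.array([series[i:i + dim] for i in range(len(series) - dim - lead + 1)])
    y = series[dim + lead - 1:]
    return X, y

rng = np.random.default_rng(1)
t = np.arange(600)
series = np.sin(2 * np.pi * t / 50) + 0.05 * rng.standard_normal(600)  # synthetic stand-in

X, y = embed(series, dim=6, lead=1)
split = int(0.8 * len(X))                       # split-sample testing
model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:split], y[:split])
pred = model.predict(X[split:])
rmse = float(np.sqrt(np.mean((pred - y[split:]) ** 2)))
```

Longer lead times are handled by simply increasing `lead`, at the cost of a harder regression problem.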

Predicting blast-induced ground vibrations at limestone quarry from artificial neural network optimized by randomized and grid search cross-validation, and comparative analyses with blast vibration predictor models

  • Salman Ihsan;Shahab Saqib;Hafiz Muhammad Awais Rashid;Fawad S. Niazi;Mohsin Usman Qureshi
    • Geomechanics and Engineering
    • /
    • v.35 no.2
    • /
    • pp.121-133
    • /
    • 2023
  • The demand for cement and limestone crushed materials has increased manyfold due to the tremendous increase in construction activities in Pakistan during the past few decades. The number of cement production industries has increased correspondingly, and so have the rock-blasting operations at limestone quarry sites. However, the safety procedures warranted at these sites for blast-induced ground vibrations (BIGV) have not been adequately developed and/or implemented. Proper prediction and monitoring of BIGV are necessary to ensure the safety of structures in the vicinity of these quarry sites. In this paper, an attempt has been made to predict BIGV using an artificial neural network (ANN) at three selected limestone quarries of Pakistan. The ANN has been developed in Python using Keras with a sequential model and dense layers. The hyperparameters and the neurons in each of the activation layers have been optimized using randomized and grid search methods. The input parameters for the model include distance, maximum charge per delay (MCPD), depth of hole, burden, spacing, and number of blast holes, whereas peak particle velocity (PPV) is taken as the only output parameter. A total of 110 blast vibration datasets were recorded from three different limestone quarries. The dataset was divided into 85% for neural network training and 15% for testing of the network. A five-layer ANN was trained with the Rectified Linear Unit (ReLU) activation function, the Adam optimization algorithm with a learning rate of 0.001, and a batch size of 32, with the topology 6-32-32-256-1. The blast datasets were utilized to compare the performance of the ANN, multivariate regression analysis (MVRA), and empirical predictors. The performance was evaluated using the coefficient of determination (R2), mean absolute error (MAE), mean squared error (MSE), mean absolute percentage error (MAPE), and root mean squared error (RMSE) for predicted and measured PPV. To determine the relative influence of each parameter on the PPV, sensitivity analyses were performed for all input parameters. The analyses reveal that the ANN performs better than MVRA and the other empirical predictors, and that 83% of the PPV is governed by distance and MCPD, while hole depth, number of blast holes, burden, and spacing contribute the remaining 17%. This research provides valuable insights into improving safety measures and ensuring the structural integrity of buildings near limestone quarry sites.
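The optimization workflow the abstract describes — an ANN with the 6-32-32-256-1 topology, ReLU activations, Adam at a 0.001 learning rate, an 85/15 split, and a cross-validated hyperparameter search — can be sketched compactly with scikit-learn on synthetic data. The feature generator, the tiny search grid, and `max_iter` below are illustrative assumptions, not the paper's Keras setup:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
# 110 synthetic records: distance, MCPD, hole depth, burden, spacing, number of holes
X = rng.random((110, 6))
# Synthetic PPV dominated by the first two features, loosely echoing the paper's finding
y = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.2 * X[:, 2:].sum(axis=1) + 0.05 * rng.standard_normal(110)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.15, random_state=0)  # 85/15 split
search = GridSearchCV(
    MLPRegressor(hidden_layer_sizes=(32, 32, 256),   # topology 6-32-32-256-1
                 activation="relu", solver="adam",
                 batch_size=32, max_iter=500, random_state=0),
    param_grid={"learning_rate_init": [0.001, 0.01]},  # small grid for brevity
    cv=3,
)
search.fit(X_tr, y_tr)
pred = search.predict(X_te)
```

In practice the grid would span several hyperparameters at once (layer widths, learning rate, batch size), with `RandomizedSearchCV` used first to narrow the ranges, as the paper's title suggests.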

Comparative analysis on darcy-forchheimer flow of 3-D MHD hybrid nanofluid (MoS2-Fe3O4/H2O) incorporating melting heat and mass transfer over a rotating disk with dufour and soret effects

  • A.M. Abd-Alla;Esraa N. Thabet;S.M.M.El-Kabeir;H. A. Hosham;Shimaa E. Waheed
    • Advances in nano research
    • /
    • v.16 no.4
    • /
    • pp.325-340
    • /
    • 2024
  • There are several novel uses for dispersing nanoparticles into a conventional fluid, including dynamic sealing, damping, heat dissipation, microfluidics, and more. Therefore, the melting heat and mass transfer characteristics of a 3-D MHD hybrid nanofluid flow over a rotating disc, including the Dufour and Soret effects, are assessed numerically in this study. In this instance, we investigated both magnetite (Fe3O4) and molybdenum disulfide (MoS2) as nanoparticles suspended in water as the base fluid. The governing partial differential equations are transformed into linked higher-order nonlinear ordinary differential equations by the local similarity transformation. The collection of these deduced equations is then resolved using a Chebyshev spectral collocation-based algorithm built into the Mathematica software. To demonstrate how different cases of the hybrid/nanofluid are affected by changes in temperature, velocity, and the distribution of nanoparticle concentration, examples of graphical and numerical data are given, and the computational findings are shown for many values of the material parameters. Simulations conducted for different physical parameters in the model show that adding hybrid nanoparticles to the fluid mixture increases heat transfer in comparison to simple nanofluids. It has been identified that hybrid nanoparticles, as opposed to single-type nanoparticles, need to be taken into consideration to create an effective thermal system. Furthermore, porosity lowers the velocities of simple and hybrid nanofluids in both cases. Additionally, the results show that the drag force from skin friction causes the nanoparticle fluid to travel more slowly than the hybrid nanoparticle fluid. The findings also demonstrate that suction factors such as the magnetic and porosity parameters, as well as the nanoparticles, raise the skin friction coefficient. Furthermore, the outcomes from the different flow scenarios correlate with, and are in strong agreement with, the findings of the published literature. Bar chart depictions are altered by changes in flow rates. Moreover, the results support prescribing hybrid nanoparticle and single-nanoparticle contents for achalasia patients and also for those who suffer from esophageal stricture and tumors. The results of this study can also be applied to the energy generated by the melting disc surface, which has a variety of industrial uses. These include, but are not limited to, the preparation of semiconductor materials, the solidification of magma, the melting of permafrost, and the refreezing of frozen land.
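The numerical method named above — Chebyshev spectral collocation for an ODE boundary-value problem — can be illustrated on a scalar model problem. The sketch below builds the standard Chebyshev differentiation matrix (after Trefethen) and solves u'' = u with u(-1) = 0, u(1) = 1, whose exact solution is sinh(x+1)/sinh(2); this simple model equation is an assumption for illustration, not the paper's coupled similarity equations:

```python
import numpy as np

def cheb(n):
    # Chebyshev differentiation matrix on the n+1 Gauss-Lobatto points.
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

n = 16
D, x = cheb(n)
A = D @ D - np.eye(n + 1)                   # collocated operator for u'' - u = 0
b = np.zeros(n + 1)
A[0, :] = 0.0; A[0, 0] = 1.0; b[0] = 1.0    # boundary row: u(+1) = 1  (x[0] = +1)
A[n, :] = 0.0; A[n, n] = 1.0; b[n] = 0.0    # boundary row: u(-1) = 0  (x[n] = -1)
u = np.linalg.solve(A, b)
err = float(np.max(np.abs(u - np.sinh(x + 1) / np.sinh(2))))   # spectral accuracy
```

Even with only 17 collocation points the error is near machine precision, which is why spectral collocation is attractive for smooth similarity equations like the paper's.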

Seasonal Variation of Thermal Effluents Dispersion from Kori Nuclear Power Plant Derived from Satellite Data (위성영상을 이용한 고리원자력발전소 온배수 확산의 계절변동)

  • Ahn, Ji-Suk;Kim, Sang-Woo;Park, Myung-Hee;Hwang, Jae-Dong;Lim, Jin-Wook
    • Journal of the Korean Association of Geographic Information Studies
    • /
    • v.17 no.4
    • /
    • pp.52-68
    • /
    • 2014
  • In this study, we investigated the seasonal variation of SST (sea surface temperature) and thermal effluents around the Kori Nuclear Power Plant over 10 years (2000~2010), estimated using Landsat-7 ETM+. We also analyzed the direction and extent of thermal effluent dispersion under the tidal current and tide. The results are as follows. First, we derived an algorithm to estimate SST through linear regression of Landsat DN (digital number) against NOAA SST; the estimated SST was verified by comparison with in situ measurements and NOAA SST, giving a determination coefficient of 0.97 and a root mean square error of 1.05~1.24°C. Second, the Landsat-7 SST distribution estimated by the linear regression equation was 12~13°C in winter, 13~19°C in spring, and 24~29°C and 16~24°C in summer and fall, respectively. The difference between the SST and the thermal effluent temperature was 6~8°C except in summer; in August the SST difference was up to 2°C, and there was hardly any dispersion of thermal effluents. As for the spread of the thermal effluents, the area of sea surface temperature rise of more than 1°C extended up to 7.56 km east-west and 8.43 km north-south, with a maximum spread area of 11.65 km². It is expected that the findings of this study will be used as foundational data for marine environment monitoring around the nuclear power plant.
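The calibration step described above — regressing matchup NOAA SSTs on Landsat DNs and reading the fit quality off R² and RMSE — can be sketched in a few lines of numpy. The DN range and the coefficients of the synthetic matchup data are hypothetical, not the study's calibration:

```python
import numpy as np

rng = np.random.default_rng(3)
dn = rng.uniform(50.0, 200.0, 100)                 # hypothetical Landsat-7 ETM+ thermal-band DNs
sst = 0.12 * dn - 2.0 + rng.normal(0.0, 0.5, 100)  # synthetic matchup NOAA SSTs (deg C)

slope, intercept = np.polyfit(dn, sst, 1)          # linear regression DN -> SST
pred = slope * dn + intercept
r2 = 1.0 - np.sum((sst - pred) ** 2) / np.sum((sst - sst.mean()) ** 2)
rmse = float(np.sqrt(np.mean((sst - pred) ** 2)))
```

Once `slope` and `intercept` are fixed, the same linear equation is applied pixel-by-pixel to a full DN scene to map SST, which is how the seasonal distributions above were produced.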

Selectively Partial Encryption of Images in Wavelet Domain (웨이블릿 영역에서의 선택적 부분 영상 암호화)

  • ;Dujit Dey
    • The Journal of Korean Institute of Communications and Information Sciences
    • /
    • v.28 no.6C
    • /
    • pp.648-658
    • /
    • 2003
  • As the usage of image/video contents increases, a security problem arises for paid image data or data requiring confidentiality. This paper proposes an image encryption methodology to hide image information, targeting the result of quantization in the wavelet domain. The method encrypts only part of the image data rather than the whole of the original image, using three types of data-selection methodology. First, exploiting the fact that the wavelet transform decomposes the original image into frequency sub-bands, only some of the frequency sub-bands are included in the encryption, making the resulting image unrecognizable. Second, in the data representing each pixel, only the MSBs are taken for encryption. Finally, the pixels to be encrypted in a specific sub-band are selected randomly using an LFSR (Linear Feedback Shift Register). Part of the encryption key is used for the seed value of the LFSR and for selecting the parallel output bits of the LFSR used in the random selection, which increases the strength of the encryption algorithm. Experiments were performed with the proposed methods implemented in software for about 500 images; the results showed that encrypting only about 1/1000 of the original image data is enough to make the original image unrecognizable. Consequently, the proposed methods are efficient image encryption methods that achieve a strong encryption effect with only a small amount of encryption. This paper also proposes several encryption schemes according to the selection of the sub-bands and the number of LFSR output bits used for pixel selection, and shows that there exists a trade-off between the execution time and the effect of the encryption, meaning that the proposed methods can be used selectively according to the application area. Also, because the proposed methods are performed in the application layer, they are expected to be a good solution for the end-to-end security problem, which is emerging as one of the important problems in networks with both wired and wireless sections.
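The random pixel-selection step can be made concrete with a small Fibonacci LFSR. The 16-bit register and the tap positions below (bits 15, 13, 12, 10, a standard maximal-length polynomial) are illustrative assumptions — the paper does not publish its register length or feedback polynomial; part of the key seeds the register, and its output indexes the pixels of a sub-band to encrypt:

```python
def lfsr_stream(seed, taps=(15, 13, 12, 10), nbits=16):
    # Fibonacci LFSR: shift left, feed back the XOR of the tapped bits.
    # Taps are one maximal-length choice for 16 bits (hypothetical here).
    mask = (1 << nbits) - 1
    state = seed & mask
    while True:
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & mask
        yield state

n_pixels = 1024                       # pixels in one hypothetical sub-band
gen = lfsr_stream(seed=0xACE1)        # seed would be derived from the encryption key
selected = [next(gen) % n_pixels for _ in range(64)]   # pixel indices to encrypt
```

Because the receiver holds the same key, it reseeds an identical register and regenerates the same pixel indices for decryption.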

Red Tide Detection through Image Fusion of GOCI and Landsat OLI (GOCI와 Landsat OLI 영상 융합을 통한 적조 탐지)

  • Shin, Jisun;Kim, Keunyong;Min, Jee-Eun;Ryu, Joo-Hyung
    • Korean Journal of Remote Sensing
    • /
    • v.34 no.2_2
    • /
    • pp.377-391
    • /
    • 2018
  • In order to efficiently monitor red tide over a wide range, the need for red tide detection using remote sensing is increasing. However, previous studies have focused on developing red tide detection algorithms for ocean colour sensors. In this study, we propose the use of multiple sensors to improve the inaccuracy of red tide detection and of remote sensing data in coastal areas with high turbidity, which are pointed out as limitations of satellite-based red tide monitoring. The study areas were selected based on the red tide information provided by the National Institute of Fisheries Science, and spatial fusion and spectral-based fusion were attempted using GOCI images as the ocean colour sensor and Landsat OLI images as the terrestrial sensor. Through spatial fusion of the two images, improved detection results were obtained both for the coastal red tide, which was impossible to observe in the GOCI images, and for the outer sea areas, where the quality of the Landsat OLI image was low. As a result of spectral-based fusion performed at the feature level and the raw-data level, there was no significant difference in the red tide distribution patterns derived from the two methods. However, in the feature-level method, the red tide area tends to be overestimated as the spatial resolution of the image decreases. As a result of pixel segmentation by the linear spectral unmixing method, the difference in the red tide area was found to increase as the number of pixels with a low red tide ratio increased. At the raw-data level, the Gram-Schmidt sharpening method estimated a somewhat larger area than the PC spectral sharpening method, but no significant difference was observed. This study shows that coastal red tide in waters with high turbidity, as well as in the outer sea areas, can be detected through spatial fusion of ocean colour and terrestrial sensors, and, by presenting various spectral-based fusion methods, it suggests a more accurate way of estimating the red tide area. It is expected that this result will provide more precise detection of red tide around the Korean peninsula and the accurate red tide area information needed to determine countermeasures to effectively control red tide.
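The linear spectral unmixing step mentioned above treats each pixel spectrum as a linear mix of endmember spectra and solves for the abundance fractions. In the sketch below, the three-band endmembers and the 70/30 mixture are made-up numbers for illustration only:

```python
import numpy as np

# Columns: hypothetical endmember spectra for red-tide water and clear water
# (rows are three spectral bands).
E = np.array([[0.10, 0.30],
              [0.40, 0.20],
              [0.25, 0.05]])
true_a = np.array([0.7, 0.3])          # true abundance fractions
pixel = E @ true_a                     # observed mixed-pixel spectrum

# Unconstrained least-squares unmixing; real pipelines add sum-to-one
# and non-negativity constraints on the abundances.
a, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```

Pixels whose recovered red-tide abundance exceeds a threshold are then counted toward the red tide area, which is why many low-ratio pixels inflate the area estimate as the abstract notes.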

Methods for Genetic Parameter Estimations of Carcass Weight, Longissimus Muscle Area and Marbling Score in Korean Cattle (한우의 도체중, 배장근단면적 및 근내지방도의 유전모수 추정방법)

  • Lee, D.H.
    • Journal of Animal Science and Technology
    • /
    • v.46 no.4
    • /
    • pp.509-516
    • /
    • 2004
  • This study investigates the amount of bias in estimates of heritability and genetic correlation according to the data structure of marbling scores in Korean cattle. Breeding populations with 5 generations were simulated by way of selection for carcass weight, longissimus muscle area, and latent values of marbling score, with random mating. Latent variables of marbling score were categorized into five classes by thresholds of 0, 1, 2, and 3 SD (DS1) or into seven classes by thresholds of -2, -1, 0, 1, 2, and 3 SD (DS2). Variance components and genetic parameters (heritabilities and genetic correlations) were estimated by restricted maximum likelihood (REML) on multivariate linear mixed animal models and by Gibbs sampling algorithms on multivariate threshold mixed animal models in DS1 and DS2. The simulation was performed for 10 replicates, and averages and empirical standard deviations were calculated. Using REML, the heritabilities of marbling score were under-estimated as 0.315 and 0.462 on DS1 and DS2, respectively, in comparison with the parameter (0.500). In contrast, using Gibbs sampling in the multivariate threshold animal models, these estimates did not differ significantly from the parameter. Residual correlations of marbling score with the other traits were reduced relative to the parameters when using the REML algorithm under the assumption of a linear, normally distributed trait, presumably due to the loss of information and therefore reduced variation in marbling score. In conclusion, the genetic variation of marbling would be well characterized if the liability concept were adopted for marbling score and threshold mixed models were implemented for genetic parameter estimation in Korean cattle.
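The categorization scheme above — cutting a latent, normally distributed liability at fixed SD thresholds — is easy to sketch; the sample size and random seed below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
latent = rng.standard_normal(10000)                    # latent marbling liability (unit SD)
ds1 = np.digitize(latent, [0, 1, 2, 3]) + 1            # DS1: thresholds 0,1,2,3 -> 5 categories
ds2 = np.digitize(latent, [-2, -1, 0, 1, 2, 3]) + 1    # DS2: 6 thresholds -> 7 categories
```

Fitting a linear model directly to `ds1` or `ds2` discards the spacing information of the latent scale, which is the source of the downward bias the study reports for REML; the threshold model avoids it by estimating on the liability scale.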

Performance Evaluation of Monitoring System for Sargassum horneri Using GOCI-II: Focusing on the Results of Removing False Detection in the Yellow Sea and East China Sea (GOCI-II 기반 괭생이모자반 모니터링 시스템 성능 평가: 황해 및 동중국해 해역 오탐지 제거 결과를 중심으로)

  • Han-bit Lee;Ju-Eun Kim;Moon-Seon Kim;Dong-Su Kim;Seung-Hwan Min;Tae-Ho Kim
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.6_2
    • /
    • pp.1615-1633
    • /
    • 2023
  • Sargassum horneri is a floating alga that breeds in large quantities in the Yellow Sea and East China Sea and then flows into the coast of the Republic of Korea, causing various problems such as destroying the environment and damaging fish farms. In order to effectively prevent damage and preserve the coastal environment, detection algorithms for Sargassum horneri using satellite-based remote sensing technology have been actively developed. However, incorrect detection information increases the travel distance of the ships collecting Sargassum horneri and causes confusion in the response of the local governments or institutions involved, so minimizing false detections is very important when producing Sargassum horneri spatial information. This study applied technology to automatically remove false detection results from the GOCI-II-based Sargassum horneri detection algorithm of the National Ocean Satellite Center (NOSC) of the Korea Hydrographic and Oceanographic Agency (KHOA). Based on an analysis of the causes of the major false detections, the process removes linear and sporadic false detections and also treats as false detections the green algae that occur in large quantities along the coast of China in spring and summer. The automatic false-detection removal was applied to the dates on which Sargassum horneri occurred from February 24 to June 25, 2022. Visual assessment results were generated using mid-resolution satellite images, and qualitative and quantitative evaluations were performed. Linear false detections were completely removed, and most of the sporadic and green-algae false detections that affected the distribution were removed. Even after the automatic false-detection removal process, the distribution area of Sargassum horneri could be confirmed against the visual assessment results, and the accuracy and precision calculated using a binary classification model averaged 97.73% and 95.4%, respectively. The recall value was very low at 29.03%, which is presumed to be due to Sargassum horneri movement during the observation-time discrepancy between GOCI-II and the mid-resolution satellite images, differences in spatial resolution, positional deviation from orthocorrection, and cloud masking. The false-detection removal results of this study make it possible to determine the spatial distribution of Sargassum horneri in near real time, but there are limitations in accurately estimating biomass. Therefore, continuous research on upgrading the Sargassum horneri monitoring system must be conducted so that it can serve as data for establishing future Sargassum horneri response plans.
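The quoted accuracy/precision/recall figures follow from the usual binary confusion-matrix definitions; the counts below are hypothetical, chosen only to reproduce the paper's pattern of high precision with low recall:

```python
def binary_metrics(tp, fp, fn, tn):
    # Standard binary-classification metrics from confusion-matrix counts.
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

# Hypothetical pixel counts: most true Sargassum pixels are missed (low recall),
# but almost every pixel the algorithm flags is correct (high precision), and
# the many true-negative sea pixels keep accuracy high.
acc, prec, rec = binary_metrics(tp=18, fp=1, fn=44, tn=2500)
```

This combination is exactly what an aggressive false-detection filter produces: it trades missed detections (lower recall) for trustworthy positives (higher precision).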

The Adaptive Personalization Method According to Users Purchasing Index : Application to Beverage Purchasing Predictions (고객별 구매빈도에 동적으로 적응하는 개인화 시스템 : 음료수 구매 예측에의 적용)

  • Park, Yoon-Joo
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.4
    • /
    • pp.95-108
    • /
    • 2011
  • This is a study of a personalization method that intelligently adapts the level of clustering to the purchasing index of a customer. In the e-biz era, many companies gather customers' demographic and transactional information, such as age, gender, purchasing date, and product category, and use it to predict customers' preferences or purchasing patterns so that they can provide more customized services. The conventional Customer-Segmentation method provides customized services for each customer group: it clusters the whole customer set into groups based on similarity and builds a predictive model for each resulting group. It can thus manage the number of predictive models and provide more data for customers who do not have enough data of their own to build a good predictive model, by using the data of other similar customers. However, this method often fails to provide highly personalized services to each customer, which is especially important for VIP customers. Furthermore, it clusters the customers who already have a considerable amount of data together with the customers who have only a small amount, which increases computational cost unnecessarily without significant performance improvement. The other conventional method, called the 1-to-1 method, provides more customized services than the Customer-Segmentation method, since a predictive model is built using only the data of the individual customer. This method not only provides highly personalized services but also builds a relatively simple and less costly model for each customer. However, the 1-to-1 method does not produce a good predictive model when a customer has only a few records; in other words, if a customer has an insufficient amount of transactional data, the performance of this method deteriorates.
In order to overcome the limitations of these two conventional methods, we suggest a new method, called the Intelligent Customer Segmentation method, that provides adaptively personalized services according to the customer's purchasing index. The suggested method clusters customers according to their purchasing index, so that predictions for less frequently purchasing customers are based on the data of more intensively clustered groups, while VIP customers, who already have a considerable amount of data, are clustered to a much lesser extent or not at all. The main idea is to apply the clustering technique when the number of transactional records of the target customer is less than a predefined criterion data size. In order to find this criterion, we suggest an algorithm called sliding-window correlation analysis, which finds the transactional data size at which the performance of the 1-to-1 method decreases sharply due to data sparsity. After finding this criterion data size, we apply the conventional 1-to-1 method to the customers who have more data than the criterion, and apply the clustering technique to those who have less, until at least the criterion amount of data is available for the model-building process. We apply the two conventional methods and the newly suggested method to Nielsen's beverage purchasing data to predict customers' purchasing amounts and purchasing categories, using two data mining techniques (Support Vector Machine and Linear Regression) and two performance measures (MAE and RMSE). The results show that the suggested Intelligent Customer Segmentation method outperforms the conventional 1-to-1 method in many cases and produces the same level of performance as the Customer-Segmentation method at much lower computational cost.
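The decision rule of the suggested method — model a customer 1-to-1 when they have at least the criterion amount of data, otherwise borrow records from similar customers until the criterion is met — can be sketched as follows; the function and variable names are hypothetical, not the paper's implementation:

```python
def training_records(customer_records, peers_by_similarity, criterion):
    # Sketch of the Intelligent Customer Segmentation rule: customers with
    # enough data keep a pure 1-to-1 model; data-poor customers pool records
    # from their most similar peers until the criterion size is reached.
    if len(customer_records) >= criterion:
        return list(customer_records)        # enough data: plain 1-to-1 model
    pooled = list(customer_records)
    for peer in peers_by_similarity:         # most similar customers first
        if len(pooled) >= criterion:
            break
        pooled.extend(peer)                  # borrow a similar customer's records
    return pooled
```

The `criterion` value itself is what the sliding-window correlation analysis estimates: the data size below which the 1-to-1 model's performance collapses.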