• Title/Summary/Keyword: input parameter

Search Results: 1,636

Prediction of Greenhouse Strawberry Production Using Machine Learning Algorithm (머신러닝 알고리즘을 이용한 온실 딸기 생산량 예측)

  • Kim, Na-eun; Han, Hee-sun; Arulmozhi, Elanchezhian; Moon, Byeong-eun; Choi, Yung-Woo; Kim, Hyeon-tae
    • Journal of Bio-Environment Control, v.31 no.1, pp.1-7, 2022
  • Strawberry is one of the most widely cultivated fruits in Korea, and optimum production depends heavily on the growing environment. Smart farm technology and automatic monitoring and control systems maintain a favorable environment for strawberry growth in greenhouses and play an important role in improving production. Moreover, the physiological parameters of the strawberry plant and its surrounding environment can provide an indication of expected production. This study therefore builds a machine learning model to predict the yield of greenhouse-cultivated strawberry. Environmental parameters such as temperature, humidity, and CO2, and physiological parameters such as leaf length, number of flowers and fruits, and chlorophyll content of 'Seolhyang' (a widely grown strawberry cultivar in Korea) were collected from three strawberry greenhouses located in Sacheon, Gyeongsangnam-do, during 2019-2020. A Lasso regression model was designed and validated through 5-fold cross-validation. The Lasso regression model predicted the number of flowers and fruits well, with MAPE values of 0.511 and 0.488, respectively, during model validation. Overall, the study demonstrates that an AI-based regression model can be a convenient way for farms and agricultural companies to predict crop yield from relatively few input attributes.
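
As a rough illustration of the modeling step described in this abstract, the sketch below fits a Lasso regression with 5-fold cross-validation scored by MAPE. The synthetic data and feature layout are assumptions standing in for the greenhouse measurements, not the authors' dataset.

```python
# Minimal sketch: Lasso regression validated with 5-fold CV, scored by MAPE.
# Feature columns are hypothetical stand-ins for the environmental and
# physiological variables described in the abstract.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 6))        # temperature, humidity, CO2, leaf length, chlorophyll, ...
y = X @ rng.normal(size=6) + rng.normal(scale=0.5, size=n) + 10.0  # e.g. number of fruits

model = make_pipeline(StandardScaler(), Lasso(alpha=0.1))
cv = KFold(n_splits=5, shuffle=True, random_state=0)
mape = -cross_val_score(model, X, y, cv=cv,
                        scoring="neg_mean_absolute_percentage_error")
print("5-fold MAPE:", mape.mean())
```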

Optimization of Approximate Modular Multiplier for R-LWE Cryptosystem (R-LWE 암호화를 위한 근사 모듈식 다항식 곱셈기 최적화)

  • Lee, Jae-Woo; Kim, Youngmin
    • Journal of IKEEE, v.26 no.4, pp.736-741, 2022
  • Lattice-based cryptography is the most practical post-quantum cryptography because it enjoys strong worst-case security, relatively efficient implementation, and simplicity. Ring learning with errors (R-LWE) is a public key encryption (PKE) scheme in lattice-based cryptography (LBC), and its most important operation is modular polynomial multiplication over rings. This paper proposes methods for optimizing modular multipliers based on approximate computing (AC), targeting the medium-security parameter set of the R-LWE cryptosystem. First, as a simple way to implement complex logic, a LUT is used to omit some of the approximate multiplication operations, and the 2's complement method is used to count the number of bits whose value is 1 when the input data are converted to binary, minimizing the number of adders required. In total, two methods are proposed. The proposed LUT-based modular multiplier reduced both speed and area by 9% compared to the existing R-LWE modular multiplier, and the modular multiplier using the 2's complement method reduced the area by 40% and improved the speed by 2%. Finally, the optimized modular multiplier applying both methods reduced the area by up to 43% compared to the previous design, while the speed decreased by up to 10%.
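
For readers unfamiliar with the ring operation being optimized, the sketch below is a plain software reference for modular polynomial multiplication in Z_q[x]/(x^n + 1) together with a simple popcount. The parameters are illustrative, and nothing here reproduces the paper's approximate LUT or 2's-complement hardware design.

```python
# Software reference for the core R-LWE operation the paper optimizes in
# hardware: multiplication of polynomials in the ring Z_q[x]/(x^n + 1).
# Parameters below are illustrative, not the paper's medium-security set,
# and this is exact schoolbook multiplication, not the approximate design.
def polymul_rlwe(a, b, n, q):
    """Multiply a, b (length-n coefficient lists) modulo x^n + 1 and q."""
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                res[k] = (res[k] + ai * bj) % q          # ordinary term
            else:
                res[k - n] = (res[k - n] - ai * bj) % q  # x^n = -1 wraps with a sign flip
    return res

def popcount(x):
    """Count 1-bits, the quantity the paper's 2's-complement trick tallies."""
    return bin(x).count("1")

print(polymul_rlwe([1, 2, 3, 4], [5, 6, 7, 8], n=4, q=257))
print(popcount(0b101101))
```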

Development of Dolphin Click Signal Classification Algorithm Based on Recurrent Neural Network for Marine Environment Monitoring (해양환경 모니터링을 위한 순환 신경망 기반의 돌고래 클릭 신호 분류 알고리즘 개발)

  • Seoje Jeong; Wookeen Chung; Sungryul Shin; Donghyeon Kim; Jeasoo Kim; Gihoon Byun; Dawoon Lee
    • Geophysics and Geophysical Exploration, v.26 no.3, pp.126-137, 2023
  • In this study, a recurrent neural network (RNN) was employed to classify dolphin click signals derived from ocean monitoring data. To improve the accuracy of click signal classification, the single time series data were transformed into fractional domains using the fractional Fourier transform to expand their features. The transformed data were used as input for three RNN models: long short-term memory (LSTM), gated recurrent unit (GRU), and bidirectional LSTM (BiLSTM), which were compared to determine the optimal network for signal classification. Because the fractional Fourier transform displays different characteristics depending on the chosen angle parameter, the optimal angle range for each RNN was first determined. To evaluate network performance, accuracy, precision, recall, and F1-score were employed. Numerical experiments demonstrated that all three networks performed well; however, the BiLSTM network outperformed the LSTM and GRU in terms of learning results. Furthermore, the BiLSTM network produced fewer misclassifications than the other networks and was deemed the most practically applicable to field data.
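
A minimal sketch of the best-performing network type (BiLSTM) is shown below, assuming PyTorch, a hypothetical feature size, and a two-class output; the fractional Fourier transform preprocessing is not reproduced.

```python
# Minimal BiLSTM classifier over time-series features, as a sketch of the
# model family compared in the paper. Input size, sequence length, and the
# two-class output are assumptions.
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, n_features=8, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # 2x for both directions

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # classify from the last time step

model = BiLSTMClassifier()
dummy = torch.randn(4, 100, 8)            # 4 sequences, 100 steps, 8 features
logits = model(dummy)
print(logits.shape)                        # torch.Size([4, 2])
```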

Predictive Equation of Dynamic Modulus for Hot Mix Asphalt with Granite Aggregates (화강암 골재를 이용한 아스팔트 혼합물의 동탄성 계수 예측방정식)

  • Lee, Kwan-Ho; Kim, Hyun-O; Jang, Min-Seok
    • KSCE Journal of Civil and Environmental Engineering Research, v.26 no.3D, pp.425-433, 2006
  • The present work provides a predictive equation for the dynamic modulus of hot mix asphalt that is both reliable and simple. A large number of laboratory test results obtained with a universal testing machine (UTM) were used to develop the predictive equation. The dynamic modulus was evaluated for 13 mm and 19 mm surface-course and 25 mm base-course hot mix asphalt with granite aggregate and two asphalt binders (AP-3 and AP-5). The Superpave Level 1 mix design with a gyratory compactor was adopted to determine the optimum asphalt binder content (OAC); the measured OAC ranged between 5.1% and 5.4% for surface HMA and was around 4.2% for base HMA. The dynamic modulus and phase angle were determined by UTM testing at five temperatures (-10, 5, 20, 40, and 55°C) and five loading frequencies (0.05, 0.1, 1, 10, and 25 Hz). Using the measured dynamic modulus and phase angle, the input parameters of the sigmoidal function representing the master curve were determined; these will be adopted in FEM analysis of asphalt pavements. The effect of each parameter in the equation was compared. Due to the limitations of the laboratory tests, the reliability of the predictive equation for dynamic modulus is around 80%.
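
The "sigmoidal function" for the master curve is commonly written as log|E*| = delta + alpha / (1 + exp(beta + gamma * log t_r)). The sketch below fits that form to synthetic data with SciPy, under the assumption that the authors used this standard parameterization; the data are not the paper's measurements.

```python
# Sketch: fit a sigmoidal master-curve function of the common form
# log|E*| = delta + alpha / (1 + exp(beta + gamma * log(tr))).
# Synthetic data stand in for measured dynamic moduli.
import numpy as np
from scipy.optimize import curve_fit

def sigmoidal(log_tr, delta, alpha, beta, gamma):
    return delta + alpha / (1.0 + np.exp(beta + gamma * log_tr))

log_tr = np.linspace(-5, 5, 40)                       # log reduced time
true = sigmoidal(log_tr, 1.5, 2.5, -1.0, -0.6)
log_E = true + np.random.default_rng(0).normal(scale=0.02, size=log_tr.size)

params, _ = curve_fit(sigmoidal, log_tr, log_E, p0=[1.0, 2.0, 0.0, -1.0])
print("delta, alpha, beta, gamma =", params)
```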

Development of PSC I Girder Bridge Weigh-in-Motion System without Axle Detector (축감지기가 없는 PSC I 거더교의 주행중 차량하중분석시스템 개발)

  • Park, Min-Seok; Jo, Byung-Wan; Lee, Jungwhee; Kim, Sungkon
    • KSCE Journal of Civil and Environmental Engineering Research, v.28 no.5A, pp.673-683, 2008
  • This study improved the existing method of using longitudinal strain and the influence line concept to develop a bridge weigh-in-motion (BWIM) system without an axle detector, using the dynamic strain of the bridge girders and concrete slab. This paper first describes the algorithms considered for extracting passing-vehicle information from the dynamic strain signals measured at the bridge slab, girders, and cross beams. Two analysis methods, 1) the influence line method and 2) the neural network method, are considered, and a parameter study of measurement locations is also performed. Then the procedures and results of the field tests are described. The field tests were performed to acquire training and test sets for the neural networks and to verify and compare the performance of the considered algorithms. Finally, the results of the different algorithms are compared and discussed. For a PSC I-girder bridge, vehicle weight can be calculated within a reasonable error range using the dynamic strain gauges installed on the girders. The passing lane and passing speed of the vehicle can be accurately estimated using the strain signal from the concrete slab. The passing speed and peak duration were added to the input variables to reflect the influence of the dynamic interaction between the bridge and vehicles and the impact of the distance between axles, respectively, thus improving the accuracy of the weight calculation.
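
A compact sketch of the influence-line idea referenced above: the measured strain record is modeled as a sum of axle weights times a shifted influence line, and the weights are recovered by least squares. The influence-line shape, arrival samples, and weights are synthetic placeholders, not the paper's field data or its neural network method.

```python
# Sketch of influence-line-based BWIM weight estimation by least squares.
import numpy as np

n_samples, n_axles = 600, 3
L = 200                                                   # influence line length in samples
infl = np.concatenate([np.linspace(0, 1, L // 2),
                       np.linspace(1, 0, L // 2)])        # triangular influence line
arrivals = [100, 160, 260]                                # sample at which each axle enters
true_w = np.array([60.0, 95.0, 90.0])                     # axle weights (kN), synthetic

# Design matrix: column j is the influence line placed at axle j's arrival.
A = np.zeros((n_samples, n_axles))
for j, a in enumerate(arrivals):
    A[a:a + L, j] = infl
strain = A @ true_w + np.random.default_rng(1).normal(scale=0.5, size=n_samples)

w_est, *_ = np.linalg.lstsq(A, strain, rcond=None)
print("estimated axle weights:", w_est)
```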

Realtime Streamflow Prediction using Quantitative Precipitation Model Output (정량강수모의를 이용한 실시간 유출예측)

  • Kang, Boosik; Moon, Sujin
    • KSCE Journal of Civil and Environmental Engineering Research, v.30 no.6B, pp.579-587, 2010
  • Mid-range streamflow forecasting was performed using NWP (Numerical Weather Prediction) output provided by the KMA. The NWP consists of RDAPS for 48-hour forecasts and GDAPS for 240-hour forecasts. To enhance the accuracy of the NWP, QPM was applied to downscale the original output and quantile mapping was applied to adjust systematic biases. The applicability of the suggested streamflow prediction system was verified in the Geum River basin. In the system, streamflow was simulated with the long-term continuous SSARR model, with the rainfall prediction transformed into the input format required by SSARR. The 2-day RQPM rainfall predictions for the period of Jan. 1~Jun. 20, 2006 showed reasonable predictability, with total RQPM precipitation amounting to 89.7% of the observed precipitation. The streamflow forecast driven by the 2-day RQPM followed the observed hydrograph pattern with high accuracy, even though missed forecasts and false alarms occurred for some rainfall events. However, predictability decreased at downstream stations such as Gyuam because of the difficulty of calibrating the rainfall-runoff model for controlled streamflow and the reduced reliability of the rating curve at gauging stations with large cross-sectional areas. The 10-day precipitation prediction using GQPM significantly underestimated the peak and total amounts, which clearly affected the streamflow prediction. Improving the GDAPS forecast through post-processing appears to have limitations, and efforts to stabilize or reform the original NWP are needed.
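
The quantile-mapping step can be illustrated with the small sketch below, which maps forecast values onto the observed climatology through their empirical non-exceedance probabilities; the gamma-distributed samples are stand-ins for the NWP forecasts and gauge observations, not the study's data.

```python
# Empirical quantile mapping: each forecast value is assigned its
# non-exceedance probability in the forecast climatology and then read off
# at the same probability in the observed climatology.
import numpy as np

def quantile_mapping(forecast, fcst_clim, obs_clim):
    probs = np.linspace(0.0, 1.0, 101)
    fcst_q = np.quantile(fcst_clim, probs)
    obs_q = np.quantile(obs_clim, probs)
    p = np.interp(forecast, fcst_q, probs)   # forecast's non-exceedance probability
    return np.interp(p, probs, obs_q)        # observed value at that probability

rng = np.random.default_rng(0)
obs_clim = rng.gamma(shape=2.0, scale=5.0, size=1000)    # "observed" rainfall
fcst_clim = rng.gamma(shape=2.0, scale=3.5, size=1000)   # biased "forecast" rainfall
corrected = quantile_mapping(fcst_clim[:5], fcst_clim, obs_clim)
print(corrected)
```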

Driving Behavior Optimization Using Genetic Algorithm and Analysis of Traffic Safety for Non-Autonomous Vehicles by Autonomous Vehicle Penetration Rate (유전알고리즘을 이용한 주행행태 최적화 및 자율주행차 도입률별 일반자동차 교통류 안전성 분석)

  • Somyoung Shin; Shinhyoung Park; Jiho Kim
    • The Journal of The Korea Institute of Intelligent Transport Systems, v.22 no.5, pp.30-42, 2023
  • Various studies have used microscopic traffic simulation (VISSIM) to analyze the safety of traffic flow when autonomous vehicles are introduced. However, no study has analyzed traffic safety in mixed traffic while treating the driving behavior of general vehicles as a parameter in VISSIM. Therefore, the aim of this study was to optimize the VISSIM input variables for non-autonomous vehicles through a genetic algorithm to obtain realistic driving behavior. A traffic safety analysis was then performed according to the penetration rate of autonomous vehicles. In a 640 m section of US highway I-101, the number of conflicts was analyzed for cases where the trailing vehicle was a non-autonomous vehicle. The total number of conflicts increased until the proportion of autonomous vehicles exceeded 20% and decreased continuously thereafter. The number of conflicts between non-autonomous and autonomous vehicles increased with autonomous vehicle proportions of up to 60%. However, this study is limited in that the driving behavior of autonomous vehicles was based on the literature and did not represent actual driving behavior. Therefore, for a more accurate analysis, future studies should reflect the actual driving behavior of autonomous vehicles.
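
The calibration step can be sketched as follows: a simple genetic algorithm searches driving-behavior parameter vectors to minimize a simulation-versus-field error. The `simulation_error` function is a hypothetical placeholder for a run of the microscopic simulator (e.g., VISSIM via its COM interface), and the parameter bounds are illustrative, not the paper's.

```python
# Hedged sketch of GA-based calibration of driving-behavior parameters.
# `simulation_error` is a hypothetical placeholder for a simulator run.
import random

BOUNDS = [(0.5, 3.0),   # e.g. desired headway time (s)      -- illustrative
          (0.0, 2.0),   # e.g. standstill distance offset (m) -- illustrative
          (-1.0, 1.0)]  # e.g. acceleration behavior factor   -- illustrative

def simulation_error(params):
    # Placeholder objective: distance from a made-up "field-matching" optimum.
    target = [1.2, 0.8, 0.1]
    return sum((p - t) ** 2 for p, t in zip(params, target))

def random_individual():
    return [random.uniform(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(ind, rate=0.2):
    return [min(hi, max(lo, g + random.gauss(0, 0.1))) if random.random() < rate else g
            for g, (lo, hi) in zip(ind, BOUNDS)]

random.seed(0)
pop = [random_individual() for _ in range(30)]
for _ in range(50):
    pop.sort(key=simulation_error)
    parents = pop[:10]                                   # elitist selection
    pop = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(20)]
print("best parameters:", min(pop, key=simulation_error))
```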

A Study on Efficient AI Model Drift Detection Methods for MLOps (MLOps를 위한 효율적인 AI 모델 드리프트 탐지방안 연구)

  • Ye-eun Lee; Tae-jin Lee
    • Journal of Internet Computing and Services, v.24 no.5, pp.17-27, 2023
  • Today, as AI (artificial intelligence) technology develops and becomes more practical, it is widely used in various application fields in everyday life. An AI model is trained on the statistical properties of its training data and then deployed to the system, but unexpected changes in the data in a rapidly changing environment degrade the model's performance. In the security field in particular, it is important to detect drift signals of deployed models in order to respond to new and unknown attacks that are constantly being created, so the need for lifecycle management of the whole model is gradually emerging. In general, drift can be detected through changes in the model's accuracy and error rate (loss), but this requires actual labels for the model's predictions, which limits the usage environment, and it is difficult to pinpoint when the drift actually occurs. Because the model's error rate is strongly influenced by various external environmental factors, model selection and parameter settings, and new input data, it is hard to determine precisely from that value alone when real drift in the data occurs. Therefore, this paper proposes a method for detecting when actual drift occurs through an anomaly-analysis technique based on XAI (eXplainable Artificial Intelligence). In tests on a classification model that detects DGA (Domain Generation Algorithm) domains, anomaly scores were extracted from the SHAP (SHapley Additive exPlanations) values of the post-deployment data, and the results confirmed that efficient detection of the drift point is possible.
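
A hedged sketch of the drift-detection idea follows: explain each prediction with SHAP values and score how unusual the explanation is relative to a reference window. The DGA-style features, the XGBoost classifier, and the IsolationForest anomaly scorer are assumptions for illustration, not the paper's exact pipeline.

```python
# Sketch: SHAP values of post-deployment data scored against reference
# explanations with an anomaly model; all data here are synthetic.
import numpy as np
import shap
from sklearn.ensemble import IsolationForest
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X_ref = rng.normal(size=(500, 10))                 # reference (training-time) data
y_ref = (X_ref[:, 0] + X_ref[:, 1] > 0).astype(int)
X_new = rng.normal(loc=0.8, size=(200, 10))        # post-deployment data with a shift

clf = XGBClassifier(n_estimators=50, max_depth=3, verbosity=0).fit(X_ref, y_ref)
explainer = shap.TreeExplainer(clf)
shap_ref = explainer.shap_values(X_ref)            # per-feature contributions
shap_new = explainer.shap_values(X_new)

detector = IsolationForest(random_state=0).fit(shap_ref)
scores = -detector.score_samples(shap_new)         # higher = more anomalous explanation
print("mean anomaly score on new data:", scores.mean())
```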

Application of Multiple Linear Regression Analysis and Tree-Based Machine Learning Techniques for Cutter Life Index(CLI) Prediction (커터수명지수 예측을 위한 다중선형회귀분석과 트리 기반 머신러닝 기법 적용)

  • Ju-Pyo Hong; Tae Young Ko
    • Tunnel and Underground Space, v.33 no.6, pp.594-609, 2023
  • The TBM (Tunnel Boring Machine) method is gaining popularity in urban and underwater tunneling projects due to its ability to ensure excavation-face stability and minimize environmental impact. Among the prominent models for predicting disc cutter life, the NTNU model uses the Cutter Life Index (CLI) as a key parameter, but the complexity of the testing procedure and the rarity of the equipment make the index difficult to measure. In this study, CLI was predicted from rock properties using multiple linear regression analysis and tree-based machine learning techniques. Through a literature review, a database including rock uniaxial compressive strength, Brazilian tensile strength, equivalent quartz content, and Cerchar abrasivity index was built, and derived variables were added. The multiple linear regression analysis selected input variables based on statistical significance and multicollinearity, while the machine learning prediction models chose variables based on their importance. Dividing the data into 80% for training and 20% for testing, a comparative analysis of predictive performance was conducted, and XGBoost was identified as the optimal model. The validity of the multiple linear regression and XGBoost models derived in this study was confirmed by comparing their predictive performance with prior research.
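
The comparison described above can be sketched as below, assuming an 80/20 split, a multiple linear regression baseline, and an XGBoost regressor; the synthetic features stand in for the uniaxial compressive strength, Brazilian tensile strength, equivalent quartz content, and Cerchar abrasivity index database.

```python
# Sketch: multiple linear regression vs. XGBoost for CLI prediction with an
# 80/20 train/test split; the data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 4))                       # UCS, BTS, EQC, CAI (scaled)
y = 30 * X[:, 0] - 10 * X[:, 3] + rng.normal(scale=2.0, size=300)  # synthetic CLI

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
mlr = LinearRegression().fit(X_tr, y_tr)
xgb = XGBRegressor(n_estimators=300, max_depth=3, learning_rate=0.05).fit(X_tr, y_tr)

print("MLR R2:", r2_score(y_te, mlr.predict(X_te)))
print("XGBoost R2:", r2_score(y_te, xgb.predict(X_te)))
```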

Simultaneous Optimization of KNN Ensemble Model for Bankruptcy Prediction (부도예측을 위한 KNN 앙상블 모형의 동시 최적화)

  • Min, Sung-Hwan
    • Journal of Intelligence and Information Systems, v.22 no.1, pp.139-157, 2016
  • Bankruptcy involves considerable costs, so it can have significant effects on a country's economy. Thus, bankruptcy prediction is an important issue. Over the past several decades, many researchers have addressed topics associated with bankruptcy prediction. Early research on bankruptcy prediction employed conventional statistical methods such as univariate analysis, discriminant analysis, multiple regression, and logistic regression. Later on, many studies began utilizing artificial intelligence techniques such as inductive learning, neural networks, and case-based reasoning. Currently, ensemble models are being utilized to enhance the accuracy of bankruptcy prediction. Ensemble classification involves combining multiple classifiers to obtain more accurate predictions than those obtained using individual models. Ensemble learning techniques are known to be very useful for improving the generalization ability of the classifier. Base classifiers in the ensemble must be as accurate and diverse as possible in order to enhance the generalization ability of an ensemble model. Commonly used methods for constructing ensemble classifiers include bagging, boosting, and random subspace. The random subspace method selects a random feature subset for each classifier from the original feature space to diversify the base classifiers of an ensemble. Each ensemble member is trained on a randomly chosen feature subspace from the original feature set, and predictions from the ensemble members are combined by an aggregation method. The k-nearest neighbors (KNN) classifier is robust with respect to variations in the dataset but is very sensitive to changes in the feature space. For this reason, KNN is a good classifier for the random subspace method. The KNN random subspace ensemble model has been shown to be very effective for improving an individual KNN model. The k parameter of the KNN base classifiers and the feature subsets selected for the base classifiers play an important role in determining the performance of the KNN ensemble model. However, few studies have focused on optimizing the k parameter and feature subsets of base classifiers in the ensemble. This study proposed a new ensemble method that improves upon the performance of the KNN ensemble model by optimizing both the k parameters and the feature subsets of the base classifiers. A genetic algorithm was used to optimize the KNN ensemble model and improve its prediction accuracy. The proposed model was applied to a bankruptcy prediction problem using a real dataset from Korean companies. The research data included 1,800 externally non-audited firms that filed for bankruptcy (900 cases) or non-bankruptcy (900 cases). Initially, the dataset consisted of 134 financial ratios. Prior to the experiments, 75 financial ratios were selected based on an independent-sample t-test of each financial ratio as an input variable and bankruptcy or non-bankruptcy as an output variable. Of these, 24 financial ratios were selected using a logistic regression backward feature selection method. The complete dataset was separated into two parts: training and validation. The training dataset was further divided into two portions: one for training the model and the other for avoiding overfitting. The prediction accuracy against this dataset was used to determine the fitness value in order to avoid overfitting. The validation dataset was used to evaluate the effectiveness of the final model.
A 10-fold cross-validation was implemented to compare the performances of the proposed model and other models. To evaluate the effectiveness of the proposed model, the classification accuracy of the proposed model was compared with that of other models. The Q-statistic values and average classification accuracies of base classifiers were investigated. The experimental results showed that the proposed model outperformed other models, such as the single model and random subspace ensemble model.
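
A minimal sketch of the baseline the proposed method improves on, a KNN random subspace ensemble with majority voting, is given below. In the proposed method, a genetic algorithm would search each base classifier's k and feature subset instead of the random choices made here, and the synthetic data stands in for the 24 financial ratios.

```python
# Sketch of a KNN random subspace ensemble with majority voting.
# The k values and feature subsets drawn at random below are the quantities
# the paper proposes to optimize jointly with a genetic algorithm.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 24))                       # 24 financial ratios (synthetic)
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)        # bankrupt vs. non-bankrupt (synthetic)
X_tr, y_tr, X_te, y_te = X[:300], y[:300], X[300:], y[300:]

ensemble = []
for _ in range(15):                                  # 15 base classifiers
    feats = rng.choice(24, size=8, replace=False)    # random feature subspace
    k = int(rng.integers(3, 12))                     # k would be GA-optimized in the paper
    clf = KNeighborsClassifier(n_neighbors=k).fit(X_tr[:, feats], y_tr)
    ensemble.append((feats, clf))

votes = np.mean([clf.predict(X_te[:, f]) for f, clf in ensemble], axis=0)
pred = (votes >= 0.5).astype(int)                    # majority vote
print("ensemble accuracy:", (pred == y_te).mean())
```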