• Title/Summary/Keyword: Machine optimization


Improving Field Crop Classification Accuracy Using GLCM and SVM with UAV-Acquired Images

  • Seung-Hwan Go; Jong-Hwa Park
    • Korean Journal of Remote Sensing, v.40 no.1, pp.93-101, 2024
  • Accurate field crop classification is essential for various agricultural applications, yet existing methods face challenges due to diverse crop types and complex field conditions. This study aimed to address these issues by combining support vector machine (SVM) models with multi-seasonal unmanned aerial vehicle (UAV) images, texture information extracted from the Gray Level Co-occurrence Matrix (GLCM), and RGB spectral data. Twelve high-resolution UAV image captures spanned March to October 2021, while field surveys on three dates provided ground truth data. We focused on data from the August (-A), September (-S), and October (-O) images and trained four support vector classifier (SVC) models (SVC-A, SVC-S, SVC-O, SVC-AS) using visual bands and eight GLCM features. Farm maps provided by the Ministry of Agriculture, Food and Rural Affairs proved efficient for open-field crop identification and served as a reference for accuracy comparison. Our analysis showed the significant impact of hyperparameter tuning (C and gamma) on SVM model performance, requiring careful optimization for each scenario. Importantly, we identified models exhibiting distinct high-accuracy zones, with SVC-O, trained on October data, achieving the highest overall and individual crop classification accuracy. This success likely stems from its ability to capture distinct texture information from mature crops. Incorporating GLCM features proved highly effective for all models, significantly boosting classification accuracy. Among these features, homogeneity, entropy, and correlation consistently made the most impactful contribution. However, balancing accuracy with computational efficiency and feature selection remains crucial for practical application. Performance analysis revealed that SVC-O achieved exceptional results in overall and individual crop classification, while soybeans and rice were consistently classified well by all models.
Challenges were encountered with cabbage due to its early growth stage and low field cover density. The study demonstrates the potential of utilizing farm maps and GLCM features in conjunction with SVM models for accurate field crop classification. Careful parameter tuning and model selection based on specific scenarios are key for optimizing performance in real-world applications.
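
The three GLCM features the abstract singles out (homogeneity, entropy, correlation) can be sketched in a few lines of NumPy. This is a minimal illustration of the standard textbook definitions, not the authors' pipeline; the single pixel offset and the gray-level count are assumptions here:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized Gray Level Co-occurrence Matrix for one pixel offset."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    """Homogeneity, entropy, and correlation of a normalized GLCM p."""
    i, j = np.indices(p.shape)
    homogeneity = (p / (1.0 + (i - j) ** 2)).sum()
    nz = p[p > 0]
    entropy = -(nz * np.log(nz)).sum()
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    correlation = (((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j)
                   if sd_i > 0 and sd_j > 0 else 1.0)
    return homogeneity, entropy, correlation
```

In practice such features would be averaged over several offsets and angles per image patch and fed, together with the RGB bands, into the SVC.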

Visual Model of Pattern Design Based on Deep Convolutional Neural Network

  • Jingjing Ye; Jun Wang
    • KSII Transactions on Internet and Information Systems (TIIS), v.18 no.2, pp.311-326, 2024
  • The rapid development of neural network technology has enabled big-data-driven models to reproduce the texture effects of complex objects. Because of limitations in complex scenes, custom template matching must be established and applied to research in many fields of computer vision. Systems that rely only weakly on high-quality, small labeled sample databases and use deep feature connections to infer texture effects still perform relatively poorly. A style transfer algorithm based on neural networks collects and preserves pattern data, then extracts and modernizes pattern features; through this model, the texture and color of patterns are more easily rendered and displayed digitally. In this paper, based on texture-effect reasoning with custom template matching, the 3D visualization of the target is transformed into a 3D model. The similarity between the scene to be inferred and the user-defined template, defined by multi-dimensional external feature labels, is calculated, and a convolutional neural network is adopted to optimize the external region of the object, improving the sampling quality and computational performance of the sample pyramid structure. The results indicate that the proposed algorithm accurately captures the salient target, ablates more noise, and improves the visualization results. The proposed deep convolutional neural network optimization algorithm offers good speed, data accuracy, and robustness. It can adapt to the computation of more task scenes, display the vision-related information of image conversion, and further improve the computational efficiency and accuracy of convolutional networks, which gives it high significance for research on image information conversion.

Improved Deep Learning-based Approach for Spatial-Temporal Trajectory Planning via Predictive Modeling of Future Location

  • Zain Ul Abideen; Xiaodong Sun; Chao Sun; Hafiz Shafiq Ur Rehman Khalil
    • KSII Transactions on Internet and Information Systems (TIIS), v.18 no.7, pp.1726-1748, 2024
  • Trajectory planning is vital for autonomous systems such as robotics and UAVs, as it determines optimal, safe paths considering physical limitations, environmental factors, and agent interactions. Recent advancements in trajectory planning and future location prediction stem from rapid progress in machine learning and optimization algorithms. In this paper, we propose a novel framework of spatial-temporal transformer-based feed-forward neural networks (STTFFNs). From the point of view of local traffic flow, a skip-gram model is trained on trajectory data to generate embeddings that capture the high-level features of different trajectories. These embeddings can then be used as input to a transformer-based trajectory planning model, which can generate trajectories for new objects based on the embeddings of similar trajectories in the training data. In the next step, for distant regions, an embedded feedforward network generates the distant trajectories, taking as input a set of features that represent the object's current state and historical data. One advantage of using feedforward networks for distant trajectory planning is their ability to capture long-term dependencies in the data. In the final step, forecasting future locations, the encoder and decoder are crucial parts of the proposed technique. Spatial destinations are encoded using location-based social networks (LBSN) based on visited semantic locations. The model has been specially trained to forecast future locations using precise longitude and latitude values. Following rigorous testing on two real-world datasets, Porto and Manhattan, the model outperformed previous state-of-the-art methods by 8.7% in prediction accuracy.
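
The skip-gram step described above, learning embeddings of trajectory grid cells from their co-occurrence, can be sketched as follows. This toy version (cell IDs as tokens, one negative sample per pair, plain SGD) is a simplified stand-in under assumed details, not the STTFFN implementation:

```python
import numpy as np

def skipgram_pairs(traj, window=2):
    """(center, context) training pairs from one sequence of grid-cell IDs."""
    pairs = []
    for i, c in enumerate(traj):
        for j in range(max(0, i - window), min(len(traj), i + window + 1)):
            if j != i:
                pairs.append((c, traj[j]))
    return pairs

def train_skipgram(trajs, vocab, dim=8, window=2, lr=0.05, epochs=30, seed=0):
    """Skip-gram with one negative sample per pair, trained by plain SGD."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 0.1, (vocab, dim))  # center-cell embeddings
    C = rng.normal(0.0, 0.1, (vocab, dim))  # context-cell embeddings
    for _ in range(epochs):
        for traj in trajs:
            for c, o in skipgram_pairs(traj, window):
                neg = int(rng.integers(vocab))          # one negative sample
                for tgt, label in ((o, 1.0), (neg, 0.0)):
                    z = 1.0 / (1.0 + np.exp(-(W[c] @ C[tgt])))
                    g = lr * (label - z)                # log-loss gradient
                    dW = g * C[tgt]
                    C[tgt] = C[tgt] + g * W[c]
                    W[c] = W[c] + dW
    return W
```

The rows of `W` would then serve as the embedding inputs to the downstream transformer planner.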

A Genetic Algorithm for Production Scheduling of Biopharmaceutical Contract Manufacturing Products (바이오의약품 위탁생산 일정계획 수립을 위한 유전자 알고리즘)

  • Ji-Hoon Kim; Jeong-Hyun Kim; Jae-Gon Kim
    • The Journal of Bigdata, v.9 no.1, pp.141-152, 2024
  • In the biopharmaceutical contract manufacturing organization (CMO) business, establishing a production schedule that satisfies the due dates of various customer orders is crucial for competitiveness. In a CMO process, each order consists of multiple batches that can be allocated to multiple production lines in small batch units for parallel production. This study proposes a meta-heuristic algorithm to establish a scheduling plan that minimizes the total delivery delay of orders in a CMO process with identical parallel machines. Inspired by biological evolution, the proposed algorithm generates random chromosome-like data structures that encode candidate solutions and effectively explores the solution space through operations such as crossover and mutation. Based on real-world data provided by a domestic CMO company, computer experiments verified that the proposed algorithm produces better scheduling plans, within a reasonable computation time, than the expert heuristics used by the company and commercial optimization packages.
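
The kind of genetic algorithm the abstract describes, permutation chromosomes decoded onto identical parallel machines with total tardiness as fitness, can be sketched as follows. The specific operators and parameters (order crossover, swap mutation, tournament-style selection, population size) are illustrative assumptions, not the authors' algorithm:

```python
import random

def decode(perm, jobs, machines):
    """Assign batches in chromosome order to the earliest-free machine;
    jobs are (processing_time, due_date); returns total tardiness."""
    free = [0.0] * machines
    tardy = 0.0
    for j in perm:
        p, due = jobs[j]
        m = free.index(min(free))
        free[m] += p
        tardy += max(0.0, free[m] - due)
    return tardy

def ga_schedule(jobs, machines=2, pop=30, gens=60, seed=1):
    rng = random.Random(seed)
    n = len(jobs)
    def ox(a, b):                       # order crossover for permutations
        i, j = sorted(rng.sample(range(n), 2))
        child = [None] * n
        child[i:j] = a[i:j]
        rest = [g for g in b if g not in child[i:j]]
        k = 0
        for idx in range(n):
            if child[idx] is None:
                child[idx] = rest[k]; k += 1
        return child
    popu = [rng.sample(range(n), n) for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=lambda s: decode(s, jobs, machines))
        nxt = popu[:2]                  # elitism: keep the two best
        while len(nxt) < pop:
            a, b = rng.sample(popu[:10], 2)   # mate among the fittest
            c = ox(a, b)
            if rng.random() < 0.2:            # swap mutation
                i, j = rng.sample(range(n), 2)
                c[i], c[j] = c[j], c[i]
            nxt.append(c)
        popu = nxt
    best = min(popu, key=lambda s: decode(s, jobs, machines))
    return best, decode(best, jobs, machines)
```

On a toy six-batch, two-line instance this reliably beats a naive first-come-first-served sequence.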

A counting-time optimization method for artificial neural network (ANN) based gamma-ray spectroscopy

  • Moonhyung Cho; Jisung Hwang; Sangho Lee; Kilyoung Ko; Wonku Kim; Gyuseong Cho
    • Nuclear Engineering and Technology, v.56 no.7, pp.2690-2697, 2024
  • With advancements in machine learning technologies, artificial neural networks (ANNs) are being widely used to improve the performance of gamma-ray spectroscopy based on NaI(Tl) scintillation detectors. Typically, the performance of ANNs is evaluated using test datasets composed of actual spectra. However, generating test datasets that cover a wide range of actual spectra representing various scenarios is often inefficient and time-consuming. Thus, instead of measuring actual spectra, we generated virtual spectra with diverse spectral features by sampling from categorical distribution functions derived from the base spectra of six radioactive isotopes: 54Mn, 57Co, 60Co, 134Cs, 137Cs, and 241Am. For practical applications, we determined the optimum counting time (OCT) as the point at which the change in the Kullback-Leibler divergence values (ΔKLDV) between the synthetic spectra used for training the ANN and the virtual spectra approaches zero. The accuracies on the actual spectra were significantly improved when they were measured up to their respective OCTs. The outcomes demonstrated that the proposed method can effectively determine the OCTs for ANN-based gamma-ray spectroscopy without the need to measure actual spectra.
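
The KL-divergence stopping rule can be illustrated with a toy spectrum: accumulate simulated counts over time and stop when successive KL values against the reference spectrum stop changing. The channel layout, count rate, and tolerance below are made-up stand-ins for the paper's setup, not its actual parameters:

```python
import numpy as np

def kl_div(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two (unnormalized) spectra."""
    p = np.asarray(p, dtype=float); q = np.asarray(q, dtype=float)
    p = p / p.sum(); q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def optimum_counting_time(base, rate, times, tol=1e-4, seed=0):
    """Accumulate simulated counts and return the first time at which the
    change in KL divergence to the base spectrum falls below `tol`."""
    rng = np.random.default_rng(seed)
    pvals = np.asarray(base, dtype=float)
    pvals = pvals / pvals.sum()
    counts = np.zeros_like(pvals)
    prev_kld, t_prev = None, 0.0
    for t in times:
        # draw `rate` counts per unit time from the categorical distribution
        counts += rng.multinomial(int(rate * (t - t_prev)), pvals)
        t_prev = t
        kld = kl_div(counts + 1e-9, pvals)
        if prev_kld is not None and abs(prev_kld - kld) < tol:
            return t
        prev_kld = kld
    return times[-1]
```

As the count total grows, the empirical spectrum converges to the base distribution and the KL change shrinks toward zero, which is exactly the stopping criterion the paper exploits.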

Development of Big Data and AutoML Platforms for Smart Plants (스마트 플랜트를 위한 빅데이터 및 AutoML 플랫폼 개발)

  • Jin-Young Kang; Byeong-Seok Jeong
    • The Journal of Bigdata, v.8 no.2, pp.83-95, 2023
  • Big data analytics and AI play a critical role in the development of smart plants. This study presents a big data platform for plant data and an 'AutoML platform' for AI-based plant O&M (Operation and Maintenance). The big data platform collects, processes, and stores large volumes of data generated in plants using Hadoop, Spark, and Kafka. The AutoML platform is a machine learning automation system aimed at constructing predictive models for equipment prognostics and process optimization in plants. The developed platforms configure a data pipeline that is compatible with existing plant OISs (Operation Information Systems) and employ a web-based GUI to enhance accessibility and convenience for users. They also allow user-customizable modules to be loaded into the data processing and learning algorithms, which increases process flexibility. This paper demonstrates the operation of the platforms on a specific process of an oil company in Korea and presents an example of an effective data utilization platform for smart plants.
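
The core loop of an AutoML system, scoring candidate models under cross-validation and keeping the best, can be sketched independently of the platform. Ridge regression over a small alpha grid is an illustrative stand-in here; the platform's actual model families and modules are not specified in the abstract:

```python
import numpy as np

def kfold_score(model_fit, X, y, k=5):
    """Mean squared error averaged over k cross-validation folds."""
    idx = np.arange(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        predict = model_fit(X[train], y[train])
        errs.append(np.mean((predict(X[test]) - y[test]) ** 2))
    return float(np.mean(errs))

def ridge(alpha):
    """Closed-form ridge regression (intercept regularized too, for brevity)."""
    def fit(X, y):
        Xb = np.c_[X, np.ones(len(X))]
        w = np.linalg.solve(Xb.T @ Xb + alpha * np.eye(Xb.shape[1]), Xb.T @ y)
        return lambda Xt: np.c_[Xt, np.ones(len(Xt))] @ w
    return fit

def automl_select(X, y, alphas=(0.01, 0.1, 1.0, 10.0)):
    """Try each candidate configuration and keep the best CV score."""
    scores = {a: kfold_score(ridge(a), X, y) for a in alphas}
    return min(scores, key=scores.get), scores
```

A real platform would iterate the same select-and-score loop over many model families and hyperparameter grids, driven from the web GUI.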

Research on design requirements for passive residual heat removal system of lead cooled fast reactor via model-based system engineering

  • Mao Tang; Junqian Yang; Pengcheng Zhao; Kai Wang
    • Nuclear Engineering and Technology, v.56 no.8, pp.3286-3297, 2024
  • Traditional text-based system engineering, which has been used in the design and application of passive residual heat removal systems (PRHRS) for lead-cooled fast reactors, is prone to several problems such as low development efficiency, long iteration cycles, and model ambiguity. This study addresses these problems by adopting a model-based system engineering (MBSE) method, which has been preliminarily applied to the design requirements of a PRHRS. The design process is implemented on the basis of the preliminary design of the system architecture and consists of three stages: top-level requirements analysis, functional requirements analysis, and design requirements synthesis. The results of the top-level requirements analysis and the corresponding use case diagram determine the requirements, top-level use cases, and scenario flow of the system. During the functional requirements analysis, sequence, activity, and state machine diagrams are used to develop the system function model and provide early confirmation; by comparing the sequence diagrams, the requirements can be effectively checked for omissions and inconsistencies. In the design requirements synthesis stage, the Analytic Hierarchy Process is used to conduct preliminary trade-off calculations on the system architecture, after which a white box model is established during the system architecture design. Through these two steps, the analysis and design of the system architecture are achieved, and the resulting architecture ensures the consistency of the design requirements. Finally, a functional hazard analysis was conducted for a specific incident to validate the case requirements and further refine the system architecture. Future research can further reduce design risk, improve design efficiency, and provide a practical reference for the design and optimization of PRHRS in digital lead-cooled fast reactors.
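
The Analytic Hierarchy Process trade-off step mentioned above reduces to computing priority weights from a pairwise comparison matrix via its principal eigenvector, plus a consistency check. A minimal sketch (Saaty's standard random consistency indices are assumed; the actual comparison criteria are not given in the abstract):

```python
import numpy as np

def ahp_weights(A):
    """Priority weights = normalized principal eigenvector of the pairwise
    comparison matrix A; also returns the consistency ratio (CR)."""
    vals, vecs = np.linalg.eig(A)
    k = int(np.argmax(vals.real))
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()
    n = A.shape[0]
    ci = (vals[k].real - n) / (n - 1)             # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n, 1.0)  # Saaty's random index
    return w, ci / ri
```

A CR below about 0.1 is conventionally taken to mean the pairwise judgments are consistent enough to use the weights.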

A Survey on the Latest Research Trends in Retrieval-Augmented Generation (검색 증강 생성(RAG) 기술의 최신 연구 동향에 대한 조사)

  • Eunbin Lee; Ho Bae
    • The Transactions of the Korea Information Processing Society, v.13 no.9, pp.429-436, 2024
  • As Large Language Models (LLMs) continue to advance, effectively harnessing their potential has become increasingly important. LLMs, trained on vast datasets, are capable of generating text across a wide range of topics, making them useful in applications such as content creation, machine translation, and chatbots. However, they often face challenges in generalization due to gaps in specific or specialized knowledge, and updating these models with the latest information post-training remains a significant hurdle. To address these issues, Retrieval-Augmented Generation (RAG) models have been introduced. These models enhance response generation by retrieving information from continuously updated external databases, thereby reducing the hallucination phenomenon often seen in LLMs while improving efficiency and accuracy. This paper presents the foundational architecture of RAG, reviews recent research trends aimed at enhancing the retrieval capabilities of LLMs through RAG, and discusses evaluation techniques. Additionally, it explores performance optimization and real-world applications of RAG in various industries. Through this analysis, the paper aims to propose future research directions for the continued development of RAG models.
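
The retrieve-then-generate pattern at the heart of RAG can be sketched with a toy sparse retriever and a prompt template. Production systems use dense embeddings and an LLM for the generation step; the TF-IDF retriever and prompt string here are simplified stand-ins:

```python
import math
from collections import Counter

def tfidf_retrieve(query, docs, k=1):
    """Rank documents by TF-IDF cosine similarity to the query."""
    tokenize = lambda s: s.lower().split()
    doc_toks = [tokenize(d) for d in docs]
    n = len(docs)
    df = Counter(t for toks in doc_toks for t in set(toks))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    def vec(toks):
        tf = Counter(toks)
        return {t: tf[t] * idf.get(t, 0.0) for t in tf}
    def cos(a, b):
        num = sum(a[t] * b.get(t, 0.0) for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return num / (na * nb) if na and nb else 0.0
    qv = vec(tokenize(query))
    ranked = sorted(range(n), key=lambda i: cos(qv, vec(doc_toks[i])),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, docs, k=2):
    """Augment the query with retrieved context before generation."""
    ctx = "\n".join(docs[i] for i in tfidf_retrieve(query, docs, k))
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"
```

The grounding effect the survey describes comes precisely from this step: the generator sees retrieved, up-to-date context rather than relying on parametric memory alone.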

An Intelligent Intrusion Detection Model Based on Support Vector Machines and the Classification Threshold Optimization for Considering the Asymmetric Error Cost (비대칭 오류비용을 고려한 분류기준값 최적화와 SVM에 기반한 지능형 침입탐지모형)

  • Lee, Hyeon-Uk; Ahn, Hyun-Chul
    • Journal of Intelligence and Information Systems, v.17 no.4, pp.157-173, 2011
  • As Internet use has exploded in recent years, malicious attacks and hacking against systems connected to networks occur frequently, and such intrusions can cause fatal damage to government agencies, public offices, and companies operating various systems. For these reasons, there is growing interest in and demand for intrusion detection systems (IDS), security systems for detecting, identifying, and responding appropriately to unauthorized or abnormal activities. The intrusion detection models applied in conventional IDS are generally designed by modeling experts' implicit knowledge of network intrusions or hackers' abnormal behaviors. These models perform well under normal conditions but show poor performance against new or unknown patterns of network attack. For this reason, several recent studies have adopted artificial intelligence techniques that can respond proactively to unknown threats. In particular, artificial neural networks (ANNs) have been popular in prior studies because of their superior prediction accuracy. However, ANNs have intrinsic limitations such as the risk of overfitting, the requirement of large sample sizes, and the opacity of the prediction process (the 'black box' problem). As a result, the most recent studies on IDS have started to adopt the support vector machine (SVM), a classification technique that is more stable and powerful than ANNs and is known for relatively high predictive power and generalization capability. Against this background, this study proposes a novel intelligent intrusion detection model that uses SVM as the classification model to improve the predictive ability of IDS. Our model is also designed to consider the asymmetric error cost by optimizing the classification threshold. Generally, there are two common types of error in intrusion detection.
The first error type is the false-positive error (FPE), in which normal activity is wrongly judged to be an intrusion, which may result in unnecessary countermeasures. The second error type is the false-negative error (FNE), which misjudges malicious activity as normal. Compared to FPE, FNE is more fatal, so when considering the total cost of misclassification in IDS, it is more reasonable to assign heavier weights to FNE than to FPE. Therefore, we designed our proposed intrusion detection model to optimize the classification threshold so as to minimize the total misclassification cost. In this case, a conventional SVM cannot be applied because it is designed to generate a discrete output (i.e., a class). To resolve this problem, we used the revised SVM technique proposed by Platt (2000), which can generate probability estimates. To validate the practical applicability of our model, we applied it to a real-world dataset for network intrusion detection. The experimental dataset was collected from the IDS sensor of an official institution in Korea from January to June 2010. We collected 15,000 log records in total and selected 1,000 samples from them by random sampling. In addition, the SVM model was compared with logistic regression (LOGIT), decision trees (DT), and an ANN to confirm the superiority of the proposed model. LOGIT and DT were run using PASW Statistics v18.0, the ANN using Neuroshell 4.0, and the SVM using LIBSVM v2.90, a freeware tool for training SVM classifiers. Empirical results showed that our proposed SVM-based model outperformed all the other comparative models in detecting network intrusions from an accuracy perspective, and that it reduced the total misclassification cost compared to the ANN-based intrusion detection model.
As a result, it is expected that the intrusion detection model proposed in this paper would not only enhance the performance of IDS, but also lead to better management of FNE.
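
The threshold-optimization idea, taking probability estimates from a Platt-scaled SVM and choosing the cutoff that minimizes an asymmetric FN/FP cost, can be sketched as follows. The cost weights and probabilities below are illustrative; in the paper's setting the probabilities would come from the Platt-style SVM:

```python
import numpy as np

def optimal_threshold(probs, labels, c_fn=5.0, c_fp=1.0):
    """Choose the probability cutoff minimizing the asymmetric
    misclassification cost c_fn*FN + c_fp*FP (FNs weighted heavier).

    probs  : intrusion probabilities from a probabilistic classifier
    labels : boolean array, True = actual intrusion
    """
    best_t, best_cost = 0.5, float("inf")
    for t in np.unique(np.concatenate(([0.0, 1.0], probs))):
        pred = probs >= t                 # flag as intrusion
        fn = int(np.sum(labels & ~pred))  # intrusions judged normal (FNE)
        fp = int(np.sum(~labels & pred))  # normal judged intrusion (FPE)
        cost = c_fn * fn + c_fp * fp
        if cost < best_cost:
            best_t, best_cost = float(t), cost
    return best_t, best_cost
```

With a heavy FN weight the chosen cutoff tends to sit lower than 0.5, flagging borderline cases as intrusions, which is exactly the asymmetry the paper argues for.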

Steel Plate Faults Diagnosis with S-MTS (S-MTS를 이용한 강판의 표면 결함 진단)

  • Kim, Joon-Young; Cha, Jae-Min; Shin, Junguk; Yeom, Choongsub
    • Journal of Intelligence and Information Systems, v.23 no.1, pp.47-67, 2017
  • Steel plate faults are among the important factors affecting the quality and price of steel plates. To date, many steelmakers have relied on visual inspection, in which an inspector judges faults by looking at the surface of the plates based on intuition or experience. However, the accuracy of this method is critically low, with judgment error rates above 30%, so an accurate steel plate fault diagnosis system has long been demanded by the industry. To meet this need, this study proposes a new steel plate fault diagnosis system using Simultaneous MTS (S-MTS), an advanced Mahalanobis-Taguchi System (MTS) algorithm, to classify various surface defects of steel plates. MTS has generally been used to solve binary classification problems in various fields, but it has not been used for multi-class classification due to its low accuracy, because only one Mahalanobis space is established in MTS. In contrast, S-MTS is suitable for multi-class classification: it establishes an individual Mahalanobis space for each class, and 'simultaneous' refers to comparing the Mahalanobis distances at the same time. The proposed diagnosis system was developed in four main stages. In the first stage, after the reference groups and related variables are defined, steel plate fault data are collected and used to establish an individual Mahalanobis space for each reference group and to construct the full measurement scale. In the second stage, the Mahalanobis distances of the test groups are calculated based on the established Mahalanobis spaces of the reference groups, and the appropriateness of the spaces is verified by examining the separability of the Mahalanobis distances. In the third stage, orthogonal arrays and the dynamic-type signal-to-noise (SN) ratio are applied for variable optimization.
The overall SN ratio gain is then derived from the SN ratio and SN ratio gain: a variable with a negative overall gain should be removed, while a variable with a positive gain may be worth keeping. Finally, in the fourth stage, the measurement scale composed of the selected useful variables is reconstructed, and an experimental test is carried out to verify the multi-class classification ability and obtain the classification accuracy. If the accuracy is acceptable, the diagnosis system can be used in future applications. This study also compared the accuracy of the proposed system with that of other popular classification algorithms, including decision trees, Multilayer Perceptron Neural Networks (MLPNN), Logistic Regression (LR), Support Vector Machines (SVM), Tree Bagger Random Forests, Grid Search (GS), Genetic Algorithms (GA), and Particle Swarm Optimization (PSO). The steel plate faults dataset used in the study was taken from the University of California, Irvine (UCI) machine learning repository. As a result, the proposed S-MTS-based diagnosis system achieved a classification accuracy of 90.79%, which is 6-27% higher than MLPNN, LR, GS, GA, and PSO. Given that the accuracy of commercial systems is only about 75-80%, the proposed system has sufficient classification performance to be applied in industry. In addition, thanks to the variable optimization process, the proposed system can reduce the number of measurement sensors installed in the field, lowering operation and maintenance costs. For future work, the system will be applied in the field to validate its actual effectiveness, and its accuracy will be improved based on the results.
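
The per-class ("simultaneous") Mahalanobis-space idea behind S-MTS can be sketched as a minimal classifier: one Mahalanobis space per reference group, with prediction by the smallest distance across all spaces at once. This sketch omits the orthogonal-array and SN-ratio variable-optimization stages that S-MTS adds on top:

```python
import numpy as np

class SimultaneousMahalanobis:
    """One Mahalanobis space (mean + inverse covariance) per class;
    a sample is assigned to the class giving the smallest distance."""

    def fit(self, X, y):
        self.spaces = {}
        for c in np.unique(y):
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            # small ridge keeps the covariance invertible
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            self.spaces[c] = (mu, np.linalg.inv(cov))
        return self

    def predict(self, X):
        out = []
        for x in np.atleast_2d(X):
            dists = {c: float((x - mu) @ icov @ (x - mu))
                     for c, (mu, icov) in self.spaces.items()}
            out.append(min(dists, key=dists.get))  # compare simultaneously
        return np.array(out)
```

Comparing the distances across all spaces at the same time is what distinguishes this from plain MTS, which builds only one reference space.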