• Title/Summary/Keyword: Training Datasets

Search Results: 344

Land Cover Classification Using Semantic Image Segmentation with Deep Learning (딥러닝 기반의 영상분할을 이용한 토지피복분류)

  • Lee, Seonghyeok;Kim, Jinsoo
    • Korean Journal of Remote Sensing
    • /
    • v.35 no.2
    • /
    • pp.279-288
    • /
    • 2019
  • We evaluated the land cover classification performance of SegNet, which features semantic segmentation of aerial imagery. We selected four semantic classes, i.e., urban, farmland, forest, and water areas, and created 2,000 datasets using aerial images and land cover maps. The datasets were divided at an 8:2 ratio into training (1,600) and validation (400) datasets; we evaluated validation accuracy after tuning the hyperparameters. SegNet performance was optimal at a batch size of five with 100,000 iterations. When 200 test datasets were subjected to semantic segmentation using the trained SegNet model, the accuracies were farmland 87.89%, forest 87.18%, water 83.66%, and urban regions 82.67%; the overall accuracy was 85.48%. Thus, deep learning-based semantic segmentation can be used to classify land cover.
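
The 8:2 split described in this abstract can be sketched as follows; the file names and the fixed seed are placeholders, not details from the paper:

```python
import random

def split_datasets(samples, train_ratio=0.8, seed=42):
    """Shuffle and split samples at the paper's 8:2 ratio."""
    rng = random.Random(seed)            # fixed seed for reproducibility
    shuffled = samples[:]                # copy so the input stays untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# 2,000 (aerial image, land cover map) pairs as in the paper; stand-in names.
pairs = [(f"aerial_{i}.png", f"landcover_{i}.png") for i in range(2000)]
train, val = split_datasets(pairs)
print(len(train), len(val))  # 1600 400
```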

A Hybrid Multi-Level Feature Selection Framework for prediction of Chronic Disease

  • G.S. Raghavendra;Shanthi Mahesh;M.V.P. Chandrasekhara Rao
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.12
    • /
    • pp.101-106
    • /
    • 2023
  • Chronic illnesses are among the most common serious problems affecting human health. Early diagnosis of chronic diseases can help avoid or mitigate their consequences, potentially decreasing mortality rates. Using machine learning algorithms to identify risk factors is an exciting strategy. The issue with existing feature selection approaches is that each method provides a distinct set of properties that affect model correctness, and present methods cannot perform well on huge multidimensional datasets. We introduce a novel model containing a feature selection approach that selects optimal characteristics from big multidimensional datasets to provide reliable predictions of chronic illnesses without sacrificing data uniqueness. To ensure the success of our proposed model, we balanced the classes by applying hybrid balanced-class sampling methods to the original dataset, together with data pre-processing and data transformation methods, to provide credible data for the training model. We ran and assessed our model on datasets with binary and multivalued classifications, using multiple datasets (Parkinson, arrhythmia, breast cancer, kidney, diabetes). Suitable features are selected by a hybrid feature model consisting of LassoCV, decision tree, random forest, gradient boosting, AdaBoost, and stochastic gradient descent, with voting over the attributes that are common outputs of these methods. The accuracy on the original dataset before applying the framework is recorded and evaluated against the accuracy on the reduced attribute set; the results are shown separately to provide comparisons. Based on the result analysis, we conclude that our proposed model produced higher accuracy on multivalued class datasets than on binary class datasets.
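
The voting step of this framework, in which several selectors nominate features and only commonly nominated attributes survive, can be sketched with scikit-learn; the dataset, the `top_k`/`min_votes` thresholds, and the subset of selectors used here are assumptions for illustration:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              RandomForestClassifier)
from sklearn.linear_model import LassoCV

def vote_features(X, y, names, top_k=10, min_votes=2):
    """Each selector nominates its top_k features; keep those nominated
    by at least min_votes selectors (the framework's voting step)."""
    rankings = [np.abs(LassoCV(cv=3).fit(X, y).coef_)]
    for model in (RandomForestClassifier(random_state=0),
                  GradientBoostingClassifier(random_state=0),
                  AdaBoostClassifier(random_state=0)):
        rankings.append(model.fit(X, y).feature_importances_)
    votes = np.zeros(X.shape[1], dtype=int)
    for scores in rankings:
        votes[np.argsort(scores)[-top_k:]] += 1   # nominate the top_k
    return [n for n, v in zip(names, votes) if v >= min_votes]

data = load_breast_cancer()
selected = vote_features(data.data, data.target, list(data.feature_names))
print(len(selected), "features kept by vote")
```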

Zero-shot Korean Sentiment Analysis with Large Language Models: Comparison with Pre-trained Language Models

  • Soon-Chan Kwon;Dong-Hee Lee;Beak-Cheol Jang
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.2
    • /
    • pp.43-50
    • /
    • 2024
  • This paper evaluates the Korean sentiment analysis performance of large language models like GPT-3.5 and GPT-4 using a zero-shot approach facilitated by the ChatGPT API, comparing them to pre-trained Korean models such as KoBERT. Through experiments utilizing various Korean sentiment analysis datasets in fields like movies, gaming, and shopping, the effectiveness of these models is validated. The results reveal that the LMKor-ELECTRA model displayed the highest performance based on F1-score, while GPT-4 achieved particularly high accuracy and F1-scores on the movie and shopping datasets. This indicates that large language models can perform effectively in Korean sentiment analysis without prior training on specific datasets, suggesting their potential in zero-shot learning. However, relatively lower performance on some datasets highlights the limitations of the zero-shot-based methodology. This study explores the feasibility of using large language models for Korean sentiment analysis, providing significant implications for future research in this area.
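
A zero-shot setup of the kind this paper evaluates amounts to classifying each review from an instruction alone, with no Korean training examples. The prompt wording below is an illustrative assumption, not the authors' prompt:

```python
def zero_shot_sentiment_prompt(review: str) -> list:
    """Build a zero-shot chat prompt: the model receives only an
    instruction and the raw Korean review, no training examples."""
    return [
        {"role": "system",
         "content": "You are a sentiment classifier for Korean text. "
                    "Answer with exactly one word: positive or negative."},
        {"role": "user", "content": f"Review: {review}"},
    ]

messages = zero_shot_sentiment_prompt("이 영화 정말 재미있어요!")
print(messages[1]["content"])
# An actual evaluation would send `messages` through the ChatGPT API, e.g.:
# response = client.chat.completions.create(model="gpt-4", messages=messages)
```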

WQI Class Prediction of Sihwa Lake Using Machine Learning-Based Models (기계학습 기반 모델을 활용한 시화호의 수질평가지수 등급 예측)

  • KIM, SOO BIN;LEE, JAE SEONG;KIM, KYUNG TAE
    • The Sea:JOURNAL OF THE KOREAN SOCIETY OF OCEANOGRAPHY
    • /
    • v.27 no.2
    • /
    • pp.71-86
    • /
    • 2022
  • The water quality index (WQI) has been widely used to evaluate marine water quality. The WQI in Korea is categorized into five classes by marine environmental standards. However, WQI calculation on huge datasets is a complex and time-consuming process. In this regard, the current study proposed machine learning (ML) based models to predict WQI class from water quality datasets. Sihwa Lake, one of the specially-managed coastal zones, was selected as the modeling site. In this study, adaptive boosting (AdaBoost) and tree-based pipeline optimization (TPOT) algorithms were used to train models, and each model's performance was evaluated with classification metrics (accuracy, precision, F1, and log loss). Before training, feature importance and sensitivity analyses were conducted to find the best input combination for each algorithm. The results proved that bottom dissolved oxygen (DOBot) was the most important variable affecting model performance. Conversely, surface dissolved inorganic nitrogen (DINSur) and dissolved inorganic phosphorus (DIPSur) had weaker effects on the prediction of WQI class. In addition, performance varied over features including stations, seasons, and WQI classes when comparing the spatio-temporal and class sensitivities of each best model. In conclusion, the modeling results showed that the TPOT algorithm performs better than the AdaBoost algorithm without considering feature selection. Moreover, the WQI class for unknown water quality datasets could be reliably predicted using the TPOT model trained with satisfactory training datasets.
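
The AdaBoost half of this comparison, scored with the metrics the study lists, can be sketched on synthetic data; the features, labels, and class structure below are stand-ins, not the Sihwa Lake dataset:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import accuracy_score, f1_score, log_loss
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in water-quality features; column 0 plays the role of bottom
# dissolved oxygen (DOBot), which the study found most influential.
X = rng.normal(size=(500, 5))
y = np.clip((X[:, 0] * 1.5 + 2.5).astype(int), 0, 4)  # synthetic 5-class WQI

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = AdaBoostClassifier(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
acc = accuracy_score(y_te, pred)
f1 = f1_score(y_te, pred, average="macro")
ll = log_loss(y_te, model.predict_proba(X_te), labels=model.classes_)
print("accuracy %.2f  macro-F1 %.2f  log-loss %.2f" % (acc, f1, ll))
```

The study's TPOT side would replace the classifier with `tpot.TPOTClassifier`, which searches over whole pipelines rather than fitting a single fixed model.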

Comparison of Seismic Data Interpolation Performance using U-Net and cWGAN (U-Net과 cWGAN을 이용한 탄성파 탐사 자료 보간 성능 평가)

  • Yu, Jiyun;Yoon, Daeung
    • Geophysics and Geophysical Exploration
    • /
    • v.25 no.3
    • /
    • pp.140-161
    • /
    • 2022
  • Seismic data with missing traces are often obtained, regularly or irregularly, due to environmental and economic constraints in their acquisition. Accordingly, seismic data interpolation is an essential step in seismic data processing. Recently, research on machine learning-based seismic data interpolation has been flourishing. In particular, the convolutional neural network (CNN) and generative adversarial network (GAN), widely used algorithms for super-resolution problems in image processing, are also used for seismic data interpolation. In this study, a CNN-based algorithm, U-Net, and a GAN-based algorithm, conditional Wasserstein GAN (cWGAN), were used as seismic data interpolation methods. The results and performances of the methods were evaluated thoroughly to find an optimal interpolation method that reconstructs missing seismic data with high accuracy. The work process for model training and performance evaluation was divided into two cases (Cases I and II). In Case I, we trained the model using only regularly sampled data with 50% missing traces. We evaluated the model performance by applying the trained model to six different test datasets, consisting of combinations of regular and irregular sampling at different sampling ratios. In Case II, six different models were generated using training datasets sampled in the same way as the six test datasets. The models were applied to the same test datasets used in Case I to compare the results. We found that cWGAN showed better prediction performance than U-Net, with higher PSNR and SSIM. However, cWGAN introduced additional noise into the prediction results; thus, an ensemble technique was applied to remove the noise and improve the accuracy. The cWGAN ensemble model successfully removed the noise and showed improved PSNR and SSIM compared with the individual models.
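
The ensemble step reported here, averaging several noisy predictions so that independent noise cancels while the signal remains, can be verified with the PSNR metric on toy data (the "section" and noise model below are stand-ins, not seismic gathers):

```python
import numpy as np

def psnr(truth, pred):
    """Peak signal-to-noise ratio used to score interpolation quality."""
    mse = np.mean((truth - pred) ** 2)
    peak = np.max(np.abs(truth))
    return 20 * np.log10(peak / np.sqrt(mse))

rng = np.random.default_rng(1)
truth = np.sin(np.linspace(0, 8 * np.pi, 1000)).reshape(50, 20)
# Several cWGAN-style predictions: correct signal plus independent noise.
predictions = [truth + rng.normal(scale=0.1, size=truth.shape)
               for _ in range(5)]
ensemble = np.mean(predictions, axis=0)   # averaging cancels independent noise
p_single = psnr(truth, predictions[0])
p_ens = psnr(truth, ensemble)
print("single PSNR %.1f dB -> ensemble PSNR %.1f dB" % (p_single, p_ens))
```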

The gene expression programming method to generate an equation to estimate fracture toughness of reinforced concrete

  • Ahmadreza Khodayari;Danial Fakhri;Adil Hussein, Mohammed;Ibrahim Albaijan;Arsalan Mahmoodzadeh;Hawkar Hashim Ibrahim;Ahmed Babeker Elhag;Shima Rashidi
    • Steel and Composite Structures
    • /
    • v.48 no.2
    • /
    • pp.163-177
    • /
    • 2023
  • Complex and intricate preparation techniques, the imperative for utmost precision and sensitivity in instrumentation, premature sample failure, and fragile specimens collectively contribute to the arduous task of measuring the fracture toughness of concrete in the laboratory. The objective of this research is to introduce and refine an equation based on the gene expression programming (GEP) method to calculate the fracture toughness of reinforced concrete, thereby minimizing the need for costly and time-consuming laboratory experiments. To accomplish this, various types of reinforced concrete, each incorporating distinct ratios of fibers and additives, were subjected to diverse loading angles relative to the initial crack (α) in order to ascertain the effective fracture toughness (Keff) of 660 samples utilizing the central straight notched Brazilian disc (CSNBD) test. Within the datasets, six pivotal input factors influencing the Keff of concrete, namely sample type (ST), diameter (D), thickness (t), length (L), force (F), and α, were taken into account. The ST and α parameters represent crucial inputs in the model presented in this study, marking the first instance that their influence has been examined via the CSNBD test. Of the 660 datasets, 460 were utilized for training purposes, while 100 each were allotted for testing and validation of the model. The GEP model was fine-tuned based on the training datasets, and its efficacy was evaluated using the separate test and validation datasets. In subsequent stages, the GEP model was optimized, yielding the most robust models. Ultimately, an equation was derived by averaging the most exemplary models, providing a means to predict the Keff parameter. This averaged equation exhibited exceptional proficiency in predicting the Keff of concrete. 
The significance of this work lies in the possibility of obtaining the Keff parameter without investing copious amounts of time and resources into the CSNBD test, simply by inputting the relevant parameters into the equation derived for diverse samples of reinforced concrete subject to varied loading angles.
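
The final step described above, obtaining one Keff predictor by averaging the best GEP models, can be sketched as follows. The two lambda expressions are arbitrary placeholders standing in for the paper's derived models, and the input values are illustrative, not from the study:

```python
def averaged_equation(models, inputs):
    """Average the predictions of the best GEP-derived expressions,
    as the paper does to obtain its final Keff equation."""
    return sum(m(*inputs) for m in models) / len(models)

# Placeholder expressions in the six inputs (ST, D, t, L, F, alpha);
# NOT the paper's models, just illustrative stand-ins.
models = [
    lambda ST, D, t, L, F, alpha: 0.010 * F / (t * D) * (1 + 0.1 * alpha),
    lambda ST, D, t, L, F, alpha: 0.012 * F / (t * D) + 0.001 * alpha * ST,
]
k_eff = averaged_equation(models, (1, 75.0, 25.0, 75.0, 5000.0, 30.0))
print("averaged Keff estimate: %.3f" % k_eff)
```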

A Study on the Development of integrated Process Safety Management System based on Artificial Intelligence (AI) (인공지능(AI) 기반 통합 공정안전관리 시스템 개발에 관한 연구)

  • KyungHyun Lee;RackJune Baek;WooSu Kim;HeeJeong Choi
    • The Journal of the Convergence on Culture Technology
    • /
    • v.10 no.1
    • /
    • pp.403-409
    • /
    • 2024
  • In this paper, guidelines are proposed for the design of an Artificial Intelligence (AI)-based Integrated Process Safety Management (PSM) system that enhances workplace safety using data from process safety reports submitted by operators of hazardous facilities in accordance with the Occupational Safety and Health Act. A system following the proposed guidelines can be implemented separately by individual facility operators and by specialized process safety management agencies, for single or multiple workplaces. It is structured around key components and stages, including data collection and preprocessing, expansion and segmentation, labeling, and the construction of training datasets. It enables the collection of process operation data and change approval data from various processes, allowing potential fault prediction and maintenance planning through the analysis of all data generated in workplace operations, thereby supporting decision-making during process operation. Moreover, it offers time and cost savings, detection and prediction of various risk factors including human errors, and continuous model improvement through the use of accurate and reliable training data and specialized datasets. Through this approach, it becomes possible to enhance workplace safety and prevent accidents.

THE APPLICATION OF ARTIFICIAL NEURAL NETWORKS TO LANDSLIDE SUSCEPTIBILITY MAPPING AT JANGHUNG, KOREA

  • LEE SARO;LEE MOUNG-JIN;WON JOONG-SUN
    • Proceedings of the KSRS Conference
    • /
    • 2004.10a
    • /
    • pp.294-297
    • /
    • 2004
  • The purpose of this study was to develop landslide susceptibility analysis techniques using artificial neural networks and then to apply these to the selected study area of Janghung in Korea. We aimed to verify the effect of data selection on training sites. Landslide locations were identified from interpretation of satellite images and field survey data, and a spatial database of the topography, soil, forest, and land use was constructed. Thirteen landslide-related factors were extracted from the spatial database. Using these factors, landslide susceptibility was analyzed using an artificial neural network. The weights of each factor were determined by the back-propagation training method. Five different training datasets were applied to analyze and verify the effect of training. Then, the landslide susceptibility indices were calculated using the trained back-propagation weights and susceptibility maps were constructed from Geographic Information System (GIS) data for the five cases. The results of the landslide susceptibility maps were verified and compared using landslide location data. GIS data were used to efficiently analyze the large volume of data, and the artificial neural network proved to be an effective tool to analyze landslide susceptibility.
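
A minimal version of the back-propagation-trained network described above, producing a susceptibility index per location from the thirteen factors, can be sketched with scikit-learn; the data are synthetic and the hidden-layer size is an assumption, not the study's architecture:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Stand-ins for the 13 landslide-related factors (slope, soil, land use, ...).
X = rng.normal(size=(400, 13))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic landslide labels

# A small network trained by back-propagation, as in the study.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)
# Susceptibility index: the predicted landslide probability per location.
susceptibility = net.predict_proba(X)[:, 1]
acc_train = net.score(X, y)
print("training accuracy: %.2f" % acc_train)
```

In the study itself these indices were mapped back onto GIS grid cells to draw the susceptibility maps.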

Study on Fast-Changing Mixed-Modulation Recognition Based on Neural Network Algorithms

  • Jing, Qingfeng;Wang, Huaxia;Yang, Liming
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.12
    • /
    • pp.4664-4681
    • /
    • 2020
  • Modulation recognition (MR) plays a key role in cognitive radar, cognitive radio, and other civilian and military fields. While existing methods can identify the signal modulation type by extracting signal characteristics, the quality of feature extraction has a serious impact on the recognition results. In this paper, an end-to-end MR method based on long short-term memory (LSTM) and the gated recurrent unit (GRU) is put forward, which can directly predict the modulation type from a sampled signal. Additionally, the sliding window method is applied to fast-changing mixed-modulation signals, for which the signal modulation type changes over time. The recognition accuracy on training datasets in different SNR ranges and the proportion of each modulation method among misclassified samples are analyzed; selecting training data evenly distributed over the full SNR range is found to be reasonable. As the SNR improves, the recognition accuracy increases rapidly. As the length of the training dataset increases, recognition improves: the loss function value of the neural network decreases with the training dataset length and then stabilizes. Moreover, when the fast-changing period is less than 20 ms, the error rate is as high as 50%. When the fast-changing period is increased to 30 ms, the error rates of the GRU and LSTM neural networks fall below 5%.
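
The sliding-window step, which lets a per-window classifier track modulation changes over time, can be sketched as follows; the window and step sizes are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def sliding_windows(signal, window, step):
    """Cut a sampled signal into overlapping windows so that each window,
    classified independently, tracks modulation changes over time."""
    return np.array([signal[i:i + window]
                     for i in range(0, len(signal) - window + 1, step)])

samples = np.arange(1000)   # stand-in for a sampled mixed-modulation signal
wins = sliding_windows(samples, window=128, step=64)
print(wins.shape)  # (14, 128)
```

Each row of `wins` would then be fed to the LSTM/GRU network as one classification input.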

Temporal matching prior network for vehicle license plate detection and recognition in videos

  • Yoo, Seok Bong;Han, Mikyong
    • ETRI Journal
    • /
    • v.42 no.3
    • /
    • pp.411-419
    • /
    • 2020
  • In real-world intelligent transportation systems, accuracy in vehicle license plate detection and recognition is considered quite critical. Many algorithms have been proposed for still images, but their accuracy on actual videos is not satisfactory. This stems from several problematic conditions in videos, such as vehicle motion blur, variety in viewpoints, outliers, and the lack of publicly available video datasets. In this study, we focus on these challenges and propose a license plate detection and recognition scheme for videos based on a temporal matching prior network. Specifically, to improve the robustness of detection and recognition accuracy in the presence of motion blur and outliers, forward and bidirectional matching priors between consecutive frames are properly combined with layer structures specifically designed for plate detection. We also built our own video dataset for the deep training of the proposed network. During network training, we perform data augmentation based on image rotation to increase robustness regarding the various viewpoints in videos.
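
The rotation-based augmentation mentioned above can be sketched with SciPy; the angle set and the toy plate image are assumptions for illustration, not the authors' settings:

```python
import numpy as np
from scipy.ndimage import rotate

def augment_rotations(image, angles=(-10, -5, 5, 10)):
    """Rotation-based augmentation: synthesize extra viewpoints of a
    plate crop for training (the angles here are illustrative)."""
    return [rotate(image, angle, reshape=False, mode="nearest")
            for angle in angles]

plate = np.zeros((32, 96), dtype=float)
plate[12:20, 10:86] = 1.0              # stand-in for a license-plate crop
augmented = augment_rotations(plate)
print(len(augmented), augmented[0].shape)
```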