• Title/Summary/Keyword: Training Datasets


Data Augmentation using a Kernel Density Estimation for Motion Recognition Applications (움직임 인식응용을 위한 커널 밀도 추정 기반 학습용 데이터 증폭 기법)

  • Jung, Woosoon;Lee, Hyung Gyu
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.27 no.4
    • /
    • pp.19-27
    • /
    • 2022
  • In general, the performance of an ML (Machine Learning) application is determined by various factors such as the type of ML model, the size of the model (number of parameters), the hyperparameter settings used during training, and the training data. In particular, the recognition accuracy of ML may deteriorate, or the model may overfit, if the amount of data used for training is insufficient. Existing studies focusing on image recognition have widely used open datasets for training and evaluating the proposed ML models. However, for specific applications where the sensor used, the target of recognition, and the recognition situation differ, the dataset must be built manually. In this case, the performance of ML largely depends on the quantity and quality of the data. In this paper, the training data used for a motion recognition application are augmented using the kernel density estimation algorithm, a type of non-parametric estimation method. We then compare and analyze the recognition accuracy of an ML application while varying the number of original data, the kernel types, and the augmentation rate used for data augmentation. Finally, experimental results show that the recognition accuracy is improved by up to 14.31% when using the narrow-bandwidth Tophat kernel.
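The KDE-based augmentation described above can be sketched in a few lines: drawing a sample from a kernel density estimate is equivalent to picking a random original data point and perturbing it with kernel-shaped noise at the chosen bandwidth. This is a minimal illustration under that equivalence, not the paper's implementation; the function name, bandwidth value, and toy data are assumptions.

```python
import numpy as np

def kde_augment(data, n_new, bandwidth=0.1, kernel="tophat", seed=0):
    """Draw n_new synthetic samples from a KDE fitted to data (shape n x d).

    Sampling from a KDE amounts to choosing a random original point and
    adding noise drawn from the kernel, scaled by the bandwidth.
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(data, dtype=float)
    idx = rng.integers(0, len(data), size=n_new)
    shape = (n_new, data.shape[1])
    if kernel == "gaussian":
        noise = rng.normal(0.0, bandwidth, size=shape)
    elif kernel == "tophat":  # uniform perturbation within +/- bandwidth
        noise = rng.uniform(-bandwidth, bandwidth, size=shape)
    else:
        raise ValueError(f"unsupported kernel: {kernel}")
    return data[idx] + noise

# Example: double a tiny 2-D motion-feature dataset with a narrow Tophat kernel
original = np.array([[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]])
augmented = kde_augment(original, n_new=6, bandwidth=0.05)
```

With the Tophat kernel, every synthetic point stays within one bandwidth (per coordinate) of some original point, which is why a narrow bandwidth keeps the augmented data close to the real distribution.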

The Automated Scoring of Kinematics Graph Answers through the Design and Application of a Convolutional Neural Network-Based Scoring Model (합성곱 신경망 기반 채점 모델 설계 및 적용을 통한 운동학 그래프 답안 자동 채점)

  • Jae-Sang Han;Hyun-Joo Kim
    • Journal of The Korean Association For Science Education
    • /
    • v.43 no.3
    • /
    • pp.237-251
    • /
    • 2023
  • This study explores the possibility of automated scoring of scientific graph answers by designing an automated scoring model using convolutional neural networks and applying it to students' kinematics graph answers. The researchers prepared 2,200 answers, which were divided into 2,000 training data and 200 validation data. Additionally, 202 student answers were divided into 100 training data and 102 test data. First, in designing the automated scoring model and validating its performance, the model was optimized for graph-image classification using the answer dataset prepared by the researchers. Next, the model was trained on various types of training datasets and used to score the student test dataset. The performance of the automated scoring model improved as the training data grew in amount and diversity. Finally, compared to human scoring, the accuracy was 97.06%, the kappa coefficient was 0.957, and the weighted kappa coefficient was 0.968. On the other hand, for answer types that were not included in the training data, the scoring was almost identical among human scorers; however, the automated scoring model scored them inaccurately.
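The agreement statistics quoted above (kappa and weighted kappa against human scoring) are computed from a confusion matrix between the model's scores and the human scores. A small numpy sketch, using an illustrative two-class matrix rather than the study's data:

```python
import numpy as np

def cohen_kappa(conf, weights=None):
    """Cohen's kappa from a confusion matrix (rows: rater A, cols: rater B).

    weights=None gives unweighted kappa; weights="quadratic" penalizes
    disagreements by the squared distance between category indices.
    """
    conf = np.asarray(conf, dtype=float)
    n = conf.sum()
    k = conf.shape[0]
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0)) / n
    if weights is None:
        w = 1.0 - np.eye(k)                      # 0 on agreement, 1 otherwise
    else:                                        # quadratic disagreement weights
        i, j = np.indices((k, k))
        w = ((i - j) ** 2) / (k - 1) ** 2
    return 1.0 - (w * conf).sum() / (w * expected).sum()

# Hypothetical 2-class scoring matrix: 90 agreements out of 100
conf = np.array([[45, 5], [5, 45]])
kappa = cohen_kappa(conf)   # observed agreement 0.90, chance 0.50 -> kappa 0.80
```

Weighted kappa matters for ordered score categories, since it penalizes a one-level disagreement less than a large one.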

A Comparative Performance Analysis of Spark-Based Distributed Deep-Learning Frameworks (스파크 기반 딥 러닝 분산 프레임워크 성능 비교 분석)

  • Jang, Jaehee;Park, Jaehong;Kim, Hanjoo;Yoon, Sungroh
    • KIISE Transactions on Computing Practices
    • /
    • v.23 no.5
    • /
    • pp.299-303
    • /
    • 2017
  • By stacking hidden layers in artificial neural networks, deep learning delivers outstanding performance on high-level abstraction problems such as object/speech recognition and natural language processing. However, deep-learning users often struggle with the tremendous amounts of time and resources required to train deep neural networks. To alleviate this computational challenge, many approaches have been proposed in a diversity of areas. In this work, two existing Apache Spark-based acceleration frameworks for deep learning (SparkNet and DeepSpark) are compared and analyzed in terms of training accuracy and time demands. In the authors' experiments with the CIFAR-10 and CIFAR-100 benchmark datasets, SparkNet showed more stable convergence behavior than DeepSpark, but DeepSpark delivered approximately 15% higher classification accuracy. For some cases, DeepSpark also outperformed the sequential implementation running on a single machine in terms of both accuracy and running time.
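The synchronization scheme that both SparkNet and DeepSpark build on, periodic averaging of locally trained parameters across workers, can be illustrated without Spark. This numpy sketch trains a linear model on data shards and averages parameters after a few local SGD steps per round; it is a toy stand-in for the frameworks, not their code, and all names and values here are invented for illustration.

```python
import numpy as np

def local_sgd(w, X, y, steps, lr):
    """Run a few plain SGD steps on one worker's data shard (squared loss)."""
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
w_true = np.array([2.0, -3.0])
X = rng.normal(size=(200, 2))
y = X @ w_true

shards = np.array_split(np.arange(200), 4)       # 4 simulated workers
w = np.zeros(2)
for _ in range(20):                              # 20 synchronization rounds
    # each worker starts from the shared parameters and trains locally...
    local_ws = [local_sgd(w, X[s], y[s], steps=5, lr=0.05) for s in shards]
    # ...then the driver averages the worker parameters (SparkNet-style sync)
    w = np.mean(local_ws, axis=0)

mse = np.mean((X @ w - y) ** 2)
```

The trade-off the paper measures lives in `steps`: more local steps per round cut communication cost but let workers drift apart, which can destabilize convergence.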

Time Series Prediction of Dynamic Response of a Free-standing Riser using Quadratic Volterra Model (Quadratic Volterra 모델을 이용한 자유지지 라이저의 동적 응답 시계열 예측)

  • Kim, Yooil
    • Journal of the Society of Naval Architects of Korea
    • /
    • v.51 no.4
    • /
    • pp.274-282
    • /
    • 2014
  • Time series of the dynamic response of a slender marine structure was predicted using quadratic Volterra series. The wave-structure interaction system was identified using the NARX(Nonlinear Autoregressive with Exogenous Input) technique, and the network parameters were determined through the supervised training with the prepared datasets. The dataset used for the network training was obtained by carrying out the nonlinear finite element analysis on the freely standing riser under random ocean waves of white noise. The nonlinearities involved in the analysis were both large deformation of the structure under consideration and the quadratic term of relative velocity between the water particle and structure in Morison formula. The linear and quadratic frequency response functions of the given system were extracted using the multi-tone harmonic probing method and the time series of response of the structure was predicted using the quadratic Volterra series. In order to check the applicability of the method, the response of structure under the realistic ocean wave environment with given significant wave height and modal period was predicted and compared with the nonlinear time domain simulation results. It turned out that the predicted time series of the response of structure with quadratic Volterra series successfully captures the slowly varying response with reasonably good accuracy. It is expected that the method can be used in predicting the response of the slender offshore structure exposed to the Morison type load without relying on the computationally expensive time domain analysis, especially for the screening purpose.
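A discrete-time quadratic Volterra model of the kind used above combines a linear kernel h1 and a quadratic kernel h2 over a finite input memory. A minimal numpy sketch of the prediction step (the kernels below are illustrative toys, not the identified riser response functions):

```python
import numpy as np

def volterra2_predict(x, h1, h2):
    """Output of a second-order (quadratic) Volterra series with memory M.

    y(n) = sum_k h1[k] x(n-k) + sum_{k1,k2} h2[k1,k2] x(n-k1) x(n-k2)
    """
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    M = len(h1)
    y = np.zeros(len(x))
    for n in range(len(x)):
        # window of past inputs x(n), ..., x(n-M+1), zero-padded at the start
        w = np.array([x[n - k] if n >= k else 0.0 for k in range(M)])
        y[n] = h1 @ w + w @ h2 @ w
    return y

x = np.array([1.0, 2.0, 0.5])
y_lin = volterra2_predict(x, h1=[1.0], h2=[[0.0]])   # identity system: y = x
y_sq = volterra2_predict(x, h1=[0.0], h2=[[1.0]])    # pure squarer: y = x**2
```

The quadratic kernel is what captures effects like the Morison drag term's velocity-squared nonlinearity, which a purely linear frequency response function cannot represent.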

Convolutional Neural Network with Expert Knowledge for Hyperspectral Remote Sensing Imagery Classification

  • Wu, Chunming;Wang, Meng;Gao, Lang;Song, Weijing;Tian, Tian;Choo, Kim-Kwang Raymond
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.13 no.8
    • /
    • pp.3917-3941
    • /
    • 2019
  • The recent interest in artificial intelligence and machine learning has partly contributed to interest in using such approaches for hyperspectral remote sensing (HRS) imagery classification, as evidenced by the increasing number of deep frameworks with deep convolutional neural network (CNN) structures proposed in the literature. In these approaches, obtaining high-quality deep features with a CNN is not always easy or efficient because of the complex data distribution and the limited sample size. In this paper, conventional handcrafted multi-features based on expert knowledge are introduced as the input of a specially designed CNN to improve the pixel description and classification performance of HRS imagery. Introducing these handcrafted features can reduce the complexity of the original HRS data and reduce the sample requirements by eliminating redundant information and improving the starting point of deep feature training. It also provides some concise and effective features that are not readily available from direct training with a CNN. Evaluations using three public HRS datasets demonstrate the utility of the proposed method in HRS classification.

Predictive modeling of the compressive strength of bacteria-incorporated geopolymer concrete using a gene expression programming approach

  • Mansouri, Iman;Ostovari, Mobin;Awoyera, Paul O.;Hu, Jong Wan
    • Computers and Concrete
    • /
    • v.27 no.4
    • /
    • pp.319-332
    • /
    • 2021
  • The performance of gene expression programming (GEP) in predicting the compressive strength of bacteria-incorporated geopolymer concrete (GPC) was examined in this study. Ground-granulated blast-furnace slag (GGBS), new bacterial strains, fly ash (FA), silica fume (SF), metakaolin (MK), and manufactured sand were used as ingredients in the concrete mixture. For the geopolymer preparation, an 8 M sodium hydroxide (NaOH) solution was used, and an ambient curing temperature (28℃) was maintained for all mixtures. The ratio of sodium silicate (Na2SiO3) to NaOH was 2.33, and the ratio of alkaline liquid to binder was 0.35. Based on experimental data collected from the literature, an evolutionary-based algorithm (GEP) was proposed to develop new predictive models for estimating the compressive strength of GPC containing bacteria. Data were split into training and testing sets to obtain a closed-form solution using GEP. Independent variables for the model were the constituent materials of GPC, such as FA, MK, SF, and Bacillus bacteria. A total of six GEP formulations were developed for predicting the compressive strength of bacteria-incorporated GPC at 1, 3, 7, 28, 56, and 90 days of curing. 80% and 20% of the data were used for training and testing the models, respectively. R2 values in the range of 0.9747 to 0.9950 (across the training and test datasets) were obtained for the concrete samples, showing that GEP can predict the compressive strength of GPC containing bacteria with minimal error. Moreover, the GEP models were in good agreement with the experimental datasets and were robust and reliable. The models developed could serve as a tool for concrete constructors using geopolymers within the framework of this research.
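The evaluation protocol above, an 80/20 train/test split with R2 as the fit statistic, is easy to reproduce. The sketch below scores a hypothetical closed-form strength model (a simple linear fit on synthetic data, not one of the paper's six GEP formulations):

```python
import numpy as np

def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(1)
x = rng.uniform(0.2, 0.8, size=50)            # a made-up mix-proportion variable
strength = 40.0 + 25.0 * x + rng.normal(0, 0.5, size=50)   # synthetic MPa values

split = int(0.8 * len(x))                     # 80% train / 20% test
model = np.polynomial.Polynomial.fit(x[:split], strength[:split], deg=1)
r2_test = r2_score(strength[split:], model(x[split:]))
```

Reporting R2 on the held-out 20% rather than the training set is what guards a closed-form model like GEP's against overfitting the collected data.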

Learning T.P.O Inference Model of Fashion Outfit Using LDAM Loss in Class Imbalance (LDAM 손실 함수를 활용한 클래스 불균형 상황에서의 옷차림 T.P.O 추론 모델 학습)

  • Park, Jonghyuk
    • Journal of the Korea Convergence Society
    • /
    • v.12 no.3
    • /
    • pp.17-25
    • /
    • 2021
  • When a person dresses, it is important to configure an outfit appropriate to the intended occasion. Therefore, the T.P.O (Time, Place, Occasion) of an outfit is considered in various artificial-intelligence-based fashion recommendation systems. However, there are few studies that directly infer the T.P.O from outfit images, as the problem inherently involves multi-label and class-imbalance issues that make model training challenging. Therefore, in this study, we propose a model that can infer the T.P.O of outfit images by employing a label-distribution-aware margin (LDAM) loss function. Datasets for model training and evaluation were collected from fashion shopping malls. Performance measurements confirmed that the proposed model showed balanced performance across all T.P.O classes compared to baselines.
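The LDAM loss assigns each class a margin proportional to n_j^(-1/4), so rarer classes must be predicted with a larger confidence gap before the loss is satisfied. A numpy sketch of the core computation, following the published LDAM formulation (Cao et al., 2019); the class counts and logits below are made up:

```python
import numpy as np

def ldam_loss(logits, labels, class_counts, max_margin=0.5):
    """Label-distribution-aware margin (LDAM) loss, batch-averaged.

    Class j gets margin m_j proportional to n_j^(-1/4), rescaled so the
    largest margin equals max_margin; the margin is subtracted from the
    true-class logit before a standard softmax cross-entropy.
    """
    m = 1.0 / np.power(np.asarray(class_counts, dtype=float), 0.25)
    m = m * (max_margin / m.max())
    z = np.array(logits, dtype=float)
    rows = np.arange(len(labels))
    z[rows, labels] -= m[labels]                 # enforce the per-class margin
    z -= z.max(axis=1, keepdims=True)            # numerical stability
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[rows, labels].mean()

counts = np.array([1000, 10])                    # imbalanced: class 1 is rare
logits = np.array([[2.0, 0.0], [0.0, 2.0]])
labels = np.array([0, 1])
loss = ldam_loss(logits, labels, counts)
```

Setting `max_margin=0` recovers plain cross-entropy, which makes the effect of the margin easy to isolate in experiments.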

Deep Learning Methods for Recognition of Orchard Crops' Diseases

  • Sabitov, Baratbek;Biibsunova, Saltanat;Kashkaroeva, Altyn;Biibosunov, Bolotbek
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.10
    • /
    • pp.257-261
    • /
    • 2022
  • Diseases of agricultural plants have spread widely across the regions of the Kyrgyz Republic in recent years and pose a serious threat to the yield of many crops; their consequences can greatly affect the food security of the entire country. Abnormal climatic conditions can locally destroy the annual incomes of many farmers and agricultural producers. At the same time, rapid detection of plant diseases remains difficult in many regions due to the lack of necessary infrastructure. Advances in computer vision based on machine and deep learning can pave the way for disease diagnosis, with feedback between farmers and developers used to build and update a database of diseased and healthy plants. Currently, models are increasingly trained starting from publicly available datasets and pre-trained models; the latter approach is called transfer learning and is developing very quickly. Using the publicly available PlantVillage dataset, which consists of 54,306 images, or NewPlantVillage, with 87,356 images of diseased and healthy plant leaves collected under controlled conditions, a deep convolutional neural network can be built to identify 14 crop species and 26 diseases. The trained model can achieve an accuracy of more than 99% on a specially selected test set.

Turbulent-image Restoration Based on a Compound Multibranch Feature Fusion Network

  • Banglian Xu;Yao Fang;Leihong Zhang;Dawei Zhang;Lulu Zheng
    • Current Optics and Photonics
    • /
    • v.7 no.3
    • /
    • pp.237-247
    • /
    • 2023
  • In middle- and long-distance imaging systems, atmospheric turbulence caused by temperature, wind speed, humidity, and so on distorts light waves propagating through the air, resulting in image-quality degradation such as geometric deformation and blurring. In remote sensing, astronomical observation, and traffic monitoring, the information lost to such degradation is costly, so effective restoration of degraded images is very important. To restore images degraded by atmospheric turbulence, an image-restoration method based on an improved compound multibranch feature fusion network (CMFNetPro) was proposed. Building on the CMFNet network, an efficient channel-attention mechanism replaced the original channel-attention mechanism to improve image quality and network efficiency. In the experiments, two-dimensional random distortion vector fields were used to construct two turbulent datasets with different degrees of distortion, based on the Google Landmarks Dataset v2. The results showed that, compared to the CMFNet, DeblurGAN-v2, and MIMO-UNet models, the proposed CMFNetPro network achieves better performance in both quality and training cost of turbulent-image restoration. In mixed training, CMFNetPro was 1.2391 dB (weak turbulence) and 0.8602 dB (strong turbulence) higher in peak signal-to-noise ratio, and 0.0015 (weak turbulence) and 0.0136 (strong turbulence) higher in structural similarity, than CMFNet, and its training was 14.4 hours faster. This provides a feasible scheme for deep-learning-based turbulent-image restoration.
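The headline restoration metric above, peak signal-to-noise ratio, is a simple function of the mean squared error between the restored and reference images. A quick numpy implementation, assuming images scaled to [0, 1] (that range is a convention chosen here, not stated in the abstract):

```python
import numpy as np

def psnr(reference, restored, peak=1.0):
    """Peak signal-to-noise ratio in dB between two same-shaped images."""
    ref = np.asarray(reference, dtype=float)
    out = np.asarray(restored, dtype=float)
    mse = np.mean((ref - out) ** 2)
    if mse == 0:
        return float("inf")                      # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

clean = np.zeros((4, 4))
noisy = clean + 0.1                              # uniform error of 0.1 -> MSE 0.01
value = psnr(clean, noisy)                       # 10 * log10(1 / 0.01) = 20 dB
```

Because PSNR is logarithmic, the reported ~1.24 dB gain over CMFNet corresponds to roughly a 25% reduction in mean squared error.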

A Survey on Open Source based Large Language Models (오픈 소스 기반의 거대 언어 모델 연구 동향: 서베이)

  • Ha-Young Joo;Hyeontaek Oh;Jinhong Yang
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.16 no.4
    • /
    • pp.193-202
    • /
    • 2023
  • In recent years, the outstanding performance of large language models (LLMs) trained on extensive datasets has become a hot topic. Since many studies on LLMs take open-source approaches, the ecosystem is expanding rapidly. Task-specific, lightweight, high-performing models are being actively disseminated, built by applying additional training techniques to pre-trained LLMs used as foundation models. On the other hand, the performance of LLMs for Korean is subpar because English comprises a significant proportion of the training datasets of existing LLMs. Therefore, research is being carried out on Korean-specific LLMs that are further trained on Korean-language data. This paper identifies trends in open-source-based LLMs and introduces research on Korean-specific large language models; moreover, the applications and limitations of large language models are described.