• Title/Summary/Keyword: training models

A Study on the Improvement of the Weapon System T&E Performance System (국방 무기체계 시험평가 수행체계 개선방안 연구)

  • BaekJung Kim;Sukjae Jeong
    • Journal of The Korean Institute of Defense Technology
    • /
    • v.5 no.2
    • /
    • pp.1-9
    • /
    • 2023
  • The purpose of this study is to derive the need to improve the test and evaluation (T&E) performance system for weapon systems to which advanced science and technology is applied, to evaluate priorities, and to present development plans. Through a literature review and case analysis of changes in the weapon system T&E environment, including T&E of AI-based weapon systems, 11 detailed evaluation items were derived in terms of the T&E system, structure, and technology. T&E of AI-based weapon systems should perform data-based performance evaluation in parallel with actual T&E, and performance measurement on a separate T&E data set is required to evaluate the performance of AI models. As a result of analyzing the importance of T&E through AHP analysis, the criteria were ranked in the order T&E system, technology, and structure, and the detailed evaluation items were prioritized in the order of T&E result judgment, T&E organization and expert training, and scientific T&E. For the evaluation items with high priority, measures to improve the T&E performance system were presented.
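
A small worked sketch of the AHP step mentioned in the abstract, using illustrative pairwise-comparison numbers rather than the study's survey data: priorities are obtained as the normalized principal eigenvector of a comparison matrix over the three criteria (T&E system, technology, structure).

```python
# Illustrative AHP priority calculation (hypothetical comparison values, not the study's data).
import numpy as np

criteria = ["T&E system", "technology", "structure"]
# Pairwise comparisons on Saaty's 1-9 scale: A[i, j] = importance of criterion i over j.
A = np.array([
    [1.0, 2.0, 3.0],
    [1/2, 1.0, 2.0],
    [1/3, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = principal / principal.sum()        # AHP priority weights, summing to 1

for name, w in zip(criteria, weights):
    print(f"{name}: {w:.3f}")
```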

COVID-19: Improving the accuracy using data augmentation and pre-trained DCNN Models

  • Saif Hassan;Abdul Ghafoor;Zahid Hussain Khand;Zafar Ali;Ghulam Mujtaba;Sajid Khan
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.7
    • /
    • pp.170-176
    • /
    • 2024
  • Since the World Health Organization (WHO) declared COVID-19 a pandemic, many researchers have started working on developing vaccines and AI systems to detect COVID-19 patients using chest X-ray images. The purpose of this work is to improve the performance of pre-trained deep convolutional neural networks (DCNNs) on a chest X-ray image dataset, especially the COVID-19 class, which was built by collecting images from different sources such as GitHub and Kaggle. To improve the performance of the DCNNs, data augmentation is used in this study. The COVID-19 class collected from GitHub contained 257 images, while the other two classes, normal and pneumonia, each had more than 500 images. There were two issues when training a DCNN model on this dataset: the classes were imbalanced, and the amount of data was very small. To handle both issues, we performed data augmentation such as rotation and flipping to enlarge and balance the dataset. After data augmentation, each class contained 510 images. Results show that augmentation of chest X-ray images helps improve accuracy. The accuracy produced by our proposed architecture before and after augmentation is 96.8% and 98.4%, respectively.
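
A minimal sketch of the augmentation idea described above (rotation and flipping to grow a small class up to a target count); it is not the authors' code, and the folder names "covid"/"covid_aug" and the 510-image target are illustrative assumptions.

```python
# Offline augmentation of a small image class by small rotations and horizontal flips.
import os
from PIL import Image

SRC, DST, TARGET = "covid", "covid_aug", 510   # hypothetical paths and target count

os.makedirs(DST, exist_ok=True)
files = sorted(f for f in os.listdir(SRC) if f.lower().endswith((".png", ".jpg", ".jpeg")))

count = 0
angles = [0, 5, -5, 10, -10]                    # small rotations keep anatomy plausible
while count < TARGET and files:
    for name in files:
        if count >= TARGET:
            break
        img = Image.open(os.path.join(SRC, name)).convert("L")
        out = img.rotate(angles[count % len(angles)], expand=False)
        if count % 2 == 1:                      # alternate horizontal flips
            out = out.transpose(Image.FLIP_LEFT_RIGHT)
        out.save(os.path.join(DST, f"{count:04d}_{name}"))
        count += 1
```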

TCN-USAD for Anomaly Power Detection (이상 전력 탐지를 위한 TCN-USAD)

  • Hyeonseok Jin;Kyungbaek Kim
    • Smart Media Journal
    • /
    • v.13 no.7
    • /
    • pp.9-17
    • /
    • 2024
  • Due to increasing energy consumption and eco-friendly policies, there is a need for efficient energy use in buildings, and deep learning-based anomaly power detection is being adopted. Because anomaly data are difficult to collect, anomaly detection is typically performed using the reconstruction error of a Recurrent Neural Network (RNN)-based autoencoder. However, this approach has limitations such as the long time required to fully learn temporal features and its sensitivity to noise in the training data. To overcome these limitations, this paper proposes TCN-USAD, which combines a Temporal Convolutional Network (TCN) with UnSupervised Anomaly Detection for multivariate data (USAD). The proposed model uses a TCN-based autoencoder and the USAD structure, which employs two decoders and adversarial training, to quickly learn temporal features and enable robust anomaly detection. To validate the performance of TCN-USAD, comparative experiments were performed using two building energy datasets. The results show that the TCN-based autoencoder performs faster and better reconstruction than an RNN-based autoencoder. Furthermore, TCN-USAD achieved an F1-score about 20% higher than other anomaly detection models, demonstrating excellent anomaly detection performance.
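
A minimal sketch of the reconstruction-error idea behind such detectors, assuming a simple dilated 1-D convolutional autoencoder in place of the paper's full TCN-USAD (no dual decoders or adversarial training shown); window length, channel sizes, and the percentile threshold are illustrative assumptions.

```python
# Convolutional autoencoder scoring power windows by reconstruction error.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(16, 8, kernel_size=3, padding=4, dilation=4), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv1d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):                       # x: (batch, 1, window)
        return self.decoder(self.encoder(x))

def anomaly_scores(model, windows):
    """Per-window mean squared reconstruction error."""
    with torch.no_grad():
        recon = model(windows)
        return ((windows - recon) ** 2).mean(dim=(1, 2))

# Usage: train on normal data only, then flag windows whose score exceeds,
# e.g., the 99th percentile of the training scores.
model = ConvAutoencoder()
normal = torch.randn(128, 1, 64)                # stand-in for normal power windows
threshold = anomaly_scores(model, normal).quantile(0.99)
```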

Semi-supervised SAR Image Classification with Threshold Learning Module (임계값 학습 모듈을 적용한 준지도 SAR 이미지 분류)

  • Jae-Jun Do;Sunok Kim
    • The Journal of Bigdata
    • /
    • v.8 no.2
    • /
    • pp.177-187
    • /
    • 2023
  • Semi-supervised learning (SSL) is an effective approach to training models using a small amount of labeled data and a larger amount of unlabeled data. However, many papers in the field apply pseudo-labels with a single fixed threshold, without considering the feature-wise differences among images of different classes. In this paper, we propose an SSL method for synthetic aperture radar (SAR) image classification that applies a different threshold to each class instead of one fixed threshold for all classes. We introduce a threshold learning module into the model that considers the differences in feature distributions among classes and dynamically learns a threshold for each class. We compare SSL SAR image classification with and without class-specific thresholds and examine the advantages of employing them.
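
A brief sketch of class-specific pseudo-label thresholding under stated assumptions, not the paper's module: thresholds are held as one learnable parameter per class, and unlabeled samples contribute to the loss only when their confidence beats their predicted class's threshold (how the thresholds themselves are optimised in the paper is not reproduced here).

```python
# Per-class confidence thresholds for pseudo-labeling.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassThresholds(nn.Module):
    """One learnable threshold per class, kept in (0, 1) via a sigmoid."""
    def __init__(self, num_classes: int, init: float = 0.9):
        super().__init__()
        self.logits = nn.Parameter(torch.full((num_classes,), math.log(init / (1 - init))))

    def forward(self) -> torch.Tensor:
        return torch.sigmoid(self.logits)

def pseudo_label_loss(logits_unlabeled: torch.Tensor, thresholds: torch.Tensor) -> torch.Tensor:
    """Cross-entropy on unlabeled samples whose confidence beats their class threshold."""
    probs = F.softmax(logits_unlabeled, dim=1)
    conf, pseudo = probs.max(dim=1)
    mask = conf > thresholds[pseudo]            # class-specific cut-off per sample
    if mask.sum() == 0:
        return logits_unlabeled.new_zeros(())
    return F.cross_entropy(logits_unlabeled[mask], pseudo[mask])

# Usage: thresholds = ClassThresholds(num_classes=10)(); loss = pseudo_label_loss(logits, thresholds)
```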

Deep Learning-Based Lumen and Vessel Segmentation of Intravascular Ultrasound Images in Coronary Artery Disease

  • Gyu-Jun Jeong;Gaeun Lee;June-Goo Lee;Soo-Jin Kang
    • Korean Circulation Journal
    • /
    • v.54 no.1
    • /
    • pp.30-39
    • /
    • 2024
  • Background and Objectives: Intravascular ultrasound (IVUS) evaluation of coronary artery morphology is based on lumen and vessel segmentation. This study aimed to develop an automatic segmentation algorithm and validate its performance for measuring quantitative IVUS parameters. Methods: A total of 1,063 patients were randomly assigned, with a ratio of 4:1, to the training and test sets. An independent data set of 111 IVUS pullbacks was obtained to assess vessel-level performance. The lumen and external elastic membrane (EEM) boundaries were labeled manually in every IVUS frame at a 0.2-mm interval. Efficient-UNet was utilized for the automatic segmentation of IVUS images. Results: At the frame level, Efficient-UNet showed a high Dice similarity coefficient (DSC, 0.93±0.05) and Jaccard index (JI, 0.87±0.08) for lumen segmentation, and a high DSC (0.97±0.03) and JI (0.94±0.04) for EEM segmentation. At the vessel level, there were close correlations between model-derived and expert-measured IVUS parameters: minimal lumen image area (r=0.92), EEM area (r=0.88), lumen volume (r=0.99), and plaque volume (r=0.95). The agreement between model-derived and expert-measured minimal lumen area was as excellent as the agreement among experts. Model-based lumen and EEM segmentation for a 20-mm lesion segment required 13.2 seconds, whereas manual segmentation at a 0.2-mm interval by an expert took 187.5 minutes on average. Conclusions: The deep learning models can accurately and quickly delineate vascular geometry. This artificial intelligence-based methodology may support clinicians' decision-making through real-time application in the catheterization laboratory.
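
For reference, a short sketch of how the two overlap metrics reported above (Dice similarity coefficient and Jaccard index) are computed for binary segmentation masks; NumPy arrays of 0/1 values are assumed.

```python
# Dice similarity coefficient and Jaccard index for binary masks.
import numpy as np

def dice_and_jaccard(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8):
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    dice = 2.0 * intersection / (pred.sum() + truth.sum() + eps)
    jaccard = intersection / (np.logical_or(pred, truth).sum() + eps)
    return dice, jaccard
```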

Estimation of surface nitrogen dioxide mixing ratio in Seoul using the OMI satellite data (OMI 위성자료를 활용한 서울 지표 이산화질소 혼합비 추정 연구)

  • Kim, Daewon;Hong, Hyunkee;Choi, Wonei;Park, Junsung;Yang, Jiwon;Ryu, Jaeyong;Lee, Hanlim
    • Korean Journal of Remote Sensing
    • /
    • v.33 no.2
    • /
    • pp.135-147
    • /
    • 2017
  • We, for the first time, estimated daily and monthly surface nitrogen dioxide ($NO_2$) volume mixing ratio (VMR) using three regression models with tropospheric $NO_2$ vertical column density (OMI-Trop $NO_2$ VCD) data obtained from the Ozone Monitoring Instrument (OMI) over Seoul, South Korea, at the OMI overpass time (13:45 local time). The first linear regression model (M1) is a linear regression between OMI-Trop $NO_2$ VCD and in situ $NO_2$ VMR, whereas the second linear regression model (M2) incorporates boundary layer height (BLH), temperature, and pressure obtained from the Atmospheric Infrared Sounder (AIRS) together with OMI-Trop $NO_2$ VCD. The last models (M3M and M3D) are multiple linear regression equations that include OMI-Trop $NO_2$ VCD, BLH, and various meteorological data. In this study, we determined the three types of regression models for the training period between 2009 and 2011, and the performance of those regression models was evaluated by comparison with surface $NO_2$ VMR data obtained from in situ measurements (in situ $NO_2$ VMR) in 2012. The monthly mean surface $NO_2$ VMRs estimated by M3M showed good agreement with those of in situ measurements (avg. R = 0.77). In terms of the daily (13:45 LT) $NO_2$ estimation, the highest correlations were found between the daily surface $NO_2$ VMRs estimated by M3D and in situ $NO_2$ VMRs (avg. R = 0.55). The surface $NO_2$ VMRs estimated by the three models tend to be underestimated. We also discuss the performance of these empirical models for surface $NO_2$ VMR estimation with respect to other statistical measures such as root mean square error (RMSE), mean bias, mean absolute error (MAE), and percent difference. This study shows the possibility of estimating surface $NO_2$ VMR from satellite measurements.
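
A minimal sketch of the multiple-linear-regression idea behind models such as M2/M3 (not the authors' exact equations): surface $NO_2$ VMR is regressed on the tropospheric $NO_2$ column and meteorological predictors fitted on a training period and evaluated on a held-out year. Predictor ordering and the random stand-in arrays are assumptions.

```python
# Multiple linear regression of surface NO2 VMR on satellite and meteorological predictors.
import numpy as np
from sklearn.linear_model import LinearRegression

# Predictor columns (assumed): [OMI-Trop NO2 VCD, boundary layer height, temperature, pressure]
X_train = np.random.rand(300, 4)           # stand-in for 2009-2011 training samples
y_train = np.random.rand(300)              # stand-in for in situ surface NO2 VMR

model = LinearRegression().fit(X_train, y_train)

X_test = np.random.rand(100, 4)            # stand-in for 2012 evaluation samples
y_pred = model.predict(X_test)

# Evaluation statistics named in the abstract: RMSE, mean bias, MAE.
def rmse(a, b): return float(np.sqrt(np.mean((a - b) ** 2)))
def mean_bias(a, b): return float(np.mean(a - b))
def mae(a, b): return float(np.mean(np.abs(a - b)))
```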

Effects of Treadmill Exercise on Alpha-synuclein Mutation and Activated Neurotrophins in Nigrostriatal Region of MPTP-induced Parkinson Models (MPTP 파킨슨 모델의 트레드밀 운동이 알파시누크린 변성과 흑질선조체내 신경성장인자 활성화에 미치는 영향)

  • Park, Jae-Sung;Kim, Jeong-Hwan;Yoon, Sung-Jin
    • Journal of Korean Medicine Rehabilitation
    • /
    • v.19 no.2
    • /
    • pp.73-88
    • /
    • 2009
  • Objectives: Neuronal changes resulting from treadmill exercise in patients with Parkinson's disease (PD) have not been well documented, although some clinical and laboratory reports suggest that regular exercise may produce a neuroprotective effect and restore dopaminergic and motor functions. However, it is not clear whether the improvements are due to neuronal alterations within the affected nigrostriatal region or result from a more general effect of exercise on affect and motivation. In this study, we demonstrate that motorized treadmill exercise improves neuronal outcomes in rodent models of PD. Methods: We used a chronic mouse model of parkinsonism, induced by injecting male C57BL/6 mice with 10 doses (every 12 hours) of 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP, 30 mg/kg) and probenecid (20 mg/kg) over 5 days. These mice were able to sustain an exercise training program on a motorized rodent treadmill at a speed of 18 m/min, $0^{\circ}$ inclination, 40 min/day, 5 days/week for 4 weeks. At the end of exercise training, we extracted the brains and compared the neuronal and neurochemical changes with those of the control (saline and sedentary) mouse groups. Synphilin is a protein that manifestly reacts with ${\alpha}$-synuclein, and in this study we used synphilin as a marker of recovery from neurodegeneration. We analyzed the substantia nigra and striatum regions of the brain stem using the Western blotting technique. Results: There was no expression of synphilin in the saline-induced groups. The addition of MPTP greatly accelerated synphilin expression, indicating aggregation of ${\alpha}$-synuclein. However, the MPTP-induced treadmill exercise group showed significantly lower expression than the MPTP-induced sedentary group, meaning that treadmill exercise has a definite effect on reducing ${\alpha}$-synuclein aggregation. Conclusions: Our results suggest that treadmill exercise promoted the removal of ${\alpha}$-synuclein aggregates, resulting in protection against disease development and blocking the apoptotic process in the chronic parkinsonian mouse brain with severe neurodegeneration.

Sentiment Analysis of Movie Reviews Using an Integrated CNN-LSTM Model (CNN-LSTM 조합모델을 이용한 영화리뷰 감성분석)

  • Park, Ho-yeon;Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.4
    • /
    • pp.141-154
    • /
    • 2019
  • Internet technology and social media are growing rapidly, and data mining technology has evolved to enable unstructured document representations in a variety of applications. Sentiment analysis is an important technology that can distinguish low-quality from high-quality content through the text data of products, and it has proliferated with the growth of text mining. Sentiment analysis mainly analyzes people's opinions in text data by assigning predefined categories such as positive and negative. It has been studied in various directions in terms of accuracy, from simple rule-based approaches to dictionary-based approaches using predefined labels; in fact, sentiment analysis is one of the most active research topics in natural language processing and is widely studied in text mining. Real online reviews are not only easy to collect openly, but they also affect business. In marketing, real-world information from customers is gathered from websites rather than surveys, and whether a website's posts are positive or negative is reflected in sales, so firms try to identify this information. However, many reviews on a website are not well written and are difficult to classify. Earlier studies in this research area used review data from the Amazon.com shopping mall, but recent studies use data on stock market trends, blogs, news articles, weather forecasts, IMDB, Facebook, etc. However, a lack of accuracy is recognized because sentiment calculations change according to the subject, paragraph, direction of the sentiment lexicon, and sentence strength. This study aims to classify the polarity of sentiment analysis into positive and negative categories and to increase the prediction accuracy of the polarity analysis using the pretrained IMDB review data set. First, for the text classification algorithms related to sentiment analysis, popular machine learning algorithms such as NB (naive Bayes), SVM (support vector machines), XGBoost, RF (random forests), and gradient boosting are adopted as comparative models. Second, deep learning has demonstrated the ability to extract complex, discriminative features from data; representative algorithms are CNNs (convolutional neural networks), RNNs (recurrent neural networks), and LSTM (long short-term memory). A CNN can be used similarly to a bag-of-words model when processing a sentence in vector format, but it does not consider sequential data attributes. An RNN handles ordering well because it takes the temporal information of the data into account, but it suffers from long-term dependency problems in memory; to solve the problem of long-term dependence, LSTM is used. For comparison, CNN and LSTM were chosen as simple deep learning models, and in addition to the classical machine learning algorithms, CNN, LSTM, and the integrated models were analyzed. Although the algorithms have many parameters, we examined the relationship between parameter values and precision to find the optimal combination, and we tried to understand how well the models work for sentiment analysis and how they work. This study proposes integrated CNN and LSTM algorithms to extract the positive and negative features of text. The reasons for combining these two algorithms are as follows. A CNN can extract features for classification automatically by applying convolution layers and massively parallel processing, whereas LSTM is not capable of highly parallel processing.
Like faucets, the LSTM has input, output, and forget gates that can be opened and closed at the desired time, and these gates have the advantage of placing memory blocks on hidden nodes. The memory block of the LSTM may not store all the data, but it can address the CNN's long-term dependency problem. Furthermore, when LSTM is used after the CNN's pooling layer, the model has an end-to-end structure, so spatial and temporal features can be learned simultaneously. With the combined CNN-LSTM, 90.33% accuracy was measured; this is slower than CNN but faster than LSTM, and the proposed model was more accurate than the other models. In addition, the word embedding layer can be improved when training the kernel step by step. CNN-LSTM can compensate for the weaknesses of each model, and there is the advantage of improving learning layer by layer through the end-to-end structure of the LSTM. For these reasons, this study seeks to enhance the classification accuracy of movie reviews using the integrated CNN-LSTM model.
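
A minimal sketch of a CNN-LSTM text classifier of the kind described above, not the authors' architecture: a 1-D convolution extracts local n-gram features, pooling shortens the sequence, and an LSTM reads the pooled feature sequence before a binary output. The vocabulary size, sequence length, and layer sizes are illustrative assumptions.

```python
# Keras sketch of a combined CNN-LSTM binary sentiment classifier.
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB, MAXLEN, EMB = 20000, 200, 128        # hypothetical hyper-parameters

model = models.Sequential([
    layers.Input(shape=(MAXLEN,)),
    layers.Embedding(VOCAB, EMB),
    layers.Conv1D(64, kernel_size=5, activation="relu"),   # local n-gram features
    layers.MaxPooling1D(pool_size=2),
    layers.LSTM(64),                                        # sequential reading of pooled features
    layers.Dense(1, activation="sigmoid"),                  # positive vs. negative review
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```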

Corporate Default Prediction Model Using Deep Learning Time Series Algorithm, RNN and LSTM (딥러닝 시계열 알고리즘 적용한 기업부도예측모형 유용성 검증)

  • Cha, Sungjae;Kang, Jungseok
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.4
    • /
    • pp.1-32
    • /
    • 2018
  • Beyond the stakeholders of bankrupt companies, including managers, employees, creditors, and investors, corporate defaults have a ripple effect on the local and national economy. Before the Asian financial crisis, the Korean government analyzed only SMEs and tried to improve the forecasting power of a single default prediction model rather than developing various corporate default models; as a result, even large corporations, the so-called 'chaebol enterprises', went bankrupt. Even after that, the analysis of past corporate defaults focused on specific variables, and when the government restructured firms immediately after the global financial crisis, it focused only on certain main variables such as the 'debt ratio'. A multifaceted study of corporate default prediction models is essential to protect diverse interests, to avoid situations like the 'Lehman Brothers case' of the global financial crisis, and to avoid a total collapse in a single moment. The key variables used in predicting corporate defaults vary over time: comparing Beaver's (1967, 1968) and Altman's (1968) analyses with Deakin's (1972) study shows that the major factors affecting corporate failure have changed, and Grice's (2001) study, working with Zmijewski's (1984) and Ohlson's (1980) models, also found that the importance of predictive variables changes. However, the studies carried out in the past used static models, and most of them do not consider changes that occur over time. Therefore, in order to construct consistent prediction models, it is necessary to compensate for time-dependent bias by means of a time series algorithm that reflects dynamic change. Centered on the global financial crisis, which had a significant impact on Korea, this study is conducted using 10 years of annual corporate data from 2000 to 2009. The data are divided into training, validation, and test sets of 7, 2, and 1 years, respectively. To construct a bankruptcy model that is consistent over time, we first train a time series deep learning model using the data before the financial crisis (2000-2006). Parameter tuning of the existing models and the deep learning time series algorithm is conducted with validation data that include the financial crisis period (2007-2008). As a result, we construct a model that shows a pattern similar to the results on the training data and exhibits excellent prediction power. After that, each bankruptcy prediction model is retrained by merging the training and validation data (2000-2008), applying the optimal parameters found in the previous validation. Finally, each corporate default prediction model is evaluated and compared using the test data (2009), based on the models trained over nine years, and the usefulness of the corporate default prediction model based on the deep learning time series algorithm is demonstrated. In addition, by adding Lasso regression to the existing variable-selection methods (multiple discriminant analysis, logit model), it is shown that the deep learning time series model based on the three bundles of variables is useful for robust corporate default prediction. The definition of bankruptcy used is the same as that of Lee (2015). Independent variables include financial information such as the financial ratios used in previous studies. Multivariate discriminant analysis, the logit model, and the Lasso regression model are used to select the optimal variable groups.
The multivariate discriminant analysis model proposed by Altman (1968), the logit model proposed by Ohlson (1980), non-time-series machine learning algorithms, and deep learning time series algorithms are compared. Corporate data have limitations of 'nonlinear variables', 'multi-collinearity' among variables, and 'lack of data'. The logit model handles nonlinearity, the Lasso regression model addresses the multi-collinearity problem, and the deep learning time series algorithm, using a variable data generation method, compensates for the lack of data. Big data technology, a leading technology of the future, is moving from simple human analysis to automated AI analysis and finally toward intertwined AI applications. Although the study of corporate default prediction models using time series algorithms is still in its early stages, the deep learning algorithm is much faster than regression analysis at corporate default prediction modeling and more effective in terms of prediction power. Through the Fourth Industrial Revolution, the current government and other overseas governments are working hard to integrate such systems into the everyday life of their nations and societies, yet the field of deep learning time series research for the financial industry is still insufficient. This is an initial study on deep learning time series analysis of corporate defaults, and it is hoped that it will serve as comparative reference material for non-specialists who begin to combine financial data with deep learning time series algorithms.
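
A brief sketch, under stated assumptions, of the kind of time series default classifier the abstract describes (not the authors' model): an LSTM reads several years of annual financial ratios per firm and outputs a default probability. The number of years, number of ratios, and layer sizes are hypothetical.

```python
# LSTM over annual financial-ratio sequences for default prediction.
import tensorflow as tf
from tensorflow.keras import layers, models

YEARS, N_RATIOS = 7, 20                     # e.g., 7 years of 20 financial ratios per firm

model = models.Sequential([
    layers.Input(shape=(YEARS, N_RATIOS)),  # one sequence of yearly ratio vectors per firm
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),  # estimated probability of default
])
model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.AUC()],
)
```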

Sentiment Analysis of Korean Reviews Using CNN: Focusing on Morpheme Embedding (CNN을 적용한 한국어 상품평 감성분석: 형태소 임베딩을 중심으로)

  • Park, Hyun-jung;Song, Min-chae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.2
    • /
    • pp.59-83
    • /
    • 2018
  • With the increasing importance of sentiment analysis for grasping the needs of customers and the public, various types of deep learning models have been actively applied to English texts. In deep learning-based sentiment analysis of English texts, the natural language sentences in the training and test datasets are usually converted into sequences of word vectors before being entered into the deep learning models. In this case, word vectors generally refer to vector representations of words obtained by splitting a sentence on space characters. There are several ways to derive word vectors, one of which is Word2Vec, used to produce the 300-dimensional Google word vectors from about 100 billion words of Google News data; these have been widely used in studies of sentiment analysis of reviews from fields such as restaurants, movies, laptops, and cameras. Unlike in English, morphemes play an essential role in sentiment analysis and sentence structure analysis in Korean, which is a typical agglutinative language with developed postpositions and endings. A morpheme can be defined as the smallest meaningful unit of a language, and a word consists of one or more morphemes. For example, for the word '예쁘고', the morphemes are '예쁘' (adjective) and '고' (connective ending). Reflecting the significance of Korean morphemes, it seems reasonable to adopt the morpheme as the basic unit in Korean sentiment analysis. Therefore, in this study, we use 'morpheme vectors' as the input to a deep learning model rather than the 'word vectors' mainly used for English text. A morpheme vector is a vector representation of a morpheme and can be derived by applying an existing word vector derivation mechanism to sentences divided into their constituent morphemes. Here, several questions arise. What is the desirable range of POS (part-of-speech) tags when deriving morpheme vectors for improving the classification accuracy of a deep learning model? Is it appropriate to apply a typical word vector model, which relies primarily on the form of words, to Korean, with its high ratio of homonyms? Will text preprocessing such as correcting spelling or spacing errors affect the classification accuracy, especially when drawing morpheme vectors from Korean product reviews with many grammatical mistakes and variations? We seek empirical answers to these fundamental issues, which may be the first encountered when applying various deep learning models to Korean texts. As a starting point, we summarize these issues in three central research questions. First, which is more effective as the initial input of a deep learning model: morpheme vectors from grammatically correct texts of a domain other than the analysis target, or morpheme vectors from considerably ungrammatical texts of the same domain? Second, what is an appropriate morpheme vector derivation method for Korean regarding the range of POS tags, homonyms, text preprocessing, and minimum frequency? Third, can we achieve a satisfactory level of classification accuracy when applying deep learning to Korean sentiment analysis? As an approach to these research questions, we generate various types of morpheme vectors reflecting the research questions and then compare the classification accuracy through a non-static CNN (convolutional neural network) model that takes the morpheme vectors as input. As the training and test datasets, 17,260 cosmetics product reviews from Naver Shopping are used.
To derive morpheme vectors, we use data from the same domain as the target and data from another domain: about 2 million Naver Shopping cosmetics product reviews and 520,000 Naver News articles, arguably corresponding to Google's News data. The six primary sets of morpheme vectors constructed in this study differ in terms of the following three criteria. First, they come from two types of data source: Naver News, with high grammatical correctness, and Naver Shopping's cosmetics product reviews, with low grammatical correctness. Second, they are distinguished by the degree of data preprocessing, namely, sentence splitting only, or additional spelling and spacing corrections after sentence separation. Third, they vary in the form of input fed into the word vector model: whether the morphemes themselves are entered or the morphemes with their POS tags attached. The morpheme vectors further vary depending on the considered range of POS tags, the minimum frequency of morphemes included, and the random initialization range. All morpheme vectors are derived through the CBOW (continuous bag-of-words) model with a context window of 5 and a vector dimension of 300. It appears that utilizing text from the same domain even with a lower degree of grammatical correctness, performing spelling and spacing corrections as well as sentence splitting, and incorporating morphemes of all POS tags, including the incomprehensible category, lead to better classification accuracy. The POS tag attachment, devised for the high proportion of homonyms in Korean, and the minimum frequency standard for including a morpheme seem to have no definite influence on the classification accuracy.
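
A brief sketch of deriving morpheme vectors with a CBOW Word2Vec model using the window size (5) and dimension (300) stated above; it is not the authors' pipeline, and the KoNLPy Okt tokenizer, the file name, and the minimum-frequency setting are illustrative assumptions.

```python
# Morpheme-level CBOW Word2Vec vectors from Korean review text.
from konlpy.tag import Okt          # illustrative morphological analyzer choice
from gensim.models import Word2Vec

okt = Okt()
with open("reviews.txt", encoding="utf-8") as f:            # hypothetical corpus file
    sentences = [okt.morphs(line.strip()) for line in f if line.strip()]

model = Word2Vec(
    sentences,
    vector_size=300,   # vector dimension used in the study
    window=5,          # context window used in the study
    sg=0,              # sg=0 selects the CBOW model
    min_count=1,       # minimum-frequency cut-off (assumed value)
)

vector = model.wv["예쁘"]   # morpheme vector, assuming this stem occurs in the corpus
```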