• Title/Summary/Keyword: pre-trained model


A Study on Algorithm of Life Cycle Cost for Improving Reliability in Product Design (제품설계 신뢰성 제고를 위한 LCC의 알고리즘 연구)

  • Kim Dong-Kwan;Jung Soo-Il
    • Journal of the Korea Safety Management & Science
    • /
    • v.7 no.5
    • /
    • pp.155-174
    • /
    • 2005
  • Parametric life-cycle cost (LCC) models have been integrated with traditional design tools and used in prior work to demonstrate the rapid solution of holistic, analytical tradeoffs between detailed design variations. During early design stages there may be competing concepts with dramatic differences, detailed information is scarce, and decisions must be made quickly; the effort of building detailed models for such a diverse range of concepts, together with the lack of detailed information, makes the integration of traditional LCC models impractical. This paper explores an approximate method for providing preliminary life-cycle cost estimates. Learning algorithms trained on the known characteristics of existing products allow the LCC of new concepts to be approximated quickly during conceptual design without the overhead of defining new models. Artificial neural networks are trained to generalize over product attributes and life-cycle cost data from pre-existing LCC studies. Product attribute data for a new concept can then be used to quickly obtain its LCC, and an application is provided. In addition, a statistical method, regression analysis, is suggested to predict the LCC. Tests have shown that it is possible to predict the life-cycle cost, and a comparison between the learning LCC model and the regression analysis is also presented.
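
As a rough illustration of the approach this abstract describes, the sketch below trains a small neural network and an ordinary regression model on synthetic product-attribute data and compares their life-cycle cost predictions. The attribute names, data, and model sizes are hypothetical placeholders; the original work uses real product data from pre-existing LCC studies.

```python
# Minimal sketch: approximate life-cycle cost (LCC) from product attributes with
# (a) a small neural network and (b) linear regression, then compare the two.
# All data here is synthetic and illustrative only.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 4))  # hypothetical attributes: mass, power, part count, volume
y = X @ np.array([3.0, 1.5, 0.5, 2.0]) + rng.normal(scale=0.1, size=200)  # synthetic LCC

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_tr)

ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
ann.fit(scaler.transform(X_tr), y_tr)

reg = LinearRegression().fit(X_tr, y_tr)

print("ANN MAPE:", mean_absolute_percentage_error(y_te, ann.predict(scaler.transform(X_te))))
print("OLS MAPE:", mean_absolute_percentage_error(y_te, reg.predict(X_te)))
```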

Predicting Brain Tumor Using Transfer Learning

  • Mustafa Abdul Salam;Sanaa Taha;Sameh Alahmady;Alwan Mohamed
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.5
    • /
    • pp.73-88
    • /
    • 2023
  • Brain tumors are abnormal collections or accumulations of cells in the brain that can be life-threatening due to their ability to invade and metastasize to nearby tissues. Accurate diagnosis is critical to the success of treatment planning, and magnetic resonance imaging is the primary diagnostic imaging method used to identify brain tumors and their extent. Deep learning methods for computer vision applications have shown significant improvements in recent years, primarily because large amounts of data are available to train models, so improvements in model architecture yield better approximations in the supervised setting. Tumor classification using these deep learning techniques has made great strides thanks to reliable, annotated open data sets, and transfer learning reduces computational effort while learning specific spatial and temporal relationships. This paper describes transfer learning models such as MobileNet, VGG19, InceptionResNetV2, Inception, and DenseNet201, each trained with three different optimizers: Adam, SGD, and RMSprop. The pre-trained MobileNet with the RMSprop optimizer is the best model in this paper, with 0.995 accuracy, 0.99 sensitivity, and 1.00 specificity, while at the same time having the lowest computational cost.
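
A minimal Keras sketch of the kind of transfer-learning setup named in this abstract: an ImageNet pre-trained MobileNet backbone with a new classification head, compiled with the RMSprop optimizer. The class count, input size, head layers, and learning rate are assumptions, not the authors' exact configuration.

```python
import tensorflow as tf

NUM_CLASSES = 4  # hypothetical number of tumor classes

# ImageNet pre-trained MobileNet feature extractor, frozen for the first stage
base = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```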

COVID-19: Improving the accuracy using data augmentation and pre-trained DCNN Models

  • Saif Hassan;Abdul Ghafoor;Zahid Hussain Khand;Zafar Ali;Ghulam Mujtaba;Sajid Khan
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.7
    • /
    • pp.170-176
    • /
    • 2024
  • Since the World Health Organization (WHO) declared COVID-19 a pandemic, many researchers have started working on developing vaccines and AI systems to detect COVID-19 patients using chest X-ray images. The purpose of this work is to improve the performance of pre-trained deep convolutional neural networks (DCNNs) on a chest X-ray image dataset, particularly for COVID-19, which was developed by collecting images from different sources such as GitHub and Kaggle. To improve the performance of the deep CNNs, data augmentation is used in this study. The COVID-19 data collected from GitHub contained 257 images, while the other two classes, normal and pneumonia, each had more than 500 images. There were two issues when training a DCNN model on this dataset: first, the classes were unbalanced, and second, the amount of data was very small. To handle both issues, we performed data augmentation such as rotation and flipping to enlarge and balance the dataset. After data augmentation, each class contains 510 images. Results show that augmentation of chest X-ray images helps improve accuracy. The accuracy produced by our proposed architecture before and after augmentation is 96.8% and 98.4%, respectively.
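
A hedged sketch of the augmentation step described above, applying random flips and small rotations to enlarge and balance the smaller COVID-19 class. The directory path, image size, and repetition factor are illustrative assumptions.

```python
import tensorflow as tf

# Random flip and small random rotation (0.1 * 2*pi, roughly +/-36 degrees)
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])

def oversample_with_augmentation(dataset, copies):
    """Repeat a class dataset `copies` times, augmenting each repeated image."""
    return dataset.repeat(copies).map(
        lambda x, y: (augment(x, training=True), y),
        num_parallel_calls=tf.data.AUTOTUNE)

# Hypothetical usage: grow the smaller COVID-19 class toward the other classes' size.
# covid_ds = tf.keras.utils.image_dataset_from_directory("data/covid", image_size=(224, 224))
# balanced_covid = oversample_with_augmentation(covid_ds, copies=2)
```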

A System Engineering Approach to Predict the Critical Heat Flux Using Artificial Neural Network (ANN)

  • Wazif, Muhammad;Diab, Aya
    • Journal of the Korean Society of Systems Engineering
    • /
    • v.16 no.2
    • /
    • pp.38-46
    • /
    • 2020
  • The accurate measurement of the critical heat flux (CHF) in flow boiling is important for the safety requirements of a nuclear power plant, to prevent a sharp degradation of the convective heat transfer between the surface of the fuel rod cladding and the reactor coolant. In this paper, a System Engineering approach is used to develop a model that predicts the CHF using machine learning. The model is built using an artificial neural network (ANN) and is trained, tested, and validated using a pre-existing database covering different flow conditions. The Talos library is used to tune the model by optimizing the hyperparameters and selecting the best network architecture. Once developed, the ANN model can predict the CHF based solely on a set of input parameters (pressure, mass flux, quality, and hydraulic diameter) without resorting to any physics-based model. It is intended to use the developed model to predict the DNBR under a large-break loss-of-coolant accident (LBLOCA) in APR1400. The System Engineering approach proved very helpful in facilitating the planning and management of the current work, both efficiently and effectively.
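
The sketch below shows one plausible shape for the ANN described in this abstract: a small Keras multilayer perceptron that maps the four stated inputs (pressure, mass flux, quality, hydraulic diameter) to a CHF estimate. The layer sizes and optimizer are placeholders; in the paper these hyperparameters are tuned with the Talos library.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),   # [pressure, mass_flux, quality, hydraulic_diameter]
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),            # predicted CHF
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# Hypothetical training call on a pre-existing CHF database:
# model.fit(X_train, y_train, validation_split=0.2, epochs=200, batch_size=64)
```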

Proper Base-model and Optimizer Combination Improves Transfer Learning Performance for Ultrasound Breast Cancer Classification (다단계 전이 학습을 이용한 유방암 초음파 영상 분류 응용)

  • Ayana, Gelan;Park, Jinhyung;Choe, Se-woon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference
    • /
    • 2021.10a
    • /
    • pp.655-657
    • /
    • 2021
  • It is challenging to find a breast ultrasound image training dataset with which to develop an accurate machine learning model, due to various regulations, personal information issues, and the expense of acquiring the images. Transfer learning is a common workaround for this data scarcity; however, studies targeting transfer learning for ultrasound breast cancer image classification have not been able to achieve high performance compared to radiologists. Here, we propose an improved transfer learning model for ultrasound breast cancer classification using a publicly available dataset. We argue that with a proper combination of an ImageNet pre-trained model and an optimizer, a better-performing model for ultrasound breast cancer image classification can be achieved. The proposed model provided a preliminary test accuracy of 99.5%. With more experiments involving various hyperparameters, the model is expected to achieve higher performance when subjected to new instances.
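
A hedged sketch of the base-model/optimizer search this abstract argues for: several ImageNet pre-trained backbones are each paired with several optimizers, and the combination with the best validation accuracy would be retained. The candidate backbones, optimizers, image size, and head are illustrative assumptions rather than the authors' choices.

```python
import tensorflow as tf

backbones = {
    "EfficientNetB2": tf.keras.applications.EfficientNetB2,
    "ResNet50": tf.keras.applications.ResNet50,
}
optimizers = {
    "adam": lambda: tf.keras.optimizers.Adam(1e-4),
    "sgd": lambda: tf.keras.optimizers.SGD(1e-3, momentum=0.9),
}

def build(backbone_cls, optimizer, num_classes=2, image_size=(224, 224, 3)):
    """Frozen pre-trained backbone plus a small classification head."""
    base = backbone_cls(weights="imagenet", include_top=False, input_shape=image_size)
    base.trainable = False
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=optimizer, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# for b_name, b_cls in backbones.items():
#     for o_name, make_opt in optimizers.items():
#         model = build(b_cls, make_opt())
#         history = model.fit(train_ds, validation_data=val_ds, epochs=10)
#         # keep the (b_name, o_name) pair with the best val_accuracy
```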


A Study on the Use of Contrast Agent and the Improvement of Body Part Classification Performance through Deep Learning-Based CT Scan Reconstruction (딥러닝 기반 CT 스캔 재구성을 통한 조영제 사용 및 신체 부위 분류 성능 향상 연구)

  • Seongwon Na;Yousun Ko;Kyung Won Kim
    • Journal of Broadcast Engineering
    • /
    • v.28 no.3
    • /
    • pp.293-301
    • /
    • 2023
  • Unstandardized medical data collection and management are still performed manually, and studies have applied deep learning to classify CT data to solve this problem. However, most studies develop models based only on the axial plane, the basic CT slice orientation. Because CT images, unlike general photographs, depict only human anatomy, reconstructing the CT scans themselves can provide richer anatomical features. This study seeks ways to achieve higher performance through various methods of converting CT scans into 2D images beyond the axial plane. Training used 1,042 CT scans from five body parts, and 179 test scans plus 448 scans from external datasets were collected for model evaluation. To develop the deep learning model, we used InceptionResNetV2 pre-trained on ImageNet as a backbone and re-trained all layers of the model. In the experiments, the model trained on reconstructed data achieved 99.33% in body part classification, 1.12% higher than the axial-only model, while the axial model was higher only for the brain and neck in contrast-agent classification. In conclusion, more accurate performance can be achieved by training on data that exposes better anatomical features than by training on axial slices alone.
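
A minimal sketch of the backbone setup this abstract describes: ImageNet pre-trained InceptionResNetV2 with all layers left trainable so the whole network is re-trained on the CT images. The five-way output follows the five body parts mentioned in the abstract; the input size, head, and learning rate are assumptions.

```python
import tensorflow as tf

base = tf.keras.applications.InceptionResNetV2(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = True  # re-train the entire backbone, as described in the abstract

inputs = tf.keras.Input(shape=(299, 299, 3))
x = tf.keras.applications.inception_resnet_v2.preprocess_input(inputs)
x = base(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(5, activation="softmax")(x)  # five body parts

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```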

Wave Prediction in a Harbour using Deep Learning with Offshore Data (딥러닝을 이용한 외해 해양기상자료로부터의 항내파고 예측)

  • Lee, Geun Se;Jeong, Dong Hyeon;Moon, Yong Ho;Park, Won Kyung;Chae, Jang Won
    • Journal of Korean Society of Coastal and Ocean Engineers
    • /
    • v.33 no.6
    • /
    • pp.367-373
    • /
    • 2021
  • In this study, a deep learning model was set up to predict the wave heights inside a harbour. Various machine learning techniques were applied to the model in consideration of the transformation characteristics of offshore waves as they propagate into the harbour. Pohang New Port was selected for model application because it has a serious unloading problem caused by swell and a large amount of available wave data. Wave height, wave period, and wave direction at offshore sites were used as the model input, wave heights inside the harbour as the output, and the model was then trained using a deep learning method. Considering the correlation between the offshore and in-harbour time series wave data, the data set was separated by prevailing wave direction as a pre-processing step. As a result, it was confirmed that the accuracy and stability of the model predictions increased considerably.
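
The abstract does not name a specific architecture, so the sketch below is only one plausible reading: a small recurrent network that maps a window of offshore wave height, period, and direction to the wave height inside the harbour, with one such model trained per prevailing wave direction after the pre-processing split the abstract mentions. The LSTM choice, window length, and layer sizes are assumptions.

```python
import tensorflow as tf

WINDOW = 24      # hypothetical: 24 time steps of offshore observations
N_FEATURES = 3   # offshore wave height, period, direction

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),  # predicted wave height inside the harbour
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# One such model could be trained per prevailing wave direction after the
# pre-processing split described in the abstract.
```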

General Relation Extraction Using Probabilistic Crossover (확률적 교차 연산을 이용한 보편적 관계 추출)

  • Je-Seung Lee;Jae-Hoon Kim
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.8
    • /
    • pp.371-380
    • /
    • 2023
  • Relation extraction extracts relationships between named entities from text. Traditionally, relation extraction methods only extract relations between predetermined subject and object entities. In end-to-end relation extraction, however, all possible relations must be extracted by considering the positions of the subject and object for each pair of entities, so this approach uses time and resources inefficiently. To alleviate this problem, this paper proposes a method that sets directions based on the positions of the subject and object and extracts relations according to those directions. The proposed method utilizes existing relation extraction data to generate direction labels indicating the direction in which the subject points to the object in the sentence, adds entity position tokens and entity types to sentences to predict the directions using a pre-trained language model (KLUE-RoBERTa-base, RoBERTa-base), and generates representations of the subject and object entities through a probabilistic crossover operation. We then use these representations to extract relations. Experimental results show that the proposed model performs about 3~4%p better than a method that predicts integrated labels. In addition, when training on Korean and English data with the proposed model, performance was 1.7%p higher in English than in Korean, owing to the amount of data and differences in word order, and the parameter values that produced the best performance differed. By reducing the number of directional cases to consider, the proposed model can reduce the waste of resources in end-to-end relation extraction.
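
A hedged sketch of the input-construction step described above: the subject and object spans are wrapped with entity position tokens carrying the entity types, and the sentence is encoded with KLUE-RoBERTa-base. The marker format is a hypothetical choice, and the paper's probabilistic crossover and direction classifier are only indicated in comments.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("klue/roberta-base")
encoder = AutoModel.from_pretrained("klue/roberta-base")

def encode_with_entity_markers(sentence, subj, obj, subj_type, obj_type):
    """Wrap the subject/object spans with hypothetical typed markers and encode."""
    marked = sentence.replace(subj, f"<S:{subj_type}> {subj} </S>") \
                     .replace(obj, f"<O:{obj_type}> {obj} </O>")
    inputs = tokenizer(marked, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden

# The subject/object vectors taken from `hidden` would then be mixed by the
# paper's probabilistic crossover operation and fed to a relation classifier.
```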

Development of a transfer learning based detection system for burr image of injection molded products (전이학습 기반 사출 성형품 burr 이미지 검출 시스템 개발)

  • Yang, Dong-Cheol;Kim, Jong-Sun
    • Design & Manufacturing
    • /
    • v.15 no.3
    • /
    • pp.1-6
    • /
    • 2021
  • An artificial neural network model based on a deep learning algorithm is known to be more accurate than humans in image classification, but it still has the limitation of requiring a large amount of training data, i.e., big data. Therefore, various techniques are being studied to build artificial neural network models with high precision even from small data, and the transfer learning technique is regarded as an excellent alternative. Accordingly, the purpose of this study is to develop an artificial neural network system that can classify burr images of light guide plate products with 99% accuracy using the transfer learning technique. Specifically, 150 images each of normal and burr light guide plate products were taken at various angles, heights, and positions. Then, after image preprocessing such as thresholding and image augmentation, a total of 3,300 images were generated, of which 2,970 were separated for training and the remaining 330 for model accuracy testing. For the transfer learning, a base model was developed using the NASNet-Large model pre-trained on 14 million ImageNet images. The final accuracy test confirmed 99% image classification accuracy on both training and test images. Based on these results, this work is expected to help develop an integrated AI production management system by training on not only burr images but also various other defect images.
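
A minimal sketch of the transfer-learning base described above: an ImageNet pre-trained NASNet-Large backbone with a frozen feature extractor and a binary (normal vs. burr) head. The head layers, input size, and optimizer are assumptions.

```python
import tensorflow as tf

base = tf.keras.applications.NASNetLarge(
    weights="imagenet", include_top=False, input_shape=(331, 331, 3))
base.trainable = False  # keep the pre-trained features fixed

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # normal vs. burr
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=test_ds, epochs=30)
```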

Comparison of Classification Performance Between Adult and Elderly Using Acoustic and Linguistic Features from Spontaneous Speech (자유대화의 음향적 특징 및 언어적 특징 기반의 성인과 노인 분류 성능 비교)

  • SeungHoon Han;Byung Ok Kang;Sunghee Dong
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.12 no.8
    • /
    • pp.365-370
    • /
    • 2023
  • This paper compares the performance of classifying speech data into two groups, adult and elderly, based on the acoustic and linguistic characteristics that change with aging, such as changes in respiratory patterns, phonation, pitch, frequency, and language expression ability. For acoustic features, we used attributes related to the frequency, amplitude, and spectrum of the speech signal. For linguistic features, we extracted hidden-state vector representations containing contextual information from the transcriptions of the speech utterances using KoBERT, a Korean pre-trained language model that has shown excellent performance on natural language processing tasks. The classification performance of each model trained on acoustic and linguistic features was evaluated, and the F1 scores of each model for the two classes, adult and elderly, were examined after addressing the class imbalance problem through down-sampling. The experimental results showed that linguistic features provided better performance for classifying adult and elderly speakers than acoustic features, and that even when the class proportions were equal, the classification performance for adults was higher than that for the elderly.
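
A hedged sketch of the linguistic-feature step described above: each transcription is encoded with a Korean pre-trained language model and a pooled hidden-state vector serves as the input to an adult/elderly classifier. The paper uses KoBERT; the klue/bert-base checkpoint is used here purely as a stand-in that loads through the generic transformers API, and the mean pooling and downstream classifier are assumptions.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Stand-in Korean pre-trained language model (the paper itself uses KoBERT)
tokenizer = AutoTokenizer.from_pretrained("klue/bert-base")
encoder = AutoModel.from_pretrained("klue/bert-base")

def linguistic_features(transcript: str) -> torch.Tensor:
    """Mean-pooled hidden states of the transcript as a fixed-size feature vector."""
    inputs = tokenizer(transcript, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state  # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)              # (768,)

# The resulting vectors (after down-sampling the larger class) would be fed to a
# binary classifier and compared against a model trained on acoustic features.
```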