• Title/Summary/Keyword: deep learning encoder

Anomaly-based Alzheimer's disease detection using entropy-based probability Positron Emission Tomography images

  • Husnu Baris Baydargil;Jangsik Park;Ibrahim Furkan Ince
    • ETRI Journal
    • /
    • v.46 no.3
    • /
    • pp.513-525
    • /
    • 2024
  • Deep neural networks trained on labeled medical data face major challenges owing to the economic costs of data acquisition through expensive medical imaging devices, expert labor for data annotation, and the large datasets required for optimal model performance. The heterogeneity of diseases such as Alzheimer's disease further complicates deep learning because test cases may differ substantially from the training data, possibly increasing the rate of false positives. We propose a reconstruction-based self-supervised anomaly detection model to overcome these challenges. It has a dual-subnetwork encoder that enhances feature encoding, augmented by skip connections to the decoder to improve gradient flow. The novel encoder captures local and global features to improve image reconstruction. In addition, we introduce an entropy-based image conversion method. Extensive evaluations show that the proposed model outperforms benchmark models in anomaly detection and in encoder-based classification. Both supervised and unsupervised models show improved performance when trained with data preprocessed using the proposed image conversion method.
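
A minimal sketch of the reconstruction-based anomaly-detection idea described in this abstract; the dual-subnetwork encoder, skip connections, and entropy-based conversion are not reproduced here, and the image size and layer widths are illustrative assumptions:

```python
# Sketch: a plain convolutional autoencoder trained on normal scans only;
# the per-image reconstruction error serves as the anomaly score at test time.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 32 -> 64
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 64 -> 128
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_score(model, batch):
    """Mean squared reconstruction error per image; higher means more anomalous."""
    with torch.no_grad():
        recon = model(batch)
        return ((batch - recon) ** 2).flatten(1).mean(dim=1)

if __name__ == "__main__":
    model = ConvAutoencoder()
    images = torch.rand(4, 1, 128, 128)  # placeholder for preprocessed PET slices
    print(anomaly_score(model, images))
```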

Network Intrusion Detection System Using Feature Extraction Based on AutoEncoder in IOT environment (IOT 환경에서의 오토인코더 기반 특징 추출을 이용한 네트워크 침입탐지 시스템)

  • Lee, Joohwa;Park, Keehyun
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.8 no.12
    • /
    • pp.483-490
    • /
    • 2019
  • In a Network Intrusion Detection System (NIDS), the classification function is very important, and detection performance depends on various features. Recently, much research has been carried out on deep learning, but network intrusion detection systems suffer from slowdowns owing to the large volume of traffic and high-dimensional features. Therefore, we use deep learning not as a classifier but as a preprocessing step for feature extraction, and propose a method in which classification is performed on the extracted features. A stacked AutoEncoder, a representative unsupervised deep learning model, is used to extract features, and classification is performed with the Random Forest algorithm. Using data collected in an IoT environment, performance exceeded 99% when normal and attack traffic were classified into multiple classes, and both performance and detection rate were superior to those of other models such as AE-RF and Single-RF.
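
A hedged sketch of the autoencoder-then-Random-Forest pipeline this abstract describes; the feature dimension, class labels, and hyperparameters are placeholders, not the paper's configuration:

```python
# Sketch: a small stacked (two-layer) autoencoder learns a compressed
# representation of the traffic features, and a Random Forest is trained
# on the encoded vectors.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class StackedAE(nn.Module):
    def __init__(self, n_features, code_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, code_dim), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 64), nn.ReLU(),
            nn.Linear(64, n_features),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_autoencoder(model, x, epochs=20, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), x)   # reconstruct the input features
        loss.backward()
        opt.step()
    return model

if __name__ == "__main__":
    # Placeholder data standing in for preprocessed IoT traffic records.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 40)).astype("float32")
    y = rng.integers(0, 3, size=500)          # e.g. normal / DoS / scan

    ae = train_autoencoder(StackedAE(n_features=40), torch.from_numpy(X))
    with torch.no_grad():
        codes = ae.encoder(torch.from_numpy(X)).numpy()

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(codes, y)
    print("train accuracy:", clf.score(codes, y))
```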

Ensemble UNet 3+ for Medical Image Segmentation

  • JongJin, Park
    • International Journal of Internet, Broadcasting and Communication
    • /
    • v.15 no.1
    • /
    • pp.269-274
    • /
    • 2023
  • In this paper, we propose a new UNet 3+ model for medical image segmentation. The proposed ensemble (E) UNet 3+ model combines UNet 3+ networks of varying depths into one unified architecture. The UNet 3+ networks of varying depths share the same encoder but have their own decoders, which allows them to bridge the semantic gap between the encoder and decoder nodes of UNet 3+. Deep supervision was applied to a total of 8 nodes of the E-UNet 3+ to improve performance. The proposed E-UNet 3+ model shows better segmentation results than the UNet 3+. In the simulation, the E-UNet 3+ model using deep supervision performed best, with loss function values of 0.8904 and 0.8562 for the training and validation data. For the test data, the UNet 3+ model using deep supervision was the best with a value of 0.7406. A qualitative comparison of the simulation results shows that the results of the proposed model are better than those of the existing UNet 3+.
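
An illustrative sketch of the deep-supervision wiring mentioned in this abstract: each supervised decoder node produces its own segmentation map, every map is compared against the ground truth, and the losses are combined. The decoder outputs are reduced to dummy tensors here; only the loss aggregation is shown, and averaging is an assumption rather than the paper's exact weighting:

```python
import torch
import torch.nn.functional as F

def deep_supervision_loss(side_outputs, target):
    """Average the per-node losses over all supervised decoder outputs."""
    losses = []
    for logits in side_outputs:
        # Upsample every side output to the ground-truth resolution first.
        logits = F.interpolate(logits, size=target.shape[-2:], mode="bilinear",
                               align_corners=False)
        losses.append(F.binary_cross_entropy_with_logits(logits, target))
    return torch.stack(losses).mean()

if __name__ == "__main__":
    target = torch.randint(0, 2, (2, 1, 64, 64)).float()
    # Stand-ins for segmentation maps from decoder nodes at different depths.
    side_outputs = [torch.randn(2, 1, 64 // s, 64 // s) for s in (1, 2, 4, 8)]
    print(deep_supervision_loss(side_outputs, target))
```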

Zero-anaphora resolution in Korean based on deep language representation model: BERT

  • Kim, Youngtae;Ra, Dongyul;Lim, Soojong
    • ETRI Journal
    • /
    • v.43 no.2
    • /
    • pp.299-312
    • /
    • 2021
  • It is necessary to achieve high performance in the task of zero-anaphora resolution (ZAR) to completely understand texts in Korean, Japanese, Chinese, and various other languages. Deep-learning-based models are being employed for building ZAR systems owing to the success of deep learning in recent years. However, the objective of building a high-quality ZAR system is far from being achieved even with these models. To enhance current ZAR techniques, we fine-tuned a pretrained bidirectional encoder representations from transformers (BERT) model. Notably, BERT is a general language representation model that enables systems to utilize deep bidirectional contextual information in natural language text. It extensively exploits the attention mechanism based on the sequence-transduction model Transformer. In our model, classification is performed simultaneously for all words in the input word sequence to decide whether each word can be an antecedent. We seek end-to-end learning by disallowing any use of hand-crafted or dependency-parsing features. Experimental results show that, compared with other models, our approach can significantly improve the performance of ZAR.
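
A hedged sketch of the token-level classification setup this abstract describes: a pretrained BERT encoder produces a contextual vector for every token, and a linear head scores each token as antecedent or not. The checkpoint name and the binary labeling scheme are assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class AntecedentTagger(nn.Module):
    def __init__(self, model_name="bert-base-multilingual-cased"):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        self.head = nn.Linear(self.bert.config.hidden_size, 2)  # antecedent or not

    def forward(self, **enc):
        hidden = self.bert(**enc).last_hidden_state   # (batch, seq_len, hidden)
        return self.head(hidden)                      # (batch, seq_len, 2)

if __name__ == "__main__":
    tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
    model = AntecedentTagger()
    enc = tok("철수는 밥을 먹었다 .", return_tensors="pt")
    with torch.no_grad():
        scores = model(**enc)
    print(scores.argmax(dim=-1))   # predicted label per subword token
```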

Network Traffic Classification Based on Deep Learning

  • Li, Junwei;Pan, Zhisong
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.11
    • /
    • pp.4246-4267
    • /
    • 2020
  • As networks penetrate all aspects of people's lives, the volume and complexity of network traffic are increasing, and traffic classification is becoming more and more important. Classifying traffic effectively is an important prerequisite for network management, planning, and security. With the continuous development of deep learning, more and more traffic classification work adopts it as the main method, achieving better results than traditional classification methods. In this paper, we provide a comprehensive review of network traffic classification based on deep learning. Firstly, we introduce the research background and progress of network traffic classification. Then, we summarize and compare deep-learning-based traffic classification methods such as stacked autoencoders, one-dimensional convolutional neural networks, two-dimensional convolutional neural networks, three-dimensional convolutional neural networks, long short-term memory networks, and deep belief networks. In addition, we compare deep-learning-based traffic classification with other methods based on port numbers, deep packet inspection, and machine learning. Finally, future research directions for deep-learning-based network traffic classification are discussed.
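
As a minimal sketch of one technique covered by this survey, the following shows a one-dimensional CNN that classifies flows from their first N payload bytes; the byte length, class count, and layer sizes are illustrative assumptions, not taken from any surveyed paper:

```python
import torch
import torch.nn as nn

class Traffic1DCNN(nn.Module):
    def __init__(self, n_classes=10, n_bytes=784):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(64 * (n_bytes // 4), n_classes)

    def forward(self, x):            # x: (batch, 1, n_bytes), bytes scaled to [0, 1]
        feats = self.features(x).flatten(1)
        return self.classifier(feats)

if __name__ == "__main__":
    model = Traffic1DCNN()
    flows = torch.rand(8, 1, 784)    # placeholder for normalized payload bytes
    print(model(flows).shape)        # (8, 10) class logits
```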

Analysis of trends in deep learning and reinforcement learning

  • Dong-In Choi;Chungsoo Lim
    • Journal of the Korea Society of Computer and Information
    • /
    • v.28 no.10
    • /
    • pp.55-65
    • /
    • 2023
  • In this paper, we apply KeyBERT (Keyword extraction with Bidirectional Encoder Representations from Transformers)-driven topic extraction and topic frequency analysis to deep learning and reinforcement learning research to discover rapidly changing trends. First, we crawled abstracts of research papers on deep learning and reinforcement learning and divided them temporally into two groups. After pre-processing the crawled data, we extracted topics using the KeyBERT algorithm and then analyzed them in terms of topic occurrence frequency. This analysis reveals distinct trends in the research on all of the analyzed algorithms and applications and clearly shows which topics are gaining interest. The analysis also demonstrates the effectiveness of topic extraction and topic frequency analysis for research trend analysis, and this scheme is expected to be applicable to trend analysis in other research fields. In addition, the analysis can provide insight into how deep learning will evolve in the near future and offer guidance for selecting research topics and methodologies by informing researchers of topics and methodologies that have recently been attracting attention.
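
A hedged sketch of the pipeline this abstract describes: extract keywords from each abstract with KeyBERT, then count how often each topic occurs within a group of papers. The example abstracts and extraction parameters are placeholders, not the study's settings:

```python
from collections import Counter
from keybert import KeyBERT

abstracts = [
    "Deep reinforcement learning for robotic grasping with sparse rewards.",
    "A survey of transformer encoders for image classification.",
]

kw_model = KeyBERT()                       # uses a default sentence-transformer
topic_counts = Counter()
for text in abstracts:
    keywords = kw_model.extract_keywords(text, keyphrase_ngram_range=(1, 2),
                                         stop_words="english", top_n=5)
    topic_counts.update(kw for kw, _score in keywords)

# Topics with the highest occurrence frequency across the group of papers.
print(topic_counts.most_common(10))
```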

Accuracy Assessment of Land-Use Land-Cover Classification Using Semantic Segmentation-Based Deep Learning Model and RapidEye Imagery (RapidEye 위성영상과 Semantic Segmentation 기반 딥러닝 모델을 이용한 토지피복분류의 정확도 평가)

  • Woodam Sim;Jong Su Yim;Jung-Soo Lee
    • Korean Journal of Remote Sensing
    • /
    • v.39 no.3
    • /
    • pp.269-282
    • /
    • 2023
  • The purpose of this study was to construct land cover maps using deep learning models and to select the optimal deep learning model for land cover classification by adjusting dataset parameters such as input image size and stride. Two deep learning models with an encoder-decoder network, U-net and DeeplabV3+, were used, along with their combination as an ensemble model. The dataset used RapidEye satellite images as input, and the label images were raster images based on the six land-use categories of the Intergovernmental Panel on Climate Change as ground truth. This study focused on improving the quality of the dataset to enhance the accuracy of the deep learning models and constructed twelve land cover maps using combinations of three deep learning models (U-net, DeeplabV3+, and ensemble), two input image sizes (64 × 64 and 256 × 256 pixels), and two stride application rates (50% and 100%). The accuracy evaluation against the label images showed that the U-net and DeeplabV3+ models had high accuracy, with overall accuracy values of approximately 87.9% and 89.8% and kappa coefficients of over 72%. In addition, applying the ensemble and stride to the deep learning models increased accuracy by up to approximately 3% and mitigated the boundary inconsistency problem associated with semantic-segmentation-based deep learning models.
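
An illustrative sketch of the per-pixel ensemble this abstract describes: softmax probability maps from a U-Net and a DeepLabV3+ are averaged and the argmax is taken as the final land-cover class. The segmentation_models_pytorch library, the ResNet-34 encoder, and the 5-band input are assumptions for illustration, not the authors' implementation:

```python
import torch
import segmentation_models_pytorch as smp

N_CLASSES = 6          # the six IPCC land-use categories
IN_CHANNELS = 5        # RapidEye bands (blue, green, red, red edge, NIR)

unet = smp.Unet(encoder_name="resnet34", encoder_weights=None,
                in_channels=IN_CHANNELS, classes=N_CLASSES)
deeplab = smp.DeepLabV3Plus(encoder_name="resnet34", encoder_weights=None,
                            in_channels=IN_CHANNELS, classes=N_CLASSES)

def ensemble_predict(models, image):
    """Average class probabilities over models and return per-pixel labels."""
    with torch.no_grad():
        probs = torch.stack([m(image).softmax(dim=1) for m in models]).mean(dim=0)
    return probs.argmax(dim=1)

if __name__ == "__main__":
    tile = torch.rand(1, IN_CHANNELS, 256, 256)   # one 256 x 256 input tile
    labels = ensemble_predict([unet.eval(), deeplab.eval()], tile)
    print(labels.shape)                           # (1, 256, 256) class map
```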

Deep learning framework for bovine iris segmentation

  • Heemoon Yoon;Mira Park;Hayoung Lee;Jisoon An;Taehyun Lee;Sang-Hee Lee
    • Journal of Animal Science and Technology
    • /
    • v.66 no.1
    • /
    • pp.167-177
    • /
    • 2024
  • Iris segmentation is an initial step for identifying the biometrics of animals when establishing a traceability system for livestock. In this study, we propose a deep learning framework for pixel-wise segmentation of bovine iris with a minimized use of annotation labels utilizing the BovineAAEyes80 public dataset. The proposed image segmentation framework encompasses data collection, data preparation, data augmentation selection, training of 15 deep neural network (DNN) models with varying encoder backbones and segmentation decoder DNNs, and evaluation of the models using multiple metrics and graphical segmentation results. This framework aims to provide comprehensive and in-depth information on each model's training and testing outcomes to optimize bovine iris segmentation performance. In the experiment, U-Net with a VGG16 backbone was identified as the optimal combination of encoder and decoder models for the dataset, achieving an accuracy and dice coefficient score of 99.50% and 98.35%, respectively. Notably, the selected model accurately segmented even corrupted images without proper annotation data. This study contributes to the advancement of iris segmentation and the establishment of a reliable DNN training framework.
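
A small sketch of the Dice coefficient used to report segmentation quality in this abstract, computed on binary iris masks (predicted vs. annotated); the threshold and tensor shapes are illustrative:

```python
import torch

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice = 2 * |A intersect B| / (|A| + |B|) for binary masks (batch, H, W)."""
    pred = pred_mask.float().flatten(1)
    true = true_mask.float().flatten(1)
    intersection = (pred * true).sum(dim=1)
    return ((2 * intersection + eps) / (pred.sum(dim=1) + true.sum(dim=1) + eps)).mean()

if __name__ == "__main__":
    pred = (torch.rand(4, 128, 128) > 0.5)    # stand-in for thresholded model output
    true = (torch.rand(4, 128, 128) > 0.5)    # stand-in for annotation masks
    print(dice_coefficient(pred, true))
```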

Deep Reference-based Dynamic Scene Deblurring

  • Cunzhe Liu;Zhen Hua;Jinjiang Li
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.18 no.3
    • /
    • pp.653-669
    • /
    • 2024
  • Dynamic scene deblurring is a complex computer vision problem owing to the difficulty of modeling it mathematically. In this paper, we present a novel approach for image deblurring that uses a sharp reference image to recover high-quality, high-frequency details. To better utilize the clear reference image, we develop an encoder-decoder network with two novel modules designed to guide the network toward better image restoration. The proposed Reference Extraction and Aggregation Module effectively establishes the correspondence between the blurry image and the reference image and explores the most relevant features for better blur removal, and the proposed Spatial Feature Fusion Module enables the encoder to perceive blur information at different spatial scales. Finally, the multi-scale feature maps from the encoder and the cascaded Reference Extraction and Aggregation Modules are integrated into the decoder for global fusion and representation. Extensive quantitative and qualitative experimental results on different benchmarks show the effectiveness of the proposed method.
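
A heavily simplified sketch of the correspondence idea mentioned in this abstract: blurry-image features attend over sharp-reference features so that the most relevant reference details can be aggregated. This is generic scaled dot-product attention over spatial positions, not the paper's actual Reference Extraction and Aggregation Module:

```python
import torch

def reference_aggregation(blur_feat, ref_feat):
    """blur_feat, ref_feat: (batch, channels, H, W) feature maps."""
    b, c, h, w = blur_feat.shape
    q = blur_feat.flatten(2).transpose(1, 2)        # (b, H*W, c) queries
    k = ref_feat.flatten(2).transpose(1, 2)         # (b, H*W, c) keys
    v = k                                           # values = reference features
    attn = torch.softmax(q @ k.transpose(1, 2) / c ** 0.5, dim=-1)
    out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
    return out                                      # aggregated reference features

if __name__ == "__main__":
    blur = torch.randn(1, 32, 16, 16)
    ref = torch.randn(1, 32, 16, 16)
    print(reference_aggregation(blur, ref).shape)   # (1, 32, 16, 16)
```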

Application of Improved Variational Recurrent Auto-Encoder for Korean Sentence Generation (한국어 문장 생성을 위한 Variational Recurrent Auto-Encoder 개선 및 활용)

  • Hahn, Sangchul;Hong, Seokjin;Choi, Heeyoul
    • Journal of KIISE
    • /
    • v.45 no.2
    • /
    • pp.157-164
    • /
    • 2018
  • Owing to revolutionary advances in deep learning, the performance of pattern recognition has increased significantly in many applications such as speech recognition and image recognition, and some systems exceed human-level performance in specific domains. Unlike pattern recognition, in this paper, we focus on generating Korean sentences based on a few given Korean sentences. We apply a variational recurrent auto-encoder (VRAE) and modify the model considering some characteristics of Korean sentences. To reduce the number of words in the model, we apply a word-spacing model. Also, many Korean sentences have the same meaning but different word order, even without subjects or objects; therefore, we change the unidirectional encoder of the VRAE into a bidirectional encoder. In addition, we apply an interpolation method to the encoded vectors of the given sentences so that we can generate new sentences similar to the given ones. In experiments, we confirm that our proposed method generates better sentences, which are semantically more similar to the given sentences.
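
A minimal sketch of the latent-space interpolation step this abstract describes: encode two sentences with a bidirectional recurrent encoder, interpolate between their latent vectors, and decode each interpolated vector into a new sentence. The encoder/decoder here are untrained GRU stand-ins with placeholder vocabulary and token ids, not the paper's trained VRAE:

```python
import torch
import torch.nn as nn

class TinyVRAE(nn.Module):
    def __init__(self, vocab_size=1000, emb=64, hidden=128, latent=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True, bidirectional=True)
        self.to_latent = nn.Linear(2 * hidden, latent)
        self.from_latent = nn.Linear(latent, hidden)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def encode(self, tokens):
        _, h = self.encoder(self.embed(tokens))           # h: (2, batch, hidden)
        return self.to_latent(torch.cat([h[0], h[1]], dim=-1))

    def decode(self, z, max_len=10, bos_id=1):
        h = self.from_latent(z).unsqueeze(0)
        tok = torch.full((z.size(0), 1), bos_id, dtype=torch.long)
        out_ids = []
        for _ in range(max_len):
            o, h = self.decoder(self.embed(tok), h)
            tok = self.out(o[:, -1]).argmax(dim=-1, keepdim=True)
            out_ids.append(tok)
        return torch.cat(out_ids, dim=1)

if __name__ == "__main__":
    model = TinyVRAE()
    sent_a = torch.randint(2, 1000, (1, 8))   # token ids of two given sentences
    sent_b = torch.randint(2, 1000, (1, 8))
    z_a, z_b = model.encode(sent_a), model.encode(sent_b)
    for alpha in (0.25, 0.5, 0.75):           # interpolate between the two codes
        z = (1 - alpha) * z_a + alpha * z_b
        print(alpha, model.decode(z).tolist())
```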