• Title/Summary/Keyword: Fully connected layer


Application of the machine learning technique for the development of a condensation heat transfer model for a passive containment cooling system

  • Lee, Dong Hyun;Yoo, Jee Min;Kim, Hui Yung;Hong, Dong Jin;Yun, Byong Jo;Jeong, Jae Jun
    • Nuclear Engineering and Technology / v.54 no.6 / pp.2297-2310 / 2022
  • A condensation heat transfer model is essential to accurately predict the performance of the passive containment cooling system (PCCS) during an accident in an advanced light water reactor. However, most existing models tend to predict condensation heat transfer well only within a specific range of thermal-hydraulic conditions. In this study, a new correlation for the condensation heat transfer coefficient (HTC) is presented using a machine learning technique. To secure sufficient training data, a large number of pseudo data were produced using ten existing condensation models. Then, a neural network model was developed, consisting of a fully connected layer and a convolutional neural network (CNN) algorithm, DenseNet. Based on hold-out cross-validation, the neural network was trained and validated against the pseudo data. Thereafter, it was evaluated using experimental data that had not been used for training. The machine learning model predicted better results than the existing models. A parametric study also confirmed that the machine learning model produces continuous and physically consistent HTCs over various thermal-hydraulic conditions. By reflecting the effects of the individual variables obtained from the parametric analysis, a new correlation was proposed. It yielded better results than the ten existing models for almost all experimental conditions.
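The abstract combines a fully connected layer with the DenseNet CNN to regress a condensation HTC from thermal-hydraulic inputs. Below is a minimal PyTorch sketch of one way such a combination could look: a fully connected input layer followed by a small 1-D dense block with feature-map concatenation. The input variables, layer sizes, and the 1-D formulation are assumptions for illustration, not the authors' architecture.

```python
# Hypothetical sketch: a fully connected layer feeding a DenseNet-style 1-D
# convolutional block that regresses a condensation HTC from thermal-hydraulic
# inputs (e.g. pressure, air mass fraction, wall subcooling). Sizes are illustrative.
import torch
import torch.nn as nn

class DenseBlock1d(nn.Module):
    """Each layer's output is concatenated with its input (the DenseNet idea)."""
    def __init__(self, channels, growth, n_layers):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm1d(channels + i * growth),
                nn.ReLU(),
                nn.Conv1d(channels + i * growth, growth, kernel_size=3, padding=1),
            ))

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)
        return x

class HTCNet(nn.Module):
    def __init__(self, n_inputs=5, hidden=64):
        super().__init__()
        self.fc_in = nn.Linear(n_inputs, hidden)          # fully connected layer
        self.dense = DenseBlock1d(channels=1, growth=8, n_layers=4)
        self.head = nn.Linear((1 + 4 * 8) * hidden, 1)    # HTC output

    def forward(self, x):
        h = torch.relu(self.fc_in(x)).unsqueeze(1)        # (B, 1, hidden)
        h = self.dense(h)                                 # (B, 33, hidden)
        return self.head(h.flatten(1))

model = HTCNet()
htc = model(torch.rand(16, 5))   # 16 pseudo-data samples, 5 input conditions each
```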

Analyzing the internal parameters of a deep learning-based distributed hydrologic model to discern similarities and differences with a physics-based model (딥러닝 기반 격자형 수문모형의 내부 파라메터 분석을 통한 물리기반 모형과의 유사점 및 차별성 판독하기)

  • Dongkyun Kim
    • Proceedings of the Korea Water Resources Association Conference / 2023.05a / pp.92-92 / 2023
  • In this study, we developed a deep learning-based distributed hydrologic model for an urban watershed in South Korea. The model consists of multiple Long Short-Term Memory (LSTM) hidden units connected through a fully connected layer. Using the model, the study area, the Jungnangcheon basin, was analyzed by simulating 10-minute channel discharge with time series of 10-minute radar-gauge composite precipitation and 10-minute air temperature at 239 model grid cells of 1 km2 resolution as input. The model showed high accuracy, with NSE coefficients of 0.99 and 0.67 for the calibration period (2013-2016) and the validation period (2017-2019), respectively. Further in-depth analysis of the model led to the following conclusions: (1) The runoff-to-precipitation ratio map generated from the model resembles the imperviousness map of the study area obtained from land cover data, which implies that the model successfully learned the rainfall-runoff partitioning process from the input and output data alone, without relying on a priori hydrologic knowledge. (2) The model successfully reproduced the soil moisture-dependent runoff process, an essential prerequisite of continuous hydrologic models. (3) Each LSTM hidden unit has a different temporal sensitivity to precipitation stimuli, and fast-responding LSTM hidden units had larger output weight coefficients near the watershed outlet, which implies that the model has a mechanism that separately accounts for components of the hydrologic cycle with distinctly different response times, such as direct runoff from precipitation input and groundwater-driven baseflow.

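As one way to picture the structure described above, the sketch below wires per-cell LSTMs to a fully connected output layer that maps the hidden states of all 239 grid cells to a single outlet discharge. It is a hypothetical PyTorch illustration (the paper does not specify the framework); the hidden size, batch layout, and weight sharing across cells are assumptions.

```python
# Hypothetical sketch of a grid-based rainfall-runoff network: an LSTM reads the
# (precipitation, temperature) time series of every grid cell, and a fully
# connected layer maps the concatenated hidden states of all cells to a single
# outlet discharge. Cell count matches the abstract; layer sizes are illustrative.
import torch
import torch.nn as nn

class GriddedLSTMRunoff(nn.Module):
    def __init__(self, n_cells=239, n_inputs=2, hidden=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_inputs, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(n_cells * hidden, 1)   # fully connected output layer

    def forward(self, x):
        # x: (batch, n_cells, time, n_inputs) -- 10-min precipitation and temperature
        b, c, t, f = x.shape
        _, (h_n, _) = self.lstm(x.reshape(b * c, t, f))
        h = h_n[-1].reshape(b, -1)                 # last hidden state of every cell
        return self.fc(h)                          # simulated outlet discharge

model = GriddedLSTMRunoff()
q = model(torch.rand(4, 239, 144, 2))   # one day of 10-minute forcing for 239 cells
```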

Attention-based CNN-BiGRU for Bengali Music Emotion Classification

  • Subhasish Ghosh;Omar Faruk Riad
    • International Journal of Computer Science & Network Security / v.23 no.9 / pp.47-54 / 2023
  • For Bengali music emotion classification, deep learning models, particularly CNNs and RNNs, are frequently used, but previous research suffered from low accuracy and overfitting. In this research, an attention-based Conv1D and BiGRU model is designed for emotion classification on our Bengali music dataset, and comparative experiments show that the proposed model classifies emotions more accurately. Preprocessing of the WAV files uses MFCCs. To reduce the dimensionality of the feature space, contextual features are extracted by two Conv1D layers, and dropout is used to address overfitting. Two bidirectional GRU layers update the past and future emotion representations of the Conv1D output and are connected to an attention mechanism that gives greater weight to the more informative MFCC feature vectors; this attention mechanism further increases the accuracy of the proposed classification model. The resulting vector is finally classified into four emotion classes (Angry, Happy, Relax, Sad) using a dense, fully connected layer with softmax activation. The proposed Conv1D+BiGRU+Attention model classifies emotions in the Bengali music dataset more accurately than baseline methods, achieving a performance of 95% on our dataset.
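A minimal PyTorch sketch of the pipeline the abstract outlines: two Conv1D layers over MFCC frames with dropout, two bidirectional GRU layers, a simple additive attention over time steps, and a fully connected softmax head for the four emotion classes. Layer widths, kernel sizes, and the exact attention form are assumptions for illustration.

```python
# Hypothetical Conv1D + BiGRU + attention classifier over MFCC features.
import torch
import torch.nn as nn

class CNNBiGRUAttention(nn.Module):
    def __init__(self, n_mfcc=40, hidden=64, n_classes=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_mfcc, 128, kernel_size=5, padding=2), nn.ReLU(), nn.Dropout(0.3),
            nn.Conv1d(128, 64, kernel_size=5, padding=2), nn.ReLU(), nn.Dropout(0.3),
        )
        self.bigru = nn.GRU(64, hidden, num_layers=2, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)        # attention score per time step
        self.fc = nn.Linear(2 * hidden, n_classes)  # fully connected classifier

    def forward(self, mfcc):                        # mfcc: (batch, n_mfcc, time)
        h = self.conv(mfcc).transpose(1, 2)         # (batch, time, 64)
        h, _ = self.bigru(h)                        # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)      # attention weights over time
        context = (w * h).sum(dim=1)                # weighted sum of BiGRU outputs
        return self.fc(context)                     # logits for Angry/Happy/Relax/Sad

logits = CNNBiGRUAttention()(torch.rand(8, 40, 300))
```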

Revisiting Deep Learning Model for Image Quality Assessment: Is Strided Convolution Better than Pooling? (영상 화질 평가 딥러닝 모델 재검토: 스트라이드 컨볼루션이 풀링보다 좋은가?)

  • Uddin, AFM Shahab;Chung, TaeChoong;Bae, Sung-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.11a / pp.29-32 / 2020
  • Because the image acquisition process is imperfect, noise is inevitably introduced. As a result, objective image quality assessment (IQA) plays an important role in estimating the visual quality of noisy images. Plenty of IQA methods have been proposed, including traditional signal processing based methods as well as recent deep learning based methods, where the latter show promising performance owing to their strong representation ability. The deep learning based methods consist of several convolution and down-sampling layers for feature extraction and fully connected layers for regression. Usually, the down-sampling is performed by a max-pooling layer after each convolutional block. We reveal that this max-pooling causes information loss by discarding features despite their importance. Consequently, we propose a better IQA method that replaces the max-pooling layers with strided convolutions to down-sample the feature space; since the strided convolution layers have learnable parameters, they preserve the most useful features and discard redundant information, thereby improving the prediction accuracy. The experimental results verify the effectiveness of the proposed method.

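The core idea of the paper, replacing a max-pooling layer with a stride-2 convolution so that the downsampling itself has learnable parameters, can be illustrated with a short PyTorch comparison. Channel counts and kernel sizes below are illustrative, not the authors' exact IQA network.

```python
# Downsampling a feature map with max-pooling versus a learnable strided convolution.
import torch
import torch.nn as nn

x = torch.rand(1, 32, 64, 64)   # (batch, channels, height, width) feature map

# Conventional block: convolution followed by 2x2 max-pooling.
pooled = nn.Sequential(
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(kernel_size=2),
)(x)

# Proposed alternative: a stride-2 convolution does the downsampling itself,
# so the network can learn which information to keep.
strided = nn.Sequential(
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1),
)(x)

print(pooled.shape, strided.shape)   # both: torch.Size([1, 64, 32, 32])
```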

Aural-visual two-stream based infant cry recognition (Aural-visual two-stream 기반의 아기 울음소리 식별)

  • Bo, Zhao;Lee, Jonguk;Atif, Othmane;Park, Daihee;Chung, Yongwha
    • Proceedings of the Korea Information Processing Society Conference / 2021.05a / pp.354-357 / 2021
  • Infants communicate their feelings and needs to the outside world through non-verbal methods such as crying and displaying diverse facial expressions. However, inexperienced parents tend to decode these non-verbal messages incorrectly and take inappropriate actions, which might affect the bonding they build with their babies and the cognitive development of the newborns. In this paper, we propose an aural-visual two-stream based infant cry recognition system to help parents comprehend the feelings and needs of crying babies. The proposed system first extracts features from the pre-processed audio and video data using the VGGish and 3D-CNN models, respectively, fuses the extracted features using a fully connected layer, and finally applies a softmax function to classify the fused features and recognize the corresponding type of cry. The experimental results show that the proposed system achieves an F1-score above 0.92, which is 0.08 and 0.10 higher than the single-stream aural model and the single-stream visual model, respectively.
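A minimal sketch of the fusion step described above: an audio feature vector (as produced by a model such as VGGish) and a video feature vector (from a 3D-CNN) are concatenated, passed through a fully connected layer, and classified with softmax into cry types. The feature dimensions and the number of cry classes are assumptions, and PyTorch is used only for illustration.

```python
# Hypothetical two-stream fusion head for infant cry recognition.
import torch
import torch.nn as nn

class TwoStreamFusion(nn.Module):
    def __init__(self, audio_dim=128, video_dim=512, n_classes=4):
        super().__init__()
        self.fuse = nn.Linear(audio_dim + video_dim, 256)     # fully connected fusion
        self.classifier = nn.Linear(256, n_classes)

    def forward(self, audio_feat, video_feat):
        fused = torch.relu(self.fuse(torch.cat([audio_feat, video_feat], dim=1)))
        return torch.softmax(self.classifier(fused), dim=1)   # cry-type probabilities

probs = TwoStreamFusion()(torch.rand(2, 128), torch.rand(2, 512))
```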

Malware detection methodology through on pre-training and transfer learning for AutoEncoder based deobfuscation (AutoEncoder 기반 역난독화 사전학습 및 전이학습을 통한 악성코드 탐지 방법론)

  • Jang, Jae-Seok;Ku, Bon-Jae;Eom, Sung-Jun;Han, Ji-Hyeong
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.905-907 / 2022
  • Static analysis, a conventional technique for analyzing malware, can detect malware quickly and efficiently but is vulnerable to obfuscated files, whereas dynamic analysis is suitable for obfuscated files but is slow and costly. In this study, to overcome the drawbacks of both analysis techniques, we propose an obfuscation-robust static analysis model based on a deep learning model. The proposed method builds a dataset by converting original code and obfuscated files into grayscale images and pre-trains an AutoEncoder so that the encoder can extract the features of the original file from both the original and obfuscated files; the encoder output is then fed into a fully connected layer, which is transfer-learned to detect malware. The proposed methodology improved malware detection performance on obfuscated files by 14.17 percentage points in F1 score, and also improved detection performance by 7.22 percentage points in F1 score on the combined dataset of obfuscated and original files.
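A hedged PyTorch sketch of the two-stage procedure the abstract describes: (1) pre-train a convolutional AutoEncoder so the encoder maps obfuscated grayscale code images toward their original versions, then (2) reuse the encoder and attach a fully connected layer that is transfer-learned to detect malware. The image size, channel counts, and the binary benign/malware head are assumptions.

```python
# Hypothetical AutoEncoder pre-training followed by a fully connected detection head.
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),    # 16x16 -> 32x32
    nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid(),  # 32x32 -> 64x64
)

# Stage 1: pre-training target is the original (de-obfuscated) code image.
obfuscated = torch.rand(8, 1, 64, 64)
original = torch.rand(8, 1, 64, 64)
recon_loss = nn.functional.mse_loss(decoder(encoder(obfuscated)), original)

# Stage 2: transfer learning -- encoder output feeds a fully connected classifier.
classifier = nn.Sequential(nn.Flatten(), nn.Linear(32 * 16 * 16, 2))  # benign / malware
logits = classifier(encoder(obfuscated))
```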

Transfer Learning-Based Feature Fusion Model for Classification of Maneuver Weapon Systems

  • Jinyong Hwang;You-Rak Choi;Tae-Jin Park;Ji-Hoon Bae
    • Journal of Information Processing Systems / v.19 no.5 / pp.673-687 / 2023
  • Convolutional neural network-based deep learning technology is the most commonly used in image identification, but it requires large-scale data for training. Therefore, application in specific fields in which data acquisition is limited, such as in the military, may be challenging. In particular, the identification of ground weapon systems is a very important mission, and high identification accuracy is required. Accordingly, various studies have been conducted to achieve high performance using small-scale data. Among them, the ensemble method, which achieves excellent performance through the prediction average of the pre-trained models, is the most representative method; however, it requires considerable time and effort to find the optimal combination of ensemble models. In addition, there is a performance limitation in the prediction results obtained by using an ensemble method. Furthermore, it is difficult to obtain the ensemble effect using models with imbalanced classification accuracies. In this paper, we propose a transfer learning-based feature fusion technique for heterogeneous models that extracts and fuses features of pre-trained heterogeneous models and finally, fine-tunes hyperparameters of the fully connected layer to improve the classification accuracy. The experimental results of this study indicate that it is possible to overcome the limitations of the existing ensemble methods by improving the classification accuracy through feature fusion between heterogeneous models based on transfer learning.
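A possible PyTorch illustration of transfer-learning-based feature fusion between heterogeneous pre-trained models: features from two ImageNet-pretrained backbones (ResNet-18 and MobileNetV2 here, chosen only as stand-ins for the paper's models) are concatenated, and only the fully connected head is trained. The class count and backbone choice are assumptions; loading the weights requires torchvision 0.13 or later and an internet connection.

```python
# Hypothetical feature-fusion head over two frozen pre-trained backbones.
import torch
import torch.nn as nn
from torchvision import models

resnet = models.resnet18(weights="DEFAULT")
mobilenet = models.mobilenet_v2(weights="DEFAULT")

# Use the convolutional parts as frozen feature extractors.
resnet_feat = nn.Sequential(*list(resnet.children())[:-1])        # -> (B, 512, 1, 1)
mobile_feat = nn.Sequential(mobilenet.features,
                            nn.AdaptiveAvgPool2d(1))               # -> (B, 1280, 1, 1)
for p in list(resnet_feat.parameters()) + list(mobile_feat.parameters()):
    p.requires_grad = False

# Only this fully connected head is fine-tuned on the small dataset.
head = nn.Linear(512 + 1280, 8)   # e.g. 8 weapon-system classes (assumed count)

x = torch.rand(4, 3, 224, 224)
fused = torch.cat([resnet_feat(x).flatten(1), mobile_feat(x).flatten(1)], dim=1)
logits = head(fused)
```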

Association Analysis of Convolution Layer, Kernel and Accuracy in CNN (CNN의 컨볼루션 레이어, 커널과 정확도의 연관관계 분석)

  • Kong, Jun-Bea;Jang, Min-Seok
    • The Journal of the Korea institute of electronic communication sciences / v.14 no.6 / pp.1153-1160 / 2019
  • In this paper, we experimented to find out how the number of convolution layers, the kernel size, and the number of kernels affect a CNN. In addition, a general CNN was also tested for analysis and compared with the CNNs used in the experiment. The neural networks used for the analysis are CNN-based, and in each experimental model the number of layers, the kernel size, and the number of kernels are each held at a constant value. All experiments were conducted with two fully connected layers fixed, and all other variables were tested with the same values. As a result of the analysis, when the number of layers is small, the variance of the results is small regardless of the size and number of kernels, showing stable accuracy. As the number of layers increases, the accuracy increases, but above a certain number the accuracy decreases and the variance also increases, resulting in a large accuracy deviation. The number of kernels had a greater effect on learning speed than the other variables.
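The experimental setup lends itself to a parameterised model builder. The sketch below (PyTorch, hypothetical) constructs CNNs whose convolution depth, kernel size, and kernel count can be varied while two fully connected layers stay fixed, mirroring the controlled-variable design described above; the input size and layer widths are illustrative.

```python
# Hypothetical builder for the kind of CNNs compared in such an experiment.
import torch
import torch.nn as nn

def build_cnn(n_conv_layers, kernel_size, n_kernels, n_classes=10):
    layers, in_ch = [], 1                       # e.g. 28x28 grayscale input (assumed)
    for _ in range(n_conv_layers):
        layers += [nn.Conv2d(in_ch, n_kernels, kernel_size, padding=kernel_size // 2),
                   nn.ReLU()]
        in_ch = n_kernels
    layers += [nn.AdaptiveAvgPool2d(4), nn.Flatten(),
               nn.Linear(in_ch * 16, 128), nn.ReLU(),    # fixed fully connected layer 1
               nn.Linear(128, n_classes)]                 # fixed fully connected layer 2
    return nn.Sequential(*layers)

for depth in (2, 4, 6):                          # vary only the convolution depth
    model = build_cnn(n_conv_layers=depth, kernel_size=3, n_kernels=32)
    print(depth, model(torch.rand(1, 1, 28, 28)).shape)
```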

Animal Face Classification using Dual Deep Convolutional Neural Network

  • Khan, Rafiul Hasan;Kang, Kyung-Won;Lim, Seon-Ja;Youn, Sung-Dae;Kwon, Oh-Jun;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.23 no.4 / pp.525-538 / 2020
  • A practical animal face classification system that classifies animals in image and video data is considered a pivotal topic in machine learning. In this research, we propose a novel fully connected dual Deep Convolutional Neural Network (DCNN) that extracts and analyzes image features on a large scale. With the inclusion of state-of-the-art Batch Normalization and Exponential Linear Unit (ELU) layers, the proposed DCNN can analyze large datasets as well as extract more features than before. For this research, we built a dataset containing ten thousand animal faces from ten animal classes and a dual DCNN. The significance of our network is that it has four sets of convolutional functions that work laterally with each other. We used a relatively small batch size and a large number of iterations to mitigate overfitting during training. We also used image augmentation to vary the shapes of the training images for a better learning process. The results demonstrate that, with an accuracy of 92.0%, the proposed DCNN outperforms its counterparts while incurring lower computing costs.
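A hedged sketch of what a "dual" DCNN with Batch Normalization and ELU activations might look like: two parallel convolutional branches process the same animal-face image and their outputs are merged into a fully connected classifier for ten classes. The branch depth, channel counts, and input resolution are assumptions and do not reproduce the authors' exact four-branch layout.

```python
# Hypothetical dual-branch DCNN with BatchNorm and ELU layers.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ELU(), nn.MaxPool2d(2))

class DualDCNN(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.branch_a = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.branch_b = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.fc = nn.Sequential(nn.Flatten(),
                                nn.Linear(2 * 64 * 16 * 16, 256), nn.ELU(),
                                nn.Linear(256, n_classes))  # fully connected classifier

    def forward(self, x):                      # x: (batch, 3, 64, 64)
        merged = torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)
        return self.fc(merged)

logits = DualDCNN()(torch.rand(2, 3, 64, 64))
```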

A ResNet based multiscale feature extraction for classifying multi-variate medical time series

  • Zhu, Junke;Sun, Le;Wang, Yilin;Subramani, Sudha;Peng, Dandan;Nicolas, Shangwe Charmant
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.5 / pp.1431-1445 / 2022
  • We construct a deep neural network model named ECGResNet, which can diagnose eight common cardiovascular diseases from 12-lead ECG data with high accuracy. We chose the 16 blocks of ResNet50 as the main body of the model and added the Squeeze-and-Excitation (SE) module to adaptively learn the information shared between channels. As our feature extraction method, we modified the first convolutional layer of ResNet50, which has a kernel size of 7, into a superposition of convolutional kernels of sizes 8 and 16. This allows the model to focus on the overall trend of the ECG signal while also noticing subtle changes. The model further improves the accuracy of cardiovascular and cerebrovascular disease classification by using a fully connected layer that integrates factors such as gender and age. ECGResNet adds Dropout layers to both the residual blocks and the SE module of ResNet50, further avoiding model overfitting. The model was eventually trained using five-fold cross-validation and the Flooding training method, reaching an accuracy of 95% on the test set and an F1-score of 0.841. In summary, we design a new deep neural network, introduce a multi-scale feature extraction method, and apply the SE module to extract features from ECG data.
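Three ingredients from the abstract can be sketched compactly: a first convolution that superposes kernel sizes 8 and 16 over the 12-lead ECG, a Squeeze-and-Excitation (SE) module, and a fully connected head that also takes age and sex. The PyTorch code below is an illustration under assumed sizes; the real ECGResNet builds on the full ResNet50 block structure, which is omitted here.

```python
# Hypothetical sketch of the superposed-kernel input, SE module, and FC head.
import torch
import torch.nn as nn

class SEModule(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                                nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                        # x: (batch, channels, length)
        w = self.fc(x.mean(dim=2))               # squeeze over time, excite per channel
        return x * w.unsqueeze(2)

class ECGNetSketch(nn.Module):
    def __init__(self, n_classes=8):
        super().__init__()
        self.conv_k8 = nn.Conv1d(12, 32, kernel_size=8, stride=2, padding=4)
        self.conv_k16 = nn.Conv1d(12, 32, kernel_size=16, stride=2, padding=8)
        self.se = SEModule(32)
        self.fc = nn.Linear(32 + 2, n_classes)   # features plus age and sex

    def forward(self, ecg, age_sex):             # ecg: (batch, 12, length)
        h = torch.relu(self.conv_k8(ecg) + self.conv_k16(ecg))  # superposed kernels
        h = self.se(h).mean(dim=2)               # SE recalibration, global average pool
        return self.fc(torch.cat([h, age_sex], dim=1))

logits = ECGNetSketch()(torch.rand(4, 12, 5000), torch.rand(4, 2))
```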