• Title/Summary/Keyword: Trainable parameter

Performance Evaluation of ResNet-based Pneumonia Detection Model with the Small Number of Layers Using Chest X-ray Images (흉부 X선 영상을 이용한 작은 층수 ResNet 기반 폐렴 진단 모델의 성능 평가)

  • Youngeun Choi;Seungwan Lee
    • Journal of radiological science and technology
    • /
    • v.46 no.4
    • /
    • pp.277-285
    • /
    • 2023
  • In this study, pneumonia identification networks with a small number of layers were constructed using chest X-ray images. The networks had similar numbers of trainable parameters, and the performance of the trained models was quantitatively evaluated as the network architectures were modified. A total of 6 networks were constructed: a convolutional neural network (CNN), VGGNet, GoogleNet, a residual network (ResNet) with identity blocks, a ResNet with bottleneck blocks, and a ResNet with both identity and bottleneck blocks. The trainable parameters of the 6 networks were kept within a range of 273,921-294,817 by adjusting the output channels of the convolution layers. The network training used the binary cross entropy (BCE) loss function, the sigmoid activation function, the adaptive moment estimation (Adam) optimizer, and 100 epochs. The performance of the trained models was evaluated in terms of training time, accuracy, precision, recall, specificity, and F1-score. The results showed that the trained models with a small number of layers precisely detect pneumonia from chest X-ray images. In particular, the overall quantitative performance of the ResNet-based models was above 0.9, and their performance was similar or superior to that of the models based on the CNN, VGGNet, and GoogleNet. The residual blocks also affected the performance of the ResNet-based models. Therefore, this study demonstrates that detection networks with a small number of layers are suitable for detecting pneumonia from chest X-ray images, and that the ResNet-based models can be optimized by applying appropriate residual blocks.
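
The paper's code is not reproduced here; the following is only a minimal PyTorch sketch of the kind of small identity-block ResNet the abstract describes, together with a trainable-parameter count. The channel width, block count, and input size are illustrative assumptions, not the values used in the paper.

```python
import torch
import torch.nn as nn

class IdentityBlock(nn.Module):
    """Two 3x3 convolutions with an identity (skip) connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)                    # residual connection

class SmallResNet(nn.Module):
    """Stem + a few identity blocks + one-logit head for binary pneumonia output."""
    def __init__(self, width: int = 32, num_blocks: int = 3):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, width, 7, stride=2, padding=3, bias=False),
            nn.BatchNorm2d(width), nn.ReLU(inplace=True), nn.MaxPool2d(2))
        self.blocks = nn.Sequential(*[IdentityBlock(width) for _ in range(num_blocks)])
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(width, 1))

    def forward(self, x):
        # Returns logits; pair with nn.BCEWithLogitsLoss (sigmoid + BCE) for training.
        return self.head(self.blocks(self.stem(x)))

model = SmallResNet()
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {n_params}")           # tune `width` to hit a target budget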

Evaluation System of Psychological Feelings for Corporate Identity Symbol Marks Using Fuzzy Neural Networks (퍼지 - 뉴럴네트워크를 이용한 CI 심벌마크의 감성평가시스템)

  • Chang, In-Seong;Park, Yong-Ju
    • Journal of Korean Institute of Industrial Engineers
    • /
    • v.27 no.3
    • /
    • pp.305-314
    • /
    • 2001
  • In this paper, we construct an automatic evaluation system for the psychological feelings evoked by corporate identity (CI) symbol marks, based on a fuzzy neural network technique. The system is modelled by trainable fuzzy inference rules with several input variables (qualitative and quantitative design components of the CI symbol mark) and a single output variable (the consumer's feeling). The back-propagation learning algorithm, a conventional learning method for multilayer feedforward neural networks, is used for parameter identification of the fuzzy inference system. The learning ability on training data and the generalization ability on test data of the proposed evaluation system are evaluated by computer simulations.
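
The abstract does not specify the exact rule form, so the following is only an illustrative sketch, assuming a small Sugeno-style fuzzy inference layer with Gaussian memberships whose parameters are identified by gradient-based back-propagation, in the spirit of the trainable fuzzy rules described above. Input/rule counts and the dummy data are assumptions.

```python
import torch
import torch.nn as nn

class FuzzyInference(nn.Module):
    def __init__(self, n_inputs: int = 4, n_rules: int = 6):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(n_rules, n_inputs))     # membership centers
        self.log_sigmas = nn.Parameter(torch.zeros(n_rules, n_inputs))  # membership widths
        self.consequents = nn.Linear(n_inputs, n_rules)                 # first-order rule outputs

    def forward(self, x):                        # x: (batch, n_inputs)
        diff = x.unsqueeze(1) - self.centers     # (batch, n_rules, n_inputs)
        sigma = self.log_sigmas.exp()
        # Rule firing strength: product of Gaussian memberships over the inputs
        firing = torch.exp(-0.5 * (diff / sigma) ** 2).prod(dim=2)      # (batch, n_rules)
        weights = firing / (firing.sum(dim=1, keepdim=True) + 1e-9)     # normalised strengths
        rule_out = self.consequents(x)                                  # (batch, n_rules)
        return (weights * rule_out).sum(dim=1, keepdim=True)            # weighted rule average

# Parameter identification works like ordinary network training, e.g. with an MSE loss:
model, loss_fn = FuzzyInference(), nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.rand(32, 4), torch.rand(32, 1)      # dummy design components / feeling scores
opt.zero_grad(); loss_fn(model(x), y).backward(); opt.step()
```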

A Multi-layer Bidirectional Associative Neural Network with Improved Robust Capability for Hardware Implementation (성능개선과 하드웨어구현을 위한 다층구조 양방향연상기억 신경회로망 모델)

  • 정동규;이수영
    • Journal of the Korean Institute of Telematics and Electronics B
    • /
    • v.31B no.9
    • /
    • pp.159-165
    • /
    • 1994
  • In this paper, we propose a multi-layer associative neural network structure suitable for hardware implementation, with the capability of performance refinement and improved robustness. Unlike other methods, which reduce network complexity by putting restrictions on the synaptic weights, we impose the requirement on the hidden-layer neurons. The proposed network has synaptic weights obtained by the Hebbian rule between adjacent layers' memory patterns, as in Kosko's BAM. The network can be extended to an arbitrary multi-layer network trainable with a genetic algorithm, which finds the hidden-layer memory patterns starting from initial random binary patterns. Learning minimizes a newly defined network error composed of the errors at the input, hidden, and output layers. After learning, a bidirectional recall process improves the performance of the network over one-shot recall. Experimental results on pattern recognition problems demonstrate the performance as a function of the parameter that represents the relative significance of the hidden-layer error over the sum of the input- and output-layer errors, show that the proposed model performs much better than Kosko's bidirectional associative memory (BAM), and show the performance gain due to the bidirectionality of the recall process.
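
For reference, a minimal NumPy sketch (not the authors' implementation) of the building block the abstract starts from: Hebbian weights between two adjacent layers' bipolar memory patterns and one-shot bidirectional recall, as in Kosko's BAM. The pattern sizes and stored pairs are illustrative; the paper's GA-trained hidden-layer patterns are not modelled here.

```python
import numpy as np

def hebbian_weights(X, Y):
    """W = sum_k x_k y_k^T over bipolar (+1/-1) pattern pairs."""
    return sum(np.outer(x, y) for x, y in zip(X, Y))

def sign(v):
    return np.where(v >= 0, 1, -1)

# Two stored pattern pairs (bipolar)
X = [np.array([ 1, -1,  1, -1]), np.array([-1, -1,  1,  1])]
Y = [np.array([ 1,  1, -1]),     np.array([-1,  1,  1])]
W = hebbian_weights(X, Y)                     # (4, 3) weight matrix

# One-shot bidirectional recall from a probe pattern
x_probe = np.array([1, -1, 1, -1])
y_recalled = sign(x_probe @ W)                # forward pass:  X layer -> Y layer
x_recalled = sign(y_recalled @ W.T)           # backward pass: Y layer -> X layer
print(y_recalled, x_recalled)                 # recovers the stored pair
```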

Enhanced CNN Model for Brain Tumor Classification

  • Kasukurthi, Aravinda;Paleti, Lakshmikanth;Brahmaiah, Madamanchi;Sree, Ch.Sudha
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.5
    • /
    • pp.143-148
    • /
    • 2022
  • Brain tumor classification is an important process that allows doctors to plan treatment for patients based on the stages of the tumor. To improve classification performance, various CNN-based architectures are used for brain tumor classification. Existing methods for brain tumor segmentation suffer from overfitting and poor efficiency when dealing with large datasets. The enhanced CNN architecture proposed in this study is based on U-Net for brain tumor segmentation, RefineNet for pattern analysis, and SegNet architecture for brain tumor classification. The brain tumor benchmark dataset was used to evaluate the enhanced CNN model's efficiency. Based on the local and context information of the MRI image, the U-Net provides good segmentation. SegNet selects the most important features for classification while also reducing the trainable parameters. In the classification of brain tumors, the enhanced CNN method outperforms the existing methods. The enhanced CNN model has an accuracy of 96.85 percent, while the existing CNN with transfer learning has an accuracy of 94.82 percent.
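
The paper's models are not published here; the following is only an illustrative composition showing the segment-then-classify flow the abstract describes, where a segmentation network isolates the tumour region and a compact classifier labels the slice. `UNetLike` and `CompactClassifier` are hypothetical stand-ins, not the U-Net/RefineNet/SegNet stages used in the study.

```python
import torch
import torch.nn as nn

class UNetLike(nn.Module):
    """Placeholder segmenter producing a per-pixel soft tumour mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x):
        return torch.sigmoid(self.net(x))          # mask values in [0, 1]

class CompactClassifier(nn.Module):
    """Small classifier; global pooling keeps the trainable-parameter count low."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.fc(self.features(x))

segmenter, classifier = UNetLike(), CompactClassifier()
mri = torch.rand(1, 1, 128, 128)                   # dummy MRI slice
roi = mri * segmenter(mri)                         # suppress non-tumour context
logits = classifier(roi)                           # tumour class scores
```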

α-feature map scaling for raw waveform speaker verification (α-특징 지도 스케일링을 이용한 원시파형 화자 인증)

  • Jung, Jee-weon;Shim, Hye-jin;Kim, Ju-ho;Yu, Ha-Jin
    • The Journal of the Acoustical Society of Korea
    • /
    • v.39 no.5
    • /
    • pp.441-446
    • /
    • 2020
  • In this paper, we propose the α-Feature Map Scaling (α-FMS) method, which extends the FMS method designed to enhance the discriminative power of the feature maps of deep neural networks in Speaker Verification (SV) systems. The FMS derives a scale vector from a feature map and then adds it to or multiplies it with the features, or applies both operations sequentially. However, the FMS method not only uses an identical scale vector for both addition and multiplication, but is also limited to adding values between zero and one in the case of addition. In this study, to overcome these limitations, we propose α-FMS, which adds a trainable parameter α to the feature map element-wise and then multiplies by a scale vector. We compare the performance of two variants: one where α is a scalar and the other where it is a vector. Both α-FMS methods are applied after each residual block of the deep neural network. The proposed systems using the α-FMS methods are trained using RawNet2 and tested on the VoxCeleb1 evaluation set. The results demonstrate equal error rates of 2.47 % and 2.31 % for the two α-FMS methods, respectively.
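
A hedged PyTorch sketch of the α-FMS operation as described above: a trainable α (scalar or per-channel vector) is added element-wise to the feature map, which is then multiplied by a scale vector derived from the map itself. The scale derivation used here (average pooling over time, a linear layer, and a sigmoid) and the (batch, channels, time) layout are assumptions for illustration, not taken from the paper.

```python
import torch
import torch.nn as nn

class AlphaFMS(nn.Module):
    def __init__(self, channels: int, alpha_is_vector: bool = True):
        super().__init__()
        shape = (1, channels, 1) if alpha_is_vector else (1,)
        self.alpha = nn.Parameter(torch.zeros(shape))      # trainable α (vector or scalar)
        self.fc = nn.Linear(channels, channels)

    def forward(self, x):                                  # x: (batch, channels, time)
        s = x.mean(dim=-1)                                 # global average pooling over time
        s = torch.sigmoid(self.fc(s)).unsqueeze(-1)        # scale vector in (0, 1)
        return (x + self.alpha) * s                        # add α, then multiply by the scale

# Typically inserted after each residual block of the speaker-embedding network:
feat = torch.rand(8, 64, 200)
print(AlphaFMS(64)(feat).shape)                            # torch.Size([8, 64, 200])
```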