Title/Summary/Keyword: Fully convolutional network model


Modeling of Convolutional Neural Network-based Recommendation System

  • Kim, Tae-Yeun
    • Journal of Integrative Natural Science / v.14 no.4 / pp.183-188 / 2021
  • Collaborative filtering is one of the most commonly used methods in web recommendation systems, and numerous studies on collaborative filtering have proposed measures for enhancing its accuracy. This study proposes a movie recommendation system based on Word2Vec and ensemble convolutional neural networks. First, user sentences and movie sentences are constructed from the user, movie, and rating information. These sentences are then fed into Word2Vec to obtain a user vector and a movie vector. The user vector is input to a user convolutional model and the movie vector to a movie convolutional model, and both convolutional models are connected to a fully connected neural network model whose output layer produces the rating forecast for the user-movie pair. Test results showed that the proposed system achieved higher accuracy than a conventional collaborative filtering system and than the Word2Vec and deep neural network-based systems suggested in similar studies. The proposed recommendation system is expected to improve user satisfaction by taking the characteristics of individual users into account.
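
Below is a minimal PyTorch sketch of the two-stream design this abstract describes: a user convolutional stream and a movie convolutional stream feeding a shared fully connected head that outputs the rating forecast. The layer sizes, pooling choice, and Word2Vec dimension (100) are illustrative assumptions, not the paper's published configuration.

```python
import torch
import torch.nn as nn

class StreamConv(nn.Module):
    """1D conv stack applied to a Word2Vec embedding treated as a sequence."""
    def __init__(self, embed_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(8),   # pool to a fixed length
        )
        self.out_dim = 16 * 8

    def forward(self, x):              # x: (batch, embed_dim)
        x = x.unsqueeze(1)             # (batch, 1, embed_dim)
        return self.net(x).flatten(1)

class TwoStreamRecommender(nn.Module):
    def __init__(self, embed_dim=100):
        super().__init__()
        self.user_stream = StreamConv(embed_dim)
        self.movie_stream = StreamConv(embed_dim)
        self.head = nn.Sequential(     # fully connected model joining both streams
            nn.Linear(self.user_stream.out_dim * 2, 64), nn.ReLU(),
            nn.Linear(64, 1),          # predicted rating
        )

    def forward(self, user_vec, movie_vec):
        h = torch.cat([self.user_stream(user_vec),
                       self.movie_stream(movie_vec)], dim=1)
        return self.head(h)

model = TwoStreamRecommender()
rating = model(torch.randn(4, 100), torch.randn(4, 100))  # (4, 1)
```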

Multi-focus Image Fusion using Fully Convolutional Two-stream Network for Visual Sensors

  • Xu, Kaiping;Qin, Zheng;Wang, Guolong;Zhang, Huidi;Huang, Kai;Ye, Shuxiong
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.5 / pp.2253-2272 / 2018
  • We propose a deep learning method for multi-focus image fusion. Unlike most existing pixel-level fusion methods, which operate either in the spatial domain or in a transform domain, our method directly learns an end-to-end fully convolutional two-stream network. The framework maps a pair of differently focused images to a clean fused version through a chain of convolutional layers, a fusion layer, and deconvolutional layers. Our deep fusion model has the advantages of efficiency and robustness, yet demonstrates state-of-the-art fusion quality. We explore different parameter settings to achieve trade-offs between performance and speed. Moreover, experimental results on our training dataset show that our network achieves good performance in terms of both subjective visual perception and objective assessment metrics.
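
A hedged PyTorch sketch of the conv-fusion-deconv chain described above. The shared-weight encoder, channel counts, and element-wise max fusion are illustrative assumptions; the paper's exact two-stream configuration may differ.

```python
import torch
import torch.nn as nn

class FusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # encoder applied to each focus image (weights shared between streams)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # deconvolutional decoder maps fused features back to image size
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, a, b):           # a, b: (batch, 1, H, W) focus pair
        fa, fb = self.encoder(a), self.encoder(b)
        fused = torch.maximum(fa, fb)  # fusion layer: element-wise max
        return self.decoder(fused)

net = FusionNet()
out = net(torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64))  # (2, 1, 64, 64)
```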

A Triple Residual Multiscale Fully Convolutional Network Model for Multimodal Infant Brain MRI Segmentation

  • Chen, Yunjie;Qin, Yuhang;Jin, Zilong;Fan, Zhiyong;Cai, Mao
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.3 / pp.962-975 / 2020
  • The accurate segmentation of infant brain MR images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) is very important for the early study of brain growth patterns and of morphological changes in neurodevelopmental disorders. Because of the inherent myelination and maturation process, the WM and GM of babies between 6 and 9 months of age exhibit similar intensity levels in both T1-weighted (T1w) and T2-weighted (T2w) MR images during this isointense phase, which makes brain tissue segmentation very difficult. We propose a deep network architecture based on U-Net, called the Triple Residual Multiscale Fully Convolutional Network (TRMFCN), whose structure has three input gates and inserts two new blocks: a residual multiscale block and a concatenate block. With this model we addressed these difficulties and completed the segmentation task. Our model outperforms U-Net and several cutting-edge U-Net-based deep networks in the evaluation of WM, GM, and CSF. The dataset used for training and testing comes from the iSeg-2017 challenge (http://iseg2017.web.unc.edu).
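
As a rough illustration (not the authors' exact block), a residual multiscale block can be sketched as parallel convolutions at several kernel sizes merged by a 1x1 convolution and added back to the input; the kernel sizes and channel counts below are assumptions.

```python
import torch
import torch.nn as nn

class ResidualMultiscaleBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=k // 2)
            for k in (3, 5, 7)                  # multiscale receptive fields
        ])
        self.merge = nn.Conv2d(3 * channels, channels, 1)  # 1x1 fuse
        self.act = nn.ReLU()

    def forward(self, x):
        multi = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(x + self.merge(multi))  # residual connection

block = ResidualMultiscaleBlock(16)
y = block(torch.randn(1, 16, 32, 32))           # shape preserved: (1, 16, 32, 32)
```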

Study on Detection Technique for Sea Fog by using CCTV Images and Convolutional Neural Network (CCTV 영상과 합성곱 신경망을 활용한 해무 탐지 기법 연구)

  • Kim, Na-Kyeong;Bak, Su-Ho;Jeong, Min-Ji;Hwang, Do-Hyun;Enkhjargal, Unuzaya;Park, Mi-So;Kim, Bo-Ram;Yoon, Hong-Joo
    • The Journal of the Korea institute of electronic communication sciences / v.15 no.6 / pp.1081-1088 / 2020
  • In this paper, a method for detecting sea fog from CCTV images based on convolutional neural networks is proposed. The study data consist of 10,004 randomly extracted images, with and without sea fog, from a total of 11 ports or beaches (Busan Port, Busan New Port, Pyeongtaek Port, Incheon Port, Gunsan Port, Daesan Port, Mokpo Port, Yeosu Gwangyang Port, Ulsan Port, Pohang Port, and Haeundae Beach), labeled using a visibility threshold of 1 km. 80% of the 10,004 images were used for training the convolutional neural network model. The model has 16 convolutional layers and 3 fully connected layers, with Softmax classification performed in the last fully connected layer. Model accuracy was evaluated on the remaining 20%, yielding a classification accuracy of about 96%.
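
A minimal PyTorch sketch of the stated layout: 16 convolutional layers, 3 fully connected layers, and softmax classification in the last fully connected layer. The filter counts, pooling placement, and 224x224 input size are assumptions.

```python
import torch
import torch.nn as nn

def make_conv_stack():
    layers, in_ch = [], 3
    # 16 conv layers in four groups, with 2x2 max pooling after each group
    for out_ch in (32, 64, 128, 256):
        for _ in range(4):
            layers += [nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU()]
            in_ch = out_ch
        layers.append(nn.MaxPool2d(2))
    return nn.Sequential(*layers)

class SeaFogNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = make_conv_stack()          # 224x224 input -> 14x14 maps
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 14 * 14, 512), nn.ReLU(),
            nn.Linear(512, 64), nn.ReLU(),
            nn.Linear(64, 2),
            nn.Softmax(dim=1),   # softmax in the last FC layer, as described;
        )                        # for training, drop this and use CrossEntropyLoss

    def forward(self, x):
        return self.classifier(self.features(x))

probs = SeaFogNet()(torch.rand(1, 3, 224, 224))    # (1, 2): P(sea fog), P(clear)
```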

Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of information and communication convergence engineering / v.19 no.3 / pp.148-154 / 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among these, Speech Emotion Recognition (SER) is a method of recognizing a speaker's emotions from speech information. SER succeeds when distinctive features are selected and classified in an appropriate way. In this paper, the performance of SER using neural network models (e.g., a fully connected network (FCN) and a convolutional neural network (CNN)) with Mel-Frequency Cepstral Coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) dataset, after tuning model parameters, a two-dimensional Convolutional Neural Network (2D-CNN) with MFCC showed the best performance, with an average accuracy of 88.54% over five emotions (anger, happiness, calm, fear, and sadness) of men and women. In addition, an examination of the distribution of emotion recognition accuracies across neural network models shows that the 2D-CNN with MFCC can be expected to achieve an overall accuracy of 75% or more.
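
A hedged sketch of the MFCC-to-2D-CNN pipeline using torchaudio. The number of coefficients, sample rate, and conv stack below are assumptions rather than the paper's tuned model.

```python
import torch
import torch.nn as nn
import torchaudio

# MFCC front end: 40 coefficients at 16 kHz (both values are assumptions)
mfcc = torchaudio.transforms.MFCC(sample_rate=16000, n_mfcc=40)

# small 2D-CNN over the MFCC "image"; the paper's tuned layout is not reproduced
classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 5),    # 5 emotions: anger, happiness, calm, fear, sadness
)

waveform = torch.randn(1, 16000)           # one second of dummy audio
features = mfcc(waveform).unsqueeze(0)     # (1, 1, 40, frames), image-like
logits = classifier(features)              # (1, 5)
```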

Railroad Surface Defect Segmentation Using a Modified Fully Convolutional Network

  • Kim, Hyeonho;Lee, Suchul;Han, Seokmin
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.12 / pp.4763-4775 / 2020
  • This research aims to develop a deep learning-based method that automatically detects and segments defects on railroad surfaces to reduce the cost of visual inspection of the railroad. We developed our segmentation model by modifying a fully convolutional network model [1], a well-known machine learning model for segmentation, to detect and segment railroad surface defects. The data used in this research are images of the railroad surface containing one or more defect regions. The railroad images were cropped to a suitable size, considering their long height and relatively narrow width, and normalized using the variance and mean of the data images. Using these images, the suggested model was trained to segment the defect regions. The proposed method showed promising results in the segmentation of defects. We consider that the proposed method can facilitate decision-making about railroad maintenance and can potentially be applied to other analyses.
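
The preprocessing the abstract describes, cropping tall images and normalizing by dataset statistics, might look like the following NumPy sketch; the 512x512 crop size is an assumption.

```python
import numpy as np

def crop_and_normalize(image, size=512, mean=None, std=None):
    """Split a tall, narrow railroad image into square crops and standardize."""
    crops = [image[y:y + size, :size]
             for y in range(0, image.shape[0] - size + 1, size)]
    crops = np.stack(crops).astype(np.float32)
    if mean is None or std is None:        # fall back to per-batch statistics
        mean, std = crops.mean(), crops.std()
    return (crops - mean) / (std + 1e-8)

rail = np.random.rand(2048, 512)           # dummy long railroad surface image
patches = crop_and_normalize(rail)         # (4, 512, 512), zero mean, unit std
```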

A Fully Convolutional Network Model for Classifying Liver Fibrosis Stages from Ultrasound B-mode Images (초음파 B-모드 영상에서 FCN(fully convolutional network) 모델을 이용한 간 섬유화 단계 분류 알고리즘)

  • Kang, Sung Ho;You, Sun Kyoung;Lee, Jeong Eun;Ahn, Chi Young
    • Journal of Biomedical Engineering Research / v.41 no.1 / pp.48-54 / 2020
  • In this paper, we address a liver fibrosis classification problem using ultrasound B-mode images. Representative methods for classifying the stages of liver fibrosis include liver biopsy and diagnosis based on ultrasound images. The overall liver shape and the smoothness or roughness of the speckle pattern in ultrasound images are used to determine the fibrosis stage. Although ultrasound image-based classification is frequently used as an alternative or complement to invasive biopsy, it has the limitation that the fibrosis stage decision depends on image quality and the doctor's experience. With the rapid development of deep learning algorithms, several studies using deep learning methods have been carried out for automated liver fibrosis classification and have shown superior performance with high accuracy. The performance of these deep learning methods depends closely on the amount of data. We propose an enhanced U-Net architecture to maximize classification accuracy with a limited, small image dataset. U-Net is well known as a neural network for fast and precise segmentation of medical images; here we redesign it for the purpose of classifying liver fibrosis stages. To assess the performance of the proposed architecture, numerical experiments were conducted on a total of 118 ultrasound B-mode images acquired from 78 patients with liver fibrosis of stages F0~F4. The experimental results show that the proposed architecture performs much better than transfer learning using a pre-trained VGGNet model.
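
A hedged PyTorch sketch of repurposing a U-Net-style encoder for stage classification: encoder blocks followed by a classification head over the five stages F0~F4 in place of a segmentation decoder. Channel counts and input size are assumptions; the paper's enhanced U-Net is not reproduced here.

```python
import torch
import torch.nn as nn

def enc_block(cin, cout):
    """U-Net-style encoder block: two 3x3 convs followed by downsampling."""
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
        nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
    )

class FibrosisClassifier(nn.Module):
    def __init__(self, n_stages=5):                 # F0..F4
        super().__init__()
        self.encoder = nn.Sequential(
            enc_block(1, 32), enc_block(32, 64), enc_block(64, 128),
        )
        self.head = nn.Sequential(                  # classification head
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # replaces the decoder
            nn.Linear(128, n_stages),
        )

    def forward(self, x):                           # x: B-mode image batch
        return self.head(self.encoder(x))

logits = FibrosisClassifier()(torch.rand(2, 1, 128, 128))  # (2, 5)
```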

Residual Learning Based CNN for Gesture Recognition in Robot Interaction

  • Han, Hua
    • Journal of Information Processing Systems / v.17 no.2 / pp.385-398 / 2021
  • The complexity of deep learning models affects the real-time performance of gesture recognition, thereby limiting the application of gesture recognition algorithms in actual scenarios. Hence, a residual learning neural network based on a deep convolutional neural network is proposed. First, small convolution kernels are used to extract the local details of gesture images. Subsequently, a shallow residual structure is built to share weights, thereby avoiding vanishing or exploding gradients as the network deepens; consequently, model optimisation becomes simpler. Additional convolutional neural networks are used to accelerate the refinement of deep abstract features, based on the spatial importance of the gesture feature distribution. Finally, a fully connected cascade softmax classifier completes the gesture recognition. Compared with a densely connected network that multiplexes feature information, the proposed algorithm optimises feature reuse to avoid the performance fluctuations caused by feature redundancy. Experimental results on the ISOGD gesture dataset and the Gesture dataset show that the proposed algorithm affords fast convergence and high accuracy.
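
As an illustration of two ideas named in the abstract, small 3x3 kernels and a shallow residual structure with an identity shortcut, here is a hedged PyTorch sketch; the exact layer layout is an assumption.

```python
import torch
import torch.nn as nn

class ShallowResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # small 3x3 kernels capture local detail in gesture images
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.act = nn.ReLU()

    def forward(self, x):
        h = self.act(self.conv1(x))
        # identity shortcut keeps gradients flowing as the network deepens
        return self.act(x + self.conv2(h))

block = ShallowResidualBlock(32)
y = block(torch.randn(1, 32, 56, 56))   # shape preserved: (1, 32, 56, 56)
```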

Performance Evaluation of Deep Learning Model according to the Ratio of Cultivation Area in Training Data (훈련자료 내 재배지역의 비율에 따른 딥러닝 모델의 성능 평가)

  • Seong, Seonkyeong;Choi, Jaewan
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1007-1014 / 2022
  • The Compact Advanced Satellite 500 (CAS500) can be used for various purposes, including vegetation, forestry, and agriculture applications, and is expected to enable the rapid acquisition of satellite images over various areas. In order to use satellite images acquired through CAS500 in the agricultural field, it is necessary to develop satellite image-based techniques for extracting crop cultivation areas. In particular, as research in the field of deep learning has become active in recent years, work on developing deep learning models for extracting crop cultivation areas and on generating training data is needed. This manuscript classified the onion and garlic cultivation areas in Hapcheon-gun using PlanetScope satellite images and farm maps. In particular, for effective model learning, model performance was analyzed according to the proportion of crop cultivation area in the training data. For the deep learning model, the Fully Convolutional Densely Connected Convolutional Network (FC-DenseNet) was restructured to fit the purpose of crop cultivation area classification. The experiments showed that the ratio of crop cultivation area in the training data affected the performance of the deep learning model.
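
A small NumPy sketch of the experimental variable studied above: measuring the fraction of cultivated pixels in each training patch so that training sets with different cultivation ratios can be assembled. The label convention (1 = onion/garlic, 0 = background) and the threshold value are assumptions.

```python
import numpy as np

def cultivation_ratio(label_patch):
    """Fraction of pixels labeled as cultivated area in one patch."""
    return float((label_patch == 1).mean())

def select_patches(labels, min_ratio):
    """Keep training patches whose cultivated fraction meets a threshold."""
    return [p for p in labels if cultivation_ratio(p) >= min_ratio]

patches = [np.random.randint(0, 2, (64, 64)) for _ in range(100)]
train_set = select_patches(patches, min_ratio=0.3)
print(len(train_set), "patches at >= 30% cultivated area")
```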

Performance Improvement of Convolutional Neural Network for Pulmonary Nodule Detection (폐 결절 검출을 위한 합성곱 신경망의 성능 개선)

  • Kim, HanWoong;Kim, Byeongnam;Lee, JeeEun;Jang, Won Seuk;Yoo, Sun K.
    • Journal of Biomedical Engineering Research / v.38 no.5 / pp.237-241 / 2017
  • Early detection of pulmonary nodules is important for the diagnosis and treatment of lung cancer. Recently, CT has been used as a screening tool for lung nodule detection, and it has been reported that computer-aided detection (CAD) systems can improve the accuracy of radiologists in detecting nodules on CT scans. A previous study proposed a method using a Convolutional Neural Network (CNN) in a lung CAD system, but the proposed model has limited accuracy due to its sparse layer structure. Therefore, we propose a deep convolutional neural network to overcome this limitation. The model proposed in this work consists of 14 layers, including 8 convolutional layers and 4 fully connected layers. The CNN model is trained and tested with 61,404 region-of-interest (ROI) patches of lung images, including 39,760 nodules and 21,644 non-nodules, extracted from the Lung Image Database Consortium (LIDC) dataset. We obtained a classification accuracy of 91.79% with this CNN model. To prevent overfitting, we trained the model with an augmented dataset and with regularization terms in the cost function. With L1 and L2 regularization during training, we obtained accuracies of 92.39% and 92.52%, respectively, and 93.52% with data augmentation. In conclusion, we obtained an accuracy of 93.75% with L2 regularization and data augmentation combined.
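
A minimal PyTorch sketch of the two anti-overfitting measures the abstract reports, L2 regularization and data augmentation. The stand-in model, weight-decay value, and flip/rotation augmentations are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torchvision.transforms as T

# data augmentation applied to ROI patches (flips/rotations are assumptions)
augment = T.Compose([T.RandomHorizontalFlip(), T.RandomRotation(10)])

# stand-in model; the paper's network has 8 conv and 4 fully connected layers
model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2))
criterion = nn.CrossEntropyLoss()
# weight_decay adds the L2 penalty term to the cost function
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=1e-4)

x = augment(torch.rand(16, 1, 32, 32))   # batch of nodule/non-nodule ROIs
y = torch.randint(0, 2, (16,))           # 1 = nodule, 0 = non-nodule
optimizer.zero_grad()
loss = criterion(model(x), y)
# an explicit L1 penalty could be added instead:
# loss = loss + 1e-5 * sum(p.abs().sum() for p in model.parameters())
loss.backward()
optimizer.step()
```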