• Title/Abstract/Keyword: Deep convolutional neural networks

Search results: 401

Pyramidal Deep Neural Networks for the Accurate Segmentation and Counting of Cells in Microscopy Data

  • Vununu, Caleb; Kang, Kyung-Won; Lee, Suk-Hwan; Kwon, Ki-Ryong
    • 한국멀티미디어학회논문지 / Vol. 22, No. 3 / pp.335-348 / 2019
  • Cell segmentation and counting are among the most important tasks required for an exhaustive understanding of biological images. Conventional features lack spatial consistency, which causes adjacent cells to merge and thus complicates the counting task. In this work, we propose a cascade of networks that take different versions of the original image as inputs. After constructing a Gaussian pyramid representation of the microscopy data, the inputs of different sizes and spatial resolutions are fed to a cascade of deep convolutional autoencoders whose task is to reconstruct the segmentation mask. The coarse masks obtained from the different networks are summed to produce the final mask. The principal contribution of this work is a novel method for cell counting. Unlike the majority of methods, which use the obtained segmentation mask as prior information for counting, we utilize the hidden latent representations, often called high-level features, as inputs to a neural-network-based regressor. While the segmentation part of our method performs as well as conventional deep learning methods, the proposed cell counting approach outperforms state-of-the-art methods.
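
Below is a minimal sketch of the pyramid-plus-autoencoder idea in this abstract, not the authors' implementation: the layer widths, the average pooling used in place of Gaussian smoothing, and the pooled-latent regressor head are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAutoencoder(nn.Module):
    """One level of the cascade: reconstructs a coarse mask and exposes its latent code."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        z = self.encoder(x)      # latent ("high-level") features
        return self.decoder(z), z

def gaussian_pyramid(img, levels=3):
    """Average pooling stands in here for Gaussian blur + subsampling."""
    pyramid = [img]
    for _ in range(levels - 1):
        img = F.avg_pool2d(img, 2)
        pyramid.append(img)
    return pyramid

class PyramidalCellCounter(nn.Module):
    def __init__(self, levels=3):
        super().__init__()
        self.levels = levels
        self.autoencoders = nn.ModuleList([ConvAutoencoder() for _ in range(levels)])
        self.regressor = nn.Sequential(nn.LazyLinear(128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, img):
        masks, latents = [], []
        for level, ae in zip(gaussian_pyramid(img, self.levels), self.autoencoders):
            mask, z = ae(level)
            masks.append(F.interpolate(mask, size=img.shape[-2:],
                                       mode="bilinear", align_corners=False))
            latents.append(z.mean(dim=(2, 3)))       # pool each latent map
        final_mask = torch.stack(masks).sum(dim=0)   # sum of the coarse masks
        count = self.regressor(torch.cat(latents, dim=1))  # count from latents, not masks
        return final_mask, count

mask, count = PyramidalCellCounter()(torch.rand(1, 1, 128, 128))
```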

Arabic Text Recognition with Harakat Using Deep Learning

  • Ashwag, Maghraby; Esraa, Samkari
    • International Journal of Computer Science & Network Security / Vol. 23, No. 1 / pp.41-46 / 2023
  • Because harakat (diacritical marks) play a significant role in Arabic text, this paper uses deep learning to extract Arabic text, together with its harakat, from an image. A combination of convolutional and recurrent neural networks was applied to a dataset of 110 images, each representing one word. The results show that some letters can be extracted together with their harakat.
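
The abstract does not detail the architecture; the following is only a hedged CRNN-style sketch of pairing a small CNN with a recurrent layer to read a word image as a character sequence, with the image height, alphabet size, and layer widths chosen arbitrarily.

```python
import torch
import torch.nn as nn

class TinyCRNN(nn.Module):
    """Toy CNN + GRU reader; a CTC or cross-entropy loss would sit on top."""
    def __init__(self, num_classes=80):            # hypothetical alphabet size
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.rnn = nn.GRU(input_size=64 * 8, hidden_size=128,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(256, num_classes)

    def forward(self, x):                           # x: [B, 1, 32, W] word image
        f = self.cnn(x)                             # [B, 64, 8, W/4]
        f = f.permute(0, 3, 1, 2).flatten(2)        # one feature vector per column
        out, _ = self.rnn(f)
        return self.head(out)                       # per-column class scores

logits = TinyCRNN()(torch.rand(2, 1, 32, 128))      # -> torch.Size([2, 32, 80])
```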

Human Motion Recognition Based on Spatio-temporal Convolutional Neural Network

  • Hu, Zeyuan; Park, Sange-yun; Lee, Eung-Joo
    • 한국멀티미디어학회논문지 / Vol. 23, No. 8 / pp.977-985 / 2020
  • To address the problems of complex feature extraction and low accuracy in human action recognition, this paper proposes a network structure that combines batch normalization with the GoogLeNet model. The batch normalization idea from image classification is carried over to action recognition by normalizing the network's training inputs per mini-batch. The convolutional network takes an RGB image as the spatial input and stacked optical flow fields as the temporal input, and the spatial and temporal networks are then fused to obtain the final recognition result. The architecture was trained and evaluated on the standard video action benchmarks UCF101 and HMDB51, achieving accuracies of 93.42% and 67.82%, respectively. The results show that the improved convolutional neural network significantly raises the recognition rate and has clear advantages for action recognition.
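
A hedged sketch of the two-stream layout described above: a small batch-normalized CNN stands in for the GoogLeNet backbone, and averaging the class scores stands in for the fusion step; both substitutions are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

def stream(in_channels, num_classes=101):
    """Small batch-normalized CNN standing in for the GoogLeNet-style backbone."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_classes),
    )

class TwoStreamNet(nn.Module):
    def __init__(self, flow_stack=10, num_classes=101):
        super().__init__()
        self.spatial = stream(3, num_classes)                 # one RGB frame
        self.temporal = stream(2 * flow_stack, num_classes)   # stacked x/y optical flows

    def forward(self, rgb, flow):
        return (self.spatial(rgb) + self.temporal(flow)) / 2  # late score fusion

scores = TwoStreamNet()(torch.rand(1, 3, 112, 112), torch.rand(1, 20, 112, 112))
```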

A Study on the Accuracy Improvement of Movie Recommender System Using Word2Vec and Ensemble Convolutional Neural Networks

  • 강부식
    • 디지털융복합연구 / Vol. 17, No. 1 / pp.123-130 / 2019
  • Collaborative filtering is one of the most widely used approaches in web recommendation, and many collaborative filtering studies have proposed ways to improve its accuracy. This study proposes a movie recommendation method using Word2Vec and ensemble convolutional neural networks. First, user sentences and movie sentences are constructed from the user, movie, and rating information. The user and movie sentences are fed into Word2Vec to obtain user vectors and movie vectors. The user vector is input to a user convolutional model and the movie vector to a movie convolutional model, and the two convolutional models are joined by a fully connected neural network. Finally, the output layer of the fully connected network produces a predicted value of the user's movie rating. Experimental results show that the proposed method improves accuracy compared with the traditional collaborative filtering technique and with a related method that uses Word2Vec and deep neural networks.
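
A minimal sketch of the pipeline described in the abstract, assuming the Word2Vec user and movie vectors are already computed; the 1-D branch widths and the 100-dimensional embeddings are illustrative choices, not the paper's settings.

```python
import torch
import torch.nn as nn

def conv_branch():
    """1-D convolutional branch applied to an embedding vector treated as a signal."""
    return nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool1d(2),
        nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    )

class CNNRecommender(nn.Module):
    def __init__(self):
        super().__init__()
        self.user_cnn = conv_branch()
        self.movie_cnn = conv_branch()
        self.fc = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, user_vec, movie_vec):          # Word2Vec vectors, [B, dim]
        u = self.user_cnn(user_vec.unsqueeze(1))
        m = self.movie_cnn(movie_vec.unsqueeze(1))
        return self.fc(torch.cat([u, m], dim=1))     # predicted rating

rating = CNNRecommender()(torch.rand(4, 100), torch.rand(4, 100))
```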

Localization of ripe tomato bunch using deep neural networks and class activation mapping

  • Seung-Woo Kang; Soo-Hyun Cho; Dae-Hyun Lee; Kyung-Chul Kim
    • 농업과학연구 / Vol. 50, No. 3 / pp.357-364 / 2023
  • In this study, we propose a ripe tomato bunch localization method based on convolutional neural networks, to be applied in robotic harvesting systems. Tomato images were obtained from a smart greenhouse at the Rural Development Administration (RDA). The sample images for training were extracted based on tomato maturity and resized to 128 × 128 pixels for use in the classification model. The model was constructed as a four-layer convolutional neural network, with the classes determined by stage of maturity using a Softmax classifier. The location of the ripe tomato bunch region was indicated on a class activation map. The class activation map could show the approximate location of the bunch but tended to highlight either only a small part or an overly large part of the ripe tomato bunch region, which could degrade performance. Therefore, we suggest a recursive method to improve the model. The classification results indicated that the accuracy, precision, recall, and F1-score were 0.98, 0.87, 0.98, and 0.92, respectively. The localization performance, measured by Intersection over Union (IoU), was 0.52, and input recursion improved the IoU by 13%. Based on these results, the proposed localization of the ripe tomato bunch area can be incorporated into robotic harvesting systems to establish optimal harvesting paths.
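
A hedged sketch of class activation mapping (CAM) for this kind of localization, with an assumed four-layer feature extractor and an assumed class count; the recursive refinement step reported above is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CamCNN(nn.Module):
    """Four conv layers, global average pooling, softmax-style linear classifier."""
    def __init__(self, num_classes=3):               # assumed number of maturity classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        fmap = self.features(x)                       # [B, 128, H', W']
        return self.classifier(fmap.mean(dim=(2, 3))), fmap

def class_activation_map(model, x, target_class):
    _, fmap = model(x)
    weights = model.classifier.weight[target_class]           # weights of the target class
    cam = F.relu(torch.einsum("c,bchw->bhw", weights, fmap))  # weighted sum of feature maps
    return F.interpolate(cam.unsqueeze(1), size=x.shape[-2:],
                         mode="bilinear", align_corners=False).squeeze(1)

cam = class_activation_map(CamCNN(), torch.rand(1, 3, 128, 128), target_class=0)
```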

A Study on Optimal Convolutional Neural Networks Backbone for Reinforced Concrete Damage Feature Extraction

  • 박영훈
    • 대한토목학회논문집 / Vol. 43, No. 4 / pp.511-523 / 2023
  • Research that combines unmanned aerial vehicles (UAVs) with deep learning for detecting damage in reinforced concrete is actively under way. Convolutional neural networks serve as the backbone of object classification, detection, and segmentation models and strongly influence model performance. MobileNet, a pre-trained convolutional neural network, can achieve sufficient accuracy with little computation and is therefore an efficient backbone for real-time, UAV-based damage detection. Comparing a vanilla convolutional neural network with MobileNet showed that MobileNet achieved 6.0-9.0% higher validation accuracy while requiring only 15.9-22.9% of the computation of the vanilla network. MobileNetV2, MobileNetV3Large, and MobileNetV3Small showed nearly identical maximum validation accuracies, and the optimal conditions for extracting reinforced concrete damage image features with MobileNet were found to be the RMSprop optimizer, no dropout, and average pooling. The maximum validation accuracy of 75.49% obtained in this study for seven-class damage detection with MobileNetV2 can be improved through image accumulation and continued training.
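
A hedged sketch of the best configuration reported above (MobileNetV2 backbone, RMSprop optimizer, no dropout, average pooling before the classification head, seven damage classes); the learning rate and the choice of random initialization are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# torchvision >= 0.13; pass pretrained weights here if transfer learning is wanted.
backbone = models.mobilenet_v2(weights=None)
# MobileNetV2 applies global average pooling before its head; the default head's
# dropout is removed here, matching the reported best setting (no dropout).
backbone.classifier = nn.Linear(backbone.last_channel, 7)        # seven damage classes

optimizer = torch.optim.RMSprop(backbone.parameters(), lr=1e-4)  # assumed learning rate
criterion = nn.CrossEntropyLoss()

logits = backbone(torch.rand(2, 3, 224, 224))    # [2, 7] damage-class scores
loss = criterion(logits, torch.tensor([0, 3]))
```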

Multimodal Face Biometrics by Using Convolutional Neural Networks

  • Tiong, Leslie Ching Ow; Kim, Seong Tae; Ro, Yong Man
    • 한국멀티미디어학회논문지 / Vol. 20, No. 2 / pp.170-178 / 2017
  • Biometric recognition is a challenging topic that demands high recognition accuracy. Most existing methods rely on a single biometric source, and their accuracy is affected by variability such as illumination and appearance changes. In this paper, we propose a new multimodal biometric recognition method using convolutional neural networks, focusing on multimodal biometrics from the face and periocular regions. Through experiments, we demonstrate that a deep learning framework built on multimodal facial biometric features is helpful for achieving high recognition performance.
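
An illustrative sketch, under assumed layer sizes and an assumed subject count, of fusing face and periocular CNN features before a shared classifier, which is the multimodal idea the abstract describes.

```python
import torch
import torch.nn as nn

def cnn_branch(out_dim=128):
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, out_dim),
    )

class MultimodalFaceNet(nn.Module):
    def __init__(self, num_subjects=100):            # hypothetical gallery size
        super().__init__()
        self.face = cnn_branch()
        self.periocular = cnn_branch()
        self.classifier = nn.Linear(256, num_subjects)

    def forward(self, face_img, periocular_img):
        feats = torch.cat([self.face(face_img),
                           self.periocular(periocular_img)], dim=1)  # feature-level fusion
        return self.classifier(feats)                                # identity scores

scores = MultimodalFaceNet()(torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64))
```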

A Gradient-Based Explanation Method for Node Classification Using Graph Convolutional Networks

  • Chaehyeon Kim; Hyewon Ryu; Ki Yong Lee
    • Journal of Information Processing Systems / Vol. 19, No. 6 / pp.803-816 / 2023
  • Explainable artificial intelligence aims to explain how a complex model (e.g., a deep neural network) produces its output from a given input. Recently, graph-structured data have been widely used in various fields, and diverse graph neural networks (GNNs) have been developed for them. However, methods for explaining the behavior of GNNs have received little study, so only a limited understanding of GNNs is currently available. Therefore, in this paper, we propose an explanation method for node classification with graph convolutional networks (GCNs), a representative type of GNN. The proposed method determines which features of each node have the greatest influence on the GCN's classification of that node, identifying the influential features by backtracking through the layers of the GCN from the output layer to the input layer using gradients. Experimental results on both synthetic and real datasets demonstrate that the proposed explanation method accurately identifies the features of each node that have the greatest influence on its classification.
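
A hedged sketch of the gradient-based idea on a toy two-layer GCN: the gradient of a node's predicted class score with respect to that node's input features is read off as a per-feature influence score. Autograd is used here purely for illustration; the paper backtracks the GCN layers explicitly.

```python
import torch
import torch.nn as nn

class TinyGCN(nn.Module):
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, num_classes)

    def forward(self, x, a_hat):                 # a_hat: normalized adjacency
        h = torch.relu(a_hat @ self.w1(x))
        return a_hat @ self.w2(h)

# Toy graph: 4 nodes, 5 features, adjacency with self-loops.
a = torch.tensor([[1., 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 1]])
deg = a.sum(dim=1)
a_hat = a / torch.sqrt(deg[:, None] * deg[None, :])   # D^-1/2 A D^-1/2
x = torch.rand(4, 5, requires_grad=True)

model = TinyGCN(5, 8, 3)
logits = model(x, a_hat)
target_node = 2
pred_class = logits[target_node].argmax()
logits[target_node, pred_class].backward()             # gradient of the node's class score
influence = x.grad[target_node].abs()                  # per-feature influence for that node
```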

Automated Ulna and Radius Segmentation model based on Deep Learning on DEXA

  • 김영재; 박성진; 김경래; 김광기
    • 한국멀티미디어학회논문지 / Vol. 21, No. 12 / pp.1407-1416 / 2018
  • The purpose of this study was to train and verify a convolutional neural network model for ulna and radius bone segmentation. The data consisted of 840 training, 210 tuning, and 200 verification images. The segmentation model was based on U-Net (19 convolutional layers and 8 max-pooling layers) and was trained with a batch size of 8, a learning rate of 0.0001, and 200 epochs. On the training data, the average sensitivity was 0.998, specificity 0.972, accuracy 0.979, and Dice similarity coefficient 0.968. On the validation data, the average sensitivity was 0.961, specificity 0.978, accuracy 0.972, and Dice similarity coefficient 0.961. The deep convolutional neural network based model thus performed well for ulna and radius segmentation.
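
A minimal sketch, not the authors' code, of the Dice similarity coefficient used to score predicted masks, together with the reported training hyperparameters (batch size 8, learning rate 0.0001, 200 epochs).

```python
import torch

def dice_coefficient(pred_mask: torch.Tensor, true_mask: torch.Tensor,
                     eps: float = 1e-7) -> torch.Tensor:
    """Dice = 2|P ∩ T| / (|P| + |T|) for binary masks."""
    pred = pred_mask.float().flatten()
    true = true_mask.float().flatten()
    intersection = (pred * true).sum()
    return (2 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Training configuration as reported in the abstract.
BATCH_SIZE, LEARNING_RATE, EPOCHS = 8, 1e-4, 200

pred = torch.rand(1, 1, 256, 256) > 0.5   # stand-ins for predicted and ground-truth masks
true = torch.rand(1, 1, 256, 256) > 0.5
print(float(dice_coefficient(pred, true)))
```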

Toward Practical Augmentation of Raman Spectra for Deep Learning Classification of Contamination in HDD

  • Seksan Laitrakun; Somrudee Deepaisarn; Sarun Gulyanon; Chayud Srisumarnk; Nattapol Chiewnawintawat; Angkoon Angkoonsawaengsuk; Pakorn Opaprakasit; Jirawan Jindakaew; Narisara Jaikaew
    • Journal of information and communication convergence engineering / Vol. 21, No. 3 / pp.208-215 / 2023
  • Deep learning techniques provide powerful solutions to several pattern-recognition problems, including Raman spectral classification. However, these networks require large amounts of labeled data to perform well, and such labels are typically obtained in a laboratory; the burden of collecting them can potentially be alleviated by data augmentation. This study investigated various data augmentation techniques and applied multiple deep learning methods to Raman spectral classification. Raman spectra yield fingerprint-like information about chemical composition but are prone to noise when the particles of the material are small. Five augmentation models were investigated for building robust deep learning classifiers: weighted sums of spectral signals, imitated chemical backgrounds, extended multiplicative signal augmentation, and added Gaussian- and Poisson-distributed noise. We compared the performance of nine state-of-the-art convolutional neural networks under all of the augmentation techniques. The LeNet5 model with background-noise augmentation yielded the highest accuracy on real-world Raman spectral classification, at 88.33%. A class activation map of the model was generated to provide a qualitative view of the results.
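
A hedged sketch of three of the augmentations named above (weighted sums of spectra, added Gaussian noise, added Poisson noise); the spectrum length, noise scales, and mixing weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_sum(spectra: np.ndarray) -> np.ndarray:
    """Mix several same-class spectra with random convex weights."""
    weights = rng.dirichlet(np.ones(len(spectra)))
    return weights @ spectra

def add_gaussian_noise(spectrum: np.ndarray, sigma: float = 0.01) -> np.ndarray:
    return spectrum + rng.normal(0.0, sigma, size=spectrum.shape)

def add_poisson_noise(spectrum: np.ndarray, scale: float = 1000.0) -> np.ndarray:
    """Shot-noise-like augmentation: resample counts from a Poisson model."""
    counts = np.clip(spectrum, 0, None) * scale
    return rng.poisson(counts) / scale

spectra = rng.random((5, 1024))            # five toy spectra, 1024 wavenumber bins
augmented = [weighted_sum(spectra),
             add_gaussian_noise(spectra[0]),
             add_poisson_noise(spectra[0])]
```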