• Title/Abstract/Keyword: AlexNet Networks


Audio and Video Bimodal Emotion Recognition in Social Networks Based on Improved AlexNet Network and Attention Mechanism

  • Liu, Min; Tang, Jun
    • Journal of Information Processing Systems, Vol. 17, No. 4, pp. 754-771, 2021
  • In continuous-dimension emotion recognition, the parts of a signal that highlight emotional expression differ across modalities, and each modality influences the estimated emotional state to a different degree. This paper therefore studies the fusion of the two most important modalities in emotion recognition, voice and facial expression, and proposes a dual-modal emotion recognition method that combines an improved AlexNet network with an attention mechanism. After simple preprocessing of the audio and video signals, audio features are first extracted using prior knowledge. Facial expression features are then extracted by the improved AlexNet network. Finally, a multimodal attention mechanism fuses the facial expression and audio features, and an improved loss function mitigates the missing-modality problem, improving the robustness of the model and the performance of emotion recognition. Experimental results show that the concordance correlation coefficients of the proposed model in the arousal and valence dimensions were 0.729 and 0.718, respectively, which are superior to several comparative algorithms.
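
As a rough illustration of the fusion step, here is a minimal PyTorch sketch of modality-level attention over audio and facial-expression embeddings. It is not the authors' implementation; the feature dimensions, the scoring layer, and all names are assumptions.

```python
import torch
import torch.nn as nn

class ModalityAttentionFusion(nn.Module):
    """Toy attention fusion over audio and facial-expression features.

    Each modality is projected to a shared dimension; attention weights
    decide how much each modality contributes to the fused embedding.
    """
    def __init__(self, audio_dim=128, face_dim=256, fused_dim=128):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, fused_dim)
        self.face_proj = nn.Linear(face_dim, fused_dim)
        self.score = nn.Linear(fused_dim, 1)  # scalar score per modality

    def forward(self, audio_feat, face_feat):
        # Stack the two projected modality embeddings: (batch, 2, fused_dim)
        mods = torch.stack(
            [self.audio_proj(audio_feat), self.face_proj(face_feat)], dim=1)
        # Softmax over the modality axis yields the attention weights
        weights = torch.softmax(self.score(torch.tanh(mods)), dim=1)
        return (weights * mods).sum(dim=1)  # weighted fusion

fusion = ModalityAttentionFusion()
fused = fusion(torch.randn(4, 128), torch.randn(4, 256))
print(fused.shape)  # torch.Size([4, 128])
```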

Variations of AlexNet and GoogLeNet to Improve Korean Character Recognition Performance

  • Lee, Sang-Geol; Sung, Yunsick; Kim, Yeon-Gyu; Cha, Eui-Young
    • Journal of Information Processing Systems, Vol. 14, No. 1, pp. 205-217, 2018
  • Deep learning using convolutional neural networks (CNNs) is being studied in various fields of image recognition, and these studies show excellent performance. In this paper, we compare the performance of two CNN architectures, KCR-AlexNet and KCR-GoogLeNet. The experimental data are obtained from PHD08, a large-scale Korean character database with 2,187 samples of each of 2,350 Korean character classes, for a total of 5,139,450 data samples. In the training results, KCR-AlexNet showed an accuracy of over 98% and KCR-GoogLeNet an accuracy of over 99% on the top-1 test after the final training iteration. To compare classification success rates with commercial optical character recognition (OCR) programs and to ensure the objectivity of the experiment, we built an additional Korean character dataset with fonts not found in PHD08. While the commercial OCR programs showed classification success rates of 66.95% to 83.16%, KCR-AlexNet and KCR-GoogLeNet showed average classification success rates of 90.12% and 89.14%, respectively, both higher than the commercial programs' rates. Considering the time factor, KCR-AlexNet trained faster on PHD08, while KCR-GoogLeNet classified faster.
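
The exact KCR-AlexNet modifications are not reproduced here, but the general recipe of re-heading an AlexNet-style classifier for PHD08's 2,350 character classes can be sketched in PyTorch; the grayscale input and 224×224 size are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical sketch: adapt a stock torchvision AlexNet to the 2,350
# Korean character classes of PHD08 (not the paper's exact KCR-AlexNet).
model = models.alexnet(weights=None)

# PHD08 glyphs are grayscale; accept 1 input channel instead of 3.
model.features[0] = nn.Conv2d(1, 64, kernel_size=11, stride=4, padding=2)

# Replace the 1000-way ImageNet head with a 2,350-way character head.
model.classifier[6] = nn.Linear(4096, 2350)

x = torch.randn(8, 1, 224, 224)      # a batch of stand-in glyph images
logits = model(x)                    # shape: (8, 2350)
print(logits.shape)
```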

Convolutional Neural Networks for Character-level Classification

  • Ko, Dae-Gun; Song, Su-Han; Kang, Ki-Min; Han, Seong-Wook
    • IEIE Transactions on Smart Processing and Computing, Vol. 6, No. 1, pp. 53-59, 2017
  • Optical character recognition (OCR) automatically recognizes text in an image and is still a challenging problem in computer vision. A successful solution to OCR has important device applications, such as text-to-speech conversion and automatic document classification. In this work, we analyze character recognition performance using current state-of-the-art deep learning structures: AlexNet, LeNet, and SPNet. For this, we built our own dataset containing digits and upper- and lower-case characters. We experiment in the presence of salt-and-pepper or Gaussian noise and report the performance comparison in terms of recognition error. Five-fold cross-validation results indicate that the SPNet structure (our approach) outperforms AlexNet and LeNet in recognition error.
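
The two noise models used in such robustness experiments are standard; below is a minimal NumPy sketch, with the noise amounts chosen only for illustration.

```python
import numpy as np

def salt_and_pepper(img, amount=0.05, rng=None):
    """Flip a fraction of pixels to black (pepper) or white (salt)."""
    rng = rng or np.random.default_rng()
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < amount / 2] = 0.0          # pepper
    noisy[mask > 1 - amount / 2] = 1.0      # salt
    return noisy

def gaussian(img, sigma=0.1, rng=None):
    """Add zero-mean Gaussian noise and clip back to [0, 1]."""
    rng = rng or np.random.default_rng()
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

img = np.random.default_rng(0).random((32, 32))  # stand-in character image
print(salt_and_pepper(img).min(), gaussian(img).max())
```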

병렬형 합성곱 신경망을 이용한 골절합용 판의 탐지 성능 비교에 관한 연구 (A Study on Detection Performance Comparison of Bone Plates Using Parallel Convolution Neural Networks)

  • 이송연; 허용정
    • Journal of the Semiconductor & Display Technology (반도체디스플레이기술학회지), Vol. 21, No. 3, pp. 63-68, 2022
  • In this study, we built defect detection models using parallel convolutional neural networks, expecting that a parallel configuration would increase detection accuracy and reduce detection time. We built parallel defect detection models from four types of convolutional architectures and evaluated them with two indicators: detection accuracy and detection time. Comparing the parallel models, the AlexNet-based model achieved a detection accuracy of 97% with a detection time of 0.3 seconds. We confirmed that the parallel AlexNet configuration yields the highest performance.
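
The abstract does not specify how the parallel branches are combined; the following is a purely hypothetical PyTorch sketch in which identical backbones score the same image and their logits are averaged.

```python
import torch
import torch.nn as nn
from torchvision import models

class ParallelDetector(nn.Module):
    """Hypothetical parallel configuration: several CNN branches score
    the same image and their class logits are merged by averaging."""
    def __init__(self, num_classes=2, branches=2):
        super().__init__()
        self.branches = nn.ModuleList()
        for _ in range(branches):
            net = models.alexnet(weights=None)
            net.classifier[6] = nn.Linear(4096, num_classes)
            self.branches.append(net)

    def forward(self, x):
        # Each branch runs independently on the same input batch.
        logits = torch.stack([b(x) for b in self.branches], dim=0)
        return logits.mean(dim=0)

model = ParallelDetector()
print(model(torch.randn(2, 3, 224, 224)).shape)  # torch.Size([2, 2])
```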

코로나바이러스 감염증19 데이터베이스에 기반을 둔 인공신경망 모델의 특성 평가 (Evaluation of Deep-Learning Feature Based COVID-19 Classifier in Various Neural Network)

  • 홍준용; 정영진
    • Journal of Radiological Science and Technology (대한방사선기술학회지), Vol. 43, No. 5, pp. 397-404, 2020
  • Coronavirus disease (COVID-19) is a highly infectious disease that directly affects the lungs. Chest radiography (CXR) offers a fast way to observe clinical findings in the lungs, but diagnostic performance via CXR needs improvement, since identifying these findings is time-consuming and prone to human error. An artificial intelligence (AI) based tool may therefore be useful to aid the diagnosis of COVID-19 via CXR. In this study, we explored various deep learning (DL) approaches to classify COVID-19, other viral pneumonia, and normal cases. For both the original dataset and a lung-segmented dataset, pre-trained AlexNet, SqueezeNet, ResNet18, and DenseNet201 were transfer-trained and validated on the three classes. AlexNet showed the highest mean accuracy, 99.15±2.69%, and the fastest training time, 1.61±0.56 min, among the four pre-trained networks. In this study, we demonstrated the performance of four pre-trained neural networks in COVID-19 diagnosis with CXR images. Further, we plotted the class activation map (CAM) of each network and showed that lung-segmentation pre-processing improves the performance of the COVID-19 classifier by excluding background features.
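
The abstract does not state how the class activation maps were computed; below is a hedged Grad-CAM-style sketch in PyTorch, using AlexNet's last convolutional layer and a random stand-in image. The layer choice and hook-based mechanics are assumptions, not the paper's method.

```python
import torch
from torchvision import models

# Grad-CAM-style heatmap: ReLU of the gradient-weighted sum of the
# activation maps of the last conv layer.
model = models.alexnet(weights=None)
model.classifier[6] = torch.nn.Linear(4096, 3)   # COVID / pneumonia / normal
model.eval()

acts, grads = {}, {}
layer = model.features[10]                       # AlexNet's last conv layer

layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)                  # stand-in CXR image
score = model(x)[0].max()                        # top-class score
score.backward()

weights = grads["g"].mean(dim=(2, 3), keepdim=True)   # pool gradients
cam = torch.relu((weights * acts["a"]).sum(dim=1))    # (1, H', W') heatmap
print(cam.shape)
```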

An Approximate DRAM Architecture for Energy-efficient Deep Learning

  • Nguyen, Duy Thanh; Chang, Ik-Joon
    • Journal of Semiconductor Engineering, Vol. 1, No. 1, pp. 31-37, 2020
  • We present an approximate DRAM architecture for energy-efficient deep learning. Our key premise is that by bounding memory errors to non-critical information, we can significantly reduce DRAM refresh energy without compromising the recognition accuracy of deep neural networks. To validate this premise, we run extensive Monte-Carlo simulations for several well-known convolutional neural networks, namely LeNet, ConvNet, and AlexNet, with MNIST, CIFAR-10, and ImageNet inputs, respectively. We assume that the highest-order 8 bits (in single precision) and 4 bits (in half precision) are protected from retention errors under the proposed architecture, and then randomly inject bit errors into the unprotected bits at various bit-error rates. The recognition accuracies of these networks are successfully maintained up to bit-error rates on the order of 10⁻⁵. We also simulate DRAM energy during inference of these networks, where the proposed architecture shows potential savings of 10% to 37.5% of total DRAM energy.
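
The Monte-Carlo bit-error injection can be illustrated with a small NumPy sketch: keep the top 8 bits of each single-precision value intact and flip each remaining bit with probability equal to the bit-error rate. The function name and details are assumptions, not the paper's simulator.

```python
import numpy as np

def inject_bit_errors(weights, ber=1e-5, protected_msbs=8, rng=None):
    """Flip unprotected bits of float32 values with probability `ber`.

    The `protected_msbs` highest-order bits (sign and most of the
    exponent) are assumed refresh-protected, as in the paper.
    """
    rng = rng or np.random.default_rng()
    bits = weights.astype(np.float32).view(np.uint32)
    flips = np.zeros_like(bits)
    for b in range(32 - protected_msbs):   # bits 0..23 are unprotected
        mask = (rng.random(bits.shape) < ber).astype(np.uint32)
        flips |= mask << np.uint32(b)
    return (bits ^ flips).view(np.float32)

w = np.random.default_rng(0).standard_normal(10_000).astype(np.float32)
w_err = inject_bit_errors(w, ber=1e-5)
print(np.abs(w - w_err).max())   # inspect the worst-case perturbation
```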

감정 제어 가능한 종단 간 음성합성 시스템 (Emotion Transfer with Strength Control for End-to-End TTS)

  • 전예진; 이근배
    • Proceedings of the 33rd Annual Conference on Human and Language Technology (한글 및 한국어 정보처리 학술대회), KIISE SIG on Language Engineering, pp. 423-426, 2021
  • This paper introduces a method for controlling the strength of emotion based on Global Style Tokens (GST). Previous GST studies synthesized speech using a reference audio containing the desired style; however, because synthesis was possible only in the style of that reference audio, fine-grained emotion control was difficult. To solve this problem, this paper replaces the reference encoder of the GST model with residual blocks and AlexNet, a network from the computer vision field. AlexNet consists of five convolutional layers, but this work uses only four of them, excluding one. Listening evaluations (Mean Opinion Score) show that the proposed method enables control over the strength of emotion.
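
A hypothetical PyTorch sketch of such a truncated reference encoder follows, with four AlexNet-style convolutional layers applied to a mel-spectrogram treated as a one-channel image; all dimensions and the pooling head are illustrative, not the paper's design.

```python
import torch
import torch.nn as nn

class TruncatedAlexNetRefEncoder(nn.Module):
    """Illustrative reference encoder: 4 of AlexNet's 5 conv layers,
    applied to a mel-spectrogram, producing a style embedding."""
    def __init__(self, style_dim=128):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(1, 64, 11, stride=4, padding=2), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(64, 192, 5, padding=2), nn.ReLU(),
            nn.MaxPool2d(3, stride=2),
            nn.Conv2d(192, 384, 3, padding=1), nn.ReLU(),
            nn.Conv2d(384, 256, 3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.out = nn.Linear(256, style_dim)

    def forward(self, mel):               # mel: (batch, 1, n_mels, frames)
        h = self.pool(self.convs(mel)).flatten(1)
        return self.out(h)                # style embedding

enc = TruncatedAlexNetRefEncoder()
print(enc(torch.randn(2, 1, 80, 200)).shape)  # torch.Size([2, 128])
```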


통합메모리를 이용한 임베디드 환경에서의 딥러닝 프레임워크 성능 개선과 평가 (Performance Enhancement and Evaluation of a Deep Learning Framework on Embedded Systems using Unified Memory)

  • 이민학; 강우철
    • KIISE Transactions on Computing Practices (정보과학회 컴퓨팅의 실제 논문지), Vol. 23, No. 7, pp. 417-423, 2017
  • Recently, as embedded devices capable of deep learning have been commercialized, various studies on applying deep learning in the embedded domain are underway. However, compared with high-performance PC environments, embedded systems carry relatively low-spec CPU/GPU processors and memory, which imposes many constraints on applying deep learning technology. This paper experimentally evaluates the performance of various state-of-the-art deep learning networks on an embedded device in terms of time and power. It also proposes a method that improves real-time performance and reduces power consumption by eliminating memory copies, exploiting the architectural characteristic of embedded systems in which the host CPU and the GPU share memory. The proposed method was implemented by modifying Caffe, a representative open-source deep learning framework, and was evaluated on an NVIDIA Jetson TK1 with an embedded GPU. The experiments showed clear performance improvements for most deep learning networks; in particular, for the memory-intensive AlexNet, image recognition latency dropped by about 33% and power consumption by about 50%.
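
The paper's change lives inside Caffe's C++ memory management, but the overhead it removes can be roughly illustrated in PyTorch by timing inference with and without an explicit host-to-device copy. This is an illustration of the copy cost only, not the paper's unified-memory implementation; it requires a CUDA device and numbers vary by platform.

```python
import time
import torch
from torchvision import models

assert torch.cuda.is_available(), "requires a CUDA device"

model = models.alexnet(weights=None).cuda().eval()
host_batch = torch.randn(32, 3, 224, 224)   # input resides in host memory
device_batch = host_batch.cuda()            # input already on the device

def timed(fn, iters=20):
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

with torch.no_grad():
    with_copy = timed(lambda: model(host_batch.cuda()))  # copy each call
    no_copy = timed(lambda: model(device_batch))         # no memcpy

print(f"with copy: {with_copy*1e3:.2f} ms, without: {no_copy*1e3:.2f} ms")
```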

Transfer learning for crack detection in concrete structures: Evaluation of four models

  • Ali Bagheri; Mohammadreza Mosalmanyazdi; Hasanali Mosalmanyazdi
    • Structural Engineering and Mechanics, Vol. 91, No. 2, pp. 163-175, 2024
  • The objective of this research is to improve public safety in civil engineering by recognizing fractures in concrete structures quickly and correctly. The study offers a new crack detection method based on advanced image processing and machine learning techniques, specifically transfer learning with convolutional neural networks (CNNs). Four pre-trained models (VGG16, AlexNet, ResNet18, and DenseNet161) were fine-tuned to detect fractures in concrete surfaces. These models consistently produced accuracy rates greater than 80%, showing their ability to automate fracture identification and potentially reduce the cost of structural failures. Furthermore, the study expands its scope beyond crack detection to assessing concrete health, using a dataset with a wide range of surface defects and anomalies, including cracks. Notably, using VGG16, chosen as the most effective network architecture in the first phase, the study achieves excellent accuracy in classifying concrete health, demonstrating the model's satisfactory performance even in more complex scenarios.
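
A minimal transfer-learning sketch in the spirit of the paper: freeze VGG16's ImageNet features and retrain only a two-way crack/no-crack head. The batch, labels, and hyperparameters are stand-ins, and loading the pretrained weights downloads them on first use.

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse ImageNet features; train only the new classification head.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False                  # freeze the backbone

model.classifier[6] = nn.Linear(4096, 2)     # crack / no-crack head

optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a random stand-in batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```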

Interworking technology of neural network and data among deep learning frameworks

  • Park, Jaebok; Yoo, Seungmok; Yoon, Seokjin; Lee, Kyunghee; Cho, Changsik
    • ETRI Journal, Vol. 41, No. 6, pp. 760-770, 2019
  • Based on the growing demand for neural network technologies, various neural network inference engines are being developed. However, each inference engine has its own neural network storage format, creating a growing demand for standardization to solve this problem. This study presents interworking techniques for ensuring the compatibility of neural networks and data among the various deep learning frameworks. The proposed technique standardizes the graph expression grammar and learning-data storage format using the Neural Network Exchange Format (NNEF) of Khronos. The proposed converter includes a lexical analyzer, a syntax analyzer, and a parser. The NNEF parser converts neural network information into a parse tree and quantizes the data. To validate the proposed system, we verified that MNIST classification runs immediately after importing AlexNet's network and its trained data. This study therefore contributes an efficient design technique for a converter that can execute a neural network and trained data in various frameworks, regardless of each framework's storage format.
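
The converter's two steps, parsing a graph description into a tree and quantizing learned data, can be illustrated with a toy sketch. The line-based grammar below is invented for illustration and is not the actual NNEF syntax; the int8 scheme is likewise only an example of quantization.

```python
import re
import numpy as np

# Toy "op(name, arg=value, ...)" graph description, one node per line.
TOY_GRAPH = """
conv(conv1, input=data, filters=64, kernel=11)
relu(relu1, input=conv1)
linear(fc8, input=relu1, units=1000)
"""

def parse(text):
    """Parse each line into a node dict; the list of nodes is the tree."""
    tree = []
    for line in filter(None, map(str.strip, text.splitlines())):
        op, args = re.match(r"(\w+)\((.*)\)", line).groups()
        node = {"op": op, "args": {}}
        for i, item in enumerate(a.strip() for a in args.split(",")):
            key, _, val = item.partition("=")
            node["args"][key if val else f"arg{i}"] = val or key
        tree.append(node)
    return tree

def quantize_int8(w):
    """Symmetric int8 quantization of a float weight array."""
    scale = np.abs(w).max() / 127.0
    return np.round(w / scale).astype(np.int8), scale

for node in parse(TOY_GRAPH):
    print(node)
w = np.random.default_rng(0).standard_normal(4).astype(np.float32)
print(quantize_int8(w))
```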