• Title/Summary/Keyword: VGG (Visual Geometry Group)


An Optimized Deep Learning Techniques for Analyzing Mammograms

  • Satish Babu Bandaru; Natarajasivan. D; Rama Mohan Babu. G
    • International Journal of Computer Science & Network Security / v.23 no.7 / pp.39-48 / 2023
  • Breast cancer screening makes extensive use of mammography. Even so, there has been considerable debate about this application's starting age and screening interval. Transfer learning, a deep learning technique, is employed to transfer the knowledge learnt from source tasks to target tasks. For solving real-world problems, deep neural networks have demonstrated superior performance compared with standard machine learning algorithms. However, the architecture of a deep neural network has to be defined with the problem domain knowledge in mind, which normally consumes a lot of time and computational resources. This work evaluated the efficacy of deep neural networks such as the Visual Geometry Group Network (VGG Net), the Residual Network (ResNet), and the Inception network for classifying mammograms. It then proposed optimizing ResNet with the Teaching-Learning-Based Optimization (TLBO) algorithm in order to predict breast cancer from mammogram images. The proposed TLBO-ResNet is an optimized ResNet with faster convergence than other evolutionary methods for mammogram classification.
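
As a rough illustration of the optimizer named in this abstract, the sketch below implements a generic Teaching-Learning-Based Optimization loop in NumPy on a toy objective. The population size, bounds, and the sphere objective are placeholders, not the paper's actual ResNet hyperparameter search.

```python
import numpy as np

def tlbo(objective, bounds, pop_size=20, iters=100, seed=0):
    """Minimal Teaching-Learning-Based Optimization (minimization)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([objective(x) for x in pop])

    for _ in range(iters):
        # Teacher phase: move every learner toward the current best solution.
        teacher = pop[fit.argmin()]
        mean = pop.mean(axis=0)
        tf = rng.integers(1, 3, size=pop_size)[:, None]   # teaching factor in {1, 2}
        r = rng.random((pop_size, dim))
        cand = np.clip(pop + r * (teacher - tf * mean), lo, hi)
        cand_fit = np.array([objective(x) for x in cand])
        better = cand_fit < fit
        pop[better], fit[better] = cand[better], cand_fit[better]

        # Learner phase: each learner interacts with a random partner.
        partners = rng.permutation(pop_size)
        for i, j in enumerate(partners):
            if i == j:
                continue
            r = rng.random(dim)
            step = (pop[i] - pop[j]) if fit[i] < fit[j] else (pop[j] - pop[i])
            cand = np.clip(pop[i] + r * step, lo, hi)
            cf = objective(cand)
            if cf < fit[i]:
                pop[i], fit[i] = cand, cf

    best = fit.argmin()
    return pop[best], fit[best]

# Toy usage: a sphere function stands in for the (hypothetical) validation-loss
# objective that the paper optimizes over ResNet settings.
if __name__ == "__main__":
    bounds = np.array([[-5.0, 5.0]] * 3)
    x_best, f_best = tlbo(lambda x: float(np.sum(x ** 2)), bounds)
    print(x_best, f_best)
```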

Multi-Modal based ViT Model for Video Data Emotion Classification (영상 데이터 감정 분류를 위한 멀티 모달 기반의 ViT 모델)

  • Yerim Kim; Dong-Gyu Lee; Seo-Yeong Ahn; Jee-Hyun Kim
    • Proceedings of the Korean Society of Computer Information Conference / 2023.01a / pp.9-12 / 2023
  • Video content increasingly affects viewers' psychological state, not only through the message itself but also through the emotions conveyed by the way the message is presented. Accordingly, research on classifying the emotions of video content is being actively conducted. This paper proposes a multi-modal video emotion classification model that classifies YouTube videos, from one of the most popular video streaming platforms, into seven emotion categories by extracting audio and image data separately from each video and using them for training. A pre-trained VGG (Visual Geometry Group) model and a ViT (Vision Transformer) model are used as the audio and image classification models, respectively, and the results are compared after merging them with the merging method proposed in this paper. Unlike existing video emotion classification approaches, the proposed method classifies emotions without recognizing the speaker in the video and achieves a top accuracy of 48%.
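
A minimal sketch of the multi-modal idea described above, assuming a recent torchvision: a pretrained VGG16 (audio spectrogram branch) and ViT-B/16 (video frame branch) each receive a 7-class head and their logits are averaged. The averaging merge and the 224x224 inputs are assumptions; the paper's own merging method is not reproduced here.

```python
import torch
import torch.nn as nn
from torchvision import models

class LateFusionEmotionNet(nn.Module):
    """Hypothetical late-fusion model: VGG16 on audio spectrograms, ViT-B/16 on frames."""
    def __init__(self, num_classes=7):
        super().__init__()
        self.audio_net = models.vgg16(weights="IMAGENET1K_V1")
        self.audio_net.classifier[6] = nn.Linear(4096, num_classes)
        self.image_net = models.vit_b_16(weights="IMAGENET1K_V1")
        self.image_net.heads.head = nn.Linear(self.image_net.heads.head.in_features, num_classes)

    def forward(self, spec, frame):
        # Average the two modality logits (one simple merge strategy; the paper
        # compares its own merging method, which is not reproduced here).
        return 0.5 * (self.audio_net(spec) + self.image_net(frame))

# Usage with dummy 224x224 inputs for both branches:
# logits = LateFusionEmotionNet()(torch.randn(1, 3, 224, 224), torch.randn(1, 3, 224, 224))
```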

A Deep Learning-Based Rate Control for HEVC Intra Coding

  • Marzuki, Ismail; Sim, Donggyu
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2019.11a / pp.180-181 / 2019
  • This paper proposes a rate control algorithm for intra-coded frames in an HEVC encoder using a deep learning approach. The proposed algorithm is designed for CTU-level bit allocation in intra frames by considering visual features spatially and temporally. The features are generated using the Visual Geometry Group network (VGG-16) with deep convolutional layers and are then used for bit allocation for each CTU within an intra frame. According to our experiments, the proposed algorithm achieves a -2.04% luma-component BD-rate gain with minimal bit-accuracy loss against the HM-16.20 rate control model.
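
To make the feature step concrete, here is a hedged sketch of pulling a per-CTU descriptor from the VGG-16 convolutional stack in PyTorch. The 64x64 CTU size and the global-average pooling are assumptions standing in for the paper's exact feature pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical per-CTU feature extractor: run the VGG-16 convolutional stack on a
# 64x64 CTU patch and pool it to a fixed-length vector that a bit-allocation
# regressor could consume.
vgg_features = models.vgg16(weights="IMAGENET1K_V1").features.eval()

@torch.no_grad()
def ctu_feature(patch: torch.Tensor) -> torch.Tensor:
    """patch: (N, 3, 64, 64) float tensor, ImageNet-normalized."""
    fmap = vgg_features(patch)                                    # (N, 512, 2, 2) for 64x64 input
    return nn.functional.adaptive_avg_pool2d(fmap, 1).flatten(1)  # (N, 512)

feat = ctu_feature(torch.randn(4, 3, 64, 64))
print(feat.shape)  # torch.Size([4, 512])
```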

A Deep Learning-Based Image Semantic Segmentation Algorithm

  • Chaoqun, Shen; Zhongliang, Sun
    • Journal of Information Processing Systems / v.19 no.1 / pp.98-108 / 2023
  • This paper is an attempt to design a segmentation method based on fully convolutional networks (FCN) and an attention mechanism. The first five layers of the Visual Geometry Group (VGG) 16 network serve as the encoding part of the semantic segmentation network, with a convolutional layer used to replace pooling to reduce the loss of image feature information. The up-sampling and deconvolution unit of the FCN is then used as the decoding part of the network. In the deconvolution process, a skip structure is used to fuse different levels of information, and the attention mechanism is incorporated to reduce accuracy loss. Finally, the segmentation results are obtained through pixel-level classification. The results show that our method outperforms the comparison methods in mean pixel accuracy (MPA) and mean intersection over union (MIOU).
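
The sketch below shows the general encoder-decoder shape described in this abstract: VGG-16 convolutional blocks as the encoder, transposed convolutions as the decoder, and one skip connection fusing the pool4 features. The attention module and the pooling-to-convolution replacement from the paper are omitted, and the 21-class output is an arbitrary assumption.

```python
import torch
import torch.nn as nn
from torchvision import models

class MiniFCN(nn.Module):
    """Simplified FCN-style segmenter with a VGG-16 encoder and one skip connection."""
    def __init__(self, num_classes=21):
        super().__init__()
        vgg = models.vgg16(weights="IMAGENET1K_V1").features
        self.to_pool4 = vgg[:24]          # conv blocks 1-4: (N, 512, H/16, W/16)
        self.to_pool5 = vgg[24:]          # conv block 5 + pool: (N, 512, H/32, W/32)
        self.score5 = nn.Conv2d(512, num_classes, 1)
        self.score4 = nn.Conv2d(512, num_classes, 1)
        self.up2 = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)
        self.up16 = nn.ConvTranspose2d(num_classes, num_classes, 32, stride=16, padding=8)

    def forward(self, x):
        p4 = self.to_pool4(x)
        p5 = self.to_pool5(p4)
        fused = self.up2(self.score5(p5)) + self.score4(p4)   # skip connection at pool4
        return self.up16(fused)                               # back to input resolution

logits = MiniFCN()(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 21, 224, 224])
```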

VGG-based BAPL Score Classification of 18F-Florbetaben Amyloid Brain PET

  • Kang, Hyeon; Kim, Woong-Gon; Yang, Gyung-Seung; Kim, Hyun-Woo; Jeong, Ji-Eun; Yoon, Hyun-Jin; Cho, Kook; Jeong, Young-Jin; Kang, Do-Young
    • Biomedical Science Letters / v.24 no.4 / pp.418-425 / 2018
  • Amyloid brain positron emission tomography (PET) images are visually and subjectively analyzed by physicians, with considerable time and effort, to determine β-amyloid (Aβ) deposition. We designed a convolutional neural network (CNN) model that predicts Aβ-positive and Aβ-negative status. We performed 18F-florbetaben (FBB) brain PET on controls and patients (n=176) with mild cognitive impairment and Alzheimer's disease (AD). We classified the brain PET images visually according to the brain amyloid plaque load (BAPL) score. We designed a visual geometry group (VGG16) model for the visual assessment of slice-based samples. To evaluate only the gray matter and not the white matter, gray matter masking (GMM) was applied to the slice-based standard samples. All performance metrics were higher with GMM than without GMM (accuracy 92.39 vs. 89.60, sensitivity 87.93 vs. 85.76, and specificity 98.94 vs. 95.32). For the patient-based standard, accuracy was almost the same (89.78 vs. 89.21), sensitivity was lower (93.97 vs. 99.14), and specificity was higher (81.67 vs. 70.00). The area under the curve of the VGG16 model that observed only the gray matter region was slightly higher than that of the model that observed the whole brain, for both slice-based and patient-based decision processes. Amyloid brain PET images can therefore be appropriately analyzed using a CNN model to predict Aβ-positive and Aβ-negative status.
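
A minimal sketch of the slice-level classifier described above, assuming a precomputed binary gray-matter mask that is multiplied into each 224x224 PET slice before a VGG16 with a two-class (Aβ-negative/Aβ-positive) head. Preprocessing, normalization, and training details differ from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Assumed setup: ImageNet-pretrained VGG16 with a replaced two-class head.
model = models.vgg16(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(4096, 2)          # Aβ-negative / Aβ-positive
model.eval()

@torch.no_grad()
def classify_slice(pet_slice: torch.Tensor, gm_mask: torch.Tensor) -> torch.Tensor:
    """pet_slice, gm_mask: (N, 1, 224, 224); mask holds 0/1 gray-matter membership."""
    masked = pet_slice * gm_mask                  # gray matter masking (GMM) step
    rgb = masked.repeat(1, 3, 1, 1)               # VGG expects 3 input channels
    return model(rgb).softmax(dim=1)

probs = classify_slice(torch.rand(2, 1, 224, 224), torch.ones(2, 1, 224, 224))
print(probs.shape)  # torch.Size([2, 2])
```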

A Comparative Study of Deep Learning Techniques for Alzheimer's disease Detection in Medical Radiography

  • Amal Alshahrani; Jenan Mustafa; Manar Almatrafi; Layan Albaqami; Raneem Aljabri; Shahad Almuntashri
    • International Journal of Computer Science & Network Security / v.24 no.5 / pp.53-63 / 2024
  • Alzheimer's disease is a brain disorder that worsens over time and affects millions of people around the world. It leads to a gradual deterioration in memory, thinking ability, and behavioral and social skills until the person loses the ability to function in society. Technological progress in medical imaging and the use of artificial intelligence have made it possible to detect Alzheimer's disease from medical images such as magnetic resonance imaging (MRI). Deep learning algorithms, especially convolutional neural networks (CNNs), have shown great success in analyzing medical images for disease diagnosis and classification, since CNNs can recognize patterns and objects in images, which makes them well suited for this study. In this paper, we compare the performance of Alzheimer's disease detection using two deep learning methods: You Only Look Once (YOLO), a CNN-based object recognition algorithm, and the Visual Geometry Group network (VGG16), a deep convolutional neural network primarily used for image classification, instead of using a plain CNN as in previous research. The results showed different levels of accuracy for the various versions of YOLO and for the VGG16 model. YOLO v5 reached 56.4% accuracy at 50 epochs and 61.5% accuracy at 100 epochs. YOLO v8, used for classification, reached 84% overall accuracy at 100 epochs. YOLO v9, used for object detection, reached an overall accuracy of 84.6%. The VGG16 model reached 99% training accuracy after 25 epochs but only 78% testing accuracy. Hence, the best model overall is YOLO v9, with the highest overall accuracy of 86.1%.
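
For the VGG16 branch of the comparison, a hedged fine-tuning sketch is shown below. The `alzheimer_mri/train` folder layout, batch size, and learning rate are hypothetical; only the 25 epochs mirror the figure reported in the abstract.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical dataset path: one subfolder per diagnostic class.
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_ds = datasets.ImageFolder("alzheimer_mri/train", transform=tfm)
loader = DataLoader(train_ds, batch_size=32, shuffle=True)

# Fine-tune an ImageNet-pretrained VGG16 with a head sized to the dataset's classes.
model = models.vgg16(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(4096, len(train_ds.classes))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(25):                  # the abstract reports 25 training epochs
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
```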

Super Resolution Performance Analysis of GAN according to Feature Extractor (특징 추출기에 따른 SRGAN의 초해상 성능 분석)

  • Park, Sung-Wook; Kim, Jun-Yeong; Park, Jun; Jung, Se-Hoon; Sim, Chun-Bo
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.501-503 / 2022
  • Super resolution is a technique for synthesizing a high-resolution image from a low-resolution one. Deep learning has also been applied to super resolution, beginning with the SRCNN (Super Resolution Convolutional Neural Network) model published in 2014. Since then, models such as SRCAE (Super Resolution Convolutional Autoencoders), based on an autoencoder structure, and SRGAN (Super Resolution Generative Adversarial Networks), based on a GAN (Generative Adversarial Networks) structure that forces synthesized images to be statistically indistinguishable from real ones, have been proposed. All of them outperform SRCNN, but even SRGAN, the strongest among them, does not yet deliver perfect results. In this paper, to improve the performance of SRGAN, we replace the pre-trained feature extractor, the VGG (Visual Geometry Group)-19 model, and compare the resulting performance with the original model. From the experiments, we expect to find a model that synthesizes images with sharper contours that are closer to the real images than those of the VGG-19-based model.
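
For context on what the "feature extractor" does in SRGAN, below is a hedged sketch of the VGG-based content loss: the super-resolved and ground-truth images are passed through a frozen, truncated VGG-19 and compared with MSE. The truncation index (conv5_4, before its activation) and the input sizes are assumptions; the paper's replacement extractor is not shown.

```python
import torch
import torch.nn as nn
from torchvision import models

class VGGPerceptualLoss(nn.Module):
    """SRGAN-style content loss: MSE between VGG-19 feature maps of SR and HR images."""
    def __init__(self, layer_index: int = 35):
        super().__init__()
        vgg = models.vgg19(weights="IMAGENET1K_V1").features[:layer_index].eval()
        for p in vgg.parameters():
            p.requires_grad_(False)       # the feature extractor stays frozen
        self.vgg = vgg

    def forward(self, sr, hr):
        return nn.functional.mse_loss(self.vgg(sr), self.vgg(hr))

# Usage with dummy 96x96 patches (a common SRGAN training crop size, assumed here):
loss = VGGPerceptualLoss()(torch.rand(1, 3, 96, 96), torch.rand(1, 3, 96, 96))
print(loss.item())
```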

Construction of a Bark Dataset for Automatic Tree Identification and Developing a Convolutional Neural Network-based Tree Species Identification Model (수목 동정을 위한 수피 분류 데이터셋 구축과 합성곱 신경망 기반 53개 수종의 동정 모델 개발)

  • Kim, Tae Kyung; Baek, Gyu Heon; Kim, Hyun Seok
    • Journal of Korean Society of Forest Science / v.110 no.2 / pp.155-164 / 2021
  • Many studies have been conducted on developing automatic plant identification algorithms that apply machine learning to various plant features, such as leaves and flowers. Unlike other plant characteristics, bark shows little change regardless of the season and is maintained over a long period. Nevertheless, bark has a complex appearance with large variation depending on the environment, and there is insufficient material available for training algorithms. Here, in addition to the previously published bark image dataset, BarkNet v.1.0, further bark images were collected, and a dataset consisting of 53 tree species that can easily be observed in Korea is presented. A convolutional neural network (CNN) was trained and tested on the dataset, and the factors that interfere with the model's performance were identified. VGG-16 and VGG-19 were used as the CNN architectures. As a result, VGG-16 achieved 90.41% accuracy and VGG-19 achieved 92.62%. When tested on new tree images that do not exist in the original dataset but belong to the same genus or family, more than 80% of cases were successfully identified at the genus or family level. Meanwhile, the model tended to misclassify when there were distracting features in the image, such as leaves, mosses, and knots. For these cases, we propose that random cropping and classification by majority vote are valid ways to reduce errors in training and inference.
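
A small sketch of the majority-vote inference mentioned at the end of the abstract, assuming a VGG-19 with a 53-class head, 224x224 random crops, and a bark photo at least 224 pixels on each side; the crop count and sizes are placeholders.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Assumed classifier: ImageNet-pretrained VGG-19 with a 53-species head.
model = models.vgg19(weights="IMAGENET1K_V1")
model.classifier[6] = torch.nn.Linear(4096, 53)
model.eval()

crop = transforms.Compose([transforms.RandomCrop(224), transforms.ToTensor()])

@torch.no_grad()
def predict_majority(image: Image.Image, n_crops: int = 9) -> int:
    """Classify several random crops of one bark photo and return the majority vote."""
    votes = torch.stack([model(crop(image).unsqueeze(0)).argmax(1) for _ in range(n_crops)])
    return int(votes.flatten().mode().values)
```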

Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network (CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식)

  • Guiyoung Son; Soonil Kwon
    • The Transactions of the Korea Information Processing Society / v.13 no.6 / pp.284-290 / 2024
  • Speech emotion recognition (SER) is a technique used to analyze a speaker's voice patterns, including vibration, intensity, and tone, to determine their emotional state. Interest in artificial intelligence (AI) techniques has increased, and they are now widely used in medicine, education, industry, and the military. Nevertheless, existing studies have attained impressive results mainly by using acted speech recorded by skilled actors in controlled environments for various scenarios. There is a mismatch between acted and spontaneous speech, since acted speech includes more explicit emotional expression than spontaneous speech; for this reason, spontaneous speech emotion recognition remains a challenging task. This paper aims to perform emotion recognition and improve performance using spontaneous speech data. To this end, we implement deep learning-based speech emotion recognition with the VGG (Visual Geometry Group) network after converting the 1-dimensional audio signal into a 2-dimensional spectrogram image. The experimental evaluation is performed on the Korean spontaneous emotional speech database from AI-Hub, which consists of 7 emotions: joy, love, anger, fear, sadness, surprise, and neutral. Using the time-frequency 2-dimensional spectrogram, we achieved average accuracies of 83.5% for adults and 73.0% for young people. In conclusion, our findings demonstrate that the suggested framework outperforms current state-of-the-art techniques for spontaneous speech and shows promising performance despite the difficulty of quantifying emotional expression in spontaneous speech.
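
A hedged sketch of the 1-D-to-2-D step described here: librosa converts the waveform to a log-mel spectrogram, which is resized to VGG16's 224x224 input and replicated across three channels before classification into the 7 emotions. The sampling rate, mel settings, and resizing are assumptions.

```python
import librosa
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

# Assumed classifier: ImageNet-pretrained VGG16 with a 7-emotion head
# (joy, love, anger, fear, sadness, surprise, neutral).
model = models.vgg16(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(4096, 7)
model.eval()

def wav_to_input(path: str) -> torch.Tensor:
    """Turn a 1-D waveform into a 3-channel 224x224 log-mel spectrogram tensor."""
    y, sr = librosa.load(path, sr=16000)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
    logmel = librosa.power_to_db(mel, ref=np.max)                   # (128, T)
    img = torch.tensor(logmel, dtype=torch.float32)[None, None]     # (1, 1, 128, T)
    img = nn.functional.interpolate(img, size=(224, 224))           # VGG input size
    return img.repeat(1, 3, 1, 1)                                   # replicate to 3 channels

# logits = model(wav_to_input("sample.wav"))   # hypothetical audio file
```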

Two person Interaction Recognition Based on Effective Hybrid Learning

  • Ahmed, Minhaz Uddin; Kim, Yeong Hyeon; Kim, Jin Woo; Bashar, Md Rezaul; Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.2 / pp.751-770 / 2019
  • Action recognition is an essential task in computer vision due to the variety of prospective applications, such as security surveillance, machine learning, and human-computer interaction. The availability of more video data than ever before and the strong performance of deep convolutional neural networks also make it essential for action recognition in video. Unfortunately, limited hand-crafted video features and the scarcity of benchmark datasets make the multi-person action recognition task in video data challenging. In this work, we propose a deep convolutional neural network-based Effective Hybrid Learning (EHL) framework for two-person interaction classification in video data. Our approach exploits a pre-trained network model (VGG16 from the University of Oxford Visual Geometry Group) and extends Faster R-CNN (a state-of-the-art region-based convolutional neural network detector). We combine a semi-supervised learning method with an active learning method to improve overall performance. Numerous types of two-person interactions exist in the real world, which makes this a challenging task. In our experiments, we consider a limited number of actions, such as hugging, fighting, linking arms, talking, and kidnapping, in both simple and complex environments. We show that our trained model with an active semi-supervised learning architecture gradually improves performance. In a simple environment, using an Intelligent Technology Laboratory (ITLab) dataset from Inha University, accuracy increased to 95.6%, and in a complex environment it reached 81%. Our method reduces data-labeling time compared to supervised learning methods on the ITLab dataset. We also conduct extensive experiments on human action recognition benchmarks such as the UT-Interaction and HMDB51 datasets and obtain better performance than state-of-the-art approaches.
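
To illustrate the two-stage detect-then-classify idea, the sketch below uses torchvision's off-the-shelf Faster R-CNN (ResNet50-FPN, standing in for the paper's VGG16-extended detector) to find two people, then classifies the cropped union box with a VGG16 head over the five interactions named in the abstract. The crop-and-resize scheme is an assumption, not the paper's EHL pipeline.

```python
import torch
import torch.nn as nn
from torchvision import models

# Stand-in detector and a hypothetical 5-class interaction classifier
# (hug, fight, link arms, talk, kidnap).
detector = models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
classifier = models.vgg16(weights="IMAGENET1K_V1")
classifier.classifier[6] = nn.Linear(4096, 5)
classifier.eval()

@torch.no_grad()
def classify_interaction(frame: torch.Tensor) -> int:
    """frame: (3, H, W) float tensor in [0, 1]."""
    det = detector([frame])[0]
    persons = det["boxes"][det["labels"] == 1][:2]        # COCO label 1 == person
    if len(persons) < 2:
        return -1                                          # fewer than two people found
    x0, y0 = persons[:, :2].min(0).values.int().tolist()   # union box of the two people
    x1, y1 = persons[:, 2:].max(0).values.int().tolist()
    crop = frame[:, y0:y1, x0:x1].unsqueeze(0)
    crop = nn.functional.interpolate(crop, size=(224, 224))
    return int(classifier(crop).argmax(1))
```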