• Title/Abstract/Keyword: VGG Net

Search results: 89 items

Road Damage Detection and Classification based on Multi-level Feature Pyramids

  • Yin, Junru;Qu, Jiantao;Huang, Wei;Chen, Qiqiang
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 2 / pp.786-799 / 2021
  • Road damage detection is important for road maintenance. With the development of deep learning, more and more road damage detection methods have been proposed, such as Fast R-CNN, Faster R-CNN, Mask R-CNN, and RetinaNet. However, because shallow and deep layers cannot be exploited at the same time, existing methods do not perform well in detecting objects with few samples, and they cannot produce highly accurate detection bounding boxes. This paper presents a multi-level feature pyramid method based on M2Det. Because the feature layers have a multi-scale, multi-level architecture, feature layers containing richer information and more salient features can be extracted. Moreover, an attention mechanism is used to improve the accuracy of local bounding boxes on the dataset. Experimental results show that the proposed method outperforms current state-of-the-art methods.
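
A minimal PyTorch sketch of the general idea, not the authors' implementation: fusing a shallow (high-resolution) and a deep (semantic) feature map and reweighting the result with squeeze-and-excitation-style channel attention. All layer sizes and the `FusePyramid`/`ChannelAttention` names are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention that reweights channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))      # global average pool -> (N, C)
        return x * w[:, :, None, None]       # rescale each channel

class FusePyramid(nn.Module):
    """Fuse a shallow (high-res) and a deep (low-res) feature map."""
    def __init__(self, c_shallow, c_deep, c_out):
        super().__init__()
        self.reduce = nn.Conv2d(c_shallow + c_deep, c_out, kernel_size=1)
        self.attn = ChannelAttention(c_out)

    def forward(self, shallow, deep):
        deep_up = F.interpolate(deep, size=shallow.shape[2:], mode="nearest")
        fused = self.reduce(torch.cat([shallow, deep_up], dim=1))
        return self.attn(fused)

# Usage with dummy feature maps standing in for backbone outputs:
shallow = torch.randn(1, 256, 64, 64)   # early layer: fine detail
deep = torch.randn(1, 512, 16, 16)      # late layer: semantics
out = FusePyramid(256, 512, 256)(shallow, deep)
print(out.shape)  # torch.Size([1, 256, 64, 64])
```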

컨볼루션 신경망 모델을 이용한 분류에서 입력 영상의 종류가 정확도에 미치는 영향 (The Effect of Type of Input Image on Accuracy in Classification Using Convolutional Neural Network Model)

  • 김민정;김정훈;박지은;정우연;이종민
    • 대한의용생체공학회:의공학회지 / Vol. 42, No. 4 / pp.167-174 / 2021
  • The purpose of this study is to classify TIFF, PNG, and JPEG images using deep learning and to compare accuracy by verifying classification performance. TIFF, PNG, and JPEG images converted from chest X-ray DICOM images were fed to five deep neural network models widely used for image recognition and classification. The data consisted of a total of 4,000 X-ray images, converted from DICOM into 16-bit TIFF and 8-bit PNG and JPEG images. The learning models were the CNN models VGG16, ResNet50, InceptionV3, DenseNet121, and EfficientNetB0. For TIFF images, the accuracies of the five convolutional neural network models were 99.86%, 99.86%, 99.99%, 100%, and 99.89%; for PNG images, 99.88%, 100%, 99.97%, 99.87%, and 100%; and for JPEG images, 100%, 100%, 99.96%, 99.89%, and 100%. Validation of classification performance using test data showed 100% accuracy, precision, recall, and F1 score. Our results show that when DICOM images are converted to TIFF, PNG, or JPEG and used for training after preprocessing, training works well in all formats. In medical imaging research using deep learning, classification performance is therefore not affected by the format into which DICOM images are converted.
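
For readers who want to reproduce this kind of preprocessing, the following is a hedged sketch of converting a chest X-ray DICOM file into 16-bit TIFF and 8-bit PNG/JPEG using pydicom and Pillow. File names are placeholders, and windowing is simplified to min-max normalization.

```python
import numpy as np
import pydicom
from PIL import Image

# "chest_xray.dcm" is a placeholder path, not a file from the study.
ds = pydicom.dcmread("chest_xray.dcm")
arr = ds.pixel_array.astype(np.float64)

# Normalize to [0, 1] (simplified; real pipelines may apply DICOM windowing).
arr = (arr - arr.min()) / (arr.max() - arr.min())

# 16-bit TIFF keeps the most gray-level information.
Image.fromarray((arr * 65535).astype(np.uint16)).save("image.tiff")

# 8-bit PNG (lossless) and JPEG (lossy) reduce the gray-level range.
img8 = Image.fromarray((arr * 255).astype(np.uint8))
img8.save("image.png")
img8.save("image.jpg", quality=95)
```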

Sex determination from lateral cephalometric radiographs using an automated deep learning convolutional neural network

  • Khazaei, Maryam;Mollabashi, Vahid;Khotanlou, Hassan;Farhadian, Maryam
    • Imaging Science in Dentistry / Vol. 52, No. 3 / pp.239-244 / 2022
  • Purpose: Despite the proliferation of morphometric and anthropometric methods for sex identification based on linear, angular, and regional measurements of various parts of the body, these methods are subject to error arising from the observer's knowledge and expertise. This study explored the possibility of automated sex determination using convolutional neural networks (CNNs) based on lateral cephalometric radiographs. Materials and Methods: Lateral cephalometric radiographs of 1,476 Iranian subjects (794 women and 682 men) aged 18 to 49 years were included. The radiographs served as the network input, with an output layer comprising 2 classes (male and female). Eighty percent of the data was used as a training set and the rest as a test set. Hyperparameter tuning of each network was performed after the preprocessing and data augmentation steps. The predictive performance of different architectures (DenseNet, ResNet, and VGG) was evaluated based on their accuracy on the test sets. Results: The CNN based on the DenseNet121 architecture, with an overall accuracy of 90%, had the best predictive power for sex determination. The prediction accuracy of this model was almost equal for men and women. Furthermore, with all architectures, the use of transfer learning improved predictive performance. Conclusion: The results confirmed that a CNN can predict a person's sex with high accuracy. This prediction was independent of human bias because feature extraction was done automatically. However, for more accurate sex determination on a wider scale, further studies with larger sample sizes are desirable.
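
A hedged Keras sketch of the transfer-learning setup the abstract describes, using DenseNet121 with a frozen pretrained backbone and a two-class head. The input size, dropout rate, and optimizer are assumptions, not the authors' exact configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.DenseNet121(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # transfer learning: freeze pretrained features

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(2, activation="softmax"),  # output layer: male / female
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=test_ds, epochs=20)  # 80/20 split
```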

COVID-19 Diagnosis from CXR images through pre-trained Deep Visual Embeddings

  • Khalid, Shahzaib;Syed, Muhammad Shehram Shah;Saba, Erum;Pirzada, Nasrullah
    • International Journal of Computer Science & Network Security / Vol. 22, No. 5 / pp.175-181 / 2022
  • COVID-19 is an acute respiratory syndrome that affects the host's breathing and respiratory system. The first case of the novel disease was reported in 2019; it created a state of emergency around the world and was declared a global pandemic within months of the first case. The disease also triggered a global socioeconomic crisis, making it imperative for professionals to take the measures necessary for early diagnosis. The conventional diagnosis for COVID-19 is Polymerase Chain Reaction (PCR) testing. However, in many rural areas these tests are unavailable or take a long time to return results. Hence, we propose a COVID-19 classification system based on machine learning and transfer-learning models. The proposed approach identifies individuals with COVID-19 and distinguishes them from healthy individuals with the help of Deep Visual Embeddings (DVE). Five state-of-the-art models, VGG-19, ResNet50, Inceptionv3, MobileNetv3, and EfficientNetB7, were used in this study along with five different pooling schemes to perform deep feature extraction. In addition, the features are normalized using standard scaling, and 4-fold cross-validation is used to validate performance over multiple versions of the validation data. The best results of 88.86% UAR, 88.27% specificity, 89.44% sensitivity, 88.62% accuracy, 89.06% precision, and 87.52% F1-score were obtained using ResNet-50 with average pooling and a class-weighted logistic regression classifier.
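
The pipeline described above maps naturally onto a few lines of Keras and scikit-learn. The following sketch, with random placeholder data standing in for the CXR set, extracts average-pooled ResNet50 embeddings, applies standard scaling, and evaluates a class-weighted logistic regression with 4-fold cross-validation.

```python
import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# ResNet50 as a frozen feature extractor with average pooling.
extractor = tf.keras.applications.ResNet50(
    weights="imagenet", include_top=False, pooling="avg")

X_img = np.random.rand(40, 224, 224, 3).astype("float32")  # placeholder CXRs
y = np.random.randint(0, 2, size=40)                        # COVID / healthy

# Deep visual embeddings: (N, 2048) feature vectors.
X = extractor.predict(
    tf.keras.applications.resnet50.preprocess_input(X_img * 255.0))

# Standard scaling + class-weighted logistic regression, 4-fold CV.
clf = make_pipeline(StandardScaler(),
                    LogisticRegression(class_weight="balanced", max_iter=1000))
scores = cross_val_score(clf, X, y, cv=4)
print(scores.mean())
```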

CT 정도관리를 위한 인공지능 모델 적용에 관한 연구 (Study on the Application of Artificial Intelligence Model for CT Quality Control)

  • 황호성;김동현;김호철
    • 대한의용생체공학회:의공학회지 / Vol. 44, No. 3 / pp.182-189 / 2023
  • CT is a medical device that acquires images based on the X-ray attenuation coefficients of human organs. Using this principle, it can also produce sagittal and coronal planes and 3D images of the human body, making CT essential for general diagnostic testing. However, the radiation exposure of a CT scan is high enough that CT is regulated and managed as special medical equipment, and as such it must undergo quality control. Within quality control, the spatial resolution and contrast resolution of existing phantom imaging tests, as well as clinical image evaluation, are qualitative tests. Because these tests are not objective, they undermine the reliability of CT. Therefore, by applying artificial intelligence classification models, we sought to confirm the possibility of quantitatively evaluating the qualitative parts of the phantom test. We used the classification models VGG19, DenseNet201, EfficientNetB2, inception_resnet_v2, ResNet50V2, and Xception, and additionally performed fine-tuning during training. As a result, across all classification models, the accuracy for spatial resolution was 0.9562 or higher, precision was 0.9535, recall was 1, the loss value was 0.1774, and training time ranged from a maximum of 14 minutes to a minimum of 8 minutes 10 seconds. From these results, we conclude that artificial intelligence models can be applied to CT quality control for spatial resolution and contrast resolution.
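
A hedged sketch of the fine-tuning process mentioned above, using EfficientNetB2 as one of the listed backbones. The two-phase schedule, the number of unfrozen layers, and the binary pass/fail head are illustrative assumptions, not the paper's settings.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.EfficientNetB2(
    weights="imagenet", include_top=False, input_shape=(260, 260, 3))

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # assumed pass/fail QC label
])

# Phase 1: train only the new classification head.
base.trainable = False
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, epochs=5)

# Phase 2: fine-tune the top of the backbone with a small learning rate.
base.trainable = True
for layer in base.layers[:-30]:  # unfreeze only the last 30 layers
    layer.trainable = False
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, epochs=10)
```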

합리적 가격결정을 위한 전이학습모델기반 아보카도 분류 및 출하 예측 시스템 (Avocado Classification and Shipping Prediction System based on Transfer Learning Model for Rational Pricing)

  • 유성운;박승민
    • 한국전자통신학회논문지 / Vol. 18, No. 2 / pp.329-335 / 2023
  • The avocado, a superfood selected by Time magazine and a fruit that ripens after harvest, is one of the foods whose local price differs greatly from its domestic retail price. Automating the avocado classification process would reduce labor costs in various areas and thereby lower prices. This paper aims to build an avocado dataset through web crawling and to produce an optimal classification model using several deep learning-based transfer-learning models. In the experiments, datasets split from the constructed dataset were fed directly into the transfer-learning models while their hyperparameters were fine-tuned. Given an avocado image as input, the resulting model classified the avocado's degree of ripeness with over 99% accuracy. We propose a dataset and algorithm that can reduce labor and improve accuracy in avocado production and distribution.
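
As an illustration of such a pipeline, here is a minimal sketch that loads a crawled image folder organized by ripeness stage and trains a frozen transfer-learning backbone. The folder layout, the MobileNetV2 backbone, and all hyperparameters are assumptions, not the paper's exact choices.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Assumed layout: data/unripe, data/breaking, data/ripe, data/overripe
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data", validation_split=0.2, subset="training", seed=42,
    image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data", validation_split=0.2, subset="validation", seed=42,
    image_size=(224, 224), batch_size=32)

base = tf.keras.applications.MobileNetV2(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pretrained features

model = models.Sequential([
    layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(len(train_ds.class_names), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=15)
```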

MLCNN-COV: A multilabel convolutional neural network-based framework to identify negative COVID medicine responses from the chemical three-dimensional conformer

  • Pranab Das;Dilwar Hussain Mazumder
    • ETRI Journal / Vol. 46, No. 2 / pp.290-306 / 2024
  • To treat the novel COronaVIrus Disease (COVID), comparatively few medicines have been approved. Due to the global pandemic status of COVID, several medicines are being developed to treat patients. The modern COVID medicine development process faces various challenges, including predicting and detecting hazardous COVID medicine responses; correctly predicting harmful reactions is essential for health safety. Significant developments in computational models for medicine development make it possible to identify adverse COVID medicine reactions. Since the beginning of the COVID pandemic, there has been significant demand for developing COVID medicines. Therefore, this paper presents a transfer-learning methodology and a multilabel convolutional neural network for COVID (MLCNN-COV) medicine development to identify negative responses to COVID medicines. A framework is proposed with five multilabel transfer-learning models, namely MobileNetv2, ResNet50, VGG19, DenseNet201, and Inceptionv3, and an MLCNN-COV model is designed with an image augmentation (IA) technique and validated through experiments on images of the three-dimensional chemical conformers of 17 COVID medicines. RGB color channels are used to represent image features, which are extracted with Convolution2D and MaxPooling2D layers. The findings for the current MLCNN-COV are promising: it can identify the individual adverse reactions of medicines with accuracy ranging from 88.24% to 100%, outperforming the transfer-learning models. This shows that three-dimensional conformers are adequate for identifying negative COVID medicine responses.
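
A hedged sketch of a small multilabel CNN in the spirit of MLCNN-COV: Conv2D/MaxPooling2D feature extraction over RGB conformer images, with one sigmoid output per adverse-reaction label. The label count, image size, and layer widths are illustrative, not the paper's architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_LABELS = 10  # assumption: number of adverse-reaction labels

model = models.Sequential([
    layers.Input(shape=(128, 128, 3)),   # RGB conformer image
    layers.RandomFlip("horizontal"),     # image augmentation (IA)
    layers.RandomRotation(0.1),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    # Multilabel: independent sigmoid per label, not a softmax.
    layers.Dense(NUM_LABELS, activation="sigmoid"),
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",   # per-label binary loss
              metrics=[tf.keras.metrics.BinaryAccuracy()])
```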

임베디드 보드에서의 인공신경망 압축을 이용한 CNN 모델의 가속 및 성능 검증 (Acceleration of CNN Model Using Neural Network Compression and its Performance Evaluation on Embedded Boards)

  • 문현철;이호영;김재곤
    • 한국방송∙미디어공학회:학술대회논문집 / 한국방송∙미디어공학회 2019년도 추계학술대회 / pp.44-45 / 2019
  • Artificial neural networks such as CNNs have recently shown outstanding performance in a variety of fields, including image classification, object recognition, and natural language processing. However, the models used to achieve higher performance in most of these fields have vast numbers of parameters and operations, which makes inference impractical in computation- and memory-constrained environments such as mobile and IoT devices. Accordingly, deep learning compression algorithms that reduce the number of operations and model parameters are being studied. This paper evaluates the performance of compressed CNN models on an embedded board. On an embedded board equipped with the QCS605, a chip with built-in AI support, we compare the classification performance and inference speed of original and compressed CNN models on images captured by a camera. In our experiments, MobileNetV2 and VGG16 were used as the CNN models; after applying neural network compression techniques such as pruning, quantization, and matrix decomposition to the given models, we analyzed inference time and classification accuracy relative to the original models and confirmed the usefulness of these compression techniques.
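
Two of the compression techniques named above can be sketched in a few lines of PyTorch. The following applies L1 unstructured pruning and dynamic int8 quantization to MobileNetV2; the pruning ratio is chosen for illustration rather than taken from the paper.

```python
import torch
import torch.nn.utils.prune as prune
from torchvision.models import mobilenet_v2

model = mobilenet_v2(weights="DEFAULT")

# Prune 30% of the weights (by L1 magnitude) in every conv/linear layer.
for module in model.modules():
    if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # make the pruning permanent

# Dynamically quantize the linear layers to int8 for faster CPU inference.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)

quantized.eval()
with torch.no_grad():
    out = quantized(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 1000])
```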


딥러닝 기반 주름 평가 (Rating wrinkled skin using deep learning)

  • 김진숙;김용남;김두홍;박래정;백지훈;강상구
    • 한국정보처리학회:학술대회논문집 / 한국정보처리학회 2018년도 추계학술발표대회 / pp.637-640 / 2018
  • The paper proposes a new deep network-based model that rates periorbital wrinkles, both to alleviate the shortcomings of evaluation by human experts and to facilitate automation. Periorbital wrinkles still need to be classified by human experts, and the experts' classification results often differ from one another due to inter-rater variability and the absence of quantification criteria. Unlike existing methods, which classify the original images directly, the proposed model consists of a cascade of two deep networks: a U-Net that enhances the wrinkles in an input image, and a VGG16 that performs the final classification based on the wrinkle information. Experiments with the proposed model were conducted on a data set of 433 images rated by experts, showing promising performance.
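
A hedged Keras sketch of the two-stage cascade: a toy U-Net-style enhancer (a stand-in far smaller than a real U-Net) feeding a VGG16 classifier. The number of wrinkle grades and the simplified preprocessing are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import Model, layers

inp = layers.Input(shape=(224, 224, 3))

# Stage 1: tiny U-Net-like encoder-decoder that enhances wrinkle structure.
e1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
p1 = layers.MaxPooling2D()(e1)
b = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
u1 = layers.UpSampling2D()(b)
d1 = layers.Concatenate()([u1, e1])  # skip connection
enhanced = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(d1)

# Stage 2: VGG16 classifies the wrinkle grade from the enhanced image
# (scaled back to 0-255; VGG mean subtraction omitted for brevity).
vgg = tf.keras.applications.VGG16(weights="imagenet", include_top=False)
x = layers.Rescaling(255.0)(enhanced)
x = layers.GlobalAveragePooling2D()(vgg(x))
out = layers.Dense(5, activation="softmax")(x)  # assumed 5 wrinkle grades

model = Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```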

Deep learning framework for bovine iris segmentation

  • Heemoon Yoon;Mira Park;Hayoung Lee;Jisoon An;Taehyun Lee;Sang-Hee Lee
    • Journal of Animal Science and Technology / Vol. 66, No. 1 / pp.167-177 / 2024
  • Iris segmentation is an initial step in identifying the biometrics of animals when establishing a traceability system for livestock. In this study, we propose a deep learning framework for pixel-wise segmentation of the bovine iris with minimal use of annotation labels, utilizing the BovineAAEyes80 public dataset. The proposed image segmentation framework encompasses data collection, data preparation, data augmentation selection, training of 15 deep neural network (DNN) models with varying encoder backbones and segmentation decoders, and evaluation of the models using multiple metrics and graphical segmentation results. The framework aims to provide comprehensive, in-depth information on each model's training and testing outcomes to optimize bovine iris segmentation performance. In the experiment, U-Net with a VGG16 backbone was identified as the optimal combination of encoder and decoder for the dataset, achieving an accuracy of 99.50% and a Dice coefficient of 98.35%. Notably, the selected model accurately segmented even corrupted images lacking proper annotation data. This study contributes to the advancement of iris segmentation and the establishment of a reliable DNN training framework.
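
One way to assemble the reported winning combination, a U-Net decoder on a VGG16 encoder, is the open-source segmentation_models library. The sketch below is an assumed configuration (loss, metrics, binary iris mask), not the authors' code.

```python
import os
os.environ["SM_FRAMEWORK"] = "tf.keras"  # select the Keras backend before import
import segmentation_models as sm

model = sm.Unet(
    backbone_name="vgg16",
    encoder_weights="imagenet",   # transfer-learned encoder
    classes=1,                    # binary mask: iris vs. background
    activation="sigmoid",
)
model.compile(
    optimizer="adam",
    loss=sm.losses.bce_dice_loss,            # BCE + Dice, a common choice
    metrics=[sm.metrics.iou_score, sm.metrics.f1_score],
)
# X: (N, H, W, 3) images preprocessed with sm.get_preprocessing("vgg16")
# model.fit(X, y_masks, batch_size=8, epochs=40)
```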