• Title/Abstract/Keyword: VGG16 and Inception V3

Search results: 21 items

SVM on Top of Deep Networks for Covid-19 Detection from Chest X-ray Images

  • Do, Thanh-Nghi; Le, Van-Thanh; Doan, Thi-Huong
    • Journal of Information and Communication Convergence Engineering / Vol. 20, No. 3 / pp. 219-225 / 2022
  • In this study, we propose training a support vector machine (SVM) model on top of deep networks for detecting Covid-19 from chest X-ray images. We started by gathering a real chest X-ray image dataset, including positive Covid-19 cases, normal cases, and other lung diseases not caused by Covid-19. Instead of training deep networks from scratch, we fine-tuned recent pre-trained deep network models, such as DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, and VGG19, to classify chest X-ray images into one of three classes (Covid-19, normal, or other lung disease). We then propose training an SVM model on top of the deep networks to perform a nonlinear combination of their outputs, improving classification over any single deep network. The empirical test results on the real chest X-ray image dataset show that the deep network models, with the exception of ResNet50 at 82.44%, provide an accuracy of at least 92% on the test set. The proposed SVM on top of the deep networks achieved the highest accuracy of 96.16%.
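
The stacking idea described in this abstract can be sketched as follows: collect the class-probability outputs of several fine-tuned networks, concatenate them into one feature vector per image, and fit an SVM on that vector. The model file names, data arrays, and SVM settings below are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of an SVM trained on top of fine-tuned deep networks.
import numpy as np
from tensorflow import keras
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Hypothetical fine-tuned 3-class models (Covid-19 / normal / other lung disease)
model_paths = ["densenet121_ft.h5", "inceptionv3_ft.h5", "vgg16_ft.h5"]
models = [keras.models.load_model(p) for p in model_paths]

def stacked_outputs(models, x):
    """Concatenate the per-class probabilities of every base network."""
    return np.concatenate([m.predict(x, verbose=0) for m in models], axis=1)

# x_train, y_train, x_test, y_test: preprocessed chest X-ray tensors and labels (assumed)
z_train = stacked_outputs(models, x_train)
z_test = stacked_outputs(models, x_test)

svm = SVC(kernel="rbf", C=10.0)  # nonlinear combination of the network outputs
svm.fit(z_train, y_train)
print("stacked-model accuracy:", accuracy_score(y_test, svm.predict(z_test)))
```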

CNN-based Recommendation Model for Classifying HS Code

  • 이동주; 김건우; 최근호
    • 경영과정보연구 / Vol. 39, No. 3 / pp. 1-16 / 2020
  • Under the current customs declaration-and-payment system, the taxpayer calculates the amount of duty owed and pays it on his or her own responsibility. In other words, the declaration-and-payment system under the Customs Act places the full, unlimited obligation and responsibility for correctly calculating and paying the duty on the taxpayer. Accordingly, if the taxpayer fails to fulfill this obligation properly, the shortfall is collected and a penalty tax is imposed as a sanction. For this reason, item classification, which is the basis for calculating the amount of duty, is, together with customs valuation, one of the most difficult tasks, and misclassification can pose a serious risk to a company. As a result, importers in practice entrust their import declarations to licensed customs brokers and pay considerable fees for the service. The purpose of this study is therefore to classify the HS code of the item being declared and to recommend the HS code to be entered on the import declaration. For HS code classification, the images attached to the item-classification rulings of the Korea Customs Service were used. For image classification we used CNN, a deep learning algorithm widely used in image recognition; specifically, the VggNet (Vgg16, Vgg19), ResNet50, and Inception-V3 models were employed. To improve classification accuracy, three datasets were constructed for the experiments. Dataset 1 consisted of the five HS codes with the largest number of images, while Datasets 2 and 3 targeted Chapter 87, the 2-digit HS code with the largest number of samples, and narrowed the classification scope to the five codes with the most samples within that chapter. When classification was performed with Dataset 3, the Vgg16 model achieved the highest accuracy of 73.12%. This study is significant in that it is the first attempt to classify HS codes with deep learning using HS code images. In addition, if companies or individual traders engaged in import/export work adapt the model proposed in this study, it is expected to help them fill in HS codes when filing import/export declarations.
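
The classification pipeline in this study is essentially transfer learning on HS-code images. A minimal sketch of that setup, using VGG16 with a frozen convolutional base; the directory name, image size, and training settings are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal transfer-learning sketch: fine-tune a pre-trained VGG16 head on a
# small number of HS-code image classes.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

NUM_CLASSES = 5  # e.g., the five most frequent HS codes in the dataset

train_ds = tf.keras.utils.image_dataset_from_directory(
    "hs_code_images/train", image_size=(224, 224), batch_size=32)  # assumed layout

base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional base for initial fine-tuning

model = models.Sequential([
    layers.Rescaling(1.0 / 255),
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```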

Classification of Apple Tree Leaves Diseases using Deep Learning Methods

  • Alsayed, Ashwaq; Alsabei, Amani; Arif, Muhammad
    • International Journal of Computer Science & Network Security / Vol. 21, No. 7 / pp. 324-330 / 2021
  • Agriculture is one of the essential needs of human life on planet Earth. It is the source of food and income for many individuals around the world, and the economies of many countries depend on the agricultural sector. Many diseases attack fruits and crops. Apple tree leaves also suffer from different pathological conditions that affect production, including apple scab, cedar apple rust, and combinations of multiple diseases. In this paper, an automatic deep learning-based detection framework is investigated for apple leaf disease classification. Different pre-trained models, VGG16, ResNetV2, InceptionV3, and MobileNetV2, are considered for transfer learning. Combinations of parameters such as learning rate, batch size, and optimizer are analyzed, and the best combination, ResNetV2 with the Adam optimizer, provided the best classification accuracy of 94%.

Multi-Class Classification Framework for Brain Tumor MR Image Classification by Using Deep CNN with Grid-Search Hyper Parameter Optimization Algorithm

  • Mukkapati, Naveen; Anbarasi, M.S.
    • International Journal of Computer Science & Network Security / Vol. 22, No. 4 / pp. 101-110 / 2022
  • Histopathological analysis of biopsy specimens is still used to diagnose and classify brain tumors today. The available procedures are invasive, time consuming, and prone to human error. To overcome these disadvantages, a fully automated deep learning-based model is needed to classify brain tumors into multiple classes. The proposed CNN model achieves an accuracy of 92.98% in categorizing tumors into five classes: normal, glioma, meningioma, pituitary tumor, and metastatic tumor. Using the grid-search optimization approach, all of the critical hyper-parameters of the suggested CNN framework were assigned automatically. The suggested CNN model is compared with state-of-the-art CNN models such as AlexNet, Inception v3, ResNet-50, VGG-16, and GoogLeNet. Using large, publicly available clinical datasets, satisfactory classification results were produced. Physicians and radiologists can use the suggested CNN model to confirm their initial screening for brain tumor multi-classification.
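
The grid-search step can be sketched as an exhaustive loop over candidate hyper-parameter combinations, keeping the setting with the best validation accuracy. The search space, the small stand-in CNN, and the data arrays below are assumptions for illustration only, not the authors' setup.

```python
# Minimal grid-search sketch over CNN hyper-parameters.
import itertools
import tensorflow as tf

def build_cnn(learning_rate, optimizer_name, num_classes=5):
    """Small CNN classifier; stands in for the paper's architecture."""
    opt = {"adam": tf.keras.optimizers.Adam,
           "sgd": tf.keras.optimizers.SGD}[optimizer_name](learning_rate)
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(128, 128, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=opt, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

grid = {"learning_rate": [1e-3, 1e-4],
        "batch_size": [16, 32],
        "optimizer_name": ["adam", "sgd"]}

best = (None, 0.0)
for lr, bs, opt_name in itertools.product(*grid.values()):
    model = build_cnn(lr, opt_name)
    # x_train/y_train and x_val/y_val: preprocessed MR images and labels (assumed)
    hist = model.fit(x_train, y_train, batch_size=bs, epochs=5,
                     validation_data=(x_val, y_val), verbose=0)
    val_acc = max(hist.history["val_accuracy"])
    if val_acc > best[1]:
        best = ({"lr": lr, "batch_size": bs, "optimizer": opt_name}, val_acc)

print("best hyper-parameters:", best[0], "val accuracy:", best[1])
```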

A Study on Biometric Model for Information Security

  • 김준영; 정세훈; 심춘보
    • 한국전자통신학회논문지 / Vol. 19, No. 1 / pp. 317-326 / 2024
  • Biometrics is a technology that extracts a person's physiological or behavioral characteristics with a dedicated device to verify his or her identity. In the biometrics field, cyber threats such as forgery, duplication, and hacking of biometric traits are increasing. In response, security systems are becoming stronger and more complex, and harder for individuals to use. To address this, multimodal biometric models are being studied. Previous studies have proposed feature-fusion methods, but comparisons between fusion methods are lacking. In this paper, we therefore compared and evaluated fusion methods for a multimodal biometric model using fingerprint, face, and iris images. VGG-16, ResNet-50, EfficientNet-B1, EfficientNet-B4, EfficientNet-B7, and Inception-v3 were used for feature extraction, and the 'Sensor-Level', 'Feature-Level', 'Score-Level', and 'Rank-Level' fusion methods were compared and evaluated. The comparison showed that the EfficientNet-B7 model with 'Feature-Level' fusion achieved an accuracy of 98.51% and high stability. However, because the EfficientNet-B7 model is large, research on model light-weighting for biometric feature fusion is needed.
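
Feature-level fusion, the best-performing method in this comparison, concatenates the embeddings produced by separate backbones before a shared classifier. A minimal sketch follows; the backbone choices, input sizes, and number of enrolled identities are assumptions for illustration.

```python
# Minimal feature-level fusion sketch for fingerprint, face, and iris inputs.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16, InceptionV3, EfficientNetB1

def backbone_embedding(app, input_shape, name):
    """Frozen pre-trained backbone followed by global average pooling."""
    inp = layers.Input(shape=input_shape, name=name)
    base = app(weights="imagenet", include_top=False, input_tensor=inp)
    base.trainable = False
    return inp, layers.GlobalAveragePooling2D()(base.output)

fp_in, fp_emb = backbone_embedding(VGG16, (224, 224, 3), "fingerprint")
face_in, face_emb = backbone_embedding(InceptionV3, (299, 299, 3), "face")
iris_in, iris_emb = backbone_embedding(EfficientNetB1, (240, 240, 3), "iris")

fused = layers.Concatenate()([fp_emb, face_emb, iris_emb])  # feature-level fusion
x = layers.Dense(512, activation="relu")(fused)
out = layers.Dense(100, activation="softmax")(x)  # 100 enrolled identities (assumed)

model = Model([fp_in, face_in, iris_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```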

A computer vision-based approach for behavior recognition of gestating sows fed different fiber levels during high ambient temperature

  • Kasani, Payam Hosseinzadeh; Oh, Seung Min; Choi, Yo Han; Ha, Sang Hun; Jun, Hyungmin; Park, Kyu hyun; Ko, Han Seo; Kim, Jo Eun; Choi, Jung Woo; Cho, Eun Seok; Kim, Jin Soo
    • Journal of Animal Science and Technology / Vol. 63, No. 2 / pp. 367-379 / 2021
  • The objectives of this study were to evaluate convolutional neural network models and computer vision techniques for classifying swine posture with high accuracy, and to use the results to investigate the effect of dietary fiber level on the behavioral characteristics of pregnant sows under low and high ambient temperatures during the last stage of gestation. A total of 27 crossbred sows (Yorkshire × Landrace; average body weight, 192.2 ± 4.8 kg) were assigned to three treatments in a randomized complete block design during the last stage of gestation (days 90 to 114). The sows in group 1 were fed a 3% fiber diet under neutral ambient temperature; the sows in group 2 were fed a 3% fiber diet under high ambient temperature (HT); and the sows in group 3 were fed a 6% fiber diet under HT. Eight popular deep learning-based feature extraction frameworks (DenseNet121, DenseNet201, InceptionResNetV2, InceptionV3, MobileNet, VGG16, VGG19, and Xception) were selected for automatic swine posture classification and compared on a swine posture image dataset constructed under real swine farm conditions. The neural network models showed excellent performance on previously unseen data (ability to generalize). The DenseNet121 feature extractor achieved the best performance with 99.83% accuracy, and both DenseNet201 and MobileNet showed an accuracy of 99.77% on the image dataset. The behavior of sows classified by the DenseNet121 feature extractor showed that HT reduced (p < 0.05) standing behavior and also tended to increase (p = 0.082) lying behavior. The high dietary fiber treatment tended to increase (p = 0.064) lying and decreased (p < 0.05) standing behavior, but there was no change in sitting behavior under HT conditions.

Classification of Whole Body Bone Scan Images with Bone Metastasis using CNN-based Transfer Learning

  • 임지영; 도탄콩; 김수형; 이귀상; 이민희; 민정준; 범희승; 김현식; 강세령; 양형정
    • 한국멀티미디어학회논문지 / Vol. 25, No. 8 / pp. 1224-1232 / 2022
  • Whole body bone scanning is the most frequently performed nuclear medicine imaging procedure for evaluating bone metastasis in cancer patients. We evaluated the performance of a VGG16-based transfer learning classifier for bone scan images in which metastatic bone lesions were present. A total of 1,000 bone scans from 1,000 cancer patients (500 patients with bone metastasis, 500 patients without bone metastasis) were evaluated. Bone scans were labeled as abnormal/normal for bone metastasis using medical reports and image review. Subsequently, gradient-weighted class activation maps (Grad-CAMs) were generated for explainable AI. The proposed model showed an AUROC of 0.96 and an F1-score of 0.90, outperforming VGG16, ResNet50, Xception, DenseNet121, and InceptionV3. Grad-CAM visualization showed that, when classifying whole body bone scan images with bone metastases, the proposed model focuses on hot uptake, which indicates active bone lesions.
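
Grad-CAM, used here for explainability, weights the last convolutional feature maps by the gradient of the predicted class score and sums them into a heatmap. A minimal sketch for a Keras model is given below; the model file and layer name are assumptions (block5_conv3 is the last convolutional layer of a stock VGG16).

```python
# Minimal Grad-CAM sketch for a Keras CNN classifier.
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer="block5_conv3", class_index=None):
    """Return a heatmap highlighting the regions that drive the prediction."""
    conv_layer = model.get_layer(last_conv_layer)
    grad_model = tf.keras.Model(model.inputs, [conv_layer.output, model.output])

    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        class_score = preds[:, class_index]

    grads = tape.gradient(class_score, conv_out)          # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))       # global-average-pooled gradients
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)   # weighted sum of feature maps
    cam = tf.nn.relu(cam)                                  # keep positive influence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()    # normalized heatmap

# Usage (assumed): model = tf.keras.models.load_model("vgg16_bonescan.h5")
# heatmap = grad_cam(model, preprocessed_scan)
```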

Breast Cancer Detection with Thermal Images and using Deep Learning

  • Amit Sarode; Vibha Bora
    • International Journal of Computer Science & Network Security / Vol. 23, No. 8 / pp. 91-94 / 2023
  • According to most experts and health workers, a living creature's body heat is little understood yet crucial in the identification of disorders. Doctors in ancient medicine used wet mud or slurry clay to treat patients: when it was spread over the body, the area that dried first was considered the infected part. Today, thermal cameras that generate images from electromagnetic radiation can accomplish the same thing. Thermography can detect swelling and clotted areas that may indicate cancer, without harmful radiation or irritating physical contact. It has a significant benefit in medical testing because it can be used before any observable symptoms appear. In this work, machine learning (ML) is defined as statistical approaches that enable software systems to learn from data without being explicitly coded. ML can assist in this endeavor by analyzing these heat scans of the breast and pinpointing suspect regions where a doctor needs to conduct further investigation. Thermal imaging is also a more cost-effective alternative to approaches that require specialized equipment, allowing machines to provide a more convenient and effective aid to doctors.

Performance Comparison of Commercial and Customized CNN for Detection in Nodular Lung Cancer

  • 박성욱; 김승현; 임수창; 김도연
    • 한국멀티미디어학회논문지 / Vol. 23, No. 6 / pp. 729-737 / 2020
  • Screening with low-dose spiral computed tomography (LDCT) has been shown to reduce lung cancer mortality by about 20% compared with standard chest radiography. One problem arising from screening programs is that large amounts of CT image data must be interpreted by radiologists. To solve this problem, automated detection of pulmonary nodules is necessary; however, this is a challenging task because of the high number of false positives. Here we demonstrate detection of pulmonary nodules using six off-the-shelf convolutional neural network (CNN) models after modifying their input/output layers and training them end-to-end on public databases for comparative evaluation. We used the well-known CNN models LeNet-5, VGG-16, GoogLeNet Inception V3, ResNet-152, DenseNet-201, and NASNet. Most of the off-the-shelf CNN models provided superior results to those obtained with customized CNN models. It is more desirable to modify a proven off-the-shelf network model than to build a customized network model for detecting pulmonary nodules.
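
Adapting an off-the-shelf model as described, replacing the output layer and training the whole network end-to-end, can be sketched as follows; the backbone choice, input size, and dataset are illustrative assumptions rather than the authors' exact setup.

```python
# Minimal sketch: swap the classifier head of a pre-trained CNN and fine-tune
# all layers for binary nodule detection.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet152

base = ResNet152(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = True  # end-to-end training: all layers are updated

x = layers.GlobalAveragePooling2D()(base.output)
out = layers.Dense(2, activation="softmax")(x)  # nodule vs. non-nodule
model = Model(base.input, out)

model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),  # small LR for full fine-tuning
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# train_ds: labeled CT patch dataset from a public database (assumed)
# model.fit(train_ds, epochs=20)
```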

The Effect of Type of Input Image on Accuracy in Classification Using Convolutional Neural Network Model

  • 김민정; 김정훈; 박지은; 정우연; 이종민
    • 대한의용생체공학회: 의공학회지 / Vol. 42, No. 4 / pp. 167-174 / 2021
  • The purpose of this study is to classify TIFF, PNG, and JPEG images using deep learning and to compare classification accuracy across formats. TIFF, PNG, and JPEG images converted from chest X-ray DICOM images were fed into five deep neural network models commonly used for image recognition and classification, and their performance was compared. The data consisted of a total of 4,000 X-ray images, converted from DICOM into 16-bit TIFF images and 8-bit PNG and JPEG images. The learning models were the CNN models VGG16, ResNet50, InceptionV3, DenseNet121, and EfficientNetB0. The accuracy of the five convolutional neural network models on TIFF images was 99.86%, 99.86%, 99.99%, 100%, and 99.89%; on PNG images, 99.88%, 100%, 99.97%, 99.87%, and 100%; and on JPEG images, 100%, 100%, 99.96%, 99.89%, and 100%. Validation of classification performance using the test data showed 100% accuracy, precision, recall, and F1-score. These results show that when DICOM images are converted to TIFF, PNG, or JPEG and preprocessed before training, learning works well in all formats. In medical imaging research using deep learning, classification performance is not affected by the format into which DICOM images are converted.
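
The preprocessing step described above, converting DICOM chest X-rays into 16-bit TIFF and 8-bit PNG/JPEG files, can be sketched with pydicom and Pillow; the file names and the min-max normalization are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch: read a DICOM chest X-ray and export 16-bit TIFF and 8-bit PNG/JPEG.
import numpy as np
import pydicom
from PIL import Image

ds = pydicom.dcmread("chest_xray.dcm")          # assumed input file
pixels = ds.pixel_array.astype(np.float32)

# Scale intensities to the full range of each target bit depth
norm = (pixels - pixels.min()) / (pixels.max() - pixels.min() + 1e-8)

Image.fromarray((norm * 65535).astype(np.uint16)).save("chest_xray.tiff")  # 16-bit TIFF
img8 = Image.fromarray((norm * 255).astype(np.uint8))
img8.save("chest_xray.png")                      # 8-bit PNG
img8.save("chest_xray.jpg")                      # 8-bit JPEG
```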