• Title/Summary/Keyword: DenseNet

Search results: 146

A computer vision-based approach for behavior recognition of gestating sows fed different fiber levels during high ambient temperature

  • Kasani, Payam Hosseinzadeh;Oh, Seung Min;Choi, Yo Han;Ha, Sang Hun;Jun, Hyungmin;Park, Kyu hyun;Ko, Han Seo;Kim, Jo Eun;Choi, Jung Woo;Cho, Eun Seok;Kim, Jin Soo
    • Journal of Animal Science and Technology, v.63 no.2, pp.367-379, 2021
  • The objectives of this study were to evaluate convolutional neural network models and computer vision techniques for the classification of swine posture with high accuracy, and to use the derived results to investigate the effect of dietary fiber level on the behavioral characteristics of pregnant sows under low and high ambient temperatures during the last stage of gestation. A total of 27 crossbred sows (Yorkshire × Landrace; average body weight, 192.2 ± 4.8 kg) were assigned to three treatments in a randomized complete block design during the last stage of gestation (days 90 to 114). The sows in group 1 were fed a 3% fiber diet under neutral ambient temperature; the sows in group 2 were fed a 3% fiber diet under high ambient temperature (HT); the sows in group 3 were fed a 6% fiber diet under HT. Eight popular deep learning-based feature extraction frameworks (DenseNet121, DenseNet201, InceptionResNetV2, InceptionV3, MobileNet, VGG16, VGG19, and Xception) for automatic swine posture classification were selected and compared on a swine posture image dataset constructed under real swine farm conditions. The neural network models showed excellent performance on previously unseen data (i.e., they generalized well). The DenseNet121 feature extractor achieved the best performance with 99.83% accuracy, and both DenseNet201 and MobileNet reached 99.77% accuracy on the image dataset. The sow behavior classified by the DenseNet121 feature extractor showed that HT reduced (p < 0.05) standing behavior and tended to increase (p = 0.082) lying behavior. The high dietary fiber treatment tended to increase (p = 0.064) lying behavior and decreased (p < 0.05) standing behavior, but sitting behavior did not change under HT conditions.

Study on the Surface Defect Classification of Al 6061 Extruded Material By Using CNN-Based Algorithms (CNN을 이용한 Al 6061 압출재의 표면 결함 분류 연구)

  • Kim, S.B.;Lee, K.A.
    • Transactions of Materials Processing, v.31 no.4, pp.229-239, 2022
  • Convolutional Neural Networks (CNNs) are a class of deep learning algorithms that can be used for image analysis. In particular, they excel at finding patterns in images, so CNNs are commonly applied to recognizing, learning, and classifying images. In this study, the surface defect classification performance of CNN-based algorithms on Al 6061 extruded material was compared and evaluated. First, data collection criteria were suggested and a total of 2,024 datasets were prepared, then randomly split into 1,417 training data and 607 evaluation data. After that, the size and quality of the training dataset were improved using data augmentation techniques to increase deep learning performance. The CNN-based algorithms used in this study were VGGNet-16, VGGNet-19, ResNet-50, and DenseNet-121. The defect classification performance was evaluated by comparing accuracy, loss, and learning speed on the verification data. The DenseNet-121 algorithm showed better performance than the other algorithms, with an accuracy of 99.13% and a loss value of 0.037. This was due to the structural characteristics of the DenseNet model: each layer acquires information from all previous layers for image identification, which reduces information loss. Based on the above results, the possibility of applying CNN-based models in machine vision for the surface defect classification of Al extruded materials was also discussed.
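The structural characteristic noted above (each layer receiving the feature maps of all previous layers) can be illustrated with a short channel-count calculation. This is a sketch based on the published DenseNet-121 configuration (growth rate 32, dense blocks of 6/12/24/16 layers, transition layers halving channels), not code from the paper:

```python
# Channel bookkeeping for DenseNet-121's dense connectivity.
# Each dense layer appends `growth_rate` new channels to the running
# concatenation of all previous feature maps; each transition layer
# then halves the channel count (compression factor 0.5).

def densenet_channels(init_channels=64, growth_rate=32,
                      block_layers=(6, 12, 24, 16)):
    channels = init_channels
    per_block = []
    for i, n_layers in enumerate(block_layers):
        channels += n_layers * growth_rate   # dense block: concat n_layers outputs
        per_block.append(channels)
        if i < len(block_layers) - 1:        # transition layer (none after last block)
            channels //= 2
    return channels, per_block

final, per_block = densenet_channels()
print(final)      # channels entering the classifier: 1024
print(per_block)  # channels at the end of each dense block: [256, 512, 1024, 1024]
```

The concatenation is why information loss is low: the classifier sees features contributed by every depth of the network, not only the last layer's output.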

Development of ResNet based Crop Growth Stage Estimation Model (ResNet 기반 작물 생육단계 추정 모델 개발)

  • Park, Jun;Kim, June-Yeong;Park, Sung-Wook;Jung, Se-Hoon;Sim, Chun-Bo
    • Smart Media Journal, v.11 no.2, pp.53-62, 2022
  • Due to accelerated global warming after industrialization, changes in the existing environment and abnormal climate events are becoming more frequent. Agriculture is an industry that is very sensitive to climate change, and global warming causes problems such as reduced crop yields and shifting growing regions. In addition, environmental changes make the growth period of crops irregular, so that even experienced farmers find it difficult to estimate the growth stage of crops, causing various problems. Therefore, in this paper, we propose a CNN model for estimating the growth stage of crops. The proposed model modifies the pooling layer of ResNet, and it achieved higher growth-stage estimation accuracy than the baseline ResNet and DenseNet models.

Comparison of Image Classification Performance in Convolutional Neural Network according to Transfer Learning (전이학습에 방법에 따른 컨벌루션 신경망의 영상 분류 성능 비교)

  • Park, Sung-Wook;Kim, Do-Yeon
    • Journal of Korea Multimedia Society, v.21 no.12, pp.1387-1395, 2018
  • The Convolutional Neural Network (CNN), a core deep learning algorithm, shows better performance than other machine learning algorithms. However, without sufficient data, a CNN cannot achieve satisfactory performance even if the classifier is excellent. In this situation, the use of transfer learning has been proven to be highly effective. In this paper, we apply two transfer learning methods (freezing and retraining) to three CNN models (ResNet-50, Inception-V3, and DenseNet-121) and compare and analyze how the classification performance of the CNNs changes according to the method. In statistical significance tests using various evaluation indicators, the performance of ResNet-50, Inception-V3, and DenseNet-121 differed between the two methods by factors of 1.18, 1.09, and 1.17, respectively. Based on this, we conclude that the retraining method may be more effective than the freezing method for transfer learning in image classification problems.
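The freezing/retraining distinction compared above can be sketched without any deep learning framework. In the toy model below, one parameter stands in for the pretrained CNN backbone and one for the newly added classifier: freezing updates only the classifier parameter, while retraining (fine-tuning) updates both. All names and numbers are illustrative, not from the paper:

```python
# Toy transfer learning: model y = w_feat * x + w_clf, where w_feat plays
# the role of pretrained backbone weights and w_clf the new classifier.
# Freezing updates only w_clf; retraining updates both parameters.

def train(xs, ys, w_feat, w_clf, lr=0.01, epochs=500, freeze_features=True):
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            err = (w_feat * x + w_clf) - y     # prediction error (squared loss grad)
            if not freeze_features:
                w_feat -= lr * err * x         # gradient step on the "backbone"
            w_clf -= lr * err                  # gradient step on the "classifier"
    return w_feat, w_clf

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]                      # generated by y = 2x + 1
# Start from a "transferred" backbone weight that is close but not exact.
frozen = train(xs, ys, w_feat=1.8, w_clf=0.0, freeze_features=True)
tuned = train(xs, ys, w_feat=1.8, w_clf=0.0, freeze_features=False)
print(frozen)  # backbone weight stays at 1.8; classifier compensates imperfectly
print(tuned)   # both weights move close to the true (2.0, 1.0)
```

The toy mirrors the paper's conclusion in miniature: when the transferred features do not perfectly match the new task, retraining can reach a better fit than freezing.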

Comparison of CNN-based models for apple pest classification (사과 병해충 분류를 위한 CNN 기반 모델 비교)

  • Lee, Su-min;Lee, Yu-hyeon;Lee, Eun-sol;Han, Se-yun
    • Proceedings of the Korea Information Processing Society Conference, 2022.05a, pp.460-463, 2022
  • The productivity and quality of apples, one of the world's most important temperate fruit crops, are strongly affected by pests and diseases. Diagnosing them requires substantial expertise and considerable time, so an efficient and accurate system for diagnosing the various pests and diseases is needed. In this paper, we compare and analyze deep learning-based CNNs, which have shown great efficiency in image analysis, to determine whether apples are affected by pests or diseases and to present the optimal model. AlexNet, VGGNet, Inception-ResNet-v2, and DenseNet, all deep learning-based CNN architectures, were adopted to evaluate apple pest and disease classification performance. As a result, DenseNet showed the best performance.

Pediatric RDS classification method employing segmentation-based deep learning network (영역 분할 기반 심층 신경망을 활용한 소아 RDS 판별 방법)

  • Kim, Jiyeong;Kang, Jaeha;Choi, Haechul
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2022.06a, pp.1181-1183, 2022
  • Neonatal respiratory distress syndrome (RDS) is one of the leading causes of death in premature infants, and the disease requires rapid diagnosis and treatment. RDS is currently identified by visual analysis of pediatric X-ray images, but because this relies on the subjective judgment of specialists, it consumes considerable time and manpower. Accordingly, this paper proposes a deep neural network-based pediatric RDS/non-RDS classification method to assist specialists' diagnosis. A dataset is constructed by applying lung-region segmentation to pediatric whole-body X-ray images and extending it with augmentation, and a DenseNet classification model pretrained on ImageNet is further fine-tuned on the constructed dataset to improve RDS classification performance. At inference time, the lung region of an input X-ray image is segmented with MSRF-Net and fed to the DenseNet classification model to diagnose RDS. Experimental results show that the classification method with data augmentation and lung-region segmentation improved performance by 3.9% compared to using only the pediatric whole-body X-ray dataset.
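The segment-then-classify pipeline described above (an MSRF-Net lung mask applied before DenseNet classification) hinges on masking the X-ray to the predicted lung region. A minimal, framework-free sketch of applying a binary mask (values are illustrative, not from the paper):

```python
# Apply a binary segmentation mask to an image: pixels outside the
# predicted lung region are zeroed before being passed to the classifier.

def apply_mask(image, mask):
    return [[pix * m for pix, m in zip(img_row, mask_row)]
            for img_row, mask_row in zip(image, mask)]

xray = [[5, 7, 9],
        [4, 8, 6]]
lung_mask = [[0, 1, 1],      # 1 = lung pixel predicted by the segmenter
             [0, 1, 0]]
masked = apply_mask(xray, lung_mask)
print(masked)  # [[0, 7, 9], [0, 8, 0]]
```

Zeroing the background is one common convention; cropping to the mask's bounding box is another, and the paper does not specify which variant is used.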


Tomato Crop Diseases Classification Models Using Deep CNN-based Architectures (심층 CNN 기반 구조를 이용한 토마토 작물 병해충 분류 모델)

  • Kim, Sam-Keun;Ahn, Jae-Geun
    • Journal of the Korea Academia-Industrial cooperation Society, v.22 no.5, pp.7-14, 2021
  • Tomato crops are highly affected by tomato diseases, and if not prevented, a disease can cause severe losses for the agricultural economy. Therefore, there is a need for a system that quickly and accurately diagnoses various tomato diseases. In this paper, we propose a system that classifies nine diseases as well as healthy tomato plants by applying various pretrained deep learning-based CNN models trained on the ImageNet dataset. The tomato leaf image dataset obtained from PlantVillage is provided as input to ResNet, Xception, and DenseNet, which have deep learning-based CNN architectures. The proposed models were constructed by adding a top-level classifier to the basic CNN model, and they were trained with a 5-fold cross-validation strategy. All three proposed models were trained in two stages: transfer learning (which freezes the layers of the basic CNN model and trains only the top-level classifier), and fine-tuning (which unfreezes the basic CNN layers and trains with a very small learning rate). SGD, RMSprop, and Adam were applied as optimization algorithms. The experimental results show that the DenseNet CNN model with the RMSprop algorithm yielded the best results, with 98.63% accuracy.
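The 5-fold cross-validation strategy mentioned above can be sketched in plain Python (a generic illustration, not the authors' code): each image index lands in exactly one validation fold, and the other four folds form the training set for that round.

```python
# Generic k-fold split: partition sample indices into k validation folds;
# for each fold, the remaining indices form the training set.

def k_fold_splits(n_samples, k=5):
    indices = list(range(n_samples))
    folds = [indices[i::k] for i in range(k)]      # round-robin fold assignment
    splits = []
    for i in range(k):
        val = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        splits.append((train, val))
    return splits

splits = k_fold_splits(10, k=5)
for train, val in splits:
    print(len(train), len(val))   # 8 training and 2 validation samples per fold
```

In practice the indices would be shuffled (or stratified by class) before folding; the round-robin assignment here keeps the sketch deterministic.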

Dense Siamese Network for Building Change Detection (건물 변화 탐지를 위한 덴스 샴 네트워크)

  • Hwang, Gisu;Lee, Woo-Ju;Oh, Seoung-Jun
    • Proceedings of the Korean Society of Broadcast Engineers Conference, 2020.07a, pp.691-694, 2020
  • Recent advances in remote sensing imagery have made it possible to detect small but important objects, raising interest in building change detection. This paper proposes a DenseNet-based Dense Siamese Network to address a limitation of PGA-SiamNet, the best-performing building change detection method, namely its low accuracy on fine-grained changes. On the public WHU dataset, the proposed method achieved 97.02%, 99.5%, 97.44%, and 97.16% on the change detection metrics TPR, OA, F1, and Kappa, respectively. Compared with the existing PGA-SiamNet, TPR increased by 0.83%, F1 by 0.02%, and Kappa by 0.02%, confirming superior fine-grained change detection performance.
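The four metrics reported above (TPR, OA, F1, Kappa) can all be computed from a binary confusion matrix over changed/unchanged pixels. The sketch below uses made-up counts, not the WHU results:

```python
# Binary change-detection metrics from confusion-matrix counts.
# tp/fp: changed pixels correctly / falsely detected; fn: missed changes;
# tn: unchanged pixels correctly identified.

def change_metrics(tp, fp, fn, tn):
    total = tp + fp + fn + tn
    tpr = tp / (tp + fn)                     # true positive rate (recall)
    oa = (tp + tn) / total                   # overall accuracy
    precision = tp / (tp + fp)
    f1 = 2 * precision * tpr / (precision + tpr)
    # Cohen's kappa: agreement beyond what the class marginals predict by chance.
    p_change = ((tp + fp) / total) * ((tp + fn) / total)
    p_nochange = ((fn + tn) / total) * ((fp + tn) / total)
    pe = p_change + p_nochange
    kappa = (oa - pe) / (1 - pe)
    return tpr, oa, f1, kappa

tpr, oa, f1, kappa = change_metrics(tp=90, fp=10, fn=10, tn=890)
print(round(tpr, 3), round(oa, 3), round(f1, 3), round(kappa, 3))
```

Kappa is the most informative of the four here: with heavily imbalanced pixels (most of a scene is unchanged), OA can be high even for a weak detector, while kappa discounts that chance agreement.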


Detection of Plastic Greenhouses by Using Deep Learning Model for Aerial Orthoimages (딥러닝 모델을 이용한 항공정사영상의 비닐하우스 탐지)

  • Byunghyun Yoon;Seonkyeong Seong;Jaewan Choi
    • Korean Journal of Remote Sensing, v.39 no.2, pp.183-192, 2023
  • Remotely sensed data, such as satellite imagery and aerial photos, can be used to extract and detect objects in an image through image interpretation and processing techniques. In particular, the potential for digital map updating and land monitoring through automatic object detection has increased as the spatial resolution of remotely sensed data has improved and deep learning technology has developed. In this paper, we extracted plastic greenhouses from aerial orthophotos by using the fully convolutional densely connected convolutional network (FC-DenseNet), one of the representative deep learning models for semantic segmentation, and then performed a quantitative analysis of the extraction results. Using the farm map of the Ministry of Agriculture, Food and Rural Affairs in Korea, training data was generated by labeling plastic greenhouses in the Damyang and Miryang areas, and FC-DenseNet was trained on this dataset. To apply the deep learning model to remotely sensed imagery, instance normalization, which can maintain the spectral characteristics of the bands, was used. In addition, optimal weights for each band were determined by adding attention modules to the deep learning model. The experiments showed that the deep learning model can extract plastic greenhouses. These results can be applied to digital map updating of the Farm-map and landcover maps.
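The instance normalization used above standardizes each band of each image with that band's own statistics, so relative spectral structure within a band is preserved rather than being mixed across bands or across a batch. A minimal per-band sketch in plain Python (illustrative values, not the paper's implementation):

```python
# Per-band instance normalization: each band of one image is normalized
# with its own mean and standard deviation, so bands with very different
# radiometric scales (e.g. visible vs. NIR) end up comparable without
# losing their within-band spectral structure.

def instance_norm(bands, eps=1e-5):
    normalized = []
    for band in bands:                         # one flat list of pixels per band
        n = len(band)
        mean = sum(band) / n
        var = sum((v - mean) ** 2 for v in band) / n
        std = (var + eps) ** 0.5               # eps guards against zero variance
        normalized.append([(v - mean) / std for v in band])
    return normalized

image = [[0.1, 0.2, 0.3, 0.4],       # e.g. a visible band
         [10.0, 20.0, 30.0, 40.0]]   # e.g. an NIR band on a different scale
out = instance_norm(image)           # each band now has ~zero mean, unit variance
```

Batch normalization would instead pool statistics across the whole batch, which can wash out exactly the per-image, per-band spectral differences the authors want to keep.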