• Title/Abstract/Keyword: Dense Network (DenseNet)

Development of Fender Segmentation System for Port Structures using Vision Sensor and Deep Learning

  • 민지영;유병준;김종혁;전해민
    • Journal of the Korea Institute for Structural Maintenance and Inspection, Vol. 26 No. 2, pp. 28-36, 2022
  • Port facilities built on reclaimed land are exposed to extreme external loads such as wind (typhoons), waves, and collisions with ships, so it is important to periodically evaluate the safety and serviceability of these structures. In this paper, a vision- and deep learning-based fender segmentation system was developed for the maintenance of fenders installed on port mooring facilities. For fender segmentation, we propose a deep learning network that improves, in a DenseNet fashion, an encoder-decoder architecture built from convolution modules based on the receptive field block (RFB), which is inspired by the eccentricity function of the human visual system. To train the network, images of fenders of various types, such as BP, V, cylindrical, and tire types, were collected and augmented through elastic deformation, horizontal flipping, color transformation, and geometric transformations before training the proposed network. The segmentation performance of the proposed model was verified against the existing segmentation model VGG16-Unet, confirming that the system can segment fenders precisely and in real time with an IoU of 84% and a harmonic mean of 90% or higher. To verify the field applicability of the proposed fender segmentation system, training was performed on images taken at domestic port facilities; the system showed superior performance compared with the existing segmentation model and detected fenders precisely.
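
For illustration, here is a minimal PyTorch sketch (not the authors' exact network) of the core idea the abstract names: an RFB-style multi-branch dilated convolution whose output is concatenated with its input, as in DenseNet. All layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class RFBDenseLayer(nn.Module):
    """Multi-branch convolution with different dilation rates (RFB-like);
    the output is concatenated with the input (DenseNet-like)."""
    def __init__(self, in_ch, growth=32):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, growth // 2, 3, padding=d, dilation=d)
            for d in (1, 3)  # eccentricity-inspired receptive field sizes
        ])
        self.bn = nn.BatchNorm2d(growth)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = torch.cat([b(x) for b in self.branches], dim=1)
        out = self.act(self.bn(out))
        return torch.cat([x, out], dim=1)  # dense connectivity

class RFBDenseBlock(nn.Sequential):
    def __init__(self, in_ch, growth=32, n_layers=3):
        layers, ch = [], in_ch
        for _ in range(n_layers):
            layers.append(RFBDenseLayer(ch, growth))
            ch += growth
        super().__init__(*layers)

x = torch.randn(1, 64, 128, 128)
y = RFBDenseBlock(64)(x)  # output channels: 64 + 3 * 32 = 160
```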

HiGANCNN: A Hybrid Generative Adversarial Network and Convolutional Neural Network for Glaucoma Detection

  • Alsulami, Fairouz;Alseleahbi, Hind;Alsaedi, Rawan;Almaghdawi, Rasha;Alafif, Tarik;Ikram, Mohammad;Zong, Weiwei;Alzahrani, Yahya;Bawazeer, Ahmed
    • International Journal of Computer Science & Network Security, Vol. 22 No. 9, pp. 23-30, 2022
  • Glaucoma is a chronic neuropathy affecting the optic nerve that can lead to blindness. Detection and prediction of glaucoma have become possible using deep neural networks; however, detection performance relies on the availability of a large amount of data. We therefore propose several frameworks, including a hybrid of a generative adversarial network and a convolutional neural network, to automate and improve the performance of glaucoma detection. The proposed frameworks are evaluated using five public glaucoma datasets. The framework that uses a Deep Convolutional Generative Adversarial Network (DCGAN) and a pre-trained DenseNet model achieves classification accuracies of 99.6%, 99.08%, 99.4%, 98.69%, and 92.95% on the RIM-ONE, Drishti-GS, ACRIMA, ORIGA-light, and HRF datasets, respectively. Based on the experimental results and evaluation, the proposed framework closely competes with state-of-the-art methods on the five public glaucoma datasets without requiring any manual preprocessing step.
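
As a hedged illustration of the classifier half described above (the DCGAN used to augment the training data is omitted), a PyTorch sketch of re-heading an ImageNet-pre-trained DenseNet-121 for binary glaucoma/normal classification:

```python
import torch.nn as nn
import torchvision

# Replace the 1000-class ImageNet head with a 2-class head
# (glaucoma / normal); data loading and training are omitted.
model = torchvision.models.densenet121(weights="DEFAULT")
model.classifier = nn.Linear(model.classifier.in_features, 2)
```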

Analysis of Weights and Feature Patterns in Popular 2D Deep Neural Networks Models for MRI Image Classification

  • Khagi, Bijen;Kwon, Goo-Rak
    • Journal of Multimedia Information System, Vol. 9 No. 3, pp. 177-182, 2022
  • A deep neural network (DNN) includes variables whose values keep changing during training until the network reaches its final point of convergence. These variables are the coefficients of a polynomial expression that relates to the feature extraction process. In general, DNNs work in multiple 'dimensions' depending on the number of channels and batches used for training. However, after feature extraction and before the SoftMax or other classifier, the features are converted from N dimensions to a single vector form, where 'N' represents the number of activation channels. This usually happens in a fully connected layer (FCL), or dense layer. This reduced 2D feature is the subject of our analysis, so the trained weights of this FCL are used for the weight-class correlation analysis. The popular DNN models selected for our study are ResNet-101, VGG-19, and GoogLeNet. These models' weights are used directly for fine-tuning (with all trained weights initially transferred) and for training from scratch (with no weights transferred). The comparison is then made by plotting graphs of the feature distribution and the final FCL weights.
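
A minimal sketch of the kind of FCL weight inspection the abstract describes, assuming a torchvision pre-trained ResNet-101; the plotting choices are illustrative, not the paper's:

```python
import matplotlib.pyplot as plt
import torchvision

# Pull the final fully connected layer's weights from a pre-trained
# model and plot their overall distribution.
model = torchvision.models.resnet101(weights="DEFAULT")
w = model.fc.weight.detach().numpy()   # shape: (num_classes, 2048)

plt.hist(w.ravel(), bins=200)
plt.xlabel("FC weight value")
plt.ylabel("count")
plt.show()
```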

The Effect of Type of Input Image on Accuracy in Classification Using Convolutional Neural Network Model

  • 김민정;김정훈;박지은;정우연;이종민
    • Journal of Biomedical Engineering Research, Vol. 42 No. 4, pp. 167-174, 2021
  • The purpose of this study is to classify TIFF, PNG, and JPEG images using deep learning and to compare accuracy by verifying classification performance. TIFF, PNG, and JPEG images converted from chest X-ray DICOM images were applied to five deep neural network models widely used for image recognition and classification. The data consisted of a total of 4,000 X-ray images, converted from DICOM images into 16-bit TIFF images and 8-bit PNG and JPEG images. The learning models are the CNN models VGG16, ResNet50, InceptionV3, DenseNet121, and EfficientNetB0. The accuracies of the five convolutional neural network models were 99.86%, 99.86%, 99.99%, 100%, and 99.89% on TIFF images; 99.88%, 100%, 99.97%, 99.87%, and 100% on PNG images; and 100%, 100%, 99.96%, 99.89%, and 100% on JPEG images. Validation of classification performance using test data showed 100% accuracy, precision, recall, and F1 score. These results show that when DICOM images are converted to TIFF, PNG, or JPEG and preprocessed before training, learning works well in all formats. In medical imaging research using deep learning, classification performance is therefore not affected by the format into which DICOM images are converted.
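
A minimal sketch of the DICOM-to-TIFF/PNG/JPEG conversion step described above, using pydicom and Pillow; the file names and normalization are assumptions for illustration:

```python
import numpy as np
import pydicom
from PIL import Image

# Read one chest X-ray DICOM and rescale its pixel values to [0, 1].
arr = pydicom.dcmread("chest.dcm").pixel_array.astype(np.float64)
arr = (arr - arr.min()) / (arr.max() - arr.min())

# 16-bit TIFF, plus 8-bit PNG and JPEG, as in the study.
Image.fromarray((arr * 65535).astype(np.uint16)).save("chest.tiff")
img8 = Image.fromarray((arr * 255).astype(np.uint8))
img8.save("chest.png")
img8.save("chest.jpg", quality=95)
```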

Research on Shellfish Recognition Based on Improved Faster RCNN

  • Feng, Yiran;Park, Sang-Yun;Lee, Eung-Joo
    • Journal of Korea Multimedia Society, Vol. 24 No. 5, pp. 695-700, 2021
  • The Faster RCNN-based shellfish recognition algorithm is introduced for shellfish recognition studies that currently do not have any deep learning-based algorithms in a practical setting. The original feature extraction module is replaced by DenseNet, which fuses multi-level feature data and optimises the NMS algorithm, network depth and merging method; overcoming the omission of shellfish overlap, multiple shellfish and insufficient light, effectively solving the problem of low shellfish classification accuracy. In the complexifier test environment, the test accuracy was improved by nearly 4%. Higher testing accuracy was achieved compared to the original testing algorithm. This provides favourable technical support for future applications of the improved Faster RCNN approach to seafood quality classification.
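
A hedged sketch of swapping DenseNet in as the Faster R-CNN feature extractor, using torchvision's generic FasterRCNN wrapper; this is not the authors' exact configuration, and the class count is hypothetical:

```python
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.anchor_utils import AnchorGenerator

# DenseNet-121 convolutional trunk as the detection backbone.
backbone = torchvision.models.densenet121(weights="DEFAULT").features
backbone.out_channels = 1024  # DenseNet-121 final feature channels

model = FasterRCNN(
    backbone,
    num_classes=5,  # hypothetical: 4 shellfish classes + background
    rpn_anchor_generator=AnchorGenerator(
        sizes=((32, 64, 128, 256),), aspect_ratios=((0.5, 1.0, 2.0),)),
    box_roi_pool=torchvision.ops.MultiScaleRoIAlign(
        featmap_names=["0"], output_size=7, sampling_ratio=2),
)
```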

Tobacco Retail License Recognition Based on Dual Attention Mechanism

  • Shan, Yuxiang;Ren, Qin;Wang, Cheng;Wang, Xiuhui
    • Journal of Information Processing Systems, Vol. 18 No. 4, pp. 480-488, 2022
  • Images of tobacco retail licenses have complex, unstructured characteristics, which poses an urgent technical problem for robotic process automation in tobacco marketing. In this paper, a novel recognition approach using a dual attention mechanism is presented to realize automatic recognition and information extraction from such images. First, a DenseNet network is used to extract license information from the input tobacco retail license data. Second, a bidirectional long short-term memory network is used for encoding and decoding, with a continuous decoder integrating dual attention, to recognize and extract information from tobacco retail license images without segmentation. Finally, several performance experiments were conducted using a large-scale dataset of tobacco retail licenses. The experimental results show that the proposed approach achieves a recognition accuracy of 98.36% on the ZY-LQ dataset, outperforming most existing methods.
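
A rough PyTorch sketch of the DenseNet-plus-BiLSTM reading pipeline described above; the dual-attention decoder itself is omitted, and the vocabulary size and pooling choice are assumptions:

```python
import torch
import torch.nn as nn
import torchvision

class DenseNetBiLSTM(nn.Module):
    """CRNN-style reader: DenseNet features are collapsed vertically and
    read column-by-column by a bidirectional LSTM."""
    def __init__(self, num_chars, hidden=256):
        super().__init__()
        self.cnn = torchvision.models.densenet121(weights="DEFAULT").features
        self.rnn = nn.LSTM(1024, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, num_chars)

    def forward(self, x):                    # x: (B, 3, H, W)
        f = self.cnn(x)                      # (B, 1024, H', W')
        f = f.mean(dim=2).permute(0, 2, 1)   # (B, W', 1024): one step per column
        out, _ = self.rnn(f)                 # (B, W', 2 * hidden)
        return self.fc(out)                  # per-step character logits
```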

Estimation of Heading Date of Paddy Rice from Slanted View Images Using Deep Learning Classification Model

  • Hyeokjin Bak;Hoyoung Ban;Seongryul Chang;Dongwon Gwon;Jae-Kyeong Baek;Jeong-Il Cho;Wan-Gyu Sang
    • Proceedings of the Korean Society of Crop Science Conference, 2022 Fall Conference, pp. 80-80, 2022
  • Estimation of the heading date of paddy rice is laborious and time-consuming, so automatic estimation is highly desirable. In this experiment, deep learning classification models were used to classify two categories of rice (vegetative and reproductive stage) based on panicle initiation in the paddy field. The dataset comprised 444 slanted-view images in the two categories and was expanded to 1,497 images via the imgaug data augmentation technique. We adopted a two-part transfer learning strategy: first, model weights already trained on ImageNet were transferred to six classification network models (VGGNet, ResNet, DenseNet, InceptionV3, Xception, and MobileNet); second, some layers of each network were fine-tuned on our dataset. After training the CNN models, we used several evaluation metrics common for classification tasks, including accuracy, precision, recall, and F1-score. In addition, Grad-CAM was used to generate visual explanations for each image patch. Experimental results showed that InceptionV3 was the best-performing model in terms of accuracy, average recall, precision, and F1-score; the fine-tuned InceptionV3 model achieved an overall classification accuracy of 0.95 with a high F1-score of 0.95. Our CNN model also captured the change in rice heading date under different transplanting dates. This study demonstrates that an image-based deep learning model can reliably serve as an automatic monitoring system to detect the heading date of rice crops from CCTV camera images.
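
A minimal sketch of the fine-tuning strategy described above, assuming torchvision's ImageNet-pre-trained InceptionV3 (which expects 299×299 inputs); the choice of layers to unfreeze is illustrative:

```python
import torch.nn as nn
import torchvision

# Re-head both classifier outputs for the two growth stages
# (vegetative vs. reproductive).
model = torchvision.models.inception_v3(weights="DEFAULT")
model.fc = nn.Linear(model.fc.in_features, 2)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 2)

# Fine-tune only the late blocks and the new heads; freeze the rest.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith(("Mixed_7", "fc", "AuxLogits"))
```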

Use of deep learning in nano image processing through the CNN model

  • Xing, Lumin;Liu, Wenjian;Liu, Xiaoliang;Li, Xin;Wang, Han
    • Advances in Nano Research, Vol. 12 No. 2, pp. 185-195, 2022
  • Deep learning is a field of artificial intelligence (AI) utilized for computer-aided diagnosis (CAD) and image processing in scientific research. Reading image slices involves numerous repetitive mechanical tasks, takes time, and is constrained by geography, and interpreting image information is highly subjective, which raises the error rate in diagnosis. Given the high mortality rate of lung cancer, biopsy is needed to determine its class for further treatment. Deep learning has recently provided strong tools for diagnosing lung cancer and planning therapeutic regimens. However, identifying the pathological class of lung cancer from CT images at an early stage is difficult because of the absence of powerful AI models and public training datasets. A convolutional neural network (CNN) was proposed for its essential role in recognizing pathological CT images. 472 patients who underwent staging FDG-PET/CT within 2 months prior to surgery or biopsy were selected. The developed CNN showed accuracies of 87%, 69%, and 69% on the training, validation, and test sets, respectively, for T1-T2 versus T3-T4 lung cancer classification. These results indicate that the CNN classifier achieves better accuracy in distinguishing pathological CT images than several deep learning models, such as ResNet-34, AlexNet, and DenseNet, with or without softmax weights.
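
For illustration only, a minimal two-class CNN sketch in the spirit of the model described; the layer sizes are invented, not the paper's:

```python
import torch.nn as nn

# Small CNN for two-class (T1-T2 vs. T3-T4) CT patch classification.
model = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 2),
)
```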

Keypoints-Based 2D Virtual Try-on Network System

  • Pham, Duy Lai;Nguyen, Nhat Tan;Chung, Sun-Tae
    • Journal of Korea Multimedia Society, Vol. 23 No. 2, pp. 186-203, 2020
  • Image-based virtual try-on systems, which fit a target clothing item onto a model person image, are among the most promising solutions for virtual fitting and have thus attracted considerable research effort. In many cases, current solutions fail to achieve a natural-looking virtually fitted image in which target clothes are transferred onto the body area of a model person of any shape and pose while keeping clothing context such as texture, text, and logos free of distortion and artifacts. In this paper, we propose a new, improved image-based virtual try-on network system based on keypoints, which we name KP-VTON. The proposed KP-VTON first detects keypoints in the target clothes and reliably predicts keypoints in the clothes of a model person image by utilizing dense human pose estimation. Then, through a TPS transformation calculated using the keypoints as control points, a warped target clothes image matched to the body area is obtained. Finally, a new try-on module adopting Attention U-Net is applied to handle more detailed synthesis of the virtually fitted image. Extensive experiments on a well-known dataset show that the proposed KP-VTON performs better than state-of-the-art virtual try-on systems.
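
A sketch of the additive attention gate that Attention U-Net builds on (the try-on module's named ingredient), assuming the gating signal has already been brought to the skip feature's spatial size; channel sizes are illustrative:

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate in the Attention U-Net style."""
    def __init__(self, g_ch, x_ch, inter_ch):
        super().__init__()
        self.wg = nn.Conv2d(g_ch, inter_ch, 1)   # project gating signal
        self.wx = nn.Conv2d(x_ch, inter_ch, 1)   # project skip feature
        self.psi = nn.Conv2d(inter_ch, 1, 1)     # scalar attention map

    def forward(self, g, x):           # g: gating signal, x: skip feature
        a = torch.relu(self.wg(g) + self.wx(x))
        a = torch.sigmoid(self.psi(a))  # (B, 1, H, W) in [0, 1]
        return x * a                    # suppress irrelevant skip features
```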

Performance Comparison of Base CNN Models in Transfer Learning for Crop Diseases Classification

  • 윤협상;정석봉
    • Journal of Society of Korea Industrial and Systems Engineering, Vol. 44 No. 3, pp. 33-38, 2021
  • Recently, transfer learning techniques built on a base convolutional neural network (CNN) model have gained wide acceptance for early detection and classification of crop diseases, increasing agricultural productivity by reducing disease spread. Transfer learning-based classifiers generally achieve over 90% classification accuracy for crop diseases using datasets of crop leaf images (e.g., the PlantVillage dataset), but they can classify only the diseases on which they were trained. This paper provides an evaluation scheme for selecting an effective base CNN model for crop disease transfer learning, with regard to accuracy on both trained and untrained target crops. First, we present transfer learning models, called the CDC (crop disease classification) architecture, built on widely used base (pre-trained) CNN models. We then evaluate the performance of seven base CNN models on four untrained crops. The results of the performance evaluation show that DenseNet201 is one of the best base CNN models.
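
A minimal sketch of swapping base CNN models in a transfer-learning classifier, assuming torchvision pre-trained weights; the class count shown matches the commonly cited PlantVillage label set but is illustrative:

```python
import torch.nn as nn
import torchvision

num_classes = 38  # e.g., the PlantVillage label count

def build(name):
    """Load an ImageNet-pre-trained base model and replace its head."""
    if name == "densenet201":
        m = torchvision.models.densenet201(weights="DEFAULT")
        m.classifier = nn.Linear(m.classifier.in_features, num_classes)
    elif name == "resnet50":
        m = torchvision.models.resnet50(weights="DEFAULT")
        m.fc = nn.Linear(m.fc.in_features, num_classes)
    elif name == "vgg16":
        m = torchvision.models.vgg16(weights="DEFAULT")
        m.classifier[6] = nn.Linear(m.classifier[6].in_features, num_classes)
    return m

models = {n: build(n) for n in ("densenet201", "resnet50", "vgg16")}
```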