• Title/Summary/Keyword: DenseNet


Perceptual Photo Enhancement with Generative Adversarial Networks (GAN 신경망을 통한 자각적 사진 향상)

  • Que, Yue;Lee, Hyo Jong
    • Proceedings of the Korea Information Processing Society Conference / 2019.05a / pp.522-524 / 2019
  • Despite rapid improvement in the quality of built-in mobile cameras, physical restrictions prevent them from achieving results comparable to digital single lens reflex (DSLR) cameras. In this work we propose an end-to-end deep learning method that translates ordinary photos taken by mobile cameras into DSLR-quality photos. The method is based on the framework of generative adversarial networks (GANs) with several improvements. First, we combined U-Net with DenseNet, connecting dense blocks (DBs) within the U-Net structure; this Dense U-Net acts as the generator in our GAN model. Then, we improved the perceptual loss by using VGG features and pixel-wise content, which provides stronger supervision for contrast enhancement and texture recovery.
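
The combination of a VGG-feature perceptual term and a pixel-wise content term described above can be sketched as a single loss module. The following is a minimal, illustrative PyTorch sketch; the VGG layer cut-off, the loss weights, and the tensor sizes are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

class PerceptualContentLoss(nn.Module):
    """Perceptual (VGG feature) loss plus pixel-wise content loss.

    The VGG cut-off (up to an intermediate feature layer) and the weighting
    between the two terms are assumptions made for illustration.
    """
    def __init__(self, feature_weight=1.0, pixel_weight=1.0):
        super().__init__()
        # Frozen VGG-19 feature extractor used only to compare feature maps.
        vgg = vgg19(weights="IMAGENET1K_V1").features[:27].eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg
        self.feature_weight = feature_weight
        self.pixel_weight = pixel_weight
        self.mse = nn.MSELoss()

    def forward(self, enhanced, target):
        # Pixel-wise content term: keeps colors and structure close to the target.
        pixel_loss = self.mse(enhanced, target)
        # Perceptual term: compares deep VGG feature maps of the two images.
        feature_loss = self.mse(self.vgg(enhanced), self.vgg(target))
        return self.pixel_weight * pixel_loss + self.feature_weight * feature_loss

# Dummy tensors standing in for generator output and a DSLR target image.
loss_fn = PerceptualContentLoss()
fake, real = torch.rand(2, 3, 128, 128), torch.rand(2, 3, 128, 128)
print(loss_fn(fake, real))
```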

Comparison of Deep Learning Models for Judging Business Card Image Rotation (명함 이미지 회전 판단을 위한 딥러닝 모델 비교)

  • Ji-Hoon, Kyung
    • Journal of the Korea Institute of Information and Communication Engineering / v.27 no.1 / pp.34-40 / 2023
  • Smart business card printing systems that automatically print business cards requested by customers online are becoming common. A problem arises when the business card image submitted by the customer is abnormal. This paper addresses the problem of determining whether a business card image has been abnormally rotated by applying artificial intelligence technology. It is assumed that the image may be rotated by 0, 90, 180, or 270 degrees. Experiments were conducted by applying the existing VGG, ResNet, and DenseNet architectures without designing a special neural network, and these models were able to distinguish image rotation with an accuracy of about 97%: DenseNet161 achieved 97.9% and ResNet34 achieved 97.2%. This illustrates that, for a simple problem, sufficiently good results can be obtained even without a complex neural network.
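
Adapting a pretrained classifier to the four rotation classes amounts to replacing the final layer and training on labeled card images. A minimal sketch using torchvision's DenseNet161 follows; the optimizer, learning rate, and input size are assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# DenseNet161 pretrained on ImageNet; only the classifier head is replaced
# for the four rotation classes (0, 90, 180, 270 degrees).
model = models.densenet161(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, 4)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed hyperparameters

# One illustrative training step on a dummy batch of business card images.
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, 4, (8,))  # 0: 0 deg, 1: 90 deg, 2: 180 deg, 3: 270 deg
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```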

Assessment of the FC-DenseNet for Crop Cultivation Area Extraction by Using RapidEye Satellite Imagery (RapidEye 위성영상을 이용한 작물재배지역 추정을 위한 FC-DenseNet의 활용성 평가)

  • Seong, Seon-kyeong;Na, Sang-il;Choi, Jae-wan
    • Korean Journal of Remote Sensing / v.36 no.5_1 / pp.823-833 / 2020
  • In order to produce crops stably, there is an increasing demand for effective crop monitoring techniques in domestic agricultural areas. In this manuscript, a cultivation area extraction method using a deep learning model is developed and applied to satellite imagery. Training datasets for crop cultivation areas were generated from RapidEye satellite images, which include blue, green, red, red-edge, and NIR bands useful for vegetation and environmental analysis, and were used to estimate the cultivation areas of onion and garlic with a deep learning model. To train the model, atmospherically corrected RapidEye satellite images were used, and a deep learning model based on FC-DenseNet, one of the representative models for semantic segmentation, was created. The final crop cultivation area was determined as object-based data through combination with cadastral maps. The experiments confirmed that the FC-DenseNet model trained on atmospherically corrected data can effectively detect crop cultivation areas.
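
FC-DenseNet itself is not part of standard libraries, so the sketch below only illustrates the two points relevant here: accepting five-band RapidEye input (blue, green, red, red-edge, NIR) and producing per-pixel class logits through densely connected convolutions. The layer counts, growth rate, and class set (e.g. background/onion/garlic) are assumptions; the real FC-DenseNet stacks dense blocks with down- and up-sampling transitions.

```python
import torch
import torch.nn as nn

class TinyDenseSegNet(nn.Module):
    """Heavily simplified stand-in for FC-DenseNet: one dense block between a
    five-band input convolution and a per-pixel classifier."""
    def __init__(self, in_bands=5, growth=16, layers=4, num_classes=3):
        super().__init__()
        self.stem = nn.Conv2d(in_bands, 48, kernel_size=3, padding=1)
        ch = 48
        self.block = nn.ModuleList()
        for _ in range(layers):
            self.block.append(nn.Sequential(
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
                nn.Conv2d(ch, growth, kernel_size=3, padding=1)))
            ch += growth  # dense connectivity: each layer sees all earlier maps
        self.head = nn.Conv2d(ch, num_classes, kernel_size=1)

    def forward(self, x):
        x = self.stem(x)
        feats = [x]
        for layer in self.block:
            feats.append(layer(torch.cat(feats, dim=1)))
        return self.head(torch.cat(feats, dim=1))  # (N, classes, H, W) logits

patch = torch.rand(1, 5, 128, 128)     # atmospherically corrected RapidEye patch
print(TinyDenseSegNet()(patch).shape)  # torch.Size([1, 3, 128, 128])
```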

Classification Method of Plant Leaf using DenseNet (DenseNet을 활용한 식물 잎 분류 방안 연구)

  • Park, Young Min;Gang, Su Myung;Chae, Ji Hun;Lee, Joon Jae
    • Journal of Korea Multimedia Society / v.21 no.5 / pp.571-582 / 2018
  • Recently, the development of deep learning has produced image classification results that exceed human performance. Recent research indicates that deeper hidden layers and better preservation of extracted features lead to good results. For general images, the extracted features are distinct and easy to classify. This study, however, aims to classify plant leaf images, which show high similarity not only between different species but also within the same species, so classification accuracy does not increase simply by deepening the hidden layers or adding connections between them. Therefore, in this paper, we modify the hidden layers of DenseNet, which has recently shown excellent classification results, and compare several different modified configurations. The proposed method makes it possible to classify plant leaf images collected in a natural environment more easily and accurately than conventional methods, producing good classification results even for leaf image data that contain unwanted noise from the natural environment.
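
One way to compare modified DenseNet configurations, as the paper does, is to instantiate the same architecture with different block configurations and growth rates. The sketch below uses torchvision's generic DenseNet class; the specific variants and the number of leaf classes are assumptions, since the abstract does not list them.

```python
import torch
from torchvision.models import DenseNet

# Illustrative layer variants; the paper's actual modifications are not
# specified in the abstract, so these configurations are assumptions.
variants = {
    "densenet121-like": dict(growth_rate=32, block_config=(6, 12, 24, 16)),
    "shallower":        dict(growth_rate=32, block_config=(6, 12, 12, 8)),
    "wider-growth":     dict(growth_rate=48, block_config=(6, 12, 24, 16)),
}

num_leaf_classes = 100  # assumed number of plant leaf species
for name, cfg in variants.items():
    model = DenseNet(num_classes=num_leaf_classes, **cfg)
    out = model(torch.rand(1, 3, 224, 224))
    print(name, out.shape)  # each variant yields (1, 100) logits
```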

Performance Analysis of Feature Extractor for Transfer Learning of a Small Sample of Medical Images (소표본 의료 영상의 전이 학습을 위한 Feature Extractor 기법의 성능 비교 및 분석)

  • Lee, Dong-Ho;Hong, Dae-Yong;Lee, Yeon;Shin, Byeong-Seok
    • Proceedings of the Korea Information Processing Society Conference / 2018.05a / pp.405-406 / 2018
  • This paper studies a method of improving the accuracy of small-sample medical image analysis by building a transfer learning model as a feature extractor and training it. For performance evaluation, AlexNet, ResNet, and DenseNet were used as pretrained models and the results were compared with those obtained by applying the fine-tuning approach. The experiments confirmed that all three models achieved higher accuracy with the feature extractor approach than with fine tuning, and also showed that AlexNet, ResNet, and DenseNet trained on ImageNet can be applied to small samples of medical X-ray images.
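
The two setups compared in the paper differ only in whether the ImageNet-pretrained backbone is frozen. A minimal sketch for DenseNet121 is shown below; the choice of model depth and the number of X-ray classes are assumptions.

```python
import torch.nn as nn
from torchvision import models

def build_feature_extractor(num_classes=2):
    """Feature extractor setup: the pretrained backbone is frozen and only the
    new classifier head is trained."""
    model = models.densenet121(weights="IMAGENET1K_V1")
    for p in model.parameters():
        p.requires_grad = False
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model

def build_fine_tuned(num_classes=2):
    """Fine-tuning setup: all weights, backbone included, are updated."""
    model = models.densenet121(weights="IMAGENET1K_V1")
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model
```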

A Study on the Outlet Blockage Determination Technology of Conveyor System using Deep Learning

  • Jeong, Eui-Han;Suh, Young-Joo;Kim, Dong-Ju
    • Journal of the Korea Society of Computer and Information / v.25 no.5 / pp.11-18 / 2020
  • This study proposes a technique for determining outlet blockage in a conveyor system using deep learning. The proposed method aims to apply the best model to the actual process: we train various CNN models to determine outlet blockage from images collected by CCTV at an industrial site. We used well-known CNN models such as VGGNet, ResNet, DenseNet, and NASNet, with 18,000 CCTV images for model training and performance evaluation. In experiments with the various models, VGGNet showed the best performance with 99.03% accuracy and 29.05 ms processing time, and we confirmed that VGGNet is suitable for determining outlet blockage.
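
Per-image processing time, reported alongside accuracy above, can be estimated by timing repeated forward passes. The sketch below is only illustrative; hardware, input resolution, and iteration counts are assumptions, so absolute numbers will not match the reported 29.05 ms.

```python
import time
import torch
from torchvision import models

model = models.vgg16(weights=None).eval()  # blocked/clear classification head omitted
image = torch.rand(1, 3, 224, 224)

with torch.no_grad():
    for _ in range(5):                     # warm-up iterations
        model(image)
    start = time.perf_counter()
    for _ in range(50):
        model(image)
    elapsed_ms = (time.perf_counter() - start) / 50 * 1000

print(f"average inference time: {elapsed_ms:.2f} ms")
```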

A Study on Trademark Vienna Classification Automation Using Faster R-CNN and DenseNet (Faster R-CNN과 DenseNet을 이용한 도형 상표 비엔나 분류 자동화 연구)

  • Lee, Jin-woo;Kim, Hong-ki;Lee, Ha-young;Ko, Bong-soo;Lee, Bong-gun
    • Proceedings of the Korea Information Processing Society Conference / 2019.10a / pp.848-851 / 2019
  • Because trademarks are registered as images, searching for them is difficult. To make figurative trademarks easier to search, the Korean Intellectual Property Office assigns figure classification codes to the elements contained in a trademark. However, the process of inspecting the images in a figurative trademark and assigning classification codes must currently be performed by hand. This paper therefore proposes a method that uses deep learning to automatically recognize the objects in a figurative trademark and assign classification codes. DenseNet is used to first predict the middle-level class, and a Faster R-CNN model corresponding to each middle-level class then performs fine-level class prediction. Performance evaluation showed an average prediction accuracy of 74.49% per Vienna classification middle-level class.
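
The two-stage pipeline described above can be sketched as a middle-level DenseNet classifier that routes the image to a detector chosen per middle-level class. torchvision's ResNet50-FPN Faster R-CNN is used here purely as a stand-in detector, and the class counts are toy assumptions (the Vienna classification has more middle-level classes).

```python
import torch
from torchvision import models
from torchvision.models.detection import fasterrcnn_resnet50_fpn

NUM_MID_CLASSES = 3   # toy value; assumed for illustration only

# Stage 1: DenseNet predicts the Vienna middle-level class (untrained here).
mid_classifier = models.densenet121(num_classes=NUM_MID_CLASSES).eval()

# Stage 2: one detector per middle-level class predicts fine-level figures.
detectors = {m: fasterrcnn_resnet50_fpn(weights=None, num_classes=10).eval()
             for m in range(NUM_MID_CLASSES)}

image = torch.rand(3, 512, 512)
with torch.no_grad():
    mid = mid_classifier(image.unsqueeze(0)).argmax(dim=1).item()
    detections = detectors[mid]([image])   # boxes, labels, scores per figure
print(mid, detections[0]["boxes"].shape)
```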

Parallel Dense Merging Network with Dilated Convolutions for Semantic Segmentation of Sports Movement Scene

  • Huang, Dongya;Zhang, Li
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.11 / pp.3493-3506 / 2022
  • In the field of scene segmentation, precise segmentation of object boundaries in sports movement scene images is a great challenge. The geometric and spatial information of the image is very important, but in many models it is easily lost, which strongly affects model performance. To alleviate this problem, a parallel dense dilated convolution merging network (termed PDDCM-Net) is proposed. PDDCM-Net consists of a feature extractor, parallel dilated convolutions, and dense dilated convolutions merged with different dilation rates. We utilize different combinations of dilated convolutions that expand the receptive field of the model with fewer parameters than other advanced methods. Importantly, PDDCM-Net fuses both low-level and high-level information, in effect alleviating the problems of accurately segmenting object edges and localizing objects. Experimental results show that the proposed PDDCM-Net achieves a substantial improvement over several representative models on the COCO-Stuff dataset.
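
The core building block, parallel 3x3 convolutions with different dilation rates whose outputs are merged, can be sketched as follows. The dilation rates and channel sizes are assumptions, and the full PDDCM-Net additionally merges dense dilated branches and fuses low- and high-level features.

```python
import torch
import torch.nn as nn

class ParallelDilatedBlock(nn.Module):
    """Parallel 3x3 convolutions with increasing dilation rates; each branch
    enlarges the receptive field without adding parameters, and a 1x1
    convolution merges the concatenated branch outputs."""
    def __init__(self, in_ch=256, out_ch=256, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates)
        self.merge = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        return self.merge(torch.cat([b(x) for b in self.branches], dim=1))

feat = torch.rand(1, 256, 64, 64)          # feature map from the extractor
print(ParallelDilatedBlock()(feat).shape)  # torch.Size([1, 256, 64, 64])
```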

Face spoofing detection using DenseNet (DenseNet을 통한 얼굴 스푸핑 탐지 기술)

  • Kim, So-Eui;Yu, Su-Gyeong;Lee, Eui Chul
    • Proceedings of the Korea Information Processing Society Conference / 2020.05a / pp.580-581 / 2020
  • Face-based identity recognition is used in many fields because of its convenience and universality. However, numerous face spoofing attacks using simple means, such as presenting a photo of another person's face or replaying a face video on a tablet PC, have been reported. Existing methods that rely on image texture features are vulnerable to the focus state of the image and depend on the data used for machine learning, so a more robust spoofing detection technique is needed. In this study, we investigated a deep learning based fake face detection technique using DenseNet and a self-built database that includes variations in angle and distance.

Emergency Sound Classification with Early Fusion (Early Fusion을 적용한 위급상황 음향 분류)

  • Jin-Hwan Yang;Sung-Sik Kim;Hyuk-Soon Choi;Nammee Moon
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.1213-1214 / 2023
  • The growing number of CCTV installations at home and abroad has raised concerns such as privacy invasion and high installation costs. This study therefore proposes an emergency sound classification model using early fusion. Feature vectors are extracted from the acoustic data with STFT (Short Time Fourier Transform), Spectrogram, and Mel-Spectrogram, fused early into a three-channel representation, and used to train ResNet, DenseNet, and EfficientNetV2. Experimental results show that the early fusion method gave the best results, with DenseNet and EfficientNetV2 both achieving 0.972 in Accuracy and F1-Score.
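
The early-fusion step described above can be sketched as stacking three time-frequency representations of the same clip into a three-channel input for a CNN such as DenseNet. The FFT size, hop length, mel-bin count, and target resolution below are assumptions.

```python
import torch
import torch.nn.functional as F
import torchaudio

def early_fusion(waveform, sample_rate=16000, size=(224, 224)):
    """Stack STFT magnitude, power spectrogram, and mel-spectrogram of one clip
    into a (3, H, W) tensor after resizing each to a common shape."""
    spec = torchaudio.transforms.Spectrogram(n_fft=1024, hop_length=512)(waveform)
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=sample_rate, n_fft=1024, hop_length=512, n_mels=128)(waveform)
    stft_mag = torch.stft(waveform, n_fft=1024, hop_length=512,
                          window=torch.hann_window(1024),
                          return_complex=True).abs()
    channels = []
    for rep in (stft_mag, spec, mel):
        rep = torch.log1p(rep)                          # compress dynamic range
        rep = F.interpolate(rep.unsqueeze(0), size=size,
                            mode="bilinear", align_corners=False)
        channels.append(rep.squeeze(0))
    return torch.cat(channels, dim=0)                   # (3, 224, 224)

clip = torch.rand(1, 16000)                             # one second of dummy audio
print(early_fusion(clip).shape)
```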