• Title/Summary/Keyword: ResNet-50

Search Results: 126

Improved Adapting a Single Network to Multiple Tasks By Bit Plane Slicing and Dithering (향상된 비트 평면 분할을 통한 다중 학습 통합 신경망 구축)

  • Bae, Joon-ki; Bae, Sung-ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.07a / pp.643-646 / 2020
  • In this paper, we analyze the limitations of our previous work, building a multi-task unified neural network via bit plane slicing and dithering, and present an improved method. Approaches attempted to date for building a unified network either share the weights or layers that make up the network or partition them per task. Along the same lines, this work builds a more efficient unified network by allocating a smaller unit, the bit planes of the weights, to individual tasks. Experiments were performed on image classification. Applied to the popular ResNet18 architecture, the method achieved a theoretical compression rate of 50% on the CIFAR10 and CIFAR100 datasets with almost no performance degradation.

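The core operation the abstract describes, slicing quantized weights into bit planes so that different planes can be assigned to different tasks, can be sketched in a few lines of NumPy (a minimal illustration only; the paper's per-task allocation and dithering steps are not reproduced here):

```python
import numpy as np

def to_bit_planes(weights_q: np.ndarray, n_bits: int = 8) -> np.ndarray:
    """Slice an unsigned quantized weight tensor into its bit planes.

    Returns an array of shape (n_bits, *weights_q.shape) where plane k
    holds bit k (LSB first) of every weight.
    """
    return np.stack([(weights_q >> k) & 1 for k in range(n_bits)])

def from_bit_planes(planes: np.ndarray) -> np.ndarray:
    """Reassemble the weights from their bit planes (inverse operation)."""
    weights = np.zeros(planes.shape[1:], dtype=np.int64)
    for k in range(planes.shape[0]):
        weights += planes[k].astype(np.int64) << k
    return weights.astype(np.uint8)

# Round-trip check on random 8-bit "weights".
w = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)
assert np.array_equal(from_bit_planes(to_bit_planes(w)), w)
```

A task-specific sub-network would then keep only its assigned subset of planes, which is where the theoretical 50% compression comes from.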

Detection of Power Transmission Equipment in Image using Guided Grad-CAM (Guided Grad-CAM 을 이용한 영상 내 송전설비 검출기법)

  • Park, Eun-Soo; Kim, SeungHwan; Mujtaba, Ghulam; Ryu, Eun-Seok
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.07a / pp.709-713 / 2020
  • This paper proposes a method for effectively detecting power transmission equipment containing objects such as transmission lines, which are difficult to distinguish even with the naked eye. An object-detection model is trained on a transmission tower dataset to extract regions of interest (ROIs) containing transmission equipment. A transmission line dataset is then used to train ResNet50, and Guided Grad-CAM maps are produced for the extracted ROI images. A noise-removal post-processing step is applied to the Guided Grad-CAM output to extract the transmission equipment. With the proposed technique, transmission equipment can be maintained using footage captured by drones or UAV helicopters.

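The Grad-CAM half of the pipeline (Guided Grad-CAM additionally multiplies this map by guided backpropagation) reduces to a gradient-weighted sum of feature maps; a minimal NumPy sketch on placeholder tensors:

```python
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Plain Grad-CAM heatmap from one conv layer.

    activations, gradients: shape (C, H, W) -- the feature maps and the
    gradient of the class score with respect to those maps.
    """
    # Channel weights: global-average-pool the gradients.
    alphas = gradients.mean(axis=(1, 2))                              # (C,)
    # Weighted sum of feature maps, then ReLU.
    cam = np.maximum((alphas[:, None, None] * activations).sum(axis=0), 0.0)
    # Normalize to [0, 1] for thresholding / visualization.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

acts = np.random.rand(8, 7, 7)
grads = np.random.randn(8, 7, 7)
heat = grad_cam(acts, grads)
assert heat.shape == (7, 7) and heat.min() >= 0.0 and heat.max() <= 1.0
```

The paper's noise-removal post-processing would then threshold a map like this before extracting the equipment region.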

Dynamic Filter Pruning for Compression of Deep Neural Network (동적 필터 프루닝 기법을 이용한 심층 신경망 압축)

  • Cho, InCheon; Bae, SungHo
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.07a / pp.675-679 / 2020
  • Models with deeper layers and wider channels have recently been proposed to improve image classification performance, but such high-accuracy models demand excessive computing power and computation time. This paper presents a dynamic filter pruning method for the deep neural network models used in image classification that removes relatively unnecessary weights while minimizing the drop in classification accuracy. Unlike one-shot pruning and static filter pruning, it gives removed weights a chance to be revived, which yields better performance. Because no retraining is required, it also guarantees fast computation and low computing power. Experiments with ResNet20 on the CIFAR10 dataset showed 88.74% classification accuracy even at a compression rate of about 50%.

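The "revival" idea the abstract describes can be sketched as recomputing an L1-norm filter mask at every training step from the live weights, so a filter zeroed at one step can re-enter later (a simplified illustration, not necessarily the authors' exact criterion):

```python
import numpy as np

def dynamic_prune_mask(conv_weights: np.ndarray, prune_ratio: float) -> np.ndarray:
    """Per-step soft pruning mask over conv filters by L1 norm.

    conv_weights: shape (out_channels, in_channels, kH, kW).
    Filters at or below the L1-norm threshold are zeroed for this step
    only; because the mask is recomputed each step from the live weights,
    a pruned filter whose magnitude recovers can be revived.
    """
    norms = np.abs(conv_weights).sum(axis=(1, 2, 3))  # L1 norm per filter
    k = int(len(norms) * prune_ratio)
    if k == 0:
        return np.ones(len(norms), dtype=bool)
    threshold = np.sort(norms)[k - 1]
    return norms > threshold

rng = np.random.default_rng(0)
w = rng.standard_normal((16, 3, 3, 3))
mask = dynamic_prune_mask(w, 0.5)      # prune the weakest 50% of filters
w_pruned = w * mask[:, None, None, None]
assert mask.sum() == 8
```

At roughly 50% of filters masked, this matches the ~50% compression rate reported in the abstract.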

A Study on Inundation Detection Using Convolutional Neural Network Based on Deep Learning (딥러닝 기반 합성곱 신경망을 이용한 자동 침수감지 기술에 관한 연구)

  • Kim, Gilho
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.323-323 / 2021
  • This study aims to develop video analysis technology for unmanned monitoring of multi-channel real-time CCTV feeds and automatic detection of localized flooding, so that such situations can be detected and handled quickly. Training and validation data captured in a variety of spaces were built, and deep learning models were developed around representative CNN-based classification models. In tests of five CNN algorithms, the ResNet-50 model showed the best performance, with a classification accuracy of 87.5%. By space, classification performance exceeded 82% for outdoor and road scenes, while indoor performance was lower due to a shortage of high-quality training data. We expect these results to contribute to advances in intelligent CCTV technology and its multi-purpose use for disaster prevention, serving as a supplementary means of reducing flood damage.


Face Recognition and Preprocessing Technique for Speaker Identification in hard of hearing broadcasting (청각장애인용 방송에서 화자 식별을 위한 얼굴 인식 알고리즘 및 전처리 연구)

  • Kim, Nayeon; Cho, Sukhee; Bae, Byungjun; Ahn, ChungHyun
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.07a / pp.450-452 / 2020
  • This paper reviews deep learning-based face recognition algorithms and applies them to actor face recognition for identifying speakers and displaying emotion-expressing captions in broadcasting for the hearing impaired. As an approach to actor face recognition, we first examine the structure of the ResNet-50-based VGGFace2 model, a one-shot-learning deep face recognition algorithm, and then explore how to recognize actor faces in actual broadcasting for the hearing impaired by applying various preprocessing methods to this model and measuring the resulting accuracy.

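One-shot face identification of the kind described, matching a query embedding against a single reference embedding per actor, can be sketched with cosine similarity (the embeddings below are placeholders standing in for ResNet-50/VGGFace2 outputs):

```python
import numpy as np

def identify(query_emb: np.ndarray, gallery: dict, threshold: float = 0.5):
    """One-shot identification: nearest gallery embedding by cosine similarity.

    gallery maps an actor name to one reference embedding. Returns
    (name, score), or (None, score) when the best match falls below
    the acceptance threshold.
    """
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    name, score = max(((n, cos(query_emb, e)) for n, e in gallery.items()),
                      key=lambda t: t[1])
    return (name if score >= threshold else None), score

# Toy 2-D embeddings; real ones would be high-dimensional model outputs.
gallery = {"actor_a": np.array([1.0, 0.0]), "actor_b": np.array([0.0, 1.0])}
who, score = identify(np.array([0.9, 0.1]), gallery)
assert who == "actor_a"
```

The preprocessing study in the paper would affect the quality of the embeddings fed into a matcher like this, not the matching rule itself.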

The Application Methods of FarmMap Reading in Agricultural Land Using Deep Learning (딥러닝을 이용한 농경지 팜맵 판독 적용 방안)

  • Wee Seong Seung; Jung Nam Su; Lee Won Suk; Shin Yong Tae
    • KIPS Transactions on Software and Data Engineering / v.12 no.2 / pp.77-82 / 2023
  • The Ministry of Agriculture, Food and Rural Affairs established the FarmMap, a digital map of agricultural land. In this study, we propose applying deep learning to FarmMap reading of farmland categories such as paddy fields, fields, ginseng, fruit trees, facilities, and uncultivated land. The FarmMap digitizes real-world agricultural land from aerial and satellite images and is used as spatial information for planting status and drone operation. A reading manual is prepared and updated every year by demarcating the boundaries of agricultural land and reading its attributes. Human readings of agricultural land differ depending on the reader's ability and experience, and reading errors are difficult to verify in practice because of budget limitations. Since the FarmMap carries location information and class information for each object across images of the five farmland property types, a suitable AI technique was tested with a ResNet50-based instance segmentation model. The attribute readings of agricultural land produced by deep learning were then compared with those produced by humans. If the technology is developed with a focus on the attribute readings where the two differ, it is expected to play a large role in reducing attribute errors and improving the accuracy of the digital map of agricultural land.

Optimization-based Deep Learning Model to Localize L3 Slice in Whole Body Computerized Tomography Images (컴퓨터 단층촬영 영상에서 3번 요추부 슬라이스 검출을 위한 최적화 기반 딥러닝 모델)

  • Seongwon Chae; Jae-Hyun Jo; Ye-Eun Park; Jin-Hyoung Jeong; Sung Jin Kim; Ahnryul Choi
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.5 / pp.331-337 / 2023
  • In this paper, we propose a deep learning model that detects lumbar 3 (L3) CT slices for determining the occurrence and degree of sarcopenia. We also propose an optimization technique that uses the oversampling ratio and class weight as design parameters to address the performance degradation caused by the data imbalance between the L3-level and non-L3-level portions of the CT data. To train and test the model, a total of 150 whole-body CT scans were used, from 104 prostate cancer patients and 46 bladder cancer patients who visited Gangneung Asan Medical Center. The deep learning model used ResNet50, and the design parameters of the optimization technique were selected as six items: the model hyperparameters together with the data augmentation ratio and the class weight. The proposed optimization-based L3-level extraction model reduced the median L3 error by about 1.0 slice compared with the control model, which optimized only the five usual hyperparameters. These results show that accurate L3 slice detection is possible, and additionally that the data imbalance problem can be effectively mitigated through oversampling by data augmentation and class weight adjustment.
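One common way to set class weights against such an imbalance is inverse class frequency; the sketch below shows that heuristic (the paper instead treats the class weight itself as an optimized design parameter, so this is a baseline, not the authors' method):

```python
import numpy as np

def inverse_frequency_weights(labels: np.ndarray) -> dict:
    """Class weights inversely proportional to class frequency.

    Rare classes get large weights so their loss terms count more,
    counteracting the imbalance between (here) L3 and non-L3 slices.
    """
    classes, counts = np.unique(labels, return_counts=True)
    weights = len(labels) / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

# e.g. few L3-level slices (label 1) among many non-L3 slices (label 0)
y = np.array([0] * 90 + [1] * 10)
w = inverse_frequency_weights(y)
assert w[1] > w[0]   # the rare L3 class is weighted more heavily
```

These weights would typically be passed to the loss function during training, while oversampling duplicates (or augments) the minority-class slices instead.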

The Effect of Type of Input Image on Accuracy in Classification Using Convolutional Neural Network Model (컨볼루션 신경망 모델을 이용한 분류에서 입력 영상의 종류가 정확도에 미치는 영향)

  • Kim, Min Jeong; Kim, Jung Hun; Park, Ji Eun; Jeong, Woo Yeon; Lee, Jong Min
    • Journal of Biomedical Engineering Research / v.42 no.4 / pp.167-174 / 2021
  • The purpose of this study is to classify TIFF, PNG, and JPEG images using deep learning and to compare their accuracy by verifying classification performance. TIFF, PNG, and JPEG images converted from chest X-ray DICOM images were fed to five deep neural network models widely used in image recognition and classification. The data consisted of a total of 4,000 X-ray images, converted from DICOM into 16-bit TIFF images and 8-bit PNG and JPEG images. The learning models were the CNN models VGG16, ResNet50, InceptionV3, DenseNet121, and EfficientNetB0. The accuracies of the five convolutional neural network models on TIFF images were 99.86%, 99.86%, 99.99%, 100%, and 99.89%; on PNG images, 99.88%, 100%, 99.97%, 99.87%, and 100%; and on JPEG images, 100%, 100%, 99.96%, 99.89%, and 100%. Validation of classification performance on test data showed 100% in accuracy, precision, recall, and F1 score. These classification results show that when DICOM images are converted to TIFF, PNG, or JPEG and learned after preprocessing, learning works well in all formats: in medical imaging research using deep learning, classification performance is not affected by the format into which the DICOM images are converted.
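The main preprocessing difference between the formats is bit depth; one plausible way to window 16-bit DICOM pixel data into the 8 bits that PNG and JPEG require is min-max scaling (the study's exact conversion pipeline is not specified, so this is an illustrative assumption):

```python
import numpy as np

def to_8bit(img16: np.ndarray) -> np.ndarray:
    """Rescale a 16-bit image into 8 bits by min-max normalization.

    16-bit TIFF would keep the original depth; 8-bit PNG/JPEG need a
    reduction like this one before saving.
    """
    img = img16.astype(np.float64)
    lo, hi = img.min(), img.max()
    if hi == lo:  # flat image: avoid division by zero
        return np.zeros_like(img16, dtype=np.uint8)
    return np.round((img - lo) / (hi - lo) * 255).astype(np.uint8)

x = np.array([[0, 32768, 65535]], dtype=np.uint16)
y = to_8bit(x)
assert y.dtype == np.uint8 and y[0, 0] == 0 and y[0, 2] == 255
```

The study's finding that accuracy is format-independent suggests this depth reduction preserves the features the CNNs rely on.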

Study on the Application of Artificial Intelligence Model for CT Quality Control (CT 정도관리를 위한 인공지능 모델 적용에 관한 연구)

  • Ho Seong Hwang; Dong Hyun Kim; Ho Chul Kim
    • Journal of Biomedical Engineering Research / v.44 no.3 / pp.182-189 / 2023
  • CT is a medical device that acquires medical images based on the X-ray attenuation coefficients of human organs; it can also produce sagittal and coronal planes and 3D images of the human body, making it an essential device for general diagnostic testing. However, because the radiation exposure of a CT scan is high, CT is regulated and managed as special medical equipment and must therefore implement quality control. Within that quality control, the spatial resolution and contrast resolution of existing phantom imaging tests and the clinical image evaluation are qualitative tests; because these tests are not objective, they undermine confidence in the reliability of the CT system. We therefore applied artificial intelligence classification models to confirm whether the qualitative parts of the phantom test can be evaluated quantitatively. We used six classification models (VGG19, DenseNet201, EfficientNet B2, inception_resnet_v2, ResNet50V2, and Xception), with an additional fine-tuning step during training. Across all classification models, the accuracy for spatial resolution was at least 0.9562, the precision 0.9535, the recall 1, and the loss value 0.1774, with training times ranging from a maximum of 14 minutes down to a minimum of 8 minutes 10 seconds. From these experimental results, we conclude that artificial intelligence models can be applied to CT quality control for spatial resolution and contrast resolution.

A Comparative Study on the Effective Deep Learning for Fingerprint Recognition with Scar and Wrinkle (상처와 주름이 있는 지문 판별에 효율적인 심층 학습 비교연구)

  • Kim, JunSeob; Rim, BeanBonyka; Sung, Nak-Jun; Hong, Min
    • Journal of Internet Computing and Services / v.21 no.4 / pp.17-23 / 2020
  • Biometric information, which measures items related to human characteristics, has attracted great attention as a highly reliable security technology because there is no fear of theft or loss. Among biometric traits, fingerprints are mainly used in fields such as identity verification and identification. When a fingerprint image presents a problem that makes authentication difficult, such as a wound, wrinkle, or moisture, a fingerprint expert can identify the problem directly in a preprocessing step and apply an image-processing algorithm appropriate to it. By implementing artificial intelligence software that distinguishes fingerprint images containing cuts and wrinkles, it becomes easy to check whether such defects are present and to select an appropriate algorithm for improving the fingerprint image. In this study, we built a fingerprint database of 17,080 images in total by acquiring all fingerprints of 1,010 students from the Royal University of Cambodia, 600 images from the Sokoto open dataset, and the fingerprints of 98 Korean students. Criteria were established for determining whether the collected images contained injuries or wrinkles, and the data were validated by experts. The training and test datasets consisted of the Cambodian and Sokoto data at a ratio of 8:2, and the data from the 98 Korean students were set aside as a validation set. Using this dataset, five CNN-based architectures were implemented: a classic CNN, AlexNet, VGG-16, ResNet50, and YOLOv3, and a study was conducted to find the model that performed best at the readings. Among the five architectures, ResNet50 showed the best performance, with 81.51%.
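The 8:2 split of the Cambodian and Sokoto data described above can be sketched as a seeded shuffle (the Korean data are held out whole as validation and therefore not split):

```python
import numpy as np

def split_8_2(items, seed: int = 0):
    """Shuffled 80/20 train/test split, the ratio used in the study."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(items))   # random order, reproducible by seed
    cut = int(len(items) * 0.8)
    items = np.asarray(items)
    return items[idx[:cut]], items[idx[cut:]]

train, test = split_8_2(list(range(100)))
assert len(train) == 80 and len(test) == 20
# No sample is lost or duplicated by the split.
assert sorted(np.concatenate([train, test]).tolist()) == list(range(100))
```

Keeping the validation set from a different population (Korean students) than the train/test pools gives a stricter check on generalization.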