• Title/Abstract/Keyword: Dataset Training


Transfer learning in a deep convolutional neural network for implant fixture classification: A pilot study

  • Kim, Hak-Sun;Ha, Eun-Gyu;Kim, Young Hyun;Jeon, Kug Jin;Lee, Chena;Han, Sang-Sun
    • Imaging Science in Dentistry / Vol. 52, No. 2 / pp. 219-224 / 2022
  • Purpose: This study aimed to evaluate the performance of transfer learning in a deep convolutional neural network for classifying implant fixtures. Materials and Methods: Periapical radiographs of implant fixtures obtained using the Superline (Dentium Co. Ltd., Seoul, Korea), TS III (Osstem Implant Co. Ltd., Seoul, Korea), and Bone Level Implant (Institut Straumann AG, Basel, Switzerland) systems were selected from patients who underwent dental implant treatment. All 355 implant fixtures comprised the total dataset and were annotated with the name of the system. The total dataset was split into training and test datasets at a ratio of 8 to 2. YOLOv3 (You Only Look Once version 3, available at https://pjreddie.com/darknet/yolo/), a deep convolutional neural network pretrained on a large image dataset of objects, was used to train the model to classify fixtures in periapical images, in a process called transfer learning. The network was trained with the training dataset for 100, 200, and 300 epochs. Using the test dataset, the performance of the network was evaluated in terms of sensitivity, specificity, and accuracy. Results: When YOLOv3 was trained for 200 epochs, the sensitivity, specificity, accuracy, and confidence score were the highest for all systems, with overall results of 94.4%, 97.9%, 96.7%, and 0.75, respectively. The network performed best in classifying Bone Level Implant fixtures, with 100.0% sensitivity, specificity, and accuracy. Conclusion: Through transfer learning, high performance could be achieved with YOLOv3, even with a small amount of data.
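
A minimal sketch of the transfer-learning recipe this abstract describes: start from weights pretrained on a large image dataset, replace the classification head, and fine-tune on a small labeled set. The torchvision ResNet backbone and hyperparameters below are illustrative assumptions standing in for the authors' Darknet/YOLOv3 pipeline.

```python
# Transfer-learning sketch (assumption: a torchvision backbone stands in
# for the Darknet/YOLOv3 pipeline the study actually used).
import torch
import torch.nn as nn
from torchvision import models

FIXTURE_CLASSES = ["Superline", "TS III", "Bone Level Implant"]  # from the abstract

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, len(FIXTURE_CLASSES))  # new head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

def train(loader, epochs):  # the study compared 100, 200, and 300 epochs
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```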

Two-Stream Convolutional Neural Network for Video Action Recognition

  • Qiao, Han;Liu, Shuang;Xu, Qingzhen;Liu, Shouqiang;Yang, Wanggan
    • KSII Transactions on Internet and Information Systems (TIIS) / Vol. 15, No. 10 / pp. 3668-3684 / 2021
  • Video action recognition is widely used in video surveillance, behavior detection, human-computer interaction, medically assisted diagnosis, and motion analysis. However, video action recognition can be disturbed by many factors, such as background and illumination. A two-stream convolutional neural network trains spatial and temporal models separately and fuses them at the output. The multi-segment two-stream convolutional neural network model trains on temporal and spatial information from the video, extracts and fuses their features, and then determines the action category. This paper adopts the Google Xception model with transfer learning, using Xception weights trained on ImageNet as the initial weights. This largely overcomes the underfitting caused by the limited size of video action datasets, effectively reduces the influence of various disturbing factors in the video, improves accuracy, and reduces training time. Furthermore, to compensate for the shortage of data, the Kinetics-400 dataset was used for pre-training, which greatly improved the accuracy of the model. Through this applied research, the expected goal was essentially achieved, and the design of the original two-stream model was improved.
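
As a sketch of the initialization step described above, an ImageNet-pretrained Xception backbone can be loaded and given a new classification head. The input shape, head, and optimizer below are illustrative assumptions, not the paper's exact configuration.

```python
# Spatial-stream sketch: Xception initialized with ImageNet weights
# (input shape and class count are illustrative assumptions).
import tensorflow as tf

NUM_CLASSES = 400  # e.g. Kinetics-400, mentioned in the abstract

base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
x = tf.keras.layers.GlobalAveragePooling2D()(base.output)
out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
spatial_stream = tf.keras.Model(base.input, out)
spatial_stream.compile(optimizer="adam", loss="categorical_crossentropy")
```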

Adversarial Shade Generation and Training Algorithm for Text Recognition Robust to Brightness Changes

  • 서민석;김대한;최동걸
    • 로봇학회논문지 / Vol. 16, No. 3 / pp. 276-282 / 2021
  • Systems that recognize text in natural scenes are applied in various industries. However, brightness changes that occur in nature, such as light reflections and shadows, significantly degrade text recognition performance. To solve this problem, we propose an adversarial shade generation and training algorithm that is robust to shade changes. The algorithm divides the entire image into nine grid cells and adjusts the brightness of each cell with four trainable parameters. Training then proceeds with the text recognition model and the shaded-image generator in an adversarial relationship: as training progresses, increasingly difficult shaded grid combinations are produced. With this curriculum-style training, we not only achieved a performance improvement of more than 3% on the ICDAR2015 public benchmark dataset, but also confirmed improved performance on our own Android application text recognition dataset.
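
A minimal sketch of the shade generator the abstract describes, assuming a single multiplicative brightness scale per grid cell; the paper's exact parameterization with four trainable parameters per cell may differ.

```python
# Sketch: learnable per-cell shading over a 3x3 grid, trained adversarially
# against the recognizer (one scale per cell is an assumption; the paper
# uses four trainable parameters per cell).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShadeGenerator(nn.Module):
    def __init__(self, grid=3):
        super().__init__()
        self.scales = nn.Parameter(torch.ones(1, 1, grid, grid))

    def forward(self, images):  # images: (B, C, H, W) in [0, 1]
        # Upsample the per-cell scales to a full-resolution shade map
        shade = F.interpolate(self.scales, size=images.shape[-2:],
                              mode="bilinear", align_corners=False)
        return (images * shade).clamp(0, 1)

# Adversarial step: the generator ascends the recognizer's loss while the
# recognizer descends it on the shaded images.
```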

Pedestrian Inference Convolutional Neural Network Using a GP-GPU

  • 정준모
    • 전기전자학회논문지 / Vol. 21, No. 3 / pp. 244-247 / 2017
  • In this paper, a pedestrian-inference convolutional neural network using a GP-GPU was implemented. After the CNN architecture was fixed, inference was performed with the weights obtained from training, using a GP-GPU with 256 threads developed in previous work. Training used an Intel i7-4470 CPU and Matlab, with the Daimler Pedestrian Dataset. The GP-GPU runs on an FPGA and is controlled by the PC over PCIe. Threads were allocated according to the depth and size of each layer. Because the pooling layers use overlapping pooling, additional operations were performed for the horizontal and vertical regions. A single inference takes about 12 ms.
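
Overlapping pooling, mentioned in the abstract, simply means the pooling window is larger than its stride, so adjacent windows share pixels; a minimal sketch follows (the 3x3 window and stride 2 are assumptions, since the paper does not give sizes).

```python
# Overlapping pooling sketch: kernel larger than stride, so windows overlap
# (3x3 window with stride 2 is an assumed configuration).
import torch
import torch.nn.functional as F

x = torch.randn(1, 16, 32, 32)                 # (batch, channels, H, W)
y = F.max_pool2d(x, kernel_size=3, stride=2)   # adjacent windows overlap
print(y.shape)                                 # torch.Size([1, 16, 15, 15])
```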

I-QANet: Improved Machine Reading Comprehension Using Graph Convolutional Networks

  • 김정훈;김준영;박준;박성욱;정세훈;심춘보
    • 한국멀티미디어학회논문지 / Vol. 25, No. 11 / pp. 1643-1652 / 2022
  • Most existing machine reading comprehension research has used Recurrent Neural Network (RNN) and Convolutional Neural Network (CNN) algorithms as networks. RNNs are slow to train, and the Question Answering Network (QANet) was proposed to improve training speed. QANet is a model composed of CNNs and self-attention. CNNs extract semantic and syntactic information well from a local corpus, but are limited in extracting such information from a global corpus. Graph Convolutional Networks (GCNs) extract semantic and syntactic information relatively well from a global corpus. In this paper, to exploit this strength of GCNs, we propose I-QANet, which replaces the CNN in QANet with a GCN. The proposed model trained 1.2 times faster than the baseline on the Stanford Question Answering Dataset (SQuAD) and scored 0.2% higher in Exact Match (EM) and 0.7% higher in F1. Furthermore, on the Korean Question Answering Dataset (KorQuAD), which consists only of Korean, training was 1.1 times faster than the baseline, and EM and F1 were 0.9% and 0.7% higher, respectively.
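
For reference, a single graph convolution layer of the kind GCNs are built from can be sketched as below, following the standard propagation rule H' = sigma(D^(-1/2)(A+I)D^(-1/2)HW); the dimensions are illustrative, and this is not the I-QANet architecture itself.

```python
# Sketch of one standard graph convolution layer (not the I-QANet model
# itself; dimensions are illustrative).
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, adj):                        # h: (N, in_dim), adj: (N, N)
        a_hat = adj + torch.eye(adj.size(0))          # add self-loops
        d_inv_sqrt = torch.diag(a_hat.sum(dim=1).pow(-0.5))
        norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt    # symmetric normalization
        return torch.relu(norm_adj @ self.weight(h))
```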

Development of Dataset Items for Commercial Space Design Applying AI

  • Jung Hwa SEO;Segeun CHUN;Ki-Pyeong KIM
    • 한국인공지능학회지 / Vol. 11, No. 1 / pp. 25-29 / 2023
  • The purpose of this paper is to create a standard AI training dataset type for commercial space design. As the space design market continues to grow and people spend more time indoors after COVID-19, interest in space is expanding throughout society, and more and more consumers are accustomed to the digital environment. Therefore, if you identify trends and quickly and easily propose the atmosphere and specifications that customers require, you can increase customer trust and sell effectively. For the dataset type, commercial districts were divided into a total of 8 categories, and processable images were derived by refining the 4,009 JPG-format images (30 MB) collected through web crawling. Then, by performing bounding and labeling operations, we developed a 2.08 MB 'Dataset for AI Training' covering 3,356 commercial space images in CSV format. Through this study, elements of spatial images such as place type, space classification, and furniture can be extracted and used when developing AI algorithms, and we expect that images requested by clients can be collected easily and quickly through spatial image input information.
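
A minimal sketch of the bounding-and-labeling output format the abstract describes: one CSV row per annotated object. The column names and the sample row are hypothetical; the paper does not publish its schema.

```python
# Hypothetical CSV annotation schema for the commercial-space dataset
# (column names and the sample row are illustrative assumptions).
import csv

FIELDS = ["image_file", "place_type", "space_class", "furniture",
          "x_min", "y_min", "x_max", "y_max"]

rows = [{"image_file": "cafe_0001.jpg", "place_type": "cafe",
         "space_class": "hall", "furniture": "table",
         "x_min": 120, "y_min": 85, "x_max": 460, "y_max": 390}]

with open("commercial_space_dataset.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```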

Multi-Modal Sensor Training Dataset for Robust Object Detection and Tracking in Outdoor Surveillance (MMO: Multi-Modal Outdoor Dataset)

  • 노동기;양원근;엄태영;이재광;김형록;백승민
    • 한국멀티미디어학회논문지 / Vol. 23, No. 8 / pp. 1006-1018 / 2020
  • Datasets are becoming increasingly important for developing learning-based algorithms, and the quality of an algorithm depends heavily on its dataset. We therefore introduce a new dataset of over 200,000 images of fully labeled multi-modal sensor data. The proposed dataset was designed and constructed for researchers who want to develop detection, tracking, and action classification for outdoor surveillance scenarios. It includes various images and multi-modal sensor data under different weather and lighting conditions. We therefore hope it will be very helpful for developing more robust algorithms for systems equipped with different kinds of sensors in outdoor applications. Case studies with the proposed dataset are also discussed in this paper.

Change Detection of Building Objects in Urban Areas Using Transfer Learning

  • 모준상;성선경;최재완
    • 대한원격탐사학회지 / Vol. 37, No. 6-1 / pp. 1685-1695 / 2021
  • Generating a high-performance deep learning model requires a sufficient amount of training data. However, building sufficient training data in the remote sensing field takes considerable time and cost, so transfer learning of deep learning models using small training datasets is becoming increasingly important. In this study, changes in building objects in domestic multitemporal orthoimages were detected through transfer learning based on a previously released public dataset, using domestic orthoimages and digital maps. To this end, an HRNet-v2 model was first trained on a public change-detection dataset and then transfer-learned on training data built from domestic orthoimages and digital maps. To analyze the effect of transfer learning, the results of various deep learning models, including the transfer-learned model, were evaluated at two test sites, and the transfer-learning approach performed best. This confirms that transfer learning can resolve the problem of insufficient training data and that change-detection techniques can be applied effectively to various remote sensing data.
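
A minimal sketch of the fine-tuning step described above: load weights pretrained on the public dataset, freeze the backbone, and train the remainder on the small domestic dataset. The torchvision model, checkpoint path, and freezing policy are assumptions; the study used HRNet-v2.

```python
# Fine-tuning sketch (assumptions: a torchvision segmentation model stands
# in for the study's HRNet-v2, and the checkpoint path is hypothetical).
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

model = deeplabv3_resnet50(num_classes=2)  # change / no-change mask

# Load weights from pretraining on the public change-detection dataset
model.load_state_dict(torch.load("public_pretrained.pth"))

# Freeze the backbone; fine-tune only the head on the small domestic set
for param in model.backbone.parameters():
    param.requires_grad = False

optimizer = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-4)
```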

Performance Improvement Analysis of a UNet-Based Building Extraction Deep Learning Model Using Transfer Learning at Different Learning Rates

  • 예철수;안영만;백태웅;김경태
    • 대한원격탐사학회지 / Vol. 39, No. 5-4 / pp. 1111-1123 / 2023
  • Semantic image segmentation using deep learning models has recently been widely used to monitor changes in land surface attributes from remote sensing imagery. Improving the performance of UNet, the representative semantic segmentation model, and of the many UNet-based deep learning models requires a sufficiently large training dataset. However, as the training dataset grows, the hardware requirements and the training time increase sharply. Transfer learning is an effective way to solve this problem, improving model performance without a large-scale training dataset. In this paper, we present three transfer-learning models that combine UNet-based deep learning models with the representative pretrained models VGG19 and ResNet50 (UNet-ResNet50, UNet-VGG19, and CBAM-DRUNet-VGG19), apply them to building extraction, and analyze the accuracy improvement from transfer learning. Since deep learning performance is strongly affected by the learning rate, the performance change of each model at different learning-rate settings was also analyzed. The Kompsat-3A, WHU, and INRIA datasets were used to evaluate building extraction; averaged over the three datasets, UNet-ResNet50 improved accuracy by 5.1% over UNet, while UNet-VGG19 and CBAM-DRUNet-VGG19 each improved it by 7.2%.
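
A sketch of a UNet with a pretrained encoder, the combination the abstract analyzes; the segmentation_models_pytorch library is used here as an assumed stand-in for the authors' own UNet-ResNet50 / UNet-VGG19 implementations.

```python
# UNet with a pretrained encoder (segmentation_models_pytorch is an assumed
# stand-in; the paper's own model code may differ).
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet50",      # or "vgg19"
    encoder_weights="imagenet",   # transfer learning: pretrained encoder
    in_channels=3,
    classes=1,                    # binary building mask
)

# The paper finds the learning rate strongly affects results, so compare a few
for lr in (1e-2, 1e-3, 1e-4):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    # ... train and evaluate building-extraction accuracy at this rate
```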

Feature Extraction on a Periocular Region and Person Authentication Using a ResNet Model

  • 김민기
    • 한국멀티미디어학회논문지 / Vol. 22, No. 12 / pp. 1347-1355 / 2019
  • Deep learning approaches based on convolutional neural networks (CNNs) have been extensively studied in the field of computer vision. However, periocular feature extraction using CNNs has not been well studied, because it is practically impossible to collect large volumes of biometric data. This study uses a ResNet model pretrained on the ImageNet dataset. To overcome the problem of insufficient training data, we focus on training a multi-layer perceptron (MLP) with a simple structure rather than training a CNN with a complex structure. The method first extracts features using the pretrained ResNet model, reduces the feature dimension by principal component analysis (PCA), and then trains an MLP classifier. Experimental results on the public periocular dataset UBIPr show that the proposed method is effective for person authentication using the periocular region. In particular, it has the advantage that it can be directly applied to other biometric traits.
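
A sketch of the three-stage pipeline the abstract describes (pretrained ResNet features, PCA reduction, MLP classifier); the ResNet variant and PCA size are illustrative assumptions, and X_train / y_train stand for the periocular images and identity labels.

```python
# Pipeline sketch: pretrained-ResNet features -> PCA -> MLP classifier
# (ResNet variant and PCA size are assumptions; X_train / y_train stand
# for periocular images and identity labels, e.g. from UBIPr).
import torch
import torch.nn as nn
from torchvision import models
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

# Pretrained ResNet as a fixed feature extractor (final FC layer removed)
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
extractor = nn.Sequential(*list(resnet.children())[:-1]).eval()

@torch.no_grad()
def extract(images):                                 # (N, 3, 224, 224)
    return extractor(images).flatten(1).numpy()      # (N, 2048) features

features = extract(X_train)                          # X_train assumed provided
reduced = PCA(n_components=128).fit_transform(features)
clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500)
clf.fit(reduced, y_train)                            # y_train assumed provided
```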