• Title/Summary/Keyword: 분할 학습 (split learning)

Search results: 899

Deep learning algorithm of concrete spalling detection using focal loss and data augmentation (Focal loss와 데이터 증강 기법을 이용한 콘크리트 박락 탐지 심층 신경망 알고리즘)

  • Shim, Seungbo;Choi, Sang-Il;Kong, Suk-Min;Lee, Seong-Won
    • Journal of Korean Tunnelling and Underground Space Association / v.23 no.4 / pp.253-263 / 2021
  • Concrete structures are damaged by aging and external environmental factors. This damage typically first appears in the form of cracks and then progresses into spalling. Such damage can become the main cause of a reduction in the structure's original design bearing capacity and negatively affects its stability. If the damage continues, it may lead to a safety accident in the future, so proper repair and reinforcement are required. To this end, an accurate and objective condition inspection of the structure must be performed, and this inspection requires a sensing technology capable of detecting damaged areas. For this reason, we propose a deep learning-based image processing algorithm that can detect spalling. To develop it, 298 spalling images were obtained, of which 253 were used for training and the remaining 45 for testing. In addition, an improved loss function and a data augmentation technique were applied to improve detection performance. As a result, concrete spalling was detected with a mean intersection over union of 80.19%. In conclusion, we developed an algorithm that detects concrete spalling through a deep learning-based image processing technique with an improved loss function and data augmentation. This technology is expected to be utilized for accurate inspection and diagnosis of structures in the future.
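
The abstract names focal loss as the key loss-function modification. Below is a minimal sketch of a binary focal loss for pixel-wise spalling segmentation (PyTorch; the gamma and alpha values are illustrative assumptions, not the paper's settings, and the paper's network is not reproduced here).

```python
# Minimal sketch of a binary focal loss for pixel-wise segmentation (PyTorch).
# gamma and alpha values are illustrative assumptions, not the paper's settings.
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """logits, targets: tensors of shape (N, 1, H, W); targets are 0/1 spalling masks."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                                  # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()     # down-weights easy pixels

# Usage: loss = focal_loss(model(images), spalling_masks)
```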

Detection Algorithm of Road Damage and Obstacle Based on Joint Deep Learning for Driving Safety (주행 안전을 위한 joint deep learning 기반의 도로 노면 파손 및 장애물 탐지 알고리즘)

  • Shim, Seungbo;Jeong, Jae-Jin
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.20 no.2 / pp.95-111 / 2021
  • As the population decreases in an aging society, the average age of drivers increases. Accordingly, elderly drivers at high risk of being involved in an accident need autonomous-driving vehicles. In order to secure driving safety on the road, such vehicles require several technologies for responding to various obstacles. Among them, technology is needed to recognize static obstacles, such as poor road conditions, as well as dynamic obstacles, such as vehicles, bicycles, and people, that may be encountered while driving. In this study, we propose a deep neural network algorithm capable of detecting these two types of obstacles simultaneously. For this algorithm, we used 1,418 road images and produced annotation data that marks seven categories of dynamic obstacles and labels the road-damage regions in each image. As a result of training, dynamic obstacles were detected with an average accuracy of 46.22%, and road surface damage was detected with a mean intersection over union of 74.71%. In addition, the average time required to process a single image is 89 ms, so the algorithm is suitable for personal mobility vehicles, which are slower than ordinary vehicles. In the future, driving safety with personal mobility vehicles is expected to improve by utilizing this road-obstacle detection technology.
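
The joint network described detects dynamic obstacles (bounding boxes) and road-surface damage (pixel regions) from shared features. A rough sketch of a shared-backbone model with two task heads is given below (PyTorch; the layer sizes and head designs are assumptions for illustration, not the paper's architecture).

```python
# Sketch of a shared-backbone network with a detection head and a segmentation head.
# The 7 obstacle categories and the single damage class follow the abstract; the rest is assumed.
import torch
import torch.nn as nn

class JointRoadNet(nn.Module):
    def __init__(self, num_obstacle_classes=7):
        super().__init__()
        self.backbone = nn.Sequential(                       # shared feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # detection head: class scores + box offsets per feature-map cell (simplified)
        self.det_head = nn.Conv2d(64, num_obstacle_classes + 4, 1)
        # segmentation head: per-pixel road-damage probability map
        self.seg_head = nn.Sequential(
            nn.Conv2d(64, 1, 1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )

    def forward(self, x):
        feat = self.backbone(x)
        return self.det_head(feat), self.seg_head(feat)

# det_out, seg_out = JointRoadNet()(torch.randn(1, 3, 256, 256))
```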

Panamax Second-hand Vessel Valuation Model (파나막스 중고선가치 추정모델 연구)

  • Lim, Sang-Seop;Lee, Ki-Hwan;Yang, Huck-Jun;Yun, Hee-Sung
    • Journal of Navigation and Port Research / v.43 no.1 / pp.72-78 / 2019
  • The second-hand ship market provides immediate access to the freight market for shipping investors. When introducing second-hand vessels, a precise estimate of the price is crucial to the decision-making process because it directly affects the future capital-cost burden on investors. Previous studies on the second-hand market have mainly focused on market efficiency, and the number of papers on estimating second-hand vessel values is very limited. This study proposes an artificial neural network model that has not been attempted in previous studies. Six factors that affect the second-hand ship price, freight, new-building price, orderbook, scrap price, age, and vessel size, were identified through a literature review. The data employed comprises 366 real trading records of Panamax second-hand vessels reported to Clarkson between January 2016 and December 2018. Statistical filtering was carried out through correlation analysis and stepwise regression analysis, and three parameters, freight, age, and size, were selected. Ten-fold cross validation was used to estimate the hyper-parameters of the artificial neural network model. The results confirm that the artificial neural network model outperforms simple stepwise regression analysis. The application of a statistical verification process together with an artificial neural network model differentiates this paper from others. In addition, a scientific model that satisfies both statistical rationality and accuracy of results is expected to contribute to real-life practice.
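
The workflow of variable selection followed by ten-fold cross-validated hyper-parameter tuning of a neural network can be sketched roughly as follows (scikit-learn; the hidden-layer grid, scaling, and placeholder data are assumptions, only the three selected inputs and the fold count come from the abstract).

```python
# Sketch: 10-fold cross-validated hyper-parameter search for an MLP regressor
# on the three selected predictors (freight rate, vessel age, vessel size).
# The hidden-layer grid and preprocessing choices are illustrative assumptions.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = np.random.rand(366, 3)   # placeholder for [freight, age, size] of 366 reported sales
y = np.random.rand(366)      # placeholder for reported second-hand prices

pipe = make_pipeline(StandardScaler(), MLPRegressor(max_iter=5000, random_state=0))
grid = GridSearchCV(
    pipe,
    {"mlpregressor__hidden_layer_sizes": [(4,), (8,), (8, 4)]},
    cv=10,                   # ten-fold cross validation, as in the paper
    scoring="neg_root_mean_squared_error",
)
grid.fit(X, y)
print(grid.best_params_, -grid.best_score_)
```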

Korean Part-Of-Speech Tagging by using Head-Tail Tokenization (Head-Tail 토큰화 기법을 이용한 한국어 품사 태깅)

  • Suh, Hyun-Jae;Kim, Jung-Min;Kang, Seung-Shik
    • Smart Media Journal / v.11 no.5 / pp.17-25 / 2022
  • Korean part-of-speech taggers decompose a compound morpheme into unit morphemes and attach part-of-speech tags. This has the disadvantage that morphemes are over-classified into fine-grained parts of speech and complex word types are generated depending on the purpose of the tagger. When a part-of-speech tagger is used for keyword extraction in deep learning-based language processing, it is not necessary to decompose compound particles and verb endings. In this study, the part-of-speech tagging problem is simplified by using a Head-Tail tokenization technique that divides a word into only two types of tokens, a lexical morpheme part and a grammatical morpheme part, thereby solving the problem of excessive morpheme decomposition. Part-of-speech tagging was attempted with a statistical technique and with a deep learning model on the Head-Tail tokenized corpus, and the accuracy of each model was evaluated. Tagging was implemented with the TnT tagger, a statistics-based part-of-speech tagger, and the Bi-LSTM tagger, a deep learning-based part-of-speech tagger. Both were trained on the Head-Tail tokenized corpus to measure part-of-speech tagging accuracy. As a result, the Bi-LSTM tagger performed part-of-speech tagging with a high accuracy of 99.52%, compared to 97.00% for the TnT tagger.
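
The deep-learning tagger referred to here is a Bi-LSTM sequence labeller over Head-Tail tokens. A minimal sketch of such a tagger is given below (PyTorch; vocabulary size, embedding size, hidden size, and tag-set size are assumed values, not the paper's).

```python
# Minimal Bi-LSTM sequence tagger sketch: one POS tag per Head-Tail token.
# Vocabulary size, embedding size, hidden size, and tag count are illustrative assumptions.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size=30000, num_tags=20, emb_dim=128, hidden=256):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, num_tags)     # forward + backward hidden states

    def forward(self, token_ids):                      # token_ids: (batch, seq_len)
        h, _ = self.lstm(self.emb(token_ids))
        return self.out(h)                             # tag scores: (batch, seq_len, num_tags)

# Example: scores = BiLSTMTagger()(torch.randint(0, 30000, (2, 12)))
```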

The Accuracy Assessment of Species Classification according to Spatial Resolution of Satellite Image Dataset Based on Deep Learning Model (딥러닝 모델 기반 위성영상 데이터세트 공간 해상도에 따른 수종분류 정확도 평가)

  • Park, Jeongmook;Sim, Woodam;Kim, Kyoungmin;Lim, Joongbin;Lee, Jung-Soo
    • Korean Journal of Remote Sensing / v.38 no.6_1 / pp.1407-1422 / 2022
  • This study was conducted to classify tree species and assess the classification accuracy using SE-Inception, a classification-based deep learning model. The input dataset used Worldview-3 and GeoEye-1 images, and the input image size was divided into 10 × 10 m, 30 × 30 m, and 50 × 50 m to compare and evaluate species classification accuracy. The label data were divided into five tree species (Pinus densiflora, Pinus koraiensis, Larix kaempferi, Abies holophylla Maxim., and Quercus) by visually interpreting the divided images, and labeling was then performed manually. The dataset comprised a total of 2,429 images, of which about 85% were used as training data and about 15% as verification data. As a result of classification using the deep learning model, an overall accuracy of up to 78% was achieved with the Worldview-3 images and up to 84% with the GeoEye-1 images, showing high classification performance. In particular, Quercus showed an F1 score of more than 85% regardless of the input image size, but species with similar spectral characteristics, such as Pinus densiflora and Pinus koraiensis, produced many errors. Therefore, there may be limitations in extracting features from the spectral information of satellite images alone, and classification accuracy may be improved by using images containing additional pattern information such as vegetation indices and the Gray-Level Co-occurrence Matrix (GLCM).
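
SE-Inception augments Inception-style blocks with squeeze-and-excitation channel re-weighting. A minimal sketch of the SE operation is shown below (PyTorch; the reduction ratio of 16 is the value commonly used in SE networks, assumed here, and the full SE-Inception backbone is not reproduced).

```python
# Squeeze-and-Excitation block sketch: channel-wise re-weighting of a feature map.
# The reduction ratio of 16 is a commonly used default, assumed here.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (N, C, H, W)
        w = x.mean(dim=(2, 3))                  # squeeze: global average pooling per channel
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                            # excite: scale each channel by its weight

# Example: out = SEBlock(64)(torch.randn(1, 64, 10, 10))
```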

Application of convolutional autoencoder for spatiotemporal bias-correction of radar precipitation (CAE 알고리즘을 이용한 레이더 강우 보정 평가)

  • Jung, Sungho;Oh, Sungryul;Lee, Daeeop;Le, Xuan Hien;Lee, Giha
    • Journal of Korea Water Resources Association / v.54 no.7 / pp.453-462 / 2021
  • As the frequency of localized heavy rainfall has increased during recent years, the importance of high-resolution radar data has also increased. This study aims to correct the spatial and temporal bias that dual-polarization radar rainfall still exhibits. In many studies, various statistical techniques have been attempted to correct the bias of radar rainfall. In this study, bias correction of the S-band dual-polarization radar used in the flood forecasting of the ME was implemented with a Convolutional Autoencoder (CAE) algorithm, which is a type of Convolutional Neural Network (CNN). The CAE model was trained on radar data sets with a 10-min temporal resolution for the July 2017 flood event in Cheongju. The results showed that the newly developed CAE model provided improved simulation results in time and space by reducing the bias of the raw radar rainfall. Therefore, the CAE model, which learns the spatial relationship between adjacent grid cells, can be used for real-time updates of grid-based climate data generated by radar and satellites.
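
A convolutional autoencoder of the kind described, mapping a raw gridded radar rainfall field to a bias-corrected field, can be sketched as below (PyTorch; channel counts, kernel sizes, and the loss choice are assumptions, not the paper's architecture).

```python
# Convolutional autoencoder sketch for grid-based rainfall bias correction:
# input is a raw radar rainfall grid, target is the corresponding corrected grid.
# Channel counts and kernel sizes are illustrative assumptions.
import torch
import torch.nn as nn

class RadarCAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.ReLU(),  # non-negative rainfall
        )

    def forward(self, x):               # x: (N, 1, H, W) radar rainfall grid
        return self.decoder(self.encoder(x))

# Training sketch: minimize MSE between RadarCAE()(raw_radar) and the reference rainfall grid.
```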

Speech extraction based on AuxIVA with weighted source variance and noise dependence for robust speech recognition (강인 음성 인식을 위한 가중화된 음원 분산 및 잡음 의존성을 활용한 보조함수 독립 벡터 분석 기반 음성 추출)

  • Shin, Ui-Hyeop;Park, Hyung-Min
    • The Journal of the Acoustical Society of Korea / v.41 no.3 / pp.326-334 / 2022
  • In this paper, we propose a speech enhancement algorithm as pre-processing for robust speech recognition in noisy environments. Auxiliary-function-based Independent Vector Analysis (AuxIVA) is performed with a weighted covariance matrix using time-varying variances scaled by target masks that represent the time-frequency contributions of the target speech. The mask estimates can be obtained from a Neural Network (NN) pre-trained for speech extraction, or from diffuseness based on the Coherence-to-Diffuse power Ratio (CDR), to find the direct-sound component of the target speech. In addition, the outputs for omni-directional noise are closely chained by sharing the time-varying variances, similarly to independent subspace analysis or IVA. The AuxIVA-based speech extraction method is also carried out in the Independent Low-Rank Matrix Analysis (ILRMA) framework by extending the Non-negative Matrix Factorization (NMF) of the noise outputs to Non-negative Tensor Factorization (NTF) so as to maintain the inter-channel dependency of the noise output channels. Experimental results on the CHiME-4 dataset demonstrate the effectiveness of the presented algorithms.
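
The core modification described is a weighted spatial covariance in which time-varying source variances are scaled by a target mask. A rough numpy sketch of that weighting for one frequency bin is given below (the exact weighting form is an assumption; demixing-vector updates and the full AuxIVA iteration are omitted).

```python
# Sketch: mask-weighted covariance matrix for one frequency bin of AuxIVA.
# x: (channels, frames) complex STFT observations at this frequency.
# y: (frames,) current estimate of the target source at this frequency.
# mask: (frames,) time-frequency mask for the target speech (from an NN or CDR).
import numpy as np

def weighted_covariance(x, y, mask, eps=1e-6):
    var = np.maximum(np.abs(y) ** 2, eps)   # time-varying source variance
    weights = mask / var                    # mask-scaled weighting (assumed form)
    return (weights * x) @ x.conj().T / x.shape[1]

# The target demixing vector is then updated from this covariance as in
# standard AuxIVA, followed by normalization.
```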

Low Power ADC Design for Mixed Signal Convolutional Neural Network Accelerator (혼성신호 컨볼루션 뉴럴 네트워크 가속기를 위한 저전력 ADC설계)

  • Lee, Jung Yeon;Asghar, Malik Summair;Arslan, Saad;Kim, HyungWon
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.11 / pp.1627-1634 / 2021
  • This paper introduces a low-power compact ADC circuit for an analog convolutional filter in a low-power neural network accelerator SoC. While convolutional neural network accelerators can speed up the learning and inference process, they have the drawback of consuming excessive power and occupying a large chip area, due to the large number of multiply-and-accumulate operators, when implemented as complex digital circuits. To overcome these drawbacks, we implemented an analog convolutional filter that consists of an analog multiply-and-accumulate arithmetic circuit along with an ADC. This paper focuses on the design optimization of a low-power 8-bit SAR ADC for the analog convolutional filter accelerator. We demonstrate how to minimize the capacitor-array DAC, an important component of the SAR ADC, making it three times smaller than the conventional circuit. The proposed ADC has been fabricated in a 65 nm CMOS process. It achieves an overall size of 1355.7 ㎛², power consumption of 2.6 ㎼ at a frequency of 100 MHz, an SNDR of 44.19 dB, and an ENOB of 7.04 bits.
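
As a consistency check (not stated in the abstract), the reported ENOB follows from the reported SNDR through the standard relation:

```latex
\mathrm{ENOB} = \frac{\mathrm{SNDR} - 1.76\ \mathrm{dB}}{6.02\ \mathrm{dB/bit}}
              = \frac{44.19 - 1.76}{6.02} \approx 7.05\ \text{bits},
```

which agrees with the reported 7.04 bits up to rounding.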

Lightening of Human Pose Estimation Algorithm Using MobileViT and Transfer Learning

  • Kunwoo Kim;Jonghyun Hong;Jonghyuk Park
    • Journal of the Korea Society of Computer and Information / v.28 no.9 / pp.17-25 / 2023
  • In this paper, we propose a model that can perform human pose estimation through a MobileViT-based model with fewer parameters and faster estimation. The base model achieves its light weight through a structure that combines the features of convolutional neural networks with those of the Vision Transformer. The Transformer, the main mechanism in this study, has become increasingly influential as Transformer-based models outperform convolutional neural network-based models in the field of computer vision. Similarly, in human pose estimation, the Vision Transformer-based ViTPose maintains the best performance on all major human pose estimation benchmarks such as COCO, OCHuman, and MPII. However, because the Vision Transformer has a heavy model structure with a large number of parameters and requires a relatively large amount of computation, it is costly for users to train. Accordingly, the base model compensates for the Vision Transformer's weak inductive bias, which it otherwise makes up for with large amounts of computation, by obtaining local representations through a convolutional neural network structure. Finally, the proposed model obtained a mean average precision of 0.694 on the MS COCO benchmark with 3.28 GFLOPs and 9.72 million parameters, which are 1/5 and 1/9 of those of ViTPose, respectively.
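
The combination of convolutional local representations with Transformer global representations described above can be sketched very roughly as follows (PyTorch; this is a simplified illustration of the idea, with assumed dimensions and fusion scheme, not the MobileViT block or the paper's model).

```python
# Highly simplified sketch of a MobileViT-style block: a convolution supplies local
# representations, a transformer encoder adds global ones, and the two paths are fused.
# Dimensions and the fusion scheme are illustrative assumptions.
import torch
import torch.nn as nn

class LocalGlobalBlock(nn.Module):
    def __init__(self, channels=64, num_heads=4):
        super().__init__()
        self.local = nn.Conv2d(channels, channels, 3, padding=1)        # local representation
        self.global_enc = nn.TransformerEncoderLayer(
            d_model=channels, nhead=num_heads, batch_first=True)        # global representation
        self.fuse = nn.Conv2d(2 * channels, channels, 1)                 # fuse both paths

    def forward(self, x):                        # x: (N, C, H, W)
        n, c, h, w = x.shape
        local = self.local(x)
        tokens = x.flatten(2).transpose(1, 2)    # (N, H*W, C) token sequence
        glob = self.global_enc(tokens).transpose(1, 2).reshape(n, c, h, w)
        return self.fuse(torch.cat([local, glob], dim=1))

# Example: out = LocalGlobalBlock()(torch.randn(1, 64, 16, 16))
```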

Deep learning-based speech recognition for Korean elderly speech data including dementia patients (치매 환자를 포함한 한국 노인 음성 데이터 딥러닝 기반 음성인식)

  • Jeonghyeon Mun;Joonseo Kang;Kiwoong Kim;Jongbin Bae;Hyeonjun Lee;Changwon Lim
    • The Korean Journal of Applied Statistics / v.36 no.1 / pp.33-48 / 2023
  • In this paper we consider automatic speech recognition (ASR) for Korean speech data in which elderly persons randomly speak a sequence of words, such as animals and vegetables, for one minute. Most of the speakers are over 60 years old, and some of them are dementia patients. The goal is to compare deep learning-based ASR models on such data and to find models with good performance. ASR is a technology by which computers recognize spoken words and convert them into written text. Recently, many deep learning models with good performance have been developed for ASR. The training data for such models mostly consist of complete sentences, and the speakers in such data can generally pronounce them accurately. In our data, however, most of the speakers are over the age of 60 and often have incorrect pronunciation, and they randomly say a series of words, not sentences, for one minute. Therefore, models pre-trained on typical training data may not be suitable, and hence we train deep learning-based ASR models from scratch using our data. We also apply some data augmentation methods because of the small data size.
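
The abstract mentions data augmentation for the small dataset but does not name the methods. One common choice for ASR is SpecAugment-style time and frequency masking, sketched below purely as an assumed example, not the paper's procedure.

```python
# SpecAugment-style masking sketch (assumed example of speech data augmentation):
# zero out a random band of frequency bins and a random span of time frames.
import numpy as np

def spec_augment(spec, max_freq_mask=8, max_time_mask=20, rng=np.random):
    """spec: (num_mel_bins, num_frames) log-mel spectrogram; returns a masked copy."""
    spec = spec.copy()
    f = rng.randint(0, max_freq_mask + 1)
    f0 = rng.randint(0, spec.shape[0] - f + 1)
    spec[f0:f0 + f, :] = 0.0                    # frequency mask
    t = rng.randint(0, max_time_mask + 1)
    t0 = rng.randint(0, spec.shape[1] - t + 1)
    spec[:, t0:t0 + t] = 0.0                    # time mask
    return spec

# Example: augmented = spec_augment(np.random.rand(80, 300))
```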