• Title/Summary/Keyword: 학습 데이터 (training data)

Search results: 6,406

Development of Deep Learning Based Deterioration Prediction Model for the Maintenance Planning of Highway Pavement (도로포장의 유지관리 계획 수립을 위한 딥러닝 기반 열화 예측 모델 개발)

  • Lee, Yongjun;Sun, Jongwan;Lee, Minjae
    • Korean Journal of Construction Engineering and Management / v.20 no.6 / pp.34-43 / 2019
  • The maintenance cost for road pavement is gradually increasing due to the continuous extension of the road network as well as the growing number of old routes that have exceeded their service period. As a result, there is a need to minimize costs through preventive maintenance, which requires the establishment of a strategic plan based on accurate prediction of road pavement deterioration. Hence, in this study, a deep neural network (DNN) and a recurrent neural network (RNN) were used to develop an expressway pavement damage prediction model, and the superior of the two models was identified by comparing and analyzing their performance. To solve the RNN's vanishing gradient problem, LSTM (Long Short-Term Memory) units, a more elaborate form of the RNN structure, were used. The learning results showed that the RMSE of the RNN-LSTM model was 0.102, lower than that of the DNN model, indicating that the RNN-LSTM model performed better. In addition, the high accuracy of the RNN-LSTM model was verified by comparing the estimated and actually measured average road pavement conditions of the target section over time.
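The two models above are compared by root-mean-square error (RMSE) between predicted and measured pavement condition. As a minimal sketch of that metric (the values in the test are hypothetical, not from the paper):

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error between predicted and measured condition values."""
    assert len(predicted) == len(actual) and len(actual) > 0
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))
```

A lower RMSE, as reported for the RNN-LSTM model (0.102), indicates predictions closer to the measured condition.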

Automatic Word Spacing of the Korean Sentences by Using End-to-End Deep Neural Network (종단 간 심층 신경망을 이용한 한국어 문장 자동 띄어쓰기)

  • Lee, Hyun Young;Kang, Seung Shik
    • KIPS Transactions on Software and Data Engineering / v.8 no.11 / pp.441-448 / 2019
  • Previous research on the automatic word spacing of Korean sentences has tried to correct spacing errors by using n-gram-based statistical techniques or a morphological analyzer to insert blanks at word boundaries. In this paper, we propose end-to-end automatic word spacing using a deep neural network. The automatic word spacing problem can be defined as a tag classification problem at the level of the syllable rather than the word. For the contextual representation between syllables, a Bi-LSTM encodes the dependency relationships between syllables into a fixed-length vector in a continuous vector space using forward and backward LSTM cells. To perform automatic word spacing of Korean sentences, the fixed-length contextual vector produced by the Bi-LSTM is classified into an auto-spacing tag (B or I), and a blank is inserted in front of each B tag. For tag classification, we compose three types of classification networks: a feedforward neural network, a neural network language model, and a linear-chain CRF. To compare our models, we measure the automatic word spacing performance of each of the three classification networks. Among them, the linear-chain CRF shows the best performance. We used the KCC150 corpus as training and testing data.
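The decoding step described above — inserting a blank in front of each B-tagged syllable — can be sketched as follows (the example sentence and tag sequence are illustrative, not from the paper):

```python
def apply_spacing(syllables, tags):
    """Rebuild a spaced sentence from per-syllable B/I tags:
    a blank is inserted in front of every 'B' tag except the first syllable."""
    out = []
    for i, (syl, tag) in enumerate(zip(syllables, tags)):
        if tag == "B" and i > 0:
            out.append(" ")
        out.append(syl)
    return "".join(out)
```

For example, the unspaced input 아버지가방에들어가신다 with tags B I I I B I B I I I I yields 아버지가 방에 들어가신다.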

A study on combination of loss functions for effective mask-based speech enhancement in noisy environments (잡음 환경에 효과적인 마스크 기반 음성 향상을 위한 손실함수 조합에 관한 연구)

  • Jung, Jaehee;Kim, Wooil
    • The Journal of the Acoustical Society of Korea / v.40 no.3 / pp.234-240 / 2021
  • In this paper, mask-based speech enhancement is improved for effective speech recognition in noisy environments. In mask-based speech enhancement, the enhanced spectrum is obtained by multiplying the noisy speech spectrum by a mask. The VoiceFilter (VF) model is used for mask estimation, and the Spectrogram Inpainting (SI) technique is used to remove residual noise from the enhanced spectrum. We propose a combined loss to further improve speech enhancement: to effectively remove the residual noise in the speech, the positive part of the triplet loss is used together with the component loss. For the experiments, the TIMIT database was reconstructed using NOISEX92 noise and background music samples under various signal-to-noise ratio (SNR) conditions. Source-to-Distortion Ratio (SDR), Perceptual Evaluation of Speech Quality (PESQ), and Short-Time Objective Intelligibility (STOI) are used as performance metrics. When the VF model was trained with the mean squared error and the SI model was trained with the combined loss, SDR, PESQ, and STOI improved by 0.5, 0.06, and 0.002, respectively, compared to the system trained only with the mean squared error.
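One plausible reading of such a combined loss — a sketch under assumed definitions, not the authors' exact formulation — adds the hinged positive part of a triplet-style term (pulling the enhanced spectrum toward the clean one and away from the noisy one) to a spectral MSE:

```python
import numpy as np

def combined_loss(enhanced, clean, noisy, alpha=1.0, margin=1.0):
    """Illustrative combined loss: spectral MSE plus the positive (hinged)
    part of a triplet-style term. `alpha` and `margin` are assumed knobs."""
    mse = np.mean((enhanced - clean) ** 2)
    d_pos = np.mean((enhanced - clean) ** 2)   # anchor-positive distance
    d_neg = np.mean((enhanced - noisy) ** 2)   # anchor-negative distance
    triplet_pos = max(0.0, d_pos - d_neg + margin)  # only the positive part
    return mse + alpha * triplet_pos
```

When the enhanced spectrum equals the clean one and is far from the noisy one, both terms vanish, which is the intended optimum.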

Comparative Analysis of CNN Deep Learning Model Performance Based on Quantization Application for High-Speed Marine Object Classification (고속 해상 객체 분류를 위한 양자화 적용 기반 CNN 딥러닝 모델 성능 비교 분석)

  • Lee, Seong-Ju;Lee, Hyo-Chan;Song, Hyun-Hak;Jeon, Ho-Seok;Im, Tae-ho
    • Journal of Internet Computing and Services / v.22 no.2 / pp.59-68 / 2021
  • As artificial intelligence (AI) technologies, which have grown rapidly in recent years, began to be applied to marine environments such as ships, research on applying CNN-based models specialized for digital video has become active. In the E-Navigation service, which combines various technologies to detect floating objects posing a collision risk, reduce human error, and prevent fires inside ships, real-time processing is of huge importance. Adding more functions, however, requires high-performance processors, which raises prices and places a cost burden on shipowners. This study therefore proposes a method capable of processing information at a high rate while maintaining accuracy by applying quantization techniques to a deep learning model. First, videos were pre-processed to suit the detection of floating objects at sea, ensuring efficient transmission of the video data to the deep learning input. Second, quantization, one of the lightweight techniques for deep learning models, was applied to reduce memory usage and increase processing speed. Finally, the proposed deep learning model, with video pre-processing and quantization applied, was deployed on various embedded boards to measure its accuracy and processing speed and to test its performance. The proposed method reduced memory usage by a factor of four and improved the processing speed by about four to five times while maintaining the original recognition accuracy.
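The quantization idea can be illustrated with a minimal symmetric int8 post-training scheme (an assumed sketch; the paper does not specify this exact formulation): weights are mapped to 8-bit integers with a per-tensor scale, quartering storage relative to float32.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric post-training quantization of float weights to int8.
    Returns the quantized tensor and the scale needed to dequantize."""
    scale = max(float(np.max(np.abs(weights))) / 127.0, 1e-12)
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 tensor."""
    return q.astype(np.float32) * scale
```

Each int8 value occupies one byte instead of float32's four, which matches the fourfold memory reduction reported above; the dequantized values differ from the originals by at most half a quantization step.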

Development of a deep-learning based automatic tracking of moving vehicles and incident detection processes on tunnels (딥러닝 기반 터널 내 이동체 자동 추적 및 유고상황 자동 감지 프로세스 개발)

  • Lee, Kyu Beom;Shin, Hyu Soung;Kim, Dong Gyu
    • Journal of Korean Tunnelling and Underground Space Association / v.20 no.6 / pp.1161-1175 / 2018
  • In road tunnels, an unexpected event can easily be followed by a large secondary accident because of the drivers' limited sight. Therefore, a series of automated incident detection systems have been put into operation; however, they show very low detection rates due to the poor image quality of CCTVs in tunnels. To overcome this limit, a deep learning based tunnel incident detection system was developed, which already showed high detection rates in November of 2017. However, since the object detection process could deal only with still images, the moving direction and speed of vehicles could not be identified, and it was hard to detect the stopping and reversing of moving vehicles. Therefore, in addition to object detection, an object tracking method was introduced and combined with the detection algorithm to track the moving vehicles. Also, a stopping/reverse discrimination algorithm was proposed and implemented in the combined incident detection process. Detection performance for the stopping, reverse driving, and fire incident states was evaluated, each showing a 100% detection rate, but detection of the 'person' object shows a relatively low success rate of 78.5%. Nevertheless, it is believed that enlarging the richness of the image big data could dramatically enhance the detection capacity of the automatic incident detection system.
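The stopping/reverse discrimination might be sketched as a simple rule over a vehicle's tracked positions along the tunnel axis (the function name, tolerance, and decision logic below are illustrative assumptions, not the authors' algorithm):

```python
def classify_motion(positions, normal_direction=1, stop_tol=0.5):
    """Classify a tracked vehicle's state from its positions along the tunnel
    axis (one sample per frame). `normal_direction` is the sign of the lane's
    permitted travel direction; `stop_tol` is an assumed displacement threshold."""
    displacement = positions[-1] - positions[0]
    if abs(displacement) < stop_tol:
        return "stopping"
    if displacement * normal_direction < 0:
        return "reverse"
    return "normal"
```

A real system would smooth the track and use per-frame speed rather than net displacement, but the sketch shows why tracking (not single-frame detection) is needed to recover direction at all.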

An Analysis of the Characteristics of Elementary Science Gifted Students' Problem Solving through Model Eliciting Activity(MEA) (Model Eliciting Activity(MEA)를 통한 초등 과학영재들의 문제해결 특성 분석)

  • Yoon, Jin-A;Han, Gum-ju;Nam, Younkyeng
    • Journal of the Korean Society of Earth Science Education / v.12 no.1 / pp.64-81 / 2019
  • The purpose of this study is to analyze the thinking characteristics of elementary science gifted students in the problem-solving process through a Model Eliciting Activity (MEA). The subjects of this study are 40 elementary science gifted students who passed the first screening for admission to the science gifted education institute at P university in 2018. The MEA was the 'Coffee cup challenge', which asks students to find the best way to arrange cup sides and bottoms on a given material so as to save paper. Three drawings from each student, along with explanations of each drawing throughout the design process, were collected as the main data source. The data were analyzed statistically (correlation coefficients) and qualitatively to find the relationships between 1) intuitive thinking and visual representation and 2) analytical thinking ability and communication skills as reflected in the MEA. In conclusion, first, intuitive thinking plays an important role in the ability to represent ideas visually through pictures and in the whole problem-solving process. Second, the analytical thinking and elaboration processes, reflected in students' reconsideration of the arrangement of their drawings, have a great influence on communication skills. Therefore, this study showed that MEAs are useful activities for stimulating both intuitive and analytical thinking in elementary science gifted students and for developing communication ability, by helping students organize their own ideas and providing learning opportunities for various solutions.

Landslide Susceptibility Prediction using Evidential Belief Function, Weight of Evidence and Artificial Neural Network Models (Evidential Belief Function, Weight of Evidence 및 Artificial Neural Network 모델을 이용한 산사태 공간 취약성 예측 연구)

  • Lee, Saro;Oh, Hyun-Joo
    • Korean Journal of Remote Sensing / v.35 no.2 / pp.299-316 / 2019
  • The purpose of this study was to analyze landslide susceptibility in the Pyeongchang area using Weight of Evidence (WOE) and Evidential Belief Function (EBF) as probability models and Artificial Neural Networks (ANN) as a machine learning model in a geographic information system (GIS). This study examined the widespread shallow landslides triggered by heavy rainfall during Typhoon Ewiniar in 2006, which caused serious property damage and significant loss of life. For the landslide susceptibility mapping, 3,955 landslide occurrences were detected using aerial photographs, and environmental spatial data such as terrain, geology, soil, forest, and land use were collected and constructed in a spatial database. Seventeen factors that could affect landsliding were extracted from the spatial database. All landslides were randomly separated into two datasets, a training set (50%) and validation set (50%), to establish and validate the EBF, WOE, and ANN models. According to the validation results of the area under the curve (AUC) method, the accuracy was 74.73%, 75.03%, and 70.87% for WOE, EBF, and ANN, respectively. The EBF model had the highest accuracy. However, all models had predictive accuracy exceeding 70%, the level that is effective for landslide susceptibility mapping. These models can be applied to predict landslide susceptibility in an area where landslides have not occurred previously based on the relationships between landslide and environmental factors. This susceptibility map can help reduce landslide risk, provide guidance for policy and land use development, and save time and expense for landslide hazard prevention. In the future, more generalized models should be developed by applying landslide susceptibility mapping in various areas.
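The AUC validation used above can be illustrated with the rank-based (Mann-Whitney) formulation of the area under the ROC curve: the probability that a randomly chosen landslide cell receives a higher susceptibility score than a randomly chosen non-landslide cell (a generic sketch, not the authors' implementation):

```python
def auc(scores, labels):
    """Area under the ROC curve via pairwise comparison: count the fraction of
    (positive, negative) pairs where the positive scores higher (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.75 (75%), as the WOE and EBF models achieve, means a landslide cell outranks a non-landslide cell three times out of four.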

Automatic Text Summarization Based on a Selective Copy Mechanism for Addressing OOV (미등록 어휘에 대한 선택적 복사를 적용한 문서 자동요약)

  • Lee, Tae-Seok;Seon, Choong-Nyoung;Jung, Youngim;Kang, Seung-Shik
    • Smart Media Journal / v.8 no.2 / pp.58-65 / 2019
  • Automatic text summarization is the process of shortening a text document by either extraction or abstraction. The abstraction approach, inspired by deep learning methods that scale to large document collections, has been applied in recent work. Abstractive text summarization involves utilizing pre-generated word embedding information. Low-frequency but salient words such as terminology are seldom included in dictionaries; these cause the so-called out-of-vocabulary (OOV) problem, which deteriorates the performance of the encoder-decoder model in a neural network. To address OOV words in abstractive text summarization, we propose a copy mechanism that facilitates copying new words from the document when generating summary sentences. Unlike previous studies, the proposed approach combines accurate pointing information with a selective copy mechanism based on a bidirectional RNN and bidirectional LSTM. In addition, a neural network gate model that estimates the generation probability and a loss function that optimizes the entire abstraction model have been applied. The dataset was constructed from a collection of abstracts and titles of journal articles. Experimental results demonstrate that both the ROUGE-1 (based on word recall) and ROUGE-L (based on the longest common subsequence) scores of the proposed encoder-decoder model improved, to 47.01 and 29.55, respectively.
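The gate that mixes generation and copy probabilities can be sketched in the pointer-generator style (an assumed formulation; function name, shapes, and the extended-vocabulary convention are illustrative, not the authors' exact model): the final distribution weights the vocabulary softmax by the generation probability and scatters the attention weights, scaled by its complement, onto the source-token ids.

```python
import numpy as np

def final_distribution(p_gen, p_vocab, attention, src_ids):
    """Mix generation and copy distributions. `p_vocab` is assumed to be a
    distribution over an extended vocabulary that reserves slots for OOV
    source words; `attention` gives copy weights over source positions."""
    out = p_gen * np.asarray(p_vocab, dtype=float)  # generation part
    for a, idx in zip(attention, src_ids):
        out[idx] += (1.0 - p_gen) * a               # copy part
    return out
```

Because copy mass lands directly on source-token slots, an OOV word appearing in the input can still receive nonzero output probability even though the generator alone assigns it none.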

Classification of Natural and Artificial Forests from KOMPSAT-3/3A/5 Images Using Artificial Neural Network (인공신경망을 이용한 KOMPSAT-3/3A/5 영상으로부터 자연림과 인공림의 분류)

  • Lee, Yong-Suk;Park, Sung-Hwan;Jung, Hyung-Sup;Baek, Won-Kyung
    • Korean Journal of Remote Sensing / v.34 no.6_3 / pp.1399-1414 / 2018
  • Natural forests are unmanaged forests in which human intervention has played no part in forest formation. Artificial forests, on the other hand, are managed by people for purposes such as producing wood, preventing natural disasters, and providing windbreaks. Artificial forests offer greater economic benefits, producing more wood per unit area, because they are well maintained for wood production. Because the two forest types require different management methods, distinction surveys have been performed, traditionally via airborne remote sensing or in-situ surveys. In this study, we suggest a method of classifying forest types from satellite imagery to reduce the time and cost of in-situ surveying. Classification maps of natural and artificial forests were generated from KOMPSAT-3, 3A, and 5 data using an artificial neural network (ANN), and the classification accuracy was validated against reference data from a 1/5,000 forest stock map. The overall accuracy of the learned classification was 77.03% when compared with the 1/5,000 stock map. It was confirmed that the acquisition time of the images and other factors, such as the mixture of needleleaf and broadleaf trees, affect the distinction between artificial and natural forests when using artificial neural networks.
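The overall-accuracy figure reported above is simply the fraction of samples whose predicted class matches the stock-map reference class; as a minimal sketch (the labels in the test are hypothetical):

```python
def overall_accuracy(predicted, reference):
    """Overall accuracy of a classification map against reference labels:
    the fraction of samples where the predicted class equals the reference."""
    assert len(predicted) == len(reference) and len(reference) > 0
    correct = sum(p == r for p, r in zip(predicted, reference))
    return correct / len(reference)
```

Applied per pixel against the 1/5,000 stock map, this is the computation behind the 77.03% result.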

Effects of Platform-based Exploratory and Exploitative Technology Strategy on Firm's Performance: Nanotechnology case (탐험과 활용관점 플랫폼 기술 포트폴리오 전략이 성과에 미치는 영향: 나노기술을 중심으로)

  • Moon, Hee-Sung;Shin, Juneseuk
    • Journal of Technology Innovation / v.27 no.1 / pp.45-77 / 2019
  • The balance between exploration of new possibilities and exploitation of existing certainties is an important issue in strategy, innovation, and R&D, as well as in organizational learning. Amid the convergence of technologies, many firms seek both wider technological knowledge assets and deeper technological capabilities at the same time for sustainable competitive advantage. When firms plan technology portfolio strategies, they should consider the attributes of the technology. Nanotechnology, a cutting-edge field, is a general-purpose technology, unlike conventional product-oriented technologies. This empirical study focused on how multinational firms' exploration and exploitation strategies for nanotechnology affect their innovative and financial performance, using multiple regression analysis on panel data. The results show that nanotechnology that is more diversified and specialized as a platform technology is positively related to innovative and financial performance, unlike the research results for product-oriented technologies. In addition, exploratory innovation is more effective for firm performance than exploitation. This implies how global firms can effectively manage platform technology strategies under resource constraints.