• Title/Summary/Keyword: neural network learning

Development of a Building Safety Grade Calculation DNN Model based on Exterior Inspection Status Evaluation Data (건축물 안전등급 산출을 위한 외관 조사 상태 평가 데이터 기반 DNN 모델 구축)

  • Lee, Jae-Min; Kim, Sangyong; Kim, Seungho
    • Journal of the Korea Institute of Building Construction / v.21 no.6 / pp.665-676 / 2021
  • As the number of deteriorated buildings increases, the importance of building safety diagnosis and maintenance is rising. The objectivity and reliability of existing visual investigations and building safety diagnoses are poor because they rely on the subjective judgment of the examiner. This study therefore identified the limitations of conventional appearance investigations and proposed using 3D point cloud data to increase the accuracy of existing detailed inspection data. In addition, an objective building safety grade was calculated using a deep neural network (DNN). The DNN is built from existing detailed inspection data and precise safety diagnosis data, and the safety grade is calculated after applying the state evaluation data obtained from a 3D point cloud model. The proposed process was applied to 10 deteriorated buildings in a case study and reduced diagnosis time by about 50% compared with a conventional manual safety diagnosis of the same building area. The accuracy of the safety grade calculation was then verified by comparing its results with the existing grades, and the DNN achieved an accuracy of about 90%. The approach is expected to improve economic feasibility by increasing the reliability of calculated safety grades for old buildings while saving money and time compared with existing techniques.
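
As an illustration of the kind of model this abstract describes, here is a minimal PyTorch sketch of a feed-forward DNN that maps inspection state-evaluation features to a safety grade. The feature count (16), layer sizes, and five-grade (A-E) output are illustrative assumptions, not the authors' architecture.

```python
# A minimal sketch (not the authors' code) of a DNN that maps exterior-inspection
# state-evaluation features to a building safety grade. The feature dimension (16)
# and the five grades (A-E) are illustrative assumptions.
import torch
import torch.nn as nn

class SafetyGradeDNN(nn.Module):
    def __init__(self, n_features: int = 16, n_grades: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, n_grades),   # logits for grades A-E
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = SafetyGradeDNN()
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random stand-in data.
x = torch.randn(8, 16)          # 8 buildings, 16 condition features (toy values)
y = torch.randint(0, 5, (8,))   # grade labels 0..4 standing in for A..E
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```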

Design and Implementation of BNN based Human Identification and Motion Classification System Using CW Radar (연속파 레이다를 활용한 이진 신경망 기반 사람 식별 및 동작 분류 시스템 설계 및 구현)

  • Kim, Kyeong-min; Kim, Seong-jin; NamKoong, Ho-jung; Jung, Yun-ho
    • Journal of Advanced Navigation Technology / v.26 no.4 / pp.211-218 / 2022
  • Continuous-wave (CW) radar offers advantages in reliability and accuracy over other sensors such as cameras and lidar, and a binarized neural network (BNN) dramatically reduces memory usage and complexity compared with other deep learning networks. This paper therefore proposes a BNN-based human identification and motion classification system using CW radar. A signal received from the CW radar is converted into a spectrogram through a short-time Fourier transform (STFT). Based on this spectrogram, an algorithm that detects whether a person approaches the radar is proposed. An optimized BNN model was designed that achieves 90.0% accuracy for human identification and 98.3% for motion classification. To accelerate BNN operation, a BNN hardware accelerator was designed on a field-programmable gate array (FPGA). The accelerator was implemented with 1,030 logic elements, 836 registers, and 334.904 Kbit of block memory, and real-time operation was confirmed with a total computation time of 6 ms from inference to result transfer.
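
The STFT step named in the abstract can be sketched with scipy. The sampling rate, window length, and the simulated micro-Doppler tone below are illustrative assumptions, not values from the paper.

```python
# A minimal sketch of the STFT step: turning a (simulated) CW radar baseband
# signal into a spectrogram. Sampling rate and window size are assumptions.
import numpy as np
from scipy.signal import stft

fs = 2000                                   # assumed sampling rate [Hz]
t = np.arange(0, 2.0, 1 / fs)
# Stand-in for a micro-Doppler return: a slowly swept tone plus noise.
x = np.cos(2 * np.pi * (50 + 20 * t) * t) + 0.1 * np.random.randn(t.size)

f, seg_t, Zxx = stft(x, fs=fs, nperseg=256, noverlap=192)
spectrogram = 20 * np.log10(np.abs(Zxx) + 1e-9)   # dB magnitude, BNN input
print(spectrogram.shape)                          # (freq bins, time frames)
```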

Application of convolutional autoencoder for spatiotemporal bias-correction of radar precipitation (CAE 알고리즘을 이용한 레이더 강우 보정 평가)

  • Jung, Sungho; Oh, Sungryul; Lee, Daeeop; Le, Xuan Hien; Lee, Giha
    • Journal of Korea Water Resources Association / v.54 no.7 / pp.453-462 / 2021
  • As the frequency of localized heavy rainfall has increased in recent years, the importance of high-resolution radar data has also increased. This study aims to correct the bias of dual-polarization radar, which still exhibits spatial and temporal bias. Many studies have attempted various statistical techniques to correct the bias of radar rainfall. In this study, bias correction of the S-band dual-polarization radar used in the flood forecasting of the Ministry of Environment (ME) was implemented with a convolutional autoencoder (CAE) algorithm, a type of convolutional neural network (CNN). The CAE model was trained on radar data sets with 10-min temporal resolution for the July 2017 flood event in Cheongju. The results showed that the CAE model improved the simulation in both time and space by reducing the bias of the raw radar rainfall. The CAE model, which learns the spatial relationship between adjacent grid cells, can therefore be used for real-time updating of grid-based climate data generated by radars and satellites.
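
A convolutional autoencoder of the general type the abstract names can be sketched in a few lines of PyTorch. The grid size (64x64), channel counts, and MSE objective below are simplifying assumptions, not the authors' model.

```python
# A minimal convolutional-autoencoder sketch (not the authors' model) mapping a
# raw radar rainfall grid to a bias-corrected grid. Grid size is an assumption.
import torch
import torch.nn as nn

class CAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1,
                               output_padding=1), nn.ReLU(),       # 16 -> 32
            nn.ConvTranspose2d(16, 1, 3, stride=2, padding=1,
                               output_padding=1),                  # 32 -> 64
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = CAE()
radar = torch.rand(4, 1, 64, 64)   # raw radar rainfall grids (stand-in data)
gauge = torch.rand(4, 1, 64, 64)   # reference rainfall grids (stand-in data)
loss = nn.functional.mse_loss(model(radar), gauge)
loss.backward()
```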

Assessment of Applicability of CNN Algorithm for Interpretation of Thermal Images Acquired in Superficial Defect Inspection Zones (포장층 이상구간에서 획득한 열화상 이미지 해석을 위한 CNN 알고리즘의 적용성 평가)

  • Jang, Byeong-Su; Kim, YoungSeok; Kim, Sewon; Choi, Hyun-Jun; Yoon, Hyung-Koo
    • Journal of the Korean Geotechnical Society / v.39 no.10 / pp.41-48 / 2023
  • Abnormalities in the subgrade of roads pose safety risks to users and incur significant maintenance costs. In this study, we experimentally evaluated the temperature distributions in abnormal areas of subgrade materials using infrared cameras and analyzed the data with machine learning techniques. The experimental site was a cube measuring 50 cm in width, length, and depth, with abnormal areas designated for water and air. Concrete blocks covered the upper part of the site to simulate the pavement layer. The temperature distribution was monitored for 23 h, from 4 PM to 3 PM the following day, yielding image data and numerical temperature values extracted from the middle of each abnormal area. The difference between the maximum and minimum temperatures was 34.8℃ for water, 34.2℃ for air, and 28.6℃ for the original subgrade. To classify the conditions in the measured images, we employed convolutional neural network (CNN) image analysis with the ResNet-101 and SqueezeNet networks. The classification accuracies of ResNet-101 for water, air, and the original subgrade were 70%, 50%, and 80%, respectively; SqueezeNet achieved 60% for water, 30% for air, and 70% for the original subgrade. This study highlights the effectiveness of CNN algorithms in analyzing subgrade properties and predicting subsurface conditions.
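
The ResNet-101 classification described here is a standard three-class transfer-learning setup. Below is a minimal torchvision sketch under that assumption; the batch, labels, and fine-tuning details are illustrative, not the authors' script.

```python
# A minimal transfer-learning sketch, assuming the thermal images are classified
# into three conditions (water, air, original subgrade) as in the abstract.
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained ResNet-101 and swap the final layer for 3 classes.
model = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 3)

images = torch.randn(4, 3, 224, 224)     # stand-in thermal image batch
labels = torch.tensor([0, 1, 2, 0])      # water / air / original subgrade
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()
```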

A study on end-to-end speaker diarization system using single-label classification (단일 레이블 분류를 이용한 종단 간 화자 분할 시스템 성능 향상에 관한 연구)

  • Jaehee Jung; Wooil Kim
    • The Journal of the Acoustical Society of Korea / v.42 no.6 / pp.536-543 / 2023
  • Speaker diarization, which labels "who spoke when" in speech with multiple speakers, has been studied with deep neural network-based end-to-end methods to handle speech overlap and to optimize diarization models. Most deep neural network-based end-to-end systems treat diarization as a multi-label classification problem that predicts the labels of all speakers active in each frame. However, the performance of multi-label-based models varies greatly depending on the decision threshold. This paper studies a speaker diarization system based on single-label classification so that diarization can be performed without a threshold. The proposed model converts the speaker labels into a single label and estimates that label from the model output. To account for speaker label permutations during training, the model uses a combination of permutation invariant training (PIT) loss and cross-entropy loss. In addition, residual connection structures are added to the model for effective learning of diarization models with deep structures. The experiments used simulated noisy two-speaker data generated from the LibriSpeech database. Compared with the baseline model on the diarization error rate (DER), the proposed method performs labeling without a threshold and improves performance by about 20.7%.
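
The PIT loss mentioned above can be illustrated for the two-speaker case: the loss is computed under both speaker orderings and the smaller one is kept. The sketch below uses a generic frame-level binary-activity formulation with assumed shapes; the paper's single-label variant combines PIT with cross-entropy, which is not reproduced here.

```python
# A minimal permutation-invariant-training (PIT) sketch for two speakers.
# Shapes and binary-activity targets are illustrative assumptions.
import torch
import torch.nn.functional as F

def pit_bce_loss(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """logits, targets: (batch, frames, 2) frame-level speaker activities."""
    loss_id = F.binary_cross_entropy_with_logits(
        logits, targets, reduction="none").mean(dim=(1, 2))
    swapped = targets.flip(dims=[2])   # swap the two speaker columns
    loss_sw = F.binary_cross_entropy_with_logits(
        logits, swapped, reduction="none").mean(dim=(1, 2))
    return torch.minimum(loss_id, loss_sw).mean()  # best permutation per utterance

logits = torch.randn(3, 100, 2)                    # model outputs, 3 utterances
targets = torch.randint(0, 2, (3, 100, 2)).float() # stand-in activity labels
print(pit_bce_loss(logits, targets))
```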

Chart-based Stock Price Prediction by Combining Variational Autoencoder and Attention Mechanisms (변이형 오토인코더와 어텐션 메커니즘을 결합한 차트기반 주가 예측)

  • Sanghyun Bae; Byounggu Choi
    • Information Systems Review / v.23 no.1 / pp.23-43 / 2021
  • Recently, many studies have attempted to increase the accuracy of stock price prediction by analyzing candlestick charts with artificial intelligence techniques. However, these studies failed to consider the time-series characteristics of candlestick charts and the emotional state of market participants when learning from data for stock price prediction. To overcome these limitations, this study produced input data by combining a volatility index with candlestick charts to reflect the emotional state of market participants, and fed the data to a new method that combines a variational autoencoder (VAE) with attention mechanisms to capture the time-series characteristics of candlestick charts. Fifty firms were randomly selected from the S&P 500 index and their stock prices were predicted to compare the method's performance with existing ones such as the convolutional neural network (CNN) and long short-term memory (LSTM). The results indicated that the proposed method outperformed the existing ones, implying that the accuracy of stock price prediction can be improved by considering the emotional state of market participants and the time-series characteristics of the candlestick chart.
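
The VAE component named in the abstract rests on the reparameterization trick and an ELBO-style loss, sketched minimally below. The flattened-image input, layer sizes, and MSE reconstruction term are simplifying assumptions; the paper's chart encoder and attention module are not reproduced.

```python
# A minimal VAE sketch showing the reparameterization trick and ELBO-style loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, in_dim: int = 784, latent_dim: int = 32):
        super().__init__()
        self.enc = nn.Linear(in_dim, 128)
        self.mu = nn.Linear(128, latent_dim)
        self.logvar = nn.Linear(128, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                                 nn.Linear(128, in_dim))

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization: sample z = mu + sigma * eps, eps ~ N(0, I).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(z), mu, logvar

x = torch.rand(8, 784)            # flattened chart images (stand-in data)
recon, mu, logvar = VAE()(x)
recon_loss = F.mse_loss(recon, x)
kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = recon_loss + kld           # ELBO-style objective
```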

Spontaneous Speech Emotion Recognition Based On Spectrogram With Convolutional Neural Network (CNN 기반 스펙트로그램을 이용한 자유발화 음성감정인식)

  • Guiyoung Son; Soonil Kwon
    • The Transactions of the Korea Information Processing Society / v.13 no.6 / pp.284-290 / 2024
  • Speech emotion recognition (SER) analyzes a speaker's voice patterns, including vibration, intensity, and tone, to determine their emotional state. Interest in artificial intelligence (AI) techniques has grown, and they are now widely used in medicine, education, industry, and the military. Nevertheless, existing studies have attained impressive results by using acted speech from skilled actors recorded in controlled environments for various scenarios. There is a mismatch between acted and spontaneous speech, since acted speech contains more explicit emotional expressions; for this reason, spontaneous speech emotion recognition remains a challenging task. This paper conducts emotion recognition on spontaneous speech data and improves its performance. To this end, we implement deep learning-based speech emotion recognition with the VGG (Visual Geometry Group) network after converting 1-dimensional audio signals into 2-dimensional spectrogram images. The evaluations are performed on the Korean spontaneous emotional speech database from AI-Hub, which covers 7 emotions: joy, love, anger, fear, sadness, surprise, and neutral. Using the time-frequency 2-dimensional spectrogram, we achieved average accuracies of 83.5% for adults and 73.0% for young people. Our findings demonstrate that the suggested framework outperforms current state-of-the-art techniques for spontaneous speech and shows promising performance despite the difficulty of quantifying spontaneous emotional expression.
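
The waveform-to-spectrogram-to-VGG pipeline described here can be sketched with torchaudio and torchvision. The sample rate, mel settings, 3-channel stacking, and resize step are assumptions; only the 7 emotion classes come from the abstract.

```python
# A minimal sketch of the pipeline: 1-D waveform -> 2-D (mel) spectrogram -> VGG.
import torch
import torch.nn as nn
import torchaudio
from torchvision import models

wave = torch.randn(1, 16000)                        # 1 s of stand-in audio, 16 kHz
spec = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=128)(wave)
img = spec.log1p().unsqueeze(0).repeat(1, 3, 1, 1)  # stack to 3 channels for VGG

vgg = models.vgg16(weights=None)                    # pretrained weights optional
vgg.classifier[6] = nn.Linear(4096, 7)  # joy, love, anger, fear, sadness, surprise, neutral
img = nn.functional.interpolate(img, size=(224, 224))
print(vgg(img).shape)                               # (1, 7) emotion logits
```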

An Investigation on Digital Humanities Research Trend by Analyzing the Papers of Digital Humanities Conferences (디지털 인문학 연구 동향 분석 - Digital Humanities 학술대회 논문을 중심으로 -)

  • Chung, EunKyung
    • Journal of the Korean Society for Library and Information Science / v.55 no.1 / pp.393-413 / 2021
  • Digital humanities, which creates new and innovative knowledge by combining digital information technology with humanities research problems, is a representative multidisciplinary field of study. To investigate the intellectual structure of the field, co-author and keyword co-occurrence network analyses were performed on a total of 441 papers from the last two Digital Humanities Conferences (2019, 2020). The author and keyword analyses show active participation by authors from Europe, North America, and East Asia (Japan and China). The co-author network contains 11 disconnected sub-networks, which can be seen as the result of closed co-authoring activities. The keyword analysis identifies 16 sub-subject areas: machine learning, pedagogy, metadata, topic modeling, stylometry, cultural heritage, network, digital archive, natural language processing, digital library, twitter, drama, big data, neural network, virtual reality, and ethics. These results imply that a diverse range of digital information technologies plays a major role in the digital humanities. In addition, the high-frequency keywords can be classified into humanities-based keywords, digital information technology-based keywords, and convergence keywords, and the dynamics of the growth and development of digital humanities are represented in these combinations of keywords.
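
The co-word (keyword co-occurrence) network analysis described here can be sketched with networkx: each paper's keywords are linked pairwise, and sub-areas fall out of the graph structure. The toy keyword lists below are stand-ins, not the conference data.

```python
# A minimal co-word network sketch: keywords co-occurring in a paper are linked.
import itertools
import networkx as nx

papers = [
    ["machine learning", "topic modeling", "metadata"],
    ["topic modeling", "stylometry"],
    ["machine learning", "neural network", "cultural heritage"],
]

G = nx.Graph()
for keywords in papers:
    for a, b in itertools.combinations(sorted(set(keywords)), 2):
        w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=w)

# Sub-subject areas correspond to clusters/components of this graph.
print(nx.number_connected_components(G))
print(sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:3])
```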

A Study on the Development Trend of Artificial Intelligence Using Text Mining Technique: Focused on Open Source Software Projects on Github (텍스트 마이닝 기법을 활용한 인공지능 기술개발 동향 분석 연구: 깃허브 상의 오픈 소스 소프트웨어 프로젝트를 대상으로)

  • Chong, JiSeon; Kim, Dongsung; Lee, Hong Joo; Kim, Jong Woo
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.1-19 / 2019
  • Artificial intelligence (AI) is one of the main driving forces of the Fourth Industrial Revolution. Technologies associated with AI have already shown abilities equal to or better than those of people in many fields, including image and speech recognition. Because AI technologies can be utilized in a wide range of fields, including medicine, finance, manufacturing, services, and education, many efforts have been made to identify current technology trends and analyze their development directions. Major platforms for developing complex AI algorithms for learning, reasoning, and recognition have been opened to the public as open source projects, and the technologies and services that utilize them have increased rapidly; this is considered one of the major reasons for the fast development of AI technologies. The spread of the technology also owes much to open source software developed by major global companies that supports natural language recognition, speech recognition, and image recognition. Therefore, this study aimed to identify practical trends in AI technology development by analyzing AI-related open source software (OSS) projects developed through the online collaboration of many parties. We collected a list of major AI-related projects created on Github from 2000 to July 2018 and examined the development trends of major technologies in detail by applying text mining to topic information, which indicates the characteristics of the collected projects and their technical fields. The analysis showed that fewer than 100 software development projects were started per year until 2013; the count rose to 229 projects in 2014 and 597 in 2015, and the number of AI-related open source projects then increased rapidly in 2016 (2,559 projects). The number of projects initiated in 2017 was 14,213, almost four times the total number created from 2009 to 2016 (3,555 projects), and 8,737 projects were initiated from January to July 2018. The development trend of AI-related technologies was evaluated by dividing the study period into three phases, with the appearance frequency of topics indicating the technology trends of AI-related OSS projects. Natural language processing remained the top topic in all years, implying continuous OSS development. Until 2015, the programming languages Python, C++, and Java were among the ten most frequent topics; after 2016, programming languages other than Python disappeared from the top ten and were replaced by platforms supporting the development of AI algorithms, such as TensorFlow and Keras. Reinforcement learning algorithms and convolutional neural networks, which are used in various fields, were also frequent topics. The topic network analysis showed that the most important topics by degree centrality were similar to those by appearance frequency; the main difference was that visualization and medical imaging appeared at the top of the centrality list even though they were not at the top from 2009 to 2012, indicating that OSS was being developed in the medical field to utilize AI technology. Moreover, although computer vision was in the top 10 by appearance frequency from 2013 to 2015, it was not in the top 10 by degree centrality, and the ranks of convolutional neural networks and reinforcement learning changed slightly. Examining both appearance frequency and degree centrality, machine learning showed the highest frequency and the highest degree centrality in all years. Notably, although the deep learning topic showed low frequency and low degree centrality between 2009 and 2012, its rank rose abruptly between 2013 and 2015, and in recent years both technologies have had high appearance frequency and degree centrality. TensorFlow first appeared in the 2013-2015 phase, and its appearance frequency and degree centrality soared between 2016 and 2018 to the top of the lists after deep learning and Python. Computer vision and reinforcement learning showed no abrupt increases or decreases and had relatively low appearance frequency and degree centrality compared with the above-mentioned topics. Based on these results, it is possible to identify the fields in which AI technologies are actively developed, and the results can be used as a baseline dataset for more empirical analyses of future technology trends.
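
The degree-centrality ranking used in this trend analysis is a standard graph measure, sketched below with networkx; the toy topic edges are stand-ins, and the paper's actual topic-network construction is not reproduced.

```python
# A minimal degree-centrality sketch over a toy topic co-occurrence graph.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("machine learning", "deep learning"),
    ("machine learning", "tensorflow"),
    ("deep learning", "tensorflow"),
    ("machine learning", "computer vision"),
    ("reinforcement learning", "deep learning"),
])

# Degree centrality: fraction of other nodes each topic is connected to.
centrality = nx.degree_centrality(G)
for topic, c in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{topic}: {c:.2f}")
```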

Application of Machine Learning to Predict Weight Loss in Overweight and Obese Patients on a Korean Medicine Weight Management Program (한의 체중 조절 프로그램에 참여한 과체중, 비만 환자에서의 머신러닝 기법을 적용한 체중 감량 예측 연구)

  • Kim, Eunjoo; Park, Young-Bae; Choi, Kahye; Lim, Young-Woo; Ok, Ji-Myung; Noh, Eun-Young; Song, Tae Min; Kang, Jihoon; Lee, Hyangsook; Kim, Seo-Young
    • The Journal of Korean Medicine / v.41 no.2 / pp.58-79 / 2020
  • Objectives: The purpose of this study is to predict weight loss by applying machine learning to real-world clinical data from overweight and obese adults on a weight loss program at 4 Korean Medicine obesity clinics. Methods: From January 2017 to May 2019, we collected data from overweight and obese adults (BMI ≥ 23 kg/m²) who registered for a 3-month Gamitaeeumjowi-tang prescription program. Predictive analysis was conducted at the time of each of three prescriptions, and the expected reduction rate and reduced weight at the next prescription were predicted as binary classifications (classification benchmarks: highest quartile, median, lowest quartile). For the median, further analysis was conducted after applying a variable selection method. The data sets contained 25,988 records in the first analysis, 6,304 in the second, and 833 in the third; 5-fold cross-validation was used to prevent overfitting. Results: Prediction accuracy increased from the 1st to the 2nd and 3rd analyses. After selecting variables based on the median, the artificial neural network showed the highest accuracy in the 1st (54.69%), 2nd (73.52%), and 3rd (81.88%) prediction analyses based on the reduction rate. The prediction performance was additionally confirmed through AUC: Random Forest showed the highest AUC in the 1st (0.640), 2nd (0.816), and 3rd (0.939) prediction analyses based on reduced weight. Conclusions: The prediction of weight loss by machine learning improved in accuracy when initial weight loss information was used, suggesting the approach could be used to screen patients who need intensive intervention when the expected weight loss is low.
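
The evaluation scheme described here, binary classification with 5-fold cross-validation scored by AUC, can be sketched with scikit-learn. The synthetic features and the "above-median weight loss" label below are stand-ins for the clinical variables; Random Forest mirrors one of the models compared.

```python
# A minimal 5-fold cross-validation sketch with AUC scoring on stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))   # stand-in clinical features (toy values)
# Toy binary target standing in for "above-median weight loss".
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(auc.mean(), auc.std())     # mean and spread of per-fold AUC
```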