• Title/Summary/Keyword: Deep neural network (DNN)


Data Augmentation for DNN-based Speech Enhancement (딥 뉴럴 네트워크 기반의 음성 향상을 위한 데이터 증강)

  • Lee, Seung Gwan; Lee, Sangmin
    • Journal of Korea Multimedia Society / v.22 no.7 / pp.749-758 / 2019
  • This paper proposes a data augmentation algorithm to improve the performance of DNN (Deep Neural Network)-based speech enhancement. Many deep learning models rely on techniques that maximize performance from a limited amount of data, and the most commonly used of these is data augmentation, which artificially increases the amount of training data. For effective augmentation, we used a formant enhancement method that assigns different weights to the formant frequencies. The DNN model trained with the proposed data augmentation algorithm was evaluated in various noise environments, and its speech enhancement performance was compared with that of DNN models trained with conventional data augmentation and without data augmentation. As a result, the proposed data augmentation algorithm showed higher speech enhancement performance than the other approaches.
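
The abstract does not spell out the formant-enhancement procedure; the following is a minimal sketch of the general idea, assuming the formant frequencies of each frame are already estimated, with purely illustrative gains and bandwidth.

```python
import numpy as np

def formant_weighted_copy(spectrum, freqs, formants, gains, bandwidth=100.0):
    """Create an augmented copy of a magnitude spectrum by emphasizing
    the regions around given formant frequencies with different weights."""
    weights = np.ones_like(spectrum)
    for f0, gain in zip(formants, gains):
        # Gaussian-shaped emphasis centred on each formant frequency.
        weights += (gain - 1.0) * np.exp(-0.5 * ((freqs - f0) / bandwidth) ** 2)
    return spectrum * weights

# Toy usage: one 257-bin magnitude frame of 16 kHz speech.
freqs = np.linspace(0.0, 8000.0, 257)
frame = np.random.rand(257)
augmented = formant_weighted_copy(frame, freqs,
                                  formants=[500.0, 1500.0, 2500.0],
                                  gains=[1.4, 1.2, 1.1])
```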

Comparative Analysis of PM10 Prediction Performance between Neural Network Models

  • Jung, Yong-Jin; Oh, Chang-Heon
    • Journal of information and communication convergence engineering / v.19 no.4 / pp.241-247 / 2021
  • Particulate matter has emerged as a serious global problem, necessitating highly reliable information about its levels, and various algorithms have therefore been used to predict particulate matter. In this study, we compared the prediction performance of neural network models that have been actively studied for particulate matter prediction. A deep neural network (DNN), a recurrent neural network (RNN), and long short-term memory (LSTM) were each tuned with a hyper-parameter search to design the optimal prediction model. Using the root mean square error (RMSE) and accuracy as evaluation metrics, the DNN model showed a lower RMSE than the other algorithms, while the recurrent neural network achieved higher accuracy but slightly lower stability than the other algorithms.
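
As a hedged illustration of this kind of comparison (not the paper's actual data or search space), the sketch below fits small fully connected regressors of different depths to synthetic data and reports the RMSE used as the evaluation metric.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for PM10 features and targets; the real inputs are not
# listed in the abstract.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))
y = X @ rng.normal(size=8) + rng.normal(scale=0.5, size=2000)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# A small grid over hidden-layer sizes stands in for the hyper-parameter search.
for hidden in [(32,), (64, 32), (128, 64, 32)]:
    model = MLPRegressor(hidden_layer_sizes=hidden, max_iter=500, random_state=0)
    model.fit(X_tr, y_tr)
    print(hidden, "RMSE:", round(rmse(y_te, model.predict(X_te)), 3))
```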

Performance Comparison of Neural Network and Gradient Boosting Machine for Dropout Prediction of University Students

  • Hyeon Gyu Kim
    • Journal of the Korea Society of Computer and Information / v.28 no.8 / pp.49-58 / 2023
  • Student dropout not only causes financial loss to the university but also has negative impacts on the individual students and on society as a whole. To address this issue, various studies have used machine learning to predict student dropout. This paper presents models implemented with a DNN (Deep Neural Network) and LGBM (Light Gradient Boosting Machine) to predict dropout of university students and compares their performance. Academic record and grade data collected from 20,050 students at A University, a small-to-medium-sized four-year university in Seoul, were used for training. Among the 140 attributes in the collected data, only those with a correlation coefficient of 0.1 or higher with the dropout attribute were extracted and used for learning. The experimental results showed F1-scores of 0.798 for the DNN and 0.826 for LGBM, indicating that LGBM provided 2.5% better prediction performance than the DNN.
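
A minimal sketch of the described pipeline, under the assumption that the data are tabular: filter attributes by absolute correlation with the dropout label (threshold 0.1), then train a DNN and a LightGBM classifier and compare F1-scores. The column names and synthetic data are hypothetical, as the study's student records are not public.

```python
import numpy as np
import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the student records (hypothetical attribute names).
rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(1000, 20)),
                  columns=[f"attr_{i}" for i in range(20)])
df["dropout"] = (df["attr_0"] + rng.normal(scale=1.0, size=1000) > 0.5).astype(int)

# Keep only attributes whose absolute correlation with the dropout label
# is at least 0.1, mirroring the filtering step described in the abstract.
corr = df.corr()["dropout"].drop("dropout").abs()
selected = corr[corr >= 0.1].index.tolist()

X_tr, X_te, y_tr, y_te = train_test_split(df[selected], df["dropout"],
                                           test_size=0.2, random_state=0)

for name, model in [("DNN", MLPClassifier(hidden_layer_sizes=(64, 32),
                                           max_iter=500, random_state=0)),
                    ("LGBM", LGBMClassifier(random_state=0))]:
    model.fit(X_tr, y_tr)
    print(name, "F1:", round(f1_score(y_te, model.predict(X_te)), 3))
```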

A survey on parallel training algorithms for deep neural networks (심층 신경망 병렬 학습 방법 연구 동향)

  • Yook, Dongsuk; Lee, Hyowon; Yoo, In-Chul
    • The Journal of the Acoustical Society of Korea / v.39 no.6 / pp.505-514 / 2020
  • Since a large amount of training data is typically needed to train Deep Neural Networks (DNNs), a parallel training approach is required. The Stochastic Gradient Descent (SGD) algorithm is one of the most widely used methods for training DNNs, but because SGD is inherently sequential, approximation schemes are needed to parallelize it. In this paper, we review various efforts to parallelize the SGD algorithm and analyze the computational overhead, the communication overhead, and the effects of the approximations.
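
A minimal sketch of one standard approximation, synchronous data-parallel SGD with gradient averaging across workers, shown on a toy linear model; the worker count and loss are illustrative, not taken from the survey.

```python
import numpy as np

def grad(w, X, y):
    """Gradient of the mean squared error for a linear model (toy loss)."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def parallel_sgd_step(w, shards, lr=0.01):
    """One synchronous data-parallel step: each worker computes a gradient on
    its own shard, the gradients are averaged (all-reduce), and a single
    update is applied to the shared parameters."""
    grads = [grad(w, Xs, ys) for Xs, ys in shards]  # run in parallel in practice
    return w - lr * np.mean(grads, axis=0)

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1024, 10)), rng.normal(size=1024)
shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))  # 4 "workers"
w = np.zeros(10)
for _ in range(100):
    w = parallel_sgd_step(w, shards)
```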

PartitionTuner: An operator scheduler for deep-learning compilers supporting multiple heterogeneous processing units

  • Misun Yu; Yongin Kwon; Jemin Lee; Jeman Park; Junmo Park; Taeho Kim
    • ETRI Journal / v.45 no.2 / pp.318-328 / 2023
  • Recently, embedded systems such as mobile platforms have come to include multiple processing units that can operate in parallel, such as central processing units (CPUs) and neural processing units (NPUs). Deep-learning compilers can generate machine code optimized for these embedded systems from a deep neural network (DNN). However, the deep-learning compilers proposed so far generate code that executes DNN operators sequentially on a single processing unit, or parallel code only for graphics processing units (GPUs). In this study, we propose PartitionTuner, an operator scheduler for deep-learning compilers that supports multiple heterogeneous processing units (PUs), including CPUs and NPUs. PartitionTuner can generate an operator-scheduling plan that uses all available PUs simultaneously to minimize overall DNN inference time. Operator scheduling is based on an analysis of the DNN architecture and on performance profiles of individual and grouped operators measured on the heterogeneous processing units. In experiments with seven DNNs, PartitionTuner generated scheduling plans that perform 5.03% better than a static type-based operator-scheduling technique for SqueezeNet. In addition, PartitionTuner outperforms recent profiling-based operator-scheduling techniques for ResNet50, ResNet18, and SqueezeNet by 7.18%, 5.36%, and 2.73%, respectively.
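
The sketch below is only a toy illustration of profile-driven operator scheduling across heterogeneous PUs, not PartitionTuner's actual algorithm; the operator names, latencies, and greedy policy are made up.

```python
# Toy greedy scheduler: assign each DNN operator (in topological order) to the
# processing unit that can finish it earliest, given profiled latencies.
profile = {                    # profiled latency (ms) of each operator per PU
    "conv1": {"CPU": 5.0, "NPU": 1.2},
    "conv2": {"CPU": 6.0, "NPU": 1.5},
    "pool":  {"CPU": 0.8, "NPU": 1.0},
    "fc":    {"CPU": 2.0, "NPU": 0.9},
}
order = ["conv1", "conv2", "pool", "fc"]   # topological order of the DNN graph

available = {"CPU": 0.0, "NPU": 0.0}       # time at which each PU becomes free
plan, prev_end = {}, 0.0
for op in order:
    # Earliest finish = max(PU free time, predecessor finish) + op latency.
    best_pu = min(profile[op],
                  key=lambda pu: max(available[pu], prev_end) + profile[op][pu])
    start = max(available[best_pu], prev_end)
    prev_end = start + profile[op][best_pu]
    available[best_pu] = prev_end
    plan[op] = (best_pu, start, prev_end)

for op, (pu, start, end) in plan.items():
    print(f"{op}: {pu} [{start:.1f} ms, {end:.1f} ms]")
```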

Deep Learning-based Environment-aware Home Automation System (딥러닝 기반 상황 맞춤형 홈 오토메이션 시스템)

  • Park, Min-ji; Noh, Yunsu; Jo, Seong-jun
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2019.05a / pp.334-337 / 2019
  • In this study, we built a data collection system that learns users' habit data through deep learning and adjusts the indoor environment to the situation. The system consists of a data collection server and several sensor nodes, and it creates the environment according to the collected data. We used Google's Inception v3 network to analyze photographs and a second, hand-designed DNN (Deep Neural Network) to infer behaviors. The trained DNN achieved a testing accuracy of 98.4%. These results show that a DNN is capable of inferring the situation.
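
A rough sketch of the described two-stage setup, assuming TensorFlow/Keras: a frozen Inception V3 backbone turns a photograph into a feature vector, and a second small DNN maps that vector to a behaviour class. The layer sizes and the number of behaviour classes are guesses, not taken from the paper.

```python
import tensorflow as tf

# Stage 1: Inception V3 (without its classifier head) maps a room photograph
# to a 2048-dimensional feature vector.
feature_extractor = tf.keras.applications.InceptionV3(
    include_top=False, pooling="avg", input_shape=(299, 299, 3))
feature_extractor.trainable = False

# Stage 2: a second, small DNN maps the feature vector to a behaviour class.
behavior_dnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2048,)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(5, activation="softmax"),   # 5 hypothetical behaviours
])
behavior_dnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                     metrics=["accuracy"])

# Usage: features = feature_extractor.predict(batch_of_images)
#        behavior_dnn.fit(features, behavior_labels, epochs=...)
```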


Model adaptation employing DNN-based estimation of noise corruption function for noise-robust speech recognition (잡음 환경 음성 인식을 위한 심층 신경망 기반의 잡음 오염 함수 예측을 통한 음향 모델 적응 기법)

  • Yoon, Ki-mu; Kim, Wooil
    • The Journal of the Acoustical Society of Korea / v.38 no.1 / pp.47-50 / 2019
  • This paper proposes an acoustic model adaptation method for effective speech recognition in noisy environments. In the proposed algorithm, the noise corruption function is estimated with a DNN (Deep Neural Network), and the function is then applied to the model parameter estimation. Experimental results using the Aurora 2.0 framework and database demonstrate that the proposed model adaptation method is more effective than conventional methods in both known and unknown noisy environments. In particular, the experiments in unknown environments show a relative improvement of 15.87% in average WER (Word Error Rate).
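
The abstract does not give the form of the corruption function or the adaptation equations; the sketch below only illustrates the broad idea of learning a clean-to-noisy feature mapping with a DNN and pushing model mean parameters through it. The data, dimensions, and adaptation step are all illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Learn a "corruption function" mapping clean acoustic features to their
# noisy counterparts from parallel data (synthetic stand-ins here).
rng = np.random.default_rng(0)
clean = rng.normal(size=(5000, 13))                   # e.g. clean MFCC frames
noisy = clean + 0.3 * rng.normal(size=clean.shape)    # parallel noisy frames

corruption_fn = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=300,
                             random_state=0)
corruption_fn.fit(clean, noisy)

# Adapt acoustic-model parameters (here: toy Gaussian means) to the noise
# by passing them through the learned corruption function.
clean_means = rng.normal(size=(32, 13))
adapted_means = corruption_fn.predict(clean_means)
```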

Transfer Learning based DNN-SVM Hybrid Model for Breast Cancer Classification

  • Gui Rae Jo; Beomsu Baek; Young Soon Kim; Dong Hoon Lim
    • Journal of the Korea Society of Computer and Information / v.28 no.11 / pp.1-11 / 2023
  • Breast cancer is the disease that affects women the most worldwide. With advances in computer technology, the efficiency of machine learning has increased, and it now plays an important role in cancer detection and diagnosis. Deep learning is a field of machine learning based on artificial neural networks whose performance has improved rapidly in recent years and whose range of applications continues to expand. In this paper, we propose a DNN-SVM hybrid model for breast cancer classification that combines a deep neural network (DNN) structure based on transfer learning with a support vector machine (SVM). The proposed transfer learning-based model is effective with small training datasets, learns quickly, and can improve performance by combining the advantages of the individual models, namely the DNN and the SVM. In performance tests on the WOBC and WDBC breast cancer datasets from the UCI machine learning repository, the proposed model outperformed single models such as logistic regression, DNN, and SVM, as well as ensemble models such as random forest, on various performance measures.
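
A hedged sketch of a DNN-SVM hybrid of this kind: a small DNN is trained on the WDBC data (the breast cancer dataset shipped with scikit-learn), its penultimate layer is reused as a feature extractor, and an SVM is trained on the extracted features. The network shape and training settings are illustrative, and the paper's specific transfer-learning setup is not reproduced here.

```python
import tensorflow as tf
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# WDBC breast cancer data.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

# Small DNN; its second hidden layer will serve as the feature extractor.
inputs = tf.keras.Input(shape=(X.shape[1],))
h = tf.keras.layers.Dense(64, activation="relu")(inputs)
feat = tf.keras.layers.Dense(32, activation="relu")(h)
out = tf.keras.layers.Dense(1, activation="sigmoid")(feat)
dnn = tf.keras.Model(inputs, out)
dnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
dnn.fit(X_tr, y_tr, epochs=30, batch_size=32, verbose=0)

# Train an SVM on the DNN's learned features (the hybrid step).
extractor = tf.keras.Model(inputs, feat)
svm = SVC(kernel="rbf")
svm.fit(extractor.predict(X_tr, verbose=0), y_tr)
print("hybrid test accuracy:",
      svm.score(extractor.predict(X_te, verbose=0), y_te))
```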

Study on the Prediction Model of Reheat Gas Turbine Inlet Temperature using Deep Neural Network Technique (심층신경망 기법을 이용한 재열 가스터빈 입구온도 예측모델에 관한 연구)

  • Young-Bok Han; Sung-Ho Kim; Byon-Gon Kim
    • The Journal of the Korea institute of electronic communication sciences / v.18 no.5 / pp.841-852 / 2023
  • Gas turbines, which are used as generators for frequency regulation of the domestic power system, are seeing increasing use thanks to carbon-neutral policies, their quick startup and shutdown, and their high thermal efficiency. Because a gas turbine drives its turbine with a high-temperature flame, the turbine inlet temperature is a key factor determining the performance and lifespan of the device. However, since the inlet temperature cannot be measured directly, either the temperature calculated by the manufacturer or a temperature estimated from field experience is used, which makes stable operation and maintenance of the gas turbine difficult. In this study, we present a model that predicts the inlet temperature of a reheat gas turbine based on a Deep Neural Network (DNN), a widely used type of artificial neural network, and verify the performance of the proposed DNN on actual operating data.
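
A minimal Keras sketch of a DNN regressor for the inlet temperature; the input variables, network size, and synthetic data are hypothetical, since the abstract does not list the actual operating parameters used.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in for operating data (e.g. exhaust temperature, fuel flow,
# compressor discharge pressure) and the inlet temperature target.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6)).astype("float32")
y = (X @ rng.normal(size=6) + 1300.0).astype("float32")   # stand-in inlet temp

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(6,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),                              # regression output
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=20, batch_size=64, validation_split=0.2, verbose=0)
```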

Design of a Recommendation System for Improving Deep Neural Network Performance

  • Juhyoung Sung; Kiwon Kwon; Byoungchul Song
    • Journal of Internet Computing and Services / v.25 no.1 / pp.49-56 / 2024
  • Many use cases for recommendation systems have emerged, especially on online platforms. Although the performance of recommendation systems is affected by a variety of factors, selecting appropriate features is difficult because most recommendation systems deal with sparse data. The conventional matrix factorization (MF) method is a basic way to handle these problems, but MF-based schemes cannot capture non-linear characteristics well. As deep learning has attracted wide attention, deep neural network (DNN)-based collaborative filtering (CF) was introduced to address this non-linearity issue. However, embedding features for use as DNN inputs remains a problem. In this paper, we propose an effective method that uses singular value decomposition (SVD)-based feature embedding to improve the DNN performance of recommendation algorithms. We evaluate performance on the MovieLens dataset and show that the proposed scheme outperforms existing methods. We also analyze performance as a function of the number of latent features in the proposed algorithm. We expect that the proposed scheme can be applied to recommendation systems in general.
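
A sketch of the SVD-based embedding idea under simple assumptions: factorise a toy rating matrix, use the resulting user and item latent vectors (scaled by the singular values) as DNN inputs, and regress the observed ratings. The matrix size, number of latent features, and network shape are illustrative; the paper's exact formulation may differ.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
R = rng.integers(0, 6, size=(200, 100)).astype("float32")   # toy rating matrix

# SVD-based feature embedding: user and item latent vectors from the rating matrix.
k = 16
U, s, Vt = np.linalg.svd(R, full_matrices=False)
user_emb = U[:, :k] * np.sqrt(s[:k])          # (users, k)
item_emb = Vt[:k, :].T * np.sqrt(s[:k])       # (items, k)

# Training pairs: concatenated user/item embeddings -> observed (non-zero) rating.
users, items = np.nonzero(R)
X = np.hstack([user_emb[users], item_emb[items]]).astype("float32")
y = R[users, items]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(2 * k,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),                 # predicted rating
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=256, verbose=0)
```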