• Title/Summary/Keyword: Deep neural network (DNN)

Korean and English Sentiment Analysis Using the Deep Learning

  • Ramadhani, Adyan Marendra;Choi, Hyung Rim;Lim, Seong Bae
    • Journal of Korea Society of Industrial Information Systems / v.23 no.3 / pp.59-71 / 2018
  • Social media has immense popularity among all services today. Data from social network services (SNSs) can be used for various objectives, such as text prediction or sentiment analysis. There is a great deal of Korean and English data on social media that can be used for sentiment analysis, but handling such huge amounts of unstructured data is a difficult task that calls for machine learning. This research focuses on predicting Korean and English sentiment using a deep feed-forward neural network and compares it with other methods, such as LDA, MLP, and GENSIM with logistic regression. The findings indicate an accuracy rate of approximately 75% when predicting sentiment with the DNN, a latent Dirichlet allocation (LDA) prediction accuracy rate of approximately 81%, and an accuracy of approximately 64% on the combined English and Korean corpus.
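A minimal sketch of the kind of feed-forward sentiment classifier the abstract describes, assuming TF-IDF style sentence vectors as input; the layer sizes, dropout, and two-class output are illustrative choices, not the paper's configuration.

```python
# Hedged sketch: a feed-forward DNN that classifies fixed-length sentence
# vectors (e.g. TF-IDF features) as positive or negative sentiment.
import torch
import torch.nn as nn

class SentimentDNN(nn.Module):
    def __init__(self, input_dim: int, hidden_dim: int = 256, num_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, x):
        return self.net(x)

# Toy usage with random vectors standing in for TF-IDF features.
model = SentimentDNN(input_dim=1000)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
x = torch.randn(32, 1000)          # batch of 32 sentence vectors
y = torch.randint(0, 2, (32,))     # 0 = negative, 1 = positive
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```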

Speech emotion recognition using attention mechanism-based deep neural networks (주목 메커니즘 기반의 심층신경망을 이용한 음성 감정인식)

  • Ko, Sang-Sun;Cho, Hye-Seung;Kim, Hyoung-Gook
    • The Journal of the Acoustical Society of Korea / v.36 no.6 / pp.407-412 / 2017
  • In this paper, we propose a speech emotion recognition method using a deep neural network based on the attention mechanism. The proposed method combines CNN (Convolutional Neural Network), GRU (Gated Recurrent Unit), DNN (Deep Neural Network), and attention components. The spectrogram of a speech signal contains characteristic patterns that depend on the emotion, so we modeled these patterns by applying tuned Gabor filters as the convolutional filters of a typical CNN. In addition, we applied an attention mechanism, implemented with a CNN and an FC (Fully-Connected) layer, to obtain attention weights that reflect the context of the extracted features, and used them for emotion recognition. To verify the proposed method, we conducted emotion recognition experiments on six emotions. The experimental results show that the proposed method achieves higher speech emotion recognition performance than conventional methods.
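A hedged sketch of the pipeline shape described above (CNN front end, recurrent layer over time, frame-level attention, FC classifier). The layer sizes, the plain random convolution initialization standing in for the tuned Gabor filters, and the six-class output are assumptions for illustration.

```python
import torch
import torch.nn as nn

class AttentionEmotionNet(nn.Module):
    def __init__(self, n_mels: int = 64, n_emotions: int = 6):
        super().__init__()
        self.cnn = nn.Sequential(                       # spectrogram feature extractor
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),                       # pool frequency, keep time
        )
        self.gru = nn.GRU(16 * (n_mels // 2), 128, batch_first=True)
        self.attn = nn.Linear(128, 1)                   # scalar attention score per frame
        self.classifier = nn.Linear(128, n_emotions)

    def forward(self, spec):                            # spec: (B, 1, n_mels, T)
        h = self.cnn(spec)                              # (B, 16, n_mels//2, T)
        B, C, F, T = h.shape
        h = h.permute(0, 3, 1, 2).reshape(B, T, C * F)  # frame-wise features
        h, _ = self.gru(h)                              # (B, T, 128)
        w = torch.softmax(self.attn(h), dim=1)          # attention weights over time
        context = (w * h).sum(dim=1)                    # weighted sum of frames
        return self.classifier(context)

logits = AttentionEmotionNet()(torch.randn(4, 1, 64, 100))  # 4 utterances, 100 frames
```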

Performance Evaluation of Concrete Drying Shrinkage Prediction Using DNN and LSTM (DNN과 LSTM을 활용한 콘크리트의 건조수축량 예측성능 평가)

  • Han, Jun-Hui;Lim, Gun-Su;Lee, Hyeon-Jik;Park, Jae-Woong;Kim, Jong;Han, Min-Cheol
    • Proceedings of the Korean Institute of Building Construction Conference / 2023.05a / pp.179-180 / 2023
  • In this study, the prediction performance of DNN and LSTM learning models was compared and analyzed for predicting the drying shrinkage of concrete. The analysis showed that the DNN model had a high error rate of about 51%, indicating overfitting to the training data, whereas the LSTM model achieved relatively higher accuracy with an error rate of 12%. In addition, the Pre_LSTM model, which preprocesses the data before LSTM training, achieved an error rate of 9% and a coefficient of determination of 0.887.

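For illustration only, a toy comparison of the two model families named in the abstract above: an LSTM that reads a window of past shrinkage measurements versus a plain feed-forward DNN on the flattened window. The window length, hidden sizes, and random data are assumptions.

```python
import torch
import torch.nn as nn

class ShrinkageLSTM(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):             # x: (B, window, 1) past shrinkage values
        out, _ = self.lstm(x)
        return self.head(out[:, -1])  # predict the next shrinkage value

dnn = nn.Sequential(nn.Linear(14, 32), nn.ReLU(), nn.Linear(32, 1))  # same window, flattened
window = torch.randn(8, 14, 1)        # 8 samples, 14-step history (toy data)
print(ShrinkageLSTM()(window).shape, dnn(window.squeeze(-1)).shape)
```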

Deep neural networks for speaker verification with short speech utterances (짧은 음성을 대상으로 하는 화자 확인을 위한 심층 신경망)

  • Yang, IL-Ho;Heo, Hee-Soo;Yoon, Sung-Hyun;Yu, Ha-Jin
    • The Journal of the Acoustical Society of Korea / v.35 no.6 / pp.501-509 / 2016
  • We propose a method to improve the robustness of speaker verification on short test utterances. The accuracy of state-of-the-art i-vector/probabilistic linear discriminant analysis systems can be degraded when test utterances are short. The proposed method compensates for the utterance variation of short test feature vectors using deep neural networks. We design three different DNN (Deep Neural Network) structures, each trained with a different target output vector. Each DNN is trained to minimize the discrepancy between the feed-forwarded output for a given short-utterance feature and the corresponding original long-utterance feature. We use the short 2-10 s condition of the NIST (National Institute of Standards and Technology, U.S.) 2008 SRE (Speaker Recognition Evaluation) corpus to evaluate the method. The experimental results show that the proposed method reduces the minimum detection cost relative to the baseline system.
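A hedged sketch of the core training idea: a DNN is fitted so that its output for a short-utterance feature vector approximates the feature obtained from the full-length utterance. The 400-dimensional feature vectors, layer sizes, and random data are assumptions; the paper's three DNN variants differ in their target output vectors.

```python
import torch
import torch.nn as nn

feat_dim = 400                                     # assumed feature (e.g. i-vector) size
compensator = nn.Sequential(
    nn.Linear(feat_dim, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, feat_dim),                      # maps short-utterance -> "long-like" vector
)
optimizer = torch.optim.Adam(compensator.parameters(), lr=1e-4)

short_vecs = torch.randn(64, feat_dim)             # features from short (2-10 s) segments
long_vecs = torch.randn(64, feat_dim)              # features from the full utterances
loss = nn.functional.mse_loss(compensator(short_vecs), long_vecs)
loss.backward()
optimizer.step()
```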

Priority-based Multi-DNN scheduling framework for autonomous vehicles (자율주행차용 우선순위 기반 다중 DNN 모델 스케줄링 프레임워크)

  • Cho, Ho-Jin;Hong, Sun-Pyo;Kim, Myung-Sun
    • Journal of the Korea Institute of Information and Communication Engineering / v.25 no.3 / pp.368-376 / 2021
  • With the recent development of deep learning technology, autonomous things technology is attracting attention, and DNNs are widely used in embedded systems such as drones and autonomous vehicles. Embedded systems that can perform large-scale computation and process multiple DNNs for high recognition accuracy without relying on the cloud are being released. Within these systems, DNNs have various levels of priority: DNNs related to the safety-critical applications of autonomous vehicles have the highest priority and must be handled first. In this paper, we propose a priority-based scheduling framework for executing multiple DNNs simultaneously. Even if a low-priority DNN is already executing, a high-priority DNN can preempt it, guaranteeing the fast response required by safety-critical applications of autonomous vehicles. Extensive experiments on an actual commercial board show a performance improvement of up to 76.6%.
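A toy, CPU-only illustration of the scheduling idea rather than the paper's framework: DNN jobs execute layer by layer, and the runner consults a priority queue between layers so that a newly arrived high-priority (safety-critical) job effectively preempts a lower-priority one. Job names, priority values, and the layer-boundary preemption point are assumptions.

```python
import heapq

class DNNJob:
    def __init__(self, name, priority, num_layers):
        self.name, self.priority, self.remaining = name, priority, num_layers

def run(initial_jobs, arrivals):
    """arrivals: dict mapping time step -> job that becomes ready at that step."""
    queue = [(j.priority, j.name, j) for j in initial_jobs]  # lower value = higher priority
    heapq.heapify(queue)
    step = 0
    while queue:
        if step in arrivals:                       # a new job becomes ready
            j = arrivals[step]
            heapq.heappush(queue, (j.priority, j.name, j))
        prio, _, job = heapq.heappop(queue)        # always pick the highest-priority job
        job.remaining -= 1                         # execute one layer, then re-check the queue
        print(f"t={step}: ran one layer of {job.name} (priority {prio})")
        if job.remaining > 0:
            heapq.heappush(queue, (prio, job.name, job))
        step += 1

# Infotainment is already running; a safety-critical DNN arrives at t=1 and
# takes over at the next layer boundary.
run([DNNJob("infotainment", priority=2, num_layers=3)],
    {1: DNNJob("lane-detection", priority=0, num_layers=2)})
```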

Forecasting realized volatility using data normalization and recurrent neural network

  • Yoonjoo Lee;Dong Wan Shin;Ji Eun Choi
    • Communications for Statistical Applications and Methods / v.31 no.1 / pp.105-127 / 2024
  • We propose recurrent neural network (RNN) methods for forecasting realized volatility (RV). The data are RVs of ten major stock price indices, four from the US and six from the EU. Forecasts are made for the relative ratio of adjacent RVs instead of the RV itself in order to avoid the out-of-scale issue. Forecasts of the RV ratio distribution are constructed first, from which forecasts of the RVs are computed; these are shown to be better than forecasts constructed directly from the RVs. The apparent asymmetry of the RV ratio is addressed by Piecewise Min-max (PM) normalization. The serial dependence of the ratio data leads us to consider two architectures, long short-term memory (LSTM) and gated recurrent unit (GRU). The hyperparameters of LSTM and GRU are tuned by nested cross-validation. The RNN forecast with the PM normalization and ratio transformation is shown to outperform forecasts by other RNN models and by benchmark models, namely the AR model, the support vector machine (SVM), the deep neural network (DNN), and the convolutional neural network (CNN).
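A sketch under stated assumptions of the forecasting pipeline outlined above: RVs are converted to adjacent ratios, a piecewise min-max scaling is applied separately below and above the median (one plausible reading of the PM normalization), and an LSTM forecasts the next normalized ratio from a rolling window. The toy series, 22-day window, and hidden size are illustrative, and the paper's exact normalization may differ.

```python
import numpy as np
import torch
import torch.nn as nn

rv = np.abs(np.random.randn(500)) + 0.1        # toy realized-volatility series
ratio = rv[1:] / rv[:-1]                       # adjacent-RV ratios (avoids the scale issue)

med = np.median(ratio)
lo, hi = ratio.min(), ratio.max()
pm = np.where(ratio < med,                     # piecewise min-max: map [lo, med] -> [-1, 0]
              (ratio - med) / (med - lo),      #                    and [med, hi] -> [0, 1]
              (ratio - med) / (hi - med))

window = 22                                    # ~one trading month of history
X = np.stack([pm[i:i + window] for i in range(len(pm) - window)])
y = pm[window:]
X = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)   # (N, window, 1)
y = torch.tensor(y, dtype=torch.float32).unsqueeze(-1)

lstm = nn.LSTM(1, 16, batch_first=True)
head = nn.Linear(16, 1)
out, _ = lstm(X)
loss = nn.functional.mse_loss(head(out[:, -1]), y)        # training loop omitted
print(float(loss))
```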

An Adaptation Method in Noise Mismatch Conditions for DNN-based Speech Enhancement

  • Xu, Si-Ying;Niu, Tong;Qu, Dan;Long, Xing-Yan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.10 / pp.4930-4951 / 2018
  • Deep learning based speech enhancement has shown considerable success. However, it still suffers performance degradation under mismatch conditions. In this paper, an adaptation method is proposed to improve performance under noise mismatch conditions. First, we devise noise-aware training by supplying identity vectors (i-vectors) as parallel input features to adapt deep neural network (DNN) acoustic models to the target noise. Second, given a small amount of adaptation data, a noise-dependent DNN is obtained from a noise-independent DNN by using L2 regularization and by forcing the estimated masks to be close to those of the unadapted condition. Finally, experiments were carried out under different noise and SNR conditions, and the proposed method achieved significant STOI gains of 0.1%-9.6% and provided consistent improvements in PESQ and segSNR over the baseline systems.
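A hedged sketch of the adaptation objective described in the abstract: starting from a noise-independent DNN, an adapted copy is trained on a small amount of target-noise data with an L2 penalty that keeps its weights close to the original model and a term that keeps the estimated masks close to the unadapted predictions. Feature and mask dimensions, the i-vector concatenation, and the regularization weights are assumptions.

```python
import copy
import torch
import torch.nn as nn

feat_dim, ivec_dim, mask_dim = 257, 100, 257
base = nn.Sequential(nn.Linear(feat_dim + ivec_dim, 512), nn.ReLU(),
                     nn.Linear(512, mask_dim), nn.Sigmoid())   # noise-independent DNN
adapted = copy.deepcopy(base)                                  # noise-dependent copy
opt = torch.optim.Adam(adapted.parameters(), lr=1e-4)

x = torch.randn(16, feat_dim + ivec_dim)   # noisy features concatenated with an i-vector
target_mask = torch.rand(16, mask_dim)     # ideal mask targets (toy values)
lam_w, lam_m = 1e-3, 0.1                   # regularization strengths (assumed)

pred = adapted(x)
with torch.no_grad():
    unadapted_pred = base(x)
l2_weights = sum((pa - pb).pow(2).sum()
                 for pa, pb in zip(adapted.parameters(), base.parameters()))
loss = (nn.functional.mse_loss(pred, target_mask)      # fit the adaptation data
        + lam_w * l2_weights                           # stay near the unadapted weights
        + lam_m * nn.functional.mse_loss(pred, unadapted_pred))  # stay near unadapted masks
loss.backward()
opt.step()
```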

Deep Neural Network Model For Short-term Electric Peak Load Forecasting (단기 전력 부하 첨두치 예측을 위한 심층 신경회로망 모델)

  • Hwang, Heesoo
    • Journal of the Korea Convergence Society / v.9 no.5 / pp.1-6 / 2018
  • In a smart grid, accurate load forecasting is crucial for planning resources, which helps improve operational efficiency and reduce the dynamic uncertainties of energy systems. Research in this area has included the use of shallow neural networks and other machine learning techniques. Recent research in computer vision and speech recognition has shown great promise for deep neural networks (DNNs). To improve the performance of daily electric peak load forecasting, this paper presents a new deep neural network model whose architecture consists of two multi-layer neural networks connected in series. The proposed model is progressively pre-trained layer by layer before the whole network is trained. For both one-day and two-day-ahead peak load forecasting, the proposed models are trained and tested using four years of hourly load data obtained from the Korea Power Exchange (KPX).
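An illustrative sketch, not the paper's implementation: two multi-layer networks connected in series, pre-trained progressively by fitting deeper and deeper prefixes with a temporary regression head before the whole network is fine-tuned. The feature layout (48 hourly inputs), layer widths, and single-step training loop are assumptions.

```python
import torch
import torch.nn as nn

n_features = 48                                    # e.g. 24 hourly loads x 2 past days (assumed)
blocks = nn.ModuleList([
    nn.Sequential(nn.Linear(n_features, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU()),
    nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 32), nn.ReLU()),
])
x = torch.randn(128, n_features)                   # toy training batch
y = torch.randn(128, 1)                            # next-day peak load (toy target)

for depth in range(1, len(blocks) + 1):            # progressive layer-wise pre-training
    prefix = nn.Sequential(*blocks[:depth])
    head = nn.Linear(64 if depth == 1 else 32, 1)  # temporary regression head
    opt = torch.optim.Adam(list(prefix.parameters()) + list(head.parameters()), lr=1e-3)
    loss = nn.functional.mse_loss(head(prefix(x)), y)
    loss.backward(); opt.step()

full = nn.Sequential(*blocks, nn.Linear(32, 1))    # then fine-tune the whole serial network
print(full(x).shape)
```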

Validation Data Augmentation for Improving the Grading Accuracy of Diabetic Macular Edema using Deep Learning (딥러닝을 이용한 당뇨성황반부종 등급 분류의 정확도 개선을 위한 검증 데이터 증강 기법)

  • Lee, Tae Soo
    • Journal of Biomedical Engineering Research / v.40 no.2 / pp.48-54 / 2019
  • This paper proposes a method of validation data augmentation for improving the grading accuracy of diabetic macular edema (DME) using deep learning. Data augmentation is usually applied to secure data diversity by transforming one image into several images through random translation, rotation, scaling, and reflection when preparing the input data of a deep neural network (DNN). In this paper, we apply this technique in the validation process of the trained DNN and improve the grading accuracy by combining the classification results of the augmented images. To verify its effectiveness, 1,200 retinal images from the Messidor dataset were divided into training and validation data at a ratio of 7:3. By applying random augmentation to the 359 validation images, an accuracy improvement of 1.61 ± 0.55% was achieved with six-fold augmentation (N=6). This simple method showed that accuracy can be improved for N in the range 2 to 6, with a correlation coefficient of 0.5667. It is therefore expected to help improve the diagnostic accuracy of DME through the grading information provided by the proposed DNN.
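A hedged sketch of the validation-time augmentation idea: each image is randomly transformed N times and the classifier's softmax outputs are averaged before the grade is chosen. The toy linear classifier, the affine/flip transform parameters, and the four DME grades are placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomAffine(degrees=10, translate=(0.05, 0.05), scale=(0.9, 1.1)),
    transforms.RandomHorizontalFlip(),
])
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 4))  # toy 4-grade classifier

def predict_with_augmentation(image: torch.Tensor, n: int = 6) -> int:
    """Average softmax predictions over n randomly augmented copies of one image."""
    probs = torch.stack([
        torch.softmax(classifier(augment(image).unsqueeze(0)), dim=1)
        for _ in range(n)
    ]).mean(dim=0)
    return int(probs.argmax(dim=1))

grade = predict_with_augmentation(torch.rand(3, 224, 224), n=6)
```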

Performance assessments of feature vectors and classification algorithms for amphibian sound classification (양서류 울음 소리 식별을 위한 특징 벡터 및 인식 알고리즘 성능 분석)

  • Park, Sangwook;Ko, Kyungdeuk;Ko, Hanseok
    • The Journal of the Acoustical Society of Korea / v.36 no.6 / pp.401-406 / 2017
  • This paper presents a performance assessment of several key algorithms for amphibian species sound classification. First, 9 target species, including endangered species, are defined and a database of their sounds is built. For the performance assessment, three feature vectors, MFCC (Mel Frequency Cepstral Coefficient), RCGCC (Robust Compressive Gammachirp filterbank Cepstral Coefficient), and SPCC (Subspace Projection Cepstral Coefficient), and three classifiers, GMM (Gaussian Mixture Model), SVM (Support Vector Machine), and DBN-DNN (Deep Belief Network - Deep Neural Network), are considered. In addition, an i-vector based classification system, which is widely used for speaker recognition, is also assessed for this task. Experimental results indicate that SPCC-SVM achieved the best performance at 98.81%, while the other methods also performed well, at above 90%.
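A minimal example of one feature/classifier pairing from the comparison above (MFCC features with an SVM), assuming librosa and scikit-learn are available; the RCGCC and SPCC front ends and the actual call recordings are not reproduced here, so random audio stands in for the data.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def mfcc_features(y: np.ndarray, sr: int = 22050, n_mfcc: int = 13) -> np.ndarray:
    """Mean MFCC vector of one recording (a common fixed-length summary)."""
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

# Toy stand-in data: 90 one-second "recordings" from 9 species, 10 each.
rng = np.random.default_rng(0)
X = np.stack([mfcc_features(rng.standard_normal(22050)) for _ in range(90)])
labels = np.repeat(np.arange(9), 10)

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, labels)
print(clf.score(X, labels))
```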