• Title/Summary/Keyword: Audio deep-learning


Area-wise relational knowledge distillation

  • Sungchul Cho;Sangje Park;Changwon Lim
    • Communications for Statistical Applications and Methods
    • /
    • v.30 no.5
    • /
    • pp.501-516
    • /
    • 2023
  • Knowledge distillation (KD) refers to extracting knowledge from a large and complex model (teacher) and transferring it to a relatively small model (student). This can be done by training the teacher model, obtaining the activation values of its hidden or output layers, and then training the student model on the same training data using those values. Recently, relational KD (RKD) has been proposed to extract knowledge about relative differences between training examples; this method improved the performance of the student model compared to conventional KD. In this paper, we propose a new RKD method by introducing a new loss function. The proposed loss function is defined using the area difference between the teacher model and the student model in a specific hidden layer, and we show that the model can be successfully compressed and its generalization performance improved. In model-compression experiments on audio data, the accuracy of a model trained with the proposed method is up to 1.8% higher than that of the existing method. In model-generalization experiments, applying the proposed RKD method to self-KD on image data yields up to 0.5% higher accuracy.
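
The abstract does not spell out how the area-based loss is computed. A minimal sketch, assuming the loss compares the areas of triangles formed by triples of hidden-layer embeddings (in the spirit of relational KD) and matches them with a Huber loss, might look like the following PyTorch code; the triple sampling, normalization, and layer choice are assumptions rather than the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def triangle_areas(x: torch.Tensor) -> torch.Tensor:
    """Areas of all triangles formed by triples of rows in x of shape (batch, dim)."""
    n = x.size(0)
    idx = torch.combinations(torch.arange(n), r=3)            # all (i, j, k) triples
    a, b, c = x[idx[:, 0]], x[idx[:, 1]], x[idx[:, 2]]
    ab, ac = b - a, c - a
    # area^2 = 1/4 * (|ab|^2 |ac|^2 - (ab . ac)^2), valid in any dimension
    gram = (ab * ab).sum(-1) * (ac * ac).sum(-1) - ((ab * ac).sum(-1)) ** 2
    return 0.5 * torch.sqrt(gram.clamp(min=1e-12))

def area_rkd_loss(student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
    """Match the (scale-normalized) triangle areas of student and teacher embeddings."""
    with torch.no_grad():
        t_area = triangle_areas(teacher_feat)
        t_area = t_area / (t_area.mean() + 1e-12)             # scale-invariant target
    s_area = triangle_areas(student_feat)
    s_area = s_area / (s_area.mean() + 1e-12)
    return F.smooth_l1_loss(s_area, t_area)                   # Huber loss, as in RKD

# usage (illustrative): total_loss = task_loss + lam * area_rkd_loss(s_hidden, t_hidden)
```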

Considerations for Applying Korean Natural Language Processing Technology in Records Management (기록관리 분야에서 한국어 자연어 처리 기술을 적용하기 위한 고려사항)

  • Haklae, Kim
    • Journal of Korean Society of Archives and Records Management
    • /
    • v.22 no.4
    • /
    • pp.129-149
    • /
    • 2022
  • Records have temporal characteristics, including the past and present; linguistic characteristics not limited to a specific language; and various types categorized in a complex way. Processing records such as text, video, and audio across the life cycle of their creation, preservation, and utilization entails considerable effort and cost. Primary natural language processing (NLP) technologies, such as machine translation, document summarization, named-entity recognition, and image recognition, can be widely applied to electronic records and analog digitization. In particular, Korean deep learning-based NLP technologies effectively recognize various record types and generate records management metadata. This paper provides an overview of Korean NLP technologies and discusses considerations for applying them in records management. The process of using NLP technologies such as machine translation and optical character recognition for the digital conversion of records is introduced as an example implemented in the Python environment. In addition, improvements to environmental factors and to record digitization guidelines are proposed so that NLP technology can be utilized in the records management field.
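
The Python example mentioned in the abstract is not reproduced in this listing. A minimal sketch of that kind of pipeline, assuming Tesseract (via pytesseract) for optical character recognition of a scanned record and a Hugging Face translation pipeline for Korean-to-English machine translation, could look like this; the specific libraries and model name are illustrative assumptions, not the authors' implementation.

```python
from PIL import Image
import pytesseract
from transformers import pipeline

def digitize_record(image_path: str) -> dict:
    """OCR a scanned Korean record page, then machine-translate it to English."""
    # 1. Optical character recognition (requires the Korean language pack for Tesseract)
    korean_text = pytesseract.image_to_string(Image.open(image_path), lang="kor")

    # 2. Machine translation (model name is an assumption; any ko->en model would do)
    translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ko-en")
    english_text = translator(korean_text, max_length=512)[0]["translation_text"]

    # 3. Return both versions as candidate records-management metadata
    return {"source_text": korean_text, "translated_text": english_text}

if __name__ == "__main__":
    print(digitize_record("scanned_record.png"))
```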

Implementation of Character and Object Metadata Generation System for Media Archive Construction (미디어 아카이브 구축을 위한 등장인물, 사물 메타데이터 생성 시스템 구현)

  • Cho, Sungman;Lee, Seungju;Lee, Jaehyeon;Park, Gooman
    • Journal of Broadcast Engineering
    • /
    • v.24 no.6
    • /
    • pp.1076-1084
    • /
    • 2019
  • In this paper, we introduce a system that extracts metadata by recognizing characters and objects in media using deep learning technology. In the broadcasting field, multimedia contents such as video, audio, images, and text have long been converted to digital contents, but the unconverted resources remain vast. Building media archives requires a great deal of manual work, which is time-consuming and costly; a deep learning-based metadata generation system can therefore save time and cost in constructing media archives. The whole system consists of four elements: a training data generation module, an object recognition module, a character recognition module, and an API server. The deep learning network module and the face recognition module are implemented to recognize objects and characters in the media and describe them as metadata. The training data generation module was designed separately to facilitate the construction of data for training the neural networks, and the face recognition and object recognition functions were configured as an API server. We trained the two neural networks on data for 1,500 persons and 80 kinds of objects and confirmed accuracies of 98% on the character test data and 42% on the object data.
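
The abstract describes the recognition functions as being exposed through an API server. A minimal sketch of such an endpoint, assuming FastAPI and placeholder recognition functions (the actual modules, model files, and response schema are not given in the abstract), might look like this.

```python
from fastapi import FastAPI, File, UploadFile

app = FastAPI(title="Media archive metadata generator (sketch)")

def recognize_faces(image_bytes: bytes) -> list[str]:
    """Placeholder for the character (face) recognition module."""
    return ["person_0042"]            # e.g. IDs among the ~1,500 trained persons

def recognize_objects(image_bytes: bytes) -> list[str]:
    """Placeholder for the object recognition module (80 object classes)."""
    return ["microphone", "desk"]

@app.post("/metadata")
async def generate_metadata(frame: UploadFile = File(...)):
    """Return character and object metadata for one extracted video frame."""
    data = await frame.read()
    return {
        "characters": recognize_faces(data),
        "objects": recognize_objects(data),
    }

# run with: uvicorn metadata_api:app --reload
```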

Artificial speech bandwidth extension technique based on opus codec using deep belief network (심층 신뢰 신경망을 이용한 오푸스 코덱 기반 인공 음성 대역 확장 기술)

  • Choi, Yoonsang;Li, Yaxing;Kang, Sangwon
    • The Journal of the Acoustical Society of Korea
    • /
    • v.36 no.1
    • /
    • pp.70-77
    • /
    • 2017
  • Bandwidth extension is a technique to improve speech quality, intelligibility, and naturalness by extending 300 ~ 3,400 Hz narrowband speech to 50 ~ 7,000 Hz wideband speech. In this paper, an Artificial Bandwidth Extension (ABE) module embedded in the Opus audio decoder is designed using information from the narrowband speech to reduce the computational complexity of LPC (Linear Prediction Coding) and LSF (Line Spectral Frequencies) analysis and the algorithmic delay of the ABE module. We propose a spectral envelope extension method using a DBN (Deep Belief Network), one of the deep learning techniques, and the proposed scheme produces a better extended spectrum than the traditional codebook mapping method.
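
The DBN itself is not specified beyond the abstract. As a rough sketch of the envelope-mapping idea only, the following code trains a regressor from narrowband LSF-style features to wideband spectral-envelope parameters, using scikit-learn's MLPRegressor as a stand-in for the deep belief network; the feature dimensions, the toy data, and the omission of DBN pre-training are all assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy shapes: 10 narrowband LSF coefficients -> 16 wideband envelope parameters.
# In the paper these would come from Opus decoder parameters of real speech frames.
rng = np.random.default_rng(0)
X_narrow = rng.random((5000, 10))          # narrowband features per frame (toy data)
y_wide = rng.random((5000, 16))            # matching wideband envelope targets (toy data)

# Stand-in for the DBN: a plain MLP mapping narrowband features to the wideband envelope.
mapper = MLPRegressor(hidden_layer_sizes=(256, 256), max_iter=200, random_state=0)
mapper.fit(X_narrow, y_wide)

# At decode time: estimate the missing high-band envelope from narrowband features.
frame_features = X_narrow[:1]
highband_envelope = mapper.predict(frame_features)
print(highband_envelope.shape)             # (1, 16)
```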

Multicontents Integrated Image Animation within Synthesis for High Quality Multimodal Video (고화질 멀티 모달 영상 합성을 통한 다중 콘텐츠 통합 애니메이션 방법)

  • Jae Seung Roh;Jinbeom Kang
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.4
    • /
    • pp.257-269
    • /
    • 2023
  • There is currently a burgeoning demand for image synthesis from photos and videos using deep learning models. Existing video synthesis models solely extract motion information from the provided video to generate animation effects on photos. However, these synthesis models encounter challenges in achieving accurate lip synchronization with the audio and in maintaining the image quality of the synthesized output. To tackle these issues, this paper introduces a novel framework based on an image animation approach. Given a photo, a video, and audio as input, the framework produces an output that not only retains the unique characteristics of the individuals in the photo but also synchronizes their movements with the provided video and their lip movements with the audio. Furthermore, a super-resolution model is employed to enhance the quality and resolution of the synthesized output.

Audio genre classification using deep learning (딥 러닝을 이용한 오디오 장르 분류)

  • Shin, Seong-Hyeon;Jang, Woo-Jin;Yun, Ho-won;Park, Ho-Chong
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2016.06a
    • /
    • pp.80-81
    • /
    • 2016
  • In this paper, we propose an audio genre classification technique based on deep learning. Genres are defined and classified into three classes: music, speech, and effect. Conventional GMM-based genre classification shows lower recognition rates for music and effect than for speech, producing a gap in recognition rates across genres. To address this problem, this paper performs finer-grained learning through a high level of abstraction using deep learning. With the proposed method, even subtle differences in the features can be learned, reducing the gap in recognition rates across genres and achieving a higher recognition rate for each genre than GMM-based audio genre classification.
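
For reference, a minimal sketch of the GMM baseline that the paper compares against, assuming MFCC features extracted with librosa and one GaussianMixture per genre (music, speech, effect); the feature settings, mixture size, and data handling are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

GENRES = ["music", "speech", "effect"]

def mfcc_frames(path: str) -> np.ndarray:
    """Frame-level MFCC features of shape (n_frames, 13) for one audio file."""
    y, sr = librosa.load(path, sr=22050)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T

def train_gmms(files_by_genre: dict[str, list[str]]) -> dict[str, GaussianMixture]:
    """Fit one GMM per genre on the pooled MFCC frames of its training files."""
    models = {}
    for genre, files in files_by_genre.items():
        frames = np.vstack([mfcc_frames(f) for f in files])
        models[genre] = GaussianMixture(n_components=8, covariance_type="diag").fit(frames)
    return models

def classify(path: str, models: dict[str, GaussianMixture]) -> str:
    """Pick the genre whose GMM gives the highest average log-likelihood."""
    frames = mfcc_frames(path)
    return max(models, key=lambda g: models[g].score(frames))
```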


Audio Genre Classification based on Deep Learning using Spectrogram (스펙트로그램을 이용한 딥 러닝 기반의 오디오 장르 분류 기술)

  • Jang, Woo-Jin;Yun, Ho-Won;Shin, Seong-Hyeon;Park, Ho-chong
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2016.06a
    • /
    • pp.90-91
    • /
    • 2016
  • In this paper, we propose a deep learning-based audio genre classification technique using spectrograms. Most existing audio genre classification uses the GMM algorithm and, because of the characteristics of GMM, adopts MFCCs, whose input components are mutually orthogonal, as the audio features. Deep learning, however, places no restrictions on the nature of its input, so less processed features than MFCCs can be used; because such features represent the characteristics of the audio more clearly, they enable more effective learning. To obtain features that are effective for deep learning, this paper proposes a method of extracting audio features using the spectrogram. With the proposed method, a higher recognition rate can be obtained than with deep learning that uses MFCCs as features.
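
A minimal sketch of feeding a log-magnitude spectrogram rather than MFCCs into a small CNN, assuming librosa for the spectrogram and PyTorch for the network; the spectrogram resolution and network depth are assumptions, since the abstract does not give them.

```python
import librosa
import numpy as np
import torch
import torch.nn as nn

def log_spectrogram(path: str, n_fft: int = 1024, hop: int = 512) -> torch.Tensor:
    """Log-magnitude spectrogram as a (1, freq, time) tensor."""
    y, sr = librosa.load(path, sr=22050)
    spec = np.abs(librosa.stft(y, n_fft=n_fft, hop_length=hop))
    return torch.from_numpy(np.log1p(spec)).unsqueeze(0).float()

class GenreCNN(nn.Module):
    """Small 2D CNN over the spectrogram for 3 genres (music / speech / effect)."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# usage (illustrative): logits = GenreCNN()(log_spectrogram("clip.wav").unsqueeze(0))
```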


A Study on Using Deep Learning for Event Classification Based on Audio and Radar (오디오와 레이더를 결합한 딥러닝 환경 분류 연구)

  • Kim, Tae-Ho;Chang, Joon-Hyuk
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2018.05a
    • /
    • pp.344-345
    • /
    • 2018
  • In this paper, we propose an environment classification technique that uses deep learning based on audio and radar. The proposed technique classifies the environment by combining, in an ensemble, a deep learning model for environment classification using audio and a deep learning model using radar; separate models are used to raise the performance of each modality. The proposed deep learning environment classification method classified five indoor environments and showed better performance than environment classification using audio or radar data alone.
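
The abstract does not detail the ensemble rule. A minimal sketch, assuming two already-trained classifiers whose softmax outputs over the five indoor environments are averaged, is shown below; the equal weighting is an assumption.

```python
import torch
import torch.nn.functional as F

N_CLASSES = 5  # five indoor environments

def ensemble_predict(audio_model: torch.nn.Module,
                     radar_model: torch.nn.Module,
                     audio_input: torch.Tensor,
                     radar_input: torch.Tensor) -> torch.Tensor:
    """Average the per-modality class probabilities and return class indices."""
    with torch.no_grad():
        p_audio = F.softmax(audio_model(audio_input), dim=-1)
        p_radar = F.softmax(radar_model(radar_input), dim=-1)
    p = 0.5 * p_audio + 0.5 * p_radar          # equal-weight late fusion (assumed)
    return p.argmax(dim=-1)
```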

Speech Emotion Recognition Using 2D-CNN with Mel-Frequency Cepstrum Coefficients

  • Eom, Youngsik;Bang, Junseong
    • Journal of information and communication convergence engineering
    • /
    • v.19 no.3
    • /
    • pp.148-154
    • /
    • 2021
  • With the advent of context-aware computing, many attempts have been made to understand emotions. Among these, Speech Emotion Recognition (SER) is a method of recognizing a speaker's emotions from speech information. SER succeeds by selecting distinctive features and classifying them in an appropriate way. In this paper, the performance of SER using neural network models (e.g., a fully connected network (FCN) and a convolutional neural network (CNN)) with Mel-Frequency Cepstral Coefficients (MFCC) is examined in terms of the accuracy and distribution of emotion recognition. On the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), after tuning model parameters, a two-dimensional Convolutional Neural Network (2D-CNN) with MFCC showed the best performance, with an average accuracy of 88.54% for five emotions (anger, happiness, calm, fear, and sadness) of men and women. In addition, examining the distribution of emotion recognition accuracies across the neural network models shows that the 2D-CNN with MFCC can be expected to achieve an overall accuracy of 75% or more.
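
A minimal sketch of the feature/model pairing examined in the paper, assuming librosa MFCCs padded to a fixed-size "image" and a small PyTorch 2D-CNN over the five emotion classes reported; the padding length, MFCC count, and layer sizes are assumptions rather than the authors' settings.

```python
import numpy as np
import librosa
import torch
import torch.nn as nn

EMOTIONS = ["anger", "happiness", "calm", "fear", "sadness"]   # the 5 reported classes

def mfcc_image(path: str, n_mfcc: int = 40, max_frames: int = 200) -> torch.Tensor:
    """MFCC matrix padded/cropped to a fixed (1, n_mfcc, max_frames) 'image'."""
    y, sr = librosa.load(path, sr=22050)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    m = librosa.util.fix_length(m, size=max_frames, axis=1)    # pad or crop in time
    return torch.from_numpy(m).unsqueeze(0).float()

# A small 2D-CNN classifier over the MFCC image (layer sizes are illustrative).
ser_cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((5, 5)),
    nn.Flatten(),
    nn.Linear(32 * 5 * 5, len(EMOTIONS)),
)

# usage (illustrative): logits = ser_cnn(mfcc_image("ravdess_clip.wav").unsqueeze(0))
```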

Development of Automatic Loudness Control Technique based on Audio Contents Analysis using Deep Learning (딥러닝을 이용한 오디오 콘텐츠 분석 기반의 자동 음량 제어 기술 개발)

  • Lee, Young Han;Cho, Choongsang;Kim, Je Woo
    • Proceedings of the Korean Society of Broadcast Engineers Conference
    • /
    • 2018.11a
    • /
    • pp.42-43
    • /
    • 2018
  • Since the revision of the Korean Broadcasting Act in 2016, domestic digital broadcast programs have been delivered with loudness matched across channels and programs using the measurement method proposed by the ITU-R/EBU. In general, except for areas such as news and live relays where loudness must be adjusted in real time, programs are transmitted with their average loudness set to the regulation. This paper proposes a technique for improving the intelligibility of low-loudness segments that arises when the average loudness is matched uniformly. That is, as one way of controlling broadcast loudness, we propose a method that analyzes the audio content and varies the amount of loudness adjustment per segment, so that speech at low loudness is given relatively higher loudness and background music and similar content relatively lower loudness, thereby improving intelligibility. To verify the performance of the proposed method, we measured the accuracy of the audio content analysis and analyzed the audio waveform, confirming that, compared with the existing loudness control technique, the loudness of speech segments is amplified.
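
The segment-wise gain rule is not given in the abstract. A rough sketch of the idea, assuming a pre-trained speech/music content classifier per segment and ITU-R BS.1770-style loudness measurement via pyloudnorm, with the target level and gain offsets chosen purely for illustration:

```python
import numpy as np
import soundfile as sf
import pyloudnorm as pyln

TARGET_LUFS = -24.0        # illustrative programme-level target
SPEECH_BOOST_DB = 3.0      # illustrative offsets, not the paper's values
MUSIC_CUT_DB = -3.0

def classify_segment(segment: np.ndarray, rate: int) -> str:
    """Placeholder for the deep-learning content classifier (speech vs. music)."""
    return "speech"

def loudness_control(path: str, out_path: str, seg_seconds: float = 1.0) -> None:
    """Apply content-dependent, segment-wise gain toward the target loudness."""
    data, rate = sf.read(path)
    meter = pyln.Meter(rate)                   # ITU-R BS.1770 loudness meter
    seg_len = int(seg_seconds * rate)
    out = np.copy(data)
    for start in range(0, len(data) - seg_len + 1, seg_len):
        seg = data[start:start + seg_len]
        loudness = meter.integrated_loudness(seg)
        offset = SPEECH_BOOST_DB if classify_segment(seg, rate) == "speech" else MUSIC_CUT_DB
        gain_db = (TARGET_LUFS + offset) - loudness
        out[start:start + seg_len] = seg * (10.0 ** (gain_db / 20.0))
    sf.write(out_path, out, rate)
```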
