• Title/Summary/Keyword: Audio deep-learning

69 search results

A Study on Acoustic Signal Characterization for Al and Steel Machining by Audio Deep Learning (Audio Discrimination by Cutting Depth of Al and Steel Materials Using Audio Deep Learning)

  • Kim, Tae-won; Lee, Young Min; Choi, Hae-Woon
    • Journal of the Korean Society of Manufacturing Process Engineers, v.20 no.7, pp.72-79, 2021
  • This study reports an experiment that uses deep learning algorithms to identify the machining process of aluminum and steel from sound. A face milling tool was used for machining, with the cutting speed set between 3 and 4 mm/s. Both materials were machined at depths of 0.5 mm and 1.0 mm. To demonstrate the developed deep learning algorithm, simulation experiments were performed using the VGGish network in the MATLAB toolbox. Down milling was used to cut the aluminum and steel so that high-quality, precise training data could be obtained. Training on the audio data yielded 61%-99% accuracy across four categories: Al 0.5 mm, Al 1.0 mm, Steel 0.5 mm, and Steel 1.0 mm. The deep learning model outputs its audio discrimination as a probability over these categories.
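
To make the approach concrete, here is a minimal sketch of VGGish-based transfer learning for the four cutting-sound categories. The paper's implementation uses the VGGish model in the MATLAB toolbox, so this PyTorch analogue, the torchvggish hub model, and the classifier head below are assumptions, not the authors' code.

```python
# Sketch only: VGGish embeddings + small classifier for the four categories
# (Al 0.5 mm, Al 1.0 mm, Steel 0.5 mm, Steel 1.0 mm). Hub model and head
# sizes are assumptions; the paper's implementation is in MATLAB.
import torch
import torch.nn as nn

vggish = torch.hub.load("harritaylor/torchvggish", "vggish")  # pretrained VGGish
vggish.eval()

head = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 4))

def classify(wav_path: str) -> torch.Tensor:
    with torch.no_grad():
        emb = vggish.forward(wav_path)        # (frames, 128) audio embeddings
    logits = head(emb.mean(dim=0))            # average-pool frames, then classify
    return torch.softmax(logits, dim=-1)      # probabilistic output, as reported
```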

Robot Vision to Audio Description Based on Deep Learning for Effective Human-Robot Interaction (Deep Learning-Based Generation and Utterance of Natural-Language Descriptions from Robot Vision for Effective Human-Robot Interaction)

  • Park, Dongkeon; Kang, Kyeong-Min; Bae, Jin-Woo; Han, Ji-Hyeong
    • The Journal of Korea Robotics Society, v.14 no.1, pp.22-30, 2019
  • For effective human-robot interaction, a robot must not only understand the current situational context well, but also convey that understanding to the human participant efficiently. The most convenient way for a robot to deliver its understanding is to express it in natural language by voice. Recently, artificial intelligence for video understanding and natural language processing has developed very rapidly, especially based on deep learning. This paper therefore proposes a deep learning-based method for turning robot vision into spoken audio descriptions. The applied model is a pipeline of two deep learning models: one generates a natural-language sentence from the robot's vision, and the other generates voice from the generated sentence. We also conduct a real-robot experiment to show the effectiveness of our method in human-robot interaction.
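
As a rough illustration of such a two-stage pipeline, the sketch below chains a pretrained image captioner with an offline text-to-speech engine; both models are stand-ins chosen for the sketch, not the networks the authors used.

```python
# Sketch of the two-stage pipeline: robot camera image -> caption -> speech.
# The pretrained captioner and the offline TTS engine are assumed stand-ins.
from transformers import pipeline
import pyttsx3

captioner = pipeline("image-to-text",
                     model="nlpconnect/vit-gpt2-image-captioning")

def describe_and_speak(image_path: str) -> str:
    caption = captioner(image_path)[0]["generated_text"]  # stage 1: vision -> text
    engine = pyttsx3.init()                               # stage 2: text -> voice
    engine.say(caption)
    engine.runAndWait()
    return caption
```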

Recent deep learning methods for tabular data

  • Yejin Hwang; Jongwoo Song
    • Communications for Statistical Applications and Methods, v.30 no.2, pp.215-226, 2023
  • Deep learning has made great strides with unstructured data such as text, images, and audio. For tabular data analysis, however, machine learning algorithms such as ensemble methods still outperform deep learning. To close this gap, several deep learning methods for tabular data have been proposed recently. In this paper, we review the latest deep learning models for tabular data and compare their performance on several datasets. We also compare the latest boosting methods against these deep learning methods and offer guidelines for users who analyze tabular datasets. In regression, the machine learning methods are better than the deep learning methods; for classification problems, the deep learning methods perform better in some cases.
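
The kind of head-to-head comparison the review performs can be reproduced in miniature. The sketch below, whose dataset and model choices are ours rather than the paper's benchmark, pits a boosting model against a small MLP on one regression dataset.

```python
# Miniature version of the paper's comparison: boosting vs. a small MLP on one
# tabular regression dataset. Dataset and hyperparameters are assumptions.
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = fetch_california_housing(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "boosting": GradientBoostingRegressor(random_state=0),
    "mlp": make_pipeline(StandardScaler(),   # MLPs need feature scaling
                         MLPRegressor(hidden_layer_sizes=(64, 64),
                                      max_iter=500, random_state=0)),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, mean_squared_error(y_te, model.predict(X_te)))  # lower is better
```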

Development and Distribution of Deep Fake e-Learning Contents Videos Using Open-Source Tools

  • HO, Won; WOO, Ho-Sung; LEE, Dae-Hyun; KIM, Yong
    • Journal of Distribution Science, v.20 no.11, pp.121-129, 2022
  • Purpose: Artificial intelligence is widely used, particularly the popular neural network approach called deep learning, whose progress has been expedited by improvements in computing speed and capacity. Applying deep learning in education opens various possibilities for creating and managing educational content and services that can replace human cognitive activity. Among deep learning techniques, deepfake technology is used to combine and synchronize human faces with voices. This paper shows how to develop e-learning content videos using those technologies and open-source tools. Research design, data, and methodology: This paper proposes a four-step development process, presented step by step in the Google Colab environment with source code. The technology can produce various video styles; its advantage is that the characters in a video can be extended to historical figures, celebrities, or even movie heroes, producing immersive videos. Results: Prototypes for each case are designed, developed, presented, and shared on YouTube. Conclusions: The method and process of creating e-learning video content from image, video, and audio files using open-source deepfake technology was successfully implemented.
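
For a flavor of what one step of such a process can look like, the sketch below drives the open-source Wav2Lip tool to lip-sync a presenter image to a narration track. The checkpoint name and file paths are placeholders, and the paper's actual four-step process is not reproduced here.

```python
# Sketch of a lip-sync step using the open-source Wav2Lip project
# (https://github.com/Rudrabha/Wav2Lip). Paths and checkpoint are placeholders.
import subprocess

subprocess.run([
    "python", "inference.py",
    "--checkpoint_path", "checkpoints/wav2lip_gan.pth",  # pretrained weights
    "--face", "lecturer.png",        # presenter image or video
    "--audio", "narration.wav",      # lecture narration to sync to
    "--outfile", "lecture_clip.mp4",
], check=True)
```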

Acoustic Event Detection in Multichannel Audio Using Gated Recurrent Neural Networks with High-Resolution Spectral Features

  • Kim, Hyoung-Gook; Kim, Jin Young
    • ETRI Journal, v.39 no.6, pp.832-840, 2017
  • Recently, deep recurrent neural networks have achieved great success in various machine learning tasks and have also been applied to sound event detection. Detecting temporally overlapping sound events in realistic environments is much more challenging than monophonic detection. In this paper, we present an approach that improves the accuracy of polyphonic sound event detection in multichannel audio, based on gated recurrent neural networks combined with auditory spectral features. In the proposed method, spatial and spectral-domain noise-reduced harmonic features based on human hearing perception are extracted from the multichannel audio and used as high-resolution spectral inputs to train gated recurrent neural networks, which converge faster and more stably than long short-term memory recurrent neural networks. Our evaluation reveals that the proposed method outperforms conventional approaches.
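
A minimal sketch of the detector side shows how gated recurrent units yield per-frame, per-class activity scores so that overlapping events can be detected; layer sizes and class count are assumptions, and the authors' feature-extraction front end is not reproduced.

```python
# Sketch: GRU-based polyphonic sound event detector. Per-frame sigmoid outputs
# allow several event classes to be active at once. Sizes are assumptions.
import torch
import torch.nn as nn

class GRUEventDetector(nn.Module):
    def __init__(self, n_features: int = 64, n_classes: int = 6, hidden: int = 128):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                       # x: (batch, frames, n_features)
        h, _ = self.gru(x)
        return torch.sigmoid(self.head(h))      # (batch, frames, n_classes)

scores = GRUEventDetector()(torch.randn(1, 200, 64))  # 200 frames of features
```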

Energy-Aware Data-Preprocessing Scheme for Efficient Audio Deep Learning in Solar-Powered IoT Edge Computing Environments (An Energy-Adaptive Data-Preprocessing Technique for Efficient Audio Deep Learning in Solar-Energy-Harvesting IoT Edge Computing Environments)

  • Yeontae Yoo; Dong Kun Noh
    • IEMEK Journal of Embedded Systems and Applications, v.18 no.4, pp.159-164, 2023
  • Because solar energy recharges periodically, solar-energy-harvesting IoT devices prioritize maximizing the utilization of the collected energy rather than minimizing energy consumption. Meanwhile, research on edge AI, which performs machine learning near the data source instead of in the cloud, is being actively conducted for reasons such as data confidentiality and privacy, response time, and cost. One such research area performs various audio AI applications using audio data collected from multiple IoT devices in an IoT edge computing environment. In most studies, however, the IoT devices only transmit sensed data to the edge server, and all processing, including data preprocessing, is performed on the server. This not only overloads the edge server but also causes network congestion, because data unnecessary for learning is transmitted. Conversely, if data preprocessing is delegated to each IoT device, another problem arises: blackout time increases because of energy shortages in the devices. In this paper, we aim to alleviate the increased blackout time of devices while mitigating the issues of server-centric edge AI by deciding where the data is preprocessed based on the energy state of each IoT device. In the proposed method, an IoT device performs the preprocessing steps, namely sound discrimination and noise removal, and transmits the result to the server only when its available energy exceeds the threshold required for the device's basic operation.
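
The decision rule can be sketched in a few lines; the threshold value, energy accounting, and the preprocessing stand-in below are all assumptions for illustration.

```python
# Sketch of the energy-adaptive placement decision. Threshold value, energy
# accounting, and the preprocessing stand-in are illustrative assumptions.
from dataclasses import dataclass

E_THRESHOLD = 0.2  # energy the device must keep for basic operation (assumed)

@dataclass
class IoTDevice:
    energy: float  # current energy level, normalized to [0, 1]

    def preprocess(self, audio: list) -> list:
        # stand-in for sound discrimination + noise removal
        return [s for s in audio if abs(s) > 0.01]

def payload_for_server(device: IoTDevice, audio: list) -> list:
    if device.energy > E_THRESHOLD:
        return device.preprocess(audio)   # enough energy: clean on-device
    return audio                          # low energy: defer to the edge server

print(payload_for_server(IoTDevice(energy=0.5), [0.0, 0.2, -0.005, 0.3]))
```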

Recent R&D Trends for 3D Deep Learning (Trends in 3D Deep Learning Technology)

  • Lee, S.W.; Hwang, B.W.; Lim, S.J.; Yoon, S.U.; Kim, T.J.; Choi, J.S.; Park, C.J.
    • Electronics and Telecommunications Trends, v.33 no.5, pp.103-110, 2018
  • Studies on artificial intelligence have developed over the past couple of decades. After a few periods of prosperity and recession, a new machine learning method, so-called deep learning, was introduced, the result of high-quality big data, an increase in computing power, and the development of new algorithms. The main targets for deep learning have been 1D audio and 2D images, and the application domain is being extended from discriminative models, such as classification and segmentation, to generative models. Currently, deep learning is also used for processing 3D data. Unlike 2D, however, acquiring 3D training data is not easy. Although low-cost 3D acquisition sensors have become more popular owing to advances in 3D vision technology, the generation and acquisition of 3D data remains a very difficult problem. Moreover, it is not easy to directly apply an existing network model, such as a convolutional network, owing to the variety of 3D data representations. In this paper, we summarize the 3D deep learning technologies that have started to be developed within the last two years.
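
As one concrete example of carrying convolutional ideas over to 3D, the sketch below voxelizes the input into an occupancy grid and applies 3D convolutions, one of several representations surveyed in this line of work; the layer sizes are arbitrary assumptions.

```python
# Sketch: a tiny 3D CNN over a voxel occupancy grid, one common way to adapt
# 2D convolutional models to 3D data. All sizes are arbitrary assumptions.
import torch
import torch.nn as nn

voxel_cnn = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d(2),
    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(32, 10),                      # e.g., 10 shape classes
)

logits = voxel_cnn(torch.randn(1, 1, 32, 32, 32))  # one 32x32x32 occupancy grid
```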

Automatic melody extraction algorithm using a convolutional neural network

  • Lee, Jongseol; Jang, Dalwon; Yoon, Kyoungro
    • KSII Transactions on Internet and Information Systems (TIIS), v.11 no.12, pp.6038-6053, 2017
  • In this study, we propose an automatic melody extraction algorithm using deep learning. In this algorithm, feature images generated from frequency-band energies are extracted from polyphonic audio files, and a convolutional neural network (CNN) is applied to the feature images. In the training data, each short frame of polyphonic music is labeled with a musical note, and a CNN-based classifier is trained to determine the pitch value of a short frame of audio signal. Since we aim to build a novel melody-extraction structure, the proposed algorithm is deliberately simple: instead of combining various signal-processing techniques, we use only a CNN to find the melody in polyphonic audio. Despite this simple structure, promising results were obtained in the experiments. Compared with state-of-the-art algorithms, the proposed algorithm did not give the best result, but comparable results were obtained, and we believe they could be improved with appropriate training data. In this paper, melody extraction and the proposed algorithm are introduced first, the algorithm is then explained in detail, and finally we present our experiments and a comparison of the results.
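
A minimal sketch of the core classifier maps a frequency-band energy image around each frame to a pitch class; the input size, pitch vocabulary, and layer sizes below are assumptions, not the paper's exact configuration.

```python
# Sketch: CNN mapping a per-frame feature image (frequency bands x context
# frames) to a pitch class. Shapes and vocabulary size are assumptions.
import torch
import torch.nn as nn

N_PITCHES = 61  # assumed pitch vocabulary (e.g., 60 notes + "no melody")

pitch_cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 4, N_PITCHES),
)

# one feature image: 64 frequency bands x 16 frames centered on the target frame
logits = pitch_cnn(torch.randn(1, 1, 64, 16))
```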

Diagnosis of Parkinson's disease based on audio voice using wav2vec (Audio Voice-Based Diagnosis of Parkinson's Disease Using Wav2vec)

  • Yoon, Hee-Jin
    • Journal of Digital Convergence, v.19 no.12, pp.353-358, 2021
  • Parkinson's disease is the second most common degenerative brain disease of old age, after Alzheimer's. Its symptoms, such as hand tremors and slowed movement and cognitive function, reduce quality of life in daily living, but the progression of the disease can be slowed through early diagnosis. To diagnose Parkinson's disease early, an algorithm was implemented that extracts features using wav2vec and determines the presence or absence of Parkinson's disease with a deep learning artificial neural network (ANN). In the experiment, the accuracy was 97.47%, better than results obtained when diagnosing Parkinson's disease with existing neural networks. Working directly from audio voice files also simplified the experimental process while yielding improved results.
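
A minimal sketch of the two-stage idea follows; the torchaudio bundle, pooling, and classifier sizes are assumptions, since the exact wav2vec variant and ANN used in the paper are not detailed in the abstract.

```python
# Sketch: wav2vec 2.0 features + small ANN for a binary Parkinson's screen.
# Bundle choice, pooling, and head sizes are assumptions.
import torch
import torch.nn as nn
import torchaudio

bundle = torchaudio.pipelines.WAV2VEC2_BASE
wav2vec = bundle.get_model().eval()
head = nn.Sequential(nn.Linear(768, 64), nn.ReLU(), nn.Linear(64, 2))

def diagnose(wav_path: str) -> torch.Tensor:
    wave, sr = torchaudio.load(wav_path)
    wave = torchaudio.functional.resample(wave, sr, bundle.sample_rate)
    with torch.no_grad():
        feats, _ = wav2vec.extract_features(wave)
    pooled = feats[-1].mean(dim=1)                 # average over time: (1, 768)
    return torch.softmax(head(pooled), dim=-1)     # [P(healthy), P(Parkinson's)]
```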