Title/Summary/Keyword: Multiple Audio Features

Classification of Pornographic Video Using Multiple Audio Features (다중 오디오 특징을 이용한 유해 동영상의 판별)

  • Kim, Jung-Soo;Chung, Myung-Bum;Sung, Bo-Kyung;Kwon, Jin-Man;Koo, Kwang-Hyo;Ko, Il-Ju
    • Proceedings of the HCI Society of Korea Conference / 2009.02a / pp.522-525 / 2009
  • This paper proposes a content-based method for classifying pornographic video, a serious social problem that has arisen as a harmful side effect of the internet. Audio data is used to extract features from the video; the audio features used in this paper are the frequency spectrum, autocorrelation, and MFCC. Sounds that could indicate objectionable content are detected, and a video is classified as pornographic by measuring what percentage of its entire audio track corresponds to such sounds. In the experiments, the classification accuracy was measured for each feature individually and compared with the result of using multiple features together; combining features gave better results than extracting and using any single audio feature alone.
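
As a rough illustration of the feature set named above, here is a minimal sketch using librosa; the function layout and the final ratio test are assumptions, since the paper does not publish its implementation.

```python
# Sketch: extract the three audio features the abstract names
# (frequency spectrum, autocorrelation, MFCC) from a video's audio
# track. librosa is assumed tooling, not the authors' stated choice.
import numpy as np
import librosa

def extract_features(path, sr=16000, n_mfcc=13):
    y, sr = librosa.load(path, sr=sr, mono=True)
    spectrum = np.abs(np.fft.rfft(y))                        # frequency spectrum
    autocorr = librosa.autocorrelate(y)                      # autocorrelation
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # MFCC frames
    return spectrum, autocorr, mfcc

def fraction_flagged(frame_flags):
    # The paper decides by the fraction of the whole audio matching the
    # target sound class; this ratio is a guess at that final step.
    return float(np.mean(frame_flags))
```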

CSpeech(Version 3.1)

  • Sik, Choe-Hong
    • Proceedings of the KSLP Conference / 1995.11a / pp.141-153 / 1995
  • CSpeech is a software package that implements an audio waveform/speech analysis workstation on an IBM Personal Computer or hardware-compatible computer. Features include digitizing audio waveforms on single or multiple channels, displaying the digitized waveforms, playing back audio waveforms from selected intervals of single channels, saving and retrieving waveforms from binary-format disk files, and analyzing audio waveforms for their temporal and spectral properties. The distinguishing characteristics of CSpeech are its support for multiple channels, minimal restrictions on sample rate and waveform duration, support for a variety of hardware configurations, fast graphics display, and its user-extensible, menu-based command structure.
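
The kind of temporal and spectral measurements such a tool reports can be sketched with plain numpy; this is illustrative only and not CSpeech's actual command set.

```python
# Sketch of simple temporal/spectral waveform measurements of the sort
# a CSpeech-style analysis workstation reports; plain numpy, not the
# package's real API.
import numpy as np

def analyze(waveform, sample_rate):
    duration = len(waveform) / sample_rate          # temporal: length in seconds
    rms = np.sqrt(np.mean(waveform ** 2))           # temporal: RMS level
    spectrum = np.abs(np.fft.rfft(waveform))
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / sample_rate)
    peak_hz = freqs[np.argmax(spectrum)]            # spectral: dominant frequency
    return duration, rms, peak_hz
```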

Compression history detection for MP3 audio

  • Yan, Diqun;Wang, Rangding;Zhou, Jinglei;Jin, Chao;Wang, Zhifeng
    • KSII Transactions on Internet and Information Systems (TIIS) / v.12 no.2 / pp.662-675 / 2018
  • Compression history detection plays an important role in digital multimedia forensics. Most existing work, however, focuses on digital images and video, and existing audio compression detection algorithms aim only to detect traces of double compression, while in real forgery scenarios multiple compressions are more likely. In this paper, we propose a detection algorithm that reveals the compression history of MP3 audio. Statistics of the scale factor and the Huffman table index, two parameters of the MP3 codec, are extracted as detection features. Experimental results show that the proposed method can effectively identify whether a test audio clip has previously undergone single, double, or triple compression.
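
A hedged sketch of the classification stage described above: extracting the scale-factor and Huffman-table statistics requires an MP3 side-information parser, which is stubbed out here, and the SVM is an assumed classifier rather than the authors' stated choice.

```python
# Sketch: classify compression history (single/double/triple) from
# per-file statistics of MP3 scale factors and Huffman table indices.
# Parsing these parameters out of the MP3 side information needs a
# codec-level parser, omitted here.
import numpy as np
from sklearn.svm import SVC

def side_info_stats(scale_factors, huffman_indices):
    # scale_factors: float array, huffman_indices: non-negative int array
    # (MP3 defines 32 Huffman tables). First-order statistics as features.
    return np.array([
        scale_factors.mean(), scale_factors.std(),
        np.bincount(huffman_indices, minlength=32).max(),
    ])

clf = SVC(kernel="rbf")
# X: one feature vector per training file; y: 1/2/3 compression passes
# clf.fit(X, y); clf.predict(side_info_stats(sf, hi).reshape(1, -1))
```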

Intra-Class Random Erasing (ICRE) augmentation for audio classification

  • Kumar, Teerath;Park, Jinbae;Bae, Sung-Ho
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2020.11a / pp.244-247 / 2020
  • Data augmentation helps improve deep learning performance when data is limited, and random erasing is one augmentation that has shown impressive results across multiple domains. Its main issue is that it can destroy good features when the randomly selected region is overwritten with random values, so performance does not improve as much as it should. We target this problem so that good features are preserved while random erasing is still applied. To that end, we introduce a new augmentation technique named Intra-Class Random Erasing (ICRE), which learns robust features by randomly exchanging a randomly selected region between samples of the same class. We perform multiple experiments using different models, including ResNet18 and VGG16, over a variety of datasets, including ESC10 and UrbanSound8K. Our approach shows effectiveness over other methods, including random erasing.
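
A minimal sketch of the exchange idea, assuming the exchanged region shares coordinates between the two same-class samples and assuming the patch-size bounds; the abstract does not pin either down.

```python
# Sketch of Intra-Class Random Erasing: instead of erasing a random
# region with noise, overwrite it with the same region taken from
# another sample of the same class. Patch-size bounds are assumed.
import numpy as np

def icre(x, same_class_x, rng, min_frac=0.1, max_frac=0.3):
    h, w = x.shape[-2:]
    ph = rng.integers(int(min_frac * h), int(max_frac * h) + 1)
    pw = rng.integers(int(min_frac * w), int(max_frac * w) + 1)
    top = rng.integers(0, h - ph + 1)
    left = rng.integers(0, w - pw + 1)
    out = x.copy()
    out[..., top:top + ph, left:left + pw] = \
        same_class_x[..., top:top + ph, left:left + pw]
    return out

rng = np.random.default_rng(0)
a = rng.random((1, 128, 128))   # e.g. spectrogram of one sample
b = rng.random((1, 128, 128))   # another sample of the same class
augmented = icre(a, b, rng)
```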

The Implementation of Multi-Channel Audio Codec for Real-Time Operation (실시간 처리를 위한 멀티채널 오디오 코덱의 구현)

  • Hong, Jin-Woo
    • The Journal of the Acoustical Society of Korea / v.14 no.2E / pp.91-97 / 1995
  • This paper describes the implementation of a multi-channel audio codec for HDTV. The codec features 3/2-stereo plus low-frequency enhancement, downward compatibility with smaller numbers of channels, backward compatibility with the existing 2/0-stereo system (MPEG-1 audio), and multilingual capability. The encoder consists of a 6-channel analog audio input part with a sampling rate of 48 kHz, a 4-channel digital audio input part, and three TMS320C40 DSPs. It compresses multi-channel audio using a human perceptual psychoacoustic model, reducing the bit rate to 384 kbit/s without impairing subjective quality. The decoder consists of a 6-channel analog audio output part, a 4-channel digital audio output part, and two TMS320C40 DSPs for decoding. It analyzes the bit stream received at 384 kbit/s from the encoder and reproduces the multi-channel audio signals at the analog and digital outputs. Multi-processing across the multiple DSPs is ensured by high-speed data transfer between them, coordinating communication-port activity with DMA coprocessors. Finally, some technical considerations for achieving real-time operation are suggested, drawn from implementing this codec with the MPEG-2 Layer II audio coding algorithm on a hardware architecture built from multiple commercial DSPs.
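
A quick arithmetic check of the compression the codec achieves, assuming 16-bit PCM samples (the paper does not state the bit depth):

```python
# Rough compression ratio from the numbers in the abstract: 6 channels
# sampled at 48 kHz (16-bit PCM assumed) coded down to 384 kbit/s.
channels, fs_hz, bits = 6, 48_000, 16
pcm_rate = channels * fs_hz * bits   # 4,608,000 bit/s uncompressed
coded_rate = 384_000                 # bit/s after MPEG-2 Layer II coding
print(f"compression ratio: {pcm_rate / coded_rate:.0f}:1")  # 12:1
```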

A Personal Video Event Classification Method based on Multi-Modalities by DNN-Learning (DNN 학습을 이용한 퍼스널 비디오 시퀀스의 멀티 모달 기반 이벤트 분류 방법)

  • Lee, Yu Jin;Nang, Jongho
    • Journal of KIISE / v.43 no.11 / pp.1281-1297 / 2016
  • In recent years, personal video has grown tremendously due to the substantial increase in the use of smart devices and networking services, with which users create and share video content easily and with few restrictions. Because videos generally contain multiple modalities, and the frame data varies across time points, taking both aspects into account can significantly improve event detection performance. This paper proposes an event detection method in which high-level features are first extracted from the multiple modalities in a video and rearranged in time sequence; the association between the modalities is then learned by a DNN to produce a personal video event detector. In the proposed method, audio and image data are first synchronized and extracted, then fed into GoogLeNet and a Multi-Layer Perceptron (MLP) to extract high-level features. The results are rearranged in time sequence, and each video is processed into a single feature representation for training the DNN.
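
A hedged sketch of the fusion step: time-ordered high-level features from the two branches are concatenated and classified by a DNN. All layer sizes are illustrative assumptions; the paper's exact topology is not reproduced here.

```python
# Sketch of the fusion stage: per-step high-level features from the
# image branch (e.g. GoogLeNet embeddings) and the audio branch (MLP
# embeddings) are concatenated in time order and classified by a DNN.
import torch
import torch.nn as nn

class EventClassifier(nn.Module):
    def __init__(self, img_dim=1024, aud_dim=128, steps=10, n_events=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear((img_dim + aud_dim) * steps, 512), nn.ReLU(),
            nn.Linear(512, n_events),
        )

    def forward(self, img_feats, aud_feats):
        # img_feats: (batch, steps, img_dim), aud_feats: (batch, steps, aud_dim)
        x = torch.cat([img_feats, aud_feats], dim=-1)  # fuse modalities per step
        return self.net(x.flatten(1))                  # keep time order, flatten

model = EventClassifier()
logits = model(torch.randn(2, 10, 1024), torch.randn(2, 10, 128))
```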

Video Highlight Prediction Using Multiple Time-Interval Information of Chat and Audio (채팅과 오디오의 다중 시구간 정보를 이용한 영상의 하이라이트 예측)

  • Kim, Eunyul;Lee, Gyemin
    • Journal of Broadcast Engineering / v.24 no.4 / pp.553-563 / 2019
  • As the number of videos uploaded on live-streaming platforms rapidly increases, the demand for highlight videos that promote the viewer experience is increasing. In this paper, we present novel methods for predicting highlights using the chat logs and audio data in videos. The proposed models employ bi-directional LSTMs to understand the contextual flow of a video. We also propose using features computed over various time intervals to capture mid-to-long-term flows. The proposed methods are demonstrated on e-sports and baseball videos collected from personal broadcasting platforms such as Twitch and Kakao TV. The results show that information from multiple time intervals is useful for predicting video highlights.
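
A minimal sketch of the model shape described above: a bi-directional LSTM over per-step features pooled at several interval lengths. All sizes are assumptions.

```python
# Sketch: chat/audio features pooled over several interval lengths are
# stacked per time step and fed to a bidirectional LSTM that scores
# each step as highlight / non-highlight.
import torch
import torch.nn as nn

class HighlightNet(nn.Module):
    def __init__(self, feat_dim=64, n_intervals=3, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim * n_intervals, hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):
        # x: (batch, time, n_intervals * feat_dim), e.g. features over
        # short/medium/long windows concatenated per step
        h, _ = self.lstm(x)
        return torch.sigmoid(self.head(h)).squeeze(-1)  # per-step score

scores = HighlightNet()(torch.randn(2, 100, 192))
```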

Developing Smartphone-based Control Service of Vehicle's Convenience Features using CAN (CAN을 활용한 스마트폰 기반 차량 편의장치 제어 서비스 개발)

  • Jeon, Byoung Chan;Cha, Si Ho;Cho, Sang Yeop
    • Journal of Korea Society of Digital Industry and Information Management / v.8 no.1 / pp.9-15 / 2012
  • Multiple convenience features are installed in recently released cars, but controlling them is still inconvenient in many ways. Resolving this requires studying how to make the convenience features easier to use and how to control them remotely. Currently, a wide range of smartphone-based convergence services are being released across various industries, along with smartphones offering state-of-the-art functions. In this paper, we design and implement smartphone-based applications for controlling a vehicle's convenience features. To do this, we configure CAN (Controller Area Network) communication between the vehicle's various convenience features and set up an MCU (Micro Controller Unit) to control each feature. We also connect the MCU to the smartphone to enable remote control. With the implemented CAN-based control service, lights, turn signals, audio, windows, the air conditioner, and other features can be controlled remotely from a smartphone.
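
On the vehicle side, the CAN traffic could look like the following python-can sketch; the arbitration ID, payload layout, and SocketCAN channel are placeholders, since the paper does not publish its message format.

```python
# Sketch of the smartphone-to-MCU path at the CAN end, using the
# python-can library. The arbitration ID and payload byte below are
# made-up placeholders, not the paper's actual message layout.
import can

def send_command(bus, feature_id, value):
    msg = can.Message(arbitration_id=feature_id,
                      data=[value], is_extended_id=False)
    bus.send(msg)

# Assumes a Linux SocketCAN interface named "can0".
with can.interface.Bus(channel="can0", interface="socketcan") as bus:
    send_command(bus, feature_id=0x123, value=0x01)  # e.g. audio on
```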

Video Highlight Prediction Using GAN and Multiple Time-Interval Information of Audio and Image (오디오와 이미지의 다중 시구간 정보와 GAN을 이용한 영상의 하이라이트 예측 알고리즘)

  • Lee, Hansol;Lee, Gyemin
    • Journal of Broadcast Engineering / v.25 no.2 / pp.143-150 / 2020
  • Huge amounts of content are uploaded every day on various streaming platforms, and game and sports videos account for a large portion of it. Broadcasting companies sometimes create and provide highlight videos, but doing so is time-consuming and costly. In this paper, we propose models that automatically predict highlights in games and sports matches. While most previous approaches use visual information exclusively, our models use both audio and visual information and present a way to understand the short-term and long-term flows of a video. We also describe models that incorporate a GAN to find better highlight features. The proposed models are evaluated on e-sports and baseball videos.
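
The GAN component is not detailed in the abstract; the following generic generator/discriminator pair over feature vectors is one plausible wiring, offered purely as an assumption.

```python
# Sketch of a GAN over highlight features: a generator maps noise to
# highlight-like feature vectors, a discriminator learns to tell them
# from features of real highlights. Sizes and wiring are illustrative.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 192))
D = nn.Sequential(nn.Linear(192, 128), nn.ReLU(), nn.Linear(128, 1))

real = torch.randn(16, 192)        # stand-in for real highlight features
fake = G(torch.randn(16, 32))      # generated highlight features
loss = nn.BCEWithLogitsLoss()
d_loss = (loss(D(real), torch.ones(16, 1)) +
          loss(D(fake.detach()), torch.zeros(16, 1)))  # discriminator step
g_loss = loss(D(fake), torch.ones(16, 1))              # generator step
```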

Animal Sounds Classification Scheme Based on Multi-Feature Network with Mixed Datasets

  • Kim, Chung-Il;Cho, Yongjang;Jung, Seungwon;Rew, Jehyeok;Hwang, Eenjun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.8 / pp.3384-3398 / 2020
  • In recent years, as the environment has become an important issue for food, energy, and urban development, diverse environment-related applications such as environmental monitoring and ecosystem management have emerged. In such applications, automatic classification of animals from video or sound is very useful in terms of cost and convenience. Much work has been done on animal sound classification using artificial intelligence techniques such as convolutional neural networks, but most of it deals only with the sounds of a specific class of animals, such as birds or insects, and is therefore not suitable for classifying diverse animal sounds. In this paper, we propose a sound classification scheme based on a multi-feature network for classifying the sounds of multiple animal species. We first collected multiple animal sound datasets and grouped them into classes, then extracted audio features by generating mixed records and used those features for training. To evaluate the effectiveness of the scheme, we constructed an animal sound classification model, performed various experiments, and report some of the results.
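
A minimal sketch of a two-branch multi-feature network, assuming MFCC and mel-spectrogram inputs; the branch choices and sizes are assumptions, not the paper's published architecture.

```python
# Sketch of a multi-feature network: separate branches encode two audio
# representations (here assumed to be MFCC and mel spectrogram) and
# their embeddings are concatenated for the species classifier.
import torch
import torch.nn as nn

class MultiFeatureNet(nn.Module):
    def __init__(self, mfcc_dim=13 * 100, mel_dim=128 * 100, n_classes=20):
        super().__init__()
        self.mfcc_branch = nn.Sequential(nn.Linear(mfcc_dim, 128), nn.ReLU())
        self.mel_branch = nn.Sequential(nn.Linear(mel_dim, 128), nn.ReLU())
        self.head = nn.Linear(256, n_classes)

    def forward(self, mfcc, mel):
        # mfcc: (batch, 13, 100), mel: (batch, 128, 100); flatten each,
        # encode per branch, then concatenate the embeddings.
        z = torch.cat([self.mfcc_branch(mfcc.flatten(1)),
                       self.mel_branch(mel.flatten(1))], dim=-1)
        return self.head(z)

logits = MultiFeatureNet()(torch.randn(4, 13, 100), torch.randn(4, 128, 100))
```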