• Title/Summary/Keyword: Voice Detection (음성 탐지)

92 search results

Deep Learning Model for Metaverse Environment to Detect Metaphor (메타버스 환경에서 음성 혐오 발언 탐지를 위한 딥러닝 모델 설계)

  • Song, Jin-Su;Karabaeva, Dilnoza;Son, Seung-Woo;Shin, Young-Tea
    • Proceedings of the Korea Information Processing Society Conference
    • /
    • 2022.05a
    • /
    • pp.621-623
    • /
    • 2022
  • The COVID-19 pandemic has increased interest in platforms for contact-free communication, and metaverse platforms built on virtual-world concepts are emerging as a new social medium for the MZ generation. In the metaverse, where users interact through avatars, communication goes beyond text to include voice, gestures, and gaze. As voice-based communication grows, so do reports of hate speech that offends other users. Existing hate-speech detection systems, however, are text-based and merely replace predefined hate keywords with special characters, so they cannot detect hate speech in voice. This paper therefore proposes an AI-based system for detecting hateful expressions in speech. The proposed system extracts emotional features of metaphorical and explicit hate speech from the speech waveform, converts the speech to text, and combines the waveform analysis with the result of detecting hateful sentences in the transcript. As future work, the system needs to be built and its performance evaluated for realistic validation.
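The two-branch design above (emotional features from the waveform, plus hate-sentence detection on the speech-to-text transcript) amounts to a late fusion of two detector scores. A minimal sketch, assuming each branch emits a score in [0, 1]; the function names, weight, and threshold are illustrative, not from the paper:

```python
def fuse_scores(acoustic_score, text_score, weight=0.5):
    """Weighted late fusion of two detector scores in [0, 1]."""
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must lie in [0, 1]")
    return weight * acoustic_score + (1.0 - weight) * text_score

def is_hate_speech(acoustic_score, text_score, threshold=0.5):
    """Flag an utterance when the fused score clears the threshold."""
    return fuse_scores(acoustic_score, text_score) >= threshold

print(is_hate_speech(0.9, 0.7))  # both branches agree -> True
print(is_hate_speech(0.1, 0.2))  # -> False
```

The weight would in practice be tuned on validation data; more elaborate fusion (e.g. a small classifier over both scores) follows the same pattern.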

Voice Synthesis Detection Using Language Model-Based Speech Feature Extraction (언어 모델 기반 음성 특징 추출을 활용한 생성 음성 탐지)

  • Seung-min Kim;So-hee Park;Dae-seon Choi
    • Journal of the Korea Institute of Information Security & Cryptology
    • /
    • v.34 no.3
    • /
    • pp.439-449
    • /
    • 2024
  • Recent rapid advances in voice-generation technology make it possible to synthesize natural-sounding voices from text alone. This progress, however, has also enabled malicious uses such as voice phishing (vishing), in which generated voices are exploited for crime. Many models detect synthesized voices by extracting features from the audio and using them to estimate the likelihood that the voice was generated. This paper proposes a new voice-feature extraction model to address such misuse: it combines a deep-learning-based audio codec model with the pre-trained natural language processing model BERT to extract novel voice features. To assess the suitability of the extracted features for detection, four generated-voice detection models were built on them and their performance was evaluated. For comparison, three detection models based on the Deepfeature representations proposed in previous studies were evaluated against the others in terms of accuracy and EER. The proposed model achieved an accuracy of 88.08% and a low EER of 11.79%, outperforming the existing models. These results confirm that the voice-feature extraction method introduced in this paper can be an effective tool for distinguishing generated from real voices.
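The abstract reports accuracy and EER (equal error rate), the operating point where the false-acceptance and false-rejection rates coincide. A minimal, self-contained way to compute EER from detector scores (the scores below are synthetic, not the paper's data):

```python
import numpy as np

def compute_eer(scores, labels):
    """EER from detector scores (higher = more likely generated).
    labels: 1 = generated voice, 0 = genuine voice."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    best = (1.0, 0.0)                             # (FAR, FRR), worst gap
    for t in np.unique(scores):                   # candidate thresholds
        far = np.mean(scores[labels == 0] >= t)   # false acceptance rate
        frr = np.mean(scores[labels == 1] < t)    # false rejection rate
        if abs(far - frr) < abs(best[0] - best[1]):
            best = (far, frr)
    return (best[0] + best[1]) / 2

scores = [0.9, 0.8, 0.7, 0.35, 0.6, 0.2, 0.1, 0.4]
labels = [1,   1,   1,   1,    0,   0,   0,   0]
print(compute_eer(scores, labels))  # -> 0.25
```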

Data augmentation in voice spoofing problem (데이터 증강기법을 이용한 음성 위조 공격 탐지모형의 성능 향상에 대한 연구)

  • Choi, Hyo-Jung;Kwak, Il-Youp
    • The Korean Journal of Applied Statistics
    • /
    • v.34 no.3
    • /
    • pp.449-460
    • /
    • 2021
  • ASVspoof 2017 addresses the detection of replay attacks: classifying genuine human voices versus spoofed ones, where a spoofed voice is a recording of the original voice replayed through different types of microphones and speakers. Data augmentation has been studied extensively for image data, and several studies have applied it to audio, but few attempts target voice replay attacks; this paper therefore explores how audio modification through data augmentation affects replay-attack detection. Seven augmentation techniques were applied, of which dynamic value change (DVC) and pitch shifting improved performance. Each reduced the base model's EER by about 8%, and DVC in particular raised accuracy noticeably in some of the 57 replay configurations. The largest gain was in configuration RC53, where DVC improved base-model accuracy by roughly 45%, correctly identifying high-end recording and playback devices that were previously difficult to detect. Based on this study, the DVC and pitch augmentation techniques are helpful for improving performance on the voice-spoofing detection problem.
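A rough sketch of the two augmentations credited with the gains, under loose assumptions since the abstract gives no implementation details: DVC is modeled here as a slowly varying random gain envelope, and pitch shifting as naive resampling (which also changes duration; real pipelines use time-scale-preserving methods such as a phase vocoder):

```python
import numpy as np

def dynamic_value_change(x, low=0.5, high=1.5, period=4000, seed=None):
    """DVC modeled as a smooth random gain envelope over the waveform."""
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0, 2 * np.pi)
    t = np.arange(len(x))
    gain = low + (high - low) * 0.5 * (1 + np.sin(2 * np.pi * t / period + phase))
    return x * gain

def pitch_shift(x, semitones):
    """Naive pitch shift by linear-interpolation resampling.
    Note: this also changes duration, unlike production methods."""
    rate = 2.0 ** (semitones / 12.0)
    idx = np.arange(0, len(x), rate)
    return np.interp(idx, np.arange(len(x)), x)

x = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s at 16 kHz
print(len(pitch_shift(x, 12)))          # one octave up -> 8000 samples
print(dynamic_value_change(x, seed=0).shape)
```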

Verification of Extended TRW Algorithm for DDoS Detection in SIP Environment (SIP 환경에서의 DDoS 공격 탐지를 위한 확장된 TRW 알고리즘 검증)

  • Yum, Sung-Yeol;Ha, Do-Yoon;Jeong, Hyun-Cheol;Park, Seok-Cheon
    • Journal of Korea Multimedia Society
    • /
    • v.13 no.4
    • /
    • pp.594-600
    • /
    • 2010
  • DDoS attacks on Internet (IP data) networks have been studied extensively, but studies on voice networks remain scarce. To address this, we designed and evaluated an extended TRW algorithm for detecting DDoS attack traffic in voice networks that run over IP data networks. The proposed algorithm analyzes the existing TRW algorithm for detecting DDoS attacks in IP networks, redesigns its connection-setup and connection-teardown handling for voice networks, and defines a probability function to count these events. To verify the algorithm, a threshold was set and experiments were run in the NS-2 simulator, measuring detection rate by attack-traffic type and detection time by attack rate. In the evaluation, an INVITE flood sending one attack packet every 0.1 seconds was detected in 4.3 seconds, and 13,453 of 15,000 attack packets were identified, a detection rate of 89.6%.
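The base TRW (Threshold Random Walk) test that the paper extends is a sequential likelihood-ratio walk over connection outcomes: each failed or completed connection multiplies a ratio, and crossing the upper threshold flags the source. A minimal sketch with illustrative parameters; the paper's SIP-specific probability function and thresholds are not given in the abstract:

```python
def trw_classify(outcomes, theta0=0.8, theta1=0.2, eta0=0.01, eta1=100.0):
    """Sequential TRW test over connection outcomes.
    outcomes: iterable of booleans, True = connection completed.
    theta0/theta1: assumed completion probabilities for benign vs.
    malicious sources (illustrative values, not the paper's)."""
    ratio = 1.0
    for ok in outcomes:
        if ok:
            ratio *= theta1 / theta0            # success favors "benign"
        else:
            ratio *= (1 - theta1) / (1 - theta0)  # failure favors "attacker"
        if ratio >= eta1:
            return "attacker"
        if ratio <= eta0:
            return "benign"
    return "pending"

# A flood of INVITEs that never complete drives the ratio up quickly.
print(trw_classify([False] * 10))  # -> "attacker"
print(trw_classify([True] * 10))   # -> "benign"
```

With these parameters each failure multiplies the ratio by 4, so an INVITE flood is flagged after only four unanswered attempts.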

Design and Evaluation of DDoS Attack Detection Algorithm in Voice Network (음성망 환경에서 DDoS 공격 탐지 알고리즘 설계 및 평가)

  • Yun, Sung-Yeol;Kim, Hwan-Kuk;Park, Seok-Cheon
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.13 no.12
    • /
    • pp.2555-2562
    • /
    • 2009
  • The algorithm proposed in this paper defines a probability function that counts connection-setup and connection-end events so that the TRW algorithm can be applied to a voice network. To evaluate the proposed algorithm, a threshold was set, the probability was varied according to the type of connection-attack traffic to measure the algorithm's effectiveness, and detection time was measured as a function of attack packet rate. The evaluation shows that for a DDoS attack starting at 10 packets per second, the algorithm detects the attack 1.2 seconds after it starts, and within 0.5 seconds when the rate is 20 packets per second.

Deep Learning-Based User Emergency Event Detection Algorithms Fusing Vision, Audio, Activity and Dust Sensors (영상, 음성, 활동, 먼지 센서를 융합한 딥러닝 기반 사용자 이상 징후 탐지 알고리즘)

  • Jung, Ju-ho;Lee, Do-hyun;Kim, Seong-su;Ahn, Jun-ho
    • Journal of Internet Computing and Services
    • /
    • v.21 no.5
    • /
    • pp.109-118
    • /
    • 2020
  • Recently, people have been spending much of their time inside their homes because of various diseases. A person in a single-person household who is injured at home or infected with a disease may find it difficult to ask others for help. This study proposes algorithms to detect emergency events, situations in which single-person households need help from others, such as injury or infection, in the home: a vision-pattern detection algorithm using home CCTV, an audio-pattern detection algorithm using AI speakers, an activity-pattern detection algorithm using smartphone acceleration sensors, and a dust-pattern detection algorithm using air purifiers. For cases where home CCTV cannot be used due to security or privacy concerns, a fusion method combining the audio, activity, and dust sensors is proposed. Data for measuring each algorithm's accuracy were collected from YouTube and through experiments.
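The CCTV-free fallback described above amounts to fusing whatever detectors remain available. A trivial majority-vote sketch with hypothetical boolean detector outputs; the paper's actual fusion rule is not specified in the abstract:

```python
def fuse_events(vision=None, audio=False, activity=False, dust=False):
    """Majority vote over available detectors. vision=None models the
    case where the home-CCTV channel is unavailable (e.g. privacy)."""
    votes = [audio, activity, dust]
    if vision is not None:
        votes.append(vision)
    return sum(votes) >= 2          # at least two modalities must agree

print(fuse_events(vision=None, audio=True, activity=True))  # -> True
print(fuse_events(vision=None, dust=True))                  # -> False
```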

A Parametric Voice Activity Detection Based on the SPD-TE for Nonstationary Noises (비정체성 잡음을 위한 SPD-TE 기반 계수형 음성 활동 탐지)

  • Koo, Boneung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.34 no.4
    • /
    • pp.310-315
    • /
    • 2015
  • A single-channel VAD (Voice Activity Detection) algorithm for nonstationary noise environments is proposed in this paper. Threshold values of the feature parameter for the VAD decision are updated adaptively from estimates of the means and standard deviations of past non-speech frames. The feature parameter, SPD-TE (Spectral Power Difference-Teager Energy), is obtained by applying the Teager energy to the WPD (Wavelet Packet Decomposition) coefficients; it was previously reported to be a noise-robust feature for VAD. Experiments using the TIMIT speech and NOISEX-92 noise databases show that the decision accuracy of the proposed algorithm is comparable to that of several typical VAD algorithms, including standardized ones, for SNR values ranging from 10 to -10 dB.
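The adaptive-threshold rule described above (the threshold tracks the mean and standard deviation of past non-speech frames) can be sketched as follows; the margin k, window size, and initialization are illustrative assumptions, not the paper's values:

```python
import numpy as np

def vad_adaptive(features, k=3.0, window=20, init_noise=5):
    """Per-frame speech/non-speech decisions from a 1-D feature track.
    The threshold is mean + k * std of recent non-speech frames."""
    noise = list(features[:init_noise])  # assume the first frames are noise
    decisions = []
    for f in features:
        mu, sigma = np.mean(noise), np.std(noise)
        is_speech = f > mu + k * sigma
        decisions.append(bool(is_speech))
        if not is_speech:                # update statistics on noise only
            noise = (noise + [f])[-window:]
    return decisions

frames = np.array([1.0, 1.1, 0.9, 1.0, 1.05, 8.0, 9.0, 1.0])
print(vad_adaptive(frames))  # only the two large frames read as speech
```

Updating the statistics only on non-speech frames is what lets the threshold follow a slowly changing noise floor without being dragged upward by speech.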

A Study on Lip Detection based on Eye Localization for Visual Speech Recognition in Mobile Environment (모바일 환경에서의 시각 음성인식을 위한 눈 정위 기반 입술 탐지에 대한 연구)

  • Gyu, Song-Min;Pham, Thanh Trung;Kim, Jin-Young;Taek, Hwang-Sung
    • Journal of the Korean Institute of Intelligent Systems
    • /
    • v.19 no.4
    • /
    • pp.478-484
    • /
    • 2009
  • Automatic speech recognition (ASR) is an attractive technology in an age that seeks a more convenient life. Although many approaches to ASR have been proposed, performance in noisy environments is still poor, so state-of-the-art ASR uses visual information in addition to audio. This paper presents a novel lip-detection method for visual speech recognition in mobile environments. Applying visual information to speech recognition requires extracting exact lip regions. Because eyes are easier to detect than lips, we first detect the positions of the left and right eyes and use them to locate the lip region roughly. We then apply K-means clustering to divide that region into groups, and detect the two lip corners and the lip center by choosing the largest cluster. Finally, experiments on the Samsung AVSR database show the effectiveness of the proposed method.
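The clustering step (K-means over the rough lip region, then keep the biggest cluster) can be illustrated on synthetic 2-D points. This toy two-cluster version is an assumption-laden sketch, not the paper's implementation, which would cluster pixels of the half-face crop below the detected eyes:

```python
import numpy as np

def two_means(points, iters=10):
    """Tiny 2-means, initialized with the first point and the point
    farthest from it (deterministic; good enough for a sketch)."""
    c0 = points[0]
    c1 = points[np.argmax(np.linalg.norm(points - c0, axis=1))]
    centers = np.stack([c0, c1]).astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
        labels = dists.argmin(axis=1)
        centers = np.stack([points[labels == j].mean(axis=0) for j in (0, 1)])
    return labels, centers

def largest_cluster(points):
    """Return the biggest cluster's points and its center."""
    labels, centers = two_means(points)
    biggest = np.bincount(labels, minlength=2).argmax()
    return points[labels == biggest], centers[biggest]

# Synthetic data: a dense "lip" blob plus a small distractor blob.
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal([50, 80], 2, size=(200, 2)),
                 rng.normal([10, 10], 2, size=(20, 2))])
region, center = largest_cluster(pts)
print(len(region), np.round(center))
```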

Lip and Voice Synchronization Using Visual Attention (시각적 어텐션을 활용한 입술과 목소리의 동기화 연구)

  • Dongryun Yoon;Hyeonjoong Cho
    • The Transactions of the Korea Information Processing Society
    • /
    • v.13 no.4
    • /
    • pp.166-173
    • /
    • 2024
  • This study explores lip-sync detection: whether the lip movements and the voice in a video are synchronized. Typical lip-sync detection techniques crop the facial area of a given video and feed the lower half of the cropped box to a visual encoder to extract visual features. To place more emphasis on the articulatory region of the lips for more accurate detection, we propose using a pre-trained visual-attention-based encoder: the Visual Transformer Pooling (VTP) module, originally designed for the lip-reading task of predicting the script from visual information alone, without audio. Our experiments show that, despite having fewer learnable parameters, the proposed method outperforms the latest model, VocaList, on the LRS2 dataset, achieving a lip-sync detection accuracy of 94.5% with five context frames. Moreover, our approach exceeds VocaList by approximately 8% in lip-sync detection accuracy even on an untrained dataset, Acappella.

A Single Channel Voice Activity Detection for Noisy Environments Using Wavelet Packet Decomposition and Teager Energy (웨이블렛 패킷 변환과 Teager 에너지를 이용한 잡음 환경에서의 단일 채널 음성 판별)

  • Koo, Boneung
    • The Journal of the Acoustical Society of Korea
    • /
    • v.33 no.2
    • /
    • pp.139-145
    • /
    • 2014
  • In this paper, a feature parameter is obtained by applying the Teager energy to the WPD (Wavelet Packet Decomposition) coefficients. The threshold value is obtained from the means and standard deviations of non-speech frames. Experiments using the TIMIT speech and NOISEX-92 noise databases show that the proposed algorithm is superior to a typical VAD algorithm. ROC (Receiver Operating Characteristic) curves are used to compare the performance of the VADs for SNR values ranging from 10 to -10 dB.
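The Teager energy operator that both of these VAD papers apply to wavelet-packet coefficients has the standard discrete form psi[n] = x[n]^2 - x[n-1]*x[n+1]; for a pure sinusoid A*sin(w*n) it equals A^2*sin(w)^2 exactly, tracking amplitude and frequency together. A minimal sketch of the operator itself (applying it per WPD subband would additionally need a wavelet library such as PyWavelets):

```python
import numpy as np

def teager_energy(x):
    """Discrete Teager energy: psi[n] = x[n]^2 - x[n-1] * x[n+1]."""
    x = np.asarray(x, dtype=float)
    return x[1:-1] ** 2 - x[:-2] * x[2:]

n = np.arange(1000)
x = 2.0 * np.sin(0.3 * n)           # amplitude A = 2, frequency w = 0.3
psi = teager_energy(x)
print(round(float(psi.mean()), 4))  # constant at A^2 * sin(w)^2
```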