• Title/Summary/Keyword: Deepfake Detection


A Comprehensive Study on Key Components of Grayscale-based Deepfake Detection

  • Seok Bin Son;Seong Hee Park;Youn Kyu Lee
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.8 / pp.2230-2252 / 2024
  • Advances in deep learning technology have enabled the generation of more realistic deepfakes, which not only endanger individuals' identities but also exploit vulnerabilities in face recognition systems. The majority of existing deepfake detection methods have primarily focused on RGB-based analysis, offering unreliable performance in terms of detection accuracy and time. To address this issue, a grayscale-based deepfake detection method has recently been proposed. This method significantly reduces detection time while providing accuracy comparable to RGB-based methods. However, despite its effectiveness, the "key components" that directly affect the performance of grayscale-based deepfake detection have not been systematically analyzed. In this paper, we target three key components: the RGB-to-grayscale conversion method, the brightness level in grayscale, and the resolution level in grayscale. To analyze their impact on the performance of grayscale-based deepfake detection, we conducted comprehensive evaluations, including component-wise analysis and comparative analysis using real-world datasets. For each key component, we quantitatively analyzed the performance of its variants and identified the differences between them. Moreover, we verified the effectiveness of an optimal combination of the key components by comparing it with existing deepfake detection methods.
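The three key components studied in this entry can be sketched minimally in pure Python. The conversion weights, brightness scaling, and stride-based downsampling below are illustrative assumptions, not the paper's exact configuration:

```python
def to_grayscale(pixel, method="luminosity"):
    """Convert one (R, G, B) pixel to a grayscale value."""
    r, g, b = pixel
    if method == "luminosity":      # ITU-R BT.601 weighting
        return 0.299 * r + 0.587 * g + 0.114 * b
    if method == "average":         # unweighted mean of the channels
        return (r + g + b) / 3
    raise ValueError(f"unknown method: {method}")

def adjust_brightness(gray, factor):
    """Scale a grayscale value and clamp it to the 0-255 range."""
    return max(0.0, min(255.0, gray * factor))

def downsample(image, step):
    """Reduce resolution by keeping every `step`-th row and column."""
    return [row[::step] for row in image[::step]]

img = [[(200, 100, 50), (10, 20, 30)],
       [(0, 0, 0), (255, 255, 255)]]
gray = [[to_grayscale(p) for p in row] for row in img]
small = downsample(gray, 2)   # 1x1 image: only the top-left pixel survives
```

Each of the three knobs (conversion method, brightness factor, resolution step) corresponds to one of the paper's key components, which is what makes a component-wise ablation possible.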

Deepfake Image Detection based on Visual Saliency (Visual Saliency 기반의 딥페이크 이미지 탐지 기법)

  • Harim Noh;Jehyeok Rew
    • Journal of Platform Technology / v.12 no.1 / pp.128-140 / 2024
  • 'Deepfake' refers to a video synthesis technique that uses various artificial intelligence technologies to create highly realistic fake content, causing serious confusion to individuals and society when used to generate fake news, commit fraud, impersonate others maliciously, and more. To address this issue, methods that accurately detect malicious images generated by deepfakes are needed. In this paper, we extract and analyze saliency features from deepfake and real images, detect candidate synthesis regions on the images, and finally construct an automatic deepfake detection model that focuses on the extracted features. The proposed saliency-feature-based model can be applied universally wherever deepfake detection is required, such as to synthesized images and videos. To demonstrate the performance of our approach, we conducted several experiments that show the effectiveness of the deepfake detection task.
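The saliency-then-candidate-region pipeline described above can be illustrated with a deliberately simple stand-in: score each pixel by its local gradient magnitude and flag high-scoring coordinates. The paper's actual saliency extractor is not specified here, so this is only a sketch of the overall flow:

```python
def saliency_map(gray):
    """Per-pixel saliency as intensity difference to right/lower neighbours."""
    h, w = len(gray), len(gray[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = abs(gray[y][x] - gray[y][min(x + 1, w - 1)])
            dy = abs(gray[y][x] - gray[min(y + 1, h - 1)][x])
            out[y][x] = dx + dy
    return out

def candidate_regions(sal, threshold):
    """Coordinates whose saliency exceeds the threshold."""
    return [(y, x) for y, row in enumerate(sal)
            for x, v in enumerate(row) if v > threshold]

gray = [[0, 0],
        [0, 255]]                      # one bright "blended" pixel
regions = candidate_regions(saliency_map(gray), threshold=100)
```

A real detector would then crop or weight these candidate regions before classification.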


Cascaded-Hop For DeepFake Videos Detection

  • Zhang, Dengyong;Wu, Pengjie;Li, Feng;Zhu, Wenjie;Sheng, Victor S.
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.5 / pp.1671-1686 / 2022
  • Face manipulation tools, represented by Deepfake, have threatened the security of people's biometric identity information. In particular, manipulation tools built on deep learning technology have posed great challenges to Deepfake detection. Many detection solutions exist, based on both traditional machine learning and advanced deep learning, but most of them perform poorly when evaluated on datasets of different quality. In this paper, to build high-quality Deepfake datasets, we provide a preprocessing method that uses image pixel-matrix features to eliminate similar images and a residual channel attention network (RCAN) to resize images. Notably, we also describe a Deepfake detector named Cascaded-Hop, which is based on the PixelHop++ system and the successive subspace learning (SSL) model. Fed with the preprocessed datasets, Cascaded-Hop achieves good classification results across different manipulation types and multiple quality levels. In experiments on FaceForensics++ and Celeb-DF, the AUC (area under the curve) results of our proposed methods are comparable to state-of-the-art models.
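The similar-image elimination step can be sketched as a greedy near-duplicate filter: drop a frame when its mean absolute pixel difference to an already-kept frame falls below a threshold. The paper's actual pixel-matrix feature and RCAN-based resizing are more involved; this only illustrates the filtering idea, and the threshold is an assumption:

```python
def mean_abs_diff(a, b):
    """Mean absolute difference between two flat pixel lists."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def deduplicate(images, threshold):
    """Keep only images sufficiently different from all kept ones."""
    kept = []
    for img in images:
        if all(mean_abs_diff(img, k) >= threshold for k in kept):
            kept.append(img)
    return kept

frames = [[10, 10, 10], [11, 10, 10], [200, 200, 200]]
unique = deduplicate(frames, threshold=5)   # drops the near-copy
```

Removing near-identical consecutive video frames this way keeps a dataset from being dominated by redundant samples.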

A Comparative Study on Deepfake Detection using Gray Channel Analysis (Gray 채널 분석을 사용한 딥페이크 탐지 성능 비교 연구)

  • Son, Seok Bin;Jo, Hee Hyeon;Kang, Hee Yoon;Lee, Byung Gul;Lee, Youn Kyu
    • Journal of Korea Multimedia Society / v.24 no.9 / pp.1224-1241 / 2021
  • Recent developments in deep learning techniques for image generation have made it straightforward to generate sophisticated deepfakes. As a result, however, privacy violations through deepfakes have also increased. To address this issue, a number of deepfake detection techniques have been proposed, mainly focused on RGB channel-based analysis. Although existing studies have suggested the effectiveness of other color-model-based analyses (i.e., grayscale), their effectiveness has not yet been quantitatively validated. Thus, in this paper, we compare the effectiveness of grayscale channel-based analysis with RGB channel-based analysis in deepfake detection. Using selected CNN-based models and deepfake datasets, we measured the performance of each color-model-based analysis in terms of accuracy and time. The evaluation results confirm that grayscale channel-based analysis performs better than RGB channel-based analysis in several cases.

CoNSIST : Consist of New methodologies on AASIST, leveraging Squeeze-and-Excitation, Positional Encoding, and Re-formulated HS-GAL

  • Jae-Hoon Ha;Joo-Won Mun;Sang-Yup Lee
    • Annual Conference of KIPS / 2024.05a / pp.692-695 / 2024
  • With the recent advancements in artificial intelligence (AI), the performance of deep learning-based audio deepfake technology has significantly improved. This technology has been exploited for criminal activities, leading to various cases of victimization. To prevent such illicit outcomes, this paper proposes a deep learning-based audio deepfake detection model. In this study, we propose CoNSIST, an improved audio deepfake detection model that incorporates three additional components into the graph-based end-to-end model AASIST: (i) Squeeze-and-Excitation, (ii) Positional Encoding, and (iii) Re-formulated HS-GAL. These additions are expected to enable more effective feature extraction, eliminate unnecessary operations, and incorporate more diverse information, thereby improving the performance of the original AASIST. The results of multiple experiments indicate that CoNSIST enhances audio deepfake detection performance compared to existing models.
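Of the three added components, positional encoding is the easiest to sketch in isolation. Whether CoNSIST uses exactly this formulation is an assumption; the sine/cosine scheme below follows the standard Transformer recipe:

```python
import math

def positional_encoding(seq_len, d_model):
    """PE[pos][2i] = sin(pos / 10000^(2i/d)), PE[pos][2i+1] = cos(same)."""
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe

pe = positional_encoding(seq_len=4, d_model=8)
```

Adding such an encoding to frame-level features gives a graph-based model like AASIST an explicit notion of where each audio frame sits in the sequence.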

Blockchain Technology for Combating Deepfake and Protect Video/Image Integrity

  • Rashid, Md Mamunur;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • Journal of Korea Multimedia Society / v.24 no.8 / pp.1044-1058 / 2021
  • Tampered electronic content has multiplied in the last few years, thanks to the emergence of sophisticated artificial intelligence (AI) algorithms. Deepfakes (fake footage, photos, speech, and videos) are a frightening and destructive phenomenon with the capacity to distort facts and damage reputations by presenting a fake reality. Evidence of ownership or authentication of digital material is crucial for combating the influx of fabricated content we face today. Current solutions lack the capacity to track the history and provenance of digital media. Due to the rise of misrepresentation created by technologies like deepfake, detection algorithms are required to verify the integrity of digital content. Many real-world scenarios are claimed to benefit from blockchain's authentication capabilities. Despite the scattered efforts surrounding such remedies, relatively little research has been undertaken on where blockchain technology can be applied to the deepfake problem. The latest blockchain-based innovations, such as smart contracts and Hyperledger Fabric, can play a vital role against the manipulation of digital content. The goal of this paper is to summarize and discuss ongoing research on blockchain's capabilities to protect digital content authentication. We also suggest a blockchain (smart contract)-based framework that can preserve the integrity of original content and thus prevent deepfakes. This study also discusses how blockchain technology can be used more effectively in deepfake prevention and highlights the current state of deepfake video detection research, including the generation process, various detection algorithms, and existing benchmarks.
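The integrity idea behind such a framework can be sketched as: register a cryptographic hash of the content at publication time, then verify later copies against it. In a real deployment the hash would be anchored in a smart contract on a ledger; the in-memory `registry` dict here is only a stand-in for that:

```python
import hashlib

registry = {}  # content_id -> SHA-256 hex digest (stand-in for a ledger)

def register(content_id, data: bytes):
    """Record the fingerprint of the original content."""
    registry[content_id] = hashlib.sha256(data).hexdigest()

def verify(content_id, data: bytes) -> bool:
    """True only if the data matches the registered original."""
    return registry.get(content_id) == hashlib.sha256(data).hexdigest()

register("video-001", b"original frames")
authentic = verify("video-001", b"original frames")
forged = verify("video-001", b"deepfaked frames")
```

Because any single-bit change to the content changes the digest, a verifier can detect tampering without ever seeing the original file, only its registered hash.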

Deepfake Detection using Supervised Temporal Feature Extraction model and LSTM (지도 학습한 시계열적 특징 추출 모델과 LSTM을 활용한 딥페이크 판별 방법)

  • Lee, Chunghwan;Kim, Jaihoon;Yoon, Kijung
    • Proceedings of the Korean Society of Broadcast Engineers Conference / fall / pp.91-94 / 2021
  • As deep learning technology has developed, realistic fake videos synthesized by deep learning models, called "Deepfake" videos, have become even more difficult to distinguish from original videos. As fake news and Deepfake blackmail cause confusion and serious problems, this paper proposes a novel model for detecting Deepfake videos. We chose a Residual Convolutional Neural Network (ResNet50) as the feature extraction model and Long Short-Term Memory (LSTM), a form of Recurrent Neural Network (RNN), as the classification model. We adopted cosine similarity with hinge loss to train our extraction model to embed the features of Deepfake and original videos. The results in this paper demonstrate that temporal features in videos are essential for detecting Deepfake videos.


On the Performance Biases Arising from Inconsistencies in Evaluation Methodologies of Deepfake Detection Models (딥페이크 탐지 모델의 검증 방법론 불일치에 따른 성능 편향 분석 연구)

  • Hyunjoon Kim;Hong Eun Ahn;Leo Hyun Park;Taekyoung Kwon
    • Journal of the Korea Institute of Information Security & Cryptology / v.34 no.5 / pp.885-893 / 2024
  • As deepfake technology advances, its increasing misuse has spurred extensive research into detection models. In existing studies, the performance evaluation of these models, including the selection of training and test datasets, data preprocessing, and data augmentation, often relies on arbitrarily chosen validation methodologies, which introduces biases. This paper reviews these methodologies to pinpoint what diminishes evaluation reliability. Experiments in standardized environments reveal the difficulty of comparing performance in absolute terms. The findings highlight the need for a consistent validation methodology to improve evaluation reliability and enable fair comparisons.

A Method of Detection of Deepfake Using Bidirectional Convolutional LSTM (Bidirectional Convolutional LSTM을 이용한 Deepfake 탐지 방법)

  • Lee, Dae-hyeon;Moon, Jong-sub
    • Journal of the Korea Institute of Information Security & Cryptology / v.30 no.6 / pp.1053-1065 / 2020
  • With the recent development of hardware performance and artificial intelligence technology, sophisticated fake videos that are difficult to distinguish with the human eye are increasing. Face synthesis technology using artificial intelligence is called Deepfake, and anyone with modest programming skill and deep learning knowledge can produce sophisticated fake videos with it. The number of indiscriminate fake videos has increased significantly, which may lead to problems such as privacy violations, fake news, and fraud. It is therefore necessary to detect fake video clips that cannot be discriminated by the human eye. Thus, in this paper, we propose a Deepfake detection model that applies a Bidirectional Convolutional LSTM and an Attention Module. Unlike an LSTM, which considers only the forward sequence, the proposed model also processes the frames in reverse order. The Attention Module is combined with a convolutional neural network to exploit the characteristics of each frame during feature extraction. Experiments show that the proposed model achieves 93.5% accuracy and an AUC up to 50% higher than the results of pre-existing studies.

Improving the Robustness of Deepfake Detection Models Against Adversarial Attacks (적대적 공격에 따른 딥페이크 탐지 모델 강화)

  • Lee, Sangyeong;Hou, Jong-Uk
    • Annual Conference of KIPS / 2022.11a / pp.724-726 / 2022
  • Digital crimes involving deepfakes are becoming ever more sophisticated and are causing major social repercussions. Meanwhile, the emergence of adversarial attacks, which induce errors in deep learning-based models, has increased the vulnerability of deepfake detection models, with potentially critical consequences. In this study, we aim to build a robust model that is unaffected by adversarial attacks, using two approaches: adversarial training, a model-hardening technique, and image-processing-based defenses, namely resizing and JPEG compression. We demonstrate the resulting robustness against adversarial attacks.
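The image-processing defenses named in this entry can be sketched as a destructive down/up resize intended to wash out pixel-level adversarial noise. Nearest-neighbour resampling stands in for the actual resizing; JPEG compression would need an image library and is omitted here:

```python
def resize_nn(image, new_h, new_w):
    """Nearest-neighbour resize of a 2-D list of pixel values."""
    h, w = len(image), len(image[0])
    return [[image[y * h // new_h][x * w // new_w] for x in range(new_w)]
            for y in range(new_h)]

def defend(image, factor=2):
    """Downscale then upscale, discarding fine-grained perturbations."""
    h, w = len(image), len(image[0])
    small = resize_nn(image, h // factor, w // factor)
    return resize_nn(small, h, w)

img = [[0, 1, 0, 1],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [1, 0, 1, 0]]
cleaned = defend(img)   # the high-frequency checkerboard pattern is flattened
```

The intuition is that adversarial perturbations are typically high-frequency, so a lossy transform that preserves the face but not the noise blunts the attack before the detector sees the input.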