• Title/Abstract/Keywords: Deepfake Detection

Search results: 13 (processing time: 0.021 s)

A Comprehensive Study on Key Components of Grayscale-based Deepfake Detection

  • Seok Bin Son;Seong Hee Park;Youn Kyu Lee
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol.18 No.8
    • /
    • pp.2230-2252
    • /
    • 2024
  • Advances in deep learning technology have enabled the generation of increasingly realistic deepfakes, which not only endanger individuals' identities but also exploit vulnerabilities in face recognition systems. The majority of existing deepfake detection methods have focused on RGB-based analysis, offering unreliable performance in terms of detection accuracy and time. To address this issue, a grayscale-based deepfake detection method was recently proposed; it significantly reduces detection time while providing accuracy comparable to RGB-based methods. Despite its effectiveness, however, the key components that directly affect the performance of grayscale-based deepfake detection have not been systematically analyzed. In this paper, we target three key components: the RGB-to-grayscale conversion method, the brightness level in grayscale, and the resolution level in grayscale. To analyze their impact on the performance of grayscale-based deepfake detection, we conducted comprehensive evaluations, including component-wise and comparative analyses on real-world datasets. For each key component, we quantitatively analyzed the performance of its variants and identified the differences between them. Moreover, we verified the effectiveness of an optimal combination of the key components by comparing it with existing deepfake detection methods.
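The first of the three key components, the RGB-to-grayscale conversion method, can be made concrete with a small sketch. The abstract does not list which conversion formulas the study compared, so the three below (luminosity, average, lightness) are common illustrative choices rather than the paper's exact set:

```python
def luminosity(r, g, b):
    # ITU-R BT.601 weighted sum, the default in many imaging libraries
    return 0.299 * r + 0.587 * g + 0.114 * b

def average(r, g, b):
    # Unweighted mean of the three channels
    return (r + g + b) / 3.0

def lightness(r, g, b):
    # Midpoint between the brightest and darkest channel
    return (max(r, g, b) + min(r, g, b)) / 2.0

pixel = (200, 100, 50)  # a warm, skin-like tone
print(round(luminosity(*pixel), 1))  # 124.2
print(round(average(*pixel), 1))     # 116.7
print(round(lightness(*pixel), 1))   # 125.0
```

The three formulas disagree noticeably on saturated tones, which is exactly why the choice of conversion method can shift downstream detection accuracy.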

Deepfake Image Detection based on Visual Saliency

  • 노하림;유제혁
    • Journal of Platform Technology
    • /
    • Vol.12 No.1
    • /
    • pp.128-140
    • /
    • 2024
  • Deepfake is a video synthesis technology that uses various artificial intelligence techniques to create fakes indistinguishable from the real thing; it is exploited for fake-news generation, fraud, and malicious impersonation, causing serious confusion for individuals and society. To prevent such social problems, methods are needed to precisely analyze and detect deepfake-generated images. In this paper, we extract and analyze saliency features from deepfake-generated fake images and from real images to detect candidate synthesized regions, then train a model focused on the extracted features to build a deepfake image detection model. The proposed saliency-based deepfake detection model can be applied uniformly to deepfake detection scenarios involving synthesized images and videos, and various comparative experiments show that the proposed method is effective.

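A minimal sketch of how a saliency map might be extracted for candidate-region detection. The abstract does not name the saliency algorithm used, so the classical spectral-residual method (Hou & Zhang, 2007) is shown here purely as an illustration:

```python
import numpy as np

def spectral_residual_saliency(gray):
    # Fourier transform of the grayscale image
    f = np.fft.fft2(gray.astype(float))
    log_amp = np.log1p(np.abs(f))
    phase = np.angle(f)
    # Local (circular) 3x3 average of the log amplitude spectrum
    avg = sum(np.roll(np.roll(log_amp, i, axis=0), j, axis=1)
              for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
    # The spectral residual keeps only the "unexpected" part of the spectrum
    residual = log_amp - avg
    sal = np.abs(np.fft.ifft2(np.exp(residual) * np.exp(1j * phase))) ** 2
    return sal / sal.max()  # normalize to [0, 1]

img = np.zeros((64, 64))
img[28:36, 28:36] = 255.0  # a single bright patch: the salient region
sal_map = spectral_residual_saliency(img)
print(sal_map.shape)  # (64, 64)
```

In a detection pipeline like the one described, such a map would be computed for real and fake images alike, and the model would learn to focus on high-saliency candidate regions.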

Cascaded-Hop For DeepFake Videos Detection

  • Zhang, Dengyong;Wu, Pengjie;Li, Feng;Zhu, Wenjie;Sheng, Victor S.
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol.16 No.5
    • /
    • pp.1671-1686
    • /
    • 2022
  • Face manipulation tools, represented by Deepfake, have threatened the security of people's biometric identity information. In particular, manipulation tools built on deep learning have brought great challenges to Deepfake detection. Many detection solutions based on traditional machine learning and advanced deep learning exist, but most of them perform poorly when evaluated on datasets of differing quality. In this paper, to construct high-quality Deepfake datasets, we provide a preprocessing method that uses an image pixel-matrix feature to eliminate similar images and a residual channel attention network (RCAN) to rescale images. We also describe a Deepfake detector named Cascaded-Hop, which is based on the PixelHop++ system and the successive subspace learning (SSL) model. Fed with the preprocessed datasets, Cascaded-Hop achieves good classification results across different manipulation types and multiple quality levels. In experiments on FaceForensics++ and Celeb-DF, the AUC (area under the curve) of our proposed method is comparable to state-of-the-art models.
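The similar-image elimination step can be sketched as a simple mean-pixel-difference filter. The actual pixel-matrix feature and threshold used in the paper are not given in the abstract, so both are illustrative assumptions:

```python
import numpy as np

def deduplicate(frames, threshold=5.0):
    """Keep a frame only if its mean absolute pixel difference to the last
    kept frame reaches the threshold; near-duplicates are dropped."""
    kept = [frames[0]]
    for f in frames[1:]:
        if np.abs(f.astype(float) - kept[-1].astype(float)).mean() >= threshold:
            kept.append(f)
    return kept

a = np.zeros((8, 8))
b = a + 1.0    # nearly identical to a
c = a + 100.0  # clearly different content
print(len(deduplicate([a, b, c])))  # 2: the near-duplicate b is dropped
```

Filtering near-duplicate frames this way keeps a video-derived dataset from being dominated by repeated, highly correlated samples.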

A Comparative Study on Deepfake Detection using Gray Channel Analysis

  • 손석빈;조희현;강희윤;이병걸;이윤규
    • 한국멀티미디어학회논문지
    • /
    • Vol.24 No.9
    • /
    • pp.1224-1241
    • /
    • 2021
  • Recent development of deep learning techniques for image generation has made it straightforward to generate sophisticated deepfakes. As a result, however, privacy violations through deepfakes have also increased. To address this issue, a number of deepfake detection techniques have been proposed, mainly focused on RGB channel-based analysis. Although existing studies have suggested the effectiveness of analysis based on other color models (i.e., grayscale), that effectiveness has not yet been quantitatively validated. Thus, in this paper, we compare the effectiveness of grayscale channel-based analysis with RGB channel-based analysis for deepfake detection. Using selected CNN-based models and deepfake datasets, we measured the performance of each color model-based analysis in terms of accuracy and time. The evaluation results confirmed that grayscale channel-based analysis outperforms RGB channel-based analysis in several cases.
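Measuring each color model on both axes the study uses, accuracy and time, can be sketched with a small harness. The model and data below are toy stand-ins, not the paper's CNNs or datasets:

```python
import time

def evaluate(model_fn, inputs, labels):
    """Return (accuracy, elapsed seconds) for one color-model pipeline,
    mirroring the two evaluation axes of the comparison."""
    start = time.perf_counter()
    preds = [model_fn(x) for x in inputs]
    elapsed = time.perf_counter() - start
    acc = sum(p == y for p, y in zip(preds, labels)) / len(labels)
    return acc, elapsed

# Toy stand-in: a "grayscale" input is a 1-channel pixel list; a real
# comparison would run the same CNN on grayscale vs. RGB tensors.
gray_model = lambda x: int(sum(x) > 0)
acc, t = evaluate(gray_model, [[1], [0], [2]], [1, 0, 1])
print(acc)  # 1.0
```

The point of the harness is that the two pipelines are timed under identical conditions, so any speedup from the single-channel input shows up directly in the elapsed figure.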

CoNSIST: Consist of New methodologies on AASIST, leveraging Squeeze-and-Excitation, Positional Encoding, and Re-formulated HS-GAL

  • Jae-Hoon Ha;Joo-Won Mun;Sang-Yup Lee
    • 한국정보처리학회:학술대회논문집
    • /
    • 한국정보처리학회 2024년도 춘계학술발표대회
    • /
    • pp.692-695
    • /
    • 2024
  • With the recent advancements in artificial intelligence (AI), the performance of deep learning-based audio deepfake technology has significantly improved. This technology has been exploited for criminal activities, leading to various cases of victimization. To prevent such illicit outcomes, this paper proposes a deep learning-based audio deepfake detection model. We propose CoNSIST, an improved audio deepfake detection model that incorporates three additional components into the graph-based end-to-end model AASIST: (i) Squeeze-and-Excitation, (ii) Positional Encoding, and (iii) Reformulated HS-GAL. These additions are expected to enable more effective feature extraction, eliminate unnecessary operations, and incorporate more diverse information, thereby improving on the original AASIST. The results of multiple experiments indicate that CoNSIST detects audio deepfakes better than existing models.
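Of the three added components, Squeeze-and-Excitation is the most self-contained. Below is a minimal NumPy sketch of an SE block over a (channels, time) feature map; the dimensions and random weights are illustrative, since CoNSIST's actual sizes are not given in the abstract:

```python
import numpy as np

rng = np.random.default_rng(0)

def squeeze_excitation(feat, w1, w2):
    """Squeeze by global average pooling over time, excite through a
    two-layer bottleneck, then rescale each channel by its learned gate."""
    s = feat.mean(axis=1)                    # squeeze: (C,)
    h = np.maximum(0.0, w1 @ s)              # bottleneck + ReLU: (C // r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ h)))   # sigmoid gate per channel: (C,)
    return feat * gate[:, None]              # channel-wise rescaling

C, r, T = 8, 4, 16                           # channels, reduction ratio, time steps
feat = rng.standard_normal((C, T))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
out = squeeze_excitation(feat, w1, w2)
print(out.shape)  # (8, 16)
```

Because the gate lies in (0, 1), the block can only attenuate channels, letting the network emphasize informative feature channels relative to the rest.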

Blockchain Technology for Combating Deepfake and Protect Video/Image Integrity

  • Rashid, Md Mamunur;Lee, Suk-Hwan;Kwon, Ki-Ryong
    • 한국멀티미디어학회논문지
    • /
    • Vol.24 No.8
    • /
    • pp.1044-1058
    • /
    • 2021
  • Tampered electronic content has multiplied in the last few years, thanks to the emergence of sophisticated artificial intelligence (AI) algorithms. Deepfakes (fake footage, photos, speech, and videos) are a frightening and destructive phenomenon with the capacity to distort facts and damage reputations by presenting a fake reality. Evidence of ownership or authentication of digital material is crucial for combating the influx of fabricated content we face today, and current solutions lack the capacity to track the history and provenance of digital media. Due to the rise of misrepresentation created by technologies like deepfake, detection algorithms are required to verify the integrity of digital content. Many real-world scenarios have been claimed to benefit from blockchain's authentication capabilities, yet despite the scattered efforts surrounding such remedies, relatively little research has investigated where blockchain technology can be used to tackle the deepfake problem. Recent blockchain-based innovations such as smart contracts and Hyperledger Fabric can play a vital role against the manipulation of digital content. The goal of this paper is to summarize and discuss ongoing research on blockchain's capabilities to protect digital content authentication. We also suggest a blockchain (smart contract) based framework that can preserve the integrity of original content and thus prevent deepfakes. This study further discusses how blockchain technology can be used more effectively in deepfake prevention and highlights the current state of deepfake video detection research, including the generation process, various detection algorithms, and existing benchmarks.
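The integrity-preserving idea reduces to anchoring a content digest that any later tampering invalidates. A minimal sketch using SHA-256, where the "published" digest stands in for what a smart contract would record on-chain at publication time:

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """SHA-256 digest of a media file's bytes. In a framework like the one
    proposed, this digest would be stored immutably (e.g., by a smart
    contract) so later copies can be verified against it."""
    return hashlib.sha256(content).hexdigest()

original = b"frame data of the authentic video"
published = fingerprint(original)  # recorded at publication time

tampered = b"frame data of the deepfaked video"
print(fingerprint(tampered) == published)  # False: integrity check fails
```

The blockchain contributes the part a plain hash cannot: an immutable, timestamped record of the original digest, so provenance disputes reduce to a digest comparison.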

Deepfake Detection using Supervised Temporal Feature Extraction Model and LSTM

  • 이정환;김재훈;윤기중
    • 한국방송∙미디어공학회:학술대회논문집
    • /
    • 한국방송∙미디어공학회 2021년도 추계학술대회
    • /
    • pp.91-94
    • /
    • 2021
  • As deep learning technology has developed, realistic fake videos synthesized by deep learning models, known as "Deepfake" videos, have become even more difficult to distinguish from original videos. With fake news and Deepfake blackmail causing confusion and serious problems, this paper proposes a novel model for detecting Deepfake videos. We chose a residual convolutional neural network (ResNet50) as the feature extraction model and Long Short-Term Memory (LSTM), a form of recurrent neural network (RNN), as the classification model. We adopted cosine similarity with a hinge loss to train the extraction model to embed the features of Deepfake and original videos. Our results demonstrate that temporal features in videos are essential for detecting Deepfake videos.

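The cosine-similarity hinge loss used to train the extraction model can be sketched in a few lines. The margin value of 0.5 is an assumption, as the abstract does not state it:

```python
import numpy as np

def cosine_hinge_loss(a, b, same_label, margin=0.5):
    """Pull embeddings of matching pairs together (loss = 1 - cos) and push
    non-matching pairs below the margin (loss = max(0, cos - margin))."""
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    if same_label:
        return 1.0 - cos
    return max(0.0, cos - margin)

a = np.array([1.0, 0.0])
b = np.array([1.0, 0.0])
c = np.array([0.0, 1.0])
print(cosine_hinge_loss(a, b, True))   # 0.0: identical embeddings, no loss
print(cosine_hinge_loss(a, c, False))  # 0.0: orthogonal pair is below margin
```

Trained this way, the ResNet50 extractor maps real and fake frames to well-separated regions of the embedding space before the LSTM ever sees the sequence.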

On the Performance Biases Arising from Inconsistencies in Evaluation Methodologies of Deepfake Detection Models

  • 김현준;안홍은;박래현;권태경
    • 정보보호학회논문지
    • /
    • Vol.34 No.5
    • /
    • pp.885-893
    • /
    • 2024
  • As generative AI grows more sophisticated, research on deepfake detection models is actively progressing to counter the increasingly frequent malicious use of deepfakes. Performance evaluation of a deepfake detection model proceeds sequentially through training-dataset selection, dataset preprocessing, training method, and evaluation-dataset selection. However, existing deepfake detection studies choose their evaluation methodology arbitrarily at each step, producing a performance bias in which results reported in papers are not reproduced in a standardized environment. In this paper, we analyze the evaluation methodologies of existing deepfake detection studies to identify the causes of reduced reliability in performance evaluation. Furthermore, through experiments in a standardized environment, we show that absolute performance comparison between detection models is difficult. Our experimental results indicate that a unified evaluation methodology is needed to improve the reliability of detection-performance evaluation and to enable absolute performance comparison.
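The four evaluation choices the paper identifies (training dataset, preprocessing, training method, evaluation dataset) can be pinned down explicitly, so two reported scores are compared only when every choice matches. The field values below are hypothetical examples:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvalProtocol:
    """One fully specified evaluation methodology; scores from two runs are
    comparable only if their protocols are equal."""
    train_set: str
    preprocessing: str
    training: str
    test_set: str

p1 = EvalProtocol("FaceForensics++", "face-crop 224x224", "fine-tune", "Celeb-DF")
p2 = EvalProtocol("FaceForensics++", "full-frame", "fine-tune", "Celeb-DF")
print(p1 == p2)  # False: differing preprocessing makes the scores incomparable
```

Making the protocol an explicit, comparable object is exactly the kind of standardization the paper argues is missing from current deepfake detection literature.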

A Method of Detection of Deepfake Using Bidirectional Convolutional LSTM

  • 이대현;문종섭
    • 정보보호학회논문지
    • /
    • Vol.30 No.6
    • /
    • pp.1053-1065
    • /
    • 2020
  • As hardware performance and artificial intelligence technology have advanced, sophisticated fake videos that are difficult to distinguish with the naked eye are on the rise. Face-synthesis technology using artificial intelligence is called deepfake, and anyone with modest programming skill and deep learning knowledge can use it to produce sophisticated fake videos. Indiscriminate fake videos have consequently increased sharply, which can lead to privacy violations, fake news, and fraud. A method is therefore needed to detect fake videos whose authenticity even the human eye cannot judge. This paper proposes a deepfake detection model that applies a Bidirectional Convolutional LSTM and an attention module. In the proposed model, the attention module is used together with a convolutional neural network to extract the features of each frame, and unlike previously proposed LSTMs that consider only the forward direction of time, the model also learns from the reverse temporal direction. Experiments show that the proposed model achieves 93.5% accuracy and an AUC up to about 50% higher than the results of previous work.
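The bidirectional idea, processing the frame sequence in both temporal directions and combining the two context states, can be sketched with a toy recurrence standing in for the ConvLSTM cell; the cell and feature sizes below are illustrative:

```python
import numpy as np

def bidirectional_pass(frames, step):
    """Run a recurrent step over frame features forward and backward in
    time and concatenate the two final states, mirroring how a
    bidirectional (Conv)LSTM also uses reversed-time context."""
    def run(seq):
        h = np.zeros_like(seq[0])
        for x in seq:
            h = step(h, x)
        return h
    fwd = run(frames)
    bwd = run(frames[::-1])
    return np.concatenate([fwd, bwd])

frames = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
step = lambda h, x: 0.5 * h + x  # toy recurrence in place of an LSTM cell
print(bidirectional_pass(frames, step))  # [3.5 5.  2.5 4. ]
```

The forward and backward states differ whenever the sequence is not time-symmetric, which is precisely the extra signal a forward-only LSTM discards.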

Improving the Robustness of Deepfake Detection Models Against Adversarial Attacks

  • 이상영;허종욱
    • 한국정보처리학회:학술대회논문집
    • /
    • 한국정보처리학회 2022년도 추계학술발표대회
    • /
    • pp.724-726
    • /
    • 2022
  • Digital crime driven by deepfakes is becoming ever more sophisticated, creating major social repercussions. With the emergence of adversarial attacks that induce errors in deep learning-based models, deepfake detection models are growing more vulnerable, which can have critical consequences. This study aims to build a model that is robust to adversarial attacks using two approaches: adversarial training, a model-hardening technique, and image-processing-based defenses, namely resizing and JPEG compression, through which we demonstrate robustness against adversarial attacks.
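The image-processing defenses work by destroying the high-frequency structure that adversarial perturbations rely on. Below is a minimal sketch of the resizing defense as down-then-up sampling, with synthetic noise standing in for an actual adversarial perturbation:

```python
import numpy as np

rng = np.random.default_rng(1)

def resize_defense(img):
    """Down-then-up resizing: 2x2 average pooling followed by nearest-
    neighbor upsampling, a simplified stand-in for the resizing / JPEG
    preprocessing defenses evaluated in the study."""
    h, w = img.shape
    pooled = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.repeat(np.repeat(pooled, 2, axis=0), 2, axis=1)

clean = np.full((8, 8), 100.0)
perturbed = clean + rng.choice([-8.0, 8.0], size=(8, 8))  # high-frequency noise
defended = resize_defense(perturbed)

err_before = np.abs(perturbed - clean).mean()
err_after = np.abs(defended - clean).mean()
print(err_after < err_before)  # True: averaging attenuates the perturbation
```

JPEG compression plays the same role through quantization of high-frequency DCT coefficients; both transforms sacrifice a little fidelity to strip out the attack signal before the detector sees the image.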