• Title/Summary/Keyword: ResNet50V2


Violent crowd flow detection from surveillance cameras using deep transfer learning-gated recurrent unit

  • Elly Matul Imah;Riskyana Dewi Intan Puspitasari
    • ETRI Journal / v.46 no.4 / pp.671-682 / 2024
  • Violence can be committed anywhere, even in crowded places. It is hence necessary to monitor human activities for public safety. Surveillance cameras can monitor surrounding activities but require human assistance to continuously monitor every incident. Automatic violence detection is needed for early warning and fast response. However, such automation is still challenging because of low video resolution and blind spots. This paper uses ResNet50V2 and the gated recurrent unit (GRU) algorithm to detect violence in the Movies, Hockey, and Crowd video datasets. Spatial features were extracted from each frame sequence of a video using a pretrained ResNet50V2 model and were then classified using the best trained model on the GRU architecture. The experimental results were then compared with wavelet feature extraction methods and with classification models such as the convolutional neural network and long short-term memory. The results show that the proposed combination of ResNet50V2 and GRU is robust and delivers the best performance in terms of accuracy, recall, precision, and F1-score. The use of ResNet50V2 for feature extraction can improve model performance.
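
As a rough illustration of the pipeline this abstract describes (not the authors' code), the sketch below extracts per-frame spatial features with a frozen, ImageNet-pretrained ResNet50V2 and classifies the frame sequence with a GRU; the clip length, image size, and layer sizes are illustrative assumptions.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50V2

NUM_FRAMES, IMG_SIZE = 30, 224        # assumed clip length and frame size

# Frozen ImageNet-pretrained backbone used purely as a spatial feature extractor.
backbone = ResNet50V2(include_top=False, weights="imagenet",
                      pooling="avg", input_shape=(IMG_SIZE, IMG_SIZE, 3))
backbone.trainable = False

clip = layers.Input(shape=(NUM_FRAMES, IMG_SIZE, IMG_SIZE, 3))
features = layers.TimeDistributed(backbone)(clip)   # one 2048-d vector per frame
x = layers.GRU(128)(features)                        # temporal modeling of the clip
output = layers.Dense(1, activation="sigmoid")(x)    # violent vs. non-violent

model = models.Model(clip, output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```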

Performance Comparison of CNN-Based Image Classification Models for Drone Identification System (드론 식별 시스템을 위한 합성곱 신경망 기반 이미지 분류 모델 성능 비교)

  • YeongWan Kim;DaeKyun Cho;GunWoo Park
    • The Journal of the Convergence on Culture Technology / v.10 no.4 / pp.639-644 / 2024
  • Recent developments in the use of drones on battlefields, extending beyond reconnaissance to firepower support, have greatly increased the importance of technologies for early automatic drone identification. In this study, to identify an effective image classification model that can distinguish drones from other aerial targets of similar size and appearance, such as birds and balloons, we utilized a dataset of 3,600 images collected from the internet. We adopted a transfer learning approach that combines the feature extraction capabilities of three pre-trained convolutional neural network models (VGG16, ResNet50, InceptionV3) with an additional classifier. Specifically, we conducted a comparative analysis of the performance of these three pre-trained models to determine the most effective one. The results showed that the InceptionV3 model achieved the highest accuracy at 99.66%. This research represents a new endeavor in utilizing existing convolutional neural network models and transfer learning for drone identification, which is expected to make a significant contribution to the advancement of drone identification technologies.
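
To make the transfer-learning setup concrete, here is a hedged Keras sketch of a frozen pretrained backbone with an added classifier head, built once per candidate model; the class names, image size, and head layout are assumptions rather than the authors' exact configuration.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16, ResNet50, InceptionV3

CLASSES = ["drone", "bird", "balloon"]       # assumed aerial-target classes

def build_classifier(backbone_cls, img_size=224):
    """Frozen pretrained backbone plus a small trainable classifier head."""
    base = backbone_cls(include_top=False, weights="imagenet",
                        input_shape=(img_size, img_size, 3))
    base.trainable = False                    # feature extraction only
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(256, activation="relu")(x)
    out = layers.Dense(len(CLASSES), activation="softmax")(x)
    model = models.Model(base.input, out)
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model

# One model per backbone; after training on the image dataset, their
# validation accuracies would be compared to pick the best backbone.
candidates = {cls.__name__: build_classifier(cls)
              for cls in (VGG16, ResNet50, InceptionV3)}
```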

Image-Based Machine Learning Model for Malware Detection on LLVM IR (LLVM IR 대상 악성코드 탐지를 위한 이미지 기반 머신러닝 모델)

  • Kyung-bin Park;Yo-seob Yoon;Baasantogtokh Duulga;Kang-bin Yim
    • Journal of the Korea Institute of Information Security & Cryptology / v.34 no.1 / pp.31-40 / 2024
  • Recently, static analysis-based signature and pattern detection technologies have shown limitations as IT technologies have advanced; they also face compatibility problems across multiple architectures and problems inherent to signature and pattern matching. Malicious code uses obfuscation and packing techniques to hide its identity, and it evades existing static analysis-based signature and pattern detection through techniques such as code rearrangement, register modification, and the addition of branching statements. In this paper, we propose an automated, LLVM IR image-based static analysis technology for malicious code that uses machine learning to address these problems. Whether a binary is obfuscated or packed, it is decompiled into LLVM IR, an intermediate representation dedicated to static analysis and optimization. The LLVM IR code is then converted into an image and fed to ResNet50V2, a CNN-based transfer learning model supported by Keras. As a result, we present a model for image-based detection of malicious code.
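
The abstract describes converting LLVM IR into an image and classifying it with a Keras ResNet50V2 transfer-learning model. The following sketch is one possible, simplified realization; the byte-to-pixel mapping, image size, and binary (benign vs. malicious) head are assumptions, since the paper's exact conversion is not given here.

```python
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50V2

IMG = 224

def ir_to_image(ir_path: str) -> np.ndarray:
    """Map the raw bytes of an LLVM IR file onto a fixed 224x224 pixel grid."""
    data = np.frombuffer(open(ir_path, "rb").read(), dtype=np.uint8)
    data = np.resize(data, IMG * IMG).reshape(IMG, IMG)    # cycle/truncate bytes
    return np.repeat(data[..., None], 3, axis=-1) / 255.0  # 3-channel float image

base = ResNet50V2(include_top=False, weights="imagenet",
                  pooling="avg", input_shape=(IMG, IMG, 3))
base.trainable = False                        # transfer learning: frozen features
model = models.Sequential([
    base,
    layers.Dense(1, activation="sigmoid"),    # benign vs. malicious
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```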

SVM on Top of Deep Networks for Covid-19 Detection from Chest X-ray Images

  • Do, Thanh-Nghi;Le, Van-Thanh;Doan, Thi-Huong
    • Journal of information and communication convergence engineering / v.20 no.3 / pp.219-225 / 2022
  • In this study, we propose training a support vector machine (SVM) model on top of deep networks for detecting Covid-19 from chest X-ray images. We started by gathering a real chest X-ray image dataset, including positive Covid-19 cases, normal cases, and other lung diseases not caused by Covid-19. Instead of training deep networks from scratch, we fine-tuned recent pre-trained deep network models, such as DenseNet121, MobileNet v2, Inception v3, Xception, ResNet50, VGG16, and VGG19, to classify chest X-ray images into one of three classes (Covid-19, normal, and other lung diseases). We propose training an SVM model on top of the deep networks to perform a nonlinear combination of their outputs, improving classification over any single deep network. The empirical test results on the real chest X-ray image dataset show that the deep network models, with the exception of ResNet50 at 82.44%, provide an accuracy of at least 92% on the test set. The proposed SVM on top of the deep networks achieved the highest accuracy of 96.16%.
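
A hedged sketch of the stacking idea: the three-class softmax outputs of the fine-tuned networks are concatenated and used as features for a nonlinear SVM. The RBF kernel and hyperparameters are assumptions, and the fine-tuned Keras models and preprocessed image arrays are assumed to be available.

```python
import numpy as np
from sklearn.svm import SVC

def train_stacked_svm(finetuned_models, X_train, y_train):
    """Fit an RBF SVM on the concatenated 3-class softmax outputs of the
    fine-tuned Keras models (the nonlinear combination described above)."""
    feats = np.hstack([m.predict(X_train, verbose=0) for m in finetuned_models])
    svm = SVC(kernel="rbf", C=10.0, gamma="scale")   # assumed hyperparameters
    svm.fit(feats, y_train)
    return svm

def stacked_predict(svm, finetuned_models, X):
    """Predict classes for new images from the stacked deep-network outputs."""
    feats = np.hstack([m.predict(X, verbose=0) for m in finetuned_models])
    return svm.predict(feats)
```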

A Study of the Mutual Robustness between Parameters and Accuracy in CNNs and Development of an Automated Parameter Bit Operation Framework (CNN 의 파라미터와 정확도간 상호 강인성 연구 및 파라미터 비트 연산 자동화 프레임워크 개발)

  • Dong-In Lee;Jung-Heon Kim;Seung-Ho Lim
    • Proceedings of the Korea Information Processing Society Conference / 2023.05a / pp.451-452 / 2023
  • Recently, CNNs have been spreading across various industries, and research on lightweight models suitable for IoT devices and edge computing has surged. In this paper, we propose an automated framework for bit-level operations on the parameters of CNN models and experimentally study the relationship between parameter bits and model accuracy. By setting the lower n bits to 0 and thereby inducing information loss, the proposed framework enables systematic, bit-level experiments on the robustness between the parameters and accuracy of CNN models pretrained on the ImageNet dataset. Using the bit-manipulated parameters, we evaluate the accuracy of the InceptionV3, InceptionResNetV2, ResNet50, Xception, DenseNet121, MobileNetV1, and MobileNetV2 models. The experimental results show that lower-performing models exhibit higher robustness between parameters and accuracy, so fewer bits are needed to maintain their accuracy than for higher-performing models.
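
As an illustration of the bit-level operation described above (not the authors' framework), the sketch below zeroes the lowest n bits of each float32 weight of a pretrained Keras model by viewing the weights as unsigned integers; the helper name and the choice of n are assumptions.

```python
import numpy as np
from tensorflow.keras.applications import ResNet50

def zero_lower_bits(weights: np.ndarray, n: int) -> np.ndarray:
    """Clear the lowest n bits of each float32 weight (viewed as uint32)."""
    as_int = weights.astype(np.float32).view(np.uint32)
    mask = np.uint32((0xFFFFFFFF << n) & 0xFFFFFFFF)
    return (as_int & mask).view(np.float32)

# Apply the mask to an ImageNet-pretrained model and re-evaluate its accuracy.
model = ResNet50(weights="imagenet")
n_bits = 8                                      # example bit width to zero out
masked = [zero_lower_bits(w, n_bits) if w.dtype == np.float32 else w
          for w in model.get_weights()]
model.set_weights(masked)
# The accuracy drop for this bit width would then be measured on ImageNet
# validation data (evaluation loop not shown).
```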

Ca2+ Sensitivity of Anoctamin 6/TMEM16F Is Regulated by the Putative Ca2+-Binding Reservoir at the N-Terminal Domain

  • Roh, Jae Won;Hwang, Ga Eun;Kim, Woo Kyung;Nam, Joo Hyun
    • Molecules and Cells / v.44 no.2 / pp.88-100 / 2021
  • Anoctamin 6/TMEM16F (ANO6) is a dual-function protein with Ca2+-activated ion channel and Ca2+-activated phospholipid scramblase activities, requiring a high intracellular Ca2+ concentration (e.g., half-maximal effective Ca2+ concentration [EC50] of [Ca2+]i > 10 μM), and strong and sustained depolarization above 0 mV. Structural comparison with Anoctamin 1/TMEM16A (ANO1), a canonical Ca2+-activated chloride channel exhibiting higher Ca2+ sensitivity (EC50 of 1 μM) than ANO6, suggested that a homologous Ca2+-transferring site in the N-terminal domain (Nt) might be responsible for the differential Ca2+ sensitivity and kinetics of activation between ANO6 and ANO1. To elucidate the role of the putative Ca2+-transferring reservoir in the Nt (Nt-CaRes), we constructed an ANO6-1-6 chimera in which Nt-CaRes was replaced with the corresponding domain of ANO1. ANO6-1-6 showed higher sensitivity to Ca2+ than ANO6. However, neither the speed of activation nor the voltage-dependence differed between ANO6 and ANO6-1-6. Molecular dynamics simulation revealed a reduced Ca2+ interaction with Nt-CaRes in ANO6 than ANO6-1-6. Moreover, mutations on potentially Ca2+-interacting acidic amino acids in ANO6 Nt-CaRes resulted in reduced Ca2+ sensitivity, implying direct interactions of Ca2+ with these residues. Based on these results, we cautiously suggest that the net charge of Nt-CaRes is responsible for the difference in Ca2+ sensitivity between ANO1 and ANO6.

Deep Learning-Based Box Office Prediction Using the Image Characteristics of Advertising Posters in Performing Arts (공연예술에서 광고포스터의 이미지 특성을 활용한 딥러닝 기반 관객예측)

  • Cho, Yujung;Kang, Kyungpyo;Kwon, Ohbyung
    • The Journal of Society for e-Business Studies / v.26 no.2 / pp.19-43 / 2021
  • The prediction of box office performance is an important issue for performing arts institutions and the performing arts industry. For this, traditional prediction methodologies and data mining methodologies using standardized data such as cast members, performance venues, and ticket prices have been proposed. However, although it is evident that audiences form their intention to attend partly from the performance poster, few attempts have been made to predict box office performance by analyzing poster images. Hence, the purpose of this study is to propose a deep learning method that can predict box office success from performance-related poster images. Prediction was performed using deep learning algorithms such as a pure CNN, VGG-16, Inception-v3, and ResNet50, with poster images published on KOPIS as the training dataset. In addition, an ensemble with a traditional regression methodology was also attempted. As a result, the models showed high discrimination performance, exceeding 85% box office prediction accuracy. This study is the first attempt to predict box office success using image data in the performing arts field, and the proposed method can be applied to other areas of poster-based advertising, such as institutional promotions and corporate product advertisements.
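
To sketch how such a poster-based predictor and ensemble might be wired up (illustrative assumptions, not the study's exact design), the following Keras code builds a frozen-ResNet50 image model and averages its probability with that of any fitted traditional classifier on structured metadata.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

def build_poster_model(img_size=224):
    """Frozen ResNet50 backbone with a small head predicting success probability."""
    base = ResNet50(include_top=False, weights="imagenet",
                    pooling="avg", input_shape=(img_size, img_size, 3))
    base.trainable = False
    return models.Sequential([
        base,
        layers.Dense(128, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # probability of box-office success
    ])

def ensemble_predict(poster_model, meta_model, posters, metadata):
    """Average the image-based probability with that of a fitted traditional
    classifier (e.g., regression on cast, venue, and ticket-price features)."""
    p_img = poster_model.predict(posters, verbose=0).ravel()
    p_meta = meta_model.predict_proba(metadata)[:, 1]
    return (p_img + p_meta) / 2.0
```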

Explanation-focused Adaptive Multi-teacher Knowledge Distillation (다중 신경망으로부터 해석 중심의 적응적 지식 증류)

  • Chih-Yun Li;Inwhee Joe
    • Proceedings of the Korea Information Processing Society Conference / 2024.05a / pp.592-595 / 2024
  • Despite their remarkable performance, deep neural networks are criticized for operating as black boxes that provide no explanation for their predictions. This opaque representation limits reliability and hinders scientific understanding of the models. This study proposes improving interpretability through knowledge distillation from multiple teacher networks into an explanation-focused student network. Specifically, the concept sensitivity of the teacher models is quantified using directional derivatives with respect to human-defined concept activation vectors (CAVs). By weighting the fusion of teacher knowledge in proportion to the sensitivity scores for the target concepts, the distilled student model achieves good performance while focusing the network's reasoning toward interpretation. Experimentally, an ensemble of ResNet50, DenseNet201, and EfficientNetV2-S was compressed into an architecture seven times smaller while improving accuracy by 6%. This method aims to reconcile the trade-off among model capacity, predictive power, and interpretability, which will help open a future of trustworthy AI across domains ranging from mobile platforms to safety-critical applications.
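
A minimal sketch of the CAV-weighted fusion and distillation loss described above, assuming the per-teacher sensitivity scores are precomputed; the temperature, loss weighting, and function names are illustrative, not the paper's implementation.

```python
import tensorflow as tf

def fuse_teacher_targets(teacher_logits, sensitivity_scores, temperature=4.0):
    """Fuse teachers' softened distributions, weighted by CAV sensitivity."""
    w = tf.nn.softmax(tf.constant(sensitivity_scores, dtype=tf.float32))
    soft = [tf.nn.softmax(logits / temperature) for logits in teacher_logits]
    return tf.add_n([w[i] * soft[i] for i in range(len(soft))])

def distillation_loss(student_logits, fused_targets, labels,
                      temperature=4.0, alpha=0.7):
    """Blend a KL distillation term with the usual cross-entropy on labels."""
    soft_student = tf.nn.softmax(student_logits / temperature)
    kd = tf.keras.losses.kl_divergence(fused_targets, soft_student)
    ce = tf.keras.losses.sparse_categorical_crossentropy(
        labels, student_logits, from_logits=True)
    return tf.reduce_mean(alpha * (temperature ** 2) * kd + (1 - alpha) * ce)
```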

Compression and Performance Evaluation of CNN Models on Embedded Board (임베디드 보드에서의 CNN 모델 압축 및 성능 검증)

  • Moon, Hyeon-Cheol;Lee, Ho-Young;Kim, Jae-Gon
    • Journal of Broadcast Engineering / v.25 no.2 / pp.200-207 / 2020
  • Recently, deep neural networks such as CNNs have shown excellent performance in various fields such as image classification, object recognition, and visual quality enhancement. However, as the model size and computational complexity of deep learning models for most applications increase, it is hard to apply neural networks to IoT and mobile environments. Therefore, neural network compression algorithms that reduce the model size while keeping the performance have been studied. In this paper, we apply several compression methods to CNN models and evaluate their performance in an embedded environment. To evaluate the performance, the classification performance and inference time of the original and compressed CNN models on camera-input images are measured on an embedded board equipped with the QCS605, a customized AI chip. In this paper, the CNN models MobileNetV2, ResNet50, and VGG-16 are compressed by applying pruning and matrix decomposition. The experimental results show that the compressed models give not only a 1.3x to 11.2x reduction in model size with a classification performance loss of less than 2% compared to the original models, but also a 1.2x to 2.21x reduction in inference time and a 1.2x to 3.8x reduction in memory usage on the embedded board.
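
For concreteness, here is a small NumPy sketch of the two compression techniques mentioned, magnitude-based pruning and low-rank (SVD) decomposition of a dense weight matrix; the sparsity level, target rank, and example layer shape are assumptions.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights to reach the given sparsity."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def svd_decompose(weights: np.ndarray, rank: int):
    """Approximate an (m, n) weight matrix by (m, r) and (r, n) factors."""
    u, s, vt = np.linalg.svd(weights, full_matrices=False)
    return u[:, :rank] * s[:rank], vt[:rank, :]            # W ≈ A @ B

# Example: prune a 2048x1000 classifier weight to 80% sparsity, or replace it
# with a rank-64 factorization (2048x64 + 64x1000 parameters).
w = np.random.randn(2048, 1000).astype(np.float32)
pruned = magnitude_prune(w, sparsity=0.8)
a, b = svd_decompose(w, rank=64)
print(w.size, int(np.count_nonzero(pruned)), a.size + b.size)
```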

Transfer Learning for Caladium bicolor Classification: Proof of Concept to Application Development

  • Porawat Visutsak;Xiabi Liu;Keun Ho Ryu;Naphat Bussabong;Nicha Sirikong;Preeyaphorn Intamong;Warakorn Sonnui;Siriwan Boonkerd;Jirawat Thongpiem;Maythar Poonpanit;Akarasate Homwiseswongsa;Kittipot Hirunwannapong;Chaimongkol Suksomsong;Rittikait Budrit
    • KSII Transactions on Internet and Information Systems (TIIS) / v.18 no.1 / pp.126-146 / 2024
  • Caladium bicolor is one of the most popular plants in Thailand. The original species of Caladium bicolor was found a hundred years ago, and more than 500 varieties have since been produced through propagation. Caladium bicolor can be classified by its color and shape. This study aims to develop a model to classify Caladium bicolor using a transfer learning technique. This work also presents a proof of concept, a GUI design, and a web application deployment using the user-centered design method. We also evaluated the performance of the following pre-trained models, with these results: 87.29% for AlexNet, 90.68% for GoogleNet, 93.59% for XceptionNet, 93.22% for MobileNetV2, 89.83% for ResNet18, 88.98% for ResNet50, 97.46% for ResNet101, and 94.92% for InceptionResNetV2. This work was implemented using MATLAB R2023a.
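
The study itself was implemented in MATLAB R2023a; purely as an illustration, the Python/Keras sketch below shows an analogous transfer-learning comparison over several pretrained backbones on a labeled image dataset. The fine-tuning recipe, image size, and dataset handling are assumptions.

```python
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2, ResNet101, InceptionResNetV2

def finetune(backbone_cls, train_ds, val_ds, num_classes, img_size=224, epochs=10):
    """Train a softmax head on a frozen ImageNet-pretrained backbone and
    return its validation accuracy."""
    base = backbone_cls(include_top=False, weights="imagenet",
                        pooling="avg", input_shape=(img_size, img_size, 3))
    base.trainable = False
    model = models.Sequential([
        base,
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model.fit(train_ds, validation_data=val_ds, epochs=epochs)
    return model.evaluate(val_ds)[1]

# Hypothetical comparison loop over a few of the backbones reported above,
# with train_ds / val_ds built from a folder-per-class Caladium image set
# (e.g., via tf.keras.utils.image_dataset_from_directory):
# for cls in (MobileNetV2, ResNet101, InceptionResNetV2):
#     print(cls.__name__, finetune(cls, train_ds, val_ds, num_classes=5))
```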