• Title/Summary/Keyword: Binary CNN


CNN-Based Malware Detection Using Opcode Frequency-Based Image (Opcode 빈도수 기반 악성코드 이미지를 활용한 CNN 기반 악성코드 탐지 기법)

  • Ko, Seok Min;Yang, JaeHyeok;Choi, WonJun;Kim, TaeGuen
    • Journal of the Korea Institute of Information Security & Cryptology / v.32 no.5 / pp.933-943 / 2022
  • As the Internet develops and the utilization rate of computers increases, the threats posed by malware keep increasing. This creates demand for a system that automatically analyzes large amounts of malware. In this paper, an automatic malware analysis technique using a deep learning algorithm is introduced. The proposed method uses a CNN (Convolutional Neural Network) to analyze malicious features represented as images. To reflect the semantic information of malware in detection, the method generates images from the opcode frequency data of the binary rather than from its raw bytes. In experiments on a dataset of 20,000 samples, the proposed method detected malicious code with 91% accuracy.
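
As a rough illustration of the opcode-frequency image idea described above (not the authors' exact pipeline), one can count opcode occurrences, normalize them, and tile them into a small grayscale image for a CNN. The vocabulary, image size, and the `disassemble()` helper below are assumptions for the sketch.

```python
# Minimal sketch of opcode-frequency image generation (illustrative, not the paper's exact method).
import numpy as np

OPCODES = ["mov", "push", "pop", "call", "jmp", "add", "sub", "xor"]  # illustrative vocabulary

def opcode_frequency_image(opcodes, vocab=OPCODES, side=16):
    """Count opcode occurrences, normalize to [0, 255], and tile into a square image."""
    counts = np.zeros(side * side, dtype=np.float32)
    index = {op: i for i, op in enumerate(vocab)}
    for op in opcodes:
        if op in index:
            counts[index[op]] += 1
    if counts.max() > 0:
        counts = counts / counts.max() * 255.0
    return counts.reshape(side, side).astype(np.uint8)  # grayscale image fed to the CNN

# Example (hypothetical helper): image = opcode_frequency_image(disassemble("sample.exe"))
```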

Masked Face Recognition via a Combined SIFT and DLBP Features Trained in CNN Model

  • Aljarallah, Nahla Fahad;Uliyan, Diaa Mohammed
    • International Journal of Computer Science & Network Security / v.22 no.6 / pp.319-331 / 2022
  • The global COVID-19 pandemic has made the use of facial masks an important aspect of our lives. People are advised to cover their faces in public spaces to discourage illness from spreading. These face masks raise significant concerns about the accuracy of the face identification methods used to unlock phones and authenticate people at schools and offices. Many companies have already built the requisite data in-house to incorporate such a scheme, using face recognition for authentication. Unfortunately, veiled faces hinder the detection and recognition performed by these facial identity schemes and undermine the internal data collection. Biometric systems that use the face for authentication therefore have problems detecting or recognizing faces and persons. In this research, a novel model is developed to detect and recognize faces and persons for authentication, using scale-invariant features (SIFT) for the whole segmented face together with efficient local binary texture features (DLBP) in the eye region of the masked face. Fuzzy C-means is used to segment the image. These combined features are trained in a convolutional neural network (CNN) model. The main advantage of this model is that it can detect and recognize faces by assigning weights to the selected features, so as to grant or revoke permissions with high accuracy.
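
A minimal sketch of the feature-fusion step described above, assuming OpenCV and scikit-image: SIFT descriptors are taken over the segmented face and a plain LBP histogram (a stand-in for the paper's DLBP) over the un-occluded eye region, then concatenated. Region coordinates, pooling of descriptors, and the downstream CNN are all assumptions.

```python
# Rough sketch of combining SIFT (whole face) with an LBP texture histogram of the eye region.
import cv2
import numpy as np
from skimage.feature import local_binary_pattern

def masked_face_features(gray_face, eye_region):
    """gray_face: uint8 grayscale face image; eye_region: (y0, y1, x0, x1) of the eyes."""
    sift = cv2.SIFT_create()
    _, sift_desc = sift.detectAndCompute(gray_face, None)
    sift_vec = sift_desc.mean(axis=0) if sift_desc is not None else np.zeros(128)

    y0, y1, x0, x1 = eye_region
    lbp = local_binary_pattern(gray_face[y0:y1, x0:x1], P=8, R=1, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)

    return np.concatenate([sift_vec, hist])  # fused vector passed on to the CNN classifier
```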

Prediction and factors of Seoul apartment price using convolutional neural networks (CNN 모형을 이용한 서울 아파트 가격 예측과 그 요인)

  • Lee, Hyunjae;Son, Donghui;Kim, Sujin;Oh, Sein;Kim, Jaejik
    • The Korean Journal of Applied Statistics / v.33 no.5 / pp.603-614 / 2020
  • This study focuses on the prediction and the factors of apartment prices in Seoul using a convolutional neural network (CNN) model, which has shown excellent performance as a predictive model for image data. To do this, we consider natural environmental factors, infrastructure factors, and socioeconomic factors of the apartments as input variables of the CNN model. The natural environmental factors include rivers, green areas, and the altitudes of apartments. The infrastructure factors include bus stops, subway stations, commercial districts, and schools, and the socioeconomic factors include the number of jobs, crime rates, etc. We predict apartment prices and interpret the factors behind the prices by converting the values of these input variables so that they play the same role as pixel values of image channels in the CNN input layer. In addition, the CNN model used in this study takes into account the spatial characteristics of each apartment by describing the natural environmental and infrastructure variables as binary images centered on each apartment in each input layer.
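
A small sketch of how such binary image channels centered on an apartment could be built; the grid size, radius, and coordinate lists are illustrative assumptions, not the study's actual settings.

```python
# Sketch: rasterize location factors into binary image channels centered on an apartment.
import numpy as np

def binary_channel(center, points, grid=32, radius_m=1000.0):
    """1 where a facility (e.g., a bus stop) falls in a grid cell around the apartment."""
    img = np.zeros((grid, grid), dtype=np.float32)
    cx, cy = center
    for (x, y) in points:
        u = int((x - cx + radius_m) / (2 * radius_m) * grid)
        v = int((y - cy + radius_m) / (2 * radius_m) * grid)
        if 0 <= u < grid and 0 <= v < grid:
            img[v, u] = 1.0
    return img

# One channel per factor (rivers, subway stations, schools, ...) stacked as the CNN input:
# x = np.stack([binary_channel(apt, subway_stops), binary_channel(apt, schools)], axis=-1)
```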

A Study on Classification of CNN-based Linux Malware using Image Processing Techniques (영상처리기법을 이용한 CNN 기반 리눅스 악성코드 분류 연구)

  • Kim, Se-Jin;Kim, Do-Yeon;Lee, Hoo-Ki;Lee, Tae-Jin
    • Journal of the Korea Academia-Industrial cooperation Society / v.21 no.9 / pp.634-642 / 2020
  • With the proliferation of Internet of Things (IoT) devices, the use of the Linux operating system on various architectures has increased. Security threats against Linux-based IoT devices are also increasing, and malware variants based on existing malware constantly appear. In this paper, we propose a system in which the binary data of a visualized Executable and Linkable Format (ELF) file is processed with the Local Binary Pattern (LBP) image processing technique and a median filter, and the results are classified as malware with a Convolutional Neural Network (CNN). The original image yielded the highest accuracy and F1-score, 98.77%, and the highest recall, 98.55%. The median filter achieved the highest precision, 99.19%, and the lowest false-positive rate, 0.008%. The LBP technique produced overall results lower than passing the original ELF file through the median filter. When the results of the original file processed with the image processing techniques were combined by majority vote, the accuracy, precision, F1-score, and false-positive rate were better than those of the original file processed with the median filter alone. In the future, the proposed system will be used to classify malware families, or other image processing techniques will be added to improve the accuracy of the majority-vote classification.
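
A sketch of the visualization and preprocessing step described above, assuming SciPy and scikit-image; the image width, file name, and CNN itself are placeholders rather than the authors' exact settings.

```python
# Sketch: ELF bytes -> grayscale image, plus median-filtered and LBP variants for the CNN.
import numpy as np
from scipy.ndimage import median_filter
from skimage.feature import local_binary_pattern

def elf_to_image(path, width=256):
    """Interpret the raw bytes of an ELF file as rows of a grayscale image."""
    data = np.frombuffer(open(path, "rb").read(), dtype=np.uint8)
    height = len(data) // width
    return data[: height * width].reshape(height, width)

img = elf_to_image("sample.elf")                                   # original byte image
img_median = median_filter(img, size=3)                            # median-filtered variant
img_lbp = local_binary_pattern(img, P=8, R=1, method="uniform")    # LBP variant
# Each variant is classified by the CNN; a majority vote over the three
# predictions gives the final label, as described in the abstract above.
```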

Identification of Steganographic Methods Using a Hierarchical CNN Structure (계층적 CNN 구조를 이용한 스테가노그래피 식별)

  • Kang, Sanghoon;Park, Hanhoon;Park, Jong-Il;Kim, Sanhae
    • Journal of the Institute of Convergence Signal Processing / v.20 no.4 / pp.205-211 / 2019
  • Steganalysis is a technique that aims to detect and recover data hidden by steganography. Steganalytic methods detect hidden data by analyzing the visual and statistical distortions caused during data embedding. To recover the hidden data, however, they need to know which steganographic method the data was embedded with. Therefore, we propose a hierarchical convolutional neural network (CNN) structure that identifies the steganographic method applied to an input image through multi-level classification. We trained four base CNNs (each a binary classifier that determines either whether a steganographic method has been applied to an input image, or which of two different steganographic methods has been applied) and connected them hierarchically. Experimental results demonstrate that the proposed hierarchical CNN structure can identify four different steganographic methods (LSB, PVD, WOW, and UNIWARD) with an accuracy of 79%.
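
One way the hierarchical decision could be wired up, sketched below with the four binary CNNs passed in as callables; the particular grouping of methods at the second level is my assumption, not necessarily the paper's.

```python
# Sketch: route an image through four binary classifiers arranged as a small decision tree.
def identify_method(image, cnn_stego, cnn_group, cnn_lsb_vs_pvd, cnn_wow_vs_uniward):
    """Each classifier is a callable returning P(second class) in [0, 1]."""
    if cnn_stego(image) < 0.5:                   # level 1: clean vs. stego
        return "clean"
    if cnn_group(image) < 0.5:                   # level 2: {LSB, PVD} vs. {WOW, UNIWARD} (assumed split)
        return "LSB" if cnn_lsb_vs_pvd(image) < 0.5 else "PVD"
    return "WOW" if cnn_wow_vs_uniward(image) < 0.5 else "UNIWARD"
```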

Research on Chinese Microblog Sentiment Classification Based on TextCNN-BiLSTM Model

  • Haiqin Tang;Ruirui Zhang
    • Journal of Information Processing Systems / v.19 no.6 / pp.842-857 / 2023
  • Currently, most sentiment classification models on microblogging platforms analyze sentence parts of speech and emoticons without comprehending users' emotional inclinations or grasping moral nuances. This study proposes a hybrid sentiment analysis model. Given the distinct nature of microblog comments, the model employs a combined stop-word list and word2vec for word vectorization. To mitigate local information loss, a TextCNN model without pooling layers is employed for local feature extraction, while BiLSTM is used for contextual feature extraction. Microblog comment sentiments are then categorized by a classification layer. Because the output layer performs a binary classification task and BiLSTM contains numerous hidden layers, the Tanh activation function is adopted in this model. Experimental findings demonstrate that the enhanced TextCNN-BiLSTM model attains a precision of 94.75%. This represents improvements of 1.21%, 1.25%, and 1.25% in precision, recall, and F1 value, respectively, over the individual TextCNN model, and it outperforms BiLSTM by 0.78%, 0.9%, and 0.9% in precision, recall, and F1 value.
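
A minimal Keras sketch of the TextCNN-without-pooling plus BiLSTM arrangement described above; vocabulary size, sequence length, embedding dimension, and layer widths are illustrative assumptions, not the paper's values.

```python
# Sketch: TextCNN (no pooling) feeding a BiLSTM, with a binary sentiment output.
from tensorflow.keras import layers, models

def textcnn_bilstm(vocab_size=50000, seq_len=100, emb_dim=300):
    inp = layers.Input(shape=(seq_len,))
    x = layers.Embedding(vocab_size, emb_dim)(inp)                    # word2vec weights could be loaded here
    x = layers.Conv1D(128, 3, padding="same", activation="tanh")(x)   # local features, no pooling layer
    x = layers.Bidirectional(layers.LSTM(64))(x)                      # contextual features
    out = layers.Dense(1, activation="sigmoid")(x)                    # binary sentiment
    return models.Model(inp, out)

model = textcnn_bilstm()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```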

Emotion Recognition using Short-Term Multi-Physiological Signals

  • Kang, Tae-Koo
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.3 / pp.1076-1094 / 2022
  • Technology for emotion recognition is an essential part of human personality analysis. To define human personality characteristics, existing methods have relied on surveys. However, in many cases communication cannot take place without considering emotions, so emotion recognition technology is an essential element for communication and has also been adopted in many other fields. A person's emotions are revealed in various ways, typically including facial, speech, and biometric responses. Emotions can therefore be recognized from various sources, e.g., images, voice signals, and physiological signals. Physiological signals are measured with biological sensors and analyzed to identify emotions. This study employed two sensor types. First, the existing binary arousal-valence scheme was subdivided into four levels to classify emotions in more detail; based on the current High/Low classification, the model was thus extended to multiple levels. Signal characteristics were then extracted using a 1-D Convolutional Neural Network (CNN) and classified into sixteen emotions. Although CNNs are typically used to learn 2-D images, 1-D sensor data was used as the input in this paper. Finally, the proposed emotion recognition system was evaluated with measurements from actual sensors.
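
A sketch of a 1-D CNN over multi-channel physiological signals with sixteen output classes, in the spirit of the system described above; the window length, channel count, and layer sizes are assumptions for illustration.

```python
# Sketch: 1-D CNN classifying short physiological-signal windows into 16 emotion classes.
from tensorflow.keras import layers, models

def emotion_cnn(window=512, channels=2, n_classes=16):
    inp = layers.Input(shape=(window, channels))            # e.g., two biosensor channels
    x = layers.Conv1D(32, 7, activation="relu")(inp)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Conv1D(64, 5, activation="relu")(x)
    x = layers.GlobalAveragePooling1D()(x)
    out = layers.Dense(n_classes, activation="softmax")(x)  # e.g., 4 arousal x 4 valence levels
    return models.Model(inp, out)

model = emotion_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```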

Parallel-Addition Convolution Algorithm in Grayscale Image (그레이스케일 영상의 병렬가산 컨볼루션 알고리즘)

  • Choi, Jong-Ho
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.10 no.4 / pp.288-294 / 2017
  • Recently, deep learning using convolutional neural networks (CNN) has been extensively studied in image recognition. Convolution consists of addition and multiplication. Multiplication is computationally expensive in hardware implementations relative to addition, and it is also an important factor limiting chip design in embedded deep learning systems. In this paper, I propose a parallel-addition processing algorithm that converts a grayscale image into a superposition of binary images and performs convolution with addition only. Experiments verifying the feasibility of the proposed algorithm confirm that the convolution can be performed by the parallel-addition method, which is capable of reducing processing time.
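
My reading of the idea, sketched below (not the author's implementation): an 8-bit grayscale image is the superposition of eight binary bit planes, each plane is convolved by simply adding kernel entries wherever the plane is 1, and the plane results are recombined with bit shifts, so no multiplications are needed.

```python
# Sketch: addition-only convolution via bit-plane decomposition of a grayscale image.
import numpy as np

def conv_by_parallel_addition(img, kernel):
    """Assumes an 8-bit grayscale image and an integer-valued kernel; kernel flipping is omitted."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=np.int64)
    for k in range(8):                                  # one binary plane per bit of the pixel value
        plane = (img >> k) & 1
        partial = np.zeros((oh, ow), dtype=np.int64)
        for i in range(kh):
            for j in range(kw):
                mask = plane[i:i + oh, j:j + ow] == 1
                partial[mask] += kernel[i, j]           # pure additions: no multiplications
        out += partial << k                             # superpose the plane, weighted by 2**k (a shift)
    return out
```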

CNN-based In-loop Filter on TU Block (TU 블록 크기에 따른 CNN기반 인루프필터)

  • Kim, Yang-Woo;Jeong, Seyoon;Cho, Seunghyun;Lee, Yung-Lyul
    • Proceedings of the Korean Society of Broadcast Engineers Conference / 2018.11a / pp.15-17 / 2018
  • VVC (Versatile Video Coding) divides the input video into CTUs (Coding Tree Units) for coding, partitions each CTU with a QTBTT (quadtree plus binary tree and triple tree), and partitions TUs (Transform Units) in the same way. The TU therefore comes in 17 sizes: 4×4, 4×8, 4×16, 4×32, 8×4, 16×4, 32×4, 8×8, 8×16, 8×32, 16×8, 32×8, 16×16, 16×32, 32×16, 32×32, and 64×64. The existing VVC reference software, VTM, restores coding errors with an in-loop filter consisting of a deblocking filter and SAO (Sample Adaptive Offset). This paper replaces VTM's in-loop filter with a method that builds separate CNNs (Convolutional Neural Networks) for different TU sizes and restores the error, exploiting the fact that the difference (error) between the original block and the reconstructed block is statistically different depending on the TU size. To reduce the error of the reconstructed image, a DenseNet-style Dense-Block-based CNN is constructed per TU block size, and to reduce the number of hyper-parameters and the complexity, the networks share some of their weights with each other.
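
A Keras sketch of per-TU-size restoration CNNs with densely connected convolutions and partially shared weights; the layer sizes, the choice of which layers are shared, and the residual formulation are assumptions for illustration, not the paper's exact network.

```python
# Sketch: one restoration CNN per TU size, sharing their convolution layers.
from tensorflow.keras import layers, models

# Convolution layers shared by every TU-size model (a simple stand-in for partial weight sharing).
shared_convs = [layers.Conv2D(16, 3, padding="same", activation="relu") for _ in range(3)]

def tu_restoration_model(tu_h, tu_w):
    inp = layers.Input(shape=(tu_h, tu_w, 1))                 # reconstructed TU block
    feats = [inp]
    for conv in shared_convs:                                 # dense connectivity: feed all previous features
        x = conv(feats[0] if len(feats) == 1 else layers.Concatenate()(feats))
        feats.append(x)
    residual = layers.Conv2D(1, 3, padding="same")(layers.Concatenate()(feats))  # per-size private head
    return models.Model(inp, layers.Add()([inp, residual]))   # add the predicted restoration residual

# One model per TU size; only a few of the 17 sizes are shown here.
filters_by_tu = {(h, w): tu_restoration_model(h, w) for (h, w) in [(4, 4), (8, 8), (16, 16), (32, 32)]}
```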

Binary Classification of Hypertensive Retinopathy Using Deep Dense CNN Learning

  • Mostafa E.A., Ibrahim;Qaisar, Abbas
    • International Journal of Computer Science & Network Security / v.22 no.12 / pp.98-106 / 2022
  • Hypertensive retinopathy (HR) is a condition of the retina connected to high blood pressure. The severity and persistence of hypertension are directly correlated with the incidence of HR. To avoid blindness, it is essential to recognize and assess HR as soon as possible. Few computer-aided systems are currently available that can diagnose HR. Moreover, those systems focus on gathering characteristics from a variety of retinopathy-related HR lesions and categorizing them using conventional machine-learning algorithms. Consequently, significant and complicated image processing methods are necessary even for limited applications, and, as seen in recent similar systems, classification precision is likewise lacking. To address these issues, a new CAD HR-diagnosis system employing advanced Deep Dense CNN Learning (DD-CNN) technology is developed to identify HR early. The HR-diagnosis system utilizes a previously trained convolutional neural network as a feature extractor. A statistical investigation of more than 1400 retinography images is undertaken to assess the accuracy of the implemented system using several performance metrics: specificity (SP), sensitivity (SE), area under the receiver operating curve (AUC), and accuracy (ACC). On average, we achieved an SE of 97%, ACC of 98%, SP of 99%, and AUC of 0.98. These results indicate that the proposed DD-CNN classifier can be used to diagnose hypertensive retinopathy.
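
A minimal Keras sketch of the general pattern described above (a pretrained CNN reused as a frozen feature extractor with a small binary head); DenseNet121, the input size, and the dense head are stand-ins, not the authors' exact DD-CNN.

```python
# Sketch: pretrained CNN as feature extractor, followed by a binary HR / non-HR classifier.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

base = DenseNet121(include_top=False, weights="imagenet", pooling="avg",
                   input_shape=(224, 224, 3))
base.trainable = False                          # use the pretrained network purely as a feature extractor

inp = layers.Input(shape=(224, 224, 3))         # retinography image
x = base(inp)
x = layers.Dense(128, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid")(x)  # binary output: HR vs. non-HR
model = models.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
```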