• Title/Summary/Keyword: Deep convolutional neural networks

Efficient Large Dataset Construction using Image Smoothing and Image Size Reduction

  • Jaemin HWANG;Sac LEE;Hyunwoo LEE;Seyun PARK;Jiyoung LIM
    • Korean Journal of Artificial Intelligence / v.11 no.1 / pp.17-24 / 2023
  • With the continuous growth in the amount of data collected and analyzed, deep learning has become increasingly popular for extracting meaningful insights in various fields. However, hardware limitations pose a challenge to achieving meaningful results with limited data. To address this challenge, this paper proposes an algorithm that leverages the characteristics of convolutional neural networks (CNNs) to reduce the size of an image dataset by 20% by smoothing the images and shrinking them based on their color elements. The proposed algorithm reduces learning time and, as a result, the computational load on hardware. The experiments conducted in this study show that the proposed method achieves effective learning with accuracy similar to or slightly higher than that of the original dataset, while reducing computational and time costs. This color-centric dataset construction method using image smoothing can lead to more efficient learning with CNNs, can be applied to tasks such as image classification and recognition, and can contribute to more efficient and cost-effective deep learning. The paper thus presents a promising approach to reducing the computational load and time costs of deep learning while still producing meaningful results with limited data, enabling deep learning to be applied to a broader range of applications.
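
A minimal sketch of the kind of color-preserving smoothing and downscaling described above, using OpenCV; the Gaussian filter, kernel size, and scale factor are illustrative assumptions rather than the authors' exact settings.

```python
# Sketch: smooth each color image, then shrink it, to build a reduced dataset.
# The filter choice, kernel size, and scale factor are assumptions for illustration.
import os
import cv2

def smooth_and_shrink(src_path: str, dst_path: str, scale: float = 0.8) -> None:
    img = cv2.imread(src_path, cv2.IMREAD_COLOR)             # BGR color image
    if img is None:
        raise FileNotFoundError(src_path)
    smoothed = cv2.GaussianBlur(img, (5, 5), 0)              # suppress fine detail, keep color
    h, w = smoothed.shape[:2]
    small = cv2.resize(smoothed, (int(w * scale), int(h * scale)),
                       interpolation=cv2.INTER_AREA)          # shrink before training
    cv2.imwrite(dst_path, small)

# Hypothetical usage over a directory of images:
# for name in os.listdir("dataset/original"):
#     smooth_and_shrink(os.path.join("dataset/original", name),
#                       os.path.join("dataset/reduced", name))
```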

Malware Classification using Dynamic Analysis with Deep Learning

  • Asad Amin;Muhammad Nauman Durrani;Nadeem Kafi;Fahad Samad;Abdul Aziz
    • International Journal of Computer Science & Network Security / v.23 no.8 / pp.49-62 / 2023
  • There has been a rapid increase in the creation and alteration of new malware samples, which poses a huge financial risk for many organizations. There is great demand for improving the classification and detection mechanisms available today: older strategies such as classification with classical machine learning algorithms have proved useful but do not perform well in scalable, automatic feature-extraction scenarios. To overcome this, a mechanism is needed that analyzes malware automatically, based on an automatic feature-extraction process. For this purpose, dynamic analysis of real malware executables has been performed to extract useful features such as API call sequences and opcode sequences. Different hashing techniques are analyzed for converting these features into an image representation, which allows more advanced, deep learning-based classification approaches to be applied to large numbers of samples. Deep learning algorithms such as convolutional neural networks enable the classification of malware once it is converted into images: when the grayscale images are fed into a CNN, the approach copes comparatively well with dynamic changes in malware code, since such changes alter only a few pixels of the image. In this work, we used the VGG-16 CNN architecture for experimentation.
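
A minimal PyTorch/torchvision sketch of the bytes-to-grayscale-image step and a VGG-16 classifier head in the spirit of the abstract above; the 64x64 image size, class count, and byte source are assumptions, not the paper's settings.

```python
# Sketch: turn a byte sequence (e.g., an opcode or API-call stream) into a
# grayscale image and classify it with VGG-16. Image size, class count, and
# the byte source below are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import vgg16

def bytes_to_image(data: bytes, side: int = 64) -> torch.Tensor:
    buf = np.frombuffer(data, dtype=np.uint8)
    buf = np.resize(buf, side * side)                  # tile or truncate to a square
    img = buf.reshape(side, side).astype(np.float32) / 255.0
    return torch.from_numpy(img)                       # (side, side), values in [0, 1]

num_families = 9                                       # assumed number of malware families
model = vgg16(weights=None)
model.classifier[6] = nn.Linear(4096, num_families)    # replace the 1000-class head

gray = bytes_to_image(b"\x90PUSH EAX" * 500)           # hypothetical opcode stream
x = gray.expand(1, 3, -1, -1)                          # VGG expects 3 channels: replicate
logits = model(x)                                      # (1, num_families)
```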

TeGCN: Transformer-embedded Graph Neural Network for Thin-filer default prediction (TeGCN: 씬파일러 신용평가를 위한 트랜스포머 임베딩 기반 그래프 신경망 구조 개발)

  • Seongsu Kim;Junho Bae;Juhyeon Lee;Heejoo Jung;Hee-Woong Kim
    • Journal of Intelligence and Information Systems / v.29 no.3 / pp.419-437 / 2023
  • As the number of thin filers in Korea surpasses 12 million, there is growing interest in assessing their credit default risk more accurately in order to generate additional revenue. In particular, researchers are actively developing default prediction models based on machine learning and deep learning algorithms, in contrast to traditional statistical default prediction methods, which struggle to capture nonlinearity. Among these efforts, the Graph Neural Network (GNN) architecture is noteworthy for predicting default when data on thin filers are limited, because it can incorporate network information between borrowers alongside conventional credit-related data. However, prior research employing graph neural networks has had difficulty handling the diverse categorical variables present in credit information. In this study, we introduce the Transformer embedded Graph Convolutional Network (TeGCN), which aims to address these limitations and enable effective default prediction for thin filers. TeGCN combines the TabTransformer, which can extract contextual information from categorical variables, with a Graph Convolutional Network, which captures network information between borrowers. TeGCN surpasses the baseline model's performance on both the general borrower dataset and the thin filer dataset, and performs particularly well at thin filer default prediction. This study achieves high default prediction accuracy with a model structure tailored to credit information containing numerous categorical variables, especially in the context of thin filers with limited data. It can contribute to resolving the financial exclusion issues faced by thin filers and to generating additional revenue within the financial industry.
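
A plain-PyTorch sketch of the core idea above: a transformer encoding of categorical credit features combined with one graph-convolution step over a borrower network. Dimensions, layer counts, and the dense normalized adjacency are illustrative assumptions, not the TeGCN architecture itself.

```python
# Sketch: transformer-encoded categorical features propagated over a borrower
# graph with a single graph-convolution step. Dimensions, layer counts, and the
# dense adjacency matrix are illustrative assumptions, not the TeGCN spec.
import torch
import torch.nn as nn

class TransformerGCNSketch(nn.Module):
    def __init__(self, cardinalities, d_model=32):
        super().__init__()
        # one embedding table per categorical column, TabTransformer-style
        self.embeddings = nn.ModuleList(nn.Embedding(c, d_model) for c in cardinalities)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.gcn = nn.Linear(d_model, d_model)
        self.out = nn.Linear(d_model, 1)                      # default-probability logit

    def forward(self, cat_x, adj_norm):
        # cat_x: (num_borrowers, num_columns) integer codes; adj_norm: normalized adjacency
        tokens = torch.stack([emb(cat_x[:, i]) for i, emb in enumerate(self.embeddings)], dim=1)
        ctx = self.encoder(tokens).mean(dim=1)                # contextual embedding per borrower
        h = torch.relu(adj_norm @ self.gcn(ctx))              # propagate over the borrower graph
        return self.out(h).squeeze(-1)

# Hypothetical usage with random data:
cards = [5, 12, 7]                                            # cardinality of each categorical column
x = torch.stack([torch.randint(0, c, (100,)) for c in cards], dim=1)
adj = torch.eye(100)                                          # placeholder borrower network
logits = TransformerGCNSketch(cards)(x, adj)                  # (100,)
```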

A Study on Fire Detection in Ship Engine Rooms Using Convolutional Neural Network (합성곱 신경망을 이용한 선박 기관실에서의 화재 검출에 관한 연구)

  • Park, Kyung-Min;Bae, Cherl-O
    • Journal of the Korean Society of Marine Environment & Safety / v.25 no.4 / pp.476-481 / 2019
  • Early detection of fire is an important measure for minimizing the loss of life and property damage, and fire and smoke need to be detected simultaneously. In this context, numerous studies have been conducted on image-based fire detection. Conventional fire detection methods are compute-intensive and comprise several algorithms for extracting flame and smoke characteristics; deep learning algorithms and convolutional neural networks can be employed as an alternative. In this study, recorded image data of fire in a ship engine room were analyzed. Flame and smoke characteristics were extracted from outer bounding boxes, and the YOLO (You Only Look Once) convolutional neural network algorithm was then employed for training and testing. Experimental results were evaluated with respect to three attributes: detection rate, error rate, and accuracy. The detection rate, error rate, and accuracy are 0.994, 0.011, and 0.998 for flame and 0.978, 0.021, and 0.978 for smoke, respectively, with a computation time of 0.009 s.
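
The abstract above reports detection rate, error rate, and accuracy per class; one plausible way to compute such rates from detection counts is sketched below. The count-based definitions and the example numbers are assumptions, since the abstract does not spell out its formulas.

```python
# Sketch: plausible count-based definitions of detection rate, error rate, and
# accuracy for a single class such as flame. These definitions and the example
# counts are assumptions; the paper's exact formulas are not given here.
def detection_metrics(tp: int, fn: int, fp: int, tn: int):
    detection_rate = tp / (tp + fn)                      # share of real fires detected
    error_rate = fp / (tp + fp)                          # share of detections that were wrong
    accuracy = (tp + tn) / (tp + tn + fp + fn)           # overall correctness
    return detection_rate, error_rate, accuracy

# Hypothetical counts for one class:
print(detection_metrics(tp=95, fn=5, fp=2, tn=98))       # (0.95, ~0.021, 0.965)
```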

A Feasibility Study on Application of a Deep Convolutional Neural Network for Automatic Rock Type Classification (자동 암종 분류를 위한 딥러닝 영상처리 기법의 적용성 검토 연구)

  • Pham, Chuyen;Shin, Hyu-Soung
    • Tunnel and Underground Space / v.30 no.5 / pp.462-472 / 2020
  • Rock classification is a fundamental discipline for investigating the geological and geotechnical features of a site; however, it is not an easy task, because rock shape and color vary widely depending on origin, geological history, and so on. With the great success of convolutional neural networks (CNNs) in many image-based classification tasks, there has been increasing interest in using CNNs to classify geological materials. In this study, the feasibility of a deep CNN is investigated for automatically and accurately identifying rock types, focusing on conditions in which shapes and colors vary even within the same rock type. The approach could be further developed into a mobile application for assisting geologists in classifying rocks in fieldwork. The CNN model used in this study is based on a deep residual network (ResNet), an ultra-deep CNN used in object detection and classification. The proposed CNN was trained on 10 typical rock types and achieved an overall accuracy of 84% on the test set. The result demonstrates that the proposed approach not only classifies rock types from images but also represents an improvement when a highly diverse rock image dataset is taken as input.
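
A short torchvision sketch of fine-tuning a residual network for 10 rock classes, in the spirit of the ResNet-based classifier described above; the ResNet depth, ImageNet initialization, preprocessing, and optimizer settings are illustrative assumptions.

```python
# Sketch: fine-tune a torchvision ResNet for 10 rock types. The depth (ResNet-50),
# ImageNet initialization, preprocessing, and optimizer below are assumptions.
import torch
import torch.nn as nn
from torchvision import models, transforms

num_rock_types = 10
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, num_rock_types)   # replace the 1000-class head

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
# Training then iterates over (rock image, label) batches:
# loss = criterion(model(batch_images), batch_labels); loss.backward(); optimizer.step()
```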

A Deep Learning Based Recommender System Using Visual Information (시각 정보를 활용한 딥러닝 기반 추천 시스템)

  • Moon, Hyunsil;Lim, Jinhyuk;Kim, Doyeon;Cho, Yoonho
    • Knowledge Management Research / v.21 no.3 / pp.27-44 / 2020
  • To alleviate users' information overload, recommender systems infer users' preferences and suggest items that match them. Collaborative filtering (CF), the most successful recommendation algorithm, has continued to improve in performance and has been applied to various business domains. Visual information, such as book covers, can influence consumers' purchase decisions; however, CF-based recommender systems have rarely considered visual information. In this study, we propose VizNCS, a CF-based deep learning model that uses visual information as additional information. VizNCS consists of two phases. In the first phase, we build convolutional neural networks (CNNs) to extract visual features from image data. In the second phase, we feed the visual features into the NCF model, which, among deep learning-based recommendation models, is known to be easy to extend with additional information. In performance comparison experiments, VizNCS outperformed the vanilla NCF. We also conducted an additional experiment to examine whether the effect of visual information differs by product category; the results identify which categories were affected and which were not. We expect VizNCS to improve recommender system performance and to expand recommender systems' data sources to include visual information.
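
A sketch of the two-phase idea above: a CNN extracts visual features from item images, which are then concatenated with user and item embeddings in an NCF-style MLP. The ResNet-18 extractor, embedding sizes, and fusion-by-concatenation are assumptions, not the VizNCS design itself.

```python
# Sketch: CNN visual features concatenated with user/item embeddings in an
# NCF-style MLP. Extractor choice, dimensions, and fusion scheme are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class VisualNCFSketch(nn.Module):
    def __init__(self, num_users, num_items, emb_dim=32, visual_dim=512):
        super().__init__()
        cnn = models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(cnn.children())[:-1])   # drop final FC -> 512-d features
        self.user_emb = nn.Embedding(num_users, emb_dim)
        self.item_emb = nn.Embedding(num_items, emb_dim)
        self.mlp = nn.Sequential(
            nn.Linear(emb_dim * 2 + visual_dim, 128), nn.ReLU(),
            nn.Linear(128, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, user_ids, item_ids, item_images):
        visual = self.cnn(item_images).flatten(1)               # (batch, 512)
        x = torch.cat([self.user_emb(user_ids), self.item_emb(item_ids), visual], dim=1)
        return torch.sigmoid(self.mlp(x)).squeeze(-1)           # predicted preference score

# Hypothetical usage: preference scores for 4 (user, item, cover image) triples
model = VisualNCFSketch(num_users=1000, num_items=500)
scores = model(torch.randint(0, 1000, (4,)), torch.randint(0, 500, (4,)),
               torch.randn(4, 3, 224, 224))
```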

Weakly-supervised Semantic Segmentation using Exclusive Multi-Classifier Deep Learning Model (독점 멀티 분류기의 심층 학습 모델을 사용한 약지도 시맨틱 분할)

  • Choi, Hyeon-Joon;Kang, Dong-Joong
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.19 no.6 / pp.227-233 / 2019
  • With the recent development of deep learning techniques, neural networks are achieving success in computer vision. Convolutional neural networks have shown outstanding performance not only in simple image classification tasks but also in more difficult tasks such as object segmentation and detection. However, many such deep learning models rely on supervised learning, which requires annotations richer than image-level labels. In particular, semantic segmentation models require pixel-level annotations for training, which are very costly to obtain. To address this problem, this paper proposes a weakly-supervised semantic segmentation method that requires only image-level labels to train the network. Existing weakly-supervised methods tend to detect only specific parts of an object. In this paper, by contrast, we use an exclusive multi-classifier deep learning architecture so that the model recognizes more diverse parts of objects. The proposed method is evaluated on the VOC 2012 validation dataset.
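
A generic PyTorch sketch of the basic weakly-supervised ingredient: a classifier trained only with image-level labels whose class activation map localizes objects. This is not the paper's exclusive multi-classifier architecture; the backbone, class count, and shapes are assumptions.

```python
# Sketch: a classifier trained with image-level labels whose class activation
# map (CAM) yields coarse object localization, the usual starting point for
# weakly-supervised segmentation. This generic sketch is not the paper's
# exclusive multi-classifier model; backbone and class count are assumptions.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 20                                      # e.g., the VOC object categories
backbone = models.resnet18(weights=None)
features = nn.Sequential(*list(backbone.children())[:-2])   # keep the spatial feature map
classifier = nn.Conv2d(512, num_classes, kernel_size=1)     # 1x1 conv scores each class per location

img = torch.randn(1, 3, 224, 224)
fmap = features(img)                                  # (1, 512, 7, 7)
cam = classifier(fmap)                                # (1, num_classes, 7, 7) class activation maps
logits = cam.mean(dim=(2, 3))                         # image-level scores for the classification loss
seeds = cam.argmax(dim=1)                             # coarse per-location class seeds for segmentation
```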

Deep Learning-based Automatic Wrinkles Segmentation on Microscope Skin Images for Skin Diagnosis (피부진단을 위한 딥러닝 기반 피부 영상에서의 자동 주름 추출)

  • Choi, Hyeon-yeong;Ko, Jae-pil
    • Journal of Advanced Navigation Technology / v.24 no.2 / pp.148-154 / 2020
  • Wrinkles are one of the main features of skin aging. Conventional image processing-based wrinkle detection has difficulty coping effectively with diverse skin images; in particular, wrinkle extraction performance decreases significantly when wrinkles are faint and similar to the surrounding skin. In this paper, deep learning is applied to extract wrinkles from microscopic skin images. Because the microscope is generally equipped with a wide-angle lens, the boundary area of the image is dark; to address this, the brightness of the skin image is estimated and corrected. In addition, we apply a semantic segmentation network structure suitable for wrinkle extraction. The proposed method obtained an accuracy of 99.6% in test experiments on skin images collected in our laboratory.
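
A minimal OpenCV sketch of one way to estimate and correct the dark border that a wide-angle microscope lens produces, before segmentation; the blur-based illumination estimate is an assumption, not the paper's correction method.

```python
# Sketch: estimate the slowly varying illumination of a microscope image with a
# heavy blur, then divide it out, to brighten dark borders caused by the lens.
# The blur-based estimate is an assumption, not the paper's exact method.
import cv2
import numpy as np

def correct_illumination(gray: np.ndarray) -> np.ndarray:
    illum = cv2.GaussianBlur(gray.astype(np.float32), (0, 0), sigmaX=51)
    corrected = gray.astype(np.float32) / (illum + 1e-6) * illum.mean()
    return np.clip(corrected, 0, 255).astype(np.uint8)

# Hypothetical usage before segmentation:
# gray = cv2.imread("skin_patch.png", cv2.IMREAD_GRAYSCALE)
# gray = correct_illumination(gray)
```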

Deep Multi-task Network for Simultaneous Hazy Image Semantic Segmentation and Dehazing (안개영상의 의미론적 분할 및 안개제거를 위한 심층 멀티태스크 네트워크)

  • Song, Taeyong;Jang, Hyunsung;Ha, Namkoo;Yeon, Yoonmo;Kwon, Kuyong;Sohn, Kwanghoon
    • Journal of Korea Multimedia Society / v.22 no.9 / pp.1000-1010 / 2019
  • Semantic segmentation and dehazing are key tasks in computer vision. In recent years, research on both tasks has achieved substantial performance improvements with the development of convolutional neural networks (CNNs). However, most previous work on semantic segmentation assumes images captured in clear weather and shows degraded performance on hazy images with low contrast and faded colors. Dehazing, meanwhile, aims to recover a clear image from an observed hazy image, an ill-posed problem that can be alleviated with additional information about the scene. In this work, we propose a deep multi-task network for simultaneous semantic segmentation and dehazing. The proposed network takes a single hazy image as input and predicts a dense semantic segmentation map and a clear image. The visual information refined during dehazing can help the semantic segmentation task; conversely, semantic features obtained during segmentation can provide color priors for objects, which can help the dehazing process. Experimental results demonstrate the effectiveness of the proposed multi-task approach, which shows improved performance compared to separate networks.
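
A PyTorch sketch of the multi-task layout described above: a shared encoder with one head predicting a dense segmentation map and another predicting the dehazed image. Layer sizes and the simple upsampling decoders are illustrative assumptions, not the paper's architecture.

```python
# Sketch: shared encoder with a segmentation head and a dehazing head.
# Layer sizes and the simple upsampling decoders are illustrative assumptions.
import torch
import torch.nn as nn

class SegDehazeSketch(nn.Module):
    def __init__(self, num_classes=19):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Sequential(
            nn.Conv2d(64, num_classes, 1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )
        self.dehaze_head = nn.Sequential(
            nn.Conv2d(64, 3, 1),
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
        )

    def forward(self, hazy):
        shared = self.encoder(hazy)                 # features shared by both tasks
        return self.seg_head(shared), self.dehaze_head(shared)

model = SegDehazeSketch()
seg_map, clear_img = model(torch.randn(1, 3, 256, 256))   # (1, 19, 256, 256), (1, 3, 256, 256)
# Training would typically sum a segmentation loss and an image reconstruction loss.
```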

A Design of Small Scale Deep CNN Model for Facial Expression Recognition using the Low Resolution Image Datasets (저해상도 영상 자료를 사용하는 얼굴 표정 인식을 위한 소규모 심층 합성곱 신경망 모델 설계)

  • Salimov, Sirojiddin;Yoo, Jae Hung
    • The Journal of the Korea institute of electronic communication sciences / v.16 no.1 / pp.75-80 / 2021
  • Artificial intelligence is becoming an important part of our lives, providing substantial benefits. In this respect, facial expression recognition has been one of the hot topics among computer vision researchers in recent decades. Classifying a small dataset of low-resolution images requires a new small-scale deep CNN model, and we propose a design suitable for such small datasets. Compared to traditional deep CNN models, the proposed model uses only a fraction of the memory in terms of total learnable weights, yet it shows very similar results on the FER2013 and FERPlus datasets.
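
A PyTorch sketch of a compact CNN for 48x48 grayscale FER2013-style inputs with 7 expression classes, to illustrate the kind of small-scale model the abstract argues for; the exact layer sizes are assumptions, not the authors' architecture.

```python
# Sketch: a small CNN for 48x48 grayscale inputs and 7 expression classes.
# Layer sizes are assumptions, not the paper's architecture.
import torch
import torch.nn as nn

small_fer_cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 12 -> 6
    nn.Flatten(),
    nn.Linear(64 * 6 * 6, 128), nn.ReLU(),
    nn.Linear(128, 7),                                            # 7 expression classes
)

num_params = sum(p.numel() for p in small_fer_cnn.parameters())
logits = small_fer_cnn(torch.randn(1, 1, 48, 48))                 # (1, 7)
print(f"learnable parameters: {num_params:,}")
```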