• Title/Summary/Keyword: convolutional neural networks

A Survey on Deep Convolutional Neural Networks for Image Steganography and Steganalysis

  • Hussain, Israr; Zeng, Jishen; Qin, Xinhong; Tan, Shunquan
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.3 / pp.1228-1248 / 2020
  • Steganalysis & steganography have witnessed immense progress over the past few years by the advancement of deep convolutional neural networks (DCNN). In this paper, we analyzed current research states from the latest image steganography and steganalysis frameworks based on deep learning. Our objective is to provide for future researchers the work being done on deep learning-based image steganography & steganalysis and highlights the strengths and weakness of existing up-to-date techniques. The result of this study opens new approaches for upcoming research and may serve as source of hypothesis for further significant research on deep learning-based image steganography and steganalysis. Finally, technical challenges of current methods and several promising directions on deep learning steganography and steganalysis are suggested to illustrate how these challenges can be transferred into prolific future research avenues.

Multimodal Face Biometrics by Using Convolutional Neural Networks

  • Tiong, Leslie Ching Ow; Kim, Seong Tae; Ro, Yong Man
    • Journal of Korea Multimedia Society / v.20 no.2 / pp.170-178 / 2017
  • Biometric recognition is a challenging topic that demands high recognition accuracy. Most existing methods rely on a single biometric source to achieve recognition, and their accuracy is affected by sources of variability such as illumination and appearance changes. In this paper, we propose a new multimodal biometric recognition method using convolutional neural networks, focusing on multimodal biometrics from the face and periocular regions. Through experiments, we demonstrate that a deep learning framework built on multimodal facial biometric features is helpful for achieving high recognition performance.
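
As a rough illustration of the kind of two-branch fusion the abstract describes, here is a minimal PyTorch sketch in which face and periocular CNN features are concatenated before classification; the layer sizes, input resolution, and identity count are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultimodalBiometricNet(nn.Module):
    """Two CNN branches (face, periocular) whose features are fused for identification."""
    def __init__(self, num_identities=100):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        self.face_branch = branch()
        self.periocular_branch = branch()
        self.classifier = nn.Linear(64 + 64, num_identities)

    def forward(self, face, periocular):
        f = self.face_branch(face)
        p = self.periocular_branch(periocular)
        return self.classifier(torch.cat([f, p], dim=1))  # feature-level fusion

# Example: a batch of 4 face crops and 4 periocular crops
model = MultimodalBiometricNet()
logits = model(torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64))
print(logits.shape)  # torch.Size([4, 100])
```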

Convolutional Neural Network-based Real-Time Drone Detection Algorithm (심층 컨벌루션 신경망 기반의 실시간 드론 탐지 알고리즘)

  • Lee, Dong-Hyun
    • The Journal of Korea Robotics Society / v.12 no.4 / pp.425-431 / 2017
  • As drones gain popularity, drone detection is becoming an increasingly important part of drone systems for safety, privacy, crime prevention, and more. However, existing drone detection systems are expensive and heavy, making them suitable only for industrial or military purposes. This paper proposes a novel approach for training convolutional neural networks to detect drones in images that can be used in embedded systems. Unlike previous works that consider only the class probability of the image areas where an object exists, the proposed approach takes all areas of the image into account for robust classification and object detection. Moreover, a novel loss function is proposed so that the CNN can learn more effectively from a limited amount of training data. Experimental results with various drone images show that the proposed approach performs efficiently in real drone detection scenarios.
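
The abstract does not give the proposed loss itself, but the idea of supervising all image areas rather than only object regions can be illustrated with a simple grid-based loss sketch (PyTorch assumed; the grid size, background weighting, and box encoding are illustrative assumptions, not the paper's formulation).

```python
import torch
import torch.nn.functional as F

def grid_detection_loss(pred_obj, pred_box, target_obj, target_box, lambda_noobj=0.5):
    """Supervise objectness over ALL grid cells, with background cells down-weighted,
    and regress boxes only where a drone is present."""
    weights = torch.where(target_obj > 0,
                          torch.ones_like(target_obj),
                          torch.full_like(target_obj, lambda_noobj))
    obj_loss = F.binary_cross_entropy_with_logits(pred_obj, target_obj, weight=weights)
    box_mask = (target_obj > 0).float().unsqueeze(-1)      # (N, S, S, 1)
    box_loss = F.mse_loss(pred_box * box_mask, target_box * box_mask)
    return obj_loss + box_loss

# Toy usage on a 7x7 grid with 4 box coordinates per cell.
S = 7
pred_obj, pred_box = torch.randn(2, S, S), torch.randn(2, S, S, 4)
target_obj = (torch.rand(2, S, S) > 0.9).float()           # a few cells contain a drone
target_box = torch.rand(2, S, S, 4)
print(grid_detection_loss(pred_obj, pred_box, target_obj, target_box))
```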

Implementation of Fish Detection Based on Convolutional Neural Networks (CNN 기반의 물고기 탐지 알고리즘 구현)

  • Lee, Yong-Hwan; Kim, Heung-Jun
    • Journal of the Semiconductor & Display Technology / v.19 no.3 / pp.124-129 / 2020
  • Autonomous underwater vehicles have attracted many researchers. This paper proposes a convolutional neural network (CNN) based fish detection method. Since there are not enough data sets for training, an overfitting problem can occur in deep learning. To solve this problem, we apply the dropout algorithm to simplify the model. Experimental results show that the implemented method is promising and that the dropout approach greatly enhances the effectiveness of identification.
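
A minimal sketch of a CNN classifier with dropout, in the spirit of the described method (PyTorch assumed; layer sizes, the dropout rate, and the input resolution are assumptions, not the paper's architecture).

```python
import torch
import torch.nn as nn

# Small CNN with a dropout layer to curb overfitting on a limited fish dataset.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 128), nn.ReLU(),
    nn.Dropout(p=0.5),          # randomly zeroes activations during training
    nn.Linear(128, 2),          # fish / no fish
)

x = torch.randn(1, 3, 64, 64)   # a 64x64 RGB patch
model.train()                    # dropout active during training
print(model(x).shape)            # torch.Size([1, 2])
model.eval()                     # dropout disabled at inference
```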

Comparison of CNN and YOLO for Object Detection (객체 검출을 위한 CNN과 YOLO 성능 비교 실험)

  • Lee, Yong-Hwan; Kim, Youngseop
    • Journal of the Semiconductor & Display Technology / v.19 no.1 / pp.85-92 / 2020
  • Object detection plays a critical role in the field of computer vision, and research has increased rapidly since 2012 with the application of convolutional neural networks and their modified structures. Representative object detection algorithms include CNN-based detectors and YOLO. This paper presents these two representative algorithm families: the CNN-based series and YOLO, which addresses the bounding-box problem of CNN detectors. We compare the performance of the two families in terms of accuracy, speed, and cost. Compared with the latest advanced solutions, YOLO v3 achieves a good trade-off between speed and accuracy.
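
For context, a speed comparison of the kind reported in such studies can be approximated by timing forward passes, while accuracy (mAP) is measured separately on a labeled test set. The snippet below is a generic sketch with placeholder models, not the paper's benchmark code.

```python
import time
import torch
import torch.nn as nn

def measure_fps(model, input_size=(1, 3, 416, 416), runs=50):
    """Time repeated forward passes and report frames per second."""
    model.eval()
    x = torch.randn(*input_size)
    with torch.no_grad():
        for _ in range(5):          # warm-up iterations
            model(x)
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
    return runs / (time.perf_counter() - start)

# Placeholder backbones standing in for a CNN-series detector and a YOLO-style one.
cnn_stub = nn.Sequential(nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU())
yolo_stub = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
print(f"stub A: {measure_fps(cnn_stub):.1f} FPS, stub B: {measure_fps(yolo_stub):.1f} FPS")
```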

Convolutional Neural Network Based on Accelerator-Aware Pruning for Object Detection in Single-Shot Multibox Detector (싱글숏 멀티박스 검출기에서 객체 검출을 위한 가속 회로 인지형 가지치기 기반 합성곱 신경망 기법)

  • Kang, Hyeong-Ju
    • Journal of the Korea Institute of Information and Communication Engineering / v.24 no.1 / pp.141-144 / 2020
  • Convolutional neural networks (CNNs) show high performance in computer vision tasks, including object detection, but they require large amounts of weight storage and computation. In this paper, a pruning scheme is applied to CNNs for object detection, which can remove a large number of weights with negligible performance degradation. Unlike previous approaches, the pruning scheme applied in this paper takes the underlying accelerator architecture into account, so the pruned CNNs can be executed efficiently on an ASIC or FPGA accelerator. Even with this constrained pruning, the resulting CNN shows negligible degradation of detection performance: less than a 1% point drop in mAP on the VOC0712 test set. With the proposed scheme, CNNs can be applied to object detection efficiently.
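
One common way to make pruning accelerator-aware is to constrain the surviving weights to fixed-size groups that match the accelerator's parallel lanes. The sketch below illustrates that general idea only; the paper's exact constraint, group size, and sparsity level are not given in the abstract and are assumed here.

```python
import torch

def accelerator_aware_prune(weight, group_size=8, keep_per_group=2):
    """Group-constrained magnitude pruning: within every group of `group_size`
    consecutive weights (e.g. one SIMD lane of the accelerator), keep only the
    `keep_per_group` largest-magnitude weights and zero out the rest."""
    flat = weight.reshape(-1, group_size)                   # assumes numel is divisible
    idx = flat.abs().argsort(dim=1, descending=True)
    mask = torch.zeros_like(flat)
    mask.scatter_(1, idx[:, :keep_per_group], 1.0)          # mark survivors per group
    return (flat * mask).reshape(weight.shape)

w = torch.randn(64, 64, 3, 3)            # a conv layer's weights (out, in, kH, kW)
pruned = accelerator_aware_prune(w.clone())
print(f"sparsity: {(pruned == 0).float().mean():.2f}")      # ~0.75 with 2-of-8 kept
```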

Lightweight Residual Layer Based Convolutional Neural Networks for Traffic Sign Recognition (교통 신호 인식을 위한 경량 잔류층 기반 컨볼루션 신경망)

  • Shokhrukh, Kodirov; Yoo, Jae Hung
    • The Journal of the Korea institute of electronic communication sciences / v.17 no.1 / pp.105-110 / 2022
  • Traffic sign recognition plays an important role in solving traffic-related problems. Traffic sign recognition and classification systems are key components for traffic safety, traffic monitoring, autonomous driving services, and autonomous vehicles. A lightweight model that can run on portable devices is therefore an essential design goal. We propose a lightweight convolutional neural network model with residual blocks for traffic sign recognition systems. The proposed model shows very competitive results on publicly available benchmark data.
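
A minimal sketch of a residual-block-based lightweight classifier of the kind the abstract refers to (PyTorch assumed; channel counts, depth, and the 43-class output are assumptions, the last matching a GTSRB-style benchmark rather than the paper's stated setup).

```python
import torch
import torch.nn as nn

class LightResidualBlock(nn.Module):
    """A small residual block: two 3x3 convolutions plus an identity shortcut."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))   # identity shortcut keeps the block light

# A tiny classifier for 43 traffic-sign classes.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    LightResidualBlock(16), nn.MaxPool2d(2),
    LightResidualBlock(16), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 43),
)
print(model(torch.randn(1, 3, 32, 32)).shape)   # torch.Size([1, 43])
```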

Image Restoration Method using Denoising CNN (잡음제거 합성곱 신경망을 이용한 이미지 복원방법)

  • Kim, Seonjae; Lee, Jeongho; Lee, Suk-Hwan; Jun, Dongsan
    • Journal of Korea Multimedia Society / v.25 no.1 / pp.29-38 / 2022
  • Although image compression is an essential technology for transmitting image data in a variety of surveillance and mobile healthcare applications, lossy compression under limited network bandwidth introduces unwanted artifacts such as blocking and ringing. Recently, image restoration methods using convolutional neural networks (CNN) have shown significant improvements in the quality of compressed images. In this paper, we propose Image Denoising Convolutional Neural Networks (IDCNN) to reduce compression artifacts for the purpose of improving object classification performance. To evaluate classification accuracy, we used the ImageNet test dataset consisting of 50,000 natural images and measured classification performance in terms of Top-1 and Top-5 accuracy. Experimental results show that the proposed IDCNN can improve Top-1 and Top-5 accuracy by as much as 2.46% and 2.42%, respectively.
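
The abstract does not detail the IDCNN layers, so the sketch below shows a common residual-learning formulation of a denoising CNN that predicts and subtracts compression artifacts; the depth and width are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DenoisingCNN(nn.Module):
    """Predict the compression artifact (residual) and subtract it from the decoded image."""
    def __init__(self, channels=3, features=64, depth=8):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1),
                       nn.BatchNorm2d(features), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, compressed):
        return compressed - self.net(compressed)   # restored = input - predicted artifacts

restored = DenoisingCNN()(torch.randn(1, 3, 224, 224))
print(restored.shape)   # torch.Size([1, 3, 224, 224]); the restored image feeds the classifier
```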

A Parallel Deep Convolutional Neural Network for Alzheimer's disease classification on PET/CT brain images

  • Baydargil, Husnu Baris; Park, Jangsik; Kang, Do-Young; Kang, Hyun; Cho, Kook
    • KSII Transactions on Internet and Information Systems (TIIS) / v.14 no.9 / pp.3583-3597 / 2020
  • In this paper, a parallel deep learning model using a convolutional neural network and a dilated convolutional neural network is proposed to classify Alzheimer's disease with high accuracy in PET/CT images. The developed model consists of two pipelines: a conventional CNN pipeline and a dilated convolution pipeline. An input image is sent through both pipelines, and at the end the extracted features are concatenated and used to classify Alzheimer's disease. The complementary abilities of the two networks provide better overall accuracy than a single conventional CNN on the dataset. Moreover, instead of performing binary classification, the proposed model performs three-class classification among Alzheimer's disease, mild cognitive impairment, and normal control. Using data received from Dong-a University, the model detects Alzheimer's disease with an accuracy of up to 95.51%.
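
A minimal sketch of the described two-pipeline design: a conventional CNN branch and a dilated-convolution branch process the same image, and their features are concatenated for three-class output (AD / MCI / normal control). PyTorch is assumed, and the layer sizes and input resolution are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ParallelPipelineNet(nn.Module):
    """Conventional CNN pipeline + dilated convolution pipeline with feature concatenation."""
    def __init__(self, in_channels=1, num_classes=3):
        super().__init__()
        self.conv_branch = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.dilated_branch = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=4, dilation=4), nn.ReLU(),   # wider receptive field
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.classifier = nn.Linear(64 + 64, num_classes)

    def forward(self, x):
        fused = torch.cat([self.conv_branch(x), self.dilated_branch(x)], dim=1)
        return self.classifier(fused)

print(ParallelPipelineNet()(torch.randn(2, 1, 128, 128)).shape)   # torch.Size([2, 3])
```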

Convolutional Neural Network Based Multi-feature Fusion for Non-rigid 3D Model Retrieval

  • Zeng, Hui; Liu, Yanrong; Li, Siqi; Che, JianYong; Wang, Xiuqing
    • Journal of Information Processing Systems / v.14 no.1 / pp.176-190 / 2018
  • This paper presents a novel convolutional neural network based multi-feature fusion learning method for non-rigid 3D model retrieval, which exploits the discriminative information of the heat kernel signature (HKS) descriptor and the wave kernel signature (WKS) descriptor. First, we compute the 2D shape distributions of the two kinds of descriptors to represent the 3D model and use them as the input to the networks. Then we construct two convolutional neural networks for the HKS distribution and the WKS distribution separately and connect them with a multi-feature fusion layer. The fusion layer not only exploits more discriminative characteristics of the two descriptors but also complements the correlated information between them. Furthermore, to further improve the descriptive ability, a cross-connected layer is built to combine low-level features with high-level features. Extensive experiments have validated the effectiveness of the designed multi-feature fusion learning method.
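
A rough sketch of the described fusion scheme: separate CNNs process the HKS and WKS shape distributions, a fusion layer operates on their concatenated high-level features, and a cross-connection also carries pooled low-level features forward. PyTorch is assumed, and the layer sizes, input resolution, and class count are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class MultiFeatureFusionNet(nn.Module):
    """Two-branch CNN (HKS, WKS) with multi-feature fusion and a cross-connected path."""
    def __init__(self, num_classes=30):
        super().__init__()
        def low_level():
            return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        def high_level():
            return nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.hks_low, self.hks_high = low_level(), high_level()
        self.wks_low, self.wks_high = low_level(), high_level()
        self.pool_low = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten())  # cross-connection path
        self.fusion = nn.Linear(32 + 32 + 16 + 16, num_classes)

    def forward(self, hks_dist, wks_dist):
        h_low, w_low = self.hks_low(hks_dist), self.wks_low(wks_dist)
        h_high, w_high = self.hks_high(h_low), self.wks_high(w_low)
        # Fusion layer sees high-level features plus pooled low-level features.
        fused = torch.cat([h_high, w_high, self.pool_low(h_low), self.pool_low(w_low)], dim=1)
        return self.fusion(fused)

out = MultiFeatureFusionNet()(torch.randn(2, 1, 32, 32), torch.randn(2, 1, 32, 32))
print(out.shape)   # torch.Size([2, 30])
```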