• Title/Summary/Keyword: Deep CNNs


Accurate Human Localization for Automatic Labelling of Human from Fisheye Images

  • Than, Van Pha;Nguyen, Thanh Binh;Chung, Sun-Tae
    • Journal of Korea Multimedia Society / v.20 no.5 / pp.769-781 / 2017
  • Deep learning networks such as Convolutional Neural Networks (CNNs) perform successfully in many computer vision applications, including image classification and object detection. To deploy deep learning networks on embedded systems with limited processing power and memory, the networks may need to be simplified; however, a simplified network cannot learn every possible scene. One realistic strategy for embedded deep learning is to construct a simplified network model optimized for the scene images of the installation site, which in turn makes automatic training necessary for commercialization. In this paper, as an intermediate step toward automatic training in fisheye camera environments, we study more precise human localization in fisheye images and propose an accurate human localization method, the Automatic Ground-Truth Labelling Method (AGTLM). AGTLM first localizes candidate human bounding boxes using a GoogLeNet-LSTM approach, verifies them with a GoogLeNet-based CNN, and finally refines them more precisely (tightly) by applying a salient object detection technique. The improvement of the proposed method in accuracy and tightness is demonstrated through several experiments.
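The abstract gives no implementation details; the following is a minimal sketch, under stated assumptions, of only the final refinement idea: tightening a candidate bounding box to the salient region inside it. The saliency map, threshold, and box format are hypothetical and not taken from the paper.

```python
import numpy as np

def tighten_box(saliency, box, thresh=0.5):
    """Shrink (x0, y0, x1, y1) to the smallest box covering salient pixels.

    `saliency` is an HxW map in [0, 1]; `thresh` is a hypothetical cut-off.
    """
    x0, y0, x1, y1 = box
    region = saliency[y0:y1, x0:x1]
    ys, xs = np.where(region >= thresh)
    if len(xs) == 0:                       # nothing salient: keep the candidate box
        return box
    return (x0 + xs.min(), y0 + ys.min(), x0 + xs.max() + 1, y0 + ys.max() + 1)

# Toy example: a synthetic saliency map with a bright blob inside a loose box.
sal = np.zeros((100, 100))
sal[40:60, 30:55] = 1.0
print(tighten_box(sal, (20, 20, 80, 80)))  # -> (30, 40, 55, 60)
```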

Plant Disease Identification using Deep Neural Networks

  • Mukherjee, Subham;Kumar, Pradeep;Saini, Rajkumar;Roy, Partha Pratim;Dogra, Debi Prosad;Kim, Byung-Gyu
    • Journal of Multimedia Information System / v.4 no.4 / pp.233-238 / 2017
  • Automatic identification of plant diseases from leaves is one of the most challenging tasks for researchers. Diseases degrade plant performance and result in a huge reduction of agricultural output, so early and accurate diagnosis is of the utmost importance. Advances in deep Convolutional Neural Networks (CNNs) have changed the way images are processed compared with traditional image processing techniques. Deep learning architectures are composed of multiple processing layers that learn representations of data at multiple levels of abstraction, and they have proved highly effective in comparison with many state-of-the-art works. In this paper, we present a methodology for identifying plant diseases from their leaves using deep CNNs. For this, we adopt GoogLeNet, a powerful deep learning architecture, to identify the disease types, and use transfer learning to fine-tune the pre-trained model. An accuracy of 85.04% has been recorded in the identification of four disease classes in apple plant leaves. Finally, a comparison with other models is performed to show the effectiveness of the approach.
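The paper itself does not include code; below is a minimal PyTorch sketch of the kind of transfer-learning setup the abstract describes: a pre-trained GoogLeNet with its classifier head replaced for four disease classes. The frozen backbone, optimizer, batch size, and dummy data are assumptions for illustration only.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # four apple-leaf disease classes, as reported in the abstract

# GoogLeNet pre-trained on ImageNet (torchvision >= 0.13 weights API).
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)

# Transfer learning: freeze the backbone, replace and train only the head.
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative fine-tuning step on a dummy batch; real data would come
# from a folder of leaf photos resized to 224x224.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
model.train()
outputs = model(images)
logits = outputs.logits if hasattr(outputs, "logits") else outputs  # aux-logit safety
optimizer.zero_grad()
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
print(float(loss))
```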

Improvement of signal and noise performance using single image super-resolution based on deep learning in single photon-emission computed tomography imaging system

  • Kim, Kyuseok;Lee, Youngjin
    • Nuclear Engineering and Technology / v.53 no.7 / pp.2341-2347 / 2021
  • Because single-photon emission computed tomography (SPECT) is one of the most widely used nuclear medicine imaging systems, it is extremely important to acquire high-quality images for diagnosis. In this study, we designed a super-resolution (SR) technique using a dense block-based deep convolutional neural network (CNN) and evaluated the algorithm on real SPECT phantom images. To acquire the phantom images, a real SPECT system with a 99mTc source and two physical phantoms was used. To confirm the image quality, noise properties and visual quality metrics were calculated. The results demonstrate that our proposed method delivers a more valid SR improvement using dense block-based deep CNNs than conventional reconstruction techniques. In particular, when the proposed method was used, the quantitative performance improved by 1.2 to 5.0 times compared with conventional iterative reconstruction. We confirmed the effects on the image quality of the resulting SR images, and the proposed technique was shown to be effective for nuclear medicine imaging.
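This is not the authors' network; it is a minimal PyTorch sketch of a dense block of the kind the abstract refers to, in which each convolution receives the concatenation of all preceding feature maps. Layer count, growth rate, and channel widths are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each 3x3 convolution sees the concatenation of all earlier feature maps."""
    def __init__(self, in_channels=64, growth=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            ch += growth
        # 1x1 convolution fuses the concatenated features back to the input
        # width so that blocks can be stacked inside a larger SR network.
        self.fuse = nn.Conv2d(ch, in_channels, kernel_size=1)

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return self.fuse(torch.cat(feats, dim=1)) + x  # local residual connection

# Shape check on a low-resolution patch (batch, channels, height, width).
y = DenseBlock()(torch.randn(1, 64, 32, 32))
print(y.shape)  # torch.Size([1, 64, 32, 32])
```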

Image-based rainfall prediction from a novel deep learning method

  • Byun, Jongyun;Kim, Jinwon;Jun, Changhyun
    • Proceedings of the Korea Water Resources Association Conference / 2021.06a / pp.183-183 / 2021
  • Deep learning methods and their applications have become an essential part of prediction and modeling in water-related research areas, including hydrological processes and climate change. Applying deep learning increases the availability of data sources in hydrology, which is useful for the analysis of precipitation, runoff, groundwater level, evapotranspiration, and so on. However, microclimate analysis and prediction with deep learning methods remain limited because of the deficiency of gauge-based data and the shortcomings of existing technologies. In this study, a real-time rainfall prediction model was developed from a sky image data set using convolutional neural networks (CNNs). The daily image data were collected at Chung-Ang University and Korea University. To achieve high accuracy, the proposed model considers data classification, image processing, and ratio adjustment of no-rain data. The rainfall predictions were compared with minutely rainfall data from rain gauge stations close to the image sensors. The results indicate that the proposed model could complement the current rainfall observation system and has large potential to fill observation gaps. Information from such small-scale areas can advance accurate weather forecasting and hydrological modeling at the micro scale.
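The proceedings abstract does not describe the network; the following is a minimal sketch, assuming a small CNN regressor that maps a sky image to a rainfall intensity. The architecture, input size, and regression target are placeholders rather than details from the presentation.

```python
import torch
import torch.nn as nn

class SkyRainCNN(nn.Module):
    """Hypothetical sky-image regressor: RGB image -> rainfall intensity (mm/h)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = SkyRainCNN()
images = torch.randn(4, 3, 128, 128)   # dummy batch of sky images
target = torch.rand(4, 1) * 10         # dummy rainfall values in mm/h
loss = nn.MSELoss()(model(images), target)
print(float(loss))
```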


An Optimized Deep Learning Techniques for Analyzing Mammograms

  • Satish Babu Bandaru;Natarajasivan. D;Rama Mohan Babu. G
    • International Journal of Computer Science & Network Security / v.23 no.7 / pp.39-48 / 2023
  • Breast cancer screening makes extensive use of mammography. Even so, there has been much debate regarding the starting age and screening interval for this application. The deep learning technique of transfer learning is employed to transfer knowledge learnt from source tasks to target tasks. For solving real-world problems, deep neural networks have demonstrated superior performance compared with standard machine learning algorithms. The architecture of a deep neural network has to be defined using problem domain knowledge, which normally consumes a lot of time and computational resources. This work evaluated the efficacy of deep neural networks such as the Visual Geometry Group network (VGGNet), the Residual Network (ResNet), and the Inception network for classifying mammograms. It proposes optimizing ResNet with the Teaching-Learning-Based Optimization (TLBO) algorithm in order to predict breast cancer from mammogram images. The proposed TLBO-ResNet is an optimized ResNet with faster convergence than other evolutionary methods for mammogram classification.
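The abstract names Teaching-Learning-Based Optimization (TLBO) without giving code; below is a generic, minimal TLBO loop on a toy objective (the sphere function), not the authors' hyperparameter search over ResNet. Population size, bounds, and the objective are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Toy objective to minimise; stands in for a validation-loss evaluation."""
    return float(np.sum(x ** 2))

def tlbo(f, dim=5, pop=20, iters=100, low=-5.0, high=5.0):
    X = rng.uniform(low, high, (pop, dim))
    fit = np.array([f(x) for x in X])
    for _ in range(iters):
        # Teacher phase: move learners toward the best solution found so far.
        teacher, mean = X[fit.argmin()], X.mean(axis=0)
        for i in range(pop):
            TF = rng.integers(1, 3)  # teaching factor in {1, 2}
            cand = np.clip(X[i] + rng.random(dim) * (teacher - TF * mean), low, high)
            if f(cand) < fit[i]:
                X[i], fit[i] = cand, f(cand)
        # Learner phase: pairwise interaction with a randomly chosen peer.
        for i in range(pop):
            j = int(rng.integers(pop))
            if j == i:
                continue
            direction = X[i] - X[j] if fit[i] < fit[j] else X[j] - X[i]
            cand = np.clip(X[i] + rng.random(dim) * direction, low, high)
            if f(cand) < fit[i]:
                X[i], fit[i] = cand, f(cand)
    return X[fit.argmin()], float(fit.min())

best_x, best_f = tlbo(sphere)
print(best_f)  # should approach 0
```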

User Identification Method using Palm Creases and Veins based on Deep Learning (손금과 손바닥 정맥을 함께 이용한 심층 신경망 기반 사용자 인식)

  • Kim, Seulbeen;Kim, Wonjun
    • Journal of Broadcast Engineering / v.23 no.3 / pp.395-402 / 2018
  • Human palms contain discriminative features for proving each person's identity. In this paper, we present a novel method for user verification based on palmprints and palm veins. Specifically, the region of interest (ROI) is first determined so that it includes the maximum amount of information about the underlying structures of a given palm image. The extracted ROI is subsequently enhanced using directional patterns and statistical characteristics of the intensities. For the multispectral palm images, separate convolutional neural networks (CNNs) are trained independently. In the spirit of an ensemble, we finally combine the network outputs to compute the probability of a given ROI image and determine the identity. Through various experiments, we confirm that the proposed ensemble method is effective for user verification with palmprints and palm veins.
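The sketch below covers only the ensemble step the abstract mentions: several independently trained CNNs, one per spectral band, produce class probabilities that are averaged. The tiny placeholder CNN, the number of bands, and soft voting as the combination rule are assumptions, not details from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_USERS = 10   # hypothetical number of enrolled identities
NUM_BANDS = 3    # hypothetical number of spectral bands

def make_cnn():
    """Tiny stand-in for each per-band palm CNN."""
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, NUM_USERS),
    )

# One network per spectral band, assumed to have been trained independently.
nets = [make_cnn() for _ in range(NUM_BANDS)]

# A multispectral ROI: one grayscale 64x64 image per band (dummy data).
roi_bands = [torch.randn(1, 1, 64, 64) for _ in range(NUM_BANDS)]

with torch.no_grad():
    # Average the per-band softmax probabilities (simple soft voting).
    probs = torch.stack([F.softmax(net(x), dim=1)
                         for net, x in zip(nets, roi_bands)]).mean(dim=0)
print(probs.argmax(dim=1))  # predicted identity index
```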

Hybrid CNN-SVM Based Seed Purity Identification and Classification System

  • Suganthi, M;Sathiaseelan, J.G.R.
    • International Journal of Computer Science & Network Security / v.22 no.10 / pp.271-281 / 2022
  • The challenges of manual seed classification can be overcome with a reliable and autonomous seed purity identification and classification technique, which is a highly practical and commercially important requirement of the agricultural industry. Researchers can create new data mining methods with improved accuracy using current machine learning and artificial intelligence approaches. Seed classification can help with quality grading, seed quality control, and impurity identification. Seeds have traditionally been classified based on characteristics such as colour, shape, and texture. Generally, this is done by experts who visually examine each sample, which is a very time-consuming and tedious task. The process can be automated, making seed sorting far more efficient than manual inspection. Computer vision technologies based on machine learning (ML), symmetry, and, more specifically, convolutional neural networks (CNNs) have been widely used in related fields, resulting in greater labour efficiency in many cases. To sort a sample of 3000 seeds, KNN, SVM, CNN, and hybrid CNN-SVM classification algorithms were used. The proposed hybrid system includes a model that uses advanced deep learning techniques to categorise some well-known seeds. In most cases, the CNN-SVM model outperformed the comparable SVM and CNN models, demonstrating the effectiveness of using CNN-SVM to evaluate the data. The findings of this research revealed that CNN-SVM could be used to analyse data with promising results. Future studies should look into more seed types to expand the use of CNN-SVM in data processing.
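A minimal sketch of the CNN-SVM idea, assuming a CNN used purely as a feature extractor whose outputs are fed to a scikit-learn SVM. The toy data, feature dimension, and RBF kernel are placeholders; in the paper's setting the CNN would be trained on seed images first.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

# Hypothetical CNN feature extractor for 64x64 grayscale seed images.
extractor = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> 32-dimensional feature vector
)

def features(images):
    with torch.no_grad():
        return extractor(images).numpy()

# Toy stand-in data: 200 seed images belonging to 4 variety/purity classes.
images = torch.randn(200, 1, 64, 64)
labels = np.random.randint(0, 4, size=200)

# Train the SVM on CNN features; here the extractor is untrained, so the
# score is only a pipeline check, not a meaningful accuracy.
svm = SVC(kernel="rbf").fit(features(images[:150]), labels[:150])
print(svm.score(features(images[150:]), labels[150:]))
```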

Dynamic Adjustment of the Pruning Threshold in Deep Compression (Deep Compression의 프루닝 문턱값 동적 조정)

  • Lee, Yeojin;Park, Hanhoon
    • Journal of the Institute of Convergence Signal Processing / v.22 no.3 / pp.99-103 / 2021
  • Recently, convolutional neural networks (CNNs) have been widely utilized due to their outstanding performance in various computer vision fields. However, because of their intensive computation and high memory requirements, it is difficult to deploy CNNs on hardware platforms with limited resources, such as mobile and IoT devices. To address these limitations, research on neural network compression is underway to reduce the size of neural networks while maintaining their performance. This paper proposes a CNN compression technique that dynamically adjusts the thresholds of pruning, one of the neural network compression techniques. Unlike conventional pruning, which sets the thresholds that determine the weights to be pruned experimentally or heuristically, the proposed technique dynamically finds optimal thresholds that prevent accuracy degradation and outputs the lightweight neural network in less time. To validate the performance of the proposed technique, LeNet was trained on the MNIST dataset, and a lightweight LeNet could be obtained automatically 1.3 to 3 times faster without loss of accuracy.
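The paper's dynamic threshold search is not reproduced here; the sketch below shows only the underlying magnitude-pruning step, with a per-layer threshold taken from a fixed sparsity quantile as a stand-in for the dynamically adjusted value.

```python
import torch
import torch.nn as nn

def prune_by_threshold(model, sparsity=0.8):
    """Zero out the smallest-magnitude weights of each conv/linear layer.

    `sparsity` plays the role of the pruning threshold; here it is fixed,
    whereas the paper adjusts the threshold dynamically during training.
    """
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            w = module.weight.data
            threshold = torch.quantile(w.abs().flatten(), sparsity)
            module.weight.data = w * (w.abs() >= threshold).float()
    return model

# LeNet-style network, matching the paper's MNIST experiment in spirit.
lenet = nn.Sequential(
    nn.Conv2d(1, 6, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(6, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 4 * 4, 120), nn.ReLU(),
    nn.Linear(120, 84), nn.ReLU(), nn.Linear(84, 10),
)
prune_by_threshold(lenet)
total = sum(p.numel() for p in lenet.parameters() if p.dim() > 1)
zeros = sum(int((p == 0).sum()) for p in lenet.parameters() if p.dim() > 1)
print(f"weight sparsity: {zeros / total:.2f}")
```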

DeepCleanNet: Training Deep Convolutional Neural Network with Extremely Noisy Labels

  • Olimov, Bekhzod;Kim, Jeonghong
    • Journal of Korea Multimedia Society / v.23 no.11 / pp.1349-1360 / 2020
  • In recent years, Convolutional Neural Networks (CNNs) have been successfully applied to different computer vision tasks. Since CNN models are representative supervised learning algorithms, they demand a large amount of data to train the classifiers; thus, obtaining data with correct labels is imperative to attain state-of-the-art performance. However, labelling datasets is a tedious and expensive process, so real-life datasets often contain incorrect labels. Although the issue of poorly labelled datasets has been studied before, we have noticed that the existing methods are very complex and hard to reproduce. Therefore, in this work we propose DeepCleanNet, a considerably simpler system that achieves competitive results compared to existing methods. We use the K-means clustering algorithm to select data with correct labels and train a deep CNN model on the resulting dataset. The technique achieves competitive results in both the training and validation stages. We conducted experiments using the MNIST database of handwritten digits with 50% corrupted labels and achieved increases of up to 10% and 20% in training and validation set accuracy, respectively.
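The abstract does not spell out the exact selection rule; the sketch below shows one plausible reading, assuming samples are clustered with K-means and kept only if their given label matches the majority label of their cluster. The 2-D toy features stand in for image features; a deep CNN would then be trained on the cleaned subset.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy stand-in for image features: three well-separated classes, 300 samples.
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(100, 2)) for c in (0, 4, 8)])
true_labels = np.repeat([0, 1, 2], 100)

# Corrupt roughly 50% of the labels, as in the paper's MNIST experiment.
noisy = true_labels.copy()
flip = rng.random(300) < 0.5
noisy[flip] = rng.integers(0, 3, size=int(flip.sum()))

# Cluster, then keep samples whose noisy label agrees with their cluster's
# majority label; the cleaned subset would be used to train the CNN.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
keep = np.zeros(300, dtype=bool)
for c in np.unique(clusters):
    members = clusters == c
    majority = np.bincount(noisy[members]).argmax()
    keep |= members & (noisy == majority)

print(f"kept {keep.sum()} samples, "
      f"label accuracy among kept: {(noisy[keep] == true_labels[keep]).mean():.2f}")
```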

Fight Detection in Hockey Videos using Deep Network

  • Mukherjee, Subham;Saini, Rajkumar;Kumar, Pradeep;Roy, Partha Pratim;Dogra, Debi Prosad;Kim, Byung-Gyu
    • Journal of Multimedia Information System / v.4 no.4 / pp.225-232 / 2017
  • Understanding actions in videos is an important task; it helps in finding anomalies, such as fights, present in videos. Fight detection becomes even more crucial in sports. This paper focuses on finding fight scenes in hockey videos using blur and Radon transforms together with convolutional neural networks (CNNs). First, the local motion within the video frames is extracted using blur information. Next, the fast Fourier and Radon transforms are applied to the local motion. Video frames containing fight scenes are then identified using transfer learning with the pre-trained deep learning model VGG-Net. Finally, the methodology is compared against feed-forward neural networks. Accuracies of 56.00% and 75.00% have been achieved using a feed-forward neural network and VGG16-Net, respectively.
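The following is only a rough sketch of the preprocessing chain the abstract outlines, assuming a simple frame difference as a proxy for blur-based local motion, followed by FFT and Radon transforms. The synthetic frames are placeholders, and the VGG16 transfer-learning classifier that would consume the result is omitted.

```python
import numpy as np
from skimage.transform import radon

rng = np.random.default_rng(0)

# Two synthetic consecutive frames; in the paper these come from hockey videos
# and local motion is estimated from blur rather than a plain difference.
frame_t = rng.random((128, 128))
frame_t1 = np.roll(frame_t, shift=4, axis=1) + 0.05 * rng.random((128, 128))

# Crude local-motion map (stand-in for the blur-based motion extraction).
motion = np.abs(frame_t1 - frame_t)

# Fast Fourier transform of the motion map (centred magnitude spectrum).
spectrum = np.abs(np.fft.fftshift(np.fft.fft2(motion)))

# Radon transform projects the spectrum over a set of angles.
theta = np.linspace(0.0, 180.0, 60, endpoint=False)
sinogram = radon(spectrum, theta=theta, circle=False)

# `sinogram` (or an image built from it) would then be classified with a
# pre-trained VGG16 via transfer learning to flag fight frames.
print(spectrum.shape, sinogram.shape)
```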