• Title/Summary/Keyword: convolutional neural networks

Detection of Coffee Bean Defects using Convolutional Neural Networks (Convolutional Neural Network를 이용한 불량원두 검출 시스템)

  • Kim, Ho-Joong; Cho, Tai-Hoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2014.10a / pp.316-319 / 2014
  • Interest in coffee is increasing with the expansion of the coffee market. As tastes become more refined, coffee bean quality is considered very important. Currently, bean defects are mainly detected by experienced specialists. In this paper, a machine-learning-based system for detecting bean defects is presented. The system concentrates on two main defect types: malformed bean shape and insect damage. Convolutional neural networks are used, and the system comprises two of them: the first detects defects in the bean's shape, and the second detects insect damage. This system could be a starting point for automated coffee bean defect detection; further research is needed to detect other defect types.
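
The paper itself gives no code; a minimal PyTorch sketch of the two-network layout described above (one binary classifier per defect type, with layer sizes and a 64x64 input crop assumed purely for illustration) might look like this:

```python
import torch
import torch.nn as nn

class DefectCNN(nn.Module):
    """Small binary classifier; architecture details are assumed, not from the paper."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, 2)  # assumes 64x64 input crops

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

shape_net = DefectCNN()   # network 1: detects malformed bean shape
insect_net = DefectCNN()  # network 2: detects insect damage

bean = torch.randn(1, 3, 64, 64)              # one bean image crop (dummy data)
is_shape_defect = shape_net(bean).argmax(1)
is_insect_damage = insect_net(bean).argmax(1)
defective = bool(is_shape_defect.item() or is_insect_damage.item())
```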

A Study on Application of Reinforcement Learning Algorithm Using Pixel Data (픽셀 데이터를 이용한 강화 학습 알고리즘 적용에 관한 연구)

  • Moon, Saemaro; Choi, Yonglak
    • Journal of Information Technology Services / v.15 no.4 / pp.85-95 / 2016
  • Recently, deep learning and machine learning have attracted considerable attention and many supporting frameworks have appeared. In the artificial intelligence field, a large body of research is underway to apply this knowledge to complex problem-solving, which requires applying various learning algorithms and training methods to artificial intelligence systems. In addition, there is a dearth of performance evaluation of decision-making agents. The decision-making agent designed through this research finds optimal solutions using reinforcement learning: it collects raw pixel data observed from dynamic environments and makes decisions by itself based on those data. The agent uses convolutional neural networks to classify the situations it confronts, and the data observed from the environment undergo preprocessing before being used. This research describes how the convolutional neural networks and the decision-making agent are configured, analyzes learning performance through a value-based algorithm and a policy-based algorithm (Deep Q-Networks and Policy Gradient), sets forth their differences, and demonstrates how the convolutional neural networks affect overall learning performance when using pixel data. This research is expected to contribute to the improvement of artificial intelligence systems that can efficiently find optimal solutions by using features extracted from raw pixel data.
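
As an illustration only (the exact network, preprocessing, and hyperparameters are not given in the abstract), the value-based branch described, a convolutional Q-network acting epsilon-greedily on stacked, preprocessed pixel frames, could be sketched in PyTorch as follows:

```python
import random
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    """CNN mapping a stack of preprocessed pixel frames to Q-values (sizes assumed)."""
    def __init__(self, n_actions, in_frames=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_frames, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 7 * 7, 512),
                                  nn.ReLU(), nn.Linear(512, n_actions))

    def forward(self, frames):            # frames: (batch, in_frames, 84, 84)
        return self.head(self.conv(frames))

def select_action(q_net, state, epsilon, n_actions):
    """Epsilon-greedy action selection on a single preprocessed state."""
    if random.random() < epsilon:
        return random.randrange(n_actions)
    with torch.no_grad():
        return int(q_net(state.unsqueeze(0)).argmax(1).item())

q_net = QNetwork(n_actions=4)
state = torch.randn(4, 84, 84)            # dummy stack of preprocessed frames
print(select_action(q_net, state, epsilon=0.1, n_actions=4))
```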

Feature Extraction Using Convolutional Neural Networks for Random Translation (랜덤 변환에 대한 컨볼루션 뉴럴 네트워크를 이용한 특징 추출)

  • Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence / v.23 no.3 / pp.515-521 / 2020
  • Deep learning methods have been used effectively to provide great improvements in various research fields such as machine learning, image processing, and computer vision. One of the most frequently used deep learning methods in image processing is the convolutional neural network. Compared to traditional artificial neural networks, convolutional neural networks do not use predefined kernels; instead, they learn data-specific kernels. This property allows them to be used as feature extractors as well. In this study, we compared the quality of CNN features against traditional texture feature extraction methods. Experimental results demonstrate the superiority of the CNN features. Additionally, the recognition process and results of a pioneering CNN on the MNIST database are presented.
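
A hedged sketch of the general idea of using a CNN as a feature extractor (the backbone, feature layer, and downstream classifier here are assumptions, not the paper's choices): activations from an intermediate layer are fed to a conventional classifier.

```python
import torch
import torchvision.models as models
from sklearn.svm import LinearSVC

# Backbone used purely as a feature extractor; the choice of ResNet-18 is an assumption.
# Pass weights=models.ResNet18_Weights.DEFAULT instead for ImageNet-pretrained features.
backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()        # drop the classification head
backbone.eval()

def extract_features(images):
    """images: (N, 3, 224, 224) tensor, already normalized as the backbone expects."""
    with torch.no_grad():
        return backbone(images).numpy()  # (N, 512) feature vectors

# Dummy data standing in for texture images and their labels.
X = extract_features(torch.randn(8, 3, 224, 224))
y = [0, 1] * 4
clf = LinearSVC().fit(X, y)              # classical classifier trained on CNN features
```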

Ensemble of Convolution Neural Networks for Driver Smartphone Usage Detection Using Multiple Cameras

  • Zhang, Ziyi; Kang, Bo-Yeong
    • Journal of information and communication convergence engineering / v.18 no.2 / pp.75-81 / 2020
  • Approximately 1.3 million people die from traffic accidents each year, and smartphone usage while driving is one of the main causes of such accidents. Therefore, detection of smartphone usage by drivers has become an important part of distracted-driving detection. Previous studies have used single-camera-based methods to collect driver images. However, smartphone usage detection employing a single camera can fail if the driver occludes the phone. In this paper, we present a driver smartphone usage detection system that uses multiple cameras to collect driver images from different perspectives and then processes these images with an ensemble of convolutional neural networks. The ensemble comprises three individual convolutional neural networks with a simple voting system. Each network processes images from a distinct camera perspective, and the voting mechanism selects the final classification. Experimental results verified that the proposed method avoids the limitations observed in single-camera-based methods and achieved 98.96% accuracy on our dataset.
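
A minimal sketch of the voting scheme described above, assuming three small per-camera CNNs and a simple majority vote (network sizes and input resolution are illustrative assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn

class CameraCNN(nn.Module):
    """Binary classifier for one camera view (architecture assumed, not from the paper)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(32 * 56 * 56, 2),
        )

    def forward(self, x):
        return self.net(x)

camera_nets = [CameraCNN() for _ in range(3)]      # one network per camera perspective

def majority_vote(views):
    """views: list of three (1, 3, 224, 224) images, one per camera."""
    votes = torch.stack([m(v).argmax(1) for m, v in zip(camera_nets, views)])
    return int(votes.sum() > len(camera_nets) / 2)  # 1 = smartphone usage detected

views = [torch.randn(1, 3, 224, 224) for _ in range(3)]
print(majority_vote(views))
```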

An Approximate DRAM Architecture for Energy-efficient Deep Learning

  • Nguyen, Duy Thanh; Chang, Ik-Joon
    • Journal of Semiconductor Engineering / v.1 no.1 / pp.31-37 / 2020
  • We present an approximate DRAM architecture for energy-efficient deep learning. Our key premise is that by bounding memory errors to non-critical information, we can significantly reduce DRAM refresh energy without compromising the recognition accuracy of deep neural networks. To validate this premise, we perform extensive Monte-Carlo simulations for several well-known convolutional neural networks such as LeNet, ConvNet, and AlexNet with MNIST, CIFAR-10, and ImageNet inputs, respectively. We assume that the highest-order 8 bits (in single precision) and 4 bits (in half precision) are protected from retention errors under the proposed architecture, and then randomly inject bit errors into the unprotected bits at various bit-error rates. The recognition accuracies of the above convolutional neural networks are successfully maintained up to bit-error rates on the order of 10^-5. We simulate DRAM energy during inference of the above convolutional neural networks, where the proposed architecture shows the possibility of considerable energy savings of up to 10-37.5% of total DRAM energy.
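
A rough numpy sketch of the kind of error-injection experiment described above, protecting the top 8 bits of each single-precision word and flipping the remaining bits at a given bit-error rate (the bit layout and procedure here are assumptions for illustration, not the paper's simulator):

```python
import numpy as np

def inject_bit_errors(weights, ber, protected_msbs=8, rng=None):
    """Flip unprotected bits of float32 values, each with probability `ber`."""
    rng = np.random.default_rng() if rng is None else rng
    bits = weights.astype(np.float32).view(np.uint32)
    n_unprotected = 32 - protected_msbs            # bits 0..23 unprotected for 8 MSBs
    for bit in range(n_unprotected):
        flips = rng.random(bits.shape) < ber       # which words flip this bit position
        bits = np.where(flips, bits ^ np.uint32(1 << bit), bits)
    return bits.view(np.float32)

w = np.random.randn(1000).astype(np.float32)       # stand-in for network weights/activations
w_noisy = inject_bit_errors(w, ber=1e-5)
print(np.mean(np.abs(w - w_noisy)))                # mean perturbation stays small
```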

Neutron spectrum unfolding using two architectures of convolutional neural networks

  • Maha Bouhadida; Asmae Mazzi; Mariya Brovchenko; Thibaut Vinchon; Mokhtar Z. Alaya; Wilfried Monange; Francois Trompier
    • Nuclear Engineering and Technology / v.55 no.6 / pp.2276-2282 / 2023
  • We deploy artificial neural networks to unfold neutron spectra from measured energy-integrated quantities. Neutron spectra are an important parameter for computing the absorbed dose and the kerma, serving radiation protection as well as nuclear safety. The architectures are inspired by convolutional neural networks. The first architecture is made up of residual transposed-convolution blocks, while the second is a modified version of the U-net architecture. A large and balanced dataset is simulated under "realistic" physical constraints to train the architectures efficiently. Results show high-accuracy prediction of neutron spectra ranging from thermal up to fast spectra. The dataset processing, the attention paid to performance metrics, and the hyperparameter optimization underlie the architectures' robustness.
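
A speculative PyTorch sketch of a residual transposed-convolution block of the kind the abstract mentions, wired into a toy unfolding network that maps a few measured integral quantities to an energy-binned spectrum (the 1-D formulation, all sizes, and the bin count are assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class ResidualTransposedBlock(nn.Module):
    """Residual block built around 1-D transposed convolutions (illustrative only)."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.ConvTranspose1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels), nn.ReLU(),
            nn.ConvTranspose1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))        # skip connection around the block

# Toy unfolder: measured integral quantities in, energy-binned spectrum out (sizes assumed).
unfolder = nn.Sequential(
    nn.Linear(16, 64 * 8), nn.Unflatten(1, (64, 8)),
    ResidualTransposedBlock(64), ResidualTransposedBlock(64),
    nn.Conv1d(64, 1, kernel_size=1), nn.Flatten(),  # 8 energy bins per example here
)
spectrum = unfolder(torch.randn(2, 16))             # (batch, 8)
```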

Architectures of Convolutional Neural Networks for the Prediction of Protein Secondary Structures (단백질 이차 구조 예측을 위한 합성곱 신경망의 구조)

  • Chi, Sang-Mun
    • Journal of the Korea Institute of Information and Communication Engineering / v.22 no.5 / pp.728-733 / 2018
  • Deep learning has been actively studied for predicting protein secondary structure based only on the sequence information of the amino acids constituting the protein. In this paper, we compared the performance of convolutional neural networks of various architectures for predicting protein secondary structure. To find the optimal network depth for this task, performance was evaluated as a function of the number of layers. We also applied structures from GoogLeNet and ResNet, which constitute building blocks of many image classification methods. These structures extract diverse features from the input data and ease gradient propagation during training even when deep layers are used. These convolutional neural network architectures were modified to suit the characteristics of protein data to improve performance.
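
An illustrative PyTorch sketch of a ResNet-style building block adapted to 1-D amino-acid sequences with per-residue secondary-structure outputs (the alphabet size, the 3-state labeling, and all layer sizes are assumptions, not taken from the paper):

```python
import torch
import torch.nn as nn

NUM_AA = 21       # amino-acid alphabet size (assumed encoding, incl. unknown residue)
NUM_CLASSES = 3   # e.g. helix / sheet / coil, a common 3-state labeling

class ResBlock1d(nn.Module):
    """ResNet-style block adapted to 1-D protein sequences (illustrative adaptation)."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch), nn.ReLU(),
            nn.Conv1d(ch, ch, 3, padding=1), nn.BatchNorm1d(ch),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))

model = nn.Sequential(
    nn.Conv1d(NUM_AA, 64, 3, padding=1), nn.ReLU(),
    ResBlock1d(64), ResBlock1d(64),
    nn.Conv1d(64, NUM_CLASSES, 1),           # per-residue class scores
)
seq = torch.randn(1, NUM_AA, 120)             # one-hot-like encoding of a 120-residue protein
per_residue_logits = model(seq)               # (1, NUM_CLASSES, 120)
```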

Bagging deep convolutional autoencoders trained with a mixture of real data and GAN-generated data

  • Hu, Cong; Wu, Xiao-Jun; Shu, Zhen-Qiu
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.11 / pp.5427-5445 / 2019
  • While deep neural networks have achieved remarkable performance in representation learning, a huge amount of labeled training data is usually required by supervised deep models such as convolutional neural networks. In this paper, we propose a new representation learning method, namely generative adversarial network (GAN) based bagging deep convolutional autoencoders (GAN-BDCAE), which can map data to diverse hierarchical representations in an unsupervised fashion. Boosting the size of the training data, training deep models, and aggregating diverse learning machines are the three principal avenues towards increasing the representation learning capability of neural networks, and we focus on combining these three techniques. To this end, we adopt GANs for realistic unlabeled sample generation and bagging deep convolutional autoencoders (BDCAE) for robust feature learning. The proposed method improves the discriminative ability of the learned feature embedding for solving subsequent pattern recognition problems. We evaluate our approach on three standard benchmarks and demonstrate the superiority of the proposed method compared to traditional unsupervised learning methods.
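
A condensed, assumption-laden sketch of the bagging idea: several convolutional autoencoders are each trained on a bootstrap resample of real data mixed with GAN-generated samples. The GAN generator below is a placeholder (here just a lambda producing random images), and all sizes and hyperparameters are illustrative, not the paper's settings.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Small convolutional autoencoder for 28x28 grayscale inputs (sizes assumed)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                                 nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

def train_bagged_autoencoders(real_data, generator, n_models=3, mix=0.5, epochs=5):
    """Bagging: each autoencoder sees a bootstrap resample of real data plus GAN samples.
    `generator` stands in for a pre-trained GAN generator (an assumption here)."""
    ensemble = []
    for _ in range(n_models):
        ae = ConvAutoencoder()
        opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
        idx = torch.randint(0, len(real_data), (len(real_data),))   # bootstrap resample
        fake = generator(int(mix * len(real_data)))                  # GAN-generated samples
        batch = torch.cat([real_data[idx], fake])
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.mse_loss(ae(batch), batch)           # reconstruction loss
            loss.backward()
            opt.step()
        ensemble.append(ae)
    return ensemble

# Dummy stand-ins: 28x28 grayscale images and a fake "generator".
real = torch.rand(64, 1, 28, 28)
ensemble = train_bagged_autoencoders(real, generator=lambda n: torch.rand(n, 1, 28, 28))
```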

Development of Deep Learning Models for Multi-class Sentiment Analysis (딥러닝 기반의 다범주 감성분석 모델 개발)

  • Syaekhoni, M. Alex; Seo, Sang Hyun; Kwon, Young S.
    • Journal of Information Technology Services / v.16 no.4 / pp.149-160 / 2017
  • Sentiment analysis is the process of determining whether a piece of document, text, or conversation expresses a positive, negative, neutral, or other emotion. Sentiment analysis has been applied to several real-world applications, such as chatbots. In the last five years, the practical use of chatbots has become prevalent in many fields of industry. In chatbot applications, sentiment analysis must be performed in advance to recognize the user's emotion and understand the speaker's intent. Specific emotions go beyond describing sentences as positive or negative. In light of this context, we propose deep learning models for multi-class sentiment analysis that identify the speaker's emotion, categorized as joy, fear, guilt, sadness, shame, disgust, or anger. We develop convolutional neural network (CNN), long short-term memory (LSTM), and multi-layer neural network models as deep neural network models for detecting emotion in a sentence. In addition, a word embedding process was applied in our research. In our experiments, we found that the LSTM model performs best compared to the convolutional neural network and multi-layer neural network models. Moreover, we also show the practical applicability of the deep learning models to sentiment analysis for chatbots.
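
A minimal PyTorch sketch of an embedding + LSTM sentence classifier over the seven emotion classes listed above (the vocabulary size and the embedding and hidden dimensions are assumptions, not the paper's settings):

```python
import torch
import torch.nn as nn

EMOTIONS = ["joy", "fear", "guilt", "sadness", "shame", "disgust", "anger"]

class LSTMSentiment(nn.Module):
    """Embedding + LSTM sentence classifier for the seven emotion classes."""
    def __init__(self, vocab_size=10000, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, len(EMOTIONS))

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        emb = self.embed(token_ids)               # word embeddings
        _, (h_n, _) = self.lstm(emb)              # h_n: (1, batch, hidden_dim)
        return self.out(h_n.squeeze(0))           # (batch, 7) emotion logits

model = LSTMSentiment()
tokens = torch.randint(0, 10000, (2, 20))          # two dummy tokenized sentences
pred = model(tokens).argmax(1)                      # predicted emotion indices
print([EMOTIONS[i] for i in pred])
```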

Drone Image Classification based on Convolutional Neural Networks (컨볼루션 신경망을 기반으로 한 드론 영상 분류)

  • Joo, Young-Do
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.17 no.5 / pp.97-102 / 2017
  • Recently, deep learning techniques such as convolutional neural networks (CNN) have been introduced to classify high-resolution remote sensing data. In this paper, we investigated the possibility of applying CNNs to crop classification of farmland images captured by drones. The farming area was divided into seven classes: rice field, sweet potato, red pepper, corn, sesame leaf, fruit tree, and vinyl greenhouse. We performed image pre-processing and normalization to apply the CNN, and the image classification accuracy was more than 98%. The results of this study are expected to facilitate a rapid transition from existing image classification methods to deep learning-based methods and confirm the feasibility of this approach.
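
A minimal PyTorch sketch of normalization pre-processing followed by a small CNN over the seven crop classes named above (the normalization statistics, patch size, and layer sizes are illustrative assumptions, not the paper's configuration):

```python
import torch
import torch.nn as nn
from torchvision import transforms

CROPS = ["rice field", "sweet potato", "red pepper", "corn",
         "sesame leaf", "fruit tree", "vinyl greenhouse"]

# Pre-processing applied to a PIL drone image patch before classification (stats assumed).
preprocess = transforms.Compose([
    transforms.Resize((64, 64)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

class CropCNN(nn.Module):
    """Simple CNN over drone image patches; layer sizes are illustrative assumptions."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(64 * 16 * 16, len(CROPS)),
        )

    def forward(self, x):
        return self.net(x)

model = CropCNN()
patch = torch.randn(1, 3, 64, 64)                 # dummy pre-processed patch
print(CROPS[model(patch).argmax(1).item()])
```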