• Title/Summary/Keyword: Neural Network Classifier


Classifying Indian Medicinal Leaf Species Using LCFN-BRNN Model

  • Kiruba, Raji I;Thyagharajan, K.K;Vignesh, T;Kalaiarasi, G
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.15 no.10
    • /
    • pp.3708-3728
    • /
    • 2021
  • Indian herbal plants are used in agriculture and in the food, cosmetics, and pharmaceutical industries. Laboratory-based tests are routinely used to identify and classify similar herb species by analyzing their internal cell structures. In this paper, we have applied computer vision techniques to do the same. The original leaf image was preprocessed using the Chan-Vese active contour segmentation algorithm to remove the background from the image, by setting the contraction bias (v) to -1 and the smoothing factor (µ) to 0.5 and by initializing the contour close to the image boundary. Thereafter the segmented grayscale image was fed to a leaky capacitance fired neuron model (LCFN), which differentiates between similar herbs by combining different groups of pixels in the leaf image. The LCFN's decay constants (f, g) and threshold (h) were empirically set to 0.7, 0.6 and 18, respectively, to generate the 1D feature vector. The LCFN time sequence identified the internal leaf structure at different iterations. Our proposed framework was tested on newly collected natural images of herbal species, including images that vary geometrically in size, orientation and position. The 1D sequence and shape features of aloe, betel, Indian borage, bittergourd, grape, insulin herb, guava, mango, nilavembu, nithiyakalyani, sweet basil and pomegranate were fed into Bayesian regularization neural network (BRNN), K-nearest neighbors (KNN), support vector machine (SVM) and ensemble classifiers under 5-fold cross-validation, with the highest classification accuracy reaching 91.19%.
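
As a rough illustration of the preprocessing step described above, the sketch below applies Chan-Vese segmentation to a leaf image with scikit-image. Note that scikit-image's `chan_vese` exposes the smoothing weight but not the contraction bias reported in the abstract, and its contour initialization differs, so this only approximates that setting; `segment_leaf` and the file name are placeholders.

```python
# A rough approximation of the Chan-Vese preprocessing step described above.
# scikit-image's chan_vese exposes the smoothing weight (mu) but not the
# contraction bias used in the paper, so treat this only as a sketch.
import numpy as np
from skimage import io, color, img_as_float
from skimage.segmentation import chan_vese

def segment_leaf(image_path: str) -> np.ndarray:
    """Return the grayscale leaf image with the background suppressed."""
    gray = color.rgb2gray(img_as_float(io.imread(image_path)))  # grayscale in [0, 1]
    mask = chan_vese(gray, mu=0.5)      # mu=0.5 mirrors the reported smoothing factor
    return gray * mask                  # keep the leaf region, zero out the background

# leaf = segment_leaf('aloe_01.jpg')    # hypothetical file name
```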

Environmental Sound Classification for Selective Noise Cancellation in Industrial Sites (산업현장에서의 선택적 소음 제거를 위한 환경 사운드 분류 기술)

  • Choi, Hyunkook;Kim, Sangmin;Park, Hochong
    • Journal of Broadcast Engineering
    • /
    • v.25 no.6
    • /
    • pp.845-853
    • /
    • 2020
  • In this paper, we propose a method for classifying environmental sound for selective noise cancellation in industrial sites. Noise in industrial sites causes hearing loss in workers, and research on noise cancellation has been widely conducted. However, conventional methods block all sounds and cannot operate optimally for each noise type, because they apply a common cancellation method to all types of noise. In order to perform selective noise cancellation, therefore, we propose a deep-learning-based method for environmental sound classification. The proposed method uses new sets of acoustic features consisting of temporal and statistical properties of the Mel-spectrogram, which overcome the limitations of raw Mel-spectrogram features, and uses a convolutional neural network as a classifier. We apply the proposed method to five-class sound classification with three noise classes and two non-noise classes. We confirm that the proposed method improves classification accuracy by 6.6 percentage points compared with conventional Mel-spectrogram features.
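
The feature idea above (temporal and statistical summaries of the Mel-spectrogram rather than the raw spectrogram) can be sketched as follows with librosa. The exact feature set is not given in the abstract, so the per-band mean, standard deviation, and delta statistics below are assumptions.

```python
# A minimal sketch of deriving statistical summaries from a Mel-spectrogram,
# in the spirit of the feature set described above (the exact features are
# not specified, so these statistics are an assumption).
import numpy as np
import librosa

def mel_statistics(path: str, sr: int = 16000, n_mels: int = 64) -> np.ndarray:
    y, sr = librosa.load(path, sr=sr)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    mel_db = librosa.power_to_db(mel, ref=np.max)        # shape: (n_mels, n_frames)
    delta = np.diff(mel_db, axis=1)                       # temporal change per band
    # Per-band statistics over time: mean, std, and mean absolute delta.
    feats = np.concatenate([mel_db.mean(axis=1),
                            mel_db.std(axis=1),
                            np.abs(delta).mean(axis=1)])
    return feats                                          # shape: (3 * n_mels,)

# x = mel_statistics('grinder_noise.wav')   # hypothetical industrial-site clip
```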

Principal Component analysis based Ambulatory monitoring of elderly (주성분 분석 기반의 노약자 응급 모니터링)

  • Sharma, Annapurna;Lee, Hoon-Jae;Chung, Wan-Young
    • Journal of the Korea Institute of Information and Communication Engineering
    • /
    • v.12 no.11
    • /
    • pp.2105-2110
    • /
    • 2008
  • Embedding compact wearable units to monitor a person's health status has been analysed as a convenient solution for home health care. This paper presents a method to detect falls and distinguish them from other activities of daily living, and also to classify those activities. Such ambulatory monitoring of the elderly and of people with limited mobility can not only provide their general health status but also raise an alarm whenever an emergency such as a fall or abnormal gait occurs and help is needed. Timely assistance in such situations can reduce loss of life. This work presents a detailed analysis of the data received from a chest-worn sensor unit embedding a 3-axis accelerometer and identifies which features are important for classifying human activities. It also shows how to arrange and reduce the features into a new feature set that can be classified using a simple classifier while improving classification resolution. Principal component analysis (PCA) is used to transform the feature set and then to reduce its size. Finally, a neural network classifier is used to analyse the classification accuracies. The accuracy for detecting fall events was 86%, and the overall accuracy for classifying activities of daily living (ADL) and falls was around 94%.
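
A minimal sketch of the PCA-then-neural-network pipeline described above, using scikit-learn; the feature matrix, labels, component count, and network size are illustrative placeholders, not the paper's configuration.

```python
# PCA for feature reduction followed by a small neural network classifier,
# on placeholder accelerometer-style features and activity labels.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 24))                 # placeholder windowed 3-axis features
y = rng.integers(0, 4, size=500)               # placeholder activity labels (incl. fall)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(),
                    PCA(n_components=8),       # reduced feature set
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                  random_state=0))
clf.fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```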

Automatic Recognition of Pitch Accent Using Distributed Time-Delay Recursive Neural Network (분산 시간지연 회귀신경망을 이용한 피치 악센트 자동 인식)

  • Kim Sung-Suk
    • The Journal of the Acoustical Society of Korea
    • /
    • v.25 no.6
    • /
    • pp.277-281
    • /
    • 2006
  • This paper presents a method for the automatic recognition of pitch accents over syllables. The proposed method is based on the time-delay recursive neural network (TDRNN), which is a neural network classifier with two different representations of dynamic context: the delayed input nodes allow the representation of an explicit trajectory F0(t) along time, while the recursive nodes provide long-term context information that reflects the characteristics of pitch accentuation in spoken English. We apply the TDRNN to pitch accent recognition in two forms: in the normal TDRNN, all of the prosodic features (pitch, energy, duration) are used as an entire set in a single TDRNN, while in the distributed TDRNN the network consists of several TDRNNs, each taking a single prosodic feature as its input. The final output of the distributed TDRNN is the weighted sum of the outputs of the individual TDRNNs. We used the Boston Radio News Corpus (BRNC) for experiments on speaker-independent pitch accent recognition. The experimental results show that the distributed TDRNN achieves an average recognition accuracy of 83.64% over both pitch events and non-events.
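
The fusion rule of the distributed architecture, a weighted sum of the per-feature networks' outputs, can be sketched as follows. The per-feature posteriors and weights below are placeholders; this is not a TDRNN implementation.

```python
# Weighted-sum fusion of per-feature classifier outputs, as used by the
# distributed architecture described above. The posteriors are placeholders.
import numpy as np

def fuse(posteriors, weights):
    """Weighted sum of per-feature posteriors, renormalised per syllable."""
    total = sum(weights[name] * p for name, p in posteriors.items())
    return total / total.sum(axis=-1, keepdims=True)

# Example: accent / no-accent posteriors per syllable from three sub-networks.
posteriors = {"pitch":    np.array([[0.7, 0.3], [0.2, 0.8]]),
              "energy":   np.array([[0.6, 0.4], [0.4, 0.6]]),
              "duration": np.array([[0.5, 0.5], [0.3, 0.7]])}
weights = {"pitch": 0.5, "energy": 0.3, "duration": 0.2}   # illustrative weights
print(fuse(posteriors, weights).argmax(axis=-1))           # predicted class per syllable
```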

A Study on Analyzing Sentiments on Movie Reviews by Multi-Level Sentiment Classifier (영화 리뷰 감성분석을 위한 텍스트 마이닝 기반 감성 분류기 구축)

  • Kim, Yuyoung;Song, Min
    • Journal of Intelligence and Information Systems
    • /
    • v.22 no.3
    • /
    • pp.71-89
    • /
    • 2016
  • Sentiment analysis is used for identifying emotions or sentiments embedded in user-generated data such as customer reviews from blogs, social network services, and so on. Various research fields such as computer science and business management can take advantage of it to analyze customer-generated opinions. In previous studies, the star rating of a review is regarded as equivalent to the sentiment embedded in the text; however, the rating does not always correspond to the sentiment polarity. Because of this assumption, previous studies are limited in accuracy. To address this issue, the present study uses a supervised sentiment classification model to measure sentiment polarity more accurately. This study aims to propose an advanced sentiment classifier and to discover the correlation between movie reviews and box-office success. The advanced sentiment classifier is based on two supervised machine learning techniques, Support Vector Machines (SVM) and a Feedforward Neural Network (FNN). The sentiment scores of the movie reviews are measured by the sentiment classifier and analyzed through statistical correlations between movie reviews and box-office success. Movie reviews were collected along with their star ratings. The dataset used in this study consists of 1,258,538 reviews from 175 films gathered from the Naver Movie website (movie.naver.com). The results show that the proposed sentiment classifier outperforms the Naive Bayes (NB) classifier, with accuracy about 6% higher than NB. Furthermore, the results indicate positive correlations between the star rating and the number of audiences, which can be regarded as the box-office success of a movie. The study also shows a mild positive correlation between the sentiment scores estimated by the classifier and the number of audiences. To verify the applicability of the sentiment scores, an independent-samples t-test was conducted: the movies were divided into two groups using the average of the sentiment scores, and the two groups differ significantly in terms of their star ratings.
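
A minimal sketch of a supervised sentiment classifier of the kind described above (here TF-IDF features with a linear SVM rather than the paper's SVM/FNN setup), followed by the independent-samples t-test used to compare two movie groups. The reviews, labels, and group scores are toy placeholders.

```python
# TF-IDF + linear SVM as a stand-in supervised sentiment classifier, plus the
# independent-samples t-test comparing two movie groups. All data are placeholders.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from scipy import stats

reviews = ["Great acting and story", "Boring and too long",
           "Loved every minute", "Waste of time"]
labels = [1, 0, 1, 0]                               # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(reviews, labels)
print(clf.predict(["great story"]))

# Compare star ratings of movies above vs. below the mean sentiment score.
high_group = [8.1, 7.9, 8.4]                        # placeholder star ratings
low_group = [6.2, 6.8, 5.9]
t, p = stats.ttest_ind(high_group, low_group)
print(f"t = {t:.2f}, p = {p:.3f}")
```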

A Time Series Graph based Convolutional Neural Network Model for Effective Input Variable Pattern Learning : Application to the Prediction of Stock Market (효과적인 입력변수 패턴 학습을 위한 시계열 그래프 기반 합성곱 신경망 모형: 주식시장 예측에의 응용)

  • Lee, Mo-Se;Ahn, Hyunchul
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.167-181
    • /
    • 2018
  • Over the past decade, deep learning has been in the spotlight among various machine learning algorithms. In particular, the CNN (Convolutional Neural Network), which is known as an effective solution for recognizing and classifying images or voices, has been popularly applied to classification and prediction problems. In this study, we investigate how to apply CNNs to business problem solving. Specifically, this study proposes to apply a CNN to stock market prediction, one of the most challenging tasks in machine learning research. As mentioned, CNNs have strength in interpreting images. Thus, the model proposed in this study adopts a CNN as a binary classifier that predicts stock market direction (upward or downward) by using time series graphs as its inputs. That is, our proposal is to build a machine learning algorithm that mimics experts called 'technical analysts', who examine graphs of past price movements and predict future price movements. Our proposed model, named 'CNN-FG (Convolutional Neural Network using Fluctuation Graph)', consists of five steps. In the first step, it divides the dataset into intervals of 5 days. It then creates time series graphs for the divided dataset in step 2. The image in which each graph is drawn is 40 × 40 pixels, and the graph of each independent variable is drawn using a different color. In step 3, the model converts the images into matrices: each image is converted into a combination of three matrices in order to express the color values on the R (red), G (green), and B (blue) scales. In the next step, it splits the dataset of graph images into training and validation datasets; we used 80% of the total dataset for training and the remaining 20% for validation. In the final step, the CNN classifiers are trained using the images of the training dataset. Regarding the parameters of CNN-FG, we adopted two convolution filters (5 × 5 × 6 and 5 × 5 × 9) in the convolution layer. In the pooling layer, a 2 × 2 max pooling filter was used. The numbers of nodes in the two hidden layers were set to 900 and 32, respectively, and the number of nodes in the output layer was set to 2 (one for the upward trend and the other for the downward trend). Activation functions for the convolution layers and the hidden layers were set to ReLU (Rectified Linear Unit), and the output layer used the Softmax function. To validate CNN-FG, we applied it to the prediction of the KOSPI200 over 2,026 days in eight years (2009 to 2016). To match the proportions of the two groups in the dependent variable (i.e. tomorrow's stock market movement), we selected 1,950 samples by random sampling. Finally, we built the training dataset from 80% of the total dataset (1,560 samples) and the validation dataset from the remaining 20% (390 samples). The independent variables of the experimental dataset included twelve technical indicators popularly used in previous studies, including Stochastic %K, Stochastic %D, Momentum, ROC (rate of change), LW %R (Larry Williams' %R), A/D oscillator (accumulation/distribution oscillator), OSCP (price oscillator), and CCI (commodity channel index), among others. To confirm the superiority of CNN-FG, we compared its prediction accuracy with that of other classification models. Experimental results showed that CNN-FG outperforms LOGIT (logistic regression), ANN (artificial neural network), and SVM (support vector machine) with statistical significance. These empirical results imply that converting time series business data into graphs and building CNN-based classification models on these graphs can be effective in terms of prediction accuracy. Thus, this paper sheds light on how to apply deep learning techniques to the domain of business problem solving.
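
The hyperparameters reported above translate roughly into the following Keras sketch: two 5 × 5 convolution layers with 6 and 9 filters, 2 × 2 max pooling, hidden layers of 900 and 32 nodes, and a 2-node softmax output over 40 × 40 RGB graph images. Details not stated in the abstract, such as where the pooling layer sits and the optimizer, are assumptions.

```python
# A CNN with the hyperparameters reported above; layer ordering details and the
# optimizer are assumptions, since the abstract does not specify them.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(40, 40, 3)),               # 40x40 RGB fluctuation graph
    layers.Conv2D(6, (5, 5), activation='relu'),
    layers.Conv2D(9, (5, 5), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(900, activation='relu'),
    layers.Dense(32, activation='relu'),
    layers.Dense(2, activation='softmax'),         # upward vs. downward
])
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.summary()
```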

A DDoS Attack Detection Technique through CNN Model in Software Define Network (소프트웨어-정의 네트워크에서 CNN 모델을 이용한 DDoS 공격 탐지 기술)

  • Ko, Kwang-Man
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology
    • /
    • v.13 no.6
    • /
    • pp.605-610
    • /
    • 2020
  • Software Defined Networking (SDN) is setting the standard for network management due to its scalability, flexibility, and ability to program the network. The Distributed Denial of Service (DDoS) attack is the most widely used attack against the SDN controller to bring down the network. Different methodologies have previously been used to detect DDoS attacks. In this paper, a dataset with 84 features is first obtained from Kaggle, and then the 20 highest-ranked features are selected using the permutation importance algorithm. A Convolutional Neural Network (CNN) classifier model is then trained and tested on this dataset using deep learning techniques. Our proposed solution achieved the best results, which will allow critical systems that need more security to adopt and take full advantage of the SDN paradigm without compromising their security.
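
The feature-selection step described above (keeping the 20 highest-ranked of 84 features by permutation importance) might look like the sketch below with scikit-learn. The ranking model (a random forest here) and the synthetic data are assumptions; the paper's Kaggle dataset is not reproduced.

```python
# Select the top-20 features by permutation importance before the CNN stage.
# The ranking model and the synthetic 84-feature data are stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=84, n_informative=20,
                           random_state=0)                 # stand-in for the 84 features
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
top20 = np.argsort(result.importances_mean)[::-1][:20]     # indices of the top-20 features
X_selected = X[:, top20]                                    # reduced input for the CNN stage
print(top20)
```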

Automatic Wood Species Identification of Korean Softwood Based on Convolutional Neural Networks

  • Kwon, Ohkyung;Lee, Hyung Gu;Lee, Mi-Rim;Jang, Sujin;Yang, Sang-Yun;Park, Se-Yeong;Choi, In-Gyu;Yeo, Hwanmyeong
    • Journal of the Korean Wood Science and Technology
    • /
    • v.45 no.6
    • /
    • pp.797-808
    • /
    • 2017
  • Automatic wood species identification systems have enabled fast and accurate identification of wood species outside of specialized laboratories staffed with well-trained wood identification experts. Conventional automatic wood species identification systems consist of two major parts: a feature extractor and a classifier. Feature extractors require hand-engineering to obtain optimal features that quantify the content of an image. A Convolutional Neural Network (CNN), one of the deep learning methods, trained on wood species can extract intrinsic feature representations and classify them correctly, and it usually outperforms classifiers built on top of hand-tuned extracted features. We developed an automatic wood species identification system utilizing CNN models such as LeNet, MiniVGGNet, and their variants. A smartphone camera was used to obtain macroscopic images of rough-sawn surfaces from cross sections of wood. Five Korean softwood species (cedar, cypress, Korean pine, Korean red pine, and larch) were classified by the CNN models. The most accurate and stable CNN model was LeNet3, which adds two layers to the original LeNet architecture. The species identification accuracy of the LeNet3 architecture for the five Korean softwood species was 99.3%. The results showed that the automatic wood species identification system is fast and accurate, and small enough to be deployed on a mobile device such as a smartphone.
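
For orientation, the sketch below shows a LeNet-style Keras model for the five softwood classes. The abstract does not spell out the exact LeNet3 architecture (the original LeNet plus two layers), so the layer sizes and input resolution here are assumptions.

```python
# A LeNet-style sketch for five-class wood image classification; the exact
# LeNet3 configuration is not given, so these layer sizes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),              # macroscopic cross-section patch
    layers.Conv2D(6, (5, 5), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(16, (5, 5), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(120, activation='relu'),
    layers.Dense(84, activation='relu'),
    layers.Dense(5, activation='softmax'),        # cedar, cypress, Korean pine, red pine, larch
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
```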

Weather Recognition Based on 3C-CNN

  • Tan, Ling;Xuan, Dawei;Xia, Jingming;Wang, Chao
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.8
    • /
    • pp.3567-3582
    • /
    • 2020
  • Human activities are often affected by weather conditions, and automatic weather recognition is useful for traffic alerting, driving assistance, and intelligent traffic. With the rise of deep learning and AI, deep convolutional neural networks (CNNs) are utilized to identify weather conditions. In this paper, a three-channel convolutional neural network (3C-CNN) model is proposed on the basis of ResNet50. The model extracts global weather features from the whole image through the ResNet50 branch, and extracts sky and ground features from the top and bottom regions through two CNN5 branches. The global and local features are then merged by a Concat operation. Finally, the weather image is classified by a Softmax classifier and the identification result is output. In addition, a medium-scale dataset named WeatherDataset-6, containing 6,185 outdoor weather images, is established. 3C-CNN is trained and tested on both the Two-class Weather Images dataset and WeatherDataset-6. The experimental results show that 3C-CNN performs best on both datasets, with average recognition accuracies of 94.35% and 95.81% respectively, which is superior to other classic convolutional neural networks such as AlexNet, VGG16, and ResNet50. It is expected that, with further improvement, our method can also work well for images taken at night.
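
The three-branch design described above can be sketched with the Keras functional API: a ResNet50 branch over the whole image and two small CNN branches over the top (sky) and bottom (ground) halves, merged by concatenation. The branch depths, crop sizes, and number of output classes below are assumptions rather than the authors' exact configuration.

```python
# A three-branch model in the spirit of 3C-CNN: ResNet50 on the full image plus
# two small CNN branches on the sky/ground halves. Details are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models, applications

def small_branch(x):
    """A small CNN branch standing in for the paper's CNN5 branches."""
    for filters in (32, 64):
        x = layers.Conv2D(filters, (3, 3), activation='relu', padding='same')(x)
        x = layers.MaxPooling2D((2, 2))(x)
    return layers.GlobalAveragePooling2D()(x)

inputs = layers.Input(shape=(224, 224, 3))
top = layers.Cropping2D(cropping=((0, 112), (0, 0)))(inputs)      # sky region (top half)
bottom = layers.Cropping2D(cropping=((112, 0), (0, 0)))(inputs)   # ground region (bottom half)

resnet = applications.ResNet50(include_top=False, weights=None, input_tensor=inputs)
global_feat = layers.GlobalAveragePooling2D()(resnet.output)

merged = layers.Concatenate()([global_feat, small_branch(top), small_branch(bottom)])
outputs = layers.Dense(6, activation='softmax')(merged)           # assumed 6 weather classes

model = models.Model(inputs, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
```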

Bayesian Texture Segmentation Using Multi-layer Perceptron and Markov Random Field Model (다층 퍼셉트론과 마코프 랜덤 필드 모델을 이용한 베이지안 결 분할)

  • Kim, Tae-Hyung;Eom, Il-Kyu;Kim, Yoo-Shin
    • Journal of the Institute of Electronics Engineers of Korea SP
    • /
    • v.44 no.1
    • /
    • pp.40-48
    • /
    • 2007
  • This paper presents a novel texture segmentation method using multilayer perceptron (MLP) networks and Markov random fields in a multiscale Bayesian framework. Multiscale wavelet coefficients are used as input to the neural networks, and the output of each network is modeled as a posterior probability. Texture classification at each scale is performed from the posterior probabilities of the MLP networks by MAP (maximum a posteriori) classification. Then, in order to obtain an improved segmentation result at the finest scale, the proposed method fuses the multiscale MAP classifications sequentially from coarse to fine scales. This is done by computing the MAP classification given the classification at one scale and prior knowledge of contextual information extracted from the adjacent coarser-scale classification. In this fusion process, an MRF (Markov random field) prior distribution and a Gibbs sampler are used, where the MRF model serves as the smoothness constraint and the Gibbs sampler acts as the MAP classifier. The proposed segmentation method shows better performance than texture segmentation using the HMT (hidden Markov tree) model and HMTseg.
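
A minimal sketch of one scale of the pipeline above: wavelet detail coefficients as MLP input, with the network output interpreted as a posterior probability and an MAP decision taken per patch. The MRF/Gibbs-sampler fusion across scales is not shown, and the data are synthetic placeholders.

```python
# Wavelet coefficients as MLP input; predict_proba is treated as the posterior
# and argmax gives the MAP decision at a single scale. Data are synthetic.
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_features(patch):
    """One-level 2D wavelet transform; detail sub-band energies as features."""
    _, (cH, cV, cD) = pywt.dwt2(patch, 'db2')
    return np.array([np.mean(np.abs(c)) for c in (cH, cV, cD)])

rng = np.random.default_rng(0)
patches = rng.normal(size=(200, 16, 16))            # placeholder texture patches
labels = rng.integers(0, 2, size=200)               # placeholder texture classes

X = np.stack([wavelet_features(p) for p in patches])
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X, labels)
posterior = mlp.predict_proba(X)                    # treated as P(class | coefficients)
map_labels = posterior.argmax(axis=1)               # per-patch MAP decision at this scale
```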