Title/Summary/Keyword: MNIST, Image processing

Search Results: 8

Feature Extraction Using Convolutional Neural Networks for Random Translation (랜덤 변환에 대한 컨볼루션 뉴럴 네트워크를 이용한 특징 추출)

  • Jin, Taeseok
    • Journal of the Korean Society of Industry Convergence / v.23 no.3 / pp.515-521 / 2020
  • Deep learning methods have been used effectively to achieve substantial improvements in research fields such as machine learning, image processing, and computer vision. One of the most frequently used deep learning methods in image processing is the convolutional neural network (CNN). Unlike traditional artificial neural networks, convolutional neural networks do not rely on predefined kernels; instead, they learn data-specific kernels. This property also allows them to be used as feature extractors. In this study, we compared the quality of CNN features against traditional texture feature extraction methods. Experimental results demonstrate the superiority of the CNN features. Additionally, the recognition process and results of a pioneering CNN on the MNIST database are presented.
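A minimal sketch of the feature-extractor idea described in the abstract above: a small LeNet-style CNN whose learned convolutional trunk replaces hand-designed texture kernels. The architecture and sizes are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Learned kernels replace predefined ones (e.g. hand-crafted texture filters).
        self.features = nn.Sequential(
            nn.Conv2d(1, 6, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(6, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
            nn.Linear(120, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

    def extract_features(self, x):
        # The convolutional trunk doubles as a data-specific feature extractor.
        return torch.flatten(self.features(x), start_dim=1)

model = SmallCNN()
batch = torch.randn(8, 1, 28, 28)        # stand-in for a batch of MNIST images
feats = model.extract_features(batch)    # (8, 400) feature vectors
print(feats.shape)
```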

Comparison of Image Classification Algorithms through Incorrect Answers (오답 분석을 통한 이미지 분류 알고리즘의 특징 비교)

  • Sol Kim; Jaehwan Lee
    • Proceedings of the Korea Information Processing Society Conference / 2024.05a / pp.801-802 / 2024
  • This study analyzes the performance of the widely used image classification algorithms ANN (Artificial Neural Network), CNN (Convolutional Neural Network), and DNN (Deep Neural Network) on the MNIST dataset. Unlike previous studies that mainly focus on model accuracy, this study compares the characteristics of the models by examining the incorrect answers each model produces. Through this, we expect to identify the strengths and weaknesses of each model and to improve their performance.
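An illustrative sketch of the error-centered comparison described above (not the authors' code): given predictions from already-trained ANN/DNN/CNN models, collect the misclassified indices and check which errors are shared and which are unique to each model.

```python
import numpy as np

def wrong_indices(y_true, y_pred):
    """Indices of test samples a model got wrong."""
    return set(np.flatnonzero(np.asarray(y_true) != np.asarray(y_pred)))

def compare_errors(y_test, preds_by_model):
    # preds_by_model: {model name -> predicted labels}, assumed to come from
    # previously trained ANN/DNN/CNN classifiers.
    errors = {name: wrong_indices(y_test, p) for name, p in preds_by_model.items()}
    common = set.intersection(*errors.values())
    print(f"errors shared by all models: {len(common)}")
    for name, errs in errors.items():
        others = set.union(*(e for n, e in errors.items() if n != name))
        print(f"{name}: {len(errs)} errors, {len(errs - others)} unique to this model")
    return errors

# Example with dummy labels/predictions:
y_test = [3, 8, 5, 1]
preds = {"ANN": [3, 0, 5, 1], "DNN": [3, 8, 6, 1], "CNN": [3, 8, 5, 7]}
compare_errors(y_test, preds)
```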

Implementation of handwritten digit recognition CNN structure using GPGPU and Combined Layer (GPGPU와 Combined Layer를 이용한 필기체 숫자인식 CNN구조 구현)

  • Lee, Sangil; Nam, Kihun; Jung, Jun Mo
    • The Journal of the Convergence on Culture Technology / v.3 no.4 / pp.165-169 / 2017
  • CNN (Convolutional Neural Network) is one of the machine learning algorithms that shows superior performance in image recognition and classification. CNNs are simple in structure but computationally heavy and therefore slow. In this paper, we parallelize the convolution layer, the pooling layer, and the fully connected layer, which consume most of the processing time in a CNN, using the SIMT (Single Instruction, Multiple Thread) structure of a GPGPU (General-Purpose computing on Graphics Processing Units). We also expect to improve performance by reducing the number of memory accesses: the output of the convolution layer is consumed directly by the pooling layer instead of being stored first. We use the MNIST dataset to verify the experiment and confirm that the proposed CNN structure performs 12.38% better than the existing structure.
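A rough NumPy sketch of the combined-layer idea: each pooled output is computed directly from the convolution windows instead of storing the full convolution feature map first. This only illustrates the memory-access argument; the paper's GPGPU/SIMT kernels are not reproduced here.

```python
import numpy as np

def conv_at(img, kernel, r, c):
    """Single convolution output at position (r, c), 'valid' style."""
    kh, kw = kernel.shape
    return np.sum(img[r:r + kh, c:c + kw] * kernel)

def conv_then_pool(img, kernel, pool=2):
    # Baseline: materialise the whole feature map, then max-pool it.
    kh, kw = kernel.shape
    H, W = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    fmap = np.empty((H, W))
    for r in range(H):
        for c in range(W):
            fmap[r, c] = conv_at(img, kernel, r, c)
    return fmap.reshape(H // pool, pool, W // pool, pool).max(axis=(1, 3))

def combined_conv_pool(img, kernel, pool=2):
    # Combined layer: the pool-window convolutions are computed on the fly and
    # reduced immediately, so the intermediate feature map is never written out.
    kh, kw = kernel.shape
    H, W = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((H // pool, W // pool))
    for r in range(0, H - H % pool, pool):
        for c in range(0, W - W % pool, pool):
            out[r // pool, c // pool] = max(
                conv_at(img, kernel, r + dr, c + dc)
                for dr in range(pool) for dc in range(pool))
    return out

img = np.random.rand(28, 28).astype(np.float32)
kernel = np.random.rand(5, 5).astype(np.float32)
assert np.allclose(conv_then_pool(img, kernel), combined_conv_pool(img, kernel))
```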

Convolutional Neural Networks for Character-level Classification

  • Ko, Dae-Gun; Song, Su-Han; Kang, Ki-Min; Han, Seong-Wook
    • IEIE Transactions on Smart Processing and Computing / v.6 no.1 / pp.53-59 / 2017
  • Optical character recognition (OCR) automatically recognizes text in an image. OCR is still a challenging problem in computer vision, and a successful solution has important device applications, such as text-to-speech conversion and automatic document classification. In this work, we analyze character recognition performance using current state-of-the-art deep-learning structures: AlexNet, LeNet, and SPNet. For this, we built our own dataset containing digits and upper- and lower-case characters. We experiment in the presence of salt-and-pepper noise or Gaussian noise, and report a performance comparison in terms of recognition error. Experimental results with five-fold cross-validation indicate that the SPNet structure (our approach) outperforms AlexNet and LeNet in recognition error.
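A small sketch of the two noise conditions mentioned above, assuming character images scaled to [0, 1]; the noise levels are illustrative, not the paper's settings.

```python
import numpy as np

def salt_and_pepper(img, amount=0.05, rng=None):
    """Corrupt a fraction `amount` of pixels with 0 (pepper) or 1 (salt)."""
    rng = rng or np.random.default_rng()
    noisy = img.copy()
    mask = rng.random(img.shape) < amount
    noisy[mask] = rng.integers(0, 2, size=mask.sum()).astype(img.dtype)
    return noisy

def gaussian_noise(img, sigma=0.1, rng=None):
    """Add zero-mean Gaussian noise and clip back to the [0, 1] range."""
    rng = rng or np.random.default_rng()
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

img = np.random.default_rng(0).random((32, 32))   # stand-in character image in [0, 1]
noisy_sp = salt_and_pepper(img, amount=0.05)
noisy_g = gaussian_noise(img, sigma=0.1)
```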

Efficient Sign Language Recognition and Classification Using African Buffalo Optimization Using Support Vector Machine System

  • Karthikeyan M. P.; Vu Cao Lam; Dac-Nhuong Le
    • International Journal of Computer Science & Network Security / v.24 no.6 / pp.8-16 / 2024
  • Communication with the deaf has always been crucial. Deaf and hard-of-hearing persons can now express their thoughts and opinions to teachers through sign language, which has become a universal language and a very effective tool. This helps to improve their education and simplifies the referral procedure between them and their teachers. Sign language uses various bodily movements, including those of the arms, legs, and face. Pure expressiveness, proximity, and shared interests are examples of nonverbal physical communication that is distinct from gestures conveying a particular message; the meanings of gestures vary with social and cultural background and are quite unique. Sign language recognition is a highly popular and active research area, and the SVM has shown value in it. Research in fields where SVMs struggle has encouraged the development of numerous applications, such as SVMs for enormous data sets, multi-class classification, and unbalanced data sets. Without a precise diagnosis of the signs, the right control measures cannot be applied when they are needed. Image processing is one of the methods frequently used for the identification and categorization of sign languages. In this work, African Buffalo Optimization with a Support Vector Machine (ABO+SVM) classification technique is used to help identify and categorize people's sign languages. Segmentation by K-means clustering is first used to identify the sign region, after which color and texture features are extracted. The accuracy, sensitivity, precision, specificity, and F1-score of the proposed ABO+SVM system are validated against the existing classifiers SVM, CNN, and PSO+ANN.
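A hedged sketch of the segmentation step described in the abstract: K-means clustering on pixel colours to isolate a candidate sign (hand) region. The cluster count and the brightest-cluster heuristic are assumptions for illustration; the ABO-tuned SVM stage is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(image, n_clusters=3, random_state=0):
    """Cluster pixels by colour and return a label map of shape (H, W)."""
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(np.float64)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=random_state).fit_predict(pixels)
    return labels.reshape(h, w)

# Dummy RGB frame standing in for a sign-language image.
frame = np.random.default_rng(0).integers(0, 256, size=(120, 160, 3))
segments = kmeans_segment(frame, n_clusters=3)

# Heuristic (assumed) choice: treat the brightest cluster as the hand region,
# then extract colour/texture features from the masked pixels.
means = [frame[segments == k].mean() for k in range(3)]
hand_mask = segments == int(np.argmax(means))
```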

Information Processing in Primate Retinal Ganglion

  • Je, Sung-Kwan; Cho, Jae-Hyun; Kim, Gwang-Baek
    • Journal of information and communication convergence engineering / v.2 no.2 / pp.132-137 / 2004
  • Most current computer vision theories are based on hypotheses that are difficult to apply to the real world, and they merely imitate a coarse form of the human visual system; as a result, they have not shown satisfying results. In the human visual system there is a mechanism that processes information under memory degradation over time and limited storage space. Starting from research on the human visual system, this study analyzes the mechanism that processes input information as it is transferred from the retina to the ganglion cells. A model of the characteristics of retinal ganglion cells is proposed after considering the structure of the retina and the efficiency of storage space. The MNIST database of handwritten digits is used as the data for this research, with ART2 and SOM as recognizers. The results show that the proposed recognition model does not differ much from a general recognition model in terms of recognition rate, but the efficiency of storage space can be improved by constructing a mechanism that processes the input information.
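A very small self-organizing map (SOM) in NumPy, standing in for the SOM recognizer mentioned above; the grid size and learning schedule are illustrative assumptions, not the paper's settings.

```python
import numpy as np

class TinySOM:
    def __init__(self, grid=(10, 10), dim=784, lr=0.5, sigma=2.0, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.random((grid[0], grid[1], dim))           # codebook vectors
        self.coords = np.stack(np.meshgrid(np.arange(grid[0]),
                                           np.arange(grid[1]),
                                           indexing="ij"), axis=-1)
        self.lr, self.sigma = lr, sigma

    def winner(self, x):
        # Best-matching unit: grid node whose weight vector is closest to x.
        d = np.linalg.norm(self.w - x, axis=-1)
        return np.unravel_index(np.argmin(d), d.shape)

    def train(self, data, epochs=1):
        for _ in range(epochs):
            for x in data:
                bmu = self.winner(x)
                # Nodes near the BMU move toward the input, with a Gaussian
                # falloff over grid distance.
                dist2 = np.sum((self.coords - np.array(bmu)) ** 2, axis=-1)
                h = np.exp(-dist2 / (2 * self.sigma ** 2))[..., None]
                self.w += self.lr * h * (x - self.w)

som = TinySOM()
fake_digits = np.random.default_rng(1).random((100, 784))  # stand-in for MNIST vectors
som.train(fake_digits, epochs=1)
print(som.winner(fake_digits[0]))
```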

A Study on the Optimization of Convolution Operation Speed through FFT Algorithm (FFT 적용을 통한 Convolution 연산속도 향상에 관한 연구)

  • Lim, Su-Chang; Kim, Jong-Chan
    • Journal of Korea Multimedia Society / v.24 no.11 / pp.1552-1559 / 2021
  • Convolutional neural networks (CNNs) show notable performance in image processing and are used as representative core models. CNNs extract and learn features from large training datasets. In general, a CNN is structured as stacked convolution layers followed by fully connected layers, and its core is the convolution layer. The size of the kernels used for feature extraction and their number, which determines the depth of the feature map, decide the amount of learnable weight parameters of the CNN. These parameters are the main cause of the increased computational complexity and memory usage of the entire neural network, and the most computationally expensive components in CNNs are the fully connected and spatial convolution computations. In this paper, we propose a Fourier Convolution Neural Network that performs the convolution-layer operation in the Fourier domain, reducing the amount of computation by applying the fast Fourier transform (FFT). On the MNIST dataset, accuracy was similar to that of a conventional CNN, while the operation speed was 7.2% faster. In experiments with 1024x1024 images and various kernel sizes, the speed-up averaged 19%.
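A short sketch of the convolution theorem this paper exploits: spatial convolution equals pointwise multiplication in the Fourier domain, so an FFT-based convolution matches the direct result while needing fewer multiplications for large kernels. The sizes below are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve, convolve2d

rng = np.random.default_rng(0)
image = rng.random((28, 28))
kernel = rng.random((5, 5))

direct = convolve2d(image, kernel, mode="valid")     # spatial-domain convolution
via_fft = fftconvolve(image, kernel, mode="valid")   # FFT-based convolution
assert np.allclose(direct, via_fft)                  # same result

# The same thing done by hand: zero-pad, FFT, multiply, inverse FFT, crop.
full_shape = (image.shape[0] + kernel.shape[0] - 1,
              image.shape[1] + kernel.shape[1] - 1)
spectrum = np.fft.rfft2(image, full_shape) * np.fft.rfft2(kernel, full_shape)
full = np.fft.irfft2(spectrum, full_shape)
manual = full[kernel.shape[0] - 1:image.shape[0], kernel.shape[1] - 1:image.shape[1]]
assert np.allclose(manual, direct)
```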

Deep Learning Model Validation Method Based on Image Data Feature Coverage (영상 데이터 특징 커버리지 기반 딥러닝 모델 검증 기법)

  • Lim, Chang-Nam; Park, Ye-Seul; Lee, Jung-Won
    • KIPS Transactions on Software and Data Engineering / v.10 no.9 / pp.375-384 / 2021
  • Deep learning techniques have proven to have high performance in image processing and are applied in various fields. The most widely used methods for validating a deep learning model include holdout verification, k-fold cross-validation, and the bootstrap method. These legacy methods consider the balance between classes when dividing the data set, but do not consider the ratio of the various features that exist within the same class; if these features are not considered, verification results may be biased toward some features. We therefore propose a deep learning model validation method for image classification based on data feature coverage, improving on the legacy methods. The proposed technique defines a data feature coverage measure that quantifies how well the training and evaluation data sets reflect the features of the entire data set. With this method, the data set can be divided while ensuring coverage of all features of the entire data set, and the evaluation result of the model can be analyzed in units of feature clusters. As a result, by providing feature-cluster information for the evaluation of the trained model, feature information of the data that affects the trained model can be provided.
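A hedged sketch of the coverage idea (not the authors' implementation): cluster the image features, then split the data so every feature cluster appears in both the training and evaluation sets, instead of stratifying by class label alone. The cluster count and split ratio are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split

def coverage_split(features, n_feature_clusters=8, test_size=0.2, seed=0):
    # Cluster the extracted features; each cluster stands for one "feature" group.
    clusters = KMeans(n_clusters=n_feature_clusters, n_init=10,
                      random_state=seed).fit_predict(features)
    # Stratify on the feature cluster so every cluster is covered in both splits;
    # evaluation results can later be broken down per cluster.
    idx_train, idx_test = train_test_split(
        np.arange(len(features)), test_size=test_size,
        stratify=clusters, random_state=seed)
    return idx_train, idx_test, clusters

rng = np.random.default_rng(0)
feats = rng.random((500, 64))                 # stand-in for extracted image features
tr, te, clusters = coverage_split(feats)
covered = set(clusters[tr]) & set(clusters[te])
print(f"feature clusters covered in both splits: {len(covered)} of 8")
```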