• Title/Summary/Keyword: Input preprocessing


Detection of Number and Character Area of License Plate Using Deep Learning and Semantic Image Segmentation (딥러닝과 의미론적 영상분할을 이용한 자동차 번호판의 숫자 및 문자영역 검출)

  • Lee, Jeong-Hwan
    • Journal of the Korea Convergence Society / v.12 no.1 / pp.29-35 / 2021
  • License plate recognition plays a key role in intelligent transportation systems, so efficiently detecting the number and character areas is a very important step. In this paper, we propose a method to effectively detect the license plate number area by applying deep learning and a semantic image segmentation algorithm. The proposed method detects number and character areas directly from the license plate without preprocessing such as pixel projection. The license plate images were acquired from a fixed camera installed on the road and cover various real situations, taking both weather and lighting changes into account. The input images were normalized to reduce color variation, and the deep neural networks used in the experiment were VGG16, VGG19, ResNet18, and ResNet50. To examine the performance of the proposed method, we experimented with 500 license plate images: 300 were used for training and 200 for testing. In computer simulations, ResNet50 performed best, achieving 95.77% accuracy.
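
The abstract above mentions normalizing the input images to reduce color variation before feeding them to the VGG/ResNet backbones. Below is a minimal Python sketch of one plausible reading of that step (per-channel standardization); the authors' exact normalization is not specified in the abstract.

```python
import numpy as np

def normalize_plate_image(img: np.ndarray) -> np.ndarray:
    """Per-channel zero-mean, unit-variance normalization of an RGB plate image.

    Only one plausible reading of the "normalized to reduce color change" step;
    the authors' exact normalization is not given in the abstract.
    """
    img = img.astype(np.float32)
    mean = img.mean(axis=(0, 1), keepdims=True)        # per-channel mean
    std = img.std(axis=(0, 1), keepdims=True) + 1e-8   # avoid division by zero
    return (img - mean) / std
```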

Research on Subword Tokenization of Korean Neural Machine Translation and Proposal for Tokenization Method to Separate Jongsung from Syllables (한국어 인공신경망 기계번역의 서브 워드 분절 연구 및 음절 기반 종성 분리 토큰화 제안)

  • Eo, Sugyeong;Park, Chanjun;Moon, Hyeonseok;Lim, Heuiseok
    • Journal of the Korea Convergence Society / v.12 no.3 / pp.1-7 / 2021
  • Since Neural Machine Translation (NMT) uses only a limited number of words, words that are not registered in the dictionary may appear in the input. A common approach to alleviating this Out-of-Vocabulary (OOV) problem is subword tokenization, a methodology that constructs words by dividing sentences into subword units smaller than words. In this paper, we examine general subword tokenization algorithms. Furthermore, in order to create a vocabulary that can handle the virtually unlimited conjugation of Korean adjectives and verbs, we propose a new methodology for subword tokenization training that separates the Jongsung (coda) from Korean syllables (each consisting of Chosung/onset, Jungsung/nucleus, and Jongsung/coda). Experimental results show that the methodology proposed in this paper outperforms existing subword tokenization methodologies.
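
A minimal Python sketch of the coda-separation idea described above, using standard Unicode Hangul decomposition: each precomposed syllable is rewritten as the same syllable with its coda removed, followed by the coda jamo. The output convention here is an assumption, not the authors' exact tokenization scheme.

```python
HANGUL_BASE, NUM_JONG = 0xAC00, 28
# 27 coda (jongsung) jamo; index 0 means "no coda"
JONG_JAMO = [''] + [chr(0x11A8 + i) for i in range(27)]

def separate_jongsung(text: str) -> str:
    """Rewrite each Hangul syllable as a coda-less syllable plus its coda jamo,
    prior to subword (e.g. BPE / SentencePiece) training."""
    out = []
    for ch in text:
        code = ord(ch)
        if 0xAC00 <= code <= 0xD7A3:                  # precomposed Hangul syllable
            idx = code - HANGUL_BASE
            jong = idx % NUM_JONG
            out.append(chr(HANGUL_BASE + idx - jong))  # syllable with coda removed
            if jong:
                out.append(JONG_JAMO[jong])            # coda as a separate jamo
        else:
            out.append(ch)
    return ''.join(out)

print(separate_jongsung("먹었다"))  # codas of '먹' and '었' become separate jamo
```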

The Effect of Type of Input Image on Accuracy in Classification Using Convolutional Neural Network Model (컨볼루션 신경망 모델을 이용한 분류에서 입력 영상의 종류가 정확도에 미치는 영향)

  • Kim, Min Jeong;Kim, Jung Hun;Park, Ji Eun;Jeong, Woo Yeon;Lee, Jong Min
    • Journal of Biomedical Engineering Research / v.42 no.4 / pp.167-174 / 2021
  • The purpose of this study is to classify TIFF, PNG, and JPEG images using deep learning and to compare accuracy by verifying the classification performance. TIFF, PNG, and JPEG images converted from chest X-ray DICOM images were applied to five deep neural network models commonly used for image recognition and classification. The data consisted of a total of 4,000 X-ray images, converted from DICOM into 16-bit TIFF images and 8-bit PNG and JPEG images. The learning models are the CNN models VGG16, ResNet50, InceptionV3, DenseNet121, and EfficientNetB0. For TIFF images, the accuracies of the five convolutional neural network models are 99.86%, 99.86%, 99.99%, 100%, and 99.89%; for PNG images, 99.88%, 100%, 99.97%, 99.87%, and 100%; and for JPEG images, 100%, 100%, 99.96%, 99.89%, and 100%. Validation of classification performance on test data showed 100% accuracy, precision, recall, and F1 score. Our results show that when DICOM images are converted to TIFF, PNG, or JPEG and trained after preprocessing, learning works well in all formats. In medical imaging research using deep learning, classification performance is not affected by the format into which DICOM images are converted.
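
A rough Python sketch of the format-conversion preprocessing described above, assuming pydicom and Pillow; windowing and the exact rescaling the authors applied are assumptions.

```python
import numpy as np
import pydicom
from PIL import Image

def convert_dicom(dicom_path: str, out_stem: str) -> None:
    """Convert one chest X-ray DICOM into a 16-bit TIFF and 8-bit PNG/JPEG."""
    pixels = pydicom.dcmread(dicom_path).pixel_array.astype(np.float32)
    # Min-max rescale, then map to the 16-bit and 8-bit ranges respectively.
    norm = (pixels - pixels.min()) / (pixels.max() - pixels.min() + 1e-8)
    Image.fromarray((norm * 65535).astype(np.uint16)).save(f"{out_stem}.tiff")
    img8 = Image.fromarray((norm * 255).astype(np.uint8))
    img8.save(f"{out_stem}.png")
    img8.save(f"{out_stem}.jpg", quality=95)
```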

A technique for predicting the cutting points of fish for the target weight using AI machine vision

  • Jang, Yong-hun;Lee, Myung-sub
    • Journal of the Korea Society of Computer and Information / v.27 no.4 / pp.27-36 / 2022
  • In this paper, to improve conditions at fish processing sites, we propose a method to predict the cutting point of a fish for a target weight using AI machine vision. The proposed method first photographs the top and front views of the input fish and performs image-based preprocessing. RANSAC (RANdom SAmple Consensus) is then used to extract the fish contour, and 3D external information about the fish is obtained through 3D modeling. Next, machine learning is performed on the extracted three-dimensional feature information and the measured weight information to build a neural network model. The fish is then cut at the cutting point predicted by the proposed technique, and the weight of the cut piece is measured. We compared the measured weight with the target weight and evaluated performance using metrics such as MAE (Mean Absolute Error) and MRE (Mean Relative Error). The results indicate an average error rate of less than 3% relative to the target weight. The proposed technique is expected to contribute greatly to the development of the fishery industry when integrated with automation systems.
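
A loose Python sketch of the regression-then-search idea implied above: a small neural network maps 3D shape features to piece weight, and candidate cut positions are scanned for the one whose predicted weight is closest to the target. The feature layout, placeholder data, and network sizes are assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder training data: per-piece 3D shape features (e.g. lengths, widths,
# heights sampled along the body) paired with measured piece weights in grams.
# Real features would come from the RANSAC contour + 3D modeling stage.
rng = np.random.default_rng(0)
X_train = rng.random((300, 8))
y_train = 50 + 400 * X_train.mean(axis=1)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

def predict_cut_index(candidate_features: np.ndarray, target_weight: float) -> int:
    """Return the candidate cut position whose predicted piece weight is
    closest to the target weight. Purely illustrative of the idea."""
    preds = model.predict(candidate_features)
    return int(np.argmin(np.abs(preds - target_weight)))

cut = predict_cut_index(rng.random((20, 8)), target_weight=250.0)
```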

Electric Power Demand Prediction Using Deep Learning Model with Temperature Data (기온 데이터를 반영한 전력수요 예측 딥러닝 모델)

  • Yoon, Hyoup-Sang;Jeong, Seok-Bong
    • KIPS Transactions on Software and Data Engineering / v.11 no.7 / pp.307-314 / 2022
  • Recently, research using deep learning-based models has been actively conducted to replace statistical time series forecasting techniques for predicting electric power demand. Analysis of this research shows that the performance of LSTM-based prediction models is acceptable but not sufficient for long-term, region-wide power demand prediction. In this paper, we propose a WaveNet deep learning model that predicts electric power demand 24 hours ahead using temperature data, in order to achieve prediction accuracy better than the 2% MAPE that statistical time series forecasting techniques can provide. First, we describe WaveNet's dilated causal one-dimensional convolutional neural network architecture and the preprocessing of the electric power demand and temperature input data. Second, we present the training process and walk-forward validation with the modified WaveNet. The performance comparison shows that the prediction model with temperature data achieves a MAPE of 1.33%, better than the MAPE of 2.33% for the same model without temperature data.
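
A minimal Keras sketch of a dilated causal 1D convolution stack in the spirit of WaveNet, as described above. The inputs would be past demand plus temperature; filter counts, dilation rates, and the 24-step output head are assumptions, not the paper's tuned configuration.

```python
import tensorflow as tf

def build_wavenet_like(n_steps: int, n_features: int) -> tf.keras.Model:
    """Stack of dilated causal Conv1D layers with exponentially growing dilation."""
    inputs = tf.keras.Input(shape=(n_steps, n_features))
    x = inputs
    for rate in (1, 2, 4, 8, 16):
        x = tf.keras.layers.Conv1D(32, kernel_size=2, padding="causal",
                                   dilation_rate=rate, activation="relu")(x)
    x = tf.keras.layers.Conv1D(1, kernel_size=1)(x)               # per-step projection
    outputs = tf.keras.layers.Lambda(lambda t: t[:, -24:, 0])(x)  # last 24 hours
    return tf.keras.Model(inputs, outputs)

model = build_wavenet_like(n_steps=168, n_features=2)  # one week of hourly inputs
model.compile(optimizer="adam", loss="mae")
```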

An Inference System Using BIG5 Personality Traits for Filtering Preferred Resource

  • Jong-Hyun, Park
    • Journal of the Korea Society of Computer and Information / v.28 no.1 / pp.9-16 / 2023
  • In the IoT environment, various objects interact with one another, and various services can be composed on top of this environment. In a previous study, we developed a resource collaboration system that provides services by substituting for the limited resources of the user's personal device through resource collaboration. However, in that system, the inference time increases exponentially as the number of resources and situations grows. To solve this problem, this study proposes a method of classifying users and resources by applying the BIG5 user type classification model. We propose reducing the inference time by filtering the user's preferred resources through BIG5 type-based preprocessing and using the filtered resources as input to the recommendation system. We implement the proposed method as a prototype system and demonstrate the validity of our approach through performance and user satisfaction evaluations.
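
A toy Python sketch of the preference-filtering step described above: candidate resources are narrowed by the user's BIG5-derived type before recommendation, so the recommender reasons over a smaller set. The type-to-category mapping and resource records here are hypothetical; the actual classification model and categories are not described in the abstract.

```python
# Hypothetical mapping from a BIG5-derived user type to preferred resource categories.
PREFERRED_BY_TYPE = {
    "high_openness": {"display", "speaker", "camera"},
    "high_conscientiousness": {"display", "printer"},
}

def filter_resources(user_type, resources):
    """Keep only resources whose category the user's BIG5 type prefers."""
    preferred = PREFERRED_BY_TYPE.get(user_type, set())
    return [r for r in resources if r["category"] in preferred]

candidates = filter_resources(
    "high_openness",
    [{"id": "tv-1", "category": "display"}, {"id": "fan-2", "category": "hvac"}],
)
# candidates -> [{'id': 'tv-1', 'category': 'display'}]
```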

Sex determination from lateral cephalometric radiographs using an automated deep learning convolutional neural network

  • Khazaei, Maryam;Mollabashi, Vahid;Khotanlou, Hassan;Farhadian, Maryam
    • Imaging Science in Dentistry / v.52 no.3 / pp.239-244 / 2022
  • Purpose: Despite the proliferation of numerous morphometric and anthropometric methods for sex identification based on linear, angular, and regional measurements of various parts of the body, these methods are subject to error due to the observer's knowledge and expertise. This study aimed to explore the possibility of automated sex determination using convolutional neural networks (CNNs) based on lateral cephalometric radiographs. Materials and Methods: Lateral cephalometric radiographs of 1,476 Iranian subjects (794 women and 682 men) from 18 to 49 years of age were included. The radiographs served as the network input, with an output layer comprising 2 classes (male and female). Eighty percent of the data was used as a training set and the rest as a test set. Hyperparameter tuning of each network was done after the preprocessing and data augmentation steps. The predictive performance of different architectures (DenseNet, ResNet, and VGG) was evaluated based on their accuracy on the test set. Results: The CNN based on the DenseNet121 architecture, with an overall accuracy of 90%, had the best predictive power for sex determination. The prediction accuracy of this model was almost equal for men and women. Furthermore, with all architectures, the use of transfer learning improved predictive performance. Conclusion: The results confirmed that a CNN can predict a person's sex with high accuracy. This prediction was independent of human bias because feature extraction was done automatically. However, for more accurate sex determination on a wider scale, further studies with larger sample sizes are desirable.
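
A minimal Keras transfer-learning sketch with DenseNet121 and a two-class (male/female) head, as in the best-performing setup above. The input size, head layers, and hyperparameters are assumptions, not the study's tuned values.

```python
import tensorflow as tf

base = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze pretrained features, train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # 2 classes: male / female
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```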

A Typo Correction System Using Artificial Neural Networks for a Text-based Ornamental Fish Search Engine

  • Hyunhak Song;Sungyoon Cho;Wongi Jeon;Kyungwon Park;Jaedong Shim;Kiwon Kwon
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.8 / pp.2278-2291 / 2023
  • Imported ornamental fish should be quarantined because they can carry dangerous diseases depending on their habitat. Quarantine requires a lot of time because quarantine officers must collect various information on the imported ornamental fish. Inefficient quarantine processes reduce work efficiency and accuracy, and long quarantine periods cause the death of environmentally sensitive ornamental fish and large financial losses. To improve existing quarantine systems, information on ornamental fish was collected and structured, and a server was established to develop quarantine performance support software equipped with a text search engine. However, the generally long names of ornamental fish can cause many typos and time bottlenecks when typing search terms for the target fish information. Therefore, a technique that can correct typos is needed. Typical typo correction compares the input text against every entry in a dictionary of candidate corrections. However, this approach requires computational power proportional to the number of typos, resulting in slow processing and low correction accuracy. Therefore, to improve character correction accuracy, we propose a fusion system of simple Artificial Neural Network (ANN) models and character preprocessing methods that accelerates the process by minimizing the models' computation. We also propose a typo generation method used to train the ANN models. Simulation results show that the proposed typo correction system is about 6 times faster than the conventional method and has 10% higher accuracy.
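
A toy Python sketch of the synthetic typo-generation idea mentioned above, producing single-edit corruptions of correct fish names for training. The keyboard-adjacency map and the set of edit operations are assumptions; the paper's actual typo-generation rules are not described in the abstract.

```python
import random

# Tiny sample of keyboard-adjacency substitutions; a real map would cover the keyboard.
KEY_NEIGHBORS = {"a": "qs", "e": "wr", "o": "ip", "n": "bm"}

def make_typo(word: str, rng: random.Random) -> str:
    """Produce one synthetic typo (substitution, deletion, or transposition)."""
    if len(word) < 2:
        return word
    i = rng.randrange(len(word) - 1)
    op = rng.choice(["substitute", "delete", "transpose"])
    if op == "substitute" and word[i] in KEY_NEIGHBORS:
        return word[:i] + rng.choice(KEY_NEIGHBORS[word[i]]) + word[i + 1:]
    if op == "delete":
        return word[:i] + word[i + 1:]
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]   # swap characters i, i+1

rng = random.Random(0)
pairs = [(make_typo("neocaridina davidi", rng), "neocaridina davidi") for _ in range(5)]
```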

Image Restoration Filter using Combined Weight in Mixed Noise Environment (복합잡음 환경에서 결합가중치를 이용한 영상복원 필터)

  • Cheon, Bong-Won;Kim, Nam-Ho
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2021.05a / pp.210-212 / 2021
  • In modern society, various digital devices are becoming widespread under the influence of the 4th industrial revolution, and they are used in a wide range of fields such as automated processes, intelligent CCTV, the medical industry, robots, and drones. Accordingly, the importance of preprocessing in image-based systems is increasing, and algorithms that effectively restore images are drawing attention. In this paper, we propose a filter algorithm based on a combined weight to restore images in a mixed noise environment. The proposed algorithm calculates a weight based on spatial distance and a weight based on the difference between the input image's pixel values and the pixel values inside the filtering mask. The final output is obtained by applying to the mask the combined weight computed from these two weights. To verify the performance of the proposed algorithm, we ran simulations comparing it with existing filter algorithms.
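
A minimal Python sketch of a mask filter whose weights combine spatial distance and pixel-value difference, in the bilateral-filter style suggested by the description above. The Gaussian weight forms and parameter values are assumptions; the paper's exact weighting is not given.

```python
import numpy as np

def combined_weight_filter(img, ksize=5, sigma_s=2.0, sigma_r=25.0):
    """Filter a grayscale image with weights = spatial weight * range weight."""
    img = np.asarray(img, dtype=np.float32)
    pad = ksize // 2
    ys, xs = np.mgrid[-pad:pad + 1, -pad:pad + 1]
    w_spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # distance weight
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + ksize, j:j + ksize]
            w_range = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r**2))
            w = w_spatial * w_range                            # combined weight
            out[i, j] = (w * patch).sum() / w.sum()
    return out
```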

Artificial intelligence-based blood pressure prediction using photoplethysmography signals

  • Yonghee Lee;YongWan Ju;Jundong Lee
    • Journal of the Korea Society of Computer and Information / v.28 no.11 / pp.155-160 / 2023
  • This paper presents a method for predicting blood pressure using photoplethysmography signals. First, the optical blood flow signal is measured, artifacts are removed through a preprocessing step, and a signal suitable for learning is obtained. In addition, weight and height, which affect blood pressure, are measured as supplementary information. Next, a system is built that estimates systolic and diastolic blood pressure by learning from the photoplethysmography signals, height, and weight as input variables through an artificial intelligence algorithm. The constructed system predicts systolic and diastolic blood pressure from these inputs. The proposed method can continuously predict blood pressure in real time in an unconstrained manner, by receiving photoplethysmography signals that reflect the state of the heart and blood vessels together with the subject's height and weight. To confirm the usefulness of the artificial intelligence-based blood pressure prediction system presented in this study, the results are verified by comparing measured blood pressure with predicted blood pressure.
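
A minimal Keras sketch of a model that takes a PPG window plus height and weight and outputs systolic and diastolic blood pressure, as outlined above. The window length, feature extraction, and layer sizes are assumptions; the abstract does not specify the authors' architecture.

```python
import tensorflow as tf

ppg_in = tf.keras.Input(shape=(500, 1), name="ppg")         # fixed-length PPG window
body_in = tf.keras.Input(shape=(2,), name="height_weight")  # height, weight

x = tf.keras.layers.Conv1D(16, 7, activation="relu")(ppg_in)
x = tf.keras.layers.GlobalAveragePooling1D()(x)
x = tf.keras.layers.Concatenate()([x, body_in])
x = tf.keras.layers.Dense(32, activation="relu")(x)
out = tf.keras.layers.Dense(2, name="sbp_dbp")(x)           # systolic, diastolic

model = tf.keras.Model([ppg_in, body_in], out)
model.compile(optimizer="adam", loss="mae")
```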