• Title/Summary/Keyword: Gradient descent


A Unicode based Deep Handwritten Character Recognition model for Telugu to English Language Translation

  • BV Subba Rao;J. Nageswara Rao;Bandi Vamsi;Venkata Nagaraju Thatha;Katta Subba Rao
    • International Journal of Computer Science & Network Security
    • /
    • v.24 no.2
    • /
    • pp.101-112
    • /
    • 2024
  • Telugu is considered the fourth most widely used language in India, spoken mainly in Andhra Pradesh, Telangana, and Karnataka, and its use is also growing internationally. The language comprises dependent and independent vowels, consonants, and digits. Even so, Telugu Handwritten Character Recognition (HCR) has not advanced much. HCR is a neural-network technique for converting a document image into editable text that can serve many other applications, saving the time and effort of starting from scratch each time. In this work, a Unicode-based Handwritten Character Recognition (U-HCR) model is developed for translating handwritten Telugu characters into English. Using the Centre of Gravity (CG), the model splits a compound character into individual characters with the help of Unicode values. Both online and offline Telugu character datasets were used for training. Features are extracted from the scanned image with a convolutional neural network combined with machine-learning classifiers such as Random Forest and Support Vector Machine. Stochastic Gradient Descent (SGD), Root Mean Square Propagation (RMS-P), and Adaptive Moment Estimation (ADAM) optimizers are used to enhance the performance of U-HCR and reduce the loss value. On both the online and offline datasets, the proposed model showed promising results, with accuracies of 90.28% for SGD, 96.97% for RMS-P, and 93.57% for ADAM.
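
The abstract names three gradient-based optimizers. As a hedged illustration (not the authors' code), the sketch below shows the corresponding update rules applied to a toy one-parameter quadratic loss; the learning rates, decay factors, and step counts are illustrative defaults, not the paper's settings.

```python
# Minimal sketch of the SGD, RMSProp, and Adam update rules on f(w) = (w - 3)^2.
import numpy as np

def grad(w):
    return 2.0 * (w - 3.0)                      # d/dw (w - 3)^2

def sgd(w, lr=0.1, steps=1000):
    for _ in range(steps):
        w -= lr * grad(w)
    return w

def rmsprop(w, lr=0.01, rho=0.9, eps=1e-8, steps=1000):
    v = 0.0
    for _ in range(steps):
        g = grad(w)
        v = rho * v + (1 - rho) * g * g         # running average of squared gradient
        w -= lr * g / (np.sqrt(v) + eps)
    return w

def adam(w, lr=0.01, b1=0.9, b2=0.999, eps=1e-8, steps=1000):
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g               # first moment
        v = b2 * v + (1 - b2) * g * g           # second moment
        m_hat = m / (1 - b1 ** t)               # bias correction
        v_hat = v / (1 - b2 ** t)
        w -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

print(sgd(0.0), rmsprop(0.0), adam(0.0))        # all should approach the minimum at 3.0
```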

A Study on the Efficacy of Edge-Based Adversarial Example Detection Model: Across Various Adversarial Algorithms

  • Jaesung Shim;Kyuri Jo
    • Journal of the Korea Society of Computer and Information
    • /
    • v.29 no.2
    • /
    • pp.31-41
    • /
    • 2024
  • Deep learning models show excellent performance in computer-vision tasks such as image classification and object detection and are widely used in industry. Recently, it has been pointed out that these models are vulnerable to adversarial examples, and research on improving their robustness has been active. An adversarial example is an image to which small noise has been added to induce misclassification, and it can pose a significant threat when a deep learning model is deployed in a real environment. In this paper, we examine the robustness of an edge-learning classification model, and the performance of an adversarial-example detection model built on it, against adversarial examples generated by various algorithms. In the robustness experiments, the baseline classification model showed about 17% accuracy against the FGSM algorithm, while the edge-learning models maintained accuracy in the 60-70% range; against the PGD/DeepFool/CW algorithms the baseline model showed accuracy in the 0-1% range, while the edge-learning models maintained 80-90%. In the adversarial-example detection experiment, a high detection rate of 91-95% was confirmed for all of the FGSM/PGD/DeepFool/CW algorithms. By demonstrating a defense against various adversarial algorithms, this study is expected to improve the safety and reliability of deep learning models in industries that use computer vision.
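
As a hedged illustration of the gradient-based FGSM attack mentioned in the abstract (not the paper's setup or models), the sketch below perturbs an input of a toy logistic-regression classifier in the direction of the sign of the input gradient; the weights, input, and epsilon are made-up values.

```python
# FGSM sketch: x_adv = x + eps * sign(dL/dx) for a logistic-regression "model".
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
w = rng.normal(size=8)                  # hypothetical trained weights
b = 0.1
x = rng.normal(size=8)                  # a clean input sample
y = 1.0                                 # its true label

# Gradient of the cross-entropy loss w.r.t. the input x:
# dL/dx = (p - y) * w for a linear logit z = w.x + b
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

eps = 0.25                              # perturbation budget (illustrative)
x_adv = x + eps * np.sign(grad_x)       # step that increases the loss

print("clean score:", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))
```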

Classification of Transport Vehicle Noise Events in Magnetotelluric Time Series Data in an Urban area Using Random Forest Techniques (Random Forest 기법을 이용한 도심지 MT 시계열 자료의 차량 잡음 분류)

  • Kwon, Hyoung-Seok;Ryu, Kyeongho;Sim, Ickhyeon;Lee, Choon-Ki;Oh, Seokhoon
    • Geophysics and Geophysical Exploration
    • /
    • v.23 no.4
    • /
    • pp.230-242
    • /
    • 2020
  • We performed a magnetotelluric (MT) survey to delineate the geological structures below a depth of 20 km in the Gyeongju area, where an earthquake with a magnitude of 5.8 occurred in September 2016. The measured MT data were severely distorted by electrical noise from subways, power lines, factories, houses, and farmlands, and by vehicle noise from passing trains and large trucks. Using machine-learning methods, we classified the MT time series data obtained near the railway and highway into two groups according to whether they contained traffic noise. We applied three schemes, stochastic gradient descent, support vector machine, and random forest, to the time series data for the high-speed train noise. We formulated three datasets, Hx, Hy, and Hx & Hy, for the time series data of the large-truck noise and applied the random forest method to each dataset. To evaluate the effect of removing the traffic noise, we compared the time series data, amplitude spectra, and apparent resistivity curves before and after removal. We also examined the frequency range affected by traffic noise and whether artifact noise was introduced by residual differences during the removal process.
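
A hedged, minimal sketch of the three classifiers named in the abstract applied to synthetic stand-ins for noisy versus clean time-series windows; the simulated windows, summary features, and hyperparameters are invented for illustration and are not the survey data or the authors' settings.

```python
# Classify synthetic windows with/without burst noise using SGD, SVM, and random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import SGDClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_windows, length = 400, 128
clean = rng.normal(0.0, 1.0, size=(n_windows // 2, length))            # background only
spikes = rng.normal(0.0, 3.0, size=clean.shape) * (rng.random(clean.shape) < 0.1)
noisy = clean + spikes                                                  # traffic-like bursts

def window_features(w):
    # simple per-window statistics standing in for real MT features
    return np.column_stack([w.std(axis=1),
                            np.abs(w).max(axis=1),
                            np.abs(np.diff(w, axis=1)).mean(axis=1)])

X = window_features(np.vstack([clean, noisy]))
y = np.r_[np.zeros(n_windows // 2), np.ones(n_windows // 2)]            # 1 = contains noise

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
for name, clf in [("SGD", SGDClassifier(max_iter=1000, random_state=0)),
                  ("SVM", SVC(kernel="rbf")),
                  ("RandomForest", RandomForestClassifier(n_estimators=200, random_state=0))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(clf.score(X_te, y_te), 3))
```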

A Relief Method to Obtain the Solution of Optimal Problems (최적화문제를 해결하기 위한 완화(Relief)법)

  • Song, Jeong-Young;Lee, Kyu-Beom;Jang, Jigeul
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.1
    • /
    • pp.155-161
    • /
    • 2020
  • In general, optimization problems are not easy to solve directly: a simple problem can be solved at once, but as the problem grows more complex the number of cases to consider becomes very large. This study concerns the optimization of AI neural networks, and in particular relief methods for constructing an AI network. The main topics are non-deterministic issues such as the stability and instability of the overall network state and the reduction of cost and energy. To this end we discuss associative memory models, that is, methods that keep memory retrieval from selecting spurious information stored at local minima; simulated annealing, a method of estimating a direction with a lower possible value and combining it with the previous state so that the solution is revised toward a lower value; and nonlinear programming problems, in which an appropriate gradient descent method is applied to check and correct the inputs and outputs so as to minimize objective functions with a very large number of cases. This research presents the relief method as a useful theoretical approach to solving optimization problems and should therefore be a good proposal for constructing new AI neural networks efficiently.
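
As a hedged illustration of the two optimization ideas the abstract contrasts, the sketch below runs plain gradient descent and simulated annealing on a multimodal toy objective; the objective, cooling schedule, and step sizes are illustrative choices, not the paper's formulation.

```python
# Gradient descent can stall in a local minimum; simulated annealing can escape it.
import numpy as np

def f(x):                       # multimodal objective with its global minimum near x = 2
    return np.sin(3 * x) + 0.3 * (x - 2) ** 2

def df(x):                      # analytic gradient of f
    return 3 * np.cos(3 * x) + 0.6 * (x - 2)

def gradient_descent(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * df(x)
    return x                    # ends in whichever local minimum the start point leads to

def simulated_annealing(x, T0=2.0, steps=5000, seed=0):
    rng = np.random.default_rng(seed)
    best = x
    for t in range(1, steps + 1):
        T = T0 / t                              # cooling schedule
        cand = x + rng.normal(scale=0.5)        # random neighboring state
        # accept worse moves with a temperature-dependent probability
        if f(cand) < f(x) or rng.random() < np.exp(-(f(cand) - f(x)) / T):
            x = cand
        if f(x) < f(best):
            best = x
    return best

print("gradient descent from x=-2:", gradient_descent(-2.0))
print("simulated annealing from x=-2:", simulated_annealing(-2.0))
```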

A Study on Deep Learning Methodology for Bigdata Mining from Smart Farm using Heterogeneous Computing (스마트팜 빅데이터 분석을 위한 이기종간 심층학습 기법 연구)

  • Min, Jae-Ki;Lee, DongHoon
    • Proceedings of the Korean Society for Agricultural Machinery Conference
    • /
    • 2017.04a
    • /
    • pp.162-162
    • /
    • 2017
  • Research in many academic fields using Tensorflow, released by Google, is active. As big data on agricultural facility environments accumulates, research on data analysis and mining techniques for obtaining practically useful information is also active. Compared with successful applications of deep learning in other fields, however, its application in agriculture is still at an early stage. This is presumably because the information acquired in agricultural settings is difficult to interpret and mature growth/environment models are lacking, so the time, cost, and research environment needed to derive effective end-to-end processing technology are relatively insufficient. In particular, when the collected data, which has grown enormously with sensor-based acquisition, is mechanically fed into deep learning models of high time complexity, it is difficult to obtain results in a time-efficient manner. Hardware acceleration, proposed to address this high time complexity, is limited to certain development environments; for example, Google's Tensorflow does not release an algorithm supporting MPICH, an open-source parallel clustering technology. Therefore, this study explored a hardware-oriented approach to deep learning that uses whatever resources are available to obtain computation results as quickly as possible. Unlike one-sided training performed on a single host, heterogeneous deep learning first requires that the data layers be interconnected using NFS (Network File System), which makes a high-speed network-based NFS essential. Second, a memory-sharing library is needed to overcome the limits of restricted resources. Third, a parallel-processing compiler optimized for heterogeneous processors must be used. Most importantly, job scheduling must distribute work evenly according to the processing capability of each heterogeneous device; because this is highly variable depending on the shape of the data, rigorous prior benchmarking of the target data domain is required. Open-CL ver1.2 (https://www.khronos.org/opencl/), which satisfies most of these requirements, was used; the latest Open-CL version is 2.2, but 1.2 is the version commonly supported by all four heterogeneous systems prepared for this study. The four systems were 1) Windows 10 Pro, 2) Linux-Ubuntu 16.04.4 LTS-x86_64, 3) MAC OS X 10.11, and 4) Linux-Ubuntu 16.04.4 LTS-ARM Cortex-A15. For comparison, a system with two NVIDIA Pascal Titan X cards configured in SLI was prepared. The binaries compiled separately on each system were given identical names, the cores of each system were allocated evenly, and temperature and illuminance data arriving at 100 Hz were used as inputs to fit humidity data with a Linear Gradient Descent Optimizer over 10,000 training epochs. Training with a total of 32 cores across the four heterogeneous systems finished in about 17 seconds, while the comparison system finished in about 11 seconds. We will continue research on deep learning techniques that make appropriate use of already-owned hardware.
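
A hedged sketch of the kind of linear gradient-descent fit described above, mapping temperature and illuminance to humidity with TensorFlow 2; the synthetic data, standardization, learning rate, and epoch count are placeholder choices, not the paper's 100 Hz sensor stream or its heterogeneous OpenCL setup.

```python
# Linear regression (temperature, illuminance -> humidity) trained with an SGD optimizer.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
temp = rng.uniform(10, 35, size=(1000, 1)).astype("float32")       # deg C
light = rng.uniform(0, 1000, size=(1000, 1)).astype("float32")     # lux
humidity = (80 - 0.8 * temp - 0.01 * light
            + rng.normal(0, 2, temp.shape)).astype("float32")      # synthetic target

X = np.hstack([temp, light])
X = ((X - X.mean(axis=0)) / X.std(axis=0)).astype("float32")       # standardize for stable GD

w = tf.Variable(tf.zeros([2, 1]))
b = tf.Variable(0.0)
opt = tf.keras.optimizers.SGD(learning_rate=0.05)

for epoch in range(2000):
    with tf.GradientTape() as tape:
        pred = tf.matmul(X, w) + b
        loss = tf.reduce_mean(tf.square(pred - humidity))           # mean squared error
    grads = tape.gradient(loss, [w, b])
    opt.apply_gradients(zip(grads, [w, b]))

print("weights:", w.numpy().ravel(), "bias:", float(b.numpy()), "mse:", float(loss))
```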

Study of Selection of Regression Equation for Flow-conditions using Machine-learning Method: Focusing on Nakdonggang Waterbody (머신러닝 기법을 활용한 유황별 LOADEST 모형의 적정 회귀식 선정 연구: 낙동강 수계를 중심으로)

  • Kim, Jonggun;Park, Youn Shik;Lee, Seoro;Shin, Yongchul;Lim, Kyoung Jae;Kim, Ki-sung
    • Journal of The Korean Society of Agricultural Engineers
    • /
    • v.59 no.4
    • /
    • pp.97-107
    • /
    • 2017
  • This study determines the coefficients of the regression equations and selects the optimal regression equation in the LOADEST model after classifying the whole study period into five flow conditions for 16 watersheds located in the Nakdonggang waterbody. The optimized coefficients were derived using the gradient descent method as the learning method in TensorFlow, a machine-learning engine. In South Korea the variability of streamflow is relatively high and rainfall is concentrated in summer, which can significantly affect the analysis of pollutant-load characteristics. Thus, unlike previous applications of the LOADEST model that adjust the whole study period at once, the study period was classified into five flow conditions to estimate the optimized coefficients and regression equations. As shown in the results, equation #9, which has 7 coefficients related to flow and seasonal characteristics, was selected for each flow condition in the study watersheds. When the simulated load (SS) was compared with the observed load, the simulation showed a similar pattern to the observation for the high flow condition because the flow terms are directly related to precipitation. On the other hand, although the simulated load followed the observations in several watersheds, most of the study watersheds showed large differences under low flow conditions, because the pollutant load during low flow may be significantly affected by baseflow or point-source pollutant loads. Based on these results, to estimate continuous pollutant loads properly, the regression equations need to be determined with coefficients appropriate to the various flow conditions in each watershed. Furthermore, the machine-learning method can be useful for estimating the coefficients of the regression equations in the LOADEST model.
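
A hedged sketch of fitting a 7-coefficient load-rating regression by plain gradient descent, assuming the commonly cited LOADEST form ln(L) = a0 + a1*lnQ + a2*lnQ^2 + a3*sin(2*pi*dtime) + a4*cos(2*pi*dtime) + a5*dtime + a6*dtime^2 for what the abstract calls "equation #9"; the flow and load values below are synthetic, not the Nakdonggang observations.

```python
# Gradient-descent least squares on a synthetic LOADEST-style design matrix.
import numpy as np

rng = np.random.default_rng(0)
n = 500
lnQ = rng.normal(0, 1, n)                        # centered log-flow
dtime = rng.uniform(-0.5, 0.5, n)                # centered decimal time
true_a = np.array([2.0, 1.1, 0.05, 0.3, -0.2, 0.1, 0.02])

def design(lnQ, dtime):
    return np.column_stack([np.ones_like(lnQ), lnQ, lnQ ** 2,
                            np.sin(2 * np.pi * dtime), np.cos(2 * np.pi * dtime),
                            dtime, dtime ** 2])

X = design(lnQ, dtime)
lnL = X @ true_a + rng.normal(0, 0.1, n)         # synthetic observed log-loads

a = np.zeros(7)
lr = 0.05
for _ in range(20000):
    grad = 2.0 / n * X.T @ (X @ a - lnL)         # gradient of the mean squared error
    a -= lr * grad

print("true coefficients:  ", true_a)
print("fitted coefficients:", np.round(a, 3))
```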

Data Mining using Instance Selection in Artificial Neural Networks for Bankruptcy Prediction (기업부도예측을 위한 인공신경망 모형에서의 사례선택기법에 의한 데이터 마이닝)

  • Kim, Kyoung-jae
    • Journal of Intelligence and Information Systems
    • /
    • v.10 no.1
    • /
    • pp.109-123
    • /
    • 2004
  • Corporate financial distress and bankruptcy prediction is one of the major application areas of artificial neural networks (ANNs) in finance and management. ANNs have shown high prediction performance in this area, but are sometimes confronted with inconsistent and unpredictable performance on noisy data. In addition, when the amount of data is very large, it may not be possible to train an ANN, or the training task cannot be carried out effectively, without data reduction, because training on a large data set requires long processing times and additional data-collection costs. Instance selection is one of the popular methods for dimensionality reduction and is directly related to data reduction. Although some researchers have addressed the need for instance selection in instance-based learning algorithms, there is little research on instance selection for ANNs. This study proposes a genetic algorithm (GA) approach to instance selection in ANNs for bankruptcy prediction. We use an ANN supported by the GA to optimize the connection weights between layers and to select relevant instances. The globally evolved weights are expected to mitigate the well-known limitations of the gradient descent procedure used in backpropagation, and the genetically selected instances to shorten the learning time and enhance prediction performance. The proposed model is compared with other major data mining techniques, and the experimental results show that the GA approach is a promising method for instance selection in ANNs.
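
A hedged sketch of the instance-selection half of the idea above: a binary chromosome marks which training instances are kept, and fitness is the validation accuracy of a simple classifier (logistic regression here as a lightweight stand-in for the paper's ANN, and the GA also does not evolve connection weights). The data and GA settings are illustrative only.

```python
# Genetic algorithm over binary instance-selection masks.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=10, flip_y=0.2, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

def fitness(mask):
    # guard against degenerate selections, then score on the held-out set
    if mask.sum() < 10 or len(np.unique(y_tr[mask])) < 2:
        return 0.0
    clf = LogisticRegression(max_iter=500).fit(X_tr[mask], y_tr[mask])
    return clf.score(X_val, y_val)

pop_size, n_gen, n_inst = 20, 15, len(X_tr)
pop = rng.random((pop_size, n_inst)) < 0.5       # random initial chromosomes

for gen in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    order = np.argsort(scores)[::-1]
    parents = pop[order[: pop_size // 2]]        # truncation selection
    children = []
    for _ in range(pop_size - len(parents)):
        a, b = parents[rng.integers(len(parents), size=2)]
        cut = rng.integers(1, n_inst)            # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.random(n_inst) < 0.01       # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected", int(best.sum()), "of", n_inst, "instances; fitness:", round(fitness(best), 3))
```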

A Design on Face Recognition System Based on pRBFNNs by Obtaining Real Time Image (실시간 이미지 획득을 통한 pRBFNNs 기반 얼굴인식 시스템 설계)

  • Oh, Sung-Kwun;Seok, Jin-Wook;Kim, Ki-Sang;Kim, Hyun-Ki
    • Journal of Institute of Control, Robotics and Systems
    • /
    • v.16 no.12
    • /
    • pp.1150-1158
    • /
    • 2010
  • In this study, Polynomial-based Radial Basis Function Neural Networks (pRBFNNs) are proposed as the recognition part of an overall face recognition system that consists of a preprocessing part and a recognition part. The design methodology and procedure of the proposed pRBFNNs are presented as a solution to a high-dimensional pattern recognition problem. First, in the preprocessing part, a CCD camera obtains a picture frame in real time, and histogram equalization partially enhances the image distorted by natural and artificial illumination. The AdaBoost algorithm proposed by Viola and Jones is exploited to separate the facial image area from the non-facial area, and PCA is then used as the feature extraction algorithm to reduce the dimensionality of the high-dimensional facial image area. Second, the pRBFNNs are used to identify the ID by recognizing the unique pattern of each person. The proposed pRBFNN architecture consists of three functional modules, the condition part, the conclusion part, and the inference part, expressed as fuzzy rules in 'if-then' format. In the condition part of the fuzzy rules, the input space is partitioned with Fuzzy C-Means clustering. In the conclusion part, the connection weights of the pRBFNNs are represented as three kinds of polynomials: constant, linear, and quadratic. The coefficients of the connection weights are identified by back-propagation using the gradient descent method, and the output of the pRBFNN model is obtained by fuzzy inference in the inference part. The essential design parameters of the networks (including the learning rate, momentum coefficient, and fuzzification coefficient) are optimized by means of Particle Swarm Optimization. The proposed pRBFNNs are applied to a real-time face recognition system and evaluated in terms of output performance and recognition rate.
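
A hedged, heavily simplified sketch of the RBF-network ingredients mentioned above: clustering to place the basis-function centers (k-means here standing in for Fuzzy C-Means) and gradient descent on the output weights (a constant-type conclusion part; the paper's polynomial weights, fuzzy inference, and PSO tuning are omitted). The data are synthetic, not face images.

```python
# Minimal RBF network: k-means centers, Gaussian features, gradient-descent readout.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=400, n_features=8, n_classes=3,
                           n_informative=5, random_state=0)
Y = np.eye(3)[y]                                   # one-hot targets

centers = KMeans(n_clusters=12, n_init=10, random_state=0).fit(X).cluster_centers_
sigma = 3.0                                        # common RBF width (illustrative)

def rbf_features(X):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

Phi = rbf_features(X)                              # (n_samples, n_centers)
W = np.zeros((Phi.shape[1], 3))
lr = 0.1
for _ in range(5000):                              # gradient descent on squared error
    err = Phi @ W - Y
    W -= lr * (Phi.T @ err) / len(X)

pred = np.argmax(Phi @ W, axis=1)
print("training accuracy:", round((pred == y).mean(), 3))
```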

An Effective Feature Extraction Method for Fault Diagnosis of Induction Motors (유도전동기의 고장 진단을 위한 효과적인 특징 추출 방법)

  • Nguyen, Hung N.;Kim, Jong-Myon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.18 no.7
    • /
    • pp.23-35
    • /
    • 2013
  • This paper proposes an effective technique for automatically extracting feature vectors from vibration signals for fault classification systems. Conventional mel-frequency cepstral coefficients (MFCCs) are sensitive to noise in vibration signals, which degrades classification accuracy. To solve this problem, this paper proposes spectral envelope cepstral coefficient (SECC) analysis, in which a 4-step filter bank based on the spectral envelopes of the vibration signals is used: (1) a linear predictive coding (LPC) algorithm specifies the spectral envelopes of all faulty vibration signals, (2) all envelopes are averaged to obtain the general spectral shape, (3) a gradient descent method finds the extremes of the average envelope and their frequencies, and (4) non-overlapping filters are constructed with centers calculated from the distances between the valley frequencies of the envelope. This 4-step filter bank is then used in the cepstral coefficient computation to extract feature vectors. Finally, a multi-layer support vector machine (MLSVM) with various sigma values uses these parameters to identify the fault types of induction motors. Experimental results indicate that the proposed extraction method outperforms other feature extraction algorithms, yielding about 99.65% classification accuracy.
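
A hedged sketch of one way step (3) could look: locating a valley of an averaged spectral envelope by gradient descent on a smooth interpolation. The "envelope" here is a synthetic curve, not an LPC envelope from real vibration signals, and the start point and step size are illustrative.

```python
# Gradient descent on an interpolated spectral envelope to find a valley frequency.
import numpy as np
from scipy.interpolate import CubicSpline

freqs = np.linspace(0, 5000, 200)                        # Hz
envelope = 1.0 + 0.5 * np.cos(2 * np.pi * freqs / 2000)  # synthetic average envelope
env = CubicSpline(freqs, envelope)
denv = env.derivative()                                   # d(envelope)/d(frequency)

f = 800.0                                                 # initial guess (Hz)
lr = 5e4                                                  # step size for gradient descent
for _ in range(500):
    f -= lr * denv(f)                                     # move against the slope
    f = float(np.clip(f, freqs[0], freqs[-1]))            # stay inside the band

print("valley frequency ~", round(f, 1), "Hz (expected near 1000 Hz for this curve)")
```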

Analysis of Important Indicators of TCB Using GBM (일반화가속모형을 이용한 기술신용평가 주요 지표 분석)

  • Jeon, Woo-Jeong(Michael);Seo, Young-Wook
    • The Journal of Society for e-Business Studies
    • /
    • v.22 no.4
    • /
    • pp.159-173
    • /
    • 2017
  • In order to provide technology-based financial support to small and medium-sized venture companies, the government introduced the TCB evaluation, a kind of technology rating, carried out by Kibo and qualified private TCBs. In this paper, we briefly review the current state of TCB evaluation and the available technology-evaluation indicators accumulated in the Korea Credit Information Services (TDB), and then explore multiple-regression techniques to identify indicators that have a significant effect on the technology rating score. The relative importance and classification accuracy of these key indicators were then calculated by applying them as independent features to a generalized boosting model (GBM), a representative machine-learning classifier, and comparing each indicator's influence and each model's fit. The analysis showed that the relative importance assigned by the two models did not differ greatly; however, the GBM model placed more weight on the InnoBiz certification, R&D department, patent registration, and venture confirmation indicators than the regression model did.
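
A hedged sketch of the kind of comparison described above: a gradient boosting classifier's relative feature importances set against the coefficients of a logistic regression, on synthetic stand-ins for the TCB indicators. The feature names are illustrative, not the actual evaluation items, and scikit-learn's GradientBoostingClassifier stands in for the paper's GBM.

```python
# Compare boosting-based feature importance with regression coefficients.
import numpy as np
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=5, n_informative=4,
                           n_redundant=1, random_state=0)
cols = ["innobiz_cert", "rnd_dept", "patent_reg", "venture_cert", "rev_growth"]
X = pd.DataFrame(X, columns=cols)                 # hypothetical indicator names

gbm = GradientBoostingClassifier(n_estimators=200, random_state=0).fit(X, y)
logit = LogisticRegression(max_iter=1000).fit(X, y)

report = pd.DataFrame({
    "gbm_importance": gbm.feature_importances_,
    "logit_abs_coef": np.abs(logit.coef_[0]),
}, index=cols)
print(report.sort_values("gbm_importance", ascending=False))
```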