• Title/Summary/Keyword: Deep Neural Network

Arabic Text Recognition with Harakat Using Deep Learning

  • Maghraby, Ashwag;Samkari, Esraa
    • International Journal of Computer Science & Network Security / v.23 no.1 / pp.41-46 / 2023
  • Because of the significant role that harakat play in Arabic text, this paper used deep learning to extract Arabic text with its harakat from an image. Convolutional neural network and recurrent neural network algorithms were applied to the dataset, which contained 110 images, each representing one word. The results showed the ability to extract some letters with harakat.

Improving Wind Speed Forecasts Using Deep Neural Network

  • Hong, Seokmin;Ku, SungKwan
    • International Journal of Advanced Culture Technology / v.7 no.4 / pp.327-333 / 2019
  • Wind speed data constitute important weather information for aircraft flying at low altitudes, such as drones. Currently, the accuracy of low-altitude wind predictions is much lower than that of high-altitude wind predictions. Deep neural networks are proposed in this study as a method to improve wind speed forecast information. Deep neural networks mimic the learning process of interacting neurons in the brain and are used in various fields, such as image, sound, and text recognition, natural language processing, and pattern recognition in time series. In this study, a deep neural network model is constructed that uses the wind prediction values generated by a numerical model as input to improve the wind speed forecasts. Using ground wind speed forecast data collected at the Boseong Meteorological Observation Tower, wind speed forecast values obtained by the numerical model are compared with those obtained by the proposed model to verify the validity and compatibility of the proposed model.
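The following is a minimal sketch of the kind of forecast-correction network this abstract describes: a small fully connected regressor that maps a numerical-model wind forecast (plus a few illustrative extra features) to the observed wind speed. The feature set, layer sizes, and training details are assumptions for illustration, not the study's actual configuration.

```python
# Minimal sketch: MLP that corrects a numerical-model wind speed forecast.
import torch
import torch.nn as nn

class WindCorrectionDNN(nn.Module):
    def __init__(self, n_features: int = 4, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),          # corrected wind speed
        )

    def forward(self, x):
        return self.net(x)

# Assumed features per sample: [model wind speed, wind direction, temperature, pressure]
model = WindCorrectionDNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(32, 4)        # placeholder batch of numerical-model features
y_obs = torch.randn(32, 1)    # placeholder observed wind speeds
for _ in range(100):          # simple training loop
    optimizer.zero_grad()
    loss = loss_fn(model(x), y_obs)
    loss.backward()
    optimizer.step()
```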

Daily Stock Price Forecasting Using Deep Neural Network Model (심층 신경회로망 모델을 이용한 일별 주가 예측)

  • Hwang, Heesoo
    • Journal of the Korea Convergence Society / v.9 no.6 / pp.39-44 / 2018
  • The application of deep neural networks to finance has received a great deal of attention from researchers because no assumption about a suitable mathematical model has to be made prior to forecasting, and because they are capable of extracting useful information from large sets of data, which is required to describe the nonlinear input-output relations of financial time series. The paper presents a new deep neural network model in which a single-layer autoencoder and a four-layer neural network are serially coupled for stock price forecasting. The autoencoder extracts deep features, which are fed into the multi-layer neural network to predict the next day's stock closing price. The proposed deep neural network is trained progressively, layer by layer, before the final training of the whole network. A model to predict the daily closing prices of the Korea Composite Stock Price Index (KOSPI) is built, and its performance is demonstrated.
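As a rough illustration of the coupling described above, the sketch below defines a single-layer autoencoder whose encoder output feeds a four-layer feed-forward regressor for the next day's closing price. The input size, code size, and layer widths are assumed values, not the paper's.

```python
# Minimal sketch: single-layer autoencoder serially coupled with a small regressor.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, n_in: int = 20, n_code: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, n_code), nn.Sigmoid())
        self.decoder = nn.Linear(n_code, n_in)

    def forward(self, x):
        return self.decoder(self.encoder(x))

class Forecaster(nn.Module):
    """Encoder features -> four-layer network -> next-day closing price."""
    def __init__(self, encoder: nn.Module, n_code: int = 8):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Sequential(
            nn.Linear(n_code, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, 8), nn.ReLU(),
            nn.Linear(8, 1),
        )

    def forward(self, x):
        return self.head(self.encoder(x))

ae = Autoencoder()
model = Forecaster(ae.encoder)
# Training would proceed in two stages: (1) pretrain `ae` to reconstruct its input
# windows of past prices/indicators, then (2) train `model` on (features, next-day
# close) pairs, mirroring the layer-by-layer learning described in the abstract.
```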

Deep learning convolutional neural network algorithms for the early detection and diagnosis of dental caries on periapical radiographs: A systematic review

  • Musri, Nabilla;Christie, Brenda;Ichwan, Solachuddin Jauhari Arief;Cahyanto, Arief
    • Imaging Science in Dentistry / v.51 no.3 / pp.237-242 / 2021
  • Purpose: The aim of this study was to analyse and review deep learning convolutional neural networks for detecting and diagnosing early-stage dental caries on periapical radiographs. Materials and Methods: In order to conduct this review, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were followed. Studies published from 2015 to 2021 under the keywords (deep convolutional neural network) AND (caries), (deep learning caries) AND (convolutional neural network) AND (caries) were systematically reviewed. Results: When dental caries is improperly diagnosed, the lesion may eventually invade the enamel, dentin, and pulp tissue, leading to loss of tooth function. Rapid and precise detection and diagnosis are vital for implementing appropriate prevention and treatment of dental caries. Radiography and intraoral images are considered to play a vital role in detecting dental caries; nevertheless, studies have shown that 20% of suspicious areas are mistakenly diagnosed as dental caries using this technique; hence, diagnosis via radiography alone without an objective assessment is inaccurate. Identifying caries with a deep convolutional neural network-based detector enables the operator to distinguish changes in the location and morphological features of dental caries lesions. Deep learning algorithms have broader and more profound layers and are continually being developed, remarkably enhancing their precision in detecting and segmenting objects. Conclusion: Clinical applications of deep learning convolutional neural networks in the dental field have shown significant accuracy in detecting and diagnosing dental caries, and these models hold promise in supporting dental practitioners to improve patient outcomes.

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network with multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models. In recent years, these supervised learning models have gained more popularity than unsupervised learning models such as deep belief networks, because the supervised models have shown successful applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation is an abbreviation for "backward propagation of errors" and is a common method of training artificial neural networks used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network. The gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. Using this architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. By local receptive fields, we mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each of the local receptive fields; as a result, all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers. Pooling layers simplify the information in the output of the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks, or RNNs. A recurrent neural network is a class of artificial neural network in which connections between units form a directed cycle. This creates an internal state that allows the network to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks, because of the unstable gradient problem: vanishing and exploding gradients. The gradient can get smaller and smaller as it is propagated back through layers, which makes learning in early layers extremely slow. The problem actually gets worse in RNNs, since gradients are propagated backward not just through layers but also through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from. Incorporating long short-term memory units (LSTMs) into RNNs makes it much easier to get good results when training them, and many recent papers make use of LSTMs or related ideas.
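The sketch below illustrates the three convolutional ideas mentioned in this abstract (local receptive fields, shared weights, and pooling) together with one backpropagation and gradient-descent update. The layer sizes and the 28x28 input are illustrative assumptions, not taken from the article.

```python
# Minimal sketch: small convolutional network trained with backpropagation.
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    def __init__(self, n_classes: int = 10):
        super().__init__()
        # Conv2d: each output neuron sees a 5x5 local receptive field, and the same
        # 5x5 kernel (shared weights + bias) is reused at every location in the image.
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(2),                  # pooling: summarize each 2x2 region
            nn.Conv2d(8, 16, kernel_size=5), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 4 * 4, n_classes)  # for 28x28 inputs

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallConvNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
images = torch.randn(32, 1, 28, 28)           # placeholder image batch
labels = torch.randint(0, 10, (32,))
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()    # backpropagation: gradient of the error w.r.t. all weights
optimizer.step()   # gradient-descent update of the weights
```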

An Optimized Deep Learning Techniques for Analyzing Mammograms

  • Bandaru, Satish Babu;Natarajasivan, D.;Rama Mohan Babu, G.
    • International Journal of Computer Science & Network Security / v.23 no.7 / pp.39-48 / 2023
  • Breast cancer screening makes extensive use of mammography. Even so, there has been a lot of debate with regard to the starting age and the screening interval for this application. The deep learning technique of transfer learning is employed for transferring knowledge learnt from source tasks to target tasks. For the resolution of real-world problems, deep neural networks have demonstrated superior performance in comparison with standard machine learning algorithms, but their architecture has to be defined taking into account problem-domain knowledge, which normally consumes a lot of time as well as computational resources. This work evaluated the efficacy of deep learning networks such as the Visual Geometry Group network (VGGNet), Residual Network (ResNet), and Inception network for classifying mammograms, and proposed optimizing ResNet with the Teaching-Learning-Based Optimization (TLBO) algorithm in order to predict breast cancer from mammogram images. The proposed TLBO-ResNet is an optimized ResNet that converges faster than other evolutionary methods for mammogram classification.
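Below is a minimal sketch of the transfer-learning baseline implied by the abstract: an ImageNet-pretrained ResNet reused as a feature extractor with a new two-class head for mammograms. The TLBO hyperparameter optimization proposed in the paper is not shown, and the model variant, input size, and labels are assumptions.

```python
# Minimal sketch: transfer learning with a pretrained ResNet for mammogram classification.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")   # source-task (ImageNet) weights
for p in model.parameters():
    p.requires_grad = False                        # freeze the transferred layers
model.fc = nn.Linear(model.fc.in_features, 2)      # new head for the target task

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)   # placeholder batch of preprocessed mammograms
y = torch.randint(0, 2, (8,))     # placeholder labels: normal vs. suspicious
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```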

Deep Structured Learning: Architectures and Applications

  • Lee, Soowook
    • International Journal of Advanced Culture Technology / v.6 no.4 / pp.262-265 / 2018
  • Deep learning, a sub-field of machine learning, is changing the prospects of artificial intelligence (AI) because of its recent advancements and applications in various fields. Deep learning deals with algorithms inspired by the structure and function of the brain, called artificial neural networks. This work reviews the basic architectures and recent advancements of deep structured learning. It also describes contemporary applications of deep structured learning and its advantages over traditional learning in artificial intelligence. This study is useful for general readers and students who are in the early stages of studying deep learning.

Comparative Study of Performance of Deep Learning Algorithms in Particulate Matter Concentration Prediction (미세먼지 농도 예측을 위한 딥러닝 알고리즘별 성능 비교)

  • Cho, Kyoung-Woo;Jung, Yong-jin;Oh, Chang-Heon
    • Journal of Advanced Navigation Technology / v.25 no.5 / pp.409-414 / 2021
  • Growing concern over particulate matter emissions has prompted demand for highly reliable particulate matter forecasting. Currently, several studies on particulate matter prediction use various deep learning algorithms. In this study, we compared the predictive performance of typical neural networks used for particulate matter prediction. We used deep neural network (DNN), recurrent neural network, and long short-term memory algorithms to design an optimal predictive model on the basis of a hyperparameter search. A comparative analysis of the predictive performance of the models indicates that the predicted values generally followed the variation trend of the actual values well. In the analysis based on root mean square error and accuracy, the DNN-based prediction model showed higher reliability in terms of prediction error than the other prediction models.
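As a rough sketch of the kind of comparison described above, the code below sets up a feed-forward DNN and an LSTM on the same particulate-matter input windows and evaluates both with a shared RMSE metric (training loops omitted). The 24-step window, layer sizes, and placeholder data are assumptions, not the study's configuration.

```python
# Minimal sketch: DNN vs. LSTM for particulate matter prediction, compared by RMSE.
import torch
import torch.nn as nn

WINDOW = 24  # assumed: 24 hourly observations per input window

dnn = nn.Sequential(nn.Linear(WINDOW, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))

class LSTMForecaster(nn.Module):
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, WINDOW, 1)
        h, _ = self.lstm(x)
        return self.out(h[:, -1])         # predict the next concentration value

def rmse(pred, target):
    return torch.sqrt(nn.functional.mse_loss(pred, target))

x = torch.randn(16, WINDOW)               # placeholder concentration windows
y = torch.randn(16, 1)                    # placeholder next-step targets
print("DNN RMSE :", rmse(dnn(x), y).item())
print("LSTM RMSE:", rmse(LSTMForecaster()(x.unsqueeze(-1)), y).item())
```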

Dual deep neural network-based classifiers to detect experimental seizures

  • Jang, Hyun-Jong;Cho, Kyung-Ok
    • The Korean Journal of Physiology and Pharmacology / v.23 no.2 / pp.131-139 / 2019
  • Manually reviewing electroencephalograms (EEGs) is labor-intensive and demands automated seizure detection systems. To construct an efficient and robust event detector for experimental seizures from continuous EEG monitoring, we combined spectral analysis and deep neural networks. A deep neural network was trained to discriminate periodograms of 5-sec EEG segments from annotated convulsive seizures and the pre- and post-EEG segments. To use the entire EEG for training, a second network was trained with non-seizure EEGs that were misclassified as seizures by the first network. By sequentially applying the dual deep neural networks and simple pre- and post-processing, our autodetector identified all seizure events in 4,272 h of test EEG traces, with only 6 false positive events, corresponding to 100% sensitivity and 98% positive predictive value. Moreover, with pre-processing to reduce the computational burden, scanning and classifying 8,977 h of training and test EEG datasets took only 2.28 h with a personal computer. These results demonstrate that combining a basic feature extractor with dual deep neural networks and rule-based pre- and post-processing can detect convulsive seizures with great accuracy and low computational burden, highlighting the feasibility of our automated seizure detection algorithm.
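The sketch below mirrors the pipeline structure this abstract describes: periodogram features of 5-second EEG segments, a first classifier, and a second classifier that re-screens the first one's positive calls. The sampling rate, network sizes, and decision rule are assumptions, and the detector's rule-based pre- and post-processing is omitted.

```python
# Minimal sketch: spectral features + dual-network cascade for seizure detection.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import periodogram

FS = 256          # assumed sampling rate (Hz)
SEG_SEC = 5       # 5-second EEG segments, as in the abstract

def segment_features(eeg_segment: np.ndarray) -> torch.Tensor:
    """Power spectral density of one 5-s segment as the classifier input."""
    _, pxx = periodogram(eeg_segment, fs=FS)
    return torch.tensor(np.log1p(pxx), dtype=torch.float32)

def make_classifier(n_in: int) -> nn.Module:
    return nn.Sequential(nn.Linear(n_in, 128), nn.ReLU(),
                         nn.Linear(128, 2))     # seizure vs. non-seizure logits

n_in = FS * SEG_SEC // 2 + 1                    # periodogram length for a 5-s segment
net1 = make_classifier(n_in)                    # trained on annotated seizures
net2 = make_classifier(n_in)                    # trained on net1's false positives

def detect(eeg_segment: np.ndarray) -> bool:
    x = segment_features(eeg_segment).unsqueeze(0)
    if net1(x).argmax(1).item() != 1:           # first screen over all segments
        return False
    return net2(x).argmax(1).item() == 1        # second screen over candidates only

print(detect(np.random.randn(FS * SEG_SEC)))
```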

Prediction and Comparison of Electrochemical Machining on Shape Memory Alloy(SMA) using Deep Neural Network(DNN)

  • Song, Woo Jae;Choi, Seung Geon;Lee, Eun-Sang
    • Journal of Electrochemical Science and Technology / v.10 no.3 / pp.276-283 / 2019
  • Nitinol, an alloy of nickel and titanium, is one of the shape memory alloys (SMAs), which are restored to a remembered shape through a change in crystal structure at a given temperature. Because of these unique features, it is used in medical devices, high-precision sensors, and the aerospace industry. However, conventional mechanical machining of nitinol leaves thermal and residual stresses after processing. Therefore, electrochemical machining (ECM), which produces neither residual stress nor thermal deformation, has emerged as an alternative processing technique. In addition, to replace existing experiment planning methods, this study used a deep neural network (DNN), a basis of modern AI. By applying the DNN to ECM and comparing root mean square errors (RMSE) against actual experimental values, the DNN was shown to be more useful than conventional design-of-experiments methods (RSM, Taguchi, regression). The machining results for the SMA, a material with poor machinability, were thus accurately and efficiently predicted by combining ECM with the DNN.
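As a rough illustration of the comparison this abstract describes, the sketch below fits a small DNN and an ordinary least-squares baseline to the same (placeholder) ECM parameter-response data and reports RMSE for each. The parameter names, data, and network sizes are assumptions; the paper's actual RSM and Taguchi baselines are not reproduced.

```python
# Minimal sketch: DNN vs. least-squares regression for predicting an ECM response.
import numpy as np
import torch
import torch.nn as nn

X = torch.randn(40, 3)   # placeholder parameters: [voltage, electrolyte conc., time]
y = torch.randn(40, 1)   # placeholder machining response (e.g., material removal)
X_train, y_train, X_test, y_test = X[:30], y[:30], X[30:], y[30:]

dnn = nn.Sequential(nn.Linear(3, 32), nn.ReLU(),
                    nn.Linear(32, 32), nn.ReLU(),
                    nn.Linear(32, 1))
opt = torch.optim.Adam(dnn.parameters(), lr=1e-2)
for _ in range(500):                              # train the DNN on the experiments
    opt.zero_grad()
    nn.functional.mse_loss(dnn(X_train), y_train).backward()
    opt.step()

# Baseline: ordinary least-squares regression fit via numpy's lstsq.
A = np.hstack([X_train.numpy(), np.ones((30, 1))])
coef, *_ = np.linalg.lstsq(A, y_train.numpy(), rcond=None)
lin_pred = np.hstack([X_test.numpy(), np.ones((10, 1))]) @ coef

rmse = lambda p, t: float(np.sqrt(np.mean((p - t) ** 2)))
print("DNN RMSE:", rmse(dnn(X_test).detach().numpy(), y_test.numpy()))
print("OLS RMSE:", rmse(lin_pred, y_test.numpy()))
```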