• Title/Summary/Keyword: Deep Autoencoder


Visualization of Korean Speech Based on the Distance of Acoustic Features (음성특징의 거리에 기반한 한국어 발음의 시각화)

  • Pok, Gou-Chol
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.13 no.3 / pp.197-205 / 2020
  • Korean has the characteristic that the pronunciation of phoneme units such as vowels and consonants is fixed and the pronunciation associated with a notation does not change, so foreign learners can approach the language relatively easily. However, when words, phrases, or sentences are pronounced, the pronunciation varies widely and in complex ways at syllable boundaries, and the association between notation and pronunciation no longer holds. Consequently, it is very difficult for foreign learners to master standard Korean pronunciation. Despite these difficulties, systematic analysis of pronunciation errors in Korean words is believed to be possible because, unlike other languages including English, the relationship between Korean notation and pronunciation can be described as a set of firm rules without exceptions. In this paper, we propose a visualization framework that displays the differences between standard and erroneous pronunciations as quantitative measures on the computer screen. Previous research offers only color representations and 3D graphics of speech properties, or animated views of the changing shapes of the lips and mouth cavity. Moreover, the features used in such analyses are point data, such as the average over a speech range. In this study, we propose a method that directly uses the time-series data instead of summarized or distorted data. This was realized with a deep learning-based technique that combines a self-organizing map, a variational autoencoder, and a Markov model, and we achieved a substantial performance improvement over the point-data-based method.
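The paper's pipeline (SOM, VAE, Markov model) is not reproduced here, but the core idea of scoring pronunciation differences directly on time-series features, rather than on point summaries, can be illustrated with a minimal dynamic time warping (DTW) sketch over hypothetical MFCC-like frames:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two per-frame feature sequences
    of shapes (T1, d) and (T2, d)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])  # frame-to-frame distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

# Hypothetical per-frame acoustic features for a standard pronunciation,
# an identical utterance, and a systematically off one.
rng = np.random.default_rng(0)
standard = rng.normal(size=(40, 13))   # e.g. 40 frames of 13 MFCC coefficients
identical = standard.copy()
shifted = standard + 0.5

print(dtw_distance(standard, identical))  # 0.0 for identical sequences
print(dtw_distance(standard, shifted) > 0.0)
```

The scalar distance is exactly the kind of quantitative measure a visualization front-end could map to color or position on screen.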

Comparative Analysis of Anomaly Detection Models using AE and Suggestion of Criteria for Determining Outliers

  • Kang, Gun-Ha;Sohn, Jung-Mo;Sim, Gun-Wu
    • Journal of the Korea Society of Computer and Information / v.26 no.8 / pp.23-30 / 2021
  • In this study, we present a comparative analysis of major autoencoder (AE)-based anomaly detection methods for quality determination in the manufacturing process, together with a new anomaly discrimination criterion. Owing to the characteristics of manufacturing sites, anomalous instances are few and their types vary greatly. These properties degrade the performance of an AI-based anomaly detection model trained on both normal and anomalous cases, and incur considerable time and cost in obtaining additional data for performance improvement. To solve this problem, studies are underway on AE-based models such as the AE and VAE, which perform anomaly detection using only normal data. In this work, based on convolutional AE, VAE, and dilated VAE models, statistics on residual images, MSE, and information entropy were selected as outlier discrimination criteria to compare and analyze the performance of each model. In particular, the range statistic applied to the convolutional AE model showed the best performance, with an AUC-PRC of 0.9570, an F1 score of 0.8812, an AUC-ROC of 0.9548, and an accuracy of 87.60%. This is an accuracy improvement of about 20 percentage points over MSE, which has frequently been used as a criterion for determining outliers, and confirms that model performance can be improved by the choice of outlier discrimination criterion.
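Why a range statistic on the residual image can beat MSE is easy to see on a toy example: a small localized defect barely moves the image-wide MSE but dominates the residual range. A minimal numpy sketch (hypothetical reconstructions, not the authors' convolutional AE):

```python
import numpy as np

def anomaly_scores(x, x_hat):
    """Two outlier criteria from an autoencoder reconstruction residual:
    mean squared error and the range of the absolute residual."""
    residual = np.abs(x - x_hat)
    mse = float(np.mean((x - x_hat) ** 2))
    range_score = float(residual.max() - residual.min())
    return mse, range_score

normal = np.zeros((8, 8))
recon = np.zeros((8, 8))        # assume the AE reconstructs normal images well
defect = normal.copy()
defect[2, 3] = 1.0              # one small localized defect pixel

mse_n, range_n = anomaly_scores(normal, recon)
mse_d, range_d = anomaly_scores(defect, recon)
print(mse_d, range_d)           # MSE is diluted over 64 pixels; range is not
```

Thresholding on `range_d` separates the defective sample cleanly even though its MSE is only 1/64.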

Financial Market Prediction and Improving the Performance Based on Large-scale Exogenous Variables and Deep Neural Networks (대규모 외생 변수 및 Deep Neural Network 기반 금융 시장 예측 및 성능 향상)

  • Cheon, Sung Gil;Lee, Ju Hong;Choi, Bum Ghi;Song, Jae Won
    • Smart Media Journal / v.9 no.4 / pp.26-35 / 2020
  • Attempts to predict future stock prices have long been studied. However, unlike general time-series data, financial time-series data present various obstacles to prediction, such as non-stationarity, long-term dependence, and non-linearity. In addition, manual selection from a wide range of variables has limitations, so the model should be able to extract variables automatically. In this paper, we propose a 'sliding time step normalization' method that can normalize non-stationary data, an LSTM autoencoder that compresses the full set of variables, and 'moving transfer learning', which divides the data into periods and performs transfer learning. Experiments show that performance is superior when using as many variables as possible through the neural network rather than only 100 major financial variables, and that applying 'sliding time step normalization' to normalize the non-stationarity of data in all sections is effective in improving performance. 'Moving transfer learning' is shown to be effective in improving performance over long test intervals by evaluating the model and performing transfer learning in the test interval at each step.
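The abstract does not define 'sliding time step normalization' precisely; one plausible reading, sketched below under that assumption, is to standardize each time step by the statistics of its own trailing window, so that a trending (non-stationary) series becomes locally centered:

```python
import numpy as np

def sliding_normalize(x, window):
    """Standardize each time step by the mean/std of its trailing window
    (a hypothetical reading of 'sliding time step normalization')."""
    out = np.empty_like(x, dtype=float)
    for t in range(len(x)):
        w = x[max(0, t - window + 1): t + 1]
        mu, sd = w.mean(), w.std()
        out[t] = (x[t] - mu) / (sd if sd > 0 else 1.0)
    return out

series = np.arange(100, dtype=float)   # strong upward trend, non-stationary
z = sliding_normalize(series, window=10)
print(z.min(), z.max())                # bounded, unlike the raw series
```

After the warm-up period the normalized value of a pure trend is constant, i.e. the trend component has been removed locally rather than with one global mean/std.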

Design of a Dual Network based Neural Architecture for a Cancellation of Monte Carlo Rendering Noise (몬테칼로 렌더링 노이즈 제거를 위한 듀얼 신경망 구조 설계)

  • Lee, Kwang-Yeob
    • Journal of IKEEE / v.23 no.4 / pp.1366-1372 / 2019
  • In this paper, we designed a revised neural network to remove the Monte Carlo rendering noise contained in ray-traced graphics. Monte Carlo rendering is the best way to enhance a graphic's realism, but because thousands of light samples or more must be computed per pixel, rendering time increases rapidly, posing a major problem for real-time processing. To mitigate this, the number of light samples per pixel is reduced, which introduces rendering noise, and various studies have been conducted to eliminate this noise. In this paper, deep learning is used to remove the rendering noise; in particular, the rendering image is separated into diffuse and specular light, leading to a dual neural network structure. As a result, the dual neural network improved PSNR by an average of 0.58 dB over 64 test images and used 99.22% less light than the reference image, enabling real-time ray-tracing rendering.
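The 0.58 dB figure refers to PSNR, the standard denoising metric; the dual-network design itself is not reproduced here, but the metric is easy to state as a minimal sketch with synthetic images:

```python
import numpy as np

def psnr(ref, img, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image."""
    mse = np.mean((ref - img) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Synthetic stand-ins: a clean render, a noisy low-sample render, and a
# hypothetical denoised output with half the noise level.
rng = np.random.default_rng(1)
ref = rng.random((16, 16))
noisy = np.clip(ref + rng.normal(scale=0.1, size=ref.shape), 0, 1)
denoised = np.clip(ref + rng.normal(scale=0.05, size=ref.shape), 0, 1)
print(psnr(ref, noisy), psnr(ref, denoised))  # denoising raises PSNR
```

A "+0.58 dB average" result means the denoised outputs sit 0.58 dB higher on this scale than the baseline across the 64 test images.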

FRS-OCC: Face Recognition System for Surveillance Based on Occlusion Invariant Technique

  • Abbas, Qaisar
    • International Journal of Computer Science & Network Security / v.21 no.8 / pp.288-296 / 2021
  • Automated face recognition in a runtime environment is gaining importance in the fields of surveillance and urban security. This is a difficult task given the constantly volatile image landscape with varying features and attributes. For a system to be beneficial in industrial settings, its efficiency must not be compromised when running on roads, intersections, and busy streets; however, recognition in such uncontrolled circumstances is a major problem in real-life applications. This paper addresses the main problem of face recognition when the full face is not visible (occlusion). This is a common occurrence, as any person can change his or her appearance by wearing a scarf or sunglasses, or merely by growing a mustache or beard. Such discrepancies in facial appearance are frequently encountered in uncontrolled circumstances and can defeat security systems based on face recognition. These variations are very common in real-life environments; they have been studied comparatively little in the literature, but researchers are now focusing on this type of variation. Existing state-of-the-art techniques suffer from several limitations, most significantly a low level of usability and poor response time in case of any calamity. In this paper, an improved face recognition system called FRS-OCC is developed to solve the occlusion problem. To build the FRS-OCC system, color and texture features are used, and an incremental learning algorithm (Learn++) selects the more informative features. Afterward, a trained stacked autoencoder (SAE) deep learning algorithm is used to recognize a human face. Overall, the FRS-OCC system introduces algorithms that improve the response time to guarantee a benchmark quality of service in any situation. To test and evaluate the performance of the proposed FRS-OCC system, the AR face dataset is utilized. On average, the FRS-OCC system outperformed other state-of-the-art methods, achieving an SE of 98.82%, SP of 98.49%, AC of 98.76%, and AUC of 0.9995. The obtained results indicate that the FRS-OCC system can be used in any surveillance application.
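SE, SP, and AC here denote sensitivity, specificity, and accuracy, all derived from a confusion matrix; a minimal sketch with hypothetical counts (not the paper's actual test split) shows the arithmetic:

```python
def se_sp_ac(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts:
    tp/fn on the positive class, tn/fp on the negative class."""
    se = tp / (tp + fn)            # sensitivity (recall on positives)
    sp = tn / (tn + fp)            # specificity (recall on negatives)
    ac = (tp + tn) / (tp + fn + tn + fp)
    return se, sp, ac

# Hypothetical counts on an occluded-face test set.
se, sp, ac = se_sp_ac(tp=167, fn=2, tn=196, fp=3)
print(round(se, 4), round(sp, 4), round(ac, 4))
```

Reporting all three (plus AUC) matters for occlusion benchmarks because a high accuracy alone can hide a poor sensitivity on the rarer occluded faces.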

Machine Learning-based Classification of Hyperspectral Imagery

  • Haq, Mohd Anul;Rehman, Ziaur;Ahmed, Ahsan;Khan, Mohd Abdul Rahim
    • International Journal of Computer Science & Network Security / v.22 no.4 / pp.193-202 / 2022
  • The classification of hyperspectral imagery (HSI) is essential in Earth surface observation. Owing to the continuous large number of bands, HSI data provide rich information about the object of study; however, they suffer from the curse of dimensionality. Dimensionality reduction is an essential aspect of machine learning classification, and algorithms based on feature extraction can overcome the dimensionality issue, allowing classifiers to use comprehensive models at reduced computational cost. This paper assesses and compares three HSI classification techniques: the first is based on the Joint Spatial-Spectral Stacked Autoencoder (JSSSA) method, the second on a shallow Artificial Neural Network (SNN), and the third on an SVM model. The performance of the JSSSA technique is better than that of the SNN technique in terms of overall accuracy and Kappa coefficient: JSSSA surpasses the SNN with an overall accuracy of 96.13% and a Kappa coefficient of 0.95, while the SNN achieved a good accuracy of 92.40% and a Kappa coefficient of 0.90, and the SVM achieved an accuracy of 82.87%. The current study suggests that both the JSSSA- and SNN-based techniques are efficient methods for hyperspectral classification of snow features. This work classified labeled/ground-truth datasets of snow into multiple classes. The labeled/ground-truth data can be valuable for applying deep neural networks such as CNNs, hybrid CNNs, and RNNs to glaciology and snow-related hazard applications.
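Overall accuracy and the Kappa coefficient, the two metrics compared above, both come from the multi-class confusion matrix; Kappa additionally discounts agreement expected by chance. A minimal sketch with a hypothetical 3-class snow-cover confusion matrix:

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix where
    cm[i, j] counts samples of true class i predicted as class j."""
    n = cm.sum()
    po = np.trace(cm) / n                           # observed agreement
    pe = (cm.sum(axis=1) @ cm.sum(axis=0)) / n**2   # chance agreement
    return float(po), float((po - pe) / (1 - pe))

# Hypothetical confusion matrix for three snow-related classes.
cm = np.array([[50, 2, 0],
               [3, 45, 2],
               [0, 1, 47]])
oa, kappa = overall_accuracy_and_kappa(cm)
print(round(oa, 4), round(kappa, 4))
```

Kappa is always at most the overall accuracy; a pairing like OA 96.13% with Kappa 0.95, as reported for JSSSA, indicates the accuracy is not an artifact of imbalanced class sizes.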

Development of a driver's emotion detection model using auto-encoder on driving behavior and psychological data

  • Eun-Seo, Jung;Seo-Hee, Kim;Yun-Jung, Hong;In-Beom, Yang;Jiyoung, Woo
    • Journal of the Korea Society of Computer and Information / v.28 no.3 / pp.35-43 / 2023
  • Emotion recognition while driving is an essential task for preventing accidents. Furthermore, in the era of autonomous driving, automobiles become the subject of mobility and require more emotional communication with drivers, and the emotion recognition market is gradually spreading. Accordingly, in this study, the driver's emotions are classified into seven categories using psychological and behavioral data, which are relatively easy to collect. The latent vectors extracted through an autoencoder model were also used as features in the classification model, and this was confirmed to improve performance. It was also confirmed that performance improved when using the framework presented in this paper compared to when the existing EEG data were included. Finally, a driver emotion classification accuracy of 81% and an F1 score of 80% were achieved using only psychological, personal, and behavioral data.
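Using autoencoder latent vectors as extra classifier features can be sketched minimally with a linear autoencoder, whose optimal codes span the top principal subspace and can therefore be computed directly with a truncated SVD (a simplification of the paper's trained model; the feature matrix below is hypothetical):

```python
import numpy as np

def linear_ae_latent(X, k):
    """Latent codes of an optimal linear autoencoder with k hidden units.
    For squared loss, these span the top-k principal subspace, so a
    truncated SVD gives them without iterative training."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T                    # shape (n_samples, k)

rng = np.random.default_rng(2)
behavior = rng.normal(size=(200, 20))       # hypothetical behavioral features
latent = linear_ae_latent(behavior, k=4)    # compressed representation
augmented = np.hstack([behavior, latent])   # features fed to the classifier
print(augmented.shape)
```

The classifier then sees both the raw variables and the compressed codes, which is the feature-augmentation pattern the abstract describes.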

Arabic Words Extraction and Character Recognition from Picturesque Image Macros with Enhanced VGG-16 based Model Functionality Using Neural Networks

  • Ayed Ahmad Hamdan Al-Radaideh;Mohd Shafry bin Mohd Rahim;Wad Ghaban;Majdi Bsoul;Shahid Kamal;Naveed Abbas
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.7 / pp.1807-1822 / 2023
  • Innovation and rapidly increasing functionality in user-friendly smartphones have encouraged shutterbugs to capture picturesque image macros in the work environment or during travel. Formal signboards are placed with marketing objectives and are enriched with text for attracting people. Extracting and recognizing text from natural images is an emerging research issue and needs consideration. Compared to conventional optical character recognition (OCR), the complex backgrounds, implicit noise, lighting, and orientation of these scenic text photos make the problem more difficult, and Arabic scene text extraction and recognition adds a number of further complications. The method described in this paper uses a two-phase methodology to extract Arabic text with word-boundary awareness from scenic images with varying text orientations. The first stage uses a convolutional autoencoder, and the second uses Arabic Character Segmentation (ACS), followed by traditional two-layer neural networks for recognition. This study presents how an Arabic synthetic training dataset can be created to exemplify superimposed text in different scene images. For this purpose, a dataset of 10k cropped images in which Arabic text was found was created for the detection phase, along with a 127k Arabic character dataset for the recognition phase. The phase-1 labels were generated from an Arabic corpus of 15k quotes and sentences. The Arabic Word Awareness Region Detection (AWARD) approach, with high flexibility in identifying complex Arabic text scenes such as arbitrarily oriented, curved, or deformed texts, is used to detect these texts. Our experiments show that the system achieves 91.8% word segmentation accuracy and 94.2% character recognition accuracy. We believe that future researchers will further improve scene-text processing in any language by enhancing the functionality of the VGG-16 based model using neural networks.

Deep Learning-Based Personalized Recommendation Using Customer Behavior and Purchase History in E-Commerce (전자상거래에서 고객 행동 정보와 구매 기록을 활용한 딥러닝 기반 개인화 추천 시스템)

  • Hong, Da Young;Kim, Ga Yeong;Kim, Hyon Hee
    • KIPS Transactions on Software and Data Engineering / v.11 no.6 / pp.237-244 / 2022
  • In this paper, we present VAE-based recommendation using online behavior logs and purchase history to overcome data sparsity and cold start. To generate a variable for customers' purchase history, embedding and dimensionality reduction are applied to the purchase history. Variational autoencoders are then applied to the online behavior and purchase history. A total of 12 variables are used, and nDCG is chosen for performance evaluation. Our experimental results show that the proposed VAE-based recommendation outperforms SVD-based recommendation, and that the generated purchase-history variable improves recommendation performance.
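nDCG, the evaluation metric chosen above, rewards rankings that place the most relevant items first, with a logarithmic position discount. A minimal sketch over a hypothetical list of graded relevances:

```python
import numpy as np

def ndcg_at_k(relevances, k):
    """nDCG@k: DCG of the given ranking divided by the DCG of the ideal
    (relevance-sorted) ranking of the same items."""
    rel = np.asarray(relevances, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))  # positions 1..k
    dcg = float((rel * discounts).sum())
    ideal = np.sort(np.asarray(relevances, dtype=float))[::-1][:k]
    idcg = float((ideal * discounts[:ideal.size]).sum())
    return dcg / idcg if idcg > 0 else 0.0

# A ranking that already orders items by relevance scores a perfect 1.0;
# the reversed ranking scores strictly less.
print(ndcg_at_k([3, 2, 1, 0], k=4))        # 1.0
print(ndcg_at_k([0, 1, 2, 3], k=4) < 1.0)
```

Because the discount is positional, nDCG is well suited to top-N recommendation lists where only the first few slots are ever seen by the customer.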

Segmentation of Mammography Breast Images using Automatic Segmen Adversarial Network with Unet Neural Networks

  • Suriya Priyadharsini.M;J.G.R Sathiaseelan
    • International Journal of Computer Science & Network Security / v.23 no.12 / pp.151-160 / 2023
  • Breast cancer is the most dangerous and deadly form of cancer, and it is the second most common cancer among Indian women in rural areas. Initial detection of breast cancer can significantly improve treatment effectiveness: early detection of symptoms and signs enhances the odds of receiving earlier, more specialized care, and thus has the potential to significantly improve survival odds by delaying or entirely eliminating cancer. Mammography is a high-resolution radiography technique that is an important factor in avoiding and diagnosing cancer at an early stage. Automatic segmentation of the breast region in mammography images can reduce the area available for cancer search while saving time and effort compared to manual segmentation. Autoencoder-like convolutional and deconvolutional neural networks (CN-DCNN) were utilized in previous studies to automatically segment the breast area in mammography images. In this paper, we present Automatic SegmenAN, a unique end-to-end adversarial neural network for the task of medical image segmentation. Because image segmentation necessitates extensive pixel-level labeling, a standard GAN discriminator's single scalar real/fake output may be inefficient in providing stable and appropriate gradient feedback to the networks. Instead of using only a fully convolutional neural network as the segmentor, we propose a new adversarial critic network with a multi-scale L1 loss function that forces the critic and segmentor to learn both global and local attributes capturing long- and short-range spatial relations among pixels. We demonstrate that the Automatic SegmenAN approach is more up to date and reliable for segmentation tasks than the state-of-the-art U-net segmentation technique.
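The multi-scale L1 loss mentioned above compares critic feature maps at several resolutions instead of emitting a single real/fake scalar, so the gradient signal reflects both global and local mismatch. A minimal numpy sketch with hypothetical feature maps (the critic network itself is not modeled):

```python
import numpy as np

def multiscale_l1(feats_a, feats_b):
    """Multi-scale L1 loss: mean absolute difference between corresponding
    feature maps at each scale, averaged over the scales."""
    return float(np.mean([np.mean(np.abs(a - b))
                          for a, b in zip(feats_a, feats_b)]))

rng = np.random.default_rng(3)
# Hypothetical critic features at three scales for the ground-truth pair,
# a near-perfect predicted mask, and a poor predicted mask.
scales = [(32, 32), (16, 16), (8, 8)]
f_true = [rng.normal(size=s) for s in scales]
f_pred_good = [f + rng.normal(scale=0.01, size=f.shape) for f in f_true]
f_pred_bad = [f + rng.normal(scale=1.0, size=f.shape) for f in f_true]

print(multiscale_l1(f_true, f_pred_good) < multiscale_l1(f_true, f_pred_bad))
```

Because every scale contributes a dense per-pixel error rather than one scalar, the segmentor receives feedback about where, and at what spatial extent, its mask disagrees with the ground truth.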