Title/Summary/Keyword: Artificial neural network machine translation


Application of Artificial Neural Network For Sign Language Translation

  • Cho, Jeong-Ran; Kim, Hyung-Hoon
    • Journal of the Korea Society of Computer and Information, v.24 no.2, pp.185-192, 2019
  • Hearing-impaired people who use sign language face many difficulties communicating with hearing people who do not understand it. A sign language translation system enables communication between the two in this situation. Previous work on such systems falls into two types: systems based on video images and systems based on shape-input devices. Existing systems, however, have two shortcomings: they fail to recognize the varied sign expressions of individual signers, and they require special devices. This paper therefore devises a sign language translation system based on an artificial neural network to overcome these problems.

Sign Language Image Recognition System Using Artificial Neural Network

  • Kim, Hyung-Hoon; Cho, Jeong-Ran
    • Journal of the Korea Society of Computer and Information, v.24 no.2, pp.193-200, 2019
  • Hearing-impaired people live in a predominantly spoken-language culture, and because communicating with hearing people through sign language is difficult, many experience inconvenience and disadvantage in daily and social life. This paper therefore studies a sign language translation system for communication between hearing people and hearing-impaired signers, and implements a prototype. Previous work falls into two types: systems based on video images and systems based on shape-input devices. Existing systems, however, fail to recognize the varied sign expressions of individual signers and require special devices. Here we use an artificial neural network trained with machine learning to recognize varied sign expressions, and we aim to improve usability by capturing sign images with ordinary smartphones and common video equipment (a minimal sketch follows this entry).
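
The abstract does not include the network itself; as a rough illustration of the kind of image classifier such a system might use, here is a minimal Keras sketch. The input size (64x64 RGB frames), layer sizes, and the ten-class output are assumptions for illustration, not the paper's settings.

```python
# Minimal sketch of a sign-language image classifier. All hyperparameters
# here are illustrative assumptions, not the paper's architecture.
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 10  # assumed number of distinct sign expressions

model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),          # one resized video frame
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training would use labeled frames captured with an ordinary smartphone:
# x_train: (n, 64, 64, 3) float arrays, y_train: (n,) integer class labels.
# model.fit(x_train, y_train, epochs=10, validation_split=0.1)
```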

Application of Different Tools of Artificial Intelligence in Translation Language

  • Mohammad Ahmed Manasrah
    • International Journal of Computer Science & Network Security, v.23 no.3, pp.144-150, 2023
  • With progressive advances in Artificial Intelligence (AI) and Deep Learning (DL) contributing significantly to Natural Language Processing (NLP), the accuracy and quality of Machine Translation (MT) have improved manifold. There is a debate, however, over whether human translation has now become irrelevant or redundant. After all, human shortcomings are routinely compensated for by humanity's own creations. With the use of neural networks in machine translation, it has recently been claimed that intelligent systems can translate on par with human translators. Yet AI is still not free of the problems associated with processing language, let alone the intricacies and complexities typical of translation. Then there are the biases inherent in designing intelligent systems: how we design these systems depends on who we are, embedding a particular worldview and set of cultural experiences. Given the diversity of language structures and the cultures they represent, it looks very unlikely, at least for now, that intelligent machines, even with deep learning capabilities, will handle them with human proficiency.

Understanding recurrent neural network for texts using English-Korean corpora

  • Lee, Hagyeong; Song, Jongwoo
    • Communications for Statistical Applications and Methods, v.27 no.3, pp.313-326, 2020
  • Deep learning is the most important key to the development of artificial intelligence (AI). There are several distinguishable neural network architectures, such as the MLP, CNN, and RNN. Among them, we try to understand the Recurrent Neural Network (RNN), which differs from the other networks in handling sequential data, including time series and text. As one of the main recent tasks in Natural Language Processing (NLP), we consider Neural Machine Translation (NMT) using RNNs. We summarize the fundamental structures of recurrent networks and some approaches to representing natural-language words as meaningful numeric vectors, and we organize the topics needed to understand the estimation procedure from representing input source sequences to predicting translated target sequences. We then apply several translation models with Gated Recurrent Units (GRUs) in Keras to English-Korean sentence pairs, about 26,000 in total, drawn from two corpora: colloquial speech and news. We identify several factors that influence training quality, finding that loss decreases with more recurrent dimensions and that a bidirectional RNN in the encoder helps when dealing with short sequences. We also compute BLEU scores, the main measure of translation performance, and compare them with Google Translate on the same test sentences. We summarize some difficulties in training a proper translation model and in handling Korean. Using Keras in Python for the entire pipeline, from processing raw text to evaluating the translation model, also lets us draw on useful built-in functions and vocabulary utilities (a minimal encoder-decoder sketch follows this entry).
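
The paper works in Keras, but no code appears in the abstract. Below is a minimal sketch of a GRU encoder-decoder of the kind described, with a bidirectional GRU encoder whose final states initialize the decoder, as the paper reports helps on short sequences. The vocabulary sizes and the 256-dimensional recurrent state are placeholders, not the paper's values.

```python
# Minimal GRU encoder-decoder sketch in the Keras functional API.
# Vocabulary sizes and the latent dimension are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

SRC_VOCAB, TGT_VOCAB = 8000, 8000   # assumed English / Korean vocab sizes
LATENT = 256                        # recurrent dimension

# Encoder: embed source tokens, run a bidirectional GRU, and concatenate
# the forward and backward final states into one encoder summary.
enc_in = keras.Input(shape=(None,), name="src_tokens")
x = layers.Embedding(SRC_VOCAB, LATENT)(enc_in)
enc_out, fwd_h, bwd_h = layers.Bidirectional(
    layers.GRU(LATENT // 2, return_state=True))(x)
state = layers.Concatenate()([fwd_h, bwd_h])

# Decoder: a unidirectional GRU initialized with the encoder state,
# trained with teacher forcing on the shifted target sequence.
dec_in = keras.Input(shape=(None,), name="tgt_tokens")
y = layers.Embedding(TGT_VOCAB, LATENT)(dec_in)
y = layers.GRU(LATENT, return_sequences=True)(y, initial_state=state)
probs = layers.Dense(TGT_VOCAB, activation="softmax")(y)

model = keras.Model([enc_in, dec_in], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit([src_ids, tgt_in_ids], tgt_out_ids, ...) with integer token ids.
```

BLEU on held-out pairs, which the paper compares against Google Translate, could then be computed with, for example, nltk.translate.bleu_score.corpus_bleu.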

Deep recurrent neural networks with word embeddings for Urdu named entity recognition

  • Khan, Wahab; Daud, Ali; Alotaibi, Fahd; Aljohani, Naif; Arafat, Sachi
    • ETRI Journal, v.42 no.1, pp.90-100, 2020
  • Named entity recognition (NER) remains an important task in natural language processing because it appears as a subtask or subproblem in information extraction and machine translation; for Urdu it is particularly difficult. This paper proposes several deep recurrent neural network (DRNN) models with word embeddings, and experimental results demonstrate that they improve on current state-of-the-art NER approaches for Urdu. The DRNN models evaluated include forward and bidirectional extensions of long short-term memory trained with backpropagation through time. The proposed models consider both language-dependent features, such as part-of-speech tags, and language-independent features, such as the "context windows" of words. Their effectiveness is demonstrated on three datasets, and the results show that the approach significantly outperforms previous conditional random field and artificial neural network approaches. The best F-measures achieved on the three benchmark datasets are 81.1%, 79.94%, and 63.21%, respectively (a sketch of the model family follows this entry).
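
As a sketch of the general model family evaluated here, the following is a minimal bidirectional-LSTM tagger with word embeddings in Keras; recurrent layers in Keras are trained with backpropagation through time. The vocabulary size, tag count, and dimensions are assumptions, and the paper's language-dependent features (such as POS tags) are omitted.

```python
# Illustrative bidirectional-LSTM sequence tagger for NER.
# All sizes are placeholder assumptions, not the paper's settings.
from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 50000   # assumed Urdu vocabulary size
NUM_TAGS = 7         # e.g. BIO tags for a few entity types plus O
EMB_DIM = 100

inputs = keras.Input(shape=(None,), name="word_ids")
x = layers.Embedding(VOCAB_SIZE, EMB_DIM, mask_zero=True)(inputs)
x = layers.Bidirectional(layers.LSTM(128, return_sequences=True))(x)
outputs = layers.Dense(NUM_TAGS, activation="softmax")(x)  # one tag per token

model = keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(word_ids, tag_ids, ...) with padded integer sequences.
```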

A Study on the History, Classification and Development Direction of Artificial Intelligence (인공지능의 역사, 분류 그리고 발전 방향에 관한 연구)

  • Cho, Min-Ho
    • The Journal of the Korea institute of electronic communication sciences, v.16 no.2, pp.307-312, 2021
  • Artificial intelligence has a long history and is used in many fields, including image recognition and automatic translation. When researchers first encounter the field, the sheer number of terms, concepts, and technologies often makes it difficult to set or pursue a research direction. To help with this, this study summarizes the important concepts related to artificial intelligence and reviews the field's progress over the past 60 years, providing a basis for making use of the vast body of AI technology and for setting a sound research direction.

Fruit price prediction study using artificial intelligence (인공지능을 이용한 과일 가격 예측 모델 연구)

  • Im, Jin-mo; Kim, Weol-Youg; Byoun, Woo-Jin; Shin, Seung-Jung
    • The Journal of the Convergence on Culture Technology, v.4 no.2, pp.197-204, 2018
  • One of the hottest issues of the 21st century is AI. Just as the Industrial Revolution automated manual labor in the agricultural society, the software (SW) revolution has brought the intelligent information society out of the information society. With the advent of Google's AlphaGo, computers have learned and predicted through machine learning on their own, and have now surpassed humans even at the game of Go (Baduk). Machine learning (ML) is a field of artificial intelligence in which technology is developed to allow computers to learn by themselves. Many companies use machine learning: Facebook, for example, continually learns from images to recognize who appears in them, and Google used a neural network to build an efficient energy-usage model for optimizing its data centers. As another example, Microsoft's real-time interpretation model becomes a more sophisticated translation model as language-related input data grows through translation learning. As machine learning comes to be used in ever more fields, we must engage with the AI industry to move forward in 21st-century society.

A Unicode based Deep Handwritten Character Recognition model for Telugu to English Language Translation

  • BV Subba Rao; J. Nageswara Rao; Bandi Vamsi; Venkata Nagaraju Thatha; Katta Subba Rao
    • International Journal of Computer Science & Network Security, v.24 no.2, pp.101-112, 2024
  • Telugu is the fourth most widely used language in India, especially in the regions of Andhra Pradesh, Telangana, and Karnataka, and its speaker base is growing internationally as well. The language comprises dependent and independent vowels, consonants, and digits, yet Telugu Handwritten Character Recognition (HCR) has seen little advancement. HCR is a neural-network technique for converting a document image into editable text that can feed many other applications; it saves time and effort by avoiding re-transcription from scratch. In this work, a Unicode-based Handwritten Character Recognition (U-HCR) model is developed for translating handwritten Telugu characters into English. Using the Centre of Gravity (CG) of a glyph, the model splits a compound character into individual characters with the help of Unicode values. The model is trained on both online and offline Telugu character datasets. Features are extracted from the scanned image with a convolutional neural network combined with machine learning classifiers such as Random Forest and Support Vector Machine. Stochastic Gradient Descent (SGD), Root Mean Square Propagation (RMSProp), and Adaptive Moment Estimation (ADAM) optimizers are used to improve U-HCR's performance and reduce the loss. On both the online and offline datasets, the proposed model shows promising results, with accuracies of 90.28% for SGD, 96.97% for RMSProp, and 93.57% for ADAM (a sketch of the pipeline follows this entry).
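
The abstract describes a CNN feature extractor combined with classical classifiers and compares three optimizers. The following sketch shows one plausible wiring of that pipeline in Keras and scikit-learn; the image size, class count, and layer shapes are assumptions, and the paper's CG-based compound-character splitting is not shown.

```python
# Rough sketch of a CNN feature extractor followed by a classical
# classifier (SVM; a random forest would slot in the same way).
# Shapes, class count, and optimizer settings are assumptions.
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.svm import SVC

NUM_CLASSES = 52  # assumed count of Telugu character classes

cnn = keras.Sequential([
    keras.Input(shape=(32, 32, 1)),           # scanned character image
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu", name="features"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
# Any of the optimizers named in the abstract could be used here:
cnn.compile(optimizer=keras.optimizers.SGD(),  # or RMSprop() / Adam()
            loss="sparse_categorical_crossentropy")

# After training the CNN, reuse its penultimate layer as a feature
# extractor and fit the classical classifier on those features:
extractor = keras.Model(cnn.input, cnn.get_layer("features").output)
# feats = extractor.predict(x_train)
# svm = SVC().fit(feats, y_train)
```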

Classification and analysis of error types for deep learning-based Korean spelling correction (딥러닝 기반 한국어 맞춤법 교정을 위한 오류 유형 분류 및 분석)

  • Koo, Seonmin; Park, Chanjun; So, Aram; Lim, Heuiseok
    • Journal of the Korea Convergence Society, v.12 no.12, pp.65-74, 2021
  • Recently, studies on Korean spelling correction have been actively conducted based on machine translation and automatic noise generation: noise is injected into clean sentences, and the resulting pairs are used as training and test data. This approach has a limitation in that performance is hard to measure accurately, because the test set is unlikely to contain error types beyond the noise used for training. In addition, there is no practical standard for error types, so each study uses its own, which makes qualitative analysis difficult. This paper proposes a new error-type classification for deep-learning-based Korean spelling correction research and performs an error analysis on existing commercial Korean spelling correctors (systems A, B, and C). The analysis finds that the three correction systems perform poorly on the error types presented in this paper other than spacing, and barely recognize errors of word order or tense (a sketch of the noise-generation idea follows this entry).
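
The noise-generation idea the paper critiques can be illustrated with a small function that corrupts clean sentences into (noisy, clean) training pairs for a translation-style corrector. The error types and rates below are invented for illustration; a fixed taxonomy like this is exactly the kind of narrow coverage the paper identifies as a limitation.

```python
# Toy noise generator: corrupt clean sentences to create (noisy, correct)
# pairs. Error types and the rate p are illustrative assumptions.
import random

def add_noise(sentence: str, p: float = 0.1) -> str:
    """Corrupt a clean sentence with randomly chosen simple error types."""
    out, chars, i = [], list(sentence), 0
    while i < len(chars):
        if random.random() < p:
            kind = random.choice(["space", "swap", "drop"])
            if kind == "space" and chars[i] == " ":
                i += 1                                # spacing error
                continue
            if kind == "swap" and i + 1 < len(chars):
                out.extend([chars[i + 1], chars[i]])  # transpose neighbors
                i += 2
                continue
            if kind == "drop":
                i += 1                                # omission error
                continue
        out.append(chars[i])
        i += 1
    return "".join(out)

# (noisy, clean) pairs for training a corrector that "translates" back:
pairs = [(add_noise(s), s) for s in ["나는 학교에 간다", "오늘 날씨가 좋다"]]
```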

Deep Learning in Radiation Oncology

  • Cheon, Wonjoong; Kim, Haksoo; Kim, Jinsung
    • Progress in Medical Physics, v.31 no.3, pp.111-123, 2020
  • Deep learning (DL) is a subset of machine learning and artificial intelligence that uses deep neural networks, structured loosely like the human nervous system, trained on big data. DL narrows the gap between data acquisition and meaningful interpretation without explicit programming; it has so far outperformed most classification and regression methods and can automatically learn data representations for specific tasks. Its application areas in radiation oncology include classification, semantic segmentation, object detection, image translation and generation, and image captioning. This article examines the potential role of DL and what more can be achieved by using it in radiation oncology. Studies contributing to the development of the field were investigated comprehensively, with the radiation treatment workflow divided into six consecutive stages: patient assessment, simulation, target and organs-at-risk segmentation, treatment planning, quality assurance, and beam delivery. Studies using DL were classified and organized by stage, state-of-the-art work was identified, and its clinical utility was examined. DL models can provide faster and more accurate solutions to problems faced by oncologists. While the benefit of a data-driven approach for the quality of cancer care is clear, implementing these methods will require cultural changes at both the professional and institutional levels. We believe this paper will serve as a guide for clinicians and medical physicists on the issues that need to be addressed in time.