• Title/Summary/Keyword: deep-learning


Malaysian Name-based Ethnicity Classification using LSTM

  • Hur, Youngbum
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.12 / pp.3855-3867 / 2022
  • Name separation (splitting full names into surnames and given names) is not a trivial task in a multiethnic country, because the procedure for splitting surnames and given names is ethnicity-specific. Malaysia has multiple main ethnic groups; therefore, separating Malaysian full names into surnames and given names proves a challenge. In this study, we develop a two-phase framework for Malaysian name separation using deep learning. In the first phase, we predict the ethnicity of a full name, proposing a recurrent neural network model based on long short-term memory (LSTM) with character embeddings for the prediction. In the second phase, based on the predicted ethnicity, we split the full name into surname and given name using a rule-based algorithm. We evaluate the performance of the proposed model against various machine learning models and demonstrate that it outperforms them by an average of 9%. Moreover, transfer learning and fine-tuning of the proposed model with an additional dataset result in an improvement of up to 7% on average.
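
A minimal sketch of the kind of character-level LSTM classifier described in the first phase, assuming PyTorch; the vocabulary size, embedding and hidden dimensions, and number of ethnicity classes are illustrative placeholders, not values from the paper.

    import torch
    import torch.nn as nn

    class NameEthnicityLSTM(nn.Module):
        """Classify a full name into an ethnicity from its character sequence."""
        def __init__(self, n_chars=100, n_classes=4, emb_dim=32, hidden_dim=64):
            super().__init__()
            self.emb = nn.Embedding(n_chars, emb_dim, padding_idx=0)
            self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, n_classes)

        def forward(self, char_ids):            # char_ids: (batch, max_name_len)
            x = self.emb(char_ids)              # character embeddings
            _, (h_n, _) = self.lstm(x)          # final hidden state summarizes the name
            return self.out(h_n[-1])            # logits over ethnicity classes

    model = NameEthnicityLSTM()
    logits = model(torch.randint(1, 100, (8, 20)))   # batch of 8 encoded names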

Two tales of platoon intelligence for autonomous mobility control: Enabling deep learning recipes

  • Soohyun Park;Haemin Lee;Chanyoung Park;Soyi Jung;Minseok Choi;Joongheon Kim
    • ETRI Journal / v.45 no.5 / pp.735-745 / 2023
  • This paper surveys recent multiagent reinforcement learning (MARL) and neural Myerson auction deep learning efforts to improve mobility control and resource management in autonomous ground and aerial vehicles. The multiagent reinforcement learning communication network (CommNet) was introduced to enable multiple agents to act in a distributed manner toward shared goals by training all agents' states and actions in a single neural network. Additionally, the Myerson auction method guarantees trustworthiness among multiple agents while optimizing rewards in highly dynamic systems. Our findings suggest that integrating MARL CommNet and Myerson auction techniques is needed to improve both efficiency and trustworthiness.
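
A minimal sketch of one CommNet communication round, assuming PyTorch: each agent's hidden state is updated from its own state together with the mean of the other agents' hidden states, so all agents are trained within a single shared network. Dimensions are illustrative.

    import torch
    import torch.nn as nn

    class CommNetStep(nn.Module):
        """One communication round over all agents' hidden states."""
        def __init__(self, hidden_dim=64):
            super().__init__()
            self.f = nn.Linear(2 * hidden_dim, hidden_dim)

        def forward(self, h):                        # h: (n_agents, hidden_dim)
            n = h.size(0)
            total = h.sum(dim=0, keepdim=True)
            comm = (total - h) / max(n - 1, 1)       # mean of the *other* agents
            return torch.tanh(self.f(torch.cat([h, comm], dim=-1)))

    step = CommNetStep()
    h = torch.randn(5, 64)        # 5 agents
    h = step(h)                   # hidden states after one communication round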

Analysis of the Status of Natural Language Processing Technology Based on Deep Learning (딥러닝 중심의 자연어 처리 기술 현황 분석)

  • Park, Sang-Un
    • The Journal of Bigdata / v.6 no.1 / pp.63-81 / 2021
  • The performance of natural language processing (NLP) is improving rapidly owing to the recent development and application of machine learning and deep learning technologies, and as a result its field of application is expanding. In particular, as the demand for analysis of unstructured text data increases, interest in NLP is also increasing. However, because of the complexity and difficulty of natural language preprocessing and of machine learning and deep learning theory, the barriers to using NLP remain high. In this paper, to support an overall understanding of NLP, we examine the main fields of NLP that are currently being actively researched and the current state of its major technologies, centered on machine learning and deep learning, so that NLP can be understood and utilized more easily. To this end, we trace the change of NLP within artificial intelligence (AI) through changes in the taxonomy of AI technology. The main areas of NLP, which consist of language modeling, text classification, text generation, document summarization, question answering, and machine translation, are explained together with state-of-the-art deep learning models. In addition, the major deep learning models utilized in NLP are explained, and datasets and evaluation measures for performance evaluation are summarized. We hope that researchers who want to utilize NLP for various purposes in their fields will be able to understand the overall technical status and main technologies of NLP through this paper.
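
As a quick illustration of how accessible deep-learning NLP has become, the snippet below runs a pretrained text classifier, assuming the Hugging Face transformers library is installed; the default model the pipeline downloads is an implementation detail, not something the paper specifies.

    # Text classification with a pretrained model in a few lines.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")
    print(classifier("Deep learning has lowered the barrier to NLP."))
    # -> [{'label': 'POSITIVE', 'score': ...}]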

A Study on Area Detection Using Transfer-Learning Technique (Transfer-Learning 기법을 이용한 영역검출 기법에 관한 연구)

  • Shin, Kwang-seong;Shin, Seong-yoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.178-179 / 2018
  • Recently, methods that apply machine learning to artificial intelligence tasks such as autonomous navigation and speech recognition have been actively studied. Classical image processing methods such as boundary detection and pattern recognition have many limitations when recognizing a specific object or area in a digital image, whereas machine learning methods such as deep learning can obtain much better results. Basically, however, a large amount of training data must be secured for machine learning methods such as deep learning. It is therefore difficult to apply machine learning to area classification when very little data is available, as with aerial photographs for environmental analysis. In this study, we apply a transfer-learning technique that can be used when the input image dataset is small and the shape of the input image is not included in the categories of the training dataset.
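
A minimal sketch of the transfer-learning recipe the abstract refers to, assuming PyTorch/torchvision: reuse an ImageNet-pretrained backbone, freeze its weights, and train only a new classification head on the small target dataset. The two-class head is an illustrative assumption.

    import torch.nn as nn
    from torchvision import models

    # Load an ImageNet-pretrained backbone and freeze all of its weights.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in model.parameters():
        p.requires_grad = False

    # Replace the classifier head for the small target dataset (e.g., 2 classes).
    model.fc = nn.Linear(model.fc.in_features, 2)
    # During training, only model.fc receives gradient updates, so a few
    # hundred labeled images can be enough to adapt the network.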


Distributed Edge Computing for DNA-Based Intelligent Services and Applications: A Review (딥러닝을 사용하는 IoT빅데이터 인프라에 필요한 DNA 기술을 위한 분산 엣지 컴퓨팅기술 리뷰)

  • Alemayehu, Temesgen Seyoum;Cho, We-Duke
    • KIPS Transactions on Computer and Communication Systems / v.9 no.12 / pp.291-306 / 2020
  • Nowadays, Data-Network-AI (DNA)-based intelligent services and applications have become a reality, providing a new dimension of services that improve quality of life and business productivity. Artificial intelligence (AI) can enhance the value of IoT data (data collected by IoT devices), and the internet of things (IoT) promotes the learning and intelligence capability of AI. To extract insights from massive volumes of IoT data in real time using deep learning, processing capability needs to be available in the IoT end devices where the data is generated. However, deep learning requires a significant amount of computational resources that may not be available at the IoT end devices. Such problems have been addressed by transporting bulk data from the IoT end devices to cloud datacenters for processing, but transferring IoT big data to the cloud incurs prohibitively high transmission delays and raises major privacy concerns. Edge computing, where distributed computing nodes are placed close to the IoT end devices, is a viable solution for meeting the high-computation and low-latency requirements and for preserving user privacy. This paper provides a comprehensive review of the current state of leveraging deep learning within edge computing to unleash the potential of IoT big data generated by IoT end devices, and we believe this review will contribute to the development of DNA-based intelligent services and applications. The paper describes the different distributed training and inference architectures for deep learning models across multiple nodes of an edge computing platform, presents privacy-preserving approaches for deep learning in the edge computing environment and the various application domains where deep learning on the network edge can be useful, and finally discusses open issues and challenges in leveraging deep learning within edge computing.
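
A toy illustration of the model-partitioning idea such reviews cover, assuming PyTorch: the early layers run on the IoT end device and only a small intermediate tensor travels to the edge node, which completes the inference. The layer split and tensor shapes are invented for illustration.

    import torch
    import torch.nn as nn

    backbone = nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    )
    head_on_device = backbone[:3]     # runs on the IoT end device
    tail_on_edge = backbone[3:]       # runs on the nearby edge node

    x = torch.randn(1, 3, 64, 64)             # e.g., a sensor frame
    intermediate = head_on_device(x)          # compact tensor sent over the network
    features = tail_on_edge(intermediate)     # inference completed at the edge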

Lane Change Methodology for Autonomous Vehicles Based on Deep Reinforcement Learning (심층강화학습 기반 자율주행차량의 차로변경 방법론)

  • DaYoon Park;SangHoon Bae;Trinh Tuan Hung;Boogi Park;Bokyung Jung
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.22 no.1 / pp.276-290 / 2023
  • Several efforts are currently underway in Korea with the goal of commercializing autonomous vehicles; hence, various studies are emerging on autonomous vehicles that drive safely and quickly according to operating guidelines. The current study examines the path search of an autonomous vehicle from a microscopic viewpoint and evaluates the resulting efficiency by learning the lane-change behavior of an autonomous vehicle through Deep Q-Learning, using the SUMO traffic simulator. The scenario starts in a random lane at the origin and requires a right turn at the destination via a lane change into the third lane. The analysis compared simulated lane changes with and without Deep Q-Learning. With Deep Q-Learning applied, the average traffic speed improved by about 40% compared with the case without it, the average waiting time was reduced by about 2 seconds, and the average queue length by about 2.3 vehicles.
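
A compact sketch of the Deep Q-Learning loop such a lane-change study rests on, assuming PyTorch; the 8-dimensional state, the three-action set (keep lane, change left, change right), and the network sizes are illustrative assumptions, not the paper's configuration.

    import random
    import torch
    import torch.nn as nn

    q_net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 3))
    target_net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 3))
    target_net.load_state_dict(q_net.state_dict())
    opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)
    gamma, eps = 0.99, 0.1

    def act(state):
        """Epsilon-greedy action over {keep lane, change left, change right}."""
        if random.random() < eps:
            return random.randrange(3)
        with torch.no_grad():
            return int(q_net(state).argmax())

    def update(s, a, r, s_next, done):
        """One Q-learning step toward the bootstrapped target."""
        q = q_net(s)[a]
        with torch.no_grad():
            target = r + (0.0 if done else gamma * target_net(s_next).max())
        loss = (q - target) ** 2
        opt.zero_grad(); loss.backward(); opt.step()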

Analysis and Study for Appropriate Deep Neural Network Structures and Self-Supervised Learning-based Brain Signal Data Representation Methods (딥 뉴럴 네트워크의 적절한 구조 및 자가-지도 학습 방법에 따른 뇌신호 데이터 표현 기술 분석 및 고찰)

  • Won-Jun Ko
    • The Journal of the Korea Institute of Electronic Communication Sciences / v.19 no.1 / pp.137-142 / 2024
  • Recently, deep learning has become the de facto standard for medical data representation. However, deep learning inherently requires a large amount of training data, which poses a challenge for its direct application in the medical field, where acquiring large-scale data is not straightforward. Brain signal modalities suffer particularly from this problem owing to their high variability. Research has therefore focused on designing deep neural network structures capable of effectively extracting the spectro-spatio-temporal characteristics of brain signals, or on employing self-supervised learning methods to pre-learn their neurophysiological features. This paper analyzes the methodologies used to handle small-scale data in emerging fields such as brain-computer interfaces and brain signal-based state prediction, and presents future directions for these technologies. First, the paper examines deep neural network structures for representing brain signals; it then analyzes self-supervised learning methodologies aimed at efficiently learning the characteristics of brain signals. Finally, it discusses key insights and future directions for deep learning-based brain signal analysis.
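
As a concrete illustration of self-supervised pre-learning on brain signals, the sketch below implements a SimCLR-style contrastive objective on EEG-like windows, assuming PyTorch; the encoder, the augmentations, and the 22-channel/250-sample window shape are assumptions made for illustration, not the specific methods surveyed in the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Tiny 1-D convolutional encoder for multichannel signal windows.
    encoder = nn.Sequential(nn.Conv1d(22, 32, 7, padding=3), nn.ReLU(),
                            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                            nn.Linear(32, 64))

    def augment(x):
        # Amplitude scaling plus Gaussian noise as simple signal augmentations.
        return x * (0.8 + 0.4 * torch.rand(x.size(0), 1, 1)) + 0.01 * torch.randn_like(x)

    def nt_xent(z1, z2, tau=0.5):
        # Two augmented views of the same window are positives; all else negatives.
        z = F.normalize(torch.cat([z1, z2]), dim=1)
        sim = z @ z.t() / tau
        n = z1.size(0)
        sim.fill_diagonal_(float('-inf'))        # exclude self-similarity
        targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
        return F.cross_entropy(sim, targets)

    x = torch.randn(16, 22, 250)                 # (batch, channels, time samples)
    loss = nt_xent(encoder(augment(x)), encoder(augment(x)))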

Design for Deep Learning Configuration Management System using Block Chain (딥러닝 형상관리를 위한 블록체인 시스템 설계)

  • Bae, Su-Hwan;Shin, Yong-Tae
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.14 no.3 / pp.201-207 / 2021
  • Deep learning, a type of machine learning, updates its weights as it progresses through each step of training. TensorFlow and Keras provide the results at the end of training in graph form; thus, if an error occurs, the results must be discarded. Existing technologies provide a function to roll back learning results, but the rollback is limited to the five most recent results. Moreover, although the concept of MLOps has been applied to track the deep learning process, no rollback capability is provided. In this paper, we construct a system that records the intermediate values of the learning process on a blockchain, so that the intermediate learning process is preserved and training can be rolled back in the event of an error. The recording of the deep learning process and the rollback of learning results are implemented as smart contracts. Performance evaluation shows that the proposed method has a 100% recovery rate, whereas the recovery rate of the existing technique declines after six rollbacks, falling to 10% at 50. In addition, when using smart contracts on the Ethereum blockchain, a continuous cost of 1.57 million won per block creation was confirmed.
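
To make the checkpoint-chaining idea concrete, here is a minimal local hash-chain of training checkpoints in Python with PyTorch; a real deployment would anchor each record in an Ethereum smart contract, as the paper describes, rather than in an in-memory list.

    import hashlib
    import io
    import torch

    chain = []   # each entry: (epoch, weight_digest, prev_hash, record_hash)

    def record_checkpoint(epoch, model):
        """Hash the model weights and append a chained record for this epoch."""
        buf = io.BytesIO()
        torch.save(model.state_dict(), buf)
        digest = hashlib.sha256(buf.getvalue()).hexdigest()
        prev = chain[-1][3] if chain else "0" * 64
        rec = hashlib.sha256(f"{epoch}{digest}{prev}".encode()).hexdigest()
        chain.append((epoch, digest, prev, rec))

    def verify_chain():
        """Check that no intermediate record has been altered or dropped."""
        prev = "0" * 64
        for epoch, digest, p, rec in chain:
            expected = hashlib.sha256(f"{epoch}{digest}{prev}".encode()).hexdigest()
            if p != prev or rec != expected:
                return False
            prev = rec
        return True
    # Rolling back to epoch k means restoring the checkpoint whose record at
    # position k still verifies against the chain.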

Deep Learning Architectures and Applications (딥러닝의 모형과 응용사례)

  • Ahn, SungMahn
    • Journal of Intelligence and Information Systems / v.22 no.2 / pp.127-142 / 2016
  • A deep learning model is a kind of neural network that allows multiple hidden layers. There are various deep learning architectures, such as convolutional neural networks, deep belief networks, and recurrent neural networks. These have been applied to fields like computer vision, automatic speech recognition, natural language processing, audio recognition, and bioinformatics, where they have been shown to produce state-of-the-art results on various tasks. Among these architectures, convolutional neural networks and recurrent neural networks are classified as supervised learning models, and in recent years these supervised models have gained more popularity than unsupervised learning models such as deep belief networks, because of their striking applications in the fields mentioned above. Deep learning models can be trained with the backpropagation algorithm. Backpropagation, an abbreviation for "backward propagation of errors", is a common method of training artificial neural networks used in conjunction with an optimization method such as gradient descent. The method calculates the gradient of an error function with respect to all the weights in the network; the gradient is fed to the optimization method, which in turn uses it to update the weights in an attempt to minimize the error function. Convolutional neural networks use a special architecture that is particularly well adapted to classifying images. This architecture makes convolutional networks fast to train, which in turn helps us train deep, multi-layer networks that are very good at classifying images. These days, deep convolutional networks are used in most neural networks for image recognition. Convolutional neural networks rest on three basic ideas: local receptive fields, shared weights, and pooling. Local receptive fields mean that each neuron in the first (or any) hidden layer is connected to a small region of the input (or previous layer's) neurons. Shared weights mean that the same weights and bias are used for each local receptive field, so all the neurons in a hidden layer detect exactly the same feature, just at different locations in the input image. In addition to the convolutional layers just described, convolutional neural networks also contain pooling layers, usually placed immediately after convolutional layers, which simplify the information in the output of the convolutional layer. Recent convolutional network architectures have 10 to 20 hidden layers and billions of connections between units. Training deep networks took weeks several years ago, but thanks to progress in GPUs and algorithmic enhancements, training time has been reduced to several hours. Neural networks with time-varying behavior are known as recurrent neural networks (RNNs). An RNN is a class of artificial neural network in which connections between units form a directed cycle, creating an internal state that allows the network to exhibit dynamic temporal behavior. Unlike feedforward neural networks, RNNs can use their internal memory to process arbitrary sequences of inputs. Early RNN models turned out to be very difficult to train, harder even than deep feedforward networks, because of unstable gradient problems such as vanishing and exploding gradients: the gradient can get smaller and smaller as it is propagated back through layers, making learning in early layers extremely slow. The problem is actually worse in RNNs, since gradients are propagated backward not only through layers but also through time; if the network runs for a long time, the gradient can become extremely unstable and hard to learn from. It has become possible to incorporate an idea known as long short-term memory units (LSTMs) into RNNs. LSTMs make it much easier to get good results when training RNNs, and many recent papers make use of LSTMs or related ideas.
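
The three convolutional ideas just described translate directly into code; below is a minimal convolutional network, assuming PyTorch, sized for 28x28 single-channel images as an illustrative assumption.

    import torch.nn as nn

    cnn = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1),   # local receptive fields: each
        nn.ReLU(),                                   # output unit sees a 3x3 patch;
                                                     # shared weights: one kernel
                                                     # slides over the whole image
        nn.MaxPool2d(2),                             # pooling simplifies the feature map
        nn.Conv2d(8, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 7 * 7, 10),                   # classifier head for 28x28 input
    )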