• Title/Summary/Keyword: deep-learning

Learning Deep Representation by Increasing ConvNets Depth for Few Shot Learning

  • Fabian, H.S. Tan; Kang, Dae-Ki
    • International Journal of Advanced Smart Convergence / v.8 no.4 / pp.75-81 / 2019
  • Though recent advances in deep learning have produced satisfactory results on large-data domains, these methods still yield poor performance on few-shot classification tasks. Training a strong model such as a deep convolutional neural network depends heavily on huge datasets, and the number of labeled classes in such datasets can be extremely large. The cost of human annotation and the scarcity of data within classes have drastically limited the capability of current image classification models. In contrast, humans excel at learning and recognizing new, unseen classes from merely a small set of labeled examples. Few-shot learning aims to train a classification model with limited labeled samples so that it can recognize new classes never seen during training. In this paper, we increase the backbone depth of the embedding network in order to learn intra-class variation. By increasing the network depth of the embedding module, we achieve competitive performance due to the minimized intra-class variation.
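
As an illustrative aside, the following minimal sketch shows the kind of deeper convolutional embedding this abstract describes, paired with a prototypical-style nearest-centroid few-shot classifier. The block depth, channel width, input size, and the nearest-centroid matching rule are assumptions for illustration, not the paper's exact model.

```python
# Sketch: deeper conv embedding + nearest-centroid few-shot classification.
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    """Conv-BN-ReLU-pool block; stacking more of these deepens the backbone."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1),
        nn.BatchNorm2d(c_out),
        nn.ReLU(),
        nn.MaxPool2d(2),
    )

class DeepEmbedding(nn.Module):
    """Embedding module whose depth is a tunable hyperparameter."""
    def __init__(self, depth=6, width=64):
        super().__init__()
        layers = [conv_block(3, width)] + [conv_block(width, width) for _ in range(depth - 1)]
        self.encoder = nn.Sequential(*layers)

    def forward(self, x):
        return self.encoder(x).flatten(1)  # one embedding vector per image

def few_shot_logits(model, support, support_labels, query, n_way):
    """Score queries by (negative) distance to per-class support centroids."""
    zs, zq = model(support), model(query)
    protos = torch.stack([zs[support_labels == c].mean(0) for c in range(n_way)])
    return -torch.cdist(zq, protos)  # closer centroid -> higher logit

# 5-way, 3-shot episode with random stand-in images (84x84 assumed)
enc = DeepEmbedding()
support = torch.randn(15, 3, 84, 84)
labels = torch.arange(5).repeat_interleave(3)
query = torch.randn(10, 3, 84, 84)
print(few_shot_logits(enc, support, labels, query, n_way=5).shape)  # (10, 5)
```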

Human-like sign-language learning method using deep learning

  • Ji, Yangho; Kim, Sunmok; Kim, Young-Joo; Lee, Ki-Baek
    • ETRI Journal / v.40 no.4 / pp.435-445 / 2018
  • This paper proposes a human-like sign-language learning method that uses a deep-learning technique. Inspired by the fact that humans can learn sign language from just a set of pictures in a book, in the proposed method, the input data are pre-processed into an image. In addition, the network is partially pre-trained to imitate the preliminarily obtained knowledge of humans. The learning process is implemented with a well-known network, that is, a convolutional neural network. Twelve sign actions are learned in 10 situations, and can be recognized with an accuracy of 99% in scenarios with low-cost equipment and limited data. The results show that the system is highly practical, as well as accurate and robust.
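
A rough sketch of the "partially pre-trained network" idea described above: the convolutional front end is initialized from a hypothetical earlier checkpoint (standing in for the preliminarily obtained knowledge) and frozen, while only a new classifier head is trained on the 12 sign classes. The layer shapes, 64x64 input size, and checkpoint path are illustrative assumptions, not the paper's exact network.

```python
# Sketch: CNN for the 12 sign classes with a partially pre-trained front end.
import torch
import torch.nn as nn

class SignNet(nn.Module):
    def __init__(self, n_classes=12):
        super().__init__()
        self.features = nn.Sequential(          # front end meant to be pre-trained
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(64 * 16 * 16, n_classes))

    def forward(self, x):                       # x: (B, 3, 64, 64) sign images
        return self.head(self.features(x))

model = SignNet()
# Hypothetical checkpoint standing in for "preliminarily obtained knowledge":
# model.features.load_state_dict(torch.load("pretrained_features.pt"))
for p in model.features.parameters():           # freeze the pre-trained part
    p.requires_grad = False
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
logits = model(torch.randn(4, 3, 64, 64))       # (4, 12) class scores
```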

Extraction of the OLED Device Parameter based on Randomly Generated Monte Carlo Simulation with Deep Learning (무작위 생성 심층신경망 기반 유기발광다이오드 흑점 성장가속 전산모사를 통한 소자 변수 추출)

  • You, Seung Yeol; Park, Il-Hoo; Kim, Gyu-Tae
    • Journal of the Semiconductor & Display Technology / v.20 no.3 / pp.131-135 / 2021
  • The number of studies on optimizing organic light-emitting diode (OLED) design through machine learning is increasing. We propose a generative method that produces images for assessing device performance in combination with a machine learning technique. The principal parameter governing the dark-spot growth mechanism of an OLED can be the key factor determining its long-term performance. Captured images from actual devices, together with images randomly generated for a specific time and initial pinhole state, are fed into a deep neural network. The simulation reinforced by machine learning can predict the device parameters accurately and quickly. Similarly, inverse design using a multilayer perceptron (MLP) can infer the initial degradation factors at manufacturing from a given device parameter, feeding back into the design of the manufacturing process.
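
A toy version of the pipeline the abstract sketches: dark-spot images are synthesized from a known growth parameter, and a network is trained to regress that parameter back from the image. The circular-spot model with radius growing as k*sqrt(t), and all sizes below, are assumptions for illustration only, not the paper's physical model.

```python
# Toy pipeline: synthesize dark-spot images from a growth parameter k,
# then regress k back from the image with a small network.
import numpy as np
import torch
import torch.nn as nn

def make_spot_image(k, t, size=32, n_spots=5, rng=None):
    """Binary image of dark spots of radius k*sqrt(t) at random pinholes."""
    rng = rng if rng is not None else np.random.default_rng(0)
    yy, xx = np.mgrid[0:size, 0:size]
    img = np.zeros((size, size), dtype=np.float32)
    r2 = (k * np.sqrt(t)) ** 2
    for cy, cx in rng.integers(0, size, size=(n_spots, 2)):
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r2] = 1.0
    return img

rng = np.random.default_rng(42)
ks = rng.uniform(0.2, 2.0, size=256).astype(np.float32)   # hidden parameters
X = torch.from_numpy(np.stack([make_spot_image(k, t=9.0, rng=rng) for k in ks])).flatten(1)
y = torch.from_numpy(ks).unsqueeze(1)

model = nn.Sequential(nn.Linear(32 * 32, 128), nn.ReLU(), nn.Linear(128, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                            # learn to read k off the image
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    optimizer.step()
print("final MSE:", loss.item())
```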

Deep-learning based In-situ Monitoring and Prediction System for the Organic Light Emitting Diode

  • Park, Il-Hoo; Cho, Hyeran; Kim, Gyu-Tae
    • Journal of the Semiconductor & Display Technology / v.19 no.4 / pp.126-129 / 2020
  • We introduce a lifetime assessment technique that uses a deep learning algorithm with complex electrical parameters, such as resistivity, permittivity, and impedance parameters, as integrated indicators for predicting the degradation of the organic molecules. The evaluation system consists of a fully automated in-situ measurement system and a multilayer perceptron learning system with five hidden layers and 1011 perceptrons in each layer. Prediction accuracies are calculated and compared depending on the physical features and learning hyperparameters. 62.5% of the full time-series data are used for training, with a prediction accuracy estimated as an r-squared value of 0.99; the remaining 37.5% are used for testing, with a prediction accuracy of 0.95. With k-fold cross-validation, robustness to instantaneous changes in the measured data is also improved.
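
The setup described here maps almost directly onto a standard toolchain; the sketch below mirrors the five hidden layers of 1011 units, the 62.5%/37.5% split, r-squared scoring, and k-fold cross-validation, but substitutes randomly generated placeholder features for the measured electrical parameters.

```python
# Sketch mirroring the stated setup: five hidden layers of 1011 units,
# a 62.5%/37.5% split, r-squared scoring, and k-fold cross-validation.
# The features and targets below are random placeholders, not measured data.
import numpy as np
from sklearn.model_selection import KFold, cross_val_score, train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8))                   # stand-in electrical parameters
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=400)  # stand-in lifetime target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.625, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(1011,) * 5, max_iter=300, random_state=0)
mlp.fit(X_tr, y_tr)                             # note: a large network; training is slow
print("test r^2:", r2_score(y_te, mlp.predict(X_te)))

# k-fold cross-validation to check stability across splits of the data
kf = KFold(n_splits=5, shuffle=True, random_state=0)
print("5-fold r^2:", cross_val_score(mlp, X, y, cv=kf, scoring="r2").mean())
```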

Current Status of Automatic Fish Measurement (어류의 외부형질 측정 자동화 개발 현황)

  • Yi, Myunggi
    • Korean Journal of Fisheries and Aquatic Sciences / v.55 no.5 / pp.638-644 / 2022
  • The measurement of morphological features is essential in aquaculture, the fish industry, and the management of fishery resources. Measuring fish requires a large investment of manpower and time, so automated, reliable measurement methods have been developed to save both. Automation has been achieved by applying computer vision and machine learning techniques, and recently most automatic fish measurement studies have used machine learning methods based on deep learning. Here, we review the current status of automatic fish measurement with traditional computer vision methods and with deep learning-based methods.

Malaysian Name-based Ethnicity Classification using LSTM

  • Hur, Youngbum
    • KSII Transactions on Internet and Information Systems (TIIS) / v.16 no.12 / pp.3855-3867 / 2022
  • Name separation (splitting full names into surnames and given names) is not a trivial task in a multiethnic country, because the procedure for splitting surnames and given names is ethnicity-specific. Malaysia has multiple main ethnic groups; therefore, separating Malaysian full names into surnames and given names is a challenge. In this study, we develop a two-phase framework for Malaysian name separation using deep learning. In the first phase, we predict the ethnicity of full names, using a recurrent neural network model based on long short-term memory (LSTM) with character embeddings. Based on the predicted ethnicity, we use a rule-based algorithm to split full names into surnames and given names in the second phase. We evaluate the proposed model against various machine learning models and demonstrate that it outperforms them by an average of 9%. Moreover, transfer learning and fine-tuning of the proposed model with an additional dataset yield an improvement of up to 7% on average.
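
A minimal sketch of the first phase described above: a character-embedding LSTM that maps a full name to ethnicity logits. The character encoding, embedding and hidden sizes, and the three-class output are illustrative assumptions; the rule-based splitting phase is omitted.

```python
# Sketch of phase one: a character-embedding LSTM mapping a full name
# to ethnicity logits.
import torch
import torch.nn as nn

class NameEthnicityLSTM(nn.Module):
    def __init__(self, n_chars=128, emb_dim=32, hidden=64, n_ethnicities=3):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_ethnicities)

    def forward(self, char_ids):                # (B, max_len) character ids
        _, (h, _) = self.lstm(self.emb(char_ids))
        return self.out(h[-1])                  # logits from final hidden state

def encode(name, max_len=32):
    """ASCII codes as character ids, zero-padded (illustrative encoding)."""
    ids = [min(ord(c), 127) for c in name.lower()[:max_len]]
    return torch.tensor(ids + [0] * (max_len - len(ids)))

model = NameEthnicityLSTM()
logits = model(encode("nurul huda binti ismail").unsqueeze(0))  # (1, 3)
```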

Two tales of platoon intelligence for autonomous mobility control: Enabling deep learning recipes

  • Soohyun Park; Haemin Lee; Chanyoung Park; Soyi Jung; Minseok Choi; Joongheon Kim
    • ETRI Journal / v.45 no.5 / pp.735-745 / 2023
  • This paper surveys recent multiagent reinforcement learning (MARL) and neural Myerson auction deep learning efforts to improve mobility control and resource management in autonomous ground and aerial vehicles. The multiagent reinforcement learning communication network (CommNet) was introduced to enable multiple agents to act in a distributed manner toward shared goals by training all agents' states and actions in a single neural network. Additionally, the Myerson auction method guarantees trustworthiness among multiple agents in optimizing rewards in highly dynamic systems. Our findings suggest that integrating MARL CommNet and Myerson techniques is needed for improved efficiency and trustworthiness.
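
A minimal sketch of the CommNet idea referenced above: in each layer, an agent's next hidden state combines its own state with the mean of the other agents' states, so all agents share one trainable network. Layer sizes and the tanh nonlinearity are illustrative choices, not the surveyed systems' exact configuration.

```python
# Sketch of a CommNet-style layer: each agent's next state combines its own
# state with the mean of the other agents' states, inside one shared network.
import torch
import torch.nn as nn

class CommNetLayer(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.self_fc = nn.Linear(hidden, hidden)   # processes the agent's own state
        self.comm_fc = nn.Linear(hidden, hidden)   # processes the communication vector

    def forward(self, h):                          # h: (n_agents, hidden)
        n = h.size(0)
        comm = (h.sum(0, keepdim=True) - h) / max(n - 1, 1)  # mean over the others
        return torch.tanh(self.self_fc(h) + self.comm_fc(comm))

layer = CommNetLayer()
h = torch.randn(4, 64)                             # hidden states of 4 platoon agents
h = layer(h)                                       # one communication round
```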

Analysis of the Status of Natural Language Processing Technology Based on Deep Learning (딥러닝 중심의 자연어 처리 기술 현황 분석)

  • Park, Sang-Un
    • The Journal of Bigdata / v.6 no.1 / pp.63-81 / 2021
  • The performance of natural language processing (NLP) is improving rapidly due to the recent development and application of machine learning and deep learning technologies, and as a result its field of application is expanding. In particular, as the demand for analysis of unstructured text data increases, interest in NLP is also increasing. However, due to the complexity and difficulty of natural language preprocessing and of machine learning and deep learning theory, there are still high barriers to the use of NLP. In this paper, to give an overall understanding of NLP, we examine the main fields of NLP that are currently being actively researched and the current state of major technologies centered on machine learning and deep learning, providing a foundation for understanding and utilizing NLP more easily. We trace the change of NLP within artificial intelligence (AI) through changes in the taxonomy of AI technology. The main areas of NLP, consisting of language modeling, text classification, text generation, document summarization, question answering, and machine translation, are explained along with state-of-the-art deep learning models. In addition, major deep learning models used in NLP are explained, and datasets and evaluation measures for performance evaluation are summarized. We hope that researchers who want to utilize NLP for various purposes in their fields will be able to understand the overall technical status and main technologies of NLP through this paper.

A Study on Area Detection Using Transfer-Learning Technique (Transfer-Learning 기법을 이용한 영역검출 기법에 관한 연구)

  • Shin, Kwang-seong; Shin, Seong-yoon
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2018.10a / pp.178-179 / 2018
  • Recently, machine learning methods in artificial intelligence, such as autonomous navigation and speech recognition, have been actively studied. Classical image processing methods, such as boundary detection and pattern recognition, have many limitations when recognizing a specific object or area in a digital image, whereas much better results can be obtained when a machine learning method such as deep learning is used. However, machine learning methods such as deep learning fundamentally require a large amount of training data. It is therefore difficult to apply machine learning to area classification when the amount of data is very small, as with aerial photographs for environmental analysis. In this study, we apply a transfer-learning technique that can be used when the input dataset is small and the shape of the input image is not included in the categories of the training dataset.
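
A sketch of the standard transfer-learning recipe this abstract points to: reuse a backbone pre-trained on a large image corpus, freeze its features, and train only a small new head on the scarce target data. The choice of ResNet-18 (whose weights are downloaded on first use) and the two-class head are assumptions, not the paper's setup.

```python
# Sketch of the transfer-learning recipe: frozen pre-trained backbone,
# newly trained head for the small target dataset.
import torch
import torch.nn as nn
import torchvision

weights = torchvision.models.ResNet18_Weights.DEFAULT  # downloads on first use
model = torchvision.models.resnet18(weights=weights)
for p in model.parameters():                    # freeze pre-trained features
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)   # new head: target area vs. other
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
# Train only the head on the few available aerial images as usual.
```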

Distributed Edge Computing for DNA-Based Intelligent Services and Applications: A Review (딥러닝을 사용하는 IoT빅데이터 인프라에 필요한 DNA 기술을 위한 분산 엣지 컴퓨팅기술 리뷰)

  • Alemayehu, Temesgen Seyoum; Cho, We-Duke
    • KIPS Transactions on Computer and Communication Systems / v.9 no.12 / pp.291-306 / 2020
  • Nowadays, Data-Network-AI (DNA)-based intelligent services and applications have become a reality, providing a new dimension of services that improve the quality of life and the productivity of businesses. Artificial intelligence (AI) can enhance the value of IoT data (data collected by IoT devices), and the internet of things (IoT) promotes the learning and intelligence capability of AI. To extract insights from massive volumes of IoT data in real time using deep learning, processing capability needs to be available at the IoT end devices where the data are generated. However, deep learning requires significant computational resources that may not be available at the IoT end devices. Such problems have been addressed by transporting bulk data from the IoT end devices to cloud datacenters for processing, but transferring IoT big data to the cloud incurs prohibitively high transmission delay and raises privacy issues, which are a major concern. Edge computing, in which distributed computing nodes are placed close to the IoT end devices, is a viable solution to meet the high-computation and low-latency requirements and to preserve the privacy of users. This paper provides a comprehensive review of the current state of leveraging deep learning within edge computing to unleash the potential of IoT big data generated by IoT end devices. We believe that this review will contribute to the development of DNA-based intelligent services and applications. It describes the different distributed training and inference architectures of deep learning models across multiple nodes of an edge computing platform. It also presents the privacy-preserving approaches of deep learning in the edge computing environment and the various application domains where deep learning on the network edge can be useful. Finally, it discusses open issues and challenges in leveraging deep learning within edge computing.
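
As one concrete instance of the privacy-preserving distributed training this review covers, the sketch below implements federated averaging: edge devices train locally on their private data and share only model weights with a server that averages them. The linear model and random data are placeholders, and federated averaging is just one of the approaches the review surveys.

```python
# Sketch of federated averaging: devices train on private data locally and
# share only weights; the server averages them.
import copy
import torch
import torch.nn as nn

def local_update(model, X, y, steps=5, lr=0.01):
    """One edge device's local training pass on its own data."""
    m = copy.deepcopy(model)
    opt = torch.optim.SGD(m.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.mse_loss(m(X), y).backward()
        opt.step()
    return m.state_dict()

def federated_average(states):
    """Server step: average device weights; raw data never leaves the edge."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(0)
    return avg

global_model = nn.Linear(4, 1)                  # placeholder model
devices = [(torch.randn(32, 4), torch.randn(32, 1)) for _ in range(3)]
for _ in range(10):                             # communication rounds
    states = [local_update(global_model, X, y) for X, y in devices]
    global_model.load_state_dict(federated_average(states))
```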