• Title/Summary/Keyword: MultiTask Learning

A Federated Multi-Task Learning Model Based on Adaptive Distributed Data Latent Correlation Analysis

  • Wu, Shengbin; Wang, Yibai
    • Journal of Information Processing Systems / v.17 no.3 / pp.441-452 / 2021
  • Federated learning provides an efficient integrated model for distributed data, allowing different data to be trained locally. Meanwhile, the goal of multi-task learning is to build models for multiple related tasks simultaneously and to uncover their shared underlying structure. However, traditional federated multi-task learning models not only place strict requirements on the data distribution but also demand large amounts of computation and converge slowly, which has hindered their adoption in many fields. In our work, we apply a rank constraint to the weight vectors of the multi-task learning model to adaptively adjust similarity learning across tasks according to the distribution of data at the federated nodes. The proposed model has a general framework for finding optimal solutions and can handle various data types. Experiments show that our model achieves the best results on different datasets. Notably, it still obtains stable results on datasets with large distribution differences. In addition, compared with traditional federated multi-task learning models, our algorithm converges to a locally optimal solution within a limited number of training iterations.
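
The abstract does not give the exact formulation, but the core idea, a rank constraint on the stacked per-task weight vectors so that related tasks share a low-dimensional structure, can be sketched with a nuclear-norm penalty. A minimal illustration under our own assumptions about the setup, not the authors' code:

```python
# Sketch: multi-task linear models whose per-task weight vectors are stacked
# into a matrix W; a nuclear-norm penalty encourages W to be low-rank, i.e.
# tasks share a latent structure. All sizes and data here are toy choices.
import torch

n_tasks, n_features = 4, 10
W = torch.randn(n_tasks, n_features, requires_grad=True)
b = torch.zeros(n_tasks, requires_grad=True)

# Toy per-task data; in the federated setting each task's (X, y) would live
# on a different node and only model updates would be exchanged.
Xs = [torch.randn(32, n_features) for _ in range(n_tasks)]
ys = [x @ torch.randn(n_features) for x in Xs]

opt = torch.optim.Adam([W, b], lr=1e-2)
lam = 0.1  # strength of the rank penalty -- a free choice in this sketch
for step in range(200):
    opt.zero_grad()
    task_loss = sum(
        torch.mean((Xs[t] @ W[t] + b[t] - ys[t]) ** 2) for t in range(n_tasks)
    )
    nuc = torch.linalg.matrix_norm(W, ord="nuc")  # sum of singular values of W
    loss = task_loss + lam * nuc
    loss.backward()
    opt.step()
```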

A study on the accuracy of multi-task learning structure artificial neural network applicable to multi-quality prediction in injection molding process (사출성형공정에서 다수 품질 예측에 적용가능한 다중 작업 학습 구조 인공신경망의 정확성에 대한 연구)

  • Lee, Jun-Han; Kim, Jong-Sun
    • Design & Manufacturing / v.16 no.3 / pp.1-8 / 2022
  • In this study, an artificial neural network (ANN) was constructed to establish the relationship between process condition parameters and the qualities of the injection-molded product in the injection molding process. Six process parameters were set as ANN inputs: melt temperature, mold temperature, injection speed, packing pressure, packing time, and cooling time. The mass, nominal diameter, and height of the injection-molded product were set as output parameters. Two learning structures were applied to the ANN: a single-task learning structure, in which all output parameters are learned in correlation with each other, and a multi-task learning structure, in which each output parameter is learned individually according to its characteristics. After constructing ANNs with the two learning structures and evaluating their prediction performance, it was confirmed that the predictions of the ANN with the multi-task learning structure had a lower RMSE than those of the single-task learning structure. In addition, when the quality specifications of the injection-molded products were compared with the ANN predictions, the ANN with the multi-task learning structure was confirmed to satisfy the quality specifications for mass, diameter, and height alike.
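
The two structures can be sketched as below; the layer sizes and activations are illustrative guesses, not the authors' architecture. One net predicts all three qualities from a single output layer, the other gives each quality its own head on a shared trunk:

```python
# Hedged sketch of the two ANN structures compared in the paper. Inputs are
# the six process parameters; outputs are mass, diameter, and height.
import torch
import torch.nn as nn

class SingleTaskNet(nn.Module):
    """One output layer predicts all three qualities jointly."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(6, 32), nn.ReLU(), nn.Linear(32, 3))
    def forward(self, x):
        return self.net(x)

class MultiTaskNet(nn.Module):
    """A shared trunk with one dedicated head per quality."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(6, 32), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(32, 1) for _ in range(3)])
    def forward(self, x):
        h = self.trunk(x)
        return torch.cat([head(h) for head in self.heads], dim=1)

x = torch.randn(8, 6)  # a batch of process-condition vectors
print(SingleTaskNet()(x).shape, MultiTaskNet()(x).shape)  # both (8, 3)
```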

Exploring the Relationship between Shared Space and Performance in Multi-Task Learning (Multi-Task Learning에서 공유 공간과 성능과의 관계 탐구)

  • Seong, Su-Jin; Park, Seong-Jae; Jeong, In-Gyu; Cha, Jeong-Won
    • Annual Conference on Human and Language Technology / 2018.10a / pp.305-309 / 2018
  • In deep learning, multi-task learning, which shares layers to exploit information that does not vary across tasks, has been applied successfully to a variety of natural language processing problems. To the best of our knowledge, however, no study has investigated the relationship between the state of the shared space and performance. In this study, we explore this relationship by examining how the number of features and the degree of contamination in the shared space and the task-dependent space affect performance. These results clarify the role of the shared space and its relationship to performance in multi-task experiments, which should help improve system performance.
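
A minimal sketch of the shared/task-dependent split that the study probes (toy dimensions, not the paper's model); varying `shared_dim` and `private_dim` corresponds to varying the sizes of the two spaces:

```python
# Each task sees a shared feature space plus its own private space; the size
# of each space can be varied independently to study its effect.
import torch
import torch.nn as nn

class SharedPrivateModel(nn.Module):
    def __init__(self, in_dim, shared_dim, private_dim, n_tasks, n_classes):
        super().__init__()
        self.shared = nn.Linear(in_dim, shared_dim)  # shared space
        self.private = nn.ModuleList(                # task-dependent spaces
            [nn.Linear(in_dim, private_dim) for _ in range(n_tasks)]
        )
        self.heads = nn.ModuleList(
            [nn.Linear(shared_dim + private_dim, n_classes) for _ in range(n_tasks)]
        )
    def forward(self, x, task):
        feats = torch.cat([torch.relu(self.shared(x)),
                           torch.relu(self.private[task](x))], dim=-1)
        return self.heads[task](feats)

model = SharedPrivateModel(in_dim=100, shared_dim=64, private_dim=16,
                           n_tasks=2, n_classes=5)
logits = model(torch.randn(4, 100), task=0)
```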

Understanding and Application of Multi-Task Learning in Medical Artificial Intelligence (의료 인공지능에서의 멀티 태스크 러닝의 이해와 활용)

  • Young Jae Kim; Kwang Gi Kim
    • Journal of the Korean Society of Radiology / v.83 no.6 / pp.1208-1218 / 2022
  • In the medical field, artificial intelligence has seen many developments and been used in various ways. However, most artificial intelligence technologies are developed so that one model performs only one task, which is a limitation when modeling the complex reading process of doctors with artificial intelligence. Multi-task learning is an optimal way to overcome the limitations of single-task learning methods. By simultaneously integrating various tasks into one model, multi-task learning can produce a model that is efficient and generalizes well. This study investigated the concept and types of multi-task learning as well as related concepts, and examined its current status and future possibilities in medical research.

Facial Action Unit Detection with Multilayer Fused Multi-Task and Multi-Label Deep Learning Network

  • He, Jun; Li, Dongliang; Sun, Bo; Yu, Lejun
    • KSII Transactions on Internet and Information Systems (TIIS) / v.13 no.11 / pp.5546-5559 / 2019
  • Facial action units (AUs) have recently drawn increased attention because they can be used to recognize facial expressions. A variety of methods have been designed for frontal-view AU detection, but few can handle multi-view face images. In this paper, we propose a method for multi-view facial AU detection using a fused multilayer, multi-task, and multi-label deep learning network. The network performs two tasks: AU detection, a multi-label problem, and facial view detection, a single-label problem. A residual network and multilayer fusion are applied to obtain more representative features. Our method is effective and performs well: the F1 score on FERA 2017 is 13.1% higher than the baseline, and the facial view recognition accuracy is 0.991, showing that our multi-task, multi-label model achieves good performance on both tasks.
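
The two-head output can be sketched as below; the backbone and all sizes are placeholders, not the paper's fused residual network. AU detection uses independent sigmoids with binary cross-entropy (multi-label), while view detection uses a softmax with cross-entropy (single-label):

```python
# Joint training of a multi-label head (AUs) and a single-label head (view)
# on shared features. The flat linear backbone is a stand-in only.
import torch
import torch.nn as nn

n_aus, n_views, feat_dim = 10, 9, 128
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, feat_dim), nn.ReLU())
au_head = nn.Linear(feat_dim, n_aus)      # multi-label: one logit per AU
view_head = nn.Linear(feat_dim, n_views)  # single-label: softmax over views

x = torch.randn(4, 3, 64, 64)
au_target = torch.randint(0, 2, (4, n_aus)).float()  # independent 0/1 labels
view_target = torch.randint(0, n_views, (4,))        # one class per image

feats = backbone(x)
loss = (nn.functional.binary_cross_entropy_with_logits(au_head(feats), au_target)
        + nn.functional.cross_entropy(view_head(feats), view_target))
loss.backward()
```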

Automatic assessment of post-earthquake buildings based on multi-task deep learning with auxiliary tasks

  • Zhihang Li; Huamei Zhu; Mengqi Huang; Pengxuan Ji; Hongyu Huang; Qianbing Zhang
    • Smart Structures and Systems / v.31 no.4 / pp.383-392 / 2023
  • Post-earthquake building condition assessment is crucial for subsequent rescue and remediation and can be automated by emerging computer vision and deep learning technologies. This study is based on an entry in the 2nd International Competition of Structural Health Monitoring (IC-SHM 2021). The task package includes five image segmentation objectives: three defect types (crack, spall, and rebar exposure), structural component, and damage state. The structural component and damage state tasks are identified as the priorities because they can inform actionable decisions. A multi-task convolutional neural network (CNN) is proposed to conduct these two major tasks simultaneously, with the three defect sub-tasks incorporated as auxiliary tasks. By synchronously learning defect information, the multi-task CNN outperforms the counterpart single-task models in recognizing structural components and estimating damage states. In particular, pixel-level damage state estimation sees an mIoU (mean intersection over union) improvement from 0.5855 to 0.6374. For the standalone defect detection tasks, rebar exposure is omitted due to its extremely biased sample distribution, and the segmentation of cracks and spalls is automated by single-task U-Nets with extra effort to resample the provided data. The segmentation of small objects (spalls and cracks) benefits from the resampling method, with a substantial IoU increase of nearly 10%.
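
A hedged sketch of the auxiliary-task training signal (the heads, shapes, and the 0.3 weight are our illustrative assumptions, not the competition code): the two main segmentation losses are combined with down-weighted defect losses so that defect information shapes the shared features:

```python
# Main tasks (component, damage state) plus down-weighted auxiliary defect
# tasks, all trained through one shared encoder (a single conv stands in).
import torch
import torch.nn as nn

feat = nn.Conv2d(3, 16, 3, padding=1)  # stand-in shared encoder
heads = nn.ModuleDict({
    "component": nn.Conv2d(16, 8, 1),  # main task
    "damage":    nn.Conv2d(16, 4, 1),  # main task
    "spall":     nn.Conv2d(16, 2, 1),  # auxiliary task
    "crack":     nn.Conv2d(16, 2, 1),  # auxiliary task
})
aux_weight = 0.3  # auxiliary tasks contribute less to the total loss

x = torch.randn(2, 3, 64, 64)
targets = {k: torch.randint(0, head.out_channels, (2, 64, 64))
           for k, head in heads.items()}  # random toy label maps

h = torch.relu(feat(x))
losses = {k: nn.functional.cross_entropy(head(h), targets[k])
          for k, head in heads.items()}
total = (losses["component"] + losses["damage"]
         + aux_weight * (losses["spall"] + losses["crack"]))
total.backward()
```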

Applying Coarse-to-Fine Curriculum Learning Mechanism to the multi-label classification task (다중 레이블 분류 작업에서의 Coarse-to-Fine Curriculum Learning 메카니즘 적용 방안)

  • Kong, Heesan; Park, Jaehun; Kim, Kwangsu
    • Proceedings of the Korean Society of Computer Information Conference / 2022.07a / pp.29-30 / 2022
  • Curriculum learning is a method of training a model by introducing a kind of "curriculum", analogous to the human learning process, to improve the performance of deep learning. Most research has focused on training models progressively based on the difficulty of individual samples in the training data. The coarse-to-fine mechanism, however, argues that the similarity of the classes used in training matters more than the difficulty of the data, and proposes learning auxiliary tasks of several difficulty levels in sequence. This method has a limitation, though: because it generates auxiliary tasks by judging class similarity from a confusion matrix, it is difficult to apply to multi-label classification. In this paper, we therefore propose a way to generate multi-class and binary tasks in a multi-label setting, present a scheme for applying the coarse-to-fine mechanism, and analyze the results.
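
The task-generation idea can be illustrated as follows (our sketch, not the paper's method): from a multi-label target matrix, derive one binary task per label, then schedule a curriculum that ends with the full multi-label task:

```python
# Deriving binary auxiliary tasks from multi-label targets so a
# coarse-to-fine curriculum can run over tasks of increasing difficulty.
import numpy as np

Y = np.array([[1, 0, 1],
              [0, 1, 1],
              [1, 1, 0]])  # multi-label matrix: samples x labels

# Coarse stage: one binary task per label column.
binary_tasks = [Y[:, j] for j in range(Y.shape[1])]

# A simple curriculum: each binary task in turn, then the full task.
curriculum = [("binary", t) for t in binary_tasks] + [("multi-label", Y)]
for stage, target in curriculum:
    print(stage, target.shape)  # a model would be trained on each stage here
```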

Korean Dependency Parsing using Pointer Networks (포인터 네트워크를 이용한 한국어 의존 구문 분석)

  • Park, Cheoneum; Lee, Changki
    • Journal of KIISE / v.44 no.8 / pp.822-831 / 2017
  • In this paper, we propose a Korean dependency parsing model using pointer networks based on multi-task learning. Multi-task learning is a method that can improve performance by learning two or more problems at the same time. Based on this method, we perform dependency parsing with pointer networks, obtaining the dependency relation and the dependency label of each word simultaneously. We define five input criteria for applying pointer networks with multi-task learning over the morphemes of each word in dependency parsing, and apply a fine-tuning method to further improve performance. Experimental results show that the proposed model achieves a UAS of 91.79% and an LAS of 89.48%, outperforming conventional Korean dependency parsers.
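
A minimal sketch of the joint prediction (dimensions and the label classifier are illustrative assumptions, not the paper's model): a pointer distribution over the encoder states selects each word's head, and a classifier over the paired states predicts the dependency label:

```python
# Pointer-style head selection plus a label classifier, the two outputs a
# multi-task dependency parser produces per word.
import torch
import torch.nn as nn

seq_len, hid, n_labels = 7, 64, 12
enc = torch.randn(1, seq_len, hid)  # encoder states for one sentence
query = torch.randn(1, 1, hid)      # decoder state for the current word

scores = torch.bmm(query, enc.transpose(1, 2))  # (1, 1, seq_len)
head_dist = torch.softmax(scores, dim=-1)       # pointer over candidate heads
head = head_dist.argmax(dim=-1)                 # predicted head index

label_clf = nn.Linear(2 * hid, n_labels)
pair = torch.cat([query.squeeze(1), enc[0, head.squeeze()].unsqueeze(0)], dim=-1)
label = label_clf(pair).argmax(dim=-1)          # predicted dependency label
print(head.item(), label.item())
```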

Dual-scale BERT using multi-trait representations for holistic and trait-specific essay grading

  • Minsoo Cho; Jin-Xia Huang; Oh-Woog Kwon
    • ETRI Journal / v.46 no.1 / pp.82-95 / 2024
  • As automated essay scoring (AES) has progressed from handcrafted techniques to deep learning, holistic scoring capabilities have emerged. However, specific trait assessment remains a challenge because earlier methods lacked the depth to model the dual assessment of holistic and multi-trait tasks. To overcome this challenge, we explore providing comprehensive feedback while modeling the interconnections between holistic and trait representations. We introduce the DualBERT-Trans-CNN model, which combines transformer-based representations with a novel dual-scale bidirectional encoder representations from transformers (BERT) encoding approach at the document level. By explicitly leveraging multi-trait representations in a multi-task learning (MTL) framework, DualBERT-Trans-CNN emphasizes the interrelation between holistic and trait-based score predictions, aiming for improved accuracy. For validation, we conducted extensive tests on the ASAP++ and TOEFL11 datasets. Against models in the same MTL setting, ours showed a 2.0% increase in holistic score; compared with single-task learning (STL) models, it demonstrated a 3.6% improvement in average multi-trait performance on the ASAP++ dataset.
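
A minimal MTL sketch of the joint holistic and trait scoring; a random vector stands in for the DualBERT-Trans-CNN document encoding, and the heads and targets are our illustrative assumptions:

```python
# One shared document representation feeds a holistic head and a multi-trait
# head; both regression losses backpropagate into the shared encoder.
import torch
import torch.nn as nn

n_traits, doc_dim = 4, 256
doc_repr = torch.randn(8, doc_dim)  # stand-in essay encodings (batch of 8)
holistic_head = nn.Linear(doc_dim, 1)
trait_head = nn.Linear(doc_dim, n_traits)

holistic_target = torch.rand(8, 1)        # toy normalized holistic scores
trait_target = torch.rand(8, n_traits)    # toy normalized trait scores

mse = nn.functional.mse_loss
loss = (mse(holistic_head(doc_repr), holistic_target)
        + mse(trait_head(doc_repr), trait_target))
loss.backward()  # a real encoder would receive gradients from both tasks
```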

Multi-task learning with contextual hierarchical attention for Korean coreference resolution

  • Cheoneum Park
    • ETRI Journal / v.45 no.1 / pp.93-104 / 2023
  • Coreference resolution is a task in discourse analysis that links the headwords referring to the same entity throughout a document. We propose pointer network-based coreference resolution for Korean using multi-task learning (MTL) with an attention mechanism over a hierarchical structure. As Korean is a head-final language, the head can easily be found. Our model learns the distribution over positions of the same entity and uses a pointer network to conduct coreference resolution given the input headword. Because the input is a document, the input sequence is very long; the core idea is therefore to learn the word- and sentence-level distributions in parallel with MTL, using a shared representation to address the long-sequence problem. Word representations are generated from contextual information using pre-trained language models for Korean. Under the same experimental conditions, our model performed roughly 1.8% better in CoNLL F1 than previous work without a hierarchical structure.
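
The hierarchical idea can be sketched as follows (all sizes are illustrative, not the paper's architecture): word-level attention pools each sentence into a vector, and sentence-level attention pools those into a document vector, so the long input is handled at two granularities that MTL can train in parallel:

```python
# Two-level attention pooling over a long document: words -> sentence
# vectors -> document vector.
import torch
import torch.nn as nn

n_sents, n_words, hid = 5, 12, 64
words = torch.randn(n_sents, n_words, hid)  # contextual word vectors

word_attn = nn.Linear(hid, 1)
sent_attn = nn.Linear(hid, 1)

# Word level: attention-weighted sum of words -> one vector per sentence.
w = torch.softmax(word_attn(words), dim=1)  # (n_sents, n_words, 1)
sents = (w * words).sum(dim=1)              # (n_sents, hid)

# Sentence level: attention-weighted sum of sentences -> document vector.
s = torch.softmax(sent_attn(sents), dim=0)  # (n_sents, 1)
doc = (s * sents).sum(dim=0)                # (hid,)
print(sents.shape, doc.shape)
```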