• Title/Summary/Keyword: few shot learning


Parameter-Efficient Prompting for Few-Shot Learning (Prompting 기반 매개변수 효율적인 Few-Shot 학습 연구)

  • Eunhwan Park;Sung-Min Lee;Daeryong Seo;Donghyeon Jeon;Inho Kang;Seung-Hoon Na
    • Annual Conference on Human and Language Technology / 2022.10a / pp.343-347 / 2022
  • Recently in natural language processing, fine-tuning of pre-trained language models (PLMs) such as BERT, RoBERTa, and BART has achieved strong performance on many downstream tasks. This demonstrated the importance of the size of the pre-trained model and dataset, as well as the model configuration, and sparked the rise of large-scale pre-trained language models. However, because of their enormous size, such models are clearly difficult to deploy in real industrial settings, so parameter-efficient fine-tuning and few-shot learning have recently attracted much attention. This paper proposes a few-shot learning approach that combines prompt tuning, prefix tuning, and prompt-based fine-tuning. The proposed method not only narrows the knowledge gap between fine-tuning and pre-training but also substantially outperforms conventional fine-tuning-based few-shot learning.
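The parameter-efficient idea behind prompt tuning can be sketched as follows: the PLM's weights stay frozen, and only a handful of soft-prompt vectors prepended to the input embeddings are trained. This is a minimal NumPy illustration with made-up dimensions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a frozen PLM embedding table and a small set of
# trainable soft-prompt vectors (the only parameters that get updated).
vocab_size, hidden, prompt_len = 100, 16, 4
frozen_embeddings = rng.normal(size=(vocab_size, hidden))   # stays fixed
soft_prompt = rng.normal(size=(prompt_len, hidden)) * 0.01  # trainable

def build_input(token_ids):
    """Prepend the soft prompt to the frozen token embeddings."""
    token_vecs = frozen_embeddings[token_ids]
    return np.concatenate([soft_prompt, token_vecs], axis=0)

seq = build_input([5, 17, 42])  # shape: (prompt_len + 3, hidden)
```

Only `soft_prompt` (here 4 x 16 = 64 values) would receive gradients, which is the sense in which the method is parameter-efficient.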


Novel Image Classification Method Based on Few-Shot Learning in Monkey Species

  • Wang, Guangxing;Lee, Kwang-Chan;Shin, Seong-Yoon
    • Journal of information and communication convergence engineering / v.19 no.2 / pp.79-83 / 2021
  • This paper proposes a novel image classification method based on few-shot learning, mainly intended to solve model overfitting and non-convergence in image classification tasks on small datasets and to improve classification accuracy. The method extends a basic convolutional neural network (CNN) through structural optimization, extracting more image features by adding convolutional layers and thereby improving accuracy. We incorporated several measures to improve the model's performance. First, we used general techniques such as a lower learning rate and shuffling to promote rapid convergence. Second, we used data augmentation to preprocess the small datasets, increasing the amount of training data and suppressing overfitting. We applied the model to 10 monkey species and achieved outstanding performance. Experiments indicated that our proposed method achieved an accuracy of 87.92%, which is 26.1% higher than that of the traditional CNN method and 1.1% higher than that of the deep convolutional neural network ResNet50.
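The data-expansion step described above can be illustrated with a toy sketch; the specific transforms here (horizontal flip, one-pixel shift) are illustrative stand-ins for whatever augmentations the authors used:

```python
import numpy as np

def augment(image):
    """Expand one image into several training variants.

    A horizontal flip and a one-pixel circular shift are simple,
    label-preserving transforms; real pipelines typically add crops,
    rotations, and color jitter as well.
    """
    flipped = image[:, ::-1]              # mirror left-right
    shifted = np.roll(image, 1, axis=1)   # shift columns right by one
    return [image, flipped, shifted]

img = np.arange(9).reshape(3, 3)
variants = augment(img)  # one original image becomes three samples
```

Tripling (or more) a small dataset this way is exactly the overfitting-suppression measure the abstract refers to.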

Host-Based Intrusion Detection Model Using Few-Shot Learning (Few-Shot Learning을 사용한 호스트 기반 침입 탐지 모델)

  • Park, DaeKyeong;Shin, DongIl;Shin, DongKyoo;Kim, Sangsoo
    • KIPS Transactions on Software and Data Engineering / v.10 no.7 / pp.271-278 / 2021
  • As cyber attacks become more intelligent, existing intrusion detection systems struggle to detect intelligent attacks that deviate from stored patterns. To address this, deep learning-based intrusion detection models that learn the patterns of intelligent attacks from data have emerged. Intrusion detection systems are divided into host-based and network-based, depending on where they are installed. Unlike a network-based system, a host-based intrusion detection system has the disadvantage of having to observe the system as a whole, inside and out, but it has the advantage of detecting intrusions that a network-based system cannot. Therefore, this study focuses on a host-based intrusion detection system. To evaluate and improve the model's performance, we used the host-based Leipzig Intrusion Detection Data Set (LID-DS) published in 2018. For the evaluation, the 1D vector data was converted into 3D image data so that the similarity between samples could be measured and each sample identified as normal or abnormal. Deep learning models also have the drawback of needing to be retrained whenever a new cyber attack method appears; learning from a large amount of data each time takes long and is inefficient. To solve this problem, this paper proposes a Siamese Convolutional Neural Network (Siamese-CNN) using the few-shot learning method, which performs well when learning from small amounts of data. Siamese-CNN determines whether attacks are of the same type from the similarity score between samples of cyber attacks converted into images. Accuracy was calculated using the few-shot learning technique, and the performance of a Vanilla Convolutional Neural Network (Vanilla-CNN) and Siamese-CNN was compared. Measuring Accuracy, Precision, Recall, and F1-Score, we confirmed that the recall of the proposed Siamese-CNN model was about 6% higher than that of the Vanilla-CNN model.
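The core of a Siamese network is one shared encoder applied identically to both inputs, with a similarity score derived from the distance between the two embeddings. A minimal sketch, using a hypothetical one-layer encoder in place of the paper's CNN:

```python
import numpy as np

def encode(x, W):
    """Shared encoder applied to both inputs of the pair
    (a single linear layer + ReLU stands in for the paper's CNN)."""
    return np.maximum(W @ x, 0.0)

def similarity(a, b, W):
    """Map embedding distance into (0, 1]: 1.0 means identical
    embeddings ("same attack type"); values near 0 mean different."""
    d = np.linalg.norm(encode(a, W) - encode(b, W))
    return float(np.exp(-d))

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))          # toy shared weights
x = rng.normal(size=8)               # one "attack image", flattened
y = rng.normal(size=8)               # another sample
same_score = similarity(x, x, W)     # identical inputs score 1.0
diff_score = similarity(x, y, W)
```

Because the encoder is shared, the network learns a metric over samples rather than per-class decision boundaries, which is what lets it classify new attack types from very few examples.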

Deep Learning based Dynamic Taint Detection Technique for Binary Code Vulnerability Detection (바이너리 코드 취약점 탐지를 위한 딥러닝 기반 동적 오염 탐지 기술)

  • Kwang-Man Ko
    • The Journal of Korea Institute of Information, Electronics, and Communication Technology / v.16 no.3 / pp.161-166 / 2023
  • In recent years, novel and variant attacks on binary code have increased, and the limitations of techniques for detecting malicious code in source programs and defending against attacks are often exposed. Advanced software security vulnerability detection using machine learning and deep learning on binary code, along with defense and response capabilities against attacks, is required. In this paper, we propose a malware clustering method that groups malware based on the characteristics of taint information obtained by tracing the execution paths of binary code with dynamic taint analysis. Malware vulnerability detection was applied to a three-layered few-shot learning model, and F1-scores were calculated for each layer on CPU and GPU. We obtained 97~98% performance during training and 80~81% detection performance during testing.

Weighted Fast Adaptation Prior on Meta-Learning

  • Widhianingsih, Tintrim Dwi Ary;Kang, Dae-Ki
    • International journal of advanced smart convergence / v.8 no.4 / pp.68-74 / 2019
  • As deep learning architectures grow deeper, the need for data becomes very large, and in many real-world disciplines acquiring huge datasets is very costly. Learning from limited data has therefore become a very appealing area in recent years, and meta-learning offers a new perspective on learning a model under this limitation. Meta-SGD, a state-of-the-art model built on a meta-learning framework, rests on the key idea of learning a hyperparameter, the learning rate of the fast-adaptation stage, in the outer update. However, this learning rate is usually set to be very small, so the SGD objective yields only a little improvement to the weight parameters. In other words, the prior becomes the key to a good adaptation. Adapting with a single gradient step in the inner update, as meta-learning approaches aim to do, may lead to poor performance, especially if the prior is far from the expected one or works against effective adaptation. For this reason, we propose adding a weight term that decreases, or in some conditions increases, the effect of this prior. Experiments on few-shot learning show that emphasizing or weakening the prior can give better performance than using its original value.
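The inner (fast-adaptation) update can be written in a few lines. Standard Meta-SGD takes one step theta - alpha * grad; one plausible reading of the proposed weighted prior, used purely for illustration here since the abstract does not give the exact formula, is a scalar w scaling theta:

```python
import numpy as np

def inner_update(theta, grad, alpha, w):
    """One fast-adaptation step.

    With w = 1.0 this is the ordinary Meta-SGD inner update
    theta - alpha * grad; w < 1 weakens the prior theta, w > 1
    emphasizes it (a hypothetical formulation of the weighted prior).
    """
    return w * theta - alpha * grad

theta = np.array([1.0, -2.0])   # meta-learned prior parameters
grad = np.array([0.5, 0.5])     # task-specific gradient
plain = inner_update(theta, grad, alpha=0.1, w=1.0)    # Meta-SGD step
weakened = inner_update(theta, grad, alpha=0.1, w=0.9) # prior de-emphasized
```

When alpha is very small, the grad term barely moves the parameters, so w becomes the main lever on the adapted solution, which matches the abstract's argument.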

TAGS: Text Augmentation with Generation and Selection (생성-선정을 통한 텍스트 증강 프레임워크)

  • Kim Kyung Min;Dong Hwan Kim;Seongung Jo;Heung-Seon Oh;Myeong-Ha Hwang
    • KIPS Transactions on Software and Data Engineering / v.12 no.10 / pp.455-460 / 2023
  • Text augmentation is a methodology that creates new augmented texts by transforming or generating original texts to improve the performance of NLP models. However, existing text augmentation techniques have limitations such as lack of expressive diversity, semantic distortion, and a limited number of augmented texts. Text augmentation using large language models and few-shot learning can recently overcome these limitations, but it also risks generating noise through incorrect generation. In this paper, we propose a text augmentation method called TAGS that generates multiple candidate texts and selects the appropriate ones as augmented text. TAGS generates varied expressions using few-shot learning, while effectively selecting suitable data even from a small amount of original text by using contrastive learning and similarity comparison. We applied this method to task-oriented chatbot data and achieved a more than sixty-fold quantitative increase. Analysis of the generated texts confirmed that they are semantically and expressively more diverse than the original texts. Moreover, a classification model trained and evaluated on the augmented texts improved performance by more than 0.1915, confirming that the method helps improve actual model performance.
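The selection stage of a generate-then-select pipeline can be sketched with cosine similarity over sentence embeddings: keep candidates similar enough to the original to preserve meaning, but not so similar that they are duplicates. The thresholds and the 2-D "embeddings" below are placeholders, not the paper's values:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def select(original, candidates, lo=0.6, hi=0.95):
    """Return indices of candidates whose similarity to the original
    falls in [lo, hi]: >= lo filters out noisy off-topic generations,
    <= hi filters out near-duplicates (thresholds are illustrative)."""
    return [i for i, c in enumerate(candidates)
            if lo <= cosine(original, c) <= hi]

original = np.array([1.0, 0.0])
candidates = [np.array([1.0, 0.0]),   # near-duplicate: too similar
              np.array([0.8, 0.6]),   # paraphrase-like: kept
              np.array([0.0, 1.0])]   # off-topic: too dissimilar
kept = select(original, candidates)
```

This one band-pass check captures the abstract's twin failure modes: semantic distortion (too dissimilar) and lack of diversity (too similar).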

Morpheme-Based Few-Shot Learning with Large Language Models for Korean Healthcare Named Entity Recognition (한국어 헬스케어 개체명 인식을 위한 거대 언어 모델에서의 형태소 기반 Few-Shot 학습 기법)

  • Su-Yeon Kang;Gun-Woo Kim
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.428-429 / 2023
  • Named entity recognition (NER), a core task in natural language processing, identifies and classifies names of specific categories within sentences. In healthcare, this technology is essential for diagnostic support and data management. However, the conventional approach of transfer-learning a pre-trained model to a specific domain depends heavily on large amounts of data. Focusing on large language models (LLMs) trained on vast data, this study proposes a NER method for the Korean healthcare domain that reflects the agglutinative nature of Korean through few-shot prompts incorporating morpheme information.
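A morpheme-aware few-shot prompt can be assembled as plain text: a handful of labeled example sentences, followed by the query sentence and its morpheme segmentation. The prompt format below is invented for illustration; only the idea of exposing morphological structure to the LLM comes from the abstract:

```python
def build_prompt(examples, query, query_morphemes):
    """Assemble a few-shot NER prompt.

    `examples` are (sentence, entity-list) pairs; the Morphemes line
    gives the LLM the query's segmentation, which matters for Korean
    because particles attach directly to entity names.
    """
    lines = ["Task: extract healthcare named entities."]
    for sentence, entities in examples:
        lines.append(f"Sentence: {sentence}")
        lines.append(f"Entities: {', '.join(entities)}")
    lines.append(f"Sentence: {query}")
    lines.append(f"Morphemes: {' '.join(query_morphemes)}")
    lines.append("Entities:")
    return "\n".join(lines)

prompt = build_prompt(
    [("두통이 심해 진통제를 복용했다.", ["두통", "진통제"])],
    "혈압 약을 처방받았다.",
    ["혈압", "약", "을", "처방", "받", "았", "다"],
)
```

The segmentation makes explicit that in "약을" only "약" is the entity and "을" is a case particle, which is exactly the ambiguity the few-shot examples teach the model to resolve.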

CKFont2: An Improved Few-Shot Hangul Font Generation Model Based on Hangul Composability (CKFont2: 한글 구성요소를 이용한 개선된 퓨샷 한글 폰트 생성 모델)

  • Jangkyoung, Park;Ammar, Ul Hassan;Jaeyoung, Choi
    • KIPS Transactions on Software and Data Engineering / v.11 no.12 / pp.499-508 / 2022
  • Much research has been carried out on Hangul generation models using deep learning, and recent work studies how to minimize the number of input characters needed to generate one full set of Hangul (few-shot learning). In this paper, we propose the CKFont2 model, which uses only 14 letters, by analyzing and improving the CKFont model (hereafter CKFont1), which uses 28 letters. Whereas CKFont1 generates all Hangul by extracting 51 Hangul components from 28 characters, CKFont2 generates all Hangul from only 14 characters covering the 24 basic components (14 consonants and 10 vowels), the minimum number of input characters among currently known models. From the basic consonants and vowels, the remaining 27 components (5 double consonants and 11 compound consonants and 11 compound vowels) are learned and generated by deep learning, and the generated 27 components are combined with the 24 basic ones; all Hangul characters are then automatically generated from the combined 51 components. Its superior performance was verified by comparative analysis against the zi2zi, CKFont1, and MX-Font models. It is an efficient and effective model with a simple structure that saves time and resources, and it can be extended to Chinese, Thai, and Japanese.
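The final combination step works because every modern Hangul syllable is a deterministic function of its components: Unicode encodes all 11,172 syllables (19 initial consonants x 21 vowels x 28 finals, including "no final") in the U+AC00..U+D7A3 block by a closed formula, so generating glyphs for the components is enough to produce every character:

```python
def compose(cho, jung, jong=0):
    """Compose a Hangul syllable from component indices.

    cho: initial consonant index (0-18), jung: vowel index (0-20),
    jong: final consonant index (0-27, 0 meaning no final), per the
    standard Unicode Hangul syllable composition formula.
    """
    return chr(0xAC00 + (cho * 21 + jung) * 28 + jong)

first = compose(0, 0)        # ㄱ + ㅏ → '가' (U+AC00)
last = compose(18, 20, 27)   # last syllable of the block → '힣'
```

A font model that can render every component in a target style can therefore cover the full syllabary mechanically, which is what keeps the few-shot input set so small.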

Class Specific Autoencoders Enhance Sample Diversity

  • Kumar, Teerath;Park, Jinbae;Ali, Muhammad Salman;Uddin, AFM Shahab;Bae, Sung-Ho
    • Journal of Broadcast Engineering / v.26 no.7 / pp.844-854 / 2021
  • Semi-supervised learning (SSL) and few-shot learning (FSL) have shown impressive performance even when the volume of labeled data is very limited. However, SSL and FSL can suffer significant performance degradation if the diversity gap between the labeled and unlabeled data is large. To reduce this gap, we propose a novel scheme that relies on an autoencoder for generating pseudo examples. Specifically, an autoencoder is trained on a specific class using the available labeled data, and the decoder of the trained autoencoder is then used to generate N samples of that class from N random noise vectors drawn from a standard normal distribution. This process is repeated for all classes. The generated data reduces the diversity gap and enhances model performance. Extensive experiments on the MNIST and FashionMNIST datasets for SSL and FSL verify the effectiveness of the proposed approach in terms of classification accuracy and robustness against adversarial attacks.
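The generation step is just "sample noise, decode it": draw N latent codes from a standard normal and push them through the trained class-specific decoder. A toy linear decoder with made-up dimensions stands in for the trained network here:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_pseudo_examples(decode, n, latent_dim):
    """Draw n latent codes z ~ N(0, I) and decode them into
    pseudo examples of one class; `decode` stands in for the
    trained class-specific autoencoder's decoder."""
    z = rng.standard_normal((n, latent_dim))
    return decode(z)

# Hypothetical decoder: latent_dim=3 codes mapped to 5-dim "images".
W = rng.standard_normal((3, 5))
samples = generate_pseudo_examples(lambda z: z @ W, n=4, latent_dim=3)
```

Repeating this per class, with a decoder trained only on that class's labeled examples, yields labeled pseudo data that fills in the diversity the small labeled set lacks.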

Implementation of Point detail Classification System using Few-shot Learning (Few-shot Learning을 이용한 격점상세도 분류 시스템 구현)

  • Park, Jin-Hyouk;Kim, Yong Hyun;Lee, Kook-Bum;Lee, Jongseo;Kim, Yu-Doo
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.12 / pp.1809-1815 / 2022
  • A digital twin is a technology that creates a virtual world identical to the real one. Because problems in the real world can be identified through various simulations, digital twins are being applied across industries. To apply a digital twin, it is necessary to analyze the drawings in which the real-world structure to be replicated is designed. Although drawing-analysis technology is being studied, it is difficult to apply because drafting rules and standards differ from author to author. Therefore, in this paper, we implement a system that uses artificial intelligence to analyze and classify point detail drawings, one type of drawing. Through this, we confirm the feasibility of analyzing and classifying drawings with artificial intelligence and outline future research directions.