• Title/Abstract/Keyword: Knowledge distillation

Search results: 53 items

Compressing intent classification model for multi-agent in low-resource devices

  • 윤용선; 강진범
    • 지능정보연구, Vol. 28, No. 3, pp. 45-55, 2022
  • With the recent progress of large-scale pretrained language models (LPLMs) in natural language processing, the performance of intent classification models obtained by fine-tuning them has also improved. However, fine-tuning a large model is expensive to operate in dialogue systems that require real-time responses. To address this, this study proposes a method for compressing intent classification models so that multiple agents can run on low-resource devices. The proposed method consists of a task-agnostic stage, which trains a lightweight sentence encoder, and a task-specific stage, which trains the intent classification model by attaching an adapter to the lightweight sentence encoder. Experiments on intent classification datasets from various domains demonstrate the effectiveness of the proposed method.
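
The abstract above describes a two-stage pipeline: task-agnostic training of a lightweight sentence encoder, followed by task-specific intent classification with an adapter attached to that encoder. The following is a minimal, illustrative sketch of such a pipeline under assumed design choices (MSE embedding distillation in stage 1, a residual bottleneck adapter and linear classifier in stage 2); all names, dimensions, and hyperparameters are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical sketch of a two-stage compression pipeline:
# stage 1 distills a teacher sentence encoder into a lightweight student,
# stage 2 trains only an adapter + classifier head on the frozen student.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Adapter(nn.Module):
    """Residual bottleneck module attached to the frozen student encoder."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(F.relu(self.down(x)))


def task_agnostic_step(student, teacher, batch_inputs, optimizer):
    """Stage 1: pull the student's sentence embedding toward the teacher's (MSE loss)."""
    with torch.no_grad():
        teacher_emb = teacher(batch_inputs)   # (batch, hidden); teacher stays frozen
    student_emb = student(batch_inputs)
    loss = F.mse_loss(student_emb, teacher_emb)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


def task_specific_step(student, adapter, classifier, batch_inputs, labels, optimizer):
    """Stage 2: keep the distilled student frozen; train only the adapter and classifier."""
    with torch.no_grad():
        student_emb = student(batch_inputs)
    logits = classifier(adapter(student_emb))
    loss = F.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Here `student` and `teacher` stand for sentence-encoder modules that map tokenized inputs to fixed-size embeddings; in stage 2 the optimizer would be given only the adapter and classifier parameters, which is what keeps the per-agent footprint small.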

A Comprehensive Survey of Lightweight Neural Networks for Face Recognition

  • 장영립; 양재경
    • 산업경영시스템학회지, Vol. 46, No. 1, pp. 55-67, 2023
  • Lightweight face recognition models, one of the most popular and long-standing topics in computer vision, have developed rapidly and are widely used in real-world applications thanks to their fewer parameters, lower floating-point operation counts, and smaller model sizes. However, few surveys have reviewed these lightweight models or reimplemented them with the same computing resources and training dataset. In this survey article, we present a comprehensive review of recent research advances in end-to-end, efficient lightweight face recognition models and reimplement several of the most popular ones. We first give an overview of face recognition with lightweight models. Then, based on how the models are constructed, we categorize them into: (1) manually designed lightweight FR models, (2) models pruned for face recognition, (3) efficient architectures designed automatically via neural architecture search, (4) knowledge distillation, and (5) low-rank decomposition. As examples, we also introduce SqueezeFaceNet and EfficientFaceNet, obtained by pruning SqueezeNet and EfficientNet. Additionally, we reimplement the different lightweight models and present a detailed performance comparison on nine test benchmarks. Finally, we discuss the remaining challenges and future work. Our survey makes three main contributions: first, the categorization makes lightweight models easy to identify and supports the exploration of new lightweight models for face recognition; second, the comprehensive performance comparisons help practitioners choose a model when deploying a state-of-the-art end-to-end face recognition system on mobile devices; third, the stated challenges and future trends motivate our future work.
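
Knowledge distillation is one of the five categories listed in the survey above. As a rough illustration of that category only (not code from any of the surveyed models), the following is a minimal sketch of the standard softened-logit distillation loss in the style of Hinton et al. (2015), with the temperature `T` and mixing weight `alpha` as assumed hyperparameters.

```python
# Minimal sketch of a softened-logit knowledge distillation loss:
# KL divergence between temperature-scaled teacher and student distributions,
# mixed with the usual hard-label cross-entropy.
import torch
import torch.nn.functional as F


def kd_loss(student_logits, teacher_logits, labels, T: float = 4.0, alpha: float = 0.5):
    """Combine soft-target KL divergence (scaled by T^2) with hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```

In a lightweight face recognition setting, `teacher_logits` would come from a large frozen recognition model and `student_logits` from the compact model being trained; the T^2 factor keeps the soft-target gradient magnitude comparable to the hard-label term.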

Current Status and Direction of Generative Large Language Model Applications in Medicine: Focusing on East Asian Medicine

  • 강봉수; 이상연; 배효진; 김창업
    • 동의생리병리학회지, Vol. 38, No. 2, pp. 49-58, 2024
  • The rapid advancement of generative large language models has transformed many real-life domains, underscoring the importance of exploring their applications in healthcare. This study examines how generative large language models are being applied in the medical domain, with the specific objective of exploring the potential for integrating generative large language models with East Asian medicine. Through a comprehensive analysis of the current state, we identified limitations in deploying generative large language models within East Asian medicine and proposed directions for future research. Our findings highlight the essential need to accumulate and generate structured data to improve the capabilities of generative large language models in East Asian medicine. We also address the issue of hallucination and the need for a robust model evaluation framework. Despite these challenges, applications of generative large language models in East Asian medicine have shown promising results. Techniques such as model augmentation, multimodal structures, and knowledge distillation have the potential to significantly enhance accuracy, efficiency, and accessibility. In conclusion, we expect generative large language models to play a pivotal role in enabling precise diagnostics and personalized treatment in clinical practice and in fostering innovation in education and research within East Asian medicine.