• Title/Summary/Keyword: few shot learning


Few-Shot Korean Font Generation based on Hangul Composability (한글 조합성에 기반한 최소 글자를 사용하는 한글 폰트 생성 모델)

  • Park, Jangkyoung;Ul Hassan, Ammar;Choi, Jaeyoung
    • KIPS Transactions on Software and Data Engineering / v.10 no.11 / pp.473-482 / 2021
  • Although several deep-learning models for Hangul generation have been introduced, they require large amounts of data, have complex structures, consume considerable time and resources, and often fail at style conversion. This paper proposes CKFont, a model that uses the initial, middle, and final components of Hangul to compensate for these problems. CKFont is an end-to-end GAN-based Hangul generation model that can generate all Hangul characters in various styles from only 28 characters and their initial, middle, and final components. Because style information is acquired locally from components, it is more accurate than globally acquired information, and the result of style conversion improves as information loss is reduced. CKFont uses the minimum number of characters among known models; it is an efficient model that reduces style conversion failures, has a concise structure, and saves time and resources. The component-based concept can be applied to various image transformations and compositing as well as to other languages.
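
The component decomposition that CKFont relies on can be illustrated with standard Hangul Unicode arithmetic. The sketch below (not from the paper; the function name is illustrative) splits a precomposed syllable into its initial, medial, and final jamo:

```python
# Decompose a precomposed Hangul syllable (U+AC00..U+D7A3) into its
# initial, medial, and final components using standard Unicode arithmetic.
CHOSEONG = [chr(0x1100 + i) for i in range(19)]            # 19 initial consonants
JUNGSEONG = [chr(0x1161 + i) for i in range(21)]           # 21 medial vowels
JONGSEONG = [""] + [chr(0x11A8 + i) for i in range(27)]    # 27 final consonants + "none"

def decompose(syllable: str):
    code = ord(syllable) - 0xAC00
    if not 0 <= code <= 11171:
        raise ValueError("not a precomposed Hangul syllable")
    cho, rest = divmod(code, 21 * 28)
    jung, jong = divmod(rest, 28)
    return CHOSEONG[cho], JUNGSEONG[jung], JONGSEONG[jong]

print(decompose("한"))  # ('ᄒ', 'ᅡ', 'ᆫ')
```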

Image Generation from Korean Dialogue Text via Prompt-based Few-shot Learning (프롬프트 기반 퓨샷 러닝을 통한 한국어 대화형 텍스트 기반 이미지 생성)

  • Eunchan Lee;Sangtae Ahn
    • Annual Conference on Human and Language Technology / 2022.10a / pp.447-451 / 2022
  • This paper proposes a method that converts user input given as dialogue text into keyword-centered text and then generates an image from it. Dialogue text refers to the colloquial style commonly used in chatting, and this format makes it difficult for text-to-image generation models to produce appropriate output images. Converting the dialogue text into keyword-centered text before feeding it to a text-to-image model can improve the quality of the generated images, but training data suitable for this task is scarce. To address this problem, this paper uses KoGPT, a pre-trained very large language model, and proposes a method that implements dialogue-text-based image generation by learning from only a small amount of manually constructed data through few-shot learning.
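
A rough sketch of the kind of few-shot prompt this approach describes is shown below; the example pairs, prompt wording, and the use of the Hugging Face kakaobrain/kogpt checkpoint are assumptions for illustration, not the paper's actual prompt or data.

```python
# Minimal sketch: prompt-based few-shot conversion of dialogue text into
# keyword-centered text, assuming a KoGPT-style causal language model.
from transformers import AutoTokenizer, AutoModelForCausalLM

MODEL = "kakaobrain/kogpt"  # assumed checkpoint; the official repo may require a
                            # specific revision and tokenizer settings per its model card
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

# A handful of hand-made (dialogue -> keywords) pairs acts as the few-shot context.
examples = [
    ("바닷가에서 노을 지는 거 보고 싶다", "해변, 노을, 바다"),
    ("눈 오는 날 강아지랑 산책하면 좋겠다", "눈, 강아지, 산책, 겨울"),
]
query = "숲속 오두막에서 커피 마시는 느낌이면 좋겠어"

prompt = "\n".join(f"대화: {d}\n키워드: {k}" for d, k in examples)
prompt += f"\n대화: {query}\n키워드:"

inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20, do_sample=False)
keywords = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(keywords.split("\n")[0].strip())  # keyword text to feed a text-to-image model
```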

Query Normalization Using P-tuning of Large Pre-trained Language Model (Large Pre-trained Language Model의 P-tuning을 이용한 질의 정규화)

  • Suh, Soo-Bin;In, Soo-Kyo;Park, Jin-Seong;Nam, Kyeong-Min;Kim, Hyeon-Wook;Moon, Ki-Yoon;Hwang, Won-Yo;Kim, Kyung-Duk;Kang, In-Ho
    • Annual Conference on Human and Language Technology / 2021.10a / pp.396-401 / 2021
  • Few-shot learning with very large language models has shown good performance on many natural language processing problems. However, because the problem is defined through few-shot prompt construction in a discrete space rather than inferred through additional training on data, there is a limit to how much performance can improve. To overcome this, data-driven fine-tuning methods such as P-tuning have emerged, which train only part of the very large language model's parameters or attach another neural network so that inference is performed in a continuous space. In this paper, we define a context-dependent query normalization problem tailored to a conversational voice search service, and show that additionally training a very large language model with P-tuning improves accuracy compared to few-shot learning.
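
As a rough illustration of the P-tuning idea described above (trainable continuous prompt vectors prepended to a frozen language model), here is a minimal PyTorch sketch; the class, hyperparameters, and the use of plain embeddings instead of the original LSTM-based prompt encoder are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class PTuningPrompt(nn.Module):
    """Trainable continuous prompt vectors prepended to frozen LM input embeddings."""
    def __init__(self, n_prompt_tokens: int, hidden_size: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_prompt_tokens, hidden_size) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden) from the frozen LM's embedding layer
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

# Usage sketch: only the prompt parameters receive gradients; the LM stays frozen.
hidden = 768
soft_prompt = PTuningPrompt(n_prompt_tokens=20, hidden_size=hidden)
dummy_embeds = torch.zeros(2, 10, hidden)      # stand-in for LM token embeddings
extended = soft_prompt(dummy_embeds)           # (2, 30, hidden), fed into the LM
optimizer = torch.optim.Adam(soft_prompt.parameters(), lr=1e-3)
```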

Pose Estimation and Image Matching for Tidy-up Task using a Robot Arm (로봇 팔을 활용한 정리작업을 위한 물체 자세추정 및 이미지 매칭)

  • Piao, Jinglan;Jo, HyunJun;Song, Jae-Bok
    • The Journal of Korea Robotics Society / v.16 no.4 / pp.299-305 / 2021
  • In this study, the robotic tidy-up task is to arrange the current environment so that it exactly matches a target image. To perform a tidy-up task with a robot, it is necessary to estimate the pose of various objects and to classify them. Pose estimation normally requires a CAD model of the object, but such models are unavailable for most objects in daily life. Therefore, this study proposes an algorithm that uses point clouds and PCA to estimate object poses in cluttered environments without the help of CAD models. In addition, objects are usually detected with a deep-learning-based object detector, but this approach can recognize only the objects it was trained on and may take a long time to train. This study therefore proposes an image matching method based on few-shot learning and a Siamese network. Experiments showed that the proposed method can be effectively applied to a robotic tidy-up system, achieving a success rate of 85% in the tidy-up task.
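
The PCA-based pose estimation mentioned above can be sketched as follows: compute the principal axes of an object's point cloud and take them as a coarse orientation frame. This is a minimal illustration under that assumption, not the paper's exact pipeline.

```python
import numpy as np

def estimate_pose_pca(points: np.ndarray):
    """Estimate a coarse object pose (centroid + rotation) from an Nx3 point cloud via PCA."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Eigenvectors of the covariance matrix give the principal axes of the object.
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
    order = np.argsort(eigvals)[::-1]          # sort axes by decreasing variance
    rotation = eigvecs[:, order]
    if np.linalg.det(rotation) < 0:            # enforce a right-handed frame
        rotation[:, -1] *= -1
    return centroid, rotation                  # translation and 3x3 orientation matrix

# Example with a synthetic elongated point cloud
pts = np.random.randn(500, 3) * np.array([5.0, 1.0, 0.2])
t, R = estimate_pose_pca(pts)
```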

Siamese Neural Networks to Overcome the Insufficient Data Problems in Product Defect Detection (제품 결함 탐지에서 데이터 부족 문제를 극복하기 위한 샴 신경망의 활용)

  • Shin, Kang-hyeon;Jin, Kyo-hong
    • Proceedings of the Korean Institute of Information and Communication Sciences Conference / 2022.05a / pp.108-111 / 2022
  • Applying deep learning to machine vision systems for product defect detection requires vast amounts of training data covering various defect cases. However, because data imbalance occurs across defect types in actual manufacturing, it takes a long time to collect enough product images to generalize over defect cases. In this paper, we apply a Siamese neural network, which can be trained with even a small amount of data, to product defect detection, and we modify the image pairing method and the contrastive loss function to reflect the characteristics of product defect image data. We indirectly evaluated the embedding performance of the Siamese neural network using AUC-ROC; it performed well when images were paired only within the same product (never across defective products) and trained with an exponential contrastive loss.
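
A minimal sketch of the Siamese setup with a standard contrastive loss is shown below; the embedding network, margin, and loss form are illustrative assumptions, and the paper's exponential contrastive loss is a modification not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """Small CNN shared by both branches of the Siamese network."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

def contrastive_loss(z1, z2, same_label, margin: float = 1.0):
    """Pull same-class pairs together, push different-class pairs beyond the margin."""
    dist = F.pairwise_distance(z1, z2)
    loss_same = same_label * dist.pow(2)
    loss_diff = (1 - same_label) * F.relu(margin - dist).pow(2)
    return (loss_same + loss_diff).mean()

net = EmbeddingNet()
a, b = torch.randn(8, 1, 64, 64), torch.randn(8, 1, 64, 64)
y = torch.randint(0, 2, (8,)).float()   # 1 = same pair, 0 = different pair
loss = contrastive_loss(net(a), net(b), y)
```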

Multiple-Shot Person Re-identification by Features Learned from Third-party Image Sets

  • Zhao, Yanna;Wang, Lei;Zhao, Xu;Liu, Yuncai
    • KSII Transactions on Internet and Information Systems (TIIS) / v.9 no.2 / pp.775-792 / 2015
  • Person re-identification is an important and challenging task in computer vision with numerous real-world applications. Although significant progress has been made in the past few years, person re-identification remains an unsolved problem. This paper presents a novel appearance-based approach to person re-identification. The approach exploits the region covariance matrix and color histograms to capture the statistical properties and chromatic information of each object. Robustness against low resolution, viewpoint changes, and pose variations is achieved by a novel signature: the combination of a Log Covariance Matrix feature and an HSV histogram (LCMH). To further improve re-identification performance, third-party image sets are utilized as a common reference to sufficiently represent any image set of the same type. Distinctive and reliable features for a given image set are extracted through the decision boundary between that set and a third-party image set, supervised by a max-margin criterion. This method enables an existing dataset to represent new image data without time-consuming data collection and annotation. Comparisons with state-of-the-art methods on benchmark datasets demonstrate the promising performance of our method.
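
The signature described above combines a log-covariance descriptor of a region with an HSV color histogram. The sketch below computes features of that general kind; the per-pixel feature choices, regularization, and bin counts are assumptions, not the paper's exact configuration.

```python
import numpy as np
import cv2
from scipy.linalg import logm

def region_signature(bgr_region: np.ndarray, hist_bins: int = 8) -> np.ndarray:
    """Log-covariance descriptor plus HSV histogram for an image region."""
    gray = cv2.cvtColor(bgr_region, cv2.COLOR_BGR2GRAY).astype(np.float64)
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    # Per-pixel feature vectors: position, intensity, gradient magnitudes
    feats = np.stack([xs.ravel(), ys.ravel(), gray.ravel(),
                      np.abs(gx).ravel(), np.abs(gy).ravel()], axis=0)
    cov = np.cov(feats) + 1e-6 * np.eye(feats.shape[0])   # regularize for the matrix log
    log_cov = logm(cov).real                              # map covariance into a vector space
    log_cov_vec = log_cov[np.triu_indices_from(log_cov)]

    hsv = cv2.cvtColor(bgr_region, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, [hist_bins] * 3,
                        [0, 180, 0, 256, 0, 256]).ravel()
    hist /= hist.sum() + 1e-12
    return np.concatenate([log_cov_vec, hist])
```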

Few-Shot Image Synthesis using Noise-Based Deep Conditional Generative Adversarial Nets

  • Msiska, Finlyson Mwadambo;Hassan, Ammar Ul;Choi, Jaeyoung;Yoo, Jaewon
    • Smart Media Journal / v.10 no.1 / pp.79-87 / 2021
  • In recent years, research on automatic font generation with machine learning has mainly focused on transformation-based methods; in comparison, generative-model-based methods have received less attention. Transformation-based methods learn a mapping from an existing input to a target, which makes them ambiguous because a single input reference may correspond to multiple possible outputs. In this work, we focus on font generation with generative-model-based methods, which learn to build up characters from noise to image. We propose a novel way to train a conditional generative deep neural model so that we can control the style of the generated font images. Our research demonstrates how to generate new font images conditioned on both character class labels and character style labels using generative-model-based methods. We achieve this by introducing a modified generator network that takes noise, a character class, and a style as inputs, which lets us calculate separate losses for the character class labels and the character style labels. We show that adding the character style vector on top of the character class vector gives the model rich information about the font and enables us to explicitly specify not only the character class but also the character style we want the model to generate.
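
A minimal sketch of a generator conditioned on both a character class and a style label, as described above, might look like the following; the layer sizes, embedding dimensions, and concatenation scheme are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Generate a glyph image from noise, a character-class label, and a style label."""
    def __init__(self, noise_dim=100, n_chars=2350, n_styles=25, embed_dim=64, img_size=64):
        super().__init__()
        self.char_embed = nn.Embedding(n_chars, embed_dim)
        self.style_embed = nn.Embedding(n_styles, embed_dim)
        self.net = nn.Sequential(
            nn.Linear(noise_dim + 2 * embed_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, img_size * img_size), nn.Tanh(),
        )
        self.img_size = img_size

    def forward(self, noise, char_label, style_label):
        # Concatenate noise with separate class and style embeddings so the two
        # conditions can be supervised with separate losses.
        z = torch.cat([noise, self.char_embed(char_label), self.style_embed(style_label)], dim=1)
        return self.net(z).view(-1, 1, self.img_size, self.img_size)

g = ConditionalGenerator()
imgs = g(torch.randn(4, 100), torch.randint(0, 2350, (4,)), torch.randint(0, 25, (4,)))
```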

Named Entity Detection Using Generative AI for Personal Information-Specific Named Entity Annotation Conversation Dataset (개인정보 특화 개체명 주석 대화 데이터셋 기반 생성AI 활용 개체명 탐지)

  • Yejee Kang;Li Fei;Yeonji Jang;Seoyoon Park;Hansaem Kim
    • Annual Conference on Human and Language Technology / 2023.10a / pp.499-504 / 2023
  • In this study, given the growing risk of leakage and misuse of sensitive personal information, we developed a named entity scheme specialized for personal information items in order to improve the accuracy and efficiency of personal information detection and de-identification. We constructed 4,981 sets of dialogue data annotated with personal information tags and performed personal information named entity detection experiments using a generative AI model. For the experiments, we designed optimal prompts and evaluated detection results through few-shot learning. Comparing the constructed dataset with an English-based personal information annotation dataset, our dataset yielded higher detection performance for the unique identification number category, demonstrating the necessity and quality of the dataset.
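
As a rough illustration of the few-shot prompting setup described above, the snippet below builds a prompt for detecting personal-information entities in dialogue; the tag names, example sentences, and prompt wording are illustrative assumptions, not the study's tag set or prompts.

```python
# Build a few-shot prompt for detecting personal-information entities in dialogue.
# The PHONE/ADDRESS tags and example pairs are hypothetical, for illustration only.
examples = [
    ("제 번호는 010-1234-5678이에요.", "010-1234-5678 <PHONE>"),
    ("저는 서울시 마포구 살아요.", "서울시 마포구 <ADDRESS>"),
]
query = "주민등록번호는 900101-1234567입니다."

prompt = "다음 문장에서 개인정보 개체명을 찾아 태그를 붙이세요.\n"
prompt += "\n".join(f"문장: {s}\n개체명: {e}" for s, e in examples)
prompt += f"\n문장: {query}\n개체명:"

print(prompt)  # this prompt would then be passed to a generative language model
```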

The way to make training data for deep learning model to recognize keywords in product catalog image at E-commerce (온라인 쇼핑몰에서 상품 설명 이미지 내의 키워드 인식을 위한 딥러닝 훈련 데이터 자동 생성 방안)

  • Kim, Kitae;Oh, Wonseok;Lim, Geunwon;Cha, Eunwoo;Shin, Minyoung;Kim, Jongwoo
    • Journal of Intelligence and Information Systems / v.24 no.1 / pp.1-23 / 2018
  • Since the start of the 21st century, various high-quality services have emerged with the growth of the internet and information and communication technologies. In particular, the e-commerce industry, in which Amazon and eBay stand out, has grown explosively. As e-commerce grows, customers can easily find and compare products because more products are registered at online shopping malls. However, this growth has created a problem: with so many products registered, it has become difficult for customers to find what they really need in the flood of products. When customers search with a generalized keyword, too many products are returned; conversely, few products are found when customers type in product details, because concrete product attributes are rarely registered as text. In this situation, automatically recognizing text in images can be a solution. Because the bulk of product details are written in catalogs in image format, most product information cannot be found by the current text-based search systems. If the information in these images can be converted to text, customers can search for products by product details, which makes shopping more convenient. Various existing OCR (Optical Character Recognition) programs can recognize text in images, but they are hard to apply to catalogs because they fail in certain circumstances, for example when the text is small or the fonts are inconsistent. Therefore, this research suggests a way to recognize keywords in catalogs with deep learning, the state of the art in image recognition since the 2010s. The Single Shot MultiBox Detector (SSD), a model credited for its object detection performance, can be used with its structure redesigned to account for the differences between text and general objects. However, because deep learning models are trained by supervised learning, the SSD model needs a large amount of labeled training data. Location and classification labels could be attached to catalog text manually, but manual collection raises many problems: keywords would be missed through human error, collection would be too time-consuming given the scale of data needed, or too costly if many workers were hired to shorten the time. Furthermore, if specific keywords need to be trained, finding images that contain those words is also difficult. To solve this data issue, this research developed a program that creates training data automatically. The program generates catalog-like images containing various keywords and pictures while saving the location information of the keywords at the same time. With this program, data can be collected efficiently and the performance of the SSD model improves: the SSD model recorded an 81.99% recognition rate with 20,000 images created by the program. Moreover, this research tested the SSD model with different data settings to analyze which features of the data influence the performance of recognizing text in images. As a result, the number of labeled keywords, the addition of overlapping keyword labels, the existence of unlabeled keywords, the spaces between keywords, and the differences in background images were found to affect the performance of the SSD model. This test can lead to performance improvements for the SSD model and other deep-learning-based text recognizers through high-quality data. The SSD model redesigned to recognize text in images and the program developed for creating training data are expected to contribute to improved search systems in e-commerce: suppliers can spend less time registering keywords for products, and customers can search for products using the product details written in catalogs.
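
The automatic training data generation described above (rendering keywords onto catalog-like backgrounds while recording their bounding boxes) can be sketched roughly as follows; the font path, image size, and label format are assumptions for illustration, not the paper's actual program.

```python
import json
import random
from PIL import Image, ImageDraw, ImageFont

def make_sample(keywords, background, font_path, out_img, out_label, img_size=(600, 800)):
    """Render keywords onto a catalog-like background and save SSD-style box labels."""
    img = Image.open(background).convert("RGB").resize(img_size)
    draw = ImageDraw.Draw(img)
    labels = []
    for word in keywords:
        font = ImageFont.truetype(font_path, size=random.randint(18, 40))
        w, h = draw.textbbox((0, 0), word, font=font)[2:]   # rendered text width/height
        x = random.randint(0, img_size[0] - w)
        y = random.randint(0, img_size[1] - h)
        draw.text((x, y), word, fill=(0, 0, 0), font=font)
        labels.append({"text": word, "bbox": [x, y, x + w, y + h]})  # location label for SSD
    img.save(out_img)
    with open(out_label, "w", encoding="utf-8") as f:
        json.dump(labels, f, ensure_ascii=False)

# Hypothetical usage with a background image and a Korean font file:
# make_sample(["무료배송", "사이즈", "정품"], "bg.jpg", "NanumGothic.ttf", "sample.jpg", "sample.json")
```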

3D Film Image Classification Based on Optimized Range of Histogram (히스토그램의 최적폭에 기반한 3차원 필름 영상의 분류)

  • Lee, Jae-Eun;Kim, Young-Bong;Kim, Jong-Nam
    • Journal of the Institute of Convergence Signal Processing / v.22 no.2 / pp.71-78 / 2021
  • To classify a target image within a cluster of images, the difference in brightness between the object and the background is usually the main cue, which makes classification difficult when the object's shape is blurred and its sharpness is low. Only a few studies have attempted to solve these problems, and they still fail to properly distinguish wrong-pattern from right-pattern images when applied to actual data. In this paper, we propose an algorithm that classifies 3D film images as sharp or blurry using the width of the pixel-value histogram. The algorithm distinguishes right from wrong images based on the width of their pixel-value distributions: the wider the histogram, the sharper the image, while the narrower the histogram, the blurrier the image. Experiments show that these histogram characteristics allow the proposed algorithm to classify all wrong and right images. To determine the reliability and validity of the proposed algorithm, we compare its results with those obtained from preprocessed 3D film images. We then trained a few-shot learning algorithm on the 3D film images for accurate classification. The experiments verify that the proposed algorithm achieves higher performance without complicated computations.
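
A minimal sketch of the histogram-width idea described above: compute the effective width of the pixel-value histogram and threshold it to decide sharp versus blurry. The percentile cutoffs and threshold below are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def histogram_width(gray_image: np.ndarray, lower_pct=1.0, upper_pct=99.0) -> float:
    """Effective width of the pixel-value histogram, robust to a few outlier pixels."""
    lo, hi = np.percentile(gray_image, [lower_pct, upper_pct])
    return float(hi - lo)

def classify_sharpness(gray_image: np.ndarray, threshold: float = 80.0) -> str:
    # Wider pixel-value distribution -> sharper image; narrower -> blurrier.
    return "sharp" if histogram_width(gray_image) > threshold else "blurry"

# Example with synthetic images: high-contrast vs. low-contrast pixel values
sharp = np.random.randint(0, 256, (100, 100))
blurry = np.random.randint(110, 146, (100, 100))
print(classify_sharpness(sharp), classify_sharpness(blurry))  # sharp blurry
```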