• Title/Summary/Keyword: one-shot learning


Recent advances in few-shot learning for image domain: a survey (이미지 분석을 위한 퓨샷 학습의 최신 연구동향)

  • Ho-Sik Seok
    • Journal of IKEEE / v.27 no.4 / pp.537-547 / 2023
  • In many domains, a lack of data inhibits the adoption of advanced machine learning models. Recently, Few-Shot Learning (FSL) has been actively studied to tackle this problem. By utilizing prior knowledge obtained from observations of related domains, FSL achieves significant performance with only a few samples. In this paper, we present a survey on FSL in terms of data augmentation, embedding and metric learning, and meta-learning. In addition to notable studies, we also introduce major benchmark datasets. FSL is widely adopted in various domains, but we focus on image analysis in this paper.
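
The survey above groups FSL methods into data augmentation, embedding/metric learning, and meta-learning. As a minimal illustration of the metric-learning branch, the sketch below computes class prototypes from a small support set and classifies queries by distance to those prototypes (a prototypical-network-style example; the tiny embedding network, tensor shapes, and 5-way/5-shot episode sizes are illustrative assumptions, not taken from the survey).

```python
# Minimal prototypical-network-style episode: classify query embeddings
# by Euclidean distance to class prototypes computed from a support set.
# The stand-in embedding net and the 5-way / 5-shot sizes are illustrative.
import torch
import torch.nn as nn

n_way, k_shot, n_query, feat_dim = 5, 5, 15, 64

embed = nn.Sequential(                 # stand-in embedding network
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, feat_dim),
)

# Fake episode data: flattened 28x28 "images".
support = torch.randn(n_way, k_shot, 784)     # labelled support set
query = torch.randn(n_way * n_query, 784)     # unlabelled queries

# Class prototypes = mean embedding of each class's support samples.
prototypes = embed(support.view(-1, 784)).view(n_way, k_shot, feat_dim).mean(dim=1)

# Classify each query by the nearest prototype (negative squared distance as logits).
q_emb = embed(query)                           # (n_way * n_query, feat_dim)
logits = -torch.cdist(q_emb, prototypes).pow(2)
pred = logits.argmax(dim=1)                    # predicted class per query
print(pred.shape)                              # torch.Size([75])
```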

Zero-shot voice conversion with HuBERT

  • Hyelee Chung;Hosung Nam
    • Phonetics and Speech Sciences / v.15 no.3 / pp.69-74 / 2023
  • This study introduces an innovative model for zero-shot voice conversion that utilizes the capabilities of HuBERT. Zero-shot voice conversion models can transform the speech of one speaker to mimic that of another, even when the model has not been exposed to the target speaker's voice during the training phase. Comprising five main components (HuBERT, feature encoder, flow, speaker encoder, and vocoder), the model offers remarkable performance across a range of scenarios. Notably, it excels in the challenging unseen-to-unseen voice-conversion tasks. The effectiveness of the model was assessed based on the mean opinion scores and similarity scores, reflecting high voice quality and similarity to the target speakers. This model demonstrates considerable promise for a range of real-world applications demanding high-quality voice conversion. This study sets a precedent in the exploration of HuBERT-based models for voice conversion, and presents new directions for future research in this domain. Despite its complexities, the robust performance of this model underscores the viability of HuBERT in advancing voice conversion technology, making it a significant contributor to the field.
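
The abstract above names five components (HuBERT, feature encoder, flow, speaker encoder, vocoder) but not their interfaces. The skeleton below sketches one plausible way such a zero-shot pipeline could be wired; every module is a placeholder (the linear/GRU stand-ins, the 768-dimensional content features, and the data flow are assumptions for illustration, not the paper's implementation).

```python
# Schematic wiring of a HuBERT-based zero-shot voice-conversion pipeline.
# All modules are placeholders; real HuBERT features, the normalizing flow,
# and the vocoder are far more involved than the stand-ins used here.
import torch
import torch.nn as nn

class ZeroShotVC(nn.Module):
    def __init__(self, hubert_dim=768, latent_dim=192, spk_dim=256, n_mels=80):
        super().__init__()
        self.content_encoder = nn.Linear(hubert_dim, latent_dim)          # "feature encoder" on HuBERT units
        self.speaker_encoder = nn.GRU(n_mels, spk_dim, batch_first=True)  # utterance-level speaker embedding
        self.flow = nn.Linear(latent_dim + spk_dim, latent_dim)           # stand-in for the flow
        self.vocoder = nn.Linear(latent_dim, n_mels)                      # stand-in for a neural vocoder

    def forward(self, hubert_feats, target_ref_mel):
        # hubert_feats: (B, T, hubert_dim) content features of the source utterance
        # target_ref_mel: (B, T_ref, n_mels) reference audio of the *unseen* target speaker
        content = self.content_encoder(hubert_feats)           # speaker-independent content
        _, spk = self.speaker_encoder(target_ref_mel)           # (1, B, spk_dim)
        spk = spk[-1].unsqueeze(1).expand(-1, content.size(1), -1)
        z = self.flow(torch.cat([content, spk], dim=-1))        # condition content on target speaker
        return self.vocoder(z)                                  # converted spectrogram proxy

model = ZeroShotVC()
out = model(torch.randn(2, 100, 768), torch.randn(2, 50, 80))
print(out.shape)  # torch.Size([2, 100, 80])
```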

Weighted Fast Adaptation Prior on Meta-Learning

  • Widhianingsih, Tintrim Dwi Ary;Kang, Dae-Ki
    • International journal of advanced smart convergence / v.8 no.4 / pp.68-74 / 2019
  • As deep learning architectures grow deeper, the amount of data they require grows very large. In practice, collecting huge datasets in some disciplines is very costly, so learning from limited data has become a very appealing research area in recent years. Meta-learning offers a new perspective on learning under this limitation. Meta-SGD, a state-of-the-art model built on a meta-learning framework, is based on the key idea of learning a hyperparameter, namely the learning rate of the fast adaptation stage, in the outer update. However, this learning rate is usually set to be very small, so the SGD objective yields only a small improvement to the weight parameters; in other words, the prior becomes the key to a good adaptation. Because meta-learning approaches aim to adapt with a single gradient step in the inner update, performance may suffer, especially if the prior is far from the expected one or even works against effective adaptation. For this reason, we propose adding a weight term that decreases, or in some conditions increases, the effect of this prior. Experiments on few-shot learning show that emphasizing or weakening the prior can give better performance than using its original value.
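
The abstract describes modifying Meta-SGD's fast adaptation (inner) update by weighting the prior parameters. The snippet below sketches what such a single-step inner update could look like: the usual Meta-SGD step θ' = θ − α ⊙ ∇L(θ) with an extra weight w applied to the prior θ. The exact placement of the weight term is an assumption for illustration; the paper's precise formulation may differ.

```python
# Hypothetical single-step "weighted fast adaptation" inner update.
# Standard Meta-SGD: theta' = theta - alpha * grad, with alpha learned per-parameter.
# Sketched variant: scale the prior theta by a weight w so the meta-learned
# initialization can be emphasized or weakened (illustrative form only).
import torch

def weighted_fast_adaptation(theta, alpha, w, loss_fn, support_x, support_y):
    """One inner-loop adaptation step on a task's support set."""
    loss = loss_fn(theta, support_x, support_y)
    grads = torch.autograd.grad(loss, theta, create_graph=True)
    # theta' = w * theta - alpha * grad   (weighted-prior variant)
    return [w * p - a * g for p, a, g in zip(theta, alpha, grads)]

# Toy linear-regression task to show the call.
def loss_fn(theta, x, y):
    w_lin, b = theta
    return ((x @ w_lin + b - y) ** 2).mean()

theta = [torch.randn(3, 1, requires_grad=True), torch.zeros(1, requires_grad=True)]
alpha = [torch.full_like(p, 0.01) for p in theta]   # learned per-parameter step sizes
x, y = torch.randn(10, 3), torch.randn(10, 1)
theta_prime = weighted_fast_adaptation(theta, alpha, w=0.9, loss_fn=loss_fn,
                                        support_x=x, support_y=y)
print([p.shape for p in theta_prime])
```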

An Analysis of the methods to alleviate the cost of data labeling in Deep learning (딥 러닝에서 Labeling 부담을 줄이기 위한 연구분석)

  • Han, Seokmin
    • The Journal of the Convergence on Culture Technology / v.8 no.1 / pp.545-550 / 2022
  • It is well known that deep learning methods require a large amount of data to train a deep neural network, and that each data point must be labeled to fully train the network, which means experts must spend a lot of time providing labels. To alleviate this time-consuming labeling process, methods such as weakly-supervised learning, one-shot learning, self-supervised learning, and suggestive learning have been proposed. In this manuscript, those methods are analyzed and possible future directions of the research are suggested.

Deep Learning-based Single Image Generative Adversarial Network: Performance Comparison and Trends (딥러닝 기반 단일 이미지 생성적 적대 신경망 기법 비교 분석)

  • Jeong, Seong-Hun;Kong, Kyeongbo
    • Journal of Broadcast Engineering / v.27 no.3 / pp.437-450 / 2022
  • Generative adversarial networks (GANs) have demonstrated remarkable success in image synthesis. However, since GANs are unstable during training on large datasets, they are difficult to apply to various application fields. A single-image GAN generates diverse images by learning the internal distribution of a single image. In this paper, we investigate five single-image GAN models: SinGAN, ConSinGAN, InGAN, DeepSIM, and One-Shot GAN. We compare the performance of each model and analyze the pros and cons of single-image GANs.
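
A single-image GAN learns the patch (internal) distribution of one training image. The sketch below shows that core idea in a deliberately simplified, single-scale form: random crops of the one training image serve as "real" samples for a patch discriminator, while a small convolutional generator maps noise to fake patches. The surveyed models (SinGAN, ConSinGAN, InGAN, DeepSIM, One-Shot GAN) use multi-scale pyramids and richer losses; the architecture and loop below are illustrative assumptions only.

```python
# Simplified single-scale illustration of "learning the internal distribution
# of a single image": real samples are random crops of ONE training image.
import torch
import torch.nn as nn
import torch.nn.functional as F

patch, batch = 32, 16
image = torch.rand(3, 256, 256)               # the single training image

def sample_real_patches(img, n, size):
    ys = torch.randint(0, img.shape[1] - size, (n,))
    xs = torch.randint(0, img.shape[2] - size, (n,))
    return torch.stack([img[:, y:y+size, x:x+size] for y, x in zip(ys, xs)])

G = nn.Sequential(                            # noise -> fake 32x32 patch
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),   # 16 -> 32
    nn.Conv2d(32, 3, 3, 1, 1), nn.Sigmoid(),
)
D = nn.Sequential(                            # patch -> real/fake score map
    nn.Conv2d(3, 32, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, 2, 1),
)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for _ in range(100):                          # tiny demo loop
    real = sample_real_patches(image, batch, patch)
    fake = G(torch.randn(batch, 64, 16, 16))

    real_logits, fake_logits = D(real), D(fake.detach())
    d_loss = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits)) + \
             F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    g_logits = D(fake)
    g_loss = F.binary_cross_entropy_with_logits(g_logits, torch.ones_like(g_logits))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```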

Real-Time Hand Gesture Recognition Based on Deep Learning (딥러닝 기반 실시간 손 제스처 인식)

  • Kim, Gyu-Min;Baek, Joong-Hwan
    • Journal of Korea Multimedia Society / v.22 no.4 / pp.424-431 / 2019
  • In this paper, we propose a real-time hand gesture recognition algorithm to eliminate the inconvenience of using hand controllers in VR applications. The user's 3D hand coordinate information is detected by a Leap Motion sensor, and the coordinates are then converted into a two-dimensional image. We classify hand gestures in real time by learning this imaged 3D hand coordinate information with the SSD (Single Shot multibox Detector) model, one of the CNN (Convolutional Neural Network) models. We propose to use all three channels rather than only one channel. A sliding window technique is also proposed so that a gesture is recognized in real time as the user actually makes it. An experiment was conducted to measure the recognition rate and learning performance of the proposed model. Our model achieved 99.88% recognition accuracy and showed higher usability than the existing algorithm.
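
The paper encodes 3D hand-joint coordinates as a two-dimensional image, uses all three channels, and slides a window over incoming frames for real-time recognition. The helper below sketches one plausible encoding along those lines (rows = frames, columns = joints, channels = x/y/z); the joint count, window length, and min-max normalization are assumptions, not the paper's exact scheme.

```python
# Hypothetical encoding of a gesture window as a 3-channel image:
# rows = frames in a sliding window, columns = hand joints, channels = x/y/z.
import numpy as np
from collections import deque

N_JOINTS, WINDOW = 21, 32          # e.g. 21 hand keypoints, 32-frame window

def frames_to_image(frames):
    """frames: (WINDOW, N_JOINTS, 3) x/y/z coords -> (WINDOW, N_JOINTS, 3) uint8 image."""
    arr = np.asarray(frames, dtype=np.float32)
    lo = arr.min(axis=(0, 1), keepdims=True)
    hi = arr.max(axis=(0, 1), keepdims=True)
    norm = (arr - lo) / (hi - lo + 1e-6)      # per-axis min-max normalization
    return (norm * 255).astype(np.uint8)      # one channel per coordinate axis

# Sliding window over the live coordinate stream.
window = deque(maxlen=WINDOW)
for t in range(100):                          # stand-in for the sensor loop
    window.append(np.random.rand(N_JOINTS, 3))   # one frame of 3D joint coordinates
    if len(window) == WINDOW:
        img = frames_to_image(window)         # (32, 21, 3) image fed to the detector
        # detector(img) -> gesture class (an SSD-style model in the paper)
```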

Implementation of Point detail Classification System using Few-shot Learning (Few-shot Learning을 이용한 격점상세도 분류 시스템 구현)

  • Park, Jin-Hyouk;Kim, Yong Hyun;Lee, Kook-Bum;Lee, Jongseo;Kim, Yu-Doo
    • Journal of the Korea Institute of Information and Communication Engineering / v.26 no.12 / pp.1809-1815 / 2022
  • A digital twin is a technology that creates a virtual world identical to the real world. Because problems in the real world can be identified through various simulations, it is increasingly applied across industries. To apply a digital twin, it is necessary to analyze the drawings in which the real-world structure to be replicated is designed. Although technology for analyzing drawings is being studied, it is difficult to apply because drafting rules and standards differ from author to author. Therefore, in this paper, we implement a system that analyzes and classifies the point detail drawing, one type of drawing, using artificial intelligence. Through this, we intend to confirm the possibility of analyzing and classifying drawings with artificial intelligence and to introduce future research directions.

CKFont2: An Improved Few-Shot Hangul Font Generation Model Based on Hangul Composability (CKFont2: 한글 구성요소를 이용한 개선된 퓨샷 한글 폰트 생성 모델)

  • Jangkyoung, Park;Ammar, Ul Hassan;Jaeyoung, Choi
    • KIPS Transactions on Software and Data Engineering / v.11 no.12 / pp.499-508 / 2022
  • A lot of research has been carried out on Hangul generation models using deep learning, and recent work investigates how to minimize the number of input characters needed to generate one complete set of Hangul (few-shot learning). In this paper, we propose the CKFont2 model, which uses only 14 characters, by analyzing and improving the CKFont (hereafter CKFont1) model, which uses 28 characters. Whereas CKFont1 generates all Hangul by extracting 51 Hangul components from 28 characters, CKFont2 generates all Hangul from only 14 characters containing the 24 basic components (14 consonants and 10 vowels); this is the minimum number of input characters among currently known models. Starting from the basic consonants and vowels, the remaining 27 components (5 double consonants, 11 compound consonants, and 11 compound vowels) are learned and generated by deep learning, and the generated 27 components are combined with the 24 basic consonants and vowels. All Hangul characters are then automatically composed from the combined 51 components. The superiority of the performance was verified by comparative analysis with the zi2zi, CKFont1, and MX-Font models. The model has a simple structure, is efficient and effective, saves time and resources, and can be extended to Chinese, Thai, and Japanese.
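
CKFont2 exploits Hangul's compositional structure: every syllable is an arrangement of an initial consonant, a medial vowel, and an optional final consonant drawn from a fixed component set. The snippet below illustrates that composability with the standard Unicode formula for assembling a syllable from component indices; it demonstrates the combinatorial structure the model relies on, not the font-generation model itself.

```python
# Hangul composability: a syllable code point is determined by its
# initial (19 choices), medial (21), and final (28, incl. "none") components.
# Standard Unicode composition rule, shown to illustrate the component
# structure that few-shot font models such as CKFont2 exploit.
CHOSEONG  = [chr(c) for c in range(0x1100, 0x1113)]          # 19 initial consonants
JUNGSEONG = [chr(c) for c in range(0x1161, 0x1176)]          # 21 medial vowels
JONGSEONG = [""] + [chr(c) for c in range(0x11A8, 0x11C3)]   # 27 finals + empty

def compose(cho: int, jung: int, jong: int = 0) -> str:
    """Compose a precomposed Hangul syllable from component indices."""
    return chr(0xAC00 + (cho * 21 + jung) * 28 + jong)

print(len(CHOSEONG), len(JUNGSEONG), len(JONGSEONG))  # 19 21 28
print(compose(0, 0))        # '가'  (ㄱ + ㅏ)
print(compose(18, 20, 4))   # '힌'  (ㅎ + ㅣ + ㄴ)
print(19 * 21 * 28)         # 11172 syllables from a small set of components
```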

Denoise of Astronomical Images with Deep Learning

  • Park, Youngjun;Choi, Yun-Young;Moon, Yong-Jae;Park, Eunsu;Lim, Beomdu;Kim, Taeyoung
    • The Bulletin of The Korean Astronomical Society / v.44 no.1 / pp.54.2-54.2 / 2019
  • Removing the noise that inevitably occurs when taking image data has been a major concern. Image stacking, averaging or simply adding the pixel values of multiple exposures of a specific area, is regarded as the only way to raise the signal-to-noise ratio. Its performance and reliability are unquestioned, but its weaknesses are also evident: objects with fast proper motion can vanish, and above all, it takes a long time. If a single-shot image can be processed well enough to achieve similar performance, those weaknesses can be overcome. Recent developments in deep learning have enabled things that were not possible with former algorithm-based programming, one of which is generating data with more information from data with less information. As part of that, we reproduced stacked images from single-shot images using a conditional generative adversarial network (cGAN). We used r-band camcol2 south data from the SDSS Stripe 82 dataset. From all fields, stacked images composed of only 22 individual exposures were used, paired with the single-pass images included in each stack. All fields were cut into 128×128 pixel patches, giving 17,930 images in total; 14,234 pairs were used to train the cGAN and 3,696 pairs were used to verify the result. As a result, the RMS error of pixel values between the data generated under the best condition and the target data was 7.67×10⁻⁴, compared to 1.24×10⁻³ for the original input data. We also applied the model to a few test galaxy images, and the generated images were qualitatively similar to the stacked images compared to other denoising methods. In addition, in photometry, the number count of sources matched between the stacked and cGAN images is larger than that between the single-pass and stacked images, especially for fainter objects, and magnitude completeness also improved for fainter objects. With this work, it is possible to reliably observe objects about one magnitude fainter.
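
The work above trains a conditional GAN to map a single-pass 128×128 patch to its stacked counterpart. The sketch below shows the generic shape of such a pix2pix-style training step: a generator conditioned on the noisy single-pass patch, a discriminator that sees (input, output) pairs, and an adversarial plus L1 reconstruction loss. The tiny networks and the L1 weight are illustrative assumptions; the paper's exact architecture is not given in the abstract.

```python
# Generic conditional-GAN (pix2pix-style) training step for mapping a noisy
# single-pass 128x128 patch to its stacked (low-noise) counterpart.
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Sequential(                     # single-pass patch -> denoised patch
    nn.Conv2d(1, 32, 3, 1, 1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, 1, 1),
)
D = nn.Sequential(                     # (input, output) pair -> realism score map
    nn.Conv2d(2, 32, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, 2, 1),
)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

single_pass = torch.rand(8, 1, 128, 128)   # noisy single-exposure patches
stacked = torch.rand(8, 1, 128, 128)       # corresponding stacked targets

fake = G(single_pass)

# Discriminator: real (input, stacked) pairs vs fake (input, generated) pairs.
real_logits = D(torch.cat([single_pass, stacked], dim=1))
fake_logits = D(torch.cat([single_pass, fake.detach()], dim=1))
d_loss = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits)) + \
         F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator: fool the discriminator + stay close to the stacked target (L1).
g_logits = D(torch.cat([single_pass, fake], dim=1))
g_loss = F.binary_cross_entropy_with_logits(g_logits, torch.ones_like(g_logits)) \
         + 100.0 * F.l1_loss(fake, stacked)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```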

Interactions among Components of Pedagogical Content Knowledge of Science Teachers in a Teacher Learning Community (교사학습공동체 과학 교사의 PCK 요소 간 상호작용)

  • Yang, Jungeun;Choi, Aeran
    • Journal of the Korean Chemical Society / v.66 no.1 / pp.15-30 / 2022
  • The purpose of this study was to examine interactions among components of pedagogical content knowledge of middle school science teachers in a teacher learning community targeting science practice-based instruction. Data collection consisted of pre and post questionnaire and interview with each of five science teachers, audio-recording of teacher discussion in a teacher learning community, lesson plans, teacher written reflection, and video-recording of teaching practice. Qualitative data analysis revealed that there were two types of interactions, i.e., one-way interaction and two-ways interaction among components of pedagogical content knowledge of science teachers in a teacher learning community. There were also consecutive interactions as well as one-shot interaction. For two-ways interaction there were synchronous two-ways interaction in a teacher learning community meeting as well as consecutive two-ways interaction along with several meetings. This study provides implications that collaborative learning context in a teacher learning community should stimulate various types interactions among components of pedagogical content knowledge.