• Title/Summary/Keyword: Self-supervised Learning

Analysis of the effect of class classification learning on the saliency map of Self-Supervised Transformer (클래스분류 학습이 Self-Supervised Transformer의 saliency map에 미치는 영향 분석)

  • Kim, JaeWook;Kim, Hyeoncheol
    • Proceedings of the Korean Society of Computer Information Conference / 2022.07a / pp.67-70 / 2022
  • As the Transformer model, first widely adopted in NLP, has begun to be applied to vision, it has been overcoming the stagnant performance of existing CNN-based models and improving results in areas such as object detection and segmentation. In addition, a ViT (Vision Transformer) model trained only on images through self-supervised learning, without label data, can produce a saliency map that detects the regions of the important objects in an image; as a result, research on object detection and semantic segmentation via self-supervised ViTs is being actively conducted. In this paper, we attach a classifier to a ViT model and compare and analyze the saliency maps, through visualization, of a model trained in the ordinary supervised way and a model transfer-learned from self-supervised pretrained weights. Through this, we were able to confirm the effect of class-classification-based transfer learning on the saliency map of the transformer.
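
A minimal sketch of how such a self-attention saliency map can be extracted from a self-supervised ViT, assuming the publicly available DINO checkpoint from facebookresearch/dino (an illustration, not the authors' code):

```python
# A sketch only: extract the [CLS] self-attention of a self-supervised (DINO) ViT
# and use it as a saliency map. Checkpoint and image path are assumptions.
import torch
import torchvision.transforms as T
from PIL import Image

model = torch.hub.load('facebookresearch/dino:main', 'dino_vits16')
model.eval()

preprocess = T.Compose([
    T.Resize((480, 480)),
    T.ToTensor(),
    T.Normalize((0.485, 0.456, 0.406), (0.229, 0.224, 0.225)),
])
img = preprocess(Image.open('sample.jpg').convert('RGB')).unsqueeze(0)

with torch.no_grad():
    attn = model.get_last_selfattention(img)   # (1, heads, tokens, tokens)

patch = 16
grid = 480 // patch
# Row 0 is the [CLS] token; drop its own column, average over heads, reshape to the patch grid.
saliency = attn[0, :, 0, 1:].mean(0).reshape(grid, grid)
saliency = (saliency - saliency.min()) / (saliency.max() - saliency.min())
```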

The Identifier Recognition from Shipping Container Image by Using Contour Tracking and Self-Generation Supervised Learning Algorithm Based on Enhanced ART1 (윤곽선 추적과 개선된 ART1 기반 자가 생성 지도 학습 알고리즘을 이용한 운송 컨테이너 영상의 식별자 인식)

  • Kim, Kwang-Baek
    • Journal of Intelligence and Information Systems / v.9 no.3 / pp.65-79 / 2003
  • In general, extracting and recognizing identifiers is difficult because their scale and location are not fixed, and because the image is captured by a camera, it contains noise. In this paper, we propose a method for automatically detecting edges using a Canny edge mask. After detecting edges, we extract identifier regions from the detected edge information, and within those regions we extract each identifier using a contour tracking algorithm. For recognition, a self-generation supervised learning algorithm is proposed, which combines enhanced ART1 with a supervised learning method. The proposed method was applied to container images. The identifier extraction rate obtained with the contour tracking algorithm was better than that of the histogram method. Furthermore, the recognition rate of the self-generation supervised learning method based on enhanced ART1 was much higher than that of the self-generation supervised learning method based on conventional ART1.
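
A minimal sketch of the Canny edge detection and contour-based extraction steps described above, using OpenCV as a stand-in (file name and thresholds are assumptions, not the paper's implementation):

```python
# A sketch only: Canny edge detection plus contour extraction as a stand-in for the
# identifier-region step described above.
import cv2

gray = cv2.imread('container.jpg', cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, threshold1=100, threshold2=200)

# Each sufficiently large contour's bounding box is a candidate identifier region.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidate_boxes = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 50]
```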

Efficient Self-supervised Learning Techniques for Lightweight Depth Completion (경량 깊이완성기술을 위한 효율적인 자기지도학습 기법 연구)

  • Park, Jae-Hyuck;Min, Kyoung-Wook;Choi, Jeong Dan
    • The Journal of The Korea Institute of Intelligent Transport Systems / v.20 no.6 / pp.313-330 / 2021
  • In an autonomous driving system equipped with a camera and lidar, depth completion techniques enable dense depth estimation. In particular, using self-supervised learning, the depth completion network can be trained even without ground truth. In actual autonomous driving, depth completion must have very low latency because it serves as the input to other algorithms. Therefore, rather than complicating the network structure to increase accuracy as in previous studies, this paper focuses on network latency: we design a U-Net-type network with RegNet encoders optimized for GPU computation. Instead, this paper presents several techniques that increase accuracy during the self-supervised learning process. The proposed techniques increase robustness to unreliable lidar inputs and improve depth quality for edge and sky regions based on semantic information extracted in advance. Our experiments confirm that our model is very lightweight (2.42 ms at 1280x480) yet resistant to noise, with depth quality close to that of the latest studies.
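
A minimal sketch of a U-Net-type depth completion network with a RegNet encoder, assuming timm's regnetx_002 backbone as an illustrative choice (not the authors' exact architecture or training code):

```python
# A sketch only: U-Net-style depth completion on top of a timm RegNet encoder,
# taking RGB plus sparse lidar depth as input.
import torch
import torch.nn as nn
import torch.nn.functional as F
import timm

class DepthCompletionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # 4 input channels: RGB + sparse lidar depth
        self.encoder = timm.create_model('regnetx_002', features_only=True, in_chans=4)
        chs = self.encoder.feature_info.channels()  # channel count per feature level
        self.up = nn.ModuleList(
            nn.Sequential(nn.Conv2d(chs[i] + chs[i - 1], chs[i - 1], 3, padding=1),
                          nn.ReLU(inplace=True))
            for i in range(len(chs) - 1, 0, -1))
        self.head = nn.Conv2d(chs[0], 1, 3, padding=1)

    def forward(self, rgb, sparse_depth):
        feats = self.encoder(torch.cat([rgb, sparse_depth], dim=1))
        x = feats[-1]
        for i, block in enumerate(self.up):
            skip = feats[-(i + 2)]
            x = F.interpolate(x, size=skip.shape[-2:], mode='bilinear', align_corners=False)
            x = block(torch.cat([x, skip], dim=1))
        # Upsample back to the input resolution and predict dense depth.
        return F.interpolate(self.head(x), size=rgb.shape[-2:], mode='bilinear',
                             align_corners=False)
```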

gMLP-based Self-Supervised Learning Anomaly Detection using a Simple Synthetic Data Generation Method (단순한 합성데이터 생성 방식을 활용한 gMLP 기반 자기 지도 학습 이상탐지 기법)

  • Ju-Hyo, Hwang;Kyo-Hong, Jin
    • Journal of the Korea Institute of Information and Communication Engineering / v.27 no.1 / pp.8-14 / 2023
  • The existing self-supervised learning method CutPaste generates synthetic data by cutting patches from normal images and pasting them back, and then performs anomaly detection. However, this method leaves a clearly visible boundary around the patch. NSA addresses this problem and achieves higher anomaly detection performance by generating natural synthetic data through Poisson blending, but it has the disadvantage of many hyperparameters that must be tuned for each class. In this paper, synthetic data similar to normal data is generated by the simple method of making the synthetic patch very small. Because the patches are synthesized so locally, models that learn local features can easily overfit the synthetic data. Therefore, we perform anomaly detection with gMLP, which learns global features, and even with this simple synthesis method we achieve higher performance than conventional self-supervised learning techniques.
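
A minimal sketch of the small-patch synthesis idea described above (patch size and sampling are assumptions, not the paper's settings):

```python
# A sketch only: paste a very small patch cut from one normal image into another to
# create a synthetic anomaly for self-supervised anomaly-detection training.
import numpy as np

def make_synthetic_anomaly(normal_a, normal_b, patch_size=8, rng=None):
    """Cut a tiny patch from normal_b and paste it at a random location in normal_a."""
    rng = rng or np.random.default_rng()
    h, w = normal_a.shape[:2]
    ys, xs = rng.integers(0, h - patch_size), rng.integers(0, w - patch_size)  # source
    yd, xd = rng.integers(0, h - patch_size), rng.integers(0, w - patch_size)  # destination
    out = normal_a.copy()
    out[yd:yd + patch_size, xd:xd + patch_size] = normal_b[ys:ys + patch_size, xs:xs + patch_size]
    return out, 1  # label 1 = synthetic anomaly; unmodified normal images keep label 0
```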

Container Image Recognition using Fuzzy-based Noise Removal Method and ART2-based Self-Organizing Supervised Learning Algorithm (퍼지 기반 잡음 제거 방법과 ART2 기반 자가 생성 지도 학습 알고리즘을 이용한 컨테이너 인식 시스템)

  • Kim, Kwang-Baek;Heo, Gyeong-Yong;Woo, Young-Woon
    • Journal of the Korea Institute of Information and Communication Engineering / v.11 no.7 / pp.1380-1386 / 2007
  • This paper proposes an automatic recognition system for shipping container identifiers using a fuzzy-based noise removal method and an ART2-based self-organizing supervised learning algorithm. Generally, the identifiers on a shipping container are characterized by characters that are black or white. Considering this feature, all areas of a container image other than black or white areas are regarded as noise, and identifier areas and noise are discriminated using a fuzzy-based noise detection method. Identifier areas are extracted by applying edge detection with a Sobel masking operation, followed by vertical and horizontal block extraction, to the noise-removed image. The extracted areas are binarized with an iterative binarization algorithm, and individual identifiers are extracted with an 8-directional contour tracking method. For identifier recognition, this paper proposes an ART2-based self-organizing supervised learning algorithm that improves learning performance by applying generalized delta learning and the Delta-bar-Delta algorithm. Experiments on real images of shipping containers showed that the proposed identifier extraction method and the ART2-based self-organizing supervised learning algorithm are improved compared with previously proposed methods.
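
A minimal sketch of the Sobel-based edge detection and binarization steps, using OpenCV with Otsu thresholding standing in for the iterative binarization (an assumption, not the authors' pipeline):

```python
# A sketch only: Sobel edge magnitude followed by binarization of a container image,
# approximating the identifier-area extraction step described above.
import cv2

gray = cv2.imread('container.jpg', cv2.IMREAD_GRAYSCALE)
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
edges = cv2.convertScaleAbs(cv2.magnitude(gx, gy))

# Otsu thresholding stands in for the iterative binarization described above.
_, binary = cv2.threshold(edges, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```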

Korean Text to Gloss: Self-Supervised Learning approach

  • Thanh-Vu Dang;Gwang-hyun Yu;Ji-yong Kim;Young-hwan Park;Chil-woo Lee;Jin-Young Kim
    • Smart Media Journal / v.12 no.1 / pp.32-46 / 2023
  • Natural Language Processing (NLP) has grown tremendously in recent years. In particular, bilingual and multilingual translation models have been deployed widely in machine translation and have gained vast attention from the research community. In contrast, few studies have focused on translating between spoken and sign languages, especially for non-English languages. Prior work on Sign Language Translation (SLT) has shown that a mid-level sign gloss representation enhances translation performance. Therefore, this study presents a new large-scale Korean sign language dataset, the Museum-Commentary Korean Sign Gloss (MCKSG) dataset, comprising 3828 pairs of Korean sentences and their corresponding sign glosses used in museum-commentary contexts. In addition, we propose a translation framework based on self-supervised learning, where the pretext task is a text-to-text task mapping a Korean sentence to its back-translated versions; the pre-trained network is then fine-tuned on the MCKSG dataset. Using self-supervised learning helps to overcome the shortage of sign language data. Experimental results show that our proposed model outperforms a baseline BERT model by 6.22%.
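
A minimal sketch of the fine-tuning stage on sentence-gloss pairs, using Hugging Face transformers with a placeholder checkpoint name (the back-translation pretext stage is omitted; this is not the paper's code):

```python
# A sketch only: fine-tune a Korean seq2seq model on sentence-to-gloss pairs.
# The checkpoint name is a placeholder, not a real model identifier.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = 'some-korean-seq2seq-checkpoint'   # hypothetical name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

sentence = "박물관은 오전 9시에 문을 엽니다."   # source Korean sentence
gloss = "박물관 오전 9시 문 열다"               # target sign gloss sequence

batch = tokenizer(sentence, text_target=gloss, return_tensors='pt')
loss = model(**batch).loss   # standard sequence-to-sequence cross-entropy
loss.backward()
```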

Multi-modal Meteorological Data Fusion based on Self-supervised Learning for Graph (Self-supervised Graph Learning을 통한 멀티모달 기상관측 융합)

  • Hyeon-Ju Jeon;Jeon-Ho Kang;In-Hyuk Kwon
    • Proceedings of the Korea Information Processing Society Conference / 2023.11a / pp.589-591 / 2023
  • Current numerical weather prediction systems estimate the atmospheric state by assimilating heterogeneous observation data obtained from various sensors such as aircraft and satellites, but the computational complexity of processing observations with different observed variables or physical quantities is very high. To improve the computational efficiency of the existing system and use it effectively for evaluating or preprocessing observations, this study proposes a methodology that estimates the true atmospheric state from multi-modal meteorological observations through a self-supervised learning method that takes the characteristics of each observation into account. To fuse non-homogeneously collected multi-modal meteorological observation data, (i) a heterogeneous network of meteorological observations is constructed to represent the topological information of individual observations, (ii) the characteristics of individual observations are represented via pretext-task-based self-supervised learning, and (iii) an atmospheric state close to the truth is estimated through a graph neural network-based prediction model. By improving on the limitations of existing techniques that rely on large-scale numerical simulation systems, the proposed model can be used for observation preprocessing tasks such as anomalous observation detection, observation bias correction, and observation impact assessment.
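
A minimal sketch of a pretext-task-style self-supervised step on an observation graph, using PyTorch Geometric with masked-feature reconstruction as an illustrative pretext task (the authors' actual pretext task and graph construction are not specified here):

```python
# A sketch only: a GNN encoder over an observation graph pre-trained with a
# masked-feature-reconstruction pretext task.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class ObsEncoder(nn.Module):
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)

    def forward(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

def pretext_step(encoder, decoder, x, edge_index, mask_ratio=0.15):
    """Mask a fraction of node features and reconstruct them from graph context."""
    mask = torch.rand(x.size(0)) < mask_ratio
    x_masked = x.clone()
    x_masked[mask] = 0.0
    z = encoder(x_masked, edge_index)               # node embeddings
    return F.mse_loss(decoder(z)[mask], x[mask])    # decoder: e.g. nn.Linear(hidden_dim, in_dim)
```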

Analysis and Study for Appropriate Deep Neural Network Structures and Self-Supervised Learning-based Brain Signal Data Representation Methods (딥 뉴럴 네트워크의 적절한 구조 및 자가-지도 학습 방법에 따른 뇌신호 데이터 표현 기술 분석 및 고찰)

  • Won-Jun Ko
    • The Journal of the Korea institute of electronic communication sciences / v.19 no.1 / pp.137-142 / 2024
  • Recently, deep learning has become the de facto standard for medical data representation. However, deep learning inherently requires a large amount of training data, which poses a challenge for its direct application in the medical field, where acquiring large-scale data is not straightforward. Brain signal modalities also suffer from this problem owing to their high variability. Research has therefore focused on designing deep neural network structures capable of effectively extracting spectro-spatio-temporal characteristics of brain signals, or on employing self-supervised learning methods to pre-learn the neurophysiological features of brain signals. This paper analyzes methodologies for handling small-scale data in emerging fields such as brain-computer interfaces and brain signal-based state prediction, and presents future directions for these technologies. First, the paper examines deep neural network structures for representing brain signals, then analyzes self-supervised learning methodologies aimed at efficiently learning the characteristics of brain signals. Finally, it discusses key insights and future directions for deep learning-based brain signal analysis.

A study on Generating Molecules with Variational Auto-encoders based on Graph Neural Networks (그래프 신경망 기반 가변 자동 인코더로 분자 생성에 관한 연구)

  • Cahyadi, Edward Dwijayanto;Song, Mi-Hwa
    • Proceedings of the Korea Information Processing Society Conference / 2022.11a / pp.380-382 / 2022
  • Extracting informative representations of molecules using graph neural networks (GNNs) is crucial in AI-driven drug discovery. Recently, the graph research community has been trying to replicate the success of self-supervised learning in natural language processing, with several successes claimed. We find that the benefit brought by self-supervised learning, when applied to variational auto-encoders, can potentially be effective on molecular data.
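
A minimal sketch of a graph variational auto-encoder over molecular graphs, using PyTorch Geometric's VGAE wrapper (feature sizes are illustrative assumptions, not the authors' model):

```python
# A sketch only: a graph variational auto-encoder with a GCN encoder that outputs
# the mean and log-std of the latent node embeddings.
import torch
from torch_geometric.nn import GCNConv, VGAE

class GCNEncoder(torch.nn.Module):
    def __init__(self, in_dim, latent_dim):
        super().__init__()
        self.conv = GCNConv(in_dim, 2 * latent_dim)
        self.conv_mu = GCNConv(2 * latent_dim, latent_dim)
        self.conv_logstd = GCNConv(2 * latent_dim, latent_dim)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv(x, edge_index))
        return self.conv_mu(h, edge_index), self.conv_logstd(h, edge_index)

model = VGAE(GCNEncoder(in_dim=9, latent_dim=32))   # e.g. 9 atom features per node
# One training step, given atom features x and bond indices edge_index:
#   z = model.encode(x, edge_index)
#   loss = model.recon_loss(z, edge_index) + (1 / x.size(0)) * model.kl_loss()
```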

Semi-supervised Model for Fault Prediction using Tree Methods (트리 기법을 사용하는 세미감독형 결함 예측 모델)

  • Hong, Euyseok
    • The Journal of the Institute of Internet, Broadcasting and Communication / v.20 no.4 / pp.107-113 / 2020
  • A number of studies have been conducted on predicting software faults, but most of them use supervised models trained on labeled data. Very few studies have examined unsupervised models that use only unlabeled data, or semi-supervised models that use abundant unlabeled data together with a small amount of labeled data. In this paper, we build new semi-supervised models using tree algorithms within the self-training technique. In the model performance evaluation experiments, the newly created tree models performed better than existing models, and CollectiveWoods in particular outperformed the other models. In addition, it showed very stable performance even with very little labeled data.
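
A minimal sketch of tree-based self-training on a toy fault-prediction dataset, using scikit-learn's SelfTrainingClassifier (illustrative data and thresholds, not the paper's experiment):

```python
# A sketch only: semi-supervised self-training with a decision tree on toy data,
# where unlabeled modules carry the label -1 (scikit-learn convention).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                  # toy software metrics for 200 modules
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # toy labels: 1 = faulty, 0 = clean
y_partial = y.copy()
y_partial[20:] = -1                            # keep only 20 labeled modules

model = SelfTrainingClassifier(DecisionTreeClassifier(max_depth=3), threshold=0.9)
model.fit(X, y_partial)
print(model.score(X, y))                       # accuracy against the full toy labels
```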