• Title/Summary/Keyword: Semi-supervised Learning


A study on the performance improvement of learning based on consistency regularization and unlabeled data augmentation (일치성규칙과 목표값이 없는 데이터 증대를 이용하는 학습의 성능 향상 방법에 관한 연구)

  • Kim, Hyunwoong;Seok, Kyungha
    • The Korean Journal of Applied Statistics
    • /
    • v.34 no.2
    • /
    • pp.167-175
    • /
    • 2021
  • Semi-supervised learning uses both labeled and unlabeled data. Recently, consistency regularization has become very popular in semi-supervised learning. Unsupervised Data Augmentation (UDA), which augments unlabeled data, is also based on consistency regularization. In UDA, the Kullback-Leibler divergence is used as the loss for unlabeled data and cross-entropy as the loss for labeled data. UDA also employs techniques such as Training Signal Annealing (TSA) and confidence-based masking to improve performance. In this study, we propose using the Jensen-Shannon divergence instead of the Kullback-Leibler divergence, using reverse-TSA, and dropping confidence-based masking. Experiments show that the proposed technique yields better performance than UDA.
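
As a rough illustration of the consistency term discussed in this abstract, the sketch below (assuming a PyTorch classifier that outputs logits) replaces UDA's KL term with a Jensen-Shannon divergence between the predictions on an unlabeled example and its augmented version; the sharpening and weighting details in the paper may differ.

    import torch
    import torch.nn.functional as F

    def js_consistency_loss(logits_orig, logits_aug, eps=1e-8):
        # Jensen-Shannon divergence between the class distributions predicted
        # for an unlabeled example and for its augmented version.
        p = F.softmax(logits_orig.detach(), dim=-1)  # stop-gradient on the original, as in UDA
        q = F.softmax(logits_aug, dim=-1)
        m = 0.5 * (p + q)
        kl_pm = torch.sum(p * (torch.log(p + eps) - torch.log(m + eps)), dim=-1)
        kl_qm = torch.sum(q * (torch.log(q + eps) - torch.log(m + eps)), dim=-1)
        return (0.5 * (kl_pm + kl_qm)).mean()

    def semi_supervised_loss(logits_lab, labels, logits_unlab, logits_unlab_aug, lam=1.0):
        # Cross-entropy on labeled data plus the weighted consistency term on
        # unlabeled data; the weight lam is a free choice, not from the paper.
        return F.cross_entropy(logits_lab, labels) + lam * js_consistency_loss(
            logits_unlab, logits_unlab_aug)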

Sentiment Orientation Using Deep Learning Sequential and Bidirectional Models

  • Alyamani, Hasan J.
    • International Journal of Computer Science & Network Security
    • /
    • v.21 no.11
    • /
    • pp.23-30
    • /
    • 2021
  • Sentiment analysis has become a very important field of research because posting reviews online has become a trend. Supervised, unsupervised, and semi-supervised machine learning methods have been applied extensively to mine these data. Feature engineering is a complex and technical part of machine learning; deep learning is a newer trend in which this laborious work is done automatically. Much research has been conducted on deep learning models such as the Convolutional Neural Network (CNN) and the Long Short-Term Memory (LSTM) network, but these require high processing speed and memory. Here the author proposes two deep learning models, a simple (unidirectional) one and a bidirectional one, which can work on text data with normal processing speed. The two models are compared, and the bidirectional model is found to be the better one: the simple model achieves 50% accuracy, whereas the bidirectional model achieves 99% accuracy on the training data and 78% on the test data. These results are based on 10 epochs and a batch size of 40; accuracy could be increased further by experimenting with different epoch counts and batch sizes.
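
A minimal Keras version of the kind of bidirectional model the abstract describes might look like the sketch below; the vocabulary size and embedding dimension are assumptions, while the 10 epochs and batch size of 40 come from the abstract.

    from tensorflow import keras
    from tensorflow.keras import layers

    vocab_size = 10000                            # assumed vocabulary size
    model = keras.Sequential([
        layers.Embedding(vocab_size, 64),         # token embeddings
        layers.Bidirectional(layers.LSTM(64)),    # reads the review in both directions
        layers.Dense(1, activation="sigmoid"),    # positive vs. negative sentiment
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(x_train, y_train, epochs=10, batch_size=40)   # settings from the abstract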

Deep Hashing for Semi-supervised Content Based Image Retrieval

  • Bashir, Muhammad Khawar;Saleem, Yasir
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.12 no.8
    • /
    • pp.3790-3803
    • /
    • 2018
  • Content-based image retrieval is an approach used to query images based on their semantics. Semantics-based retrieval has applications in many fields, including medicine, space, and computing. Semantically generated binary hash codes can improve content-based image retrieval. These semantic labels / binary hash codes can be generated from unlabeled data using convolutional autoencoders. The proposed approach uses semi-supervised deep hashing with semantic learning and binary code generation by minimizing the objective function. Convolutional autoencoders are the basis for extracting semantic features because they can reconstruct images from low-level semantic representations. These representations of images are more effective than simple feature extraction and preserve semantic information better. The proposed activation and loss functions help minimize classification error and produce better hash codes. The most widely used datasets were used to verify the approach, which outperforms existing methods.
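
The PyTorch sketch below shows one simple way a convolutional autoencoder can expose a binarizable bottleneck for hashing; the layer sizes, code length, and 3x32x32 input are assumptions, not the authors' architecture.

    import torch
    import torch.nn as nn

    class ConvHashAutoencoder(nn.Module):
        # Convolutional autoencoder whose bottleneck serves as a relaxed hash code.
        def __init__(self, code_bits=48):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),   # 32x16x16
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 64x8x8
                nn.Flatten(),
                nn.Linear(64 * 8 * 8, code_bits), nn.Tanh(),           # code in (-1, 1)
            )
            self.decoder = nn.Sequential(
                nn.Linear(code_bits, 64 * 8 * 8), nn.ReLU(),
                nn.Unflatten(1, (64, 8, 8)),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, x):
            code = self.encoder(x)
            return self.decoder(code), code        # reconstruction loss trains the code

        def hash(self, x):
            # Binarize the relaxed code for retrieval.
            return (self.encoder(x) > 0).int()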

Development of Semi-Supervised Deep Domain Adaptation Based Face Recognition Using Only a Single Training Sample (단일 훈련 샘플만을 활용하는 준-지도학습 심층 도메인 적응 기반 얼굴인식 기술 개발)

  • Kim, Kyeong Tae;Choi, Jae Young
    • Journal of Korea Multimedia Society
    • /
    • v.25 no.10
    • /
    • pp.1375-1385
    • /
    • 2022
  • In this paper, we propose a semi-supervised domain adaptation solution for practical face recognition (FR) scenarios in which only a single face image per target identity (to be recognized) is available in the training phase. The main goal of the proposed method is to reduce the discrepancy between the target- and source-domain face images, which ultimately improves FR performance. The proposed method is based on the Domain Adaptation Network (DAN), which uses an MMD loss function to reduce the discrepancy between domains. To train more effectively, we develop a novel loss-weighting strategy in which the MMD loss and the cross-entropy loss are weighted differently according to the progress of each epoch during learning. The proposed weight adaptation focuses on the source domain in the initial learning phase to learn facial feature information such as eyes, nose, and mouth. After the initial learning is completed, the resulting feature information is used to train a deep network on the target-domain images. To evaluate the effectiveness of the proposed method, FR performance was compared with a pretrained model trained only on CASIA-webface (source images) and a fine-tuned model trained only on the FERET gallery (target images) under the same FR scenarios. The experimental results show that the proposed semi-supervised domain adaptation improves performance by 24.78% over the pretrained model and by 28.42% over the fine-tuned model. In addition, the proposed method outperforms other state-of-the-art domain adaptation approaches by 9.41%.
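
As an illustration of the epoch-dependent weighting between the cross-entropy and MMD losses, a minimal PyTorch sketch follows; the single-RBF-kernel MMD and the linear weight schedule are simplifying assumptions, not the paper's exact rule.

    import torch
    import torch.nn.functional as F

    def rbf_mmd2(x, y, sigma=1.0):
        # Squared MMD with a single RBF kernel between source and target features.
        def k(a, b):
            d2 = torch.cdist(a, b) ** 2
            return torch.exp(-d2 / (2 * sigma ** 2))
        return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

    def weighted_da_loss(logits_src, labels_src, feat_src, feat_tgt, epoch, max_epoch):
        # Emphasize source cross-entropy early, then shift weight toward the
        # domain-alignment (MMD) term as training progresses.
        alpha = epoch / max_epoch                  # grows from 0 to 1
        ce = F.cross_entropy(logits_src, labels_src)
        mmd = rbf_mmd2(feat_src, feat_tgt)
        return (1 - alpha) * ce + alpha * mmd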

Semi-supervised learning of speech recognizers based on variational autoencoder and unsupervised data augmentation (변분 오토인코더와 비교사 데이터 증강을 이용한 음성인식기 준지도 학습)

  • Jo, Hyeon Ho;Kang, Byung Ok;Kwon, Oh-Wook
    • The Journal of the Acoustical Society of Korea
    • /
    • v.40 no.6
    • /
    • pp.578-586
    • /
    • 2021
  • We propose a semi-supervised learning method based on a Variational AutoEncoder (VAE) and Unsupervised Data Augmentation (UDA) to improve the performance of an end-to-end speech recognizer. In the proposed method, the VAE-based augmentation model and the baseline end-to-end speech recognizer are first trained using the original speech data. Then, the baseline end-to-end speech recognizer is trained again using data augmented by the learned augmentation model. Finally, the learned augmentation model and the end-to-end speech recognizer are retrained using the UDA-based semi-supervised learning method. Computer simulations show that the augmentation model improves the word error rate (WER) of the baseline end-to-end speech recognizer, and that combining it with the UDA-based learning method improves performance further.
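
A toy PyTorch VAE over per-frame acoustic features, sketched below, illustrates how an augmentation model of this kind can generate perturbed variants of its input by resampling in the latent space; the feature dimension and layer sizes are assumptions, not the authors' architecture.

    import torch
    import torch.nn as nn

    class SpeechFeatureVAE(nn.Module):
        # Tiny VAE over per-frame acoustic feature vectors (e.g., filterbanks).
        def __init__(self, feat_dim=40, latent_dim=16):
            super().__init__()
            self.enc = nn.Linear(feat_dim, 2 * latent_dim)   # mean and log-variance
            self.dec = nn.Linear(latent_dim, feat_dim)

        def forward(self, x):
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
            return self.dec(z), mu, logvar

        def augment(self, x, noise_scale=1.0):
            # Generate an augmented variant by resampling around the posterior.
            mu, logvar = self.enc(x).chunk(2, dim=-1)
            z = mu + noise_scale * torch.randn_like(mu) * torch.exp(0.5 * logvar)
            return self.dec(z)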

Sentiment Analysis to Evaluate Different Deep Learning Approaches

  • Sheikh Muhammad Saqib;Tariq Naeem
    • International Journal of Computer Science & Network Security
    • /
    • v.23 no.11
    • /
    • pp.83-92
    • /
    • 2023
  • The majority of product users rely on the reviews posted on the relevant websites, and both users and the product's manufacturer can benefit from these reviews. Thousands of reviews are submitted daily; how is it possible to read them all? Sentiment analysis has become a critical field of research as posting reviews becomes more and more common. Supervised, unsupervised, and semi-supervised machine learning techniques have been used extensively to mine these data. Feature engineering is a complicated and technical area of machine learning; using deep learning, this tedious process can be completed automatically. Numerous studies have been conducted on deep learning models such as LSTM, CNN, RNN, and GRU, each typically employed for a certain type of data, such as CNNs for images and LSTMs for language translation. In experiments on a publicly accessible dataset of positive and negative reviews, all of the models were compared, and the CNN was identified as the best model for this dataset, with an accuracy of 81%.
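
A minimal Keras 1D-CNN review classifier of the kind compared in the study is sketched below; the vocabulary size, filter settings, and output setup are assumptions.

    from tensorflow import keras
    from tensorflow.keras import layers

    vocab_size = 20000                             # assumed vocabulary size
    model = keras.Sequential([
        layers.Embedding(vocab_size, 128),         # token embeddings
        layers.Conv1D(128, 5, activation="relu"),  # n-gram-like local features
        layers.GlobalMaxPooling1D(),               # keep the strongest response per filter
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),     # positive vs. negative review
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])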

Energy-efficient semi-supervised learning framework for subchannel allocation in non-orthogonal multiple access systems

  • S. Devipriya;J. Martin Leo Manickam;B. Victoria Jancee
    • ETRI Journal
    • /
    • v.45 no.6
    • /
    • pp.963-973
    • /
    • 2023
  • Non-orthogonal multiple access (NOMA) is considered a key candidate technology for next-generation wireless communication systems due to its high spectral efficiency and massive connectivity. Incorporating multiple-input multiple-output (MIMO) concepts into NOMA can further improve system efficiency, but the hardware complexity increases. This study develops an energy-efficient (EE) subchannel assignment framework for MIMO-NOMA systems under quality-of-service and interference constraints. The framework employs an energy-efficient co-training-based semi-supervised learning (EE-CSL) algorithm, which uses a small portion of existing labeled data generated by numerical iterative algorithms for training. To improve the learning performance of the proposed EE-CSL, the initial assignment is performed by a many-to-one matching (MOM) algorithm, which helps achieve a low-complexity solution. Simulation results illustrate that the low computational complexity of the EE-CSL algorithm significantly reduces the energy consumption of the network. Furthermore, the sum rate of NOMA exceeds that of conventional orthogonal multiple access.
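
The co-training idea behind EE-CSL can be illustrated with the generic, domain-agnostic sketch below: two classifiers trained on two feature views exchange their most confident pseudo-labels each round. The use of scikit-learn logistic regression and the confidence-based selection rule are assumptions, not the paper's algorithm.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def co_train(X1, X2, y, X1_u, X2_u, rounds=10, k=20):
        # X1/X2: two feature views of the labeled data; X1_u/X2_u: same views of
        # the unlabeled data. Each round adds the k most confident pseudo-labels.
        for _ in range(rounds):
            c1 = LogisticRegression(max_iter=1000).fit(X1, y)
            c2 = LogisticRegression(max_iter=1000).fit(X2, y)
            if len(X1_u) == 0:
                break
            p1, p2 = c1.predict_proba(X1_u), c2.predict_proba(X2_u)
            conf = np.maximum(p1.max(axis=1), p2.max(axis=1))
            pick = np.argsort(-conf)[:k]                     # most confident unlabeled samples
            pseudo = np.where(p1.max(axis=1)[pick] >= p2.max(axis=1)[pick],
                              p1.argmax(axis=1)[pick], p2.argmax(axis=1)[pick])
            X1, X2 = np.vstack([X1, X1_u[pick]]), np.vstack([X2, X2_u[pick]])
            y = np.concatenate([y, pseudo])
            keep = np.setdiff1d(np.arange(len(X1_u)), pick)
            X1_u, X2_u = X1_u[keep], X2_u[keep]
        return c1, c2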

Graph Construction Based on Fast Low-Rank Representation in Graph-Based Semi-Supervised Learning (그래프 기반 준지도 학습에서 빠른 낮은 계수 표현 기반 그래프 구축)

  • Oh, Byonghwa;Yang, Jihoon
    • Journal of KIISE
    • /
    • v.45 no.1
    • /
    • pp.15-21
    • /
    • 2018
  • Low-Rank Representation (LRR) based methods are widely used in many practical applications, such as face clustering and object detection, because they can guarantee high prediction accuracy when used to construct graphs in graph-based semi-supervised learning. However, solving the LRR problem requires a singular value decomposition of a square matrix whose size equals the number of data points at every iteration of the algorithm, so the computation is inefficient. To address this, we propose an improved and faster LRR method based on the recently published Fast LRR (FaLRR), and we suggest ways to introduce and optimize additional constraints on the underlying optimization objective, since FaLRR is fast but performs poorly on classification problems. Our experiments confirm that the proposed method finds a better solution than LRR does. We also propose Fast MLRR (FaMLRR), which shows better results when the additional minimization objective is added.
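
Once a low-rank representation Z has been computed (by LRR, FaLRR, or the proposed variants; the solver itself is omitted here), the graph construction and label propagation step can be sketched as follows, assuming the standard symmetric-affinity construction and closed-form propagation.

    import numpy as np

    def lrr_graph_ssl(Z, Y, alpha=0.99):
        # Z: n x n low-rank representation from an LRR-type solver (assumed given).
        # Y: n x c one-hot label matrix with all-zero rows for unlabeled points.
        W = 0.5 * (np.abs(Z) + np.abs(Z).T)               # symmetric affinity graph
        d = W.sum(axis=1)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
        S = D_inv_sqrt @ W @ D_inv_sqrt                   # symmetrically normalized affinity
        n = W.shape[0]
        F = np.linalg.solve(np.eye(n) - alpha * S, Y)     # closed-form label propagation
        return F.argmax(axis=1)                           # predicted class for every point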

Automatic Text Categorization based on Semi-Supervised Learning (준지도 학습 기반의 자동 문서 범주화)

  • Ko, Young-Joong;Seo, Jung-Yun
    • Journal of KIISE:Software and Applications
    • /
    • v.35 no.5
    • /
    • pp.325-334
    • /
    • 2008
  • The goal of text categorization is to classify documents into a certain number of pre-defined categories. Previous studies in this area have used a large number of labeled training documents for supervised learning. One problem is that it is difficult to create such labeled training documents: while it is easy to collect unlabeled documents, it is not so easy to categorize them manually to create training data. In this paper, we propose a new text categorization method based on semi-supervised learning. The proposed method uses only unlabeled documents and the keywords of each category, and it automatically constructs training data from them. A text classifier then learns from these data and classifies text documents. The proposed method shows a similar degree of performance compared with traditional supervised learning methods. Therefore, it can be used in areas where low-cost text categorization is needed, and it can also be used for creating labeled training documents.
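
In the spirit of the abstract (whose exact bootstrapping procedure is not spelled out here), a simplified sketch of keyword-based pseudo-labeling followed by classifier training could look like the following, using scikit-learn.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB

    def bootstrap_classifier(unlabeled_docs, category_keywords):
        # category_keywords: dict mapping a category name to a list of keywords.
        # Pseudo-label unlabeled documents by keyword matching, then train a
        # classifier on the machine-labeled documents.
        texts, labels = [], []
        for doc in unlabeled_docs:
            for label, keywords in category_keywords.items():
                if any(kw in doc for kw in keywords):
                    texts.append(doc)
                    labels.append(label)
                    break                      # assign the first matching category
        vec = TfidfVectorizer()
        X = vec.fit_transform(texts)
        clf = MultinomialNB().fit(X, labels)
        return vec, clf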

A Study on the Prediction of Ship Collision Based on Semi-Supervised Learning (준지도 학습 기반 선박충돌 예측에 대한 연구)

  • Ho-June Seok;Seung Sim;Jeong-Hun Woo;Jun-Rae Cho;Deuk-Jae Cho;Jong-Hwa Baek;Jaeyong Jung
    • Proceedings of the Korean Institute of Navigation and Port Research Conference
    • /
    • 2023.05a
    • /
    • pp.204-205
    • /
    • 2023
  • This study investigates a prediction model, based on semi-supervised learning (SSL), for issuing collision alarms for small fishing boats. Supervised learning (SL) requires a large amount of labeled data, but the labeling process takes considerable resources and time. This study used service data collected through a data pipeline linked to the 'intelligent maritime traffic information service' together with data collected from a real-sea experiment. Model accuracy improved when the model was trained not only on real-sea experiment data, whose labels were determined from actual user satisfaction, but also on unlabeled service data.
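
The abstract does not name a specific SSL algorithm, so the sketch below only illustrates one common recipe, self-training with scikit-learn, on placeholder random features standing in for the labeled real-sea data and the unlabeled service data.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.semi_supervised import SelfTrainingClassifier

    rng = np.random.default_rng(0)
    X_labeled = rng.normal(size=(100, 6))                    # placeholder trajectory features
    y_labeled = rng.integers(0, 2, 100)                      # 1 = alarm, 0 = no alarm
    X_service = rng.normal(size=(500, 6))                    # service data, no labels

    X = np.vstack([X_labeled, X_service])
    y = np.concatenate([y_labeled, -np.ones(len(X_service), dtype=int)])  # -1 marks "unlabeled"
    model = SelfTrainingClassifier(RandomForestClassifier(), threshold=0.8)
    model.fit(X, y)                                          # pseudo-labels the confident service rows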
