• Title/Abstract/Keywords: Supervised Data

Search results: 659 (processing time: 0.026 s)

Issues and Empirical Results for Improving Text Classification

  • Ko, Young-Joong;Seo, Jung-Yun
    • Journal of Computing Science and Engineering
    • /
    • Vol. 5, No. 2
    • /
    • pp.150-160
    • /
    • 2011
  • Automatic text classification has a long history, and many machine learning algorithms and information retrieval techniques have been applied to it. Even though much technical progress has been made, there is still room for improvement. This paper discusses three remaining issues in improving text classification: automatic training data generation, noisy data treatment, and term weighting and indexing, and introduces four studies and their empirical results addressing these issues. First, a semi-supervised learning technique is applied to text classification to create training data efficiently. For effective noisy data treatment, a noisy-data reduction method and a text classifier robust to noisy data are developed. Finally, the term weighting and indexing technique is revised by reflecting the importance of sentences in term-weight calculation using summarization techniques.
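The first issue above, creating training data with semi-supervised learning, can be illustrated with a minimal self-training loop. This is a hedged sketch, not the authors' method: a Rocchio-style centroid classifier and a cosine-similarity confidence threshold stand in for whichever base classifier and confidence measure the paper actually uses, and the documents and labels are invented for illustration.

```python
# Minimal self-training sketch for semi-supervised text classification.
# Unlabeled documents that match a class centroid confidently are
# pseudo-labeled and folded back into the training set each round.
from collections import Counter
import math

def vectorize(doc):
    return Counter(doc.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def centroids(labeled):
    cents = {}
    for doc, label in labeled:
        cents.setdefault(label, Counter()).update(vectorize(doc))
    return cents

def self_train(labeled, unlabeled, threshold=0.3, rounds=3):
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        cents = centroids(labeled)
        confident, rest = [], []
        for doc in pool:
            v = vectorize(doc)
            scores = {lab: cosine(v, c) for lab, c in cents.items()}
            best = max(scores, key=scores.get)
            (confident if scores[best] >= threshold else rest).append((doc, best))
        if not confident:
            break
        labeled += confident          # fold pseudo-labeled docs back in
        pool = [d for d, _ in rest]
    return centroids(labeled)

labeled = [("cheap pills buy now", "spam"), ("meeting agenda attached", "ham")]
unlabeled = ["buy cheap pills today", "agenda for the next meeting"]
cents = self_train(labeled, unlabeled)

def classify(doc, cents):
    v = vectorize(doc)
    return max(cents, key=lambda lab: cosine(v, cents[lab]))

print(classify("buy pills", cents))  # -> spam
```

The core mechanic of self-training is visible in the loop: each round enlarges the labeled set with the most confident pseudo-labels, so later centroids are estimated from more data.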

Abnormal Data Augmentation Method Using Perturbation Based on Hypersphere for Semi-Supervised Anomaly Detection

  • 정병길;권준형;민동준;이상근
    • 정보보호학회논문지
    • /
    • Vol. 32, No. 4
    • /
    • pp.647-660
    • /
    • 2022
  • Recently, deep-learning-based semi-supervised anomaly detection has been shown to work very effectively in environments with normal data and a small amount of abnormal data. However, in environments where abnormal data are hard to obtain, such as unknown attacks on real systems in the cybersecurity domain, a shortage of abnormal data can arise. This paper proposes ADA-PH (Abnormal Data Augmentation method using Perturbation based on Hypersphere), an abnormal-data augmentation technique applicable to semi-supervised anomaly detection when abnormal data are far scarcer than normal data. ADA-PH generates abnormal data by adding adversarial perturbations to samples located relatively far from the center of a hypersphere that represents the normal data well. On network intrusion detection datasets containing very few abnormal samples, the proposed technique yields an AUC 23.63% higher on average than without augmentation, and the highest AUC among the compared augmentation techniques. We also conduct quantitative and qualitative analyses of how similar the generated data are to real abnormal data.
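The hypersphere-and-perturbation idea can be sketched in a few lines. This is only a toy analogue of ADA-PH: the real method perturbs samples adversarially against a trained deep one-class model, whereas here the "perturbation" is a plain outward push in input space away from the center of the normal data, and all numbers are made up.

```python
# Toy sketch of hypersphere-based abnormal-data augmentation:
# take normal samples far from the hypersphere center and push
# them further outward to synthesize abnormal samples.
import math

def center(points):
    d = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(d)]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def augment_anomalies(normal, k=2, eps=0.5):
    """Take the k normal samples farthest from the center and
    push each outward by a factor (1 + eps)."""
    c = center(normal)
    ranked = sorted(normal, key=lambda p: dist(p, c), reverse=True)
    return [[ci + (1 + eps) * (pi - ci) for pi, ci in zip(p, c)]
            for p in ranked[:k]]

normal = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
anoms = augment_anomalies(normal, k=1, eps=1.0)
c = center(normal)
print(anoms)  # lies outside the hypersphere enclosing the normal data
```

The synthetic point ends up strictly farther from the center than any normal sample, which is the property the augmented "abnormal" data need.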

Supervised Classification Systems for High Resolution Satellite Images

  • 전영준;김진일
    • 한국정보과학회논문지:컴퓨팅의 실제 및 레터
    • /
    • Vol. 9, No. 3
    • /
    • pp.301-310
    • /
    • 2003
  • In this paper, we design and implement a supervised classification system for the effective classification of high-resolution satellite images. The implemented system provides various interfaces and statistics for efficiently selecting training data to improve classification accuracy. The system is modularized to support various satellite-image formats and to ease the addition of new supervised classification algorithms, and classification that considers spectral characteristics is possible. The supported supervised classification algorithms are parallelepiped, minimum-distance, Mahalanobis-distance, maximum-likelihood, and fuzzy classification. Applying the system to high-resolution IKONOS imagery and analyzing the results demonstrates its applicability.
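Of the classifiers listed, minimum-distance classification is the simplest to sketch. The following toy example (hypothetical two-band pixel values, not from the paper) assigns each pixel to the class whose training mean is nearest; the parallelepiped, Mahalanobis, maximum-likelihood, and fuzzy variants share the same train/assign structure with different decision rules.

```python
# Minimal minimum-distance-to-mean supervised classifier.
import math

def train(samples):
    """samples: {class_name: [pixel vectors]} -> per-class mean vector."""
    means = {}
    for cls, vecs in samples.items():
        d = len(vecs[0])
        means[cls] = [sum(v[i] for v in vecs) / len(vecs) for i in range(d)]
    return means

def classify(pixel, means):
    # Assign the pixel to the class with the nearest mean.
    def dist(m):
        return math.sqrt(sum((p - mi) ** 2 for p, mi in zip(pixel, m)))
    return min(means, key=lambda cls: dist(means[cls]))

# Hypothetical two-band training pixels for two land-cover classes.
training = {
    "water":  [[10, 12], [11, 13], [9, 11]],
    "forest": [[40, 35], [42, 33], [41, 36]],
}
means = train(training)
print(classify([12, 14], means))   # -> water
print(classify([39, 34], means))   # -> forest
```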

A supervised-learning-based spatial performance prediction framework for heterogeneous communication networks

  • Mukherjee, Shubhabrata;Choi, Taesang;Islam, Md Tajul;Choi, Baek-Young;Beard, Cory;Won, Seuck Ho;Song, Sejun
    • ETRI Journal
    • /
    • Vol. 42, No. 5
    • /
    • pp.686-699
    • /
    • 2020
  • In this paper, we propose a supervised-learning-based spatial performance prediction (SLPP) framework for next-generation heterogeneous communication networks (HCNs). Adaptive asset placement, dynamic resource allocation, and load balancing are critical network functions in an HCN for ensuring seamless network management and enhancing service quality. Although many existing systems use measurement data to react to network performance changes, accurate performance prediction is highly beneficial for supporting various network functions. Recent advancements in complex statistical algorithms and computational efficiency have made machine learning ubiquitous for accurate data-based prediction. SLPP is a robust network performance prediction framework that optimizes performance and resource utilization through a linear-discriminant-analysis-based prediction approach. Comparison results with different machine-learning techniques on real-world data demonstrate that SLPP provides superior accuracy and computational efficiency for both stationary and mobile user conditions.
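The discriminant at the heart of an LDA-based predictor like SLPP's can be written out directly for two classes and two features. This is a generic textbook LDA sketch with equal priors and invented feature vectors, not the SLPP implementation; the 2x2 pooled-covariance inverse is computed explicitly to keep the sketch dependency-free.

```python
# Two-class linear discriminant analysis, written out for 2 features.
def mean(vecs):
    n = len(vecs)
    return [sum(v[0] for v in vecs) / n, sum(v[1] for v in vecs) / n]

def pooled_cov(a, b, ma, mb):
    # Pooled 2x2 covariance over both classes.
    s = [[0.0, 0.0], [0.0, 0.0]]
    for vecs, m in ((a, ma), (b, mb)):
        for v in vecs:
            d0, d1 = v[0] - m[0], v[1] - m[1]
            s[0][0] += d0 * d0; s[0][1] += d0 * d1
            s[1][0] += d1 * d0; s[1][1] += d1 * d1
    n = len(a) + len(b) - 2
    return [[x / n for x in row] for row in s]

def inv2(m):
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det, m[0][0] / det]]

def lda_score(x, m, sinv):
    # Linear discriminant: x^T S^-1 m - 0.5 m^T S^-1 m (equal priors).
    sm = [sinv[0][0] * m[0] + sinv[0][1] * m[1],
          sinv[1][0] * m[0] + sinv[1][1] * m[1]]
    return x[0] * sm[0] + x[1] * sm[1] - 0.5 * (m[0] * sm[0] + m[1] * sm[1])

# Invented feature vectors for a "good" vs "poor" performance class.
good = [[5.0, 1.0], [6.0, 1.0], [5.0, 2.0], [6.0, 2.0]]
poor = [[1.0, 5.0], [2.0, 5.0], [1.0, 6.0], [2.0, 6.0]]
mg, mp = mean(good), mean(poor)
sinv = inv2(pooled_cov(good, poor, mg, mp))

def predict(x):
    return "good" if lda_score(x, mg, sinv) > lda_score(x, mp, sinv) else "poor"

print(predict([5.0, 1.2]))  # -> good
print(predict([1.2, 5.0]))  # -> poor
```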

EER-ASSL: Combining Rollback Learning and Deep Learning for Rapid Adaptive Object Detection

  • Ahmed, Minhaz Uddin;Kim, Yeong Hyeon;Rhee, Phill Kyu
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • Vol. 14, No. 12
    • /
    • pp.4776-4794
    • /
    • 2020
  • We propose a rapid adaptive learning framework for streaming object detection, called EER-ASSL. The method combines expected error reduction (EER) dependent rollback learning with active semi-supervised learning (ASSL) for a rapid adaptive CNN detector. Most CNN object detectors are built on the assumption of a static data distribution. However, images are often noisy and biased, and the data distribution is imbalanced in real-world environments. The proposed method consists of collaborative sampling and EER-ASSL. EER-ASSL utilizes active learning (AL) and rollback-based semi-supervised learning (SSL). The AL allows us to select more informative and representative samples by measuring uncertainty and diversity. The SSL divides the selected streaming image samples into bins, and each bin repeatedly transfers the discriminative knowledge of the EER and CNN models to the next bin until convergence, incorporating the EER rollback learning algorithm. The EER models provide rapid short-term myopic adaptation, and the CNN models provide incremental long-term performance improvement. EER-ASSL can overcome noisy and biased labels under varying data distributions. Extensive experiments show that EER-ASSL obtained 70.9 mAP, compared with state-of-the-art detectors such as Faster RCNN, SSD300, and YOLOv2.
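The active-learning half of EER-ASSL selects informative samples by uncertainty. As a hedged stand-in for the paper's uncertainty-and-diversity measure, the sketch below ranks samples by the entropy of their predicted class probabilities, a common uncertainty-sampling criterion; the sample ids and probabilities are invented.

```python
# Uncertainty sampling: pick the samples whose predicted class
# distribution has the highest entropy (i.e. the model is least sure).
import math

def entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_uncertain(predictions, budget):
    """predictions: {sample_id: [class probabilities]} ->
    the `budget` ids with the highest predictive entropy."""
    ranked = sorted(predictions, key=lambda s: entropy(predictions[s]),
                    reverse=True)
    return ranked[:budget]

preds = {
    "img_a": [0.98, 0.01, 0.01],   # confident -> uninformative
    "img_b": [0.34, 0.33, 0.33],   # near-uniform -> most informative
    "img_c": [0.70, 0.20, 0.10],
}
print(select_uncertain(preds, 2))  # -> ['img_b', 'img_c']
```

Samples selected this way would then be labeled and used in the SSL bins described above.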

Data Security on Cloud by Cryptographic Methods Using Machine Learning Techniques

  • Gadde, Swetha;Amutharaj, J.;Usha, S.
    • International Journal of Computer Science & Network Security
    • /
    • Vol. 22, No. 5
    • /
    • pp.342-347
    • /
    • 2022
  • On the cloud, users' important data, protected on remote servers, can be accessed via the internet. With today's rapid shift in technology, confidential and critical data are increasing swiftly, which raises the requirement of securing users' data. Data come in different types, and each needs a different degree of protection. Applying data science to data security makes the computing procedure more capable and intelligent than conventional approaches. Our focus in this paper is to enhance the safety of data on the cloud and to address the problems associated with data security. In the suggested scheme, basic security measures such as cryptographic techniques and authentication are applied in the cloud computing environment. This paper discusses how machine learning techniques are used in data security in both offensive and defensive roles, including an analysis of cyber-attacks aimed at machine learning techniques. The machine learning techniques considered are supervised, unsupervised, semi-supervised, and reinforcement learning. Although much research has been done on this topic, considerably more investigation is required to determine how data can be secured more firmly on the cloud using machine learning techniques and cryptographic methods.

ACCOUNTING FOR IMPORTANCE OF VARIABLES IN MULTI-SENSOR DATA FUSION USING RANDOM FORESTS

  • Park No-Wook;Chi Kwang-Hoon
    • 대한원격탐사학회:학술대회논문집
    • /
    • 대한원격탐사학회 2005년도 Proceedings of ISRS 2005
    • /
    • pp.283-285
    • /
    • 2005
  • To account for the importance of variables in multi-sensor data fusion, random forests are applied to supervised land-cover classification. The random forests approach is a non-parametric ensemble classifier based on CART-like trees. Its distinguishing feature is that the importance of a variable can be estimated by randomly permuting the variable of interest in all the out-of-bag samples for each classifier. Supervised classification with a multi-sensor remote sensing data set including optical and polarimetric SAR data was carried out to illustrate the applicability of random forests. From the experimental results, the random forests approach could extract important variables or bands for land-cover discrimination and showed good performance compared with other non-parametric data fusion algorithms.


Application of Random Forests to Assessment of Importance of Variables in Multi-sensor Data Fusion for Land-cover Classification

  • Park No-Wook;Chi Kwang-Hoon
    • 대한원격탐사학회지
    • /
    • Vol. 22, No. 3
    • /
    • pp.211-219
    • /
    • 2006
  • A random forests classifier is applied to multi-sensor data fusion for supervised land-cover classification in order to account for the importance of variables. The random forests approach is a non-parametric ensemble classifier based on CART-like trees. Its distinguishing feature is that the importance of a variable can be estimated by randomly permuting the variable of interest in all the out-of-bag samples for each classifier. Two different multi-sensor data sets for supervised classification were used to illustrate the applicability of random forests: one with optical and polarimetric SAR data and the other with multi-temporal Radarsat-1 and ENVISAT ASAR data sets. From the experimental results, the random forests approach could extract important variables or bands for land-cover discrimination and showed reasonably good performance in terms of classification accuracy.
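The permutation-importance idea that both random-forest studies rely on is easy to demonstrate: the importance of a variable is the drop in accuracy after that variable's values are randomly permuted. The sketch below uses a trivial rule-based classifier and synthetic two-band data in place of a forest and real imagery, purely to make the mechanism visible.

```python
# Permutation importance: accuracy drop after shuffling one variable.
import random

def accuracy(clf, X, y):
    return sum(clf(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(clf, X, y, var, rng):
    base = accuracy(clf, X, y)
    col = [x[var] for x in X]
    rng.shuffle(col)                       # permute just this variable
    Xp = [x[:var] + [c] + x[var + 1:] for x, c in zip(X, col)]
    return base - accuracy(clf, Xp, y)

# Band 0 fully determines the label; band 1 is pure noise.
rng = random.Random(0)
X = [[i % 2, rng.random()] for i in range(200)]
y = [x[0] for x in X]
clf = lambda x: 1 if x[0] > 0.5 else 0     # stand-in for the forest

imp0 = permutation_importance(clf, X, y, 0, random.Random(1))
imp1 = permutation_importance(clf, X, y, 1, random.Random(1))
print(imp0, imp1)  # band 0 large importance, band 1 exactly 0
```

Permuting the informative band destroys roughly half the predictions, while permuting the noise band changes nothing, which mirrors how random forests score out-of-bag variable importance.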

Anomaly-based Alzheimer's disease detection using entropy-based probability Positron Emission Tomography images

  • Husnu Baris Baydargil;Jangsik Park;Ibrahim Furkan Ince
    • ETRI Journal
    • /
    • Vol. 46, No. 3
    • /
    • pp.513-525
    • /
    • 2024
  • Deep neural networks trained on labeled medical data face major challenges owing to the economic costs of data acquisition through expensive medical imaging devices, expert labor for data annotation, and large datasets to achieve optimal model performance. The heterogeneity of diseases, such as Alzheimer's disease, further complicates deep learning because the test cases may substantially differ from the training data, possibly increasing the rate of false positives. We propose a reconstruction-based self-supervised anomaly detection model to overcome these challenges. It has a dual-subnetwork encoder that enhances feature encoding augmented by skip connections to the decoder for improving the gradient flow. The novel encoder captures local and global features to improve image reconstruction. In addition, we introduce an entropy-based image conversion method. Extensive evaluations show that the proposed model outperforms benchmark models in anomaly detection and classification using an encoder. The supervised and unsupervised models show improved performances when trained with data preprocessed using the proposed image conversion method.
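The reconstruction-based detection principle can be reduced to a few lines: a model fit only on normal data reconstructs normal inputs well and abnormal ones poorly, so reconstruction error serves as the anomaly score. In this deliberately tiny sketch a per-feature mean stands in for the paper's dual-subnetwork encoder-decoder, and the threshold rule is an assumption.

```python
# Reconstruction-error anomaly detection with a trivial "model":
# the reconstruction is simply the per-feature mean of normal data.
def fit(normal):
    d = len(normal[0])
    return [sum(v[i] for v in normal) / len(normal) for i in range(d)]

def recon_error(x, model):
    return sum((xi - mi) ** 2 for xi, mi in zip(x, model))

def detector(normal, margin=2.0):
    model = fit(normal)
    # Threshold: a margin above the worst reconstruction on normal data.
    thresh = margin * max(recon_error(v, model) for v in normal)
    return lambda x: recon_error(x, model) > thresh  # True = anomaly

normal = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9], [1.0, 1.0]]
is_anomaly = detector(normal)
print(is_anomaly([1.0, 1.05]))  # -> False
print(is_anomaly([5.0, 5.0]))   # -> True
```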

A Sliding Window-based Multivariate Stream Data Classification

  • 서성보;강재우;남광우;류근호
    • 한국정보과학회논문지:데이타베이스
    • /
    • Vol. 33, No. 2
    • /
    • pp.163-174
    • /
    • 2006
  • In a distributed sensor network, transmitting and analyzing all large-volume stream data with limited network, power, and processor resources is difficult and undesirable. A data classification scheme is therefore required that classifies continuously arriving data in advance and processes it selectively according to its characteristics. This paper proposes a technique that classifies stream data, collected periodically from multidimensional sensors, in sliding-window units. The proposed technique consists of a preprocessing step and a classification step. The preprocessing step converts each sliding-window input of multivariate stream data into various discrete character-string data sets, using character symbols according to how the data change. The classification step applies standard document classification algorithms to the discrete strings generated for each window. For the experiments, we compared and evaluated supervised learning (Bayesian classifier, SVM) and unsupervised (Jaccard, TFIDF, Jaro, Jaro-Winkler) algorithms. The experimental results showed that SVM and TFIDF performed best, with particularly high accuracy when the correlation between attributes and an n-gram scheme linking adjacent character symbols were considered together.
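The preprocessing step described above, turning each sliding window into a discrete character string, can be sketched as follows. The symbol alphabet (u = up, d = down, s = steady) and the tolerance are illustrative assumptions, not the paper's actual encoding.

```python
# Convert sliding windows of a numeric stream into discrete symbol
# strings that a text classifier could consume.
def symbolize(window, tol=0.05):
    out = []
    for prev, cur in zip(window, window[1:]):
        delta = cur - prev
        out.append("u" if delta > tol else "d" if delta < -tol else "s")
    return "".join(out)

def sliding_windows(stream, size, step=1):
    for i in range(0, len(stream) - size + 1, step):
        yield stream[i:i + size]

stream = [1.0, 1.3, 1.6, 1.4, 1.4, 1.0, 0.8]
strings = [symbolize(w) for w in sliding_windows(stream, 4)]
print(strings)  # -> ['uud', 'uds', 'dsd', 'sdd']
```

The resulting strings can then be fed to standard document classifiers (Bayesian, SVM) or compared with string similarity measures such as Jaccard or Jaro-Winkler, as in the experiments above.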