• Title/Summary/Keyword: Semi-Supervised learning

Search results: 150 (processing time 0.022 seconds)

Performance Comparison of Anomaly Detection Algorithms: in terms of Anomaly Type and Data Properties (이상탐지 알고리즘 성능 비교: 이상치 유형과 데이터 속성 관점에서)

  • Jaeung Kim;Seung Ryul Jeong;Namgyu Kim
    • Journal of Intelligence and Information Systems
    • /
    • v.29 no.3
    • /
    • pp.229-247
    • /
    • 2023
  • With the increasing emphasis on anomaly detection across various fields, diverse anomaly detection algorithms have been developed for various data types and anomaly patterns. However, these algorithms are generally evaluated on publicly available datasets as a whole, and how each algorithm performs on particular types of anomalies remains unexplored, which makes it difficult to select an appropriate algorithm for a specific analytical context. In this paper, we therefore investigate the types of anomalies and various attributes of data, and propose approaches that can assist in selecting an appropriate anomaly detection algorithm based on this understanding. Specifically, this study compares the performance of anomaly detection algorithms on four types of anomalies: local, global, contextual, and clustered. Through further analysis, the impact of label availability, data quantity, and dimensionality on algorithm performance is examined. Experimental results demonstrate that the most effective algorithm varies with the type of anomaly, and that certain algorithms exhibit stable performance even in the absence of anomaly-specific information. Furthermore, for some anomaly types, unsupervised anomaly detection algorithms performed worse than supervised and semi-supervised algorithms. We also found that the performance of most algorithms is more strongly influenced by the anomaly type when data are relatively scarce or abundant. Finally, at higher dimensionality, the algorithms detected local and global anomalies well but performed worse on clustered anomalies.
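The distinction between local and global anomalies above can be made concrete with a toy example. This is a minimal sketch, not the paper's benchmark: a k-nearest-neighbour distance score (a common unsupervised detector; the data and `k` below are invented) reacts far more strongly to a global outlier than to a local one.

```python
# Minimal sketch: k-NN distance as an unsupervised anomaly score.
def knn_score(data, point, k=3):
    """Mean distance to the k nearest neighbours; higher = more anomalous."""
    dists = sorted(abs(point - x) for x in data if x != point)
    return sum(dists[:k]) / k

cluster_a = [1.0, 1.1, 0.9, 1.05]        # tight cluster
cluster_b = [10.0, 10.5, 9.5, 10.2]      # second cluster
global_anomaly = 50.0                    # far from everything
local_anomaly = 2.0                      # near cluster A, but outside it

data = cluster_a + cluster_b + [global_anomaly, local_anomaly]
scores = {p: knn_score(data, p) for p in [global_anomaly, local_anomaly, 1.0]}
# The global anomaly dominates the score; the local one is much subtler,
# which is why detector choice depends on the anomaly type being targeted.
```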

Korean Automated Scoring System for Supply-Type Items using Semi-Supervised Learning (준지도학습 방법을 이용한 한국어 서답형 문항 자동채점 시스템)

  • Cheon, Min-Ah;Seo, Hyeong-Won;Kim, Jae-Hoon;Noh, Eun-Hee;Sung, Kyung-Hee;Lim, EunYoung
    • Annual Conference on Human and Language Technology
    • /
    • 2014.10a
    • /
    • pp.112-116
    • /
    • 2014
  • Supply-type (constructed-response) items are very useful for assessing students' integrated thinking skills, but scoring them is time-consuming and costly, and rater fairness must be ensured. To address these problems, this paper proposes an automated scoring system for supply-type items. The proposed system is divided into a language-processing stage and a scoring stage. First, in the language-processing stage, student answers are analyzed using Korean NLP tools such as morphological analysis. Second, the scoring stage proceeds as follows: 1) Answers whose analysis results from the first stage are identical are treated as one type; the types are sorted by the number of answers they contain, and a human rater manually scores the high-frequency answers. 2) The answers scored so far, together with the model answers, are treated as a training corpus, and feature extraction and feature-weight learning are performed. 3) Based on the model learned in 2), the unscored answers are clustered and classified. 4) Among the classified results, a human rater verifies the high-confidence scored answers, which are then added to the training corpus. 5) This process is repeated until no unscored answers remain. To evaluate the proposed system, supply-type items from the 2013 National Assessment of Educational Achievement in social studies (9th grade) and Korean (11th grade) were used. For each subject, 1,000 student answers were sampled, and scoring time and accuracy were evaluated. Scoring time was reduced by more than about 80% overall, and scoring accuracy was 98.7% and 97.2% for social studies and Korean, respectively. If the system's performance is further improved and its interface refined to help human raters stay focused, we believe it can be fully utilized in large-scale national assessments.

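The iterative scoring procedure in the abstract (seed grading, training, auto-labeling high-confidence answers, human verification, repeat) can be sketched as a self-training loop. Everything here is a hedged illustration: the nearest-centroid "grader", the one-dimensional answer features, and the margin-based confidence are assumptions, not the paper's actual feature model, and the human-verification step is reduced to a comment.

```python
# Self-training sketch of the semi-supervised scoring loop.
def centroid(points):
    return sum(points) / len(points)

# Toy answer features: values near 0 are "wrong" (grade 0), near 10 "correct" (grade 1).
labeled = {0.1: 0, 9.8: 1, 0.3: 0, 10.1: 1}   # seed set graded by a human rater
unlabeled = [0.2, 9.9, 5.0, 0.4, 10.3]

while unlabeled:
    cents = {g: centroid([x for x, gg in labeled.items() if gg == g]) for g in (0, 1)}
    # Confidence = margin between distances to the two grade centroids.
    scored = [(abs(abs(x - cents[0]) - abs(x - cents[1])), x) for x in unlabeled]
    margin, best = max(scored)                 # most confident unscored answer
    grade = 0 if abs(best - cents[0]) < abs(best - cents[1]) else 1
    labeled[best] = grade                      # in the paper, a human verifies this
    unlabeled.remove(best)
```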

Impurity profiling and chemometric analysis of methamphetamine seizures in Korea

  • Shin, Dong Won;Ko, Beom Jun;Cheong, Jae Chul;Lee, Wonho;Kim, Suhkmann;Kim, Jin Young
    • Analytical Science and Technology
    • /
    • v.33 no.2
    • /
    • pp.98-107
    • /
    • 2020
  • Methamphetamine (MA) is currently the most abused illicit drug in Korea. MA is produced by chemical synthesis, and the final target drug that is produced contains small amounts of the precursor chemicals, intermediates, and by-products. To identify and quantify these trace compounds in MA seizures, a practical and feasible approach for conducting chromatographic fingerprinting with a suite of traditional chemometric methods and recently introduced machine learning approaches was examined. This was achieved using gas chromatography (GC) coupled with a flame ionization detector (FID) and mass spectrometry (MS). Following appropriate examination of all the peaks in 71 samples, 166 impurities were selected as the characteristic components. Unsupervised (principal component analysis (PCA), hierarchical cluster analysis (HCA), and K-means clustering) and supervised (partial least squares-discriminant analysis (PLS-DA), orthogonal partial least squares-discriminant analysis (OPLS-DA), support vector machines (SVM), and deep neural network (DNN) with Keras) chemometric techniques were employed for classifying the 71 MA seizures. The results of the PCA, HCA, K-means clustering, PLS-DA, OPLS-DA, SVM, and DNN methods for quality evaluation were in good agreement. However, the tested MA seizures possessed distinct features, such as chirality, cutting agents, and boiling points. The study indicated that the established qualitative and semi-quantitative methods will be practical and useful analytical tools for characterizing trace compounds in illicit MA seizures. Moreover, they will provide a statistical basis for identifying the synthesis route, sources of supply, trafficking routes, and connections between seizures, which will support drug law enforcement agencies in their effort to eliminate organized MA crime.
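The unsupervised step of such chemometric profiling (PCA) can be sketched with plain NumPy. The two synthesis "routes" and their three impurity-peak intensities below are invented for illustration; the paper works with 166 impurities from 71 seizures.

```python
import numpy as np

# PCA via eigendecomposition of the covariance matrix, on hypothetical
# impurity profiles from two invented synthesis routes.
rng = np.random.default_rng(0)
route_a = rng.normal([5.0, 1.0, 0.2], 0.1, size=(5, 3))   # high precursor peak
route_b = rng.normal([0.5, 1.0, 4.0], 0.1, size=(5, 3))   # high by-product peak
X = np.vstack([route_a, route_b])

Xc = X - X.mean(axis=0)                  # centre each impurity channel
cov = Xc.T @ Xc / (len(X) - 1)
vals, vecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
pc1 = Xc @ vecs[:, -1]                   # score on the leading component

# Samples from the same route share the sign of their PC1 score,
# i.e. the seizures cluster by synthesis route along PC1.
```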

A semi-supervised interpretable machine learning framework for sensor fault detection

  • Martakis, Panagiotis;Movsessian, Artur;Reuland, Yves;Pai, Sai G.S.;Quqa, Said;Cava, David Garcia;Tcherniak, Dmitri;Chatzi, Eleni
    • Smart Structures and Systems
    • /
    • v.29 no.1
    • /
    • pp.251-266
    • /
    • 2022
  • Structural Health Monitoring (SHM) of critical infrastructure is a major pillar of maintenance management, shielding public safety and economic sustainability. Although SHM is usually associated with data-driven metrics and thresholds, expert judgement is essential, especially where erroneous predictions can bear casualties or substantial economic loss. Considering that visual inspections are time-consuming and potentially subjective, artificial-intelligence tools may be leveraged to minimize the inspection effort and provide objective outcomes. In this context, timely detection of sensor malfunction is crucial in preventing inaccurate assessment and false alarms. The present work introduces a sensor-fault detection and interpretation framework based on the well-established support-vector machine scheme for anomaly detection, combined with a coalitional game-theory approach. The proposed framework is applied to two datasets, provided with the 1st International Project Competition for Structural Health Monitoring (IPC-SHM 2020), comprising acceleration and cable-load measurements from two real cable-stayed bridges. The results demonstrate good predictive performance and highlight the potential for seamless adaptation of the algorithm to intrinsically different data domains. For the first time, the term "decision trajectories", originating from the field of cognitive science, is introduced and applied in the context of SHM. This provides an intuitive and comprehensive illustration of the impact of individual features, along with an elaboration on the feature dependencies that drive individual model predictions. Overall, the proposed framework provides an easy-to-train, application-agnostic and interpretable anomaly detector, which can be integrated into the preprocessing stage of various SHM and condition-monitoring applications, offering a first screening of sensor health prior to further analysis.
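As a dependency-free stand-in for the paper's one-class anomaly scheme (which uses a support-vector machine), the sketch below flags a faulty sensor with a robust z-score on a toy feature (signal variance); the healthy-sensor statistics and the 3.0 threshold are assumptions, not values from the paper.

```python
# Robust z-score fault flagging: train only on healthy-sensor statistics,
# as in one-class (semi-supervised) anomaly detection.
def robust_z(value, healthy):
    med = sorted(healthy)[len(healthy) // 2]                      # median
    mad = sorted(abs(h - med) for h in healthy)[len(healthy) // 2]  # MAD
    return abs(value - med) / (1.4826 * mad)   # 1.4826 scales MAD to sigma

healthy_var = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98]  # healthy training features
is_fault = lambda v: robust_z(v, healthy_var) > 3.0     # common 3-sigma rule

stuck_sensor_var = 0.01    # a "stuck" channel barely varies -> flagged
healthy_new_var = 1.03     # an unseen healthy channel -> passes
```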

A Method for Region-Specific Anomaly Detection on Patch-wise Segmented PA Chest Radiograph (PA 흉부 X-선 영상 패치 분할에 의한 지역 특수성 이상 탐지 방법)

  • Hyun-bin Kim;Jun-Chul Chun
    • Journal of Internet Computing and Services
    • /
    • v.24 no.1
    • /
    • pp.49-59
    • /
    • 2023
  • The recent pandemic, represented by COVID-19, highlighted problems caused by an unexpected shortage of medical personnel. In this paper, we present a computer-vision method for diagnosing the presence or absence of lesional signs in PA chest X-ray images to support diagnostic tasks. Visual anomaly detection based on feature modeling can also be applied to X-ray images: by extracting feature vectors from PA chest X-ray images and dividing them into patch units, region-specific abnormalities can be detected. As a preliminary experiment, we created a simulated dataset containing multiple objects and present the results of comparative experiments. We propose a method that improves both the efficiency and the performance of the process through hard masking of patch features on aligned images. By aggregating region-specific and global anomaly detection results, the method improves performance by 0.069 AUROC over our previous study.
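The patch-wise, region-specific idea can be sketched as follows: per-patch statistics are learned from "normal" images, and a test image is scored patch by patch. The 4x4 images, 2x2 patches, and injected anomaly are illustrative assumptions, not the paper's feature-modeling pipeline.

```python
import numpy as np

# Region-specific anomaly scoring: each patch position keeps its own
# reference statistics, so the same pixel value can be normal in one
# region and anomalous in another.
def patches(img, p=2):
    h, w = img.shape
    return {(i, j): img[i:i+p, j:j+p]
            for i in range(0, h, p) for j in range(0, w, p)}

rng = np.random.default_rng(1)
normal_imgs = [rng.normal(0.5, 0.05, (4, 4)) for _ in range(20)]

# Reference per patch position: mean and std over the normal images.
ref = {k: (np.mean([patches(im)[k] for im in normal_imgs]),
           np.std([patches(im)[k] for im in normal_imgs]))
       for k in patches(normal_imgs[0])}

test_img = rng.normal(0.5, 0.05, (4, 4))
test_img[0:2, 0:2] += 1.0                    # inject a lesion-like anomaly

scores = {k: abs(p.mean() - ref[k][0]) / ref[k][1]
          for k, p in patches(test_img).items()}
# The anomalous patch position (0, 0) dominates the score map.
```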

Korean Word Sense Disambiguation using Dictionary and Corpus (사전과 말뭉치를 이용한 한국어 단어 중의성 해소)

  • Jeong, Hanjo;Park, Byeonghwa
    • Journal of Intelligence and Information Systems
    • /
    • v.21 no.1
    • /
    • pp.1-13
    • /
    • 2015
  • As opinion mining in big data applications has been highlighted, much research on unstructured data has been conducted. Social media on the Internet generates unstructured or semi-structured data every second, often in the natural languages we use in daily life. Many words in human languages have multiple meanings or senses, which makes it very difficult for computers to extract useful information from these datasets. Traditional web search engines are usually based on keyword search, which can yield incorrect results far from users' intentions. Although much progress has been made over the last years toward providing users with appropriate results, there is still considerable room for improvement. Word sense disambiguation plays a very important role in natural language processing and is considered one of the most difficult problems in this area. Major approaches to word sense disambiguation can be classified as knowledge-based, supervised corpus-based, and unsupervised corpus-based. This paper presents a method that automatically generates a corpus for word sense disambiguation by taking advantage of examples in existing dictionaries, avoiding expensive sense-tagging processes. We evaluate the effectiveness of the method with a Naïve Bayes model, a supervised learning algorithm, using the Korean Standard Unabridged Dictionary and the Sejong Corpus. The Korean Standard Unabridged Dictionary contains approximately 57,000 sentences; the Sejong Corpus contains about 790,000 sentences tagged with both part-of-speech and senses. For the experiments, the two resources were evaluated both combined and separately, using cross validation. Only nouns, the targets of word sense disambiguation, were selected: 93,522 word senses among 265,655 nouns, together with 56,914 sentences from related proverbs and examples, were additionally combined into the corpus. The Sejong Corpus was easily merged with the dictionary because it is tagged with the sense indices defined by the Korean Standard Unabridged Dictionary. Sense vectors were formed after the merged corpus was created. Terms used in creating sense vectors were added to the named-entity dictionary of a Korean morphological analyzer. Using the extended dictionary, term vectors were extracted from the input sentences. Given an extracted term vector and the sense-vector model built during preprocessing, sense-tagged terms were determined by vector-space-model-based word sense disambiguation. The experiments show better precision and recall with the merged corpus, suggesting the method can practically enhance the performance of Internet search engines and support a more accurate understanding of sentence meaning in natural language processing tasks such as search, opinion mining, and text mining. The Naïve Bayes classifier used in this study is a supervised learning algorithm based on Bayes' theorem and assumes that all senses are independent. Although this assumption is unrealistic and ignores correlations between attributes, the classifier is widely used because of its simplicity and its practical effectiveness in applications such as text classification and medical diagnosis. However, further research is needed to consider all possible combinations and/or partial combinations of the senses in a sentence. The effectiveness of word sense disambiguation may also be improved if rhetorical structures or morphological dependencies between words are analyzed through syntactic analysis.
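The supervised core of the method, a Naïve Bayes sense classifier with the independence assumption described above, can be sketched in a few lines. The English stand-in senses for the ambiguous Korean noun 배 (bae: "pear" / "ship") and the toy context corpus are invented; the paper trains on dictionary examples and the Sejong Corpus.

```python
from collections import Counter
import math

# Toy sense-tagged training data: (sense, context words).
train = [
    ("pear", ["sweet", "fruit", "eat"]),
    ("pear", ["tree", "fruit", "ripe"]),
    ("ship", ["sea", "sail", "port"]),
    ("ship", ["cargo", "sea", "dock"]),
]

senses = Counter(s for s, _ in train)
word_counts = {s: Counter() for s in senses}
for s, ctx in train:
    word_counts[s].update(ctx)
vocab = {w for _, ctx in train for w in ctx}

def classify(context):
    """Pick the sense maximizing log P(sense) + sum log P(word | sense)."""
    def logp(sense):
        total = sum(word_counts[sense].values())
        prior = math.log(senses[sense] / len(train))
        # Add-one (Laplace) smoothing over the shared vocabulary.
        return prior + sum(
            math.log((word_counts[sense][w] + 1) / (total + len(vocab)))
            for w in context)
    return max(senses, key=logp)
```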

Abbreviation Disambiguation using Topic Modeling (토픽모델링을 이용한 약어 중의성 해소)

  • Woon-Kyo Lee;Ja-Hee Kim;Junki Yang
    • Journal of the Korea Society for Simulation
    • /
    • v.32 no.1
    • /
    • pp.35-44
    • /
    • 2023
  • Recently, many studies have analyzed trends or research trends using text analysis. When collecting documents by searching for an abbreviation as the keyword, the abbreviation's ambiguity must be resolved. In many studies, documents are classified by hand, reading the data one by one to find those relevant to the study. Most prior work on abbreviation disambiguation clarifies the meaning of individual words using supervised learning; such methods are not well suited to classifying documents retrieved by an abbreviation search, and related studies are scarce. This paper proposes a method for semi-automatically classifying documents collected by abbreviation, by performing topic modeling with Non-negative Matrix Factorization, an unsupervised learning method, in the data preprocessing step. To verify the proposed method, papers were collected from an academic database using the abbreviation 'MSA'. The proposed method found 316 papers related to Micro Services Architecture among 1,401 papers, with a document-classification accuracy of 92.36%. The proposed method is expected to reduce the time and cost researchers spend on manual classification.
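The preprocessing step the paper describes, topic modeling with Non-negative Matrix Factorization to separate documents retrieved for one abbreviation, can be sketched with plain NumPy multiplicative updates (the Lee-Seung rules). The tiny "MSA" term-document matrix (a microservices topic vs. an unrelated one) is invented for illustration.

```python
import numpy as np

# NMF: V ~ W @ H, with W (terms x topics) and H (topics x docs) nonnegative.
#            docs:  d0  d1  d2  d3
V = np.array([[3., 2., 0., 0.],    # "service"
              [2., 3., 0., 0.],    # "container"
              [0., 0., 3., 2.],    # "genome"
              [0., 0., 2., 3.]])   # "alignment"

rng = np.random.default_rng(0)
k = 2                                          # number of topics
W = rng.random((4, k)) + 0.1
H = rng.random((k, 4)) + 0.1
for _ in range(200):                           # Lee-Seung multiplicative updates
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

topic_of_doc = H.argmax(axis=0)                # dominant topic per document
# d0 and d1 share one topic; d2 and d3 share the other, so the two
# document groups behind the same search keyword are separated.
```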

Automatic Training Corpus Generation Method of Named Entity Recognition Using Knowledge-Bases (개체명 인식 코퍼스 생성을 위한 지식베이스 활용 기법)

  • Park, Youngmin;Kim, Yejin;Kang, Sangwoo;Seo, Jungyun
    • Korean Journal of Cognitive Science
    • /
    • v.27 no.1
    • /
    • pp.27-41
    • /
    • 2016
  • Named entity recognition classifies elements in text into predefined categories and is used in various applications that receive natural-language input. In this paper, we propose a method that automatically generates a named-entity training corpus using knowledge bases. We apply two different generation methods depending on the knowledge base: one attaches named-entity labels to text using Wikipedia, while the other crawls text from the web and labels named entities in it using Freebase. We conduct two experiments to evaluate corpus quality and the proposed generation method. We randomly extract sentences from the two corpora, called the Wikipedia corpus and the Web corpus, and label them manually to validate the automatically labeled corpora. We also report the performance of a named-entity recognizer trained on the corpora generated by the proposed method. The results show that the proposed method adapts well to new corpora that reflect diverse sentence structures and the newest entities.

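The knowledge-base labeling idea can be sketched as distant supervision: project entity entries onto raw text to produce BIO-style labels automatically. The three-entry mini knowledge base and single-token matching are illustrative assumptions (the paper uses Wikipedia and Freebase, and real entities span multiple tokens).

```python
# Distant-supervision sketch: auto-label tokens from a mini knowledge base.
kb = {"Seoul": "LOC", "Samsung": "ORG", "Sejong": "PER"}

def auto_label(tokens):
    """Tag each token B-<type> if it is a KB entry, else O (outside)."""
    return [(t, f"B-{kb[t]}" if t in kb else "O") for t in tokens]

sent = "Samsung opened an office in Seoul".split()
labels = auto_label(sent)
# -> [("Samsung", "B-ORG"), ("opened", "O"), ..., ("Seoul", "B-LOC")]
```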

Technology Development for Non-Contact Interface of Multi-Region Classifier based on Context-Aware (상황 인식 기반 다중 영역 분류기 비접촉 인터페이스기술 개발)

  • Jin, Songguo;Rhee, Phill-Kyu
    • The Journal of the Institute of Internet, Broadcasting and Communication
    • /
    • v.20 no.6
    • /
    • pp.175-182
    • /
    • 2020
  • Non-contact eye tracking is a nonintrusive human-computer interface that provides hands-free communication for people with severe disabilities. Recently, it has been expected to play an important role in non-contact systems due to the coronavirus (COVID-19). This paper proposes a novel approach to an eye mouse using an eye-tracking method based on a context-aware AdaBoost multi-region classifier and an ASSL algorithm. The conventional AdaBoost algorithm cannot provide sufficiently reliable face-tracking performance for eye-cursor pointing estimation, because it cannot take advantage of the spatial context relations among facial features. We therefore propose an eye-region-context-based AdaBoost multiple classifier for efficient non-contact gaze tracking and mouse implementation. The proposed method detects, tracks, and aggregates various eye features to estimate the gaze, and adjusts active and semi-supervised learning based on the on-screen cursor. The proposed system has been successfully employed in eye localization and can also be used to detect and track eye features. It moves the computer cursor along the user's gaze, with post-processing that applies Gaussian modeling and a Kalman filter to prevent shaking during real-time tracking. In this system, target objects were randomly generated, and the eye-tracking performance was analyzed in real time according to Fitts' law. Wider utilization of such non-contact interfaces is expected.
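The jitter-suppression step mentioned above can be illustrated with a minimal one-dimensional, constant-position Kalman filter over noisy gaze coordinates; the process and measurement noise values (`q`, `r`) and the coordinates are chosen for illustration, not taken from the paper.

```python
# 1-D Kalman filter with a constant-position model: smooths cursor jitter.
def kalman_smooth(measurements, q=1e-3, r=0.5):
    x, p = measurements[0], 1.0          # initial state and covariance
    out = []
    for z in measurements:
        p += q                           # predict: position assumed constant
        k = p / (p + r)                  # Kalman gain
        x += k * (z - x)                 # update with the new measurement
        p *= (1 - k)
        out.append(x)
    return out

noisy_gaze = [100, 103, 97, 101, 99, 102, 98, 100]   # jittery x-coordinates
smooth = kalman_smooth(noisy_gaze)
# The smoothed trajectory spans a much narrower range than the raw one.
```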

Road Extraction from Images Using Semantic Segmentation Algorithm (영상 기반 Semantic Segmentation 알고리즘을 이용한 도로 추출)

  • Oh, Haeng Yeol;Jeon, Seung Bae;Kim, Geon;Jeong, Myeong-Hun
    • Journal of the Korean Society of Surveying, Geodesy, Photogrammetry and Cartography
    • /
    • v.40 no.3
    • /
    • pp.239-247
    • /
    • 2022
  • Cities are becoming more complex due to rapid industrialization and population growth. In particular, urban areas are changing rapidly due to housing-site development, reconstruction, and demolition, so accurate road information is necessary for various purposes, such as high-definition maps for autonomous driving. In the Republic of Korea, accurate spatial information can be generated through the existing map-production process, but covering a large area is limited by time and cost. Roads, one of the core map elements, are an essential means of transportation, so it is important to update road information accurately and quickly. This study uses semantic segmentation algorithms such as LinkNet, D-LinkNet, and NL-LinkNet to extract roads from drone images, and then applies hyperparameter optimization to the best-performing model. As a result, the LinkNet model using a pre-trained ResNet-34 encoder achieved 85.125 mIoU. Subsequent studies should compare these results with those of state-of-the-art object-detection algorithms or semi-supervised semantic segmentation techniques. The results of this study can be applied to speed up the existing map-update process.
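The reported metric, mean IoU, is worth making concrete. This sketch computes mIoU for a toy binary road mask (background and road classes); the 4x4 prediction and ground truth are invented.

```python
import numpy as np

# Mean intersection-over-union across classes for a segmentation mask.
def miou(pred, gt, classes=(0, 1)):
    ious = []
    for c in classes:
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        ious.append(inter / union)
    return sum(ious) / len(ious)

gt = np.array([[1, 1, 0, 0],      # 1 = road, 0 = background
               [1, 1, 0, 0],
               [0, 0, 0, 0],
               [0, 0, 0, 0]])
pred = np.array([[1, 1, 0, 0],    # one road pixel missed at (1, 1)
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
score = miou(pred, gt)            # (road IoU 3/4 + background IoU 12/13) / 2
```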