• Title/Summary/Keyword: Text data

Search Result 2,953

Text-independent Speaker Identification by Bagging VQ Classifier

  • Kyung, Youn-Jeong; Park, Bong-Dae; Lee, Hwang-Soo
    • The Journal of the Acoustical Society of Korea / v.20 no.2E / pp.17-24 / 2001
  • In this paper, we propose a bootstrap aggregating (bagging) vector quantization (VQ) classifier to improve the performance of text-independent speaker recognition systems. This method generates multiple training data sets by resampling the original training data set, constructs a VQ classifier from each, and then integrates the multiple VQ classifiers into a single classifier by voting. Bagging has been proven to greatly improve the performance of unstable classifiers. Through two different experiments, this paper shows that the VQ classifier is unstable. In the first experiment, the bias and variance of a VQ classifier are computed on a waveform database, and the variance is compared with that of the classification and regression tree (CART) classifier [1]; the variance of the VQ classifier is shown to be as large as that of the CART classifier. The second experiment involves speaker recognition: the recognition rates vary significantly with minor changes in the training data set. Closed-set, text-independent speaker identification experiments are performed on the TIMIT database to compare the bagging VQ classifier with the conventional VQ classifier. The bagging VQ classifier yields improved performance over the conventional VQ classifier, and it also outperforms the conventional classifier on small training data sets.
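
The resample-train-vote pipeline the abstract describes can be sketched in a few lines. The paper gives no implementation, so the codebook size, the number of bootstrap replicates, and the tiny k-means used to build each codebook below are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def train_codebook(frames, k=4, iters=10, rng=None):
    # Tiny k-means; the VQ codebook is the final set of centroids.
    if rng is None:
        rng = np.random.default_rng(0)
    centroids = frames[rng.choice(len(frames), k, replace=False)].astype(float)
    for _ in range(iters):
        dist = np.linalg.norm(frames[:, None] - centroids[None], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = frames[labels == j].mean(axis=0)
    return centroids

def distortion(frames, codebook):
    # Average distance from each frame to its nearest codeword.
    dist = np.linalg.norm(frames[:, None] - codebook[None], axis=2)
    return dist.min(axis=1).mean()

def bagging_vq_identify(train_sets, test_frames, n_bags=5, k=4, seed=0):
    # train_sets maps speaker id -> (n_frames, dim) feature array.
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_bags):
        codebooks = {}
        for spk, frames in train_sets.items():
            idx = rng.integers(0, len(frames), size=len(frames))  # bootstrap resample
            codebooks[spk] = train_codebook(frames[idx], k=k, rng=rng)
        # each bootstrap replicate votes for the minimum-distortion speaker
        votes.append(min(codebooks, key=lambda s: distortion(test_frames, codebooks[s])))
    return max(set(votes), key=votes.count)  # majority vote
```

In a real system the frames would be cepstral feature vectors rather than the raw arrays shown here.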


User Authentication Based on Keystroke Dynamics of Free Text and One-Class Classifiers (자유로운 문자열의 키스트로크 다이나믹스와 일범주 분류기를 활용한 사용자 인증)

  • Seo, Dongmin; Kang, Pilsung
    • Journal of Korean Institute of Industrial Engineers / v.42 no.4 / pp.280-289 / 2016
  • User authentication is an important issue in computer network systems. Most current systems use an ID-password string match as the primary authentication method. However, in password-based authentication, whoever acquires a valid user's password can access the system without restriction. In this paper, we present keystroke dynamics-based user authentication to overcome the limitations of password-based authentication. Whereas most previous studies employed fixed-length text as input, we aim to enhance authentication performance by combining four different variable-creation methods applied to variable-length free text. Four one-class classifiers are employed as authentication algorithms. We verify the proposed approach through an experiment on actual keystroke data collected from 100 participants, who provided more than 17,000 keystrokes in both Korean and English. The experimental results show that the proposed method significantly improves authentication performance compared to existing approaches.
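
The abstract names four one-class classifiers but not which ones; as a minimal stand-in, the sketch below derives simple timing variables from free-text keystrokes and applies a generic z-score-based one-class detector. The feature set and threshold are assumptions for illustration:

```python
import numpy as np

def timing_features(events):
    # events: list of (key, press_ms, release_ms) from free-text typing.
    holds = [r - p for _, p, r in events]
    flights = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return np.array([np.mean(holds), np.std(holds), np.mean(flights), np.std(flights)])

class OneClassDetector:
    # Accepts a sample when its mean z-score distance from the genuine
    # user's enrollment profile stays below a threshold.
    def fit(self, X, threshold=3.0):
        self.mu = X.mean(axis=0)
        self.sigma = X.std(axis=0) + 1e-8
        self.threshold = threshold
        return self

    def score(self, x):
        return float(np.abs((x - self.mu) / self.sigma).mean())

    def is_genuine(self, x):
        return self.score(x) < self.threshold
```

Enrollment fits the detector on the genuine user's feature vectors only; impostor samples are never seen during training, which is the defining property of the one-class setting.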

The Frequency Analysis of Teacher's Emotional Response in Mathematics Class (수학 담화에서 나타나는 교사의 감성적 언어 빈도 분석)

  • Son, Bok Eun; Ko, Ho Kyoung
    • Communications of Mathematical Education / v.32 no.4 / pp.555-573 / 2018
  • The purpose of this study is to identify the emotional language of mathematics teachers in class using text-mining techniques. For this purpose, we collected teachers' classroom discourse from videos of exemplary classes. The analysis of the extracted unstructured data proceeded in three stages: data collection, data preprocessing, and text-mining analysis. According to the text-mining analysis, there was little emotional language in teachers' utterances during mathematics class. From this result we can infer the characteristics of mathematics classes in the affective domain.
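
The core operation here, counting lexicon hits in transcripts, is simple to sketch. The study analyzed Korean discourse against a Korean emotion-word list; the English lexicon below is purely an illustrative stand-in:

```python
from collections import Counter

# Illustrative stand-in for an emotion lexicon (the study used Korean terms).
EMOTION_LEXICON = {"great", "wonderful", "proud", "sorry", "love", "excited"}

def emotional_frequency(transcript):
    # Returns per-word emotional hit counts and the overall emotional ratio.
    tokens = [t.strip(".,!?").lower() for t in transcript.split()]
    counts = Counter(t for t in tokens if t)
    hits = {w: c for w, c in counts.items() if w in EMOTION_LEXICON}
    total = sum(counts.values())
    return hits, (sum(hits.values()) / total if total else 0.0)
```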

A Study on the Characteristics of Amekaji Fashion Trends Using Big Data Text Mining Analysis (빅데이터 텍스트 마이닝 분석을 활용한 아메카지 패션 트렌드 특징 고찰)

  • Kim, Gihyung
    • Journal of Fashion Business / v.26 no.3 / pp.138-154 / 2022
  • The purpose of this study is to identify the characteristics of domestic American casual (Amekaji) fashion trends using big data text-mining analysis. From Naver and Daum, 108,524 posts related to American casual fashion over the past five years were collected, yielding 2,038,999 extracted keywords; the data were refined with the Textom program, and frequency analysis, word cloud, N-gram, centrality analysis, and CONCOR analysis were performed. In the frequency analysis, 'vintage', 'style', 'daily look', 'coordination', 'workwear', and 'men's wear' appeared as the main keywords. The representative brands were mainly Japanese, followed by American, Korean, and others. The CONCOR analysis derived four clusters: 'general American casual trend', 'vintage taste', 'direct sales mania', and 'American styling'. The results show that Japanese American casual clothing is influenced by American casual clothing, and that reinterpreted American casual fashion in Korea is completed with various coordinations and creative styles such as workwear, street, military, and classic, centered on items and brands. Looks were worn and shared on social networks, confirming the existence of an active consumer group and market potential for obtaining genuine products, ranging from second-hand transactions in limited-edition vintage to individual transactions. The significance of this study is that it academically presents the characteristics of American casual fashion trends based on online text data actually used by the public, through whom the trend has spread.
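
Of the listed techniques, N-gram extraction is the most mechanical; a minimal sketch of bigram counting over posts (Textom performs this internally, the function names here are hypothetical):

```python
from collections import Counter

def ngrams(tokens, n=2):
    # Adjacent keyword tuples; N-grams reveal which terms co-occur in posts.
    return list(zip(*(tokens[i:] for i in range(n))))

def top_ngrams(posts, n=2, k=3):
    counts = Counter()
    for post in posts:
        counts.update(ngrams(post.lower().split(), n))
    return counts.most_common(k)
```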

A Study on the Use of Stopword Corpus for Cleansing Unstructured Text Data (비정형 텍스트 데이터 정제를 위한 불용어 코퍼스의 활용에 관한 연구)

  • Lee, Won-Jo
    • The Journal of the Convergence on Culture Technology / v.8 no.6 / pp.891-897 / 2022
  • In big data analysis, raw text data mostly exists in various unstructured forms, so it becomes analyzable structured data only after heuristic pre-processing and computerized post-processing cleansing. Therefore, in this study, unnecessary elements are removed through pre-processing of the collected raw data in order to apply the wordcloud function of the R program, one of the text data analysis techniques, and stopwords are removed in the post-processing stage. A case study of wordcloud analysis was then conducted, which calculates the frequency of word occurrences and presents high-frequency words as key issues. To improve on the problems of the existing 'nested stopword source code' method of stopword processing with R's word cloud technique, we propose the use of a 'general stopword corpus' and a 'user-defined stopword corpus' and conduct a case analysis. The advantages and disadvantages of the proposed 'unstructured data cleansing process model' are comparatively verified and presented, and the practical application of word cloud visualization analysis using the proposed external-corpus cleansing technique is demonstrated.
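
The study works in R; the two-corpus idea translates directly, though, and a language-neutral sketch of it (a shared general stopword corpus combined with a user-defined corpus for the analysis at hand) looks like this in Python:

```python
from collections import Counter

def word_frequencies(text, general_stopwords, user_stopwords=frozenset()):
    # Two-corpus cleansing: drop words found in either the general
    # stopword corpus or the user-defined, analysis-specific corpus.
    drop = set(general_stopwords) | set(user_stopwords)
    tokens = [t.strip(".,!?:;\"'()").lower() for t in text.split()]
    return Counter(t for t in tokens if t and t not in drop)
```

The resulting frequency table is exactly what a word cloud renderer consumes, with high-count words drawn largest.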

Research on the Financial Data Fraud Detection of Chinese Listed Enterprises by Integrating Audit Opinions

  • Leiruo Zhou; Yunlong Duan; Wei Wei
    • KSII Transactions on Internet and Information Systems (TIIS) / v.17 no.12 / pp.3218-3241 / 2023
  • Financial fraud undermines the sustainable development of financial markets. Financial statements are a key source of information on the operating conditions of listed companies. Current research focuses more on mining numerical financial data than on text data. However, text data can reveal emotional information, which is an important basis for detecting financial fraud; in particular, the audit opinion attached to a financial statement is a certified public accountant's fair assessment of the quality of an enterprise's financial reports. Therefore, this research used the financial data of 4,153 listed companies' annual reports and the text of their audit opinions over the past six years, and proposes a financial fraud detection model that integrates audit opinions. First, a financial data index database and an audit opinion text database were built. Second, the audit opinion text was digitized with the deep-learning BERT model. Finally, both the extracted audit text features and the financial numerical indicators were used as training data for a LightGBM model. Notably, the imbalanced distribution of sample labels is also one of the focuses of financial fraud research; to address it, data augmentation and the Focal Loss learning function were used in data processing and model training, respectively. The experimental results show that, compared with conventional financial fraud detection models, the performance of the proposed model improves greatly, with Area Under the Curve (AUC) and Accuracy reaching 81.42% and 78.15%, respectively.
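
Focal Loss is the standard remedy the abstract mentions for label imbalance; its binary form is FL = -α_t (1 - p_t)^γ log(p_t). A minimal NumPy sketch (the γ and α defaults below are the common literature values, not necessarily the paper's):

```python
import numpy as np

def focal_loss(y_true, p_pred, gamma=2.0, alpha=0.25):
    # Binary focal loss. The (1 - p_t)^gamma factor down-weights easy,
    # well-classified examples so training concentrates on the rare
    # fraud class; alpha_t rebalances the two classes.
    p_t = np.where(y_true == 1, p_pred, 1.0 - p_pred)
    alpha_t = np.where(y_true == 1, alpha, 1.0 - alpha)
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma
                         * np.log(np.clip(p_t, 1e-9, 1.0))))
```

With γ = 0 and α = 0.5 this reduces to (half of) ordinary cross-entropy, which makes the "down-weight the easy examples" effect of γ > 0 easy to verify numerically.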

Financial Footnote Analysis for Financial Ratio Predictions based on Text-Mining Techniques (재무제표 주석의 텍스트 분석 통한 재무 비율 예측 향상 연구)

  • Choe, Hyoung-Gyu; Lee, Sang-Yong Tom
    • Knowledge Management Research / v.21 no.2 / pp.177-196 / 2020
  • Since the adoption of K-IFRS (Korean International Financial Reporting Standards), the volume of financial footnotes has increased. However, due to stereotyped phrasing and a lack of conciseness, deriving core information from footnotes remains difficult. To address this problem, this study applied text-mining techniques to financial footnotes for financial ratio prediction. Using financial statement data from 2013 to 2018, we tried to predict the earnings per share (EPS) of the following quarter. We found that measured prediction errors were significantly reduced when text-mined footnote data were used jointly. We believe this result arises because discretionary financial figures, which are hard to predict from quantitative financial data alone, are more correlated with footnote texts.
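
The joint-use idea, concatenating text-mined footnote features with quantitative ratios before fitting a predictor, can be sketched with ordinary least squares as a stand-in for whatever model the authors fit (which the abstract does not specify):

```python
import numpy as np

def fit_joint(numeric, text_feats, target):
    # Concatenate quantitative ratios with text-mined footnote features
    # (e.g. bag-of-words counts) and fit OLS as a stand-in predictor.
    X = np.hstack([numeric, text_feats, np.ones((len(target), 1))])  # bias term
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return coef

def predict_joint(coef, numeric, text_feats):
    X = np.hstack([numeric, text_feats, np.ones((len(numeric), 1))])
    return X @ coef
```

Comparing this model's error against one fitted on `numeric` alone reproduces, in miniature, the paper's test of whether footnote text carries incremental predictive information.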

Text-Independent Speaker Verification Using Variational Gaussian Mixture Model

  • Moattar, Mohammad Hossein; Homayounpour, Mohammad Mehdi
    • ETRI Journal / v.33 no.6 / pp.914-923 / 2011
  • This paper concerns robust and reliable speaker model training for text-independent speaker verification. The baseline speaker modeling approach is the Gaussian mixture model (GMM). In text-independent speaker verification, the amount of speech data may differ across speakers, yet we still want the modeling approach to perform equally well for all of them; moreover, the modeling technique should be as robust as possible to unseen data. The traditional approach to GMM training is the expectation-maximization (EM) method, which is known for its overfitting problem and its weakness in handling insufficient training data. To tackle these problems, variational approximation is proposed; variational approaches are known to be robust against overtraining and data insufficiency. We evaluated the proposed approach on two databases, KING and TFarsdat. The experiments show that the proposed approach improves performance on the TFarsdat and KING databases by 0.56% and 4.81%, respectively. They also show that the variationally optimized GMM is more robust against noise: the verification error rate in noisy environments decreases by 1.52% on the TFarsdat dataset.
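
Whatever training method produces the GMM, verification itself scores a test utterance by a log-likelihood ratio between the claimed speaker's model and a background model. The sketch below shows that decision rule for diagonal-covariance GMMs; the threshold and model parameters are illustrative:

```python
import numpy as np

def gmm_loglik(X, weights, means, variances):
    # Utterance log-likelihood under a diagonal-covariance GMM,
    # computed with a log-sum-exp over mixture components.
    comp = np.stack([
        np.log(w) - 0.5 * np.sum(np.log(2 * np.pi * v) + (X - m) ** 2 / v, axis=1)
        for w, m, v in zip(weights, means, variances)
    ])                                   # shape (n_components, n_frames)
    top = comp.max(axis=0)
    return float(np.sum(top + np.log(np.exp(comp - top).sum(axis=0))))

def verify(X, speaker_gmm, background_gmm, threshold=0.0):
    # Accept the claimed identity when the average per-frame
    # log-likelihood ratio exceeds the threshold.
    ratio = (gmm_loglik(X, *speaker_gmm) - gmm_loglik(X, *background_gmm)) / len(X)
    return ratio > threshold
```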

Analysis of 'Better Class' Characteristics and Patterns from College Lecture Evaluation by Longitudinal Big Data

  • Nam, Min-Woo; Cho, Eun-Soon
    • International Journal of Contents / v.15 no.3 / pp.7-12 / 2019
  • The purpose of this study was to analyze the characteristics and patterns of 'better classes' by applying longitudinal text-mining big data analysis to subjective lecture evaluation comments. This study classified the top 30% of classes and deduced their characteristics and patterns from two five-year sets of subjective text data spanning 10 years. A total of 47,177 courses from spring semester 2005 to fall semester 2014 were analyzed at a university in a metropolitan city in the central area of South Korea. For 2005-2009, this study extracted meaningful words such as good, course, professor, appreciation, lecture, interesting, useful, know, easy, improvement, progress, teaching material, passion, and concern, in order of frequency. For 2010-2014, the words were class, appreciation, professor, good, course, interesting, understanding, useful, help, student, effort, thinking, not difficult, explanation, lecture, hard, pleasant, easy, study, examination, like, various, fun, and knowledge. This study suggests that the characteristics and patterns of 'better classes' at college should be analyzed separately by academic discipline, such as liberal arts, fine arts, social science, engineering, and math and science.
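
The longitudinal comparison amounts to extracting top-frequency keywords per period and diffing the sets; a minimal sketch (tokenization and k are assumptions, the study's actual pipeline handled Korean morphology):

```python
from collections import Counter

def top_keywords(comments, k=10):
    tokens = [w.strip(".,!?").lower() for c in comments for w in c.split()]
    return [w for w, _ in Counter(t for t in tokens if t).most_common(k)]

def keyword_shift(earlier, later, k=10):
    # Longitudinal view: which top-k evaluation keywords newly appeared
    # in the later period, and which dropped out of the earlier one.
    a, b = set(top_keywords(earlier, k)), set(top_keywords(later, k))
    return sorted(b - a), sorted(a - b)
```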

A Study on De-Identification Methods to Create a Basis for Safety Report Text Mining Analysis (항공안전 보고 데이터 텍스트 분석 기반 조성을 위한 비식별 처리 기술 적용 연구)

  • Hwang, Do-bin; Kim, Young-gon; Sim, Yeong-min
    • Journal of the Korean Society for Aviation and Aeronautics / v.29 no.4 / pp.160-165 / 2021
  • In order to identify and analyze potential aviation safety hazards, analysis of aviation safety report data must come first. Therefore, in consideration of the provisions of the Aviation Safety Act and the recommendations of ICAO Doc 9859 SMM 4th Edition, this study identifies the scope of de-identification targets, such as personal information in the reporting data and sensitive information about the reporter, and suggests a method for applying de-identification technology to personal and sensitive information, including unstructured text data.
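
The abstract names the goal but not the mechanism; one common de-identification approach for unstructured report text is rule-based masking. The patterns below (email, Korean-style phone number, an assumed two-letter-plus-digits flight-number format) are illustrative only, not the scope defined by the Act or ICAO Doc 9859:

```python
import re

# Illustrative patterns; a production pipeline would implement the full
# de-identification scope required by the applicable regulations.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{2,3}-\d{3,4}-\d{4}\b"),
    "FLIGHT": re.compile(r"\b[A-Z]{2}\d{3,4}\b"),  # assumed airline-code format
}

def de_identify(text):
    # Replace each match with a category placeholder, preserving the rest
    # of the free-text narrative for later text mining.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```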