• Title/Summary/Keyword: Question Classification


Machine Learning Algorithm Accuracy for Code-Switching Analytics in Detecting Mood

  • Latib, Latifah Abd;Subramaniam, Hema;Ramli, Siti Khadijah;Ali, Affezah;Yulia, Astri;Shahdan, Tengku Shahrom Tengku;Zulkefly, Nor Sheereen
    • International Journal of Computer Science & Network Security
    • /
    • v.22 no.9
    • /
    • pp.334-342
    • /
    • 2022
  • Nowadays, as is evident on social media, most users use more than one language in their online postings. Social media analytics therefore needs to be reframed as code-switching analytics rather than traditional single-language analytics. This paper presents evidence comparing the accuracy of code-switching analytics techniques in analysing the mood state of social media users. We conducted a systematic literature review (SLR) of social media analytics studies that examined the effectiveness of code-switching analytics techniques, guided by one primary question and three sub-questions. The study investigates the computational models used to detect and measure emotional well-being, focusing primarily on online posting text: extended text analysis, analysing and predicting from past data, and classifying mood from the analysis. We used thirty-two (32) papers for our evidence synthesis and identified four main task classifications that can potentially be used in code-switching analytics: determining the analytics algorithm, the classification technique, the mood classes, and the analytics flow. Results showed that CNN-BiLSTM was the machine learning algorithm with the greatest effect on code-switching analytics accuracy, at 83.21%. In addition, analytics accuracy when using a code-mixing emotion corpus improved by about 20% compared with performing the analysis in a single language. Our meta-analysis showed that the code-mixing emotion corpus was effective in improving mood analytics accuracy. The SLR points to two apparent gaps in the field: i) a lack of studies that focus on Malay-English code-mixing analytics, and ii) a lack of studies investigating various mood classes via the code-mixing approach.
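
The CNN-BiLSTM architecture singled out in the results can be made concrete with a short sketch. This is a minimal Keras illustration assuming an integer-encoded, code-mixed vocabulary and an illustrative four-class mood label set; it is not the implementation from any of the reviewed papers.

```python
# Minimal sketch of a CNN-BiLSTM mood classifier for code-mixed posts.
# Vocabulary size, sequence length, and the mood label count are assumptions.
import tensorflow as tf

VOCAB_SIZE = 20_000   # assumed size of the mixed-language vocabulary
MAX_LEN = 100         # assumed maximum tokens per post
NUM_MOODS = 4         # e.g. happy / sad / angry / neutral (illustrative)

def build_cnn_bilstm() -> tf.keras.Model:
    inputs = tf.keras.Input(shape=(MAX_LEN,), dtype="int32")
    x = tf.keras.layers.Embedding(VOCAB_SIZE, 128)(inputs)                    # shared embedding over code-mixed tokens
    x = tf.keras.layers.Conv1D(64, 5, activation="relu", padding="same")(x)   # local n-gram features
    x = tf.keras.layers.MaxPooling1D(2)(x)
    x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64))(x)            # long-range context in both directions
    x = tf.keras.layers.Dropout(0.5)(x)
    outputs = tf.keras.layers.Dense(NUM_MOODS, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_bilstm()
model.summary()
```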

A Study on the Construction of Financial-Specific Language Model Applicable to the Financial Institutions (금융권에 적용 가능한 금융특화언어모델 구축방안에 관한 연구)

  • Jae Kwon Bae
    • Journal of Korea Society of Industrial Information Systems
    • /
    • v.29 no.3
    • /
    • pp.79-87
    • /
    • 2024
  • Recently, the importance of pre-trained language models (PLMs) has been emphasized for natural language processing (NLP) tasks such as text classification, sentiment analysis, and question answering. Korean PLMs show high performance on NLP in general-purpose domains but are weak in domains such as finance, medicine, and law. The main goal of this study is to propose a learning process and method for building a financial-specific language model that performs well not only in the financial domain but also in general-purpose domains. The five steps for building the financial-specific language model are (1) financial data collection and preprocessing, (2) selection of a model architecture such as a PLM or foundation model, (3) domain data learning and instruction tuning, (4) model verification and evaluation, and (5) model deployment and utilization. Through these steps, the study presents a method for constructing pre-training data that exploits the characteristics of the financial domain, together with efficient LLM training methods: adaptive learning and instruction tuning.
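
Step (3), domain data learning, is the core of such a pipeline. The following is a minimal sketch of continued masked-language-model pretraining on financial text using the HuggingFace transformers stack; the klue/bert-base checkpoint and the two sample sentences are illustrative placeholders, not materials from the paper.

```python
# Minimal sketch of continued MLM pretraining of a general Korean PLM on financial text.
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

checkpoint = "klue/bert-base"                      # assumed general-purpose Korean PLM
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

corpus = Dataset.from_dict({"text": [
    "당사의 분기 영업이익은 전년 동기 대비 12% 증가하였다.",   # placeholder financial sentences
    "금리 인상으로 채권 평가손실이 확대될 수 있다.",
]})
tokenized = corpus.map(lambda b: tokenizer(b["text"], truncation=True, max_length=128),
                       batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="fin-plm", num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()   # adapts the general PLM toward the financial domain
```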

Prediction of Correct Answer Rate and Identification of Significant Factors for CSAT English Test Based on Data Mining Techniques (데이터마이닝 기법을 활용한 대학수학능력시험 영어영역 정답률 예측 및 주요 요인 분석)

  • Park, Hee Jin;Jang, Kyoung Ye;Lee, Youn Ho;Kim, Woo Je;Kang, Pil Sung
    • KIPS Transactions on Software and Data Engineering
    • /
    • v.4 no.11
    • /
    • pp.509-520
    • /
    • 2015
  • The College Scholastic Ability Test (CSAT) is the primary test for evaluating the academic achievement of high-school students and is used by most universities in South Korea for admission decisions. Because its level of difficulty is a significant issue for both students and universities, the government makes a great effort to keep the difficulty level consistent every year. However, the actual levels of difficulty have fluctuated significantly, which causes many problems with university admissions. In this paper, unlike traditional methods that depend on experts' judgments, we build two types of data-driven prediction models that predict the correct answer rate and identify significant factors for the CSAT English test from accumulated test data. Initially, we derive candidate question-specific factors that can influence the correct answer rate, such as position, EBS relation, and readability, from ten years of annual CSAT practice tests and CSATs. In addition, we derive context-specific factors by employing topic modeling, which identifies the underlying topics in the text. Then, the correct answer rate is predicted by multiple linear regression, and the level of difficulty is predicted by a classification tree. The experimental results show that 90% accuracy can be achieved by the difficulty (difficult/easy) classification model, while the error rate for the correct answer rate is below 16%. Points and problem category are found to be critical for predicting the correct answer rate. In addition, the correct answer rate is also influenced by some of the topics discovered by topic modeling. Based on our study, it will be possible to predict the range of the expected correct answer rate at both the question level and the whole-test level, which will help CSAT examiners control the level of difficulty.
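
The two model types described above can be sketched with scikit-learn. This is a minimal illustration, not the authors' code: the five feature columns stand in for the question-specific factors (points, position, EBS relation, readability, topic weights) and the data are randomly generated.

```python
# Minimal sketch: linear regression for the correct answer rate,
# classification tree for the difficult/easy label.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 5))                        # placeholder factor matrix: 200 questions x 5 factors
answer_rate = rng.uniform(0.2, 0.95, 200)       # placeholder correct answer rates
is_difficult = (answer_rate < 0.5).astype(int)  # placeholder difficult/easy labels

reg = LinearRegression().fit(X, answer_rate)             # predicts the continuous correct answer rate
clf = DecisionTreeClassifier(max_depth=4).fit(X, is_difficult)  # predicts the difficulty class

new_question = rng.random((1, 5))
print("predicted answer rate:", reg.predict(new_question)[0])
print("predicted difficulty (1=difficult):", clf.predict(new_question)[0])
```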

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems
    • /
    • v.25 no.1
    • /
    • pp.43-61
    • /
    • 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. Thanks to recent interest in the technology and research on various algorithms, the field of artificial intelligence has achieved more technological advances than ever. The knowledge-based system is a sub-domain of artificial intelligence that aims to enable AI agents to make decisions using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and recently it has been used together with statistical artificial intelligence such as machine learning. More recently, the purpose of a knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various fields of artificial intelligence, for example the question answering systems of smart speakers. However, building a useful knowledge base is a time-consuming task that still requires a great deal of expert effort. In recent years, much research on knowledge-based artificial intelligence has used DBpedia, one of the largest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present user-created summaries of some unifying aspect of an article. This knowledge is created by mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can achieve high reliability in terms of knowledge accuracy, because it generates knowledge from semi-structured infobox data created by users. However, since only about 50% of all pages in the Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: document classification into ontology classes, classification of appropriate sentences for triple extraction, and value selection and transformation into RDF triples. The structure of Wikipedia infoboxes is defined by infobox templates, which provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document into infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that class. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples.
To train the models, we generated a training data set from the Wikipedia dump by adding BIO tags to sentences, and trained about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we conducted comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process. Through the proposed process, structured knowledge can be obtained by extracting knowledge from text documents according to the ontology schema. In addition, this methodology can significantly reduce the expert effort required to construct instances according to the ontology schema.
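
The BIO-tagging step for generating training data can be illustrated with a toy sketch: tokens of a Wikipedia sentence are labeled B-/I-&lt;relation&gt; when they match an infobox value and O otherwise. The tokenization, matching, and example below are deliberately simplified assumptions, not the authors' code.

```python
# Toy sketch of generating BIO tags from infobox values matched in a sentence.
def bio_tag(tokens, infobox):
    """infobox maps a DBpedia-style relation name to its value tokens."""
    tags = ["O"] * len(tokens)
    for relation, value_tokens in infobox.items():
        n = len(value_tokens)
        for i in range(len(tokens) - n + 1):
            if tokens[i:i + n] == value_tokens:
                tags[i] = f"B-{relation}"
                for j in range(i + 1, i + n):
                    tags[j] = f"I-{relation}"
    return tags

# Illustrative example (not from the paper's data)
sentence = "Seoul is the capital city of South Korea".split()
infobox = {"capitalOf": ["South", "Korea"]}
print(list(zip(sentence, bio_tag(sentence, infobox))))
# ...('South', 'B-capitalOf'), ('Korea', 'I-capitalOf')
```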

A Study on the Effect of Network Centralities on Recommendation Performance (네트워크 중심성 척도가 추천 성능에 미치는 영향에 대한 연구)

  • Lee, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.27 no.1
    • /
    • pp.23-46
    • /
    • 2021
  • Collaborative filtering, which is often used in personalized recommendation, is recognized as a very useful technique for finding similar customers and recommending products to them based on their purchase history. However, the traditional collaborative filtering technique has difficulty calculating similarity for new customers or products, because it calculates similarities from direct connections and common features among customers. For this reason, hybrid techniques that also use content-based filtering were designed. In parallel, efforts have been made to solve these problems by applying the structural characteristics of social networks. One such approach calculates similarities indirectly through the similar customers placed between two customers: a customer network is built from purchase data, and the similarity between two customers is calculated from the features of the network that indirectly connects them. Such similarity can be used as a measure to predict whether the target customer will accept a recommendation, and the centrality metrics of the network can be utilized to calculate it. Different centrality metrics have important implications in that they may affect recommendation performance differently. Furthermore, this study considers that the effect of these centrality metrics on recommendation performance may vary depending on the recommender algorithm. In addition, recommendation techniques using network analysis can be expected to increase recommendation performance not only for new customers or products but also across all customers and products. By considering a customer's purchase of an item as a link generated between the customer and the item on the network, predicting user acceptance of a recommendation becomes a prediction of whether a new link will be created between them. As classification models fit the binary problem of whether a link is created or not, decision tree, k-nearest neighbors (KNN), logistic regression, artificial neural network, and support vector machine (SVM) were selected for this research. The data for performance evaluation were order data collected from an online shopping mall over four years and two months. The first three years and eight months of records were used to construct the social network, and the records of the following four months were used to train and evaluate the recommender models. Experiments applying the centrality metrics to each model show that the recommendation acceptance rates of the centrality metrics differ for each algorithm at a meaningful level. In this work, we analyzed only four commonly used centrality metrics: degree centrality, betweenness centrality, closeness centrality, and eigenvector centrality. Eigenvector centrality records the lowest performance in all models except the support vector machine. Closeness centrality and betweenness centrality show similar performance across all models. Degree centrality ranks in the middle across models, while betweenness centrality always ranks higher than degree centrality. Finally, closeness centrality shows distinct differences in performance depending on the model.
It ranks first in logistic regression, artificial neural network, and decision tree with numerically high performance, but records very low rankings with low performance in the support vector machine and k-nearest neighbors. As the experimental results reveal, in a classification model, network centrality metrics over the subnetwork that connects two nodes can effectively predict the connectivity between those nodes in a social network. Furthermore, each metric performs differently depending on the classification model type. This result implies that choosing appropriate metrics for each algorithm can lead to higher recommendation performance. In general, betweenness centrality can guarantee a high level of performance in any model, and introducing closeness centrality could be considered to obtain higher performance for certain models.
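
The four centrality metrics compared in the study are all available in NetworkX, and using them as classifier features can be sketched briefly. The graph and labels below are placeholders (NetworkX's built-in karate-club graph), not the shopping-mall data, and logistic regression stands in for any of the five candidate classifiers.

```python
# Minimal sketch: compute the four centrality metrics and feed them to a classifier.
import networkx as nx
from sklearn.linear_model import LogisticRegression

G = nx.karate_club_graph()   # placeholder network standing in for the customer-item network

features = {
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "eigenvector": nx.eigenvector_centrality(G),
}

X = [[features[m][n] for m in features] for n in G.nodes]
y = [G.nodes[n]["club"] == "Mr. Hi" for n in G.nodes]   # placeholder binary acceptance label

clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))
```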

The Social Influence of the Landscape Architecture Engineer Examination on the Establishment of Authenticity in Landscaping History Department (조경기사 '조경사' 과목이 조경역사학(造景歷史學) 분야의 진정성 확립에 미친 사회적 영향)

  • Lee, Chang-Hun;Shin, Hyun-Sil;Kim, Kyu-Seob;Lee, Won-Ho
    • Journal of the Korean Institute of Traditional Landscape Architecture
    • /
    • v.36 no.3
    • /
    • pp.128-136
    • /
    • 2018
  • This study centered on the objection (appeal) data raised against questions in the 'History of Landscape Architecture' subject of the written examination for the Landscape Architecture Engineer national qualification test. The purpose of the study is to examine the types of social problems that arise in the process of correcting erroneous historical facts, and to find alternatives for developing the field of landscape history that can assist in verifying the historical facts behind examination questions. The main results are as follows. First, analysis of the subject's content showed that establishing the concepts of landscape style and form and confirming historical facts are important requirements for the development of the landscape history field. Social consensus within the expert group appears to be needed to supplement the lack of reference data on landscape architectural theory. Second, analysis of the contested question content yielded a total of five issue types: undefined style and form (52.94%), unproven historical facts (25.13%), ambiguous era classification (11.77%), lack of specificity (6.95%), and ambiguous scope of content and events (3.21%). Third, not only the lack of information for learning the theory by comparing and analyzing the contents of the examination questions, but also the differences in content between textbooks were identified as the main causes of the problems. Examining the characteristics and examples of the raised issues showed that they were related to social phenomena and could be classified into cultural and political factors. Fourth, the resolution of contested questions in the Landscape Architecture Engineer examination, a national technical qualification test, shows positive results. The information determined in resolving the contested content can be used directly in the landscaping field, and identifying the types and characteristics of the issues improves the accuracy of the verification process.

KB-BERT: Training and Application of Korean Pre-trained Language Model in Financial Domain (KB-BERT: 금융 특화 한국어 사전학습 언어모델과 그 응용)

  • Kim, Donggyu;Lee, Dongwook;Park, Jangwon;Oh, Sungwoo;Kwon, Sungjun;Lee, Inyong;Choi, Dongwon
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.2
    • /
    • pp.191-206
    • /
    • 2022
  • Recently, it has become a de facto approach to utilize a pre-trained language model (PLM) to achieve state-of-the-art performance on various natural language tasks (called downstream tasks) such as sentiment analysis and question answering. However, like any other machine learning method, a PLM tends to depend on the data distribution seen during the training phase and shows worse performance on unseen (out-of-distribution) domains. For this reason, there have been many efforts to develop domain-specific PLMs for fields such as the medical and legal industries. In this paper, we discuss the training of a finance domain-specific PLM for the Korean language and its applications. Our finance domain-specific PLM, KB-BERT, is trained on a carefully curated financial corpus that includes domain-specific documents such as financial reports. We provide extensive performance evaluation results on three natural language tasks: topic classification, sentiment analysis, and question answering. Compared to state-of-the-art Korean PLMs such as KoELECTRA and KLUE-RoBERTa, KB-BERT shows comparable performance on general datasets based on common corpora like Wikipedia and news articles. Moreover, KB-BERT outperforms the compared models on finance domain datasets that require finance-specific knowledge to solve the given problems.
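
The downstream evaluation setup (a PLM with a sequence-classification head) can be sketched as follows. Since KB-BERT itself is not assumed to be publicly downloadable here, the klue/roberta-base baseline named in the abstract is used as a stand-in checkpoint, and the finance sentence and binary label set are illustrative.

```python
# Minimal sketch of applying a Korean PLM to sentence classification (e.g. sentiment or topic).
# The classification head below is randomly initialized; in practice it would first be
# fine-tuned on labeled downstream data.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "klue/roberta-base"   # stand-in for the finance-specific PLM
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

batch = tokenizer(
    ["이번 분기 실적이 시장 기대치를 상회했다."],   # placeholder finance sentence
    return_tensors="pt", truncation=True, padding=True,
)
with torch.no_grad():
    logits = model(**batch).logits
print("predicted class:", logits.argmax(dim=-1).item())
```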

Conflict resolution and political tasks on the usage of beauty care devices by beauty artists (미용업종사자의 미용기기 사용에 대한 분쟁해결과 정책적 과제)

  • Kim, Ju-Ri
    • Journal of Arbitration Studies
    • /
    • v.27 no.2
    • /
    • pp.83-105
    • /
    • 2017
  • In contemporary society, interest in and consumption of beauty treatments are increasing, raising interest in health and beauty. However, beauty-related laws are becoming a hindrance to the development of the beauty industry. Currently, the Public Health Control Act plays a basic role in the beauty business in Korea; however, its contents are in discord with international laws and its definitions are not clear. It therefore causes conflicts among occupations and trade associations similar to the beauty business. In particular, because the Public Health Control Act contains neither definitions of nor policies on beauty care devices, devices used in foreign countries cannot be used in Korea because they are classified as medical devices. Under these circumstances, the use of beauty care devices by beauty artists violates the law. The government has tried to resolve these irrational regulations. Recently, the Small and Medium Business Administration announced 'the improvement plan of small business and young founders site regulation for public economy recovery' in a ministerial meeting on December 28, 2016, and regulations on policy preparation for skincare devices were included in this announcement. It remains a question whether the regulations will be implemented. Even though beauty industry competitiveness was presented at the 18th Presidential Council on National Competitiveness in 2009, it was not put into practice. Proposal bills for improving beauty laws have been put forth several times since 2000, including improvement plans for regulating beauty care devices, but so far there have been no improvements. The damage from the regulation classifying beauty devices as medical devices is not restricted to skincare; it also affects the development of beauty devices and the beauty industry that imports and exports them. When beauty devices are exported, complicated procedures are unavoidable, and when they are imported, irrational problems such as re-registration procedures and costs occur. The reason why an improvement plan has not gone into practice is the resistance of the dermatologists' association. Dermatologists actively oppose the change on public health grounds, arguing that beauty devices used by beauty artists cause people to suffer side effects. In contrast, in foreign countries anyone who has a licence to use beauty devices is able to use them. This infringes not only the rights of beauty artists but also people's right to receive beauty care services. For this reason, Korea's current law under which beauty devices are classified as medical devices should be revised in accordance with domestic circumstances. Therefore, in order to advance and globalize the beauty industry, the support and cooperation of the Korean government and the relevant associations are needed to legislate and revise the beauty device laws. The relevant associations should abandon sectional self-interest and cooperate to define the scope, size, and management of beauty devices for safe use. If no collaboration can be reached, an arbitration agency should be established to solve the problem.

Chatbot Design Method Using Hybrid Word Vector Expression Model Based on Real Telemarketing Data

  • Zhang, Jie;Zhang, Jianing;Ma, Shuhao;Yang, Jie;Gui, Guan
    • KSII Transactions on Internet and Information Systems (TIIS)
    • /
    • v.14 no.4
    • /
    • pp.1400-1418
    • /
    • 2020
  • In the development of commercial promotion, the chatbot is known as one of the significant applications of natural language processing (NLP). Conventional design methods use the bag-of-words (BOW) model alone, based on the Google database and other online corpora. For one thing, in the bag-of-words model the vectors are unrelated to one another; although this method handles discrete features well, it does not help the machine understand continuous statements, because the connections between words are lost in the encoded word vector. For another, existing methods are tested on state-of-the-art online corpora but are hard to apply to real applications such as telemarketing data. In this paper, we propose an improved chatbot design method using a hybrid bag-of-words and skip-gram model based on real telemarketing data. Specifically, we first collect real data in the telemarketing field and perform data cleaning and data classification on the constructed corpus. Second, the word representation adopts a hybrid of the bag-of-words model and the skip-gram model. The skip-gram model maps synonyms close to each other in the vector space, so the correlation between words is expressed, the amount of information contained in the word vector is increased, and the shortcomings of using the bag-of-words model alone are made up for. Third, we use term frequency-inverse document frequency (TF-IDF) weighting to increase the weight of key words and then output the final word representation. Finally, the answer is produced using a hybrid of a retrieval model and a generative model. The retrieval model can accurately answer questions in the field, while the generative model supplements open-domain question answering, with the final reply produced by long short-term memory (LSTM) training and prediction. Experimental results show that the hybrid word vector expression model can improve the accuracy of the responses and that the whole system can communicate with humans.
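
The hybrid word representation (skip-gram vectors combined with TF-IDF weighting) can be sketched briefly. This is an illustrative reconstruction, not the authors' code: gensim's skip-gram Word2Vec supplies the word vectors, IDF weights approximate the TF-IDF weighting of key words, and the three utterances stand in for the cleaned telemarketing corpus.

```python
# Minimal sketch: IDF-weighted average of skip-gram word vectors as a sentence representation.
import numpy as np
from gensim.models import Word2Vec
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["hello are you interested in our loan product",
          "what is the interest rate of the loan",
          "please call me back later"]            # placeholder utterances
tokenized = [s.split() for s in corpus]

w2v = Word2Vec(tokenized, vector_size=50, window=3, min_count=1, sg=1)  # sg=1: skip-gram
tfidf = TfidfVectorizer().fit(corpus)
idf = dict(zip(tfidf.get_feature_names_out(), tfidf.idf_))

def sentence_vector(tokens):
    """IDF-weighted average of skip-gram word vectors (stand-in for TF-IDF weighting)."""
    weights = np.array([idf.get(t, 1.0) for t in tokens])
    vectors = np.array([w2v.wv[t] for t in tokens])
    return (weights[:, None] * vectors).sum(axis=0) / weights.sum()

query = sentence_vector("what is the loan interest rate".split())
print(query.shape)  # (50,) vector usable for retrieval by cosine similarity
```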

A Study on The Effects of Business Plan upon Firm Performance (사업계획이 경영성과에 미치는 영향에 관한 연구: 구성요소 및 기업가유형, 발전단계 측면에서)

  • Koh, In-Kon
    • Asia-Pacific Journal of Business Venturing and Entrepreneurship
    • /
    • v.6 no.4
    • /
    • pp.111-135
    • /
    • 2011
  • While previous studies and publications assert a strong correlation between a company's business plan and its performance, very few have actually conducted empirical analyses to support this. This study takes a practical approach in its analysis of Korean small and medium-sized enterprises (SMEs) with a view to answering this question. In addition, considering entrepreneur type and company development stage, I analyzed how the effects of business plan components on performance differ. I selected business plan components, previously suggested only in theory and concept, through a literature review and a preliminary examination. Corporate performance was measured by recent improvements in ROS, ROA, market share, and the number of employees, to gauge how strongly each is affected by the components of a business plan. Results show that the business plan components influenced the number of employees and that they properly discriminated between superior and inferior company groups. In particular, the finance & related system and advertising & distribution factors showed statistically significant classification forecasting power. Technical/craftsman entrepreneurs rated the effects of the producing & sales and profit & quality factors highly, while general/opportunistic entrepreneurs rated the finance & related system, advertising & distribution, and corporate mission factors highly. Among the company development stages, the effect of corporate mission was highest. The finance & related system and advertising & distribution factors showed statistically significant differences across entrepreneur types and company development stages.
