• Title/Summary/Keyword: evaluation module


A Methodology for Automatic Multi-Categorization of Single-Categorized Documents (단일 카테고리 문서의 다중 카테고리 자동확장 방법론)

  • Hong, Jin-Sung;Kim, Namgyu;Lee, Sangwon
    • Journal of Intelligence and Information Systems, v.20 no.3, pp.77-92, 2014
  • Recently, numerous documents, including unstructured data and text, have been created due to the rapid growth of social media and Internet usage. Each document is usually assigned a specific category for the convenience of users. In the past, this categorization was performed manually; however, manual categorization not only fails to guarantee accuracy but also requires a large amount of time and considerable cost. Many studies have therefore been conducted on the automatic creation of categories to overcome the limitations of manual categorization. Unfortunately, most of these methods cannot be applied to complex documents with multiple topics, because they assume that each document can be assigned to only one category. To overcome this limitation, some studies have attempted to assign each document to multiple categories. However, these are also limited in that their learning process requires training on a multi-categorized document set; they therefore cannot be applied to the multi-categorization of most documents unless multi-categorized training sets are provided. To remove the requirement of a multi-categorized training set imposed by traditional multi-categorization algorithms, we propose a new methodology that can extend the category of a single-categorized document to multiple categories by analyzing the relationships among categories, topics, and documents. First, we find the relationship between documents and topics by applying topic analysis to single-categorized documents. Second, we construct a correspondence table between topics and categories by investigating the relationship between them. Finally, we calculate matching scores for each document against multiple categories.
A document is then classified into a category if and only if its matching score exceeds a predefined threshold; for example, a document can be classified into the three categories whose matching scores exceed the threshold. The main contribution of our study is that our methodology improves the applicability of traditional multi-category classifiers by generating multi-categorized documents from single-categorized documents. Additionally, we propose a module for verifying the accuracy of the proposed methodology. For performance evaluation, we performed intensive experiments with news articles. News articles are clearly categorized by theme, and they contain less vulgar language and slang than other typical text documents. We collected news articles from July 2012 to June 2013. The number of articles varies widely across categories, because readers have different levels of interest in each category and because events occur with different frequencies in each category. To minimize distortion of the results caused by differing numbers of articles per category, we extracted 3,000 articles from each of eight categories, for a total of 24,000 articles. The eight categories were "IT Science," "Economy," "Society," "Life and Culture," "World," "Sports," "Entertainment," and "Politics." Using the collected news articles, we calculated document/category correspondence scores from the topic/category and document/topic correspondence scores. The document/category correspondence score indicates the degree to which each document corresponds to a certain category. As a result, we could present two additional categories for each of 23,089 documents.
Precision, recall, and F-score were 0.605, 0.629, and 0.617, respectively, when only the top 1 predicted category was evaluated, and 0.838, 0.290, and 0.431 when the top 1 to 3 predicted categories were considered. Interestingly, precision, recall, and F-score varied widely across the eight categories.
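The correspondence-table and matching-score steps described above can be sketched as follows (a minimal illustration with hypothetical function names and toy topic weights, not the authors' implementation):

```python
from collections import defaultdict

def topic_category_table(docs):
    """Aggregate topic weights per category from single-categorized documents.

    docs: iterable of (category, {topic: weight}) pairs, where the weights
    could come from a topic analysis such as LDA.
    Returns {topic: {category: score}} with each topic row normalized to 1.
    """
    table = defaultdict(lambda: defaultdict(float))
    for category, topics in docs:
        for topic, w in topics.items():
            table[topic][category] += w
    for topic, row in table.items():
        total = sum(row.values())
        for cat in row:
            row[cat] /= total
    return table

def category_scores(doc_topics, table):
    """Document/category matching score: sum over topics of the document's
    topic weight times the topic/category correspondence."""
    scores = defaultdict(float)
    for topic, w in doc_topics.items():
        for cat, s in table.get(topic, {}).items():
            scores[cat] += w * s
    return dict(scores)

def assign_categories(doc_topics, table, threshold=0.3):
    """Keep every category whose matching score exceeds the threshold."""
    return sorted(c for c, s in category_scores(doc_topics, table).items()
                  if s > threshold)
```

Lowering the threshold extends a document to more categories, mirroring the top 1 versus top 1-3 evaluation settings reported above.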

Assessing Middle School Students' Understanding of Radiative Equilibrium, the Greenhouse Effect, and Global Warming Through Their Interpretation of Heat Balance Data (열수지 자료 해석에서 드러난 중학생의 복사 평형, 온실 효과, 지구 온난화에 대한 이해)

  • Chung, Sueim;Yu, Eun-Jeong
    • Journal of the Korean Earth Science Society, v.42 no.6, pp.770-788, 2021
  • This study aimed to determine whether middle school students could understand global warming and the greenhouse effect and explain them in terms of global radiative equilibrium. From July 13 to July 24, 2021, 118 third-grade middle school students who had completed a class module on 'atmosphere and weather' participated in an online assessment consisting of multiple-choice and written answers on radiative equilibrium, the greenhouse effect, and global warming; 97 complete responses were obtained. The analysis found that over half the students (61.9%) correctly described the meaning of radiative equilibrium; however, their explanations frequently contained prior knowledge or specific examples outside of the presented data. The majority of the students (92.8%) knew that the greenhouse effect occurs within Earth's atmosphere, but many (32.0%) thought of the greenhouse effect as a state in which radiative equilibrium is broken. Fewer than half the students (47.4%) answered correctly that radiative equilibrium occurs on both Earth and the Moon. Most of the students (69.1%) understood that atmospheric re-radiation is the cause of the greenhouse effect, but few (39.2%) answered correctly that the amount of surface radiation emitted is greater than the amount of solar radiation absorbed by the Earth's surface. In addition, about half the students (49.5%) had a good understanding of the relationship between increases in greenhouse gases, absorption by atmospheric gases, and the resulting re-radiation to the surface. However, when asked how surface emission would change as greenhouse gases increase, their answers were very diverse: 14.4% said it would increase, 9.3% said there would be no change, 7.2% said it would decrease, and 18.6% gave no response. Radiative equilibrium, the greenhouse effect, and global warming form a large semantic network connected by the balance and interactions of the Earth system.
This network can thus serve as a conceptual system for students to understand, apply, and interpret climate change caused by global warming. Therefore, given the climate change crisis facing humanity, sophisticated programs and classroom experiences should be developed to encourage students to think scientifically and to establish scientific concepts based on accurate understanding, with follow-up studies conducted to observe their effects.
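The radiative-equilibrium reasoning the students were asked to apply can be made concrete with the standard zero-dimensional energy balance (textbook values, not data from this study):

```latex
% Zero-dimensional radiative equilibrium:
% absorbed solar radiation = emitted thermal radiation
\frac{S_0}{4}\,(1-\alpha) = \sigma T_e^{4}
% With S_0 \approx 1361\ \mathrm{W\,m^{-2}},\ \alpha \approx 0.3,
% \sigma = 5.67\times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}:
T_e = \left[\frac{1361 \times 0.7}{4 \times 5.67\times 10^{-8}}\right]^{1/4}
    \approx 255\ \mathrm{K}
```

The observed mean surface temperature is about 288 K; the roughly 33 K difference is the greenhouse effect of atmospheric re-radiation, which is exactly the distinction between equilibrium and warming that the assessment probed.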

THE EFFECT OF THE REPEATABILITY FILE IN THE NIRS FATTY ACIDS ANALYSIS OF ANIMAL FATS

  • Perez Marin, M.D.;De Pedro, E.;Garcia Olmo, J.;Garrido Varo, A.
    • Proceedings of the Korean Society of Near Infrared Spectroscopy Conference, 2001.06a, pp.4107-4107, 2001
  • Previous work has shown the viability of NIRS technology for predicting fatty acids in Iberian pig fat. Although the resulting equations showed high precision, important fluctuations were detected in the predictions of new samples, growing with the time elapsed between calibration development and NIRS analysis. This makes the routine use of NIRS calibrations difficult. Moreover, this problem appears only in products such as fat, whose spectra show sharply defined absorption peaks at certain wavelengths, causing a high sensitivity to small instrumental changes that are not detected by normal checks. To avoid these inconveniences, the WinISI 1.04 software provides a mathematical algorithm that creates a "Repeatability File", used during calibration development to minimize the variation sources that can affect NIRS predictions. The objective of the current work is to evaluate the use of a repeatability file in quantitative NIRS analysis of Iberian pig fat. A total of 188 samples of Iberian pig fat, produced by COVAP, were used. NIR data were recorded using a FOSS NIRSystems 6500 I spectrophotometer equipped with a spinning module. Samples were analysed by folded transmission, using two sample cells with a 0.1 mm pathlength and a gold surface.
High-accuracy calibration equations were obtained both without and with the repeatability file for six fatty acids: myristic (SECV_without = 0.07%, r²_without = 0.76; SECV_with = 0.08%, r²_with = 0.65), palmitic (SECV_without = 0.28%, r²_without = 0.97; SECV_with = 0.24%, r²_with = 0.98), palmitoleic (SECV_without = 0.08%, r²_without = 0.94; SECV_with = 0.09%, r²_with = 0.92), stearic (SECV_without = 0.27%, r²_without = 0.97; SECV_with = 0.29%, r²_with = 0.96), oleic (SECV_without = 0.20%, r²_without = 0.99; SECV_with = 0.20%, r²_with = 0.99) and linoleic (SECV_without = 0.16%, r²_without = 0.98; SECV_with = 0.16%, r²_with = 0.98). The repeatability file proved very effective as a tool to reduce the variation sources that can disturb prediction accuracy. Although the differences in the calibration results are negligible, the effect of the repeatability file is appreciated mainly when predicting new samples that are not in the calibration set and whose spectra were recorded long after the equation development. In this case, the bias values of the fatty acid predictions were lower when the repeatability file was used: myristic (bias_without = -0.05, bias_with = -0.04), palmitic (bias_without = -0.42, bias_with = -0.11), palmitoleic (bias_without = -0.03, bias_with = 0.03), stearic (bias_without = 0.47, bias_with = 0.28), oleic (bias_without = 0.14, bias_with = -0.04) and linoleic (bias_without = 0.25, bias_with = -0.20).
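For reference, the bias and SECV statistics quoted above can be computed from reference and predicted values as follows (a minimal sketch using one common chemometric convention, not the WinISI implementation):

```python
import math

def bias(reference, predicted):
    """Mean signed difference between predicted and reference values."""
    n = len(reference)
    return sum(p - r for r, p in zip(reference, predicted)) / n

def secv(reference, predicted):
    """Standard error of cross-validation, here taken as the RMS of the
    bias-corrected residuals over the cross-validation predictions."""
    n = len(reference)
    b = bias(reference, predicted)
    return math.sqrt(sum((p - r - b) ** 2
                         for r, p in zip(reference, predicted)) / (n - 1))
```

A consistent positive or negative bias on fresh samples, with SECV unchanged, is exactly the drift pattern the repeatability file is meant to suppress.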


Real data-based active sonar signal synthesis method (실데이터 기반 능동 소나 신호 합성 방법론)

  • Yunsu Kim;Juho Kim;Jongwon Seok;Jungpyo Hong
    • The Journal of the Acoustical Society of Korea, v.43 no.1, pp.9-18, 2024
  • Active sonar systems are growing in importance due to the increasing quietness of underwater targets and the rise in ambient noise caused by increased maritime traffic. However, the low signal-to-noise ratio of the echo signal, caused by multipath propagation, various clutter, ambient noise, and reverberation, makes it difficult to identify underwater targets using active sonar. Attempts have been made to apply data-driven methods such as machine learning and deep learning to improve the performance of underwater target recognition systems, but it is difficult to collect enough training data due to the nature of sonar datasets. Methods based on mathematical modeling have mainly been used to compensate for insufficient active sonar data, but such methods have limitations in accurately simulating complex underwater phenomena. Therefore, in this paper, we propose a sonar signal synthesis method based on a deep neural network. To apply the neural network to sonar signal synthesis, the proposed method appropriately adapts the attention-based encoder and decoder, the main modules of the Tacotron model widely used in speech synthesis, to sonar signals. By training the proposed model on a dataset collected by deploying a simulated target in an actual marine environment, it is possible to synthesize signals more similar to real ones. To verify the performance of the proposed method, a Perceptual Evaluation of Audio Quality (PEAQ) test was conducted; the score difference from the actual signal was within -2.3 in a total of four different environments. These results demonstrate that active sonar signals generated by the proposed method closely approximate actual signals.
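The attention step at the heart of a Tacotron-style encoder-decoder can be illustrated in generic form (a plain scaled dot-product attention over encoder frames; a simplification for intuition, not the paper's exact module):

```python
import numpy as np

def scaled_dot_product_attention(query, keys, values):
    """Weight encoder frames (values) by query/key similarity.

    query:  (d,) decoder state at one synthesis step.
    keys, values: (T, d) encoder outputs over T input frames.
    Returns the context vector and the attention weights.
    """
    d = query.shape[-1]
    scores = keys @ query / np.sqrt(d)       # (T,) similarity per frame
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    context = weights @ values               # (d,) weighted sum of frames
    return context, weights
```

At each decoder step the context vector summarizes the encoder frames most relevant to the signal segment being synthesized, which is what the adaptation to sonar signals must preserve.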

A Study on Intelligent Value Chain Network System based on Firms' Information (기업정보 기반 지능형 밸류체인 네트워크 시스템에 관한 연구)

  • Sung, Tae-Eung;Kim, Kang-Hoe;Moon, Young-Su;Lee, Ho-Shin
    • Journal of Intelligence and Information Systems, v.24 no.3, pp.67-88, 2018
  • As the significance of sustainable growth and competitiveness of small- and medium-sized enterprises (SMEs) has been recognized, governmental support has until recently been provided mainly for tangible resources such as R&D, manpower, and funds. However, the inefficiency of such support systems, including underestimated or redundant support, has also been raised, because conflicting policies exist regarding the appropriateness, effectiveness, and efficiency of business support. From the perspective of the government or a company, we believe that, given the limited resources of SMEs, technology development and capacity enhancement through collaboration with external sources form the basis for creating competitive advantage, and we also emphasize value creation activities toward this end. This is why value chain network analysis is necessary: to analyze inter-company deal relationships along a series of value chains and to visualize the results by establishing knowledge ecosystems at the corporate level. There exist the Technology Opportunity Discovery (TOD) system, which provides information on relevant products or the technology status of companies with patents through searches over patent, product, or company names, and CRETOP and KISLINE, which both allow viewing of company (financial) information and credit information; however, no online system provides a list of similar (competitive) companies based on value chain network analysis, or information on potential clients or demanders for future business deals. Therefore, we focus on the "Value Chain Network System (VCNS)", a support partner for corporate business strategy planning developed and managed by KISTI, and investigate the types of embedded network-based analysis modules, the databases (D/Bs) that support them, and how to utilize the system efficiently.
Further, we explore the network visualization function of the intelligent value chain analysis system, which provides the core information needed to understand the industrial structure and to support a company's new product development. For a company to gain competitive superiority, it is necessary to identify its competitors in terms of patents or currently produced products, and searching for similar companies or competitors by industry type is the key to securing competitiveness in the commercialization of the target company. In addition, transaction information, which reflects business activity between companies, plays an important role in identifying potential customers when both parties enter similar fields. Identifying competitors at the enterprise or industry level using a network map based on such inter-company sales information can be implemented as a core module of value chain analysis. The Value Chain Network System (VCNS) combines the concepts of value chain and industrial structure analysis with corporate information collected to date, so that it can grasp not only the market competition situation of individual companies but also the value chain relationships of a specific industry. In particular, it can be useful as a corporate-level information analysis tool for tasks such as identifying the industry structure, tracking competitor trends, analyzing competitors, locating suppliers (sellers) and demanders (buyers), following industry trends by item, finding promising items, finding new entrants, finding core companies and items along the value chain, and matching patents with the corresponding companies.
In addition, based on the objectivity and reliability of analysis results drawn from transaction deal information and financial data, the value chain network system is expected to be utilized for various purposes such as information support for business evaluation, R&D decision support, and mid- or short-term demand forecasting, in particular by more than 15,000 member companies in Korea and by employees in R&D service sectors, government-funded research institutes, and public organizations. To strengthen the business competitiveness of companies, technology, patent, and market information has so far been provided mainly by government agencies and private R&D service companies, framed as patent analysis (mainly for rating and quantitative analysis) or market analysis (market prediction and demand forecasting based on market reports). However, this has not resolved the lack of information, one of the difficulties that Korean firms often face at the commercialization stage; in particular, information about competitors and potential candidates is much more difficult to obtain. In this study, the real-time value chain analysis and visualization service module based on the proposed network map and the data at hand is examined together with expected market share, estimated sales volume, and contact information (which implies potential suppliers of raw materials/parts and potential demanders of complete products/modules). In future research, we intend to investigate indices of competitive factors in depth through the participation of research subjects, to newly develop competitive indices for competitors or substitute items, and to apply data mining techniques and algorithms to improve the performance of VCNS.
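The competitor-identification idea based on inter-company sales records (companies supplying the same buyers compete) can be sketched on a toy transaction list (hypothetical data and function names, not the VCNS modules):

```python
from collections import defaultdict

def competitors(transactions):
    """Map each supplier to the set of other suppliers sharing a buyer.

    transactions: iterable of (supplier, buyer) sales records, i.e. the
    edges of an inter-company transaction network.
    """
    suppliers_by_buyer = defaultdict(set)
    for supplier, buyer in transactions:
        suppliers_by_buyer[buyer].add(supplier)
    rivals = defaultdict(set)
    for sellers in suppliers_by_buyer.values():
        for s in sellers:
            rivals[s] |= sellers - {s}
    return dict(rivals)
```

The same adjacency structure, drawn as a network map, yields the supplier/demander and competitor views described above.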

A Study on Market Size Estimation Method by Product Group Using Word2Vec Algorithm (Word2Vec을 활용한 제품군별 시장규모 추정 방법에 관한 연구)

  • Jung, Ye Lim;Kim, Ji Hui;Yoo, Hyoung Sun
    • Journal of Intelligence and Information Systems, v.26 no.1, pp.1-21, 2020
  • With the rapid development of artificial intelligence technology, various techniques have been developed to extract meaningful information from unstructured text data, which constitutes a large portion of big data. Over the past decades, text mining technologies have been utilized in various industries for practical applications. In the field of business intelligence, text mining has been employed to discover new market and/or technology opportunities and to support rational decision making by business participants. Market information such as market size, market growth rate, and market share is essential for setting companies' business strategies, and there has been continuous demand across various fields for market information at the specific product level. However, such information has generally been provided at the industry level or in broad categories based on classification standards, making it difficult to obtain specific and proper information. In this regard, we propose a new methodology that can estimate the market sizes of product groups at more detailed levels than previously offered. We applied the Word2Vec algorithm, a neural-network-based semantic word embedding model, to enable automatic market size estimation from individual companies' product information in a bottom-up manner. The overall process is as follows: First, the data related to product information is collected, refined, and restructured into a form suitable for applying the Word2Vec model. Next, the preprocessed data is embedded into a vector space by Word2Vec, and product groups are derived by extracting similar product names based on cosine similarity. Finally, the sales data on the extracted products is summed to estimate the market size of the product groups. As experimental data, text data of product names from Statistics Korea's microdata (345,103 cases) were mapped into a multidimensional vector space by Word2Vec training.
We optimized the training parameters and then applied a vector dimension of 300 and a window size of 15 for further experiments. We employed the index words of the Korean Standard Industry Classification (KSIC) as a product name dataset to cluster product groups more efficiently. Product names similar to KSIC indexes were extracted based on cosine similarity, and the market size of the extracted products, treated as one product category, was calculated from individual companies' sales data. The market sizes of 11,654 specific product lines were automatically estimated by the proposed model. For performance verification, the results were compared with the actual market sizes of some items; the Pearson correlation coefficient was 0.513. Our approach has several advantages over previous studies. First, text mining and machine learning techniques were applied to market size estimation for the first time, overcoming the limitations of traditional methods that rely on sampling or require multiple assumptions. In addition, the level of market category can be easily and efficiently adjusted according to the purpose of information use by changing the cosine similarity threshold. Furthermore, it has high potential for practical applications, since it can resolve unmet needs for detailed market size information in the public and private sectors. Specifically, it can be utilized in technology evaluation and technology commercialization support programs conducted by governmental institutions, as well as in business strategy consulting and market analysis report publishing by private firms. The limitation of our study is that the presented model needs to be improved in terms of accuracy and reliability. The semantic word embedding module could be advanced by imposing a proper ordering on the preprocessed dataset or by combining another measure, such as Jaccard similarity, with Word2Vec.
Also, the product group clustering method could be replaced with other types of unsupervised machine learning algorithms. Our group is currently working on subsequent studies, and we expect that they can further improve the performance of the basic model conceptually proposed in this study.
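The cosine-similarity grouping and sales-summation steps can be sketched with toy embedding vectors standing in for trained Word2Vec vectors (hypothetical names and data, not the authors' code):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def group_by_index_word(index_vec, product_vecs, threshold=0.7):
    """Return names of products whose embedding lies within the cosine
    similarity threshold of the index-word embedding (e.g. a KSIC index)."""
    return [name for name, v in product_vecs.items()
            if cosine_similarity(index_vec, v) >= threshold]

def market_size(group, sales):
    """Sum company-level sales over the extracted product group."""
    return sum(sales[name] for name in group)
```

Raising or lowering the threshold widens or narrows the product group, which is how the level of market category is adjusted in the described methodology.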