• Title/Summary/Keyword: Extraction time


Construction of Event Networks from Large News Data Using Text Mining Techniques (텍스트 마이닝 기법을 적용한 뉴스 데이터에서의 사건 네트워크 구축)

  • Lee, Minchul;Kim, Hea-Jin
    • Journal of Intelligence and Information Systems
    • /
    • v.24 no.1
    • /
    • pp.183-203
    • /
    • 2018
  • News articles are the most suitable medium for examining events occurring at home and abroad. In particular, as advances in information and communication technology have produced many kinds of online news media, news coverage of events in society has grown greatly. Automatically summarizing key events from massive amounts of news data would therefore help users survey many events at a glance, and an event network built on the relevance between events would further help readers understand current affairs. In this study, we propose a method for extracting event networks from large news text data. We first collected Korean political and social articles from March 2016 to March 2017 and preprocessed them using NPMI and Word2Vec, leaving only meaningful words and integrating synonyms. Latent Dirichlet allocation (LDA) topic modeling was then used to compute the topic distribution by date, and events were detected at the peaks of each topic distribution. A total of 32 topics were extracted, and the occurrence time of each event was inferred from the point at which its topic distribution surged. In this way 85 candidate events were detected, from which 16 final events were selected using Gaussian smoothing. To construct the event network, we computed a relevance score between the detected events using the cosine coefficient of event co-occurrence and connected related events. Finally, each event became a vertex, and the relevance score between events became the weight of the edge connecting the vertices.
The resulting event network let us arrange the major political and social events in Korea over the past year in chronological order and, at the same time, identify which events are related to which. Our approach differs from existing event detection methods in that LDA topic modeling makes it easy to analyze large amounts of data and to identify relations between events that were difficult to detect before. We also applied various text mining techniques, together with Word2vec, in the preprocessing step to improve the extraction accuracy of proper nouns and compound nouns, which has long been difficult in analyzing Korean text. The proposed event detection and network construction techniques have the following practical advantages. First, LDA topic modeling, an unsupervised learning method, can easily extract topics, topic words, and their distributions from huge amounts of data, and the date information of the collected news articles allows each topic distribution to be expressed as a time series. Second, by computing relevance scores from the co-occurrence of topics, which is difficult to capture with existing event detection, we can present the connections between events in a compact, summarized form; indeed, the relevance-based event network proposed in this study was actually laid out in order of occurrence time. The network also makes it possible to identify which event served as the starting point of a chain of events. The limitations of this study are that LDA topic modeling yields different results depending on the initial parameters and the number of topics, and that the topic and event names in the analysis results must be assigned by the researcher's subjective judgment.
Also, since each topic is assumed to be exclusive and independent, the relevance between topics is not taken into account. Follow-up studies should compute the relevance between events not covered in this study, or between events belonging to the same topic.
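
As a rough illustration of the network-construction step described above, the following minimal Python sketch computes cosine coefficients between binary event co-occurrence vectors and keeps edges above a threshold. The event names, occurrence matrix, and threshold are all invented for illustration, not taken from the paper's data.

```python
import math

# Hypothetical binary event-by-date occurrence matrix: rows are events,
# columns are dates (1 if the event's topic surged on that date).
events = {
    "candlelight_rally": [1, 1, 0, 1, 0],
    "impeachment_vote":  [1, 1, 0, 0, 0],
    "election":          [0, 0, 1, 0, 1],
}

def cosine(u, v):
    """Cosine coefficient between two occurrence vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Build the event network: vertices are events, edges carry the
# relevance (cosine) score; keep only pairs above a threshold.
THRESHOLD = 0.3
names = list(events)
edges = []
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        score = cosine(events[names[i]], events[names[j]])
        if score >= THRESHOLD:
            edges.append((names[i], names[j], round(score, 3)))

print(edges)
```

Events whose topic distributions surge on the same dates end up connected; isolated events remain unlinked vertices.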

Construction of Consumer Confidence index based on Sentiment analysis using News articles (뉴스기사를 이용한 소비자의 경기심리지수 생성)

  • Song, Minchae;Shin, Kyung-shik
    • Journal of Intelligence and Information Systems
    • /
    • v.23 no.3
    • /
    • pp.1-27
    • /
    • 2017
  • The economic sentiment index and macroeconomic indicators are known to be closely related, because economic agents' judgments and forecasts of business conditions affect economic fluctuations. For this reason, consumer sentiment or confidence provides steady fodder for business and is treated as an important piece of economic information. In Korea, private consumption accounts for a large share of GDP, so the consumer sentiment index is a very important indicator for evaluating and forecasting the domestic economic situation. However, despite offering relevant insights into private consumption and GDP, the traditional survey-based approach to measuring consumer confidence has several limits. One weakness is that it takes considerable time to research, collect, and aggregate the data; if an urgent issue arises, timely information is not announced until the end of the month. In addition, the survey only contains information derived from its questionnaire items, so it can miss the direct effects of newly arising issues, and it faces potential declines in response rates and erroneous responses. It is therefore necessary to find a way to complement it. For this purpose, we construct and assess an index designed to measure consumer economic sentiment using sentiment analysis. Unlike survey-based measures, our index relies on textual analysis to extract sentiment from economic and financial news articles. Text data such as news articles and SNS posts are timely and cover a wide range of issues; because such sources can quickly capture the economic impact of specific issues, they have great potential as economic indicators. Of the two main approaches to the automatic extraction of sentiment from text, we apply the lexicon-based approach, using sentiment dictionaries of words annotated with their semantic orientations.
In creating the sentiment lexicon dictionaries, we entered the semantic orientation of individual words manually and did not attempt a full linguistic analysis (one involving word senses or argument structure); this is a limitation of our research, and further work in that direction remains possible. In this study, we generate a time-series index of economic sentiment in the news. The construction of the index consists of three broad steps: (1) collecting a large corpus of economic news articles on the web; (2) applying lexicon-based sentiment analysis to score each article by sentiment orientation (positive, negative, or neutral); and (3) constructing a consumer economic sentiment index by aggregating monthly time series for each sentiment word. In line with existing scholarly assessments of the relationship between the consumer confidence index and macroeconomic indicators, the new index must be assessed for its usefulness, which we do by comparing it with other economic indicators such as the CSI. Trend and cross-correlation analyses are carried out to examine the relations and lag structure, and we analyze forecasting power using one-step-ahead out-of-sample prediction. In almost all experiments, the news sentiment index correlates strongly with related contemporaneous key indicators, and in most cases news sentiment shocks predict future economic activity; in head-to-head comparisons, the news sentiment measures outperform the survey-based sentiment index (CSI). Policy makers want to understand consumer and public opinion about existing or proposed policies.
Monitoring various web media, SNS, and news articles for such opinions enables government decision-makers to respond quickly. Textual data such as news articles and social networks (Twitter, Facebook, and blogs) are generated at high speed and cover a wide range of issues; because such sources can quickly capture the economic impact of specific issues, they have great potential as economic indicators. Although research using unstructured data in economic analysis is in its early stages, the utilization of such data is expected to increase greatly once its usefulness is confirmed.
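
The lexicon-based scoring and monthly aggregation steps described above can be sketched as follows. The sentiment lexicon and article texts are toy placeholders; the paper's actual dictionaries were built manually for economic news.

```python
from collections import defaultdict

# Toy sentiment lexicon: +1 for positive orientation, -1 for negative.
# Entries are placeholders, not the paper's dictionary.
LEXICON = {"growth": 1, "recovery": 1, "surge": 1,
           "recession": -1, "slump": -1, "crisis": -1}

def score_article(text):
    """Net sentiment orientation of one article: positive minus negative hits."""
    words = text.lower().split()
    return sum(LEXICON.get(w, 0) for w in words)

def monthly_index(articles):
    """Aggregate article scores into a monthly time-series index.
    `articles` is a list of (month, text) pairs; the index is the mean score."""
    totals, counts = defaultdict(float), defaultdict(int)
    for month, text in articles:
        totals[month] += score_article(text)
        counts[month] += 1
    return {m: totals[m] / counts[m] for m in sorted(totals)}

articles = [
    ("2017-01", "exports show strong growth and recovery"),
    ("2017-01", "fears of recession weigh on markets"),
    ("2017-02", "consumer spending slump deepens the crisis"),
]
print(monthly_index(articles))  # {'2017-01': 0.5, '2017-02': -2.0}
```

The resulting monthly series is what would then be compared against the CSI via cross-correlation and out-of-sample forecasts.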

A Proposal of a Keyword Extraction System for Detecting Social Issues (사회문제 해결형 기술수요 발굴을 위한 키워드 추출 시스템 제안)

  • Jeong, Dami;Kim, Jaeseok;Kim, Gi-Nam;Heo, Jong-Uk;On, Byung-Won;Kang, Mijung
    • Journal of Intelligence and Information Systems
    • /
    • v.19 no.3
    • /
    • pp.1-23
    • /
    • 2013
  • To discover significant social issues such as unemployment, economic crisis, and social welfare that urgently need to be solved in modern society, the existing approach is for researchers to collect opinions from professional experts and scholars through online or offline surveys. However, this method is not always effective. Because of cost, a large number of survey replies are seldom gathered, and in some cases it is hard to find experts on a specific social issue, so the sample set is often small and may be biased. Furthermore, several experts may draw totally different conclusions about the same social issue, because each has a subjective point of view and a different background. It is then considerably hard to figure out what the current social issues are and which of them really matter. To surmount these shortcomings, in this paper we develop a prototype system that semi-automatically detects keywords representing social issues and problems from about 1.3 million news articles issued by about 10 major domestic presses in Korea from June 2009 to July 2012. Our proposed system consists of (1) collecting and extracting texts from the news articles, (2) identifying only the news articles related to social issues, (3) analyzing the lexical items of Korean sentences, (4) finding a set of topics regarding social keywords over time based on probabilistic topic modeling, (5) matching relevant paragraphs to a given topic, and (6) visualizing social keywords for easy understanding. In particular, we propose a novel matching algorithm relying on generative models, whose goal is to best match paragraphs to each topic.
Technically, using a topic model such as Latent Dirichlet Allocation (LDA), we can obtain a set of topics, each of which has relevant terms and their probability values. In our problem, given a set of text documents (e.g., news articles), LDA yields a set of topic clusters, and each topic cluster is then labeled by human annotators, where each topic label stands for a social keyword. For example, suppose there is a topic Topic1 = {(unemployment, 0.4), (layoff, 0.3), (business, 0.3)} and a human annotator labels it "Unemployment Problem". From the label alone, however, it is hard to understand what actually happened to the unemployment problem in our society; looking only at social keywords, we have no idea of the detailed events taking place. To tackle this, we develop a matching algorithm that computes the probability of a paragraph given a topic, relying on (i) the topic terms and (ii) their probability values. Given a set of text documents, we segment each document into paragraphs and, in parallel, extract a set of topics from the documents using LDA. Our matching process then assigns each paragraph to the topic it best matches, so each topic ends up with several best-matched paragraphs. Suppose, for instance, the topic "Unemployment Problem" has the best-matched paragraph "Up to 300 workers lost their jobs in XXX company at Seoul"; we can then grasp the detailed information behind the social keyword, such as "300 workers", "unemployment", "XXX company", and "Seoul". In addition, our system visualizes social keywords over time. Through this matching process and keyword visualization, researchers can detect social issues easily and quickly.
Using this prototype system, we detected various social issues appearing in our society, and our experimental results showed the effectiveness of the proposed methods. A proof-of-concept system is available at http://dslab.snu.ac.kr/demo.html.
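
As a sketch of the paragraph-to-topic matching idea, one simple proxy for the generative probability is to sum the probabilities of topic terms that appear in a paragraph and assign the paragraph to the highest-scoring topic. The topic contents below reuse the paper's illustrative example; the second topic, the paragraphs, and the scoring rule itself are simplifications, not the authors' exact algorithm.

```python
# Topic term distributions: the first is the paper's example; the second
# ("Welfare") is a hypothetical topic added for contrast.
TOPICS = {
    "Unemployment Problem": {"unemployment": 0.4, "layoff": 0.3, "business": 0.3},
    "Welfare": {"welfare": 0.5, "pension": 0.3, "budget": 0.2},
}

def match_topic(paragraph):
    """Assign a paragraph to the topic whose terms it matches most strongly.
    The score sums the probabilities of topic terms found in the paragraph."""
    words = paragraph.lower().split()
    best_topic, best_score = None, 0.0
    for topic, terms in TOPICS.items():
        score = sum(terms.get(w, 0.0) for w in words)
        if score > best_score:
            best_topic, best_score = topic, score
    return best_topic

p = "up to 300 workers faced layoff as the business closed"
print(match_topic(p))  # Unemployment Problem
```

In the paper, the score is a proper probability under the generative model; this additive version just shows why paragraphs rich in a topic's high-probability terms gravitate to that topic.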

A study on the classification of research topics based on COVID-19 academic research using Topic modeling (토픽모델링을 활용한 COVID-19 학술 연구 기반 연구 주제 분류에 관한 연구)

  • Yoo, So-yeon;Lim, Gyoo-gun
    • Journal of Intelligence and Information Systems
    • /
    • v.28 no.1
    • /
    • pp.155-174
    • /
    • 2022
  • From January 2020 to October 2021, more than 500,000 academic studies related to COVID-19 (the disease caused by the SARS-CoV-2 coronavirus) were published. The rapid increase in the number of COVID-19 papers places time and technical constraints on healthcare professionals and policy makers trying to find important research quickly. In this study, we therefore propose a method of extracting useful information from the text of an extensive literature using the LDA and Word2vec algorithms. Papers related to the keywords to be searched were extracted from the COVID-19 papers, and detailed topics were identified. We used the CORD-19 dataset on Kaggle, a free academic resource prepared by major research groups and the White House in response to the COVID-19 pandemic and updated weekly. The research method has two main parts. First, 41,062 articles were obtained through data filtering and preprocessing of the abstracts of 47,110 academic papers with full text. The number of COVID-19 publications by year was analyzed through exploratory data analysis in Python, and the top 10 most active journals were identified. The LDA and Word2vec algorithms were used to derive COVID-19 research topics, and after analyzing related words, their similarity was measured. Second, from the topics derived over all papers, papers containing 'vaccine' and 'treatment' were extracted: 4,555 papers related to 'vaccine' and 5,971 related to 'treatment'. For each sub-collection, detailed topics were analyzed using LDA and Word2vec, and clustering with PCA dimension reduction was applied, using the t-SNE algorithm to visualize groups of papers with similar themes. A noteworthy result is that topics which did not appear in the topic modeling of the full set of COVID-19 papers did emerge in the topic modeling of the individual sub-collections. For example, topic modeling of the 'vaccine' papers extracted a new topic, Topic 05 'neutralizing antibodies'. A neutralizing antibody protects cells from infection when a virus enters the body and is said to play an important role in the development of therapeutics and vaccines. Likewise, topic modeling of the 'treatment' papers uncovered a new topic, Topic 05 'cytokine'. A cytokine storm occurs when the body's immune cells, instead of defending against an attack, attack normal cells. Hidden topics that could not be found over the whole corpus were thus uncovered by classifying papers by keyword and running topic modeling on each class. In this study, we proposed extracting topics from a large body of literature with the LDA algorithm and extracting similar words with the Skip-gram variant of Word2vec, which predicts context words from a center word. Combining the LDA and Word2vec models aims at better performance by linking a document's LDA topics with its Word2vec word relationships. We also presented an intuitive way of classifying documents: clustering through PCA dimension reduction and t-SNE to group documents with similar themes into a structured organization. At a time when researchers' efforts to overcome COVID-19 cannot keep up with the rapid publication of related academic papers, we hope this will save healthcare professionals and policy makers precious time and effort and help them rapidly gain new insights. It is also expected to serve as basic data for researchers exploring new research directions.
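
A minimal stand-in for the grouping step described above: rather than full LDA plus PCA and t-SNE, this sketch groups abstracts whose bag-of-words cosine similarity exceeds a threshold, just to make the "documents with similar themes form groups" idea concrete. The abstract texts and threshold are invented.

```python
import math
from collections import Counter

# Invented mini-abstracts standing in for CORD-19 papers.
abstracts = {
    "p1": "vaccine trial neutralizing antibodies immune response",
    "p2": "neutralizing antibodies protect cells vaccine development",
    "p3": "cytokine storm immune cells attack treatment",
}

def bow(text):
    """Bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cos(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def group(docs, threshold=0.3):
    """Greedy single-link grouping: a document joins the first existing
    cluster containing a sufficiently similar member, else starts its own."""
    clusters = []
    for name, text in docs.items():
        vec = bow(text)
        for cluster in clusters:
            if any(cos(vec, bow(docs[m])) >= threshold for m in cluster):
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

print(group(abstracts))  # [['p1', 'p2'], ['p3']]
```

The two vaccine-themed texts cluster together while the cytokine text stays separate, mirroring how the paper's t-SNE plot separates thematic groups.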

  • Effects of the Substances Extracted from Dried Mushroom(Lentinus edodes) by Several Organic Solvents on the Stability of Fat (건조(乾燥)표고버섯의 각종(各種) 용매추출물(溶媒抽出物)의 항산화작용(抗酸化作用)의 효과(效果))

    • Ma, Sang-Jo
      • Korean Journal of Food Science and Technology
      • /
      • v.15 no.2
      • /
      • pp.150-154
      • /
      • 1983
    • Mushrooms (Lentinus edodes) that had been dried at 50°C for 20 hours were extracted with small amounts of ethanol, methanol, chloroform, and petroleum ether, respectively. The extracts were then dissolved in edible soybean oil, and the resulting substrates, along with a portion of plain soybean oil (control), were placed in an incubator (37.0±1.0°C) for eight weeks. Peroxide values (POV) and TBA values of the control and the substrates were determined regularly during the storage period. The results were as follows: 1. The moisture content of the mushrooms, 84.88% on a wet basis at harvest, was reduced to 15.12% after drying. 2. The alcohol extracts were effective in retarding POV development. 3. There was not much difference among the TBA values after 14 days, but significant differences between the control and the extract-containing substrates were observed over longer storage; in the later stage, the TBA values of the substrates containing the ethanol and methanol extracts were smaller than those containing the petroleum ether and chloroform extracts. 4. In view of the POV and TBA developments, ethanol and methanol were more effective solvents than chloroform and petroleum ether for extracting antioxidant compounds from the dried mushroom.


    Anti-Oxidative and Anti-Obesity Effect of Combined Extract and Individual Extract of Samjunghwan (혼합추출 및 개별추출 방식의 삼정환의 항산화 및 항비만효과)

    • Han, Kyungsun;Wang, Jinghwa;Lim, Dongwoo;Chin, Young-Won;Choi, Young Hee;Choi, Han-Seok;Lee, Myeong-Jong;Kim, Hojun
      • Journal of Korean Medicine for Obesity Research
      • /
      • v.14 no.2
      • /
      • pp.47-54
      • /
      • 2014
    • Objectives: This study confirms the anti-oxidative and anti-obesity effects of a combined extract and individual extracts of Samjunghwan (SJH). Methods: A combined ethanol extract of ready-made SJH was compared with individual ethanol extracts of Atractylodes japonica, Cortex lycii radicis, and Morus alba Linne that were combined after extraction. To evaluate the anti-oxidative effect of SJH, total phenol compound and 2,2-diphenyl-1-picrylhydrazyl (DPPH) free radical scavenging assays were conducted. Real-time quantitative polymerase chain reaction analysis of the transcription factor peroxisome proliferator-activated receptor γ (PPARγ), adenosine monophosphate-activated protein kinase (AMPK)-α1, tumor necrosis factor-α (TNF-α), and 3-hydroxy-3-methylglutaryl CoA reductase (HMG-CoA reductase) was performed in 3T3-L1 cells to investigate the anti-obesity effect. Cell viability analysis was also done to assess the toxicity of SJH. Results: The individual extract of SJH showed a significant decrease in TNF-α and AMPK transcription, while PPARγ showed a significant increase. Both the combined extract and the individual extract of SJH decreased HMG-CoA reductase. DPPH free radical scavenging ability and total phenol compound content were analogous between the two groups. Conclusions: The individual extract of SJH appears to be more effective in anti-oxidation and anti-obesity than the combined extract of SJH.

    Wave Analysis and Spectrum Estimation for the Optimal Design of the Wave Energy Converter in the Hupo Coastal Sea (파력발전장치 설계를 위한후포 연안의 파랑 분석 및 스펙트럼 추정)

    • Kweon, Hyuck-Min;Cho, Hongyeon;Jeong, Weon-Mu
      • Journal of Korean Society of Coastal and Ocean Engineers
      • /
      • v.25 no.3
      • /
      • pp.147-153
      • /
      • 2013
    • Various types of WEC (Wave Energy Converter) exist, and among them the point absorber is the most widely investigated. However, it is difficult to find examples of systematically measured data analysis for the design of a point-absorber power buoy anywhere in the world. This study investigates the wave load acting on the point-absorber resonance power buoy wave energy extraction system proposed by Kweon et al. (2010), analyzing time-series spectra of three years of wave data (2002.05.01-2005.03.29) measured with a pressure-type wave gauge on the seaward side of the north breakwater of Hupo harbor on the east coast of the Korean peninsula. The analysis shows that monthly variations in wave period and wave height are apparent and that monthly wave power is unevenly distributed over the year. The average wave steepness of the usual waves was 0.01, lower than the 0.02-0.04 typical of the wind-wave range. The mode of the average wave period was 5.31 sec, while the mode of the corresponding wave height was 0.29 m. The occurrence probability of the peak period is bi-modal, with mode values of 4.47 sec and 6.78 sec. The design wave period can be selected from the above values. About 95% of measured wave heights are below 1 m. The study finds that a resonance power buoy system is necessary in coastal areas with low wave energy and that an optimal design overcoming the uneven monthly distribution of wave power is a major task in the development of a WEF (Wave Energy Farm). Since the average spectrum of the usual waves could not be expressed with a standard spectrum equation, the study proposes a new three-parameter spectrum equation, which provides basic data for predicting power production with a wave power buoy and for fatigue analysis of the system.
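
The reported steepness figure can be sanity-checked with the deep-water relation L = gT²/(2π) and s = H/L. Plugging in the modal values from the abstract (T = 5.31 s, H = 0.29 m) gives a steepness of the same order as the reported 0.01; this is only a back-of-the-envelope check, not the paper's spectral analysis.

```python
import math

g = 9.81  # standard gravity, m/s^2

def steepness(height_m, period_s):
    """Wave steepness s = H / L with deep-water wavelength L = g*T^2/(2*pi)."""
    wavelength = g * period_s ** 2 / (2 * math.pi)
    return height_m / wavelength

s = steepness(0.29, 5.31)
print(round(s, 3))  # 0.007, the same order as the reported 0.01
```

The modal wave (H = 0.29 m, T = 5.31 s) is indeed far below the 0.02-0.04 steepness of typical wind waves, consistent with the abstract's characterization of a low-energy coastal site.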

    Decision Making on the Non surgical, Surgical Treatment on Chronic Adult Periodontitis (만성 성인성 치주염 치료시 비외과적, 외과적 방법에 대한 의사결정)

    • Song, Si-Eun;Li, Seung-Won;Cho, Kyoo-Sung;Chai, Jung-Kiu;Kim, Chong-Kwan
      • Journal of Periodontal and Implant Science
      • /
      • v.28 no.4
      • /
      • pp.645-660
      • /
      • 1998
    • The purpose of this study was to construct and test a decision-making process based on patient-oriented utilitarianism for treating patients with chronic adult periodontitis. Fifty subjects were chosen at Yonsei Dental Hospital and another fifty at Severance Dental Hospital according to the selection criteria; fifty-four patients agreed to participate. The NS group (N=32) was treated with scaling and root planing without any surgical intervention; the S group (N=22) received flap operations. During active treatment and healing, all patients in both groups were educated about the importance of oral hygiene and checked at every visit. When periodontal treatment was needed according to the diagnostic results, some patients received professional tooth cleaning and scaling once every 3 months under an individually designed oral hygiene protocol. Probing depth was recorded at baseline and 18 months after treatment. A questionnaire with six kinds of questions (hygienic easiness, hypersensitivity, post-treatment comfort, complications, functional comfort, compliance) was given to each patient to obtain a subjective evaluation of the therapy. A decision tree for the treatment of adult periodontal disease was built from the results of the two kinds of periodontal treatment and the patients' subjective evaluations. The optimal path was calculated using the success rates of the results as probabilities, with utilities based on relative value and the economic value in the insurance system. The success rate for achieving the diagnostic goal of periodontal treatment, a remaining pocket depth of 3 mm or less without bleeding on probing, was 0.83±0.12 for non-surgical treatment and 0.82±0.14 for surgical treatment, without any statistically significant difference. The rate of moderate success, a remaining probing pocket depth of 4 mm or more, was 0.17 for both.
The utilities of the non-surgical treatment results were 100 for a remaining probing pocket depth of 3 mm or less, 80 for results of 4 mm or more, and 0 for extraction; those of the surgical treatment results were the same except that results of 4 mm or more were assigned 75. The pooled questionnaire results showed 60% satisfaction and 40% dissatisfaction in the non-surgical group, and 33% and 67% respectively in the surgical group. The utilities for the four satisfaction levels were 100, 75, 60, and 50, on the assumption that patients' satisfaction levels follow a normal distribution. The optimal path was then found by rolling back the decision tree, weighting the utility at each terminal node by the success rate and the distribution of patients' satisfaction levels. Both calculations selected non-surgical treatment. Therefore, non-surgical treatment may be the optimal path in this decision tree if the goal of periodontal treatment is a remaining probing pocket depth of 3 mm or less for adult chronic periodontitis, and if the utilitarian aim of maximizing the patients' expected utility is adopted.
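
The rollback described above reduces to an expected-utility calculation: multiply each outcome's probability by its utility and sum. Using only the success rates and outcome utilities quoted in the abstract (and omitting the satisfaction-level weighting for brevity), a short sketch reproduces the ordering in favor of non-surgical treatment.

```python
def expected_utility(outcomes):
    """Expected utility: sum of probability x utility over the outcomes."""
    return sum(p * u for p, u in outcomes)

# Non-surgical branch: P(pocket <= 3 mm) = 0.83 at utility 100,
#                      P(pocket >= 4 mm) = 0.17 at utility 80.
eu_nonsurgical = expected_utility([(0.83, 100), (0.17, 80)])

# Surgical branch: P(pocket <= 3 mm) = 0.82 at utility 100,
#                  P(pocket >= 4 mm) = 0.17 at utility 75.
eu_surgical = expected_utility([(0.82, 100), (0.17, 75)])

print(round(eu_nonsurgical, 2), round(eu_surgical, 2))  # 96.6 94.75
```

Even without the satisfaction weighting, the non-surgical branch comes out ahead, matching the study's conclusion; the full analysis would roll back the satisfaction utilities the same way.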


    Using the METHONTOLOGY Approach to a Graduation Screen Ontology Development: An Experiential Investigation of the METHONTOLOGY Framework

    • Park, Jin-Soo;Sung, Ki-Moon;Moon, Se-Won
      • Asia pacific journal of information systems
      • /
      • v.20 no.2
      • /
      • pp.125-155
      • /
      • 2010
    • Ontologies have been adopted in various business and scientific communities as a key component of the Semantic Web. Despite their increasing importance, ontology developers still perceive construction as a challenge. A clearly defined and well-structured methodology can reduce the time required to develop an ontology and increase a project's probability of success. However, no reliable knowledge-engineering methodology for ontology development currently exists; every methodology has been tailored to the development of a particular ontology. In this study, we developed a Graduation Screen Ontology (GSO). The graduation screen domain was chosen for several reasons. First, the graduation screen process is a complicated task requiring complex reasoning. Second, GSO may be reused by other universities because the graduation screen process is similar at most universities. Finally, GSO can be built within a given period because the size of the selected domain is reasonable. Since no standard ontology development methodology exists, one of the existing methodologies had to be chosen. The most important considerations in selecting the methodology for GSO were whether it can be applied to a new domain, whether it covers a broad set of development tasks, and whether it gives a sufficient explanation of each task. We evaluated various ontology development methodologies using the evaluation framework proposed by Gómez-Pérez et al. and concluded that METHONTOLOGY was the most applicable to building GSO. METHONTOLOGY derives from the experience of developing the Chemical Ontology at the Polytechnic University of Madrid by Fernández-López et al. and is regarded as the most mature ontology development methodology.
METHONTOLOGY describes a very detailed approach to building an ontology at the conceptual level under a centralized development environment. The methodology consists of three broad processes, each with specific sub-processes: management (scheduling, control, and quality assurance); development (specification, conceptualization, formalization, implementation, and maintenance); and support (knowledge acquisition, evaluation, documentation, configuration management, and integration). An ontology development language and tool for GSO construction also had to be selected. We adopted OWL-DL as the ontology development language because of its computational support for consistency checking and classification, which is crucial for developing coherent and useful ontological models of very complex domains. Protégé-OWL was chosen as the development tool because it is supported by METHONTOLOGY and widely used for its platform independence. Based on the researchers' experience developing GSO, several issues concerning METHONTOLOGY, OWL-DL, and Protégé-OWL were identified. We focus on the drawbacks of METHONTOLOGY and discuss how each weakness could be addressed. First, METHONTOLOGY insists that domain experts without ontology construction experience can easily build ontologies; in practice, it is still difficult for them to develop a sophisticated ontology, especially with insufficient background knowledge. Second, METHONTOLOGY lacks a "feasibility study" stage. This pre-development stage helps developers ensure that a planned ontology is necessary and valuable enough to justify a building project, and helps determine whether the project will succeed.
Third, METHONTOLOGY excludes any explanation of the use and integration of existing ontologies; if an additional stage for considering reuse were introduced, developers could share the benefits of reuse. Fourth, METHONTOLOGY fails to address the importance of collaboration: it needs to explain how specific tasks are allocated to different developer groups and how their results are combined once the jobs are completed. Fifth, METHONTOLOGY does not sufficiently describe the methods and techniques applied in the conceptualization stage; introducing methods for extracting concepts from multiple informal sources or for identifying relations could enhance ontology quality. Sixth, METHONTOLOGY provides no evaluation process to confirm whether WebODE correctly transforms a conceptual ontology into a formal one, nor does it guarantee that the outcomes of the conceptualization stage are completely reflected in the implementation stage. Seventh, METHONTOLOGY needs criteria for user evaluation of the constructed ontology in actual use. Eighth, although METHONTOLOGY allows continual knowledge acquisition during development, consistent updates can be difficult for developers. Ninth, METHONTOLOGY demands that developers complete various documents during the conceptualization stage, making it a heavy methodology; adopting an agile approach would reinforce active communication among developers and reduce the documentation burden. Finally, this study concludes with contributions and practical implications. No previous research has addressed issues with METHONTOLOGY from empirical experience; this study is an initial attempt, and several lessons learned from the development experience are discussed.
This study also affords some insights for ontology methodology researchers who want to design a more advanced ontology development methodology.

    Antioxidant Activity of the Oven-dried Paprika Powders with Various Colors and Physicochemical Properties and Antioxidant Activity of Pork Patty Containing Various Paprika Powder (파프리카의 색이 열풍 건조한 파프리카 분말을 첨가한 돈육 분쇄육의 이화학적 특성과 항산화 활성 평가)

    • Shim, Yong Woo;Chin, Koo Bok
      • Food Science of Animal Resources
      • /
      • v.33 no.5
      • /
      • pp.626-632
      • /
      • 2013
    • This study was performed to determine the antioxidant activity of oven-dried paprika powder as affected by paprika color, and to evaluate the physicochemical characteristics and antioxidant activity of pork patties containing various levels of paprika powder. The total phenolic contents of the paprika were not affected by color or solvent (p>0.05). The methanol-extracted paprika powder showed higher DPPH radical scavenging activity than its water-extracted counterpart, and no differences from the reference (ascorbic acid) were observed at a concentration of 0.5% (p>0.05). In all treatments, iron-chelating ability increased with concentration. At a concentration of 1.0%, methanol extracts of orange paprika (MOP) and water extracts of red paprika (WRP) were not different from the reference (ethylenediaminetetraacetic acid, EDTA). Paprika color and extraction solvent did not affect the reducing power of the paprika powder at any concentration (p>0.05). Pork patties with red paprika powder had higher redness values than those with orange paprika, regardless of addition level. The addition of red paprika increased yellowness, and patties with 1.0% orange paprika powder showed the highest value. TBARS values decreased with increasing paprika powder; in particular, patties with 1.0% paprika powder had lower TBARS values than those with 0.5%, similar to those with ascorbic acid (p>0.05). Although microbial counts increased with storage time, the paprika powders did not inhibit microorganisms during storage. In conclusion, paprika powder could be used as a natural antioxidant in meat products, regardless of paprika color.


    (34141) Korea Institute of Science and Technology Information, 245, Daehak-ro, Yuseong-gu, Daejeon
    Copyright (C) KISTI. All Rights Reserved.