• Title/Summary/Keyword: 한국어 질의 (Korean-language query)

Search Result 429, Processing Time 0.021 seconds

A Study on Speech Recognition Using the HM-Net Topology Design Algorithm Based on Decision Tree State-clustering (결정트리 상태 클러스터링에 의한 HM-Net 구조결정 알고리즘을 이용한 음성인식에 관한 연구)

  • 정현열;정호열;오세진;황철준;김범국
    • The Journal of the Acoustical Society of Korea / v.21 no.2 / pp.199-210 / 2002
  • In this paper, we study speech recognition using the HM-Net topology design algorithm based on decision tree state-clustering, with the aim of improving the performance of acoustic models. Korean has many allophonic and grammatical rules compared with other languages, so we investigated the allophonic variations defined in Korean phonetics and constructed a phoneme question set for the phonetic decision tree. The basic idea of the HM-Net topology design algorithm is to start from the basic structure of the SSS (Successive State Splitting) algorithm and split again the states of pre-constructed context-dependent acoustic models. That is, it generates a phonetic decision tree, using the phoneme question set, for each state of the models, and iteratively trains the state sequences of the context-dependent acoustic models using the PDT-SSS (Phonetic Decision Tree-based SSS) algorithm. To verify the effectiveness of the algorithm, we carried out speech recognition experiments on 452 words from the Center for Korean Language Engineering corpus (KLE452) and 200 sentences from an air flight reservation task (YNU200). Experimental results show that recognition accuracy improved progressively with the number of states after splitting, in the phoneme, word, and continuous speech recognition experiments respectively. We obtained average phoneme and word recognition accuracies of 71.5% and 99.2%, respectively, with 2,000 states, and an average continuous speech recognition accuracy of 91.6% with 800 states. We also carried out word recognition experiments using HTK (HMM Toolkit), which performs state tying, to compare it with the parameter sharing of the HM-Net topology design algorithm.
In the word recognition experiments, the HM-Net topology design algorithm achieved recognition accuracy an average of 4.0% higher than the context-dependent acoustic models generated by HTK, implying its effectiveness.
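The likelihood-gain criterion behind the decision-tree state clustering described in this abstract can be sketched in a few lines. This is a hypothetical toy, not the paper's PDT-SSS implementation: it uses 1-D acoustic values, single ML-fit Gaussians per node, and invented phone sets and question names.

```python
import math

def gaussian_ll(values):
    """Log-likelihood of samples under a single ML-fit Gaussian (1-D toy)."""
    n = len(values)
    if n < 2:
        return 0.0
    mean = sum(values) / n
    var = max(sum((v - mean) ** 2 for v in values) / n, 1e-6)
    return -0.5 * n * (math.log(2 * math.pi * var) + 1.0)

def best_split(samples, questions):
    """Pick the phonetic question with the largest likelihood gain.

    samples:   list of (context_phone, acoustic_value) pairs pooled in one state
    questions: dict question_name -> set of phones answering 'yes'
    """
    parent_ll = gaussian_ll([v for _, v in samples])
    best = None
    for name, phones in questions.items():
        yes = [v for p, v in samples if p in phones]
        no = [v for p, v in samples if p not in phones]
        if not yes or not no:
            continue  # a split must leave data on both sides
        gain = gaussian_ll(yes) + gaussian_ll(no) - parent_ll
        if best is None or gain > best[1]:
            best = (name, gain)
    return best
```

Repeating this greedy choice on each resulting state, until a state-count budget (e.g. the paper's 800 or 2,000 states) is reached, is the general shape of decision-tree topology design.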

KB-BERT: Training and Application of Korean Pre-trained Language Model in Financial Domain (KB-BERT: 금융 특화 한국어 사전학습 언어모델과 그 응용)

  • Kim, Donggyu;Lee, Dongwook;Park, Jangwon;Oh, Sungwoo;Kwon, Sungjun;Lee, Inyong;Choi, Dongwon
    • Journal of Intelligence and Information Systems / v.28 no.2 / pp.191-206 / 2022
  • Recently, it has become a de facto approach to utilize a pre-trained language model (PLM) to achieve state-of-the-art performance on various natural language tasks (called downstream tasks) such as sentiment analysis and question answering. However, like any other machine learning method, a PLM tends to depend on the data distribution seen during the training phase and shows worse performance on unseen (out-of-distribution) domains. For this reason, there have been many efforts to develop domain-specific PLMs for fields such as the medical and legal industries. In this paper, we discuss the training of a finance-domain PLM for the Korean language and its applications. Our finance-domain PLM, KB-BERT, is trained on a carefully curated financial corpus that includes domain-specific documents such as financial reports. We provide extensive performance evaluation results on three natural language tasks: topic classification, sentiment analysis, and question answering. Compared with state-of-the-art Korean PLMs such as KoELECTRA and KLUE-RoBERTa, KB-BERT shows comparable performance on general datasets based on common corpora like Wikipedia and news articles. Moreover, KB-BERT outperforms the compared models on finance-domain datasets that require finance-specific knowledge to solve the given problems.
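The out-of-distribution weakness that motivates a domain-specific PLM can be illustrated with a deliberately tiny stand-in classifier. Everything here is hypothetical: a word-count Naive Bayes model stands in for a PLM, and the token lists and "finance jargon" examples are invented, chosen only to show how unseen domain vocabulary defeats a general-domain model until domain data is added.

```python
import math
from collections import Counter

def train_nb(docs):
    """Fit per-class token counts for a tiny Naive Bayes sentiment model.

    docs: list of (tokens, label) pairs, label in {"pos", "neg"}.
    """
    counts = {"pos": Counter(), "neg": Counter()}
    for tokens, label in docs:
        counts[label].update(tokens)
    return counts

def classify(counts, tokens):
    """Add-one-smoothed log class scores; ties fall back to 'pos'."""
    def score(label):
        total = sum(counts[label].values()) + 1
        return sum(math.log((counts[label][t] + 1) / (total + 1)) for t in tokens)
    return "pos" if score("pos") >= score("neg") else "neg"
```

A model trained only on general reviews has no evidence about finance-specific words like "downgrade", so such reviews collapse to the tie-breaking default; adding finance-domain training data fixes this, which is the same distributional argument the abstract makes for KB-BERT at much larger scale.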

Prevalence and Its Correlates of Restless Legs Syndrome in Outpatients with Bipolar Disorders (양극성장애 환자의 하지불안증후군 유병율과 관련 특성)

  • Lee, Neung-Se;Yoon, Bo-Hyun;Lee, Hyun Jae;Sea, Young-Hwa;Song, Je-Heon;Park, Suhee;Lee, Ji Seon
    • Korean Journal of Psychosomatic Medicine / v.22 no.2 / pp.121-129 / 2014
  • Objectives: This study aimed to assess the prevalence and correlates of restless legs syndrome (RLS) in outpatients with bipolar disorder. Methods: A total of 100 clinically stabilized bipolar outpatients were examined. The presence of RLS and its severity were assessed using the International Restless Legs Syndrome Study Group (IRLSSG) diagnostic criteria. Beck's Depression Inventory (BDI), Spielberger's State Anxiety Inventory (STAI-X-1), the Pittsburgh Sleep Quality Index (PSQI), the Korean version of the Drug Attitude Inventory (KDAI-10), the Subjective Well-Being under Neuroleptic Treatment Scale-Short Form (SWN-K), and the Barnes Akathisia Rating Scale (BARS) were used to evaluate depressive symptomatology, level of anxiety, subjective quality of sleep, subjective feeling of well-being, drug attitude, and presence of akathisia, respectively. Results: Of the 100 bipolar outpatients, 7 (7%) met the full IRLSSG criteria and 36 (36%) met at least one of the 4 IRLSSG criteria. Because of the relatively small sample size, non-parametric analyses were performed to compare characteristics among the 3 groups (full RLS, ≥1 positive RLS symptom, and non-RLS). There were no significant differences in sex, age, or other sociodemographic and clinical data among the 3 groups. BDI, STAI-X-1, and PSQI scores tended to be worse in the RLS and ≥1 positive RLS-symptom groups. Conclusions: This is the first preliminary study of the prevalence and correlates of RLS in bipolar disorder. The results show that a relatively small proportion of bipolar disorder patients had RLS compared with patients with schizophrenia. As in schizophrenic patients, bipolar patients with RLS had more depressive symptoms, higher state anxiety, and poorer subjective sleep quality. Further systematic studies are needed to identify the characteristics of RLS in bipolar patients.


Relationship between Dyspnea and Disease Severity, Quality of Life, and Social Factor in Patients with Chronic Obstructive Pulmonary Disease (만성폐쇄성폐질환자에서 질병 중증도 및 삶의 질을 비롯한 사회적 요인과 호흡곤란과의 관계)

  • Kim, Eun-Jin;Park, Jae-Hyung;Yoon, Suk-Jin;Lee, Seung-Jun;Cha, Seung-Ick;Park, Jae-Yong;Jung, Tae-Hoon;Kim, Chang-Ho
    • Tuberculosis and Respiratory Diseases / v.60 no.4 / pp.397-403 / 2006
  • Background: Chronic obstructive pulmonary disease (COPD) is categorized by the percentage of predicted FEV1 (forced expiratory volume in one second), which is highly correlated with disease severity (morbidity and mortality). In COPD patients, however, dyspnea does not always track disease severity. We investigated whether dyspnea is correlated with disease severity as measured by FEV1, with quality of life (QoL), occupation, and the level of support from family members and neighbors. Method: Thirty-six clinically stable patients with chronic irreversible airflow limitation were enrolled. We used the Medical Research Council (MRC) dyspnea scale to assess the level of dyspnea and the Korean version of the St. George's Respiratory Questionnaire (SGRQ) to measure QoL. Result: The mean percentage of predicted FEV1 was 32.0%. Dyspnea was not correlated with GOLD stage based on FEV1 (p=0.114). As the level of dyspnea worsened, the symptom (p=0.041), activity (p=0.004), and impact (p=0.001) scores and the total SGRQ score (p<0.001) increased significantly. Dyspnea was not correlated with the level of occupation (p=0.259). The level of support from family members and neighbors was significantly negatively correlated with the dyspnea scale (p=0.011). Conclusion: In managing COPD patients, we should remember that the level of subjective dyspnea is correlated with QoL (symptoms, activity, and impact on society) and with the level of social support, as well as with GOLD stage (FEV1).

Discovery of Genre Information on the Web (웹 상에서의 특정 장르 문서 발견)

  • Joo, Won-Kyun;Myaeng, Sung-Hyon
    • Annual Conference on Human and Language Technology / 1999.10e / pp.28-35 / 1999
  • With the growth of the Web, which was proposed for information sharing, useful information has appeared on the Web at an exponential rate, and the resulting expansion of the information space has led to a decline in retrieval reliability. To help users discover information in the large-scale Web environment, this study introduces the concept of discovering documents of a specific genre using new cues beyond text. The appearance of the genre a user wants to find is first represented by genre-identifying feature values such as text, URL information, link information, and document structure information; documents of that genre are then retrieved by measuring the genre relevance of candidate documents. Each genre-identifying feature value is computed by its own method and lies between 0 and 1, and the overall genre relevance is obtained by combining the evidence from the individual feature values. This paper examines the role of each genre-identifying feature and its effect on genre discovery, and an experimental model was designed and implemented to show the improvement in retrieval reliability when discovering documents of a specific genre. The experiments target Web documents; since no test collection with full URL and link information yet existed, a collection built directly from ordinary Web documents was used. The target genre was "conference homepages in the computer field", and 30 computer-field queries were selected. The general Web search engine AltaVista and the meta-search engine MetaCrawler were chosen for comparison, and precision was evaluated over the top 30 results for each query. The results demonstrate that every genre-identifying feature contributes to improving retrieval reliability, and that the proposed method improves retrieval reliability over AltaVista and MetaCrawler by an average of 67.34% and 71.78%, respectively.
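The combination of per-cue scores in the 0-1 range described above can be sketched as follows. The paper's actual evidence-combination scheme is not reproduced here; a normalized weighted sum stands in for it, and the cue names, weights, and example pages are all hypothetical.

```python
def genre_relevance(scores, weights=None):
    """Combine per-cue genre scores (each in [0, 1]) into one relevance value.

    scores: dict cue -> value, e.g. {"text": .., "url": .., "link": .., "structure": ..}
    A normalized weighted sum stands in for the paper's evidence combination.
    """
    if weights is None:
        weights = {k: 1.0 for k in scores}  # equal weight per cue by default
    total = sum(weights[k] for k in scores)
    return sum(weights[k] * v for k, v in scores.items()) / total

def rank_candidates(pages):
    """Order (name, cue_scores) candidate pages by combined relevance, best first."""
    return sorted(pages, key=lambda p: genre_relevance(p[1]), reverse=True)
```

The point of combining cues is visible even in this toy: a page whose URL and document structure both look like a conference homepage outranks a page that merely matches on text.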


Text integration processing based on connectives in Aphasics (실어증 환자의 접속사 정보처리에 관한 연구)

  • Kim, Soo-Jeong;Moon, Young-Sun;Kim, Mi-Ra;Kim, Yoo-Jeong;Nam, Ki-Chun
    • Annual Conference on Human and Language Technology / 1999.10e / pp.441-446 / 1999
  • This study was conducted to examine whether text integration through connectives is handled by different processing, or by different modular structures in language processing, depending on the type of logical inference involved. It was also conducted to assess whether inference through connectives is impaired differently depending on the type of aphasia. The patients who participated in the experiment were Wernicke (receptive) aphasics, global aphasics, and Broca (expressive) aphasics. Two kinds of tasks were used: a fill-in task, in which the connective expressing the logical relation between two adjacent sentences had to be filled in, and a correctness-judgment task, in which patients judged whether a text containing a connective was correct. The connectives used in the experimental materials were '그리고' (and), which adds information, '그러나' (but), which marks a contrastive relation, and '그래서' (so), which expresses a causal relation; these three connectives represent different logical relations. The results showed that the aphasic patients made more errors overall on the correctness-judgment task than on the fill-in task, and that Wernicke aphasics made more errors than Broca aphasics. Patients also made more errors on texts containing '그리고' than on the other two connectives. An interesting finding was a double dissociation: Broca aphasics performed better on texts containing '그러나' than on texts containing '그래서', whereas global aphasics performed better on texts containing '그래서' than on texts containing '그러나'. This result suggests that different kinds of processing take place depending on the type of logical relation between the two sentences.


Korean Food Review Analysis Using Large Language Models: Sentiment Analysis and Multi-Labeling for Food Safety Hazard Detection (대형 언어 모델을 활용한 한국어 식품 리뷰 분석: 감성분석과 다중 라벨링을 통한 식품안전 위해 탐지 연구)

  • Eun-Seon Choi;Kyung-Hee Lee;Wan-Sup Cho
    • The Journal of Bigdata / v.9 no.1 / pp.75-88 / 2024
  • Recently, there have been cases reported in the news of individuals experiencing food poisoning symptoms after consuming raw beef purchased from online platforms, and reviews claiming that cherry tomatoes tasted bitter. This suggests that food reviews on online platforms can be analyzed to detect food hazards, enabling government agencies, food manufacturers, and distributors to manage consumer food safety risks. This study proposes a classification model that uses sentiment analysis and large language models to analyze food reviews, detect negative ones, and multi-label key food safety hazards (food poisoning, spoilage, chemical odors, foreign objects). The sentiment analysis model effectively minimized the misclassification of negative reviews, achieving a low false-positive rate with a 'funnel' model. The multi-label model for food safety hazards showed high performance, with both recall and accuracy over 96% when using GPT-4 Turbo compared with GPT-3.5. Government agencies, food manufacturers, and distributors can use the proposed model to monitor consumer reviews in real time, detect potential food safety issues early, and manage risks. Such a system can protect corporate brand reputation, enhance consumer protection, and ultimately improve consumer health and safety.
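The two-stage 'funnel' structure (sentiment filter first, hazard multi-labeling only on negative reviews) can be sketched as a pipeline. The keyword stubs below are hypothetical stand-ins for the paper's LLM calls; only the four hazard categories come from the abstract, and the cue phrases are invented.

```python
# Hazard categories from the abstract; the cue phrases are invented stand-ins
# for what an LLM classifier would actually recognize.
HAZARDS = {
    "food_poisoning": ["food poisoning", "stomach ache", "vomit"],
    "spoilage": ["spoiled", "rotten", "sour smell"],
    "chemical_odor": ["chemical smell", "plastic smell"],
    "foreign_object": ["hair", "plastic piece", "metal"],
}

def is_negative(review):
    """Stage 1 of the funnel: sentiment filter.

    A keyword stub standing in for the paper's LLM-based sentiment model.
    """
    return any(w in review for w in ("bad", "terrible", "sick", "rotten", "smell", "hair"))

def label_hazards(review):
    """Stage 2: multi-label hazard tagging, run only on negative reviews."""
    if not is_negative(review):
        return []  # positive reviews never reach the hazard labeler
    return [h for h, cues in HAZARDS.items() if any(c in review for c in cues)]
```

The design point of the funnel is that the cheaper first stage discards the bulk of (positive) reviews, so the more expensive multi-label stage only sees candidates that might actually describe a hazard.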

Restoring Omitted Sentence Constituents in Encyclopedia Documents Using Structural SVM (Structural SVM을 이용한 백과사전 문서 내 생략 문장성분 복원)

  • Hwang, Min-Kook;Kim, Youngtae;Ra, Dongyul;Lim, Soojong;Kim, Hyunki
    • Journal of Intelligence and Information Systems / v.21 no.2 / pp.131-150 / 2015
  • Omission of noun phrases for obligatory cases is a common phenomenon in Korean and Japanese sentences that is not observed in English. When an argument of a predicate can be filled with a noun phrase co-referential with the title, the argument is more easily omitted in encyclopedia texts. The omitted noun phrase is called a zero anaphor or zero pronoun. Encyclopedias like Wikipedia are a major source for information extraction by intelligent application systems such as information retrieval and question answering systems. However, the omission of noun phrases degrades the quality of information extraction. This paper deals with the problem of developing a system that can restore omitted noun phrases in encyclopedia documents. The problem our system deals with is very similar to zero anaphora resolution, one of the important problems in natural language processing. A noun phrase existing in the text that can be used for restoration is called an antecedent; an antecedent must be co-referential with the zero anaphor. While the candidates for the antecedent are only noun phrases in the same text in zero anaphora resolution, the title is also a candidate in our problem. In our system, the first stage is in charge of detecting the zero anaphor. In the second stage, antecedent search is carried out over the candidates. If antecedent search fails, an attempt is made, in the third stage, to use the title as the antecedent. The main characteristic of our system is its use of a structural SVM for finding the antecedent. The noun phrases in the text that appear before the position of the zero anaphor comprise the search space. The main technique used in previous research is to perform binary classification over all the noun phrases in the search space; the noun phrase classified as an antecedent with the highest confidence is selected.
However, we propose in this paper that antecedent search be viewed as the problem of assigning antecedent-indicator labels to a sequence of noun phrases. In other words, sequence labeling is employed for antecedent search in the text; we are the first to suggest this idea. To perform sequence labeling, we use a structural SVM that receives a sequence of noun phrases as input and returns a sequence of labels as output. An output label takes one of two values: one indicating that the corresponding noun phrase is the antecedent, and the other indicating that it is not. The structural SVM we used is based on the modified Pegasos algorithm, which exploits a subgradient descent methodology for optimization problems. To train and test our system, we selected a set of Wikipedia texts and constructed an annotated corpus providing gold-standard answers such as zero anaphors and their possible antecedents. Training examples were prepared from the annotated corpus and used to train the SVMs and test the system. For zero anaphor detection, sentences are parsed by a syntactic analyzer, and omitted subject or object arguments are identified; thus the performance of our system depends on that of the syntactic analyzer, which is a limitation of our system. When an antecedent is not found in the text, our system tries to use the title to restore the zero anaphor, based on binary classification using a regular SVM. The experiment showed that our system's performance is F1 = 68.58%, which means a state-of-the-art system can be developed with our technique. Future work enabling the system to utilize semantic information is expected to lead to a significant performance improvement.
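The difference between independent binary classification and joint sequence labeling can be made concrete with the inference step alone. This sketch assumes per-candidate scores are already given; in the paper they come from a structural SVM trained with a Pegasos-style subgradient method, which is not reproduced here.

```python
def best_labeling(scores):
    """Choose an antecedent-indicator labeling for a candidate sequence.

    scores[i] is a model score for 'noun phrase i is the antecedent'.
    Unlike independent per-candidate binary classification, the whole
    sequence is labeled jointly under the constraint that at most one
    position is marked as the antecedent (1 = antecedent, 0 = not).
    """
    if not scores:
        return []
    best_i = max(range(len(scores)), key=lambda i: scores[i])
    labels = [0] * len(scores)
    if scores[best_i] > 0:  # mark a position only if the model favors it
        labels[best_i] = 1
    return labels
```

The joint view enforces globally consistent output (never two antecedents, possibly none, in which case the system falls back to the title), whereas thresholding each candidate independently cannot guarantee this.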

Knowledge Extraction Methodology and Framework from Wikipedia Articles for Construction of Knowledge-Base (지식베이스 구축을 위한 한국어 위키피디아의 학습 기반 지식추출 방법론 및 플랫폼 연구)

  • Kim, JaeHun;Lee, Myungjin
    • Journal of Intelligence and Information Systems / v.25 no.1 / pp.43-61 / 2019
  • The development of artificial intelligence technologies has accelerated with the Fourth Industrial Revolution, and AI research has been actively conducted in a variety of fields such as autonomous vehicles, natural language processing, and robotics. Since the 1950s, this research has focused on solving cognitive problems related to human intelligence, such as learning and problem solving. The field of artificial intelligence has achieved more technological advances than ever, due to recent interest in the technology and research on various algorithms. The knowledge-based system is a sub-domain of artificial intelligence; it aims to enable AI agents to make decisions by using machine-readable and processable knowledge constructed from complex and informal human knowledge and rules in various fields. A knowledge base is used to optimize information collection, organization, and retrieval, and it is now used together with statistical artificial intelligence such as machine learning. Recently, the purpose of a knowledge base has been to express, publish, and share knowledge on the web by describing and connecting web resources such as pages and data. Such knowledge bases are used for intelligent processing in various AI applications, such as the question answering systems of smart speakers. However, building a useful knowledge base is time-consuming and still requires a great deal of expert effort. In recent years, much knowledge-based AI research and technology has used DBpedia, one of the biggest knowledge bases, which aims to extract structured content from the various information in Wikipedia. DBpedia contains various information extracted from Wikipedia, such as titles, categories, and links, but the most useful knowledge comes from Wikipedia infoboxes, which present a user-created summary of some unifying aspect of an article.
This knowledge is created by mapping rules between infobox structures and the DBpedia ontology schema defined in the DBpedia Extraction Framework. In this way, DBpedia can expect high reliability in terms of knowledge accuracy, since the knowledge is generated from semi-structured infobox data created by users. However, since only about 50% of all wiki pages in Korean Wikipedia contain an infobox, DBpedia has limitations in terms of knowledge scalability. This paper proposes a method to extract knowledge from text documents according to the ontology schema using machine learning. To demonstrate the appropriateness of this method, we describe a knowledge extraction model that follows the DBpedia ontology schema by learning from Wikipedia infoboxes. Our knowledge extraction model consists of three steps: document classification into ontology classes, classification of appropriate sentences for triple extraction, and value selection and transformation into RDF triple structure. Wikipedia infobox structures are defined as infobox templates that provide standardized information across related articles, and the DBpedia ontology schema can be mapped to these templates. Based on these mapping relations, we classify the input document according to infobox categories, which correspond to ontology classes. After determining the classification of the input document, we classify the appropriate sentences according to the attributes belonging to that classification. Finally, we extract knowledge from the sentences classified as appropriate and convert it into triples. To train the models, we generated a training data set from a Wikipedia dump using a method that adds BIO tags to sentences, training about 200 classes and about 2,500 relations for knowledge extraction. Furthermore, we carried out comparative experiments with CRF and Bi-LSTM-CRF for the knowledge extraction process.
Through this proposed process, structured knowledge can be utilized by extracting knowledge according to the ontology schema from text documents. In addition, this methodology can significantly reduce the effort required of experts to construct instances according to the ontology schema.
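The step of generating BIO-tagged training data from infobox values, as described in this abstract, can be sketched in toy form: given a sentence's tokens and a known attribute value, mark the value span with B-/I- tags and everything else O. The function name and examples are hypothetical; the paper's pipeline does this at Wikipedia-dump scale for about 200 classes and 2,500 relations.

```python
def bio_tags(tokens, value_tokens, tag):
    """Tag a token sequence with B-/I-/O labels for one known attribute value.

    tokens:       tokenized sentence
    value_tokens: tokenized infobox value to locate in the sentence
    tag:          relation/attribute name used as the tag suffix
    Only the first match is tagged (distant-supervision style).
    """
    tags = ["O"] * len(tokens)
    n = len(value_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == value_tokens:
            tags[i] = "B-" + tag           # beginning of the value span
            for j in range(i + 1, i + n):
                tags[j] = "I-" + tag       # inside the value span
            break
    return tags
```

Pairs of (tokens, tags) produced this way are exactly the kind of training examples a CRF or Bi-LSTM-CRF sequence tagger consumes in the comparative experiments the abstract mentions.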