• Title/Summary/Keyword: 동사정보 (verb information)

Search Results: 276, Processing Time: 0.035 seconds

Occurrence Patterns of Paddy Weeds and Distribution of Resistant Weeds to an ALS Inhibiting Herbicide in Jeonnam by a Soil Assay Method (토양검정법에 의한 전남지역 논잡초 발생양상과 ALS 저해제 제초제 저항성 논잡초 분포)

  • Jeong, Jang Yong;Yun, Young Beom;Jang, Se Ji;Hyun, Kyu Hwan;Shin, Dong Young;Lee, Jeongran;Kwon, Oh Do;Kuk, Yong In
    • Weed & Turfgrass Science
    • /
    • v.7 no.3
    • /
    • pp.191-199
    • /
    • 2018
  • This study investigated the occurrence patterns of paddy weeds and their resistance levels to an ALS-inhibiting herbicide, and estimated the area of resistant paddy fields. We used soil collected from 358 paddy fields in Jeonnam province in 2017. By life cycle, 96% of the weeds were annuals and 4% perennials; by morphological classification, 59% were broadleaf weeds, 28% sedges, and 13% grasses. The number and occurrence rates of weed species differed among areas within Jeonnam province, but we generally observed Lindernia dubia var. dubia, Monochoria vaginalis var. plantaginea, Ludwigia prostrata, L. procumbens, Cyperus difformis, Scirpus juncoides, Eleocharis kuroguwai, Echinochloa oryzoides, and E. crus-galli var. echinata. We also observed seven weeds resistant to an ALS-inhibiting herbicide: M. vaginalis, S. juncoides, C. difformis, L. dubia, Ludwigia prostrata, E. oryzoides, and E. crus-galli var. echinata. Although the number and occurrence rate of resistant weed species differed among areas in Jeonnam province, M. vaginalis, C. difformis, and S. juncoides occurred in 23 cities and counties in Jeonnam, including Gwangju metropolitan city. Based on the occurrence rate of resistance (52%) to an ALS-inhibiting herbicide in Jeonnam province, the area of weed-resistant paddy fields was estimated to be 91,543 ha.

An Efficient Correction Method for Misrecognized Words in Off-line Hangul Character Recognition (오프라인 한글 문자 인식을 위한 효율적인 오인식 단어 교정 방법)

  • Lee, Byeong-Hui;Kim, Tae-Gyun
    • The Transactions of the Korea Information Processing Society
    • /
    • v.3 no.6
    • /
    • pp.1598-1606
    • /
    • 1996
  • In order to achieve high accuracy in off-line character recognition (OCR) systems, the recognized text must be run through a post-processing stage that uses contextual information. In this paper, we reclassify Korean word classes in terms of OCR word correction, collect combinations of Korean particles (approximately 900) and linguistic verbal forms (around 800), and compile 9 Korean irregular verbal phrases defined from a Korean linguistic point of view. Using this Korean word information and a Head-tail method, we can correct misrecognized words. A Korean character recognizer demonstrates 93.7% correct character recognition without a post-processing stage; with the post-processing stage, the overall recognition rate of our system exceeds 97%.

  • PDF
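The Head-tail correction idea above can be sketched as splitting a recognized word into a content head and a particle/ending tail, then validating both parts against collected word lists. The tiny lexicons below are invented stand-ins for the roughly 900 particle combinations and 800 verbal forms the paper collects; the split heuristic is an assumption for illustration.

```python
# Toy Head-tail validation: a word is accepted only if it splits into a
# known head (content word) plus a known tail (particle combination).
HEADS = {"학교", "사람"}          # known content words (toy lexicon)
TAILS = {"에서", "가", "는", ""}  # known particle combinations (toy list)

def correct_word(word):
    """Return a (head, tail) split if the word can be validated,
    else None to flag it as misrecognized."""
    # Try the longest head first, shrinking until both parts are known.
    for i in range(len(word), -1, -1):
        head, tail = word[:i], word[i:]
        if head in HEADS and tail in TAILS:
            return head, tail
    return None
```

A real system would, on a `None` result, search for a visually similar head in the lexicon to propose a correction.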

A Study on the Vitalization of Dong-office Minilibraries as Service Stations for Public Library in Daejeon City (봉사거점으로서 동사무소문고의 활성화 방안 연구 - 대전광역시를 중심으로 -)

  • Kim, Young-Shin
    • Journal of the Korean Society for Library and Information Science
    • /
    • v.36 no.1
    • /
    • pp.5-24
    • /
    • 2002
  • The purpose of this study is to find ways to establish the identity and strengthen the function of Dong-office minilibraries, which have sprouted nationwide under the Government's policy of converting the Dong-offices' functions. The general operating status of the minilibraries was investigated through telephone interviews with the minilibrary operators, the collection and usage status through on-site observation, and the user behaviour through user surveys. For effective operation, the Dong-office minilibraries should be identified as service stations of public libraries, supported technically by the local public libraries, and connected to local educational and social service institutions in order to secure users and volunteers. Networks among the minilibrary operators should also be established for exchange and cooperation. Furthermore, operating models should be developed according to each library's environment and its users' needs.

A Deep Learning Model for Disaster Alerts Classification

  • Park, Soonwook;Jun, Hyeyoon;Kim, Yoonsoo;Lee, Soowon
    • Journal of the Korea Society of Computer and Information
    • /
    • v.26 no.12
    • /
    • pp.1-9
    • /
    • 2021
  • Disaster alerts are text messages sent by the government to people in the affected area in the event of a disaster. As the number of disaster alerts has increased, more and more people block them, because many of the alerts they receive are unnecessary. To solve this problem, this study proposes a deep learning model that automatically classifies disaster alerts by disaster type, so that recipients receive only the alerts they need. The proposed model embeds disaster alerts via KoBERT and classifies them by disaster type with an LSTM. Classifying disaster alerts using 3 combinations of parts of speech ([Noun], [Noun + Adjective + Verb], and [All parts]) and 4 classification models (the proposed model, keyword classification, Word2Vec + 1D-CNN, and KoBERT + FFNN), the proposed model achieved the highest performance with 0.988954 accuracy.
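One of the baselines the abstract compares against, keyword classification, can be sketched with a few lines of plain Python: each disaster type gets a hand-picked keyword list and an alert is assigned to the type whose keywords match most often. The disaster types and keywords below are illustrative assumptions, not the authors' actual lists.

```python
# Toy keyword-classification baseline for disaster alerts.
DISASTER_KEYWORDS = {
    "flood": ["호우", "홍수", "침수"],       # heavy rain / flood / inundation
    "fire": ["화재", "산불"],                # fire / wildfire
    "epidemic": ["확진", "방역"],            # confirmed case / quarantine
}

def classify_alert(alert_text):
    """Return the disaster type whose keywords appear most often,
    or "other" when no keyword matches at all."""
    scores = {
        dtype: sum(alert_text.count(kw) for kw in keywords)
        for dtype, keywords in DISASTER_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "other"
```

The proposed KoBERT + LSTM model replaces these brittle keyword lists with learned contextual embeddings, which is what drives the reported accuracy gap.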

A Homonym Disambiguation System based on Semantic Information Extracted from Dictionary Definitions (사전의 뜻풀이말에서 추출한 의미정보에 기반한 동형이의어 중의성 해결 시스템)

  • Hur, Jeong;Ock, Cheol-Young
    • Journal of KIISE:Software and Applications
    • /
    • v.28 no.9
    • /
    • pp.688-698
    • /
    • 2001
  • A homonym can be disambiguated by other words in the context, such as the nouns and predicates used with it. This paper proposes a homonym disambiguation system based on statistical semantic information extracted from definitions in a dictionary. The semantic information consists of the nouns and predicates that are used with the homonym in definitions. In order to extract accurate semantic information, definitions are classified into two types. One has a hyponym-hypernym relation between the title word and the head word (homonym) in the definition; this relation forms a one-level semantic hierarchy and can be extended to deeper levels to overcome the problem of data sparseness. The other covers the case where the homonym is used in the middle of the definition. The system considers nouns and predicates simultaneously to disambiguate the homonym. Nine homonyms were examined in order to determine the weights of nouns and predicates, which affect the accuracy of homonym disambiguation. In experiments on the training corpus (dictionary definitions), the average accuracy of homonym disambiguation is 96.11% when the weights are 0.9 and 0.1 for nouns and predicates, respectively. A further experiment to measure the generality of the homonym disambiguation system yields 80.73% average accuracy on 1,796 unseen sentences from Korean Information Base I and the ETRI corpus.

  • PDF
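The weighted noun/predicate scoring the abstract reports (weights 0.9 and 0.1) can be sketched as follows. Each sense of a homonym carries the nouns and predicates seen with it in dictionary definitions, and the context is scored against each sense. The toy sense data for the Korean homonym 배 (pear / ship) is invented for illustration.

```python
# Toy sense inventory: sense -> (nouns from definitions, predicates).
SENSE_INFO = {
    "pear": ({"과일", "나무"}, {"먹다", "열리다"}),
    "ship": ({"바다", "항구"}, {"타다", "떠나다"}),
}

W_NOUN, W_PRED = 0.9, 0.1  # weights reported in the abstract

def disambiguate(context_nouns, context_preds):
    """Pick the sense with the highest weighted co-occurrence score."""
    def score(sense):
        nouns, preds = SENSE_INFO[sense]
        return (W_NOUN * len(nouns & context_nouns)
                + W_PRED * len(preds & context_preds))
    return max(SENSE_INFO, key=score)
```

The heavy noun weight reflects the paper's finding that co-occurring nouns carry most of the disambiguating signal.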

A Model of Natural Language Information Retrieval Using Main Keywords and Sub-keywords (주 키워드와 부 키워드를 이용한 자연언어 정보 검색 모델)

  • Kang, Hyun-Kyu;Park, Se-Young
    • The Transactions of the Korea Information Processing Society
    • /
    • v.4 no.12
    • /
    • pp.3052-3062
    • /
    • 1997
  • The goal of information retrieval (IR) is to retrieve relevant information that satisfies a user's information needs. A major role of IR systems, however, is not just to generate sets of relevant documents but to help determine which documents are most likely to be relevant to the given requirements. Various attempts have been made in the recent past to use syntactic analysis methods to generate the complex constructions that are essential for content identification in automatic text analysis systems. Unfortunately, methods based on syntactic understanding alone are known not to be powerful enough to produce complete analyses of arbitrary text samples. In this paper, we present a document ranking method based on two-level ranking: the first level retrieves the documents, and the second level reorders the retrieved documents. The main keywords used in the first level are defined as nouns and/or compound nouns that possess good document discrimination power. The sub-keywords used in the second level are defined as adjectives, adverbs, and/or verbs that are not main keywords, plus function words. An empirical study was conducted on a Korean encyclopedia with 23,113 entries and 161 Korean natural language queries collected from end users; 85.0% of the natural language queries contained sub-keywords. The two-level document ranking method provides a significant improvement in retrieval effectiveness over traditional ranking methods.

  • PDF
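The two-level scheme can be sketched in a few lines: main keywords first select the candidate set, then sub-keywords reorder it. Simple overlap counting is used here as the score, an assumption for illustration rather than the paper's exact ranking formula.

```python
# Toy two-level ranking: level 1 retrieves, level 2 reorders.
def two_level_rank(docs, main_kws, sub_kws):
    """docs: {doc_id: set of index terms}. Returns ranked doc ids."""
    # Level 1: keep only documents containing at least one main keyword.
    retrieved = [d for d, terms in docs.items() if terms & main_kws]
    # Level 2: reorder by main-keyword overlap, breaking ties with
    # sub-keyword overlap.
    return sorted(retrieved,
                  key=lambda d: (len(docs[d] & main_kws),
                                 len(docs[d] & sub_kws)),
                  reverse=True)
```

Because sub-keywords only break ties among already-retrieved documents, they sharpen the ordering without changing which documents are found, which mirrors the paper's division of labor between the two levels.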

Methods for Integration of Documents using Hierarchical Structure based on the Formal Concept Analysis (FCA 기반 계층적 구조를 이용한 문서 통합 기법)

  • Kim, Tae-Hwan;Jeon, Ho-Cheol;Choi, Joong-Min
    • Journal of Intelligence and Information Systems
    • /
    • v.17 no.3
    • /
    • pp.63-77
    • /
    • 2011
  • The World Wide Web is a very large distributed digital information space. From its origins in 1991, the web has grown to encompass diverse information resources such as personal home pages, online digital libraries, and virtual museums. Some estimates suggest that the web currently includes over 500 billion pages in the deep web. The ability to search and retrieve information from the web efficiently and effectively is an enabling technology for realizing its full potential. With powerful workstations and parallel processing technology, efficiency is not a bottleneck; in fact, some existing search tools sift through gigabyte-size precompiled web indexes in a fraction of a second. But retrieval effectiveness is a different matter. Current search tools retrieve too many documents, of which only a small fraction are relevant to the user query. Furthermore, the most relevant documents do not necessarily appear at the top of the query output order, and current search tools cannot retrieve the documents related to a retrieved document from the gigantic body of documents. The most important problem for many current search systems is therefore to increase the quality of search: to provide related documents, and to keep the number of unrelated documents in the results as low as possible. To address this problem, CiteSeer proposed ACI (Autonomous Citation Indexing) of the articles on the World Wide Web. A "citation index" indexes the links between articles that researchers make when they cite other articles. Citation indexes are very useful for a number of purposes, including literature search and analysis of the academic literature. In this scheme, references contained in academic articles are used to give credit to previous work in the literature and provide a link between the "citing" and "cited" articles; a citation index indexes the citations that an article makes, linking the article with the cited works.
Citation indexes were originally designed mainly for information retrieval. The citation links allow navigating the literature in unique ways: papers can be located independently of the language and of the words in the title, keywords, or document, and a citation index allows navigation backward in time (the list of cited articles) and forward in time (which subsequent articles cite the current article?). But CiteSeer cannot index links between articles that researchers do not make, because it only indexes the links researchers create when they cite other articles; for the same reason, CiteSeer does not scale easily. All these problems motivate the design of a more effective search system. This paper presents a method that extracts a subject and predicate from each sentence in a document. The document is converted into a tabular form in which each extracted predicate is checked against its possible subjects and objects. We build a hierarchical graph of each document using this table and then integrate the graphs of the documents. From the graph of the entire collection, we calculate the area of each document relative to the integrated documents and mark the relations among documents by comparing these areas. We also propose a method for the structural integration of documents that retrieves documents from the graph, making it easier for users to find information. We compared the performance of the proposed approaches with the Lucene search engine using ranking formulas. As a result, the F-measure is about 60%, roughly 15% better.
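The tabular form described above can be sketched as mapping each sentence to a (subject, predicate, object) triple and grouping pairs by predicate; comparing two documents' tables then gives a crude relatedness signal. The triples below are supplied directly; a real system would need the parser-based extraction the paper describes, and the overlap measure is a simplified stand-in for the graph-area comparison.

```python
# Toy tabular representation of documents as predicate tables.
def build_table(triples):
    """Map each predicate to the set of (subject, object) pairs it connects."""
    table = {}
    for subj, pred, obj in triples:
        table.setdefault(pred, set()).add((subj, obj))
    return table

def overlap(table_a, table_b):
    """Count shared (predicate, subject, object) entries between two
    documents; a stand-in for comparing their integrated graphs."""
    return sum(len(table_a[p] & table_b[p])
               for p in table_a.keys() & table_b.keys())
```

Integrating documents then amounts to merging their tables and recording how much each document contributes to the merged structure.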

Constructing a Korean Subcategorization Dictionary with Semantic Roles using Thesaurus and Predicate Patterns (시소러스와 술어 패턴을 이용한 의미역 부착 한국어 하위범주화 사전의 구축)

  • Yang, Seung-Hyun;Kim, Young-Sum;Woo, Yo-Sub;Yoon, Deok-Ho
    • Journal of KIISE:Computing Practices and Letters
    • /
    • v.6 no.3
    • /
    • pp.364-372
    • /
    • 2000
  • Subcategorization, which defines the dependency relations between predicates and their complements, is an important source of knowledge for resolving the syntactic and semantic ambiguities that arise in analyzing sentences. This paper describes a Korean subcategorization dictionary annotated with the semantic roles of complements, coupled with a thesaural semantic hierarchy as well as syntactic dependencies. For annotating roles, we defined 25 semantic roles associated with surface case markers, which can be used to derive semantic structures directly from syntactic ones. In addition, we used a thesaurus of more than 120,000 entries to specify the concept markers of noun complements, and 47 and 17 predicate patterns for verbs and adjectives, respectively, to express the dependency relations between predicates and their complements. Using a full-fledged thesaurus to specify concept markers makes it possible to build an effective selectional restriction mechanism coupled with the subcategorization dictionary, and using standard predicate patterns to specify dependency relations avoids inconsistency in the results and reduces the cost of constructing the dictionary. On this basis, we built a Korean subcategorization dictionary for 13,000 frequently used predicates found in corpora, with the aid of a tool specially designed to support this task. An experimental result shows that this dictionary can provide appropriate subcategorization information for 72.7% of the predicates in corpora.

  • PDF
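The coupling of case markers, semantic roles, and thesaurus concept markers described above can be sketched as a nested dictionary plus a selectional-restriction check. The role names, concept markers, and the single verb entry below are illustrative assumptions, not entries from the authors' dictionary.

```python
# Toy subcategorization entry: verb -> {case marker: (role, concept)}.
SUBCAT = {
    # "먹다" (to eat): nominative = animate Agent, accusative = food Theme
    "먹다": {"이/가": ("Agent", "animate"), "을/를": ("Theme", "food")},
}

THESAURUS = {  # noun -> concept marker (toy thesaurus fragment)
    "사람": "animate", "사과": "food", "돌": "mineral",
}

def check_complement(verb, case_marker, noun):
    """Return the semantic role for this complement slot and whether the
    noun's concept marker satisfies the selectional restriction."""
    role, concept = SUBCAT[verb][case_marker]
    return role, THESAURUS.get(noun) == concept
```

In the real dictionary the concept check would walk the thesaurus hierarchy (e.g. "apple" is-a "food"), rather than requiring an exact marker match.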

KorLexClas 1.5: A Lexical Semantic Network for Korean Numeral Classifiers (한국어 수분류사 어휘의미망 KorLexClas 1.5)

  • Hwang, Soon-Hee;Kwon, Hyuk-Chul;Yoon, Ae-Sun
    • Journal of KIISE:Software and Applications
    • /
    • v.37 no.1
    • /
    • pp.60-73
    • /
    • 2010
  • This paper describes KorLexClas 1.5, which provides a very large list of Korean numeral classifiers together with the co-occurring noun categories that select each classifier. Unlike the KorLex databases for other parts of speech, whose structure depends largely on their reference model (Princeton WordNet), KorLexClas 1.0 and its extended version 1.5 were built directly, which demands considerable time and expert knowledge to establish the hierarchies of numeral classifiers and the relationships between lexical items. For efficiency of construction as well as the reliability of KorLexClas 1.5, we used the following processes: (1) cross-checking various language resources to select classifier candidates; (2) extending the list of numeral classifiers using shallow parsing techniques; (3) setting up the hierarchies of the numeral classifiers based on previous linguistic studies; and (4) determining the LUB (least upper bound) of each numeral classifier's co-occurring nouns in KorLexNoun 1.5. The last process gives KorLexClas 1.5 an open, extensible list of co-occurring nouns. KorLexClas 1.5 is expected to be used in a variety of NLP applications, including MT.
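Step (4) above, determining the LUB of a classifier's co-occurring nouns, can be sketched as finding the deepest common ancestor in a noun hierarchy: any noun under that node is then predicted to accept the classifier. The hierarchy fragment below is invented for illustration and is not KorLexNoun 1.5 itself.

```python
# Toy noun hierarchy: child -> parent.
PARENT = {
    "dog": "animal", "cat": "animal", "animal": "entity",
    "car": "artifact", "artifact": "entity",
}

def ancestors(noun):
    """Path from the noun up to the root, nearest ancestor first."""
    path = [noun]
    while path[-1] in PARENT:
        path.append(PARENT[path[-1]])
    return path

def lub(nouns):
    """Deepest node that is an ancestor of every noun in the list."""
    paths = [ancestors(n) for n in nouns]
    common = set(paths[0]).intersection(*paths[1:])
    # paths are ordered nearest-first, so the first common node is the LUB
    return next(node for node in paths[0] if node in common)
```

Storing one LUB node per classifier instead of an exhaustive noun list is what keeps the co-occurring-noun list open and extensible.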

Construction of Test Collection for Evaluation of Scientific Relation Extraction System (과학기술분야 용어 간 관계추출 시스템의 평가를 위한 테스트컬렉션 구축)

  • Choi, Yun-Soo;Choi, Sung-Pil;Jeong, Chang-Hoo;Yoon, Hwa-Mook;You, Beom-Jong
    • Proceedings of the Korea Contents Association Conference
    • /
    • 2009.05a
    • /
    • pp.754-758
    • /
    • 2009
  • Extracting information from large-scale document collections is very useful not only for information retrieval but also for question answering and summarization. Although relation extraction is a very important area, it is difficult to develop and evaluate a machine-learning-based system without a test collection. This study shows how to build a test collection (KREC2008) for relation extraction systems. We extracted technology terms from journal abstracts and selected several relation candidates between them using WordNet. Judges who were well trained in the evaluation process then assigned a relation from among the candidates. This process provides a method by which even non-experts can easily build a test collection. KREC2008 is open to the public for researchers and developers and will be utilized for the development and evaluation of relation extraction systems.

  • PDF