• Title/Summary/Keyword: Similar Documents

Analysis of the Alignment between Elementary Science Curriculum and Teacher Guidebook - Examining Learning Objectives in 2009 Grade 3~4 Science Curriculum - (초등 과학과 교육과정과 교사용지도서 목표 간의 비교 분석 - 2009 개정 교육과정 3~4학년을 중심으로 -)

  • Na, Jiyeon; Yoon, Hye-Gyoung; Kim, Mijung
    • Journal of Korean Elementary Science Education, v.34 no.2, pp.183-193, 2015
  • Teacher guidebooks are practical, commonly used resources through which teachers deliver the goals and contents of the science curriculum in classroom teaching. The alignment between teacher guidebooks and the science curriculum is therefore critical to the effectiveness of curriculum implementation in science classrooms. This study investigates how the learning objectives of the science curriculum are reflected in teacher guidebooks by analyzing the knowledge dimensions and cognitive processes of the learning objectives in both documents. Grade 3~4 learning objectives in the 2009 Revised science curriculum and teacher guidebooks (82 objectives in the curriculum, 459 in the teacher guidebooks, 541 in total) were coded and analyzed according to the Revised Bloom's Taxonomy. The analysis focused on how the knowledge dimensions and cognitive processes of the curriculum were emphasized and restructured in the teacher guidebooks, in order to examine the alignment between the two documents. The study found: 1) the Grade 3~4 learning objectives in both documents were skewed toward one knowledge dimension (conceptual) and one cognitive process (understand); 2) alignment was high between unit objectives and lesson objectives within the teacher guidebooks, but relatively low between the curriculum and the teacher guidebooks; and 3) curriculum learning objectives appeared in the teacher guidebooks in various patterns (similar, detailed, additional, in portion, and the same), with 'detailed' and 'additional' occurring most frequently. New objectives that were not present in the curriculum also appeared in the teacher guidebooks. These findings offer suggestions for the ongoing development of the 2015 Science Curriculum regarding the knowledge dimensions and cognitive processes of learning objectives and their alignment with textbooks and teacher guidebooks.

Named Entity Recognition for Patent Documents Based on Conditional Random Fields (조건부 랜덤 필드를 이용한 특허 문서의 개체명 인식)

  • Lee, Tae Seok; Shin, Su Mi; Kang, Seung Shik
    • KIPS Transactions on Software and Data Engineering, v.5 no.9, pp.419-424, 2016
  • Named entity recognition over claims and patent descriptions is required to improve the accuracy of retrieving patent documents or similar patents. In this paper, we propose automatic named entity recognition for patents using conditional random fields, one of the most effective methods in machine learning research. The named entity recognition system was trained on a tagged corpus of 660,000 words, and 70,000 words were used as a test set for evaluation. The experiment shows an accuracy of 93.6% and a Kappa coefficient of 0.67 between the manual tagging and the automatic tagging system. Since this exceeds the Kappa coefficient of 0.6 obtained for manually tagged results, the automatic named entity tagging system can serve as a practical replacement for manual tagging of patent documents.
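  • The abstract above does not include the authors' implementation; the following is a minimal sketch of CRF-based sequence tagging using the third-party sklearn-crfsuite package, with made-up token features and a toy tagged corpus standing in for the 660,000-word patent training set described in the paper.

```python
# Minimal CRF-based NER sketch (not the authors' system): assumes the
# third-party `sklearn-crfsuite` package and a toy tokenized corpus.
import sklearn_crfsuite

def token_features(sent, i):
    """Simple per-token features; real patent NER would use a richer set."""
    word = sent[i][0]
    feats = {
        "word.lower()": word.lower(),
        "word.isdigit()": word.isdigit(),
        "suffix3": word[-3:],
        "BOS": i == 0,
        "EOS": i == len(sent) - 1,
    }
    if i > 0:
        feats["prev.lower()"] = sent[i - 1][0].lower()
    return feats

# Toy training data: list of sentences, each a list of (token, tag) pairs.
train_sents = [
    [("semiconductor", "B-TECH"), ("device", "I-TECH"), ("comprising", "O")],
    [("claim", "O"), ("1", "O"), ("lithium", "B-MAT"), ("battery", "I-MAT")],
]

X_train = [[token_features(s, i) for i in range(len(s))] for s in train_sents]
y_train = [[tag for _, tag in s] for s in train_sents]

crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=100)
crf.fit(X_train, y_train)
print(crf.predict(X_train))  # predicted tag sequences for the toy sentences
```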

Clustering of Web Document Exploiting with the Co-link in Hypertext (동시링크를 이용한 웹 문서 클러스터링 실험)

  • 김영기;이원희;권혁철
    • Journal of Korean Library and Information Science Society, v.34 no.2, pp.233-253, 2003
  • Knowledge organization is the way we humans understand the world. Two types of information organization mechanisms are studied in information retrieval: classification and clustering. Classification organizes entities by pigeonholing them into predefined categories, whereas clustering organizes information by grouping similar or related entities together. Systems for Internet information resources extract keywords from the words that appear in web documents and build an inverted file. Term clustering based on grouping related terms, however, did not prove very successful and was largely abandoned for documents written in different languages or for doorway pages consisting only of anchor text. This study examines the informetric analysis and clustering of web documents based on the co-link topology of web pages.
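  • As a rough illustration of the co-link idea (not the experiment reported in the paper), the sketch below treats two pages as similar when many of the same third pages link to both, and merges pages whose similarity exceeds a threshold; the link data, page names, and threshold are invented.

```python
# Co-link clustering sketch: similarity = overlap of in-link sets,
# then single-link grouping above a threshold via union-find.
from itertools import combinations
from collections import defaultdict

# inbound links: page -> set of pages that link to it (made-up data)
inlinks = {
    "A": {"x", "y", "z"},
    "B": {"x", "y"},
    "C": {"z", "w"},
    "D": {"w"},
}

def colink_similarity(p, q):
    """Jaccard overlap of the two pages' in-link sets."""
    shared = inlinks[p] & inlinks[q]
    union = inlinks[p] | inlinks[q]
    return len(shared) / len(union) if union else 0.0

parent = {p: p for p in inlinks}
def find(p):
    while parent[p] != p:
        parent[p] = parent[parent[p]]
        p = parent[p]
    return p

THRESHOLD = 0.3
for p, q in combinations(inlinks, 2):
    if colink_similarity(p, q) >= THRESHOLD:
        parent[find(p)] = find(q)

clusters = defaultdict(list)
for p in inlinks:
    clusters[find(p)].append(p)
print(list(clusters.values()))  # e.g. [['A', 'B'], ['C', 'D']]
```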

Dental implant cost by top-down approach (하향식(Top-down)방식을 적용한 치과 임플란트 원가산정)

  • Shin, Hosung; Kim, Min-Young
    • The Journal of the Korean dental association, v.52 no.7, pp.416-424, 2014
  • The purpose of this study is to analyze the cost of a dental implant using a top-down method and, on that basis, to provide information relevant to setting an appropriate dental insurance fee. Survey data and accounting documents from a sample of 36 dental clinics, obtained with the cooperation of the professional organization, were used and analyzed to extract a representative sample of dental clinics. The researcher visited the dental clinics in person and conducted additional interviews where accounting documents were missing. The cost of a dental implant estimated by the top-down method was 1,430,000 won. Labor cost accounted for 43% of the total cost, the largest share, followed by management cost, material cost, and interest on invested capital. The share of labor cost in the total is similar to previously reported results. The cost derived from such cost accounting of medical care should be used to judge the value of the dental service, rather than its price or fee.

Case Study on the Discrepancies of Bill of Lading under UCP 600 (UCP 600 이후 선화증권 하자관련 분쟁사례)

  • Seo, Jung-Doo
    • THE INTERNATIONAL COMMERCE & LAW REVIEW, v.45, pp.111-136, 2010
  • A bill of lading is the transport document ("marine", "ocean", "port-to-port" or similar), however named, covering sea shipment only. Under UCP 600 and ISBP, data in a bill of lading, when read in context with the credit, the document itself and international standard banking practice, need not be identical to, but must not conflict with, data in that document, any other stipulated document or the credit. This article provides general guidelines on discrepancies, on the basis of UCP 600, ISBP 681 and the ICC Banking Commission Opinions, for resolving unpaid problems in credit transactions. It examines in particular the ICC Banking Commission Opinions and DOCDEX Decisions on bills of lading issued after UCP 600, the international standard banking practice (ISBP 681), and recent Korean cases. As such, this article fills a gap between the general principles in the UCP provisions and the daily work of the practitioner. Credit practitioners are encouraged to consult the resulting guidance whenever doubts arise as to how to check credit documents in daily practice.

Integration of XML Schemas Based on Domain Ontology (도메인 온톨로지에 기반한 XML 스키마의 통합)

  • Kang, Hae-Ran; Lee, Kyong-Ho
    • Journal of Korea Multimedia Society, v.11 no.7, pp.940-955, 2008
  • Semantically similar XML documents in the same application domain might often conform to different schemas. To uniformly view and query such XML documents, we need an efficient method of integrating XML schemas. This paper proposes a sophisticated method for integrating XML schemas in the same application domain. To compute mapping relationships between schemas, the proposed method utilizes various relationships, such as synonyms and hypernyms, between lexical items based on dictionaries and domain ontologies. Particularly, the relationships between lexical items are elaborated by taking their structural information into account. In addition, this paper proposes a more accurate method for integrating compositors. Experimental results with schemas in various application domains show that the utilization of domain ontologies and the structural relationships between lexical items enhance the precision and recall of integrated schemas.
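  • The lexical-matching step described above can be illustrated roughly as follows. This sketch scores candidate element-name pairs with WordNet relations via NLTK; it covers only the dictionary-based part of the method (the paper additionally uses domain ontologies and structural information), and the element names are hypothetical.

```python
# Lexical similarity sketch for schema matching (illustrative only):
# requires NLTK with the WordNet corpus downloaded (nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

def name_similarity(name_a, name_b):
    """Best Wu-Palmer similarity over all sense pairs of the two names."""
    best = 0.0
    for sa in wn.synsets(name_a):
        for sb in wn.synsets(name_b):
            score = sa.wup_similarity(sb)  # None for incomparable senses
            if score and score > best:
                best = score
    return best

# Element names from two hypothetical purchase-order schemas.
pairs = [("invoice", "bill"), ("buyer", "customer"), ("price", "color")]
for a, b in pairs:
    print(a, b, round(name_similarity(a, b), 2))
```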

Approaching Content Reuse for Efficient Technical Documentation (효율적인 기술문서화를 위한 콘텐트 재사용성 접근방법)

  • Koo, Heung-Seo
    • Journal of Korea Society of Industrial Information Systems, v.15 no.5, pp.113-118, 2010
  • Single-sourcing of content is extremely beneficial: when managing several projects with hundreds or thousands of documents, we do not want to change the same content, or substantially similar content, in multiple locations. The Darwin Information Typing Architecture (DITA) is an XML-based architecture for authoring, producing, and delivering technical documents. It consists of a set of design principles for creating information-typed topic modules and for using that content in various ways. In this paper, we examine the approach of using the Darwin Information Typing Architecture for technical document development to enhance the reuse of existing content components across different information products.
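  • As a rough illustration of DITA-style reuse (not part of the paper), the sketch below resolves a conref reference so that a shared note is maintained in one library topic and pulled into a manual; the topic ids, file name, and content are made up, and a real DITA toolchain does far more than this.

```python
# Illustrative conref resolution: one reusable note, referenced rather than copied.
import xml.etree.ElementTree as ET

# Shared "warehouse" topic holding reusable content components.
library_xml = """
<topic id="lib">
  <body>
    <note id="safety_note">Disconnect power before servicing the unit.</note>
  </body>
</topic>
"""

# A manual reuses the note instead of duplicating the text.
manual_xml = """
<topic id="install_guide">
  <body>
    <note conref="library.dita#lib/safety_note"/>
    <p>Mount the device on the rail.</p>
  </body>
</topic>
"""

library = ET.fromstring(library_xml)
manual = ET.fromstring(manual_xml)

def resolve_conrefs(doc, lib):
    """Replace conref placeholders with the referenced library element's text."""
    for elem in doc.iter():
        ref = elem.attrib.pop("conref", None)
        if ref:
            target_id = ref.rsplit("/", 1)[-1]
            source = lib.find(f".//*[@id='{target_id}']")
            if source is not None:
                elem.text = source.text

resolve_conrefs(manual, library)
print(ET.tostring(manual, encoding="unicode"))
```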

Korean Language Clustering using Word2Vec (Word2Vec를 이용한 한국어 단어 군집화 기법)

  • Heu, Jee-Uk
    • The Journal of the Institute of Internet, Broadcasting and Communication, v.18 no.5, pp.25-30, 2018
  • With the recent development of Internet technology, research areas such as retrieval and data extraction have become important for providing information efficiently and quickly. In particular, techniques for analyzing and finding words that are semantically similar to a given Korean word are needed for words such as compounds or newly coined terms, whose meaning is not easy to determine. Word clustering, which groups words similar to a given word, is one technique for handling this problem. In this paper, we propose a Korean word clustering technique that embeds words from the given documents using Word2Vec and clusters semantically similar words.
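  • A minimal sketch of such a Word2Vec-plus-clustering pipeline is shown below, assuming the gensim 4.x and scikit-learn APIs; the tokenized sentences and the number of clusters are placeholders, not the paper's corpus or settings.

```python
# Word2Vec embedding followed by k-means clustering of the vocabulary.
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# Toy tokenized sentences standing in for the Korean corpus.
sentences = [
    ["자연어", "처리", "기술", "연구"],
    ["단어", "군집화", "기법", "연구"],
    ["자연어", "단어", "임베딩", "기술"],
]

model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1, epochs=50)

words = model.wv.index_to_key            # vocabulary in frequency order
vectors = [model.wv[w] for w in words]   # one embedding per word

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vectors)
for cluster_id in range(3):
    members = [w for w, c in zip(words, kmeans.labels_) if c == cluster_id]
    print(cluster_id, members)
```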

Modern Interpretation on Kinesiology of Yangsaeng-Doinbub Presented in "Zhu-Bing-Yuan-Hou-Lun.Yao-Bei-Bing-Zhu-Hou" ("제병원후론(諸病源候論).요배병제후(腰背病諸侯)"에서 제시된 양생도인법(養生導引法)의 현대운동학적 이해)

  • Kim, Se-Jun; Kim, Soon-Joong
    • Journal of Korean Medicine Rehabilitation, v.24 no.2, pp.115-130, 2014
  • Objectives: The objective of this study is to interpret the Yangsaeng-Doinbub presented in "Zhu-Bing-Yuan-Hou-Lun Yao-Bei-Bing-Zhu-Hou" from a modern kinesiological perspective. Methods: Based on the interpretation of "Zhu-Bing-Yuan-Hou-Lun Yao-Bei-Bing-Zhu-Hou" and the practice of its exercises, this study identifies similar modern exercises and their purposes, with reference to various documents on modern kinesiology. Results: 1) The Yangsaeng-Doinbub presented in "Zhu-Bing-Yuan-Hou-Lun Yao-Bei-Bing-Zhu-Hou" is similar to stretching, active exercise and resistance exercise. 2) The exercises similar to resistance exercise can be used for isometric exercise of the cervical extensors. 3) The exercises similar to stretching aim to stretch the quadratus lumborum, the lateral side of the body, the gluteus maximus, the quadriceps femoris, the shoulder extensors, the hamstrings, the hip joint, the ankle dorsiflexors, the thoracic rotators and the inferior shoulder joint. 4) The exercises similar to active exercise can be used to strengthen the external obliques. 5) Doctors can apply Yangsaeng-Doinbub in various ways; for example, it can be used to correct improper low back and neck exercise patterns. 6) Yangsaeng-Doinbub also describes breathing methods, which help normalize breathing and increase the efficiency of spine exercises. Conclusions: The modern kinesiological interpretation of the Yangsaeng-Doinbub presented in "Zhu-Bing-Yuan-Hou-Lun Yao-Bei-Bing-Zhu-Hou" leads to the conclusion that Yangsaeng-Doinbub consists of numerous exercises for various body parts. In particular, its breathing methods increase the efficiency of these exercises, and the exercises can be applied by doctors in various ways.

Efficient Topic Modeling by Mapping Global and Local Topics (전역 토픽의 지역 매핑을 통한 효율적 토픽 모델링 방안)

  • Choi, Hochang; Kim, Namgyu
    • Journal of Intelligence and Information Systems, v.23 no.3, pp.69-94, 2017
  • The recent increase in demand for big data analysis has driven vigorous development of related technologies and tools, while advances in IT and the growing penetration of smart devices are producing large amounts of data. As a result, data analysis technology is rapidly becoming popular, attempts to gain insight through data analysis continue to increase, and big data analysis is expected to become more important across industries for the foreseeable future. Big data analysis has generally been performed by a small number of experts and delivered to those who request it. However, growing interest in big data analysis has stimulated programming education and the development of many analysis tools, so the entry barriers are gradually falling and analysis is increasingly expected to be performed by the demanders themselves. Along with this, interest in unstructured data, and especially in text data, continues to grow. New web platforms and techniques have brought mass production of text data and active attempts to analyze it, and the results of text analysis are being used in many fields. Text mining embraces the various theories and techniques for such analysis, and among them topic modeling is one of the most widely used and studied. Topic modeling extracts the major issues from a large set of documents, identifies the documents corresponding to each issue, and returns them as clusters; it is valued for reflecting the semantic elements of documents. Traditional topic modeling is based on the distribution of key terms across the entire document collection, so the whole collection must be analyzed at once to identify the topic of each document. This makes analysis slow when topic modeling is applied to many documents and creates a scalability problem: processing time grows sharply with the number of documents, which is particularly noticeable when the documents are distributed across multiple systems or regions. To overcome these problems, a divide-and-conquer approach can be applied: a large document collection is divided into sub-units, and topic modeling is repeated on each unit. This allows topic modeling over large collections with limited system resources, improves processing speed, and can significantly reduce analysis time and cost, because documents can be analyzed in each location without first combining them. Despite these advantages, however, the approach has two major problems. First, the relationship between the local topics derived from each unit and the global topics derived from the entire collection is unclear: local topics can be identified in each unit, but global topics cannot. Second, a method for measuring the accuracy of such an approach must be established; that is, assuming the global topics are the ideal answer, the deviation of the local topics from the global topics needs to be measured.
Because of these difficulties, this approach has not been studied as thoroughly as other topic modeling work. In this paper, we propose a topic modeling approach that addresses the two problems above. First, we divide the entire document cluster (the global set) into sub-clusters (local sets) and generate a reduced global set (RGS) consisting of delegate documents extracted from each local set. We address the first problem by mapping RGS topics to local topics. We then verify the accuracy of the proposed methodology by checking whether documents are assigned to the same topic in the global and local results. Using 24,000 news articles, we conduct experiments to evaluate the practical applicability of the proposed methodology. An additional experiment confirms that the proposed methodology produces results similar to topic modeling over the entire collection, and we also propose a reasonable method for comparing the results of the two approaches.
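  • A rough sketch of the divide-and-conquer idea with local-to-global topic mapping follows, using gensim LDA; the toy documents, the hand-picked delegate documents, the topic counts, and the cosine-similarity mapping are illustrative assumptions, not the authors' implementation.

```python
# Divide-and-conquer topic modeling sketch: LDA per local set, LDA on a
# reduced global set, then map each local topic to its closest global topic.
import numpy as np
from gensim import corpora
from gensim.models import LdaModel

local_sets = [
    [["economy", "market", "stock"], ["stock", "trade", "market"]],
    [["soccer", "league", "goal"], ["goal", "match", "league"]],
]
reduced_global_set = [["market", "stock"], ["league", "goal"]]  # delegate docs (hand-picked here)

# Shared dictionary so topic-word vectors are comparable across models.
dictionary = corpora.Dictionary(doc for ls in local_sets for doc in ls)

def fit_lda(docs, num_topics):
    corpus = [dictionary.doc2bow(d) for d in docs]
    return LdaModel(corpus, num_topics=num_topics, id2word=dictionary,
                    random_state=0, passes=20)

global_lda = fit_lda(reduced_global_set, num_topics=2)
G = global_lda.get_topics()                      # (global topics x vocab)

for i, local_docs in enumerate(local_sets):
    L = fit_lda(local_docs, num_topics=1).get_topics()
    # cosine similarity between each local topic and every global topic
    sims = (L @ G.T) / (np.linalg.norm(L, axis=1, keepdims=True)
                        * np.linalg.norm(G, axis=1))
    print(f"local set {i}: local topic 0 -> global topic {int(sims[0].argmax())}")
```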